WorldWideScience

Sample records for source count map

  1. A matrix-inversion method for gamma-source mapping from gamma-count data - 59082

    International Nuclear Information System (INIS)

    Bull, Richard K.; Adsley, Ian; Burgess, Claire

    2012-01-01

    Gamma ray counting is often used to survey the distribution of active waste material in various locations. Ideally the output from such surveys would be a map of the activity of the waste. In this paper a simple matrix-inversion method is presented. This allows an array of gamma-count data to be converted to an array of source activities. For each survey area the response matrix is computed using the gamma-shielding code Microshield [1]. This matrix links the activity array to the count array. The activity array is then obtained via matrix inversion. The method was tested on artificially created arrays of count data to which statistical noise had been added. The method was able to reproduce, quite faithfully, the original activity distribution used to generate the dataset. The method has been applied to a number of practical cases, including the distribution of activated objects in a hot cell and of activated Nimonic springs amongst fuel-element debris in vaults at a nuclear plant. (authors)
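
    The core of the method can be sketched in a few lines of NumPy. This is a toy illustration, not the authors' code: a bare inverse-square kernel stands in for the Microshield-computed response matrix, the geometry and activities are invented, and least squares stands in for the direct inversion to keep the solve stable under the added Poisson noise.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy survey: n source cells and n detector positions along a line (m).
n = 8
src_x = np.linspace(0.0, 7.0, n)        # source-cell centres
det_x = src_x + 0.3                     # detector positions, offset 0.3 m

# Response matrix R[i, j]: count rate at detector i per unit activity in
# cell j. A bare inverse-square kernel stands in for the Microshield-
# computed responses, which would include shielding and efficiency.
d2 = (det_x[:, None] - src_x[None, :])**2 + 0.5**2   # 0.5 m stand-off
R = 1.0 / d2

true_activity = np.zeros(n)
true_activity[[2, 5]] = [400.0, 900.0]  # two hot cells

counts = rng.poisson(R @ true_activity).astype(float)  # counting noise

# Invert: least squares is the stable way to "invert" R for noisy data.
est, *_ = np.linalg.lstsq(R, counts, rcond=None)
print("true:", np.round(true_activity, 1))
print("est :", np.round(est, 1))
```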

  2. DAWN GRAND MAP CERES TPE NEUTRON COUNTS V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — A global map of thermal+epithermal neutron counting rates binned on twenty-degree quasi-equal-area pixels is provided. The map was determined from a time series of the...

  3. Observations of the Hubble Deep Field with the Infrared Space Observatory - III. Source counts and P(D) analysis

    DEFF Research Database (Denmark)

    Oliver, S.J.; Goldschmidt, P.; Franceschini, A.

    1997-01-01

    We present source counts at 6.7 and 15 μm from our maps of the Hubble Deep Field (HDF) region, reaching 38.6 μJy at 6.7 μm and 255 μJy at 15 μm. These are the first ever extragalactic number counts to be presented at 6.7 μm, and are three decades fainter than IRAS at 12 μm. Both...

  4. The Herschel Virgo Cluster Survey. XVII. SPIRE point-source catalogs and number counts

    Science.gov (United States)

    Pappalardo, Ciro; Bendo, George J.; Bianchi, Simone; Hunt, Leslie; Zibetti, Stefano; Corbelli, Edvige; di Serego Alighieri, Sperello; Grossi, Marco; Davies, Jonathan; Baes, Maarten; De Looze, Ilse; Fritz, Jacopo; Pohlen, Michael; Smith, Matthew W. L.; Verstappen, Joris; Boquien, Médéric; Boselli, Alessandro; Cortese, Luca; Hughes, Thomas; Viaene, Sebastien; Bizzocchi, Luca; Clemens, Marcel

    2015-01-01

    Aims: We present three independent catalogs of point sources extracted from SPIRE images at 250, 350, and 500 μm, acquired with the Herschel Space Observatory as part of the Herschel Virgo Cluster Survey (HeViCS). The catalogs have been cross-correlated to consistently extract the photometry at SPIRE wavelengths for each object. Methods: Sources were detected using an iterative loop. The source positions are determined by estimating, for each peak on the maps, the likelihood that it is a real source, according to the criterion defined in the sourceExtractorSussextractor task. The flux densities are estimated using sourceExtractorTimeline, a timeline-based point-source fitter that also determines the width of the Gaussian that best reproduces the source considered. Afterwards, each source is subtracted from the maps by removing a Gaussian function at every position, with the full width at half maximum equal to that estimated in sourceExtractorTimeline. This procedure improves the robustness of our algorithm in terms of source identification. We calculate the completeness and the flux accuracy by injecting artificial sources into the timeline, and estimate the reliability of the catalog using a permutation method. Results: The HeViCS catalogs contain about 52 000, 42 200, and 18 700 sources selected at 250, 350, and 500 μm above 3σ, and are ~75%, 62%, and 50% complete at flux densities of 20 mJy at 250, 350, and 500 μm, respectively. We then measure source number counts at 250, 350, and 500 μm and compare them with previous data and semi-analytical models. We also cross-correlated the catalogs with the Sloan Digital Sky Survey to investigate the redshift distribution of the nearby sources. From this cross-correlation, we select ~2000 sources with reliable fluxes and a high signal-to-noise ratio, finding an average redshift z ~ 0.3 ± 0.22 and 0.25 (16th-84th percentile). Conclusions: The number counts at 250, 350, and 500 μm show an increase in

  5. Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources.

    Science.gov (United States)

    Klumpp, John; Brandl, Alexander

    2015-03-01

    A particle counting and detection system is proposed that searches for elevated count rates in multiple energy regions simultaneously. The system analyzes time-interval data (i.e., the time between counts), as this has been shown to be a more sensitive technique for detecting low count rate sources than analyzing counts per unit interval (Luo et al. 2013). Two distinct versions of the detection system are developed. The first is intended for situations in which the sample is fixed and can be measured for an unlimited amount of time. The second version is intended to detect sources that are physically moving relative to the detector, such as a truck moving past a fixed roadside detector or a waste storage facility surveyed from an airplane. In both cases, the detection system is expected to be active indefinitely; i.e., it is an online detection system. Both versions of the multi-energy detection system are compared to their respective gross count rate detection systems in terms of Type I and Type II error rates and sensitivity.
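
    The sensitivity advantage comes from using the raw inter-arrival times, which for a steady counting process are exponentially distributed with the total count rate as parameter. The sketch below is not the authors' Bayesian multi-energy system; it only shows the basic ingredient, a likelihood-ratio comparison of time-interval data against a background-only rate, with hypothetical rate values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical rates in counts per second, chosen only for illustration.
bkg_rate = 5.0
src_rate = 1.5                     # weak source on top of background

# Time-interval data: gaps between successive counts of a steady process
# are exponentially distributed with the total rate as parameter.
intervals = rng.exponential(1.0 / (bkg_rate + src_rate), size=200)

def loglik(rate, dt):
    # log-likelihood of inter-arrival times dt under a Poisson process
    return np.sum(np.log(rate) - rate * dt)

# Likelihood ratio: background-plus-source versus background only.
lr = loglik(bkg_rate + src_rate, intervals) - loglik(bkg_rate, intervals)
print(f"log likelihood ratio = {lr:.2f} (positive favours a source)")
```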

  6. Mapping the layer count of few-layer hexagonal boron nitride at high lateral spatial resolutions

    Science.gov (United States)

    Mohsin, Ali; Cross, Nicholas G.; Liu, Lei; Watanabe, Kenji; Taniguchi, Takashi; Duscher, Gerd; Gu, Gong

    2018-01-01

    Layer count control and uniformity of two-dimensional (2D) layered materials are critical to the investigation of their properties and to their electronic device applications, but methods to map 2D material layer count at nanometer-level lateral spatial resolutions have been lacking. Here, we demonstrate a method based on two complementary techniques widely available in transmission electron microscopes (TEMs) to map the layer count of multilayer hexagonal boron nitride (h-BN) films. The mass-thickness contrast in high-angle annular dark-field (HAADF) imaging in the scanning transmission electron microscope (STEM) mode allows for thickness determination in atomically clean regions with high spatial resolution (sub-nanometer), but is limited by surface contamination. To complement this, another technique based on the boron K ionization edge in the electron energy loss spectroscopy (EELS) spectrum of h-BN is developed to quantify the layer count so that surface contamination does not cause an overestimate, albeit at a lower spatial resolution (nanometers). The two techniques agree remarkably well in atomically clean regions, with discrepancies within ±1 layer. For the first time, the layer count uniformity on the scale of nanometers is quantified for a 2D material. The methodology is applicable to layer count mapping of other 2D layered materials, paving the way toward the synthesis of multilayer 2D materials with homogeneous layer count.

  7. How Fred Hoyle Reconciled Radio Source Counts and the Steady State Cosmology

    Science.gov (United States)

    Ekers, Ron

    2012-09-01

    In 1969 Fred Hoyle invited me to his Institute of Theoretical Astronomy (IOTA) in Cambridge to work with him on the interpretation of the radio source counts. This was a period of extreme tension, with Ryle just across the road using the steep slope of the radio source counts to argue that the radio source population was evolving, and Hoyle maintaining that the counts were consistent with the steady state cosmology. Both of these great men had made some correct deductions, but they had also both made mistakes. The universe was evolving, but the source counts alone could tell us very little about cosmology. I will try to give some indication of the atmosphere and the issues at the time and look at what we can learn from this saga. I will conclude by briefly summarising the exponential growth in the size of radio source counts since the early days, and ask whether our understanding has grown at the same rate.

  8. Comparing MapReduce and Pipeline Implementations for Counting Triangles

    Directory of Open Access Journals (Sweden)

    Edelmira Pasarella

    2017-01-01

    A common method to define a parallel solution for a computational problem consists in finding a way to use the Divide and Conquer paradigm so that processors act on their own data and are scheduled in a parallel fashion. MapReduce is a programming model that follows this paradigm, and allows for the definition of efficient solutions by both decomposing a problem into steps on subsets of the input data and combining the results of each step to produce final results. Although used for the implementation of a wide variety of computational problems, MapReduce performance can be negatively affected whenever the replication factor grows or the size of the input is larger than the resources available at each processor. In this paper we show an alternative approach to implementing the Divide and Conquer paradigm, named the dynamic pipeline. The main features of dynamic pipelines are illustrated on a parallel implementation of the well-known problem of counting triangles in a graph, sketched below. This problem is especially interesting when the input graph does not fit in memory or when it is dynamically generated. To evaluate the properties of the dynamic pipeline, a dynamic pipeline of processes and an ad-hoc version of MapReduce are implemented in the language Go, exploiting its ability to deal with channels and spawned processes. An empirical evaluation is conducted on graphs of different topologies, sizes, and densities. Observed results suggest that dynamic pipelines allow for an efficient implementation of the problem of counting triangles in a graph, particularly in dense and large graphs, drastically reducing the execution time with respect to the MapReduce implementation.
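
    For reference, plain single-machine triangle counting looks like the following Python sketch. The paper's implementations are parallel and written in Go; this sequential version only fixes the semantics being computed. Each triangle is discovered once per vertex, hence the final division by three.

```python
from itertools import combinations

def count_triangles(edges):
    # Build an adjacency map, then check, for each vertex, which pairs
    # of its neighbours are themselves connected.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    found = 0
    for u, nbrs in adj.items():
        for v, w in combinations(nbrs, 2):
            if w in adj[v]:
                found += 1
    return found // 3   # each triangle counted once per vertex

edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 0)]  # two triangles
print(count_triangles(edges))  # -> 2
```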

  9. Preverbal and verbal counting and computation.

    Science.gov (United States)

    Gallistel, C R; Gelman, R

    1992-08-01

    We describe the preverbal system of counting and arithmetic reasoning revealed by experiments on numerical representations in animals. In this system, numerosities are represented by magnitudes, which are rapidly but inaccurately generated by the Meck and Church (1983) preverbal counting mechanism. We suggest the following. (1) The preverbal counting mechanism is the source of the implicit principles that guide the acquisition of verbal counting. (2) The preverbal system of arithmetic computation provides the framework for the assimilation of the verbal system. (3) Learning to count involves, in part, learning a mapping from the preverbal numerical magnitudes to the verbal and written number symbols and the inverse mappings from these symbols to the preverbal magnitudes. (4) Subitizing is the use of the preverbal counting process and the mapping from the resulting magnitudes to number words in order to generate rapidly the number words for small numerosities. (5) The retrieval of the number facts, which plays a central role in verbal computation, is mediated via the inverse mappings from verbal and written numbers to the preverbal magnitudes and the use of these magnitudes to find the appropriate cells in tabular arrangements of the answers. (6) This model of the fact retrieval process accounts for the salient features of the reaction time differences and error patterns revealed by experiments on mental arithmetic. (7) The application of verbal and written computational algorithms goes on in parallel with, and is to some extent guided by, preverbal computations, both in the child and in the adult.

  10. Two-part zero-inflated negative binomial regression model for quantitative trait loci mapping with count trait.

    Science.gov (United States)

    Moghimbeigi, Abbas

    2015-05-07

    Poisson regression models provide a standard framework for quantitative trait locus (QTL) mapping of count traits. In practice, however, count traits are often over-dispersed relative to the Poisson distribution. In these situations, zero-inflated Poisson (ZIP), zero-inflated generalized Poisson (ZIGP) and zero-inflated negative binomial (ZINB) regression may be useful for QTL mapping of count traits. Adding genetic variables to the negative binomial part of the model may also affect the extra-zero data. In this study, to overcome these challenges, I apply a two-part ZINB model. An EM algorithm, with the Newton-Raphson method in the M-step, is used for estimating the parameters. An application of the two-part ZINB model for QTL mapping is considered, detecting associations between gallstone formation and the genotypes of markers. Copyright © 2015 Elsevier Ltd. All rights reserved.
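
    A minimal sketch of the ZINB likelihood that such a model maximizes, with fixed parameters and no covariates (the paper's two-part model regresses both the zero-inflation and negative binomial parts on genetic variables and fits them by EM; the data and parameter values below are invented):

```python
import numpy as np
from scipy.stats import nbinom

# Zero-inflated negative binomial: with probability pi an observation is
# a structural zero; otherwise it is drawn from NB(r, p).
def zinb_loglik(y, pi, r, p):
    y = np.asarray(y)
    ll = np.where(
        y == 0,
        np.log(pi + (1.0 - pi) * nbinom.pmf(0, r, p)),
        np.log(1.0 - pi) + nbinom.logpmf(y, r, p),
    )
    return ll.sum()

counts = np.array([0, 0, 0, 1, 2, 0, 5, 0, 3, 0])  # made-up count trait
print(zinb_loglik(counts, pi=0.4, r=2.0, p=0.5))
```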

  11. Performance of 3DOSEM and MAP algorithms for reconstructing low count SPECT acquisitions

    Energy Technology Data Exchange (ETDEWEB)

    Grootjans, Willem [Radboud Univ. Medical Center, Nijmegen (Netherlands). Dept. of Radiology and Nuclear Medicine; Leiden Univ. Medical Center (Netherlands). Dept. of Radiology; Meeuwis, Antoi P.W.; Gotthardt, Martin; Visser, Eric P. [Radboud Univ. Medical Center, Nijmegen (Netherlands). Dept. of Radiology and Nuclear Medicine; Slump, Cornelis H. [Univ. Twente, Enschede (Netherlands). MIRA Inst. for Biomedical Technology and Technical Medicine; Geus-Oei, Lioe-Fee de [Radboud Univ. Medical Center, Nijmegen (Netherlands). Dept. of Radiology and Nuclear Medicine; Univ. Twente, Enschede (Netherlands). MIRA Inst. for Biomedical Technology and Technical Medicine; Leiden Univ. Medical Center (Netherlands). Dept. of Radiology

    2016-07-01

    Low count single photon emission computed tomography (SPECT) is becoming more important in view of whole body SPECT and reduction of radiation dose. In this study, we investigated the performance of several 3D ordered subset expectation maximization (3DOSEM) and maximum a posteriori (MAP) algorithms for reconstructing low count SPECT images. Phantom experiments were conducted using the National Electrical Manufacturers Association (NEMA) NU2 image quality (IQ) phantom. The background compartment of the phantom was filled with varying concentrations of pertechnetate and indium chloride, simulating various clinical imaging conditions. Images were acquired using a hybrid SPECT/CT scanner and reconstructed with 3DOSEM and MAP reconstruction algorithms implemented in Siemens Syngo MI.SPECT (Flash3D) and Hermes Hybrid Recon Oncology (Hybrid Recon 3DOSEM and MAP). Image analysis was performed by calculating the contrast recovery coefficient (CRC), percentage background variability (N%), and contrast-to-noise ratio (CNR), defined as the ratio between CRC and N%. Furthermore, image distortion was characterized by calculating the aspect ratio (AR) of ellipses fitted to the hot spheres. Additionally, the performance of these algorithms in reconstructing clinical images was investigated. Images reconstructed with 3DOSEM algorithms demonstrated superior image quality in terms of contrast and resolution recovery when compared to images reconstructed with filtered back-projection (FBP), OSEM, and 2DOSEM. However, the occurrence of correlated noise patterns and image distortions significantly deteriorated the quality of 3DOSEM reconstructed images. The mean AR for the 37, 28, 22, and 17 mm spheres was 1.3, 1.3, 1.6, and 1.7, respectively. The mean N% increased in high and low count Flash3D and Hybrid Recon 3DOSEM from 5.9% and 4.0% to 11.1% and 9.0%, respectively. Similarly, the mean CNR decreased in high and low count Flash3D and Hybrid Recon 3DOSEM from 8.7 and 8.8 to 3.6 and 4

  12. 2π proportional counting chamber for large-area-coated β sources

    Indian Academy of Sciences (India)

    2π proportional counting chamber for large-area-coated β sources ... A provision is made for change of the source and immediate measurement of source activity. These sources are used to calibrate the efficiency of contamination monitors at radiological ...

  13. Nature or Nurture in finger counting: a review on the determinants of the direction of number-finger mapping

    Directory of Open Access Journals (Sweden)

    Paola ePrevitali

    2011-12-01

    The spontaneous use of finger counting has long been recognised as critical to the acquisition of number skills. Recently, the great interest in space-number associations has shifted attention to the practice of finger counting itself, and specifically to its spatial components. Besides general cross-cultural differences in mapping numbers onto fingers, contrasting results have been reported with regard to the directional features of this mapping. The key issue we address is to what extent directionality is culturally mediated, i.e., linked to the conventional reading-writing system direction, and/or biologically determined, i.e., linked to hand dominance. Although the preferred starting hand for counting seems to depend on the surveyed population, even within the same population high inter-individual variability minimises the role of cultural factors. Even if so far largely overlooked, handedness represents a sound candidate for shaping finger counting direction. Here we discuss adult and developmental evidence in support of this view and reconsider the plausibility of multiple and coexistent number-space mappings in physical and representational space.

  14. Radiation measurement practice for understanding statistical fluctuation of radiation count using natural radiation sources

    International Nuclear Information System (INIS)

    Kawano, Takao

    2014-01-01

    It is known that radiation is detected at random and that radiation counts fluctuate statistically. In the present study, a radiation measurement experiment was performed to illustrate the randomness and statistical fluctuation of radiation counts. In the measurement, three natural radiation sources were used. The sources were fabricated from potassium chloride chemicals, chemical fertilizers and kelp. These materials contain naturally occurring potassium-40, which is a radionuclide. Nine teachers from high schools, junior high schools and elementary schools participated in the radiation measurement experiment. Each participant measured the 1-min integration counts of radiation five times using GM survey meters, and 45 sets of data were obtained for each of the natural radiation sources. It was found that the frequency of occurrence of radiation counts was distributed according to a Gaussian distribution curve, although at first glance the 45 data sets of radiation counts appeared to fluctuate meaninglessly. (author)
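
    The classroom result is easy to reproduce in simulation: for a mean of tens of counts per interval, Poisson-distributed counts already trace out a near-Gaussian histogram. A small sketch with illustrative numbers, not the workshop data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate repeated 1-min counts of a steady natural source. For a mean
# of ~60 counts the Poisson distribution is already close to a Gaussian,
# which is the bell curve the participants observed.
mean_counts = 60
counts = rng.poisson(mean_counts, size=1000)

print(f"mean = {counts.mean():.1f}, stddev = {counts.std():.1f} "
      f"(Poisson predicts sqrt(60) = {60**0.5:.1f})")

# Crude text histogram: a bell shape despite the 'meaningless' randomness.
for lo in range(40, 81, 5):
    n = ((counts >= lo) & (counts < lo + 5)).sum()
    print(f"{lo:3d}-{lo + 4:3d} {'#' * (n // 10)}")
```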

  15. HIGH-RESOLUTION IMAGING OF THE ATLBS REGIONS: THE RADIO SOURCE COUNTS

    Energy Technology Data Exchange (ETDEWEB)

    Thorat, K.; Subrahmanyan, R.; Saripalli, L.; Ekers, R. D., E-mail: kshitij@rri.res.in [Raman Research Institute, C. V. Raman Avenue, Sadashivanagar, Bangalore 560080 (India)

    2013-01-01

    The Australia Telescope Low-brightness Survey (ATLBS) regions have been mosaic imaged at a radio frequency of 1.4 GHz with 6'' angular resolution and 72 μJy beam^-1 rms noise. The images (centered at R.A. 00h35m00s, decl. -67°00'00'' and R.A. 00h59m17s, decl. -67°00'00'', J2000 epoch) cover 8.42 deg² of sky area and have no artifacts or imaging errors above the image thermal noise. Multi-resolution radio and optical r-band images (made using the 4 m CTIO Blanco telescope) were used to recognize multi-component sources and prepare a source list; the detection threshold was 0.38 mJy in a low-resolution radio image made with beam FWHM of 50''. Radio source counts in the flux density range 0.4-8.7 mJy are estimated, with corrections applied for noise bias, effective area correction, and resolution bias. The resolution bias is mitigated using low-resolution radio images, while effects of source confusion are removed by using high-resolution images for identifying blended sources. Below 1 mJy the ATLBS counts are systematically lower than the previous estimates. Showing no evidence for an upturn down to 0.4 mJy, they do not require any changes in the radio source population down to the limit of the survey. The work suggests that automated image analysis for counts may be dependent on the ability of the imaging to reproduce connecting emission with low surface brightness and on the ability of the algorithm to recognize sources, which may require that source finding algorithms effectively work with multi-resolution and multi-wavelength data. The work underscores the importance of using source lists, as opposed to component lists, and of correcting for the noise bias in order to precisely estimate counts close to the image noise and determine the upturn at sub-mJy flux density.

  16. Radiation Source Mapping with Bayesian Inverse Methods

    Science.gov (United States)

    Hykes, Joshua Michael

    We present a method to map the spectral and spatial distributions of radioactive sources using a small number of detectors. Locating and identifying radioactive materials is important for border monitoring, accounting for special nuclear material in processing facilities, and in clean-up operations. Most methods to analyze these problems make restrictive assumptions about the distribution of the source. In contrast, the source-mapping method presented here allows an arbitrary three-dimensional distribution in space and a flexible group and gamma peak distribution in energy. To apply the method, the system's geometry and materials must be known. A probabilistic Bayesian approach is used to solve the resulting inverse problem (IP) since the system of equations is ill-posed. The probabilistic approach also provides estimates of the confidence in the final source map prediction. A set of adjoint flux, discrete ordinates solutions, obtained in this work by the Denovo code, are required to efficiently compute detector responses from a candidate source distribution. These adjoint fluxes are then used to form the linear model to map the state space to the response space. The test for the method is simultaneously locating a set of 137Cs and 60Co gamma sources in an empty room. This test problem is solved using synthetic measurements generated by a Monte Carlo (MCNP) model and using experimental measurements that we collected for this purpose. With the synthetic data, the predicted source distributions identified the locations of the sources to within tens of centimeters, in a room with an approximately four-by-four meter floor plan. Most of the predicted source intensities were within a factor of ten of their true value. The chi-square value of the predicted source was within a factor of five from the expected value based on the number of measurements employed. With a favorable uniform initial guess, the predicted source map was nearly identical to the true distribution
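
    A heavily reduced sketch of the linear inverse step described above, with a random positive matrix standing in for the adjoint (Denovo) responses and non-negative least squares standing in for the full Bayesian solve; all dimensions and source values are invented:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)

# Toy linear model: R[i, j] = response of detector i to a unit source in
# voxel j. In the thesis these coefficients come from adjoint discrete-
# ordinates (Denovo) solutions; here they are random positive numbers,
# which keeps only the algebraic structure of the problem.
n_det, n_vox = 40, 30
R = rng.uniform(0.1, 1.0, size=(n_det, n_vox))

true_src = np.zeros(n_vox)
true_src[[4, 17]] = [8.0, 3.0]                  # two point-like sources

meas = rng.poisson(R @ true_src).astype(float)  # noisy detector counts

# Non-negative least squares stands in for the Bayesian inversion: it
# enforces the physical constraint source >= 0 but, unlike the Bayesian
# IP, gives no confidence estimates on the reconstructed map.
est, _ = nnls(R, meas)
print("true hot voxels  :", np.flatnonzero(true_src))
print("largest estimates:", np.argsort(est)[-2:])
```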

  17. Preparation of source mounts for 4π counting

    International Nuclear Information System (INIS)

    Johnson, E.P.

    1991-01-01

    The 4πβ/γ counter in the ANSTO radioisotope standards laboratory at Lucas Heights constitutes part of the Australian national standard for radioactivity. Sources to be measured in the counter must be mounted on a substrate which is strong enough to withstand careful handling and transport. The substrate must also be electrically conducting to minimise counting errors caused by charging of the source, and it must have very low superficial density so that little or none of the radiation is absorbed. The entire process of fabrication of VYNS films, coating them with gold/palladium and transferring them to source mount rings, as carried out in the radioisotope standards laboratory, is documented. 3 refs., 2 tabs., 6 figs

  18. The European large area ISO survey - III. 90-μm extragalactic source counts

    DEFF Research Database (Denmark)

    Efstathiou, A.; Oliver, S.; Rowan-Robinson, M.

    2000-01-01

    We present results and source counts at 90 μm extracted from the preliminary analysis of the European Large Area ISO Survey (ELAIS). The survey covered about 12 deg² of the sky in four main areas and was carried out with the ISOPHOT instrument onboard the Infrared Space Observatory (ISO)... or small groups of galaxies, suggesting that the sample may include a significant fraction of luminous infrared galaxies. The source counts extracted from a reliable subset of the detected sources are in agreement with strongly evolving models of the starburst galaxy population.

  19. AKARI/IRC source catalogues and source counts for the IRAC Dark Field, ELAIS North and the AKARI Deep Field South

    Science.gov (United States)

    Davidge, H.; Serjeant, S.; Pearson, C.; Matsuhara, H.; Wada, T.; Dryer, B.; Barrufet, L.

    2017-12-01

    We present the first detailed analysis of three extragalactic fields (IRAC Dark Field, ELAIS-N1, ADF-S) observed by the infrared satellite AKARI, using a data analysis toolkit optimized specifically for the processing of extragalactic point sources. The InfraRed Camera (IRC) on AKARI complements the Spitzer Space Telescope via its comprehensive coverage between 8 and 24 μm, filling the gap between the Spitzer/IRAC and MIPS instruments. Source counts in the AKARI bands at 3.2, 4.1, 7, 11, 15 and 18 μm are presented. At near-infrared wavelengths, our source counts are consistent with counts made in other AKARI fields and in general with Spitzer/IRAC (except at 3.2 μm, where our counts lie above). In the mid-infrared (11-18 μm), we find our counts are consistent with both previous surveys by AKARI and the Spitzer peak-up imaging survey with the InfraRed Spectrograph (IRS). Using our counts to constrain contemporary evolutionary models, we find that although the models and counts are in agreement at mid-infrared wavelengths, there are inconsistencies at wavelengths shortward of 7 μm, suggesting either a problem with stellar subtraction or the need for refinement of the stellar population models. We have also investigated the AKARI/IRC filters, and find an active galactic nucleus selection criterion out to z < 2 on the basis of AKARI 4.1, 11, 15 and 18 μm colours.

  20. Web GIS in practice IV: publishing your health maps and connecting to remote WMS sources using the Open Source UMN MapServer and DM Solutions MapLab

    Directory of Open Access Journals (Sweden)

    Honda Kiyoshi

    2006-01-01

    Open Source Web GIS software systems have reached a stage of maturity, sophistication, robustness and stability, and a usability and user friendliness rivalling that of commercial, proprietary GIS and Web GIS server products. The Open Source Web GIS community is also actively embracing OGC (Open Geospatial Consortium) standards, including WMS (Web Map Service). WMS enables the creation of Web maps whose layers come from multiple different remote servers/sources. In this article we present one easy-to-implement Web GIS server solution that is based on the Open Source University of Minnesota (UMN) MapServer. By following the accompanying step-by-step tutorial instructions, interested readers running mainstream Microsoft® Windows machines and with no prior technical experience in Web GIS or Internet map servers will be able to publish their own health maps on the Web and add to those maps additional layers retrieved from remote WMS servers. The 'digital Asia' and 2004 Indian Ocean tsunami experiences in using free Open Source Web GIS software are also briefly described.
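
    A WMS GetMap request is ultimately just a URL with standardized query parameters. The sketch below builds one in Python; the endpoint, mapfile path and layer names are placeholders, to be replaced with those of your own UMN MapServer instance:

```python
from urllib.parse import urlencode

# Hypothetical mapserv CGI endpoint with a MAP=... parameter pointing at
# your mapfile; substitute your own server and layers.
endpoint = "http://example.org/cgi-bin/mapserv?MAP=/maps/health.map"

params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "district_boundaries,clinic_locations",  # comma-separated
    "SRS": "EPSG:4326",
    "BBOX": "99.0,13.0,101.0,15.0",  # minx,miny,maxx,maxy (lon/lat)
    "WIDTH": "800",
    "HEIGHT": "600",
    "FORMAT": "image/png",
    "TRANSPARENT": "TRUE",
}

# The resulting URL can be pasted into a browser or used as a remote
# layer source in a WMS client.
print(endpoint + "&" + urlencode(params))
```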

  1. Deep Galex Observations of the Coma Cluster: Source Catalog and Galaxy Counts

    Science.gov (United States)

    Hammer, D.; Hornschemeier, A. E.; Mobasher, B.; Miller, N.; Smith, R.; Arnouts, S.; Milliard, B.; Jenkins, L.

    2010-01-01

    We present a source catalog from deep 26 ks GALEX observations of the Coma cluster in the far-UV (FUV; 1530 Angstroms) and near-UV (NUV; 2310 Angstroms) wavebands. The observed field is centered 0.9 deg (1.6 Mpc) south-west of the Coma core, and has full optical photometric coverage by SDSS and spectroscopic coverage to r ~ 21. The catalog consists of 9700 galaxies with GALEX and SDSS photometry, including 242 spectroscopically-confirmed Coma member galaxies that range from giant spirals and elliptical galaxies to dwarf irregular and early-type galaxies. The full multi-wavelength catalog (cluster plus background galaxies) is 80% complete to NUV = 23 and FUV = 23.5, and has a limiting depth at NUV = 24.5 and FUV = 25.0, which corresponds to a star formation rate of 10^-3 M_sun yr^-1 at the distance of Coma. The GALEX images presented here are very deep and include detections of many resolved cluster members superposed on a dense field of unresolved background galaxies. This required a two-fold approach to generating a source catalog: we used a Bayesian deblending algorithm to measure faint and compact sources (using SDSS coordinates as a position prior), and used the GALEX pipeline catalog for bright and/or extended objects. We performed simulations to assess the importance of systematic effects (e.g. object blends, source confusion, Eddington bias) that influence source detection and photometry when using both methods. The Bayesian deblending method roughly doubles the number of source detections and provides reliable photometry to a few magnitudes deeper than the GALEX pipeline catalog. This method is also free from source confusion over the UV magnitude range studied here; conversely, we estimate that the GALEX pipeline catalogs are confusion limited at NUV ≈ 23 and FUV ≈ 24. We have measured the total UV galaxy counts using our catalog and report a 50% excess of counts across FUV = 22-23.5 and NUV = 21.5-23 relative to previous GALEX

  2. Mapping of the extinction in Giant Molecular Clouds using optical star counts

    OpenAIRE

    Cambresy, L.

    1999-01-01

    This paper presents large-scale extinction maps of most nearby Giant Molecular Clouds of the Galaxy (Lupus, rho-Ophiuchus, Scorpius, Coalsack, Taurus, Chamaeleon, Musca, Corona Australis, Serpens, IC 5146, Vela, Orion, Monoceros R1 and R2, Rosette, Carina) derived from a star count method using an adaptive grid and a wavelet decomposition applied to the optical data provided by the USNO-Precision Measuring Machine. The distribution of the extinction in the clouds leads to an estimate of their total...
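
    The underlying star-count principle (before the paper's adaptive-grid and wavelet refinements) is the classical Wolf/Bok one: a cloud that dims stars by A magnitudes depresses the counts, so A can be read off from the count ratio to an unobscured reference field. A minimal sketch with assumed numbers:

```python
import numpy as np

# Classical star-count extinction estimate: if log10 N(m) rises with
# slope b in an unobscured reference field, a cloud dimming stars by A
# magnitudes lowers the counts in a cell so that
#   A = log10(N_ref / N_obs) / b.
# The slope and counts below are assumed values for illustration.
b = 0.34                                    # luminosity-function slope
N_ref = 120.0                               # stars per cell, reference
N_obs = np.array([120.0, 60.0, 20.0, 5.0])  # cells toward the cloud

A = np.log10(N_ref / N_obs) / b
print(np.round(A, 2))  # extinction (mag) per cell: [0.  0.89 2.29 4.06]
```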

  3. DEEP GALEX OBSERVATIONS OF THE COMA CLUSTER: SOURCE CATALOG AND GALAXY COUNTS

    International Nuclear Information System (INIS)

    Hammer, D.; Hornschemeier, A. E.; Miller, N.; Jenkins, L.; Mobasher, B.; Smith, R.; Arnouts, S.; Milliard, B.

    2010-01-01

    We present a source catalog from a deep 26 ks Galaxy Evolution Explorer (GALEX) observation of the Coma cluster in the far-UV (FUV; 1530 A) and near-UV (NUV; 2310 A) wavebands. The observed field is centered ~0.9° (1.6 Mpc) southwest of the Coma core in a well-studied region of the cluster known as 'Coma-3'. The entire field is located within the apparent virial radius of the Coma cluster, and has optical photometric coverage with the Sloan Digital Sky Survey (SDSS) and deep spectroscopic coverage to r ~ 21. We detect GALEX sources to NUV = 24.5 and FUV = 25.0, which corresponds to a star formation rate of ~10^-3 M_sun yr^-1 for galaxies at the distance of Coma. We have assembled a catalog of 9700 galaxies with GALEX and SDSS photometry, including 242 spectroscopically confirmed Coma member galaxies that span a large range of galaxy types from giant spirals and elliptical galaxies to dwarf irregular and early-type galaxies. The full multi-wavelength catalog (cluster plus background galaxies) is ~80% complete to NUV = 23 and FUV = 23.5. The GALEX images presented here are very deep and include detections of many resolved cluster members superposed on a dense field of unresolved background galaxies. This required a two-fold approach to generating a source catalog: we used a Bayesian deblending algorithm to measure faint and compact sources (using SDSS coordinates as a position prior), and used the GALEX pipeline catalog for bright and/or extended objects. We performed simulations to assess the importance of systematic effects (e.g., object blends, source confusion, Eddington bias) that influence the source detection and photometry when using both methods. The Bayesian deblending method roughly doubles the number of source detections and provides reliable photometry to a few magnitudes deeper than the GALEX pipeline catalog. This method is free from source confusion over the UV magnitude range studied here; we estimate that the GALEX pipeline catalogs are

  4. Testing the count rate performance of the scintillation camera by exponential attenuation: Decaying source; Multiple filters

    International Nuclear Information System (INIS)

    Adams, R.; Mena, I.

    1988-01-01

    An algorithm and two Fortran programs have been developed to evaluate the count rate performance of scintillation cameras from count rates reduced exponentially, either by a decaying source or by filtration. The first method is used with short-lived radionuclides such as 191mIr or 191mAu. The second implements a National Electrical Manufacturers Association (NEMA) protocol in which the count rate from a source of 99mTc is attenuated by a varying number of copper filters stacked over it. The count rate at each data point is corrected for dead-time loss after assigning an arbitrary dead time (τ). A second-order polynomial is fitted to the logarithms of the net count rate values: ln(R) = A + BT + CT², where R is the net corrected count rate (cps) and T is the elapsed time (or the filter thickness in the NEMA method). Depending on C, τ is incremented or decremented iteratively, and the count rate corrections and curve fittings are repeated until C approaches zero, indicating a correct value of the dead time (τ). The program then plots the measured count rate versus the corrected count rate values.
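
    The τ-iteration is straightforward to reproduce. The sketch below simulates a decaying source seen through a non-paralysable dead time (the dead-time model and all rates are illustrative assumptions, not NEMA values), then bisects on τ until the fitted quadratic coefficient C crosses zero:

```python
import numpy as np

# Simulate a decaying source observed with a non-paralysable dead time
# tau_true, then recover tau by driving C in ln(R) = A + B*T + C*T^2
# to zero, as in the algorithm described above.
tau_true = 4.0e-6                 # s
half_life = 300.0                 # s
lam = np.log(2.0) / half_life
t = np.linspace(0.0, 1500.0, 60)  # measurement times (s)
true_rate = 2.0e5 * np.exp(-lam * t)
observed = true_rate / (1.0 + true_rate * tau_true)  # dead-time losses

def curvature(tau):
    corrected = observed / (1.0 - observed * tau)    # loss correction
    coeffs = np.polynomial.polynomial.polyfit(t, np.log(corrected), 2)
    return coeffs[2]  # C; zero when tau matches the true dead time

# C is negative when tau is too small and positive when too large,
# so a simple bisection brackets the correct value.
lo, hi = 0.0, 8.0e-6
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if curvature(mid) > 0.0:
        hi = mid
    else:
        lo = mid
print(f"recovered tau = {0.5 * (lo + hi):.2e} s (true {tau_true:.1e} s)")
```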

  5. A simulator for airborne laser swath mapping via photon counting

    Science.gov (United States)

    Slatton, K. C.; Carter, W. E.; Shrestha, R.

    2005-06-01

    Commercially marketed airborne laser swath mapping (ALSM) instruments currently use laser rangers with sufficient energy per pulse to work with return signals of thousands of photons per shot. The resulting high signal to noise level virtually eliminates spurious range values caused by noise, such as background solar radiation and sensor thermal noise. However, the high signal level approach requires laser repetition rates of hundreds of thousands of pulses per second to obtain contiguous coverage of the terrain at sub-meter spatial resolution, and with currently available technology, affords little scalability for significantly downsizing the hardware, or reducing the costs. A photon-counting ALSM sensor has been designed by the University of Florida and Sigma Space, Inc. for improved topographic mapping with lower power requirements and weight than traditional ALSM sensors. Major elements of the sensor design are presented along with preliminary simulation results. The simulator is being developed so that data phenomenology and target detection potential can be investigated before the system is completed. Early simulations suggest that precise estimates of terrain elevation and target detection will be possible with the sensor design.

  6. Calculation of the counting efficiency for extended sources

    International Nuclear Information System (INIS)

    Korun, M.; Vidmar, T.

    2002-01-01

    A computer program for the calculation of efficiency calibration curves for extended samples counted on gamma- and X-ray spectrometers is described. The program calculates efficiency calibration curves for homogeneous cylindrical samples placed coaxially with the symmetry axis of the detector. The method of calculation is based on integration, over the sample volume, of the efficiencies for point sources measured in free space on an equidistant grid of points. The attenuation of photons within the sample is taken into account using a self-attenuation function calculated with a two-dimensional detector model. (author)
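
    In outline, the calculation is a volume average of point-source efficiencies weighted by self-attenuation. The sketch below uses a made-up efficiency function, attenuation coefficient and sample dimensions purely to show the structure of the integration; the real program interpolates measured point efficiencies and uses a two-dimensional detector model for the attenuation:

```python
import numpy as np

# Reduced sketch: average a point-source efficiency eps(r, z) over a
# cylindrical sample with a crude self-attenuation factor exp(-mu*z).
def point_eff(r, z, z0=5.0):
    # toy geometry factor falling off with distance from the detector
    return 1.0 / (1.0 + ((z + z0)**2 + r**2) / z0**2)

mu = 0.1           # linear attenuation coefficient of the sample (1/cm)
R_s, H = 3.0, 4.0  # sample radius and height (cm)

r = np.linspace(0.0, R_s, 200)
z = np.linspace(0.0, H, 200)
rr, zz = np.meshgrid(r, z)
w = rr  # cylindrical volume element ~ r dr dz (azimuth integrates out)

eff = np.sum(point_eff(rr, zz) * np.exp(-mu * zz) * w) / np.sum(w)
print(f"volume-averaged efficiency = {eff:.4f}")
```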

  7. Presurgical mapping with magnetic source imaging. Comparisons with intraoperative findings

    International Nuclear Information System (INIS)

    Roberts, T.P.L.; Ferrari, P.; Perry, D.; Rowley, H.A.; Berger, M.S.

    2000-01-01

    We compare noninvasive preoperative mapping with magnetic source imaging to intraoperative cortical stimulation mapping. These techniques were directly compared in 17 patients who underwent preoperative and postoperative somatosensory mapping of a total of 22 comparable anatomic sites (digits, face). Our findings are presented in the context of previous studies that used magnetic source imaging and functional magnetic resonance imaging as noninvasive surrogates of intraoperative mapping for the identification of sensorimotor and language-specific brain functional centers in patients with brain tumors. We found that magnetic source imaging results were reasonably concordant with intraoperative mapping findings in over 90% of cases, and that concordance could be defined as 'good' in 77% of cases. Magnetic source imaging therefore provides a viable, if coarse, identification of somatosensory areas and, consequently, can guide and reduce the time taken for intraoperative mapping procedures. (author)

  8. Reconstruction of multiple line source attenuation maps

    International Nuclear Information System (INIS)

    Celler, A.; Sitek, A.; Harrop, R.

    1996-01-01

    A simple configuration for a transmission source for single photon emission computed tomography (SPECT) was proposed, which utilizes a series of collimated line sources parallel to the axis of rotation of a camera. The detector is equipped with a standard parallel-hole collimator. We have demonstrated that this type of source configuration can be used to generate sufficient data for the reconstruction of the attenuation map when using 8-10 line sources spaced 3.5-4.5 cm apart, for a 30 × 40 cm detector at 65 cm distance from the sources. Transmission data for a nonuniform thorax phantom were simulated, then binned and reconstructed using filtered backprojection (FBP) and iterative methods. The optimum maps are obtained with data binned into 2-3 bins and FBP reconstruction. The activity in the source was investigated for uniform and exponential activity distributions, as well as the effect of gaps and overlaps of the neighboring fan beams. A prototype of the line source has been built and experimental verification of the technique has started.

  9. Degree of polarization and source counts of faint radio sources from Stacking Polarized intensity

    International Nuclear Information System (INIS)

    Stil, J. M.; George, S. J.; Keller, B. W.; Taylor, A. R.

    2014-01-01

    We present stacking of polarized intensity as a means to study the polarization of sources that are too faint to be detected individually in surveys of polarized radio sources. Stacking not only offers high sensitivity to the median signal of a class of radio sources, but also avoids a detection threshold in polarized intensity, and therefore an arbitrary exclusion of sources with a low percentage of polarization. Correction for polarization bias is done through a Monte Carlo analysis and tested on a simulated survey. We show that the nonlinear relation between the real polarized signal and the detected signal requires knowledge of the shape of the distribution of fractional polarization, which we constrain using the ratio of the upper quartile to the lower quartile of the distribution of stacked polarized intensities. Stacking polarized intensity for NRAO VLA Sky Survey (NVSS) sources down to the detection limit in Stokes I, we find a gradual increase in median fractional polarization that is consistent with a trend noticed before for bright NVSS sources, but much more gradual than found by previous deep surveys of radio polarization. Consequently, the polarized radio source counts derived from our stacking experiment predict fewer polarized radio sources for future surveys with the Square Kilometre Array and its pathfinders.

  10. Statistical measurement of the gamma-ray source-count distribution as a function of energy

    Science.gov (United States)

    Zechlin, H.-S.; Cuoco, A.; Donato, F.; Fornengo, N.; Regis, M.

    2017-01-01

    Photon count statistics have recently been proven to provide a sensitive observable for characterizing gamma-ray source populations and for measuring the composition of the gamma-ray sky. In this work, we generalize the use of the standard 1-point probability distribution function (1pPDF) to decompose the high-latitude gamma-ray emission observed with Fermi-LAT into (i) point-source contributions, (ii) the Galactic foreground contribution, and (iii) a diffuse isotropic background contribution. We analyze gamma-ray data in five adjacent energy bands between 1 and 171 GeV. We measure the source-count distribution dN/dS as a function of energy, and demonstrate that our results extend current measurements from source catalogs to the regime of so-far undetected sources. Our method improves the sensitivity for resolving point-source populations by about one order of magnitude in flux. The dN/dS distribution as a function of flux is found to be compatible with a broken power law. We derive upper limits on further possible breaks as well as on the angular power of unresolved sources. We discuss the composition of the gamma-ray sky and the capabilities of the 1pPDF method.

  11. Micro-electrodeposition techniques for the preparation of small actinide counting sources for ultra-high resolution alpha spectrometry by microcalorimetry

    International Nuclear Information System (INIS)

    Plionis, A.A.; Hastings, E.P.; LaMont, S.P.; Dry, D.E.; Bacrania, M.K.; Rabin, M.W.; Rim, J.H.

    2009-01-01

    Special considerations and techniques are desired for the preparation of small actinide counting sources. Counting sources have been prepared on metal disk substrates (planchets) with an active area of only 0.079 mm². This represents a 93.75% reduction in deposition area from standard electrodeposition methods. The actinide distribution upon the smaller planchet must remain thin and uniform to allow alpha particle emissions to escape the counting source with a minimal amount of self-attenuation. This work describes the development of micro-electrodeposition methods and optimization of the technique with respect to deposition time and current density for various planchet sizes. (author)

  12. Measurement of uranium and plutonium in solid waste by passive photon or neutron counting and isotopic neutron source interrogation

    Energy Technology Data Exchange (ETDEWEB)

    Crane, T.W.

    1980-03-01

    A summary of the status and applicability of nondestructive assay (NDA) techniques for the measurement of uranium and plutonium in 55-gal barrels of solid waste is reported. The NDA techniques reviewed include passive gamma-ray and x-ray counting with scintillator, solid state, and proportional gas photon detectors, passive neutron counting, and active neutron interrogation with neutron and gamma-ray counting. The active neutron interrogation methods are limited to those employing isotopic neutron sources. Three generic neutron sources (alpha-n, photoneutron, and 252Cf) are considered. The neutron detectors reviewed for both prompt and delayed fission neutron detection with the above sources include thermal (3He, 10BF3) and recoil (4He, CH4) proportional gas detectors and liquid and plastic scintillator detectors. The instrument found to be best suited for low-level measurements (< 10 nCi/g) is the 252Cf Shuffler. The measurement technique consists of passive neutron counting followed by cyclic activation using a 252Cf source and delayed neutron counting with the source withdrawn. It is recommended that a waste assay station composed of a 252Cf Shuffler, a gamma-ray scanner, and a screening station be tested and evaluated at a nuclear waste site. 34 figures, 15 tables.

  13. Optimizing the calculation of point source count-centroid in pixel size measurement

    International Nuclear Information System (INIS)

    Zhou Luyi; Kuang Anren; Su Xianyu

    2004-01-01

    Purpose: Pixel size is an important parameter of gamma cameras and SPECT. A number of methods are used for its accurate measurement. In the original count-centroid method, where the image of a point source (PS) is acquired and its count-centroid calculated to represent the PS position in the image, background counts are inevitable. Thus the measured count-centroid (Xm) is an approximation of the true count-centroid (Xp) of the PS, i.e. Xm = Xp + (Xb-Xp)/(1+Rp/Rb), where Rp is the net counting rate of the PS, Xb the background count-centroid and Rb the background counting rate. To get an accurate measurement, Rp must be very big, which is impractical and results in variation of the measured pixel size. An Rp-independent calculation of the PS count-centroid is desired. Methods: The proposed method attempted to eliminate the effect of the term (Xb-Xp)/(1+Rp/Rb) by bringing Xb closer to Xp and by reducing Rb. In the acquired PS image, a circular ROI was generated to enclose the PS, the pixel with the maximum count being the center of the ROI. To choose the diameter (D) of the ROI, a Gaussian count distribution was assumed for the PS; accordingly, a fraction K = 1-(0.5)^(D/R) of the total PS counts was in the ROI, R being the full width at half maximum of the PS count distribution. D was set to 6*R to enclose most (K = 98.4%) of the PS counts. The count-centroid of the ROI was calculated to represent Xp. The proposed method was tested in measuring the pixel size of a well-tuned SPECT, whose pixel size was estimated to be 3.02 mm according to its mechanical and electronic setting (128*128 matrix, 387 mm UFOV, ZOOM=1). For comparison, the original method, which was used in former versions of some commercial SPECT software, was also tested. 12 PSs were prepared and their images acquired and stored. The net counting rate of the PSs increased from 10 cps to 1183 cps. Results: Using the proposed method, the measured pixel size (in mm) varied only between 3.00 and 3.01 (mean = 3.01 ± 0.00) as Rp increased
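
    The ROI-restricted centroid is simple to implement. A sketch on a synthetic image (Gaussian spot plus flat background, with invented spot position and count levels) using the D = 6*R rule described above:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic point-source image: Gaussian spot plus a flat background.
size, sigma = 64, 2.0
y, x = np.mgrid[0:size, 0:size]
spot = np.exp(-((x - 40.3)**2 + (y - 25.7)**2) / (2.0 * sigma**2))
img = rng.poisson(50.0 * spot + 0.2).astype(float)  # 0.2 bkg counts/px

# Circular ROI centred on the hottest pixel, diameter D = 6*R with R
# the FWHM, enclosing K = 98.4% of the point-source counts.
cy, cx = np.unravel_index(np.argmax(img), img.shape)
fwhm = 2.355 * sigma
radius = 3.0 * fwhm
mask = (x - cx)**2 + (y - cy)**2 <= radius**2

w = img * mask
cx_meas = (x * w).sum() / w.sum()
cy_meas = (y * w).sum() / w.sum()
print(f"count-centroid: ({cx_meas:.2f}, {cy_meas:.2f}) true (40.3, 25.7)")
```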

  14. Effects of the thickness of gold deposited on a source backing film in the 4πβ-counting

    International Nuclear Information System (INIS)

    Miyahara, Hiroshi; Yoshida, Makoto; Watanabe, Tamaki

    1976-01-01

    A gold-deposited VYNS film is generally used as a source backing in 4πβ-counting to reduce the absorption of β-rays. The thickness of the film with the gold is usually a few times that of the VYNS film itself. However, because the appropriate thickness of gold has not yet been determined, the effects of gold thickness on electrical resistivity, plateau characteristics and β-ray counting efficiency were studied. 198Au (960 keV), 60Co (315 keV), 59Fe (273 keV) and 95Nb (160 keV) sources, prepared by the aluminium chloride treatment method, were used. Gold was evaporated at a deposition rate of 1-5 μg/cm²/min at a pressure of less than 1 × 10^-5 Torr. Results show that gold deposition on the side opposite the source after source preparation is essential. In this case, a maximum counting efficiency is obtained at a mean thickness of 2 μg/cm². When gold is deposited only on the same side as the source, a maximum counting efficiency, lower than that in the former case, is obtained at a mean thickness of 20 μg/cm². (Evans, J.)

  15. Determining random counts in liquid scintillation counting

    International Nuclear Information System (INIS)

    Horrocks, D.L.

    1979-01-01

    During measurements involving coincidence counting techniques, errors can arise due to the detection of chance or random coincidences in the multiple detectors used. A method, and the electronic circuits necessary, are described here for eliminating this source of error in liquid scintillation detectors used in coincidence counting. (UK)

  16. Optimizing the calculation of point source count-centroid in pixel size measurement

    International Nuclear Information System (INIS)

    Zhou Luyi; Kuang Anren; Su Xianyu

    2004-01-01

    Pixel size is an important parameter of gamma cameras and SPECT. A number of methods are used for its accurate measurement. In the original count-centroid method, where the image of a point source (PS) is acquired and its count-centroid calculated to represent the PS position in the image, background counts are inevitable. Thus the measured count-centroid (Xm) is an approximation of the true count-centroid (Xp) of the PS, i.e. Xm = Xp + (Xb-Xp)/(1+Rp/Rb), where Rp is the net counting rate of the PS, Xb the background count-centroid and Rb the background counting rate. To get an accurate measurement, Rp must be very big, which is impractical and results in variation of the measured pixel size. An Rp-independent calculation of the PS count-centroid is desired. Methods: The proposed method attempted to eliminate the effect of the term (Xb-Xp)/(1+Rp/Rb) by bringing Xb closer to Xp and by reducing Rb. In the acquired PS image, a circular ROI was generated to enclose the PS, the pixel with the maximum count being the center of the ROI. To choose the diameter (D) of the ROI, a Gaussian count distribution was assumed for the PS; accordingly, a fraction K = 1-(0.5)^(D/R) of the total PS counts was in the ROI, R being the full width at half maximum of the PS count distribution. D was set to 6*R to enclose most (K = 98.4%) of the PS counts. The count-centroid of the ROI was calculated to represent Xp. The proposed method was tested in measuring the pixel size of a well-tuned SPECT, whose pixel size was estimated to be 3.02 mm according to its mechanical and electronic setting (128 x 128 matrix, 387 mm UFOV, ZOOM=1). For comparison, the original method, which was used in former versions of some commercial SPECT software, was also tested. 12 PSs were prepared and their images acquired and stored. The net counting rate of the PSs increased from 10 cps to 1183 cps. Results: Using the proposed method, the measured pixel size (in mm) varied only between 3.00 and 3.01 (mean

  17. The optimal on-source region size for detections with counting-type telescopes

    Energy Technology Data Exchange (ETDEWEB)

    Klepser, Stefan

    2017-01-15

    The on-source region is typically a circular area with radius θ in which the signal is expected to appear with the shape of the instrument point spread function (PSF). This paper addresses the question of what θ maximises the probability of detection for a given PSF width and background event density. In the high count number limit and assuming a Gaussian PSF profile, the optimum is found to be at ζ²_∞ ≈ 2.51 times the squared PSF width σ²_PSF39. While this number is shown to be a good choice in many cases, a dynamic formula for cases of lower count numbers, which favour larger on-source regions, is given. The recipe to get to this parametrisation can also be applied to cases with a non-Gaussian PSF. This result can standardise and simplify analysis procedures, reduce trials and eliminate the need for experience-based ad hoc cut definitions or expensive case-by-case Monte Carlo simulations.
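
    One way to see the quoted high-count optimum numerically: for a Gaussian PSF the enclosed signal grows as 1 - exp(-θ²/2σ²) while the background grows with the on-source area, so maximising signal over the square root of background reproduces θ² ≈ 2.51 σ² (σ here being the 39% containment radius of a 2D Gaussian, which equals its width parameter). This is a plausibility check under those assumptions, not the paper's derivation.

```python
import numpy as np

# Signal fraction inside radius theta for a Gaussian PSF, divided by
# sqrt(background), which scales with the on-source area theta^2.
sigma = 1.0                     # sigma_PSF39: 39% containment radius
theta = np.linspace(0.05, 4.0, 4000)
signal = 1.0 - np.exp(-theta**2 / (2.0 * sigma**2))
significance = signal / theta   # ~ signal / sqrt(area)

best = theta[np.argmax(significance)]
print(f"optimal theta^2 / sigma^2 = {best**2:.2f}")   # ~ 2.51
```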

  18. The optimal on-source region size for detections with counting-type telescopes

    International Nuclear Information System (INIS)

    Klepser, Stefan

    2017-01-01

    The on-source region is typically a circular area with radius θ in which the signal is expected to appear with the shape of the instrument point spread function (PSF). This paper addresses the question of what θ maximises the probability of detection for a given PSF width and background event density. In the high count number limit and assuming a Gaussian PSF profile, the optimum is found to be at ζ²_∞ ≈ 2.51 times the squared PSF width σ²_PSF39. While this number is shown to be a good choice in many cases, a dynamic formula for cases of lower count numbers, which favour larger on-source regions, is given. The recipe to get to this parametrisation can also be applied to cases with a non-Gaussian PSF. This result can standardise and simplify analysis procedures, reduce trials and eliminate the need for experience-based ad hoc cut definitions or expensive case-by-case Monte Carlo simulations.

  19. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, and geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide an opportunity for a manifold increase in computing performance. Therefore, these systems represent an efficient platform for the implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to a few weeks, depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for

  20. One, Two, Three, Four, Nothing More: An Investigation of the Conceptual Sources of the Verbal Counting Principles

    Science.gov (United States)

    Le Corre, Mathieu; Carey, Susan

    2007-01-01

    Since the publication of Gelman and Gallistel's (1978) seminal work on the development of verbal counting as a representation of number ("The Child's Understanding of Number." Cambridge, MA: Harvard University Press), the nature of the ontogenetic sources of the verbal counting principles has been intensely debated. The present…

  1. Limits to source counts and cosmic microwave background fluctuations at 10.6 GHz

    International Nuclear Information System (INIS)

    Seielstad, G.A.; Masson, C.R.; Berge, G.L.

    1981-01-01

    We have determined the distribution of deflections due to sky temperature fluctuations at 10.6 GHz. If all the deflections are due to fine structure in the cosmic microwave background, we limit these fluctuations to ΔT/T ≲ 10^-4 on an angular scale of 11 arcmin. If, on the other hand, all the deflections are due to confusion among discrete radio sources, the areal density of these sources is calculated for various slopes of the differential source count relationship and for various cutoff flux densities. If, for example, the slope is 2.1 and the cutoff is 10 mJy, we find (0.25-3.3) × 10^6 sources sr^-1 Jy^-1.

  2. High Temporal Resolution Mapping of Seismic Noise Sources Using Heterogeneous Supercomputers

    Science.gov (United States)

    Paitz, P.; Gokhberg, A.; Ermert, L. A.; Fichtner, A.

    2017-12-01

    The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems like earthquake fault zones, volcanoes, geothermal and hydrocarbon reservoirs. We present results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service providing seismic noise source maps for Central Europe with high temporal resolution. We use source imaging methods based on the cross-correlation of seismic noise records from all seismic stations available in the region of interest. The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept to provide interested researchers worldwide with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) high-performance noise source mapping application responsible for the generation of source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) front-end Web interface providing the service to the end-users and (5) data repository. The noise source mapping itself rests on the measurement of logarithmic amplitude ratios in suitably pre-processed noise correlations, and the use of simplified sensitivity kernels. During the implementation we addressed various challenges, in particular, the selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of various sub-systems, and
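
    The record states that the mapping rests on logarithmic amplitude ratios in pre-processed noise correlations. The sketch below shows one plausible form of such a measurement (the ratio of causal to acausal energy in a correlation function); the window choice and the function name are assumptions for illustration, not the project's actual definition.

```python
import numpy as np

def log_energy_ratio(cc, lags, window=(5.0, 50.0)):
    """Log ratio of causal to acausal energy in a noise correlation.
    A nonzero value indicates asymmetric illumination, i.e. a net
    imbalance of noise sources between the two sides of a station pair."""
    causal = (lags >= window[0]) & (lags <= window[1])
    acausal = (lags <= -window[0]) & (lags >= -window[1])
    e_plus = np.sum(cc[causal] ** 2)
    e_minus = np.sum(cc[acausal] ** 2)
    return np.log(e_plus / e_minus)

# a symmetric correlation (equal causal/acausal arrivals) gives ratio ~ 0
lags = np.linspace(-60, 60, 1201)
cc = np.exp(-((np.abs(lags) - 20.0) ** 2))
print(log_energy_ratio(cc, lags))   # ~ 0.0
```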

  3. Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources

    Energy Technology Data Exchange (ETDEWEB)

    Klumpp, John [Colorado State University, Department of Environmental and Radiological Health Sciences, Molecular and Radiological Biosciences Building, Colorado State University, Fort Collins, Colorado, 80523 (United States)]

    2013-07-01

    We propose a radiation detection system which generates its own discrete sampling distribution based on past measurements of background. The advantage to this approach is that it can take into account variations in background with respect to time, location, energy spectra, detector-specific characteristics (i.e. different efficiencies at different count rates and energies), etc. This would therefore be a 'machine learning' approach, in which the algorithm updates and improves its characterization of background over time. The system would have a 'learning mode,' in which it measures and analyzes background count rates, and a 'detection mode,' in which it compares measurements from an unknown source against its unique background distribution. By characterizing and accounting for variations in the background, general purpose radiation detectors can be improved with little or no increase in cost. The statistical and computational techniques to perform this kind of analysis have already been developed. The necessary signal analysis can be accomplished using existing Bayesian algorithms which account for multiple channels, multiple detectors, and multiple time intervals. Furthermore, Bayesian machine-learning techniques have already been developed which, with trivial modifications, can generate appropriate decision thresholds based on the comparison of new measurements against a nonparametric sampling distribution. (authors)
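
    A toy, nonparametric stand-in for the "learning mode"/"detection mode" split described above is sketched below. The class and variable names are invented for illustration, and the tail-probability test stands in for the full Bayesian multi-channel, multi-detector machinery the record refers to.

```python
import numpy as np

class BackgroundLearner:
    """Toy detector that learns its own background count-rate
    distribution ('learning mode') and then flags measurements
    that are improbable under it ('detection mode')."""

    def __init__(self):
        self.samples = []

    def learn(self, counts):                  # counts per fixed interval
        self.samples.extend(counts)

    def p_value(self, observed):
        """Fraction of stored background intervals with counts >= observed
        (a nonparametric tail probability, not a full Bayesian posterior)."""
        return np.mean(np.asarray(self.samples) >= observed)

rng = np.random.default_rng(1)
det = BackgroundLearner()
det.learn(rng.poisson(12.0, size=10000))      # background ~12 counts/interval
for obs in (15, 25, 40):
    print(obs, det.p_value(obs))              # decision threshold e.g. p < 1e-3
```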

  4. Systematic management of sealed source and nucleonic counting system in field service

    International Nuclear Information System (INIS)

    Mahadi Mustapha; Mohd Fitri Abdul Rahman; Jaafar Abdullah

    2005-01-01

    The PAT group has received many service requests from oil and gas plants. All of these services use sealed sources and nucleonic counting systems. This paper describes the details of the management carried out before going into field service. This management is important to make sure the job is done smoothly and safely for both radiation workers and the public. Furthermore, this management is in line with the regulations of the LPTA. (Author)

  5. MASHUP SCHEME DESIGN OF MAP TILES USING LIGHTWEIGHT OPEN SOURCE WEBGIS PLATFORM

    Directory of Open Access Journals (Sweden)

    T. Hu

    2018-04-01

    Full Text Available To address the difficulty of integrating and fusing multi-source image data on existing commercial Geographic Information System platforms, this research proposes the loading of multi-source local tile data based on CesiumJS and examines the tile data organization mechanisms and spatial reference differences of the CesiumJS platform, as well as various tile data sources, such as Google Maps, Map World, and Bing Maps. Two types of tile data loading schemes have been designed for the mashup of tiles: the single-data-source loading scheme and the multi-data-source loading scheme. The digital map tile sources used in this paper cover two different but mainstream spatial references, the WGS84 coordinate system and the Web Mercator coordinate system. According to the experimental results, the single-data-source loading scheme and the multi-data-source loading scheme with the same spatial coordinate system showed favorable visualization effects; however, the multi-data-source loading scheme was prone to tile image deformation when loading multi-source tile data with different spatial references. The resulting method provides a low-cost and highly flexible solution for small and medium-scale GIS programs and has considerable potential for practical application. The problem of deformation during the transition between different spatial references is an important topic for further research.
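
    The Web Mercator tiling used by Google Maps and Bing Maps can be illustrated with the standard "slippy map" index calculation below; the function name is invented for illustration. The other spatial reference mentioned in the abstract, a WGS84 latitude/longitude grid, maps rows differently, which is exactly where the deformation discussed above originates.

```python
import math

def tile_xy(lat_deg, lon_deg, zoom):
    """Slippy-map tile indices for the Web Mercator tiling scheme
    (WGS84 lat/lon in, tile column/row out)."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return x, y

print(tile_xy(30.5, 114.3, 10))   # e.g. a point near Wuhan at zoom 10
```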

  6. Determining {sup 252}Cf source strength by absolute passive neutron correlation counting

    Energy Technology Data Exchange (ETDEWEB)

    Croft, S. [Oak Ridge National Laboratory, Oak Ridge, TN 37831-6166 (United States)]; Henzlova, D., E-mail: henzlova@lanl.gov [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)]

    2013-06-21

    Physically small, lightly encapsulated radionuclide sources containing {sup 252}Cf are widely used for a vast variety of industrial, medical, educational and research applications requiring a convenient source of neutrons. For many quantitative applications, such as detector efficiency calibrations, the absolute strength of the neutron emission is needed. In this work we show how, by using a neutron multiplicity counter, the neutron emission rate can be obtained with high accuracy. This provides an independent and alternative way to create reference sources in-house for laboratories such as ours engaged in international safeguards metrology. The method makes use of the unique and well-known properties of the {sup 252}Cf spontaneous fission system and applies advanced neutron correlation counting methods. We lay out the foundation of the method and demonstrate it experimentally. We show that accuracy comparable to the best methods currently used by national bodies to certify neutron source strengths is possible.
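
    The flavor of such a correlation-counting analysis can be conveyed with the textbook point-model relations for a bare, non-multiplying 252Cf source: singles S = F·ε·ν1 and doubles D = F·ε²·f_d·ν2/2, from which the efficiency and emission rate follow. This is a hedged sketch, not the paper's actual formulation; the measured rates and gate fraction below are invented example numbers, while ν1, ν2 are commonly tabulated 252Cf factorial moments.

```python
# Point-model sketch for a bare 252Cf source in a multiplicity counter.
#   S = F*eps*nu1            (singles rate)
#   D = F*eps**2*f_d*nu2/2   (doubles rate)
# => eps = 2*D*nu1 / (S*f_d*nu2),  source strength Q = S/eps
nu1, nu2 = 3.757, 11.962          # <nu>, <nu*(nu-1)> for 252Cf SF (tabulated)
S   = 5.0e4                       # singles rate, counts/s (made-up example)
D   = 6.0e3                       # doubles rate, counts/s (made-up example)
f_d = 0.62                        # doubles gate fraction (made-up example)

eps = 2.0 * D * nu1 / (S * f_d * nu2)   # detection efficiency
Q   = S / eps                           # total neutron emission rate, n/s
print(f"efficiency = {eps:.3f}, source strength = {Q:.3e} n/s")
```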

  7. Frequency Count Attribute Oriented Induction of Corporate Network Data for Mapping Business Activity

    Directory of Open Access Journals (Sweden)

    Tanutama Lukas

    2014-03-01

    Full Text Available Companies increasingly rely on the Internet for effective and efficient business communication. As the Information Technology infrastructure backbone for business activities, the corporate network connects the company to the Internet and enables its activities globally. It carries the data packets generated by the activities of the users performing their business tasks. Traditionally, infrastructure operations mainly maintain data-carrying capacity and network device performance. It would be advantageous if a company knew what activities are running in its network. This research provides a simple method of mapping the business activity reflected by the network data. To map corporate users’ activities, a slightly modified Attribute Oriented Induction (AOI) approach to mining the network data was applied. The frequency of each protocol invoked was counted to show what the user intended to do. The collected data consisted of samples taken within a certain sampling period; sampling was necessary due to the enormous number of data packets generated. Only Internet-related protocols were of interest, while intranet protocols were ignored. It can be concluded that the method provides management with a general overview of the usage of its infrastructure and can lead to an efficient, effective and secure ICT infrastructure.

  8. Frequency Count Attribute Oriented Induction of Corporate Network Data for Mapping Business Activity

    Science.gov (United States)

    Tanutama, Lukas

    2014-03-01

    Companies increasingly rely on the Internet for effective and efficient business communication. As the Information Technology infrastructure backbone for business activities, the corporate network connects the company to the Internet and enables its activities globally. It carries the data packets generated by the activities of the users performing their business tasks. Traditionally, infrastructure operations mainly maintain data-carrying capacity and network device performance. It would be advantageous if a company knew what activities are running in its network. This research provides a simple method of mapping the business activity reflected by the network data. To map corporate users' activities, a slightly modified Attribute Oriented Induction (AOI) approach to mining the network data was applied. The frequency of each protocol invoked was counted to show what the user intended to do. The collected data consisted of samples taken within a certain sampling period; sampling was necessary due to the enormous number of data packets generated. Only Internet-related protocols were of interest, while intranet protocols were ignored. It can be concluded that the method provides management with a general overview of the usage of its infrastructure and can lead to an efficient, effective and secure ICT infrastructure.
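
    The frequency-count step described in the two records above amounts to generalizing raw packet records to a protocol attribute and ranking the counts. A minimal stand-in is shown below; the packet tuples are fabricated examples, not data from the paper.

```python
from collections import Counter

# Generalize raw packet records to the protocol attribute, count, rank.
packets = [
    ("10.0.0.5", "HTTP"), ("10.0.0.5", "HTTPS"), ("10.0.0.7", "SMTP"),
    ("10.0.0.5", "HTTP"), ("10.0.0.9", "DNS"),  ("10.0.0.7", "HTTP"),
]
protocol_counts = Counter(proto for _src, proto in packets)
for proto, n in protocol_counts.most_common():
    print(f"{proto}: {n} packets")   # e.g. HTTP dominating suggests web browsing
```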

  9. Sources of uncertainty in flood inundation maps

    Science.gov (United States)

    Bales, J.D.; Wagner, C.R.

    2009-01-01

    Flood inundation maps typically have been used to depict inundated areas for floods having specific exceedance levels. The uncertainty associated with the inundation boundaries is seldom quantified, in part, because all of the sources of uncertainty are not recognized and because data available to quantify uncertainty seldom are available. Sources of uncertainty discussed in this paper include hydrologic data used for hydraulic model development and validation, topographic data, and the hydraulic model. The assumption of steady flow, which typically is made to produce inundation maps, has less of an effect on predicted inundation at lower flows than for higher flows because more time typically is required to inundate areas at high flows than at low flows. Difficulties with establishing reasonable cross sections that do not intersect and that represent water-surface slopes in tributaries contribute additional uncertainties in the hydraulic modelling. As a result, uncertainty in the flood inundation polygons simulated with a one-dimensional model increases with distance from the main channel.

  10. Calorie count - fast food

    Science.gov (United States)


  11. Pixel-based parametric source depth map for Cerenkov luminescence imaging

    International Nuclear Information System (INIS)

    Altabella, L.; Spinelli, A.E.; Boschi, F.

    2016-01-01

    Optical tomography represents a challenging problem in optical imaging because of the intrinsically ill-posed inverse problem due to photon diffusion. Cerenkov luminescence tomography (CLT) for optical photons produced in tissues by several radionuclides (i.e., 32P, 18F, 90Y) has been investigated using both a 3D multispectral approach and multi-view methods. Difficulties in the convergence of 3D algorithms can discourage the use of this technique to obtain information on source depth and intensity. For these reasons, we developed a faster corrected 2D approach, based on multispectral acquisitions, to obtain the source depth and its intensity using a pixel-based fitting of source intensity. Monte Carlo simulations and experimental data were used to develop and validate the method to obtain the parametric map of source depth. With this approach we obtain parametric source depth maps with a precision between 3% and 7% for MC simulations and 5-6% for experimental data. Using this method we are able to obtain reliable information about the source depth of Cerenkov luminescence with a simple and flexible procedure
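
    A pixel-based multispectral fit of this kind can be sketched with a simple depth-attenuation model, I_i = I0·exp(-mu_i·d) for spectral band i. The per-band attenuation coefficients, noise level and "pixel" below are placeholder values for illustration, not tissue optics or data from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

mu = np.array([2.0, 1.2, 0.7])                # effective mu per band, 1/cm (placeholder)

def model(mu_band, I0, d):
    """Detected intensity vs. band attenuation for a source at depth d."""
    return I0 * np.exp(-mu_band * d)

rng = np.random.default_rng(7)
true_I0, true_d = 100.0, 0.8                  # synthetic ground truth, depth in cm
obs = model(mu, true_I0, true_d) * (1 + 0.03 * rng.standard_normal(3))
(I0_fit, d_fit), _ = curve_fit(model, mu, obs, p0=(obs.max(), 0.5))
print(f"fitted I0 = {I0_fit:.1f}, depth = {d_fit:.2f} cm")
```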

  12. What Is the Role of Manual Preference in Hand-Digit Mapping During Finger Counting? A Study in a Large Sample of Right- and Left-Handers.

    Science.gov (United States)

    Zago, Laure; Badets, Arnaud

    2016-01-01

    The goal of the present study was to test whether there is a relationship between manual preference and hand-digit mapping in 369 French adults with similar numbers of right- and left-handers. Manual laterality was evaluated with the finger tapping test to evaluate hand motor asymmetry, and the Edinburgh handedness inventory was used to assess manual preference strength (MPS) and direction. Participants were asked to spontaneously "count on their fingers from 1 to 10" without indications concerning the hand(s) to be used. The results indicated that both MPS and hand motor asymmetry affect the hand-starting preference for counting. Left-handers with a strong left-hand preference (sLH) or left-hand motor asymmetry largely started to count with their left hand (left-starter), while right-handers with a strong right-hand preference (sRH) or right-hand motor asymmetry largely started to count with their right hand (right-starter). Notably, individuals with weak MPS did not show a hand-starting preference. These findings demonstrated that manual laterality contributes to finger counting directionality. Lastly, the results showed a higher proportion of sLH left-starter individuals compared with sRH right-starters, indicating an asymmetric bias of MPS on hand-starting preference. We hypothesize that the higher proportion of sLH left-starters could be explained by the congruence between left-to-right hand-digit mapping and left-to-right mental number line representation that has been largely reported in the literature. Taken together, these results indicate that finger-counting habits integrate biological and cultural information. © The Author(s) 2015.

  13. Radio source counts: comments on their convergence and assessment of the contribution to fluctuations of the microwave background

    International Nuclear Information System (INIS)

    Danese, L.; De Zotti, G.; Mandolesi, N.

    1982-01-01

    We point out that statistically estimated high frequency counts at milli-Jansky levels exhibit a slower convergence than expected on the basis of extrapolations of counts at higher flux densities and at longer wavelengths. This seems to demand a substantial cosmological evolution for at least a sub-population of flat-spectrum sources different from QSO's, a fact that might have important implications also in connection with the problem of the origin of the X-ray background. We also compute the discrete source contributions to small scale fluctuations in the Rayleigh-Jeans region of the cosmic microwave background and we show that they set a serious limit to the searches for truly primordial anisotropies using conventional radio-astronomical techniques

  14. Prototype of interactive Web Maps: an approach based on open sources

    Directory of Open Access Journals (Sweden)

    Jürgen Philips

    2004-07-01

    Full Text Available To explore the potential of the World Wide Web (WWW), a prototype interactive Web map was developed using standardized codes and open sources, such as the eXtensible Markup Language (XML), Scalable Vector Graphics (SVG), the Document Object Model (DOM), the script languages ECMAScript/JavaScript and "PHP: Hypertext Preprocessor", and PostgreSQL with its extension PostGIS, to disseminate information related to the urban real estate register. Data from the City Hall of São José, Santa Catarina, were used, referring to the Campinas district. Using a client/server model, a prototype Web map with standardized codes and open sources was implemented, allowing a user to visualize Web maps in a browser using only Adobe's Viewer 3.0 plug-in. Aiming at a good cartographic project for the Web, rules of graphical translation were obeyed and different interaction functionalities were implemented, such as interactive legends, symbolization and dynamic scale. From the results, the use of standardized codes and open sources in interactive Web mapping projects can be recommended. With the use of open source code in public and private administration, the possibility of technological development is amplified and, consequently, software acquisition expenses are reduced. Besides, it stimulates the development of computer applications targeting specific demands and requirements.

  15. Searching for Orphan radiation sources

    International Nuclear Information System (INIS)

    Bystrov, Evgenij; Antonau, Uladzimir; Gurinovich, Uladzimir; Kazhamiakin, Valery; Petrov, Vitaly; Shulhovich, Heorhi; Tischenko, Siarhei

    2008-01-01

    Full text: The problem of orphan sources cannot be left unaddressed, due to the high probability of accidental exposure and of the use of such sources for terrorism. The search for objects of this kind is complex, particularly when the search territory is large. This requires devices capable of detecting sources, identifying their radionuclide composition, correlating scan results to geographical coordinates and displaying the results on a map. The spectral radiation scanner AT6101C fulfils the objectives of searching for gamma and neutron radiation sources, identifying radionuclide composition, correlating results to geographical coordinates and displaying the results on a map. The scanner consists of a gamma radiation scintillation detection unit based on a NaI(Tl) crystal, a neutron detection unit based on two He-3 counters, a GPS receiver and a portable ruggedized computer. Built-in and application software automates the entire scan process, saving all results to memory for further analysis, with visual representation of the results as spectral information diagrams, count rate profiles and gamma radiation dose rates on a geographical map. The scanner informs the operator with voice messages on detection of radiation sources, identification results and other events. The scanner detection units and accessories are packed in a backpack. Weighing 7 kg, the scanner is human-portable and can be used for scanning inside cars. The scanner can also be used for radiation mapping and inspections. (author)

  16. The Mapping X-ray Fluorescence Spectrometer (MapX)

    Science.gov (United States)

    Sarrazin, P.; Blake, D. F.; Marchis, F.; Bristow, T.; Thompson, K.

    2017-12-01

    Many planetary surface processes leave traces of their actions as features in the size range 10s to 100s of microns. The Mapping X-ray Fluorescence Spectrometer (MapX) will provide elemental imaging at 100 micron spatial resolution, yielding elemental chemistry at a scale where many relict physical, chemical, or biological features can be imaged and interpreted in ancient rocks on planetary bodies and planetesimals. MapX is an arm-based instrument positioned on a rock or regolith with touch sensors. During an analysis, an X-ray source (tube or radioisotope) bombards the sample with X-rays or alpha-particles / gamma-rays, resulting in sample X-ray Fluorescence (XRF). X-rays emitted in the direction of an X-ray sensitive CCD imager pass through a 1:1 focusing lens (X-ray micro-pore Optic (MPO)) that projects a spatially resolved image of the X-rays onto the CCD. The CCD is operated in single photon counting mode so that the energies and positions of individual X-ray photons are recorded. In a single analysis, several thousand frames are both stored and processed in real-time. Higher level data products include single-element maps with a lateral spatial resolution of 100 microns and quantitative XRF spectra from ground- or instrument- selected Regions of Interest (ROI). XRF spectra from ROI are compared with known rock and mineral compositions to extrapolate the data to rock types and putative mineralogies. When applied to airless bodies and implemented with an appropriate radioisotope source for alpha-particle excitation, MapX will be able to analyze biogenic elements C, N, O, P, S, in addition to the cations of the rock-forming elements >Na, accessible with either X-ray or gamma-ray excitation. The MapX concept has been demonstrated with a series of lab-based prototypes and is currently under refinement and TRL maturation.

  17. Quantifying the sources of variability in equine faecal egg counts: implications for improving the utility of the method.

    Science.gov (United States)

    Denwood, M J; Love, S; Innocent, G T; Matthews, L; McKendrick, I J; Hillary, N; Smith, A; Reid, S W J

    2012-08-13

    The faecal egg count (FEC) is the most widely used means of quantifying the nematode burden of horses, and is frequently used in clinical practice to inform treatment and prevention. The statistical process underlying the FEC is complex, comprising a Poisson counting error process for each sample, compounded with an underlying continuous distribution of means between samples. Being able to quantify the sources of variability contributing to this distribution of means is a necessary step towards providing estimates of statistical power for future FEC and FECRT studies, and may help to improve the usefulness of the FEC technique by identifying and minimising unwanted sources of variability. Obtaining such estimates requires a hierarchical statistical model coupled with repeated FEC observations from a single animal over a short period of time. Here, we use this approach to provide the first comparative estimate of multiple sources of within-horse FEC variability. The results demonstrate that a substantial proportion of the observed variation in FEC between horses occurs as a result of variation in FEC within an animal, with the major sources being aggregation of eggs within faeces and variation in egg concentration between faecal piles. The McMaster procedure itself is associated with a comparatively small coefficient of variation, and is therefore highly repeatable when a sufficiently large number of eggs are observed to reduce the error associated with the counting process. We conclude that the variation between samples taken from the same animal is substantial, but can be reduced through the use of larger homogenised faecal samples. Estimates are provided for the coefficient of variation (cv) associated with each within-animal source of variability in observed FEC, allowing the usefulness of individual FEC to be quantified, and providing a basis for future FEC and FECRT studies. Copyright © 2012 Elsevier B.V. All rights reserved.
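
    The compound structure described above (a continuous distribution of true means, plus Poisson counting error) is easy to simulate. The sketch below uses a gamma distribution for between-sample egg concentration and a McMaster-style counting sensitivity; the cv, mean and sensitivity values are illustrative, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
mean_epg = 300.0          # mean eggs per gram for one horse (illustrative)
cv_between = 0.5          # cv of true concentration between faecal samples
sensitivity = 50.0        # McMaster: 1 egg counted = 50 epg

# gamma distribution of "true" concentrations, compounded with Poisson counting
shape = 1.0 / cv_between**2
true_epg = rng.gamma(shape, mean_epg / shape, size=10000)
eggs_counted = rng.poisson(true_epg / sensitivity)     # counting error process
fec = eggs_counted * sensitivity
print("overall cv of observed FEC: %.2f" % (fec.std() / fec.mean()))
```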

  18. Analysis of spatial count data using Kalman smoothing

    DEFF Research Database (Denmark)

    Dethlefsen, Claus

    2007-01-01

    We consider spatial count data from an agricultural field experiment. Counts of weed plants in a field have been recorded in a project on precision farming. Interest is in mapping the weed intensity so that the dose of herbicide applied at any location can be adjusted to the amount of weed present...

  19. Stability of rotors and focal sources for human atrial fibrillation: focal impulse and rotor mapping (FIRM) of AF sources and fibrillatory conduction.

    Science.gov (United States)

    Swarup, Vijay; Baykaner, Tina; Rostamian, Armand; Daubert, James P; Hummel, John; Krummen, David E; Trikha, Rishi; Miller, John M; Tomassoni, Gery F; Narayan, Sanjiv M

    2014-12-01

    Several groups report electrical rotors or focal sources that sustain atrial fibrillation (AF) after it has been triggered. However, it is difficult to separate stable from unstable activity in prior studies that examined only seconds of AF. We applied phase-based focal impulse and rotor mapping (FIRM) to study the dynamics of rotors/sources in human AF over prolonged periods of time. We prospectively mapped AF in 260 patients (169 persistent, 61 ± 12 years) at 6 centers in the FIRM registry, using baskets with 64 contact electrodes per atrium. AF was phase mapped (RhythmView, Topera, Menlo Park, CA, USA). AF propagation movies were interpreted by each operator to assess the source stability/dynamics over tens of minutes before ablation. Sources were identified in 258 of 260 of patients (99%), for 2.8 ± 1.4 sources/patient (1.8 ± 1.1 in left, 1.1 ± 0.8 in right atria). While AF sources precessed in stable regions, emanating activity including spiral waves varied from collision/fusion (fibrillatory conduction). Each source lay in stable atrial regions for 4,196 ± 6,360 cycles, with no differences between paroxysmal versus persistent AF (4,290 ± 5,847 vs. 4,150 ± 6,604; P = 0.78), or right versus left atrial sources (P = 0.26). Rotors and focal sources for human AF mapped by FIRM over prolonged time periods precess ("wobble") but remain within stable regions for thousands of cycles. Conversely, emanating activity such as spiral waves disorganize and collide with the fibrillatory milieu, explaining difficulties in using activation mapping or signal processing analyses at fixed electrodes to detect AF rotors. These results provide a rationale for targeted ablation at AF sources rather than fibrillatory spiral waves. © 2014 Wiley Periodicals, Inc.
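
    Phase mapping of the kind referred to above rests on assigning an instantaneous phase to each electrogram; a rotor then appears as a point around which the phase winds through 2π. The sketch below shows the common Hilbert-transform route to instantaneous phase on a toy signal; FIRM's actual signal processing is proprietary and more involved, so this is only an illustration of the building block.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                   # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
egm = np.sin(2 * np.pi * 6.0 * t)             # toy 6 Hz "AF-like" signal

# instantaneous phase from the analytic signal; wraps -pi..pi once per cycle
phase = np.angle(hilbert(egm - egm.mean()))
print(phase[:5])
```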

  20. Calculation of total counting efficiency of a NaI(Tl) detector by hybrid Monte-Carlo method for point and disk sources

    Energy Technology Data Exchange (ETDEWEB)

    Yalcin, S. [Education Faculty, Kastamonu University, 37200 Kastamonu (Turkey)], E-mail: yalcin@gazi.edu.tr; Gurler, O.; Kaynak, G. [Department of Physics, Faculty of Arts and Sciences, Uludag University, Gorukle Campus, 16059 Bursa (Turkey)]; Gundogdu, O. [Department of Physics, School of Engineering and Physical Sciences, University of Surrey, Guildford GU2 7XH (United Kingdom)]

    2007-10-15

    This paper presents results on the total gamma counting efficiency of a NaI(Tl) detector from point and disk sources. The directions of photons emitted from the source were determined by Monte-Carlo techniques and the photon path lengths in the detector were determined by analytic equations depending on photon directions. This is called the hybrid Monte-Carlo method where analytical expressions are incorporated into the Monte-Carlo simulations. A major advantage of this technique is the short computation time compared to other techniques on similar computational platforms. Another advantage is the flexibility for inputting detector-related parameters (such as source-detector distance, detector radius, source radius, detector linear attenuation coefficient) into the algorithm developed, thus making it an easy and flexible method to apply to other detector systems and configurations. The results of the total counting efficiency model put forward for point and disc sources were compared with the previous work reported in the literature.

  1. Calculation of total counting efficiency of a NaI(Tl) detector by hybrid Monte-Carlo method for point and disk sources

    International Nuclear Information System (INIS)

    Yalcin, S.; Gurler, O.; Kaynak, G.; Gundogdu, O.

    2007-01-01

    This paper presents results on the total gamma counting efficiency of a NaI(Tl) detector from point and disk sources. The directions of photons emitted from the source were determined by Monte-Carlo techniques and the photon path lengths in the detector were determined by analytic equations depending on photon directions. This is called the hybrid Monte-Carlo method where analytical expressions are incorporated into the Monte-Carlo simulations. A major advantage of this technique is the short computation time compared to other techniques on similar computational platforms. Another advantage is the flexibility for inputting detector-related parameters (such as source-detector distance, detector radius, source radius, detector linear attenuation coefficient) into the algorithm developed, thus making it an easy and flexible method to apply to other detector systems and configurations. The results of the total counting efficiency model put forward for point and disc sources were compared with the previous work reported in the literature
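
    The hybrid scheme the two records above describe, Monte Carlo directions combined with analytic path lengths, can be sketched compactly for the simplest case of an on-axis point source and a bare cylindrical crystal. The dimensions and attenuation coefficient below are illustrative values, not the paper's inputs.

```python
import numpy as np

R, t, d = 3.81, 7.62, 5.0    # crystal radius, thickness, source distance (cm)
mu = 0.3                     # total linear attenuation coefficient (1/cm)
N = 1_000_000

rng = np.random.default_rng(3)
cos_max = d / np.hypot(d, R)                  # cone of directions hitting the front face
u = rng.uniform(cos_max, 1.0, N)              # cos(theta), sampled uniformly on the cap
sin_t = np.sqrt(1.0 - u**2)

s_front = d / u                               # distance from source to front face
s_back = (d + t) / u                          # ... to the back-face plane
s_side = np.divide(R, sin_t, out=np.full(N, np.inf), where=sin_t > 0)
L = np.minimum(s_back, s_side) - s_front      # analytic chord length in the crystal

# interaction probability per photon, weighted by the cone's solid-angle fraction
eff = np.mean(1.0 - np.exp(-mu * L)) * (1.0 - cos_max) / 2.0
print(f"total counting efficiency ~ {eff:.4f}")
```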

  2. IMPLEMENTATION OF OPEN-SOURCE WEB MAPPING TECHNOLOGIES TO SUPPORT MONITORING OF GOVERNMENTAL SCHEMES

    Directory of Open Access Journals (Sweden)

    B. R. Pulsani

    2015-10-01

    Full Text Available Several schemes are undertaken by the government to uplift the social and economic condition of people. The monitoring of these schemes is done through information technology, where the involvement of Geographic Information Systems (GIS) is lacking. To demonstrate the benefits of thematic mapping as a tool for assisting officials in making decisions, a web mapping application for three government programs, the Mother and Child Tracking System (MCTS), Telangana State Housing Corporation Limited (TSHCL) and Ground Water Quality Mapping (GWQM), has been built. Indeed, the three applications depicted the distribution of various parameters thematically and helped in identifying the areas with higher and weaker distributions. Based on the three applications, the study finds that many government schemes lend themselves to thematic mapping and hence concludes that this kind of approach can be implemented for other schemes as well. These applications have been developed using the SharpMap C# library, which is a free and open-source mapping library for developing geospatial applications. The study highlights the cost benefits of SharpMap, brings out the advantages of this library over proprietary vendors, and further discusses its advantages over other open-source libraries as well.

  3. PRECISE ORTHO IMAGERY AS THE SOURCE FOR AUTHORITATIVE AIRPORT MAPPING

    Directory of Open Access Journals (Sweden)

    H. Howard

    2016-06-01

    Full Text Available As the aviation industry moves from paper maps and charts to the digital cockpit and electronic flight bag, producers of these products need current and accurate data to ensure flight safety. The FAA (Federal Aviation Administration) and ICAO (International Civil Aviation Organization) require certified suppliers to follow a defined protocol to produce authoritative map data for the aerodrome. Typical airport maps have been produced to meet 5 m accuracy requirements. The new digital aviation world is moving to 1 m accuracy maps to provide better situational awareness on the aerodrome. The commercial availability of 0.5 m satellite imagery combined with accurate ground control is enabling the production of avionics-certified 0.85 m orthophotos of airports around the globe. CompassData maintains an archive of over 400 airports as source data to support producers of 1 m certified Aerodrome Mapping Databases (AMDB) critical to flight safety and automated situational awareness. CompassData is a DO200A-certified supplier of authoritative orthoimagery and attendees will learn how to utilize current airport imagery to build digital aviation mapping products.

  4. Precise Ortho Imagery as the Source for Authoritative Airport Mapping

    Science.gov (United States)

    Howard, H.; Hummel, P.

    2016-06-01

    As the aviation industry moves from paper maps and charts to the digital cockpit and electronic flight bag, producers of these products need current and accurate data to ensure flight safety. The FAA (Federal Aviation Administration) and ICAO (International Civil Aviation Organization) require certified suppliers to follow a defined protocol to produce authoritative map data for the aerodrome. Typical airport maps have been produced to meet 5 m accuracy requirements. The new digital aviation world is moving to 1 m accuracy maps to provide better situational awareness on the aerodrome. The commercial availability of 0.5 m satellite imagery combined with accurate ground control is enabling the production of avionics-certified 0.85 m orthophotos of airports around the globe. CompassData maintains an archive of over 400 airports as source data to support producers of 1 m certified Aerodrome Mapping Databases (AMDB) critical to flight safety and automated situational awareness. CompassData is a DO200A-certified supplier of authoritative orthoimagery and attendees will learn how to utilize current airport imagery to build digital aviation mapping products.

  5. DC KIDS COUNT e-Databook Indicators

    Science.gov (United States)

    DC Action for Children, 2012

    2012-01-01

    This report presents indicators that are included in DC Action for Children's 2012 KIDS COUNT e-databook, their definitions and sources and the rationale for their selection. The indicators for DC KIDS COUNT represent a mix of traditional KIDS COUNT indicators of child well-being, such as the number of children living in poverty, and indicators of…

  6. MEASURING PRIMORDIAL NON-GAUSSIANITY THROUGH WEAK-LENSING PEAK COUNTS

    International Nuclear Information System (INIS)

    Marian, Laura; Hilbert, Stefan; Smith, Robert E.; Schneider, Peter; Desjacques, Vincent

    2011-01-01

    We explore the possibility of detecting primordial non-Gaussianity of the local type using weak-lensing peak counts. We measure the peak abundance in sets of simulated weak-lensing maps corresponding to three models f_NL = 0, -100, and 100. Using survey specifications similar to those of EUCLID and without assuming any knowledge of the lens and source redshifts, we find the peak functions of the non-Gaussian models with f_NL = ±100 to differ by up to 15% from the Gaussian peak function at the high-mass end. For the assumed survey parameters, the probability of fitting an f_NL = 0 peak function to the f_NL = ±100 peak functions is less than 0.1%. Assuming the other cosmological parameters are known, f_NL can be measured with an error Δf_NL ∼ 13. It is therefore possible that future weak-lensing surveys like EUCLID and LSST may detect primordial non-Gaussianity from the abundance of peak counts, and provide information complementary to that obtained from the cosmic microwave background.

  7. Mapping of auroral kilometric radiation sources to the aurora

    International Nuclear Information System (INIS)

    Huff, R.L.; Calvert, W.; Craven, J.D.; Frank, L.A.; Gurnett, D.A.

    1988-01-01

    Auroral kilometric radiation (AKR) and optical auroral emissions are observed simultaneously using plasma wave instrumentation and auroral imaging photometers carried on the DE 1 spacecraft. The DE 1 plasma wave instrument measures the relative phase of signals from orthogonal electric dipole antennas, and from these measurements, apparent source directions can be determined with a high degree of precision. Wave data are analyzed for several strong AKR events, and source directions are determined for several emission frequencies. By assuming that the AKR originates at cyclotron resonant altitudes, a candidate source field line is identified. When the selected source field line is traced down to auroral altitudes on the concurrent DE 1 auroral image, a striking correspondence between the AKR source field line and localized auroral features is produced. The magnetic mapping study provides strong evidence that AKR sources occur on field lines associated with discrete auroral arcs, and it provides confirmation that AKR is generated near the electron cyclotron frequency

  8. Model-based analysis and optimization of the mapping of cortical sources in the spontaneous scalp EEG

    NARCIS (Netherlands)

    Sazonov, A.; Bergmans, J.W.M.; Cluitmans, P.J.M.; Griep, P.A.M.; Arends, J.B.A.M.; Boon, P.A.J.M.

    2007-01-01

    The mapping of brain sources into the scalp electroencephalogram (EEG) depends on volume conduction properties of the head and on an electrode montage involving a reference. Mathematically, this source mapping (SM) is fully determined by an observation function (OF) matrix. This paper analyses the

  9. High-frequency maximum observable shaking map of Italy from fault sources

    KAUST Repository

    Zonno, Gaetano

    2012-03-17

    We present a strategy for obtaining fault-based maximum observable shaking (MOS) maps, which represent an innovative concept for assessing deterministic seismic ground motion at a regional scale. Our approach uses the fault sources supplied for Italy by the Database of Individual Seismogenic Sources, and particularly by its composite seismogenic sources (CSS), a spatially continuous simplified 3-D representation of a fault system. For each CSS, we consider the associated Typical Fault, i.e., the portion of the corresponding CSS that can generate the maximum credible earthquake. We then compute the high-frequency (1-50 Hz) ground shaking for a rupture model derived from its associated maximum credible earthquake. As the Typical Fault floats within its CSS to occupy all possible positions of the rupture, the high-frequency shaking is updated in the area surrounding the fault, and the maximum from that scenario is extracted and displayed on a map. The final high-frequency MOS map of Italy is then obtained by merging 8,859 individual scenario-simulations, from which the ground shaking parameters have been extracted. To explore the internal consistency of our calculations and validate the results of the procedure we compare our results (1) with predictions based on the Next Generation Attenuation ground-motion equations for an earthquake of M_w 7.1, (2) with the predictions of the official Italian seismic hazard map, and (3) with macroseismic intensities included in the DBMI04 Italian database. We then examine the uncertainties and analyse the variability of ground motion for different fault geometries and slip distributions. © 2012 Springer Science+Business Media B.V.

  10. High-frequency maximum observable shaking map of Italy from fault sources

    KAUST Repository

    Zonno, Gaetano; Basili, Roberto; Meroni, Fabrizio; Musacchio, Gemma; Mai, Paul Martin; Valensise, Gianluca

    2012-01-01

    We present a strategy for obtaining fault-based maximum observable shaking (MOS) maps, which represent an innovative concept for assessing deterministic seismic ground motion at a regional scale. Our approach uses the fault sources supplied for Italy by the Database of Individual Seismogenic Sources, and particularly by its composite seismogenic sources (CSS), a spatially continuous simplified 3-D representation of a fault system. For each CSS, we consider the associated Typical Fault, i.e., the portion of the corresponding CSS that can generate the maximum credible earthquake. We then compute the high-frequency (1-50 Hz) ground shaking for a rupture model derived from its associated maximum credible earthquake. As the Typical Fault floats within its CSS to occupy all possible positions of the rupture, the high-frequency shaking is updated in the area surrounding the fault, and the maximum from that scenario is extracted and displayed on a map. The final high-frequency MOS map of Italy is then obtained by merging 8,859 individual scenario-simulations, from which the ground shaking parameters have been extracted. To explore the internal consistency of our calculations and validate the results of the procedure we compare our results (1) with predictions based on the Next Generation Attenuation ground-motion equations for an earthquake of M_w 7.1, (2) with the predictions of the official Italian seismic hazard map, and (3) with macroseismic intensities included in the DBMI04 Italian database. We then examine the uncertainties and analyse the variability of ground motion for different fault geometries and slip distributions. © 2012 Springer Science+Business Media B.V.

  11. Sources and magnitude of sampling error in redd counts for bull trout

    Science.gov (United States)

    Jason B. Dunham; Bruce Rieman

    2001-01-01

    Monitoring of salmonid populations often involves annual redd counts, but the validity of this method has seldom been evaluated. We conducted redd counts of bull trout Salvelinus confluentus in two streams in northern Idaho to address four issues: (1) relationships between adult escapements and redd counts; (2) interobserver variability in redd...

  12. Liquid scintillation counting system with automatic gain correction

    International Nuclear Information System (INIS)

    Frank, R.B.

    1976-01-01

    An automatic liquid scintillation counting apparatus is described including a scintillating medium in the elevator ram of the sample changing apparatus. An appropriate source of radiation, which may be the external source for standardizing samples, produces reference scintillations in the scintillating medium which may be used for correction of the gain of the counting system

  13. Spatial resolution limits for the localization of noise sources using direct sound mapping

    DEFF Research Database (Denmark)

    Comesana, D. Fernandez; Holland, K. R.; Fernandez Grande, Efren

    2016-01-01

    One of the main challenges arising from noise and vibration problems is how to identify the areas of a device, machine or structure that produce significant acoustic excitation, i.e. the localization of main noise sources. The direct visualization of sound, in particular sound intensity, has extensively been used for many years to locate sound sources. However, it is not yet well defined when two sources should be regarded as resolved by means of direct sound mapping. This paper derives the limits of the direct representation of sound pressure, particle velocity and sound intensity by exploring the relationship between spatial resolution, noise level and geometry. The proposed expressions are validated via simulations and experiments. It is shown that particle velocity mapping yields better results for identifying closely spaced sound sources than sound pressure or sound intensity, especially

  14. Crowd-Sourced Mobility Mapping for Location Tracking Using Unlabeled Wi-Fi Simultaneous Localization and Mapping

    Directory of Open Access Journals (Sweden)

    Mu Zhou

    2015-01-01

    Full Text Available Due to the increasing requirements of seamless and round-the-clock location-based services (LBSs), a growing interest in Wi-Fi network aided location tracking has been witnessed in the past decade. One of the significant problems of conventional Wi-Fi location tracking approaches based on received signal strength (RSS) fingerprinting is the time-consuming and labor-intensive work involved in location fingerprint calibration. To solve this problem, a novel unlabeled Wi-Fi simultaneous localization and mapping (SLAM) approach is developed that avoids location fingerprinting and additional inertial or vision sensors. In this approach, an unlabeled mobility map of the coverage area is first constructed by crowd-sourcing from a batch of sporadically recorded Wi-Fi RSS sequences, based on spectral cluster assembling. Then, a sequence alignment algorithm is applied to conduct location tracking and mobility map updating. Finally, the effectiveness of this approach is verified by extensive experiments carried out in a campus-wide area.

  15. Preschool children use space, rather than counting, to infer the numerical magnitude of digits: Evidence for a spatial mapping principle.

    Science.gov (United States)

    Sella, Francesco; Berteletti, Ilaria; Lucangeli, Daniela; Zorzi, Marco

    2017-01-01

    A milestone in numerical development is the acquisition of counting principles which allow children to exactly determine the numerosity of a given set. Moreover, a canonical left-to-right spatial layout for representing numbers also emerges during preschool. These foundational aspects of numerical competence have been extensively studied, but there is sparse knowledge about the interplay between the acquisition of the cardinality principle and spatial mapping of numbers in early numerical development. The present study investigated how these skills concurrently develop before formal schooling. Preschool children were classified according to their performance in Give-a-Number and Number-to-position tasks. Experiment 1 revealed three qualitatively different groups: (i) children who did not master the cardinality principle and lacked any consistent spatial mapping for digits, (ii) children who mastered the cardinality principle and yet failed in spatial mapping, and (iii) children who mastered the cardinality principle and displayed consistent spatial mapping. This suggests that mastery of the cardinality principle does not entail the emergence of spatial mapping. Experiment 2 confirmed the presence of these three developmental stages and investigated their relation with a digit comparison task. Crucially, only children who displayed a consistent spatial mapping of numbers showed the ability to compare digits by numerical magnitude. A congruent (i.e., numerically ordered) positioning of numbers onto a visual line as well as the concept that moving rightwards (in Western cultures) conveys an increase in numerical magnitude mark the mastery of a spatial mapping principle. Children seem to rely on this spatial organization to achieve a full understanding of the magnitude relations between digits. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. NetMap - Creating a Map of Application Layer QoS Metrics of Mobile Networks Using Crowd Sourcing

    DEFF Research Database (Denmark)

    Mikkelsen, Lars Møller; Thomsen, Steffen Riber; Pedersen, Michael Sølvkjær

    2014-01-01

    Based on the continuous increase in network traffic on mobile networks, the large increase in smart devices, and the ever ongoing development of Internet enabled services, we argue for the need of a network performance map. In this paper NetMap is presented, which is a measurement system based on crowd sourcing that utilizes end-user smart devices in automatically measuring and gathering network performance metrics on mobile networks. Metrics measured include throughput, round trip times, connectivity, and signal strength, and are accompanied by a wide range of context information about

  17. Relationship between γ detection dead-time and count correction factor

    International Nuclear Information System (INIS)

    Wu Huailong; Zhang Jianhua; Chu Chengsheng; Hu Guangchun; Zhang Changfan; Hu Gen; Gong Jian; Tian Dongfeng

    2015-01-01

    The relationship between dead-time and the count correction factor was investigated by using an interference source, for the purpose of high-activity γ measurement. The count rates were maintained at several tens of counts per second (10 s^-1 level) with γ energies of 0.3-1.3 MeV for 10^4-10^5 Bq radioactive sources. It is proved that the relationship between count loss and dead-time does not depend on γ energy or count intensity. The same correction formula can be used for measurements of any nuclide. (authors)
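
    For orientation, the two standard textbook dead-time corrections relating the true rate n to the measured rate m for a dead time τ are sketched below; these are generic formulas, and the paper's empirically determined correction factor may differ from either idealized model.

```python
import numpy as np

def true_rate_nonparalyzable(m, tau):
    """Non-paralyzable model: n = m / (1 - m*tau)."""
    return m / (1.0 - m * tau)

def measured_rate_paralyzable(n, tau):
    """Paralyzable model: m = n * exp(-n*tau)."""
    return n * np.exp(-n * tau)

m, tau = 4.0e4, 2.0e-6          # 40 kcps measured, 2 us dead time (example values)
print("non-paralyzable true rate: %.3e cps" % true_rate_nonparalyzable(m, tau))
```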

  18. OpenCFU, a new free and open-source software to count cell colonies and other circular objects.

    Science.gov (United States)

    Geissmann, Quentin

    2013-01-01

    Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the created program, OpenCFU, is very robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is a cross-platform and open-source software freely available at http://opencfu.sourceforge.net.
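
    The task of enumerating circular objects can be illustrated with a generic threshold-and-label counter on a synthetic image, shown below. This is only an illustration of the problem, not OpenCFU's algorithm, which adds robust filtering for artefacts and touching colonies.

```python
import numpy as np
from scipy import ndimage

# synthesize a dark image with a dozen bright circular "colonies"
rng = np.random.default_rng(4)
img = np.zeros((200, 200))
yy, xx = np.mgrid[0:200, 0:200]
for cy, cx in rng.integers(20, 180, size=(12, 2)):
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < 7 ** 2] = 1.0

labels, n = ndimage.label(img > 0.5)    # connected-component counting
print("colonies counted:", n)           # <= 12 if some blobs overlap
```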

  19. Absolute nuclear material assay using count distribution (LAMBDA) space

    Science.gov (United States)

    Prasad, Manoj K [Pleasanton, CA]; Snyderman, Neal J [Berkeley, CA]; Rowland, Mark S [Alamo, CA]

    2012-06-05

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
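
    The idea of "spreading the fission chain distribution in time" to build count distributions can be caricatured as below. The chain-size distribution here is a placeholder geometric distribution (the actual method samples analytically computed fission chain distributions), and all rates and time constants are invented illustrative numbers.

```python
import numpy as np

rng = np.random.default_rng(5)
T, gate, die_away = 100.0, 1.0e-3, 50e-6     # run time, gate width, die-away (s)
starts = rng.uniform(0.0, T, size=20000)     # chain start times

times = []
for t0 in starts:
    size = rng.geometric(0.4)                # placeholder chain multiplicity
    times.extend(t0 + rng.exponential(die_away, size))   # spread events in time

counts = np.histogram(times, bins=int(T / gate), range=(0.0, T))[0]
dist = np.bincount(counts)                   # empirical count distribution
print(dist[:8])                              # gates with 0, 1, 2, ... counts
```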

  20. Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources

    Directory of Open Access Journals (Sweden)

    Xiang Gao

    2016-07-01

    Full Text Available This paper addresses the problem of mapping the odor distribution derived from a chemical source using multi-sensor integration and reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To solve both of the above challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors, and fuses the sensors’ data gathered at different positions. Initially, a multi-sensor integration method, together with the path of airflow, was used to map the pattern of odor particle movement. Then, more sensors are introduced at specific regions to determine the probable location of the odor source. Finally, the results of an odor source location simulation and a real experiment are presented.

  1. Project and construction of counting system for neutron probe

    International Nuclear Information System (INIS)

    Monteiro, W.P.

    1985-01-01

    A counting system was developed for coupling to a neutron probe, aiming to register the pulses produced by slow-neutron interactions in the detector. The neutron probe consists of a fast neutron source, a thermal neutron detector, an amplifier circuit and a pulse counting circuit. The counting system is composed of a counting circuit, a timer and a signal circuit. (M.C.K.)

  2. THE USE OF MULTIPLE DATA SOURCES IN THE PROCESS OF TOPOGRAPHIC MAPS UPDATING

    Directory of Open Access Journals (Sweden)

    A. Cantemir

    2016-06-01

    Full Text Available The methods used in the process of updating maps have evolved and become more complex, especially with the development of digital technology. At the same time, the development of technology has led to an abundance of available data that can be used in the updating process. The data sources come in a great variety of forms and formats from different acquisition sensors. Satellite images provided by certain satellite missions are now available on space agency portals. Images stored in the archives of satellite missions such as Sentinel, Landsat and others can be downloaded free of charge. The main advantages are the large coverage area and the rather good spatial resolution, which enable the use of these images for map updating at an appropriate scale. In our study we focused our research on these images for the 1:50,000 scale map. Globally available DEMs could represent an appropriate input for watershed delineation and stream network generation, which can be used as support for updating the hydrography thematic layer. If, in addition to remote sensing, aerial photogrammetry and LiDAR data are used, the accuracy of the data sources is enhanced. Orthophotoimages and Digital Terrain Models are the main products that can be used for feature extraction and updating. On the other hand, the use of georeferenced analogue basemaps represents a significant addition to the process. Concerning thematic maps, the classic representation of the terrain by contour lines derived from a DTM remains the best method of portraying the surface of the earth on a map; nevertheless, correlation with other layers, such as hydrography, is mandatory. In the context of the current national coverage of the Digital Terrain Model, one of the main concerns of the National Center of Cartography, through the Cartography and Photogrammetry Department, is the exploitation of the available data in order to update the layers of the Topographic Reference Map 1:5000, known as

  3. Large-region acoustic source mapping using a movable array and sparse covariance fitting.

    Science.gov (United States)

    Zhao, Shengkui; Tuna, Cagdas; Nguyen, Thi Ngoc Tho; Jones, Douglas L

    2017-01-01

    Large-region acoustic source mapping is important for city-scale noise monitoring. Approaches using a single-position measurement scheme to scan large regions using small arrays cannot provide clean acoustic source maps, while deploying large arrays spanning the entire region of interest is prohibitively expensive. A multiple-position measurement scheme is applied to scan large regions at multiple spatial positions using a movable array of small size. Based on the multiple-position measurement scheme, a sparse-constrained multiple-position vectorized covariance matrix fitting approach is presented. In the proposed approach, the overall sample covariance matrix of the incoherent virtual array is first estimated using the multiple-position array data and then vectorized using the Khatri-Rao (KR) product. A linear model is then constructed for fitting the vectorized covariance matrix and a sparse-constrained reconstruction algorithm is proposed for recovering source powers from the model. The user parameter settings are discussed. The proposed approach is tested on a 30 m × 40 m region and a 60 m × 40 m region using simulated and measured data. Much cleaner acoustic source maps and lower sound pressure level errors are obtained compared to the beamforming approaches and the previous sparse approach [Zhao, Tuna, Nguyen, and Jones, Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP) (2016)].
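
    The vectorized covariance fitting described above can be sketched on a toy array: each grid point k contributes a Khatri-Rao-structured dictionary column vec(a_k a_k^H), and nonnegative source powers p are recovered from vec(R). In the sketch below, nonnegative least squares stands in for the paper's sparse-constrained solver, and the array geometry and grid are made-up examples.

```python
import numpy as np
from scipy.optimize import nnls

M, K = 8, 30                                        # sensors, grid points
pos = np.arange(M)[:, None]                         # half-wavelength line array
angles = np.linspace(-60, 60, K) * np.pi / 180.0
A = np.exp(1j * np.pi * pos * np.sin(angles))       # M x K steering matrix

p_true = np.zeros(K)
p_true[[8, 21]] = [1.0, 0.5]                        # two sources on the grid
R = (A * p_true) @ A.conj().T + 0.01 * np.eye(M)    # covariance + noise floor

# dictionary column k holds the entries of a_k a_k^H, matching vec(R)
cols = np.einsum('mk,nk->mnk', A, A.conj()).reshape(M * M, K)
b = R.reshape(M * M)
D = np.vstack([cols.real, cols.imag])               # fit real and imag parts jointly
p_hat, _ = nnls(D, np.concatenate([b.real, b.imag]))
print(np.round(p_hat, 2))                           # peaks near indices 8 and 21
```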

  4. Deep 3 GHz number counts from a P(D) fluctuation analysis

    Science.gov (United States)

    Vernstrom, T.; Scott, Douglas; Wall, J. V.; Condon, J. J.; Cotton, W. D.; Fomalont, E. B.; Kellermann, K. I.; Miller, N.; Perley, R. A.

    2014-05-01

    Radio source counts constrain galaxy populations and evolution, as well as the global star formation history. However, there is considerable disagreement among the published 1.4-GHz source counts below 100 μJy. Here, we present a statistical method for estimating the μJy and even sub-μJy source count using new deep wide-band 3-GHz data in the Lockman Hole from the Karl G. Jansky Very Large Array. We analysed the confusion amplitude distribution P(D), which provides a fresh approach in the form of a more robust model, with a comprehensive error analysis. We tested this method on a large-scale simulation, incorporating clustering and finite source sizes. We discuss in detail our statistical methods for fitting using Markov chain Monte Carlo, handling correlations, and systematic errors from the use of wide-band radio interferometric data. We demonstrated that the source count can be constrained down to 50 nJy, a factor of 20 below the rms confusion. We found the differential source count near 10 μJy to have a slope of -1.7, decreasing to about -1.4 at fainter flux densities. At 3 GHz, the rms confusion in an 8-arcsec full width at half-maximum beam is ~1.2 μJy beam^-1, and the radio background temperature is ~14 mK. Our counts are broadly consistent with published evolutionary models. With these results, we were also able to constrain the peak of the Euclidean normalized differential source count of any possible new radio populations that would contribute to the cosmic radio background down to 50 nJy.
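
    The forward direction of a P(D) analysis, from an assumed source count to the distribution of map deflections, can be simulated in a few lines, as below: draw fluxes from a power-law differential count, lay the sources on a pixel grid, smooth with a Gaussian beam, and histogram the pixels. All numbers (slope, flux limits, beam size, source density) are illustrative, not the survey's parameters, and the fitting machinery of the paper is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(6)
npix, fwhm_pix = 1024, 4.0
gamma, s_min, s_max = 1.7, 1e-3, 1.0          # slope and flux range (arbitrary units)
n_src = 200_000

# inverse-transform sampling of fluxes from n(S) ~ S**-gamma on [s_min, s_max]
a = 1.0 - gamma
u = rng.uniform(size=n_src)
S = (s_min**a + u * (s_max**a - s_min**a)) ** (1.0 / a)

sky = np.zeros((npix, npix))
xy = rng.integers(0, npix, size=(n_src, 2))
np.add.at(sky, (xy[:, 0], xy[:, 1]), S)       # co-add sources sharing a pixel
D = gaussian_filter(sky, sigma=fwhm_pix / 2.355)   # beam smoothing
pd, edges = np.histogram(D, bins=200)         # empirical P(D) of the map
print("mean deflection per pixel:", D.mean())
```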

  5. Finger Counting Habits in Middle Eastern and Western Individuals: An Online Survey

    NARCIS (Netherlands)

    Lindemann, O.; Alipour, A.; Fischer, M.H.

    2011-01-01

    The current study documents the presence of cultural differences in the development of finger counting strategies. About 900 Middle Eastern (i.e., Iranian) and Western (i.e., European and American) individuals reported in an online survey how they map numbers onto their fingers when counting from 1

  6. Parametric normalization for full-energy peak efficiency of HPGe γ-ray spectrometers at different counting positions for bulky sources.

    Science.gov (United States)

    Peng, Nie; Bang-Fa, Ni; Wei-Zhi, Tian

    2013-02-01

    Application of the effective interaction depth (EID) principle for parametric normalization of full-energy peak efficiencies at different counting positions, originally developed for quasi-point sources, has been extended to bulky sources (within ∅30 mm × 40 mm) with arbitrary matrices. It is shown that the EID function for a quasi-point source can be used directly for cylindrical bulky sources (within ∅30 mm × 40 mm), with the geometric centre acting as the effective point source, for low atomic number (Z), low density (D) media and high-energy γ-rays. In general, however, the EID for bulky sources depends on the Z and D of the medium and on the energy of the γ-rays in question. In addition, the EID principle was verified theoretically by MCNP calculations.
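
    At its simplest, the EID principle reduces to an inverse-square law measured from an effective depth inside the crystal. A hedged sketch with placeholder numbers (real EID values are energy-dependent and, for bulky sources, Z/D-dependent, as the record states):

```python
# eff(d) ~ (d + d_eff)^-2 for source-to-cap distance d, where d_eff is the
# effective interaction depth; normalizing between two counting positions
# therefore needs only the ratio of squared effective distances.

def normalize_efficiency(eff_ref, d_ref, d_new, d_eff):
    """Transfer a peak efficiency measured at d_ref to a new distance d_new."""
    return eff_ref * ((d_ref + d_eff) / (d_new + d_eff)) ** 2

eff_10cm = 1.2e-3                      # efficiency measured at 10 cm (placeholder)
print(normalize_efficiency(eff_10cm, 10.0, 25.0, d_eff=3.5))
```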

  7. Gating circuit for single photon-counting fluorescence lifetime instruments using high repetition pulsed light sources

    International Nuclear Information System (INIS)

    Laws, W.R.; Potter, D.W.; Sutherland, J.C.

    1984-01-01

    We have constructed a circuit that permits conventional timing electronics to be used in single photon-counting fluorimeters with high repetition rate excitation sources (synchrotrons and mode-locked lasers). Most commercial time-to-amplitude and time-to-digital converters introduce errors when processing very short time intervals and when subjected to high-frequency signals. This circuit reduces the frequency of signals representing the pulsed light source (stops) to the rate of detected fluorescence events (starts). Precise timing between the start/stop pair is accomplished by using the second stop pulse after a start pulse. Important features of our design are that the circuit is insensitive to the simultaneous occurrence of start and stop signals and that the reduction in the stop frequency allows the start/stop time interval to be placed in linear regions of the response functions of commercial timing electronics

  8. An Offline-Online Android Application for Hazard Event Mapping Using WebGIS Open Source Technologies

    Science.gov (United States)

    Olyazadeh, Roya; Jaboyedoff, Michel; Sudmeier-Rieux, Karen; Derron, Marc-Henri; Devkota, Sanjaya

    2016-04-01

    Nowadays, Free and Open Source Software (FOSS) plays an important role in better understanding and managing disaster risk reduction around the world. National and local government, NGOs and other stakeholders are increasingly seeking and producing data on hazards. Most of the hazard event inventories and land use mapping are based on remote sensing data, with little ground truthing, creating difficulties depending on the terrain and accessibility. Open Source WebGIS tools offer an opportunity for quicker and easier ground truthing of critical areas in order to analyse hazard patterns and triggering factors. This study presents a secure mobile-map application for hazard event mapping using Open Source WebGIS technologies such as Postgres database, Postgis, Leaflet, Cordova and Phonegap. The objectives of this prototype are: 1. An Offline-Online android mobile application with advanced Geospatial visualisation; 2. Easy Collection and storage of events information applied services; 3. Centralized data storage with accessibility by all the service (smartphone, standard web browser); 4. Improving data management by using active participation in hazard event mapping and storage. This application has been implemented as a low-cost, rapid and participatory method for recording impacts from hazard events and includes geolocation (GPS data and Internet), visualizing maps with overlay of satellite images, viewing uploaded images and events as cluster points, drawing and adding event information. The data can be recorded in offline (Android device) or online version (all browsers) and consequently uploaded through the server whenever internet is available. All the events and records can be visualized by an administrator and made public after approval. Different user levels can be defined to access the data for communicating the information. This application was tested for landslides in post-earthquake Nepal but can be used for any other type of hazards such as flood, avalanche

  9. High-resolution SMA imaging of bright submillimetre sources from the SCUBA-2 Cosmology Legacy Survey

    Science.gov (United States)

    Hill, Ryley; Chapman, Scott C.; Scott, Douglas; Petitpas, Glen; Smail, Ian; Chapin, Edward L.; Gurwell, Mark A.; Perry, Ryan; Blain, Andrew W.; Bremer, Malcolm N.; Chen, Chian-Chou; Dunlop, James S.; Farrah, Duncan; Fazio, Giovanni G.; Geach, James E.; Howson, Paul; Ivison, R. J.; Lacaille, Kevin; Michałowski, Michał J.; Simpson, James M.; Swinbank, A. M.; van der Werf, Paul P.; Wilner, David J.

    2018-06-01

    We have used the Submillimeter Array (SMA) at 860 μm to observe the brightest sources in the Submillimeter Common User Bolometer Array-2 (SCUBA-2) Cosmology Legacy Survey (S2CLS). The goal of this survey is to exploit the large field of the S2CLS along with the resolution and sensitivity of the SMA to construct a large sample of these rare sources and to study their statistical properties. We have targeted 70 of the brightest single-dish SCUBA-2 850 μm sources down to S_{850} ≈ 8 mJy, achieving an average synthesized beam of 2.4 arcsec and an average rms of σ_{860} = 1.5 mJy beam^{-1} in our primary-beam-corrected maps. We searched our SMA maps for 4σ peaks, corresponding to S_{860} ≳ 6 mJy sources, and detected 62 galaxies, including three pairs. We include in our study 35 archival observations, bringing our sample size to 105 bright single-dish submillimetre sources with interferometric follow-up. We compute the cumulative and differential number counts, finding them to overlap with previous single-dish survey number counts within the uncertainties, although our cumulative number count is systematically lower than the parent S2CLS cumulative number count by 14 ± 6 per cent between 11 and 15 mJy. We estimate the probability that a ≳10 mJy single-dish submillimetre source resolves into two or more galaxies with similar flux densities to be less than 15 per cent. Assuming the remaining 85 per cent of the targets are ultraluminous starburst galaxies between z = 2 and 3, we find a likely volume density of ≳400 M⊙ yr^{-1} sources to be ˜3^{+0.7}_{-0.6} × 10^{-7} Mpc^{-3}. We show that the descendants of these galaxies could be ≳4 × 10^{11} M⊙ local quiescent galaxies, and that about 10 per cent of their total stellar mass would have formed during these short bursts of star formation.
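
    For reference, cumulative number counts of the kind quoted above are computed directly from a flux-density catalogue and a survey area; the fluxes and area below are invented for illustration.

```python
import numpy as np

fluxes_mjy = np.array([14.2, 12.8, 11.5, 10.9, 9.7, 9.1, 8.6, 8.3])  # invented
area_deg2 = 2.7                                 # assumed effective survey area

for s_lim in np.arange(8.0, 15.0, 1.0):
    n = (fluxes_mjy > s_lim).sum()
    # Poisson error on the raw count, scaled by the survey area
    print(f"N(>{s_lim:4.1f} mJy) = {n/area_deg2:5.2f} "
          f"± {np.sqrt(n)/area_deg2:4.2f} deg^-2")
```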

  10. Automated detection of extended sources in radio maps: progress from the SCORPIO survey

    Science.gov (United States)

    Riggi, S.; Ingallinera, A.; Leto, P.; Cavallaro, F.; Bufano, F.; Schillirò, F.; Trigilio, C.; Umana, G.; Buemi, C. S.; Norris, R. P.

    2016-08-01

    Automated source extraction and parametrization represents a crucial challenge for the next-generation radio interferometer surveys, such as those performed with the Square Kilometre Array (SKA) and its precursors. In this paper, we present a new algorithm, called CAESAR (Compact And Extended Source Automated Recognition), to detect and parametrize extended sources in radio interferometric maps. It is based on a pre-filtering stage, allowing image denoising, compact source suppression and enhancement of diffuse emission, followed by an adaptive superpixel clustering stage for final source segmentation. A parametrization stage provides source flux information and a wide range of morphology estimators for post-processing analysis. We developed CAESAR in a modular software library, also including different methods for local background estimation and image filtering, along with alternative algorithms for both compact and diffuse source extraction. The method was applied to real radio continuum data collected at the Australian Telescope Compact Array (ATCA) within the SCORPIO project, a pathfinder of the Evolutionary Map of the Universe (EMU) survey at the Australian Square Kilometre Array Pathfinder (ASKAP). The source reconstruction capabilities were studied over different test fields in the presence of compact sources, imaging artefacts and diffuse emission from the Galactic plane and compared with existing algorithms. When compared to a human-driven analysis, the designed algorithm was found capable of detecting known target sources and regions of diffuse emission, outperforming alternative approaches over the considered fields.
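
    A toy version of the pre-filter + superpixel idea (not the actual CAESAR implementation) can be assembled from standard scipy/scikit-image pieces: median-filter to suppress compact sources, cluster the smoothed map into superpixels, then keep the superpixels whose mean brightness exceeds a threshold. All parameters below are arbitrary.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import slic

img = np.random.normal(0.0, 1.0, (256, 256))
img[100:140, 80:160] += 3.0                      # injected diffuse patch

smooth = ndimage.median_filter(img, size=15)     # compact-source suppression
norm = (smooth - smooth.min()) / np.ptp(smooth)  # scale to [0, 1] for slic
labels = slic(norm, n_segments=200, compactness=0.1,
              channel_axis=None, start_label=1)
means = ndimage.mean(smooth, labels=labels,
                     index=np.arange(1, labels.max() + 1))
mask = np.isin(labels, np.flatnonzero(means > 1.0) + 1)
print("pixels flagged as diffuse emission:", int(mask.sum()))
```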

  11. Counting statistics and loss corrections for the APS

    International Nuclear Information System (INIS)

    Lee, W.K.; Mills, D.M.

    1992-01-01

    It has been suggested that for timing experiments, it might be advantageous to arrange the bunches in the storage ring in an asymmetrical mode. In this paper, we determine the counting losses from pulsed x-ray sources from basic probabilistic arguments and from Poisson statistics. In particular the impact on single photon counting losses of a variety of possible filling modes for the Advanced Photon Source (APS) is examined. For bunches of equal current, a loss of 10% occurs whenever the count rate exceeds 21% of the bunch repetition rate. This changes slightly when bunches containing unequal numbers of particles are considered. The results are applied to several common detector/electronics systems
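
    The quoted 10%-loss threshold follows from elementary Poisson statistics: if at most one photon can be counted per bunch, the detected fraction of a mean μ photons per bunch is (1 − e^{−μ})/μ. A quick numerical check:

```python
import numpy as np
from scipy.optimize import brentq

# Loss fraction when at most one photon is counted per bunch and the number
# of photons per bunch is Poisson-distributed with mean mu.
def loss(mu):
    return 1.0 - (1.0 - np.exp(-mu)) / mu

mu_10pct = brentq(lambda m: loss(m) - 0.10, 1e-6, 2.0)
print(f"10% loss at mu = {mu_10pct:.2f} photons/bunch")  # ~0.21, as stated above
```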

  13. 2013 Kids Count in Colorado! Community Matters

    Science.gov (United States)

    Colorado Children's Campaign, 2013

    2013-01-01

    "Kids Count in Colorado!" is an annual publication of the Children's Campaign, providing state and county level data on child well-being factors including child health, education, and economic status. Since its first release 20 years ago, "Kids Count in Colorado!" has become the most trusted source for data and information on…

  14. Scintillation counting assembly for atmospheric sampling

    International Nuclear Information System (INIS)

    Appriou, D.; Doury, A.

    1962-01-01

    The author reports the development of a scintillation-based counting assembly with the following characteristics: a photomultiplier with a wide photocathode; a thin plastic scintillator for counting beta plus alpha (with the possibility of mounting an alpha scintillator); an intrinsic background that is relatively low with respect to the activities to be counted; and a weakly varying efficiency. The authors discuss the counting objective, present equipment tests (counter, proportional amplifier and pre-amplifier, input drawer), describe the operation of the apparatus, discuss the selection of scintillators, report the study of the intrinsic background (electron-induced background, total background, background reduction), and discuss the counts (influence of the external source, sensitivity to alpha radiation, counting homogeneity, minimum detectable activity) and efficiencies

  15. Open-Source Automated Mapping Four-Point Probe

    Directory of Open Access Journals (Sweden)

    Handy Chandra

    2017-01-01

    Full Text Available Scientists have begun using self-replicating rapid prototyper (RepRap) 3-D printers to manufacture open source digital designs of scientific equipment. This approach is refined here to develop a novel instrument capable of performing automated large-area four-point probe measurements. The designs for conversion of a RepRap 3-D printer to a 2-D open source four-point probe (OS4PP) measurement device are detailed for the mechanical and electrical systems. Free and open source software and firmware are developed to operate the tool. The OS4PP was validated against a wide range of discrete resistors and indium tin oxide (ITO) samples of different thicknesses both pre- and post-annealing. The OS4PP was then compared to two commercial proprietary systems. Results for resistors from 10 Ω to 1 MΩ show errors of less than 1% for the OS4PP. The 3-D mapping of sheet resistance of ITO samples successfully demonstrated the automated capability to measure non-uniformities in large-area samples. The results indicate that all measured values are within the same order of magnitude when compared to two proprietary measurement systems. In conclusion, the OS4PP system, which costs less than 70% of manual proprietary systems, is comparable electrically while offering automated 100 micron positional accuracy for measuring sheet resistance over larger areas.
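
    For context, the measurement itself rests on the standard thin-film four-point probe relation; a minimal sketch with invented readings:

```python
import math

# For a sample much larger than the probe spacing and much thinner than it,
# sheet resistance R_s = (pi / ln 2) * (V / I), and resistivity rho = R_s * t.
def sheet_resistance(v_volts, i_amps):
    return (math.pi / math.log(2.0)) * v_volts / i_amps

v, i, t = 2.3e-3, 1.0e-3, 150e-9      # illustrative readings; t = film thickness (m)
rs = sheet_resistance(v, i)            # ohms per square
print(f"R_s = {rs:.1f} ohm/sq, rho = {rs*t:.2e} ohm*m")
```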

  17. Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.

    Science.gov (United States)

    Böhning, Dankmar; Kuhnert, Ronny

    2006-12-01

    This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
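
    A small worked example of the zero-truncated setting discussed above, using the zero-truncated Poisson pmf and the Horvitz-Thompson population-size estimate mentioned in the abstract (λ treated as known for simplicity; all values are illustrative):

```python
import numpy as np
from scipy.special import gammaln

def zt_poisson_pmf(k, lam):
    """P(X = k | X > 0) for X ~ Poisson(lam), defined for k >= 1."""
    k = np.asarray(k, dtype=float)
    return np.exp(-lam + k * np.log(lam) - gammaln(k + 1)) / (1.0 - np.exp(-lam))

lam, n_observed = 0.8, 120            # illustrative capture rate and sample size
print("pmf at k=1..3:", zt_poisson_pmf([1, 2, 3], lam).round(3))
p0 = np.exp(-lam)                     # probability of never being captured
print("Horvitz-Thompson N_hat =", round(n_observed / (1.0 - p0), 1))
```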

  18. An open-source java platform for automated reaction mapping.

    Science.gov (United States)

    Crabtree, John D; Mehta, Dinesh P; Kouri, Tina M

    2010-09-27

    This article presents software applications that have been built upon a modular, open-source, reaction mapping library that can be used in both cheminformatics and bioinformatics research. We first describe the theoretical underpinnings and modular architecture of the core software library. We then describe two applications that have been built upon that core. The first is a generic reaction viewer and mapper, and the second classifies reactions according to rules that can be modified by end users with little or no programming skills.

  19. The Atacama Cosmology Telescope: Extragalactic Sources at 148 GHz in the 2008 Survey

    Science.gov (United States)

    Marriage, T. A.; Juin, J. B.; Lin, Y. T.; Marsden, D.; Nolta, M. R.; Partridge, B.; Ade, P. A. R.; Aguirre, P.; Amiri, M.; Appel, J. W.; hide

    2011-01-01

    We report on extragalactic sources detected in a 455 square-degree map of the southern sky made with data at a frequency of 148 GHz from the Atacama Cosmology Telescope 2008 observing season. We provide a catalog of 157 sources with flux densities spanning two orders of magnitude: from 15 mJy to 1500 mJy. Comparison to other catalogs shows that 98% of the ACT detections correspond to sources detected at lower radio frequencies. Three of the sources appear to be associated with the brightest cluster galaxies of low-redshift X-ray selected galaxy clusters. Estimates of the radio to mm-wave spectral indices and differential counts of the sources further bolster the hypothesis that they are nearly all radio sources, and that their emission is not dominated by re-emission from warm dust. In a bright (>50 mJy) 148 GHz-selected sample with complete cross-identifications from the Australia Telescope 20 GHz survey, we observe an average steepening of the spectra between 5, 20, and 148 GHz with median spectral indices of α_{5-20} = -0.07 ± 0.06, α_{20-148} = -0.39 ± 0.04, and α_{5-148} = -0.20 ± 0.03. When the measured spectral indices are taken into account, the 148 GHz differential source counts are consistent with previous measurements at 30 GHz in the context of a source count model dominated by radio sources. Extrapolating with an appropriately rescaled model for the radio source counts, the Poisson contribution to the spatial power spectrum from synchrotron-dominated sources with flux density less than 20 mJy is C^{Sync} = (2.8 ± 0.3) × 10^{-6} μK^2.
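
    The spectral indices quoted above follow the usual S ∝ ν^α convention; the flux densities in this sketch are invented but chosen to reproduce the quoted median α_{20-148}:

```python
import numpy as np

# Spectral index alpha in the S ~ nu^alpha convention, between two frequencies.
def spectral_index(s1, s2, nu1, nu2):
    return np.log(s2 / s1) / np.log(nu2 / nu1)

s_20ghz, s_148ghz = 120.0, 55.0       # mJy, illustrative
print(f"alpha(20-148) = {spectral_index(s_20ghz, s_148ghz, 20.0, 148.0):.2f}")
# -> -0.39, matching the median value quoted in the abstract
```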

  20. Rcount: simple and flexible RNA-Seq read counting.

    Science.gov (United States)

    Schmid, Marc W; Grossniklaus, Ueli

    2015-02-01

    Analysis of differential gene expression by RNA sequencing (RNA-Seq) is frequently done using feature counts, i.e. the number of reads mapping to a gene. However, commonly used count algorithms (e.g. HTSeq) do not address the problem of reads aligning with multiple locations in the genome (multireads) or reads aligning with positions where two or more genes overlap (ambiguous reads). Rcount specifically addresses these issues. Furthermore, Rcount allows the user to assign priorities to certain feature types (e.g. higher priority for protein-coding genes compared to rRNA-coding genes) or to add flanking regions. Rcount provides a fast and easy-to-use graphical user interface requiring no command line or programming skills. It is implemented in C++ using the SeqAn (www.seqan.de) and Qt (qt-project.org) libraries. Source code and 64-bit binaries for (Ubuntu) Linux, Windows (7) and MacOSX are released under the GPLv3 license and are freely available on github.com/MWSchmid/Rcount, together with test data, genome annotation files, useful Python and R scripts, and a step-by-step user guide (including run-time and memory usage tests). Contact: marcschmid@gmx.ch.

  1. Mapping thunder sources by inverting acoustic and electromagnetic observations

    Science.gov (United States)

    Anderson, J. F.; Johnson, J. B.; Arechiga, R. O.; Thomas, R. J.

    2014-12-01

    We present a new method of locating current flow in lightning strikes by inversion of thunder recordings constrained by Lightning Mapping Array observations. First, radio frequency (RF) pulses are connected to reconstruct conductive channels created by leaders. Then, acoustic signals that would be produced by current flow through each channel are forward modeled. The recorded thunder is considered to consist of a weighted superposition of these acoustic signals. We calculate the posterior distribution of acoustic source energy for each channel with a Markov Chain Monte Carlo inversion that fits power envelopes of modeled and recorded thunder; these results show which parts of the flash carry current and produce thunder. We examine the effects of RF pulse location imprecision and atmospheric winds on quality of results and apply this method to several lightning flashes over the Magdalena Mountains in New Mexico, USA. This method will enable more detailed study of lightning phenomena by allowing researchers to map current flow in addition to leader propagation.

  2. A procedure for merging land cover/use data from Landsat, aerial photography, and map sources - Compatibility, accuracy and cost

    Science.gov (United States)

    Enslin, W. R.; Tilmann, S. E.; Hill-Rowley, R.; Rogers, R. H.

    1977-01-01

    A method is developed to merge land cover/use data from Landsat, aerial photography and map sources into a grid-based geographic information system. The method basically involves computer-assisted categorization of Landsat data to provide certain user-specified land cover categories; manual interpretation of aerial photography to identify other selected land cover/use categories that cannot be obtained from Landsat data; identification of special features from aerial photography or map sources; merging of the interpreted data from all the sources into a computer compatible file under a standardized coding structure; and the production of land cover/use maps, thematic maps, and tabular data. The specific tasks accomplished in producing the merged land cover/use data file and subsequent output products are identified and discussed. It is shown that effective implementation of the merging method is critically dependent on selecting the 'best' data source for each user-specified category in terms of accuracy and time/cost tradeoffs.

  3. An avalanche counter and encoder system for counting and mapping radioactive specimens

    International Nuclear Information System (INIS)

    Britten, R.J.

    1988-01-01

    A parallel-plate counter utilizes avalanche event counting over a large area with the ability to locate radioactive sources in two dimensions. One novel embodiment comprises a gas-filled chamber formed by a stretched stainless steel window cathode spaced from a flat semiconductive anode surface, between which a high voltage is applied. When a beta ray, for example, enters the chamber, an ionization event occurs; the avalanche effect multiplies the event and results in charge collection on the anode surface for a limited period of time before the charge leaks away. An encoder system, comprising a symmetrical array of planar conductive surfaces separated from the anode by a dielectric material, couples charge currents whose amplitudes define the position of the ionization event. A number of preferred encoder system embodiments are disclosed, including a novel matrix or grid pattern of electrical paths connected to voltage dividers and charge-sensitive integrating amplifiers. The amplitudes of the currents delivered to the amplifiers define the location of the event, and the spatial resolution for a given signal-to-noise ratio can be controlled by changing the number of such amplifiers. (author) 11 figs
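
    Position decoding by charge division, the generic idea behind such an encoder (this is not the patent's specific network), can be illustrated with corner charges on a resistive plane:

```python
# With charge-sensitive amplifiers at the four corners of a resistive plane,
# the event position follows from the normalized charge ratios.
def decode_xy(q_a, q_b, q_c, q_d):
    """Corners: a=top-left, b=top-right, c=bottom-right, d=bottom-left."""
    total = q_a + q_b + q_c + q_d
    x = (q_b + q_c - q_a - q_d) / total   # -1 (left) .. +1 (right)
    y = (q_a + q_b - q_c - q_d) / total   # -1 (bottom) .. +1 (top)
    return x, y

print(decode_xy(0.30, 0.30, 0.20, 0.20))  # event above centre: (0.0, 0.2)
```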

  4. The Chandra Source Catalog: Source Properties and Data Products

    Science.gov (United States)

    Rots, Arnold; Evans, Ian N.; Glotfelty, Kenny J.; Primini, Francis A.; Zografou, Panagoula; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.

    2009-09-01

    The Chandra Source Catalog (CSC) is breaking new ground in several areas. There are two aspects that are of particular interest to the users: its evolution and its contents. The CSC will be a living catalog that becomes richer, bigger, and better in time while still remembering its state at each point in time. This means that users will be able to take full advantage of new additions to the catalog, while retaining the ability to back-track and return to what was extracted in the past. The CSC sheds the limitations of flat-table catalogs. Its sources will be characterized by a large number of properties, as usual, but each source will also be associated with its own specific data products, allowing users to perform mini custom analysis on the sources. Source properties fall in the spatial (position, extent), photometric (fluxes, count rates), spectral (hardness ratios, standard spectral fits), and temporal (variability probabilities) domains, and are all accompanied by error estimates. Data products cover the same coordinate space and include event lists, images, spectra, and light curves. In addition, the catalog contains data products covering complete observations: event lists, background images, exposure maps, etc. This work is supported by NASA contract NAS8-03060 (CXC).

  5. The Chandra Source Catalog: Algorithms

    Science.gov (United States)

    McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.

  6. Pulsed single-photon spectrometer by frequency-to-time mapping using chirped fiber Bragg gratings.

    Science.gov (United States)

    Davis, Alex O C; Saulnier, Paul M; Karpiński, Michał; Smith, Brian J

    2017-05-29

    A fiber-integrated spectrometer for single-photon pulses outside the telecommunications wavelength range based upon frequency-to-time mapping, implemented by chromatic group delay dispersion (GDD), and precise temporally-resolved single-photon counting, is presented. A chirped fiber Bragg grating provides low-loss GDD, mapping the frequency distribution of an input pulse onto the temporal envelope of the output pulse. Time-resolved detection with fast single-photon-counting modules enables monitoring of a wavelength range from 825 nm to 835 nm with nearly uniform efficiency at 55 pm resolution (24 GHz at 830 nm). To demonstrate the versatility of this technique, spectral interference of heralded single photons and the joint spectral intensity distribution of a photon-pair source are measured. This approach to single-photon-level spectral measurements provides a route to realize applications of time-frequency quantum optics at visible and near-infrared wavelengths, where multiple spectral channels must be simultaneously monitored.
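
    The quoted resolution figures can be checked with the standard dispersion relations; the grating's total dispersion below is an assumed illustrative value, not a number from the paper.

```python
# Frequency interval corresponding to 55 pm at 830 nm, and the arrival-time
# separation that an assumed total dispersion would impose on it.
c = 2.998e8                       # m/s

lam, dlam = 830e-9, 55e-12
dnu = c * dlam / lam**2           # Hz
print(f"55 pm at 830 nm = {dnu/1e9:.0f} GHz")   # ~24 GHz, as stated above

D_total = 1000e-12 / 1e-9         # assumed GDD of 1000 ps/nm, in s/m
print(f"time separation = {D_total * dlam * 1e12:.0f} ps")
```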

  7. Low Count Anomaly Detection at Large Standoff Distances

    Science.gov (United States)

    Pfund, David Michael; Jarman, Kenneth D.; Milbrath, Brian D.; Kiff, Scott D.; Sidor, Daniel E.

    2010-02-01

    Searching for hidden illicit sources of gamma radiation in an urban environment is difficult. Background radiation profiles are variable and cluttered with transient acquisitions from naturally occurring radioactive materials and medical isotopes. Potentially threatening sources likely will be nearly hidden in this noise and encountered at high standoff distances and low threat count rates. We discuss an anomaly detection algorithm that characterizes low count sources as threatening or non-threatening and operates well in the presence of high benign source variability. We discuss the algorithm parameters needed to reliably find sources both close to the detector and far away from it. These parameters include the cutoff frequencies of background tracking filters and the integration time of the spectrometer. This work is part of the development of the Standoff Radiation Imaging System (SORIS) as part of DNDO's Standoff Radiation Detection System Advanced Technology Demonstration (SORDS-ATD) program.

  8. Color quench correction for low level Cherenkov counting.

    Science.gov (United States)

    Tsroya, S; Pelled, O; German, U; Marco, R; Katorza, E; Alfassi, Z B

    2009-05-01

    The Cherenkov counting efficiency varies strongly with color quenching, thus correction curves must be used to obtain correct results. The external (152)Eu source of a Quantulus 1220 liquid scintillation counting (LSC) system was used to obtain a quench indicative parameter based on spectra area ratio. A color quench correction curve for aqueous samples containing (90)Sr/(90)Y was prepared. The main advantage of this method over the common spectra indicators is its usefulness also for low level Cherenkov counting.

  9. High rate 4π β-γ coincidence counting system

    International Nuclear Information System (INIS)

    Johnson, L.O.; Gehrke, R.J.

    1978-01-01

    A high-count-rate 4π β-γ coincidence counting system for the determination of absolute disintegration rates of short half-life radionuclides is described. With this system the dead time per pulse is minimized by not stretching any pulses beyond the width necessary to satisfy overlap coincidence requirements. The equations used to correct for the β, γ, and coincidence channel dead times and for accidental coincidences are presented but not rigorously developed. Experimental results are presented for a decaying source of ^{56}Mn initially at 2 × 10^6 d/s and a set of ^{60}Co sources of accurately known source strengths varying from 10^3 to 2 × 10^6 d/s. A check of the accidental coincidence equation for the case of two independent sources with varying source strengths is presented
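
    The zero-order relation behind 4π β-γ coincidence counting is N₀ = N_β N_γ / N_c; the paper's full equations add dead-time and accidental-coincidence corrections on top of it. A sketch with invented rates:

```python
# Ideal coincidence-counting relation: activity N0 = N_beta * N_gamma / N_c.
n_beta, n_gamma, n_coinc = 9.0e3, 4.5e3, 4.0e3   # observed rates, counts/s
n0 = n_beta * n_gamma / n_coinc
print(f"source activity ~ {n0:.3e} /s")

# First-order accidental-coincidence rate for resolving time tau_r.
tau_r = 1.0e-6                                   # assumed resolving time, s
print(f"accidental coincidence rate ~ {2*tau_r*n_beta*n_gamma:.1f} /s")
```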

  10. Evaluation of the influence of source and spatial resolution of DEMs on derivative products used in landslide mapping

    Directory of Open Access Journals (Sweden)

    Rubini Mahalingam

    2016-11-01

    Full Text Available Landslides are a major geohazard, which result in significant human, infrastructure, and economic losses. Landslide susceptibility mapping can help communities plan and prepare for these damaging events. Digital elevation models (DEMs) are one of the most important data-sets used in landslide hazard assessment. Despite their frequent use, limited research has been completed to date on how the DEM source and spatial resolution can influence the accuracy of the produced landslide susceptibility maps. The aim of this paper is to analyse the influence of the spatial resolution and source of DEMs on landslide susceptibility mapping. For this purpose, Advanced Spaceborne Thermal Emission and Reflection (ASTER), National Elevation Dataset (NED), and Light Detection and Ranging (LiDAR) DEMs were obtained for two study sections of approximately 140 km2 in north-west Oregon. Each DEM was resampled to 10, 30, and 50 m and slope and aspect grids were derived for each resolution. A set of nine spatial databases was constructed using geoinformation science (GIS) for each spatial resolution and source. Additional factors such as distance-to-river and fault maps were included. An analytical hierarchical process (AHP), a fuzzy logic model, and a likelihood ratio-AHP, representing qualitative, quantitative, and hybrid landslide mapping techniques, were used for generating landslide susceptibility maps. The results from each of the techniques were verified with Cohen's kappa index, a confusion matrix, and a validation index based on agreement with detailed landslide inventory maps. The 10 m spatial resolution, derived from the LiDAR data-set, showed higher predictive accuracy in all three techniques used for producing landslide susceptibility maps. At a resolution of 10 m, the output maps based on NED and ASTER had higher misclassification compared to the LiDAR-based outputs. Further, the 30-m LiDAR output showed improved results over the 10-m NED and 10-m

  11. Correction for intrinsic and set dead-time losses in radioactivity counting

    International Nuclear Information System (INIS)

    Wyllie, H.A.

    1992-12-01

    Equations are derived for the determination of the intrinsic dead time of the components which precede the paralysis unit in a counting system for measuring radioactivity. The determination depends on the extension of the set dead time by the intrinsic dead time. Improved formulae are given for the dead-time correction of the count rate of a radioactive source in a single-channel system. A variable in the formulae is the intrinsic dead time which is determined concurrently with the counting of the source. The only extra equipment required in a conventional system is a scaler. 5 refs., 2 tabs., 21 figs
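
    For background, the two standard single-channel dead-time relations that such corrections build on are sketched below; this is generic textbook material, not the paper's specific formulae.

```python
import numpy as np
from scipy.optimize import brentq

def true_rate_nonextending(m, tau):
    """Observed rate m corrected for a non-extending (set) dead time tau."""
    return m / (1.0 - m * tau)

def true_rate_extending(m, tau):
    """Invert m = n * exp(-n * tau) for an extending (intrinsic) dead time."""
    return brentq(lambda n: n * np.exp(-n * tau) - m, m, 1.0 / tau)

m, tau = 5.0e4, 2.0e-6     # observed counts/s and dead time (s), illustrative
print(true_rate_nonextending(m, tau), true_rate_extending(m, tau))
```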

  12. Manifold-Based Visual Object Counting.

    Science.gov (United States)

    Wang, Yi; Zou, Yuexian; Wang, Wenwu

    2018-07-01

    Visual object counting (VOC) is an emerging area in computer vision which aims to estimate the number of objects of interest in a given image or video. Recently, object density based estimation method is shown to be promising for object counting as well as rough instance localization. However, the performance of this method tends to degrade when dealing with new objects and scenes. To address this limitation, we propose a manifold-based method for visual object counting (M-VOC), based on the manifold assumption that similar image patches share similar object densities. Firstly, the local geometry of a given image patch is represented linearly by its neighbors using a predefined patch training set, and the object density of this given image patch is reconstructed by preserving the local geometry using locally linear embedding. To improve the characterization of local geometry, additional constraints such as sparsity and non-negativity are also considered via regularization, nonlinear mapping, and kernel trick. Compared with the state-of-the-art VOC methods, our proposed M-VOC methods achieve competitive performance on seven benchmark datasets. Experiments verify that the proposed M-VOC methods have several favorable properties, such as robustness to the variation in the size of training dataset and image resolution, as often encountered in real-world VOC applications.
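
    The locally linear reconstruction step described above amounts to solving a small constrained least-squares problem per patch; a toy sketch with synthetic features and density patches standing in for real training data:

```python
import numpy as np

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(500, 64))     # patch feature vectors
train_dens = rng.gamma(2.0, size=(500, 16))  # matching density patches
query = rng.normal(size=64)                  # feature vector of a new patch

k = 8
idx = np.argsort(np.linalg.norm(train_feats - query, axis=1))[:k]
N = train_feats[idx] - query                 # neighbours centred on the query
G = N @ N.T + 1e-6 * np.eye(k)               # local Gram matrix (regularized)
w = np.linalg.solve(G, np.ones(k))
w /= w.sum()                                 # sum-to-one (LLE) constraint

density_patch = w @ train_dens[idx]          # transfer weights to densities
print("estimated count in patch:", density_patch.sum())
```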

  13. Mapping landslide source and transport areas in VHR images with Object-Based Analysis and Support Vector Machines

    Science.gov (United States)

    Heleno, Sandra; Matias, Magda; Pina, Pedro

    2015-04-01

    Visual interpretation of satellite imagery remains extremely demanding in terms of resources and time, especially when dealing with numerous multi-scale landslides affecting wide areas, as is the case for rainfall-induced shallow landslides. Applying automated methods can contribute to more efficient landslide mapping and updating of existing inventories, and in recent years the number and variety of approaches has been increasing rapidly. Very High Resolution (VHR) images, acquired by space-borne sensors with sub-metric precision, such as Ikonos, Quickbird, Geoeye and Worldview, are increasingly being considered the best option for landslide mapping, but these new levels of spatial detail also present new challenges to state-of-the-art image analysis tools, calling for automated methods specifically suited to mapping landslide events on VHR optical images. In this work we develop and test a methodology for semi-automatic landslide recognition and mapping of landslide source and transport areas. The method combines object-based image analysis and a Support Vector Machine supervised learning algorithm, and was tested using a GeoEye-1 multispectral image, sensed 3 days after a damaging landslide event in Madeira Island, together with a pre-event LiDAR DEM. Our approach has proved successful in the recognition of landslides over a 15 km2-wide study area, with 81 out of 85 landslides detected in its validation regions. The classifier also showed reasonable performance (true positive rate above 60% and false positive rate below 36% in both validation regions) in the internal mapping of landslide source and transport areas, in particular on the sunnier east-facing slopes. In the less illuminated areas the classifier is still able to accurately map the source areas, but performs poorly in the mapping of landslide transport areas.

  14. Review of single particle dynamics for third generation light sources through frequency map analysis

    Directory of Open Access Journals (Sweden)

    L. Nadolski

    2003-11-01

    Full Text Available Frequency map analysis [J. Laskar, Icarus 88, 266 (1990)] is used here to analyze the transverse dynamics of four third-generation synchrotron light sources: the ALS, the ESRF, the SOLEIL project, and Super-ACO. Time variations of the betatron tunes give additional information on the global dynamics of the beam. The main resonances are revealed; a one-to-one correspondence between configuration space and frequency space can be established. We stress that the frequency maps, and therefore the dynamics optimization, are highly sensitive to sextupole strengths and vary considerably from one machine to another. The frequency maps can thus be used to characterize the different machines.
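
    A crude stand-in for the tune-drift diagnostic (Laskar's NAFF replaced by a plain windowed FFT, so the resolution is only 1/N) illustrates how a frequency-map diffusion index can be formed from turn-by-turn data:

```python
import numpy as np

def tune_bin(x):
    """Index of the strongest non-DC line in a Hann-windowed spectrum."""
    spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))
    return np.argmax(spec[1:]) + 1

turns, nu = 1024, 0.294                       # illustrative betatron tune
x = np.cos(2*np.pi*nu*np.arange(turns)) + 0.01*np.random.randn(turns)

half = turns // 2
nu1 = tune_bin(x[:half]) / half               # tune on first half
nu2 = tune_bin(x[half:]) / half               # tune on second half
diffusion = np.log10(max(abs(nu2 - nu1), 1e-16))
print(f"nu_1={nu1:.4f}  nu_2={nu2:.4f}  diffusion index={diffusion:.1f}")
```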

  15. Airborne system for mapping and tracking extended gamma ray sources

    International Nuclear Information System (INIS)

    Stuart, T.P.; Hendricks, T.J.; Wallace, G.G.; Cleland, J.R.

    1976-01-01

    An airborne system was developed for mapping and tracking extended sources of airborne or terrestrially distributed γ-ray emitters. The system records 300-channel γ-ray spectral data every three seconds on magnetic tape. Computer programs have been written to isolate the contribution from the particular radionuclide of interest. Aircraft position, as sensed by a microwave ranging system, is recorded every second on magnetic tape. Measurements of ^{41}Ar concentrations in airborne stack releases versus time or aircraft position agree well with computer code predictions

  16. A Method for Counting Moving People in Video Surveillance Videos

    Directory of Open Access Journals (Sweden)

    Mario Vento

    2010-01-01

    Full Text Available People counting is an important problem in video surveillance applications. This problem has been faced either by trying to detect people in the scene and then counting them, or by establishing a mapping between some scene feature and the number of people (avoiding the complex detection problem). This paper presents a novel method, following the second approach, that is based on the use of SURF features and of an ϵ-SVR regressor to provide an estimate of this count. The algorithm takes specifically into account problems due to partial occlusions and to perspective. In the experimental evaluation, the proposed method has been compared with the algorithm by Albiol et al., winner of the PETS 2009 contest on people counting, using the same PETS 2009 database. The provided results confirm that the proposed method yields improved accuracy, while retaining the robustness of Albiol's algorithm.
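
    The feature-to-count regression at the heart of this approach can be sketched with scikit-learn's ε-SVR, with synthetic data standing in for real SURF keypoint statistics:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n_people = rng.integers(0, 40, size=200)                 # ground-truth counts
feat = 12.0 * n_people + rng.normal(0, 15, size=200)     # fake SURF feature

model = SVR(kernel='rbf', C=100.0, epsilon=2.0)
model.fit(feat.reshape(-1, 1), n_people)
print("predicted count:", model.predict([[250.0]])[0])   # ~20 people
```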

  19. Alpha-particle autoradiography by solid state track detectors to spatial distribution of radioactivity in alpha-counting source

    International Nuclear Information System (INIS)

    Ishigure, Nobuhito; Nakano, Takashi; Enomoto, Hiroko; Koizumi, Akira; Miyamoto, Katsuhiro

    1989-01-01

    A technique of autoradiography using solid state track detectors is described by which spatial distribution of radioactivity in an alpha-counting source can easily be visualized. As solid state track detectors, polymer of allyl diglycol carbonate was used. The advantage of the present technique was proved that alpha-emitters can be handled in the light place alone through the whole course of autoradiography, otherwise in the conventional autoradiography the alpha-emitters, which requires special carefulness from the point of radiation protection, must be handled in the dark place with difficulty. This technique was applied to rough examination of self-absorption of the plutonium source prepared by the following different methods; the source (A) was prepared by drying at room temperature, (B) by drying under an infrared lamp, (C) by drying in ammonia atmosphere after redissolving by the addition of a drop of distilled water which followed complete evaporation under an infrared lamp and (D) by drying under an infrared lamp after adding a drop of diluted neutral detergent. The difference in the spatial distributions of radioactivity could clearly be observed on the autoradiographs. For example, the source (C) showed the most diffuse distribution, which suggested that the self-absorption of this source was the smallest. The present autoradiographic observation was in accordance with the result of the alpha-spectrometry with a silicon surface-barrier detector. (author)

  20. SU-E-T-375: Evaluation of a MapCHECK2(tm) Planar 2-D Diode Array for High-Dose-Rate Brachytherapy Treatment Delivery Verifications

    Energy Technology Data Exchange (ETDEWEB)

    Macey, N; Siebert, M; Shvydka, D; Parsai, E [University of Toledo Medical Center, Toledo, OH (United States)

    2015-06-15

    Purpose: Despite improvements in HDR brachytherapy delivery systems, verification of source position is still typically based on the length of wire reeled out relative to the parked position. Yet, the majority of errors leading to medical events in HDR treatments continue to be classified as missed targets or wrong treatment sites. We investigate the feasibility of using dose maps acquired with a two-dimensional diode array to independently verify the source locations, dwell times, and dose during an HDR treatment. Methods: Custom correction factors were integrated into frame-by-frame raw counts recorded for a Varian VariSource™ HDR afterloader Ir-192 source located at various distances, in air and in solid water, from a MapCHECK2™ diode array. The resultant corrected counts were analyzed to determine the dwell position locations and the doses delivered. The local maxima of polynomial equations fitted to the extracted dwell dose profiles provided the X and Y coordinates, while the distance to the source was determined from evaluation of the full width at half maximum (FWHM). To verify the approach, the experiment was repeated with the source moved through dwell positions at various distances along an inclined plane, mimicking a vaginal cylinder treatment. Results: Dose map analysis provided the coordinates of the source and the dose delivered at each dwell position. The source dwell positions were determined to within ±1.0 mm of the preset values, and doses to within ±3% of those calculated by the BrachyVision™ treatment planning system for all measured distances. Conclusion: Frame-by-frame data furnished by a 2-D diode array can be used to verify the dwell positions and doses delivered by the HDR source over the course of treatment. Our studies have verified that measurements provided by the MapCHECK2™ can be used as a routine QA tool for HDR treatment delivery verification.
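
    The profile-width step can be illustrated by fitting a Gaussian to a (synthetic) dose profile and extracting its FWHM, which a calibration curve would then map to source-to-detector distance:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, x0, sigma):
    return a * np.exp(-(x - x0)**2 / (2 * sigma**2))

x = np.linspace(-30, 30, 61)                     # diode positions, mm
y = gauss(x, 100.0, 1.5, 8.0) + np.random.normal(0, 1.0, x.size)  # fake profile

popt, _ = curve_fit(gauss, x, y, p0=[y.max(), 0.0, 5.0])
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])
print(f"peak at x = {popt[1]:.1f} mm, FWHM = {fwhm:.1f} mm")
```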

  1. Accuracy in activation analysis: count rate effects

    International Nuclear Information System (INIS)

    Lindstrom, R.M.; Fleming, R.F.

    1980-01-01

    The accuracy inherent in activation analysis is ultimately limited by the uncertainty of counting statistics. When careful attention is paid to detail, several workers have shown that all systematic errors can be reduced to an insignificant fraction of the total uncertainty, even when the statistical limit is well below one percent. A matter of particular importance is the reduction of errors due to high counting rate. The loss of counts due to random coincidence (pulse pileup) in the amplifier and to digitization time in the ADC may be treated as a series combination of extending and non-extending dead times, respectively. The two effects are experimentally distinct. Live timer circuits in commercial multi-channel analyzers compensate properly for ADC dead time for long-lived sources, but not for pileup. Several satisfactory solutions are available, including pileup rejection and dead time correction circuits, loss-free ADCs, and computed corrections in a calibrated system. These methods are sufficiently reliable and well understood that a decaying source can be measured routinely with acceptably small errors at a dead time as high as 20 percent

  2. Dose-rate mapping and search of radioactive sources in Estonia

    International Nuclear Information System (INIS)

    Ylaetalo, S.; Karvonen, J.; Ilander, T.; Honkamaa, T.; Toivonen, H.

    1996-12-01

    The Estonian Ministry of Environment and the Finnish Centre for Radiation and Nuclear Safety (STUK) agreed in 1995 on a radiation mapping project in Estonia. The country was searched to find potential man-made radioactive sources. Another goal of the project was to produce a background dose-rate map over the whole country. The measurements provided an excellent opportunity to test new in-field measuring systems that are useful in a nuclear disaster. The basic idea was to monitor road sides, cities, domestic waste storage places and former military or rocket bases from a moving vehicle by measuring gamma spectrum and dose rate. The measurements were carried out using vehicle-installed systems consisting of a pressurised ionisation chamber (PIC) in 1995 and a combination of a scintillation spectrometer (NaI(Tl)) and Geiger-Müller counter (GM) in 1996. All systems utilised GPS satellite navigation signals to relate the measured dose rates and gamma spectra to the current geographical location. The data were recorded for further computer analysis. The dose rate usually varied between 0.03-0.17 μSv/h in the whole country, excluding a few nuclear material storage places (in Saku and in Sillamae). Enhanced dose rates of natural origin (0.17-0.5 μSv/h) were measured near granite statues, buildings and bridges. No radioactive sources were found on road sides or in towns or villages. (orig.) (14 refs.)

  3. Quantitative Compton suppression spectrometry at elevated counting rates

    International Nuclear Information System (INIS)

    Westphal, G.P.; Joestl, K.; Schroeder, P.; Lauster, R.; Hausch, E.

    1999-01-01

    For quantitative Compton suppression spectrometry the decrease of coincidence efficiency with counting rate should be made negligible, to avoid a virtual increase of the relative peak areas of coincident isomeric transitions with counting rate. To that aim, a separate amplifier and discriminator has been used for each of the eight segments of the active shield of a new well-type Compton suppression spectrometer, together with an optimized, minimum dead-time design of the anticoincidence logic circuitry. Chance coincidence losses in the Compton suppression spectrometer are corrected instrumentally by comparing the chance coincidence rate to the counting rate of the germanium detector in a pulse-counting Busy circuit (G.P. Westphal, J. Rad. Chem. 179 (1994) 55) which is combined with the spectrometer's LFC counting loss correction system. The normally unobservable chance coincidence rate is reconstructed from the rates of the germanium detector and the scintillation detector in an auxiliary coincidence unit, after destruction of the true coincidences by delaying one of the coincidence partners. Quantitative system response has been tested in two-source measurements with a fixed reference source of ^{60}Co at 14 kc/s, and various samples of ^{137}Cs, up to aggregate counting rates of 180 kc/s for the well-type detector, and more than 1400 kc/s for the BGO shield. In these measurements, the net peak areas of the 1173.3 keV line of ^{60}Co remained constant at typical values of 37 000 with and 95 000 without Compton suppression, with maximum deviations from the average of less than 1.5%

  4. Mapping correlation of a simulated dark matter source and a point source in the gamma-ray sky - Oral Presentation

    Energy Technology Data Exchange (ETDEWEB)

    Gibson, Alexander [SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2015-08-23

    In my research, I analyzed how two gamma-ray source models interact with one another when being optimized to fit data. This matters because it becomes hard to distinguish between the two point sources when they are close together or when looking at low-energy photons. The reason for the first is obvious; the reason the sources become harder to distinguish at lower photon energies is that the resolving power of the Fermi Gamma-ray Space Telescope worsens at lower energies. When the two point sources are highly correlated (hard to distinguish), we need to change our method of statistical analysis. I showed that highly correlated sources have larger uncertainties associated with them, caused by the optimizer not knowing which point source's parameters to adjust. I also mapped out where there is high correlation for two theoretical dark matter point sources of different masses, so that those analyzing them in the future know where more sophisticated statistical analysis is required.

  5. TRAM (Transcriptome Mapper: database-driven creation and analysis of transcriptome maps from multiple sources

    Directory of Open Access Journals (Sweden)

    Danieli Gian

    2011-02-01

    Full Text Available Background: Several tools have been developed to perform global gene expression profile data analysis, to search for specific chromosomal regions whose features meet defined criteria, and to study neighbouring gene expression. However, most of these tools are tailored for a specific use in a particular context (e.g. they are species-specific or limited to a particular data format) and they typically accept only gene lists as input. Results: TRAM (Transcriptome Mapper) is a new general tool that allows the simple generation and analysis of quantitative transcriptome maps, starting from any source listing gene expression values for a given gene set (e.g. expression microarrays), implemented as a relational database. It includes a parser able to assign univocal and updated gene symbols to gene identifiers from different data sources. Moreover, TRAM is able to perform intra-sample and inter-sample data normalization, including an original variant of quantile normalization (scaled quantile), useful for normalizing data from platforms with highly different numbers of investigated genes. When in 'Map' mode, the software generates a quantitative representation of the transcriptome of a sample (or of a pool of samples) and identifies whether segments of defined lengths are over/under-expressed compared to the desired threshold. When in 'Cluster' mode, the software searches for a set of over/under-expressed consecutive genes. Statistical significance for all results is calculated with respect to genes localized on the same chromosome or to all genome genes. Transcriptome maps showing differential expression between two sample groups, relative to two different biological conditions, may be easily generated. We present the results of a biological model test, based on a meta-analysis comparison between a sample pool of human CD34+ hematopoietic progenitor cells and a sample pool of megakaryocytic cells. Biologically relevant chromosomal segments and gene
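
    Plain quantile normalization, of which TRAM's 'scaled quantile' is a variant (the additional rescaling for platforms with different gene counts is not reproduced here), takes only a few lines:

```python
import numpy as np

# Rows are genes, columns are samples; each column is forced onto the
# across-sample mean distribution (ties handled crudely by rank order).
expr = np.array([[5.0, 2.0, 3.0],
                 [2.0, 1.0, 4.0],
                 [3.0, 4.0, 6.0],
                 [4.0, 2.0, 8.0]])

order = np.argsort(expr, axis=0)
ranks = np.argsort(order, axis=0)                  # rank of each value per column
mean_sorted = np.sort(expr, axis=0).mean(axis=1)   # reference distribution
normalized = mean_sorted[ranks]
print(normalized)
```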

  6. Extragalactic sources in Cosmic Microwave Background maps

    Energy Technology Data Exchange (ETDEWEB)

    Zotti, G. De; Castex, G. [SISSA, via Bonomea 265, 34136 Trieste (Italy); González-Nuevo, J. [Departamento de Física, Universidad de Oviedo, C. Calvo Sotelo s/n, 33007 Oviedo (Spain); Lopez-Caniego, M. [European Space Agency, ESAC, Planck Science Office, Camino bajo del Castillo, s/n, Urbanización Villafranca del Castillo, Villanueva de la Cañada, Madrid (Spain); Negrello, M.; Clemens, M. [INAF-Osservatorio Astronomico di Padova, vicolo dell' Osservatorio 5, I-35122 Padova (Italy); Cai, Z.-Y. [CAS Key Laboratory for Research in Galaxies and Cosmology, Department of Astronomy, University of Science and Technology of China, Hefei, Anhui 230026 (China); Delabrouille, J. [APC, 10, rue Alice Domon et Léonie Duquet, 75205 Paris Cedex 13 (France); Herranz, D.; Bonavera, L. [Instituto de Física de Cantabria (CSIC-UC), avda. los Castros s/n, 39005 Santander (Spain); Melin, J.-B. [DSM/Irfu/SPP, CEA-Saclay, F-91191 Gif-sur-Yvette Cedex (France); Tucci, M. [Département de Physique Théorique and Center for Astroparticle Physics, Université de Genève, 24 quai Ansermet, CH-1211 Genève 4 (Switzerland); Serjeant, S. [Department of Physical Sciences, The Open University, Walton Hall, Milton Keynes MK7 6AA (United Kingdom); Bilicki, M. [Astrophysics, Cosmology and Gravity Centre, Department of Astronomy, University of Cape Town, Private Bag X3, Rondebosch (South Africa); Andreani, P., E-mail: gianfranco.dezotti@oapd.inaf.it, E-mail: gcastex@sissa.it, E-mail: gnuevo@uniovi.es, E-mail: marcos.lopez.caniego@sciops.esa.int [European Southern Observatory, Karl-Schwarzschild-Straße 2, D-85748, Garching (Germany); and others

    2015-06-01

    We discuss the potential of a next generation space-borne CMB experiment for studies of extragalactic sources with reference to COrE+, a project submitted to ESA in response to the call for a Medium-size mission (M4). We consider three possible options for the telescope size: 1 m, 1.5 m and 2 m (although the last option is probably impractical, given the M4 boundary conditions). The proposed instrument will be far more sensitive than Planck and will have a diffraction-limited angular resolution. These properties imply that even the 1 m telescope option will perform substantially better than Planck for studies of extragalactic sources. The source detection limits as a function of frequency have been estimated by means of realistic simulations taking into account all the relevant foregrounds. Predictions for the various classes of extragalactic sources are based on up-to-date models. The most significant improvements over Planck results are presented for each option. COrE+ will provide much larger samples of truly local star-forming galaxies (by about a factor of 8 for the 1 m telescope, of 17 for 1.5 m, of 30 for 2 m), making possible analyses of the properties of galaxies (luminosity functions, dust mass functions, star formation rate functions, dust temperature distributions, etc.) across the Hubble sequence. Even more interestingly, COrE+ will detect, at |b| > 30°, thousands of strongly gravitationally lensed galaxies (about 2,000, 6,000 and 13,000 for the 1 m, 1.5 m and 2 m options, respectively). Such large samples are of extraordinary astrophysical and cosmological value in many fields. Moreover, COrE+ high frequency maps will be optimally suited to pick up proto-clusters of dusty galaxies, i.e. to investigate the evolution of large scale structure at larger redshifts than can be reached by other means. Thanks to its high sensitivity COrE+ will also yield a spectacular advance in the blind detection of extragalactic sources in polarization: we expect that

  7. Extragalactic sources in Cosmic Microwave Background maps

    Science.gov (United States)

    De Zotti, G.; Castex, G.; González-Nuevo, J.; Lopez-Caniego, M.; Negrello, M.; Cai, Z.-Y.; Clemens, M.; Delabrouille, J.; Herranz, D.; Bonavera, L.; Melin, J.-B.; Tucci, M.; Serjeant, S.; Bilicki, M.; Andreani, P.; Clements, D. L.; Toffolatti, L.; Roukema, B. F.

    2015-06-01

    We discuss the potential of a next generation space-borne CMB experiment for studies of extragalactic sources with reference to COrE+, a project submitted to ESA in response to the call for a Medium-size mission (M4). We consider three possible options for the telescope size: 1 m, 1.5 m and 2 m (although the last option is probably impractical, given the M4 boundary conditions). The proposed instrument will be far more sensitive than Planck and will have a diffraction-limited angular resolution. These properties imply that even the 1 m telescope option will perform substantially better than Planck for studies of extragalactic sources. The source detection limits as a function of frequency have been estimated by means of realistic simulations taking into account all the relevant foregrounds. Predictions for the various classes of extragalactic sources are based on up-to-date models. The most significant improvements over Planck results are presented for each option. COrE+ will provide much larger samples of truly local star-forming galaxies (by about a factor of 8 for the 1 m telescope, of 17 for 1.5 m, of 30 for 2 m), making possible analyses of the properties of galaxies (luminosity functions, dust mass functions, star formation rate functions, dust temperature distributions, etc.) across the Hubble sequence. Even more interestingly, COrE+ will detect, at |b| > 30°, thousands of strongly gravitationally lensed galaxies (about 2,000, 6,000 and 13,000 for the 1 m, 1.5 m and 2 m options, respectively). Such large samples are of extraordinary astrophysical and cosmological value in many fields. Moreover, COrE+ high frequency maps will be optimally suited to pick up proto-clusters of dusty galaxies, i.e. to investigate the evolution of large scale structure at larger redshifts than can be reached by other means. Thanks to its high sensitivity COrE+ will also yield a spectacular advance in the blind detection of extragalactic sources in polarization: we expect that it
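
    The quoted dependence of performance on telescope size follows from the diffraction limit: the beam width scales as lambda/D. A rough estimate of the resolution of the three proposed apertures, using the Rayleigh criterion at an assumed observing frequency of 220 GHz (the exact COrE+ bands and illumination pattern are not taken from the paper):

      import numpy as np

      c = 2.998e8                       # speed of light, m/s
      lam = c / 220e9                   # wavelength at an assumed 220 GHz channel
      for d in (1.0, 1.5, 2.0):         # proposed telescope diameters, m
          theta = 1.22 * lam / d        # Rayleigh criterion, radians
          print(f"D = {d:.1f} m: {np.degrees(theta) * 60:.1f} arcmin")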

  8. Mapping Mixed Methods Research: Methods, Measures, and Meaning

    Science.gov (United States)

    Wheeldon, J.

    2010-01-01

    This article explores how concept maps and mind maps can be used as data collection tools in mixed methods research to combine the clarity of quantitative counts with the nuance of qualitative reflections. Based on more traditional mixed methods approaches, this article details how pre/post concept maps can be used to design qualitative…

  9. Pulse-duration discrimination for increasing counting characteristic plateau and for improving counting rate stability of a scintillation counter

    International Nuclear Information System (INIS)

    Kuz'min, M.G.

    1977-01-01

    For greater stability of scintillation counter operation, the possibility of increasing the plateau and reducing its slope is discussed. Presented is the circuit for discrimination of the signal pulses from input pulses of a photomultiplier. The counting characteristics have been measured with the scintillation detectors irradiated by different gamma sources ( 60 Co, 137 Cs, 241 Am) and without a source, when the scintillation detector is shielded by a tungsten cylinder with a wall thickness of 23 mm. The comparison has revealed that discrimination in duration increases the plateau and reduces its slope. Proceeding from a comparison of the noise characteristics, a relationship is found between the number of noise pulses and the gamma radiation energy. For better stability of the counting rate it is suggested to introduce into the scintillation counter the circuit for duration discrimination of the output pulses of the photomultiplier.

  10. Motorcycle detection and counting using stereo camera, IR camera, and microphone array

    Science.gov (United States)

    Ling, Bo; Gibson, David R. P.; Middleton, Dan

    2013-03-01

    Detection, classification, and characterization are the key to enhancing motorcycle safety, motorcycle operations and motorcycle travel estimation. Average motorcycle fatalities per Vehicle Mile Traveled (VMT) are currently estimated at 30 times those of auto fatalities. Although it has been an active research area for many years, motorcycle detection still remains a challenging task. Working with FHWA, we have developed a hybrid motorcycle detection and counting system using a suite of sensors including a stereo camera, a thermal IR camera and a unidirectional microphone array. The thermal IR camera can capture the unique thermal signatures associated with the motorcycle's exhaust pipes, which often show up as bright elongated blobs in IR images. The stereo camera in the system is used to detect the motorcyclist, who can be easily windowed out in the stereo disparity map. If the motorcyclist is detected through his or her 3D body recognition, a motorcycle is detected. Microphones are used to detect motorcycles, which often produce low-frequency acoustic signals. All three microphones in the microphone array are placed in strategic locations on the sensor platform to minimize the interference of background noise from sources such as rain and wind. Field test results show that this hybrid motorcycle detection and counting system has excellent performance.

  11. Coincidence and noncoincidence counting (81Rb and 43K): a comparative study

    International Nuclear Information System (INIS)

    Ikeda, S.; Duken, H.; Tillmanns, H.; Bing, R.J.

    1975-01-01

    The accuracy of imaging and resolution obtained with 81 Rb and 43 K using coincidence and noncoincidence counting was compared. Phantoms and isolated infarcted dog hearts were used. The results clearly show the superiority of coincidence counting with a resolution of 0.5 cm. Noncoincidence counting failed to reveal even sizable defects in the radioactive source. (U.S.)

  12. Verifying mapping, monitoring and modeling of fine sediment pollution sources in West Maui, Hawai'i, USA

    Science.gov (United States)

    Cerovski-Darriau, C.; Stock, J. D.

    2017-12-01

    Coral reef ecosystems, and the fishing and tourism industries they support, depend on clean waters. Fine sediment pollution from nearshore watersheds threatens these enterprises in West Maui, Hawai'i. To effectively mitigate sediment pollution, we first have to know where the sediment is coming from and how fast it erodes. In West Maui, we know that nearshore sediment plumes originate from erosion of fine sand- to silt-sized air fall deposits where they are exposed by grazing, agriculture, or other disturbances. We identified and located these sediment sources by mapping watershed geomorphological processes using field traverses, historic air photos, and modern orthophotos. We estimated bank lowering rates using erosion pins, and other surface erosion rates were extrapolated from data collected elsewhere on the Hawaiian Islands. These measurements and mapping led to a reconnaissance sediment budget which showed that annual loads are dominated by bank erosion of legacy terraces. Field observations during small storms confirm that nearshore sediment plumes are sourced from bank erosion of in-stream, legacy agricultural deposits. To further verify this sediment budget, we used geochemical fingerprinting to uniquely identify each potential source (e.g. stream banks, agricultural fields, roads, other human-modified soils, and hillslopes) from the Wahikuli watershed (10 km²) and analyzed the fine fraction using ICP-MS for elemental geochemistry. We propose to apply the fingerprinting results to nearshore suspended sediment samples taken during storms to identify the proportion of sediment coming from each source. By combining traditional geomorphic mapping, monitoring and geochemistry, we hope to provide a powerful tool to verify the primary source of sediment reaching the nearshore.

  13. Optimal Matched Filter in the Low-number Count Poisson Noise Regime and Implications for X-Ray Source Detection

    Science.gov (United States)

    Ofek, Eran O.; Zackay, Barak

    2018-04-01

    Detection of templates (e.g., sources) embedded in low-number count Poisson noise is a common problem in astrophysics. Examples include source detection in X-ray images, γ-rays, UV, neutrinos, and search for clusters of galaxies and stellar streams. However, the solutions in the X-ray-related literature are sub-optimal in some cases by considerable factors. Using the lemma of Neyman–Pearson, we derive the optimal statistics for template detection in the presence of Poisson noise. We demonstrate that, for known template shape (e.g., point sources), this method provides higher completeness, for a fixed false-alarm probability value, compared with filtering the image with the point-spread function (PSF). In turn, we find that filtering by the PSF is better than filtering the image using the Mexican-hat wavelet (used by wavdetect). For some background levels, our method improves the sensitivity of source detection by more than a factor of two over the popular Mexican-hat wavelet filtering. This filtering technique can also be used for fast PSF photometry and flare detection; it is efficient and straightforward to implement. We provide an implementation in MATLAB. The development of a complete code that works on real data, including the complexities of background subtraction and PSF variations, is deferred for future publication.
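
    The optimal statistic the abstract refers to reduces, for a known template P on a locally flat background B, to cross-correlating the counts image with ln(1 + P/B) rather than with the PSF itself. Below is a Python transcription of that core idea (the published implementation is in MATLAB); the Gaussian PSF, background level, and image are stand-ins, and complexities such as background subtraction and PSF variation are omitted.

      import numpy as np
      from scipy.signal import fftconvolve

      def poisson_match_filter(counts, psf, background):
          """Score map: cross-correlation of the counts with ln(1 + P/B)."""
          kernel = np.log1p(psf / background)
          return fftconvolve(counts, kernel[::-1, ::-1], mode="same")

      y, x = np.mgrid[-7:8, -7:8]
      psf = np.exp(-(x**2 + y**2) / (2 * 1.5**2))
      psf /= psf.sum()                                   # normalized PSF template
      background = 0.1                                   # counts per pixel (assumed)
      image = np.random.poisson(background, (128, 128)).astype(float)
      score = poisson_match_filter(image, psf, background)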

  14. YouGenMap: a web platform for dynamic multi-comparative mapping and visualization of genetic maps

    Science.gov (United States)

    Keith Batesole; Kokulapalan Wimalanathan; Lin Liu; Fan Zhang; Craig S. Echt; Chun Liang

    2014-01-01

    Comparative genetic maps are used in examination of genome organization, detection of conserved gene order, and exploration of marker order variations. YouGenMap is an open-source web tool that offers dynamic comparative mapping capability of users' own genetic mapping between 2 or more map sets. Users' genetic map data and optional gene annotations are...

  15. Model-Based Analysis and Optimization of the Mapping of Cortical Sources in the Spontaneous Scalp EEG

    Directory of Open Access Journals (Sweden)

    Andrei V. Sazonov

    2007-01-01

    The mapping of brain sources into the scalp electroencephalogram (EEG) depends on the volume conduction properties of the head and on an electrode montage involving a reference. Mathematically, this source mapping (SM) is fully determined by an observation function (OF) matrix. This paper analyses the OF-matrix for a generation model for the desynchronized spontaneous EEG. The model involves a four-shell spherical volume conductor containing dipolar sources that are mutually uncorrelated so as to reflect the desynchronized EEG. The reference is optimized in order to minimize the impact on the SM of the sources located distant from the electrodes. The resulting reference is called the localized reference (LR). The OF-matrix is analyzed in terms of the relative power contribution of the sources and the cross-channel correlation coefficient for five existing references as well as for the LR. It is found that the Hjorth Laplacian reference is a fair approximation of the LR, and thus is close to optimum for practical intents and purposes. The other references have a significantly poorer performance. Furthermore, the OF-matrix is analyzed for limits to the spatial resolution of the EEG. These are estimated to be around 2 cm.
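
    Re-referencing of the kind compared above is a linear map applied to the channel vector, v_ref = M v. The sketch below builds the matrix for the familiar common average reference as a concrete example; the paper's localized reference corresponds to a different, optimized choice of M that is not reproduced here, and the channel count is an assumption.

      import numpy as np

      n = 32                                        # number of scalp electrodes (assumed)
      M_car = np.eye(n) - np.ones((n, n)) / n       # common-average-reference matrix
      v = np.random.default_rng(0).normal(size=n)   # one EEG sample across channels
      v_ref = M_car @ v
      print(abs(v_ref.mean()) < 1e-12)              # re-referenced channels average to zero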

  16. A hard X-ray scanning microprobe for fluorescence imaging and microdiffraction at the Advanced Photon Source

    International Nuclear Information System (INIS)

    Cai, L.; Lai, B.; Yun, W.; Ilinski, P.; Legnini, D.; Maser, J.; Rodrigues, W.

    1999-01-01

    A hard x-ray scanning microprobe based on zone plate optics and undulator radiation, in the energy region from 6 to 20 keV, has reached a focal spot size (FWHM) of 0.15 microm (v) x 0.6 microm (h) and a photon flux of 4 x 10^9 photons/sec/0.01%BW. Using a slit 44 meters upstream to create a virtual source, a circular beam spot of 0.15 microm in diameter can be obtained with a photon flux one order of magnitude lower. During fluorescence mapping of trace elements in a single human ovarian cell, the microprobe exhibited an imaging sensitivity for Pt (Lα line) of 80 attograms/microm^2 at a count rate of 10 counts per second. The x-ray microprobe has been used to map crystallographic strain and multiquantum-well thickness in micro-optoelectronic devices produced with the selective area growth technique.

  17. Identifying fecal pollution sources using 3M™ Petrifilm™ count plates and antibiotic resistance analysis in the Horse Creek Watershed in Aiken County, SC (USA).

    Science.gov (United States)

    Harmon, S Michele; West, Ryan T; Yates, James R

    2014-12-01

    Sources of fecal coliform pollution in a small South Carolina (USA) watershed were identified using inexpensive methods and commonly available equipment. Samples from the upper reaches of the watershed were analyzed with 3M™ Petrifilm™ count plates. We were able to narrow down the study's focus to one particular tributary, Sand River, that was the major contributor of the coliform pollution (both fecal and total) to a downstream reservoir that is heavily used for recreation purposes. Concentrations of total coliforms ranged from 2,400 to 120,333 cfu/100 mL, with sharp increases in coliform counts observed in samples taken after rain events. Positive correlations between turbidity and fecal coliform counts suggested a relationship between fecal pollution and stormwater runoff. Antibiotic resistance analysis (ARA) compared antibiotic resistance profiles of fecal coliform isolates from the stream to those of a watershed-specific fecal source library (equine, waterfowl, canines, and untreated sewage). Known fecal source isolates and unknown isolates from the stream were exposed to six antibiotics at three concentrations each. Discriminant analysis grouped known isolates with an overall average rate of correct classification (ARCC) of 84.3 %. A total of 401 isolates from the first stream location were classified as equine (45.9 %), sewage (39.4 %), waterfowl (6.2 %), and feline (8.5 %). A similar pattern was observed at the second sampling location, with 42.6 % equine, 45.2 % sewage, 2.8 % waterfowl, 0.6 % canine, and 8.8 % feline. While there were slight weather-dependent differences, the vast majority of the coliform pollution in this stream appeared to be from two sources, equine and sewage. This information will contribute to better land use decisions and further justify implementation of low-impact development practices within this urban watershed.
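
    The discriminant-analysis step described above can be prototyped with scikit-learn. In the sketch below the resistance profiles are random placeholders (the real study scored isolates against six antibiotics at three concentrations, i.e. 18 features), and the cross-validated mean accuracy plays the role of the average rate of correct classification (ARCC).

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.random((200, 18))              # placeholder antibiotic-resistance profiles
      y = rng.choice(["equine", "sewage", "waterfowl", "canine"], size=200)

      lda = LinearDiscriminantAnalysis()
      arcc = cross_val_score(lda, X, y, cv=5).mean()
      print(f"cross-validated ARCC: {arcc:.2f}")
      unknown = rng.random((5, 18))          # profiles of isolates from the stream
      print(lda.fit(X, y).predict(unknown))  # assign each isolate to a source class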

  18. 10C survey of radio sources at 15.7 GHz - II. First results

    Science.gov (United States)

    AMI Consortium; Davies, Matthew L.; Franzen, Thomas M. O.; Waldram, Elizabeth M.; Grainge, Keith J. B.; Hobson, Michael P.; Hurley-Walker, Natasha; Lasenby, Anthony; Olamaie, Malak; Pooley, Guy G.; Riley, Julia M.; Rodríguez-Gonzálvez, Carmen; Saunders, Richard D. E.; Scaife, Anna M. M.; Schammel, Michel P.; Scott, Paul F.; Shimwell, Timothy W.; Titterington, David J.; Zwart, Jonathan T. L.

    2011-08-01

    In a previous paper (Paper I), the observational, mapping and source-extraction techniques used for the Tenth Cambridge (10C) Survey of Radio Sources were described. Here, the first results from the survey, carried out using the Arcminute Microkelvin Imager Large Array (LA) at an observing frequency of 15.7 GHz, are presented. The survey fields cover an area of ≈27 deg² to a flux-density completeness of 1 mJy. Results for some deeper areas, covering ≈12 deg², wholly contained within the total areas and complete to 0.5 mJy, are also presented. The completeness for both areas is estimated to be at least 93 per cent. The 10C survey is the deepest radio survey of any significant extent (≳0.2 deg²) above 1.4 GHz. The 10C source catalogue contains 1897 entries and is available online. The source catalogue has been combined with that of the Ninth Cambridge Survey to calculate the 15.7-GHz source counts. A broken power law is found to provide a good parametrization of the differential count between 0.5 mJy and 1 Jy. The measured source count has been compared with that predicted by de Zotti et al. - the model is found to display good agreement with the data at the highest flux densities. However, over the entire flux-density range of the measured count (0.5 mJy to 1 Jy), the model is found to underpredict the integrated count by ≈30 per cent. Entries from the source catalogue have been matched with those contained in the catalogues of the NRAO VLA Sky Survey and the Faint Images of the Radio Sky at Twenty-cm survey (both of which have observing frequencies of 1.4 GHz). This matching provides evidence for a shift in the typical 1.4-to-15.7-GHz spectral index of the 15.7-GHz-selected source population with decreasing flux density towards sub-mJy levels - the spectra tend to become less steep. Automated methods for detecting extended sources, developed in Paper I, have been applied to the data; ≈5 per cent of the sources are found to be extended.
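
    A broken power law parametrization of a differential count like the one fitted here can be written down and integrated in a few lines; the break flux density and the two slopes below are illustrative placeholders rather than the 10C best-fit values.

      import numpy as np
      from scipy.integrate import quad

      def dnds(s, k=1.0, s_break=0.02, slope_lo=-2.1, slope_hi=-2.6):
          """Differential count dN/dS (per Jy per sr) as a broken power law."""
          slope = slope_lo if s < s_break else slope_hi
          return k * (s / s_break) ** slope

      # Integrated count between 0.5 mJy and 1 Jy (model units):
      n_total, _ = quad(dnds, 0.5e-3, 1.0, points=[0.02])
      print(n_total)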

  19. NeuroMap: A spline-based interactive open-source software for spatiotemporal mapping of 2D and 3D MEA data

    Directory of Open Access Journals (Sweden)

    Oussama eAbdoun

    2011-01-01

    A major characteristic of neural networks is the complexity of their organization at various spatial scales, from microscopic local circuits to macroscopic brain-scale areas. Understanding how neural information is processed thus entails the ability to study them at multiple scales simultaneously. This is made possible using microelectrodes array (MEA) technology. Indeed, high-density MEAs provide large-scale coverage (several mm²) of whole neural structures combined with microscopic resolution (about 50 µm) of unit activity. Yet, current options for spatiotemporal representation of MEA-collected data remain limited. Here we present NeuroMap, a new interactive Matlab-based software for spatiotemporal mapping of MEA data. NeuroMap uses thin plate spline interpolation, which provides several assets with respect to conventional mapping methods used currently. First, any MEA design can be considered, including 2D or 3D, regular or irregular, arrangements of electrodes. Second, spline interpolation allows the estimation of activity across the tissue with local extrema not necessarily at recording sites. Finally, this interpolation approach provides a straightforward analytical estimation of the spatial Laplacian for better current sources localization. In this software, coregistration of 2D MEA data on the anatomy of the neural tissue is made possible by fine matching of anatomical data with electrode positions using rigid-deformation-based correction of anatomical pictures. Overall, NeuroMap provides substantial material for detailed spatiotemporal analysis of MEA data. The package is distributed under GNU General Public License (GPL) and available at http://sites.google.com/site/neuromapsoftware.

  20. NeuroMap: A Spline-Based Interactive Open-Source Software for Spatiotemporal Mapping of 2D and 3D MEA Data.

    Science.gov (United States)

    Abdoun, Oussama; Joucla, Sébastien; Mazzocco, Claire; Yvert, Blaise

    2011-01-01

    A major characteristic of neural networks is the complexity of their organization at various spatial scales, from microscopic local circuits to macroscopic brain-scale areas. Understanding how neural information is processed thus entails the ability to study them at multiple scales simultaneously. This is made possible using microelectrodes array (MEA) technology. Indeed, high-density MEAs provide large-scale coverage (several square millimeters) of whole neural structures combined with microscopic resolution (about 50 μm) of unit activity. Yet, current options for spatiotemporal representation of MEA-collected data remain limited. Here we present NeuroMap, a new interactive Matlab-based software for spatiotemporal mapping of MEA data. NeuroMap uses thin plate spline interpolation, which provides several assets with respect to conventional mapping methods used currently. First, any MEA design can be considered, including 2D or 3D, regular or irregular, arrangements of electrodes. Second, spline interpolation allows the estimation of activity across the tissue with local extrema not necessarily at recording sites. Finally, this interpolation approach provides a straightforward analytical estimation of the spatial Laplacian for better current sources localization. In this software, coregistration of 2D MEA data on the anatomy of the neural tissue is made possible by fine matching of anatomical data with electrode positions using rigid-deformation-based correction of anatomical pictures. Overall, NeuroMap provides substantial material for detailed spatiotemporal analysis of MEA data. The package is distributed under GNU General Public License and available at http://sites.google.com/site/neuromapsoftware.
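
    Thin plate spline interpolation of scattered electrode data, and a numerical Laplacian of the resulting map, can be reproduced with scipy. This is only a sketch of the general technique: the electrode layout and values are synthetic, and NeuroMap itself computes the Laplacian analytically from the spline coefficients rather than by finite differences.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(1)
      pos = rng.uniform(0, 4, (60, 2))                 # electrode x, y positions (mm)
      val = np.sin(pos[:, 0]) * np.cos(pos[:, 1])      # recorded amplitudes (synthetic)

      tps = RBFInterpolator(pos, val, kernel="thin_plate_spline", smoothing=0.0)
      gx, gy = np.meshgrid(np.linspace(0, 4, 80), np.linspace(0, 4, 80))
      grid = tps(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

      # Finite-difference spatial Laplacian (current-source-density style estimate):
      laplacian = (np.gradient(np.gradient(grid, axis=1), axis=1)
                   + np.gradient(np.gradient(grid, axis=0), axis=0))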

  1. MAP-Based Underdetermined Blind Source Separation of Convolutive Mixtures by Hierarchical Clustering and ℓ1-Norm Minimization

    Directory of Open Access Journals (Sweden)

    Kellermann Walter

    2007-01-01

    We address the problem of underdetermined BSS. While most previous approaches are designed for instantaneous mixtures, we propose a time-frequency-domain algorithm for convolutive mixtures. We adopt a two-step method based on a general maximum a posteriori (MAP) approach. In the first step, we estimate the mixing matrix based on hierarchical clustering, assuming that the source signals are sufficiently sparse. The algorithm works directly on the complex-valued data in the time-frequency domain and shows better convergence than algorithms based on self-organizing maps. The assumption of Laplacian priors for the source signals in the second step leads to an algorithm for estimating the source signals. It involves the ℓ1-norm minimization of complex numbers because of the use of the time-frequency-domain approach. We compare a combinatorial approach initially designed for real numbers with a second-order cone programming (SOCP) approach designed for complex numbers. We found that although the former approach is not theoretically justified for complex numbers, its results are comparable to, or even better than, the SOCP solution. The advantage is a lower computational cost for problems with low input/output dimensions.
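
    The first step of such a two-step method, estimating mixing-matrix columns by clustering normalized time-frequency mixture vectors, can be sketched as follows. This is a generic illustration under a sparseness assumption, not the authors' exact algorithm; the phase alignment to the first microphone and the average-linkage choice are assumptions.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      def estimate_mixing(X, n_src):
          """X: (n_mics, n_frames) complex STFT coefficients at one frequency bin."""
          v = X * np.exp(-1j * np.angle(X[0]))       # remove phase ambiguity (mic 0)
          v = v / np.linalg.norm(v, axis=0, keepdims=True)
          feats = np.concatenate([v.real, v.imag]).T # real-valued feature vectors
          labels = fcluster(linkage(feats, method="average"),
                            n_src, criterion="maxclust")
          cols = [feats[labels == k].mean(axis=0) for k in range(1, n_src + 1)]
          return np.array(cols).T                    # estimated mixing-matrix columns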

  2. Calibration of the Accuscan II In Vivo System for I-131 Thyroid Counting

    Energy Technology Data Exchange (ETDEWEB)

    Orval R. Perry; David L. Georgeson

    2011-07-01

    This report describes the March 2011 calibration of the Accuscan II HpGe In Vivo system for I-131 thyroid counting. The source used for the calibration was an Analytics mixed gamma source 82834-121, distributed in an epoxy matrix in a Wheaton Liquid Scintillation Vial, with energies from 88.0 keV to 1836.1 keV. The center of the detectors was positioned 64 feet from the vault floor. This position places the approximate center line of the detectors at the center line of the source in the thyroid tube. The calibration was performed using an RMC II phantom (Appendix J). Validation testing was performed using a Ba-133 source and an ANSI N44.3 phantom (Appendix I). This report includes an overview introduction and records for the energy/FWHM and efficiency calibrations, including verification counting. The Accuscan II system was successfully calibrated for counting the thyroid for I-131 and verified in accordance with ANSI/HPS N13.30-1996 criteria.

  3. Gas source localization and gas distribution mapping with a micro-drone

    International Nuclear Information System (INIS)

    Neumann, Patrick P.

    2013-01-01

    The objective of this Ph.D. thesis is the development and validation of a VTOL-based (Vertical Take Off and Landing) micro-drone for the measurement of gas concentrations, to locate gas emission sources, and to build gas distribution maps. Gas distribution mapping and localization of a static gas source are complex tasks due to the turbulent nature of gas transport under natural conditions, and become even more challenging when airborne. This is especially so when using a VTOL-based micro-drone that induces disturbances through its rotors, which heavily affect gas distribution. Besides the adaptation of a micro-drone for gas concentration measurements, a novel method for the determination of the wind vector in real-time is presented. The on-board sensors for the flight control of the micro-drone provide a basis for the wind vector calculation. Furthermore, robot operating software for controlling the micro-drone autonomously is developed and used to validate the algorithms developed within this Ph.D. thesis in simulations and real-world experiments. Three biologically inspired algorithms for locating gas sources are adapted and developed for use with the micro-drone: the surge-cast algorithm (a variant of the silkworm moth algorithm), the zigzag / dung beetle algorithm, and a newly developed algorithm called 'pseudo gradient algorithm'. The latter extracts from two spatially separated measuring positions the information necessary (concentration gradient and mean wind direction) to follow a gas plume to its emission source. The performance of the algorithms is evaluated in simulations and real-world experiments. The distance overhead and the gas source localization success rate are used as the main performance criteria for comparing the algorithms. Next, a new method for gas source localization (GSL) based on a particle filter (PF) is presented. Each particle represents a weighted hypothesis of the gas source position. As a first step, the PF-based GSL algorithm

  4. Gas source localization and gas distribution mapping with a micro-drone

    Energy Technology Data Exchange (ETDEWEB)

    Neumann, Patrick P.

    2013-07-01

    The objective of this Ph.D. thesis is the development and validation of a VTOL-based (Vertical Take Off and Landing) micro-drone for the measurement of gas concentrations, to locate gas emission sources, and to build gas distribution maps. Gas distribution mapping and localization of a static gas source are complex tasks due to the turbulent nature of gas transport under natural conditions, and become even more challenging when airborne. This is especially so when using a VTOL-based micro-drone that induces disturbances through its rotors, which heavily affect gas distribution. Besides the adaptation of a micro-drone for gas concentration measurements, a novel method for the determination of the wind vector in real-time is presented. The on-board sensors for the flight control of the micro-drone provide a basis for the wind vector calculation. Furthermore, robot operating software for controlling the micro-drone autonomously is developed and used to validate the algorithms developed within this Ph.D. thesis in simulations and real-world experiments. Three biologically inspired algorithms for locating gas sources are adapted and developed for use with the micro-drone: the surge-cast algorithm (a variant of the silkworm moth algorithm), the zigzag / dung beetle algorithm, and a newly developed algorithm called 'pseudo gradient algorithm'. The latter extracts from two spatially separated measuring positions the information necessary (concentration gradient and mean wind direction) to follow a gas plume to its emission source. The performance of the algorithms is evaluated in simulations and real-world experiments. The distance overhead and the gas source localization success rate are used as the main performance criteria for comparing the algorithms. Next, a new method for gas source localization (GSL) based on a particle filter (PF) is presented. Each particle represents a weighted hypothesis of the gas source position. As a first step, the PF
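
    A particle filter for gas source localization of the kind summarized above can be skeletonized in a few lines: particles are candidate source positions, weights are updated by comparing each concentration reading with a forward dispersion model, and the cloud is resampled when it degenerates. Everything below (the isotropic 1/d² plume model, noise level, arena size) is an illustrative assumption, not the thesis's model.

      import numpy as np

      rng = np.random.default_rng(0)
      particles = rng.uniform(0, 50, (5000, 2))       # source-position hypotheses (m)
      weights = np.full(len(particles), 1.0 / len(particles))

      def expected_conc(src, sensor_pos, strength=1.0):
          d = np.linalg.norm(src - sensor_pos, axis=-1) + 1e-6
          return strength / d**2                      # crude isotropic plume model

      def update(sensor_pos, measured, sigma=0.5):
          global particles, weights
          lik = np.exp(-0.5 * ((measured - expected_conc(particles, sensor_pos))
                               / sigma) ** 2)
          weights *= lik
          weights /= weights.sum()
          if 1.0 / np.sum(weights**2) < len(particles) / 2:  # low effective sample size
              idx = rng.choice(len(particles), len(particles), p=weights)
              particles = particles[idx] + rng.normal(0, 0.5, particles.shape)
              weights[:] = 1.0 / len(particles)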

  6. An open-source software tool for the generation of relaxation time maps in magnetic resonance imaging

    International Nuclear Information System (INIS)

    Messroghli, Daniel R; Rudolph, Andre; Abdel-Aty, Hassan; Wassmuth, Ralf; Kühne, Titus; Dietz, Rainer; Schulz-Menger, Jeanette

    2010-01-01

    In magnetic resonance (MR) imaging, T1, T2 and T2* relaxation times represent characteristic tissue properties that can be quantified with the help of specific imaging strategies. While there are basic software tools for specific pulse sequences, until now there is no universal software program available to automate pixel-wise mapping of relaxation times from various types of images or MR systems. Such a software program would allow researchers to test and compare new imaging strategies and thus would significantly facilitate research in the area of quantitative tissue characterization. After defining requirements for a universal MR mapping tool, a software program named MRmap was created using a high-level graphics language. Additional features include a manual registration tool for source images with motion artifacts and a tabular DICOM viewer to examine pulse sequence parameters. MRmap was successfully tested on three different computer platforms with image data from three different MR system manufacturers and five different sorts of pulse sequences: multi-image inversion recovery T1; Look-Locker/TOMROP T1; modified Look-Locker (MOLLI) T1; single-echo T2/T2*; and multi-echo T2/T2*. Computing times varied between 2 and 113 seconds. Estimates of relaxation times compared favorably to those obtained from non-automated curve fitting. Completed maps were exported in DICOM format and could be read in standard software packages used for analysis of clinical and research MR data. MRmap is a flexible cross-platform research tool that enables accurate mapping of relaxation times from various pulse sequences. The software allows researchers to optimize quantitative MR strategies in a manufacturer-independent fashion. The program and its source code were made available as open-source software on the internet

  7. An open-source software tool for the generation of relaxation time maps in magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Kühne Titus

    2010-07-01

    Abstract Background In magnetic resonance (MR) imaging, T1, T2 and T2* relaxation times represent characteristic tissue properties that can be quantified with the help of specific imaging strategies. While there are basic software tools for specific pulse sequences, until now there is no universal software program available to automate pixel-wise mapping of relaxation times from various types of images or MR systems. Such a software program would allow researchers to test and compare new imaging strategies and thus would significantly facilitate research in the area of quantitative tissue characterization. Results After defining requirements for a universal MR mapping tool, a software program named MRmap was created using a high-level graphics language. Additional features include a manual registration tool for source images with motion artifacts and a tabular DICOM viewer to examine pulse sequence parameters. MRmap was successfully tested on three different computer platforms with image data from three different MR system manufacturers and five different sorts of pulse sequences: multi-image inversion recovery T1; Look-Locker/TOMROP T1; modified Look-Locker (MOLLI) T1; single-echo T2/T2*; and multi-echo T2/T2*. Computing times varied between 2 and 113 seconds. Estimates of relaxation times compared favorably to those obtained from non-automated curve fitting. Completed maps were exported in DICOM format and could be read in standard software packages used for analysis of clinical and research MR data. Conclusions MRmap is a flexible cross-platform research tool that enables accurate mapping of relaxation times from various pulse sequences. The software allows researchers to optimize quantitative MR strategies in a manufacturer-independent fashion. The program and its source code were made available as open-source software on the internet.
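
    Pixel-wise relaxation-time mapping of the kind these two records describe reduces, for a multi-echo T2 series, to fitting S(TE) = S0·exp(-TE/T2) at every pixel. A log-linear least-squares sketch; the echo times and images below are placeholders, and real pipelines also handle noise floors and motion.

      import numpy as np

      def t2_map(echoes, tes):
          """echoes: (n_echoes, ny, nx) magnitude images; tes: echo times in ms."""
          tes = np.asarray(tes, float)
          y = np.log(np.maximum(echoes, 1e-6)).reshape(len(tes), -1)
          A = np.vstack([np.ones_like(tes), -tes]).T      # ln S = ln S0 - TE/T2
          coef, *_ = np.linalg.lstsq(A, y, rcond=None)
          with np.errstate(divide="ignore"):
              t2 = 1.0 / coef[1]
          return t2.reshape(echoes.shape[1:])

      tes = [10, 20, 40, 80]                              # assumed echo times (ms)
      echoes = np.random.rand(4, 64, 64) + 1.0            # placeholder image stack
      print(t2_map(echoes, tes).shape)                    # (64, 64) T2 map in ms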

  8. Fast radio burst event rate counts - I. Interpreting the observations

    Science.gov (United States)

    Macquart, J.-P.; Ekers, R. D.

    2018-02-01

    The fluence distribution of the fast radio burst (FRB) population (the 'source count' distribution, N(>F) ∝ F^α) is a crucial diagnostic of its distance distribution, and hence of the progenitor evolutionary history. We critically reanalyse current estimates of the FRB source count distribution. We demonstrate that the Lorimer burst (FRB 010724) is subject to discovery bias, and should be excluded from all statistical studies of the population. We re-examine the evidence for flat, α > -1, source count estimates based on the ratio of single-beam to multiple-beam detections with the Parkes multibeam receiver, and show that current data imply only a very weak constraint of α ≲ -1.3. A maximum-likelihood analysis applied to the portion of the Parkes FRB population detected above the observational completeness fluence of 2 Jy ms yields α = -2.6^{+0.7}_{-1.3}. Uncertainties in the location of each FRB within the Parkes beam render estimates of the Parkes event rate uncertain in both the normalizing survey area and the estimated post-beam-corrected completeness fluence; this uncertainty needs to be accounted for when comparing the event rate against event rates measured at other telescopes.
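
    The maximum-likelihood slope quoted above has a closed form: for fluences F_i ≥ F_min drawn from N(>F) ∝ F^α, the estimator is α = -N / Σ ln(F_i/F_min), with a fractional error of roughly 1/√N at large N. A sketch with made-up fluences:

      import numpy as np

      def ml_alpha(fluences, f_min):
          """ML estimate of the cumulative slope alpha in N(>F) ∝ F**alpha."""
          f = np.asarray(fluences, float)
          f = f[f >= f_min]
          alpha = -len(f) / np.sum(np.log(f / f_min))
          return alpha, abs(alpha) / np.sqrt(len(f))    # large-N 1-sigma error

      fluences = [2.4, 3.1, 2.2, 7.5, 4.4, 2.8, 9.8, 2.1, 3.3]   # Jy ms, illustrative
      print(ml_alpha(fluences, f_min=2.0))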

  9. A New Method for Calculating Counts in Cells

    Science.gov (United States)

    Szapudi, István

    1998-04-01

    In the near future, a new generation of CCD-based galaxy surveys will enable high-precision determination of the N-point correlation functions. The resulting information will help to resolve the ambiguities associated with two-point correlation functions, thus constraining theories of structure formation, biasing, and Gaussianity of initial conditions independently of the value of Ω. As one of the most successful methods of extracting the amplitude of higher order correlations is based on measuring the distribution of counts in cells, this work presents an advanced way of measuring it with unprecedented accuracy. Szapudi & Colombi identified the main sources of theoretical errors in extracting counts in cells from galaxy catalogs. One of these sources, termed measurement error, stems from the fact that conventional methods use a finite number of sampling cells to estimate counts in cells. This effect can be circumvented by using an infinite number of cells. This paper presents an algorithm which in practice achieves this goal; that is, it is equivalent to throwing an infinite number of sampling cells in finite time. The errors associated with sampling cells are completely eliminated by this procedure, which will be essential for the accurate analysis of future surveys.
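
    For contrast with the infinite-sampling scheme described above, the conventional estimator it improves on simply histograms the occupation numbers of a finite set of randomly thrown cells; the finite number of cells is exactly what introduces the measurement error. A toy 2D version:

      import numpy as np

      rng = np.random.default_rng(4)
      gal = rng.uniform(0, 100, (20000, 2))       # toy galaxy positions, 100x100 box
      cell = 5.0                                   # cell side
      n_cells = 2000                               # finite sampling -> measurement error

      lo = rng.uniform(0, 100 - cell, (n_cells, 2))
      counts = np.array([np.sum(np.all((gal >= c) & (gal < c + cell), axis=1))
                         for c in lo])
      p_n = np.bincount(counts) / n_cells          # estimated P_N, counts-in-cells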

  10. Development of counting system for wear measurements using Thin Layer Activation and the Wearing Apparatus

    Energy Technology Data Exchange (ETDEWEB)

    França, Michel de A.; Suita, Julio C.; Salgado, César M., E-mail: mchldante@gmail.com, E-mail: suita@ien.gov.br, E-mail: otero@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2017-07-01

    This paper focuses on developing a counting system for the Wearing Apparatus, which is a device previously built to generate measurable wear on a given surface (Main Source) and to carry the filings from it to a filter (second source). Thin Layer Activation is a technique used to produce activity on one of the Wearing Apparatus' pieces; this activity is proportional to the amount of material worn, or scraped, from the piece's surface. Thus, by measuring the activity at those two points it is possible to measure the produced wear. The methodology used in this work is based on simulations through the MCNP-X code to find the best specifications for shielding, solid angles, detector dimensions and collimation for the counting system. By simulating several scenarios, each one different from the other, and analyzing the results in the form of counts per second, the ideal counting system specifications and geometry to measure the activity in the Main Source and the filter (second source) are chosen. After that, a set of previously activated stainless steel foils was used to reproduce the real experiment's conditions; this real experiment consists of using TLA and the Wearing Apparatus. The results demonstrate that the counting system and methodology are adequate for such experiments. (author)

  11. Where can pixel counting area estimates meet user-defined accuracy requirements?

    Science.gov (United States)

    Waldner, François; Defourny, Pierre

    2017-08-01

    Pixel counting is probably the most popular way to estimate class areas from satellite-derived maps. It involves determining the number of pixels allocated to a specific thematic class and multiplying it by the pixel area. In the presence of asymmetric classification errors, the pixel counting estimator is biased. The overarching objective of this article is to define the applicability conditions of pixel counting so that the estimation bias stays below a user-defined accuracy target. By reasoning in terms of landscape fragmentation and spatial resolution, the proposed framework decouples the resolution bias and the classifier bias from the overall classification bias. The consequence is that, prior to any classification, part of the tolerated bias is already committed due to the choice of the spatial resolution of the imagery. How much classification bias is affordable depends on the joint interaction of spatial resolution and fragmentation. The method was implemented over South Africa for cropland mapping, demonstrating its operational applicability. Particular attention was paid to modeling a realistic sensor's spatial response by explicitly accounting for the effect of its point spread function. The diagnostic capabilities offered by this framework have multiple potential domains of application, such as guiding users in their choice of imagery and providing guidelines for space agencies to elaborate the design specifications of future instruments.
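
    The asymmetric-error bias at the heart of this framework is easy to make concrete: with a binary error matrix, the pixel-counting estimate differs from the reference area by (commission - omission) pixels. The numbers below are invented for illustration.

      import numpy as np

      # Rows: reference class (crop, non-crop); columns: mapped class (pixel counts).
      errmat = np.array([[9000,  400],     # crop mapped as crop / as non-crop (omission)
                         [ 700, 49900]])   # non-crop mapped as crop (commission) / correct
      pixel_area_ha = 0.09                 # e.g. a 30 m pixel

      mapped = errmat[:, 0].sum()          # what pixel counting reports for 'crop'
      reference = errmat[0, :].sum()       # reference crop pixels
      bias_ha = (mapped - reference) * pixel_area_ha      # = (FP - FN) * pixel area
      print(f"bias: {bias_ha:+.0f} ha "
            f"({100 * (mapped - reference) / reference:+.1f} %)")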

  12. Compton suppression gamma-counting: The effect of count rate

    Science.gov (United States)

    Millard, H.T.

    1984-01-01

    Past research has shown that anti-coincidence shielded Ge(Li) spectrometers enhanced the signal-to-background ratios for gamma photopeaks which are situated on high Compton backgrounds. Ordinarily, an anti- or non-coincidence spectrum (A) and a coincidence spectrum (C) are collected simultaneously with these systems. To be useful in neutron activation analysis (NAA), the fractions of the photopeak counts routed to the two spectra must be constant from sample to sample, or variations must be corrected quantitatively. Most Compton suppression counting has been done at low count rate, but in NAA applications, count rates may be much higher. To operate over the wider dynamic range, the effect of count rate on the ratio of the photopeak counts in the two spectra (A/C) was studied. It was found that as the count rate increases, A/C decreases for gammas not coincident with other gammas from the same decay. For gammas coincident with other gammas, A/C increases to a maximum and then decreases. These results suggest that calibration curves are required to correct photopeak areas so quantitative data can be obtained at higher count rates. © 1984.

  13. Protecting count queries in study design.

    Science.gov (United States)

    Vinterbo, Staal A; Sarwate, Anand D; Boxwala, Aziz A

    2012-01-01

    Today's clinical research institutions provide tools for researchers to query their data warehouses for counts of patients. To protect patient privacy, counts are perturbed before reporting; this compromises their utility for increased privacy. The goal of this study is to extend current query answer systems to guarantee a quantifiable level of privacy and allow users to tailor perturbations to maximize the usefulness according to their needs. A perturbation mechanism was designed in which users are given options with respect to scale and direction of the perturbation. The mechanism translates the true count, user preferences, and a privacy level within administrator-specified bounds into a probability distribution from which the perturbed count is drawn. Users can significantly impact the scale and direction of the count perturbation and can receive more accurate final cohort estimates. Strong and semantically meaningful differential privacy is guaranteed, providing for a unified privacy accounting system that can support role-based trust levels. This study provides an open source web-enabled tool to investigate visually and numerically the interaction between system parameters, including required privacy level and user preference settings. Quantifying privacy allows system administrators to provide users with a privacy budget and to monitor its expenditure, enabling users to control the inevitable loss of utility. While current measures of privacy are conservative, this system can take advantage of future advances in privacy measurement. The system provides new ways of trading off privacy and utility that are not provided in current study design systems.
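
    The classical building block behind such quantifiable guarantees is the Laplace mechanism: adding Laplace(1/ε) noise to a count (whose sensitivity is 1) yields ε-differential privacy. The paper's mechanism additionally lets users steer the scale and direction of the perturbation within administrator bounds; the baseline sketch below omits that refinement.

      import numpy as np

      def private_count(true_count, epsilon, rng=None):
          """Release a count with epsilon-differential privacy (sensitivity 1)."""
          if rng is None:
              rng = np.random.default_rng()
          return int(round(true_count + rng.laplace(0.0, 1.0 / epsilon)))

      print(private_count(132, epsilon=0.5))   # stronger privacy, noisier answer
      print(private_count(132, epsilon=2.0))   # weaker privacy, closer to the truth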

  14. Crowd counting via scale-adaptive convolutional neural network

    OpenAIRE

    Zhang, Lu; Shi, Miaojing; Chen, Qiaobo

    2017-01-01

    The task of crowd counting is to automatically estimate the pedestrian number in crowd images. To cope with the scale and perspective changes that commonly exist in crowd images, state-of-the-art approaches employ multi-column CNN architectures to regress density maps of crowd images. Multiple columns have different receptive fields corresponding to pedestrians (heads) of different scales. We instead propose a scale-adaptive CNN (SaCNN) architecture with a backbone of fixed small receptive fi...
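
    Density-map regression of the kind used by SaCNN rests on a simple identity: a ground-truth map built by blurring unit impulses at annotated head positions integrates to the crowd count. A minimal construction (array sizes, kernel width and annotations are illustrative):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def density_map(head_xy, shape, sigma=4.0):
          """Ground-truth density map from annotated head positions."""
          impulses = np.zeros(shape)
          for x, y in head_xy:
              impulses[int(y), int(x)] += 1.0
          return gaussian_filter(impulses, sigma)

      heads = [(40, 30), (44, 34), (120, 80)]   # (x, y) annotations
      dmap = density_map(heads, (160, 240))
      print(dmap.sum())                         # ≈ 3.0: the map integrates to the count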

  15. EcoCount

    Directory of Open Access Journals (Sweden)

    Phillip P. Allen

    2014-05-01

    Techniques that analyze biological remains from sediment sequences for environmental reconstructions are well established and widely used. Yet, identifying, counting, and recording biological evidence such as pollen grains remains a highly skilled, demanding, and time-consuming task. Standard procedure requires the classification and recording of between 300 and 500 pollen grains from each representative sample. Recording the data from a pollen count requires significant effort and focused resources from the palynologist. However, when an adaptation to the recording procedure is utilized, efficiency and time economy improve. We describe EcoCount, which represents a development in environmental data recording procedure. EcoCount is a voice-activated, fully customizable digital count sheet that allows the investigator to continuously interact with a field of view during the data recording. Continuous viewing allows the palynologist the opportunity to remain engaged with the essential task, identification, for longer, making pollen counting more efficient and economical. EcoCount is a versatile software package that can be used to record a variety of environmental evidence and can be installed onto different computer platforms, making adoption by users and laboratories simple and inexpensive. The user-friendly format of EcoCount allows any novice to become competent and functional in a very short time.

  16. RBC count

    Science.gov (United States)

    ... by kidney disease); RBC destruction (hemolysis) due to transfusion, blood vessel injury, or other cause; leukemia; malnutrition; bone ... (slight risk any time the skin is broken). Alternative names: erythrocyte count; red blood cell count; anemia - RBC count.

  17. Recommended methods for monitoring change in bird populations by counting and capture of migrants

    Science.gov (United States)

    David J. T. Hussell; C. John Ralph

    2005-01-01

    Counts and banding captures of spring or fall migrants can generate useful information on the status and trends of the source populations. To do so, the counts and captures must be taken and recorded in a standardized and consistent manner. We present recommendations for field methods for counting and capturing migrants at intensively operated sites, such as bird...

  18. Mapping nanoscale effects of localized noise-source activities on photoconductive charge transports in polymer-blend films

    Science.gov (United States)

    Shekhar, Shashank; Cho, Duckhyung; Cho, Dong-Guk; Yang, Myungjae; Hong, Seunghun

    2018-05-01

    We developed a method to directly image the nanoscale effects of localized noise-source activities on photoconducting charge transport in the domain structures of phase-separated polymer-blend films of Poly(9,9-di-n-octylfluorenyl-2,7-diyl) and Poly(9,9-di-n-octylfluorene-alt-benzothiadiazole). For the imaging, current and noise maps of the polymer blend were recorded using a conducting nanoprobe in contact with the surface, enabling mapping of the conductivity (σ) and the noise-source density (N_T) under an external stimulus. The blend films exhibited phase separation between the constituent polymers at the domain level. Within a domain, high-σ (low-N_T) and low-σ (high-N_T) regions were observed, which could be associated with the ordered and disordered regions of a domain. In the N_T maps, we observed that noise sources strongly affected the conduction mechanism, resulting in a scaling behavior of σ ∝ N_T^-0.5 in both ordered and disordered regions. When a blend film was under the influence of an external stimulus such as a high bias or an illumination, an increase in σ was observed, but that also resulted in an increase in N_T as a trade-off. Interestingly, the Δσ versus ΔN_T plot exhibited an unusual scaling behavior of Δσ ∝ ΔN_T^0.5, which is attributed to the de-trapping of carriers from deep traps by the external stimuli. In addition, we found that an external stimulus increased the conductivity at the interfaces without significantly increasing their N_T, which can be the origin of the superior performance of polymer-blend-based devices. These results provide valuable insight into the effects of noise sources on nanoscale optoelectronic properties in polymer-blend films, which can be an important guideline for improving devices based on polymer blends.

  19. A system for mapping radioactive specimens

    International Nuclear Information System (INIS)

    Britten, R.J.; Davidson, E.H.

    1988-01-01

    A system for mapping radioactive specimens comprises an avalanche counter, an encoder, pre-amplifier circuits, sample and hold circuits and a programmed computer. The parallel plate counter utilizes avalanche event counting over a large area, with the ability to locate radioactive sources in two dimensions. When a beta ray, for example, enters a chamber, an ionization event occurs; the avalanche effect multiplies the event and results in charge collection on the anode surface for a limited period of time before the charge leaks away. The encoder comprises a symmetrical array of planar conductive surfaces separated from the anode by a dielectric material. The encoder couples charge currents, the amplitudes of which define the relative position of the ionization event. The amplitude of the coupled current, delivered to the pre-amplifiers, defines the location of the event. (author) 12 figs

  20. Automatic quench compensation for liquid scintillation counting system

    International Nuclear Information System (INIS)

    Nather, R.E.

    1978-01-01

    A method of automatic quench compensation is provided, in which a reference measure of quench is taken on a sample prior to taking a sample count. The measure of quench is then compared with a reference voltage source, which has been established to vary in proportion to the variation of the measure of quench with the level of a system parameter required to restore at least one isotope spectral energy endpoint substantially to a selected counting window discriminator level, in order to determine the amount of adjustment of the system parameter required to restore the endpoint. This is followed by the appropriate adjustment of the system parameter required to restore the relative position of the discriminator windows and the sample spectrum, and in turn by taking a sample count.

  1. Confusion-limited extragalactic source survey at 4.755 GHz. I. Source list and areal distributions

    International Nuclear Information System (INIS)

    Ledden, J.E.; Broderick, J.J.; Condon, J.J.; Brown, R.L.

    1980-01-01

    A confusion-limited 4.755-GHz survey covering 0.00956 sr between right ascensions 07h05m and 18h near declination +35° has been made with the NRAO 91-m telescope. The survey found 237 sources and is complete above 15 mJy. Source counts between 15 and 100 mJy were obtained directly. The P(D) distribution was used to determine the number counts between 0.5 and 13.2 mJy, to search for anisotropy in the density of faint extragalactic sources, and to set a 99%-confidence upper limit of 1.83 mK to the rms temperature fluctuation of the 2.7-K cosmic microwave background on angular scales smaller than 7.3 arcmin. The discrete-source density, normalized to the static Euclidean slope, falls off sufficiently rapidly below 100 mJy that no new population of faint flat-spectrum sources is required to explain the 4.755-GHz source counts.

  2. Cosmology from angular size counts of extragalactic radio sources

    International Nuclear Information System (INIS)

    Kapahi, V.K.

    1975-01-01

    The cosmological implications of the observed angular sizes of extragalactic radio sources are investigated using (i) the log N-log θ relation, where N is the number of sources with an angular size greater than a value θ, for the complete sample of 3CR sources, and (ii) the θ_median versus flux density (S) relation derived from the 3CR, the All-sky, and the Ooty occultation surveys, spanning a flux density range of about 300:1. The method of estimating the expected N(θ) and θ_m(S) relations for a uniform distribution of sources in space is outlined. Since values of θ ≳ 100 arcsec in the 3C sample arise from sources of small z, the slope of the N(θ) relation in this range is practically independent of the world model and the distribution of source sizes, but depends strongly on the radio luminosity function (RLF). From the observed slope the RLF is derived, in the luminosity range of about 10^23 to 10^26 W Hz^-1 sr^-1 at 178 MHz, to be of the form ρ(P)dP ∝ P^-2.1 dP. It is shown that the angular size data provide independent evidence of evolution in source properties with epoch. It is difficult to explain the data with the simple steady-state theory even if identified QSOs are excluded from the source samples and a local deficiency of strong sources is postulated. The simplest evolutionary scheme that fits the data in the Einstein-de Sitter cosmology indicates that (a) the local RLF steepens considerably at high luminosities, (b) the comoving density of high-luminosity sources increases with z in a manner similar to that implied by the log N-log S data and by the V/V_m test for QSOs, and (c) the mean physical sizes of radio sources evolve with z approximately as (1+z)^-1. Similar evolutionary effects appear to be present for QSOs as well as radio galaxies. (author)

  3. A comparison of spatial analysis methods for the construction of topographic maps of retinal cell density.

    Directory of Open Access Journals (Sweden)

    Eduardo Garza-Gisholt

    Full Text Available Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of a stereological approach to counting neuronal distribution, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps but the smoothing parameters used may affect the outcome.

  4. A comparison of spatial analysis methods for the construction of topographic maps of retinal cell density.

    Science.gov (United States)

    Garza-Gisholt, Eduardo; Hemmi, Jan M; Hart, Nathan S; Collin, Shaun P

    2014-01-01

    Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of a stereological approach to counting neuronal distribution, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps but the smoothing parameters used may affect the outcome.
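
    To make the interpolation-versus-smoothing trade-off concrete, the sketch below applies a Gaussian kernel smoother to noisy gridded counts; an interpolator would instead pass through every sampled value, noise included. This is an illustrative Python stand-in for the idea, not the authors' R-script, and the grid, density model and bandwidth are arbitrary:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Synthetic retinal cell counts on a 50 x 50 sampling grid: a shallow
# density gradient plus Poisson sampling noise.
x, y = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
true_density = 200 + 150 * np.exp(-((x - 0.5)**2 + (y - 0.5)**2) / 0.05)
counts = rng.poisson(true_density)

# Gaussian kernel smoothing: a larger sigma suppresses sampling noise
# but also flattens real gradients, so the bandwidth choice matters.
smoothed = gaussian_filter(counts.astype(float), sigma=2.0)
```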

  5. Can Probability Maps of Swept-Source Optical Coherence Tomography Predict Visual Field Changes in Preperimetric Glaucoma?

    Science.gov (United States)

    Lee, Won June; Kim, Young Kook; Jeoung, Jin Wook; Park, Ki Ho

    2017-12-01

    To determine the usefulness of swept-source optical coherence tomography (SS-OCT) probability maps in detecting locations with significant reduction in visual field (VF) sensitivity or predicting future VF changes in patients with classically defined preperimetric glaucoma (PPG). Forty-three eyes of 43 PPG patients, followed up every 6 months for at least 2 years, were analyzed in this longitudinal study. The patients underwent wide-field SS-OCT scanning and standard automated perimetry (SAP) at the time of enrollment. With this wide-scan protocol, probability maps originating from the corresponding thickness map and overlaid on SAP VF test points could be generated. We evaluated the vulnerable VF points with SS-OCT probability maps as well as the prevalence of locations with significant VF reduction or subsequent VF changes observed in the corresponding damaged areas of the probability maps. The vulnerable VF points were shown in superior and inferior arcuate patterns near the central fixation. In 19 of 43 PPG eyes (44.2%), significant reduction in baseline VF was detected within the areas of structural change on the SS-OCT probability maps. In 16 of 43 PPG eyes (37.2%), subsequent VF changes within the areas of SS-OCT probability map change were observed over the course of the follow-up. Structural changes on SS-OCT probability maps could detect or predict VF changes using SAP in a considerable number of PPG eyes. Careful comparison of probability maps with SAP results could be useful in diagnosing and monitoring PPG patients in the clinical setting.

  6. Counting statistics in low level radioactivity measurements fluctuating counting efficiency

    International Nuclear Information System (INIS)

    Pazdur, M.F.

    1976-01-01

    A divergence between the probability distribution of the number of nuclear disintegrations and that of the number of observed counts, caused by counting efficiency fluctuation, is discussed. The negative binomial distribution is proposed to describe the probability distribution of the number of counts, instead of the Poisson distribution, which is assumed to hold for the number of nuclear disintegrations only. From actual measurements the r.m.s. amplitude of the counting efficiency fluctuation is estimated. Some consequences of counting efficiency fluctuation are investigated and the corresponding formulae are derived: (1) for the detection limit as a function of the number of partial measurements and the relative amplitude of counting efficiency fluctuation, and (2) for the optimum allocation of the number of partial measurements between sample and background. (author)
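
    The claimed overdispersion is easy to reproduce numerically: if the counting efficiency is drawn from a gamma distribution, the resulting gamma-Poisson mixture of counts is negative binomial, with variance exceeding the Poisson value. A quick simulation with invented parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n_dis, n_rep = 10_000, 100_000       # disintegrations per trial, trials

# Counting efficiency fluctuating around 0.30 (about 5% relative r.m.s.)
eff = rng.gamma(shape=400, scale=0.30 / 400, size=n_rep)
counts = rng.poisson(n_dis * eff)

print(counts.mean())                     # ~3000
print(counts.var() / counts.mean())      # >> 1, i.e. overdispersed
                                         # (a fixed efficiency would give ~1)
```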

  7. Rugged: an operational, open-source solution for Sentinel-2 mapping

    Science.gov (United States)

    Maisonobe, Luc; Seyral, Jean; Prat, Guylaine; Guinet, Jonathan; Espesset, Aude

    2015-10-01

    When you map the entire Earth every 5 days with the aim of generating high-quality time series over land, there is no room for geometrical error: the algorithms have to be stable, reliable, and precise. Rugged, a new open-source library for pixel geolocation, is at the geometrical heart of the operational processing for Sentinel-2. Rugged performs sensor-to-terrain mapping taking into account ground Digital Elevation Models, Earth rotation with all its small irregularities, on-board sensor pixel individual lines-of-sight, spacecraft motion and attitude, and all significant physical effects. It provides direct and inverse location, i.e. it allows the accurate computation of which ground point is viewed from a specific pixel in a spacecraft instrument, and conversely which pixel will view a specified ground point. Direct and inverse location can be used to perform full ortho-rectification of images and correlation between sensors observing the same area. Implemented as an add-on for Orekit (Orbits Extrapolation KIT; a low-level space dynamics library), Rugged also offers the possibility of simulating satellite motion and attitude auxiliary data using Orekit's full orbit propagation capability. This is a considerable advantage for test data generation and mission simulation activities. Together with the Orfeo ToolBox (OTB) image processing library, Rugged provides the algorithmic core of Sentinel-2 Instrument Processing Facilities. The S2 complex viewing model - with 12 staggered push-broom detectors and 13 spectral bands - is built using Rugged objects, enabling the computation of rectification grids for mapping between cartographic and focal plane coordinates. These grids are passed to the OTB library for further image resampling, thus completing the ortho-rectification chain. Sentinel-2 stringent operational requirements to process several terabytes of data per week represented a tough challenge, though one that was well met by Rugged in terms of the robustness and

  8. Substance Flow Analysis and Source Mapping of Chemical UV-filters

    International Nuclear Information System (INIS)

    Eriksson, E.; Andersen, H. R.; Ledin, A.

    2008-01-01

    Chemical ultraviolet (UV)-filters are used in sunscreens to protect the skin from harmful UV radiation which may otherwise cause sunburns and skin cancer. Commonly used chemical UV-filters are known to cause endocrine disrupting effects in both aquatic and terrestrial animals as well as in human skin cells. Here, source mapping and substance flow analysis were applied to find the sources of six UV-filters (oxybenzone, avobenzone, 4-methylbenzylidene camphor, octyl methoxycinnamate, octyl dimethyl PABA and homosalate) and to identify the most dominant flows of these substances in Denmark. Urban water, composed of wastewater and surface waters, was found to be the primary recipient of UV-filters, whereby wastewater received an estimated 8.5-65 tonnes and surface waters received 7.1-51 tonnes in 2005. In wastewater treatment plants, their sorption onto sludge is perceived to be an important process and presence in effluents can be expected due to a lack of biodegradability. In addition, the use of UV-filters is expected to continue to increase significantly. Not all filters (e.g., octyl dimethyl PABA and homosalate) are used in Denmark. For example, 4-MBC is mainly associated with self-tanning liquids and private import of sunscreens

  9. Substance Flow Analysis and Source Mapping of Chemical UV-filters

    Energy Technology Data Exchange (ETDEWEB)

    Eriksson, E., E-mail: eve@env.dtu.dk; Andersen, H. R.; Ledin, A. [Technical University of Denmark, Department of Environmental Engineering (Denmark)

    2008-12-15

    Chemical ultraviolet (UV)-filters are used in sunscreens to protect the skin from harmful UV radiation which may otherwise cause sunburns and skin cancer. Commonly used chemical UV-filters are known to cause endocrine disrupting effects in both aquatic and terrestrial animals as well as in human skin cells. Here, source mapping and substance flow analysis were applied to find the sources of six UV-filters (oxybenzone, avobenzone, 4-methylbenzylidene camphor, octyl methoxycinnamate, octyl dimethyl PABA and homosalate) and to identify the most dominant flows of these substances in Denmark. Urban water, composed of wastewater and surface waters, was found to be the primary recipient of UV-filters, whereby wastewater received an estimated 8.5-65 tonnes and surface waters received 7.1-51 tonnes in 2005. In wastewater treatment plants, their sorption onto sludge is perceived to be an important process and presence in effluents can be expected due to a lack of biodegradability. In addition, the use of UV-filters is expected to continue to increase significantly. Not all filters (e.g., octyl dimethyl PABA and homosalate) are used in Denmark. For example, 4-MBC is mainly associated with self-tanning liquids and private import of sunscreens.

  10. Following the money: Mapping the sources and funding flows of alcohol and other drug treatment in Australia.

    Science.gov (United States)

    Chalmers, Jenny; Ritter, Alison; Berends, Lynda; Lancaster, Kari

    2016-05-01

    The structures of health systems impact on patient outcomes. We present and analyse the first detailed mapping of who funds alcohol and other drug (AOD) treatment and the channels and intermediaries through which funding flows from the funding sources to treatment providers. The study involved a literature review of AOD treatment financing and existing diagrammatic representations of the structure of the Australian health system. We interviewed 190 key informants to particularise the AOD treatment sector, and undertook two case examples of government funded non-government organisations providing AOD treatment. Funding sources include the Australian and state and territory governments, philanthropy, fund-raising and clients themselves. While funding sources align with the health sector generally and the broader social services sector, the complexity of flows from source to treatment service and the number of intermediaries are noteworthy. So too are the many sources of funding drawn on by some treatment providers. Diversification is both beneficial and disadvantageous for non-government treatment providers, adding to administrative workloads, but smoothing the risk of funding shortfalls. Government funders benefit from sharing risk. Circuitous funding flows multiply the funding sources drawn on by services and put distance between the funding source and the service provider. This leads to concerns over lack of transparency about what is being purchased and challenges for the multiply funded service provider in maintaining programs and service models amid multiple and sometimes competing funding and accountability frameworks. [Chalmers J, Ritter A, Berends L, Lancaster K. Following the money: Mapping the sources and funding flows of alcohol and other drug treatment in Australia. Drug Alcohol Rev 2016;35:255-262]. © 2015 Australasian Professional Society on Alcohol and other Drugs.

  11. Lessons learned from a pilot implementation of the UMLS information sources map.

    Science.gov (United States)

    Miller, P L; Frawley, S J; Wright, L; Roderer, N K; Powsner, S M

    1995-01-01

    OBJECTIVE: To explore the software design issues involved in implementing an operational information sources map (ISM) knowledge base (KB) and system of navigational tools that can help medical users access network-based information sources relevant to a biomedical question. DESIGN: A pilot biomedical ISM KB and associated client-server software (ISM/Explorer) have been developed to help students, clinicians, researchers, and staff access network-based information sources, as part of the National Library of Medicine's (NLM) multi-institutional Unified Medical Language System (UMLS) project. The system allows the user to specify and constrain a search for a biomedical question of interest. The system then returns a list of sources matching the search. At this point the user may request 1) further information about a source, 2) that the list of sources be regrouped by different criteria to allow the user to get a better overall appreciation of the set of retrieved sources as a whole, or 3) automatic connection to a source. RESULTS: The pilot system operates in client-server mode and currently contains coded information for 121 sources. It is in routine use from approximately 40 workstations at the Yale School of Medicine. The lessons that have been learned are that: 1) it is important to make access to different versions of a source as seamless as possible, 2) achieving seamless, cross-platform access to heterogeneous sources is difficult, 3) significant differences exist between coding the subject content of an electronic information resource versus that of an article or a book, 4) customizing the ISM to multiple institutions entails significant complexities, and 5) there are many design trade-offs between specifying searches and viewing sets of retrieved sources that must be taken into consideration. CONCLUSION: An ISM KB and navigational tools have been constructed. In the process, much has been learned about the complexities of development and evaluation in this

  12. Lessons learned from a pilot implementation of the UMLS information sources map.

    Science.gov (United States)

    Miller, P L; Frawley, S J; Wright, L; Roderer, N K; Powsner, S M

    1995-01-01

    To explore the software design issues involved in implementing an operational information sources map (ISM) knowledge base (KB) and system of navigational tools that can help medical users access network-based information sources relevant to a biomedical question. A pilot biomedical ISM KB and associated client-server software (ISM/Explorer) have been developed to help students, clinicians, researchers, and staff access network-based information sources, as part of the National Library of Medicine's (NLM) multi-institutional Unified Medical Language System (UMLS) project. The system allows the user to specify and constrain a search for a biomedical question of interest. The system then returns a list of sources matching the search. At this point the user may request 1) further information about a source, 2) that the list of sources be regrouped by different criteria to allow the user to get a better overall appreciation of the set of retrieved sources as a whole, or 3) automatic connection to a source. The pilot system operates in client-server mode and currently contains coded information for 121 sources. It is in routine use from approximately 40 workstations at the Yale School of Medicine. The lessons that have been learned are that: 1) it is important to make access to different versions of a source as seamless as possible, 2) achieving seamless, cross-platform access to heterogeneous sources is difficult, 3) significant differences exist between coding the subject content of an electronic information resource versus that of an article or a book, 4) customizing the ISM to multiple institutions entails significant complexities, and 5) there are many design trade-offs between specifying searches and viewing sets of retrieved sources that must be taken into consideration. An ISM KB and navigational tools have been constructed. In the process, much has been learned about the complexities of development and evaluation in this new environment, which are different

  13. Calibration of the Accuscan II In Vivo System for I-125 Thyroid Counting

    Energy Technology Data Exchange (ETDEWEB)

    Ovard R. Perry; David L. Georgeson

    2011-07-01

    This report describes the March 2011 calibration of the Accuscan II HPGe in vivo system for I-125 thyroid counting. The source used for the calibration was a DOE-manufactured Am-241/Eu-152 source contained in a 22 ml vial (BEA Am-241/Eu-152 RMC II-1), with energies from 26 keV to 344 keV. The center of the detector housing was positioned 64 inches from the vault floor; this position places the approximate center line of the detector housing at the center line of the source in the phantom thyroid tube. The energy and efficiency calibrations were performed using an RMC II phantom (Appendix J). Performance testing was conducted using source BEA Am-241/Eu-152 RMC II-1, and validation testing was performed using an I-125 source in a 30 ml vial (I-125 BEA Thyroid 002) and an ANSI N44.3 phantom (Appendix I). This report includes an overview introduction and records for the energy/FWHM and efficiency calibration, including performance verification and validation counting. The Accuscan II system was successfully calibrated for counting the thyroid for I-125 and verified in accordance with ANSI/HPS N13.30-1996 criteria.

  14. Analyzing thematic maps and mapping for accuracy

    Science.gov (United States)

    Rosenfield, G.H.

    1982-01-01

    Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table, sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors of commission, and the remaining elements of the columns represent errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by
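
    The error-matrix bookkeeping described above is straightforward to compute: the diagonal holds the correct classifications, and the off-diagonal row and column sums give the commission and omission errors. A small Python example with invented counts:

```python
import numpy as np

# Rows: interpretation; columns: verification (reference data).
error_matrix = np.array([[48,  3,  1],
                         [ 5, 40,  2],
                         [ 2,  4, 45]])

correct = np.trace(error_matrix)
overall_accuracy = correct / error_matrix.sum()

commission = error_matrix.sum(axis=1) - np.diag(error_matrix)  # row errors
omission = error_matrix.sum(axis=0) - np.diag(error_matrix)    # column errors
print(overall_accuracy, commission, omission)
```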

  15. Theory of overdispersion in counting statistics caused by fluctuating probabilities

    International Nuclear Information System (INIS)

    Semkow, Thomas M.

    1999-01-01

    It is shown that random Lexis fluctuations of probabilities, such as the probability of decay or detection, cause the counting statistics to be overdispersed with respect to the classical binomial, Poisson, or Gaussian distributions. The generating and distribution functions for the overdispersed counting statistics are derived. Applications to radioactive decay with detection and to more complex experiments are given, as well as to distinguishing between source and background in the presence of overdispersion. Monte Carlo verifications are provided.
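
    The mechanism lends itself to exactly the kind of Monte Carlo check the paper mentions: let the detection probability fluctuate from trial to trial (here beta-distributed, an illustrative choice) and compare the count variance with the fixed-probability binomial value:

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 1000, 200_000

p0 = 0.25                                  # mean detection probability
p = rng.beta(a=250, b=750, size=trials)    # Lexis-type fluctuation around p0
counts = rng.binomial(n, p)

binomial_var = n * p0 * (1 - p0)           # variance if p were fixed
print(counts.var(), binomial_var)          # observed variance is larger
```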

  16. Air Emissions Sources, Charts and Maps

    Data.gov (United States)

    U.S. Environmental Protection Agency — Air Emissions provides (1) interactive charts supporting national, state, or county charts, (2) county maps of criteria air pollutant emissions for a state, and (3)...

  17. A simple method for calibration of Lucas scintillation cell counting system for measurement of 226Ra and 222Rn

    Directory of Open Access Journals (Sweden)

    N.K. Sethy

    2014-10-01

    Full Text Available A known quantity of radium from a high-grade ore solution was chemically separated and carefully kept inside the cavity of a Lucas cell (LC). The 222Rn gradually builds up and attains secular equilibrium with its parent 226Ra, giving a steady count after a suitable buildup period (>25 days). This secondary source was used to calibrate the radon counting system. The method was validated by comparison with identical measurements using an AlphaGuard Aquakit. The radon counting system was used to evaluate dissolved radon in ground water samples by gross alpha counting in the LC; it measures the collected radon by gross alpha counting after a delay of >180 min. Simultaneous measurements were also carried out with the AlphaGuard Aquakit under identical conditions; the AlphaGuard measures dissolved radon from the water sample by constant aeration in a closed circuit without any delay. The two methods agree with a correlation coefficient of >0.9, validating the calibration of the Lucas scintillation cell counting system with the designed encapsulated source. This study provides an alternative for calibration in the absence of the costly radon sources available on the market.
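
    The >25 day criterion follows from the 222Rn ingrowth law A(t) = A_Ra(1 - e^(-λt)) with the standard 3.82-day half-life: after 25 days the cell activity is within about 1% of secular equilibrium. A quick check:

```python
import numpy as np

half_life = 3.8235                    # 222Rn half-life in days
lam = np.log(2) / half_life

for t in (7, 14, 25):                 # days of buildup in the sealed cell
    print(t, 1 - np.exp(-lam * t))    # fraction of equilibrium activity
# After 25 days the 222Rn activity is ~99% of the 226Ra activity.
```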

  18. Mapping of potential heat sources for heat pumps for district heating in Denmark

    International Nuclear Information System (INIS)

    Lund, Rasmus; Persson, Urban

    2016-01-01

    The ambitious policy in Denmark of having a 100% renewable energy supply in 2050 requires radical changes to the energy systems to avoid an extensive and unsustainable use of biomass resources. Currently, wind power is being expanded, and the increasing supply of electricity is slowly pushing the CHP (combined heat and power) plants out of operation, reducing the energy efficiency of the DH (district heating) supply. Here, large heat pumps for district heating are a frequently mentioned solution, providing a flexible demand for electricity and an energy-efficient heat producer. The idea is for the heat pumps to use a low-temperature waste or ambient heat source, but it has so far been very unclear which heat sources are actually available for this purpose. In this study, eight categories of heat sources are analysed for the case of Denmark and included in a detailed spatial analysis in which the identified heat sources are put in relation to the district heating areas and the corresponding demands. The analysis shows that potential heat sources are present near almost all district heating areas and that sea water will most likely have to play a substantial role as a heat source in future energy systems in Denmark. - Highlights: • The availability of heat sources for heat pumps in Denmark is mapped and quantified. • A novel methodology for assessment of low-temperature industrial excess heat is presented. • There are heat sources available for 99% of district heating networks in Denmark. • The concentration of heat sources is generally greater around larger cities than smaller ones. • Ambient-temperature heat sources will be more needed in the district heating of big cities.

  19. Investigation of internal conversion electron lines by track counting technique

    CERN Document Server

    Islamov, T A; Kambarova, N T; Muminov, T M; Lebedev, N A; Solnyshkin, A A; Aleshin, Yu D; Kolesnikov, V V; Silaev, V I; Niipf-Tashgu, T

    2001-01-01

    The methodology of counting the tracks of internal conversion electrons (ICE) in nuclear photoemulsion is described. Results of counting the ICE tracks on photoplates for 161Ho, 163Tm, 166Tm and 135Ce are presented. The results were obtained with the MBI-9 microscope and the MAS-1 automated facility. ICE track counting on photoplates provides essentially higher sensitivity than the photometry method, making it possible to carry out measurements with sources about 1000 times weaker than those required for studying the density of blackening.

  20. The Hausdorff and box-counting dimensions of a class of recurrent sets

    Energy Technology Data Exchange (ETDEWEB)

    Dai Meifeng [Nonlinear Scientific Research Center, Faculty of Science, Jiangsu University, Zhenjiang 212013 (China)], E-mail: daimf@ujs.edu.cn; Liu Xi [Nonlinear Scientific Research Center, Faculty of Science, Jiangsu University, Zhenjiang 212013 (China)], E-mail: liuxi2001@etang.com

    2008-05-15

    It is well known that many familiar fractal sets can be generated by the recurrent method. Conclusions for similitude linear maps are straightforward. In this paper, we study the upper and lower bounds for the Hausdorff dimension and box-counting dimension of recurrent sets. In particular, we focus our attention on the non-similitude case.
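
    For reference, the box-counting dimension bounded in the paper is defined (in standard notation, not necessarily the paper's) as

```latex
\[
  \dim_B(F) \;=\; \lim_{\varepsilon \to 0}
  \frac{\log N_\varepsilon(F)}{\log (1/\varepsilon)},
\]
```

    where N_ε(F) is the smallest number of boxes of side ε needed to cover F; the upper and lower box-counting dimensions replace the limit by lim sup and lim inf when it does not exist.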

  1. Radiation counting statistics

    Energy Technology Data Exchange (ETDEWEB)

    Suh, M. Y.; Jee, K. Y.; Park, K. K.; Park, Y. J.; Kim, W. H

    1999-08-01

    This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiment. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. (Author). 11 refs., 8 tabs., 8 figs.

  2. Radiation counting statistics

    Energy Technology Data Exchange (ETDEWEB)

    Suh, M. Y.; Jee, K. Y.; Park, K. K. [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-08-01

    This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiments. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. 11 refs., 6 figs., 8 tabs. (Author)

  3. Radiation counting statistics

    International Nuclear Information System (INIS)

    Suh, M. Y.; Jee, K. Y.; Park, K. K.; Park, Y. J.; Kim, W. H.

    1999-08-01

    This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiment. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. (Author). 11 refs., 8 tabs., 8 figs
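
    As an example of the kind of uncertainty propagation such reports cover: for Poisson counts, the net count rate obtained from gross and background measurements carries the textbook quadrature uncertainty below (a standard relation, not quoted from the report itself):

```python
import math

def net_rate(n_gross, t_gross, n_bkg, t_bkg):
    """Net count rate and its 1-sigma uncertainty for Poisson counts."""
    rate = n_gross / t_gross - n_bkg / t_bkg
    sigma = math.sqrt(n_gross / t_gross**2 + n_bkg / t_bkg**2)
    return rate, sigma

print(net_rate(5200, 600, 900, 600))   # counts and seconds (invented numbers)
```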

  4. Tower counts

    Science.gov (United States)

    Woody, Carol Ann; Johnson, D.H.; Shrier, Brianna M.; O'Neal, Jennifer S.; Knutzen, John A.; Augerot, Xanthippe; O'Neal, Thomas A.; Pearsons, Todd N.

    2007-01-01

    Counting towers provide an accurate, low-cost, low-maintenance, low-technology, and easily mobilized escapement estimation program compared to other methods (e.g., weirs, hydroacoustics, mark-recapture, and aerial surveys) (Thompson 1962; Siebel 1967; Cousens et al. 1982; Symons and Waldichuk 1984; Anderson 2000; Alaska Department of Fish and Game 2003). Counting tower data have been found to be consistent with digital video counts (Edwards 2005). Counting towers do not interfere with natural fish migration patterns, nor are fish handled or stressed; however, their use is generally limited to clear rivers that meet specific site selection criteria. The data provided by counting tower sampling allow fishery managers to determine reproductive population size, estimate total return (escapement + catch) and its uncertainty, evaluate population productivity and trends, set harvest rates, determine spawning escapement goals, and forecast future returns (Alaska Department of Fish and Game 1974-2000 and 1975-2004). The number of spawning fish is determined by subtracting subsistence, sport-caught fish, and prespawn mortality from the total estimated escapement. The methods outlined in this protocol for tower counts can be used to provide reasonable estimates (±6%-10%) of reproductive salmon population size and run timing in clear rivers.

  5. SEDS: THE SPITZER EXTENDED DEEP SURVEY. SURVEY DESIGN, PHOTOMETRY, AND DEEP IRAC SOURCE COUNTS

    International Nuclear Information System (INIS)

    Ashby, M. L. N.; Willner, S. P.; Fazio, G. G.; Huang, J.-S.; Hernquist, L.; Hora, J. L.; Arendt, R.; Barmby, P.; Barro, G.; Faber, S.; Guhathakurta, P.; Bell, E. F.; Bouwens, R.; Cattaneo, A.; Croton, D.; Davé, R.; Dunlop, J. S.; Egami, E.; Finlator, K.; Grogin, N. A.

    2013-01-01

    The Spitzer Extended Deep Survey (SEDS) is a very deep infrared survey within five well-known extragalactic science fields: the UKIDSS Ultra-Deep Survey, the Extended Chandra Deep Field South, COSMOS, the Hubble Deep Field North, and the Extended Groth Strip. SEDS covers a total area of 1.46 deg^2 to a depth of 26 AB mag (3σ) in both of the warm Infrared Array Camera (IRAC) bands at 3.6 and 4.5 μm. Because of its uniform depth of coverage in so many widely separated fields, SEDS is subject to roughly 25% smaller errors due to cosmic variance than a single-field survey of the same size. SEDS was designed to detect and characterize galaxies from intermediate to high redshifts (z = 2-7) with a built-in means of assessing the impact of cosmic variance on the individual fields. Because the full SEDS depth was accumulated in at least three separate visits to each field, typically with six-month intervals between visits, SEDS also furnishes an opportunity to assess the infrared variability of faint objects. This paper describes the SEDS survey design, processing, and publicly-available data products. Deep IRAC counts for the more than 300,000 galaxies detected by SEDS are consistent with models based on known galaxy populations. Discrete IRAC sources contribute 5.6 ± 1.0 and 4.4 ± 0.8 nW m^-2 sr^-1 at 3.6 and 4.5 μm to the diffuse cosmic infrared background (CIB). IRAC sources cannot contribute more than half of the total CIB flux estimated from DIRBE data. Barring an unexpected error in the DIRBE flux estimates, half the CIB flux must therefore come from a diffuse component.

  6. Reduction of CMOS Image Sensor Read Noise to Enable Photon Counting.

    Science.gov (United States)

    Guidash, Michael; Ma, Jiaju; Vogelsang, Thomas; Endsley, Jay

    2016-04-09

    Recent activity in photon counting CMOS image sensors (CIS) has been directed to reduction of read noise. Many approaches and methods have been reported. This work is focused on providing sub-1 e- read noise by design and operation of the binary and small-signal readout of photon counting CIS. Compensation of transfer gate feed-through was used to provide substantially reduced CDS time and source follower (SF) bandwidth. SF read noise was reduced by a factor of 3 with this method. This method can be applied broadly to CIS devices to reduce the read noise for small signals to enable use as a photon counting sensor.

  7. INLINING 3D RECONSTRUCTION, MULTI-SOURCE TEXTURE MAPPING AND SEMANTIC ANALYSIS USING OBLIQUE AERIAL IMAGERY

    Directory of Open Access Journals (Sweden)

    D. Frommholz

    2016-06-01

    Full Text Available This paper proposes an in-line method for the simplified reconstruction of city buildings from nadir and oblique aerial images that at the same time are being used for multi-source texture mapping with minimal resampling. Further, the resulting unrectified texture atlases are analyzed for façade elements like windows to be reintegrated into the original 3D models. Tests on real-world data of Heligoland/Germany comprising more than 800 buildings exposed a median positional deviation of 0.31 m at the façades compared to the cadastral map, a correctness of 67% for the detected windows and good visual quality when being rendered with GPU-based perspective correction. As part of the process building reconstruction takes the oriented input images and transforms them into dense point clouds by semi-global matching (SGM). The point sets undergo local RANSAC-based regression and topology analysis to detect adjacent planar surfaces and determine their semantics. Based on this information the roof, wall and ground surfaces found get intersected and limited in their extension to form a closed 3D building hull. For texture mapping the hull polygons are projected into each possible input bitmap to find suitable color sources regarding the coverage and resolution. Occlusions are detected by ray-casting a full-scale digital surface model (DSM) of the scene and stored in pixel-precise visibility maps. These maps are used to derive overlap statistics and radiometric adjustment coefficients to be applied when the visible image parts for each building polygon are being copied into a compact texture atlas without resampling whenever possible. The atlas bitmap is passed to a commercial object-based image analysis (OBIA) tool running a custom rule set to identify windows on the contained façade patches. Following multi-resolution segmentation and classification based on brightness and contrast differences potential window objects are evaluated against geometric

  8. Usefulness of change ratio map in 99mTc-HMPAO SPECT with acetazolamide enhancement

    International Nuclear Information System (INIS)

    Yamamoto, Naoyuki

    1996-01-01

    Although a sequential 99mTc-HMPAO SPECT technique with Diamox test (seq-SPECT) is a simple and time-saving procedure to assess brain perfusion reserve, the influence of the first dose of the tracer on the second one is not negligible. Therefore, subtraction of the rest-SPECT from the 2nd SPECT is widely used. However, subtracted SPECT images not only need to be corrected for the injected dose and the radiochemical purity, due to the inherent instability of HMPAO, but are also usually degraded in quality. This study was undertaken to resolve these problems utilizing a change ratio (CR) map. The CR map was obtained by dividing the 2nd SPECT by the rest-SPECT. Prior to subtraction, the 2nd SPECT was normalized with the ratio of the mean whole brain counts between both SPECTs. To validate the CR map, 7 patients were studied with both seq-SPECT and 133Xe inhalation CBF measurement (Xe-CBF). The right-to-left count ratio obtained from the ROIs placed on the MCA territory of the CR map correlated well with that from Xe-CBF (r=0.89, p<0.01). Fifty-three patients with stroke underwent the seq-SPECT, which was compared with cerebral angiography (CAG) and classified into 4 groups according to the CR map. In 25 patients, all of the rest-SPECT, the subtracted-SPECT and the CR map showed no difference between the affected side and the contralateral normal side. Seven patients with normal rest-SPECT showed decreased subtracted-SPECT counts and CR on the affected side. Three of them showed more than 75% stenosis on CAG. Four patients with decreased counts at both the rest- and the subtracted-SPECT revealed no difference on the CR map, suggesting a matched decrease of both blood flow and metabolism on the affected side. In conclusion, the CR map was a simple and useful method to evaluate the brain perfusion reserve with the seq-SPECT. (author)
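
    The CR map itself is simple image algebra: a voxel-wise division of the second scan by the rest scan. A schematic numpy version; the array names and the zero-handling policy are illustrative choices:

```python
import numpy as np

def change_ratio_map(rest, second):
    """Voxel-wise CR map: second (Diamox) SPECT divided by the rest SPECT,
    set to zero where the rest scan has no counts."""
    rest = rest.astype(float)
    second = second.astype(float)
    return np.divide(second, rest, out=np.zeros_like(second), where=rest > 0)

# Toy 2x2 "images"
print(change_ratio_map(np.array([[10.0, 0.0], [20.0, 40.0]]),
                       np.array([[12.0, 5.0], [18.0, 60.0]])))
```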

  9. DAFNE: A Matlab toolbox for Bayesian multi-source remote sensing and ancillary data fusion, with application to flood mapping

    Science.gov (United States)

    D'Addabbo, Annarita; Refice, Alberto; Lovergine, Francesco P.; Pasquariello, Guido

    2018-03-01

    High-resolution, remotely sensed images of the Earth surface have been proven to be of help in producing detailed flood maps, thanks to their synoptic overview of the flooded area and frequent revisits. However, flood scenarios can be complex situations, requiring the integration of different data in order to provide accurate and robust flood information. Several processing approaches have been recently proposed to efficiently combine and integrate heterogeneous information sources. In this paper, we introduce DAFNE, a Matlab®-based, open source toolbox, conceived to produce flood maps from remotely sensed and other ancillary information, through a data fusion approach. DAFNE is based on Bayesian Networks, and is composed of several independent modules, each one performing a different task. Multi-temporal and multi-sensor data can be easily handled, with the possibility of following the evolution of an event through multi-temporal output flood maps. Each DAFNE module can be easily modified or upgraded to meet different user needs. The DAFNE suite is presented together with an example of its application.

  10. Measuring trace gas emission from multi-distributed sources using vertical radial plume mapping (VRPM) and backward Lagrangian stochastic (bLS) techniques

    Science.gov (United States)

    Two micrometeorological techniques for measuring trace gas emission rates from distributed area sources were evaluated using a variety of synthetic area sources. The accuracy of the vertical radial plume mapping (VRPM) and the backward Lagrangian (bLS) techniques with an open-path optical spectrosco...

  11. Counting carbohydrates

    Science.gov (United States)

    Carb counting; Carbohydrate-controlled diet; Diabetic diet; Diabetes-counting carbohydrates ... Many foods contain carbohydrates (carbs), including: Fruit and fruit juice Cereal, bread, pasta, and rice Milk and milk products, soy milk Beans, legumes, ...

  12. Mapping and defining sources of variability in bioavailable strontium isotope ratios in the Eastern Mediterranean

    Science.gov (United States)

    Hartman, Gideon; Richards, Mike

    2014-02-01

    The relative contributions of bedrock and atmospheric sources to bioavailable strontium (Sr) pools in local soils were studied in Northern Israel and the Golan regions through intensive systematic sampling of modern plants and invertebrates, to produce a map of modern bioavailable strontium isotope ratios (87Sr/86Sr) for regional reconstructions of human and animal mobility patterns. The study investigates sources of variability in bioavailable 87Sr/86Sr ratios, in particular the intra- and inter-site range of variation in plant 87Sr/86Sr ratios, the range of 87Sr/86Sr ratios of plants growing on marine sedimentary versus volcanic geologies, the differences between ligneous and non-ligneous plants with varying growth and water utilization strategies, and the relative contribution of atmospheric Sr sources from different soil and vegetation types and climatic zones. Results indicate predictable variation in 87Sr/86Sr ratios. Inter- and intra-site differences in bioavailable 87Sr/86Sr ratios average 0.00025, while the 87Sr/86Sr ratios measured regionally in plants and invertebrates range from 0.7074 on mid-Pleistocene volcanic pyroclast to 0.7090 on Pleistocene calcareous sandstone. The 87Sr/86Sr ratios measured in plants growing on volcanic bedrock show time-dependent increases in atmospheric deposition relative to bedrock weathering. The 87Sr/86Sr ratios measured in plants growing on rendzina soils depend on precipitation: the offset between bedrock and plant 87Sr/86Sr ratios is highest in wet conditions and decreases in dry conditions. The 87Sr/86Sr ratios measured in plants growing on terra rossa soils are relatively constant (0.7085) regardless of precipitation. Ligneous plants are typically closer to bedrock 87Sr/86Sr ratios than non-ligneous plants. Since the bioavailable 87Sr/86Sr ratios currently measured in the region reflect a mix of both exogenous and endogenous sources, changes in the relative contribution of exogenous sources can cause variation

  13. Repeatability of differential goat bulk milk culture and associations with somatic cell count, total bacterial count, and standard plate count

    OpenAIRE

    Koop, G.; Dik, N.; Nielen, M.; Lipman, L.J.A.

    2010-01-01

    The aims of this study were to assess how different bacterial groups in bulk milk are related to bulk milk somatic cell count (SCC), bulk milk total bacterial count (TBC), and bulk milk standard plate count (SPC) and to measure the repeatability of bulk milk culturing. On 53 Dutch dairy goat farms, 3 bulk milk samples were collected at intervals of 2 wk. The samples were cultured for SPC, coliform count, and staphylococcal count and for the presence of Staphylococcus aureus. Furthermore, SCC ...

  14. Errors associated with moose-hunter counts of occupied beaver Castor fiber lodges in Norway

    OpenAIRE

    Parker, Howard; Rosell, Frank; Gustavsen, Per Øyvind

    2002-01-01

    In Norway, Sweden and Finland moose Alces alces hunting teams are often employed to survey occupied beaver (Castor fiber and C. canadensis) lodges while hunting. Results may be used to estimate population density or trend, or for issuing harvest permits. Despite the method's increasing popularity, the errors involved have never been identified. In this study we 1) compare hunting-team counts of occupied lodges with total counts, 2) identify the sources of error between counts and 3) evaluate ...

  15. Impact source location on composite CNG storage tank using acoustic emission energy based signal mapping method

    Energy Technology Data Exchange (ETDEWEB)

    Han, Byeong Hee; Yoon, Dong Jin; Park, Chun Soo [Korea Research Institute of Standards and Science, Center for Safety Measurement, Daejeon (Korea, Republic of); Lee, Young Shin [Dept. of Mechanical Design Engineering, Chungnam National University, Daejeon (Korea, Republic of)

    2016-10-15

    Acoustic emission (AE) is one of the most powerful techniques for detecting damage and identifying damage locations during operation. However, the conventional AE source location technique has limitations, because it depends strongly on the wave speed in the structure, which is difficult to define for heterogeneous composite materials. A compressed natural gas (CNG) pressure vessel is usually overwrapped with carbon fiber composite for strengthening. In this type of composite material, locating impact damage sources exactly with the conventional time-of-arrival method is difficult. To overcome this limitation, this study applied the previously developed Contour D/B map technique to four types of CNG storage tanks to identify the source location of damage caused by external shock. The source location results for the different tank types were compared.

  16. Application of the Frequency Map Analysis to the Study of the Beam Dynamics of Light Sources

    International Nuclear Information System (INIS)

    Nadolski, Laurent

    2001-01-01

    The topic of this thesis is the study of beam dynamics in storage rings, restricted to single-particle transverse dynamics. In a first part, tools (Frequency Map Analysis, Hamiltonian, integrator) are presented for studying and exploring the dynamics. Numerical simulations of four synchrotron radiation sources (the ALS, the ESRF, SOLEIL and Super-ACO) are performed. We construct a tracking code based on a new class of symplectic integrators (Laskar and Robutel, 2000). These integrators, with only positive steps, are an order of magnitude more precise than the standard Forest-Ruth scheme. Comparisons with the BETA, DESPOT and MAD codes are carried out. Frequency Map Analysis (Laskar, 1990) is our main analysis tool. This is a numerical method for analysing a conservative dynamical system. Based on a refined Fourier technique, it enables us to compute frequency maps which are real footprints of the beam dynamics of an accelerator. We stress the high sensitivity of the dynamics to magnetic errors and sextupolar strengths. The second part of this work is dedicated to the analysis of experimental results from two light sources. Together with the ALS accelerator team (Berkeley), we succeeded in obtaining the first experimental frequency map of an accelerator. The agreement with the machine model is very impressive. At the Super-ACO ring, the study of the tune shift with amplitude enabled us to highlight a strong octupolar-like component related to the quadrupole fringe field. The consequences for the beam dynamics are important and give us a better understanding of the measured ring performance. All these results are based on turn-by-turn measurements. Many closely related phenomena are treated, such as response matrix analysis and beam decoherence. (author)
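
    Frequency Map Analysis rests on extracting precise tunes from turn-by-turn data; a windowed FFT peak is the crude forerunner of what the refined Fourier (NAFF) technique determines to much higher precision. A toy Python example recovering a known tune, as an illustration of the idea rather than Laskar's algorithm:

```python
import numpy as np

turns = 1024
nu = 0.31                                        # fractional betatron tune
x = np.cos(2 * np.pi * nu * np.arange(turns))    # turn-by-turn position

spectrum = np.abs(np.fft.rfft(x * np.hanning(turns)))
freqs = np.fft.rfftfreq(turns)                   # in tune units (1/turn)
print(freqs[spectrum.argmax()])                  # ~0.31, to ~1/turns resolution
```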

  17. Determination of efficiency curves for HPGE detector in different counting geometries

    International Nuclear Information System (INIS)

    Rodrigues, Josianne L.; Kastner, Geraldo F.; Ferreira, Andrea V.

    2011-01-01

    This paper presents the first experimental results related to the determination of efficiency curves for an HPGe detector in different counting geometries. The detector is a Canberra GX2520 belonging to CDTN/CNEN. Efficiency curves for point sources were determined using a certified set of gamma sources; these curves were determined for three counting geometries. Following that, efficiency curves for extended samples were determined using standard solutions of radionuclides in 500 ml and 1000 ml Marinelli beakers.
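
    Full-energy-peak efficiency curves of this kind are commonly parameterized as a low-order polynomial in log(efficiency) versus log(energy). A minimal fitting sketch; the line energies and efficiencies below are invented placeholders, not the GX2520 data:

```python
import numpy as np

# Certified gamma lines (keV) and measured full-energy-peak efficiencies
energy = np.array([122, 245, 344, 662, 779, 1112, 1408])
eff = np.array([0.045, 0.032, 0.025, 0.014, 0.012, 0.009, 0.0075])

coeffs = np.polyfit(np.log(energy), np.log(eff), deg=3)

def efficiency(e_kev):
    """Efficiency interpolated from the log-log polynomial fit."""
    return np.exp(np.polyval(coeffs, np.log(e_kev)))

print(efficiency(500.0))
```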

  18. Dual-contrast agent photon-counting computed tomography of the heart: initial experience.

    Science.gov (United States)

    Symons, Rolf; Cork, Tyler E; Lakshmanan, Manu N; Evers, Robert; Davies-Venn, Cynthia; Rice, Kelly A; Thomas, Marvin L; Liu, Chia-Ying; Kappler, Steffen; Ulzheimer, Stefan; Sandfort, Veit; Bluemke, David A; Pourmorteza, Amir

    2017-08-01

    To determine the feasibility of dual-contrast agent imaging of the heart using photon-counting detector (PCD) computed tomography (CT) to simultaneously assess both first-pass and late enhancement of the myocardium. An occlusion-reperfusion canine model of myocardial infarction was used. Gadolinium-based contrast was injected 10 min prior to PCD CT. Iodinated contrast was infused immediately prior to PCD CT, thus capturing late gadolinium enhancement as well as first-pass iodine enhancement. Gadolinium and iodine maps were calculated using a linear material decomposition technique and compared to single-energy (conventional) images. PCD images were compared to in vivo and ex vivo magnetic resonance imaging (MRI) and histology. For infarct versus remote myocardium, contrast-to-noise ratio (CNR) was maximal on late enhancement gadolinium maps (CNR 9.0 ± 0.8, 6.6 ± 0.7, and 0.4 ± 0.4, p contrast agent cardiac imaging is feasible with photon-counting detector CT. These initial proof-of-concept results may provide incentives to develop new k-edge contrast agents, to investigate possible interactions between multiple simultaneously administered contrast agents, and to ultimately bring them to clinical practice.
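
    Linear material decomposition of this kind amounts to solving, voxel by voxel, a small linear system: the attenuation measured in each energy bin is modeled as a weighted sum of basis-material attenuations. A schematic two-material example with invented coefficients:

```python
import numpy as np

# Rows: energy bins; columns: basis materials (iodine, gadolinium).
# Entries are per-unit-concentration attenuation values (illustrative only).
A = np.array([[2.0, 1.1],
              [1.2, 1.8],
              [0.7, 1.4]])

mu_measured = np.array([1.9, 1.7, 1.2])   # attenuation in each bin, one voxel

conc, *_ = np.linalg.lstsq(A, mu_measured, rcond=None)
print(conc)   # estimated iodine and gadolinium concentrations
```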

  19. Rho Ophiuchi Cloud Core Extinction Map

    Science.gov (United States)

    Gibson, D. J.; Rudolph, A.; Barsony, M.

    1997-12-01

    We present an extinction map of a one square degree region (~2.2 pc square) of the core of the star-forming region rho Ophiuchi, derived by the method of star counts. Photometry from the near-infrared J, H, and K band images of Barsony et al. (1997) provided the stellar catalog for this study. From this map an estimate of the mass of the region is made and compared with previous estimates from other methods. Reference: Barsony, M., Kenyon, S.J., Lada, E.A., & Teuben, P.J. 1997, ApJS, 112, 109
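
    In the classical star-count method, extinction follows from comparing on-cloud and off-cloud stellar surface densities through the slope b of the cumulative counts log10 N(m) = b*m + c, so that A = log10(N_ref/N_obs)/b. A tiny illustration with invented numbers:

```python
import numpy as np

def extinction(n_ref, n_obs, slope):
    """Extinction (mag) from star counts in equal areas, given the slope b
    of the cumulative counts log10 N(m) = b*m + c."""
    return np.log10(n_ref / n_obs) / slope

# e.g. 400 field stars vs 90 seen through the cloud, slope b = 0.34
print(extinction(400, 90, 0.34))   # ~1.9 mag
```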

  20. THE HAWAII SCUBA-2 LENSING CLUSTER SURVEY: NUMBER COUNTS AND SUBMILLIMETER FLUX RATIOS

    Energy Technology Data Exchange (ETDEWEB)

    Hsu, Li-Yen; Cowie, Lennox L.; Barger, Amy J. [Institute of Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States); Chen, Chian-Chou [Center for Extragalactic Astronomy, Department of Physics, Durham University, South Road, Durham DH1 3LE (United Kingdom); Wang, Wei-Hao [Academia Sinica Institute of Astronomy and Astrophysics, P.O. Box 23-141, Taipei 10617, Taiwan (China)

    2016-09-20

    We present deep number counts at 450 and 850 μm using the SCUBA-2 camera on the James Clerk Maxwell Telescope. We combine data for six lensing cluster fields and three blank fields to measure the counts over a wide flux range at each wavelength. Thanks to the lensing magnification, our measurements extend to fluxes fainter than 1 mJy and 0.2 mJy at 450 μm and 850 μm, respectively. Our combined data highly constrain the faint end of the number counts. Integrating our counts shows that the majority of the extragalactic background light (EBL) at each wavelength is contributed by faint sources with L_IR < 10^12 L_⊙, corresponding to luminous infrared galaxies (LIRGs) or normal galaxies. By comparing our result with the 500 μm stacking of K-selected sources from the literature, we conclude that the K-selected LIRGs and normal galaxies still cannot fully account for the EBL that originates from sources with L_IR < 10^12 L_⊙. This suggests that many faint submillimeter galaxies may not be included in the UV star formation history. We also explore the submillimeter flux ratio between the two bands for our 450 μm and 850 μm selected sources. At 850 μm, we find a clear relation between the flux ratio and the observed flux. This relation can be explained by a redshift evolution, where galaxies at higher redshifts have higher luminosities and star formation rates. In contrast, at 450 μm, we do not see a clear relation between the flux ratio and the observed flux.

  1. Set of counts by scintillations for atmospheric samplings; Ensemble de comptages par scintillations pour prelevements atmospheriques

    Energy Technology Data Exchange (ETDEWEB)

    Appriou, D.; Doury, A.

    1962-07-01

    The author reports the development of a scintillation-based counting assembly with the following characteristics: a photomultiplier with a wide photocathode, a thin plastic scintillator for beta + alpha counting (with the possibility of mounting an alpha scintillator), a relatively low intrinsic background with respect to the activities to be counted, and a weakly varying efficiency. The authors discuss the counting objective and present equipment tests (counter, proportional amplifier and pre-amplifier, input drawer). They describe the operation of the apparatus, discuss the selection of scintillators, report the study of the intrinsic background (electron background noise, total background noise, background noise reduction), and discuss counts (influence of the external source, sensitivity to alpha radiation, counting homogeneity, minimum detectable activity) and efficiencies.

  2. Standardization of 241Am by digital coincidence counting, liquid scintillation counting and defined solid angle counting

    International Nuclear Information System (INIS)

    Balpardo, C.; Capoulat, M.E.; Rodrigues, D.; Arenillas, P.

    2010-01-01

    The nuclide 241Am decays by alpha emission to 237Np. Most of the decays (84.6%) populate the excited level of 237Np with an energy of 59.54 keV. Digital coincidence counting was applied to standardize a solution of 241Am by alpha-gamma coincidence counting with efficiency extrapolation. Electronic discrimination was implemented with a pressurized proportional counter, and the results were compared with two other independent techniques: liquid scintillation counting using the logical sum of double coincidences in a TDCR array, and defined solid angle counting taking into account activity inhomogeneity in the active deposit. The results show consistency between the three methods within 0.3%. An ampoule of this solution will be sent to the International Reference System (SIR) during 2009. Uncertainties were analysed and compared in detail for the three applied methods.

  3. Smooth incidence maps give valuable insight into Q fever outbreaks in The Netherlands

    Directory of Open Access Journals (Sweden)

    Wim van der Hoek

    2012-11-01

    Full Text Available From 2007 through 2009, The Netherlands faced large outbreaks of human Q fever. Control measures focused primarily on dairy goat farms, because these were implicated as the main source of infection for the surrounding population. However, in other countries outbreaks have mainly been associated with non-dairy sheep, and The Netherlands has many more sheep than goats. Therefore, a public discussion arose about the possible role of non-dairy (meat) sheep in the outbreaks. To inform decision makers about the relative importance of different infection sources, we developed accurate, high-resolution incidence maps for the detection of Q fever hotspots. In the high-incidence area in the south of the country, full postal codes of notified Q fever patients with onset of illness in 2009 were georeferenced. Q fever cases (n = 1,740) were treated as a spatial point process. A 500 x 500 m grid was imposed over the area of interest. The number of cases and the population number were counted in each cell. The number of cases was modelled as an inhomogeneous Poisson process where the underlying incidence was estimated by 2-dimensional P-spline smoothing. Modelling of numbers of Q fever cases based on residential addresses and population size produced smooth incidence maps that clearly showed Q fever hotspots around infected dairy goat farms. No such increased incidence was noted around infected meat sheep farms. We conclude that smooth incidence maps of human notifications give valuable information about the Q fever epidemic and are a promising method to provide decision support for the control of other infectious diseases with an environmental source.
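
    A simplified stand-in for the P-spline fit: grid the cases and the population, smooth both fields, and take their ratio as the incidence surface. The sketch below substitutes Gaussian smoothing for penalized splines; the grid, risk surface and bandwidth are all illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
pop = rng.integers(50, 5000, size=(100, 100)).astype(float)  # people per cell

# Synthetic risk surface with a hotspot near grid cell (30, 30)
yx = np.indices((100, 100))
risk = 1e-4 * (1 + 5 * np.exp(-((yx - 30) ** 2).sum(axis=0) / 200))
cases = rng.poisson(pop * risk)            # Poisson case counts per cell

# Smooth cases and population separately, then take the ratio
incidence = gaussian_filter(cases.astype(float), 3) / gaussian_filter(pop, 3)
```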

  4. Benjamin Thompson, Count Rumford Count Rumford on the nature of heat

    CERN Document Server

    Brown, Sanborn C

    1967-01-01

    Men of Physics: Benjamin Thompson - Count Rumford: Count Rumford on the Nature of Heat covers the significant contributions of Count Rumford in the fields of physics. Count Rumford was born with the name Benjamin Thompson on March 23, 1753, in Woburn, Massachusetts. This book is composed of two parts encompassing 11 chapters, and begins with a presentation of Benjamin Thompson's biography and his interest in physics, particularly as an advocate of an "anti-caloric" theory of heat. The subsequent chapters are devoted to his many discoveries that profoundly affected the physical thought

  5. Some target assay uncertainties for passive neutron coincidence counting

    International Nuclear Information System (INIS)

    Ensslin, N.; Langner, D.G.; Menlove, H.O.; Miller, M.C.; Russo, P.A.

    1990-01-01

    This paper provides some target assay uncertainties for passive neutron coincidence counting of plutonium metal, oxide, mixed oxide, and scrap and waste. The target values are based in part on past user experience and in part on the estimated results from new coincidence counting techniques that are under development. The paper summarizes assay error sources and the new coincidence techniques, and recommends the technique that is likely to yield the lowest assay uncertainty for a given material type. These target assay uncertainties are intended to be useful for NDA instrument selection and assay variance propagation studies for both new and existing facilities. 14 refs., 3 tabs

  6. Precise method for correcting count-rate losses in scintillation cameras

    International Nuclear Information System (INIS)

    Madsen, M.T.; Nickles, R.J.

    1986-01-01

    Quantitative studies performed with scintillation detectors often require corrections for lost data because of the finite resolving time of the detector. Methods that monitor losses by means of a reference source or pulser have unacceptably large statistical fluctuations associated with their correction factors. Analytic methods that model the detector as a paralyzable system require an accurate estimate of the system resolving time. Because the apparent resolving time depends on many variables, including the window setting, source distribution, and the amount of scattering material, significant errors can be introduced by relying on a resolving time obtained from phantom measurements. These problems can be overcome by curve-fitting the data from a reference source to a paralyzable model in which the true total count rate in the selected window is estimated from the observed total rate. The resolving time becomes a free parameter in this method which is optimized to provide the best fit to the observed reference data. The fitted curve has the inherent accuracy of the reference source method with the precision associated with the observed total image count rate. Correction factors can be simply calculated from the ratio of the true reference source rate and the fitted curve. As a result, the statistical uncertainty of the data corrected by this method is not significantly increased
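
    A minimal sketch of the described fit, assuming the paralyzable model m = n * exp(-n * tau) with the resolving time tau left as a free parameter; the reference data below are synthetic stand-ins for the observed rates.

      # Fit of reference-source data to a paralyzable dead-time model.
      import numpy as np
      from scipy.optimize import curve_fit

      def paralyzable(n_true, tau):
          # Observed rate of a paralyzable detector with resolving time tau.
          return n_true * np.exp(-n_true * tau)

      rng = np.random.default_rng(1)
      n_true = np.array([1e4, 5e4, 1e5, 2e5, 4e5])  # true total rates (cps)
      m_obs = paralyzable(n_true, 2e-6) * (1.0 + 0.01 * rng.standard_normal(5))

      (tau_fit,), _ = curve_fit(paralyzable, n_true, m_obs, p0=[1e-6])
      correction = n_true / paralyzable(n_true, tau_fit)  # loss-correction factors
      print(f"fitted resolving time: {tau_fit * 1e6:.2f} microseconds")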

  7. PhyloGeoViz: a web-based program that visualizes genetic data on maps.

    Science.gov (United States)

    Tsai, Yi-Hsin E

    2011-05-01

    The first step of many population genetic studies is the simple visualization of allele frequencies on a landscape. This basic data exploration can be challenging without proprietary software, and the manual plotting of data is cumbersome and unfeasible at large sample sizes. I present an open source, web-based program that plots any kind of frequency or count data as pie charts in Google Maps (Google Inc., Mountain View, CA). Pie polygons are then exportable to Google Earth (Google Inc.), a free Geographic Information Systems platform. Import of genetic data into Google Earth allows phylogeographers access to a wealth of spatial information layers integral to forming hypotheses and understanding patterns in the data. © 2010 Blackwell Publishing Ltd.

  8. An Open Source Web Map Server Implementation For California and the Digital Earth: Lessons Learned

    Science.gov (United States)

    Sullivan, D. V.; Sheffner, E. J.; Skiles, J. W.; Brass, J. A.; Condon, Estelle (Technical Monitor)

    2000-01-01

    This paper describes an Open Source implementation of the Open GIS Consortium's Web Map interface. It is based on the very popular Apache WWW Server, the Sun Microsystems Java Servlet Development Kit, and a C language shared library interface to a spatial datastore. This server was initially written as a proof of concept, to support a National Aeronautics and Space Administration (NASA) Digital Earth test bed demonstration. It will also find use in the California Land Science Information Partnership (CaLSIP), a joint program between NASA and the state of California. At least one Web Map-enabled server will be installed in every one of the state's 58 counties. This server will form a basis for a simple, easily maintained installation for those entities that do not yet require one of the larger, more expensive, commercial offerings.

  9. The distribution of polarized radio sources >15 μJy IN GOODS-N

    International Nuclear Information System (INIS)

    Rudnick, L.; Owen, F. N.

    2014-01-01

    We present deep Very Large Array observations of the polarization of radio sources in the GOODS-N field at 1.4 GHz at resolutions of 1.6″ and 10″. At 1.6″, we find that the peak flux cumulative number count distribution is N(>p) ∼ 45 (p/30 μJy)^(−0.6) per square degree above a detection threshold of 14.5 μJy. This represents a break from the steeper slopes at higher flux densities, resulting in fewer sources predicted for future surveys with the Square Kilometer Array and its precursors. It provides a significant challenge for using background rotation measures (RMs) to study clusters of galaxies or individual galaxies. Most of the polarized sources are well above our detection limit, and they are also radio galaxies that are well resolved even at 10″, with redshifts from ∼0.2-1.9. We determined a total polarized flux for each source by integrating the 10″ polarized intensity maps, as will be done by upcoming surveys such as POSSUM. These total polarized fluxes are a factor of two higher, on average, than the peak polarized flux at 1.6″; this would increase the number counts by ∼50% at a fixed flux level. The detected sources have RMs with a characteristic rms scatter of ∼11 rad m^(−2) around the local Galactic value, after eliminating likely outliers. The median fractional polarization from all total intensity sources does not continue the trend of increasing at lower flux densities, as seen for stronger sources. The changes in the polarization characteristics seen at these low fluxes likely represent the increasing dominance of star-forming galaxies.
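
    Taken at face value, the quoted power law is easy to evaluate at other survey depths; the thresholds below are arbitrary illustrations, not values from the paper.

      # Evaluating the quoted cumulative count law at a few thresholds.
      def n_above(p_microjy):
          # N(>p) ~ 45 * (p / 30 uJy)^(-0.6) sources per square degree
          return 45.0 * (p_microjy / 30.0) ** -0.6

      for p in (15, 30, 100, 300):  # illustrative thresholds (uJy)
          print(f"N(>{p} uJy) ~ {n_above(p):.1f} per square degree")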

  10. Study on advancement of in vivo counting using mathematical simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kinase, Sakae [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2003-05-01

    To obtain an assessment of the committed effective dose, individual monitoring for the estimation of intakes of radionuclides is required. For individual monitoring of exposure to intakes of radionuclides, direct measurement of radionuclides in the body - in vivo counting - is very useful. To advance precision in vivo counting that fulfils the requirements of the ICRP 1990 recommendations, several problems have been studied, such as the investigation of uncertainties in estimates of body burdens by in vivo counting and the selection of ways to improve the precision. In the present study, a calibration technique for in vivo counting applications using Monte Carlo simulation was developed. The advantage of the technique is that counting efficiency can be obtained for various shapes and sizes that are very difficult to change in physical phantoms. To validate the calibration technique, the response functions and counting efficiencies of a whole-body counter installed at JAERI were evaluated using the simulation and measurements. The calculations are in good agreement with the measurements. A method for the determination of counting efficiency curves as a function of energy was developed using the present technique, and a physique correction equation was derived from the relationship between the parameters of the correction factor and the counting efficiencies of the JAERI whole-body counter. The uncertainties in body burdens of {sup 137}Cs estimated with the JAERI whole-body counter were also investigated using the Monte Carlo simulation and measurements. It was found that the uncertainties of body burdens estimated with the whole-body counter are strongly dependent on various sources of uncertainty, such as the radioactivity distribution within the body and counting statistics. Furthermore, an evaluation method for the peak efficiencies of a Ge semi-conductor detector was developed by Monte Carlo simulation for optimum arrangement of Ge semi-conductor detectors ...
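
    One elementary ingredient of such Monte Carlo calibrations: an efficiency estimated from simulated decays is a binomial proportion, so its statistical uncertainty follows directly from the number of simulated histories. The numbers below are invented for illustration.

      # Statistical uncertainty of a Monte Carlo counting efficiency.
      import math

      n_emitted = 1_000_000   # simulated source decays
      n_detected = 123_456    # counts scored in the simulated detector peak

      eff = n_detected / n_emitted
      sigma = math.sqrt(eff * (1.0 - eff) / n_emitted)  # binomial standard error
      print(f"efficiency = {eff:.5f} +/- {sigma:.5f}")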

  11. Counting cormorants

    DEFF Research Database (Denmark)

    Bregnballe, Thomas; Carss, David N; Lorentsen, Svein-Håkon

    2013-01-01

    This chapter focuses on Cormorant population counts for both summer (i.e. breeding) and winter (i.e. migration, winter roosts) seasons. It also explains differences in the data collected from undertaking ‘day’ versus ‘roost’ counts, gives some definitions of the term ‘numbers’, and presents two...

  12. Short communication: Repeatability of differential goat bulk milk culture and associations with somatic cell count, total bacterial count, and standard plate count.

    Science.gov (United States)

    Koop, G; Dik, N; Nielen, M; Lipman, L J A

    2010-06-01

    The aims of this study were to assess how different bacterial groups in bulk milk are related to bulk milk somatic cell count (SCC), bulk milk total bacterial count (TBC), and bulk milk standard plate count (SPC) and to measure the repeatability of bulk milk culturing. On 53 Dutch dairy goat farms, 3 bulk milk samples were collected at intervals of 2 wk. The samples were cultured for SPC, coliform count, and staphylococcal count and for the presence of Staphylococcus aureus. Furthermore, SCC (Fossomatic 5000, Foss, Hillerød, Denmark) and TBC (BactoScan FC 150, Foss) were measured. Staphylococcal count was correlated to SCC (r=0.40), TBC (r=0.51), and SPC (r=0.53). Coliform count was correlated to TBC (r=0.33), but not to any of the other variables. Staphylococcus aureus did not correlate to SCC. The contribution of the staphylococcal count to the SPC was 31%, whereas the coliform count comprised only 1% of the SPC. The agreement of the repeated measurements was low. This study indicates that staphylococci in goat bulk milk are related to SCC and make a significant contribution to SPC. Because of the high variation in bacterial counts, repeated sampling is necessary to draw valid conclusions from bulk milk culturing. 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  13. The History of Cartographic Sources Development

    Directory of Open Access Journals (Sweden)

    L. Volkotrub

    2016-07-01

    Full Text Available Cartographic sources are a variety of descriptive sources. They include historical and geographical maps and schematic maps. Map images are a special kind of model of real phenomena that conveys their quantitative and qualitative characteristics, structure, interconnections and dynamics in graphic form. The prototypes of maps appeared as a way of transmitting information about the world; people began to use this way of communication long before the appearance of writing. The quality of map images matched the evolution of the techniques and methods of mapping and publishing. The general development of cartographic sources is determined primarily by three factors: the development of science and technology, the needs of society for different cartographic works, and the political and economic situation of a country. Given this, the map is a self-sufficient phenomenon, and its source-study analysis is based on an understanding of the invariance of its perception. Modern theoretical concepts show the invariance of maps. Specifically, the map is viewed in the following aspects: (1) it is one of the universal models of the land and of existing natural and social processes; (2) it is one of the tools of research and forecasting; (3) it is a specific language formation; (4) it is a method of transferring information. As a source, a map may contain important information about the physical geography, geology, hydrology, political-administrative division, population, flora and fauna of a particular area in a particular period. Mostly, cartographic sources are complex, because they contain a lot of cognitive and historical information.

  14. Direct dose mapping versus energy/mass transfer mapping for 4D dose accumulation: fundamental differences and dosimetric consequences.

    Science.gov (United States)

    Li, Haisen S; Zhong, Hualiang; Kim, Jinkoo; Glide-Hurst, Carri; Gulam, Misbah; Nurushev, Teamour S; Chetty, Indrin J

    2014-01-06

    The direct dose mapping (DDM) and energy/mass transfer (EMT) mapping are two essential algorithms for accumulating the dose from different anatomic phases to the reference phase when there is organ motion or tumor/tissue deformation during the delivery of radiation therapy. DDM is based on interpolation of the dose values from one dose grid to another and thus lacks rigor in defining the dose when there are multiple dose values mapped to one dose voxel in the reference phase due to tissue/tumor deformation. On the other hand, EMT counts the total energy and mass transferred to each voxel in the reference phase and calculates the dose by dividing the energy by mass. Therefore it is based on fundamentally sound physics principles. In this study, we implemented the two algorithms and integrated them within the Eclipse treatment planning system. We then compared the clinical dosimetric difference between the two algorithms for ten lung cancer patients receiving stereotactic radiosurgery treatment, by accumulating the delivered dose to the end-of-exhale (EE) phase. Specifically, the respiratory period was divided into ten phases and the dose to each phase was calculated and mapped to the EE phase and then accumulated. The displacement vector field generated by Demons-based registration of the source and reference images was used to transfer the dose and energy. The DDM and EMT algorithms produced noticeably different cumulative dose in the regions with sharp mass density variations and/or high dose gradients. For the planning target volume (PTV) and internal target volume (ITV) minimum dose, the difference was up to 11% and 4% respectively. This suggests that DDM might not be adequate for obtaining an accurate dose distribution of the cumulative plan, instead, EMT should be considered.
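
    A one-dimensional toy contrast of the two schemes, with hypothetical voxel values (the paper works with full 3-D dose grids and Demons displacement fields): DDM leaves the result ill-defined when several source voxels map to one reference voxel, while EMT accumulates energy and mass separately and divides at the end.

      # Toy 1-D comparison of direct dose mapping (DDM) vs energy/mass transfer (EMT).
      import numpy as np

      dose_src = np.array([1.0, 1.2, 5.0, 5.2])   # dose in source-phase voxels (Gy)
      mass_src = np.array([1.0, 1.0, 0.5, 0.5])   # voxel masses (g)
      target = np.array([0, 1, 2, 2])             # reference voxel each source voxel maps to

      # DDM: where voxels collide, one interpolated value simply wins.
      ddm = np.zeros(3)
      ddm[target] = dose_src                      # voxel 2 gets two values; last one overwrites

      # EMT: accumulate energy and mass, then divide.
      energy = np.bincount(target, weights=dose_src * mass_src, minlength=3)
      mass = np.bincount(target, weights=mass_src, minlength=3)
      emt = energy / np.maximum(mass, 1e-12)

      print("DDM:", ddm)   # [1.0, 1.2, 5.2]
      print("EMT:", emt)   # [1.0, 1.2, 5.1]  (mass-weighted, physically consistent)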

  15. Smart "geomorphological" map browsing - a tale about geomorphological maps and the internet

    Science.gov (United States)

    Geilhausen, M.; Otto, J.-C.

    2012-04-01

    With the digital production of geomorphological maps, the dissemination of research outputs now extends beyond simple paper products. Internet technologies can contribute to both, the dissemination of geomorphological maps and access to geomorphologic data and help to make geomorphological knowledge available to a greater public. Indeed, many national geological surveys employ end-to-end digital workflows from data capture in the field to final map production and dissemination. This paper deals with the potential of web mapping applications and interactive, portable georeferenced PDF maps for the distribution of geomorphological information. Web mapping applications such as Google Maps have become very popular and widespread and increased the interest and access to mapping. They link the Internet with GIS technology and are a common way of presenting dynamic maps online. The GIS processing is performed online and maps are visualised in interactive web viewers characterised by different capabilities such as zooming, panning or adding further thematic layers, with the map refreshed after each task. Depending on the system architecture and the components used, advanced symbology, map overlays from different applications and sources and their integration into a Desktop GIS are possible. This interoperability is achieved through the use of international open standards that include mechanisms for the integration and visualisation of information from multiple sources. The portable document format (PDF) is commonly used for printing and is a standard format that can be processed by many graphic software and printers without loss of information. A GeoPDF enables the sharing of geospatial maps and data in PDF documents. Multiple, independent map frames with individual spatial reference systems are possible within a GeoPDF, for example, for map overlays or insets. Geospatial functionality of a GeoPDF includes scalable map display, layer visibility control, access to attribute

  16. On the Use of Second-Order Descriptors To Predict Queueing Behavior of MAPs

    DEFF Research Database (Denmark)

    Andersen, Allan T.; Nielsen, Bo Friis

    2002-01-01

    The contributions of this paper are the following: We derive a formula for the IDI (Index of Dispersion for Intervals) for the Markovian Arrival Process (MAP). We show that two-state MAPs with identical fundamental rate, IDI and IDC (Index of Dispersion for Counts), define interval stationary point processes ...
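
    For reference, the standard textbook definitions of the two second-order descriptors, in terms of the counting process N(t) and the stationary interarrival times X_i (these are the usual definitions, not this paper's derivations):

      \[
      \mathrm{IDC}(t) \;=\; \frac{\operatorname{Var}[N(t)]}{\mathrm{E}[N(t)]},
      \qquad
      \mathrm{IDI}(k) \;=\; \frac{\operatorname{Var}[X_1 + \cdots + X_k]}{k\,(\mathrm{E}[X])^2}.
      \]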

  17. Digital intelligence sources transporter

    International Nuclear Information System (INIS)

    Zhang Zhen; Wang Renbo

    2011-01-01

    Starting from the collection of particle-ray counts, infrared data communication, real-time monitoring and alarming, GPRS communication and related issues, this work realizes the digital management of radioactive sources and completes real-time monitoring of all aspects, including the storing, transporting and using of radioactive sources, by building an intelligent radioactive source transporter; as a result, reliable security supervision of radioactive sources is achieved. (authors)

  18. Source SDK development essentials

    CERN Document Server

    Bernier, Brett

    2014-01-01

    The Source Authoring Tools are the pieces of software used to create custom content for games made with Valve's Source engine. Creating mods and maps for your games without any programming knowledge can be time consuming. These tools allow you to create your own maps and levels without the need for any coding knowledge. All the tools that you need to start creating your own levels are built-in and ready to go! This book will teach you how to use the Authoring Tools provided with Source games and will guide you in creating your first maps and mods (modifications) using Source. You will learn how ...

  19. The development of an avian wind sensitivity map for South Africa

    Energy Technology Data Exchange (ETDEWEB)

    Retief, Ernst; Anderson, M. D.; Harebottle, D.; Jenkins, A.; Simmons, R.; Smit, H.A.; Rooyen, C. Van; Smallie, J.

    2011-07-01

    Full text: Wind energy is a relatively new industry in South Africa. This provides South Africans with the opportunity to ensure that wind farms are placed in areas that are of low sensitivity to birds. With this in mind, two environmental NGOs, BirdLife South Africa and the Endangered Wildlife Trust, designed an Avian Wind Sensitivity Map to provide guidance to the wind farm industry about the location of wind turbines. The map is the first of its kind in Africa. The purpose of the map is to provide an indication of the geographic areas in South Africa where the possible establishment of wind farms might have a negative impact on birds. Such a map identifies areas of bird sensitivity, i.e. sites where threatened, endemic and vulnerable bird species occur. The map was designed using a variety of data sources, specifically data acquired through citizen science projects such as the Southern African Bird Atlas Project 2 and the Coordinated Waterbird Counts Project. The data were analysed using data priority scores based on the conservation concern of each species as well as the risk of a species colliding with wind turbines and associated infrastructure. The formal protection status of a geographic area was also taken into account. Extensive use was made of GIS tools to collate, analyse and present the data. A number of African countries are considering establishing wind farms. The lessons learnt during the design process can be used by other African countries as the basis for similar maps, which can serve as a mitigation measure against the loss of vulnerable species. (Author)

  20. Neutron diffraction measurements at the INES diffractometer using a neutron radiative capture based counting technique

    Energy Technology Data Exchange (ETDEWEB)

    Festa, G. [Centro NAST, Universita degli Studi di Roma Tor Vergata, Roma (Italy); Pietropaolo, A., E-mail: antonino.pietropaolo@roma2.infn.it [Centro NAST, Universita degli Studi di Roma Tor Vergata, Roma (Italy); Grazzi, F.; Barzagli, E. [CNR-ISC Firenze (Italy); Scherillo, A. [CNR-ISC Firenze (Italy); ISIS facility Rutherford Appleton Laboratory (United Kingdom); Schooneveld, E.M. [ISIS facility Rutherford Appleton Laboratory (United Kingdom)

    2011-10-21

    The global shortage of {sup 3}He gas is an issue to be addressed in neutron detection. In the context of the research and development activity related to the replacement of {sup 3}He for neutron counting systems, neutron diffraction measurements performed on the INES beam line at the ISIS pulsed spallation neutron source are presented. For these measurements two different neutron counting devices have been used: a 20 bar pressure squashed {sup 3}He tube and a Yttrium-Aluminum-Perovskite scintillation detector. The scintillation detector was coupled to a cadmium sheet that registers the prompt radiative capture gamma rays generated by the (n,{gamma}) nuclear reactions occurring in cadmium. The assessment of the scintillator based counting system was done by performing a Rietveld refinement analysis on the diffraction pattern from an ancient Japanese blade and comparing the results with those obtained by a {sup 3}He tube placed at the same angular position. The results obtained demonstrate the considerable potential of the proposed counting approach based on the radiative capture gamma rays at spallation neutron sources.

  1. Variations in neutrophil count in preterm infants with respiratory distress syndrome who subsequently developed chronic lung disease.

    Science.gov (United States)

    Kohelet, D; Arbel, E; Ballin, A; Goldberg, M

    2000-01-01

    Neutrophil counts were studied in 62 preterm infants receiving mechanical ventilation for neonatal respiratory distress syndrome (NRDS). Exploratory analysis indicated that the severity of NRDS, as demonstrated by fractional inspired oxygen (FiO2), mean airway pressure (MAP), arterial-alveolar PO2 ratio (a/APO2) and oxygenation index (OI), was correlated with the percentage change of neutrophil counts during the first 5 days of life. Further analysis demonstrated that infants with NRDS who subsequently developed chronic lung disease (CLD) (n = 21) had statistically significant differences in the variation of neutrophil counts when compared with the remainder (n = 41) without CLD (-35.0% +/- 4.3 vs. -16.9% +/- 5.8). It is suggested that variations in neutrophil counts during the first 5 days of life may be found in infants with NRDS who subsequently develop CLD and that these changes may have predictive value regarding the development of CLD.

  2. Repeatability of differential goat bulk milk culture and associations with somatic cell count, total bacterial count, and standard plate count

    NARCIS (Netherlands)

    Koop, G.; Dik, N.; Nielen, M.; Lipman, L.J.A.

    2010-01-01

    The aims of this study were to assess how different bacterial groups in bulk milk are related to bulk milk somatic cell count (SCC), bulk milk total bacterial count (TBC), and bulk milk standard plate count (SPC) and to measure the repeatability of bulk milk culturing. On 53 Dutch dairy goat farms,

  3. Land cover change map comparisons using open source web mapping technologies

    Science.gov (United States)

    Erik Lindblom; Ian Housman; Tony Guay; Mark Finco; Kevin. Megown

    2015-01-01

    The USDA Forest Service is evaluating the status of current landscape change maps and assessing gaps in their information content. These activities have been occurring under the auspices of the Landscape Change Monitoring System (LCMS) project, which is a joint effort between USFS Research, USFS Remote Sensing Applications Center (RSAC), USGS Earth Resources...

  4. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'

    DEFF Research Database (Denmark)

    de Nijs, Robin

    2015-01-01

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. Redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods, and compared to the theoretical values for a Poisson distribution. Statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving of the half-count image had a severe impact on counting statistics ...

  5. Choropleth map legend design for visualizing community health disparities

    Directory of Open Access Journals (Sweden)

    Cromley Ellen K

    2009-09-01

    Full Text Available Background: Disparities in health outcomes across communities are a central concern in public health and epidemiology. Health disparities research often links differences in health outcomes to other social factors like income. Choropleth maps of health outcome rates show the geographical distribution of health outcomes. This paper illustrates the use of cumulative frequency map legends for visualizing how the health events are distributed in relation to social characteristics of community populations. The approach uses two graphs in the cumulative frequency legend to highlight the difference between the raw count of the health events and the raw count of the social characteristic, like low income, in the geographical areas of the map. The approach is applied to mapping publicly available data on low birth weight by town in Connecticut and Lyme disease incidence by town in Connecticut in relation to income. The steps involved in creating these legends are described in detail so that health analysts can adopt this approach. Results: The different health problems, low birth weight and Lyme disease, have different cumulative frequency signatures. Graphing the poverty population on the cumulative frequency legends revealed that the poverty population is distributed differently with respect to the two different health problems mapped here. Conclusion: Cumulative frequency legends can be useful supplements for choropleth maps. These legends can be constructed using readily available software. They contain all of the information found in standard choropleth map legends, and they can be used with any choropleth map classification scheme. Cumulative frequency legends effectively communicate the proportion of areas, the proportion of health events, and/or the proportion of the denominator population in which the health events occurred that falls within each class interval. They illuminate the context of disease through graphing associations with other ...

  6. Standardization of {sup 241}Am by digital coincidence counting, liquid scintillation counting and defined solid angle counting

    Energy Technology Data Exchange (ETDEWEB)

    Balpardo, C., E-mail: balpardo@cae.cnea.gov.a [Laboratorio de Metrologia de Radioisotopos, CNEA, Buenos Aires (Argentina); Capoulat, M.E.; Rodrigues, D.; Arenillas, P. [Laboratorio de Metrologia de Radioisotopos, CNEA, Buenos Aires (Argentina)

    2010-07-15

    The nuclide {sup 241}Am decays by alpha emission to {sup 237}Np. Most of the decays (84.6%) populate the excited level of {sup 237}Np with an energy of 59.54 keV. Digital coincidence counting was applied to standardize a solution of {sup 241}Am by alpha-gamma coincidence counting with efficiency extrapolation. Electronic discrimination was implemented with a pressurized proportional counter, and the results were compared with two other independent techniques: liquid scintillation counting using the logical sum of double coincidences in a TDCR array, and defined solid angle counting taking into account activity inhomogeneity in the active deposit. The results of the three methods are consistent to within 0.3%. An ampoule of this solution will be sent to the International Reference System (SIR) during 2009. Uncertainties were analysed and compared in detail for the three applied methods.

  7. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.

    Science.gov (United States)

    de Nijs, Robin

    2015-07-21

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. Redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods, and compared to the theoretical values for a Poisson distribution. Statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was not affected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or less) images from full-count images. It correctly simulates the statistical properties, also in the case of rounding off of the images.
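
    A small numpy sketch of the three approaches on a synthetic image, with binomial thinning standing in for the Poisson resampling step: thinning a Poisson count with probability 1/2 yields an exactly Poisson half-count image, while redrawing around half the measured count inflates the variance.

      # Half-count simulation: binomial thinning vs Poisson/Gaussian redrawing.
      import numpy as np

      rng = np.random.default_rng(42)
      full = rng.poisson(50.0, size=(128, 128))     # synthetic full-count image

      half_thinned = rng.binomial(full, 0.5)        # resampling (thinning): stays Poisson
      half_poisson = rng.poisson(full / 2.0)        # Poisson redrawing: extra variance
      half_gauss = np.clip(rng.normal(full / 2.0, np.sqrt(full / 2.0)), 0, None)

      for name, img in [("thinning", half_thinned),
                        ("poisson redraw", half_poisson),
                        ("gaussian redraw", half_gauss)]:
          # For a Poisson image, mean and variance should both be ~25.
          print(f"{name}: mean={img.mean():.2f}, var={img.var():.2f}")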

  8. Inconsistencies in authoritative national paediatric workforce data sources.

    Science.gov (United States)

    Allen, Amy R; Doherty, Richard; Hilton, Andrew M; Freed, Gary L

    2017-12-01

    Objective National health workforce data are used in workforce projections, policy and planning. If data to measure the current effective clinical medical workforce are not consistent, accurate and reliable, policy options pursued may not be aligned with Australia's actual needs. The aim of the present study was to identify any inconsistencies and contradictions in the numerical count of paediatric specialists in Australia, and discuss issues related to the accuracy of collection and analysis of medical workforce data. Methods This study compared respected national data sources regarding the number of medical practitioners in eight fields of paediatric speciality medical (non-surgical) practice. It also counted the number of doctors listed on the websites of speciality paediatric hospitals and clinics as practicing in these eight fields. Results Counts of medical practitioners varied markedly for all specialties across the data sources examined. In some fields examined, the range of variability across data sources exceeded 450%. Conclusions The national datasets currently available from federal and speciality sources do not provide consistent or reliable counts of the number of medical practitioners. The lack of an adequate baseline for the workforce prevents accurate predictions of future needs to provide the best possible care of children in Australia. What is known about the topic? Various national data sources contain counts of the number of medical practitioners in Australia. These data are used in health workforce projections, policy and planning. What does this paper add? The present study found that the current data sources do not provide consistent or reliable counts of the number of practitioners in eight selected fields of paediatric speciality practice. There are several potential issues in the way workforce data are collected or analysed that cause the variation between sources to occur. What are the implications for practitioners? Without accurate

  9. Complete Mapping of Complex Disulfide Patterns with Closely-Spaced Cysteines by In-Source Reduction and Data-Dependent Mass Spectrometry

    DEFF Research Database (Denmark)

    Cramer, Christian N; Kelstrup, Christian D; Olsen, Jesper V

    2017-01-01

    bonds are present in complicated patterns. This includes the presence of disulfide bonds in nested patterns and closely spaced cysteines. Unambiguous mapping of such disulfide bonds typically requires advanced MS approaches. In this study, we exploited in-source reduction (ISR) of disulfide bonds during...... the electrospray ionization process to facilitate disulfide bond assignments. We successfully developed a LC-ISR-MS/MS methodology to use as an online and fully automated partial reduction procedure. Postcolumn partial reduction by ISR provided fast and easy identification of peptides involved in disulfide bonding......Mapping of disulfide bonds is an essential part of protein characterization to ensure correct cysteine pairings. For this, mass spectrometry (MS) is the most widely used technique due to fast and accurate characterization. However, MS-based disulfide mapping is challenged when multiple disulfide...

  10. Mapping of MAC Address with Moving WiFi Scanner

    Directory of Open Access Journals (Sweden)

    Arief Hidayat

    2017-10-01

    Full Text Available Recently, WiFi has become one of the most useful technologies for detecting and counting MAC addresses. This paper describes the use of a WiFi scanner carried on a bus that circulated the route seven times. The method uses WiFi and GPS to count MAC addresses as raw data from pedestrian smartphones, bus passengers, or other WiFi devices near the bus as it travels around the route. Seven processing steps are applied to produce the WiFi data map.

  11. Applied categorical and count data analysis

    CERN Document Server

    Tang, Wan; Tu, Xin M

    2012-01-01

    Introduction: Discrete Outcomes; Data Source; Outline of the Book; Review of Key Statistical Results; Software. Contingency Tables: Inference for One-Way Frequency Table; Inference for 2 x 2 Table; Inference for 2 x r Tables; Inference for s x r Table; Measures of Association. Sets of Contingency Tables: Confounding Effects; Sets of 2 x 2 Tables; Sets of s x r Tables. Regression Models for Categorical Response: Logistic Regression for Binary Response; Inference about Model Parameters; Goodness of Fit; Generalized Linear Models; Regression Models for Polytomous Response. Regression Models for Count Response: Poisson Regression Model ...

  12. Multiplicity counting from fission detector signals with time delay effects

    Science.gov (United States)

    Nagy, L.; Pázsit, I.; Pál, L.

    2018-03-01

    In recent work, we have developed the theory of using the first three auto- and joint central moments of the currents of up to three fission chambers to extract the singles, doubles and triples count rates of traditional multiplicity counting (Pázsit and Pál, 2016; Pázsit et al., 2016). The objective is to elaborate a method for determining the fissile mass, neutron multiplication, and (α, n) neutron emission rate of an unknown assembly of fissile material from the statistics of the fission chamber signals, analogous to the traditional multiplicity counting methods with detectors in the pulse mode. Such a method would be an alternative to He-3 detector systems, which would be free from the dead time problems that would be encountered in high counting rate applications, for example the assay of spent nuclear fuel. A significant restriction of our previous work was that all neutrons born in a source event (spontaneous fission) were assumed to be detected simultaneously, which is not fulfilled in reality. In the present work, this restriction is eliminated, by assuming an independent, identically distributed random time delay for all neutrons arising from one source event. Expressions are derived for the same auto- and joint central moments of the detector current(s) as in the previous case, expressed with the singles, doubles, and triples (S, D and T) count rates. It is shown that if the time-dispersion of neutron detections is of the same order of magnitude as the detector pulse width, as they typically are in measurements of fast neutrons, the multiplicity rates can still be extracted from the moments of the detector current, although with more involved calibration factors. The presented formulae, and hence also the performance of the proposed method, are tested by both analytical models of the time delay as well as with numerical simulations. Methods are suggested also for the modification of the method for large time delay effects (for thermalised neutrons).
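
    For orientation, the moment approach builds on Campbell-type relations for a filtered Poisson process. For a current I(t) = Σ_k q_k f(t − t_k) generated by events at rate s with pulse shape f and event charges q_k, the first three central moments are (standard shot-noise results, not the papers' generalized multiplicity expressions):

      \[
      \langle I \rangle = s\,\overline{q}\int f(t)\,dt, \qquad
      \mu_2 = s\,\overline{q^2}\int f^2(t)\,dt, \qquad
      \mu_3 = s\,\overline{q^3}\int f^3(t)\,dt .
      \]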

  13. Accommodating Binary and Count Variables in Mediation: A Case for Conditional Indirect Effects

    Science.gov (United States)

    Geldhof, G. John; Anthony, Katherine P.; Selig, James P.; Mendez-Luck, Carolyn A.

    2018-01-01

    The existence of several accessible sources has led to a proliferation of mediation models in the applied research literature. Most of these sources assume endogenous variables (e.g., M and Y) have normally distributed residuals, precluding models of binary and/or count data. Although a growing body of literature has expanded mediation models to…

  14. Map of the Physical Sciences

    Energy Technology Data Exchange (ETDEWEB)

    Boyack, Kevin W.

    1999-07-02

    Various efforts to map the structure of science have been undertaken over the years. Using a new tool, VxInsight{trademark}, we have mapped and displayed 3000 journals in the physical sciences. This map is navigable and interactively reveals the structure of science at many different levels. Science mapping studies are typically focused at either the macro- or the micro-level. At a macro-level, such studies seek to determine the basic structural units of science and their interrelationships. The majority of studies are performed at the discipline or specialty level, and seek to inform science policy and technical decision makers. Studies at both levels probe the dynamic nature of science, and the implications of the changes. A variety of databases and methods have been used for these studies. Primary among the databases are the citation indices (SCI and SSCI) from the Institute for Scientific Information, which have gained widespread acceptance for bibliometric studies. Maps are most often based on computed similarities between journal articles (co-citation), keywords or topics (co-occurrence or co-classification), or journals (journal-journal citation counts). Once the similarity matrix is defined, algorithms are used to cluster the data.

  15. Influence of MAP and Multi-layer Flexible Pouches on Clostridium Count of Smoked Kutum Fish (Rutilus frisii kutum

    Directory of Open Access Journals (Sweden)

    Nazanin Zand

    2016-11-01

    Full Text Available In this study, the effects of three gas mixtures (of carbon dioxide, nitrogen and oxygen), of vacuum conditions, and of flexible multi-layer films were evaluated on the Clostridium count of smoked kutum fish (Rutilus frisii kutum) at ambient conditions (T = 25 °C). Ordinary-condition (control) packaging was compared with four types of modified atmosphere packaging: (70% N2 + 30% CO2), (30% N2 + 70% CO2), (45% CO2 + 45% N2 + 10% O2) and vacuum conditions. Smoked kutum fish were packaged in three kinds of flexible pouches: a 3-layer film (PET(12)/AL(12)/LLD(100)), a 4-layer film (PET(12)/AL(7)/PET(12)/LLD(100)), and a 3-layer film (PET(12)/AL(7)/LLD(100)). Packed samples underwent microbial testing (Clostridium count) at different times over 60 days, with 15 treatments and 3 runs; statistical analysis and comparison of the data were done with SAS software (ver. 9.1) and Duncan's new multiple range test at a confidence level of 95% (P < 0.05). The shelf life of the samples (according to Clostridium count) was 60, 58, 45 and 40 days for the 4-layer pouches under conditions 1, 2, 3 and vacuum, respectively; 55, 50 and 40 days for the 3-layer (AL:12) pouches under conditions 1, 2 and 3, and about 35 days under vacuum; and 45, 40, 35 and 30 days for the 3-layer (AL:7) pouches under conditions 1, 2, 3 and vacuum. The Clostridium counts showed that increasing the CO2 concentration prolonged shelf life, and the counts of samples under the various conditions differed significantly over the period of the experiment. Based on these results, the best condition belonged to the treatment under a modified atmosphere of 70% CO2 in the 4-layer container, owing to that container's thickness (131 μm), its low water-vapour permeability, and the antimicrobial effect of the higher CO2 percentage.

  16. Clean Hands Count

    Medline Plus

    Full Text Available ... Like this video? Sign in to make your opinion count. Sign in 131 2 Don't like this video? Sign in to make your opinion count. Sign in 3 Loading... Loading... Transcript The ...

  17. Testing Metadata Existence of Web Map Services

    Directory of Open Access Journals (Sweden)

    Jan Růžička

    2011-05-01

    Full Text Available For a general user it is quite common to use data sources available on the WWW. Almost all GIS software allows the use of data sources available via the Web Map Service (ISO/OGC standard) interface. The opportunity to use different sources and combine them brings a lot of problems that have been discussed many times at conferences and in journal papers. One of the problems stems from the non-existence of metadata for published sources. The question was: were the discussions effective? The article is partly based on a comparison of the metadata situation between the years 2007 and 2010. The second part of the article is focused only on the situation in 2010. The paper is written in the context of research on intelligent map systems that can be used for automatic or semi-automatic map creation or map evaluation.

  18. Noun Countability; Count Nouns and Non-count Nouns, What are the Syntactic Differences Between them?

    Directory of Open Access Journals (Sweden)

    Azhar A. Alkazwini

    2016-11-01

    Full Text Available Words that function as the subjects of verbs, or as objects of verbs or prepositions, and which can have a plural form and a possessive ending, are known as nouns. They are described as referring to persons, places, things, states, or qualities, and might also be used as attributive modifiers. In this paper, classes and subclasses of nouns are presented; then noun countability, branching into count and non-count nouns, is discussed. A number of examples illustrating differences between count and non-count nouns are presented, covering determiner-head co-occurrence restrictions of number and subject-verb agreement, along with some exceptions to this agreement rule. The paper also illustrates lexically inherent number in nouns and how inherently plural nouns are classified in terms of (+/-) count. It then discusses the partitive construction of count and non-count nouns and nouns as attributive modifiers, and concludes that there are syntactic differences between count and non-count nouns in the English language.

  19. Calibration of nuclides by gamma-gamma sum peak coincidence counting

    International Nuclear Information System (INIS)

    Guevara, E.A.

    1986-01-01

    The feasibility of extending sum-peak coincidence counting to the direct calibration of gamma-ray emitters having particular decay schemes was investigated, and checks of the measurement accuracy were performed by comparison with the more precise beta-gamma coincidence counting. New theoretical studies and experiments were developed, demonstrating the reliability of the procedure. Uncertainties of less than one percent were obtained when certain radioactive sources were measured. The application of the procedure to 60 Co, 22 Na, 47 Ca and 148 Pm was studied. The theoretical bases of sum-peak coincidence counting were set out in order to establish it as an alternative method for absolute activity determination. In this respect, theoretical studies were performed for positive and negative beta decay, and for electron capture, either accompanied or unaccompanied by coincident gamma rays. They include decay schemes containing up to three daughter-nuclide excited levels, for different geometrical configurations. Equations are proposed for a possible generalization of the procedure. (M.E.L.) [es]
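
    For a two-photon cascade measured with a single detector, the classical Brinkman-type sum-peak relation (neglecting angular correlation) from which such calibrations usually start is:

      \[
      N_0 \;=\; T + \frac{N_1\,N_2}{N_{12}},
      \]

    where N_1 and N_2 are the full-energy-peak count rates of the two cascade gamma rays, N_12 is the sum-peak count rate, and T is the total spectrum count rate.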

  20. Clean Hands Count

    Medline Plus


  1. Assessment of the statistical uncertainty affecting a counting; Evaluation de l'incertitude statistique affectant un comptage

    Energy Technology Data Exchange (ETDEWEB)

    Cluchet, J.

    1960-07-01

    After a recall of some aspects of the Gauss law and the Gauss curve, this note addresses the case in which a large number of measurements of a source activity is performed by means of a sensor (counter, scintillator, nuclear emulsion, etc.) at equal intervals, with a number of events which is not rigorously constant. Thus, it addresses measurements, and more particularly counting operations, in a random or statistical environment. It considers in particular the case of a counting rate due to the source greater (and then lower) than twenty times the intrinsic background (own movement). The validity of the curves is discussed.
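
    The standard Poisson result underlying such discussions: a count N has standard deviation sqrt(N), so a background-corrected source rate carries the uncertainty

      \[
      n_s = \frac{N_{s+b}}{t_{s+b}} - \frac{N_b}{t_b},
      \qquad
      \sigma_{n_s} = \sqrt{\frac{N_{s+b}}{t_{s+b}^2} + \frac{N_b}{t_b^2}},
      \]

    where N_{s+b} is the gross count accumulated in time t_{s+b} and N_b the background count accumulated in time t_b.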

  2. Cardiac functional mapping for thallium-201 myocardial perfusion, washout, wall motion and phase using single-photon emission computed tomography (SPECT)

    International Nuclear Information System (INIS)

    Nakajima, Kenichi; Bunko, Hisashi; Taniguchi, Mitsuru; Taki, Junichi; Tonami, Norihisa; Hisada, Kinichi; Hirano, Takako; Wani, Hidenobu.

    1986-01-01

    A method for three-dimensional functional mapping of Tl-201 myocardial uptake, washout, wall motion and phase was developed using SPECT. Each parameter was mapped using a polar display in the same format. Normal values were determined in Tl-201 exercise studies in 16 patients. Myocardial counts were lower in the septum and inferior wall, and the difference in counts between the anterior and inferior walls was greater in men than in the perfusion pattern of women. Washout was slower at the septum and inferior wall in men, and slightly slower at the inferior wall in women. In gated blood-pool tomography, length-based and count-based Fourier analyses were applied to calculate the parameters of contraction and phase. The results of both Fourier analyses generally agreed; however, the area of abnormality was slightly different. Phase maps were useful for the assessment of asynergy as well as in patients with conduction disorders. These cardiac functional maps using SPECT were considered to be effective for understanding three-dimensional information on cardiac function. (author)

  3. Count-to-count time interval distribution analysis in a fast reactor

    International Nuclear Information System (INIS)

    Perez-Navarro Gomez, A.

    1973-01-01

    The most important kinetic parameters have been measured at the zero power fast reactor CORAL-I by means of the reactor noise analysis in the time domain, using measurements of the count-to-count time intervals. (Author) 69 refs
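
    Schematically, for a purely random (Poisson) detection process at rate R the count-to-count interval density is exponential, while correlated fission chains in a multiplying assembly add a prompt-decay term (a Rossi-alpha-like form; the exact expressions depend on the system):

      \[
      p(t) = R\,e^{-Rt} \quad\text{(random source)},
      \qquad
      p(t) \approx R\,e^{-Rt}\bigl(1 + A\,e^{-\alpha t}\bigr) \quad\text{(multiplying system)},
      \]

    with \(\alpha\) the prompt-neutron decay constant.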

  4. Challenges and Opportunities for Using Crowd-Sourced Air Pollution Measurements for Education and Outreach

    Science.gov (United States)

    Stanier, C. O.; Dong, C.; Janechek, N. J.; Bryngelson, N.; Schultz, P.; Heimbinder, M.

    2017-12-01

    As part of the CLE4R air quality education project, the University of Iowa has been working with AirBeam low-cost consumer-grade fine particulate matter (PM2.5) sensors in educational and outreach settings, both in K-12 environments and in informal settings such as science days and technology fairs. Users are attracted to the AirBeam device, in part, because of the easy creation of crowd-sourced maps of air pollution. With over 1000 AirBeam devices in use, extensive measurements are now available at aircasting.org. The AirBeam is a portable, low-cost sensor that measures light scattering due to aerosols in a single bin, converts the detected signal to a particle count, and uses a calibration fit to estimate particle mass. The AirBeam is able to detect particle sizes of 0.5 - 2.5 µm and concentrations up to 400 µg m-3, with a time resolution of 1 s. A corresponding Android device is used to visualize, record, and upload measured data to a community website (aircasting.org) that maps the spatially and temporally resolved data. The non-profit vendor's website constructs crowdsourced maps of air quality, environmental, and meteorological variables. As of April 1st, 2017, through the CLE4R project, 109 people had used the AirBeam sensors for educational purposes, for a total of 271 person-hours. In the poster, we explain the outreach that was done and share best practices for education and outreach using consumer-grade PM sensors. Strengths of, and needed improvements to, the technology for these outreach, education, and classroom uses are also detailed. Sources of particles that can be artificially generated for educational use, including authentic smoke, spray smoke, and various dust sources, are enumerated. For use in K-12 classrooms, requirements for robust startup, operation, and ease-of-use are high. Mapping of concentrations is a desirable attribute but adds additional sources of failure to the hardware-software system used for education/outreach.

  5. Geiger-Mueller haloid counter dead time dependence on counting rate

    International Nuclear Information System (INIS)

    Onishchenko, A.M.; Tsvetkov, A.A.

    1980-01-01

    The experimental dependences of the dead time of Geiger counters (SBM-19, SBM-20, SBM-21 and SGM-19) on the counting rate are presented. The two-source method was used to determine the dead time of counters of increased stability. The counters are connected in the usual discrete-counting circuit, with a load resistance of 50 MOhm and a decoupling capacitance of 10 pF. Voltage pulses are fed to a counting device with a resolution time of 100 ns, a discrimination threshold of 3 V, an input resistance of 3.6 Ω and an input capacitance of 15 pF. The time constant of the counter RC-circuit is 50 μs.
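
    The textbook two-source estimate for a nonparalyzable counter, neglecting background, on which such measurements are commonly based:

      \[
      \tau \;\approx\; \frac{n_1 + n_2 - n_{12}}{2\,n_1 n_2},
      \]

    where n_1 and n_2 are the count rates with each source alone and n_12 is the rate with both sources together.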

  6. REM meter for pulsed sources of neutrons

    International Nuclear Information System (INIS)

    Thorngate, J.E.; Hunt, G.F.; Rueppel, D.W.

    1980-01-01

    A rem meter was constructed specifically for measuring neutrons produced by fusion experiments for which the source pulses last 10 ms or longer. The detector is a 6 Li glass scintillator, 25.4 mm in diameter and 3.2 mm thick, surrounded by 11.5 cm of polyethylene. This detector has a sensitivity of 8.5 x 10^4 counts/mrem. The signals from this fast scintillator are shaped using a shorted delay line to produce pulses that are only 10 ns long, so that dose-equivalent rates up to 12 mrem/s can be measured with less than a 1% counting loss. The associated electronic circuits store detector counts only when the count rate exceeds a preset level. When the count rate returns to background, a conversion from counts to dose equivalent is made and the results are displayed. As a means of recording the number of source pulses that have occurred, a second display shows how many times the preset count rate has been exceeded. Accumulation of detector counts and readouts can also be controlled manually. The unit will display the integrated dose equivalent up to 200 mrem in 0.01 mrem steps. A pulse-height discriminator rejects gamma-ray interactions below 1 MeV, and the detector size limits the response above that energy. The instrument can be operated from an ac line or will run on rechargeable batteries for up to 12 hours

  7. BikeMaps.org: A Global Tool for Collision and Near Miss Mapping.

    Science.gov (United States)

    Nelson, Trisalyn A; Denouden, Taylor; Jestico, Benjamin; Laberee, Karen; Winters, Meghan

    2015-01-01

    There are many public health benefits to cycling, such as chronic disease reduction and improved air quality. Real and perceived concerns about safety are primary barriers to new ridership. Due to limited forums for official reporting of cycling incidents, lack of comprehensive data is limiting our ability to study cycling safety and conduct surveillance. Our goal is to introduce BikeMaps.org, a new website developed by the authors for crowd-sourced mapping of cycling collisions and near misses. BikeMaps.org is a global mapping system that allows citizens to map locations of cycling incidents and report on the nature of the event. Attributes collected are designed for spatial modeling research on predictors of safety and risk, and to aid surveillance and planning. Released in October 2014, within 2 months the website had more than 14,000 visitors and mapping in 14 countries. Collisions represent 38% of reports (134/356) and near misses 62% (222/356). In our pilot city, Victoria, Canada, citizens mapped data equivalent to about 1 year of official cycling collision reports within 2 months via BikeMaps.org. Using report completeness as an indicator, early reports indicate that data are of high quality with 50% being fully attributed and another 10% having only one missing attribute. We are advancing this technology, with the development of a mobile App, improved data visualization, real-time alerting of hazard reports, and automated open-source tools for data sharing. Researchers and citizens interested in utilizing the BikeMaps.org technology can get involved by encouraging citizen mapping in their region.

  8. Imaginal discs--a new source of chromosomes for genome mapping of the yellow fever mosquito Aedes aegypti.

    Directory of Open Access Journals (Sweden)

    Maria V Sharakhova

    2011-10-01

    Full Text Available The mosquito Aedes aegypti is the primary global vector for dengue and yellow fever viruses. Sequencing of the Ae. aegypti genome has stimulated research in vector biology and insect genomics. However, the current genome assembly is highly fragmented, with only ~31% of the genome being assigned to chromosomes. A lack of a reliable source of chromosomes for physical mapping has been a major impediment to improving the genome assembly of Ae. aegypti. In this study we demonstrate the utility of mitotic chromosomes from imaginal discs of 4th-instar larvae for cytogenetic studies of Ae. aegypti. High numbers of mitotic divisions on each slide preparation, large sizes, and reproducible banding patterns of the individual chromosomes simplify cytogenetic procedures. Based on the banding structure of the chromosomes, we have developed idiograms for each of the three Ae. aegypti chromosomes and placed 10 BAC clones and an 18S rDNA probe at precise chromosomal positions. The study identified imaginal discs of 4th-instar larvae as a superior source of mitotic chromosomes for Ae. aegypti. The proposed approach allows precise mapping of DNA probes to chromosomal positions and can be utilized for obtaining a high-quality genome assembly of the yellow fever mosquito.

  9. Neutron counting and gamma spectroscopy with PVT detectors

    International Nuclear Information System (INIS)

    Mitchell, Dean James; Brusseau, Charles A.

    2011-01-01

    Radiation portals normally incorporate a dedicated neutron counter and a gamma-ray detector with at least some spectroscopic capability. This paper describes the design and presents characterization data for a detection system called PVT-NG, which uses large polyvinyl toluene (PVT) detectors to monitor both types of radiation. The detector material is surrounded by polyvinyl chloride (PVC), which emits high-energy gamma rays following neutron capture reactions. Assessments based on high-energy gamma rays are well suited for the detection of neutron sources, particularly in border security applications, because few isotopes in the normal stream of commerce have significant gamma-ray yields above 3 MeV. Therefore, an increased count rate for high-energy gamma rays is a strong indicator of the presence of a neutron source. The sensitivity of the PVT-NG sensor to bare ²⁵²Cf is 1.9 counts per second per nanogram (cps/ng) and the sensitivity for ²⁵²Cf surrounded by 2.5 cm of polyethylene is 2.3 cps/ng. The PVT-NG sensor is a proof-of-principle sensor that was not fully optimized. The neutron detector sensitivity could be improved, for instance, by using additional moderator. The PVT-NG detectors and associated electronics are designed to provide improved resolution, gain stability, and performance at high count rates relative to PVT detectors in typical radiation portals. As well as addressing the needs of neutron detection, these characteristics are also desirable for analysis of the gamma-ray spectra. Accurate isotope identification results were obtained despite the common impression that the absence of photopeaks makes data collected by PVT detectors unsuitable for spectroscopic analysis. The PVT detectors in the PVT-NG unit are used for both gamma-ray and neutron detection, so the sensitive volume exceeds the volume of the detection elements in portals that use dedicated components to detect each type of radiation.

  10. Mapping rice extent map with crop intensity in south China through integration of optical and microwave images based on google earth engine

    Science.gov (United States)

    Zhang, X.; Wu, B.; Zhang, M.; Zeng, H.

    2017-12-01

    Rice is one of the main staple foods in East Asia and Southeast Asia, feeding more than half of the world's population from 11% of its cultivated land. Studies of rice can provide direct or indirect information on food security and water resource management. Remote sensing has proven to be the most effective method for monitoring cropland at large scale, using temporal and spectral information. Two main kinds of satellites have been used for mapping rice: microwave and optical. For rice, the main crop of paddy fields, the key feature distinguishing it from other crops is the flooding phenomenon at the planting stage (Figure 1). Microwave satellites can penetrate clouds and are efficient at monitoring the flooding phenomenon, while vegetation indices based on optical satellites can distinguish rice well from other vegetation. Google Earth Engine is a cloud-based platform that makes it easy to access high-performance computing resources for processing very large geospatial datasets. Google has collected a large number of remote sensing satellite datasets from around the world, providing researchers with the possibility of building applications that use multi-source remote sensing data over large areas. In this work, we map the rice planting area in south China through integration of Landsat-8 OLI, Sentinel-2, and Sentinel-1 Synthetic Aperture Radar (SAR) images. The flowchart is shown in figure 2. First, a threshold method applied to the VH-polarized backscatter from the SAR sensor and to vegetation indices from the optical sensors, including the normalized difference vegetation index (NDVI) and enhanced vegetation index (EVI), was used to classify the rice extent map. The forest and water surface extent maps provided by Earth Engine were used to mask forest and water. To overcome the "salt and pepper" effect of pixel-based classification as the spatial resolution increases, we segment the optical image and use the pixel-based classification results to merge the object
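
    As a rough illustration of the thresholding step (a stand-in for the authors' Google Earth Engine pipeline, not their code), a per-pixel rice mask can be built from co-registered arrays; the threshold values and array names here are assumptions:

        import numpy as np

        def classify_rice(vh_db, ndvi, evi, vh_flood_thresh=-20.0,
                          ndvi_thresh=0.4, evi_thresh=0.3,
                          forest_mask=None, water_mask=None):
            """Per-pixel rice mask: flooded at the planting stage (low VH
            backscatter from SAR) and vegetated later in the season
            (high NDVI/EVI from the optical sensors)."""
            flooded = vh_db < vh_flood_thresh            # planting-stage SAR scene
            vegetated = (ndvi > ndvi_thresh) & (evi > evi_thresh)  # growing season
            rice = flooded & vegetated
            if forest_mask is not None:
                rice &= ~forest_mask                     # mask out forest
            if water_mask is not None:
                rice &= ~water_mask                      # mask out permanent water
            return rice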

  11. The effect of volume and quenching on estimation of counting efficiencies in liquid scintillation counting

    International Nuclear Information System (INIS)

    Knoche, H.W.; Parkhurst, A.M.; Tam, S.W.

    1979-01-01

    The effect of volume on the liquid scintillation counting performance of ¹⁴C samples has been investigated. A decrease in counting efficiency was observed for samples with volumes below about 6 ml and those above about 18 ml when unquenched samples were assayed. Two quench-correction methods, sample channels ratio and external standard channels ratio, and three different liquid scintillation counters were used in an investigation to determine the magnitude of the error in predicting counting efficiencies when small-volume samples (2 ml) with different levels of quenching were assayed. The 2 ml samples exhibited slightly greater standard deviations of the difference between predicted and determined counting efficiencies than did 15 ml samples. Nevertheless, the magnitude of the errors indicates that if the sample channels ratio method of quench correction is employed, 2 ml samples may be counted in conventional counting vials with little loss in counting precision. (author)

  12. Activity measurements of radioactive solutions by liquid scintillation counting and pressurized ionization chambers and Monte Carlo simulations of source-detector systems for metrology

    International Nuclear Information System (INIS)

    Amiot, Marie-Noelle

    2013-01-01

    The research work 'Activity measurements of radioactive solutions by liquid scintillation and pressurized ionization chambers and Monte Carlo simulations of source-detector systems' was presented for the degree 'Habilitation à diriger des recherches'. The common thread of the two themes, liquid scintillation counting and pressurized ionization chambers, lies in the improvement of radionuclide activity measurement techniques. The metrology of ionizing radiation is involved in numerous domains, in research and in industry, including the environment and health, which have been subjects of constant concern for the world population in recent years. This wide variety of applications involves a large number of radionuclides with diverse disintegration schemes and in varied physical forms. The work presented, carried out within the Laboratoire National Henri Becquerel, aims to ensure the traceability of detector calibrations and to improve activity measurement methods within the framework of research and development projects. The improvement of the primary and secondary activity measurement methods consists of refining the accuracy of the measurements, in particular through a better knowledge of the parameters influencing the detector yield. The development work dealing with liquid scintillation counting mainly concerns the study of the response of liquid scintillators to low-energy electrons, as well as their linear absorption coefficients, using synchrotron radiation. The research on pressurized ionization chambers consists of the study of their response to photons and electrons by experimental measurements compared with simulations of the source-detector system using Monte Carlo codes. In addition, the design of a new type of ionization chamber with variable pressure is presented. This new project was developed to guarantee the precision of the amount of activity injected into the patient in the framework of diagnostic examinations.

  13. Study of the influence of radionuclide biokinetics on in vivo counting using voxel phantoms

    International Nuclear Information System (INIS)

    Lamart, St.

    2008-10-01

    In vivo measurement is an efficient method of estimating the retention of activity in cases of internal contamination. However, it is currently limited by the use of physical phantoms for calibration, which reproduce neither the morphology of the measured person nor the actual distribution of the contamination. The current calibration method therefore leads to significant systematic uncertainties in the quantification of the contamination. To improve in vivo measurement, the Laboratory of Internal Dose Assessment (LEDI, IRSN) has developed an original numerical calibration method with the OEDIPE software. It is based on voxel phantoms created from medical images of persons, associated with the MCNPX Monte Carlo particle transport code. The first version of this software made it possible to model simple homogeneous sources and to better estimate the systematic uncertainties in lung counting of actinides due to the detector position and to the heterogeneous distribution of activity inside the lungs. However, it could not take into account the dynamic, and often heterogeneous, distribution of activity between body organs and tissues. Yet the efficiency of the detection system depends on the distribution of the source of activity. The main purpose of this thesis work is to answer the question: what is the influence of the biokinetics of the radionuclides on in vivo counting? To answer it, it was necessary to modify OEDIPE substantially. This new development made it possible to model the source of activity more realistically, from the reference biokinetic models defined by the ICRP. The first part of the work consisted in developing the numerical tools needed to integrate biokinetics into OEDIPE. Then, a methodology was developed to quantify its influence on in vivo counting from the simulation results. This method was carried out and validated on the model of the LEDI in vivo counting system. Finally, the

  14. Effect of land uses and wind direction on the contribution of local sources to airborne pollen

    International Nuclear Information System (INIS)

    Rojo, Jesús; Rapp, Ana; Lara, Beatriz; Fernández-González, Federico; Pérez-Badia, Rosa

    2015-01-01

    Highlights: • We identified the major sources of urban airborne pollen from maps of land uses. • The pollen spectrum was governed by the location of pollen sources and wind direction. • The flora of parks and gardens had a marked impact on airborne pollen levels. • Our findings enabled us to recognize the major sources of allergenic pollen.

  15. Effect of land uses and wind direction on the contribution of local sources to airborne pollen

    Energy Technology Data Exchange (ETDEWEB)

    Rojo, Jesús; Rapp, Ana; Lara, Beatriz; Fernández-González, Federico; Pérez-Badia, Rosa

    2015-12-15

    Highlights: • We identified the major sources of urban airborne pollen from maps of land uses. • The pollen spectrum was governed by the location of pollen sources and wind direction. • The flora of parks and gardens had a marked impact on airborne pollen levels. • Our findings enabled us to recognize the major sources of allergenic pollen.

  16. TasselNet: counting maize tassels in the wild via local counts regression network.

    Science.gov (United States)

    Lu, Hao; Cao, Zhiguo; Xiao, Yang; Zhuang, Bohan; Shen, Chunhua

    2017-01-01

    Accurately counting maize tassels is important for monitoring the growth status of maize plants. This tedious task, however, is still mainly done by manual effort. In the context of modern plant phenotyping, automating this task is required to meet the needs of large-scale analysis of genotype and phenotype. In recent years, computer vision technologies have experienced a significant breakthrough due to the emergence of large-scale datasets and increased computational resources. Naturally, image-based approaches have also received much attention in plant-related studies. Yet most image-based systems for plant phenotyping are deployed under controlled laboratory environments. When transferring the application scenario to unconstrained in-field conditions, intrinsic and extrinsic variations in the wild pose great challenges for accurate counting of maize tassels, which goes beyond the ability of conventional image processing techniques. This calls for further robust computer vision approaches to address in-field variations. This paper studies the in-field counting problem of maize tassels. To our knowledge, this is the first time that a plant-related counting problem has been considered using computer vision technologies under an unconstrained field-based environment. With 361 field images collected in four experimental fields across China between 2010 and 2015 and corresponding manually-labelled dotted annotations, a novel Maize Tassels Counting (MTC) dataset is created and will be released with this paper. To alleviate the in-field challenges, a deep convolutional neural network-based approach termed TasselNet is proposed. TasselNet can achieve good adaptability to in-field variations via modelling the local visual characteristics of field images and regressing the local counts of maize tassels. Extensive results on the MTC dataset demonstrate that TasselNet outperforms other state-of-the-art approaches by large margins and achieves the overall best counting
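
    A minimal sketch of the local-counts-regression idea in PyTorch (this is not the published TasselNet architecture; the patch size, channel widths, and dummy batch are illustrative):

        import torch
        import torch.nn as nn

        class LocalCountRegressor(nn.Module):
            """Map each image patch to a single non-negative local count."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.head = nn.Sequential(
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(32, 1), nn.ReLU(),   # counts are non-negative
                )

            def forward(self, patches):
                return self.head(self.features(patches)).squeeze(1)

        # Training signal: per-patch counts derived from the dotted annotations;
        # the image-level count is recovered by summing (overlap-normalised)
        # local counts over all patches.
        model = LocalCountRegressor()
        patches = torch.randn(8, 3, 64, 64)     # a dummy batch of field patches
        targets = torch.rand(8) * 5             # dummy local tassel counts
        loss = nn.SmoothL1Loss()(model(patches), targets)
        loss.backward()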

  17. TasselNet: counting maize tassels in the wild via local counts regression network

    Directory of Open Access Journals (Sweden)

    Hao Lu

    2017-11-01

    Full Text Available Abstract Background Accurately counting maize tassels is important for monitoring the growth status of maize plants. This tedious task, however, is still mainly done by manual effort. In the context of modern plant phenotyping, automating this task is required to meet the needs of large-scale analysis of genotype and phenotype. In recent years, computer vision technologies have experienced a significant breakthrough due to the emergence of large-scale datasets and increased computational resources. Naturally, image-based approaches have also received much attention in plant-related studies. Yet most image-based systems for plant phenotyping are deployed under controlled laboratory environments. When transferring the application scenario to unconstrained in-field conditions, intrinsic and extrinsic variations in the wild pose great challenges for accurate counting of maize tassels, which goes beyond the ability of conventional image processing techniques. This calls for further robust computer vision approaches to address in-field variations. Results This paper studies the in-field counting problem of maize tassels. To our knowledge, this is the first time that a plant-related counting problem has been considered using computer vision technologies under an unconstrained field-based environment. With 361 field images collected in four experimental fields across China between 2010 and 2015 and corresponding manually-labelled dotted annotations, a novel Maize Tassels Counting (MTC) dataset is created and will be released with this paper. To alleviate the in-field challenges, a deep convolutional neural network-based approach termed TasselNet is proposed. TasselNet can achieve good adaptability to in-field variations via modelling the local visual characteristics of field images and regressing the local counts of maize tassels. Extensive results on the MTC dataset demonstrate that TasselNet outperforms other state-of-the-art approaches by large

  18. Coincidence counting corrections for dead time losses and accidental coincidences

    International Nuclear Information System (INIS)

    Wyllie, H.A.

    1987-04-01

    An equation is derived for the calculation of the radioactivity of a source from the results of coincidence counting, taking into account dead-time losses and accidental coincidences. The derivation is an extension of the method of J. Bryant [Int. J. Appl. Radiat. Isot., 14:143, 1963]. The improvement on Bryant's formula has been verified by experiment.
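
    For orientation, the zeroth-order coincidence relation N0 = Nβ·Nγ/Nc with textbook non-paralysable dead-time and accidental-coincidence corrections can be sketched as follows (this is the standard first-order treatment, not Wyllie's exact equation; rates and times are illustrative):

        def deadtime_correct(observed_rate, tau):
            """Non-paralysable dead-time correction: n = m / (1 - m*tau)."""
            return observed_rate / (1.0 - observed_rate * tau)

        def coincidence_activity(n_beta, n_gamma, n_coinc,
                                 tau_beta, tau_gamma, tau_resolving):
            """First-order 4pi(beta)-gamma coincidence estimate of activity."""
            nb = deadtime_correct(n_beta, tau_beta)
            ng = deadtime_correct(n_gamma, tau_gamma)
            accidentals = 2.0 * tau_resolving * nb * ng   # chance coincidences
            nc = n_coinc - accidentals                    # true coincidence rate
            return nb * ng / nc                           # N0 = Nb*Ng/Nc

        # Example: rates in s^-1, dead times and resolving time in seconds
        print(coincidence_activity(5000.0, 3000.0, 1250.0, 2e-6, 2e-6, 1e-6))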

  19. Quantitative X-ray mapping, scatter diagrams and the generation of correction maps to obtain more information about your material

    Science.gov (United States)

    Wuhrer, R.; Moran, K.

    2014-03-01

    Quantitative X-ray mapping with silicon drift detectors and multi-EDS detector systems has become an invaluable analysis technique and one of the most useful methods of X-ray microanalysis today. The time to perform an X-ray map has reduced considerably with the ability to map minor and trace elements very accurately, due to the larger detector area and higher-count-rate detectors. Live X-ray imaging can now be performed with a significant amount of data collected in a matter of minutes. A great deal of information can be obtained from X-ray maps. This includes elemental relationship or scatter diagram creation, elemental ratio mapping, chemical phase mapping (CPM), and quantitative X-ray maps. In obtaining quantitative X-ray maps, we are able to easily generate atomic number (Z), absorption (A), fluorescence (F), theoretical backscatter coefficient (η), and quantitative total maps from each pixel in the image. This allows us to generate an image corresponding to each factor (for each element present). These images allow the user to predict and verify where they are likely to have problems in their images, and are especially helpful for examining possible interface artefacts. The post-processing techniques to improve the quantitation of X-ray map data and the development of post-processing techniques for improved characterisation are covered in this paper.

  20. Quantitative X-ray mapping, scatter diagrams and the generation of correction maps to obtain more information about your material

    International Nuclear Information System (INIS)

    Wuhrer, R; Moran, K

    2014-01-01

    Quantitative X-ray mapping with silicon drift detectors and multi-EDS detector systems has become an invaluable analysis technique and one of the most useful methods of X-ray microanalysis today. The time to perform an X-ray map has reduced considerably with the ability to map minor and trace elements very accurately, due to the larger detector area and higher-count-rate detectors. Live X-ray imaging can now be performed with a significant amount of data collected in a matter of minutes. A great deal of information can be obtained from X-ray maps. This includes elemental relationship or scatter diagram creation, elemental ratio mapping, chemical phase mapping (CPM), and quantitative X-ray maps. In obtaining quantitative X-ray maps, we are able to easily generate atomic number (Z), absorption (A), fluorescence (F), theoretical backscatter coefficient (η), and quantitative total maps from each pixel in the image. This allows us to generate an image corresponding to each factor (for each element present). These images allow the user to predict and verify where they are likely to have problems in their images, and are especially helpful for examining possible interface artefacts. The post-processing techniques to improve the quantitation of X-ray map data and the development of post-processing techniques for improved characterisation are covered in this paper.

  1. GIS based optimal impervious surface map generation using various spatial data for urban nonpoint source management.

    Science.gov (United States)

    Lee, Cholyoung; Kim, Kyehyun; Lee, Hyuk

    2018-01-15

    Impervious surfaces are mainly artificial structures such as rooftops, roads, and parking lots that are covered by impenetrable materials. These surfaces are becoming the major causes of nonpoint source (NPS) pollution in urban areas. The rapid progress of urban development is increasing the total amount of impervious surfaces and NPS pollution. Therefore, many cities worldwide have adopted a stormwater utility fee (SUF) that generates funds needed to manage NPS pollution. The amount of SUF is estimated based on the impervious ratio, which is calculated by dividing the total impervious surface area by the net area of an individual land parcel. Hence, in order to identify the exact impervious ratio, large-scale impervious surface maps (ISMs) are necessary. This study proposes and assesses various methods for generating large-scale ISMs for urban areas by using existing GIS data. Bupyeong-gu, a district in the city of Incheon, South Korea, was selected as the study area. Spatial data that were freely offered by national/local governments in S. Korea were collected. First, three types of ISMs were generated by using the land-cover map, digital topographic map, and orthophotographs, to validate three methods that had been proposed conceptually by Korea Environment Corporation. Then, to generate an ISM of higher accuracy, an integration method using all data was proposed. Error matrices were made and Kappa statistics were calculated to evaluate the accuracy. Overlay analyses were performed to examine the distribution of misclassified areas. From the results, the integration method delivered the highest accuracy (Kappa statistic of 0.99) compared to the three methods that use a single type of spatial data. However, a longer production time and higher cost were limiting factors. Among the three methods using a single type of data, the land-cover map showed the highest accuracy with a Kappa statistic of 0.91. Thus, it was judged that the mapping method using the land
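
    The Kappa statistic used in the accuracy assessment follows directly from the error (confusion) matrix; a minimal sketch, where the 2 × 2 matrix of impervious/pervious pixel counts is illustrative:

        import numpy as np

        def kappa_statistic(error_matrix):
            """Cohen's kappa from a (reference x classified) error matrix."""
            m = np.asarray(error_matrix, dtype=float)
            total = m.sum()
            observed = np.trace(m) / total                 # overall agreement
            expected = (m.sum(0) @ m.sum(1)) / total**2    # chance agreement
            return (observed - expected) / (1.0 - expected)

        print(round(kappa_statistic([[940, 30], [20, 1010]]), 2))   # -> 0.95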

  2. Do your syringes count?

    International Nuclear Information System (INIS)

    Brewster, K.

    2002-01-01

    Full text: This study was designed to investigate anecdotal evidence that residual Sestamibi (MIBI) activity varied in certain situations. For rest studies, different brands of syringes were tested to see if the residuals varied. The period of time MIBI doses remained in the syringe between dispensing and injection was also considered as a possible source of increased residual counts. Stress MIBI syringe residual activities were measured to assess whether the method of stress test affected residual activity. MIBI was reconstituted using 13 GBq of technetium in 3 ml of normal saline, then boiled for 10 minutes. Doses were dispensed according to department protocol and injected via cannula. Residual syringes were collected for three syringe types. In each case the barrel and plunger were measured separately. As the syringe is flushed during the exercise stress test and not the pharmacological stress test, the chosen method was recorded. No relationship was demonstrated between the time MIBI remained in a syringe prior to injection and residual activity. Residual activity was not affected by the method of stress test used. Actual injected activity can be calculated if the amount of activity remaining in the syringe post injection is known. Imaging time can be adjusted for residual activity to optimise count statistics. Preliminary results in this study indicate there is no difference in residual activity between syringe brands. Copyright (2002) The Australian and New Zealand Society of Nuclear Medicine Inc

  3. Isospectral discrete and quantum graphs with the same flip counts and nodal counts

    Science.gov (United States)

    Juul, Jonas S.; Joyner, Christopher H.

    2018-06-01

    The existence of non-isomorphic graphs which share the same Laplace spectrum (to be referred to as isospectral graphs) leads naturally to the following question: what additional information is required in order to resolve isospectral graphs? It was suggested by Band, Shapira and Smilansky that this might be achieved by either counting the number of nodal domains or the number of times the eigenfunctions change sign (the so-called flip count) (Band et al 2006 J. Phys. A: Math. Gen. 39 13999–4014; Band and Smilansky 2007 Eur. Phys. J. Spec. Top. 145 171–9). Recent examples of (discrete) isospectral graphs with the same flip count and nodal count have been constructed by Ammann, utilising Godsil–McKay switching (Ammann, private communication). Here, we provide a simple alternative mechanism that produces systematic examples of both discrete and quantum isospectral graphs with the same flip and nodal counts.

  4. Local Group dSph radio survey with ATCA (I): observations and background sources

    Science.gov (United States)

    Regis, Marco; Richter, Laura; Colafrancesco, Sergio; Massardi, Marcella; de Blok, W. J. G.; Profumo, Stefano; Orford, Nicola

    2015-04-01

    Dwarf spheroidal (dSph) galaxies are key objects in near-field cosmology, especially in connection with the study of galaxy formation and evolution at small scales. In addition, dSphs are optimal targets to investigate the nature of dark matter. However, while we begin to have deep optical photometric observations of the stellar population in these objects, little is known so far about their diffuse emission at any observing frequency, and hence about thermal and non-thermal plasma possibly residing within dSphs. In this paper, we present deep radio observations of six local dSphs performed with the Australia Telescope Compact Array (ATCA) at 16 cm wavelength. We mosaicked a region of radius of about 1 deg around three 'classical' dSphs, Carina, Fornax, and Sculptor, and of about half a degree around three 'ultrafaint' dSphs, BootesII, Segue2, and Hercules. The rms noise level is below 0.05 mJy for all the maps. The restoring beams' full widths at half-maximum ranged from 4.2 arcsec × 2.5 arcsec to 30.0 arcsec × 2.1 arcsec in the most elongated case. A catalogue including the 1392 sources detected in the six dSph fields is reported. The main properties of the background sources are discussed, with positions and fluxes of the brightest objects compared with the FIRST, NVSS, and SUMSS observations of the same fields. The observed population of radio emitters in these fields is dominated by synchrotron sources. We compute the associated source number counts at 2 GHz down to fluxes of 0.25 mJy, which prove to be in agreement with AGN count models.

  5. Counting It Twice.

    Science.gov (United States)

    Schattschneider, Doris

    1991-01-01

    Provided are examples from many domains of mathematics that illustrate the Fubini Principle in its discrete version: the value of a summation over a rectangular array is independent of the order of summation. Included are: counting using partitions as in proof by pictures, combinatorial arguments, indirect counting as in the inclusion-exclusion…
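
    In symbols, the discrete Fubini Principle underlying these examples is simply the interchange of summation order over a rectangular array,

        \sum_{i=1}^{m}\sum_{j=1}^{n} a_{ij} \;=\; \sum_{j=1}^{n}\sum_{i=1}^{m} a_{ij},

    i.e. counting the entries row by row or column by column gives the same total.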

  6. Low White Blood Cell Count

    Science.gov (United States)

    A low white blood cell count (leukopenia) is a decrease ... of white blood cell (neutrophil). The definition of low white blood cell count varies from one medical ...

  7. Liquid scintillation counting of 3H-thymidine incorporated into rat lens DNA

    International Nuclear Information System (INIS)

    Soederberg, P.G.; Lindstroem, B.

    1990-01-01

    DNA synthesis in the lens has previously been localized by autoradiography following incorporation of ³H-thymidine. For the quantification of DNA synthesis in the lens, pooling of lenses and extraction of the DNA for liquid scintillation counting was formerly adopted. In the present investigation a method has been developed for the extraction of the unincorporated tracer from whole lenses after short-term incubation in a medium containing ³H-thymidine. The ³H-thymidine incorporated into individual lenses was then detected by liquid scintillation counting after dissolution of the lenses. The sources of variation in the method are evaluated. (author)

  8. Ideal flood field images for SPECT uniformity correction

    International Nuclear Information System (INIS)

    Oppenheim, B.E.; Appledorn, C.R.

    1984-01-01

    Since as little as 2.5% camera non-uniformity can cause disturbing artifacts in SPECT imaging, the ideal flood field images for uniformity correction would be made with the collimator in place using a perfectly uniform sheet source. While such a source is not realizable, equivalent images can be generated by mapping the activity distribution of a Co-57 sheet source and correcting subsequent images of the source with this mapping. Mapping is accomplished by analyzing equal-time images of the source made in multiple precisely determined positions. The ratio of counts detected in the same region of two images is a measure of the ratio of the activities of the two portions of the source imaged in that region. The activity distribution in the sheet source is determined from a set of such ratios. The more source positions imaged in a given time, the more accurate the source mapping, according to the results of a computer simulation. A 1.9 mCi Co-57 sheet source was shifted in 12 mm increments along the horizontal and vertical axes of the camera face to 9 positions on each axis. The source was imaged for 20 min in each position and 214 million total counts were accumulated. The activity distribution of the source, relative to the center pixel, was determined for a 31 x 31 array. The integral uniformity was found to be 2.8%. The RMS error for such a mapping was determined by computer simulation to be 0.46%. The activity distribution was used to correct a high-count flood field image for non-uniformities attributable to the Co-57 source. Such a corrected image represents the camera-plus-collimator response to an almost perfectly uniform sheet source
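
    A one-dimensional sketch of the shift-ratio idea: because the same detector region is used before and after each shift, detector non-uniformity cancels, and the count ratio gives the activity ratio of neighbouring source elements. The data below are synthetic and the 6-element source is illustrative:

        import numpy as np

        def relative_activity_from_shift(counts_before, counts_after):
            """The same detector region sees source element i before the
            shift and element i+1 after it, so counts_after/counts_before
            = a[i+1]/a[i] (any detector response cancels). Chaining the
            ratios gives the activity profile relative to element 0."""
            ratios = counts_after[:-1] / counts_before[:-1]
            rel = np.concatenate(([1.0], np.cumprod(ratios)))
            return rel / rel.max()

        truth = np.array([1.00, 0.99, 1.01, 1.00, 0.98, 1.00])
        before = truth * 1e4              # counts with the source at position 0
        after = np.roll(truth, -1) * 1e4  # counts after a one-element shift
        print(relative_activity_from_shift(before, after))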

  9. RADIO SOURCES FROM A 31 GHz SKY SURVEY WITH THE SUNYAEV-ZEL'DOVICH ARRAY

    International Nuclear Information System (INIS)

    Muchovej, Stephen; Hawkins, David; Lamb, James; Woody, David; Leitch, Erik; Carlstrom, John E.; Culverhouse, Thomas; Greer, Chris; Hennessy, Ryan; Loh, Michael; Marrone, Daniel P.; Pryke, Clem; Sharp, Matthew; Joy, Marshall; Miller, Amber; Mroczkowski, Tony

    2010-01-01

    We present the first sample of 31 GHz selected sources to flux levels of 1 mJy. From late 2005 to mid-2007, the Sunyaev-Zel'dovich Array observed 7.7 deg² of the sky at 31 GHz to a median rms of 0.18 mJy beam⁻¹. We identify 209 sources at greater than 5σ significance in the 31 GHz maps, ranging in flux from 0.7 mJy to ∼200 mJy. Archival NVSS data at 1.4 GHz and observations at 5 GHz with the Very Large Array are used to characterize the sources. We determine the maximum-likelihood integrated source count to be N(>S) = (27.2 ± 2.5) deg⁻² × (S/mJy)^(-1.18±0.12) over the flux range 0.7-15 mJy. This result is significantly higher than predictions based on 1.4 GHz selected samples, a discrepancy which can be explained by a small shift in the spectral index distribution for faint 1.4 GHz sources. From comparison with previous measurements of sources within the central arcminute of massive clusters, we derive an overdensity of 6.8 ± 4.4, relative to field sources.
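
    Evaluating the quoted power law is a one-liner; the sketch below simply applies the fitted count relation over the survey area (values from the abstract, the 1 mJy flux cut chosen for illustration):

        def integrated_counts_per_deg2(s_mjy, amplitude=27.2, slope=-1.18):
            """N(>S) for the quoted 31 GHz power law, valid for 0.7-15 mJy."""
            return amplitude * s_mjy ** slope

        # Expected number of sources brighter than 1 mJy in the 7.7 deg^2 survey
        print(7.7 * integrated_counts_per_deg2(1.0))   # ~210 sources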

  10. Hanford whole body counting manual

    International Nuclear Information System (INIS)

    Palmer, H.E.; Brim, C.P.; Rieksts, G.A.; Rhoads, M.C.

    1987-05-01

    This document, a reprint of the Whole Body Counting Manual, was compiled to train personnel, document operation procedures, and outline quality assurance procedures. The current manual contains information on: the location, availability, and scope of services of Hanford's whole body counting facilities; the administrative aspect of the whole body counting operation; Hanford's whole body counting facilities; the step-by-step procedure involved in the different types of in vivo measurements; the detectors, preamplifiers and amplifiers, and spectroscopy equipment; the quality assurance aspect of equipment calibration and recordkeeping; data processing, record storage, results verification, report preparation, count summaries, and unit cost accounting; and the topics of minimum detectable amount and measurement accuracy and precision. 12 refs., 13 tabs

  11. Custom OpenStreetMap Rendering – OpenTrackMap Experience

    Directory of Open Access Journals (Sweden)

    Radek Bartoň

    2010-02-01

    Full Text Available After 5 years of existence, OpenStreetMap [1] is becoming an important and valuable source of geographic data for people all over the world. Although initially targeted at providing a map of cities for routing services, it can be exploited for other, often unexpected, purposes. One such use is the effort to map the network of hiking tracks of the Czech Tourist Club [2]. To support this endeavour, the OpenTrackMap [3] project was started. Its aim is primarily to provide a customized rendering style for the Mapnik renderer which emphasizes map features important to tourists and displays a layer with hiking tracks. This article presents the obstacles which such a project must face and can serve as a tutorial for other projects of a similar type.
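
    Rendering with a custom Mapnik style from Python reduces to a few calls; a minimal sketch, where the stylesheet filename is hypothetical:

        import mapnik

        # Load a customized OpenTrackMap-like stylesheet and render one image;
        # the XML file would carry the tourist-oriented styling rules.
        m = mapnik.Map(1024, 768)
        mapnik.load_map(m, 'opentrackmap-style.xml')   # hypothetical path
        m.zoom_all()
        mapnik.render_to_file(m, 'hiking_tracks.png', 'png')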

  12. Determination of the 51Cr source strength at BNL

    International Nuclear Information System (INIS)

    Boger, J.; Hahn, R.L.; Chu, Y.Y.

    1995-11-01

    Neutron activation analysis (NAA) and γ-ray counting have been used to measure the activity of 24 samples removed from the GALLEX radioactive Cr neutrino source. In 9.86% of its disintegrations, ⁵¹Cr decays with the emission of a 320-keV γ-ray. Counting this γ-ray provides a direct means of obtaining the disintegration rates of the Cr samples. Based upon these disintegration rates, the authors obtain a strength of 63.1 ± 1.0 PBq for the entire Cr source. The Cr source activity has also been obtained by measuring the ⁵¹V content of each sample by means of NAA. ⁵¹V is the decay daughter for all decay modes of ⁵¹Cr. Through neutron bombardment, radioactive ⁵²V is produced, which decays with the emission of a 1,434-keV γ-ray. By counting this γ-ray from NAA, they obtain a disintegration rate of 62.1 ± 1.0 PBq for the entire source. These values are consistent with all other measurements of the source strength done at other GALLEX laboratories

  13. Standards-Based Open-Source Planetary Map Server: Lunaserv

    Science.gov (United States)

    Estes, N. M.; Silva, V. H.; Bowley, K. S.; Lanjewar, K. K.; Robinson, M. S.

    2018-04-01

    Lunaserv is a planetary-capable Web Map Service developed by the LROC SOC. It enables researchers to serve their own planetary data to a wide variety of GIS clients without any additional processing or download steps.

  14. Automatic airline baggage counting using 3D image segmentation

    Science.gov (United States)

    Yin, Deyu; Gao, Qingji; Luo, Qijun

    2017-06-01

    The number of bags needs to be checked automatically during baggage self-check-in. A fast airline baggage counting method is proposed in this paper, using image segmentation based on a height map projected from the scanned baggage 3D point cloud. There is a height drop at the actual edge of a bag, so the edge can be detected by an edge-detection operator. Closed edge chains are then formed from the edge lines, which are linked by morphological processing. Finally, the number of connected regions segmented by the closed chains is taken as the baggage count. A multi-bag experiment performed under different placement modes proves the validity of the method.
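
    A compact stand-in for the described pipeline (threshold the height map above the belt, detect height-drop edges, close the edge chains morphologically, count connected regions), with synthetic data and illustrative thresholds:

        import numpy as np
        from scipy import ndimage

        def count_bags(height_map, edge_thresh=0.05, floor_height=0.02):
            """Count bags on a height map (in metres) projected from the
            scanned 3D point cloud."""
            above_floor = height_map > floor_height
            gx, gy = np.gradient(height_map)
            edges = np.hypot(gx, gy) > edge_thresh       # height-drop edges
            interior = above_floor & ~edges
            interior = ndimage.binary_closing(interior)  # morphological clean-up
            _, n_regions = ndimage.label(interior)
            return n_regions

        # Two synthetic 'bags' on an otherwise empty belt
        hm = np.zeros((60, 60))
        hm[5:25, 5:25] = 0.3
        hm[35:55, 30:55] = 0.4
        print(count_bags(hm))   # -> 2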

  15. Using open source data for flood risk mapping and management in Brazil

    Science.gov (United States)

    Whitley, Alison; Malloy, James; Chirouze, Manuel

    2013-04-01

    Worldwide, the frequency and severity of major natural disasters, particularly flooding, has increased. Concurrently, countries such as Brazil are experiencing rapid socio-economic development with growing and increasingly concentrated populations, particularly in urban areas. Hence, it is unsurprising that Brazil has experienced a number of major floods in the past 30 years, such as the January 2011 floods which killed 900 people and resulted in significant economic losses of approximately 1 billion US dollars. Understanding, mitigating against and even preventing flood risk is a high priority. There is a demand for flood models in many developing economies worldwide for a range of uses including risk management, emergency planning and provision of insurance solutions. However, developing them can be expensive. With an increasing supply of freely-available, open source data, the costs can be significantly reduced, making the tools required for natural hazard risk assessment more accessible. By presenting a flood model developed for eight urban areas of Brazil as part of a collaboration between JBA Risk Management and Guy Carpenter, we explore the value of open source data and demonstrate its usability in a business context within the insurance industry. We begin by detailing the open source data available and compare its suitability to commercially-available equivalents for datasets including digital terrain models and river gauge records. We present flood simulation outputs in order to demonstrate the impact of the choice of dataset on the results obtained and its use in a business context. Via use of the 2D hydraulic model JFlow+, our examples also show how advanced modelling techniques can be used on relatively crude datasets to obtain robust and good quality results. In combination with accessible, standard specification GPU technology and open source data, use of JFlow+ has enabled us to produce large-scale hazard maps

  16. CalCOFI Egg Counts

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Fish egg counts and standardized counts for eggs captured in CalCOFI icthyoplankton nets (primarily vertical [Calvet or Pairovet], oblique [bongo or ring nets], and...

  17. Count-doubling time safety circuit

    International Nuclear Information System (INIS)

    Keefe, D.J.; McDowell, W.P.; Rusch, G.K.

    1981-01-01

    There is provided a nuclear reactor count-factor-increase time monitoring circuit which includes a pulse-type neutron detector, and means for counting the number of detected pulses during specific time periods. Counts are compared and the comparison is utilized to develop a reactor scram signal, if necessary

  18. Count-doubling time safety circuit

    Science.gov (United States)

    Rusch, Gordon K.; Keefe, Donald J.; McDowell, William P.

    1981-01-01

    There is provided a nuclear reactor count-factor-increase time monitoring circuit which includes a pulse-type neutron detector, and means for counting the number of detected pulses during specific time periods. Counts are compared and the comparison is utilized to develop a reactor scram signal, if necessary.

  19. Toward unstained cytology and complete blood counts at the point of care (Conference Presentation)

    Science.gov (United States)

    Zuluaga, Andres F.; Pierce, Mark C.; MacAulay, Calum E.

    2017-02-01

    Cytology tests, whether performed on body fluids, aspirates, or scrapings, are commonly used to detect, diagnose, and monitor a wide variety of health conditions. Complete blood counts (CBCs) quantify the number of red and white blood cells in a blood volume, as well as the different types of white blood cells. There is a critical unmet need for an instrument that can perform CBCs at the point of care (POC), and there is currently no product in the US that can perform this test at the bedside. We have developed a system capable of acquiring tomographic images with sub-cellular resolution using consumer-grade broadband (LED) sources and CMOS detectors, suitable for POC implementation of CBC tests. The system consists of cascaded static Michelson and Sagnac interferometers that map phase (encoding depth) and a transverse spatial dimension onto a two-dimensional output plane. Our approach requires a 5 microliter sample, can be performed in 5 minutes or less, and does not require staining or other processing as it relies on intrinsic contrast. We will show results directly imaging and differentiating unstained blood cells using supercontinuum fiber lasers and LEDs as sources and CMOS cameras as sensors. We will also lay out the follow-up steps needed, including image segmentation, analysis and classification, to verify performance and advance toward CBCs that can be performed at the bedside and do not require CLIA-certified laboratories.

  20. XMM-Newton 13H deep field - I. X-ray sources

    Science.gov (United States)

    Loaring, N. S.; Dwelly, T.; Page, M. J.; Mason, K.; McHardy, I.; Gunn, K.; Moss, D.; Seymour, N.; Newsam, A. M.; Takata, T.; Sekiguchi, K.; Sasseen, T.; Cordova, F.

    2005-10-01

    We present the results of a deep X-ray survey conducted with XMM-Newton, centred on the UK ROSAT 13H deep field area. This region covers 0.18 deg², and is the first of the two areas covered with XMM-Newton as part of an extensive multiwavelength survey designed to study the nature and evolution of the faint X-ray source population. We have produced detailed Monte Carlo simulations to obtain a quantitative characterization of the source detection procedure and to assess the reliability of the resultant source list. We use the simulations to establish a likelihood threshold above which we expect fewer than seven (3 per cent) of our sources to be spurious. We present the final catalogue of 225 sources. Within the central 9 arcmin, 68 per cent of source positions are accurate to 2 arcsec, making optical follow-up relatively straightforward. We construct the N(>S) relation in four energy bands: 0.2-0.5, 0.5-2, 2-5 and 5-10 keV. In all but our highest energy band we find that the source counts can be represented by a double power law with a bright-end slope consistent with the Euclidean case and a break around 10⁻¹⁴ erg cm⁻² s⁻¹. Below this flux, the counts exhibit a flattening. Our source counts reach densities of 700, 1300, 900 and 300 deg⁻² at fluxes of 4.1 × 10⁻¹⁶, 4.5 × 10⁻¹⁶, 1.1 × 10⁻¹⁵ and 5.3 × 10⁻¹⁵ erg cm⁻² s⁻¹ in the 0.2-0.5, 0.5-2, 2-5 and 5-10 keV energy bands, respectively. We have compared our source counts with those in the two Chandra deep fields and the Lockman hole, and found our source counts to be amongst the highest of these fields in all energy bands. We resolve >51 per cent (>50 per cent) of the X-ray background emission in the 1-2 keV (2-5 keV) energy bands.

  1. The Big Pumpkin Count.

    Science.gov (United States)

    Coplestone-Loomis, Lenny

    1981-01-01

    Pumpkin seeds are counted after students convert pumpkins to jack-o-lanterns. Among the activities involved, pupils learn to count by 10s, make estimates, and to construct a visual representation of 1,000. (MP)

  2. Analysis of car shredder polymer waste with Raman mapping and chemometrics

    Directory of Open Access Journals (Sweden)

    B. Vajna

    2012-02-01

    Full Text Available A novel evaluation method was developed for Raman microscopic quantitative characterization of polymer waste. Car shredder polymer waste was divided into different density fractions by the magnetic density separation (MDS) technique, and each fraction was investigated by Raman mapping, which is capable of detecting components present even at low concentration. Previously, the only method available for evaluating the mapping results was to assign each pixel to a component visually and to count the number of different polymers on the Raman map. An automated method is proposed here for pixel classification, which helps to detect the different polymers present and enables rapid assignment of each pixel to the appropriate polymer. Six chemometric methods were tested to provide a basis for the pixel classification, among which multivariate curve resolution-alternating least squares (MCR-ALS) provided the best results. The MCR-ALS based pixel identification method was then used for the quantitative characterization of each waste density fraction, where it was found that the automated method yields accurate results in a very short time, as opposed to the manual pixel counting method, which may take hours of human work per dataset.
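
    A sketch of the automated pixel-classification step; scikit-learn's NMF is used here as a readily available alternating-least-squares stand-in for MCR-ALS (the pymcr package would be closer to the paper), and all names are illustrative:

        import numpy as np
        from sklearn.decomposition import NMF

        def classify_pixels(spectra, n_components):
            """spectra: (n_pixels, n_wavenumbers) non-negative Raman map.
            Resolve component spectra, then assign each pixel to its
            dominant component (argmax of the concentration profile)."""
            model = NMF(n_components=n_components, init='nndsvda', max_iter=500)
            concentrations = model.fit_transform(spectra)   # (n_pixels, k)
            pure_spectra = model.components_                # (k, n_wavenumbers)
            labels = concentrations.argmax(axis=1)
            return labels, pure_spectra

    Counting the pixels per label then gives the relative abundance of each polymer within a density fraction.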

  3. Preparation of Films and Sources for 4π Counting

    Energy Technology Data Exchange (ETDEWEB)

    Konstantinov, A. A.; Sazonova, T. E. [Vsesojuznyj Nauchno - Isledovatel' skij Institut Im. D.I. Mendeleeva, Leningrad, SSSR (Russian Federation)

    1967-03-15

    To obtain a high degree of accuracy in determining the specific activity of sources by the absolute counting of particles with a 4π counter, attention must be paid to the preparation of the radioactive sources. At the Mendeleev Institute of Metrology, celluloid films (surface density 8-10 µg/cm²) coated on both sides with gold (10-15 µg/cm²) or palladium (5-6 µg/cm²) are used as the bases of the radioactive sources. In order to reduce the correction for absorption of beta particles in the radioactive deposit, the base is specially treated with insulin. The authors present an extremely sensitive and effective method, employing the electron-capture nuclide ⁵⁴Mn (⁵⁴Cr), for determining the uniformity of distribution of the active layer over the entire insulin-treated surface. A solution of ⁵⁴Mn (⁵⁴Cr) salt was applied to the insulin-treated film, and the source of ⁵⁴Cr (⁵⁴Mn) Auger K electrons thus obtained was investigated with the help of a proportional 4π counter. The total number of ⁵⁴Cr (⁵⁴Mn) Auger K electrons from the source was 8-12% less than the number expected from the fluorescence coefficient (calculated from the number of ⁵⁴Cr (⁵⁴Mn) K X-quanta emitted by the source) and the number of K electrons absorbed in the film (determined by the 'sandwich' method). From the differences, for insulin-treated and untreated ⁵⁴Mn (⁵⁴Cr) sources, between the calculated and recorded numbers of Auger electrons, it is possible to reach a definite conclusion regarding the quality of the insulin treatment. (author)

  4. Noise mapping inside a car cabin

    DEFF Research Database (Denmark)

    Knudsen, Kim; Sjøj, Sidsel Marie Nørholm; Jacobsen, Finn

    The mapping of noise is of considerable interest in the car industry, where a good noise mapping can make it much easier to identify the sources that generate the noise and eventually reduce the individual contributions to the noise. The methods used for this purpose include delay-and-sum beamforming and spherical harmonics beamforming. These methods have a poor spatial resolution at low frequencies, and since much of the noise generated in cars is dominated by low frequencies the methods are not optimal. In the present paper the mapping is done by solving an inverse problem with a transfer matrix

  5. Correction of count losses due to deadtime on a DST-XLi (SMVi-GE) camera during dosimetric studies in patients injected with iodine-131

    International Nuclear Information System (INIS)

    Delpon, G.; Ferrer, L.; Lisbona, A.; Bardies, M.

    2002-01-01

    In dosimetric studies performed after therapeutic injection, it is essential to correct for count losses due to dead time on the gamma camera. This note describes four dead-time correction methods: one based on the use of a standard source without preliminary calibration, and three requiring specific calibration and based on the count rate observed in different spectrometric windows (a 20% window, the 20% window plus a lower-energy window, and the full 50-750 keV spectrum). Experiments were conducted on a phantom at increasingly higher count rates to check the correction accuracy of the different methods. The error was less than +7% with the standard source, whereas the count-rate-based methods gave more accurate results. On the assumption that the model was paralysable, preliminary calibration allowed the observed count rate to be plotted as a function of the real count rate. The use of the full spectrum led to a 3.0% underestimation for the highest activity imaged. As count losses depend on the photon flux independently of energy, using the full spectrum during measurement allows scatter conditions to be taken into account. A protocol was developed to apply this correction method to whole-body acquisitions. (author)
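
    For the paralysable model, the observed rate m relates to the real rate n by m = n·exp(−n·τ); inverting this numerically is straightforward. A sketch with illustrative numbers (the fixed-point iteration converges on the low-rate branch, n·τ < 1):

        import math

        def true_rate_paralysable(observed_rate, tau, iterations=50):
            """Invert m = n*exp(-n*tau) for the real count rate n."""
            n = observed_rate
            for _ in range(iterations):
                n = observed_rate * math.exp(n * tau)
            return n

        # Example: 2e5 counts/s observed with a 1 microsecond dead time
        print(true_rate_paralysable(2e5, 1e-6))   # ~2.6e5 counts/s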

  6. Calibrate the aerial surveying instrument by the limited surface source and the single point source that replace the unlimited surface source

    International Nuclear Information System (INIS)

    Lu Cunheng

    1999-01-01

    It is described how the calculation formula and survey results are obtained, on the basis of the superposition principle for gamma rays and the characteristics of a hexagonal surface source, when a limited surface source replaces the unlimited surface source to calibrate the aerial survey instrument on the ground; and how, in the light of the reciprocity principle for gamma rays, a single point source replaces the unlimited surface source to calibrate the aerial survey instrument in the air. Meanwhile, through theoretical analysis, the receiving rates of the crystal bottom and side surfaces are calculated for the gamma rays received by the aerial survey instrument. A mathematical expression is obtained for the decay of gamma rays with height according to the Jinge function regularity. Following this regularity, the coefficient of air absorption of gamma rays and the detection efficiency coefficient of the crystal are calculated from the ground and airborne measured values of the bottom-surface receiving count rate (derived from the total receiving count rate of the bottom and side surfaces). Finally, according to the measured values, it is proved that imitating the change of the total gamma-ray exposure rate received by the bottom and side surfaces with this regularity over a certain altitude range is feasible

  7. Galaxy number counts: Pt. 2

    International Nuclear Information System (INIS)

    Metcalfe, N.; Shanks, T.; Fong, R.; Jones, L.R.

    1991-01-01

    Using the Prime Focus CCD Camera at the Isaac Newton Telescope we have determined the form of the B and R galaxy number-magnitude count relations in 12 independent fields for 21^m ≤ B_ccd ≤ 25^m and 19^m ≤ R_ccd ≤ 23^m.5. The average galaxy count relations lie in the middle of the wide range previously encompassed by photographic data. The field-to-field variation of the counts is small enough to define the faint (B ~ 25^m) galaxy count to ±10 per cent, and this variation is consistent with that expected from galaxy clustering considerations. Our new data confirm that the B, and also the R, galaxy counts show evidence for strong galaxy luminosity evolution, and that the majority of the evolving galaxies are of moderately blue colour. (author)

  8. A constant velocity Moessbauer spectrometer free of long-term instrumental drifts in the count rate

    International Nuclear Information System (INIS)

    Sarma, P.R.; Sharma, A.K.; Tripathi, K.C.

    1979-01-01

    Two new control circuits to be used with a constant velocity Moessbauer spectrometer with a loud-speaker drive have been described. The wave-forms generated in the circuits are of the stair-case type instead of the usual square wave-form, so that in each oscillation of the source it remains stationary for a fraction of the time-period. The gamma-rays counted during this period are monitored along with the positive and negative velocity counts and are used to correct any fluctuation in the count rate by feeding these pulses into the timer. The associated logic circuits have been described and the statistical errors involved in the circuits have been computed. (auth.)

  9. Application of Open Source Software by the Lunar Mapping and Modeling Project

    Science.gov (United States)

    Ramirez, P.; Goodale, C. E.; Bui, B.; Chang, G.; Kim, R. M.; Law, E.; Malhotra, S.; Rodriguez, L.; Sadaqathullah, S.; Mattmann, C. A.; Crichton, D. J.

    2011-12-01

    The Lunar Mapping and Modeling Project (LMMP), led by the Marshall Space Flight Center (MSFC), is responsible for the development of an information system to support lunar exploration, decision analysis, and release of lunar data to the public. The data available through the lunar portal is predominantly derived from present lunar missions (e.g., the Lunar Reconnaissance Orbiter (LRO)) and from historical missions (e.g., Apollo). This project has created a gold source of data, models, and tools for lunar explorers to exercise and incorporate into their activities. At the Jet Propulsion Laboratory (JPL), we focused on engineering and building the infrastructure to support cataloging, archiving, accessing, and delivery of lunar data. We decided to use a RESTful service-oriented architecture to enable us to abstract from the underlying technology choices and focus on interfaces to be used internally and externally. This decision allowed us to leverage several open source software components and integrate them by either writing a thin REST service layer or relying on the API they provided; the approach chosen was dependent on the targeted consumer of a given interface. We will discuss our varying experience using open source products, namely Apache OODT, Oracle Berkeley DB XML, Apache Solr, and Oracle OpenSSO (now named OpenAM). Apache OODT, developed at NASA's Jet Propulsion Laboratory and recently migrated over to Apache, provided the means for ingestion and cataloguing of products within the infrastructure. Its usage was based upon team experience with the project and past benefit received on other projects internal and external to JPL. Berkeley DB XML, distributed by Oracle for both commercial and open source use, was the storage technology chosen for our metadata. This decision was in part based on our use of Federal Geographic Data Committee (FGDC) Metadata, which is expressed in XML, and the desire to keep it in its native form and exploit other technologies built on

  10. WEB MAPPING ARCHITECTURES BASED ON OPEN SPECIFICATIONS AND FREE AND OPEN SOURCE SOFTWARE IN THE WATER DOMAIN

    Directory of Open Access Journals (Sweden)

    C. Arias Muñoz

    2017-09-01

    Full Text Available The availability of water-related data and information across different geographical and jurisdictional scales is of critical importance for the conservation and management of water resources in the 21st century. Today information assets are often found fragmented across multiple agencies that use incompatible data formats and procedures for data collection, storage, maintenance, analysis, and distribution. The growing adoption of Web mapping systems in the water domain is reducing the gap between data availability and its practical use and accessibility. Nevertheless, more attention must be given to the design and development of these systems to achieve high levels of interoperability and usability while fulfilling different end user informational needs. This paper first presents a brief overview of technologies used in the water domain, and then presents three examples of Web mapping architectures based on free and open source software (FOSS and the use of open specifications (OS that address different users’ needs for data sharing, visualization, manipulation, scenario simulations, and map production. The purpose of the paper is to illustrate how the latest developments in OS for geospatial and water-related data collection, storage, and sharing, combined with the use of mature FOSS projects facilitate the creation of sophisticated interoperable Web-based information systems in the water domain.

  11. Web Mapping Architectures Based on Open Specifications and Free and Open Source Software in the Water Domain

    Science.gov (United States)

    Arias Muñoz, C.; Brovelli, M. A.; Kilsedar, C. E.; Moreno-Sanchez, R.; Oxoli, D.

    2017-09-01

    The availability of water-related data and information across different geographical and jurisdictional scales is of critical importance for the conservation and management of water resources in the 21st century. Today information assets are often found fragmented across multiple agencies that use incompatible data formats and procedures for data collection, storage, maintenance, analysis, and distribution. The growing adoption of Web mapping systems in the water domain is reducing the gap between data availability and its practical use and accessibility. Nevertheless, more attention must be given to the design and development of these systems to achieve high levels of interoperability and usability while fulfilling different end user informational needs. This paper first presents a brief overview of technologies used in the water domain, and then presents three examples of Web mapping architectures based on free and open source software (FOSS) and the use of open specifications (OS) that address different users' needs for data sharing, visualization, manipulation, scenario simulations, and map production. The purpose of the paper is to illustrate how the latest developments in OS for geospatial and water-related data collection, storage, and sharing, combined with the use of mature FOSS projects facilitate the creation of sophisticated interoperable Web-based information systems in the water domain.

  12. Phenotypic characterization, genetic mapping and candidate gene analysis of a source conferring reduced plant height in sunflower.

    Science.gov (United States)

    Ramos, María Laura; Altieri, Emiliano; Bulos, Mariano; Sala, Carlos A

    2013-01-01

    Reduced height germplasm has the potential to increase stem strength, standability, and also yields potential of the sunflower crop (Helianthus annuus L. var. macrocarpus Ckll.). In this study, we report on the inheritance, mapping, phenotypic and molecular characterization of a reduced plant height trait in inbred lines derived from the source DDR. This trait is controlled by a semidominant allele, Rht1, which maps on linkage group 12 of the sunflower public consensus map. Phenotypic effects of this allele include shorter height and internode length, insensibility to exogenous gibberellin application, normal skotomorphogenetic response, and reduced seed set under self-pollination conditions. This later effect presumably is related to the reduced pollen viability observed in all DDR-derived lines studied. Rht1 completely cosegregated with a haplotype of the HaDella1 gene sequence. This haplotype consists of a point mutation converting a leucine residue in a proline within the conserved DELLA domain. Taken together, the phenotypic, genetic, and molecular results reported here indicate that Rht1 in sunflower likely encodes an altered DELLA protein. If the DELPA motif of the HaDELLA1 sequence in the Rht1-encoded protein determines by itself the observed reduction in height is a matter that remains to be investigated.

  13. Mapping and ablating stable sources for atrial fibrillation: summary of the literature on Focal Impulse and Rotor Modulation (FIRM).

    Science.gov (United States)

    Baykaner, Tina; Lalani, Gautam G; Schricker, Amir; Krummen, David E; Narayan, Sanjiv M

    2014-09-01

    Atrial fibrillation (AF) is the most common sustained arrhythmia and the most common indication for catheter ablation. However, despite substantial technical advances in mapping and energy delivery, ablation outcomes remain suboptimal. A major limitation to AF ablation is that the areas targeted for ablation are rarely of proven mechanistic importance, in sharp contrast to other arrhythmias in which ablation targets demonstrated mechanisms in each patient. Focal impulse and rotor modulation (FIRM) is a new approach to demonstrate the mechanisms that sustain AF ("substrates") in each patient that can be used to guide ablation then confirm elimination of each mechanism. FIRM mapping reveals that AF is sustained by 2-3 rotors and focal sources, with a greater number in patients with persistent than paroxysmal AF, lying within spatially reproducible 2.2 ± 1.4-cm(2) areas in diverse locations. This temporospatial reproducibility, now confirmed by several groups using various methods, changes the concepts regarding AF-sustaining mechanisms, enabling localized rather than widespread ablation. Mechanistically, the role of rotors and focal sources in sustaining AF has been demonstrated by the acute and chronic success of source (FIRM) ablation alone. Clinically, adding FIRM to conventional ablation substantially improves arrhythmia freedom compared with conventional ablation alone, and ongoing randomized trials are comparing FIRM-ablation with and without conventional ablation to conventional ablation alone. In conclusion, ablation of patient-specific AF-sustaining mechanisms (substrates), as exemplified by FIRM, may be central to substantially improving AF ablation outcomes.

  14. Mobile Mapping of Sporting Event Spectators Using Bluetooth Sensors: Tour of Flanders 2011

    Directory of Open Access Journals (Sweden)

    Frederik van Bossche

    2012-10-01

    Full Text Available Accurate spatiotemporal information on crowds is a necessity for better management in general and for the mitigation of potential security risks. The large numbers of individuals involved and their mobility, however, make generation of this information non-trivial. This paper proposes a novel methodology to estimate and map crowd sizes using mobile Bluetooth sensors and examines to what extent this methodology represents a valuable alternative to existing traditional crowd density estimation methods. The proposed methodology is applied in a unique case study that uses Bluetooth technology for the mobile mapping of spectators of the Tour of Flanders 2011 road cycling race. The locations of nearly 16,000 cell phones of spectators along the race course were registered and detailed views of the spatiotemporal distribution of the crowd were generated. Comparison with visual head counts from camera footage delivered a detection ratio of 13.0 ± 2.3%, making it possible to estimate the crowd size. To our knowledge, this is the first study that uses mobile Bluetooth sensors to count and map a crowd over space and time.
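
    Using the detection ratio reported above (13.0 ± 2.3%), a crowd-size estimate follows directly from the number of detected devices; the sketch below shows the arithmetic with first-order error propagation and, for simplicity, treats the detection count itself as exact.

    # Back-of-the-envelope crowd-size estimate from Bluetooth detections using
    # the detection ratio reported above (13.0 +/- 2.3%). Illustrative only.
    detected = 16_000          # discoverable devices registered along the course
    ratio, ratio_err = 0.130, 0.023

    estimate = detected / ratio
    # First-order error propagation for N = d / r (detection-count error ignored)
    err = estimate * (ratio_err / ratio)
    print(f"crowd size ~ {estimate:,.0f} +/- {err:,.0f} spectators")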

  15. AN ANNOTATED BIBLIOGRAPHY OF CLIMATIC MAPS OF ANGOLA,

    Science.gov (United States)

    Contents: Map of political divisions of Africa; Map of Angola; Sources with abstracts listed alphabetically by author; Alphabetical author index; Subject heading index with period of record; Subject heading index with map scales.

  16. Sources to the landscape - detailed spatiotemporal analysis of 200 years Danish landscape dynamics using unexploited historical maps and aerial photos

    DEFF Research Database (Denmark)

    Svenningsen, Stig Roar; Christensen, Andreas Aagaard; Dupont, Henrik

    to declassification of military maps and aerial photos from the cold war, only relatively few sources have been made available to researchers due to a lack of digitization efforts and related services. And even though the digitizing of cartographic material has been accelerated, the digitally available materials...... or to the commercial photo series from the last 20 years. This poster outlines a new research project focusing on the potential of unexploited cartographic sources for detailed analysis of the dynamics of the Danish landscape between 1800 and 2000. The project draws on cartographic sources available in Danish archives...... of material in landscape change studies giving a high temporal and spatial resolution. The project also deals with the opportunities and constraints of comparing different cartographic sources with diverse purposes and times of production, e.g. different scale and quality of aerial photos or the difference between...

  17. Counting statistics in radioactivity measurements

    International Nuclear Information System (INIS)

    Martin, J.

    1975-01-01

    The application of statistical methods to radioactivity measurement problems is analyzed in several chapters devoted successively to: the statistical nature of radioactivity counts; the application to radioactive counting of two theoretical probability distributions, Poisson's distribution law and the Laplace-Gauss law; true counting laws; corrections related to the nature of the apparatus; and statistical techniques in gamma spectrometry.
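
    A minimal sketch of the first two topics, assuming only that counts in a fixed interval are Poisson-distributed: the standard deviation equals the square root of the mean, and for large means the Laplace-Gauss (normal) law is a good approximation.

    # Sketch of the Poisson character of radioactivity counts and its
    # Laplace-Gauss (normal) approximation for large means.
    import numpy as np

    rng = np.random.default_rng(0)
    mean_counts = 400                      # expected counts in the interval
    sample = rng.poisson(mean_counts, size=100_000)

    # For Poisson data the standard deviation is sqrt(mean); for large means
    # the distribution is well approximated by a Gaussian with the same moments.
    print(sample.mean(), sample.std())     # ~400, ~20 (= sqrt(400))
    rel_uncertainty = 1 / np.sqrt(mean_counts)
    print(f"relative counting uncertainty ~ {rel_uncertainty:.1%}")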

  18. Correlation between total lymphocyte count, hemoglobin, hematocrit and CD4 count in HIV patients in Nigeria.

    Science.gov (United States)

    Emuchay, Charles Iheanyichi; Okeniyi, Shemaiah Olufemi; Okeniyi, Joshua Olusegun

    2014-04-01

    The expensive and technologically limited setting of CD4 count testing is a major setback to the initiation of HAART in a resource-limited country like Nigeria. Simple and inexpensive tools such as hemoglobin (Hb) measurement and total lymphocyte count (TLC) are recommended as substitute markers. In order to assess the correlations of these parameters with CD4 count, 100 "apparently healthy" male volunteers who tested HIV positive, aged between 20 and 40 years, were recruited, from whom Hb, hematocrit (Hct), TLC and CD4 count were obtained. The correlation coefficients R, the Nash-Sutcliffe Coefficient of Efficiency (CoE) and the p-values of the ANOVA model of Hb, Hct and TLC with CD4 count were assessed. The assessments show that there is no significant relationship of any of these parameters with CD4 count and that the correlation coefficients are very weak. This study shows that Hb, Hct and TLC cannot substitute for CD4 count, as their use might lead to depriving certain individuals of required treatment.

  19. It counts who counts: an experimental evaluation of the importance of observer effects on spotlight count estimates

    DEFF Research Database (Denmark)

    Sunde, Peter; Jessen, Lonnie

    2013-01-01

    observers with respect to their ability to detect and estimate distance to realistic animal silhouettes at different distances. Detection probabilities were higher for observers experienced in spotlighting mammals than for inexperienced observers, higher for observers with a hunting background compared...... with non-hunters, and decreased as a function of age, but were independent of sex or educational background. If observer-specific detection probabilities were applied to real counting routes, point count estimates from inexperienced observers without a hunting background would only be 43 % (95 % CI, 39...

  20. Cardiac MOLLI T1 mapping at 3.0 T: comparison of patient-adaptive dual-source RF and conventional RF transmission.

    Science.gov (United States)

    Rasper, Michael; Nadjiri, Jonathan; Sträter, Alexandra S; Settles, Marcus; Laugwitz, Karl-Ludwig; Rummeny, Ernst J; Huber, Armin M

    2017-06-01

    To prospectively compare image quality and myocardial T1 relaxation times of modified Look-Locker inversion recovery (MOLLI) imaging at 3.0 Tesla (T) acquired with patient-adaptive dual-source (DS) and conventional single-source (SS) radiofrequency (RF) transmission. Pre- and post-contrast MOLLI T1 mapping using SS and DS was acquired in 27 patients. Patient-wise and segment-wise analysis of T1 times was performed. The correlation of DS MOLLI measurements with a reference spin echo sequence was analysed in phantom experiments. DS MOLLI imaging reduced T1 standard deviation in 14 out of 16 myocardial segments (87.5%). Significant reduction of T1 variance could be obtained in 7 segments (43.8%). DS significantly reduced myocardial T1 variance in 16 out of 25 patients (64.0%). With conventional RF transmission, dielectric shading artefacts occurred in six patients, causing diagnostic uncertainty. No such artefacts were found on DS images. DS image findings were in accordance with conventional T1 mapping and late gadolinium enhancement (LGE) imaging. Phantom experiments demonstrated good correlation of myocardial T1 time between DS MOLLI and spin echo imaging. Dual-source RF transmission enhances myocardial T1 homogeneity in MOLLI imaging at 3.0 T. The reduction of signal inhomogeneities and artefacts due to dielectric shading is likely to enhance diagnostic confidence.

  1. Global Rural-Urban Mapping Project, Version 1 (GRUMPv1): Urban Extent Polygons, Revision 01

    Data.gov (United States)

    National Aeronautics and Space Administration — The primary output of the Global Rural Urban Mapping Project, Version 1 (GRUMPv1) is a series of grids representing estimated population counts and density for the...

  2. Dual-source dual-energy CT angiography with virtual non-enhanced images and iodine map for active gastrointestinal bleeding: Image quality, radiation dose and diagnostic performance

    International Nuclear Information System (INIS)

    Sun, Hao; Hou, Xin-Yi; Xue, Hua-Dan; Li, Xiao-Guang; Jin, Zheng-Yu; Qian, Jia-Ming; Yu, Jian-Chun; Zhu, Hua-Dong

    2015-01-01

    Highlights: • GIB is a common gastrointestinal emergency with a high mortality rate. • Detection and localization of GIB source are important for imaging modality. • DSDECTA using a dual-phase scan protocol is clinically feasible. • DSDECTA with VNE and iodine map images can diagnose the active GIB source accurately. • DSDECTA can reduce radiation dose compared with conventional CT examination in GIB. - Abstract: Objectives: To evaluate the clinical feasibility of dual-source dual-energy CT angiography (DSDECTA) with virtual non-enhanced images and iodine map for active gastrointestinal bleeding (GIB). Methods: From June 2010 to December 2012, 112 consecutive patients with clinical signs of active GIB underwent DSDECTA with true non-enhanced (TNE), arterial phase with single-source mode, and portal-venous phase with dual-energy mode (100 kVp/230 mAs and Sn 140 kVp/178 mAs). Virtual non-enhanced CT (VNE) image sets and iodine map were reformatted from ‘Liver VNC’ software. The mean CT number, noise, signal to noise ratio (SNR), image quality and radiation dose were compared between TNE and VNE image sets. Two radiologists, blinded to clinical data, interpreted images from DSDECTA with TNE (protocol 1), and DSDECTA with VNE and iodine map (protocol 2) respectively, with discordant interpretation resolved by consensus. The standards of reference included digital subtraction angiography, endoscopy, surgery, or final pathology reports. Receiver–operating characteristic (ROC) analysis was undertaken and the area under the curve (AUC) calculated for CT protocols 1 and 2, respectively. Results: There was no significant difference in mean CT numbers of all organs (including liver, pancreas, spleen, kidney, abdominal aorta, and psoas muscle) (P > 0.05). Lower noise and higher SNR were found on VNE images than TNE images (P < 0.05). Image quality of VNE was lower than that of TNE without significant difference (P > 0.05). The active GIB source was identified

  3. Dual-source dual-energy CT angiography with virtual non-enhanced images and iodine map for active gastrointestinal bleeding: Image quality, radiation dose and diagnostic performance

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Hao, E-mail: sunhao_robert@126.com [Department of Radiology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No. 1, Wangfujing Street, Dongcheng District, Beijing 100730 (China); Hou, Xin-Yi, E-mail: hxy_pumc@126.com [Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing (China); Xue, Hua-Dan, E-mail: bjdanna95@hotmail.com [Department of Radiology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No. 1, Wangfujing Street, Dongcheng District, Beijing 100730 (China); Li, Xiao-Guang, E-mail: xglee88@126.com [Department of Radiology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No. 1, Wangfujing Street, Dongcheng District, Beijing 100730 (China); Jin, Zheng-Yu, E-mail: zhengyu_jin@126.com [Department of Radiology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No. 1, Wangfujing Street, Dongcheng District, Beijing 100730 (China); Qian, Jia-Ming, E-mail: qjiaming57@gmail.com [Department of Gastroenterology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing (China); Yu, Jian-Chun, E-mail: yu-jch@163.com [Department of General Surgery, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing (China); Zhu, Hua-Dong, E-mail: huadongzhu@hotmail.com [Department of Emergency, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing (China)

    2015-05-15

    Highlights: • GIB is a common gastrointestinal emergency with a high mortality rate. • Detection and localization of GIB source are important for imaging modality. • DSDECTA using a dual-phase scan protocol is clinically feasible. • DSDECTA with VNE and iodine map images can diagnose the active GIB source accurately. • DSDECTA can reduce radiation dose compared with conventional CT examination in GIB. - Abstract: Objectives: To evaluate the clinical feasibility of dual-source dual-energy CT angiography (DSDECTA) with virtual non-enhanced images and iodine map for active gastrointestinal bleeding (GIB). Methods: From June 2010 to December 2012, 112 consecutive patients with clinical signs of active GIB underwent DSDECTA with true non-enhanced (TNE), arterial phase with single-source mode, and portal-venous phase with dual-energy mode (100 kVp/230 mAs and Sn 140 kVp/178 mAs). Virtual non-enhanced CT (VNE) image sets and iodine map were reformatted from ‘Liver VNC’ software. The mean CT number, noise, signal to noise ratio (SNR), image quality and radiation dose were compared between TNE and VNE image sets. Two radiologists, blinded to clinical data, interpreted images from DSDECTA with TNE (protocol 1), and DSDECTA with VNE and iodine map (protocol 2) respectively, with discordant interpretation resolved by consensus. The standards of reference included digital subtraction angiography, endoscopy, surgery, or final pathology reports. Receiver–operating characteristic (ROC) analysis was undertaken and the area under the curve (AUC) calculated for CT protocols 1 and 2, respectively. Results: There was no significant difference in mean CT numbers of all organs (including liver, pancreas, spleen, kidney, abdominal aorta, and psoas muscle) (P > 0.05). Lower noise and higher SNR were found on VNE images than TNE images (P < 0.05). Image quality of VNE was lower than that of TNE without significant difference (P > 0.05). The active GIB source was identified

  4. You can count on the motor cortex: Finger counting habits modulate motor cortex activation evoked by numbers

    Science.gov (United States)

    Tschentscher, Nadja; Hauk, Olaf; Fischer, Martin H.; Pulvermüller, Friedemann

    2012-01-01

    The embodied cognition framework suggests that neural systems for perception and action are engaged during higher cognitive processes. In an event-related fMRI study, we tested this claim for the abstract domain of numerical symbol processing: is the human cortical motor system part of the representation of numbers, and is organization of numerical knowledge influenced by individual finger counting habits? Developmental studies suggest a link between numerals and finger counting habits due to the acquisition of numerical skills through finger counting in childhood. In the present study, digits 1 to 9 and the corresponding number words were presented visually to adults with different finger counting habits, i.e. left- and right-starters who reported that they usually start counting small numbers with their left and right hand, respectively. Despite the absence of overt hand movements, the hemisphere contralateral to the hand used for counting small numbers was activated when small numbers were presented. The correspondence between finger counting habits and hemispheric motor activation is consistent with an intrinsic functional link between finger counting and number processing. PMID:22133748

  5. Reliability evaluation of hard disk drive failures based on counting processes

    International Nuclear Information System (INIS)

    Ye, Zhi-Sheng; Xie, Min; Tang, Loon-Ching

    2013-01-01

    Reliability assessment for hard disk drives (HDDs) is important yet difficult for manufacturers. Motivated by the fact that particle accumulation in HDDs, which accounts for most catastrophic HDD failures, arises from both internal and external sources, a counting process with two arrival sources is proposed to model the particle accumulation process in HDDs. This model successfully explains the collapse of traditional approaches for accelerated life test (ALT) data. Parameter estimation and hypothesis tests for the model are developed and illustrated with real data from an HDD test. A simulation study is conducted to examine the accuracy of the large-sample normal approximations that are used to test for the existence of the internal and external sources.
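
    A toy rendering of the model's central idea, under invented rates and failure threshold: two independent Poisson arrival sources superpose into a single Poisson process whose summed rate drives particle accumulation to a failure threshold.

    # Toy simulation of a counting process fed by two Poisson arrival sources
    # (internal and external particles), in the spirit of the model above.
    # Rates and the failure threshold are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    lam_internal, lam_external = 0.02, 0.05   # particles per hour (hypothetical)
    threshold, horizon = 40, 10_000           # particles to failure, test hours

    def time_to_failure():
        # The superposition of two Poisson processes is Poisson with summed rate.
        t, particles = 0.0, 0
        while particles < threshold:
            t += rng.exponential(1 / (lam_internal + lam_external))
            particles += 1
            if t > horizon:
                return np.inf                 # survived the test horizon
        return t

    lifetimes = np.array([time_to_failure() for _ in range(2000)])
    print("median time-to-failure (h):", np.median(lifetimes))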

  6. Total lymphocyte count and subpopulation lymphocyte counts in relation to dietary intake and nutritional status of peritoneal dialysis patients.

    Science.gov (United States)

    Grzegorzewska, Alicja E; Leander, Magdalena

    2005-01-01

    Dietary deficiency causes abnormalities in circulating lymphocyte counts. For the present paper, we evaluated correlations between total and subpopulation lymphocyte counts (TLC, SLCs) and parameters of nutrition in peritoneal dialysis (PD) patients. Studies were carried out in 55 patients treated with PD for 22.2 +/- 11.4 months. Parameters of nutritional status included total body mass, lean body mass (LBM), body mass index (BMI), and laboratory indices [total protein, albumin, iron, ferritin, and total iron binding capacity (TIBC)]. The SLCs were evaluated using flow cytometry. Positive correlations were seen between TLC and dietary intake of niacin; TLC and CD8 and CD16+56 counts and energy delivered from protein; CD4 count and beta-carotene and monounsaturated fatty acids 17:1 intake; and CD19 count and potassium, copper, vitamin A, and beta-carotene intake. Anorexia negatively influenced CD19 count. Serum albumin showed correlations with CD4 and CD19 counts, and LBM with CD19 count. A higher CD19 count was connected with a higher red blood cell count, hemoglobin, and hematocrit. Correlations were observed between TIBC and TLC and CD3 and CD8 counts, and between serum Fe and TLC and CD3 and CD4 counts. Patients with a higher CD19 count showed a better clinical-laboratory score, especially less weakness. Patients with a higher CD4 count had less expressed insomnia. Quantities of ingested vitamins and minerals influence lymphocyte counts in the peripheral blood of PD patients. Evaluation of TLC and SLCs is helpful in monitoring the effectiveness of nutrition in these patients.

  7. Intensity Maps Production Using Real-Time Joint Streaming Data Processing From Social and Physical Sensors

    Science.gov (United States)

    Kropivnitskaya, Y. Y.; Tiampo, K. F.; Qin, J.; Bauer, M.

    2015-12-01

    Intensity is one of the most useful measures of earthquake hazard, as it quantifies the strength of shaking produced at a given distance from the epicenter. Today, there are several data sources that could be used to determine intensity level, which can be divided into two main categories. The first category is represented by social data sources, in which the intensity values are collected by interviewing people who experienced the earthquake-induced shaking. In this case, specially developed questionnaires can be used in addition to personal observations published on social networks such as Twitter. These observations are assigned to the appropriate intensity level by correlating specific details and descriptions to the Modified Mercalli Scale. The second category of data sources is represented by observations from different physical sensors installed with the specific purpose of obtaining an instrumentally-derived intensity level. These are usually based on a regression of recorded peak acceleration and/or velocity amplitudes. This approach relates the recorded ground motions to the expected felt and damage distribution through empirical relationships. The goal of this work is to implement and evaluate streaming data processing, separately and jointly, from both social and physical sensors in order to produce near-real-time intensity maps, and to compare and analyze their quality and evolution through 10-minute time intervals immediately following an earthquake. Results are shown for the case study of the M6.0 South Napa, CA, earthquake that occurred on August 24, 2014. The use of innovative streaming and pipelining computing paradigms through the IBM InfoSphere Streams platform made it possible to read input data in real time for low-latency computation of combined intensity levels and production of combined intensity maps in near-real time. The results compare three types of intensity maps created based on physical, social and combined data sources. Here we correlate
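
    For the instrumental branch described above, one widely used empirical regression (Wald et al., 1999, derived for California data) converts recorded peak ground acceleration into Modified Mercalli intensity; the sketch below applies it, noting that the relation is only considered valid over roughly MMI V-VIII.

    # Sketch of the instrumental branch described above: converting recorded
    # peak ground acceleration into an intensity level with one widely used
    # empirical regression (Wald et al., 1999); coefficients are quoted from
    # that study and valid roughly for MMI V-VIII.
    import math

    def pga_to_mmi(pga_cm_s2: float) -> float:
        """Modified Mercalli intensity from peak ground acceleration (cm/s^2)."""
        return 3.66 * math.log10(pga_cm_s2) - 1.66

    for pga in (20.0, 100.0, 300.0):
        print(f"PGA {pga:6.1f} cm/s^2 -> MMI ~ {pga_to_mmi(pga):.1f}")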

  8. Monte Carlo simulation of gamma-ray total counting efficiency for a Phoswich detector

    Energy Technology Data Exchange (ETDEWEB)

    Yalcin, S. [Education Faculty, Kastamonu University, 37200 Kastamonu (Turkey)], E-mail: syalcin@kastamonu.edu.tr; Gurler, O. [Department of Physics, Faculty of Arts and Sciences, Uludag University, Gorukle Campus, 16059 Bursa (Turkey); Gundogdu, O. [Department of Physics, Faculty of Engineering and Physical Sciences, University of Surrey, Guildford, GU2 7XH (United Kingdom); NCCPM, Medical Physics, Royal Surrey County Hospital, Guildford, GU2 7XX (United Kingdom); Kaynak, G. [Department of Physics, Faculty of Arts and Sciences, Uludag University, Gorukle Campus, 16059 Bursa (Turkey)

    2009-01-15

    The LB 1000-PW detector is mainly used for determining total alpha, beta and gamma activity of low activity natural sources such as water, soil, air filters and any other environmental sources. Detector efficiency needs to be known in order to measure the absolute activity of such samples. This paper presents results on the total gamma counting efficiency of a Phoswich detector from point and disk sources. The directions of photons emitted from the source were determined by Monte Carlo techniques and the true path lengths in the detector were determined by analytical equations depending on photon directions. Results are tabulated for various gamma energies.

  9. Monte Carlo simulation of gamma-ray total counting efficiency for a Phoswich detector

    International Nuclear Information System (INIS)

    Yalcin, S.; Gurler, O.; Gundogdu, O.; Kaynak, G.

    2009-01-01

    The LB 1000-PW detector is mainly used for determining total alpha, beta and gamma activity of low activity natural sources such as water, soil, air filters and any other environmental sources. Detector efficiency needs to be known in order to measure the absolute activity of such samples. This paper presents results on the total gamma counting efficiency of a Phoswich detector from point and disk sources. The directions of photons emitted from the source were determined by Monte Carlo techniques and the true path lengths in the detector were determined by analytical equations depending on photon directions. Results are tabulated for various gamma energies
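
    A minimal Monte Carlo sketch of the geometric part of such an efficiency calculation, for a hypothetical on-axis point source and disk-shaped detector face: directions are sampled isotropically and the fraction of photons intersecting the face is counted (photon transport inside the crystal is ignored).

    # Minimal Monte Carlo sketch of the geometric part of total counting
    # efficiency: isotropic photon directions from an on-axis point source and
    # the fraction that hits a disk detector face. The geometry is hypothetical
    # and interactions in the crystal are ignored.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 1_000_000
    source_height = 5.0     # cm above the detector face (assumption)
    det_radius = 2.5        # cm (assumption)

    # Isotropic directions: cos(theta) uniform on [-1, 1], phi uniform on [0, 2pi)
    cos_t = rng.uniform(-1.0, 1.0, n)

    # Photons travelling downward (cos_t < 0) may reach the plane of the face;
    # project each to that plane and test against the detector radius.
    down = cos_t < 0
    sin_t = np.sqrt(1 - cos_t[down] ** 2)
    r_at_plane = source_height * sin_t / (-cos_t[down])
    hits = np.count_nonzero(r_at_plane <= det_radius)
    print("geometric efficiency:", hits / n)   # ~0.053 for this geometry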

  10. Preparedness for response to the challenges from orphan sources: nationwide environmental radiation mapping with state of the art monitoring systems

    International Nuclear Information System (INIS)

    Saindane, Shashank S.; Pradeepkumar, K.S.; Suri, M.M.K.; Sharma, D.N.

    2008-01-01

    Based on various international reports on orphan sources, the potential for radiological emergencies in the public domain is recognized as a cause of concern. To detect the presence of any such orphan sources and to strengthen the preparedness for response to any radiological emergencies in the public domain, a nationwide radiation mapping programme was initiated in India. Various radiation monitoring systems, a few of them integrated with the Global Positioning System (GPS) and installed in mobile monitoring vans, were used for this purpose. This monitoring also helped in generating baseline dose rate data for the cities and in demonstrating the methodology of environmental monitoring for locating the presence of orphan sources, if any. During the detailed monitoring of various cities of the country, different systems such as the GSM-based Radiation Monitoring System (GRaMS), Compact Radiation Monitoring System, Portable Mobile Gamma Spectrometry System, Gamma Tracer System etc., installed in a vehicle, were made to continuously acquire data at acquisition times varying from 10 s to 1 min. These systems can measure dose rates in the range 0.01-100 μGy h⁻¹ and can detect 7.4 MBq (200 μCi) of ⁶⁰Co and 25 MBq (675 μCi) of ¹³⁷Cs from a distance of 5 metres. The average dose rate recorded during this environmental monitoring was 81 ± 7 nGy h⁻¹, with a maximum of 210 ± 11 nGy h⁻¹ at Bangalore (attributed to the presence of ⁴⁰K). The digital topographic map and the data acquired from the radiation mapping are used to generate a terrestrial radiation map. This radiation profile stored in the database can be used as a reference while carrying out impact assessment following any nuclear/radiological emergencies. These systems also help to tag the radiation levels along with positional coordinates online onto the GIS map of the area. GRaMS also demonstrated its capability for online transmission of the data to the centralized data acquisition Base Station

  11. Aerial Survey Counts of Harbor Seals in Lake Iliamna, Alaska, 1984-2013 (NODC Accession 0123188)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset provides counts of harbor seals from aerial surveys over Lake Iliamna, Alaska, USA. The data have been collated from three previously published sources...

  12. A Dataset of Aerial Survey Counts of Harbor Seals in Iliamna Lake, Alaska: 1984-2013

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset provides counts of harbor seals from aerial surveys over Iliamna Lake, Alaska, USA. The data have been collated from three previously published sources...

  13. Single-source gamma radiation procedures for improved calibration and measurements in porous media

    International Nuclear Information System (INIS)

    Oostrom, M.; Hofstee, C.; Dane, H.; Lenhard, R.J.

    1998-01-01

    When dual-energy gamma radiation systems are employed for measurements in porous media, count rates from both sources are often used to compute parameter values. However, for several applications, the count rates of just one source are insufficient. These applications include the determination of volumetric liquid content values in two-liquid systems and salt concentration values in water-saturated porous media. Single-energy gamma radiation procedures for three applications are described in this paper. Through an error analysis, single-source procedures are shown to reduce the probable error in the determinations considerably. Example calculations and simple column experiments were conducted for each application to compare the performance of the new single-source and standard dual-source methods. In all cases, the single-source methods provided more reliable data than the traditional dual-source methods. In addition, a single-source calibration procedure is proposed to determine incident count rates indirectly. This procedure, which requires packing under saturated conditions, can be used in all single- and dual-source applications and yields accurate porosity and dry bulk density values
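
    The single-source procedures rest on the Beer-Lambert attenuation law; as a hedged sketch (not the authors' calibration), the volumetric water content of a column follows from the ratio of dry to wet count rates, with all numbers below invented.

    # Sketch of the attenuation relation underlying single-source gamma
    # measurements: Beer-Lambert, I = I0 * exp(-mu_w * theta * x), after the
    # dry-column attenuation has been absorbed into I0. Numbers are invented.
    import math

    def water_content(I0: float, I: float, mu_w: float, x: float) -> float:
        """Volumetric water content from incident/attenuated count rates.

        I0: count rate through the dry packed column (counts/s)
        I:  count rate through the wetted column (counts/s)
        mu_w: linear attenuation coefficient of water (1/cm)
        x:  column thickness (cm)
        """
        return math.log(I0 / I) / (mu_w * x)

    # mu_w ~ 0.086 1/cm is typical for 662 keV (Cs-137) photons in water
    print(water_content(I0=5000.0, I=4100.0, mu_w=0.086, x=10.0))  # ~0.23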

  14. Heterogeneous counting on filter support media

    International Nuclear Information System (INIS)

    Long, E.; Kohler, V.; Kelly, M.J.

    1976-01-01

    Many investigators in the biomedical research area have used filter paper as the support for radioactive samples. This means that a heterogeneous counting of sample sometimes results. The count rate of a sample on a filter will be affected by positioning, degree of dryness, sample application procedure, the type of filter, and the type of cocktail used. Positioning of the filter (up or down) in the counting vial can cause a variation of 35% or more when counting tritiated samples on filter paper. Samples of varying degrees of dryness when added to the counting cocktail can cause nonreproducible counts if handled improperly. Count rates starting at 2400 CPM initially can become 10,000 CPM in 24 hours for ³H-DNA (deoxyribonucleic acid) samples dried on standard cellulose acetate membrane filters. Data on cellulose nitrate filters show a similar trend. Sample application procedures in which the sample is applied to the filter in a small spot or on a large amount of the surface area can cause nonreproducible or very low counting rates. A tritiated DNA sample, when applied topically, gives a count rate of 4,000 CPM. When the sample is spread over the whole filter, 13,400 CPM are obtained with a much better coefficient of variation (5% versus 20%). Adding protein carrier (bovine serum albumin-BSA) to the sample to trap more of the tritiated DNA on the filter during the filtration process causes a serious beta absorption problem. Count rates which are one-fourth the count rate applied to the filter are obtained on calibrated runs. Many of the problems encountered can be alleviated by a proper choice of filter and the use of a liquid scintillation cocktail which dissolves the filter. Filter-Solv has been used to dissolve cellulose nitrate filters and filters which are a combination of cellulose nitrate and cellulose acetate. Count rates obtained for these dissolved samples are very reproducible and highly efficient

  15. Mapping groundwater dynamics using multiple sources of exhaustive high resolution data

    NARCIS (Netherlands)

    Finke, P.A.; Brus, D.J.; Bierkens, M.F.P.; Hoogland, T.; Knotters, M.; Vries, de F.

    2004-01-01

    Existing groundwater table (GWT) class maps, available at full coverage for the Netherlands at 1:50,000 scale, no longer satisfy user demands. Groundwater levels have changed due to strong human impact, so the maps are partially outdated. Furthermore, a more dynamic description of groundwater table

  16. Tsunami hazard maps of spanish coast at national scale from seismic sources

    Science.gov (United States)

    Aniel-Quiroga, Íñigo; González, Mauricio; Álvarez-Gómez, José Antonio; García, Pablo

    2017-04-01

    Tsunamis are a moderately frequent phenomenon in the NEAM (North East Atlantic and Mediterranean) region, and consequently in Spain, as historic and recent events have affected this area. For example, the 1755 earthquake and tsunami affected the Spanish Atlantic coasts of Huelva and Cadiz, and the 2003 Boumerdès earthquake triggered a tsunami that reached the Balearic Islands coast in less than 45 minutes. The risk in Spain is real, and its population and tourism rate make it vulnerable to this kind of catastrophic event. The Indian Ocean tsunami in 2004 and the tsunami in Japan in 2011 launched the worldwide development and application of tsunami risk reduction measures, which have been taken as a priority in this field. On November 20th, 2015, the directive of the Spanish civil protection agency on planning for tsunami emergencies was presented. As part of the Spanish National Security strategy, this document specifies the structure of the action plans at different levels: national, regional and local. In this sense, the first step is the proper evaluation of the tsunami hazard at the national scale. This work deals with the assessment of tsunami hazard in Spain by means of numerical simulations, focused on the elaboration of tsunami hazard maps at the national scale. To this end, following a deterministic approach, the seismic structures whose earthquakes could generate the worst tsunamis affecting the coast of Spain have been compiled and characterized. These worst-case sources have been propagated numerically along a reconstructed bathymetry, built from the best-resolution data available. This high-resolution bathymetry was joined with a 25-m resolution DTM to generate a continuous offshore-onshore space, allowing the calculation of the flooded areas prompted by each selected source. The numerical model applied for the calculation of the tsunami propagations was COMCOT. The maps resulting from the numerical simulations show not only the tsunami amplitude at coastal areas but

  17. Improving flood risk mapping in Italy: the FloodRisk open-source software

    Science.gov (United States)

    Albano, Raffaele; Mancusi, Leonardo; Craciun, Iulia; Sole, Aurelia; Ozunu, Alexandru

    2017-04-01

    Time and again, floods around the world illustrate the devastating impact they can have on societies. Furthermore, the expectation that flood damages can increase over time with climate change, land-use change and social growth in flood-prone areas has raised the awareness of the public and of other stakeholders (governments, international organizations, re-insurance companies and emergency responders) of the need to manage risks in order to mitigate their causes and consequences. In this light, the choice of appropriate measures, the assessment of the costs and effects of such measures, and their prioritization are crucial for decision makers. As a result, a priori flood risk assessment has become a key part of flood management practices, with the aim of minimizing the total costs related to the risk management cycle. In this context, the EU Flood Directive 2007/60 requires the delineation of flood risk maps on the basis of the most appropriate and advanced tools, with particular attention to limiting the required economic effort. The main aim of these risk maps is to provide the knowledge required for the development of flood risk management plans (FRMPs) by considering both costs and benefits of alternatives and results from consultation with all interested parties. In this context, this research project developed a free and open-source (FOSS) GIS software package, called FloodRisk, to operatively support stakeholders in their compliance with the FRMPs. FloodRisk aims to facilitate the development of risk maps and the evaluation and management of current and future flood risk for multi-purpose applications. This new approach overcomes the limits of the expert-driven qualitative (EDQ) approach currently adopted in several European countries, such as Italy, which does not permit a suitable evaluation of the effectiveness of risk mitigation strategies, because the vulnerability component cannot be properly assessed. Moreover, FloodRisk is also able to involve the citizens in the flood

  18. A Mapping Method of SLAM Based on Look-Up Table

    Science.gov (United States)

    Wang, Z.; Li, J.; Wang, A.; Wang, J.

    2017-09-01

    In recent years, several V-SLAM (Visual Simultaneous Localization and Mapping) approaches have appeared, showing impressive reconstructions of the world. However, these maps are built with far more than the required information. This limitation comes from processing each key-frame in its entirety. In this paper we present, for the first time, a mapping method for visual SLAM based on a look-up table (LUT) that can improve the mapping effectively. Because this method relies on extracting features in each cell of a subdivided image, it can obtain a camera pose that is more representative of the whole key-frame. The tracking direction of a key-frame is obtained by counting the parallax directions of its feature points. The LUT stores, for each tracking direction, the cells needed for mapping, which reduces the redundant information in the key-frame and makes mapping more efficient. The results show that a better map with less noise is built in less than one-third of the time. We believe that the capacity of the LUT to build maps efficiently makes it a good choice for the community to investigate in scene reconstruction problems.
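
    A toy sketch of the direction-counting step described above, with feature matching assumed done elsewhere: the parallax vectors of matched feature points are quantized into eight 45-degree bins, and the most populated bin is taken as the key-frame tracking direction used to index the LUT.

    # Toy sketch of the direction-counting step: the dominant parallax
    # (optical-flow) direction of matched feature points, quantized to 8 bins,
    # stands in for the key-frame tracking direction used to index the LUT.
    import numpy as np

    def tracking_direction(prev_pts: np.ndarray, curr_pts: np.ndarray) -> int:
        """Return the dominant flow direction as a bin index 0..7 (45-deg bins)."""
        flow = curr_pts - prev_pts                     # per-feature parallax
        angles = np.arctan2(flow[:, 1], flow[:, 0])    # in [-pi, pi]
        bins = ((angles + np.pi) // (np.pi / 4)).astype(int) % 8
        return int(np.bincount(bins, minlength=8).argmax())

    prev = np.array([[10.0, 20.0], [30.0, 40.0], [52.0, 61.0]])
    curr = prev + np.array([3.0, 0.5])                 # mostly rightward motion
    print(tracking_direction(prev, curr))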

  19. LAWRENCE RADIATION LABORATORY COUNTING HANDBOOK

    Energy Technology Data Exchange (ETDEWEB)

    Group, Nuclear Instrumentation

    1966-10-01

    The Counting Handbook is a compilation of operational techniques and performance specifications on counting equipment in use at the Lawrence Radiation Laboratory, Berkeley. Counting notes have been written from the viewpoint of the user rather than that of the designer or maintenance man. The only maintenance instructions that have been included are those that can easily be performed by the experimenter to assure that the equipment is operating properly.

  20. Precise Mapping Of A Spatially Distributed Radioactive Source

    International Nuclear Information System (INIS)

    Beck, A.; Caras, I.; Piestum, S.; Sheli, E.; Melamud, Y.; Berant, S.; Kadmon, Y.; Tirosh, D.

    1999-01-01

    Spatial distribution measurement of radioactive sources is a routine task in the nuclear industry. The precision of each measurement depends upon the specific application. However, the technological edge of this precision is motivated by the production of standards for calibration. Within this definition, the most demanding field is the calibration of standards for medical equipment. In this paper, a semi-empirical method for controlling the measurement precision is demonstrated, using a relatively simple laboratory apparatus. The spatial distribution of the source radioactivity is measured as part of the quality assurance tests during the production of flood sources. These sources are further used in the calibration of medical gamma cameras. A typical flood source is a 40 × 60 cm² plate with an activity of 10 mCi (or more) of the ⁵⁷Co isotope. The measurement set-up is based on a single NaI(Tl) scintillator with a photomultiplier tube, moving on an X-Y table which scans the flood source. In this application the source is required to have a uniform activity distribution over its surface

  1. ChromAIX2: A large area, high count-rate energy-resolving photon counting ASIC for a Spectral CT Prototype

    Science.gov (United States)

    Steadman, Roger; Herrmann, Christoph; Livne, Amir

    2017-08-01

    Spectral CT based on energy-resolving photon counting detectors is expected to deliver additional diagnostic value at a lower dose than current state-of-the-art CT [1]. The capability of simultaneously providing a number of spectrally distinct measurements not only allows distinguishing between photo-electric and Compton interactions but also discriminating contrast agents that exhibit a K-edge discontinuity in the absorption spectrum, referred to as K-edge imaging [2]. Such detectors are based on direct-converting sensors (e.g. CdTe or CdZnTe) and high-rate photon counting electronics. To support the development of Spectral CT and show the feasibility of obtaining rates exceeding 10 Mcps/pixel (Poissonian observed count-rate), the ChromAIX ASIC has been previously reported, showing 13.5 Mcps/pixel (150 Mcps/mm² incident) [3]. The ChromAIX has been improved to offer the possibility of a large-area-coverage detector and increased overall performance. The new ASIC is called ChromAIX2, and delivers count-rates exceeding 15 Mcps/pixel with an rms noise performance of approximately 260 e⁻. It has an isotropic pixel pitch of 500 μm in an array of 22×32 pixels and is tileable on three of its sides. The pixel topology consists of a two-stage amplifier (CSA and Shaper) and a number of test features allowing thorough characterization of the ASIC without a sensor. A total of 5 independent thresholds are also available within each pixel, allowing 5 spectrally distinct measurements to be acquired simultaneously. The ASIC also incorporates a baseline restorer to eliminate excess currents induced by the sensor (e.g. dark current and low-frequency drifts) which would otherwise cause an energy estimation error. In this paper we report on the inherent electrical performance of the ChromAIX2 as well as measurements obtained with CZT (CdZnTe)/CdTe sensors and X-rays and radioactive sources.
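
    One common way to relate incident to observed count rate in a photon-counting channel is the paralyzable dead-time model m = n·exp(-n·τ); the sketch below uses it with illustrative dead-time values, not published ChromAIX2 parameters (150 Mcps/mm² on a 0.5 mm pitch corresponds to 37.5 Mcps incident per pixel).

    # Paralyzable dead-time model, m = n * exp(-n * tau), relating incident to
    # observed count rate. The dead-time values below are illustrative only,
    # not published ChromAIX2 parameters.
    import math

    def observed_rate(incident_mcps: float, tau_ns: float) -> float:
        """Observed rate (Mcps) for a given incident rate (Mcps) and dead time (ns)."""
        n = incident_mcps * 1e6          # counts per second
        tau = tau_ns * 1e-9              # seconds
        return n * math.exp(-n * tau) / 1e6

    # 150 Mcps/mm^2 on a 0.5 mm pixel pitch -> 37.5 Mcps incident per pixel.
    for tau in (10.0, 20.0, 30.0):
        print(f"tau = {tau:4.1f} ns -> observed ~ {observed_rate(37.5, tau):.1f} Mcps")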

  2. Map Archive Mining: Visual-Analytical Approaches to Explore Large Historical Map Collections

    Directory of Open Access Journals (Sweden)

    Johannes H. Uhl

    2018-04-01

    Full Text Available Historical maps are unique sources of retrospective geographical information. Recently, several map archives containing map series covering large spatial and temporal extents have been systematically scanned and made available to the public. The geographical information contained in such data archives makes it possible to extend geospatial analysis retrospectively beyond the era of digital cartography. However, given the large data volumes of such archives (e.g., more than 200,000 map sheets in the United States Geological Survey topographic map archive) and the low graphical quality of older, manually-produced map sheets, the process to extract geographical information from these map archives needs to be automated to the highest degree possible. To understand the potential challenges (e.g., salient map characteristics and data quality variations) in automating large-scale information extraction tasks for map archives, it is useful to efficiently assess spatio-temporal coverage, approximate map content, and spatial accuracy of georeferenced map sheets at different map scales. Such preliminary analytical steps are often neglected or ignored in the map processing literature but represent critical phases that lay the foundation for any subsequent computational processes including recognition. Exemplified for the United States Geological Survey topographic map and the Sanborn fire insurance map archives, we demonstrate how such preliminary analyses can be systematically conducted using traditional analytical and cartographic techniques, as well as visual-analytical data mining tools originating from machine learning and data science.

  3. A UNIFIED EMPIRICAL MODEL FOR INFRARED GALAXY COUNTS BASED ON THE OBSERVED PHYSICAL EVOLUTION OF DISTANT GALAXIES

    International Nuclear Information System (INIS)

    Béthermin, Matthieu; Daddi, Emanuele; Sargent, Mark T.; Elbaz, David; Mullaney, James; Pannella, Maurilio; Magdis, Georgios; Hezaveh, Yashar; Le Borgne, Damien; Buat, Véronique; Charmandaris, Vassilis; Lagache, Guilaine; Scott, Douglas

    2012-01-01

    We reproduce the mid-infrared to radio galaxy counts with a new empirical model based on our current understanding of the evolution of main-sequence (MS) and starburst (SB) galaxies. We rely on a simple spectral energy distribution (SED) library based on Herschel observations: a single SED for the MS and another one for SB, getting warmer with redshift. Our model is able to reproduce recent measurements of galaxy counts performed with Herschel, including counts per redshift slice. This agreement demonstrates the power of our 2-Star-Formation Modes (2SFM) decomposition in describing the statistical properties of infrared sources and their evolution with cosmic time. We discuss the relative contribution of MS and SB galaxies to the number counts at various wavelengths and flux densities. We also show that MS galaxies are responsible for a bump in the 1.4 GHz radio counts around 50 μJy. Material of the model (predictions, SED library, mock catalogs, etc.) is available online.

  4. Full counting statistics of a charge pump in the Coulomb blockade regime

    Science.gov (United States)

    Andreev, A. V.; Mishchenko, E. G.

    2001-12-01

    We study the full charge counting statistics (FCCS) of a charge pump based on a nearly open single-electron transistor. The problem is mapped onto an exactly soluble problem of a nonequilibrium g=1/2 Luttinger liquid with an impurity. We obtain an analytic expression for the generating function of the transmitted charge for an arbitrary pumping strength. Although this model contains fractionally charged excitations, only integer transmitted charges can be observed. In the weak pumping limit the FCCS correspond to a Poissonian transmission of particles with charge e*=e/2, from which all events with odd numbers of transferred particles are excluded.

  5. The Kruskal Count

    OpenAIRE

    Lagarias, Jeffrey C.; Rains, Eric; Vanderbei, Robert J.

    2001-01-01

    The Kruskal Count is a card trick invented by Martin J. Kruskal in which a magician "guesses" a card selected by a subject according to a certain counting procedure. With high probability the magician can correctly "guess" the card. The success of the trick is based on a mathematical principle related to coupling methods for Markov chains. This paper analyzes in detail two simplified variants of the trick and estimates the probability of success. The model predictions are compared with simula...
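
    A short simulation conveys the coupling idea: chains started from different secret keys almost always coalesce onto the same final card. The face-card value of 5 used below is a common convention for the trick; the printed success rate is an empirical estimate, not the paper's analytic result.

    # Simulation sketch of the Kruskal Count: starting from different initial
    # positions, the counting chains almost always end on the same final card.
    import random

    def final_card(deck, start):
        """Follow the Kruskal chain from `start`; return the last index reached."""
        i = start
        while i + deck[i] < len(deck):
            i += deck[i]
        return i

    random.seed(3)
    rank_value = {r: (5 if r > 10 else r) for r in range(1, 14)}  # J,Q,K -> 5
    deck = [rank_value[r] for r in range(1, 14)] * 4              # 52 cards

    trials, coalesced = 10_000, 0
    for _ in range(trials):
        random.shuffle(deck)
        subject = final_card(deck, random.randrange(10))   # subject's secret key
        magician = final_card(deck, 0)                     # magician starts at 0
        coalesced += (subject == magician)
    print(f"success rate ~ {coalesced / trials:.2%}")      # typically around 85%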

  6. Determination, Source Identification and GIS Mapping for Nitrate Concentration in Groundwater from Bara Aquifer

    Energy Technology Data Exchange (ETDEWEB)

    Elami, G. M.; Sam, A. K.; Yagob, T. I.; Siddeeg, S. E.M.B.; Hatim, E.; Hajo, I. [Sudan Atomic Energy Commission, Sudan, Khartoum (Sudan)

    2013-07-15

    This study was carried out to determine the level of nitrate concentration in well water from the Bara aquifer in North Kordofan state (west-central Sudan). The analysis was conducted for 69 wells from different villages within the Bara basin. Spectrophotometric analysis was used to determine nitrate, nitrite and ammonia. Results revealed that nitrate concentrations ranged from 9.68 to 891 mg L⁻¹ in the sampled wells, with 81% exceeding the maximum permissible limits set for drinking water by the WHO and SSMO. Animal waste and organic soil nitrogen were found to be the sources of nitrate in these wells, as indicated by ¹⁵N. The GIS predictive map shows that the majority of wells with high nitrate are in the northern and north-eastern parts of the study area. (author)

  7. SPERM COUNT DISTRIBUTIONS IN FERTILE MEN

    Science.gov (United States)

    Sperm concentration and count are often used as indicators of environmental impacts on male reproductive health. Existing clinical databases may be biased towards subfertile men with low sperm counts and less is known about expected sperm count distributions in cohorts of fertil...

  8. Edge detection of optical subaperture image based on improved differential box-counting method

    Science.gov (United States)

    Li, Yi; Hui, Mei; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin

    2018-01-01

    Optical synthetic aperture imaging technology is an effective approach to improving imaging resolution. Compared with a monolithic mirror system, the image of an optical synthetic aperture system is often more complex at the edges, and because of the gaps between segments, stitching becomes a difficult problem. It is therefore necessary to extract the edges of subaperture images to achieve effective stitching. Fractal dimension, as a measurable feature, can describe image surface texture characteristics, which provides a new approach for edge detection. In our research, an improved differential box-counting method is used to calculate the fractal dimension of the image, and the obtained fractal dimension is then mapped to a grayscale image to detect edges. Compared with the original differential box-counting method, this method has two improvements: first, by modifying the box-counting mechanism, a box with a fixed height is replaced by a box with adaptive height, which solves the problem of over-counting the number of boxes covering the image intensity surface; second, an image reconstruction method based on a super-resolution convolutional neural network is used to enlarge small images, which solves the problem that the fractal dimension cannot be calculated accurately for small images, and this method maintains the scale invariance of the fractal dimension well. The experimental results show that the proposed algorithm can effectively eliminate noise and has a lower false detection rate compared with traditional edge detection algorithms. In addition, this algorithm can maintain the integrity and continuity of image edges while retaining important edge information.
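
    For orientation, the sketch below implements the classic fixed-height differential box-counting estimate of fractal dimension (the baseline that the paper's adaptive-height variant improves on), fitting log N_r against log(1/r) over a few grid sizes.

    # Classic differential box-counting (DBC) estimate of the fractal dimension
    # of a grayscale image, i.e. the fixed-height baseline scheme that the
    # adaptive-height variant above improves on.
    import numpy as np

    def dbc_fractal_dimension(img: np.ndarray, sizes=(2, 4, 8, 16)) -> float:
        m = min(img.shape)
        img = img[:m, :m].astype(float)
        g = img.max() + 1                       # number of gray levels
        log_n, log_inv_r = [], []
        for s in sizes:
            h = s * g / m                       # box height in gray-level units
            n_boxes = 0
            for i in range(0, m - m % s, s):
                for j in range(0, m - m % s, s):
                    block = img[i:i + s, j:j + s]
                    # boxes needed to span the intensity range of this block
                    n_boxes += int(block.max() // h) - int(block.min() // h) + 1
            log_n.append(np.log(n_boxes))
            log_inv_r.append(np.log(m / s))     # 1/r with r = s/m
        slope, _ = np.polyfit(log_inv_r, log_n, 1)
        return slope

    rng = np.random.default_rng(4)
    noise = rng.integers(0, 256, size=(64, 64))
    print(dbc_fractal_dimension(noise))         # rough surface -> approaches 3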

  9. Standardization of I-125 solution by extrapolation of an efficiency curve obtained by the coincidence X-(X-γ) counting method

    International Nuclear Information System (INIS)

    Iwahara, A.

    1989-01-01

    The activity concentration of ¹²⁵I was determined by the X-(X-γ) coincidence counting method and an efficiency extrapolation curve. The measurement system consists of two thin NaI(Tl) scintillation detectors which are horizontally movable on a track. The efficiency curve is obtained by symmetrically changing the distance between the source and the detectors, and the activity is determined by applying a linear efficiency extrapolation. All sum-coincidence events between 10 and 100 keV are included in the counting window, and the main source of uncertainty comes from poor counting statistics near zero efficiency. The consistency of the results with other methods shows that this technique can be applied to photon-cascade emitters that are not discriminated by the detectors. The 35.5 keV gamma-ray emission probability of ¹²⁵I was also determined using a gamma-X type high-purity germanium detector. (author)
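
    The extrapolation step itself is generic: the observed rate is fit linearly against an inefficiency parameter and extrapolated to zero inefficiency, where it estimates the source activity. The sketch below uses invented data points, not the paper's measurements.

    # Generic sketch of the efficiency-extrapolation step: fit the observed
    # rate linearly against an inefficiency parameter and extrapolate to zero
    # inefficiency (100% efficiency). Data points are invented.
    import numpy as np

    ineff = np.array([0.60, 0.50, 0.40, 0.30, 0.20])   # (1 - efficiency) values
    rate = np.array([218.0, 245.0, 271.0, 300.0, 326.0])  # observed rates (1/s)

    slope, intercept = np.polyfit(ineff, rate, 1)
    print(f"extrapolated activity ~ {intercept:.0f} Bq")  # rate at zero inefficiency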

  10. High-resolution mapping of motor vehicle carbon dioxide emissions

    Science.gov (United States)

    McDonald, Brian C.; McBride, Zoe C.; Martin, Elliot W.; Harley, Robert A.

    2014-05-01

    A fuel-based inventory for vehicle emissions is presented for carbon dioxide (CO2) and mapped at various spatial resolutions (10 km, 4 km, 1 km, and 500 m) using fuel sales and traffic count data. The mapping is done separately for gasoline-powered vehicles and heavy-duty diesel trucks. Emission estimates from this study are compared with the Emissions Database for Global Atmospheric Research (EDGAR) and VULCAN. All three inventories agree at the national level within 5%. EDGAR uses road density as a surrogate to apportion vehicle emissions, which leads to 20-80% overestimates of on-road CO2 emissions in the largest U.S. cities. High-resolution emission maps are presented for Los Angeles, New York City, San Francisco-San Jose, Houston, and Dallas-Fort Worth. Sharp emission gradients that exist near major highways are not apparent when emissions are mapped at 10 km resolution. High CO2 emission fluxes over highways become apparent at grid resolutions of 1 km and finer. Temporal variations in vehicle emissions are characterized using extensive day- and time-specific traffic count data and are described over diurnal, day of week, and seasonal time scales. Clear differences are observed when comparing light- and heavy-duty vehicle traffic patterns and comparing urban and rural areas. Decadal emission trends were analyzed from 2000 to 2007 when traffic volumes were increasing and a more recent period (2007-2010) when traffic volumes declined due to recession. We found large nonuniform changes in on-road CO2 emissions over a period of 5 years, highlighting the importance of timely updates to motor vehicle emission inventories.
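
    The core of a fuel-based inventory is an apportionment identity: regional emissions fixed by fuel sales are distributed to grid cells in proportion to traffic activity. The sketch below shows the arithmetic with invented numbers.

    # Sketch of the fuel-based apportionment idea: total CO2 constrained by fuel
    # sales is distributed to grid cells in proportion to observed traffic
    # activity (e.g., vehicle-kilometres travelled from count stations).
    # All numbers are invented for illustration.
    import numpy as np

    total_co2_kt = 1200.0                     # regional CO2 from fuel sales (kt)
    vkt_per_cell = np.array([5.0, 42.0, 18.0, 3.0, 32.0])  # traffic activity

    weights = vkt_per_cell / vkt_per_cell.sum()
    cell_emissions = total_co2_kt * weights   # kt CO2 per grid cell
    for i, e in enumerate(cell_emissions):
        print(f"cell {i}: {e:7.1f} kt CO2")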

  11. Calibration of the Accuscan II In Vivo System for Whole Body Counting

    Energy Technology Data Exchange (ETDEWEB)

    Orval R. Perry; David L. Georgeson

    2011-08-01

    This report describes the April 2011 calibration of the Accuscan II HpGe In Vivo system for whole body counting. The source used for the calibration was a NIST traceable BOMAB manufactured by DOE as INL2006 BOMAB containing Eu-154, Eu-155, Eu-152, Sb-125 and Y-88 with energies from 27 keV to 1836 keV with a reference date of 11/29/2006. The actual usable energy range was 86.5 keV to 1597 keV on 4/21/2011. The BOMAB was constructed inside the Accuscan II counting 'tub' in the order of legs, thighs, abdomen, thorax/arms, neck, and head. Each piece was taped to the backwall of the counter. The arms were taped to the thorax. The phantom was constructed between the v-ridges on the backwall of the Accuscan II counter. The energy and efficiency calibrations were performed using the INL2006 BOMAB. The calibrations were performed with the detectors in the scanning mode. This report includes an overview introduction and records for the energy/FWHM and efficiency calibration including performance verification and validation counting. The Accuscan II system was successfully calibrated for whole body counting and verified in accordance with ANSI/HPS N13.30-1996 criteria.

  12. Mapping of Natural Radionuclides using Noise Adjusted Singular Value Decomposition, NASVD

    DEFF Research Database (Denmark)

    Aage, Helle Karina

    2006-01-01

    Mapping of natural radionuclides from airborne gamma spectrometry suffers from random "noise" in the spectra due to short measurement times. This is partly compensated for by using large-volume detectors to improve the counting statistics. One method of further improving the quality of the measured...... spectra is to remove from the spectra a large fraction of this random noise using a special variant of Singular Value Decomposition: Noise Adjusted Singular Value Decomposition. In 1997-1999 the natural radionuclides on the Danish island of Bornholm were mapped using a combination of the standard 3
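
    A compact sketch of the NASVD idea, with illustrative data and component count: spectra are rescaled so that Poisson noise becomes approximately uniform across channels, a truncated SVD keeps only the leading spectral shapes, and the smoothed spectra are transformed back to count space.

    # Compact sketch of the NASVD idea: rescale spectra so Poisson noise is
    # roughly uniform, keep only the leading SVD components, transform back.
    # The component count and the synthetic data are illustrative.
    import numpy as np

    def nasvd_smooth(spectra: np.ndarray, n_components: int = 4) -> np.ndarray:
        """spectra: (n_records, n_channels) array of raw airborne gamma counts."""
        mean_spec = spectra.mean(axis=0)
        scale = np.sqrt(np.maximum(mean_spec, 1e-12))   # Poisson noise adjustment
        adjusted = spectra / scale
        u, s, vt = np.linalg.svd(adjusted, full_matrices=False)
        s[n_components:] = 0.0                          # drop noise-dominated modes
        return (u * s) @ vt * scale                     # back to count space

    rng = np.random.default_rng(5)
    true = np.outer(rng.uniform(50, 150, 300), np.linspace(1.0, 0.2, 256))
    noisy = rng.poisson(true).astype(float)
    smoothed = nasvd_smooth(noisy)
    print("rms noise before:", np.sqrt(((noisy - true) ** 2).mean()))
    print("rms noise after: ", np.sqrt(((smoothed - true) ** 2).mean()))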

  13. Total bacterial count and somatic cell count in refrigerated raw milk stored in communal tanks

    Directory of Open Access Journals (Sweden)

    Edmar da Costa Alves

    2014-09-01

    Full Text Available The current industry demand for dairy products with extended shelf life has resulted in new challenges for milk quality maintenance. The processing of milk with high bacterial counts compromises the quality and performance of industrial products. The study aimed to evaluate the total bacteria counts (TBC and somatic cell count (SCC in 768 samples of refrigerated raw milk, from 32 communal tanks. Samples were collected in the first quarter of 2010, 2011, 2012 and 2013 and analyzed by the Laboratory of Milk Quality - LQL. Results showed that 62.5%, 37.5%, 15.6% and 27.1% of the means for TBC in 2010, 2011, 2012 and 2013, respectively, were above the values established by legislation. However, we observed a significant reduction in the levels of total bacterial count (TBC in the studied periods. For somatic cell count, 100% of the means indicated values below 600.000 cells/mL, complying with the actual Brazilian legislation. The values found for the somatic cell count suggests the adoption of effective measures for the sanitary control of the herd. However, the results must be considered with caution as it highlights the need for quality improvements of the raw material until it achieves reliable results effectively.

  14. Developing Coastal Surface Roughness Maps Using ASTER and QuickBird Data Sources

    Science.gov (United States)

    Spruce, Joe; Berglund, Judith; Davis, Bruce

    2006-01-01

    This viewgraph presentation concerns one element of a larger project on the integration of NASA science models and data into the Hazards U.S. Multi-Hazard (HAZUS-MH) Hurricane Module for hurricane damage and loss risk assessment. HAZUS-MH is a decision support tool being developed by the National Institute of Building Sciences for the Federal Emergency Management Agency (FEMA). It includes the Hurricane Module, which employs surface roughness maps made from National Land Cover Data (NLCD) maps to estimate coastal hurricane wind damage and loss. NLCD maps are produced and distributed by the U.S. Geological Survey. This presentation discusses an effort to improve upon current HAZUS surface roughness maps by employing ASTER multispectral classifications together with QuickBird "ground reference" imagery.

  15. Combining disparate data sources for improved poverty prediction and mapping.

    Science.gov (United States)

    Pokhriyal, Neeti; Jacques, Damien Christophe

    2017-11-14

    More than 330 million people are still living in extreme poverty in Africa. Timely, accurate, and spatially fine-grained baseline data are essential to determining policy in favor of reducing poverty. The potential of "Big Data" to estimate socioeconomic factors in Africa has been proven. However, most current studies are limited to using a single data source. We propose a computational framework to accurately predict the Global Multidimensional Poverty Index (MPI) at the finest spatial granularity, with coverage of 552 communes in Senegal, using environmental data (related to food security, economic activity, and accessibility to facilities) and call data records (capturing individualistic, spatial, and temporal aspects of people). Our framework is based on Gaussian process regression, a Bayesian learning technique that provides uncertainty estimates for its predictions. We perform model selection using elastic net regularization to prevent overfitting. Our results empirically demonstrate superior accuracy when using disparate data (Pearson correlation of 0.91). Our approach is used to accurately predict important dimensions of poverty: health, education, and standard of living (Pearson correlation of 0.84-0.86). All predictions are validated using deprivations calculated from the census. Our approach can be used to generate poverty maps frequently, and its diagnostic nature is likely to assist policymakers in designing better interventions for poverty eradication. Copyright © 2017 the Author(s). Published by PNAS.
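
    A minimal sketch of the kind of model described (Gaussian process regression with predictive uncertainty). The features and targets below are synthetic stand-ins, and the authors' full pipeline also includes elastic net feature selection, which is omitted here.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(1)
        X = rng.standard_normal((552, 5))   # e.g. environment + phone features per commune
        y = X @ np.array([0.4, 0.2, 0.1, 0.0, -0.3]) + 0.1 * rng.standard_normal(552)

        gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
        gpr.fit(X, y)
        mpi_pred, mpi_std = gpr.predict(X, return_std=True)  # predictions + uncertainty
        print(mpi_pred[:3], mpi_std[:3])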

  16. Planck intermediate results. VII. Statistical properties of infrared and radio extragalactic sources from the Planck Early Release Compact Source Catalogue at frequencies between 100 and 857 GHz

    Science.gov (United States)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Argüeso, F.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Balbi, A.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Bernard, J.-P.; Bersanelli, M.; Bethermin, M.; Bhatia, R.; Bonaldi, A.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Burigana, C.; Cabella, P.; Cardoso, J.-F.; Catalano, A.; Cayón, L.; Chamballu, A.; Chary, R.-R.; Chen, X.; Chiang, L.-Y.; Christensen, P. R.; Clements, D. L.; Colafrancesco, S.; Colombi, S.; Colombo, L. P. L.; Coulais, A.; Crill, B. P.; Cuttaia, F.; Danese, L.; Davis, R. J.; de Bernardis, P.; de Gasperis, G.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Dörl, U.; Douspis, M.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Fosalba, P.; Frailis, M.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Harrison, D.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Jaffe, T. R.; Jaffe, A. H.; Jagemann, T.; Jones, W. C.; Juvela, M.; Keihänen, E.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurinsky, N.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lawrence, C. R.; Leonardi, R.; Lilje, P. B.; López-Caniego, M.; Macías-Pérez, J. F.; Maino, D.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Mazzotta, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Mitra, S.; Miville-Deschènes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reach, W. T.; Rebolo, R.; Reinecke, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sajina, A.; Sandri, M.; Savini, G.; Scott, D.; Smoot, G. F.; Starck, J.-L.; Sudiwala, R.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Türler, M.; Valenziano, L.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; White, M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2013-02-01

    We make use of the Planck all-sky survey to derive number counts and spectral indices of extragalactic sources - infrared and radio sources - from the Planck Early Release Compact Source Catalogue (ERCSC) at 100 to 857 GHz (3 mm to 350 μm). Three zones (deep, medium and shallow) of approximately homogeneous coverage are used to permit a clean and controlled correction for incompleteness, which was explicitly not done for the ERCSC, as it was aimed at providing lists of sources to be followed up. Our sample, prior to the 80% completeness cut, contains between 217 sources at 100 GHz and 1058 sources at 857 GHz over about 12 800 to 16 550 deg2 (31 to 40% of the sky). After the 80% completeness cut, between 122 and 452 sources remain, with flux densities above 0.3 and 1.9 Jy at 100 and 857 GHz, respectively. The sample so defined can be used for statistical analysis. Using the multi-frequency coverage of the Planck High Frequency Instrument, all the sources have been classified as either dust-dominated (infrared galaxies) or synchrotron-dominated (radio galaxies) on the basis of their spectral energy distributions (SED). Our sample is thus complete, flux-limited and color-selected to differentiate between the two populations. We find an approximately equal number of synchrotron and dusty sources between 217 and 353 GHz; at frequencies of 353 GHz and higher (or 217 GHz and lower), the counts are dominated by dusty (synchrotron) sources, as expected. For most of the sources, the spectral indices are also derived. We provide for the first time counts of bright sources from 353 to 857 GHz and the contributions from dusty and synchrotron sources at all HFI frequencies in the key spectral range where these spectra are crossing. The observed counts are in the Euclidean regime. The number counts are compared to previously published data (from earlier Planck results, Herschel, BLAST, SCUBA, LABOCA, SPT, and ACT) and models taking into account both radio or infrared galaxies, and covering a
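
    For reference (a standard result, stated as my gloss rather than a quotation from the paper): "Euclidean" counts are those of uniformly distributed sources in a static Euclidean space, for which

        \frac{dN}{dS} \propto S^{-5/2}, \qquad N(>S) \propto S^{-3/2}

    so the quantity S^{5/2} dN/dS is flat in the Euclidean regime, which is why bright-source counts are often plotted in that normalization.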

  17. Near-Infrared Imaging for Spatial Mapping of Organic Content in Petroleum Source Rocks

    Science.gov (United States)

    Mehmani, Y.; Burnham, A. K.; Vanden Berg, M. D.; Tchelepi, H.

    2017-12-01

    Natural gas from unconventional petroleum source rocks (shales) plays a key role in our transition towards sustainable low-carbon energy production. The potential for carbon storage (in the adsorbed state) in these formations further aligns with efforts to mitigate climate change. Optimizing production and development from these resources requires knowledge of the hydro-thermo-mechanical properties of the rock, which are often strong functions of organic content. This work demonstrates the potential of near-infrared (NIR) spectral imaging in mapping the spatial distribution of organic content with O(100 µm) resolution on cores that can span several hundred feet in depth (Mehmani et al., 2017). We validate our approach for the immature oil shale of the Green River Formation (GRF), USA, and show its applicability potential in other formations. The method is a generalization of a previously developed optical approach specialized to the GRF (Mehmani et al., 2016a). The implications of this work for spatial mapping of hydro-thermo-mechanical properties of excavated cores, in particular thermal conductivity, are discussed (Mehmani et al., 2016b). References: Mehmani, Y., A.K. Burnham, M.D. Vanden Berg, H. Tchelepi, "Quantification of organic content in shales via near-infrared imaging: Green River Formation." Fuel (2017). Mehmani, Y., A.K. Burnham, M.D. Vanden Berg, F. Gelin, and H. Tchelepi, "Quantification of kerogen content in organic-rich shales from optical photographs." Fuel (2016a). Mehmani, Y., A.K. Burnham, H. Tchelepi, "From optics to upscaled thermal conductivity: Green River oil shale." Fuel (2016b).

  18. Automatic counting of fission fragments tracks using the gas permeation technique

    CERN Document Server

    Yamazaki, I M

    1999-01-01

    An automatic counting system for fission tracks induced in a polycarbonate plastic, Makrofol KG (10 µm thickness), is described. The method is based on the gas transport mechanism proposed by Knudsen, whereby the gas permeability of a porous membrane is expected to be directly related to its track density. In this work, nitrogen permeabilities for several Makrofol films with different fission track densities have been measured using a suitable gas permeation system. The fission tracks were produced by irradiating Makrofol foils with a calibrated 252Cf source in a 2π geometry. A calibration curve of fission track number versus nitrogen permeability has been obtained for track densities higher than 1000/cm², where the spark gap technique and visual counting under a microscope are not appropriate.
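
    A sketch of how such a calibration curve would be used (illustrative numbers only, not the paper's data): fit track density versus nitrogen permeability on calibration foils, then invert the fit for an unknown foil.

        import numpy as np

        perm = np.array([2.1, 3.4, 5.0, 7.2, 9.8])               # N2 permeability (arb. units)
        tracks = np.array([1.2e3, 2.5e3, 4.1e3, 6.6e3, 9.0e3])   # known densities (cm^-2)

        slope, intercept = np.polyfit(perm, tracks, 1)           # linear calibration
        unknown_perm = 6.0
        print(slope * unknown_perm + intercept)                  # estimated tracks/cm^2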

  19. Rainflow counting revisited

    Energy Technology Data Exchange (ETDEWEB)

    Soeker, H [Deutsches Windenergie-Institut (Germany)

    1996-09-01

    As the state-of-the-art method, the rainflow counting technique is presently applied everywhere in fatigue analysis. However, the author feels that the potential of the technique is not fully recognized in the wind energy industry, as it is used most of the time as a mere data reduction technique, disregarding some of the inherent information in the rainflow counting results. The ideas described in the following aim at exploiting this information and making it available for use in the design and verification process. (au)
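
    Since the abstract stops short of the algorithm itself, here is a compact sketch of the three-point rainflow rule (a simplified textbook variant, not the author's implementation; ASTM E1049's special handling of half cycles at the start of the history is omitted, and the leftover residue is counted as half cycles):

        def turning_points(series):
            """Reduce a load history to its local extrema (reversal points)."""
            pts = [series[0]]
            for x in series[1:]:
                if x == pts[-1]:
                    continue
                if len(pts) >= 2 and (pts[-1] - pts[-2]) * (x - pts[-1]) > 0:
                    pts[-1] = x          # still rising/falling: extend the excursion
                else:
                    pts.append(x)
            return pts

        def rainflow(series):
            """Return (range, count) pairs: 1.0 = full cycle, 0.5 = half cycle."""
            stack, cycles = [], []
            for p in turning_points(series):
                stack.append(p)
                while len(stack) >= 3:
                    x = abs(stack[-1] - stack[-2])   # most recent range
                    y = abs(stack[-2] - stack[-3])   # previous range
                    if x < y:
                        break
                    cycles.append((y, 1.0))          # inner cycle closed
                    del stack[-3:-1]                 # remove its two points
            for a, b in zip(stack, stack[1:]):       # residue -> half cycles
                cycles.append((abs(b - a), 0.5))
            return cycles

        print(rainflow([-2, 1, -3, 5, -1, 3, -4, 4, -2]))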

  20. Radon counting statistics - a Monte Carlo investigation

    International Nuclear Information System (INIS)

    Scott, A.G.

    1996-01-01

    Radioactive decay is a Poisson process, and so the coefficient of variation (COV) of "n" counts of a single nuclide is usually estimated as 1/√n. This is only true if the count duration is much shorter than the half-life of the nuclide; at longer count durations, the COV is smaller than the Poisson estimate. Most radon measurement methods count the alpha decays of 222Rn plus the progeny 218Po and 214Po, and estimate the 222Rn activity from the sum of the counts. At long count durations, the chain decay of these nuclides means that every 222Rn decay must be followed by two other alpha decays. The total number of decays is "3N", where N is the number of radon decays, and the true COV of the radon concentration estimate is 1/√N, which is √3 larger than the Poisson total-count estimate of 1/√(3N). Most count periods are comparable to the half-lives of the progeny, so the relationship between COV and count time is complex. A Monte Carlo estimate of the ratio of true COV to Poisson estimate was carried out for a range of count periods from 1 min to 16 h and three common radon measurement methods: liquid scintillation, scintillation cell, and electrostatic precipitation of progeny. The Poisson approximation underestimates the COV by less than 20% for count durations of less than 60 min
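
    A toy Monte Carlo of the long-duration limit (my illustration, not the paper's simulation, which also models the progeny half-lives): each radon decay in the window is eventually followed by two progeny alphas, so counts arrive in threes and the true COV approaches 1/√N, a factor √3 above the naive 1/√(3N).

        import numpy as np

        rng = np.random.default_rng(2)
        mean_rn_decays = 200.0                      # expected radon decays per count period
        n = rng.poisson(mean_rn_decays, size=100_000)
        total = 3 * n                               # Rn-222 + Po-218 + Po-214 alphas

        true_cov = total.std() / total.mean()
        poisson_cov = 1.0 / np.sqrt(total.mean())
        print(true_cov / poisson_cov)               # -> about sqrt(3) = 1.732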

  1. Event-based motion correction for PET transmission measurements with a rotating point source

    International Nuclear Information System (INIS)

    Zhou, Victor W; Kyme, Andre Z; Meikle, Steven R; Fulton, Roger

    2011-01-01

    Accurate attenuation correction is important for quantitative positron emission tomography (PET) studies. When performing transmission measurements using an external rotating radioactive source, object motion during the transmission scan can distort the attenuation correction factors computed as the ratio of the blank to transmission counts, and cause errors and artefacts in reconstructed PET images. In this paper we report a compensation method for rigid body motion during PET transmission measurements, in which list mode transmission data are motion corrected event-by-event, based on known motion, to ensure that all events which traverse the same path through the object are recorded on a common line of response (LOR). As a result, the motion-corrected transmission LOR may record a combination of events originally detected on different LORs. To ensure that the corresponding blank LOR records events from the same combination of contributing LORs, the list mode blank data are spatially transformed event-by-event based on the same motion information. The number of counts recorded on the resulting blank LOR is then equivalent to the number of counts that would have been recorded on the corresponding motion-corrected transmission LOR in the absence of any attenuating object. The proposed method has been verified in phantom studies with both stepwise movements and continuous motion. We found that attenuation maps derived from motion-corrected transmission and blank data agree well with those of the stationary phantom and are significantly better than uncorrected attenuation data.
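
    A sketch of the core step as I read it (the general idea only, not the authors' implementation): the known rigid-body transform at each event's time stamp is applied to both endpoints of the event's line of response.

        import numpy as np

        def correct_lor(p1, p2, rotation, translation):
            """Map both 3-D endpoints of an LOR into the object's reference frame."""
            r = np.asarray(rotation)                 # 3x3 rotation at this time stamp
            t = np.asarray(translation)              # 3-vector
            return r @ np.asarray(p1) + t, r @ np.asarray(p2) + t

        # The same transform is applied event-by-event to the blank scan, so that
        # blank/transmission ratios are formed on matching (motion-corrected) LORs.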

  2. Hanford whole body counting manual

    International Nuclear Information System (INIS)

    Palmer, H.E.; Rieksts, G.A.; Lynch, T.P.

    1990-06-01

    This document describes the Hanford Whole Body Counting Program as administered by Pacific Northwest Laboratory (PNL) in support of the US Department of Energy-Richland Operations Office (DOE-RL) and its Hanford contractors. Program services include providing in vivo measurements of internally deposited radioactivity in Hanford employees (or visitors). Specific chapters of this manual deal with the following subjects: the program's operational charter, authority, administration, and practices, including the interpretation of applicable DOE Orders, regulations, and guidance into criteria for in vivo measurement frequency, etc., for the plant-wide whole body counting services; the state-of-the-art facilities and equipment used to provide the best in vivo measurement results possible for the approximately 11,000 measurements made annually; procedures for performing the various in vivo measurements at the Whole Body Counter (WBC) and related facilities, including whole body counts; operation and maintenance of counting equipment; quality assurance provisions of the program; WBC data processing functions; statistical aspects of in vivo measurements; and whole body counting records and associated guidance documents. 16 refs., 48 figs., 22 tabs

  3. Platelet Count and Plateletcrit

    African Journals Online (AJOL)

    … demonstrated that neonates with late-onset sepsis (bacteremia after 3 days of age) had a dramatic increase in MPV and PDW [18]. We hypothesize that, as the MPV and PDW increase and the platelet count and PCT decrease in sick children, intuitively the ratios of MPV to PCT, MPV to platelet count, PDW to PCT, and PDW to platelet ...

  4. An Adaptive Smoother for Counting Measurements

    International Nuclear Information System (INIS)

    Kondrasovs Vladimir; Coulon Romain; Normand Stephane

    2013-06-01

    Counting measurements associated with nuclear instruments are tricky to carry out due to the stochastic nature of radioactivity. Event counts have to be processed and filtered in order to display a stable count-rate value and to allow monitoring of variations in the measured activity. Smoothers (such as the moving average) are adjusted by a time constant chosen as a compromise between stability and response time. A new approach has been developed that improves the response time while maintaining count-rate stability; it combines a smoother with a detection filter. A memory of counting data is processed to calculate several count-rate estimates over several integration times. These estimates are sorted in the memory from short to long integration times. A measurement position, in terms of integration time, is then chosen from this memory after a detection test: an inhomogeneity in the Poisson counting process is detected by comparing the current position estimate with the other estimates in the memory, with respect to the associated statistical variance calculated under the assumption of homogeneity. The measurement position (historical time) and the decision to forget obsolete data or to keep useful data in memory are managed using the result of the detection test. The proposed smoother is thus an adaptive, learning algorithm that optimizes the response time while maintaining counting stability, and converges efficiently to the best counting estimate after an effective change in activity. The algorithm is also only weakly recursive and thus easily embedded in DSP electronics based on FPGAs or microcontrollers meeting 'real life' time requirements. (authors)
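
    A minimal sketch of this kind of adaptive smoother (my reading of the abstract, not the authors' code): keep count sums over windows of increasing length and use the longest window whose rate is statistically consistent with the most recent one.

        import numpy as np

        def adaptive_rate(counts, dt=1.0, z=3.0):
            """counts: most-recent-last array of counts per time bin of width dt."""
            best = counts[-1] / dt                       # shortest-window estimate
            for m in range(2, len(counts) + 1):
                window = counts[-m:]
                n = window.sum()
                rate = n / (m * dt)
                sigma = np.sqrt(n) / (m * dt)            # Poisson std of window rate
                if abs(best - rate) > z * sigma:         # change detected: stop growing
                    break
                best = rate                              # homogeneous so far: use longer window
            return best

        print(adaptive_rate(np.array([10, 11, 9, 12, 10])))   # stable -> long average
        print(adaptive_rate(np.array([10, 11, 9, 12, 60])))   # step change -> fast response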

  5. The study of error for analysis in dynamic image from the error of count rates in Nal (Tl) scintillation camera

    International Nuclear Information System (INIS)

    Oh, Joo Young; Kang, Chun Goo; Kim, Jung Yul; Oh, Ki Baek; Kim, Jae Sam; Park, Hoon Hee

    2013-01-01

    This study aimed to evaluate the effect of T1/2 on count rates in the analysis of dynamic scans using a NaI(Tl) scintillation camera, and to suggest a new quality control method based on this effect. We produced point sources of 99mTcO4- with 18.5 to 185 MBq in 2 mL syringes, and acquired 30 frames of dynamic images of 10 to 60 seconds each using an Infinia gamma camera (GE, USA). In the second experiment, 90 frames of dynamic images were acquired from a 74 MBq point source by 5 gamma cameras (Infinia 2, Forte 2, Argus 1). There were no significant differences in the average count rates of the sources with 18.5 to 92.5 MBq in the analysis at 10 to 60 seconds/frame with 10-second intervals in the first experiment (p>0.05), but the average count rates were significantly low for sources over 111 MBq at 60 seconds/frame (p<0.01). According to the second analysis, based on linear regression of the count rates of the 5 gamma cameras acquired over 90 minutes, the counting efficiency of the fourth gamma camera was lowest at 0.0064%, and its gradient and coefficient of variation were highest at 0.0042 and 0.229, respectively. We found no abnormal fluctuation in the χ2 test of the count rates (p>0.02), and we found homogeneity of variance among the gamma cameras in Levene's F-test (p>0.05). In the correlation analysis, the only significant correlation was a negative one between counting efficiency and gradient (r=-0.90, p<0.05). Lastly, according to the calculation of the T1/2 error from a change of gradient from -0.25% to +0.25%, if T1/2 is relatively long, or the gradient is high, the error increases correspondingly. When estimating the value for the fourth camera, which had the highest gradient, we could not see a T1/2 error within 60 minutes. In conclusion, it is necessary for scintillation gamma cameras in the medical field to be managed rigorously for the quality of radiation measurement. Especially, we found a

  6. A procedure for merging land cover/use data from LANDSAT, aerial photography, and map sources: Compatibility, accuracy, and cost. Remote Sensing Project

    Science.gov (United States)

    Enslin, W. R.; Tilmann, S. E.; Hill-Rowley, R.; Rogers, R. H.

    1977-01-01

    Regional planning agencies are currently expressing a need for detailed land cover/use information to effectively meet the requirements of various federal programs. Individual data sources have advantages and limitations in fulfilling this need, both in terms of time/cost and technological capability. A methodology has been developed to merge land cover/use data from LANDSAT, aerial photography and map sources to maximize the effective use of a variety of data sources in the provision of an integrated information system for regional analysis. A test of the proposed inventory method is currently under way in four central Michigan townships. This test will evaluate the compatibility, accuracy and cost of the integrated method with reference to inventories developed from a single data source, and determine both the technological feasibility and analytical potential of such a system.

  7. Extreme Ultraviolet Explorer Bright Source List

    Science.gov (United States)

    Malina, Roger F.; Marshall, Herman L.; Antia, Behram; Christian, Carol A.; Dobson, Carl A.; Finley, David S.; Fruscione, Antonella; Girouard, Forrest R.; Hawkins, Isabel; Jelinsky, Patrick

    1994-01-01

    Initial results from the analysis of the Extreme Ultraviolet Explorer (EUVE) all-sky survey (58-740 Å) and deep survey (67-364 Å) are presented through the EUVE Bright Source List (BSL). The BSL contains 356 confirmed extreme ultraviolet (EUV) point sources with supporting information, including positions, observed EUV count rates, and the identification of possible optical counterparts. One hundred twenty-six sources have been detected longward of 200 Å.

  8. Counting probe

    International Nuclear Information System (INIS)

    Matsumoto, Haruya; Kaya, Nobuyuki; Yuasa, Kazuhiro; Hayashi, Tomoaki

    1976-01-01

    An electron counting method has been devised and tested for the purpose of measuring electron temperature and density, the most fundamental quantities characterizing plasma conditions. Electron counting is a method of counting the electrons in a plasma directly by equipping a probe with a secondary electron multiplier. It has three advantages: adjustable sensitivity, the high sensitivity of the secondary electron multiplier, and directionality. Sensitivity adjustment is performed by changing the size of the collecting hole (pinhole) on the entrance face of the multiplier. The probe is usable as a direct-reading thermometer for electron temperature because it needs to collect only a very small number of electrons; thus it does not disturb the surrounding plasma, and a narrow sweep width of the probe voltage is sufficient. It can therefore measure anisotropy more sensitively than a Langmuir probe, and it can be used for very low density plasma. Though many problems remain concerning anisotropy, computer simulation has been carried out. It is also planned to install a Helmholtz coil in the vacuum chamber to eliminate the effect of the Earth's magnetic field. In practical experiments, measurements with a Langmuir probe and an emission probe mounted on a movable structure, comparison with results obtained in a reversed magnetic field using the Helmholtz coil, and measurement of ion acoustic waves are scheduled. (Wakatsuki, Y.)

  9. A mind you can count on: validating breath counting as a behavioral measure of mindfulness

    Directory of Open Access Journals (Sweden)

    Daniel B Levinson

    2014-10-01

    Full Text Available Mindfulness practice of present moment awareness promises many benefits, but has eluded rigorous behavioral measurement. To date, research has relied on self-reported mindfulness or heterogeneous mindfulness trainings to infer skillful mindfulness practice and its effects. In four independent studies with over 400 total participants, we present the first construct validation of a behavioral measure of mindfulness, breath counting. We found it was reliable, correlated with self-reported mindfulness, differentiated long-term meditators from age-matched controls, and was distinct from sustained attention and working memory measures. In addition, we employed breath counting to test the nomological network of mindfulness. As theorized, we found skill in breath counting associated with more meta-awareness, less mind wandering, better mood, and greater nonattachment (i.e., less attentional capture by distractors formerly paired with reward). We also found in a randomized online training study that 4 weeks of breath counting training improved mindfulness and decreased mind wandering relative to working memory training and no-training controls. Together, these findings provide the first evidence for breath counting as a behavioral measure of mindfulness.

  10. Measuring Trace Gas Emission from Multi-Distributed Sources Using Vertical Radial Plume Mapping (VRPM and Backward Lagrangian Stochastic (bLS Techniques

    Directory of Open Access Journals (Sweden)

    Thomas K. Flesch

    2011-09-01

    Full Text Available Two micrometeorological techniques for measuring trace gas emission rates from distributed area sources were evaluated using a variety of synthetic area sources. The vertical radial plume mapping (VRPM) and the backward Lagrangian stochastic (bLS) techniques, with an open-path optical spectroscopic sensor, were evaluated for relative accuracy for multiple emission-source and sensor configurations. The relative accuracy was calculated by dividing the measured emission rate by the actual emission rate; thus, a relative accuracy of 1.0 represents a perfect measure. For a single area emission source, the VRPM technique yielded a somewhat high relative accuracy of 1.38 ± 0.28. The bLS technique resulted in a relative accuracy close to unity, 0.98 ± 0.24. Relative accuracies for dual-source emissions for the VRPM and bLS techniques were similar to those for single-source emissions, 1.23 ± 0.17 and 0.94 ± 0.24, respectively. When the bLS technique was used with vertical point concentrations, the relative accuracy was unacceptably low,

  11. Estimation of single-year-of-age counts of live births, fetal losses, abortions, and pregnant women for counties of Texas.

    Science.gov (United States)

    Singh, Bismark; Meyers, Lauren Ancel

    2017-05-08

    We provide a methodology for estimating counts of single-year-of-age live births, fetal losses, abortions, and pregnant women from aggregated age-group counts. As a case study, we estimate counts for the 254 counties of Texas for the year 2010. We use interpolation to estimate counts of live births, fetal losses, and abortions by women of each single year of age for all Texas counties. We then use these counts to estimate the numbers of pregnant women for each single year of age, which were previously available only in aggregate. To support public health policy and planning, we provide single-year-of-age estimates of live births, fetal losses, abortions, and pregnant women for all Texas counties in the year 2010, as well as the estimation method source code.
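
    A sketch of one standard way to disaggregate age-group counts into single years of age (the paper's exact interpolation scheme may differ; the numbers below are hypothetical): interpolate the cumulative count at group boundaries with a monotone spline, then difference it at single-year ages.

        import numpy as np
        from scipy.interpolate import PchipInterpolator

        edges = np.array([15, 20, 25, 30, 35, 40, 45])      # age-group boundaries
        births = np.array([310, 980, 1240, 900, 410, 60])   # counts per group

        cum = np.concatenate([[0.0], np.cumsum(births)])    # cumulative counts at edges
        spline = PchipInterpolator(edges, cum)              # monotone -> no negative counts

        ages = np.arange(15, 45)
        single_year = np.diff(spline(np.arange(15, 46)))    # counts for ages 15..44
        print(dict(zip(ages, np.round(single_year, 1))))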

  12. Modification history of the Harmakhis Vallis outflow channel, Mars, based on CTX-scale photogeologic mapping and crater count dating

    Science.gov (United States)

    Kukkonen, S.; Kostama, V.-P.

    2018-01-01

    Harmakhis Vallis is one of the four major outflow channel systems (Dao, Niger, Harmakhis, and Reull Valles) that cut the eastern rim region of the Hellas basin, the largest well-preserved impact structure on Mars. The structure of Harmakhis Vallis and the volume of its head depression, as well as earlier dating studies of the region, suggest that the outflow channel formed in the Hesperian period by collapsing when a large amount of subsurface fluid was released. Thus Harmakhis Vallis, as well as the other nearby outflow channels, represents a significant stage of the fluvial activity in the regional history. On the other hand, the outflow channel lies in the Martian mid-latitude zone, where there are several geomorphologic indicators of past and possibly also contemporary ground ice. The floor of Harmakhis also displays evidence of a later-stage ice-related activity, as the outflow channel has been covered by lineated valley fill deposits and debris apron material. The eastern rim region of the Hellas impact basin has been the subject of numerous geologic mapping studies at various scales and based on different imaging data sets. However, Harmakhis Vallis itself has received less attention and the studies on the outflow channel have focused only on limited parts of the outflow channel or on separated different geologic events. In this work, the Harmakhis Vallis floor is mapped and dated from the head depression to the beginning of the terminus based on the Mars Reconnaissance Orbiter's ConTeXt camera images (CTX; ∼ 6 m/pixel). Our results show that Harmakhis Vallis has been modified by several processes after its formation. Age determinations on the small uncovered parts of the outflow channel, which possibly represent the original floor of Harmakhis, imply that Harmakhis may have experienced fluvial activity only 780-850 ( ± 400-600) Ma ago. The discovered terrace structure instead shows that the on-surface activity of the outflow channel has been periodic

  13. Monte Carlo simulation of lung counting efficiency using a whole-body counter at a nuclear power plant

    International Nuclear Information System (INIS)

    Dongming, L.; Shuhai, J.; Houwen, L.

    2016-01-01

    In order to routinely evaluate workers' internal exposure due to intake of radionuclides, a whole-body counter (WBC) at the Third Qinshan Nuclear Power Co. Ltd. (TQNPC) is used. Counting would typically occur immediately after a confirmed or suspected inhalation exposure. The counting geometry would differ as a result of the height of the individual being counted, which would result in over- or underestimated intake(s). In this study, Monte Carlo simulation was applied to evaluate the counting efficiency when performing a lung count using the WBC at the TQNPC. In order to validate the simulated efficiencies for lung counting, the WBC was benchmarked for various lung positions using a 137Cs source. The results show that the simulated efficiencies are fairly consistent with the measured ones for 137Cs, with a relative error of 0.289%. For a lung organ simulation, the discrepancy between the calibration phantom and the Chinese reference adult person (170 cm) was within 6% for peak energies ranging from 59.5 keV to 2000 keV. The relative errors vary from 4.63% to 8.41% depending on the person's height and photon energy. Therefore, the simulation technique is effective and practical for lung counting, which is difficult to calibrate using a physical phantom. (authors)

  14. Web mapping: tools and solutions for creating interactive maps of forestry interest

    Directory of Open Access Journals (Sweden)

    Notarangelo G

    2011-12-01

    Full Text Available The spread of geobrowsers as tools for displaying geographically referenced information provides insights and opportunities to those who, not being specialists in Geographic Information Systems, want to take advantage of the exploration and communication power offered by this software. Through the use of web services such as Google Maps and of suitable markup languages, one can create interactive maps starting from highly heterogeneous data and information. These interactive maps can also be easily distributed and shared with Internet users, because they require neither proprietary software nor special skills, but only a web browser. Unlike maps created with GIS, whose output is usually a static image, interactive maps retain all their features to the user's advantage. This paper describes a web application that, using the Keyhole Markup Language and the free Google Maps service, produces choropleth maps of some forest indicators estimated by the latest Italian National Forest Inventory. The creation of a map is done through a simple and intuitive interface. Maps created by users can be downloaded as KML files and can be viewed or modified via the freeware application Google Earth or free and open source GIS software such as Quantum GIS. The web application is free and available at www.ricercaforestale.it.
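
    A toy sketch (not the site's actual code) of the basic building block of such a KML choropleth: a polygon placemark whose fill colour depends on the indicator class. The class breaks and colours are invented for illustration.

        # Generate one KML polygon with a class-dependent fill colour.
        KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
        <kml xmlns="http://www.opengis.net/kml/2.2"><Document>
          <Placemark>
            <name>{name}</name>
            <Style><PolyStyle><color>{abgr}</color></PolyStyle></Style>
            <Polygon><outerBoundaryIs><LinearRing><coordinates>
              {coords}
            </coordinates></LinearRing></outerBoundaryIs></Polygon>
          </Placemark>
        </Document></kml>"""

        def choropleth_colour(value, breaks=(10, 20, 40)):
            """Map an indicator value to a KML aabbggrr colour (hypothetical classes)."""
            colours = ["7f00ff00", "7f00ffff", "7f0080ff", "7f0000ff"]  # green..red, 50% alpha
            k = sum(value >= b for b in breaks)
            return colours[k]

        coords = "12.40,41.90,0 12.50,41.90,0 12.50,42.00,0 12.40,41.90,0"  # closed ring
        print(KML_TEMPLATE.format(name="Region A (hypothetical indicator = 27)",
                                  abgr=choropleth_colour(27.0), coords=coords))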

  15. Multiplicity counting from fission chamber signals in the current mode

    Energy Technology Data Exchange (ETDEWEB)

    Pázsit, I. [Chalmers University of Technology, Department of Physics, Division of Subatomic and Plasma Physics, SE-412 96 Göteborg (Sweden); Pál, L. [Centre for Energy Research, Hungarian Academy of Sciences, 114, POB 49, H-1525 Budapest (Hungary); Nagy, L. [Chalmers University of Technology, Department of Physics, Division of Subatomic and Plasma Physics, SE-412 96 Göteborg (Sweden); Budapest University of Technology and Economics, Institute of Nuclear Techniques, H-1111 Budapest (Hungary)

    2016-12-11

    In nuclear safeguards, estimation of sample parameters using neutron-based non-destructive assay methods is traditionally based on multiplicity counting with thermal neutron detectors in the pulse mode. These methods in general require multi-channel analysers and various dead time correction methods. This paper proposes and elaborates on an alternative method, which is based on fast neutron measurements with fission chambers in the current mode. A theory of “multiplicity counting” with fission chambers is developed by incorporating Böhnel's concept of superfission [1] into a master equation formalism, developed recently by the present authors for the statistical theory of fission chamber signals [2,3]. Explicit expressions are derived for the first three central auto- and cross moments (cumulants) of the signals of up to three detectors. These constitute the generalisation of the traditional Campbell relationships for the case when the incoming events represent a compound Poisson distribution. Because now the expressions contain the factorial moments of the compound source, they contain the same information as the singles, doubles and triples rates of traditional multiplicity counting. The results show that in addition to the detector efficiency, the detector pulse shape also enters the formulas; hence, the method requires a more involved calibration than the traditional method of multiplicity counting. However, the method has some advantages by not needing dead time corrections, as well as having a simpler and more efficient data processing procedure, in particular for cross-correlations between different detectors, than the traditional multiplicity counting methods.
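
    For orientation (a standard shot-noise result, stated as my gloss rather than a quotation from the paper): for a detector signal y(t) = Σ_k a_k f(t − t_k) driven by a Poisson event stream of intensity s with i.i.d. pulse charges a_k, the cumulants generalize Campbell's theorem as

        \kappa_n = s \, \langle a^n \rangle \int_{-\infty}^{\infty} f(t)^n \, dt, \qquad n = 1, 2, 3, \dots

    When the incoming events are themselves compound (bursts of correlated neutrons from a superfission), ⟨a^n⟩ is replaced by combinations of the factorial moments of the burst multiplicity, which is how the singles/doubles/triples information enters the current-mode moments, and why the pulse shape f(t) appears in the calibration.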

  16. Comparison of MCNP6 and experimental results for neutron counts, Rossi-α, and Feynman-α distributions

    International Nuclear Information System (INIS)

    Talamo, A.; Gohar, Y.; Sadovich, S.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.

    2013-01-01

    MCNP6, the general-purpose Monte Carlo N-Particle code, has the capability to perform time-dependent calculations by tracking the time interval between successive events of the neutron random walk. In fixed-source calculations for a subcritical assembly, the zero time value is assigned at the moment the neutron is emitted by the external neutron source. The PTRAC and F8 cards of MCNP allow tallying the time when a neutron is captured by 3He(n,p) reactions in the neutron detector. From this information, it is possible to build three different time distributions: neutron counts, Rossi-α, and Feynman-α. The neutron counts time distribution represents the number of neutrons captured as a function of time. The Rossi-α distribution represents the number of neutron pairs captured as a function of the time interval between two capture events. The Feynman-α distribution represents the variance-to-mean ratio, minus one, of the neutron counts array as a function of a fixed time interval. The MCNP6 results for these three time distributions have been compared with the experimental data of the YALINA Thermal facility and have been found to be in quite good agreement. (authors)
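
    An illustrative sketch (not the paper's code) of building the Feynman-α quantity Y(T) = variance/mean − 1 from a list of capture time stamps, for a range of gate widths T; for a pure Poisson source Y(T) ≈ 0 at all gate widths.

        import numpy as np

        def feynman_y(times, gates):
            times = np.sort(np.asarray(times))
            t_total = times[-1] - times[0]
            out = []
            for gate in gates:
                n_bins = int(t_total // gate)
                counts, _ = np.histogram(times, bins=n_bins,
                                         range=(times[0], times[0] + n_bins * gate))
                out.append(counts.var() / counts.mean() - 1.0)
            return np.array(out)

        rng = np.random.default_rng(3)
        poisson_times = np.cumsum(rng.exponential(1e-4, size=200_000))
        print(feynman_y(poisson_times, gates=[1e-3, 4e-3, 1.6e-2]))  # ~ [0, 0, 0]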

  17. Principles of correlation counting

    International Nuclear Information System (INIS)

    Mueller, J.W.

    1975-01-01

    A review is given of the various applications which have been made of correlation techniques in the field of nuclear physics, in particular for absolute counting. Whereas in most cases the usual coincidence method will be preferable for its simplicity, correlation counting may be the only possible approach in cases where the two radiations of the cascade cannot be well separated or where there is a long-lived intermediate state. The measurement of half-lives and of the count rates of spurious pulses is also briefly discussed. The various experimental situations lead to different ways in which the correlation method is best applied (covariance technique with one or with two detectors, application of correlation functions, etc.). Formulae are given for some simple model cases, neglecting dead-time corrections

  18. Alignment and referencing of maps and aerial photographs

    International Nuclear Information System (INIS)

    Cullings, Harry M.; Fujita, Shoichiro; Hoshi, Masaharu; Egbert, Stephen D.; Kerr, George D.

    2005-01-01

    Documentation of survivor locations as well as sample collection sites for dosimetry-related measurements requires reference to suitable maps. The maps traditionally used at RERF for these purposes are the U.S. Army maps dating from circa 1945 (see Chapter 1). In later years, some use has been made of Japanese city plan maps, which are much newer (1979 in Hiroshima and 1981 in Nagasaki) and of larger scale (1:2,500 vs. 1:12,500 for the U.S. Army maps). Even before the publication of DS86, efforts were made to reconcile the locations of buildings and other features of interest on these two sets of maps. Beyond the simple desire to compare two different sources of map information, it was thought that, for technical reasons, a better standard of accuracy could be achieved with the newer maps. The U.S. Army maps were compiled under wartime conditions from an assortment of older Japanese maps and other sources, including aerial photographs of limited quality, using the best methods available at the time. The newer Japanese maps had the benefit of 34 years of improvement in cartographic methods and were made with extensive new survey information. Because of their larger scale, they are also more detailed than the U.S. Army maps. (J.P.N.)

  19. The Chandra Source Catalog: Spectral Properties

    Science.gov (United States)

    Doe, Stephen; Siemiginowska, Aneta L.; Refsdal, Brian L.; Evans, Ian N.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Glotfelty, Kenny J.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Primini, Francis A.; Rots, Arnold H.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.; Zografou, Panagoula

    2009-09-01

    The first release of the Chandra Source Catalog (CSC) contains all sources identified from eight years' worth of publicly accessible observations. The vast majority of these sources have been observed with the ACIS detector and have spectral information in the 0.5-7 keV energy range. Here we describe the methods used to automatically derive spectral properties for each source detected by the standard processing pipeline and included in the final CSC. Hardness ratios were calculated for each source between pairs of energy bands (soft, medium and hard) using a Bayesian algorithm (BEHR, Park et al. 2006). The sources with a high signal-to-noise ratio (exceeding 150 net counts) were fit in Sherpa (the modeling and fitting application from the Chandra Interactive Analysis of Observations package, developed by the Chandra X-ray Center; see Freeman et al. 2001). Two models were fit to each source: an absorbed power law and a blackbody. The fitted parameter values for the power-law and blackbody models were included in the catalog, together with the calculated flux for each model. The CSC also provides the source energy flux computed from the normalizations of predefined power-law and blackbody models needed to match the observed net X-ray counts. In addition, we provide access to data products for each source: a file with the source spectrum, the background spectrum, and the spectral response of the detector. This work is supported by NASA contract NAS8-03060 (CXC).
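
    An illustrative sketch of a hardness ratio with Monte Carlo errors (BEHR itself performs a full Bayesian treatment of Poisson source and background counts; this toy version ignores background and uses flat priors):

        import numpy as np

        def hardness_ratio(hard_counts, soft_counts, n_draws=100_000, rng=None):
            rng = rng or np.random.default_rng()
            # Gamma(k+1, 1) draws = posterior of a Poisson rate with a flat prior
            h = rng.gamma(hard_counts + 1.0, size=n_draws)
            s = rng.gamma(soft_counts + 1.0, size=n_draws)
            hr = (h - s) / (h + s)
            return hr.mean(), hr.std()

        print(hardness_ratio(34, 12))   # prints roughly (0.46, 0.13)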

  20. The calculation and experiment verification of geometry factors of disk sources and detectors

    International Nuclear Information System (INIS)

    Shi Zhixia; Minowa, Y.

    1993-01-01

    In alpha counting, the efficiency of a counting system is most frequently determined from the counter's response to a calibrated source. Whenever this procedure is used, however, questions invariably arise as to the integrity of the standard source, or indeed the validity of the primary calibration. As a check, therefore, it is often helpful to be able to calculate the disintegration rate from counting-rate data. The conclusions are: 1. If the source is thin enough, the error E is generally less than 5%, which is acceptable in routine measurement. When a standard source is not available for experiment, the calculated geometry factor can be used instead of a measured efficiency. 2. The calculated geometry factor can be used to check the counting system, study the effect of each parameter, and identify those parameters needing careful control. 3. The method of the overlapping area of the source and the projection of the detector is very reliable, simple, and convenient for calculating the geometry factor. (5 tabs.)
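
    A small Monte Carlo sketch of the quantity discussed (my illustration, not the paper's analytic method): the geometry factor of a coaxial disk source / disk detector pair is the fraction of isotropic emissions from the source that hit the detector face.

        import numpy as np

        def geometry_factor(r_src, r_det, gap, n=1_000_000, rng=None):
            rng = rng or np.random.default_rng(4)
            # uniform points on the source disk
            r = r_src * np.sqrt(rng.random(n))
            phi = 2.0 * np.pi * rng.random(n)
            x, y = r * np.cos(phi), r * np.sin(phi)
            # isotropic directions over the upper hemisphere
            cos_t = 1.0 - rng.random(n)            # uniform on (0, 1]
            psi = 2.0 * np.pi * rng.random(n)
            sin_t = np.sqrt(1.0 - cos_t**2)
            # project each ray onto the detector plane at height `gap`
            scale = gap / cos_t
            xd = x + scale * sin_t * np.cos(psi)
            yd = y + scale * sin_t * np.sin(psi)
            return np.mean(xd**2 + yd**2 <= r_det**2) / 2.0   # fraction of 4-pi emission

        print(geometry_factor(r_src=0.5, r_det=2.5, gap=1.0))  # ~0.31 for this layout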

  1. Python passive network mapping P2NMAP

    CERN Document Server

    Hosmer, Chet

    2015-01-01

    Python Passive Network Mapping: P2NMAP is the first book to reveal a revolutionary and open source method for exposing nefarious network activity. The "Heartbleed" vulnerability has revealed significant weaknesses within enterprise environments related to the lack of a definitive mapping of network assets. In Python Passive Network Mapping, Chet Hosmer shows you how to effectively and definitively passively map networks. Active or probing methods of network mapping have traditionally been used, but they have many drawbacks - they can disrupt operations, crash systems, and - most important

  2. The effect of energy distribution of external source on source multiplication in fast assemblies

    International Nuclear Information System (INIS)

    Karam, R.A.; Vakilian, M.

    1976-02-01

    The essence of this study is the effect of the energy distribution of an external source on the detection rate as a function of k-effective in fast assemblies. This effectiveness as a function of k was studied with a fission chamber, using the ABN cross-section set and the Mach 1 code. It was found that with a source which has a fission spectrum, the reciprocal count rate versus mass relationship is linear down to k-effective = 0.59. For a thermal source, linearity was never achieved. (author)
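
    The linearity observed here is the usual source-multiplication argument (my gloss, not wording from the report): for a source of strength S in a subcritical assembly with detection efficiency ε, the count rate scales like the multiplied source,

        C \propto \frac{\varepsilon\, S}{1 - k_{\mathrm{eff}}} \quad\Longrightarrow\quad \frac{1}{C} \propto 1 - k_{\mathrm{eff}}

    so if k_eff grows roughly linearly with fuel mass, the reciprocal count rate falls linearly with mass. A source spectrum very different from the fission spectrum (e.g. thermal) breaks this proportionality, because the importance of the first generation of source neutrons then differs from that of fission neutrons.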

  3. Tender point count, pain, and mobility in the older population: the mobilize Boston study.

    Science.gov (United States)

    Eggermont, Laura H P; Shmerling, Robert H; Leveille, Suzanne G

    2010-01-01

    The prevalence of tender points (TP), widespread pain, and fibromyalgia, as well as the relationship between TP count, widespread pain, and mobility, was examined in 585 community-dwelling older adults (mean age 78.2 years, 63.4% female). Pain was classified by location (none, single site, multisite, widespread). Mobility was measured by the Short Physical Performance Battery (SPPB), gait speed, and self-reported (S-R) mobility difficulty. Tender-point count and health characteristics (i.e., BMI, chronic conditions, analgesic use, number of medications, depression, and blocks walked per week) were assessed. A notable proportion of participants (22.1%) had 3 or more TP, although the prevalence of criteria-based fibromyalgia was low (0.3%). Mobility was more limited in persons with higher tender-point counts. After adjustment for pain and other risk factors, higher tender-point count was associated with poorer SPPB performance (score < 10, aOR = 1.09 per TP, 95% CI 1.01-1.17) and slow gait speed (< 0.784 m/sec, aOR = 1.14 per TP, 95% CI 1.05-1.24), but not with S-R mobility difficulty. S-R mobility difficulty was associated with more disseminated pain (multisite pain, aOR = 2.01, 95% CI 1.21-3.34; widespread pain, aOR = 2.47, 95% CI 1.09-5.62). These findings portray a significant mobility burden related to tender-point count and multisite and widespread pain in the older population. Future studies using longitudinal methods are warranted. Higher tender-point count, multisite pain, and widespread pain are common in community-dwelling older adults and are associated with mobility problems. Both the manual tender-point exam and the McGill Pain Map may provide important yet different information about risks for mobility disability in older individuals. Copyright 2010 American Pain Society. All rights reserved.

  4. Delta count-rate monitoring system

    International Nuclear Information System (INIS)

    Van Etten, D.; Olsen, W.A.

    1985-01-01

    A need for a more effective way to rapidly search for gamma-ray contamination over large areas led to the design and construction of a very sensitive gamma detection system. The delta count-rate monitoring system was installed in a four-wheel-drive van instrumented for environmental surveillance and accident response. The system consists of four main sections: (1) two scintillation detectors; (2) high-voltage power supply, amplifier, and single-channel analyzer; (3) delta count-rate monitor; and (4) count-rate meter and recorder. The van's 6.5-kW generator powers the system, which is of standard nuclear instrument module design. The two detectors are mounted in the rear corners of the van and can be run singly or jointly. A solid-state bar-graph count-rate meter mounted on the dashboard can be read easily by both the driver and passenger. A solid-state strip-chart recorder shows trends and provides a permanent record of the data. An audible alarm is sounded at the delta monitor and at the dashboard count-rate meter if a detected radiation level exceeds the set background level by a predetermined amount

  5. Coincidence-counting corrections for accidental coincidences, set dead time and intrinsic dead time

    International Nuclear Information System (INIS)

    Wyllie, H.A.

    1998-01-01

    An equation is derived for calculating the radioactivity of a source from the results of coincidence counting, taking into account dead-time losses and accidental coincidences. The corrections allow for the extension of the set dead time in the β channel by the intrinsic dead time. Experimental verification shows an improvement over a previous equation. (author)
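
    For context (the textbook idealization, not the corrected equation derived in the paper): with β-channel rate N_β, γ-channel rate N_γ and coincidence rate N_c, the source activity and the accidental-coincidence rate for resolving time τ are approximately

        N_0 \approx \frac{N_\beta N_\gamma}{N_c}, \qquad N_{\mathrm{acc}} \approx 2\tau N_\beta N_\gamma

    The paper's equation refines this starting point by propagating the set and intrinsic dead-time losses through each of the measured rates.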

  6. Updating the USGS seismic hazard maps for Alaska

    Science.gov (United States)

    Mueller, Charles; Briggs, Richard; Wesson, Robert L.; Petersen, Mark D.

    2015-01-01

    The U.S. Geological Survey makes probabilistic seismic hazard maps and engineering design maps for building codes, emergency planning, risk management, and many other applications. The methodology considers all known earthquake sources with their associated magnitude and rate distributions. Specific faults can be modeled if slip-rate or recurrence information is available. Otherwise, areal sources are developed from earthquake catalogs or GPS data. Sources are combined with ground-motion estimates to compute the hazard. The current maps for Alaska were developed in 2007, and included modeled sources for the Alaska-Aleutian megathrust, a few crustal faults, and areal seismicity sources. The megathrust was modeled as a segmented dipping plane with segmentation largely derived from the slip patches of past earthquakes. Some megathrust deformation is aseismic, so recurrence was estimated from seismic history rather than plate rates. Crustal faults included the Fairweather-Queen Charlotte system, the Denali–Totschunda system, the Castle Mountain fault, two faults on Kodiak Island, and the Transition fault, with recurrence estimated from geologic data. Areal seismicity sources were developed for Benioff-zone earthquakes and for crustal earthquakes not associated with modeled faults. We review the current state of knowledge in Alaska from a seismic-hazard perspective, in anticipation of future updates of the maps. Updated source models will consider revised seismicity catalogs, new information on crustal faults, new GPS data, and new thinking on megathrust recurrence, segmentation, and geometry. Revised ground-motion models will provide up-to-date shaking estimates for crustal earthquakes and subduction earthquakes in Alaska.

  7. Alternate method of source preparation for alpha spectrometry: No electrodeposition, no hydrofluoric acid

    International Nuclear Information System (INIS)

    Kurosaki, Hiromu; Mueller, Rebecca J.; Lambert, Susan B.; Rao, Govind R.

    2016-01-01

    An alternate method of preparing actinide alpha counting sources was developed in place of electrodeposition or lanthanide fluoride micro-precipitation. The method uses lanthanide hydroxide micro-precipitation to avoid the use of hazardous hydrofluoric acid, and it provides a quicker, simpler, and safer way of preparing actinide alpha counting sources in routine, production-type laboratories that process many samples daily.

  8. Alternate method of source preparation for alpha spectrometry: no electrodeposition, no hydrofluoric acid

    International Nuclear Information System (INIS)

    Hiromu Kurosaki; Lambert, S.B.; Rao, G.R.; Mueller, R.J.

    2017-01-01

    An alternate method of preparing actinide alpha counting sources was developed in place of electrodeposition or lanthanide fluoride micro-precipitation. The method uses lanthanide hydroxide micro-precipitation to avoid the use of hazardous hydrofluoric acid. It provides a quicker, simpler, and safer way of preparing actinide alpha counting sources in routine, production-type laboratories that process many samples daily. (author)

  9. In vivo counting of uranium

    International Nuclear Information System (INIS)

    Palmer, H.E.

    1985-03-01

    A state-of-the-art radiation detector system consisting of six individually mounted intrinsic germanium planar detectors, each 20 cm² by 13 mm thick, mounted together such that the angle of the whole system can be changed to match the slope of the chest of the person being counted, is described. The sensitivity of the system for counting uranium and plutonium in vivo and the procedures used in calibrating the system are also described. Some results of counts performed on uranium mill workers are presented. 15 figs., 2 tabs

  10. Characterizing and predicting ultrafine particle counts in Canadian classrooms during the winter months: model development and evaluation.

    Science.gov (United States)

    Weichenthal, Scott; Dufresne, André; Infante-Rivard, Claire; Joseph, Lawrence

    2008-03-01

    indoor UFP source (electric kitchen stove) was active in schools. In general, our findings suggest that reasonable estimates of classroom UFP counts may be obtained from outdoor UFP data, but that the accuracy of such estimates is limited in the presence of indoor UFP sources.

  11. Characterizing and predicting ultrafine particle counts in Canadian classrooms during the winter months: Model development and evaluation

    International Nuclear Information System (INIS)

    Weichenthal, Scott; Dufresne, Andre; Infante-Rivard, Claire; Joseph, Lawrence

    2008-01-01

    source (electric kitchen stove) was active in schools. In general, our findings suggest that reasonable estimates of classroom UFP counts may be obtained from outdoor UFP data, but that the accuracy of such estimates is limited in the presence of indoor UFP sources.

  12. Into a Mapping of Copenhagen Street Lighting 2014

    DEFF Research Database (Denmark)

    Bülow, Katja; Asp, Claus; Kongshaug, Jesper

    LED lighting is a new lighting component in urban spaces. How does LED lighting change the visual experience of a street, how did it use to be, and how will it become? The book presents a mapping method in which an overview map of light sources in the Copenhagen streets is combined with a video...... recording and a series of photos from a route, which goes through different city parts and types of streets. The mapping was done in the crucial transition phase, in which the street lighting in Copenhagen is a mix of previously used light sources and LED....

  13. Standardization of iodine-129 by the TDCR liquid scintillation method and 4π β-γ coincidence counting

    Science.gov (United States)

    Cassette, P.; Bouchard, J.; Chauvenet, B.

    1994-01-01

    Iodine-129 is a long-lived fission product, with physical and chemical properties that make it a good candidate for evaluating the environmental impact of the nuclear energy fuel cycle. To avoid solid source preparation problems, liquid scintillation has been used to standardize this nuclide for a EUROMET intercomparison. Two methods were used to measure the iodine-129 activity: triple-to-double-coincidence ratio liquid scintillation counting and 4π β-γ coincidence counting; the results are in good agreement.

  14. Planck early results. XIII. Statistical properties of extragalactic radio sources in the Planck Early Release Compact Source Catalogue

    DEFF Research Database (Denmark)

    Lähteenmäki, A.; Poutanen, T.; Natoli, P.

    2011-01-01

    The data reported in Planck's Early Release Compact Source Catalogue (ERCSC) are exploited to measure the number counts (dN/dS) of extragalactic radio sources at 30, 44, 70, 100, 143 and 217 GHz. Due to the full-sky nature of the catalogue, this measurement extends to the rarest and brightest sou...

  15. Alpha scintillation radon counting

    International Nuclear Information System (INIS)

    Lucas, H.F. Jr.

    1977-01-01

    Radon counting chambers which utilize the alpha-scintillation properties of silver-activated zinc sulfide are simple to construct, have a high efficiency, and, with proper design, may be relatively insensitive to variations in the pressure or purity of the counter filling. Chambers which were constructed from glass, metal, or plastic in a wide variety of shapes and sizes were evaluated for the accuracy and the precision of the radon counting. The principles affecting alpha-scintillation radon counting chamber design and an analytic system suitable for a large-scale study of the 222Rn and 226Ra content of either air or other environmental samples are described. Particular note is taken of those factors which affect the accuracy and the precision of the method for monitoring radioactivity around uranium mines

  16. Counting radon tracks in Makrofol detectors with the 'image reduction and analysis facility' (IRAF) software package

    International Nuclear Information System (INIS)

    Hernandez, F.; Gonzalez-Manrique, S.; Karlsson, L.; Hernandez-Armas, J.; Aparicio, A.

    2007-01-01

    Makrofol detectors are commonly used for long-term radon (222Rn) measurements in houses, schools and workplaces. The use of this type of passive detector for the determination of radon concentrations requires counting the nuclear tracks produced by alpha particles on the detecting material. The 'image reduction and analysis facility' (IRAF) software package is commonly used in astronomical applications; it allows detailed counting and mapping of sky sections where stars are grouped very closely, even forming clusters. In order to count the nuclear tracks in our Makrofol radon detectors, we have developed an inter-disciplinary application that takes advantage of the similarity that exists between counting stars in a dark sky and counting tracks in a track-etch detector. Thus, a low-cost semi-automatic system has been set up in our laboratory which utilizes a commercially available desktop scanner and the IRAF software package. A detailed description of the proposed semi-automatic method and its performance, in comparison to ocular counting, is given here. In addition, the calibration factor for this procedure, 2.97 ± 0.07 kBq m⁻³ h track⁻¹ cm², has been calculated based on the results obtained from exposing 46 detectors to certified radon concentrations. Furthermore, the results of a preliminary radon survey carried out in 62 schools on the island of Tenerife (Spain), using Makrofol detectors counted with the mentioned procedure, are briefly presented. The results reported here indicate that the developed procedure permits a fast, accurate and unbiased determination of the radon tracks in a large number of detectors. The measurements carried out in the schools showed that the radon concentrations in at least 12 schools were above 200 Bq m⁻³ and, in two of them, above 400 Bq m⁻³. Further studies should be performed at those schools following the European Union recommendations on radon concentrations in buildings
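
    A modern sketch of the same idea (the authors used IRAF's star-detection tools; this illustrative stand-in uses scipy instead): threshold a scanned detector image and count connected dark spots as tracks.

        import numpy as np
        from scipy import ndimage

        def count_tracks(image, threshold):
            """image: 2-D grayscale array from the scanner; tracks darker than background."""
            mask = image < threshold                  # candidate track pixels
            labels, n_labels = ndimage.label(mask)    # connected components
            # drop single-pixel specks (dust/noise), a typical cleaning step
            sizes = ndimage.sum(mask, labels, range(1, n_labels + 1))
            return int(np.sum(sizes >= 2))

        rng = np.random.default_rng(5)
        img = rng.normal(200.0, 5.0, (256, 256))      # synthetic bright background
        img[100:104, 50:54] = 90.0                    # one synthetic dark track
        print(count_tracks(img, threshold=120.0))     # -> 1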

  17. SLAMM: Visual monocular SLAM with continuous mapping using multiple maps.

    Directory of Open Access Journals (Sweden)

    Hayyan Afeef Daoud

    Full Text Available This paper presents the concept of Simultaneous Localization and Multi-Mapping (SLAMM). It is a system that ensures continuous mapping and information preservation despite failures in tracking due to corrupted frames or sensor malfunction, making it suitable for real-world applications. It works with single or multiple robots. In a single-robot scenario the algorithm generates a new map at the time of tracking failure and later merges maps at the event of loop closure. Similarly, maps generated by multiple robots are merged without prior knowledge of their relative poses, which makes the algorithm flexible. The system works in real time at frame rate. The proposed approach was tested on the KITTI and TUM RGB-D public datasets and showed superior results compared to the state of the art in calibrated visual monocular keyframe-based SLAM. The mean tracking time is around 22 milliseconds. Initialization is twice as fast as in ORB-SLAM, and the retrieved map can preserve up to 90 percent more information, depending on tracking-loss and loop-closure events. For the benefit of the community, the source code along with a framework to be run with the Bebop drone are made available at https://github.com/hdaoud/ORBSLAMM.
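    The multi-map bookkeeping the abstract describes can be reduced to two rules: spawn a fresh map when tracking is lost, and fuse two maps when a loop closure links them. A toy sketch of that control flow (class and method names are ours; the actual ORBSLAMM implementation in the linked repository is far more involved):

        class MultiMapper:
            """Toy version of the SLAMM idea: never discard a map, merge on loop closure."""

            def __init__(self):
                self.maps = [[]]          # each map is a list of keyframes
                self.active = 0           # index of the map currently being extended

            def on_frame(self, keyframe, tracking_ok: bool):
                if not tracking_ok:
                    # Tracking failed: preserve the old map, start a new one.
                    self.maps.append([])
                    self.active = len(self.maps) - 1
                self.maps[self.active].append(keyframe)

            def on_loop_closure(self, i: int, j: int):
                # A place from map j recognized while building map i: fuse the two.
                if i == j:
                    return
                self.maps[i].extend(self.maps.pop(j))
                self.active = i if i < j else i - 1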

  18. Quality Analysis of Open Street Map Data

    Science.gov (United States)

    Wang, M.; Li, Q.; Hu, Q.; Zhou, M.

    2013-05-01

    Crowd-sourced geographic data are open geographic data contributed by many non-professionals and provided to the public. Typical crowd-sourced geographic data include GPS track data such as OpenStreetMap, collaborative map data such as Wikimapia, social websites such as Twitter and Facebook, POIs tagged by Jiepang users, and so on. After processing, these data provide canonical geographic information to the public. Compared with conventional geographic data collection and update methods, crowd-sourced geographic data from non-professionals have the advantages of large data volume, high currency, abundant information and low cost, and they have become a research hotspot in international geographic information science in recent years. Large-volume, highly current crowd-sourced geographic data provide a new solution for geospatial database updating, but the quality of data obtained from non-professionals must first be addressed. In this paper, a quality analysis model for OpenStreetMap crowd-sourced geographic data is proposed. Firstly, a quality analysis framework is designed based on an analysis of the characteristics of OSM data. Secondly, a quality assessment model for OSM data is presented using three quality elements: completeness, thematic accuracy and positional accuracy. Finally, taking the OSM data of Wuhan as an instance, the paper analyses and assesses the quality of OSM data against a 2011 navigation map for reference. The results show that the high-level roads and urban traffic network of the OSM data have high positional accuracy and completeness, so these OSM data can be used for updating an urban road network database.
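    Of the three quality elements, completeness is the most direct to compute: compare the total feature length in the OSM extract against the reference dataset. A sketch of that ratio (the per-segment lengths and function are ours, purely illustrative):

        def completeness(osm_lengths_km, ref_lengths_km):
            """Length-based completeness: OSM road length / reference road length."""
            return sum(osm_lengths_km) / sum(ref_lengths_km)

        # Hypothetical per-segment road lengths for one map tile.
        osm = [1.2, 0.8, 3.4, 2.1]
        ref = [1.3, 0.9, 3.5, 2.2, 0.6]   # reference map has one extra street
        print(f"completeness = {completeness(osm, ref):.0%}")  # ~88%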

  19. Estimation of low-level neutron dose-equivalent rate by using extrapolation method for a curie level Am–Be neutron source

    International Nuclear Information System (INIS)

    Li, Gang; Xu, Jiayun; Zhang, Jie

    2015-01-01

    Neutron radiation protection is an important research area because of the strong radiobiological effect of neutron fields. The radiation dose from neutrons is closely related to the neutron energy, and the relationship is a complex function of energy. For a low-level neutron radiation field (e.g. an Am–Be source), commonly used commercial neutron dosimeters cannot always reflect the low-level dose rate, being restricted by their own sensitivity limits and measuring ranges. In this paper, the intensity distribution of the neutron field produced by a curie-level Am–Be neutron source was investigated by measuring the count rates obtained with a ³He proportional counter at different locations around the source. The results indicate that the count rates outside the source room are negligible compared with those measured inside it. In the source room, a ³He proportional counter and a neutron dosimeter were used to measure count rates and dose rates, respectively, at different distances from the source. The results indicate that both the count rates and the dose rates decrease exponentially with increasing distance, and that the dose rates measured by a commercial dosimeter are in good agreement with the results calculated by Geant4 simulation within the inherent errors recommended by the ICRP and IEC. Further studies presented in this paper indicate that the low-level neutron dose equivalent rates in the source room increase exponentially with the increasing low-energy neutron count rates when the source is lifted from its shield at different radiation intensities. Based on this relationship, together with the count rates measured at larger distances from the source, the dose rates can be approximated by extrapolation. This principle can be used to estimate low-level neutron dose values in the source room that cannot be measured directly by a commercial dosimeter. - Highlights: • The scope of the affected area for
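    The extrapolation the authors describe amounts to fitting an exponential to dose rate versus distance and evaluating the fit where the dosimeter cannot measure. A sketch with scipy (the sample distances and dose rates are invented for illustration):

        import numpy as np
        from scipy.optimize import curve_fit

        def dose_model(r, a, mu):
            """Exponential fall-off of dose rate with distance, as reported."""
            return a * np.exp(-mu * r)

        # Hypothetical dose rates (uSv/h) at distances r (m) in the source room.
        r = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
        d = np.array([120.0, 62.0, 33.0, 17.5, 9.1])

        (a, mu), _ = curve_fit(dose_model, r, d, p0=(200.0, 1.0))
        print(f"extrapolated dose rate at 4 m: {dose_model(4.0, a, mu):.2f} uSv/h")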

  20. Calibration of the Accuscan II IN Vivo System for High Energy Lung Counting

    Energy Technology Data Exchange (ETDEWEB)

    Ovard R. Perry; David L. Georgeson

    2011-07-01

    This report describes the April 2011 calibration of the Accuscan II HPGe in vivo system for high-energy lung counting. The source used for the calibration was a NIST-traceable lung set manufactured at the University of Cincinnati (UCLL43AMEU & UCSL43AMEU), containing Am-241 and Eu-152 with energies from 26 keV to 1408 keV. The lung set was used in conjunction with a Realistic Torso phantom. The phantom was placed on the RMC II counting table (with pins removed) between the v-ridges on the back wall of the Accuscan II counter. The top of the detector housing was positioned perpendicular to the junction of the phantom clavicle with the sternum. This position aligns the approximate center line of the detector housing with the center of the lungs. The energy and efficiency calibrations were performed using a Realistic Torso phantom (Appendix I) and the University of Cincinnati lung set. This report includes an introductory overview and records for the energy/FWHM and efficiency calibrations, including performance verification and validation counting. The Accuscan II system was successfully calibrated for high-energy lung counting and verified in accordance with ANSI/HPS N13.30-1996 criteria.

  1. Blood Count Tests: MedlinePlus Health Topic

    Science.gov (United States)

    MedlinePlus health topic page gathering consumer resources on blood count tests, including Medical Encyclopedia entries such as WBC count (also available in Spanish) and links to related health topics such as bleeding disorders, blood, and laboratory tests, maintained by the National Institutes of Health.

  2. Evaluation of lactate, white blood cell count, neutrophil count, procalcitonin and immature granulocyte count as biomarkers for sepsis in emergency department patients.

    Science.gov (United States)

    Karon, Brad S; Tolan, Nicole V; Wockenfus, Amy M; Block, Darci R; Baumann, Nikola A; Bryant, Sandra C; Clements, Casey M

    2017-11-01

    Lactate, white blood cell (WBC) count, neutrophil count, procalcitonin and immature granulocyte (IG) count were compared for the prediction of sepsis, and of severe sepsis or septic shock, in patients presenting to the emergency department (ED). We prospectively enrolled 501 ED patients with a sepsis panel ordered for suspicion of sepsis. WBC, neutrophil and IG counts were measured on a Sysmex XT-2000i analyzer, lactate by i-STAT, and procalcitonin by Brahms Kryptor. We classified patients as having sepsis using a simplification of the 1992 consensus conference sepsis definitions. Patients with sepsis were further classified as having severe sepsis or septic shock using established criteria. Univariate receiver operating characteristic (ROC) analysis was performed to determine the odds ratio (OR), area under the ROC curve (AUC), and sensitivity/specificity at the optimal cut-off for the prediction of sepsis (vs. no sepsis) and of severe sepsis or septic shock (vs. no sepsis). There were 267 patients without sepsis and 234 with sepsis, including 35 patients with severe sepsis or septic shock. Lactate had the highest OR (1.44, 95% CI 1.20-1.73) for the prediction of sepsis, while WBC, neutrophil count and percent (neutrophil/WBC) had ORs only marginally above 1.00. The best predictor of severe sepsis or septic shock had an odds ratio (95% CI) of 2.70 (2.02-3.61) and AUC 0.89 (0.82-0.96). Traditional biomarkers (lactate, WBC, neutrophil count, procalcitonin, IG) have limited utility in the prediction of sepsis.
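    The univariate ROC analysis used here is easy to reproduce for any single biomarker with scikit-learn; the Youden index gives the kind of optimal cut-off the abstract refers to. A sketch on made-up data (the arrays are illustrative, not the study's):

        import numpy as np
        from sklearn.metrics import roc_auc_score, roc_curve

        # Hypothetical lactate values (mmol/L) and sepsis labels (1 = sepsis).
        lactate = np.array([0.9, 1.1, 1.4, 2.0, 2.6, 3.1, 3.8, 4.4])
        sepsis  = np.array([0,   0,   0,   1,   0,   1,   1,   1  ])

        auc = roc_auc_score(sepsis, lactate)
        fpr, tpr, thresholds = roc_curve(sepsis, lactate)
        best = np.argmax(tpr - fpr)   # Youden's J = sensitivity + specificity - 1
        print(f"AUC = {auc:.2f}, optimal cut-off = {thresholds[best]:.1f} mmol/L")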

  3. Serum Copper Level Significantly Influences Platelet Count, Lymphocyte Count and Mean Cell Hemoglobin in Sickle Cell Anemia

    Directory of Open Access Journals (Sweden)

    Okocha Chide

    2015-12-01

    Full Text Available Background Changes in serum micronutrient levels affect a number of critically important metabolic processes; these could potentially influence blood counts and ultimately disease presentation in patients with sickle cell anemia (SCA). Objectives To evaluate the influence of the serum micronutrients zinc, copper, selenium and magnesium on blood counts in steady-state SCA patients. Methods A cross-sectional study that involved 28 steady-state adult SCA subjects. Seven milliliters (ml) of blood was collected; 3 ml was used for hemoglobin electrophoresis and full blood count determination, while 4 ml was used for measurement of serum micronutrient levels by atomic absorption spectrophotometry. Correlation between serum micronutrient levels and blood counts was assessed by Pearson's linear regression. Ethical approval was obtained from the institutional review board and each participant gave informed consent. All data were analyzed with SPSS software version 20. Results There was a significant correlation between serum copper levels and mean cell hemoglobin (MCH), platelet and lymphocyte counts (r = 0.418, P = 0.02; r = -0.376, P = 0.04; and r = -0.383, P = 0.04, respectively). There were no significant correlations between the serum levels of the other micronutrients (selenium, zinc and magnesium) and blood counts. Conclusions Copper influences blood counts in SCA patients, probably by inducing red cell haemolysis, oxidant tissue damage and stimulation of the immune system.

  4. The determination of a neutron source position in an unknown homogeneous medium: The planar case

    International Nuclear Information System (INIS)

    Dubinski, S.; Talmor, A.; Presler, O.; Tshuva, A.; Yaar, I.; Orion, I.; Alfassi, Z.B.

    2005-01-01

    The possibility of localizing an unknown neutron source in various bulky homogeneous media (a box) was studied. For the planar case, two ³He detectors on opposite faces of the box were used. A constant polypropylene shield around the box and detectors was used to eliminate the varying contribution from the environment, to increase the count rates of the detectors, and to protect the experimentalist. It is shown that the location of a single small neutron-emitting source in a large box can be found to better than 7% by using two neutron detectors positioned on parallel faces of the box, coplanar with the source. The localization requires measurement of the count rate of both the unknown source and an extra source positioned on one of the faces of the box. It is based on the finding that the ratio of the count rates of the two detectors is an exponential function of the distance of the source from one of the detectors.
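    If each detector's count rate falls off exponentially with distance, the ratio of the two rates fixes the source position along the axis joining them. A sketch of that inversion (the attenuation length and count rates below are invented; the paper calibrates the equivalent constant with the extra known source):

        import math

        def locate(c1, c2, L, lam):
            """Distance x of the source from detector 1.

            Assumes c1 = A*exp(-x/lam) and c2 = A*exp(-(L-x)/lam), so
            c1/c2 = exp((L - 2x)/lam)  =>  x = (L - lam*ln(c1/c2)) / 2.
            """
            return 0.5 * (L - lam * math.log(c1 / c2))

        # Hypothetical: 60 cm box, 12 cm attenuation length, rates in cps.
        print(f"x = {locate(450.0, 180.0, 60.0, 12.0):.1f} cm from detector 1")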

  5. Lessons in modern digital field geology: Open source software, 3D techniques, and the new world of digital mapping

    Science.gov (United States)

    Pavlis, Terry; Hurtado, Jose; Langford, Richard; Serpa, Laura

    2014-05-01

    Although many geologists refuse to admit it, it is time to put paper-based geologic mapping into the historical archives and move to the full potential of digital mapping techniques. For our group, flat-map digital geologic mapping is now a routine operation in both research and instruction. Several software options are available, and basic proficiency with the software can be learned in a few hours of instruction and practice. The first practical field GIS software, ArcPad, remains a viable, stable option on Windows-based systems. However, the vendor seems to be moving away from ArcPad in favor of mobile software solutions that are difficult to implement without GIS specialists. Thus, we have pursued a second software option based on the open source program QGIS. Our QGIS system uses the same shapefile-centric data structure as our ArcPad system, including similar pop-up data entry forms and generic graphics for easy data management in the field. The advantage of QGIS is that the same software runs on virtually all common platforms except iOS, although the Android version remains unstable as of this writing. A third software option we are experimenting with for flat-map-based field work is Fieldmove, a derivative of the 3D-capable program Move developed by Midland Valley. Our initial experiments with Fieldmove are positive, particularly on new, inexpensive hardware, and we see strong potential in 3D techniques for communicating the complexity of key exposures. For example, in studies of metamorphic structures we often search for days to find "Rosetta Stone" outcrops that display key geometric relationships. While conventional photographs rarely capture the essence of a field exposure, capturing a true 3D representation of the exposure with multiple photos from many orientations can solve this communication problem. As spatial databases evolve, these 3D models should be readily importable into the database.

  6. Correction for decay during counting in gamma spectrometry

    International Nuclear Information System (INIS)

    Nir-El, Y.

    2013-01-01

    A basic result in gamma spectrometry is the count rate of a relevant peak. Correction for decay during counting, expressing the count rate at the beginning of the measurement, can be made with a multiplicative factor derived from integrating the count rate over time. The counting time substituted in this factor must be the live time; using the real time instead is an error that underestimates the count rate by approximately the dead time (DT, in percent). This underestimation of the count rate is corroborated by the measurement of a nuclide with a high DT. The present methodology is not applicable in systems that include a zero-DT correction function. (authors)
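    For a nuclide with decay constant λ counted over live time t, integrating R(t) = R₀e^(−λt) gives total counts C = R₀(1 − e^(−λt))/λ, so the start-of-measurement rate is recovered by the multiplicative factor the abstract refers to. A sketch (function name and example values are ours):

        import math

        def rate_at_start(counts: float, half_life_s: float, live_time_s: float) -> float:
            """Count rate at the start of the measurement, corrected for decay
            during counting: R0 = C * lam / (1 - exp(-lam * t_live))."""
            lam = math.log(2.0) / half_life_s
            return counts * lam / (1.0 - math.exp(-lam * live_time_s))

        # Example: 1e5 counts of a 6 h half-life nuclide over a 2 h live time.
        print(f"R0 = {rate_at_start(1e5, 6 * 3600, 2 * 3600):.2f} counts/s")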

  7. Interpretation of galaxy counts

    International Nuclear Information System (INIS)

    Tinsley, B.M.

    1980-01-01

    New models are presented for the interpretation of recent counts of galaxies to 24th magnitude, and predictions are shown to 28th magnitude for future comparison with data from the Space Telescope. The results supersede earlier, more schematic models by the author. Tyson and Jarvis found in their counts a "local" density enhancement at 17th magnitude on comparison with the earlier models; the excess is no longer significant when a more realistic mixture of galaxy colors is used. Bruzual and Kron's conclusion that Kron's counts show evidence for evolution at faint magnitudes is confirmed, and it is predicted that some 23rd-magnitude galaxies have redshifts greater than unity. These may include spheroidal systems, elliptical galaxies, and the bulges of early-type spirals and S0's, seen during their primeval rapid star formation.

  8. Total lymphocyte count as a substitute for CD4 count in management of HIV infected individuals in a resource limited society

    International Nuclear Information System (INIS)

    Daud, M.Y.; Qazi, R.A.

    2015-01-01

    Pakistan is a resource-limited society, and the gold-standard parameters for monitoring HIV disease activity are very costly. The objective of the study was to evaluate total lymphocyte count (TLC) as a surrogate for CD4 count to monitor disease activity in HIV/AIDS in a resource-limited society. Methods: This cross-sectional study was carried out at the HIV/AIDS treatment centre, Pakistan Institute of Medical Sciences (PIMS), Islamabad. A total of seven hundred and seventy-four (774) HIV-positive patients were enrolled in this study, and their CD4 counts and total lymphocyte counts were checked for correlation using the Spearman rank correlation coefficient. Results: The mean CD4 count was 434.30 ± 269.23, with a minimum CD4 count of 9.00 and a maximum of 1974.00. The mean total lymphocyte count (TLC) was 6764.0052 ± 2364.02, with a minimum TLC of 1200.00 and a maximum of 20200.00. Using Pearson's correlation (r), there was a significant positive correlation between TLC and CD4 count (r² = 0.127, p = 0.000) at the 0.01 level. Conclusion: Our study showed a significant positive correlation between CD4 count and total lymphocyte count (TLC), so TLC can be used as a marker of disease activity in HIV-infected patients. (author)

  9. Assessment of self-organizing maps to analyze sole-carbon source utilization profiles.

    Science.gov (United States)

    Leflaive, Joséphine; Céréghino, Régis; Danger, Michaël; Lacroix, Gérard; Ten-Hage, Loïc

    2005-07-01

    The use of community-level physiological profiles obtained with Biolog microplates is widely employed to assess the functional diversity of bacterial communities. Biolog produces a great amount of data, the analysis of which has been the subject of many studies. In most cases, after some transformations, these data have been investigated with classical multivariate analyses. Here we provide an alternative to this approach: the use of an artificial intelligence technique, the self-organizing map (SOM, an unsupervised neural network). We used data from a microcosm study of algae-associated bacterial communities placed under various nutritive conditions. Analyses were carried out on the net absorbances at two incubation times for each substrate and on the chemical guild categorization of the total bacterial activity. Compared to principal components analysis and cluster analysis, the SOM appeared to be a valuable tool for community classification and for establishing clear relationships between clusters of bacterial communities and sole-carbon-source utilization. Specifically, SOMs offered a clear two-dimensional projection of a relatively large volume of data and were easier to interpret than the plots commonly obtained with multivariate analyses. They can be recommended for patterning the temporal evolution of communities' functional diversity.
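    As a sketch of how such profiles can be fed to a SOM, the small MiniSom library reproduces the basic workflow: train a two-dimensional grid on the absorbance matrix, then read off each community's best-matching unit as its cluster coordinates. The array shapes are our assumption (e.g., 24 communities × 95 Biolog substrates), not the paper's setup:

        import numpy as np
        from minisom import MiniSom

        # Hypothetical Biolog data: 24 communities x 95 substrate absorbances.
        rng = np.random.default_rng(0)
        profiles = rng.random((24, 95))

        som = MiniSom(6, 6, input_len=95, sigma=1.0, learning_rate=0.5, random_seed=0)
        som.train(profiles, num_iteration=1000)

        # Each community is mapped to its best-matching unit on the 6x6 grid.
        for i, p in enumerate(profiles):
            print(f"community {i:2d} -> node {som.winner(p)}")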

  10. Quantitative trait loci (QTL) mapping of resistance to strongyles and coccidia in the free-living Soay sheep (Ovis aries).

    Science.gov (United States)

    Beraldi, Dario; McRae, Allan F; Gratten, Jacob; Pilkington, Jill G; Slate, Jon; Visscher, Peter M; Pemberton, Josephine M

    2007-01-01

    A genome-wide scan was performed to detect quantitative trait loci (QTL) for resistance to gastrointestinal parasites and ectoparasitic keds segregating in the free-living Soay sheep population on St. Kilda (UK). The mapping panel consisted of a single pedigree of 882 individuals, of which 588 were genotyped. The Soay linkage map used for the scans comprised 251 markers covering the whole genome at an average spacing of 15 cM. The traits investigated were the strongyle faecal egg count (FEC), the coccidia faecal oocyst count (FOC) and a count of keds (Melophagus ovinus). QTL mapping was performed by means of variance component analysis, so the genetic parameters of the study traits were also estimated and compared with previous studies in Soay and domestic sheep. Strongyle FEC and coccidia FOC showed moderate heritability (h² = 0.26 and 0.22, respectively) in lambs but low heritability in adults (h² < 0.10). Ked count appeared to have very low h² in both lambs and adults. Genome scans were performed for the traits with moderate heritability, and two genomic regions reached the level of suggestive linkage for coccidia FOC in lambs (logarithm of odds = 2.68 and 2.21 on chromosomes 3 and X, respectively). We believe this is the first study to report a QTL search for parasite resistance in a free-living animal population, and it may therefore represent a useful reference for similar studies aimed at understanding the genetics of host-parasite co-evolution in the wild.

  11. A counting-card circuit based on PCI bus

    International Nuclear Information System (INIS)

    Shi Jing; Li Yong; Chinese Academy of Sciences, Lanzhou; Su Hong; Dong Chengfu; Li Xiaogang; Ma Xiaoli

    2004-01-01

    A counting-card circuit based on the PCI bus, recently developed for advanced personal computers, is briefly introduced in this paper. The maximum count capacity of the counting card is 10⁹ − 1 (0 to 999 999 999), the maximum counting time that can be set in one cycle is 1 × 10⁶ s, and the maximum counting rate is 20 MHz for positive input. (authors)

  12. Photon Counting Energy Dispersive Detector Arrays for X-ray Imaging.

    Science.gov (United States)

    Iwanczyk, Jan S; Nygård, Einar; Meirav, Oded; Arenson, Jerry; Barber, William C; Hartsough, Neal E; Malakhov, Nail; Wessel, Jan C

    2009-01-01

    The development of an innovative detector technology for photon-counting in X-ray imaging is reported. This new generation of detectors, based on pixellated cadmium telluride (CdTe) and cadmium zinc telluride (CZT) detector arrays electrically connected to application specific integrated circuits (ASICs) for readout, will produce fast and highly efficient photon-counting and energy-dispersive X-ray imaging. There are a number of applications that can greatly benefit from these novel imagers including mammography, planar radiography, and computed tomography (CT). Systems based on this new detector technology can provide compositional analysis of tissue through spectroscopic X-ray imaging, significantly improve overall image quality, and may significantly reduce X-ray dose to the patient. A very high X-ray flux is utilized in many of these applications. For example, CT scanners can produce ~100 Mphotons/mm²/s in the unattenuated beam. High flux is required in order to collect sufficient photon statistics in the measurement of the transmitted flux (attenuated beam) during the very short time frame of a CT scan. This high count rate combined with a need for high detection efficiency requires the development of detector structures that can provide a response signal much faster than the transit time of carriers over the whole detector thickness. We have developed CdTe and CZT detector array structures which are 3 mm thick with 16×16 pixels and a 1 mm pixel pitch. These structures, in the two different implementations presented here, utilize either a small pixel effect or a drift phenomenon. An energy resolution of 4.75% at 122 keV has been obtained with a 30 ns peaking time using discrete electronics and a ⁵⁷Co source. An output rate of 6×10⁶ counts per second per individual pixel has been obtained with our ASIC readout electronics and a clinical CT X-ray tube. Additionally, the first clinical CT images, taken with several of our prototype photon-counting and

  13. QuantiFly: Robust Trainable Software for Automated Drosophila Egg Counting.

    Directory of Open Access Journals (Sweden)

    Dominic Waithe

    Full Text Available We report the development and testing of software called QuantiFly: an automated tool to quantify Drosophila egg laying. Many laboratories count Drosophila eggs as a marker of fitness. The existing method requires laboratory researchers to count eggs manually while looking down a microscope. This technique is both time-consuming and tedious, especially when experiments require daily counts of hundreds of vials. The basis of the QuantiFly software is an algorithm which applies and improves upon an existing advanced pattern-recognition and machine-learning routine. The accuracy of the baseline algorithm is additionally increased in this study through correction of bias observed in the algorithm output. The QuantiFly software, which includes the refined algorithm, has been designed to be immediately accessible to scientists through an intuitive and responsive user-friendly graphical interface. The software is also open-source, self-contained, has no dependencies and is easily installed (https://github.com/dwaithe/quantifly). Compared to manual egg counts made from digital images, QuantiFly achieved average accuracies of 94% and 85% for eggs laid on transparent (defined) and opaque (yeast-based) fly media. Thus, the software is capable of detecting experimental differences in most experimental situations. Significantly, the advanced feature-recognition capabilities of the software proved to be robust to food surface artefacts like bubbles and crevices. The user experience involves image acquisition, algorithm training by labelling a subset of eggs in images of some of the vials, followed by a batch analysis mode in which new images are automatically assessed for egg numbers. Initial training typically requires approximately 10 minutes, while subsequent image evaluation by the software is performed in just a few seconds. Given the average time per vial for manual counting is approximately 40 seconds, our software introduces a timesaving advantage for

  14. Scaling local species-habitat relations to the larger landscape with a hierarchical spatial count model

    Science.gov (United States)

    Thogmartin, W.E.; Knutson, M.G.

    2007-01-01

    Much of what is known about avian species-habitat relations has been derived from studies of birds at local scales. It is entirely unclear whether the relations observed at these scales translate to the larger landscape in a predictable linear fashion. We derived habitat models and mapped predicted abundances for three forest bird species of eastern North America using bird counts, environmental variables, and hierarchical models applied at three spatial scales. Our purpose was to understand habitat associations at multiple spatial scales and create predictive abundance maps for purposes of conservation planning at a landscape scale given the constraint that the variables used in this exercise were derived from local-level studies. Our models indicated a substantial influence of landscape context for all species, many of which were counter to reported associations at finer spatial extents. We found land cover composition provided the greatest contribution to the relative explained variance in counts for all three species; spatial structure was second in importance. No single spatial scale dominated any model, indicating that these species are responding to factors at multiple spatial scales. For purposes of conservation planning, areas of predicted high abundance should be investigated to evaluate the conservation potential of the landscape in their general vicinity. In addition, the models and spatial patterns of abundance among species suggest locations where conservation actions may benefit more than one species.

  15. TU-FG-209-03: Exploring the Maximum Count Rate Capabilities of Photon Counting Arrays Based On Polycrystalline Silicon

    Energy Technology Data Exchange (ETDEWEB)

    Liang, A K; Koniczek, M; Antonuk, L E; El-Mohri, Y; Zhao, Q [University of Michigan, Ann Arbor, MI (United States)

    2016-06-15

    Purpose: Photon counting arrays (PCAs) offer several advantages over conventional, fluence-integrating x-ray imagers, such as improved contrast by means of energy windowing. For that reason, we are exploring the feasibility and performance of PCA pixel circuitry based on polycrystalline silicon. This material, unlike the crystalline silicon commonly used in photon counting detectors, lends itself toward the economic manufacture of radiation tolerant, monolithic large area (e.g., ∼43×43 cm²) devices. In this presentation, exploration of maximum count rate, a critical performance parameter for such devices, is reported. Methods: Count rate performance for a variety of pixel circuit designs was explored through detailed circuit simulations over a wide range of parameters (including pixel pitch and operating conditions) with the additional goal of preserving good energy resolution. The count rate simulations assume input events corresponding to a 72 kVp x-ray spectrum with 20 mm Al filtration interacting with a CZT detector at various input flux rates. Output count rates are determined at various photon energy threshold levels, and the percentage of counts lost (e.g., due to deadtime or pile-up) is calculated from the ratio of output to input counts. The energy resolution simulations involve thermal and flicker noise originating from each circuit element in a design. Results: Circuit designs compatible with pixel pitches ranging from 250 to 1000 µm that allow count rates over a megacount per second per pixel appear feasible. Such rates are expected to be suitable for radiographic and fluoroscopic imaging. Results for the analog front-end circuitry of the pixels show that acceptable energy resolution can also be achieved. Conclusion: PCAs created using polycrystalline silicon have the potential to offer monolithic large-area detectors with count rate performance comparable to those of crystalline silicon detectors. Further improvement through detailed circuit
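    The pile-up losses simulated here are commonly summarized with simple dead-time models: for true rate n and per-event dead time τ, a non-paralyzable detector records m = n/(1 + nτ) and a paralyzable one m = n·e^(−nτ). A sketch of the fraction of counts lost under each model (our illustration, not the authors' circuit simulation):

        import math

        def lost_fraction(n: float, tau: float, paralyzable: bool) -> float:
            """Fraction of input events lost to dead time at true rate n (cps)."""
            m = n * math.exp(-n * tau) if paralyzable else n / (1.0 + n * tau)
            return 1.0 - m / n

        tau = 40e-9  # hypothetical 40 ns dead time
        for n in (1e5, 1e6, 1e7):
            print(f"n = {n:.0e} cps: "
                  f"non-paralyzable {lost_fraction(n, tau, False):.1%}, "
                  f"paralyzable {lost_fraction(n, tau, True):.1%}")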

  16. Recursive algorithms for phylogenetic tree counting.

    Science.gov (United States)

    Gavryushkina, Alexandra; Welch, David; Drummond, Alexei J

    2013-10-28

    In Bayesian phylogenetic inference we are interested in distributions over a space of trees. The number of trees in a tree space is an important characteristic of the space and is useful for specifying prior distributions. When all samples come from the same time point and no prior information is available on divergence times, the tree counting problem is easy. However, when fossil evidence is used in the inference to constrain the tree, or data are sampled serially, new tree spaces arise and counting the number of trees is more difficult. We describe an algorithm, polynomial in the number of sampled individuals, for counting resolutions of a constraint tree, assuming that the number of constraints is fixed. We generalise this algorithm to counting resolutions of a fully ranked constraint tree. We describe a quadratic algorithm for counting the number of possible fully ranked trees on n sampled individuals. We introduce a new type of tree, called a fully ranked tree with sampled ancestors, and describe a cubic-time algorithm for counting the number of such trees on n sampled individuals. These algorithms should be employed in Bayesian Markov chain Monte Carlo inference when fossil data are included or data are serially sampled.
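    For the easy case the abstract mentions (all samples contemporaneous), the count of fully ranked labeled binary trees on n tips has a closed form: each of the n−1 coalescence events joins one of C(k,2) possible pairs, giving ∏_{k=2}^{n} C(k,2). A sketch of that standard result (not the paper's serial-sampling algorithms):

        from math import comb

        def ranked_tree_count(n: int) -> int:
            """Number of fully ranked labeled binary trees on n contemporaneous
            tips: the product of k-choose-2 for k = 2..n."""
            count = 1
            for k in range(2, n + 1):
                count *= comb(k, 2)
            return count

        for n in (2, 3, 4, 10):
            print(n, ranked_tree_count(n))   # 1, 3, 18, ... grows very rapidly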

  17. Earth mapping - aerial or satellite imagery comparative analysis

    Science.gov (United States)

    Fotev, Svetlin; Jordanov, Dimitar; Lukarski, Hristo

    Nowadays, solving the tasks of revising existing map products and creating new maps requires choosing a source of land cover imagery. The trade-off between the effectiveness and cost of aerial mapping systems and those of very-high-resolution satellite imagery is topical [1, 2, 3, 4]. The price of any remotely sensed image depends on the product (panchromatic or multispectral), resolution, processing level, scale, urgency of the task, and on whether the needed image is available in an archive or has to be requested. The purpose of the present work is to make a comparative analysis of the two approaches to mapping the Earth with respect to two parameters, quality and cost, and to suggest an approach for selecting the map information source - airplane-based or spacecraft-based imaging systems with very high spatial resolution. Two cases are considered: an area approximately equal to one satellite scene, and an area approximately equal to the territory of Bulgaria.

  18. Cowichan Valley energy mapping and modelling. Report 1 - GIS mapping of potential renewable energy resources in the CVRD. Final report. [Vancouver Island, Canada

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-06-15

    The driving force behind the Integrated Energy Mapping and Analysis project was the identification and analysis of a suite of pathways that the Cowichan Valley Regional District (CVRD) can utilise to increase its energy resilience, as well as reduce energy consumption and GHG emissions, with a primary focus on the residential sector. The mapping and analysis undertaken will support provincial energy and GHG reduction targets, and the suite of pathways outlined will address a CVRD internal target that calls for 75% of the region's residential-sector energy to come from locally sourced renewables by 2050. The target has been developed as a mechanism to meet resilience and climate action targets. The maps and findings produced are to be integrated into a regional policy framework currently under development. The first task in the project was the production of a series of thematic GIS maps and associated databases of potential renewable energy resources in the CVRD. The renewable energy sources mapped were solar, wind, micro hydro, and biomass (residues and waste). Other sources were also discussed (e.g. geothermal heat) but not mapped due to a lack of spatially explicit input data. The task 1 findings are detailed in this report. (LN)

  19. Mapping world cultures: Cluster formation, sources and implications

    OpenAIRE

    Simcha Ronen; Oded Shenkar

    2013-01-01

    This paper extends and builds on Ronen and Shenkar's synthesized cultural clustering of countries based on similarity and dissimilarity in work-related attitudes. The new map uses an updated dataset and expands coverage to world areas that were inaccessible at the time. Cluster boundaries are drawn empirically rather than intuitively, and the plot obtained is triple-nested, indicating three levels of similarity across given country pairs. Also delineated are cluster adjacency and cluster c...

  20. Comparison of MCNP6 and experimental results for neutron counts, Rossi-α, and Feynman-α distributions

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, Y. [Argonne National Laboratory, 9700 S. Cass Ave., Lemont, IL 60439 (United States); Sadovich, S.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C. [Joint Institute for Power and Nuclear Research-Sosny, 99 Academician A.K. Krasin Str., Minsk 220109 (Belarus)

    2013-07-01

    MCNP6, the general-purpose Monte Carlo N-Particle code, has the capability to perform time-dependent calculations by tracking the time interval between successive events of the neutron random walk. In fixed-source calculations for a subcritical assembly, the zero time value is assigned at the moment the neutron is emitted by the external neutron source. The PTRAC and F8 cards of MCNP allow tallying the time when a neutron is captured by ³He(n,p) reactions in the neutron detector. From this information, it is possible to build three different time distributions: neutron counts, Rossi-α, and Feynman-α. The neutron counts time distribution represents the number of neutrons captured as a function of time. The Rossi-α distribution represents the number of neutron pairs captured as a function of the time interval between two capture events. The Feynman-α distribution represents the variance-to-mean ratio, minus one, of the neutron counts array as a function of a fixed time interval. The MCNP6 results for these three time distributions have been compared with the experimental data of the YALINA Thermal facility and have been found to be in quite good agreement. (authors)
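    The Feynman-α statistic defined here is straightforward to compute from a list of capture times: bin the timestamps with a fixed gate width T, then take the variance-to-mean ratio of the bin counts minus one. A sketch on synthetic timestamps (a real analysis would scan many gate widths and fit the resulting curve):

        import numpy as np

        def feynman_y(times_s: np.ndarray, gate_s: float) -> float:
            """Variance-to-mean ratio minus one of counts in gates of width gate_s."""
            n_gates = int(times_s.max() / gate_s)
            counts, _ = np.histogram(times_s, bins=n_gates,
                                     range=(0.0, n_gates * gate_s))
            return counts.var() / counts.mean() - 1.0

        # Poisson (uncorrelated) arrivals should give Y ~ 0 at any gate width.
        rng = np.random.default_rng(1)
        times = np.cumsum(rng.exponential(1e-4, size=100_000))  # ~10 kcps
        for gate in (1e-4, 1e-3, 1e-2):
            print(f"T = {gate:.0e} s: Y = {feynman_y(times, gate):+.3f}")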

  1. Photon-counting image sensors

    CERN Document Server

    Teranishi, Nobukazu; Theuwissen, Albert; Stoppa, David; Charbon, Edoardo

    2017-01-01

    The field of photon-counting image sensors is advancing rapidly with the development of various solid-state image sensor technologies including single photon avalanche detectors (SPADs) and deep-sub-electron read noise CMOS image sensor pixels. This foundational platform technology will enable opportunities for new imaging modalities and instrumentation for science and industry, as well as new consumer applications. Papers discussing various photon-counting image sensor technologies and selected new applications are presented in this all-invited Special Issue.

  2. The precision of mapping between number words and the approximate number system predicts children's formal math abilities.

    Science.gov (United States)

    Libertus, Melissa E; Odic, Darko; Feigenson, Lisa; Halberda, Justin

    2016-10-01

    Children can represent number in at least two ways: by using their non-verbal, intuitive approximate number system (ANS) and by using words and symbols to count and represent numbers exactly. Furthermore, by the time they are 5 years old, children can map between the ANS and number words, as evidenced by their ability to verbally estimate numbers of items without counting. How does the quality of the mapping between approximate and exact numbers relate to children's math abilities? The role of the ANS-number word mapping in math competence remains controversial for at least two reasons. First, previous work has not examined the relation between verbal estimation and distinct subtypes of math abilities. Second, previous work has not addressed how distinct components of verbal estimation, namely mapping accuracy and variability, might each relate to math performance. Here, we addressed these gaps by measuring individual differences in ANS precision, verbal number estimation, and formal and informal math abilities in 5- to 7-year-old children. We found that verbal estimation variability, but not estimation accuracy, predicted formal math abilities, even when controlling for age, expressive vocabulary, and ANS precision, and that it mediated the link between ANS precision and overall math ability. These findings suggest that variability in the ANS-number word mapping may be especially important for formal math abilities.

  3. Solid-State Neutron Multiplicity Counting System Using Commercial Off-the-Shelf Semiconductor Detectors

    Energy Technology Data Exchange (ETDEWEB)

    Rozhdestvenskyy, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-08-09

    This work iterates on the first demonstration of a solid-state neutron multiplicity counting system developed at Lawrence Livermore National Laboratory by using commercial off-the-shelf detectors. The system was demonstrated to determine the mass of a californium-252 neutron source within 20% error, requiring only a one-hour measurement time with 20 cm² of active detector area.

  4. iSpectra: An Open Source Toolbox For The Analysis of Spectral Images Recorded on Scanning Electron Microscopes.

    Science.gov (United States)

    Liebske, Christian

    2015-08-01

    iSpectra is an open source and system-independent toolbox for the analysis of spectral images (SIs) recorded on energy-dispersive spectroscopy (EDS) systems attached to scanning electron microscopes (SEMs). The aim of iSpectra is to assign pixels with similar spectral content to phases, accompanied by cumulative phase spectra with superior counting statistics for quantification. Pixel-to-phase assignment starts with a threshold-based pre-sorting of spectra to create groups of pixels with identical elemental budgets, similar to a method described by van Hoek (2014). Subsequent merging of groups and re-assignment of pixels using elemental or principal component histogram plots enables the user to generate chemically and texturally plausible phase maps. A variety of standard image processing algorithms can be applied to groups of pixels to optimize pixel-to-phase assignments, such as morphology operations to account for overlapping excitation volumes over pixels located at phase boundaries. iSpectra supports batch processing and allows pixel-to-phase assignments to be applied to an unlimited number of SIs, thus enabling phase mapping of large area samples like petrographic thin sections.

  5. Platelet counting using the Coulter electronic counter.

    Science.gov (United States)

    Eggleton, M J; Sharp, A A

    1963-03-01

    A method for counting platelets in dilutions of platelet-rich plasma using the Coulter electronic counter is described.(1) The results obtained show that such platelet counts are at least as accurate as the best methods of visual counting. The various technical difficulties encountered are discussed.

  6. Automated uranium analysis by delayed-neutron counting

    International Nuclear Information System (INIS)

    Kunzendorf, H.; Loevborg, L.; Christiansen, E.M.

    1980-10-01

    Automated uranium analysis by fission-induced delayed-neutron counting is described. A short description is given of the instrumentation, including the transfer system, process control, irradiation and counting sites, and computer operations. Characteristic parameters of the facility (sample preparation, background, and standards) are discussed. A sensitivity of 817 ± 22 counts per 10⁻⁶ g U is found using irradiation, delay, and counting times of 20 s, 5 s, and 10 s, respectively. Precision is generally better than 1% for normal geological samples. The critical level and detection limit for 7.5 g samples are 8 and 16 ppb, respectively. The importance of some physical and elemental interferences is outlined. Dead-time corrections of measured count rates are necessary, and a polynomial expression is used for count rates up to 10⁵. The presence of rare earth elements is regarded as the most important elemental interference. A typical application is given and other areas of application are described. (author)

  7. Count rate effect in proportional counters

    International Nuclear Information System (INIS)

    Bednarek, B.

    1980-01-01

    A new concept is presented explaining the changes in the spectrometric parameters of proportional counters which occur with varying count rate. The basic feature of this concept is that the gas gain of the counter remains constant over a wide range of count rates, and that the observed decrease in pulse amplitude and deterioration of the energy resolution are the results of changes in the shape of the original current pulses generated in the active volume of the counter. In order to confirm the validity of this statement, measurements of the gas amplification factor have been made over a wide count rate range. It is shown that above a certain critical value the gas gain depends on both the operating voltage and the count rate. (author)

  8. Declarative and Scalable Selection for Map Visualizations

    DEFF Research Database (Denmark)

    Kefaloukos, Pimin Konstantin Balic

    …and is itself a source and cause of prolific data creation. This calls for scalable map processing techniques that can handle the data volume and which play well with the predominant data models on the Web. (4) Maps are now consumed around the clock by a global audience. While historical maps were single-user… …-defined constraints as well as custom objectives. The purpose of the language is to derive a target multi-scale database from a source database according to holistic specifications. (b) The Glossy SQL compiler allows Glossy SQL to be scalably executed in a spatial analytics system, such as a spatial relational… There are indications that the method is scalable for databases that contain millions of records, especially if the target language of the compiler is substituted by a cluster-ready variant of SQL. While several realistic use cases for maps have been implemented in CVL, additional non-geographic data visualization uses…

  9. Mathematical model of a neutron counting system used for the characteristics control of spontaneously fissioning material

    International Nuclear Information System (INIS)

    Bessis, J.

    1986-09-01

    Methods are described for calculating the probabilities p(m) of detecting m neutrons within a counting gate of a fraction of a millisecond, with m varying from zero to a few. At the present stage, these methods assume the source to be very small. Using the generating-function concept, they cover both possible modes of the counting system for opening gates, i.e.: 1) gates triggered randomly with respect to the emitted neutrons, 2) gates triggered by the detected neutrons themselves. Computed values are finally compared with measured ones. The comparison is very favourable, since the respective deviations are often lower than 1%. [fr]

  10. Upgradation of automatic liquid scintillation counting system

    International Nuclear Information System (INIS)

    Bhattacharya, Sadhana; Behere, Anita; Sonalkar, S.Y.; Vaidya, P.P.

    2001-01-01

    This paper describes the upgrading of the microprocessor-based Automatic Liquid Scintillation Counting (MLSC) systems. The system was developed in the 1980s, and many units were subsequently manufactured and supplied to environmental survey labs at various nuclear power plants. Recently the system has been upgraded to a more sophisticated one by using PC add-on hardware and developing Windows-based software. The software implements a more intuitive graphical user interface and also enhances the features, making it comparable with commercially available systems. It implements data processing using full-spectrum analysis, as against the channel-ratio method adopted earlier, improving the accuracy of the results. It also facilitates qualitative as well as quantitative analysis of the β-spectrum, making it possible to analyze a sample containing an unknown β-source. (author)

  11. Resonance ionization spectroscopy: Counting noble gas atoms

    International Nuclear Information System (INIS)

    Hurst, G.S.; Payne, M.G.; Chen, C.H.; Willis, R.D.; Lehmann, B.E.; Kramer, S.D.

    1981-01-01

    The purpose of this paper is to describe new work on the counting of noble gas atoms, using lasers for the selective ionization and detectors for counting individual particles (electrons or positive ions). When positive ions are counted, various kinds of mass analyzers (magnetic, quadrupole, or time-of-flight) can be incorporated to provide A selectivity. We show that a variety of interesting and important applications can be made with atom-counting techniques which are both atomic number (Z) and mass number (A) selective. (orig./FKS)

  12. Basement domain map of the conterminous United States and Alaska

    Science.gov (United States)

    Lund, Karen; Box, Stephen E.; Holm-Denoma, Christopher S.; San Juan, Carma A.; Blakely, Richard J.; Saltus, Richard W.; Anderson, Eric D.; DeWitt, Ed

    2015-01-01

    The basement-domain map is a compilation of basement domains in the conterminous United States and Alaska designed to be used at 1:5,000,000-scale, particularly as a base layer for national-scale mineral resource assessments. Seventy-seven basement domains are represented as eighty-three polygons on the map. The domains are based on interpretations of basement composition, origin, and architecture and developed from a variety of sources. Analysis of previously published basement, lithotectonic, and terrane maps as well as models of planetary development were used to formulate the concept of basement and the methodology of defining domains that spanned the ages of Archean to present but formed through different processes. The preliminary compilations for the study areas utilized these maps, national-scale gravity and aeromagnetic data, published and limited new age and isotopic data, limited new field investigations, and conventional geologic maps. Citation of the relevant source data for compilations and the source and types of original interpretation, as derived from different types of data, are provided in supporting descriptive text and tables.

  13. Research on Topographic Map Updating

    Directory of Open Access Journals (Sweden)

    Ivana Javorović

    2013-04-01

    Full Text Available The interpretability of a panchromatic IRS-1C satellite image integrated with a multispectral Landsat TM image was investigated for the purpose of updating a topographic map sheet at the scale of 1:25 000. The geocoding of the source map was based on the trigonometric points of the map sheet. The satellite images were geocoded using control points selected from the map. The contents of the map were vectorized and a topographic database designed. Digital image processing improved the interpretability of the images, after which the new contents were vectorized. Changes in forest and water areas were detected using unsupervised classification of spatially and spectrally merged images. Verification of the results was made using corresponding aerial photographs. Although this methodology could not ensure complete updating of the topographic map at the scale of 1:25 000, the database has been updated with a large amount of data. Erdas Imagine 8.3 software was used.

  14. Counting and Surveying Homeless Youth: Recommendations from YouthCount 2.0!, a Community-Academic Partnership.

    Science.gov (United States)

    Narendorf, Sarah C; Santa Maria, Diane M; Ha, Yoonsook; Cooper, Jenna; Schieszler, Christine

    2016-12-01

    Communities across the United States are increasing efforts to find and count homeless youth. This paper presents findings and lessons learned from a community/academic partnership to count homeless youth and conduct an in-depth research survey focused on the health needs of this population. Over a 4-week recruitment period, 632 youth were counted and 420 surveyed. Methodological successes included an extended counting period, broader inclusion criteria to capture those in unstable housing, use of student volunteers in health training programs, recruiting from magnet events for high-risk youth, and partnering with community agencies to disseminate findings. Strategies that did not facilitate recruitment included respondent-driven sampling, street canvassing beyond known hotspots, and having community agencies lead data collection. Surveying was successful in gathering data on reasons for homelessness, history in public systems of care, mental health history and needs, sexual risk behaviors, health status, and substance use. Youth were successfully surveyed across housing types, including shelters or transitional housing (n = 205), unstable housing such as doubled up with friends or acquaintances (n = 75), and literally on the streets or living in a place not meant for human habitation (n = 140). Most youth completed the self-report survey and provided detailed information about risk behaviors. Recommendations for combining research data collection with counting are presented.

  15. Improving the counting efficiency in time-correlated single photon counting experiments by dead-time optimization

    Energy Technology Data Exchange (ETDEWEB)

    Peronio, P.; Acconcia, G.; Rech, I.; Ghioni, M. [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy)

    2015-11-15

    Time-Correlated Single Photon Counting (TCSPC) has long been recognized as the most sensitive method for fluorescence lifetime measurements, but it often requires long data acquisition times. This drawback is related to the limited counting capability of the TCSPC technique, due to pile-up and counting loss effects. In recent years, multi-module TCSPC systems have been introduced to overcome this issue: splitting the light onto several detectors connected to independent TCSPC modules proportionally increases the counting capability. Of course, multi-module operation also increases the system cost and can cause space and power supply problems. In this paper, we propose an alternative approach based on a new detector and processing electronics designed to reduce the overall system dead time, thus enabling efficient photon collection at high excitation rates. We present a fast active quenching circuit for single-photon avalanche diodes which features a minimum dead time of 12.4 ns. We also introduce a new Time-to-Amplitude Converter (TAC) able to attain extra-short dead time thanks to the combination of a scalable array of monolithically integrated TACs and a sequential router. The fast TAC (F-TAC) makes it possible to operate the system towards the upper limit of detector count rate capability (∼80 Mcps) with reduced pile-up losses, addressing one of the historic criticisms of TCSPC. Preliminary measurements on the F-TAC are presented and discussed.

  16. Design and update of a classification system: the UCSD map of science.

    Directory of Open Access Journals (Sweden)

    Katy Börner

    Full Text Available Global maps of science can be used as a reference system to chart career trajectories, the location of emerging research frontiers, or the expertise profiles of institutes or nations. This paper details the data preparation, analysis, and layout performed when designing and subsequently updating the UCSD map of science and classification system. The original classification and map use 7.2 million papers and their references from Elsevier's Scopus (about 15,000 source titles, 2001-2005) and Thomson Reuters' Web of Science (WoS) Science, Social Science, and Arts & Humanities Citation Indexes (about 9,000 source titles, 2001-2004) - about 16,000 unique source titles. The updated map and classification add six years (2005-2010) of WoS data and three years (2006-2008) from Scopus to the existing category structure, increasing the number of source titles to about 25,000. To our knowledge, this is the first time that a widely used map of science has been updated. A comparison of the original 5-year and the new 10-year maps and classification system shows (i) an increase of 9,409 in the total number of journals that can be mapped (social sciences had an 80% increase, humanities 119%, medical 32% and natural science 74%), (ii) a simplification of the map by assigning all but five highly interdisciplinary journals to exactly one discipline, (iii) a more even distribution of journals over the 554 subdisciplines and 13 disciplines when calculating the coefficient of variation, and (iv) a better reflection of journal clusters when compared with paper-level citation data. When evaluating the map against a list of desirable features for maps of science, the updated map is shown to have higher mapping accuracy, easier understandability as fewer journals are multiply classified, and higher usability for the generation of data overlays, among others.

  17. Expressive map design: OGC SLD/SE++ extension for expressive map styles

    Science.gov (United States)

    Christophe, Sidonie; Duménieu, Bertrand; Masse, Antoine; Hoarau, Charlotte; Ory, Jérémie; Brédif, Mathieu; Lecordix, François; Mellado, Nicolas; Turbet, Jérémie; Loi, Hugo; Hurtut, Thomas; Vanderhaeghe, David; Vergne, Romain; Thollot, Joëlle

    2018-05-01

    In the context of custom map design, the availability of more artistic and expressive tools has been identified as a cartographic need for designing stylized and expressive maps. Based on previous work on style formalization, an approach for specifying map style has been proposed and tested on particular use cases. A first step deals with the analysis of inspiration sources, in order to extract 'what makes the style of the source', i.e. the salient visual characteristics to be automatically reproduced (textures, spatial arrangements, linear stylization, etc.). In a second step, in order to mimic and generate those visual characteristics, existing and innovative rendering techniques have been implemented in our GIS engine, extending its capability to generate expressive renderings. An extension of the existing cartographic pipeline has therefore been proposed, based on the following aspects: 1) extension of the OGC SLD/SE symbolization specifications, in order to provide a formalism to specify and reference expressive rendering methods; 2) separation of the specification of each rendering method from its parameterization, stored as metadata. The main contribution has been described in (Christophe et al. 2016). In this paper, we focus firstly on the extension of the cartographic pipeline (SLD++ and metadata) and secondly on map design capabilities, which have been experimented with on various topographic styles: old cartographic styles (Cassini), artistic styles (watercolor, impressionism, Japanese print), hybrid topographic styles (ortho-imagery & vector data), and finally abstract and photo-realistic styles for the geovisualization of coastal areas. The genericity and interoperability of our approach are promising and have already been tested for 3D visualization.

  18. Uranium mass and neutron multiplication factor estimates from time-correlation coincidence counts

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Wenxiong [China Academy of Engineering Physics, Center for Strategic Studies, Beijing 100088 (China); Li, Jiansheng [China Academy of Engineering Physics, Institute of Nuclear Physics and Chemistry, Mianyang 621900 (China); Zhu, Jianyu [China Academy of Engineering Physics, Center for Strategic Studies, Beijing 100088 (China)

    2015-10-11

    Time-correlation coincidence counts of neutrons are an important means to measure attributes of nuclear material. The main deficiency in the analysis is that an attribute of an unknown component can only be assessed by comparing it with similar known components. There is a lack of a universal method of measurement suitable for the different attributes of the components. This paper presents a new method that uses universal relations to estimate the mass and neutron multiplication factor of any uranium component with known enrichment. Based on numerical simulations and analyses of 64 highly enriched uranium components with different thicknesses and average radii, the relations between mass, multiplication and coincidence spectral features have been obtained by linear regression analysis. To examine the validity of the method in estimating the mass of uranium components with different sizes, shapes, enrichment, and shielding, the features of time-correlation coincidence-count spectra for other objects with similar attributes are simulated. Most of the masses and multiplications for these objects could also be derived by the formulation. Experimental measurements of highly enriched uranium castings have also been used to verify the formulation. The results show that for a well-designed time-dependent coincidence-count measuring system of a uranium attribute, there are a set of relations dependent on the uranium enrichment by which the mass and multiplication of the measured uranium components of any shape and size can be estimated from the features of the source-detector coincidence-count spectrum.
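
    The regression step described above can be sketched numerically. This is a minimal sketch in which all spectrum features and component attributes are hypothetical placeholders (none of these numbers come from the paper): linear relations are fitted by least squares between coincidence-spectrum features and the known mass and multiplication of training components, then applied to a new component.

```python
# Illustrative sketch only: fit linear relations between coincidence-spectrum
# features and (mass, multiplication), in the spirit of the paper's linear
# regression analysis. All numbers below are synthetic placeholders.
import numpy as np

# Hypothetical training set: rows = simulated HEU components, columns =
# features extracted from the coincidence-count spectrum (e.g., singles
# rate, doubles rate, spectral slope).
features = np.array([
    [1.2, 0.31, 0.054],
    [2.3, 0.66, 0.061],
    [3.1, 0.98, 0.072],
    [4.4, 1.52, 0.080],
    [5.0, 1.90, 0.090],
])
mass_kg = np.array([2.0, 4.1, 6.2, 8.5, 10.3])             # known masses
multiplication = np.array([1.08, 1.15, 1.24, 1.33, 1.41])  # known M values

# Least-squares fits with an intercept term.
X = np.hstack([features, np.ones((len(features), 1))])
coef_mass, *_ = np.linalg.lstsq(X, mass_kg, rcond=None)
coef_mult, *_ = np.linalg.lstsq(X, multiplication, rcond=None)

# Estimate the attributes of an unmeasured component from its features.
new = np.array([2.8, 0.85, 0.068, 1.0])
print("estimated mass [kg]:", new @ coef_mass)
print("estimated multiplication:", new @ coef_mult)
```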

  19. Novel Family of modified qZS buck-boost multilevel inverters with reduced switch count

    DEFF Research Database (Denmark)

    Husev, Oleksandr; Strzelecki, Ryszard; Blaabjerg, Frede

    2015-01-01

    This paper describes a novel family of modified quasi-Z-source buck-boost multilevel inverters with reduced switch count. The inverters are derived by means of the modified inverter configuration with quasi-Z-source networks. The main benefits of the proposed solutions lie in the increased number of levels with all possible sequences: reduced THD, reduced voltage stress on the transistors, and a smaller output filter. Their modulation techniques are also proposed and described. Simulation results have confirmed all theoretical predictions. The pros and cons are discussed in the conclusions.

  20. CalCOFI Larvae Counts Positive Tows

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Fish larvae counts and standardized counts for larvae captured in CalCOFI ichthyoplankton nets (primarily vertical [Calvet or Pairovet], oblique [bongo or ring nets],...

  1. Farmer data sourcing. The case study of the spatial soil information maps in South Tyrol.

    Science.gov (United States)

    Della Chiesa, Stefano; Niedrist, Georg; Thalheimer, Martin; Hafner, Hansjörg; La Cecilia, Daniele

    2017-04-01

    The north Italian region of South Tyrol is Europe's largest apple-growing area, exporting ca. 15% of European and 2% of worldwide production; its vineyards represent ca. 1% of Italian production. In order to deliver high-quality food, most farmers in South Tyrol follow sustainable farming practices. One key practice is sustainable soil management, whereby farmers regularly (every 5 years) collect soil samples and send them for analysis to improve cultivation management, yield and, finally, profitability. Such data, however, generally remain inaccessible. In this regard, private interests and the public administration in South Tyrol have established a long tradition of collaboration with the local farming industry. This has led to the collection of a large spatial and temporal database of soil analyses across all cultivated areas. Thanks to this best practice, information on soil properties is centralized and geocoded. The large dataset consists mainly of soil information on texture, humus content, pH and the availability of microelements such as K, Mg, B, Mn, Cu and Zn. These data were finally spatialized by means of geostatistical methods, and several high-resolution digital maps were created. In this contribution, we present the best practice by which farmer data source soil information in South Tyrol, show the capability of a large spatial-temporal geocoded soil dataset to reproduce detailed digital soil property maps and to assess long-term changes in soil properties, and finally discuss implications and potential applications.
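
    The spatialization step mentioned above can be illustrated with a short script. This is a minimal sketch using inverse-distance weighting as a simplified stand-in for the kriging-type geostatistical methods the abstract refers to; the sample coordinates and humus values are hypothetical.

```python
# Spatialize scattered point soil samples onto a regular raster.
# IDW is used here as a simplified stand-in for kriging; data are invented.
import numpy as np

def idw(xy_samples, values, xy_targets, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of scattered samples."""
    d = np.linalg.norm(xy_targets[:, None, :] - xy_samples[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w @ values) / w.sum(axis=1)

samples = np.array([[0.0, 0.0], [1.0, 0.2], [0.4, 0.9], [0.8, 0.7]])
humus_pct = np.array([2.1, 3.4, 2.8, 3.0])   # hypothetical humus content

gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
humus_map = idw(samples, humus_pct, grid).reshape(gx.shape)
print(humus_map.shape)   # (50, 50) raster of interpolated humus content
```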

  2. Track counting in radon dosimetry

    International Nuclear Information System (INIS)

    Fesenbeck, Ingo; Koehler, Bernd; Reichert, Klaus-Martin

    2013-01-01

    The newly developed, computer-controlled track counting system is capable of imaging and analyzing the entire area of nuclear track detectors. The high optical resolution allows a new analysis approach for automated counting using digital image processing technologies. In this way, even detectors with higher exposures can be evaluated reliably by an automated process. (orig.)
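
    The core of such automated counting can be sketched in a few lines: threshold the detector image, then count connected components. The sketch below runs on a synthetic image; a production system would add focus handling, shape filtering and separation of overlapping tracks.

```python
# Minimal track-counting sketch: threshold + connected-component labeling.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.normal(0.1, 0.02, size=(512, 512))   # synthetic background noise
for _ in range(40):                            # paint 40 fake etched tracks
    y, x = rng.integers(20, 492, size=2)
    img[y - 3:y + 3, x - 3:x + 3] += 0.8

binary = img > 0.5                             # global threshold
labels, n_blobs = ndimage.label(binary)        # 4-connectivity by default
sizes = ndimage.sum(binary, labels, range(1, n_blobs + 1))
n_tracks = int((sizes >= 4).sum())             # reject specks below min area
print("track count:", n_tracks)
```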

  3. CalCOFI Egg Counts Positive Tows

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Fish egg counts and standardized counts for eggs captured in CalCOFI ichthyoplankton nets (primarily vertical [Calvet or Pairovet], oblique [bongo or ring nets], and...

  4. Regression Models For Multivariate Count Data.

    Science.gov (United States)

    Zhang, Yiwen; Zhou, Hua; Zhou, Jin; Sun, Wei

    2017-01-01

    Data with multivariate count responses frequently occur in modern applications. The commonly used multinomial-logit model is limiting due to its restrictive mean-variance structure. For instance, analyzing count data from the recent RNA-seq technology by the multinomial-logit model leads to serious errors in hypothesis testing. The ubiquity of over-dispersion and complicated correlation structures among multivariate counts calls for more flexible regression models. In this article, we study some generalized linear models that incorporate various correlation structures among the counts. Current literature lacks a treatment of these models, partly due to the fact that they do not belong to the natural exponential family. We study the estimation, testing, and variable selection for these models in a unifying framework. The regression models are compared on both synthetic and real RNA-seq data.
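
    The models studied in the article (e.g., Dirichlet-multinomial-type extensions) are not part of standard toolkits, but the over-dispersion problem they address can be illustrated with a negative-binomial GLM on a single count response. A minimal sketch with simulated data:

```python
# Over-dispersed count regression: a Poisson mixed over a gamma rate gives
# negative-binomial counts, which a plain Poisson GLM would misfit.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
mu = np.exp(0.5 + 0.8 * x)
y = rng.poisson(rng.gamma(shape=2.0, scale=mu / 2.0))  # NB draws, mean mu

X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(fit.params)   # intercept and slope estimates on the log scale
```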

  5. Passive non destructive assay of hull waste by gross neutron counting method

    International Nuclear Information System (INIS)

    Andola, Sanjay; Sur, Amit; Rawool, A.M.; Sharma, B.; Kaushik, T.C.; Gupta, S.C.; Basu, Sekhar; Raman Kumar; Agarwal, K.

    2014-01-01

    Special nuclear material accounting (SNMA) is an important and necessary issue in nuclear waste management. The hull waste generated from the dissolution of spent fuel contains small amounts of uranium, plutonium and other actinides due to undissolved material trapped inside Zircaloy tubes. We report here on the development of a passive hull monitoring system using the gross neutron counting technique and its implementation with semiautomatic instrumentation. The overall sensitivity of the 3He detector banks placed 75 cm from the centre of a loaded hull cask comes out to 5.2 x 10^-3 counts per neutron (c/n), while with a standard Pu-Be source placed in the same position it comes out to 3.1 x 10^-3 c/n. The difference in efficiency is mainly due to the differences in the geometry and size of the hull cask, as well as the difference between the energy spectra of the hull waste and the Pu-Be source; this is accounted for through Monte Carlo computations. The Pu mass in solid waste comes out as expected and varies with the surface dose rate of the drum in an almost proportional manner. Being simple and less time consuming, this setup has been installed for routine assay of solid hull waste at NRB, Tarapur

  6. Enhancing programming logic thinking using analogy mapping

    Science.gov (United States)

    Sukamto, R. A.; Megasari, R.

    2018-05-01

    Programming logic thinking is the most important competence for computer science students. However, programming is one of the difficult subjects in a computer science program. This paper reports our work on enhancing students' programming logic thinking using Analogy Mapping in a basic programming subject. Analogy Mapping is a computer application which converts source code into analogy images. This research used a time series evaluation, and the results showed that Analogy Mapping can enhance students' programming logic thinking.

  7. Application of EM holographic methods to borehole vertical electric source data to map a fuel oil spill

    International Nuclear Information System (INIS)

    Bartel, L.C.

    1993-01-01

    The multifrequency, multisource holographic method used in the analysis of seismic data is extended to electromagnetic (EM) data within the audio frequency range. The method is applied to the secondary magnetic fields produced by a borehole vertical electric source (VES). The holographic method is a numerical reconstruction procedure based on the double focusing principle for both the source array and the receiver array. The approach used here is to Fourier transform the constructed image from frequency space to time space and set time equal to zero. The image is formed when the in-phase part (real part) is a maximum or the out-of-phase part (imaginary part) is a minimum; i.e., the EM wave is phase coherent at its origination. In the application here the secondary magnetic fields are treated as scattered fields. In the numerical reconstruction, the seismic analog of the wave vector is used; i.e., the imaginary part of the actual wave vector is ignored. The multifrequency, multisource holographic method is applied to calculated model data and to actual field data acquired to map a diesel fuel oil spill

  9. CHANDRA ACIS SURVEY OF X-RAY POINT SOURCES: THE SOURCE CATALOG

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Song; Liu, Jifeng; Qiu, Yanli; Bai, Yu; Yang, Huiqin; Guo, Jincheng; Zhang, Peng, E-mail: jfliu@bao.ac.cn, E-mail: songw@bao.ac.cn [Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China)

    2016-06-01

    The Chandra archival data is a valuable resource for various studies on different X-ray astronomy topics. In this paper, we utilize this wealth of information and present a uniformly processed data set, which can be used to address a wide range of scientific questions. The data analysis procedures are applied to 10,029 Advanced CCD Imaging Spectrometer observations, which produces 363,530 source detections belonging to 217,828 distinct X-ray sources. This number is twice the size of the Chandra Source Catalog (Version 1.1). The catalogs in this paper provide abundant estimates of the detected X-ray source properties, including source positions, counts, colors, fluxes, luminosities, variability statistics, etc. Cross-correlation of these objects with galaxies shows that 17,828 sources are located within the D25 isophotes of 1110 galaxies, and 7504 sources are located between the D25 and 2D25 isophotes of 910 galaxies. Contamination analysis with the log N - log S relation indicates that 51.3% of objects within the 2D25 isophotes are truly relevant to galaxies, and the "net" source fraction increases to 58.9%, 67.3%, and 69.1% for sources with luminosities above 10^37, 10^38, and 10^39 erg s^-1, respectively. Among the possible scientific uses of this catalog, we discuss the possibility of studying intra-observation variability, inter-observation variability, and supersoft sources (SSSs). About 17,092 detected sources above 10 counts are classified as variable within an individual observation by the Kolmogorov-Smirnov (K-S) criterion (P_K-S < 0.01). There are 99,647 sources observed more than once and 11,843 sources observed 10 times or more, offering us a wealth of data with which to explore the long-term variability. There are 1638 individual objects (~2350 detections) classified as SSSs. As a quite interesting subclass, detailed studies on X-ray spectra and optical spectroscopic follow-up are needed to
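
    The intra-observation variability criterion mentioned above can be sketched simply: under a constant source flux, photon arrival times are uniform over the exposure, so a Kolmogorov-Smirnov test against the uniform distribution flags variability. The arrival times below are simulated, not Chandra data.

```python
# K-S variability check on simulated photon arrival times.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
exposure = 10_000.0                              # seconds
steady = rng.uniform(0, exposure, 200)           # constant-rate source
flaring = np.concatenate([rng.uniform(0, exposure, 150),
                          rng.uniform(4000, 4500, 50)])  # adds a flare

for name, t in [("steady", steady), ("flaring", flaring)]:
    d, p = stats.kstest(t, "uniform", args=(0, exposure))
    print(f"{name}: D={d:.3f}, P={p:.4f}, variable={p < 0.01}")
```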

  10. Validity of total leucocytes count and neutrophil count (differential leucocytes) in diagnosing suspected acute appendicitis

    International Nuclear Information System (INIS)

    Anwar, M.W.; Abid, I.

    2012-01-01

    Objective: To compare the diagnostic accuracy of the Total Leucocyte Count (TLC) with the Neutrophil (Differential Leucocyte) Count (DLC) in diagnosing suspected acute appendicitis. Study design: Validation study. Place and duration of the study: Department of Surgery, Combined Military Hospital (CMH) Rawalpindi, from April 2008 to October 2008. Method: A total of 100 patients with pain in the right iliac fossa who underwent appendicectomy were included. A detailed history was taken from all patients regarding pain in the right lower abdomen, its severity, its nature, and relieving or provoking factors. A detailed clinical examination was performed, and total and differential leucocyte counts were done. After appendicectomy, every patient's appendix was examined grossly for evidence of appendicitis. Diagnostic measures of TLC and DLC were calculated by standard formulas. Results: The sensitivity and specificity of TLC were 86.9% and 81.25% respectively, and those of DLC were 82% and 68.75% respectively. Accuracy was 86% for TLC and 80% for DLC. Conclusion: TLC is a more sensitive, specific and accurate test than DLC, and it should be used as a diagnostic aid in suspected acute appendicitis. (author)

  11. Scalable, incremental learning with MapReduce parallelization for cell detection in high-resolution 3D microscopy data

    KAUST Repository

    Sung, Chul; Woo, Jongwook; Goodman, Matthew; Huffman, Todd; Choe, Yoonsuck

    2013-01-01

    Accurate estimation of neuronal count and distribution is central to the understanding of the organization and layout of cortical maps in the brain, and changes in the cell population induced by brain disorders. High-throughput 3D microscopy

  12. Mapping Applications to an FPFA Tile

    NARCIS (Netherlands)

    Rosien, M.A.J.; Guo, Y.; Smit, Gerardus Johannes Maria; Krol, Th.

    This paper introduces a transformational design method which can be used to map code written in a high-level source language, like C, to a coarse-grain reconfigurable architecture. The source code is first translated into a Control Dataflow Graph (CDFG), which is minimized using a set of behaviour

  13. The Precision of Mapping Between Number Words and the Approximate Number System Predicts Children’s Formal Math Abilities

    Science.gov (United States)

    Libertus, Melissa E.; Odic, Darko; Feigenson, Lisa; Halberda, Justin

    2016-01-01

    Children can represent number in at least two ways: by using their non-verbal, intuitive Approximate Number System (ANS), and by using words and symbols to count and represent numbers exactly. Further, by the time they are five years old, children can map between the ANS and number words, as evidenced by their ability to verbally estimate numbers of items without counting. How does the quality of the mapping between approximate and exact numbers relate to children’s math abilities? The role of the ANS-number word mapping in math competence remains controversial for at least two reasons. First, previous work has not examined the relation between verbal estimation and distinct subtypes of math abilities. Second, previous work has not addressed how distinct components of verbal estimation – mapping accuracy and variability – might each relate to math performance. Here, we address these gaps by measuring individual differences in ANS precision, verbal number estimation, and formal and informal math abilities in 5- to 7-year-old children. We found that verbal estimation variability, but not estimation accuracy, predicted formal math abilities even when controlling for age, expressive vocabulary, and ANS precision, and that it mediated the link between ANS precision and overall math ability. These findings suggest that variability in the ANS-number word mapping may be especially important for formal math abilities. PMID:27348475

  14. Cowichan Valley energy mapping and modelling. Report 5 - Energy density mapping projections. Final report. [Vancouver Island, Canada

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-06-15

    The driving force behind the Integrated Energy Mapping and Analysis project was the identification and analysis of a suite of pathways that the Cowichan Valley Regional District (CVRD) can utilise to increase its energy resilience, as well as reduce energy consumption and GHG emissions, with a primary focus on the residential sector. The mapping and analysis undertaken will support provincial energy and GHG reduction targets, and the suite of pathways outlined will address a CVRD internal target that calls for 75% of the region's residential energy to come from locally sourced renewables by 2050. The target has been developed as a mechanism to meet resilience and climate action targets. The maps and findings produced are to be integrated into a regional policy framework currently under development. Task 5 focused on energy projection mapping to estimate and visualise energy consumption density and GHG emissions under different scenarios. The scenarios from task 4 were built around the energy consumption density of the residential sector under future land use patterns and rely on different energy source combinations (the suite of pathways). In task 5 the energy usage under the different scenarios was fed back into GIS, giving a visual representation of forecasted residential energy consumption per unit area. The methodology is identical to that used in task 2, where current usage was mapped, whereas the mapping in this task is for future forecasts. These results are documented in this report. In addition, GHG mapping under the various scenarios was also undertaken. (LN)

  15. Enhanced counting efficiency of Cerenkov radiation from bismuth-210

    International Nuclear Information System (INIS)

    Peck, G.A.; Smith, J.D.

    1998-01-01

    This paper describes the measurement of 210Bi by Cerenkov counting in a commercial liquid scintillation counter. The counting efficiency in water is 0.17 counts per second per becquerel (17%). When the enhancers Triton X-100 (15% v/v) and sodium salicylate (1% m/v) are added to the solution, the counting efficiency for 210Bi increases from 17% to 75%. The 210Po daughter of 210Bi causes an interference of 0.85 counts per second per becquerel in the presence of the enhancers, but not in water. When 210Bi and 210Po are present in secular equilibrium, the total counting efficiency is 160%. When 210Bi and 210Po are not in secular equilibrium, the 210Po can be removed immediately before counting by plating onto silver foil. The use of the enhancers gives a substantial increase in counting efficiency compared to counting in water. Compared with solutions used in liquid scintillation counting, the enhancer solution is inexpensive and can be disposed of without environmental hazard. (author)

  17. Design and development of multiple sample counting setup

    International Nuclear Information System (INIS)

    Rath, D.P.; Murali, S.; Babu, D.A.R.

    2010-01-01

    Full text: The analysis of active samples on a regular basis, for ambient air activity and for floor contamination from radiochemical labs, accounts for a major share of the operational workload in the Health Physicist's responsibility. The requirement for daily air sample analysis with immediate and delayed counting from various labs, in addition to smear swipe checks of the labs, led to the development of a system that can carry out multiple sample analyses in a time-programmed manner from a single sample loading. A multiple alpha/beta counting system was designed and fabricated. It has arrangements for loading 10 samples in slots, which are counted in order in a time-programmed manner, with results displayed and records maintained on a PC. The paper describes the design and development of the multiple sample counting setup presently in use at the facility, which has resulted in a reduction of the man-hours consumed in counting and recording the results

  18. Automated vehicle counting using image processing and machine learning

    Science.gov (United States)

    Meany, Sean; Eskew, Edward; Martinez-Castro, Rosana; Jang, Shinae

    2017-04-01

    Vehicle counting is used by the government to improve roadways and the flow of traffic, and by private businesses for purposes such as determining the value of locating a new store in an area. A vehicle count can be performed manually or automatically. Manual counting requires an individual to be on-site and tally the traffic electronically or by hand; however, this can lead to miscounts due to factors such as human error. A common form of automatic counting involves pneumatic tubes, but pneumatic tubes disrupt traffic during installation and removal, and can be damaged by passing vehicles. Vehicle counting can also be performed via a camera at the count site recording video of the traffic, with counting performed manually post-recording or using automatic algorithms. This paper presents a low-cost procedure to perform automatic vehicle counting using remote video cameras with an automatic counting algorithm. The procedure utilizes a Raspberry Pi micro-computer to detect when a car is in a lane and generate an accurate count of vehicle movements. The method used in this paper applies background subtraction to process the images and a machine learning algorithm to provide the count. This method avoids the fatigue issues encountered in manual video counting and prevents the disruption of roadways that occurs when installing pneumatic tubes.
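
    The processing chain described above (background subtraction, then per-frame detection) can be sketched with OpenCV. This is a minimal sketch, not the authors' implementation: the video filename is a placeholder, the area threshold is arbitrary, and a real counter would add frame-to-frame tracking so each vehicle is counted exactly once.

```python
# Per-frame vehicle detection via background subtraction (OpenCV).
import cv2

cap = cv2.VideoCapture("traffic.mp4")     # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32)

detections = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)        # foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Count sufficiently large foreground blobs as candidate vehicles.
    detections += sum(cv2.contourArea(c) > 1500 for c in contours)

cap.release()
print("per-frame vehicle detections:", detections)
```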

  19. Statistical characterization of discrete conservative systems: The web map

    Science.gov (United States)

    Ruiz, Guiomar; Tirnakli, Ugur; Borges, Ernesto P.; Tsallis, Constantino

    2017-10-01

    We numerically study the two-dimensional, area-preserving web map. When the map is governed by ergodic behavior, it is, as expected, correctly described by Boltzmann-Gibbs statistics, based on the additive entropic functional $S_{BG}[p(x)] = -k \int dx\, p(x) \ln p(x)$. In contrast, possible ergodicity breakdown and transitory sticky dynamical behavior drag the map into the realm of generalized q-statistics, based on the nonadditive entropic functional $S_q[p(x)] = k \, (1 - \int dx\, [p(x)]^q)/(q - 1)$ ($q \in \mathbb{R}$; $S_1 = S_{BG}$). We statistically describe the system (probability distribution of the sum of successive iterates, sensitivity to the initial condition, and entropy production per unit time) for typical values of the parameter that controls the ergodicity of the map. For small (large) values of the external parameter K, we observe q-Gaussian distributions with $q = 1.935\dots$ (Gaussian distributions), like for the standard map. In contrast, for intermediate values of K, we observe a different scenario, due to the fractal structure of the trajectories embedded in the chaotic sea. Long-standing non-Gaussian distributions are characterized in terms of the kurtosis and the box-counting dimension of the chaotic sea.
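
    The kind of numerical experiment described above can be sketched as follows. The map form used here, u' = v, v' = -u - K sin v (taken modulo 2π), is one common writing of the web map and may differ in detail from the paper's convention; the statistic printed is the excess kurtosis of sums of iterates, which is near zero in the Gaussian (strongly chaotic) regime.

```python
# Iterate the web map and examine the distribution of sums of iterates.
import numpy as np

def web_map_sums(K, n_init=1000, n_iter=5000, seed=3):
    rng = np.random.default_rng(seed)
    u = rng.uniform(-np.pi, np.pi, n_init)
    v = rng.uniform(-np.pi, np.pi, n_init)
    total = np.zeros(n_init)
    for _ in range(n_iter):
        u, v = v, -u - K * np.sin(v)              # one assumed web-map step
        v = np.mod(v + np.pi, 2 * np.pi) - np.pi  # keep v on the torus
        total += v
    return total

for K in (0.1, 5.0):
    s = web_map_sums(K)
    z = (s - s.mean()) / s.std()
    print(f"K={K}: excess kurtosis = {np.mean(z**4) - 3:.2f}")
```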

  20. cudaMap: a GPU accelerated program for gene expression connectivity mapping.

    Science.gov (United States)

    McArt, Darragh G; Bankhead, Peter; Dunne, Philip D; Salto-Tellez, Manuel; Hamilton, Peter; Zhang, Shu-Dong

    2013-10-11

    Modern cancer research often involves large datasets and the use of sophisticated statistical techniques. Together these add a heavy computational load to the analysis, which is often coupled with issues surrounding data accessibility. Connectivity mapping is an advanced bioinformatic and computational technique dedicated to therapeutics discovery and drug re-purposing around differential gene expression analysis. On a normal desktop PC, it is common for the connectivity mapping task with a single gene signature to take > 2h to complete using sscMap, a popular Java application that runs on standard CPUs (Central Processing Units). Here, we describe new software, cudaMap, which has been implemented using CUDA C/C++ to harness the computational power of NVIDIA GPUs (Graphics Processing Units) to greatly reduce processing times for connectivity mapping. cudaMap can identify candidate therapeutics from the same signature in just over thirty seconds when using an NVIDIA Tesla C2050 GPU. Results from the analysis of multiple gene signatures, which would previously have taken several days, can now be obtained in as little as 10 minutes, greatly facilitating candidate therapeutics discovery with high throughput. We are able to demonstrate dramatic speed differentials between GPU assisted performance and CPU executions as the computational load increases for high accuracy evaluation of statistical significance. Emerging 'omics' technologies are constantly increasing the volume of data and information to be processed in all areas of biomedical research. Embracing the multicore functionality of GPUs represents a major avenue of local accelerated computing. cudaMap will make a strong contribution in the discovery of candidate therapeutics by enabling speedy execution of heavy duty connectivity mapping tasks, which are increasingly required in modern cancer research. cudaMap is open source and can be freely downloaded from http://purl.oclc.org/NET/cudaMap.

  1. Γ-source Neutral Point Clamped Inverter

    DEFF Research Database (Denmark)

    Mo, Wei; Loh, Poh Chiang; Blaabjerg, Frede

    Transformer-based Z-source inverters have recently been proposed to achieve promising buck-boost capability. They offer higher buck-boost capability, smaller size and a lower component count than Z-source inverters. On the other hand, neutral point clamped inverters have less switching stress and better output performance compared with traditional two-level inverters. Integrating these two types of configurations can help neutral point clamped inverters achieve enhanced voltage buck-boost capability.

  2. Seismic maps foster landmark legislation

    Science.gov (United States)

    Borcherdt, Roger D.; Brown, Robert B.; Page, Robert A.; Wentworth, Carl M.; Hendley, James W.

    1995-01-01

    When a powerful earthquake strikes an urban region, damage concentrates not only near the quake's source. Damage can also occur many miles from the source in areas of soft ground. In recent years, scientists have developed ways to identify and map these areas of high seismic hazard. This advance has spurred pioneering legislation to reduce earthquake losses in areas of greatest hazard.

  3. Determination of 241Pu by low level β-proportional counting

    International Nuclear Information System (INIS)

    Rosner, G.; Hoetzl, H.; Winkler, R.

    1992-01-01

    A chemical separation procedure is described which allows the direct determination of low 241Pu activities in environmental samples with a windowless gas-flow proportional counter. While current separation schemes based on anion exchange yield counting sources of sufficient purity for subsequent α-spectrometry, additional purification steps are required for β-counting of 241Pu. A combination of anion exchange from 9 mol/l HCl, LaF3 precipitation and TTA extraction was found to be suitable even for the analysis of long-range Chernobyl fallout samples, which contained interfering radionuclides with β-activities at least 3 to 4 orders of magnitude higher than usually encountered. No difference is detectable between the results of the present, direct procedure and the results of the conventional indirect method based on the build-up of 241Am. Average 241Pu/239+241Pu ratios in air and deposition samples taken at Neuherberg near Munich were 70±6 with the present procedure and 66±9 from 241Am build-up. (author) 29 refs.; 3 tabs.

  4. Global mapping of transposon location.

    Directory of Open Access Journals (Sweden)

    Abram Gabriel

    2006-12-01

    Full Text Available Transposable genetic elements are ubiquitous, yet their presence or absence at any given position within a genome can vary between individual cells, tissues, or strains. Transposable elements have profound impacts on host genomes by altering gene expression, assisting in genomic rearrangements, causing insertional mutations, and serving as sources of phenotypic variation. Characterizing a genome's full complement of transposons requires whole genome sequencing, precluding simple studies of the impact of transposition on interindividual variation. Here, we describe a global mapping approach for identifying transposon locations in any genome, using a combination of transposon-specific DNA extraction and microarray-based comparative hybridization analysis. We use this approach to map the repertoire of endogenous transposons in different laboratory strains of Saccharomyces cerevisiae and demonstrate that transposons are a source of extensive genomic variation. We also apply this method to mapping bacterial transposon insertion sites in a yeast genomic library. This unique whole genome view of transposon location will facilitate our exploration of transposon dynamics, as well as defining bases for individual differences and adaptive potential.

  5. Calculating concentration of inhaled radiolabeled particles from external gamma counting: External counting efficiency and attenuation coefficient of thorax

    International Nuclear Information System (INIS)

    Langenback, E.G.; Foster, W.M.; Bergofsky, E.H.

    1989-01-01

    We determined the overall external counting efficiency of radiolabeled particles deposited in the sheep lung. This efficiency permits the noninvasive calculation of the number of particles and microcuries from gamma-scintillation lung images of the live sheep. Additionally, we have calculated the attenuation of gamma radiation (120 keV) by the posterior chest wall and the gamma-scintillation camera collection efficiency for radiation emitted from the lung. Four methods were employed in our experiments: (1) by light-microscopic counting of discrete carbonized polystyrene particles with a count median diameter (CMD) of 2.85 microns and tagged with cobalt-57, we delineated a linear relationship between the number of particles and the emitted counts per minute (cpm) detected by well scintillation counting; (2) from this conversion relationship we determined the number of particles inhaled and deposited in the lungs by scintillation counting of fragments of dissected lung at autopsy; (3) we defined a linear association between the number of particles or microcuries contained in the lung and the emitted radiation as cpm detected by a gamma-scintillation camera in the live sheep prior to autopsy; and (4) we compared the emitted radiation from the lungs of the live sheep to that of the whole excised lungs in order to calculate the attenuation coefficient (ac) of the chest wall. The mean external counting efficiency was 4.00 x 10^4 particles/cpm (5.1 x 10^-3 microCi/cpm), the camera collection efficiency was 1 cpm per 10^4 disintegrations per minute (dpm), and the ac had a mean of 0.178/cm. The external counting efficiency remained relatively constant over a range of particles and microcuries, permitting a more general use of this ratio to estimate the number of particles or microcuries deposited after inhalation in a large mammalian lung, if a similarly collimated gamma camera system is used
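
    As a worked example of the conversion this calibration enables, the mean efficiencies quoted above translate external camera counts directly into a lung burden:

```python
# Convert external gamma-camera cpm to deposited particles and activity,
# using the mean efficiencies reported in the abstract.
PARTICLES_PER_CPM = 4.00e4     # particles per external cpm
MICROCURIE_PER_CPM = 5.1e-3    # microcuries per external cpm

def lung_burden(cpm):
    """Estimate deposited particle number and activity from camera cpm."""
    return cpm * PARTICLES_PER_CPM, cpm * MICROCURIE_PER_CPM

particles, uCi = lung_burden(2500.0)   # e.g., 2500 cpm over the lung field
print(f"{particles:.2e} particles, {uCi:.2f} microCi")
```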

  6. Mapping human health risks from exposure to trace metal contamination of drinking water sources in Pakistan

    International Nuclear Information System (INIS)

    Bhowmik, Avit Kumar; Alamdar, Ambreen; Katsoyiannis, Ioannis; Shen, Heqing; Ali, Nadeem; Ali, Syeda Maria; Bokhari, Habib; Schäfer, Ralf B.; Eqani, Syed Ali Musstjab Akber Shah

    2015-01-01

    The consumption of contaminated drinking water is one of the major causes of mortality and many severe diseases in developing countries. The principal drinking water sources in Pakistan, i.e. ground and surface water, are subject to geogenic and anthropogenic trace metal contamination. However, water quality monitoring activities have been limited to a few administrative areas and a nationwide human health risk assessment from trace metal exposure is lacking. Using geographically weighted regression (GWR) and eight relevant spatial predictors, we calculated nationwide human health risk maps by predicting the concentration of 10 trace metals in the drinking water sources of Pakistan and comparing them to guideline values. GWR incorporated local variations of trace metal concentrations into prediction models and hence mitigated effects of large distances between sampled districts due to data scarcity. Predicted concentrations mostly exhibited high accuracy and low uncertainty, and were in good agreement with observed concentrations. Concentrations for Central Pakistan were predicted with higher accuracy than for the North and South. A maximum 150–200 fold exceedance of guideline values was observed for predicted cadmium concentrations in ground water and arsenic concentrations in surface water. In more than 53% (4 and 100% for the lower and upper boundaries of 95% confidence interval (CI)) of the total area of Pakistan, the drinking water was predicted to be at risk of contamination from arsenic, chromium, iron, nickel and lead. The area with elevated risks is inhabited by more than 74 million (8 and 172 million for the lower and upper boundaries of 95% CI) people. Although these predictions require further validation by field monitoring, the results can inform disease mitigation and water resources management regarding potential hot spots. - Highlights: • Predictions of trace metal concentration use geographically weighted regression • Human health risk
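
    The GWR idea used here (a separate weighted least-squares fit at each prediction location, with weights decaying with distance) can be sketched in a self-contained way. This is a toy implementation with invented data; real studies typically use dedicated GWR software with data-driven bandwidth selection.

```python
# Minimal geographically weighted regression (Gaussian kernel weights).
import numpy as np

def gwr_predict(coords, X, y, target_coords, target_X, bandwidth):
    """Local weighted least squares at each target location."""
    X1 = np.column_stack([np.ones(len(X)), X])
    T1 = np.column_stack([np.ones(len(target_X)), target_X])
    preds = np.empty(len(target_coords))
    for i, (tc, tx) in enumerate(zip(target_coords, T1)):
        d = np.linalg.norm(coords - tc, axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)    # distance-decay weights
        XtW = X1.T * w                             # weight each sample row
        beta = np.linalg.solve(XtW @ X1, XtW @ y)  # local WLS coefficients
        preds[i] = tx @ beta
    return preds

# Invented toy data: 30 sampled districts with 2 spatial predictors.
rng = np.random.default_rng(4)
coords = rng.uniform(0, 100, (30, 2))
X = rng.normal(size=(30, 2))
y = 1.0 + X @ np.array([0.5, -0.3]) + 0.01 * coords[:, 0] + rng.normal(0, 0.1, 30)
print(gwr_predict(coords, X, y, coords[:5], X[:5], bandwidth=25.0))
```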

  10. An Embodiment Perspective on Number-Space Mapping in 3.5-year-old Dutch Children

    Science.gov (United States)

    Noordende, Jaccoline E.; Volman, M(Chiel). J. M.; Leseman, Paul P. M.; Kroesbergen, Evelyn H.

    2017-01-01

    Previous research suggests that block adding, subtracting and counting direction are early forms of number-space mapping. In this study, an embodiment perspective on these skills was taken. Embodiment theory assumes that cognition emerges through sensory-motor interaction with the environment. In line with this assumption, it was investigated if…

  11. Neutron resonance transmission spectroscopy with high spatial and energy resolution at the J-PARC pulsed neutron source

    Energy Technology Data Exchange (ETDEWEB)

    Tremsin, A.S., E-mail: ast@ssl.berkeley.edu [University of California at Berkeley, 7 Gauss Way, Berkeley, CA 94720 (United States); Shinohara, T.; Kai, T.; Ooi, M. [Japan Atomic Energy Agency, 2–4 Shirakata-shirane, Tokai-mura, Naka-gun, Ibaraki 319-1195 (Japan); Kamiyama, T.; Kiyanagi, Y.; Shiota, Y. [Hokkaido University, Kita 13 Nishi 8 Kita-ku, Sapporo-shi, Hokkaido 060-8628 (Japan); McPhate, J.B.; Vallerga, J.V.; Siegmund, O.H.W. [University of California at Berkeley, 7 Gauss Way, Berkeley, CA 94720 (United States); Feller, W.B. [NOVA Scientific, Inc., 10 Picker Rd., Sturbridge, MA 01566 (United States)

    2014-05-11

    The sharp variation of neutron attenuation at certain energies specific to particular nuclides (the lower range being from ∼1 eV up to ∼1 keV) can be exploited for the remote mapping of element and/or isotope distributions, as well as temperature probing, within relatively thick samples. Intense pulsed neutron beamlines at spallation sources, combined with a high spatial, high timing resolution neutron counting detector, provide a unique opportunity to measure neutron transmission spectra through the time-of-flight technique. We present the results of experiments where spatially resolved neutron resonances were measured at energies up to 50 keV. These experiments were performed with the intense-flux, low-background NOBORU neutron beamline at the J-PARC neutron source and a high timing resolution (∼20 ns at epithermal neutron energies), high spatial resolution (∼55 µm) neutron counting detector using microchannel plates coupled to a Timepix electronic readout. Simultaneous element-specific imaging was carried out for several materials at a spatial resolution of ∼150 µm. The high timing resolution of our detector, combined with the low-background beamline, also enabled characterization of the neutron pulse itself – specifically its pulse width, which varies with neutron energy. The results of our measurements are in good agreement with the predicted results for the double pulse structure of the J-PARC facility, which provides two 100 ns-wide proton pulses separated by 600 ns, broadened by the neutron energy moderation process. Thermal neutron radiography can be conducted simultaneously with resonance transmission spectroscopy, and can reveal the internal structure of the samples. The transmission spectra measured in our experiments demonstrate the feasibility of mapping elemental distributions using this non-destructive technique, for those elements (and in certain cases, specific isotopes) which have resonance energies below a few keV, and with lower

  12. Algorithm for counting large directed loops

    Energy Technology Data Exchange (ETDEWEB)

    Bianconi, Ginestra [Abdus Salam International Center for Theoretical Physics, Strada Costiera 11, 34014 Trieste (Italy); Gulbahce, Natali [Theoretical Division and Center for Nonlinear Studies, Los Alamos National Laboratory, NM 87545 (United States)

    2008-06-06

    We derive a Belief-Propagation algorithm for counting large loops in a directed network. We evaluate the distribution of the number of small loops in a directed random network with given degree sequence. We apply the algorithm to a few characteristic directed networks of various network sizes and loop structures and compare the algorithm with exhaustive counting results when possible. The algorithm is adequate in estimating loop counts for large directed networks and can be used to compare the loop structure of directed networks and their randomized counterparts.
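
    The exhaustive baseline such estimates are compared against can be reproduced for small graphs, e.g. with networkx. This sketch enumerates directed simple cycles and tallies them by length; enumeration is exponential in the worst case, which is exactly why a Belief-Propagation estimate is attractive for large networks.

```python
# Exhaustive directed-loop counting on a small random network.
from collections import Counter
import networkx as nx

G = nx.gnp_random_graph(12, 0.25, seed=5, directed=True)
loop_counts = Counter(len(c) for c in nx.simple_cycles(G))
for length in sorted(loop_counts):
    print(f"loops of length {length}: {loop_counts[length]}")
```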

  13. OSO-7 observations of high galactic latitude x-ray sources

    International Nuclear Information System (INIS)

    Markert, T.H.; Canizares, C.R.; Clark, G.W.; Li, F.K.; Northridge, P.L.; Sprott, G.F.; Wargo, G.F.

    1976-01-01

    Six hundred days of observations by the MIT X-ray detectors aboard OSO-7 have been analyzed. All-sky maps of X-ray intensity have been constructed from these data; a sample map is displayed. Seven sources with galactic latitude |b_II| > 10 degrees, discovered during the mapping process, are reported, and upper limits are set on other high-latitude sources. The OSO-7 results are compared with those of Uhuru, and an implication of this comparison, that many of the high-latitude sources may be variable, is discussed

  14. Logistic regression for dichotomized counts.

    Science.gov (United States)

    Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

    2016-12-01

    Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.
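
    The dichotomization setting is easy to sketch: counts with many zeros are reduced to a positive/zero indicator and fitted with ordinary logistic regression. The shared-parameter hurdle model studied in the paper would additionally model the positive counts; the data below are simulated.

```python
# Ordinary logistic regression on a dichotomized count outcome.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 1000
x = rng.normal(size=n)
counts = rng.poisson(np.exp(-0.5 + 0.7 * x))   # many zeros at this mean
y = (counts > 0).astype(int)                   # positive vs zero indicator

fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
print(fit.params)   # log-odds effect of the covariate on any-count occurrence
```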

  15. Housing Inventory Count

    Data.gov (United States)

    Department of Housing and Urban Development — This report displays the data communities reported to HUD about the nature of their dedicated homeless inventory, referred to as their Housing Inventory Count (HIC)....

  16. Sources of the Radio Background Considered

    Energy Technology Data Exchange (ETDEWEB)

    Singal, J.; /KIPAC, Menlo Park /Stanford U.; Stawarz, L.; /KIPAC, Menlo Park /Stanford U. /Jagiellonian U., Astron. Observ.; Lawrence, A.; /Edinburgh U., Inst. Astron. /KIPAC, Menlo Park /Stanford U.; Petrosian, V.; /KIPAC, Menlo Park /Stanford U., Phys. Dept. /Stanford U., Appl. Phys. Dept.

    2011-08-22

    We investigate possible origins of the extragalactic radio background reported by the ARCADE 2 collaboration. The surface brightness of the background is several times higher than that which would result from currently observed radio sources. We consider contributions to the background from diffuse synchrotron emission from clusters and the intergalactic medium, previously unrecognized flux from low surface brightness regions of radio sources, and faint point sources below the flux limit of existing surveys. By examining radio source counts available in the literature, we conclude that most of the radio background is produced by radio point sources that dominate at sub-µJy fluxes. We show that a truly diffuse background produced by electrons far from galaxies is ruled out because such energetic electrons would overproduce the observed X-ray/γ-ray background through inverse Compton scattering of the other photon fields. Unrecognized flux from low surface brightness regions of extended radio sources, or moderate flux sources missed entirely by radio source count surveys, cannot explain the bulk of the observed background, but may contribute as much as 10%. We consider both radio supernovae and radio quiet quasars as candidate sources for the background, and show that both fail to produce it at the observed level because of an insufficient number of objects and total flux, although radio quiet quasars contribute at the level of at least a few percent. We conclude that the most important population for production of the background is likely ordinary star-forming galaxies above redshift 1, characterized by an evolving radio/far-infrared correlation which increases toward the radio loud with redshift.

  17. Assessing double counting of carbon emissions between forest land cover change and forest wildfires: a case study in the United States, 1992-2006

    Science.gov (United States)

    Daolan Zheng; Linda S. Heath; Mark J. Ducey; Brad Quayle

    2013-01-01

    The relative contribution of double counting of carbon emissions between forest-to-nonforest cover change (FNCC) and forest wildfires is an unknown in estimating net forest carbon exchanges at large scales. This study employed land-cover change maps and forest fire data in four representative states (Arkansas, California, Minnesota, and Washington) of the US for...

  18. Effect of recirculation and regional counting rate on reliability of noninvasive bicompartmental CBF measurements

    International Nuclear Information System (INIS)

    Herholz, K.

    1985-01-01

    Based on data from routine intravenous Xe-133 rCBF studies in 50 patients using Obrist's algorithm, the effect of counting rate statistics and of the amount of recirculating activity on the reproducibility of results was investigated at five simulated counting rate levels. The dependence of the standard deviation of compartmental and noncompartmental flow parameters on recirculation and counting rate was determined by multiple linear regression analysis. These regression equations permit determination of the optimum accuracy that may be expected from individual flow measurements. Mainly due to a delay of the start-of-fit time, an exponential increase in the standard deviation of flow measurements was observed as recirculation increased. At a constant start-of-fit time, however, a linear increase was found in the standard deviation of compartmental flow parameters only, while noncompartmental results remained constant. Therefore, and with regard to other studies of potential sources of error, an upper limit of 2.5 min for the start-of-fit time and the use of noncompartmental flow parameters for measurements affected by high recirculation are suggested

  19. Enhanced Resolution Maps of Energetic Neutral Atoms from IBEX

    Science.gov (United States)

    Teodoro, L. A.; Elphic, R. C.; Janzen, P.; Reisenfeld, D.; Wilson, J. T.

    2017-12-01

    The discovery by the Interstellar Boundary Explorer (IBEX) of a "Ribbon" in the measurements of energetic neutral atoms (ENAs) was a major surprise that led to a re-thinking of the physics underpinning the dynamics of the boundary between the heliosphere and the interstellar medium. Several physical models have been proposed and tested for their ability to mimic the IBEX observations. The IBEX ENA maps include the following features: 1) the presence of fine structure within the ribbon suggests that its physical properties exhibit small-scale spatial structure and possibly rapid small-scale variations; 2) the ribbon is a fairly narrow feature at low energies and broadens with increasing energy. The IBEX detectors were designed to maximize count rate by incorporating wide angular and broad energy acceptance. Thus far, the existing mapping software used by the IBEX Science Operation Center has not been designed with the "Ribbon" (~20° wide) in mind: the current generation of maps are binned in 6° longitude by 6° latitude pixels (so the pixels are all of the same angular size and are quite "blocky"). Furthermore, the instrumental point spread function has not been deconvolved, making any potentially narrow features broader than they are. An improvement in the spatial resolution of the IBEX maps would foster a better understanding of the Ribbon and its substructure, and thus answer some of the basic and profound questions related to its origin, the nature of the outer boundaries of our solar system and the surrounding interstellar Galactic medium. Here we report on the application of the Bayesian image reconstruction algorithm "Speedy Pixons" to the ENA data with the aim of sharpening the ENA IBEX maps. A preliminary application allows us to conclude that: the peaks in the count rate appear more enhanced in the reconstruction; the reconstruction is clearly denoised; and the "Ribbon" is better defined in the reconstruction. We are currently studying the implications of

  20. Multiple-Event, Single-Photon Counting Imaging Sensor

    Science.gov (United States)

    Zheng, Xinyu; Cunningham, Thomas J.; Sun, Chao; Wang, Kang L.

    2011-01-01

    The single-photon counting imaging sensor is typically an array of silicon Geiger-mode avalanche photodiodes that are monolithically integrated with CMOS (complementary metal oxide semiconductor) readout, signal processing, and addressing circuits located in each pixel and the peripheral area of the chip. The major problem is its single-event method of photon count registration. A single-event single-photon counting imaging array only allows registration of up to one photon count in each of its pixels during a frame time, i.e., the interval between two successive pixel reset operations. Since the frame time cannot be made arbitrarily short, this leads to very low dynamic range and makes the sensor useful only in very low flux environments. The second problem of the prior technique is a limited fill factor, resulting from consumption of chip area by the monolithically integrated CMOS readout in the pixels. The resulting low photon collection efficiency substantially undermines the benefit gained from the very sensitive single-photon counting detection. The single-photon counting imaging sensor developed in this work has a novel multiple-event architecture, which allows each of its pixels to register more than one million photon-counting events during a frame time. Because of the consequently boosted dynamic range, the imaging array of the invention is capable of performing single-photon counting from ultra-low-light through high-flux environments. On the other hand, since the multiple-event architecture is implemented in a hybrid structure, back-illumination and a close-to-unity fill factor can be realized, and maximized quantum efficiency can also be achieved in the detector array.

  1. Study of principle error sources in gamma spectrometry. Application to cross sections measurement

    International Nuclear Information System (INIS)

    Majah, M. Ibn.

    1985-01-01

    The principal error sources in gamma spectrometry have been studied with the aim of measuring cross sections with high precision. Three error sources have been studied: dead time and pile-up, which depend on the counting rate, and the coincidence effect, which depends on the disintegration scheme of the radionuclide in question. A constant-frequency pulse generator has been used to correct the counting losses due to dead time and pile-up, for both long and short disintegration periods. The loss due to the coincidence effect can reach 25% or more, depending on the disintegration scheme and on the source-detector distance. After establishing the correction formula and verifying its validity for four examples (iron-56, scandium-48, antimony-120 and gold-196m), an application has been made by measuring cross sections of nuclear reactions that lead to long disintegration periods, which require counting at short source-detector distances and thus correcting the losses due to dead time, pile-up and the coincidence effect. 16 refs., 45 figs., 25 tabs. (author)
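
    For the rate-dependent losses discussed above, the standard non-paralyzable dead-time formula n = m / (1 - m·τ) recovers the true rate n from the measured rate m; the pulse-generator technique in the text instead measures the combined dead-time and pile-up loss directly. A small worked example:

```python
# Non-paralyzable dead-time correction of a measured count rate.
def dead_time_correct(measured_cps, tau_s):
    """Return the true rate n = m / (1 - m*tau) for measured rate m."""
    loss = measured_cps * tau_s
    if loss >= 1.0:
        raise ValueError("measured rate inconsistent with this dead time")
    return measured_cps / (1.0 - loss)

# 50 kcps measured with a 2 microsecond dead time -> ~55.6 kcps true rate.
print(dead_time_correct(50_000.0, 2e-6))
```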

  2. Lazy reference counting for the Microgrid

    NARCIS (Netherlands)

    Poss, R.; Grelck, C.; Herhut, S.; Scholz, S.-B.

    2012-01-01

    This paper revisits non-deferred reference counting, a common technique for ensuring that potentially shared large heap objects can be reused safely when they are both input and output to computations. Traditionally, thread-safe reference counting exploits implicit memory-based communication of counter
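
    Since the abstract is truncated, the following is only a generic sketch of what non-deferred reference counting looks like: the count is updated immediately on every acquire/release, and the object is reclaimed as soon as the count reaches zero (the lock stands in for whatever atomic or hardware mechanism a real implementation, such as the paper's Microgrid scheme, would use):

      import threading

      class RefCounted:
          # Non-deferred reference counting: the object is reclaimed
          # immediately when its count drops to zero (generic sketch only).

          def __init__(self, payload):
              self.payload = payload
              self._count = 1                   # creator holds one reference
              self._lock = threading.Lock()     # stands in for atomic updates

          def incref(self):
              with self._lock:
                  self._count += 1

          def decref(self):
              with self._lock:
                  self._count -= 1
                  if self._count == 0:
                      self.payload = None       # "free" the large heap object

      obj = RefCounted(bytearray(1 << 20))      # a potentially shared heap object
      obj.incref()                              # second thread acquires it
      obj.decref()                              # ...and releases it
      obj.decref()                              # creator releases: payload freed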

  3. Gully Erosion Mapping and Monitoring at Multiple Scales Based on Multi-Source Remote Sensing Data of the Sancha River Catchment, Northeast China

    Directory of Open Access Journals (Sweden)

    Ranghu Wang

    2016-11-01

    Full Text Available This research is focused on gully erosion mapping and monitoring at multiple spatial scales using multi-source remote sensing data of the Sancha River catchment in Northeast China, where gullies extend over a vast area. A high resolution satellite image (Pleiades 1A, 0.7 m) was used to obtain the spatial distribution of the gullies of the overall basin. Image visual interpretation with field verification was employed to map the geometric gully features and evaluate gully erosion as well as the topographic differentiation characteristics. Unmanned Aerial Vehicle (UAV) remote sensing data and the 3D photo-reconstruction method were employed for detailed gully mapping at a site scale. The results showed that: (1) the sub-meter image showed a strong ability in the recognition of various gully types and obtained satisfactory results, and the topographic factors of elevation, slope and slope aspects exerted significant influence on the gully spatial distribution at the catchment scale; and (2) at a more detailed site scale, UAV imagery combined with 3D photo-reconstruction provided a Digital Surface Model (DSM) and ortho-image at the centimeter level as well as a detailed 3D model. The resulting products revealed the area of agricultural utilization and its shaping by human agricultural activities and water erosion in detail, and also provided the gully volume. The present study indicates that using multi-source remote sensing data, including satellite and UAV imagery simultaneously, results in an effective assessment of gully erosion over multiple spatial scales. The combined approach should be continued to regularly monitor gully erosion to understand the erosion process and its relationship with the environment from a comprehensive perspective.

  4. Clean Hands Count

    Medline Plus


  5. Analyzing Variability in Landscape Nutrient Loading Using Spatially-Explicit Maps in the Great Lakes Basin

    Science.gov (United States)

    Hamlin, Q. F.; Kendall, A. D.; Martin, S. L.; Whitenack, H. D.; Roush, J. A.; Hannah, B. A.; Hyndman, D. W.

    2017-12-01

    Excessive loading of nitrogen and phosphorous to the landscape has caused biologically and economically damaging eutrophication and harmful algal blooms in the Great Lakes Basin (GLB) and across the world. We mapped source-specific loads of nitrogen and phosphorous to the landscape using broadly available data across the GLB. SENSMap (Spatially Explicit Nutrient Source Map) is a 30m resolution snapshot of nutrient loads ca. 2010. We use these maps to study variable nutrient loading and provide this information to watershed managers through NOAA's GLB Tipping Points Planner. SENSMap individually maps nutrient point sources and six non-point sources: 1) atmospheric deposition, 2) septic tanks, 3) non-agricultural chemical fertilizer, 4) agricultural chemical fertilizer, 5) manure, and 6) nitrogen fixation from legumes. To model source-specific loads at high resolution, SENSMap synthesizes a wide range of remotely sensed, surveyed, and tabular data. Using these spatially explicit nutrient loading maps, we can better calibrate local land use-based water quality models and provide insight to watershed managers on how to focus nutrient reduction strategies. Here we examine differences in dominant nutrient sources across the GLB, and how those sources vary by land use. SENSMap's high resolution, source-specific approach offers a different lens to understand nutrient loading than traditional semi-distributed or land use based models.

  6. A radiation protection initiative to map old radium sources

    International Nuclear Information System (INIS)

    Risica, S.; Grisanti, G.; Masi, R.; Melfi, A.

    2008-01-01

    Due to a legacy of past events, the Technology and Health Department of the Istituto Superiore di Sanità (ISS) has preserved an old, large archive of the allocation of radium sources in public hospitals. These sources were purchased first by the Ministry of the Interior and later by the Ministry of Health, and provided to hospitals for cancer brachytherapy. After a retrieval initiative - organised in the 1980s, but discontinued some years later owing to the saturation of the temporary storage site - a considerable number of these sources remained in the hospitals. As a result of an incomplete transfer of the retrieval data, some events connected with the Second World War, and the decision of some hospitals to dispose directly of their sources without informing the ISS, the archive was incomplete, and a series of initiatives was undertaken by the ISS to update it. On the other hand, following the concerns that arose after September 11th, 2001 about the possible criminal use of radioactive sources, the Carabinieri Environmental Care Command (CCTA) was required by the Minister of the Environment to carry out a thorough investigation into all possible nuclear sources and waste in the country. Special attention was devoted to radium sources because of the high risk their loss or theft entails. For this reason, in 2004, the CCTA made an agreement with the ISS to acquire a final, updated picture of the distribution of these radium sources. In March 2007 a comprehensive report on this collaborative action and its conclusions was officially sent to both the Ministry of Health and the Ministry of the Environment. The paper describes the involvement of these two bodies in the issue, their collaborative action and the most relevant results. (author)

  7. Cigarette smoke chemistry market maps under Massachusetts Department of Public Health smoking conditions.

    Science.gov (United States)

    Morton, Michael J; Laffoon, Susan W

    2008-06-01

    This study extends the market mapping concept introduced by Counts et al. (Counts, M.E., Hsu, F.S., Tewes, F.J., 2006. Development of a commercial cigarette "market map" comparison methodology for evaluating new or non-conventional cigarettes. Regul. Toxicol. Pharmacol. 46, 225-242) to include both temporal cigarette and testing variation and also machine smoking with more intense puffing parameters, as defined by the Massachusetts Department of Public Health (MDPH). The study was conducted over a two year period and involved a total of 23 different commercial cigarette brands from the U.S. marketplace. Market mapping prediction intervals were developed for 40 mainstream cigarette smoke constituents and the potential utility of the market map as a comparison tool for new brands was demonstrated. The over-time character of the data allowed for the variance structure of the smoke constituents to be more completely characterized than is possible with one-time sample data. The variance was partitioned among brand-to-brand differences, temporal differences, and the remaining residual variation using a mixed random and fixed effects model. It was shown that a conventional weighted least squares model typically gave similar prediction intervals to those of the more complicated mixed model. For most constituents there was less difference in the prediction intervals calculated from over-time samples and those calculated from one-time samples than had been anticipated. One-time sample maps may be adequate for many purposes if the user is aware of their limitations. Cigarette tobacco fillers were analyzed for nitrate, nicotine, tobacco-specific nitrosamines, ammonia, chlorogenic acid, and reducing sugars. The filler information was used to improve predicting relationships for several of the smoke constituents, and it was concluded that the effects of filler chemistry on smoke chemistry were partial explanations of the observed brand-to-brand variation.
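
    The variance partitioning described above can be sketched with a random-effects model. The following is a hedged illustration, not the authors' exact model: the file name and column names ('yield_mg', 'brand', 'session') are hypothetical, and crossed variance components for brand and sampling session are specified with statsmodels, leaving the rest as residual variation:

      import pandas as pd
      import statsmodels.api as sm

      # Hypothetical layout: one row per measurement, with a constituent yield,
      # the brand, and the sampling session (the over-time dimension).
      df = pd.read_csv("smoke_constituents.csv")

      # Crossed random effects for brand and session via variance components,
      # using a single all-encompassing group; leftover variance is residual.
      df["all"] = 1
      model = sm.MixedLM.from_formula(
          "yield_mg ~ 1", data=df, groups="all",
          vc_formula={"brand": "0 + C(brand)", "session": "0 + C(session)"})
      fit = model.fit()
      print(fit.summary())   # brand-to-brand, temporal, and residual variances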

  8. The National Map: from geography to mapping and back again

    Science.gov (United States)

    Kelmelis, John A.; DeMulder, Mark L.; Ogrosky, Charles E.; Van Driel, J. Nicholas; Ryan, Barbara J.

    2003-01-01

    When the means of production for national base mapping were capital intensive, required large production facilities, and had ill-defined markets, Federal Government mapping agencies were the primary providers of the spatial data needed for economic development, environmental management, and national defense. With desktop geographic information systems now ubiquitous, source data available as a commodity from private industry, and the realization that many complex problems faced by society need far more and different kinds of spatial data for their solutions, national mapping organizations must realign their business strategies to meet growing demand and anticipate the needs of a rapidly changing geographic information environment. The National Map of the United States builds on a sound historic foundation of describing and monitoring the land surface and adds a focused effort to produce improved understanding, modeling, and prediction of land-surface change. These added dimensions bring to bear a broader spectrum of geographic science to address extant and emerging issues. Within the overarching construct of The National Map, the U.S. Geological Survey (USGS) is making a transition from data collector to guarantor of national data completeness; from producing paper maps to supporting an online, seamless, integrated database; and from simply describing the Nation’s landscape to linking these descriptions with increased scientific understanding. Implementing the full spectrum of geographic science addresses a myriad of public policy issues, including land and natural resource management, recreation, urban growth, human health, and emergency planning, response, and recovery. Neither these issues nor the science and technologies needed to deal with them are static. A robust research agenda is needed to understand these changes and realize The National Map vision. Initial successes have been achieved. These accomplishments demonstrate the utility of

  9. An intercomparison between gross α counting and gross β counting for grab-sampling determination of airborne radon progeny and thoron progeny

    International Nuclear Information System (INIS)

    Papp, Z.

    2006-01-01

    The instantaneous values of the airborne activity concentrations of radon progeny and thoron progeny were determined 34 times in a closed and windowless cellar room using two independent grab-sampling methods, in order to compare the performance of the methods. The activity concentration of radon (²²²Rn) was also measured and varied between 200 and 650 Bq m⁻³. Two samples of radon and thoron progeny were collected simultaneously from roughly the same air volume by filtering. For the first method, the isotopes were collected on a membrane filter and gross α counting was applied over several successive time intervals. This method was a slightly improved version of the methods that have been applied generally for this purpose for decades. For the second method, the isotopes were collected on a glass-fibre filter and gross β counts were registered over several time intervals. This method was developed a few years ago, and the above series of measurements was the first opportunity to make an intercomparison between it and a similar method based on α counting. Individual radon progeny and thoron progeny activity concentrations (for the isotopes ²¹⁸Po, ²¹⁴Pb, ²¹⁴Bi and ²¹²Pb) were evaluated by both methods. The detailed investigation of the results showed that the systematic deviation between the methods is small but significant and isotope-dependent. The weighted averages of the β/α activity concentration ratios for ²¹⁸Po, ²¹⁴Pb, ²¹⁴Bi, EEDC(222) (Equilibrium-Equivalent Decay-product Concentration of radon progeny) and ²¹²Pb were 0.99±0.03, 0.90±0.02, 1.03±0.02, 0.96±0.02 and 0.80±0.03, respectively. The source of the systematic deviation is probably the inaccurate knowledge of the counting efficiencies, mainly in the case of the α-counting method. A significant random-type difference between the results obtained with the two methods was also revealed. For example, the β/α ratio for EEDC(222) varied between 0.81±0.01 and 1.22±0

  10. Determining Gate Count Reliability in a Library Setting

    OpenAIRE

    Jeffrey Phillips

    2016-01-01

    Objective – Patron counts are a common form of measurement for library assessment. To develop accurate library statistics, it is necessary to determine any differences between various counting devices. A yearlong comparison between card reader turnstiles and laser gate counters in a university library sought to offer a standard percentage of variance and provide suggestions to increase the precision of counts. Methods – The collection of library exit counts identified the differences be...

  11. Deep far infrared ISOPHOT survey in "Selected Area 57" - I. Observations and source counts

    DEFF Research Database (Denmark)

    Linden-Vornle, M.J.D.; Nørgaard-Nielsen, Hans Ulrik; Jørgensen, H.E.

    2000-01-01

    We present here the results of a deep survey of a 0.4 deg² blank field in Selected Area 57 conducted with the ISOPHOT instrument aboard ESA's Infrared Space Observatory (ISO) at both 60 mu m and 90 mu m. The resulting sky maps have a spatial resolution of 15 x 23 arcsec² per pixel which is much...

  12. Counts-in-Cylinders in the Sloan Digital Sky Survey with Comparisons to N-Body

    Energy Technology Data Exchange (ETDEWEB)

    Berrier, Heather D.; Barton, Elizabeth J.; /UC, Irvine; Berrier, Joel C.; /Arkansas U.; Bullock, James S.; /UC, Irvine; Zentner, Andrew R.; /Pittsburgh U.; Wechsler, Risa H. /KIPAC, Menlo Park /SLAC

    2010-12-16

    Environmental statistics provide a necessary means of comparing the properties of galaxies in different environments and a vital test of models of galaxy formation within the prevailing, hierarchical cosmological model. We explore counts-in-cylinders, a common statistic defined as the number of companions of a particular galaxy found within a given projected radius and redshift interval. Galaxy distributions with the same two-point correlation functions do not necessarily have the same companion count distributions. We use this statistic to examine the environments of galaxies in the Sloan Digital Sky Survey, Data Release 4. We also make preliminary comparisons to four models for the spatial distributions of galaxies, based on N-body simulations, and data from SDSS DR4 to study the utility of the counts-in-cylinders statistic. There is a very large scatter between the number of companions a galaxy has and the mass of its parent dark matter halo and the halo occupation, limiting the utility of this statistic for certain kinds of environmental studies. We also show that prevalent, empirical models of galaxy clustering that match observed two- and three-point clustering statistics well fail to reproduce some aspects of the observed distribution of counts-in-cylinders on 1, 3 and 6 h⁻¹ Mpc scales. All models that we explore underpredict the fraction of galaxies with few or no companions in 3 and 6 h⁻¹ Mpc cylinders. Roughly 7% of galaxies in the real universe are significantly more isolated within a 6 h⁻¹ Mpc cylinder than the galaxies in any of the models we use. Simple, phenomenological models that map galaxies to dark matter halos fail to reproduce high-order clustering statistics in low-density environments.
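
    The counts-in-cylinders statistic itself is straightforward to compute. Below is a simplified flat-sky sketch (not the paper's exact estimator) using a KD-tree over pre-computed transverse comoving coordinates, with the line-of-sight cut applied as a velocity window:

      import numpy as np
      from scipy.spatial import cKDTree

      def counts_in_cylinders(xy, v, r_proj=1.0, dv=1000.0):
          # Companions per galaxy within projected radius r_proj (h^-1 Mpc,
          # transverse comoving coordinates `xy`) and velocity window +/- dv (km/s).
          tree = cKDTree(xy)
          counts = np.zeros(len(xy), dtype=int)
          for i, nbrs in enumerate(tree.query_ball_point(xy, r_proj)):
              counts[i] = sum(1 for j in nbrs
                              if j != i and abs(v[j] - v[i]) <= dv)
          return counts

      # Toy usage on a random field.
      rng = np.random.default_rng(1)
      xy = rng.uniform(0, 100, size=(5000, 2))   # h^-1 Mpc, flat-sky sketch
      v = rng.uniform(0, 30000, size=5000)       # km/s
      n_companions = counts_in_cylinders(xy, v, r_proj=3.0, dv=1000.0)
      print((n_companions == 0).mean())          # fraction of isolated galaxies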

  13. Evaluation of the charge-sharing effects on spot intensity in XRD setup using photon-counting pixel detectors

    International Nuclear Information System (INIS)

    Nilsson, H.-E.; Mattsson, C.G.; Norlin, B.; Froejdh, C.; Bethke, K.; Vries, R. de

    2006-01-01

    In this study, we examine how charge loss due to charge sharing in photon-counting pixel detectors affects the recording of spot intensity in an X-ray diffraction (XRD) setup. In the photon-counting configuration, the charge from photons that are absorbed at the border of a pixel will be shared between two pixels. If the threshold is high enough, these photons will not be counted, whereas if it is low enough, they will be counted twice. In an XRD setup, the intensity and position of various spots should be recorded; thus, the intensity measure will be affected by the setting of the threshold. In this study, we used a system-level Monte Carlo simulator to evaluate the variations in the intensity signals for different threshold settings and spot sizes. The simulated setup included an 8 keV monochromatic source (providing a Gaussian-shaped spot) and the MEDIPIX2 photon-counting pixel detector (55 μm x 55 μm pixel size with 300 μm silicon) at various detector biases. Our study shows that the charge-sharing distortion can be compensated by numerical post-processing and that high resolution in both charge distribution and position can be achieved.
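
    The threshold dependence described above can be reproduced with a toy one-dimensional Monte Carlo: photons land uniformly across a pixel, a Gaussian charge cloud loses part of its charge over the pixel border, and each deposit is compared against the discriminator threshold. The cloud width is an assumed value; this is an illustration, not the authors' system-level simulator:

      import numpy as np
      from scipy.special import erf

      rng = np.random.default_rng(2)
      pitch, sigma = 55.0, 8.0      # um pixel pitch; um charge-cloud width (assumed)
      e_photon = 8.0                # keV, matching the monochromatic source
      n_photons = 100_000

      # Photons land uniformly across a pixel; the Gaussian charge cloud
      # loses part of its charge over the pixel borders (1-D toy model).
      x = rng.uniform(0.0, pitch, n_photons)
      frac_kept = 0.5 * (erf(x / (sigma * np.sqrt(2.0)))
                         + erf((pitch - x) / (sigma * np.sqrt(2.0))))
      e_hit = e_photon * frac_kept          # energy seen by the hit pixel
      e_shared = e_photon - e_hit           # energy collected by the neighbour

      for thr in (2.0, 4.0, 6.0):           # discriminator threshold in keV
          counts = (e_hit > thr).sum() + (e_shared > thr).sum()
          print(f"threshold {thr:.0f} keV -> {counts / n_photons:.2f} counts per photon")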

  14. Counting Word Frequencies with Python

    Directory of Open Access Journals (Sweden)

    William J. Turkel

    2012-07-01

    Full Text Available Your list is now clean enough that you can begin analyzing its contents in meaningful ways. Counting the frequency of specific words in the list can provide illustrative data. Python has an easy way to count frequencies, but it requires the use of a new type of variable: the dictionary. Before you begin working with a dictionary, consider the processes used to calculate frequencies in a list.
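
    A minimal sketch of the dictionary-based frequency count the lesson describes:

      # Count how often each word occurs in a cleaned word list.
      wordlist = ['it', 'was', 'the', 'best', 'of', 'times', 'it', 'was',
                  'the', 'worst', 'of', 'times']

      frequencies = {}
      for word in wordlist:
          frequencies[word] = frequencies.get(word, 0) + 1

      # Sort word-frequency pairs from most to least common before printing.
      for word, count in sorted(frequencies.items(), key=lambda p: p[1], reverse=True):
          print(word, count)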

  15. ASSESSING STUDENTS’ COMPETENCE IN DEVELOPING CHOROPLETH MAPS COMBINED WITH DIAGRAM MAPS

    Directory of Open Access Journals (Sweden)

    GABRIELA OSACI-COSTACHE

    2015-01-01

    Full Text Available Choropleth maps combined with diagram maps are frequently used in geography. For this reason, based on the maps developed by students, the study aims at the following: identifying and analyzing the errors made by the students; establishing and analyzing the competence level of the students; identifying the causes that led to these errors; and finding the best solutions to improve both the educational process aiming at the formation of this kind of competences and the students' results. The map assessment was accomplished during two academic years (2013-2014 and 2014-2015), in the aftermath of the activities meant to train the competence. We assessed 105 maps prepared by the students in Cartography (Faculty of Geography, University of Bucharest) based on an analytical evaluation grid, with dichotomous scale, comprising 15 criteria. This tool helped us identify the errors made by the students, as well as their competence level. By applying a questionnaire, we identified the source of the errors from the students' perspective, while by comparing the errors and the competence levels at the end of the two academic years we were able to come up with potential solutions for the improvement of the teaching and learning process.

  16. Computerized radioautographic grain counting

    International Nuclear Information System (INIS)

    McKanna, J.A.; Casagrande, V.A.

    1985-01-01

    In recent years, radiolabeling techniques have become fundamental assays in physiology and biochemistry experiments. They have also assumed increasingly important roles in morphologic studies. Characteristically, radioautographic analysis of structure has been qualitative rather than quantitative; however, microcomputers have opened the door to several methods for quantifying grain counts and density. The overall goal of this chapter is to describe grain counting using the Bioquant, an image analysis package based originally on the Apple II+ and now available for several popular microcomputers. The authors discuss their image analysis procedures by applying them to a study of development in the central nervous system.

  17. Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time

    Science.gov (United States)

    Dhar, Amrit

    2017-01-01

    Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences. PMID:28177780
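
    For intuition about the simulation-free approach, the prior mean of a labeled substitution count has a closed form under a stationary CTMC: sum pi_i * Q[i, j] over the labeled transitions and multiply by the total branch length, which is linear in the number of branches. A toy two-state sketch follows; the paper's actual contribution - higher-order moments and posterior quantities - requires the dynamic program instead:

      import numpy as np

      def prior_mean_substitutions(Q, pi, branch_lengths, labeled):
          # Prior mean of a labeled substitution count under a stationary CTMC:
          # E[N] = (total branch length) * sum_{(i,j) in labeled} pi_i * Q[i, j].
          rate = sum(pi[i] * Q[i, j] for i, j in labeled)
          return sum(branch_lengths) * rate

      # Toy 2-state example (states 0 and 1), counting 0 -> 1 events.
      Q = np.array([[-1.0,  1.0],
                    [ 2.0, -2.0]])
      pi = np.array([2 / 3, 1 / 3])             # stationary: pi @ Q = 0
      print(prior_mean_substitutions(Q, pi, [0.1, 0.2, 0.3], labeled=[(0, 1)]))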

  18. Visualizing the Logistic Map with a Microcontroller

    Science.gov (United States)

    Serna, Juan D.; Joshi, Amitabh

    2012-01-01

    The logistic map is one of the simplest nonlinear dynamical systems that clearly exhibits the route to chaos. In this paper, we explore the evolution of the logistic map using an open-source microcontroller connected to an array of light-emitting diodes (LEDs). We divide the one-dimensional domain interval [0,1] into ten equal parts, and associate…
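
    The underlying iteration is compact enough to sketch directly; the following toy version prints which of the ten "LEDs" would light at each step (r = 3.9 is an arbitrary value in the chaotic regime):

      # Iterate the logistic map and light one of ten "LEDs" according to
      # which tenth of [0, 1] currently holds x.
      r, x = 3.9, 0.5

      for step in range(50):
          x = r * x * (1.0 - x)           # logistic map iteration
          led = min(int(x * 10), 9)       # bins [0.0, 0.1), ..., [0.9, 1.0]
          print(f"step {step:2d}  x={x:.3f}  LED #{led}: " + "." * led + "*")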

  19. The Chandra Source Catalog: Source Variability

    Science.gov (United States)

    Nowak, Michael; Rots, A. H.; McCollough, M. L.; Primini, F. A.; Glotfelty, K. J.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Evans, I.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    The Chandra Source Catalog (CSC) contains fields of view that have been studied with individual, uninterrupted observations that span integration times ranging from 1 ksec to 160 ksec, a large number of which have received (multiple) repeat observations days to years later. The CSC thus offers an unprecedented look at the variability of the X-ray sky over a broad range of time scales, and across a wide diversity of variable X-ray sources: stars in the local galactic neighborhood, galactic and extragalactic X-ray binaries, Active Galactic Nuclei, etc. Here we describe the methods used to identify and quantify source variability within a single observation, and the methods used to assess the variability of a source when detected in multiple, individual observations. Three tests are used to detect source variability within a single observation: the Kolmogorov-Smirnov test and its variant, the Kuiper test, and a Bayesian approach originally suggested by Gregory and Loredo. The latter test not only provides an indicator of variability, but is also used to create a best estimate of the variable lightcurve shape. We assess the performance of these tests via simulation of statistically stationary, variable processes with arbitrary input power spectral densities (here we concentrate on results of red noise simulations) at a variety of mean count rates and fractional root mean square variabilities relevant to CSC sources. We also assess the false positive rate via simulations of constant sources whose sole source of fluctuation is Poisson noise. We compare these simulations to an assessment of the variability found in real CSC sources, and estimate the variability sensitivities of the CSC.
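
    For a constant source, photon arrival times within an observation are uniformly distributed, so the Kolmogorov-Smirnov test can be applied to the normalized arrival times directly. A minimal sketch with synthetic event lists (the exposure and count numbers are made up):

      import numpy as np
      from scipy.stats import kstest

      rng = np.random.default_rng(3)
      T = 10_000.0                              # exposure, s

      # Constant source: Poisson arrivals, uniform in time.
      t_const = np.sort(rng.uniform(0, T, 500))
      # Variable source: the rate roughly doubles in the second half.
      t_var = np.sort(np.concatenate([rng.uniform(0, T / 2, 170),
                                      rng.uniform(T / 2, T, 330)]))

      for name, times in [("constant", t_const), ("variable", t_var)]:
          stat, p = kstest(times / T, "uniform")   # compare with U(0, 1)
          print(f"{name:8s}  KS stat={stat:.3f}  p={p:.3g}")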

  20. Mining the Geophysical Research Abstracts Corpus: Mapping the impact of Free and Open Source Software on the EGU Divisions

    Science.gov (United States)

    Löwe, Peter; Klump, Jens; Robertson, Jesse

    2015-04-01

    Text mining is commonly employed as a tool in data science to investigate and chart emergent information from corpora of research abstracts, such as the Geophysical Research Abstracts (GRA) published by Copernicus. In this context, current standards such as persistent identifiers (e.g. DOI and ORCID) allow us to trace, cite and map links between journal publications, the underlying research data and scientific software. This network can be expressed as a directed graph, which enables us to chart networks of cooperation and innovation, thematic foci and the locations of research communities in time and space. However, this approach of data science, focusing on the research process in a self-referential manner rather than the topical work, is still at a developing stage. Scientific work presented at the EGU General Assembly is often the first step towards new approaches and innovative ideas in the geospatial community. It represents a rich, deep and heterogeneous source of geoscientific thought. This corpus is a significant data source for data science, which has not been analysed on this scale previously. In this work, the corpus of the Geophysical Research Abstracts is used for the first time as a database for analyses of topical text mining. For this, we used a sturdy and customizable software framework based on the work of Schmitt et al. [1]. For the analysis we used the High Performance Computing infrastructure of the German Research Centre for Geosciences GFZ in Potsdam, Germany. Here, we report on the first results from the analysis of the continuing spread of the use of Free and Open Source Software (FOSS) tools within the EGU communities, mapping the general increase of FOSS-themed GRA articles in the last decade and the developing spatial patterns of involved parties and FOSS topics. References: [1] Schmitt, L.M., Christianson, K.T., Gupta, R.: Linguistic Computing with UNIX Tools, in Kao, A., Poteet, S.R. (Eds.): Natural Language Processing and Text
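
    A hedged sketch of the kind of keyword trend count described above; the regular expression, term list and record layout are all assumptions, not the framework of Schmitt et al.:

      import re
      from collections import Counter

      # Hypothetical input: (year, abstract_text) pairs extracted from the corpus.
      FOSS_PATTERN = re.compile(r"\b(open[- ]source|GRASS GIS|QGIS|GDAL|FOSS)\b",
                                re.IGNORECASE)

      def foss_counts_per_year(records):
          # Count abstracts per year that mention FOSS-related terms.
          hits = Counter(year for year, text in records if FOSS_PATTERN.search(text))
          return dict(sorted(hits.items()))

      records = [(2005, "..."), (2014, "We processed the rasters with GDAL...")]
      print(foss_counts_per_year(records))      # e.g. {2014: 1}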

  1. Geologic mapping of the Amirani-Gish Bar region of Io: Implications for the global geologic mapping of Io

    Science.gov (United States)

    Williams, D.A.; Keszthelyi, L.P.; Crown, D.A.; Jaeger, W.L.; Schenk, P.M.

    2007-01-01

    We produced the first geologic map of the Amirani-Gish Bar region of Io, the last of four regional maps generated from Galileo mission data. The Amirani-Gish Bar region has five primary types of geologic materials: plains, mountains, patera floors, flows, and diffuse deposits. The flows and patera floors are thought to be compositionally similar, but are subdivided based on interpretations regarding their emplacement environments and mechanisms. Our mapping shows that volcanic activity in the Amirani-Gish Bar region is dominated by the Amirani Eruptive Center (AEC), now recognized to be part of an extensive, combined Amirani-Maui flow field. A mappable flow connects Amirani and Maui, suggesting that Maui is fed from Amirani, such that the post-Voyager designation "Maui Eruptive Center" should be revised. Amirani contains at least four hot spots detected by Galileo, and is the source of widespread bright (sulfur?) flows and active dark (silicate?) flows being emplaced in the Promethean style (slowly emplaced, compound flow fields). The floor of Gish Bar Patera has been partially resurfaced by dark lava flows, although other parts of its floor are bright and appeared unchanged during the Galileo mission. This suggests that the floor did not undergo complete resurfacing as a lava lake as proposed for other ionian paterae. There are several other hot spots in the region that are the sources of both active dark flows (confined within paterae), and SO2- and S2-rich diffuse deposits. Mapped diffuse deposits around fractures on mountains and in the plains appear to serve as the source for gas venting without the release of magma, an association previously unrecognized in this region. The six mountains mapped in this region exhibit various states of degradation. In addition to gaining insight into this region of Io, all four maps are studied to assess the best methodology to use to produce a new global geologic map of Io based on the newly released, combined Galileo

  2. Cerenkov counting and Cerenkov-scintillation counting with high refractive index organic liquids using a liquid scintillation counter

    International Nuclear Information System (INIS)

    Wiebe, L.I.; Helus, F.; Maier-Borst, W.

    1978-01-01

    ¹⁸F and ¹⁴C radioactivity was measured in methyl salicylate (MS), a high refractive index hybrid Cherenkov-scintillation generating medium, using a liquid scintillation counter. At concentrations of up to 21.4% in MS, dimethyl sulfoxide (DMSO) quenched ¹⁴C fluorescence, and with a 10-fold excess of DMSO over MS, ¹⁸F count rates were reduced below that for DMSO alone, probably as a result of concentration-independent self-quenching due to 'dark-complex' formation. DMSO in lower concentrations did not reduce the counting efficiency of ¹⁸F in MS. Nitrobenzene was a concentration-dependent quencher for both ¹⁴C and ¹⁸F in MS. Chlorobenzene (CB) and DMSO were both found to be weak Cherenkov generators with ¹⁸F. Counting efficiencies for ¹⁸F in MS, CB, and DMSO were 50.3, 7.8 and 4.3% respectively in the coincidence counting mode, and 58.1, 13.0 and 6.8% in the singles mode. ¹⁴C efficiencies were 14.4 and 22.3% for coincidence and singles respectively, and 15.3 and 42.0% using a modern counter designed for coincidence and single photon counting. The high ¹⁴C and ¹⁸F counting efficiencies in MS are discussed with respect to the excitation mechanism, on the basis of the quench and channels-ratio changes observed. It is proposed that MS functions as an efficient Cherenkov-scintillation generator for high-energy beta emitters such as ¹⁸F, and as a low-efficiency scintillator for weak beta-emitting radionuclides such as ¹⁴C. (author)

  3. Mapping Theory - a mapping of the theoretical territory related to a contemporary concept of public space

    DEFF Research Database (Denmark)

    Smith, Shelley

    2008-01-01

    This working paper maps the theoretical territory of public space - urban public space - in a contemporary urban context. By finding, selecting, registering and examining existing theoretical standpoints, the paper founds a basis for the creation of theory in an architectural discourse and for the examination of new spatial constellations for further research in public space. In addition to this, the appendices of the working paper are a kind of database for sources and source analyses.

  4. Development of a nematode offspring counting assay for rapid and simple soil toxicity assessment.

    Science.gov (United States)

    Kim, Shin Woong; Moon, Jongmin; Jeong, Seung-Woo; An, Youn-Joo

    2018-05-01

    Since the introduction of standardized nematode toxicity assays by the American Society for Testing and Materials (ASTM) and the International Organization for Standardization (ISO), many studies have reported their use. Given that currently used standardized nematode toxicity assays have certain limitations, in this study we examined the use of a novel nematode offspring counting assay for evaluating soil ecotoxicity, based on a previous soil-agar isolation method used to recover live adult nematodes. In this new assay, adult Caenorhabditis elegans were exposed to soil using a standardized toxicity assay procedure, and the resulting offspring in test soils, attracted by a microbial food source in agar plates, were counted. This method differs from previously used assays in terms of its endpoint, namely the number of nematode offspring. The applicability of the bioassay was demonstrated using metal-spiked soils, which revealed metal concentration-dependent responses, and with 36 field soil samples characterized by different physicochemical properties and containing various metals. Principal component analysis revealed that texture fractions (clay, sand, and silt) and electrical conductivity values were the main factors influencing the nematode offspring counting assay, and these findings warrant further investigation. The nematode offspring counting assay is a rapid and simple process that can provide multi-directional toxicity assessment when used in conjunction with other standard methods.

  5. Relationship between salivary flow rates and Candida albicans counts.

    Science.gov (United States)

    Navazesh, M; Wood, G J; Brightman, V J

    1995-09-01

    Seventy-one persons (48 women, 23 men; mean age, 51.76 years) were evaluated for salivary flow rates and Candida albicans counts. Each person was seen on three different occasions. Samples of unstimulated whole, chewing-stimulated whole, acid-stimulated parotid, and candy-stimulated parotid saliva were collected under standardized conditions. An oral rinse was also obtained and evaluated for Candida albicans counts. Unstimulated and chewing-stimulated whole flow rates were negatively and significantly related to Candida counts. Unstimulated whole flow rates differed significantly between persons with Candida counts of 0 versus ≥ 500. Differences in stimulated parotid flow rates were not significant among different levels of Candida counts. The results of this study reveal that whole saliva is a better predictor than parotid saliva in the identification of persons with high Candida albicans counts.

  6. Vision-based topological map building and localisation using persistent features

    CSIR Research Space (South Africa)

    Sabatta, DG

    2008-11-01

    Full Text Available The concept of topological mapping was introduced into the field of robotics following studies of human cognitive mapping undertaken by Kuipers [8]. Since then, much progress has been made in the field of vision-based topological mapping. Topological mapping lends...

  7. Mapping of low temperature heat sources in Denmark

    DEFF Research Database (Denmark)

    Bühler, Fabian; Holm, Fridolin Müller; Huang, Baijia

    2015-01-01

    Low temperature heat sources are available in many applications, ranging from waste heat from industrial processes and buildings to geothermal and solar heat sources. Technical advancements, such as heat pumps with novel cycle design and multi-component working fluids, make the utilisation of many... heat. The total accessible waste heat potential is found to be approximately 266 PJ per year, with 58% of it below 100 °C. In the natural heat category, temperatures below 20 °C originate from ambient air, sea water and shallow geothermal energy, and temperatures up to 100 °C are found for solar... and deep geothermal energy. The theoretical solar thermal potential alone would be above 500 PJ per year. For the development of advanced thermodynamic cycles for the integration of heat sources in the Danish energy system, several areas of interest are determined. In the maritime transport sector a high...

  8. Reticulocyte Count Test

    Science.gov (United States)


  9. The National Map - Orthoimagery

    Science.gov (United States)

    Mauck, James; Brown, Kim; Carswell, William J.

    2009-01-01

    Orthorectified digital aerial photographs and satellite images of 1-meter (m) pixel resolution or finer make up the orthoimagery component of The National Map. The process of orthorectification removes feature displacements and scale variations caused by terrain relief and sensor geometry. The result is a combination of the image characteristics of an aerial photograph or satellite image and the geometric qualities of a map. These attributes allow users to measure distance, calculate areas, determine shapes of features, calculate directions, determine accurate coordinates, determine land cover and use, perform change detection, and update maps. The standard digital orthoimage is a 1-m or finer resolution, natural color or color infra-red product. Most are now produced as GeoTIFFs and accompanied by a Federal Geographic Data Committee (FGDC)-compliant metadata file. The primary source for 1-m data is the National Agriculture Imagery Program (NAIP) leaf-on imagery. The U.S. Geological Survey (USGS) utilizes NAIP imagery as the image layer on its 'Digital-Map' - a new generation of USGS topographic maps (http://nationalmap.gov/digital_map). However, many Federal, State, and local governments and organizations require finer resolutions to meet a myriad of needs. Most of these images are leaf-off, natural-color products at resolutions of 1-foot (ft) or finer.

  10. Temporal trends in sperm count

    DEFF Research Database (Denmark)

    Levine, Hagai; Jørgensen, Niels; Martino-Andrade, Anderson

    2017-01-01

    BACKGROUND: Reported declines in sperm counts remain controversial today and recent trends are unknown. A definitive meta-analysis is critical given the predictive value of sperm count for fertility, morbidity and mortality. OBJECTIVE AND RATIONALE: To provide a systematic review and meta-regression... Following a predefined protocol, 7518 abstracts were screened and 2510 full articles reporting primary data on SC were reviewed. A total of 244 estimates of SC and TSC from 185 studies of 42 935 men who provided semen samples in 1973-2011 were extracted for meta-regression analysis, as well as information on years... .006, respectively). WIDER IMPLICATIONS: This comprehensive meta-regression analysis reports a significant decline in sperm counts (as measured by SC and TSC) between 1973 and 2011, driven by a 50-60% decline among men unselected by fertility from North America, Europe, Australia and New Zealand. Because...

  11. CERNDxCTA counting mode chip

    International Nuclear Information System (INIS)

    Moraes, D.; Kaplon, J.; Nygard, E.

    2008-01-01

    This ASIC is a counting-mode front-end electronics chip optimized for the readout of CdZnTe/CdTe and silicon sensors, for possible use in applications where the flux of ionizing radiation is high. The chip is implemented in 0.25 μm CMOS technology. The circuit comprises 128 channels, each equipped with a transimpedance amplifier followed by a gain shaper stage with 21 ns peaking time, two discriminators and two 18-bit counters. The channel architecture is optimized for the detector characteristics in order to achieve the best energy resolution at counting rates of up to 5 Mcounts/second. The amplifier shows a linear sensitivity of 118 mV/fC and an equivalent noise charge of about 711 e⁻ for a detector capacitance of 5 pF. A complete evaluation of the circuit is presented using electronic pulses and pixel detectors.

  12. Seed counting system evaluation using arduino microcontroller

    Directory of Open Access Journals (Sweden)

    Paulo Fernando Escobar Paim

    2018-01-01

    Full Text Available The development of automated systems has been notable in the most diverse productive sectors, among them the agricultural sector. These systems aim to optimize activities by increasing operational efficiency and quality of work. In this sense, the present work has the objective of evaluating a prototype developed for seed counting in the laboratory, using an Arduino microcontroller. The prototype of the seed counting system was built using a dosing mechanism commonly used in seeders, an electric motor, an Arduino Uno, a light-dependent resistor and a light-emitting diode. To test the prototype, a completely randomized design (CRD) was used in a two-factorial scheme composed of three groups defined according to the number of seeds tested (500, 1000 and 1500 seeds) and three speeds of the dosing disc that allowed distribution at 17, 21 and 32 seeds per second, with 40 repetitions, evaluating the performance of the seed counting prototype at different speeds. The prototype of the bench counter showed a moderate variability in the number of seeds counted within the nine tests and a high precision in the seed count at the distribution speeds of 17 and 21 seeds per second (s⁻¹) for up to 1500 seeds tested. Therefore, based on the observed results, the developed prototype presents itself as an excellent tool for counting seeds in the laboratory.

  13. HEPS-BPIX, a single photon counting pixel detector with a high frame rate for the HEPS project

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Wei, E-mail: weiw@ihep.ac.cn [Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China); State Key Laboratory of Particle Detection and Electronics, Beijing 100049 (China); Zhang, Jie; Ning, Zhe; Lu, Yunpeng; Fan, Lei; Li, Huaishen; Jiang, Xiaoshan; Lan, Allan K.; Ouyang, Qun; Wang, Zheng; Zhu, Kejun; Chen, Yuanbo [Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China); State Key Laboratory of Particle Detection and Electronics, Beijing 100049 (China); Liu, Peng [Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China)

    2016-11-01

    China's next generation light source, named the High Energy Photon Source (HEPS), is currently under construction. HEPS-BPIX (HEPS-Beijing PIXel) is a dedicated pixel readout chip that operates in single photon counting mode for X-ray applications in HEPS. Designed using 0.13 µm CMOS technology, the chip contains a matrix of 104×72 pixels. Each pixel measures 150 µm×150 µm and has a counting depth of 20 bits. A bump-bonded prototype detector module with a 300-µm thick silicon sensor was tested in the beamline of the Beijing Synchrotron Radiation Facility. A fast stream of X-ray images was demonstrated, and a frame rate of 1.2 kHz was proven, with negligible dead time. The test results showed an equivalent noise charge of 115 e⁻ rms after bump bonding and a threshold dispersion of 55 e⁻ rms after calibration.

  14. Using DNA to test the utility of pellet-group counts as an index of deer counts

    Science.gov (United States)

    T. J. Brinkman; D. K. Person; W. Smith; F. Stuart Chapin; K. McCoy; M. Leonawicz; K. Hundertmark

    2013-01-01

    Despite widespread use of fecal pellet-group counts as an index of ungulate density, techniques used to convert pellet-group numbers to ungulate numbers rarely are based on counts of known individuals, seldom evaluated across spatial and temporal scales, and precision is infrequently quantified. Using DNA from fecal pellets to identify individual deer, we evaluated the...

  15. Tutorial on X-ray photon counting detector characterization.

    Science.gov (United States)

    Ren, Liqiang; Zheng, Bin; Liu, Hong

    2018-01-01

    Recent advances in photon counting detection technology have led to significant research interest in X-ray imaging. As a tutorial level review, this paper covers a wide range of aspects related to X-ray photon counting detector characterization. The tutorial begins with a detailed description of the working principle and operating modes of a pixelated X-ray photon counting detector with basic architecture and detection mechanism. Currently available methods and techniques for charactering major aspects including energy response, noise floor, energy resolution, count rate performance (detector efficiency), and charge sharing effect of photon counting detectors are comprehensively reviewed. Other characterization aspects such as point spread function (PSF), line spread function (LSF), contrast transfer function (CTF), modulation transfer function (MTF), noise power spectrum (NPS), detective quantum efficiency (DQE), bias voltage, radiation damage, and polarization effect are also remarked. A cadmium telluride (CdTe) pixelated photon counting detector is employed for part of the characterization demonstration and the results are presented. This review can serve as a tutorial for X-ray imaging researchers and investigators to understand, operate, characterize, and optimize photon counting detectors for a variety of applications.

  16. Reference analysis of the signal + background model in counting experiments

    Science.gov (United States)

    Casadei, D.

    2012-01-01

    The model representing two independent Poisson processes, labelled as "signal" and "background" and both contributing additively to the total number of counted events, is considered from a Bayesian point of view. This is a widely used model for searches of rare or exotic events in the presence of a background source, as for example in the searches performed by high-energy physics experiments. In the assumption of prior knowledge about the background yield, a reference prior is obtained for the signal alone and its properties are studied. Finally, the properties of the full solution, the marginal reference posterior, are illustrated with a few examples.
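
    The posterior for the signal in this model is easy to tabulate numerically. The sketch below uses a flat prior on the signal with a known background yield - a simpler stand-in for the reference prior derived in the paper:

      import numpy as np

      def signal_posterior(n_obs, b, s_grid):
          # Posterior for the signal yield s with known background b and a flat
          # prior on s: p(s | n) is proportional to (s + b)^n * exp(-(s + b));
          # the n! factor cancels when normalizing on the grid.
          log_post = n_obs * np.log(s_grid + b) - (s_grid + b)
          post = np.exp(log_post - log_post.max())   # stabilize before normalizing
          return post / np.trapz(post, s_grid)

      s = np.linspace(0.0, 30.0, 2001)
      post = signal_posterior(n_obs=10, b=3.2, s_grid=s)
      print("posterior mean signal:", np.trapz(s * post, s))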

  17. Scintillation counting apparatus

    International Nuclear Information System (INIS)

    Noakes, J.E.

    1978-01-01

    Apparatus is described for the accurate measurement of radiation by means of scintillation counters and in particular for the liquid scintillation counting of both soft beta radiation and gamma radiation. Full constructional and operating details are given. (UK)

  18. Counts and colors of faint galaxies

    International Nuclear Information System (INIS)

    Kron, R.G.

    1980-01-01

    The color distribution of faint galaxies is an observational dimension which has not yet been fully exploited, despite the important constraints obtainable for galaxy evolution and cosmology. Number-magnitude counts alone contain very diluted information about the state of things because galaxies from a wide range in redshift contribute to the counts at each magnitude. The most-frequently-seen type of galaxy depends on the luminosity function and the relative proportions of galaxies of different spectral classes. The addition of color as a measured quantity can thus considerably sharpen the interpretation of galaxy counts since the apparent color depends on the redshift and rest-frame spectrum. (Auth.)

  19. From laser-plasma accelerators to femtosecond X-ray sources: study, development and applications

    International Nuclear Information System (INIS)

    Corde, S.

    2012-01-01

    During the relativistic interaction between a short and intense laser pulse and an underdense plasma, electrons can be injected and accelerated up to hundreds of MeV in an accelerating structure formed in the wake of the pulse: this is the so-called laser-plasma accelerator. One of the major prospects for laser-plasma accelerators resides in the realization of compact sources of femtosecond X-ray beams. In this thesis, two X-ray sources were studied and developed. The betatron radiation, intrinsic to laser-plasma accelerators, comes from the transverse oscillations of electrons during their acceleration. Its characterization by photon counting revealed an X-ray beam containing 10⁹ photons, with energies extending above 10 keV. We also developed an all-optical Compton source producing photons with energies up to hundreds of keV, based on the collision between a photon beam and an electron beam. The potential of these X-ray sources was highlighted by the realization of single-shot phase-contrast imaging of a biological sample. Then, we showed that betatron X-ray radiation can be a powerful tool for studying the physics of laser-plasma acceleration. We demonstrated the possibility of mapping the X-ray emission region, which gives a unique insight into the interaction, permitting us for example to locate the region where electrons are injected. The X-ray angular and spectral properties allow us to gain information on the transverse dynamics of electrons during their acceleration. (author)

  20. Clean Hands Count

    Medline Plus

    Full Text Available ... empower patients to play a role in their care by asking or reminding healthcare providers to clean ...