WorldWideScience

Sample records for source count distributions

  1. Statistical measurement of the gamma-ray source-count distribution as a function of energy

    Science.gov (United States)

    Zechlin, H.-S.; Cuoco, A.; Donato, F.; Fornengo, N.; Regis, M.

    2017-01-01

Photon count statistics have recently been proven to provide a sensitive observable for characterizing gamma-ray source populations and for measuring the composition of the gamma-ray sky. In this work, we generalize the use of the standard one-point probability distribution function (1pPDF) to decompose the high-latitude gamma-ray emission observed with Fermi-LAT into (i) point-source contributions, (ii) the Galactic foreground contribution, and (iii) a diffuse isotropic background contribution. We analyze gamma-ray data in five adjacent energy bands between 1 and 171 GeV. We measure the source-count distribution dN/dS as a function of energy, and demonstrate that our results extend current measurements from source catalogs into the regime of so-far-undetected sources. Our method improves the sensitivity for resolving point-source populations by about one order of magnitude in flux. The dN/dS distribution as a function of flux is found to be compatible with a broken power law. We derive upper limits on further possible breaks as well as on the angular power of unresolved sources. We discuss the composition of the gamma-ray sky and the capabilities of the 1pPDF method.
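The broken-power-law form of dN/dS described in this record can be sketched numerically. A minimal example follows; the break flux, normalization, and slopes are illustrative placeholders, not the paper's fitted values, and the integration is checked against the closed form available when both slopes equal 2.

```python
import math

def dn_ds(s, s_break, n0, g1, g2):
    # Differential source-count distribution: a power law with slope g1
    # above the break flux and g2 below it (parameter values illustrative).
    slope = g1 if s >= s_break else g2
    return n0 * (s / s_break) ** (-slope)

def n_above(s_min, s_max, s_break, n0, g1, g2, steps=20000):
    # Number of sources brighter than s_min: trapezoidal integration of
    # dN/dS in log-flux space, using dS = S d(ln S).
    log_lo, log_hi = math.log(s_min), math.log(s_max)
    h = (log_hi - log_lo) / steps
    total = 0.0
    for i in range(steps):
        a = math.exp(log_lo + i * h)
        b = math.exp(log_lo + (i + 1) * h)
        total += 0.5 * (dn_ds(a, s_break, n0, g1, g2) * a +
                        dn_ds(b, s_break, n0, g1, g2) * b) * h
    return total

# With g1 = g2 = 2 the integral has the closed form n0*sb^2*(1/s_min - 1/s_max).
approx = n_above(1e-8, 1e-4, 1e-6, 1.0, 2.0, 2.0)
exact = 1.0 * (1e-6) ** 2 * (1 / 1e-8 - 1 / 1e-4)
```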

  2. Absolute nuclear material assay using count distribution (LAMBDA) space

    Science.gov (United States)

Prasad, Manoj K [Pleasanton, CA]; Snyderman, Neal J [Berkeley, CA]; Rowland, Mark S [Alamo, CA]

    2012-06-05

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  3. Radiation measurement practice for understanding statistical fluctuation of radiation count using natural radiation sources

    International Nuclear Information System (INIS)

    Kawano, Takao

    2014-01-01

It is known that radiation is detected at random and that radiation counts fluctuate statistically. In the present study, a radiation measurement experiment was performed to illustrate this randomness and the statistical fluctuation of radiation counts. Three natural radiation sources were used, fabricated from potassium chloride chemicals, chemical fertilizers, and kelp. These materials contain naturally occurring potassium-40, a radionuclide. Nine teachers from high schools, junior high schools, and elementary schools participated in the experiment. Each participant measured the 1-min integration counts of radiation five times using GM survey meters, and 45 data sets were obtained for each natural radiation source. Although the 45 data sets of radiation counts superficially appeared to fluctuate without pattern, the frequency of occurrence of the counts was found to follow a Gaussian distribution curve. (author)
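The statistical behaviour described here is easy to reproduce in simulation. The sketch below draws 45 one-minute counts from a Poisson process, mirroring the 45 data sets in the experiment; the 30 counts-per-minute rate is an assumed value, not taken from the paper.

```python
import math
import random
import statistics

def poisson_count(rng, rate):
    # Knuth's inverse-transform sampler: the count of uniform draws whose
    # running product first falls below exp(-rate).
    limit = math.exp(-rate)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(1)
# 45 simulated 1-min integration counts at an assumed 30 cpm.
counts = [poisson_count(rng, 30.0) for _ in range(45)]
mean = statistics.mean(counts)
var = statistics.pvariance(counts)
# For Poisson counting statistics the variance tracks the mean, and at
# rates this large the histogram is well approximated by a Gaussian.
```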

  4. Alpha-particle autoradiography by solid state track detectors to spatial distribution of radioactivity in alpha-counting source

    International Nuclear Information System (INIS)

    Ishigure, Nobuhito; Nakano, Takashi; Enomoto, Hiroko; Koizumi, Akira; Miyamoto, Katsuhiro

    1989-01-01

A technique of autoradiography using solid state track detectors is described by which the spatial distribution of radioactivity in an alpha-counting source can easily be visualized. Poly(allyl diglycol carbonate) was used as the solid state track detector. An advantage of the present technique is that the alpha emitters, which require special care from the point of view of radiation protection, can be handled in the light throughout the whole course of autoradiography, whereas in conventional autoradiography they must be handled, with difficulty, in the dark. The technique was applied to a rough examination of the self-absorption of plutonium sources prepared by different methods: source (A) was prepared by drying at room temperature; (B) by drying under an infrared lamp; (C) by drying in an ammonia atmosphere after redissolving the deposit with a drop of distilled water following complete evaporation under an infrared lamp; and (D) by drying under an infrared lamp after adding a drop of diluted neutral detergent. Differences in the spatial distributions of radioactivity could clearly be observed on the autoradiographs. For example, source (C) showed the most diffuse distribution, suggesting that its self-absorption was the smallest. The autoradiographic observations were consistent with the results of alpha-spectrometry with a silicon surface-barrier detector. (author)

  5. Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.

    Science.gov (United States)

    Böhning, Dankmar; Kuhnert, Ronny

    2006-12-01

    This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
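The zero-truncated Poisson density and the Horvitz-Thompson population-size estimator discussed in this record can be written down directly. A minimal sketch follows; the rate value and observed count are illustrative.

```python
import math

def ztp_pmf(k, lam):
    # Zero-truncated Poisson: the Poisson pmf renormalized over k >= 1,
    # i.e. divided by P(X > 0) = 1 - exp(-lam).
    if k < 1:
        return 0.0
    return math.exp(-lam) * lam ** k / (math.factorial(k) * (1.0 - math.exp(-lam)))

def horvitz_thompson(n_observed, lam):
    # Capture-recapture population size: each unit is observed with
    # probability 1 - P(X = 0), so N_hat = n / (1 - exp(-lam)).
    return n_observed / (1.0 - math.exp(-lam))

# The truncated pmf must sum to one over k >= 1.
total_prob = sum(ztp_pmf(k, 2.0) for k in range(1, 60))
# With 100 observed units and lam = 2, the estimated population exceeds 100.
n_hat = horvitz_thompson(100, 2.0)
```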

  6. SPERM COUNT DISTRIBUTIONS IN FERTILE MEN

    Science.gov (United States)

Sperm concentration and count are often used as indicators of environmental impacts on male reproductive health. Existing clinical databases may be biased towards subfertile men with low sperm counts, and less is known about expected sperm count distributions in cohorts of fertile men.

  7. A matrix-inversion method for gamma-source mapping from gamma-count data - 59082

    International Nuclear Information System (INIS)

    Bull, Richard K.; Adsley, Ian; Burgess, Claire

    2012-01-01

    Gamma ray counting is often used to survey the distribution of active waste material in various locations. Ideally the output from such surveys would be a map of the activity of the waste. In this paper a simple matrix-inversion method is presented. This allows an array of gamma-count data to be converted to an array of source activities. For each survey area the response matrix is computed using the gamma-shielding code Microshield [1]. This matrix links the activity array to the count array. The activity array is then obtained via matrix inversion. The method was tested on artificially-created arrays of count-data onto which statistical noise had been added. The method was able to reproduce, quite faithfully, the original activity distribution used to generate the dataset. The method has been applied to a number of practical cases, including the distribution of activated objects in a hot cell and to activated Nimonic springs amongst fuel-element debris in vaults at a nuclear plant. (authors)
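The matrix-inversion step described here can be sketched with a toy example: a small response matrix links cell activities to detector counts, and inverting the linear system recovers the activities. The matrix values below are illustrative stand-ins, not Microshield output, and a stdlib Gauss-Jordan solver stands in for a library linear solver.

```python
# Toy response matrix R[i][j]: counts at detector position i per unit
# activity in cell j (values illustrative, not computed with Microshield).
R = [[1.0, 0.3, 0.1],
     [0.3, 1.0, 0.3],
     [0.1, 0.3, 1.0]]

true_activity = [5.0, 0.0, 2.0]

# Forward model: counts = R * activity.
counts = [sum(R[i][j] * true_activity[j] for j in range(3)) for i in range(3)]

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting (stdlib-only stand-in
    # for a linear-algebra library's solver).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Matrix inversion recovers the activity array from the count array.
activity = solve(R, counts)
```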

  8. 26 CFR 1.963-3 - Distributions counting toward a minimum distribution.

    Science.gov (United States)

    2010-04-01

    ... earnings and profits and 100 shares of only one class of stock outstanding. Domestic corporation M, not... distribution of earnings and profits which is attributable to an increase in current earnings, invested in... counting toward a minimum distribution. (a) Conditions under which earnings and profits are counted toward...

  9. Degree of polarization and source counts of faint radio sources from Stacking Polarized intensity

    International Nuclear Information System (INIS)

    Stil, J. M.; George, S. J.; Keller, B. W.; Taylor, A. R.

    2014-01-01

    We present stacking polarized intensity as a means to study the polarization of sources that are too faint to be detected individually in surveys of polarized radio sources. Stacking offers not only high sensitivity to the median signal of a class of radio sources, but also avoids a detection threshold in polarized intensity, and therefore an arbitrary exclusion of sources with a low percentage of polarization. Correction for polarization bias is done through a Monte Carlo analysis and tested on a simulated survey. We show that the nonlinear relation between the real polarized signal and the detected signal requires knowledge of the shape of the distribution of fractional polarization, which we constrain using the ratio of the upper quartile to the lower quartile of the distribution of stacked polarized intensities. Stacking polarized intensity for NRAO VLA Sky Survey (NVSS) sources down to the detection limit in Stokes I, we find a gradual increase in median fractional polarization that is consistent with a trend that was noticed before for bright NVSS sources, but is much more gradual than found by previous deep surveys of radio polarization. Consequently, the polarized radio source counts derived from our stacking experiment predict fewer polarized radio sources for future surveys with the Square Kilometre Array and its pathfinders.

  10. Comparison of MCNP6 and experimental results for neutron counts, Rossi-α, and Feynman-α distributions

    International Nuclear Information System (INIS)

    Talamo, A.; Gohar, Y.; Sadovich, S.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.

    2013-01-01

MCNP6, the general-purpose Monte Carlo N-Particle code, has the capability to perform time-dependent calculations by tracking the time interval between successive events of the neutron random walk. In fixed-source calculations for a subcritical assembly, the zero time value is assigned at the moment the neutron is emitted by the external neutron source. The PTRAC and F8 cards of MCNP make it possible to tally the time when a neutron is captured by ³He(n,p) reactions in the neutron detector. From this information, it is possible to build three different time distributions: neutron counts, Rossi-α, and Feynman-α. The neutron counts time distribution represents the number of neutrons captured as a function of time. The Rossi-α distribution represents the number of neutron pairs captured as a function of the time interval between two capture events. The Feynman-α distribution represents the variance-to-mean ratio, minus one, of the neutron counts array as a function of a fixed time interval. The MCNP6 results for these three time distributions have been compared with the experimental data of the YALINA Thermal facility and found to be in quite good agreement. (authors)
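The Feynman-α statistic defined in this abstract (variance-to-mean ratio minus one of gated counts) can be computed from a list of capture times. The sketch below feeds it uncorrelated Poisson captures at an assumed rate of 100 per unit time, for which the statistic should sit near zero; correlated fission-chain captures in a subcritical assembly would push it above zero.

```python
import random
import statistics

def feynman_y(capture_times, gate_width):
    # Variance-to-mean ratio minus one of the counts collected in
    # consecutive, non-overlapping gates of fixed width.
    n_gates = int(max(capture_times) // gate_width)
    gates = [0] * n_gates
    for t in capture_times:
        i = int(t // gate_width)
        if i < n_gates:
            gates[i] += 1
    return statistics.pvariance(gates) / statistics.mean(gates) - 1.0

rng = random.Random(42)
# Purely Poisson capture process: exponential inter-arrival times.
t, times = 0.0, []
for _ in range(20000):
    t += rng.expovariate(100.0)
    times.append(t)
y = feynman_y(times, gate_width=0.1)
```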

  11. Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources

    Energy Technology Data Exchange (ETDEWEB)

    Klumpp, John [Colorado State University, Department of Environmental and Radiological Health Sciences, Molecular and Radiological Biosciences Building, Colorado State University, Fort Collins, Colorado, 80523 (United States)

    2013-07-01

We propose a radiation detection system which generates its own discrete sampling distribution based on past measurements of background. The advantage of this approach is that it can take into account variations in background with respect to time, location, energy spectrum, detector-specific characteristics (e.g., different efficiencies at different count rates and energies), etc. It is therefore a 'machine learning' approach, in which the algorithm updates and improves its characterization of background over time. The system would have a 'learning mode,' in which it measures and analyzes background count rates, and a 'detection mode,' in which it compares measurements from an unknown source against its unique background distribution. By characterizing and accounting for variations in the background, general-purpose radiation detectors can be improved with little or no increase in cost. The statistical and computational techniques to perform this kind of analysis have already been developed: the necessary signal analysis can be accomplished using existing Bayesian algorithms which account for multiple channels, multiple detectors, and multiple time intervals. Furthermore, Bayesian machine-learning techniques have already been developed which, with trivial modifications, can generate appropriate decision thresholds based on the comparison of new measurements against a nonparametric sampling distribution. (authors)

  12. Confusion-limited extragalactic source survey at 4.755 GHz. I. Source list and areal distributions

    International Nuclear Information System (INIS)

    Ledden, J.E.; Broderick, J.J.; Condon, J.J.; Brown, R.L.

    1980-01-01

A confusion-limited 4.755-GHz survey covering 0.00956 sr between right ascensions 07h05m and 18h near declination +35° has been made with the NRAO 91-m telescope. The survey found 237 sources and is complete above 15 mJy. Source counts between 15 and 100 mJy were obtained directly. The P(D) distribution was used to determine the number counts between 0.5 and 13.2 mJy, to search for anisotropy in the density of faint extragalactic sources, and to set a 99%-confidence upper limit of 1.83 mK on the rms temperature fluctuation of the 2.7-K cosmic microwave background on angular scales smaller than 7.3 arcmin. The discrete-source density, normalized to the static Euclidean slope, falls off sufficiently rapidly below 100 mJy that no new population of faint flat-spectrum sources is required to explain the 4.755-GHz source counts

  14. Micro-electrodeposition techniques for the preparation of small actinide counting sources for ultra-high resolution alpha spectrometry by microcalorimetry

    International Nuclear Information System (INIS)

    Plionis, A.A.; Hastings, E.P.; LaMont, S.P.; Dry, D.E.; Bacrania, M.K.; Rabin, M.W.; Rim, J.H.

    2009-01-01

Special considerations and techniques are needed for the preparation of small actinide counting sources. Counting sources have been prepared on metal disk substrates (planchets) with an active area of only 0.079 mm², a 93.75% reduction in deposition area from standard electrodeposition methods. The actinide distribution on the smaller planchet must remain thin and uniform to allow alpha-particle emissions to escape the counting source with a minimal amount of self-attenuation. This work describes the development of micro-electrodeposition methods and the optimization of the technique with respect to deposition time and current density for various planchet sizes. (author)

  15. A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution.

    Science.gov (United States)

    Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep

    2017-01-01

    The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section.
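The second class surveyed here (a mixture of independent multivariate Poissons) is simple to simulate: draw a mixture component, then draw each coordinate independently from that component's rates, so that dependence between coordinates arises only from the shared component. The two-component rates below are illustrative.

```python
import math
import random

def poisson_draw(rng, lam):
    # Knuth's Poisson sampler (adequate for the small rates used here).
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def mixture_draw(rng, weights, rate_vectors):
    # Pick a mixture component by its weight, then draw each coordinate
    # as an independent Poisson with that component's rate.
    u, acc = rng.random(), 0.0
    for w, rates in zip(weights, rate_vectors):
        acc += w
        if u <= acc:
            return [poisson_draw(rng, lam) for lam in rates]
    return [poisson_draw(rng, lam) for lam in rate_vectors[-1]]

rng = random.Random(0)
# Two components (low-rate vs high-rate regimes) induce positive
# correlation between the two count coordinates.
draws = [mixture_draw(rng, [0.5, 0.5], [[1.0, 1.0], [8.0, 8.0]])
         for _ in range(4000)]
mx = sum(d[0] for d in draws) / len(draws)
my = sum(d[1] for d in draws) / len(draws)
cov = sum((d[0] - mx) * (d[1] - my) for d in draws) / len(draws)
```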

  16. Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources.

    Science.gov (United States)

    Klumpp, John; Brandl, Alexander

    2015-03-01

A particle counting and detection system is proposed that searches for elevated count rates in multiple energy regions simultaneously. The system analyzes time-interval data (e.g., the time between counts), as this was shown to be a more sensitive technique for detecting low count rate sources than analyzing counts per unit interval (Luo et al. 2013). Two distinct versions of the detection system are developed. The first is intended for situations in which the sample is fixed and can be measured for an unlimited amount of time. The second is intended to detect sources that are physically moving relative to the detector, such as a truck moving past a fixed roadside detector or an airplane passing over a waste storage facility. In both cases, the detection system is expected to be active indefinitely; i.e., it is an online detection system. Both versions of the multi-energy detection system are compared to their respective gross count rate detection systems in terms of Type I and Type II error rates and sensitivity.
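The time-interval idea can be illustrated with a toy simulation: for a Poisson process the time between counts is exponential with mean 1/rate, so an elevated count rate shows up as a shorter mean interval. The rates below are illustrative, not from the paper.

```python
import random
import statistics

rng = random.Random(7)

def intervals(rate, n):
    # Times between successive counts of a Poisson process: exponential
    # with mean 1/rate (rates here are illustrative).
    return [rng.expovariate(rate) for _ in range(n)]

background = intervals(rate=5.0, n=2000)   # background-only counting
with_source = intervals(rate=8.0, n=2000)  # background plus a weak source

# A source raises the count rate, i.e. shortens the mean interval.
mb = statistics.mean(background)
ms = statistics.mean(with_source)
```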

  17. Optimizing the calculation of point source count-centroid in pixel size measurement

    International Nuclear Information System (INIS)

    Zhou Luyi; Kuang Anren; Su Xianyu

    2004-01-01

Purpose: Pixel size is an important parameter of gamma cameras and SPECT, and a number of methods are used for its accurate measurement. In the original count-centroid method, the image of a point source (PS) is acquired and its count-centroid is calculated to represent the PS position in the image; background counts are inevitable. The measured count-centroid (Xm) is therefore an approximation of the true count-centroid (Xp) of the PS, i.e. Xm = Xp + (Xb - Xp)/(1 + Rp/Rb), where Rp is the net counting rate of the PS, Xb the background count-centroid and Rb the background counting rate. For an accurate measurement, Rp must be very large, which is impractical and results in variation of the measured pixel size; an Rp-independent calculation of the PS count-centroid is desired. Methods: The proposed method attempts to eliminate the effect of the term (Xb - Xp)/(1 + Rp/Rb) by bringing Xb closer to Xp and by reducing Rb. In the acquired PS image, a circular ROI is generated to enclose the PS, with the maximum-count pixel as the center of the ROI. To choose the diameter (D) of the ROI, a Gaussian count distribution is assumed for the PS; accordingly, K = 1 - (0.5)^(D/R) of the total PS counts lie in the ROI, R being the full width at half maximum of the PS count distribution. D was set to 6R to enclose most (K = 98.4%) of the PS counts. The count-centroid of the ROI is calculated to represent Xp. The proposed method was tested by measuring the pixel size of a well-tuned SPECT whose pixel size was estimated to be 3.02 mm from its mechanical and electronic settings (128 x 128 matrix, 387 mm UFOV, ZOOM = 1). For comparison, the original method, which was used in earlier versions of some commercial SPECT software, was also tested. 12 PSs were prepared and their images acquired and stored; the net counting rate of the PSs increased from 10 cps to 1183 cps.
Results: Using the proposed method, the measured pixel size (in mm) varied only between 3.00 and 3.01 (mean = 3.01 ± 0.00) as Rp increased
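The count-centroid calculation and the effect of restricting it to an ROI around the peak can be sketched on a synthetic image: a point source on a flat background pulls the whole-image centroid toward the image centre, while an ROI around the peak recovers the true position. The image values are illustrative.

```python
def count_centroid(image, roi=None):
    # Count-weighted centroid of a 2D count array; roi is an optional set
    # of (row, col) pixels to include (e.g. a circle around the peak).
    total = sx = sy = 0.0
    for r, row in enumerate(image):
        for c, n in enumerate(row):
            if roi is not None and (r, c) not in roi:
                continue
            total += n
            sy += r * n
            sx += c * n
    return sy / total, sx / total

# Point source at (row, col) = (2, 3) on a flat background of 1 count/pixel.
img = [[1] * 7 for _ in range(7)]
img[2][3] += 500

full = count_centroid(img)  # biased toward the image centre (3, 3)
roi = {(r, c) for r in range(7) for c in range(7)
       if (r - 2) ** 2 + (c - 3) ** 2 <= 4}  # small circular ROI on the peak
tight = count_centroid(img, roi)             # recovers (2.0, 3.0)
```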

  18. Limits to source counts and cosmic microwave background fluctuations at 10.6 GHz

    International Nuclear Information System (INIS)

    Seielstad, G.A.; Masson, C.R.; Berge, G.L.

    1981-01-01

We have determined the distribution of deflections due to sky temperature fluctuations at 10.6 GHz. If all the deflections are due to fine structure in the cosmic microwave background, we limit these fluctuations to ΔT/T of order 10⁻⁴ on an angular scale of 11 arcmin. If, on the other hand, all the deflections are due to confusion among discrete radio sources, the areal density of these sources is calculated for various slopes of the differential source-count relationship and for various cutoff flux densities. If, for example, the slope is 2.1 and the cutoff is 10 mJy, we find (0.25-3.3) × 10⁶ sources sr⁻¹ Jy⁻¹

  19. How Fred Hoyle Reconciled Radio Source Counts and the Steady State Cosmology

    Science.gov (United States)

    Ekers, Ron

    2012-09-01

    In 1969 Fred Hoyle invited me to his Institute of Theoretical Astronomy (IOTA) in Cambridge to work with him on the interpretation of the radio source counts. This was a period of extreme tension with Ryle just across the road using the steep slope of the radio source counts to argue that the radio source population was evolving and Hoyle maintaining that the counts were consistent with the steady state cosmology. Both of these great men had made some correct deductions but they had also both made mistakes. The universe was evolving, but the source counts alone could tell us very little about cosmology. I will try to give some indication of the atmosphere and the issues at the time and look at what we can learn from this saga. I will conclude by briefly summarising the exponential growth of the size of the radio source counts since the early days and ask whether our understanding has grown at the same rate.

  20. The effect of energy distribution of external source on source multiplication in fast assemblies

    International Nuclear Information System (INIS)

    Karam, R.A.; Vakilian, M.

    1976-02-01

This study examines the effect of the energy distribution of an external source on the detection rate as a function of k-effective in fast assemblies. This dependence was studied with a fission chamber, using the ABN cross-section set and the Mach 1 code. It was found that with a source having a fission spectrum, the reciprocal count rate versus mass relationship is linear down to a k-effective of 0.59. For a thermal source, linearity was never achieved. (author)

  2. Graphite nodule count and size distribution in thin-walled ductile cast iron

    DEFF Research Database (Denmark)

    Pedersen, Karl Martin; Tiedje, Niels Skat

    2008-01-01

Graphite nodule count and size distribution have been analysed in thin-walled ductile cast iron. The 2D nodule counts were converted into 3D nodule counts using a Finite Difference Method (FDM). Particles with a diameter smaller than 5 µm should be neglected in the nodule count, as these are inclusions and micro-porosities that do not influence the solidification morphology. If there are many small graphite nodules, as in thin-walled castings, only the 3D nodule count calculated by FDM will give reliable results; 2D nodule counts and 3D nodule counts calculated by simple equations will give too low...

  3. Neutron generation time of the reactor 'crocus' by an interval distribution method for counts collected by two detectors

    International Nuclear Information System (INIS)

    Haldy, P.-A.; Chikouche, M.

    1975-01-01

The distribution of time intervals between a count in one neutron detector and the consequent event registered in a second detector is considered. A 'four-interval' probability generating function was derived, by means of which an expression could be obtained for the distribution of the time intervals from the triggering detection in the first detector to the subsequent count in the second. The experimental work was conducted in the zero-power thermal reactor Crocus, using a neutron source provided by spontaneous fission, a BF₃ counter as the first detector and a ³He detector as the second instrument. (U.K.)

  4. The Herschel Virgo Cluster Survey. XVII. SPIRE point-source catalogs and number counts

    Science.gov (United States)

    Pappalardo, Ciro; Bendo, George J.; Bianchi, Simone; Hunt, Leslie; Zibetti, Stefano; Corbelli, Edvige; di Serego Alighieri, Sperello; Grossi, Marco; Davies, Jonathan; Baes, Maarten; De Looze, Ilse; Fritz, Jacopo; Pohlen, Michael; Smith, Matthew W. L.; Verstappen, Joris; Boquien, Médéric; Boselli, Alessandro; Cortese, Luca; Hughes, Thomas; Viaene, Sebastien; Bizzocchi, Luca; Clemens, Marcel

    2015-01-01

Aims: We present three independent catalogs of point sources extracted from SPIRE images at 250, 350, and 500 μm, acquired with the Herschel Space Observatory as part of the Herschel Virgo Cluster Survey (HeViCS). The catalogs have been cross-correlated to consistently extract the photometry at SPIRE wavelengths for each object. Methods: Sources were detected using an iterative loop. The source positions are determined by estimating the likelihood that each peak on the maps is a real source, according to the criterion defined in the sourceExtractorSussextractor task. The flux densities are estimated using sourceExtractorTimeline, a timeline-based point-source fitter that also determines, in the fitting procedure, the width of the Gaussian that best reproduces the source considered. Afterwards, each source is subtracted from the maps by removing a Gaussian function at every position with the full width at half maximum equal to that estimated in sourceExtractorTimeline. This procedure improves the robustness of our algorithm in terms of source identification. We calculate the completeness and the flux accuracy by injecting artificial sources in the timeline, and estimate the reliability of the catalog using a permutation method. Results: The HeViCS catalogs contain about 52 000, 42 200, and 18 700 sources selected at 250, 350, and 500 μm above 3σ, and are ~75%, 62%, and 50% complete at flux densities of 20 mJy at 250, 350, and 500 μm, respectively. We then measured source number counts at 250, 350, and 500 μm and compared them with previous data and semi-analytical models. We also cross-correlated the catalogs with the Sloan Digital Sky Survey to investigate the redshift distribution of the nearby sources. From this cross-correlation, we selected ~2000 sources with reliable fluxes and a high signal-to-noise ratio, finding an average redshift z ~ 0.3 ± 0.22 and 0.25 (16-84 percentile). Conclusions: The number counts at 250, 350, and 500 μm show an increase in

  5. Confidence Intervals for Asbestos Fiber Counts: Approximate Negative Binomial Distribution.

    Science.gov (United States)

    Bartley, David; Slaven, James; Harper, Martin

    2017-03-01

    The negative binomial distribution is adopted for analyzing asbestos fiber counts so as to account for both the sampling errors in capturing only a finite number of fibers and the inevitable human variation in identifying and counting sampled fibers. A simple approximation to this distribution is developed for the derivation of quantiles and approximate confidence limits. The success of the approximation depends critically on the use of Stirling's expansion to sufficient order, on exact normalization of the approximating distribution, on reasonable perturbation of quantities from the normal distribution, and on accurately approximating sums by inverse-trapezoidal integration. Accuracy of the approximation developed is checked through simulation and also by comparison to traditional approximate confidence intervals in the specific case that the negative binomial distribution approaches the Poisson distribution. The resulting statistics are shown to relate directly to early research into the accuracy of asbestos sampling and analysis. Uncertainty in estimating mean asbestos fiber concentrations given only a single count is derived. Decision limits (limits of detection) and detection limits are considered for controlling false-positive and false-negative detection assertions and are compared to traditional limits computed assuming normal distributions. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.

  6. Comparison of probabilistic models of the distribution of counts

    International Nuclear Information System (INIS)

    Salma, I.; Zemplen-Papp, E.

    1992-01-01

    The binomial, Poisson and modified Poisson models for describing the statistical nature of the distribution of counts are compared theoretically, and conclusions for application are proposed. The validity of the Poisson and the modified Poisson distribution for observing k events in a short time interval is investigated experimentally for various measuring times. The experiments to measure the influence of the significant radioactive decay were performed with 89mY (T1/2 = 16.06 s), using a multichannel analyser (4096 channels) in the multiscaling mode. According to the results, the Poisson distribution describes the counting experiment for short measuring times (up to T = 0.5 T1/2) and its application is recommended. However, the analysis of the data demonstrated that for long measurements (T ≥ 1 T1/2) the Poisson distribution is not valid and the modified Poisson distribution is preferable. The practical implications in calculating uncertainties and in optimizing the measuring time are discussed. (author) 20 refs.; 7 figs.; 1 tab

  7. Pedestrian count estimation using texture feature with spatial distribution

    Directory of Open Access Journals (Sweden)

    Hongyu Hu

    2016-12-01

    Full Text Available We present a novel pedestrian count estimation approach based on global image descriptors formed from multi-scale texture features that take spatial distribution into account. For regions of interest, local texture features are represented by histograms of multi-scale block local binary patterns, which jointly constitute the feature vector of the whole image. To achieve an effective estimate of pedestrian count, principal component analysis is used to reduce the dimension of the global representation, and a fitting model between the global image features and the pedestrian count is constructed via support vector regression. Experimental results show that the proposed method achieves high accuracy in pedestrian count estimation and applies well to real-world scenes.
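The descriptor-reduction-regression pipeline described above can be sketched with numpy alone. This is a hedged illustration on synthetic data: random vectors stand in for the multi-scale block LBP histograms, PCA is done via SVD, and ridge regression stands in for support vector regression; all names and parameters are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for multi-scale block LBP histograms: 200 images,
# 96-dimensional global texture descriptors driven by a few latent factors,
# with pedestrian counts linearly related to those factors.
n, d = 200, 96
latent = rng.normal(size=(n, 4))
X = latent @ rng.normal(size=(4, d)) + 0.1 * rng.normal(size=(n, d))
counts = 10 + latent @ np.array([3.0, -2.0, 1.5, 0.5]) + 0.2 * rng.normal(size=n)

# PCA via SVD to reduce the global descriptor dimension.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 4
Z = Xc @ Vt[:k].T                       # k principal-component scores

# Ridge regression as a simple stand-in for support vector regression.
A = np.hstack([Z, np.ones((n, 1))])     # scores plus intercept column
w = np.linalg.solve(A.T @ A + 1e-6 * np.eye(k + 1), A.T @ counts)
pred = A @ w

rmse = float(np.sqrt(np.mean((pred - counts) ** 2)))
print(round(rmse, 3))                   # close to the injected noise level
```

With the latent structure preserved by the top principal components, the fit error approaches the simulated count noise.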

  8. Comparing distribution models for small samples of overdispersed counts of freshwater fish

    Science.gov (United States)

    Vaudor, Lise; Lamouroux, Nicolas; Olivier, Jean-Michel

    2011-05-01

    The study of species abundance often relies on repeated abundance counts whose number is limited by logistical or financial constraints. The distribution of abundance counts is generally right-skewed (i.e. with many zeros and few high values) and needs to be modelled for statistical inference. We used an extensive dataset involving about 100,000 fish individuals of 12 freshwater fish species collected in electrofishing points (7 m²) during 350 field surveys made in 25 stream sites, in order to compare the performance and the generality of four distribution models of counts (Poisson, negative binomial and their zero-inflated counterparts). The negative binomial distribution was the best model (Bayesian Information Criterion) for 58% of the samples (species-survey combinations) and was suitable for a variety of life histories, habitats, and sample characteristics. The performance of the models was closely related to sample statistics such as total abundance and variance. Finally, we illustrated the consequences of a distribution assumption by calculating confidence intervals around the mean abundance, either based on the most suitable distribution assumption or on an asymptotic, distribution-free (Student's) method. Student's method generally produced narrower confidence intervals, especially when there were few (≤3) non-null counts in the samples.
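The BIC comparison between Poisson and negative binomial count models used above can be sketched with the standard library only. This is a hedged toy on synthetic overdispersed counts, with a method-of-moments negative binomial fit standing in for full maximum likelihood; the parameters are illustrative, not the paper's fish data.

```python
import math
import random

random.seed(1)

def poisson(lam):
    # Knuth's multiplicative method; adequate for small rates.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:
            return k - 1

# Synthetic overdispersed counts via a Gamma-Poisson (negative binomial) mixture.
shape, mean = 0.7, 2.5
counts = [poisson(random.gammavariate(shape, mean / shape)) for _ in range(300)]

n = len(counts)
mu = sum(counts) / n
var = sum((c - mu) ** 2 for c in counts) / (n - 1)

# Poisson log-likelihood at its MLE (lambda = sample mean).
ll_pois = sum(c * math.log(mu) - mu - math.lgamma(c + 1) for c in counts)

# Negative binomial with method-of-moments size r and success probability p
# (a simple stand-in for full MLE, which needs numerical optimisation).
r = mu * mu / (var - mu)
p = r / (r + mu)
ll_nb = sum(math.lgamma(c + r) - math.lgamma(r) - math.lgamma(c + 1)
            + r * math.log(p) + c * math.log(1 - p) for c in counts)

# BIC = -2 ln L + k ln n penalises the extra NB parameter.
bic_pois = -2 * ll_pois + 1 * math.log(n)
bic_nb = -2 * ll_nb + 2 * math.log(n)
print(bic_nb < bic_pois)
```

On overdispersed data the negative binomial wins despite its extra parameter, mirroring the paper's result for most samples.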

  9. The distribution of controlled drugs on banknotes via counting machines.

    Science.gov (United States)

    Carter, James F; Sleeman, Richard; Parry, Joanna

    2003-03-27

    Bundles of paper, similar to sterling banknotes, were counted in banks in England and Wales. Subsequent analysis showed that the counting process, both by machine and by hand, transferred nanogram amounts of cocaine to the paper. Crystalline material, similar to cocaine hydrochloride, could be observed on the surface of the paper following counting. The geographical distribution of contamination broadly followed Government statistics for cocaine usage within the UK. Diacetylmorphine, Delta(9)-tetrahydrocannabinol (THC) and 3,4-methylenedioxymethylamphetamine (MDMA) were not detected during this study.

  10. Quantifying the sources of variability in equine faecal egg counts: implications for improving the utility of the method.

    Science.gov (United States)

    Denwood, M J; Love, S; Innocent, G T; Matthews, L; McKendrick, I J; Hillary, N; Smith, A; Reid, S W J

    2012-08-13

    The faecal egg count (FEC) is the most widely used means of quantifying the nematode burden of horses, and is frequently used in clinical practice to inform treatment and prevention. The statistical process underlying the FEC is complex, comprising a Poisson counting error process for each sample, compounded with an underlying continuous distribution of means between samples. Being able to quantify the sources of variability contributing to this distribution of means is a necessary step towards providing estimates of statistical power for future FEC and faecal egg count reduction test (FECRT) studies, and may help to improve the usefulness of the FEC technique by identifying and minimising unwanted sources of variability. Obtaining such estimates requires a hierarchical statistical model coupled with repeated FEC observations from a single animal over a short period of time. Here, we use this approach to provide the first comparative estimate of multiple sources of within-horse FEC variability. The results demonstrate that a substantial proportion of the observed variation in FEC between horses occurs as a result of variation in FEC within an animal, with the major sources being aggregation of eggs within faeces and variation in egg concentration between faecal piles. The McMaster procedure itself is associated with a comparatively small coefficient of variation, and is therefore highly repeatable when a sufficiently large number of eggs are observed to reduce the error associated with the counting process. We conclude that the variation between samples taken from the same animal is substantial, but can be reduced through the use of larger homogenised faecal samples. Estimates are provided for the coefficient of variation (cv) associated with each within-animal source of variability in observed FEC, allowing the usefulness of individual FEC to be quantified, and providing a basis for future FEC and FECRT studies. Copyright © 2012 Elsevier B.V. All rights reserved.
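The hierarchical structure described above, Poisson counting error on top of a continuous distribution of means, can be sketched numerically. The toy below (our illustrative parameters, not the paper's horse data) draws the true mean from a gamma distribution and adds Poisson counting error, then checks that the variance components combine on the squared coefficient-of-variation scale as cv_total² ≈ cv_between² + 1/μ.

```python
import math
import random

random.seed(2)

def poisson(lam):
    # Knuth's multiplicative method; adequate for the rates used here.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:
            return k - 1

# Hierarchy: the true mean egg count varies between samples (gamma), and the
# count adds Poisson error on top. Parameters are illustrative.
mu, cv_between = 20.0, 0.5
shape = 1.0 / cv_between ** 2
reps = 20000
obs = [poisson(random.gammavariate(shape, mu / shape)) for _ in range(reps)]

mean = sum(obs) / reps
var = sum((c - mean) ** 2 for c in obs) / (reps - 1)
cv_total_sq = var / mean ** 2

# On the cv^2 scale the components add: between-sample plus Poisson counting.
expected = cv_between ** 2 + 1.0 / mu
print(round(cv_total_sq, 3), round(expected, 3))
```

The 1/μ term is why larger homogenised samples (higher expected counts) shrink the counting-error contribution.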

  11. Fitting a distribution to miccrobial counts: making sense of zeros

    DEFF Research Database (Denmark)

    Ribeiro Duarte, Ana Sofia; Stockmarr, Anders; Nauta, Maarten

    Non-detects or left-censored results are inherent to the traditional methods of microbial enumeration in foods. Typically, a low concentration of microorganisms in a food unit goes undetected in plate counts or most probable number (MPN) counts, and produces “artificial zeros”. However, these “artificial zeros” are only a share of the total number of zero counts resulting from a sample, as their number adds up to the number of “true zeros” resulting from uncontaminated units. In the process of fitting a probability distribution to microbial counts, “artificial” and “true” zeros are usually... ...and standard deviation) and the prevalence of contaminated food units (one minus the proportion of “true zeros”) from a set of microbial counts. By running the model with in silico generated concentration and count data, we could evaluate the performance of this method in terms of estimation of the three...
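The zero-generating process described in this record can be simulated directly: some food units are truly uncontaminated ("true zeros"), while contaminated units with low concentrations go undetected ("artificial zeros"). The sketch below uses an assumed lognormal concentration model and illustrative parameters; it is not the paper's model.

```python
import math
import random

random.seed(3)

def poisson(lam):
    # Knuth's multiplicative method; adequate for the small rates used here.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:
            return k - 1

prevalence = 0.6            # assumed fraction of contaminated food units
mu_log, sd_log = 0.5, 1.2   # assumed lognormal concentration, ln CFU/g
plated = 0.1                # assumed amount plated per unit, g

true_zeros = artificial_zeros = positives = 0
for _ in range(10000):
    if random.random() > prevalence:
        true_zeros += 1                 # uncontaminated unit
        continue
    conc = math.exp(random.gauss(mu_log, sd_log))      # CFU/g
    if poisson(conc * plated) == 0:
        artificial_zeros += 1           # contaminated but not detected
    else:
        positives += 1

print(true_zeros, artificial_zeros, positives)
```

A fitting procedure that treats all zeros as "true" would badly overestimate how many units are uncontaminated, which is exactly the confusion the record describes.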

  12. 2π proportional counting chamber for large-area-coated β sources

    Indian Academy of Sciences (India)

    Pramana – Journal of Physics, Volume 86, Issue 6. 2π proportional counting chamber for large-area-coated β sources ... A provision is made for change of the source and immediate measurement of source activity. These sources are used to calibrate the efficiency of contamination monitors at radiological ...

  13. Time Evolving Fission Chain Theory and Fast Neutron and Gamma-Ray Counting Distributions

    International Nuclear Information System (INIS)

    Kim, K. S.; Nakae, L. F.; Prasad, M. K.; Snyderman, N. J.; Verbeke, J. M.

    2015-01-01

    Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, the accumulation of fissions in time, and the accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting, with explicit formulas for all correlated moments up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for the probabilities of time dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time tagged data for neutron and gamma-ray counting and, from these data, the counting distributions.
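The "remarkably simple Monte Carlo realization" of a fission chain is a branching process: each neutron either leaks or induces a fission that emits new neutrons. The toy below is our own sketch in that spirit, not the paper's code; the fission probability and multiplicity distribution are illustrative, and the mean leaked count is checked against the point-model expectation (1 − p)/(1 − p·ν̄) for a subcritical system.

```python
import random

random.seed(4)

# Illustrative parameters: fission probability per neutron and a toy
# multiplicity (nu) distribution with mean nubar = 2.5.
P_FISSION = 0.3
NU_PMF = [(0, 0.03), (1, 0.16), (2, 0.32), (3, 0.30), (4, 0.15), (5, 0.04)]

def sample_nu():
    u, acc = random.random(), 0.0
    for nu, prob in NU_PMF:
        acc += prob
        if u < acc:
            return nu
    return NU_PMF[-1][0]

def chain():
    """Follow one chain from a single starting neutron.

    Each active neutron either induces a fission (adding nu neutrons)
    or leaks. Returns (fissions, leaked neutrons)."""
    active, fissions, leaked = 1, 0, 0
    while active:
        active -= 1
        if random.random() < P_FISSION:
            fissions += 1
            active += sample_nu()
        else:
            leaked += 1
    return fissions, leaked

runs = [chain() for _ in range(20000)]
mean_leaked = sum(l for _, l in runs) / len(runs)

nubar = sum(nu * pr for nu, pr in NU_PMF)
expected = (1 - P_FISSION) / (1 - P_FISSION * nubar)   # point-model mean
print(round(mean_leaked, 2), round(expected, 2))
```

Time tags and detection thinning would be layered on top of this skeleton to produce the counting distributions the record describes.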

  14. HIGH-RESOLUTION IMAGING OF THE ATLBS REGIONS: THE RADIO SOURCE COUNTS

    Energy Technology Data Exchange (ETDEWEB)

    Thorat, K.; Subrahmanyan, R.; Saripalli, L.; Ekers, R. D., E-mail: kshitij@rri.res.in [Raman Research Institute, C. V. Raman Avenue, Sadashivanagar, Bangalore 560080 (India)

    2013-01-01

    The Australia Telescope Low-brightness Survey (ATLBS) regions have been mosaic imaged at a radio frequency of 1.4 GHz with 6″ angular resolution and 72 μJy beam⁻¹ rms noise. The images (centered at R.A. 00h35m00s, decl. −67°00′00″ and R.A. 00h59m17s, decl. −67°00′00″, J2000 epoch) cover 8.42 deg² of sky area and have no artifacts or imaging errors above the image thermal noise. Multi-resolution radio and optical r-band images (made using the 4 m CTIO Blanco telescope) were used to recognize multi-component sources and prepare a source list; the detection threshold was 0.38 mJy in a low-resolution radio image made with a beam FWHM of 50″. Radio source counts in the flux density range 0.4-8.7 mJy are estimated, with corrections applied for noise bias, effective area, and resolution bias. The resolution bias is mitigated using low-resolution radio images, while effects of source confusion are removed by using high-resolution images for identifying blended sources. Below 1 mJy the ATLBS counts are systematically lower than previous estimates. Showing no evidence for an upturn down to 0.4 mJy, they do not require any changes in the radio source population down to the limit of the survey. The work suggests that automated image analysis for counts may depend on the ability of the imaging to reproduce connecting emission with low surface brightness and on the ability of the algorithm to recognize sources, which may require that source finding algorithms effectively work with multi-resolution and multi-wavelength data. The work underscores the importance of using source lists, as opposed to component lists, and of correcting for the noise bias in order to precisely estimate counts close to the image noise and determine the upturn at sub-mJy flux density.

  15. The negative binomial distribution as a model for external corrosion defect counts in buried pipelines

    International Nuclear Information System (INIS)

    Valor, Alma; Alfonso, Lester; Caleyo, Francisco; Vidal, Julio; Perez-Baruch, Eloy; Hallen, José M.

    2015-01-01

    Highlights: • Observed external-corrosion defects in underground pipelines show a tendency to cluster. • The Poisson distribution is unable to fit extensive count data for this type of defect. • In contrast, the negative binomial distribution provides a suitable count model for them. • Two spatial stochastic processes lead to the negative binomial distribution for defect counts. • They are the Gamma-Poisson mixed process and the compound Poisson process. • A Roger's process also arises as a plausible temporal stochastic process leading to corrosion defect clustering and to negative binomially distributed defect counts. - Abstract: The spatial distribution of external corrosion defects in buried pipelines is usually described as a Poisson process, which leads to corrosion defects being randomly distributed along the pipeline. However, in real operating conditions, the spatial distribution of defects departs considerably from Poisson statistics due to the aggregation of defects in groups or clusters. In this work, the statistical analysis of real corrosion data from underground pipelines operating in southern Mexico leads to the conclusion that the negative binomial distribution provides a better description of defect counts. The origin of this distribution from several processes is discussed. The analysed processes are the mixed Gamma-Poisson, compound Poisson and Roger's processes. The physical reasons behind them are discussed for the specific case of soil corrosion.
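The Gamma-Poisson route to the negative binomial named above can be verified by simulation: give each pipeline joint a gamma-distributed defect rate, draw Poisson counts given that rate, and the resulting counts are overdispersed with the negative binomial's index of dispersion 1 + μ/r and zero fraction (1 + μ/r)^(−r). Parameters below are illustrative, not the paper's field data.

```python
import math
import random

random.seed(5)

def poisson(lam):
    # Knuth's multiplicative method; adequate for the rates used here.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:
            return k - 1

# Gamma-Poisson mixture: gamma-distributed defect rate per joint,
# Poisson defect count given that rate.
shape, mean = 1.5, 4.0
draws = [poisson(random.gammavariate(shape, mean / shape)) for _ in range(50000)]

m = sum(draws) / len(draws)
v = sum((x - m) ** 2 for x in draws) / (len(draws) - 1)
dispersion = v / m                    # a pure Poisson process would give ~1
expected = 1.0 + mean / shape         # NB index of dispersion: 1 + mu/r

# The zero fraction also matches the negative binomial pmf at k = 0.
p0_emp = draws.count(0) / len(draws)
p0_nb = (1.0 + mean / shape) ** (-shape)
print(round(dispersion, 2), round(expected, 2))
print(round(p0_emp, 3), round(p0_nb, 3))
```

The variance-to-mean ratio well above one is the statistical signature of the clustering the highlights describe.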

  16. Somatic cell count distributions during lactation predict clinical mastitis

    NARCIS (Netherlands)

    Green, M.J.; Green, L.E.; Schukken, Y.H.; Bradley, A.J.; Peeler, E.J.; Barkema, H.W.; Haas, de Y.; Collis, V.J.; Medley, G.F.

    2004-01-01

    This research investigated somatic cell count (SCC) records during lactation, with the purpose of identifying distribution characteristics (mean and measures of variation) that were most closely associated with clinical mastitis. Three separate data sets were used, one containing quarter SCC (n =

  17. Preparation of source mounts for 4π counting

    International Nuclear Information System (INIS)

    Johnson, E.P.

    1991-01-01

    The 4πβ/γ counter in the ANSTO radioisotope standards laboratory at Lucas Heights constitutes part of the Australian national standard for radioactivity. Sources to be measured in the counter must be mounted on a substrate which is strong enough to withstand careful handling and transport. The substrate must also be electrically conducting to minimise counting errors caused by charging of the source, and it must have very low superficial density so that little or none of the radiation is absorbed. The entire process of fabrication of VYNS films, coating them with gold/palladium and transferring them to source mount rings, as carried out in the radioisotope standards laboratory, is documented. 3 refs., 2 tabs., 6 figs

  18. The European large area ISO survey - III. 90-μm extragalactic source counts

    DEFF Research Database (Denmark)

    Efstathiou, A.; Oliver, S.; Rowan-Robinson, M.

    2000-01-01

    We present results and source counts at 90 μm extracted from the preliminary analysis of the European Large Area ISO Survey (ELAIS). The survey covered about 12 deg² of the sky in four main areas and was carried out with the ISOPHOT instrument onboard the Infrared Space Observatory (ISO)... ...or small groups of galaxies, suggesting that the sample may include a significant fraction of luminous infrared galaxies. The source counts extracted from a reliable subset of the detected sources are in agreement with strongly evolving models of the starburst galaxy population.

  19. AKARI/IRC source catalogues and source counts for the IRAC Dark Field, ELAIS North and the AKARI Deep Field South

    Science.gov (United States)

    Davidge, H.; Serjeant, S.; Pearson, C.; Matsuhara, H.; Wada, T.; Dryer, B.; Barrufet, L.

    2017-12-01

    We present the first detailed analysis of three extragalactic fields (IRAC Dark Field, ELAIS-N1, ADF-S) observed by the infrared satellite AKARI, using a data analysis toolkit optimized specifically for the processing of extragalactic point sources. The InfraRed Camera (IRC) on AKARI complements the Spitzer Space Telescope via its comprehensive coverage between 8 and 24 μm, filling the gap between the Spitzer/IRAC and MIPS instruments. Source counts in the AKARI bands at 3.2, 4.1, 7, 11, 15 and 18 μm are presented. At near-infrared wavelengths, our source counts are consistent with counts made in other AKARI fields and in general with Spitzer/IRAC (except at 3.2 μm, where our counts lie above). In the mid-infrared (11-18 μm), we find our counts are consistent with both previous surveys by AKARI and the Spitzer peak-up imaging survey with the InfraRed Spectrograph (IRS). Using our counts to constrain contemporary evolutionary models, we find that although the models and counts agree at mid-infrared wavelengths, there are inconsistencies at wavelengths shortward of 7 μm, suggesting either a problem with stellar subtraction or the need for refinement of the stellar population models. We have also investigated the AKARI/IRC filters, and find an active galactic nucleus selection criterion out to z < 2 on the basis of AKARI 4.1, 11, 15 and 18 μm colours.

  20. Experimental investigation of statistical models describing distribution of counts

    International Nuclear Information System (INIS)

    Salma, I.; Zemplen-Papp, E.

    1992-01-01

    The binomial, Poisson and modified Poisson models which are used for describing the statistical nature of the distribution of counts are compared theoretically, and conclusions for application are considered. The validity of the Poisson and the modified Poisson statistical distribution for observing k events in a short time interval is investigated experimentally for various measuring times. The experiments to measure the influence of the significant radioactive decay were performed with 89mY (T1/2 = 16.06 s), using a multichannel analyser (4096 channels) in the multiscaling mode. According to the results, Poisson statistics describe the counting experiment for short measuring times (up to T = 0.5 T1/2) and its application is recommended. However, analysis of the data demonstrated, with confidence, that for long measurements (T ≥ T1/2) the Poisson distribution is not valid and the modified Poisson function is preferable. The practical implications in calculating uncertainties and in optimizing the measuring time are discussed. Differences between the standard deviations evaluated on the basis of the Poisson and binomial models are especially significant for experiments with long measuring times (T/T1/2 ≥ 2) and/or large detection efficiency (ε > 0.30). Optimization of the measuring time for paired observations yields the same solution for either the binomial or the Poisson distribution. (orig.)
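The binomial-versus-Poisson standard-deviation gap quantified above has a simple closed form. If N0 atoms each produce a detected count with probability p = ε(1 − 2^(−T/T1/2)), the binomial variance is smaller than the Poisson value by a factor (1 − p), so σ_binomial/σ_Poisson = √(1 − p). The short sketch below (our own arithmetic, not the paper's data) shows the ratio is near one for short counts or low efficiency and drops sharply for long measurements at high efficiency, consistent with the record's T/T1/2 ≥ 2 and ε > 0.30 thresholds.

```python
import math

def sd_ratio(eps, t_over_half):
    """sigma_binomial / sigma_Poisson for a decaying source counted for
    time T = t_over_half * T1/2 with detection efficiency eps."""
    p = eps * (1.0 - 2.0 ** (-t_over_half))   # detection prob. per atom
    return math.sqrt(1.0 - p)

for eps in (0.05, 0.30, 0.90):
    row = [round(sd_ratio(eps, t), 3) for t in (0.5, 1, 2, 5)]
    print(eps, row)
```

At ε = 0.05 the ratio stays above 0.99 even for long counts, while at ε = 0.90 and T = 5 T1/2 it falls below 0.4, which is why the two models diverge exactly in the regime the record identifies.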

  1. Counting statistics in low level radioactivity measurements fluctuating counting efficiency

    International Nuclear Information System (INIS)

    Pazdur, M.F.

    1976-01-01

    A divergence between the probability distribution of the number of nuclear disintegrations and the number of observed counts, caused by counting-efficiency fluctuation, is discussed. The negative binomial distribution is proposed to describe the probability distribution of the number of counts, instead of the Poisson distribution, which is assumed to hold for the number of nuclear disintegrations only. From actual measurements the r.m.s. amplitude of the counting-efficiency fluctuation is estimated. Some consequences of counting-efficiency fluctuation are investigated and the corresponding formulae are derived: (1) for the detection limit as a function of the number of partial measurements and the relative amplitude of counting-efficiency fluctuation, and (2) for the optimum allocation of the number of partial measurements between sample and background. (author)

  2. Fast radio burst event rate counts - I. Interpreting the observations

    Science.gov (United States)

    Macquart, J.-P.; Ekers, R. D.

    2018-02-01

    The fluence distribution of the fast radio burst (FRB) population (the `source count' distribution, N(>F) ∝ F^α) is a crucial diagnostic of its distance distribution, and hence of the progenitor evolutionary history. We critically reanalyse current estimates of the FRB source count distribution. We demonstrate that the Lorimer burst (FRB 010724) is subject to discovery bias, and should be excluded from all statistical studies of the population. We re-examine the evidence for flat, α > -1, source count estimates based on the ratio of single-beam to multiple-beam detections with the Parkes multibeam receiver, and show that current data imply only a very weak constraint of α ≲ -1.3. A maximum-likelihood analysis applied to the portion of the Parkes FRB population detected above the observational completeness fluence of 2 Jy ms yields α = -2.6 (+0.7, -1.3). Uncertainties in the location of each FRB within the Parkes beam render estimates of the Parkes event rate uncertain in both the normalizing survey area and the estimated post-beam-corrected completeness fluence; this uncertainty needs to be accounted for when comparing the event rate against event rates measured at other telescopes.
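The maximum-likelihood slope estimate used above has a standard closed form for a complete sample: for N(>F) ∝ F^α with F ≥ F_min, the MLE is α̂ = −n / Σ ln(F_i/F_min), a Hill-type estimator, with asymptotic 1σ uncertainty |α̂|/√n. The sketch below recovers a known slope from synthetic fluences (our toy data, not the Parkes sample).

```python
import math
import random

random.seed(6)

F_MIN, ALPHA_TRUE = 2.0, -1.5     # Jy ms; -1.5 is the Euclidean slope
n = 500

# Inverse-transform sampling from P(>F) = (F / F_MIN) ** alpha.
fluences = [F_MIN * random.random() ** (1.0 / ALPHA_TRUE) for _ in range(n)]

# Hill-type MLE for the cumulative source-count slope.
alpha_hat = -n / sum(math.log(f / F_MIN) for f in fluences)
sigma = abs(alpha_hat) / math.sqrt(n)   # asymptotic 1-sigma error
print(round(alpha_hat, 2), round(sigma, 2))
```

Restricting the sample to fluences above the completeness limit, as the record does with its 2 Jy ms cut, is what makes this estimator applicable.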

  3. Deep Galex Observations of the Coma Cluster: Source Catalog and Galaxy Counts

    Science.gov (United States)

    Hammer, D.; Hornschemeier, A. E.; Mobasher, B.; Miller, N.; Smith, R.; Arnouts, S.; Milliard, B.; Jenkins, L.

    2010-01-01

    We present a source catalog from deep 26 ks GALEX observations of the Coma cluster in the far-UV (FUV; 1530 Å) and near-UV (NUV; 2310 Å) wavebands. The observed field is centered 0.9 deg (1.6 Mpc) south-west of the Coma core, and has full optical photometric coverage by SDSS and spectroscopic coverage to r ~ 21. The catalog consists of 9700 galaxies with GALEX and SDSS photometry, including 242 spectroscopically confirmed Coma member galaxies that range from giant spirals and elliptical galaxies to dwarf irregular and early-type galaxies. The full multi-wavelength catalog (cluster plus background galaxies) is 80% complete to NUV = 23 and FUV = 23.5, and has a limiting depth at NUV = 24.5 and FUV = 25.0, which corresponds to a star formation rate of 10⁻³ M_sun yr⁻¹ at the distance of Coma. The GALEX images presented here are very deep and include detections of many resolved cluster members superposed on a dense field of unresolved background galaxies. This required a two-fold approach to generating a source catalog: we used a Bayesian deblending algorithm to measure faint and compact sources (using SDSS coordinates as a position prior), and used the GALEX pipeline catalog for bright and/or extended objects. We performed simulations to assess the importance of systematic effects (e.g. object blends, source confusion, Eddington bias) that influence source detection and photometry when using both methods. The Bayesian deblending method roughly doubles the number of source detections and provides reliable photometry to a few magnitudes deeper than the GALEX pipeline catalog. This method is also free from source confusion over the UV magnitude range studied here; conversely, we estimate that the GALEX pipeline catalogs are confusion limited at NUV ~ 23 and FUV ~ 24. We have measured the total UV galaxy counts using our catalog and report a 50% excess of counts across FUV = 22-23.5 and NUV = 21.5-23 relative to previous GALEX

  4. Count data, detection probabilities, and the demography, dynamics, distribution, and decline of amphibians.

    Science.gov (United States)

    Schmidt, Benedikt R

    2003-08-01

    The evidence for amphibian population declines is based on count data that were not adjusted for detection probabilities. Such data are not reliable even when collected using standard methods. The formula C = Np (where C is a count, N the true parameter value, and p is a detection probability) relates count data to demography, population size, or distributions. With unadjusted count data, one assumes a linear relationship between C and N and that p is constant. These assumptions are unlikely to be met in studies of amphibian populations. Amphibian population data should be based on methods that account for detection probabilities.
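The pitfall behind C = Np is easy to demonstrate: a perfectly stable population surveyed with declining detectability produces counts that fake a decline, and dividing by an estimate of p recovers the truth. The numbers below are illustrative, not from any amphibian study.

```python
import random

random.seed(7)

# A stable population of N = 100 surveyed five times with a detection
# probability that happens to decline across surveys.
true_N = [100, 100, 100, 100, 100]
p = [0.8, 0.7, 0.6, 0.5, 0.4]

# Raw counts C: each individual is detected independently with prob p.
counts = [sum(1 for _ in range(N) if random.random() < pi)
          for N, pi in zip(true_N, p)]
print(counts)   # drifts downward although N is constant

# Adjusting by the detection probability (known exactly here, estimated
# via capture-recapture or similar methods in practice) recovers N.
adjusted = [c / pi for c, pi in zip(counts, p)]
print([round(a) for a in adjusted])
```

This is precisely why the record argues that unadjusted counts cannot separate a real decline from a change in p.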

  5. DEEP GALEX OBSERVATIONS OF THE COMA CLUSTER: SOURCE CATALOG AND GALAXY COUNTS

    International Nuclear Information System (INIS)

    Hammer, D.; Hornschemeier, A. E.; Miller, N.; Jenkins, L.; Mobasher, B.; Smith, R.; Arnouts, S.; Milliard, B.

    2010-01-01

    We present a source catalog from a deep 26 ks Galaxy Evolution Explorer (GALEX) observation of the Coma cluster in the far-UV (FUV; 1530 Å) and near-UV (NUV; 2310 Å) wavebands. The observed field is centered ∼0.9 deg (1.6 Mpc) southwest of the Coma core in a well-studied region of the cluster known as 'Coma-3'. The entire field is located within the apparent virial radius of the Coma cluster, and has optical photometric coverage with the Sloan Digital Sky Survey (SDSS) and deep spectroscopic coverage to r ∼ 21. We detect GALEX sources to NUV = 24.5 and FUV = 25.0, which corresponds to a star formation rate of ∼10⁻³ M_sun yr⁻¹ for galaxies at the distance of Coma. We have assembled a catalog of 9700 galaxies with GALEX and SDSS photometry, including 242 spectroscopically confirmed Coma member galaxies that span a large range of galaxy types from giant spirals and elliptical galaxies to dwarf irregular and early-type galaxies. The full multi-wavelength catalog (cluster plus background galaxies) is ∼80% complete to NUV = 23 and FUV = 23.5. The GALEX images presented here are very deep and include detections of many resolved cluster members superposed on a dense field of unresolved background galaxies. This required a two-fold approach to generating a source catalog: we used a Bayesian deblending algorithm to measure faint and compact sources (using SDSS coordinates as a position prior), and used the GALEX pipeline catalog for bright and/or extended objects. We performed simulations to assess the importance of systematic effects (e.g., object blends, source confusion, Eddington bias) that influence the source detection and photometry when using both methods. The Bayesian deblending method roughly doubles the number of source detections and provides reliable photometry to a few magnitudes deeper than the GALEX pipeline catalog. This method is free from source confusion over the UV magnitude range studied here; we estimate that the GALEX pipeline catalogs are

  6. Testing the count rate performance of the scintillation camera by exponential attenuation: Decaying source; Multiple filters

    International Nuclear Information System (INIS)

    Adams, R.; Mena, I.

    1988-01-01

    An algorithm and two FORTRAN programs have been developed to evaluate the count rate performance of scintillation cameras from count rates reduced exponentially, either by a decaying source or by filtration. The first method is used with short-lived radionuclides such as 191mIr or 191mAu. The second implements a National Electrical Manufacturers Association (NEMA) protocol in which the count rate from a source of 99mTc is attenuated by a varying number of copper filters stacked over it. The count rate at each data point is corrected for deadtime loss after assigning an arbitrary deadtime (τ). A second-order polynomial equation is fitted to the logarithms of the net count rate values: ln(R) = A + BT + CT², where R is the net corrected count rate (cps) and T is the elapsed time (or the filter thickness in the NEMA method). Depending on C, τ is incremented or decremented iteratively, and the count rate corrections and curve fittings are repeated until C approaches zero, indicating a correct value of the deadtime (τ). The program then plots the measured count rate versus the corrected count rate values
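The iterative τ search described above can be sketched on synthetic decaying-source data. The toy below assumes a non-paralyzable deadtime model (R_obs = R/(1 + Rτ), corrected by R = R_obs/(1 − R_obs·τ)) and bisects on τ until the quadratic coefficient C of the fit ln(R) = A + BT + CT² vanishes; parameter values are illustrative, not the paper's.

```python
import numpy as np

TAU_TRUE = 4.0e-6      # s, the deadtime we hope to recover
HALF_LIFE = 360.0      # s, a short-lived source (illustrative)
lam = np.log(2) / HALF_LIFE

t = np.linspace(0.0, 1200.0, 60)
r_true = 2.0e5 * np.exp(-lam * t)               # true rate, cps
r_obs = r_true / (1.0 + r_true * TAU_TRUE)      # non-paralyzable losses

def quad_coeff(tau):
    # Correct for a trial deadtime, then fit ln(R) = A + B*T + C*T^2.
    r_corr = r_obs / (1.0 - r_obs * tau)
    return np.polyfit(t, np.log(r_corr), 2)[0]  # coefficient C

# Undercorrection (tau too small) leaves C < 0; overcorrection gives C > 0.
lo, hi = 0.0, 8.0e-6
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if quad_coeff(mid) < 0.0:
        lo = mid
    else:
        hi = mid

tau_hat = 0.5 * (lo + hi)
print(f"{tau_hat:.2e}")    # close to TAU_TRUE
```

When the trial τ equals the true deadtime, the corrected curve is exactly exponential and the quadratic term disappears, which is the convergence criterion the record describes.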

  7. Calculation of the counting efficiency for extended sources

    International Nuclear Information System (INIS)

    Korun, M.; Vidmar, T.

    2002-01-01

    A computer program for the calculation of efficiency calibration curves for extended samples counted on gamma- and X-ray spectrometers is described. The program calculates efficiency calibration curves for homogeneous cylindrical samples placed coaxially with the symmetry axis of the detector. The method of calculation is based on integration, over the sample volume, of the efficiencies for point sources measured in free space on an equidistant grid of points. The attenuation of photons within the sample is taken into account using a self-attenuation function calculated with a two-dimensional detector model. (author)
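The volume-integration idea can be sketched for a coaxial cylinder: average an assumed point-source efficiency over volume elements, weighting each by an assumed self-attenuation factor. Everything below (the inverse-square efficiency model, the exit-through-the-base attenuation, the parameters) is an illustrative assumption, not the described program's model.

```python
import math

MU = 0.02                       # assumed linear attenuation coeff., 1/cm
R_SAMPLE, H_SAMPLE = 3.0, 4.0   # cm, cylinder coaxial with the detector
DET_Z = -1.0                    # assumed detector reference point on axis, cm

def point_eff(r, z):
    # Assumed inverse-square-like point efficiency at radius r, height z.
    d2 = r * r + (z - DET_Z) ** 2
    return 1.0 / (4.0 * math.pi * d2)

def self_attenuation(z):
    # Assume photons exit through the cylinder base toward the detector.
    return math.exp(-MU * z)

# Midpoint rule over (r, z); the azimuthal integral is trivial by symmetry.
nr, nz = 60, 60
total = volume = 0.0
for i in range(nr):
    r = (i + 0.5) * R_SAMPLE / nr
    for j in range(nz):
        z = (j + 0.5) * H_SAMPLE / nz
        dv = 2 * math.pi * r * (R_SAMPLE / nr) * (H_SAMPLE / nz)
        total += point_eff(r, z) * self_attenuation(z) * dv
        volume += dv

efficiency = total / volume     # volume-averaged efficiency
print(round(efficiency, 5))
```

The real program replaces both model functions with measured point-source efficiencies on a grid and a two-dimensional detector model for the attenuation paths, but the averaging structure is the same.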

  8. Theory of overdispersion in counting statistics caused by fluctuating probabilities

    International Nuclear Information System (INIS)

    Semkow, Thomas M.

    1999-01-01

    It is shown that random Lexis fluctuations of probabilities, such as the probability of decay or detection, cause the counting statistics to be overdispersed with respect to the classical binomial, Poisson, or Gaussian distributions. The generating and distribution functions for the overdispersed counting statistics are derived. Applications are given to radioactive decay with detection and to more complex experiments, as well as to distinguishing between source and background in the presence of overdispersion. Monte Carlo verifications are provided.
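The effect is easy to reproduce in a small Monte Carlo of our own (illustrative numbers, not the paper's): with a fixed detection probability the counts are binomial and slightly underdispersed, but letting the probability fluctuate between trials inflates the variance well beyond the mean.

```python
import random

random.seed(8)

N = 1000                    # decays per trial
p_mean, p_sd = 0.2, 0.05    # fluctuating detection probability (assumed)

def binomial(n, p):
    return sum(1 for _ in range(n) if random.random() < p)

# Fixed p: classical binomial counts. Fluctuating p: Lexis-type mixing.
fixed = [binomial(N, p_mean) for _ in range(3000)]
fluct = [binomial(N, min(max(random.gauss(p_mean, p_sd), 0.0), 1.0))
         for _ in range(3000)]

def dispersion(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return v / m            # variance-to-mean ratio

print(round(dispersion(fixed), 2), round(dispersion(fluct), 2))
```

The fluctuating-p variance picks up an extra N²·σ_p² term, which is the overdispersion mechanism the record formalises through generating functions.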

  9. Applying Multivariate Discrete Distributions to Genetically Informative Count Data.

    Science.gov (United States)

    Kirkpatrick, Robert M; Neale, Michael C

    2016-03-01

    We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of two discrete models. The new methods are implemented using R and OpenMx and are freely available.

  10. Count-to-count time interval distribution analysis in a fast reactor

    International Nuclear Information System (INIS)

    Perez-Navarro Gomez, A.

    1973-01-01

    The most important kinetic parameters have been measured at the zero power fast reactor CORAL-I by means of the reactor noise analysis in the time domain, using measurements of the count-to-count time intervals. (Author) 69 refs

  11. Measurement of uranium and plutonium in solid waste by passive photon or neutron counting and isotopic neutron source interrogation

    Energy Technology Data Exchange (ETDEWEB)

    Crane, T.W.

    1980-03-01

    A summary of the status and applicability of nondestructive assay (NDA) techniques for the measurement of uranium and plutonium in 55-gal barrels of solid waste is reported. The NDA techniques reviewed include passive gamma-ray and x-ray counting with scintillator, solid-state, and proportional gas photon detectors, passive neutron counting, and active neutron interrogation with neutron and gamma-ray counting. The active neutron interrogation methods are limited to those employing isotopic neutron sources. Three generic neutron sources (alpha-n, photoneutron, and 252Cf) are considered. The neutron detectors reviewed for both prompt and delayed fission neutron detection with the above sources include thermal (3He, 10BF3) and recoil (4He, CH4) proportional gas detectors and liquid and plastic scintillator detectors. The instrument found to be best suited for low-level measurements (< 10 nCi/g) is the 252Cf Shuffler. The measurement technique consists of passive neutron counting followed by cyclic activation using a 252Cf source and delayed neutron counting with the source withdrawn. It is recommended that a waste assay station composed of a 252Cf Shuffler, a gamma-ray scanner, and a screening station be tested and evaluated at a nuclear waste site. 34 figures, 15 tables.

  12. Deep 3 GHz number counts from a P(D) fluctuation analysis

    Science.gov (United States)

    Vernstrom, T.; Scott, Douglas; Wall, J. V.; Condon, J. J.; Cotton, W. D.; Fomalont, E. B.; Kellermann, K. I.; Miller, N.; Perley, R. A.

    2014-05-01

    Radio source counts constrain galaxy populations and evolution, as well as the global star formation history. However, there is considerable disagreement among the published 1.4-GHz source counts below 100 μJy. Here, we present a statistical method for estimating the μJy and even sub-μJy source count using new deep wide-band 3-GHz data in the Lockman Hole from the Karl G. Jansky Very Large Array. We analysed the confusion amplitude distribution P(D), which provides a fresh approach in the form of a more robust model, with a comprehensive error analysis. We tested this method on a large-scale simulation, incorporating clustering and finite source sizes. We discuss in detail our statistical methods for fitting using Markov chain Monte Carlo, handling correlations, and systematic errors from the use of wide-band radio interferometric data. We demonstrated that the source count can be constrained down to 50 nJy, a factor of 20 below the rms confusion. We found the differential source count near 10 μJy to have a slope of −1.7, decreasing to about −1.4 at fainter flux densities. At 3 GHz, the rms confusion in an 8-arcsec full width at half-maximum beam is ∼1.2 μJy beam⁻¹, and the radio background temperature is ∼14 mK. Our counts are broadly consistent with published evolutionary models. With these results, we were also able to constrain the peak of the Euclidean normalized differential source count of any possible new radio populations that would contribute to the cosmic radio background down to 50 nJy.
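
The idea behind a P(D) analysis can be illustrated with a toy Monte Carlo (a sketch, not the authors' pipeline): draw Poisson numbers of sources per beam from an assumed power-law count dN/dS ∝ S^−1.7 and histogram the summed deflections. The resulting P(D) is positively skewed, and its shape is what constrains the counts below the confusion limit. Normalization, flux range, and beam treatment here are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed differential count dN/dS = 5 * S**-1.7 (sources per beam per unit flux)
edges = np.logspace(-3, 1, 201)              # flux bins, arbitrary units
S = 0.5 * (edges[1:] + edges[:-1])           # bin-centre fluxes
mu = 5.0 * S**-1.7 * np.diff(edges)          # mean sources per beam per bin

n = rng.poisson(mu, size=(20_000, mu.size))  # source counts in 20 000 synthetic beams
D = n @ S                                    # total deflection per beam

mean_th = (mu * S).sum()                     # analytic mean of P(D)
skew = ((D - D.mean()) ** 3).mean() / D.std() ** 3   # positive for confusion noise
```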

  13. Effects of the thickness of gold deposited on a source backing film in the 4πβ-counting

    International Nuclear Information System (INIS)

    Miyahara, Hiroshi; Yoshida, Makoto; Watanabe, Tamaki

    1976-01-01

    A gold-deposited VYNS film has generally been used as a source backing in 4πβ-counting to reduce the absorption of β-rays. The film with its gold coating is usually a few times thicker than the VYNS film itself. However, because the appropriate thickness of gold has not yet been determined, the effects of gold thickness on electrical resistivity, plateau characteristics and β-ray counting efficiency were studied. 198 Au (960 keV), 60 Co (315 keV), 59 Fe (273 keV) and 95 Nb (160 keV), prepared as sources by the aluminium chloride treatment method, were used. Gold was evaporated at a deposition rate of 1–5 μg/cm²/min at a pressure of less than 1 × 10⁻⁵ Torr. Results show that gold deposition on the side opposite the source after source preparation is essential. In this case, a maximum counting efficiency is obtained at a mean thickness of 2 μg/cm². When gold is deposited only on the same side as the source, a maximum counting efficiency, which is less than that in the former case, is obtained at a mean thickness of 20 μg/cm². (Evans, J.)

  14. Determining random counts in liquid scintillation counting

    International Nuclear Information System (INIS)

    Horrocks, D.L.

    1979-01-01

    During measurements involving coincidence counting techniques, errors can arise due to the detection of chance or random coincidences in the multiple detectors used. A method and the electronic circuits necessary are here described for eliminating this source of error in liquid scintillation detectors used in coincidence counting. (UK)

  15. The distribution of polarized radio sources >15 μJy IN GOODS-N

    International Nuclear Information System (INIS)

    Rudnick, L.; Owen, F. N.

    2014-01-01

    We present deep Very Large Array observations of the polarization of radio sources in the GOODS-N field at 1.4 GHz at resolutions of 1.6″ and 10″. At 1.6″, we find that the peak flux cumulative number count distribution is N(>p) ≈ 45 (p/30 μJy)^−0.6 per square degree above a detection threshold of 14.5 μJy. This represents a break from the steeper slopes at higher flux densities, resulting in fewer sources predicted for future surveys with the Square Kilometer Array and its precursors. It provides a significant challenge for using background rotation measures (RMs) to study clusters of galaxies or individual galaxies. Most of the polarized sources are well above our detection limit, and they are also radio galaxies that are well resolved even at 10″, with redshifts from ∼0.2–1.9. We determined a total polarized flux for each source by integrating the 10″ polarized intensity maps, as will be done by upcoming surveys such as POSSUM. These total polarized fluxes are a factor of two higher, on average, than the peak polarized flux at 1.6″; this would increase the number counts by ∼50% at a fixed flux level. The detected sources have RMs with a characteristic rms scatter of ∼11 rad m⁻² around the local Galactic value, after eliminating likely outliers. The median fractional polarization from all total intensity sources does not continue the trend of increasing at lower flux densities, as seen for stronger sources. The changes in the polarization characteristics seen at these low fluxes likely represent the increasing dominance of star-forming galaxies.

  16. The optimal on-source region size for detections with counting-type telescopes

    Energy Technology Data Exchange (ETDEWEB)

    Klepser, Stefan

    2017-01-15

    The on-source region is typically a circular area with radius θ in which the signal is expected to appear with the shape of the instrument point spread function (PSF). This paper addresses the question of what θ maximises the probability of detection for a given PSF width and background event density. In the high count number limit and assuming a Gaussian PSF profile, the optimum is found to be at ζ∞² ∼ 2.51 times the squared PSF width σ²_PSF39. While this number is shown to be a good choice in many cases, a dynamic formula for cases of lower count numbers, which favour larger on-source regions, is given. The recipe to get to this parametrisation can also be applied to cases with a non-Gaussian PSF. This result can standardise and simplify analysis procedures, reduce trials and eliminate the need for experience-based ad hoc cut definitions or expensive case-by-case Monte Carlo simulations.
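
The quoted optimum of 2.51 can be re-derived in a few lines: for a Gaussian PSF, the signal inside radius θ is proportional to 1 − exp(−θ²/2σ²) while the background grows as θ², so in the high-count limit the significance S/√B is maximal where x = θ²/σ² solves exp(x/2) = 1 + x. A minimal numerical check by bisection reproduces the paper's number.

```python
import math

def f(x):
    # Stationarity condition of (1 - exp(-x/2)) / sqrt(x): exp(x/2) - 1 - x = 0
    return math.exp(0.5 * x) - 1.0 - x

lo, hi = 1.0, 4.0          # f(1) < 0 and f(4) > 0, so the nontrivial root is bracketed
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if f(mid) < 0.0:
        lo = mid
    else:
        hi = mid
x_opt = 0.5 * (lo + hi)    # optimal theta^2 / sigma^2, approximately 2.51
```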

  17. The optimal on-source region size for detections with counting-type telescopes

    International Nuclear Information System (INIS)

    Klepser, Stefan

    2017-01-01

    The on-source region is typically a circular area with radius θ in which the signal is expected to appear with the shape of the instrument point spread function (PSF). This paper addresses the question of what θ maximises the probability of detection for a given PSF width and background event density. In the high count number limit and assuming a Gaussian PSF profile, the optimum is found to be at ζ∞² ∼ 2.51 times the squared PSF width σ²_PSF39. While this number is shown to be a good choice in many cases, a dynamic formula for cases of lower count numbers, which favour larger on-source regions, is given. The recipe to get to this parametrisation can also be applied to cases with a non-Gaussian PSF. This result can standardise and simplify analysis procedures, reduce trials and eliminate the need for experience-based ad hoc cut definitions or expensive case-by-case Monte Carlo simulations.

  18. Over-Distribution in Source Memory

    Science.gov (United States)

    Brainerd, C. J.; Reyna, V. F.; Holliday, R. E.; Nakamura, K.

    2012-01-01

    Semantic false memories are confounded with a second type of error, over-distribution, in which items are attributed to contradictory episodic states. Over-distribution errors have proved to be more common than false memories when the two are disentangled. We investigated whether over-distribution is prevalent in another classic false memory paradigm: source monitoring. It is. Conventional false memory responses (source misattributions) were predominantly over-distribution errors, but unlike semantic false memory, over-distribution also accounted for more than half of true memory responses (correct source attributions). Experimental control of over-distribution was achieved via a series of manipulations that affected either recollection of contextual details or item memory (concreteness, frequency, list-order, number of presentation contexts, and individual differences in verbatim memory). A theoretical model (conjoint process dissociation) was used to analyze the data; it predicts that (a) over-distribution is directly proportional to item memory but inversely proportional to recollection and (b) item memory is not a necessary precondition for recollection of contextual details. The results were consistent with both predictions. PMID:21942494

  19. The Bayesian count rate probability distribution in measurement of ionizing radiation by use of a ratemeter

    Energy Technology Data Exchange (ETDEWEB)

    Weise, K.

    2004-06-01

    Recent metrological developments concerning measurement uncertainty, founded on Bayesian statistics, give rise to a revision of several parts of the DIN 25482 and ISO 11929 standard series. These series stipulate detection limits and decision thresholds for ionizing-radiation measurements. Parts 3 and 4 of these series deal with measurements made with linear-scale analogue ratemeters. A normal frequency distribution of the momentary ratemeter indication for a fixed count rate value is assumed. The actual distribution, first calculated numerically by solving an integral equation, differs considerably from the normal distribution, although the latter approximates it for sufficiently large values of the count rate to be measured. As is shown, this similarly holds true for the Bayesian probability distribution of the count rate for sufficiently large measured values indicated by the ratemeter. This distribution follows from the first one mentioned by means of the Bayes theorem. Its expectation value and variance are needed for the standards to be revised on the basis of Bayesian statistics. Simple expressions are given by the present standards for estimating these parameters and for calculating the detection limit and the decision threshold. As is also shown, the same expressions can similarly be used as sufficient approximations by the revised standards if, roughly, the indicated value exceeds the reciprocal of the ratemeter relaxation time constant. (orig.)

  20. One, Two, Three, Four, Nothing More: An Investigation of the Conceptual Sources of the Verbal Counting Principles

    Science.gov (United States)

    Le Corre, Mathieu; Carey, Susan

    2007-01-01

    Since the publication of [Gelman, R., & Gallistel, C. R. (1978). "The child's understanding of number." Cambridge, MA: Harvard University Press.] seminal work on the development of verbal counting as a representation of number, the nature of the ontogenetic sources of the verbal counting principles has been intensely debated. The present…

  1. Counting statistics in radioactivity measurements

    International Nuclear Information System (INIS)

    Martin, J.

    1975-01-01

    The application of statistical methods to radioactivity measurement problems is analyzed in several chapters devoted successively to: the statistical nature of radioactivity counts; the application to radioactive counting of two theoretical probability distributions, Poisson's distribution law and the Laplace-Gauss law; true counting laws; corrections related to the nature of the apparatus; statistical techniques in gamma spectrometry [fr

  2. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'

    DEFF Research Database (Denmark)

    de Nijs, Robin

    2015-01-01

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. Redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods, and compared to the theoretical values for a Poisson distribution. Statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving of the half-count image had a severe impact on counting statistics for counts below 100.

  3. Radiation counting statistics

    Energy Technology Data Exchange (ETDEWEB)

    Suh, M. Y.; Jee, K. Y.; Park, K. K.; Park, Y. J.; Kim, W. H

    1999-08-01

    This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiment. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. (Author). 11 refs., 8 tabs., 8 figs.
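
A standard example of the uncertainty evaluation such a report covers: gross and background counts are Poisson-distributed, so each rate carries an uncertainty of √N/t, and the net-rate uncertainty combines in quadrature. The numbers below are illustrative.

```python
import math

def net_rate(n_gross, t_gross, n_bkg, t_bkg):
    """Net count rate and its standard uncertainty from Poisson statistics."""
    r_g = n_gross / t_gross
    r_b = n_bkg / t_bkg
    # Var(N) = N for Poisson counts, so Var(r) = N / t^2; variances add.
    var = n_gross / t_gross**2 + n_bkg / t_bkg**2
    return r_g - r_b, math.sqrt(var)

rate, sigma = net_rate(4000, 100.0, 900, 300.0)   # counts, live times in seconds
# rate = 40.0 - 3.0 = 37.0 cps
```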

  4. Radiation counting statistics

    Energy Technology Data Exchange (ETDEWEB)

    Suh, M. Y.; Jee, K. Y.; Park, K. K. [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-08-01

    This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiments. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. 11 refs., 6 figs., 8 tabs. (Author)

  5. Radiation counting statistics

    International Nuclear Information System (INIS)

    Suh, M. Y.; Jee, K. Y.; Park, K. K.; Park, Y. J.; Kim, W. H.

    1999-08-01

    This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiment. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. (Author). 11 refs., 8 tabs., 8 figs

  6. A sampling device for counting insect egg clusters and measuring vertical distribution of vegetation

    Science.gov (United States)

    Robert L. Talerico; Robert W., Jr. Wilson

    1978-01-01

    The use of a vertical sampling pole that delineates known volumes and position is illustrated and demonstrated for counting egg clusters of N. sertifer. The pole can also be used to estimate vertical and horizontal coverage, distribution or damage of vegetation or foliage.

  7. Systematic management of sealed source and nucleonic counting system in field service

    International Nuclear Information System (INIS)

    Mahadi Mustapha; Mohd Fitri Abdul Rahman; Jaafar Abdullah

    2005-01-01

    The PAT group has received many service requests from oil and gas plants. All of these services use sealed sources and nucleonic counting systems. This paper describes the detailed management carried out before going into field service. This management is important to ensure the job is done smoothly and safely for radiation workers and the public. Furthermore, this management is in line with the regulations from LPTA. (Author)

  8. Calibration of the Accuscan II In Vivo System for I-131 Thyroid Counting

    Energy Technology Data Exchange (ETDEWEB)

    Orval R. Perry; David L. Georgeson

    2011-07-01

    This report describes the March 2011 calibration of the Accuscan II HpGe In Vivo system for I-131 thyroid counting. The source used for the calibration was an Analytics mixed gamma source 82834-121 distributed in an epoxy matrix in a Wheaton liquid scintillation vial, with energies from 88.0 keV to 1836.1 keV. The center of the detectors was positioned 64 feet from the vault floor. This position places the approximate center line of the detectors at the center line of the source in the thyroid tube. The calibration was performed using an RMC II phantom (Appendix J). Validation testing was performed using a Ba-133 source and an ANSI N44.3 phantom (Appendix I). This report includes an overview introduction and records for the energy/FWHM and efficiency calibrations, including verification counting. The Accuscan II system was successfully calibrated for counting the thyroid for I-131 and verified in accordance with ANSI/HPS N13.30-1996 criteria.

  9. Cosmology from angular size counts of extragalactic radio sources

    International Nuclear Information System (INIS)

    Kapahi, V.K.

    1975-01-01

    The cosmological implications of the observed angular sizes of extragalactic radio sources are investigated using (i) the log N–log θ relation, where N is the number of sources with an angular size greater than a value θ, for the complete sample of 3CR sources, and (ii) the θ_median versus flux density (S) relation derived from the 3CR, the All-sky, and the Ooty occultation surveys, spanning a flux density range of about 300:1. The method of estimating the expected N(θ) and θ_m(S) relations for a uniform distribution of sources in space is outlined. Since values of θ ≳ 100 arcsec in the 3C sample arise from sources of small z, the slope of the N(θ) relation in this range is practically independent of the world model and the distribution of source sizes, but depends strongly on the radio luminosity function (RLF). From the observed slope the RLF is derived, in the luminosity range of about 10²³ to 10²⁶ W Hz⁻¹ sr⁻¹ at 178 MHz, to be of the form ρ(P)dP ∝ P^−2.1 dP. It is shown that the angular size data provide independent evidence of evolution in source properties with epoch. It is difficult to explain the data with the simple steady-state theory even if identified QSOs are excluded from the source samples and a local deficiency of strong sources is postulated. The simplest evolutionary scheme that fits the data in the Einstein–de Sitter cosmology indicates that (a) the local RLF steepens considerably at high luminosities, (b) the comoving density of high-luminosity sources increases with z in a manner similar to that implied by the log N–log S data and by the V/V_m test for QSOs, and (c) the mean physical sizes of radio sources evolve with z approximately as (1+z)⁻¹. Similar evolutionary effects appear to be present for QSOs as well as radio galaxies. (author)

  10. Determining {sup 252}Cf source strength by absolute passive neutron correlation counting

    Energy Technology Data Exchange (ETDEWEB)

    Croft, S. [Oak Ridge National Laboratory, Oak Ridge, TN 37831-6166 (United States); Henzlova, D., E-mail: henzlova@lanl.gov [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)

    2013-06-21

    Physically small, lightly encapsulated radionuclide sources containing 252Cf are widely used for a vast variety of industrial, medical, educational and research applications requiring a convenient source of neutrons. For many quantitative applications, such as detector efficiency calibrations, the absolute strength of the neutron emission is needed. In this work we show how, by using a neutron multiplicity counter, the neutron emission rate can be obtained with high accuracy. This provides an independent and alternative way to create reference sources in-house for laboratories such as ours engaged in international safeguards metrology. The method makes use of the unique and well-known properties of the 252Cf spontaneous fission system and applies advanced neutron correlation counting methods. We lay out the foundation of the method and demonstrate it experimentally. We show that accuracy comparable to the best methods currently used by national bodies to certify neutron source strengths is possible.
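
The point-model algebra behind such a measurement can be sketched in simplified textbook form (not the authors' full treatment): for a 252Cf source with fission rate F counted at efficiency ε with doubles gate fraction f_d, the singles and doubles rates are S = Fεν₁ and D = Fε²ν₂f_d, where ν₁ and ν₂ are the first two reduced factorial moments of the 252Cf spontaneous-fission multiplicity distribution (taken here as ≈3.757 and ≈5.98; treat these values and the idealized rate equations as assumptions). Two measured rates then determine both ε and F, and the neutron emission rate follows as F·ν₁.

```python
NU1 = 3.757   # first factorial moment of 252Cf SF neutron multiplicity (approx.)
NU2 = 5.98    # second reduced factorial moment <nu(nu-1)>/2 (approx.)

def cf252_fission_rate(singles, doubles, f_d):
    """Invert the point-model rate equations S = F*eps*NU1, D = F*eps**2*NU2*f_d."""
    eps = doubles * NU1 / (singles * NU2 * f_d)   # efficiency from the ratio D/S
    F = singles / (eps * NU1)                     # fission rate from singles
    return F, eps

# Round trip: forward-model the rates for a known source, then invert them.
F_true, eps_true, f_d = 1.0e4, 0.3, 0.6
S = F_true * eps_true * NU1
D = F_true * eps_true**2 * NU2 * f_d
F_est, eps_est = cf252_fission_rate(S, D, f_d)
```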

  11. Accommodating Binary and Count Variables in Mediation: A Case for Conditional Indirect Effects

    Science.gov (United States)

    Geldhof, G. John; Anthony, Katherine P.; Selig, James P.; Mendez-Luck, Carolyn A.

    2018-01-01

    The existence of several accessible sources has led to a proliferation of mediation models in the applied research literature. Most of these sources assume endogenous variables (e.g., M, and Y) have normally distributed residuals, precluding models of binary and/or count data. Although a growing body of literature has expanded mediation models to…

  12. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.

    Science.gov (United States)

    de Nijs, Robin

    2015-07-21

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. Redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods, and compared to the theoretical values for a Poisson distribution. Statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving of the half count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was not affected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice, when simulating half-count (or less) images from full-count images. It simulates correctly the statistical properties, also in the case of rounding off of the images.
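
A common reading of "Poisson resampling" is binomial thinning: each recorded count is kept independently with probability 1/2, which maps a Poisson(λ) pixel exactly to a Poisson(λ/2) pixel and hence preserves the correct statistics of the half-count image. A sketch under that assumption (parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

full = rng.poisson(50.0, size=(256, 256))   # full-count "image", mean 50 per pixel
half = rng.binomial(full, 0.5)              # thinning: Poisson(50) -> Poisson(25)

mean_ratio = half.mean() / full.mean()
var_ratio = half.var() / full.var()
# For a Poisson image both ratios should be close to 0.5, i.e. the
# half-count image keeps its variance equal to its mean.
```

Direct redrawing from a Gaussian or Poisson with half the mean, by contrast, discards the pixel-wise correlation with the original data, which is one way to understand the superiority of the resampling approach reported above.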

  13. Study of the influence of radionuclide biokinetics on in vivo counting using voxel phantoms

    International Nuclear Information System (INIS)

    Lamart, St.

    2008-10-01

    In vivo measurement is an efficient method to estimate the retention of activity in case of internal contamination. However, it is currently limited by the use of physical phantoms for calibration, which reproduce neither the morphology of the measured person nor the actual distribution of the contamination. The current calibration method therefore leads to significant systematic uncertainties in the quantification of the contamination. To improve in vivo measurement, the Laboratory of Internal Dose Assessment (LEDI, IRSN) has developed an original numerical calibration method with the OEDIPE software. It is based on voxel phantoms created from medical images of persons, associated with the MCNPX Monte Carlo particle transport code. The first version of this software made it possible to model simple homogeneous sources and to better estimate the systematic uncertainties in lung counting of actinides due to the detector position and to the heterogeneous distribution of activity inside the lungs. However, it could not take into account the dynamic, and often heterogeneous, distribution of activity among body organs and tissues. Yet the efficiency of the detection system depends on the distribution of the source of activity. The main purpose of this thesis work is to answer the question: what is the influence of the biokinetics of radionuclides on in vivo counting? To answer it, it was necessary to modify OEDIPE in depth. This new development made it possible to model the source of activity more realistically from the reference biokinetic models defined by the ICRP. The first part of the work consisted in developing the numerical tools needed to integrate biokinetics into OEDIPE. Then, a methodology was developed to quantify its influence on in vivo counting from the results of simulations. This method was carried out and validated on the model of the in vivo counting system of the LEDI. Finally, the

  14. Flare stars and Pascal distribution

    International Nuclear Information System (INIS)

    Muradian, R.

    1994-07-01

    Observed statistics of stellar flares are described by the Pascal, or negative binomial, distribution. The analogy with other classes of chaotic production mechanisms, such as hadronic particle multiplicity distributions and photoelectron counts from thermal sources, is noted. (author). 12 refs
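
The Pascal (negative binomial) description can be illustrated numerically: flare counts drawn from a negative binomial with r successes and success probability p have mean r(1−p)/p and variance r(1−p)/p², so the variance exceeds the mean, unlike a Poisson process. The parameters below are illustrative, not fitted to flare data.

```python
import numpy as np

rng = np.random.default_rng(3)

r, p = 2.0, 0.2                          # Pascal parameters (illustrative)
flares = rng.negative_binomial(r, p, 100_000)

mean_th = r * (1 - p) / p                # theoretical mean  = 8.0
var_th = r * (1 - p) / p**2              # theoretical variance = 40.0, i.e. > mean
```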

  15. PRESENTED AT TRIANGLE CONSORTIUM FOR REPRODUCTIVE BIOLOGY MEETING IN CHAPEL HILL, NC ON 2/11/2006: SPERM COUNT DISTRIBUTIONS IN FERTILE MEN

    Science.gov (United States)

    Sperm concentration and count are often used as indicators of environmental impacts on male reproductive health. Existing clinical databases may be biased towards sub-fertile men with low sperm counts and less is known about expected sperm count distributions in cohorts of ferti...

  16. A new framework of statistical inferences based on the valid joint sampling distribution of the observed counts in an incomplete contingency table.

    Science.gov (United States)

    Tian, Guo-Liang; Li, Hui-Qiong

    2017-08-01

    Some existing confidence interval methods and hypothesis testing methods in the analysis of a contingency table with incomplete observations in both margins entirely depend on an underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independency assumption is incorrect and can result in unreliable conclusions because of the under-estimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of parameters of interest, the bootstrap confidence interval methods, and the bootstrap testing hypothesis methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independency assumption. Simulation studies showed that average/expected confidence-interval widths of parameters based on the sampling distribution under the independency assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables and the analysis results again confirm the conclusions obtained from the simulation studies.

  17. Amplitude distributions of dark counts and photon counts in NbN superconducting single-photon detectors integrated with the HEMT readout

    Energy Technology Data Exchange (ETDEWEB)

    Kitaygorsky, J. [Kavli Institute of Nanoscience Delft, Delft University of Technology, 2600 GA Delft (Netherlands); Department of Electrical and Computer Engineering and Laboratory for Laser Energetics, University of Rochester, Rochester, NY 14627-0231 (United States); Słysz, W., E-mail: wslysz@ite.waw.pl [Institute of Electron Technology, PL-02 668 Warsaw (Poland); Shouten, R.; Dorenbos, S.; Reiger, E.; Zwiller, V. [Kavli Institute of Nanoscience Delft, Delft University of Technology, 2600 GA Delft (Netherlands); Sobolewski, Roman [Department of Electrical and Computer Engineering and Laboratory for Laser Energetics, University of Rochester, Rochester, NY 14627-0231 (United States)

    2017-01-15

    Highlights: • A new operation regime of NbN superconducting single-photon detectors (SSPDs). • A better understanding of the origin of dark counts generated by the detector. • A promise of PNR functionality in SSPD measurements. - Abstract: We present a new operation regime of NbN superconducting single-photon detectors (SSPDs), integrating them with a low-noise cryogenic high-electron-mobility transistor and a high-load resistor. The integrated sensors are designed to give a better understanding of the origin of dark counts triggered by the detector, as our scheme allows us to distinguish dark pulses from actual photon pulses in SSPDs. The presented approach is based on a statistical analysis of the amplitude distributions of recorded trains of SSPD photoresponse transients. It also enables us to obtain information on the energy of the incident photons, and demonstrates some photon-number-resolving capability of meander-type SSPDs.

  18. A New Method for Calculating Counts in Cells

    Science.gov (United States)

    Szapudi, István

    1998-04-01

    In the near future, a new generation of CCD-based galaxy surveys will enable high-precision determination of the N-point correlation functions. The resulting information will help to resolve the ambiguities associated with two-point correlation functions, thus constraining theories of structure formation, biasing, and Gaussianity of initial conditions independently of the value of Ω. As one of the most successful methods of extracting the amplitude of higher order correlations is based on measuring the distribution of counts in cells, this work presents an advanced way of measuring it with unprecedented accuracy. Szapudi & Colombi identified the main sources of theoretical errors in extracting counts in cells from galaxy catalogs. One of these sources, termed measurement error, stems from the fact that conventional methods use a finite number of sampling cells to estimate counts in cells. This effect could be circumvented by using an infinite number of cells. This paper presents an algorithm which in practice achieves this goal; that is, it is equivalent to throwing an infinite number of sampling cells in finite time. The errors associated with sampling cells are completely eliminated by this procedure, which will be essential for the accurate analysis of future surveys.
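    For contrast with the infinite-sampling algorithm described above, the conventional finite-sampling estimator of the count-in-cells probabilities P_N can be sketched as follows (hypothetical 2-D toy catalog; the "measurement error" the paper eliminates is exactly the scatter caused by `n_cells` being finite):

```python
import random


def counts_in_cells(points, cell_size, n_cells, rng=None):
    """Estimate P_N by throwing n_cells random square cells onto the
    unit square and histogramming the number of points in each cell."""
    rng = rng or random.Random(42)
    hist = {}
    for _ in range(n_cells):
        x0 = rng.random() * (1 - cell_size)
        y0 = rng.random() * (1 - cell_size)
        n = sum(1 for x, y in points
                if x0 <= x < x0 + cell_size and y0 <= y < y0 + cell_size)
        hist[n] = hist.get(n, 0) + 1
    return {n: c / n_cells for n, c in sorted(hist.items())}


rng = random.Random(1)
catalog = [(rng.random(), rng.random()) for _ in range(500)]
pdf = counts_in_cells(catalog, 0.1, 2000)          # P_N estimate
mean_count = sum(n * p for n, p in pdf.items())    # should be near 5
```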

  19. 234Th distributions in coastal and open ocean waters by non-destructive β-counting

    International Nuclear Information System (INIS)

    Miller, L.A.; Svaeren, I.

    2003-01-01

    Non-destructive β-counting analyses of particulate and dissolved 234 Th activities in seawater are simpler but no less precise than traditional radioanalytical methods. The inherent accuracy limitations of the non-destructive β-counting method, particularly in samples likely to be contaminated with anthropogenic nuclides, are alleviated by recounting the samples over several half-lives and fitting the counting data to the 234 Th decay curve. Precision (including accuracy, estimated at an average of 3%) is better than 10% for particulate or 5% for dissolved samples. Thorium-234 distributions in the Skagerrak indicated a vigorous, presumably biological, particle export from the surface waters, and while bottom sediment resuspension was not an effective export mechanism, it did strip thorium from the dissolved phase. In the Greenland and Norwegian Seas, we saw clear evidence of particulate export from the surface waters, but at 75 m, total 234 Th activities were generally in equilibrium with 238 U. (author)
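    The recount-and-fit idea above can be sketched numerically as follows (synthetic count rates; background is neglected for brevity, whereas the published procedure fits the full decay curve including possible contamination):

```python
import math

T_HALF = 24.1                      # 234Th half-life in days
LAM = math.log(2) / T_HALF


def fit_decay(times, counts):
    """Linear least squares on ln(counts) = ln(A0) - lam * t.
    A simplified fit of repeated recounts to the 234Th decay curve."""
    n = len(times)
    xs, ys = times, [math.log(c) for c in counts]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope, math.exp(ybar - slope * xbar)   # (lam_est, A0_est)


# synthetic recounts of one sample spanning ~3 half-lives
times = [0, 12, 24, 36, 48, 60, 72]
counts = [1000 * math.exp(-LAM * t) for t in times]
lam_est, a0_est = fit_decay(times, counts)
half_life_est = math.log(2) / lam_est
```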

  20. Estimation of equivalent dose and its uncertainty in the OSL SAR protocol when count numbers do not follow a Poisson distribution

    International Nuclear Information System (INIS)

    Bluszcz, Andrzej; Adamiec, Grzegorz; Heer, Aleksandra J.

    2015-01-01

    The current work focuses on the estimation of equivalent dose and its uncertainty using the single aliquot regenerative protocol in optically stimulated luminescence measurements. The authors show that the count numbers recorded with the use of photomultiplier tubes are well described by negative binomial distributions, different ones for background counts and photon induced counts. This fact is then exploited in pseudo-random count number generation and simulations of D e determination assuming a saturating exponential growth. A least squares fitting procedure is applied using different types of weights to determine whether the obtained D e 's and their error estimates are unbiased and accurate. A weighting procedure is suggested that leads to almost unbiased D e estimates. It is also shown that the assumption of Poisson distribution in D e estimation may lead to severe underestimation of the D e error. - Highlights: • Detailed analysis of statistics of count numbers in luminescence readers. • Generation of realistically scattered pseudo-random numbers of counts in luminescence measurements. • A practical guide for stringent analysis of D e values and errors assessment.
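    Negative binomial count numbers of the kind described above can be generated as a gamma-mixed Poisson, which is one standard construction (an assumption here, not necessarily the authors' generator) for producing realistically over-dispersed pseudo-random photon counts:

```python
import math
import random


def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for modest lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1


def neg_binomial(mean, r, rng):
    """Negative binomial as a gamma-mixed Poisson: lam ~ Gamma(r, mean/r),
    N ~ Poisson(lam). Variance = mean + mean**2 / r > mean."""
    lam = rng.gammavariate(r, mean / r)
    return poisson(lam, rng)


rng = random.Random(7)
sample = [neg_binomial(20.0, 5.0, rng) for _ in range(20000)]
m = sum(sample) / len(sample)                          # near 20
v = sum((x - m) ** 2 for x in sample) / (len(sample) - 1)  # near 100
```

    The over-dispersion (v well above m) is what the Poisson assumption misses and what, per the abstract, leads to underestimated D e errors.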

  1. Observations of the Hubble Deep Field with the Infrared Space Observatory .3. Source counts and P(D) analysis

    DEFF Research Database (Denmark)

    Oliver, S.J.; Goldschmidt, P.; Franceschini, A.

    1997-01-01

    We present source counts at 6.7 and 15 mu m from our maps of the Hubble Deep Field (HDF) region, reaching 38.6 mu Jy at 6.7 mu m and 255 mu Jy at 15 mu m. These are the first ever extragalactic number counts to be presented at 6.7 mu m, and are three decades fainter than IRAS at 12 mu m. Both...

  2. Distributed source term analysis, a new approach to nuclear material inventory verification

    CERN Document Server

    Beddingfield, D H

    2002-01-01

    The Distributed Source-Term Analysis (DSTA) technique is a new approach to measuring in-process material holdup that is a significant departure from traditional holdup measurement methodology. The DSTA method is a means of determining the mass of nuclear material within a large, diffuse volume using passive neutron counting. The DSTA method is a more efficient approach than traditional methods of holdup measurement and inventory verification. The time spent in performing DSTA measurement and analysis is a fraction of that required by traditional techniques. The error ascribed to a DSTA survey result is generally less than that from traditional methods. Also, the negative bias ascribed to gamma-ray methods is greatly diminished because the DSTA method uses neutrons, which are more penetrating than gamma-rays.

  3. Distributed source term analysis, a new approach to nuclear material inventory verification

    International Nuclear Information System (INIS)

    Beddingfield, D.H.; Menlove, H.O.

    2002-01-01

    The Distributed Source-Term Analysis (DSTA) technique is a new approach to measuring in-process material holdup that is a significant departure from traditional holdup measurement methodology. The DSTA method is a means of determining the mass of nuclear material within a large, diffuse volume using passive neutron counting. The DSTA method is a more efficient approach than traditional methods of holdup measurement and inventory verification. The time spent in performing DSTA measurement and analysis is a fraction of that required by traditional techniques. The error ascribed to a DSTA survey result is generally less than that from traditional methods. Also, the negative bias ascribed to γ-ray methods is greatly diminished because the DSTA method uses neutrons, which are more penetrating than γ-rays.

  4. Radio source counts: comments on their convergence and assessment of the contribution to fluctuations of the microwave background

    International Nuclear Information System (INIS)

    Danese, L.; De Zotti, G.; Mandolesi, N.

    1982-01-01

    We point out that statistically estimated high-frequency counts at milli-Jansky levels exhibit a slower convergence than expected on the basis of extrapolations of counts at higher flux densities and at longer wavelengths. This seems to demand substantial cosmological evolution for at least a sub-population of flat-spectrum sources different from QSOs, a fact that might also have important implications for the problem of the origin of the X-ray background. We also compute the discrete-source contributions to small-scale fluctuations in the Rayleigh-Jeans region of the cosmic microwave background, and we show that they set a serious limit on searches for truly primordial anisotropies using conventional radio-astronomical techniques.

  5. Fission meter and neutron detection using poisson distribution comparison

    Science.gov (United States)

    Rowland, Mark S; Snyderman, Neal J

    2014-11-18

    A neutron detector system and method for discriminating fissile material from non-fissile material, wherein a digital data acquisition unit collects data at a high rate and, in real time, processes large volumes of data directly into information that a first responder can use to discriminate materials. The system comprises counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the unknown source. Comparison of the observed neutron count distribution with a Poisson distribution is performed to distinguish fissile material from non-fissile material.
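    The core comparison can be illustrated with a toy simulation (rates and multiplicities below are hypothetical): gate the neutron arrival times into fixed time windows, then compare the variance-to-mean ratio of the gated counts with the value of 1 expected for a Poisson source. Correlated fission-chain neutrons push the ratio well above 1:

```python
import random


def gated_counts(event_times, gate, t_max):
    """Histogram events into fixed time gates; return counts per gate."""
    n_gates = int(t_max / gate)
    counts = [0] * n_gates
    for t in event_times:
        i = int(t / gate)
        if i < n_gates:
            counts[i] += 1
    return counts


def variance_to_mean(counts):
    m = sum(counts) / len(counts)
    v = sum((c - m) ** 2 for c in counts) / (len(counts) - 1)
    return v / m


rng = random.Random(3)
T, RATE = 1000.0, 50.0

# non-fissile: independent (Poisson) neutron arrivals
poisson_times, t = [], 0.0
while True:
    t += rng.expovariate(RATE)
    if t > T:
        break
    poisson_times.append(t)

# fissile: arrivals in correlated bursts (crude fission-chain model)
burst_times, t = [], 0.0
while True:
    t += rng.expovariate(RATE / 5.0)   # chain initiations
    if t > T:
        break
    for _ in range(5):                 # ~5 correlated neutrons per chain
        burst_times.append(t + rng.expovariate(1000.0))

vm_poisson = variance_to_mean(gated_counts(poisson_times, 0.1, T))
vm_fissile = variance_to_mean(gated_counts(burst_times, 0.1, T))
```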

  6. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side offering shifting processing...... steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from...... the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide...

  7. Protecting count queries in study design.

    Science.gov (United States)

    Vinterbo, Staal A; Sarwate, Anand D; Boxwala, Aziz A

    2012-01-01

    Today's clinical research institutions provide tools for researchers to query their data warehouses for counts of patients. To protect patient privacy, counts are perturbed before reporting; this compromises their utility for increased privacy. The goal of this study is to extend current query answer systems to guarantee a quantifiable level of privacy and allow users to tailor perturbations to maximize the usefulness according to their needs. A perturbation mechanism was designed in which users are given options with respect to scale and direction of the perturbation. The mechanism translates the true count, user preferences, and a privacy level within administrator-specified bounds into a probability distribution from which the perturbed count is drawn. Users can significantly impact the scale and direction of the count perturbation and can receive more accurate final cohort estimates. Strong and semantically meaningful differential privacy is guaranteed, providing for a unified privacy accounting system that can support role-based trust levels. This study provides an open source web-enabled tool to investigate visually and numerically the interaction between system parameters, including required privacy level and user preference settings. Quantifying privacy allows system administrators to provide users with a privacy budget and to monitor its expenditure, enabling users to control the inevitable loss of utility. While current measures of privacy are conservative, this system can take advantage of future advances in privacy measurement. The system provides new ways of trading off privacy and utility that are not provided in current study design systems.
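    A minimal sketch of such a count-perturbation mechanism is the two-sided geometric ("discrete Laplace") mechanism, shown here without the paper's user-selectable direction preference: integer noise is drawn with probability proportional to exp(-ε|δ|), which gives ε-differential privacy for counting queries of sensitivity 1:

```python
import math
import random


def perturbed_count(true_count, epsilon, rng):
    """Two-sided geometric mechanism: P(noise = k) ~ exp(-epsilon * |k|),
    clamped at zero so reported cohort counts stay non-negative."""
    alpha = math.exp(-epsilon)
    u = rng.random()
    if u < (1 - alpha) / (1 + alpha):
        delta = 0
    else:
        mag = 1                        # geometric magnitude, uniform sign
        while rng.random() < alpha:
            mag += 1
        delta = mag if rng.random() < 0.5 else -mag
    return max(0, true_count + delta)


rng = random.Random(11)
draws = [perturbed_count(100, 1.0, rng) for _ in range(5000)]
avg = sum(draws) / len(draws)          # unbiased away from the clamp
```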

  8. Source distribution dependent scatter correction for PVI

    International Nuclear Information System (INIS)

    Barney, J.S.; Harrop, R.; Dykstra, C.J.

    1993-01-01

    Source-distribution-dependent scatter correction methods which incorporate different amounts of information about the source position and material distribution have been developed and tested. The techniques use an image-to-projection integral transformation incorporating varying degrees of information on the distribution of scattering material, or convolution subtraction methods, with some information about the scattering material included in one of the convolution methods. To test the techniques, the authors apply them to data generated by Monte Carlo simulations which use geometric shapes or a voxelized density map to model the scattering material. Source position and material distribution have been found to have some effect on scatter correction. An image-to-projection method which incorporates a density map produces accurate scatter correction but is computationally expensive. Simpler methods, both image-to-projection and convolution, can also provide effective scatter correction

  9. Image authentication using distributed source coding.

    Science.gov (United States)

    Lin, Yao-Chung; Varodayan, David; Girod, Bernd

    2012-01-01

    We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.

  10. Calculation of total counting efficiency of a NaI(Tl) detector by hybrid Monte-Carlo method for point and disk sources

    Energy Technology Data Exchange (ETDEWEB)

    Yalcin, S. [Education Faculty, Kastamonu University, 37200 Kastamonu (Turkey)], E-mail: yalcin@gazi.edu.tr; Gurler, O.; Kaynak, G. [Department of Physics, Faculty of Arts and Sciences, Uludag University, Gorukle Campus, 16059 Bursa (Turkey); Gundogdu, O. [Department of Physics, School of Engineering and Physical Sciences, University of Surrey, Guildford GU2 7XH (United Kingdom)

    2007-10-15

    This paper presents results on the total gamma counting efficiency of a NaI(Tl) detector from point and disk sources. The directions of photons emitted from the source were determined by Monte-Carlo techniques and the photon path lengths in the detector were determined by analytic equations depending on photon directions. This is called the hybrid Monte-Carlo method, where analytical expressions are incorporated into the Monte-Carlo simulations. A major advantage of this technique is the short computation time compared to other techniques on similar computational platforms. Another advantage is the flexibility for inputting detector-related parameters (such as source-detector distance, detector radius, source radius, and detector linear attenuation coefficient) into the algorithm developed, thus making it an easy and flexible method to apply to other detector systems and configurations. The results of the total counting efficiency model put forward for point and disk sources were compared with previous work reported in the literature.
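    The hybrid approach can be sketched for the simplest geometry, an isotropic point source on the detector axis (the dimensions and attenuation coefficient below are illustrative, not the paper's): directions come from Monte Carlo sampling, while chord lengths through the cylinder come from closed-form expressions.

```python
import math
import random


def total_efficiency(d, R, h, mu, n=200_000, rng=None):
    """Hybrid Monte-Carlo total efficiency for an isotropic point source
    on the axis of a cylindrical detector (radius R, thickness h) at
    distance d from the front face. Directions are sampled; path lengths
    are analytic; interaction probability per hit is 1 - exp(-mu * L)."""
    rng = rng or random.Random(5)
    theta_max = math.atan2(R, d)         # rim of the front face
    acc = 0.0
    for _ in range(n):
        cos_t = rng.uniform(-1.0, 1.0)   # isotropic over 4*pi
        theta = math.acos(cos_t)
        if theta >= theta_max:
            continue                     # misses the detector entirely
        tan_t = math.tan(theta)
        # axial depth at which the ray would exit through the side wall
        z_side = R / tan_t - d if tan_t > 0 else float("inf")
        L = min(h, z_side) / cos_t       # exit through back face or side
        acc += 1.0 - math.exp(-mu * L)
    return acc / n


# illustrative 3" x 3" NaI-like crystal, source 5 cm from the face
eff = total_efficiency(d=5.0, R=3.81, h=7.62, mu=0.3)
```

    As expected, the efficiency is bounded above by the geometric solid-angle fraction and grows with the attenuation coefficient.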

  11. Calculation of total counting efficiency of a NaI(Tl) detector by hybrid Monte-Carlo method for point and disk sources

    International Nuclear Information System (INIS)

    Yalcin, S.; Gurler, O.; Kaynak, G.; Gundogdu, O.

    2007-01-01

    This paper presents results on the total gamma counting efficiency of a NaI(Tl) detector from point and disk sources. The directions of photons emitted from the source were determined by Monte-Carlo techniques and the photon path lengths in the detector were determined by analytic equations depending on photon directions. This is called the hybrid Monte-Carlo method, where analytical expressions are incorporated into the Monte-Carlo simulations. A major advantage of this technique is the short computation time compared to other techniques on similar computational platforms. Another advantage is the flexibility for inputting detector-related parameters (such as source-detector distance, detector radius, source radius, and detector linear attenuation coefficient) into the algorithm developed, thus making it an easy and flexible method to apply to other detector systems and configurations. The results of the total counting efficiency model put forward for point and disk sources were compared with previous work reported in the literature

  12. Precise Mapping Of A Spatially Distributed Radioactive Source

    International Nuclear Information System (INIS)

    Beck, A.; Caras, I.; Piestum, S.; Sheli, E.; Melamud, Y.; Berant, S.; Kadmon, Y.; Tirosh, D.

    1999-01-01

    Spatial distribution measurement of radioactive sources is a routine task in the nuclear industry. The precision of each measurement depends upon the specific application; however, the technological edge of this precision is driven by the production of standards for calibration. Within this definition, the most demanding field is the calibration of standards for medical equipment. In this paper, a semi-empirical method for controlling the measurement precision is demonstrated, using a relatively simple laboratory apparatus. The spatial distribution of the source radioactivity is measured as part of the quality assurance tests during the production of flood sources. These sources are further used in the calibration of medical gamma cameras. A typical flood source is a 40 x 60 cm 2 plate with an activity of 10 mCi (or more) of the 57 Co isotope. The measurement set-up is based on a single NaI(Tl) scintillator with a photomultiplier tube, moving on an X-Y table which scans the flood source. In this application the source is required to have a uniform activity distribution over its surface.

  13. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.

  14. Galactic distribution of X-ray burst sources

    International Nuclear Information System (INIS)

    Lewin, W.H.G.; Hoffman, J.A.; Doty, J.; Clark, G.W.; Swank, J.H.; Becker, R.H.; Pravdo, S.H.; Serlemitsos, P.J.

    1977-01-01

    It is stated that 18 X-ray burst sources have been observed to date, applying the following definition for these bursts - rise times of less than a few seconds, durations of seconds to minutes, and recurrence in some regular pattern. If single burst events that meet the criteria of rise time and duration, but not recurrence are included, an additional seven sources can be added. A sky map is shown indicating their positions. The sources are spread along the galactic equator and cluster near low galactic longitudes, and their distribution is different from that of the observed globular clusters. Observations based on the SAS-3 X-ray observatory studies and the Goddard X-ray Spectroscopy Experiment on OSO-9 are described. The distribution of the sources is examined and the effect of uneven sky exposure on the observed distribution is evaluated. It has been suggested that the bursts are perhaps produced by remnants of disrupted globular clusters and specifically supermassive black holes. This would imply the existence of a new class of unknown objects, and at present is merely an ad hoc method of relating the burst sources to globular clusters. (U.K.)

  15. Distributed power sources for Mars colonization

    International Nuclear Information System (INIS)

    Miley, George H.; Shaban, Yasser

    2003-01-01

    One of the fundamental needs for Mars colonization is an abundant source of energy. The total energy system will probably use a mixture of sources based on solar energy, fuel cells, and nuclear energy. Here we concentrate on the possibility of developing a distributed system employing several unique new types of nuclear energy sources, specifically small fusion devices using inertial electrostatic confinement and portable 'battery type' proton reaction cells

  16. Distributed coding of multiview sparse sources with joint recovery

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Deligiannis, Nikos; Forchhammer, Søren

    2016-01-01

    In support of applications involving multiview sources in distributed object recognition using lightweight cameras, we propose a new method for the distributed coding of sparse sources as visual descriptor histograms extracted from multiview images. The problem is challenging due to the computati...... transform (SIFT) descriptors extracted from multiview images shows that our method leads to bit-rate saving of up to 43% compared to the state-of-the-art distributed compressed sensing method with independent encoding of the sources....

  17. DC KIDS COUNT e-Databook Indicators

    Science.gov (United States)

    DC Action for Children, 2012

    2012-01-01

    This report presents indicators that are included in DC Action for Children's 2012 KIDS COUNT e-databook, their definitions and sources and the rationale for their selection. The indicators for DC KIDS COUNT represent a mix of traditional KIDS COUNT indicators of child well-being, such as the number of children living in poverty, and indicators of…

  18. Activity distribution of a cobalt-60 teletherapy source

    International Nuclear Information System (INIS)

    Jaffray, D.A.; Munro, P.; Battista, J.J.; Fenster, A.

    1991-01-01

    In the course of quantifying the effect of radiation source size on the spatial resolution of portal images, a concentric ring structure in the activity distribution of a Cobalt-60 teletherapy source has been observed. The activity distribution was measured using a strip integral technique and confirmed independently by a contact radiograph of an identical but inactive source replica. These two techniques suggested that this concentric ring structure is due to the packing configuration of the small 60Co pellets that constitute the source. The source modulation transfer function (MTF) showed that this ring structure has a negligible influence on the spatial resolution of therapy images when compared to the effect of the large size of the 60Co source

  19. Microseism Source Distribution Observed from Ireland

    Science.gov (United States)

    Craig, David; Bean, Chris; Donne, Sarah; Le Pape, Florian; Möllhoff, Martin

    2017-04-01

    Ocean generated microseisms (OGM) are recorded globally, with similar spectral features observed everywhere. The generation mechanism for OGM and their subsequent propagation to continental regions has led to their use as a proxy for sea-state characteristics. Many modern seismological methods also make use of OGM signals; for example, the Earth's crust and upper mantle can be imaged using "ambient noise tomography". For many of these methods an understanding of the source distribution is necessary to properly interpret the results. OGM recorded on near-coastal seismometers are known to be related to the local ocean wavefield, but contributions from more distant sources may also be present. This is significant for studies attempting to use OGM as a proxy for sea-state characteristics such as significant wave height. Ireland has a highly energetic ocean-wave climate and is close to one of the major source regions for OGM, which provides an ideal location to study an OGM source region in detail. Here we present the source distribution observed from seismic arrays in Ireland. The region is shown to consist of several individual source areas, which show some frequency dependence and generally occur at or near the continental shelf edge. We also show some preliminary results from an offshore OBS network to the north-west of Ireland. The OBS network includes instruments on either side of the shelf and should help interpret the array observations.

  20. Precise method for correcting count-rate losses in scintillation cameras

    International Nuclear Information System (INIS)

    Madsen, M.T.; Nickles, R.J.

    1986-01-01

    Quantitative studies performed with scintillation detectors often require corrections for lost data because of the finite resolving time of the detector. Methods that monitor losses by means of a reference source or pulser have unacceptably large statistical fluctuations associated with their correction factors. Analytic methods that model the detector as a paralyzable system require an accurate estimate of the system resolving time. Because the apparent resolving time depends on many variables, including the window setting, source distribution, and the amount of scattering material, significant errors can be introduced by relying on a resolving time obtained from phantom measurements. These problems can be overcome by curve-fitting the data from a reference source to a paralyzable model in which the true total count rate in the selected window is estimated from the observed total rate. The resolving time becomes a free parameter in this method which is optimized to provide the best fit to the observed reference data. The fitted curve has the inherent accuracy of the reference source method with the precision associated with the observed total image count rate. Correction factors can be simply calculated from the ratio of the true reference source rate and the fitted curve. As a result, the statistical uncertainty of the data corrected by this method is not significantly increased
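    The paralyzable-model relation at the heart of the method, m = n·exp(-n·τ) for observed rate m and true rate n, can be inverted numerically once the fitted resolving time τ is in hand (the values below are illustrative):

```python
import math


def true_rate(observed, tau, tol=1e-12):
    """Invert m = n * exp(-n * tau) (paralyzable dead-time model) for the
    true rate n by fixed-point iteration, on the low-rate branch n*tau < 1."""
    n = observed
    for _ in range(200):
        n_next = observed * math.exp(n * tau)
        if abs(n_next - n) < tol:
            return n_next
        n = n_next
    raise RuntimeError("did not converge; observed rate near the rollover?")


tau = 5e-6                               # 5 microsecond resolving time
n_true = 30_000.0                        # true counts per second
m = n_true * math.exp(-n_true * tau)     # what the camera reports
n_rec = true_rate(m, tau)                # recovered true rate
correction = n_rec / m                   # factor applied to image counts
```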

  1. Computer program for source distribution process in radiation facility

    International Nuclear Information System (INIS)

    Al-Kassiri, H.; Abdul Ghani, B.

    2007-08-01

    A computer simulation of dose distribution, written in Visual Basic, has been developed according to the arrangement and activities of the Co-60 sources. The program provides the dose distribution in treated products depending on the product density and desired dose, and is useful for optimizing the source distribution during the loading process. There is good agreement between data calculated by the program and experimental data. (Author)

  2. Study on serum TNF-α level, B-cell count and T-cell subsets distribution in peripheral blood in patients with rheumatoid arthritis

    International Nuclear Information System (INIS)

    Shi Buqing

    2006-01-01

    Objective: To study the changes of serum TNF-α levels, B-cell count and T-cell subset distribution in peripheral blood in patients with rheumatoid arthritis. Methods: Serum TNF-α levels (with RIA) and B-cell counts as well as T-cell subset distribution (with monoclonal antibody technique) were examined in 37 patients with rheumatoid arthritis and 30 controls. Results: Serum TNF-α levels and B lymphocyte counts were significantly higher in the patients than in controls, while CD 3 , CD 4 and CD 4 /CD 8 were obviously lower (P<0.01). Conclusion: Rheumatoid arthritis is an autoimmune disease with abnormal immunoregulation. (authors)

  3. Effects of plutonium redistribution on lung counting

    International Nuclear Information System (INIS)

    Swinth, K.L.

    1976-01-01

    Early counts of Pu deposition in lungs will tend to overestimate lung contents, since calibrations are performed with a uniform distribution and since a more favorable geometry exists in contaminated subjects because the activity is closer to the periphery of the lungs. Although the concentration into the outer regions of the lungs continues, as evidenced by the autopsy studies, counts based on L X-rays will probably underestimate the lung content because, simplistically, the geometry several years after exposure consists of a spherical shell with a point of activity in the center. This point of activity represents concentration in the lymph nodes, from which the 60 keV gamma of 241 Am will be counted but from which few of the L X-rays will be counted (this is an example of interorgan distribution). When a correction is made to the L X-ray intensity, the lymph node contribution will tend to increase the amount subtracted while correcting for 241 Am X-rays. It is doubtful that the relative increase in X-ray intensity from concentration in the pleural and sub-pleural regions will compensate for this effect. This will make the plutonium burden disappear while the 241 Am can still be detected. This effect has been observed in a case where counts with an intraesophageal probe indicated a substantial lymph node burden. In order to improve the accuracy of in vivo plutonium measurements, an improved understanding of pulmonary distribution and of distribution effects on in vivo counting is required

  4. Influence of materials and counting-rate effects on 3He neutron spectrometry

    International Nuclear Information System (INIS)

    Evans, A.E.

    1984-01-01

    The high energy resolution of the Cuttler-Shalev 3 He neutron spectrometer causes spectral measurements with this instrument to be strongly susceptible to artifacts caused by the presence of scattering or absorbing materials in or near the detector or the source, and to false peaks generated by pileup coincidences of the rather long-risetime pulses from the detector. These effects are particularly important when pulse-height distributions vary over several orders of magnitude in count rate versus channel. A commercial pile-up elimination circuit greatly improves but does not eliminate the pileup problem. Previously reported spurious peaks in the pulse-height distributions from monoenergetic neutron sources have been determined to be due to the influence of the iron in the detector wall. 6 references, 9 figures

  5. The dose distribution surrounding 192Ir and 137Cs seed sources

    International Nuclear Information System (INIS)

    Thomason, C.; Mackie, T.R.; Wisconsin Univ., Madison, WI; Lindstrom, M.J.; Higgins, P.D.

    1991-01-01

    Dose distributions in water were measured using LiF thermoluminescent dosemeters for 192 Ir seed sources with stainless steel and with platinum encapsulation to determine the effect of differing encapsulation. Dose distribution was measured for a 137 Cs seed source. In addition, dose distributions surrounding these sources were calculated using the EGS4 Monte Carlo code and were compared to measured data. The two methods are in good agreement for all three sources. Tables are given describing dose distribution surrounding each source as a function of distance and angle. Specific dose constants were also determined from results of Monte Carlo simulation. This work confirms the use of the EGS4 Monte Carlo code in modelling 192 Ir and 137 Cs seed sources to obtain brachytherapy dose distributions. (author)

  6. A Heuristic Approach to Distributed Generation Source Allocation for Electrical Power Distribution Systems

    Directory of Open Access Journals (Sweden)

    M. Sharma

    2010-12-01

    The recent trends in electrical power distribution system operation and management are aimed at improving system conditions in order to render good service to the customer. The reforms in the distribution sector have given major scope for the employment of distributed generation (DG) resources, which can boost system performance. This paper proposes a heuristic technique for the allocation of a distributed generation source in a distribution system. The allocation is determined based on the overall improvement in network performance parameters, such as reduction in system losses, improvement in voltage stability, and improvement in voltage profile. The proposed Network Performance Enhancement Index (NPEI), along with the heuristic rules, facilitates determination of a feasible location and the corresponding capacity of the DG source. The developed approach is tested on different test systems to ascertain its effectiveness.

  7. Finite-size effects in transcript sequencing count distribution: its power-law correction necessarily precedes downstream normalization and comparative analysis.

    Science.gov (United States)

    Wong, Wing-Cheong; Ng, Hong-Kiat; Tantoso, Erwin; Soong, Richie; Eisenhaber, Frank

    2018-02-12

    Though earlier works on modelling transcript abundance from vertebrates to lower eukaryotes have specifically singled out Zipf's law, the observed distributions often deviate from a single power-law slope. In hindsight, while power-laws of critical phenomena are derived asymptotically under the conditions of infinite observations, real-world observations are finite, where finite-size effects set in to force a power-law distribution into an exponential decay and, consequently, manifest as a curvature (i.e., varying exponent values) in a log-log plot. If transcript abundance is truly power-law distributed, the varying exponent signifies changing mathematical moments (e.g., mean, variance) and creates heteroskedasticity, which compromises statistical rigor in analysis. The impact of this deviation from the asymptotic power-law on sequencing count data has never truly been examined and quantified. The anecdotal description of transcript abundance as almost Zipf's law-like distributed can be conceptualized as the imperfect mathematical rendition of the Pareto power-law distribution when subjected to finite-size effects in the real world; this holds regardless of advancements in sequencing technology, since sampling is finite in practice. Our conceptualization agrees well with our empirical analysis of two modern-day NGS (next-generation sequencing) datasets: an in-house generated dilution miRNA study of two gastric cancer cell lines (NUGC3 and AGS) and a publicly available spike-in miRNA dataset. First, the finite-size effects cause the deviations of sequencing count data from Zipf's law and issues of reproducibility in sequencing experiments. Second, they manifest as heteroskedasticity among experimental replicates to bring about statistical woes. Surprisingly, a straightforward power-law correction that restores the distribution distortion to a single exponent value can dramatically reduce data heteroskedasticity to invoke an instant increase in

  8. Change-Point Methods for Overdispersed Count Data

    National Research Council Canada - National Science Library

    Wilken, Brian A

    2007-01-01

    .... Although the Poisson model is often used to model count data, the two-parameter gamma-Poisson mixture parameterization of the negative binomial distribution is often a more adequate model for overdispersed count data...
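
    The gamma-Poisson mixture underlying the negative binomial can be checked by simulation: drawing each Poisson mean from a gamma distribution produces counts whose variance exceeds their mean, the overdispersion signature. A sketch with illustrative parameters:

```python
import math
import random

random.seed(1)

def gamma_poisson_sample(shape, scale):
    """One draw from a gamma-Poisson mixture (negative binomial):
    the Poisson mean itself is gamma-distributed."""
    lam = random.gammavariate(shape, scale)
    # Poisson draw via Knuth's multiplication method
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

counts = [gamma_poisson_sample(2.0, 3.0) for _ in range(20000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# For shape r and scale s: mean = r*s = 6, variance = r*s*(1+s) = 24 > mean
```

    A plain Poisson sample would give variance ≈ mean; the excess here is what the change-point methods for overdispersed counts must accommodate.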

  9. Sources and magnitude of sampling error in redd counts for bull trout

    Science.gov (United States)

    Jason B. Dunham; Bruce Rieman

    2001-01-01

    Monitoring of salmonid populations often involves annual redd counts, but the validity of this method has seldom been evaluated. We conducted redd counts of bull trout Salvelinus confluentus in two streams in northern Idaho to address four issues: (1) relationships between adult escapements and redd counts; (2) interobserver variability in redd...

  10. Recursive algorithms for phylogenetic tree counting.

    Science.gov (United States)

    Gavryushkina, Alexandra; Welch, David; Drummond, Alexei J

    2013-10-28

    In Bayesian phylogenetic inference we are interested in distributions over a space of trees. The number of trees in a tree space is an important characteristic of the space and is useful for specifying prior distributions. When all samples come from the same time point and no prior information is available on divergence times, the tree counting problem is easy. However, when fossil evidence is used in the inference to constrain the tree, or data are sampled serially, new tree spaces arise and counting the number of trees is more difficult. We describe an algorithm, polynomial in the number of sampled individuals, for counting the resolutions of a constraint tree, assuming that the number of constraints is fixed. We generalise this algorithm to counting resolutions of a fully ranked constraint tree. We describe a quadratic algorithm for counting the number of possible fully ranked trees on n sampled individuals. We introduce a new type of tree, called a fully ranked tree with sampled ancestors, and describe a cubic-time algorithm for counting the number of such trees on n sampled individuals. These algorithms should be employed for Bayesian Markov chain Monte Carlo inference when fossil data are included or data are serially sampled.
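
    For the easy case the abstract mentions (all samples from the same time point, no divergence-time information), the count of fully ranked labeled trees has a classical closed form, n!(n-1)!/2^(n-1), obtained by choosing which pair of lineages merges at each ranked event. A sketch of that contemporaneous-sampling case (the serially sampled spaces treated in the paper need the more elaborate algorithms):

```python
from math import comb

def ranked_tree_count(n):
    """Number of ranked labeled binary trees (labeled histories) on n
    contemporaneous tips: at each of the n-1 ranked coalescence events,
    any pair among the currently active lineages may merge."""
    total = 1
    for k in range(n, 1, -1):
        total *= comb(k, 2)   # choose the pair that merges next
    return total

# n = 3 gives 3 labeled histories; n = 4 gives 18
```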

  11. Liquid scintillation counting system with automatic gain correction

    International Nuclear Information System (INIS)

    Frank, R.B.

    1976-01-01

    An automatic liquid scintillation counting apparatus is described, including a scintillating medium in the elevator ram of the sample-changing apparatus. An appropriate source of radiation, which may be the external source for standardizing samples, produces reference scintillations in the scintillating medium which may be used for correction of the gain of the counting system.

  12. Limit of sensitivity of low-background counting equipment

    International Nuclear Information System (INIS)

    Homann, S.G.

    1991-01-01

    The Hazards Control Department's Radiological Measurements Laboratory (RML) analyzes many types of sample media in support of the Laboratory's health and safety program. The Department has determined that the equation for the minimum limit of sensitivity, MDC(α,β) = 2.71 + 3.29(r_b·t_s)^(1/2), is also adequate for RML counting systems with very low background levels. This paper reviews the normal distribution case and addresses the special case of determining the limit of sensitivity of a counting system when the background count rate is well known and small. In the latter case, we must use an exact test procedure based on the binomial distribution. However, the error in using the normal distribution for calculating a detection system's limit of sensitivity is not significant, even as the total observed number of counts approaches or equals zero. 2 refs., 4 figs
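
    The quoted limit can be evaluated directly once the background rate r_b and counting time t_s are known; a minimal sketch of the normal-distribution case (the binomial exact test for near-zero backgrounds is not shown):

```python
import math

def minimum_detectable_counts(r_b, t_s):
    """Minimum limit of sensitivity in counts:
    MDC = 2.71 + 3.29 * sqrt(B), with expected background counts
    B = r_b * t_s (r_b: background rate in counts/s, t_s: count time in s)."""
    return 2.71 + 3.29 * math.sqrt(r_b * t_s)

# Example: 0.01 counts/s background over a 1000 s count -> B = 10 counts
ld = minimum_detectable_counts(0.01, 1000)   # about 13.1 counts
```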

  13. Relationship between γ detection dead-time and count correction factor

    International Nuclear Information System (INIS)

    Wu Huailong; Zhang Jianhua; Chu Chengsheng; Hu Guangchun; Zhang Changfan; Hu Gen; Gong Jian; Tian Dongfeng

    2015-01-01

    The relationship between dead-time and count correction factor was investigated using an interference source, for the purpose of high-activity γ measurement. Count rates of several 10 s⁻¹ were maintained, with γ energies of 0.3-1.3 MeV, for 10⁴-10⁵ Bq radioactive sources. It is shown that the relationship between count loss and dead-time is independent of energy and count intensity, so the same correction formula can be used for any nuclide measurement. (authors)
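
    The abstract does not reproduce the correction formula itself; as an illustrative stand-in, a common non-paralyzable dead-time model corrects a measured rate m with dead time τ as n = m/(1 - m·τ):

```python
def true_rate_nonparalyzable(measured_rate, dead_time):
    """Non-paralyzable dead-time correction: n = m / (1 - m*tau).
    measured_rate in counts/s, dead_time in s."""
    loss = measured_rate * dead_time
    if loss >= 1.0:
        raise ValueError("detector saturated: m*tau >= 1")
    return measured_rate / (1.0 - loss)

# 9e4 counts/s measured with a 1 microsecond dead time (illustrative values)
n = true_rate_nonparalyzable(9e4, 1e-6)
```

    The correction factor n/m depends only on the product m·τ, not on the γ energy, which is consistent with the energy-independence reported in the abstract.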

  14. A contribution to the analysis of the activity distribution of a radioactive source trapped inside a cylindrical volume, using the M.C.N.P.X. code

    International Nuclear Information System (INIS)

    Portugal, L.; Oliveira, C.; Trindade, R.; Paiva, I.

    2006-01-01

    Orphan sources, activated materials or materials contaminated with natural or artificial radionuclides have been detected in scrap metal products destined for recycling. The consequences of melting a source during the process could include economic, environmental and social impacts. From the point of view of radioactive waste management, a scenario of 100 ton of contaminated steel in one piece is a major problem. So, it is of great importance to develop a methodology that would allow us to predict the activity distribution inside a volume of steel. In previous work we were able to distinguish between the cases where the source is disseminated over the entire cylinder and the cases where it is concentrated in different volumes. Now the main goal is to distinguish between different radii of spherical source geometries trapped inside the cylinder. For this, a methodology was proposed based on the ratio of the counts of two regions of the gamma spectrum, obtained with a sodium iodide detector, using the M.C.N.P.X. Monte Carlo simulation code. These calculated ratios allow us to determine a function r = aR² + bR + c, where R is the ratio between the counts of the two regions of the gamma spectrum and r is the radius of the source. For simulation purposes, six ⁶⁰Co sources were used (a point source; four spheres of 5 cm, 10 cm, 15 cm and 20 cm radius; and the overall contaminated cylinder) trapped inside two types of matrix, concrete and stainless steel. The methodology applied has been shown to predict and distinguish accurately the distribution of a source inside a material, roughly independently of the matrix and density considered. (authors)
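
    Once the coefficients a, b and c are calibrated by simulation, converting a measured spectral-region ratio to a source radius is a direct evaluation; a sketch with hypothetical coefficient values:

```python
def source_radius(R, a, b, c):
    """Radius r (cm) of the trapped spherical source from the ratio R of
    counts in two gamma-spectrum regions: r = a*R**2 + b*R + c.
    Coefficients a, b, c come from Monte Carlo (MCNPX) calibration;
    the values used below are hypothetical."""
    return a * R ** 2 + b * R + c

# Hypothetical calibration r = 2.0*R^2 + 5.0*R + 1.0, evaluated at R = 1.5
r = source_radius(1.5, 2.0, 5.0, 1.0)   # 13.0 cm
```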

  16. OpenCFU, a new free and open-source software to count cell colonies and other circular objects.

    Science.gov (United States)

    Geissmann, Quentin

    2013-01-01

    Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the created program, OpenCFU, is very robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is a cross-platform and open-source software freely available at http://opencfu.sourceforge.net.

  17. Decoy-state quantum key distribution with both source errors and statistical fluctuations

    International Nuclear Information System (INIS)

    Wang Xiangbin; Yang Lin; Peng Chengzhi; Pan Jianwei

    2009-01-01

    We show how to calculate faithfully the fraction of single-photon counts in 3-intensity decoy-state quantum cryptography, with both statistical fluctuations and source errors. Our results rely only on the bound values of a few parameters of the states of pulses.

  18. The Competition Between a Localised and Distributed Source of Buoyancy

    Science.gov (United States)

    Partridge, Jamie; Linden, Paul

    2012-11-01

    We propose a new mathematical model to study the competition between localised and distributed sources of buoyancy within a naturally ventilated filling box. The main controlling parameters in this configuration are the buoyancy fluxes of the distributed and local sources, specifically their ratio Ψ. The steady-state dynamics of the flow depend heavily on this parameter. For large Ψ, where the distributed source dominates, we find the space becomes well mixed, as expected if driven by a distributed source alone. Conversely, for small Ψ we find the space reaches a stable two-layer stratification. This is analogous to the classical case of a purely local source, but here the lower layer is buoyant compared to the ambient, due to the constant flux of buoyancy emanating from the distributed source. The ventilation flow rate, the buoyancy of the layers and the location of the interface height separating the two-layer stratification are all obtainable from the model. To validate the theoretical model, small-scale laboratory experiments were carried out. Water was used as the working medium, with buoyancy driven directly by temperature differences. Theoretical results were compared with experimental data and overall good agreement was found. A CASE award project with Arup.

  19. Project and construction of counting system for neutron probe

    International Nuclear Information System (INIS)

    Monteiro, W.P.

    1985-01-01

    A counting system was developed for coupling to a neutron probe, aiming to register the pulses produced by slow neutron interactions in the detector. The neutron probe consists of a fast neutron source, a thermal neutron detector, an amplifier circuit and a pulse counting circuit. The counting system is composed of a counting circuit, a timer and a signal circuit. (M.C.K.)

  20. Study on advancement of in vivo counting using mathematical simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kinase, Sakae [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2003-05-01

    To obtain an assessment of the committed effective dose, individual monitoring for the estimation of intakes of radionuclides is required. For individual monitoring of exposure to intakes of radionuclides, direct measurement of radionuclides in the body (in vivo counting) is very useful. To advance precision in vivo counting that fulfills the requirements of the ICRP 1990 recommendations, some problems, such as the investigation of uncertainties in estimates of body burdens by in vivo counting and the selection of ways to improve precision, have been studied. In the present study, a calibration technique for in vivo counting applications using Monte Carlo simulation was developed. The advantage of the technique is that counting efficiency can be obtained for various shapes and sizes that are very difficult to vary in physical phantoms. To validate the calibration technique, the response functions and counting efficiencies of a whole-body counter installed at JAERI were evaluated using the simulation and measurements. The calculations are in good agreement with the measurements. A method for the determination of counting efficiency curves as a function of energy was developed using the present technique, and a physique correction equation was derived from the relationship between correction-factor parameters and counting efficiencies of the JAERI whole-body counter. The uncertainties in body burdens of ¹³⁷Cs estimated with the JAERI whole-body counter were also investigated using Monte Carlo simulation and measurements. It was found that the uncertainties of body burdens estimated with the whole-body counter depend strongly on various sources of uncertainty, such as the radioactivity distribution within the body and counting statistics. Furthermore, an evaluation method for the peak efficiencies of a Ge semiconductor detector was developed by Monte Carlo simulation for the optimum arrangement of Ge semiconductor detectors for

  1. Bayesian approach in MN low dose of radiation counting

    International Nuclear Information System (INIS)

    Serna Berna, A.; Alcaraz, M.; Acevedo, C.; Navarro, J. L.; Alcanzar, M. D.; Canteras, M.

    2006-01-01

    The micronucleus (MN) assay in lymphocytes is a well-established technique for the assessment of genetic damage induced by ionizing radiation. Due to the presence of a natural background of MN, the net MN is obtained by subtracting this value from the gross value. When very low doses of radiation are given, the induced MN is close to, or even lower than, the predetermined background value. Furthermore, the damage distribution induced by the radiation follows a Poisson probability distribution. These two facts make it a difficult task to obtain the net counting rate in exposed situations. It is possible to overcome this problem using a Bayesian approach, in which the selection of prior distributions for the background and net counting rate plays an important role. In the present work we present a detailed analysis using Bayesian theory to infer the net counting rate in two different situations: (a) when the background is known for an individual sample, using the exact value for the background and a Jeffreys prior for the net counting rate; and (b) when the background is not known, using a population background distribution as the background prior function and a constant prior for the net counting rate. (Author)
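
    Case (a) can be sketched numerically: with a known background rate and Poisson-distributed gross counts, the posterior for the net rate under a Jeffreys prior can be evaluated on a grid. The numbers below are illustrative, not from the paper:

```python
import math

def net_rate_posterior(n_obs, t, b, s_grid):
    """Grid-normalized posterior for the net rate s, given gross counts
    n_obs in time t, a known background rate b, and a Jeffreys-type prior
    p(s) ~ s**-0.5. Likelihood: Poisson with mean (s + b) * t."""
    post = []
    for s in s_grid:
        mu = (s + b) * t
        log_like = n_obs * math.log(mu) - mu - math.lgamma(n_obs + 1)
        post.append(math.exp(log_like) * s ** -0.5)
    z = sum(post)
    return [p / z for p in post]

# Illustrative data: 12 gross counts in 10 s with background rate 0.5 /s
grid = [0.001 * i for i in range(1, 5001)]      # net rate s from 0.001 to 5
post = net_rate_posterior(12, 10.0, 0.5, grid)
mean_s = sum(s * p for s, p in zip(grid, post))  # near the MLE 12/10 - 0.5
```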

  2. Photon counting and fluctuation of molecular movement

    International Nuclear Information System (INIS)

    Inohara, Koichi

    1978-01-01

    The direct measurement of the fluctuation of molecular motions, which provides useful information on molecular movement, was conducted by introducing the photon counting method. The use of photon counting makes it possible to treat a molecular system consisting of a small number of molecules, much as a radioisotope allows the detection of a small number of atoms, which is significant in biological systems. This method is based on counting the number of photons of a definite polarization emitted in a definite time interval from fluorescent molecules excited by pulsed light, bound to marked large molecules found in a definite spatial region. Using the probability of finding a number of molecules oriented in a definite direction in the definite spatial region, the probability of counting a number of photons in a definite time interval can be calculated. Thus the measurable count rate of photons can be related to the fluctuation of molecular movement. The measurement was carried out under conditions in which the probability of the simultaneous arrival of more than two photons at the detector is less than 1/100. As experimental results, the resolving power of the photon-counting apparatus and the frequency distribution of the number of photons of a definite polarization counted for 1 nanosecond are shown. In solution, the variance of the number of molecules, 500 on average, is 1200, as estimated from the experimental data by assuming a normal distribution. This departure from the Poisson distribution means that a certain correlation exists in the molecular movement. In solid solution, no significant deviation was observed. The correlation existing in molecular movement can be expressed in terms of the fluctuation of the number of molecules. (Nakai, Y.)
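
    The reported departure (variance 1200 at a mean of 500, versus equal variance and mean for a Poisson process) is conveniently summarized by the Fano factor, the variance-to-mean ratio:

```python
def fano_factor(counts):
    """Variance-to-mean ratio of a count record: 1 for a Poisson process,
    > 1 indicates correlation (bunching) in the underlying fluctuations."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean

# The molecule-number figures quoted in the abstract (mean 500, variance
# 1200) correspond to a Fano factor of 2.4, well above the Poisson value 1.
```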

  3. Parametric normalization for full-energy peak efficiency of HPGe γ-ray spectrometers at different counting positions for bulky sources.

    Science.gov (United States)

    Peng, Nie; Bang-Fa, Ni; Wei-Zhi, Tian

    2013-02-01

    Application of effective interaction depth (EID) principle for parametric normalization of full energy peak efficiencies at different counting positions, originally for quasi-point sources, has been extended to bulky sources (within ∅30 mm×40 mm) with arbitrary matrices. It is also proved that the EID function for quasi-point source can be directly used for cylindrical bulky sources (within ∅30 mm×40 mm) with the geometric center as effective point source for low atomic number (Z) and low density (D) media and high energy γ-rays. It is also found that in general EID for bulky sources is dependent upon Z and D of the medium and the energy of the γ-rays in question. In addition, the EID principle was theoretically verified by MCNP calculations. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Gating circuit for single photon-counting fluorescence lifetime instruments using high repetition pulsed light sources

    International Nuclear Information System (INIS)

    Laws, W.R.; Potter, D.W.; Sutherland, J.C.

    1984-01-01

    We have constructed a circuit that permits conventional timing electronics to be used in single photon-counting fluorimeters with high repetition rate excitation sources (synchrotrons and mode-locked lasers). Most commercial time-to-amplitude and time-to-digital converters introduce errors when processing very short time intervals and when subjected to high-frequency signals. This circuit reduces the frequency of signals representing the pulsed light source (stops) to the rate of detected fluorescence events (starts). Precise timing between the start/stop pair is accomplished by using the second stop pulse after a start pulse. Important features of our design are that the circuit is insensitive to the simultaneous occurrence of start and stop signals and that the reduction in the stop frequency allows the start/stop time interval to be placed in linear regions of the response functions of commercial timing electronics

  5. Seed counting system evaluation using arduino microcontroller

    Directory of Open Access Journals (Sweden)

    Paulo Fernando Escobar Paim

    2018-01-01

    Full Text Available The development of automated systems has been prominent in the most diverse productive sectors, among them the agricultural sector. These systems aim to optimize activities by increasing operational efficiency and quality of work. In this sense, the present work has the objective of evaluating a prototype developed for seed counting in the laboratory, using an Arduino microcontroller. The prototype of the seed counting system was built using a dosing mechanism commonly used in seeders, an electric motor, an Arduino Uno, a light-dependent resistor and a light-emitting diode. To test the prototype, a completely randomized design (CRD) was used in a two-factorial scheme composed of three groups defined according to the number of seeds tested (500, 1000 and 1500 seeds) and three speeds of the dosing disc that allowed distribution at 17, 21 and 32 seeds per second, with 40 repetitions, evaluating the seed counting prototype performance at different speeds. The prototype of the bench counter showed moderate variability in the number of seeds counted within the nine tests and a high precision in the seed count at the distribution speeds of 17 and 21 seeds per second for up to 1500 seeds tested. Therefore, based on the observed results, the developed prototype presents itself as an excellent tool for counting seeds in the laboratory.

  6. Flow cytometry total cell counts : A field study assessing microbiological water quality and growth in unchlorinated drinking water distribution systems

    NARCIS (Netherlands)

    Liu, G.; Van der Mark, E.J.; Verberk, J.Q.; Van Dijk, J.C.

    2013-01-01

    The objective of this study was to evaluate the application of flow cytometry total cell counts (TCCs) as a parameter to assess microbial growth in drinking water distribution systems and to determine the relationships between different parameters describing the biostability of treated water. A

  7. Algorithm for counting large directed loops

    Energy Technology Data Exchange (ETDEWEB)

    Bianconi, Ginestra [Abdus Salam International Center for Theoretical Physics, Strada Costiera 11, 34014 Trieste (Italy); Gulbahce, Natali [Theoretical Division and Center for Nonlinear Studies, Los Alamos National Laboratory, NM 87545 (United States)

    2008-06-06

    We derive a Belief-Propagation algorithm for counting large loops in a directed network. We evaluate the distribution of the number of small loops in a directed random network with given degree sequence. We apply the algorithm to a few characteristic directed networks of various network sizes and loop structures and compare the algorithm with exhaustive counting results when possible. The algorithm is adequate in estimating loop counts for large directed networks and can be used to compare the loop structure of directed networks and their randomized counterparts.

  8. A UNIFIED EMPIRICAL MODEL FOR INFRARED GALAXY COUNTS BASED ON THE OBSERVED PHYSICAL EVOLUTION OF DISTANT GALAXIES

    International Nuclear Information System (INIS)

    Béthermin, Matthieu; Daddi, Emanuele; Sargent, Mark T.; Elbaz, David; Mullaney, James; Pannella, Maurilio; Magdis, Georgios; Hezaveh, Yashar; Le Borgne, Damien; Buat, Véronique; Charmandaris, Vassilis; Lagache, Guilaine; Scott, Douglas

    2012-01-01

    We reproduce the mid-infrared to radio galaxy counts with a new empirical model based on our current understanding of the evolution of main-sequence (MS) and starburst (SB) galaxies. We rely on a simple spectral energy distribution (SED) library based on Herschel observations: a single SED for the MS and another one for SB, getting warmer with redshift. Our model is able to reproduce recent measurements of galaxy counts performed with Herschel, including counts per redshift slice. This agreement demonstrates the power of our 2-Star-Formation Modes (2SFM) decomposition in describing the statistical properties of infrared sources and their evolution with cosmic time. We discuss the relative contribution of MS and SB galaxies to the number counts at various wavelengths and flux densities. We also show that MS galaxies are responsible for a bump in the 1.4 GHz radio counts around 50 μJy. Material of the model (predictions, SED library, mock catalogs, etc.) is available online.

  9. The Impact of Source Distribution on Scalar Transport over Forested Hills

    Science.gov (United States)

    Ross, Andrew N.; Harman, Ian N.

    2015-08-01

    Numerical simulations of neutral flow over a two-dimensional, isolated, forested ridge are conducted to study the effects of scalar source distribution on scalar concentrations and fluxes over forested hills. Three different constant-flux sources are considered that span a range of idealized but ecologically important source distributions: a source at the ground, one uniformly distributed through the canopy, and one decaying with depth in the canopy. A fourth source type, where the in-canopy source depends on both the wind speed and the difference in concentration between the canopy and a reference concentration on the leaf, designed to mimic deposition, is also considered. The simulations show that the topographically-induced perturbations to the scalar concentration and fluxes are quantitatively dependent on the source distribution. The net impact is a balance of different processes affecting both advection and turbulent mixing, and can be significant even for moderate topography. Sources that have significant input in the deep canopy or at the ground exhibit a larger magnitude advection and turbulent flux-divergence terms in the canopy. The flows have identical velocity fields and so the differences are entirely due to the different tracer concentration fields resulting from the different source distributions. These in-canopy differences lead to larger spatial variations in above-canopy scalar fluxes for sources near the ground compared to cases where the source is predominantly located near the canopy top. Sensitivity tests show that the most significant impacts are often seen near to or slightly downstream of the flow separation or reattachment points within the canopy flow. The qualitative similarities to previous studies using periodic hills suggest that important processes occurring over isolated and periodic hills are not fundamentally different. The work has important implications for the interpretation of flux measurements over forests, even in

  10. A computer simulation used to investigate optimization in low level counting

    International Nuclear Information System (INIS)

    Brown, R.C.; Kephart, G.S.

    1984-01-01

    The differential form of the interval distribution for randomly spaced events such as radioactive decay is represented as dP_t = a·e^(−at)·dt, derived from the Poisson distribution. As applied to radioactive decay, this states that the probability (dP_t) that the duration of a particular interval (elapsed time between counts) will be between t and t+dt is a function of the count rate (a). Thus a logarithmic transformation of this probability distribution results in a linear function whose slope and intercept are defined by the count rate. The effort expended in defining the interval distribution of a given radiation measurement equates, in the laboratory, to measuring and accumulating discrete time intervals between events rather than the usual approach of counting events per unit time. It follows from basic information theory that this greater effort should result in improved statistical confidence in determinations of the ''true'' count rate (a). Using a random number generator as an analog of the discrete decay event, the authors have devised a Monte Carlo approach to investigate application of the above theory to the low-level counting situation. This investigative approach is well suited to sensitivity analyses, such that any constraints on proposed optimization techniques can be well defined prior to introducing these methods into the counting requirements in the laboratory
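
    The interval method is easy to reproduce with a random number generator, as the authors describe: inter-event times of a Poisson process are exponentially distributed, so their mean estimates 1/a and a log-histogram of intervals is linear with slope −a. A minimal sketch:

```python
import random

random.seed(42)

def simulate_intervals(rate, n):
    """Inter-event times of a Poisson process with the given count rate:
    independent exponential variates with mean 1/rate."""
    return [random.expovariate(rate) for _ in range(n)]

intervals = simulate_intervals(5.0, 100000)
rate_est = 1.0 / (sum(intervals) / len(intervals))   # close to 5.0
```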

  11. Fitting a distribution to microbial counts: Making sense of zeroes

    DEFF Research Database (Denmark)

    Ribeiro Duarte, Ana Sofia; Stockmarr, Anders; Nauta, Maarten

    2015-01-01

    The accurate estimation of true prevalence and concentration of microorganisms in foods is an important element of quantitative microbiological risk assessment (QMRA). This estimation is often based on microbial detection and enumeration data. Among such data are artificial zero counts, that originated by chance from contaminated food products. When these products are not differentiated from uncontaminated products that originate true zero counts, the estimates of true prevalence and concentration may be inaccurate. This inaccuracy is especially relevant in situations where highly pathogenic bacteria are involved and where growth can occur along the food pathway. Our aim was to develop a method that provides accurate estimates of concentration parameters and differentiates between artificial and true zeroes, thus also accurately estimating true prevalence. We first show the disadvantages
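
    One simple way to formalize the artificial-versus-true zero distinction (shown here as an illustration, not the authors' exact model) is a zero-inflated Poisson: a fraction pi of products are uncontaminated (structural, true zeroes) while contaminated products yield Poisson counts that can still be zero by chance (artificial zeroes). A grid maximum-likelihood sketch on simulated data:

```python
import math
import random

random.seed(7)

def poisson_draw(lam):
    """Poisson sample via Knuth's multiplication method."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# Simulated plate counts: 30% structural zeroes (uncontaminated products),
# the rest Poisson(2.0) counts from contaminated products.
data = [0 if random.random() < 0.3 else poisson_draw(2.0)
        for _ in range(2000)]

def zip_loglike(pi, lam, counts):
    """Log-likelihood of a zero-inflated Poisson model: a zero is either
    structural (probability pi) or a chance Poisson zero."""
    ll = 0.0
    for c in counts:
        if c == 0:
            ll += math.log(pi + (1.0 - pi) * math.exp(-lam))
        else:
            ll += (math.log(1.0 - pi) + c * math.log(lam) - lam
                   - math.lgamma(c + 1))
    return ll

# Coarse grid maximum likelihood over (pi, lam)
pis = [i / 20 for i in range(1, 13)]    # 0.05 .. 0.60
lams = [i / 2 for i in range(1, 9)]     # 0.5 .. 4.0
best_pi, best_lam = max(((p, l) for p in pis for l in lams),
                        key=lambda pl: zip_loglike(pl[0], pl[1], data))
# best_pi, best_lam land near the simulated truth (0.3, 2.0)
```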

  12. Brightness distribution data on 2918 radio sources at 365 MHz

    International Nuclear Information System (INIS)

    Cotton, W.D.; Owen, F.N.; Ghigo, F.D.

    1975-01-01

    This paper is the second in a series describing the results of a program attempting to fit models of the brightness distribution to radio sources observed at 365 MHz with the Bandwidth Synthesis Interferometer (BSI) operated by the University of Texas Radio Astronomy Observatory. Results for a further 2918 radio sources are given. An unresolved model and three symmetric extended models with angular sizes in the range 10--70 arcsec were attempted for each radio source. In addition, for 348 sources for which other observations of brightness distribution are published, the reference to the observations and a brief description are included

  13. Counting statistics and loss corrections for the APS

    International Nuclear Information System (INIS)

    Lee, W.K.; Mills, D.M.

    1992-01-01

    It has been suggested that for timing experiments, it might be advantageous to arrange the bunches in the storage ring in an asymmetrical mode. In this paper, we determine the counting losses from pulsed x-ray sources from basic probabilistic arguments and from Poisson statistics. In particular, the impact on single-photon counting losses of a variety of possible filling modes for the Advanced Photon Source (APS) is examined. For bunches of equal current, a loss of 10% occurs whenever the count rate exceeds 21% of the bunch repetition rate. This changes slightly when bunches containing unequal numbers of particles are considered. The results are applied to several common detector/electronics systems
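
    The 10%/21% figure follows from Poisson statistics: if the photons per bunch are Poisson-distributed with mean mu and single-photon counting registers at most one count per bunch, the registered fraction is (1 - e^-mu)/mu. A minimal sketch (my own illustration, not code from the paper) locating the 10% loss point:

```python
import math

def registered_fraction(mu):
    """Fraction of photons registered when at most one count per bunch
    can be recorded and photons per bunch are Poisson(mu)."""
    return (1.0 - math.exp(-mu)) / mu

# Find the mean photons-per-bunch giving a 10% counting loss by
# bisection on the monotonically decreasing registered fraction.
lo, hi = 1e-6, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if registered_fraction(mid) > 0.90:
        lo = mid
    else:
        hi = mid

print(round(lo, 3))  # lands just above 0.21, matching the quoted threshold
```

    The bisection converges just above mu = 0.21, i.e. the loss reaches 10% when the true count rate is about 21% of the bunch repetition rate.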

  15. 2013 Kids Count in Colorado! Community Matters

    Science.gov (United States)

    Colorado Children's Campaign, 2013

    2013-01-01

    "Kids Count in Colorado!" is an annual publication of the Children's Campaign, providing state and county level data on child well-being factors including child health, education, and economic status. Since its first release 20 years ago, "Kids Count in Colorado!" has become the most trusted source for data and information on…

  16. Multicounter neutron detector for examination of content and spatial distribution of fissile materials in bulk samples

    International Nuclear Information System (INIS)

    Swiderska-Kowalczyk, M.; Starosta, W.; Zoltowski, T.

    1999-01-01

    A new neutron coincidence well-counter is presented. This experimental device can be applied for passive assay of fissile and, in particular, plutonium-bearing materials. It consists of a set of 3 He tubes placed inside a polyethylene moderator. Outputs from the tubes, first processed by preamplifier/amplifier/discriminator circuits, are then analysed using a correlator connected to a PC, with correlation techniques implemented in software. Such a neutron counter enables determination of the 240 Pu effective mass in samples of small Pu content (i.e., where multiplication effects can be neglected) having a fairly large volume (up to 0.17 m 3 ), provided the isotopic composition is known. For determination of the neutron source distribution inside a sample, a heuristic method based on hierarchical cluster analysis was applied. As input parameters, amplitudes and phases of the two-dimensional Fourier transformation of the count-profile matrices for known point-source distributions and for the examined samples were taken. Such matrices of count profiles are collected by scanning the sample with the detection head. In the clustering process, count profiles of unknown samples are fitted into dendrograms employing the 'proximity' criterion of the examined sample profile to standard sample profiles. The distribution of neutron sources in the examined sample is then evaluated on the basis of comparison with standard source distributions. (author)
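
    The profile-matching step can be illustrated with a toy version: 2-D FFT amplitudes of count-profile matrices serve as features, and an unknown profile is assigned to the nearest standard. The matrices and the nearest-neighbour rule below are hypothetical stand-ins for the hierarchical cluster analysis described above:

```python
import numpy as np

def fft_features(profile):
    """Amplitudes of the 2-D Fourier transform of a count-profile matrix;
    the abstract uses such amplitudes (and phases) as input features."""
    return np.abs(np.fft.fft2(np.asarray(profile, float))).ravel()

# Hypothetical noise-free standard profiles for known source positions
# (counts from scanning a sample with the detection head).
standards = {
    "centre": np.outer([1, 4, 4, 1], [1, 4, 4, 1]),
    "corner": np.outer([8, 3, 1, 1], [8, 3, 1, 1]),
}

# An unknown sample whose profile resembles the centred source.
unknown = np.outer([1, 5, 4, 1], [1, 4, 4, 1])

# Nearest-standard matching: a simple stand-in for the hierarchical
# clustering "proximity" criterion described in the abstract.
best = min(standards, key=lambda k: np.linalg.norm(
    fft_features(standards[k]) - fft_features(unknown)))
print(best)  # centre
```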

  17. Set of counts by scintillations for atmospheric samplings

    International Nuclear Information System (INIS)

    Appriou, D.; Doury, A.

    1962-01-01

    The author reports the development of a scintillation-based counting assembly with the following characteristics: a photomultiplier with a wide photocathode, a thin plastic scintillator for counting beta + alpha (with the possibility of mounting an alpha scintillator), a relatively low intrinsic background with respect to the activities to be counted, and a weakly varying efficiency. The authors discuss the counting objective and present equipment tests (counter, proportional amplifier and pre-amplifier, input drawer). They describe the operation of the apparatus, discuss the selection of scintillators, report the study of the intrinsic background (electron-induced background noise, total background noise, background noise reduction), discuss counts (influence of the external source, sensitivity to alpha radiation, counting homogeneity, minimum detectable activity) and efficiencies

  18. Multiplicity counting from fission detector signals with time delay effects

    Science.gov (United States)

    Nagy, L.; Pázsit, I.; Pál, L.

    2018-03-01

    In recent work, we have developed the theory of using the first three auto- and joint central moments of the currents of up to three fission chambers to extract the singles, doubles and triples count rates of traditional multiplicity counting (Pázsit and Pál, 2016; Pázsit et al., 2016). The objective is to elaborate a method for determining the fissile mass, neutron multiplication, and (α, n) neutron emission rate of an unknown assembly of fissile material from the statistics of the fission chamber signals, analogous to the traditional multiplicity counting methods with detectors in the pulse mode. Such a method would be an alternative to He-3 detector systems, which would be free from the dead time problems that would be encountered in high counting rate applications, for example the assay of spent nuclear fuel. A significant restriction of our previous work was that all neutrons born in a source event (spontaneous fission) were assumed to be detected simultaneously, which is not fulfilled in reality. In the present work, this restriction is eliminated, by assuming an independent, identically distributed random time delay for all neutrons arising from one source event. Expressions are derived for the same auto- and joint central moments of the detector current(s) as in the previous case, expressed with the singles, doubles, and triples (S, D and T) count rates. It is shown that if the time-dispersion of neutron detections is of the same order of magnitude as the detector pulse width, as they typically are in measurements of fast neutrons, the multiplicity rates can still be extracted from the moments of the detector current, although with more involved calibration factors. The presented formulae, and hence also the performance of the proposed method, are tested by both analytical models of the time delay as well as with numerical simulations. Methods are suggested also for the modification of the method for large time delay effects (for thermalised neutrons).
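
    The link between current moments and count rates rests on Campbell-type relations: for a shot-noise current built from pulses f(t) arriving at Poisson rate r, the n-th central moment of the current is r times the integral of f^n. A hedged simulation sketch with hypothetical pulse parameters (not the paper's three-chamber formalism):

```python
import random
random.seed(1)

# For rectangular pulses of height h and width w at Poisson rate r, the
# sampled current should have mean = r*h*w and variance = r*h^2*w.
r, h = 50.0, 2.0             # pulse rate (1/s) and pulse height
dt = 0.001                   # sampling interval (s)
bins_per_pulse = 10          # pulse width w = 10*dt = 0.01 s
n_bins = 200000              # record length T = 200 s

current = [0.0] * n_bins
t = 0.0
while True:
    t += random.expovariate(r)           # Poisson arrival times
    start = int(t / dt)
    if start >= n_bins:
        break
    for i in range(start, min(start + bins_per_pulse, n_bins)):
        current[i] += h

mean = sum(current) / n_bins
var = sum((x - mean) ** 2 for x in current) / n_bins
print(round(mean, 2), round(var, 2))  # theory: mean = 1.0, variance = 2.0
```

    The same construction extends to third central moments (r times the integral of f^3), which is what makes triples rates accessible from current statistics.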

  19. Temporal aggregation of migration counts can improve accuracy and precision of trends

    Directory of Open Access Journals (Sweden)

    Tara L. Crewe

    2016-12-01

    Temporal replicate counts are often aggregated to improve model fit by reducing zero-inflation and count variability, and, in the case of migration counts collected hourly throughout a migration, aggregation allows one to ignore nonindependence. However, aggregation can represent a loss of potentially useful information on the hourly or seasonal distribution of counts, which might impact our ability to estimate reliable trends. We simulated 20-year hourly raptor migration count datasets with a known rate of change to test the effect of aggregating hourly counts to daily or annual totals on our ability to recover the known trend. We simulated data for three types of species, to test whether results varied with species abundance or migration strategy: a commonly detected species, e.g., Northern Harrier, Circus cyaneus; a rarely detected species, e.g., Peregrine Falcon, Falco peregrinus; and a species typically counted in large aggregations with overdispersed counts, e.g., Broad-winged Hawk, Buteo platypterus. We compared the accuracy and precision of estimated trends across species and count types (hourly/daily/annual) using hierarchical models that assumed a Poisson, negative binomial (NB), or zero-inflated negative binomial (ZINB) count distribution. We found little benefit of modeling zero-inflation or of modeling the hourly distribution of migration counts. For the rare species, trends analyzed using daily totals and an NB or ZINB data distribution resulted in a higher probability of detecting an accurate and precise trend. In contrast, trends of the common and overdispersed species benefited from aggregation to annual totals, and for the overdispersed species in particular, trends estimated using annual totals were more precise, and resulted in lower probabilities of estimating a trend (1) in the wrong direction, or (2) with credible intervals that excluded the true trend, as compared with hourly and daily counts.
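
    The core simulation idea can be sketched in miniature: generate counts with a known trend, aggregate, and fit. The sketch below uses hypothetical parameters and a plain log-linear fit rather than the hierarchical Bayesian models of the paper, and recovers a simulated 3% annual decline from annual totals:

```python
import math, random
random.seed(42)

def poisson(lam):
    """Knuth's Poisson sampler; adequate for the moderate means here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

# 20 years of 100 daily counts per season, with a -3% annual trend.
years, days, true_trend = 20, 100, -0.03
annual = []
for y in range(years):
    daily_mean = 50.0 * math.exp(true_trend * y)
    annual.append(sum(poisson(daily_mean) for _ in range(days)))

# Least-squares slope of log(annual total) vs year estimates the trend.
ys = list(range(years))
logs = [math.log(a) for a in annual]
my, ml = sum(ys) / years, sum(logs) / years
slope = sum((y - my) * (l - ml) for y, l in zip(ys, logs)) / \
        sum((y - my) ** 2 for y in ys)
print(round(slope, 3))  # close to the simulated -0.03 annual trend
```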

  20. The dose distribution surrounding 192 Ir and 137 Cs seed sources

    Energy Technology Data Exchange (ETDEWEB)

    Thomason, C [Wisconsin Univ., Madison, WI (USA). Dept. of Medical Physics]; Mackie, T R [Wisconsin Univ., Madison, WI (USA). Dept. of Medical Physics; Wisconsin Univ., Madison, WI (USA). Dept. of Human Oncology]; Lindstrom, M J [Wisconsin Univ., Madison, WI (USA). Biostatistics Center]; Higgins, P D [Cleveland Clinic Foundation, OH (USA). Dept. of Radiation Oncology]

    1991-04-01

    Dose distributions in water were measured using LiF thermoluminescent dosemeters for {sup 192}Ir seed sources with stainless steel and with platinum encapsulation to determine the effect of differing encapsulation. Dose distribution was measured for a {sup 137}Cs seed source. In addition, dose distributions surrounding these sources were calculated using the EGS4 Monte Carlo code and were compared to measured data. The two methods are in good agreement for all three sources. Tables are given describing dose distribution surrounding each source as a function of distance and angle. Specific dose constants were also determined from results of Monte Carlo simulation. This work confirms the use of the EGS4 Monte Carlo code in modelling {sup 192}Ir and {sup 137}Cs seed sources to obtain brachytherapy dose distributions. (author).

  1. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...

  2. Marginalized multilevel hurdle and zero-inflated models for overdispersed and correlated count data with excess zeros.

    Science.gov (United States)

    Kassahun, Wondwosen; Neyens, Thomas; Molenberghs, Geert; Faes, Christel; Verbeke, Geert

    2014-11-10

    Count data are collected repeatedly over time in many applications, such as biology, epidemiology, and public health. Such data are often characterized by the following three features. First, correlation due to the repeated measures is usually accounted for using subject-specific random effects, which are assumed to be normally distributed. Second, the sample variance may exceed the mean, and hence, the theoretical mean-variance relationship is violated, leading to overdispersion. This is usually allowed for based on a hierarchical approach, combining a Poisson model with gamma distributed random effects. Third, an excess of zeros beyond what standard count distributions can predict is often handled by either the hurdle or the zero-inflated model. A zero-inflated model assumes two processes as sources of zeros and combines a count distribution with a discrete point mass as a mixture, while the hurdle model separately handles zero observations and positive counts, where then a truncated-at-zero count distribution is used for the non-zero state. In practice, however, all these three features can appear simultaneously. Hence, a modeling framework that incorporates all three is necessary, and this presents challenges for the data analysis. Such models, when conditionally specified, will naturally have a subject-specific interpretation. However, adopting their purposefully modified marginalized versions leads to a direct marginal or population-averaged interpretation for parameter estimates of covariate effects, which is the primary interest in many applications. In this paper, we present a marginalized hurdle model and a marginalized zero-inflated model for correlated and overdispersed count data with excess zero observations and then illustrate these further with two case studies. The first dataset focuses on the Anopheles mosquito density around a hydroelectric dam, while adolescents' involvement in work, to earn money and support their families or themselves, is
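
    The distinction between the two zero-handling models can be made concrete: a zero-inflated model mixes a point mass at zero with the count distribution's own zeros, while a hurdle model handles zeros separately and gives positive counts a truncated-at-zero distribution. A sketch with Poisson counts and hypothetical parameters:

```python
import math

def pois(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson: zeros come from two sources, a point
    mass (probability pi) mixed with an ordinary Poisson."""
    extra = pi if k == 0 else 0.0
    return extra + (1 - pi) * pois(k, lam)

def hurdle_pmf(k, lam, p0):
    """Hurdle model: zeros handled separately (probability p0);
    positive counts follow a truncated-at-zero Poisson."""
    if k == 0:
        return p0
    return (1 - p0) * pois(k, lam) / (1 - math.exp(-lam))

lam, pi, p0 = 2.5, 0.3, 0.4
z = sum(zip_pmf(k, lam, pi) for k in range(50))
h = sum(hurdle_pmf(k, lam, p0) for k in range(50))
print(round(z, 6), round(h, 6))   # both sum to 1.0
print(round(zip_pmf(0, lam, pi), 4))  # 0.3 + 0.7*exp(-2.5)
```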

  3. Counting efficiency formulae for two, three or four photomultiplier systems

    International Nuclear Information System (INIS)

    Grau Malonda, A.

    1993-01-01

    Counting efficiency formulae as a function of the non-detection probability and the electron distributions for systems with two, three or four photomultipliers are obtained in this paper. It is assumed that the photocathode electron emission follows the Poisson distribution. The formulae obtained are basic for computing the counting efficiency in liquid scintillation spectrometers
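
    As an illustration of the Poisson assumption (not the paper's exact formulae): if a mean of m photoelectrons is shared equally among n tubes, each tube fails to fire with probability e^(-m/n), so an n-fold coincidence fires with probability (1 - e^(-m/n))^n:

```python
import math

def coincidence_efficiency(m, n):
    """Probability that all n photomultipliers fire, assuming the mean
    photoelectron number m is split equally and each tube's yield is
    Poisson. Illustrative only; the paper derives the full formulae."""
    return (1.0 - math.exp(-m / n)) ** n

for n in (2, 3, 4):
    print(n, round(coincidence_efficiency(5.0, n), 4))
```

    For a fixed light yield, spreading the same photoelectrons over more tubes lowers the full-coincidence efficiency, which is why the two-, three- and four-tube formulae differ.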

  4. Effect of source angular distribution on the evaluation of gamma-ray skyshine

    Energy Technology Data Exchange (ETDEWEB)

    Sheu, R.D.; Jiang, S.H. [Dept. of Engineering and System Science, National Tsing Hua Univ., Taiwan (China); Chang, B.J.; Chen, I.J. [Division of Health Physics, Inst. of Nuclear Energy Research, Taiwan (China)

    2000-03-01

    The effect of the angular distribution of the equivalent point source on the analysis of skyshine dose rates was investigated in detail. The dedicated skyshine codes SKYDOSE and McSKY were revised to include the capability of dealing with an anisotropic source. It was found that replacing the cosine-distributed source with an isotropic source overestimates the skyshine dose rates for large roof-subtended angles and underestimates them for small roof-subtended angles. For buildings with roof shielding, however, replacing the cosine-distributed source with an isotropic source will always underestimate the skyshine dose rates. The skyshine dose rates from a volume source calculated by the dedicated skyshine codes agree very well with those of the MCNP Monte Carlo calculation. (author)

  5. A new approach to counting measurements: Addressing the problems with ISO-11929

    Science.gov (United States)

    Klumpp, John; Miller, Guthrie; Poudel, Deepesh

    2018-06-01

    We present an alternative approach to making counting measurements of radioactivity which offers probabilistic interpretations of the measurements. Unlike the approach in the current international standard (ISO-11929), our approach, which uses an assumed prior probability distribution of the true amount in the sample, is able to answer the question of interest for most users of the standard: "what is the probability distribution of the true amount in the sample, given the data?" The final interpretation of the measurement requires information not necessarily available at the measurement stage. However, we provide an analytical formula for what we term the "measurement strength", which depends only on measurement-stage count quantities. We show that, when the sources are rare, the posterior odds that the sample true value exceeds ε are the measurement strength times the prior odds, independently of ε, the prior odds, and the distribution of the calibration coefficient. We recommend that the measurement lab immediately follow up on unusually high samples using an "action threshold" on the measurement strength, similar to the decision threshold recommended by the current standard. We further recommend that the measurement lab perform large background studies in order to characterize the non-constancy of background, including possible time correlation of background.
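
    The Bayesian reading of a counting measurement can be sketched with a discretized prior: a small prior probability that the sample is truly positive, a Poisson likelihood with background b and calibration c, and a gridded marginal over the true amount. All numbers are hypothetical; this illustrates the approach, not the paper's formula for the measurement strength:

```python
import math

b, c, prior_pos = 4.0, 1.0, 0.01          # background, calibration, prior
grid = [0.1 * i for i in range(1, 301)]   # candidate true amounts A > 0

def pois(n, lam):
    return math.exp(-lam) * lam**n / math.factorial(n)

def posterior_prob_positive(n):
    """P(sample truly positive | observed count n), marginalizing the
    positive branch over the gridded prior on the true amount."""
    like_zero = pois(n, b)
    like_pos = sum(pois(n, b + c * a) for a in grid) / len(grid)
    num = prior_pos * like_pos
    return num / (num + (1 - prior_pos) * like_zero)

# A count at background level barely moves the prior; an elevated
# count raises the posterior probability substantially.
print(round(posterior_prob_positive(4), 3), round(posterior_prob_positive(12), 3))
```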

  6. Modeling Repeated Count Data : Some Extensions of the Rasch Poisson Counts Model

    NARCIS (Netherlands)

    van Duijn, M.A.J.; Jansen, Margo

    1995-01-01

    We consider data that can be summarized as an N X K table of counts-for example, test data obtained by administering K tests to N subjects. The cell entries y(ij) are assumed to be conditionally independent Poisson-distributed random variables, given the NK Poisson intensity parameters mu(ij). The

  7. Probing the Cosmological Principle in the counts of radio galaxies at different frequencies

    Science.gov (United States)

    Bengaly, Carlos A. P.; Maartens, Roy; Santos, Mario G.

    2018-04-01

    According to the Cosmological Principle, the matter distribution on very large scales should have a kinematic dipole that is aligned with that of the CMB. We determine the dipole anisotropy in the number counts of two all-sky surveys of radio galaxies. For the first time, this analysis is presented for the TGSS survey, allowing us to check the consistency of the radio dipole at low and high frequencies by comparing the results with the well-known NVSS survey. We match the flux thresholds of the catalogues, with flux limits chosen to minimise systematics, and adopt a strict masking scheme. We find dipole directions that are in good agreement with each other and with the CMB dipole. In order to compare the amplitude of the dipoles with theoretical predictions, we produce sets of lognormal realisations. Our realisations include the theoretical kinematic dipole, galaxy clustering, Poisson noise, simulated redshift distributions which fit the NVSS and TGSS source counts, and errors in flux calibration. The measured dipole for NVSS is ~2 times larger than predicted by the mock data. For TGSS, the dipole is almost 5 times larger than predicted, even after checking for completeness and taking account of errors in source fluxes and in flux calibration. Further work is required to understand the nature of the systematics that are the likely cause of the anomalously large TGSS dipole amplitude.
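
    A toy version of the dipole estimation: for sources whose sky density is proportional to 1 + d·cos(theta) about the dipole axis, the mean of cos(theta) is d/3, so three times the sample mean estimates the amplitude. This illustrative estimator (not the paper's estimator or its lognormal mocks) recovers an injected dipole:

```python
import random
random.seed(7)

def sample_z(d):
    """Rejection-sample z = cos(theta) from a pdf proportional to 1 + d*z."""
    while True:
        z = random.uniform(-1.0, 1.0)
        if random.uniform(0.0, 1.0 + d) < 1.0 + d * z:
            return z

d_true, N = 0.05, 200000          # injected dipole amplitude, source count
mean_z = sum(sample_z(d_true) for _ in range(N)) / N
d_est = 3.0 * mean_z              # E[z] = d/3 under the dipole-modulated pdf
print(round(d_est, 3))            # close to the injected 0.05
```

    The 1/sqrt(N) scatter of this estimator is one reason dipole measurements need all-sky catalogues with large source counts.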

  8. Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.

    Science.gov (United States)

    Hougaard, P; Lee, M L; Whitmore, G A

    1997-12-01

    Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
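
    The gamma-mixed Poisson construction can be checked directly: draw each rate from a gamma distribution (shape k, scale theta), then a Poisson count at that rate; marginally the counts are negative binomial with mean k·theta and variance k·theta + k·theta^2, i.e. overdispersed. A small simulation sketch with hypothetical parameters:

```python
import math, random
random.seed(3)

def poisson(lam):
    """Knuth's Poisson sampler."""
    L, n, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return n
        n += 1

k, theta, N = 2.0, 2.0, 20000
counts = [poisson(random.gammavariate(k, theta)) for _ in range(N)]
mean = sum(counts) / N
var = sum((c - mean) ** 2 for c in counts) / N
print(round(mean, 2), round(var, 2))  # theory: mean 4, variance 12
```

    An inverse Gaussian mixing distribution, as considered in the paper, is used the same way: only the distribution of the random rate changes.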

  9. Counts and colors of faint galaxies

    International Nuclear Information System (INIS)

    Kron, R.G.

    1980-01-01

    The color distribution of faint galaxies is an observational dimension which has not yet been fully exploited, despite the important constraints obtainable for galaxy evolution and cosmology. Number-magnitude counts alone contain very diluted information about the state of things because galaxies from a wide range in redshift contribute to the counts at each magnitude. The most-frequently-seen type of galaxy depends on the luminosity function and the relative proportions of galaxies of different spectral classes. The addition of color as a measured quantity can thus considerably sharpen the interpretation of galaxy counts since the apparent color depends on the redshift and rest-frame spectrum. (Auth.)

  10. Deformation due to distributed sources in micropolar thermodiffusive medium

    Directory of Open Access Journals (Sweden)

    Sachin Kaushal

    2010-10-01

    The general solution to the field equations for a micropolar generalized thermodiffusive medium in the context of G-L theory is investigated by applying Laplace and Fourier transforms for various sources. An application of distributed normal forces, thermal sources, or potential sources has been taken to show the utility of the problem. To obtain the solution in physical form, a numerical inversion technique has been applied. The transformed components of stress, temperature distribution, and chemical potential for G-L theory and CT theory have been depicted graphically, and the results are compared analytically to show the impact of diffusion, relaxation times, and micropolarity on these quantities. Some special cases of interest are also deduced from the present investigation.

  11. Free-form analysis of the cosmological evolution of radio sources

    International Nuclear Information System (INIS)

    Robertson, J.G.

    1980-01-01

    This paper extends an iterative scheme for calculation of free-form evolution functions able to reconcile observed radio source counts with the standard General Relativistic cosmological models. It is assumed that the luminosity dependence of the evolution consists of a gradual turn-on of evolution above a certain luminosity. No particular functional form is assumed for the redshift dependence of the evolution (i.e. it is free-form). The extension concerns the use of the luminosity distribution to supply an effective luminosity function, thus overcoming a problem of consistency at the high-luminosity end of the luminosity function, where the evolution function has to be known. This method also guarantees that the correct average redshifts will be predicted where they are known observationally at high flux densities. The new iterative scheme has been applied to the source counts at 408 MHz from the Molonglo Cross telescope, using the Einstein-de Sitter cosmology and a recent determination of the luminosity distribution for sources of S 408 > 10 Jy. (author)

  12. Real-time ArcGIS and heterotrophic plate count based chloramine disinfectant control in water distribution system.

    Science.gov (United States)

    Bai, Xiaohui; Zhi, Xinghua; Zhu, Huifeng; Meng, Mingqun; Zhang, Mingde

    2015-01-01

    This study investigates the effect of chloramine residual on bacterial growth and regrowth and the relationship between heterotrophic plate counts (HPCs) and the concentration of chloramine residual in the Shanghai drinking water distribution system (DWDS). In this study, models to control HPCs in the water distribution system and at consumer taps are also developed. Real-time ArcGIS was applied to display the distribution and changes of the chloramine residual concentration in the pipe system using these models. Residual regression analysis was used to obtain a reasonable range of threshold values that allows the chloramine residual to efficiently inhibit bacterial growth in the Shanghai DWDS; the threshold values should be between 0.45 and 0.5 mg/L in pipe water and 0.2 and 0.25 mg/L in tap water. The low residual chloramine value (0.05 mg/L) of the Chinese drinking water quality standard may pose a potential health risk from microorganisms and should be raised. Disinfection by-products (DBPs) were detected, but no health risk was identified.

  13. Y-Source Boost DC/DC Converter for Distributed Generation

    DEFF Research Database (Denmark)

    Siwakoti, Yam P.; Loh, Poh Chiang; Blaabjerg, Frede

    2015-01-01

    This paper introduces a versatile Y-source boost dc/dc converter intended for distributed power generation, where high gain is often demanded. The proposed converter uses a Y-source impedance network realized with a tightly coupled three-winding inductor for high voltage boosting that is presently...

  14. Supply and distribution for γ-ray sources

    International Nuclear Information System (INIS)

    Yamamoto, Takeo

    1997-01-01

    Japan Atomic Energy Research Institute (JAERI) is the only facility that supplies and distributes radioisotopes (RI) in Japan. The γ-ray sources supplied are 192 Ir and 169 Yb for non-destructive examination and 192 Ir, 198 Au and 153 Gd for clinical use. All of these demands in Japan are currently met by domestic products. Meanwhile, the imported γ-ray sources are 60 Co sources for medical and industrial uses, including sterilization of medical instruments, 137 Cs for blood irradiation, and 241 Am for industrial measurements. The major overseas suppliers are Nordion International Inc. and Amersham International plc. RI products on the market are divided into two groups: primary products, which are supplied in liquid or solid form after chemical or physical treatment of radioactive materials obtained from a reactor, and secondary products, which are final products after various processing steps. Generally, the secondary products are the ones used in practice. In Japan, both domestic and imported products are supplied to users via the JRIA (Japan Radioisotope Association). The association participates in the sales and distribution of the secondary products and in the processing of primary products into sealed sources. Furthermore, stable supply systems for these products are almost established according to the half-life of each nuclide, provided there is no reactor accident. (M.N.)

  15. Low Count Anomaly Detection at Large Standoff Distances

    Science.gov (United States)

    Pfund, David Michael; Jarman, Kenneth D.; Milbrath, Brian D.; Kiff, Scott D.; Sidor, Daniel E.

    2010-02-01

    Searching for hidden illicit sources of gamma radiation in an urban environment is difficult. Background radiation profiles are variable and cluttered with transient acquisitions from naturally occurring radioactive materials and medical isotopes. Potentially threatening sources likely will be nearly hidden in this noise and encountered at high standoff distances and low threat count rates. We discuss an anomaly detection algorithm that characterizes low count sources as threatening or non-threatening and operates well in the presence of high benign source variability. We discuss the algorithm parameters needed to reliably find sources both close to the detector and far away from it. These parameters include the cutoff frequencies of background tracking filters and the integration time of the spectrometer. This work is part of the development of the Standoff Radiation Imaging System (SORIS) as part of DNDO's Standoff Radiation Detection System Advanced Technology Demonstration (SORDS-ATD) program.

  16. Color quench correction for low level Cherenkov counting.

    Science.gov (United States)

    Tsroya, S; Pelled, O; German, U; Marco, R; Katorza, E; Alfassi, Z B

    2009-05-01

    The Cherenkov counting efficiency varies strongly with color quenching, thus correction curves must be used to obtain correct results. The external (152)Eu source of a Quantulus 1220 liquid scintillation counting (LSC) system was used to obtain a quench indicative parameter based on spectra area ratio. A color quench correction curve for aqueous samples containing (90)Sr/(90)Y was prepared. The main advantage of this method over the common spectra indicators is its usefulness also for low level Cherenkov counting.

  17. Limits of reliability for the measurement of integral count

    International Nuclear Information System (INIS)

    Erbeszkorn, L.

    1979-01-01

    A method is presented for exact and approximate calculation of reliability limits of measured nuclear integral count. The formulae are applicable in measuring conditions which assure the Poisson distribution of the counts. The coefficients of the approximate formulae for 90, 95, 98 and 99 per cent reliability levels are given. The exact reliability limits for 90 per cent reliability level are calculated up to 80 integral counts. (R.J.)
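
    Exact (Garwood-type) reliability limits for a Poisson count can be computed by inverting the Poisson CDF, which is one standard way to obtain such limits; the sketch below uses bisection and hypothetical inputs rather than the paper's tabulated coefficients:

```python
import math

def pois_cdf(n, lam):
    """P(X <= n) for X ~ Poisson(lam)."""
    term, total = math.exp(-lam), math.exp(-lam)
    for k in range(1, n + 1):
        term *= lam / k
        total += term
    return total

def exact_limits(n, conf=0.95):
    """Exact two-sided confidence limits for an observed Poisson count n,
    found by bisection (valid when counts are Poisson distributed, as
    the abstract's measuring conditions assure)."""
    alpha = 1.0 - conf

    def solve(f, lo, hi):
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if f(mid):
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    lower = 0.0 if n == 0 else solve(
        lambda lam: pois_cdf(n - 1, lam) > 1 - alpha / 2, 0.0, n + 1.0)
    upper = solve(lambda lam: pois_cdf(n, lam) > alpha / 2,
                  float(n), 10.0 * n + 20.0)
    return lower, upper

print(tuple(round(x, 3) for x in exact_limits(10)))  # ~(4.795, 18.390)
```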

  18. High rate 4π β-γ coincidence counting system

    International Nuclear Information System (INIS)

    Johnson, L.O.; Gehrke, R.J.

    1978-01-01

    A high count rate 4π β-γ coincidence counting system for the determination of absolute disintegration rates of short half-life radionuclides is described. With this system the dead time per pulse is minimized by not stretching any pulses beyond the width necessary to satisfy overlap coincidence requirements. The equations used to correct for the β, γ, and coincidence channel dead times and for accidental coincidences are presented but not rigorously developed. Experimental results are presented for a decaying source of 56 Mn initially at 2 x 10 6 d/s and a set of 60 Co sources of accurately known source strengths varying from 10 3 to 2 x 10 6 d/s. A check of the accidental coincidence equation for the case of two independent sources with varying source strengths is presented
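
    The efficiency-independent principle behind 4π β-γ coincidence counting, together with the textbook first-order accidental-coincidence correction (roughly 2·tau·N_beta·N_gamma for resolving time tau), can be sketched with hypothetical rates; the paper's full dead-time equations are more involved:

```python
# With beta efficiency eb and gamma efficiency eg, the measured rates
# are Nb = eb*N0, Ng = eg*N0 and Nc = eb*eg*N0, so the activity
# N0 = Nb*Ng/Nc is recovered independently of the efficiencies.
N0, eb, eg, tau = 1.0e4, 0.85, 0.30, 1.0e-6   # hypothetical values

Nb, Ng = eb * N0, eg * N0
Nc_true = eb * eg * N0
accidentals = 2 * tau * Nb * Ng               # first-order accidental rate
Nc_meas = Nc_true + accidentals

est_naive = Nb * Ng / Nc_meas                 # biased low by accidentals
est_corrected = Nb * Ng / (Nc_meas - accidentals)
print(round(est_naive, 1), round(est_corrected, 1))
```

    At the 2 x 10^6 d/s rates quoted in the abstract, accidental and dead-time corrections dominate, which is why the full correction equations are needed.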

  19. Sediment sources and their Distribution in Chwaka Bay, Zanzibar ...

    African Journals Online (AJOL)

    This work establishes sediment sources, character and their distribution in Chwaka Bay using (i) stable isotopes compositions of organic carbon (OC) and nitrogen, (ii) contents of OC, nitrogen and CaCO3, (iii) C/N ratios, (iv) distribution of sediment mean grain size and sorting, and (v) thickness of unconsolidated sediments.

  20. The SCUBA-2 Cosmology Legacy Survey: the EGS deep field - I. Deep number counts and the redshift distribution of the recovered cosmic infrared background at 450 and 850 μm

    Science.gov (United States)

    Zavala, J. A.; Aretxaga, I.; Geach, J. E.; Hughes, D. H.; Birkinshaw, M.; Chapin, E.; Chapman, S.; Chen, Chian-Chou; Clements, D. L.; Dunlop, J. S.; Farrah, D.; Ivison, R. J.; Jenness, T.; Michałowski, M. J.; Robson, E. I.; Scott, Douglas; Simpson, J.; Spaans, M.; van der Werf, P.

    2017-01-01

    We present deep observations at 450 and 850 μm in the Extended Groth Strip field taken with the SCUBA-2 camera mounted on the James Clerk Maxwell Telescope as part of the deep SCUBA-2 Cosmology Legacy Survey (S2CLS), achieving a central instrumental depth of σ_450 = 1.2 mJy beam^-1 and σ_850 = 0.2 mJy beam^-1. We detect 57 sources at 450 μm and 90 at 850 μm with signal-to-noise ratio >3.5 over ~70 arcmin^2. From these detections, we derive the number counts at flux densities S_450 > 4.0 mJy and S_850 > 0.9 mJy, which represent the deepest number counts at these wavelengths derived using directly extracted sources from only blank-field observations with a single-dish telescope. Our measurements smoothly connect the gap between previous shallower blank-field single-dish observations and deep interferometric ALMA results. We estimate the contribution of our SCUBA-2 detected galaxies to the cosmic infrared background (CIB), as well as the contribution of 24 μm-selected galaxies through a stacking technique, which add a total of 0.26 ± 0.03 and 0.07 ± 0.01 MJy sr^-1 at 450 and 850 μm, respectively. These surface brightnesses correspond to 60 ± 20 and 50 ± 20 per cent of the total CIB measurements, where the errors are dominated by those of the total CIB. Using the photometric redshifts of the 24 μm-selected sample and the redshift distributions of the submillimetre galaxies, we find that the redshift distribution of the recovered CIB is different at each wavelength, with a peak at z ~ 1 for 450 μm and at z ~ 2 for 850 μm, consistent with previous observations and theoretical models.

  1. Neutron multicounter detector for investigation of content and spatial distribution of fission materials in large volume samples

    International Nuclear Information System (INIS)

    Swiderska-Kowalczyk, M.; Starosta, W.; Zoltowski, T.

    1998-01-01

    The experimental device is a neutron coincidence well counter. It can be applied for passive assay of fissile materials, especially plutonium-bearing ones. It consists of a set of 3He tubes placed inside a polyethylene moderator; outputs from the tubes, first processed by preamplifier/amplifier/discriminator circuits, are then analysed using a neutron correlator connected to a PC, with the correlation techniques implemented in software. Such a neutron counter allows determination of plutonium mass (240Pu effective mass) in non-multiplying samples of fairly large volume (up to 0.14 m3). For determination of the neutron source distribution inside the sample, heuristic methods based on hierarchical cluster analysis are applied. As input parameters, the amplitudes and phases of the two-dimensional Fourier transforms of the count-profile matrices are taken, both for known point-source distributions and for the examined samples. Such matrices are collected by scanning the sample with the detection head. During the clustering process, count profiles for unknown samples are fitted into dendrograms using a 'proximity' criterion between the examined sample profile and the standard sample profiles. The distribution of neutron sources in an examined sample is then evaluated by comparison with the standard source distributions. (author)

  2. Correction for intrinsic and set dead-time losses in radioactivity counting

    International Nuclear Information System (INIS)

    Wyllie, H.A.

    1992-12-01

    Equations are derived for the determination of the intrinsic dead time of the components which precede the paralysis unit in a counting system for measuring radioactivity. The determination depends on the extension of the set dead time by the intrinsic dead time. Improved formulae are given for the dead-time correction of the count rate of a radioactive source in a single-channel system. A variable in the formulae is the intrinsic dead time which is determined concurrently with the counting of the source. The only extra equipment required in a conventional system is a scaler. 5 refs., 2 tabs., 21 figs

  3. Minimum-phase distribution of cosmic source brightness

    International Nuclear Information System (INIS)

    Gal'chenko, A.A.; Malov, I.F.; Mogil'nitskaya, L.F.; Frolov, V.A.

    1984-01-01

    Minimum-phase distributions of brightness (profiles) for the cosmic radio sources 3C 144 (wavelength lambda=21 cm), 3C 338 (lambda=3.5 m), and 3C 353 (lambda=31.3 cm and 3.5 m) are obtained. A real possibility of recovering the profile from modulus fragments of its Fourier image is demonstrated

  4. A 31 GHz Survey of Low-Frequency Selected Radio Sources

    Science.gov (United States)

    Mason, B. S.; Weintraub, L.; Sievers, J.; Bond, J. R.; Myers, S. T.; Pearson, T. J.; Readhead, A. C. S.; Shepherd, M. C.

    2009-10-01

    The 100 m Robert C. Byrd Green Bank Telescope and the 40 m Owens Valley Radio Observatory telescope have been used to conduct a 31 GHz survey of 3165 known extragalactic radio sources over 143 deg² of the sky. Target sources were selected from the NRAO VLA Sky Survey in fields observed by the Cosmic Background Imager (CBI); most are extragalactic active galactic nuclei (AGNs) with 1.4 GHz flux densities of 3-10 mJy. The resulting 31 GHz catalogs are presented in full online. Using a maximum-likelihood analysis to obtain an unbiased estimate of the distribution of the 1.4-31 GHz spectral indices of these sources, we find a mean 31-1.4 GHz flux ratio of 0.110 ± 0.003, corresponding to a spectral index of α = -0.71 ± 0.01 (S_ν ∝ ν^α); 9.0% ± 0.8% of sources have α > -0.5 and 1.2% ± 0.2% have α > 0. By combining this spectral-index distribution with 1.4 GHz source counts, we predict 31 GHz source counts down to 1 mJy of N(>S_31) = (16.7 ± 1.7) deg^-2 (S_31/1 mJy)^(-0.80 ± 0.07). We also assess the contribution of mJy-level (S_1.4GHz < 3.4 mJy) radio sources to the 31 GHz cosmic microwave background power spectrum, finding a mean power of ℓ(ℓ + 1)C_ℓ^src/(2π) = 44 ± 14 μK² and a 95% upper limit of 80 μK² at ℓ = 2500. Including an estimated contribution of 12 μK² from the population of sources responsible for the turn-up in counts below S_1.4GHz = 1 mJy, this amounts to 21% ± 7% of what is needed to explain the CBI high-ℓ excess signal, 275 ± 63 μK². These results are consistent with other measurements of the 31 GHz point-source foreground.
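
    The quoted flux ratio and spectral index are linked by α = log(S_31/S_1.4) / log(31/1.4); a two-line check using the values from the abstract:

```python
import math

def spectral_index(flux_ratio, nu1=1.4, nu2=31.0):
    """alpha for S_nu proportional to nu**alpha, from the ratio S(nu2)/S(nu1)."""
    return math.log(flux_ratio) / math.log(nu2 / nu1)

print(round(spectral_index(0.110), 2))  # -0.71, matching the abstract
```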

  5. Distributed plastic optical fibre measurement of pH using a photon counting OTDR

    International Nuclear Information System (INIS)

    Saunders, C; Scully, P J

    2005-01-01

    Distributed measurement of pH was demonstrated at a sensitised region 4 m from the distal end of a 20 m length of plastic optical fibre. The cladding was removed from the fibre over 150 mm and the bare core was exposed to an aqueous solution of methyl red at three values of pH between 2.89 and 9.70. The optical fibre was interrogated at 648 nm using a Luciol photon counting optical time domain reflectometer, which showed that attenuation at the sensing region varied as a function of pH. The attenuation ranged from 16.3 dB at pH 2.89 to 8.6 dB at pH 9.70; this range equated to -1.13 ± 0.04 dB/pH. It is thus possible to determine both the position, to ± 12 mm, and the pH, to an estimated ± 0.5 pH, at the sensing region
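
    The quoted sensitivity follows directly from the two endpoint attenuations; a quick check of the arithmetic, using the values from the abstract:

```python
ph1, a1 = 2.89, 16.3   # pH and attenuation [dB] at the acid endpoint
ph2, a2 = 9.70, 8.6    # pH and attenuation [dB] at the alkaline endpoint

slope = (a2 - a1) / (ph2 - ph1)
print(round(slope, 2))  # -1.13 dB per pH unit, as quoted
```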

  6. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate (LDPCA) codes in a DSC scheme with feed-back. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  7. Analysis of the emission characteristics of ion sources for high-quality optical coating processes

    International Nuclear Information System (INIS)

    Beermann, Nils

    2009-01-01

    The production of complex high-quality thin film systems requires a detailed understanding of all partial processes. One of the most relevant partial processes is the condensation of the coating material on the substrate surface. The optical and mechanical material properties can be adjusted by the well-defined impingement of energetic ions during deposition. Thus, in the past, a variety of different ion sources were developed. With respect to the present and future challenges in the production of precisely fabricated high-performance optical coatings, however, the ion emission of these sources has so far not been characterized sufficiently. This question is addressed within the scope of this work, which is thematically situated in the field of process development and control of ion-assisted deposition processes. In a first step, a Faraday cup measurement system was developed which allows the spatially resolved determination of the ion energy distribution as well as the ion current distribution. Subsequently, the ion emission profiles of six ion sources were determined as a function of the relevant operating parameters. A data pool for process planning and supplementary process analysis is thus made available. On the basis of the acquired results, the basic correlations between the operating parameters and the ion emission are demonstrated. The specific properties of the individual sources as well as the respective control strategies are pointed out with regard to the thin film properties and production yield. Finally, a synthesis of the results and perspectives for future activities are given. (orig.)

  8. Multiplicity counting from fission chamber signals in the current mode

    Energy Technology Data Exchange (ETDEWEB)

    Pázsit, I. [Chalmers University of Technology, Department of Physics, Division of Subatomic and Plasma Physics, SE-412 96 Göteborg (Sweden); Pál, L. [Centre for Energy Research, Hungarian Academy of Sciences, 114, POB 49, H-1525 Budapest (Hungary); Nagy, L. [Chalmers University of Technology, Department of Physics, Division of Subatomic and Plasma Physics, SE-412 96 Göteborg (Sweden); Budapest University of Technology and Economics, Institute of Nuclear Techniques, H-1111 Budapest (Hungary)

    2016-12-11

    In nuclear safeguards, estimation of sample parameters using neutron-based non-destructive assay methods is traditionally based on multiplicity counting with thermal neutron detectors in the pulse mode. These methods in general require multi-channel analysers and various dead time correction methods. This paper proposes and elaborates on an alternative method, which is based on fast neutron measurements with fission chambers in the current mode. A theory of “multiplicity counting” with fission chambers is developed by incorporating Böhnel's concept of superfission [1] into a master equation formalism, developed recently by the present authors for the statistical theory of fission chamber signals [2,3]. Explicit expressions are derived for the first three central auto- and cross moments (cumulants) of the signals of up to three detectors. These constitute the generalisation of the traditional Campbell relationships for the case when the incoming events represent a compound Poisson distribution. Because now the expressions contain the factorial moments of the compound source, they contain the same information as the singles, doubles and triples rates of traditional multiplicity counting. The results show that in addition to the detector efficiency, the detector pulse shape also enters the formulas; hence, the method requires a more involved calibration than the traditional method of multiplicity counting. However, the method has some advantages by not needing dead time corrections, as well as having a simpler and more efficient data processing procedure, in particular for cross-correlations between different detectors, than the traditional multiplicity counting methods.
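
    The classical (first two) Campbell relations that the paper generalises can be illustrated numerically: for a Poisson pulse train folded with a pulse shape f(t), the mean of the current recovers the event rate via the integral of f, and the variance recovers it via the integral of f². The rate, charge, pulse shape, and sampling below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# detector current y(t) = sum_k q * f(t - t_k), arrivals Poisson with rate s
s, q = 5e4, 1.0                  # event rate [1/s] and charge per event
dt, T, tau = 1e-6, 0.2, 5e-6     # sampling step, record length, pulse decay [s]
f = np.exp(-np.arange(0.0, 20 * tau, dt) / tau)  # assumed pulse shape

n = rng.poisson(s * dt, size=int(T / dt))        # events in each time bin
y = np.convolve(n * q, f)[: n.size]              # superposed detector current

# Campbell:  E[y] = s*q*Int(f)dt,   Var[y] = s*q^2*Int(f^2)dt
s_mean = y.mean() / (q * f.sum() * dt)
s_var = y.var() / (q**2 * (f**2).sum() * dt)
print(s_mean, s_var)  # both estimates should recover the true rate 5e4
```

    The paper's point is that for a compound Poisson source the same construction carries the factorial moments of the fission chains, so the higher cumulants play the role of doubles and triples rates.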

  9. Accuracy in activation analysis: count rate effects

    International Nuclear Information System (INIS)

    Lindstrom, R.M.; Fleming, R.F.

    1980-01-01

    The accuracy inherent in activation analysis is ultimately limited by the uncertainty of counting statistics. When careful attention is paid to detail, several workers have shown that all systematic errors can be reduced to an insignificant fraction of the total uncertainty, even when the statistical limit is well below one percent. A matter of particular importance is the reduction of errors due to high counting rate. The loss of counts due to random coincidence (pulse pileup) in the amplifier and to digitization time in the ADC may be treated as a series combination of extending and non-extending dead times, respectively. The two effects are experimentally distinct. Live timer circuits in commercial multi-channel analyzers compensate properly for ADC dead time for long-lived sources, but not for pileup. Several satisfactory solutions are available, including pileup rejection and dead time correction circuits, loss-free ADCs, and computed corrections in a calibrated system. These methods are sufficiently reliable and well understood that a decaying source can be measured routinely with acceptably small errors at a dead time as high as 20 percent

  10. ROC-king onwards: intraepithelial lymphocyte counts, distribution & role in coeliac disease mucosal interpretation.

    Science.gov (United States)

    Rostami, Kamran; Marsh, Michael N; Johnson, Matt W; Mohaghegh, Hamid; Heal, Calvin; Holmes, Geoffrey; Ensari, Arzu; Aldulaimi, David; Bancel, Brigitte; Bassotti, Gabrio; Bateman, Adrian; Becheanu, Gabriel; Bozzola, Anna; Carroccio, Antonio; Catassi, Carlo; Ciacci, Carolina; Ciobanu, Alexandra; Danciu, Mihai; Derakhshan, Mohammad H; Elli, Luca; Ferrero, Stefano; Fiorentino, Michelangelo; Fiorino, Marilena; Ganji, Azita; Ghaffarzadehgan, Kamran; Going, James J; Ishaq, Sauid; Mandolesi, Alessandra; Mathews, Sherly; Maxim, Roxana; Mulder, Chris J; Neefjes-Borst, Andra; Robert, Marie; Russo, Ilaria; Rostami-Nejad, Mohammad; Sidoni, Angelo; Sotoudeh, Masoud; Villanacci, Vincenzo; Volta, Umberto; Zali, Mohammad R; Srivastava, Amitabh

    2017-12-01

    Counting intraepithelial lymphocytes (IEL) is central to the histological diagnosis of coeliac disease (CD), but no definitive 'normal' IEL range has ever been published. In this multicentre study, receiver operating characteristic (ROC) curve analysis was used to determine the optimal cut-off between normal and CD (Marsh III lesion) duodenal mucosa, based on IEL counts on >400 mucosal biopsy specimens. The study was designed at the International Meeting on Digestive Pathology, Bucharest 2015. Investigators from 19 centres, eight countries of three continents, recruited 198 patients with Marsh III histology and 203 controls and used one agreed protocol to count IEL/100 enterocytes in well-oriented duodenal biopsies. Demographic and serological data were also collected. The mean ages of CD and control groups were 45.5 (neonate to 82) and 38.3 (2-88) years. Mean IEL count was 54±18/100 enterocytes in CD and 13±8 in normal controls (p=0.0001). ROC analysis indicated an optimal cut-off point of 25 IEL/100 enterocytes, with 99% sensitivity, 92% specificity and 99.5% area under the curve. Other cut-offs between 20 and 40 IEL were less discriminatory. Additionally, there was a sufficiently high number of biopsies to explore IEL counts across the subclassification of the Marsh III lesion. Our ROC curve analyses demonstrate that for Marsh III lesions, a cut-off of 25 IEL/100 enterocytes optimises discrimination between normal control and CD biopsies. No differences in IEL counts were found between Marsh III a, b and c lesions. There was an indication of a continuously graded dose-response by IEL to environmental (gluten) antigenic influence. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
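
    A minimal sketch of the cut-off selection step, using synthetic IEL counts drawn from normal distributions matched to the abstract's summary statistics (CD 54 ± 18, controls 13 ± 8 per 100 enterocytes, group sizes 198 and 203). Maximising Youden's J is one common criterion for an optimal ROC cut-off and is an assumption here, not necessarily the study's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

cd = rng.normal(54, 18, 198).clip(min=0)    # synthetic Marsh III cases
ctrl = rng.normal(13, 8, 203).clip(min=0)   # synthetic controls

best_j, best_cut = -1.0, None
for cut in range(5, 60):
    sens = (cd >= cut).mean()               # cases called abnormal
    spec = (ctrl < cut).mean()              # controls called normal
    j = sens + spec - 1                     # Youden's J statistic
    if j > best_j:
        best_j, best_cut = j, cut
print(best_cut, round(best_j, 2))  # cut-off lands near the paper's 25
```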

  11. Quantitative Compton suppression spectrometry at elevated counting rates

    International Nuclear Information System (INIS)

    Westphal, G.P.; Joestl, K.; Schroeder, P.; Lauster, R.; Hausch, E.

    1999-01-01

    For quantitative Compton suppression spectrometry, the decrease of coincidence efficiency with counting rate should be made negligible, to avoid a virtual increase of the relative peak areas of coincident isomeric transitions with counting rate. To that end, a separate amplifier and discriminator has been used for each of the eight segments of the active shield of a new well-type Compton suppression spectrometer, together with an optimized, minimum dead-time design of the anticoincidence logic circuitry. Chance coincidence losses in the Compton suppression spectrometer are corrected instrumentally by comparing the chance coincidence rate to the counting rate of the germanium detector in a pulse-counting Busy circuit (G.P. Westphal, J. Rad. Chem. 179 (1994) 55), which is combined with the spectrometer's LFC counting loss correction system. The normally unobservable chance coincidence rate is reconstructed from the rates of the germanium detector and the scintillation detector in an auxiliary coincidence unit, after the destruction of true coincidences by delaying one of the coincidence partners. Quantitative system response has been tested in two-source measurements with a fixed 60Co reference source at 14 kc/s and various 137Cs samples, up to aggregate counting rates of 180 kc/s for the well-type detector and more than 1400 kc/s for the BGO shield. In these measurements, the net peak areas of the 1173.3 keV line of 60Co remained constant at typical values of 37 000 with and 95 000 without Compton suppression, with maximum deviations from the average of less than 1.5%
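
    The accidental (chance) rate that the auxiliary unit reconstructs follows the standard relation R_acc ≈ 2τ·R1·R2 for independent pulse trains, which is what remains once true coincidences are destroyed by the delay. A quick simulation with illustrative rates and resolving time (not the paper's calibration values):

```python
import numpy as np

rng = np.random.default_rng(3)

R_ge, R_sh = 14e3, 180e3       # germanium and shield rates [1/s] (assumed)
tau, T = 0.5e-6, 1.0           # resolving time [s], measurement time [s]
ge = np.sort(rng.uniform(0, T, rng.poisson(R_ge * T)))
sh = np.sort(rng.uniform(0, T, rng.poisson(R_sh * T)))

# count shield pulses falling within +/-tau of each germanium pulse
lo = np.searchsorted(sh, ge - tau)
hi = np.searchsorted(sh, ge + tau)
measured = int((hi - lo).sum())

predicted = 2 * tau * R_ge * R_sh * T   # standard accidental-rate formula
print(measured, round(predicted))
```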

  12. Non-Poisson counting statistics of a hybrid G-M counter dead time model

    International Nuclear Information System (INIS)

    Lee, Sang Hoon; Jae, Moosung; Gardner, Robin P.

    2007-01-01

    The counting statistics of a G-M counter with a considerable dead time event rate deviate from Poisson statistics. Important characteristics such as observed counting rates as a function of true counting rates, variances and interval distributions were analyzed for three dead time models (non-paralyzable, paralyzable and hybrid) with the help of GMSIM, a Monte Carlo dead time effect simulator. The simulation results showed good agreement with the models in observed counting rates and variances. It was found through GMSIM simulations that the interval distribution for the hybrid model shows three distinctive regions: a complete cutoff region for the duration of the total dead time, a degraded exponential region and an enhanced exponential region. By measuring the cutoff and the duration of the degraded exponential from the pulse interval distribution, it is possible to evaluate the two dead times in the hybrid model
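
    A sketch in the spirit of GMSIM (not the simulator itself): apply the two classical dead-time rules to one Poisson pulse train and compare the observed rates with the analytic formulas m = n/(1 + nτ) for the non-paralyzable model and m = n·exp(-nτ) for the paralyzable model. The rate and dead time are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

n_true, tau, T = 2e4, 1e-4, 10.0   # true rate [1/s], dead time [s], run [s]
events = np.sort(rng.uniform(0, T, rng.poisson(n_true * T)))

# non-paralyzable: accept an event only if tau has elapsed since last ACCEPTED
accepted, last = 0, -np.inf
for t in events:
    if t - last >= tau:
        accepted, last = accepted + 1, t
m_np = accepted / T

# paralyzable: every event extends the dead period, so an event is recorded
# only if its predecessor (recorded or not) is more than tau away
gaps = np.diff(events)
m_p = (1 + (gaps >= tau).sum()) / T

print(round(m_np), round(n_true / (1 + n_true * tau)))    # analytic ~6667
print(round(m_p), round(n_true * np.exp(-n_true * tau)))  # analytic ~2707
```

    The hybrid model of the paper chains the two rules; the interval-distribution signatures it describes (hard cutoff, then a degraded exponential) could be read off the same simulated pulse train.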

  13. Neutron coincidence counting based on time interval analysis with dead time corrected one and two dimensional Rossi-alpha distributions: an application for passive neutron waste assay

    International Nuclear Information System (INIS)

    Bruggeman, M.; Baeten, P.; De Boeck, W.; Carchon, R.

    1996-03-01

    The report describes a new neutron multiplicity counting method based on Rossi-alpha distributions. The report also gives the necessary dead time correction formulas for the multiplicity counting method. The method was tested numerically using a Monte Carlo simulation of pulse trains. The use of this multiplicity method in the field of waste assay is explained: it can be used to determine the amount of fissile material in a waste drum without prior knowledge of the actual detection efficiency
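
    The Rossi-alpha idea can be sketched on a toy pulse train: uncorrelated pulses produce a flat floor in the distribution of pulse-pair time differences, while correlated pairs add an exponential excess at short times. All rates and the decay constant below are illustrative assumptions, not the report's detector model.

```python
import numpy as np

rng = np.random.default_rng(5)

alpha, T = 2e3, 50.0                                  # decay [1/s], run [s]
bg = rng.uniform(0, T, rng.poisson(500 * T))          # accidental pulses
parents = rng.uniform(0, T, rng.poisson(200 * T))     # source events
pairs = parents + rng.exponential(1 / alpha, parents.size)  # correlated partners
train = np.sort(np.concatenate([bg, parents, pairs]))

# time differences from each pulse to its successors inside a 5 ms window
w, bins = 5e-3, np.linspace(0.0, 5e-3, 51)
diffs = []
for i, t in enumerate(train):
    j = i + 1
    while j < train.size and train[j] - t < w:
        diffs.append(train[j] - t)
        j += 1
hist, _ = np.histogram(diffs, bins)
print(hist[0], hist[-1])  # exponential excess at short times vs. flat floor
```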

  14. Distributed Remote Vector Gaussian Source Coding with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider a distributed remote source coding problem, where a sequence of observations of source vectors is available at the encoder. The problem is to specify the optimal rate for encoding the observations subject to a covariance matrix distortion constraint and in the presence...

  15. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Science.gov (United States)

    2010-01-01

    16 CFR, Commercial Practices, Federal Hazardous Substances Act regulations, Requirements for Bicycles, Pt. 1512, Table 4. Table 4 to Part 1512—Relative Energy Distribution of Sources:

    Wavelength (nanometers)   Relative energy
    380                       9.79
    390                       12.09
    400                       14.71
    410                       17.68
    420                       21...

  16. Radioactive source calibration technique for the CMS hadron calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Hazen, E.; Lawlor, C.; Rohlf, J.W. E-mail: rohlf@bu.edu; Wu, S.X.; Baumbaugh, A.; Elias, J.E.; Freeman, J.; Green, D.; Lazic, D.; Los, S.; Ronzhin, A.; Sergueev, S.; Shaw, T.; Vidal, R.; Whitmore, J.; Zimmerman, T.; Adams, M.; Burchesky, K.; Qian, W.; Baden, A.; Bard, R.; Breden, H.; Grassi, T.; Skuja, A.; Fisher, W.; Mans, J.; Tully, C.; Barnes, V.; Laasanen, A.; Barbaro, P. de; Budd, H

    2003-10-01

    Relative calibration of the scintillator tiles used in the hadronic calorimeter for the Compact Muon Solenoid detector at the CERN Large Hadron Collider is established and maintained using a radioactive source technique. A movable source can be positioned remotely to illuminate each scintillator tile individually, and the resulting photo-detector current is measured to provide the relative calibration. The unique measurement technique described here makes use of the normal high-speed data acquisition system required for signal digitization at the 40 MHz collider frequency. The data paths for collider measurements and source measurements are then identical, and systematic uncertainties associated with having different signal paths are avoided. In this high-speed mode, the source signal is observed as a Poisson photo-electron distribution with a mean that is smaller than the width of the electronics noise (pedestal) distribution. We report demonstration of the technique using prototype electronics for the complete readout chain and show the typical response observed with a 144 channel test beam system. The electronics noise has a root-mean-square of 1.6 least counts, and a 1 mCi source produces a shift of the mean value of 0.1 least counts. Because of the speed of the data acquisition system, this shift can be measured to a statistical precision better than a fraction of a percent on a millisecond time scale. The result is reproducible to better than 2% over a time scale of 1 month.
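
    The measurement principle, resolving a mean shift (0.1 counts) much smaller than the pedestal width (1.6 counts RMS) by averaging many fast samples, can be sketched numerically. The noise RMS and shift follow the abstract; the sample count is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

n = 400_000                                # e.g. 10 ms of 40 MHz samples
pedestal = rng.normal(0.0, 1.6, n)         # electronics noise [ADC counts]
with_src = rng.normal(0.1, 1.6, n)         # source shifts the mean by 0.1

shift = with_src.mean() - pedestal.mean()  # pedestal-subtracted source signal
stderr = 1.6 * np.sqrt(2.0 / n)            # expected statistical error
print(round(shift, 3), round(stderr, 4))   # the 0.1-count shift is resolved
```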

  17. Flow Cytometry Total Cell Counts: A Field Study Assessing Microbiological Water Quality and Growth in Unchlorinated Drinking Water Distribution Systems

    Science.gov (United States)

    Liu, G.; Van der Mark, E. J.; Verberk, J. Q. J. C.; Van Dijk, J. C.

    2013-01-01

    The objective of this study was to evaluate the application of flow cytometry total cell counts (TCCs) as a parameter to assess microbial growth in drinking water distribution systems and to determine the relationships between different parameters describing the biostability of treated water. A one-year sampling program was carried out in two distribution systems in The Netherlands. Results demonstrated that, in both systems, the biomass differences measured by ATP were not significant. TCC differences were also not significant in treatment plant 1, but decreased slightly in treatment plant 2. TCC values were found to be higher at temperatures above 15°C than at temperatures below 15°C. The correlation study of parameters describing biostability found no relationship among TCC, heterotrophic plate counts, and Aeromonas. Also no relationship was found between TCC and ATP. Some correlation was found between the subgroup of high nucleic acid content bacteria and ATP (R² = 0.63). Overall, the results demonstrated that TCC is a valuable parameter to assess the drinking water biological quality and regrowth; it can directly and sensitively quantify biomass, detect small changes, and can be used to determine the subgroup of active HNA bacteria that are related to ATP. PMID:23819117

  18. Flow Cytometry Total Cell Counts: A Field Study Assessing Microbiological Water Quality and Growth in Unchlorinated Drinking Water Distribution Systems

    Directory of Open Access Journals (Sweden)

    G. Liu

    2013-01-01

    The objective of this study was to evaluate the application of flow cytometry total cell counts (TCCs) as a parameter to assess microbial growth in drinking water distribution systems and to determine the relationships between different parameters describing the biostability of treated water. A one-year sampling program was carried out in two distribution systems in The Netherlands. Results demonstrated that, in both systems, the biomass differences measured by ATP were not significant. TCC differences were also not significant in treatment plant 1, but decreased slightly in treatment plant 2. TCC values were found to be higher at temperatures above 15°C than at temperatures below 15°C. The correlation study of parameters describing biostability found no relationship among TCC, heterotrophic plate counts, and Aeromonas. Also no relationship was found between TCC and ATP. Some correlation was found between the subgroup of high nucleic acid content bacteria and ATP (R² = 0.63). Overall, the results demonstrated that TCC is a valuable parameter to assess the drinking water biological quality and regrowth; it can directly and sensitively quantify biomass, detect small changes, and can be used to determine the subgroup of active HNA bacteria that are related to ATP.

  19. Estimation of subcriticality by neutron source multiplication method

    International Nuclear Information System (INIS)

    Sakurai, Kiyoshi; Suzaki, Takenori; Arakawa, Takuya; Naito, Yoshitaka

    1995-03-01

    Subcritical cores were constructed in a core tank of the TCA by arraying 2.6% enriched UO2 fuel rods into n×n square lattices of 1.956 cm pitch. Vertical distributions of the neutron count rates for the fifteen subcritical cores (n=17, 16, 14, 11, 8) with different water levels were measured at 5 cm intervals with 235U micro-fission counters at in-core and out-of-core positions, with a 252Cf neutron source placed near the core center. The continuous energy Monte Carlo code MCNP-4A was used for the calculation of neutron multiplication factors and neutron count rates. In this study, the important conclusions are as follows: (1) Differences between the neutron multiplication factors resulting from the exponential experiment and from MCNP-4A are below 1% in most cases. (2) Standard deviations of neutron count rates calculated with MCNP-4A with 500000 histories are 5-8%. The calculated neutron count rates are consistent with the measured ones. (author)

  20. Theoretical assessment of whole body counting performances using numerical phantoms of different gender and sizes.

    Science.gov (United States)

    Marzocchi, O; Breustedt, B; Mostacci, D; Zankl, M; Urban, M

    2011-03-01

    A goal of whole body counting (WBC) is the estimation of the total body burden of radionuclides disregarding the actual position within the body. To achieve the goal, the detectors need to be placed in regions where the photon flux is as independent as possible from the distribution of the source. At the same time, the detectors need high photon fluxes in order to achieve better efficiency and lower minimum detectable activities. This work presents a method able to define the layout of new WBC systems and to study the behaviour of existing ones using both detection efficiency and its dependence on the position of the source within the body of computational phantoms.

  1. Theoretical assessment of whole body counting performances using numerical phantoms of different gender and sizes

    International Nuclear Information System (INIS)

    Marzocchi, O.; Breustedt, B.; Mostacci, D.; Zankl, M.; Urban, M.

    2011-01-01

    A goal of whole body counting (WBC) is the estimation of the total body burden of radionuclides disregarding the actual position within the body. To achieve the goal, the detectors need to be placed in regions where the photon flux is as independent as possible from the distribution of the source. At the same time, the detectors need high photon fluxes in order to achieve better efficiency and lower minimum detectable activities. This work presents a method able to define the layout of new WBC systems and to study the behaviour of existing ones using both detection efficiency and its dependence on the position of the source within the body of computational phantoms. (authors)

  2. Continuous-variable quantum key distribution with Gaussian source noise

    International Nuclear Information System (INIS)

    Shen Yujie; Peng Xiang; Yang Jian; Guo Hong

    2011-01-01

    Source noise affects the security of continuous-variable quantum key distribution (CV QKD) and is difficult to analyze. We propose a model to characterize Gaussian source noise through introducing a neutral party (Fred) who induces the noise with a general unitary transformation. Without knowing Fred's exact state, we derive the security bounds for both reverse and direct reconciliations and show that the bound for reverse reconciliation is tight.

  3. Determining the temperature and density distribution from a Z-pinch radiation source

    International Nuclear Information System (INIS)

    Matuska, W.; Lee, H.

    1997-01-01

    High temperature radiation sources exceeding one hundred eV can be produced via z-pinches using currently available pulsed power. The usual approach to compare the z-pinch simulation and experimental data is to convert the radiation output at the source, whose temperature and density distributions are computed from the 2-D MHD code, into simulated data such as a spectrometer reading. This conversion process involves a radiation transfer calculation through the axially symmetric source, assuming local thermodynamic equilibrium (LTE), and folding the radiation that reaches the detector with the frequency-dependent response function. In this paper the authors propose a different approach by which they can determine the temperature and density distributions of the radiation source directly from the spatially resolved spectral data. This unfolding process is reliable and unambiguous for the ideal case where LTE holds and the source is axially symmetric. In reality, imperfect LTE and axial symmetry will introduce inaccuracies into the unfolded distributions. The authors use a parameter optimization routine to find the temperature and density distributions that best fit the data. They know from their past experience that the radiation source resulting from the implosion of a thin foil does not exhibit good axial symmetry. However, recent experiments carried out at Sandia National Laboratory using multiple wire arrays were very promising to achieve reasonably good symmetry. For these experiments the method will provide a valuable diagnostic tool

  4. Simple bounds for counting processes with monotone rate of occurrence of failures

    International Nuclear Information System (INIS)

    Kaminskiy, Mark P.

    2007-01-01

    The article discusses some aspects of analogy between certain classes of distributions used as models for time to failure of nonrepairable objects, and the counting processes used as models for failure process for repairable objects. The notion of quantiles for the counting processes with strictly increasing cumulative intensity function is introduced. The classes of counting processes with increasing (decreasing) rate of occurrence of failures are considered. For these classes, the useful nonparametric bounds for cumulative intensity function based on one known quantile are obtained. These bounds, which can be used for repairable objects, are similar to the bounds introduced by Barlow and Marshall [Barlow, R. Marshall, A. Bounds for distributions with monotone hazard rate, I and II. Ann Math Stat 1964; 35: 1234-74] for IFRA (DFRA) time to failure distributions applicable to nonrepairable objects

  5. Pulse-duration discrimination for increasing counting characteristic plateau and for improving counting rate stability of a scintillation counter

    International Nuclear Information System (INIS)

    Kuz'min, M.G.

    1977-01-01

    For greater stability of scintillation counter operation, the possibility of increasing the plateau and reducing its slope is discussed. A circuit is presented for discriminating the signal pulses from the input pulses of a photomultiplier. The counting characteristics have been measured with the scintillation detectors irradiated by different gamma sources (60Co, 137Cs, 241Am) and without a source, with the scintillation detector shielded by a tungsten cylinder with a wall thickness of 23 mm. The comparison has revealed that discrimination in duration increases the plateau and reduces its slope. From a comparison of the noise characteristics, a relationship is found between the number of noise pulses and the gamma radiation energy. For better stability of the counting rate it is suggested to introduce into the scintillation counter a circuit for duration discrimination of the output pulses of the photomultiplier

  6. Radial dose distribution of 192Ir and 137Cs seed sources

    International Nuclear Information System (INIS)

    Thomason, C.; Higgins, P.

    1989-01-01

    The radial dose distributions in water around 192Ir seed sources with both platinum and stainless steel encapsulation have been measured using LiF thermoluminescent dosimeters (TLD) at distances of 1 to 12 cm along the perpendicular bisector of the source to determine the effect of source encapsulation. Similar measurements have also been made around a 137Cs seed source of comparable dimensions. The data were fit to a third-order polynomial to obtain an empirical equation for the radial dose factor, which can then be used in dosimetry. The coefficients of this equation for each of the three sources are given. The radial dose factors of the stainless steel encapsulated 192Ir and of the platinum encapsulated 192Ir agree to within 2%. The radial dose distributions measured here for 192Ir with either type of encapsulation and for 137Cs are indistinguishable from those of other authors when the uncertainties involved are considered. For clinical dosimetry based on isotropic point or line source models, any of these equations may be used without significantly affecting accuracy.
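    The third-order polynomial fit described in this record can be sketched as follows. The distances and dose-factor values below are synthetic placeholders, not the paper's measured TLD data or its published coefficients.

```python
import numpy as np

# Hypothetical radial dose factor "measurements" at 1-12 cm; illustrative
# values only (the paper's data and coefficients are not reproduced here).
r = np.arange(1.0, 13.0)                               # distance in cm
g = 1.0 - 0.005 * r - 0.002 * r**2 + 0.00005 * r**3    # synthetic readings

# Third-order polynomial fit, as in the abstract, giving an empirical
# equation g(r) = a0 + a1*r + a2*r^2 + a3*r^3 for use in dosimetry.
coeffs = np.polyfit(r, g, deg=3)        # returns [a3, a2, a1, a0]
g_fit = np.polyval(coeffs, r)

max_rel_err = np.max(np.abs(g_fit - g) / g)
print(f"max relative fit error: {max_rel_err:.2e}")
```

    In practice the fit residuals indicate whether a cubic is an adequate empirical form over the measured distance range.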

  7. Evaluation of the charge-sharing effects on spot intensity in XRD setup using photon-counting pixel detectors

    International Nuclear Information System (INIS)

    Nilsson, H.-E.; Mattsson, C.G.; Norlin, B.; Froejdh, C.; Bethke, K.; Vries, R. de

    2006-01-01

    In this study, we examine how charge loss due to charge sharing in photon-counting pixel detectors affects the recording of spot intensity in an X-ray diffraction (XRD) setup. In the photon-counting configuration, the charge from photons absorbed at the border between pixels is shared between two pixels. If the threshold is high enough, these photons will not be counted; if it is low enough, they will be counted twice. In an XRD setup, the intensity and position of various spots must be recorded, so the intensity measure is affected by the threshold setting. We used a system-level Monte Carlo simulator to evaluate the variations in the intensity signals for different threshold settings and spot sizes. The simulated setup included an 8 keV monochromatic source (providing a Gaussian-shaped spot) and the MEDIPIX2 photon-counting pixel detector (55 μm x 55 μm pixel size with 300 μm silicon) at various detector biases. Our study shows that the charge-sharing distortion can be compensated by numerical post-processing and that high resolution in both charge distribution and position can be achieved.
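    The threshold dependence described here can be illustrated with a toy one-dimensional Monte Carlo. The pixel pitch matches MEDIPIX2, but the linear charge-sharing model and the cloud width are simplifying assumptions of this sketch, not the system-level simulator used in the study.

```python
import random

random.seed(1)

PITCH = 55.0      # pixel pitch in micrometres (MEDIPIX2-like geometry)
CLOUD = 10.0      # assumed charge-cloud width; illustrative value
N = 100_000       # simulated photon hits, uniform across one pixel

def counts_at_threshold(thr):
    """Registered counts for N photons at a threshold expressed as a
    fraction of the full photon charge."""
    counts = 0
    for _ in range(N):
        x = random.uniform(0.0, PITCH)
        d = min(x, PITCH - x)            # distance to the nearest border
        if d >= CLOUD / 2.0:
            shared = 0.0                 # charge cloud fully inside the pixel
        else:
            shared = 0.5 - d / CLOUD     # linear sharing model
        # this pixel sees (1 - shared) of the charge, the neighbour `shared`
        counts += (1.0 - shared > thr) + (shared > thr)
    return counts

low, mid, high = (counts_at_threshold(t) for t in (0.2, 0.5, 0.8))
print(low, mid, high)  # low threshold double-counts, high threshold loses hits
```

    A threshold near half the photon charge recovers one count per photon, which is the behaviour the abstract's compensation scheme exploits.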

  8. Application of Voxel Phantoms to Study the Influence of Heterogeneous Distribution of Actinides in Lungs on In Vivo Counting Calibration Factors Using Animal Experimentations

    Energy Technology Data Exchange (ETDEWEB)

    Lamart, S.; Pierrat, N.; De Carlan, L.; Franck, D. [IRSN/DRPH/SDI/LEDI, BP 17, F-92 262 Fontenay-aux-Roses (France); Dudoignon, N. [IRSN/DRPH/SRBE/LRPAT, BP 17, F-92 262 Fontenay-aux-Roses (France); Rateau, S.; Van der Meeren, A.; Rouit, E. [CEA/DSV/DRR/SRCA/LRT BP no 12, F-91680 Bruyeres-le-Chatel (France); Bottlaender, M. [CEA/SHFJ, 4, place du General Leclerc F-91400 Orsay (France)

    2006-07-01

    Calibration of lung counting systems dedicated to assessing the retention of actinides in the lungs remains critical because of large uncertainties in the calibration factors. Among these, the detector positioning, the assessment of chest wall thickness and composition (muscle/fat), and the distribution of the contamination are the main parameters influencing the detector response. To reduce these uncertainties, a numerical approach was developed based on voxel phantoms (numerical phantoms built from tomographic images, CT or MRI) associated with a Monte Carlo code (M.C.N.P.). It led to a dedicated tool, called O.E.D.I.P.E., that makes it easy to handle realistic voxel phantoms for the simulation of in vivo measurement (or dose calculation, an application not presented in this paper). The goal of this paper is to present our study of the influence of the lung distribution on calibration factors using both animal experiments and our numerical method. Indeed, the physical anthropomorphic phantoms used for calibration always assume a uniform distribution of the source in the lungs, which does not hold under many contamination conditions. The purpose of the study is to compare the response of the measurement detectors for a realistic distribution of actinide particles in the lungs, obtained from animal experiments, with the homogeneous distribution considered as the reference. This comparison was performed using O.E.D.I.P.E., which can simulate almost any source distribution. A non-human primate was contaminated heterogeneously by intra-tracheal administration of actinide oxide. After euthanasia, gamma spectrometry measurements were performed on the pulmonary lobes to obtain the distribution of the contamination in the lungs. This realistic distribution was used to simulate a heterogeneous contamination in the numerical phantom of the non-human primate, which was compared with a simulation of a homogeneous contamination presenting the

  9. Application of Voxel Phantoms to Study the Influence of Heterogeneous Distribution of Actinides in Lungs on In Vivo Counting Calibration Factors Using Animal Experimentations

    International Nuclear Information System (INIS)

    Lamart, S.; Pierrat, N.; De Carlan, L.; Franck, D.; Dudoignon, N.; Rateau, S.; Van der Meeren, A.; Rouit, E.; Bottlaender, M.

    2006-01-01

    Calibration of lung counting systems dedicated to assessing the retention of actinides in the lungs remains critical because of large uncertainties in the calibration factors. Among these, the detector positioning, the assessment of chest wall thickness and composition (muscle/fat), and the distribution of the contamination are the main parameters influencing the detector response. To reduce these uncertainties, a numerical approach was developed based on voxel phantoms (numerical phantoms built from tomographic images, CT or MRI) associated with a Monte Carlo code (M.C.N.P.). It led to a dedicated tool, called O.E.D.I.P.E., that makes it easy to handle realistic voxel phantoms for the simulation of in vivo measurement (or dose calculation, an application not presented in this paper). The goal of this paper is to present our study of the influence of the lung distribution on calibration factors using both animal experiments and our numerical method. Indeed, the physical anthropomorphic phantoms used for calibration always assume a uniform distribution of the source in the lungs, which does not hold under many contamination conditions. The purpose of the study is to compare the response of the measurement detectors for a realistic distribution of actinide particles in the lungs, obtained from animal experiments, with the homogeneous distribution considered as the reference. This comparison was performed using O.E.D.I.P.E., which can simulate almost any source distribution. A non-human primate was contaminated heterogeneously by intra-tracheal administration of actinide oxide. After euthanasia, gamma spectrometry measurements were performed on the pulmonary lobes to obtain the distribution of the contamination in the lungs. This realistic distribution was used to simulate a heterogeneous contamination in the numerical phantom of the non-human primate, which was compared with a simulation of a homogeneous contamination presenting the

  10. Source Coding for Wireless Distributed Microphones in Reverberant Environments

    DEFF Research Database (Denmark)

    Zahedi, Adel

    2016-01-01

    Modern multimedia systems are more and more shifting toward distributed and networked structures. This includes audio systems, where networks of wireless distributed microphones are replacing traditional microphone arrays. This allows for flexibility of placement and high spatial diversity. However, it comes at the price of several challenges, including the limited power and bandwidth resources for wireless transmission of audio recordings. In such a setup, we study the problem of source coding for the compression of the audio recordings before transmission, in order to reduce the power consumption and/or transmission bandwidth by reducing the transmission rates. Source coding for wireless microphones in reverberant environments has several special characteristics which make it more challenging than regular audio coding. The signals which are acquired by the microphones ...

  11. Coincidence and noncoincidence counting (81Rb and 43K): a comparative study

    International Nuclear Information System (INIS)

    Ikeda, S.; Duken, H.; Tillmanns, H.; Bing, R.J.

    1975-01-01

    The accuracy of imaging and the resolution obtained with 81Rb and 43K using coincidence and noncoincidence counting were compared. Phantoms and isolated infarcted dog hearts were used. The results clearly show the superiority of coincidence counting, with a resolution of 0.5 cm. Noncoincidence counting failed to reveal even sizable defects in the radioactive source. (U.S.)

  12. Analysis of neutron propagation from the skyshine port of a fusion neutron source facility

    Energy Technology Data Exchange (ETDEWEB)

    Wakisaka, M. [Hokkaido University, Kita-8, Nishi-5, Kita-ku, Sapporo 080-8628 (Japan); Kaneko, J. [Hokkaido University, Kita-8, Nishi-5, Kita-ku, Sapporo 080-8628 (Japan)]. E-mail: kin@qe.eng.hokudai.ac.jp; Fujita, F. [Hokkaido University, Kita-8, Nishi-5, Kita-ku, Sapporo 080-8628 (Japan); Ochiai, K. [Japan Atomic Energy Institute, Tokai-mura, Ibaraki-ken 319-1195 (Japan); Nishitani, T. [Japan Atomic Energy Institute, Tokai-mura, Ibaraki-ken 319-1195 (Japan); Yoshida, S. [Tokai University, 1117 Kitakaname, Hirastuka, Kanagawa-ken 259-1292 (Japan); Sawamura, T. [Hokkaido University, Kita-8, Nishi-5, Kita-ku, Sapporo 080-8628 (Japan)

    2005-12-01

    The process of neutrons leaking from a 14 MeV neutron source facility was analyzed by calculations and experiments. The experiments were performed at the Fusion Neutron Source (FNS) facility of the Japan Atomic Energy Institute, Tokai-mura, Japan, which has a port on the roof for skyshine experiments; a 3He counter surrounded with polyethylene moderators of different thicknesses was used to estimate the energy spectra and dose distributions. The 3He counter with a 3-cm-thick moderator was also used for dose measurements, and the doses evaluated from the counter counts and the calculated count-to-dose conversion factor agreed with the calculations to within ~30%. The dose distribution was found to fit a simple analytical expression, D(r) = Q_D exp(-r/λ_D)/r, and the parameters Q_D and λ_D are discussed.
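    Reading the abstract's fitted form as D(r) = Q_D·exp(-r/λ_D)/r (a standard skyshine parameterization), the two parameters can be recovered with a log-linear least-squares fit, since ln(D·r) = ln Q_D - r/λ_D. All numbers below are illustrative; the paper's Q_D and λ_D are not reproduced here.

```python
import math
import random

random.seed(0)

# Illustrative "true" parameters (arbitrary units, metres).
Q_TRUE, LAM_TRUE = 5.0e3, 120.0

# Synthetic "measured" doses following D(r) = Q * exp(-r/lambda) / r
# with a few percent of multiplicative noise.
rs = [50.0 + 25.0 * i for i in range(12)]
ds = [Q_TRUE * math.exp(-r / LAM_TRUE) / r * (1.0 + random.gauss(0.0, 0.02))
      for r in rs]

# Linearize: ln(D * r) = ln Q - r / lambda, then ordinary least squares.
xs = rs
ys = [math.log(d * r) for r, d in zip(rs, ds)]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
q_fit = math.exp(ybar - slope * xbar)
lam_fit = -1.0 / slope

print(f"Q = {q_fit:.3g}, lambda = {lam_fit:.4g}")
```

    The same linearization gives quick starting values even when a full nonlinear fit is used afterwards.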

  13. Pixel-Cluster Counting Luminosity Measurement in ATLAS

    CERN Document Server

    McCormack, William Patrick; The ATLAS collaboration

    2016-01-01

    A precision measurement of the delivered luminosity is a key component of the ATLAS physics program at the Large Hadron Collider (LHC). A fundamental ingredient of the strategy to control the systematic uncertainties affecting the absolute luminosity has been to compare the measurements of several luminometers, most of which use more than one counting technique. The level of consistency across the various methods provides valuable cross-checks as well as an estimate of the detector-related systematic uncertainties. This poster describes the development of a luminosity algorithm based on pixel-cluster counting in the recently installed ATLAS inner b-layer (IBL), using data recorded during the 2015 pp run at the LHC. The noise and background contamination of the luminosity-associated cluster count is minimized by a multi-component fit to the measured cluster-size distribution in the forward pixel modules of the IBL. The linearity, long-term stability and statistical precision of the cluster-counting method are ...

  14. Pixel-Cluster Counting Luminosity Measurement In ATLAS

    CERN Document Server

    AUTHOR|(SzGeCERN)782710; The ATLAS collaboration

    2017-01-01

    A precision measurement of the delivered luminosity is a key component of the ATLAS physics program at the Large Hadron Collider (LHC). A fundamental ingredient of the strategy to control the systematic uncertainties affecting the absolute luminosity has been to compare the measurements of several luminometers, most of which use more than one counting technique. The level of consistency across the various methods provides valuable cross-checks as well as an estimate of the detector-related systematic uncertainties. This poster describes the development of a luminosity algorithm based on pixel-cluster counting in the recently installed ATLAS inner b-layer (IBL), using data recorded during the 2015 pp run at the LHC. The noise and background contamination of the luminosity-associated cluster count is minimized by a multi-component fit to the measured cluster-size distribution in the forward pixel modules of the IBL. The linearity, long-term stability and statistical precision of the cluster-counting method are ...

  15. Optimal Matched Filter in the Low-number Count Poisson Noise Regime and Implications for X-Ray Source Detection

    Science.gov (United States)

    Ofek, Eran O.; Zackay, Barak

    2018-04-01

    Detection of templates (e.g., sources) embedded in low-number count Poisson noise is a common problem in astrophysics. Examples include source detection in X-ray images, γ-rays, UV, neutrinos, and search for clusters of galaxies and stellar streams. However, the solutions in the X-ray-related literature are sub-optimal in some cases by considerable factors. Using the lemma of Neyman–Pearson, we derive the optimal statistics for template detection in the presence of Poisson noise. We demonstrate that, for known template shape (e.g., point sources), this method provides higher completeness, for a fixed false-alarm probability value, compared with filtering the image with the point-spread function (PSF). In turn, we find that filtering by the PSF is better than filtering the image using the Mexican-hat wavelet (used by wavdetect). For some background levels, our method improves the sensitivity of source detection by more than a factor of two over the popular Mexican-hat wavelet filtering. This filtering technique can also be used for fast PSF photometry and flare detection; it is efficient and straightforward to implement. We provide an implementation in MATLAB. The development of a complete code that works on real data, including the complexities of background subtraction and PSF variations, is deferred for future publication.
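    For Poisson data with a known template s over a flat background b, the Neyman–Pearson log-likelihood ratio reduces, up to count-independent terms, to correlating the counts with ln(1 + s/b). The following is a minimal 1D sketch of that statistic under assumed toy parameters (Gaussian PSF, flat background); it is not the paper's full implementation, which also handles background subtraction and PSF variations.

```python
import math
import random

random.seed(7)

B = 0.5                                   # background counts per pixel
PSF = [math.exp(-0.5 * (k / 1.5) ** 2) for k in range(-4, 5)]
FLUX = 20.0                               # total source counts (toy value)

npix = 200
src_pos = 100
lam = [B] * npix
for k, p in enumerate(PSF):
    lam[src_pos - 4 + k] += FLUX * p / sum(PSF)

def poisson(mu):
    """Knuth's Poisson sampler; adequate for the small means used here."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

counts = [poisson(m) for m in lam]

# Up to terms independent of the counts, the log-likelihood ratio at trial
# position x0 is sum_i n_i * ln(1 + F*p_i / B); detection = arg max over x0.
kernel = [math.log(1.0 + FLUX * p / sum(PSF) / B) for p in PSF]
scores = []
for x0 in range(4, npix - 4):
    scores.append(sum(counts[x0 - 4 + k] * kernel[k] for k in range(9)))
best = 4 + scores.index(max(scores))
print("detected at pixel", best)
```

    Note that in the low-background limit the kernel ln(1 + s/b) differs strongly from the PSF itself, which is why plain PSF filtering is sub-optimal there.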

  16. Clinical significance of determination of changes of serum TNF-α levels, peripheral B lymphocyte count and T lymphocyte subsets distribution pattern in patients with pregnancy induced hypertension syndrome

    International Nuclear Information System (INIS)

    Zhao Wenjuan

    2006-01-01

    Objective: To explore the changes of serum TNF-α levels, peripheral B cell count and T subset distribution pattern in patients with pregnancy-induced hypertension syndrome. Methods: Serum TNF-α levels (with RIA) and peripheral B cell counts as well as T subsets (with monoclonal antibody technique) were examined in 34 patients with pregnancy-induced hypertension syndrome and 35 controls. Results: The serum TNF-α levels and B lymphocyte counts were significantly higher than those in controls, while the CD3, CD4 counts and CD4/CD8 ratio were significantly lower than those in controls (P<0.01). Conclusion: Pregnancy-induced hypertension syndrome is a kind of autoimmune disease with abnormal immunoregulation. (authors)

  17. Automatic, time-interval traffic counts for recreation area management planning

    Science.gov (United States)

    D. L. Erickson; C. J. Liu; H. K. Cordell

    1980-01-01

    Automatic, time-interval recorders were used to count directional vehicular traffic on a multiple entry/exit road network in the Red River Gorge Geological Area, Daniel Boone National Forest. Hourly counts of entering and exiting traffic differed according to recorder location, but an aggregated distribution showed a delayed peak in exiting traffic thought to be...

  18. Simulation on Poisson and negative binomial models of count road accident modeling

    Science.gov (United States)

    Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.

    2016-11-01

    Accident count data have often been shown to exhibit overdispersion. On the other hand, the data might contain excess zero counts. A simulation study was conducted to create scenarios in which an accident happens at a T-junction, with the assumption that the dependent variable of the generated data follows a certain distribution, namely the Poisson or negative binomial distribution, with sample sizes from n=30 to n=500. The study objective was accomplished by fitting Poisson regression, negative binomial regression and hurdle negative binomial models to the simulated data. The fitted models were compared, and the simulation results show that, for each sample size, not all models fit the data well even though the data were generated from their own distribution, especially when the sample size is larger. Furthermore, larger sample sizes produce more zero accident counts in the dataset.
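    The overdispersion contrast between the two generating distributions can be sketched directly: a negative binomial, drawn as a Poisson mixture with a Gamma-distributed mean, has variance exceeding its mean, while a Poisson count has a dispersion index near 1. The parameters below are illustrative, not those of the paper's scenarios.

```python
import math
import random
import statistics

random.seed(42)

def poisson(mu):
    """Knuth's Poisson sampler (adequate for small means)."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def neg_binomial(r, p):
    """NB as a Gamma-Poisson mixture: mean = r*(1-p)/p,
    variance = mean/p > mean (overdispersed)."""
    lam = random.gammavariate(r, (1.0 - p) / p)
    return poisson(lam)

n = 500                                           # largest sample size used
pois = [poisson(2.0) for _ in range(n)]
nb = [neg_binomial(2.0, 0.5) for _ in range(n)]   # mean 2, variance 4

def dispersion(xs):
    """Dispersion index: sample variance over sample mean."""
    return statistics.variance(xs) / statistics.mean(xs)

print("Poisson dispersion:", round(dispersion(pois), 2))  # near 1
print("NB dispersion:", round(dispersion(nb), 2))         # near 2
```

    A dispersion index well above 1 in real accident data is the usual motivation for preferring negative binomial or hurdle models over plain Poisson regression.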

  19. Undercooling, nodule count and carbides in thin walled ductile cast iron

    DEFF Research Database (Denmark)

    Pedersen, Karl Martin; Tiedje, Niels Skat

    2008-01-01

    Ductile cast iron has been cast in plate thicknesses between 2 and 8 mm. The temperature was measured during solidification, and the graphite nodule count and size distribution, together with the type and amount of carbides, were analysed afterwards. Low nodule count gives higher...

  20. Detection of anomalies in radio tomography of asteroids: Source count and forward errors

    Science.gov (United States)

    Pursiainen, S.; Kaasalainen, M.

    2014-09-01

    The purpose of this study was to advance numerical methods for radio tomography, in which an asteroid's internal electric permittivity distribution is to be recovered from radio-frequency data gathered by an orbiter. The focus was on signal generation via multiple sources (transponders), one potential, or even essential, scenario for implementation in a challenging in situ measurement environment and within tight payload limits. As a novel feature, the effects of forward errors, including noise and a priori uncertainty of the forward (data) simulation, were examined through a combination of the iterative alternating sequential (IAS) inverse algorithm and finite-difference time-domain (FDTD) simulation of time-evolution data. Single and multiple source scenarios were compared in two-dimensional localization of permittivity anomalies. Three different anomaly strengths and four levels of total noise were tested. The results suggest, among other things, that multiple sources can be necessary to obtain appropriate results, for example, to distinguish three separate anomalies with permittivity less than or equal to half of the background value, which is relevant to the recovery of internal cavities.

  1. Mathematical efficiency calibration with uncertain source geometries using smart optimization

    International Nuclear Information System (INIS)

    Menaa, N.; Bosko, A.; Bronson, F.; Venkataraman, R.; Russ, W. R.; Mueller, W.; Nizhnik, V.; Mirolo, L.

    2011-01-01

    The In Situ Object Counting Software (ISOCS), a mathematical method developed by CANBERRA, is a well-established technique for computing High Purity Germanium (HPGe) detector efficiencies for a wide variety of source shapes and sizes. In the ISOCS method, the user needs to input geometry-related parameters such as the source dimensions, matrix composition and density, along with the source-to-detector distance. In many applications, the source dimensions, matrix material and density may not be well known. Under such circumstances, the efficiencies may not be very accurate, since the modeled source geometry may not be representative of the measured geometry. CANBERRA developed an efficiency optimization software known as 'Advanced ISOCS' that varies the poorly known parameters within user-specified intervals and determines the optimal efficiency shape and magnitude based on available benchmarks in the measured spectra. The benchmarks can be results from isotopic codes such as MGAU, MGA, IGA, or FRAM, activities from multi-line nuclides, and multiple counts of the same item taken in different geometries (from the side, bottom, top, etc.). The efficiency optimization is carried out using either a random search based on standard probability distributions, or numerical techniques that carry out a more directed (referred to as 'smart' in this paper) search. Measurements were carried out using representative source geometries and radionuclide distributions. The radionuclide activities were determined using the optimum efficiency and compared against the true activities. The 'Advanced ISOCS' method has many applications, among which are safeguards, decommissioning and decontamination, non-destructive assay systems and nuclear reactor outage maintenance. (authors)
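    The random-search variant described here (vary uncertain parameters within user intervals, keep the candidate whose predicted efficiency best matches a spectral benchmark) can be sketched generically. The forward model, parameter names, intervals, and benchmark value below are all hypothetical stand-ins; the actual ISOCS efficiency calculation is proprietary and not reproduced.

```python
import random

random.seed(3)

# Hypothetical forward model: efficiency at some energy as a smooth function
# of two uncertain geometry parameters (matrix density, fill height).
def efficiency(density, height):
    return 0.02 / (1.0 + 0.5 * density) / (1.0 + 0.1 * height)

BENCHMARK = 0.0123   # e.g. efficiency implied by a multi-line nuclide (made up)

# Random search within user-specified intervals, keeping the candidate whose
# predicted efficiency best reproduces the benchmark.
best, best_err = None, float("inf")
for _ in range(5000):
    d = random.uniform(0.5, 2.5)    # density interval, g/cm^3 (assumed)
    h = random.uniform(2.0, 10.0)   # fill-height interval, cm (assumed)
    err = abs(efficiency(d, h) - BENCHMARK)
    if err < best_err:
        best, best_err = (d, h), err

print("best (density, height):", best, "residual:", best_err)
```

    With several benchmarks (multiple lines or counting geometries), the residual would be a sum over all of them, which is what makes the optimum better constrained.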

  2. Quantum key distribution with an unknown and untrusted source

    Science.gov (United States)

    Zhao, Yi; Qi, Bing; Lo, Hoi-Kwong

    2009-03-01

    The security of a standard bi-directional "plug & play" quantum key distribution (QKD) system has been an open question for a long time. This is mainly because its source is effectively controlled by an eavesdropper, which means the source is unknown and untrusted. Qualitative discussion on this subject has been made previously. In this paper, we present the first quantitative security analysis of a general class of QKD protocols whose sources are unknown and untrusted. The security of the standard BB84 protocol, the weak+vacuum decoy state protocol, and the one-decoy decoy state protocol, each with an unknown and untrusted source, is rigorously proved. We derive rigorous lower bounds on the secure key generation rates of the above three protocols. Our numerical simulation results show that QKD with an untrusted source gives a key generation rate that is close to that with a trusted source. Our work is published in [1]. [1] Y. Zhao, B. Qi, and H.-K. Lo, Phys. Rev. A 77, 052327 (2008).

  3. Preparation of Films and Sources for 4π Counting; Prigotovlenie dlya 4π-scheta plenok i istochnikov

    Energy Technology Data Exchange (ETDEWEB)

    Konstantinov, A. A.; Sazonova, T. E. [Vsesojuznyj Nauchno - Isledovatel' skij Institut Im. D.I. Mendeleeva, Leningrad, SSSR (Russian Federation)

    1967-03-15

    To obtain a high degree of accuracy in determining the specific activity of sources by the absolute counting of particles with a 4π counter, attention must be paid to the preparation of the radioactive sources. At the Mendeleev Institute of Metrology, celluloid films (surface density 8-10 μg/cm²) coated on both sides with gold (10-15 μg/cm²) or palladium (5-6 μg/cm²) are used as the bases of the radioactive sources. In order to reduce the correction for absorption of beta particles in the radioactive deposit, the base is specially treated with insulin. The authors present an extremely sensitive and effective method, employing the electron-capture nuclide 54Mn (54Cr), for determining the uniform distribution of the active layer over the entire insulin-treated surface. A solution of 54Mn (54Cr) salt was applied to the insulin-treated film, and the source of 54Cr (54Mn) Auger K electrons thus obtained was investigated with the help of a proportional 4π counter. The total number of 54Cr (54Mn) Auger K electrons from the source was 8-12% less than the value calculated from the fluorescence coefficient (determined from the number of 54Cr (54Mn) K X-quanta emitted by the source) and the number of K electrons absorbed in the film (determined by the 'sandwich' method). From the differences, for insulin-treated and untreated 54Mn (54Cr) sources, between the calculated and recorded numbers of Auger electrons, it is possible to reach a definite conclusion regarding the quality of the insulin treatment. (author)

  4. Identifying fecal pollution sources using 3M(™) Petrifilm (™) count plates and antibiotic resistance analysis in the Horse Creek Watershed in Aiken County, SC (USA).

    Science.gov (United States)

    Harmon, S Michele; West, Ryan T; Yates, James R

    2014-12-01

    Sources of fecal coliform pollution in a small South Carolina (USA) watershed were identified using inexpensive methods and commonly available equipment. Samples from the upper reaches of the watershed were analyzed with 3M(™) Petrifilm(™) count plates. We were able to narrow the study's focus to one particular tributary, Sand River, which was the major contributor of coliform pollution (both fecal and total) to a downstream reservoir that is heavily used for recreational purposes. Concentrations of total coliforms ranged from 2,400 to 120,333 cfu/100 mL, with sharp increases in coliform counts observed in samples taken after rain events. Positive correlations between turbidity and fecal coliform counts suggested a relationship between fecal pollution and stormwater runoff. Antibiotic resistance analysis (ARA) compared the antibiotic resistance profiles of fecal coliform isolates from the stream to those of a watershed-specific fecal source library (equine, waterfowl, canines, and untreated sewage). Known fecal source isolates and unknown isolates from the stream were exposed to six antibiotics at three concentrations each. Discriminant analysis grouped known isolates with an overall average rate of correct classification (ARCC) of 84.3%. A total of 401 isolates from the first stream location were classified as equine (45.9%), sewage (39.4%), waterfowl (6.2%), and feline (8.5%). A similar pattern was observed at the second sampling location, with 42.6% equine, 45.2% sewage, 2.8% waterfowl, 0.6% canine, and 8.8% feline. While there were slight weather-dependent differences, the vast majority of the coliform pollution in this stream appeared to come from two sources: equine and sewage. This information will contribute to better land-use decisions and further justify the implementation of low-impact development practices within this urban watershed.

  5. Blind Separation of Nonstationary Sources Based on Spatial Time-Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Zhang Yimin

    2006-01-01

    Blind source separation (BSS) based on spatial time-frequency distributions (STFDs) provides improved performance over blind source separation methods based on second-order statistics when dealing with signals that are localized in the time-frequency (t-f) domain. In this paper, we propose the use of STFD matrices for both whitening and recovery of the mixing matrix, two stages commonly required in many BSS methods, to provide BSS performance that is robust to noise. In addition, a simple method is proposed to select the auto- and cross-term regions of the time-frequency distribution (TFD). To further improve BSS performance, t-f grouping techniques are introduced to reduce the number of signals under consideration, and to allow the receiver array to separate more sources than the number of array sensors, provided that the sources have disjoint t-f signatures. With the use of one or more of the techniques proposed in this paper, improved performance in the blind separation of nonstationary signals can be achieved.

  6. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems.

    Science.gov (United States)

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-03-12

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple-input multiple-output (VMIMO) system. In real applications, the signal that the BS receives is a distributed source because of scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional algorithm based on estimating signal parameters via rotational invariance techniques (ESPRIT) is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariance relationships are constructed. Then, we extend ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions for azimuth and elevation for ID and CD sources have closed forms, which means that spectrum peak searching is avoided. Therefore, compared to traditional 2D DOA estimation algorithms, the proposed algorithm imposes significantly lower computational complexity. The intersection point of two rays, which come from two different directions measured by two uniform rectangular arrays (URAs), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm.
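    The rotational-invariance idea behind ESPRIT can be shown in its simplest form: a single point source on a uniform linear array, where the phase ramp between the two shifted subarrays encodes the DOA. This is only a minimal 1D sketch with assumed toy parameters; the paper's extension to 2D DOAs of distributed sources via generalized steering vectors is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

M, N = 8, 200          # sensors, snapshots
d_over_lam = 0.5       # element spacing in wavelengths
theta = np.deg2rad(20.0)

# Steering vector of a uniform linear array for a far-field point source.
a = np.exp(2j * np.pi * d_over_lam * np.arange(M) * np.sin(theta))
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(a, s) + noise

# Signal subspace = dominant eigenvector of the sample covariance matrix.
R = X @ X.conj().T / N
w, V = np.linalg.eigh(R)
us = V[:, -1]                      # eigenvector of the largest eigenvalue

# Rotational invariance between the two shifted subarrays yields the
# inter-element phase, hence the DOA.
phi = np.angle(np.vdot(us[:-1], us[1:]))
theta_hat = np.rad2deg(np.arcsin(phi / (2 * np.pi * d_over_lam)))
print(f"estimated DOA: {theta_hat:.2f} deg")
```

    Because the angle comes out of an eigenstructure relation in closed form, no spectrum peak search is needed, which is the complexity advantage the abstract emphasizes.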

  7. Simulated and measured neutron/gamma light output distribution for poly-energetic neutron/gamma sources

    Science.gov (United States)

    Hosseini, S. A.; Zangian, M.; Aghabozorgi, S.

    2018-03-01

    In the present paper, the light output distribution due to poly-energetic neutron/gamma (neutron or gamma) source was calculated using the developed MCNPX-ESUT-PE (MCNPX-Energy engineering of Sharif University of Technology-Poly Energetic version) computational code. The simulation of light output distribution includes the modeling of the particle transport, the calculation of scintillation photons induced by charged particles, simulation of the scintillation photon transport and considering the light resolution obtained from the experiment. The developed computational code is able to simulate the light output distribution due to any neutron/gamma source. In the experimental step of the present study, the neutron-gamma discrimination based on the light output distribution was performed using the zero crossing method. As a case study, 241Am-9Be source was considered and the simulated and measured neutron/gamma light output distributions were compared. There is an acceptable agreement between the discriminated neutron/gamma light output distributions obtained from the simulation and experiment.

  8. Preverbal and verbal counting and computation.

    Science.gov (United States)

    Gallistel, C R; Gelman, R

    1992-08-01

    We describe the preverbal system of counting and arithmetic reasoning revealed by experiments on numerical representations in animals. In this system, numerosities are represented by magnitudes, which are rapidly but inaccurately generated by the Meck and Church (1983) preverbal counting mechanism. We suggest the following. (1) The preverbal counting mechanism is the source of the implicit principles that guide the acquisition of verbal counting. (2) The preverbal system of arithmetic computation provides the framework for the assimilation of the verbal system. (3) Learning to count involves, in part, learning a mapping from the preverbal numerical magnitudes to the verbal and written number symbols and the inverse mappings from these symbols to the preverbal magnitudes. (4) Subitizing is the use of the preverbal counting process and the mapping from the resulting magnitudes to number words in order to generate rapidly the number words for small numerosities. (5) The retrieval of the number facts, which plays a central role in verbal computation, is mediated via the inverse mappings from verbal and written numbers to the preverbal magnitudes and the use of these magnitudes to find the appropriate cells in tabular arrangements of the answers. (6) This model of the fact retrieval process accounts for the salient features of the reaction time differences and error patterns revealed by experiments on mental arithmetic. (7) The application of verbal and written computational algorithms goes on in parallel with, and is to some extent guided by, preverbal computations, both in the child and in the adult.

  9. Establishment of reference intervals for complete blood count parameters during normal pregnancy in Beijing.

    Science.gov (United States)

    Li, Aiwei; Yang, Shuo; Zhang, Jie; Qiao, Rui

    2017-11-01

    To observe the changes of complete blood count (CBC) parameters during pregnancy and establish appropriate reference intervals for healthy pregnant women. Healthy pregnant women took blood tests in all trimesters. All blood samples were processed on a Sysmex XE-2100. The following CBC parameters were analyzed: red blood cell count (RBC), hemoglobin (Hb), hematocrit (Hct), mean corpuscular volume (MCV), mean corpuscular hemoglobin (MCH), mean corpuscular hemoglobin concentration (MCHC), red blood cell distribution width (RDW), platelet count (PLT), mean platelet volume (MPV), platelet distribution width (PDW), white blood cell count (WBC), and the leukocyte differential count. Reference intervals were established using the 2.5th and 97.5th percentiles of the distribution. CBC parameters showed dynamic changes across trimesters. RBC, Hb, and Hct declined in trimester 1, reached their lowest point in trimester 2, and began to rise again in trimester 3. WBC, neutrophil count (Neut), monocyte count (MONO), RDW, and PDW increased from trimester 1 to trimester 3. On the contrary, MCHC, lymphocyte count (LYMPH), PLT, and MPV gradually decreased during pregnancy. All CBC parameters differed significantly between pregnant women and normal women, regardless of trimester; selected values (non-pregnancy vs pregnancy) were as follows: RBC 4.50 vs 3.94×10^12/L, Hb 137 vs 120 g/L, WBC 5.71 vs 9.06×10^9/L, LYMPH% 32.2 vs 18.0, Neut% 58.7 vs 75.0, and PLT 251 vs 202×10^9/L. The changes of CBC parameters during pregnancy are described, and reference intervals for Beijing pregnant women are presented in this study. © 2017 Wiley Periodicals, Inc.
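
    The nonparametric interval construction used here — the 2.5th and 97.5th percentiles of the observed distribution — can be sketched in a few lines. The helper names and the toy "Hb" data below are illustrative, not from the study:

```python
def percentile(sorted_vals, p):
    """Linear-interpolation percentile (p in [0, 100]) of pre-sorted data."""
    n = len(sorted_vals)
    k = (n - 1) * p / 100.0
    lo, hi = int(k), min(int(k) + 1, n - 1)
    frac = k - lo
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

def reference_interval(values):
    """2.5th-97.5th percentile reference interval, as used for the CBC parameters."""
    s = sorted(values)
    return percentile(s, 2.5), percentile(s, 97.5)

# Toy data: 1000 evenly spaced 'Hb' values between 100 and 140 g/L.
hb = [100 + 40 * i / 999 for i in range(1000)]
low, high = reference_interval(hb)  # central 95% of the toy distribution
```

    In practice, guideline documents also recommend outlier exclusion and a minimum sample size (typically 120 per partition) before computing the percentiles.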

  10. Automatic spark counting of alpha-tracks in plastic foils

    International Nuclear Information System (INIS)

    Somogyi, G.; Medveczky, L.; Hunyadi, I.; Nyako, B.

    1976-01-01

    The possibility of alpha-track counting with a jumping spark counter in cellulose acetate and polycarbonate nuclear track detectors was studied. A theoretical treatment is presented which predicts the optimum residual thickness of the etched foils in which completely through-etched tracks (i.e. holes) can be obtained for alpha-particles of various energies and angles of incidence. In agreement with the theoretical prediction, it is shown that successful spark counting of alpha-tracks can be performed even in polycarbonate foils. Some counting characteristics, such as counting efficiency vs particle energy at various etched foil thicknesses and the surface spark density produced by electric breakdowns in unexposed foils vs foil thickness, have been determined. Special attention was given to the spark counting of alpha-tracks entering thin detectors at right angles. The applicability of the spark counting technique is demonstrated in angular distribution measurements of the 27Al(p,α0)24Mg nuclear reaction at the Ep = 1899 keV resonance energy. For this study 15 μm thick Makrofol-G foils and a jumping spark counter of improved construction were used. (orig.) [de

  11. The Copenhagen primary care differential count (CopDiff) database

    DEFF Research Database (Denmark)

    Andersen, Christen Bertel L; Siersma, V.; Karlslund, W.

    2014-01-01

    BACKGROUND: The differential blood cell count provides valuable information about a person's state of health. Together with a variety of biochemical variables, these analyses describe important physiological and pathophysiological relations. There is a need for research databases to explore such associations. The Copenhagen General Practitioners' Laboratory has registered all analytical results since July 1, 2000, and the Copenhagen Primary Care Differential Count database contains all differential blood cell count results (n=1,308,022) from July 1, 2000 to January 25, 2010 requested by general practitioners, along with results from analysis... This paper describes the construction of the Copenhagen Primary Care Differential Count database as well as the distribution of characteristics of the population it covers and the variables that are recorded. Finally, it gives examples of its use as an inspiration to peers for collaboration.

  12. A Frank mixture copula family for modeling higher-order correlations of neural spike counts

    International Nuclear Information System (INIS)

    Onken, Arno; Obermayer, Klaus

    2009-01-01

    In order to evaluate the importance of higher-order correlations in neural spike count codes, flexible statistical models of dependent multivariate spike counts are required. Copula families, parametric multivariate distributions that represent dependencies, can be applied to construct such models. We introduce the Frank mixture family as a new copula family that has separate parameters for all pairwise and higher-order correlations. In contrast to the Farlie-Gumbel-Morgenstern copula family that shares this property, the Frank mixture copula can model strong correlations. We apply spike count models based on the Frank mixture copula to data generated by a network of leaky integrate-and-fire neurons and compare the goodness of fit to distributions based on the Farlie-Gumbel-Morgenstern family. Finally, we evaluate the importance of using proper single neuron spike count distributions on the Shannon information. We find notable deviations in the entropy that increase with decreasing firing rates. Moreover, we find that the Frank mixture family increases the log likelihood of the fit significantly compared to the Farlie-Gumbel-Morgenstern family. This shows that the Frank mixture copula is a useful tool to assess the importance of higher-order correlations in spike count codes.

  13. A calculation of dose distribution around 32P spherical sources and its clinical application

    International Nuclear Information System (INIS)

    Ohara, Ken; Tanaka, Yoshiaki; Nishizawa, Kunihide; Maekoshi, Hisashi

    1977-01-01

    In order to avoid radiation hazards in the radiation therapy of craniopharyngioma using 32P, it is helpful to prepare a detailed dose distribution in the vicinity of the source in tissue. Valley's method is used for the calculations. A problem with the method is pointed out and the method itself is refined numerically: it extends the region of xi where an approximate polynomial is available, and it determines the optimum degree of the polynomial as 9. The usefulness of the polynomial is examined by comparison with Berger's scaled absorbed dose distribution F(xi) and Valley's result. The dose and dose rate distributions around uniformly distributed spherical sources are computed from the termwise integration of the polynomial of degree 9 over the range of xi from 0 to 1.7. The dose distributions calculated from the spherical surface to a point 0.5 cm outside the source are given for source radii of 0.5, 0.6, 0.7, 1.0, and 1.5 cm. The therapeutic dose for a craniopharyngioma with a spherically shaped cyst, and the absorbed dose to the normal tissue (oculomotor nerve), are obtained from these dose rate distributions. (auth.)

  14. Presence of renewable sources of energy, cogeneration, energy efficiency and distributed generation in the International Nuclear Information System (INIS)

    International Nuclear Information System (INIS)

    Pares Ferrer, Marianela; Oviedo Rivero, Irayda; Gonzalez Garcia, Alejandro

    2011-01-01

    The International Nuclear Information System (INIS) was created in 1970 by the International Atomic Energy Agency (IAEA) with the objective of promoting the exchange of scientific and technical information on the peaceful uses of atomic energy. INIS processes most of the scientific and technical literature generated worldwide on nuclear engineering, safeguards and non-proliferation, and applications in agriculture and health, and it contributes to a repository of nuclear information for present and future generations. It also covers economic and environmental aspects of other energy sources, which facilitates comparative studies for decision making. The INIS database, its main information product, contains more than 3 million records. One of the services provided by the Centre for Information Management and Energy Development (CUBAENERGIA), as the INIS centre in Cuba, is searching for information on the peaceful uses of nuclear science and technology in the Member Countries and registering information on their applications in Cuba. More recently, this service has been extended to the application of renewable energy sources in the country, as part of the information-management work carried out for the National Group for Renewable Energy, Cogeneration, Saving and Energy Efficiency, created in 2007 and coordinated by MINBAS with the participation of institutions belonging to organs of the Central State Administration. This work presents the results of a preliminary study of the records in INIS on renewable energy sources, cogeneration, energy efficiency and distributed generation, as well as the application of metric tools to the records found for the case of distributed generation, which made it possible to characterize their historical evolution and the participation of countries in their development and

  15. Quantum key distribution with entangled photon sources

    International Nuclear Information System (INIS)

    Ma Xiongfeng; Fung, Chi-Hang Fred; Lo, H.-K.

    2007-01-01

    A parametric down-conversion (PDC) source can be used as either a triggered single-photon source or an entangled-photon source in quantum key distribution (QKD). The triggering PDC QKD has already been studied in the literature. On the other hand, a model and a post-processing protocol for the entanglement PDC QKD are still missing. We fill in this important gap by proposing such a model and a post-processing protocol for the entanglement PDC QKD. Although the PDC model is proposed to study the entanglement-based QKD, we emphasize that our generic model may also be useful for other non-QKD experiments involving a PDC source. Since an entangled PDC source is a basis-independent source, we apply Koashi and Preskill's security analysis to the entanglement PDC QKD. We also investigate the entanglement PDC QKD with two-way classical communications. We find that the recurrence scheme increases the key rate and the Gottesman-Lo protocol helps tolerate higher channel losses. By simulating a recent 144-km open-air PDC experiment, we compare three implementations: entanglement PDC QKD, triggering PDC QKD, and coherent-state QKD. The simulation result suggests that the entanglement PDC QKD can tolerate higher channel losses than the coherent-state QKD. The coherent-state QKD with decoy states is able to achieve the highest key rate in the low- and medium-loss regions. By applying the Gottesman-Lo two-way post-processing protocol, the entanglement PDC QKD can tolerate up to 70 dB combined channel losses (35 dB for each channel) provided that the PDC source is placed in between Alice and Bob. After considering statistical fluctuations, the PDC setup can tolerate up to 53 dB channel losses

  16. Development of counting system for wear measurements using Thin Layer Activation and the Wearing Apparatus

    Energy Technology Data Exchange (ETDEWEB)

    França, Michel de A.; Suita, Julio C.; Salgado, César M., E-mail: mchldante@gmail.com, E-mail: suita@ien.gov.br, E-mail: otero@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2017-07-01

    This paper focuses on developing a counting system for the Wearing Apparatus, a device previously built to generate measurable wear on a given surface (Main Source) and to carry the filings from it to a filter (second source). Thin Layer Activation (TLA) is a technique used to produce activity on one of the Wearing Apparatus's pieces; this activity is proportional to the amount of material worn, or scraped, from the piece's surface. Thus, by measuring the activity at those two points it is possible to measure the produced wear. The methodology used in this work is based on simulations with the MCNP-X code to find the best specifications for shielding, solid angles, detector dimensions and collimation for the counting system. By simulating several scenarios, each different from the others, and analyzing the results in the form of counts per second, the ideal counting-system specifications and geometry to measure the activity in the Main Source and the filter (second source) are chosen. After that, a set of previously activated stainless steel foils was used to reproduce the real experiment's conditions, which consist of using TLA and the Wearing Apparatus; the results demonstrate that the counting system and methodology are adequate for such experiments. (author)

  17. Galaxy evolution and large-scale structure in the far-infrared. II. The IRAS faint source survey

    International Nuclear Information System (INIS)

    Lonsdale, C.J.; Hacking, P.B.; Conrow, T.P.; Rowan-Robinson, M.

    1990-01-01

    The new IRAS Faint Source Survey data base is used to confirm the conclusion of Hacking et al. (1987) that the 60 micron source counts fainter than about 0.5 Jy lie in excess of predictions based on nonevolving model populations. The existence of an anisotropy between the northern and southern Galactic caps discovered by Rowan-Robinson et al. (1986) and Needham and Rowan-Robinson (1988) is confirmed, and it is found to extend below their sensitivity limit to about 0.3 Jy in 60 micron flux density. The count anisotropy at f(60) > 0.3 Jy can be interpreted reasonably as due to the Local Supercluster; however, no single structure accounting for the fainter anisotropy can be easily identified in either optical or far-IR two-dimensional sky distributions. The far-IR galaxy sky distributions are considerably smoother than distributions from the published optical galaxy catalogs. It is likely that structures of the large size discussed here have been discriminated against in earlier studies due to insufficient volume sampling. 105 refs

  18. Planck intermediate results. VII. Statistical properties of infrared and radio extragalactic sources from the Planck Early Release Compact Source Catalogue at frequencies between 100 and 857 GHz

    Science.gov (United States)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Argüeso, F.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Balbi, A.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Bernard, J.-P.; Bersanelli, M.; Bethermin, M.; Bhatia, R.; Bonaldi, A.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Burigana, C.; Cabella, P.; Cardoso, J.-F.; Catalano, A.; Cayón, L.; Chamballu, A.; Chary, R.-R.; Chen, X.; Chiang, L.-Y.; Christensen, P. R.; Clements, D. L.; Colafrancesco, S.; Colombi, S.; Colombo, L. P. L.; Coulais, A.; Crill, B. P.; Cuttaia, F.; Danese, L.; Davis, R. J.; de Bernardis, P.; de Gasperis, G.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Dörl, U.; Douspis, M.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Fosalba, P.; Frailis, M.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Harrison, D.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Jaffe, T. R.; Jaffe, A. H.; Jagemann, T.; Jones, W. C.; Juvela, M.; Keihänen, E.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurinsky, N.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lawrence, C. R.; Leonardi, R.; Lilje, P. B.; López-Caniego, M.; Macías-Pérez, J. F.; Maino, D.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Mazzotta, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Mitra, S.; Miville-Deschènes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Nørgaard-Nielsen, H. 
U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reach, W. T.; Rebolo, R.; Reinecke, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sajina, A.; Sandri, M.; Savini, G.; Scott, D.; Smoot, G. F.; Starck, J.-L.; Sudiwala, R.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Türler, M.; Valenziano, L.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; White, M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2013-02-01

    We make use of the Planck all-sky survey to derive number counts and spectral indices of extragalactic sources - infrared and radio sources - from the Planck Early Release Compact Source Catalogue (ERCSC) at 100 to 857 GHz (3 mm to 350 μm). Three zones (deep, medium and shallow) of approximately homogeneous coverage are used to permit a clean and controlled correction for incompleteness, which was explicitly not done for the ERCSC, as it was aimed at providing lists of sources to be followed up. Our sample, prior to the 80% completeness cut, contains between 217 sources at 100 GHz and 1058 sources at 857 GHz over about 12 800 to 16 550 deg² (31 to 40% of the sky). After the 80% completeness cut, between 122 and 452 sources remain, with flux densities above 0.3 and 1.9 Jy at 100 and 857 GHz respectively. The sample so defined can be used for statistical analysis. Using the multi-frequency coverage of the Planck High Frequency Instrument, all the sources have been classified as either dust-dominated (infrared galaxies) or synchrotron-dominated (radio galaxies) on the basis of their spectral energy distributions (SED). Our sample is thus complete, flux-limited and color-selected to differentiate between the two populations. We find an approximately equal number of synchrotron and dusty sources between 217 and 353 GHz; at frequencies of 353 GHz and above (or 217 GHz and below), the numbers are dominated by dusty (synchrotron) sources, as expected. For most of the sources, the spectral indices are also derived. We provide for the first time counts of bright sources from 353 to 857 GHz and the contributions from dusty and synchrotron sources at all HFI frequencies in the key spectral range where these spectra are crossing. The observed counts are in the Euclidean regime. The number counts are compared to previously published data (from earlier Planck results, Herschel, BLAST, SCUBA, LABOCA, SPT, and ACT) and models taking into account both radio or infrared galaxies, and covering a

  19. The Integration of Renewable Energy Sources into Electric Power Distribution Systems, Vol. II Utility Case Assessments

    Energy Technology Data Exchange (ETDEWEB)

    Zaininger, H.W.

    1994-01-01

    Electric utility distribution system impacts associated with the integration of renewable energy sources such as photovoltaics (PV) and wind turbines (WT) are considered in this project. The impacts are expected to vary from site to site according to the following characteristics: the local solar insolation and/or wind characteristics, the renewable energy source penetration level, whether battery or other energy storage systems are applied, and the local utility distribution design standards and planning practices. Small, distributed renewable energy sources are connected to the utility distribution system like other, similar kW- and MW-scale equipment and loads. Residential applications are expected to be connected to single-phase 120/240-V secondaries. Larger kW-scale applications may be connected to three-phase secondaries, and larger hundred-kW- and MW-scale applications, such as MW-scale windfarms or PV plants, may be connected to electric utility primary systems via customer-owned primary and secondary collection systems. In any case, the installation of small, distributed renewable energy sources is expected to have a significant impact on local utility distribution primary and secondary system economics. Small, distributed renewable energy sources installed on utility distribution systems will also produce non-site-specific utility generation system benefits, such as energy and capacity displacement benefits, in addition to the local site-specific distribution system benefits. Although generation system benefits are not site-specific, they are utility-specific, and they vary significantly among utilities in different regions. In addition, transmission system benefits, environmental benefits and other benefits may apply. These benefits also vary significantly among utilities and regions. Seven utility case studies considering PV, WT, and battery storage were conducted to identify a range of potential renewable energy source distribution system applications. The

  20. New complete sample of identified radio sources. Part 2. Statistical study

    International Nuclear Information System (INIS)

    Soltan, A.

    1978-01-01

    The complete sample of radio sources with known redshifts selected in Paper I is studied. Source counts in the sample and the luminosity-volume test show that both quasars and galaxies are subject to evolution. Luminosity functions for different ranges of redshift are obtained. Due to many uncertainties, only simplified models of the evolution are tested. An exponential decline of the luminosity with time for all the bright sources is in good agreement both with the luminosity-volume test and the N(S) relation in the entire range of observed flux densities. It is shown that sources in the sample are randomly distributed on scales greater than about 17 Mpc. (author)
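
    The luminosity-volume (V/Vmax) test mentioned above can be illustrated with a toy Monte Carlo: for a uniform, non-evolving population in static Euclidean space, V/Vmax = (S_lim/S)^(3/2) has mean 0.5, and evolution shows up as a departure from that value. The setup below is our own toy example, not the paper's data:

```python
import random

def v_over_vmax(fluxes, s_lim):
    """Euclidean V/Vmax for each source with flux S above the limit S_lim:
    V/Vmax = (S_lim/S)**1.5. A uniform, non-evolving population gives a
    mean of 0.5 (the basis of the luminosity-volume test)."""
    return [(s_lim / s) ** 1.5 for s in fluxes]

# Toy check: standard candles (L = 1) placed uniformly in a sphere of radius R = 1,
# observed down to the flux limit set at the sphere's edge.
random.seed(0)
R = 1.0
dists = [R * random.random() ** (1.0 / 3.0) for _ in range(20000)]  # uniform in volume
fluxes = [1.0 / (d * d) for d in dists]                              # inverse-square law
mean_vvm = sum(v_over_vmax(fluxes, 1.0 / (R * R))) / len(fluxes)     # expect ~0.5
```

    For an evolving population (e.g. more luminous sources at earlier epochs), the mean shifts above 0.5, which is the signature reported for the quasars and galaxies in this sample.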

  1. Searching Malware and Sources of Its Distribution in the Internet

    Directory of Open Access Journals (Sweden)

    L. L. Protsenko

    2011-09-01

    Full Text Available The article considers, for the first time, an algorithm developed by the author for searching for malware and the sources of its distribution, based on HijackThis logs published on the Internet.

  2. Estimation of low-level neutron dose-equivalent rate by using extrapolation method for a curie level Am–Be neutron source

    International Nuclear Information System (INIS)

    Li, Gang; Xu, Jiayun; Zhang, Jie

    2015-01-01

    Neutron radiation protection is an important research area because of the strong radiobiological effect of neutron fields. The radiation dose of neutrons is closely related to the neutron energy, and the relationship is a complex function of energy. For a low-level neutron radiation field (e.g. an Am–Be source), commonly used commercial neutron dosimeters cannot always reflect the low-level dose rate, being restricted by their own sensitivity limits and measuring ranges. In this paper, the intensity distribution of the neutron field produced by a curie-level Am–Be neutron source was investigated by measuring the count rates obtained with a 3He proportional counter at different locations around the source. The results indicate that the count rates outside the source room are negligible compared with the count rates measured inside it. In the source room, the 3He proportional counter and a neutron dosimeter were used to measure the count rates and dose rates, respectively, at different distances from the source. The results indicate that both the count rates and dose rates decrease exponentially with increasing distance, and that the dose rates measured by a commercial dosimeter are in good agreement with the results calculated by a Geant4 simulation within the inherent errors recommended by ICRP and IEC. Further studies presented in this paper indicate that the low-level neutron dose equivalent rates in the source room increase exponentially with increasing low-energy neutron count rates when the source is lifted from the shield at different radiation intensities. Based on this relationship, as well as the count rates measured at larger distances from the source, the dose rates can be calculated approximately by extrapolation. This principle can be used to estimate low-level neutron dose values in the source room which cannot be measured directly by a commercial dosimeter.
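
    The extrapolation idea — fit the exponential dose-vs-count-rate relation where both quantities are measurable, then evaluate it at count rates below the dosimeter's range — can be sketched with a log-linear least-squares fit. All calibration numbers below are hypothetical, chosen only to make the fit exact:

```python
import math

def fit_log_linear(x, y):
    """Least-squares fit of ln(y) = a + b*x; returns (a, b).
    Models an assumed exponential dose-vs-count-rate relation D = exp(a) * exp(b*x)."""
    n = len(x)
    ly = [math.log(v) for v in y]
    sx, sy = sum(x), sum(ly)
    sxx = sum(v * v for v in x)
    sxy = sum(xi, ) if False else sum(xi * yi for xi, yi in zip(x, ly))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Hypothetical calibration points: count rates (cps) and dose rates (uSv/h)
# generated from D = 0.5 * exp(0.01 * C).
counts = [100, 200, 300, 400, 500]
doses = [0.5 * math.exp(0.01 * c) for c in counts]
a, b = fit_log_linear(counts, doses)

# Extrapolate to a count rate below the dosimeter's measuring range:
dose_at_50 = math.exp(a + b * 50)
```

    In the paper's setting the fitted relation is measured, not assumed; the sketch only shows the mechanics of fitting in log space and evaluating the model outside the calibrated range.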

  3. Moving Sources Detection System

    International Nuclear Information System (INIS)

    Coulon, Romain; Kondrasovs, Vladimir; Boudergui, Karim; Normand, Stephane

    2013-06-01

    To monitor radioactivity passing through a pipe or in a given container such as a train or a truck, radiation detection systems are commonly employed. These detectors can be used in a network set along the source track to increase the overall detection efficiency. However, detection methods are based on counting-statistics analysis. The method usually implemented consists of triggering an alarm when an individual signal rises over a threshold initially estimated with regard to the natural background signal. The detection efficiency is then proportional to the number of detectors in use, because each sensor is treated as a standalone sensor. A new approach is presented in this paper that takes into account the temporal periodicity of the signals from all distributed sensors as a whole. This detection method is based not only on counting statistics but also on time-series analysis. A specific algorithm has therefore been developed in our lab for this kind of application and shows a significant improvement, especially in terms of detection efficiency and false-alarm reduction. We also plan on extracting information from the source vector. This paper presents the theoretical approach and some preliminary results obtained in our laboratory. (authors)
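
    The idea of treating the sensor network as a whole, rather than as standalone detectors, can be sketched by delay-and-sum alignment. This is our own simplified illustration (it assumes a known, constant source speed, which the paper does not state): shift each sensor's count series by its expected transit delay so a moving source adds coherently while uncorrelated background does not.

```python
def aligned_sum(signals, positions, speed, dt):
    """Shift each sensor's count series by its expected transit delay
    (position / speed) and sum, so a source moving at `speed` along the
    track adds coherently across the network (illustrative sketch)."""
    n = len(signals[0])
    out = [0.0] * n
    for sig, pos in zip(signals, positions):
        lag = int(round(pos / speed / dt))  # delay, in samples, at this sensor
        for i in range(n):
            j = i + lag                      # sample when the source passes this sensor
            if 0 <= j < n:
                out[i] += sig[j]
    return out

# Three sensors 10 m apart, source moving at 10 m/s, 1 s sampling:
# each sensor sees the count burst one sample later than the previous one.
sig1 = [0, 5, 0, 0, 0]
sig2 = [0, 0, 5, 0, 0]
sig3 = [0, 0, 0, 5, 0]
combined = aligned_sum([sig1, sig2, sig3], [0.0, 10.0, 20.0], 10.0, 1.0)
```

    The aligned sum concentrates the three separate 5-count bursts into a single 15-count peak, which is far easier to distinguish from background than any individual sensor's signal.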

  4. Compton suppression gamma-counting: The effect of count rate

    Science.gov (United States)

    Millard, H.T.

    1984-01-01

    Past research has shown that anti-coincidence-shielded Ge(Li) spectrometers enhance the signal-to-background ratios for gamma photopeaks that are situated on high Compton backgrounds. Ordinarily, an anti- or non-coincidence spectrum (A) and a coincidence spectrum (C) are collected simultaneously with these systems. To be useful in neutron activation analysis (NAA), the fractions of the photopeak counts routed to the two spectra must be constant from sample to sample, or variations must be corrected quantitatively. Most Compton suppression counting has been done at low count rate, but in NAA applications, count rates may be much higher. To operate over the wider dynamic range, the effect of count rate on the ratio of the photopeak counts in the two spectra (A/C) was studied. It was found that as the count rate increases, A/C decreases for gammas not coincident with other gammas from the same decay. For gammas coincident with other gammas, A/C increases to a maximum and then decreases. These results suggest that calibration curves are required to correct photopeak areas so quantitative data can be obtained at higher count rates. © 1984.
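
    The calibration-curve correction suggested above can be sketched as follows: interpolate the measured A/C ratio at the operating count rate, then use the implied routing fraction A/(A+C) = r/(1+r) to recover a comparable total photopeak area from the A spectrum alone. All numbers and function names are illustrative assumptions, not the paper's data:

```python
def interp(x, xs, ys):
    """Piecewise-linear interpolation of a calibration curve (xs ascending)."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            f = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + f * (ys[i] - ys[i - 1])

# Hypothetical calibration: A/C ratio of a non-coincident gamma line vs. count rate (kcps).
# A/C falls as the count rate rises, the behavior reported for non-coincident gammas.
rates = [1.0, 5.0, 10.0, 20.0]
ratios = [4.0, 3.6, 3.0, 2.2]

def total_from_anticoincidence(a_counts, rate):
    """Estimate the total photopeak area from the A spectrum alone, using the
    calibrated fraction A/(A+C) = r/(1+r) at this count rate (illustrative)."""
    r = interp(rate, rates, ratios)
    return a_counts / (r / (1.0 + r))
```

    For example, at 10 kcps the calibrated A/C is 3.0, so 75% of the photopeak is expected in the A spectrum, and an observed A-area of 750 counts implies a total of 1000.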

  5. Exact distribution of a pattern in a set of random sequences generated by a Markov source: applications to biological data.

    Science.gov (United States)

    Nuel, Gregory; Regad, Leslie; Martin, Juliette; Camproux, Anne-Claude

    2010-01-26

    In bioinformatics it is common to search for a pattern of interest in a potentially large set of rather short sequences (upstream gene regions, proteins, exons, etc.). Although many methodological approaches allow practitioners to compute the distribution of a pattern count in a random sequence generated by a Markov source, no specific developments have taken into account the counting of occurrences in a set of independent sequences. We aim to address this problem by deriving efficient approaches and algorithms to perform these computations, both for low- and high-complexity patterns, in the framework of homogeneous or heterogeneous Markov models. The latest advances in the field allowed us to use a technique of optimal Markov chain embedding based on deterministic finite automata to introduce three innovative algorithms. Algorithm 1 is the only one able to deal with heterogeneous models; it also avoids any product of convolutions of the pattern distributions over individual sequences. When working with homogeneous models, Algorithm 2 yields a dramatic reduction in complexity by taking advantage of previous computations to obtain moment-generating functions efficiently. In the particular case of low- or moderate-complexity patterns, Algorithm 3 exploits power computation and binary decomposition to further reduce the time complexity to a logarithmic scale. All these algorithms and their relative interest in comparison with existing ones were then tested and discussed on a toy example and three biological data sets: structural patterns in protein loop structures, PROSITE signatures in a bacterial proteome, and transcription factors in upstream gene regions. On these data sets, we also compared our exact approaches to the tempting approximation that consists of concatenating the sequences in the data set into a single sequence. Our algorithms prove to be effective and able to handle real data sets with multiple sequences, as well as biological patterns of
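
    For small toy cases, the exact distribution the authors compute via optimal Markov chain embedding can be checked by brute-force enumeration over all sequences (exponential in length, so useful only for validating faster algorithms). A sketch with an assumed two-letter first-order chain:

```python
from itertools import product

def pattern_count_dist(n, pattern, init, trans):
    """Exact distribution of overlapping occurrences of `pattern` in a length-n
    sequence from a first-order Markov chain, by brute-force enumeration.
    init[a] is the initial probability of letter a; trans[a][b] the transition
    probability a -> b. Returns {count: probability}."""
    dist = {}
    m = len(pattern)
    pat = tuple(pattern)
    for seq in product(init.keys(), repeat=n):
        p = init[seq[0]]
        for a, b in zip(seq, seq[1:]):
            p *= trans[a][b]
        k = sum(1 for i in range(n - m + 1) if seq[i:i + m] == pat)
        dist[k] = dist.get(k, 0.0) + p
    return dist

# Toy chain: uniform i.i.d. over {A, B}, pattern "AB", sequences of length 4.
init = {"A": 0.5, "B": 0.5}
trans = {"A": {"A": 0.5, "B": 0.5}, "B": {"A": 0.5, "B": 0.5}}
dist = pattern_count_dist(4, "AB", init, trans)
```

    Of the 16 equiprobable length-4 sequences, 5 contain no "AB", 10 contain exactly one, and only "ABAB" contains two, so the distribution is {0: 5/16, 1: 10/16, 2: 1/16}. The paper's automaton-based algorithms compute the same distribution in time polynomial in the sequence length.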

  6. EcoCount

    Directory of Open Access Journals (Sweden)

    Phillip P. Allen

    2014-05-01

    Full Text Available Techniques that analyze biological remains from sediment sequences for environmental reconstructions are well established and widely used. Yet identifying, counting, and recording biological evidence such as pollen grains remains a highly skilled, demanding, and time-consuming task. Standard procedure requires the classification and recording of between 300 and 500 pollen grains from each representative sample. Recording the data from a pollen count requires significant effort and focused resources from the palynologist. However, when an adaptation to the recording procedure is utilized, efficiency and time economy improve. We describe EcoCount, which represents a development in environmental data recording procedure. EcoCount is a voice-activated, fully customizable digital count sheet that allows the investigator to interact continuously with the field of view during data recording. Continuous viewing allows the palynologist to remain engaged with the essential task, identification, for longer, making pollen counting more efficient and economical. EcoCount is a versatile software package that can be used to record a variety of environmental evidence and can be installed on different computer platforms, making adoption by users and laboratories simple and inexpensive. The user-friendly format of EcoCount allows any novice to become competent and functional in a very short time.

  7. RBC count

    Science.gov (United States)

    ... by kidney disease) RBC destruction ( hemolysis ) due to transfusion, blood vessel injury, or other cause Leukemia Malnutrition Bone ... slight risk any time the skin is broken) Alternative Names Erythrocyte count; Red blood cell count; Anemia - RBC count Images Blood test ...

  8. Recommended methods for monitoring change in bird populations by counting and capture of migrants

    Science.gov (United States)

    David J. T. Hussell; C. John Ralph

    2005-01-01

    Counts and banding captures of spring or fall migrants can generate useful information on the status and trends of the source populations. To do so, the counts and captures must be taken and recorded in a standardized and consistent manner. We present recommendations for field methods for counting and capturing migrants at intensively operated sites, such as bird...

  9. Power distribution monitor in a nuclear reactor

    International Nuclear Information System (INIS)

    Uematsu, Hitoshi

    1983-01-01

    Purpose: To enable accurate monitoring of the reactor power distribution within a short time when an abnormality occurs in the in-core neutron monitors, or when the reactor core state changes after the neutron monitors have been calibrated. Constitution: The power distribution monitor comprises: a power distribution calculator that receives readings from reactor core status instruments and calculates the in-core neutron flux distribution and the power distribution from previously incorporated physical models; an RCF calculator that receives the readings from the in-core neutron monitors together with the neutron flux and power distributions calculated by the power distribution calculator, and compensates both the counting errors in the monitor readings and the calculation errors in the computed power distribution, thereby calculating the power distribution within the reactor core; and an input/output device for entering the data required by the power distribution calculator and displaying the results calculated by the RCF calculator. (Ikeda, J.)

  10. Binomial distribution of Poisson statistics and tracks overlapping probability to estimate total tracks count with low uncertainty

    International Nuclear Information System (INIS)

    Khayat, Omid; Afarideh, Hossein; Mohammadnia, Meisam

    2015-01-01

    In solid state nuclear track detectors of the chemically etched type, nuclear tracks whose center-to-center distance is shorter than two track radii emerge as overlapping tracks. Track overlapping in this type of detector causes track count losses, and it becomes rather severe at high track densities. Therefore, track counting in this condition should include a correction factor for the count losses of different track overlapping orders, since a number of overlapping tracks may be counted as one track. Another aspect of the problem is the case where imaging the whole area of the detector and counting all tracks are not possible. In these conditions, a statistical generalization method is desired that is applicable to counting a segmented area of the detector, with results that can be generalized to the whole surface of the detector. There is also a challenge in counting densely overlapped tracks, because sufficient geometrical or contextual information is not available. In this paper we present a statistical counting method which gives the user a relation between the track overlapping probabilities on a segmented area of the detector surface and the total number of tracks. To apply the proposed method, one can estimate the total number of tracks on a solid state detector of arbitrary shape and dimensions by approximating the average track area, the whole detector surface area, and some orders of track overlapping probabilities. It will be shown that this method is applicable to high and ultra-high density track images and that the count loss error can be attenuated using a statistical generalization approach. - Highlights: • A correction factor for count losses of different tracks overlapping orders. • For the cases imaging the whole area of the detector is not possible. • Presenting a statistical generalization method for segmented areas. • Giving a relation between the tracks overlapping probabilities and the total tracks
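The relation between overlap probabilities and the true count can be illustrated with a deliberately simplified model of our own (not the authors' binomial/Poisson formulation): if track centres form a spatial Poisson process with N tracks of radius r on area A, the expected number of tracks that appear isolated is N·exp(-4πr²N/A), which can be inverted on its increasing branch to recover N. All names are ours.

```python
import math

def isolated_count(total, area, radius):
    """Expected number of tracks with no neighbour centre within 2r,
    assuming track centres form a spatial Poisson process (toy model)."""
    c = math.pi * (2 * radius) ** 2 / area
    return total * math.exp(-c * total)

def estimate_total(n_isolated, area, radius):
    """Recover the true track count from the observed isolated-track count
    by bisecting N*exp(-c*N) on its increasing branch N < 1/c; this assumes
    a moderate density, below the turnover of the overlap curve."""
    c = math.pi * (2 * radius) ** 2 / area
    lo, hi = 0.0, 1.0 / c
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if isolated_count(mid, area, radius) < n_isolated:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At high and ultra-high densities the abstract's higher overlapping orders matter, and a single isolated-track equation like this one is no longer sufficient.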

  11. Distributed quantum computing with single photon sources

    International Nuclear Information System (INIS)

    Beige, A.; Kwek, L.C.

    2005-01-01

    Full text: Distributed quantum computing requires the ability to perform nonlocal gate operations between the distant nodes (stationary qubits) of a large network. To achieve this, it has been proposed to interconvert stationary qubits with flying qubits. In contrast to this, we show that distributed quantum computing only requires the ability to encode stationary qubits into flying qubits but not the conversion of flying qubits into stationary qubits. We describe a scheme for the realization of an eventually deterministic controlled phase gate by performing measurements on pairs of flying qubits. Our scheme could be implemented with a linear optics quantum computing setup including sources for the generation of single photons on demand, linear optics elements and photon detectors. In the presence of photon loss and finite detector efficiencies, the scheme could be used to build large cluster states for one way quantum computing with a high fidelity. (author)

  12. CMP reflection imaging via interferometry of distributed subsurface sources

    Science.gov (United States)

    Kim, D.; Brown, L. D.; Quiros, D. A.

    2015-12-01

    The theoretical foundations of recovering body wave energy via seismic interferometry are well established. In practice, however, such recovery remains problematic. Here, synthetic seismograms computed for subsurface sources are used to evaluate the geometrical combinations of realistic ambient source and receiver distributions that result in useful recovery of virtual body waves. This study illustrates how surface receiver arrays that span a limited distribution of sources can be processed to produce virtual shot gathers, which result in CMP gathers that can be effectively stacked with traditional normal moveout corrections. To verify the feasibility of the approach in practice, seismic recordings of 50 aftershocks following the magnitude 5.8 Virginia earthquake of August 2011 have been processed using seismic interferometry to produce seismic reflection images of the crustal structure above and beneath the aftershock cluster. Although monotonic noise proved to be problematic by significantly reducing the number of usable recordings, the edited dataset resulted in stacked seismic sections characterized by coherent reflections that resemble those seen on a nearby conventional reflection survey. In particular, "virtual" reflections at travel times of 3 to 4 seconds suggest reflectors at approximately 7 to 12 km depth that would seem to correspond to imbricate thrust structures formed during the Appalachian orogeny. The approach described here represents a promising new means of body wave imaging of 3D structure that can be applied to a wide array of geologic and energy problems. Unlike other imaging techniques using natural sources, this technique does not require precise source locations or times. It can thus exploit aftershocks too small for conventional analyses. This method can be applied to any type of microseismic cloud, whether tectonic, volcanic or man-made.

  13. Automatic quench compensation for liquid scintillation counting system

    International Nuclear Information System (INIS)

    Nather, R.E.

    1978-01-01

    A method of automatic quench compensation is provided, in which a reference measure of quench is taken on a sample prior to taking a sample count. The measure of quench is then compared with a reference voltage source. This reference has been established to vary in proportion to how the measure of quench varies with the level of a system parameter required to restore at least one isotope spectral energy endpoint substantially to a selected counting window discriminator level, and the comparison determines the amount of adjustment of the system parameter required to restore the endpoint. The system parameter is then adjusted accordingly to restore the relative position of the discriminator windows and the sample spectrum, after which the sample count is taken.

  14. Correlated Sources in Distributed Networks--Data Transmission, Common Information Characterization and Inferencing

    Science.gov (United States)

    Liu, Wei

    2011-01-01

    Correlation is often present among observations in a distributed system. This thesis deals with various design issues when correlated data are observed at distributed terminals, including: communicating correlated sources over interference channels, characterizing the common information among dependent random variables, and testing the presence of…

  15. Distributed Remote Vector Gaussian Source Coding for Wireless Acoustic Sensor Networks

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider the problem of remote vector Gaussian source coding for a wireless acoustic sensor network. Each node receives messages from multiple nodes in the network and decodes these messages using its own measurement of the sound field as side information. The node’s measurement and the estimates of the source resulting from decoding the received messages are then jointly encoded and transmitted to a neighboring node in the network. We show that for this distributed source coding scenario, one can encode a so-called conditional sufficient statistic of the sources instead of jointly......

  16. Theory and application of Cerenkov counting

    Energy Technology Data Exchange (ETDEWEB)

    Ross, H.H.

    1976-01-01

    The production of Cherenkov radiation by charged particles moving through a transparent medium (the Cherenkov generator) is a strictly physical process that is virtually independent of the chemistry of the medium. Thus, it is possible to calculate easily all of the important parameters of the Cherenkov process such as excitation threshold, emission intensity, spectral distribution, directional characteristics, and time response. The results of many of these physical effects are unknown in conventional liquid scintillation spectroscopy and, therefore, many unique assay techniques have been developed as a result of their consideration. This paper discusses the theoretical basis of the Cherenkov process, explores the unusual characteristics of the phenomenon, and demonstrates how these unusual characteristics can be used to develop equally unusual counting methodologies. Particular emphasis is directed toward an analysis of published data (obtained with conventional liquid scintillation instrumentation) that would be difficult or impossible to obtain with other counting techniques. Also discussed is a new type of sample vial that uses an isolated waveshifting system. The device was specifically designed for Cherenkov counting.

  17. Gas source localization and gas distribution mapping with a micro-drone

    International Nuclear Information System (INIS)

    Neumann, Patrick P.

    2013-01-01

    The objective of this Ph.D. thesis is the development and validation of a VTOL-based (Vertical Take Off and Landing) micro-drone for the measurement of gas concentrations, to locate gas emission sources, and to build gas distribution maps. Gas distribution mapping and localization of a static gas source are complex tasks due to the turbulent nature of gas transport under natural conditions and becomes even more challenging when airborne. This is especially so, when using a VTOL-based micro-drone that induces disturbances through its rotors, which heavily affects gas distribution. Besides the adaptation of a micro-drone for gas concentration measurements, a novel method for the determination of the wind vector in real-time is presented. The on-board sensors for the flight control of the micro-drone provide a basis for the wind vector calculation. Furthermore, robot operating software for controlling the micro-drone autonomously is developed and used to validate the algorithms developed within this Ph.D. thesis in simulations and real-world experiments. Three biologically inspired algorithms for locating gas sources are adapted and developed for use with the micro-drone: the surge-cast algorithm (a variant of the silkworm moth algorithm), the zigzag / dung beetle algorithm, and a newly developed algorithm called ''pseudo gradient algorithm''. The latter extracts from two spatially separated measuring positions the information necessary (concentration gradient and mean wind direction) to follow a gas plume to its emission source. The performance of the algorithms is evaluated in simulations and real-world experiments. The distance overhead and the gas source localization success rate are used as main performance criteria for comparing the algorithms. Next, a new method for gas source localization (GSL) based on a particle filter (PF) is presented. Each particle represents a weighted hypothesis of the gas source position. As a first step, the PF-based GSL algorithm

  18. Gas source localization and gas distribution mapping with a micro-drone

    Energy Technology Data Exchange (ETDEWEB)

    Neumann, Patrick P.

    2013-07-01

    The objective of this Ph.D. thesis is the development and validation of a VTOL-based (Vertical Take Off and Landing) micro-drone for the measurement of gas concentrations, to locate gas emission sources, and to build gas distribution maps. Gas distribution mapping and localization of a static gas source are complex tasks due to the turbulent nature of gas transport under natural conditions and becomes even more challenging when airborne. This is especially so, when using a VTOL-based micro-drone that induces disturbances through its rotors, which heavily affects gas distribution. Besides the adaptation of a micro-drone for gas concentration measurements, a novel method for the determination of the wind vector in real-time is presented. The on-board sensors for the flight control of the micro-drone provide a basis for the wind vector calculation. Furthermore, robot operating software for controlling the micro-drone autonomously is developed and used to validate the algorithms developed within this Ph.D. thesis in simulations and real-world experiments. Three biologically inspired algorithms for locating gas sources are adapted and developed for use with the micro-drone: the surge-cast algorithm (a variant of the silkworm moth algorithm), the zigzag / dung beetle algorithm, and a newly developed algorithm called ''pseudo gradient algorithm''. The latter extracts from two spatially separated measuring positions the information necessary (concentration gradient and mean wind direction) to follow a gas plume to its emission source. The performance of the algorithms is evaluated in simulations and real-world experiments. The distance overhead and the gas source localization success rate are used as main performance criteria for comparing the algorithms. Next, a new method for gas source localization (GSL) based on a particle filter (PF) is presented. Each particle represents a weighted hypothesis of the gas source position. As a first step, the PF

  20. Experimental Study for Automatic Colony Counting System Based on Image Processing

    Science.gov (United States)

    Fang, Junlong; Li, Wenzhe; Wang, Guoxin

    Colony counting in many colony experiments is currently performed manually, and it is difficult to execute the method quickly and accurately. A new automatic colony counting system was developed. Making use of image-processing technology, a study was made on the feasibility of objectively distinguishing white bacterial colonies from clear plates according to the RGB color theory. An optimal chromatic value was obtained based upon a large number of experiments on the distribution of the chromatic value. It has been proved that the method greatly improves the accuracy and efficiency of colony counting, and that the counting result is not affected by the inoculation, shape, or size of the colonies. It is revealed that automatic detection of colony quantity using image-processing technology could be an effective approach.
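A toy version of this pipeline can be sketched as brightness thresholding followed by connected-component labelling; the fixed threshold and the plain grey average below stand in for the paper's optimal chromatic value, which is not given in the abstract, so treat them as placeholder assumptions.

```python
from collections import deque

def count_colonies(pixels, threshold=200, min_pixels=1):
    """pixels: 2D grid (list of rows) of (r, g, b) tuples. Counts 4-connected
    bright regions, a toy stand-in for an RGB chromatic-value classifier."""
    h, w = len(pixels), len(pixels[0])
    bright = [[sum(p) / 3 > threshold for p in row] for row in pixels]
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if bright[i][j] and not seen[i][j]:
                # breadth-first flood fill of one candidate colony
                size = 0
                queue = deque([(i, j)])
                seen[i][j] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and bright[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if size >= min_pixels:
                    count += 1
    return count
```

A `min_pixels` floor is a common way to reject specks; a production system would also need touching-colony separation, which simple flood fill cannot provide.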

  1. Ragweed (Ambrosia) pollen source inventory for Austria.

    Science.gov (United States)

    Karrer, G; Skjøth, C A; Šikoparija, B; Smith, M; Berger, U; Essl, F

    2015-08-01

    This study improves the spatial coverage of top-down Ambrosia pollen source inventories for Europe by expanding the methodology to Austria, a country that is challenging in terms of topography and the distribution of ragweed plants. The inventory combines annual ragweed pollen counts from 19 pollen-monitoring stations in Austria (2004-2013), 657 geographical observations of Ambrosia plants, a Digital Elevation Model (DEM), local knowledge of ragweed ecology and CORINE land cover information from the source area. The highest mean annual ragweed pollen concentrations were generally recorded in the East of Austria, where the highest densities of possible growth habitats for Ambrosia were situated. Approximately 99% of all observations of Ambrosia populations were below 745 m. The European infection level varies from 0.1% at Freistadt in Northern Austria to 12.8% at Rosalia in Eastern Austria. More top-down Ambrosia pollen source inventories are required for other parts of Europe. A method for constructing top-down pollen source inventories for invasive ragweed plants in Austria, a country that is challenging in terms of topography and ragweed distribution. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.

  2. Monitoring device for the reactor power distribution

    International Nuclear Information System (INIS)

    Uematsu, Hitoshi; Tsuiki, Makoto

    1982-01-01

    Purpose: To enable accurate monitoring of the power distribution within a short time, as well as independent detection of in-core neutron flux detectors that are operating abnormally due to failures or other causes, so that reliable substitute values can be provided. Constitution: Readings from reactor core status instruments are input to a power distribution calculation device, which calculates the in-core neutron flux density and the power distribution from previously stored physical models. Meanwhile, the readings from the in-core neutron detectors, together with the neutron flux and power distributions calculated by the power distribution calculation device, are input to a BCF calculation device, which compensates the counting errors in the detector readings and the calculation errors in the computed power distribution, thereby calculating the power distribution in the reactor core. Necessary data are supplied to the power distribution calculation device through an input/output device, which also displays the results calculated by the BCF calculation device. (Aizawa, K.)

  3. A New Method for the 2D DOA Estimation of Coherently Distributed Sources

    Directory of Open Access Journals (Sweden)

    Liang Zhou

    2014-03-01

    Full Text Available The purpose of this paper is to develop a new technique for estimating the two-dimensional (2D) directions of arrival (DOAs) of coherently distributed (CD) sources, one which can effectively estimate the central azimuth and central elevation of CD sources at a lower computational cost. Using a special L-shaped array, a new approach for parametric estimation of CD sources is proposed. The proposed method is based on two rotational invariance relations under a small angular approximation, and estimates the two rotational matrices which depict these relations using the propagator technique. The central DOA estimates are then obtained by utilizing the primary diagonal elements of the two rotational matrices. Simulation results indicate that the proposed method exhibits good performance under small angular spread and can be applied to multisource scenarios where different sources may have different angular distribution shapes. Without any peak-finding search or eigendecomposition of the high-dimensional sample covariance matrix, the proposed method has a significantly reduced computational cost compared with existing methods, and is thus beneficial to real-time processing and engineering realization. In addition, our approach is a robust estimator which does not depend on the angular distribution shape of the CD sources.

  4. Perceived loudness of spatially distributed sound sources

    DEFF Research Database (Denmark)

    Song, Woo-keun; Ellermeier, Wolfgang; Minnaar, Pauli

    2005-01-01

    psychoacoustic attributes into account. Therefore, a method for deriving loudness maps was developed in an earlier study [Song, Internoise2004, paper 271]. The present experiment investigates to which extent perceived loudness depends on the distribution of individual sound sources. Three loudspeakers were positioned 1.5 m from the centre of the listener’s head, one straight ahead, and two 10 degrees to the right and left, respectively. Six participants matched the loudness of either one, or two simultaneous sounds (narrow-band noises with 1-kHz, and 3.15-kHz centre frequencies) to a 2-kHz, 60-dB SPL narrow-band noise placed in the frontal loudspeaker. The two sounds were either originating from the central speaker, or from the two offset loudspeakers. It turned out that the subjects perceived the noises to be softer when they were distributed in space. In addition, loudness was calculated from the recordings...

  5. Reconstruction of far-field tsunami amplitude distributions from earthquake sources

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.

    2016-01-01

    The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.
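The tapered Pareto form used above can be made explicit. A hedged sketch under common notation (not necessarily the paper's): survival function S(x) = (x0/x)^beta * exp((x0 - x)/xc) for x >= x0, where beta is the power-law exponent and xc the corner amplitude; the log-likelihood is what maximum likelihood estimation maximizes in the station fits. All function names are ours.

```python
import math

def tapered_pareto_sf(x, beta, x0, xc):
    """Survival function: power-law decay with exponent beta, exponentially
    tapered above the corner amplitude xc (defined for x >= x0)."""
    return (x0 / x) ** beta * math.exp((x0 - x) / xc)

def tapered_pareto_pdf(x, beta, x0, xc):
    """Density, obtained by differentiating -S(x) with respect to x."""
    return (beta / x + 1.0 / xc) * tapered_pareto_sf(x, beta, x0, xc)

def tapered_pareto_loglik(data, beta, x0, xc):
    """Log-likelihood of observed amplitudes, the objective one would hand
    to a numerical optimizer for the maximum likelihood fits."""
    return sum(math.log(tapered_pareto_pdf(x, beta, x0, xc)) for x in data)
```

Step (3) of the reconstruction procedure amounts to pushing a seismic moment distribution of this family through the source-station scaling relation, which preserves the tapered Pareto form up to reparameterization.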

  6. Topology Counts: Force Distributions in Circular Spring Networks

    Science.gov (United States)

    Heidemann, Knut M.; Sageman-Furnas, Andrew O.; Sharma, Abhinav; Rehfeldt, Florian; Schmidt, Christoph F.; Wardetzky, Max

    2018-02-01

    Filamentous polymer networks govern the mechanical properties of many biological materials. Force distributions within these networks are typically highly inhomogeneous, and, although the importance of force distributions for structural properties is well recognized, they are far from being understood quantitatively. Using a combination of probabilistic and graph-theoretical techniques, we derive force distributions in a model system consisting of ensembles of random linear spring networks on a circle. We show that characteristic quantities, such as the mean and variance of the force supported by individual springs, can be derived explicitly in terms of only two parameters: (i) average connectivity and (ii) number of nodes. Our analysis shows that a classical mean-field approach fails to capture these characteristic quantities correctly. In contrast, we demonstrate that network topology is a crucial determinant of force distributions in an elastic spring network. Our results for 1D linear spring networks readily generalize to arbitrary dimensions.

  7. Topology Counts: Force Distributions in Circular Spring Networks.

    Science.gov (United States)

    Heidemann, Knut M; Sageman-Furnas, Andrew O; Sharma, Abhinav; Rehfeldt, Florian; Schmidt, Christoph F; Wardetzky, Max

    2018-02-09

    Filamentous polymer networks govern the mechanical properties of many biological materials. Force distributions within these networks are typically highly inhomogeneous, and, although the importance of force distributions for structural properties is well recognized, they are far from being understood quantitatively. Using a combination of probabilistic and graph-theoretical techniques, we derive force distributions in a model system consisting of ensembles of random linear spring networks on a circle. We show that characteristic quantities, such as the mean and variance of the force supported by individual springs, can be derived explicitly in terms of only two parameters: (i) average connectivity and (ii) number of nodes. Our analysis shows that a classical mean-field approach fails to capture these characteristic quantities correctly. In contrast, we demonstrate that network topology is a crucial determinant of force distributions in an elastic spring network. Our results for 1D linear spring networks readily generalize to arbitrary dimensions.

  8. Estimation of atomic interaction parameters by photon counting

    DEFF Research Database (Denmark)

    Kiilerich, Alexander Holm; Mølmer, Klaus

    2014-01-01

    Detection of radiation signals is at the heart of precision metrology and sensing. In this article we show how the fluctuations in photon counting signals can be exploited to optimally extract information about the physical parameters that govern the dynamics of the emitter. For a simple two......-level emitter subject to photon counting, we show that the Fisher information and the Cram\\'er- Rao sensitivity bound based on the full detection record can be evaluated from the waiting time distribution in the fluorescence signal which can, in turn, be calculated for both perfect and imperfect detectors...

  9. Measurement-device-independent quantum key distribution with correlated source-light-intensity errors

    Science.gov (United States)

    Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin

    2018-04-01

    We present an analysis for measurement-device-independent quantum key distribution with correlated source-light-intensity errors. Numerical results show that the results here can greatly improve the key rate compared with prior results, especially under large intensity fluctuations and channel attenuation, if the intensity fluctuations of different sources are correlated.

  10. Determination of confidence limits for experiments with low numbers of counts

    International Nuclear Information System (INIS)

    Kraft, R.P.; Burrows, D.N.; Nousek, J.A.

    1991-01-01

    Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference. 12 refs
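    The Bayesian construction described above can be sketched numerically. The snippet below integrates the flat-prior posterior for a Poisson source with a known mean background (the Kraft, Burrows & Nousek form) to obtain an upper limit; the example counts and background are illustrative, not taken from the paper's tables.

```python
import math

def bayesian_upper_limit(n_obs, b, cl=0.95, s_max=50.0, ds=5e-4):
    """Bayesian upper limit on a Poisson source strength s, given n_obs
    total observed counts and a known mean background b, using the
    flat-prior posterior of Kraft, Burrows & Nousek (1991)."""
    # Posterior: f(s) = exp(-(s+b)) (s+b)^n / n! / norm,  s >= 0,
    # with norm = sum_{k=0}^{n} exp(-b) b^k / k!
    norm = sum(math.exp(-b) * b**k / math.factorial(k) for k in range(n_obs + 1))
    nfact = math.factorial(n_obs)
    cum, s = 0.0, 0.0
    while s < s_max:
        f = math.exp(-(s + b)) * (s + b) ** n_obs / nfact / norm
        cum += f * ds          # simple rectangle rule; ds controls accuracy
        if cum >= cl:
            return s
        s += ds
    return s_max

# Example: 3 total counts with an expected background of 1.2
print(round(bayesian_upper_limit(3, 1.2), 2))
```

Note that as the background b grows, the upper limit on the source strength tightens, which is exactly the small-count regime where the Bayesian treatment is preferred.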

  11. Boosting up quantum key distribution by learning statistics of practical single-photon sources

    International Nuclear Information System (INIS)

    Adachi, Yoritoshi; Yamamoto, Takashi; Koashi, Masato; Imoto, Nobuyuki

    2009-01-01

    We propose a simple quantum-key-distribution (QKD) scheme for practical single-photon sources (SPSs), which works even with a moderate suppression of the second-order correlation g(2) of the source. The scheme utilizes a passive preparation of a decoy state by monitoring a fraction of the signal via an additional beam splitter and a detector at the sender's side to monitor photon-number splitting attacks. We show that the achievable distance increases with the precision with which the sub-Poissonian tendency is confirmed in the higher photon-number distribution of the source, rather than with the actual suppression of multiphoton emission events. We present an example of the secure key generation rate in the case of a poor SPS with g(2)=0.19, in which no secure key is produced with the conventional QKD scheme, and show that learning the photon-number distribution up to several photon numbers is sufficient for achieving almost the same distance as that of an ideal SPS.
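    The role of the photon-number distribution can be illustrated with a short calculation: g(2) = Σ n(n-1)p_n / (Σ n p_n)². The sub-Poissonian example numbers below are hypothetical, chosen only to show g(2) < 1; they are not the paper's values.

```python
import math

def g2_from_distribution(p):
    """Second-order correlation g(2)(0) computed from a photon-number
    distribution p, where p[n] is the probability of emitting n photons."""
    mean = sum(n * pn for n, pn in enumerate(p))
    pairs = sum(n * (n - 1) * pn for n, pn in enumerate(p))
    return pairs / mean**2

# A Poissonian (laser-like) source gives g(2) = 1:
mu = 0.5
poisson = [math.exp(-mu) * mu**n / math.factorial(n) for n in range(30)]
print(round(g2_from_distribution(poisson), 3))

# A sub-Poissonian source with suppressed multiphoton emission gives
# g(2) < 1 (illustrative numbers, not from the paper):
sps = [0.10, 0.85, 0.05]   # p0, p1, p2
print(round(g2_from_distribution(sps), 3))
```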

  12. Calibration of the Accuscan II In Vivo System for I-125 Thyroid Counting

    Energy Technology Data Exchange (ETDEWEB)

    Ovard R. Perry; David L. Georgeson

    2011-07-01

    This report describes the March 2011 calibration of the Accuscan II HPGe In Vivo system for I-125 thyroid counting. The source used for the calibration was a DOE-manufactured Am-241/Eu-152 source contained in a 22 ml vial (BEA Am-241/Eu-152 RMC II-1), with energies from 26 keV to 344 keV. The center of the detector housing was positioned 64 inches from the vault floor; this position places the approximate center line of the detector housing at the center line of the source in the phantom thyroid tube. The energy and efficiency calibrations were performed using an RMC II phantom (Appendix J). Performance testing was conducted using source BEA Am-241/Eu-152 RMC II-1, and validation testing was performed using an I-125 source in a 30 ml vial (I-125 BEA Thyroid 002) and an ANSI N44.3 phantom (Appendix I). This report includes an overview introduction and records for the energy/FWHM and efficiency calibration, including performance verification and validation counting. The Accuscan II system was successfully calibrated for counting the thyroid for I-125 and verified in accordance with ANSI/HPS N13.30-1996 criteria.

  13. Distribution and Source Identification of Pb Contamination in industrial soil

    Science.gov (United States)

    Ko, M. S.

    2017-12-01

    INTRODUCTION Lead (Pb) is a toxic element that induces neurotoxic effects in humans, because Pb competes with Ca in the nervous system. Lead is classified as a chalcophile element, and galena (PbS) is its major mineral. Although Pb is not an abundant element in nature, various anthropogenic sources have enhanced Pb enrichment in the environment since the Industrial Revolution. Representative anthropogenic sources are batteries, paint, mining, smelting, and combustion of fossil fuels. Isotope analysis is widely used to identify Pb contamination sources. Lead has four natural stable isotopes: 208Pb, 207Pb, 206Pb, and 204Pb. Because Pb isotope ratios are maintained during physical and chemical fractionation, variations in Pb isotope abundances and relative ratios can indicate a particular contamination source. In this study, the distributions and isotope ratios of Pb in industrial soil were used to identify the Pb contamination source and its dispersion pathways. MATERIALS AND METHODS Soil samples were collected at depths of 0-6 m in an industrial area in Korea. The collected samples were dried and sieved below 2 mm. Soil pH measurement, aqua-regia digestion, and TCLP were carried out on the sieved samples, and isotope analysis was performed to determine the Pb isotope abundances. RESULTS AND DISCUSSION The study area is land developed for industrial facilities. It was forest in 1980, and satellite images show the alteration of land use over time; these changes imply that contaminated soil may have been brought in from outside. The Pb concentrations in core samples were higher in the deeper soil than in the topsoil. In particular, the 4 m sample showed the highest Pb concentration, approximately 1500 mg/kg, indicating that a distinct Pb source existed at 4 m depth. CONCLUSIONS This study investigated the distribution and source identification of Pb in industrial soil. The land use and Pb

  14. Determining profile of dose distribution for Pd-103 brachytherapy source

    International Nuclear Information System (INIS)

    Berkay, Camgoz; Mehmet, N. Kumru; Gultekin, Yegin

    2006-01-01

    Full text: Brachytherapy is a form of radiotherapy for cancer treatment in which cancerous cells are destroyed by radiation. The general concept of the treatment is to place the radioactive source inside or next to the cancerous area of the affected tissue. Because experiments on living tissue are hazardous, brachytherapy sources are generally studied theoretically using computer simulation, typically with the Monte Carlo method, which is based on random number generation. Pd-103 is a low-dose-rate (LDR) source. The radioactive material is contained in a titanium cylinder 3 mm long and 0.25 mm in radius; the Pd-103 is held in two parts within the cylinder. It is impossible to investigate experimentally the differential effects arising from the two parts, because the source dimensions are small compared with the measurement distances, so only simulation is possible. Dosimetric studies aim to determine the radial and angular absorbed dose distribution in tissue. Radiation studies pose hazards to the scientists and other people involved, and when the hazard exceeds recommended limits or physical conditions are unsuitable (long working times, uneconomical experiments, inadequate sensitivity of materials, etc.), it is unavoidable to simulate the work before putting scientific methods into practice. In the medical area, the use of radiation for cancer treatment requires computational work; some computational studies are routine in clinics, while others serve scientific development. In brachytherapy studies there are significant differences between experimental measurements and theoretical (computer-based) output data, with experimental uncertainties larger than those of the simulations. In the design of a new brachytherapy source it is therefore important to consider detailed

  15. Development of unfolding method to obtain pin-wise source strength distribution from PWR spent fuel assembly measurement

    International Nuclear Information System (INIS)

    Sitompul, Yos Panagaman; Shin, Hee-Sung; Park, Se-Hwan; Oh, Jong Myeong; Seo, Hee; Kim, Ho Dong

    2013-01-01

    An unfolding method has been developed to obtain a pin-wise source strength distribution of a 14 × 14 pressurized water reactor (PWR) spent fuel assembly. Sixteen gamma dose rates measured at the 16 control rod guide tubes of an assembly are unfolded to the 179 pin-wise source strengths of the assembly. The method iteratively calculates and optimizes five coefficients of the quadratic fitting function for the X-Y source strength distribution. The pin-wise source strengths are obtained at the sixth iteration, with a maximum difference between two sequential iterations of about 0.2%. The relative distribution of pin-wise source strength from the unfolding is checked by comparison with the design code (Westinghouse APA code); the result shows that the relative distributions from the unfolding and the design code are consistent within a 5% difference. The absolute value of the pin-wise source strength is also checked by reproducing the dose rates at the measurement points; the result shows that the pin-wise source strengths from the unfolding reproduce the dose rates within a 2% difference. (author)
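    A minimal sketch of the fitting step: the record says five coefficients of a quadratic function for the X-Y source-strength distribution are optimized, which reduces to a small least-squares problem over the 16 guide-tube measurements. The functional form a + bx + cy + dx² + ey², the tube/pin positions, and all numbers below are assumptions for illustration, not the paper's data.

```python
import numpy as np

# Hypothetical guide-tube positions (x, y) and "measured" dose rates
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(16, 2))
a_true = np.array([5.0, 0.3, -0.2, -1.0, -0.8])        # a, b, c, d, e
X = np.column_stack([np.ones(16), xy[:, 0], xy[:, 1], xy[:, 0]**2, xy[:, 1]**2])
dose = X @ a_true + rng.normal(0, 0.01, 16)            # small measurement noise

# Least-squares estimate of the five quadratic coefficients
coef, *_ = np.linalg.lstsq(X, dose, rcond=None)

# Evaluate the fitted surface at (hypothetical) positions of the 179 pins
pins = rng.uniform(-1, 1, size=(179, 2))
P = np.column_stack([np.ones(179), pins[:, 0], pins[:, 1], pins[:, 0]**2, pins[:, 1]**2])
strength = P @ coef
print(np.round(coef, 2))
```

In the paper the fit is repeated iteratively until the pin-wise strengths converge (about 0.2% between iterations); the single linear solve above is just the inner step.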

  16. Sources and distribution of anthropogenic radionuclides in different marine environments

    International Nuclear Information System (INIS)

    Holm, E.

    1997-01-01

    Knowledge of the distribution in time and space of radiologically important radionuclides from different sources in different marine environments is important for assessing the dose commitment following controlled or accidental releases and for detecting possible new sources. Present sources, from nuclear explosion tests, releases from nuclear facilities, and the Chernobyl accident, provide a tool for such studies; the different sources can be distinguished by their different isotopic and radionuclide compositions. Results show that radiocaesium behaves rather conservatively in the South and North Atlantic, while plutonium has a residence time of about 8 years. On the other hand, enhanced concentrations of plutonium are found in surface waters in Arctic regions, where vertical mixing is small and ice formation plays an important role. Significantly increased concentrations of plutonium are also found below the oxic layer in anoxic basins, due to geochemical concentration. (author)

  17. GIS Based Distributed Runoff Predictions in Variable Source Area Watersheds Employing the SCS-Curve Number

    Science.gov (United States)

    Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.

    2003-04-01

    Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed, within an integrated GIS modeling environment, a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and the spatial extent of saturated areas, and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated in an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provides a physically based method that gives realistic results for watersheds with VSA hydrology.
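    The SCS-CN runoff equation underlying the CN-VSA method is standard and can be written compactly (depths in inches, with the usual initial-abstraction ratio of 0.2; this is the textbook form, not the paper's distributed variant):

```python
def scs_runoff(p_in, cn, ia_ratio=0.2):
    """SCS Curve Number storm runoff depth Q (inches) for rainfall
    p_in (inches) and curve number cn: Q = (P - Ia)^2 / (P - Ia + S),
    with potential maximum retention S = 1000/CN - 10 and Ia = 0.2 S."""
    s = 1000.0 / cn - 10.0       # potential maximum retention
    ia = ia_ratio * s            # initial abstraction
    if p_in <= ia:
        return 0.0               # all rainfall abstracted, no runoff
    return (p_in - ia) ** 2 / (p_in - ia + s)

print(round(scs_runoff(3.0, 75), 2))   # → 0.96
```

Higher curve numbers (less pervious or wetter conditions) give more runoff for the same storm, which is the quantity the CN-VSA method then redistributes spatially with a topographic index.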

  18. A simple method for calibration of Lucas scintillation cell counting system for measurement of 226Ra and 222Rn

    Directory of Open Access Journals (Sweden)

    N.K. Sethy

    2014-10-01

    Full Text Available A known quantity of radium from a high-grade ore solution was chemically separated and carefully placed inside the cavity of a Lucas cell (LC). The 222Rn gradually builds up and attains secular equilibrium with its parent 226Ra, giving a steady count rate after a suitable build-up period (>25 days). This secondary source was used to calibrate the radon counting system. The method was validated by comparison with identical measurements using an AlphaGuard Aquakit. The radon counting system was used to evaluate dissolved radon in ground water samples by gross alpha counting in the LC, measuring the collected radon after a delay of >180 min. Simultaneous measurements were carried out with the AlphaGuard Aquakit under identical conditions; the AlphaGuard measures dissolved radon from the water sample by constant aeration in a closed circuit without any delay. The two methods agree with a correlation coefficient of >0.9, validating the calibration of the Lucas scintillation cell counting system with the designed encapsulated source. This study provides an alternative for calibration in the absence of the costly radon sources available on the market.
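    The >25-day build-up criterion follows directly from the 222Rn ingrowth law A(t) = A_eq (1 − e^(−λt)) with a half-life of about 3.82 d; a quick check:

```python
import math

T_HALF_RN222 = 3.8235                   # 222Rn half-life, days
lam = math.log(2) / T_HALF_RN222        # decay constant, 1/days

def ingrowth_fraction(t_days):
    """Fraction of the secular-equilibrium 222Rn activity reached after
    t_days of build-up from a pure 226Ra parent."""
    return 1.0 - math.exp(-lam * t_days)

# After the recommended >25 d wait the cell is within ~1% of equilibrium
print(round(ingrowth_fraction(25), 4))
```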

  19. Coded aperture imaging of alpha source spatial distribution

    International Nuclear Information System (INIS)

    Talebitaher, Alireza; Shutler, Paul M.E.; Springham, Stuart V.; Rawat, Rajdeep S.; Lee, Paul

    2012-01-01

    The Coded Aperture Imaging (CAI) technique has been applied with CR-39 nuclear track detectors to image alpha particle source spatial distributions. The experimental setup comprised a 226Ra source of alpha particles, a laser-machined CAI mask, and CR-39 detectors, arranged inside a vacuum enclosure. Three different alpha particle source shapes were synthesized by using a linear translator to move the 226Ra source within the vacuum enclosure. The coded mask pattern used is based on a Singer Cyclic Difference Set, with 400 pixels and 57 open square holes (representing ρ = 1/7 = 14.3% open fraction). After etching of the CR-39 detectors, the area, circularity, mean optical density, and positions of all candidate tracks were measured by an automated scanning system. Appropriate criteria were used to select alpha particle tracks, and a decoding algorithm applied to the (x, y) data produced the decoded image of the source. Signal to Noise Ratio (SNR) values obtained for alpha particle CAI images were found to be substantially better than those for corresponding pinhole images, although the CAI-SNR values were below the predictions of theoretical formulae. Monte Carlo simulations of CAI and pinhole imaging were performed in order to validate the theoretical SNR formulae and also our CAI decoding algorithm. There was found to be good agreement between the theoretical formulae and SNR values obtained from simulations. Possible reasons for the lower SNR obtained for the experimental CAI study are discussed.
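    The encode/decode cycle of CAI can be sketched with circular convolution. The mask below is a random binary array with the record's ~1/7 open fraction rather than the actual Singer difference-set mask, and the decoding array is the standard balanced-correlation choice; both are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
mask = (rng.random((n, n)) < 1 / 7).astype(float)   # ~14.3% open fraction

# Point-like "alpha source" object at pixel (5, 12)
obj = np.zeros((n, n))
obj[5, 12] = 1.0

# Detector image: circular convolution of the object with the mask
img = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(mask)))

# Balanced decoding array G: +1 at open pixels, -rho/(1-rho) at closed
rho = mask.mean()
G = np.where(mask == 1, 1.0, -rho / (1 - rho))

# Decode by circular cross-correlation of the image with G
dec = np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(G))))
peak = np.unravel_index(np.argmax(dec), dec.shape)
print(peak)   # recovered source position
```

For a single point source the correlation peak lands at the true position, with sidelobe noise elsewhere; real difference-set masks are chosen precisely to make those sidelobes flat.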

  20. Investigation of internal conversion electron lines by track counting technique

    CERN Document Server

    Islamov, T A; Kambarova, N T; Muminov, T M; Lebedev, N A; Solnyshkin, A A; Aleshin, Yu D; Kolesnikov, V V; Silaev, V I; Niipf-Tashgu, T

    2001-01-01

    The methodology of counting the tracks of internal conversion electrons (ICE) in nuclear photoemulsion is described. Results of counting the ICE tracks on photoplates for 161Ho, 163Tm, 166Tm, and 135Ce are presented. The results were obtained with an MBI-9 microscope and the MAS-1 automated facility. ICE track counting on photoplates provides substantially higher sensitivity than the photometry method, making it possible to carry out measurements with sources up to 1000 times weaker.

  1. Fiber optic distributed temperature sensing for fire source localization

    Science.gov (United States)

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Sigrist, Markus W.; Li, Jun; Dong, Fengzhong

    2017-08-01

    A method for localizing a fire source based on a distributed temperature sensor system is proposed. Two sections of optical fibers were placed orthogonally to each other as the sensing elements. A tray of alcohol was lit to act as a fire outbreak in a cabinet with an uneven ceiling to simulate a real scene of fire. Experiments were carried out to demonstrate the feasibility of the method. Rather large fluctuations and systematic errors with respect to predicting the exact room coordinates of the fire source caused by the uneven ceiling were observed. Two mathematical methods (smoothing recorded temperature curves and finding temperature peak positions) to improve the prediction accuracy are presented, and the experimental results indicate that the fluctuation ranges and systematic errors are significantly reduced. The proposed scheme is simple and appears reliable enough to locate a fire source in large spaces.
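    The localization idea, smoothing each fiber's temperature trace and taking the peak position, can be sketched with synthetic data. The plume shape, noise level, ambient temperature, and fiber coordinates below are invented for illustration and are not the paper's measurements.

```python
import numpy as np

def locate_fire(temp_x, temp_y, window=5):
    """Estimate fire (x, y) position indices from temperature traces
    along two orthogonal fibers: smooth each trace with a moving
    average, then take the peak position (the paper's two correction
    methods: smoothing and peak finding)."""
    k = np.ones(window) / window
    sx = np.convolve(temp_x, k, mode="same")
    sy = np.convolve(temp_y, k, mode="same")
    return int(np.argmax(sx)), int(np.argmax(sy))

# Synthetic traces: 20 C ambient, a hot plume near x=40 and y=25, noise
rng = np.random.default_rng(2)
pos = np.arange(100)
tx = 20 + 15 * np.exp(-((pos - 40) / 6.0) ** 2) + rng.normal(0, 0.8, 100)
ty = 20 + 15 * np.exp(-((pos - 25) / 6.0) ** 2) + rng.normal(0, 0.8, 100)
print(locate_fire(tx, ty))
```

Smoothing before peak finding suppresses exactly the kind of noise-driven fluctuation the authors report from the uneven ceiling.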

  2. Bayesian Kernel Mixtures for Counts.

    Science.gov (United States)

    Canale, Antonio; Dunson, David B

    2011-12-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.
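    The rounding idea is easy to demonstrate: a rounded Gaussian kernel can produce counts whose variance is far below the mean, which a mixture of Poissons cannot. A minimal sketch (parameters arbitrary, not from the article):

```python
import numpy as np

def rounded_gaussian_counts(mu, sigma, size, rng):
    """Draw counts from a rounded-Gaussian kernel: sample a latent
    normal variable and round it to the nearest non-negative integer."""
    z = rng.normal(mu, sigma, size)
    return np.maximum(np.rint(z), 0).astype(int)

rng = np.random.default_rng(3)
y = rounded_gaussian_counts(mu=10.0, sigma=1.0, size=100_000, rng=rng)
# Unlike a Poisson with the same mean (variance = 10), the variance
# here is close to sigma^2, i.e. strongly underdispersed:
print(round(y.mean(), 2), round(y.var(), 2))
```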

  3. Spatial distribution of saline water and possible sources of intrusion ...

    African Journals Online (AJOL)

    The spatial distribution of saline water and possible sources of intrusion into Lekki lagoon and transitional effects on the lacustrine ichthyofaunal characteristics were studied during March, 2006 and February, 2008. The water quality analysis indicated that, salinity has drastically increased recently in the lagoon (0.007 to ...

  4. Count-to-count time interval distribution analysis in a fast reactor; Estudio de la distribucion de intervalos de tiempo entre detecciones consecutivas de neutrones en un reactor rapido

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Navarro Gomez, A

    1973-07-01

    The most important kinetic parameters have been measured at the zero-power fast reactor CORAL-I by means of reactor noise analysis in the time domain, using measurements of count-to-count time intervals. (Author) 69 refs.
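    For a non-multiplying (pure Poisson) source, the count-to-count intervals are exponentially distributed with mean and standard deviation both equal to 1/rate; departures from this baseline are what carry the kinetic information in reactor noise analysis. A baseline sketch with synthetic detection times (rate and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
rate = 100.0                                        # counts per second
times = np.cumsum(rng.exponential(1 / rate, 50_000))

# Count-to-count time intervals between consecutive detections
intervals = np.diff(times)

# Pure Poisson source: mean interval = 1/rate and std = mean
print(round(intervals.mean() * rate, 3), round(intervals.std() * rate, 3))
```

In a multiplying system, correlated fission chains produce an excess of short intervals relative to this exponential baseline.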

  5. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local universe

    DEFF Research Database (Denmark)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene

    2017-01-01

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between `warm' spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2) we demonstrate that sources with local density exceeding $10^{-6} \, \text{Mpc}^{-3}$ and neutrino luminosity $L_{\

  6. An efficient central DOA tracking algorithm for multiple incoherently distributed sources

    Science.gov (United States)

    Hassen, Sonia Ben; Samet, Abdelaziz

    2015-12-01

    In this paper, we develop a new tracking method for the direction of arrival (DOA) parameters assuming multiple incoherently distributed (ID) sources. The new approach is based on a simple covariance fitting optimization technique exploiting the central and noncentral moments of the source angular power densities to estimate the central DOAs. The current estimates are treated as measurements provided to the Kalman filter that models the dynamic property of directional changes for the moving sources. The covariance-fitting-based algorithm and the Kalman filtering theory are then combined to formulate an adaptive tracking algorithm. Our algorithm is compared to the fast approximated power iteration-total least square-estimation of signal parameters via rotational invariance technique (FAPI-TLS-ESPRIT) algorithm, which uses the TLS-ESPRIT method and subspace updating via the FAPI algorithm. We show that the proposed algorithm offers excellent DOA tracking performance and outperforms the FAPI-TLS-ESPRIT method, especially at low signal-to-noise ratio (SNR) values. The performances of both methods improve as the SNR increases, more prominently so for FAPI-TLS-ESPRIT, but degrade as the number of sources increases. We also show that our method depends on the form of the angular distribution function when tracking the central DOAs, and that the more widely the sources are spaced, the more accurately the proposed method tracks the DOAs.
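    The Kalman-filtering stage can be sketched independently of the covariance-fitting estimator: below, a constant-velocity filter tracks one slowly moving central DOA, with the covariance-fitting measurements replaced by synthetic noisy angles. All parameter values (noise levels, motion model, trajectory) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Constant-velocity Kalman filter for one central DOA (degrees)
dt, q, r = 1.0, 0.01, 4.0
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition: [angle, rate]
H = np.array([[1.0, 0.0]])                 # we measure the angle only
Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
R = np.array([[r]])

rng = np.random.default_rng(5)
true_doa = 20 + 0.5 * np.arange(100)       # source moving 0.5 deg/step
meas = true_doa + rng.normal(0, np.sqrt(r), 100)

x = np.array([meas[0], 0.0])               # initial state
P = np.eye(2) * 10.0
est = []
for z in meas:
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the new (noisy) DOA measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0])

err = np.abs(np.array(est[20:]) - true_doa[20:])
print(round(err.mean(), 2))   # filtered error, well below the raw noise
```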

  7. Exact distribution of a pattern in a set of random sequences generated by a Markov source: applications to biological data

    Directory of Open Access Journals (Sweden)

    Regad Leslie

    2010-01-01

    Full Text Available Abstract Background In bioinformatics it is common to search for a pattern of interest in a potentially large set of rather short sequences (upstream gene regions, proteins, exons, etc.). Although many methodological approaches allow practitioners to compute the distribution of a pattern count in a random sequence generated by a Markov source, no specific developments have taken into account the counting of occurrences in a set of independent sequences. We aim to address this problem by deriving efficient approaches and algorithms to perform these computations both for low and high complexity patterns in the framework of homogeneous or heterogeneous Markov models. Results The latest advances in the field allowed us to use a technique of optimal Markov chain embedding based on deterministic finite automata to introduce three innovative algorithms. Algorithm 1 is the only one able to deal with heterogeneous models. It also permits avoiding any product of convolution of the pattern distribution in individual sequences. When working with homogeneous models, Algorithm 2 yields a dramatic reduction in complexity by taking advantage of previous computations to obtain moment generating functions efficiently. In the particular case of low or moderate complexity patterns, Algorithm 3 exploits power computation and binary decomposition to further reduce the time complexity to a logarithmic scale. All these algorithms and their relative interest in comparison with existing ones were then tested and discussed on a toy example and three biological data sets: structural patterns in protein loop structures, PROSITE signatures in a bacterial proteome, and transcription factors in upstream gene regions. On these data sets, we also compared our exact approaches to the tempting approximation that consists of concatenating the sequences in the data set into a single sequence.
Conclusions Our algorithms prove to be effective and able to handle real data sets with
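    The Markov chain embedding idea can be sketched for the simplest case, an i.i.d. (order-0) source and a single sequence: track the joint distribution of (pattern-automaton state, occurrence count) letter by letter. The paper's algorithms additionally handle higher-order and heterogeneous Markov models and sets of sequences; the following is only a minimal illustration of the embedding.

```python
import numpy as np

def pattern_count_distribution(pattern, pi, n, max_count=20):
    """Exact distribution of the number of overlapping occurrences of
    `pattern` in a length-n random sequence from an i.i.d. (order-0)
    source with letter probabilities `pi`, via automaton embedding."""
    m = len(pattern)

    def next_state(state, ch):
        # KMP-style transition; after a full match, restart from the
        # longest proper border so overlapping occurrences are counted.
        s = pattern[:state] + ch
        hit = (s == pattern)
        if hit:
            s = s[1:]
        while s and not pattern.startswith(s):
            s = s[1:]
        return len(s), hit

    # prob[state, c] = P(automaton in `state` with c occurrences so far)
    prob = np.zeros((m, max_count + 1))
    prob[0, 0] = 1.0
    for _ in range(n):
        new = np.zeros_like(prob)
        for state in range(m):
            for ch, p in pi.items():
                t, hit = next_state(state, ch)
                if hit:
                    new[t, 1:] += p * prob[state, :-1]
                else:
                    new[t, :] += p * prob[state, :]
        prob = new
    return prob.sum(axis=0)          # marginalize out the automaton state

pi = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}
dist = pattern_count_distribution("AA", pi, n=3)
print(dist[:3])   # P(0 occurrences), P(1), P(2)
```

For "AA" in a uniform length-3 sequence the exact answer is (57/64, 6/64, 1/64), which the embedding reproduces; the mean count 2·(1/4)² = 1/8 matches the usual (n − m + 1)p^m formula.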

  8. Tower counts

    Science.gov (United States)

    Woody, Carol Ann; Johnson, D.H.; Shrier, Brianna M.; O'Neal, Jennifer S.; Knutzen, John A.; Augerot, Xanthippe; O'Neal, Thomas A.; Pearsons, Todd N.

    2007-01-01

    Counting towers provide an accurate, low-cost, low-maintenance, low-technology, and easily mobilized escapement estimation program compared to other methods (e.g., weirs, hydroacoustics, mark-recapture, and aerial surveys) (Thompson 1962; Siebel 1967; Cousens et al. 1982; Symons and Waldichuk 1984; Anderson 2000; Alaska Department of Fish and Game 2003). Counting tower data have been found to be consistent with digital video counts (Edwards 2005). Counting towers do not interfere with natural fish migration patterns, nor are fish handled or stressed; however, their use is generally limited to clear rivers that meet specific site selection criteria. The data provided by counting tower sampling allow fishery managers to determine reproductive population size, estimate total return (escapement + catch) and its uncertainty, evaluate population productivity and trends, set harvest rates, determine spawning escapement goals, and forecast future returns (Alaska Department of Fish and Game 1974-2000 and 1975-2004). The number of spawning fish is determined by subtracting subsistence catch, sport-caught fish, and prespawn mortality from the total estimated escapement. The methods outlined in this protocol for tower counts can be used to provide reasonable estimates (±6-10%) of reproductive salmon population size and run timing in clear rivers.

  9. Flows and Stratification of an Enclosure Containing Both Localised and Vertically Distributed Sources of Buoyancy

    Science.gov (United States)

    Partridge, Jamie; Linden, Paul

    2013-11-01

    We examine the flows and stratification established in a naturally ventilated enclosure containing both a localised and vertically distributed source of buoyancy. The enclosure is ventilated through upper and lower openings which connect the space to an external ambient. Small scale laboratory experiments were carried out with water as the working medium and buoyancy being driven directly by temperature differences. A point source plume gave localised heating while the distributed source was driven by a controllable heater mat located in the side wall of the enclosure. The transient temperatures, as well as steady state temperature profiles, were recorded and are reported here. The temperature profiles inside the enclosure were found to be dependent on the effective opening area A*, a combination of the upper and lower openings, and the ratio of buoyancy fluxes from the distributed and localised source Ψ =Bw/Bp . Industrial CASE award with ARUP.

  10. Panchromatic spectral energy distributions of Herschel sources

    Science.gov (United States)

    Berta, S.; Lutz, D.; Santini, P.; Wuyts, S.; Rosario, D.; Brisbin, D.; Cooray, A.; Franceschini, A.; Gruppioni, C.; Hatziminaoglou, E.; Hwang, H. S.; Le Floc'h, E.; Magnelli, B.; Nordon, R.; Oliver, S.; Page, M. J.; Popesso, P.; Pozzetti, L.; Pozzi, F.; Riguccini, L.; Rodighiero, G.; Roseboom, I.; Scott, D.; Symeonidis, M.; Valtchanov, I.; Viero, M.; Wang, L.

    2013-03-01

    Combining far-infrared Herschel photometry from the PACS Evolutionary Probe (PEP) and Herschel Multi-tiered Extragalactic Survey (HerMES) guaranteed time programs with ancillary datasets in the GOODS-N, GOODS-S, and COSMOS fields, it is possible to sample the 8-500 μm spectral energy distributions (SEDs) of galaxies with at least 7-10 bands. Extending to the UV, optical, and near-infrared, the number of bands increases up to 43. We reproduce the distribution of galaxies in a carefully selected restframe ten-color space, based on this rich dataset, using a superposition of multivariate Gaussian modes. We use this model to classify galaxies and build median SEDs of each class, which are then fitted with a modified version of the magphys code that combines stellar light, emission from dust heated by stars, and a possible warm dust contribution heated by an active galactic nucleus (AGN). The color distribution of galaxies in each of the considered fields can be well described with the combination of 6-9 classes, spanning a large range of far- to near-infrared luminosity ratios, as well as different strengths of the AGN contribution to bolometric luminosities. The defined Gaussian grouping is used to identify rare or odd sources. The zoology of outliers includes Herschel-detected ellipticals, very blue z ~ 1 Ly-break galaxies, quiescent spirals, and torus-dominated AGN with star formation. Out of these groups and outliers, a new template library is assembled, consisting of 32 SEDs describing the intrinsic scatter in the restframe UV-to-submm colors of infrared galaxies. This library is tested against L(IR) estimates with and without Herschel data included, and compared to eight other popular methods often adopted in the literature. When implementing Herschel photometry, these approaches produce L(IR) values consistent with each other within a median absolute deviation of 10-20%, the scatter being dominated more by fine tuning of the codes, rather than by the choice of

  11. Production, Distribution, and Applications of Californium-252 Neutron Sources

    International Nuclear Information System (INIS)

    Balo, P.A.; Knauer, J.B.; Martin, R.C.

    1999-01-01

    The radioisotope 252Cf is routinely encapsulated into compact, portable, intense neutron sources with a 2.6-year half-life. A source the size of a person's little finger can emit up to 10^11 neutrons/s. Californium-252 is used commercially as a reliable, cost-effective neutron source for prompt gamma neutron activation analysis (PGNAA) of coal, cement, and minerals, as well as for detection and identification of explosives, land mines, and unexploded military ordnance. Other uses are neutron radiography, nuclear waste assays, reactor start-up sources, calibration standards, and cancer therapy. The inherent safety of source encapsulations is demonstrated by 30 years of experience and by U.S. Bureau of Mines tests of source survivability during explosions. The production and distribution center for the U.S. Department of Energy (DOE) Californium Program is the Radiochemical Engineering Development Center (REDC) at Oak Ridge National Laboratory (ORNL). DOE sells 252Cf to commercial reencapsulators domestically and internationally. Sealed 252Cf sources are also available for loan to agencies and subcontractors of the U.S. government and to universities for educational, research, and medical applications. The REDC has established the Californium User Facility (CUF) for Neutron Science to make its large inventory of 252Cf sources available to researchers for irradiations inside uncontaminated hot cells. Experiments at the CUF include a land mine detection system, neutron damage testing of solid-state detectors, irradiation of human cancer cells for boron neutron capture therapy experiments, and irradiation of rice to induce genetic mutations

  12. Effects of environmental variables on invasive amphibian activity: Using model selection on quantiles for counts

    Science.gov (United States)

    Muller, Benjamin J.; Cade, Brian S.; Schwarzkoph, Lin

    2018-01-01

    Many different factors influence animal activity. Often, the value of an environmental variable may influence significantly the upper or lower tails of the activity distribution. For describing relationships with heterogeneous boundaries, quantile regressions predict a quantile of the conditional distribution of the dependent variable. A quantile count model extends linear quantile regression methods to discrete response variables, and is useful if activity is quantified by trapping, where there may be many tied (equal) values in the activity distribution, over a small range of discrete values. Additionally, different environmental variables in combination may have synergistic or antagonistic effects on activity, so examining their effects together, in a modeling framework, is a useful approach. Thus, model selection on quantile counts can be used to determine the relative importance of different variables in determining activity, across the entire distribution of capture results. We conducted model selection on quantile count models to describe the factors affecting activity (numbers of captures) of cane toads (Rhinella marina) in response to several environmental variables (humidity, temperature, rainfall, wind speed, and moon luminosity) over eleven months of trapping. Environmental effects on activity are understudied in this pest animal. In the dry season, model selection on quantile count models suggested that rainfall positively affected activity, especially near the lower tails of the activity distribution. In the wet season, wind speed limited activity near the maximum of the distribution, while minimum activity increased with minimum temperature. This statistical methodology allowed us to explore, in depth, how environmental factors influenced activity across the entire distribution, and is applicable to any survey or trapping regime, in which environmental variables affect activity.
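    Quantile estimation for counts with many tied values is commonly handled by jittering (adding U(0,1) noise to make the variable continuous), as in Machado and Santos Silva's quantile count model; whether the paper uses exactly this construction is not stated in the abstract, so the sketch below is a generic illustration with synthetic trap counts, not the study's data.

```python
import numpy as np

def count_quantile(counts, tau, rng, n_jitter=200):
    """tau-quantile of count data via jittering: add U(0,1) to break
    the ties typical of trap counts, take the continuous sample
    quantile (averaged over jitters), then map back to the count
    scale with ceil(q) - 1 (Machado & Santos Silva-style)."""
    counts = np.asarray(counts, dtype=float)
    q = np.mean([np.quantile(counts + rng.random(counts.size), tau)
                 for _ in range(n_jitter)])
    return int(np.ceil(q)) - 1

rng = np.random.default_rng(6)
captures = rng.poisson(2.0, 500)      # synthetic nightly trap counts
q50 = count_quantile(captures, 0.5, rng)
q90 = count_quantile(captures, 0.9, rng)
print(q50, q90)
```

The full quantile count *regression* additionally models q as a function of covariates (humidity, temperature, etc.); the marginal version above just shows how jittering resolves ties near the tails of the activity distribution.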

  13. Geometric effects in alpha particle detection from distributed air sources

    International Nuclear Information System (INIS)

    Gil, L.R.; Leitao, R.M.S.; Marques, A.; Rivera, A.

    1994-08-01

    Geometric effects associated with the detection of alpha particles from distributed air sources, as occurs in radon and thoron measurements, are revisited. The volume outside which no alpha particle may reach the entrance window of the detector is defined and determined analytically for rectangular and cylindrical symmetry geometries. (author). 3 figs

  14. On Distributions of Emission Sources and Speed-of-Sound in Proton-Proton (Proton-Antiproton) Collisions

    Directory of Open Access Journals (Sweden)

    Li-Na Gao

    2015-01-01

    Full Text Available The revised (three-source) Landau hydrodynamic model is used in this paper to study the (pseudo)rapidity distributions of charged particles produced in proton-proton and proton-antiproton collisions at high energies. The central source is assumed to contribute a Gaussian function that covers as wide a rapidity region as possible. The target and projectile sources are assumed to emit particles isotropically in their respective rest frames. The model calculations, performed with a Monte Carlo method, are fitted to the experimental data over an energy range from 0.2 to 13 TeV. The values of the squared speed-of-sound parameter in different collisions are then extracted from the width of the rapidity distributions.
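
The three-source construction can be illustrated with a toy Monte Carlo. All parameters below (source width, rapidity shift, sample sizes) are invented for illustration, not the values fitted in the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

def isotropic_eta(n, rng):
    """(Pseudo)rapidity of massless particles emitted isotropically:
    cos(theta) uniform on [-1, 1), eta = -ln(tan(theta/2))."""
    c = rng.uniform(-1.0, 1.0, n)
    return -np.log(np.tan(np.arccos(c) / 2.0))

# Illustrative parameters only, NOT the values fitted in the paper
y0 = 3.0       # rapidity shift of the projectile/target sources
sigma_c = 1.5  # width of the central Gaussian source
n = 30_000

central = rng.normal(0.0, sigma_c, n)
projectile = isotropic_eta(n // 2, rng) + y0
target = isotropic_eta(n // 2, rng) - y0

y_all = np.concatenate([central, projectile, target])
# A symmetric collision system gives a symmetric rapidity distribution
print(round(float(np.mean(y_all)), 2))
```

Histogramming `y_all` reproduces the characteristic central-plateau-plus-shoulders shape that the three sources are meant to capture.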

  15. Optimal planning of multiple distributed generation sources in distribution networks: A new approach

    Energy Technology Data Exchange (ETDEWEB)

    AlRashidi, M.R., E-mail: malrash2002@yahoo.com [Department of Electrical Engineering, College of Technological Studies, Public Authority for Applied Education and Training (PAAET) (Kuwait); AlHajri, M.F., E-mail: mfalhajri@yahoo.com [Department of Electrical Engineering, College of Technological Studies, Public Authority for Applied Education and Training (PAAET) (Kuwait)

    2011-10-15

    Highlights: → A new hybrid PSO for optimal DGs placement and sizing. → Statistical analysis to fine tune PSO parameters. → Novel constraint handling mechanism to handle different constraints types. - Abstract: An improved particle swarm optimization algorithm (PSO) is presented for optimal planning of multiple distributed generation sources (DG). This problem can be divided into two sub-problems: the DG optimal size (continuous optimization) and location (discrete optimization) to minimize real power losses. The proposed approach addresses the two sub-problems simultaneously using an enhanced PSO algorithm capable of handling multiple DG planning in a single run. A design of experiment is used to fine tune the proposed approach via proper analysis of PSO parameters interaction. The proposed algorithm treats the problem constraints differently by adopting a radial power flow algorithm to satisfy the equality constraints, i.e. power flows in distribution networks, while the inequality constraints are handled by making use of some of the PSO features. The proposed algorithm was tested on the practical 69-bus power distribution system. Different test cases were considered to validate the proposed approach consistency in detecting optimal or near optimal solution. Results are compared with those of Sequential Quadratic Programming.
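
The PSO mechanics can be sketched on a toy continuous problem. This is generic textbook PSO, not the authors' enhanced hybrid; a simple quadratic stands in for the power-loss objective that a radial power flow would evaluate:

```python
import numpy as np

rng = np.random.default_rng(1)

def loss(x):
    """Stand-in objective; a real DG study would run a power flow here."""
    return np.sum(x**2, axis=-1)

n_particles, dim, iters = 30, 2, 200
w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients

pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = loss(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1 = rng.uniform(size=(n_particles, dim))
    r2 = rng.uniform(size=(n_particles, dim))
    # velocity update: inertia + cognitive pull + social pull
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = loss(pos)
    improved = val < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(loss(gbest))  # near zero after convergence
```

In the DG-planning setting, part of each particle would encode a discrete bus index and part a continuous DG size, which is what makes the simultaneous treatment of the two sub-problems possible.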

  16. Optimal planning of multiple distributed generation sources in distribution networks: A new approach

    International Nuclear Information System (INIS)

    AlRashidi, M.R.; AlHajri, M.F.

    2011-01-01

    Highlights: → A new hybrid PSO for optimal DGs placement and sizing. → Statistical analysis to fine tune PSO parameters. → Novel constraint handling mechanism to handle different constraints types. - Abstract: An improved particle swarm optimization algorithm (PSO) is presented for optimal planning of multiple distributed generation sources (DG). This problem can be divided into two sub-problems: the DG optimal size (continuous optimization) and location (discrete optimization) to minimize real power losses. The proposed approach addresses the two sub-problems simultaneously using an enhanced PSO algorithm capable of handling multiple DG planning in a single run. A design of experiment is used to fine tune the proposed approach via proper analysis of PSO parameters interaction. The proposed algorithm treats the problem constraints differently by adopting a radial power flow algorithm to satisfy the equality constraints, i.e. power flows in distribution networks, while the inequality constraints are handled by making use of some of the PSO features. The proposed algorithm was tested on the practical 69-bus power distribution system. Different test cases were considered to validate the proposed approach consistency in detecting optimal or near optimal solution. Results are compared with those of Sequential Quadratic Programming.

  17. Fusion Neutronic Source deuterium–tritium neutron spectrum measurements using natural diamond detectors

    International Nuclear Information System (INIS)

    Krasilnikov, A.V.; Kaneko, J.; Isobe, M.; Maekawa, F.; Nishitani, T.

    1997-01-01

    Two natural diamond detectors (NDDs) operating at room temperature were used for Fusion Neutronics Source (FNS) deuterium–tritium (DT) neutron spectrum measurements at different points around the tritium target and for different deuteron beam energies. The energy resolutions of the two NDDs were measured to be 1.95% and 2.8%. The higher energy resolution of one of the two NDDs made it possible to measure the shape of the DT neutron energy distribution and its broadening due to deuteron scattering inside the target. The influence of pulse pileup on the energy resolution of the combined system (NDD + electronics) at count rates up to 3.8×10^5 counts/s was investigated. An energy resolution of 3.58% for the spectrometric system based on an NDD and a 0.25 μs shaping-time amplifier was measured at a count rate of 5.7×10^5 counts/s. It is shown that special development of a fast pulse-signal processor is necessary for NDD-based spectrometry at count rates of approximately 10^6 counts/s. copyright 1997 American Institute of Physics
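
The count-rate dependence of pileup can be roughly estimated from Poisson pulse-train statistics. The formula below is a standard textbook approximation, not taken from the paper:

```python
import math

def pileup_fraction(rate_cps, shaping_time_s):
    """Approximate fraction of pulses distorted by pileup for a Poisson
    pulse train with shaping time tau: 1 - exp(-2 * R * tau)."""
    return 1.0 - math.exp(-2.0 * rate_cps * shaping_time_s)

tau = 0.25e-6  # 0.25 us shaping time, as used with the NDD spectrometer
for rate in (3.8e5, 5.7e5, 1.0e6):
    print(f"{rate:.1e} counts/s -> {pileup_fraction(rate, tau):.1%} piled up")
```

The steep growth of this fraction toward 10^6 counts/s is why the paper argues for a dedicated fast pulse processor at such rates.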

  18. SEDS: THE SPITZER EXTENDED DEEP SURVEY. SURVEY DESIGN, PHOTOMETRY, AND DEEP IRAC SOURCE COUNTS

    International Nuclear Information System (INIS)

    Ashby, M. L. N.; Willner, S. P.; Fazio, G. G.; Huang, J.-S.; Hernquist, L.; Hora, J. L.; Arendt, R.; Barmby, P.; Barro, G.; Faber, S.; Guhathakurta, P.; Bell, E. F.; Bouwens, R.; Cattaneo, A.; Croton, D.; Davé, R.; Dunlop, J. S.; Egami, E.; Finlator, K.; Grogin, N. A.

    2013-01-01

    The Spitzer Extended Deep Survey (SEDS) is a very deep infrared survey within five well-known extragalactic science fields: the UKIDSS Ultra-Deep Survey, the Extended Chandra Deep Field South, COSMOS, the Hubble Deep Field North, and the Extended Groth Strip. SEDS covers a total area of 1.46 deg^2 to a depth of 26 AB mag (3σ) in both of the warm Infrared Array Camera (IRAC) bands at 3.6 and 4.5 μm. Because of its uniform depth of coverage in so many widely-separated fields, SEDS is subject to roughly 25% smaller errors due to cosmic variance than a single-field survey of the same size. SEDS was designed to detect and characterize galaxies from intermediate to high redshifts (z = 2-7) with a built-in means of assessing the impact of cosmic variance on the individual fields. Because the full SEDS depth was accumulated in at least three separate visits to each field, typically with six-month intervals between visits, SEDS also furnishes an opportunity to assess the infrared variability of faint objects. This paper describes the SEDS survey design, processing, and publicly-available data products. Deep IRAC counts for the more than 300,000 galaxies detected by SEDS are consistent with models based on known galaxy populations. Discrete IRAC sources contribute 5.6 ± 1.0 and 4.4 ± 0.8 nW m^-2 sr^-1 at 3.6 and 4.5 μm to the diffuse cosmic infrared background (CIB). IRAC sources cannot contribute more than half of the total CIB flux estimated from DIRBE data. Barring an unexpected error in the DIRBE flux estimates, half the CIB flux must therefore come from a diffuse component.

  19. Reduction of CMOS Image Sensor Read Noise to Enable Photon Counting.

    Science.gov (United States)

    Guidash, Michael; Ma, Jiaju; Vogelsang, Thomas; Endsley, Jay

    2016-04-09

    Recent activity in photon counting CMOS image sensors (CIS) has been directed to reduction of read noise. Many approaches and methods have been reported. This work is focused on providing sub-1 e- read noise by design and operation of the binary and small-signal readout of photon counting CIS. Compensation of transfer-gate feed-through was used to provide substantially reduced CDS time and source follower (SF) bandwidth. SF read noise was reduced by a factor of 3 with this method. This method can be applied broadly to CIS devices to reduce the read noise for small signals and thereby enable use as a photon counting sensor.

  20. High frequency seismic signal generated by landslides on complex topographies: from point source to spatially distributed sources

    Science.gov (United States)

    Mangeney, A.; Kuehnert, J.; Capdeville, Y.; Durand, V.; Stutzmann, E.; Kone, E. H.; Sethi, S.

    2017-12-01

    During their flow along the topography, landslides generate seismic waves in a wide frequency range. These so-called landquakes can be recorded at very large distances (a few hundred km for large landslides). The recorded signals depend on the landslide seismic source and the seismic wave propagation. If the wave propagation is well understood, the seismic signals can be inverted for the seismic source and thus can be used to obtain information on the landslide properties and dynamics. Analysis and modeling of long-period seismic signals (10-150 s) have helped in this way to discriminate between different landslide scenarios and to constrain rheological parameters (e.g. Favreau et al., 2010). This was possible because topography poorly affects wave propagation at these long periods and the landslide seismic source can be approximated as a point source. In the near field and at higher frequencies (> 1 Hz), the spatial extent of the source has to be taken into account, and the influence of the topography on the recorded seismic signal should be quantified in order to extract information on the landslide properties and dynamics. The characteristic signature of distributed sources and varying topographies is studied as a function of frequency and recording distance. The time-dependent spatial distribution of the forces applied to the ground by the landslide is obtained using granular flow numerical modeling on 3D topography. The generated seismic waves are simulated using the spectral element method. The simulated seismic signal is compared to observed seismic data from rockfalls at the Dolomieu Crater of Piton de la Fournaise (La Réunion). Favreau, P., Mangeney, A., Lucas, A., Crosta, G., and Bouchut, F. (2010). Numerical modeling of landquakes. Geophysical Research Letters, 37(15):1-5.

  1. Full counting statistics of multiple Andreev reflections in incoherent diffusive superconducting junctions

    International Nuclear Information System (INIS)

    Samuelsson, P.

    2007-01-01

    We present a theory for the full distribution of current fluctuations in incoherent diffusive superconducting junctions, subjected to a voltage bias. This theory of full counting statistics of incoherent multiple Andreev reflections is valid for an arbitrary applied voltage. We present a detailed discussion of the properties of the first four cumulants as well as the low and high voltage regimes of the full counting statistics. (orig.)

  2. Family of Quantum Sources for Improving Near Field Accuracy in Transducer Modeling by the Distributed Point Source Method

    Directory of Open Access Journals (Sweden)

    Dominique Placko

    2016-10-01

    Full Text Available The distributed point source method, or DPSM, developed in the last decade has been used for solving various engineering problems—such as elastic and electromagnetic wave propagation, electrostatic, and fluid flow problems. Based on a semi-analytical formulation, the DPSM solution is generally built by superimposing point source solutions or Green's functions. However, the DPSM solution can also be obtained by superimposing elemental solutions of volume sources having some source density, called the equivalent source density (ESD). In earlier works mostly point sources were used. In this paper the DPSM formulation is modified to introduce a new kind of ESD, replacing the classical single point source by a family of point sources referred to as quantum sources. The proposed formulation with these quantum sources does not change the dimension of the global matrix that must be inverted to solve the problem, compared with the classical point-source-based DPSM formulation. To assess the performance of this new formulation, the ultrasonic field generated by a circular planar transducer was compared with the classical DPSM formulation and the analytical solution. The results show a significant improvement in the near-field computation.
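
The superposition at the heart of DPSM can be sketched by summing free-space Green's functions exp(ikr)/(4πr) over a set of point sources standing in for a transducer face. The ring layout, wavenumber, and geometry below are arbitrary illustrative choices, not the paper's configuration:

```python
import numpy as np

k = 2 * np.pi / 1.5e-3  # wavenumber for a 1.5 mm wavelength (arbitrary)
a = 5e-3                # source ring radius (arbitrary)

# Point sources on a ring standing in for the transducer face
n_src = 64
phi = 2 * np.pi * np.arange(n_src) / n_src
src = np.stack([a * np.cos(phi), a * np.sin(phi), np.zeros(n_src)], axis=1)

def field(points):
    """Superpose free-space Green's functions exp(ikr) / (4 pi r)."""
    r = np.linalg.norm(points[:, None, :] - src[None, :, :], axis=2)
    return np.sum(np.exp(1j * k * r) / (4 * np.pi * r), axis=1)

# One on-axis point and two symmetric off-axis points 10 mm from the face
obs = np.array([[0.0, 0.0, 10e-3], [1e-3, 0.0, 10e-3], [0.0, 1e-3, 10e-3]])
p = field(obs)
print(np.abs(p))  # the two off-axis magnitudes agree by symmetry
```

The paper's quantum sources refine what each elemental contribution looks like, but the assembled solution is still this kind of weighted superposition.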

  3. Counting carbohydrates

    Science.gov (United States)

    Carb counting; Carbohydrate-controlled diet; Diabetic diet; Diabetes-counting carbohydrates ... Many foods contain carbohydrates (carbs), including: Fruit and fruit juice Cereal, bread, pasta, and rice Milk and milk products, soy milk Beans, legumes, ...

  4. Repeatability of differential goat bulk milk culture and associations with somatic cell count, total bacterial count, and standard plate count

    OpenAIRE

    Koop, G.; Dik, N.; Nielen, M.; Lipman, L.J.A.

    2010-01-01

    The aims of this study were to assess how different bacterial groups in bulk milk are related to bulk milk somatic cell count (SCC), bulk milk total bacterial count (TBC), and bulk milk standard plate count (SPC) and to measure the repeatability of bulk milk culturing. On 53 Dutch dairy goat farms, 3 bulk milk samples were collected at intervals of 2 wk. The samples were cultured for SPC, coliform count, and staphylococcal count and for the presence of Staphylococcus aureus. Furthermore, SCC ...

  5. Errors associated with moose-hunter counts of occupied beaver Castor fiber lodges in Norway

    OpenAIRE

    Parker, Howard; Rosell, Frank; Gustavsen, Per Øyvind

    2002-01-01

    In Norway, Sweden and Finland moose Alces alces hunting teams are often employed to survey occupied beaver (Castor fiber and C. canadensis) lodges while hunting. Results may be used to estimate population density or trend, or for issuing harvest permits. Despite the method's increasing popularity, the errors involved have never been identified. In this study we 1) compare hunting-team counts of occupied lodges with total counts, 2) identify the sources of error between counts and 3) evaluate ...

  6. Determination of efficiency curves for HPGE detector in different counting geometries

    International Nuclear Information System (INIS)

    Rodrigues, Josianne L.; Kastner, Geraldo F.; Ferreira, Andrea V.

    2011-01-01

    This paper presents the first experimental results related to the determination of efficiency curves for an HPGe detector in different counting geometries. The detector is a Canberra GX2520 belonging to CDTN/CNEN. Efficiency curves for point sources were determined by using a certified set of gamma sources, for three counting geometries. Following that, efficiency curves for extended (non-point) samples were determined by using standard solutions of radionuclides in 500 ml and 1000 ml Marinelli beakers
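
Efficiency curves of this kind are commonly parametrized as a polynomial in log–log space. The sketch below uses synthetic calibration data (not the CDTN measurements) to show the fit-and-interpolate step:

```python
import numpy as np

# Hypothetical calibration energies (keV) for a certified gamma source set
energy = np.array([59.5, 122.1, 344.3, 661.7, 1173.2, 1332.5, 1408.0])

# Synthesize efficiencies from a known log-log cubic (illustration only)
true_coeffs = np.array([-0.05, 0.9, -5.8, 7.0])
eff = np.exp(np.polyval(true_coeffs, np.log(energy)))

# Fit ln(eff) as a cubic in ln(E), a common HPGe efficiency parametrization
coeffs = np.polyfit(np.log(energy), np.log(eff), deg=3)

def efficiency(e_kev):
    """Efficiency interpolated from the fitted log-log polynomial."""
    return np.exp(np.polyval(coeffs, np.log(e_kev)))

print(efficiency(1000.0))  # interpolated efficiency between calibration points
```

One such curve would be fitted per counting geometry (point source at each distance, and each Marinelli volume), since the solid angle and attenuation differ between them.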

  7. The Galactic Distribution of Massive Star Formation from the Red MSX Source Survey

    Science.gov (United States)

    Figura, Charles C.; Urquhart, J. S.

    2013-01-01

    Massive stars inject enormous amounts of energy into their environments in the form of UV radiation and molecular outflows, creating HII regions and enriching local chemistry. These effects provide feedback mechanisms that aid in regulating star formation in the region, and may trigger the formation of subsequent generations of stars. Understanding the mechanics of massive star formation presents an important key to understanding this process and its role in shaping the dynamics of galactic structure. The Red MSX Source (RMS) survey is a multi-wavelength investigation of ~1200 massive young stellar objects (MYSO) and ultra-compact HII (UCHII) regions identified from a sample of colour-selected sources from the Midcourse Space Experiment (MSX) point source catalog and the Two Micron All Sky Survey. We present a study of over 900 MYSO and UCHII regions investigated by the RMS survey. We review the methods used to determine distances, and investigate the radial galactocentric distribution of these sources in the context of the observed structure of the Galaxy. The distribution of MYSO and UCHII regions is found to be spatially correlated with the spiral arms and galactic bar. We examine the radial distribution of MYSOs and UCHII regions and find variations in the star formation rate between the inner and outer Galaxy and discuss the implications for star formation throughout the galactic disc.

  8. Double hard scattering without double counting

    Energy Technology Data Exchange (ETDEWEB)

    Diehl, Markus [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Gaunt, Jonathan R. [VU Univ. Amsterdam (Netherlands). NIKHEF Theory Group; Schoenwald, Kay [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2017-02-15

    Double parton scattering in proton-proton collisions includes kinematic regions in which two partons inside a proton originate from the perturbative splitting of a single parton. This leads to a double counting problem between single and double hard scattering. We present a solution to this problem, which allows for the definition of double parton distributions as operator matrix elements in a proton, and which can be used at higher orders in perturbation theory. We show how the evaluation of double hard scattering in this framework can provide a rough estimate for the size of the higher-order contributions to single hard scattering that are affected by double counting. In a numeric study, we identify situations in which these higher-order contributions must be explicitly calculated and included if one wants to attain an accuracy at which double hard scattering becomes relevant, and other situations where such contributions may be neglected.

  9. Double hard scattering without double counting

    International Nuclear Information System (INIS)

    Diehl, Markus; Gaunt, Jonathan R.

    2017-02-01

    Double parton scattering in proton-proton collisions includes kinematic regions in which two partons inside a proton originate from the perturbative splitting of a single parton. This leads to a double counting problem between single and double hard scattering. We present a solution to this problem, which allows for the definition of double parton distributions as operator matrix elements in a proton, and which can be used at higher orders in perturbation theory. We show how the evaluation of double hard scattering in this framework can provide a rough estimate for the size of the higher-order contributions to single hard scattering that are affected by double counting. In a numeric study, we identify situations in which these higher-order contributions must be explicitly calculated and included if one wants to attain an accuracy at which double hard scattering becomes relevant, and other situations where such contributions may be neglected.

  10. Establishment of a Practical Approach for Characterizing the Source of Particulates in Water Distribution Systems

    Directory of Open Access Journals (Sweden)

    Seon-Ha Chae

    2016-02-01

    Full Text Available Water quality complaints related to particulate matter and discolored water can be troublesome for water utilities in terms of follow-up investigations and implementation of appropriate actions because particulate matter can enter from a variety of sources; moreover, physicochemical processes can affect the water quality during the purification and transportation processes. The origin of particulates can be attributed to sources such as background organic/inorganic materials from water sources, water treatment plants, water distribution pipelines that have deteriorated, and rehabilitation activities in the water distribution systems. In this study, a practical method is proposed for tracing particulate sources. The method entails collecting information related to hydraulic, water quality, and structural conditions, employing a network flow-path model, and establishing a database of physicochemical properties for tubercles and slimes. The proposed method was implemented within two city water distribution systems that were located in Korea. These applications were conducted to demonstrate the practical applicability of the method for providing solutions to customer complaints. The results of the field studies indicated that the proposed method would be feasible for investigating the sources of particulates and for preparing appropriate action plans for complaints related to particulate matter.

  11. Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques

    Directory of Open Access Journals (Sweden)

    Valenzise G

    2009-01-01

    Full Text Available In the past few years, a large number of techniques have been proposed to identify whether a multimedia content has been tampered with. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, mainly because of the large amount of data required for this task. We propose a novel hashing scheme which exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges between 20% and 70%.
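
The random-projection half of such a scheme can be sketched with a toy sign-quantized hash on a spectrogram-like feature vector; the LDPC syndrome coding and distributed-source-coding stages are omitted, and all sizes below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_bits(feature, proj):
    """Sign-quantize random projections of a time-frequency feature vector."""
    return (proj @ feature) > 0.0

# Toy "time-frequency representation": 1024 coefficients of an audio frame
feature = rng.normal(size=1024)

# Measurement matrix shared by content provider and content user
proj = rng.normal(size=(200, 1024))  # roughly 200 hash bits per frame

h_original = hash_bits(feature, proj)

# A sparse tampering: a few time-frequency coefficients are altered
tampered = feature.copy()
tampered[100:110] += 5.0
h_tampered = hash_bits(tampered, proj)

print(np.mean(h_original != hash_bits(feature, proj)))  # 0.0 for identical content
print(np.mean(h_original != h_tampered))                # flipped bits flag tampering
```

In the full scheme the projections are not sent raw: only LDPC syndrome bits are transmitted, and the sparsity of the tampering is what allows its time-frequency position to be recovered from so few measurements.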

  12. Is a top-heavy initial mass function needed to reproduce the submillimetre galaxy number counts?

    Science.gov (United States)

    Safarzadeh, Mohammadtaher; Lu, Yu; Hayward, Christopher C.

    2017-12-01

    Matching the number counts and redshift distribution of submillimetre galaxies (SMGs) without invoking modifications to the initial mass function (IMF) has proved challenging for semi-analytic models (SAMs) of galaxy formation. We adopt a previously developed SAM that is constrained to match the z = 0 galaxy stellar mass function and makes various predictions which agree well with observational constraints; we do not recalibrate the SAM for this work. We implement three prescriptions to predict the submillimetre flux densities of the model galaxies; two depend solely on star formation rate, whereas the other also depends on the dust mass. By comparing the predictions of the models, we find that taking into account the dust mass, which affects the dust temperature and thus influences the far-infrared spectral energy distribution, is crucial for matching the number counts and redshift distribution of SMGs. Moreover, despite using a standard IMF, our model can match the observed SMG number counts and redshift distribution reasonably well, which contradicts the conclusions of some previous studies that a top-heavy IMF, in addition to taking into account the effect of dust mass, is needed to match these observations. Although we have not identified the key ingredient that is responsible for our model matching the observed SMG number counts and redshift distribution without IMF variation - which is challenging given the different prescriptions for physical processes employed in the SAMs of interest - our results demonstrate that in SAMs, IMF variation is degenerate with other physical processes, such as stellar feedback.

  13. Road safety performance measures and AADT uncertainty from short-term counts.

    Science.gov (United States)

    Milligan, Craig; Montufar, Jeannette; Regehr, Jonathan; Ghanney, Bartholomew

    2016-12-01

    The objective of this paper is to enable better risk analysis of road safety performance measures by creating the first knowledge base on uncertainty surrounding annual average daily traffic (AADT) estimates when the estimates are derived by expanding short-term counts with the individual permanent counter method. Many road safety performance measures and performance models use AADT as an input. While there is an awareness that the input suffers from uncertainty, the uncertainty is not well known or accounted for. The paper samples data from a set of 69 permanent automatic traffic recorders in Manitoba, Canada, to simulate almost 2 million short-term counts over a five-year period. These short-term counts are expanded to AADT estimates by transferring temporal information from a directly linked nearby permanent count control station, and the resulting AADT values are compared to a known reference AADT to compute errors. The impacts of five factors on AADT error are considered: length of short-term count, number of short-term counts, use of weekday versus weekend counts, distance from a count to its expansion control station, and the AADT at the count site. The mean absolute transfer error for expanded AADT estimates is 6.7%, and this value varied by traffic pattern group from 5% to 10.5%. Reference percentiles of the error distribution show that almost all errors are between -20% and +30%. Error decreases substantially by using a 48-h count instead of a 24-h count, and only slightly by using two counts instead of one. Weekday counts are superior to weekend counts, especially if the count is only 24 h. Mean absolute transfer error increases with distance to control station (elasticity 0.121, p=0.001) and increases with AADT (elasticity 0.857, p<0.001). These findings provide an evidence base for risk analysis of road safety performance measures that use AADT as inputs. Analytical frameworks for such analysis exist but are infrequently used in road safety because the evidence base on AADT uncertainty has not been well developed.
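
The individual-permanent-counter expansion evaluated in the paper transfers the control station's temporal pattern to the short count. A minimal sketch with invented numbers:

```python
def expand_short_count(short_count_adt, control_same_period_adt, control_aadt):
    """Scale a short-term count by the control station's ratio of annual
    average traffic to its traffic during the same period as the count."""
    return short_count_adt * (control_aadt / control_same_period_adt)

# Invented example: a 48-h weekday count averaging 5200 veh/day, taken while
# the linked permanent counter ran 9% above its own annual average
short_adt = 5200.0
control_aadt = 14000.0
control_same_period = control_aadt * 1.09

aadt_estimate = expand_short_count(short_adt, control_same_period, control_aadt)
print(round(aadt_estimate))  # 4771: the seasonal surplus is divided out
```

The transfer error the paper quantifies arises exactly here: the count site's own seasonal pattern rarely matches the control station's, so the ratio is only an approximation.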

  14. Selecting the right statistical model for analysis of insect count data by using information theoretic measures.

    Science.gov (United States)

    Sileshi, G

    2006-10-01

    Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, the negative binomial distribution and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial distribution model provided better description of the probability distribution of seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common data phenomena in insect counts. If not properly modelled, these properties can invalidate the normal distribution assumptions resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
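
The model-comparison step can be sketched by computing AIC for a Poisson fit and a negative binomial fit on overdispersed counts. The NB parameters below come from moment matching rather than full maximum likelihood, so the comparison is illustrative only:

```python
import math

import numpy as np

rng = np.random.default_rng(3)

# Overdispersed, zero-heavy counts, as is typical of insect trap data
counts = rng.negative_binomial(n=1, p=0.2, size=300)  # mean ~4, variance ~20

m = counts.mean()
v = counts.var()

# Poisson log-likelihood at its MLE (lambda = sample mean)
ll_pois = sum(k * math.log(m) - m - math.lgamma(k + 1) for k in counts)

# Negative binomial by moment matching: p = 1 - m/v, r = m^2/(v - m)
p = 1.0 - m / v
r = m * m / (v - m)
ll_nb = sum(
    math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
    + r * math.log(1.0 - p) + k * math.log(p)
    for k in counts
)

# AIC = 2*(number of parameters) - 2*log-likelihood; lower is better
aic_pois = 2 * 1 - 2 * ll_pois
aic_nb = 2 * 2 - 2 * ll_nb
print(aic_pois > aic_nb)  # NB wins on overdispersed counts
```

The extra NB parameter is penalized by AIC, but on data with variance several times the mean the gain in log-likelihood dwarfs the penalty, which is the pattern the paper reports for most of its insect datasets.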

  15. THE HAWAII SCUBA-2 LENSING CLUSTER SURVEY: NUMBER COUNTS AND SUBMILLIMETER FLUX RATIOS

    Energy Technology Data Exchange (ETDEWEB)

    Hsu, Li-Yen; Cowie, Lennox L.; Barger, Amy J. [Institute of Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States); Chen, Chian-Chou [Center for Extragalactic Astronomy, Department of Physics, Durham University, South Road, Durham DH1 3LE (United Kingdom); Wang, Wei-Hao [Academia Sinica Institute of Astronomy and Astrophysics, P.O. Box 23-141, Taipei 10617, Taiwan (China)

    2016-09-20

    We present deep number counts at 450 and 850 μm using the SCUBA-2 camera on the James Clerk Maxwell Telescope. We combine data for six lensing cluster fields and three blank fields to measure the counts over a wide flux range at each wavelength. Thanks to the lensing magnification, our measurements extend to fluxes fainter than 1 mJy and 0.2 mJy at 450 μm and 850 μm, respectively. Our combined data highly constrain the faint end of the number counts. Integrating our counts shows that the majority of the extragalactic background light (EBL) at each wavelength is contributed by faint sources with L_IR < 10^12 L_⊙, corresponding to luminous infrared galaxies (LIRGs) or normal galaxies. By comparing our result with the 500 μm stacking of K-selected sources from the literature, we conclude that the K-selected LIRGs and normal galaxies still cannot fully account for the EBL that originates from sources with L_IR < 10^12 L_⊙. This suggests that many faint submillimeter galaxies may not be included in the UV star formation history. We also explore the submillimeter flux ratio between the two bands for our 450 μm and 850 μm selected sources. At 850 μm, we find a clear relation between the flux ratio and the observed flux. This relation can be explained by a redshift evolution, where galaxies at higher redshifts have higher luminosities and star formation rates. In contrast, at 450 μm, we do not see a clear relation between the flux ratio and the observed flux.
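
Integrating counts into a background contribution amounts to evaluating ∫ S (dN/dS) dS over the measured flux range. The power-law slope and normalization below are invented for illustration, not the measured SCUBA-2 counts:

```python
import numpy as np

# Schematic differential counts dN/dS = N0 * (S/S0)^(-alpha) (per mJy per deg^2)
N0, S0, alpha = 500.0, 1.0, 2.2  # invented normalization, pivot, and slope

def dNdS(s_mjy):
    return N0 * (s_mjy / S0) ** (-alpha)

# Numerically integrate S * dN/dS between S_min and S_max (trapezoid rule)
s = np.logspace(np.log10(0.2), np.log10(20.0), 2000)
f = s * dNdS(s)
ebl_numeric = float(np.sum(0.5 * (f[1:] + f[:-1]) * (s[1:] - s[:-1])))

# Analytic check: N0 * S0^alpha * [S^(2-alpha)/(2-alpha)] between the limits
a, b = 0.2, 20.0
ebl_analytic = N0 * S0**alpha * (b**(2 - alpha) - a**(2 - alpha)) / (2 - alpha)
print(ebl_numeric, ebl_analytic)  # lowering S_min raises the total when alpha > 2
```

With a faint-end slope steeper than 2, the integral is dominated by the faintest fluxes, which is why lensing magnification matters so much for pinning down the EBL contribution.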

  16. Lower white blood cell counts in elite athletes training for highly aerobic sports.

    Science.gov (United States)

    Horn, P L; Pyne, D B; Hopkins, W G; Barnes, C J

    2010-11-01

    White cell counts at rest might be lower in athletes participating in selected endurance-type sports. Here, we analysed blood tests of elite athletes collected over a 10-year period. Reference ranges were established for 14 female and 14 male sports involving 3,679 samples from 937 females and 4,654 samples from 1,310 males. Total white blood cell counts and counts of neutrophils, lymphocytes and monocytes were quantified. Each sport was scaled (1-5) for its perceived metabolic stress (aerobic-anaerobic) and mechanical stress (concentric-eccentric) by 13 sports physiologists. Substantially lower total white cell and neutrophil counts were observed in aerobic sports of cycling and triathlon (~16% of test results below the normal reference range) compared with team or skill-based sports such as water polo, cricket and volleyball. Mechanical stress of sports had less effect on the distribution of cell counts. The lower white cell counts in athletes in aerobic sports probably represent an adaptive response, not underlying pathology.

  17. Z-Source-Inverter-Based Flexible Distributed Generation System Solution for Grid Power Quality Improvement

    DEFF Research Database (Denmark)

    Blaabjerg, Frede; Vilathgamuwa, D. M.; Loh, Poh Chiang

    2009-01-01

    Distributed generation (DG) systems are usually connected to the grid using power electronic converters. Power delivered from such DG sources depends on factors like energy availability and load demand. The converters used in power conversion do not operate with their full capacity all the time......-stage buck-boost inverter, recently proposed Z-source inverter (ZSI) is a good candidate for future DG systems. This paper presents a controller design for a ZSI-based DG system to improve power quality of distribution systems. The proposed control method is tested with simulation results obtained using...

  18. 10 CFR 32.74 - Manufacture and distribution of sources or devices containing byproduct material for medical use.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Manufacture and distribution of sources or devices... SPECIFIC DOMESTIC LICENSES TO MANUFACTURE OR TRANSFER CERTAIN ITEMS CONTAINING BYPRODUCT MATERIAL Generally Licensed Items § 32.74 Manufacture and distribution of sources or devices containing byproduct material for...

  19. Set of counts by scintillations for atmospheric samplings; Ensemble de comptages par scintillations pour prelevements atmospheriques

    Energy Technology Data Exchange (ETDEWEB)

    Appriou, D.; Doury, A.

    1962-07-01

The authors report the development of a scintillation-based counting assembly with the following characteristics: a photomultiplier with a wide photocathode, a thin plastic scintillator for beta + alpha counting (with the possibility of mounting an alpha scintillator), an intrinsic background that is relatively small with respect to the activities to be counted, and a weakly varying efficiency. The authors state the counting objective and present equipment tests (counter, proportional amplifier and pre-amplifier, input drawer). They describe the operation of the apparatus, discuss the selection of scintillators, report the study of the intrinsic background (electron-induced background noise, total background noise, background-noise reduction), discuss the counts (influence of the external source, sensitivity to alpha radiation, counting homogeneity, minimum detectable activity) and the efficiencies.

  20. Standardization of 241Am by digital coincidence counting, liquid scintillation counting and defined solid angle counting

    International Nuclear Information System (INIS)

    Balpardo, C.; Capoulat, M.E.; Rodrigues, D.; Arenillas, P.

    2010-01-01

The nuclide 241 Am decays by alpha emission to 237 Np. Most of the decays (84.6%) populate the excited level of 237 Np with an energy of 59.54 keV. Digital coincidence counting was applied to standardize a solution of 241 Am by alpha-gamma coincidence counting with efficiency extrapolation. Electronic discrimination was implemented with a pressurized proportional counter, and the results were compared with two other independent techniques: liquid scintillation counting using the logical sum of double coincidences in a TDCR array, and defined solid angle counting taking into account activity inhomogeneity in the active deposit. The results of the three methods are consistent within 0.3%. An ampoule of this solution will be sent to the International Reference System (SIR) during 2009. Uncertainties were analysed and compared in detail for the three applied methods.

  1. Investigating The Neutron Flux Distribution Of The Miniature Neutron Source Reactor MNSR Type

    International Nuclear Information System (INIS)

    Nguyen Hoang Hai; Do Quang Binh

    2011-01-01

Neutron flux distribution is an important characteristic of a nuclear reactor. In this article, the four-energy-group neutron flux distributions of the miniature neutron source reactor (MNSR) type along the radial and axial directions are investigated for the case where the control rod is fully withdrawn. In addition, the effect of control rod position on the thermal neutron flux distribution is also studied. The group constants for all reactor components are generated by the WIMSD code, and the neutron flux distributions are calculated by the CITATION code. The results show that the control rod position affects the thermal neutron flux distribution only in the region around the control rod. (author)

  2. Benjamin Thompson, Count Rumford Count Rumford on the nature of heat

    CERN Document Server

    Brown, Sanborn C

    1967-01-01

Men of Physics: Benjamin Thompson - Count Rumford: Count Rumford on the Nature of Heat covers the significant contributions of Count Rumford in the fields of physics. Count Rumford was born with the name Benjamin Thompson on March 23, 1753, in Woburn, Massachusetts. This book is composed of two parts encompassing 11 chapters, and begins with a presentation of Benjamin Thompson's biography and his interest in physics, particularly as an advocate of an "anti-caloric" theory of heat. The subsequent chapters are devoted to his many discoveries that profoundly affected the physical thought

  3. Some target assay uncertainties for passive neutron coincidence counting

    International Nuclear Information System (INIS)

    Ensslin, N.; Langner, D.G.; Menlove, H.O.; Miller, M.C.; Russo, P.A.

    1990-01-01

    This paper provides some target assay uncertainties for passive neutron coincidence counting of plutonium metal, oxide, mixed oxide, and scrap and waste. The target values are based in part on past user experience and in part on the estimated results from new coincidence counting techniques that are under development. The paper summarizes assay error sources and the new coincidence techniques, and recommends the technique that is likely to yield the lowest assay uncertainty for a given material type. These target assay uncertainties are intended to be useful for NDA instrument selection and assay variance propagation studies for both new and existing facilities. 14 refs., 3 tabs

  4. A practical algorithm for distribution state estimation including renewable energy sources

    Energy Technology Data Exchange (ETDEWEB)

    Niknam, Taher [Electronic and Electrical Department, Shiraz University of Technology, Modares Blvd., P.O. 71555-313, Shiraz (Iran); Firouzi, Bahman Bahmani [Islamic Azad University Marvdasht Branch, Marvdasht (Iran)

    2009-11-15

    Renewable energy is energy that is in continuous supply over time. These kinds of energy sources are divided into five principal renewable sources of energy: the sun, the wind, flowing water, biomass and heat from within the earth. According to some studies carried out by the research institutes, about 25% of the new generation will be generated by Renewable Energy Sources (RESs) in the near future. Therefore, it is necessary to study the impact of RESs on the power systems, especially on the distribution networks. This paper presents a practical Distribution State Estimation (DSE) including RESs and some practical consideration. The proposed algorithm is based on the combination of Nelder-Mead simplex search and Particle Swarm Optimization (PSO) algorithms, called PSO-NM. The proposed algorithm can estimate load and RES output values by Weighted Least-Square (WLS) approach. Some practical considerations are var compensators, Voltage Regulators (VRs), Under Load Tap Changer (ULTC) transformer modeling, which usually have nonlinear and discrete characteristics, and unbalanced three-phase power flow equations. The comparison results with other evolutionary optimization algorithms such as original PSO, Honey Bee Mating Optimization (HBMO), Neural Networks (NNs), Ant Colony Optimization (ACO), and Genetic Algorithm (GA) for a test system demonstrate that PSO-NM is extremely effective and efficient for the DSE problems. (author)

  5. Platelet Counts, MPV and PDW in Culture Proven and Probable Neonatal Sepsis and Association of Platelet Counts with Mortality Rate

    International Nuclear Information System (INIS)

    Ahmad, M. S.; Waheed, A.

    2014-01-01

Objective: To determine the frequency of thrombocytopenia and thrombocytosis, the MPV (mean platelet volume) and PDW (platelet distribution width) in patients with probable and culture proven neonatal sepsis, and to determine any association between platelet counts and mortality rate. Study Design: Descriptive analytical study. Place and Duration of Study: NICU, Fazle Omar Hospital, from January 2011 to December 2012. Methodology: Cases of culture proven and probable neonatal sepsis, admitted in Fazle Omar Hospital, Rabwah, were included in the study. Platelet counts, MPV and PDW of the cases were recorded. Mortality was documented. Frequencies of thrombocytopenia and thrombocytosis (> 450,000/mm3) were ascertained. Mortality rates in different groups according to platelet counts were calculated and compared by chi-square test to check association. Results: Four hundred and sixty nine patients were included; 68 (14.5%) of them died. One hundred and thirty six (29%) had culture proven sepsis, and 333 (71%) were categorized as probable sepsis. Thrombocytopenia was present in 116 (24.7%), and thrombocytosis was present in 36 (7.7%) cases. Median platelet count was 213.0/mm3. Twenty eight (27.7%) patients with thrombocytopenia, and 40 (12.1%) cases with normal or raised platelet counts died (p < 0.001). Median MPV was 9.30, and median PDW was 12.30. MPV and PDW of the patients who died and who were discharged were not significantly different from each other. Conclusion: Thrombocytopenia is a common complication of neonatal sepsis. Those with thrombocytopenia have a higher mortality rate. No significant difference was present between PDW and MPV of the cases who survived and died. (author)
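The chi-square association between platelet counts and mortality can be approximately reproduced from the counts reported above, taking 28 deaths among the 116 thrombocytopenic cases and 40 among the remaining 353 (the quoted percentages suggest slightly smaller denominators were used in the paper, so this 2x2 table is an approximation). Assuming SciPy is available:

```python
from scipy.stats import chi2_contingency

# 2x2 table built from the reported counts: rows = platelet status, cols = outcome
#                 died  survived
table = [[28, 116 - 28],    # thrombocytopenia
         [40, 353 - 40]]    # normal or raised platelet count

# correction=False gives the plain Pearson chi-square (no Yates continuity correction)
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")  # p < 0.001, as reported
```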

  6. Impact of microbial count distributions on human health risk estimates

    DEFF Research Database (Denmark)

    Ribeiro Duarte, Ana Sofia; Nauta, Maarten

    2015-01-01

    Quantitative microbiological risk assessment (QMRA) is influenced by the choice of the probability distribution used to describe pathogen concentrations, as this may eventually have a large effect on the distribution of doses at exposure. When fitting a probability distribution to microbial...... enumeration data, several factors may have an impact on the accuracy of that fit. Analysis of the best statistical fits of different distributions alone does not provide a clear indication of the impact in terms of risk estimates. Thus, in this study we focus on the impact of fitting microbial distributions...... on risk estimates, at two different concentration scenarios and at a range of prevalence levels. By using five different parametric distributions, we investigate whether different characteristics of a good fit are crucial for an accurate risk estimate. Among the factors studied are the importance...

  7. Distribution of hadron intranuclear cascade for large distance from a source

    International Nuclear Information System (INIS)

    Bibin, V.L.; Kazarnovskij, M.V.; Serezhnikov, S.V.

    1985-01-01

    Analytical solution of the problem of three-component hadron cascade development for large distances from a source is obtained in the framework of a series of simplifying assumptions. It makes possible to understand physical mechanisms of the process studied and to obtain approximate asymptotic expressions for hadron distribution functions

  8. A latent class Poisson regression model for heterogeneous count data

    NARCIS (Netherlands)

Wedel, M; DeSarbo, WS; Bult; Ramaswamy

    1993-01-01

    In this paper an approach is developed that accommodates heterogeneity in Poisson regression models for count data. The model developed assumes that heterogeneity arises from a distribution of both the intercept and the coefficients of the explanatory variables. We assume that the mixing

  9. Polarization effect of CdZnTe imaging detector based on high energy γ source

    International Nuclear Information System (INIS)

    Li Miao; Xiao Shali; Wang Xi; Shen Min; Zhang Liuqiang; Cao Yulin; Chen Yuxiao

    2011-01-01

The internal electric potential distribution of a CdZnTe detector was derived by solving the Poisson equation with first-type (Dirichlet) boundary conditions, and the polarization effect of a CdZnTe pixellated detector imaging a 137 Cs γ source was investigated. The results of numerical calculation and experiment indicate that, for low charge density in the CdZnTe crystal, the electric potential distribution is mainly determined by the applied bias; in that regime the potential varies linearly with the applied bias, which produces a uniform electric field under low irradiation flux. Under high irradiation flux, however, the electric potential exhibits polarization and the electric field in the CdZnTe crystal is distorted. Consequently, charge carriers in the CdZnTe crystal drift towards the edge pixels of the irradiated region, so that the shut-off central pixels are surrounded by a ring of low-counting pixels. The polarization effect severely deteriorates the performance of the CdZnTe detector, and the event counts of the edge pixels of the irradiated region are reduced by about 70%. (authors)

  10. Strategies for satellite-based monitoring of CO2 from distributed area and point sources

    Science.gov (United States)

    Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David

    2014-05-01

    Atmospheric CO2 budgets are controlled by the strengths, as well as the spatial and temporal variabilities of CO2 sources and sinks. Natural CO2 sources and sinks are dominated by the vast areas of the oceans and the terrestrial biosphere. In contrast, anthropogenic and geogenic CO2 sources are dominated by distributed area and point sources, which may constitute as much as 70% of anthropogenic (e.g., Duren & Miller, 2012), and over 80% of geogenic emissions (Burton et al., 2013). Comprehensive assessments of CO2 budgets necessitate robust and highly accurate satellite remote sensing strategies that address the competing and often conflicting requirements for sampling over disparate space and time scales. Spatial variability: The spatial distribution of anthropogenic sources is dominated by patterns of production, storage, transport and use. In contrast, geogenic variability is almost entirely controlled by endogenic geological processes, except where surface gas permeability is modulated by soil moisture. Satellite remote sensing solutions will thus have to vary greatly in spatial coverage and resolution to address distributed area sources and point sources alike. Temporal variability: While biogenic sources are dominated by diurnal and seasonal patterns, anthropogenic sources fluctuate over a greater variety of time scales from diurnal, weekly and seasonal cycles, driven by both economic and climatic factors. Geogenic sources typically vary in time scales of days to months (geogenic sources sensu stricto are not fossil fuels but volcanoes, hydrothermal and metamorphic sources). Current ground-based monitoring networks for anthropogenic and geogenic sources record data on minute- to weekly temporal scales. Satellite remote sensing solutions would have to capture temporal variability through revisit frequency or point-and-stare strategies. Space-based remote sensing offers the potential of global coverage by a single sensor. However, no single combination of orbit

  11. Counting cormorants

    DEFF Research Database (Denmark)

    Bregnballe, Thomas; Carss, David N; Lorentsen, Svein-Håkon

    2013-01-01

    This chapter focuses on Cormorant population counts for both summer (i.e. breeding) and winter (i.e. migration, winter roosts) seasons. It also explains differences in the data collected from undertaking ‘day’ versus ‘roost’ counts, gives some definitions of the term ‘numbers’, and presents two...

  12. Poisson regression for modeling count and frequency outcomes in trauma research.

    Science.gov (United States)

    Gagnon, David R; Doron-LaMarca, Susan; Bell, Margret; O'Farrell, Timothy J; Taft, Casey T

    2008-10-01

    The authors describe how the Poisson regression method for analyzing count or frequency outcome variables can be applied in trauma studies. The outcome of interest in trauma research may represent a count of the number of incidents of behavior occurring in a given time interval, such as acts of physical aggression or substance abuse. Traditional linear regression approaches assume a normally distributed outcome variable with equal variances over the range of predictor variables, and may not be optimal for modeling count outcomes. An application of Poisson regression is presented using data from a study of intimate partner aggression among male patients in an alcohol treatment program and their female partners. Results of Poisson regression and linear regression models are compared.
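The contrast drawn above, counts modelled through a log link rather than by ordinary least squares, can be illustrated with a from-scratch Poisson regression fitted by iteratively reweighted least squares (IRLS). The data here are simulated, not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                     # a single continuous predictor
X = np.column_stack([np.ones(n), x])       # design matrix with intercept
y = rng.poisson(np.exp(0.5 + 0.8 * x))     # count outcome: log E[y] = 0.5 + 0.8 x

# Poisson regression (log link) fitted by IRLS
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    W = mu                                 # Poisson: Var(y) = mu, so weights are mu
    z = X @ beta + (y - mu) / mu           # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

print(beta)  # close to the true coefficients [0.5, 0.8]
# exp(beta[1]) is the multiplicative change in the expected count per unit of x
```

Unlike linear regression, the fitted variance grows with the mean, which is the property the authors argue makes this model appropriate for counts of incidents per time interval.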

  13. The Fractal Characteristics of the Landslides by Box-Counting and P-A Model

    Science.gov (United States)

    Wang, Zhiwang; Zhou, Fangfang; Cao, Hao

    2018-01-01

The landslide is a complicated phenomenon with nonlinear inter-reactions, and traditional theories and methods have difficulty capturing the uncertainty characteristics of the dynamic evolution of landslides. This paper applies box-counting and the P-A model to study the fractal characteristics of the geometric shape and spatial distribution of the landslide hazards in the study area from Badong county to Zigui county in the TGP reservoir region. The data obtained from the study area show power-law distributions of the geometric shape and spatial distribution of the landslides, and thus reveal fractal or self-similarity properties. The fractal dimensions DAP of the spatial distribution of landslides by the P-A model show that the DAP of the western landslides in the study area are smaller than those of the east, which indicates that the geometry of the eastern landslides is more irregular and complicated than that of the western ones. The results show that the box-counting model and the P-A model can be used to characterize the fractal characteristics of the geometric shape and spatial distribution of the landslides.
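The box-counting method referred to above works by covering the point set with boxes of side s, counting the occupied boxes N(s), and estimating the dimension D from the slope of log N(s) versus log s. A minimal sketch, with sanity checks on sets of known dimension:

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate the box-counting dimension of a 2-D point set:
    fit log N(s) ~ -D log s over the given box sizes."""
    points = np.asarray(points, dtype=float)
    counts = []
    for s in sizes:
        boxes = np.floor(points / s).astype(int)
        counts.append(len({tuple(b) for b in boxes}))  # number of occupied boxes
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity checks: a line segment has D = 1, a filled square D = 2
t = np.linspace(0.0, 1.0, 20000)
line = np.column_stack([t, t])
rng = np.random.default_rng(1)
square = rng.random((20000, 2))

sizes = np.array([0.1, 0.05, 0.025, 0.0125])
d_line = box_counting_dimension(line, sizes)
d_square = box_counting_dimension(square, sizes)
print(d_line)    # ~ 1
print(d_square)  # ~ 2
```

Intermediate values between 1 and 2, as reported for the landslide distributions, indicate a set that is more space-filling than a curve but less than an area.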

  14. Short communication: Repeatability of differential goat bulk milk culture and associations with somatic cell count, total bacterial count, and standard plate count.

    Science.gov (United States)

    Koop, G; Dik, N; Nielen, M; Lipman, L J A

    2010-06-01

    The aims of this study were to assess how different bacterial groups in bulk milk are related to bulk milk somatic cell count (SCC), bulk milk total bacterial count (TBC), and bulk milk standard plate count (SPC) and to measure the repeatability of bulk milk culturing. On 53 Dutch dairy goat farms, 3 bulk milk samples were collected at intervals of 2 wk. The samples were cultured for SPC, coliform count, and staphylococcal count and for the presence of Staphylococcus aureus. Furthermore, SCC (Fossomatic 5000, Foss, Hillerød, Denmark) and TBC (BactoScan FC 150, Foss) were measured. Staphylococcal count was correlated to SCC (r=0.40), TBC (r=0.51), and SPC (r=0.53). Coliform count was correlated to TBC (r=0.33), but not to any of the other variables. Staphylococcus aureus did not correlate to SCC. The contribution of the staphylococcal count to the SPC was 31%, whereas the coliform count comprised only 1% of the SPC. The agreement of the repeated measurements was low. This study indicates that staphylococci in goat bulk milk are related to SCC and make a significant contribution to SPC. Because of the high variation in bacterial counts, repeated sampling is necessary to draw valid conclusions from bulk milk culturing. 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  15. Analysis of correlated count data using generalised linear mixed models exemplified by field data on aggressive behaviour of boars

    Directory of Open Access Journals (Sweden)

    N. Mielenz

    2015-01-01

    Full Text Available Population-averaged and subject-specific models are available to evaluate count data when repeated observations per subject are present. The latter are also known in the literature as generalised linear mixed models (GLMM. In GLMM repeated measures are taken into account explicitly through random animal effects in the linear predictor. In this paper the relevant GLMMs are presented based on conditional Poisson or negative binomial distribution of the response variable for given random animal effects. Equations for the repeatability of count data are derived assuming normal distribution and logarithmic gamma distribution for the random animal effects. Using count data on aggressive behaviour events of pigs (barrows, sows and boars in mixed-sex housing, we demonstrate the use of the Poisson »log-gamma intercept«, the Poisson »normal intercept« and the »normal intercept« model with negative binomial distribution. Since not all count data can definitely be seen as Poisson or negative-binomially distributed, questions of model selection and model checking are examined. Emanating from the example, we also interpret the least squares means, estimated on the link as well as the response scale. Options provided by the SAS procedure NLMIXED for estimating model parameters and for estimating marginal expected values are presented.
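The "Poisson log-gamma intercept" structure described above, conditionally Poisson counts with a gamma-distributed random animal effect shared across repeated observations, marginally yields negative-binomially distributed counts whose variance exceeds the mean. A simulation sketch with hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(42)
n_animals, n_obs = 200, 10       # repeated observations per animal
base_rate = 3.0                  # hypothetical mean event rate

# Random animal effect: gamma with mean 1 (shape * scale = 1)
u = rng.gamma(shape=2.0, scale=0.5, size=n_animals)

# Conditional on its effect u_i, each animal's counts are Poisson(base_rate * u_i)
counts = rng.poisson(base_rate * u[:, None], size=(n_animals, n_obs))

y = counts.ravel()
print(f"mean = {y.mean():.2f}, var = {y.var():.2f}")
# Marginally the counts are negative binomial: Var = mu + mu^2/shape > mu,
# i.e. overdispersed relative to a plain Poisson model with the same mean.
```

This overdispersion is exactly why the abstract stresses model checking: fitting a plain Poisson model to such data understates the variance induced by the between-animal heterogeneity.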

  16. Bayesian inference from count data using discrete uniform priors.

    Directory of Open Access Journals (Sweden)

    Federico Comoglio

Full Text Available We consider a set of sample counts obtained by sampling arbitrary fractions of a finite volume containing a homogeneously dispersed population of identical objects. We report a Bayesian derivation of the posterior probability distribution of the population size using a binomial likelihood and non-conjugate, discrete uniform priors under sampling with or without replacement. Our derivation yields a computationally feasible formula that can prove useful in a variety of statistical problems involving absolute quantification under uncertainty. We implemented our algorithm in the R package dupiR and compared it with a previously proposed Bayesian method based on a Gamma prior. As a showcase, we demonstrate that our inference framework can be used to estimate bacterial survival curves from measurements characterized by extremely low or zero counts and rather high sampling fractions. All in all, we provide a versatile, general purpose algorithm to infer population sizes from count data, which can find application in a broad spectrum of biological and physical problems.
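For the binomial-likelihood case described above, the posterior over the population size n under a discrete uniform prior can be computed directly on a grid. This is an independent sketch of the idea, not the dupiR implementation, and assumes SciPy; the count, sampling fraction, and grid bound are hypothetical:

```python
import numpy as np
from scipy.stats import binom

def posterior_population_size(k, f, n_max=2000):
    """Posterior P(n | k) for population size n, given k objects counted in a
    sampled fraction f of the volume: binomial likelihood
    P(k | n) = C(n, k) f^k (1 - f)^(n - k), discrete uniform prior on 0..n_max."""
    n = np.arange(n_max + 1)
    like = binom.pmf(k, n, f)     # automatically zero for n < k
    post = like / like.sum()      # normalize over the prior support
    return n, post

n, post = posterior_population_size(k=30, f=0.1)
mean_n = (n * post).sum()
map_n = int(n[post.argmax()])
print(f"posterior mean = {mean_n:.0f}, MAP = {map_n}")
```

For this likelihood and a flat prior the posterior mean is k + (k + 1)(1 - f)/f, about 309 for the values above, which the grid computation reproduces as long as n_max comfortably exceeds k/f.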

  17. Algorithms for random generation and counting a Markov chain approach

    CERN Document Server

    Sinclair, Alistair

    1993-01-01

    This monograph studies two classical computational problems: counting the elements of a finite set of combinatorial structures, and generating them at random from some probability distribution. Apart from their intrinsic interest, these problems arise naturally in many branches of mathematics and the natural sciences.

  18. On the fast response of channel electron multipliers in counting mode operation

    International Nuclear Information System (INIS)

    Belyaevskij, O.A.; Gladyshev, I.L.; Korobochko, Yu.S.; Mineev, V.I.

    1983-01-01

The dependence of the amplitude distribution of pulses at the output of channel electron multipliers (CEM), and of the monitoring efficiency, on the counting rate at different supply voltages is determined. It is shown that the maximum counting rate of a CEM reaches 6x10{sup 5} s{sup -1} in short-term and 10{sup 5} s{sup -1} in long-term operation, using monitoring equipment with an operation threshold of 2.5 mV

  19. Using a topographic index to distribute variable source area runoff predicted with the SCS curve-number equation

    Science.gov (United States)

    Lyon, Steve W.; Walter, M. Todd; Gérard-Marchant, Pierre; Steenhuis, Tammo S.

    2004-10-01

    Because the traditional Soil Conservation Service curve-number (SCS-CN) approach continues to be used ubiquitously in water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed and tested a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Predicting the location of source areas is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point-source pollution. The method presented here used the traditional SCS-CN approach to predict runoff volume and spatial extent of saturated areas and a topographic index, like that used in TOPMODEL, to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was applied to two subwatersheds of the Delaware basin in the Catskill Mountains region of New York State and one watershed in south-eastern Australia to produce runoff-probability maps. Observed saturated area locations in the watersheds agreed with the distributed CN-VSA method. Results showed good agreement with those obtained from the previously validated soil moisture routing (SMR) model. When compared with the traditional SCS-CN method, the distributed CN-VSA method predicted a similar total volume of runoff, but vastly different locations of runoff generation. Thus, the distributed CN-VSA approach provides a physically based method that is simple enough to be incorporated into water quality models, and other tools that currently use the traditional SCS-CN method, while still adhering to the principles of VSA hydrology.
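The traditional SCS-CN runoff equation that the distributed CN-VSA method redistributes spatially is Q = (P - Ia)^2 / (P - Ia + S), with potential retention S = 1000/CN - 10 and initial abstraction Ia = 0.2 S (depths in inches). A minimal sketch of the standard form:

```python
def scs_cn_runoff(p_inches, cn):
    """Runoff depth Q (inches) from the traditional SCS curve-number equation:
    S = 1000/CN - 10, Ia = 0.2 * S, Q = (P - Ia)^2 / (P - Ia + S) for P > Ia."""
    s = 1000.0 / cn - 10.0       # potential maximum retention
    ia = 0.2 * s                 # initial abstraction
    if p_inches <= ia:
        return 0.0               # no runoff until the initial abstraction is met
    return (p_inches - ia) ** 2 / (p_inches - ia + s)

# Higher curve numbers (less infiltration capacity) produce more runoff
print(scs_cn_runoff(3.0, 70))  # ~ 0.71 in
print(scs_cn_runoff(3.0, 90))  # ~ 1.98 in
```

The distributed CN-VSA approach keeps this relation for the total runoff volume while using a topographic index to decide where in the watershed that volume is generated.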

  20. Digital intelligence sources transporter

    International Nuclear Information System (INIS)

    Zhang Zhen; Wang Renbo

    2011-01-01

Starting from particle and ray counting, infrared data communication, real-time monitoring and alarming, GPRS and related issues, the digital management of radioactive sources is realized: real-time monitoring of all aspects, including the storing, transporting and using of radioactive sources, is completed by framing an intelligent radioactive source transporter, and as a result reliable security supervision of radioactive sources is achieved. (authors)

  1. Characterization of a neutron source of 239PuBe

    International Nuclear Information System (INIS)

    Hernandez V, R.; Chacon R, A.; Hernandez D, V. M.; Mercado, G. A.; Vega C, H. R.; Ramirez G, J.

    2009-10-01

The spectrum, equivalent dose and environmental equivalent dose of a 239 PuBe source have been determined. The appropriate handling of a neutron source depends on knowledge of its characteristics, such as its energy distribution, total fluence rate and dosimetric quantities. Many facilities have no spectrometer with which to determine the spectrum, and instead use area monitors that give a dosimetric quantity from the measured fluence rate and conversion factors; however, this procedure has many limitations, and it is preferable to measure the spectrum and calculate the dosimetric quantities of interest from that information. In this work a Bonner sphere spectrometer with a 6 LiI(Eu) scintillator was used to obtain the count rates produced, at a distance of 100 cm, by a 239 PuBe source of 1.85E(11) Bq. The spectrum was reconstructed from the count rates using the BUNKIUT code and the UTA4 response matrix. From the spectrum, the source intensity, total fluence rate, average energy, equivalent dose rate, environmental equivalent dose rate, equivalent dose coefficient and environmental equivalent dose coefficient were calculated. The equivalent dose and environmental equivalent dose were also measured with two neutron area monitors, an Eberline ASP-1 and a Berthold LB 6411. The determined values were compared with those reported in the literature and found to agree within 17%. (Author)

  2. Acoustic Emission Source Location Using a Distributed Feedback Fiber Laser Rosette

    Directory of Open Access Journals (Sweden)

    Fang Li

    2013-10-01

Full Text Available This paper proposes an approach for acoustic emission (AE) source localization in a large marble stone using distributed feedback (DFB) fiber lasers. The aim of this study is to detect damage in structures such as those found in civil applications. The directional sensitivity of the DFB fiber laser is investigated by calculating a location coefficient using digital signal analysis: autocorrelation is used to extract the location coefficient from periodic AE signals, and wavelet packet energy is calculated to obtain the location coefficient of a burst AE source. Normalization is applied to eliminate the influence of the distance and intensity of the AE source. A new location algorithm based on the location coefficient is then presented and tested to determine the location of an AE source using a Delta (Δ) DFB fiber laser rosette configuration. The advantages of the proposed algorithm over traditional methods based on fiber Bragg gratings (FBGs) include higher strain resolution for AE detection and the ability to take into account two different types of AE source for location.

  3. Logistic quantile regression provides improved estimates for bounded avian counts: a case study of California Spotted Owl fledgling production

    Science.gov (United States)

    Brian S. Cade; Barry R. Noon; Rick D. Scherer; John J. Keane

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical...

  4. Neutron diffraction measurements at the INES diffractometer using a neutron radiative capture based counting technique

    Energy Technology Data Exchange (ETDEWEB)

    Festa, G. [Centro NAST, Universita degli Studi di Roma Tor Vergata, Roma (Italy); Pietropaolo, A., E-mail: antonino.pietropaolo@roma2.infn.it [Centro NAST, Universita degli Studi di Roma Tor Vergata, Roma (Italy); Grazzi, F.; Barzagli, E. [CNR-ISC Firenze (Italy); Scherillo, A. [CNR-ISC Firenze (Italy); ISIS facility Rutherford Appleton Laboratory (United Kingdom); Schooneveld, E.M. [ISIS facility Rutherford Appleton Laboratory (United Kingdom)

    2011-10-21

    The global shortage of {sup 3}He gas is an issue to be addressed in neutron detection. In the context of the research and development activity related to the replacement of {sup 3}He for neutron counting systems, neutron diffraction measurements performed on the INES beam line at the ISIS pulsed spallation neutron source are presented. For these measurements two different neutron counting devices have been used: a 20 bar pressure squashed {sup 3}He tube and a Yttrium-Aluminum-Perovskite scintillation detector. The scintillation detector was coupled to a cadmium sheet that registers the prompt radiative capture gamma rays generated by the (n,{gamma}) nuclear reactions occurring in cadmium. The assessment of the scintillator based counting system was done by performing a Rietveld refinement analysis on the diffraction pattern from an ancient Japanese blade and comparing the results with those obtained by a {sup 3}He tube placed at the same angular position. The results obtained demonstrate the considerable potential of the proposed counting approach based on the radiative capture gamma rays at spallation neutron sources.

  5. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local Universe

    Energy Technology Data Exchange (ETDEWEB)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene, E-mail: mertsch@nbi.ku.dk, E-mail: mohamed.rameez@nbi.ku.dk, E-mail: tamborra@nbi.ku.dk [Niels Bohr International Academy, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen (Denmark)

    2017-03-01

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between 'warm' spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10{sup −6} Mpc{sup −3} and neutrino luminosity L{sub ν} ≲ 10{sup 42} erg s{sup −1} (10{sup 41} erg s{sup −1}) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring statistically significant direct observation of a point source.

  6. Fast coincidence counting with active inspection systems

    Science.gov (United States)

    Mullens, J. A.; Neal, J. S.; Hausladen, P. A.; Pozzi, S. A.; Mihalczo, J. T.

    2005-12-01

    This paper describes measurements of 2nd and 3rd order time coincidence distributions with a GHz processor that synchronously samples 5 or 10 channels of data from radiation detectors near fissile material. Time coincidence distributions are measured on-line between detectors or between detectors and an external stimulating source. Detector-to-detector correlations are also useful for passive measurements. The processor also measures the number of times n pulses occur in a selectable time window and compares this multiplet distribution to a Poisson distribution as a method of determining the occurrence of fission. The detectors respond to radiation emitted in the fission process, induced internally by inherent sources or by external sources such as LINACs or DT generators (either pulsed or steady state with alpha detectors). Data can be acquired from prompt emission during the source pulse, prompt emissions immediately after the source pulse, or delayed emissions between source pulses. These types of time coincidence measurements (occurring on the time scale of the fission chain multiplication processes for nuclear weapons grade U and Pu) are useful for determining the presence of these fissile materials and quantifying the amount, and are applicable to counter terrorism and nuclear material control and accountability. This paper presents the results for a variety of measurements.
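    The multiplet-versus-Poisson comparison described above can be illustrated with a minimal simulation. This is a sketch only: the burst multiplicities and decay constant below are invented for illustration, not the authors' measured values or processor logic.

    ```python
    import random

    random.seed(1)

    def window_counts(events, n_windows, window):
        """Tally how many pulses fall in each fixed time window."""
        counts = [0] * n_windows
        for t in events:
            i = int(t / window)
            if i < n_windows:
                counts[i] += 1
        return counts

    def variance_to_mean(counts):
        """Feynman-type excess statistic: ~1 for Poisson, >1 for correlated bursts."""
        n = len(counts)
        mean = sum(counts) / n
        var = sum((c - mean) ** 2 for c in counts) / n
        return var / mean

    T, window = 1000.0, 1.0
    n_windows = int(T / window)

    # Random (Poisson) source: independent arrival times.
    poisson_events = [random.uniform(0, T) for _ in range(5000)]

    # Fission-like source: each trigger emits a short burst of correlated pulses.
    burst_events = []
    for _ in range(1000):
        t0 = random.uniform(0, T)
        for _ in range(random.randint(2, 7)):      # invented chain multiplicity
            burst_events.append(t0 + random.expovariate(50.0))

    vm_poisson = variance_to_mean(window_counts(poisson_events, n_windows, window))
    vm_burst = variance_to_mean(window_counts(burst_events, n_windows, window))
    print(vm_poisson)  # close to 1: consistent with a Poisson multiplet distribution
    print(vm_burst)    # well above 1: excess variance flags correlated emission
    ```

    The excess of the multiplet distribution's variance over its mean is the signature that distinguishes fission chains from a random source in such measurements.
    
    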

  7. Fast coincidence counting with active inspection systems

    International Nuclear Information System (INIS)

    Mullens, J.A.; Neal, J.S.; Hausladen, P.A.; Pozzi, S.A.; Mihalczo, J.T.

    2005-01-01

    This paper describes measurements of 2nd and 3rd order time coincidence distributions with a GHz processor that synchronously samples 5 or 10 channels of data from radiation detectors near fissile material. Time coincidence distributions are measured on-line between detectors or between detectors and an external stimulating source. Detector-to-detector correlations are also useful for passive measurements. The processor also measures the number of times n pulses occur in a selectable time window and compares this multiplet distribution to a Poisson distribution as a method of determining the occurrence of fission. The detectors respond to radiation emitted in the fission process, induced internally by inherent sources or by external sources such as LINACs or DT generators (either pulsed or steady state with alpha detectors). Data can be acquired from prompt emission during the source pulse, prompt emissions immediately after the source pulse, or delayed emissions between source pulses. These types of time coincidence measurements (occurring on the time scale of the fission chain multiplication processes for nuclear weapons grade U and Pu) are useful for determining the presence of these fissile materials and quantifying the amount, and are applicable to counter terrorism and nuclear material control and accountability. This paper presents the results for a variety of measurements.

  8. Statistical Methods for Unusual Count Data

    DEFF Research Database (Denmark)

    Guthrie, Katherine A.; Gammill, Hilary S.; Kamper-Jørgensen, Mads

    2016-01-01

    microchimerism data present challenges for statistical analysis, including a skewed distribution, excess zero values, and occasional large values. Methods for comparing microchimerism levels across groups while controlling for covariates are not well established. We compared statistical models for quantitative...... microchimerism values, applied to simulated data sets and 2 observed data sets, to make recommendations for analytic practice. Modeling the level of quantitative microchimerism as a rate via Poisson or negative binomial model with the rate of detection defined as a count of microchimerism genome equivalents per...

  9. Temperature distribution of a simplified rotor due to a uniform heat source

    Science.gov (United States)

    Welzenbach, Sarah; Fischer, Tim; Meier, Felix; Werner, Ewald; kyzy, Sonun Ulan; Munz, Oliver

    2018-03-01

    In gas turbines, high combustion efficiency as well as operational safety are required. Thus, labyrinth seal systems with honeycomb liners are commonly used. In the case of rubbing events in the seal system, the components can be damaged due to cyclic thermal and mechanical loads. Temperature differences occurring at labyrinth seal fins during rubbing events can be determined by considering a single heat source acting periodically on the surface of a rotating cylinder. Existing literature analysing the temperature distribution on rotating cylindrical bodies due to a stationary heat source is reviewed. The temperature distribution on the circumference of a simplified labyrinth seal fin is calculated using an available and easy to implement analytical approach. A finite element model of the simplified labyrinth seal fin is created and the numerical results are compared to the analytical results. The temperature distributions calculated by the analytical and the numerical approaches coincide for low sliding velocities, while there are discrepancies of the calculated maximum temperatures for higher sliding velocities. The use of the analytical approach allows the conservative estimation of the maximum temperatures arising in labyrinth seal fins during rubbing events. At the same time, high calculation costs can be avoided.

  10. Combining counts and incidence data: an efficient approach for estimating the log-normal species abundance distribution and diversity indices.

    Science.gov (United States)

    Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G

    2012-10-01

    Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires the presence of species in a sample to be assessed, while counts of the number of individuals per species are required for only a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample where incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample and at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled using the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and comprehension of the left tail of the species abundance distribution. We show how to choose the scale of sample size needed for a compromise between information gained, accuracy of the estimates and cost expended when assessing biological diversity. The sample size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample and the evenness of the community.
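    The classical Poisson log-normal kernel that the authors modify can be evaluated by integrating the Poisson pmf over a log-normally distributed rate. The sketch below shows only this standard kernel by simple numerical quadrature; the incidence-data modification described in the abstract is not implemented, and the parameter values are illustrative.

    ```python
    import math

    def poisson_lognormal_pmf(k, mu, sigma, n_grid=2000):
        """P(K = k) when the Poisson rate lambda is LogNormal(mu, sigma),
        by trapezoidal integration over x = log(lambda)."""
        lo, hi = mu - 8 * sigma, mu + 8 * sigma
        h = (hi - lo) / n_grid
        total = 0.0
        for i in range(n_grid + 1):
            x = lo + i * h
            lam = math.exp(x)
            log_pois = k * x - lam - math.lgamma(k + 1)       # log Poisson(k; e^x)
            log_norm = (-0.5 * ((x - mu) / sigma) ** 2
                        - math.log(sigma * math.sqrt(2 * math.pi)))  # log N(x; mu, sigma)
            w = 0.5 if i in (0, n_grid) else 1.0              # trapezoid end weights
            total += w * math.exp(log_pois + log_norm)
        return total * h

    pmf = [poisson_lognormal_pmf(k, mu=0.0, sigma=1.0) for k in range(200)]
    mean = sum(k * p for k, p in enumerate(pmf))
    print(sum(pmf))   # should be ~1 (probabilities sum to one)
    print(mean)       # should be ~exp(mu + sigma^2/2) = exp(0.5)
    ```

    The mean of the fitted distribution equals the mean of the underlying log-normal rate, which is one quick sanity check on the quadrature.
    
    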

  11. Counting Raindrops and the Distribution of Intervals Between Them.

    Science.gov (United States)

    Van De Giesen, N.; Ten Veldhuis, M. C.; Hut, R.; Pape, J. J.

    2017-12-01

    Drop size distributions are often assumed to follow a generalized gamma function, characterized by one parameter, Λ [1]. In principle, this Λ can be estimated by measuring the arrival rate of raindrops. The arrival rate should follow a Poisson distribution. By measuring the distribution of the time intervals between drops arriving at a certain surface area, one should be able not only to estimate the arrival rate but also to assess the robustness of the underlying steady-state assumption. It is important to note that many rainfall radar systems also assume fixed drop size distributions, and associated arrival rates, to derive rainfall rates. By testing these relationships with a simple device, we will be able to improve both land-based and space-based radar rainfall estimates. Here, an open-hardware sensor design is presented, consisting of a 3D printed housing for a piezoelectric element, some simple electronics and an Arduino. The target audience for this device are citizen scientists who want to contribute to collecting rainfall information beyond the standard rain gauge. The core of the sensor is a simple piezo-buzzer, as found in many devices such as watches and fire alarms. When a raindrop falls on a piezo-buzzer, a small voltage is generated, which can be used to register the drop's arrival time. By registering the intervals between raindrops, the associated Poisson distribution can be estimated. In addition to the hardware, we will present the first results of a measuring campaign in Myanmar that ran from August to October 2017. All design files and descriptions are available through GitHub: https://github.com/nvandegiesen/Intervalometer. This research is partially supported through the TWIGA project, funded by the European Commission's H2020 program under call SC5-18-2017 `Novel in-situ observation systems'. Reference [1]: Uijlenhoet, R., and J. N. M. Stricker. "A consistent rainfall parameterization based on the exponential raindrop size
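    For a steady-state Poisson arrival process, the intervals such an intervalometer registers should be exponentially distributed, so the arrival rate can be estimated from the mean interval and the steady-state assumption checked via the coefficient of variation. A minimal sketch of that logic, with an assumed arrival rate rather than campaign data:

    ```python
    import random

    random.seed(42)

    rate = 20.0     # assumed drop arrival rate, drops per second (illustrative)
    n = 20000
    intervals = [random.expovariate(rate) for _ in range(n)]  # inter-drop intervals

    mean_dt = sum(intervals) / n
    est_rate = 1.0 / mean_dt                       # arrival-rate estimate from intervals
    var_dt = sum((dt - mean_dt) ** 2 for dt in intervals) / n
    cv = var_dt ** 0.5 / mean_dt                   # std/mean: equals 1 for an exponential

    print(est_rate)  # close to the true rate
    print(cv)        # close to 1 supports the Poisson (steady-state) assumption
    ```

    A coefficient of variation departing markedly from 1 in field data would indicate that the steady-state Poisson assumption, and hence the simple rate-to-Λ link, needs revisiting.
    
    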

  12. Repeatability of differential goat bulk milk culture and associations with somatic cell count, total bacterial count, and standard plate count

    NARCIS (Netherlands)

    Koop, G.; Dik, N.; Nielen, M.; Lipman, L.J.A.

    2010-01-01

    The aims of this study were to assess how different bacterial groups in bulk milk are related to bulk milk somatic cell count (SCC), bulk milk total bacterial count (TBC), and bulk milk standard plate count (SPC) and to measure the repeatability of bulk milk culturing. On 53 Dutch dairy goat farms,

  13. North Slope, Alaska: Source rock distribution, richness, thermal maturity, and petroleum charge

    Science.gov (United States)

    Peters, K.E.; Magoon, L.B.; Bird, K.J.; Valin, Z.C.; Keller, M.A.

    2006-01-01

    Four key marine petroleum source rock units were identified, characterized, and mapped in the subsurface to better understand the origin and distribution of petroleum on the North Slope of Alaska. These marine source rocks, from oldest to youngest, include four intervals: (1) Middle-Upper Triassic Shublik Formation, (2) basal condensed section in the Jurassic-Lower Cretaceous Kingak Shale, (3) Cretaceous pebble shale unit, and (4) Cretaceous Hue Shale. Well logs for more than 60 wells and total organic carbon (TOC) and Rock-Eval pyrolysis analyses for 1183 samples in 125 well penetrations of the source rocks were used to map the present-day thickness of each source rock and the quantity (TOC), quality (hydrogen index), and thermal maturity (Tmax) of the organic matter. Based on assumptions related to carbon mass balance and regional distributions of TOC, the present-day source rock quantity and quality maps were used to determine the extent of fractional conversion of the kerogen to petroleum and to map the original TOC (TOCo) and the original hydrogen index (HIo) prior to thermal maturation. The quantity and quality of oil-prone organic matter in Shublik Formation source rock generally exceeded that of the other units prior to thermal maturation (commonly TOCo > 4 wt.% and HIo > 600 mg hydrocarbon/g TOC), although all are likely sources for at least some petroleum on the North Slope. We used Rock-Eval and hydrous pyrolysis methods to calculate expulsion factors and petroleum charge for each of the four source rocks in the study area. Without attempting to identify the correct methods, we conclude that calculations based on Rock-Eval pyrolysis overestimate expulsion factors and petroleum charge because low pressure and rapid removal of thermally cracked products by the carrier gas retards cross-linking and pyrobitumen formation that is otherwise favored by natural burial maturation. Expulsion factors and petroleum charge based on hydrous pyrolysis may also be high.

  14. Time evolution of distribution functions in dissipative environments

    International Nuclear Information System (INIS)

    Hu Li-Yun; Chen Fei; Wang Zi-Sheng; Fan Hong-Yi

    2011-01-01

    By introducing the thermal entangled state representation, we investigate the time evolution of distribution functions in dissipative channels by bridging the relation between the initial distribution function and the distribution function at any later time. We find that most of them are expressed as integrations over the Laguerre-Gaussian function. Furthermore, as applications, we derive the time evolution of the photon-counting distribution by bridging the relation between the initial distribution function and the photon-counting distribution at any later time, and the time evolution of the R-function characteristic of nonclassicality depth. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)

  15. MANTA--an open-source, high density electrophysiology recording suite for MATLAB.

    Science.gov (United States)

    Englitz, B; David, S V; Sorenson, M D; Shamma, S A

    2013-01-01

    The distributed nature of nervous systems makes it necessary to record from a large number of sites in order to decipher the neural code, whether single cell, local field potential (LFP), micro-electrocorticogram (μECoG), electroencephalographic (EEG), magnetoencephalographic (MEG) or in vitro micro-electrode array (MEA) data are considered. High channel-count recordings also optimize the yield of a preparation and the efficiency of time invested by the researcher. Currently, data acquisition (DAQ) systems with high channel counts (>100) can be purchased from a limited number of companies at considerable prices. These systems are typically closed-source and thus prohibit custom extensions or improvements by end users. We have developed MANTA, an open-source MATLAB-based DAQ system, as an alternative to existing options. MANTA combines high channel counts (up to 1440 channels/PC), usage of analog or digital headstages and low per-channel cost, and has been in use for more than 1 year, recording reliably from 128 channels. It offers a growing list of features, including integrated spike sorting, PSTH and CSD display and fully customizable electrode array geometry (including 3D arrays), some of which are not available in commercial systems. MANTA runs on a typical PC and communicates via TCP/IP and can thus be easily integrated with existing stimulus generation/control systems in a lab at a fraction of the cost of commercial systems. With modern neuroscience developing rapidly, MANTA provides a flexible platform that can be rapidly adapted to the needs of new analyses and questions. Being open-source, the development of MANTA can outpace commercial solutions in functionality, while maintaining a low price-point.

  16. Standardization of {sup 241}Am by digital coincidence counting, liquid scintillation counting and defined solid angle counting

    Energy Technology Data Exchange (ETDEWEB)

    Balpardo, C., E-mail: balpardo@cae.cnea.gov.a [Laboratorio de Metrologia de Radioisotopos, CNEA, Buenos Aires (Argentina); Capoulat, M.E.; Rodrigues, D.; Arenillas, P. [Laboratorio de Metrologia de Radioisotopos, CNEA, Buenos Aires (Argentina)

    2010-07-15

    The nuclide {sup 241}Am decays by alpha emission to {sup 237}Np. Most of the decays (84.6%) populate the excited level of {sup 237}Np with an energy of 59.54 keV. Digital coincidence counting was applied to standardize a solution of {sup 241}Am by alpha-gamma coincidence counting with efficiency extrapolation. Electronic discrimination was implemented with a pressurized proportional counter and the results were compared with two other independent techniques: liquid scintillation counting using the logical sum of double coincidences in a TDCR array, and defined solid angle counting taking into account activity inhomogeneity in the active deposit. The results show consistency between the three methods within 0.3%. An ampoule of this solution will be sent to the International Reference System (SIR) during 2009. Uncertainties were analysed and compared in detail for the three applied methods.

  17. Light source distribution and scattering phase function influence light transport in diffuse multi-layered media

    Science.gov (United States)

    Vaudelle, Fabrice; L'Huillier, Jean-Pierre; Askoura, Mohamed Lamine

    2017-06-01

    Red and near-infrared light is often used as a useful diagnostic and imaging probe for highly scattering media such as biological tissues, fruits and vegetables. Part of the diffusively reflected light gives interesting information related to the tissue subsurface, whereas light recorded at further distances may probe deeper into the interrogated turbid tissues. However, modelling diffusive events occurring at short source-detector distances requires considering both the distribution of the light sources and the scattering phase functions. In this report, a modified Monte Carlo model is used to compute light transport in curved and multi-layered tissue samples which are covered with a thin and highly diffusing tissue layer. Different light source distributions (ballistic, diffuse or Lambertian) are tested with specific scattering phase functions (modified or not modified Henyey-Greenstein, Gegenbauer and Mie) to compute the amount of backscattered and transmitted light in apple and human skin structures. Comparisons between simulation results and experiments carried out with a multispectral imaging setup confirm the soundness of the theoretical strategy and may explain the role of the skin in light transport in whole and half-cut apples. Other computational results show that a Lambertian source distribution combined with a Henyey-Greenstein phase function provides a higher photon density in the stratum corneum than in the upper dermis layer. Furthermore, it is also shown that the scattering phase function may affect the shape and the magnitude of the Bidirectional Reflectance Distribution Function (BRDF) exhibited at the skin surface.
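    The Henyey-Greenstein phase function used in such Monte Carlo photon-transport models has a closed-form inverse CDF, so deflection angles can be sampled directly at each scattering step. A minimal sketch (g = 0.9 is an assumed tissue-like anisotropy, not a value fitted in this study):

    ```python
    import random

    random.seed(0)

    def sample_hg_cos(g):
        """Sample cos(theta) from the Henyey-Greenstein phase function
        via its analytic inverse CDF."""
        u = random.random()
        if abs(g) < 1e-6:
            return 2 * u - 1                      # isotropic limit
        s = (1 - g * g) / (1 - g + 2 * g * u)
        return (1 + g * g - s * s) / (2 * g)

    g = 0.9                                        # assumed forward anisotropy
    n = 100000
    mean_cos = sum(sample_hg_cos(g) for _ in range(n)) / n
    print(mean_cos)  # the mean cosine of the HG phase function equals g
    ```

    Recovering the anisotropy factor g as the sample mean of cos(theta) is the standard check that the sampler is correct.
    
    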

  18. Photon-counting multifactor optical encryption and authentication

    International Nuclear Information System (INIS)

    Pérez-Cabré, E; Millán, M S; Mohammed, E A; Saadon, H L

    2015-01-01

    The multifactor optical encryption authentication method [Opt. Lett., 31 721-3 (2006)] reinforces optical security by allowing the simultaneous authentication of up to four factors. In this work, the photon-counting imaging technique is applied to the multifactor encrypted function so that a sparse phase-only distribution is generated for the encrypted data. The integration of both techniques permits an increased capacity for signal hiding with simultaneous data reduction for better fulfilling the general requirements of protection, storage and transmission. Cryptanalysis of the proposed method is carried out in terms of chosen-plaintext and chosen-ciphertext attacks. Although the multifactor authentication process is not substantially altered by those attacks, its integration with the photon-counting imaging technique prevents possible partial disclosure of any encrypted factor, thus increasing the security level of the overall process. Numerical experiments and results are provided and discussed. (paper)

  19. Single-photon sources based on single molecules in solids

    International Nuclear Information System (INIS)

    Moerner, W E

    2004-01-01

    Single molecules in suitable host crystals have been demonstrated to be useful single-photon emitters both at liquid-helium temperatures and at room temperature. The low-temperature source achieved controllable emission of single photons from a single terrylene molecule in p-terphenyl by an adiabatic rapid passage technique. In contrast with almost all other single-molecule systems, terrylene single molecules show extremely high photostability under continuous, high-intensity irradiation. A room-temperature source utilizing this material has been demonstrated, in which fast pumping into vibrational sidebands of the electronically excited state achieved efficient inversion of the emissive level. This source yielded a single-photon emission probability p(1) of 0.86 at a detected count rate near 300 000 photons s⁻¹, with very small probability of emission of more than one photon. Thus, single molecules in solids can be considered as contenders for applications of single-photon sources such as quantum key distribution.

  20. Inconsistencies in authoritative national paediatric workforce data sources.

    Science.gov (United States)

    Allen, Amy R; Doherty, Richard; Hilton, Andrew M; Freed, Gary L

    2017-12-01

    Objective National health workforce data are used in workforce projections, policy and planning. If data to measure the current effective clinical medical workforce are not consistent, accurate and reliable, policy options pursued may not be aligned with Australia's actual needs. The aim of the present study was to identify any inconsistencies and contradictions in the numerical count of paediatric specialists in Australia, and discuss issues related to the accuracy of collection and analysis of medical workforce data. Methods This study compared respected national data sources regarding the number of medical practitioners in eight fields of paediatric speciality medical (non-surgical) practice. It also counted the number of doctors listed on the websites of speciality paediatric hospitals and clinics as practicing in these eight fields. Results Counts of medical practitioners varied markedly for all specialties across the data sources examined. In some fields examined, the range of variability across data sources exceeded 450%. Conclusions The national datasets currently available from federal and speciality sources do not provide consistent or reliable counts of the number of medical practitioners. The lack of an adequate baseline for the workforce prevents accurate predictions of future needs to provide the best possible care of children in Australia. What is known about the topic? Various national data sources contain counts of the number of medical practitioners in Australia. These data are used in health workforce projections, policy and planning. What does this paper add? The present study found that the current data sources do not provide consistent or reliable counts of the number of practitioners in eight selected fields of paediatric speciality practice. There are several potential issues in the way workforce data are collected or analysed that cause the variation between sources to occur. What are the implications for practitioners? Without accurate

  1. Comparison of approximate formulas for decision levels and detection limits for paired counting with the exact results

    International Nuclear Information System (INIS)

    Potter, W.E.

    2005-01-01

    The exact probability density function for paired counting can be expressed in terms of modified Bessel functions of integral order when the expected blank count is known. Exact decision levels and detection limits can be computed in a straightforward manner. For many applications, Gaussian approximations with half-integer (continuity) corrections yield satisfactory results for decision levels. When there is concern about the uncertainty of the expected value of the blank count, a way to bound the errors of both types using confidence intervals for the expected blank count is discussed. (author)
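    The exact distribution for paired counting (gross minus blank, both Poisson) is the Skellam distribution, whose pmf involves the modified Bessel functions of integral order mentioned in the abstract. A sketch of the exact decision-level computation, with the Bessel function evaluated by its power series in log space (the blank mean and α below are illustrative, not values from the paper):

    ```python
    import math

    def bessel_i(k, x, terms=120):
        """Modified Bessel function I_k(x), integer k >= 0, via its power series,
        with each term computed in log space to avoid overflow."""
        return sum(math.exp((2 * m + k) * math.log(x / 2.0)
                            - math.lgamma(m + 1) - math.lgamma(m + k + 1))
                   for m in range(terms))

    def skellam_pmf(d, mu1, mu2):
        """P(N1 - N2 = d) for independent Poisson counts N1 ~ mu1, N2 ~ mu2."""
        return (math.exp(-(mu1 + mu2) + 0.5 * d * math.log(mu1 / mu2))
                * bessel_i(abs(d), 2.0 * math.sqrt(mu1 * mu2)))

    def decision_level(mu_blank, alpha=0.05, d_max=200):
        """Smallest net count c with P(gross - blank >= c | blank only) <= alpha."""
        tails, tail = {}, 0.0
        for d in range(d_max, -1, -1):             # accumulate upper tail downward
            tail += skellam_pmf(d, mu_blank, mu_blank)
            tails[d] = tail                        # tails[c] = P(D >= c)
        for c in range(0, d_max + 1):
            if tails[c] <= alpha:
                return c

    print(skellam_pmf(0, 1, 1))    # exp(-2) * I_0(2) ~ 0.3085
    dl = decision_level(10.0)
    print(dl)                      # exact decision level for an expected blank of 10
    ```

    Comparing this exact level with the continuity-corrected Gaussian value shows why the half-integer correction works well for moderate blank counts.
    
    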

  2. Dose distribution and dosimetry parameters calculation of MED3633 Palladium-103 source in water phantom using MCNP

    International Nuclear Information System (INIS)

    Mowlavi, A. A.; Binesh, A.; Moslehitabar, H.

    2006-01-01

    Palladium-103 (103Pd) is a brachytherapy source for cancer treatment. Monte Carlo codes are usually applied to calculate dose distributions and shielding effects. A Monte Carlo calculation of the dose distribution in a water phantom due to a MED3633 103Pd source is presented in this work. Materials and Methods: The dose distribution around the 103Pd Model MED3633 source located in the center of a 30×30×30 cm3 water phantom cube was calculated using the MCNP code by the Monte Carlo method. The percentage depth dose variation along the different axes parallel and perpendicular to the source was also calculated. Then, the isodose curves for 100%, 75%, 50% and 25% percentage depth dose and the dosimetry parameters of the TG-43 protocol were determined. Results: The results show that the Monte Carlo method can accurately calculate dose deposition in the high-gradient region near the source. The isodose curves and dosimetric characteristics obtained for the MED3633 103Pd source are in good agreement with published results. Conclusion: The isodose curves of the MED3633 103Pd source have been derived from dose calculations with the MCNP code. The calculated dosimetry parameters for the source agree quite well with published Monte Carlo and experimental values.
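    The TG-43 parameters extracted in such studies feed a simple dose-rate formula; in the point-source approximation the dose rate at distance r is the air-kerma strength times the dose-rate constant, an inverse-square geometry factor, and the radial dose function g(r). The sketch below uses placeholder values for Λ and g(r) chosen only to show the shape of the calculation; they are not the published MED3633 data.

    ```python
    def interp(x, xs, ys):
        """Piecewise-linear interpolation (xs ascending)."""
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        for i in range(1, len(xs)):
            if x <= xs[i]:
                t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
                return ys[i - 1] + t * (ys[i] - ys[i - 1])

    # Illustrative radial dose function table (hypothetical values, shape only).
    radii = [0.5, 1.0, 2.0, 3.0, 5.0]      # cm
    g_vals = [1.10, 1.00, 0.70, 0.45, 0.18]

    def dose_rate(r, S_K=1.0, Lambda=0.68):
        """TG-43 point-source approximation: D(r) = S_K * Lambda * (r0/r)^2 * g(r),
        with r0 = 1 cm. Lambda here is a placeholder, not the MED3633 constant."""
        return S_K * Lambda * (1.0 / r) ** 2 * interp(r, radii, g_vals)

    print(dose_rate(1.0))  # equals Lambda at the reference point r0 = 1 cm, g(r0) = 1
    print(dose_rate(2.0))  # falls off by inverse square times g(r)
    ```

    At the reference point the formula reduces to the dose-rate constant, which is the usual consistency check on a TG-43 implementation.
    
    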

  3. A standardised faecal collection protocol for intestinal helminth egg counts in Asian elephants, Elephas maximus

    Directory of Open Access Journals (Sweden)

    Carly L. Lynsdale

    2015-12-01

    The quantitative assessment of parasite infection is necessary to measure, manage and reduce infection risk in both wild and captive animal populations. Traditional faecal flotation methods which aim to quantify parasite burden, such as the McMaster egg counting technique, are widely used in veterinary medicine, agricultural management and wildlife parasitology. Although many modifications to the McMaster method exist, few account for systematic variation in parasite egg output, which may lead to inaccurate estimation of infection intensity through faecal egg counts (FEC). To adapt the McMaster method for use in sampling Asian elephants (Elephas maximus), we tested a number of possible sources of error regarding faecal sampling, focussing on helminth eggs and using a population of over 120 semi-captive elephants distributed across northern Myanmar. These included time of day of defecation, effects of storage in 10% formalin and 10% formol saline, and variation in egg distribution between and within faecal boluses. We found no significant difference in the distribution of helminth eggs within faecal matter or for different defecation times; however, storage in formol saline and formalin significantly decreased egg recovery. This is the first study to analyse several collection and storage aspects of a widely-used traditional parasitology method for helminth parasites of E. maximus using known host individuals. We suggest that for the modified McMaster technique, a minimum of one fresh sample per elephant, collected from any freshly produced bolus in the total faecal matter and at any point within a 7.5 h time period (7.30 am–2.55 pm), will consistently represent parasite load. This study defines a protocol which may be used to test pre-analytic factors and effectively determine infection load in species which produce large quantities of vegetative faeces, such as non-ruminant megaherbivores.
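    McMaster-style faecal egg counts convert the eggs seen in a counting chamber to eggs per gram (EPG) of faeces through the dilution and chamber volumes. The sketch below uses one common set of defaults (4 g of faeces in 60 ml of flotation fluid, 0.3 ml counted, giving a multiplication factor of 50); actual protocols, including the modified protocol in this study, vary by lab.

    ```python
    def eggs_per_gram(eggs_counted, sample_mass_g=4.0,
                      flotation_volume_ml=60.0, chamber_volume_ml=0.3):
        """McMaster-style faecal egg count: scale chamber counts to eggs per gram.
        Default masses/volumes are illustrative; protocols differ between labs."""
        dilution = flotation_volume_ml / sample_mass_g   # ml of suspension per g faeces
        eggs_per_ml = eggs_counted / chamber_volume_ml   # eggs per ml of suspension
        return eggs_per_ml * dilution

    # 12 eggs counted under the grid -> 12 * 50 = 600 EPG with these defaults
    print(eggs_per_gram(12))
    ```

    The ratio (flotation volume / sample mass) / chamber volume is the familiar "multiply by 50" factor quoted for the standard technique.
    
    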

  4. Applied categorical and count data analysis

    CERN Document Server

    Tang, Wan; Tu, Xin M

    2012-01-01

    Introduction: Discrete Outcomes; Data Source; Outline of the Book; Review of Key Statistical Results; Software. Contingency Tables: Inference for One-Way Frequency Table; Inference for 2 x 2 Table; Inference for 2 x r Tables; Inference for s x r Table; Measures of Association. Sets of Contingency Tables: Confounding Effects; Sets of 2 x 2 Tables; Sets of s x r Tables. Regression Models for Categorical Response: Logistic Regression for Binary Response; Inference about Model Parameters; Goodness of Fit; Generalized Linear Models; Regression Models for Polytomous Response. Regression Models for Count Response: Poisson Regression Mode

  5. Effects of neutron spectrum and external neutron source on neutron multiplication parameters in accelerator-driven system

    International Nuclear Information System (INIS)

    Shahbunder, Hesham; Pyeon, Cheol Ho; Misawa, Tsuyoshi; Lim, Jae-Yong; Shiroya, Seiji

    2010-01-01

    The neutron multiplication parameters (neutron multiplication M, subcritical multiplication factor k_s, and external source efficiency φ*) play an important role in the numerical assessment and reactor power evaluation of an accelerator-driven system (ADS). Those parameters can be evaluated by using the measured reaction rate distribution in the subcritical system. In this study, the experimental verification of this methodology is performed in various ADS cores: with a high-energy (100 MeV) proton-tungsten source in hard and soft neutron spectrum cores, and with a 14 MeV D-T neutron source in a soft spectrum core. The comparison between measured and calculated multiplication parameters reveals a maximum relative difference in the range of 6.6-13.7%, which is attributed to the uncertainty and limited accuracy of the nuclear data libraries at energies above 20 MeV, and also depends on the position of the reaction rate distribution and on the count rates. The effects of different core neutron spectra and external neutron sources on the neutron multiplication parameters are discussed.
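    The three parameters named in the abstract are linked by simple relations often used in the ADS literature: the total multiplication of a source neutron is M = 1/(1 - k_s), and one common definition of the source efficiency compares the reactivity-like ratios built from k_eff and k_s. A sketch under those assumed definitions (the numerical k values below are illustrative, not the cores measured here):

    ```python
    def total_multiplication(k_s):
        """M = 1/(1 - k_s): total neutrons produced per external source neutron
        (geometric sum 1 + k_s + k_s^2 + ...)."""
        return 1.0 / (1.0 - k_s)

    def source_efficiency(k_eff, k_s):
        """phi* compares the importance of source neutrons with that of fission
        neutrons; a common ADS definition is
        phi* = ((1 - k_eff)/k_eff) / ((1 - k_s)/k_s)."""
        return ((1.0 - k_eff) / k_eff) / ((1.0 - k_s) / k_s)

    # Illustrative subcritical core: k_s > k_eff means the external source
    # neutrons are worth more than average fission neutrons (phi* > 1).
    print(total_multiplication(0.97))       # ~33 neutrons per source neutron
    print(source_efficiency(0.95, 0.97))    # > 1 for this assumed pair
    ```

    With a measured reaction rate distribution fixing k_s, these relations are what turn the counting data into a power estimate for the subcritical core.
    
    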

  6. FDTD verification of deep-set brain tumor hyperthermia using a spherical microwave source distribution

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, D. [20th Intelligence Squadron, Offutt AFB, NE (United States); Rappaport, C.M. [Northeastern Univ., Boston, MA (United States). Center for Electromagnetics Research; Terzuoli, A.J. Jr. [Air Force Inst. of Tech., Dayton, OH (United States). Graduate School of Engineering

    1996-10-01

    Although use of noninvasive microwave hyperthermia to treat cancer is problematic in many human body structures, careful selection of the source electric field distribution around the entire surface of the head can generate a tightly focused global power density maximum at the deepest point within the brain. An analytic prediction of the optimum volume field distribution in a layered concentric head model, based on summing spherical harmonic modes, is derived and presented. This ideal distribution is then verified using a three-dimensional finite difference time domain (FDTD) simulation with a discretized, MRI-based head model excited by the spherical source. The numerical computation gives a dissipated power pattern very similar to the analytic prediction. This study demonstrates that microwave hyperthermia can theoretically be a feasible cancer treatment modality for tumors in the head, providing a well-resolved hot-spot at depth without overheating any other healthy tissue.

  7. Universal continuous-variable quantum computation: Requirement of optical nonlinearity for photon counting

    International Nuclear Information System (INIS)

    Bartlett, Stephen D.; Sanders, Barry C.

    2002-01-01

    Although universal continuous-variable quantum computation cannot be achieved via linear optics (including squeezing), homodyne detection, and feed-forward, inclusion of ideal photon-counting measurements overcomes this obstacle. These measurements are sometimes described by arrays of beam splitters that distribute the photons across several modes. We show that such a scheme cannot be used to implement ideal photon counting and that such measurements necessarily involve nonlinear evolution. However, this requirement of nonlinearity can be moved "off-line," thereby permitting universal continuous-variable quantum computation with linear optics.

  8. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by applying a scalar quantization strategy to the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for the distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined subject to the constraint of correct DSC decoding, which allows the proposed algorithm to achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.

  9. A practical two-way system of quantum key distribution with untrusted source

    International Nuclear Information System (INIS)

    Chen Ming-Juan; Liu Xiang

    2011-01-01

    The most severe problem of a two-way 'plug-and-play' (p and p) quantum key distribution system is that the source can be controlled by the eavesdropper. This kind of source is defined as an "untrusted source". This paper discusses the effects of the fluctuation of internal transmittance on the final key generation rate and the transmission distance. The security of the standard BB84 protocol, the one-decoy-state protocol, and the weak+vacuum decoy-state protocol with untrusted sources and fluctuating internal transmittance is studied. It is shown that the one-decoy-state protocol is sensitive to the statistical fluctuation, whereas the weak+vacuum decoy-state protocol is only slightly affected by it. It is also shown that both the maximum secure transmission distance and the final key generation rate are reduced when the transmittance fluctuation of Alice's laboratory is considered. (general)

  10. Quality control methods in accelerometer data processing: identifying extreme counts.

    Directory of Open Access Journals (Sweden)

    Carly Rich

    Full Text Available Accelerometers are designed to measure plausible human activity; however, extremely high count values (EHCV) have been recorded in large-scale studies. Using population data, we develop methodological principles for establishing an EHCV threshold, propose a threshold to define EHCV in the ActiGraph GT1M, determine occurrences of EHCV in a large-scale study, identify device-specific error values, and investigate the influence of varying EHCV thresholds on daily vigorous physical activity (VPA).

    We estimated quantiles to analyse the distribution of all positive accelerometer count values obtained from 9005 seven-year-old children participating in the UK Millennium Cohort Study. A threshold to identify EHCV was derived by differentiating the quantile function. Data were screened for device-specific error count values and EHCV, and a sensitivity analysis was conducted to compare daily VPA estimates using three approaches to accounting for EHCV.

    Using our proposed threshold of ≥11,715 counts/minute to identify EHCV, we found that only 0.7% of all non-zero counts measured in MCS children were EHCV; in 99.7% of these children, EHCV comprised <1% of total non-zero counts. Only 11 MCS children (0.12% of the sample) returned accelerometers that contained negative counts; out of 237 such values, 211 counts were equal to -32,768 in one child. The medians of daily minutes spent in VPA obtained without excluding EHCV, and when using a higher threshold (≥19,442 counts/minute), were, respectively, 6.2% and 4.6% higher than when using our threshold (6.5 minutes; p<0.0001).

    Quality control processes should be undertaken during accelerometer fieldwork and prior to analysing data, to identify monitors recording error values and EHCV. The proposed threshold will improve the validity of VPA estimates in children's studies using the ActiGraph GT1M by ensuring only plausible data are analysed. These methods can be applied to define appropriate EHCV thresholds for different accelerometer models.
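A rough sketch of the screening step this abstract describes: flag device-error values (negative counts) and extremely high count values before analysis. Only the 11,715 counts/minute threshold comes from the abstract; the function name and sample data are hypothetical:

```python
# EHCV threshold proposed in the abstract for the ActiGraph GT1M
EHCV_THRESHOLD = 11_715

def screen_counts(counts):
    """Split a minute-by-minute count series into clean values,
    device-specific error values, and EHCV."""
    clean, errors, ehcv = [], [], []
    for c in counts:
        if c < 0:
            errors.append(c)      # device error values, e.g. -32768
        elif c >= EHCV_THRESHOLD:
            ehcv.append(c)        # implausibly high counts
        else:
            clean.append(c)
    return clean, errors, ehcv

clean, errors, ehcv = screen_counts([0, 532, -32768, 11_714, 11_715, 20_000])
```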

  11. On the errors in measurements of Ohio 5 radio sources in the light of the GB survey

    International Nuclear Information System (INIS)

    Machalski, J.

    1975-01-01

    Positions and flux densities of 405 OSU 5 radio sources surveyed at 1415 MHz down to 0.18 f.u. (Brundage et al. 1971) have been examined in the light of data from the GB survey made at 1400 MHz (Maslowski 1972). An identification analysis has shown that about 56% of OSU sources appear as single, 18% as confused, 20% as unresolved, and 6%, having no counterparts in the GB survey down to 0.09 f.u., seem to be spurious. The single OSU sources are strongly affected by underestimation of their flux densities due to the base-line procedure in their vicinity; an average systematic underestimation of about 0.03 f.u. has been found. A second systematic error is due to the presence of a significant number of confused sources with strongly overestimated flux densities. The confusion effect gives a characteristic non-Gaussian tail in the distribution of differences between observed and real flux densities, and it strongly influences source counts from the OSU 5 survey. Differential number counts relative to those from the GB survey show that the two counts agree within the statistical uncertainty down to about 0.40 f.u., which is approximately 4 delta (delta being the average rms flux density error in the OSU 5 survey). Below 0.40 f.u., the number of sources missing due to the confusion effect is significantly greater than the number overestimated due to noise error; thus, this part of the OSU 5 source counts cannot be taken seriously, even in a statistical sense. An analysis of the approximate reliability and completeness of the OSU 5 survey shows that, although the total reliability estimated by the authors of the survey is good, the completeness is significantly lower owing to underestimation of the magnitude of the confusion effect. In fact, the OSU 5 completeness is 67% at 0.18 f.u. and 79% at 0.25 f.u. (author)

  12. Clean Hands Count

    Medline Plus


  13. Effects of lek count protocols on greater sage-grouse population trend estimates

    Science.gov (United States)

    Monroe, Adrian; Edmunds, David; Aldridge, Cameron L.

    2016-01-01

    frequency also improved precision. These results suggest that the current distribution of count timings available in lek count databases such as that of Wyoming (conducted up to 90 minutes after sunrise) can be used to estimate sage-grouse population trends without reducing precision or accuracy relative to trends from counts conducted within 30 minutes of sunrise. However, only 10% of all Wyoming counts in our sample (1995−2014) were conducted 61−90 minutes after sunrise, and further increasing this percentage may still bias trend estimates because of declining lek attendance. 

  14. Counts-in-Cylinders in the Sloan Digital Sky Survey with Comparisons to N-Body

    Energy Technology Data Exchange (ETDEWEB)

    Berrier, Heather D.; Barton, Elizabeth J.; /UC, Irvine; Berrier, Joel C.; /Arkansas U.; Bullock, James S.; /UC, Irvine; Zentner, Andrew R.; /Pittsburgh U.; Wechsler, Risa H. /KIPAC, Menlo Park /SLAC

    2010-12-16

    Environmental statistics provide a necessary means of comparing the properties of galaxies in different environments and a vital test of models of galaxy formation within the prevailing, hierarchical cosmological model. We explore counts-in-cylinders, a common statistic defined as the number of companions of a particular galaxy found within a given projected radius and redshift interval. Galaxy distributions with the same two-point correlation functions do not necessarily have the same companion count distributions. We use this statistic to examine the environments of galaxies in the Sloan Digital Sky Survey, Data Release 4. We also make preliminary comparisons to four models for the spatial distributions of galaxies, based on N-body simulations, and data from SDSS DR4 to study the utility of the counts-in-cylinders statistic. There is a very large scatter between the number of companions a galaxy has and the mass of its parent dark matter halo and the halo occupation, limiting the utility of this statistic for certain kinds of environmental studies. We also show that prevalent, empirical models of galaxy clustering that match observed two- and three-point clustering statistics well fail to reproduce some aspects of the observed distribution of counts-in-cylinders on 1, 3 and 6 h^-1 Mpc scales. All models that we explore underpredict the fraction of galaxies with few or no companions in 3 and 6 h^-1 Mpc cylinders. Roughly 7% of galaxies in the real universe are significantly more isolated within a 6 h^-1 Mpc cylinder than the galaxies in any of the models we use. Simple, phenomenological models that map galaxies to dark matter halos fail to reproduce high-order clustering statistics in low-density environments.
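The counts-in-cylinders statistic itself is simple to state in code: for each galaxy, count companions within a projected radius and a line-of-sight window. The sketch below uses illustrative coordinates and units (projected positions plus a line-of-sight velocity standing in for the redshift interval); it is not the authors' pipeline:

```python
import math

def counts_in_cylinders(galaxies, r_p, dv):
    """galaxies: list of (x, y, v) with x, y projected positions and v a
    line-of-sight velocity. Returns the companion count for each galaxy."""
    counts = []
    for i, (xi, yi, vi) in enumerate(galaxies):
        n = 0
        for j, (xj, yj, vj) in enumerate(galaxies):
            if i == j:
                continue
            # inside the cylinder: close on the sky AND close along the line of sight
            if math.hypot(xi - xj, yi - yj) <= r_p and abs(vi - vj) <= dv:
                n += 1
        counts.append(n)
    return counts

gals = [(0, 0, 0), (1, 0, 100), (10, 10, 0)]
result = counts_in_cylinders(gals, r_p=3.0, dv=500.0)  # [1, 1, 0]
```

The first two galaxies lie in each other's cylinders; the third is isolated, the regime where the abstract reports the models fail.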

  15. Counting states of black strings with traveling waves

    International Nuclear Information System (INIS)

    Horowitz, G.T.; Marolf, D.

    1997-01-01

    We consider a family of solutions to string theory which depend on arbitrary functions and contain regular event horizons. They describe six-dimensional extremal black strings with traveling waves and have an inhomogeneous distribution of momentum along the string. The structure of these solutions near the horizon is studied and the horizon area computed. We also count the number of BPS string states at weak coupling whose macroscopic momentum distribution agrees with that of the black string. It is shown that the number of such states is given by the Bekenstein-Hawking entropy of the black string with traveling waves. copyright 1997 The American Physical Society

  16. Star counts in M15 on U, B and V plates

    Energy Technology Data Exchange (ETDEWEB)

    Calvani, M [Padua Univ. (Italy). Ist. di Astronomia]; Nobili, L [Padua Univ. (Italy). Ist. di Fisica]; Turolla, R [Scuola Internazionale Superiore di Studi Avanzati, Trieste (Italy)]

    1980-11-01

    We present new counts of stars in M15, using plates in B, V and U. We are able to explore relatively close to the central parts of the cluster (0.1 pc) and we derive the best fitting parameters for the star distribution.

  17. Noun Countability; Count Nouns and Non-count Nouns, What are the Syntactic Differences Between them?

    Directory of Open Access Journals (Sweden)

    Azhar A. Alkazwini

    2016-11-01

    Full Text Available Words that function as the subjects of verbs or the objects of verbs or prepositions, and which can take a plural form and a possessive ending, are known as nouns. They are described as referring to persons, places, things, states, or qualities, and may also be used as attributive modifiers. In this paper, classes and subclasses of nouns are presented; noun countability, branching into count and non-count nouns, is then discussed. Examples illustrating the differences between count and non-count nouns are given, covering determiner-head co-occurrence restrictions of number and subject-verb agreement, together with some exceptions to the agreement rule. The lexically inherent number in nouns, and how inherently plural nouns are classified in terms of (+/- count), are also illustrated. The paper then discusses the partitive construction of count and non-count nouns and nouns as attributive modifiers, and concludes that there are syntactic differences between count and non-count nouns in the English language.

  18. Inverse modelling of fluvial sediment connectivity identifies characteristics and spatial distribution of sediment sources in a large river network.

    Science.gov (United States)

    Schmitt, R. J. P.; Bizzi, S.; Kondolf, G. M.; Rubin, Z.; Castelletti, A.

    2016-12-01

    Field and laboratory evidence indicates that the spatial distribution of transport in both alluvial and bedrock rivers is an adaptation to sediment supply. Sediment supply, in turn, depends on the spatial distribution and properties (e.g., grain sizes and supply rates) of individual sediment sources. Analyzing the distribution of transport capacity in a river network could hence clarify the spatial distribution and properties of sediment sources. Yet, challenges include (a) identifying the magnitude and spatial distribution of transport capacity for each of multiple grain sizes being simultaneously transported, and (b) estimating source grain sizes and supply rates, both at network scales. Herein, we approach the problem of identifying the spatial distribution of sediment sources and the resulting network sediment fluxes in a major, poorly monitored tributary (80,000 km2) of the Mekong. To do so, we apply the CASCADE modeling framework (Schmitt et al. (2016)). CASCADE calculates transport capacities and sediment fluxes for multiple grain sizes on the network scale based on remotely sensed morphology and modelled hydrology. CASCADE is run in an inverse Monte Carlo approach for 7500 random initializations of source grain sizes. In all runs, the supply of each source is inferred from the minimum downstream transport capacity for the source grain size. Results for each realization are compared to the sparse available sedimentary records. Only 1% of initializations reproduced the sedimentary record. Results for these realizations revealed a spatial pattern in source supply rates, grain sizes, and network sediment fluxes that correlated well with map-derived patterns in lithology and river morphology. Hence, we propose that observable river hydro-morphology contains information on upstream source properties that can be back-calculated using an inverse modeling approach. Such an approach could be coupled to more detailed models of hillslope processes in the future to derive integrated models.

  19. Calibration of nuclides by gamma-gamma sum peak coincidence counting

    International Nuclear Information System (INIS)

    Guevara, E.A.

    1986-01-01

    The feasibility of extending sum peak coincidence counting to the direct calibration of gamma-ray emitters having particular decay schemes was investigated, and the measurement accuracy was checked by comparison with the more precise beta-gamma coincidence counting. New theoretical studies and experiments were developed, demonstrating the reliability of the procedure. Uncertainties of less than one percent were obtained when certain radioactive sources were measured. The application of the procedure to 60Co, 22Na, 47Ca and 148Pm was studied. The theoretical bases of sum peak coincidence counting were established in order to extend it as an alternative method for absolute activity determination. In this respect, theoretical studies were performed for positive and negative beta decay and electron capture, either accompanied or unaccompanied by coincident gamma rays. They include decay schemes containing up to three excited levels of the daughter nuclide, for different geometrical configurations. Equations are proposed for a possible generalization of the procedure. (M.E.L.) [es

  20. Gravity and count probabilities in an expanding universe

    Science.gov (United States)

    Bouchet, Francois R.; Hernquist, Lars

    1992-01-01

    The time evolution of nonlinear clustering on large scales in cold dark matter, hot dark matter, and white noise models of the universe is investigated using N-body simulations performed with a tree code. Count probabilities in cubic cells are determined as functions of the cell size and the clustering state (redshift), and comparisons are made with various theoretical models. We isolate the features that appear to be the result of gravitational instability, those that depend on the initial conditions, and those that are likely a consequence of numerical limitations. More specifically, we study the development of skewness, kurtosis, and the fifth moment in relation to variance, the dependence of the void probability on time as well as on sparseness of sampling, and the overall shape of the count probability distribution. Implications of our results for theoretical and observational studies are discussed.
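The core measurement here, count probabilities in cubic cells, can be sketched directly: drop points into a grid of cells, histogram the per-cell counts, and read off P(N), with P(0) being the void probability. The point set and cell size below are illustrative:

```python
import random
from collections import Counter

def count_probabilities(points, box, cell):
    """Histogram points (x, y, z in [0, box)) into cubic cells of side
    `cell` and return P(N), the fraction of cells holding N points."""
    n_side = int(box // cell)
    per_cell = Counter()
    for x, y, z in points:
        per_cell[(int(x // cell), int(y // cell), int(z // cell))] += 1
    n_cells = n_side ** 3
    occupancy = Counter(per_cell.values())
    occupancy[0] = n_cells - len(per_cell)   # empty cells -> void probability
    return {n: occupancy[n] / n_cells for n in sorted(occupancy)}

random.seed(1)
pts = [tuple(random.uniform(0, 8) for _ in range(3)) for _ in range(100)]
p = count_probabilities(pts, box=8.0, cell=2.0)
# For an unclustered (Poisson) point set the void probability P(0) should
# sit near exp(-100/64); gravitational clustering would raise it.
```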

  1. Clean Hands Count

    Medline Plus


  2. Ammonia-oxidizing bacteria in a chloraminated distribution system: seasonal occurrence, distribution and disinfection resistance.

    Science.gov (United States)

    Wolfe, R L; Lieu, N I; Izaguirre, G; Means, E G

    1990-02-01

    Nitrification in chloraminated drinking water can have a number of adverse effects on water quality, including a loss of total chlorine and ammonia-N and an increase in the concentration of heterotrophic plate count bacteria and nitrite. To understand how nitrification develops, a study was conducted to examine the factors that influence the occurrence of ammonia-oxidizing bacteria (AOB) in a chloraminated distribution system. Samples were collected over an 18-month period from a raw-water source, a conventional treatment plant effluent, and two covered, finished-water reservoirs that had previously experienced nitrification episodes. Sediment and biofilm samples were collected from the interior wall surfaces of two finished-water pipelines and one of the covered reservoirs. The AOB were enumerated by a most-probable-number technique, and isolates were obtained and identified. The resistance of naturally occurring AOB to chloramines and free chlorine was also examined. The results of the monitoring program indicated that the levels of AOB, identified as members of the genus Nitrosomonas, were seasonally dependent in both source and finished waters, with the highest levels observed in the warm summer months. The concentrations of AOB in the two reservoirs, both of which have floating covers made of synthetic rubber (Hypalon; E.I. du Pont de Nemours & Co., Inc., Wilmington, Del.), had most probable numbers that ranged from less than 0.2 to greater than 300/ml and correlated significantly with temperature and levels of heterotrophic plate count bacteria. No AOB were detected in the chloraminated reservoirs when the water temperature was below 16 to 18 degrees C. The study indicated that nitrifiers occur throughout the chloraminated distribution system. Higher concentrations of AOB were found in the reservoir and pipe sediment materials than in the pipe biofilm samples. The AOB were approximately 13 times more resistant to monochloramine than to free chlorine.
After 33 min

  3. Herschel-ATLAS: Dust Temperature and Redshift Distribution of SPIRE and PACS Detected Sources Using Submillimetre Colours

    Science.gov (United States)

    Amblard, A.; Cooray, Asantha; Serra, P.; Temi, P.; Barton, E.; Negrello, M.; Auld, R.; Baes, M.; Baldry, I. K.; Bamford, S.; hide

    2010-01-01

    We present colour-colour diagrams of detected sources in the Herschel-ATLAS Science Demonstration Field from 100 to 500 microns using both PACS and SPIRE. We fit isothermal modified-blackbody spectral energy distribution (SED) models in order to extract the dust temperature of sources with counterparts in GAMA or SDSS having either a spectroscopic or a photometric redshift. For a subsample of 331 sources detected in at least three FIR bands with significance greater than 3 sigma, we find an average dust temperature of (28 ± 8) K. For sources with no known redshifts, we populate the colour-colour diagram with a large number of SEDs generated with a broad range of dust temperatures and emissivity parameters, and compare to the colours of observed sources to establish the redshift distribution of those samples. For another subsample of 1686 sources with fluxes above 35 mJy at 350 microns and detected at 250 and 500 microns with a significance greater than 3 sigma, we find an average redshift of 2.2 ± 0.6.
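A sketch of the isothermal modified-blackbody model behind these colours, S_nu ∝ nu^beta B_nu(T). The emissivity index beta = 1.5 and the temperatures below are illustrative assumptions, not the paper's fitted values; the point is only that submillimetre flux ratios respond to dust temperature, which is what makes colour-based temperature and redshift estimation possible:

```python
import math

H, K, C = 6.626e-34, 1.381e-23, 2.998e8   # SI: Planck, Boltzmann, speed of light

def modified_bb(wavelength_um, temp_k, beta=1.5):
    """Relative flux density of a modified blackbody at one wavelength."""
    nu = C / (wavelength_um * 1e-6)
    planck = (2 * H * nu**3 / C**2) / math.expm1(H * nu / (K * temp_k))
    return nu**beta * planck

# The 250/500 micron colour rises with dust temperature:
c_cold = modified_bb(250, 20) / modified_bb(500, 20)
c_warm = modified_bb(250, 40) / modified_bb(500, 40)
```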

  4. Casimir meets Poisson: improved quark/gluon discrimination with counting observables

    Science.gov (United States)

    Frye, Christopher; Larkoski, Andrew J.; Thaler, Jesse; Zhou, Kevin

    2017-09-01

    Charged track multiplicity is among the most powerful observables for discriminating quark- from gluon-initiated jets. Despite its utility, it is not infrared and collinear (IRC) safe, so perturbative calculations are limited to studying the energy evolution of multiplicity moments. While IRC-safe observables, like jet mass, are perturbatively calculable, their distributions often exhibit Casimir scaling, such that their quark/gluon discrimination power is limited by the ratio of quark to gluon color factors. In this paper, we introduce new IRC-safe counting observables whose discrimination performance exceeds that of jet mass and approaches that of track multiplicity. The key observation is that track multiplicity is approximately Poisson distributed, with more suppressed tails than the Sudakov peak structure from jet mass. By using an iterated version of the soft drop jet grooming algorithm, we can define a "soft drop multiplicity" which is Poisson distributed at leading-logarithmic accuracy. In addition, we calculate the next-to-leading-logarithmic corrections to this Poisson structure. If we allow the soft drop groomer to proceed to the end of the jet branching history, we can define a collinear-unsafe (but still infrared-safe) counting observable. Exploiting the universality of the collinear limit, we define generalized fragmentation functions to study the perturbative energy evolution of collinear-unsafe multiplicity.

  5. Full counting statistics in a serially coupled double quantum dot system with spin-orbit coupling

    Science.gov (United States)

    Wang, Qiang; Xue, Hai-Bin; Xie, Hai-Qing

    2018-04-01

    We study the full counting statistics of electron transport through a serially coupled double quantum dot (QD) system with spin-orbit coupling (SOC), weakly coupled to two electrodes. We demonstrate that the spin polarizations of the source and drain electrodes determine whether the shot noise maintains a super-Poissonian distribution, and whether sign transitions of the skewness (from positive to negative values) and of the kurtosis (from negative to positive values) take place. In particular, the interplay between the spin polarizations of the source and drain electrodes and the magnitude of the external magnetic field can give rise to a gate-voltage-tunable strong negative differential conductance (NDC), and the shot noise in this NDC region is significantly enhanced. Importantly, for a given SOC parameter, the variation of the high-order current cumulants as a function of the energy-level detuning in a certain range, especially the dip position of the Fano factor of the skewness, can be used to qualitatively extract information about the magnitude of the SOC.

  6. Model of charge-state distributions for electron cyclotron resonance ion source plasmas

    Directory of Open Access Journals (Sweden)

    D. H. Edgell

    1999-12-01

    Full Text Available A computer model for the ion charge-state distribution (CSD in an electron cyclotron resonance ion source (ECRIS plasma is presented that incorporates non-Maxwellian distribution functions, multiple atomic species, and ion confinement due to the ambipolar potential well that arises from confinement of the electron cyclotron resonance (ECR heated electrons. Atomic processes incorporated into the model include multiple ionization and multiple charge exchange with rate coefficients calculated for non-Maxwellian electron distributions. The electron distribution function is calculated using a Fokker-Planck code with an ECR heating term. This eliminates the electron temperature as an arbitrary user input. The model produces results that are a good match to CSD data from the ANL-ECRII ECRIS. Extending the model to 1D axial will also allow the model to determine the plasma and electrostatic potential profiles, further eliminating arbitrary user input to the model.

  7. Assessment of the statistical uncertainty affecting a counting; Evaluation de l'incertitude statistique affectant un comptage

    Energy Technology Data Exchange (ETDEWEB)

    Cluchet, J.

    1960-07-01

    After recalling some aspects of the Gaussian law and the Gaussian curve, this note addresses the case in which a large number of measurements of a source's activity are performed by means of a sensor (counter, scintillator, nuclear emulsion, etc.) at equal intervals, with a number of events that is not rigorously constant. It thus addresses measurements, and more particularly counting operations, in a random or statistical setting. It considers in particular the case of a counting rate due to the source greater than (and then lower than) twenty times the detector's intrinsic background rate. The validity of the curves is discussed.
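The basic statistical fact this note builds on can be stated in two lines: for a Poisson counting process, N recorded counts carry an absolute 1-sigma uncertainty of sqrt(N) and a relative uncertainty of 1/sqrt(N). A minimal sketch (function name and sample value are illustrative):

```python
import math

def count_uncertainty(n_counts):
    """Return the (absolute, relative) 1-sigma uncertainty of a
    Poisson-distributed count N: sqrt(N) and 1/sqrt(N)."""
    if n_counts <= 0:
        raise ValueError("need a positive number of counts")
    sigma = math.sqrt(n_counts)
    return sigma, sigma / n_counts

abs_u, rel_u = count_uncertainty(10_000)
# 10 000 counts -> sigma = 100, i.e. a 1 % relative uncertainty;
# counting longer shrinks the relative error as 1/sqrt(N).
```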

  8. The intensity detection of single-photon detectors based on photon counting probability density statistics

    International Nuclear Information System (INIS)

    Zhang Zijing; Song Jie; Zhao Yuan; Wu Long

    2017-01-01

    Single-photon detectors possess ultra-high sensitivity, but they cannot directly respond to signal intensity. Conventional methods adopt fixed-width sampling gates and count the number of triggered gates, from which the photon counting probability is obtained to estimate the echo signal intensity. In this paper, we not only count the number of triggered sampling gates but also record the triggered time positions of the photon counting pulses. The photon counting probability density distribution is obtained from the statistics of a series of triggered time positions, and a minimum variance unbiased estimation (MVUE) method is then used to estimate the echo signal intensity. Compared with conventional methods, this method improves the estimation accuracy of the echo signal intensity because more detection information is acquired. Finally, a proof-of-principle laboratory system is established; the estimation accuracy of the echo signal intensity is discussed, and a high-accuracy intensity image is acquired under low-light-level environments. (paper)
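A simplified stand-in for the conventional gate-counting estimate mentioned above, assuming Poisson-distributed photon arrivals: a gate fires with probability p = 1 - exp(-mu), so the mean photon number per gate can be inverted from the observed trigger fraction. This is an assumption-laden sketch, not the MVUE estimator of the paper:

```python
import math

def estimate_intensity(triggered, total):
    """Invert p = 1 - exp(-mu) from the observed trigger fraction to
    estimate mu, the mean photon number per gate."""
    if triggered >= total:
        raise ValueError("trigger fraction must be < 1 to invert")
    p = triggered / total
    return -math.log1p(-p)   # mu = -ln(1 - p), computed stably

# If 3935 of 10000 gates fire, p ~ 0.3935 and the estimate is mu ~ 0.5:
mu = estimate_intensity(3935, 10000)
```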

  9. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System

    Directory of Open Access Journals (Sweden)

    Miao Sun

    2016-06-01

    Full Text Available We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is transformed into a two-dimensional one. The method is verified in experiments using burning alcohol as the fire source, and it is demonstrated that it represents a robust and reliable technique for localizing a fire source, even over long sensing ranges.

  10. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System.

    Science.gov (United States)

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Li, Jun; Sigrist, Markus W; Dong, Fengzhong

    2016-06-06

    We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is transformed into a two-dimensional one. The method is verified in experiments using burning alcohol as the fire source, and it is demonstrated that it represents a robust and reliable technique for localizing a fire source, even over long sensing ranges.
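A purely geometric sketch of the dual-line idea, under assumed conventions not taken from the record: the two fibers run parallel along the ceiling at y = 0 and y = d, each showing a temperature peak above the fire; the along-fiber peak positions give x, and the relative peak strengths interpolate y toward the hotter fiber. The function and the interpolation rule are hypothetical illustrations:

```python
def locate_source(x1, t1, x2, t2, d):
    """Estimate the (x, y) ceiling position of a heat source from the
    peak positions x1, x2 and peak temperature rises t1, t2 measured
    along two parallel fibers at y = 0 and y = d."""
    x = (x1 + x2) / 2.0          # along-fiber coordinate: average the peaks
    y = d * t2 / (t1 + t2)       # cross-fiber coordinate: nearer the hotter fiber
    return x, y

# Peaks at the same x = 5 m; fiber 1 sees a rise three times larger,
# so the source sits a quarter of the way across the 2 m fiber spacing:
x, y = locate_source(x1=5.0, t1=30.0, x2=5.0, t2=10.0, d=2.0)
```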

  11. 137Cs source dose distribution using the Fricke Xylenol Gel dosimetry

    International Nuclear Information System (INIS)

    Sato, R.; De Almeida, A.; Moreira, M.V.

    2009-01-01

    Dosimetric measurements close to radioisotope sources, such as those used in brachytherapy, require high spatial resolution to avoid incorrect results in the steep dose gradient region. In this work the Fricke Xylenol Gel dosimeter was used to obtain the spatial dose distribution. The readings from a 137Cs source were performed using two methods: visible spectrophotometry and CCD camera images. Good agreement with the Sievert summation method was found for the transversal-axis dose profile, within uncertainties of 4% and 5% for the spectrophotometer and CCD camera, respectively. Our results show that the dosimeter is adequate for brachytherapy dosimetry and, owing to its relatively fast and easy preparation and reading, it is recommended for quality control in brachytherapy applications.

  12. Estimating spatial and temporal components of variation in count data using negative binomial mixed models

    Science.gov (United States)

    Irwin, Brian J.; Wagner, Tyler; Bence, James R.; Kepler, Megan V.; Liu, Weihai; Hayes, Daniel B.

    2013-01-01

    Partitioning total variability into its component temporal and spatial sources is a powerful way to better understand time series and elucidate trends. The data available for such analyses of fish and other populations are usually nonnegative integer counts of the number of organisms, often dominated by many low values with few observations of relatively high abundance. These characteristics are not well approximated by the Gaussian distribution. We present a detailed description of a negative binomial mixed-model framework that can be used to model count data and quantify temporal and spatial variability. We applied these models to data from four fishery-independent surveys of Walleyes Sander vitreus across the Great Lakes basin. Specifically, we fitted models to gill-net catches from Wisconsin waters of Lake Superior; Oneida Lake, New York; Saginaw Bay in Lake Huron, Michigan; and Ohio waters of Lake Erie. These long-term monitoring surveys varied in overall sampling intensity, the total catch of Walleyes, and the proportion of zero catches. Parameter estimation included the negative binomial scaling parameter, and we quantified the random effects as the variations among gill-net sampling sites, the variations among sampled years, and site × year interactions. This framework (i.e., the application of a mixed model appropriate for count data in a variance-partitioning context) represents a flexible approach that has implications for monitoring programs (e.g., trend detection) and for examining the potential of individual variance components to serve as response metrics to large-scale anthropogenic perturbations or ecological changes.
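The overdispersion that motivates the negative binomial choice can be demonstrated with the Gamma-mixed Poisson construction: with mean mu and scaling parameter k, the NB variance is mu + mu^2/k, exceeding the Poisson variance mu. The sketch below is an illustration of that distributional fact, not the mixed-model fitting procedure of the paper:

```python
import math
import random
import statistics

def neg_binomial_sample(mu, k, rng):
    """One negative binomial count drawn as a Gamma-mixed Poisson."""
    lam = rng.gammavariate(k, mu / k)     # rate with mean mu, variance mu^2/k
    # Knuth-style Poisson draw (adequate for modest rates)
    threshold, p, n = math.exp(-lam), 1.0, 0
    while True:
        p *= rng.random()
        if p <= threshold:
            return n
        n += 1

rng = random.Random(42)
draws = [neg_binomial_sample(mu=5.0, k=2.0, rng=rng) for _ in range(20000)]
mean, var = statistics.mean(draws), statistics.variance(draws)
# Expect mean near 5 and variance near 5 + 5**2 / 2 = 17.5: variance well
# above the mean, the overdispersion typical of catch-count data.
```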

  13. Analysis of electroperforated materials using the quadrat counts method

    Energy Technology Data Exchange (ETDEWEB)

    Miranda, E; Garzon, C; Garcia-Garcia, J [Departament d' Enginyeria Electronica, Universitat Autonoma de Barcelona, 08193 Bellaterra, Barcelona (Spain); MartInez-Cisneros, C; Alonso, J, E-mail: enrique.miranda@uab.cat [Departament de Quimica AnalItica, Universitat Autonoma de Barcelona, 08193 Bellaterra, Barcelona (Spain)

    2011-06-23

    The electroperforation distribution in thin porous materials is investigated using the quadrat counts method (QCM), a classical statistical technique for evaluating the deviation from complete spatial randomness (CSR). Perforations are created by means of electrical discharges generated by needle-like tungsten electrodes. The objective of perforating a thin porous material is to enhance its air permeability, a critical issue in many industrial applications involving paper, plastics, textiles, etc. Using image analysis techniques and specialized statistical software, it is shown that the perforation locations follow, beyond a certain length scale, a homogeneous 2D Poisson distribution.
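    The QCM idea can be sketched in a few lines (simulated points, not the paper's perforation images): divide the region into quadrats, count points per quadrat, and compare the index of dispersion with 1, its expected value under a homogeneous 2D Poisson process:

```python
import numpy as np

rng = np.random.default_rng(42)
pts = rng.uniform(0.0, 1.0, size=(2000, 2))   # simulated perforation centres (CSR)

# Quadrat counts on a 10x10 grid over the unit square
counts, _, _ = np.histogram2d(pts[:, 0], pts[:, 1],
                              bins=10, range=[[0, 1], [0, 1]])
c = counts.ravel()

# Index of dispersion: ~1 under CSR, >1 for clustering, <1 for regularity.
# (c.size - 1) * I is approximately chi-square with c.size - 1 dof.
I = c.var(ddof=1) / c.mean()
print(I)
```

    A clustered or regular pattern would push I well away from 1, which is how a deviation from CSR is detected at a given quadrat scale.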

  14. Cross correlations of quantum key distribution based on single-photon sources

    International Nuclear Information System (INIS)

    Dong Shuangli; Wang Xiaobo; Zhang Guofeng; Sun Jianhu; Zhang Fang; Xiao Liantuan; Jia Suotang

    2009-01-01

    We theoretically analyze the second-order correlation function in a quantum key distribution system with realistic single-photon sources. Based on single-event photon statistics, we present the influence of the modifications caused by an eavesdropper's intervention and the effects of background signals on the cross correlations between authorized partners. On this basis, we establish a range of correlation values that is secure against intercept-resend attacks.
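    For intuition, the zero-delay second-order correlation can be estimated from per-pulse photon counts as g2(0) = <n(n-1)> / <n>^2: a coherent (Poissonian) source gives g2(0) = 1, while an ideal single-photon source gives 0. A small numpy sketch with illustrative rates (not the paper's system):

```python
import numpy as np

rng = np.random.default_rng(7)
pulses = 1_000_000

def g2_zero(n):
    """Zero-delay second-order correlation estimated from per-pulse counts."""
    return (n * (n - 1)).mean() / n.mean() ** 2

# Coherent (Poissonian) source with 0.1 mean photons per pulse: g2(0) -> 1
n_coh = rng.poisson(0.1, pulses)

# Ideal single-photon source seen through 10% collection efficiency:
# at most one photon per pulse, so n(n-1) = 0 identically and g2(0) = 0.
n_sps = rng.binomial(1, 0.1, pulses)

print(g2_zero(n_coh))  # ~ 1.0
print(g2_zero(n_sps))  # 0.0
```

    Background counts and an intercept-resend attack both perturb this estimator upward from the single-photon value, which is the handle the paper's security analysis exploits.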

  15. Geiger-Mueller haloid counter dead time dependence on counting rate

    International Nuclear Information System (INIS)

    Onishchenko, A.M.; Tsvetkov, A.A.

    1980-01-01

    The experimental dependences of the dead time of Geiger counters (SBM-19, SBM-20, SBM-21 and SGM-19) on the counting rate are presented. The two-source method was used to determine the dead time of counters with increased stability. The counters were operated in the usual discrete-counting circuit, with a load resistance of 50 MOhm and a separating capacitance of 10 pF. Voltage pulses were fed to a counting device with a resolution time of 100 ns, a discrimination threshold of 3 V, an input resistance of 3.6 Ω and an input capacitance of 15 pF. The time constant of the counter RC circuit is 50 μs.
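    The two-source method mentioned here can be sketched as follows. This is the first-order textbook form with background neglected; the rates and dead time are illustrative, not parameters of the SBM/SGM tubes:

```python
def observed_rate(true_rate, tau):
    """Non-paralyzable dead-time model: m = n / (1 + n * tau)."""
    return true_rate / (1.0 + true_rate * tau)

def two_source_tau(n1, n2, n12):
    """First-order two-source estimate of the dead time (background neglected):
    tau ~ (n1 + n2 - n12) / (2 * n1 * n2)."""
    return (n1 + n2 - n12) / (2.0 * n1 * n2)

tau_true = 1e-4   # s, illustrative dead time
n1 = observed_rate(1000.0, tau_true)    # observed rate, source 1 alone
n2 = observed_rate(1000.0, tau_true)    # observed rate, source 2 alone
n12 = observed_rate(2000.0, tau_true)   # observed rate, both sources together

print(two_source_tau(n1, n2, n12))  # close to 1e-4 (first-order approximation)
```

    Because the observed combined rate falls short of n1 + n2 by an amount governed by the dead time, the three measured rates suffice to estimate tau without knowing the true source strengths.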

  16. Performance of a Discrete Wavelet Transform for Compressing Plasma Count Data and its Application to the Fast Plasma Investigation on NASA's Magnetospheric Multiscale Mission

    Science.gov (United States)

    Barrie, Alexander C.; Yeh, Penshu; Dorelli, John C.; Clark, George B.; Paterson, William R.; Adrian, Mark L.; Holland, Matthew P.; Lobell, James V.; Simpson, David G.; Pollock, Craig J.

    2015-01-01

    Plasma measurements in space are becoming increasingly fast, high resolution, and distributed over multiple instruments. As raw data generation rates can exceed available data transfer bandwidth, data compression is becoming a critical design component. Data compression has been a staple of imaging instruments for years, but only recently have plasma measurement designers become interested in high performance data compression. Missions will often use a simple lossless compression technique yielding compression ratios of approximately 2:1; however, future missions may require compression ratios upwards of 10:1. This study aims to explore how a Discrete Wavelet Transform combined with a Bit Plane Encoder (DWT/BPE), implemented via a CCSDS standard, can be used effectively to compress count information common to plasma measurements to high compression ratios while maintaining little or no compression error. The compression ASIC used for the Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale mission (MMS) is used for this study. Plasma count data from multiple sources are examined: resampled data from previous missions, randomly generated data from distribution functions, and simulations of expected regimes. These are run through the compression routines with various parameters to yield the greatest possible compression ratio while maintaining little or no error; the latter indicates that fully lossless compression is obtained. Finally, recommendations are made for future missions as to what can be achieved when compressing plasma count data and how best to do so.
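    The CCSDS scheme referenced here uses a 9/7 DWT followed by a bit-plane encoder. As a much simpler stand-in for how an integer wavelet decorrelates count data losslessly, here is a one-level integer Haar (S-transform) sketch with an exact inverse; the function names and sample counts are illustrative, not part of the FPI design:

```python
import numpy as np

def haar_forward(x):
    """One-level integer Haar (S-transform): exactly invertible for integers."""
    a, b = x[0::2], x[1::2]
    low = (a + b) // 2    # local averages (coarse counts)
    high = a - b          # local differences (detail, mostly small values)
    return low, high

def haar_inverse(low, high):
    """Exact inverse of haar_forward (integer arithmetic, no rounding loss)."""
    a = low + (high + 1) // 2
    b = a - high
    out = np.empty(low.size * 2, dtype=low.dtype)
    out[0::2], out[1::2] = a, b
    return out

counts = np.array([3, 7, 2, 9, 5, 5, 0, 4])
low, high = haar_forward(counts)
print(haar_inverse(low, high))   # exactly recovers the original counts
```

    The detail coefficients of smooth count spectra cluster near zero, which is what lets a bit-plane encoder represent them compactly while the transform itself remains lossless.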

  17. Automatic vehicle counting system for traffic monitoring

    Science.gov (United States)

    Crouzil, Alain; Khoudour, Louahdi; Valiere, Paul; Truong Cong, Dung Nghy

    2016-09-01

    This article presents a vision-based system for road vehicle counting and classification. The system is able to achieve counting with very good accuracy even in difficult scenarios linked to occlusions and/or the presence of shadows. The principle of the system is to use cameras already installed in road networks, without any additional calibration procedure. We propose a robust segmentation algorithm that detects foreground pixels corresponding to moving vehicles. First, the approach models each pixel of the background with an adaptive Gaussian distribution. This model is coupled with a motion detection procedure, which allows moving vehicles to be correctly located in space and time. The nature of the trials carried out, which include peak periods and various vehicle types, leads to an increase in occlusions between cars and between cars and trucks. A specific method for severe occlusion detection, based on the notion of solidity, has been developed and tested. Furthermore, the method developed in this work is capable of managing shadows with high resolution. The related algorithm has been tested and compared to a classical method. Experimental results based on four large datasets show that our method can count and classify vehicles in real time with a high level of performance (>98%) under different environmental situations, thus performing better than conventional inductive loop detectors.
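    The per-pixel adaptive Gaussian background model described above can be sketched in a few lines of numpy. The learning rate, threshold, and toy frames below are illustrative choices, not the authors' tuned values:

```python
import numpy as np

def update_background(mean, var, frame, alpha=0.05, k=2.5):
    """One step of a per-pixel running Gaussian background model.
    Pixels further than k standard deviations from the mean are foreground."""
    d = frame - mean
    fg = d * d > (k * k) * var                  # foreground mask (test before update)
    mean = mean + alpha * d                     # exponential running mean
    var = (1.0 - alpha) * var + alpha * d * d   # exponential running variance
    return mean, var, fg

# Train on a static background, then present a frame with a bright "vehicle".
bg = np.full((8, 8), 100.0)
mean, var = bg.copy(), np.full((8, 8), 25.0)
for _ in range(50):
    mean, var, _ = update_background(mean, var, bg)

frame = bg.copy()
frame[2:4, 2:4] = 200.0                  # 2x2 moving object
mean, var, fg = update_background(mean, var, frame)
print(int(fg.sum()))                     # 4 foreground pixels
```

    The adaptive update lets the model absorb slow illumination changes while still flagging fast intensity changes, which is the property the segmentation stage relies on before the motion-detection coupling.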

  18. Studies on the supposition of liquid source for irradiation and its dose distribution, (1)

    International Nuclear Information System (INIS)

    Yoshimura, Seiji; Nishida, Tsuneo

    1977-01-01

    Recently, radioisotopes have been used and applied in various fields, and applications of irradiation effects are expected to attract particular attention in the future. To date, sources for irradiation have been solids sealed into capsules of various kinds. Here we consider instead the use of a liquid radioisotope as the irradiation source, since it offers some advantages over a solid source, such as freedom in the shape of the source and ease of adjusting its attenuation. In these experiments we measured the dose distribution produced by a columnar liquid source. We expect that these results will be put to practical use. (auth.)

  19. D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things.

    Science.gov (United States)

    Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B

    2018-01-01

    Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of classical DSC by employing the decoding delay concept, which enables the use of the maximum correlated portion of sensor samples during event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications having massive numbers of sensors, towards the realization of the Internet of Sensing Things (IoST).

  20. Multi-source analysis reveals latitudinal and altitudinal shifts in range of Ixodes ricinus at its northern distribution limit

    Directory of Open Access Journals (Sweden)

    Kristoffersen Anja B

    2011-05-01

    Abstract Background There is increasing evidence for a latitudinal and altitudinal shift in the distribution range of Ixodes ricinus. The reported incidence of tick-borne disease in humans is on the rise in many European countries and has raised political concern and attracted media attention. It is disputed which factors are responsible for these trends, though many ascribe shifts in distribution range to climate change. Any possible climate effect would be most easily noticeable close to the tick's geographical distribution limits. In Norway - the northern limit of this species in Europe - no documentation of changes in range has been published. The objectives of this study were to describe the distribution of I. ricinus in Norway and to evaluate whether any range shifts have occurred relative to historical descriptions. Methods Multiple data sources - such as tick-sighting reports from veterinarians, hunters, and the general public - and surveillance of human and animal tick-borne diseases were compared to describe the present distribution of I. ricinus in Norway. Correlation between data sources and visual comparison of maps revealed spatial consistency. In order to identify the main spatial pattern of tick abundance, a principal component analysis (PCA) was used to obtain a weighted mean of four data sources. The weighted mean explained 67% of the variation of the data sources covering Norway's 430 municipalities and was used to depict the present distribution of I. ricinus. To evaluate whether any geographical range shift has occurred in recent decades, the present distribution was compared to historical data from 1943 and 1983. Results Tick-borne disease and/or observations of I. ricinus were reported in municipalities up to an altitude of 583 metres above sea level (MASL), and the tick is now present in coastal municipalities north to approximately 69°N. Conclusion I. ricinus is currently found further north and at higher altitudes than described in

  1. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA

    Science.gov (United States)

    Cosandier-Rimélé, D.; Ramantani, G.; Zentner, J.; Schulze-Bonhage, A.; Dümpelmann, M.

    2017-10-01

    Objective. Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. Approach. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. Main results. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. Significance. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  2. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA.

    Science.gov (United States)

    Cosandier-Rimélé, D; Ramantani, G; Zentner, J; Schulze-Bonhage, A; Dümpelmann, M

    2017-10-01

    Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  3. REM meter for pulsed sources of neutrons

    International Nuclear Information System (INIS)

    Thorngate, J.E.; Hunt, G.F.; Rueppel, D.W.

    1980-01-01

    A rem meter was constructed specifically for measuring neutrons produced by fusion experiments for which the source pulses last 10 ms or longer. The detector is a 6Li glass scintillator, 25.4 mm in diameter and 3.2 mm thick, surrounded by 11.5 cm of polyethylene. This detector has a sensitivity of 8.5 x 10^4 counts/mrem. The signals from this fast scintillator are shaped using a shorted delay line to produce pulses that are only 10 ns long, so that dose-equivalent rates up to 12 mrem/s can be measured with less than a 1% counting loss. The associated electronic circuits store detector counts only when the count rate exceeds a preset level. When the count rate returns to background, a conversion from counts to dose equivalent is made and the results are displayed. As a means of recording the number of source pulses that have occurred, a second display shows how many times the preset count rate has been exceeded. Accumulation of detector counts and readouts can also be controlled manually. The unit will display the integrated dose equivalent up to 200 mrem in 0.01 mrem steps. A pulse-height discriminator rejects gamma-ray interactions below 1 MeV, and the detector size limits the response above that energy. The instrument can be operated from an ac line or will run on rechargeable batteries for up to 12 hours.

  4. A Bayesian interpretation of a decision level in radioactivity counting data

    International Nuclear Information System (INIS)

    Moreno, A.; Navarro, E.; Senent, F.

    1988-01-01

    A Bayesian procedure is applied to derive a posterior gamma distribution which describes the inferential content of the radioactivity counting data without incorporating any other information. A formulation leading to both detection and decision limits is developed. A tabulation of significance levels is given for some experimental situations. (Author)

  5. Neutron counting and gamma spectroscopy with PVT detectors

    International Nuclear Information System (INIS)

    Mitchell, Dean James; Brusseau, Charles A.

    2011-01-01

    Radiation portals normally incorporate a dedicated neutron counter and a gamma-ray detector with at least some spectroscopic capability. This paper describes the design and presents characterization data for a detection system called PVT-NG, which uses large polyvinyl toluene (PVT) detectors to monitor both types of radiation. The detector material is surrounded by polyvinyl chloride (PVC), which emits high-energy gamma rays following neutron capture reactions. Assessments based on high-energy gamma rays are well suited for the detection of neutron sources, particularly in border security applications, because few isotopes in the normal stream of commerce have significant gamma-ray yields above 3 MeV. Therefore, an increased count rate for high-energy gamma rays is a strong indicator of the presence of a neutron source. The sensitivity of the PVT-NG sensor to bare 252Cf is 1.9 counts per second per nanogram (cps/ng), and the sensitivity for 252Cf surrounded by 2.5 cm of polyethylene is 2.3 cps/ng. The PVT-NG sensor is a proof-of-principle sensor that was not fully optimized; the neutron detection sensitivity could be improved, for instance, by using additional moderator. The PVT-NG detectors and associated electronics are designed to provide improved resolution, gain stability, and performance at high count rates relative to the PVT detectors in typical radiation portals. As well as addressing the needs for neutron detection, these characteristics are also desirable for analysis of the gamma-ray spectra. Accurate isotope identification results were obtained despite the common impression that the absence of photopeaks makes data collected by PVT detectors unsuitable for spectroscopic analysis. The PVT detectors in the PVT-NG unit are used for both gamma-ray and neutron detection, so the sensitive volume exceeds the volume of the detection elements in portals that use dedicated components to detect each type of radiation.

  6. Enhancing the performance of the measurement-device-independent quantum key distribution with heralded pair-coherent sources

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Feng; Zhang, Chun-Hui; Liu, Ai-Ping [Institute of Signal Processing Transmission, Nanjing University of Posts and Telecommunications, Nanjing 210003 (China); Key Lab of Broadband Wireless Communication and Sensor Network Technology, Nanjing University of Posts and Telecommunications, Ministry of Education, Nanjing 210003 (China); Wang, Qin, E-mail: qinw@njupt.edu.cn [Institute of Signal Processing Transmission, Nanjing University of Posts and Telecommunications, Nanjing 210003 (China); Key Lab of Broadband Wireless Communication and Sensor Network Technology, Nanjing University of Posts and Telecommunications, Ministry of Education, Nanjing 210003 (China); Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026 (China)

    2016-04-01

    In this paper, we propose to implement the heralded pair-coherent source into the measurement-device-independent quantum key distribution. By comparing its performance with other existing schemes, we demonstrate that our new scheme can overcome many shortcomings existing in current schemes, and show excellent behavior in the quantum key distribution. Moreover, even when taking the statistical fluctuation into account, we can still obtain quite high key generation rate at very long transmission distance by using our new scheme. - Highlights: • Implement the heralded pair-coherent source into the measurement-device-independent quantum key distribution. • Overcome many shortcomings existing in current schemes and show excellent behavior. • Obtain quite high key generation rate even when taking statistical fluctuation into account.

  7. Depression and anxiety symptoms are associated with white blood cell count and red cell distribution width: A sex-stratified analysis in a population-based study.

    Science.gov (United States)

    Shafiee, Mojtaba; Tayefi, Maryam; Hassanian, Seyed Mahdi; Ghaneifar, Zahra; Parizadeh, Mohammad Reza; Avan, Amir; Rahmani, Farzad; Khorasanchi, Zahra; Azarpajouh, Mahmoud Reza; Safarian, Hamideh; Moohebati, Mohsen; Heidari-Bakavoli, Alireza; Esmaeili, Habibolah; Nematy, Mohsen; Safarian, Mohammad; Ebrahimi, Mahmoud; Ferns, Gordon A; Mokhber, Naghmeh; Ghayour-Mobarhan, Majid

    2017-10-01

    Depression and anxiety are two common mood disorders that are both linked to systemic inflammation. Increased white blood cell (WBC) count and red cell distribution width (RDW) are associated with negative clinical outcomes in a wide variety of pathological conditions. WBC is a non-specific inflammatory marker, and RDW is also strongly related to other inflammatory markers. Therefore, we proposed that there might be an association between these hematological inflammatory markers and depression/anxiety symptoms. The primary objective of this study was to examine the association between depression/anxiety symptoms and hematological inflammatory markers including WBC and RDW in a large population-based study. Symptoms of depression and anxiety and a complete blood count (CBC) were measured in 9274 participants (40% males and 60% females) aged 35-65 years, enrolled in a population-based cohort (MASHAD) study in north-eastern Iran. Symptoms of depression and anxiety were evaluated using the Beck Depression and Anxiety Inventories. The mean WBC count increased with increasing severity of symptoms of depression and anxiety among men. Male participants with severe depression had significantly higher values of RDW, as did men with anxiety symptoms. Our results suggest that higher depression and anxiety scores are associated with an enhanced inflammatory state, as assessed by higher hematological inflammatory markers including WBC and RDW, even after adjusting for potential confounders. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. A stationary computed tomography system with cylindrically distributed sources and detectors.

    Science.gov (United States)

    Chen, Yi; Xi, Yan; Zhao, Jun

    2014-01-01

    The temporal resolution of current computed tomography (CT) systems is limited by the rotation speed of their gantries. A helical interlaced source detector array (HISDA) CT, which is a stationary CT system with distributed X-ray sources and detectors, is presented in this paper to overcome the aforementioned limitation and achieve high temporal resolution. Projection data can be obtained from different angles in a short time and do not require source, detector, or object motion. Axial coverage speed is increased further by employing a parallel scan scheme. Interpolation is employed to approximate the missing data in the gaps, and then a Katsevich-type reconstruction algorithm is applied to enable an approximate reconstruction. The proposed algorithm suppressed the cone beam and gap-induced artifacts in HISDA CT. The results also suggest that gap-induced artifacts can be reduced by employing a large helical pitch for a fixed gap height. HISDA CT is a promising 3D dynamic imaging architecture given its good temporal resolution and stationary advantage.

  9. The effect of volume and quenching on estimation of counting efficiencies in liquid scintillation counting

    International Nuclear Information System (INIS)

    Knoche, H.W.; Parkhurst, A.M.; Tam, S.W.

    1979-01-01

    The effect of volume on the liquid scintillation counting performance of 14C samples has been investigated. A decrease in counting efficiency was observed for unquenched samples with volumes below about 6 ml and above about 18 ml. Two quench-correction methods, sample channels ratio and external standard channels ratio, and three different liquid scintillation counters were used to determine the magnitude of the error in predicting counting efficiencies when small-volume samples (2 ml) with different levels of quenching were assayed. The 2 ml samples exhibited slightly greater standard deviations of the difference between predicted and determined counting efficiencies than did 15 ml samples. Nevertheless, the magnitude of the errors indicates that if the sample channels ratio method of quench correction is employed, 2 ml samples may be counted in conventional counting vials with little loss in counting precision. (author)

  10. Activity measurements of radioactive solutions by liquid scintillation counting and pressurized ionization chambers and Monte Carlo simulations of source-detector systems for metrology

    International Nuclear Information System (INIS)

    Amiot, Marie-Noelle

    2013-01-01

    The research work 'Activity measurements of radioactive solutions by liquid scintillation counting and pressurized ionization chambers and Monte Carlo simulations of source-detector systems' was presented for the degree of 'Habilitation a diriger des recherches'. The common thread of the two themes, liquid scintillation counting and pressurized ionization chambers, is the improvement of radionuclide activity measurement techniques. Metrology of ionizing radiation intervenes in numerous domains, in research and in industry, including the environment and health, which have been subjects of constant concern for the world's population in recent years. This wide variety of applications involves a large number of radionuclides with diverse decay schemes and in varied physical forms. The work presented here, carried out within the Laboratoire National Henri Becquerel, aims to ensure the traceability of detector calibrations and to improve activity measurement methods within the framework of research and development projects. Improving the primary and secondary activity measurement methods consists in refining the accuracy of the measurements, in particular through a better knowledge of the parameters influencing the detector yield. The development work on liquid scintillation counting mainly concerns the study of the response of liquid scintillators to low-energy electrons, as well as their linear absorption coefficients, using synchrotron radiation. The research on pressurized ionization chambers consists of the study of their response to photons and electrons by experimental measurements compared with simulations of the source-detector system using Monte Carlo codes. In addition, the design of a new type of ionization chamber with variable pressure is presented. This new project was developed to guarantee the precision of the amount of activity injected into patients in the framework of diagnostic examinations.

  11. HIV Seropositivity And CD4 T-Lymphocyte Counts Among Infants ...

    African Journals Online (AJOL)

    The CD4 cell count was estimated using the Dynamal ® Quant Kit (Dynal Biotechn, ASA, Oslo, Norway). Results: The overall HIV prevalence rate in this study was 23.2%. The distribution of HIV prevalence among different age group revealed a high prevalence rate among the under fives (24.1% for males and 26.4% for ...

  12. TasselNet: counting maize tassels in the wild via local counts regression network.

    Science.gov (United States)

    Lu, Hao; Cao, Zhiguo; Xiao, Yang; Zhuang, Bohan; Shen, Chunhua

    2017-01-01

    Accurately counting maize tassels is important for monitoring the growth status of maize plants. This tedious task, however, is still mainly done by manual effort. In the context of modern plant phenotyping, automating this task is required to meet the need of large-scale analysis of genotype and phenotype. In recent years, computer vision technologies have experienced a significant breakthrough due to the emergence of large-scale datasets and increased computational resources. Naturally, image-based approaches have also received much attention in plant-related studies. Yet a fact is that most image-based systems for plant phenotyping are deployed under controlled laboratory environments. When transferring the application scenario to unconstrained in-field conditions, intrinsic and extrinsic variations in the wild pose great challenges for accurate counting of maize tassels, which goes beyond the ability of conventional image processing techniques. This calls for further robust computer vision approaches to address in-field variations. This paper studies the in-field counting problem of maize tassels. To our knowledge, this is the first time that a plant-related counting problem is considered using computer vision technologies under an unconstrained field-based environment. With 361 field images collected in four experimental fields across China between 2010 and 2015 and corresponding manually-labelled dotted annotations, a novel Maize Tassels Counting (MTC) dataset is created and will be released with this paper. To alleviate the in-field challenges, a deep convolutional neural network-based approach termed TasselNet is proposed. TasselNet can achieve good adaptability to in-field variations via modelling the local visual characteristics of field images and regressing the local counts of maize tassels. Extensive results on the MTC dataset demonstrate that TasselNet outperforms other state-of-the-art approaches by large margins and achieves the overall best counting

  13. TasselNet: counting maize tassels in the wild via local counts regression network

    Directory of Open Access Journals (Sweden)

    Hao Lu

    2017-11-01

    Abstract Background Accurately counting maize tassels is important for monitoring the growth status of maize plants. This tedious task, however, is still mainly done by manual effort. In the context of modern plant phenotyping, automating this task is required to meet the need of large-scale analysis of genotype and phenotype. In recent years, computer vision technologies have experienced a significant breakthrough due to the emergence of large-scale datasets and increased computational resources. Naturally, image-based approaches have also received much attention in plant-related studies. Yet a fact is that most image-based systems for plant phenotyping are deployed under controlled laboratory environments. When transferring the application scenario to unconstrained in-field conditions, intrinsic and extrinsic variations in the wild pose great challenges for accurate counting of maize tassels, which goes beyond the ability of conventional image processing techniques. This calls for further robust computer vision approaches to address in-field variations. Results This paper studies the in-field counting problem of maize tassels. To our knowledge, this is the first time that a plant-related counting problem is considered using computer vision technologies under an unconstrained field-based environment. With 361 field images collected in four experimental fields across China between 2010 and 2015 and corresponding manually-labelled dotted annotations, a novel Maize Tassels Counting (MTC) dataset is created and will be released with this paper. To alleviate the in-field challenges, a deep convolutional neural network-based approach termed TasselNet is proposed. TasselNet can achieve good adaptability to in-field variations via modelling the local visual characteristics of field images and regressing the local counts of maize tassels. Extensive results on the MTC dataset demonstrate that TasselNet outperforms other state-of-the-art approaches by large

  14. Coincidence counting corrections for dead time losses and accidental coincidences

    International Nuclear Information System (INIS)

    Wyllie, H.A.

    1987-04-01

    An equation is derived for the calculation of the radioactivity of a source from the results of coincidence counting, taking into account dead-time losses and accidental coincidences. The derivation is an extension of the method of J. Bryant [Int. J. Appl. Radiat. Isot., 14:143, 1963]. The improvement on Bryant's formula has been verified by experiment.
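Wyllie's exact equation is not reproduced in this abstract, but the classical coincidence-counting relation it extends (Bryant's approach) can be sketched as follows. The function name, the non-paralyzable dead-time model, and the accidental-coincidence term 2·τ_r·r_β·r_γ are standard textbook assumptions, not the paper's formula:

```python
def source_activity(n_beta, n_gamma, n_coinc, t,
                    tau_beta=0.0, tau_gamma=0.0, tau_r=0.0):
    """Estimate source activity from beta-gamma coincidence counting.

    n_beta, n_gamma, n_coinc: counts in the beta, gamma, and coincidence
    channels over live time t; tau_beta, tau_gamma: channel dead times
    (non-paralyzable model); tau_r: coincidence resolving time.
    """
    # dead-time-corrected singles rates (non-paralyzable model)
    r_beta = (n_beta / t) / (1.0 - (n_beta / t) * tau_beta)
    r_gamma = (n_gamma / t) / (1.0 - (n_gamma / t) * tau_gamma)
    # subtract the accidental coincidence rate: 2 * tau_r * r_beta * r_gamma
    r_coinc = n_coinc / t - 2.0 * tau_r * r_beta * r_gamma
    # basic coincidence relation: N0 = r_beta * r_gamma / r_coinc,
    # since r_beta = N0*eps_b, r_gamma = N0*eps_g, r_coinc = N0*eps_b*eps_g
    return r_beta * r_gamma / r_coinc
```

With zero dead times and resolving time, the efficiencies cancel exactly and the ratio recovers the disintegration rate.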

  15. Full Counting Statistics for Interacting Fermions with Determinantal Quantum Monte Carlo Simulations.

    Science.gov (United States)

    Humeniuk, Stephan; Büchler, Hans Peter

    2017-12-08

    We present a method for computing the full probability distribution function of quadratic observables such as particle number or magnetization for the Fermi-Hubbard model within the framework of determinantal quantum Monte Carlo calculations. Especially in cold atom experiments with single-site resolution, such a full counting statistics can be obtained from repeated projective measurements. We demonstrate that the full counting statistics can provide important information on the size of preformed pairs. Furthermore, we compute the full counting statistics of the staggered magnetization in the repulsive Hubbard model at half filling and find excellent agreement with recent experimental results. We show that current experiments are capable of probing the difference between the Hubbard model and the limiting Heisenberg model.

  16. COUNTS-IN-CYLINDERS IN THE SLOAN DIGITAL SKY SURVEY WITH COMPARISONS TO N-BODY SIMULATIONS

    International Nuclear Information System (INIS)

    Berrier, Heather D.; Barton, Elizabeth J.; Bullock, James S.; Berrier, Joel C.; Zentner, Andrew R.; Wechsler, Risa H.

    2011-01-01

    Environmental statistics provide a necessary means of comparing the properties of galaxies in different environments, and a vital test of models of galaxy formation within the prevailing hierarchical cosmological model. We explore counts-in-cylinders, a common statistic defined as the number of companions of a particular galaxy found within a given projected radius and redshift interval. Galaxy distributions with the same two-point correlation functions do not necessarily have the same companion count distributions. We use this statistic to examine the environments of galaxies in the Sloan Digital Sky Survey Data Release 4 (SDSS DR4). We also make preliminary comparisons to four models for the spatial distributions of galaxies, based on N-body simulations and data from SDSS DR4, to study the utility of the counts-in-cylinders statistic. There is a very large scatter between the number of companions a galaxy has and the mass of its parent dark matter halo and the halo occupation, limiting the utility of this statistic for certain kinds of environmental studies. We also show that prevalent empirical models of galaxy clustering that match observed two- and three-point clustering statistics well fail to reproduce some aspects of the observed distribution of counts-in-cylinders on 1, 3, and 6 h^-1 Mpc scales. All models that we explore underpredict the fraction of galaxies with few or no companions in 3 and 6 h^-1 Mpc cylinders. Roughly 7% of galaxies in the real universe are significantly more isolated within a 6 h^-1 Mpc cylinder than the galaxies in any of the models we use. Simple phenomenological models that map galaxies to dark matter halos fail to reproduce high-order clustering statistics in low-density environments.
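As a concrete illustration of the statistic defined above, a brute-force counts-in-cylinders computation over a mock catalog might look like the following. Coordinate conventions and argument names are illustrative; a real survey analysis would first convert RA/Dec/redshift into projected comoving separations:

```python
import numpy as np

def counts_in_cylinders(x_mpc, y_mpc, z_los, r_proj, dz):
    """For each galaxy, count companions within projected radius r_proj
    (in the x-y plane) and line-of-sight interval +/- dz."""
    pos = np.column_stack([np.asarray(x_mpc, float), np.asarray(y_mpc, float)])
    z = np.asarray(z_los, float)
    counts = np.empty(len(z), dtype=int)
    for i in range(len(z)):
        d2 = np.sum((pos - pos[i]) ** 2, axis=1)
        in_cyl = (d2 <= r_proj ** 2) & (np.abs(z - z[i]) <= dz)
        counts[i] = in_cyl.sum() - 1  # exclude the galaxy itself
    return counts
```

The O(N^2) loop is fine for small mocks; large catalogs would use a spatial tree instead.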

  17. UV Stellar Distribution Model for the Derivation of Payload

    Directory of Open Access Journals (Sweden)

    Young-Jun Choi

    1999-12-01

    Full Text Available We present the results of a model calculation of the stellar distribution in a UV band centered at 2175Å, corresponding to the well-known bump in the interstellar extinction curve. The stellar distribution model used here is based on the Bahcall-Soneira galaxy model (1980). The source code for the model calculation was designed by Brosch (1991) and modified to investigate various design factors for a UV satellite payload. The model predicts UV stellar densities in different sky directions, and its results are compared with the TD-1 star counts for a number of sky regions. From this study, we can determine the field of view, size of optics, angular resolution, and number of stars in one orbit. These will provide the basic constraints in designing a satellite payload for UV observations.

  18. Optics study of liquid scintillation counting systems

    International Nuclear Information System (INIS)

    Duran Ramiro, M. T.; Garcia-Torano, E.

    2005-01-01

    Optics is a key issue in the development of any liquid scintillation counting (LSC) system. Light emission in the scintillating solution, transmission through the vial and reflector design are some aspects that need to be considered in detail. This paper describes measurements and calculations carried out to optimise these factors for the design of a new family of LSC counters. Measurements of the light distribution emitted by a scintillation vial were done by autoradiographs of cylindrical vials made of various materials and results were compared to those obtained by direct measurements of the light distribution made by scanning the vial with a photomultiplier tube. Calculations were also carried out to study the light transmission in the vial and the optimal design of the reflector for a system with one photomultiplier tube. (Author)

  19. Geometric discretization of the multidimensional Dirac delta distribution - Application to the Poisson equation with singular source terms

    Science.gov (United States)

    Egan, Raphael; Gibou, Frédéric

    2017-10-01

    We present a discretization method for the multidimensional Dirac distribution. We show its applicability in the context of integration problems, and for discretizing Dirac-distributed source terms in Poisson equations with constant or variable diffusion coefficients. The discretization is cell-based and can thus be applied in a straightforward fashion to Quadtree/Octree grids. The method produces second-order accurate results for integration. Superlinear convergence is observed when it is used to model Dirac-distributed source terms in Poisson equations: the observed order of convergence is 2 or slightly smaller. The method is consistent with the discretization of Dirac delta distribution for codimension one surfaces presented in [1,2]. We present Quadtree/Octree construction procedures to preserve convergence and present various numerical examples, including multi-scale problems that are intractable with uniform grids.
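The cell-based scheme of the paper is not reproduced here, but the basic idea of a grid delta that is second-order accurate for integration can be illustrated in 1-D with a simple linear (hat-function) regularization. This is a generic textbook construction under assumed names, not the authors' Quadtree/Octree method:

```python
def integrate_against_delta(f, a, x0, h):
    """Approximate the integral of f(x) * delta(x - a) on a uniform grid
    x_i = x0 + i*h by linearly distributing the delta onto the two nodes
    bracketing a. Exact for linear f; second-order accurate in general."""
    k = int((a - x0) // h)                 # index of left bracketing node
    xl, xr = x0 + k * h, x0 + (k + 1) * h
    wl, wr = (xr - a) / h, (a - xl) / h    # barycentric weights, wl + wr = 1
    return wl * f(xl) + wr * f(xr)
```

The same weights, divided by the cell volume, give a discrete source term for a Poisson equation with a point source, which is the setting the paper generalizes to multiple dimensions and adaptive grids.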

  20. Do your syringes count?

    International Nuclear Information System (INIS)

    Brewster, K.

    2002-01-01

    Full text: This study was designed to investigate anecdotal evidence that residual Sestamibi (MIBI) activity varied in certain situations. For rest studies, different brands of syringes were tested to see if the residuals varied. The period of time MIBI doses remained in the syringe between dispensing and injection was also considered as a possible source of increased residual counts. Stress MIBI syringe residual activities were measured to assess if the method of stress test affected residual activity. MIBI was reconstituted using 13 GBq of technetium in 3 mL of normal saline, then boiled for 10 minutes. Doses were dispensed according to department protocol and injected via cannula. Residual syringes were collected for three syringe types. In each case the barrel and plunger were measured separately. As the syringe is flushed during the exercise stress test and not the pharmacological stress test, the chosen method was recorded. No relationship was demonstrated between the time MIBI remained in a syringe prior to injection and residual activity. Residual activity was not affected by the method of stress test used. Actual injected activity can be calculated if the amount of activity remaining in the syringe post injection is known. Imaging time can be adjusted for residual activity to optimise count statistics. Preliminary results in this study indicate there is no difference in residual activity between syringe brands. Copyright (2002) The Australian and New Zealand Society of Nuclear Medicine Inc.
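The adjustment described, computing the actual injected activity from the dispensed dose and the measured residual, can be sketched as below. The decay correction to injection time and the Tc-99m half-life value are standard physics; the function itself and its argument names are illustrative:

```python
import math

TC99M_HALF_LIFE_MIN = 6.01 * 60  # Tc-99m half-life, ~360.6 minutes

def injected_activity(dispensed_mbq, residual_mbq, minutes_to_injection):
    """Actual injected activity: decay-correct the dispensed dose to the
    injection time, then subtract the residual left in the syringe."""
    decayed = dispensed_mbq * math.exp(
        -math.log(2.0) * minutes_to_injection / TC99M_HALF_LIFE_MIN)
    return decayed - residual_mbq
```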

  1. Isospectral discrete and quantum graphs with the same flip counts and nodal counts

    Science.gov (United States)

    Juul, Jonas S.; Joyner, Christopher H.

    2018-06-01

    The existence of non-isomorphic graphs which share the same Laplace spectrum (to be referred to as isospectral graphs) leads naturally to the following question: what additional information is required in order to resolve isospectral graphs? It was suggested by Band, Shapira and Smilansky that this might be achieved by either counting the number of nodal domains or the number of times the eigenfunctions change sign (the so-called flip count) (Band et al 2006 J. Phys. A: Math. Gen. 39 13999–14014; Band and Smilansky 2007 Eur. Phys. J. Spec. Top. 145 171–9). Recent examples of (discrete) isospectral graphs with the same flip count and nodal count have been constructed by Ammann by utilising Godsil–McKay switching (Ammann private communication). Here, we provide a simple alternative mechanism that produces systematic examples of both discrete and quantum isospectral graphs with the same flip and nodal counts.
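For a discrete graph, the flip count referred to above, the number of edges across which an eigenvector changes sign, can be computed directly from the adjacency matrix and an eigenvector. This helper is an illustrative sketch, not code from the paper:

```python
def flip_count(adj, eigvec):
    """Number of edges (i, j) of the graph with adjacency matrix adj
    across which the eigenvector strictly changes sign."""
    n = len(eigvec)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if adj[i][j] and eigvec[i] * eigvec[j] < 0)
```

Nodal domain counts follow similarly, by counting connected components of the subgraphs on which the eigenvector has constant sign.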

  2. The dislocation distribution function near a crack tip generated by external sources

    International Nuclear Information System (INIS)

    Lung, C.W.; Deng, K.M.

    1988-06-01

    The dislocation distribution function near a crack tip generated by external sources is calculated. It is similar in shape to the curves calculated for the crack-tip emission case, but the quantitative difference is quite large. The image force enlarges the negative dislocation zone but does not change the form of the curve. (author). 10 refs, 3 figs

  3. Long-distance quantum key distribution with imperfect devices

    International Nuclear Information System (INIS)

    Lo Piparo, Nicoló; Razavi, Mohsen

    2014-01-01

    Quantum key distribution over probabilistic quantum repeaters is addressed. We compare, under practical assumptions, two such schemes in terms of their secure key generation rate per memory, R_QKD. The two schemes under investigation are the one proposed by Duan et al. in [Nature 414, 413 (2001)] and that of Sangouard et al. proposed in [Phys. Rev. A 76, 050301 (2007)]. We consider various sources of imperfections in the latter protocol, such as a nonzero double-photon probability for the source, dark counts per pulse, channel loss, and inefficiencies in photodetectors and memories, to find the rate for different nesting levels. We determine the maximum value of the double-photon probability beyond which it is not possible to share a secret key anymore. We find the crossover distance for up to three nesting levels. We finally compare the two protocols.

  4. Counting It Twice.

    Science.gov (United States)

    Schattschneider, Doris

    1991-01-01

    Provided are examples from many domains of mathematics that illustrate the Fubini Principle in its discrete version: the value of a summation over a rectangular array is independent of the order of summation. Included are: counting using partitions as in proof by pictures, combinatorial arguments, indirect counting as in the inclusion-exclusion…

  5. A review on the sources and spatial-temporal distributions of Pb in Jiaozhou Bay

    Science.gov (United States)

    Yang, Dongfang; Zhang, Jie; Wang, Ming; Zhu, Sixi; Wu, Yunjie

    2017-12-01

    This paper provides a review of the sources, spatial distribution, and temporal variations of Pb in Jiaozhou Bay, based on investigations of Pb in surface and bottom waters in different seasons during 1979-1983. The strengths of Pb sources in Jiaozhou Bay showed increasing trends, and the pollution level of Pb in the bay was slight to moderate in the early stage of reform and opening-up. Pb contents in the bay were mainly determined by the strength and frequency of Pb inputs from human activities, and Pb could move from high-content areas to low-content areas in the ocean interior. Surface waters were polluted by human activities, and bottom waters were polluted through vertical water exchange. The spatial distribution of Pb in the waters developed in three steps: (1) Pb was transferred to surface waters in the bay, (2) Pb was transported within the surface waters, and (3) Pb was transferred to and accumulated in bottom waters.

  6. Low White Blood Cell Count

    Science.gov (United States)

    Symptoms Low white blood cell count By Mayo Clinic Staff A low white blood cell count (leukopenia) is a decrease ... of white blood cell (neutrophil). The definition of low white blood cell count varies from one medical ...

  7. Liquid scintillation counting of 3H-thymidine incorporated into rat lens DNA

    International Nuclear Information System (INIS)

    Soederberg, P.G.; Lindstroem, B.

    1990-01-01

    DNA synthesis in the lens has previously been localized by autoradiography following incorporation of 3 H-thymidine. For the quantification of DNA synthesis in the lens, pooling of lenses and extraction of the DNA for liquid scintillation counting has formerly been adopted. In the present investigation, a method has been developed for the extraction of the unincorporated tracer from whole lenses after short-time incubation in a medium containing 3 H-thymidine. The 3 H-thymidine incorporated into individual lenses was then detected by liquid scintillation counting after dissolution of the lenses. The sources of the variation in the method are evaluated. (author)

  8. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks.

    Science.gov (United States)

    Ma, Junjie; Meng, Fansheng; Zhou, Yuexi; Wang, Yeyao; Shi, Ping

    2018-02-16

    Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths.
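The paper's Dual-PSO couples two particle swarm optimization loops: one inverts the UV-visible spectra into water-quality parameters on each node, and one searches the network-wide pollution source position. As a hedged illustration of the shared building block, a minimal single-swarm global-best PSO minimizer is sketched below. All names and parameter values are assumptions, and the quadratic test objective stands in for the (unspecified) plume-model misfit:

```python
import random

def pso_minimize(f, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over the box `bounds` = [(lo, hi), ...] with a basic
    global-best particle swarm. Returns (best position, best value)."""
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                 # personal best positions
    pval = [f(x) for x in xs]                  # personal best values
    gi = min(range(n_particles), key=pval.__getitem__)
    gbest, gval = pbest[gi][:], pval[gi]       # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d, (lo, hi) in enumerate(bounds):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], lo), hi)
            v = f(xs[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, xs[i][:]
                if v < gval:
                    gval, gbest = v, xs[i][:]
    return gbest, gval
```

In a source-localization setting, `f` would score a candidate source position against the concentrations measured by the mobile nodes.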

  9. Ion-source dependence of the distributions of internuclear separations in 2-MeV HeH+ beams

    International Nuclear Information System (INIS)

    Kanter, E.P.; Gemmell, D.S.; Plesser, I.; Vager, Z.

    1981-01-01

    Experiments involving the use of MeV molecular-ion beams have yielded new information on atomic collisions in solids. A central part of the analyses of such experiments is a knowledge of the distribution of internuclear separations contained in the incident beam. In an attempt to determine how these distributions depend on ion-source gas conditions, we have studied foil-induced dissociations of H2+, H3+, HeH+, and OH2+ ions. Although changes of ion-source gas composition and pressure were found to have no measurable influence on the vibrational state populations of the beams reaching our target, for HeH+ we found that beams produced in our rf source were vibrationally hotter than beams produced in a duoplasmatron. This was also seen in studies of neutral fragments and transmitted molecules.

  10. Free-Space Quantum Key Distribution with a High Generation Rate KTP Waveguide Photon-Pair Source

    Science.gov (United States)

    Wilson, J.; Chaffee, D.; Wilson, N.; Lekki, J.; Tokars, R.; Pouch, J.; Lind, A.; Cavin, J.; Helmick, S.; Roberts, T.; et al.

    2016-01-01

    NASA awarded Small Business Innovative Research (SBIR) contracts to AdvR, Inc. to develop a high generation rate source of entangled photons that could be used to explore quantum key distribution (QKD) protocols. The final product, a photon pair source using a dual-element periodically-poled potassium titanyl phosphate (KTP) waveguide, was delivered to NASA Glenn Research Center in June of 2015. This paper describes the source, its characterization, and its performance in a B92 (Bennett, 1992) protocol QKD experiment.

  11. Distribution and sources of particulate organic matter in the Indian monsoonal estuaries during monsoon

    Digital Repository Service at National Institute of Oceanography (India)

    Sarma, V.V.S.S.; Krishna, M.S.; Prasad, V.R.; Kumar, B.S.K.; Naidu, S.A.; Rao, G.D.; Viswanadham, R.; Sridevi, T.; Kumar, P.P.; Reddy, N.P.C.

    The distribution and sources of particulate organic carbon (POC) and nitrogen (PN) in 27 Indian estuaries were examined during the monsoon using the content and isotopic composition of carbon and nitrogen. Higher phytoplankton biomass was noticed...

  12. Future prospects for ECR ion sources with improved charge state distributions

    International Nuclear Information System (INIS)

    Alton, G.D.

    1995-01-01

    Despite the steady advance in the technology of the ECR ion source, present art forms have not yet reached their full potential in terms of charge state and intensity within a particular charge state, in part because of the narrow-bandwidth, single-frequency microwave radiation used to heat the plasma electrons. This article identifies fundamentally important methods which may enhance the performance of ECR ion sources through the use of: (1) a tailored magnetic field configuration (spatial domain) in combination with single-frequency microwave radiation to create a large, uniformly distributed ECR "volume", or (2) broadband frequency-domain techniques (variable-frequency, broad-band frequency, or multiple-discrete-frequency microwave radiation), derived from standard TWT technology, to transform the resonant plasma "surfaces" of traditional ECR ion sources into resonant plasma "volumes". The creation of a large ECR plasma "volume" permits coupling of more power into the plasma, resulting in the heating of a much larger electron population to higher energies, thereby producing higher charge state ions and much higher intensities within a particular charge state than possible in present forms of the source. The ECR ion source concepts described in this article offer exciting opportunities to significantly advance the state of the art of ECR technology and, as a consequence, open new opportunities in fundamental and applied research and for a variety of industrial applications.

  13. Positron imaging system with improved count rate and tomographic capability

    International Nuclear Information System (INIS)

    1979-01-01

    A system with improved count rate capability for detecting the radioactive distribution of positron events within an organ of interest in a living subject is described. Objects of the invention include improving the scintillation crystal and pulse processing electronics, avoiding the limitations of collimators, and providing an Anger camera positron imaging system that avoids the use of collimators. (U.K.)

  14. Variations in pollen counts largely explained by climate and weather

    Science.gov (United States)

    Jung, Stephan; Damialis, Athanasios; Estrella, Nicole; Jochner, Susanne; Menzel, Annette

    2017-04-01

    The interaction between climate and vegetation is well studied within phenology. Climatic/weather conditions affect, e.g., flowering date, length of the vegetation period, start and end of the season, and plant growth. Besides phenological stages, pollen counts can also be used to investigate the interaction between climate and vegetation. Pollen emission and distribution are directly influenced by temperature, wind speed, wind direction and humidity/precipitation. The objective of this project is to study daily/sub-daily variations in pollen counts of woody and herbaceous plant species along an altitudinal gradient with different climatic conditions during the vegetation period. Measurements of pollen were carried out with three volumetric pollen traps installed at altitudes of 450 m a.s.l. (Freising), 700 m a.s.l. (Garmisch-Partenkirchen), and 2700 m a.s.l. (Schneefernerhaus near Zugspitze), representing a gradient from north of Munich towards the highest mountain of Germany. Airborne pollen concentrations were recorded during the years 2014-2015. The altitudinal range of these three stations, accompanied by different microclimates ("space for time approach"), can be used as a proxy for climate change and to assess its impact on pollen counts and thus allergenic risk for human health. For example, the pollen season is shortened and the pollen amount is reduced at higher sites. For detailed investigations, pollen of the taxa Plantago, Quercus, Poaceae, Cupressaceae, Cyperaceae, Betula and Platanus were chosen, because these are found in sufficient quantities. In general, pollen captured in the pollen traps has its origin, to a certain extent, in the immediate surroundings. Thus, it mirrors the local species distribution. Furthermore, the distance of pollen transport also depends on (micro-)climatic conditions, land cover and topography. The pollen trap shortly below the summit of Zugspitze (Schneefernerhaus) has an alpine environment without vegetation nearby. Therefore, this

  15. Hanford whole body counting manual

    International Nuclear Information System (INIS)

    Palmer, H.E.; Brim, C.P.; Rieksts, G.A.; Rhoads, M.C.

    1987-05-01

    This document, a reprint of the Whole Body Counting Manual, was compiled to train personnel, document operation procedures, and outline quality assurance procedures. The current manual contains information on: the location, availability, and scope of services of Hanford's whole body counting facilities; the administrative aspect of the whole body counting operation; Hanford's whole body counting facilities; the step-by-step procedure involved in the different types of in vivo measurements; the detectors, preamplifiers and amplifiers, and spectroscopy equipment; the quality assurance aspect of equipment calibration and recordkeeping; data processing, record storage, results verification, report preparation, count summaries, and unit cost accounting; and the topics of minimum detectable amount and measurement accuracy and precision. 12 refs., 13 tabs

  16. Determination of the 51Cr source strength at BNL

    International Nuclear Information System (INIS)

    Boger, J.; Hahn, R.L.; Chu, Y.Y.

    1995-11-01

    Neutron activation analysis (NAA) and γ-ray counting have been used to measure the activity of 24 samples removed from the GALLEX radioactive Cr neutrino source. In 9.86% of the disintegrations, 51 Cr decays with the emission of a 320-keV γ-ray. Counting this γ-ray provides a direct means to obtain the disintegration rates of the Cr samples. Based upon these disintegration rates, the authors obtain a strength of 63.1 ± 1.0 PBq for the entire Cr source. The Cr source activity has also been obtained through measuring the 51 V content of each sample by means of NAA. 51 V is the decay daughter for all decay modes of 51 Cr. Through neutron bombardment, radioactive 52 V is produced, which decays with the emission of a 1,434-keV γ-ray. By counting this γ-ray from NAA, they obtain a disintegration rate of 62.1 ± 1.0 PBq for the entire source. These values are consistent with all other measurements of the source strength done at other GALLEX Laboratories
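The direct γ-counting determination described above amounts to dividing the net 320-keV peak rate by the detection efficiency and the 9.86% branching ratio. A minimal sketch (the function name and arguments are illustrative):

```python
def cr51_activity_bq(net_peak_counts, live_time_s, efficiency, branching=0.0986):
    """Disintegration rate (Bq) from the net counts in the 320-keV gamma
    peak of Cr-51: A = C / (t * eps * I_gamma)."""
    return net_peak_counts / (live_time_s * efficiency * branching)
```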

  17. TU-EF-207-03: Advances in Stationary Breast Tomosynthesis Using Distributed X-Ray Sources

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, O. [The University of North Carolina at Chapel Hill (United States)

    2015-06-15

    Breast imaging technology is advancing on several fronts. In digital mammography, the major technological trend has been on optimization of approaches for performing combined mammography and tomosynthesis using the same system. In parallel, photon-counting slot-scan mammography is now in clinical use and more efforts are directed towards further development of this approach for spectral imaging. Spectral imaging refers to simultaneous acquisition of two or more energy-windowed images. Depending on the detector and associated electronics, there are a number of ways this can be accomplished. Spectral mammography using photon-counting detectors can suppress electronic noise and importantly, it enables decomposition of the image into various material compositions of interest facilitating quantitative imaging. Spectral imaging can be particularly important in intravenously injected contrast mammography and eventually tomosynthesis. The various approaches and applications of spectral mammography are discussed. Digital breast tomosynthesis relies on the mechanical movement of the x-ray tube to acquire a number of projections in a predefined arc, typically from 9 to 25 projections over a scan angle of +/−7.5 to 25 degrees depending on the particular system. The mechanical x-ray tube motion requires relatively long acquisition time, typically between 3.7 to 25 seconds depending on the system. Moreover, mechanical scanning may have an effect on the spatial resolution due to internal x-ray filament or external mechanical vibrations. New x-ray source arrays have been developed and they are aimed at replacing the scanned x-ray tube for improved acquisition time and potentially for higher spatial resolution. The potential advantages and challenges of this approach are described. 
Combination of digital mammography and tomosynthesis in a single system places increased demands on certain functional aspects of the detector and overall performance, particularly in the tomosynthesis

  18. TU-EF-207-03: Advances in Stationary Breast Tomosynthesis Using Distributed X-Ray Sources

    International Nuclear Information System (INIS)

    Zhou, O.

    2015-01-01

    Breast imaging technology is advancing on several fronts. In digital mammography, the major technological trend has been on optimization of approaches for performing combined mammography and tomosynthesis using the same system. In parallel, photon-counting slot-scan mammography is now in clinical use and more efforts are directed towards further development of this approach for spectral imaging. Spectral imaging refers to simultaneous acquisition of two or more energy-windowed images. Depending on the detector and associated electronics, there are a number of ways this can be accomplished. Spectral mammography using photon-counting detectors can suppress electronic noise and importantly, it enables decomposition of the image into various material compositions of interest facilitating quantitative imaging. Spectral imaging can be particularly important in intravenously injected contrast mammography and eventually tomosynthesis. The various approaches and applications of spectral mammography are discussed. Digital breast tomosynthesis relies on the mechanical movement of the x-ray tube to acquire a number of projections in a predefined arc, typically from 9 to 25 projections over a scan angle of +/−7.5 to 25 degrees depending on the particular system. The mechanical x-ray tube motion requires relatively long acquisition time, typically between 3.7 to 25 seconds depending on the system. Moreover, mechanical scanning may have an effect on the spatial resolution due to internal x-ray filament or external mechanical vibrations. New x-ray source arrays have been developed and they are aimed at replacing the scanned x-ray tube for improved acquisition time and potentially for higher spatial resolution. The potential advantages and challenges of this approach are described. 
Combination of digital mammography and tomosynthesis in a single system places increased demands on certain functional aspects of the detector and overall performance, particularly in the tomosynthesis

  19. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. 
Higher contrast results were
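The Bayesian maximum-likelihood expectation-maximisation technique mentioned above is, for Poisson data with a known point-spread function, closely related to the well-known Richardson-Lucy update. A 1-D sketch of that standard update (not the thesis's maximum-entropy algorithm, and with illustrative names) is:

```python
import numpy as np

def richardson_lucy(observed, psf, iters=50):
    """Iterative ML-EM (Richardson-Lucy) deconvolution for Poisson data:
    x <- x * conv(psf_flipped, observed / conv(psf, x))."""
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_flip = psf[::-1]
    for _ in range(iters):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # guard against /0
        estimate = estimate * np.convolve(ratio, psf_flip, mode="same")
    return estimate
```

Each iteration preserves non-negativity, which is one reason such multiplicative updates suit low-count nuclear medicine data.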

  20. SOILD: A computer model for calculating the effective dose equivalent from external exposure to distributed gamma sources in soil

    International Nuclear Information System (INIS)

    Chen, S.Y.; LePoire, D.; Yu, C.; Schafetz, S.; Mehta, P.

    1991-01-01

    The SOILD computer model was developed for calculating the effective dose equivalent from external exposure to distributed gamma sources in soil. It is designed to assess external doses under various exposure scenarios that may be encountered in environmental restoration programs. The model's four major functional features address (1) dose versus source depth in soil, (2) shielding of clean cover soil, (3) area of contamination, and (4) nonuniform distribution of sources. The model is also capable of adjusting doses when there are variations in soil densities for both source and cover soils. The model is supported by a database of approximately 500 radionuclides. 4 refs

  1. Disentangling the major source areas for an intense aerosol advection in the Central Mediterranean on the basis of Potential Source Contribution Function modeling of chemical and size distribution measurements

    Science.gov (United States)

    Petroselli, Chiara; Crocchianti, Stefano; Moroni, Beatrice; Castellini, Silvia; Selvaggi, Roberta; Nava, Silvia; Calzolai, Giulia; Lucarelli, Franco; Cappelletti, David

    2018-05-01

In this paper, we combined a Potential Source Contribution Function (PSCF) analysis of daily chemical aerosol composition data with hourly aerosol size distributions, with the aim of disentangling the major source areas during a complex and rapidly modulating advection event that impacted Central Italy in 2013. Chemical data include an ample set of metals obtained by Proton Induced X-ray Emission (PIXE), main soluble ions from ionic chromatography and elemental and organic carbon (EC, OC) obtained by thermo-optical measurements. Size distributions have been recorded with an optical particle counter for eight calibrated size classes in the 0.27-10 μm range. We demonstrated the usefulness of the approach by the positive identification of two very different source areas impacting during the transport event. In particular, biomass burning from Eastern Europe and desert dust from Sahara sources have been discriminated based on both chemistry and size distribution time evolution. Hourly back-trajectory (BT) calculations provided the best results in comparison to 6 h or 24 h based calculations.
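PSCF assigns each grid cell the conditional probability m_ij/n_ij, where n_ij counts all back-trajectory endpoints falling in the cell and m_ij counts only endpoints belonging to samples exceeding a concentration threshold. A minimal sketch (illustrative names and gridding, not the authors' implementation):

```python
import numpy as np

def pscf(lats, lons, sample_ids, conc, threshold, lat_edges, lon_edges):
    """Potential Source Contribution Function on a lat/lon grid."""
    high = conc[sample_ids] > threshold          # endpoint belongs to a 'polluted' sample?
    n_all, _, _ = np.histogram2d(lats, lons, bins=[lat_edges, lon_edges])
    n_high, _, _ = np.histogram2d(lats[high], lons[high], bins=[lat_edges, lon_edges])
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(n_all > 0, n_high / n_all, 0.0)   # m_ij / n_ij, zero where no endpoints
```

Cells repeatedly overflown on polluted days score near 1 and are flagged as potential source areas; in practice a weighting function is usually applied to cells with few endpoints.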

  2. CalCOFI Egg Counts

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Fish egg counts and standardized counts for eggs captured in CalCOFI icthyoplankton nets (primarily vertical [Calvet or Pairovet], oblique [bongo or ring nets], and...

  3. Empirical tests of Zipf's law mechanism in open source Linux distribution.

    Science.gov (United States)

    Maillart, T; Sornette, D; Spaeth, S; von Krogh, G

    2008-11-21

    Zipf's power law is a ubiquitous empirical regularity found in many systems, thought to result from proportional growth. Here, we establish empirically the usually assumed ingredients of stochastic growth models that have been previously conjectured to be at the origin of Zipf's law. We use exceptionally detailed data on the evolution of open source software projects in Linux distributions, which offer a remarkable example of a growing complex self-organizing adaptive system, exhibiting Zipf's law over four full decades.
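A rank-size check of Zipf's law amounts to fitting the slope of log(size) against log(rank); a pure Zipf law gives slope −1. A small illustrative sketch (not the authors' estimator, which is more careful about fitting ranges and statistics):

```python
import numpy as np

def zipf_exponent(sizes):
    """Estimate the Zipf exponent from a rank-size plot via a log-log least-squares fit."""
    s = np.sort(np.asarray(sizes, float))[::-1]   # sizes in descending order
    ranks = np.arange(1, len(s) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(s), 1)
    return -slope                                  # Zipf's law: size ~ rank**(-exponent)
```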

  4. Logistic quantile regression provides improved estimates for bounded avian counts: A case study of California Spotted Owl fledgling production

    Science.gov (United States)

    Cade, Brian S.; Noon, Barry R.; Scherer, Rick D.; Keane, John J.

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical conditional distribution of a bounded discrete random variable. The logistic quantile regression model requires that counts are randomly jittered to a continuous random variable, logit transformed to bound them between specified lower and upper values, then estimated in conventional linear quantile regression, repeating the 3 steps and averaging estimates. Back-transformation to the original discrete scale relies on the fact that quantiles are equivariant to monotonic transformations. We demonstrate this statistical procedure by modeling 20 years of California Spotted Owl fledgling production (0−3 per territory) on the Lassen National Forest, California, USA, as related to climate, demographic, and landscape habitat characteristics at territories. Spotted Owl fledgling counts increased nonlinearly with decreasing precipitation in the early nesting period, in the winter prior to nesting, and in the prior growing season; with increasing minimum temperatures in the early nesting period; with adult compared to subadult parents; when there was no fledgling production in the prior year; and when percentage of the landscape surrounding nesting sites (202 ha) with trees ≥25 m height increased. Changes in production were primarily driven by changes in the proportion of territories with 2 or 3 fledglings. Average variances of the discrete cumulative distributions of the estimated fledgling counts indicated that temporal changes in climate and parent age class explained 18% of the annual variance in owl fledgling production, which was 34% of the total variance. Prior fledgling production explained as much of
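The three-step recipe (jitter the counts, logit-transform to the bounds, estimate quantiles, then average over repeated jitters and back-transform) can be illustrated in the unconditional case, without covariates. The helper below is my own sketch; the paper's method uses linear quantile regression on covariates rather than a plain sample quantile:

```python
import numpy as np

_rng = np.random.default_rng(1)

def bounded_count_quantile(y, q, lower=0, upper=3, repeats=20):
    """Quantile of a bounded count via jitter -> logit -> quantile -> back-transform."""
    y = np.asarray(y, float)
    width = upper + 1 - lower                          # jittered counts live in [lower, upper + 1)
    total = 0.0
    for _ in range(repeats):
        z = y + _rng.uniform(0.0, 1.0, size=y.shape)   # step 1: jitter to a continuous variable
        t = (z - lower) / width
        logit = np.log(t / (1.0 - t))                  # step 2: logit transform, bounds -> (-inf, inf)
        qz = np.quantile(logit, q)                     # step 3: quantile on the transformed scale
        total += lower + width / (1.0 + np.exp(-qz))   # inverse logit, back to the bounded scale
    return int(np.floor(total / repeats))              # quantiles are equivariant to monotone transforms
```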

  5. Count-doubling time safety circuit

    International Nuclear Information System (INIS)

    Keefe, D.J.; McDowell, W.P.; Rusch, G.K.

    1981-01-01

    There is provided a nuclear reactor count-factor-increase time monitoring circuit which includes a pulse-type neutron detector, and means for counting the number of detected pulses during specific time periods. Counts are compared and the comparison is utilized to develop a reactor scram signal, if necessary
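The comparison logic, counting pulses in successive fixed periods and tripping when the count grows too fast, can be sketched in software (a toy illustration of the idea only; the patent describes a hardware circuit, and the trip factor here is an assumed parameter):

```python
def scram_signal(period_counts, factor=2.0):
    """Trip when the count in one period reaches `factor` times the previous period's count."""
    for prev, curr in zip(period_counts, period_counts[1:]):
        if prev > 0 and curr >= factor * prev:
            return True      # count-factor increase detected within one period: scram
    return False
```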

  6. Count-doubling time safety circuit

    Science.gov (United States)

    Rusch, Gordon K.; Keefe, Donald J.; McDowell, William P.

    1981-01-01

    There is provided a nuclear reactor count-factor-increase time monitoring circuit which includes a pulse-type neutron detector, and means for counting the number of detected pulses during specific time periods. Counts are compared and the comparison is utilized to develop a reactor scram signal, if necessary.

  7. Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources

    Directory of Open Access Journals (Sweden)

    Xiang Gao

    2016-07-01

Full Text Available This paper addresses the problem of mapping odor distribution derived from a chemical source using multi-sensor integration and reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To solve both of the above challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors taken at different positions. Initially, a multi-sensor integration method, together with the path of airflow, was used to map the pattern of odor particle movement. Then, more sensors are introduced at specific regions to determine the probable location of the odor source. Finally, the results of an odor source location simulation and a real experiment are presented.

  8. XMM-Newton 13H deep field - I. X-ray sources

    Science.gov (United States)

Loaring, N. S.; Dwelly, T.; Page, M. J.; Mason, K.; McHardy, I.; Gunn, K.; Moss, D.; Seymour, N.; Newsam, A. M.; Takata, T.; Sekiguchi, K.; Sasseen, T.; Cordova, F.

    2005-10-01

We present the results of a deep X-ray survey conducted with XMM-Newton, centred on the UK ROSAT 13H deep field area. This region covers 0.18 deg2, and is the first of the two areas covered with XMM-Newton as part of an extensive multiwavelength survey designed to study the nature and evolution of the faint X-ray source population. We have produced detailed Monte Carlo simulations to obtain a quantitative characterization of the source detection procedure and to assess the reliability of the resultant source list. We use the simulations to establish a likelihood threshold, above which we expect less than seven (3 per cent) of our sources to be spurious. We present the final catalogue of 225 sources. Within the central 9 arcmin, 68 per cent of source positions are accurate to 2 arcsec, making optical follow-up relatively straightforward. We construct the N(>S) relation in four energy bands: 0.2-0.5, 0.5-2, 2-5 and 5-10 keV. In all but our highest energy band we find that the source counts can be represented by a double power law with a bright-end slope consistent with the Euclidean case and a break around 10^-14 erg cm^-2 s^-1. Below this flux, the counts exhibit a flattening. Our source counts reach densities of 700, 1300, 900 and 300 deg-2 at fluxes of 4.1 × 10^-16, 4.5 × 10^-16, 1.1 × 10^-15 and 5.3 × 10^-15 erg cm^-2 s^-1 in the 0.2-0.5, 0.5-2, 2-5 and 5-10 keV energy bands, respectively. We have compared our source counts with those in the two Chandra deep fields and Lockman hole, and found our source counts to be amongst the highest of these fields in all energy bands. We resolve >51 per cent (>50 per cent) of the X-ray background emission in the 1-2 keV (2-5 keV) energy bands.

  9. Distribution, sources and health risk assessment of mercury in kindergarten dust

    Science.gov (United States)

    Sun, Guangyi; Li, Zhonggen; Bi, Xiangyang; Chen, Yupeng; Lu, Shuangfang; Yuan, Xin

    2013-07-01

Mercury (Hg) contamination in urban areas is a hot issue in environmental research. In this study, the distribution, sources and health risk of Hg in dust from 69 kindergartens in Wuhan, China, were investigated. In comparison with most other cities, the concentrations of total mercury (THg) and methylmercury (MeHg) were significantly elevated, ranging from 0.15 to 10.59 mg kg^-1 and from 0.64 to 3.88 μg kg^-1, respectively. Among the five different urban areas, the educational area had the highest concentrations of THg and MeHg. GIS mapping was used to identify the hot-spot areas and assess the potential pollution sources of Hg. The emissions of coal-power plants and coking plants were the main sources of THg in the dust, whereas the contributions of municipal solid waste (MSW) landfills and iron and steel smelting related industries were not significant. However, the emission of MSW landfills was considered to be an important source of MeHg in the studied area. The result of health risk assessment indicated that there was a high adverse health effect of the kindergarten dust in terms of Hg contamination on the children living in the educational area (Hazard index (HI) = 6.89).

  10. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Junjie Ma

    2018-02-01

    Full Text Available Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths.

  11. The Big Pumpkin Count.

    Science.gov (United States)

    Coplestone-Loomis, Lenny

    1981-01-01

    Pumpkin seeds are counted after students convert pumpkins to jack-o-lanterns. Among the activities involved, pupils learn to count by 10s, make estimates, and to construct a visual representation of 1,000. (MP)

  12. Sources and dispersive modes of micro-fibers in the environment.

    Science.gov (United States)

    Carr, Steve A

    2017-05-01

Understanding the sources and distribution of microfibers (MFs) in the environment is critical if control and remediation measures are to be effective. Microfibers comprise an overwhelming fraction (>85%) of microplastic debris found on shorelines around the world. Although primary sources have not been fully vetted, until recently it was widely believed that domestic laundry discharges were the major source. It was also thought that synthetic fibers and particles small enough to pass through wastewater treatment plants (WWTPs) entered oceans and surface waters. A more thorough assessment of WWTP effluent discharges indicates, however, that fiber and particulate counts do not support the belief that plants are the primary vectors for fibers entering the environment. This finding may bolster concerns that active and pervasive shedding of fibers from common fabrics and textiles could be contributing significantly, via direct pathways, to burgeoning environmental loads. Integr Environ Assess Manag 2017;13:466-469. © 2017 SETAC.

  13. Evaluation of a standardized procedure for [corrected] microscopic cell counts [corrected] in body fluids.

    Science.gov (United States)

    Emerson, Jane F; Emerson, Scott S

    2005-01-01

    A standardized urinalysis and manual microscopic cell counting system was evaluated for its potential to reduce intra- and interoperator variability in urine and cerebrospinal fluid (CSF) cell counts. Replicate aliquots of pooled specimens were submitted blindly to technologists who were instructed to use either the Kova system with the disposable Glasstic slide (Hycor Biomedical, Inc., Garden Grove, CA) or the standard operating procedure of the University of California-Irvine (UCI), which uses plain glass slides for urine sediments and hemacytometers for CSF. The Hycor system provides a mechanical means of obtaining a fixed volume of fluid in which to resuspend the sediment, and fixes the volume of specimen to be microscopically examined by using capillary filling of a chamber containing in-plane counting grids. Ninety aliquots of pooled specimens of each type of body fluid were used to assess the inter- and intraoperator reproducibility of the measurements. The variability of replicate Hycor measurements made on a single specimen by the same or different observers was compared with that predicted by a Poisson distribution. The Hycor methods generally resulted in test statistics that were slightly lower than those obtained with the laboratory standard methods, indicating a trend toward decreasing the effects of various sources of variability. For 15 paired aliquots of each body fluid, tests for systematically higher or lower measurements with the Hycor methods were performed using the Wilcoxon signed-rank test. Also examined was the average difference between the Hycor and current laboratory standard measurements, along with a 95% confidence interval (CI) for the true average difference. Without increasing labor or the requirement for attention to detail, the Hycor method provides slightly better interrater comparisons than the current method used at UCI. Copyright 2005 Wiley-Liss, Inc.
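The Poisson prediction used above as the benchmark for replicate-count variability can be checked with the classical index-of-dispersion statistic: for n replicates, sum((x − mean)²)/mean is approximately chi-square with n−1 degrees of freedom under Poisson sampling. A sketch of that test (my own illustration, not the paper's exact analysis):

```python
import numpy as np
from scipy import stats

def poisson_dispersion(counts):
    """Index-of-dispersion test for replicate counts against the Poisson hypothesis."""
    x = np.asarray(counts, float)
    chi2 = ((x - x.mean()) ** 2).sum() / x.mean()   # ~ chi-square with n-1 dof under Poisson
    p = stats.chi2.sf(chi2, len(x) - 1)             # survival function -> one-sided p-value
    return chi2, p
```

A small p-value indicates extra-Poisson variability, e.g. from operator or pipetting effects on top of counting statistics.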

  14. Positron energy distributions from a hybrid positron source based on channeling radiation

    International Nuclear Information System (INIS)

    Azadegan, B.; Mahdipour, A.; Dabagov, S.B.; Wagner, W.

    2013-01-01

A hybrid positron source which is based on the generation of channeling radiation by relativistic electrons channeled along different crystallographic planes and axes of a tungsten single crystal, and subsequent conversion of the radiation into e+e− pairs in an amorphous tungsten target, is described. The photon spectra of channeling radiation are calculated using the Doyle–Turner approximation for the continuum potentials and classical equations of motion for channeled particles to obtain their trajectories, velocities and accelerations. The spectral-angular distributions of channeling radiation are found applying classical electrodynamics. Finally, the conversion of radiation into e+e− pairs and the energy distributions of positrons are simulated using the GEANT4 package

  15. Correction of count losses due to deadtime on a DST-XLi (SMVi-GE) camera during dosimetric studies in patients injected with iodine-131

    International Nuclear Information System (INIS)

    Delpon, G.; Ferrer, L.; Lisbona, A.; Bardies, M.

    2002-01-01

    In dosimetric studies performed after therapeutic injection, it is essential to correct count losses due to deadtime on the gamma camera. This note describes four deadtime correction methods, one based on the use of a standard source without preliminary calibration, and three requiring specific calibration and based on the count rate observed in different spectrometric windows (20%, 20% plus a lower energy window and the full spectrum of 50-750 keV). Experiments were conducted on a phantom at increasingly higher count rates to check correction accuracy with the different methods. The error was less than +7% with a standard source, whereas count-rate-based methods gave more accurate results. On the assumption that the model was paralysable, preliminary calibration allowed an observed count rate curve to be plotted as a function of the real count rate. The use of the full spectrum led to a 3.0% underestimation for the highest activity imaged. As count losses depend on photon flux independent of energy, the use of the full spectrum during measurement allowed scatter conditions to be taken into account. A protocol was developed to apply this correction method to whole-body acquisitions. (author)
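For the paralysable model mentioned above, the observed rate m relates to the true rate n by m = n·exp(−nτ), where τ is the dead time; correction means inverting this relation on its ascending branch (n < 1/τ). A numerical sketch (illustrative only; the note calibrates the observed-versus-real count rate curve empirically rather than assuming a known τ):

```python
import numpy as np
from scipy.optimize import brentq

def true_rate(observed_rate, tau):
    """Invert the paralysable dead-time relation m = n * exp(-n * tau)."""
    # the inversion is single-valued only on the ascending branch n < 1/tau,
    # so bracket the root between 0 and the rate at which m peaks
    return brentq(lambda n: n * np.exp(-n * tau) - observed_rate, 0.0, 1.0 / tau)
```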

  16. Smart pile-up consideration for evaluation of high count rate EDS spectra

    International Nuclear Information System (INIS)

    Eggert, F; Anderhalt, R; Nicolosi, J; Elam, T

    2012-01-01

This work describes a new pile-up consideration for the very high count rate spectra which can be acquired with silicon drift detector (SDD) technology. Pile-up effects are the major remaining challenge in the use of SDDs for EDS in scanning electron microscopes (SEM) with ultra-thin windows for soft X-ray detection. The ability to increase count rates by up to a factor of 100 compared with conventional Si(Li) detectors comes with the problem that pile-up recognition (pile-up rejection) in pulse processors has not improved by the same order of magnitude, but only by a factor of about 3. Therefore, spectra commonly show significant pile-up effects if count rates of more than 10000 counts per second (10 kcps) are used. These false counts affect both automatic qualitative analysis and quantitative evaluation of the spectra. The new idea is to use additional inputs for the pile-up calculation to extend its applicability to very high count rates of 200 kcps and more, which can easily be acquired with an SDD. The additional input is the 'known' (estimated) background distribution, calculated iteratively during all automated qualitative or quantitative evaluations. This additional knowledge enables self-adjustment of the pile-up calculation parameters and avoids over-corrections, which challenge the evaluation as much as the pile-up artefacts themselves. With the proposed method the pile-up correction is no longer a 'correction' but an integral part of all spectrum evaluation steps. Application examples are given for the evaluation of very high count rate spectra.
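To first order, the pile-up continuum is the summed-energy distribution of two coincident pulses, i.e. the spectrum convolved with itself, scaled by the chance of a second pulse arriving within the resolving window. A toy sketch of that single ingredient (my own simplification, not the authors' algorithm, which additionally exploits the estimated background):

```python
import numpy as np

def pileup_estimate(spectrum, rate, resolving_time):
    """First-order pile-up continuum: scaled self-convolution of the spectrum."""
    p = np.asarray(spectrum, float)
    shape = p / p.sum()                        # energy distribution of a single pulse
    prob2 = rate * resolving_time              # chance of a second pulse in the window (assumed << 1)
    return prob2 * p.sum() * np.convolve(shape, shape)   # summed-energy distribution of two pulses
```

A single line at channel E produces its pile-up peak at channel 2E, the familiar sum-peak artefact.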

  17. Depth to the bottom of magnetic sources (DBMS) from aeromagnetic data of Central India using modified centroid method for fractal distribution of sources

    Science.gov (United States)

    Bansal, A. R.; Anand, S. P.; Rajaram, Mita; Rao, V. K.; Dimri, V. P.

    2013-09-01

    The depth to the bottom of the magnetic sources (DBMS) has been estimated from the aeromagnetic data of Central India. The conventional centroid method of DBMS estimation assumes random uniform uncorrelated distribution of sources and to overcome this limitation a modified centroid method based on scaling distribution has been proposed. Shallower values of the DBMS are found for the south western region. The DBMS values are found as low as 22 km in the south west Deccan trap covered regions and as deep as 43 km in the Chhattisgarh Basin. In most of the places DBMS are much shallower than the Moho depth, earlier found from the seismic study and may be representing the thermal/compositional/petrological boundaries. The large variation in the DBMS indicates the complex nature of the Indian crust.
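The centroid method estimates the depth to the top Z_t from the high-wavenumber slope of ln√P(k) and the centroid depth Z_0 from the low-wavenumber slope of ln(√P(k)/k), giving Z_b = 2·Z_0 − Z_t; the fractal modification first multiplies the spectrum by k^β. A sketch under those standard formulas (the split wavenumber and β are user choices here, and the real method works on radially averaged spectra of gridded aeromagnetic data):

```python
import numpy as np

def dbms_centroid(k, power, k_split, beta=0.0):
    """Depth to the bottom of magnetic sources, Z_b = 2*Z_0 - Z_t (centroid method)."""
    k = np.asarray(k, float)
    p = np.asarray(power, float) * k ** beta   # fractal correction; beta = 0 is the classical method
    lnsqrt = 0.5 * np.log(p)
    lo, hi = k < k_split, k >= k_split
    z_top = -np.polyfit(k[hi], lnsqrt[hi], 1)[0]                        # slope of ln sqrt(P) at high k
    z_centroid = -np.polyfit(k[lo], lnsqrt[lo] - np.log(k[lo]), 1)[0]   # slope of ln(sqrt(P)/k) at low k
    return 2.0 * z_centroid - z_top
```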

  18. The Space-, Time-, and Energy-distribution of Neutrons from a Pulsed Plane Source

    Energy Technology Data Exchange (ETDEWEB)

    Claesson, Arne

    1962-05-15

    The space-, time- and energy-distribution of neutrons from a pulsed, plane, high energy source in an infinite medium is determined in a diffusion approximation. For simplicity the moderator is first assumed to be hydrogen gas but it is also shown that the method can be used for a moderator of arbitrary mass.

  19. Calibrate the aerial surveying instrument by the limited surface source and the single point source that replace the unlimited surface source

    International Nuclear Information System (INIS)

    Lu Cunheng

    1999-01-01

It is described how the calculation formula and survey results were obtained: on the basis of the superposition principle of gamma rays and the geometry of a hexagonal surface source, when a limited surface source replaces the unlimited surface source to calibrate the aerial survey instrument on the ground; and, in light of the reciprocity principle of gamma rays, when a single point source replaces the unlimited surface source to calibrate the aerial survey instrument in the air. Meanwhile, through theoretical analysis, the receiving rate of the crystal bottom and side surfaces is calculated for the gamma rays received by the aerial survey instrument. A mathematical expression for the decay of gamma rays with height, following the Jinge function regularity, is obtained. According to this regularity, the coefficient of gamma-ray absorption by air and the detection efficiency coefficient of the crystal are calculated from ground and air measurements of the bottom-surface receiving count rate (derived from the total receiving count rate of the bottom and side surfaces). Finally, the measured values show that it is feasible to model in this way the variation of the total received gamma-ray exposure rate of the bottom and side surfaces over a given range of altitudes

  20. Distributed least-squares estimation of a remote chemical source via convex combination in wireless sensor networks.

    Science.gov (United States)

    Cao, Meng-Li; Meng, Qing-Hao; Zeng, Ming; Sun, Biao; Li, Wei; Ding, Cheng-Jun

    2014-06-27

    This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN). Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE) method to solve the chemical source localization (CSL) problem using a WSN. The DLSE method is realized by iteratively conducting convex combination of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source.
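The convex-combination step at the heart of DLSE is a consensus iteration: each node repeatedly replaces its local source-location estimate with a weighted average of its own and its neighbours' estimates. A minimal sketch of that step alone (illustrative; the actual method interleaves this with local least-squares updates from concentration measurements):

```python
import numpy as np

def consensus(estimates, neighbors, steps=50, weight=0.5):
    """Convex-combination consensus on per-node location estimates."""
    x = np.asarray(estimates, float).copy()     # one (x, y) estimate per node
    for _ in range(steps):
        new = x.copy()
        for i, nbrs in enumerate(neighbors):
            # convex combination of the node's own estimate and its neighbours' mean
            new[i] = (1.0 - weight) * x[i] + weight * x[nbrs].mean(axis=0)
        x = new
    return x
```

On a connected network with symmetric weights the estimates contract toward a common value, which is how locally biased estimates (e.g. ones stuck near a local minimum) get pulled toward the network-wide solution.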

  1. Distributed Least-Squares Estimation of a Remote Chemical Source via Convex Combination in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Meng-Li Cao

    2014-06-01

    Full Text Available This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN. Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE method to solve the chemical source localization (CSL problem using a WSN. The DLSE method is realized by iteratively conducting convex combination of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source.

  2. Galaxy number counts: Pt. 2

    International Nuclear Information System (INIS)

    Metcalfe, N.; Shanks, T.; Fong, R.; Jones, L.R.

    1991-01-01

    Using the Prime Focus CCD Camera at the Isaac Newton Telescope we have determined the form of the B and R galaxy number-magnitude count relations in 12 independent fields for 21 m ccd m and 19 m ccd m 5. The average galaxy count relations lie in the middle of the wide range previously encompassed by photographic data. The field-to-field variation of the counts is small enough to define the faint (B m 5) galaxy count to ±10 per cent and this variation is consistent with that expected from galaxy clustering considerations. Our new data confirm that the B, and also the R, galaxy counts show evidence for strong galaxy luminosity evolution, and that the majority of the evolving galaxies are of moderately blue colour. (author)

  3. A constant velocity Moessbauer spectrometer free of long-term instrumental drifts in the count rate

    International Nuclear Information System (INIS)

    Sarma, P.R.; Sharma, A.K.; Tripathi, K.C.

    1979-01-01

Two new control circuits to be used with a constant velocity Moessbauer spectrometer with a loudspeaker drive have been described. The wave-forms generated in the circuits are of the staircase type instead of the usual square wave-form, so that in each oscillation of the source it remains stationary for a fraction of the time-period. The gamma-rays counted during this period are monitored along with the positive and negative velocity counts and are used to correct any fluctuation in the count rate by feeding these pulses into the timer. The associated logic circuits have been described and the statistical errors involved in the circuits have been computed. (auth.)

  4. Lung counting: Comparison of a four detector array that has either metal or carbon fiber end caps, and the effect on array performance characteristics

    International Nuclear Information System (INIS)

    Sabbir Ahmed, Asm; Kramer, Gary H.

    2011-01-01

This study described the performance of an array of HPGe detectors made by ORTEC. In the existing system, a metal end cap was used in the detector construction. Natural metals generally contain some radioactive material, creating elevated background noise and spurious signals during in vivo counting. ORTEC proposed a novel carbon fiber end cap without any radioactive content. This paper describes the methodology of developing a model of the given HPGe detector array, comparing the detection efficiency and cross talk among the detectors for the two end cap materials (metal or carbon fiber), and provides a recommendation on the end cap material. The detectors' counting efficiencies were studied using point and plane sources. The cross talk among the array detectors was studied using a homogeneous attenuating medium made of tissue-equivalent material. The cross talk was significant when single or multiple point sources (simulating heterogeneous hot spots) were embedded inside the attenuating medium. With carbon fiber, the cross talk increased by about 100% for photon energies around 100 keV. For a uniform distribution of radioactive material, the cross talk increased by about 5-10% when the end cap was made of carbon instead of steel. A metal end cap was therefore recommended for the array of HPGe detectors.

  5. Simulations of a spectral gamma-ray logging tool response to a surface source distribution on the borehole wall

    International Nuclear Information System (INIS)

    Wilson, R.D.; Conaway, J.G.

    1991-01-01

We have developed Monte Carlo and discrete ordinates simulation models for the large-detector spectral gamma-ray (SGR) logging tool in use at the Nevada Test Site. Application of the simulation models produced spectra for source layers on the borehole wall, either from potassium-bearing mudcakes or from plate-out of radon daughter products. Simulations show that the shape and magnitude of gamma-ray spectra from sources distributed on the borehole wall depend on radial position within the air-filled borehole as well as on hole diameter. No such dependence is observed for sources uniformly distributed in the formation. In addition, sources on the borehole wall produce anisotropic angular fluxes at the higher scattered energies and at the source energy. These differences in borehole effects and in angular flux are important to the process of correcting SGR logs for the presence of potassium mudcakes; they also suggest a technique for distinguishing between spectral contributions from formation sources and sources on the borehole wall. These results imply the existence of a standoff effect not present for spectra measured in air-filled boreholes from formation sources. 5 refs., 11 figs

  6. Isotopes, Inventories and Seasonality: Unraveling Methane Source Distribution in the Complex Landscapes of the United Kingdom.

    Science.gov (United States)

    Lowry, D.; Fisher, R. E.; Zazzeri, G.; Lanoisellé, M.; France, J.; Allen, G.; Nisbet, E. G.

    2017-12-01

Unlike the big open landscapes of many continents, with large area sources dominated by one particular methane emission type that can be isotopically characterized by flight measurements and sampling, the complex patchwork of urban, fossil and agricultural methane sources across NW Europe requires detailed ground surveys for characterization (Zazzeri et al., 2017). Here we outline the findings from multiple seasonal urban and rural measurement campaigns in the United Kingdom. These surveys aim to: 1) Assess source distribution and baseline in regions of planned fracking, and relate to on-site continuous baseline climatology. 2) Characterize spatial and seasonal differences in the isotopic signatures of the UNFCCC source categories, and 3) Assess the spatial validity of the 1 x 1 km UK inventory for large continuous emitters, proposed point sources, and seasonal / ephemeral emissions. The UK inventory suggests that 90% of methane emissions are from 3 source categories: ruminants, landfill and gas distribution. Bag sampling and GC-IRMS δ13C analysis shows that landfill gives a constant signature of -57 ±3 ‰ throughout the year. Fugitive gas emissions are consistent regionally depending on the North Sea supply regions feeding the network (-41 ± 2 ‰ in N England, -37 ± 2 ‰ in SE England). Ruminant, mostly cattle, emissions are far more complex, as these animals spend winters in barns and summers in fields, but they are essentially a mix of 2 end members, breath at -68 ±3 ‰ and manure at -51 ±3 ‰, resulting in broad summer field emission plumes of -64 ‰ and point winter barn emission plumes of -58 ‰. The inventory correctly locates emission hotspots from landfill, larger sewage treatment plants and gas compressor stations, giving a broad overview of emission distribution for regional model validation. Mobile surveys are adding an extra layer of detail to this which, combined with isotopic characterization, has identified spatial distribution of gas pipe leaks
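The two-end-member figures quoted above follow from a simple linear isotopic mass balance, δ_mix = f·δ_A + (1−f)·δ_B (assuming the delta values mix linearly, weighted by methane amount). A quick check of the quoted plume signatures:

```python
def mixing_fraction(delta_mix, delta_a, delta_b):
    """Fraction of end member A in a two-end-member isotopic mass balance."""
    return (delta_mix - delta_b) / (delta_a - delta_b)

# cattle plumes as a breath (-68 permil) / manure (-51 permil) mixture:
summer_breath_fraction = mixing_fraction(-64.0, -68.0, -51.0)   # summer field plumes
winter_breath_fraction = mixing_fraction(-58.0, -68.0, -51.0)   # winter barn plumes
```

The -64 ‰ summer plumes imply a breath-dominated mixture (roughly three quarters breath), while the -58 ‰ winter barn plumes imply a larger manure contribution.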

  7. Time-dependent anisotropic distributed source capability in transient 3-d transport code tort-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    The transient 3-D discrete ordinates transport code TORT-TD has been extended to account for time-dependent anisotropic distributed external sources. The extension aims at the simulation of the pulsed neutron source in the YALINA-Thermal subcritical assembly. Since feedback effects are not relevant in this zero-power configuration, this offers a unique opportunity to validate the time-dependent neutron kinetics of TORT-TD with experimental data. The extensions made in TORT-TD to incorporate a time-dependent anisotropic external source are described. The steady state of the YALINA-Thermal assembly and its response to an artificial square-wave source pulse sequence have been analysed with TORT-TD using pin-wise homogenised cross sections in 18 prompt energy groups with P1 scattering order and 8 delayed neutron groups. The results demonstrate the applicability of TORT-TD to subcritical problems with a time-dependent external source. (authors)

  8. Angular and mass resolved energy distribution measurements with a gallium liquid metal ion source

    International Nuclear Information System (INIS)

    Marriott, Philip

    1987-06-01

    Ionisation and energy broadening mechanisms relevant to liquid metal ion sources are discussed. A review of experimental results giving a picture of source operation is presented, together with a discussion of the emission mechanisms thought to occur for the emitted ionic species and droplets. Further work is suggested by this review, and an analysis system for angular and mass resolved energy distribution measurements of liquid metal ion source beams has been constructed. The energy analyser has been calibrated, and a series of measurements, both on and off the beam axis, of ⁶⁹Ga⁺, Ga⁺⁺ and Ga₂⁺ ions emitted at various currents from a gallium source has been performed. A comparison is made between these results and published work where possible, and the results are discussed with the aim of determining the emission and energy spread mechanisms operating in the gallium liquid metal ion source. (author)

  9. Distributional patterns of arsenic concentrations in contaminant plumes offer clues to the source of arsenic in groundwater at landfills

    Science.gov (United States)

    Harte, Philip T.

    2015-01-01

    The distributional pattern of dissolved arsenic concentrations from landfill plumes can provide clues to the source of arsenic contamination. Under simple idealized conditions, arsenic concentrations along flow paths in aquifers proximal to a landfill will decrease under anthropogenic sources but potentially increase under in situ sources. This paper presents several conceptual distributional patterns of arsenic in groundwater based on the arsenic source under idealized conditions. An example of advanced subsurface mapping of dissolved arsenic with geophysical surveys, chemical monitoring, and redox fingerprinting is presented for a landfill site in New Hampshire with a complex flow pattern. Tools to assist in the mapping of arsenic in groundwater ultimately provide information on the source of contamination. Once an understanding of the arsenic contamination is achieved, appropriate remedial strategies can then be formulated.

  10. Correlation between total lymphocyte count, hemoglobin, hematocrit and CD4 count in HIV patients in Nigeria.

    Science.gov (United States)

    Emuchay, Charles Iheanyichi; Okeniyi, Shemaiah Olufemi; Okeniyi, Joshua Olusegun

    2014-04-01

    The expensive, technology-limited setting of CD4 count testing is a major setback to the initiation of HAART in a resource-limited country like Nigeria. Simple and inexpensive tools such as hemoglobin (Hb) measurement and total lymphocyte count (TLC) are recommended as substitute markers. In order to assess the correlations of these parameters with CD4 count, 100 "apparently healthy" HIV-positive male volunteers aged ≥ 20 years but ≤ 40 years were recruited, from whom Hb, hematocrit (Hct), TLC and CD4 count were obtained. The correlation coefficients R, the Nash-Sutcliffe Coefficient of Efficiency (CoE) and the p-values of the ANOVA model of Hb, Hct and TLC with CD4 count were assessed. The assessments show that none of these parameters has a significant relationship with CD4 count and that the correlation coefficients are very weak. This study shows that Hb, Hct and TLC cannot substitute for CD4 count, as relying on them might deprive certain individuals of required treatment.
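    The two fit statistics named above, the Pearson correlation coefficient and the Nash-Sutcliffe Coefficient of Efficiency, can be computed as follows (a generic sketch, not the authors' code):

```python
import numpy as np

def pearson_r(obs, pred):
    """Pearson correlation coefficient between two samples."""
    return float(np.corrcoef(obs, pred)[0, 1])

def nash_sutcliffe(obs, pred):
    """Nash-Sutcliffe Coefficient of Efficiency:
    1 - sum((obs - pred)^2) / sum((obs - mean(obs))^2).
    Equals 1 for a perfect fit, 0 for a fit no better than the mean."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(1.0 - ((obs - pred) ** 2).sum()
                 / ((obs - obs.mean()) ** 2).sum())
```

    A CoE near or below zero, as reported here, means the candidate marker predicts CD4 count no better than simply using the mean CD4 count.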

  11. It counts who counts: an experimental evaluation of the importance of observer effects on spotlight count estimates

    DEFF Research Database (Denmark)

    Sunde, Peter; Jessen, Lonnie

    2013-01-01

    observers with respect to their ability to detect and estimate distance to realistic animal silhouettes at different distances. Detection probabilities were higher for observers experienced in spotlighting mammals than for inexperienced observers, higher for observers with a hunting background compared with non-hunters, and decreased as a function of age, but were independent of sex or educational background. If observer-specific detection probabilities were applied to real counting routes, point count estimates from inexperienced observers without a hunting background would only be 43 % (95 % CI, 39

  12. The effect of the volumetric heat source distribution of the fuel pellet on the minimum DNBR ratio

    International Nuclear Information System (INIS)

    Hordosy, G.; Kereszturi, A.; Maroti, L.; Trosztel, I.

    1995-01-01

    The radial power distribution in a VVER-440 type fuel assembly is strongly non-uniform as a result of the water-gap between the shrouds and the moderator filled central tube. Consequently, it can be expected that the power density inside a single fuel rod is inhomogeneous, as well. In the paper the methodology and the results of coupled thermohydraulic and neutronic calculations are presented. The objective of the analysis was the investigation of the heat source distribution and the determination of the possible extent of the power non-uniformity in a corner rod which has always the highest peaking factor in a VVER-440 type assembly. The results of the analysis revealed that there can be a strong non-uniformity of power distribution inside a fuel pellet, and the effect depends first of all on the general assembly conditions, while the local subchannel parameters have only a slight influence on the pellet heat source distribution. (author)

  13. You can count on the motor cortex: Finger counting habits modulate motor cortex activation evoked by numbers

    Science.gov (United States)

    Tschentscher, Nadja; Hauk, Olaf; Fischer, Martin H.; Pulvermüller, Friedemann

    2012-01-01

    The embodied cognition framework suggests that neural systems for perception and action are engaged during higher cognitive processes. In an event-related fMRI study, we tested this claim for the abstract domain of numerical symbol processing: is the human cortical motor system part of the representation of numbers, and is organization of numerical knowledge influenced by individual finger counting habits? Developmental studies suggest a link between numerals and finger counting habits due to the acquisition of numerical skills through finger counting in childhood. In the present study, digits 1 to 9 and the corresponding number words were presented visually to adults with different finger counting habits, i.e. left- and right-starters who reported that they usually start counting small numbers with their left and right hand, respectively. Despite the absence of overt hand movements, the hemisphere contralateral to the hand used for counting small numbers was activated when small numbers were presented. The correspondence between finger counting habits and hemispheric motor activation is consistent with an intrinsic functional link between finger counting and number processing. PMID:22133748

  14. Reliability evaluation of hard disk drive failures based on counting processes

    International Nuclear Information System (INIS)

    Ye, Zhi-Sheng; Xie, Min; Tang, Loon-Ching

    2013-01-01

    Reliability assessment for hard disk drives (HDDs) is important yet difficult for manufacturers. Motivated by the fact that particle accumulation in HDDs, which accounts for most HDD catastrophic failures, is fed by both internal and external sources, a counting process with two arrival sources is proposed to model the particle accumulation process in HDDs. This model successfully explains why traditional approaches collapse for accelerated life test (ALT) data from HDDs. Parameter estimation and hypothesis tests for the model are developed and illustrated with real data from an HDD test. A simulation study is conducted to examine the accuracy of the large-sample normal approximations that are used to test the existence of the internal and external sources.

  15. SU-F-T-24: Impact of Source Position and Dose Distribution Due to Curvature of HDR Transfer Tubes

    Energy Technology Data Exchange (ETDEWEB)

    Khan, A; Yue, N [Rutgers University, New Brunswick, NJ (United States)

    2016-06-15

    Purpose: Brachytherapy is a highly targeted form of radiotherapy. While this may lead to ideal dose distributions in the treatment planning system, a small error in source location can change the delivered dose distribution. The purpose of this study is to quantify the source position error caused by curvature of the transfer tubes and the impact this may have on the dose distribution. Methods: Since the source travels along the midline of the tube, an estimate of the positioning error for various angles of curvature was determined using the geometric properties of the tube. Based on the range of values, a specific shift was chosen to alter the treatment plans of a number of cervical cancer patients who had undergone HDR brachytherapy boost using tandem and ovoids. The impact on dose to the target and organs at risk was determined and checked against guidelines outlined by the radiation oncologist. Results: The estimated positioning error was 2 mm short of the expected position (a curved tube can only cause the source to fall short of where it would sit in a flat tube). The quantitative impact on the dose distribution is still being analyzed. Conclusion: The accepted positioning tolerance for the source position of an HDR brachytherapy unit is plus or minus 1 mm. If there is an additional 2 mm discrepancy due to tube curvature, the source can end up 1 mm to 3 mm short of the expected location. While we always attempt to keep the tubes straight, in some cases, such as with tandem and ovoids, the tandem connector does not extend as far out from the patient, so the ovoid tubes always contain some degree of curvature. The dose impact of this may be significant.
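    The geometric estimate described in the Methods can be illustrated with the arc-versus-chord shortfall of a circular bend: the source cable travels the arc length of the curved tube, while the planned dwell position assumes straight-line geometry. The bend radius and angle below are assumed values for illustration, not measurements from this study:

```python
import math

def position_shortfall(bend_radius_mm, bend_angle_deg):
    """How far short the source stops relative to a straight tube:
    arc length travelled by the cable minus the chord (straight-line)
    distance, for a single circular bend."""
    theta = math.radians(bend_angle_deg)
    arc = bend_radius_mm * theta
    chord = 2.0 * bend_radius_mm * math.sin(theta / 2.0)
    return arc - chord
```

    For example, a 30-degree bend of 300 mm radius gives a shortfall of about 1.8 mm, of the same order as the 2 mm estimate reported above.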

  16. Total lymphocyte count and subpopulation lymphocyte counts in relation to dietary intake and nutritional status of peritoneal dialysis patients.

    Science.gov (United States)

    Grzegorzewska, Alicja E; Leander, Magdalena

    2005-01-01

    Dietary deficiency causes abnormalities in circulating lymphocyte counts. For the present paper, we evaluated correlations between total and subpopulation lymphocyte counts (TLC, SLCs) and parameters of nutrition in peritoneal dialysis (PD) patients. Studies were carried out in 55 patients treated with PD for 22.2 +/- 11.4 months. Parameters of nutritional status included total body mass, lean body mass (LBM), body mass index (BMI), and laboratory indices [total protein, albumin, iron, ferritin, and total iron binding capacity (TIBC)]. The SLCs were evaluated using flow cytometry. Positive correlations were seen between TLC and dietary intake of niacin; TLC and CD8 and CD16+56 counts and energy delivered from protein; CD4 count and beta-carotene and monounsaturated fatty acids 17:1 intake; and CD19 count and potassium, copper, vitamin A, and beta-carotene intake. Anorexia negatively influenced CD19 count. Serum albumin showed correlations with CD4 and CD19 counts, and LBM with CD19 count. A higher CD19 count was connected with a higher red blood cell count, hemoglobin, and hematocrit. Correlations were observed between TIBC and TLC and CD3 and CD8 counts, and between serum Fe and TLC and CD3 and CD4 counts. Patients with a higher CD19 count showed a better clinical-laboratory score, especially less weakness. Patients with a higher CD4 count had less expressed insomnia. Quantities of ingested vitamins and minerals influence lymphocyte counts in the peripheral blood of PD patients. Evaluation of TLC and SLCs is helpful in monitoring the effectiveness of nutrition in these patients.

  17. Monte Carlo simulation of gamma-ray total counting efficiency for a Phoswich detector

    Energy Technology Data Exchange (ETDEWEB)

    Yalcin, S. [Education Faculty, Kastamonu University, 37200 Kastamonu (Turkey)], E-mail: syalcin@kastamonu.edu.tr; Gurler, O. [Department of Physics, Faculty of Arts and Sciences, Uludag University, Gorukle Campus, 16059 Bursa (Turkey); Gundogdu, O. [Department of Physics, Faculty of Engineering and Physical Sciences, University of Surrey, Guildford, GU2 7XH (United Kingdom); NCCPM, Medical Physics, Royal Surrey County Hospital, Guildford, GU2 7XX (United Kingdom); Kaynak, G. [Department of Physics, Faculty of Arts and Sciences, Uludag University, Gorukle Campus, 16059 Bursa (Turkey)

    2009-01-15

    The LB 1000-PW detector is mainly used for determining total alpha, beta and gamma activity of low activity natural sources such as water, soil, air filters and any other environmental sources. Detector efficiency needs to be known in order to measure the absolute activity of such samples. This paper presents results on the total gamma counting efficiency of a Phoswich detector from point and disk sources. The directions of photons emitted from the source were determined by Monte Carlo techniques and the true path lengths in the detector were determined by analytical equations depending on photon directions. Results are tabulated for various gamma energies.
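    The simulation strategy described here (Monte Carlo photon directions, analytic path lengths, attenuation along the chord) can be sketched for the simplest case: an isotropic point source on the axis of a bare cylindrical detector. This is a generic illustration under assumed geometry, not a model of the LB 1000-PW itself; side-wall entry, scattering and dead layers are ignored:

```python
import math
import random

def total_efficiency(mu, radius, thickness, height, n=200_000, seed=1):
    """Total counting efficiency for an isotropic point source a distance
    `height` above the flat entrance face of a cylinder (radius, thickness).
    Directions are sampled by Monte Carlo; the chord through the crystal is
    found analytically; interaction probability is 1 - exp(-mu * path)."""
    rng = random.Random(seed)
    hits = 0.0
    for _ in range(n):
        cos_t = rng.uniform(-1.0, 1.0)            # isotropic emission
        if cos_t <= 0.0:
            continue                              # emitted away from detector
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        u_top = height / cos_t                    # distance to entrance face
        if u_top * sin_t > radius:
            continue                              # misses the entrance face
        u_bottom = (height + thickness) / cos_t   # would-be exit via far face
        u_exit = u_bottom if sin_t == 0.0 else min(u_bottom, radius / sin_t)
        path = u_exit - u_top                     # chord inside the crystal
        hits += 1.0 - math.exp(-mu * path)        # interaction probability
    return hits / n
```

    For a strongly absorbing crystal the result approaches the geometric solid-angle fraction subtended by the entrance face, minus small losses from grazing chords near the rim.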

  18. Monte Carlo simulation of gamma-ray total counting efficiency for a Phoswich detector

    International Nuclear Information System (INIS)

    Yalcin, S.; Gurler, O.; Gundogdu, O.; Kaynak, G.

    2009-01-01

    The LB 1000-PW detector is mainly used for determining total alpha, beta and gamma activity of low activity natural sources such as water, soil, air filters and any other environmental sources. Detector efficiency needs to be known in order to measure the absolute activity of such samples. This paper presents results on the total gamma counting efficiency of a Phoswich detector from point and disk sources. The directions of photons emitted from the source were determined by Monte Carlo techniques and the true path lengths in the detector were determined by analytical equations depending on photon directions. Results are tabulated for various gamma energies.

  19. The P1-approximation for the Distribution of Neutrons from a Pulsed Source in Hydrogen

    International Nuclear Information System (INIS)

    Claesson, A.

    1963-12-01

    The asymptotic distribution of neutrons from a pulsed, high energy source in an infinite moderator has been obtained earlier in a 'diffusion' approximation. In that paper the cross section was assumed to be constant over the whole energy region and the time derivative of the first moment was disregarded. Here, first, an analytic expression is obtained for the density in a P1-approximation. However, the result is very complicated, and it is shown that an asymptotic solution can be found in a simpler way. By taking into account the low hydrogen scattering cross section at the source energy it follows that the space dependence of the distribution is less than that obtained earlier. The importance of keeping the time derivative of the first moment is further shown in a perturbation approximation.

  20. Regulatory actions to expand the offer of distributed generation from renewable energy sources in Brazil

    International Nuclear Information System (INIS)

    Pepitone da Nóbrega, André; Cabral Carvalho, Carlos Eduardo

    2015-01-01

    The composition of the Brazilian electric energy matrix has undergone transformations in recent years. However, it has still maintained significant participation of renewable energy sources, in particular hydropower plants of various magnitudes. Reasons for the growth of other renewable sources of energy, such as wind and solar, include the fact that the remaining hydropower capacity is mainly located in the Amazon, far from centers of consumption; the necessity of diversifying the energy mix and reducing dependence on hydrologic regimes; the increase in environmental restrictions; and rising civil construction and land costs. Wind power generation has grown most significantly in Brazil. Positive results in the latest energy auctions show that wind power generation has reached competitive pricing. Solar energy is still incipient in Brazil, despite its high potential for conversion into electric energy. This energy source in the Brazilian electric energy matrix mainly involves solar power plants and distributed generation. Biomass thermal plants, mainly the ones that use bagasse of sugar cane, also play an important role in renewable generation in Brazil. This paper aims to present an overview of the present situation and discuss the actions and the regulations to expand the offer of renewable distributed generation in Brazil, mainly from wind power, solar and biomass energy sources. (full text)

  1. Aerial Survey Counts of Harbor Seals in Lake Iliamna, Alaska, 1984-2013 (NODC Accession 0123188)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset provides counts of harbor seals from aerial surveys over Lake Iliamna, Alaska, USA. The data have been collated from three previously published sources...

  2. A Dataset of Aerial Survey Counts of Harbor Seals in Iliamna Lake, Alaska: 1984-2013

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset provides counts of harbor seals from aerial surveys over Iliamna Lake, Alaska, USA. The data have been collated from three previously published sources...

  3. Absolute nuclear material assay

    Science.gov (United States)

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2010-07-13

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
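    The count distributions this method fits are "bursty": fission chains deposit several correlated counts in a single time gate on top of uncorrelated singles. A toy sketch of building such a gated count distribution (the rates, detection probability and multiplicity weights are invented for illustration, not measured data or the patented model):

```python
import math
import random
from collections import Counter

def _poisson(rng, lam):
    """Knuth's Poisson sampler (adequate for small means)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def gated_count_distribution(gates=50_000, singles_mean=0.2,
                             bursts_mean=0.05, p_detect=0.3,
                             multiplicity=(0.0, 0.1, 0.3, 0.3, 0.2, 0.1),
                             seed=7):
    """Histogram of counts per fixed time gate: Poisson singles plus
    Poisson-arriving fission bursts, each emitting nu neutrons with the
    given multiplicity weights, each detected with probability p_detect."""
    rng = random.Random(seed)
    hist = Counter()
    for _ in range(gates):
        n = _poisson(rng, singles_mean)              # uncorrelated background
        for _ in range(_poisson(rng, bursts_mean)):  # bursts in this gate
            nu = rng.choices(range(len(multiplicity)),
                             weights=multiplicity)[0]
            n += sum(rng.random() < p_detect for _ in range(nu))
        hist[n] += 1
    return hist
```

    The excess of high-count gates relative to a pure Poisson law with the same mean is the signature the assay exploits.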

  4. A Multispectral Photon-Counting Double Random Phase Encoding Scheme for Image Authentication

    Directory of Open Access Journals (Sweden)

    Faliu Yi

    2014-05-01

    In this paper, we propose a new method for color image-based authentication that combines multispectral photon-counting imaging (MPCI) and double random phase encoding (DRPE) schemes. The sparsely distributed information from MPCI and the stationary white noise signal from DRPE make intruder attacks difficult. In this authentication method, the original multispectral RGB color image is down-sampled into a Bayer image. The three types of color samples (red, green and blue color) in the Bayer image are encrypted with DRPE and the amplitude part of the resulting image is photon counted. The corresponding phase information that has nonzero amplitude after photon counting is then kept for decryption. Experimental results show that the retrieved images from the proposed method do not visually resemble their original counterparts. Nevertheless, the original color image can be efficiently verified with statistical nonlinear correlations. Our experimental results also show that different interpolation algorithms applied to Bayer images result in different verification effects for multispectral RGB color images.

  5. A multispectral photon-counting double random phase encoding scheme for image authentication.

    Science.gov (United States)

    Yi, Faliu; Moon, Inkyu; Lee, Yeon H

    2014-05-20

    In this paper, we propose a new method for color image-based authentication that combines multispectral photon-counting imaging (MPCI) and double random phase encoding (DRPE) schemes. The sparsely distributed information from MPCI and the stationary white noise signal from DRPE make intruder attacks difficult. In this authentication method, the original multispectral RGB color image is down-sampled into a Bayer image. The three types of color samples (red, green and blue color) in the Bayer image are encrypted with DRPE and the amplitude part of the resulting image is photon counted. The corresponding phase information that has nonzero amplitude after photon counting is then kept for decryption. Experimental results show that the retrieved images from the proposed method do not visually resemble their original counterparts. Nevertheless, the original color image can be efficiently verified with statistical nonlinear correlations. Our experimental results also show that different interpolation algorithms applied to Bayer images result in different verification effects for multispectral RGB color images.
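    The photon-counting step is commonly modeled by scaling the normalized irradiance to a fixed expected photon budget and drawing Poisson counts per pixel, which yields the sparsely distributed information mentioned above. A generic sketch of that step (not the authors' code; the function name is ours):

```python
import numpy as np

def photon_count_image(irradiance, n_photons, rng=None):
    """Photon-limited version of an image: the irradiance is normalized so
    that about n_photons photons are expected over the whole scene, then
    each pixel draws an independent Poisson count."""
    rng = np.random.default_rng(rng)
    p = irradiance / irradiance.sum()       # normalized irradiance
    return rng.poisson(n_photons * p)       # counts per pixel
```

    When n_photons is much smaller than the number of pixels, most counts are zero, which is why the retrieved image no longer visually resembles the original yet still supports statistical verification.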

  6. Associations between somatic cell count patterns and the incidence of clinical mastitis

    NARCIS (Netherlands)

    Haas, de Y.; Barkema, H.W.; Schukken, Y.H.; Veerkamp, R.F.

    2005-01-01

    Associations between clinical mastitis (CM) and the proportional distribution of patterns in somatic cell count (SCC) on a herd level were determined in this study. Data on CM and SCC over a 12-month period from 274 Dutch herds were used. The dataset contained parts of 29,719 lactations from 22,955

  7. Fast optical source for quantum key distribution based on semiconductor optical amplifiers.

    Science.gov (United States)

    Jofre, M; Gardelein, A; Anzolin, G; Amaya, W; Capmany, J; Ursin, R; Peñate, L; Lopez, D; San Juan, J L; Carrasco, J A; Garcia, F; Torcal-Milla, F J; Sanchez-Brea, L M; Bernabeu, E; Perdigues, J M; Jennewein, T; Torres, J P; Mitchell, M W; Pruneri, V

    2011-02-28

    A novel integrated optical source capable of emitting faint pulses with different polarization states and with different intensity levels at 100 MHz has been developed. The source relies on a single laser diode followed by four semiconductor optical amplifiers and thin film polarizers, connected through a fiber network. The use of a single laser ensures high level of indistinguishability in time and spectrum of the pulses for the four different polarizations and three different levels of intensity. The applicability of the source is demonstrated in the lab through a free space quantum key distribution experiment which makes use of the decoy state BB84 protocol. We achieved a lower bound secure key rate of the order of 3.64 Mbps and a quantum bit error ratio as low as 1.14×10⁻² while the lower bound secure key rate became 187 bps for an equivalent attenuation of 35 dB. To our knowledge, this is the fastest polarization encoded QKD system which has been reported so far. The performance, reduced size, low power consumption and the fact that the components used can be space qualified make the source particularly suitable for secure satellite communication.
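    Lower-bound secure key rates for decoy-state BB84 are conventionally computed with a GLLP-style formula. A sketch with illustrative numbers (the 1.14×10⁻² QBER echoes the abstract; the other parameter values are assumptions, not figures from this experiment):

```python
import math

def h2(x):
    """Binary entropy function."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def key_rate_lower_bound(q, Q_mu, E_mu, Q1, e1, f_ec=1.2):
    """GLLP-style lower bound on the secure key fraction per pulse for
    decoy-state BB84: q * ( Q1*(1 - h2(e1)) - Q_mu*f_ec*h2(E_mu) ).
    Q1 and e1 are the single-photon gain and error rate bounded from the
    decoy statistics; f_ec is the error-correction inefficiency."""
    return q * (Q1 * (1.0 - h2(e1)) - Q_mu * f_ec * h2(E_mu))
```

    Multiplying the key fraction by the 100 MHz pulse rate gives the key rate in bits per second; the bound drops to zero once the QBER term outweighs the single-photon contribution, which is why the rate collapses at high channel attenuation.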

  8. MANTA – An Open-Source, High Density Electrophysiology Recording Suite for MATLAB

    Directory of Open Access Journals (Sweden)

    Bernhard eEnglitz

    2013-05-01

    The distributed nature of nervous systems makes it necessary to record from a large number of sites in order to break the neural code, whether single cell, local field potential (LFP), micro-electrocorticogram (μECoG), electroencephalographic (EEG), magnetoencephalographic (MEG) or in vitro micro-electrode array (MEA) data are considered. High channel-count recordings also optimize the yield of a preparation and the efficiency of the time invested by the researcher. Currently, data acquisition (DAQ) systems with high channel counts (>100) can be purchased from a limited number of companies at considerable prices. These systems are typically closed-source and thus prohibit custom extensions or improvements by end users. We have developed MANTA, an open-source MATLAB-based DAQ system, as an alternative to existing options. MANTA combines high channel counts (up to 1440 channels/PC), usage of analog or digital headstages, low per-channel cost (<$90/channel), feature-rich display & filtering, a user-friendly interface, and a modular design permitting easy addition of new features. MANTA is licensed under the GPL and free of charge. The system has been tested by daily use in multiple setups for >1 year, recording reliably from 128 channels. It offers a growing list of features, including integrated spike sorting, PSTH and CSD display and fully customizable electrode array geometry (including 3D arrays), some of which are not available in commercial systems. MANTA runs on a typical PC, communicates via TCP/IP and can thus be easily integrated with existing stimulus generation/control systems in a lab at a fraction of the cost of commercial systems. With modern neuroscience developing rapidly, MANTA provides a flexible platform that can be rapidly adapted to the needs of new analyses and questions. Being open-source, the development of MANTA can outpace commercial solutions in functionality, while maintaining a low price point.

  9. Investigation of Anisotropy Caused by Cylinder Applicator on Dose Distribution around Cs-137 Brachytherapy Source using MCNP4C Code

    Directory of Open Access Journals (Sweden)

    Sedigheh Sina

    2011-06-01

    Introduction: Brachytherapy is a type of radiotherapy in which radioactive sources are used in proximity to tumors, normally for treatment of malignancies in the head, prostate and cervix. Materials and Methods: The Cs-137 Selectron source is a low-dose-rate (LDR) brachytherapy source used in a remote afterloading system for treatment of different cancers. This system uses active and inactive spherical sources of 2.5 mm diameter, which can be arranged in different configurations inside the applicator to obtain different dose distributions. In this study, the dose distribution at different distances from the source was first obtained around a single pellet inside the applicator in a water phantom, using the MCNP4C Monte Carlo code. The simulations were then repeated for six active pellets in the applicator and for six point sources. Results: The anisotropy of the dose distribution due to the presence of the applicator was obtained by dividing the dose at each distance and angle by the dose at the same distance and an angle of 90 degrees. According to the results, the doses decreased towards the applicator tips. For example, for points at distances of 5 and 7 cm from the source and an angle of 165 degrees, the discrepancies reached 5.8% and 5.1%, respectively. By increasing the number of pellets to six, these values reached 30% for the angle of 5 degrees. Discussion and Conclusion: The results indicate that the presence of the applicator causes a significant dose decrease at the tip of the applicator compared with the dose in the transverse plane. However, treatment planning systems consider an isotropic dose distribution around the source, and this causes significant errors in treatment planning, which are not negligible, especially for a large number of sources inside the applicator.
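    The normalization used in this study is simply the dose at a given distance and angle divided by the dose at the same distance on the transverse plane (90 degrees). A sketch with an invented dose table (the 0.942 entry mirrors the 5.8% decrease quoted for 5 cm and 165 degrees; the other values are made up):

```python
def anisotropy_ratio(dose, r_cm, theta_deg, ref_deg=90):
    """Dose at (r, theta) normalized to the same radius on the transverse
    plane. `dose` is a {(r_cm, theta_deg): dose} lookup table."""
    return dose[(r_cm, theta_deg)] / dose[(r_cm, ref_deg)]

# Hypothetical relative-dose table for illustration:
table = {(5, 90): 1.000, (5, 165): 0.942,
         (7, 90): 1.000, (7, 165): 0.949}
```

    Values below 1 toward the applicator tips quantify exactly the shielding effect that an isotropic treatment planning model misses.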

  10. Single-source gamma radiation procedures for improved calibration and measurements in porous media

    International Nuclear Information System (INIS)

    Oostrom, M.; Hofstee, C.; Dane, H.; Lenhard, R.J.

    1998-01-01

    When dual-energy gamma radiation systems are employed for measurements in porous media, count rates from both sources are often used to compute parameter values. However, for several applications, the count rates of just one source are insufficient. These applications include the determination of volumetric liquid content values in two-liquid systems and salt concentration values in water-saturated porous media. Single-energy gamma radiation procedures for three applications are described in this paper. Through an error analysis, single-source procedures are shown to reduce the probable error in the determinations considerably. Example calculations and simple column experiments were conducted for each application to compare the performance of the new single-source and standard dual-source methods. In all cases, the single-source methods provided more reliable data than the traditional dual-source methods. In addition, a single-source calibration procedure is proposed to determine incident count rates indirectly. This procedure, which requires packing under saturated conditions, can be used in all single- and dual-source applications and yields accurate porosity and dry bulk density values.
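    Gamma attenuation methods of this kind rest on the Beer-Lambert relation: the log-attenuation of the beam is the sum of a calibrated solid-phase term and a term linear in volumetric liquid content. A round-trip sketch (the symbols, values and function names are illustrative, not the paper's notation):

```python
import math

def attenuated_rate(I0, mu_w, theta, x, base):
    """Beer-Lambert count rate through a column of thickness x:
    ln(I0/I) = base + mu_w * theta * x, where `base` collects the
    calibrated solid-phase attenuation and theta is volumetric
    water content."""
    return I0 * math.exp(-(base + mu_w * theta * x))

def water_content(I, I0, mu_w, x, base):
    """Invert the relation above for volumetric water content."""
    return (math.log(I0 / I) - base) / (mu_w * x)
```

    Because the inversion divides by the measured log-attenuation difference, errors in the incident count rate I0 propagate directly into theta, which is why the indirect calibration of incident count rates proposed above matters.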

  11. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    Energy Technology Data Exchange (ETDEWEB)

    Murray, S. G.; Trott, C. M.; Jordan, C. H. [ARC Centre of Excellence for All-sky Astrophysics (CAASTRO) (Australia)

    2017-08-10

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power-spectrum and under-estimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.
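    The arbitrarily broken power-law source-count distribution dN/dS at the heart of this model can be realized numerically with segment-by-segment inverse-CDF sampling. A two-segment sketch (the indices, break and flux limits below are assumed values for illustration):

```python
import numpy as np

def sample_broken_power_law(n, s_min, s_break, s_max, a1, a2, rng=None):
    """Draw flux densities S from dN/dS proportional to S**-a1 below
    s_break and S**-a2 above it (amplitude continuous at the break),
    via inverse-CDF sampling within each segment. Assumes a1, a2 != 1."""
    rng = np.random.default_rng(rng)

    def seg(lo, hi, a):                      # integral of S**-a over [lo, hi]
        return (hi ** (1 - a) - lo ** (1 - a)) / (1 - a)

    w1 = seg(s_min, s_break, a1)             # weight of the faint segment
    w2 = s_break ** (a2 - a1) * seg(s_break, s_max, a2)  # continuity factor
    below = rng.random(n) < w1 / (w1 + w2)   # segment membership per draw
    u = rng.random(n)
    out = np.empty(n)
    for mask, lo, hi, a in ((below, s_min, s_break, a1),
                            (~below, s_break, s_max, a2)):
        lo1a, hi1a = lo ** (1 - a), hi ** (1 - a)
        out[mask] = (lo1a + u[mask] * (hi1a - lo1a)) ** (1.0 / (1 - a))
    return out
```

    With a steep faint-end index most sources fall below the break, which is the regime where the paper finds the extra clustering covariance matters most.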

  12. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    Science.gov (United States)

    Murray, S. G.; Trott, C. M.; Jordan, C. H.

    2017-08-01

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power-spectrum and under-estimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.

  13. Parameter optimization in biased decoy-state quantum key distribution with both source errors and statistical fluctuations

    Science.gov (United States)

    Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin

    2017-10-01

    The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model of full parameter optimization in biased decoy-state QKD with phase-randomized sources. We then adopt this model to carry out simulations for two widely used sources: the weak coherent source (WCS) and the heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. Moreover, when source errors and statistical fluctuations are taken into account, the performance of decoy-state QKD using an HSPS suffers less than that of decoy-state QKD using a WCS.

  14. Heterogeneous counting on filter support media

    International Nuclear Information System (INIS)

    Long, E.; Kohler, V.; Kelly, M.J.

    1976-01-01

    Many investigators in biomedical research have used filter paper as the support for radioactive samples, which means that heterogeneous counting of samples sometimes results. The count rate of a sample on a filter is affected by positioning, degree of dryness, sample application procedure, the type of filter, and the type of cocktail used. Positioning of the filter (up or down) in the counting vial can cause a variation of 35% or more when counting tritiated samples on filter paper. Samples of varying degrees of dryness, when added to the counting cocktail, can give nonreproducible counts if handled improperly. Count rates starting at 2400 CPM initially can become 10,000 CPM within 24 hours for 3H-DNA (deoxyribonucleic acid) samples dried on standard cellulose acetate membrane filters; data on cellulose nitrate filters show a similar trend. Application procedures in which the sample is applied to the filter in a small spot or over a large fraction of the surface area can cause nonreproducible or very low count rates. A tritiated DNA sample applied topically gives a count rate of 4,000 CPM; when the sample is spread over the whole filter, 13,400 CPM are obtained with a much better coefficient of variation (5% versus 20%). Adding a protein carrier (bovine serum albumin, BSA) to the sample to trap more of the tritiated DNA on the filter during filtration causes a serious beta absorption problem: count rates of one-fourth the rate applied to the filter are obtained in calibration runs. Many of the problems encountered can be alleviated by a proper choice of filter and the use of a liquid scintillation cocktail which dissolves the filter. Filter-Solv has been used to dissolve cellulose nitrate filters and filters combining cellulose nitrate and cellulose acetate; count rates obtained for these dissolved samples are very reproducible and highly efficient.

  15. SAWdoubler: A program for counting self-avoiding walks

    Science.gov (United States)

    Schram, Raoul D.; Barkema, Gerard T.; Bisseling, Rob H.

    2013-03-01

    This article presents SAWdoubler, a package for counting the total number Z_N of self-avoiding walks (SAWs) on a regular lattice by the length-doubling method, of which the basic concept has been published previously by us. We discuss an algorithm for the creation of all SAWs of length N, efficient storage of these SAWs in a tree data structure, and an algorithm for the computation of correction terms to the count Z_2N for SAWs of double length, removing all combinations of two intersecting single-length SAWs. We present an efficient numbering of the lattice sites that enables exploitation of symmetry and leads to a smaller tree data structure; this numbering is by increasing Euclidean distance from the origin of the lattice. Furthermore, we show how the computation can be parallelised by distributing the iterations of the main loop of the algorithm over the cores of a multicore architecture. Experimental results on the 3D cubic lattice demonstrate that Z_28 can be computed on a dual-core PC in only 1 h and 40 min, with a speedup of 1.56 compared to the single-core computation and with a gain by using symmetry of a factor of 26. We present results for memory use and show how the computation is made to fit in 4 GB RAM. It is easy to extend the SAWdoubler software to other lattices; it is publicly available under the GNU LGPL license. Catalogue identifier: AEOB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Lesser General Public Licence No. of lines in distributed program, including test data, etc.: 2101 No. of bytes in distributed program, including test data, etc.: 19816 Distribution format: tar.gz Programming language: C. Computer: Any computer with a UNIX-like operating system and a C compiler. For large problems, use is made of specific 128-bit integer arithmetic provided by the gcc compiler. Operating system: Any UNIX
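    For orientation, the quantity Z_N that SAWdoubler computes can be reproduced for very small N by plain depth-first enumeration. The sketch below is this brute-force baseline (not the length-doubling algorithm), shown here on the 2D square lattice; exponential growth of Z_N is exactly why methods like length doubling are needed.

```python
def count_saws(n):
    """Count self-avoiding walks of n steps on the 2D square lattice
    by depth-first enumeration (feasible only for small n)."""
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def extend(pos, visited, steps_left):
        if steps_left == 0:
            return 1
        total = 0
        for dx, dy in moves:
            nxt = (pos[0] + dx, pos[1] + dy)
            if nxt not in visited:  # self-avoidance check
                visited.add(nxt)
                total += extend(nxt, visited, steps_left - 1)
                visited.remove(nxt)
        return total

    return extend((0, 0), {(0, 0)}, n)
```

    On the square lattice this gives Z_1 = 4, Z_2 = 12, Z_3 = 36, Z_4 = 100; the count grows roughly as a constant raised to the power N, so direct enumeration quickly becomes infeasible.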

  16. Effect of tissue inhomogeneity on dose distribution of point sources of low-energy electrons

    International Nuclear Information System (INIS)

    Kwok, C.S.; Bialobzyski, P.J.; Yu, S.K.; Prestwich, W.V.

    1990-01-01

    Perturbation in the dose distributions of point sources of low-energy electrons at planar interfaces of cortical bone (CB) and red marrow (RM) was investigated experimentally and with the Monte Carlo codes EGS and the TIGER series. Ultrathin LiF thermoluminescent dosimeters were used to measure the dose distributions of point sources of 204Tl and 147Pm in RM. When the point sources were at 12 mg/cm2 from a planar interface of CB- and RM-equivalent plastics, dose enhancement ratios in RM averaged over the region 0-12 mg/cm2 from the interface were measured to be 1.08±0.03 (SE) and 1.03±0.03 (SE) for 204Tl and 147Pm, respectively. The Monte Carlo codes predicted 1.05±0.02 and 1.01±0.02 for the two nuclides, respectively. However, EGS gave a consistently 3% higher dose in the dose scoring region than the TIGER series when point sources of monoenergetic electrons up to 0.75 MeV were considered, both in the homogeneous RM situation and in the heterogeneous CB-RM situation. By means of the TIGER series, it was demonstrated that aluminum, which is normally assumed to be equivalent to CB in radiation dosimetry, leads to an overestimation of the backscattering of low-energy electrons in soft tissue at a CB-soft-tissue interface by as much as a factor of 2.

  17. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Barni Mauro

    2007-01-01

    Full Text Available This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits significant potential in many application fields, the results obtained so far on real signals fall short of the theoretical bounds and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques to reduce the complexity of the onboard encoder, at the expense of the decoder's complexity, by exploiting the correlation between different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes and compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that remain to be solved to further improve the performance of DSC-based remote sensing systems.

  18. Analysis of Parasite and Other Skewed Counts

    Science.gov (United States)

    Alexander, Neal

    2012-01-01

    Objective To review methods for the statistical analysis of parasite and other skewed count data. Methods Statistical methods for skewed count data are described and compared, with reference to those used over a ten-year period in Tropical Medicine and International Health. Two parasitological datasets are used for illustration. Results Ninety papers were identified, 89 with descriptive and 60 with inferential analysis. A lack of clarity is noted in identifying measures of location, in particular the Williams and geometric means. The different measures are compared, emphasizing the legitimacy of the arithmetic mean for skewed data. In the published papers, the t test and related methods were often used on untransformed data, which is likely to be invalid. Several approaches to inferential analysis are described, emphasizing 1) non-parametric methods, while noting that they are not simply comparisons of medians, and 2) generalized linear modelling, in particular with the negative binomial distribution. Additional methods with potential for greater use, such as the bootstrap, are described. Conclusions Clarity is recommended when describing transformations and measures of location. It is suggested that non-parametric methods and generalized linear models are likely to be sufficient for most analyses. PMID:22943299
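    The measures of location discussed in this review are easy to state precisely. Below is a minimal sketch of three of them: the arithmetic mean, the geometric mean, and the Williams mean, which is the geometric mean of the counts plus one, minus one, and therefore accommodates zero counts.

```python
import math

def arithmetic_mean(xs):
    """Plain average; legitimate even for skewed count data."""
    return sum(xs) / len(xs)

def geometric_mean(xs):
    """Defined for strictly positive counts only."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def williams_mean(xs):
    """Geometric mean of (count + 1), minus 1: tolerates zero counts."""
    return math.exp(sum(math.log(x + 1) for x in xs) / len(xs)) - 1
```

    For skewed parasite counts the three measures can differ substantially, which is why the paper stresses being explicit about which one is reported.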

  19. Robustifying Bayesian nonparametric mixtures for count data.

    Science.gov (United States)

    Canale, Antonio; Prünster, Igor

    2017-03-01

    Our motivating application stems from surveys of natural populations and is characterized by large spatial heterogeneity in the counts, which makes parametric approaches to modeling local animal abundance too restrictive. We adopt a Bayesian nonparametric approach based on mixture models and innovate with respect to the popular Dirichlet process mixture of Poisson kernels by increasing the model flexibility at the level of both the kernel and the nonparametric mixing measure. This allows us to derive accurate and robust estimates of the distribution of local animal abundance and of the corresponding clusters. The application and a simulation study for different scenarios also yield some general methodological implications. Adding flexibility solely at the level of the mixing measure does not improve inferences, since its impact is severely limited by the rigidity of the Poisson kernel, with considerable consequences in terms of bias. However, once a kernel more flexible than the Poisson is chosen, inferences can be robustified by choosing a prior more general than the Dirichlet process. Therefore, to improve the performance of Bayesian nonparametric mixtures for count data one has to enrich the model simultaneously at both levels, the kernel and the mixing measure. © 2016, The International Biometric Society.

  20. Power Law Distributions in the Experiment for Adjustment of the Ion Source of the NBI System

    International Nuclear Information System (INIS)

    Han Xiaopu; Hu Chundong

    2005-01-01

    The empirical adjustment process in an experiment on the ion source of the neutral beam injector system for the HT-7 Tokamak is reported in this paper. For data obtained under the same conditions, when the arc current intensities of all shots are arranged in decreasing rank, the distributions of the arc current intensity follow power laws, and the distribution obtained with the cryo-pump in operation follows a double Pareto distribution. By the same method, the distributions of the arc duration are also found to be close to power laws. These power-law distributions arise naturally rather than being the result of purposeful seeking.
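    The rank-ordering procedure described above can be sketched numerically (an illustrative reconstruction, not the authors' analysis code): arrange the shot values in decreasing order, then estimate the slope of log(value) against log(rank) by least squares; an approximately constant slope indicates a power-law (Pareto-like) rank distribution.

```python
import math, random

def rank_order(values):
    """Arrange shot values (e.g. arc current intensities) in decreasing rank."""
    return sorted(values, reverse=True)

def loglog_slope(values):
    """Least-squares slope of log(value) versus log(rank) for the
    rank-ordered data."""
    ys = [math.log(v) for v in rank_order(values)]
    xs = [math.log(r) for r in range(1, len(values) + 1)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# synthetic shots drawn exactly from a rank-exponent-2 power law, in random order
shots = [r ** -2.0 for r in range(1, 51)]
random.Random(0).shuffle(shots)
slope = loglog_slope(shots)  # recovers the exponent -2
```

    Real shot data would of course scatter around the fitted line; inspecting where the scatter departs from linearity is how a single power law is distinguished from, say, a double Pareto form.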

  1. LAWRENCE RADIATION LABORATORY COUNTING HANDBOOK

    Energy Technology Data Exchange (ETDEWEB)

    Group, Nuclear Instrumentation

    1966-10-01

    The Counting Handbook is a compilation of operational techniques and performance specifications on counting equipment in use at the Lawrence Radiation Laboratory, Berkeley. Counting notes have been written from the viewpoint of the user rather than that of the designer or maintenance man. The only maintenance instructions that have been included are those that can easily be performed by the experimenter to assure that the equipment is operating properly.

  2. Development of an asymmetric multiple-position neutron source (AMPNS) method to monitor the criticality of a degraded reactor core

    International Nuclear Information System (INIS)

    Kim, S.S.; Levine, S.H.

    1985-01-01

    An analytical/experimental method has been developed to monitor the subcritical reactivity and unfold the k-infinity distribution of a degraded reactor core. The method uses several fixed neutron detectors and a Cf-252 neutron source placed sequentially in multiple positions in the core; it is therefore called the Asymmetric Multiple-Position Neutron Source (AMPNS) method. The AMPNS method employs nucleonic codes to analyze the neutron multiplication of the Cf-252 source. An optimization program, GPM, is utilized to unfold the k-infinity distribution of the degraded core, with a performance measure that minimizes the error between the calculated and the measured count rates of the degraded reactor core. The analytical/experimental approach was validated by experiments using the Penn State Breazeale TRIGA Reactor (PSBR). A significant result of this study is that it provides a method to monitor the criticality of a damaged core during the recovery period.

  3. ChromAIX2: A large area, high count-rate energy-resolving photon counting ASIC for a Spectral CT Prototype

    Science.gov (United States)

    Steadman, Roger; Herrmann, Christoph; Livne, Amir

    2017-08-01

    Spectral CT based on energy-resolving photon counting detectors is expected to deliver additional diagnostic value at a lower dose than current state-of-the-art CT [1]. The capability of simultaneously providing a number of spectrally distinct measurements not only allows distinguishing between photo-electric and Compton interactions but also discriminating contrast agents that exhibit a K-edge discontinuity in the absorption spectrum, referred to as K-edge imaging [2]. Such detectors are based on direct-converting sensors (e.g. CdTe or CdZnTe) and high-rate photon counting electronics. To support the development of Spectral CT and show the feasibility of obtaining rates exceeding 10 Mcps/pixel (Poissonian observed count-rate), the ChromAIX ASIC has previously been reported, achieving 13.5 Mcps/pixel (150 Mcps/mm2 incident) [3]. The ChromAIX has been improved to allow large-area detector coverage and increased overall performance. The new ASIC, called ChromAIX2, delivers count rates exceeding 15 Mcps/pixel with an rms noise performance of approximately 260 e-. It has an isotropic pixel pitch of 500 μm in an array of 22×32 pixels and is tileable on three of its sides. The pixel topology consists of a two-stage amplifier (CSA and shaper) and a number of test features that allow thorough characterization of the ASIC without a sensor. A total of 5 independent thresholds are available within each pixel, allowing 5 spectrally distinct measurements to be acquired simultaneously. The ASIC also incorporates a baseline restorer to eliminate excess currents induced by the sensor (e.g. dark current and low-frequency drifts) which would otherwise cause an energy estimation error. In this paper we report on the inherent electrical performance of the ChromAIX2 as well as measurements obtained with CZT (CdZnTe)/CdTe sensors and X-rays and radioactive sources.

  4. On Road Study of Colorado Front Range Greenhouse Gases Distribution and Sources

    Science.gov (United States)

    Petron, G.; Hirsch, A.; Trainer, M. K.; Karion, A.; Kofler, J.; Sweeney, C.; Andrews, A.; Kolodzey, W.; Miller, B. R.; Miller, L.; Montzka, S. A.; Kitzis, D. R.; Patrick, L.; Frost, G. J.; Ryerson, T. B.; Robers, J. M.; Tans, P.

    2008-12-01

    The Global Monitoring Division and Chemical Sciences Division of the NOAA Earth System Research Laboratory teamed up over the summer of 2008 to experiment with a new measurement strategy to characterize greenhouse gas distributions and sources in the Colorado Front Range. Combining expertise in greenhouse gas measurements and in intensive local- to regional-scale air quality campaigns, we built the 'Hybrid Lab'. A continuous CO2 and CH4 cavity ring-down spectroscopic analyzer (Picarro, Inc.), a CO gas-filter correlation instrument (Thermo Environmental, Inc.), and a continuous UV absorption ozone monitor (2B Technologies, Inc., model 202SC) were installed securely onboard a 2006 Toyota Prius hybrid vehicle, with an inlet bringing in outside air from a few meters above the ground. To better characterize point and distributed sources, air samples were taken with a Portable Flask Package (PFP) for later multi-species analysis in the lab. A GPS unit hooked up to the ozone analyzer and another installed on the PFP kept track of our location, allowing us to map measured concentrations along the driving route using Google Earth. The Hybrid Lab went out on several drives in the vicinity of the NOAA Boulder Atmospheric Observatory (BAO) tall tower located in Erie, CO, covering the areas of Boulder, Denver, Longmont, Fort Collins, and Greeley. Enhancements in CO2 and CO, and destruction of ozone, mainly reflect emissions from traffic. Methane enhancements, however, are clearly correlated with nearby point sources (landfill, feedlot, natural gas compressor ...) or with larger-scale air masses advected from northeastern Colorado, where oil and gas drilling operations are widespread. The multi-species analysis (hydrocarbons, CFCs, HFCs) of the air samples collected along the way brings insightful information about the methane sources at play. We will present results of the analysis and interpretation of the Hybrid Lab Front Range Study and conclude with perspectives.

  5. Agent paradigm and services technology for distributed Information Sources

    Directory of Open Access Journals (Sweden)

    Hakima Mellah

    2011-10-01

    Full Text Available The complexity of information arises from interacting information sources (IS), and can be better exploited with respect to the relevance of information. In a distributed IS system, relevant information has content that is connected with other content in the information network and is used for a certain purpose. The highlight of the proposed model is its contribution to information system agility through a three-dimensional view involving content, use, and structure. This reflects the relevance of information complexity and of effective methodologies, based on the self-organization principle, for managing that complexity. This contribution focuses primarily on presenting the factors that lead to and trigger self-organization in a Service Oriented Architecture (SOA), and on how a self-organization mechanism can be integrated into it.

  6. The Kruskal Count

    OpenAIRE

    Lagarias, Jeffrey C.; Rains, Eric; Vanderbei, Robert J.

    2001-01-01

    The Kruskal Count is a card trick invented by Martin J. Kruskal in which a magician "guesses" a card selected by a subject according to a certain counting procedure. With high probability the magician can correctly "guess" the card. The success of the trick is based on a mathematical principle related to coupling methods for Markov chains. This paper analyzes in detail two simplified variants of the trick and estimates the probability of success. The model predictions are compared with simulations.
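    The counting procedure and the coupling argument lend themselves to a quick simulation. The sketch below uses one common convention for card values (court cards count 5, aces 1) and a hypothetical trial count; the success rate it estimates is illustrative, not the paper's exact figure.

```python
import random

def key_card_index(deck, start):
    """Kruskal counting: from position `start` (0-based), repeatedly jump
    ahead by the value of the current card; return the index of the last
    card reached (the 'key card')."""
    i = start
    while i + deck[i] < len(deck):
        i += deck[i]
    return i

def trick_succeeds(rng):
    """One shuffled 52-card deck; magician and subject independently pick
    secret starting positions among the first ten cards. The trick works
    when their chains couple onto the same key card."""
    deck = ([1, 2, 3, 4, 5, 6, 7, 8, 9, 10] + [5, 5, 5]) * 4  # court cards count 5
    rng.shuffle(deck)
    return key_card_index(deck, rng.randrange(10)) == key_card_index(deck, rng.randrange(10))

# Monte Carlo estimate of the success probability under these conventions
rng = random.Random(1)
rate = sum(trick_succeeds(rng) for _ in range(2000)) / 2000
```

    Because both chains move through the same shuffled deck, once they ever land on the same card they stay together, which is exactly the Markov-chain coupling the paper analyzes.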

  7. Determination of grain size by XRD profile analysis and TEM counting in nano-structured Cu

    International Nuclear Information System (INIS)

    Zhong Yong; Ping Dehai; Song Xiaoyan; Yin Fuxing

    2009-01-01

    In this work, a series of pure copper samples with grain sizes ranging from the nano- to the micro-scale were prepared by spark plasma sintering (SPS) followed by annealing at 873 K and 1073 K, respectively. The grain size distributions of these samples were determined both by X-ray diffraction (XRD) profile analysis and by counting on transmission electron microscope (TEM) micrographs. Although the two methods give similar grain size distributions for the as-SPS sample with nano-scale grains (around 10 nm), there are apparent discrepancies between the XRD- and TEM-derived distributions for the annealed samples, especially for the sample annealed at 1073 K after SPS, with micro-scale grains (around 2 μm), for which TEM counting gives much larger grain sizes than XRD analysis. This indicates that XRD analysis loses its validity for determining the grain size of large-grained material, possibly because small substructures persist even in the annealed (large-grained) samples, whereas the as-SPS (nanocrystalline) sample contains no substructures. Moreover, the effective outer cut-off radius R_e derived from XRD analysis was found to coincide with the grain sizes given by TEM counting. The potential relationship between grain size and R_e is discussed in the present work. These results may provide new hints for a deeper understanding of the physical meaning of XRD analysis and the parameters derived from it.

  8. Memory-assisted measurement-device-independent quantum key distribution

    Science.gov (United States)

    Panayi, Christiana; Razavi, Mohsen; Ma, Xiongfeng; Lütkenhaus, Norbert

    2014-04-01

    A protocol with the potential of beating the existing distance records for conventional quantum key distribution (QKD) systems is proposed. It borrows ideas from quantum repeaters by using memories in the middle of the link, and that of measurement-device-independent QKD, which only requires optical source equipment at the user's end. For certain memories with short access times, our scheme allows a higher repetition rate than that of quantum repeaters with single-mode memories, thereby requiring lower coherence times. By accounting for various sources of nonideality, such as memory decoherence, dark counts, misalignment errors, and background noise, as well as timing issues with memories, we develop a mathematical framework within which we can compare QKD systems with and without memories. In particular, we show that with the state-of-the-art technology for quantum memories, it is potentially possible to devise memory-assisted QKD systems that, at certain distances of practical interest, outperform current QKD implementations.

  9. Memory-assisted measurement-device-independent quantum key distribution

    International Nuclear Information System (INIS)

    Panayi, Christiana; Razavi, Mohsen; Ma, Xiongfeng; Lütkenhaus, Norbert

    2014-01-01

    A protocol with the potential of beating the existing distance records for conventional quantum key distribution (QKD) systems is proposed. It borrows ideas from quantum repeaters by using memories in the middle of the link, and that of measurement-device-independent QKD, which only requires optical source equipment at the user's end. For certain memories with short access times, our scheme allows a higher repetition rate than that of quantum repeaters with single-mode memories, thereby requiring lower coherence times. By accounting for various sources of nonideality, such as memory decoherence, dark counts, misalignment errors, and background noise, as well as timing issues with memories, we develop a mathematical framework within which we can compare QKD systems with and without memories. In particular, we show that with the state-of-the-art technology for quantum memories, it is potentially possible to devise memory-assisted QKD systems that, at certain distances of practical interest, outperform current QKD implementations. (paper)

  10. Standardization of I-125 solution by extrapolation of an efficiency curve obtained by the coincidence X-(X-γ) counting method

    International Nuclear Information System (INIS)

    Iwahara, A.

    1989-01-01

    The activity concentration of 125I was determined by the X-(X-γ) coincidence counting method with an efficiency extrapolation curve. The measurement system consists of two thin NaI(Tl) scintillation detectors which are horizontally movable on a track. The efficiency curve is obtained by symmetrically changing the distance between the source and the detectors, and the activity is determined by applying a linear efficiency extrapolation. All sum-coincidence events are included in a 10-100 keV counting window, and the main source of uncertainty comes from poor counting statistics near zero efficiency. The consistency of the results with other methods shows that this technique can be applied to photon cascade emitters that are not discriminated by the detectors. The 35.5 keV gamma-ray emission probability of 125I was also determined using a Gamma-X type high-purity germanium detector. (author)
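    The efficiency-extrapolation step can be sketched numerically. Assuming the observed rate depends linearly on (1 - ε)/ε, a hypothetical simplification of the actual sum-coincidence formulae for 125I, the activity estimate is the intercept of a straight-line fit:

```python
import numpy as np

def extrapolate_activity(eff, rates):
    """Fit the observed coincidence counting rate against (1 - eff)/eff
    and return the intercept, i.e. the rate extrapolated to 100%
    efficiency, taken here as the activity estimate."""
    eff = np.asarray(eff, dtype=float)
    x = (1.0 - eff) / eff
    slope, intercept = np.polyfit(x, np.asarray(rates, dtype=float), 1)
    return float(intercept)
```

    As the abstract notes, the dominant uncertainty in such a fit comes from the sparse counting statistics of the points closest to the extrapolation limit.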

  11. A Predictive Model for Microbial Counts on Beaches where Intertidal Sand is the Primary Source

    Science.gov (United States)

    Feng, Zhixuan; Reniers, Ad; Haus, Brian K.; Solo-Gabriele, Helena M.; Wang, John D.; Fleming, Lora E.

    2015-01-01

    Human health protection at recreational beaches requires accurate and timely information on microbiological conditions to issue advisories. The objective of this study was to develop a new numerical mass balance model for enterococci levels on nonpoint source beaches. The significant advantage of this model is its easy implementation, and it provides a detailed description of the cross-shore distribution of enterococci that is useful for beach management purposes. The performance of the balance model was evaluated by comparing predicted exceedances of a beach advisory threshold value to field data and to a traditional regression model. Both the balance model and the regression equation predicted approximately 70% of the advisories correctly at knee depth and over 90% at waist depth. The balance model has the advantage over the regression equation in its ability to simulate spatiotemporal variations of microbial levels, and it is recommended for making more informed management decisions. PMID:25840869

  12. Calibration of the Accuscan II In Vivo System for Whole Body Counting

    Energy Technology Data Exchange (ETDEWEB)

    Orval R. Perry; David L. Georgeson

    2011-08-01

    This report describes the April 2011 calibration of the Accuscan II HpGe In Vivo system for whole body counting. The source used for the calibration was a NIST traceable BOMAB manufactured by DOE as INL2006 BOMAB containing Eu-154, Eu-155, Eu-152, Sb-125 and Y-88 with energies from 27 keV to 1836 keV with a reference date of 11/29/2006. The actual usable energy range was 86.5 keV to 1597 keV on 4/21/2011. The BOMAB was constructed inside the Accuscan II counting 'tub' in the order of legs, thighs, abdomen, thorax/arms, neck, and head. Each piece was taped to the backwall of the counter. The arms were taped to the thorax. The phantom was constructed between the v-ridges on the backwall of the Accuscan II counter. The energy and efficiency calibrations were performed using the INL2006 BOMAB. The calibrations were performed with the detectors in the scanning mode. This report includes an overview introduction and records for the energy/FWHM and efficiency calibration including performance verification and validation counting. The Accuscan II system was successfully calibrated for whole body counting and verified in accordance with ANSI/HPS N13.30-1996 criteria.

  13. Total bacterial count and somatic cell count in refrigerated raw milk stored in communal tanks

    Directory of Open Access Journals (Sweden)

    Edmar da Costa Alves

    2014-09-01

    Full Text Available The current industry demand for dairy products with extended shelf life has resulted in new challenges for milk quality maintenance, since the processing of milk with high bacterial counts compromises the quality and performance of industrial products. This study aimed to evaluate the total bacterial count (TBC) and somatic cell count (SCC) in 768 samples of refrigerated raw milk from 32 communal tanks. Samples were collected in the first quarter of 2010, 2011, 2012 and 2013 and analyzed by the Laboratory of Milk Quality - LQL. Results showed that 62.5%, 37.5%, 15.6% and 27.1% of the means for TBC in 2010, 2011, 2012 and 2013, respectively, were above the values established by legislation; nevertheless, a significant reduction in TBC levels was observed over the studied periods. For somatic cell count, 100% of the means were below 600,000 cells/mL, complying with current Brazilian legislation. The values found for the somatic cell count suggest the adoption of effective measures for the sanitary control of the herd. However, the results must be considered with caution, as they highlight the need for quality improvements of the raw material before reliable results are effectively achieved.

  14. Automatic counting of fission fragments tracks using the gas permeation technique

    CERN Document Server

    Yamazaki, I M

    1999-01-01

    An automatic counting system for fission tracks induced in a polycarbonate plastic, Makrofol KG (10 μm thickness), is described. The method is based on the gas transport mechanism proposed by Knudsen, whereby the gas permeability of a porous membrane is expected to be directly related to its track density. In this work, nitrogen permeabilities of several Makrofol films with different fission track densities were measured using an adequate gas permeation system. The fission tracks were produced by irradiating Makrofol foils with a 252Cf calibrated source in 2π geometry. A calibration curve of fission track number versus nitrogen permeability was obtained for track densities higher than 1000/cm2, where the spark gap technique and visual counting under a microscope are not appropriate.

  15. PREDICTIONS FOR ULTRA-DEEP RADIO COUNTS OF STAR-FORMING GALAXIES

    Energy Technology Data Exchange (ETDEWEB)

    Mancuso, Claudia; Lapi, Andrea; De Zotti, Gianfranco; Bressan, Alessandro; Perrotta, Francesca; Danese, Luigi [Astrophysics Sector, SISSA, Via Bonomea 265, I-34136 Trieste (Italy); Cai, Zhen-Yi [CAS Key Laboratory for Research in Galaxies and Cosmology, Department of Astronomy, University of Science and Technology of China, Hefei, Anhui 230026 (China); Negrello, Mattia; Bonato, Matteo, E-mail: cmancuso@sissa.it [INAF—Osservatorio Astronomico di Padova, Vicolo dell’Osservatorio 5, I-35122 Padova (Italy)

    2015-09-01

    We have worked out predictions for the radio counts of star-forming galaxies down to nJy levels, along with redshift distributions down to the detection limits of the phase 1 Square Kilometer Array MID telescope (SKA1-MID) and of its precursors. Such predictions were obtained by coupling epoch-dependent star formation rate (SFR) functions with relations between SFR and radio (synchrotron and free-free) emission. The SFR functions were derived taking into account both the dust-obscured and the unobscured star formation, by combining far-infrared, ultraviolet, and Hα luminosity functions up to high redshifts. We have also revisited the South Pole Telescope counts of dusty galaxies at 95 GHz, performing a detailed analysis of the spectral energy distributions. Our results show that the deepest SKA1-MID surveys will detect high-z galaxies with SFRs two orders of magnitude lower than those reached by Herschel surveys. The highest-redshift tails of the distributions at the detection limits of planned SKA1-MID surveys comprise a substantial fraction of strongly lensed galaxies. We predict that a survey down to 0.25 μJy at 1.4 GHz will detect about 1200 strongly lensed galaxies per square degree, at redshifts of up to 10; for about 30% of them the SKA1-MID will detect at least 2 images. The SKA1-MID will thus provide a comprehensive view of the star formation history throughout the re-ionization epoch, unaffected by dust extinction. We have also provided specific predictions for the EMU/ASKAP and MIGHTEE/MeerKAT surveys.

  16. Rainflow counting revisited

    Energy Technology Data Exchange (ETDEWEB)

    Soeker, H [Deutsches Windenergie-Institut (Germany)]

    1996-09-01

    As a state-of-the-art method, the rainflow counting technique is now applied universally in fatigue analysis. However, the author feels that the potential of the technique is not fully recognized in the wind energy industry, as it is used, most of the time, as a mere data reduction technique, disregarding some of the information inherent in the rainflow counting results. The ideas described in the following aim at exploiting this information and making it available for use in the design and verification process. (au)

  17. Identifying (subsurface) anthropogenic heat sources that influence temperature in the drinking water distribution system

    Science.gov (United States)

    Agudelo-Vera, Claudia M.; Blokker, Mirjam; de Kater, Henk; Lafort, Rob

    2017-09-01

    The water temperature in the drinking water distribution system and at customers' taps approaches the surrounding soil temperature at a depth of 1 m. Water temperature is an important determinant of water quality. In the Netherlands drinking water is distributed without additional residual disinfectant, and the temperature of drinking water at customers' taps is not allowed to exceed 25 °C. In recent decades, the urban (sub)surface has become increasingly occupied by various types of infrastructure, some of which can act as heat sources. Only recently have these anthropogenic sources and their influence on the underground been studied, and then only on coarse spatial scales. Little is known about the urban shallow underground heat profile on small spatial scales, of the order of 10 m × 10 m. Routine water quality samples at the tap in urban areas have revealed locations - so-called hotspots - in the city with relatively high soil temperatures - up to 7 °C warmer - compared to the soil temperatures in the surrounding rural areas. Yet the sources and the locations of these hotspots have not been identified. It is expected that, with climate change, the soil temperature in the hotspots can exceed 25 °C during a warm summer. The objective of this paper is to find a method to identify heat sources and urban characteristics that locally influence the soil temperature. The proposed method combines mapping of urban anthropogenic heat sources, retrospective modelling of the soil temperature, analysis of water temperature measurements at the tap, and extensive soil temperature measurements. This approach provided insight into the typical range of variation of the urban soil temperature, and it is a first step towards identifying areas with potential underground heat stress and towards thermal underground management in cities.

  18. Platelet Counts in Insoluble Platelet-Rich Fibrin Clots: A Direct Method for Accurate Determination

    Directory of Open Access Journals (Sweden)

    Yutaka Kitamura

    2018-02-01

    Platelet-rich fibrin (PRF) clots have been widely used in regenerative dentistry, with the assumption that growth factor levels are concentrated in proportion to the platelet concentration. Platelet counts in PRF are generally determined indirectly, by platelet counting in other liquid fractions. This study presents a method for direct estimation of platelet counts in PRF. To validate this method by determining the recovery rate, whole-blood samples were obtained with an anticoagulant from healthy donors, and platelet-rich plasma (PRP) fractions were clotted with CaCl2 by centrifugation and digested with tissue-plasminogen activator. Platelet counts were estimated before clotting and after digestion using an automatic hemocytometer. The method was then tested on PRF clots. The quality of platelets was examined by scanning electron microscopy and flow cytometry. In PRP-derived fibrin matrices, the recovery rates of platelets and white blood cells were 91.6 and 74.6%, respectively, after 24 h of digestion. In PRF clots associated with small and large red thrombi, platelet counts were 92.6 and 67.2% of the respective total platelet counts. These findings suggest that our direct method is sufficient for estimating the number of platelets trapped in an insoluble fibrin matrix and for determining that platelets are distributed in PRF clots and red thrombi roughly in proportion to their individual volumes. We therefore propose this direct digestion method for more accurate estimation of platelet counts in most types of platelet-enriched fibrin matrix.
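    The recovery rate underlying this validation is a simple ratio of the platelet count recovered after digestion to the count measured before clotting. A minimal sketch (the function name and the illustrative counts are assumptions, not the study's raw data):

    ```python
    # Hypothetical sketch: platelet recovery rate after fibrin clot digestion,
    # following the abstract's definition (count after digestion / count before clotting).
    def recovery_rate(count_before: float, count_after: float) -> float:
        """Return the percentage of platelets recovered after digestion."""
        return 100.0 * count_after / count_before

    # Illustrative platelet counts (platelets/uL), chosen to reproduce the
    # 91.6% figure reported for PRP-derived fibrin matrices:
    print(round(recovery_rate(250_000, 229_000), 1))  # -> 91.6
    ```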

  19. Radon counting statistics - a Monte Carlo investigation

    International Nuclear Information System (INIS)

    Scott, A.G.

    1996-01-01

    Radioactive decay is a Poisson process, and so the coefficient of variation (COV) of "n" counts of a single nuclide is usually estimated as 1/√n. This is only true if the count duration is much shorter than the half-life of the nuclide; at longer count durations, the COV is smaller than the Poisson estimate. Most radon measurement methods count the alpha decays of ²²²Rn plus the progeny ²¹⁸Po and ²¹⁴Po, and estimate the ²²²Rn activity from the sum of the counts. At long count durations, the chain decay of these nuclides means that every ²²²Rn decay must be followed by two other alpha decays. The total number of decays is then 3N, where N is the number of radon decays, and the true COV of the radon concentration estimate is 1/√N, a factor √3 larger than the Poisson total-count estimate of 1/√(3N). Most count periods are comparable to the half-lives of the progeny, so the relationship between COV and count time is complex. A Monte Carlo estimate of the ratio of the true COV to the Poisson estimate was carried out for count periods from 1 min to 16 h and for three common radon measurement methods: liquid scintillation, scintillation cell, and electrostatic precipitation of progeny. The Poisson approximation underestimates the COV by less than 20% for count durations of less than 60 min.
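    The long-duration limit described in this abstract can be sketched numerically. The toy model below (an assumption of this sketch, not the paper's code) ignores the progeny delays entirely, i.e. it takes the count window to be long compared with the progeny half-lives, so every radon decay contributes exactly three counted alphas; the observed COV of the total then approaches 1/√N rather than the naive Poisson 1/√(3N):

    ```python
    import random
    import statistics

    def poisson(rng: random.Random, lam: float) -> int:
        # Knuth's multiplication algorithm; adequate for modest lam.
        limit, k, p = pow(2.718281828459045, -lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    def simulate_cov(mean_radon_decays: float, trials: int, seed: int = 1) -> float:
        """Observed COV of total alpha counts when each 222Rn decay is
        followed (via short-lived progeny) by two more alpha decays."""
        rng = random.Random(seed)
        totals = [3 * poisson(rng, mean_radon_decays) for _ in range(trials)]
        return statistics.pstdev(totals) / statistics.fmean(totals)

    cov = simulate_cov(mean_radon_decays=100, trials=20000)
    # True COV ~ 1/sqrt(100) = 0.10; Poisson estimate 1/sqrt(300) ~ 0.058.
    print(round(cov, 3))
    ```

    Modelling the exponential delays of ²¹⁸Po and ²¹⁴Po explicitly, as the paper does, would interpolate between the two limits for count periods comparable to the progeny half-lives.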

  20. Hanford whole body counting manual

    International Nuclear Information System (INIS)

    Palmer, H.E.; Rieksts, G.A.; Lynch, T.P.

    1990-06-01

    This document describes the Hanford Whole Body Counting Program as administered by Pacific Northwest Laboratory (PNL) in support of the US Department of Energy, Richland Operations Office (DOE-RL) and its Hanford contractors. Program services include providing in vivo measurements of internally deposited radioactivity in Hanford employees (or visitors). Specific chapters of this manual deal with the following subjects: the program's operational charter, authority, administration, and practices, including the interpretation of applicable DOE Orders, regulations, and guidance into criteria for in vivo measurement frequency for the plant-wide whole body counting services; the state-of-the-art facilities and equipment used to provide the best in vivo measurement results possible for the approximately 11,000 measurements made annually; procedures for performing the various in vivo measurements at the Whole Body Counter (WBC) and related facilities, including whole body counts; operation and maintenance of counting equipment; quality assurance provisions of the program; WBC data processing functions; statistical aspects of in vivo measurements; and whole body counting records and associated guidance documents. 16 refs., 48 figs., 22 tabs.