WorldWideScience

Sample records for multiple point source

  1. Use of multiple water surface flow constructed wetlands for non-point source water pollution control.

    Li, Dan; Zheng, Binghui; Liu, Yan; Chu, Zhaosheng; He, Yan; Huang, Minsheng

    2018-05-02

    Multiple free water surface flow constructed wetlands (multi-FWS CWs) are a variant of conventional water treatment systems used to intercept pollutants. This review summarized their characteristics and applications in the ecological control of non-point source water pollution. The roles of in-series design and of operation parameters (hydraulic residence time, hydraulic loading rate, water depth and aspect ratio, influent composition, and plant species) in performance intensification were also analyzed; these parameters are crucial for sustainable and effective contaminant removal, especially nutrient retention. Mechanistic studies of how design and operation parameters affect nitrogen and phosphorus removal were also highlighted. Perspectives for further research on optimizing design/operation parameters and on advanced ecological restoration technologies were outlined to help interpret the functions of multi-FWS CWs.
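
    The dependence on hydraulic loading and residence time mentioned above is often summarized with the first-order k-C* relation commonly used for surface-flow wetlands; the sketch below is a minimal illustration of that relation, not a model from this review, and every parameter value is an assumption.

```python
import numpy as np

def kc_star_outlet(c_in, c_star, k_areal, q):
    """First-order k-C* model for a free water surface wetland.

    c_in    : inlet concentration (mg/L)
    c_star  : background concentration (mg/L)
    k_areal : areal rate constant (m/yr)
    q       : hydraulic loading rate (m/yr)
    """
    return c_star + (c_in - c_star) * np.exp(-k_areal / q)

# Illustrative values only: total-nitrogen removal at several hydraulic loading rates.
for q in (10.0, 20.0, 40.0):  # m/yr
    c_out = kc_star_outlet(c_in=5.0, c_star=1.5, k_areal=22.0, q=q)
    print(f"q = {q:5.1f} m/yr -> outlet TN ~ {c_out:.2f} mg/L")
```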

  3. A Numerical Study on the Excitation of Guided Waves in Rectangular Plates Using Multiple Point Sources

    Wenbo Duan

    2017-12-01

    Ultrasonic guided waves are widely used to inspect and monitor the structural integrity of plates and plate-like structures, such as ship hulls and large storage-tank floors. Recently, ultrasonic guided waves have also been used to remove ice and fouling from ship hulls, wind-turbine blades and aeroplane wings. In these applications, the strength of the sound source must be high for scanning a large area, or to break the bond between ice, fouling and the plate substrate. More than one transducer may be used to achieve maximum sound power output. However, multiple sources can interact with each other and form a sound field in the structure with local constructive and destructive regions. Destructive regions are weak regions and should be avoided. When multiple transducers are used, it is important that they are arranged in a particular way so that the desired wave modes can be excited to cover the whole structure. The objective of this paper is to provide a theoretical basis for generating particular wave mode patterns in finite-width rectangular plates whose length is assumed to be infinitely long with respect to their width and thickness. The wave modes have displacements in both the width and thickness directions, and are thus different from the classical Lamb-type wave modes. A two-dimensional semi-analytical finite element (SAFE) method was used to study dispersion characteristics and mode shapes in the plate up to ultrasonic frequencies. The modal analysis provided information on the generation of modes suitable for a particular application. The number of point sources and the direction of loading for the excitation of a few representative modes were investigated. Based on the SAFE analysis, a standard finite element modelling package, Abaqus, was used to excite the designed modes in a three-dimensional plate. The generated wave patterns in Abaqus were then compared with the mode shapes predicted by the SAFE model. Good agreement was observed between the two.
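
    The constructive/destructive interaction of several transducers can be illustrated with a crude scalar toy model (two monochromatic point sources whose cylindrical waves are simply summed); this is only a sketch of the interference effect, not the SAFE or Abaqus analysis of the paper, and the frequency, wave speed and spacing are assumed values.

```python
import numpy as np

# Crude scalar model: two monochromatic point sources on a plate, each radiating
# an amplitude ~ cos(k r)/sqrt(r); their sum exhibits the constructive and
# destructive regions that careful transducer placement is meant to avoid.
frequency = 50e3        # Hz (assumed)
phase_speed = 3000.0    # m/s, nominal plate-wave phase speed (assumed)
k = 2.0 * np.pi * frequency / phase_speed

sources = [(-0.05, 0.0), (0.05, 0.0)]        # two transducers 10 cm apart (assumed)
x = np.linspace(-0.5, 0.5, 401)
y = np.linspace(0.01, 1.0, 400)
X, Y = np.meshgrid(x, y)

field = np.zeros_like(X)
for sx, sy in sources:
    r = np.hypot(X - sx, Y - sy) + 1e-6      # avoid division by zero at the source
    field += np.cos(k * r) / np.sqrt(r)      # cylindrical spreading of each wave

print("peak amplitude:", np.abs(field).max())
print("fraction of weak (destructive) points:",
      np.mean(np.abs(field) < 0.1 * np.abs(field).max()))
```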

  4. Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions

    Belkhatir, Zehor

    2017-06-28

    This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using the polynomial modulating functions method and a suitable change of variables, the problem of estimating the locations and amplitudes of a multi-pointwise input is decoupled into two algebraic systems of equations. The first system is nonlinear and solves for the time locations iteratively, whereas the second system is linear and solves for the input’s amplitudes. Second, closed-form formulas for both the time location and the amplitude are provided in the particular case of a single point input. Finally, numerical examples are given to illustrate the performance of the proposed technique in both noise-free and noisy cases. The joint estimation of pointwise input and fractional differentiation orders is also presented. Furthermore, a discussion on the performance of the proposed algorithm is provided.
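
    The two-stage structure described in the abstract (a nonlinear solve for the input time locations wrapped around a linear solve for the amplitudes) can be sketched generically as below. The toy forward model, a sum of delayed integer-order step responses, and all numbers are stand-ins for illustration; the paper's actual algebraic systems come from polynomial modulating functions applied to a fractional-order system.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)

def response(t, locations, amplitudes):
    """Toy forward model: sum of delayed first-order step responses to point inputs."""
    y = np.zeros_like(t)
    for tk, ak in zip(locations, amplitudes):
        y += ak * (t >= tk) * (1.0 - np.exp(-(t - tk)))
    return y

true_loc, true_amp = np.array([2.0, 6.0]), np.array([1.0, -0.5])
y_meas = response(t, true_loc, true_amp) + 0.01 * rng.standard_normal(t.size)

def amplitudes_for(locations):
    # Linear sub-problem: the amplitudes enter linearly, so solve by least squares.
    basis = np.column_stack([(t >= tk) * (1.0 - np.exp(-(t - tk))) for tk in locations])
    amp, *_ = np.linalg.lstsq(basis, y_meas, rcond=None)
    return amp

def residual(locations):
    # Nonlinear sub-problem: residual as a function of the time locations only.
    return response(t, locations, amplitudes_for(locations)) - y_meas

fit = least_squares(residual, x0=[1.0, 5.0])
print("estimated locations :", np.round(fit.x, 2))
print("estimated amplitudes:", np.round(amplitudes_for(fit.x), 2))
```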

  5. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    Gora, D.; Bernardini, E.; Cruz Silva, A.H.

    2011-04-01

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)

  6. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    Gora, D. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institute of Nuclear Physics PAN, Cracow (Poland); Bernardini, E.; Cruz Silva, A.H. [Institute of Nuclear Physics PAN, Cracow (Poland)

    2011-04-15

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)
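
    The time-clustering idea can be sketched in a simplified form: every window bounded by a pair of event times is scored against a steady background expectation, and the most significant cluster is reported. This is only a toy stand-in for the unbinned-likelihood machinery of the paper, the event data are invented, and the trial-factor correction a real search needs is omitted.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
livetime = 365.0        # days of observation
bg_rate = 0.05          # expected background events per day in the search bin (assumed)

# Invented event times: steady background plus a weak 5-day flare around day 200.
times = np.sort(np.concatenate([
    rng.uniform(0.0, livetime, 15),
    rng.uniform(200.0, 205.0, 6),
]))

# Score every window bounded by a pair of event times against the background expectation.
best = None
for i in range(len(times)):
    for j in range(i + 1, len(times)):
        duration = times[j] - times[i]
        n_obs = j - i + 1
        n_exp = bg_rate * duration
        p_value = poisson.sf(n_obs - 1, n_exp)   # P(N >= n_obs | background only)
        if best is None or p_value < best[0]:
            best = (p_value, times[i], times[j], n_obs)

p, t_start, t_end, n = best
print(f"most significant cluster: {n} events in [{t_start:.1f}, {t_end:.1f}] d, p = {p:.2e}")
# A real search must also correct this p-value for the many windows tried.
```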

  7. [Multiple time scales analysis of spatial differentiation characteristics of non-point source nitrogen loss within watershed].

    Liu, Mei-bing; Chen, Xing-wei; Chen, Ying

    2015-07-01

    Identification of the critical source areas of non-point source pollution is an important means of controlling non-point source pollution within a watershed. In order to further reveal the impact of multiple time scales on the spatial differentiation characteristics of non-point source nitrogen loss, a SWAT model of the Shanmei Reservoir watershed was developed. Based on the simulated total nitrogen (TN) loss intensity of all 38 subbasins, the spatial distribution characteristics of nitrogen loss and the critical source areas were analyzed at three time scales: yearly average, monthly average and rainstorm flood process. Furthermore, multiple linear correlation analysis was conducted to analyze the contributions of the natural environment and anthropogenic disturbance to nitrogen loss. The results showed that there were significant spatial differences in TN loss in the Shanmei Reservoir watershed at different time scales, and that the degree of spatial differentiation of nitrogen loss followed the order monthly average > yearly average > rainstorm flood process. TN loss load mainly came from the upland Taoxi subbasin, which was identified as the critical source area. At all time scales, land use type (such as farmland and forest) was the dominant factor affecting the spatial distribution of nitrogen loss, whereas precipitation and runoff affected nitrogen loss only in months without fertilization and in several storm flood events occurring on non-fertilization dates. This was mainly due to the significant spatial variation of land use and fertilization, as well as the low spatial variability of precipitation and runoff.

  8. Tracking an oil slick from multiple natural sources, Coal Oil Point, California

    Leifer, Ira; Luyendyk, Bruce; Broderick, Kris

    2006-01-01

    Oil slicks on the ocean surface emitted from natural marine hydrocarbon seeps offshore from Coal Oil Point in the Santa Barbara Channel, California were tracked and sampled over a 2-h period. The objectives were to characterize the seep oil and to track its composition over time using a new sampling device, a catamaran drum sampler (CATDRUMS). The sampler was designed and developed at UCSB. Chromatograms showed that oil originating from an informally named, very active seep area, Shane Seep, primarily evolved during the first hour due to mixing with oil originating from a convergence zone slick surrounding Shane Seep. (author)

  9. Tracking an oil slick from multiple natural sources, Coal Oil Point, California

    Leifer, Ira [Marine Sciences Institute, University of California, Santa Barbara, CA 93106 (United States); Luyendyk, Bruce [Department of Geological Sciences, University of California, Santa Barbara, CA 93106 (United States); Broderick, Kris [Exxon/Mobil Exploration Company, 13401 N. Freeway, Houston, TX 77060 (United States)

    2006-06-15

    Oil slicks on the ocean surface emitted from natural marine hydrocarbon seeps offshore from Coal Oil Point in the Santa Barbara Channel, California were tracked and sampled over a 2-h period. The objectives were to characterize the seep oil and to track its composition over time using a new sampling device, a catamaran drum sampler (CATDRUMS). The sampler was designed and developed at UCSB. Chromatograms showed that oil originating from an informally named, very active seep area, Shane Seep, primarily evolved during the first hour due to mixing with oil originating from a convergence zone slick surrounding Shane Seep. (author)

  10. PSD Applicability Determination for Multiple Owner/Operator Point Sources Within a Single Facility

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  11. Photoacoustic Point Source

    Calasso, Irio G.; Craig, Walter; Diebold, Gerald J.

    2001-01-01

    We investigate the photoacoustic effect generated by heat deposition at a point in space in an inviscid fluid. Delta-function and long Gaussian optical pulses are used as sources in the wave equation for the displacement potential to determine the fluid motion. The linear sound-generation mechanism gives bipolar photoacoustic waves, whereas the nonlinear mechanism produces asymmetric tripolar waves. The salient features of the photoacoustic point source are that rapid heat deposition and nonlinear thermal expansion dominate the production of ultrasound

  12. Point Pollution Sources Dimensioning

    Georgeta CUCULEANU

    2011-06-01

    In this paper a method for determining the main physical characteristics of point pollution sources is presented. The main physical characteristics of these sources are the top inside source diameter and the physical height. The top inside diameter is calculated from the gas flow rate. For reckoning the physical height of the source, one takes into account the relation given by the proportionality factor, defined as the ratio between the plume rise and the physical height of the source. The plume rise depends on the gas exit velocity and gas temperature. That relation is necessary for diminishing environmental pollution when the production capacity of the plant varies in comparison with the nominal one.
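
    The two relations stated in the abstract translate directly into code; the helpers below merely evaluate them (volumetric flow rate and exit velocity give the top inside diameter, and a given plume rise divided by the proportionality factor gives the physical height). All numbers are illustrative and no specific plume-rise formula is implied.

```python
import math

def top_inside_diameter(gas_flow_rate, exit_velocity):
    """Diameter (m) from volumetric gas flow rate (m^3/s) and gas exit velocity (m/s)."""
    return math.sqrt(4.0 * gas_flow_rate / (math.pi * exit_velocity))

def physical_height(plume_rise, proportionality_factor):
    """Stack height (m) from plume rise (m) and the factor f = plume rise / height."""
    return plume_rise / proportionality_factor

d = top_inside_diameter(gas_flow_rate=30.0, exit_velocity=15.0)     # assumed values
h = physical_height(plume_rise=120.0, proportionality_factor=1.5)   # assumed values
print(f"top inside diameter ~ {d:.2f} m, physical height ~ {h:.1f} m")
```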

  13. Screening the Medicines for Malaria Venture Pathogen Box across Multiple Pathogens Reclassifies Starting Points for Open-Source Drug Discovery.

    Duffy, Sandra; Sykes, Melissa L; Jones, Amy J; Shelper, Todd B; Simpson, Moana; Lang, Rebecca; Poulsen, Sally-Ann; Sleebs, Brad E; Avery, Vicky M

    2017-09-01

    Open-access drug discovery provides a substantial resource for diseases primarily affecting the poor and disadvantaged. The open-access Pathogen Box collection comprises compounds with demonstrated biological activity against specific pathogenic organisms. The supply of this resource by the Medicines for Malaria Venture has the potential to provide new chemical starting points for a number of tropical and neglected diseases, through repurposing of these compounds for use in drug discovery campaigns for these additional pathogens. We tested the Pathogen Box against kinetoplastid parasites and malaria life cycle stages in vitro. Consequently, chemical starting points for malaria, human African trypanosomiasis, Chagas disease, and leishmaniasis drug discovery efforts have been identified. Inclusive of this in vitro biological evaluation, outcomes from extensive literature reviews and database searches are provided. This information encompasses commercial availability, literature reference citations, other aliases and ChEMBL number with associated biological activity, where available. The release of this new data for the Pathogen Box collection into the public domain will aid the open-source model of drug discovery. Importantly, this will provide novel chemical starting points for drug discovery and target identification in tropical disease research. Copyright © 2017 Duffy et al.

  14. Neutron source multiplication method

    Clayton, E.D.

    1985-01-01

    Extensive use has been made of neutron source multiplication in thousands of measurements of critical masses and configurations and in subcritical neutron-multiplication measurements in situ that provide data for criticality prevention and control in nuclear materials operations. There is continuing interest in developing reliable methods for monitoring the reactivity, or k_eff, of plant operations, but the required measurements are difficult to carry out and interpret on the far subcritical configurations usually encountered. The relationship between neutron multiplication and reactivity is briefly discussed and data presented to illustrate problems associated with the absolute measurement of neutron multiplication and reactivity in subcritical systems. A number of curves of inverse multiplication have been selected from a variety of experiments showing variations observed in multiplication during the course of critical and subcritical experiments where different methods of reactivity addition were used, with different neutron source detector position locations. Concern is raised regarding the meaning and interpretation of k_eff as might be measured in a far subcritical system because of the modal effects and spectrum differences that exist between the subcritical and critical systems. Because of this, the calculation of k_eff identical with unity for the critical assembly, although necessary, may not be sufficient to assure safety margins in calculations pertaining to far subcritical systems. Further study is needed on the interpretation and meaning of k_eff in the far subcritical system.
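
    A common way to use such subcritical data is the inverse-multiplication (1/M) plot: as fissile material is added, 1/M trends toward zero, and a linear extrapolation indicates roughly where criticality would be reached. The sketch below shows only that bookkeeping with invented count data; it does not address the modal and spectral caveats raised above.

```python
import numpy as np

# Invented count data: detector counts with source only (c0) and with increasing
# fissile mass loaded (counts).  M = counts/c0, and 1/M -> 0 as k_eff -> 1.
mass_kg = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
c0 = 100.0
counts = np.array([125.0, 165.0, 240.0, 430.0, 900.0])

inverse_m = c0 / counts

# Linear extrapolation of the last few 1/M points versus mass down to 1/M = 0.
slope, intercept = np.polyfit(mass_kg[-3:], inverse_m[-3:], 1)
estimated_critical_mass = -intercept / slope
print("1/M values:", np.round(inverse_m, 3))
print(f"extrapolated critical mass ~ {estimated_critical_mass:.1f} kg")
```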

  15. Source splitting via the point source method

    Potthast, Roland; Fazi, Filippo M; Nelson, Philip A

    2010-01-01

    We introduce a new algorithm for source identification and field splitting based on the point source method (Potthast 1998 A point-source method for inverse acoustic and electromagnetic obstacle scattering problems IMA J. Appl. Math. 61 119–40, Potthast R 1996 A fast new method to solve inverse scattering problems Inverse Problems 12 731–42). The task is to separate the sound fields u_j, j = 1, ..., n, of n ∈ ℕ sound sources supported in different bounded domains G_1, ..., G_n in ℝ³ from measurements of the field on some microphone array—mathematically speaking, from the knowledge of the sum of the fields u = u_1 + ... + u_n on some open subset Λ of a plane. The main idea of the scheme is to calculate filter functions g_1, ..., g_n, n ∈ ℕ, to construct u_l for l = 1, ..., n from u|_Λ in the form u_l(x) = ∫_Λ g_{l,x}(y) u(y) ds(y), l = 1, ..., n. (1) We will provide the complete mathematical theory for the field splitting via the point source method. In particular, we describe uniqueness, solvability of the problem and convergence and stability of the algorithm. In the second part we describe the practical realization of the splitting for real data measurements carried out at the Institute for Sound and Vibration Research at Southampton, UK. A practical demonstration of the original recording and the splitting results for real data is available online.
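
    Once the filter functions g_l are available (computing them is the point source method itself and is not attempted here), applying equation (1) on a sampled microphone plane reduces to a simple quadrature; the sketch below shows that step with placeholder arrays standing in for measured data and filters.

```python
import numpy as np

# Assumed inputs: the measured total field u on the microphones of the plane Lambda
# and a precomputed filter g_{l,x} for source l and evaluation point x, sampled at
# the same microphones (random placeholders here).
rng = np.random.default_rng(2)
n_mics = 64
area_element = 0.01                     # ds, area per microphone on a regular grid (m^2)
u_measured = rng.standard_normal(n_mics) + 1j * rng.standard_normal(n_mics)
g_filter = rng.standard_normal(n_mics) + 1j * rng.standard_normal(n_mics)

# Discrete version of u_l(x) = integral over Lambda of g_{l,x}(y) u(y) ds(y).
u_l_at_x = np.sum(g_filter * u_measured) * area_element
print("reconstructed field of source l at the evaluation point:", u_l_at_x)
```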

  16. Evaluation of multiple emission point facilities

    Miltenberger, R.P.; Hull, A.P.; Strachan, S.; Tichler, J.

    1988-01-01

    In 1970, the New York State Department of Environmental Conservation (NYSDEC) assumed responsibility for the environmental aspect of the state's regulatory program for by-product, source, and special nuclear material. The major objective of this study was to provide consultation to NYSDEC and the US NRC to assist NYSDEC in determining if broad-based licensed facilities with multiple emission points were in compliance with NYCRR Part 380. Under this contract, BNL would evaluate a multiple emission point facility, identified by NYSDEC, as a case study. The review would be a nonbinding evaluation of the facility to determine likely dispersion characteristics, compliance with specified release limits, and implementation of the ALARA philosophy regarding effluent release practices. From the data collected, guidance as to areas of future investigation and the impact of new federal regulations were to be developed. Reported here is the case study for the University of Rochester, Strong Memorial Medical Center and Riverside Campus

  17. Point-source inversion techniques

    Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.

    1982-11-01

    A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
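
    In the linearized case the moment-tensor estimation reduces to ordinary least squares between observed waveform samples and Green's-function excitation kernels, d = G m; the sketch below shows only that step with synthetic matrices, since building the Green's functions for a given depth and velocity model is the hard part and is not attempted here.

```python
import numpy as np

rng = np.random.default_rng(3)

# d = G m: stacked waveform samples (rows) against the 6 independent moment-tensor
# components (columns).  In practice G holds Green's-function kernels for the
# assumed source depth and velocity model; here it is synthetic.
n_samples = 600
G = rng.standard_normal((n_samples, 6))
m_true = np.array([1.0, -0.4, -0.6, 0.3, 0.0, 0.2])      # toy moment tensor
d = G @ m_true + 0.05 * rng.standard_normal(n_samples)   # noisy "observations"

m_est, _, rank, _ = np.linalg.lstsq(G, d, rcond=None)
print("estimated moment-tensor components:", np.round(m_est, 3))
print("rank of G (a check on resolvability with sparse data):", rank)
```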

  18. Plume rise from multiple sources

    Briggs, G.A.

    1975-01-01

    A simple enhancement factor for plume rise from multiple sources is proposed and tested against plume-rise observations. For bent-over buoyant plumes, this results in the recommendation that multiple-source rise be calculated as [(N + S)/(1 + S)]^(1/3) times the single-source rise, Δh_1, where N is the number of sources and S = 6 (total width of source configuration / (N^(1/3) Δh_1))^(3/2). For calm conditions a crude but simple method is suggested for predicting the height of plume merger and subsequent behavior which is based on the geometry and velocity variations of a single buoyant plume. Finally, it is suggested that large clusters of buoyant sources might occasionally give rise to concentrated vortices either within the source configuration or just downwind of it.
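
    The recommended enhancement factor can be written directly as a small helper that evaluates the expressions quoted above; the example numbers are illustrative only.

```python
def multiple_source_rise(n_sources, config_width, single_rise):
    """Enhanced plume rise for N merged bent-over buoyant plumes.

    n_sources    : number of sources, N
    config_width : total width of the source configuration
    single_rise  : single-source rise, delta-h_1 (same length unit as the width)
    """
    s = 6.0 * (config_width / (n_sources ** (1.0 / 3.0) * single_rise)) ** 1.5
    enhancement = ((n_sources + s) / (1.0 + s)) ** (1.0 / 3.0)
    return enhancement * single_rise

# Example: four stacks spread over 60 m, each rising 200 m on its own.
print(f"combined rise ~ {multiple_source_rise(4, 60.0, 200.0):.0f} m")
```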

  19. Calcareous Fens - Source Feature Points

    Minnesota Department of Natural Resources — Pursuant to the provisions of Minnesota Statutes, section 103G.223, this database contains points that represent calcareous fens as defined in Minnesota Rules, part...

  20. Unidentified point sources in the IRAS minisurvey

    Houck, J. R.; Soifer, B. T.; Neugebauer, G.; Beichman, C. A.; Aumann, H. H.; Clegg, P. E.; Gillett, F. C.; Habing, H. J.; Hauser, M. G.; Low, F. J.

    1984-01-01

    Nine bright, point-like 60 micron sources have been selected from the sample of 8709 sources in the IRAS minisurvey. These sources have no counterparts in a variety of catalogs of nonstellar objects. Four objects have no visible counterparts, while five have faint stellar objects visible in the error ellipse. These sources do not resemble objects previously known to be bright infrared sources.

  1. Supporting Multiple Pointing Devices in Microsoft Windows

    Westergaard, Michael

    2002-01-01

    In this paper the implementation of a Microsoft Windows driver including APIs supporting multiple pointing devices is presented. Microsoft Windows does not natively support multiple pointing devices controlling independent cursors, and a number of solutions to this have been implemented by us and others. Here we motivate and describe a general solution, and how user applications can use it by means of a framework. The device driver and the supporting APIs will be made available free of charge. Interested parties can contact the author for more information.

  2. UHE point source survey at Cygnus experiment

    Lu, X.; Yodh, G.B.; Alexandreas, D.E.; Allen, R.C.; Berley, D.; Biller, S.D.; Burman, R.L.; Cady, R.; Chang, C.Y.; Dingus, B.L.; Dion, G.M.; Ellsworth, R.W.; Gilra, M.K.; Goodman, J.A.; Haines, T.J.; Hoffman, C.M.; Kwok, P.; Lloyd-Evans, J.; Nagle, D.E.; Potter, M.E.; Sandberg, V.D.; Stark, M.J.; Talaga, R.L.; Vishwanath, P.R.; Zhang, W.

    1991-01-01

    A new method of searching for UHE point sources has been developed. With a data sample of 150 million events, we have surveyed the sky for point sources over 3314 locations (1.4° < δ < 70.4°). It was found that their distribution is consistent with a random fluctuation. In addition, fifty-two known potential sources, including pulsars and binary x-ray sources, were studied. The source with the largest positive excess is the Crab Nebula. An excess of 2.5 sigma above the background is observed in a bin of 2.3° by 2.5° in declination and right ascension, respectively.

  3. Γ-source Neutral Point Clamped Inverter

    Mo, Wei; Loh, Poh Chiang; Blaabjerg, Frede

    Transformer-based Z-source inverters have recently been proposed to achieve promising buck-boost capability. They offer higher buck-boost capability, smaller size and a lower component count than Z-source inverters. On the other hand, neutral point clamped inverters have lower switching stress and better output performance compared with traditional two-level inverters. Integrating these two types of configurations can help neutral point clamped inverters achieve enhanced voltage buck-boost capability.

  5. OH masers associated with IRAS point sources

    Masheder, MRW; Cohen, RJ; Martin-Hernandez, NL; Migenes,; Reid, MJ

    2002-01-01

    We report a search for masers from the Lambda-doublet of the ground state of OH at 18 cm, carried out with the Jodrell Bank Lovell Telescope and with the 25 m Dwingeloo telescope. All objects north of δ = -20° which appear in the IRAS Point Source Catalog with fluxes > 1000 Jy at 60 μm and

  6. Isotropic irradiation of detectors from point sources

    Aage, Helle Karina

    1997-01-01

    NaI(Tl) scintillator detectors have been exposed to gamma rays from 8 different point sources from different directions. Background and backscatter of gamma-rays from the surroundings have been subtracted in order to produce clean spectra. By adding spectra obtained from exposures from different ...

  7. Eta Carinae: Viewed from Multiple Vantage Points

    Gull, Theodore

    2007-01-01

    The central source of Eta Carinae and its ejecta is a massive binary system buried within a massive interacting wind structure which envelops the two stars. However, the hot, less massive companion blows a small cavity in the very massive primary wind and ionizes a portion of the massive wind just beyond the wind-wind boundary. We gain insight into this complex structure by examining the spatially resolved Space Telescope Imaging Spectrograph (STIS) spectra of the central source (0.1") together with the wind structure, which extends out to nearly an arcsecond (2300 AU), the wind-blown boundaries, and the ejecta of the Little Homunculus. Moreover, the spatially resolved Very Large Telescope/UltraViolet Echelle Spectrograph (VLT/UVES) stellar spectrum (one arcsecond) and spatially sampled spectra across the foreground lobe of the Homunculus provide vantage points from different angles relative to the line of sight. Examples of wind line profiles of Fe II, and of the highly excited [Fe III], [Ne III], [Ar III] and [S III], plus other lines, will be presented.

  8. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    Song, S. G.

    2013-12-24

    Ground motion prediction is an essential element in seismic hazard and risk analysis. Empirical ground motion prediction approaches have been widely used in the community, but efficient simulation-based ground motion prediction methods are needed to complement empirical approaches, especially in the regions with limited data constraints. Recently, dynamic rupture modelling has been successfully adopted in physics-based source and ground motion modelling, but it is still computationally demanding and many input parameters are not well constrained by observational data. Pseudo-dynamic source modelling keeps the form of kinematic modelling with its computational efficiency, but also tries to emulate the physics of source process. In this paper, we develop a statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point and 2-point statistics from dynamically derived source models and simulating a number of rupture scenarios, given target 1-point and 2-point statistics. We propose a new rupture model generator for stochastic source modelling with the covariance matrix constructed from target 2-point statistics, that is, auto- and cross-correlations. Our sensitivity analysis of near-source ground motions to 1-point and 2-point statistics of source parameters provides insights into relations between statistical rupture properties and ground motions. We observe that larger standard deviation and stronger correlation produce stronger peak ground motions in general. The proposed new source modelling approach will contribute to understanding the effect of earthquake source on near-source ground motion characteristics in a more quantitative and systematic way.
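
    For a single parameter such as slip, the rupture-generator idea (draw a Gaussian field whose covariance encodes the target 2-point statistics, then impose the target 1-point statistics) can be sketched as below; the exponential correlation model and all numerical values are assumptions, not the paper's calibrated statistics.

```python
import numpy as np

rng = np.random.default_rng(4)

# Target 1-point statistics (mean and standard deviation of slip, in metres) and
# 2-point statistics (an exponential auto-correlation with a correlation length).
n_cells, dx = 200, 0.5          # fault discretized along strike (cell size in km)
target_mean, target_std = 1.5, 0.8
corr_length = 5.0               # km (assumed)

x = np.arange(n_cells) * dx
lags = np.abs(x[:, None] - x[None, :])
cov = (target_std ** 2) * np.exp(-lags / corr_length)    # covariance from 2-point stats

# One correlated realization: slip = mean + L z, with C = L L^T (Cholesky factor L).
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n_cells))    # small jitter for stability
slip = target_mean + L @ rng.standard_normal(n_cells)

print(f"sample mean {slip.mean():.2f} m, sample std {slip.std():.2f} m")
```

    Cross-correlations between different source parameters (for example slip and rupture velocity) would extend this to a block covariance matrix, which is the role the 2-point statistics play in the abstract above.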

  9. Localization of Point Sources for Poisson Equation using State Observers

    Majeed, Muhammad Usman

    2016-08-09

    A method based on iterative observer design is presented to solve the point source localization problem for the Poisson equation with given boundary data. The procedure involves the solution of multiple boundary estimation sub-problems using the available Dirichlet and Neumann data from different parts of the boundary. A weighted sum of these sub-problem solution profiles localizes the point sources inside the domain. A method to compute these weights is also provided. Numerical results are presented using finite differences in a rectangular domain. (C) 2016, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.

  10. Localization of Point Sources for Poisson Equation using State Observers

    Majeed, Muhammad Usman; Laleg-Kirati, Taous-Meriem

    2016-01-01

    A method based on iterative observer design is presented to solve the point source localization problem for the Poisson equation with given boundary data. The procedure involves the solution of multiple boundary estimation sub-problems using the available Dirichlet and Neumann data from different parts of the boundary. A weighted sum of these sub-problem solution profiles localizes the point sources inside the domain. A method to compute these weights is also provided. Numerical results are presented using finite differences in a rectangular domain. (C) 2016, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
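
    The forward building block of such a localization scheme, a finite-difference Poisson solve on a rectangular grid with a point source on the right-hand side, can be sketched as follows; the iterative observer and the weighting of sub-problem solutions described in the abstract are not reproduced here, and the grid size and source location are arbitrary.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

n = 41                      # grid points per side; interior grid is (n-2) x (n-2)
h = 1.0 / (n - 1)
source_ij = (25, 14)        # assumed point-source location (interior grid indices)

m = n - 2
A = lil_matrix((m * m, m * m))
b = np.zeros(m * m)

def idx(i, j):              # map interior node (i, j) to a linear index
    return (i - 1) * m + (j - 1)

# Standard 5-point Laplacian with homogeneous Dirichlet boundary values:
# (4 u_ij - sum of neighbours) = h^2 f_ij, with f a discrete delta (1/h^2) at the source.
for i in range(1, n - 1):
    for j in range(1, n - 1):
        k = idx(i, j)
        A[k, k] = 4.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 1 <= i + di <= n - 2 and 1 <= j + dj <= n - 2:
                A[k, idx(i + di, j + dj)] = -1.0
        if (i, j) == source_ij:
            b[k] = 1.0      # h^2 * (1/h^2)

u = spsolve(A.tocsr(), b)
print("peak of the solution (near the point source):", u.max())
```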

  11. Pilot points method for conditioning multiple-point statistical facies simulation on flow data

    Ma, Wei; Jafarpour, Behnam

    2018-05-01

    We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
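
    The placement step can be sketched as a weighted score map combining the three information sources listed above, followed by a greedy top-k selection with a minimum spacing; the weights, the placeholder maps and the spacing rule are invented for illustration and are not the paper's actual scoring.

```python
import numpy as np

rng = np.random.default_rng(5)
nx, ny = 50, 50

# Placeholder maps, each scaled to [0, 1]:
facies_uncertainty = rng.random((nx, ny))   # (i) uncertainty in the facies distribution
sensitivity = rng.random((nx, ny))          # (ii) model response sensitivity
data_mismatch = rng.random((nx, ny))        # (iii) mismatch against observed flow data

weights = (0.4, 0.3, 0.3)                   # assumed weighting of the three maps
score = (weights[0] * facies_uncertainty
         + weights[1] * sensitivity
         + weights[2] * data_mismatch)

def select_pilot_points(score, k=10, min_spacing=5):
    """Greedy top-k selection of high-score cells with a minimum mutual spacing."""
    order = np.dstack(np.unravel_index(np.argsort(score, axis=None)[::-1], score.shape))[0]
    chosen = []
    for i, j in order:
        if all((i - ci) ** 2 + (j - cj) ** 2 >= min_spacing ** 2 for ci, cj in chosen):
            chosen.append((int(i), int(j)))
        if len(chosen) == k:
            break
    return chosen

print("pilot point locations:", select_pilot_points(score))
```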

  12. On the point-source approximation of earthquake dynamics

    Andrea Bizzarri

    2014-06-01

    The focus of the present study is on the point-source approximation of a seismic source. First, we compare the synthetic motions on the free surface resulting from different analytical evolutions of the seismic source: the Gabor signal (G), the Bouchon ramp (B), the Cotton and Campillo ramp (CC), the Yoffe function (Y) and the Liu and Archuleta function (LA). Our numerical experiments indicate that the CC and the Y functions produce synthetics with larger oscillations and correspondingly a higher frequency content. Moreover, the CC and the Y functions tend to produce higher peaks in the ground velocity (roughly by a factor of two). We have also found that the falloff at high frequencies is quite different: it roughly follows ω^-2 for the G and LA functions, it decays faster than ω^-2 for the B function, while it is slower than ω^-1 for both the CC and the Y solutions. Then we perform a comparison of seismic waves resulting from 3-D extended ruptures (both supershear and subshear) obeying different governing laws against those from a single point source having the same features. It is shown that the point-source models tend to overestimate the ground motions and that they completely miss the Mach fronts emerging from the supershear transition process. When we compare the extended fault solutions against a multiple point-source model the agreement becomes more significant, although relevant discrepancies still persist. Our results confirm that, and more importantly quantify how, the point-source approximation is unable to adequately describe the radiation emitted during a real-world earthquake, even in the most idealized case of a planar fault with homogeneous properties embedded in a homogeneous, perfectly elastic medium.
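
    The high-frequency falloff comparison can be illustrated by taking amplitude spectra of two simple slip-rate functions and fitting the log-log slope in a high-frequency band; the functions below (a Gaussian pulse and a boxcar) and all parameters are stand-ins, not the exact source time functions compared in the paper.

```python
import numpy as np

dt, n = 1e-3, 4096
t = np.arange(n) * dt
freq = np.fft.rfftfreq(n, dt)

# Stand-in slip-rate functions: a smooth Gaussian pulse and a discontinuous boxcar.
gaussian = np.exp(-0.5 * ((t - 0.5) / 0.05) ** 2)
boxcar = ((t > 0.4) & (t < 0.6)).astype(float)

def high_freq_slope(stf, fmin=5.0, fmax=50.0):
    """Fit the slope of log|spectrum| versus log(frequency) in a high-frequency band."""
    spec = np.abs(np.fft.rfft(stf))
    band = (freq >= fmin) & (freq <= fmax) & (spec > 1e-12)
    slope, _ = np.polyfit(np.log(freq[band]), np.log(spec[band]), 1)
    return slope

# The smooth pulse falls off much faster than any power law; the boxcar decays ~ 1/f.
print("Gaussian pulse falloff exponent:", round(high_freq_slope(gaussian), 2))
print("boxcar pulse falloff exponent  :", round(high_freq_slope(boxcar), 2))
```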

  13. Modeling the contribution of point sources and non-point sources to Thachin River water pollution.

    Schaffner, Monika; Bader, Hans-Peter; Scheidegger, Ruth

    2009-08-15

    Major rivers in developing and emerging countries suffer increasingly from severe degradation of water quality. The current study uses a mathematical Material Flow Analysis (MMFA) as a complementary approach to address the degradation of river water quality due to nutrient pollution in the Thachin River Basin in Central Thailand. This paper gives an overview of the origins and flow paths of the various point- and non-point pollution sources in the Thachin River Basin (in terms of nitrogen and phosphorus) and quantifies their relative importance within the system. The key parameters influencing the main nutrient flows are determined and possible mitigation measures discussed. The results show that aquaculture (as a point source) and rice farming (as a non-point source) are the key nutrient sources in the Thachin River Basin. Other point sources such as pig farms, households and industries, which were previously cited as the most relevant pollution sources in terms of organic pollution, play less significant roles in comparison. This order of importance shifts when considering the model results for the provincial level. Crosschecks with secondary data and field studies confirm the plausibility of our simulations. Specific nutrient loads for the pollution sources are derived; these can be used for a first broad quantification of nutrient pollution in comparable river basins. Based on an identification of the sensitive model parameters, possible mitigation scenarios are determined and their potential to reduce the nutrient load evaluated. A comparison of simulated nutrient loads with measured nutrient concentrations shows that nutrient retention in the river system may be significant. Sedimentation in the slow-flowing surface water network as well as nitrogen emission to the air from the warm, oxygen-deficient waters are certainly partly responsible, but also wetlands along the river banks could play an important role as nutrient sinks.

  14. Land Streamer Surveying Using Multiple Sources

    Mahmoud, Sherif

    2014-12-11

    Various examples are provided for land streamer seismic surveying using multiple sources. In one example, among others, a method includes disposing a land streamer in-line with first and second shot sources. The first shot source is at a first source location adjacent to a proximal end of the land streamer and the second shot source is at a second source location separated by a fixed length corresponding to a length of the land streamer. Shot gathers can be obtained when the shot sources are fired. In another example, a system includes a land streamer including a plurality of receivers, a first shot source located adjacent to the proximal end of the land streamer, and a second shot source located in-line with the land streamer and the first shot source. The second shot source is separated from the first shot source by a fixed overall length corresponding to the land streamer.

  15. Multiple point statistical simulation using uncertain (soft) conditional data

    Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou

    2018-05-01

    Geostatistical simulation methods have been used to quantify spatial variability of reservoir models since the 80s. In the last two decades, state-of-the-art simulation methods have changed from being based on covariance-based 2-point statistics to multiple-point statistics (MPS), which allow simulation of more realistic Earth structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This poses the problem of integration of these different sources of information, such that decisions related to reservoir models can be taken on as informed a basis as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential simulation based MPS methods conditional to uncertain (soft) data, as a computationally efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not account properly for uncertain conditional information, due to a combination of using only co-located information and a random simulation path. Then, we suggest two approaches that better account for the available uncertain information. The first makes use of a preferential simulation path, where more informed model parameters are visited preferentially to less informed ones. The second approach involves using non-co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly to uncertain (soft) data, and hence provide a computationally attractive approach for integration of information about a reservoir model.
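
    The preferential-path idea can be sketched independently of any particular MPS engine: rank the grid nodes by how informative the soft data are (for example by the entropy of the local facies probabilities) and visit the most informed nodes first, breaking ties randomly; the two-facies probabilities below are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
n_nodes = 1000

# Soft data: probability of facies 1 at every grid node (0.5 = completely uninformative).
p_facies1 = np.clip(rng.normal(0.5, 0.25, n_nodes), 0.01, 0.99)

# Entropy of the two-facies distribution: low entropy means a well-informed node.
entropy = -(p_facies1 * np.log2(p_facies1) + (1 - p_facies1) * np.log2(1 - p_facies1))

# Preferential simulation path: most informed (lowest entropy) nodes first, with a
# small random jitter so equally informed nodes are visited in random order.
path = np.argsort(entropy + 1e-6 * rng.random(n_nodes))

print("first 10 nodes on the path:", path[:10])
print("their facies-1 probabilities:", np.round(p_facies1[path[:10]], 2))
```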

  16. Tripled Fixed Point in Ordered Multiplicative Metric Spaces

    Laishram Shanjit

    2017-06-01

    In this paper, we present some tripled fixed point theorems in partially ordered multiplicative metric spaces depending on another function. Our results generalise the results of [6] and [5].

  17. Multi-lane detection based on multiple vanishing points detection

    Li, Chuanxiang; Nie, Yiming; Dai, Bin; Wu, Tao

    2015-03-01

    Lane detection plays a significant role in Advanced Driver Assistance Systems (ADAS) for intelligent vehicles. In this paper we present a multi-lane detection method based on the detection of multiple vanishing points. A new multi-lane model assumes that a single lane, which has two approximately parallel boundaries, may not be parallel to other lanes on the road plane. Non-parallel lanes are associated with different vanishing points. A biologically plausible model is used to detect multiple vanishing points and fit the lane model. Experimental results show that the proposed method can detect both parallel and non-parallel lanes.
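
    Estimating a single vanishing point from a set of detected lane-boundary segments is a small least-squares problem, namely the image point minimizing the summed squared distance to the supporting lines; the sketch below shows only that step with invented segments, not the grouping of segments into lanes or the biologically inspired detector.

```python
import numpy as np

def vanishing_point(points, directions):
    """Least-squares intersection of image lines given by a point and a direction each."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(points, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)   # projector onto the line's normal space
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Invented segments from the two boundaries of one lane, converging near (320, 200).
points = np.array([[100.0, 480.0], [540.0, 480.0]])
directions = np.array([[320.0 - 100.0, 200.0 - 480.0],
                       [320.0 - 540.0, 200.0 - 480.0]])
print("estimated vanishing point:", vanishing_point(points, directions))
```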

  18. Thermionic detector with multiple layered ionization source

    Patterson, P. L.

    1985-01-01

    Method and apparatus for analyzing specific chemical substances in a gaseous environment comprise a thermionic source formed of multiple layers of ceramic material composition, an electrical current instrumentality for heating the thermionic source to operating temperatures in the range of 100°C to 1000°C, an instrumentality for exposing the surface of the thermionic source to contact with the specific chemical substances for the purpose of forming gas phase ionization of the substances by a process of electrical charge emission from the surface, a collector electrode disposed adjacent to the thermionic source, an instrumentality for biasing the thermionic source at an electrical potential which causes the gas phase ions to move toward the collector, and an instrumentality for measuring the ion current arriving at the collector. The thermionic source is constructed of a metallic heater element molded inside a sub-layer of hardened ceramic cement material impregnated with a metallic compound additive which is non-corrosive to the heater element during operation. The sub-layer is further covered by a surface layer formed of hardened ceramic cement material impregnated with an alkali metal compound in a manner that eliminates corrosive contact of the alkali compounds with the heater element. The sub-layer further protects the heater element from contact with gas environments which may be corrosive. The specific ionization of different chemical substances is varied over a wide range by changing the composition and temperature of the thermionic source, and by changing the composition of the gas environment.

  19. Binaural Processing of Multiple Sound Sources

    2016-08-18

    Final performance report AFRL-AFOSR-VA-TR-2016-0298, "Binaural Processing of Multiple Sound Sources", William Yost, Arizona State University, Tempe, AZ (dates covered: 15 Jul 2012 to 14 Jul 2016). Subject terms: binaural hearing, sound localization, interaural signal.

  20. Atmospheric mercury dispersion modelling from two nearest hypothetical point sources

    Al Razi, Khandakar Md Habib; Hiroshi, Moritomi; Shinji, Kambara [Environmental and Renewable Energy System (ERES), Graduate School of Engineering, Gifu University, Yanagido, Gifu City, 501-1193 (Japan)

    2012-07-01

    The Japanese coastal areas are still environmentally friendly, though there are multiple air emission sources originating as a consequence of several developmental activities such as automobile industries, operation of thermal power plants, and mobile-source pollution. Mercury is known to be a potential air pollutant in the region apart from SOx, NOx, CO and ozone. Mercury contamination in water bodies and other ecosystems due to deposition of atmospheric mercury is considered a serious environmental concern. Identification of sources contributing to the high atmospheric mercury levels will be useful for formulating pollution control and mitigation strategies in the region. In Japan, mercury and its compounds were categorized as hazardous air pollutants in 1996 and are on the list of 'Substances Requiring Priority Action' published by the Central Environmental Council of Japan. The Air Quality Management Division of the Environmental Bureau, Ministry of the Environment, Japan, selected the current annual mean environmental air quality standard for mercury and its compounds of 0.04 μg/m3. Long-term exposure to mercury and its compounds can have a carcinogenic effect, inducing, e.g., Minamata disease. This study evaluates the impact of mercury emissions on air quality in the coastal area of Japan. The average yearly emission of mercury from an elevated point source in this area, together with the background concentration and one-year meteorological data, was used to predict the ground-level concentration of mercury. To estimate the concentration of mercury and its compounds in the air of the local area, two different simulation models have been used. The first is the National Institute of Advanced Science and Technology Atmospheric Dispersion Model for Exposure and Risk Assessment (AIST-ADMER) that estimates regional atmospheric concentration and distribution. The second is the Hybrid Single-Particle Lagrangian Integrated Trajectory model (HYSPLIT) that estimates the atmospheric
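
    For orientation, a textbook Gaussian-plume estimate of the ground-level concentration downwind of a single elevated point source is sketched below; it is far simpler than AIST-ADMER or HYSPLIT, and every parameter value, including the crude linear dispersion coefficients, is an assumption.

```python
import numpy as np

def ground_level_concentration(q, u, h, x, y, a=0.08, b=0.06):
    """Textbook Gaussian plume with total ground reflection.

    q : emission rate (g/s), u : wind speed (m/s), h : effective stack height (m)
    x : downwind distance (m), y : crosswind distance (m)
    a, b : crude linear dispersion coefficients, sigma = coefficient * x (assumed)
    """
    sigma_y, sigma_z = a * x, b * x
    return (q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y ** 2 / (2.0 * sigma_y ** 2))
            * np.exp(-h ** 2 / (2.0 * sigma_z ** 2)))

# Illustrative case: 1 mg/s of mercury from a 50 m stack in a 3 m/s wind, on the centreline.
x = np.array([500.0, 1000.0, 2000.0, 5000.0])
c = ground_level_concentration(q=1e-3, u=3.0, h=50.0, x=x, y=0.0)
print("ground-level concentration (ug/m^3):", np.round(c * 1e6, 4))
```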

  1. Assessment of the impact of point source pollution from the ...

    Assessment of the impact of point source pollution from the Keiskammahoek Sewage ... Water SA. Also, significant pollution of the receiving Keiskamma River was indicated for ...

  2. Effect of point source and heterogeneity on the propagation of ...

    propagation of Love waves due to a point source in a homogeneous layer overlying a ... The dispersion equation of SH waves will be obtained by equating to zero the ...

  3. Point source reconstruction principle of linear inverse problems

    Terazono, Yasushi; Matani, Ayumu; Fujimaki, Norio; Murata, Tsutomu

    2010-01-01

    Exact point source reconstruction for underdetermined linear inverse problems with a block-wise structure was studied. In a block-wise problem, elements of a source vector are partitioned into blocks. Accordingly, a leadfield matrix, which represents the forward observation process, is also partitioned into blocks. A point source is a source having only one nonzero block. An example of such a problem is current distribution estimation in electroencephalography and magnetoencephalography, where a source vector represents a vector field and a point source represents a single current dipole. In this study, the block-wise norm, a block-wise extension of the l_p-norm, was defined as the family of cost functions of the inverse method. The main result is that a set of three conditions was found to be necessary and sufficient for block-wise norm minimization to ensure exact point source reconstruction for any leadfield matrix that admits such reconstruction. The block-wise norm that satisfies the conditions is the sum of the cost of all the observations of source blocks, or in other words, the block-wisely extended leadfield-weighted l_1-norm. Additional results are that minimization of such a norm always provides block-wisely sparse solutions and that its solutions form cones in source space.

  4. 2011 Radioactive Materials Usage Survey for Unmonitored Point Sources

    Sturgeon, Richard W. [Los Alamos National Laboratory

    2012-06-27

    This report provides the results of the 2011 Radioactive Materials Usage Survey for Unmonitored Point Sources (RMUS), which was updated by the Environmental Protection (ENV) Division's Environmental Stewardship (ES) at Los Alamos National Laboratory (LANL). ES classifies LANL emission sources into one of four Tiers, based on the potential effective dose equivalent (PEDE) calculated for each point source. Detailed descriptions of these tiers are provided in Section 3. The usage survey is conducted annually; in odd-numbered years the survey addresses all monitored and unmonitored point sources and in even-numbered years it addresses all Tier III and various selected other sources. This graded approach was designed to ensure that the appropriate emphasis is placed on point sources that have higher potential emissions to the environment. For calendar year (CY) 2011, ES has divided the usage survey into two distinct reports, one covering the monitored point sources (to be completed later this year) and this report covering all unmonitored point sources. This usage survey includes the following release points: (1) all unmonitored sources identified in the 2010 usage survey, (2) any new release points identified through the new project review (NPR) process, and (3) other release points as designated by the Rad-NESHAP Team Leader. Data for all unmonitored point sources at LANL is stored in the survey files at ES. LANL uses this survey data to help demonstrate compliance with Clean Air Act radioactive air emissions regulations (40 CFR 61, Subpart H). The remainder of this introduction provides a brief description of the information contained in each section. Section 2 of this report describes the methods that were employed for gathering usage survey data and for calculating usage, emissions, and dose for these point sources. It also references the appropriate ES procedures for further information. Section 3 describes the RMUS and explains how the survey results are

  5. Concept for Risk-based Prioritisation of Point Sources

    Overheu, N.D.; Troldborg, Mads; Tuxen, N.

    2010-01-01

    estimates on a local scale from all the sources, and 3D catchment-scale fate and transport modelling. It handles point sources at various knowledge levels and accounts for uncertainties. The tool estimates the impacts on the water supply in the catchment and provides an overall prioritisation of the sites...

  6. Induced Temporal Signatures for Point-Source Detection

    Stephens, Daniel L.; Runkle, Robert C.; Carlson, Deborah K.; Peurrung, Anthony J.; Seifert, Allen; Wyatt, Cory R.

    2005-01-01

    Detection of radioactive point-sized sources is inherently divided into two regimes encompassing stationary and moving detectors. The two cases differ in their treatment of background radiation and its influence on detection sensitivity. In the stationary detector case the statistical fluctuation of the background determines the minimum detectable quantity. In the moving detector case the detector may be subjected to widely and irregularly varying background radiation, as a result of geographical and environmental variation. This significant systematic variation, in conjunction with the statistical variation of the background, requires a conservative threshold to be selected to yield the same false-positive rate as the stationary detection case. This results in lost detection sensitivity for real sources. This work focuses on a simple and practical modification of the detector geometry that increases point-source recognition via a distinctive temporal signature. A key part of this effort is the integrated development of both detector geometries that induce a highly distinctive signature for point sources and statistical algorithms able to optimize detection of this signature amidst varying background. The identification of temporal signatures for point sources has been demonstrated and compared with the canonical method, showing good results. This work demonstrates that temporal signatures are efficient at increasing point-source discrimination in a moving detector system.

  7. X pinch a point x-ray source

    Garg, A.B.; Rout, R.K.; Shyam, A.; Srinivasan, M.

    1993-01-01

    X-ray emission from an X pinch, a point x-ray source driven by a 30 kV, 7.2 μF capacitor bank, has been studied using a pin-hole camera. Wires of different materials such as W, Mo, Cu, S.S. (stainless steel) and Ti were used. The molybdenum pinch gives the most intense x-rays and stainless steel the least intense x-rays for the same bank energy (∼3.2 kJ). A point x-ray source of size ≤ 0.5 mm was observed using the pin-hole camera; the measured source size is limited by the size of the pin-hole camera. The peak current in the load is approximately 150 kA. The point x-ray source could be useful in many fields such as microlithography and medicine, and for studying the basic physics of high-Z plasmas. (author). 4 refs., 3 figs

  8. Very Luminous X-ray Point Sources in Starburst Galaxies

    Colbert, E.; Heckman, T.; Ptak, A.; Weaver, K. A.; Strickland, D.

    Extranuclear X-ray point sources in external galaxies with luminosities above 10^39.0 erg/s are quite common in elliptical, disk and dwarf galaxies, with an average of ~0.5 sources per galaxy. These objects may be a new class of object, perhaps accreting intermediate-mass black holes, or beamed stellar-mass black hole binaries. Starburst galaxies tend to have a larger number of these intermediate-luminosity X-ray objects (IXOs), as well as a large number of lower-luminosity (10^37 - 10^39 erg/s) point sources. These point sources dominate the total hard X-ray emission in starburst galaxies. We present a review of both types of objects and discuss possible schemes for their formation.

  9. Mirrored pyramidal wells for simultaneous multiple vantage point microscopy.

    Seale, K T; Reiserer, R S; Markov, D A; Ges, I A; Wright, C; Janetopoulos, C; Wikswo, J P

    2008-10-01

    We report a novel method for obtaining simultaneous images from multiple vantage points of a microscopic specimen using size-matched microscopic mirrors created from anisotropically etched silicon. The resulting pyramidal wells enable bright-field and fluorescent side-view images, and when combined with z-sectioning, provide additional information for 3D reconstructions of the specimen. We have demonstrated the 3D localization and tracking over time of the centrosome of a live Dictyostelium discoideum. The simultaneous acquisition of images from multiple perspectives also provides a five-fold increase in the theoretical collection efficiency of emitted photons, a property which may be useful for low-light imaging modalities such as bioluminescence, or low abundance surface-marker labelling.

  10. Multiple contacts with diversion at the point of arrest.

    Riordan, Sharon; Wix, Stuart; Haque, M Sayeed; Humphreys, Martin

    2003-04-01

    A diversion at the point of arrest (DAPA) scheme was set up in five police stations in South Birmingham in 1992. In a study of all referrals made over a four-year period a sub group of multiple contact individuals was identified. During that time four hundred and ninety-two contacts were recorded in total, of which 130 were made by 58 individuals. The latter group was generally no different from the single contact group but did have a tendency to be younger. This research highlights the need for a re-evaluation of service provision and associated education of police officers and relevant mental health care professionals.

  11. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    2017-01-01

    An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  12. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    2017-07-01

    An extension of the point kinetics model is developed to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. The spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  13. Trans-Z-source Neutral Point Clamped inverter

    Mo, W.; Loh, P. C.; Li, D.

    2012-01-01

    Transformer-based Z-source (trans-Z-source) inverters were recently proposed by extending the traditional Z-source inverter, offering higher buck-boost capability while reducing the passive component count. Multi-level Z-source inverters are single-stage topological solutions for buck-boost energy conversion with all the favourable advantages of multi-level switching retained. This paper presents a three-level trans-Z-source Neutral Point Clamped (NPC) inverter topology, which combines the advantages of the trans-Z-source network and the three-level NPC inverter configuration. With a proper modulation scheme, the three-level trans-Z-source inverter can operate with a minimum of six device commutations per half carrier cycle (the same as the traditional buck NPC inverter), while still producing the designed volt-second average and inductive voltage boosting at the ac output terminals. The designed...

  14. Search for high energy cosmic neutrino point sources with ANTARES

    Halladjian, G.

    2010-01-01

    The aim of this thesis is the search for high energy cosmic neutrinos emitted by point sources with the ANTARES neutrino telescope. The detection of high energy cosmic neutrinos can bring answers to important questions such as the origin of cosmic rays and the γ-rays emission processes. In the first part of the thesis, the neutrino flux emitted by galactic and extragalactic sources and the number of events which can be detected by ANTARES are estimated. This study uses the measured γ-ray spectra of known sources taking into account the γ-ray absorption by the extragalactic background light. In the second part of the thesis, the absolute pointing of the ANTARES telescope is evaluated. Being located at a depth of 2475 m in sea water, the orientation of the detector is determined by an acoustic positioning system which relies on low and high frequency acoustic waves measurements between the sea surface and the bottom. The third part of the thesis is a search for neutrino point sources in the ANTARES data. The search algorithm is based on a likelihood ratio maximization method. It is used in two search strategies; 'the candidate sources list strategy' and 'the all sky search strategy'. Analysing 2007+2008 data, no discovery is made and the world's best upper limits on neutrino fluxes from various sources in the Southern sky are established. (author)

  15. Clinical Validation of Point-Source Corneal Topography in Keratoplasty

    Vrijling, A C L; Braaf, B.; Snellenburg, J.J.; de Lange, F.; Zaal, M.J.W.; van der Heijde, G.L.; Sicam, V.A.D.P.

    2011-01-01

    Purpose. To validate the clinical performance of point-source corneal topography (PCT) in postpenetrating keratoplasty (PKP) eyes and to compare it with conventional Placido-based topography. Methods. Corneal elevation maps of the anterior corneal surface were obtained from 20 post-PKP corneas using

  16. Identifying populations at risk from environmental contamination from point sources

    Williams, F; Ogston, S

    2002-01-01

    Objectives: To compare methods for defining the population at risk from a point source of air pollution. A major challenge for environmental epidemiology lies in correctly identifying populations at risk from exposure to environmental pollutants. The complexity of today's environment makes it essential that the methods chosen are accurate and sensitive.

  17. Multiplicative point process as a model of trading activity

    Gontis, V.; Kaulakys, B.

    2004-11-01

    Signals consisting of a sequence of pulses show that the inherent origin of 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1 and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events is analyzed analytically and numerically as well. The specific interest of our analysis is related to the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces spectral properties of the real markets and explains the mechanism of power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are enclosed in the statistics of the time interval between trades. A multiplicative point process serves as a consistent model generating this statistics.
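
    As a rough illustration of the multiplicative interevent-time idea summarized above, the sketch below simulates a simple multiplicative recursion for the interevent times, bins the resulting event stream, and estimates the spectral exponent β of the count signal. The recursion, parameter values and bin width are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

def simulate_intervals(n_events=200_000, mu=0.5, gamma=1e-4, sigma=0.02,
                       tau_min=1e-3, tau_max=1.0, seed=0):
    """Generate interevent times from a simple multiplicative recursion.

    tau_{k+1} = tau_k + gamma*tau_k**(2*mu-1) + sigma*tau_k**mu*eps_k,
    restricted to [tau_min, tau_max].  This is an illustrative variant of a
    multiplicative point-process model, not the paper's exact equations.
    """
    rng = np.random.default_rng(seed)
    tau = np.empty(n_events)
    t = 0.1
    for k in range(n_events):
        t = t + gamma * t**(2*mu - 1) + sigma * t**mu * rng.standard_normal()
        t = min(max(t, tau_min), tau_max)   # keep tau inside the allowed range
        tau[k] = t
    return tau

def counting_spectrum(tau, dt=0.5):
    """Power spectrum of the event-count signal N(t) binned with width dt."""
    events = np.cumsum(tau)
    counts, _ = np.histogram(events, bins=np.arange(0.0, events[-1], dt))
    counts = counts - counts.mean()
    psd = np.abs(np.fft.rfft(counts))**2 / len(counts)
    freqs = np.fft.rfftfreq(len(counts), d=dt)
    return freqs[1:], psd[1:]

tau = simulate_intervals()
f, S = counting_spectrum(tau)
# fit the low-frequency slope to estimate beta in S(f) ~ 1/f**beta
lo = (f > 1e-3) & (f < 1e-1)
beta = -np.polyfit(np.log(f[lo]), np.log(S[lo]), 1)[0]
print(f"estimated spectral exponent beta ~ {beta:.2f}")
```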

  18. Multiplicity: discussion points from the Statisticians in the Pharmaceutical Industry multiplicity expert group.

    Phillips, Alan; Fletcher, Chrissie; Atkinson, Gary; Channon, Eddie; Douiri, Abdel; Jaki, Thomas; Maca, Jeff; Morgan, David; Roger, James Henry; Terrill, Paul

    2013-01-01

    In May 2012, the Committee of Health and Medicinal Products issued a concept paper on the need to review the points to consider document on multiplicity issues in clinical trials. In preparation for the release of the updated guidance document, Statisticians in the Pharmaceutical Industry held a one-day expert group meeting in January 2013. Topics debated included multiplicity and the drug development process, the usefulness and limitations of newly developed strategies to deal with multiplicity, multiplicity issues arising from interim decisions and multiregional development, and the need for simultaneous confidence intervals (CIs) corresponding to multiple test procedures. A clear message from the meeting was that multiplicity adjustments need to be considered when the intention is to make a formal statement about efficacy or safety based on hypothesis tests. Statisticians have a key role when designing studies to assess what adjustment really means in the context of the research being conducted. More thought during the planning phase needs to be given to multiplicity adjustments for secondary endpoints given these are increasing in importance in differentiating products in the market place. No consensus was reached on the role of simultaneous CIs in the context of superiority trials. It was argued that unadjusted intervals should be employed as the primary purpose of the intervals is estimation, while the purpose of hypothesis testing is to formally establish an effect. The opposing view was that CIs should correspond to the test decision whenever possible. Copyright © 2013 John Wiley & Sons, Ltd.
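
    As a concrete example of the kind of multiplicity adjustment debated at the meeting, the sketch below implements the standard Holm step-down procedure for a small family of endpoint p-values; it is offered only as a generic illustration, not as a recommendation from the expert group.

```python
def holm_adjust(p_values):
    """Holm step-down adjusted p-values for a family of hypothesis tests.

    Controls the family-wise error rate without assuming independence
    between the tests.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        adj = min(1.0, (m - rank) * p_values[idx])
        running_max = max(running_max, adj)   # enforce monotone adjusted p-values
        adjusted[idx] = running_max
    return adjusted

# e.g. raw p-values for a primary and two secondary endpoints
print(holm_adjust([0.011, 0.030, 0.047]))   # -> [0.033, 0.060, 0.060]
```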

  19. 46 CFR 111.10-5 - Multiple energy sources.

    2010-10-01

    46 CFR 111.10-5 (Shipping; Electric Systems-General Requirements; Power Supply): Multiple energy sources. Failure of any single generating set energy source, such as a boiler, diesel, gas turbine, or steam turbine, must not cause all generating sets...

  20. Research on point source simulating the γ-ray detection efficiencies of stander source

    Tian Zining; Jia Mingyan; Shen Maoquan; Yang Xiaoyan; Cheng Zhiwei

    2010-01-01

    For a φ75 mm x 25 mm sample, the full-energy peak efficiencies at different positions along the sample radius were obtained using point sources, and the parameters of the function relating the point-source full-energy peak efficiency to radius were fitted. The detection efficiencies for the 59.54 keV, 661.66 keV, 1173.2 keV and 1332.5 keV γ-rays at different sample heights were then obtained from the point-source full-energy peak efficiencies and their heights, and the parameters of the function relating the surface-source full-energy peak efficiency to sample height were fitted. The detection efficiency of the φ75 mm x 25 mm calibration source can then be obtained by integration; the detection efficiencies simulated from point sources agree with the results for the standard source to within 10%. Therefore, the point-source simulation method can substitute for calibration with a standard source, which is useful when no standard source is available. (authors)
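
    The integration step described above can be sketched numerically: given a fitted point-source efficiency function ε(r, h), average it over the φ75 mm x 25 mm cylindrical sample volume with the appropriate radial weighting. The functional form and coefficients of ε below are placeholder assumptions, not the fitted parameters from the record.

```python
import numpy as np

def point_efficiency(r_mm, h_mm, a=0.05, b=0.004, c=0.01):
    """Placeholder fitted point-source full-energy peak efficiency eps(r, h).

    In practice the coefficients come from point-source measurements at
    several radii and heights, as in the record above; the exponential
    form here is only an assumption for illustration.
    """
    return a * np.exp(-b * r_mm - c * h_mm)

def volume_efficiency(radius_mm=37.5, height_mm=25.0, n=200):
    """Volume-average eps over a cylinder, weighting each radius by 2*pi*r."""
    r = np.linspace(0.0, radius_mm, n)
    h = np.linspace(0.0, height_mm, n)
    R, H = np.meshgrid(r, h)
    eps = point_efficiency(R, H)
    weights = R                      # area element ~ 2*pi*r dr dh (2*pi cancels)
    return np.sum(eps * weights) / np.sum(weights)

print(f"simulated volume-source efficiency: {volume_efficiency():.4f}")
```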

  1. Reduction of bias in neutron multiplicity assay using a weighted point model

    Geist, W. H. (William H.); Krick, M. S. (Merlyn S.); Mayo, D. R. (Douglas R.)

    2004-01-01

    Accurate assay of most common plutonium samples was the development goal for the nondestructive assay technique of neutron multiplicity counting. Over the past 20 years the technique has been proven for relatively pure oxides and small metal items. Unfortunately, the technique results in large biases when assaying large metal items. Limiting assumptions, such as uniform multiplication, in the point model used to derive the multiplicity equations cause these biases for large dense items. A weighted point model has been developed to overcome some of the limitations in the standard point model. Weighting factors are determined from Monte Carlo calculations using the MCNPX code. Monte Carlo calculations give the dependence of the weighting factors on sample mass and geometry, and simulated assays using Monte Carlo give the theoretical accuracy of the weighted-point-model assay. Measured multiplicity data evaluated with both the standard and weighted point models are compared to reference values to give the experimental accuracy of the assay. Initial results show significant promise for the weighted point model in reducing or eliminating biases in the neutron multiplicity assay of metal items. The negative biases observed in the assay of plutonium metal samples are caused by variations in the neutron multiplication for neutrons originating in various locations in the sample. The bias depends on the mass and shape of the sample and on the amount and energy distribution of the (α,n) neutrons in the sample. When the standard point model is used, this variable-multiplication bias overestimates the multiplication and alpha values of the sample, and underestimates the plutonium mass. The weighted point model potentially can provide assay accuracy of ~2% (1 σ) for cylindrical plutonium metal samples < 4 kg with α < 1 without knowing the exact shape of the samples, provided that the (α,n) source is uniformly distributed throughout the

  2. Point source identification in nonlinear advection–diffusion–reaction systems

    Mamonov, A V; Tsai, Y-H R

    2013-01-01

    We consider a problem of identification of point sources in time-dependent advection–diffusion systems with a nonlinear reaction term. The linear counterpart of the problem in question can be reduced to solving a system of nonlinear algebraic equations via the use of adjoint equations. We extend this approach by constructing an algorithm that solves the problem iteratively to account for the nonlinearity of the reaction term. We study the question of improving the quality of source identification by adding more measurements adaptively using the solution obtained previously with a smaller number of measurements. (paper)

  3. Point sources and multipoles in inverse scattering theory

    Potthast, Roland

    2001-01-01

    Over the last twenty years, the growing availability of computing power has had an enormous impact on the classical fields of direct and inverse scattering. The study of inverse scattering, in particular, has developed rapidly with the ability to perform computational simulations of scattering processes and led to remarkable advances in a range of applications, from medical imaging and radar to remote sensing and seismic exploration. Point Sources and Multipoles in Inverse Scattering Theory provides a survey of recent developments in inverse acoustic and electromagnetic scattering theory. Focusing on methods developed over the last six years by Colton, Kirsch, and the author, this treatment uses point sources combined with several far-reaching techniques to obtain qualitative reconstruction methods. The author addresses questions of uniqueness, stability, and reconstructions for both two-and three-dimensional problems.With interest in extracting information about an object through scattered waves at an all-ti...

  4. Integrating multiple data sources for malware classification

    Anderson, Blake Harrell; Storlie, Curtis B; Lane, Terran

    2015-04-28

    Disclosed herein are representative embodiments of tools and techniques for classifying programs. According to one exemplary technique, at least one graph representation of at least one dynamic data source of at least one program is generated. Also, at least one graph representation of at least one static data source of the at least one program is generated. Additionally, at least using the at least one graph representation of the at least one dynamic data source and the at least one graph representation of the at least one static data source, the at least one program is classified.

  5. Reduction Assessment of Agricultural Non-Point Source Pollutant Loading

    Fu, YiCheng; Zang, Wenbin; Zhang, Jian; Wang, Hongtao; Zhang, Chunling; Shi, Wanli

    2018-01-01

    NPS (non-point source) pollution has become a key factor affecting the watershed environment. With the development of technology, the application of models to control NPS pollution has become common practice for resource management and pollutant reduction control at the watershed scale in China. The SWAT (Soil and Water Assessment Tool) model is a semi-conceptual model, which was put forward to estimate pollutant production and its influence on water quantity and quality under different...

  6. Diffusion from a point source in an urban atmosphere

    Essa, K.S.M.; El-Otaify, M.S.

    2005-01-01

    In the present paper, a model for the diffusion of material from a point source in an urban atmosphere is presented. The plume is assumed to have a well-defined edge at which the concentration falls to zero. The vertical wind shear is estimated using the logarithmic law, employing most of the available techniques for stability categorization. The concentrations estimated from the model compare favorably with the field observations of other investigators

  7. Is a wind turbine a point source? (L).

    Makarewicz, Rufin

    2011-02-01

    Measurements show that practically all wind turbine noise is produced by the turbine blades, which are sometimes a few tens of meters long, even though a point source located at the hub height is the commonly used model. The plane of the rotating blades is the critical receiver location because the distances to the blades are shortest there. It is shown that such a location requires a certain condition to be met. The model is also valid far away from the wind turbine.

  8. Land Streamer Surveying Using Multiple Sources

    Mahmoud, Sherif; Schuster, Gerard T.

    2014-01-01

    are fired. In another example, a system includes a land streamer including a plurality of receivers, a first shot source located adjacent to the proximal end of the land streamer, and a second shot source located in-line with the land streamer and the first

  9. PSFGAN: a generative adversarial network system for separating quasar point sources and host galaxy light

    Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.

    2018-06-01

    The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached using parametric fitting routines using separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey r-band images with artificial AGN point sources added that are then removed using the GAN and with parametric methods using GALFIT. When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49 per cent). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ± 50 per cent if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easy to use than parametric methods as it requires no input parameters.
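
    A minimal sketch of the test setup described above (adding an artificial AGN point source to a host-galaxy image by scaling a unit-normalized PSF stamp) is shown below; the GAN-based subtraction itself is not reproduced, and the toy galaxy and PSF arrays are assumptions for illustration.

```python
import numpy as np

def add_point_source(galaxy, psf, contrast=2.0):
    """Add an artificial AGN point source to a host-galaxy image.

    The PSF stamp is normalized to unit flux, scaled so the point source is
    `contrast` times brighter than the total host flux, and added at the
    image centre.  This mimics the test setup described above; the GAN
    subtraction itself is not reproduced here.
    """
    psf = psf / psf.sum()
    ps_flux = contrast * galaxy.sum()
    out = galaxy.copy()
    cy, cx = np.array(out.shape) // 2
    ph, pw = psf.shape
    y0, x0 = cy - ph // 2, cx - pw // 2
    out[y0:y0 + ph, x0:x0 + pw] += ps_flux * psf
    return out

# toy example: smooth "galaxy" plus a narrow Gaussian PSF stamp
yy, xx = np.mgrid[-64:64, -64:64]
galaxy = np.exp(-(xx**2 + yy**2) / (2 * 15.0**2))
py, px = np.mgrid[-10:11, -10:11]
psf = np.exp(-(px**2 + py**2) / (2 * 1.5**2))
image = add_point_source(galaxy, psf, contrast=2.0)
print(image.sum() / galaxy.sum())   # ~3: host flux plus a 2x-brighter point source
```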

  10. Pulsewidth-modulated 2-source neutral-point-clamped inverter

    Blaabjerg, Frede; Loh, Poh Chang; Gao, Feng

    2007-01-01

    This paper presents the careful integration of a newly proposed Z-source topological concept to the basic neutral-point-clamped (NPC) inverter topology for designing a three-level inverter with both voltage-buck and voltage-boost capabilities. The designed Z-source NPC inverter uses two unique X-shaped inductance-capacitance (LC) impedance networks that are connected between two isolated dc input power sources and its inverter circuitry for boosting its AC output voltage. Through the design of an appropriate pulsewidth-modulation (PWM) algorithm, the two impedance networks can be short-circuited sequentially (without shooting through the inverter full DC link) for implementing the "nearest-three-vector" modulation principle with minimized harmonic distortion and device commutations per half carrier cycle while performing voltage boosting. With only a slight modification to the inverter PWM...

  11. Reconstruction of multiple line source attenuation maps

    Celler, A.; Sitek, A.; Harrop, R.

    1996-01-01

    A simple configuration for a transmission source for single photon emission computed tomography (SPECT) was proposed, which utilizes a series of collimated line sources parallel to the axis of rotation of the camera. The detector is equipped with a standard parallel-hole collimator. We have demonstrated that this type of source configuration can generate sufficient data for the reconstruction of the attenuation map when using 8-10 line sources spaced by 3.5-4.5 cm for a 30 x 40 cm detector at 65 cm distance from the sources. Transmission data for a nonuniform thorax phantom were simulated, then binned and reconstructed using filtered backprojection (FBP) and iterative methods. The optimum maps are obtained with data binned into 2-3 bins and FBP reconstruction. The activity in the source was investigated for uniform and exponential activity distributions, as well as the effect of gaps and overlaps of the neighboring fan beams. A prototype of the line source has been built and experimental verification of the technique has started
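
    As a generic illustration of the filtered backprojection step mentioned above, the sketch below reconstructs a synthetic nonuniform attenuation phantom from simulated projections using scikit-image's radon/iradon; it does not reproduce the line-source geometry or the 2-3 bin scheme of the record.

```python
import numpy as np
from skimage.transform import radon, iradon

# synthetic nonuniform "thorax-like" attenuation phantom (arbitrary units)
size = 128
yy, xx = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
phantom = np.where(xx**2 + (yy / 0.7)**2 < 0.8, 0.15, 0.0)     # body outline
phantom[np.hypot(xx + 0.4, yy) < 0.2] = 0.04                   # lung-like region
phantom[np.hypot(xx - 0.4, yy) < 0.2] = 0.04

theta = np.linspace(0.0, 180.0, 120, endpoint=False)
sinogram = radon(phantom, theta=theta)             # simulated transmission data
recon = iradon(sinogram, theta=theta, filter_name="ramp")

rmse = np.sqrt(np.mean((recon - phantom) ** 2))
print(f"FBP reconstruction RMSE: {rmse:.4f}")
```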

  12. Downscaling remotely sensed imagery using area-to-point cokriging and multiple-point geostatistical simulation

    Tang, Yunwei; Atkinson, Peter M.; Zhang, Jingxiong

    2015-03-01

    A cross-scale data integration method was developed and tested based on the theory of geostatistics and multiple-point geostatistics (MPG). The goal was to downscale remotely sensed images while retaining spatial structure by integrating images at different spatial resolutions. During the process of downscaling, a rich spatial correlation model in the form of a training image was incorporated to facilitate reproduction of similar local patterns in the simulated images. Area-to-point cokriging (ATPCK) was used as locally varying mean (LVM) (i.e., soft data) to deal with the change of support problem (COSP) for cross-scale integration, which MPG cannot achieve alone. Several pairs of spectral bands of remotely sensed images were tested for integration within different cross-scale case studies. The experiment shows that MPG can restore the spatial structure of the image at a fine spatial resolution given the training image and conditioning data. The super-resolution image can be predicted using the proposed method, which cannot be realised using most data integration methods. The results show that ATPCK-MPG approach can achieve greater accuracy than methods which do not account for the change of support issue.

  13. Scattering and absorption of particles emitted by a point source in a cluster of point scatterers

    Liljequist, D.

    2012-01-01

    A theory for the scattering and absorption of particles isotropically emitted by a point source in a cluster of point scatterers is described and related to the theory for the scattering of an incident particle beam. The quantum mechanical probability of escape from the cluster in different directions is calculated, as well as the spatial distribution of absorption events within the cluster. A source strength renormalization procedure is required. The average quantum scattering in clusters with randomly shifting scatterer positions is compared to trajectory simulation with the aim of studying the validity of the trajectory method. Differences between the results of the quantum and trajectory methods are found primarily for wavelengths larger than the average distance between nearest neighbour scatterers. The average quantum results include, for example, a local minimum in the number of absorption events at the location of the point source and interference patterns in the angle-dependent escape probability as well as in the distribution of absorption events. The relative error of the trajectory method is in general, though not universally, of similar magnitude to that obtained for beam scattering.

  14. Open-Source Automated Mapping Four-Point Probe

    Handy Chandra

    2017-01-01

    Scientists have begun using self-replicating rapid prototyper (RepRap) 3-D printers to manufacture open source digital designs of scientific equipment. This approach is refined here to develop a novel instrument capable of performing automated large-area four-point probe measurements. The designs for conversion of a RepRap 3-D printer to a 2-D open source four-point probe (OS4PP) measurement device are detailed for the mechanical and electrical systems. Free and open source software and firmware are developed to operate the tool. The OS4PP was validated against a wide range of discrete resistors and indium tin oxide (ITO) samples of different thicknesses both pre- and post-annealing. The OS4PP was then compared to two commercial proprietary systems. Results of resistors from 10 to 1 MΩ show errors of less than 1% for the OS4PP. The 3-D mapping of sheet resistance of ITO samples successfully demonstrated the automated capability to measure non-uniformities in large-area samples. The results indicate that all measured values are within the same order of magnitude when compared to two proprietary measurement systems. In conclusion, the OS4PP system, which costs less than 70% of manual proprietary systems, is comparable electrically while offering automated 100 micron positional accuracy for measuring sheet resistance over larger areas.

  15. Open-Source Automated Mapping Four-Point Probe.

    Chandra, Handy; Allen, Spencer W; Oberloier, Shane W; Bihari, Nupur; Gwamuri, Jephias; Pearce, Joshua M

    2017-01-26

    Scientists have begun using self-replicating rapid prototyper (RepRap) 3-D printers to manufacture open source digital designs of scientific equipment. This approach is refined here to develop a novel instrument capable of performing automated large-area four-point probe measurements. The designs for conversion of a RepRap 3-D printer to a 2-D open source four-point probe (OS4PP) measurement device are detailed for the mechanical and electrical systems. Free and open source software and firmware are developed to operate the tool. The OS4PP was validated against a wide range of discrete resistors and indium tin oxide (ITO) samples of different thicknesses both pre- and post-annealing. The OS4PP was then compared to two commercial proprietary systems. Results of resistors from 10 to 1 MΩ show errors of less than 1% for the OS4PP. The 3-D mapping of sheet resistance of ITO samples successfully demonstrated the automated capability to measure non-uniformities in large-area samples. The results indicate that all measured values are within the same order of magnitude when compared to two proprietary measurement systems. In conclusion, the OS4PP system, which costs less than 70% of manual proprietary systems, is comparable electrically while offering automated 100 micron positional accuracy for measuring sheet resistance over larger areas.
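
    For the sheet-resistance measurements reported in the two OS4PP records above, the conversion from a collinear four-point-probe reading to sheet resistance uses the standard geometric factor π/ln 2 ≈ 4.532, assuming the sample is laterally much larger than the probe spacing and much thinner than it; a minimal worked example follows.

```python
import math

GEOMETRIC_FACTOR = math.pi / math.log(2)   # ~4.532 for a thin, laterally large sample

def sheet_resistance(voltage_v, current_a):
    """Sheet resistance (ohm/sq) from a collinear four-point probe reading."""
    return GEOMETRIC_FACTOR * voltage_v / current_a

def resistivity(voltage_v, current_a, thickness_m):
    """Bulk resistivity (ohm*m) if the film thickness is known."""
    return sheet_resistance(voltage_v, current_a) * thickness_m

# e.g. a 1 mA source current and 25 mV measured across the inner probes
rs = sheet_resistance(25e-3, 1e-3)
print(f"sheet resistance ~ {rs:.1f} ohm/sq")          # ~113.3 ohm/sq
print(f"resistivity of a 150 nm film ~ {resistivity(25e-3, 1e-3, 150e-9):.2e} ohm*m")
```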

  16. Research on neutron source multiplication method in nuclear critical safety

    Zhu Qingfu; Shi Yongqian; Hu Dingsheng

    2005-01-01

    The paper concerns research on the neutron source multiplication method in nuclear critical safety. Based on the neutron diffusion equation with an external neutron source, the effective sub-critical multiplication factor k_s is deduced; k_s differs from the effective neutron multiplication factor k_eff in the case of a sub-critical system with an external neutron source. A verification experiment on the sub-critical system indicates that the parameter measured with the neutron source multiplication method is k_s, and that k_s is related to the external neutron source position in the sub-critical system and to the external neutron source spectrum. The relation between k_s and k_eff and their effect on nuclear critical safety is discussed. (author)
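
    A minimal worked example of the source multiplication idea is sketched below using the simple point-model relation C ∝ 1/(1 - k): the count rate ratio against a reference configuration of known k gives an estimate of k_s. This is the textbook approximation, not the diffusion-theory derivation of the record above, and the numbers are illustrative.

```python
def estimate_ks(count_rate, ref_count_rate, ref_k):
    """Estimate the subcritical multiplication factor k_s.

    Uses the approximate source-multiplication relation
        C ~ const / (1 - k),
    so  (1 - k_s) = (1 - k_ref) * C_ref / C.
    This is the simple point-model relation, not the diffusion-theory
    derivation referenced in the record above.
    """
    return 1.0 - (1.0 - ref_k) * ref_count_rate / count_rate

# reference configuration with known k = 0.90 gave 1000 counts/s;
# the new configuration gives 2500 counts/s
ks = estimate_ks(count_rate=2500.0, ref_count_rate=1000.0, ref_k=0.90)
print(f"estimated k_s ~ {ks:.3f}")                            # ~0.960
print(f"source multiplication M ~ {1.0 / (1.0 - ks):.1f}")    # ~25
```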

  17. Preparation of very small point sources for high resolution radiography

    Case, F.N.

    1976-01-01

    The need for very small point sources of high specific activity 192Ir, 169Yb, 170Tm, and 60Co in non-destructive testing has motivated the development of techniques for the fabrication of these sources. To prepare 192Ir point sources for use in examination of tube sheet welds in LMFBR heat exchangers, 191Ir enriched to greater than 90 percent was melted in a helium blanketed arc to form spheres as small as 0.38 mm in diameter. Methods were developed to form the roughly spherical shaped arc product into nearly symmetrical spheres that could be used for high resolution radiography. Similar methods were used for spherical shaped sources of 169Yb and 170Tm. The oxides were arc melted to form rough spheres followed by grinding to precise dimensions, neutron irradiation of the spheres at a flux of 2 to 3 x 10^15 nv, and use of enriched 168Yb to provide the maximum specific activity. Cobalt-60 with a specific activity of greater than 1100 Ci/g was prepared by processing 59Co that had been neutron irradiated to nearly complete burnup of the 59Co target to produce 60Co, 61Ni, and 62Ni. Ion exchange methods were used to separate the cobalt from the nickel. The cobalt was reduced to metal by plating either onto aluminum foil which was dissolved away from the cobalt plate, or by plating onto mercury to prepare amalgam that could be easily formed into a pellet of cobalt with exclusion of the mercury. Both methods are discussed

  18. Learning from Multiple Sources for Video Summarisation

    Zhu, Xiatian; Loy, Chen Change; Gong, Shaogang

    2015-01-01

    Many visual surveillance tasks, e.g. video summarisation, are conventionally accomplished through analysing imagery-based features. Relying solely on visual cues for public surveillance video understanding is unreliable, since visual observations obtained from public space CCTV video data are often not sufficiently trustworthy and events of interest can be subtle. On the other hand, non-visual data sources such as weather reports and traffic sensory signals are readily accessible but are not exp...

  19. Rainfall Deduction Method for Estimating Non-Point Source Pollution Load for Watershed

    Cai, Ming; Li, Huai-en; KAWAKAMI, Yoji

    2004-01-01

    Water pollution can be divided into point source pollution (PSP) and non-point source pollution (NPS). Since point source pollution has largely been brought under control, non-point source pollution is becoming the main pollution source. The prediction of NPS load is becoming increasingly important for water pollution control and planning at the watershed scale. Considering the shortage of NPS monitoring data in China, a practical estimation method for non-point source pollution load --- rainfall deduction met...

  20. MULTIPLE ACCESS POINTS WITHIN THE ONLINE CLASSROOM: WHERE STUDENTS LOOK FOR INFORMATION

    John STEELE

    2017-01-01

    The purpose of this study is to examine the impact of information placement within the confines of the online classroom architecture. Also reviewed was the impact of other variables such as course design, teaching presence and student patterns in looking for information. The sample population included students from a major online university in their first-year course sequence. Students were tasked with completing a survey at the end of the course, indicating their preference for accessing information within the online classroom. The qualitative data indicated that student preference is to receive information from multiple access points and sources within the online classroom architecture. Students also expressed a desire to have information delivered through the use of technologies such as email and text messaging. In addition to receiving information from multiple sources, the qualitative data indicated students were satisfied overall with the current ways in which they received and accessed information within the online classroom setting. Major findings suggest that instructors teaching within the online classroom should have multiple data access points within the classroom architecture. Furthermore, instructors should use a variety of communication venues to enhance the ability for students to access and receive information pertinent to the course.

  1. Pressure Points in Reading Comprehension: A Quantile Multiple Regression Analysis

    Logan, Jessica

    2017-01-01

    The goal of this study was to examine how selected pressure points or areas of vulnerability are related to individual differences in reading comprehension and whether the importance of these pressure points varies as a function of the level of children's reading comprehension. A sample of 245 third-grade children were given an assessment battery…

  2. PSFGAN: a generative adversarial network system for separating quasar point sources and host galaxy light

    Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.

    2018-03-01

    The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached using parametric fitting routines using separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey (SDSS) r-band images with artificial AGN point sources added which are then removed using the GAN and with parametric methods using GALFIT. When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49%). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ±50% if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easy to use than parametric methods as it requires no input parameters.

  3. The peak efficiency calibration of volume source using 152Eu point source in computer

    Shen Tingyun; Qian Jianfu; Nan Qinliang; Zhou Yanguo

    1997-01-01

    The author describes the method of peak efficiency calibration of a volume source by means of a 152Eu point source for an HPGe γ spectrometer. The peak efficiency can be computed by Monte Carlo simulation after inputting the detector parameters. The computed results are in agreement with the experimental results within an error of ±3.8%, with one exception at about ±7.4%.
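
    A minimal sketch of the Monte Carlo idea is given below: it estimates only the geometric (solid-angle) part of a point-source efficiency for an on-axis source above a circular detector face, sampling isotropic emission directions. Photon transport and the detector's intrinsic full-energy response, which the record's simulation includes, are omitted.

```python
import numpy as np

def geometric_efficiency(source_height_mm, detector_radius_mm, n=1_000_000, seed=1):
    """Monte Carlo estimate of the solid-angle fraction subtended by a
    circular detector face, for a point source on the detector axis.

    Only the geometric factor is simulated; attenuation and the detector's
    intrinsic full-energy response are ignored in this sketch.
    """
    rng = np.random.default_rng(seed)
    # isotropic directions: cos(theta) uniform in [-1, 1]
    cos_t = rng.uniform(-1.0, 1.0, n)
    downward = cos_t < 0.0                      # detector lies below the source
    # radial distance from the axis where the ray crosses the detector plane
    sin_t = np.sqrt(1.0 - cos_t[downward] ** 2)
    r_hit = source_height_mm * sin_t / np.abs(cos_t[downward])
    hits = np.count_nonzero(r_hit <= detector_radius_mm)
    return hits / n

eff = geometric_efficiency(source_height_mm=25.0, detector_radius_mm=30.0)
print(f"geometric efficiency ~ {eff:.3f}")
```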

  4. Discretized energy minimization in a wave guide with point sources

    Propst, G.

    1994-01-01

    An anti-noise problem on a finite time interval is solved by minimization of a quadratic functional on the Hilbert space of square integrable controls. To this end, the one-dimensional wave equation with point sources and pointwise reflecting boundary conditions is decomposed into a system for the two propagating components of waves. Wellposedness of this system is proved for a class of data that includes piecewise linear initial conditions and piecewise constant forcing functions. It is shown that for such data the optimal piecewise constant control is the solution of a sparse linear system. Methods for its computational treatment are presented as well as examples of their applicability. The convergence of discrete approximations to the general optimization problem is demonstrated by finite element methods.

  5. Performance analysis of commercial multiple-input-multiple-output access point in distributed antenna system.

    Fan, Yuting; Aighobahi, Anthony E; Gomes, Nathan J; Xu, Kun; Li, Jianqiang

    2015-03-23

    In this paper, we experimentally investigate the throughput of IEEE 802.11n 2x2 multiple-input-multiple-output (MIMO) signals in a radio-over-fiber-based distributed antenna system (DAS) with different fiber lengths and power imbalance. Both a MIMO-supported access point (AP) and a spatial-diversity-supported AP were separately employed in the experiments. Throughput measurements were carried out with wireless users at different locations in a typical office environment. Regarding the effect of different fiber lengths, the results indicate that MIMO signals can maintain high throughput when the fiber length difference between the two remote antenna units (RAUs) is under 100 m, but throughput falls quickly when the length difference is greater. For the spatial diversity signals, high throughput can be maintained even when the difference is 150 m. On the other hand, the separation of the MIMO antennas allows additional freedom in placing the antennas in strategic locations for overall improved system performance, although it may also lead to received power imbalance problems. The results show that the throughput performance drops in specific positions when the received power imbalance is above around 13 dB. Hence, there is a trade-off between the extent of the wireless coverage for moderate bit-rates and the area over which peak bit-rates can be achieved.

  6. Volcano Monitoring using Multiple Remote Data Sources

    Reath, K. A.; Pritchard, M. E.

    2016-12-01

    Satellite-based remote sensing instruments can be used to determine quantitative values related to precursory activity that can act as a warning sign of an upcoming eruption. These warning signs are measured through examining anomalous activity in: (1) thermal flux, (2) gas/aerosol emission rates, (3) ground deformation, and (4) ground-based seismic readings. Patterns in each of these data sources are then analyzed to create classifications of different phases of precursory activity. These different phases of activity act as guidelines to monitor the progression of precursory activity leading to an eruption. Current monitoring methods rely on using high temporal resolution satellite imagery from instruments like the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectrometer (MODIS) sensors, for variations in thermal and aerosol emissions, and the Ozone Monitoring Instruments (OMI) and Ozone Mapping Profiler Suite (OMPS) instruments, for variations in gas emissions, to provide a valuable resource for near real-time monitoring of volcanic activity. However, the low spatial resolution of these data only enable events that produce a high thermal output or a large amount of gas/aerosol emissions to be detected. High spatial resolution instruments, like the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) sensor, have a small enough pixel size (90m2) that the subtle variations in both thermal flux and gas/aerosol emission rates in the pre-eruptive period can be detected. Including these data with the already established high temporal resolution data helps to identify and classify precursory activity patterns months before an eruption (Reath et al, 2016). By correlating these data with ground surface deformation data, determined from the Interferometric Synthetic Aperture Radar (InSAR) sensor, and seismic data, collected by the Incorporated Research Institution for Seismology (IRIS) data archive, subtle

  7. Super-resolution for a point source using positive refraction

    Miñano, Juan C.; Benítez, Pablo; González, Juan C.; Grabovičkić, Dejan; Ahmadpanahi, Hamed

    Leonhardt demonstrated (2009) that the 2D Maxwell Fish Eye lens (MFE) can focus perfectly 2D Helmholtz waves of arbitrary frequency, i.e., it can transport perfectly an outward (monopole) 2D Helmholtz wave field, generated by a point source, towards a receptor called "perfect drain" (PD) located at the corresponding MFE image point. The PD has the property of absorbing the complete radiation without radiation or scattering and it has been claimed as necessary to obtain super-resolution (SR) in the MFE. However, a prototype using a "drain" different from the PD has shown λ/5 resolution for microwave frequencies (Ma et al, 2010). Recently, the SR properties of a device equivalent to the MFE, called the Spherical Geodesic Waveguide (SGW) (Miñano et al, 2012) have been analyzed. The reported results show resolution up to λ/3000, for the SGW loaded with the perfect drain, and up to λ/500 for the SGW without perfect drain. The perfect drain was realized as a coaxial probe loaded with properly calculated impedance. The SGW provides SR only in a narrow band of frequencies close to the resonance Schumann frequencies. Here we analyze the SGW loaded with a small "perfect drain region" (González et al, 2011). This drain is designed as a region made of a material with complex permittivity. The comparative results show that there is no significant difference in the SR properties for both perfect drain designs.

  8. Trans-Z-source and Γ-Z-source neutral-point-clamped inverters

    Wei, Mo; Loh, Poh Chiang; Blaabjerg, Frede

    2015-01-01

    Z-source neutral-point-clamped (NPC) inverters are earlier proposed for obtaining voltage buck-boost and three-level switching simultaneously. Their performances are, however, constrained by a trade-off between their input-to-output gain and modulation ratio. This trade-off can lead to high...

  9. Power-Law Template for IR Point Source Clustering

    Addison, Graeme E.; Dunkley, Joanna; Hajian, Amir; Viero, Marco; Bond, J. Richard; Das, Sudeep; Devlin, Mark; Halpern, Mark; Hincks, Adam; Hlozek, Renee; hide

    2011-01-01

    We perform a combined fit to angular power spectra of unresolved infrared (IR) point sources from the Planck satellite (at 217, 353, 545 and 857 GHz, over angular scales 100 ≲ l ≲ 2200), the Balloon-borne Large-Aperture Submillimeter Telescope (BLAST; 250, 350, and 500 μm; 1000 ≲ l ≲ 9000), and from correlating BLAST and Atacama Cosmology Telescope (ACT; 148 and 218 GHz) maps. We find that the clustered power over the range of angular scales and frequencies considered is well fit by a simple power law of the form C_l^clust ∝ l^(-n) with n = 1.25 ± 0.06. While the IR sources are understood to lie at a range of redshifts, with a variety of dust properties, we find that the frequency dependence of the clustering power can be described by the square of a modified blackbody, ν^β B(ν, T_eff), with a single emissivity index β = 2.20 ± 0.07 and effective temperature T_eff = 9.7 K. Our predictions for the clustering amplitude are consistent with existing ACT and South Pole Telescope results at around 150 and 220 GHz, as is our prediction for the effective dust spectral index, which we find to be α_150-220 = 3.68 ± 0.07 between 150 and 220 GHz. Our constraints on the clustering shape and frequency dependence can be used to model the IR clustering as a contaminant in cosmic microwave background anisotropy measurements. The combined Planck and BLAST data also rule out a linear bias clustering model.

  10. Power-Law Template for Infrared Point-Source Clustering

    Addison, Graeme E; Dunkley, Joanna; Hajian, Amir; Viero, Marco; Bond, J. Richard; Das, Sudeep; Devlin, Mark J.; Halpern, Mark; Hincks, Adam D; Hlozek, Renee; hide

    2012-01-01

    We perform a combined fit to angular power spectra of unresolved infrared (IR) point sources from the Planck satellite (at 217, 353, 545, and 857 GHz, over angular scales 100 ≲ l ≲ 2200), the Balloon-borne Large-Aperture Submillimeter Telescope (BLAST; 250, 350, and 500 μm; 1000 ≲ l ≲ 9000), and from correlating BLAST and Atacama Cosmology Telescope (ACT; 148 and 218 GHz) maps. We find that the clustered power over the range of angular scales and frequencies considered is well fitted by a simple power law of the form C_l^clust ∝ l^(-n) with n = 1.25 ± 0.06. While the IR sources are understood to lie at a range of redshifts, with a variety of dust properties, we find that the frequency dependence of the clustering power can be described by the square of a modified blackbody, ν^β B(ν, T_eff), with a single emissivity index β = 2.20 ± 0.07 and effective temperature T_eff = 9.7 K. Our predictions for the clustering amplitude are consistent with existing ACT and South Pole Telescope results at around 150 and 220 GHz, as is our prediction for the effective dust spectral index, which we find to be α_150-220 = 3.68 ± 0.07 between 150 and 220 GHz. Our constraints on the clustering shape and frequency dependence can be used to model the IR clustering as a contaminant in cosmic microwave background anisotropy measurements. The combined Planck and BLAST data also rule out a linear bias clustering model.

  11. Systematic Review: Impact of point sources on antibiotic-resistant bacteria in the natural environment.

    Bueno, I; Williams-Nguyen, J; Hwang, H; Sargeant, J M; Nault, A J; Singer, R S

    2018-02-01

    Point sources such as wastewater treatment plants and agricultural facilities may have a role in the dissemination of antibiotic-resistant bacteria (ARB) and antibiotic resistance genes (ARG). To analyse the evidence for increases in ARB in the natural environment associated with these point sources of ARB and ARG, we conducted a systematic review. We evaluated 5,247 records retrieved through database searches, including studies that ascertained ARG outcomes and studies that ascertained ARB outcomes. All studies were subjected to a screening process to assess their relevance and the suitability of their methodology for addressing our review question. A risk of bias assessment was conducted upon the final pool of studies included in the review. This article summarizes the evidence only for those studies with ARB outcomes (n = 47). Thirty-five studies were at high (n = 11) or at unclear (n = 24) risk of bias in the estimation of source effects due to lack of information and/or failure to control for confounders. Statistical analysis was used in ten studies, of which one assessed the effect of multiple sources using modelling approaches; none reported effect measures. Most studies reported higher ARB prevalence or concentration downstream/near the source. However, this evidence was primarily descriptive and it could not be concluded that there is a clear impact of point sources on increases in ARB in the environment. To quantify increases in ARB in the environment due to specific point sources, there is a need for studies that stress study design, control of biases and analytical tools to provide effect measure estimates. © 2017 Blackwell Verlag GmbH.

  12. A MOSUM procedure for the estimation of multiple random change points

    Eichinger, Birte; Kirch, Claudia

    2018-01-01

    In this work, we investigate statistical properties of change point estimators based on moving sum statistics. We extend results for testing in a classical situation with multiple deterministic change points by allowing for random exogenous change points that arise in Hidden Markov or regime switching models among others. To this end, we consider a multiple mean change model with possible time series errors and prove that the number and location of change points are estimated consistently by ...
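
    A minimal sketch of a moving-sum statistic for mean changes is given below: sums over adjacent windows of bandwidth G are compared and large values flag candidate change points. The bandwidth, the scale estimate and the illustrative data are assumptions; the asymptotic thresholds derived in the paper are not reproduced.

```python
import numpy as np

def mosum_statistic(x, G):
    """Moving-sum statistic for mean changes with bandwidth G.

    T_k = |sum(x[k:k+G]) - sum(x[k-G:k])| / (sqrt(2*G) * sigma_hat),
    evaluated for k = G, ..., n-G.  Large values indicate a change point
    near k.  The scale estimate and bandwidth here are illustrative choices.
    """
    n = len(x)
    sigma = np.std(np.diff(x)) / np.sqrt(2.0)          # simple noise-scale estimate
    csum = np.concatenate(([0.0], np.cumsum(x)))
    k = np.arange(G, n - G + 1)
    right = csum[k + G] - csum[k]
    left = csum[k] - csum[k - G]
    return k, np.abs(right - left) / (np.sqrt(2.0 * G) * sigma)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 300),
                    rng.normal(1.5, 1, 300),
                    rng.normal(0.5, 1, 400)])
k, T = mosum_statistic(x, G=50)
print("largest MOSUM statistic near index:", k[np.argmax(T)])   # close to 300
```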

  13. Point spread function due to multiple scattering of light in the atmosphere

    Pękala, J.; Wilczyński, H.

    2013-01-01

    The atmospheric scattering of light has a significant influence on the results of optical observations of air showers. It causes attenuation of direct light from the shower, but also contributes a delayed signal to the observed light. The scattering of light therefore should be accounted for, both in simulations of air shower detection and reconstruction of observed events. In this work a Monte Carlo simulation of multiple scattering of light has been used to determine the contribution of the scattered light in observations of a point source of light. Results of the simulations and a parameterization of the angular distribution of the scattered light contribution to the observed signal (the point spread function) are presented. -- Author-Highlights: •Analysis of atmospheric scattering of light from an isotropic point source. •Different geometries and atmospheric conditions were investigated. •A parameterization of scattered light distribution has been developed. •The parameterization allows one to easily account for the light scattering in air. •The results will be useful in analyses of observations of extensive air shower

  14. POWER-LAW TEMPLATE FOR INFRARED POINT-SOURCE CLUSTERING

    Addison, Graeme E.; Dunkley, Joanna [Sub-department of Astrophysics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH (United Kingdom); Hajian, Amir; Das, Sudeep; Hincks, Adam D.; Page, Lyman A.; Staggs, Suzanne T. [Joseph Henry Laboratories of Physics, Jadwin Hall, Princeton University, Princeton, NJ 08544 (United States); Viero, Marco [Department of Astronomy, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125 (United States); Bond, J. Richard [Canadian Institute for Theoretical Astrophysics, University of Toronto, Toronto, ON M5S 3H8 (Canada); Devlin, Mark J.; Reese, Erik D. [Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104 (United States); Halpern, Mark; Scott, Douglas [Department of Physics and Astronomy, University of British Columbia, Vancouver, BC V6T 1Z4 (Canada); Hlozek, Renee; Marriage, Tobias A.; Spergel, David N. [Department of Astrophysical Sciences, Peyton Hall, Princeton University, Princeton, NJ 08544 (United States); Moodley, Kavilan [Astrophysics and Cosmology Research Unit, School of Mathematical Sciences, University of KwaZulu-Natal, Durban 4041 (South Africa); Wollack, Edward [NASA/Goddard Space Flight Center, Code 665, Greenbelt, MD 20771 (United States)

    2012-06-20

    We perform a combined fit to angular power spectra of unresolved infrared (IR) point sources from the Planck satellite (at 217, 353, 545, and 857 GHz, over angular scales 100 ≲ l ≲ 2200), the Balloon-borne Large-Aperture Submillimeter Telescope (BLAST; 250, 350, and 500 μm; 1000 ≲ l ≲ 9000), and from correlating BLAST and Atacama Cosmology Telescope (ACT; 148 and 218 GHz) maps. We find that the clustered power over the range of angular scales and frequencies considered is well fitted by a simple power law of the form C_l^clust ∝ l^(-n) with n = 1.25 ± 0.06. While the IR sources are understood to lie at a range of redshifts, with a variety of dust properties, we find that the frequency dependence of the clustering power can be described by the square of a modified blackbody, ν^β B(ν, T_eff), with a single emissivity index β = 2.20 ± 0.07 and effective temperature T_eff = 9.7 K. Our predictions for the clustering amplitude are consistent with existing ACT and South Pole Telescope results at around 150 and 220 GHz, as is our prediction for the effective dust spectral index, which we find to be α_150-220 = 3.68 ± 0.07 between 150 and 220 GHz. Our constraints on the clustering shape and frequency dependence can be used to model the IR clustering as a contaminant in cosmic microwave background anisotropy measurements. The combined Planck and BLAST data also rule out a linear bias clustering model.
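
    The fitted template above can be evaluated directly: the worked sketch below combines the power law C_l^clust ∝ l^(-1.25) with the squared modified-blackbody frequency scaling ν^β B(ν, T_eff) for β = 2.20 and T_eff = 9.7 K. The overall normalization is arbitrary in this sketch.

```python
import numpy as np

H = 6.626e-34      # Planck constant [J s]
KB = 1.381e-23     # Boltzmann constant [J/K]
C_LIGHT = 2.998e8  # speed of light [m/s]

def planck(nu_hz, T_k):
    """Planck spectral radiance B(nu, T)."""
    x = H * nu_hz / (KB * T_k)
    return 2.0 * H * nu_hz**3 / C_LIGHT**2 / np.expm1(x)

def clustering_template(ell, nu_ghz, n=1.25, beta=2.20, T_eff=9.7, amp=1.0):
    """Power-law clustering template with a modified-blackbody frequency
    scaling, following the fitted form in the record above.  The overall
    amplitude `amp` is arbitrary here."""
    nu = nu_ghz * 1e9
    sed = nu**beta * planck(nu, T_eff)          # modified blackbody
    return amp * ell**(-n) * sed**2             # clustering power ~ SED squared

ell = np.array([200, 1000, 3000])
for freq in (217.0, 353.0, 545.0, 857.0):       # Planck HFI bands [GHz]
    cl = clustering_template(ell, freq)
    # frequency scaling of the template relative to the 217 GHz band
    print(freq, cl / clustering_template(ell, 217.0))
```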

  15. Investigating sources and pathways of perfluoroalkyl acids (PFAAs) in aquifers in Tokyo using multiple tracers

    Kuroda, Keisuke; Murakami, Michio; Oguma, Kumiko; Takada, Hideshige; Takizawa, Satoshi

    2014-01-01

    We employed a multi-tracer approach to investigate sources and pathways of perfluoroalkyl acids (PFAAs) in urban groundwater, based on 53 groundwater samples taken from confined aquifers and unconfined aquifers in Tokyo. While the median concentrations of groundwater PFAAs were several ng/L, the maximum concentrations of perfluorooctane sulfonate (PFOS, 990 ng/L), perfluorooctanoate (PFOA, 1800 ng/L) and perfluorononanoate (PFNA, 620 ng/L) in groundwater were several times higher than those of wastewater and street runoff reported in the literature. PFAAs were more frequently detected than sewage tracers (carbamazepine and crotamiton), presumably owing to the higher persistence of PFAAs, the multiple sources of PFAAs beyond sewage (e.g., surface runoff, point sources) and the formation of PFAAs from their precursors. Use of multiple methods of source apportionment including principal component analysis–multiple linear regression (PCA–MLR) and perfluoroalkyl carboxylic acid ratio analysis highlighted sewage and point sources as the primary sources of PFAAs in the most severely polluted groundwater samples, with street runoff being a minor source (44.6% sewage, 45.7% point sources and 9.7% street runoff, by PCA–MLR). Tritium analysis indicated that, while young groundwater (recharged during or after the 1970s, when PFAAs were already in commercial use) in shallow aquifers (< 50 m depth) was naturally highly vulnerable to PFAA pollution, PFAAs were also found in old groundwater (recharged before the 1950s, when PFAAs were not in use) in deep aquifers (50–500 m depth). This study demonstrated the utility of multiple uses of tracers (pharmaceuticals and personal care products; PPCPs, tritium) and source apportionment methods in investigating sources and pathways of PFAAs in multiple aquifer systems. - Highlights: • Aquifers in Tokyo had high levels of perfluoroalkyl acids (up to 1800 ng/L). • PFAAs were more frequently detected than sewage
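
    A minimal sketch of the PCA-MLR source apportionment step mentioned above follows: PCA on a standardized tracer matrix, regression of the total PFAA concentration on the retained scores, and a simplified conversion to percentage contributions. The data, the number of retained factors and the contribution convention are assumptions; the published APCS-MLR procedure additionally rescales the scores to absolute principal component scores first.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# hypothetical matrix: rows = groundwater samples, columns = tracer analytes
rng = np.random.default_rng(3)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(53, 8))
total_pfaa = X[:, :4].sum(axis=1)               # hypothetical target variable

# standardize, then keep the leading components as candidate "sources"
Z = (X - X.mean(axis=0)) / X.std(axis=0)
pca = PCA(n_components=3)
scores = pca.fit_transform(Z)

# regress the (standardized) total concentration on the PCA scores
y = (total_pfaa - total_pfaa.mean()) / total_pfaa.std()
mlr = LinearRegression().fit(scores, y)

# simplified contribution: share of the regression-predicted signal per factor
contrib = np.abs(mlr.coef_) * np.abs(scores).mean(axis=0)
contrib = 100.0 * contrib / contrib.sum()
for i, pct in enumerate(contrib, start=1):
    print(f"factor {i}: ~{pct:.1f}% of predicted total PFAA")
```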

  16. Investigating sources and pathways of perfluoroalkyl acids (PFAAs) in aquifers in Tokyo using multiple tracers

    Kuroda, Keisuke, E-mail: keisukekr@gmail.com [Department of Urban Engineering, Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-8656 (Japan); Murakami, Michio [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro, Tokyo 153-8505 (Japan); Oguma, Kumiko [Department of Urban Engineering, Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-8656 (Japan); Takada, Hideshige [Laboratory of Organic Geochemistry (LOG), Institute of Symbiotic Science and Technology, Tokyo University of Agriculture and Technology, Fuchu, Tokyo 183-8509 (Japan); Takizawa, Satoshi [Department of Urban Engineering, Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-8656 (Japan)

    2014-08-01

    We employed a multi-tracer approach to investigate sources and pathways of perfluoroalkyl acids (PFAAs) in urban groundwater, based on 53 groundwater samples taken from confined aquifers and unconfined aquifers in Tokyo. While the median concentrations of groundwater PFAAs were several ng/L, the maximum concentrations of perfluorooctane sulfonate (PFOS, 990 ng/L), perfluorooctanoate (PFOA, 1800 ng/L) and perfluorononanoate (PFNA, 620 ng/L) in groundwater were several times higher than those of wastewater and street runoff reported in the literature. PFAAs were more frequently detected than sewage tracers (carbamazepine and crotamiton), presumably owing to the higher persistence of PFAAs, the multiple sources of PFAAs beyond sewage (e.g., surface runoff, point sources) and the formation of PFAAs from their precursors. Use of multiple methods of source apportionment including principal component analysis–multiple linear regression (PCA–MLR) and perfluoroalkyl carboxylic acid ratio analysis highlighted sewage and point sources as the primary sources of PFAAs in the most severely polluted groundwater samples, with street runoff being a minor source (44.6% sewage, 45.7% point sources and 9.7% street runoff, by PCA–MLR). Tritium analysis indicated that, while young groundwater (recharged during or after the 1970s, when PFAAs were already in commercial use) in shallow aquifers (< 50 m depth) was naturally highly vulnerable to PFAA pollution, PFAAs were also found in old groundwater (recharged before the 1950s, when PFAAs were not in use) in deep aquifers (50–500 m depth). This study demonstrated the utility of multiple uses of tracers (pharmaceuticals and personal care products; PPCPs, tritium) and source apportionment methods in investigating sources and pathways of PFAAs in multiple aquifer systems. - Highlights: • Aquifers in Tokyo had high levels of perfluoroalkyl acids (up to 1800 ng/L). • PFAAs were more frequently detected than sewage

  17. Multiple Monte Carlo Testing with Applications in Spatial Point Processes

    Mrkvička, Tomáš; Myllymäki, Mari; Hahn, Ute

    with a function as the test statistic, 3) several Monte Carlo tests with functions as test statistics. The rank test has correct (global) type I error in each case and it is accompanied with a p-value and with a graphical interpretation which shows which subtest or which distances of the used test function......(s) lead to the rejection at the prescribed significance level of the test. Examples of null hypothesis from point process and random set statistics are used to demonstrate the strength of the rank envelope test. The examples include goodness-of-fit test with several test functions, goodness-of-fit test...

  18. Low energy electron point source microscopy: beyond imaging

    Beyer, Andre; Goelzhaeuser, Armin [Physics of Supramolecular Systems and Surfaces, University of Bielefeld, Postfach 100131, 33501 Bielefeld (Germany)

    2010-09-01

    Low energy electron point source (LEEPS) microscopy has the capability to record in-line holograms at very high magnifications with a fairly simple set-up. After the holograms are numerically reconstructed, structural features with the size of about 2 nm can be resolved. The achievement of an even higher resolution has been predicted. However, a number of obstacles are known to impede the realization of this goal, for example the presence of electric fields around the imaged object, electrostatic charging or radiation induced processes. This topical review gives an overview of the achievements as well as the difficulties in the efforts to shift the resolution limit of LEEPS microscopy towards the atomic level. A special emphasis is laid on the high sensitivity of low energy electrons to electrical fields, which limits the structural determination of the imaged objects. On the other hand, the investigation of the electrical field around objects of known structure is very useful for other tasks and LEEPS microscopy can be extended beyond the task of imaging. The determination of the electrical resistance of individual nanowires can be achieved by a proper analysis of the corresponding LEEPS micrographs. This conductivity imaging may be a very useful application for LEEPS microscopes. (topical review)

  19. Beamline-Controlled Steering of Source-Point Angle at the Advanced Photon Source

    Emery, L.; Fystro, G.; Shang, H.; Smith, M.

    2017-06-25

    An EPICS-based steering software system has been implemented for beamline personnel to directly steer the angle of the synchrotron radiation sources at the Advanced Photon Source. A script running on a workstation monitors "start steering" beamline EPICS records, and effects a steering given by the value of the "angle request" EPICS record. The new system makes the steering process much faster than before, although the older steering protocols can still be used. The robustness features of the original steering remain. Feedback messages are provided to the beamlines and the accelerator operators. Underpinning this new steering protocol is the recent refinement of the global orbit feedback process whereby feedforward of dipole corrector set points and orbit set points is used to create a local steering bump in a rapid and seamless way.
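
    The abstract describes the steering loop only at a high level. Purely as an illustration of the kind of EPICS record monitoring it mentions, a minimal polling sketch using the pyepics library might look as follows; all PV names, the calibration constant and the single-corrector feed-forward logic are invented for illustration and are not taken from the APS system.

```python
# Hypothetical sketch of an EPICS monitoring loop in the spirit described above,
# using the pyepics library. Every PV name and number here is made up; the actual
# APS record names and steering logic are not given in the abstract.
import time
import epics

START_PV = "BL:SteeringStart"            # hypothetical "start steering" record
ANGLE_PV = "BL:AngleRequest"             # hypothetical "angle request" record (urad)
CORRECTOR_PV = "SR:Corrector:Setpoint"   # hypothetical dipole corrector setpoint

start = epics.PV(START_PV)
angle = epics.PV(ANGLE_PV)
corrector = epics.PV(CORRECTOR_PV)

def angle_to_corrector(angle_urad):
    """Toy linear map from requested source-point angle to a corrector setpoint."""
    calibration = 0.01  # hypothetical A/urad calibration constant
    return calibration * angle_urad

while True:
    if start.get() == 1:                      # beamline requested a steering action
        requested = angle.get()               # desired source-point angle change
        corrector.put(angle_to_corrector(requested))  # feed forward a corrector bump
        start.put(0)                          # acknowledge / clear the request
    time.sleep(1.0)                           # poll once per second
```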

  20. Normalized Point Source Sensitivity for Off-Axis Optical Performance Evaluation of the Thirty Meter Telescope

    Seo, Byoung-Joon; Nissly, Carl; Troy, Mitchell; Angeli, George

    2010-01-01

    The Normalized Point Source Sensitivity (PSSN) has previously been defined and analyzed as an On-Axis seeing-limited telescope performance metric. In this paper, we expand the scope of the PSSN definition to include Off-Axis field of view (FoV) points and apply this generalized metric for performance evaluation of the Thirty Meter Telescope (TMT). We first propose various possible choices for the PSSN definition and select one as our baseline. We show that our baseline metric has useful properties including the multiplicative feature even when considering Off-Axis FoV points, which has proven to be useful for optimizing the telescope error budget. Various TMT optical errors are considered for the performance evaluation including segment alignment and phasing, segment surface figures, temperature, and gravity, whose On-Axis PSSN values have previously been published by our group.
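
    The abstract does not reproduce the metric itself; for orientation, one commonly quoted form of the normalized point source sensitivity and of the multiplicative property referred to above is (our paraphrase, not necessarily the exact baseline definition selected in the paper):

    \[
    \mathrm{PSSN} \;=\; \frac{\displaystyle\int \bigl|\mathrm{PSF}_{\mathrm{atm+tel+err}}(\theta)\bigr|^{2}\,d\theta}{\displaystyle\int \bigl|\mathrm{PSF}_{\mathrm{atm+tel}}(\theta)\bigr|^{2}\,d\theta},
    \qquad
    \mathrm{PSSN}_{\mathrm{total}} \;\approx\; \prod_{i}\mathrm{PSSN}_{i},
    \]

    where the denominator uses the seeing-limited point-spread function of the error-free telescope, and the approximate product runs over independent error sources; the paper's contribution is generalizing such a metric to off-axis field points while preserving the multiplicative property used in error budgeting.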

  1. Testability evaluation using prior information of multiple sources

    Wang Chao

    2014-08-01

    Full Text Available Testability plays an important role in improving the readiness and decreasing the life-cycle cost of equipment. Testability demonstration and evaluation is of significance in measuring such testability indexes as fault detection rate (FDR and fault isolation rate (FIR, which is useful to the producer in mastering the testability level and improving the testability design, and helpful to the consumer in making purchase decisions. Aiming at the problems with a small sample of testability demonstration test data (TDTD such as low evaluation confidence and inaccurate result, a testability evaluation method is proposed based on the prior information of multiple sources and Bayes theory. Firstly, the types of prior information are analyzed. The maximum entropy method is applied to the prior information with the mean and interval estimate forms on the testability index to obtain the parameters of prior probability density function (PDF, and the empirical Bayesian method is used to get the parameters for the prior information with a success-fail form. Then, a parametrical data consistency check method is used to check the compatibility between all the sources of prior information and TDTD. For the prior information to pass the check, the prior credibility is calculated. A mixed prior distribution is formed based on the prior PDFs and the corresponding credibility. The Bayesian posterior distribution model is acquired with the mixed prior distribution and TDTD, based on which the point and interval estimates are calculated. Finally, examples of a flying control system are used to verify the proposed method. The results show that the proposed method is feasible and effective.

  2. Testability evaluation using prior information of multiple sources

    Wang Chao; Qiu Jing; Liu Guanjun; Zhang Yong

    2014-01-01

    Testability plays an important role in improving the readiness and decreasing the life-cycle cost of equipment. Testability demonstration and evaluation is of significance in measuring such testability indexes as fault detection rate (FDR) and fault isolation rate (FIR), which is useful to the producer in mastering the testability level and improving the testability design, and helpful to the consumer in making purchase decisions. Aiming at the problems with a small sample of testability demonstration test data (TDTD) such as low evaluation confidence and inaccurate result, a testability evaluation method is proposed based on the prior information of multiple sources and Bayes theory. Firstly, the types of prior information are analyzed. The maximum entropy method is applied to the prior information with the mean and interval estimate forms on the testability index to obtain the parameters of prior probability density function (PDF), and the empirical Bayesian method is used to get the parameters for the prior information with a success-fail form. Then, a parametrical data consistency check method is used to check the compatibility between all the sources of prior information and TDTD. For the prior information to pass the check, the prior credibility is calculated. A mixed prior distribution is formed based on the prior PDFs and the corresponding credibility. The Bayesian posterior distribution model is acquired with the mixed prior distribution and TDTD, based on which the point and interval estimates are calculated. Finally, examples of a flying control system are used to verify the proposed method. The results show that the proposed method is feasible and effective.
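
    The abstract gives no formulas. Purely as an illustration of the general idea (a credibility-weighted mixture prior on a testability index, updated with small-sample success/fail demonstration data), a conjugate Beta-Binomial sketch might look as follows; the Beta-Binomial choice and every number are assumptions for illustration, not taken from the paper.

```python
# Illustrative sketch only: a mixture-of-Beta prior on a fault detection rate (FDR),
# weighted by per-source credibilities, then updated with success/fail test data.
# The Beta-Binomial model and all numbers below are assumptions, not the paper's.
import numpy as np
from scipy import stats

# Prior information from two hypothetical sources, already converted to Beta
# parameters (e.g. via maximum entropy or empirical Bayes, as the paper describes).
priors = [(30.0, 3.0), (20.0, 5.0)]      # (alpha, beta) for each prior source
credibility = np.array([0.6, 0.4])       # weights from a consistency check

# Small-sample demonstration test data: n trials, k successful fault detections.
n, k = 20, 18

# The posterior is again a Beta mixture: each component updates conjugately and
# its weight is rescaled by the marginal likelihood of the data under that prior.
post_params, post_weights = [], []
for (a, b), w in zip(priors, credibility):
    post_params.append((a + k, b + n - k))
    post_weights.append(w * stats.betabinom.pmf(k, n, a, b))
post_weights = np.array(post_weights) / np.sum(post_weights)

# Point estimate (posterior mean) and a 90% credible interval by mixture sampling.
rng = np.random.default_rng(0)
comp = rng.choice(len(priors), size=50_000, p=post_weights)
samples = np.array([rng.beta(*post_params[c]) for c in comp])
print("posterior mean FDR:", samples.mean())
print("90% credible interval:", np.percentile(samples, [5, 95]))
```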

  3. A modified likelihood-method to search for point-sources in the diffuse astrophysical neutrino-flux in IceCube

    Reimann, Rene; Haack, Christian; Leuermann, Martin; Raedel, Leif; Schoenen, Sebastian; Schimp, Michael; Wiebusch, Christopher [III. Physikalisches Institut, RWTH Aachen (Germany); Collaboration: IceCube-Collaboration

    2015-07-01

    IceCube, a cubic-kilometer sized neutrino detector at the geographical South Pole, has recently measured a flux of high-energy astrophysical neutrinos. Although this flux has now been observed in multiple analyses, no point sources or source classes could be identified yet. Standard point source searches test many points in the sky for a point source of astrophysical neutrinos individually and therefore produce many trials. Our approach is to additionally use the measured diffuse spectrum to constrain the number of possible point sources and their properties. Initial studies of the method performance are shown.

  4. Effective communication at the point of multiple sclerosis diagnosis.

    Solari, Alessandra

    2014-04-01

    As a consequence of the current shortened diagnostic workup, people with multiple sclerosis (PwMS) are rapidly confronted with a disease of uncertain prognosis that requires complex treatment decisions. This paper reviews studies that have assessed the experiences of PwMS in the peri-diagnostic period and have evaluated the efficacy of interventions providing information at this critical moment. The studies found that the emotional burden on PwMS at diagnosis was high, and emphasised the need for careful monitoring and management of mood symptoms (chiefly anxiety). Information provision did not affect anxiety symptoms but improved patients' knowledge of their condition, the achievement of 'informed choice', and satisfaction with the diagnosis communication. It is vital to develop and implement information and decision aids for PwMS, but this is resource intensive, and international collaboration may be a way forward. The use of patient self-assessed outcome measures that appraise the quality of diagnosis communication is also important to allow health services to understand and meet the needs and preferences of PwMS.

  5. Nutrient Losses from Non-Point Sources or from Unidentified Point Sources? Application Examples of the Smartphone Based Nitrate App.

    Rozemeijer, J.; Ekkelenkamp, R.; van der Zaan, B.

    2017-12-01

    In 2016 Deltares launched the free-to-use Nitrate App, which accurately reads and interprets nitrate test strips. The app directly displays the measured concentration and gives the option to share the result. Shared results are visualised in map functionality within the app and online. Since its introduction we have seen an increasing number of Nitrate App applications. In this presentation we show some unanticipated types of application. The Nitrate App was originally intended to enable farmers to measure nitrate concentrations on their own farms. This may encourage farmers to talk to specialists about the right nutrient best management practices (BMPs) for their farm. Several groups of farmers have recently started to apply the Nitrate App and to discuss their results with each other and with the authorities. Nitrate concentration routings in catchments have proven to be another useful application. Within a day a person can generate a catchment-scale nitrate concentration map identifying nitrate loss hotspots. In several routings in agricultural catchments clear point sources were found, for example at small-scale manure processing plants. These routings proved that the Nitrate App can help water managers to target conservation practices more accurately to areas with the highest nitrate concentrations and loads. Other current applications are the screening of domestic water wells in California, the collection of extra measurements (also pH and NH4) in the National Monitoring Network for the Evaluation of the Manure Policy in the Netherlands, and several educational initiatives in cooperation with schools and universities.

  6. Impact of point source pollution on groundwater quality

    Gill, M.A.; Solehria, B.A.; Rai, N.I.

    2005-01-01

    The management of point source pollution (municipal and industrial wastewater) is an important item on the Brown Agenda confronting urban planners and policy makers. Industrial concerns and households produce enormous amounts of wastewater, which has to be disposed of through the municipal sewage system. Generally, municipal wastewater management is done on non-scientific lines, resulting in considerable social and economic loss and gradual degradation of natural resources. The present study highlights how poor management practices, lack of infrastructure, and a poor disposal system, comprising mostly open, un-walled or partially lined drains, affect groundwater quality and render it unfit for human consumption. The Satiana Road sludge carrier at Faisalabad city, receiving effluents of about 67 textile units, 4 oil mills, 2 ice factories, 3 laundries and domestic wastewater of Peoples Colony No. 1, Maqbool Road and Ghulam Rasool Nagar, was selected to derive quantitative and qualitative estimates of TDS, Na, Cl and heavy metals, namely Fe, Cu and Pb, in the wastewater and their leaching around the sludge carrier. The measured leaching of TDS, Na⁺ and Cl⁻ per 1000 m in the lined section was 818, 550 and 228 tons, respectively, whereas in the unlined section the annual increase of TDS, Na⁺ and Cl⁻ was 2404, 1615 and 669 tons per 1000 m, respectively. In the case of leaching of metals through the sludge carrier, Cu was at the top with 8.4 tons per annum per 1000 m, followed by Fe and Pb with 6.66 and 1.2 tons per annum per 1000 m, respectively. The concentrations of all the salts/metals studied were higher in groundwater near the sludge carrier and decreased with increasing distance. The groundwater contamination in unlined portions is greater than in lined portions, which might be due to higher seepage losses in unlined portions of the sludge carrier (4.9% per 1000 m) as compared to relatively low seepage losses in lined portion of

  7. Coordinating a Large, Amalgamated REU Program with Multiple Funding Sources

    Fiorini, Eugene; Myers, Kellen; Naqvi, Yusra

    2017-01-01

    In this paper, we discuss the challenges of organizing a large REU program amalgamated from multiple funding sources, including diverse participants, mentors, and research projects. We detail the program's structure, activities, and recruitment, and we hope to demonstrate that the organization of this REU is not only beneficial to its…

  8. The collapsed cone algorithm for (192)Ir dosimetry using phantom-size adaptive multiple-scatter point kernels.

    Tedgren, Åsa Carlsson; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-07

    The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm with the worst case being a source/implant located well within a patient

  9. LEAP: Looking beyond pixels with continuous-space EstimAtion of Point sources

    Pan, Hanjie; Simeoni, Matthieu; Hurley, Paul; Blu, Thierry; Vetterli, Martin

    2017-12-01

    Context. Two main classes of imaging algorithms have emerged in radio interferometry: the CLEAN algorithm and its multiple variants, and compressed-sensing inspired methods. They are both discrete in nature, and estimate source locations and intensities on a regular grid. For the traditional CLEAN-based imaging pipeline, the resolution power of the tool is limited by the width of the synthesized beam, which is inversely proportional to the largest baseline. The finite rate of innovation (FRI) framework is a robust method to find the locations of point-sources in a continuum without grid imposition. The continuous formulation makes the FRI recovery performance only dependent on the number of measurements and the number of sources in the sky. FRI can theoretically find sources below the perceived tool resolution. To date, FRI had never been tested in the extreme conditions inherent to radio astronomy: weak signal / high noise, huge data sets, large numbers of sources. Aims: The aims were (i) to adapt FRI to radio astronomy, (ii) verify it can recover sources in radio astronomy conditions with more accurate positioning than CLEAN, and possibly resolve some sources that would otherwise be missed, (iii) show that sources can be found using less data than would otherwise be required to find them, and (iv) show that FRI does not lead to an augmented rate of false positives. Methods: We implemented a continuous domain sparse reconstruction algorithm in Python. The angular resolution performance of the new algorithm was assessed under simulation, and with visibility measurements from the LOFAR telescope. Existing catalogs were used to confirm the existence of sources. Results: We adapted the FRI framework to radio interferometry, and showed that it is possible to determine accurate off-grid point-source locations and their corresponding intensities. In addition, FRI-based sparse reconstruction required less integration time and smaller baselines to reach a comparable

  10. Strategies for satellite-based monitoring of CO2 from distributed area and point sources

    Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David

    2014-05-01

    temporal variations. Geostationary and non-sun-synchronous low-Earth-orbits (precessing local solar time, diurnal information possible) with agile pointing have the potential to provide, comprehensive mapping of distributed area sources such as megacities with longer stare times and multiple revisits per day, at the expense of global access and spatial coverage. An ad hoc CO2 remote sensing constellation is emerging. NASA's OCO-2 satellite (launch July 2014) joins JAXA's GOSAT satellite in orbit. These will be followed by GOSAT-2 and NASA's OCO-3 on the International Space Station as early as 2017. Additional polar orbiting satellites (e.g., CarbonSat, under consideration at ESA) and geostationary platforms may also become available. However, the individual assets have been designed with independent science goals and requirements, and limited consideration of coordinated observing strategies. Every effort must be made to maximize the science return from this constellation. We discuss the opportunities to exploit the complementary spatial and temporal coverage provided by these assets as well as the crucial gaps in the capabilities of this constellation. References Burton, M.R., Sawyer, G.M., and Granieri, D. (2013). Deep carbon emissions from volcanoes. Rev. Mineral. Geochem. 75: 323-354. Duren, R.M., Miller, C.E. (2012). Measuring the carbon emissions of megacities. Nature Climate Change 2, 560-562. Schwandner, F.M., Oda, T., Duren, R., Carn, S.A., Maksyutov, S., Crisp, D., Miller, C.E. (2013). Scientific Opportunities from Target-Mode Capabilities of GOSAT-2. NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena CA, White Paper, 6p., March 2013.

  11. Determination of disintegration rates of a 60Co point source and volume sources by the sum-peak method

    Kawano, Takao; Ebihara, Hiroshi

    1990-01-01

    The disintegration rates of 60Co as a point source (<2 mm in diameter on a thin plastic disc) and volume sources (10-100 mL solutions in a polyethylene bottle) are determined by the sum-peak method. The sum-peak formula gives the exact disintegration rate for the point source at different positions from the detector. However, increasing the volume of the solution results in enlarged deviations from the true disintegration rate. Extended sources must be treated as an amalgam of many point sources. (author)
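
    For context, the standard sum-peak relation for a two-photon cascade emitter such as 60Co, as given in textbook treatments following Brinkman et al. (quoted here for orientation, not copied from the paper), reads

    \[
    N_{0} \;=\; T + \frac{A_{1}\,A_{2}}{A_{12}},
    \]

    where N_0 is the source disintegration rate, A_1 and A_2 are the net full-energy peak count rates of the two gamma rays, A_12 is the sum-peak count rate, and T is the total spectrum count rate. The detection efficiencies cancel in this combination, which is why the method can return the exact disintegration rate for a point source at any position, while extended sources, with position-dependent efficiencies across their volume, deviate from it.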

  12. Error analysis of dimensionless scaling experiments with multiple points using linear regression

    Guercan, Oe.D.; Vermare, L.; Hennequin, P.; Bourdelle, C.

    2010-01-01

    A general method of error estimation in the case of multiple point dimensionless scaling experiments, using linear regression and standard error propagation, is proposed. The method reduces to the previous result of Cordey (2009 Nucl. Fusion 49 052001) in the case of a two-point scan. On the other hand, if the points follow a linear trend, it explains how the estimated error decreases as more points are added to the scan. Based on the analytical expression that is derived, it is argued that for a low number of points, adding points to the ends of the scanned range, rather than the middle, results in a smaller error estimate. (letter)
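
    The letter's analytical expression is not reproduced in this abstract; the point about end-of-range points can already be seen from the textbook ordinary-least-squares formulas (our notation, not necessarily the paper's):

    \[
    \hat{\beta} \;=\; \frac{\sum_{i}(x_{i}-\bar{x})(y_{i}-\bar{y})}{\sum_{i}(x_{i}-\bar{x})^{2}},
    \qquad
    \operatorname{Var}\bigl(\hat{\beta}\bigr) \;\approx\; \frac{\sigma_{y}^{2}}{\sum_{i}(x_{i}-\bar{x})^{2}},
    \]

    so the estimated error on the fitted scaling exponent shrinks fastest when added points enlarge the spread \(\sum_i (x_i-\bar{x})^2\), i.e. when they are placed at the ends of the scanned range rather than in the middle.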

  13. Discrimination of Natural and Non-Point Source Effects from Anthropogenic Effects as Reflected in Benthic State in Three Estuaries in New England

    In order to protect estuarine resources, managers must be able to discern the effects of natural conditions and non-point source effects, and separate them from multiple anthropogenic point source effects. Our approach was to evaluate benthic community assemblages, riverine nitro...

  14. Characteristics of infrared point sources associated with OH masers

    Mu Jimang; Esimbek, Jarken; Zhou Jianjun; Zhang Haijuan

    2010-01-01

    We collect 3249 OH maser sources from the literature published up to April 2007, and compile a new catalog of OH masers. We look for the exciting sources of these masers and their infrared properties from IRAS and MSX data, and make a statistical study. MSX sources associated with stellar 1612 MHz OH masers are located mainly above the blackbody line; this is caused by the dust absorption of stellar envelopes, especially in the MSX A band. The mid-IR sources associated with stellar OH masers are concentrated in a small region in an [A]-[D] vs. [A]-[E] diagram with a small fraction of contamination; this gives us a new criterion to search for new stellar OH masers and distinguish stellar masers from unknown types of OH masers. IR sources associated with 1612 MHz stellar OH masers show an expected result: the average flux of sources with F60 > F25 increases with increasing wavelength, while those with F60 < F25 ...

  15. Evaluation of Network Reliability for Computer Networks with Multiple Sources

    Yi-Kuei Lin

    2012-01-01

    Full Text Available Evaluating the reliability of a network with multiple sources to multiple sinks is a critical issue from the perspective of quality management. Due to the unrealistic definition of paths of network models in previous literature, existing models are not appropriate for real-world computer networks such as the Taiwan Advanced Research and Education Network (TWAREN. This paper proposes a modified stochastic-flow network model to evaluate the network reliability of a practical computer network with multiple sources where data is transmitted through several light paths (LPs. Network reliability is defined as being the probability of delivering a specified amount of data from the sources to the sink. It is taken as a performance index to measure the service level of TWAREN. This paper studies the network reliability of the international portion of TWAREN from two sources (Taipei and Hsinchu to one sink (New York that goes through a submarine and land surface cable between Taiwan and the United States.

  16. Analysis of point source size on measurement accuracy of lateral point-spread function of confocal Raman microscopy

    Fu, Shihang; Zhang, Li; Hu, Yao; Ding, Xiang

    2018-01-01

    Confocal Raman Microscopy (CRM) has matured to become one of the most powerful instruments in analytical science because of its molecular sensitivity and high spatial resolution. Compared with conventional Raman Microscopy, CRM can perform three-dimensional mapping of tiny samples and has the advantage of high spatial resolution thanks to the unique pinhole. With the wide application of the instrument, there is a growing requirement for the evaluation of the imaging performance of the system. The point-spread function (PSF) is an important approach to evaluating the imaging capability of an optical instrument. Among a variety of measurement methods of the PSF, the point source method has been widely used because it is easy to operate and the measurement results closely approximate the true PSF. In the point source method, the point source size has a significant impact on the final measurement accuracy. In this paper, the influence of the point source size on the measurement accuracy of the PSF is analyzed and verified experimentally. A theoretical model of the lateral PSF for CRM is established and the effect of point source size on the full-width at half maximum of the lateral PSF is simulated. For long-term preservation and measurement convenience, a PSF measurement phantom using polydimethylsiloxane resin doped with different sizes of polystyrene microspheres is designed. The PSFs of the CRM with different sizes of microspheres are measured and the results are compared with the simulation results. The results provide a guide for measuring the PSF of the CRM.

  17. Multisensory softness perceived compliance from multiple sources of information

    Luca, Massimiliano Di

    2014-01-01

    Offers a unique multidisciplinary overview of how humans interact with soft objects and how multiple sensory signals are used to perceive material properties, with an emphasis on object deformability. The authors describe a range of setups that have been employed to study and exploit sensory signals involved in interactions with compliant objects as well as techniques to simulate and modulate softness - including a psychophysical perspective of the field. Multisensory Softness focuses on the cognitive mechanisms underlying the use of multiple sources of information in softness perception. D

  18. Tapping the zero-point energy as an energy source

    King, M.B.

    1991-01-01

    This paper reports that the hypothesis for tapping the zero-point energy (ZPE) arises by combining the theories of the ZPE with the theories of system self-organization. The vacuum polarization of atomic nuclei might allow their synchronous motion to activate a ZPE coherence. Experimentally observed plasma ion-acoustic anomalies as well as inventions utilizing cycloid ion motions may offer supporting evidence. The suggested experiment of rapidly circulating a charged plasma in a vortex ring might induce a sufficient zero-point energy interaction to manifest a gravitational anomaly. An invention utilizing abrupt E field rotation to create virtual charge exhibits excessive energy output

  19. History Matching Through a Smooth Formulation of Multiple-Point Statistics

    Melnikova, Yulia; Zunino, Andrea; Lange, Katrine

    2014-01-01

    We propose a smooth formulation of multiple-point statistics that enables us to solve inverse problems using gradient-based optimization techniques. We introduce a differentiable function that quantifies the mismatch between multiple-point statistics of a training image and of a given model. We show that, by minimizing this function, any continuous image can be gradually transformed into an image that honors the multiple-point statistics of the discrete training image. The solution to an inverse problem is then found by minimizing the sum of two mismatches: the mismatch with data and the mismatch with multiple-point statistics. As a result, in the framework of the Bayesian approach, such a solution belongs to a high posterior region. The methodology, while applicable to any inverse problem with a training-image-based prior, is especially beneficial for problems which require expensive...

  20. Tracking of Multiple Moving Sources Using Recursive EM Algorithm

    Böhme Johann F

    2005-01-01

    Full Text Available We deal with recursive direction-of-arrival (DOA) estimation of multiple moving sources. Based on the recursive EM algorithm, we develop two recursive procedures to estimate the time-varying DOA parameter for narrowband signals. The first procedure requires no prior knowledge about the source movement. The second procedure assumes that the motion of moving sources is described by a linear polynomial model. The proposed recursion updates the polynomial coefficients when new data arrive. The suggested approaches have two major advantages: simple implementation and easy extension to wideband signals. Numerical experiments show that both procedures provide excellent results in a slowly changing environment. When the DOA parameter changes fast or two source directions cross each other, the procedure designed for a linear polynomial model has a better performance than the general procedure. Compared to the beamforming technique based on the same parameterization, our approach is computationally favorable and has a wider range of applications.

  1. Characterization of non point source pollutants and their dispersion ...

    landing site in Uganda. N. Banadda. Agricultural and Bio-Systems Engineering Department, Makerere University, P. O. Box 7062, Kampala, Uganda. E-mail: banadda@agric.mak.ac.ug. Fax: +256-414-53.16.41. Accepted 5 January, 2011. The aim of this research is to characterize non point pollutants and their dispersion in ...

  2. Family of Quantum Sources for Improving Near Field Accuracy in Transducer Modeling by the Distributed Point Source Method

    Dominique Placko

    2016-10-01

    Full Text Available The distributed point source method, or DPSM, developed in the last decade has been used for solving various engineering problems, such as elastic and electromagnetic wave propagation, electrostatics, and fluid flow. Based on a semi-analytical formulation, the DPSM solution is generally built by superimposing the point source solutions or Green's functions. However, the DPSM solution can also be obtained by superimposing elemental solutions of volume sources having some source density called the equivalent source density (ESD). In earlier works mostly point sources were used. In this paper the DPSM formulation is modified to introduce a new kind of ESD, replacing the classical single point source by a family of point sources that are referred to as quantum sources. The proposed formulation with these quantum sources does not change the dimension of the global matrix to be inverted to solve the problem when compared with the classical point source-based DPSM formulation. To assess the performance of this new formulation, the ultrasonic field generated by a circular planar transducer was compared with the classical DPSM formulation and the analytical solution. The results show a significant improvement in the near-field computation.
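
    In schematic form, the field synthesis the abstract refers to is a superposition of point-source Green's functions (our generic scalar notation, shown only for orientation; the elastic and electromagnetic variants of DPSM use the corresponding tensor Green's functions):

    \[
    p(\mathbf{r}) \;=\; \sum_{m=1}^{M} A_{m}\, G(\mathbf{r},\mathbf{r}_{m}),
    \qquad
    G(\mathbf{r},\mathbf{r}_{m}) \;=\; \frac{e^{\,ik\,\lvert \mathbf{r}-\mathbf{r}_{m}\rvert}}{\lvert \mathbf{r}-\mathbf{r}_{m}\rvert},
    \]

    where the source strengths A_m (the equivalent source density) are obtained by inverting the matrix that enforces the boundary conditions on the transducer and interface surfaces. The quantum-source variant described above replaces each single point source by a small family of point sources while keeping that global matrix the same size.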

  3. Multiple LDPC decoding for distributed source coding and video coding

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate (LDPCA) codes in a DSC scheme with feedback. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  4. Assimilation of concentration measurements for retrieving multiple point releases in atmosphere: A least-squares approach to inverse modelling

    Singh, Sarvesh Kumar; Rani, Raj

    2015-10-01

    The study addresses the identification of multiple point sources, emitting the same tracer, from their limited set of merged concentration measurements. The identification, here, refers to the estimation of locations and strengths of a known number of simultaneous point releases. The source-receptor relationship is described in the framework of adjoint modelling by using an analytical Gaussian dispersion model. A least-squares minimization framework, free from an initialization of the release parameters (locations and strengths), is presented to estimate the release parameters. This utilizes the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise free and exactly described by the dispersion model. The inversion algorithm is evaluated using the real data from multiple (two, three and four) releases conducted during Fusion Field Trials in September 2007 at Dugway Proving Ground, Utah. The release locations are retrieved, on average, within 25-45 m of the true sources with the distance from retrieved to true source ranging from 0 to 130 m. The release strengths are also estimated within a factor of three to the true release rates. The average deviations in retrieval of source locations are observed relatively large in two release trials in comparison to three and four release trials.
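
    As a rough, self-contained illustration of the estimation task described above (superposed point releases recovered from merged concentration measurements), the sketch below fits release strengths and locations by nonlinear least squares against a ground-level Gaussian-plume forward model. The plume model, dispersion parameters, synthetic receptors and the use of an explicit first guess are all assumptions for illustration; the paper's adjoint-based, initialization-free formulation is not reproduced here.

```python
# Schematic sketch: multiple point-release estimation by nonlinear least squares.
# Everything below (plume model, sigma growth, geometry, noise) is illustrative.
import numpy as np
from scipy.optimize import least_squares

U = 3.0                     # assumed wind speed along x (m/s)
SIGMA = lambda x: 0.1 * x   # crude sigma_y = sigma_z growth with downwind distance

def plume(q, xs, ys, rx, ry):
    """Ground-level concentration at receptors (rx, ry) from one release (q, xs, ys)."""
    dx, dy = rx - xs, ry - ys
    c = np.zeros_like(rx, dtype=float)
    down = dx > 0                          # only downwind receptors see the plume
    s = SIGMA(dx[down])
    c[down] = q / (np.pi * U * s**2) * np.exp(-dy[down]**2 / (2 * s**2))
    return c

def forward(params, rx, ry, n_src):
    """Superpose n_src releases; params = [q1, x1, y1, q2, x2, y2, ...]."""
    total = np.zeros_like(rx, dtype=float)
    for i in range(n_src):
        q, xs, ys = params[3 * i:3 * i + 3]
        total += plume(q, xs, ys, rx, ry)
    return total

# Synthetic experiment: two true releases observed at a grid of receptors.
rng = np.random.default_rng(1)
rx, ry = np.meshgrid(np.linspace(50, 500, 10), np.linspace(-200, 200, 10))
rx, ry = rx.ravel(), ry.ravel()
true = np.array([5.0, 0.0, -50.0, 8.0, 0.0, 60.0])     # (q, x, y) per source
obs = forward(true, rx, ry, 2) + rng.normal(0, 1e-4, rx.size)

# Generic least squares (with an explicit first guess, unlike the paper's method).
guess = np.array([1.0, 10.0, 0.0, 1.0, 10.0, 10.0])
fit = least_squares(lambda p: forward(p, rx, ry, 2) - obs, guess,
                    bounds=([0, -100, -300] * 2, [100, 100, 300] * 2))
print(fit.x.reshape(2, 3))   # estimated (strength, x, y) for each release
```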

  5. Research on amplification multiple of source neutron number for ADS

    Liu Guisheng; Zhao Zhixiang; Zhang Baocheng; Shen Qingbiao; Ding Dazhao

    1998-01-01

    The NJOY-91.91 and MILER code systems were applied to process and generate 44-group cross sections in AMPX master library format from CENDL-2 and ENDF/B-6. It is important that an ADS (Accelerator-Driven System) assembly spectrum be used as the weighting spectrum for generating multi-group constants. Amplification multiples of the source neutron number for several fast assemblies were calculated.

  6. Kernel integration scatter model for parallel beam gamma camera and SPECT point source response

    Marinkovic, P.M.

    2001-01-01

    Scatter correction is a prerequisite for quantitative single photon emission computed tomography (SPECT). In this paper a kernel integration scatter model for parallel beam gamma camera and SPECT point source response based on the Klein-Nishina formula is proposed. This method models the primary photon distribution as well as first Compton scattering. It also includes a correction for multiple scattering by applying a point isotropic single medium buildup factor for the path segment between the point of scatter and the point of detection. Gamma ray attenuation in the object of imaging, based on a known μ-map distribution, is considered too. The intrinsic spatial resolution of the camera is approximated by a simple Gaussian function. The collimator is modeled simply using acceptance angles derived from the physical dimensions of the collimator. Any gamma rays satisfying this angle were passed through the collimator to the crystal. Septal penetration and scatter in the collimator were not included in the model. The method was validated by comparison with Monte Carlo MCNP-4a numerical phantom simulation and excellent results were obtained. Physical phantom experiments to confirm this method are planned. (author)
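
    For context, the single-scatter part of such a kernel is typically built on the Klein-Nishina differential cross section, quoted here in its standard form rather than taken from the paper:

    \[
    \frac{d\sigma}{d\Omega} \;=\; \frac{r_{e}^{2}}{2}\left(\frac{E'}{E}\right)^{2}\!\left(\frac{E'}{E}+\frac{E}{E'}-\sin^{2}\theta\right),
    \qquad
    \frac{E'}{E} \;=\; \frac{1}{1+\dfrac{E}{m_{e}c^{2}}\,(1-\cos\theta)},
    \]

    where r_e is the classical electron radius and E and E' are the photon energies before and after scattering through the angle θ; the buildup factor mentioned in the abstract then approximates the contribution of higher-order scattering along the scatter-to-detector path.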

  7. Nomogram for Determining Shield Thickness for Point and Line Sources of Gamma Rays

    Joenemalm, C.; Malen, K

    1966-10-01

    A set of nomograms is given for the determination of the required shield thickness against gamma radiation. The sources handled are point and infinite line sources with shields of Pb, Fe, magnetite concrete (ρ = 3.6), ordinary concrete (ρ = 2.3) or water. The gamma energy range covered is 0.5 - 10 MeV. The nomograms are directly applicable for source and dose points on the surfaces of the shield. They can easily be extended to source and dose points in other positions by applying a geometrical correction. Also included are data for calculation of the source strength for the most common materials and for fission product sources.

  8. Nomogram for Determining Shield Thickness for Point and Line Sources of Gamma Rays

    Joenemalm, C; Malen, K

    1966-10-15

    A set of nomograms is given for the determination of the required shield thickness against gamma radiation. The sources handled are point and infinite line sources with shields of Pb, Fe, magnetite concrete (ρ = 3.6), ordinary concrete (ρ = 2.3) or water. The gamma energy range covered is 0.5 - 10 MeV. The nomograms are directly applicable for source and dose points on the surfaces of the shield. They can easily be extended to source and dose points in other positions by applying a geometrical correction. Also included are data for calculation of the source strength for the most common materials and for fission product sources.

  9. Tokamak startup using point-source dc helicity injection.

    Battaglia, D J; Bongard, M W; Fonck, R J; Redd, A J; Sontag, A C

    2009-06-05

    Startup of a 0.1 MA tokamak plasma is demonstrated on the ultralow aspect ratio Pegasus Toroidal Experiment using three localized, high-current density sources mounted near the outboard midplane. The injected open field current relaxes via helicity-conserving magnetic turbulence into a tokamaklike magnetic topology where the maximum sustained plasma current is determined by helicity balance and the requirements for magnetic relaxation.

  10. 75 FR 69591 - Medicaid Program; Withdrawal of Determination of Average Manufacturer Price, Multiple Source Drug...

    2010-11-15

    ..., Multiple Source Drug Definition, and Upper Limits for Multiple Source Drugs AGENCY: Centers for Medicare... withdrawing the definition of ``multiple source drug'' as it was revised in the ``Medicaid Program; Multiple Source Drug Definition'' final rule published in the October 7, 2008 Federal Register. DATES: Effective...

  11. Common Fixed Points of Generalized Rational Type Cocyclic Mappings in Multiplicative Metric Spaces

    Mujahid Abbas

    2015-01-01

    Full Text Available The aim of this paper is to present a fixed point result for mappings satisfying a generalized rational contractive condition in the setup of multiplicative metric spaces. As an application, we obtain a common fixed point of a pair of weakly compatible mappings. Some common fixed point results for a pair of rational contractive type mappings involved in a cocyclic representation of a nonempty subset of a multiplicative metric space are also obtained. Some examples are presented to support the results proved herein. Our results generalize and extend various results in the existing literature.

  12. Multiple approaches to microbial source tracking in tropical northern Australia

    Neave, Matthew

    2014-09-16

    Microbial source tracking is an area of research in which multiple approaches are used to identify the sources of elevated bacterial concentrations in recreational lakes and beaches. At our study location in Darwin, northern Australia, water quality in the harbor is generally good, however dry-season beach closures due to elevated Escherichia coli and enterococci counts are a cause for concern. The sources of these high bacteria counts are currently unknown. To address this, we sampled sewage outfalls, other potential inputs, such as urban rivers and drains, and surrounding beaches, and used genetic fingerprints from E. coli and enterococci communities, fecal markers and 454 pyrosequencing to track contamination sources. A sewage effluent outfall (Larrakeyah discharge) was a source of bacteria, including fecal bacteria that impacted nearby beaches. Two other treated effluent discharges did not appear to influence sites other than those directly adjacent. Several beaches contained fecal indicator bacteria that likely originated from urban rivers and creeks within the catchment. Generally, connectivity between the sites was observed within distinct geographical locations and it appeared that most of the bacterial contamination on Darwin beaches was confined to local sources.

  13. Parallel point-multiplication architecture using combined group operations for high-speed cryptographic applications.

    Md Selim Hossain

    Full Text Available In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary field which is recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports two Koblitz and random curves for the key sizes 233 and 163 bits. For group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison, which takes around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance ([Formula: see text]) and Area × Time × Energy (ATE) product of the proposed design are far better than those of the most significant studies found in the literature.
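
    To make the underlying group operation concrete, the sketch below shows scalar point multiplication via the classic double-and-add schedule. For readability it uses affine coordinates over a tiny prime field rather than the binary-field Jacobian-projective arithmetic and combined PDPA unit of the paper; the toy curve y² = x³ + 2x + 3 over GF(97) and its base point are chosen only for the example.

```python
# Minimal illustration of elliptic-curve point multiplication by double-and-add.
# Affine coordinates over a small prime field; not the paper's GF(2^m) design.
P_MOD = 97
A, B = 2, 3
INF = None  # point at infinity

def inv_mod(x, p=P_MOD):
    return pow(x, p - 2, p)          # Fermat inverse (p prime)

def ec_add(P, Q):
    """Add two points on the curve (handles doubling and the identity)."""
    if P is INF:
        return Q
    if Q is INF:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF                                            # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + A) * inv_mod(2 * y1) % P_MOD     # tangent slope
    else:
        lam = (y2 - y1) * inv_mod(x2 - x1) % P_MOD            # chord slope
    x3 = (lam * lam - x1 - x2) % P_MOD
    y3 = (lam * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def ec_mul(k, P):
    """Left-to-right double-and-add scalar multiplication k*P."""
    R = INF
    for bit in bin(k)[2:]:
        R = ec_add(R, R)             # double every bit
        if bit == "1":
            R = ec_add(R, P)         # add on set bits
    return R

G = (3, 6)   # 6^2 = 36 and 3^3 + 2*3 + 3 = 36 (mod 97), so G lies on the curve
print(ec_mul(20, G))
```

    For example, ec_mul(20, G) walks the five bits of 20 (10100 in binary) with five doublings and two additions; hardware designs like the one above gain speed by merging the doubling and addition steps and by avoiding field inversions through projective coordinates.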

  14. [A landscape ecological approach for urban non-point source pollution control].

    Guo, Qinghai; Ma, Keming; Zhao, Jingzhu; Yang, Liu; Yin, Chengqing

    2005-05-01

    Urban non-point source pollution is a new problem that has appeared with the rapid development of urbanization. The particularity of urban land use and the increase in impervious surface area make urban non-point source pollution differ from agricultural non-point source pollution and more difficult to control. Best Management Practices (BMPs) are the measures commonly applied in controlling urban non-point source pollution, mainly adopting local remediation practices to control the pollutants in surface runoff. Because of the close relationship between urban land use patterns and non-point source pollution, it would be rational to combine landscape ecological planning with local BMPs to control urban non-point source pollution. This requires, firstly, analyzing and evaluating the influence of landscape structure on water bodies, pollution sources and pollutant removal processes, in order to define the relationships between landscape spatial pattern and non-point source pollution and to identify the key polluted areas; and secondly, adjusting inherent landscape structures and/or adding new landscape elements to form a new landscape pattern, and integrating landscape planning and management by applying BMPs within the planning to improve urban landscape heterogeneity and control urban non-point source pollution.

  15. Localizing Brain Activity from Multiple Distinct Sources via EEG

    George Dassios

    2014-01-01

    Full Text Available An important question arising in the framework of electroencephalography (EEG) is the possibility of recognizing, by means of a recorded surface potential, the number of activated areas in the brain. In the present paper, employing a homogeneous spherical conductor serving as an approximation of the brain, we provide a criterion which determines whether the measured surface potential is evoked by a single or multiple localized neuronal excitations. We show that the uniqueness of the inverse problem for a single dipole is closely connected with attaining certain relations connecting the measured data. Further, we present the necessary and sufficient conditions which decide whether the collected data originates from a single dipole or from numerous dipoles. In the case where the EEG data arises from multiple parallel dipoles, an isolation of the source is, in general, not possible.

  16. Calculation of dose for β point and sphere sources in soft tissue

    Sun Fuyin; Yuan Shuyu; Tan Jian

    1999-01-01

    Objective: To compare the results of the distribution of dose rate calculated by three typical methods for point and sphere sources of a β nuclide. Methods: Calculating and comparing the distributions of dose rate from 32P β point and sphere sources in soft tissue calculated by the three methods published in references [1], [2] and [3], respectively. Results: For the point source of 3.7 × 10⁷ Bq (1 mCi), the variations of the calculation results of the three formulas are within 10% if r ≤ 0.35 g/cm², r being the distance from the source, and larger than 10% if r > 0.35 g/cm². For the sphere source whose volume is 50 μl and activity is 3.7 × 10⁷ Bq (1 mCi), the variations are within 10% if z ≤ 0.15 g/cm², z being the distance from the surface of the sphere source to a point outside the sphere. Conclusion: The agreement of the distributions of the dose rate calculated by the three methods mentioned above for point and sphere β sources is good if the distances from the point source or the surface of the sphere source to the points observed are small, and poor if they are large

  17. Improving the Pattern Reproducibility of Multiple-Point-Based Prior Models Using Frequency Matching

    Cordua, Knud Skou; Hansen, Thomas Mejer; Mosegaard, Klaus

    2014-01-01

    Some multiple-point-based sampling algorithms, such as the snesim algorithm, rely on sequential simulation. The conditional probability distributions that are used for the simulation are based on statistics of multiple-point data events obtained from a training image. During the simulation, data events with zero probability in the training image statistics may occur. This is handled by pruning the set of conditioning data until an event with non-zero probability is found. The resulting probability distribution sampled by such algorithms is a pruned mixture model. The pruning strategy leads to a probability distribution that lacks some of the information provided by the multiple-point statistics from the training image, which reduces the reproducibility of the training image patterns in the outcome realizations. When pruned mixture models are used as prior models for inverse problems, local re...

  18. A feature point identification method for positron emission particle tracking with multiple tracers

    Wiggins, Cody, E-mail: cwiggin2@vols.utk.edu [University of Tennessee-Knoxville, Department of Physics and Astronomy, 1408 Circle Drive, Knoxville, TN 37996 (United States); Santos, Roque [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States); Escuela Politécnica Nacional, Departamento de Ciencias Nucleares (Ecuador); Ruggles, Arthur [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States)

    2017-01-21

    A novel detection algorithm for Positron Emission Particle Tracking (PEPT) with multiple tracers based on optical feature point identification (FPI) methods is presented. This new method, the FPI method, is compared to a previous multiple PEPT method via analyses of experimental and simulated data. The FPI method outperforms the older method in cases of large particle numbers and fine time resolution. Simulated data show the FPI method to be capable of identifying 100 particles at 0.5 mm average spatial error. Detection error is seen to vary with the inverse square root of the number of lines of response (LORs) used for detection and increases as particle separation decreases. - Highlights: • A new approach to positron emission particle tracking is presented. • Using optical feature point identification analogs, multiple particle tracking is achieved. • Method is compared to previous multiple particle method. • Accuracy and applicability of method is explored.

  19. PUBLIC EXPOSURE TO MULTIPLE RF SOURCES IN GHANA.

    Deatanyah, P; Abavare, E K K; Menyeh, A; Amoako, J K

    2018-03-16

    This paper describes an effort to respond to the suggestion in the World Health Organization (WHO) research agenda to better quantify potential exposure levels from a range of radiofrequency (RF) sources at 200 public access locations in Ghana. Wide-band measurements were performed with a spectrum analyser and a log-periodic antenna using a three-point spatial averaging method. The overall results represented a maximum of 0.19% of the ICNIRP reference levels for public exposure. These results were generally lower than those found in some previous studies but were 58% (2.0 dB) greater than those found in similar work conducted in the USA. Major contributing sources of RF fields were identified to be FM broadcast and mobile base station sites. Three locations with the greatest measured RF fields could represent potential areas for epidemiological studies.

  20. The Development and Application of Spatiotemporal Metrics for the Characterization of Point Source FFCO2 Emissions and Dispersion

    Roten, D.; Hogue, S.; Spell, P.; Marland, E.; Marland, G.

    2017-12-01

    There is an increasing role for high resolution, CO2 emissions inventories across multiple arenas. The breadth of the applicability of high-resolution data is apparent from their use in atmospheric CO2 modeling, their potential for validation of space-based atmospheric CO2 remote-sensing, and the development of climate change policy. This work focuses on increasing our understanding of the uncertainty in these inventories and the implications on their downstream use. The industrial point sources of emissions (power generating stations, cement manufacturing plants, paper mills, etc.) used in the creation of these inventories often have robust emissions characteristics, beyond just their geographic location. Physical parameters of the emission sources such as number of exhaust stacks, stack heights, stack diameters, exhaust temperatures, and exhaust velocities, as well as temporal variability and climatic influences can be important in characterizing emissions. Emissions from large point sources can behave much differently than emissions from areal sources such as automobiles. For many applications geographic location is not an adequate characterization of emissions. This work demonstrates the sensitivities of atmospheric models to the physical parameters of large point sources and provides a methodology for quantifying parameter impacts at multiple locations across the United States. The sensitivities highlight the importance of location and timing and help to highlight potential aspects that can guide efforts to reduce uncertainty in emissions inventories and increase the utility of the models.

  1. Simultaneous Determination of Source Wavelet and Velocity Profile Using Impulsive Point-Source Reflections from a Layered Fluid

    Bube, K; Lailly, P; Sacks, P; Santosa, F; Symes, W. W

    1987-01-01

    .... We show that a quasi-impulsive, isotropic point source may be recovered simultaneously with the velocity profile from reflection data over a layered fluid, in linear (perturbation) approximation...

  2. Method of Fusion Diagnosis for Dam Service Status Based on Joint Distribution Function of Multiple Points

    Zhenxiang Jiang

    2016-01-01

    Full Text Available Traditional methods of diagnosing dam service status are applicable only to a single measuring point. Such methods reflect the local status of a dam without effectively merging multi-source data, and are therefore not well suited to diagnosing overall service status. This study proposes a new multiple-point method for diagnosing dam service status based on a joint distribution function. The joint distribution function over the monitoring data of multiple points can be established with a t-copula function. The probability, an important fused value for different measuring-point combinations, can then be calculated, and the corresponding diagnosis criterion is established with classical small-probability theory. An engineering case study indicates that the fusion diagnosis method can be conducted in real time and that abnormal points can be detected, thereby providing a new early-warning method for engineering safety.
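
    In general terms (our notation, not necessarily the paper's), a t-copula joint distribution over the monitored quantities at n measuring points has the form

    \[
    F(x_{1},\dots,x_{n}) \;=\; C_{\nu,R}\bigl(F_{1}(x_{1}),\dots,F_{n}(x_{n})\bigr),
    \qquad
    C_{\nu,R}(u_{1},\dots,u_{n}) \;=\; t_{\nu,R}\bigl(t_{\nu}^{-1}(u_{1}),\dots,t_{\nu}^{-1}(u_{n})\bigr),
    \]

    where t_{ν,R} is the multivariate Student t distribution function with correlation matrix R and ν degrees of freedom, t_ν^{-1} is the univariate t quantile function, and F_i is the marginal distribution fitted to the monitoring series at point i; small joint probabilities under the fitted model are then flagged against a small-probability criterion of the kind the abstract mentions.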

  3. Writing in the workplace: Constructing documents using multiple digital sources

    Mariëlle Leijten

    2014-02-01

    Full Text Available In today’s workplaces professional communication often involves constructing documents from multiple digital sources—integrating one’s own texts/graphics with ideas based on others’ text/graphics. This article presents a case study of a professional communication designer as he constructs a proposal over several days. Drawing on keystroke and interview data, we map the professional’s overall process, plot the time course of his writing/design, illustrate how he searches for content and switches among optional digital sources, and show how he modifies and reuses others’ content. The case study reveals not only that the professional (1 searches extensively through multiple sources for content and ideas but that he also (2 constructs visual content (charts, graphs, photographs as well as verbal content, and (3 manages his attention and motivation over this extended task. Since these three activities are not represented in current models of writing, we propose their addition not just to models of communication design, but also to models of writing in general.

  4. Determination of shell correction energies at saddle point using pre-scission neutron multiplicities

    Golda, K.S.; Saxena, A.; Mittal, V.K.; Mahata, K.; Sugathan, P.; Jhingan, A.; Singh, V.; Sandal, R.; Goyal, S.; Gehlot, J.; Dhal, A.; Behera, B.R.; Bhowmik, R.K.; Kailas, S.

    2013-01-01

    Pre-scission neutron multiplicities have been measured for 12C + 194,198Pt systems at matching excitation energies in the near-Coulomb-barrier region. A statistical model analysis with a modified fission barrier and level density prescription has been carried out to fit the measured pre-scission neutron multiplicities and the available evaporation residue and fission cross sections simultaneously, in order to constrain the statistical model parameters. Simultaneous fitting of the pre-scission neutron multiplicities and cross-section data requires a shell correction at the saddle point

  5. Point source search techniques in ultra high energy gamma ray astronomy

    Alexandreas, D.E.; Biller, S.; Dion, G.M.; Lu, X.Q.; Yodh, G.B.; Berley, D.; Goodman, J.A.; Haines, T.J.; Hoffman, C.M.; Horch, E.; Sinnis, C.; Zhang, W.

    1993-01-01

    Searches for point astrophysical sources of ultra high energy (UHE) gamma rays are plagued by large numbers of background events from isotropic cosmic rays. Some of the methods that have been used to estimate the expected number of background events coming from the direction of a possible source are found to contain biases. Search techniques that avoid this problem are described. There is also a discussion of how to optimize the sensitivity of a search to emission from a point source. (orig.)

  6. Optical identifications of IRAS point sources: the Fornax, Hydra I and Coma clusters

    Wang, G.; Leggett, S.K.; Savage, A.

    1991-01-01

    We present optical identifications for 66 IRAS point sources in the region of the Fornax cluster of galaxies, 106 IRAS point sources in the region of the Hydra I cluster of galaxies (Abell 1060) and 59 IRAS point sources in the region of the Coma cluster of galaxies (Abell 1656). Eight other sources in Hydra I do not have optical counterparts and are very probably due to infrared cirrus. Twenty-three (35 per cent) of the Fornax sources are associated with stars and 43 (65 per cent) with galaxies; 48 (42 per cent) of the Hydra I sources are associated with stars and 58 (51 per cent) with galaxies; 18 (31 per cent) of the Coma sources are associated with stars and 41 (69 per cent) with galaxies. The stellar and infrared cirrus surface density is consistent with the galactic latitude of each field. (author)

  7. 75 FR 10438 - Effluent Limitations Guidelines and Standards for the Construction and Development Point Source...

    2010-03-08

    ... Effluent Limitations Guidelines and Standards for the Construction and Development Point Source Category... technology-based Effluent Limitations Guidelines and New Source Performance Standards for the Construction... technology-based Effluent Limitations Guidelines and New Source Performance Standards for the Construction...

  8. Experimental properties of gluon and quark jets from a point source

    Abbiendi, G.; Alexander, G.; Allison, John; Altekamp, N.; Anderson, K.J.; Anderson, S.; Arcelli, S.; Asai, S.; Ashby, S.F.; Axen, D.; Azuelos, G.; Ball, A.H.; Barberio, E.; Barlow, Roger J.; Batley, J.R.; Baumann, S.; Bechtluft, J.; Behnke, T.; Bell, Kenneth Watson; Bella, G.; Bellerive, A.; Bentvelsen, S.; Bethke, S.; Betts, S.; Biebel, O.; Biguzzi, A.; Blobel, V.; Bloodworth, I.J.; Bock, P.; Bohme, J.; Bonacorsi, D.; Boutemeur, M.; Braibant, S.; Bright-Thomas, P.; Brigliadori, L.; Brown, Robert M.; Burckhart, H.J.; Capiluppi, P.; Carnegie, R.K.; Carter, A.A.; Carter, J.R.; Chang, C.Y.; Charlton, David G.; Chrisman, D.; Ciocca, C.; Clarke, P.E.L.; Clay, E.; Cohen, I.; Conboy, J.E.; Cooke, O.C.; Couyoumtzelis, C.; Coxe, R.L.; Cuffiani, M.; Dado, S.; Dallavalle, G.Marco; Davis, R.; De Jong, S.; de Roeck, A.; Dervan, P.; Desch, K.; Dienes, B.; Dixit, M.S.; Dubbert, J.; Duchovni, E.; Duckeck, G.; Duerdoth, I.P.; Estabrooks, P.G.; Etzion, E.; Fabbri, F.; Fanfani, A.; Fanti, M.; Faust, A.A.; Fiedler, F.; Fierro, M.; Fleck, I.; Folman, R.; Frey, A.; Furtjes, A.; Futyan, D.I.; Gagnon, P.; Gary, J.W.; Gascon, J.; Gascon-Shotkin, S.M.; Gaycken, G.; Geich-Gimbel, C.; Giacomelli, G.; Giacomelli, P.; Gibson, V.; Gibson, W.R.; Gingrich, D.M.; Glenzinski, D.; Goldberg, J.; Gorn, W.; Grandi, C.; Graham, K.; Gross, E.; Grunhaus, J.; Gruwe, M.; Hanson, G.G.; Hansroul, M.; Hapke, M.; Harder, K.; Harel, A.; Hargrove, C.K.; Hauschild, M.; Hawkes, C.M.; Hawkings, R.; Hemingway, R.J.; Herndon, M.; Herten, G.; Heuer, R.D.; Hildreth, M.D.; Hill, J.C.; Hobson, P.R.; Hoch, M.; Hocker, James Andrew; Hoffman, Kara Dion; Homer, R.J.; Honma, A.K.; Horvath, D.; Hossain, K.R.; Howard, R.; Huntemeyer, P.; Igo-Kemenes, P.; Imrie, D.C.; Ishii, K.; Jacob, F.R.; Jawahery, A.; Jeremie, H.; Jimack, M.; Jones, C.R.; Jovanovic, P.; Junk, T.R.; Kanzaki, J.; Karlen, D.; Kartvelishvili, V.; Kawagoe, K.; Kawamoto, T.; Kayal, P.I.; Keeler, R.K.; Kellogg, R.G.; Kennedy, B.W.; Kim, D.H.; Klier, A.; Kobayashi, T.; Kobel, M.; Kokott, T.P.; Kolrep, M.; Komamiya, S.; Kowalewski, Robert V.; Kress, T.; Krieger, P.; von Krogh, J.; Kuhl, T.; Kyberd, P.; Lafferty, G.D.; Landsman, H.; Lanske, D.; Lauber, J.; Lautenschlager, S.R.; Lawson, I.; Layter, J.G.; Lee, A.M.; Lellouch, D.; Letts, J.; Levinson, L.; Liebisch, R.; List, B.; Littlewood, C.; Lloyd, A.W.; Lloyd, S.L.; Loebinger, F.K.; Long, G.D.; Losty, M.J.; Lu, J.; Ludwig, J.; Lui, D.; Macchiolo, A.; Macpherson, A.; Mader, W.; Mannelli, M.; Marcellini, S.; Markopoulos, C.; Martin, A.J.; Martin, J.P.; Martinez, G.; Mashimo, T.; Mattig, Peter; McDonald, W.John; McKenna, J.; Mckigney, E.A.; McMahon, T.J.; McPherson, R.A.; Meijers, F.; Menke, S.; Merritt, F.S.; Mes, H.; Meyer, J.; Michelini, A.; Mihara, S.; Mikenberg, G.; Miller, D.J.; Mir, R.; Mohr, W.; Montanari, A.; Mori, T.; Nagai, K.; Nakamura, I.; Neal, H.A.; Nisius, R.; O'Neale, S.W.; Oakham, F.G.; Odorici, F.; Ogren, H.O.; Oreglia, M.J.; Orito, S.; Palinkas, J.; Pasztor, G.; Pater, J.R.; Patrick, G.N.; Patt, J.; Perez-Ochoa, R.; Petzold, S.; Pfeifenschneider, P.; Pilcher, J.E.; Pinfold, J.; Plane, David E.; Poffenberger, P.; Poli, B.; Polok, J.; Przybycien, M.; Rembser, C.; Rick, H.; Robertson, S.; Robins, S.A.; Rodning, N.; Roney, J.M.; Rosati, S.; Roscoe, K.; Rossi, A.M.; Rozen, Y.; Runge, K.; Runolfsson, O.; Rust, D.R.; Sachs, K.; Saeki, T.; Sahr, O.; Sang, W.M.; Sarkisian, E.K.G.; Sbarra, C.; Schaile, A.D.; Schaile, O.; Scharff-Hansen, P.; Schieck, J.; Schmitt, S.; Schoning, A.; Schroder, Matthias; Schumacher, M.; Schwick, C.; 
Scott, W.G.; Seuster, R.; Shears, T.G.; Shen, B.C.; Shepherd-Themistocleous, C.H.; Sherwood, P.; Siroli, G.P.; Sittler, A.; Skuja, A.; Smith, A.M.; Snow, G.A.; Sobie, R.; Soldner-Rembold, S.; Spagnolo, S.; Sproston, M.; Stahl, A.; Stephens, K.; Steuerer, J.; Stoll, K.; Strom, David M.; Strohmer, R.; Surrow, B.; Talbot, S.D.; Taras, P.; Tarem, S.; Teuscher, R.; Thiergen, M.; Thomas, J.; Thomson, M.A.; Torrence, E.; Towers, S.; Trigger, I.; Trocsanyi, Z.; Tsur, E.; Turcot, A.S.; Turner-Watson, M.F.; Ueda, I.; Van Kooten, Rick J.; Vannerem, P.; Verzocchi, M.; Voss, H.; Wackerle, F.; Wagner, A.; Ward, C.P.; Ward, D.R.; Watkins, P.M.; Watson, A.T.; Watson, N.K.; Wells, P.S.; Wermes, N.; White, J.S.; Wilson, G.W.; Wilson, J.A.; Wyatt, T.R.; Yamashita, S.; Yekutieli, G.; Zacek, V.; Zer-Zion, D.

    1999-01-01

    Gluon jets are identified in hadronic Z0 decays as all the particles in a hemisphere opposite to a hemisphere containing two tagged quark jets. Gluon jets defined in this manner are equivalent to gluon jets produced from a color singlet point source and thus correspond to the definition employed for most theoretical calculations. In a separate stage of the analysis, we select quark jets in a manner to correspond to calculations, as the particles in hemispheres of flavor tagged light quark (uds) events. We present the distributions of rapidity, scaled energy, the logarithm of the momentum, and transverse momentum with respect to the jet axes, for charged particles in these gluon and quark jets. We also examine the charged particle multiplicity distributions of the jets in restricted intervals of rapidity. For soft particles at large transverse momentum, we observe the charged particle multiplicity ratio of gluon to quark jets to be 2.29 ± 0.09 ± 0.15, in agreement with the prediction that this ratio should approximately equal the ratio of QCD color factors, C_A/C_F = 2.25.

  9. 77 FR 34211 - Modification of Multiple Compulsory Reporting Points; Continental United States, Alaska and Hawaii

    2012-06-11

    ... DEPARTMENT OF TRANSPORTATION Federal Aviation Administration 14 CFR Part 71 [Docket No. FAA-2012-0130; Airspace Docket No. 12-AWA-2] RIN 2120-AA66 Modification of Multiple Compulsory Reporting Points; Continental United States, Alaska and Hawaii AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final...

  10. Multiple sources of boron in urban surface waters and groundwaters

    Hasenmueller, Elizabeth A., E-mail: eahasenm@wustl.edu; Criss, Robert E.

    2013-03-01

    Previous studies attribute abnormal boron (B) levels in streams and groundwaters to wastewater and fertilizer inputs. This study shows that municipal drinking water used for lawn irrigation contributes substantial non-point loads of B and other chemicals (S-species, Li, and Cu) to surface waters and shallow groundwaters in the St. Louis, Missouri, area. Background levels and potential B sources were characterized by analysis of lawn and street runoff, streams, rivers, springs, local rainfall, wastewater influent and effluent, and fertilizers. Urban surface waters and groundwaters are highly enriched in B (to 250 μg/L) compared to background levels found in rain and pristine, carbonate-hosted streams and springs (< 25 μg/L), but have similar concentrations (150 to 259 μg/L) compared to municipal drinking waters derived from the Missouri River. Other data including B/SO{sub 4}{sup 2-}−S and B/Li ratios confirm major contributions from this source. Moreover, sequential samples of runoff collected during storms show that B concentrations decrease with increased discharge, proving that elevated B levels are not primarily derived from combined sewer overflows (CSOs) during flooding. Instead, non-point source B exhibits complex behavior depending on land use. In urban settings B is rapidly mobilized from lawns during “first flush” events, likely representing surficial salt residues from drinking water used to irrigate lawns, and is also associated with the baseflow fraction, likely derived from the shallow groundwater reservoir that over time accumulates B from drinking water that percolates into the subsurface. The opposite occurs in small rural watersheds, where B is leached from soils by recent rainfall and covaries with the event water fraction. Highlights: ► Boron sources and loads differ between urban and rural watersheds. ► Wastewaters are not the major boron source in small St. Louis, MO watersheds. ► Municipal drinking water used for lawn

  11. Photonic crystals possessing multiple Weyl points and the experimental observation of robust surface states

    Chen, Wen-Jie; Xiao, Meng; Chan, C. T.

    2016-01-01

    Weyl points, as monopoles of Berry curvature in momentum space, have captured much attention recently in various branches of physics. Realizing topological materials that exhibit such nodal points is challenging and indeed, Weyl points have been found experimentally in transition metal arsenide and phosphide and gyroid photonic crystal whose structure is complex. If realizing even the simplest type of single Weyl nodes with a topological charge of 1 is difficult, then making a real crystal carrying higher topological charges may seem more challenging. Here we design, and fabricate using planar fabrication technology, a photonic crystal possessing single Weyl points (including type-II nodes) and multiple Weyl points with topological charges of 2 and 3. We characterize this photonic crystal and find nontrivial 2D bulk band gaps for a fixed kz and the associated surface modes. The robustness of these surface states against kz-preserving scattering is experimentally observed for the first time. PMID:27703140

  12. Interpretation of the TRADE In-Pile source multiplication experiments

    Mercatali, Luigi; Carta, Mario; Peluso, Vincenzo

    2006-01-01

    Within the framework of the neutronic characterization of the TRIGA RC-1 reactor in support of the TRADE (TRiga Accelerator Driven Experiment) program, the interpretation of the subcriticality level measurements performed in static regime during the TRADE In-Pile experimental program is presented. Different levels of subcriticality have been measured using the MSA (Modified Source Approximated) method by the insertion of a standard fixed radioactive source into different core positions. Starting from a reference configuration, fuel elements were removed: control rods were moved outward as required for the coupling experiments envisioned with the proton accelerator, and fission chambers were inserted in order to measure subcritical count rates. A neutron-physics analysis based on the modified formulation of the source multiplication method (MSM) has been carried out, which requires the systematic solution, for each experimental configuration, of the homogeneous (in both the forward and adjoint forms) and inhomogeneous Boltzmann equations. By means of such a methodology, calculated correction factors to be applied to the MSA measured reactivities were produced in order to take into account spatial and energetic effects that change the detector efficiencies and the effective source with respect to the calibration configuration. The methodology presented has been tested against a large number of experimental states. The measurements have underlined the sensitivity of the MSA measured reactivities to core geometry changes and control rod perturbations; the efficiency of the MSM factors in correcting for this sensitivity is underlined, making this technique a relevant methodology in view of the incoming US RACE program to be performed in TRIGA reactors
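
    For orientation, the source multiplication relations on which the MSA/MSM analysis rests are often written as follows; the symbols (detector efficiency ε, effective source S*, count rate C) and the normalization follow one common convention in the MSM literature and are shown only as a sketch, not as the paper's exact formulation:

    $$\rho_{\mathrm{MSA}} = \rho_{\mathrm{ref}}\,\frac{C_{\mathrm{ref}}}{C},\qquad \rho_{\mathrm{MSM}} = f_{\mathrm{MSM}}\,\rho_{\mathrm{MSA}},\qquad f_{\mathrm{MSM}} = \frac{\varepsilon\, S^{*}}{\varepsilon_{\mathrm{ref}}\, S^{*}_{\mathrm{ref}}},$$

    where the correction factor f_MSM is evaluated from the forward and adjoint flux solutions of the measured configuration and of the calibration (reference) configuration, which is what the Boltzmann-equation calculations described above provide.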

  13. Radio identifications of IRAS point sources with b greater than 30 deg

    Condon, J.J.; Broderick, J.J.; Virginia Polytechnic Institute and State Univ., Blacksburg)

    1986-01-01

    Radio identifications of IRAS point sources are presented on the basis of Green Bank 1400 MHz survey maps; 365 hot IR sources are not detectable radio sources, and nearly all cool high-latitude IRAS sources are extragalactic. The fainter IR-source identifications encompass optically bright quasars, BL Lac objects, Seyfert galaxies, and elliptical galaxies. No IRAS sources could be identified with distant elliptical radio galaxies, so that although the radio and IR fluxes of most IRAS extragalactic sources are tightly correlated, complete samples of strong radio and IR sources are almost completely disjoint; no more than 1 percent of the IR sources are radio sources and less than 1 percent of the radio sources are IR ones. 35 references

  14. Generating and executing programs for a floating point single instruction multiple data instruction set architecture

    Gschwind, Michael K

    2013-04-16

    Mechanisms for generating and executing programs for a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA) are provided. A computer program product comprising a computer recordable medium having a computer readable program recorded thereon is provided. The computer readable program, when executed on a computing device, causes the computing device to receive one or more instructions and execute the one or more instructions using logic in an execution unit of the computing device. The logic implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA), based on data stored in a vector register file of the computing device. The vector register file is configured to store both scalar and floating point values as vectors having a plurality of vector elements.

  15. Feature extraction from multiple data sources using genetic programming.

    Szymanski, J. J. (John J.); Brumby, Steven P.; Pope, P. A. (Paul A.); Eads, D. R. (Damian R.); Galassi, M. C. (Mark C.); Harvey, N. R. (Neal R.); Perkins, S. J. (Simon J.); Porter, R. B. (Reid B.); Theiler, J. P. (James P.); Young, A. C. (Aaron Cody); Bloch, J. J. (Jeffrey J.); David, N. A. (Nancy A.); Esch-Mosher, D. M. (Diana M.)

    2002-01-01

    Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. The tool used is the GENetic Imagery Exploitation (GENIE) software, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land-cover features including towns, grasslands, wild fire burn scars, and several types of forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.

  16. Solute transport with periodic input point source in one-dimensional ...

    JOY

    The groundwater flow velocity is considered proportional to a multiple of a temporal function. One-dimensional solute transport through porous media is studied for a domain that is initially solute free, with a periodic input concentration at the source boundary.

  17. Estimation of subcriticality by neutron source multiplication method

    Sakurai, Kiyoshi; Suzaki, Takenori; Arakawa, Takuya; Naito, Yoshitaka

    1995-03-01

    Subcritical cores were constructed in a core tank of the TCA by arraying 2.6% enriched UO2 fuel rods into n×n square lattices of 1.956 cm pitch. Vertical distributions of the neutron count rates for the fifteen subcritical cores (n=17, 16, 14, 11, 8) with different water levels were measured at 5 cm intervals with 235U micro-fission counters at in-core and out-of-core positions, with a 252Cf neutron source placed near the core center. The continuous energy Monte Carlo code MCNP-4A was used for the calculation of neutron multiplication factors and neutron count rates. In this study, the important conclusions are as follows: (1) Differences between neutron multiplication factors obtained from the exponential experiment and from MCNP-4A are below 1% in most cases. (2) Standard deviations of neutron count rates calculated from MCNP-4A with 500000 histories are 5-8%. The calculated neutron count rates are consistent with the measured ones. (author)
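
    A minimal sketch of the textbook relation behind the source multiplication method is given below; it assumes a fixed source and an unchanged detector efficiency and effective source between the reference and the unknown state, and it is not the paper's MCNP-4A analysis (the function name and numbers are illustrative).

```python
# Textbook source multiplication relation: with a fixed source S and detector
# efficiency eps, the subcritical count rate scales as C ~ eps * S / (1 - k).
# The ratio of count rates between a reference state (k_ref known) and an
# unknown state then yields an estimate of k for the unknown state.

def k_from_count_rates(c_unknown: float, c_ref: float, k_ref: float) -> float:
    """Estimate the multiplication factor k of an unknown subcritical state."""
    m_ref = 1.0 / (1.0 - k_ref)            # multiplication of the reference state
    m_unknown = m_ref * c_unknown / c_ref  # scaled by the count-rate ratio
    return 1.0 - 1.0 / m_unknown

# Example: reference state with k_ref = 0.95; count rate halves in the new state.
print(k_from_count_rates(c_unknown=50.0, c_ref=100.0, k_ref=0.95))  # ~0.90
```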

  18. Assessing the use of multiple sources in student essays.

    Hastings, Peter; Hughes, Simon; Magliano, Joseph P; Goldman, Susan R; Lawless, Kimberly

    2012-09-01

    The present study explored different approaches for automatically scoring student essays that were written on the basis of multiple texts. Specifically, these approaches were developed to classify whether or not important elements of the texts were present in the essays. The first was a simple pattern-matching approach called "multi-word" that allowed for flexible matching of words and phrases in the sentences. The second technique was latent semantic analysis (LSA), which was used to compare student sentences to original source sentences using its high-dimensional vector-based representation. Finally, the third was a machine-learning technique, support vector machines, which learned a classification scheme from the corpus. The results of the study suggested that the LSA-based system was superior for detecting the presence of explicit content from the texts, but the multi-word pattern-matching approach was better for detecting inferences outside or across texts. These results suggest that the best approach for analyzing essays of this nature should draw upon multiple natural language processing approaches.
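
    As a concrete illustration of the LSA-based matching described above, the sketch below builds a TF-IDF plus truncated-SVD latent space and flags a source sentence as "present" when some essay sentence is close to it; the similarity threshold, dimensionality and toy sentences are illustrative assumptions, not the authors' pipeline.

```python
# LSA-style matching of student sentences against source sentences (sketch).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus: in practice these are sentences from the source texts and essay.
source_sentences = [
    "Wetlands filter nutrients from agricultural runoff before it reaches rivers.",
    "Riparian buffer strips reduce sediment loads entering the stream network.",
]
student_sentences = [
    "The texts say wetlands remove nutrients carried by runoff from farms.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(source_sentences + student_sentences)

# 100-300 latent dimensions are typical for LSA; 2 here only because the toy
# corpus is tiny (n_components must stay below the vocabulary size).
lsa = TruncatedSVD(n_components=2, random_state=0)
Z = lsa.fit_transform(X)
Z_source, Z_student = Z[:len(source_sentences)], Z[len(source_sentences):]

# A source sentence counts as "present" in the essay if some student sentence
# is sufficiently close to it in the latent space (0.6 is an arbitrary cutoff).
similarity = cosine_similarity(Z_student, Z_source)
present = similarity.max(axis=0) >= 0.6
print(f"{int(present.sum())} of {len(source_sentences)} source sentences covered")
```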

  19. Multiple time-reversed guide-sources in shallow water

    Gaumond, Charles F.; Fromm, David M.; Lingevitch, Joseph F.; Gauss, Roger C.; Menis, Richard

    2003-10-01

    Detection in a monostatic, broadband, active sonar system in shallow water is degraded by propagation-induced spreading. The detection improvement from multiple spatially separated guide sources (GSs) is presented as a method to mitigate this degradation. The improvement of detection by using information in a set of one-way transmissions from a variety of positions is shown using sea data. The experimental area is south of the Hudson Canyon off the coast of New Jersey. The data were taken using five elements of a time-reversing VLA. The five elements were contiguous and at midwater depth. The target and guide source was an echo repeater positioned at various ranges and at middepth. The transmitted signals were 3.0- to 3.5-kHz LFMs. The data are analyzed to show the amount of information present in the collection, a baseline probability of detection (PD) not using the collection of GS signals, and the improvement in PD from the use of various sets of GS signals. The dependence of the improvement as a function of range is also shown. [The authors acknowledge support from Dr. Jeffrey Simmen, ONR321OS, and the chief scientist Dr. Charles Holland. Work supported by ONR.]

  20. Simultaneous colour visualizations of multiple ALS point cloud attributes for land cover and vegetation analysis

    Zlinszky, András; Schroiff, Anke; Otepka, Johannes; Mandlburger, Gottfried; Pfeifer, Norbert

    2014-05-01

    LIDAR point clouds hold valuable information for land cover and vegetation analysis, not only in the spatial distribution of the points but also in their various attributes. However, LIDAR point clouds are rarely used for visual interpretation, since for most users, the point cloud is difficult to interpret compared to passive optical imagery. Meanwhile, point cloud viewing software is available allowing interactive 3D interpretation, but typically only one attribute at a time. This results in a large number of points with the same colour, crowding the scene and often obscuring detail. We developed a scheme for mapping information from multiple LIDAR point attributes to the Red, Green, and Blue channels of a widely used LIDAR data format, which are otherwise mostly used to add information from imagery to create "photorealistic" point clouds. The possible combinations of parameters are therefore represented in a wide range of colours, but relative differences in individual parameter values of points can be well understood. The visualization was implemented in OPALS software, using a simple and robust batch script, and is viewer independent since the information is stored in the point cloud data file itself. In our case, the following colour channel assignment delivered best results: Echo amplitude in the Red, echo width in the Green and normalized height above a Digital Terrain Model in the Blue channel. With correct parameter scaling (but completely without point classification), points belonging to asphalt and bare soil are dark red, low grassland and crop vegetation are bright red to yellow, shrubs and low trees are green and high trees are blue. Depending on roof material and DTM quality, buildings are shown from red through purple to dark blue. Erroneously high or low points, or points with incorrect amplitude or echo width usually have colours contrasting from terrain or vegetation. This allows efficient visual interpretation of the point cloud in planar
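
    A minimal sketch of the colour-channel assignment described above (echo amplitude to Red, echo width to Green, normalized height above the DTM to Blue) is shown below; the attribute names and the percentile-based scaling are illustrative assumptions, not the OPALS batch script itself.

```python
# Map three LIDAR point attributes to RGB bytes for visual interpretation.
import numpy as np

def scale_to_byte(values, lo_pct=2, hi_pct=98):
    """Robustly rescale an attribute to 0-255, clipping extreme outliers."""
    lo, hi = np.percentile(values, [lo_pct, hi_pct])
    return np.clip((values - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)

def colourize(points):
    """points: structured array (or dict of arrays) with the fields
    'amplitude', 'echo_width' and 'height_above_dtm'."""
    red = scale_to_byte(points["amplitude"])         # echo amplitude -> R
    green = scale_to_byte(points["echo_width"])      # echo width     -> G
    blue = scale_to_byte(points["height_above_dtm"]) # height above DTM -> B
    return np.stack([red, green, blue], axis=1)      # per-point RGB, e.g. for LAS output
```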

  1. LOWERING ICECUBE'S ENERGY THRESHOLD FOR POINT SOURCE SEARCHES IN THE SOUTHERN SKY

    Aartsen, M. G. [Department of Physics, University of Adelaide, Adelaide, 5005 (Australia); Abraham, K. [Physik-department, Technische Universität München, D-85748 Garching (Germany); Ackermann, M. [DESY, D-15735 Zeuthen (Germany); Adams, J. [Department of Physics and Astronomy, University of Canterbury, Private Bag 4800, Christchurch (New Zealand); Aguilar, J. A.; Ansseau, I. [Université Libre de Bruxelles, Science Faculty CP230, B-1050 Brussels (Belgium); Ahlers, M. [Department of Physics and Wisconsin IceCube Particle Astrophysics Center, University of Wisconsin, Madison, WI 53706 (United States); Ahrens, M. [Oskar Klein Centre and Department of Physics, Stockholm University, SE-10691 Stockholm (Sweden); Altmann, D.; Anton, G. [Erlangen Centre for Astroparticle Physics, Friedrich-Alexander-Universität Erlangen-Nürnberg, D-91058 Erlangen (Germany); Andeen, K. [Department of Physics, Marquette University, Milwaukee, WI, 53201 (United States); Anderson, T.; Arlen, T. C. [Department of Physics, Pennsylvania State University, University Park, PA 16802 (United States); Archinger, M.; Baum, V. [Institute of Physics, University of Mainz, Staudinger Weg 7, D-55099 Mainz (Germany); Arguelles, C. [Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Auffenberg, J. [III. Physikalisches Institut, RWTH Aachen University, D-52056 Aachen (Germany); Bai, X. [Physics Department, South Dakota School of Mines and Technology, Rapid City, SD 57701 (United States); Barwick, S. W. [Department of Physics and Astronomy, University of California, Irvine, CA 92697 (United States); Bay, R., E-mail: jacob.feintzeig@gmail.com, E-mail: naoko@icecube.wisc.edu [Department of Physics, University of California, Berkeley, CA 94720 (United States); Collaboration: IceCube Collaboration; and others

    2016-06-20

    Observation of a point source of astrophysical neutrinos would be a “smoking gun” signature of a cosmic-ray accelerator. While IceCube has recently discovered a diffuse flux of astrophysical neutrinos, no localized point source has been observed. Previous IceCube searches for point sources in the southern sky were restricted by either an energy threshold above a few hundred TeV or poor neutrino angular resolution. Here we present a search for southern sky point sources with greatly improved sensitivities to neutrinos with energies below 100 TeV. By selecting charged-current ν{sub μ} interacting inside the detector, we reduce the atmospheric background while retaining efficiency for astrophysical neutrino-induced events reconstructed with sub-degree angular resolution. The new event sample covers three years of detector data and leads to a factor of 10 improvement in sensitivity to point sources emitting below 100 TeV in the southern sky. No statistically significant evidence of point sources was found, and upper limits are set on neutrino emission from individual sources. A posteriori analysis of the highest-energy (∼100 TeV) starting event in the sample found that this event alone represents a 2.8 σ deviation from the hypothesis that the data consists only of atmospheric background.

  2. Impacts by point and diffuse micropollutant sources on the stream water quality at catchment scale

    Petersen, M. F.; Eriksson, E.; Binning, P. J.; Bjerg, P. L.

    2012-04-01

    The water quality of surface waters is threatened by multiple anthropogenic pollutants and the large variety of pollutants challenges the monitoring and assessment of the water quality. The aim of this study was to characterize and quantify both point and diffuse sources of micropollutants impacting the water quality of a stream at catchment scale. Grindsted stream in western Jutland, Denmark was used as a study site. The stream passes both urban and agricultural areas and is impacted by severe groundwater contamination in Grindsted city. Along a 12 km reach of Grindsted stream, the potential pollution sources were identified including a pharmaceutical factory site with a contaminated old drainage ditch, two waste deposits, a wastewater treatment plant, overflow structures, fish farms, industrial discharges and diffuse agricultural and urban sources. Six water samples were collected along the stream and analyzed for general water quality parameters, inorganic constituents, pesticides, sulfonamides, chlorinated solvents, BTEXs, and paracetamol and ibuprofen. The latter two groups were not detected. The general water quality showed typical conditions for a stream in western Jutland. Minor impacts by releases of organic matter and nutrients were found after the fish farms and the waste water treatment plant. Nickel was found at concentrations 5.8 - 8.8 μg/l. Nine pesticides and metabolites of both agricultural and urban use were detected along the stream; among these were the two most frequently detected and some rarely detected pesticides in Danish water courses. The concentrations were generally consistent with other findings in Danish streams and in the range 0.01 - 0.09 μg/l; except for metribuzin-diketo that showed high concentrations up to 0.74 μg/l. The groundwater contamination at the pharmaceutical factory site, the drainage ditch and the waste deposits is similar in composition containing among others sulfonamides and chlorinated solvents (including vinyl

  3. CHANDRA ACIS SURVEY OF X-RAY POINT SOURCES: THE SOURCE CATALOG

    Wang, Song; Liu, Jifeng; Qiu, Yanli; Bai, Yu; Yang, Huiqin; Guo, Jincheng; Zhang, Peng, E-mail: jfliu@bao.ac.cn, E-mail: songw@bao.ac.cn [Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China)

    2016-06-01

    The Chandra archival data is a valuable resource for various studies on different X-ray astronomy topics. In this paper, we utilize this wealth of information and present a uniformly processed data set, which can be used to address a wide range of scientific questions. The data analysis procedures are applied to 10,029 Advanced CCD Imaging Spectrometer observations, which produces 363,530 source detections belonging to 217,828 distinct X-ray sources. This number is twice the size of the Chandra Source Catalog (Version 1.1). The catalogs in this paper provide abundant estimates of the detected X-ray source properties, including source positions, counts, colors, fluxes, luminosities, variability statistics, etc. Cross-correlation of these objects with galaxies shows that 17,828 sources are located within the D {sub 25} isophotes of 1110 galaxies, and 7504 sources are located between the D {sub 25} and 2 D {sub 25} isophotes of 910 galaxies. Contamination analysis with the log N –log S relation indicates that 51.3% of objects within 2 D {sub 25} isophotes are truly relevant to galaxies, and the “net” source fraction increases to 58.9%, 67.3%, and 69.1% for sources with luminosities above 10{sup 37}, 10{sup 38}, and 10{sup 39} erg s{sup −1}, respectively. Among the possible scientific uses of this catalog, we discuss the possibility of studying intra-observation variability, inter-observation variability, and supersoft sources (SSSs). About 17,092 detected sources above 10 counts are classified as variable in individual observation with the Kolmogorov–Smirnov (K–S) criterion ( P {sub K–S} < 0.01). There are 99,647 sources observed more than once and 11,843 sources observed 10 times or more, offering us a wealth of data with which to explore the long-term variability. There are 1638 individual objects (∼2350 detections) classified as SSSs. As a quite interesting subclass, detailed studies on X-ray spectra and optical spectroscopic follow-up are needed to
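
    A minimal sketch of the intra-observation variability criterion mentioned above (a one-sample K–S test of photon arrival times against the uniform distribution expected for a constant source, flagged when P K–S < 0.01) is given below; it illustrates the criterion only and is not the catalog's processing pipeline.

```python
# K-S variability test of event arrival times against a constant count rate.
import numpy as np
from scipy import stats

def is_variable(arrival_times, t_start, t_stop, alpha=0.01):
    """Return (flag, p-value) from a one-sample K-S test against uniformity."""
    u = (np.asarray(arrival_times) - t_start) / (t_stop - t_start)
    statistic, p_value = stats.kstest(u, "uniform")
    return p_value < alpha, p_value

# Example: events concentrated late in the exposure should be flagged as variable.
rng = np.random.default_rng(0)
times = np.concatenate([rng.uniform(0, 10_000, 30), rng.uniform(9_000, 10_000, 30)])
print(is_variable(times, 0, 10_000))
```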

  4. FireProt: Energy- and Evolution-Based Computational Design of Thermostable Multiple-Point Mutants.

    Bednar, David; Beerens, Koen; Sebestova, Eva; Bendl, Jaroslav; Khare, Sagar; Chaloupkova, Radka; Prokop, Zbynek; Brezovsky, Jan; Baker, David; Damborsky, Jiri

    2015-11-01

    There is great interest in increasing proteins' stability to enhance their utility as biocatalysts, therapeutics, diagnostics and nanomaterials. Directed evolution is a powerful, but experimentally strenuous approach. Computational methods offer attractive alternatives. However, due to the limited reliability of predictions and potentially antagonistic effects of substitutions, only single-point mutations are usually predicted in silico, experimentally verified and then recombined in multiple-point mutants. Thus, substantial screening is still required. Here we present FireProt, a robust computational strategy for predicting highly stable multiple-point mutants that combines energy- and evolution-based approaches with smart filtering to identify additive stabilizing mutations. FireProt's reliability and applicability was demonstrated by validating its predictions against 656 mutations from the ProTherm database. We demonstrate that thermostability of the model enzymes haloalkane dehalogenase DhaA and γ-hexachlorocyclohexane dehydrochlorinase LinA can be substantially increased (ΔTm = 24°C and 21°C) by constructing and characterizing only a handful of multiple-point mutants. FireProt can be applied to any protein for which a tertiary structure and homologous sequences are available, and will facilitate the rapid development of robust proteins for biomedical and biotechnological applications.

  5. FireProt: Energy- and Evolution-Based Computational Design of Thermostable Multiple-Point Mutants.

    David Bednar

    2015-11-01

    Full Text Available There is great interest in increasing proteins' stability to enhance their utility as biocatalysts, therapeutics, diagnostics and nanomaterials. Directed evolution is a powerful, but experimentally strenuous approach. Computational methods offer attractive alternatives. However, due to the limited reliability of predictions and potentially antagonistic effects of substitutions, only single-point mutations are usually predicted in silico, experimentally verified and then recombined in multiple-point mutants. Thus, substantial screening is still required. Here we present FireProt, a robust computational strategy for predicting highly stable multiple-point mutants that combines energy- and evolution-based approaches with smart filtering to identify additive stabilizing mutations. FireProt's reliability and applicability was demonstrated by validating its predictions against 656 mutations from the ProTherm database. We demonstrate that thermostability of the model enzymes haloalkane dehalogenase DhaA and γ-hexachlorocyclohexane dehydrochlorinase LinA can be substantially increased (ΔTm = 24°C and 21°C) by constructing and characterizing only a handful of multiple-point mutants. FireProt can be applied to any protein for which a tertiary structure and homologous sequences are available, and will facilitate the rapid development of robust proteins for biomedical and biotechnological applications.

  6. Thermal Analysis of a Cracked Half-plane under Moving Point Heat Source

    He Kuanfang

    2017-09-01

    Full Text Available The heat conduction in a half-plane with an insulated crack subjected to a moving point heat source is investigated. Analytical and numerical means are combined to analyze the transient temperature distribution of the cracked half-plane. The transient temperature distribution of the half-plane structure under the moving point heat source is first obtained by the moving coordinate method; then the heat conduction equation with the thermal boundary condition of an insulated crack face is reduced to a singular integral equation by applying Fourier transforms and solved numerically. Numerical examples of the temperature distribution in the cracked half-plane structure under a moving point heat source are presented and discussed in detail.
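
    For orientation, the sketch below evaluates the classical quasi-steady temperature field of a point (line) heat source moving over an uncracked plane, which is the moving-coordinate building block the analysis above starts from; it is not the paper's full solution with the insulated crack (that requires the singular integral equation), and the symbols are the usual textbook ones.

```python
# Quasi-steady moving line-source temperature field in an uncracked plane.
import numpy as np
from scipy.special import k0  # modified Bessel function K0

def quasi_steady_temperature(x, y, t, q, v, k, alpha, t0=0.0):
    """Temperature at (x, y) and time t for a line source of strength q per
    unit length moving along +x with speed v in a plane of conductivity k and
    diffusivity alpha (moving-coordinate, quasi-steady form).  For a source on
    the insulated straight edge of a half-plane, the method of images simply
    doubles the source term."""
    xi = x - v * t                      # moving coordinate attached to the source
    r = np.sqrt(xi**2 + y**2)
    return t0 + (q / (2 * np.pi * k)) * np.exp(-v * xi / (2 * alpha)) * k0(v * r / (2 * alpha))
```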

  7. A method to analyze "source-sink" structure of non-point source pollution based on remote sensing technology.

    Jiang, Mengzhen; Chen, Haiying; Chen, Qinghui

    2013-11-01

    With the purpose of providing a scientific basis for environmental planning for non-point source pollution prevention and control, and of improving the efficiency of pollution regulation, this paper established the Grid Landscape Contrast Index based on the Location-weighted Landscape Contrast Index according to the "source-sink" theory. The spatial distribution of non-point source pollution around the Jiulongjiang Estuary could be worked out by utilizing high-resolution remote sensing images. The results showed that the area of the nitrogen and phosphorus "source" in the Jiulongjiang Estuary was 534.42 km(2) in 2008, and the "sink" was 172.06 km(2). The "source" of non-point source pollution was distributed mainly over Xiamen island, most of Haicang, the east of Jiaomei and the river banks of Gangwei and Shima; the "sink" was distributed over the southwest of Xiamen island and the west of Shima. Generally speaking, the intensity of the "source" gets weaker as the distance from the sea boundary increases, while the "sink" gets stronger. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Non point source pollution modelling in the watershed managed by Integrated Conctructed Wetlands: A GIS approach.

    Vyavahare, Nilesh

    2008-01-01

    Non-point source pollution has been recognised as a main cause of eutrophication in Ireland (EPA Ireland, 2001). An Integrated Constructed Wetland (ICW) is a management practice adopted in the Annestown stream watershed, located in the south of County Waterford in Ireland, and used to cleanse farmyard runoff. The present study forms the annual pollution budget for the Annestown stream watershed. The amount of pollution from non-point sources flowing into the stream was simulated by using GIS techniques; u...

  9. Epidemiology, public health, and health surveillance around point sources of pollution

    Stebbings, J.H. Jr.

    1981-01-01

    In industrial society a large number of point sources of pollution exist, such as chemical plants, smelters, and nuclear power plants. Public concern has forced the practising epidemiologist to undertake health surveillance of the usually small populations living around point sources. Although not justifiable as research, such epidemiologic surveillance activities are becoming a routine part of public health practice, and this trend will continue. This introduction reviews concepts of epidemiologic surveillance, and institutional problems relating to the quality of such applied research

  10. The resolution of point sources of light as analyzed by quantum detection theory

    Helstrom, C. W.

    1972-01-01

    The resolvability of point sources of incoherent light is analyzed by quantum detection theory in terms of two hypothesis-testing problems. In the first, the observer must decide whether there are two sources of equal radiant power at given locations, or whether there is only one source of twice the power located midway between them. In the second problem, either one, but not both, of two point sources is radiating, and the observer must decide which it is. The decisions are based on optimum processing of the electromagnetic field at the aperture of an optical instrument. In both problems the density operators of the field under the two hypotheses do not commute. The error probabilities, determined as functions of the separation of the points and the mean number of received photons, characterize the ultimate resolvability of the sources.
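
    For reference, the minimum error probability for deciding between two hypotheses with prior probabilities p_0, p_1 and (generally non-commuting) field density operators ρ_0, ρ_1 is given by the standard quantum detection (Helstrom) bound; the notation below is generic, not copied from the paper:

    $$P_e^{\min} = \tfrac{1}{2}\left(1 - \operatorname{Tr}\left|\,p_1\hat{\rho}_1 - p_0\hat{\rho}_0\,\right|\right),$$

    where Tr|A| denotes the trace norm; the error probabilities quoted in the abstract are evaluations of this quantity as functions of the source separation and the mean number of received photons.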

  11. Resolution of point sources of light as analyzed by quantum detection theory.

    Helstrom, C. W.

    1973-01-01

    The resolvability of point sources of incoherent thermal light is analyzed by quantum detection theory in terms of two hypothesis-testing problems. In the first, the observer must decide whether there are two sources of equal radiant power at given locations, or whether there is only one source of twice the power located midway between them. In the second problem, either one, but not both, of two point sources is radiating, and the observer must decide which it is. The decisions are based on optimum processing of the electromagnetic field at the aperture of an optical instrument. In both problems the density operators of the field under the two hypotheses do not commute. The error probabilities, determined as functions of the separation of the points and the mean number of received photons, characterize the ultimate resolvability of the sources.

  12. Interferometry with flexible point source array for measuring complex freeform surface and its design algorithm

    Li, Jia; Shen, Hua; Zhu, Rihong; Gao, Jinming; Sun, Yue; Wang, Jinsong; Li, Bo

    2018-06-01

    The precision of measurements of aspheric and freeform surfaces remains the primary factor restricting their manufacture and application. One effective means of measuring such surfaces involves using reference or probe beams with angle modulation, such as the tilted-wave interferometer (TWI). It is necessary to improve the measurement efficiency by obtaining the optimum point source array for different workpieces before TWI measurements. For the purpose of forming a point source array based on the gradients of different surfaces under test, we established a mathematical model describing the relationship between the point source array and the test surface. However, the optimal point sources are irregularly distributed. In order to achieve a flexible point source array according to the gradient of the test surface, a novel interference setup using a fiber array is proposed in which every point source can be independently switched on and off. Simulations and actual measurement examples of two different surfaces are given in this paper to verify the mathematical model. Finally, we performed an experiment testing an off-axis ellipsoidal surface that proved the validity of the proposed interference system.

  13. Characteristics of a multi-keV monochromatic point x-ray source

    The temporal, spatial and spectral characteristics of a multi-keV monochromatic point x-ray source based on a vacuum diode with a laser-produced plasma cathode are presented. Electrons from a laser-produced aluminium plasma were accelerated towards a conical point-tip titanium anode to generate K-shell x-ray radiation.

  14. Mapping correlation of a simulated dark matter source and a point source in the gamma-ray sky - Oral Presentation

    Gibson, Alexander [SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2015-08-23

    In my research, I analyzed how two gamma-ray source models interact with one another when they are optimized to fit data. This is important because it becomes hard to distinguish between two point sources when they are close together or when only low-energy photons are observed; the former reason is obvious, and the latter arises because the resolving power of the Fermi Gamma-ray Space Telescope worsens at lower energies. When the two point sources are highly correlated (hard to distinguish between), we need to change our method of statistical analysis. I showed that highly correlated sources have larger uncertainties associated with them, caused by the optimizer not knowing which point source’s parameters to adjust. I also mapped out where there is high correlation for two dark matter point sources of different theoretical masses, so that future analyses know where more sophisticated statistical treatment is required.

  15. NEWTONIAN IMPERIALIST COMPETITVE APPROACH TO OPTIMIZING OBSERVATION OF MULTIPLE TARGET POINTS IN MULTISENSOR SURVEILLANCE SYSTEMS

    A. Afghan-Toloee

    2013-09-01

    Full Text Available The problem of specifying the minimum number of sensors to deploy in a certain area to face multiple targets has been widely studied in the literature. In this paper, we address the multi-sensor deployment problem (MDP): the multi-sensor placement problem can be stated as minimizing the cost required to cover the multiple target points in the area. We propose a more feasible method for the multi-sensor placement problem. Our method provides the high coverage of grid-based placements while minimizing the cost, as found in perimeter placement techniques. The NICA algorithm, an improved ICA (Imperialist Competitive Algorithm), is used to decrease the time needed to reach an adequate solution compared with other meta-heuristic schemes such as GA, PSO and ICA. A three-dimensional area is used to define the multiple target and placement points, providing x, y, and z computations in the observation algorithm. A model for the multi-sensor placement problem is proposed: the problem is formulated as an optimization problem with the objective of minimizing the cost while covering all multiple target points subject to a given probability of observation tolerance.
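
    For orientation only, the sketch below shows a greedy baseline for the coverage objective described above (repeatedly pick the candidate position that covers the most still-uncovered 3D target points within a sensing range); it is not the NICA/imperialist-competitive algorithm of the paper, and all names and parameters are illustrative.

```python
# Greedy baseline for the 3D sensor-placement / coverage objective (NOT NICA).
import numpy as np

def greedy_placement(candidates, targets, sensing_range):
    """candidates, targets: (N, 3) and (M, 3) arrays of x, y, z coordinates.
    Returns the indices of chosen candidate positions and the coverage mask."""
    covered = np.zeros(len(targets), dtype=bool)
    chosen = []
    # Precompute which targets each candidate position covers.
    d = np.linalg.norm(candidates[:, None, :] - targets[None, :, :], axis=2)
    covers = d <= sensing_range
    while not covered.all():
        gains = (covers & ~covered).sum(axis=1)   # newly covered targets per candidate
        best = int(gains.argmax())
        if gains[best] == 0:
            break                                 # remaining targets cannot be covered
        chosen.append(best)
        covered |= covers[best]
    return chosen, covered
```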

  16. Experimental properties of gluon and quark jets from a point source

    Abbiendi, G.; Ackerstaff, K.; Alexander, G.

    1999-01-01

    Gluon jets are identified in hadronic Z 0 decays as all the particles in a hemisphere opposite to a hemisphere containing two tagged quark jets. Gluon jets defined in this manner are equivalent to gluon jets produced from a color singlet point source and thus correspond to the definition employed for most theoretical calculations. In a separate stage of the analysis, we select quark jets in a manner to correspond to calculations, as the particles in hemispheres of flavor tagged light quark (uds) events. We present the distributions of rapidity, scaled energy, the logarithm of the momentum, and transverse momentum with respect to the jet axes, for charged particles in these gluon and quark jets. We also examine the charged particle multiplicity distributions of the jets in restricted intervals of rapidity. For soft particles at large p T , we observe the charged particle multiplicity ratio of gluon to quark jets to be 2.29±0.09(stat.)±0.15(syst.), in agreement with the prediction that this ratio should approximately equal the ratio of QCD color factors, C A /C F =2.25. The intervals used to define soft particles and large p T for this result, p T < 3.0 GeV/c, are motivated by the predictions of the Herwig Monte Carlo multihadronic event generator. Additionally, our gluon jet data allow a sensitive test of the phenomenon of non-leading QCD terms known as color reconnection. We test the model of color reconnection implemented in the Ariadne Monte Carlo multihadronic event generator and find it to be disfavored by our data. (orig.)
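
    The colour-factor ratio quoted above follows directly from the number of colours N_c = 3:

    $$\frac{C_A}{C_F} = \frac{N_c}{(N_c^{2}-1)/(2N_c)} = \frac{3}{4/3} = \frac{9}{4} = 2.25 .$$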

  17. Miniature x-ray point source for alignment and calibration of x-ray optics

    Price, R.H.; Boyle, M.J.; Glaros, S.S.

    1977-01-01

    A miniature x-ray point source of high brightness similar to that of Rovinsky, et al. is described. One version of the x-ray source is used to align the x-ray optics on the Argus and Shiva laser systems. A second version is used to determine the spatial and spectral transmission functions of the x-ray optics. The spatial and spectral characteristics of the x-ray emission from the x-ray point source are described. The physical constraints including size, intensity and thermal limitations, and useful lifetime are discussed. The alignment and calibration techniques for various x-ray optics and detector combinations are described

  18. Guaranteed Unresolved Point Source Emission and the Gamma-ray Background

    Pavlidou, Vasiliki; Siegal-Gaskins, Jennifer M.; Brown, Carolyn; Fields, Brian D.; Olinto, Angela V.

    2007-01-01

    The large majority of EGRET point sources remain without an identified low-energy counterpart, and a large fraction of these sources are most likely extragalactic. Whatever the nature of the extragalactic EGRET unidentified sources, faint unresolved objects of the same class must have a contribution to the diffuse extragalactic gamma-ray background (EGRB). Understanding this component of the EGRB, along with other guaranteed contributions from known sources (blazars and normal galaxies), is essential if we are to use this emission to constrain exotic high-energy physics. Here, we follow an empirical approach to estimate whether the contribution of unresolved unidentified sources to the EGRB is likely to be important. Additionally, we discuss how upcoming GLAST observations of EGRET unidentified sources, their fainter counterparts, and the Galactic and extragalactic diffuse backgrounds, will shed light on the nature of the EGRET unidentified sources even without any positional association of such sources with low-energy counterparts

  19. Evaluation of the Agricultural Non-point Source Pollution in Chongqing Based on PSR Model

    Zhang, Hanwen; Mou, Xinli; Xie, Hui; Lu, Hong; Yan, Xingyun

    2014-01-01

    Through a series of explorations based on the PSR framework model, and with the purpose of building an evaluation index system framework suited to agricultural non-point source pollution in Chongqing and its specific agro-environmental issues, we build an agricultural non-point source pollution assessment index system. We then study agricultural system pressure, agro-environmental state and human response in 3 major categories, and develop an agricultural non-point source pollution evaluation index consisting of 3 criteria indicators and 19 indicators. As can be seen from the analysis, pressure and response tend to increase and decrease linearly, while state and the composite index fluctuate widely; their fluctuations are similar, mainly due to the elimination of pressures and impacts, increasing the impact of agricultural non-point source pollution.

  20. Tackling non-point source water pollution in British Columbia: An action plan

    1998-01-01

    Efforts to protect British Columbia water quality by regulating point discharges from municipal and industrial sources have generally been successful, and it is recognized that the major remaining cause of water pollution in the province is from non-point sources. These sources are largely unregulated and associated with urbanization, agriculture, and other forms of land development. The first part of this report reviews the provincial commitment to clean water, the effects of non-point-source (NPS) pollution, and the management of NPS in the province. Part 2 describes the main causes of NPS in British Columbia: Land development, agriculture, stormwater runoff, on-site sewage systems, forestry and range activities, atmospheric deposition, and boating/marine activities. Finally, it presents key components of the province's NPS action plan: Education and training, prevention at site, land use planning and co-ordination, assessment and reporting, economic incentives, legislation and regulation, and implementation.

  1. Extending the search for neutrino point sources with IceCube above the horizon

    IceCube Collaboration; Abbasi, R.

    2009-11-20

    Point source searches with the IceCube neutrino telescope have been restricted to one hemisphere, due to the exclusive selection of upward going events as a way of rejecting the atmospheric muon background. We show that the region above the horizon can be included by suppressing the background through energy-sensitive cuts. This approach improves the sensitivity above PeV energies, previously not accessible for declinations of more than a few degrees below the horizon due to the absorption of neutrinos in Earth. We present results based on data collected with 22 strings of IceCube, extending its field of view and energy reach for point source searches. No significant excess above the atmospheric background is observed in a sky scan and in tests of source candidates. Upper limits are reported, which for the first time cover point sources in the southern sky up to EeV energies.

  2. DOA Estimation of Multiple LFM Sources Using a STFT-based and FBSS-based MUSIC Algorithm

    K. B. Cui

    2017-12-01

    Full Text Available Direction of arrival (DOA) estimation is an important problem in array signal processing. An effective multiple signal classification (MUSIC) method based on the short-time Fourier transform (STFT) and forward/backward spatial smoothing (FBSS) techniques is addressed for the DOA estimation problem of multiple LFM sources that overlap in the time-frequency (t-f) domain. Previous work in the area, e.g. the STFT-MUSIC algorithm, cannot resolve sources that are largely or completely overlapped in the t-f domain because it can only select single-source t-f points. The proposed method constructs the spatial t-f distributions (STFDs) by selecting multiple-source t-f points and uses the FBSS technique to solve the problem of rank loss. In this way, the STFT-FBSS-MUSIC algorithm can resolve LFM sources that are largely or completely overlapped in the t-f domain. In addition, the proposed algorithm has fairly low computational complexity when resolving multiple LFM sources because it reduces the number of eigendecompositions and spectrum searches. The performance of the proposed method is compared with that of existing t-f based MUSIC algorithms through computer simulations, and the results show its good performance.
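
    A minimal narrowband sketch of the MUSIC-with-FBSS building blocks (covariance smoothing to restore rank, then a noise-subspace spectrum search over a uniform linear array) is given below; the paper's STFT stage, which selects multiple-source t-f points of the LFM signals before the covariance is formed, is not reproduced here, and the array geometry and parameters are illustrative.

```python
# Forward/backward spatially smoothed MUSIC on a uniform linear array (sketch).
import numpy as np

def fbss_covariance(X, subarray_size):
    """Forward/backward spatially smoothed covariance from snapshots X (M x N)."""
    M = X.shape[0]
    L = subarray_size
    R = X @ X.conj().T / X.shape[1]
    J = np.fliplr(np.eye(M))
    Rb = J @ R.conj() @ J                       # backward covariance
    Rfb = 0.5 * (R + Rb)
    # Average over all length-L subarrays to restore the covariance rank.
    return sum(Rfb[k:k + L, k:k + L] for k in range(M - L + 1)) / (M - L + 1)

def music_spectrum(R, n_sources, angles_deg, d_over_lambda=0.5):
    """MUSIC pseudospectrum over candidate angles for an L-element ULA."""
    L = R.shape[0]
    eigval, eigvec = np.linalg.eigh(R)
    En = eigvec[:, :L - n_sources]              # noise subspace (smallest eigenvalues)
    k = np.arange(L)[:, None]
    a = np.exp(-2j * np.pi * d_over_lambda * k * np.sin(np.deg2rad(angles_deg)))
    return 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2

# Usage, given a snapshot matrix X of shape (M sensors, N snapshots):
# R = fbss_covariance(X, subarray_size=6)
# spectrum = music_spectrum(R, n_sources=2, angles_deg=np.linspace(-90, 90, 361))
```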

  3. The Treatment Train approach to reducing non-point source pollution from agriculture

    Barber, N.; Reaney, S. M.; Barker, P. A.; Benskin, C.; Burke, S.; Cleasby, W.; Haygarth, P.; Jonczyk, J. C.; Owen, G. J.; Snell, M. A.; Surridge, B.; Quinn, P. F.

    2016-12-01

    An experimental approach has been applied to an agricultural catchment in NW England, where non-point pollution adversely affects freshwater ecology. The aim of the work (as part of the River Eden Demonstration Test Catchment project) is to develop techniques to manage agricultural runoff whilst maintaining food production. The approach used is the Treatment Train (TT), which applies multiple connected mitigation options that control nutrient and fine sediment pollution at source, and address polluted runoff pathways at increasing spatial scale. The principal agricultural practices in the study sub-catchment (1.5 km2) are dairy and stock production. Farm yards can act as significant pollution sources by housing large numbers of animals; these areas are addressed initially with infrastructure improvements e.g. clean/dirty water separation and upgraded waste storage. In-stream high resolution monitoring of hydrology and water quality parameters showed high-discharge events to account for the majority of pollutant exports (∼80% total phosphorus; ∼95% fine sediment), and primary transfer routes to be surface and shallow sub-surface flow pathways, including drains. To manage these pathways and reduce hydrological connectivity, a series of mitigation features were constructed to intercept and temporarily store runoff. Farm tracks, field drains, first order ditches and overland flow pathways were all targeted. The efficacy of the mitigation features has been monitored at event and annual scale, using inflow-outflow sampling and sediment/nutrient accumulation measurements, respectively. Data presented here show varied but positive results in terms of reducing acute and chronic sediment and nutrient losses. An aerial fly-through of the catchment is used to demonstrate how the TT has been applied to a fully-functioning agricultural landscape. The elevated perspective provides a better understanding of the spatial arrangement of mitigation features, and how they can be

  4. Model Predictive Control of Z-source Neutral Point Clamped Inverter

    Mo, Wei; Loh, Poh Chiang; Blaabjerg, Frede

    2011-01-01

    This paper presents Model Predictive Control (MPC) of a Z-source Neutral Point Clamped (NPC) inverter. For illustration, current control of a Z-source NPC grid-connected inverter is analyzed and simulated. With MPC’s advantage of easily including system constraints, the load current and impedance network responses are obtained at the same time using a formulated Z-source NPC inverter network model. Steady-state and transient simulation results of the MPC are presented, which show the good reference-tracking ability of this method. It provides a new control method for the Z-source NPC inverter...
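
    The sketch below illustrates the finite-control-set MPC idea in its simplest form (predict the next current for each candidate switching level of one NPC leg and pick the level minimizing the tracking error); it is a generic single-phase RL-load illustration under assumed parameters, not the paper's Z-source NPC controller.

```python
# Generic finite-control-set MPC step for current tracking (illustration only).
import numpy as np

def mpc_step(i_meas, i_ref, v_dc, R=1.0, L=5e-3, Ts=50e-6):
    """Pick the NPC leg level (-1, 0, +1) minimizing the predicted current error."""
    best_level, best_cost = 0, np.inf
    for level in (-1, 0, 1):
        v = level * v_dc / 2
        # Forward-Euler prediction of the RL-load current one step ahead.
        i_pred = i_meas + Ts / L * (v - R * i_meas)
        cost = abs(i_ref - i_pred)
        if cost < best_cost:
            best_level, best_cost = level, cost
    return best_level
```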

  5. Astronomers Detect Powerful Bursting Radio Source Discovery Points to New Class of Astronomical Objects

    2005-03-01

    Astronomers at Sweet Briar College and the Naval Research Laboratory (NRL) have detected a powerful new bursting radio source whose unique properties suggest the discovery of a new class of astronomical objects. The researchers have monitored the center of the Milky Way Galaxy for several years and reveal their findings in the March 3, 2005 edition of the journal Nature. [Image caption: This radio image of the central region of the Milky Way Galaxy holds a new radio source, GCRT J1745-3009. The arrow points to an expanding ring of debris expelled by a supernova. CREDIT: N.E. Kassim et al., Naval Research Laboratory, NRAO/AUI/NSF] Principal investigator Dr. Scott Hyman, professor of physics at Sweet Briar College, said the discovery came after analyzing some additional observations from 2002 provided by researchers at Northwestern University. “We hit the jackpot!” Hyman said, referring to the observations. “An image of the Galactic center, made by collecting radio waves of about 1-meter in wavelength, revealed multiple bursts from the source during a seven-hour period from Sept. 30 to Oct. 1, 2002 — five bursts in fact, and repeating at remarkably constant intervals.” Hyman, four Sweet Briar students, and his NRL collaborators, Drs. Namir Kassim and Joseph Lazio, happened upon transient emission from two radio sources while studying the Galactic center in 1998. This prompted the team to propose an ongoing monitoring program using the National Science Foundation’s Very Large Array (VLA) radio telescope in New Mexico. The National Radio Astronomy Observatory, which operates the VLA, approved the program. The data collected laid the groundwork for the detection of the new radio source. “Amazingly, even though the sky is known to be full of transient objects emitting at X- and gamma-ray wavelengths,” NRL astronomer Dr. Joseph Lazio pointed out, “very little has been done to look for radio bursts, which are often easier for astronomical objects to produce

  6. Double point source W-phase inversion: Real-time implementation and automated model selection

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
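
    A minimal sketch of the AIC comparison described above is given below: the double point-source solution is preferred only when its AIC falls below that of the single-source fit. The least-squares AIC form and the parameter counts are illustrative assumptions, not the published procedure's exact bookkeeping.

```python
# AIC-based choice between single and double point-source W-phase fits (sketch).
import numpy as np

def aic_least_squares(residuals, n_params):
    """AIC for a least-squares fit: n*ln(RSS/n) + 2k (constant terms dropped)."""
    n = len(residuals)
    rss = float(np.sum(np.square(residuals)))
    return n * np.log(rss / n) + 2 * n_params

def prefer_double_source(res_single, res_double, k_single=6, k_double=12):
    """True if the double-source model is favored despite its extra parameters."""
    return aic_least_squares(res_double, k_double) < aic_least_squares(res_single, k_single)
```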

  7. Spatiotemporal patterns of non-point source nitrogen loss in an agricultural catchment

    Jian-feng Xu

    2016-04-01

    Full Text Available Non-point source nitrogen loss poses a risk to sustainable aquatic ecosystems. However, non-point sources, as well as impaired river segments with high nitrogen concentrations, are difficult to monitor and regulate because of their diffusive nature, budget constraints, and resource deficiencies. For the purpose of catchment management, the Bayesian maximum entropy approach and spatial regression models have been used to explore the spatiotemporal patterns of non-point source nitrogen loss. In this study, a total of 18 sampling sites were selected along the river network in the Hujiashan Catchment. Over the time period of 2008–2012, water samples were collected 116 times at each site and analyzed for non-point source nitrogen loss. The morphometric variables and soil drainage of different land cover types were studied and considered potential factors affecting nitrogen loss. The results revealed that, compared with the approach using the Euclidean distance, the Bayesian maximum entropy approach using the river distance led to an appreciable 10.1% reduction in the estimation error, and more than 53.3% and 44.7% of the river network in the dry and wet seasons, respectively, had a probability of non-point source nitrogen impairment. The proportion of the impaired river segments exhibited an overall decreasing trend in the study catchment from 2008 to 2012, and the reduction in the wet seasons was greater than that in the dry seasons. High nitrogen concentrations were primarily found in the downstream reaches and river segments close to the residential lands. Croplands and residential lands were the dominant factors affecting non-point source nitrogen loss, and explained up to 70.7% of total nitrogen in the dry seasons and 54.7% in the wet seasons. A thorough understanding of the location of impaired river segments and the dominant factors affecting total nitrogen concentration would have considerable importance for catchment management.

  8. Methods of fast, multiple-point in vivo T1 determination

    Zhang, Y.; Spigarelli, M.; Fencil, L.E.; Yeung, H.N.

    1989-01-01

    Two methods of rapid, multiple-point determination of T1 in vivo have been evaluated with a phantom consisting of vials of gel in different Mn++ concentrations. The first method was an inversion-recovery-on-the-fly technique, and the second method used a variable-tip-angle (α) progressive saturation with two subsequences of different repetition times. In the first method, 1/T1 was evaluated by an exponential fit. In the second method, 1/T1 was obtained iteratively with a linear fit and then readjusted together with α to a model equation until self-consistency was reached.
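
    A minimal sketch of the kind of exponential fit used in the first (inversion-recovery) method, assuming phase-corrected (signed) inversion-recovery data and a standard signal model; the inversion times, noise level and T1 value below are made up for illustration and are not from the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def ir_signal(ti, m0, r1):
        """Signed inversion-recovery model: M0 * (1 - 2*exp(-TI*R1))."""
        return m0 * (1.0 - 2.0 * np.exp(-ti * r1))

    # Illustrative inversion times (s) and noisy signal for a gel with T1 = 0.5 s
    ti = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2])
    rng = np.random.default_rng(0)
    signal = ir_signal(ti, 1.0, 1.0 / 0.5) + rng.normal(0.0, 0.01, ti.size)

    popt, _ = curve_fit(ir_signal, ti, signal, p0=[signal.max(), 1.0])
    print(f"fitted T1 = {1.0 / popt[1]:.3f} s")
    ```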

  9. Coexistence of different vacua in the effective quantum field theory and multiple point principle

    Volovik, G.E.

    2004-01-01

    According to the multiple point principle our Universe is on the coexistence curve of two or more phases of the quantum vacuum. The coexistence of different quantum vacua can be regulated by the exchange of the global fermionic charges between the vacua. If the coexistence is regulated by the baryonic charge, all the coexisting vacua exhibit the baryonic asymmetry. Due to the exchange of the baryonic charge between the vacuum and matter which occurs above the electroweak transition, the baryonic asymmetry of the vacuum induces the baryonic asymmetry of matter in our Standard-Model phase of the quantum vacuum.

  10. Robust set-point regulation for ecological models with multiple management goals.

    Guiver, Chris; Mueller, Markus; Hodgson, Dave; Townley, Stuart

    2016-05-01

    Population managers will often have to deal with problems of meeting multiple goals, for example, keeping at specific levels both the total population and population abundances in given stage-classes of a stratified population. In control engineering, such set-point regulation problems are commonly tackled using multi-input, multi-output proportional and integral (PI) feedback controllers. Building on our recent results for population management with single goals, we develop a PI control approach in a context of multi-objective population management. We show that robust set-point regulation is achieved by using a modified PI controller with saturation and anti-windup elements, both described in the paper, and illustrate the theory with examples. Our results apply more generally to linear control systems with positive state variables, including a class of infinite-dimensional systems, and thus have broader appeal.
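
    The sketch below is a minimal discrete-time illustration of set-point regulation of a stage-structured population with a PI controller that includes input saturation and back-calculation anti-windup, in the spirit of the approach described above. The projection matrix, input/output matrices, gains and limits are invented for the example and are not taken from the paper.

    ```python
    import numpy as np

    A = np.array([[0.0, 0.7],    # illustrative 2-stage projection matrix (declining population)
                  [0.4, 0.5]])
    B = np.array([1.0, 0.0])     # management input (e.g. stocking) acts on stage 1 (assumption)
    C = np.array([1.0, 1.0])     # regulated output: total population

    def simulate(x0, target, kp=0.1, ki=0.05, u_max=20.0, steps=200):
        x, integ = np.array(x0, dtype=float), 0.0
        for _ in range(steps):
            y = float(C @ x)
            e = target - y
            u_unsat = kp * e + ki * integ
            u = min(max(u_unsat, 0.0), u_max)   # saturation: input must stay in [0, u_max]
            integ += e + (u - u_unsat) / ki     # back-calculation anti-windup correction
            x = A @ x + B * u                   # population update with management input
        return float(C @ x)

    print(simulate([10.0, 5.0], target=40.0))   # total population should settle near the target
    ```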

  11. Applicability of a desiccant dew-point cooling system independent of external water sources

    Bellemo, Lorenzo; Elmegaard, Brian; Kærn, Martin Ryhl

    2015-01-01

    The applicability of a technical solution for making desiccant cooling systems independent of external water sources is investigated. Water is produced by condensing the desorbed water vapour in a closed regeneration circuit. Desorbed water recovery is applied to a desiccant dew-point cooling system, which includes a desiccant wheel and a dew point cooler. The system is simulated during the summer period in the Mediterranean climate of Rome and proves to be completely independent of external water sources. The seasonal thermal COP drops 8% in comparison to the open regeneration circuit solution...

  12. X-ray Point Source Populations in Spiral and Elliptical Galaxies

    Colbert, E.; Heckman, T.; Weaver, K.; Strickland, D.

    2002-01-01

    The hard-X-ray luminosity of non-active galaxies has been known to be fairly well correlated with the total blue luminosity since the days of the Einstein satellite. However, the origin of this hard component was not well understood. Some possibilities that were considered included X-ray binaries, extended upscattered far-infrared light via the inverse-Compton process, extended hot 10^7 K gas (especially in elliptical galaxies), or even an active nucleus. Chandra images of normal, elliptical and starburst galaxies now show that a significant amount of the total hard X-ray emission comes from individual point sources. We present here spatial and spectral analyses of the point sources in a small sample of Chandra observations of starburst galaxies, and compare with Chandra point source analyses from comparison galaxies (elliptical, Seyfert and normal galaxies). We discuss possible relationships between the number and total hard luminosity of the X-ray point sources and various measures of the galaxy star formation rate, and discuss possible options for the numerous compact sources that are observed.

  13. Subcritical Neutron Multiplication Measurements of HEU Using Delayed Neutrons as the Driving Source

    Hollas, C.L.; Goulding, C.A.; Myers, W.L.

    1999-01-01

    A new method for the determination of the multiplication of highly enriched uranium systems is presented. The method uses delayed neutrons to drive the HEU system. These delayed neutrons are from fission events induced by a pulsed 14-MeV neutron source. Between pulses, neutrons are detected within a medium efficiency neutron detector using 3He ionization tubes within polyethylene enclosures. The neutron detection times are recorded relative to the initiation of the 14-MeV neutron pulse, and subsequently analyzed with the Feynman reduced variance method to extract singles, doubles and triples neutron counting rates. Measurements have been made on a set of nested hollow spheres of 93% enriched uranium, with mass values from 3.86 kg to 21.48 kg. The singles, doubles and triples counting rates for each uranium system are compared to calculations from point kinetics models of neutron multiplicity to assign multiplication values. These multiplication values are compared to those from MCNP K-Code calculations.
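
    As a hedged, simplified illustration of the Feynman reduced variance idea mentioned above, the sketch below computes the Feynman-Y statistic (the excess of the variance-to-mean ratio over the Poisson value) from counts recorded in equal time gates; the gate data here are synthetic and the relation to the full singles/doubles/triples analysis is simplified.

    ```python
    import numpy as np

    def feynman_y(gate_counts):
        """Feynman-Y: variance-to-mean ratio of gated counts minus 1.

        For a purely random (non-multiplying) source Y -> 0; correlated fission
        chains in a multiplying assembly give Y > 0.
        """
        c = np.asarray(gate_counts, dtype=float)
        return c.var(ddof=1) / c.mean() - 1.0

    # Illustrative use with synthetic Poisson data (expect Y close to 0):
    rng = np.random.default_rng(1)
    print(feynman_y(rng.poisson(8.0, size=10000)))
    ```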

  14. Nonpoint and Point Sources of Nitrogen in Major Watersheds of the United States

    Puckett, Larry J.

    1994-01-01

    Estimates of nonpoint and point sources of nitrogen were made for 107 watersheds located in the U.S. Geological Survey's National Water-Quality Assessment Program study units throughout the conterminous United States. The proportions of nitrogen originating from fertilizer, manure, atmospheric deposition, sewage, and industrial sources were found to vary with climate, hydrologic conditions, land use, population, and physiography. Fertilizer sources of nitrogen are proportionally greater in agricultural areas of the West and the Midwest than in other parts of the Nation. Animal manure contributes large proportions of nitrogen in the South and parts of the Northeast. Atmospheric deposition of nitrogen is generally greatest in areas of greatest precipitation, such as the Northeast. Point sources (sewage and industrial) generally are predominant in watersheds near cities, where they may account for large proportions of the nitrogen in streams. The transport of nitrogen in streams increases as amounts of precipitation and runoff increase and is greatest in the Northeastern United States. Because no single nonpoint nitrogen source is dominant everywhere, approaches to control nitrogen must vary throughout the Nation. Watershed-based approaches to understanding nonpoint and point sources of contamination, as used by the National Water-Quality Assessment Program, will aid water-quality and environmental managers to devise methods to reduce nitrogen pollution.

  15. The Unicellular State as a Point Source in a Quantum Biological System

    John S. Torday

    2016-05-01

    Full Text Available A point source is the central and most important point or place for any group of cohering phenomena. Evolutionary development presumes that biological processes are sequentially linked, but neither directed from, nor centralized within, any specific biologic structure or stage. However, such an epigenomic entity exists and its transforming effects can be understood through the obligatory recapitulation of all eukaryotic lifeforms through a zygotic unicellular phase. This requisite biological conjunction can now be properly assessed as the focal point of reconciliation between biology and quantum phenomena, illustrated by deconvoluting complex physiologic traits back to their unicellular origins.

  16. Bridges between multiple-point geostatistics and texture synthesis: Review and guidelines for future research

    Mariethoz, Gregoire; Lefebvre, Sylvain

    2014-05-01

    Multiple-Point Simulations (MPS) is a family of geostatistical tools that has received a lot of attention in recent years for the characterization of spatial phenomena in geosciences. It relies on the definition of training images to represent a given type of spatial variability, or texture. We show that the algorithmic tools used are similar in many ways to techniques developed in computer graphics, where there is a need to generate large amounts of realistic textures for applications such as video games and animated movies. Similarly to MPS, these texture synthesis methods use training images, or exemplars, to generate realistic-looking graphical textures. Both domains of multiple-point geostatistics and example-based texture synthesis present similarities in their historic development and share similar concepts. These disciplines have however remained separated, and as a result significant algorithmic innovations in each discipline have not been universally adopted. Texture synthesis algorithms offer drastically increased computational efficiency, pattern reproduction and user control. At the same time, MPS developed ways to condition models to spatial data and to produce 3D stochastic realizations, which have not been thoroughly investigated in the field of texture synthesis. In this paper we review the possible links between these disciplines and show the potential and limitations of using concepts and approaches from texture synthesis in MPS. We also provide guidelines on how recent developments could benefit both fields of research, and what challenges remain open.

  17. A Comparison of Combustion Dynamics for Multiple 7-Point Lean Direct Injection Combustor Configurations

    Tacina, K. M.; Hicks, Y. R.

    2017-01-01

    The combustion dynamics of multiple 7-point lean direct injection (LDI) combustor configurations are compared. LDI is a fuel-lean combustor concept for aero gas turbine engines in which multiple small fuel-air mixers replace one traditionally-sized fuel-air mixer. This 7-point LDI configuration has a circular cross section, with a center (pilot) fuel-air mixer surrounded by six outer (main) fuel-air mixers. Each fuel-air mixer consists of an axial air swirler followed by a converging-diverging venturi. A simplex fuel injector is inserted through the center of the air swirler, with the fuel injector tip located near the venturi throat. All 7 fuel-air mixers are identical except for the swirler blade angle, which varies with the configuration. Testing was done in a 5-atm flame tube with inlet air temperatures from 600 to 800 F and equivalence ratios from 0.4 to 0.7. Combustion dynamics were measured using a cooled PCB pressure transducer flush-mounted in the wall of the combustor test section.

  18. Neutron generators with size scalability, ease of fabrication and multiple ion source functionalities

    Elizondo-Decanini, Juan M

    2014-11-18

    A neutron generator is provided with a flat, rectilinear geometry and surface mounted metallizations. This construction provides scalability and ease of fabrication, and permits multiple ion source functionalities.

  19. KM3NeT/ARCA sensitivity and discovery potential for neutrino point-like sources

    Trovato A.

    2016-01-01

    Full Text Available KM3NeT is a large research infrastructure with a network of deep-sea neutrino telescopes in the abyss of the Mediterranean Sea. Of these, the KM3NeT/ARCA detector, installed in the KM3NeT-It node of the network, is optimised for studying high-energy neutrinos of cosmic origin. Sensitivities to galactic sources such as the supernova remnant RXJ1713.7-3946 and the pulsar wind nebula Vela X are presented as well as sensitivities to a generic point source with an E^-2 spectrum which represents an approximation for the spectrum of extragalactic candidate neutrino sources.

  20. Nature of the Diffuse Source and Its Central Point-like Source in SNR 0509–67.5

    Litke, Katrina C.; Chu, You-Hua; Holmes, Abigail; Santucci, Robert; Blindauer, Terrence; Gruendl, Robert A.; Ricker, Paul M. [Astronomy Department, University of Illinois, 1002 W. Green Street, Urbana, IL 61801 (United States); Li, Chuan-Jui [Academia Sinica Institute of Astronomy and Astrophysics, P.O. Box 23-141, Taipei 10617, Taiwan, R.O.C. (China); Pan, Kuo-Chuan [Departement Physik, Universität Basel, Klingelbergstrasse 82, CH-4056 Basel (Switzerland); Weisz, Daniel R., E-mail: kclitke@email.arizona.edu [Department of Astronomy, University of California, 501 Cambell Hall #3411, Berkeley, CA 94720-3411 (United States)

    2017-03-10

    We examine a diffuse emission region near the center of SNR 0509−67.5 to determine its nature. Within this diffuse region we observe a point-like source that is bright in the near-IR, but is not visible in the B and V bands. We consider an emission line observed at 6766 Å and the possibilities that it is Lyα, Hα, and [O ii] λ3727. We examine the spectral energy distribution (SED) of the source, comprised of Hubble Space Telescope B, V, I, J, and H bands in addition to Spitzer/IRAC 3.6, 4.5, 5.8, and 8 μm bands. The peak of the SED is consistent with a background galaxy at z ≈ 0.8 ± 0.2 and a possible Balmer jump places the galaxy at z ≈ 0.9 ± 0.3. These SED considerations support the emission line’s identification as [O ii] λ3727. We conclude that the diffuse source in SNR 0509−67.5 is a background galaxy at z ≈ 0.82. Furthermore, we identify the point-like source superposed near the center of the galaxy as its central bulge. Finally, we find no evidence for a surviving companion star, indicating a double-degenerate origin for SNR 0509−67.5.

  2. Effect of tissue inhomogeneity on dose distribution of point sources of low-energy electrons

    Kwok, C.S.; Bialobzyski, P.J.; Yu, S.K.; Prestwich, W.V.

    1990-01-01

    Perturbation in dose distributions of point sources of low-energy electrons at planar interfaces of cortical bone (CB) and red marrow (RM) was investigated experimentally and by Monte Carlo codes EGS and the TIGER series. Ultrathin LiF thermoluminescent dosimeters were used to measure the dose distributions of point sources of 204Tl and 147Pm in RM. When the point sources were at 12 mg/cm^2 from a planar interface of CB and RM equivalent plastics, dose enhancement ratios in RM averaged over the region 0-12 mg/cm^2 from the interface were measured to be 1.08±0.03 (SE) and 1.03±0.03 (SE) for 204Tl and 147Pm, respectively. The Monte Carlo codes predicted 1.05±0.02 and 1.01±0.02 for the two nuclides, respectively. However, EGS gave consistently 3% higher dose in the dose scoring region than the TIGER series when point sources of monoenergetic electrons up to 0.75 MeV energy were considered in the homogeneous RM situation or in the CB and RM heterogeneous situation. By means of the TIGER series, it was demonstrated that aluminum, which is normally assumed to be equivalent to CB in radiation dosimetry, leads to an overestimation of backscattering of low-energy electrons in soft tissue at a CB-soft-tissue interface by as much as a factor of 2.

  3. Identification and quantification of point sources of surface water contamination in fruit culture in the Netherlands

    Wenneker, M.; Beltman, W.H.J.; Werd, de H.A.E.; Zande, van de J.C.

    2008-01-01

    Measurements of pesticide concentrations in surface water by the water boards show that they have decreased less than was expected from model calculations. Possibly, the implementation of spray drift reducing techniques is overestimated in the model calculation. The impact of point sources is

  4. General Approach to the Evolution of Singlet Nanoparticles from a Rapidly Quenched Point Source

    Feng, J.; Huang, Luyi; Ludvigsson, Linus; Messing, Maria; Maiser, A.; Biskos, G.; Schmidt-Ott, A.

    2016-01-01

    Among the numerous point vapor sources, microsecond-pulsed spark ablation at atmospheric pressure is a versatile and environmentally friendly method for producing ultrapure inorganic nanoparticles ranging from singlets having sizes smaller than 1 nm to larger agglomerated structures. Due to its fast

  5. HYDROLOGY AND SEDIMENT MODELING USING THE BASINS NON-POINT SOURCE MODEL

    The Non-Point Source Model (Hydrologic Simulation Program-Fortran, or HSPF) within the EPA Office of Water's BASINS watershed modeling system was used to simulate streamflow and total suspended solids within Contentnea Creek, North Carolina, which is a tributary of the Neuse Rive...

  6. "Anomalous" air showers from point sources: Mass limits and light curves

    Domokos, G.; Elliott, B.; Kovesi-Domokos, S.

    1993-01-01

    We describe a method to obtain upper limits on the mass of the primaries of air showers associated with point sources. One also obtains the UHE pulse shape of a pulsar if its period is observed in the signal. As an example, we analyze the data obtained during a recent burst of Hercules-X1

  7. A search for hot post-AGB stars in the IRAS Point Source Catalog

    Oudmaijer, RD

    In this paper a first step is made to search for hot post-AGB stars in the IRAS Point Source Catalog. In order to find objects that evolved off the AGB a longer time ago than the post-AGB objects discussed in the literature, objects that were not detected at 12 μm by IRAS were selected. The selection

  8. Estimation of Methane Emissions from Municipal Solid Waste Landfills in China Based on Point Emission Sources

    Cai Bo-Feng

    2014-01-01

    Citation: Cai, B.-F., Liu, J.-G., Gao, Q.-X., et al., 2014. Estimation of methane emissions from municipal solid waste landfills in China based on point emission sources. Adv. Clim. Change Res. 5(2), doi: 10.3724/SP.J.1248.2014.081.

  9. Relationship between exposure to multiple noise sources and noise annoyance

    Miedema, H.M.E.

    2004-01-01

    Relationships between exposure to noise [metric: day-night level (DNL) or day-evening-night level (DENL)] from a single source (aircraft, road traffic, or railways) and annoyance based on a large international dataset have been published earlier. Also for stationary sources relationships have been

  10. Comparative Evaluation of Pulsewidth Modulation Strategies for Z-Source Neutral-Point-Clamped Inverter

    Loh, P.C.; Blaabjerg, Frede; Wong, C.P.

    2007-01-01

    The Z-source neutral-point-clamped (NPC) inverter has recently been proposed as an alternative three-level buck-boost power conversion solution with an improved output waveform quality. In principle, the designed Z-source inverter functions by selectively "shooting through" its power sources, coupled to the inverter using two unique Z-source impedance networks, to boost the inverter three-level output waveform. Proper modulation of the new inverter would therefore require careful integration of the selective shoot-through process to the basic switching concepts to achieve maximal voltage boost, minimal... This paper presents pulsewidth modulation (PWM) strategies for controlling the Z-source NPC inverter. While developing the PWM techniques, attention has been devoted to carefully deriving them from a common generic basis for improved portability, easier implementation, and, most importantly, assisting readers in understanding all concepts...

  11. Improving multiple-point-based a priori models for inverse problems by combining Sequential Simulation with the Frequency Matching Method

    Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine

    In order to move beyond simplified covariance based a priori models, which are typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions ‘learned’ from a training image, sequential simulation has proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori...
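
    As a hedged illustration of the pattern frequency distributions mentioned above, the sketch below counts how often each small binary pattern occurs in a 2D training image, using a square template; the template size, the toy training image and the function name are assumptions made for the example, not the authors' implementation.

    ```python
    import numpy as np
    from collections import Counter

    def pattern_histogram(image, template=2):
        """Relative frequency of every (template x template) binary pattern
        found in a 2D training image (a multiple-point 'marginal')."""
        img = np.asarray(image)
        counts = Counter()
        for i in range(img.shape[0] - template + 1):
            for j in range(img.shape[1] - template + 1):
                patch = img[i:i + template, j:j + template]
                counts[tuple(int(v) for v in patch.ravel())] += 1
        total = sum(counts.values())
        return {k: v / total for k, v in counts.items()}

    # Illustrative binary training image (e.g. channel facies vs. background)
    ti = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 0],
                   [1, 1, 0, 0],
                   [1, 0, 0, 0]])
    print(pattern_histogram(ti))
    ```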

  12. The Potential for Electrofuels Production in Sweden Utilizing Fossil and Biogenic CO2 Point Sources

    Hansson, Julia; Hackl, Roman; Taljegard, Maria; Brynolf, Selma; Grahn, Maria

    2017-01-01

    This paper maps, categorizes, and quantifies all major point sources of carbon dioxide (CO2) emissions from industrial and combustion processes in Sweden. The paper also estimates the Swedish technical potential for electrofuels (power-to-gas/fuels) based on carbon capture and utilization. With our bottom-up approach using European databases, we find that Sweden emits approximately 50 million metric tons of CO2 per year from different types of point sources, with 65% (or about 32 million tons) from biogenic sources. The major sources are the pulp and paper industry (46%), heat and power production (23%), and waste treatment and incineration (8%). Most of the CO2 is emitted at low concentrations (<15%) from sources in the southern part of Sweden where power demand generally exceeds in-region supply. The potentially recoverable emissions from all the included point sources amount to 45 million tons. If all the recoverable CO2 were used to produce electrofuels, the yield would correspond to 2–3 times the current Swedish demand for transportation fuels. The electricity required would correspond to about 3 times the current Swedish electricity supply. The current relatively few emission sources with high concentrations of CO2 (>90%, biofuel operations) would yield electrofuels corresponding to approximately 2% of the current demand for transportation fuels (corresponding to 1.5–2 TWh/year). In a 2030 scenario with large-scale biofuels operations based on lignocellulosic feedstocks, the potential for electrofuels production from high-concentration sources increases to 8–11 TWh/year. Finally, renewable electricity and production costs, rather than CO2 supply, limit the potential for production of electrofuels in Sweden.

  13. Tackling non-point source water pollution in British Columbia : an action plan

    NONE

    1999-03-01

    British Columbia's approach to water quality management is discussed. The BC efforts include regulating 'end of pipe' point discharges from industrial and municipal outfalls. The major remaining cause of water pollution is from non-point sources (NPS). NPS water pollution is caused by the release of pollutants from different and diffuse sources, mostly unregulated and associated with urbanization, agriculture and other forms of land development. The importance of dealing with such problems on an immediate basis to avoid a decline in water quality in the province is emphasized. Major sources of water pollution in British Columbia include: land development, agriculture, storm water runoff, onsite sewage systems, forestry, atmospheric deposition, and marine activities. 3 tabs.

  14. Modeling water demand when households have multiple sources of water

    Coulibaly, Lassina; Jakus, Paul M.; Keith, John E.

    2014-07-01

    A significant portion of the world's population lives in areas where public water delivery systems are unreliable and/or deliver poor quality water. In response, people have developed important alternatives to publicly supplied water. To date, most water demand research has been based on single-equation models for a single source of water, with very few studies that have examined water demand from two sources of water (where all nonpublic system water sources have been aggregated into a single demand). This modeling approach leads to two outcomes. First, the demand models do not capture the full range of alternatives, so the true economic relationship among the alternatives is obscured. Second, and more seriously, economic theory predicts that demand for a good becomes more price-elastic as the number of close substitutes increases. If researchers artificially limit the number of alternatives studied to something less than the true number, the price elasticity estimate may be biased downward. This paper examines water demand in a region with near universal access to piped water, but where system reliability and quality is such that many alternative sources of water exist. In extending the demand analysis to four sources of water, we are able to (i) demonstrate why households choose the water sources they do, (ii) provide a richer description of the demand relationships among sources, and (iii) calculate own-price elasticity estimates that are more elastic than those generally found in the literature.

  15. Multiple station beamline at an undulator x-ray source

    Als-Nielsen, J.; Freund, A.K.; Grübel, G.

    1994-01-01

    The undulator X-ray source is an ideal source for many applications: the beam is brilliant, highly collimated in all directions, quasi-monochromatic, pulsed and linearly polarized. Such a precious source can feed several independently operated instruments by utilizing a downstream series of X-ray transparent monochromator crystals. Diamond in particular is an attractive monochromator as it is rather X-ray transparent and can be fabricated to a high degree of crystal perfection. Moreover, it has a very high heat conductivity and a rather small thermal expansion so the beam X-ray heat load problem...

  16. The shooting method and multiple solutions of two/multi-point BVPs of second-order ODE

    Man Kam Kwong

    2006-06-01

    Full Text Available Within the last decade, there has been growing interest in the study of multiple solutions of two- and multi-point boundary value problems of nonlinear ordinary differential equations as fixed points of a cone mapping. Undeniably many good results have emerged. The purpose of this paper is to point out that, in the special case of second-order equations, the shooting method can be an effective tool, sometimes yielding better results than those obtainable via fixed point techniques.
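
    As a hedged sketch of the shooting method highlighted above, the code below solves a second-order two-point BVP by treating the unknown initial slope as the shooting parameter and root-finding on the miss distance at the far boundary. The particular equation, boundary values and root bracket are chosen purely for illustration.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    # Illustrative BVP: y'' = -y, y(0) = 0, y(pi/2) = 1 (exact solution y = sin x)
    a, b, ya, yb = 0.0, np.pi / 2, 0.0, 1.0

    def rhs(x, z):               # z = [y, y']
        return [z[1], -z[0]]

    def shoot(slope):
        """Integrate the IVP with y'(a) = slope and return the miss at x = b."""
        sol = solve_ivp(rhs, (a, b), [ya, slope], rtol=1e-8, atol=1e-10)
        return sol.y[0, -1] - yb

    slope = brentq(shoot, 0.1, 5.0)   # bracket chosen by inspection
    print(f"shooting slope y'(0) = {slope:.6f}  (exact value: 1.0)")
    ```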

  17. Multiple approaches to microbial source tracking in tropical northern Australia

    Neave, Matthew; Luter, Heidi; Padovan, Anna; Townsend, Simon; Schobben, Xavier; Gibb, Karen

    2014-01-01

    , other potential inputs, such as urban rivers and drains, and surrounding beaches, and used genetic fingerprints from E. coli and enterococci communities, fecal markers and 454 pyrosequencing to track contamination sources. A sewage effluent outfall

  18. [Nitrogen non-point source pollution identification based on ArcSWAT in Changle River].

    Deng, Ou-Ping; Sun, Si-Yang; Lü, Jun

    2013-04-01

    The ArcSWAT (Soil and Water Assessment Tool) model was adopted for Non-point source (NPS) nitrogen pollution modeling and nitrogen source apportionment for the Changle River watershed, a typical agricultural watershed in Southeast China. Water quality and hydrological parameters were monitored, and the watershed natural conditions (including soil, climate, land use, etc) and pollution sources information were also investigated and collected for SWAT database. The ArcSWAT model was established in the Changle River after the calibrating and validating procedures of the model parameters. Based on the validated SWAT model, the contributions of different nitrogen sources to river TN loading were quantified, and spatial-temporal distributions of NPS nitrogen export to rivers were addressed. The results showed that in the Changle River watershed, Nitrogen fertilizer, nitrogen air deposition and nitrogen soil pool were the prominent pollution sources, which contributed 35%, 32% and 25% to the river TN loading, respectively. There were spatial-temporal variations in the critical sources for NPS TN export to the river. Natural sources, such as soil nitrogen pool and atmospheric nitrogen deposition, should be targeted as the critical sources for river TN pollution during the rainy seasons. Chemical nitrogen fertilizer application should be targeted as the critical sources for river TN pollution during the crop growing season. Chemical nitrogen fertilizer application, soil nitrogen pool and atmospheric nitrogen deposition were the main sources for TN exported from the garden plot, forest and residential land, respectively. However, they were the main sources for TN exported both from the upland and paddy field. These results revealed that NPS pollution controlling rules should focus on the spatio-temporal distribution of NPS pollution sources.

  19. SIGMA/B, Doses in Space Vehicle for Multiple Trajectories, Various Radiation Source

    Jordan, T.M.

    2003-01-01

    1 - Description of problem or function: SIGMA/B calculates radiation dose at arbitrary points inside a space vehicle, taking into account vehicle geometry, heterogeneous placement of equipment and stores, vehicle materials, time-weighted astronaut positions and many radiation sources from mission trajectories, e.g. geomagnetically trapped protons and electrons, solar flare particles, galactic cosmic rays and their secondary radiations. The vehicle geometry, equipment and supplies, and man models are described by quadric surfaces. The irradiating flux field may be anisotropic. The code can be used to perform simultaneous dose calculations for multiple vehicle trajectories, each involving several radiation sources. Results are presented either as dose as a function of shield thickness, or the dose received through designated outer sections of the vehicle. 2 - Method of solution: Automatic sectoring of the vehicle is performed by a Simpson's rule integration over angle; the dose is computed by a numerical angular integration of the dose attenuation kernels about the dose points. The kernels are curve-fit functions constructed from input data tables. 3 - Restrictions on the complexity of the problem: The code uses variable dimensioning techniques to store data. The only restriction on problem size is the available core storage
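
    The sketch below is a hedged, much-simplified illustration of the numerical angular integration of a dose attenuation kernel described in point 2, using Simpson's rule over solid angle; the kernel, the shielding model and the coefficients are placeholders, whereas the real code interpolates curve-fit kernel tables and sectors the actual vehicle geometry.

    ```python
    import numpy as np
    from scipy.integrate import simpson

    def dose_kernel(thickness_g_cm2):
        """Placeholder attenuation kernel: dose behind a slab of given areal density."""
        return np.exp(-0.05 * thickness_g_cm2)

    def shield_thickness(theta, phi):
        """Placeholder shielding model: areal density seen along direction (theta, phi)."""
        return 20.0 + 10.0 * np.abs(np.cos(theta))

    # Angle-average the kernel over the sphere: D = (1/4pi) * ∫∫ K(t(θ,φ)) sinθ dθ dφ
    theta = np.linspace(0.0, np.pi, 91)
    phi = np.linspace(0.0, 2.0 * np.pi, 181)
    TH, PH = np.meshgrid(theta, phi, indexing="ij")
    integrand = dose_kernel(shield_thickness(TH, PH)) * np.sin(TH)
    dose = simpson(simpson(integrand, x=phi, axis=1), x=theta) / (4.0 * np.pi)
    print(f"angle-averaged dose factor: {dose:.4f}")
    ```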

  20. Metasurface Cloak Performance Near-by Multiple Line Sources and PEC Cylindrical Objects

    Arslanagic, Samel; Yatman, William H.; Pehrson, Signe

    2014-01-01

    The performance/robustness of metasurface cloaks to a complex field environment which may represent a realistic scenario of radiating sources is presently reported. Attention is devoted to the cloak operation near-by multiple line sources and multiple perfectly electrically conducting cylinders. ...

  1. Multiple-point statistical prediction on fracture networks at Yucca Mountain

    Liu, X.Y; Zhang, C.Y.; Liu, Q.S.; Birkholzer, J.T.

    2009-01-01

    In many underground nuclear waste repository systems, such as at Yucca Mountain, water flow rate and amount of water seepage into the waste emplacement drifts are mainly determined by hydrological properties of fracture network in the surrounding rock mass. Natural fracture network system is not easy to describe, especially with respect to its connectivity which is critically important for simulating the water flow field. In this paper, we introduced a new method for fracture network description and prediction, termed multi-point-statistics (MPS). The process of the MPS method is to record multiple-point statistics concerning the connectivity patterns of a fracture network from a known fracture map, and to reproduce multiple-scale training fracture patterns in a stochastic manner, implicitly and directly. It is applied to fracture data to study flow field behavior at the Yucca Mountain waste repository system. First, the MPS method is used to create a fracture network with an original fracture training image from Yucca Mountain dataset. After we adopt a harmonic and arithmetic average method to upscale the permeability to a coarse grid, THM simulation is carried out to study near-field water flow in the surrounding waste emplacement drifts. Our study shows that connectivity or patterns of fracture networks can be grasped and reconstructed by MPS methods. In theory, it will lead to better prediction of fracture system characteristics and flow behavior. Meanwhile, we can obtain variance from flow field, which gives us a way to quantify model uncertainty even in complicated coupled THM simulations. It indicates that MPS can potentially characterize and reconstruct natural fracture networks in a fractured rock mass with advantages of quantifying connectivity of fracture system and its simulation uncertainty simultaneously.

  2. A Targeted Search for Point Sources of EeV Photons with the Pierre Auger Observatory

    Aab, A. [Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud Universiteit, Nijmegen (Netherlands); Abreu, P. [Laboratório de Instrumentação e Física Experimental de Partículas—LIP and Instituto Superior Técnico—IST, Universidade de Lisboa—UL, Lisbon (Portugal); Aglietta, M. [INFN, Sezione di Torino, Torino (Italy); Samarai, I. Al [Laboratoire de Physique Nucléaire et de Hautes Energies (LPNHE), Universités Paris 6 et Paris 7, CNRS-IN2P3, Paris (France); Albuquerque, I. F. M. [Universidade de São Paulo, Inst. de Física, São Paulo (Brazil); Allekotte, I. [Centro Atómico Bariloche and Instituto Balseiro (CNEA-UNCuyo-CONICET), San Carlos de Bariloche (Argentina); Almela, A. [Instituto de Tecnologías en Detección y Astropartículas (CNEA, CONICET, UNSAM), Centro Atómico Constituyentes, Comisión Nacional de Energía Atómica, Buenos Aires (Argentina); Castillo, J. Alvarez [Universidad Nacional Autónoma de México, México, D. F., México (Mexico); Alvarez-Muñiz, J. [Universidad de Santiago de Compostela, La Coruña (Spain); Anastasi, G. A. [Gran Sasso Science Institute (INFN), L’Aquila (Italy); and others

    2017-03-10

    Simultaneous measurements of air showers with the fluorescence and surface detectors of the Pierre Auger Observatory allow a sensitive search for EeV photon point sources. Several Galactic and extragalactic candidate objects are grouped in classes to reduce the statistical penalty of many trials from that of a blind search and are analyzed for a significant excess above the background expectation. The presented search does not find any evidence for photon emission at candidate sources, and combined p-values for every class are reported. Particle and energy flux upper limits are given for selected candidate sources. These limits significantly constrain predictions of EeV proton emission models from non-transient Galactic and nearby extragalactic sources, as illustrated for the particular case of the Galactic center region.

  3. Identification of 'Point A' as the prevalent source of error in cephalometric analysis of lateral radiographs.

    Grogger, P; Sacher, C; Weber, S; Millesi, G; Seemann, R

    2018-04-10

    Deviations in measuring dentofacial components in a lateral X-ray represent a major hurdle in the subsequent treatment of dysgnathic patients. In a retrospective study, we investigated the most prevalent source of error in the following commonly used cephalometric measurements: the angles Sella-Nasion-Point A (SNA), Sella-Nasion-Point B (SNB) and Point A-Nasion-Point B (ANB); the Wits appraisal; the anteroposterior dysplasia indicator (APDI); and the overbite depth indicator (ODI). Preoperative lateral radiographic images of patients with dentofacial deformities were collected and the landmarks digitally traced by three independent raters. Cephalometric analysis was automatically performed based on 1116 tracings. Error analysis identified the x-coordinate of Point A as the prevalent source of error in all investigated measurements, except SNB, in which it is not incorporated. In SNB, the y-coordinate of Nasion predominated error variance. SNB showed lowest inter-rater variation. In addition, our observations confirmed previous studies showing that landmark identification variance follows characteristic error envelopes in the highest number of tracings analysed up to now. Variance orthogonal to defining planes was of relevance, while variance parallel to planes was not. Taking these findings into account, orthognathic surgeons as well as orthodontists would be able to perform cephalometry more accurately and accomplish better therapeutic results.

  4. Using Soluble Reactive Phosphorus and Ammonia to Identify Point Source Discharge from Large Livestock Facilities

    Borrello, M. C.; Scribner, M.; Chessin, K.

    2013-12-01

    A growing body of research draws attention to the negative environmental impacts on surface water from large livestock facilities. These impacts are mostly in the form of excessive nutrient loading resulting in significantly decreased oxygen levels. Over-application of animal waste on fields as well as direct discharge into surface water from facilities themselves has been identified as the main contributor to the development of hypoxic zones in Lake Erie, Chesapeake Bay and the Gulf of Mexico. Some regulators claim enforcement of water quality laws is problematic because of the nature and pervasiveness of non-point source impacts. Any direct discharge by a facility is a violation of permits governed by the Clean Water Act, unless the facility has special dispensation for discharge. Previous research by the principal author and others has shown runoff and underdrain transport are the main mechanisms by which nutrients enter surface water. This study utilized previous work to determine if the effects of non-point source discharge can be distinguished from direct (point-source) discharge using simple nutrient analysis and dissolved oxygen (DO) parameters. Nutrient and DO parameters were measured from three sites: 1. A stream adjacent to a field receiving manure, upstream of a large livestock facility with a history of direct discharge, 2. The same stream downstream of the facility and 3. A stream in an area relatively unimpacted by large-scale agriculture (control site). Results show that calculating a simple Pearson correlation coefficient (r) of soluble reactive phosphorus (SRP) and ammonia over time as well as temperature and DO, distinguishes non-point source from point source discharge into surface water. The r value for SRP and ammonia for the upstream site was 0.01 while the r value for the downstream site was 0.92. The control site had an r value of 0.20. Likewise, r values were calculated on temperature and DO for each site. High negative correlations
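
    A minimal sketch of the correlation calculation described above: the Pearson r between SRP and ammonia time series (the same call works for temperature vs. DO). The weekly values below are invented for illustration and are not the study's data.

    ```python
    import numpy as np

    def pearson_r(x, y):
        """Pearson correlation coefficient between two time series."""
        return float(np.corrcoef(x, y)[0, 1])

    # Illustrative weekly measurements at a downstream site (mg/L)
    srp     = np.array([0.02, 0.05, 0.30, 0.28, 0.04, 0.35, 0.33])
    ammonia = np.array([0.10, 0.12, 0.95, 0.90, 0.15, 1.10, 1.00])

    r = pearson_r(srp, ammonia)
    print(f"r(SRP, NH3) = {r:.2f}")   # a high r suggests a shared, pulsed (point) source
    ```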

  5. Systems near a critical point under multiplicative noise and the concept of effective potential

    Shapiro, V. E.

    1993-07-01

    This paper presents a general approach to, and elucidates the main features of, the effective potential, friction, and diffusion exerted by systems near a critical point due to the nonlinear influence of noise. The model is that of a general many-dimensional system of coupled nonlinear oscillators of finite damping under frequently alternating influences, multiplicative or additive, and arbitrary form of the power spectrum, provided the time scales of the system's drift due to noise are large compared to the scales of unperturbed relaxation behavior. The conventional statistical approach and the widespread deterministic effective potential concept use assumptions about a small parameter that are particular cases of those considered here. We show close correspondence between the asymptotic methods of these approaches and base the analysis on this. The results include an analytical treatment of the system's long-time behavior as a function of the noise, covering the full range of table- and bell-shaped spectra, from the monochromatic limit to white noise. The trend is considered both in the coordinate-momentum space and in the coordinate space of the system. Particular attention is paid to the stabilization behavior forced by multiplicative noise. An intermittency, in a broad area of the control parameter space, is shown to be an intrinsic feature of these phenomena.

  6. Multiple types of motives don't multiply the motivation of West Point cadets.

    Wrzesniewski, Amy; Schwartz, Barry; Cong, Xiangyu; Kane, Michael; Omar, Audrey; Kolditz, Thomas

    2014-07-29

    Although people often assume that multiple motives for doing something will be more powerful and effective than a single motive, research suggests that different types of motives for the same action sometimes compete. More specifically, research suggests that instrumental motives, which are extrinsic to the activities at hand, can weaken internal motives, which are intrinsic to the activities at hand. We tested whether holding both instrumental and internal motives yields negative outcomes in a field context in which various motives occur naturally and long-term educational and career outcomes are at stake. We assessed the impact of the motives of over 10,000 West Point cadets over the period of a decade on whether they would become commissioned officers, extend their officer service beyond the minimum required period, and be selected for early career promotions. For each outcome, motivation internal to military service itself predicted positive outcomes; a relationship that was negatively affected when instrumental motives were also in evidence. These results suggest that holding multiple motives damages persistence and performance in educational and occupational contexts over long periods of time.

  7. Calibrate the aerial surveying instrument by the limited surface source and the single point source that replace the unlimited surface source

    Lu Cun Heng

    1999-01-01

    The calculation formulae and survey results are derived from the superposition principle of gamma rays and the geometry of a hexagonal surface source when a limited surface source replaces the unlimited surface source to calibrate the aerial survey instrument on the ground, and from the reciprocity principle of gamma rays when a single point source replaces the unlimited surface source to calibrate the instrument in the air. Through theoretical analysis, the receiving rates of the bottom and side surfaces of the crystal are calculated for the case in which the aerial survey instrument receives gamma rays. A mathematical expression for the decay of gamma rays with height, following the Jinge function regularity, is obtained. According to this regularity, the air absorption coefficient for gamma rays and the detection efficiency coefficient of the crystal are calculated from the ground and airborne measured values of the bottom-surface receiving cou...

  8. Speculative Attacks with Multiple Sources of Public Information

    Cornand, Camille; Heinemann, Frank

    2005-01-01

    We propose a speculative attack model in which agents receive multiple public signals. It is characterised by its focus on an informational structure that departs from the strict separation between public information and private information. Diverse pieces of public information can be taken into account differently by players and are likely to lead to different appreciations ex post. This process defines players' private value. The main result is to show that equilibrium uniqueness depend...

  9. Synergies of multiple remote sensing data sources for REDD+ monitoring

    Sy, de V.; Herold, M.; Achard, F.; Asner, G.P.; Held, A.; Kellndorfer, J.; Verbesselt, J.

    2012-01-01

    Remote sensing technologies can provide objective, practical and cost-effective solutions for developing and maintaining REDD+ monitoring systems. This paper reviews the potential and status of available remote sensing data sources with a focus on different forest information products and synergies

  10. An international point source outbreak of typhoid fever: a European collaborative investigation*

    Stanwell-Smith, R. E.; Ward, L. R.

    1986-01-01

    A point source outbreak of Salmonella typhi, degraded Vi-strain 22, affecting 32 British visitors to Kos, Greece, in 1983 was attributed by a case—control study to the consumption of a salad at one hotel. This represents the first major outbreak of typhoid fever in which a salad has been identified as the vehicle. The source of the infection was probably a carrier in the hotel staff. The investigation demonstrates the importance of national surveillance, international cooperation, and epidemiological methods in the investigation and control of major outbreaks of infection. PMID:3488842

  11. High frequency seismic signal generated by landslides on complex topographies: from point source to spatially distributed sources

    Mangeney, A.; Kuehnert, J.; Capdeville, Y.; Durand, V.; Stutzmann, E.; Kone, E. H.; Sethi, S.

    2017-12-01

    During their flow along the topography, landslides generate seismic waves in a wide frequency range. These so-called landquakes can be recorded at very large distances (a few hundreds of km for large landslides). The recorded signals depend on the landslide seismic source and the seismic wave propagation. If the wave propagation is well understood, the seismic signals can be inverted for the seismic source and thus can be used to get information on the landslide properties and dynamics. Analysis and modeling of long period seismic signals (10-150 s) have helped in this way to discriminate between different landslide scenarios and to constrain rheological parameters (e.g. Favreau et al., 2010). This was possible as topography poorly affects wave propagation at these long periods and the landslide seismic source can be approximated as a point source. In the near-field and at higher frequencies (> 1 Hz) the spatial extent of the source has to be taken into account and the influence of the topography on the recorded seismic signal should be quantified in order to extract information on the landslide properties and dynamics. The characteristic signature of distributed sources and varying topographies is studied as a function of frequency and recording distance. The time-dependent spatial distribution of the forces applied to the ground by the landslide is obtained using granular flow numerical modeling on 3D topography. The generated seismic waves are simulated using the spectral element method. The simulated seismic signal is compared to observed seismic data from rockfalls at the Dolomieu Crater of Piton de la Fournaise (La Réunion). Favreau, P., Mangeney, A., Lucas, A., Crosta, G., and Bouchut, F. (2010). Numerical modeling of landquakes. Geophysical Research Letters, 37(15):1-5.

  12. Misconceptions and biases in German students' perception of multiple energy sources: implications for science education

    Lee, Roh Pin

    2016-04-01

    Misconceptions and biases in energy perception could influence people's support for developments integral to the success of restructuring a nation's energy system. Science education, in equipping young adults with the cognitive skills and knowledge necessary to navigate in the confusing energy environment, could play a key role in paving the way for informed decision-making. This study examined German students' knowledge of the contribution of diverse energy sources to their nation's energy mix as well as their affective energy responses so as to identify implications for science education. Specifically, the study investigated whether and to what extent students hold mistaken beliefs about the role of multiple energy sources in their nation's energy mix, and assessed how misconceptions could act as self-generated reference points to underpin support/resistance of proposed developments. An in-depth analysis of spontaneous affective associations with five key energy sources also enabled the identification of underlying concerns driving people's energy responses and facilitated an examination of how affective perception, in acting as a heuristic, could lead to biases in energy judgment and decision-making. Finally, subgroup analysis differentiated by education and gender supported insights into a 'two culture' effect on energy perception and the challenge it poses to science education.

  13. A proton point source produced by laser interaction with cone-top-end target

    Yu, Jinqing; Jin, Xiaolin; Zhou, Weimin; Zhao, Zongqing; Yan, Yonghong; Li, Bin; Hong, Wei; Gu, Yuqiu

    2012-01-01

    In this paper, we propose a proton point source produced by the interaction of a laser with a cone-top-end target and investigate it by two-dimensional particle-in-cell (2D-PIC) simulations, since proton point sources are well known to provide higher spatial resolution in proton radiography. Our results show that the relativistic electrons are guided to the rear of the cone-top-end target by the electrostatic charge-separation field and the self-generated magnetic field along the profile of the target. As a result, the peak magnitude of the sheath field at the rear surface of the cone-top-end target is higher than for a common cone target. We test this scheme by 2D-PIC simulation and find that the resulting proton source has a diameter of 0.79λ0, an average energy of 9.1 MeV and an energy spread of less than 35%.

  14. Simulation of ultrasonic surface waves with multi-Gaussian and point source beam models

    Zhao, Xinyu; Schmerr, Lester W. Jr.; Li, Xiongbing; Sedov, Alexander

    2014-01-01

    In the past decade, multi-Gaussian beam models have been developed to solve many complicated bulk wave propagation problems. However, to date those models have not been extended to simulate the generation of Rayleigh waves. Here we will combine Gaussian beams with an explicit high frequency expression for the Rayleigh wave Green function to produce a three-dimensional multi-Gaussian beam model for the fields radiated from an angle beam transducer mounted on a solid wedge. Simulation results obtained with this model are compared to those of a point source model. It is shown that the multi-Gaussian surface wave beam model agrees well with the point source model while being computationally much more efficient

  15. Search for neutrino point sources with an all-sky autocorrelation analysis in IceCube

    Turcati, Andrea; Bernhard, Anna; Coenders, Stefan [TU, Munich (Germany); Collaboration: IceCube-Collaboration

    2016-07-01

    The IceCube Neutrino Observatory is a cubic-kilometre-scale neutrino telescope located in the Antarctic ice. Its full-sky field of view gives unique opportunities to study the neutrino emission from the Galactic and extragalactic sky. Recently, IceCube found the first signal of astrophysical neutrinos with energies up to the PeV scale, but the origin of these particles still remains unresolved. Given the observed flux, the absence of observations of bright point sources can be explained by the presence of numerous weak sources. This scenario can be tested using autocorrelation methods. We present here the sensitivities and discovery potentials of a two-point angular correlation analysis performed on seven years of IceCube data, taken between 2008 and 2015. The test is applied on the northern and southern skies separately, using the neutrino energy information to improve the effectiveness of the method.
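
    As a hedged illustration of the two-point angular correlation idea, the sketch below histograms the pairwise angular separations of a set of event directions; in a real analysis this distribution would be compared against scrambled (isotropic) skies. The event sample, binning and function names here are invented and are not the IceCube analysis code.

    ```python
    import numpy as np

    def angular_separations(ra, dec):
        """Pairwise great-circle separations (radians) between event directions."""
        x = np.cos(dec) * np.cos(ra)
        y = np.cos(dec) * np.sin(ra)
        z = np.sin(dec)
        v = np.stack([x, y, z], axis=1)
        cosang = np.clip(v @ v.T, -1.0, 1.0)
        iu = np.triu_indices(len(ra), k=1)      # unique pairs only
        return np.arccos(cosang[iu])

    rng = np.random.default_rng(2)
    n = 500
    ra = rng.uniform(0.0, 2.0 * np.pi, n)
    dec = np.arcsin(rng.uniform(-1.0, 1.0, n))  # isotropic toy directions

    seps = angular_separations(ra, dec)
    hist, edges = np.histogram(seps, bins=np.radians(np.arange(0, 181, 5)))
    print(hist[:5])   # counts in the first few 5-degree separation bins
    ```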

  16. A Bayesian geostatistical approach for evaluating the uncertainty of contaminant mass discharges from point sources

    Troldborg, M.; Nowak, W.; Binning, P. J.; Bjerg, P. L.

    2012-12-01

    Estimates of mass discharge (mass/time) are increasingly being used when assessing risks of groundwater contamination and designing remedial systems at contaminated sites. Mass discharge estimates are, however, prone to rather large uncertainties as they integrate uncertain spatial distributions of both concentration and groundwater flow velocities. For risk assessments or any other decisions that are being based on mass discharge estimates, it is essential to address these uncertainties. We present a novel Bayesian geostatistical approach for quantifying the uncertainty of the mass discharge across a multilevel control plane. The method decouples the flow and transport simulation and has the advantage of avoiding the heavy computational burden of three-dimensional numerical flow and transport simulation coupled with geostatistical inversion. It may therefore be of practical relevance to practitioners compared to existing methods that are either too simple or computationally demanding. The method is based on conditional geostatistical simulation and accounts for i) heterogeneity of both the flow field and the concentration distribution through Bayesian geostatistics (including the uncertainty in covariance functions), ii) measurement uncertainty, and iii) uncertain source zone geometry and transport parameters. The method generates multiple equally likely realizations of the spatial flow and concentration distribution, which all honour the measured data at the control plane. The flow realizations are generated by analytical co-simulation of the hydraulic conductivity and the hydraulic gradient across the control plane. These realizations are made consistent with measurements of both hydraulic conductivity and head at the site. An analytical macro-dispersive transport solution is employed to simulate the mean concentration distribution across the control plane, and a geostatistical model of the Box-Cox transformed concentration data is used to simulate observed
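
    The sketch below is a hedged, heavily simplified illustration of the underlying mass discharge integral, summed as flux times concentration times cell area over a discretized control plane and repeated over many equally likely realizations to obtain an uncertainty distribution. The lognormal random fields stand in for the conditional geostatistical realizations of the paper, and all values are invented.

    ```python
    import numpy as np

    def mass_discharge(q, c, cell_area):
        """Mass discharge across a control plane: sum of (Darcy flux x concentration x area)."""
        return float(np.sum(q * c) * cell_area)

    rng = np.random.default_rng(3)
    n_cells, n_real, cell_area = 200, 1000, 0.25        # 0.25 m^2 grid cells (illustrative)

    md = []
    for _ in range(n_real):
        # Stand-ins for conditional realizations of flux (m/d) and concentration (g/m^3):
        q = np.exp(rng.normal(np.log(0.05), 0.5, n_cells))
        c = np.exp(rng.normal(np.log(2.0), 1.0, n_cells))
        md.append(mass_discharge(q, c, cell_area))

    md = np.array(md)
    print(f"median {np.median(md):.1f} g/d, 95% interval "
          f"[{np.percentile(md, 2.5):.1f}, {np.percentile(md, 97.5):.1f}] g/d")
    ```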

  17. Multiple-source multiple-harmonic active vibration control of variable section cylindrical structures: A numerical study

    Liu, Jinxin; Chen, Xuefeng; Gao, Jiawei; Zhang, Xingwu

    2016-12-01

    Air vehicles, space vehicles and underwater vehicles, the cabins of which can be viewed as variable section cylindrical structures, have multiple rotational vibration sources (e.g., engines, propellers, compressors and motors), making the spectrum of the noise multiple-harmonic. The suppression of such noise has been a focus of interest in the field of active vibration control (AVC). In this paper, a multiple-source multiple-harmonic (MSMH) active vibration suppression algorithm with a feed-forward structure is proposed based on reference amplitude rectification and the conjugate gradient method (CGM). An AVC simulation scheme called finite element model in-loop simulation (FEMILS) is also proposed for rapid algorithm verification. Numerical studies of AVC are conducted on a variable section cylindrical structure based on the proposed MSMH algorithm and FEMILS scheme. It can be seen from the numerical studies that: (1) the proposed MSMH algorithm can individually suppress each component of the multiple-harmonic noise with a unified and improved convergence rate; (2) the FEMILS scheme is convenient and straightforward for multiple-source simulations with an acceptable loop time. Moreover, the simulations follow a procedure similar to real-life control and can be easily extended to a physical model platform.

  18. Prevention and Control of Agricultural Non-Point Source Pollutions in UK and Suggestions to China

    Liu, Kun; Ren, Tianzhi; Wu, Wenliang; Meng, Fanquiao; Bellarby, Jessica; Smith, Laurence

    2016-01-01

    Currently, the world is facing challenges of maintaining food production growth while improving agricultural ecological environmental quality. The prevention and control of agricultural non-point source pollution, a key component of these challenges, is a systematic program which integrates many factors such as technology and its extension, relevant regulation and policies. In the project of UK-China Sustainable Agriculture Innovation Network, we undertook a comprehensive analysis of the prev...

  19. High angle grain boundaries as sources or sinks for point defects

    Balluffi, R.W.

    1979-09-01

    A secondary grain boundary dislocation climb model for high angle grain boundaries as sources/sinks for point defects is described in the light of recent advances in our knowledge of grain boundary structure. Experimental results are reviewed and are then compared with the expected behavior of the proposed model. Reasonably good consistency is found at the level of our present understanding of the subject. However, several gaps in our present knowledge still exist, and these are identified and discussed briefly.

  20. Gamma Rays from the Inner Milky Way: Dark Matter or Point Sources?

    CERN. Geneva

    2015-01-01

    Studies of data from the Fermi Gamma-Ray Space Telescope have revealed bright gamma-ray emission from the central regions of our galaxy, with a spatial and spectral profile consistent with annihilating dark matter. I will present a new model-independent analysis that suggests that rather than originating from dark matter, the GeV excess may arise from a surprising new population of as-yet-unresolved gamma-ray point sources in the heart of the Milky Way.

  1. CO2 point sources and subsurface storage capacities for CO2 in aquifers in Norway

    Boee, Reidulv; Magnus, Christian; Osmundsen, Per Terje; Rindstad, Bjoern Ivar

    2002-01-01

    The GESTCO project comprises a study of the distribution and coincidence of thermal CO2 emission sources and the location/quality of geological storage capacity in Europe. Four of the most promising types of geological storage are being studied: 1. Onshore/offshore saline aquifers with or without lateral seal. 2. Low-enthalpy geothermal reservoirs. 3. Deep methane-bearing coal beds and abandoned coal and salt mines. 4. Exhausted or near-exhausted hydrocarbon structures. In this report we present an inventory of CO2 point sources in Norway (1999) and the results of the work within Study Area C: Deep saline aquifers offshore/near shore Northern and Central Norway. Offshore/near shore Southern Norway has also been included, while the Barents Sea is not described in any detail. The most detailed studies are on the Tilje and Aare Formations on the Troendelag Platform off Mid-Norway and on the Sognefjord, Fensfjord and Krossfjord Formations, southeast of the Troll Field off Western Norway. The Tilje Formation has been chosen as one of the cases to be studied in greater detail (numerical modelling) in the project. This report shows that offshore Norway there are concentrations of large CO2 point sources in the Haltenbanken, the Viking Graben/Tampen Spur area, the Southern Viking Graben and the Central Trough, while onshore Norway there are concentrations of point sources in the Oslofjord/Porsgrund area, along the coast of western Norway and in the Troendelag. A number of aquifers with large theoretical CO2 storage potential are pointed out in the North Sea, the Norwegian Sea and in the Southern Barents Sea. The storage capacity in the depth interval 0.8-4 km below sea level is estimated to be about 13 Gt (13,000,000,000 tonnes) of CO2 in geological traps (outside hydrocarbon fields), while the storage capacity in aquifers not confined to traps is estimated to be at least 280 Gt CO2. (Author)

  2. Fast and Accurate Rat Head Motion Tracking With Point Sources for Awake Brain PET.

    Miranda, Alan; Staelens, Steven; Stroobants, Sigrid; Verhaeghe, Jeroen

    2017-07-01

    To avoid the confounding effects of anesthesia and immobilization stress in rat brain positron emission tomography (PET), motion tracking-based unrestrained awake rat brain imaging is being developed. In this paper, we propose a fast and accurate rat head motion tracking method based on small PET point sources. PET point sources (3-4) attached to the rat's head are tracked in image space using 15-32 ms time frames. Our point source tracking (PST) method was validated using a manually moved microDerenzo phantom that was simultaneously tracked with an optical tracker (OT) for comparison. The PST method was further validated in three awake [18F]FDG rat brain scans. Compared with the OT, the PST-based correction at the same frame rate (31.2 Hz) reduced the reconstructed FWHM by 0.39-0.66 mm for the different tested rod sizes of the microDerenzo phantom. The FWHM could be further reduced by another 0.07-0.13 mm when increasing the PST frame rate (66.7 Hz). Regional brain [18F]FDG uptake in the motion-corrected scan was strongly correlated with that of the anesthetized reference scan for all three cases. The proposed PST method allowed excellent and reproducible motion correction in awake in vivo experiments. In addition, there is no need for specialized tracking equipment or additional calibrations, the point sources are practically imperceptible to the rat, and PST is ideally suited for small-bore scanners, where optical tracking might be challenging.

  3. Accommodating multiple illumination sources in an imaging colorimetry environment

    Tobin, Kenneth W., Jr.; Goddard, James S., Jr.; Hunt, Martin A.; Hylton, Kathy W.; Karnowski, Thomas P.; Simpson, Marc L.; Richards, Roger K.; Treece, Dale A.

    2000-03-01

    Researchers at the Oak Ridge National Laboratory have been developing a method for measuring color quality in textile products using a tri-stimulus color camera system. Initial results of the Imaging Tristimulus Colorimeter (ITC) were reported during 1999. These results showed that the projection onto convex sets (POCS) approach to color estimation could be applied to complex printed patterns on textile products with high accuracy and repeatability. Image-based color sensors used for on-line measurement are not colorimetric by nature and require a non-linear transformation of the component colors based on the spectral properties of the incident illumination, imaging sensor, and the actual textile color. Our earlier work reports these results for a broad-band, smoothly varying D65 standard illuminant. To move the measurement to the on-line environment with continuously manufactured textile webs, the illumination source becomes problematic. The spectral content of these light sources varies substantially from the D65 standard illuminant and can greatly impact the measurement performance of the POCS system. Although absolute color measurements are difficult to make under different illumination, referential measurements to monitor color drift provide a useful indication of product quality. Modifications to the ITC system have been implemented to enable the study of different light sources. These results and the subsequent analysis of relative color measurements will be reported for textile products.

  4. Temperature Effects of Point Sources, Riparian Shading, and Dam Operations on the Willamette River, Oregon

    Rounds, Stewart A.

    2007-01-01

    Water temperature is an important factor influencing the migration, rearing, and spawning of several important fish species in rivers of the Pacific Northwest. To protect these fish populations and to fulfill its responsibilities under the Federal Clean Water Act, the Oregon Department of Environmental Quality set a water temperature Total Maximum Daily Load (TMDL) in 2006 for the Willamette River and the lower reaches of its largest tributaries in northwestern Oregon. As a result, the thermal discharges of the largest point sources of heat to the Willamette River now are limited at certain times of the year, riparian vegetation has been targeted for restoration, and upstream dams are recognized as important influences on downstream temperatures. Many of the prescribed point-source heat-load allocations are sufficiently restrictive that management agencies may need to expend considerable resources to meet those allocations. Trading heat allocations among point-source dischargers may be a more economical and efficient means of meeting the cumulative point-source temperature limits set by the TMDL. The cumulative nature of these limits, however, precludes simple one-to-one trades of heat from one point source to another; a more detailed spatial analysis is needed. In this investigation, the flow and temperature models that formed the basis of the Willamette temperature TMDL were used to determine a spatially indexed 'heating signature' for each of the modeled point sources, and those signatures then were combined into a user-friendly, spreadsheet-based screening tool. The Willamette River Point-Source Heat-Trading Tool allows the user to increase or decrease the heating signature of each source and thereby evaluate the effects of a wide range of potential point-source heat trades. The predictions of the Trading Tool were verified by running the Willamette flow and temperature models under four different trading scenarios, and the predictions typically were accurate

  5. Non-point Source Pollutants Loss of Planting Industry in the Yunnan Plateau Lake Basin, China

    ZHAO Zu-jun

    2017-12-01

    Non-point source pollution from planting has become a major factor affecting the quality and safety of the water environment in China. In recent years, studies have shown that the loss of nitrogen and phosphorus from agricultural chemical fertilizers has led to increasingly serious non-point source pollution. By means of the loss coefficient method and spatial overlay analysis, the loss amounts, loss intensities and spatial distribution characteristics of total nitrogen, total phosphorus, ammonium nitrogen and nitrate nitrogen in the Fuxian Lake, Xingyun Lake and Qilu Lake basins in 2015 were analyzed. The results showed that the loss of total nitrogen was the highest in the three basins, followed by ammonium nitrogen, nitrate nitrogen and total phosphorus, with loss intensity ranges of 2.73~22.07, 0.003~3.52, 0.01~2.25 kg·hm-2 and 0.05~1.36 kg·hm-2, respectively. Total nitrogen and total phosphorus losses were mainly concentrated in the southwest of Qilu Lake and the west and south of Xingyun Lake. Ammonium nitrogen and nitrate nitrogen losses were mainly concentrated in the south of Qilu Lake and the south and north of Xingyun Lake. The loss of nitrogen and phosphorus was mainly derived from cash crops and rice. Therefore, zoning, grading and phased prevention and control schemes are proposed, in order to provide a scientific basis for controlling non-point source pollution in the study area.

  6. The Herschel Virgo Cluster Survey. XVII. SPIRE point-source catalogs and number counts

    Pappalardo, Ciro; Bendo, George J.; Bianchi, Simone; Hunt, Leslie; Zibetti, Stefano; Corbelli, Edvige; di Serego Alighieri, Sperello; Grossi, Marco; Davies, Jonathan; Baes, Maarten; De Looze, Ilse; Fritz, Jacopo; Pohlen, Michael; Smith, Matthew W. L.; Verstappen, Joris; Boquien, Médéric; Boselli, Alessandro; Cortese, Luca; Hughes, Thomas; Viaene, Sebastien; Bizzocchi, Luca; Clemens, Marcel

    2015-01-01

    Aims: We present three independent catalogs of point sources extracted from SPIRE images at 250, 350, and 500 μm, acquired with the Herschel Space Observatory as a part of the Herschel Virgo Cluster Survey (HeViCS). The catalogs have been cross-correlated to consistently extract the photometry at SPIRE wavelengths for each object. Methods: Sources have been detected using an iterative loop. The source positions are determined by estimating the likelihood of each peak on the maps being a real source, according to the criterion defined in the sourceExtractorSussextractor task. The flux densities are estimated using sourceExtractorTimeline, a timeline-based point-source fitter that also determines the width of the Gaussian that best reproduces the source considered. Afterwards, each source is subtracted from the maps by removing a Gaussian function at every position with the full width at half maximum equal to that estimated in sourceExtractorTimeline. This procedure improves the robustness of our algorithm in terms of source identification. We calculate the completeness and the flux accuracy by injecting artificial sources into the timeline and estimate the reliability of the catalog using a permutation method. Results: The HeViCS catalogs contain about 52 000, 42 200, and 18 700 sources selected at 250, 350, and 500 μm above 3σ and are ~75%, 62%, and 50% complete at flux densities of 20 mJy at 250, 350, 500 μm, respectively. We then measured source number counts at 250, 350, and 500 μm and compared them with previous data and semi-analytical models. We also cross-correlated the catalogs with the Sloan Digital Sky Survey to investigate the redshift distribution of the nearby sources. From this cross-correlation, we selected ~2000 sources with reliable fluxes and a high signal-to-noise ratio, finding an average redshift z ~ 0.3 ± 0.22 and 0.25 (16-84 percentile). Conclusions: The number counts at 250, 350, and 500 μm show an increase in

  7. Temporal-spatial distribution of non-point source pollution in a drinking water source reservoir watershed based on SWAT

    M. Wang

    2015-05-01

    The conservation of drinking water source reservoirs is closely related to regional economic development and people's livelihoods. Research on the non-point source pollution characteristics of their watersheds is crucial for reservoir security. The Tang Pu Reservoir watershed was selected as the study area. A non-point source pollution model of the Tang Pu Reservoir was established based on the SWAT (Soil and Water Assessment Tool) model. The model was adjusted and used to analyse the temporal-spatial distribution patterns of total nitrogen (TN) and total phosphorus (TP). The results showed that the losses of TN and TP in the reservoir watershed were related to precipitation in the flood season, and the annual changes showed an "M" shape. It was found that the contributions of TN and TP losses accounted for 84.5% and 85.3% in high flow years and for 70.3% and 69.7% in low flow years, respectively. The contributions in normal flow years were 62.9% and 63.3%, respectively. The TN and TP mainly arise from Wangtan town, Gulai town, and Wangyuan town, etc. In addition, it was found that the sources of TN and TP showed spatial consistency.

  8. A scanning point source for quality control of FOV uniformity in GC-PET imaging

    Bergmann, H.; Minear, G.; Dobrozemsky, G.; Nowotny, R.; Koenig, B.

    2002-01-01

    Aim: PET imaging with coincidence cameras (GC-PET) requires additional quality control procedures to check the function of coincidence circuitry and detector zoning. In particular, the uniformity response over the field of view needs special attention since it is known that coincidence counting mode may suffer from non-uniformity effects not present in single photon mode. Materials and methods: An inexpensive linear scanner with a stepper motor and a digital interface to a PC with software allowing versatile scanning modes was developed. The scanner is used with a source holder containing a Sodium-22 point source. While moving the source along the axis of rotation of the GC-PET system, a tomographic acquisition takes place. The scan covers the full axial field of view of the 2-D or 3-D scatter frame. Depending on the acquisition software, point source scanning takes place continuously while only one projection is acquired or is done in step-and-shoot mode with the number of positions equal to the number of gantry steps. Special software was developed to analyse the resulting list mode acquisition files and to produce an image of the recorded coincidence events of each head. Results: Uniformity images of coincidence events were obtained after further correction for systematic sensitivity variations caused by acquisition geometry. The resulting images are analysed visually and by calculating NEMA uniformity indices as for a planar flood field. The method has been applied successfully to two different brands of GC-PET capable gamma cameras. Conclusion: Uniformity of GC-PET can be tested quickly and accurately with a routine QC procedure, using a Sodium-22 scanning point source and an inexpensive mechanical scanning device. The method can be used for both 2-D and 3-D acquisition modes and fills an important gap in the quality control system for GC-PET

  9. Detection of Point Sources on Two-Dimensional Images Based on Peaks

    R. B. Barreiro

    2005-09-01

    This paper considers the detection of point sources in two-dimensional astronomical images. The detection scheme we propose is based on peak statistics. We discuss the example of the detection of far galaxies in cosmic microwave background experiments throughout the paper, although the method we present is totally general and can be used in many other fields of data analysis. We consider sources with a Gaussian profile—that is, a fair approximation of the profile of a point source convolved with the detector beam in microwave experiments—on a background modeled by a homogeneous and isotropic Gaussian random field characterized by a scale-free power spectrum. Point sources are enhanced with respect to the background by means of linear filters. After filtering, we identify local maxima and apply our detection scheme, a Neyman-Pearson detector that defines our region of acceptance based on the a priori pdf of the sources and the ratio of number densities. We study the different performances of some linear filters that have been used in this context in the literature: the Mexican hat wavelet, the matched filter, and the scale-adaptive filter. We consider as well an extension to two dimensions of the biparametric scale-adaptive filter (BSAF). The BSAF depends on two parameters which are determined by maximizing the number density of real detections while fixing the number density of spurious detections. For our detection criterion the BSAF outperforms the other filters in the interesting case of white noise.
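
    A much simplified sketch of the peak-based detection idea, leaving out the Neyman-Pearson acceptance region and the BSAF of the paper: filter the image with a Gaussian kernel as a crude linear filter, locate local maxima, and keep those above a threshold. The scipy calls and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

rng = np.random.default_rng(2)
image = rng.normal(0.0, 1.0, (256, 256))        # toy background field
image[100, 120] += 40.0                          # plant a point source
image = gaussian_filter(image, sigma=2.0)        # emulate the instrument beam

# Linear filtering step (a Gaussian kernel as a crude matched-like filter)
filtered = gaussian_filter(image, sigma=2.0)

# Local maxima: pixels equal to the maximum of their 5x5 neighbourhood
is_peak = filtered == maximum_filter(filtered, size=5)
threshold = filtered.mean() + 4.0 * filtered.std()
detections = np.argwhere(is_peak & (filtered > threshold))
print("candidate sources (row, col):", detections)
```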

  10. Source apportionment of nitrogen and phosphorus from non-point source pollution in Nansi Lake Basin, China.

    Zhang, Bao-Lei; Cui, Bo-Hao; Zhang, Shu-Min; Wu, Quan-Yuan; Yao, Lei

    2018-05-03

    Nitrogen (N) and phosphorus (P) from non-point source (NPS) pollution in the Nansi Lake Basin greatly influence the water quality of Nansi Lake, which is the determining factor for the success of the East Route of the South-North Water Transfer Project in China. This research improved the Johnes export coefficient model (ECM) by developing a method to determine the export coefficients of different land use types based on hydrological and water quality data. Taking NPS total nitrogen (TN) and total phosphorus (TP) as the study objects, this study estimated the contributions of different pollution sources and analyzed their spatial distributions based on the improved ECM. The results underlined that the method for obtaining export coefficients of land use types from hydrology and water quality data is feasible and accurate, and is suitable for the study of NPS pollution in large-scale basins. The average contributions to NPS TN from land use, rural breeding and rural life are 33.6, 25.9, and 40.5%, and those to NPS TP are 31.6, 43.7, and 24.7%, respectively. In particular, dry land was the main land use source for both NPS TN and TP pollution, contributing 81.3 and 81.8%, respectively. The counties of Zaozhuang, Tengzhou, Caoxian, Yuncheng, and Shanxian had higher contribution rates, and the counties of Dingtao, Juancheng, and Caoxian had higher load intensities for both NPS TN and TP pollution. The results of this study improve the understanding of pollution source contributions and enable researchers and planners to focus on the most important sources and regions of NPS pollution.
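
    The export-coefficient bookkeeping behind such estimates can be sketched in a few lines: the load is the sum over source categories of an export coefficient times the size of that source (land-use area, population, livestock numbers). The coefficients and source sizes below are made up purely for illustration and are not taken from the study.

```python
# Hypothetical export coefficients (kg N per unit per year) and source sizes;
# in the paper these are derived from hydrological and water-quality data.
tn_coefficients = {"dry_land": 12.0, "paddy": 6.0, "forest": 1.5,   # kg/(ha*yr)
                   "rural_population": 0.8,                         # kg/(person*yr)
                   "livestock": 2.5}                                # kg/(head*yr)
source_sizes = {"dry_land": 5.0e4, "paddy": 2.0e4, "forest": 8.0e4, # ha
                "rural_population": 3.0e5,                          # persons
                "livestock": 1.2e5}                                 # head

tn_load = {k: tn_coefficients[k] * source_sizes[k] for k in tn_coefficients}
total = sum(tn_load.values())
shares = {k: f"{100.0 * v / total:.1f}%" for k, v in tn_load.items()}
print(f"total TN load: {total / 1000:.1f} t/yr")
print(shares)
```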

  11. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    Murray, S. G.; Trott, C. M.; Jordan, C. H. [ARC Centre of Excellence for All-sky Astrophysics (CAASTRO) (Australia)

    2017-08-10

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power-spectrum and under-estimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.

  12. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    Murray, S. G.; Trott, C. M.; Jordan, C. H.

    2017-08-01

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power-spectrum and under-estimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.

  13. IceCube point source searches using through-going muon tracks

    Coenders, Stefan [TU Muenchen, Physik-Department, Excellence Cluster Universe, Boltzmannstr. 2, 85748 Garching (Germany); Collaboration: IceCube-Collaboration

    2015-07-01

    The IceCube neutrino observatory located at the South Pole is currently the largest neutrino telescope. Using through-going muon tracks, IceCube records approximately 130,000 events per year with a reconstruction accuracy as low as 0.7 deg for energies of 10 TeV. In an analysis of four years of integrated data, no sources of neutrinos have yet been observed. This talk deals with the current progress in point-source searches, adding another two years of data recorded in 2012 and 2013. In a combined search with starting events, sources with hard and soft spectra, with and without cut-offs, are characterised.

  14. IceCube-Gen2 sensitivity improvement for steady neutrino point sources

    Coenders, Stefan; Resconi, Elisa [TU Muenchen, Physik-Department, Excellence Cluster Universe, Boltzmannstr. 2, 85748 Garching (Germany); Collaboration: IceCube-Collaboration

    2015-07-01

    The observation of an astrophysical neutrino flux through high-energy events starting in IceCube strengthens the search for sources of astrophysical neutrinos. Identification of these sources requires good pointing at high statistics, mainly using muons created by charged-current muon neutrino interactions passing through the IceCube detector. We report on preliminary studies of a possible high-energy extension, IceCube-Gen2. With a detection volume six times larger, both the effective area and the reconstruction accuracy will improve with respect to IceCube. Moreover, using (in-ice) active veto techniques will significantly improve the performance for Southern hemisphere events, where possible local candidate neutrino sources are located.

  15. Some problems of neutron source multiplication method for site measurement technology in nuclear critical safety

    Shi Yongqian; Zhu Qingfu; Hu Dingsheng; He Tao; Yao Shigui; Lin Shenghuo

    2004-01-01

    The paper presents the theory and experimental method of the neutron source multiplication method for in-situ measurement in nuclear criticality safety. The parameter actually measured by the source multiplication method is the subcritical-with-source neutron effective multiplication factor k_s, not the neutron effective multiplication factor k_eff. The experimental research was carried out on the uranium solution nuclear criticality safety experimental assembly. The k_s at different subcriticalities was measured by the neutron source multiplication method, while k_eff at different subcriticalities was obtained as follows: the reactivity coefficient per unit solution level was first measured by the period method and then multiplied by the difference between the critical and subcritical solution levels to give the reactivity at the subcritical solution level; k_eff was finally extracted from the reactivity formula. The implications for nuclear criticality safety and the difference between k_eff and k_s are discussed.
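
    The relations involved are standard source-multiplication and reactivity formulas rather than anything specific to the cited experiment: the detector count rate scales roughly as 1/(1 - k_s), and a reactivity expressed as Δk/k converts to k_eff through ρ = (k_eff - 1)/k_eff. A minimal sketch with illustrative numbers:

```python
def ks_from_multiplication(count_rate, count_rate_reference, ks_reference):
    """Infer k_s from the ratio of detector count rates, assuming the
    source-multiplication relation C ~ 1 / (1 - k_s)."""
    m = count_rate / count_rate_reference
    return 1.0 - (1.0 - ks_reference) / m

def keff_from_reactivity(rho):
    """Invert rho = (k_eff - 1) / k_eff."""
    return 1.0 / (1.0 - rho)

# Illustrative values only
ks = ks_from_multiplication(count_rate=5200.0,
                            count_rate_reference=1300.0,
                            ks_reference=0.90)
# Reactivity built up as (level coefficient) x (level difference), as in the text
rho = -0.004 * 2.5            # (Δk/k per cm) x (cm below critical), assumed
print(f"k_s   = {ks:.4f}")
print(f"k_eff = {keff_from_reactivity(rho):.4f}")
```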

  16. Multiple Sources of Prescription Payment and Risky Opioid Therapy Among Veterans.

    Becker, William C; Fenton, Brenda T; Brandt, Cynthia A; Doyle, Erin L; Francis, Joseph; Goulet, Joseph L; Moore, Brent A; Torrise, Virginia; Kerns, Robert D; Kreiner, Peter W

    2017-07-01

    Opioid overdose and other related harms are a major source of morbidity and mortality among US Veterans, in part due to high-risk opioid prescribing. We sought to determine whether having multiple sources of payment for opioids-as a marker for out-of-system access-is associated with risky opioid therapy among veterans. Cross-sectional study examining the association between multiple sources of payment and risky opioid therapy among all individuals with Veterans Health Administration (VHA) payment for opioid analgesic prescriptions in Kentucky during fiscal year 2014-2015. Source of payment categories: (1) VHA only source of payment (sole source); (2) sources of payment were VHA and at least 1 cash payment [VHA+cash payment(s)] whether or not there was a third source of payment; and (3) at least one other noncash source: Medicare, Medicaid, or private insurance [VHA+noncash source(s)]. Our outcomes were 2 risky opioid therapies: combination opioid/benzodiazepine therapy and high-dose opioid therapy, defined as morphine equivalent daily dose ≥90 mg. Of the 14,795 individuals in the analytic sample, there were 81.9% in the sole source category, 6.6% in the VHA+cash payment(s) category, and 11.5% in the VHA+noncash source(s) category. In logistic regression, controlling for age and sex, persons with multiple payment sources had significantly higher odds of each risky opioid therapy, with those in the VHA+cash having significantly higher odds than those in the VHA+noncash source(s) group. Prescribers should examine the prescription monitoring program as multiple payment sources increase the odds of risky opioid therapy.

  17. Lessons Learned from OMI Observations of Point Source SO2 Pollution

    Krotkov, N.; Fioletov, V.; McLinden, Chris

    2011-01-01

    The Ozone Monitoring Instrument (OMI) on NASA's Aura satellite makes global daily measurements of the total column of sulfur dioxide (SO2), a short-lived trace gas produced by fossil fuel combustion, smelting, and volcanoes. Although anthropogenic SO2 signals may not be detectable in a single OMI pixel, it is possible to see the source and determine its exact location by averaging a large number of individual measurements. We describe new techniques for spatial and temporal averaging that have been applied to the OMI SO2 data to determine the spatial distributions or "fingerprints" of SO2 burdens from the top 100 pollution sources in North America. The technique requires averaging several years of OMI daily measurements to observe SO2 pollution from typical anthropogenic sources. We found that the largest point sources of SO2 in the U.S. produce elevated SO2 values over a relatively small area, within a 20-30 km radius. Therefore, one needs higher than OMI spatial resolution to monitor typical SO2 sources. The TROPOMI instrument on the ESA Sentinel-5 Precursor mission will have improved ground resolution (approximately 7 km at nadir), but is limited to one measurement per day. A pointable geostationary UVB spectrometer with variable spatial resolution and flexible sampling frequency could potentially achieve the goal of daily monitoring of SO2 point sources and resolve downwind plumes. This concept of taking measurements at high frequency to enhance weak signals needs to be demonstrated with a GEOCAPE precursor mission before 2020, which will help in formulating the GEOCAPE measurement requirements.

  18. Multiple Household Water Sources and Their Use in Remote Communities With Evidence From Pacific Island Countries

    Elliott, Mark; MacDonald, Morgan C.; Chan, Terence; Kearton, Annika; Shields, Katherine F.; Bartram, Jamie K.; Hadwen, Wade L.

    2017-11-01

    Global water research and monitoring typically focus on the household's "main source of drinking-water." Use of multiple water sources to meet daily household needs has been noted in many developing countries but rarely quantified or reported in detail. We gathered self-reported data using a cross-sectional survey of 405 households in eight communities of the Republic of the Marshall Islands (RMI) and five Solomon Islands (SI) communities. Over 90% of households used multiple sources, with differences in sources and uses between wet and dry seasons. Most RMI households had large rainwater tanks and rationed stored rainwater for drinking throughout the dry season, whereas most SI households collected rainwater in small pots, precluding storage across seasons. Use of a source for cooking was strongly positively correlated with use for drinking, whereas use for cooking was negatively correlated or uncorrelated with nonconsumptive uses (e.g., bathing). Dry season water uses implied greater risk of water-borne disease, with fewer (frequently zero) handwashing sources reported and more unimproved sources consumed. Use of multiple sources is fundamental to household water management and feasible to monitor using electronic survey tools. We contend that recognizing multiple water sources can greatly improve understanding of household-level and community-level climate change resilience, that use of multiple sources confounds health impact studies of water interventions, and that incorporating multiple sources into water supply interventions can yield heretofore-unrealized benefits. We propose that failure to consider multiple sources undermines the design and effectiveness of global water monitoring, data interpretation, implementation, policy, and research.

  19. Screening of point mutations by multiple SSCP analysis in the dystrophin gene

    Lasa, A.; Baiget, M.; Gallano, P. [Hospital Sant Pau, Barcelona (Spain)

    1994-09-01

    Duchenne muscular dystrophy (DMD) is a lethal, X-linked neuromuscular disorder. The population frequency of DMD is approximately one in 3500 boys, of which one third are thought to be new mutants. The DMD gene is the largest known to date, spanning over 2.3 Mb in band Xp21.2; 79 exons are transcribed into a 14 kb mRNA coding for a protein of 427 kDa which has been named dystrophin. It has been shown that about 65% of affected boys have a gene deletion with a wide variation in localization and size. The remaining affected individuals, who have no detectable deletions or duplications, probably carry more subtle mutations that are difficult to detect. These mutations occur in several different exons and seem to be unique to single patients. Their identification represents a formidable goal because of the large size and complexity of the dystrophin gene. SSCP is a very efficient method for the detection of point mutations if the parameters that affect the separation of the strands are optimized for a particular DNA fragment. Multiple SSCP allows the simultaneous study of several exons and implies the use of different conditions, because no single set of conditions will be optimal for all fragments. Seventy-eight DMD patients with no deletion or duplication in the dystrophin gene were selected for the multiple SSCP analysis. Genomic DNA from these patients was amplified using the primers described for the diagnostic procedure (muscle promoter and exons 3, 8, 12, 16, 17, 19, 32, 45, 48 and 51). We have observed different mobility shifts in bands corresponding to exons 8, 12, 43 and 51. In exons 17 and 45, altered electrophoretic patterns were found in different samples, identifying polymorphisms already described.

  20. Association of a novel point mutation in MSH2 gene with familial multiple primary cancers

    Hai Hu

    2017-10-01

    Background: Multiple primary cancers (MPC) have been identified as two or more cancers without any subordinate relationship that occur either simultaneously or metachronously in the same or different organs of an individual. Lynch syndrome is an autosomal dominant genetic disorder that increases the risk of many types of cancer. Lynch syndrome patients who suffer more than two cancers can also be considered as MPC; patients of this kind provide unique resources to learn how a genetic mutation causes MPC in different tissues. Methods: We performed whole genome sequencing on blood cells and two tumor samples of a Lynch syndrome patient who was diagnosed with five primary cancers. The mutational landscape of the tumors, including somatic point mutations and copy number alterations, was characterized. We also compared Lynch syndrome with sporadic cancers and proposed a model to illustrate the mutational process by which Lynch syndrome progresses to MPC. Results: We revealed a novel pathogenic mutation in the MSH2 gene (G504 splicing) that associates with Lynch syndrome. Systematic comparison of the mutation landscape revealed that the multiple cancers in the proband were evolutionarily independent. Integrative analysis showed that truncating mutations of DNA mismatch repair (MMR) genes were significantly enriched in the patient. A mutation progression model that includes germline mutations of MMR genes, double hits to the MMR system, mutations in tissue-specific driver genes, and rapid accumulation of additional passenger mutations is proposed to illustrate how MPC occurs in Lynch syndrome patients. Conclusion: Our findings demonstrate that both germline and somatic alterations are driving forces of carcinogenesis, which may resolve the carcinogenic theory of Lynch syndrome.

  1. Multiple ECG Fiducial Points-Based Random Binary Sequence Generation for Securing Wireless Body Area Networks.

    Zheng, Guanglou; Fang, Gengfa; Shankaran, Rajan; Orgun, Mehmet A; Zhou, Jie; Qiao, Li; Saleem, Kashif

    2017-05-01

    Generating random binary sequences (BSes) is a fundamental requirement in cryptography. A BS is a sequence of N bits, and each bit has a value of 0 or 1. For securing sensors within wireless body area networks (WBANs), electrocardiogram (ECG)-based BS generation methods have been widely investigated, in which interpulse intervals (IPIs) from each heartbeat cycle are processed to produce BSes. Using these IPI-based methods to generate a 128-bit BS in real time normally takes around half a minute. In order to improve the time efficiency of such methods, this paper presents an ECG multiple fiducial-points based binary sequence generation (MFBSG) algorithm. The technique of discrete wavelet transforms is employed to detect the arrival times of these fiducial points, such as the P, Q, R, S, and T peaks. Time intervals between them, including RR, RQ, RS, RP, and RT intervals, are then calculated based on these arrival times and used as ECG features to generate random BSes with low latency. According to our analysis of real ECG data, these ECG feature values exhibit the property of randomness and, thus, can be utilized to generate random BSes. Compared with schemes that rely solely on IPIs to generate BSes, this MFBSG algorithm uses five feature values from one heartbeat cycle and can be up to five times faster than the solely IPI-based methods, thus achieving the design goal of low latency. According to our analysis, the complexity of the algorithm is comparable to that of fast Fourier transforms. These randomly generated ECG BSes can be used as security keys for encryption or authentication in a WBAN system.
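
    A toy sketch of the general idea, not the published MFBSG algorithm: given detected fiducial times for each beat, compute several inter-point intervals, quantize them, and keep the low-order bits, which are the most variable, to build the binary sequence. The wavelet-based peak detection is assumed to have been done already, and all values below are synthetic.

```python
import numpy as np

def intervals_to_bits(intervals_ms, bits_per_value=2):
    """Quantize intervals to integer milliseconds and keep the low-order bits."""
    out = []
    for v in np.asarray(intervals_ms, dtype=int):
        for b in range(bits_per_value):
            out.append((v >> b) & 1)
    return out

def binary_sequence(fiducials, n_bits=128):
    """fiducials: dict of arrays of arrival times (ms) per beat for the
    P, Q, R, S, T peaks, all of equal length."""
    feats = np.column_stack([
        np.diff(fiducials["R"]),                  # RR
        fiducials["R"][1:] - fiducials["Q"][1:],  # RQ
        fiducials["S"][1:] - fiducials["R"][1:],  # RS
        fiducials["R"][1:] - fiducials["P"][1:],  # RP
        fiducials["T"][1:] - fiducials["R"][1:],  # RT
    ])
    bits = intervals_to_bits(feats.ravel())
    return bits[:n_bits]

# Synthetic fiducial times for demonstration only
rng = np.random.default_rng(3)
r = np.cumsum(rng.normal(800, 40, 30))            # ~75 bpm with variability
fid = {"R": r, "P": r - 160 + rng.normal(0, 5, 30),
       "Q": r - 40 + rng.normal(0, 3, 30),
       "S": r + 40 + rng.normal(0, 3, 30),
       "T": r + 300 + rng.normal(0, 8, 30)}
print(binary_sequence(fid, n_bits=32))
```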

  2. Diffusion of dust particles from a point-source above ground level

    Hassan, M.H.A.; Eltayeb, I.A.

    1998-10-01

    A pollutant of small particles is emitted by a point source at a height h above ground level in an atmosphere in which a uni-directional wind of speed U is prevailing. The pollutant is subjected to diffusion in all directions in the presence of advection and settling due to gravity. The equation governing the concentration of the pollutant is studied for the case in which the wind speed and the different components of the diffusion tensor are proportional to the distance above ground level and the source has uniform strength. Adopting a Cartesian system of coordinates in which the x-axis lies along the direction of the wind velocity, the z-axis is vertically upwards and the y-axis completes the right-hand triad, the solution for the concentration c(x,y,z) is obtained in closed form. The relative importance of the components of diffusion along the three axes is discussed. It is found that for any plane y = constant (= A), c(x,y,z) is concentrated along a curve of "extensive pollution". In the plane A = 0, the concentration decreases along the line of extensive pollution as we move away from the source. However, for planes A ≠ 0, the line of extensive pollution possesses a point of accumulation, which lies at a nonzero value of x. As we move away from the plane A = 0, the point of accumulation moves laterally away from the plane x = 0 and towards the plane z = 0. The presence of the point of accumulation is entirely due to the presence of lateral diffusion. (author)
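
    For reference, a plausible form of the governing equation described above, reconstructed from the wording of the abstract; the coefficient conventions may differ from those of the paper:

```latex
% Steady advection-diffusion of concentration c(x,y,z) with settling speed W,
% wind U(z)=u_0 z, diffusivities K_y(z)=a z, K_z(z)=b z, and a point source of
% strength Q at height h above ground:
\[
  u_0 z\,\frac{\partial c}{\partial x}
  = \frac{\partial}{\partial y}\!\left(a z\,\frac{\partial c}{\partial y}\right)
  + \frac{\partial}{\partial z}\!\left(b z\,\frac{\partial c}{\partial z} + W c\right)
  + Q\,\delta(x)\,\delta(y)\,\delta(z-h).
\]
```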

  3. A robust poverty profile for Brazil using multiple data sources

    Ferreira Francisco H. G.

    2003-01-01

    This paper presents a poverty profile for Brazil, based on three different sources of household data for 1996. We use PPV consumption data to estimate poverty and indigence lines. "Contagem" data is used to allow for an unprecedented refinement of the country's poverty map. Poverty measures and shares are also presented for a wide range of population subgroups, based on the PNAD 1996, with new adjustments for imputed rents and spatial differences in cost of living. Robustness of the profile is verified with respect to different poverty lines, spatial price deflators, and equivalence scales. Overall poverty incidence ranges from 23% with respect to an indigence line to 45% with respect to a more generous poverty line. More importantly, however, poverty is found to vary significantly across regions and city sizes, with rural areas, small and medium towns and the metropolitan peripheries of the North and Northeast regions being poorest.

  4. Exploiting semantic linkages among multiple sources for semantic information retrieval

    Li, JianQiang; Yang, Ji-Jiang; Liu, Chunchen; Zhao, Yu; Liu, Bo; Shi, Yuliang

    2014-07-01

    The vision of the Semantic Web is to build a global Web of machine-readable data to be consumed by intelligent applications. As the first step towards making this vision come true, the initiative of linked open data has fostered many novel applications aimed at improving data accessibility on the public Web. By comparison, the enterprise environment is so different from the public Web that most potentially usable business information originates in unstructured form (typically free text), which poses a challenge for the adoption of semantic technologies in the enterprise environment. Considering that the business information in a company is highly specific and centred around a set of commonly used concepts, this paper describes a pilot study to migrate the concept of linked data into the development of a domain-specific application, i.e. a vehicle repair support system. The set of commonly used concepts, including car part names and the phenomenon terms used in car repair, is employed to build linkages between data and documents distributed among different sources, leading to the fusion of documents and data across source boundaries. We then describe approaches to semantic information retrieval that consume these linkages to create value for companies. Experiments on two real-world data sets show that the proposed approaches outperform the best baseline by 6.3-10.8% and 6.4-11.1% in terms of top-5 and top-10 precision, respectively. We believe that our pilot study can serve as an important reference for the development of similar semantic applications in an enterprise environment.

  5. Search Strategy of Detector Position For Neutron Source Multiplication Method by Using Detected-Neutron Multiplication Factor

    Endo, Tomohiro

    2011-01-01

    In this paper, an alternative definition of a neutron multiplication factor, the detected-neutron multiplication factor k_det, is introduced for the neutron source multiplication method (NSM). Using k_det, a strategy for finding an appropriate detector position for NSM is also proposed. The NSM is one of the practical subcritical measurement techniques: it does not require any special equipment other than a stationary external neutron source and an ordinary neutron detector. Additionally, the NSM is based on steady-state analysis, so this technique is very suitable for quasi real-time measurement. It is noted that correction factors play an important role in accurately estimating subcriticality from the measured neutron count rates. The present paper aims to clarify how to correct the subcriticality measured by the NSM, the physical meaning of the correction factors, and how to reduce the impact of the correction factors by placing a neutron detector at an appropriate position.

  6. Optimizing the diagnostic power with gastric emptying scintigraphy at multiple time points

    Gajewski Byron J

    2011-05-01

    Background: Gastric Emptying Scintigraphy (GES) at intervals over 4 hours after a standardized radio-labeled meal is commonly regarded as the gold standard for diagnosing gastroparesis. The objectives of this study were: (1) to investigate the best time point and the best combination of multiple time points for diagnosing gastroparesis with repeated GES measures, and (2) to contrast and cross-validate Fisher's Linear Discriminant Analysis (LDA), a rank-based Distribution Free (DF) approach, and the Classification And Regression Tree (CART) model. Methods: A total of 320 patients with GES measures at 1, 2, 3, and 4 hours (h) after a standard meal using a standardized method were retrospectively collected. The area under the Receiver Operating Characteristic (ROC) curve and the rate of false classification through jackknife cross-validation were used for model comparison. Results: Due to strong correlation and an abnormality in the data distribution, no substantial improvement in diagnostic power was found with the best linear combination by the LDA approach, even with data transformation. With the DF method, the linear combination of 4-h and 3-h measures increased the Area Under the Curve (AUC) and decreased the number of false classifications (0.87; 15.0%) over individual time points (0.83, 0.82; 15.6%, 25.3% for 4-h and 3-h, respectively) at a higher sensitivity level (sensitivity = 0.9). The CART model using the 4 hourly GES measurements along with the patient's age was the most accurate diagnostic tool (AUC = 0.88, false classification = 13.8%). Patients with a 4-h gastric retention value >10% were 5 times more likely to have gastroparesis (179/207 = 86.5%) than those with ≤10% (18/113 = 15.9%). Conclusions: With a mixed group of patients either referred with suspected gastroparesis or investigated for other reasons, the CART model is more robust than the LDA and DF approaches, capable of accommodating covariate effects and can be generalized for cross-institutional applications, but
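
    A schematic of the kind of decision-tree screening described, using scikit-learn on synthetic retention data; the features, labels and resulting performance are illustrative stand-ins, not the study's fitted model (which used the 1-4 h measurements plus age):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
n = 320
# Synthetic % gastric retention at 1, 2, 3, 4 h plus age
X = np.column_stack([
    rng.uniform(40, 95, n),   # 1 h
    rng.uniform(20, 80, n),   # 2 h
    rng.uniform(5, 60, n),    # 3 h
    rng.uniform(0, 40, n),    # 4 h
    rng.uniform(18, 85, n),   # age
])
# Toy label loosely mimicking "4-h retention > 10% suggests gastroparesis"
y = (X[:, 3] + rng.normal(0, 4, n) > 10).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
auc = cross_val_score(tree, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```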

  7. Quantifying natural delta variability using a multiple-point geostatistics prior uncertainty model

    Scheidt, Céline; Fernandes, Anjali M.; Paola, Chris; Caers, Jef

    2016-10-01

    We address the question of quantifying the uncertainty associated with autogenic pattern variability in a channelized transport system by means of a modern geostatistical method. This question has considerable relevance for practical subsurface applications as well, particularly those related to uncertainty quantification relying on Bayesian approaches. Specifically, we show how the autogenic variability in a laboratory experiment can be represented and reproduced by a multiple-point geostatistical prior uncertainty model. The latter geostatistical method requires selection of a limited set of training images from which a possibly infinite set of geostatistical model realizations, mimicking the training image patterns, can be generated. To that end, we investigate two methods to determine how many and which training images should be provided to reproduce natural autogenic variability. The first method relies on distance-based clustering of overhead snapshots of the experiment; the second relies on a rate-of-change quantification by means of a computer vision algorithm termed the demon algorithm. We show quantitatively that with either training image selection method, we can statistically reproduce the natural variability of the delta formed in the experiment. In addition, we study the nature of the patterns represented in the set of training images as a representation of the "eigenpatterns" of the natural system. The eigenpatterns in the training image sets display patterns consistent with previous physical interpretations of the fundamental modes of this type of delta system: a highly channelized, incisional mode; a poorly channelized, depositional mode; and an intermediate mode between the two.
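
    The distance-based selection of training images can be sketched as: flatten each overhead snapshot, cluster the snapshots, and take the one nearest each cluster centre as a training image. The version below uses plain k-means on raw pixel vectors as a stand-in for the paper's distance definition; all sizes are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
# Stand-in for binarized overhead snapshots of the experiment (time, ny, nx)
snapshots = (rng.random((200, 64, 64)) > 0.7).astype(float)

X = snapshots.reshape(len(snapshots), -1)        # one row per snapshot
k = 5                                            # number of training images
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

training_images = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    d = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
    training_images.append(snapshots[members[d.argmin()]])  # medoid-like pick
print("selected", len(training_images), "training images")
```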

  8. Accelerating simulation for the multiple-point statistics algorithm using vector quantization

    Zuo, Chen; Pan, Zhibin; Liang, Hao

    2018-03-01

    Multiple-point statistics (MPS) is a prominent algorithm for simulating categorical variables based on a sequential simulation procedure. Assuming training images (TIs) as prior conceptual models, MPS extracts patterns from the TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerated MPS simulation using vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables applicable to vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproduction, and spatial uncertainty. Further demonstrations consist of a 2D four-facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that the proposed method is also capable of handling multi-facies, nonstationary, and 3D simulations based on 2D TIs.
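
    The core idea of compressing an MPS pattern database with vector quantization can be sketched as follows: scan a training image with a template, collect the patterns as vectors, and replace the full pattern database by a small codebook against which data events are matched. Plain k-means stands in here for the paper's tree-structured VQ, and all sizes are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
ti = (rng.random((100, 100)) > 0.7).astype(np.int8)   # toy categorical TI

def extract_patterns(img, half=2):
    """Collect (2*half+1)^2 patterns centred on every interior pixel."""
    pats = []
    for i in range(half, img.shape[0] - half):
        for j in range(half, img.shape[1] - half):
            pats.append(img[i - half:i + half + 1,
                            j - half:j + half + 1].ravel())
    return np.array(pats, dtype=float)

patterns = extract_patterns(ti)                       # full pattern database
codebook = KMeans(n_clusters=64, n_init=4,
                  random_state=0).fit(patterns).cluster_centers_

def nearest_codeword(data_event):
    """During simulation, look up the codeword closest to a data event."""
    d = np.linalg.norm(codebook - data_event, axis=1)
    return codebook[d.argmin()]

query = patterns[1234]
print(nearest_codeword(query).round().reshape(5, 5))
```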

  9. Zero-Point Energy Constraint for Unimolecular Dissociation Reactions. Giving Trajectories Multiple Chances To Dissociate Correctly.

    Paul, Amit K; Hase, William L

    2016-01-28

    A zero-point energy (ZPE) constraint model is proposed for classical trajectory simulations of unimolecular decomposition and applied to CH4* → H + CH3 decomposition. With this model trajectories are not allowed to dissociate unless they have ZPE in the CH3 product. If not, they are returned to the CH4* region of phase space and, if necessary, given additional opportunities to dissociate with ZPE. The lifetime for dissociation of an individual trajectory is the time it takes to dissociate with ZPE in CH3, including multiple possible returns to CH4*. With this ZPE constraint the dissociation of CH4* is exponential in time as expected for intrinsic RRKM dynamics and the resulting rate constant is in good agreement with the harmonic quantum value of RRKM theory. In contrast, a model that discards trajectories without ZPE in the reaction products gives a CH4* → H + CH3 rate constant that agrees with the classical and not quantum RRKM value. The rate constant for the purely classical simulation indicates that anharmonicity may be important and the rate constant from the ZPE constrained classical trajectory simulation may not represent the complete anharmonicity of the RRKM quantum dynamics. The ZPE constraint model proposed here is compared with previous models for restricting ZPE flow in intramolecular dynamics, and connecting product and reactant/product quantum energy levels in chemical dynamics simulations.
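
    The bookkeeping of the constraint can be sketched independently of any real dynamics: whenever a dissociation is flagged, accept it only if the product fragment carries at least its zero-point energy; otherwise return the trajectory to the complex and keep accumulating its lifetime. The propagator below is a random stub, not a trajectory integrator, so only the retry logic is meaningful.

```python
import random

CH3_ZPE = 0.80   # product zero-point energy in arbitrary units (assumed)

def propagate_until_dissociation(state):
    """Stub standing in for classical trajectory integration.  Returns the
    elapsed time, the CH3 vibrational energy at the dissociation attempt,
    and the state to continue from if the attempt is rejected."""
    dt = random.expovariate(1.0)                 # toy waiting time
    evib = random.uniform(0.0, 2.0)              # toy product vibrational energy
    return dt, evib, state

def zpe_constrained_lifetime(state, max_attempts=1000):
    t = 0.0
    for _ in range(max_attempts):
        dt, evib_ch3, state = propagate_until_dissociation(state)
        t += dt
        if evib_ch3 >= CH3_ZPE:                  # accept only with ZPE in CH3
            return t
        # otherwise the trajectory is returned to the CH4* region and retried
    return t                                     # never dissociated "correctly"

random.seed(7)
lifetimes = [zpe_constrained_lifetime(None) for _ in range(10000)]
print("mean constrained lifetime:", sum(lifetimes) / len(lifetimes))
```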

  10. LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics

    Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel

    2017-10-01

    Reservoir modeling is a very important task that permits the representation of a geological region of interest, so as to generate a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of a training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines the application of a technique called Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produces good-quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
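
    The LSH ingredient can be illustrated with a bit-sampling hash over binary template patterns: patterns that agree on a few randomly chosen template positions land in the same bucket, so only those candidates need an exact Hamming comparison. This sketch ignores the RLE compression and the rest of the MPS machinery; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
n_patterns, pattern_len = 5000, 49        # e.g. 7x7 binary template patterns
patterns = (rng.random((n_patterns, pattern_len)) > 0.7).astype(np.uint8)

n_bits = 12
sampled_positions = rng.choice(pattern_len, size=n_bits, replace=False)

def lsh_key(p):
    """Bit-sampling hash: the pattern restricted to a few fixed positions."""
    return tuple(p[sampled_positions])

buckets = {}
for idx, p in enumerate(patterns):
    buckets.setdefault(lsh_key(p), []).append(idx)

def query(data_event):
    """Return the stored pattern with the smallest Hamming distance among the
    candidates sharing the query's hash bucket (falls back to a full scan)."""
    cand = buckets.get(lsh_key(data_event), range(n_patterns))
    cand = np.fromiter(cand, dtype=int)
    ham = (patterns[cand] != data_event).sum(axis=1)
    return cand[ham.argmin()]

print("best match index:", query(patterns[42]))
```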

  11. A location-based multiple point statistics method: modelling the reservoir with non-stationary characteristics

    Yin Yanshu

    2017-12-01

    In this paper, a location-based multiple point statistics method is developed to model a non-stationary reservoir. The proposed method characterizes the relationship between the sedimentary pattern and the deposit location using the relative central position distance function, which alleviates the requirement that the training image and the simulated grids have the same dimension. The weights in every direction of the distance function can be changed to characterize the reservoir heterogeneity in various directions. The local integral replacements of data events, structured random path, distance tolerance and multi-grid strategy are applied to reproduce the sedimentary patterns and obtain a more realistic result. This method is compared with the traditional Snesim method using a synthesized 3-D training image of Poyang Lake and a reservoir model of Shengli Oilfield in China. The results indicate that the new method can reproduce the non-stationary characteristics better than the traditional method and is more suitable for simulation of delta-front deposits. These results show that the new method is a powerful tool for modelling a reservoir with non-stationary characteristics.

  12. Forces, surface finish and friction characteristics in surface engineered single- and multiple-point cutting edges

    Sarwar, M.; Gillibrand, D.; Bradbury, S.R.

    1991-01-01

    Advanced surface engineering technologies (physical and chemical vapour deposition) have been successfully applied to high speed steel and carbide cutting tools, and the potential benefits in terms of both performance and longer tool life, are now well established. Although major achievements have been reported by many manufacturers and users, there are a number of applications where surface engineering has been unsuccessful. Considerable attention has been given to the film characteristics and the variables associated with its properties; however, very little attention has been directed towards the benefits to the tool user. In order to apply surface engineering technology effectively to cutting tools, the coater needs to have accurate information relating to cutting conditions, i.e. cutting forces, stress and temperature etc. The present paper describes results obtained with single- and multiple-point cutting tools with examples of failures, which should help the surface coater to appreciate the significance of the cutting conditions, and in particular the magnitude of the forces and stresses present during cutting processes. These results will assist the development of a systems approach to cutting tool technology and surface engineering with a view to developing an improved product. (orig.)

  13. Journey of a Package: Category 1 Source (Co-60) Shipment with Several Border Crossings, Multiple Modes

    Gray, P. A.

    2016-01-01

    Radioactive materials (RAM) are used extensively in a vast array of industries and in an even wider breadth of applications on a truly global basis each and every day. Over the past 50 years, these applications and the quantity (activity) of RAM shipped have grown significantly, with the next 50 years expected to show a continuing trend. The movement of these goods occurs in all regions of the world and must therefore be conducted in a manner which will not adversely impact people or the environment. Industry and regulators have jointly met this challenge, so much so that RAM shipments are amongst the safest of any product. How has this level of performance been achieved? What is involved in shipping RAM from one corner of the world to another, often via a number of in-transit locations and often utilizing multiple modes of transport in a single shipment? This paper reviews one such journey, of Category 1 Cobalt-60 sources, as they move from point of manufacture through to point of use, including the detailed multiple-approval process, the stringent regulatory requirements in place, the extensive communications required throughout, and the practical aspects needed simply to offer such a product for sale and transport. Upon completion, the rationale for such an exemplary safety and security record will be readily apparent. (author)

  14. Using the Chandra Source-Finding Algorithm to Automatically Identify Solar X-ray Bright Points

    Adams, Mitzi L.; Tennant, A.; Cirtain, J. M.

    2009-01-01

    This poster details a bright-point identification technique based on an algorithm used to find sources in Chandra X-ray data. The algorithm, part of a program called LEXTRCT, searches for regions of a given size that are above a minimum signal-to-noise ratio. The algorithm allows selected pixels to be excluded from the source-finding, thus allowing exclusion of saturated pixels (from flares and/or active regions). For Chandra data the noise is determined by photon-counting statistics, whereas solar telescopes typically integrate a flux; thus the calculated signal-to-noise ratio is not strictly correct, but we find that we can scale the number to get reasonable results. For example, Nakakubo and Hara (1998) find 297 bright points in a September 11, 1996 Yohkoh image; with judicious selection of the signal-to-noise ratio, our algorithm finds 300 sources. To further assess the efficacy of the algorithm, we analyze a SOHO/EIT image (195 Angstroms) and compare results with those published in the literature (McIntosh and Gurman, 2005). Finally, we analyze three sets of data from Hinode, representing different parts of the decline to minimum of the solar cycle.
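
    A minimal version of this kind of thresholded source search can be written in a few lines: sum counts in a sliding box, estimate photon-counting noise as the square root of the counts, and label connected regions above a chosen signal-to-noise cut, optionally masking saturated pixels. This is an illustrative stand-in for the LEXTRCT algorithm, not its actual code; the box size and threshold are placeholders.

```python
import numpy as np
from scipy import ndimage

def find_bright_points(image, box=5, snr_min=5.0, exclude_mask=None):
    """Label candidate bright points: sum counts in a box x box window, estimate the noise as
    sqrt(counts) (photon-counting statistics), and keep connected regions above snr_min.
    Pixels flagged in exclude_mask (e.g. saturated flare/active-region pixels) are zeroed first."""
    img = np.asarray(image, dtype=float)
    if exclude_mask is not None:
        img = np.where(exclude_mask, 0.0, img)
    local_sum = ndimage.uniform_filter(img, size=box) * box * box   # windowed count sum
    snr = local_sum / np.sqrt(np.maximum(local_sum, 1.0))
    labels, n_sources = ndimage.label(snr > snr_min)
    return labels, n_sources
```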

  15. Mercury exposure in terrestrial birds far downstream of an historical point source

    Jackson, Allyson K.; Evers, David C.; Folsom, Sarah B.; Condon, Anne M.; Diener, John; Goodrick, Lizzie F.; McGann, Andrew J.; Schmerfeld, John; Cristol, Daniel A.

    2011-01-01

    Mercury (Hg) is a persistent environmental contaminant found in many freshwater and marine ecosystems. Historical Hg contamination in rivers can impact the surrounding terrestrial ecosystem, but little is known about how far downstream this contamination persists. In 2009, we sampled terrestrial forest songbirds at five floodplain sites up to 137 km downstream of an historical source of Hg along the South and South Fork Shenandoah Rivers (Virginia, USA). We found that blood total Hg concentrations remained elevated over the entire sampling area and there was little evidence of decline with distance. While it is well known that Hg is a pervasive and long-lasting aquatic contaminant, it has only recently been recognized that it also biomagnifies effectively in floodplain forest food webs. This study extends the area of concern for terrestrial habitats near contaminated rivers to more than 100 km downstream from a waterborne Hg point source. - Highlights: → We report blood mercury levels for terrestrial songbirds downstream of contamination. → Blood mercury levels remain elevated above reference for at least 137 km downstream. → Trends vary based on foraging guild and migration strategy. → Mercury affects terrestrial biota farther downstream than previously documented. - Blood mercury levels of forest songbirds remain elevated above reference levels for at least 137 km downstream of the historical point source.

  17. Strategies for lidar characterization of particulates from point and area sources

    Wojcik, Michael D.; Moore, Kori D.; Martin, Randal S.; Hatfield, Jerry

    2010-10-01

    Use of ground-based remote sensing technologies such as scanning lidar (light detection and ranging) systems has gained traction for characterizing ambient aerosols due to key advantages such as a wide area of regard (10 km2), fast response time, and high spatial resolution. The authors, in conjunction with the USDA-ARS, have developed a three-wavelength scanning lidar system called Aglite that has been successfully deployed to characterize particle motion, concentration, and size distribution at both point and diffuse area sources in agricultural and industrial settings. A suite of mass-based and size-distribution point sensors is used to locally calibrate the lidar. Generating meaningful particle size distribution, mass concentration, and emission rate results from lidar data depends on strategic on-site deployment of these point sensors together with successful local meteorological measurements. Deployment strategies learned from five years of field use of this measurement system include characterizing the local meteorology and its predictability prior to deployment, placing point sensors to prevent contamination and overloading, positioning the lidar and beam plane to avoid hard-target interference, and making use of photographic and written observational data.

  18. A systematic analysis of the Braitenberg vehicle 2b for point-like stimulus sources

    Rañó, Iñaki

    2012-01-01

    Braitenberg vehicles have been used experimentally for decades in robotics with limited empirical understanding. This paper presents the first mathematical model of vehicle 2b, which displays so-called aggression behaviour, and analyses the possible trajectories for point-like smooth stimulus sources. This sensory-motor steering control mechanism is used to implement biologically grounded target approach, target-seeking or obstacle-avoidance behaviour. However, the analysis of the resulting model reveals that complex and unexpected trajectories can result even for point-like stimuli. We also prove how the implementation of the controller and the vehicle morphology interact to affect the behaviour of the vehicle. This work provides a better understanding of Braitenberg vehicle 2b, explains experimental results and paves the way for formally grounded applications in robotics as well as for a new way of understanding target seeking in biology. (paper)
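
    For readers unfamiliar with the vehicle, the sketch below integrates a generic planar Braitenberg 2b controller (crossed, excitatory sensor-to-wheel connections) driven by a smooth point-like stimulus field. The kinematics, sensor placement and 1/(1 + r^2) stimulus are illustrative assumptions rather than the model analysed in the paper.

```python
import numpy as np

def simulate_vehicle_2b(source_xy, start_pose, steps=5000, dt=0.01, gain=2.0, wheelbase=0.1):
    """Integrate a two-wheeled planar vehicle whose wheel speeds are driven by the readings of
    the contralateral light sensors (Braitenberg 2b, 'aggression'). The stimulus is a smooth
    point-like field ~ 1/(1 + r^2). Returns the (x, y) trajectory."""
    x, y, theta = start_pose
    half = wheelbase / 2.0
    trajectory = []
    for _ in range(steps):
        # Sensors sit to the left/right of the body centre, perpendicular to the heading.
        lx, ly = x + half * np.cos(theta + np.pi / 2), y + half * np.sin(theta + np.pi / 2)
        rx, ry = x + half * np.cos(theta - np.pi / 2), y + half * np.sin(theta - np.pi / 2)
        s_left = 1.0 / (1.0 + (lx - source_xy[0]) ** 2 + (ly - source_xy[1]) ** 2)
        s_right = 1.0 / (1.0 + (rx - source_xy[0]) ** 2 + (ry - source_xy[1]) ** 2)
        v_left, v_right = gain * s_right, gain * s_left       # crossed, excitatory connections
        v = 0.5 * (v_left + v_right)
        omega = (v_right - v_left) / wheelbase
        x += v * np.cos(theta) * dt
        y += v * np.sin(theta) * dt
        theta += omega * dt
        trajectory.append((x, y))
    return np.array(trajectory)

path = simulate_vehicle_2b(source_xy=(2.0, 1.0), start_pose=(0.0, 0.0, 0.0))
```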

  19. Assessment of Groundwater Susceptibility to Non-Point Source Contaminants Using Three-Dimensional Transient Indexes.

    Zhang, Yong; Weissmann, Gary S; Fogg, Graham E; Lu, Bingqing; Sun, HongGuang; Zheng, Chunmiao

    2018-06-05

    Groundwater susceptibility to non-point source contamination is typically quantified by stable indexes, while groundwater quality evolution (or deterioration globally) can be a long-term process that may last for decades and exhibit strong temporal variations. This study proposes a three-dimensional (3-D), transient index map built upon physical models to characterize the complete temporal evolution of deep aquifer susceptibility. For illustration purposes, the previously developed backward travel time probability density (BTTPD) approach is extended to assess the 3-D deep groundwater susceptibility to non-point source contamination within a sequence stratigraphic framework observed in the Kings River fluvial fan (KRFF) aquifer. The BTTPD, which represents the complete age distribution underlying a single groundwater sample in a regional-scale aquifer, is used as a quantitative, transient measure of aquifer susceptibility. The resultant 3-D imaging of susceptibility using the simulated BTTPDs in KRFF reveals the strong influence of regional-scale heterogeneity on susceptibility. The regional-scale incised-valley fill deposits increase the susceptibility of aquifers by enhancing rapid downward solute movement and displaying relatively narrow and young age distributions. In contrast, the regional-scale sequence-boundary paleosols within the open-fan deposits "protect" deep aquifers by slowing downward solute movement and displaying a relatively broad and old age distribution. Further comparison of the simulated susceptibility index maps to known contaminant distributions shows that these maps are generally consistent with the high concentration and quick evolution of 1,2-dibromo-3-chloropropane (DBCP) in groundwater around the incised-valley fill since the 1970s. This application demonstrates that the BTTPDs can be used as quantitative and transient measures of deep aquifer susceptibility to non-point source contamination.
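
    One simple way to collapse an age (travel-time) distribution into a scalar, transient susceptibility measure is to integrate the probability that water arriving at a well is younger than a chosen threshold, as sketched below. This reduction is illustrative only; the threshold, the synthetic distribution and the normalisation are assumptions, not the index used in the study.

```python
import numpy as np

def susceptibility_index(ages, density, age_threshold=50.0):
    """Transient susceptibility measure: probability that water reaching the well is younger
    than age_threshold years, computed from a (backward) travel-time probability density
    sampled at the given ages (assumed sorted, in years)."""
    ages = np.asarray(ages, dtype=float)
    density = np.asarray(density, dtype=float)
    density = density / np.trapz(density, ages)          # normalise the density to unit area
    young = ages <= age_threshold
    return float(np.trapz(density[young], ages[young]))

# Example with a synthetic exponential age distribution (mean age 80 years).
ages = np.linspace(0.0, 500.0, 1001)
index = susceptibility_index(ages, np.exp(-ages / 80.0), age_threshold=50.0)
```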

  20. The Atacama Cosmology Telescope: Development and preliminary results of point source observations

    Fisher, Ryan P.

    2009-06-01

    The Atacama Cosmology Telescope (ACT) is a six meter diameter telescope designed to measure the millimeter sky with arcminute angular resolution. The instrument is currently conducting its third season of observations from Cerro Toco in the Chilean Andes. The primary science goal of the experiment is to expand our understanding of cosmology by mapping the temperature fluctuations of the Cosmic Microwave Background (CMB) at angular scales corresponding to multipoles up to ℓ ~ 10000. The primary receiver for current ACT observations is the Millimeter Bolometer Array Camera (MBAC). The instrument is specially designed to observe simultaneously at 148 GHz, 218 GHz and 277 GHz. To accomplish this, the camera has three separate detector arrays, each containing approximately 1000 detectors. After discussing the ACT experiment in detail, a discussion of the development and testing of the cold readout electronics for the MBAC is presented. Currently, the ACT collaboration is in the process of generating maps of the microwave sky using our first and second season observations. The analysis used to generate these maps requires careful data calibration to produce maps of the arcminute scale CMB temperature fluctuations. Tests and applications of several elements of the ACT calibrations are presented in the context of the second season observations. Scientific exploration has already begun on preliminary maps made using these calibrations. The final portion of this thesis is dedicated to discussing the point sources observed by the ACT. A discussion of the techniques used for point source detection and photometry is followed by a presentation of our current measurements of point source spectral indices.
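
    Point-source spectral indices of the kind mentioned above are commonly estimated from photometry at two bands under a power-law assumption S(ν) ∝ ν^α. The snippet below shows that two-point estimate; the band choice (148 and 218 GHz, two of the MBAC bands), sign convention and flux values are illustrative assumptions, not results from the thesis.

```python
import numpy as np

def spectral_index(flux_a, nu_a, flux_b, nu_b):
    """Two-point spectral index alpha, defined through S(nu) proportional to nu**alpha."""
    return float(np.log(flux_a / flux_b) / np.log(nu_a / nu_b))

# Synthetic fluxes (Jy) at 148 and 218 GHz; a spectrum rising with frequency gives alpha > 0.
alpha = spectral_index(flux_a=0.015, nu_a=148e9, flux_b=0.021, nu_b=218e9)
```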

  1. Point-Source Contributions to the Water Quality of an Urban Stream

    Little, S. F. B.; Young, M.; Lowry, C.

    2014-12-01

    Scajaquada Creek, which runs through the heart of the city of Buffalo, is a prime example of the ways in which human intervention and local geomorphology can impact water quality and urban hydrology. Beginning in the 1920s, the Creek has been partially channelized and connected to Buffalo's combined sewer system (CSS). At Forest Lawn Cemetery, where this study takes place, Scajaquada Creek emerges from a 3.5-mile tunnel built to route stream flow under the city. Collocated with the tunnel outlet is a discharge point for Buffalo's CSS, combined sewer outlet (CSO) #53. It is at this point that runoff and sanitary sewage discharge regularly during rain events. Initially, this study endeavored to create a spatial and temporal picture of this portion of the Creek, monitoring such parameters as conductivity, dissolved oxygen, pH, temperature, and turbidity, in addition to measuring Escherichia coli (E. coli) concentrations. As expected, these factors responded directly to seasonality, local geomorphology, and distance from the point source (CSO #53), displaying an overall linear response. However, the addition of nitrate and phosphate testing to the study revealed an entirely separate signal from that previously observed. Concentrations of these parameters did not respond to location in the same manner as E. coli. Instead of decreasing with distance from the CSO, a distinct periodicity was observed, correlating with a series of outflow pipes lining the stream banks. It is hypothesized that the nitrate and phosphate occurring in this stretch of Scajaquada Creek originate not from the CSO, but from fertilizers used to maintain the lawns within the subwatershed. These results provide evidence of the complexity of water quality issues in urban streams resulting from point- and nonpoint-source hydrologic inputs.

  2. Crowd-sourced BMS point matching and metadata maintenance with Babel

    Fürst, Jonathan; Chen, Kaifei; Katz, Randy H.

    2016-01-01

    Cyber-physical applications, deployed on top of Building Management Systems (BMS), promise energy saving and comfort improvement in non-residential buildings. Such applications are so far mainly deployed as research prototypes. The main roadblock to widespread adoption is the low quality of BMS...... systems. Such applications access sensors and actuators through BMS metadata in form of point labels. The naming of labels is however often inconsistent and incomplete. To tackle this problem, we introduce Babel, a crowd-sourced approach to the creation and maintenance of BMS metadata. In our system...

  3. UHE γ-rays from point sources based on GRAPES-I observations

    Gupta, S.K.; Sreekantan, B.V.; Srivatsan, R.; Tonwar, S.C.

    1993-01-01

    An experiment called GRAPES I (Gamma Ray Astronomy at PeV EnergieS) was set up in 1984 at Ooty in India, using 24 scintillation counters, to detect Extensive Air Showers (EAS) produced in the atmosphere by the primary cosmic radiation. The goal of the experiment has been to search for Ultra High Energy (UHE) γ-rays (E ≥ 10^14 eV) from point sources in the sky. Here we discuss the results on the X-ray binaries CYG X-3, HER X-1 and SCO X-1 obtained with the GRAPES I experiment, covering the period 1984-87.

  4. Point-source reconstruction with a sparse light-sensor array for optical TPC readout

    Rutter, G; Richards, M; Bennieston, A J; Ramachers, Y A

    2011-01-01

    A reconstruction technique for sparse-array optical signal readout is introduced and applied to the generic challenge of large-area readout of a large number of point light sources. This challenge finds a prominent example in future large-volume neutrino detector studies based on liquid argon. It is concluded that the sparse array option may be ruled out on the grounds of the required number of channels when compared to a benchmark derived from charge readout on wire planes. Smaller-scale detectors, however, could benefit from this technology.

  5. Magnox fuel inventories. Experiment and calculation using a point source model

    Nair, S.

    1978-08-01

    The results of calculations of Magnox fuel inventories using the point source code RICE and associated Magnox reactor data set have been compared with experimental measurements for the actinide isotopes 234U, 235U, 236U, 238U, 238Pu, 239Pu, 240Pu, 241Pu, 242Pu, 241Am, 243Am, 242Cm and 244Cm, and the fission product isotopes 142Nd, 143Nd, 144Nd, 145Nd, 146Nd, 150Nd, 95Zr, 134Cs, 137Cs, 144Ce and daughter 144Pr, produced in four samples of spent Magnox fuel spanning the burnup range 3000 to 9000 MWd/Te. The neutron emissions from a further two samples were also measured and compared with RICE predictions. The results of the comparison were such as to justify the use of the code RICE for providing source terms for environmental impact studies, for the isotopes considered in the present work. (author)

  6. The 1.4-2.7 micron spectrum of the point source at the galactic center

    Treffers, R. R.; Fink, U.; Larson, H. P.; Gautier, T. N., III

    1976-01-01

    The spectrum of the 2-micron point source at the galactic center is presented over the range from 1.4 to 2.7 microns. The two-level-transition CO band heads are seen near 2.3 microns, confirming that the radiation from this source is due to a cool supergiant star. The heliocentric radial velocity is found to be - 173 (+ or -90) km/s and is consistent with the star being in orbit about a dense galactic nucleus. No evidence is found for Brackett-gamma emission, and no interstellar absorption features are seen. Upper limits for the column densities of interstellar H2, CH4, CO, and NH3 are derived.

  7. Development of uniform hazard response spectra for rock sites considering line and point sources of earthquakes

    Ghosh, A.K.; Kushwaha, H.S.

    2001-12-01

    Traditionally, the seismic design basis ground motion has been specified by normalised response spectral shapes and the peak ground acceleration (PGA), with the mean recurrence interval (MRI) computed for the PGA only. It is shown that the MRI associated with such response spectra is not the same at all frequencies. The present work develops uniform hazard response spectra, i.e. spectra having the same MRI at all frequencies, for line and point sources of earthquakes by using a large number of strong-motion accelerograms recorded on rock sites. The sensitivity of the results to changes in various parameters has also been presented. This work is an extension of an earlier work for areal sources of earthquakes. These results will help to determine the seismic hazard at a given site and the associated uncertainties. (author)

  8. Correction of head movements in positron emission tomography using point source tracking system: a simulation study.

    Nazarparvar, Babak; Shamsaei, Mojtaba; Rajabi, Hossein

    2012-01-01

    The motion of the head during brain positron emission tomography (PET) acquisitions has been identified as a source of artifact in the reconstructed image. In this study, a method is described for developing an image-based motion correction technique that corrects the data post-acquisition without using an external optical motion-tracking system such as POLARIS. In this technique, GATE has been used to simulate a PET brain scan with point sources mounted around the head to accurately monitor the position of the head during the time frames. The head motion measured in each frame yields a transformation of the image frame matrix, resulting in a fully corrected data set. Using different kinds of phantoms and motions, the accuracy of the correction method is tested and its applicability to experimental studies is demonstrated as well.
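
    Once the point-source positions have been tracked in each time frame, the frame-wise head motion can be expressed as a rigid transformation estimated by a least-squares (Kabsch-type) fit, as sketched below. This is a generic formulation of the idea, not the GATE-based implementation described in the study.

```python
import numpy as np

def rigid_transform(ref_points, frame_points):
    """Least-squares rotation R and translation t mapping the reference marker positions onto
    their positions in a later time frame (Kabsch/Procrustes): frame ~ R @ ref + t."""
    P = np.asarray(ref_points, dtype=float)      # (n, 3) reference positions
    Q = np.asarray(frame_points, dtype=float)    # (n, 3) positions in the moved frame
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                    # cross-covariance of the centred point sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```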

  9. The acoustic field of a point source in a uniform boundary layer over an impedance plane

    Zorumski, W. E.; Willshire, W. L., Jr.

    1986-01-01

    The acoustic field of a point source in a boundary layer above an impedance plane is investigated analytically using Obukhov quasi-potential functions, extending the normal-mode theory of Chunchuzov (1984) to account for the effects of finite ground-plane impedance and source height. The solution is found to be asymptotic to the surface-wave term studied by Wenzel (1974) in the limit of vanishing wind speed, suggesting that normal-mode theory can be used to model the effects of an atmospheric boundary layer on infrasonic sound radiation. Model predictions are derived for noise-generation data obtained by Willshire (1985) at the Medicine Bow wind-turbine facility. Long-range downwind propagation is found to behave as a cylindrical wave, with attenuation proportional to the wind speed, the boundary-layer displacement thickness, the real part of the ground admittance, and the square of the frequency.

  10. Dynamic Control of Particle Deposition in Evaporating Droplets by an External Point Source of Vapor.

    Malinowski, Robert; Volpe, Giovanni; Parkin, Ivan P; Volpe, Giorgio

    2018-02-01

    The deposition of particles on a surface by an evaporating sessile droplet is important for phenomena as diverse as printing, thin-film deposition, and self-assembly. The shape of the final deposit depends on the flows within the droplet during evaporation. These flows are typically determined at the onset of the process by the intrinsic physical, chemical, and geometrical properties of the droplet and its environment. Here, we demonstrate deterministic emergence and real-time control of Marangoni flows within the evaporating droplet by an external point source of vapor. By varying the source location, we can modulate these flows in space and time to pattern colloids on surfaces in a controllable manner.

  11. Decreasing Computational Time for VBBinaryLensing by Point Source Approximation

    Tirrell, Bethany M.; Visgaitis, Tiffany A.; Bozza, Valerio

    2018-01-01

    The gravitational lens of a binary system produces a magnification map that is more intricate than that of a single-object lens. This map cannot be calculated analytically, and one must rely on computational methods to resolve it. There are generally two methods of computing the microlensed flux of a source. One is based on ray-shooting maps (Kayser, Refsdal, & Stabell 1986), while the other is based on an application of Green's theorem. This second method finds the area of an image by calculating a Riemann integral along the image contour. VBBinaryLensing is a C++ contour integration code developed by Valerio Bozza, which utilizes this method. The parameters for which the source object can be treated as a point source, in other words when the source is far enough from the caustic, were of interest in order to substantially decrease the computational time. The maximum and minimum values of the caustic curves produced were examined to determine the boundaries for which this simplification could be made. The code was then run for a number of different maps, with separation values and accuracies ranging from 10^-1 to 10^-3, to test the theoretical model and determine a safe buffer for which minimal error would be made by the approximation. The determined buffer was 1.5+5q, with q being the mass ratio. The theoretical model and the calculated points worked for all combinations of the separation values and accuracies except the map with accuracy and separation equal to 10^-3 for y1 max. An alternative approach has to be found in order to accommodate a wider range of parameters.

  12. Point, surface and volumetric heat sources in the thermal modelling of selective laser melting

    Yang, Yabin; Ayas, Can

    2017-10-01

    Selective laser melting (SLM) is a powder-based additive manufacturing technique suitable for producing high-precision metal parts. However, distortions and residual stresses arise within products during SLM because of the high temperature gradients created by the laser heating. Residual stresses limit the load resistance of the product and may even lead to fracture during the build process. It is therefore of paramount importance to predict the level of part distortion and residual stress as a function of the SLM process parameters, which requires reliable thermal modelling of the SLM process. Consequently, a key question arises: how should the laser source be described appropriately? Reasonable simplification of the laser representation is crucial for the computational efficiency of the thermal model of the SLM process. In this paper, first a semi-analytical thermal modelling approach is described. Subsequently, the laser heating is modelled using point, surface and volumetric sources, in order to compare the influence of different laser source geometries on the thermal history predicted by the model. The present work provides guidelines on the appropriate representation of the laser source in thermal modelling of the SLM process.
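
    The simplest of the three source descriptions, a moving point source, admits a classical quasi-steady analytical solution (Rosenthal's solution for a semi-infinite body), sketched below in the frame moving with the laser. This is the textbook result, included only to illustrate what a point-source representation provides; the paper's semi-analytical model and its parameters are not reproduced, and the material values in the example are rough placeholders.

```python
import numpy as np

def rosenthal_point_source(x, y, z, power, speed, conductivity, diffusivity, t_ambient=293.0):
    """Quasi-steady temperature field of a point heat source moving at constant speed along x
    over a semi-infinite body, in coordinates attached to the source (Rosenthal solution):
    T = T0 + P / (2*pi*k*R) * exp(-v*(x + R) / (2*a)). Singular at R = 0 (the source itself)."""
    R = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    return t_ambient + power / (2.0 * np.pi * conductivity * R) * np.exp(
        -speed * (x + R) / (2.0 * diffusivity))

# Example: 200 W absorbed, 1 m/s scan speed, steel-like properties (k=30 W/m/K, a=8e-6 m2/s).
T = rosenthal_point_source(x=-1e-4, y=0.0, z=5e-5, power=200.0, speed=1.0,
                           conductivity=30.0, diffusivity=8e-6)
```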

  13. An open, interoperable, transdisciplinary approach to a point cloud data service using OGC standards and open source software.

    Steer, Adam; Trenham, Claire; Druken, Kelsey; Evans, Benjamin; Wyborn, Lesley

    2017-04-01

    High resolution point clouds and other topology-free point data sources are widely utilised for research, management and planning activities. A key goal for research and management users is making these data and common derivatives available in a way which is seamlessly interoperable with other observed and modelled data. The Australian National Computational Infrastructure (NCI) stores point data from a range of disciplines, including terrestrial and airborne LiDAR surveys, 3D photogrammetry, airborne and ground-based geophysical observations, bathymetric observations and 4D marine tracers. These data are stored alongside a significant store of Earth systems data including climate and weather, ecology, hydrology, geoscience and satellite observations, and available from NCI's National Environmental Research Data Interoperability Platform (NERDIP) [1]. Because of the NERDIP requirement for interoperability with gridded datasets, the data models required to store these data may not conform to the LAS/LAZ format - the widely accepted community standard for point data storage and transfer. The goal for NCI is making point data discoverable, accessible and useable in ways which allow seamless integration with earth observation datasets and model outputs - in turn assisting researchers and decision-makers in the often-convoluted process of handling and analyzing massive point datasets. With a use-case of providing a web data service and supporting a derived product workflow, NCI has implemented and tested a web-based point cloud service using the Open Geospatial Consortium (OGC) Web Processing Service [2] as a transaction handler between a web-based client and server-side computing tools based on a native Linux operating system. Using this model, the underlying toolset for driving a data service is flexible and can take advantage of NCI's highly scalable research cloud. Present work focusses on the Point Data Abstraction Library (PDAL) [3] as a logical choice for
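
    As an indication of what a server-side point-cloud job behind such a service might look like, the sketch below builds a small PDAL pipeline through its Python bindings: crop a tile to a bounding box and keep ground-classified returns. The stage names follow PDAL's documented readers/filters/writers, but the file names, bounds and the overall workflow are placeholders, not NCI's deployed configuration.

```python
import json
import pdal  # PDAL Python bindings (assumed available server-side)

# Crop a LAS tile to a bounding box and keep only ground-classified returns (class 2).
pipeline_json = json.dumps({
    "pipeline": [
        "input_tile.las",
        {"type": "filters.crop", "bounds": "([500000, 501000], [6950000, 6951000])"},
        {"type": "filters.range", "limits": "Classification[2:2]"},
        "ground_subset.las",
    ]
})

pipeline = pdal.Pipeline(pipeline_json)
n_points = pipeline.execute()   # number of points written to the output
```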

  14. Joint part-of-speech and dependency projection from multiple sources

    Johannsen, Anders Trærup; Agic, Zeljko; Søgaard, Anders

    2016-01-01

    for multiple tasks from multiple source languages, relying on parallel corpora available for hundreds of languages. When training POS taggers and dependency parsers on jointly projected POS tags and syntactic dependencies using our algorithm, we obtain better performance than a standard approach on 20...

  15. Development of a Whole-Body Haptic Sensor with Multiple Supporting Points and Its Application to a Manipulator

    Hanyu, Ryosuke; Tsuji, Toshiaki

    This paper proposes a whole-body haptic sensing system that has multiple supporting points between the body frame and the end-effector. The system consists of an end-effector and multiple force sensors. Using this mechanism, the position of a contact force on the surface can be calculated without any sensor array. A haptic sensing system with a single-supporting-point structure was previously developed by the present authors. However, that system has drawbacks such as low stiffness and low strength. Therefore, in this study, a mechanism with multiple supporting points was proposed and its performance verified. In this paper, the basic concept of the mechanism is first introduced. Next, an experimental evaluation of the proposed method is presented.
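
    The contact-localisation idea described above can be illustrated with a simple wrench balance: sum the forces measured at the supporting points, sum their moments about a common origin, and intersect the resulting line of action with the end-effector surface. The sketch below assumes a flat surface and that the summed sensor readings equal the applied contact force; it is a generic formulation, not the authors' exact equations.

```python
import numpy as np

def contact_point(sensor_positions, sensor_forces, surface_z=0.0):
    """Estimate the location of a single contact on a flat end-effector surface (z = surface_z)
    from several force sensors at the supporting points: sum forces, sum moments about the
    origin, then solve the moment balance of a pure force applied at the contact.
    Assumes the summed sensor readings equal the applied contact force (sign convention)."""
    P = np.asarray(sensor_positions, dtype=float)    # (n, 3) supporting-point locations
    F = np.asarray(sensor_forces, dtype=float)       # (n, 3) forces measured at those points
    f = F.sum(axis=0)
    m = np.cross(P, F).sum(axis=0)                   # total moment about the origin
    # For a force f applied at (x, y, surface_z): m_x = y*f_z - z*f_y, m_y = z*f_x - x*f_z.
    x = (surface_z * f[0] - m[1]) / f[2]
    y = (m[0] + surface_z * f[1]) / f[2]
    return np.array([x, y, surface_z])
```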

  16. Adaptive aspirations and performance heterogeneity : Attention allocation among multiple reference points

    Blettner, D.P.; He, Z.; Hu, S.; Bettis, R.

    Organizations learn and adapt their aspiration levels based on reference points (prior aspiration, prior performance, and prior performance of reference groups). The relative attention that organizations allocate to these reference points impacts organizational search and strategic decisions.

  17. Genetic diversity and antimicrobial resistance of Escherichia coli from human and animal sources uncovers multiple resistances from human sources.

    A Mark Ibekwe

    Full Text Available Escherichia coli are widely used as indicators of fecal contamination, and in some cases to identify host sources of fecal contamination in surface water. Prevalence, genetic diversity and antimicrobial susceptibility were determined for 600 generic E. coli isolates obtained from surface water and sediment from creeks and channels along the middle Santa Ana River (MSAR) watershed of southern California, USA, during a 12-month study. Evaluation of E. coli populations along the creeks and channels showed that E. coli were more prevalent in sediment compared to surface water. E. coli populations were not significantly different (P = 0.05) between urban runoff sources and agricultural sources; however, E. coli genotypes determined by pulsed-field gel electrophoresis (PFGE) were less diverse in the agricultural sources than in urban runoff sources. PFGE also showed that E. coli populations in surface water were more diverse than in the sediment, suggesting isolates in sediment may be dominated by clonal populations. Twenty-four percent (144) of the 600 isolates exhibited resistance to more than one antimicrobial agent. Most multiple resistances were associated with inputs from urban runoff and involved the antimicrobials rifampicin, tetracycline, and erythromycin. The occurrence of a greater number of E. coli with multiple antibiotic resistances from urban runoff sources than agricultural sources in this watershed provides useful evidence for planning strategies for water quality management and public health protection.

  18. A Spatial and Temporal Assessment of Non-Point Groundwater Pollution Sources, Tutuila Island, American Samoa

    Shuler, C. K.; El-Kadi, A. I.; Dulaiova, H.; Glenn, C. R.; Fackrell, J.

    2015-12-01

    The quality of municipal groundwater supplies on Tutuila, the main island in American Samoa, is currently in question. A high vulnerability for contamination from surface activities has been recognized, and there exists a strong need to clearly identify anthropogenic sources of pollution and quantify their influence on the aquifer. This study examines spatial relationships and time series measurements of nutrients and other tracers to identify predominant pollution sources and determine the water quality impacts of the island's diverse land uses. Elevated groundwater nitrate concentrations are correlated with areas of human development; however, the mixture of residential and agricultural land use in this unique village-based agrarian setting makes specific source identification difficult using traditional geospatial analysis. Spatial variation in anthropogenic impact was assessed by linking NO3- concentrations and δ15N(NO3) from an extensive groundwater survey to land-use types within well capture zones and groundwater flow-paths developed with MODFLOW, a numerical groundwater model. Land use types were obtained from high-resolution GIS data and compared to water quality results with multiple-regression analysis to quantify the impact that different land uses have on water quality. In addition, historical water quality data and new analyses of δD and δ18O in precipitation, groundwater, and mountain-front recharge waters were used to constrain the sources and mechanisms of contamination. Our analyses indicate that groundwater nutrient levels on Tutuila are controlled primarily by residential, not agricultural, activity. Also, a lack of temporal variation suggests that episodic pollution events are limited to individual water sources as opposed to the entire aquifer. These results are not only valuable for water quality management on Tutuila, but also provide insight into the sustainability of groundwater supplies on other islands with similar hydrogeology and land

  19. The effect of energy distribution of external source on source multiplication in fast assemblies

    Karam, R.A.; Vakilian, M.

    1976-02-01

    The essence of this study is the effect of the energy distribution of a source on the detection rate, as a function of K-effective, in fast assemblies. This effect was studied for a fission chamber, using the ABN cross-section set and the Mach 1 code. It was found that with a source which has a fission spectrum, the reciprocal count rate versus mass relationship is linear down to a K-effective of 0.59. For a thermal source, linearity was never achieved. (author)
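
    The linear behaviour referred to above follows from subcritical source multiplication: when the detector responds to the multiplied, fission-like flux, the count rate scales as S/(1 - k_eff), so its reciprocal falls linearly towards zero as k_eff approaches 1. A minimal numerical illustration, with an arbitrary source strength and detector efficiency, is given below.

```python
def multiplied_count_rate(source_strength, k_eff, efficiency=1.0e-4):
    """Subcritical source multiplication: detected rate ~ efficiency * S / (1 - k_eff),
    so the reciprocal count rate decreases linearly towards zero as k_eff approaches 1."""
    return efficiency * source_strength / (1.0 - k_eff)

# Reciprocal count rates for a 1e6 n/s source at a few k_eff values (arbitrary efficiency).
for k in (0.5, 0.7, 0.9, 0.95, 0.99):
    print(k, 1.0 / multiplied_count_rate(1.0e6, k))
```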

  20. Seasonal and spatial variation of diffuse (non-point) source zinc pollution in a historically metal mined river catchment, UK

    Gozzard, E., E-mail: emgo@ceh.ac.uk [Hydrogeochemical Engineering Research and Outreach Group, School of Civil Engineering and Geosciences, Newcastle University, Newcastle upon Tyne NE1 7RU (United Kingdom); Mayes, W.M., E-mail: W.Mayes@hull.ac.uk [Hydrogeochemical Engineering Research and Outreach Group, School of Civil Engineering and Geosciences, Newcastle University, Newcastle upon Tyne NE1 7RU (United Kingdom); Potter, H.A.B., E-mail: hugh.potter@environment-agency.gov.uk [Environment Agency England and Wales, c/o Institute for Research on Environment and Sustainability, Newcastle University, Newcastle upon Tyne NE1 7RU (United Kingdom); Jarvis, A.P., E-mail: a.p.jarvis@ncl.ac.uk [Hydrogeochemical Engineering Research and Outreach Group, School of Civil Engineering and Geosciences, Newcastle University, Newcastle upon Tyne NE1 7RU (United Kingdom)

    2011-10-15

    Quantifying diffuse sources of pollution is becoming increasingly important when characterising river catchments in their entirety - a prerequisite for environmental management. This study examines both low and high flow events, as well as spatial variability, in order to assess point and diffuse components of zinc pollution within the River West Allen catchment, which lies within the northern England lead-zinc orefield. Zinc levels in the river are elevated under all flow regimes and are of environmental concern. Diffuse components are of little importance at low flow, with point source mine water discharges dominating instream zinc concentration and load. During higher river flows, 90% of the instream zinc load is attributed to diffuse sources, where inputs from resuspension of metal-rich sediments and groundwater influx are likely to be more dominant. Remediating point mine water discharges should significantly improve water quality at lower flows, but the contribution from diffuse sources will continue to elevate zinc flux at higher flows. - Highlights: > Zinc concentrations breach EU quality thresholds under all river flow conditions. > Contributions from point sources dominate instream zinc dynamics in low flow. > Contributions from diffuse sources dominate instream zinc dynamics in high flow. > Important diffuse sources include river-bed sediment resuspension and groundwater influx. > Diffuse sources would still create significant instream pollution, even with point source treatment. - Diffuse zinc sources are an important source of instream contamination to mine-impacted rivers under varying flow conditions.

  1. Calibrate the aerial surveying instrument by the limited surface source and the single point source that replace the unlimited surface source

    Lu Cunheng

    1999-01-01

    The calculation formulae and survey results are derived from the superposition principle of gamma rays and the geometry of the hexagonal surface source when a limited surface source replaces the unlimited surface source to calibrate the aerial survey instrument on the ground, and from the reciprocity principle of gamma rays when a single point source replaces the unlimited surface source to calibrate the aerial survey instrument in the air. In addition, through theoretical analysis, the receiving rate of the crystal bottom and side surfaces is calculated when the aerial survey instrument receives gamma rays. A mathematical expression for the decay of the gamma-ray field with height, following the Jinge function, is obtained. Using this relationship, the air absorption coefficient for gamma rays and the detection efficiency coefficient of the crystal are calculated from the ground and airborne measurements of the bottom-surface receiving count rate (derived from the total receiving count rate of the bottom and side surfaces). Finally, the measured values show that using this relationship to represent the change of the total gamma-ray exposure rate received by the bottom and side surfaces over a certain height range is feasible

  2. [Urban non-point source pollution control by runoff retention and filtration pilot system].

    Bai, Yao; Zuo, Jian-E; Gan, Li-Li; Low, Thong Soon; Miao, Heng-Feng; Ruan, Wen-Quan; Huang, Xia

    2011-09-01

    A runoff retention and filtration pilot system was designed, and the long-term purification performance for runoff was monitored. Runoff pollution characteristics in two typical events and the treatment effect of the pilot system were analyzed. The results showed that the runoff was severely polluted. Event mean concentrations (EMCs) of SS, COD, TN and TP in the runoff were 361, 135, 7.88 and 0.62 mg/L, respectively. The runoff formed by a long rain event presented an obvious first-flush effect: the first 25% of the flow contributed more than 50% of the total pollutant loading of SS, TP, DTP and PO4(3-). The pilot system could retain 100% of the non-point source pollution if the runoff volume was less than the capacity of the retention tank. Otherwise, the overflow was purified by the filtration system, and the removal rates of SS, COD, TN, TP, DTP and PO4(3-) reached 97.4%, 61.8%, 22.6%, 85.1%, 72.1% and 85.2%, respectively. The system was stable, and the overall removal rates of SS, COD, TN and TP were 98.6%, 65.4%, 55.1% and 92.6%. The whole system could effectively remove the non-point source pollution caused by runoff.
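
    The EMC values quoted above are flow-weighted averages over an event, EMC = Σ(C_i·Q_i)/Σ(Q_i). The helper below computes that quantity from paired concentration and flow samples; the sample data and units are illustrative, not values from this study.

```python
def event_mean_concentration(concentrations, flows):
    """Flow-weighted event mean concentration: EMC = sum(C_i * Q_i) / sum(Q_i).
    Concentrations and flows are paired samples taken over one runoff event."""
    weighted = sum(c * q for c, q in zip(concentrations, flows))
    return weighted / sum(flows)

# Synthetic example: concentrations in mg/L and flows in L/s sampled during an event.
emc_ss = event_mean_concentration([620, 410, 230, 150], [5.0, 12.0, 9.0, 4.0])
```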

  3. Current status of agricultural and rural non-point source Pollution assessment in China

    Ongley, Edwin D.; Zhang Xiaolan; Yu Tao

    2010-01-01

    Estimates of non-point source (NPS) contribution to total water pollution in China range up to 81% for nitrogen and to 93% for phosphorus. We believe these values are too high, reflecting (a) misuse of estimation techniques that were developed in America under very different conditions and (b) lack of specificity on what is included as NPS. We compare primary methods used for NPS estimation in China with their use in America. Two observations are especially notable: empirical research is limited and does not provide an adequate basis for calibrating models nor for deriving export coefficients; the Chinese agricultural situation is so different than that of the United States that empirical data produced in America, as a basis for applying estimation techniques to rural NPS in China, often do not apply. We propose a set of national research and policy initiatives for future NPS research in China. - Estimation techniques used in China for non-point source pollution are evaluated as a basis for recommending future policies and research in NPS studies in China.

  4. Industrial point source CO2 emission strength estimation with aircraft measurements and dispersion modelling.

    Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino

    2018-02-22

    CO2 remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest for both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO2 emissions. However, because of the intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework for a CO2 industrial point source located in Biganos, France. Emission estimates were obtained from CO2 density measurements by applying the mass balance method, and were also derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses; and (iii) local in situ observations. Governmental inventory data were used as the reference for all applications. The strengths and weaknesses of the different approaches, and how they affect emission estimation uncertainty, were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, particularly when using the ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help to highlight some methodological best practices that can be used as guidelines for future experiments.
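
    The aircraft mass-balance estimate referred to above is, in its simplest form, an integral of the above-background concentration times the wind component normal to a downwind crosswind screen. The sketch below shows that simplified calculation on a regular (z, y) grid; the gridding, background treatment and units are assumptions, not the authors' processing chain.

```python
import numpy as np

def mass_balance_emission(conc, conc_background, wind_normal, dy, dz):
    """Point-source emission rate from a downwind crosswind screen: integrate the
    above-background concentration times the wind component normal to the screen over the
    (z, y) plane. conc is a 2-D array (kg/m3) on a regular grid with spacings dz, dy (m);
    wind_normal is in m/s, so the result is in kg/s."""
    excess = np.maximum(np.asarray(conc, dtype=float) - conc_background, 0.0)
    return float(np.sum(excess * wind_normal) * dy * dz)
```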

  5. Modeling spatial variability of sand-lenses in clay till settings using transition probability and multiple-point geostatistics

    Kessler, Timo Christian; Nilsson, Bertel; Klint, Knud Erik

    2010-01-01

    … of sand-lenses in clay till. Sand-lenses mainly account for horizontal transport and are prioritised in this study. Based on field observations, the distribution has been modeled using two different geostatistical approaches. One method uses a Markov chain model calculating the transition probabilities (TPROGS) of alternating geological facies. The second method, multiple-point statistics, uses training images to estimate the conditional probability of sand-lenses at a certain location. Both methods respect field observations such as local stratigraphy; however, only the multiple-point statistics can …

  6. Using rare earth elements to trace wind-driven dispersion of sediments from a point source

    Van Pelt, R. Scott; Barnes, Melanie C. W.; Strack, John E.

    2018-06-01

    The entrainment and movement of aeolian sediments is determined by the direction and intensity of erosive winds. Although erosive winds may blow from all directions, in most regions there is a predominant direction. Dust emission preferentially removes soil nutrients and contaminants, which may be transported tens to even thousands of kilometers from the source and deposited into other ecosystems. It would be beneficial to understand spatially and temporally how the soil source may be degraded and depositional zones enriched. A stable chemical tracer not found in the soil but applied to the surface of all particles in the surface soil would facilitate this endeavor. This study examined whether solution-applied rare earth elements (REEs) could be used to trace aeolian sediment movement from a point source through space and time at the field scale. We applied erbium nitrate solution to a 5 m2 area in the center of a 100 m diameter (7854 m2) field on the Southern High Plains of Texas. The solution application resulted in a soil-borne concentration three orders of magnitude greater than that natively found in the field soil. We installed BSNE sampler masts in circular configurations and collected the trapped sediment weekly. We found that REE-tagged sediment was blown into every sampler mast during the course of the study, but that there was a predominant direction of transport during the spring. This preliminary investigation suggests that the REEs provide a viable and incisive technique to study spatial and temporal variation of aeolian sediment movement from specific sources to identifiable locations of deposition, or to locations through which the sediments were transported as horizontal mass flux, as well as the relative contribution of the specific source to the total mass flux.

  7. Two- and three-particle interference correlations of identical bosons and fermions with close momenta in the model of independent point-like sources

    Lyuboshits, V.L.

    1991-01-01

    Interference correlations between identical particles with close momenta, induced by the effects of Bose or Fermi statistics, are discussed. Relations describing two- and three-particle correlations of identical bosons and fermions with arbitrary spin and arbitrary spin polarization are obtained on the basis of the model of independent single-particle point-like sources. The general structure of the dependence of narrow two- and three-particle correlations on the difference of the four-momenta in the presence of several groups of single-particle sources with different space-time distributions is analyzed. The idea of many-particle point sources of identical bosons is introduced. The suppression of two- and three-particle interference correlations between identical π mesons under conditions when one or several many-particle sources are added to a system of randomly distributed independent single-particle sources is studied. It is shown that if the multiplicities of the particles emitted by the sources are distributed according to the Poisson law, the present results agree with the relations obtained by means of the formalism of coherent states. This agreement also holds in the limit of very large multiplicities with any distribution law.

  8. Super-resolution for a point source better than λ/500 using positive refraction

    Miñano, Juan C; González, Juan C; Benítez, Pablo; Grabovickic, Dejan; Marqués, Ricardo; Delgado, Vicente; Freire, Manuel

    2011-01-01

    Leonhardt (2009 New J. Phys. 11 093040) demonstrated that the two-dimensional (2D) Maxwell fish eye (MFE) lens can focus perfectly 2D Helmholtz waves of arbitrary frequency; that is, it can transport perfectly an outward (monopole) 2D Helmholtz wave field, generated by a point source, towards a ‘perfect point drain’ located at the corresponding image point. Moreover, a prototype with λ/5 super-resolution property for one microwave frequency has been manufactured and tested (Ma et al 2010 arXiv:1007.2530v1; Ma et al 2010 New J. Phys. 13 033016). However, neither software simulations nor experimental measurements for a broad band of frequencies have yet been reported. Here, we present steady-state simulations with a non-perfect drain for a device equivalent to the MFE, called the spherical geodesic waveguide (SGW), which predicts up to λ/500 super-resolution close to discrete frequencies. Out of these frequencies, the SGW does not show super-resolution in the analysis carried out. (paper)

  10. Emissions of perfluorinated alkylated substances (PFAS) from point sources--identification of relevant branches.

    Clara, M; Scheffknecht, C; Scharf, S; Weiss, S; Gans, O

    2008-01-01

    Effluents of wastewater treatment plants are relevant point sources for the emission of hazardous xenobiotic substances to the aquatic environment. One group of substances which has recently entered scientific and political discussions is the group of the perfluorinated alkylated substances (PFAS). The most studied compounds from this group are perfluorooctanoic acid (PFOA) and perfluorooctane sulphonate (PFOS), which are the most important degradation products of PFAS. These two substances are known to be persistent, bioaccumulative and toxic (PBT). In the present study, eleven PFAS were investigated in effluents of municipal wastewater treatment plants (WWTP) and in industrial wastewaters. PFOS and PFOA proved to be the dominant compounds in all sampled wastewaters. Concentrations of up to 340 ng/L of PFOS and up to 220 ng/L of PFOA were observed. Besides these two compounds, perfluorohexanoic acid (PFHxA) was also present in nearly all effluents, and maximum concentrations of up to 280 ng/L were measured. Only N-ethylperfluorooctane sulphonamide (N-EtPFOSA) and its degradation/metabolisation product perfluorooctane sulphonamide (PFOSA) were either detected below the limit of quantification or not detected at all. Besides the effluents of the municipal WWTPs, nine industrial wastewaters from six different industrial branches were also investigated. Significantly, the highest emissions of PFOS were observed from the metal industry, whereas the paper industry showed the highest PFOA emissions. Several PFAS, especially perfluorononanoic acid (PFNA), perfluorodecanoic acid (PFDA), perfluorododecanoic acid (PFDoA) and PFOS, are predominantly emitted from industrial sources, with concentrations being a factor of 10 higher than those observed in the municipal WWTP effluents. Perfluorodecane sulphonate (PFDS), N-Et-PFOSA and PFOSA were not detected in any of the sampled industrial point sources. (c) IWA Publishing 2008.

  11. Channel capacity of TDD-OFDM-MIMO for multiple access points in a wireless single-frequency-network

    Takatori, Y.; Fitzek, Frank; Tsunekawa, K.

    2005-01-01

    The multiple-input-multiple-output (MIMO) technique is the most attractive candidate to improve spectrum efficiency in next-generation wireless communication systems. However, the efficiency of MIMO techniques is reduced in line-of-sight (LOS) environments. In this paper, we propose a new MIMO data transmission scheme, which combines a Single-Frequency-Network (SFN) with TDD-OFDM-MIMO for wireless LAN networks. In our proposal, we advocate using SFN for multiple-access-point (MAP) MIMO data transmission. The goal of this approach is to achieve very high channel capacity in both …

  12. Multiple-point statistical simulation for hydrogeological models: 3-D training image development and conditioning strategies

    Høyer, Anne-Sophie; Vignoli, Giulio; Mejer Hansen, Thomas; Thanh Vu, Le; Keefer, Donald A.; Jørgensen, Flemming

    2017-12-01

    Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS) to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2-D or quasi-3-D training images. In the present study, we demonstrate a novel strategy for 3-D MPS modelling characterized by (i) realistic 3-D training images and (ii) an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km2 in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments) which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large and each of the geostatistical realizations contains approximately 45 million voxels with size 100 m × 100 m × 5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3-D geological model. We apply a series of different strategies for the simulations based on data quality, and develop a novel method to effectively create observed spatial trends. The training image is constructed as a relatively small 3-D voxel model covering an area of 90 km2. We use an iterative training image development strategy and find that even slight modifications in the training image create significant changes in simulations. Thus, this study shows how to include both the geological environment and the type and quality of input information in order to achieve optimal results from MPS modelling. We present a practical workflow to build the training image and

  13. Simultaneous reconstruction of multiple depth images without off-focus points in integral imaging using a graphics processing unit.

    Yi, Faliu; Lee, Jieun; Moon, Inkyu

    2014-05-01

    The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of focus and off-focus areas. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free space that do not belong to any object surface in 3D space. Generally, unless it is removed, the presence of an off-focus area would adversely affect the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point in a depth image can be assessed by analyzing the statistical variance of its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results have demonstrated that this method is capable of removing off-focus points in the reconstructed depth image. The results also showed that using a GPU to remove the off-focus points could greatly improve the overall computational speed compared with using a CPU.
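
    The focus/off-focus test described above reduces, in its simplest form, to thresholding the variance of the elemental-image samples that back-project to each reconstructed 3D point. The sketch below shows that per-point test (on the CPU, for clarity); the sample layout and threshold are assumptions, and the GPU parallelisation is not reproduced.

```python
import numpy as np

def classify_focus_points(samples, variance_threshold):
    """Classify reconstructed 3D points as focus (True) or off-focus (False) by thresholding
    the variance of their corresponding elemental-image samples: points on a real surface are
    observed with similar intensities, free-space points are not.
    samples has shape (n_points, n_elemental_images)."""
    variance = np.var(np.asarray(samples, dtype=float), axis=1)
    return variance <= variance_threshold
```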

  14. Launching and controlling Gaussian beams from point sources via planar transformation media

    Odabasi, Hayrettin; Sainath, Kamalesh; Teixeira, Fernando L.

    2018-02-01

    Based on operations prescribed under the paradigm of complex transformation optics (CTO) [F. Teixeira and W. Chew, J. Electromagn. Waves Appl. 13, 665 (1999), 10.1163/156939399X01104; F. L. Teixeira and W. C. Chew, Int. J. Numer. Model. 13, 441 (2000), 10.1002/1099-1204(200009/10)13:5%3C441::AID-JNM376%3E3.0.CO;2-J; H. Odabasi, F. L. Teixeira, and W. C. Chew, J. Opt. Soc. Am. B 28, 1317 (2011), 10.1364/JOSAB.28.001317; B.-I. Popa and S. A. Cummer, Phys. Rev. A 84, 063837 (2011), 10.1103/PhysRevA.84.063837], it was recently shown in [G. Castaldi, S. Savoia, V. Galdi, A. Alù, and N. Engheta, Phys. Rev. Lett. 110, 173901 (2013), 10.1103/PhysRevLett.110.173901] that a complex source point (CSP) can be mimicked by parity-time (PT) transformation media. Such a coordinate transformation has a mirror symmetry for the imaginary part, and results in a balanced loss/gain metamaterial slab. A CSP produces a Gaussian beam and, consequently, a point source placed at the center of such a metamaterial slab produces a Gaussian beam propagating away from the slab. Here, we extend the CTO analysis to nonsymmetric complex coordinate transformations as put forth in [S. Savoia, G. Castaldi, and V. Galdi, J. Opt. 18, 044027 (2016), 10.1088/2040-8978/18/4/044027] and verify that, by simply using a (homogeneous) doubly anisotropic gain-media metamaterial slab, one can still mimic a CSP and produce a Gaussian beam. In addition, we show that Gaussian-like beams can be produced by point sources placed outside the slab as well. By making use of the extra degrees of freedom (the real and imaginary parts of the coordinate transformation) provided by CTO, the near-zero requirement on the real part of the resulting constitutive parameters can be relaxed to facilitate potential realization of Gaussian-like beams. We illustrate how beam properties such as peak amplitude and waist location can be controlled by a proper choice of (complex-valued) CTO Jacobian elements. In particular, the beam waist
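
    For reference, the core complex-source-point identity underlying the abstract can be sketched as follows (standard CSP theory; the specific CTO mappings of the paper are not reproduced here). Displacing a scalar point source to the complex location (0, 0, -ib) turns the spherical wave into a field that behaves paraxially as a Gaussian beam:

    $$ G(\mathbf{r}) = \frac{e^{ikR_c}}{4\pi R_c}, \qquad R_c = \sqrt{x^2 + y^2 + (z + ib)^2}, \qquad w_0 = \sqrt{\frac{2b}{k}}, \qquad z_R = b, $$

    so the imaginary displacement b plays the role of the Rayleigh range and fixes the beam waist w_0.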

  15. Electrical source imaging of interictal spikes using multiple sparse volumetric priors for presurgical epileptogenic focus localization

    Gregor Strobbe

    2016-01-01

    Full Text Available Electrical source imaging of interictal spikes observed in EEG recordings of patients with refractory epilepsy provides useful information to localize the epileptogenic focus during the presurgical evaluation. However, the selection of the time points or time epochs of the spikes in order to estimate the origin of the activity remains a challenge. In this study, we consider a Bayesian EEG source imaging technique for distributed sources, i.e. the multiple volumetric sparse priors (MSVP) approach. The approach allows estimation of the time courses of the intensity of the sources corresponding to a specific time epoch of the spike. Based on presurgical averaged interictal spikes in six patients who were successfully treated with surgery, we estimated the time courses of the source intensities for three different time epochs: (i) an epoch starting 50 ms before the spike peak and ending at 50% of the spike peak during the rising phase of the spike, (ii) an epoch starting 50 ms before the spike peak and ending at the spike peak and (iii) an epoch containing the full spike time period starting 50 ms before the spike peak and ending 230 ms after the spike peak. To identify the primary source of the spike activity, the source with the maximum energy from 50 ms before the spike peak till 50% of the spike peak was subsequently selected for each of the time windows. For comparison, the activity at the spike peaks and at 50% of the peaks was localized using the LORETA inversion technique and an ECD approach. Both patient-specific spherical forward models and patient-specific 5-layered finite difference models were considered to evaluate the influence of the forward model. Based on the resected zones in each of the patients, extracted from post-operative MR images, we compared the distances to the resection border of the estimated activity. Using the spherical models, the distances to the resection border for the MSVP approach and each of the different time

  16. Assisting People with Multiple Disabilities by Improving Their Computer Pointing Efficiency with an Automatic Target Acquisition Program

    Shih, Ching-Hsiang; Shih, Ching-Tien; Peng, Chin-Ling

    2011-01-01

    This study evaluated whether two people with multiple disabilities would be able to improve their pointing performance through an Automatic Target Acquisition Program (ATAP) and a newly developed mouse driver (i.e. a new mouse driver replaces the standard mouse driver and is able to monitor mouse movement and intercept click actions). Initially, both…

  17. Quantitative Analysis of VIIRS DNB Nightlight Point Source for Light Power Estimation and Stability Monitoring

    Changyong Cao

    2014-12-01

    Full Text Available The high sensitivity and advanced onboard calibration on the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) enables accurate measurements of low light radiances which leads to enhanced quantitative applications at night. The finer spatial resolution of DNB also allows users to examine socioeconomic activities at urban scales. Given the growing interest in the use of the DNB data, there is a pressing need for better understanding of the calibration stability and absolute accuracy of the DNB at low radiances. The low light calibration accuracy was previously estimated at a moderate 15% using extended sources while the long-term stability has yet to be characterized. There are also several science related questions to be answered, for example, how the Earth’s atmosphere and surface variability contribute to the stability of the DNB measured radiances; how to separate them from instrument calibration stability; whether or not SI (International System of Units) traceable active light sources can be designed and installed at selected sites to monitor the calibration stability, radiometric and geolocation accuracy, and point spread functions of the DNB; furthermore, whether or not such active light sources can be used for detecting environmental changes, such as aerosols. This paper explores the quantitative analysis of nightlight point sources, such as those from fishing vessels, bridges, and cities, using fundamental radiometry and radiative transfer, which would be useful for a number of applications including search and rescue in severe weather events, as well as calibration/validation of the DNB. Time series of the bridge light data are used to assess the stability of the light measurements and the calibration of VIIRS DNB. It was found that the light radiant power computed from the VIIRS DNB data matched relatively well with independent assessments based on the in situ light installations, although estimates have to be
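
    As a hedged first-order illustration of the radiometry involved (not the paper's actual retrieval), the radiant power of an unresolved nightlight can be related to the DNB pixel radiance by assuming a Lambertian source viewed at nadir through an atmosphere of one-way transmittance τ:

    $$ P \;\approx\; \frac{\pi\, L_{\mathrm{DNB}}\, A_{\mathrm{pix}}}{\tau}, $$

    where L_DNB is the in-band radiance assigned to the pixel and A_pix is the pixel ground footprint (roughly 750 m across for the DNB); the source's angular emission pattern, spectral shape and off-nadir viewing geometry would all modify this estimate in practice.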

  18. Search for point-like sources using the diffuse astrophysical muon-neutrino flux in IceCube

    Reimann, Rene; Haack, Christian; Raedel, Leif; Schoenen, Sebastian; Schumacher, Lisa; Wiebusch, Christopher [III. Physikalisches Institut B, RWTH Aachen (Germany); Collaboration: IceCube-Collaboration

    2016-07-01

    IceCube, a cubic-kilometer-sized neutrino detector at the geographic South Pole, has recently confirmed a flux of high-energy astrophysical neutrinos in the track-like muon channel. Although this muon-neutrino flux has now been observed with high significance, no point sources or source classes have yet been identified with these well-pointing events. We present a search for point-like sources based on a six-year sample of upgoing muon neutrinos with very low background contamination. To improve the sensitivity, the standard likelihood approach has been modified to focus on the properties of the measured astrophysical muon-neutrino flux.
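
    For context, the standard unbinned point-source likelihood that such searches build on has the form below (the specific modification described in the abstract is not reproduced here), with n_s the number of signal events attributed to the source and γ the assumed spectral index:

    $$ \mathcal{L}(n_s,\gamma) = \prod_{i=1}^{N}\left[\frac{n_s}{N}\,\mathcal{S}_i(\vec{x}_i,E_i;\gamma) + \left(1-\frac{n_s}{N}\right)\mathcal{B}_i(\vec{x}_i,E_i)\right], \qquad \mathrm{TS} = 2\ln\frac{\mathcal{L}(\hat{n}_s,\hat{\gamma})}{\mathcal{L}(n_s=0)}, $$

    where S_i and B_i are the per-event signal and background probability densities in direction and energy, and TS is the test statistic evaluated at each sky position.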

  19. Broadband integrated mid infrared light sources as enabling technology for point of care mid-infrared spectroscopy

    2017-08-20

    AFRL-AFOSR-JP-TR-2017-0061: Broadband integrated mid-infrared light sources as enabling technology for point-of-care mid-infrared spectroscopy (Grant FA2386-16-1-4037, 16 August 2017).

  20. Simulation of agricultural non-point source pollution in Xichuan by using SWAT model

    Xing, Linan; Zuo, Jiane; Liu, Fenglin; Zhang, Xiaohui; Cao, Qiguang

    2018-02-01

    This paper evaluated the applicability of using SWAT to assess agricultural non-point source pollution in the Xichuan area. To build the model, a basic database was assembled from a DEM, soil type and land use maps, and climate monitoring data. The SWAT model was calibrated and validated using streamflow, suspended solids, total phosphorus and total nitrogen records from 2009 to 2011. Errors, the coefficient of determination and the Nash-Sutcliffe coefficient were used to evaluate the applicability. The coefficients of determination were 0.96, 0.66, 0.55 and 0.66 for streamflow, SS, TN, and TP, respectively; the Nash-Sutcliffe coefficients were 0.93, 0.5, 0.52 and 0.63, respectively. All results met the requirements, suggesting that the SWAT model can simulate the study area.
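
    For reference, the Nash-Sutcliffe efficiency (NSE) and coefficient of determination quoted above are conventionally defined as follows, with O_i the observed and S_i the simulated values:

    $$ \mathrm{NSE} = 1 - \frac{\sum_i (O_i - S_i)^2}{\sum_i (O_i - \bar{O})^2}, \qquad R^2 = \left[\frac{\sum_i (O_i - \bar{O})(S_i - \bar{S})}{\sqrt{\sum_i (O_i - \bar{O})^2}\,\sqrt{\sum_i (S_i - \bar{S})^2}}\right]^2, $$

    so NSE = 1 indicates a perfect match, and NSE > 0.5 is commonly taken as satisfactory for monthly watershed simulations.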

  1. Exposure buildup factors for a cobalt-60 point isotropic source for single and two layer slabs

    Chakarova, R.

    1992-01-01

    Exposure buildup factors for point isotropic cobalt-60 sources are calculated by the Monte Carlo method, with statistical errors ranging from 1.5 to 7%, for water and iron single slabs 1-5 mean free paths (mfp) thick and for 1 and 2 mfp iron layers followed by water layers 1-5 mfp thick. The computations take into account Compton scattering. The Monte Carlo data for single-slab geometries are approximated by the Geometric Progression formula. Kalos's formula, using the calculated single-slab buildup factors, may be applied to reproduce the data for two-layered slabs. The presented results and discussion may help when choosing the manner in which the radiation field of gamma irradiation units is described. (author)
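
    For reference, the Geometric Progression (GP) fitting form commonly used for single-layer buildup factors is sketched below (the fitted coefficients of this study are not reproduced); x is the penetration depth in mean free paths and b the buildup factor at 1 mfp:

    $$ B(E,x) = 1 + (b-1)\,\frac{K^x - 1}{K - 1} \quad (K \neq 1), \qquad B(E,x) = 1 + (b-1)\,x \quad (K = 1), $$

    where the parameter K(E, x) controls how the buildup changes slope with depth.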

  2. Large-Eddy Simulation of Chemically Reactive Pollutant Transport from a Point Source in Urban Area

    Du, Tangzheng; Liu, Chun-Ho

    2013-04-01

    Most air pollutants are chemically reactive, so using an inert scalar as the tracer in pollutant dispersion modelling would often overlook their impact on urban inhabitants. In this study, large-eddy simulation (LES) is used to examine the plume dispersion of chemically reactive pollutants in a hypothetical atmospheric boundary layer (ABL) in neutral stratification. The irreversible chemistry mechanism of ozone (O3) titration is integrated into the LES model. Nitric oxide (NO) is emitted from an elevated point source in a rectangular spatial domain doped with O3. The LES results compare well with the wind tunnel results available in the literature. Afterwards, the LES model is applied to idealized two-dimensional (2D) street canyons of unity aspect ratio to study the behaviour of a chemically reactive plume over idealized urban roughness. The relations among the various reaction and turbulence time scales and the associated dimensionless numbers are analysed.
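
    A minimal well-mixed box-model sketch of the O3 titration step described above is given below in Python; the rate constant is an approximate room-temperature value chosen for illustration only and is not taken from the cited LES study.

```python
# Irreversible O3 titration: NO + O3 -> NO2 (+ O2).
k = 4.4e-4                      # ppb^-1 s^-1, approximate value near 298 K (assumption)
dt = 0.1                        # time step, s
no, o3, no2 = 50.0, 40.0, 0.0   # hypothetical initial mixing ratios, ppb

for _ in range(int(600 / dt)):  # integrate the box for 10 minutes
    rate = k * no * o3          # reaction rate, ppb s^-1
    no  -= rate * dt
    o3  -= rate * dt
    no2 += rate * dt

print(f"after 10 min: NO = {no:.1f} ppb, O3 = {o3:.1f} ppb, NO2 = {no2:.1f} ppb")
```

    In an LES this reaction is typically evaluated in every grid cell alongside the resolved turbulent transport, so the local NO/O3 balance depends on how quickly the plume mixes with the O3-doped background air.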

  3. Risk-based prioritization of ground water threatening point sources at catchment and regional scales

    Overheu, Niels Døssing; Tuxen, Nina; Flyvbjerg, John

    2014-01-01

    framework has been developed to enable a systematic and transparent risk assessment and prioritization of contaminant point sources, considering the local, catchment, or regional scales (Danish EPA, 2011, 2012). The framework has been tested in several catchments in Denmark with different challenges...... and needs, and two of these are presented. Based on the lessons learned, the Danish EPA has prepared a handbook to guide the user through the steps in a risk-based prioritization (Danish EPA, 2012). It provides guidance on prioritization both in an administratively defined area such as a Danish Region...... of the results are presented using the case studies as examples. The methodology was developed by a broad industry group including the Danish EPA, the Danish Regions, the Danish Nature Agency, the Technical University of Denmark, and consultants — and the framework has been widely accepted by the professional...

  4. Modeling a point-source release of 1,1,1-trichloroethane using EPA's SCREEN model

    Henriques, W.D.; Dixon, K.R.

    1994-01-01

    Using data from the Environmental Protection Agency's Toxic Release Inventory 1988 (EPA TRI88), pollutant concentration estimates were modeled for a point source air release of 1,1,1-trichloroethane at the Savannah River Plant located in Aiken, South Carolina. Estimates were calculated using the EPA's SCREEN model under typical meteorological conditions to determine the maximum impact of the plume under different mixing conditions for locations within 100 meters of the stack. Input data for the SCREEN model were then manipulated to simulate the impact of the release under urban conditions (for the purpose of assessing future land-use considerations) and under flare release options to determine whether these parameters lessen or increase the probability of human or wildlife exposure to significant concentrations. The results were then compared to EPA reference concentrations (RfC) in order to assess the size of the buffer zone around the stack within which concentrations may exceed this safety level.
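
    Screening models of this kind are built on the Gaussian plume equation; a generic form (with ground reflection) is given below for reference, without reproducing the exact options and dispersion curves used by SCREEN:

    $$ C(x,y,z) = \frac{Q}{2\pi u\,\sigma_y \sigma_z}\,\exp\!\left(-\frac{y^2}{2\sigma_y^2}\right)\left[\exp\!\left(-\frac{(z-H)^2}{2\sigma_z^2}\right) + \exp\!\left(-\frac{(z+H)^2}{2\sigma_z^2}\right)\right], $$

    where Q is the emission rate, u the wind speed, H the effective release height, and σ_y(x), σ_z(x) are stability-dependent dispersion coefficients (urban or rural curves, which is why the urban option changes the predicted maximum impact).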

  5. Improvement of correlation-based centroiding methods for point source Shack-Hartmann wavefront sensor

    Li, Xuxu; Li, Xinyang; wang, Caixia

    2018-03-01

    This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.
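
    A minimal sketch of correlation-based centroiding on one subaperture spot is given below in Python. It uses an exhaustive shift search for clarity; the fast search strategies compared in the paper (TSS, TDL, CS, OS) would evaluate only a subset of these candidate shifts.

```python
import numpy as np

def ccf_centroid(spot, template):
    """Estimate the spot centroid of one Shack-Hartmann subaperture by locating
    the peak of the cross-correlation function (CCF) between the measured spot
    image and a reference Gaussian template (exhaustive search version)."""
    h, w = spot.shape
    th, tw = template.shape
    best, best_shift = -np.inf, (0, 0)
    for dy in range(h - th + 1):
        for dx in range(w - tw + 1):
            ccf = np.sum(spot[dy:dy + th, dx:dx + tw] * template)  # CCF value at this shift
            if ccf > best:
                best, best_shift = ccf, (dy, dx)
    # Centroid = centre of the template offset by the best-matching shift.
    return best_shift[0] + (th - 1) / 2.0, best_shift[1] + (tw - 1) / 2.0

# Synthetic example: a Gaussian spot embedded at a known position.
yy, xx = np.mgrid[0:16, 0:16]
template = np.exp(-((xx - 7.5) ** 2 + (yy - 7.5) ** 2) / (2 * 2.0 ** 2))
spot = np.zeros((32, 32))
spot[8:24, 10:26] = template          # true centroid at (15.5, 17.5)
print(ccf_centroid(spot, template))   # -> (15.5, 17.5)
```

    Replacing the inner sum with ADF, ADF2 or SDF and restricting the (dy, dx) loop to a fast search pattern reproduces the trade-offs studied in the paper.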

  6. Preliminary limits on the flux of muon neutrinos from extraterrestrial point sources

    Bionta, R.M.; Blewitt, G.; Bratton, C.B.

    1985-01-01

    We present the arrival directions of 117 upward-going muon events collected with the IMB proton lifetime detector during 317 days of live detector operation. The rate of upward-going muons observed in our detector was found to be consistent with the rate expected from atmospheric neutrino production. The upper limit on the total flux of extraterrestrial neutrinos >1 GeV is 2 -sec. Using our data and a Monte Carlo simulation of high energy muon production in the earth surrounding the detector, we place limits on the flux of neutrinos from a point source in the Vela X-2 system of 2 -sec with E > 1 GeV. 6 refs., 5 figs

  7. Carbon dioxide capture and separation techniques for advanced power generation point sources

    Pennline, H.W.; Luebke, D.R.; Morsi, B.I.; Heintz, Y.J.; Jones, K.L.; Ilconich, J.B.

    2006-09-01

    The capture/separation step for carbon dioxide (CO2) from large-point sources is a critical one with respect to the technical feasibility and cost of the overall carbon sequestration scenario. For large-point sources, such as those found in power generation, the carbon dioxide capture techniques being investigated by the in-house research area of the National Energy Technology Laboratory possess the potential for improved efficiency and reduced costs as compared to more conventional technologies. The investigated techniques can have wide applications, but the research has focused on capture/separation of carbon dioxide from flue gas (postcombustion from fossil fuel-fired combustors) and from fuel gas (precombustion, such as integrated gasification combined cycle – IGCC). With respect to fuel gas applications, novel concepts are being developed in wet scrubbing with physical absorption; chemical absorption with solid sorbents; and separation by membranes. In one concept, a wet scrubbing technique is being investigated that uses a physical solvent process to remove CO2 from fuel gas of an IGCC system at elevated temperature and pressure. The need to define an ideal solvent has led to the study of the solubility and mass transfer properties of various solvents. Fabrication techniques and mechanistic studies for hybrid membranes separating CO2 from the fuel gas produced by coal gasification are also being performed. Membranes that consist of CO2-philic silanes incorporated into an alumina support or ionic liquids encapsulated into a polymeric substrate have been investigated for permeability and selectivity. An overview of two novel techniques is presented along with a research progress status of each technology.

  8. Carbon Dioxide Capture and Separation Techniques for Gasification-based Power Generation Point Sources

    Pennline, H.W.; Luebke, D.R.; Jones, K.L.; Morsi, B.I. (Univ. of Pittsburgh, PA); Heintz, Y.J. (Univ. of Pittsburgh, PA); Ilconich, J.B. (Parsons)

    2007-06-01

    The capture/separation step for carbon dioxide (CO2) from large-point sources is a critical one with respect to the technical feasibility and cost of the overall carbon sequestration scenario. For large-point sources, such as those found in power generation, the carbon dioxide capture techniques being investigated by the in-house research area of the National Energy Technology Laboratory possess the potential for improved efficiency and reduced costs as compared to more conventional technologies. The investigated techniques can have wide applications, but the research has focused on capture/separation of carbon dioxide from flue gas (post-combustion from fossil fuel-fired combustors) and from fuel gas (precombustion, such as integrated gasification combined cycle or IGCC). With respect to fuel gas applications, novel concepts are being developed in wet scrubbing with physical absorption; chemical absorption with solid sorbents; and separation by membranes. In one concept, a wet scrubbing technique is being investigated that uses a physical solvent process to remove CO2 from fuel gas of an IGCC system at elevated temperature and pressure. The need to define an ideal solvent has led to the study of the solubility and mass transfer properties of various solvents. Pertaining to another separation technology, fabrication techniques and mechanistic studies for membranes separating CO2 from the fuel gas produced by coal gasification are also being performed. Membranes that consist of CO2-philic ionic liquids encapsulated into a polymeric substrate have been investigated for permeability and selectivity. Finally, dry, regenerable processes based on sorbents are additional techniques for CO2 capture from fuel gas. An overview of these novel techniques is presented along with a research progress status of technologies related to membranes and physical solvents.

  9. Field-scale operation of methane biofiltration systems to mitigate point source methane emissions

    Hettiarachchi, Vijayamala C.; Hettiaratchi, Patrick J.; Mehrotra, Anil K.; Kumar, Sunil

    2011-01-01

    Methane biofiltration (MBF) is a novel low-cost technique for reducing low volume point source emissions of methane (CH4). MBF uses a granular medium, such as soil or compost, to support the growth of methanotrophic bacteria responsible for converting CH4 to carbon dioxide (CO2) and water (H2O). A field research program was undertaken to evaluate the potential to treat low volume point source engineered CH4 emissions using an MBF at a natural gas monitoring station. A new comprehensive three-dimensional numerical model was developed incorporating advection-diffusive flow of gas, biological reactions and heat and moisture flow. The one-dimensional version of this model was used as a guiding tool for designing and operating the MBF. The long-term monitoring results of the field MBF are also presented. The field MBF, operated with no control of precipitation, evaporation, and temperature, provided more than 80% CH4 oxidation throughout the spring, summer, and fall seasons. The numerical model was able to predict the CH4 oxidation behavior of the field MBF with high accuracy. The numerical model simulations are presented for estimating CH4 oxidation efficiencies under various operating conditions, including different filter bed depths and CH4 flux rates. The field observations as well as numerical model simulations indicated that the long-term performance of MBFs is strongly dependent on environmental factors, such as ambient temperature and precipitation. - Highlights: → The one-dimensional version of the model was used as a guiding tool for designing and operating the MBF. → The mathematical model predicted the CH4 oxidation behavior of the field MBF with high accuracy (>80% oxidation). → The performance of the MBF is dependent on ambient temperature and precipitation. - The developed numerical model simulations and field observations for estimating CH4 oxidation efficiencies under various operating conditions indicate that the long-term performance of MBFs is strongly

  10. Observational constraints on the physical nature of submillimetre source multiplicity: chance projections are common

    Hayward, Christopher C.; Chapman, Scott C.; Steidel, Charles C.; Golob, Anneya; Casey, Caitlin M.; Smith, Daniel J. B.; Zitrin, Adi; Blain, Andrew W.; Bremer, Malcolm N.; Chen, Chian-Chou; Coppin, Kristen E. K.; Farrah, Duncan; Ibar, Eduardo; Michałowski, Michał J.; Sawicki, Marcin; Scott, Douglas; van der Werf, Paul; Fazio, Giovanni G.; Geach, James E.; Gurwell, Mark; Petitpas, Glen; Wilner, David J.

    2018-05-01

    Interferometric observations have demonstrated that a significant fraction of single-dish submillimetre (submm) sources are blends of multiple submm galaxies (SMGs), but the nature of this multiplicity, i.e. whether the galaxies are physically associated or chance projections, has not been determined. We performed spectroscopy of 11 SMGs in six multicomponent submm sources, obtaining spectroscopic redshifts for nine of them. For an additional two component SMGs, we detected continuum emission but no obvious features. We supplement our observed sources with four single-dish submm sources from the literature. This sample allows us to statistically constrain the physical nature of single-dish submm source multiplicity for the first time. In three (3/7, or 43 +39/−33 per cent at 95 per cent confidence) of the single-dish sources for which the nature of the blending is unambiguous, the components for which spectroscopic redshifts are available are physically associated, whereas 4/7 (57 +33/−39 per cent) have at least one unassociated component. When components whose spectra exhibit continuum but no features and for which the photometric redshift is significantly different from the spectroscopic redshift of the other component are also considered, 6/9 (67 +26/−37 per cent) of the single-dish sources are comprised of at least one unassociated component SMG. The nature of the multiplicity of one single-dish source is ambiguous. We conclude that physically associated systems and chance projections both contribute to the multicomponent single-dish submm source population. This result contradicts the conventional wisdom that bright submm sources are solely a result of merger-induced starbursts, as blending of unassociated galaxies is also important.

  11. Multiple memory systems, multiple time points: how science can inform treatment to control the expression of unwanted emotional memories.

    Visser, Renée M; Lau-Zhu, Alex; Henson, Richard N; Holmes, Emily A

    2018-03-19

    Memories that have strong emotions associated with them are particularly resilient to forgetting. This is not necessarily problematic, however some aspects of memory can be. In particular, the involuntary expression of those memories, e.g. intrusive memories after trauma, are core to certain psychological disorders. Since the beginning of this century, research using animal models shows that it is possible to change the underlying memory, for example by interfering with its consolidation or reconsolidation. While the idea of targeting maladaptive memories is promising for the treatment of stress and anxiety disorders, a direct application of the procedures used in non-human animals to humans in clinical settings is not straightforward. In translational research, more attention needs to be paid to specifying what aspect of memory (i) can be modified and (ii) should be modified. This requires a clear conceptualization of what aspect of memory is being targeted, and how different memory expressions may map onto clinical symptoms. Furthermore, memory processes are dynamic, so procedural details concerning timing are crucial when implementing a treatment and when assessing its effectiveness. To target emotional memory in its full complexity, including its malleability, science cannot rely on a single method, species or paradigm. Rather, a constructive dialogue is needed between multiple levels of research, all the way 'from mice to mental health'.This article is part of a discussion meeting issue 'Of mice and mental health: facilitating dialogue between basic and clinical neuroscientists'. © 2018 The Authors.

  12. Source of vacuum electromagnetic zero-point energy and Dirac's large numbers hypothesis

    Simaciu, I.; Dumitrescu, G.

    1993-01-01

    Stochastic electrodynamics states that the zero-point fluctuation of the vacuum (ZPF) is an electromagnetic zero-point radiation with spectral density ρ(ω) = ℏω³/2π²c³. Protons, free electrons and atoms are sources for this radiation. Each of them absorbs and emits energy by interacting with the ZPF. At equilibrium the ZPF radiation is scattered by dipoles. The scattered radiation spectral density is ρ(ω,r) = ρ(ω)cσ(ω)/4πr². The dipole-scattered spectral density of the Universe is ρ = ∫_0^R n ρ(ω,r) 4πr² dr. But if σ_atom ≈ σ_e = σ_T, then ρ ≈ ρ(ω)σ_T R n. Moreover, if ρ = ρ(ω), then σ_T R n = 1. With R = GM/c² and σ_T ≅ (e²/m_e c²)² ∝ r_e², the condition σ_T R n = 1 is equivalent to R/r_e = e²/G m_p m_e, i.e. the cosmological coincidence discussed in the context of Dirac's large-numbers hypothesis. (Author)

  13. Application of random-point processes to the detection of radiation sources

    Woods, J.W.

    1978-01-01

    In this report the mathematical theory of random-point processes is reviewed and it is shown how the theory can be used to obtain optimal solutions to the problem of detecting radiation sources. As noted, the theory also applies to image processing in low-light-level or low-count-rate situations. Paralleling Snyder's work, the theory is extended to the multichannel case of a continuous, two-dimensional (2-D), energy-time space. This extension essentially involves showing that the data are doubly stochastic Poisson (DSP) point processes in energy as well as time. Further, a new 2-D recursive formulation is presented for the radiation-detection problem with large computational savings over nonrecursive techniques when the number of channels is large (greater than or equal to 30). Finally, some adaptive strategies for on-line ''learning'' of unknown, time-varying signal and background-intensity parameters and statistics are presented and discussed. These adaptive procedures apply when a complete statistical description is not available a priori.
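
    For orientation, the classical likelihood-ratio detector for an inhomogeneous Poisson point process with known intensities, which this line of work generalizes to doubly stochastic and multichannel settings, can be written as

    $$ \ln \Lambda = \sum_{i=1}^{N} \ln\!\left(1 + \frac{s(t_i)}{b(t_i)}\right) - \int_{0}^{T} s(t)\,dt, $$

    where t_i are the observed event times, b(t) the background intensity and s(t) the additional source intensity; a source is declared present when ln Λ exceeds a threshold.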

  14. An effective dose assessment technique with NORM added consumer products using skin-point source on computational human phantom

    Yoo, Do Hyeon; Shin, Wook-Geun; Lee, Hyun Cheol; Choi, Hyun Joon; Testa, Mauro; Lee, Jae Kook; Yeom, Yeon Soo; Kim, Chan Hyeong; Min, Chul Hee

    2016-01-01

    The aim of this study is to develop a technique for assessing the effective dose from naturally occurring radioactive material (NORM) added consumer products by calculating the organ equivalent doses with a Monte Carlo (MC) simulation and a computational human phantom. We suggest a method for determining the MC source term based on a skin-point source, enabling convenient and conservative modeling of various types of products. To validate the skin-point source method, the organ equivalent doses were compared with those obtained from product-modeling sources of realistic shape for a pillow, waist supporter, sleeping mattress, etc. Our results show that the organ equivalent doses followed a similar tendency with source location for both source-determination methods; however, the annual effective dose with the skin-point source was more conservative than that with the modeling source, by up to a factor of 3.3. Assuming a gamma energy of 1 MeV and a product activity of 1 Bq g⁻¹, the annual effective doses of the pillow, waist supporter and sleeping mattress with the skin-point source were 3.09E-16 Sv Bq⁻¹ year⁻¹, 1.45E-15 Sv Bq⁻¹ year⁻¹, and 2.82E-16 Sv Bq⁻¹ year⁻¹, respectively, while the product-modeling source gave 9.22E-17 Sv Bq⁻¹ year⁻¹, 9.29E-16 Sv Bq⁻¹ year⁻¹, and 8.83E-17 Sv Bq⁻¹ year⁻¹, respectively. In conclusion, this study demonstrates that the skin-point source method can be employed to efficiently evaluate the annual effective dose due to the use of NORM-added consumer products. - Highlights: • We evaluate the exposure dose from the use of NORM-added consumer products. • We suggest a method for determining the MC source term based on a skin-point source. • To validate the skin-point source, the organ equivalent doses were compared with those of the modeling source. • The skin-point source could
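
    For reference, the effective dose reported above is assembled from the simulated organ doses in the usual ICRP manner (the study's particular phantom and weighting-factor set are assumed rather than restated here):

    $$ E = \sum_{T} w_T\, H_T, \qquad H_T = \sum_{R} w_R\, D_{T,R}, $$

    where D_{T,R} is the absorbed dose to tissue T from radiation R, w_R is the radiation weighting factor (unity for photons) and the tissue weighting factors w_T sum to one.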

  15. Analysis of ultrasonically rotating droplet using moving particle semi-implicit and distributed point source methods

    Wada, Yuji; Yuge, Kohei; Tanaka, Hiroki; Nakamura, Kentaro

    2016-07-01

    Numerical analysis of the rotation of an ultrasonically levitated droplet with a free surface boundary is discussed. The ultrasonically levitated droplet is often reported to rotate owing to the surface tangential component of acoustic radiation force. To observe the torque from an acoustic wave and clarify the mechanism underlying the phenomena, it is effective to take advantage of numerical simulation using the distributed point source method (DPSM) and moving particle semi-implicit (MPS) method, both of which do not require a calculation grid or mesh. In this paper, the numerical treatment of the viscoacoustic torque, which emerges from the viscous boundary layer and governs the acoustical droplet rotation, is discussed. The Reynolds stress traction force is calculated from the DPSM result using the idea of effective normal particle velocity through the boundary layer and input to the MPS surface particles. A droplet levitated in an acoustic chamber is simulated using the proposed calculation method. The droplet is vertically supported by a plane standing wave from an ultrasonic driver and subjected to a rotating sound field excited by two acoustic sources on the side wall with different phases. The rotation of the droplet is successfully reproduced numerically and its acceleration is discussed and compared with those in the literature.

  16. Lead in the blood of children living close to industrial point sources in Bulgaria and Poland

    Willeke-Wetstein, C.; Bainova, A.; Georgieva, R.; Huzior-Balajewicz, A.; Bacon, J. R.

    2003-05-01

    In Eastern European countries some industrial point sources are still suspected to have unacceptable emission rates of lead that pose a major health risk, in particular to children. An interdisciplinary research project under the auspices of the EU had the aims (1) to monitor the current contamination of two industrial zones in Bulgaria and Poland, (2) to relate the Pb levels in ecological strata to the internal exposure of children, and (3) to develop public health strategies in order to reduce the health risk from heavy metals. The human monitoring of Pb in Poland did not show increased health risks for the children living in an industrial zone close to Krakow. Bulgarian children, however, exceeded the WHO limit of 100 μg lead per litre blood by over one hundred percent (240 μg/l). Samples of soil, fodder and livestock organs showed elevated concentrations of lead. Recent literature results are compared with the findings in Bulgaria and Poland. The sources of the high internal exposure of children are discussed. Public health strategies to prevent mental dysfunction in Bulgarian children at risk include awareness building and social measures.

  17. Biosolid stockpiles are a significant point source for greenhouse gas emissions.

    Majumder, Ramaprasad; Livesley, Stephen J; Gregory, David; Arndt, Stefan K

    2014-10-01

    The wastewater treatment process generates large amounts of sewage sludge that are dried and then often stored in biosolid stockpiles in treatment plants. Because the biosolids are rich in decomposable organic matter they could be a significant source of greenhouse gas (GHG) emissions, yet there are no direct measurements of GHG from stockpiles. We therefore measured the direct emissions of methane (CH4), nitrous oxide (N2O) and carbon dioxide (CO2) on a monthly basis from three different age classes of biosolid stockpiles at the Western Treatment Plant (WTP), Melbourne, Australia, from December 2009 to November 2011 using manual static chambers. All biosolid stockpiles were a significant point source for CH4 and N2O emissions. The youngest biosolids (nitrate and ammonium concentration. We also modeled CH4 emissions with a first-order decay model; the model-based estimates of annual CH4 emissions were higher than the direct field-based estimates. Our results indicate that labile organic material in stockpiles is decomposed over time and that nitrogen decomposition processes lead to significant N2O emissions. Carbon decomposition favors CO2 over CH4 production, probably because of aerobic stockpile conditions or CH4 oxidation in the outer stockpile layers. Although the GHG emission rate decreased with biosolid age, managers of biosolid stockpiles should assess alternate storage or uses for biosolids to avoid nutrient losses and GHG emissions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. NOx emissions from large point sources: variability in ozone production, resulting health damages and economic costs

    Mauzerall, D.L.; Namsoug Kim

    2005-01-01

    We present a proof-of-concept analysis of the measurement of the health damage of ozone (O3) produced from nitrogen oxides (NOx = NO + NO2) emitted by individual large point sources in the eastern United States. We use a regional atmospheric model of the eastern United States, the Comprehensive Air quality Model with Extensions (CAMx), to quantify the variable impact that a fixed quantity of NOx emitted from individual sources can have on the downwind concentration of surface O3, depending on temperature and local biogenic hydrocarbon emissions. We also examine the dependence of resulting O3-related health damages on the size of the exposed population. The investigation is relevant to the increasingly widely used 'cap and trade' approach to NOx regulation, which presumes that shifts of emission over time and space, holding the total fixed over the course of the summer O3 season, will have minimal effect on the environmental outcome. By contrast, we show that a shift of a unit of NOx emissions from one place or time to another could result in large changes in resulting health effects due to O3 formation and exposure. We indicate how the type of modeling carried out here might be used to attach externality-correcting prices to emissions. Charging emitters fees that are commensurate with the damage caused by their NOx emissions would create an incentive for emitters to reduce emissions at times and in locations where they cause the largest damage. (author)

  19. The energy sources and nuclear energy - The point of view of the Belgian Catholic Church

    Hoenraet, Christian

    2000-01-01

    The problems related to the environment are reported regularly to the public by means of newspapers, radio and television. The story is the product of a journalistic process and in general does not bear much resemblance to the original event. The rate and type of reportage depend not only on the body of data available to the journalist but also on the information sources the journalist chooses to use. The same story may be reported in a positive or a negative way. Finally, people are overwhelmed by contradictory information and become uncertain or frightened. In order to provide the general public with objective information about nuclear energy in particular, and to make a statement about the position of the Belgian Catholic Church concerning this matter, the results of the study were published in Dutch in the form of a book entitled 'The Energy Sources and Nuclear Energy - Comparative analysis and ethical thoughts', written by the same author. This paper is a short survey of the results of the study and presents the point of view of the Belgian Catholic Church in the energy debate.

  20. Eddy covariance methane flux measurements over a grazed pasture: effect of cows as moving point sources

    Felber, R.; Münger, A.; Neftel, A.; Ammann, C.

    2015-06-01

    Methane (CH4) from ruminants contributes one-third of global agricultural greenhouse gas emissions. Eddy covariance (EC) technique has been extensively used at various flux sites to investigate carbon dioxide exchange of ecosystems. Since the development of fast CH4 analyzers, the instrumentation at many flux sites has been amended for these gases. However, the application of EC over pastures is challenging due to the spatially and temporally uneven distribution of CH4 point sources induced by the grazing animals. We applied EC measurements during one grazing season over a pasture with 20 dairy cows (mean milk yield: 22.7 kg d-1) managed in a rotational grazing system. Individual cow positions were recorded by GPS trackers to attribute fluxes to animal emissions using a footprint model. Methane fluxes with cows in the footprint were up to 2 orders of magnitude higher than ecosystem fluxes without cows. Mean cow emissions of 423 ± 24 g CH4 head-1 d-1 (best estimate from this study) correspond well to animal respiration chamber measurements reported in the literature. However, a systematic effect of the distance between source and EC tower on cow emissions was found, which is attributed to the analytical footprint model used. We show that the EC method allows one to determine CH4 emissions of cows on a pasture if the data evaluation is adjusted for this purpose and if some cow distribution information is available.

  1. Luminosity distribution in the central regions of Messier 87: Isothermal core, point source, or black hole

    de Vaucouleurs, G.; Nieto, J.

    1979-01-01

    A combination of photographic and photoelectric photometry with the McDonald 2 m reflector is used to derive a precise mean luminosity profile μ_B(r*) of M87 (jet excluded) at ≈0''.6 resolution out to r* = 70''. Within 8'' from the center the luminosity is less than predicted by extrapolation of the r^1/4 law defined by the main body of the galaxy (8'' < r* < 70''). For an adopted distance modulus (m - M)_0 = 30.5, the structural length of the underlying isothermal is α = 2''.78 = 170 pc, the mass of the ''black hole'' is M_0 = 1.7 × 10^9 M_sun, and the luminosity of the point source (B_0 = 16.95, M_0 = -13.55) equals 4.2% of the integrated luminosity B(6'') = 13.52 of the galaxy within r* = 6''. These results agree closely with and confirm the work of the Hale team. Comparison of the McDonald and Hale data suggests that the central source may have been slightly brighter (≈0.5 mag) in 1964 than in 1975-1977.

  2. Point-source and diffuse high-energy neutrino emission from Type IIn supernovae

    Petropoulou, M.; Coenders, S.; Vasilopoulos, G.; Kamble, A.; Sironi, L.

    2017-09-01

    Type IIn supernovae (SNe), a rare subclass of core collapse SNe, explode in dense circumstellar media that have been modified by the SNe progenitors at their last evolutionary stages. The interaction of the freely expanding SN ejecta with the circumstellar medium gives rise to a shock wave propagating in the dense SN environment, which may accelerate protons to multi-PeV energies. Inelastic proton-proton collisions between the shock-accelerated protons and those of the circumstellar medium lead to multimessenger signatures. Here, we evaluate the possible neutrino signal of Type IIn SNe and compare with IceCube observations. We employ a Monte Carlo method for the calculation of the diffuse neutrino emission from the SN IIn class to account for the spread in their properties. The cumulative neutrino emission is found to be ˜10 per cent of the observed IceCube neutrino flux above 60 TeV. Type IIn SNe would be the dominant component of the diffuse astrophysical flux, only if 4 per cent of all core collapse SNe were of this type and 20-30 per cent of the shock energy was channeled to accelerated protons. Lower values of the acceleration efficiency are accessible by the observation of a single Type IIn SN as a neutrino point source with IceCube using up-going muon neutrinos. Such an identification is possible in the first year following the SN shock breakout for sources within 20 Mpc.

  3. An ultrabright and monochromatic electron point source made of a LaB6 nanowire

    Zhang, Han; Tang, Jie; Yuan, Jinshi; Yamauchi, Yasushi; Suzuki, Taku T.; Shinya, Norio; Nakajima, Kiyomi; Qin, Lu-Chang

    2016-03-01

    Electron sources in the form of one-dimensional nanotubes and nanowires are an essential tool for investigations in a variety of fields, such as X-ray computed tomography, flexible displays, chemical sensors and electron optics applications. However, field emission instability and the need to work under high-vacuum or high-temperature conditions have imposed stringent requirements that are currently limiting the range of application of electron sources. Here we report the fabrication of a LaB6 nanowire with only a few La atoms bonded on the tip that emits collimated electrons from a single point with high monochromaticity. The nanostructured tip has a low work function of 2.07 eV (lower than that of Cs) while remaining chemically inert, two properties usually regarded as mutually exclusive. Installed in a scanning electron microscope (SEM) field emission gun, our tip shows a current density gain that is about 1,000 times greater than that achievable with W(310) tips, and no emission decay for tens of hours of operation. Using this new SEM, we acquired very low-noise, high-resolution images together with rapid chemical compositional mapping using a tip operated at room temperature and at 10-times higher residual gas pressure than that required for W tips.

  4. Point sources of emerging contaminants along the Colorado River Basin: Source water for the arid Southwestern United States

    Jones-Lepp, Tammy L.; Sanchez, Charles; Alvarez, David A.; Wilson, Doyle C.; Taniguchi-Fu, Randi-Laurant

    2012-01-01

    Emerging contaminants (ECs) (e.g., pharmaceuticals, illicit drugs, personal care products) have been detected in waters across the United States. The objective of this study was to evaluate point sources of ECs along the Colorado River, from the headwaters in Colorado to the Gulf of California. At selected locations in the Colorado River Basin (sites in Colorado, Utah, Nevada, Arizona, and California), waste stream tributaries and receiving surface waters were sampled using either grab sampling or polar organic chemical integrative samplers (POCIS). The grab samples were extracted using solid-phase cartridge extraction (SPE), and the POCIS sorbents were transferred into empty SPEs and eluted with methanol. All extracts were prepared for, and analyzed by, liquid chromatography–electrospray-ion trap mass spectrometry (LC–ESI-ITMS). Log DOW values were calculated for all ECs in the study and compared to the empirical data collected. POCIS extracts were screened for the presence of estrogenic chemicals using the yeast estrogen screen (YES) assay. Extracts from the 2008 POCIS deployment in the Las Vegas Wash showed the second highest estrogenicity response. In the grab samples, azithromycin (an antibiotic) was detected in all but one urban waste stream, with concentrations ranging from 30 ng/L to 2800 ng/L. Concentration levels of azithromycin, methamphetamine and pseudoephedrine showed temporal variation from the Tucson WWTP. Those ECs that were detected in the main surface water channels (those that are diverted for urban use and irrigation along the Colorado River) were in the region of the limit-of-detection (e.g., 10 ng/L), but most were below detection limits.

  5. Anthropogenic Methane Emissions in California's San Joaquin Valley: Characterizing Large Point Source Emitters

    Hopkins, F. M.; Duren, R. M.; Miller, C. E.; Aubrey, A. D.; Falk, M.; Holland, L.; Hook, S. J.; Hulley, G. C.; Johnson, W. R.; Kuai, L.; Kuwayama, T.; Lin, J. C.; Thorpe, A. K.; Worden, J. R.; Lauvaux, T.; Jeong, S.; Fischer, M. L.

    2015-12-01

    Methane is an important atmospheric pollutant that contributes to global warming and tropospheric ozone production. Methane mitigation could reduce near term climate change and improve air quality, but is hindered by a lack of knowledge of anthropogenic methane sources. Recent work has shown that methane emissions are not evenly distributed in space, or across emission sources, suggesting that a large fraction of anthropogenic methane comes from a few "super-emitters." We studied the distribution of super-emitters in California's southern San Joaquin Valley, where elevated levels of atmospheric CH4 have also been observed from space. Here, we define super-emitters as methane plumes that could be reliably detected (i.e., plume observed more than once in the same location) under varying wind conditions by airborne thermal infrared remote sensing. The detection limit for this technique was determined to be 4.5 kg CH4 h-1 by a controlled release experiment, corresponding to column methane enhancement at the point of emissions greater than 20% above local background levels. We surveyed a major oil production field, and an area with a high concentration of large dairies using a variety of airborne and ground-based measurements. Repeated airborne surveys (n=4) with the Hyperspectral Thermal Emission Spectrometer revealed 28 persistent methane plumes emanating from oil field infrastructure, including tanks, wells, and processing facilities. The likelihood that a given source type was a super-emitter varied from roughly 1/3 for processing facilities to 1/3000 for oil wells. 11 persistent plumes were detected in the dairy area, and all were associated with wet manure management. The majority (11/14) of manure lagoons in the study area were super-emitters. Comparing to a California methane emissions inventory for the surveyed areas, we estimate that super-emitters comprise a minimum of 9% of inventoried dairy emissions, and 13% of inventoried oil emissions in this region.

  6. Using sorbent waste materials to enhance treatment of micro-point source effluents by constructed wetlands

    Green, Verity; Surridge, Ben; Quinton, John; Matthews, Mike

    2014-05-01

    Sorbent materials are widely used in environmental settings as a means of enhancing pollution remediation. A key area of environmental concern is that of water pollution, including the need to treat micro-point sources of wastewater pollution, such as from caravan sites or visitor centres. Constructed wetlands (CWs) represent one means for effective treatment of wastewater from small wastewater producers, in part because they are believed to be economically viable and environmentally sustainable. Constructed wetlands have the potential to remove a range of pollutants found in wastewater, including nitrogen (N), phosphorus (P), biochemical oxygen demand (BOD) and carbon (C), whilst also reducing the total suspended solids (TSS) concentration in effluents. However, there remain particular challenges for P and N removal from wastewater in CWs, as well as the sometimes limited BOD removal within these treatment systems, particularly for micro-point sources of wastewater. It has been hypothesised that the amendment of CWs with sorbent materials can enhance their potential to treat wastewater, particularly through enhancing the removal of N and P. This paper focuses on data from batch and mesocosm studies that were conducted to identify and assess sorbent materials suitable for use within CWs. The aim in using sorbent material was to enhance the combined removal of phosphate (PO4-P) and ammonium (NH4-N). The key selection criteria for the sorbent materials were that they possess effective PO4-P, NH4-N or combined pollutant removal, come from low cost and sustainable sources, have potential for reuse, for example as a fertiliser or soil conditioner, and show limited potential for re-release of adsorbed nutrients. The sorbent materials selected for testing were alum sludge from water treatment works, ochre derived from minewater treatment, biochar derived from various feedstocks, plasterboard and zeolite. The performance of the individual sorbents was assessed through

  7. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    Vasileios Psychas, Dimitrios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and much more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement both in the positioning accuracy achieved and in the convergence time needed to reach geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant nowadays resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction of convergence

  8. The recovery of a time-dependent point source in a linear transport equation: application to surface water pollution

    Hamdi, Adel

    2009-01-01

    The aim of this paper is to localize the position of a point source and recover the history of its time-dependent intensity function that is both unknown and constitutes the right-hand side of a 1D linear transport equation. Assuming that the source intensity function vanishes before reaching the final control time, we prove that recording the state with respect to the time at two observation points framing the source region leads to the identification of the source position and the recovery of its intensity function in a unique manner. Note that at least one of the two observation points should be strategic. We establish an identification method that determines quasi-explicitly the source position and transforms the task of recovering its intensity function into solving directly a well-conditioned linear system. Some numerical experiments done on a variant of the water pollution BOD model are presented
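
    A representative form of the governing model, written here only for orientation (the exact coefficients and boundary conditions of the paper are not reproduced), is a one-dimensional advection-dispersion-reaction equation driven by an unknown point source:

    $$ \frac{\partial u}{\partial t} + V\frac{\partial u}{\partial x} - D\frac{\partial^2 u}{\partial x^2} + R\,u \;=\; \lambda(t)\,\delta(x - S), \qquad 0 < x < \ell,\; 0 < t < T, $$

    where both the source position S and the intensity λ(t) are unknown; the identification uses the records u(a, t) and u(b, t) at two observation points a < S < b, with λ assumed to vanish before the final control time T.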

  9. Use of ultrasonic array method for positioning multiple partial discharge sources in transformer oil.

    Xie, Qing; Tao, Junhan; Wang, Yongqiang; Geng, Jianghai; Cheng, Shuyi; Lü, Fangcheng

    2014-08-01

    Fast and accurate positioning of partial discharge (PD) sources in transformer oil is very important for the safe, stable operation of power systems because it allows timely elimination of insulation faults. There is usually more than one PD source once an insulation fault occurs in the transformer oil. This study, which has both theoretical and practical significance, proposes a method of identifying multiple PD sources in the transformer oil. The method combines the two-sided correlation transformation algorithm for broadband signal focusing with the modified Gerschgorin disk estimator. The multiple signal classification (MUSIC) method is used to determine the directions of arrival of signals from multiple PD sources. The ultrasonic array positioning method is based on multi-platform direction finding and global optimization searching. Both a 4 × 4 square planar ultrasonic sensor array and an ultrasonic array detection platform were built to test the method of identifying and positioning multiple PD sources. The obtained results verify the validity and the engineering practicability of this method.
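
    A minimal narrowband MUSIC sketch for a uniform linear array is given below in Python for orientation. The cited work instead uses a 4 × 4 planar array together with broadband focusing (the two-sided correlation transformation) and a modified Gerschgorin disk estimate of the source number, none of which is reproduced here.

```python
import numpy as np

def music_doa(snapshots, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """Narrowband MUSIC pseudospectrum for a uniform linear array.

    snapshots: complex array of shape (n_sensors, n_snapshots)
    n_sources: assumed number of sources (estimated separately in practice)
    d: element spacing in wavelengths
    Returns the scan angles (degrees) and the MUSIC pseudospectrum; peaks
    indicate the estimated directions of arrival.
    """
    m = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]      # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)                         # eigenvalues ascending
    En = eigvecs[:, : m - n_sources]                             # noise subspace
    spectrum = []
    for th in np.deg2rad(angles):
        a = np.exp(-2j * np.pi * d * np.arange(m) * np.sin(th))  # steering vector
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return angles, np.asarray(spectrum)
```

    In the positioning step, the directions of arrival estimated at several array platforms are then intersected by a global optimization search to locate each PD source in the oil volume.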

  10. Solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators.

    Zhao, Jing; Zong, Haili

    2018-01-01

    In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the process of cyclic and parallel iterative methods and propose two mixed iterative algorithms. Our several algorithms do not need any prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms to solve the multiple-set split equality problem.
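
    For reference, the multiple-set split equality common fixed-point problem treated here can be stated in the following standard form (generic notation; H1, H2, H3 are real Hilbert spaces):

    $$ \text{find}\;\; x \in \bigcap_{i=1}^{p} \mathrm{Fix}(U_i) \subseteq H_1,\quad y \in \bigcap_{j=1}^{r} \mathrm{Fix}(T_j) \subseteq H_2 \quad\text{such that}\quad Ax = By, $$

    where A: H1 → H3 and B: H2 → H3 are bounded linear operators and the U_i, T_j are firmly quasi-nonexpansive; the proposed parallel, cyclic and mixed iterations update x and y without requiring knowledge of the norms of A and B.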

  11. Roadside Multiple Objects Extraction from Mobile Laser Scanning Point Cloud Based on DBN

    LUO Haifeng

    2018-02-01

    Full Text Available This paper proposes a novel algorithm for exploring deep belief network (DBN) architectures to extract and recognize roadside facilities (trees, cars and traffic poles) from mobile laser scanning (MLS) point clouds. The proposed method first partitions the raw MLS point cloud into blocks and then removes the ground and building points. In order to partition the off-ground objects into individual objects, off-ground points are organized into an octree structure and clustered into candidate objects based on connected components. To improve segmentation performance on clusters containing overlapped objects, a refinement step using a voxel-based normalized cut is then applied. In addition, a multi-view feature descriptor is generated for each independent roadside facility based on binary images. Finally, a deep belief network (DBN) is trained to extract tree, car and traffic-pole objects. Experiments were undertaken to evaluate the validity of the proposed method with two datasets acquired by the Lynx Mobile Mapper System. The precision of the tree, car and traffic-pole extraction results was 97.31%, 97.79% and 92.78%, respectively; the recall was 98.30%, 98.75% and 96.77%; the quality was 95.70%, 93.81% and 90.00%; and the F1 measure was 97.80%, 96.81% and 94.73%.

  12. Under digital fluoroscopic guidance multiple-point injection with absolute alcohol and pinyangmycin for the treatment of superficial venous malformations

    Yang Ming; Xiao Gang; Peng Youlin

    2010-01-01

    Objective: to investigate the therapeutic efficacy of multiple-point injection with absolute alcohol and pinyangmycin under digital fluoroscopic guidance for superficial venous malformations. Methods: By using a disposal venous transfusion needle the superficial venous malformation was punctured and then contrast media lohexol was injected in to visualize the tumor body, which was followed by the injection of ethanol and pinyangmycin when the needle was confirmed in the correct position. The procedure was successfully performed in 31 patients. The clinical results were observed and analyzed. Results: After one treatment complete cure was achieved in 21 cases and marked effect was obtained in 8 cases, with a total effectiveness of 93.5%. Conclusion: Multiple-point injection with ethanol and pinyangmycin under digital fluoroscopic guidance is an effective and safe technique for the treatment of superficial venous malformations, especially for the lesions that are deeply located and ill-defined. (authors)

  13. Multiple sclerosis: patients’ information sources and needs on disease symptoms and management

    Albert I Matti

    2010-06-01

    Full Text Available Albert I Matti1, Helen McCarl2, Pamela Klaer2, Miriam C Keane1, Celia S Chen1; 1Department of Ophthalmology, Flinders Medical Centre and Flinders University, Bedford Park, SA, Australia; 2The Multiple Sclerosis Society of South Australia and Northern Territory, Klemzig, SA, Australia. Objective: To investigate the current information sources of patients with multiple sclerosis (MS) in the early stages of their disease and to identify patients’ preferred source of information. The relative amounts of information from the different sources were also compared. Methods: Participants at a newly diagnosed information session organized by the Multiple Sclerosis Society of South Australia were invited to complete a questionnaire. Participants were asked to rate on a visual analog scale how much information they had received about MS and optic neuritis from different information sources and how much information they would like to receive from each of the sources. Results: A close to ideal amount of information is being provided by the MS Society and MS specialist nurses. There is a clear deficit between what information patients are currently receiving and the amount of information they actually want from various sources. Patients wish to receive significantly more information from treating general practitioners, eye specialists, neurologists, and education sessions. Patients reported less than adequate information received on optic neuritis from all sources. Conclusion: This study noted a clear information deficit regarding MS from all sources. This deficit is more pronounced in relation to optic neuritis and needs to be addressed in the future. Practice implications: More patient information and counselling needs to be provided to MS patients even at early stages of their disease, especially in relation to management of disease relapse. Keywords: information sources, information needs, MS patients, optic neuritis

  14. Modeling Multi-Event Non-Point Source Pollution in a Data-Scarce Catchment Using ANN and Entropy Analysis

    Lei Chen

    2017-06-01

    Full Text Available Event-based runoff–pollutant relationships are key to water quality management, but the scarcity of measured data results in poor model performance, especially for multiple rainfall events. In this study, a new framework was proposed for event-based non-point source (NPS) prediction and evaluation. An artificial neural network (ANN) was used to extend the runoff–pollutant relationship from complete-data events to other, data-scarce events. The interpolation method was then used to solve the problem of tail deviation in the simulated pollutographs. In addition, the entropy method was utilized to train the ANN for comprehensive evaluations. A case study was performed in the Three Gorges Reservoir Region, China. Results showed that the ANN performed well in the NPS simulation, especially for light rainfall events, and the phosphorus predictions were always more accurate than the nitrogen predictions under scarce data conditions. In addition, scarcity of peak pollutant data had a significant impact on the model performance. Furthermore, traditional evaluation indicators lead to some information loss during model evaluation, whereas the entropy weighting method provides a more accurate assessment. These results would be valuable for monitoring schemes and the quantitation of event-based NPS pollution, especially in data-poor catchments.

  15. Modeling a Single SEP Event from Multiple Vantage Points Using the iPATH Model

    Hu, Junxiang; Li, Gang; Fu, Shuai; Zank, Gary; Ao, Xianzhi

    2018-02-01

    Using the recently extended 2D improved Particle Acceleration and Transport in the Heliosphere (iPATH) model, we model an example gradual solar energetic particle event as observed at multiple locations. Protons and ions that are energized via the diffusive shock acceleration mechanism are followed at a 2D coronal mass ejection-driven shock where the shock geometry varies across the shock front. The subsequent transport of energetic particles, including cross-field diffusion, is modeled by a Monte Carlo code that is based on a stochastic differential equation method. Time intensity profiles and particle spectra at multiple locations and different radial distances, separated in longitudes, are presented. The results shown here are relevant to the upcoming Parker Solar Probe mission.

  16. Effects of pointing compared with naming and observing during encoding on item and source memory in young and older adults.

    Ouwehand, Kim; van Gog, Tamara; Paas, Fred

    2016-10-01

    Research showed that source memory functioning declines with ageing. Evidence suggests that encoding visual stimuli with manual pointing in addition to visual observation can have a positive effect on spatial memory compared with visual observation only. The present study investigated whether pointing at picture locations during encoding would lead to better spatial source memory than naming (Experiment 1) and visual observation only (Experiment 2) in young and older adults. Experiment 3 investigated whether response modality during the test phase would influence spatial source memory performance. Experiments 1 and 2 supported the hypothesis that pointing during encoding led to better source memory for picture locations than naming or observation only. Young adults outperformed older adults on the source memory but not the item memory task in both Experiments 1 and 2. In Experiments 1 and 2, participants manually responded in the test phase. Experiment 3 showed that if participants had to verbally respond in the test phase, the positive effect of pointing compared with naming during encoding disappeared. The results suggest that pointing at picture locations during encoding can enhance spatial source memory in both young and older adults, but only if the response modality is congruent in the test phase.

  17. Active control on high-order coherence and statistic characterization on random phase fluctuation of two classical point sources.

    Hong, Peilong; Li, Liming; Liu, Jianji; Zhang, Guoquan

    2016-03-29

    Young's double-slit or two-beam interference is of fundamental importance for understanding various interference effects, in which the stationary phase difference between two beams plays the key role in the first-order coherence. Different from the case of first-order coherence, in high-order optical coherence the statistical behavior of the optical phase plays the key role. In this article, by employing a fundamental interfering configuration with two classical point sources, we showed that the high-order optical coherence between two classical point sources can be actively designed by controlling the statistical behavior of the relative phase difference between the two point sources. Synchronous-position Nth-order subwavelength interference with an effective wavelength of λ/M was demonstrated, in which λ is the wavelength of the point sources and M is an integer not larger than N. Interestingly, we found that the synchronous-position Nth-order interference fringe fingerprints the statistical trace of the random phase fluctuation of the two classical point sources; it therefore provides an effective way to characterize the statistical properties of phase fluctuation for incoherent light sources.

  18. Realtime Gas Emission Monitoring at Hazardous Sites Using a Distributed Point-Source Sensing Infrastructure

    Manes, Gianfranco; Collodi, Giovanni; Gelpi, Leonardo; Fusco, Rosanna; Ricci, Giuseppe; Manes, Antonio; Passafiume, Marco

    2016-01-01

    This paper describes a distributed point-source monitoring platform for gas level and leakage detection in hazardous environments. The platform, based on a wireless sensor network (WSN) architecture, is organised into sub-networks to be positioned in the plant’s critical areas; each sub-net includes a gateway unit wirelessly connected to the WSN nodes, hence providing an easily deployable, stand-alone infrastructure featuring a high degree of scalability and reconfigurability. Furthermore, the system provides automated calibration routines which can be accomplished by non-specialized maintenance operators without compromising system reliability. Internet connectivity is provided via TCP/IP over GPRS (Internet standard protocols over mobile networks) gateways at a one-minute sampling rate. Environmental and process data are forwarded to a remote server and made available to authenticated users through a user interface that provides data rendering in various formats and multi-sensor data fusion. The platform is able to provide real-time plant management with an effective, accurate tool for immediate warning in case of critical events. PMID:26805832

  19. Realtime Gas Emission Monitoring at Hazardous Sites Using a Distributed Point-Source Sensing Infrastructure

    Gianfranco Manes

    2016-01-01

    Full Text Available This paper describes a distributed point-source monitoring platform for gas level and leakage detection in hazardous environments. The platform, based on a wireless sensor network (WSN) architecture, is organised into sub-networks to be positioned in the plant’s critical areas; each sub-net includes a gateway unit wirelessly connected to the WSN nodes, hence providing an easily deployable, stand-alone infrastructure featuring a high degree of scalability and reconfigurability. Furthermore, the system provides automated calibration routines which can be accomplished by non-specialized maintenance operators without compromising system reliability. Internet connectivity is provided via TCP/IP over GPRS (Internet standard protocols over mobile networks) gateways at a one-minute sampling rate. Environmental and process data are forwarded to a remote server and made available to authenticated users through a user interface that provides data rendering in various formats and multi-sensor data fusion. The platform is able to provide real-time plant management with an effective, accurate tool for immediate warning in case of critical events.

  20. Novel Remarks on Point Mass Sources, Firewalls, Null Singularities and Gravitational Entropy

    Perelman, Carlos Castro

    2016-01-01

    A continuous family of static spherically symmetric solutions of Einstein's vacuum field equations with a spatial singularity at the origin r = 0 is found. These solutions are parametrized by a real-valued parameter λ (ranging from 0 to 1) such that the radial horizon's location is displaced continuously towards the singularity (r = 0) as λ increases. In the extreme limit λ = 1, the locations of the singularity and horizon merge, leading to a null singularity. In this extreme case, any infalling observer hits the null singularity at the very moment he/she crosses the horizon. This fact may have important consequences for the resolution of the firewall problem and the complementarity controversy in black holes. A heuristic argument is provided for how one might avoid the Hawking particle emission process in this extreme case when the singularity and horizon merge. The field equations due to a delta-function point-mass source at r = 0 are solved and the Euclidean gravitational action corresponding to those solutions is evaluated explicitly. It is found that the Euclidean action is precisely equal to the black hole entropy (in Planck area units). This result holds in any dimension D ≥ 3.

  1. Application of distributed point source method (DPSM) to wave propagation in anisotropic media

    Fooladi, Samaneh; Kundu, Tribikram

    2017-04-01

    Distributed Point Source Method (DPSM) was developed by Placko and Kundu as a technique for modeling electromagnetic and elastic wave propagation problems. DPSM has been used for modeling ultrasonic, electrostatic and electromagnetic fields scattered by defects and anomalies in a structure. The modeling of such scattered fields helps to extract valuable information about the location and type of defects. Therefore, DPSM can be used as an effective tool for Non-Destructive Testing (NDT). Anisotropy adds to the complexity of the problem, both mathematically and computationally. Computation of the Green's function, which is used as the fundamental solution in DPSM, is considerably more challenging for anisotropic media, and it cannot be reduced to a closed-form solution as is done for isotropic materials. The purpose of this study is to investigate and implement DPSM for an anisotropic medium. While the mathematical formulation and the numerical algorithm will be considered for general anisotropic media, more emphasis will be placed on transversely isotropic materials in the numerical example presented in this paper. The unidirectional fiber-reinforced composites which are widely used in today's industry are good examples of transversely isotropic materials. Development of an effective and accurate NDT method based on these modeling results can be of paramount importance for in-service monitoring of damage in composite structures.

  2. Economic-environmental modeling of point source pollution in Jefferson County, Alabama, USA.

    Kebede, Ellene; Schreiner, Dean F; Huluka, Gobena

    2002-05-01

    This paper uses an integrated economic-environmental model to assess the point source pollution from major industries in Jefferson County, Northern Alabama. Industrial expansion generates employment, income, and tax revenue for the public sector; however, it is also often associated with the discharge of chemical pollutants. Jefferson County is one of the largest industrial counties in Alabama and experienced smog warnings and elevated ambient ozone concentrations during 1996-1999. Past studies of chemical discharge from industries have used models to assess the pollution impact of individual plants. This study, however, uses an extended Input-Output (I-O) economic model with pollution emission coefficients to assess direct and indirect pollutant emissions for several major industries in Jefferson County. The major findings of the study are: (a) the principal emissions by the selected industries are volatile organic compounds (VOCs), which contribute to the ambient ozone concentration; (b) combined direct and indirect emissions are significantly higher than the direct emissions alone for some industries, indicating that an isolated analysis will underestimate an industry's emissions; (c) while industries with low emission coefficients may appear preferable, they may also emit the most hazardous chemicals. This study is limited by the assumptions made and by data availability; however, it provides a useful analytical tool for direct and cumulative emission estimation and generates insights into the complexity of industry choice.
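
    A small illustrative sketch, with made-up numbers, of the direct versus direct-plus-indirect emission accounting that an emission-extended input-output model performs; the coefficient matrix A, final demand f and VOC emission intensities e below are hypothetical and are not Jefferson County data:

        import numpy as np

        # Hypothetical 3-sector technical coefficient matrix A (inter-industry requirements)
        A = np.array([[0.10, 0.05, 0.02],
                      [0.20, 0.15, 0.10],
                      [0.05, 0.10, 0.08]])
        f = np.array([100.0, 50.0, 80.0])   # final demand by sector (illustrative units)
        e = np.array([0.30, 0.05, 0.12])    # VOC emitted per unit of sectoral output (illustrative)

        x = np.linalg.solve(np.eye(3) - A, f)   # total output: x = (I - A)^-1 f (Leontief inverse)
        direct = e * f                          # emissions attributable to final demand alone
        total = e * x                           # direct + indirect emissions through the supply chain

        print("sectoral output      :", np.round(x, 2))
        print("direct VOC           :", np.round(direct, 2))
        print("direct + indirect VOC:", np.round(total, 2))

    The gap between the two emission vectors illustrates the abstract's point that an isolated, plant-by-plant analysis underestimates an industry's true contribution.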

  3. Science, information, technology, and the changing character of public policy in non-point source pollution

    King, John L.; Corwin, Dennis L.

    Information technologies are already delivering important new capabilities for scientists working on non-point source (NPS) pollution in the vadose zone, and more are expected. This paper focuses on the special contributions of modeling and network communications for enhancing the effectiveness of scientists in the realm of policy debates regarding NPS pollution mitigation and abatement. The discussion examines a fundamental shift from a strict regulatory strategy of pollution control characterized by a bureaucratic/technical alliance during the period through the 1970s and early 1980s, to a more recently evolving paradigm of pluralistic environmental management. The role of science and scientists in this shift is explored, with special attention to the challenges facing scientists working in NPS pollution in the vadose zone. These scientists labor under a special handicap in the evolving model because their scientific tools are often incapable of linking NPS pollution with the individuals responsible for causing it. Information can facilitate the effectiveness of these scientists in policy debates, but not under the usual assumptions in which scientific truth prevails. Instead, information technology's key role is in helping scientists shape the evolving discussion of trade-offs and in bringing citizens and policymakers closer to the routine work of scientists.

  4. Realtime Gas Emission Monitoring at Hazardous Sites Using a Distributed Point-Source Sensing Infrastructure.

    Manes, Gianfranco; Collodi, Giovanni; Gelpi, Leonardo; Fusco, Rosanna; Ricci, Giuseppe; Manes, Antonio; Passafiume, Marco

    2016-01-20

    This paper describes a distributed point-source monitoring platform for gas level and leakage detection in hazardous environments. The platform, based on a wireless sensor network (WSN) architecture, is organised into sub-networks to be positioned in the plant's critical areas; each sub-net includes a gateway unit wirelessly connected to the WSN nodes, hence providing an easily deployable, stand-alone infrastructure featuring a high degree of scalability and reconfigurability. Furthermore, the system provides automated calibration routines which can be accomplished by non-specialized maintenance operators without compromising system reliability. Internet connectivity is provided via TCP/IP over GPRS (Internet standard protocols over mobile networks) gateways at a one-minute sampling rate. Environmental and process data are forwarded to a remote server and made available to authenticated users through a user interface that provides data rendering in various formats and multi-sensor data fusion. The platform is able to provide real-time plant management with an effective, accurate tool for immediate warning in case of critical events.

  5. Using a dynamic point-source percolation model to simulate bubble growth

    Zimmerman, Jonathan A.; Zeigler, David A.; Cowgill, Donald F.

    2004-01-01

    Accurate modeling of nucleation, growth and clustering of helium bubbles within metal tritide alloys is of high scientific and technological importance. Of interest is the ability to predict both the distribution of these bubbles and the manner in which these bubbles interact at a critical concentration of helium-to-metal atoms to produce an accelerated release of helium gas. One technique that has been used in the past to model these materials, and is revisited again in this research, is percolation theory. Previous efforts have used classical percolation theory to qualitatively and quantitatively model the behavior of interstitial helium atoms in a metal tritide lattice; however, higher fidelity models are needed to predict the distribution of helium bubbles and include features that capture the underlying physical mechanisms present in these materials. In this work, we enhance classical percolation theory by developing the dynamic point-source percolation model. This model alters the traditionally binary character of site occupation probabilities by enabling them to vary depending on proximity to existing occupied sites, i.e. nucleated bubbles. The revised model produces characteristics for one- and two-dimensional systems that compare closely with measurements from three-dimensional physical samples. Future directions for continued development of the dynamic model are also outlined.
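
    A heavily simplified, hypothetical 2D illustration of the "dynamic point-source" idea described above, in which the occupation probability of a lattice site is boosted by nearby occupied sites instead of being a fixed constant; the lattice size, base probability, boost rule and periodic boundaries are illustrative assumptions only:

        import numpy as np

        rng = np.random.default_rng(0)

        def dynamic_percolation(n=100, p0=0.02, boost=0.15, steps=50):
            """Fill an n x n lattice where a site's occupation probability grows
            with the number of already-occupied nearest neighbours (proximity effect)."""
            occ = np.zeros((n, n), dtype=bool)
            for _ in range(steps):
                # count occupied nearest neighbours (periodic boundaries for simplicity)
                nb = (np.roll(occ, 1, 0) + np.roll(occ, -1, 0) +
                      np.roll(occ, 1, 1) + np.roll(occ, -1, 1)).astype(float)
                p = np.clip(p0 + boost * nb, 0.0, 1.0)   # proximity-dependent probability
                occ |= rng.random((n, n)) < p            # occupy new sites this step
            return occ

        lattice = dynamic_percolation()
        print("occupied fraction:", lattice.mean())

    Raising the boost relative to the base probability makes occupation cluster around early "nucleation" sites rather than spreading uniformly, which is the qualitative behavior the dynamic model is meant to capture.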

  6. Mobility and Sector-specific Effects of Changes in Multiple Sources ...

    Using the second and third Cameroon household consumption surveys, this study examined mobility and sector-specific effects of changes in multiple sources of deprivation in Cameroon. Results indicated that between 2001 and 2007, deprivations associated with human capital and labour capital declined, while ...

  7. Transfer functions of double- and multiple-cavity Fabry-Perot filters driven by Lorentzian sources.

    Marti, J; Capmany, J

    1996-12-20

    We derive expressions for the transfer functions of double- and multiple-cavity Fabry-Perot filters driven by laser sources with a Lorentzian spectrum. These expressions are of interest because of their applications in sensing and in channel filtering in optical frequency-division multiplexing networks.
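
    For reference, a hedged sketch of the single-cavity building block that such derivations start from: the ideal Fabry-Perot (Airy) intensity transfer function and the normalized Lorentzian source line shape, whose convolution gives the detected response. The double- and multiple-cavity expressions derived in the paper are more involved, and the symbols here are generic rather than the authors' notation.

        T_{\mathrm{FP}}(\nu) = \frac{(1-R)^2}{(1-R)^2 + 4R\,\sin^2\!\big(2\pi n L \nu / c\big)}, \qquad
        S(\nu) = \frac{\Delta\nu/(2\pi)}{(\nu-\nu_0)^2 + (\Delta\nu/2)^2}, \qquad
        T_{\mathrm{out}}(\nu_0) = \int T_{\mathrm{FP}}(\nu)\, S(\nu)\, d\nu,

    where R is the mirror reflectivity, nL the optical cavity length, \Delta\nu the source linewidth (FWHM) and \nu_0 its centre frequency.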

  8. Organizational Communication in Emergencies: Using Multiple Channels and Sources to Combat Noise and Capture Attention

    Stephens, Keri K.; Barrett, Ashley K.; Mahometa, Michael J.

    2013-01-01

    This study relies on information theory, social presence, and source credibility to uncover what best helps people grasp the urgency of an emergency. We surveyed a random sample of 1,318 organizational members who received multiple notifications about a large-scale emergency. We found that people who received 3 redundant messages coming through at…

  9. Reading on the World Wide Web: Dealing with conflicting information from multiple sources

    Van Strien, Johan; Brand-Gruwel, Saskia; Boshuizen, Els

    2011-01-01

    Van Strien, J. L. H., Brand-Gruwel, S., & Boshuizen, H. P. A. (2011, August). Reading on the World Wide Web: Dealing with conflicting information from multiple sources. Poster session presented at the biannual conference of the European Association for Research on Learning and Instruction, Exeter,

  10. Variogram based and Multiple - Point Statistical simulation of shallow aquifer structures in the Upper Salzach valley, Austria

    Jandrisevits, Carmen; Marschallinger, Robert

    2014-05-01

    Quaternary sediments in overdeepened alpine valleys and basins in the Eastern Alps bear substantial groundwater resources. The associated aquifer systems are generally geometrically complex with highly variable hydraulic properties. 3D geological models provide predictions of both geometry and properties of the subsurface required for subsequent modelling of groundwater flow and transport. In hydrology, geostatistical Kriging and Kriging-based conditional simulations are widely used to predict the spatial distribution of hydrofacies. In the course of investigating the shallow aquifer structures in the Zell basin in the Upper Salzach valley (Salzburg, Austria), a benchmark of available geostatistical modelling and simulation methods was performed: traditional variogram-based geostatistical methods, i.e. Indicator Kriging, Sequential Indicator Simulation and Sequential Indicator Co-Simulation, were used as well as Multiple Point Statistics. The ~6 km2 investigation area is sampled by 56 drillings with depths of 5 to 50 m; in addition, there are 2 geophysical sections with lengths of 2 km and depths of 50 m. Due to clustered drilling sites, Indicator Kriging models failed to consistently model the spatial variability of hydrofacies. Using classical variogram-based geostatistical simulation (SIS), equally probable realizations were generated, with differences among the realizations providing an uncertainty measure. The resulting models are unstructured from a geological point of view: they do not portray the shapes and lateral extents of the associated sedimentary units. Since variograms consider only two-point spatial correlations, they are unable to capture the spatial variability of complex geological structures. The Multiple Point Statistics approach overcomes these limitations of two-point statistics as it uses a training image instead of variograms. The 3D training image can be seen as a reference facies model where geological knowledge about depositional

  11. Meta-Analysis of Effect Sizes Reported at Multiple Time Points Using General Linear Mixed Model

    Musekiwa, Alfred; Manda, Samuel O. M.; Mwambi, Henry G.; Chen, Ding-Geng

    2016-01-01

    Meta-analysis of longitudinal studies combines effect sizes measured at pre-determined time points. The most common approach involves performing separate univariate meta-analyses at individual time points. This simplistic approach ignores dependence between longitudinal effect sizes, which might result in less precise parameter estimates. In this paper, we show how to conduct a meta-analysis of longitudinal effect sizes where we contrast different covariance structures for dependence between effect sizes, both within and between studies. We propose new combinations of covariance structures for the dependence between effect sizes and utilize a practical example involving a meta-analysis of 17 trials comparing postoperative treatments for a type of cancer, where survival is measured at 6, 12, 18 and 24 months post-randomization. Although the results from this particular data set show the benefit of accounting for within-study serial correlation between effect sizes, simulations are required to confirm these results. PMID:27798661
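
    For orientation, a sketch of the generalized least squares step that underlies such a multivariate meta-analysis once a within- and between-study covariance structure has been assumed (generic notation, not the authors'): with y_i the vector of effect sizes of study i at the T time points, X_i its design matrix and V_i the assumed total covariance (within-study plus between-study),

        \hat{\beta} = \Big(\sum_i X_i^{\top} V_i^{-1} X_i\Big)^{-1} \sum_i X_i^{\top} V_i^{-1} y_i,
        \qquad
        \operatorname{Cov}(\hat{\beta}) = \Big(\sum_i X_i^{\top} V_i^{-1} X_i\Big)^{-1},

    so the choice of covariance structure enters only through V_i, which is why contrasting different structures, as the paper does, changes the precision of the pooled estimates.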

  12. Direct Position Determination of Multiple Non-Circular Sources with a Moving Coprime Array

    Yankui Zhang

    2018-05-01

    Full Text Available Direct position determination (DPD) is currently a hot topic in wireless localization research as it is more accurate than traditional two-step positioning. However, current DPD algorithms are all based on uniform arrays, which have an insufficient degree of freedom and limited estimation accuracy. To improve the DPD accuracy, this paper introduces a coprime array to the position model of multiple non-circular sources with a moving array. To maximize the advantages of this coprime array, we reconstruct the covariance matrix by vectorization, apply a spatial smoothing technique, and converge the subspace data from each measuring position to establish the cost function. Finally, we obtain the position coordinates of the multiple non-circular sources. The complexity of the proposed method is computed and compared with that of other methods, and the Cramer–Rao lower bound of DPD for multiple sources with a moving coprime array is derived. Theoretical analysis and simulation results show that the proposed algorithm is not only applicable to circular sources, but can also improve the positioning accuracy of non-circular sources. Compared with existing two-step positioning algorithms and DPD algorithms based on uniform linear arrays, the proposed technique offers a significant improvement in positioning accuracy with a slight increase in complexity.

  13. Direct Position Determination of Multiple Non-Circular Sources with a Moving Coprime Array.

    Zhang, Yankui; Ba, Bin; Wang, Daming; Geng, Wei; Xu, Haiyun

    2018-05-08

    Direct position determination (DPD) is currently a hot topic in wireless localization research as it is more accurate than traditional two-step positioning. However, current DPD algorithms are all based on uniform arrays, which have an insufficient degree of freedom and limited estimation accuracy. To improve the DPD accuracy, this paper introduces a coprime array to the position model of multiple non-circular sources with a moving array. To maximize the advantages of this coprime array, we reconstruct the covariance matrix by vectorization, apply a spatial smoothing technique, and converge the subspace data from each measuring position to establish the cost function. Finally, we obtain the position coordinates of the multiple non-circular sources. The complexity of the proposed method is computed and compared with that of other methods, and the Cramer–Rao lower bound of DPD for multiple sources with a moving coprime array is derived. Theoretical analysis and simulation results show that the proposed algorithm is not only applicable to circular sources, but can also improve the positioning accuracy of non-circular sources. Compared with existing two-step positioning algorithms and DPD algorithms based on uniform linear arrays, the proposed technique offers a significant improvement in positioning accuracy with a slight increase in complexity.

  14. Higher moments of net kaon multiplicity distributions at RHIC energies for the search of QCD Critical Point at STAR

    Sarkar Amal

    2013-11-01

    Full Text Available In this paper we report measurements of the moments mean (M), standard deviation (σ), skewness (S) and kurtosis (κ) of the net-kaon multiplicity distribution at midrapidity from Au+Au collisions at √sNN = 7.7 to 200 GeV in the STAR experiment at RHIC, in an effort to locate the critical point in the QCD phase diagram. These moments and their products are related to the thermodynamic susceptibilities of conserved quantities such as net baryon number, net charge, and net strangeness, as well as to the correlation length of the system. A non-monotonic behavior of these variables would indicate the presence of the critical point. In this work we also present the moment products Sσ and κσ² of the net-kaon multiplicity distribution as a function of collision centrality and energy. The energy and centrality dependence of the higher moments of net kaons and their products have been compared with the Poisson expectation and with simulations from AMPT, which does not include a critical point. From the measurements at all seven available beam energies, we find no evidence for a critical point in the QCD phase diagram for √sNN below 200 GeV.
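
    An illustrative sketch of how the quoted moment products are formed from an event-by-event net-kaon distribution; the data below are synthetic stand-ins for the measured K+ minus K- counts per event:

        import numpy as np

        rng = np.random.default_rng(1)
        # synthetic event-by-event net-kaon numbers (stand-in for measured K+ minus K- per event)
        net_k = rng.poisson(5.0, 100000) - rng.poisson(4.5, 100000)

        M = net_k.mean()
        sigma = net_k.std()
        S = np.mean((net_k - M) ** 3) / sigma ** 3           # skewness
        kappa = np.mean((net_k - M) ** 4) / sigma ** 4 - 3   # excess kurtosis

        # moment products compared against the Poisson-type baseline mentioned in the abstract
        print("M, sigma, S, kappa:", M, sigma, S, kappa)
        print("S*sigma           :", S * sigma)
        print("kappa*sigma^2     :", kappa * sigma ** 2)

    In the search for the critical point it is the centrality and beam-energy dependence of these products, relative to such a baseline, that is examined for non-monotonic behavior.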

  15. Simulation of neutron multiplicity measurements using Geant4. Open source software for nuclear arms control

    Kuett, Moritz

    2016-07-07

    Nuclear arms control, including nuclear safeguards and verification technologies for nuclear disarmament, typically uses software as part of many different technological applications. This thesis proposes to apply three open source criteria to such software, allowing users and developers to have free access to a program, to have access to the full source code and to be able to publish modifications of the program. This proposition is presented and analyzed in detail, together with a description of the development of "Open Neutron Multiplicity Simulation", an open source software tool to simulate neutron multiplicity measurements. The description includes the physical background of the method, details of the developed program and a comprehensive set of validation calculations.

  16. Five-Level Z-Source Neutral Point-Clamped Inverter

    Gao, F.; Loh, P.C.; Blaabjerg, Frede

    2007-01-01

    This paper proposes a five-level Z-source neutral-point-clamped (NPC) inverter with two Z-source networks functioning as intermediate energy storages coupled between the dc sources and the NPC inverter circuitry. Analyzing the operational principles of the Z-source network with a partial dc-link shoot-through scheme reveals behavior of the five-level Z-source NPC inverter that does not appear in the general two-level Z-source inverter, so that the five-level Z-source NPC inverter can be designed with the modulation of carrier-based phase disposition (PD) or alternative phase...

  17. Multiple Speech Source Separation Using Inter-Channel Correlation and Relaxed Sparsity

    Maoshen Jia

    2018-01-01

    Full Text Available In this work, a multiple speech source separation method using inter-channel correlation and relaxed sparsity is proposed. A B-format microphone with four spatially located channels is adopted because the size of the microphone array preserves the spatial parameter integrity of the original signal. Specifically, we first measure the proportion of overlapped components among multiple sources and find that the number of overlapped time-frequency (TF) components grows with increasing source number. Then, considering the relaxed sparsity of speech sources, we propose a dynamic threshold-based separation approach for sparse components, where the threshold is determined by the inter-channel correlation among the recorded signals. After conducting a statistical analysis of the number of active sources at each TF instant, a form of relaxed sparsity called the half-K assumption is proposed, under which the number of active sources in a certain TF bin does not exceed half the total number of simultaneously occurring sources. By applying the half-K assumption, the non-sparse components are recovered by using the extracted sparse components as a guide, combined with vector decomposition and matrix factorization. Finally, the TF coefficients of each source are recovered by the synthesis of sparse and non-sparse components. The proposed method has been evaluated using up to six simultaneous speech sources under both anechoic and reverberant conditions. Both objective and subjective evaluations validated that the perceptual quality of the speech separated by the proposed approach outperforms existing blind source separation (BSS) approaches. In addition, it is robust across different speech signals, and all the separated signals retain similar perceptual quality.

  18. Evaluation of spatial dependence of point spread function-based PET reconstruction using a traceable point-like 22Na source

    Taisuke Murata

    2016-10-01

    Full Text Available Background: The point spread function (PSF) of positron emission tomography (PET) depends on the position across the field of view (FOV). Reconstruction based on the PSF improves spatial resolution and quantitative accuracy. The present study aimed to quantify the effects of PSF correction as a function of the position of a traceable point-like 22Na source over the FOV on two PET scanners with a different detector design. Methods: We used Discovery 600 and Discovery 710 (GE Healthcare) PET scanners and traceable point-like 22Na sources (<1 MBq) with a spherical absorber design that assures uniform angular distribution of the emitted annihilation photons. The source was moved in three directions at intervals of 1 cm from the center towards the peripheral FOV using a three-dimensional (3D) positioning robot, and data were acquired over a period of 2 min per point. The PET data were reconstructed by filtered back projection (FBP), ordered subset expectation maximization (OSEM), OSEM + PSF, and OSEM + PSF + time-of-flight (TOF). Full width at half maximum (FWHM) was determined according to the NEMA method, and total counts in regions of interest (ROI) for each reconstruction were quantified. Results: The radial FWHM of FBP and OSEM increased towards the peripheral FOV, whereas PSF-based reconstruction recovered the FWHM at all points in the FOV of both scanners. The radial FWHM for PSF was 30–50 % lower than that of OSEM at the center of the FOV. The accuracy of PSF correction was independent of detector design. Quantitative values were stable across the FOV in all reconstruction methods. The effect of TOF on spatial resolution and quantitation accuracy was less noticeable. Conclusions: The traceable 22Na point-like source allowed the evaluation of spatial resolution and quantitative accuracy across the FOV using different reconstruction methods and scanners. PSF-based reconstruction reduces dependence of the spatial resolution on the

  19. Source term determination from subcritical multiplication measurements at Koral-1 reactor

    Blazquez, J.B.; Barrado, J.M.

    1978-01-01

    Using an AmBe neutron source, two independent procedures have been established for the zero-power experimental fast reactor Coral-1 in order to measure the source term that appears in the point kinetics equations. In the first, the source term is measured when the reactor is just critical with the source present, taking advantage of the wide range of the linear approach to critical for Coral-1. In the second, the measurement is made in a subcritical state using the previously calibrated control rods. Several applications are also included, such as the measurement of the detector dead time, the determination of the reactivity of small samples and the shape of the neutron importance of the source. (author)
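
    A hedged sketch of the point-kinetics source term the abstract refers to, in generic notation (the paper's exact formulation may differ): with an external source S, the point kinetics equations read

        \frac{dn}{dt} = \frac{\rho - \beta}{\Lambda}\,n + \sum_i \lambda_i C_i + S,
        \qquad
        \frac{dC_i}{dt} = \frac{\beta_i}{\Lambda}\,n - \lambda_i C_i,

    and at steady state in a subcritical core (\rho < 0) the neutron population satisfies n_0 = \Lambda S / (-\rho), the subcritical-multiplication relation that allows S to be inferred once the reactivity set by the calibrated control rods is known.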

  20. Clutter-free Visualization of Large Point Symbols at Multiple Scales by Offset Quadtrees

    ZHANG Xiang

    2016-08-01

    Full Text Available To address the cartographic problems in map mash-up applications in the Web 2.0 context, this paper studies a clutter-free technique for visualizing large symbols on Web maps. Basically, a quadtree is used to select one symbol in each grid cell at each zoom level. To resolve the symbol overlaps between neighboring quad-grids, multiple offsets are applied to the quadtree and a voting strategy is used to compute the significance level of symbols for their selection at multiple scales. The method is able to resolve spatial conflicts without explicit conflict detection, thus enabling highly efficient processing. The resulting map also forms a visual hierarchy of semantic importance. We discuss issues such as relative importance, the symbol-to-grid size ratio, and effective offset schemes, and propose two extensions to make better use of the free space available on the map. Experiments were carried out to validate the technique, which demonstrates its robustness and efficiency (a non-optimal implementation achieves sub-second processing for datasets of 10^5 magnitude).
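
    A highly simplified sketch of the selection idea described above: keep the most significant symbol per grid cell, repeat with several offset grids, and let a vote decide which symbols survive at a given scale. The cell size, offsets, significance score and majority rule are illustrative assumptions rather than the paper's exact parameters, and the full method operates on a quadtree across zoom levels, which is omitted here.

        from collections import defaultdict

        # symbols: list of (x, y, weight) tuples; weight stands in for thematic importance
        def select_symbols(symbols, cell_size, offsets=((0, 0), (0.5, 0), (0, 0.5), (0.5, 0.5))):
            votes = defaultdict(int)
            for ox, oy in offsets:                      # each offset shifts the grid by a cell fraction
                best = {}                               # winning symbol per shifted cell
                for i, (x, y, w) in enumerate(symbols):
                    cell = (int(x / cell_size + ox), int(y / cell_size + oy))
                    if cell not in best or w > symbols[best[cell]][2]:
                        best[cell] = i
                for i in best.values():
                    votes[i] += 1                       # vote across the offset grids
            # keep symbols selected by a majority of the offset grids
            return [symbols[i] for i, v in votes.items() if v > len(offsets) // 2]

    Because each symbol competes only within its own (shifted) cells, conflicts are resolved implicitly, without any pairwise overlap test, which is the efficiency argument made in the abstract.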

  1. Exact analytical solution of time-independent neutron transport equation, and its applications to systems with a point source

    Mikata, Y.

    2014-01-01

    Highlights: • An exact solution for the one-speed neutron transport equation is obtained. • This solution as well as its derivation are believed to be new. • Neutron flux for a purely absorbing material with a point neutron source off the origin is obtained. • Spherically as well as cylindrically piecewise constant cross sections are studied. • Neutron flux expressions for a point neutron source off the origin are believed to be new. - Abstract: An exact analytical solution of the time-independent monoenergetic neutron transport equation is obtained in this paper. The solution is applied to systems with a point source. Systematic analysis of the solution of the time-independent neutron transport equation, and its applications represent the primary goal of this paper. To the best of the author’s knowledge, certain key results on the scalar neutron flux as well as their derivations are new. As an application of these results, a scalar neutron flux for a purely absorbing medium with a spherically piecewise constant cross section and an isotropic point neutron source off the origin as well as that for a cylindrically piecewise constant cross section with a point neutron source off the origin are obtained. Both of these results are believed to be new

  2. Analytical solutions of nonlocal Poisson dielectric models with multiple point charges inside a dielectric sphere

    Xie, Dexuan; Volkmer, Hans W.; Ying, Jinyong

    2016-04-01

    The nonlocal dielectric approach has led to new models and solvers for predicting electrostatics of proteins (or other biomolecules), but how to validate and compare them remains a challenge. To promote such a study, in this paper, two typical nonlocal dielectric models are revisited. Their analytical solutions are then found as simple series expressions for a dielectric sphere containing any number of point charges. As a special case, the analytical solution of the corresponding Poisson dielectric model is also derived in simple series, which significantly improves the well-known Kirkwood double series expansion. Furthermore, a convolution of one nonlocal dielectric solution with a commonly used nonlocal kernel function is obtained, along with the reaction parts of these local and nonlocal solutions. To turn these new series solutions into a valuable research tool, they are programmed as a free Fortran software package, which can input point charge data directly from a Protein Data Bank file. Consequently, different validation tests can be quickly done on different proteins. Finally, a test example for a protein with 488 atomic charges is reported to demonstrate the differences between the local and nonlocal models as well as the importance of using the reaction parts to develop local and nonlocal dielectric solvers.

  3. LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor

    Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram

    2007-09-01

    Implementing the sum-product algorithm in an FPGA with an embedded processor invites us to consider a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that performs the product computations. Our FPGA-based coprocessor design performs these computations with significantly less precision than the standard (e.g. integer, floating-point) operations of general-purpose processors. Using synthesis targeting a 3,168-LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is impacted both positively and negatively by a reduction in the precision of the computation. Reducing precision reduces the coding gain, but the limited-precision computation can operate faster. The proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain. Synthesis results help estimate the full capacity and performance of an FPGA-based coprocessor.
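
    A small generic sketch of the check-node (tanh-rule) update at the heart of the sum-product decoder discussed above, with an optional cast to half precision to mimic a limited-precision product coprocessor; this is an illustration of the algorithm, not the authors' FPGA design:

        import numpy as np

        def check_node_update(llrs, dtype=np.float16):
            """Sum-product check-to-variable messages for one check node.

            llrs : incoming variable-to-check log-likelihood ratios (1D array)
            dtype: reduced-precision type used for the product, standing in for a
                   limited-precision coprocessor (assumption for illustration).
            """
            t = np.tanh(np.asarray(llrs, dtype=dtype) / 2)
            out = np.empty_like(t)
            for k in range(len(t)):
                others = np.delete(t, k)                        # exclude the target edge
                prod = np.prod(others, dtype=dtype)             # low-precision product step
                out[k] = 2 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
            return out.astype(np.float64)                       # full precision at the interface

    Running the same update with dtype=np.float64 and comparing bit error rates gives a quick feel for the coding-gain loss that the abstract attributes to reduced precision.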

  4. Watershed-based point sources permitting strategy and dynamic permit-trading analysis.

    Ning, Shu-Kuang; Chang, Ni-Bin

    2007-09-01

    Permit-trading policy in a total maximum daily load (TMDL) program may provide an additional avenue to produce environmental benefit, which closely approximates what would be achieved through a command and control approach, with relatively lower costs. One of the important considerations that might affect the effective trading mechanism is to determine the dynamic transaction prices and trading ratios in response to seasonal changes of assimilative capacity in the river. Advanced studies associated with multi-temporal spatially varied trading ratios among point sources to manage water pollution hold considerable potential for industries and policy makers alike. This paper aims to present an integrated simulation and optimization analysis for generating spatially varied trading ratios and evaluating seasonal transaction prices accordingly. It is designed to configure a permit-trading structure basin-wide and provide decision makers with a wealth of cost-effective, technology-oriented, risk-informed, and community-based management strategies. The case study, seamlessly integrating a QUAL2E simulation model with an optimal waste load allocation (WLA) scheme in a designated TMDL study area, helps understand the complexity of varying environmental resources values over space and time. The pollutants of concern in this region, which are eligible for trading, mainly include both biochemical oxygen demand (BOD) and ammonia-nitrogen (NH3-N). The problem solution, as a consequence, suggests an array of waste load reduction targets in a well-defined WLA scheme and exhibits a dynamic permit-trading framework among different sub-watersheds in the study area. Research findings gained in this paper may extend to any transferable dynamic-discharge permit (TDDP) program worldwide.

  5. Mycotoxins: diffuse and point source contributions of natural contaminants of emerging concern to streams

    Kolpin, Dana W.; Schenzel, Judith; Meyer, Michael T.; Phillips, Patrick J.; Hubbard, Laura E.; Scott, Tia-Marie; Bucheli, Thomas D.

    2014-01-01

    To determine the prevalence of mycotoxins in streams, 116 water samples from 32 streams and three wastewater treatment plant effluents were collected in 2010 providing the broadest investigation on the spatial and temporal occurrence of mycotoxins in streams conducted in the United States to date. Out of the 33 target mycotoxins measured, nine were detected at least once during this study. The detections of mycotoxins were nearly ubiquitous during this study even though the basin size spanned four orders of magnitude. At least one mycotoxin was detected in 94% of the 116 samples collected. Deoxynivalenol was the most frequently detected mycotoxin (77%), followed by nivalenol (59%), beauvericin (43%), zearalenone (26%), β-zearalenol (20%), 3-acetyl-deoxynivalenol (16%), α-zearalenol (10%), diacetoxyscirpenol (5%), and verrucarin A (1%). In addition, one or more of the three known estrogenic compounds (i.e. zearalenone, α-zearalenol, and β-zearalenol) were detected in 43% of the samples, with maximum concentrations substantially higher than observed in previous research. While concentrations were generally low (i.e. < 50 ng/L) during this study, concentrations exceeding 1000 ng/L were measured during spring snowmelt conditions in agricultural settings and in wastewater treatment plant effluent. Results of this study suggest that both diffuse (e.g. release from infected plants and manure applications from exposed livestock) and point (e.g. wastewater treatment plants and food processing plants) sources are important environmental pathways for mycotoxin transport to streams. The ecotoxicological impacts from the long-term, low-level exposures to mycotoxins alone or in combination with complex chemical mixtures are unknown

  6. Reduction of non-point source contaminants associated with road-deposited sediments by sweeping.

    Kim, Do-Gun; Kang, Hee-Man; Ko, Seok-Oh

    2017-09-19

    Road-deposited sediments (RDS) on an expressway, residual RDS collected after sweeping, and RDS removed by means of sweeping were analyzed to evaluate the degree to which sweeping removed various non-point source contaminants. The total RDS load was 393.1 ± 80.3 kg/km and the RDS, residual RDS, and swept RDS were all highly polluted with organics, nutrients, and metals. Among the metals studied, Cu, Zn, Pb, Ni, Ca, and Fe were significantly enriched, and most of the contaminants were associated with particles within the size range from 63 μm to 2 mm. Sweeping reduced RDS and its associated contaminants by 33.3-49.1% on average. We also measured the biological oxygen demand (BOD) of RDS in the present work, representing to our knowledge the first time that this has been done; we found that RDS contains a significant amount of biodegradable organics and that the reduction of BOD by sweeping was higher than that of other contaminants. Significant correlations were found between the contaminants measured, indicating that the organics and the metals originated from both exhaust and non-exhaust particles. Meanwhile, the concentrations of Cu and Ni were higher in 63 μm-2 mm particles than in smaller particles, suggesting that some metals in RDS likely exist intrinsically in particles, rather than only as adsorbates on particle surfaces. Overall, the results in this study showed that sweeping to collect RDS can be a good alternative for reduction of contaminants in runoff.

  7. Interpolating precipitation and its relation to runoff and non-point source pollution.

    Chang, Chia-Ling; Lo, Shang-Lien; Yu, Shaw-L

    2005-01-01

    When rainfall varies spatially, complete rainfall data for each region with different rainfall characteristics are very important. Numerous interpolation methods have been developed for estimating unknown spatial characteristics. However, no interpolation method is suitable for all circumstances. In this study, several methods, including the arithmetic average method, the Thiessen Polygons method, the traditional inverse distance method, and the modified inverse distance method, were used to interpolate precipitation. The modified inverse distance method considers not only horizontal distances but also differences between the elevations of the region with no rainfall records and of its surrounding rainfall stations. The results show that when the spatial variation of rainfall is strong, choosing a suitable interpolation method is very important. If the rainfall is uniform, the precipitation estimated using any interpolation method would be quite close to the actual precipitation. When rainfall is heavy in locations with high elevation, the rainfall changes with the elevation. In this situation, the modified inverse distance method is much more effective than any other method discussed herein for estimating the rainfall input for WinVAST to estimate runoff and non-point source pollution (NPSP). When the spatial variation of rainfall is random, regardless of the interpolation method used to yield rainfall input, the estimation errors of runoff and NPSP are large. Moreover, the correlation between the relative error of the predicted runoff and that of the predicted SS pollutant loading is strong. However, the pollutant concentration is affected by both runoff and pollutant export, so the relationship between the relative error of the predicted runoff and that of the predicted SS concentration may be unstable.
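
    A hedged sketch of an elevation-aware inverse distance interpolation in the spirit of the "modified inverse distance method" described above; the abstract does not give the exact formula, so the way horizontal distance and elevation difference are combined below is an assumption for illustration:

        import numpy as np

        def modified_idw(x0, y0, z0, stations, power=2.0, elev_weight=0.1):
            """Estimate rainfall at (x0, y0, elevation z0) from surrounding stations.

            stations   : iterable of (x, y, elevation, rainfall) tuples
            elev_weight: how strongly the elevation difference is folded into the
                         distance (illustrative assumption, to be calibrated).
            """
            num = den = 0.0
            for x, y, z, r in stations:
                d_h = np.hypot(x - x0, y - y0)                           # horizontal distance
                d = np.sqrt(d_h ** 2 + (elev_weight * (z - z0)) ** 2)    # combined distance
                if d == 0.0:
                    return r                                             # exactly at a station
                w = 1.0 / d ** power                                     # inverse-distance weight
                num += w * r
                den += w
            return num / den

    Setting elev_weight to zero recovers the traditional inverse distance method, which makes it easy to compare the two schemes on the same gauge network.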

  8. Relationship Between Non-Point Source Pollution and Korean Green Factor

    Seung Chul Lee

    2015-01-01

    Full Text Available In determining the relationship between the rational event mean concentration (REMC), which is a volume-weighted mean of event mean concentrations (EMCs), as a non-point source (NPS) pollution indicator and the green factor (GF) as a low impact development (LID) land use planning indicator, we constructed a runoff database containing 1483 rainfall events collected from 107 different experimental catchments reported in 19 references in Korea. The collected data showed that EMCs were not correlated with storm factors, whereas they showed significant differences according to land use type. The calculated REMCs for BOD, COD, TSS, TN, and TP showed negative correlations with the GFs. However, even though the GFs of the agricultural areas were concentrated around values of 80, like those of the green areas, the REMCs for TSS, TN, and TP were especially high. There were also few differences in REMC runoff characteristics according to the GFs for areas such as suburban recreational facilities and for highways and trunk roads that connect major cities. Except for those areas, the REMCs for BOD and COD were significantly related to the GFs: they decreased as the proportion of natural green area increased. On the other hand, some of the REMCs for TSS, TN, and TP were still high where the catchments contained mixed land use patterns, especially public facility areas with bare ground and artificial grassland areas. The GF could therefore be used as a major planning indicator when establishing land use plans aimed at sustainable development with NPS management in urban areas, provided the weighted GF values are improved.
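
    For reference, the two averages involved can be written as follows (standard definitions in generic notation):

        \mathrm{EMC}_j = \frac{\int C_j(t)\,Q_j(t)\,dt}{\int Q_j(t)\,dt},
        \qquad
        \mathrm{REMC} = \frac{\sum_j V_j\,\mathrm{EMC}_j}{\sum_j V_j},

    where C_j(t) and Q_j(t) are the pollutant concentration and runoff rate during event j and V_j is that event's runoff volume, so the REMC weights each storm's EMC by how much runoff it actually produced.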

  9. Stochastic Management of Non-Point Source Contamination: Joint Impact of Aquifer Heterogeneity and Well Characteristics

    Henri, C. V.; Harter, T.

    2017-12-01

    Agricultural activities are recognized as the preeminent origin of non-point source (NPS) contamination of water bodies through the leakage of nitrate, salt and agrochemicals. A large fraction of world agricultural activities, and therefore of NPS contamination, occurs over unconsolidated alluvial deposit basins offering soil composition and topography favorable to productive farming. These basins also represent important groundwater reservoirs. The over-exploitation of aquifers coupled with groundwater pollution by agriculture-related NPS contaminants has led to a rapid deterioration of the quality of these groundwater basins. The management of groundwater contamination from NPS is challenged by the inherent complexity of aquifer systems. Contaminant transport dynamics are highly uncertain due to the heterogeneity of the hydraulic parameters controlling groundwater flow. Well characteristics are also key uncertain elements affecting pollutant transport and NPS management, but quantifying uncertainty in NPS management under these conditions is not well documented. Our work focuses on better understanding the joint impact of aquifer heterogeneity and pumping well characteristics (extraction rate and depth) on (1) the transport of contaminants from NPS and (2) the spatio-temporal extension of the capture zone. To do so, we generate a series of geostatistically equivalent 3D heterogeneous aquifers and simulate the flow and non-reactive solute transport from NPS to extraction wells within a stochastic framework. The propagation of the uncertainty in the hydraulic conductivity field is systematically analyzed. A sensitivity analysis of the impact of extraction well characteristics (pumping rate and screen depth) is also conducted. Results highlight the significant role that heterogeneity and well characteristics play in management metrics. We finally show that, in case of NPS contamination, the joint impact of regional longitudinal and transverse vertical hydraulic gradients and

  10. Instream Biological Assessment of NPDES Point Source Discharges at the Savannah River Site, 2000

    Specht, W.L.

    2001-01-01

    The Savannah River Site (SRS) currently has 31 NPDES outfalls that have been permitted by the South Carolina Department of Health and Environmental Control (SCDHEC) to discharge to SRS streams and the Savannah River. In order to determine the cumulative impacts of these discharges on the receiving streams, a study plan was developed to perform in-stream assessments of the fish assemblages, macroinvertebrate assemblages, and habitats of the receiving streams. These studies were designed to detect biological impacts due to point source discharges. Sampling was initially conducted between November 1997 and July 1998 and was repeated in the summer and fall of 2000. A total of 18 locations were sampled (Table 1, Figure 1). Sampling locations for fish and macroinvertebrates were generally the same. However, different locations were sampled for fish (Road A-2) and macroinvertebrates (Road C) in the lower portion of Upper Three Runs, to avoid interference with ongoing fisheries studies at Road C. Also, fish were sampled in Fourmile Branch at Road 4 rather than at Road F because the stream at Road F was too narrow and shallow to support many fish. Sampling locations and parameters are detailed in Sections 2 and 3 of this report. In general, sampling locations were selected that would permit comparisons upstream and downstream of NPDES outfalls. In instances where this approach was not feasible because effluents discharge into the headwaters of a stream, appropriate unimpacted reference sites were used for comparison purposes. This report summarizes the results of the sampling that was conducted in 2000 and also compares these data to the data that were collected in 1997 and 1998.

  11. Multiple organ failure in the newborn: the point of view of the pathologist

    Clara Gerosa

    2014-06-01

    Full Text Available One of the most severe events occurring in critically ill patients admitted to a neonatal intensive care unit (NICU) is multiple organ failure (MOF), a systemic inflammatory response leading to progressive organ dysfunction and mortality in newborns. MOF may occur in newborns primarily affected by multiple single-organ diseases, including respiratory distress syndrome, neonatal sepsis with acute kidney injury, post-asphyxial hypoxic-ischemic encephalopathy and pandemic influenza A (H1N1) infection. In a previous article from our group, based on the histological examination of all organs at autopsy of newborns affected by MOF, no organ studied escaped damage, including the thymus and pancreas, which are normally not mentioned in the MOF literature. The aim of this article is to review the most important pathological changes pathologists should look for in every case of MOF occurring in the perinatal period, with particular attention to systemic endothelial changes occurring in blood vessels in all organs and systems. On the basis of our experience, matching data during the last phases of the clinicopathological diagnosis represents a useful method, much more productive than the method based on giving pathological answers to the clinical questions posed before autopsy. Among the pathological features observed in neonatal MOF, one deserves particular attention: the vascular lesions, and in particular the multiple changes occurring in endothelial cells during MOF development, ending with the loss of the endothelial barrier, probably the most relevant histological lesion, followed by the onset of interstitial edema and disseminated intravascular coagulation. Small vessels should be observed at high power, with particular attention to the size and shape of endothelial nuclei, in order to evidence endothelial swelling, probably the initial modification of the endothelial cells leading to their

  12. Two-point versus multiple-point geostatistics: the ability of geostatistical methods to capture complex geobodies and their facies associations—an application to a channelized carbonate reservoir, southwest Iran

    Hashemi, Seyyedhossein; Javaherian, Abdolrahim; Ataee-pour, Majid; Khoshdel, Hossein

    2014-01-01

    Facies models try to explain facies architectures, which have a primary control on the subsurface heterogeneities and the fluid flow characteristics of a given reservoir. In the process of facies modeling, geostatistical methods are implemented to integrate different sources of data into a consistent model. The facies models should describe facies interactions and the shape and geometry of the geobodies as they occur in reality. Two distinct categories of geostatistical techniques are two-point and multiple-point (geo)statistics (MPS). In this study, both of the aforementioned categories were applied to generate facies models. A sequential indicator simulation (SIS) and a truncated Gaussian simulation (TGS) represented the two-point geostatistical methods, and a single normal equation simulation (SNESIM) was selected as the MPS representative. The dataset from an extremely channelized carbonate reservoir located in southwest Iran was applied to these algorithms to analyze their performance in reproducing complex curvilinear geobodies. The SNESIM algorithm needs consistent training images (TI) in which all possible facies architectures present in the area are included. The TI model was based on the data acquired from modern occurrences. These analogues delivered vital information about the possible channel geometries and facies classes that are typically present in such environments. The MPS results were conditioned to both soft and hard data. Soft facies probabilities were acquired from a neural network workflow. In this workflow, seismic-derived attributes were implemented as the input data. Furthermore, MPS realizations were conditioned to hard data to guarantee the exact positioning and continuity of the channel bodies. A geobody extraction workflow was implemented to extract the most certain parts of the channel bodies from the seismic data. These extracted parts of the channel bodies were applied to the simulation workflow as hard data.
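
    As a minimal illustration of the two families of statistics contrasted above, the sketch below (Python; the function names and the toy training image are illustrative, not the reservoir dataset of this study) computes a two-point indicator statistic for a single lag and the frequencies of 3x3 facies patterns scanned from a training image, the kind of pattern statistics a SNESIM-type algorithm tabulates before simulation.

      import numpy as np
      from collections import Counter

      def indicator_variogram(facies, lag):
          """Two-point statistic: experimental indicator variogram for a
          vertical lag (half the probability that two cells 'lag' apart differ)."""
          a, b = facies[:-lag, :], facies[lag:, :]
          return 0.5 * np.mean((a != b).astype(float))

      def pattern_frequencies(training_image, size=3):
          """Multiple-point statistic: frequencies of size x size facies
          patterns scanned from a training image (the kind of statistics a
          SNESIM-type search tree stores)."""
          counts = Counter()
          ny, nx = training_image.shape
          for i in range(ny - size + 1):
              for j in range(nx - size + 1):
                  patch = tuple(training_image[i:i + size, j:j + size].ravel())
                  counts[patch] += 1
          total = sum(counts.values())
          return {p: c / total for p, c in counts.items()}

      # Toy binary training image with a sinuous, 3-cell-wide channel of facies 1.
      ti = np.zeros((50, 50), dtype=int)
      cols = np.arange(50)
      centre = (25 + 5 * np.sin(cols / 8.0)).astype(int)
      for j, c in enumerate(centre):
          ti[c - 1:c + 2, j] = 1

      print("indicator variogram at lag 5:", indicator_variogram(ti, 5))
      freqs = pattern_frequencies(ti, size=3)
      print("number of distinct 3x3 patterns:", len(freqs))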

  13. A Microsoft Kinect-Based Point-of-Care Gait Assessment Framework for Multiple Sclerosis Patients.

    Gholami, Farnood; Trojan, Daria A; Kovecses, Jozsef; Haddad, Wassim M; Gholami, Behnood

    2017-09-01

    Gait impairment is a prevalent and important difficulty for patients with multiple sclerosis (MS), a common neurological disorder. An easy-to-use tool to objectively evaluate gait in MS patients in a clinical setting can assist clinicians in performing an objective assessment. The overall objective of this study is to develop a framework to quantify gait abnormalities in MS patients using the Microsoft Kinect for Windows sensor, an inexpensive, easy-to-use, portable camera. Specifically, we aim to evaluate its feasibility for utilization in a clinical setting, assess its reliability, evaluate the validity of the gait indices obtained, and evaluate a novel set of gait indices based on the concept of dynamic time warping. In this study, ten ambulatory MS patients and ten age- and sex-matched normal controls were studied at one session in a clinical setting with gait assessment using a Kinect camera. The expanded disability status scale (EDSS) clinical ambulation score was calculated for the MS subjects, and patients completed the Multiple Sclerosis Walking Scale (MSWS). Based on this study, we established the potential feasibility of using a Microsoft Kinect camera in a clinical setting. Seven out of the eight gait indices obtained using the proposed method were reliable, with intraclass correlation coefficients ranging from 0.61 to 0.99. All eight MS gait indices were significantly different from those of the controls (p-values less than 0.05). Finally, seven out of the eight MS gait indices were correlated with the objective and subjective gait measures (Pearson's correlation coefficients greater than 0.40). This study shows that the Kinect camera is an easy-to-use tool to assess gait in MS patients in a clinical setting.
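
    The gait indices mentioned above build on dynamic time warping; the sketch below is a generic DTW distance between two 1-D trajectories (hypothetical knee-angle-like signals), not the authors' Kinect processing pipeline.

      import numpy as np

      def dtw_distance(x, y):
          """Classic dynamic time warping distance between two 1-D sequences
          (e.g., a patient's and a reference joint-angle trajectory)."""
          n, m = len(x), len(y)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = abs(x[i - 1] - y[j - 1])
                  D[i, j] = cost + min(D[i - 1, j],      # insertion
                                       D[i, j - 1],      # deletion
                                       D[i - 1, j - 1])  # match
          return D[n, m]

      # Two gait-like signals with a small timing offset.
      t = np.linspace(0, 2 * np.pi, 100)
      print(dtw_distance(np.sin(t), np.sin(t + 0.3)))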

  14. Leading bureaucracies to the tipping point: An alternative model of multiple stable equilibrium levels of corruption

    Caulkins, Jonathan P.; Feichtinger, Gustav; Grass, Dieter; Hartl, Richard F.; Kort, Peter M.; Novak, Andreas J.; Seidl, Andrea

    2013-01-01

    We present a novel model of corruption dynamics in the form of a nonlinear optimal dynamic control problem. It has a tipping point, but one whose origins and character are distinct from that in the classic Schelling (1978) model. The decision maker choosing a level of corruption is the chief or some other kind of authority figure who presides over a bureaucracy whose state of corruption is influenced by the authority figure’s actions, and whose state in turn influences the pay-off for the authority figure. The policy interpretation is somewhat more optimistic than in other tipping models, and there are some surprising implications, notably that reforming the bureaucracy may be of limited value if the bureaucracy takes its cues from a corrupt leader. PMID:23565027

  15. Leading bureaucracies to the tipping point: An alternative model of multiple stable equilibrium levels of corruption.

    Caulkins, Jonathan P; Feichtinger, Gustav; Grass, Dieter; Hartl, Richard F; Kort, Peter M; Novak, Andreas J; Seidl, Andrea

    2013-03-16

    We present a novel model of corruption dynamics in the form of a nonlinear optimal dynamic control problem. It has a tipping point, but one whose origins and character are distinct from that in the classic Schelling (1978) model. The decision maker choosing a level of corruption is the chief or some other kind of authority figure who presides over a bureaucracy whose state of corruption is influenced by the authority figure's actions, and whose state in turn influences the pay-off for the authority figure. The policy interpretation is somewhat more optimistic than in other tipping models, and there are some surprising implications, notably that reforming the bureaucracy may be of limited value if the bureaucracy takes its cues from a corrupt leader.

  16. Multiple Information Fusion Face Recognition Using Key Feature Points

    LIN Kezheng

    2017-06-01

    Full Text Available After years of face recognition research, the effects of illumination, noise and other conditions have kept recognition rates relatively low, and 2D face recognition technology has struggled to keep pace with current demands. Although 3D face recognition technology is developing step by step, it has higher complexity. In order to solve this problem, based on the traditional depth-information positioning method and local feature analysis (LFA), an improved 3D face key-feature-point localization algorithm is put forward, and, on the basis of training samples obtained by complete clustering, a weighted-fusion global and local feature extraction algorithm is further proposed. Comparison and analysis of experimental data from the FRGC and BU-3DFE face databases show that the method achieves higher robustness in 3D face recognition.

  17. Major and Trace Element Fluxes to the Ganges River: Significance of Small Flood Plain Tributary as Non-Point Pollution Source

    Lakshmi, V.; Sen, I. S.; Mishra, G.

    2017-12-01

    There has been much discussion amongst biologists, ecologists, chemists, geologists, environmental firms, and science policy makers about the impact of human activities on river health. As a result, multiple river restoration projects are ongoing in many large river basins around the world. In the Indian subcontinent, the Ganges River is the focal point of all restoration actions as it provides food and water security to half a billion people. Serious concerns have been raised about the quality of Ganga water as toxic chemicals and other pollutants enter the river system through point sources, such as direct wastewater discharge to rivers, or non-point sources. Point source pollution can be easily identified and remedial actions can be taken; however, non-point pollution sources are harder to quantify and mitigate. A large non-point pollution source in the Indo-Gangetic floodplain is the network of small floodplain rivers. However, these rivers are rarely studied since they are small in catchment area (roughly 1000-10,000 km2) and discharge. To address this knowledge gap, we have monitored the Pandu River for one year, between February 2015 and April 2016. The Pandu River is 242 km long and is a right-bank tributary of the Ganges with a total catchment area of 1495 km2. Water samples were collected every month for dissolved major and trace elements. Here we show that the concentration of heavy metals in the river Pandu is higher than the world river average, and all the dissolved elements show large spatio-temporal variations. We show that the Pandu river exports 192170, 168517, 57802, 32769, 29663, 1043, 279, 241, 225, 162, 97, 28, 25, 22, 20, 8, 4 kg/yr of Ca, Na, Mg, K, Si, Sr, Zn, B, Ba, Mn, Al, Li, Rb, Mo, U, Cu, and Sb, respectively, to the Ganga river, and the exported chemical flux affects the water chemistry of the Ganga river downstream of its confluence point. We further speculate that small floodplain rivers are an important source that contributes to the dissolved chemical

  18. Identifying and characterizing major emission point sources as a basis for geospatial distribution of mercury emissions inventories

    Steenhuisen, Frits; Wilson, Simon J.

    2015-07-01

    Mercury is a global pollutant that poses threats to ecosystem and human health. Due to its global transport, mercury contamination is found in regions of the Earth that are remote from major emissions areas, including the Polar regions. Global anthropogenic emission inventories identify important sectors and industries responsible for emissions at a national level; however, to be useful for air transport modelling, more precise information on the locations of emission is required. This paper describes the methodology applied, and the results of work that was conducted to assign anthropogenic mercury emissions to point sources as part of geospatial mapping of the 2010 global anthropogenic mercury emissions inventory prepared by AMAP/UNEP. Major point-source emission sectors addressed in this work account for about 850 tonnes of the emissions included in the 2010 inventory. This work allocated more than 90% of these emissions to some 4600 identified point source locations, including significantly more point source locations in Africa, Asia, Australia and South America than had been identified during previous work to geospatially-distribute the 2005 global inventory. The results demonstrate the utility and the limitations of using existing, mainly public domain resources to accomplish this work. Assumptions necessary to make use of selected online resources are discussed, as are artefacts that can arise when these assumptions are applied to assign (national-sector) emissions estimates to point sources in various countries and regions. Notwithstanding the limitations of the available information, the value of this procedure over alternative methods commonly used to geo-spatially distribute emissions, such as use of 'proxy' datasets to represent emissions patterns, is illustrated. Improvements in information that would facilitate greater use of these methods in future work to assign emissions to point-sources are discussed. These include improvements to both national

  19. Extraction of Point Source Gamma Signals from Aerial Survey Data Taken over a Las Vegas Nevada, Residential Area

    Thane J. Hendricks

    2007-01-01

    Detection of point-source gamma signals from aerial measurements is complicated by widely varying terrestrial gamma backgrounds, since these variations frequently resemble signals from point-sources. Spectral stripping techniques have been very useful in separating man-made and natural radiation contributions which exist on Energy Research and Development Administration (ERDA) plant sites and other like facilities. However, these facilities are generally situated in desert areas or otherwise flat terrain with few man-made structures to disturb the natural background. It is of great interest to determine if the stripping technique can be successfully applied in populated areas where numerous man-made disturbances (houses, streets, yards, vehicles, etc.) exist

  20. Role of rural solid waste management in non-point source pollution control of Dianchi Lake catchments, China

    Wenjing LU; Hongtao WANG

    2008-01-01

    In recent years, with control of the main municipal and industrial point pollution sources and implementation of cleaning for some inner pollution sources in the water body, the discharge of point source pollution decreased gradually, while non-point source pollution has become increasingly distressing in the Dianchi Lake catchments. As one of the major targets in non-point source pollution control, an integrated solid waste control strategy combining a technological solution and a management system was proposed and implemented based on the waste disposal situation and characteristics of rural solid waste in the demonstration area. As the key technology in rural solid waste treatment, both centralized plant-scale composting and a dispersed farmer-operated waste treating system showed promise in rendering timely benefits in efficiency, large handling capacity, high quality of the end product, as well as good economic return. Problems encountered during multi-substrate co-composting, such as pathogens, high moisture content, asynchronism in the decomposition of different substrates, and low quality of the end product, can all be tackled. 92.5% of solid waste was collected in the demonstration area, while the treating and recycling ratio reached 87.9%, which prevented 32.2 t of nitrogen and 3.9 t of phosphorus per year from entering the water body of Dianchi Lake after implementation of the project.

  1. Source location in plates based on the multiple sensors array method and wavelet analysis

    Yang, Hong Jun; Shin, Tae Jin; Lee, Sang Kwon

    2014-01-01

    A new method for impact source localization in a plate is proposed based on multiple signal classification (MUSIC) and wavelet analysis. For source localization, the direction of arrival of the wave caused by an impact on a plate and the distance between the impact position and the sensor should be estimated. The direction of arrival can be estimated accurately using the MUSIC method. The distance can be obtained by using the time delay of arrival and the group velocity of the Lamb wave in a plate. The time delay is experimentally estimated using the continuous wavelet transform of the wave. Elastodynamic theory is used for the group velocity estimation.

  2. Source location in plates based on the multiple sensors array method and wavelet analysis

    Yang, Hong Jun; Shin, Tae Jin; Lee, Sang Kwon [Inha University, Incheon (Korea, Republic of)

    2014-01-15

    A new method for impact source localization in a plate is proposed based on multiple signal classification (MUSIC) and wavelet analysis. For source localization, the direction of arrival of the wave caused by an impact on a plate and the distance between the impact position and the sensor should be estimated. The direction of arrival can be estimated accurately using the MUSIC method. The distance can be obtained by using the time delay of arrival and the group velocity of the Lamb wave in a plate. The time delay is experimentally estimated using the continuous wavelet transform of the wave. Elastodynamic theory is used for the group velocity estimation.
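
    Both records above rely on the MUSIC algorithm for the direction-of-arrival step. The sketch below is a textbook narrowband MUSIC pseudospectrum for a uniform linear array with simulated data; it is a simplified illustration only, since the dispersive Lamb waves and the wavelet-based time-delay estimation of the papers are not modelled here, and all parameter values are made up.

      import numpy as np

      def music_doa(X, n_sources, d_over_lambda=0.5, angles=np.linspace(-90, 90, 361)):
          """MUSIC pseudospectrum for a uniform linear array.
          X: (n_sensors, n_snapshots) complex baseband data."""
          n_sensors = X.shape[0]
          R = X @ X.conj().T / X.shape[1]            # sample covariance
          eigval, eigvec = np.linalg.eigh(R)          # ascending eigenvalues
          En = eigvec[:, :n_sensors - n_sources]      # noise subspace
          k = np.arange(n_sensors)
          spectrum = []
          for ang in np.deg2rad(angles):
              a = np.exp(-2j * np.pi * d_over_lambda * k * np.sin(ang))
              denom = np.linalg.norm(En.conj().T @ a) ** 2
              spectrum.append(1.0 / denom)
          return angles, np.array(spectrum)

      # Simulate one narrowband source at 25 degrees on an 8-element array.
      rng = np.random.default_rng(0)
      n_sensors, snapshots, theta = 8, 200, np.deg2rad(25.0)
      a = np.exp(-2j * np.pi * 0.5 * np.arange(n_sensors) * np.sin(theta))
      s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
      X = np.outer(a, s) + 0.1 * (rng.standard_normal((n_sensors, snapshots))
                                  + 1j * rng.standard_normal((n_sensors, snapshots)))
      angles, P = music_doa(X, n_sources=1)
      print("estimated DOA:", angles[int(np.argmax(P))], "deg")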

  3. Does the nervous system use equilibrium-point control to guide single and multiple joint movements?

    Bizzi, E; Hogan, N; Mussa-Ivaldi, F A; Giszter, S

    1992-12-01

    The hypothesis that the central nervous system (CNS) generates movement as a shift of the limb's equilibrium posture has been corroborated experimentally in studies involving single- and multijoint motions. Posture may be controlled through the choice of muscle length-tension curves that set agonist-antagonist torque-angle curves, determining an equilibrium position for the limb and the stiffness about the joints. Arm trajectories seem to be generated through a control signal defining a series of equilibrium postures. The equilibrium-point hypothesis drastically simplifies the requisite computations for multijoint movements and mechanical interactions with complex dynamic objects in the environment. Because the neuromuscular system is springlike, the instantaneous difference between the arm's actual position and the equilibrium position specified by the neural activity can generate the requisite torques, avoiding the complex "inverse dynamics" problem of computing the torques at the joints. The hypothesis provides a simple, unified description of posture and movement as well as contact control task performance, in which the limb must exert force stably and do work on objects in the environment. The latter is a surprisingly difficult problem, as robotic experience has shown. The prior evidence for the hypothesis came mainly from psychophysical and behavioral experiments. Our recent work has shown that microstimulation of the frog spinal cord's premotoneural network produces leg movements to various positions in the frog's motor space. The hypothesis can now be investigated in the neurophysiological machinery of the spinal cord.
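
    A toy numerical illustration of the spring-like control idea described above is sketched below: a single joint is driven by a torque proportional to the difference between its actual angle and a gradually shifting equilibrium angle. The stiffness, damping and inertia values are arbitrary and the model is not the authors' experimental setup.

      import numpy as np

      # Minimal single-joint illustration of equilibrium-point control:
      # the "neural command" is a time-varying equilibrium angle theta_eq(t),
      # and spring-like muscle behaviour generates torque toward it.
      K, B, I = 5.0, 1.0, 0.05          # stiffness, damping, limb inertia (arbitrary units)
      dt, T = 0.001, 2.0
      theta, omega = 0.0, 0.0
      target = np.deg2rad(60.0)

      for step in range(int(T / dt)):
          t = step * dt
          # Gradual shift of the equilibrium posture (a "virtual trajectory").
          theta_eq = target * min(t / 1.0, 1.0)
          torque = -K * (theta - theta_eq) - B * omega   # spring-like muscle torque
          omega += (torque / I) * dt
          theta += omega * dt

      print("final angle (deg):", np.degrees(theta))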

  4. Modelling the transport of solid contaminants originated from a point source

    Salgueiro, Dora V.; Conde, Daniel A. S.; Franca, Mário J.; Schleiss, Anton J.; Ferreira, Rui M. L.

    2017-04-01

    The solid phases of natural flows can comprise an important repository for contaminants in aquatic ecosystems and can propagate as turbidity currents, generating a stratified environment. Contaminants can be desorbed under specific environmental conditions, becoming re-suspended, with a potential impact on the aquatic biota. Forecasting the distribution of the contaminated turbidity current is thus crucial for a complete assessment of environmental exposure. In this work we validate the ability of the model STAV-2D, developed at CERIS (IST), to simulate stratified flows such as those resulting from turbidity currents in complex geometrical environments. The validation involves not only flow phenomena inherent to flows generated by density imbalance but also convective effects brought about by the complex geometry of the water basin where the current propagates. This latter aspect is of paramount importance since, in real applications, currents may propagate in semi-confined geometries in plan view, generating important convective accelerations. Velocity fields and mass distributions obtained from experiments carried out at CERIS (IST) are used as validation data for the model. The experimental set-up comprises a point source in a rectangular basin with a wall placed perpendicularly to the outer walls. This generates a complex 2D flow with an advancing wave front and shocks due to the flow reflection from the walls. STAV-2D is based on the depth- and time-averaged mass and momentum equations for mixtures of water and sediment, understood as continua. It is closed in terms of flow resistance and capacity bedload discharge by a set of classic closure models and a specific high-concentration formulation. The two-layer model is derived from layer-averaged Navier-Stokes equations, resulting in a system of layer-specific non-linear shallow-water equations, solved through explicit first- or second-order schemes. According to the experimental data for mass distribution, the
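
    STAV-2D solves layer-averaged 2D two-layer shallow-water equations; as a much-reduced illustration of the explicit first-order schemes mentioned above, the sketch below integrates the single-layer 1-D shallow-water equations with a Lax-Friedrichs update for a dam-break-like front. All parameters are illustrative and no sediment or density coupling is included.

      import numpy as np

      # Explicit first-order (Lax-Friedrichs) solver for the 1-D shallow-water
      # equations, a much simplified single-layer analogue of the layer-averaged
      # equations solved by STAV-2D (illustration only).
      g = 9.81
      nx, L, t_end = 400, 10.0, 0.5
      dx = L / nx
      x = np.linspace(0.5 * dx, L - 0.5 * dx, nx)

      h = np.where(x < 5.0, 2.0, 1.0)    # dam-break initial depth
      hu = np.zeros(nx)                  # depth-averaged momentum

      def flux(h, hu):
          u = hu / h
          return np.array([hu, hu * u + 0.5 * g * h ** 2])

      t = 0.0
      while t < t_end:
          u = hu / h
          dt = 0.4 * dx / np.max(np.abs(u) + np.sqrt(g * h))   # CFL condition
          U = np.array([h, hu])
          F = flux(h, hu)
          # Lax-Friedrichs update on interior cells, transmissive boundaries.
          Unew = U.copy()
          Unew[:, 1:-1] = 0.5 * (U[:, 2:] + U[:, :-2]) \
              - 0.5 * dt / dx * (F[:, 2:] - F[:, :-2])
          Unew[:, 0], Unew[:, -1] = Unew[:, 1], Unew[:, -2]
          h, hu = Unew
          t += dt

      print("front depth range:", h.min(), h.max())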

  5. Monte Carlo analyses of the source multiplication factor of the YALINA booster facility

    Talamo, Alberto; Gohar, Y.; Kondev, F.; Aliberti, Gerardo [Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (United States); Bolshinsky, I. [Idaho National Laboratory, P. O. Box 2528, Idaho Falls, Idaho 83403 (United States); Kiyavitskaya, Hanna; Bournos, Victor; Fokov, Yury; Routkovskaya, Christina; Serafimovich, Ivan [Joint Institute for Power and Nuclear Research-Sosny, National Academy of Sciences, Minsk, acad. Krasin, 99, 220109 (Belarus)

    2008-07-01

    The multiplication factor of a subcritical assembly is affected by the energy spectrum and spatial distribution of the neutron source. In a critical assembly, neutrons emerge from the fission reactions with an average energy of approximately 2 MeV; in a deuteron accelerator driven subcritical assembly, neutrons emerge from the fusion target with a fixed energy of 2.45 or 14.1 MeV, from the Deuterium-Deuterium (D-D) and Deuterium-Tritium (D-T) reactions respectively. This study aims at generating accurate neutronics models for the YALINA Booster facility, based on the use of different Monte Carlo neutron transport codes, at defining the facility key physical parameters, and at comparing the neutron multiplication factor for three different neutron sources: fission, D-D and D-T. The calculated values are compared with the experimental results. (authors)

  6. Monte Carlo analyses of the source multiplication factor of the YALINA booster facility

    Talamo, Alberto; Gohar, Y.; Kondev, F.; Aliberti, Gerardo; Bolshinsky, I.; Kiyavitskaya, Hanna; Bournos, Victor; Fokov, Yury; Routkovskaya, Christina; Serafimovich, Ivan

    2008-01-01

    The multiplication factor of a subcritical assembly is affected by the energy spectrum and spatial distribution of the neutron source. In a critical assembly, neutrons emerge from the fission reactions with an average energy of ∼2 MeV; in a deuteron accelerator driven subcritical assembly, neutrons emerge from the fusion target with a fixed energy of 2.45 or 14.1 MeV, from the Deuterium-Deuterium (D-D) and Deuterium-Tritium (D-T) reactions respectively. This study aims at generating accurate neutronics models for the YALINA Booster facility, based on the use of different Monte Carlo neutron transport codes, at defining the facility key physical parameters, and at comparing the neutron multiplication factor for three different neutron sources: fission, D-D and D-T. The calculated values are compared with the experimental results. (authors)
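
    For the subcritical assemblies discussed in these two records, the total neutron multiplication is commonly written as M = 1/(1 - k_s), where the source multiplication factor k_s depends on the source spectrum and position. The sketch below only evaluates this relation for a few illustrative k_s values; they are not the YALINA Booster results.

      # Subcritical multiplication for different external neutron sources.
      # The source multiplication factor k_s (and hence the total neutron
      # multiplication M = 1 / (1 - k_s)) depends on the source energy spectrum
      # and position; the k_s values below are purely illustrative, not the
      # YALINA Booster measurements.
      def total_multiplication(k_s):
          """Total neutron multiplication of a subcritical assembly driven by
          an external source with source multiplication factor k_s."""
          return 1.0 / (1.0 - k_s)

      for label, k_s in [("fission-like source", 0.975),
                         ("D-D (2.45 MeV) source", 0.970),
                         ("D-T (14.1 MeV) source", 0.965)]:
          print(f"{label}: k_s = {k_s:.3f}, M = {total_multiplication(k_s):.1f}")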

  7. Design and Integration of an All-Magnetic Attitude Control System for FASTSAT-HSV01's Multiple Pointing Objectives

    DeKock, Brandon; Sanders, Devon; Vanzwieten, Tannen; Capo-Lugo, Pedro

    2011-01-01

    The FASTSAT-HSV01 spacecraft is a microsatellite with magnetic torque rods as its sole attitude control actuator. FASTSAT's multiple payloads and mission functions require the Attitude Control System (ACS) to maintain Local Vertical Local Horizontal (LVLH)-referenced attitudes without spin-stabilization, while requiring the pointing errors for some attitudes to be significantly smaller than the previous best demonstrated for this type of control system. The mission requires the ACS to hold multiple stable, unstable, and non-equilibrium attitudes, as well as eject a 3U CubeSat from an onboard P-POD and recover from the ensuing tumble. This paper describes the Attitude Control System, the reasons for design choices, how the ACS integrates with the rest of the spacecraft, and gives recommendations for potential future applications of the work.
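
    The record does not give the control laws used; as a generic illustration of how magnetic torque rods produce control torque, the sketch below implements the classic B-dot detumbling law (command a dipole opposing the measured rate of change of the geomagnetic field), with made-up gains and field samples. It is not the FASTSAT-HSV01 ACS design.

      import numpy as np

      def bdot_dipole(b_body, b_body_prev, dt, k=1e4, m_max=1.0):
          """Classic B-dot detumbling law for magnetic torque rods: command a
          magnetic dipole opposing the rate of change of the measured field.
          b_body: magnetometer reading in body frame [T]; m_max: rod saturation [A m^2]."""
          b_dot = (b_body - b_body_prev) / dt
          m_cmd = -k * b_dot
          return np.clip(m_cmd, -m_max, m_max)

      def magnetic_torque(m_cmd, b_body):
          """Torque produced by the commanded dipole in the local field."""
          return np.cross(m_cmd, b_body)

      # One control step with made-up field samples.
      b_prev = np.array([2.0e-5, -1.0e-5, 4.0e-5])
      b_now = np.array([2.1e-5, -1.2e-5, 3.9e-5])
      m = bdot_dipole(b_now, b_prev, dt=1.0)
      print("commanded dipole [A m^2]:", m, "torque [N m]:", magnetic_torque(m, b_now))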

  8. Modeling non-point source pollutants in the vadose zone: Back to the basics

    Corwin, Dennis L.; Letey, John, Jr.; Carrillo, Marcia L. K.

    More than ever before in the history of scientific investigation, modeling is viewed as a fundamental component of the scientific method because of the relatively recent development of the computer. No longer must the scientific investigator be confined to artificially isolated studies of individual processes that can lead to oversimplified and sometimes erroneous conceptions of larger phenomena. Computer models now enable scientists to attack problems related to open systems such as climatic change, and the assessment of environmental impacts, where the whole of the interactive processes are greater than the sum of their isolated components. Environmental assessment involves the determination of change of some constituent over time. This change can be measured in real time or predicted with a model. The advantage of prediction, like preventative medicine, is that it can be used to alter the occurrence of potentially detrimental conditions before they are manifest. The much greater efficiency of preventative, rather than remedial, efforts strongly justifies the need for an ability to accurately model environmental contaminants such as non-point source (NPS) pollutants. However, the environmental modeling advances that have accompanied computer technological development are a mixed blessing. Where once we had a plethora of discordant data without a holistic theory, now the pendulum has swung so that we suffer from a growing stockpile of models of which a significant number have never been confirmed or even attempts made to confirm them. Modeling has become an end in itself rather than a means because of limited research funding, the high cost of field studies, limitations in time and patience, difficulty in cooperative research and pressure to publish papers as quickly as possible. Modeling and experimentation should be ongoing processes that reciprocally enhance one another with sound, comprehensive experiments serving as the building blocks of models and models

  9. Point source detection using the Spherical Mexican Hat Wavelet on simulated all-sky Planck maps

    Vielva, P.; Martínez-González, E.; Gallegos, J. E.; Toffolatti, L.; Sanz, J. L.

    2003-09-01

    We present an estimation of the point source (PS) catalogue that could be extracted from the forthcoming ESA Planck mission data. We have applied the Spherical Mexican Hat Wavelet (SMHW) to simulated all-sky maps that include cosmic microwave background (CMB), Galactic emission (thermal dust, free-free and synchrotron), thermal Sunyaev-Zel'dovich effect and PS emission, as well as instrumental white noise. This work is an extension of the one presented in Vielva et al. We have developed an algorithm focused on a fast local optimal scale determination, which is crucial to achieve a PS catalogue with a large number of detections and a low flux limit. An important effort has also been made to reduce the CPU processing time for the spherical harmonic transformation, in order to perform the PS detection in a reasonable time. The presented algorithm is able to provide a PS catalogue above fluxes: 0.48 Jy (857 GHz), 0.49 Jy (545 GHz), 0.18 Jy (353 GHz), 0.12 Jy (217 GHz), 0.13 Jy (143 GHz), 0.16 Jy (100 GHz HFI), 0.19 Jy (100 GHz LFI), 0.24 Jy (70 GHz), 0.25 Jy (44 GHz) and 0.23 Jy (30 GHz). We detect around 27 700 PS at the highest frequency Planck channel and 2900 at the 30-GHz one. The completeness levels are: 70 per cent (857 GHz), 75 per cent (545 GHz), 70 per cent (353 GHz), 80 per cent (217 GHz), 90 per cent (143 GHz), 85 per cent (100 GHz HFI), 80 per cent (100 GHz LFI), 80 per cent (70 GHz), 85 per cent (44 GHz) and 80 per cent (30 GHz). In addition, we can find several PS at different channels, allowing the study of the spectral behaviour and the physical processes acting on them. We also present the basic procedure to apply the method to maps convolved with asymmetric beams. The algorithm takes ~72 h for the most CPU time-demanding channel (857 GHz) on a Compaq HPC320 (Alpha EV68 1-GHz processor) and requires 4 GB of RAM memory; the CPU time scales as O[N_Ro N_pix^(3/2) log(N_pix)], where N_pix is the number of pixels in the map and N_Ro is the number of optimal scales needed.
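
    A flat-sky analogue of the wavelet filtering described above can be sketched with the 2-D Mexican hat, which is proportional to minus the Laplacian of a Gaussian; the code below (illustrative amplitudes and scales, not the Planck simulation pipeline) filters a toy map at a scale close to the source width so that point-like features stand out against a smooth background.

      import numpy as np
      from scipy.ndimage import gaussian_laplace

      # Flat-sky analogue of Mexican-hat-wavelet point-source enhancement:
      # the 2-D Mexican hat is (up to normalisation) minus the Laplacian of a
      # Gaussian, so filtering with -LoG at a scale close to the beam boosts
      # point-like features relative to smooth backgrounds and white noise.
      rng = np.random.default_rng(1)
      n, beam_sigma = 256, 2.0

      sky = rng.standard_normal((n, n))                      # white noise
      yy, xx = np.mgrid[0:n, 0:n]
      sky += 5.0 * np.sin(xx / 40.0)                          # slowly varying "foreground"
      for (y0, x0) in [(60, 80), (180, 200), (120, 40)]:      # three point sources
          sky += 8.0 * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * beam_sigma ** 2))

      filtered = -gaussian_laplace(sky, sigma=beam_sigma)     # Mexican-hat-like filter
      snr = (filtered - filtered.mean()) / filtered.std()
      peaks = np.argwhere(snr > 5.0)
      print("pixels above 5 sigma:", len(peaks))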

  10. Agricultural non-point source pollution of glyphosate and AMPA at a catchment scale

    Okada, Elena; Perez, Debora; De Geronimo, Eduardo; Aparicio, Virginia; Costa, Jose Luis

    2017-04-01

    Information on the actual input of pesticides into the environment is crucial for proper risk assessment and the design of risk reduction measures. The Crespo basin is found within Balcarce County, located south-east of the Buenos Aires Province. The whole basin has an area of approximately 490 km2 and the river has a length of 65 km. This study focuses on the upper basin of the Crespo stream, covering an area of 226 km2 in which 94.7% of the land is under agricultural production, representing a highly productive area characteristic of the Austral Pampas region. In this study we evaluated the levels of glyphosate and its metabolite aminomethylphosphonic acid (AMPA) in soils, and the non-point source pollution of surface waters, stream sediments and groundwater, over a period of one year. Stream water samples were taken monthly using propylene bottles, from the center of the bridge. If present, sediment samples from the first 5 cm were collected using cylinder samplers. Groundwater samples were taken from windmills or electric pumps from different farms every two months. At the same time, composite soil samples (at 5 cm depth) were taken from an agricultural plot of each farm. Samples were analyzed for detection and quantification of glyphosate and AMPA using ultra-performance liquid chromatography coupled to a mass spectrometer (UPLC-MS/MS). The limit of detection (LD) in the soil samples was 0.5 μg kg-1 and the limit of quantification (LQ) was 3 μg kg-1, both for glyphosate and AMPA. In water samples the LD was 0.1 μg L-1 and the LQ was 0.5 μg L-1. The results showed that the herbicide dispersed into all the studied environmental compartments. Glyphosate and AMPA residues were detected in 34% and 54% of the stream water samples, respectively. Sediment samples had a higher detection frequency (>96%) than water samples, and there was no relationship between the presence in surface water and the detection in sediment samples. The presence in sediment samples

  11. Distributed 3D Source Localization from 2D DOA Measurements Using Multiple Linear Arrays

    Antonio Canclini

    2017-01-01

    Full Text Available This manuscript addresses the problem of 3D source localization from directions of arrival (DOAs) in wireless acoustic sensor networks. In this context, multiple sensors measure the DOA of the source, and a central node combines the measurements to yield the source location estimate. Traditional approaches require 3D DOA measurements; that is, each sensor estimates the azimuth and elevation of the source by means of a microphone array, typically in a planar or spherical configuration. The proposed methodology aims at reducing the hardware and computational costs by combining measurements related to 2D DOAs estimated from linear arrays arbitrarily placed in 3D space. Each sensor measures the DOA in the plane containing the array and the source. Measurements are then translated into an equivalent planar geometry, in which a set of coplanar equivalent arrays observe the source preserving the original DOAs. This formulation is exploited to define a cost function, whose minimization leads to the source location estimate. An extensive simulation campaign validates the proposed approach and compares its accuracy with state-of-the-art methodologies.
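
    The paper's cost function is built on the equivalent planar geometry described above; as a simpler, commonly used alternative for intuition, the sketch below localizes a source by least-squares intersection of bearing lines (the point minimizing the summed squared distance to the lines), with made-up sensor positions and noise-free bearings.

      import numpy as np

      def localize_from_bearings(sensor_positions, directions):
          """Least-squares intersection of bearing lines: find the point that
          minimises the summed squared distance to lines x = p_i + t * d_i.
          This is a simplified alternative to the cost function in the paper."""
          A = np.zeros((3, 3))
          b = np.zeros(3)
          for p, d in zip(sensor_positions, directions):
              d = d / np.linalg.norm(d)
              P = np.eye(3) - np.outer(d, d)    # projector onto the plane normal to d
              A += P
              b += P @ p
          return np.linalg.solve(A, b)

      # Three sensors observing a source at (2, 3, 1); bearings are exact here.
      source = np.array([2.0, 3.0, 1.0])
      sensors = [np.array([0.0, 0.0, 0.0]),
                 np.array([5.0, 0.0, 0.0]),
                 np.array([0.0, 6.0, 2.0])]
      bearings = [source - p for p in sensors]
      print(localize_from_bearings(sensors, bearings))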

  12. Taming the beast : Free and open-source massive point cloud web visualization

    Martinez-Rubi, O.; Verhoeven, S.; Van Meersbergen, M.; Schûtz, M.; Van Oosterom, P.; Gonçalves, R.; Tijssen, T.

    2015-01-01

    Powered by WebGL, some renderers have recently become available for the visualization of point cloud data over the web, for example Plasio or Potree. We have extended Potree to be able to visualize massive point clouds and we have successfully used it with the second national Lidar survey of the

  13. Monte Carlo full-waveform inversion of crosshole GPR data using multiple-point geostatistical a priori information

    Cordua, Knud Skou; Hansen, Thomas Mejer; Mosegaard, Klaus

    2012-01-01

    We present a general Monte Carlo full-waveform inversion strategy that integrates a priori information described by geostatistical algorithms with Bayesian inverse problem theory. The extended Metropolis algorithm can be used to sample the a posteriori probability density of highly nonlinear... inverse problems, such as full-waveform inversion. Sequential Gibbs sampling is a method that allows efficient sampling of a priori probability densities described by geostatistical algorithms based on either two-point (e.g., Gaussian) or multiple-point statistics. We outline the theoretical framework... (2) Based on a posteriori realizations, complicated statistical questions can be answered, such as the probability of connectivity across a layer. (3) Complex a priori information can be included through geostatistical algorithms. These benefits, however, require more computing resources than traditional...
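
    The accept/reject logic underlying the extended Metropolis algorithm can be illustrated with a plain Metropolis sampler on a scalar parameter, as below; in the actual method the proposal step is a sequential Gibbs perturbation of a geostatistical realization and the likelihood involves full-waveform modelling, neither of which is reproduced here.

      import numpy as np

      rng = np.random.default_rng(0)

      def log_likelihood(m, d_obs, sigma=0.2):
          """Toy likelihood: the 'forward model' is g(m) = m**2, compared with
          observed data d_obs under Gaussian noise."""
          return -0.5 * np.sum((m ** 2 - d_obs) ** 2) / sigma ** 2

      def metropolis(d_obs, n_iter=20000, step=0.1):
          """Plain Metropolis sampler over a scalar model parameter. In the
          extended Metropolis algorithm the proposal would instead be a
          sequential-Gibbs perturbation of a geostatistical realization."""
          m = 0.0
          chain = []
          ll = log_likelihood(m, d_obs)
          for _ in range(n_iter):
              m_prop = m + step * rng.standard_normal()
              ll_prop = log_likelihood(m_prop, d_obs)
              if np.log(rng.uniform()) < ll_prop - ll:   # accept/reject
                  m, ll = m_prop, ll_prop
              chain.append(m)
          return np.array(chain)

      chain = metropolis(d_obs=1.44)
      print("posterior mean of |m|:", np.mean(np.abs(chain[5000:])))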

  14. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    Ma, Denglong [Fuli School of Food Equipment Engineering and Science, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); Zhang, Zaoxiao, E-mail: zhangzx@mail.xjtu.edu.cn [State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); School of Chemical Engineering and Technology, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China)

    2016-07-05

    Highlights: • The intelligent network models were built to predict contaminant gas concentrations. • The improved network models coupled with the Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models with too many inputs based on original monitoring parameters are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models combining the classic Gaussian model with MLA algorithms has been presented. The prediction results from the new models are improved greatly. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model and network models based on original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method to predict contaminant gas dispersion as well as a good forward model for the emission source parameter identification problem.

  15. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    Ma, Denglong; Zhang, Zaoxiao

    2016-01-01

    Highlights: • The intelligent network models were built to predict contaminant gas concentrations. • The improved network models coupled with the Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models with too many inputs based on original monitoring parameters are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models combining the classic Gaussian model with MLA algorithms has been presented. The prediction results from the new models are improved greatly. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model and network models based on original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method to predict contaminant gas dispersion as well as a good forward model for the emission source parameter identification problem.
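
    The Gaussian part of the Gaussian-MLA models above is the classic Gaussian plume solution for a continuous point source; a minimal sketch is given below, with simple power-law dispersion coefficients standing in for a specific stability-class scheme (illustrative values only).

      import numpy as np

      def gaussian_plume(x, y, z, Q, u, H, a=0.08, b=0.06):
          """Classic Gaussian plume concentration [kg/m^3] for a continuous
          point source of strength Q [kg/s] at effective height H [m] in a wind
          u [m/s] along x. Dispersion coefficients sigma_y, sigma_z grow as
          simple power laws of downwind distance here (illustrative values,
          not a particular stability-class scheme)."""
          x = np.maximum(x, 1.0)                  # avoid the singularity at the source
          sigma_y = a * x ** 0.9
          sigma_z = b * x ** 0.85
          lateral = np.exp(-y ** 2 / (2 * sigma_y ** 2))
          vertical = (np.exp(-(z - H) ** 2 / (2 * sigma_z ** 2))
                      + np.exp(-(z + H) ** 2 / (2 * sigma_z ** 2)))   # ground reflection
          return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

      # Ground-level centreline concentration 500 m downwind of a 1 kg/s release.
      print(gaussian_plume(x=500.0, y=0.0, z=0.0, Q=1.0, u=3.0, H=20.0))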

  16. A Monte Carlo multiple source model applied to radiosurgery narrow photon beams

    Chaves, A.; Lopes, M.C.; Alves, C.C.; Oliveira, C.; Peralta, L.; Rodrigues, P.; Trindade, A.

    2004-01-01

    Monte Carlo (MC) methods are nowadays often used in the field of radiotherapy. Through successive steps, radiation fields are simulated, producing source Phase Space Data (PSD) that enable a dose calculation with good accuracy. Narrow photon beams used in radiosurgery can also be simulated by MC codes. However, the poor efficiency in simulating these narrow photon beams produces PSD whose quality prevents calculating dose with the required accuracy. To overcome this difficulty, a multiple source model was developed that enhances the quality of the reconstructed PSD, while also reducing the computation time and storage requirements. This multiple source model was based on the full MC simulation, performed with the MC code MCNP4C, of the Siemens Mevatron KD2 (6 MV mode) linear accelerator head and additional collimators. The full simulation allowed the characterization of the particles coming from the accelerator head and from the additional collimators that shape the narrow photon beams used in radiosurgery treatments. Eight relevant photon virtual sources were identified from the full characterization analysis. Spatial and energy distributions were stored in histograms for the virtual sources representing the accelerator head components and the additional collimators. The photon directions were calculated for virtual sources representing the accelerator head components whereas, for the virtual sources representing the additional collimators, they were recorded into histograms. All these histograms were included in the MC code DPM and, using a sampling procedure that reconstructed the PSDs, dose distributions were calculated in a water phantom divided into 20000 voxels of 1x1x5 mm3. The model accurately calculates dose distributions in the water phantom for all the additional collimators; for depth dose curves, associated errors at 2σ were lower than 2.5% until a depth of 202.5 mm for all the additional collimators and for profiles at various depths, deviations between measured

  17. Gas production strategy of underground coal gasification based on multiple gas sources.

    Tianhong, Duan; Zuotang, Wang; Limin, Zhou; Dongdong, Li

    2014-01-01

    To lower the stability requirements on gas production in UCG (underground coal gasification), create better space and opportunities for the development of UCG, an emerging sunrise industry in its initial stage, and reduce the emission of blast furnace gas, converter gas, and coke oven gas, this paper, for the first time, puts forward a new mode of utilizing multiple gas sources, mainly including ground gasifier gas, UCG gas, blast furnace gas, converter gas, and coke oven gas; the new mode was demonstrated by field tests. According to the field tests, in multiple-gas-source power generation the existing power generation technology can fully adapt to the high hydrogen content, low calorific value, and output fluctuations of UCG gas production; the fluctuations are large and air can serve as the gasifying agent; gas production in UCG for the combined power-and-methanol mode based on multiple gas sources has a strict requirement for stability. It was demonstrated by the field tests that the fluctuations in gas production in UCG can be well monitored through a quality control chart method.

  18. Gas Production Strategy of Underground Coal Gasification Based on Multiple Gas Sources

    Duan Tianhong

    2014-01-01

    Full Text Available To lower the stability requirements on gas production in UCG (underground coal gasification), create better space and opportunities for the development of UCG, an emerging sunrise industry in its initial stage, and reduce the emission of blast furnace gas, converter gas, and coke oven gas, this paper, for the first time, puts forward a new mode of utilizing multiple gas sources, mainly including ground gasifier gas, UCG gas, blast furnace gas, converter gas, and coke oven gas; the new mode was demonstrated by field tests. According to the field tests, in multiple-gas-source power generation the existing power generation technology can fully adapt to the high hydrogen content, low calorific value, and output fluctuations of UCG gas production; the fluctuations are large and air can serve as the gasifying agent; gas production in UCG for the combined power-and-methanol mode based on multiple gas sources has a strict requirement for stability. It was demonstrated by the field tests that the fluctuations in gas production in UCG can be well monitored through a quality control chart method.
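
    The quality control chart method referred to above is not specified in detail in these records; the sketch below assumes a Shewhart individuals chart (mean plus or minus three standard deviations estimated from a stable baseline) applied to hypothetical gas-quality data, purely as an illustration of the monitoring idea.

      import numpy as np

      def control_limits(baseline):
          """Shewhart individuals-chart limits (mean +/- 3 sigma) estimated from
          a baseline period of stable gas production."""
          mu, sigma = np.mean(baseline), np.std(baseline, ddof=1)
          return mu - 3 * sigma, mu + 3 * sigma

      # Hypothetical hourly UCG gas calorific values (MJ/Nm^3).
      rng = np.random.default_rng(3)
      baseline = 5.0 + 0.3 * rng.standard_normal(100)
      monitoring = 5.0 + 0.3 * rng.standard_normal(48)
      monitoring[30:] -= 1.0                       # simulated drop in gas quality

      lcl, ucl = control_limits(baseline)
      out_of_control = np.where((monitoring < lcl) | (monitoring > ucl))[0]
      print(f"limits: [{lcl:.2f}, {ucl:.2f}]; out-of-control hours: {out_of_control}")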

  19. A novel data mining system points out hidden relationships between immunological markers in multiple sclerosis

    Gironi Maira

    2013-01-01

    Full Text Available Abstract Background Multiple Sclerosis (MS) is a multi-factorial disease, where a single biomarker is unlikely to provide comprehensive information. Moreover, due to the non-linearity of biomarkers, traditional statistics are both unsuitable and underpowered to dissect their relationships. Patients affected with primary (PP=14), secondary (SP=33), benign (BB=26), and relapsing-remitting (RR=30) MS, and 42 sex- and age-matched healthy controls were studied. We performed an in-depth immune-phenotypic and functional analysis of peripheral blood mononuclear cells (PBMCs) by flow cytometry. Semantic connectivity maps (AutoCM) were applied to find the natural associations among immunological markers. AutoCM is a special kind of Artificial Neural Network able to find consistent trends and associations among variables. The matrix of connections, visualized through a minimum spanning tree, keeps non-linear associations among variables and captures connection schemes among clusters. Results Complex immunological relationships were shown to be related to different disease courses. Low CD4+IL25+ cell levels were strongly related (link strength, ls=0.81) to SP MS. This phenotype was also associated with high CD4+ROR+ cell levels (ls=0.56). BB MS was related to high CD4+IL13 cell levels (ls=0.90), as well as to a high CD14+IL6 cell percentage (ls=0.80). RR MS was strongly (ls=0.87) related to high CD4+IL25 cell levels, as well as indirectly to high percentages of CD4+IL13 cells. In this latter strong (ls=0.92) association, the induction activity of the former cells (CD4+IL25) on the latter (CD4+IL13) could be confirmed. Another interesting topographic finding was the isolation of Th9 cells (CD4+IL9) from the main part of the immunological network related to MS, suggesting a possible secondary role of this newly described cell phenotype in MS disease. Conclusions This novel application of non-linear mathematical techniques suggests peculiar immunological signatures for different MS phenotypes. Notably, the
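
    The minimum-spanning-tree visualization mentioned above can be sketched as follows: AutoCM-style link strengths are converted to distances so that the strongest associations form the retained backbone. The marker names and most strength values below are illustrative (only loosely based on the link strengths quoted in the abstract), and this is not the AutoCM algorithm itself.

      import numpy as np
      from scipy.sparse.csgraph import minimum_spanning_tree

      # Minimum spanning tree over a (hypothetical) matrix of AutoCM-style link
      # strengths between immunological markers: strong links are converted to
      # short distances so the MST keeps the most connected backbone.
      markers = ["CD4+IL25", "CD4+IL13", "CD4+ROR", "CD14+IL6", "CD4+IL9"]
      strength = np.array([
          [0.00, 0.92, 0.56, 0.30, 0.10],
          [0.92, 0.00, 0.40, 0.80, 0.15],
          [0.56, 0.40, 0.00, 0.25, 0.20],
          [0.30, 0.80, 0.25, 0.00, 0.05],
          [0.10, 0.15, 0.20, 0.05, 0.00],
      ])
      distance = np.where(strength > 0, 1.0 - strength, 0.0)  # 0 = no edge
      mst = minimum_spanning_tree(distance).toarray()

      for i, j in zip(*np.nonzero(mst)):
          print(f"{markers[i]} -- {markers[j]}  (link strength {strength[i, j]:.2f})")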

  20. Validation of novel calibration scheme with traceable point-like (22)Na sources on six types of PET scanners.

    Hasegawa, Tomoyuki; Oda, Keiichi; Wada, Yasuhiro; Sasaki, Toshiaki; Sato, Yasushi; Yamada, Takahiro; Matsumoto, Mikio; Murayama, Hideo; Kikuchi, Kei; Miyatake, Hiroki; Abe, Yutaka; Miwa, Kenta; Akimoto, Kenta; Wagatsuma, Kei

    2013-05-01

    To improve the reliability and convenience of the calibration procedure of positron emission tomography (PET) scanners, we have been developing a novel calibration path based on traceable point-like sources. When using (22)Na sources, special care should be taken to avoid the effects of the 1.275-MeV γ rays accompanying β+ decays. The purpose of this study is to validate this new calibration scheme with traceable point-like (22)Na sources on various types of PET scanners. Traceable point-like (22)Na sources with a spherical absorber design that assures uniform angular distribution of the emitted annihilation photons were used. The tested PET scanners included a clinical whole-body PET scanner, four types of clinical PET/CT scanners from different manufacturers, and a small-animal PET scanner. The region of interest (ROI) diameter dependence of ROI values was represented with a fitting function, which was assumed to consist of a recovery part due to spatial resolution and a quadratic background part originating from the scattered γ rays. The observed ROI radius dependence was well represented with the assumed fitting function (R2 > 0.994). The calibration factors determined using the point-like sources were consistent with those by the standard cross-calibration method within an uncertainty of ±4%, which was reasonable considering the uncertainty in the standard cross-calibration method. This novel calibration scheme based on the use of traceable (22)Na point-like sources was successfully validated for six types of commercial PET scanners.
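
    The record states the generic form of the ROI fitting function but not its exact expression; the sketch below assumes a Gaussian-PSF recovery term plus a quadratic scatter background and fits it to synthetic data with scipy, purely to illustrate the fitting step.

      import numpy as np
      from scipy.optimize import curve_fit

      def roi_model(r, A, sigma, b):
          """Assumed ROI-value vs ROI-radius model: a recovery term for a point
          source blurred by a Gaussian PSF of width sigma, plus a quadratic
          background from scattered gamma rays (the record gives the generic
          form but not the exact expression)."""
          return A * (1.0 - np.exp(-r ** 2 / (2.0 * sigma ** 2))) + b * r ** 2

      # Synthetic ROI curve: true activity 100 (arbitrary units), 5 mm PSF,
      # small scatter term, plus measurement noise.
      rng = np.random.default_rng(7)
      radii = np.linspace(2.0, 40.0, 20)                       # ROI radii [mm]
      values = roi_model(radii, 100.0, 5.0, 0.01) + rng.normal(0, 0.5, radii.size)

      popt, _ = curve_fit(roi_model, radii, values, p0=[50.0, 3.0, 0.0])
      print("fitted activity, PSF sigma, background coeff:", popt)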

  1. Design and evaluation of aircraft heat source systems for use with high-freezing point fuels

    Pasion, A. J.

    1979-01-01

    The objectives were the design, performance and economic analyses of practical aircraft fuel heating systems that would permit the use of high freezing-point fuels on long-range aircraft. Two hypothetical hydrocarbon fuels with freezing points of -29 C and -18 C were used to represent the variation from current-day jet fuels. A Boeing 747-200 with JT9D-7/7A engines was used as the baseline aircraft. A 9300 km mission was used as the mission length on which the heat requirements to maintain the fuel above its freezing point were based.

  2. Seeing the Point: Using Visual Sources to Understand the Arguments for Women's Suffrage

    Card, Jane

    2011-01-01

    Visual sources, Jane Card argues, are a powerful resource for historical learning but using them in the classroom requires careful thought and planning. Card here shares how she has used visual source material in order to teach her students about the women's suffrage movement. In particular, Card shows how a chain of questions that moves from the…

  3. Improved Point-source Detection in Crowded Fields Using Probabilistic Cataloging

    Portillo, Stephen K. N.; Lee, Benjamin C. G.; Daylan, Tansu; Finkbeiner, Douglas P.

    2017-10-01

    Cataloging is challenging in crowded fields because sources are extremely covariant with their neighbors and blending makes even the number of sources ambiguous. We present the first optical probabilistic catalog, cataloging a crowded (~0.1 sources per pixel brighter than 22nd mag in F606W) Sloan Digital Sky Survey r-band image from M2. Probabilistic cataloging returns an ensemble of catalogs inferred from the image and thus can capture source-source covariance and deblending ambiguities. By comparing to a traditional catalog of the same image and a Hubble Space Telescope catalog of the same region, we show that our catalog ensemble better recovers sources from the image. It goes more than a magnitude deeper than the traditional catalog while having a lower false-discovery rate brighter than 20th mag. We also present an algorithm for reducing this catalog ensemble to a condensed catalog that is similar to a traditional catalog, except that it explicitly marginalizes over source-source covariances and nuisance parameters. We show that this condensed catalog has a similar completeness and false-discovery rate to the catalog ensemble. Future telescopes will be more sensitive, and thus more of their images will be crowded. Probabilistic cataloging performs better than existing software in crowded fields and so should be considered when creating photometric pipelines in the Large Synoptic Survey Telescope era.

  4. Risk-based prioritisation of point sources through assessment of the impact on a water supply

    Overheu, Niels D.; Tuxen, Nina; Troldborg, Mads

    2011-01-01

    vulnerability mapping, site-specific mass flux estimates on a local scale from all the sources, and 3-D catchment-scale fate and transport modelling. It handles sources at various knowledge levels and accounts for uncertainties. The tool estimates the impacts on the water supply in the catchment and provides...

  5. Polarized point sources in the LOFAR Two-meter Sky Survey: A preliminary catalog

    Van Eck, C. L.; Haverkorn, M.; Alves, M. I. R.; Beck, R.; Best, P.; Carretti, E.; Chyży, K. T.; Farnes, J. S.; Ferrière, K.; Hardcastle, M. J.; Heald, G.; Horellou, C.; Iacobelli, M.; Jelić, V.; Mulcahy, D. D.; O'Sullivan, S. P.; Polderman, I. M.; Reich, W.; Riseley, C. J.; Röttgering, H.; Schnitzeler, D. H. F. M.; Shimwell, T. W.; Vacca, V.; Vink, J.; White, G. J.

    2018-06-01

    The polarization properties of radio sources at very low frequencies (right ascension, 45°-57° declination, 570 square degrees). We have produced a catalog of 92 polarized radio sources at 150 MHz at 4.3 arcmin resolution and 1 mJy rms sensitivity, which is the largest catalog of polarized sources at such low frequencies. We estimate a lower limit to the polarized source surface density at 150 MHz, with our resolution and sensitivity, of 1 source per 6.2 square degrees. We find that our Faraday depth measurements are in agreement with previous measurements and have significantly smaller errors. Most of our sources show significant depolarization compared to 1.4 GHz, but there is a small population of sources with low depolarization, indicating that their polarized emission is highly localized in Faraday depth. We predict that an extension of this work to the full LOTSS data would detect at least 3400 polarized sources using the same methods, and probably considerably more with improved data processing.

  6. Managing Multiple Sources of Competitive Advantage in a Complex Competitive Environment

    Alexandre Howard Henry Lapersonne

    2013-12-01

    Full Text Available The aim of this article is to review the literature on the topic of sustained and temporary competitive advantage creation, specifically in dynamic markets, and to propose further research possibilities. After having analyzed the main trends and scholars’ works on the subject, it was concluded that a firm which has been experiencing erosion of its core sources of economic rent generation, should have diversified its strategy portfolio in a search for new sources of competitive advantage, ones that could compensate for the decline of profits provoked by intensive competitive environments. This review concludes with the hypothesis that firms, who have decided to enter and manage multiple competitive environments, should have developed a multiple strategies framework approach. The management of this source of competitive advantage portfolio should have allowed persistence of a firm’s superior economic performance through the management of diverse temporary advantages lifecycle and through a resilient effect, where a very successful source of competitive advantage compensates the ones that have been eroded. Additionally, the review indicates that economies of emerging countries, such as the ones from the BRIC block, should present a more complex competitive environment due to their historical nature of cultural diversity, social contrasts and frequent economic disruption, and also because of recent institutional normalization that has turned the market into hypercompetition. Consequently, the study of complex competition should be appropriate in such environments.

  7. Two Wrongs Make a Right: Addressing Underreporting in Binary Data from Multiple Sources.

    Cook, Scott J; Blas, Betsabe; Carroll, Raymond J; Sinha, Samiran

    2017-04-01

    Media-based event data, i.e., data compiled from reporting by media outlets, are widely used in political science research. However, events of interest (e.g., strikes, protests, conflict) are often underreported by these primary and secondary sources, producing incomplete data that risks inconsistency and bias in subsequent analysis. While general strategies exist to help ameliorate this bias, these methods do not make full use of the information often available to researchers. Specifically, much of the event data used in the social sciences is drawn from multiple, overlapping news sources (e.g., Agence France-Presse, Reuters). Therefore, we propose a novel maximum likelihood estimator that corrects for misclassification in data arising from multiple sources. In the most general formulation of our estimator, researchers can specify separate sets of predictors for the true-event model and each of the misclassification models characterizing whether a source fails to report on an event. As such, researchers are able to accurately test theories on both the causes of and reporting on an event of interest. Simulations show that our technique regularly outperforms current strategies that neglect misclassification, the unique features of the data-generating process, or both. We also illustrate the utility of this method with a model of repression using the Social Conflict in Africa Database.
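
    A stripped-down version of the likelihood logic described above can be sketched as follows: a latent event occurs with a covariate-dependent probability, and each source reports a true event with its own (here constant) probability while never reporting false events. The full estimator in the paper allows covariates in each reporting equation; the code below is a simplified illustration with simulated data.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit

      # Simplified two-source underreporting model: a true event occurs with
      # probability expit(x @ beta); each source independently reports a true
      # event with a constant probability r_j and never reports false events.
      rng = np.random.default_rng(0)
      n, J = 5000, 2
      x = np.column_stack([np.ones(n), rng.standard_normal(n)])
      beta_true, r_true = np.array([-0.5, 1.0]), np.array([0.6, 0.4])
      z = rng.uniform(size=n) < expit(x @ beta_true)                 # latent events
      y = (rng.uniform(size=(n, J)) < r_true) & z[:, None]           # source reports

      def negloglik(theta):
          beta, r = theta[:2], expit(theta[2:])                      # keep r in (0, 1)
          p = expit(x @ beta)
          rep = np.prod(np.where(y, r, 1 - r), axis=1)               # P(reports | event)
          miss = np.prod(1 - r)                                      # P(no report | event)
          any_report = y.any(axis=1)
          lik = np.where(any_report, p * rep, (1 - p) + p * miss)
          return -np.sum(np.log(lik))

      fit = minimize(negloglik, x0=np.zeros(4), method="BFGS")
      print("beta estimate:", fit.x[:2], "reporting rates:", expit(fit.x[2:]))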

  8. Using National Drug Codes and drug knowledge bases to organize prescription records from multiple sources.

    Simonaitis, Linas; McDonald, Clement J

    2009-10-01

    The utility of National Drug Codes (NDCs) and drug knowledge bases (DKBs) in the organization of prescription records from multiple sources was studied. The master files of most pharmacy systems include NDCs and local codes to identify the products they dispense. We obtained a large sample of prescription records from seven different sources. These records carried a national product code or a local code that could be translated into a national product code via their formulary master. We obtained mapping tables from five DKBs. We measured the degree to which the DKB mapping tables covered the national product codes carried in or associated with the sample of prescription records. Considering the total prescription volume, DKBs covered 93.0-99.8% of the product codes from three outpatient sources and 77.4-97.0% of the product codes from four inpatient sources. Among the in-patient sources, invented codes explained 36-94% of the noncoverage. Outpatient pharmacy sources rarely invented codes, which comprised only 0.11-0.21% of their total prescription volume, compared with inpatient pharmacy sources for which invented codes comprised 1.7-7.4% of their prescription volume. The distribution of prescribed products was highly skewed, with 1.4-4.4% of codes accounting for 50% of the message volume and 10.7-34.5% accounting for 90% of the message volume. DKBs cover the product codes used by outpatient sources sufficiently well to permit automatic mapping. Changes in policies and standards could increase coverage of product codes used by inpatient sources.

  9. Morphology, chemistry and distribution of neoformed spherulites in agricultural land affected by metallurgical point-source pollution

    Leguedois, S.; Oort, van F.; Jongmans, A.G.; Chevalier, P.

    2004-01-01

    Metal distribution patterns in superficial soil horizons of agricultural land affected by metallurgical point-source pollution were studied using optical and electron microscopy, synchrotron radiation and spectroscopy analyses. The site is located in northern France, at the center of a former entry

  10. The Central Point Source in G76.9+1.0

    Marthi, V. R.; Chengalur, J. N.; Gupta, Y.; ...

    2011

    ... emission has indeed been seen at 2 GHz with the Green Bank Telescope (GBT), establishing the fact that scattering is responsible for its non-detection at low radio frequencies.

  11. Effects of pointing compared with naming and observing during encoding on item and source memory in young and older adults

    Ouwehand, Kim; Gog, Tamara van; Paas, Fred

    2016-01-01

    Research showed that source memory functioning declines with ageing. Evidence suggests that encoding visual stimuli with manual pointing in addition to visual observation can have a positive effect on spatial memory compared with visual observation only. The present study investigated whether

  12. Point Sources of Emerging Contaminants Along the Colorado River Basin: Impact on Water Use and Reuse in the Arid Southwest

    Emerging contaminants (ECs) (e.g., pharmaceuticals, illicit drugs, personal care products) have been detected in waters across the United States. The objective of this study was to evaluate point sources of ECs along the Colorado River, from the headwaters in Colorado to the Gulf...

  13. Transparent mediation-based access to multiple yeast data sources using an ontology driven interface.

    Briache, Abdelaali; Marrakchi, Kamar; Kerzazi, Amine; Navas-Delgado, Ismael; Rossi Hassani, Badr D; Lairini, Khalid; Aldana-Montes, José F

    2012-01-25

    Saccharomyces cerevisiae is recognized as a model system representing a simple eukaryote whose genome can be easily manipulated. Information solicited by scientists on its biological entities (Proteins, Genes, RNAs...) is scattered within several data sources like SGD, Yeastract, CYGD-MIPS, BioGrid, PhosphoGrid, etc. Because of the heterogeneity of these sources, querying them separately and then manually combining the returned results is a complex and time-consuming task for biologists, most of whom are not bioinformatics experts. It also reduces and limits the use that can be made of the available data. To provide transparent and simultaneous access to yeast sources, we have developed YeastMed: an XML and mediator-based system. In this paper, we present our approach to developing this system, which takes advantage of SB-KOM to perform the needed query transformation and a set of Data Services to reach the integrated data sources. The system is composed of a set of modules that depend heavily on XML and Semantic Web technologies. User queries are expressed in terms of a domain ontology through a simple form-based web interface. YeastMed is the first mediation-based system specifically for integrating yeast data sources. It was conceived mainly to help biologists find relevant data simultaneously from multiple data sources, and it has a biologist-friendly interface that is easy to use. The system is available at http://www.khaos.uma.es/yeastmed/.

  14. Multiple Spectral Ratio Analyses Reveal Earthquake Source Spectra of Small Earthquakes and Moment Magnitudes of Microearthquakes

    Uchide, T.; Imanishi, K.

    2016-12-01

    Spectral studies of macroscopic earthquake source parameters are helpful for characterizing the earthquake rupture process and hence for understanding earthquake source physics and fault properties. Such studies require us to mute wave propagation path and site effects in the spectra of seismograms to accentuate the source effect. We have recently developed the multiple spectral ratio method [Uchide and Imanishi, BSSA, 2016], employing many empirical Green's function (EGF) events to reduce errors from the choice of EGF events. This method helps us estimate source spectra more accurately, as well as moment ratios among reference and EGF events, which are useful to constrain the seismic moment of microearthquakes. First, we focus on earthquake source spectra. The source spectra have generally been thought to obey the omega-square model with a single corner frequency; however, recent studies imply the existence of another corner frequency for some earthquakes. We analyzed small shallow inland earthquakes (3.5 …) by multiple spectral ratio analyses. For 20000 microearthquakes in the Fukushima Hamadori and northern Ibaraki prefecture area, we found that the JMA magnitudes (Mj) based on displacement or velocity amplitude are systematically below Mw. The slope of the Mj-Mw relation is 0.5 for small Mj and approaches 1 for Mj > 5. We propose a fitting curve for the obtained relationship as Mw = (1/2)Mj + (1/2)(Mj^γ + Mcor^γ)^(1/γ) + c, where Mcor is a corner magnitude, γ determines the sharpness of the corner, and c denotes an offset. We obtained Mcor = 4.1, γ = 5.6, and c = -0.47 to fit the observations. These parameters are useful for characterizing the Mj-Mw relationship. This non-linear relationship affects the b-value of the Gutenberg-Richter law, so quantitative discussions of b-values depend on the definition of the magnitude used.
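
    To make the reported relationship easy to use, the fitted curve can be evaluated directly; the short sketch below simply plugs in the parameter values quoted above (valid for positive Mj), with the function name being illustrative.

```python
import numpy as np

def mw_from_mj(mj, m_cor=4.1, gamma=5.6, c=-0.47):
    """Fitted curve Mw = (1/2)Mj + (1/2)(Mj^gamma + Mcor^gamma)^(1/gamma) + c."""
    mj = np.asarray(mj, dtype=float)
    return 0.5 * mj + 0.5 * (mj**gamma + m_cor**gamma) ** (1.0 / gamma) + c

# the slope tends to 1/2 well below the corner magnitude and to 1 well above it
print(np.round(mw_from_mj([1.0, 3.0, 4.1, 5.5]), 2))
```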

  15. The Potential for Electrofuels Production in Sweden Utilizing Fossil and Biogenic CO{sub 2} Point Sources

    Hansson, Julia, E-mail: julia.hansson@ivl.se [Climate and Sustainable Cities, IVL Swedish Environmental Research Institute, Stockholm (Sweden); Division of Physical Resource Theory, Department of Energy and Environment, Chalmers University of Technology, Göteborg (Sweden); Hackl, Roman [Climate and Sustainable Cities, IVL Swedish Environmental Research Institute, Stockholm (Sweden); Taljegard, Maria [Division of Energy Technology, Department of Energy and Environment, Chalmers University of Technology, Göteborg (Sweden); Brynolf, Selma; Grahn, Maria [Division of Physical Resource Theory, Department of Energy and Environment, Chalmers University of Technology, Göteborg (Sweden)

    2017-03-13

    This paper maps, categorizes, and quantifies all major point sources of carbon dioxide (CO{sub 2}) emissions from industrial and combustion processes in Sweden. The paper also estimates the Swedish technical potential for electrofuels (power-to-gas/fuels) based on carbon capture and utilization. With our bottom-up approach using European databases, we find that Sweden emits approximately 50 million metric tons of CO{sub 2} per year from different types of point sources, with 65% (or about 32 million tons) from biogenic sources. The major sources are the pulp and paper industry (46%), heat and power production (23%), and waste treatment and incineration (8%). Most of the CO{sub 2} is emitted at low concentrations (<15%) from sources in the southern part of Sweden where power demand generally exceeds in-region supply. The potentially recoverable emissions from all the included point sources amount to 45 million tons. If all the recoverable CO{sub 2} were used to produce electrofuels, the yield would correspond to 2–3 times the current Swedish demand for transportation fuels. The electricity required would correspond to about 3 times the current Swedish electricity supply. The current relatively few emission sources with high concentrations of CO{sub 2} (>90%, biofuel operations) would yield electrofuels corresponding to approximately 2% of the current demand for transportation fuels (corresponding to 1.5–2 TWh/year). In a 2030 scenario with large-scale biofuels operations based on lignocellulosic feedstocks, the potential for electrofuels production from high-concentration sources increases to 8–11 TWh/year. Finally, renewable electricity and production costs, rather than CO{sub 2} supply, limit the potential for production of electrofuels in Sweden.

  16. The 100 strongest radio point sources in the field of the Large Magellanic Cloud at 1.4 GHz

    Payne J.L.

    2009-01-01

    Full Text Available We present the 100 strongest 1.4 GHz point sources from a new mosaic image in the direction of the Large Magellanic Cloud (LMC). The observations making up the mosaic were made using the Australia Telescope Compact Array (ATCA) over a ten-year period and were combined with Parkes single dish data at 1.4 GHz to complete the image for short spacing. An initial list of co-identifications within 10 arcsec at 0.843, 4.8 and 8.6 GHz consisted of 2682 sources. Elimination of extended objects and artifact noise allowed the creation of a refined list containing 1988 point sources. Most of these are presumed to be background objects seen through the LMC; a small portion may represent compact H II regions, young SNRs and radio planetary nebulae. For the 1988 point sources we find a preliminary average spectral index (α) of -0.53 and present a 1.4 GHz image showing source location in the direction of the LMC.

  17. The 100 Strongest Radio Point Sources in the Field of the Large Magellanic Cloud at 1.4 GHz

    Payne, J. L.

    2009-06-01

    Full Text Available We present the 100 strongest 1.4 GHz point sources from a new mosaic image in the direction of the Large Magellanic Cloud (LMC). The observations making up the mosaic were made using the Australia Telescope Compact Array (ATCA) over a ten-year period and were combined with Parkes single dish data at 1.4 GHz to complete the image for short spacing. An initial list of co-identifications within 10 arcsec at 0.843, 4.8 and 8.6 GHz consisted of 2682 sources. Elimination of extended objects and artifact noise allowed the creation of a refined list containing 1988 point sources. Most of these are presumed to be background objects seen through the LMC; a small portion may represent compact HII regions, young SNRs and radio planetary nebulae. For the 1988 point sources we find a preliminary average spectral index (α) of -0.53 and present a 1.4 GHz image showing source location in the direction of the LMC.

  18. Discovery of a point-like very-high-energy gamma-ray source in Monoceros

    Aharonian, F.A.; Benbow, W.; Berge, D.; Bernlohr, K.; Bolz, O.; Braun, I.; Buhler, R.; Carrigan, S.; Costamante, L.; Domainko, W.; Egberts, K.; Forster, A.; Funk, S.; Hauser, D.; Hermann, G.; Hinton, J.A.; Hofmann, W.; Hoppe, S.; Khelifi, B.; Kosack, K.; Masterson, C.; Panter, M.; Rowell, G.; van Eldik, C.; Volk, H.J.; Akhperjanian, A.G.; Sahakian, V.; Bazer-Bachi, A.R.; Borrel, V.; Marcowith, A.; Olive, J.P.; Beilicke, M.; Cornils, R.; Heinzelmann, G.; Raue, M.; Ripken, J.; Bernlohr, K.; Funk, Seb.; Fussling, M.; Kerschhaggl, M.; Lohse, T.; Schlenker, S.; Schwanke, U.; Boisson, C.; Martin, J.M.; Sol, H.; Brion, E.; Glicenstein, J.F.; Goret, P.; Moulin, E.; Rolland, L.

    2007-01-01

    Aims. The complex Monoceros Loop SNR/Rosette Nebula region contains several potential sources of very-high-energy (VHE) γ-ray emission and two as yet unidentified high-energy EGRET sources. Sensitive VHE observations are required to probe acceleration processes in this region. Methods. The HESS telescope array has been used to search for very high-energy gamma-ray sources in this region. CO data from the NANTEN telescope were used to map the molecular clouds in the region, which could act as target material for γ-ray production via hadronic interactions. Results. We announce the discovery of a new γ-ray source, HESS J0632+057, located close to the rim of the Monoceros SNR. This source is unresolved by HESS and has no clear counterpart at other wavelengths but is possibly associated with the weak X-ray source 1RXS J063258.3+054857, the Be-star MWC148 and/or the lower energy γ-ray source 3EGJ0634+0521. No evidence for an associated molecular cloud was found in the CO data. (authors)

  19. Impact of Point and Non-point Source Pollution on Coral Reef Ecosystems In Mamala Bay, Oahu, Hawaii based on Water Quality Measurements and Benthic Surveys in 1993-1994 (NODC Accession 0001172)

    National Oceanic and Atmospheric Administration, Department of Commerce — The effects of both point and non-point sources of pollution on coral reef ecosystems in Mamala Bay were studied at three levels of biological organization; the...

  20. Comparison of point-source pollutant loadings to soil and groundwater for 72 chemical substances.

    Yu, Soonyoung; Hwang, Sang-Il; Yun, Seong-Taek; Chae, Gitak; Lee, Dongsu; Kim, Ki-Eun

    2017-11-01

    Fate and transport of 72 chemicals in soil and groundwater were assessed by using a multiphase compositional model (CompFlow Bio) because some of the chemicals are non-aqueous phase liquids or solids in their original form. One metric ton of chemicals was assumed to leak in a stylized facility. Scenarios of both surface spills and subsurface leaks were considered. Simulation results showed that the fate and transport of chemicals above the water table affected the fate and transport of chemicals below the water table, and vice versa. Surface spill scenarios caused much lower concentrations than subsurface leak scenarios because the amounts leaching into the subsurface environment were small (at most 6% of the 1 t spill, for methylamine). Then, simulation results were applied to assess point-source pollutant loadings to soil and groundwater above and below the water table, respectively, by multiplying concentrations, impact areas, and durations. These three components correspond to the intensity of contamination, mobility, and persistency in the assessment of pollutant loading, respectively. Assessment results showed that the pollutant loadings in soil and groundwater were linearly related (r² = 0.64). The pollutant loadings were negatively related with zero-order and first-order decay rates in both soil (r = -0.5 and -0.6, respectively) and groundwater (-1.0 and -0.8, respectively). In addition, this study demonstrated that the soil partitioning coefficient (Kd) significantly affected the pollutant loadings in soil (r = 0.6) and the maximum masses in groundwater (r = -0.9). However, Kd was not a representative factor for chemical transportability, unlike the expectation in chemical ranking systems of soil and groundwater pollutants. The pollutant loadings estimated using a physics-based hydrogeological model provided a more rational ranking for exposure assessment, compared to the summation of persistency and transportability scores in

  1. Optimizing the calculation of point source count-centroid in pixel size measurement

    Zhou Luyi; Kuang Anren; Su Xianyu

    2004-01-01

    Pixel size is an important parameter of a gamma camera and SPECT. A number of methods are used for its accurate measurement. In the original count-centroid method, where the image of a point source (PS) is acquired and its count-centroid calculated to represent the PS position in the image, background counts are inevitable. Thus the measured count-centroid (Xm) is an approximation of the true count-centroid (Xp) of the PS, i.e. Xm = Xp + (Xb - Xp)/(1 + Rp/Rb), where Rp is the net counting rate of the PS, Xb the background count-centroid and Rb the background counting rate. To get an accurate measurement, Rp must be very large, which is impractical, resulting in variation of the measured pixel size. An Rp-independent calculation of the PS count-centroid is desired. Methods: The proposed method attempted to eliminate the effect of the term (Xb - Xp)/(1 + Rp/Rb) by bringing Xb closer to Xp and by reducing Rb. In the acquired PS image, a circular ROI was generated to enclose the PS, the pixel with the maximum count being the center of the ROI. To choose the diameter (D) of the ROI, a Gaussian count distribution was assumed for the PS; accordingly, the fraction K = 1 - (0.5)^(D/R) of the total PS counts was in the ROI, R being the full width at half maximum of the PS count distribution. D was set to 6*R to enclose most (K = 98.4%) of the PS counts. The count-centroid of the ROI was calculated to represent Xp. The proposed method was tested in measuring the pixel size of a well-tuned SPECT, whose pixel size was estimated to be 3.02 mm according to its mechanical and electronic setting (128 x 128 matrix, 387 mm UFOV, ZOOM=1). For comparison, the original method, which was used in former versions of some commercial SPECT software, was also tested. 12 PSs were prepared and their images acquired and stored. The net counting rate of the PSs increased from 10 cps to 1183 cps. Results: Using the proposed method, the measured pixel size (in mm) varied only between 3.00 and 3.01 (mean

  2. Optimizing the calculation of point source count-centroid in pixel size measurement

    Zhou Luyi; Kuang Anren; Su Xianyu

    2004-01-01

    Purpose: Pixel size is an important parameter of a gamma camera and SPECT. A number of methods are used for its accurate measurement. In the original count-centroid method, where the image of a point source (PS) is acquired and its count-centroid calculated to represent the PS position in the image, background counts are inevitable. Thus the measured count-centroid (Xm) is an approximation of the true count-centroid (Xp) of the PS, i.e. Xm = Xp + (Xb - Xp)/(1 + Rp/Rb), where Rp is the net counting rate of the PS, Xb the background count-centroid and Rb the background counting rate. To get an accurate measurement, Rp must be very large, which is impractical, resulting in variation of the measured pixel size. An Rp-independent calculation of the PS count-centroid is desired. Methods: The proposed method attempted to eliminate the effect of the term (Xb - Xp)/(1 + Rp/Rb) by bringing Xb closer to Xp and by reducing Rb. In the acquired PS image, a circular ROI was generated to enclose the PS, the pixel with the maximum count being the center of the ROI. To choose the diameter (D) of the ROI, a Gaussian count distribution was assumed for the PS; accordingly, the fraction K = 1 - (0.5)^(D/R) of the total PS counts was in the ROI, R being the full width at half maximum of the PS count distribution. D was set to 6*R to enclose most (K = 98.4%) of the PS counts. The count-centroid of the ROI was calculated to represent Xp. The proposed method was tested in measuring the pixel size of a well-tuned SPECT, whose pixel size was estimated to be 3.02 mm according to its mechanical and electronic setting (128 x 128 matrix, 387 mm UFOV, ZOOM=1). For comparison, the original method, which was used in former versions of some commercial SPECT software, was also tested. 12 PSs were prepared and their images acquired and stored. The net counting rate of the PSs increased from 10 cps to 1183 cps. Results: Using the proposed method, the measured pixel size (in mm) varied only between 3.00 and 3.01 (mean = 3.01±0.00) as Rp increased
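
    A minimal numpy sketch of the ROI-based centroid described in the two records above is given below; the choice of k = 6 follows the text (D = 6*R keeps about 98.4% of a Gaussian point source), while the function name and everything else are illustrative.

```python
import numpy as np

def roi_count_centroid(image, fwhm, k=6):
    """Count-centroid of a circular ROI centred on the hottest pixel.

    The ROI diameter is k * fwhm; with k = 6 the ROI keeps
    1 - 0.5**(D/R) = 1 - 0.5**6 = 98.4% of a Gaussian point source,
    which suppresses the background contribution to the centroid.
    """
    iy, ix = np.unravel_index(np.argmax(image), image.shape)
    radius = 0.5 * k * fwhm
    y, x = np.indices(image.shape)
    mask = (x - ix) ** 2 + (y - iy) ** 2 <= radius ** 2
    counts = np.where(mask, image, 0.0).astype(float)
    total = counts.sum()
    return (counts * x).sum() / total, (counts * y).sum() / total

# The pixel size then follows from the known physical separation of two point
# sources divided by the distance between their count-centroids (in pixels).
```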

  3. A method to analyze “source–sink” structure of non-point source pollution based on remote sensing technology

    Jiang, Mengzhen; Chen, Haiying; Chen, Qinghui

    2013-01-01

    With the purpose of providing a scientific basis for environmental planning for non-point source pollution prevention and control, and of improving pollution regulation efficiency, this paper established the Grid Landscape Contrast Index, based on the Location-weighted Landscape Contrast Index, according to the “source–sink” theory. The spatial distribution of non-point source pollution in the Jiulongjiang Estuary could be worked out by utilizing high-resolution remote sensing images. The results showed that the “source” area of nitrogen and phosphorus in the Jiulongjiang Estuary was 534.42 km² in 2008, and the “sink” area was 172.06 km². The “source” of non-point source pollution was distributed mainly over Xiamen island, most of Haicang, the east of Jiaomei and the river banks of Gangwei and Shima, while the “sink” was distributed over the southwest of Xiamen island and the west of Shima. Generally speaking, the intensity of the “source” gets weaker as the distance from the sea boundary increases, while the “sink” gets stronger. -- Highlights: •We built an index to study the “source–sink” structure of NSP at a spatial scale. •The index was applied to the Jiulongjiang estuary and gave good results. •The study helps discern the high-load areas of non-point source pollution. -- The “source–sink” structure of non-point source nitrogen and phosphorus pollution in the Jiulongjiang estuary in China was worked out with the Grid Landscape Contrast Index

  4. Dynamic analysis of multiple nuclear-coupled boiling channels based on a multi-point reactor model

    Lee, J.D.; Pan Chin

    2005-01-01

    This work investigates the non-linear dynamics and stabilities of a multiple nuclear-coupled boiling channel system based on a multi-point reactor model using the Galerkin nodal approximation method. The nodal approximation method for the multiple boiling channels developed by Lee and Pan [Lee, J.D., Pan, C., 1999. Dynamics of multiple parallel boiling channel systems with forced flows. Nucl. Eng. Des. 192, 31-44] is extended to address the two-phase flow dynamics in the present study. The multi-point reactor model, modified from Uehiro et al. [Uehiro, M., Rao, Y.F., Fukuda, K., 1996. Linear stability analysis on instabilities of in-phase and out-of-phase modes in boiling water reactors. J. Nucl. Sci. Technol. 33, 628-635], is employed to study a multiple-channel system with unequal steady-state neutron density distribution. Stability maps, non-linear dynamics and effects of major parameters on the multiple nuclear-coupled boiling channel system subject to a constant total flow rate are examined. This study finds that the void-reactivity feedback and neutron interactions among subcores are coupled and their competing effects may influence the system stability under different operating conditions. For those cases with strong neutron interaction conditions, by strengthening the void-reactivity feedback, the nuclear-coupled effect on the non-linear dynamics may induce two unstable oscillation modes, the supercritical Hopf bifurcation and the subcritical Hopf bifurcation. Moreover, for those cases with weak neutron interactions, by quadrupling the void-reactivity feedback coefficient, period-doubling and complex chaotic oscillations may appear in a three-channel system under some specific operating conditions. A unique type of complex chaotic attractor may evolve from the Rossler attractor because of the coupled channel-to-channel thermal-hydraulic and subcore-to-subcore neutron interactions. Such a complex chaotic attractor has the imbedding dimension of 5 and the

  5. Time dependence of the field energy densities surrounding sources: Application to scalar mesons near point sources and to electromagnetic fields near molecules

    Persico, F.; Power, E.A.

    1987-01-01

    The time dependence of the dressing-undressing process, i.e., the acquiring or losing by a source of a boson field intensity and hence of a field energy density in its neighborhood, is considered by examining some simple soluble models. First, the loss of the virtual field is followed in time when a point source is suddenly decoupled from a neutral scalar meson field. Second, an initially bare point source acquires a virtual meson cloud as the coupling is switched on. The third example is that of an initially bare molecule interacting with the vacuum of the electromagnetic field to acquire a virtual photon cloud. In all three cases the dressing-undressing is shown to take place within an expanding sphere of radius r = ct centered at the source. At each point in space the energy density tends, for large times, to that of the ground state of the total system. Differences in the time dependence of the dressing between the massive scalar field and the massless electromagnetic field are discussed. The results are also briefly discussed in the light of Feinberg's ideas on the nature of half-dressed states in quantum field theory

  6. Disaster damage detection through synergistic use of deep learning and 3D point cloud features derived from very high resolution oblique aerial images, and multiple-kernel-learning

    Vetrivel, Anand; Gerke, Markus; Kerle, Norman; Nex, Francesco; Vosselman, George

    2018-06-01

    Oblique aerial images offer views of both building roofs and façades, and thus have been recognized as a potential source to detect severe building damages caused by destructive disaster events such as earthquakes. Therefore, they represent an important source of information for first responders or other stakeholders involved in the post-disaster response process. Several automated methods based on supervised learning have already been demonstrated for damage detection using oblique airborne images. However, they often do not generalize well when data from new unseen sites need to be processed, hampering their practical use. Reasons for this limitation include image and scene characteristics, though the most prominent one relates to the image features being used for training the classifier. Recently features based on deep learning approaches, such as convolutional neural networks (CNNs), have been shown to be more effective than conventional hand-crafted features, and have become the state-of-the-art in many domains, including remote sensing. Moreover, often oblique images are captured with high block overlap, facilitating the generation of dense 3D point clouds - an ideal source to derive geometric characteristics. We hypothesized that the use of CNN features, either independently or in combination with 3D point cloud features, would yield improved performance in damage detection. To this end we used CNN and 3D features, both independently and in combination, using images from manned and unmanned aerial platforms over several geographic locations that vary significantly in terms of image and scene characteristics. A multiple-kernel-learning framework, an effective way for integrating features from different modalities, was used for combining the two sets of features for classification. The results are encouraging: while CNN features produced an average classification accuracy of about 91%, the integration of 3D point cloud features led to an additional
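
    The record combines CNN and 3D point-cloud features through multiple-kernel learning; as a rough illustration of the idea, the sketch below uses the simpler fixed-weight variant, summing two precomputed RBF kernels for an SVM (a full MKL method would learn the weight w instead). The feature matrices, labels and kernel parameters are stand-ins, not data from the study.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def combined_kernel(Xc, X3, Yc=None, Y3=None, w=0.5, gamma_cnn=1e-3, gamma_3d=1e-1):
    """Weighted sum of an RBF kernel on CNN features and one on 3D features."""
    Yc = Xc if Yc is None else Yc
    Y3 = X3 if Y3 is None else Y3
    return w * rbf_kernel(Xc, Yc, gamma=gamma_cnn) + (1 - w) * rbf_kernel(X3, Y3, gamma=gamma_3d)

rng = np.random.default_rng(1)
X_cnn, X_3d = rng.normal(size=(200, 128)), rng.normal(size=(200, 16))  # stand-in features
y = rng.integers(0, 2, size=200)                                       # damaged / undamaged

K_train = combined_kernel(X_cnn, X_3d)
clf = SVC(kernel="precomputed").fit(K_train, y)

# prediction uses the cross-kernel between new samples and the training set
Xc_new, X3_new = rng.normal(size=(10, 128)), rng.normal(size=(10, 16))
print(clf.predict(combined_kernel(Xc_new, X3_new, X_cnn, X_3d)))
```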

  7. A 24 μm point source catalog of the galactic plane from Spitzer/MIPSGAL

    Gutermuth, Robert A.; Heyer, Mark [Department of Astronomy, University of Massachusetts, Amherst, MA 01003 (United States)

    2015-02-01

    In this contribution, we describe the applied methods to construct a 24 μm based point source catalog derived from the image data of the MIPSGAL 24 μm Galactic Plane Survey and the corresponding data products. The high quality catalog product contains 933,818 sources, with a total of 1,353,228 in the full archive catalog. The source tables include positional and photometric information derived from the 24 μm images, source quality and confusion flags, and counterpart photometry from matched 2MASS, GLIMPSE, and WISE point sources. Completeness decay data cubes are constructed at 1′ angular resolution that describe the varying background levels over the MIPSGAL field and the ability to extract sources of a given magnitude from this background. The completeness decay cubes are included in the set of data products. We present the results of our efforts to verify the astrometric and photometric calibration of the catalog, and present several analyses of minor anomalies in these measurements to justify adopted mitigation strategies.

  8. Accounting for multiple sources of uncertainty in impact assessments: The example of the BRACE study

    O'Neill, B. C.

    2015-12-01

    Assessing climate change impacts often requires the use of multiple scenarios, types of models, and data sources, leading to a large number of potential sources of uncertainty. For example, a single study might require a choice of a forcing scenario, climate model, bias correction and/or downscaling method, societal development scenario, model (typically several) for quantifying elements of societal development such as economic and population growth, biophysical model (such as for crop yields or hydrology), and societal impact model (e.g. economic or health model). Some sources of uncertainty are reduced or eliminated by the framing of the question. For example, it may be useful to ask what an impact outcome would be conditional on a given societal development pathway, forcing scenario, or policy. However many sources of uncertainty remain, and it is rare for all or even most of these sources to be accounted for. I use the example of a recent integrated project on the Benefits of Reduced Anthropogenic Climate changE (BRACE) to explore useful approaches to uncertainty across multiple components of an impact assessment. BRACE comprises 23 papers that assess the differences in impacts between two alternative climate futures: those associated with Representative Concentration Pathways (RCPs) 4.5 and 8.5. It quantifies difference in impacts in terms of extreme events, health, agriculture, tropical cyclones, and sea level rise. Methodologically, it includes climate modeling, statistical analysis, integrated assessment modeling, and sector-specific impact modeling. It employs alternative scenarios of both radiative forcing and societal development, but generally uses a single climate model (CESM), partially accounting for climate uncertainty by drawing heavily on large initial condition ensembles. Strengths and weaknesses of the approach to uncertainty in BRACE are assessed. Options under consideration for improving the approach include the use of perturbed physics

  9. An efficient central DOA tracking algorithm for multiple incoherently distributed sources

    Hassen, Sonia Ben; Samet, Abdelaziz

    2015-12-01

    In this paper, we develop a new tracking method for the direction of arrival (DOA) parameters assuming multiple incoherently distributed (ID) sources. The new approach is based on a simple covariance fitting optimization technique exploiting the central and non-central moments of the source angular power densities to estimate the central DOAs. The current estimates are treated as measurements provided to the Kalman filter that models the dynamic property of directional changes for the moving sources. Then, the covariance-fitting-based algorithm and the Kalman filtering theory are combined to formulate an adaptive tracking algorithm. Our algorithm is compared to the fast approximated power iteration-total least square-estimation of signal parameters via rotational invariance technique (FAPI-TLS-ESPRIT) algorithm, which uses the TLS-ESPRIT method and subspace updating via the FAPI algorithm. It will be shown that the proposed algorithm offers excellent DOA tracking performance and outperforms the FAPI-TLS-ESPRIT method, especially at low signal-to-noise ratio (SNR) values. Moreover, the performances of both methods improve as the SNR increases, with the improvement more prominent for the FAPI-TLS-ESPRIT method; however, their performances degrade when the number of sources increases. It will also be shown that our method depends on the form of the angular distribution function when tracking the central DOAs. Finally, it will be shown that the more widely the sources are spaced, the more accurately the proposed method can track the DOAs.
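
    The covariance-fitting estimator itself is specific to the paper, but the Kalman stage it feeds can be illustrated generically: the sketch below runs a scalar constant-velocity Kalman filter over noisy per-snapshot central-DOA estimates; all noise levels and the simulated trajectory are assumptions for illustration.

```python
import numpy as np

def track_central_doa(doa_measurements, dt=1.0, q=0.01, r=4.0):
    """Constant-velocity Kalman filter over scalar DOA estimates (degrees)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [angle, angular rate]
    H = np.array([[1.0, 0.0]])              # only the angle is measured
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x, P = np.array([doa_measurements[0], 0.0]), np.eye(2) * 10.0
    track = []
    for z in doa_measurements:
        x, P = F @ x, F @ P @ F.T + Q                   # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
        x = x + K @ (np.atleast_1d(z) - H @ x)          # update with the new estimate
        P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return np.array(track)

true = 30 + 0.2 * np.arange(100)                        # slowly moving source
meas = true + np.random.default_rng(2).normal(scale=2.0, size=100)
print(track_central_doa(meas)[-5:])
```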

  10. Interpolating between random walks and optimal transportation routes: Flow with multiple sources and targets

    Guex, Guillaume

    2016-05-01

    In recent articles about graphs, different models proposed a formalism to find a type of path between two nodes, the source and the target, at a crossroads between the shortest path and the random-walk path. These models include a freely adjustable parameter, allowing one to tune the behavior of the path toward randomized movements or direct routes. This article presents a natural generalization of these models, namely a model with multiple sources and targets. In this context, source nodes can be viewed as locations with a supply of a certain good (e.g. people, money, information) and target nodes as locations with a demand for the same good. An algorithm is constructed to display the flow of goods in the network between sources and targets. Again with a freely adjustable parameter, this flow can be tuned to follow routes of minimum cost, thus displaying the flow in the context of the optimal transportation problem, or, by contrast, a random flow, known to be similar to the electrical current flow if the random walk is reversible. Moreover, a source-target coupling can be retrieved from this flow, offering an optimal assignment for the transportation problem. This algorithm is described in the first part of this article and then illustrated with case studies.

  11. Differentiating between anthropogenic and geological sources of nitrate using multiple geochemical tracers

    Linhoff, B.; Norton, S.; Travis, R.; Romero, Z.; Waters, B.

    2017-12-01

    Nitrate contamination of groundwater is a major problem globally including within the Albuquerque Basin in New Mexico. Ingesting high concentrations of nitrate (> 10 mg/L as N) can lead to an increased risk of cancer and to methemoglobinemia in infants. Numerous anthropogenic sources of nitrate have been identified within the Albuquerque Basin including fertilizers, landfills, multiple sewer pipe releases, sewer lagoons, domestic septic leach fields, and a nitric acid line outfall. Furthermore, groundwater near ephemeral streams often exhibits elevated NO3 concentrations and high NO3/Cl ratios incongruous with an anthropogenic source. These results suggest that NO3 can be concentrated through evaporation beneath ephemeral streams and mobilized via irrigation or land use change. This study seeks to use extensive geochemical analyses of groundwater and surface water to differentiate between various sources of NO3 contamination. The U.S. Geological Survey collected 54 groundwater samples from wells and six samples from ephemeral streams from within and from outside of areas of known nitrate contamination. To fingerprint the sources of nitrate pollution, samples were analyzed for major ions, trace metals, nutrients, dissolved gases, δ15N and δ18O in NO3, δ15N within N2 gas, and, δ2H and δ18O in H2O. Furthermore, most sites were sampled for artificial sweeteners and numerous contaminants of emerging concern including pharmaceutical drugs, caffeine, and wastewater indicators. This study will also investigate the age distribution of groundwater and the approximate age of anthropogenic NO3 contamination using 3He/4He, δ13C, 14C, 3H, as well as pharmaceutical drugs and artificial sweeteners with known patent and U.S. Food and Drug Administration approval dates. This broad suite of analytes will be used to differentiate between naturally occurring and multiple anthropogenic NO3 sources, and to potentially determine the approximate date of NO3 contamination.

  12. Validation and calibration of structural models that combine information from multiple sources.

    Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A

    2017-02-01

    Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.

  13. Glutathione provides a source of cysteine essential for intracellular multiplication of Francisella tularensis.

    Khaled Alkhuder

    2009-01-01

    Full Text Available Francisella tularensis is a highly infectious bacterium causing the zoonotic disease tularemia. Its ability to multiply and survive in macrophages is critical for its virulence. By screening a bank of HimarFT transposon mutants of the F. tularensis live vaccine strain (LVS) to isolate intracellular growth-deficient mutants, we selected one mutant in a gene encoding a putative gamma-glutamyl transpeptidase (GGT). This gene (FTL_0766) was hence designated ggt. The mutant strain showed impaired intracellular multiplication and was strongly attenuated for virulence in mice. Here we present evidence that the GGT activity of F. tularensis allows utilization of glutathione (GSH, gamma-glutamyl-cysteinyl-glycine) and the gamma-glutamyl-cysteine dipeptide as cysteine sources to ensure intracellular growth. This is the first demonstration of the essential role of a nutrient acquisition system in the intracellular multiplication of F. tularensis. GSH is the most abundant source of cysteine in the host cytosol. Thus, the capacity that this intracellular bacterial pathogen has evolved to utilize the available GSH as a source of cysteine in the host cytosol constitutes a paradigm of bacteria-host adaptation.

  14. ANEMOS: A computer code to estimate air concentrations and ground deposition rates for atmospheric nuclides emitted from multiple operating sources

    Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.; Hermann, O.W.

    1986-11-01

    This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate a ground-level or elevated point and area source or windblown source. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C

  15. ANEMOS: A computer code to estimate air concentrations and ground deposition rates for atmospheric nuclides emitted from multiple operating sources

    Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.; Hermann, O.W.

    1986-11-01

    This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate a ground-level or elevated point and area source or windblown source. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C.
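
    ANEMOS itself is a Fortran code with many options; the sketch below shows only the core straight-line Gaussian plume relation that such codes are built on, for a single elevated point source with ground reflection. The linear growth of the dispersion coefficients is a crude stand-in for the stability-dependent (e.g. Pasquill-Gifford) curves a real assessment would use.

```python
import numpy as np

def gaussian_plume(q, u, x, y, z, h_eff, a=0.08, b=0.06):
    """Ground-reflected Gaussian plume concentration.

    q       source strength (e.g. Bq/s), u wind speed (m/s)
    x, y, z downwind, crosswind and vertical receptor coordinates (m)
    h_eff   effective release height (m)
    a, b    illustrative growth rates for sigma_y and sigma_z
    """
    sigma_y, sigma_z = a * x, b * x
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h_eff) ** 2 / (2 * sigma_z**2))
                + np.exp(-(z + h_eff) ** 2 / (2 * sigma_z**2)))  # image source = ground reflection
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# centreline ground-level concentration 1 km downwind of a 50 m stack
print(gaussian_plume(q=1.0e6, u=5.0, x=1000.0, y=0.0, z=0.0, h_eff=50.0))
```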

  16. An Analysis of Air Pollution in Makkah - a View Point of Source Identification

    Turki M. Habeebullah

    2013-07-01

    Full Text Available Makkah is one of the busiest cities in Saudi Arabia and remains busy all year around, especially during the season of Hajj and the month of Ramadan when millions of people visit this city. This emphasizes the importance of clean air and of understanding the sources of various air pollutants, which is vital for the management and advanced modeling of air pollution. This study intends to identify the major sources of air pollutants in Makkah, near the Holy Mosque (Al-Haram), using a graphical approach. Air pollutants considered in this study are nitrogen oxides (NOx), nitrogen dioxide (NO2), nitric oxide (NO), carbon monoxide (CO), sulphur dioxide (SO2), ozone (O3) and particulate matter with aerodynamic diameter of 10 μm or less (PM10). Polar plots, time variation plots and correlation analysis are used to analyse the data and identify the major sources of emissions. Most of the pollutants demonstrate high concentrations during the morning traffic peak hours, suggesting road traffic as the main source of emission. The main sources of pollutant emissions identified in Makkah were road traffic, re-suspended and windblown dust and sand particles. Further investigation on detailed source apportionment is required, which is part of the ongoing project.

  17. A THEORETICAL ANALYSIS OF KEY POINTS WHEN CHOOSING OPEN SOURCE ERP SYSTEMS

    Fernando Gustavo Dos Santos Gripe

    2011-08-01

    Full Text Available The present work is aimed at presenting a theoretical analysis of the main features of Open Source ERP systems, herein identified as technical success factors, in order to contribute to the establishment of parameters to be used in decision-making processes when choosing a system that fulfills the organization's needs. Initially, the life cycle of ERP systems is contextualized, highlighting the features of Open Source ERP systems. As a result, it was verified that, when carefully analyzed, these systems need further attention regarding issues of project continuity and maturity, structure, transparency, updating frequency, and support, all of which are inherent to the reality of this type of software. Nevertheless, advantages were observed regarding flexibility, costs, and non-discontinuity. The main goal is to broaden the discussion about the adoption of Open Source ERP systems.

  18. SREM - WRS system module number 3348 for calculating the removal flux due to point, line or disc sources

    Grimstone, M.J.

    1978-06-01

    The WRS Modular Programming System has been developed as a means by which programmes may be more efficiently constructed, maintained and modified. In this system a module is a self-contained unit typically composed of one or more Fortran routines, and a programme is constructed from a number of such modules. This report describes one WRS module, the function of which is to calculate the uncollided flux and first-collision source from a disc source in a slab geometry system, a line source at the centre of a cylindrical system or a point source at the centre of a spherical system. The information given in this manual is of use both to the programmer wishing to incorporate the module in a programme, and to the user of such a programme. (author)

  19. Methane Flux Estimation from Point Sources using GOSAT Target Observation: Detection Limit and Improvements with Next Generation Instruments

    Kuze, A.; Suto, H.; Kataoka, F.; Shiomi, K.; Kondo, Y.; Crisp, D.; Butz, A.

    2017-12-01

    Atmospheric methane (CH4) has an important role in the global radiative forcing of climate, but its emission estimates have larger uncertainties than those of carbon dioxide (CO2). The area of anthropogenic emission sources is usually much smaller than 100 km2. The Thermal And Near infrared Sensor for carbon Observation Fourier-Transform Spectrometer (TANSO-FTS) onboard the Greenhouse gases Observing SATellite (GOSAT) has measured CO2 and CH4 column density using sunlight reflected from the Earth's surface. It has an agile pointing system, and its footprint covers 87 km2 with a single detector. By specifying pointing angles and observation times for every orbit, TANSO-FTS can target various CH4 point sources together with reference points every 3 days over years. We selected a reference point that represents the CH4 background density before or after targeting a point source. By combining the satellite-measured enhancement of the CH4 column density with surface-measured wind data or estimates from the Weather Research and Forecasting (WRF) model, we estimated CH4 emission amounts. Here, we picked two sites on the US West Coast, where the clear-sky frequency is high and a series of data are available. The natural gas leak at Aliso Canyon showed a large enhancement and its decrease with time since the initial blowout. We present a time series of flux estimates assuming the source is a single point without influx. The cattle feedlot in Chino, California, has a weather station within the TANSO-FTS footprint. The wind speed is monitored continuously and the wind direction is stable at the time of the GOSAT overpass. The large TANSO-FTS footprint and strong wind decrease the enhancement below the noise level. Weak wind shows enhancements in CH4, but the velocity data have large uncertainties. We show the detection limit of single samples and how to reduce the uncertainty using time series of satellite data. We will propose that the next generation instruments for accurate anthropogenic CO2 and CH
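
    The record does not spell out the retrieval, but a common single-footprint mass-balance estimate of the kind alluded to multiplies the column enhancement by the wind speed and an effective plume width; the sketch below does exactly that, and the effective width, enhancement and wind values are purely illustrative assumptions.

```python
M_AIR = 0.02896    # kg/mol, dry air
M_CH4 = 0.01604    # kg/mol
G = 9.81           # m/s^2
P_SURF = 101325.0  # Pa

def ch4_flux_mass_balance(delta_xch4_ppb, wind_speed, plume_width):
    """Rough point-source CH4 flux (kg/s) from a column enhancement.

    delta_xch4_ppb : XCH4 enhancement over background (ppb)
    wind_speed     : advecting wind speed (m/s)
    plume_width    : assumed effective cross-wind plume width (m)
    """
    air_column = P_SURF / (G * M_AIR)                 # mol of dry air per m^2
    ch4_column = delta_xch4_ppb * 1e-9 * air_column   # mol CH4 per m^2
    return ch4_column * M_CH4 * wind_speed * plume_width

# e.g. a 30 ppb enhancement, 3 m/s wind and a footprint-scale 5 km effective width
q = ch4_flux_mass_balance(30.0, 3.0, 5.0e3)
print(f"{q:.2f} kg/s (~{q * 3.15e7 / 1e6:.0f} kt/yr)")
```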

  20. Development of repository-wide radionuclide transport model considering the effects of multiple sources

    Hatanaka, Koichiro; Watari, Shingo; Ijiri, Yuji

    1999-11-01

    Safety assessment of the geological isolation system according to the groundwater scenario has traditionally been conducted based on a single canister configuration, and then the safety of the total system has been evaluated based on dose rates obtained by multiplying the migration rates released from the engineered barrier and/or the natural barrier by dose conversion factors and the total number of canisters disposed in the repository. The dose conversion factors can be obtained from the biosphere analysis. In this study, we focused on the effect of multiple sources due to the disposal of canisters at different positions in the repository. When the effect of multiple sources is taken into consideration, concentration interference in the repository region can take place. Therefore, a radionuclide transport model/code considering the effect of concentration interference due to multiple sources was developed to assess the effect quantitatively. The newly developed model/code was verified through comparison with the existing radionuclide transport analysis code used in the second progress report. In addition, the effect of the concentration interference was evaluated by setting up a simple problem using the newly developed analysis code. The results show that the maximum peak value of the migration rates from the repository was about two orders of magnitude lower than that based on a single canister configuration. Since the analysis code was developed by assuming that all canisters disposed of along the one-dimensional groundwater flow contribute to the concentration interference in the repository region, the assumption should be verified by conducting two- or three-dimensional analyses considering heterogeneous geological structure as future work. (author)

  1. Sterile paper points as a bacterial DNA-contamination source in microbiome profiles of clinical samples

    van der Horst, J.; Buijs, M.J.; Laine, M.L.; Wismeijer, D.; Loos, B.G.; Crielaard, W.; Zaura, E.

    2013-01-01

    Objectives High throughput sequencing of bacterial DNA from clinical samples provides untargeted, open-ended information on the entire microbial community. The downside of this approach is the vulnerability to DNA contamination from other sources than the clinical sample. Here we describe

  2. Improving sourcing decisions in NPD projects: Monetary quantification of points of difference

    Wouters, Marc; Anderson, James C.; Narus, James A.; Wynstra, Finn

    2009-01-01

    During new product development (NPD), firms make critical design and sourcing decisions that determine the new product's cost, performance, competitive position, and profitability. The purchase price of materials and components for the new product provides only part of the picture for design and

  3. Forced sound transmission through a finite-sized single leaf panel subject to a point source excitation.

    Wang, Chong

    2018-03-01

    In the case of a point source in front of a panel, the wavefront of the incident wave is spherical. This paper discusses spherical sound waves transmitting through a finite-sized panel. The focus is the forced sound transmission performance that predominates in the frequency range below the coincidence frequency. With the point source located along the centerline of the panel, the forced sound transmission coefficient is derived by introducing the sound radiation impedance for spherical incident waves. It is found that, in addition to the panel mass, the forced sound transmission loss also depends on the distance from the source to the panel, as determined by the radiation impedance. Unlike the case of plane incident waves, the sound transmission performance of a finite-sized panel does not necessarily converge to that of an infinite panel, especially when the source is away from the panel. For practical applications, the normal-incidence sound transmission loss expression for plane incident waves can be used if the distance d between the source and the panel and the panel surface area S satisfy d/S > 0.5. When d/S ≈ 0.1, the diffuse-field sound transmission loss expression may be a good approximation. An empirical expression for d/S = 0 is also given.

  4. Relaxation dynamics in the presence of pulse multiplicative noise sources with different correlation properties

    Kargovsky, A. V.; Chichigina, O. A.; Anashkina, E. I.; Valenti, D.; Spagnolo, B.

    2015-10-01

    The relaxation dynamics of a system described by a Langevin equation with pulse multiplicative noise sources with different correlation properties is considered. The solution of the corresponding Fokker-Planck equation is derived for Gaussian white noise. Moreover, two pulse processes with regulated periodicity are considered as a noise source: the dead-time-distorted Poisson process and the process with fixed time intervals, which is characterized by an infinite correlation time. We find that the steady state of the system is dependent on the correlation properties of the pulse noise. An increase of the noise correlation causes the decrease of the mean value of the solution at the steady state. The analytical results are in good agreement with the numerical ones.

  5. System and method for integrating and accessing multiple data sources within a data warehouse architecture

    Musick, Charles R [Castro Valley, CA; Critchlow, Terence [Livermore, CA; Ganesh, Madhaven [San Jose, CA; Slezak, Tom [Livermore, CA; Fidelis, Krzysztof [Brentwood, CA

    2006-12-19

    A system and method is disclosed for integrating and accessing multiple data sources within a data warehouse architecture. The metadata formed by the present method provide a way to declaratively present domain specific knowledge, obtained by analyzing data sources, in a consistent and useable way. Four types of information are represented by the metadata: abstract concepts, databases, transformations and mappings. A mediator generator automatically generates data management computer code based on the metadata. The resulting code defines a translation library and a mediator class. The translation library provides a data representation for domain specific knowledge represented in a data warehouse, including "get" and "set" methods for attributes that call transformation methods and derive a value of an attribute if it is missing. The mediator class defines methods that take "distinguished" high-level objects as input and traverse their data structures and enter information into the data warehouse.
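
    The patent abstract stays at the architecture level; purely as a toy illustration of the "getter that derives a missing attribute via a transformation method" idea in the translation library, the Python sketch below uses invented names and a trivially derivable attribute.

```python
class GeneRecord:
    """Toy translation-library entry: the getter falls back to a
    transformation method when no source database supplied the value."""

    def __init__(self, symbol, sequence=None, gc_content=None):
        self._symbol = symbol
        self._sequence = sequence
        self._gc_content = gc_content

    @property
    def gc_content(self):
        # derive the value on demand if it is missing from the warehouse
        if self._gc_content is None and self._sequence is not None:
            self._gc_content = self._derive_gc_content(self._sequence)
        return self._gc_content

    @gc_content.setter
    def gc_content(self, value):
        self._gc_content = value

    @staticmethod
    def _derive_gc_content(seq):
        seq = seq.upper()
        return (seq.count("G") + seq.count("C")) / len(seq)

record = GeneRecord("ABC1", sequence="ATGGCGCGTTAA")
print(record.gc_content)  # derived on demand: 0.5
```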

  6. Prediction of the neutrons subcritical multiplication using the diffusion hybrid equation with external neutron sources

    Costa da Silva, Adilson; Carvalho da Silva, Fernando [COPPE/UFRJ, Programa de Engenharia Nuclear, Caixa Postal 68509, 21941-914, Rio de Janeiro (Brazil); Senra Martinez, Aquilino, E-mail: aquilino@lmp.ufrj.br [COPPE/UFRJ, Programa de Engenharia Nuclear, Caixa Postal 68509, 21941-914, Rio de Janeiro (Brazil)

    2011-07-15

    Highlights: > We proposed a new neutron diffusion hybrid equation with external neutron source. > A coarse mesh finite difference method for the adjoint flux and reactivity calculation was developed. > 1/M curve to predict the criticality condition is used. - Abstract: We used the neutron diffusion hybrid equation, in cartesian geometry with external neutron sources to predict the subcritical multiplication of neutrons in a pressurized water reactor, using a 1/M curve to predict the criticality condition. A Coarse Mesh Finite Difference Method was developed for the adjoint flux calculation and to obtain the reactivity values of the reactor. The results obtained were compared with benchmark values in order to validate the methodology presented in this paper.

  7. Prediction of the neutrons subcritical multiplication using the diffusion hybrid equation with external neutron sources

    Costa da Silva, Adilson; Carvalho da Silva, Fernando; Senra Martinez, Aquilino

    2011-01-01

    Highlights: → We proposed a new neutron diffusion hybrid equation with external neutron source. → A coarse mesh finite difference method for the adjoint flux and reactivity calculation was developed. → 1/M curve to predict the criticality condition is used. - Abstract: We used the neutron diffusion hybrid equation, in cartesian geometry with external neutron sources to predict the subcritical multiplication of neutrons in a pressurized water reactor, using a 1/M curve to predict the criticality condition. A Coarse Mesh Finite Difference Method was developed for the adjoint flux calculation and to obtain the reactivity values of the reactor. The results obtained were compared with benchmark values in order to validate the methodology presented in this paper.
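
    The two records above rely on a 1/M curve; as a generic illustration (not the hybrid-diffusion calculation of the paper), the sketch below extrapolates the inverse multiplication, approximated by a reference count rate divided by the measured count rate, to zero to predict the critical control position. All numbers are made up.

```python
import numpy as np

def predict_critical_position(control_positions, count_rates, reference_rate):
    """Extrapolate the 1/M curve to zero to estimate the critical position.

    M is the subcritical multiplication, approximated here by the detector
    count rate normalised to a reference (e.g. fully inserted rods) rate;
    a simple straight-line fit is used for the extrapolation.
    """
    inv_m = reference_rate / np.asarray(count_rates, dtype=float)
    slope, intercept = np.polyfit(control_positions, inv_m, 1)
    return -intercept / slope        # position where 1/M reaches zero

positions = np.array([0.0, 20.0, 40.0, 60.0, 80.0])     # % of rod withdrawal
rates = np.array([100.0, 140.0, 230.0, 520.0, 2100.0])  # detector counts/s
print(predict_critical_position(positions, rates, reference_rate=100.0))
```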

  8. An open-source software program for performing Bonferroni and related corrections for multiple comparisons

    Kyle Lesack

    2011-01-01

    Full Text Available Increased type I error resulting from multiple statistical comparisons remains a common problem in the scientific literature. This may result in the reporting and promulgation of spurious findings. One approach to this problem is to correct groups of P-values for "family-wide significance" using a Bonferroni correction or the less conservative Bonferroni-Holm correction, or to correct for the "false discovery rate" with a Benjamini-Hochberg correction. Although several solutions are available for performing this correction through commercially available software, there are no widely available, easy-to-use open source programs to perform these calculations. In this paper we present an open source program written in Python 3.2 that performs calculations for standard Bonferroni, Bonferroni-Holm and Benjamini-Hochberg corrections.
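
    This is not the published program itself, only a compact re-implementation of the three adjustments it covers, returning adjusted p-values; it is written for current Python 3 rather than the Python 3.2 mentioned in the record.

```python
import numpy as np

def bonferroni(pvals):
    """Standard Bonferroni: multiply each p-value by the number of tests."""
    p = np.asarray(pvals, dtype=float)
    return np.minimum(p * p.size, 1.0)

def holm(pvals):
    """Bonferroni-Holm step-down adjusted p-values."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    adj_sorted = np.minimum(np.maximum.accumulate((m - np.arange(m)) * p[order]), 1.0)
    adj = np.empty(m)
    adj[order] = adj_sorted
    return adj

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg step-up adjusted p-values (false discovery rate)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)
    adj_sorted = np.minimum(np.minimum.accumulate(ranked[::-1])[::-1], 1.0)
    adj = np.empty(m)
    adj[order] = adj_sorted
    return adj

pvals = [0.001, 0.008, 0.039, 0.041, 0.27]
for fn in (bonferroni, holm, benjamini_hochberg):
    print(fn.__name__, np.round(fn(pvals), 3))
```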

  9. Joint source based analysis of multiple brain structures in studying major depressive disorder

    Ramezani, Mahdi; Rasoulian, Abtin; Hollenstein, Tom; Harkness, Kate; Johnsrude, Ingrid; Abolmaesumi, Purang

    2014-03-01

    We propose a joint Source-Based Analysis (jSBA) framework to identify brain structural variations in patients with Major Depressive Disorder (MDD). In this framework, features representing position, orientation and size (i.e. pose), shape, and local tissue composition are extracted. Subsequently, simultaneous analysis of these features within a joint analysis method is performed to generate the basis sources that show significant differences between subjects with MDD and healthy controls. Moreover, in a leave-one-out cross-validation experiment, we use a Fisher Linear Discriminant (FLD) classifier to identify individuals within the MDD group. Results show that we can classify the MDD subjects with an accuracy of 76% solely based on the information gathered from the joint analysis of pose, shape, and tissue composition in multiple brain structures.
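
    A minimal sketch of the classification step described above, a Fisher linear discriminant evaluated with leave-one-out cross-validation, is shown below using scikit-learn and random stand-in features; it is not the authors' data or code.

```python
# Illustrative FLD + leave-one-out cross-validation on a joint feature matrix.
# The feature matrix here is random stand-in data, not the study's features.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_features = 40, 20          # e.g. pose/shape/tissue source weights
X = rng.normal(size=(n_subjects, n_features))
y = np.array([0] * 20 + [1] * 20)        # 0 = healthy control, 1 = MDD

clf = LinearDiscriminantAnalysis()        # Fisher linear discriminant
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print("leave-one-out accuracy: %.2f" % scores.mean())
```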

  10. Modelling a real-world buried valley system with vertical non-stationarity using multiple-point statistics

    He, Xiulan; Sonnenborg, Torben; Jørgensen, Flemming

    2017-01-01

    Stationarity has traditionally been a requirement of geostatistical simulations. A common way to deal with non-stationarity is to divide the system into stationary sub-regions and subsequently merge the realizations for each region. Recently, the so-called partition approach that has the flexibility to model non-stationary systems directly was developed for multiple-point statistics simulation (MPS). The objective of this study is to apply the MPS partition method with conventional borehole logs and high-resolution airborne electromagnetic (AEM) data, for simulation of a real-world non-stationary geological system characterized by a network of connected buried valleys that incise deeply into layered Miocene sediments (case study in Denmark). The results show that, based on fragmented information of the formation boundaries, the MPS partition method is able to simulate a non-stationary system...

  11. Investigating lithological and geophysical relationships with applications to geological uncertainty analysis using Multiple-Point Statistical methods

    Barfod, Adrian

    The PhD thesis presents a new method for analyzing the relationship between resistivity and lithology, as well as a method for quantifying the hydrostratigraphic modeling uncertainty related to Multiple-Point Statistical (MPS) methods. Three-dimensional (3D) geological models are im... ...is to improve analysis and research of the resistivity-lithology relationship and ensemble geological/hydrostratigraphic modeling. The groundwater mapping campaign in Denmark, beginning in the 1990's, has resulted in the collection of large amounts of borehole and geophysical data. The data has been compiled in two publicly available databases, the JUPITER and GERDA databases, which contain borehole and geophysical data, respectively. The large amounts of available data provided a unique opportunity for studying the resistivity-lithology relationship. The method for analyzing the resistivity...

  12. On the possibility of the multiple inductively coupled plasma and helicon plasma sources for large-area processes

    Lee, Jin-Won; Lee, Yun-Seong, E-mail: leeeeys@kaist.ac.kr; Chang, Hong-Young [Low-temperature Plasma Laboratory, Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon 305-701 (Korea, Republic of); An, Sang-Hyuk [Agency of Defense Development, Yuseong-gu, Daejeon 305-151 (Korea, Republic of)

    2014-08-15

    In this study, we attempted to determine the possibility of using multiple inductively coupled plasma (ICP) and helicon plasma sources for large-area processes. Experiments were performed with one and two coils to measure plasma and electrical parameters, and a circuit simulation was performed to obtain the current in each coil in the two-coil experiment. Based on the results, we could establish the possibility of multiple ICP sources, owing to the direct change of impedance with current and the saturation of impedance caused by the skin-depth effect. However, a helicon plasma source is difficult to adapt to multiple sources because of the continual change of real impedance with mode transition and the low uniformity of the B-field confinement. As a result, it is expected that ICP can be adapted to multiple sources for large-area processes.

  13. Generation of point isotropic source dose buildup factor data for the PFBR special concretes in a form compatible for usage in point kernel computer code QAD-CGGP

    Radhakrishnan, G.

    2003-01-01

    Full text: Around the PFBR (Prototype Fast Breeder Reactor) reactor assembly, special concretes of density 2.4 g/cm³ and 3.6 g/cm³ are to be used in complex geometrical shapes in the peripheral shields. A point-kernel computer code like QAD-CGGP, written for complex shield geometries, comes in handy for the shield design optimization of the peripheral shields. QAD-CGGP requires a database of buildup factor data, which at present contains only ordinary concrete of density 2.3 g/cm³. In order to extend the database to the PFBR special concretes, point isotropic source dose buildup factors have been generated by the Monte Carlo method using the computer code MCNP-4A. For the above-mentioned special concretes, buildup factor data have been generated in the energy range 0.5 MeV to 10.0 MeV for thicknesses ranging from 1 mean free path (mfp) to 40 mfp. A Capo's formula fit of the buildup factor data, compatible with QAD-CGGP, has been attempted.
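
    For reference, the point-kernel quantity these buildup factors feed into has the standard form below; this is a generic textbook expression, and QAD-CGGP's internal notation may differ.

```latex
% Point-kernel dose rate from an isotropic point source of strength S (photons/s)
% at distance r through a shield with attenuation coefficient \mu:
\dot{D}(r) \;=\; K(E)\, B(E,\mu r)\, \frac{S\, e^{-\mu r}}{4 \pi r^{2}},
% where K(E) is the flux-to-dose conversion factor at photon energy E and
% B(E,\mu r) is the point isotropic buildup factor tabulated as described above.
```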

  14. Shape optimization of an airfoil in a BZT flow with multiple-source uncertainties

    Congedo, P.M.; Corre, C.; Martinez, J.M.

    2011-01-01

    Bethe-Zel'dovich-Thompson (BZT) fluids are characterized by negative values of the fundamental derivative of gas dynamics for a range of temperatures and pressures in the vapor phase, which leads to non-classical gas dynamic behaviors such as the disintegration of compression shocks. These non-classical phenomena can be exploited, when using these fluids in Organic Rankine Cycles (ORCs), to increase isentropic efficiency. A predictive numerical simulation of these flows must account for two main sources of physical uncertainty: the BZT fluid properties, often difficult to measure accurately, and the usually fluctuating turbine inlet conditions. To take full advantage of the BZT properties, the turbine geometry must also be specifically designed, keeping in mind that the geometry achieved in practice after machining always differs slightly from the theoretical shape. This paper investigates efficient procedures to perform shape optimization in a 2D BZT flow with multiple-source uncertainties (thermodynamic model, operating conditions and geometry). To demonstrate the feasibility of the proposed strategies for shape optimization in the presence of multiple-source uncertainties, a zero-incidence symmetric airfoil wave-drag minimization problem is retained as a case study. This simplified configuration encompasses most of the features associated with a turbine design problem, as far as the uncertainty quantification is concerned. A preliminary analysis of the contributions to the variance of the wave drag allows the most significant sources of uncertainty to be selected using a reduced number of flow computations. The resulting mean value and variance of the objective are then turned into metamodels. The optimal Pareto sets corresponding to the minimization of various substitute functions are obtained using a genetic algorithm as the optimizer, and their differences are discussed. (authors)

  15. Use of multiple data sources to estimate hepatitis C seroprevalence among prisoners: A retrospective cohort study.

    Kathryn J Snow

    Full Text Available Hepatitis C is a major cause of preventable morbidity and mortality. Prisoners are a key population for hepatitis C control programs, and with the advent of highly effective therapies, prisons are increasingly important sites for hepatitis C diagnosis and treatment. Accurate estimates of hepatitis C prevalence among prisoners are needed in order to plan and resource service provision; however, many prevalence estimates are based on surveys compromised by limited and potentially biased participation. We aimed to compare estimates derived from three different data sources, and to assess whether the use of self-report as a supplementary data source may help researchers assess the risk of selection bias. We used three data sources to estimate the prevalence of hepatitis C antibodies in a large cohort of Australian prisoners: prison medical records, self-reported status during a face-to-face interview prior to release from prison, and data from a statewide notifiable conditions surveillance system. Of 1,315 participants, 33.8% had at least one indicator of hepatitis C seropositivity; however, less than one third of these (9.5% of the entire cohort) were identified by all three data sources. Among participants of known status, self-report had a sensitivity of 80.1% and a positive predictive value of 97.8%. Any one data source used in isolation would have under-estimated the prevalence of hepatitis C in this cohort. Using multiple data sources in studies of hepatitis C seroprevalence among prisoners may improve case detection and help researchers assess the risk of selection bias due to non-participation in serological testing.

  16. On-site meteorological instrumentation requirements to characterize diffusion from point sources: workshop report. Final report Sep 79-Sep 80

    Strimaitis, D.; Hoffnagle, G.; Bass, A.

    1981-04-01

    Results of a workshop entitled 'On-Site Meteorological Instrumentation Requirements to Characterize Diffusion from Point Sources' are summarized and reported. The workshop was sponsored by the U.S. Environmental Protection Agency in Raleigh, North Carolina, on January 15-17, 1980. Its purpose was to provide EPA with a thorough examination of the meteorological instrumentation and data collection requirements needed to characterize airborne dispersion of air contaminants from point sources and to recommend, based on an expert consensus, specific measurement techniques and accuracies. Secondary purposes of the workshop were to (1) make recommendations to the National Weather Service (NWS) about collecting and archiving meteorological data that would best support air quality dispersion modeling objectives and (2) make recommendations on standardization of meteorological data reporting and quality assurance programs.

  17. Correlation Wave-Front Sensing Algorithms for Shack-Hartmann-Based Adaptive Optics using a Point Source

    Poynee, L A

    2003-01-01

    Shack-Hartmann-based Adaptive Optics systems with a point-source reference normally use a wave-front sensing algorithm that estimates the centroid (center of mass) of the point-source image 'spot' to determine the wave-front slope. The centroiding algorithm suffers from several weaknesses. For a small number of pixels, the algorithm gain is dependent on spot size. The use of many pixels on the detector leads to significant propagation of read noise. Finally, background light or spot halo aberrations can skew results. In this paper an alternative algorithm that suffers from none of these problems is proposed: correlation of the spot with an ideal reference spot. The correlation method is derived and a theoretical analysis evaluates its performance in comparison with centroiding. Both simulation and data from real AO systems are used to illustrate the results. The correlation algorithm is more robust than centroiding, but requires more computation.
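
    A small numerical sketch of the two estimators being compared, centre-of-mass centroiding versus cross-correlation with an ideal reference spot, is given below on a synthetic subaperture image; the spot width, grid size and noise level are illustrative choices, not values from the paper.

```python
# Synthetic comparison of centroiding and correlation-peak slope estimation.

import numpy as np
from scipy.signal import correlate

def gaussian_spot(n, col0, row0, sigma):
    rows, cols = np.mgrid[0:n, 0:n]
    return np.exp(-((cols - col0) ** 2 + (rows - row0) ** 2) / (2 * sigma ** 2))

n, sigma = 32, 2.0
dx, dy = 2.3, -1.7                                      # true spot shift in pixels
spot = gaussian_spot(n, n / 2 + dx, n / 2 + dy, sigma)
spot += np.random.default_rng(1).normal(0.0, 0.02, spot.shape)   # read noise

# 1) centroid (centre of mass) of the clipped spot
rows, cols = np.mgrid[0:n, 0:n]
w = np.clip(spot, 0.0, None)
cx = (w * cols).sum() / w.sum() - n / 2
cy = (w * rows).sum() / w.sum() - n / 2

# 2) cross-correlation with an ideal reference spot; the peak gives the shift
ref = gaussian_spot(n, n / 2, n / 2, sigma)
corr = correlate(spot, ref, mode="full", method="fft")
krow, kcol = np.unravel_index(np.argmax(corr), corr.shape)
dx_corr, dy_corr = kcol - (n - 1), krow - (n - 1)       # integer-pixel estimate

print("centroid estimate:    dx=%.2f dy=%.2f" % (cx, cy))
print("correlation estimate: dx=%d dy=%d (nearest pixel)" % (dx_corr, dy_corr))
```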

  18. A rotating modulation imager for locating mid-range point sources

    Kowash, B.R.; Wehe, D.K.; Fessler, J.A.

    2009-01-01

    Rotating modulation collimators (RMCs) are relatively simple indirect imaging devices that have proven useful in gamma-ray astronomy (far field) and have more recently been studied for medical imaging (very near field). At the University of Michigan an RMC has been built to study its performance for homeland security applications. This research highlights the imaging performance of this system and focuses on three distinct regions in the RMC field of view that can impact the search for hidden sources. These regions are a blind zone around the axis of rotation, a two-mask image zone that extends from the blind zone to the edge of the field of view, and a single-mask image zone that occurs when sources fall outside the field of view of both masks. By considering the extent and impact of these zones, the size of the two-mask region can be optimized for the best system performance.

  19. The Non-point Source Pollution Effects of Pesticides Based on the Survey of 340 Farmers in Chongqing City

    YU, Lianchao; GU, Limeng; BI, Qian

    2015-01-01

    Using survey data on 340 farmers in Chongqing City, this paper performs an empirical analysis of the factors influencing the non-point source pollution of pesticides. The results show that older householders apply more pesticides, which may be due to their weaker physical strength and weaker ability to accept the concept of advanced cultivation; householders with a high level of education choose to use fewer pesticides; the pesticide application rate is negatively correlated with...

  20. Analytical formulae to calculate the solid angle subtended at an arbitrarily positioned point source by an elliptical radiation detector

    Abbas, Mahmoud I.; Hammoud, Sami; Ibrahim, Tarek; Sakr, Mohamed

    2015-01-01

    In this article, we introduce a direct analytical mathematical method for calculating the solid angle, Ω, subtended at a point by closed elliptical contours. The solid angle is required in many areas of optical and nuclear physics to estimate the flux of a particle beam of radiation and to determine the activity of a radioactive source. The validity of the derived analytical expressions was successfully confirmed by comparison with some published data (numerical method).
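
    A quick numerical cross-check of such solid-angle values can be obtained by direct integration over the elliptical aperture; the sketch below is not the paper's analytical formula, and the semi-axes and point coordinates are arbitrary test values.

```python
# Numerical solid angle subtended at a point P = (x0, y0, z0) by an ellipse of
# semi-axes a, b lying in the z = 0 plane, from dOmega = |z0| dA / r^3.

import numpy as np

def solid_angle_ellipse(a, b, x0, y0, z0, n=2000):
    x = np.linspace(-a, a, n)
    y = np.linspace(-b, b, n)
    dx, dy = x[1] - x[0], y[1] - y[0]
    X, Y = np.meshgrid(x, y)
    inside = (X / a) ** 2 + (Y / b) ** 2 <= 1.0        # points on the elliptical disc
    r = np.sqrt((X - x0) ** 2 + (Y - y0) ** 2 + z0 ** 2)
    integrand = np.abs(z0) / r ** 3
    return integrand[inside].sum() * dx * dy

# Sanity check against the closed form for a circular disc viewed on-axis:
# Omega = 2*pi*(1 - h / sqrt(h**2 + R**2))
R, h = 2.0, 3.0
print(solid_angle_ellipse(R, R, 0.0, 0.0, h))
print(2 * np.pi * (1 - h / np.sqrt(h ** 2 + R ** 2)))

# Off-axis point and a genuine ellipse (arbitrary test values)
print(solid_angle_ellipse(3.0, 1.5, 1.0, 0.5, 4.0))
```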

  1. Study The Validity of The Direct Mathematical Method For Calculation The Total Efficiency Using Point And Disk Sources

    Hagag, O.M.; Nafee, S.S.; Naeem, M.A.; El Khatib, A.M.

    2011-01-01

    The direct mathematical method has been developed for calculating the total efficiency of many cylindrical gamma detectors, especially HPGe and NaI detectors. Different source geometries (point and disk) are considered. Gamma attenuation by the detector window or any interfacing absorbing layer is also taken into account. Results are compared with published experimental data to study the validity of the direct mathematical method for calculating the total efficiency of any gamma detector size.

  2. The Newtonian force experienced by a point mass near a finite cylindrical source

    Selvaggi, Jerry P; Salon, Sheppard; Chari, M V K

    2008-01-01

    The Newtonian gravitational force experienced by a point mass located at some external point from a thick-walled, hollow and uniform finite circular cylindrical body was recently solved by Lockerbie, Veryaskin and Xu (1993 Class. Quantum Grav. 10 2419). Their method of attack relied on the introduction of the circular cylindrical free-space Green function representation for the inverse distance which appears in the formulation of the Newtonian potential function. This ultimately leads Lockerbie et al to a final expression for the Newtonian potential function which is expressed as a double summation of even-ordered Legendre polynomials. However, the kernel of the cylindrical free-space Green function, which is represented by an infinite integral of the product of two Bessel functions and a decaying exponential, can be analytically evaluated in terms of a toroidal function. This leads to a simplification in the mathematical analysis developed by Lockerbie et al. Also, each term in the infinite series solution for the Newtonian potential function can be expressed in closed form in terms of elementary functions. The authors develop the Newtonian potential function by employing toroidal functions of zeroth order, or Legendre functions of half-integral degree, Q_{m-1/2}(β) (Bouwkamp and de Bruijn 1947 J. Appl. Phys. 18 562; Cohl et al 2001 Phys. Rev. A 64 052509; Selvaggi et al 2004 IEEE Trans. Magn. 40 3278). These functions are monotonically decreasing and converge rapidly (Moon and Spencer 1961 Field Theory for Engineers (New Jersey: Van Nostrand Company) pp 368-76; Cohl and Tohline 1999 Astrophys. J. 527 86). The introduction of the toroidal harmonic expansion leads to an infinite series solution for which each term can be expressed as an elementary function. This enables one to easily compute the axial and radial forces experienced by an internal or an external point mass.
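
    The analytical evaluation referred to above rests on expanding the inverse distance in toroidal harmonics; up to notational conventions, the expansion given by Cohl and Tohline (1999, cited above) reads:

```latex
% Expansion of the inverse distance in cylindrical coordinates (\rho,\phi,z)
% in terms of toroidal functions Q_{m-1/2} (Cohl & Tohline 1999):
\frac{1}{\lvert \mathbf{r}-\mathbf{r}'\rvert}
  \;=\; \frac{1}{\pi\sqrt{\rho\rho'}}
        \sum_{m=-\infty}^{\infty} Q_{m-\frac{1}{2}}(\chi)\, e^{\,i m (\phi-\phi')},
\qquad
\chi \;=\; \frac{\rho^{2}+\rho'^{2}+(z-z')^{2}}{2\rho\rho'} .
```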

  3. Design, development and integration of a large scale multiple source X-ray computed tomography system

    Malcolm, Andrew A.; Liu, Tong; Ng, Ivan Kee Beng; Teng, Wei Yuen; Yap, Tsi Tung; Wan, Siew Ping; Kong, Chun Jeng

    2013-01-01

    X-ray Computed Tomography (CT) allows visualisation of the physical structures in the interior of an object without physically opening or cutting it. This technology supports a wide range of applications in the non-destructive testing, failure analysis or performance evaluation of industrial products and components. Of the numerous factors that influence the performance characteristics of an X-ray CT system, the energy level in the X-ray spectrum to be used is one of the most significant. The ability of the X-ray beam to penetrate a given thickness of a specific material is directly related to the maximum available energy level in the beam. Higher energy levels allow penetration of thicker components made of more dense materials. In response to local industry demand and in support of on-going research activity in the area of 3D X-ray imaging for industrial inspection, the Singapore Institute of Manufacturing Technology (SIMTech) engaged in the design, development and integration of a large scale multiple-source X-ray computed tomography system based on X-ray sources operating at higher energies than previously available in the Institute. The system consists of a large area direct digital X-ray detector (410 x 410 mm), a multiple-axis manipulator system, a 225 kV open tube microfocus X-ray source and a 450 kV closed tube millifocus X-ray source. The 225 kV X-ray source can be operated in either transmission or reflection mode. The body of the 6-axis manipulator system is fabricated from heavy-duty steel onto which high precision linear and rotary motors have been mounted in order to achieve high accuracy, stability and repeatability. A source-detector distance of up to 2.5 m can be achieved. The system is controlled by a proprietary X-ray CT operating system developed by SIMTech. The system currently can accommodate samples up to 0.5 x 0.5 x 0.5 m in size with weight up to 50 kg. These specifications will be increased to 1.0 x 1.0 x 1.0 m and 100 kg in the future.

  4. Optimizing Irrigation Water Allocation under Multiple Sources of Uncertainty in an Arid River Basin

    Wei, Y.; Tang, D.; Gao, H.; Ding, Y.

    2015-12-01

    Population growth and climate change add additional pressures affecting water resources management strategies for meeting demands from different economic sectors. This is especially challenging in arid regions where fresh water is limited. For instance, in the Tailanhe River Basin (Xinjiang, China), a compromise must be made between water suppliers and users during drought years. This study presents a multi-objective irrigation water allocation model to cope with water scarcity in arid river basins. To deal with the uncertainties from multiple sources in the water allocation system (e.g., variations of available water amount, crop yield, crop prices, and water price), the model employs an interval linear programming approach. The multi-objective optimization model developed from this study is characterized by integrating ecosystem service theory into water-saving measures. For evaluation purposes, the model is used to construct an optimal allocation system for irrigation areas fed by the Tailan River (Xinjiang Province, China). The objective functions to be optimized are formulated based on these irrigation areas' economic, social, and ecological benefits. The optimal irrigation water allocation plans are made under different hydroclimate conditions (wet year, normal year, and dry year), with multiple sources of uncertainty represented. The modeling tool and results are valuable for advising decision making by the local water authority and the agricultural community, especially on measures for coping with water scarcity (by incorporating uncertain factors associated with crop production planning).
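
    A toy sketch of the basic idea behind interval linear programming, solving the allocation problem at the lower and upper bounds of an uncertain water supply, is shown below with made-up crop data; it is not the authors' model.

```python
# Toy irrigation-allocation LP solved at both ends of an uncertain water
# availability interval. Crop names, benefits and water demands are made up.

from scipy.optimize import linprog

benefit = [1.2, 0.8, 1.5]          # net benefit per hectare (arbitrary units)
water_use = [6.0, 4.0, 9.0]        # water demand per hectare (10^3 m^3)
max_area = [50.0, 80.0, 30.0]      # land constraint per crop (ha)

def allocate(water_available):
    # maximise total benefit  <=>  minimise its negative
    res = linprog(c=[-b for b in benefit],
                  A_ub=[water_use], b_ub=[water_available],
                  bounds=[(0, a) for a in max_area],
                  method="highs")
    return res.x, -res.fun

for label, supply in [("dry-year lower bound", 500.0),
                      ("wet-year upper bound", 900.0)]:
    areas, total = allocate(supply)
    print(label, "areas:", [round(a, 1) for a in areas],
          "benefit:", round(total, 1))
```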

  5. Assessing regional groundwater stress for nations using multiple data sources with the groundwater footprint

    Gleeson, Tom; Wada, Yoshihide

    2013-01-01

    Groundwater is a critical resource for agricultural production, ecosystems, drinking water and industry, yet groundwater depletion is accelerating, especially in a number of agriculturally important regions. Assessing the stress of groundwater resources is crucial for science-based policy and management, yet water stress assessments have often neglected groundwater and used single data sources, which may underestimate the uncertainty of the assessment. We consistently analyze and interpret groundwater stress across whole nations using multiple data sources for the first time. We focus on two nations with the highest national groundwater abstraction rates in the world, the United States and India, and use the recently developed groundwater footprint and multiple datasets of groundwater recharge and withdrawal derived from hydrologic models and data synthesis. A minority of aquifers, mostly with known groundwater depletion, show groundwater stress regardless of the input dataset. The majority of aquifers are not stressed with any input data while less than a third are stressed for some input data. In both countries groundwater stress affects agriculturally important regions. In the United States, groundwater stress impacts a lower proportion of the national area and population, and is focused in regions with lower population and water well density compared to India. Importantly, the results indicate that the uncertainty is generally greater between datasets than within datasets and that much of the uncertainty is due to recharge estimates. Assessment of groundwater stress consistently across a nation and assessment of uncertainty using multiple datasets are critical for the development of a science-based rationale for policy and management, especially with regard to where and to what extent to focus limited research and management resources. (letter)
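
    For reference, the groundwater footprint used in this record is commonly quoted in the form below (paraphrased from Gleeson et al.; the symbols follow that common usage rather than this letter's exact notation):

```latex
% Groundwater footprint of an aquifer of areal extent A_A (Gleeson et al. 2012):
GF \;=\; A_A\,\frac{C}{R - E},
% where C is the area-averaged abstraction (withdrawal) rate, R the recharge rate
% and E the groundwater contribution needed to sustain environmental streamflow;
% aquifers with GF / A_A > 1 are classified as stressed.
```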

  6. Search for atmospheric muon-neutrinos and extraterrestric neutrino point sources in the 1997 AMANDA-B10 data

    Biron von Curland, A.

    2002-07-01

    The young field of high energy neutrino astronomy can be motivated by the search for the origin of the charged cosmic rays. Large astrophysical objects like AGNs or supernova remnants are candidates to accelerate hadrons which then can interact to eventually produce high energy neutrinos. Neutrino-induced muons can be detected via their emission of Cherenkov light in large neutrino telescopes like AMANDA. More than 10⁹ atmospheric muon events and approximately 5000 atmospheric neutrino events were registered by AMANDA-B10 in 1997. Out of these, 223 atmospheric neutrino candidate events have been extracted. This data set contains approximately 15 background events. It allows the expected sensitivity of the detector to neutrino events to be confirmed. A second set containing 369 events (approximately 270 atmospheric neutrino events and 100 atmospheric muon events) was used to search for extraterrestrial neutrino point sources. Neither a binned search, nor a cluster search, nor a search for preselected sources gave indications for the existence of a strong neutrino point source. Based on this result, flux limits were derived. Assuming E_ν⁻² spectra, typical flux limits for selected sources of the order of Φ_μ^limit ≈ 10⁻¹⁴ cm⁻² s⁻¹ for muons and Φ_ν^limit ≈ 10⁻⁷ cm⁻² s⁻¹ for neutrinos have been obtained. (orig.)

  7. Energy demand modelling: pointing out alternative energy sources. The example of industry in OECD countries

    Renou, P.

    1992-01-01

    This thesis studies energy demand and alternative energy sources in OECD countries. In the first part, the principal models usually used for energy demand modelling are presented. In the second part, the author studies the flexible functional forms (translog, generalized Leontief, generalized quadratic, Fourier) to obtain an estimation of the production function. In the third part, several examples are given, chosen from seven countries (USA, Japan, Federal Republic of Germany, France, United Kingdom, Italy, Canada). Energy systems analysis in these countries can help to choose models and gives information on alternative energies. 246 refs., 24 figs., 27 tabs

  8. Source to point of use drinking water changes and knowledge, attitude and practices in Katsina State, Northern Nigeria

    Onabolu, B.; Jimoh, O. D.; Igboro, S. B.; Sridhar, M. K. C.; Onyilo, G.; Gege, A.; Ilya, R.

    In many Sub-Saharan countries such as Nigeria, inadequate access to safe drinking water is a serious problem, with 37% in the region and 58% of rural Nigeria using unimproved sources. The global challenge of measuring household water quality as a determinant of safety is further compounded in Nigeria by the possibility of deterioration from source to point of use. This is associated with the use of decentralised water supply systems in rural areas which are not fully reticulated to household taps, creating a need for an integrated water quality monitoring system. As an initial step towards establishing such a system in the north west and north central zones of Nigeria, the Katsina State Rural Water and Sanitation Agency, responsible for ensuring access to safe water and adequate sanitation for about 6 million people, carried out a three-pronged study with the support of UNICEF Nigeria. Part I was an assessment of the legislative and policy framework, institutional arrangements and capacity for drinking water quality monitoring, through desk-top reviews and Key Informant Interviews (KII), to ascertain the institutional capacity requirements for developing the water quality monitoring system. Part II was a water quality study in 700 households of 23 communities in four local government areas. The objectives were to assess the safety of drinking water, compare the safety at source and at household level, and assess the possible contributory role of end users' Knowledge, Attitudes and Practices. These were achieved through water analysis, household water quality tracking, KII and questionnaires. Part III was the production of a visual documentary as an advocacy tool to increase awareness among policy makers of the linkages between source management, treatment and end-user water quality. The results indicate that, except for pH, conductivity and manganese, the improved water sources were safe at source. However, there was a deterioration in water quality between source and

  9. Sources of water column methylmercury across multiple estuaries in the Northeast U.S.

    Balcom, Prentiss H; Schartup, Amina T; Mason, Robert P; Chen, Celia Y

    2015-12-20

    Estuarine water column methylmercury (MeHg) is an important driver of mercury (Hg) bioaccumulation in pelagic organisms and thus it is necessary to understand the sources and processes affecting environmental levels of MeHg. Increases in water column MeHg concentrations can ultimately be transferred to fish consumed by humans, but despite this, the sources of MeHg to the estuarine water column are still poorly understood. Here we evaluate MeHg sources across 4 estuaries and 10 sampling sites and examine the distributions and partitioning of sediment and water column MeHg across a geographic range (Maine to New Jersey). Our study sites present a gradient in the concentrations of sediment, pore water and water column Hg species. Suspended particle MeHg ranged from below detection to 187 pmol g⁻¹, dissolved MeHg from 0.01 to 0.68 pM, and sediment MeHg from 0.01 to 109 pmol g⁻¹. Across multiple estuaries, dissolved MeHg correlated with Hg species in the water column, and sediment MeHg correlated with sediment total Hg (HgT). Water column MeHg did not correlate well with sediment Hg across estuaries, indicating that sediment concentrations were not a good predictor of water MeHg concentrations. This is an unexpected finding since it has been shown that MeHg production from inorganic Hg²⁺ within sediment is the primary source of MeHg to coastal waters. Additional sources of MeHg regulate water column MeHg levels in some of the shallow estuaries included in this study.

  10. The Protein Identifier Cross-Referencing (PICR) service: reconciling protein identifiers across multiple source databases

    Leinonen Rasko

    2007-10-01

    Full Text Available Abstract Background Each major protein database uses its own conventions when assigning protein identifiers. Resolving the various, potentially unstable, identifiers that refer to identical proteins is a major challenge. This is a common problem when attempting to unify datasets that have been annotated with proteins from multiple data sources or querying data providers with one flavour of protein identifiers when the source database uses another. Partial solutions for protein identifier mapping exist but they are limited to specific species or techniques and to a very small number of databases. As a result, we have not found a solution that is generic enough and broad enough in mapping scope to suit our needs. Results We have created the Protein Identifier Cross-Reference (PICR) service, a web application that provides interactive and programmatic (SOAP and REST) access to a mapping algorithm that uses the UniProt Archive (UniParc) as a data warehouse to offer protein cross-references based on 100% sequence identity to proteins from over 70 distinct source databases loaded into UniParc. Mappings can be limited by source database, taxonomic ID and activity status in the source database. Users can copy/paste or upload files containing protein identifiers or sequences in FASTA format to obtain mappings using the interactive interface. Search results can be viewed in simple or detailed HTML tables or downloaded as comma-separated values (CSV) or Microsoft Excel (XLS) files suitable for use in a local database or a spreadsheet. Alternatively, a SOAP interface is available to integrate PICR functionality in other applications, as is a lightweight REST interface. Conclusion We offer a publicly available service that can interactively map protein identifiers and protein sequences to the majority of commonly used protein databases. Programmatic access is available through a standards-compliant SOAP interface or a lightweight REST interface. The PICR

  11. Electronic equilibrium as a function of depth in tissue from cobalt-60 point source exposures

    Myrick, J.A.

    1994-08-01

    The Nuclear Regulatory Commission has set the basic criteria for assessing skin dose stemming from hot particle contaminations. Compliance with 10 CFR 20.101 requires that exposure to the skin be evaluated over a 1 cm² area at a depth of 0.007 cm. Skin exposure can arise from both the beta and gamma components of radioactive particles and gamma radiation can contribute significantly to skin doses. The gamma component of dose increases dramatically when layers of protective clothing are interposed between the hot particle source and the skin, and in cases where the hot particle is large in comparison to the range of beta particles. Once the protective clothing layer is thicker than the maximum range of the beta particles, skin dose is due solely to gamma radiation. Charged particle equilibrium is not established at shallow depths. The degree of electronic equilibrium establishment must be assessed for shallow doses to prevent the over-assessment of skin dose because conventional fluence-to-dose conversion factors are not applicable. To assess the effect of electronic equilibrium, selected thicknesses of tissue equivalent material were interposed between radiochromic dye film and a ⁶⁰Co hot particle source and dose was measured as a function of depth. These measured values were then compared to models which are used to calculate charged particle equilibrium. The Miller-Reece model was found to agree closely with the experimental data while the Lantz-Lambert model overestimated dose at shallow depths.

  12. MILLIMETER TRANSIENT POINT SOURCES IN THE SPTpol 100 SQUARE DEGREE SURVEY

    Whitehorn, N.; Haan, T. de; George, E. M. [Department of Physics, University of California, Berkeley, CA 94720 (United States); Natoli, T.; Carlstrom, J. E. [Department of Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Ade, P. A. R. [Cardiff University, Cardiff CF10 3XQ (United Kingdom); Austermann, J. E.; Beall, J. A. [NIST Quantum Devices Group, 325 Broadway Mailcode 817.03, Boulder, CO 80305 (United States); Bender, A. N.; Benson, B. A.; Bleem, L. E.; Chang, C. L.; Citron, R.; Crawford, T. M.; Crites, A. T.; Gallicchio, J. [Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Chiang, H. C. [School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban (South Africa); Cho, H-M. [SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States); Dobbs, M. A. [Department of Physics, McGill University, 3600 Rue University, Montreal, Quebec H3A 2T8 (Canada); Everett, W., E-mail: nwhitehorn@berkeley.edu, E-mail: t.natoli@utoronto.ca [Department of Astrophysical and Planetary Sciences, University of Colorado, Boulder, CO 80309 (United States); and others

    2016-10-20

    The millimeter transient sky is largely unexplored, with measurements limited to follow-up of objects detected at other wavelengths. High-angular-resolution telescopes, designed for measurement of the cosmic microwave background (CMB), offer the possibility to discover new, unknown transient sources in this band, particularly the afterglows of unobserved gamma-ray bursts (GRBs). Here, we use the 10 m millimeter-wave South Pole Telescope, designed for the primary purpose of observing the CMB at arcminute and larger angular scales, to conduct a search for such objects. During the 2012-2013 season, the telescope was used to continuously observe a 100 deg² patch of sky centered at R.A. 23h30m and decl. -55° using the polarization-sensitive SPTpol camera in two bands centered at 95 and 150 GHz. These 6000 hr of observations provided continuous monitoring for day- to month-scale millimeter-wave transient sources at the 10 mJy level. One candidate object was observed with properties broadly consistent with a GRB afterglow, but at a statistical significance too low (p = 0.01) to confirm detection.

  13. Airborne remote sensing and in situ measurements of atmospheric CO2 to quantify point source emissions

    Krings, Thomas; Neininger, Bruno; Gerilowski, Konstantin; Krautwurst, Sven; Buchwitz, Michael; Burrows, John P.; Lindemann, Carsten; Ruhtz, Thomas; Schüttemeyer, Dirk; Bovensmann, Heinrich

    2018-02-01

    Reliable techniques to infer greenhouse gas emission rates from localised sources require accurate measurement and inversion approaches. In this study, airborne remote sensing observations of CO2 by the MAMAP instrument and airborne in situ measurements are used to infer emission estimates of carbon dioxide released from a cluster of coal-fired power plants. The study area is complex because the sources are located in close proximity and their associated carbon dioxide plumes overlap. For the analysis of in situ data, a mass balance approach is described and applied, whereas for the remote sensing observations an inverse Gaussian plume model is used in addition to a mass balance technique. A comparison between methods shows that results for all methods agree within 10 % or better, with uncertainties of 10 to 30 %, for cases in which in situ measurements were made over the complete vertical plume extent. The computed emissions for individual power plants are in agreement with results derived from emission factors and energy production data for the time of the overflight.
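
    As background, the mass-balance approach mentioned for the in situ data amounts, in its generic textbook form (not the paper's exact formulation), to integrating the concentration enhancement flux through a downwind cross-sectional plane:

```latex
% Emission rate from integrating the enhancement over a plane perpendicular to
% the mean wind, downwind of the source:
Q \;=\; \int_{0}^{z_{\mathrm{top}}} \!\! \int_{-\infty}^{+\infty}
        u(y,z)\, \Delta c(y,z)\; \mathrm{d}y\, \mathrm{d}z ,
% where u is the wind component normal to the plane and \Delta c the measured
% concentration enhancement (as mass per volume) above background.
```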

  14. Investigating the effects of point source and nonpoint source pollution on the water quality of the East River (Dongjiang) in South China

    Wu, Yiping; Chen, Ji

    2013-01-01

    Understanding the physical processes of point source (PS) and nonpoint source (NPS) pollution is critical to evaluate river water quality and identify major pollutant sources in a watershed. In this study, we used the physically-based hydrological/water quality model, Soil and Water Assessment Tool, to investigate the influence of PS and NPS pollution on the water quality of the East River (Dongjiang in Chinese) in southern China. Our results indicate that NPS pollution was the dominant contribution (>94%) to nutrient loads except for mineral phosphorus (50%). A comprehensive Water Quality Index (WQI) computed using eight key water quality variables demonstrates that water quality is better upstream than downstream despite the higher level of ammonium nitrogen found in upstream waters. Also, the temporal (seasonal) and spatial distributions of nutrient loads clearly indicate the critical time period (from late dry season to early wet season) and pollution source areas within the basin (middle and downstream agricultural lands), which resource managers can use to accomplish substantial reduction of NPS pollutant loadings. Overall, this study helps our understanding of the relationship between human activities and pollutant loads and further contributes to decision support for local watershed managers to protect water quality in this region. In particular, the methods presented, such as integrating WQI with watershed modeling and identifying the critical time period and pollution source areas, can be valuable for other researchers worldwide.

  15. Sources of Aid and Resilience and Points of Pain in Jamaican Migrant Families

    Thompson, Paul; Bauer, Elaine

    2008-01-01

    Support, resilience and points of pain in transnational Jamaican families. The article draws on life-story interviews with members of 45 families who have relatives in Jamaica, Great Britain and North America. The central idea is that the Jamaican transnational family form is centred on a type of family structure that is complex, informal, and pragmatic; this type of structure is said to have been characteristic of Jamaican kinship since...

  16. Multiple Positive Solutions of a Nonlinear Four-Point Singular Boundary Value Problem with a p-Laplacian Operator on Time Scales

    Shihuang Hong

    2009-01-01

    Full Text Available We present sufficient conditions for the existence of at least two or three positive solutions of a nonlinear four-point singular boundary value problem with a p-Laplacian dynamic equation on a time scale. Our results are obtained via some new multiple fixed point theorems.

  17. The test beamline of the European Spallation Source – Instrumentation development and wavelength frame multiplication

    Woracek, R.; Hofmann, T.; Bulat, M.; Sales, M.; Habicht, K.; Andersen, K.; Strobl, M.

    2016-01-01

    The European Spallation Source (ESS), scheduled to start operation in 2020, is aiming to deliver the most intense neutron beams for experimental research of any facility worldwide. Its long pulse time structure implies significant differences for instrumentation compared to other spallation sources which, in contrast, are all providing short neutron pulses. In order to enable the development of methods and technology adapted to this novel type of source well in advance of the first instruments being constructed at ESS, a test beamline (TBL) was designed and built at the BER II research reactor at Helmholtz-Zentrum Berlin (HZB). Operating the TBL shall provide valuable experience in order to allow for a smooth start of operations at ESS. The beamline is capable of mimicking the ESS pulse structure by a double chopper system and provides variable wavelength resolution as low as 0.5% over a wide wavelength band between 1.6 Å and 10 Å by a dedicated wavelength frame multiplication (WFM) chopper system. WFM is proposed for several ESS instruments to allow for flexible time-of-flight resolution. Hence, ESS will benefit from the TBL which offers unique possibilities for testing methods and components. This article describes the main capabilities of the instrument, its performance as experimentally verified during the commissioning, and its relevance to currently starting ESS instrumentation projects.

  19. Advection-diffusion model for the simulation of air pollution distribution from a point source emission

    Ulfah, S.; Awalludin, S. A.; Wahidin

    2018-01-01

    The advection-diffusion model is one of the mathematical models that can be used to understand the distribution of air pollutants in the atmosphere. This study uses a time-dependent 2D advection-diffusion model to simulate air pollution distribution, in order to find out whether the pollutants are more concentrated at ground level or near the source of emission under particular atmospheric conditions such as stable, unstable, and neutral conditions. Wind profile, eddy diffusivity, and temperature are considered as parameters in the model. The model is solved using an explicit finite difference method, and the results are visualized by a computer program developed using the Lazarus programming software. The results show that the influence of atmospheric conditions alone on the level of concentration of pollutants is not conclusive, as the parameters in the model have their own effect under each atmospheric condition.
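
    As an illustration of the kind of scheme described, an explicit finite-difference solution of a 2D advection-diffusion equation with a continuous point source, here is a minimal Python sketch (the paper's program was written with Lazarus/Pascal); the wind speed, diffusivities, grid spacing and source strength are arbitrary stand-ins, not the paper's values.

```python
# Explicit finite-difference sketch of the 2D advection-diffusion equation
#     dc/dt + u dc/dx = Kx d2c/dx2 + Kz d2c/dz2 + S
# with a continuous point source. All parameter values are illustrative.

import numpy as np

nx, nz = 200, 60
dx, dz = 10.0, 5.0                 # grid spacing (m)
u = 3.0                            # wind speed along x (m/s)
Kx, Kz = 5.0, 2.0                  # eddy diffusivities (m^2/s)
dt = 0.5                           # s, satisfies CFL and diffusion limits
src_i, src_k, src_q = 10, 20, 1.0  # source grid indices and strength per step

c = np.zeros((nx, nz))
for _ in range(2000):
    cn = c.copy()
    # upwind advection in x (u > 0) and central diffusion in x and z
    adv = -u * (cn[1:-1, 1:-1] - cn[:-2, 1:-1]) / dx
    difx = Kx * (cn[2:, 1:-1] - 2 * cn[1:-1, 1:-1] + cn[:-2, 1:-1]) / dx**2
    difz = Kz * (cn[1:-1, 2:] - 2 * cn[1:-1, 1:-1] + cn[1:-1, :-2]) / dz**2
    c[1:-1, 1:-1] = cn[1:-1, 1:-1] + dt * (adv + difx + difz)
    c[src_i, src_k] += src_q * dt           # continuous point source emission
    c[:, 0] = c[:, 1]                       # zero-flux ground boundary

print("maximum concentration:", c.max())
print("ground-level maximum:", c[:, 0].max())
```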

  20. Typhoid fever: A report on a point-source outbreak of 69 cases in Cape Town.

    Popkiss, M E

    1980-03-01

    In 1978, after a party in a Cape Town suburb attended by several hundred people, 69 persons were treated for typhoid fever. The precise source of the infection could not be traced, although it is reasonable to suppose that food eaten at the party had been contaminated by the main caterer. All 57 cultures of Salmonella typhi that were phage-typed were of phage type 46, including that obtained from the stool of the main caterer, who was asymptomatic. An epidemiological profile of the cases and an account of the management of the outbreak are given. There were no deaths and no patient became a carrier. Although the outbreak was contained, certain problems relating thereto are aired, including in particular the potential hazard of food-borne disease wherever housing and environmental standards are low.