WorldWideScience

Sample records for source distribution method

  1. A New Method for the 2D DOA Estimation of Coherently Distributed Sources

    Directory of Open Access Journals (Sweden)

    Liang Zhou

    2014-03-01

    Full Text Available The purpose of this paper is to develop a new technique for estimating the two-dimensional (2D) directions of arrival (DOAs) of coherently distributed (CD) sources, which can effectively estimate the central azimuth and central elevation of CD sources at a lower computational cost. Using a special L-shaped array, a new approach for the parametric estimation of CD sources is proposed. The proposed method is based on two rotational invariance relations under a small angular approximation, and estimates the two rotational matrices that express these relations using the propagator technique. The central DOA estimates are then obtained from the principal diagonal elements of the two rotational matrices. Simulation results indicate that the proposed method exhibits good performance under small angular spread and can be applied to multisource scenarios where different sources may have different angular distribution shapes. Without any peak-finding search or eigendecomposition of the high-dimensional sample covariance matrix, the proposed method has a significantly reduced computational cost compared with existing methods, and is thus well suited to real-time processing and engineering realization. In addition, our approach is a robust estimator that does not depend on the angular distribution shape of the CD sources.
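
    A minimal numerical sketch of the rotational-invariance idea the method builds on, assuming an ideal point source rather than the paper's CD-source model: the rotational operator between two displaced subarrays is estimated by least squares (a propagator-like step that needs no eigendecomposition of a large covariance matrix), and the central DOA is read from the phases of its diagonal elements. Array geometry, noise level, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 8              # sensors per subarray
d = 0.5            # subarray displacement in wavelengths
snapshots = 200
theta = np.deg2rad(20.0)                  # central azimuth to recover

# Two subarrays whose responses differ by the factor exp(-j*2*pi*d*sin(theta))
a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = lambda: 0.05 * (rng.standard_normal((M, snapshots))
                        + 1j * rng.standard_normal((M, snapshots)))
X1 = np.outer(a, s) + noise()
X2 = np.exp(-2j * np.pi * d * np.sin(theta)) * np.outer(a, s) + noise()

# Least-squares estimate of the rotational operator relating the subarrays
Phi, *_ = np.linalg.lstsq(X1.T, X2.T, rcond=None)
phase = np.angle(np.diag(Phi)).mean()     # read the diagonal elements
theta_hat = np.arcsin(-phase / (2 * np.pi * d))
print(f"estimated central DOA: {np.rad2deg(theta_hat):.2f} deg")
```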

  2. Development of unfolding method to obtain pin-wise source strength distribution from PWR spent fuel assembly measurement

    International Nuclear Information System (INIS)

    Sitompul, Yos Panagaman; Shin, Hee-Sung; Park, Se-Hwan; Oh, Jong Myeong; Seo, Hee; Kim, Ho Dong

    2013-01-01

    An unfolding method has been developed to obtain a pin-wise source strength distribution of a 14 × 14 pressurized water reactor (PWR) spent fuel assembly. Sixteen gamma dose rates measured at 16 control rod guide tubes of an assembly are unfolded to the 179 pin-wise source strengths of the assembly. The method iteratively calculates and optimizes five coefficients of a quadratic fitting function for the X-Y source strength distribution. The pin-wise source strengths are obtained at the sixth iteration, with a maximum difference between two sequential iterations of about 0.2%. The relative distribution of pin-wise source strength from the unfolding is checked by comparison with the design code (Westinghouse APA code). The result shows that the relative distributions from the unfolding and the design code are consistent within a 5% difference. The absolute value of the pin-wise source strength is also checked by reproducing the dose rates at the measurement points. The result shows that the pin-wise source strengths from the unfolding reproduce the dose rates within a 2% difference. (author)
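
    A hedged sketch of the unfolding idea, with an invented 1/r^2 response kernel and pin layout standing in for the real assembly model: pin-wise strengths are parameterized by a five-coefficient quadratic surface, and the coefficients are fitted so that kernel-weighted sums reproduce the 16 guide-tube dose rates. Because this simplified model is linear in the coefficients, a single least-squares solve replaces the paper's iterative optimization here.

```python
import numpy as np

rng = np.random.default_rng(1)

# 14 x 14 lattice; the real assembly excludes guide-tube positions (~179 pins),
# but for brevity all 196 positions are used here.
xy = np.array([(i, j) for i in range(14) for j in range(14)], float)
xy -= xy.mean(axis=0)                       # center the assembly at (0, 0)

detectors = xy[rng.choice(len(xy), 16, replace=False)]  # stand-in guide tubes

def basis(p):
    """Quadratic fitting function f(x, y) = c0 + c1*x + c2*y + c3*x^2 + c4*y^2."""
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x**2, y**2])

# Response kernel: dose at detector k from pin i (simplified 1/r^2, r in pitches)
r2 = ((detectors[:, None, :] - xy[None, :, :]) ** 2).sum(-1) + 0.25
K = 1.0 / r2                                # shape (16, 196)

c_true = np.array([1.0, 0.0, 0.0, -2e-3, -2e-3])   # slightly dished profile
dose = K @ (basis(xy) @ c_true)
dose *= 1 + 0.01 * rng.standard_normal(16)          # 1% measurement noise

A = K @ basis(xy)                           # (16, 5) design matrix
c_fit, *_ = np.linalg.lstsq(A, dose, rcond=None)

pins = basis(xy) @ c_fit                    # unfolded pin-wise strengths
print("reproduced dose rates within",
      f"{100 * np.max(np.abs(A @ c_fit / dose - 1)):.2f}% of measurements")
```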

  3. Source distribution dependent scatter correction for PVI

    International Nuclear Information System (INIS)

    Barney, J.S.; Harrop, R.; Dykstra, C.J.

    1993-01-01

    Source-distribution-dependent scatter correction methods which incorporate different amounts of information about the source position and material distribution have been developed and tested. The techniques use an image-to-projection integral transformation incorporating varying degrees of information on the distribution of scattering material, or convolution-subtraction methods, with some information about the scattering material included in one of the convolution methods. To test the techniques, the authors apply them to data generated by Monte Carlo simulations which use geometric shapes or a voxelized density map to model the scattering material. Source position and material distribution have been found to have some effect on scatter correction. An image-to-projection method which incorporates a density map produces accurate scatter correction but is computationally expensive. Simpler methods, both image-to-projection and convolution, can also provide effective scatter correction.
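
    A minimal one-dimensional sketch of the convolution-subtraction family of corrections mentioned above, under the simplifying assumption that scatter can be modeled as the observed projection convolved with a broad Gaussian kernel and scaled by a scatter fraction; the kernel width, fraction, and data are invented for illustration.

```python
import numpy as np

def scatter_correct(projection, kernel_sigma=6.0, scatter_fraction=0.3):
    """Estimate scatter as (projection * Gaussian) scaled by k, then subtract it."""
    x = np.arange(-3 * kernel_sigma, 3 * kernel_sigma + 1)
    kernel = np.exp(-0.5 * (x / kernel_sigma) ** 2)
    kernel /= kernel.sum()
    scatter = scatter_fraction * np.convolve(projection, kernel, mode="same")
    return projection - scatter

# A narrow "source" peak sitting on a crude scatter tail
proj = np.zeros(128)
proj[60:68] = 100.0
proj += 0.3 * np.convolve(proj, np.ones(31) / 31, mode="same")
corrected = scatter_correct(proj)
print(f"total counts before {proj.sum():.0f}, after {corrected.sum():.0f}")
```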

  4. Distributed coding of multiview sparse sources with joint recovery

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Deligiannis, Nikos; Forchhammer, Søren

    2016-01-01

    In support of applications involving multiview sources in distributed object recognition using lightweight cameras, we propose a new method for the distributed coding of sparse sources as visual descriptor histograms extracted from multiview images. The problem is challenging due to the computati...... transform (SIFT) descriptors extracted from multiview images shows that our method leads to bit-rate saving of up to 43% compared to the state-of-the-art distributed compressed sensing method with independent encoding of the sources....

  5. Family of Quantum Sources for Improving Near Field Accuracy in Transducer Modeling by the Distributed Point Source Method

    Directory of Open Access Journals (Sweden)

    Dominique Placko

    2016-10-01

    Full Text Available The distributed point source method, or DPSM, developed over the last decade has been used for solving various engineering problems, such as elastic and electromagnetic wave propagation, electrostatics, and fluid flow problems. Based on a semi-analytical formulation, the DPSM solution is generally built by superimposing point source solutions or Green's functions. However, the DPSM solution can also be obtained by superimposing elemental solutions of volume sources having a source density called the equivalent source density (ESD). In earlier works mostly point sources were used. In this paper the DPSM formulation is modified to introduce a new kind of ESD, replacing the classical single point source by a family of point sources that are referred to as quantum sources. The proposed formulation with these quantum sources does not change the dimension of the global matrix that must be inverted to solve the problem, compared with the classical point-source-based DPSM formulation. To assess the performance of this new formulation, the ultrasonic field generated by a circular planar transducer was compared with the classical DPSM formulation and the analytical solution. The results show a significant improvement in the near-field computation.
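
    A bare-bones illustration of the DPSM building block referred to above: point-source Green's functions are superimposed, with strengths (the equivalent source density) solved from boundary values on the transducer face. Geometry, wavelength, and the pressure-matching boundary condition are simplifying assumptions, not the paper's full formulation or its quantum-source extension.

```python
import numpy as np

k = 2 * np.pi / 1.5e-3                 # wavenumber for a 1.5 mm wavelength

def greens(src, obs):
    """Matrix of free-space Green's functions exp(ikr)/(4*pi*r)."""
    r = np.linalg.norm(obs[:, None, :] - src[None, :, :], axis=-1)
    return np.exp(1j * k * r) / (4 * np.pi * r)

# Point sources placed slightly behind a ring of target points on the face
n = 12
ang = 2 * np.pi * np.arange(n) / n
srcs = np.column_stack([4e-3 * np.cos(ang), 4e-3 * np.sin(ang),
                        -0.5e-3 * np.ones(n)])
face = np.column_stack([4e-3 * np.cos(ang + 0.1), 4e-3 * np.sin(ang + 0.1),
                        np.zeros(n)])

v0 = np.ones(n)                        # prescribed (uniform) boundary values
A = greens(srcs, face)                 # boundary condition matrix
strengths = np.linalg.solve(A, v0)     # equivalent source strengths

obs = np.column_stack([np.zeros(50), np.zeros(50),
                       np.linspace(1e-3, 50e-3, 50)])
field = greens(srcs, obs) @ strengths  # superposed field along the axis
print("on-axis |p| (first samples):", np.round(np.abs(field[:5]), 3))
```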

  6. Radiation Source Mapping with Bayesian Inverse Methods

    Science.gov (United States)

    Hykes, Joshua Michael

    We present a method to map the spectral and spatial distributions of radioactive sources using a small number of detectors. Locating and identifying radioactive materials is important for border monitoring, accounting for special nuclear material in processing facilities, and clean-up operations. Most methods for analyzing these problems make restrictive assumptions about the distribution of the source. In contrast, the source-mapping method presented here allows an arbitrary three-dimensional distribution in space and a flexible group and gamma-peak distribution in energy. To apply the method, the system's geometry and materials must be known. A probabilistic Bayesian approach is used to solve the resulting inverse problem (IP), since the system of equations is ill-posed. The probabilistic approach also provides estimates of the confidence in the final source-map prediction. A set of adjoint-flux discrete-ordinates solutions, obtained in this work with the Denovo code, is required to efficiently compute detector responses from a candidate source distribution. These adjoint fluxes are then used to form the linear model that maps the state space to the response space. The test for the method is simultaneously locating a set of 137Cs and 60Co gamma sources in an empty room. This test problem is solved using synthetic measurements generated by a Monte Carlo (MCNP) model and using experimental measurements that we collected for this purpose. With the synthetic data, the predicted source distributions identified the locations of the sources to within tens of centimeters, in a room with an approximately four-by-four-meter floor plan. Most of the predicted source intensities were within a factor of ten of their true values. The chi-square value of the predicted source was within a factor of five of the expected value based on the number of measurements employed. With a favorable uniform initial guess, the predicted source map was nearly identical to the true distribution.
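
    A hedged sketch of the linear Bayesian step described above: a response matrix A maps a discretized source map to detector responses (in the paper A is built from Denovo adjoint fluxes; here it is random), and a Gaussian prior regularizes the ill-posed inversion, yielding a posterior mean and a covariance that quantifies confidence.

```python
import numpy as np

rng = np.random.default_rng(2)

nvox, ndet = 100, 12                   # many voxels, few detectors: ill-posed
A = rng.random((ndet, nvox)) / nvox    # stand-in for adjoint-based responses
s_true = np.zeros(nvox)
s_true[[23, 71]] = 50.0                # two point-like sources
d = A @ s_true + 0.01 * rng.standard_normal(ndet)

# Gaussian prior s ~ N(0, tau^2 I) and noise ~ N(0, sigma^2 I) give the
# posterior mean (A^T A/sigma^2 + I/tau^2)^-1 A^T d/sigma^2 and its covariance.
sigma, tau = 0.01, 30.0
P = A.T @ A / sigma**2 + np.eye(nvox) / tau**2
s_map = np.linalg.solve(P, A.T @ d / sigma**2)
cov = np.linalg.inv(P)                 # posterior covariance: confidence estimate

print("posterior mean at true source voxels:", np.round(s_map[[23, 71]], 1))
print("largest posterior mean elsewhere:",
      float(np.round(np.delete(s_map, [23, 71]).max(), 1)))
print("posterior std at true voxels:",
      np.round(np.sqrt(np.diag(cov))[[23, 71]], 1))
```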

  7. Depth to the bottom of magnetic sources (DBMS) from aeromagnetic data of Central India using modified centroid method for fractal distribution of sources

    Science.gov (United States)

    Bansal, A. R.; Anand, S. P.; Rajaram, Mita; Rao, V. K.; Dimri, V. P.

    2013-09-01

    The depth to the bottom of the magnetic sources (DBMS) has been estimated from aeromagnetic data of Central India. The conventional centroid method of DBMS estimation assumes a random, uniform, uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on a scaling (fractal) distribution has been proposed. Shallower values of the DBMS are found for the south-western region. The DBMS values are as low as 22 km in the south-west Deccan-trap-covered regions and as deep as 43 km in the Chhattisgarh Basin. In most places the DBMS is much shallower than the Moho depth found earlier from seismic studies, and may represent thermal/compositional/petrological boundaries. The large variation in the DBMS indicates the complex nature of the Indian crust.
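
    A compact sketch of the centroid method on a synthetic radial power spectrum, with the scaling-distribution idea reduced to an optional k^beta correction of the spectrum; the spectrum, wavenumber ranges, and beta are illustrative assumptions, and real use starts from a gridded aeromagnetic anomaly map.

```python
import numpy as np

def dbms_from_spectrum(k, P, beta=0.0, lo=slice(0, 6), hi=slice(50, 80)):
    """Return (Z_top, Z_centroid, Z_bottom) in km from a radial power spectrum.

    k is in rad/km; beta is the fractal scaling exponent (beta = 0 recovers
    the conventional random-uniform-source centroid method)."""
    Pc = P * k**beta                        # de-fractal correction of the spectrum
    # centroid depth from low wavenumbers: ln(sqrt(P)/k) ~ const - k*Z0
    z0 = -np.polyfit(k[lo], np.log(np.sqrt(Pc[lo]) / k[lo]), 1)[0]
    # top depth from higher wavenumbers: ln(sqrt(P)) ~ const - k*Zt
    zt = -np.polyfit(k[hi], np.log(np.sqrt(Pc[hi])), 1)[0]
    return zt, z0, 2.0 * z0 - zt            # Z_bottom = 2*Z_centroid - Z_top

# Synthetic spectrum of a magnetic layer between 5 km and 40 km depth
k = np.linspace(0.002, 0.25, 80)            # radial wavenumber (rad/km)
P = (np.exp(-k * 5.0) - np.exp(-k * 40.0)) ** 2

zt, z0, zb = dbms_from_spectrum(k, P)       # zb comes out slightly shallow,
print(f"Z_top ~ {zt:.1f} km, Z_centroid ~ {z0:.1f} km, DBMS ~ {zb:.1f} km")
# because the low-wavenumber fit is only asymptotically linear
```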

  8. 99Tc in the environment. Sources, distribution and methods

    International Nuclear Information System (INIS)

    Garcia-Leon, Manuel

    2005-01-01

    99Tc is a β-emitter (Emax = 294 keV) with a very long half-life (T1/2 = 2.11 × 10^5 y). It is mainly produced in the fission of 235U and 239Pu, with a yield of about 6%. This yield, together with its long half-life, makes it a significant nuclide throughout the nuclear fuel cycle, from which it can be introduced into the environment at different rates depending on the cycle step. A gross estimate shows that, adding all the possible sources, at least 2000 TBq had been released into the environment up to 2000, and that by the mid-1990s some 64000 TBq had been produced worldwide. Nuclear explosions have released some 160 TBq into the environment. In this work, the environmental distribution of 99Tc as well as the methods for its determination are discussed. Emphasis is put on the environmental relevance of 99Tc, mainly with regard to the future committed radiation dose received by the population and to the problem of nuclear waste management. Its determination at environmental levels is a challenging task; special mention is therefore made of the mass spectrometric methods for its measurement. (author)

  9. Microseism Source Distribution Observed from Ireland

    Science.gov (United States)

    Craig, David; Bean, Chris; Donne, Sarah; Le Pape, Florian; Möllhoff, Martin

    2017-04-01

    Ocean generated microseisms (OGM) are recorded globally, with similar spectral features observed everywhere. The generation mechanism for OGM and their subsequent propagation to continental regions have led to their use as a proxy for sea-state characteristics. Many modern seismological methods also make use of OGM signals; for example, the Earth's crust and upper mantle can be imaged using "ambient noise tomography". For many of these methods an understanding of the source distribution is necessary to properly interpret the results. OGM recorded on near-coastal seismometers are known to be related to the local ocean wavefield, but contributions from more distant sources may also be present. This is significant for studies attempting to use OGM as a proxy for sea-state characteristics such as significant wave height. Ireland has a highly energetic ocean wave climate and is close to one of the major source regions for OGM, providing an ideal location to study an OGM source region in detail. Here we present the source distribution observed from seismic arrays in Ireland. The region is shown to consist of several individual source areas. These source areas show some frequency dependence and generally occur at or near the continental shelf edge. We also show some preliminary results from an off-shore OBS network to the north-west of Ireland. The OBS network includes instruments on either side of the shelf and should help interpret the array observations.

  10. Precise Mapping Of A Spatially Distributed Radioactive Source

    International Nuclear Information System (INIS)

    Beck, A.; Caras, I.; Piestum, S.; Sheli, E.; Melamud, Y.; Berant, S.; Kadmon, Y.; Tirosh, D.

    1999-01-01

    Spatial distribution measurement of radioactive sources is a routine task in the nuclear industry. The precision of each measurement depends upon the specific application; however, the technological edge of this precision is driven by the production of standards for calibration. Within this definition, the most demanding field is the calibration of standards for medical equipment. In this paper, a semi-empirical method for controlling the measurement precision is demonstrated, using a relatively simple laboratory apparatus. The spatial distribution of the source radioactivity is measured as part of the quality assurance tests during the production of flood sources. These sources are further used in the calibration of medical gamma cameras. A typical flood source is a 40 × 60 cm² plate with an activity of 10 mCi (or more) of the 57Co isotope. The measurement set-up is based on a single NaI(Tl) scintillator with a photomultiplier tube, moving on an X-Y table which scans the flood source. In this application the source is required to have a uniform activity distribution over its surface.

  11. Distributed optimization system and method

    Science.gov (United States)

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agents can be one or more physical agents, such as robots, or software agents searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time-dependent sources, time-independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, or a multi-processor computer.

  12. Galactic distribution of X-ray burst sources

    International Nuclear Information System (INIS)

    Lewin, W.H.G.; Hoffman, J.A.; Doty, J.; Clark, G.W.; Swank, J.H.; Becker, R.H.; Pravdo, S.H.; Serlemitsos, P.J.

    1977-01-01

    It is stated that 18 X-ray burst sources have been observed to date, applying the following definition for these bursts: rise times of less than a few seconds, durations of seconds to minutes, and recurrence in some regular pattern. If single burst events that meet the criteria of rise time and duration, but not recurrence, are included, an additional seven sources can be added. A sky map is shown indicating their positions. The sources are spread along the galactic equator and cluster near low galactic longitudes, and their distribution differs from that of the observed globular clusters. Observations based on the SAS-3 X-ray observatory studies and the Goddard X-ray Spectroscopy Experiment on OSO-9 are described. The distribution of the sources is examined, and the effect of uneven sky exposure on the observed distribution is evaluated. It has been suggested that the bursts are perhaps produced by remnants of disrupted globular clusters, specifically supermassive black holes. This would imply the existence of a new class of unknown objects, and at present is merely an ad hoc way of relating the burst sources to globular clusters. (U.K.)

  13. Source Distribution Method for Unsteady One-Dimensional Flows With Small Mass, Momentum, and Heat Addition and Small Area Variation

    Science.gov (United States)

    Mirels, Harold

    1959-01-01

    A source distribution method is presented for obtaining flow perturbations due to small unsteady area variations, mass, momentum, and heat additions in a basic uniform (or piecewise uniform) one-dimensional flow. First, the perturbations due to an elemental area variation, mass, momentum, and heat addition are found. The general solution is then represented by a spatial and temporal distribution of these elemental (source) solutions. Emphasis is placed on discussing the physical nature of the flow phenomena. The method is illustrated by several examples. These include the determination of perturbations in basic flows consisting of (1) a shock propagating through a nonuniform tube, (2) a constant-velocity piston driving a shock, (3) ideal shock-tube flows, and (4) deflagrations initiated at a closed end. The method is particularly applicable for finding the perturbations due to relatively thin wall boundary layers.

  14. Reliable method for fission source convergence of Monte Carlo criticality calculation with Wielandt's method

    International Nuclear Information System (INIS)

    Yamamoto, Toshihiro; Miyoshi, Yoshinori

    2004-01-01

    A new algorithm for implementing Wielandt's method, one of the acceleration techniques for deterministic source-iteration methods, in Monte Carlo criticality calculations is developed, and the algorithm has been successfully implemented in the MCNP code. In this algorithm, some of the fission neutrons emitted during the random walk process are tracked within the current cycle, so that the fission source distribution used in the next cycle spreads more widely. Applying this method intensifies the neutron interaction effect even in a loosely coupled array, where conventional Monte Carlo criticality methods have difficulties, and a converged fission source distribution can be obtained in fewer cycles. The computing time spent per cycle, however, increases because fission neutrons are tracked within the current cycle, which eventually results in an increase of the total computing time up to convergence. In addition, statistical fluctuations of the fission source distribution within a cycle are worsened by applying Wielandt's method to Monte Carlo criticality calculations. However, since fission source convergence is attained with fewer source iterations, a reliable determination of convergence can easily be made even in a system with slow convergence. This acceleration method is expected to contribute to the prevention of incorrect Monte Carlo criticality calculations. (author)
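
    A toy illustration of why the Wielandt shift accelerates source convergence, using a small fission matrix in place of a Monte Carlo transport operator: power iteration on F converges at the dominance ratio k2/k1, while iterating on the shifted operator (I - F/ke)^-1 F, with ke slightly above the true k, shrinks that ratio. The matrix and shift are invented; in MCNP the extra per-cycle cost appears as in-cycle fission-neutron tracking.

```python
import numpy as np

F = np.array([[0.9, 0.4, 0.0],
              [0.4, 0.9, 0.4],
              [0.0, 0.4, 0.9]])          # loosely coupled 3-region "core"

def iterations_to_converge(A, tol=1e-8):
    """Power-iterate A on a flat initial source until the shape stops changing."""
    s = np.ones(A.shape[0])
    for n in range(1, 10_000):
        s_new = A @ s
        s_new /= np.linalg.norm(s_new)
        if np.linalg.norm(s_new - s) < tol:
            return n
        s = s_new
    return None

k1 = np.max(np.abs(np.linalg.eigvals(F)))
ke = 1.05 * k1                            # shift estimate slightly above true k
W = np.linalg.solve(np.eye(3) - F / ke, F)  # Wielandt-shifted operator

print("plain power iteration :", iterations_to_converge(F), "cycles")
print("Wielandt-shifted      :", iterations_to_converge(W), "cycles")
```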

  15. The dose distribution surrounding 192Ir and 137Cs seed sources

    International Nuclear Information System (INIS)

    Thomason, C.; Mackie, T.R.; Wisconsin Univ., Madison, WI; Lindstrom, M.J.; Higgins, P.D.

    1991-01-01

    Dose distributions in water were measured using LiF thermoluminescent dosemeters for 192Ir seed sources with stainless steel and with platinum encapsulation, to determine the effect of the differing encapsulation. The dose distribution was also measured for a 137Cs seed source. In addition, dose distributions surrounding these sources were calculated using the EGS4 Monte Carlo code and compared to the measured data. The two methods are in good agreement for all three sources. Tables are given describing the dose distribution surrounding each source as a function of distance and angle. Specific dose constants were also determined from the results of the Monte Carlo simulation. This work confirms the use of the EGS4 Monte Carlo code for modelling 192Ir and 137Cs seed sources to obtain brachytherapy dose distributions. (author)

  16. An Active Power Sharing Method among Distributed Energy Sources in an Islanded Series Micro-Grid

    Directory of Open Access Journals (Sweden)

    Wei-Man Yang

    2014-11-01

    Full Text Available Active power-sharing among distributed energy sources (DESs) is not only an important way to realize optimal operation of micro-grids, but also the key to maintaining stability during islanded operation. Due to the unique configuration of series micro-grids (SMGs), the power-sharing methods adopted in ordinary AC, DC, and hybrid AC/DC systems cannot be directly applied to SMGs. Power-sharing in an SMG with multiple DESs involves two aspects. On the one hand, capacitor voltage stability, based on an energy storage system (ESS) in the DC link, must be ensured. This is essentially a problem of power allocation between the generating unit and the ESS in the DES; a similar, extensively researched problem is off-grid distributed power generation, for which good solutions exist. On the other hand, power-sharing among DESs should be considered to optimize the operation of the series micro-grid. In this paper, a novel method combining master control with auxiliary control is proposed. The master action of a quasi-proportional-resonant controller is responsible for the stability of the islanded SMG; the auxiliary action, based on state of charge (SOC), realizes coordinated allocation of load power among the sources. At the same time, it is important to ensure that the auxiliary control does not influence the master action.

  17. Over-Distribution in Source Memory

    Science.gov (United States)

    Brainerd, C. J.; Reyna, V. F.; Holliday, R. E.; Nakamura, K.

    2012-01-01

    Semantic false memories are confounded with a second type of error, over-distribution, in which items are attributed to contradictory episodic states. Over-distribution errors have proved to be more common than false memories when the two are disentangled. We investigated whether over-distribution is prevalent in another classic false memory paradigm: source monitoring. It is. Conventional false memory responses (source misattributions) were predominantly over-distribution errors, but unlike semantic false memory, over-distribution also accounted for more than half of true memory responses (correct source attributions). Experimental control of over-distribution was achieved via a series of manipulations that affected either recollection of contextual details or item memory (concreteness, frequency, list order, number of presentation contexts, and individual differences in verbatim memory). A theoretical model (conjoint process dissociation) was used to analyze the data; it predicts that (a) over-distribution is directly proportional to item memory but inversely proportional to recollection and (b) item memory is not a necessary precondition for recollection of contextual details. The results were consistent with both predictions. PMID:21942494

  18. Calculation of dose distribution for 252Cf fission neutron source in tissue equivalent phantoms using Monte Carlo method

    International Nuclear Information System (INIS)

    Ji Gang; Guo Yong; Luo Yisheng; Zhang Wenzhong

    2001-01-01

    Objective: To provide useful parameters for neutron radiotherapy, the authors present the results of a Monte Carlo simulation study investigating the dosimetric characteristics of linear 252Cf fission neutron sources. Methods: A 252Cf fission source and a tissue-equivalent phantom were modeled. The neutron and gamma radiation doses were calculated using a Monte Carlo code. Results: The neutron and gamma doses at several positions for 252Cf in phantoms made of materials equivalent to water, blood, muscle, skin, bone, and lung were calculated. Conclusion: The Monte Carlo results were compared with measured data and references. According to the calculation, using a water phantom to simulate local tissues such as muscle, blood, and skin is reasonable for the calculation and measurement of dose distributions for 252Cf.

  19. The dose distribution surrounding 192Ir and 137Cs seed sources

    Energy Technology Data Exchange (ETDEWEB)

    Thomason, C [Wisconsin Univ., Madison, WI (USA). Dept. of Medical Physics]; Mackie, T R [Wisconsin Univ., Madison, WI (USA). Dept. of Medical Physics; Wisconsin Univ., Madison, WI (USA). Dept. of Human Oncology]; Lindstrom, M J [Wisconsin Univ., Madison, WI (USA). Biostatistics Center]; Higgins, P D [Cleveland Clinic Foundation, OH (USA). Dept. of Radiation Oncology]

    1991-04-01

    Dose distributions in water were measured using LiF thermoluminescent dosemeters for 192Ir seed sources with stainless steel and with platinum encapsulation, to determine the effect of the differing encapsulation. The dose distribution was also measured for a 137Cs seed source. In addition, dose distributions surrounding these sources were calculated using the EGS4 Monte Carlo code and compared to the measured data. The two methods are in good agreement for all three sources. Tables are given describing the dose distribution surrounding each source as a function of distance and angle. Specific dose constants were also determined from the results of the Monte Carlo simulation. This work confirms the use of the EGS4 Monte Carlo code for modelling 192Ir and 137Cs seed sources to obtain brachytherapy dose distributions. (author).

  20. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its...... strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...... correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking...

  1. Analysis of ultrasonically rotating droplet using moving particle semi-implicit and distributed point source methods

    Science.gov (United States)

    Wada, Yuji; Yuge, Kohei; Tanaka, Hiroki; Nakamura, Kentaro

    2016-07-01

    Numerical analysis of the rotation of an ultrasonically levitated droplet with a free-surface boundary is discussed. The ultrasonically levitated droplet is often reported to rotate owing to the surface-tangential component of the acoustic radiation force. To observe the torque from an acoustic wave and clarify the mechanism underlying the phenomenon, it is effective to take advantage of numerical simulation using the distributed point source method (DPSM) and the moving particle semi-implicit (MPS) method, both of which require no calculation grid or mesh. In this paper, the numerical treatment of the viscoacoustic torque, which emerges from the viscous boundary layer and governs the acoustic droplet rotation, is discussed. The Reynolds stress traction force is calculated from the DPSM result using the idea of an effective normal particle velocity through the boundary layer and is input to the MPS surface particles. A droplet levitated in an acoustic chamber is simulated using the proposed calculation method. The droplet is vertically supported by a plane standing wave from an ultrasonic driver and subjected to a rotating sound field excited by two acoustic sources on the side wall with different phases. The rotation of the droplet is successfully reproduced numerically, and its acceleration is discussed and compared with values in the literature.

  2. Blind Separation of Nonstationary Sources Based on Spatial Time-Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Zhang Yimin

    2006-01-01

    Full Text Available Blind source separation (BSS) based on spatial time-frequency distributions (STFDs) provides improved performance over blind source separation methods based on second-order statistics when dealing with signals that are localized in the time-frequency (t-f) domain. In this paper, we propose the use of STFD matrices for both whitening and recovery of the mixing matrix, two stages commonly required in many BSS methods, to make BSS performance robust to noise. In addition, a simple method is proposed to select the auto- and cross-term regions of the time-frequency distribution (TFD). To further improve BSS performance, t-f grouping techniques are introduced to reduce the number of signals under consideration, and to allow the receiver array to separate more sources than the number of array sensors, provided that the sources have disjoint t-f signatures. With the use of one or more of the techniques proposed in this paper, improved performance in the blind separation of nonstationary signals can be achieved.

  3. An efficient central DOA tracking algorithm for multiple incoherently distributed sources

    Science.gov (United States)

    Hassen, Sonia Ben; Samet, Abdelaziz

    2015-12-01

    In this paper, we develop a new tracking method for the direction-of-arrival (DOA) parameters of multiple incoherently distributed (ID) sources. The new approach is based on a simple covariance fitting optimization technique exploiting the central and non-central moments of the source angular power densities to estimate the central DOAs. The current estimates are treated as measurements provided to a Kalman filter that models the dynamics of the directional changes of the moving sources. The covariance-fitting-based algorithm and Kalman filtering theory are then combined to formulate an adaptive tracking algorithm. Our algorithm is compared to the fast approximated power iteration-total least squares-estimation of signal parameters via rotational invariance techniques (FAPI-TLS-ESPRIT) algorithm, which uses the TLS-ESPRIT method and subspace updating via the FAPI algorithm. It will be shown that the proposed algorithm offers excellent DOA tracking performance and outperforms the FAPI-TLS-ESPRIT method, especially at low signal-to-noise ratio (SNR) values. Moreover, the performance of both methods improves as the SNR increases, and this improvement is more prominent for the FAPI-TLS-ESPRIT method. However, their performance degrades when the number of sources increases. It will also be shown that our method depends on the form of the angular distribution function when tracking the central DOAs. Finally, it will be shown that the more widely the sources are spaced, the more accurately the proposed method can track the DOAs.
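
    A minimal sketch of the tracking stage only: noisy per-snapshot central-DOA estimates (standing in for the covariance-fitting measurements of the paper) are smoothed by a constant-velocity Kalman filter. The motion model and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
T, dt = 100, 1.0
true_doa = 10.0 + 0.25 * np.arange(T)            # source drifting 0.25 deg/step
meas = true_doa + 1.5 * rng.standard_normal(T)    # noisy central-DOA estimates

F = np.array([[1.0, dt], [0.0, 1.0]])             # state: [doa, doa_rate]
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)                              # process noise
R = np.array([[1.5**2]])                          # measurement noise

x = np.array([meas[0], 0.0])
P = 10.0 * np.eye(2)
track = []
for z in meas:
    x = F @ x                                     # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                           # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    track.append(x[0])

rmse_raw = np.sqrt(np.mean((meas - true_doa) ** 2))
rmse_kf = np.sqrt(np.mean((np.array(track) - true_doa) ** 2))
print(f"raw RMSE {rmse_raw:.2f} deg -> tracked RMSE {rmse_kf:.2f} deg")
```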

  4. A calculation of dose distribution around 32P spherical sources and its clinical application

    International Nuclear Information System (INIS)

    Ohara, Ken; Tanaka, Yoshiaki; Nishizawa, Kunihide; Maekoshi, Hisashi

    1977-01-01

    In order to avoid radiation hazards in the radiation therapy of craniopharyngioma using 32P, it is helpful to have a detailed dose distribution in the tissue in the vicinity of the source. Valley's method is used for the calculations. A problem with the method is pointed out, and the method itself is refined numerically: the region of xi where an approximate polynomial is available is extended, and the optimum degree of the polynomial is determined to be 9. The usefulness of the polynomial is examined by comparison with Berger's scaled absorbed dose distribution F(xi) and Valley's result. The dose and dose-rate distributions around uniformly distributed spherical sources are computed by termwise integration of our polynomial of degree 9 over the range of xi from 0 to 1.7. Dose distributions from the spherical surface out to a point 0.5 cm outside the source are given for source radii of 0.5, 0.6, 0.7, 1.0, and 1.5 cm. The therapeutic dose for a craniopharyngioma with a spherically shaped cyst, and the absorbed dose to the normal tissue (oculomotor nerve), are obtained from these dose-rate distributions. (auth.)

  5. GIS Based Distributed Runoff Predictions in Variable Source Area Watersheds Employing the SCS-Curve Number

    Science.gov (United States)

    Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.

    2003-04-01

    Because the traditional Soil Conservation Service curve number (SCS-CN) approach continues to be used ubiquitously in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed, within an integrated GIS modeling environment, a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and the spatial extent of saturated areas, and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated in the existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provides a physically based method that gives realistic results for watersheds with VSA hydrology.
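
    A hedged sketch of the distributed CN-VSA idea under textbook SCS-CN assumptions: the CN equation gives the runoff depth and (via dQ/dPe) the runoff-contributing fraction of the watershed, and a topographic wetness index then decides which cells are the source areas, the wettest cells saturating first. The index values here are randomly generated stand-ins for DEM-derived ln(a/tan(b)) values.

```python
import numpy as np

def scs_runoff(P, CN, ia_ratio=0.2):
    """SCS-CN runoff depth Q (mm) and runoff-contributing area fraction Af."""
    S = 25400.0 / CN - 254.0          # potential retention (mm)
    Pe = P - ia_ratio * S             # effective rain after initial abstraction
    if Pe <= 0:
        return 0.0, 0.0
    Q = Pe**2 / (Pe + S)
    Af = 1.0 - (S / (Pe + S))**2      # dQ/dPe: fraction of area making runoff
    return Q, Af

P, CN = 50.0, 75                      # storm depth (mm) and curve number
Q, Af = scs_runoff(P, CN)

# Distribute the contributing area: highest-wetness-index cells saturate first
rng = np.random.default_rng(4)
wetness_index = rng.gamma(2.0, 2.0, size=10_000)   # stand-in index values
threshold = np.quantile(wetness_index, 1.0 - Af)
runoff_source = wetness_index >= threshold

print(f"Q = {Q:.1f} mm, contributing-area fraction = {Af:.2f}")
print(f"cells flagged as runoff source areas: {runoff_source.sum()} of 10000")
```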

  6. Calculation of the secondary gamma radiation by the Monte Carlo method at displaced sampling from distributed sources

    International Nuclear Information System (INIS)

    Petrov, Eh.E.; Fadeev, I.A.

    1979-01-01

    The possibility of using displaced sampling from a bulk gamma source in calculating secondary gamma fields by the Monte Carlo method is discussed. The proposed algorithm is based on the concept of conjugate functions together with the dispersion minimization technique. For the sake of simplicity a plane source is considered. The algorithm has been implemented on the M-220 computer. The differential gamma current and flux spectra in 21-cm-thick lead have been calculated. The source of secondary gamma quanta was assumed to be distributed, constant, and isotropic, emitting 4 MeV gamma quanta at a rate of 10^9 quanta/(cm^3·s). The calculations demonstrate that the last 7 cm of lead are responsible for the whole gamma spectral pattern. The spectra practically coincide with those calculated by the ROZ computer code. Thus the proposed algorithm can be effectively used in calculations of secondary gamma radiation transport, and reduces the computation time by a factor of 2-4.

  7. Application of the Monte Carlo method in calculation of energy-time distribution from a pulsed photon source in homogeneous air environment

    International Nuclear Information System (INIS)

    Ilic, R.D.; Vojvodic, V.I.; Orlic, M.P.

    1981-01-01

    The stochastic nature of photon interactions with matter and the characteristics of photon transport through real materials are very well suited to applications of the Monte Carlo method in calculations of the energy-space distribution of photons. Starting from the general principles of the Monte Carlo method, a physical-mathematical model of photon transport from a pulsed source is given for a homogeneous air environment. Based on that model, a computer program is written and applied to calculate the delay spectra of scattered photons and the changes in the photon energy spectrum. The results obtained provide an estimate of the time-space function of the electromagnetic field generated by photons from a pulsed source. (author)

  8. Establishment of a Practical Approach for Characterizing the Source of Particulates in Water Distribution Systems

    Directory of Open Access Journals (Sweden)

    Seon-Ha Chae

    2016-02-01

    Full Text Available Water quality complaints related to particulate matter and discolored water can be troublesome for water utilities in terms of follow-up investigations and implementation of appropriate actions because particulate matter can enter from a variety of sources; moreover, physicochemical processes can affect the water quality during the purification and transportation processes. The origin of particulates can be attributed to sources such as background organic/inorganic materials from water sources, water treatment plants, water distribution pipelines that have deteriorated, and rehabilitation activities in the water distribution systems. In this study, a practical method is proposed for tracing particulate sources. The method entails collecting information related to hydraulic, water quality, and structural conditions, employing a network flow-path model, and establishing a database of physicochemical properties for tubercles and slimes. The proposed method was implemented within two city water distribution systems that were located in Korea. These applications were conducted to demonstrate the practical applicability of the method for providing solutions to customer complaints. The results of the field studies indicated that the proposed method would be feasible for investigating the sources of particulates and for preparing appropriate action plans for complaints related to particulate matter.

  9. Fiber optic distributed temperature sensing for fire source localization

    Science.gov (United States)

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Sigrist, Markus W.; Li, Jun; Dong, Fengzhong

    2017-08-01

    A method for localizing a fire source based on a distributed temperature sensor system is proposed. Two sections of optical fiber were placed orthogonally to each other as the sensing elements. A tray of alcohol was lit to act as a fire outbreak in a cabinet with an uneven ceiling, simulating a real fire scene. Experiments were carried out to demonstrate the feasibility of the method. Rather large fluctuations and systematic errors in predicting the exact room coordinates of the fire source, caused by the uneven ceiling, were observed. Two mathematical methods (smoothing the recorded temperature curves and finding the temperature peak positions) to improve the prediction accuracy are presented, and the experimental results indicate that the fluctuation ranges and systematic errors are significantly reduced. The proposed scheme is simple and appears reliable enough to locate a fire source in large spaces.
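
    A minimal sketch of the two-fiber localization with both accuracy fixes reduced to their simplest form: each synthetic trace is smoothed by a moving average, and the peak position on the x-fiber and y-fiber gives the (x, y) coordinates of the source. The trace shapes and all values are invented; a real DTS returns temperature versus distance along the fiber.

```python
import numpy as np

rng = np.random.default_rng(5)

pos = np.linspace(0.0, 20.0, 400)                 # position along fiber (m)

def trace(center):
    """Synthetic DTS trace: ambient level + hot-spot bump + noise."""
    return (25.0 + 30.0 * np.exp(-((pos - center) / 1.2) ** 2)
            + 0.8 * rng.standard_normal(pos.size))

def locate(t, win=15):
    """Smooth with a moving average, then return the temperature peak position."""
    smooth = np.convolve(t, np.ones(win) / win, mode="same")
    return pos[np.argmax(smooth)]

x_fiber, y_fiber = trace(center=7.5), trace(center=12.0)
print(f"fire located near x = {locate(x_fiber):.2f} m, "
      f"y = {locate(y_fiber):.2f} m")
```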

  10. Distributed chemical computing using ChemStar: an open source java remote method invocation architecture applied to large scale molecular data from PubChem.

    Science.gov (United States)

    Karthikeyan, M; Krishnan, S; Pandey, Anil Kumar; Bender, Andreas; Tropsha, Alexander

    2008-04-01

    We present the application of a Java remote method invocation (RMI) based open source architecture to distributed chemical computing. This architecture was previously employed for distributed data harvesting of chemical information from the Internet via the Google application programming interface (API; ChemXtreme). Due to its open source character and its flexibility, the underlying server/client framework can be quickly adapted to virtually any computational task that can be parallelized. Here, we present the server/client communication framework as well as an application to distributed computing of chemical properties on a large scale (currently the size of PubChem; about 18 million compounds), using both the Marvin toolkit and the open source JOELib package. As an application, the agreement of log P and TPSA values between the packages was assessed for this set of compounds. Outliers were found to be mostly non-druglike compounds, and differences could usually be explained by differences in the underlying algorithms. ChemStar is the first open source distributed chemical computing environment built on Java RMI, which is also easily adaptable to user demands due to its "plug-in architecture". The complete source codes as well as calculated properties, along with links to PubChem resources, are available on the Internet via a graphical user interface at http://moltable.ncl.res.in/chemstar/.

  11. Using a topographic index to distribute variable source area runoff predicted with the SCS curve-number equation

    Science.gov (United States)

    Lyon, Steve W.; Walter, M. Todd; Gérard-Marchant, Pierre; Steenhuis, Tammo S.

    2004-10-01

    Because the traditional Soil Conservation Service curve-number (SCS-CN) approach continues to be used ubiquitously in water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed and tested a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Predicting the location of source areas is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point-source pollution. The method presented here used the traditional SCS-CN approach to predict runoff volume and spatial extent of saturated areas and a topographic index, like that used in TOPMODEL, to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was applied to two subwatersheds of the Delaware basin in the Catskill Mountains region of New York State and one watershed in south-eastern Australia to produce runoff-probability maps. Observed saturated area locations in the watersheds agreed with the distributed CN-VSA method. Results showed good agreement with those obtained from the previously validated soil moisture routing (SMR) model. When compared with the traditional SCS-CN method, the distributed CN-VSA method predicted a similar total volume of runoff, but vastly different locations of runoff generation. Thus, the distributed CN-VSA approach provides a physically based method that is simple enough to be incorporated into water quality models, and other tools that currently use the traditional SCS-CN method, while still adhering to the principles of VSA hydrology.

  12. Dynamic analysis of ultrasonically levitated droplet with moving particle semi-implicit and distributed point source method

    Science.gov (United States)

    Wada, Yuji; Yuge, Kohei; Nakamura, Ryohei; Tanaka, Hiroki; Nakamura, Kentaro

    2015-07-01

    Numerical analysis of an ultrasonically levitated droplet with a free surface boundary is discussed. The droplet is known to change its shape from sphere to spheroid when it is suspended in a standing wave owing to the acoustic radiation force. However, few studies on numerical simulation have been reported in association with this phenomenon including fluid dynamics inside the droplet. In this paper, coupled analysis using the distributed point source method (DPSM) and the moving particle semi-implicit (MPS) method, both of which do not require grids or meshes to handle the moving boundary with ease, is suggested. A droplet levitated in a plane standing wave field between a piston-vibrating ultrasonic transducer and a reflector is simulated with the DPSM-MPS coupled method. The dynamic change in the spheroidal shape of the droplet is successfully reproduced numerically, and the gravitational center and the change in the spheroidal aspect ratio are discussed and compared with the previous literature.

  13. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side offering shifting processing...... steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from...... the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide...

  14. Determining profile of dose distribution for Pd-103 brachytherapy source

    International Nuclear Information System (INIS)

    Berkay, Camgoz; Mehmet, N. Kumru; Gultekin, Yegin

    2006-01-01

    Full text: Brachytherapy is a form of radiotherapy used in cancer treatment; the treatment proceeds by destroying cancerous cells with radiation. The general concept of the treatment is to place the radioactive source in the cancerous area of the affected tissue. Experiments on living tissue are hazardous, so brachytherapy sources are generally studied theoretically using computer simulation; such studies use the Monte Carlo method, which is in principle based on random number generation. The palladium radioisotope is an LDR (low dose rate) source. The main radioactive material was encapsulated in a titanium cylinder of 3 mm length and 0.25 mm radius, containing two parts of Pd-103. It is impossible to investigate the differential effects of the two parts experimentally, because the source dimensions are small compared with the measurement distances, so simulation is the only available method. Dosimetric studies aim to determine the radial and angular absorbed dose distribution in tissue. In nuclear physics, computer-based methods are indispensable for researchers: radiation studies present hazards to scientists and to people exposed to radiation, and when the hazard exceeds recommended limits or the physical conditions are not suitable (long working times, uneconomical experiments, inadequate sensitivity of materials, etc.), it is unavoidable to simulate the work and experiments before putting scientific methods into practice. In the medical area, the use of radiation requires computational work for cancer treatments. Some computational studies are routine in clinics, while others serve scientific development purposes. In brachytherapy studies there are significant differences between experimental measurements and theoretical (computer-based) results; the errors of experimental data are larger than those of simulated values. In the design of a new brachytherapy source it is important to consider detailed

  15. Statistical measurement of the gamma-ray source-count distribution as a function of energy

    Science.gov (United States)

    Zechlin, H.-S.; Cuoco, A.; Donato, F.; Fornengo, N.; Regis, M.

    2017-01-01

    Photon count statistics have recently been proven to provide a sensitive observable for characterizing gamma-ray source populations and for measuring the composition of the gamma-ray sky. In this work, we generalize the use of the standard 1-point probability distribution function (1pPDF) to decompose the high-latitude gamma-ray emission observed with Fermi-LAT into (i) point-source contributions, (ii) the Galactic foreground contribution, and (iii) a diffuse isotropic background contribution. We analyze gamma-ray data in five adjacent energy bands between 1 and 171 GeV. We measure the source-count distribution dN/dS as a function of energy, and demonstrate that our results extend current measurements from source catalogs to the regime of so-far-undetected sources. Our method improves the sensitivity for resolving point-source populations by about one order of magnitude in flux. The dN/dS distribution as a function of flux is found to be compatible with a broken power law. We derive upper limits on further possible breaks as well as on the angular power of unresolved sources. We discuss the composition of the gamma-ray sky and the capabilities of the 1pPDF method.

  16. A Monte Carlo Method for the Analysis of Gamma Radiation Transport from Distributed Sources in Laminated Shields

    Energy Technology Data Exchange (ETDEWEB)

    Leimdoerfer, M

    1964-02-15

    A description is given of a method for calculating the penetration and energy deposition of gamma radiation, based on Monte Carlo techniques. The essential feature is the application of the exponential transformation to promote the transport of penetrating quanta and to balance the steep spatial variations of the source distributions which appear in secondary gamma emission problems. The estimated statistical errors in a number of sample problems, involving concrete shields with thicknesses up to 500 cm, are shown to be quite favorable, even at relatively short computing times. A practical reactor shielding problem is also shown and the predictions compared with measurements.

  17. A Monte Carlo Method for the Analysis of Gamma Radiation Transport from Distributed Sources in Laminated Shields

    International Nuclear Information System (INIS)

    Leimdoerfer, M.

    1964-02-01

    A description is given of a method for calculating the penetration and energy deposition of gamma radiation, based on Monte Carlo techniques. The essential feature is the application of the exponential transformation to promote the transport of penetrating quanta and to balance the steep spatial variations of the source distributions which appear in secondary gamma emission problems. The estimated statistical errors in a number of sample problems, involving concrete shields with thicknesses up to 500 cm, are shown to be quite favorable, even at relatively short computing times. A practical reactor shielding problem is also shown and the predictions compared with measurements

  18. Analysis of Paralleling Limited Capacity Voltage Sources by Projective Geometry Method

    Directory of Open Access Journals (Sweden)

    Alexandr Penin

    2014-01-01

    Full Text Available The droop current-sharing method for voltage sources of limited capacity is considered. The influence of the equalizing resistors and the load resistor on the uniform distribution of the relative values of the currents is investigated, where the actual loading of each source corresponds to its capacity. Novel concepts for the quantitative representation of the operating regimes of the sources are introduced using the projective geometry method.
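
    A small numeric sketch of droop current sharing itself (not the paper's projective-geometry representation): two limited-capacity sources with EMFs E_i and equalizing (droop) resistors R_i feed a common bus with load R_L, and solving the bus node equation shows how the resistors set each source's share of the load. All component values are invented.

```python
import numpy as np

E = np.array([48.0, 48.5])        # source EMFs (V), slightly mismatched
R = np.array([0.5, 0.5])          # droop/equalizing resistors (ohm)
RL = 4.0                          # load resistor (ohm)

# KCL at the bus: sum_i (E_i - V)/R_i = V/RL  ->  solve for bus voltage V
V = np.sum(E / R) / (np.sum(1.0 / R) + 1.0 / RL)
I = (E - V) / R                   # individual source currents

print(f"bus voltage {V:.2f} V")
for i, (Ii, share) in enumerate(zip(I, I / I.sum()), start=1):
    print(f"source {i}: {Ii:.2f} A ({share:.0%} of load)")
```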

  19. SPANDOM - source projection analytic nodal discrete ordinates method

    International Nuclear Information System (INIS)

    Kim, Tae Hyeong; Cho, Nam Zin

    1994-01-01

    We describe a new discrete ordinates nodal method for the two-dimensional transport equation. We solve the discrete ordinates equation analytically after the source term is projected onto a polynomial representation. The method is applied to two fast reactor benchmark problems and compared with the TWOHEX code. The results indicate that the present method accurately predicts not only the multiplication factor but also the flux distribution.

  20. Method to Locate Contaminant Source and Estimate Emission Strength

    Directory of Open Access Journals (Sweden)

    Qu Hongquan

    2013-01-01

    Full Text Available The issue of air quality in confined spaces, such as spacecraft, aircraft, and submarines, is of great concern. As residence times in such confined spaces increase, contaminant pollution has become a major factor endangering life, and it is urgent to identify a contaminant source rapidly so that prompt remedial action can be taken. A source identification procedure should be able to locate the position and estimate the emission strength of the contaminant source. In this paper, an identification method was developed to achieve these two aims. The method is based on a discrete concentration stochastic model. With this model, a sensitivity analysis algorithm is used to locate the source position, and a Kalman filter is then used to estimate the contaminant emission strength. The method can track and predict the source strength dynamically, and can also predict the distribution of contaminant concentration. Simulation results have demonstrated the virtues of the method.
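
    A hedged sketch of the two-step procedure under an invented linear response model G: candidate source locations are scanned for the best least-squares fit (standing in for the paper's sensitivity analysis), and a scalar Kalman filter then refines the emission strength over successive measurements.

```python
import numpy as np

rng = np.random.default_rng(6)

n_sensors, n_candidates = 6, 25
G = rng.random((n_sensors, n_candidates))    # concentration per unit emission
true_loc, q_true = 13, 4.0
readings = G[:, true_loc] * q_true + 0.05 * rng.standard_normal(n_sensors)

# Step 1: location = candidate minimizing the least-squares residual
best, best_res, q0 = None, np.inf, 0.0
for j in range(n_candidates):
    g = G[:, j]
    q_ls = g @ readings / (g @ g)            # best-fit strength for candidate j
    res = np.linalg.norm(readings - q_ls * g)
    if res < best_res:
        best, best_res, q0 = j, res, q_ls
print(f"located source at candidate {best} (true {true_loc})")

# Step 2: scalar Kalman filter tracks the emission strength over time
q_hat, P = q0, 1.0
for _ in range(50):
    z = G[:, true_loc] * q_true + 0.05 * rng.standard_normal(n_sensors)
    for g_i, z_i in zip(G[:, best], z):      # sequential scalar updates
        S = g_i * P * g_i + 0.05**2
        K = P * g_i / S
        q_hat += K * (z_i - g_i * q_hat)
        P *= (1.0 - K * g_i)
print(f"estimated emission strength {q_hat:.2f} (true {q_true})")
```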

  1. Application of distributed point source method (DPSM) to wave propagation in anisotropic media

    Science.gov (United States)

    Fooladi, Samaneh; Kundu, Tribikram

    2017-04-01

    Distributed Point Source Method (DPSM) was developed by Placko and Kundu [1] as a technique for modeling electromagnetic and elastic wave propagation problems. DPSM has been used for modeling ultrasonic, electrostatic, and electromagnetic fields scattered by defects and anomalies in a structure. Modeling such scattered fields helps to extract valuable information about the location and type of defects, so DPSM can be used as an effective tool for Non-Destructive Testing (NDT). Anisotropy adds to the complexity of the problem, both mathematically and computationally: computation of the Green's function, which is used as the fundamental solution in DPSM, is considerably more challenging for anisotropic media and cannot be reduced to a closed-form solution as it can for isotropic materials. The purpose of this study is to investigate and implement DPSM for an anisotropic medium. While the mathematical formulation and the numerical algorithm are considered for general anisotropic media, more emphasis is placed on transversely isotropic materials in the numerical example presented in this paper. The unidirectional fiber-reinforced composites widely used in today's industry are good examples of transversely isotropic materials. Development of an effective and accurate NDT method based on these modeling results can be of paramount importance for the in-service monitoring of damage in composite structures.

  2. Planning method for integration and expansion of renewable energy sources with special attention to security supply in distribution system

    Energy Technology Data Exchange (ETDEWEB)

    Cerda-Arias, Jose Luis

    2012-07-01

    Today's structure of power systems, with competitive wholesale markets for electricity, encourages the introduction of new agents and products, customers with self-generating capacity, and the specialization of generators, network operators, and power suppliers. Furthermore, one has to take into account the variation of fossil fuel prices in the world market, which even foreshadows their coming scarcity, the instability of the fulfilment of contracts, and the existence of import restrictions. In addition, policies aiming to control CO2 emissions and promote the efficient use of energy, plus the advent of more efficient technologies, have to be incorporated into new network expansion projects. These factors are forcing utilities and society to seek new forms of electric system expansion that do not impair economic growth. Sustaining such growth is a challenge that changes the vision for the power system and the required security of electricity supply, which has usually been based on internal factors of the electric sector, without considering the connection between the current transmission and distribution networks, the uncertainties related to competition in the electricity market, and the effect of distributed generation units. High penetration of distributed generation resources based on renewable energy sources is increasingly observed worldwide, and it depends on the cost of the technologies, market design, and subsidies. On that account, it is necessary to find alternatives and offers to develop a sustainable strategic plan for power system expansion. Currently, efforts are oriented towards developing planning models which consider the income of power generation based on renewable energy sources under these new requirements, bearing in mind the relationship between competitive markets and power system planning. In this Thesis a general planning method for the expansion of power grids is proposed. This planning method should

  3. Image authentication using distributed source coding.

    Science.gov (United States)

    Lin, Yao-Chung; Varodayan, David; Girod, Bernd

    2012-01-01

    We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.
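    The paper's codec uses LDPC-style Slepian-Wolf codes with sum-product decoding; the toy sketch below only illustrates the underlying idea, namely transmitting a short syndrome as authentication data and then correcting the side information until its syndrome matches. The random parity-check matrix, brute-force decoder and all sizes are invented for illustration.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, m = 16, 8                            # source bits, syndrome bits (toy sizes)
H = rng.integers(0, 2, size=(m, n))     # random parity-check matrix over GF(2)

x = rng.integers(0, 2, size=n)          # quantized image projection (source)
s = H @ x % 2                           # authentication data: syndrome only

# Decoder side: a legitimate image yields side information y close to x.
y = x.copy()
y[[3, 11]] ^= 1                         # two bit flips model legitimate variations

def decode(H, s, y, max_flips=2):
    """Correct side information y so that its syndrome matches s."""
    n = H.shape[1]
    target = (s + H @ y) % 2            # required syndrome of the error pattern
    for w in range(max_flips + 1):
        for idx in combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(idx)] = 1
            if np.array_equal(H @ e % 2, target):
                return (y + e) % 2      # corrected source estimate
    return None                         # no nearby codeword: flag as tampered

x_hat = decode(H, s, y)
print("authentic" if x_hat is not None and np.array_equal(x_hat, x) else "tampered")
```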

  4. Linear source approximation scheme for method of characteristics

    International Nuclear Information System (INIS)

    Tang Chuntao

    2011-01-01

    The method of characteristics (MOC) for solving the neutron transport equation on unstructured meshes has become one of the fundamental methods for lattice calculations in nuclear design code systems. However, most MOC codes are developed with the flat source approximation, called the step characteristics (SC) scheme, which is another basic assumption of MOC. A linear source (LS) characteristics scheme, together with a corresponding modification for negative source distributions, is proposed. The OECD/NEA C5G7-MOX 2D benchmark and a self-defined BWR mini-core problem were employed to validate the new LS module of the PEACH code. Numerical results indicate that the proposed LS scheme requires less memory and computational time than the SC scheme at the same accuracy. (authors)
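    For context, the textbook step characteristics relation propagates the angular flux along a track segment assuming a flat source; the LS scheme replaces the flat source with a linear profile along the track. The standard SC form (a general relation, not a result of this paper) is:

```latex
% Step characteristics: flat source q over a segment of optical length \tau = \Sigma_t s
\psi_{\text{out}} = \psi_{\text{in}}\, e^{-\tau}
  + \frac{q}{\Sigma_t}\left(1 - e^{-\tau}\right),
\qquad \tau = \Sigma_t s .
% A linear source scheme instead takes q(s') = q_0 + q_1 s' along the track, and the
% outgoing flux acquires a correction from integrating q(s') against e^{-\Sigma_t (s - s')}.
```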

  5. Separation of non-stationary multi-source sound field based on the interpolated time-domain equivalent source method

    Science.gov (United States)

    Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng

    2016-05-01

    In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.
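    The separation step ultimately reduces to solving for equivalent source strengths from the mixed measured pressures and then re-radiating each source group separately. The sketch below shows this in a single-frequency (time-harmonic) simplification rather than the paper's time-domain iteration; the geometry, frequencies and names are illustrative assumptions.

```python
import numpy as np

k = 2 * np.pi * 1000 / 343.0                     # wavenumber at 1 kHz in air

def G(mics, srcs):
    """Free-field Green's function matrix between source and microphone points."""
    d = np.linalg.norm(mics[:, None, :] - srcs[None, :, :], axis=2)
    return np.exp(1j * k * d) / (4 * np.pi * d)

rng = np.random.default_rng(1)
mics = rng.uniform(-0.5, 0.5, (32, 3)); mics[:, 2] = 0.3    # measurement plane
src_A = rng.uniform(-0.1, 0.1, (8, 3)); src_A[:, 2] = 0.0   # equivalent sources of A
src_B = src_A + np.array([0.4, 0.0, 0.0])                   # equivalent sources of B

# Mixed pressure measured at the microphones (synthetic example).
q_true = rng.standard_normal(16) + 1j * rng.standard_normal(16)
G_all = np.hstack([G(mics, src_A), G(mics, src_B)])
p_mixed = G_all @ q_true

# Solve for all equivalent source strengths, then re-radiate source A alone.
q_hat, *_ = np.linalg.lstsq(G_all, p_mixed, rcond=None)
p_A = G(mics, src_A) @ q_hat[:8]                 # field attributable to source A
print(np.allclose(p_A, G(mics, src_A) @ q_true[:8]))
```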

  6. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.

  7. Analysis of an ultrasonically rotating droplet by moving particle semi-implicit and distributed point source method in a rotational coordinate

    Science.gov (United States)

    Wada, Yuji; Yuge, Kohei; Tanaka, Hiroki; Nakamura, Kentaro

    2017-07-01

    Numerical analysis of the rotation of an ultrasonically levitated droplet in a centrifugal coordinate system is discussed. A droplet levitated in an acoustic chamber is simulated using the distributed point source method and the moving particle semi-implicit method. The centrifugal coordinate system is adopted to avoid the Laplacian differential error, which causes numerical divergence or inaccuracy in global-coordinate calculations. Consequently, the duration of stable calculation has increased to 30 times that of the previous paper. Moreover, the droplet radius versus rotational acceleration characteristics show a trend similar to the theoretical and experimental values in the literature.

  8. Water-equivalent solid sources prepared by means of two distinct methods

    International Nuclear Information System (INIS)

    Koskinas, Marina F.; Yamazaki, Ione M.; Potiens Junior, Ademar

    2014-01-01

    The Nuclear Metrology Laboratory at IPEN is developing radioactive water-equivalent solid sources prepared from an aqueous solution of acrylamide, using two distinct methods for polymerization. One of them is polymerization by a high dose of 60Co irradiation; in the other method, the solid polyacrylamide matrix is obtained from an aqueous solution composed of acrylamide, catalysts and an aliquot of a radionuclide. The sources have been prepared in cylindrical geometry. In this paper, the study of the distribution of radioactive material in the solid sources prepared by both methods is presented. (author)

  9. Distributed power sources for Mars colonization

    International Nuclear Information System (INIS)

    Miley, George H.; Shaban, Yasser

    2003-01-01

    One of the fundamental needs for Mars colonization is an abundant source of energy. The total energy system will probably use a mixture of sources based on solar energy, fuel cells, and nuclear energy. Here we concentrate on the possibility of developing a distributed system employing several unique new types of nuclear energy sources, specifically small fusion devices using inertial electrostatic confinement and portable 'battery type' proton reaction cells

  10. Simulated and measured neutron/gamma light output distribution for poly-energetic neutron/gamma sources

    Science.gov (United States)

    Hosseini, S. A.; Zangian, M.; Aghabozorgi, S.

    2018-03-01

    In the present paper, the light output distribution due to a poly-energetic neutron/gamma (neutron or gamma) source was calculated using the developed MCNPX-ESUT-PE (MCNPX-Energy engineering of Sharif University of Technology-Poly Energetic version) computational code. The simulation of the light output distribution includes modeling the particle transport, calculating the scintillation photons induced by charged particles, simulating the scintillation photon transport, and applying the light resolution obtained from experiment. The developed computational code is able to simulate the light output distribution due to any neutron/gamma source. In the experimental step of the present study, neutron-gamma discrimination based on the light output distribution was performed using the zero crossing method. As a case study, an 241Am-9Be source was considered and the simulated and measured neutron/gamma light output distributions were compared. There is an acceptable agreement between the discriminated neutron/gamma light output distributions obtained from the simulation and the experiment.

  11. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System

    Directory of Open Access Journals (Sweden)

    Miao Sun

    2016-06-01

    Full Text Available We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is transformed to two dimensions. The source location method is verified with experiments using burning alcohol as the fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source, even for long sensing ranges.

  12. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System.

    Science.gov (United States)

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Li, Jun; Sigrist, Markus W; Dong, Fengzhong

    2016-06-06

    We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is transformed to two dimensions. The source location method is verified with experiments using burning alcohol as the fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source, even for long sensing ranges.
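    A minimal sketch of the dual-line idea: each fiber yields the along-fiber coordinate of its own temperature peak, and combining the two parallel lines fixes the transverse position of the hot-air plume (hence of the fire) in the ceiling plane. The Gaussian peak model and the ratio-based interpolation below are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

# Two parallel fibers along x at y = 0 and y = d, sampled every 0.5 m.
d, x = 1.0, np.arange(0.0, 50.0, 0.5)

def profile(x, x0, dist):
    """Assumed Gaussian temperature rise; wider and weaker with distance."""
    return np.exp(-((x - x0) ** 2) / (2 * (0.5 + dist) ** 2)) / (0.5 + dist)

# Fire plume at (20.0, 0.3): both lines peak at the same x, with amplitude
# falling off with each line's perpendicular distance to the plume.
T1 = profile(x, 20.0, 0.3)        # line at y = 0, distance 0.3
T2 = profile(x, 20.0, d - 0.3)    # line at y = d, distance 0.7

x_hat = x[np.argmax(T1 + T2)]     # along-fiber coordinate of the source
# Transverse coordinate from the peak-amplitude ratio of the two lines.
r = T1.max() / T2.max()
y_hat = d / (1.0 + r)             # crude ratio-based interpolation (assumption)
print(x_hat, round(y_hat, 2))
```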

  13. Determining the temperature and density distribution from a Z-pinch radiation source

    International Nuclear Information System (INIS)

    Matuska, W.; Lee, H.

    1997-01-01

    High temperature radiation sources exceeding one hundred eV can be produced via z-pinches using currently available pulsed power. The usual approach to comparing z-pinch simulations with experimental data is to convert the radiation output at the source, whose temperature and density distributions are computed from a 2-D MHD code, into simulated data such as a spectrometer reading. This conversion process involves a radiation transfer calculation through the axially symmetric source, assuming local thermodynamic equilibrium (LTE), and folding the radiation that reaches the detector with the frequency-dependent response function. In this paper the authors propose a different approach by which they can determine the temperature and density distributions of the radiation source directly from the spatially resolved spectral data. This unfolding process is reliable and unambiguous for the ideal case where LTE holds and the source is axially symmetric. In reality, imperfect LTE and axial symmetry will introduce inaccuracies into the unfolded distributions. The authors use a parameter optimization routine to find the temperature and density distributions that best fit the data. They know from past experience that the radiation source resulting from the implosion of a thin foil does not exhibit good axial symmetry. However, recent experiments carried out at Sandia National Laboratory using multiple wire arrays are very promising for achieving reasonably good symmetry. For these experiments the method will provide a valuable diagnostic tool

  14. Activity distribution of a cobalt-60 teletherapy source

    International Nuclear Information System (INIS)

    Jaffray, D.A.; Munro, P.; Battista, J.J.; Fenster, A.

    1991-01-01

    In the course of quantifying the effect of radiation source size on the spatial resolution of portal images, a concentric ring structure in the activity distribution of a Cobalt-60 teletherapy source has been observed. The activity distribution was measured using a strip integral technique and confirmed independently by a contact radiograph of an identical but inactive source replica. These two techniques suggested that this concentric ring structure is due to the packing configuration of the small 60Co pellets that constitute the source. The source modulation transfer function (MTF) showed that this ring structure has a negligible influence on the spatial resolution of therapy images when compared to the effect of the large size of the 60Co source

  15. Distributed least-squares estimation of a remote chemical source via convex combination in wireless sensor networks.

    Science.gov (United States)

    Cao, Meng-Li; Meng, Qing-Hao; Zeng, Ming; Sun, Biao; Li, Wei; Ding, Cheng-Jun

    2014-06-27

    This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN). Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE) method to solve the chemical source localization (CSL) problem using a WSN. The DLSE method is realized by iteratively conducting convex combination of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source.

  16. Distributed Least-Squares Estimation of a Remote Chemical Source via Convex Combination in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Meng-Li Cao

    2014-06-01

    Full Text Available This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN). Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE) method to solve the chemical source localization (CSL) problem using a WSN. The DLSE method is realized by iteratively conducting convex combination of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source.
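    A minimal sketch of the iteration structure described above: each node refines its local estimate of the source position with a gradient step on its own least-squares cost, then convex-combines its estimate with those of the other nodes. The concentration decay model, weights and step size below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)
src = np.array([8.0, 5.0])                        # true source position
nodes = rng.uniform(0, 10, (12, 2))               # sensor positions
conc = 1.0 / (1.0 + np.linalg.norm(nodes - src, axis=1))   # assumed decay model

def local_grad(xi, p, c):
    """Gradient of the local cost (model(x) - c)^2 for one sensor at p."""
    r = np.linalg.norm(xi - p)
    model = 1.0 / (1.0 + r)
    return 2 * (model - c) * (-(xi - p) / (r * (1.0 + r) ** 2 + 1e-9))

est = rng.uniform(0, 10, (12, 2))                 # initial local estimates
W = np.full((12, 12), 1.0 / 12)                   # convex combination weights
for _ in range(200):
    est -= 0.5 * np.array([local_grad(est[i], nodes[i], conc[i])
                           for i in range(12)])   # local least-squares step
    est = W @ est                                 # consensus: convex combination
print(est.mean(axis=0))                           # should approach src
```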

  17. Dose distribution and dosimetry parameters calculation of MED3633 Palladium-103 source in water phantom using MCNP

    International Nuclear Information System (INIS)

    Mowlavi, A. A.; Binesh, A.; Moslehitabar, H.

    2006-01-01

    Palladium-103 (103Pd) is a brachytherapy source for cancer treatment. Monte Carlo codes are usually applied for dose distribution and shielding-effect calculations. A Monte Carlo calculation of the dose distribution in a water phantom due to a MED3633 103Pd source is presented in this work. Materials and Methods: The dose distribution around the 103Pd Model MED3633 source located at the center of a 30×30×30 cm3 water phantom cube was calculated using the MCNP code by the Monte Carlo method. The percentage depth dose variation along the different axes parallel and perpendicular to the source was also calculated. Then, the isodose curves for 100%, 75%, 50% and 25% percentage depth dose and the dosimetry parameters of the TG-43 protocol were determined. Results: The results show that the Monte Carlo method can accurately calculate dose deposition in the high gradient region near the source. The isodose curves and dosimetric characteristics obtained for the MED3633 103Pd source are in good agreement with published results. Conclusion: The isodose curves of the MED3633 103Pd source have been derived from dose calculations by the MCNP code. The calculated dosimetry parameters for the source agree quite well with their Monte Carlo calculated and experimental measurement values
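    The TG-43 dosimetry parameters mentioned here enter the standard AAPM TG-43 dose-rate equation, reproduced below for reference (this is the protocol's general form, not a result specific to this paper):

```latex
\dot{D}(r,\theta) \;=\; S_K \,\Lambda\,
\frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\; g_L(r)\; F(r,\theta),
```

    where S_K is the air-kerma strength, Λ the dose-rate constant, G_L the geometry function, g_L the radial dose function and F the 2D anisotropy function, with the reference point at r0 = 1 cm, θ0 = π/2.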

  18. Computer program for source distribution process in radiation facility

    International Nuclear Information System (INIS)

    Al-Kassiri, H.; Abdul Ghani, B.

    2007-08-01

    A computer simulation of dose distribution has been developed in Visual Basic according to the arrangement and activities of Co-60 sources. This program provides the dose distribution in treated products depending on the product density and desired dose, and is useful for optimizing the source distribution during the loading process. There is good agreement between the data calculated by the program and experimental data. (Author)
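    A dose-rate map of the kind such a program computes can be sketched with a simple point-kernel sum over the Co-60 source positions; attenuation and build-up in the product are reduced here to a single exponential with an assumed effective coefficient, so all constants and the geometry below are placeholders, not the program's data.

```python
import numpy as np

# Source rack: activities (Bq) and positions (m) of Co-60 pencils (illustrative).
activities = np.array([3.7e14, 3.7e14, 1.85e14])
positions = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])
GAMMA = 9.8e-17        # approx. Co-60 dose-rate constant, Gy*m^2/(s*Bq)
MU_EFF = 6.0           # assumed effective attenuation coefficient in product (1/m)

def dose_rate(point, depth):
    """Point-kernel sum: inverse-square law times product attenuation."""
    r2 = np.sum((positions - point) ** 2, axis=1)
    return np.sum(activities * GAMMA * np.exp(-MU_EFF * depth) / r2)

# Dose at a point 0.3 m from the rack, 5 cm deep in product, 2-hour exposure.
print(dose_rate(np.array([0.5, 0.3]), depth=0.05) * 7200, "Gy")
```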

  19. Gas source localization and gas distribution mapping with a micro-drone

    International Nuclear Information System (INIS)

    Neumann, Patrick P.

    2013-01-01

    The objective of this Ph.D. thesis is the development and validation of a VTOL-based (Vertical Take Off and Landing) micro-drone for the measurement of gas concentrations, to locate gas emission sources, and to build gas distribution maps. Gas distribution mapping and localization of a static gas source are complex tasks due to the turbulent nature of gas transport under natural conditions and becomes even more challenging when airborne. This is especially so, when using a VTOL-based micro-drone that induces disturbances through its rotors, which heavily affects gas distribution. Besides the adaptation of a micro-drone for gas concentration measurements, a novel method for the determination of the wind vector in real-time is presented. The on-board sensors for the flight control of the micro-drone provide a basis for the wind vector calculation. Furthermore, robot operating software for controlling the micro-drone autonomously is developed and used to validate the algorithms developed within this Ph.D. thesis in simulations and real-world experiments. Three biologically inspired algorithms for locating gas sources are adapted and developed for use with the micro-drone: the surge-cast algorithm (a variant of the silkworm moth algorithm), the zigzag / dung beetle algorithm, and a newly developed algorithm called "pseudo gradient algorithm". The latter extracts from two spatially separated measuring positions the information necessary (concentration gradient and mean wind direction) to follow a gas plume to its emission source. The performance of the algorithms is evaluated in simulations and real-world experiments. The distance overhead and the gas source localization success rate are used as main performance criteria for comparing the algorithms. Next, a new method for gas source localization (GSL) based on a particle filter (PF) is presented. Each particle represents a weighted hypothesis of the gas source position. As a first step, the PF-based GSL algorithm

  1. Gas source localization and gas distribution mapping with a micro-drone

    Energy Technology Data Exchange (ETDEWEB)

    Neumann, Patrick P.

    2013-07-01

    The objective of this Ph.D. thesis is the development and validation of a VTOL-based (Vertical Take Off and Landing) micro-drone for the measurement of gas concentrations, to locate gas emission sources, and to build gas distribution maps. Gas distribution mapping and localization of a static gas source are complex tasks due to the turbulent nature of gas transport under natural conditions and becomes even more challenging when airborne. This is especially so, when using a VTOL-based micro-drone that induces disturbances through its rotors, which heavily affects gas distribution. Besides the adaptation of a micro-drone for gas concentration measurements, a novel method for the determination of the wind vector in real-time is presented. The on-board sensors for the flight control of the micro-drone provide a basis for the wind vector calculation. Furthermore, robot operating software for controlling the micro-drone autonomously is developed and used to validate the algorithms developed within this Ph.D. thesis in simulations and real-world experiments. Three biologically inspired algorithms for locating gas sources are adapted and developed for use with the micro-drone: the surge-cast algorithm (a variant of the silkworm moth algorithm), the zigzag / dung beetle algorithm, and a newly developed algorithm called "pseudo gradient algorithm". The latter extracts from two spatially separated measuring positions the information necessary (concentration gradient and mean wind direction) to follow a gas plume to its emission source. The performance of the algorithms is evaluated in simulations and real-world experiments. The distance overhead and the gas source localization success rate are used as main performance criteria for comparing the algorithms. Next, a new method for gas source localization (GSL) based on a particle filter (PF) is presented. Each particle represents a weighted hypothesis of the gas source position. As a first step, the PF-based GSL algorithm

  2. Geometric discretization of the multidimensional Dirac delta distribution - Application to the Poisson equation with singular source terms

    Science.gov (United States)

    Egan, Raphael; Gibou, Frédéric

    2017-10-01

    We present a discretization method for the multidimensional Dirac distribution. We show its applicability in the context of integration problems, and for discretizing Dirac-distributed source terms in Poisson equations with constant or variable diffusion coefficients. The discretization is cell-based and can thus be applied in a straightforward fashion to Quadtree/Octree grids. The method produces second-order accurate results for integration. Superlinear convergence is observed when it is used to model Dirac-distributed source terms in Poisson equations: the observed order of convergence is 2 or slightly smaller. The method is consistent with the discretization of Dirac delta distribution for codimension one surfaces presented in [1,2]. We present Quadtree/Octree construction procedures to preserve convergence and present various numerical examples, including multi-scale problems that are intractable with uniform grids.
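    On a uniform grid, the cell-based idea can be illustrated with the usual bilinear spreading of a point source to the four nodes of its containing cell, so that the discrete delta integrates to one and the discrete pairing with a smooth function recovers its point value. This is a generic regularized delta for illustration, not the paper's Quadtree/Octree construction.

```python
import numpy as np

h, N = 0.05, 21                       # grid spacing and size (domain [0, 1]^2)
x0, y0 = 0.43, 0.61                   # point-source location

delta = np.zeros((N, N))
i, j = int(x0 // h), int(y0 // h)     # lower-left node of the containing cell
tx, ty = x0 / h - i, y0 / h - j       # fractional offsets within the cell
# Bilinear weights, divided by h^2 so the discrete delta integrates to 1.
for di, wi in ((0, 1 - tx), (1, tx)):
    for dj, wj in ((0, 1 - ty), (1, ty)):
        delta[i + di, j + dj] = wi * wj / h**2

xs = np.arange(N) * h
f = np.add.outer(xs**2, xs)           # test function f(x, y) = x^2 + y
print((delta * f).sum() * h**2, x0**2 + y0)   # quadrature vs. exact point value
```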

  3. A Heuristic Approach to Distributed Generation Source Allocation for Electrical Power Distribution Systems

    Directory of Open Access Journals (Sweden)

    M. Sharma

    2010-12-01

    Full Text Available The recent trends in electrical power distribution system operation and management are aimed at improving system conditions in order to render good service to the customer. The reforms in the distribution sector have given major scope for employment of distributed generation (DG) resources which will boost the system performance. This paper proposes a heuristic technique for allocation of a distributed generation source in a distribution system. The allocation is determined based on the overall improvement in network performance parameters, such as reduction in system losses, improvement in voltage stability, and improvement in voltage profile. The proposed Network Performance Enhancement Index (NPEI), along with the heuristic rules, facilitates determination of the feasible location and corresponding capacity of the DG source. The developed approach is tested with different test systems to ascertain its effectiveness.

  4. Perceived loudness of spatially distributed sound sources

    DEFF Research Database (Denmark)

    Song, Woo-keun; Ellermeier, Wolfgang; Minnaar, Pauli

    2005-01-01

    psychoacoustic attributes into account. Therefore, a method for deriving loudness maps was developed in an earlier study [Song, Internoise2004, paper 271]. The present experiment investigates to which extent perceived loudness depends on the distribution of individual sound sources. Three loudspeakers were positioned 1.5 m from the centre of the listener's head, one straight ahead, and two 10 degrees to the right and left, respectively. Six participants matched the loudness of either one, or two simultaneous sounds (narrow-band noises with 1-kHz and 3.15-kHz centre frequencies) to a 2-kHz, 60-dB SPL narrow-band noise placed in the frontal loudspeaker. The two sounds were either originating from the central speaker, or from the two offset loudspeakers. It turned out that the subjects perceived the noises to be softer when they were distributed in space. In addition, loudness was calculated from the recordings

  5. Calculating method for confinement time and charge distribution of ions in electron cyclotron resonance sources

    International Nuclear Information System (INIS)

    Dougar-Jabon, V.D.; Umnov, A.M.; Kutner, V.B.

    1996-01-01

    It is common knowledge that the electrostatic pit in a core plasma of electron cyclotron resonance sources exerts strict control over the generation of ions in high charge states. This work is aimed at finding the dependence of the lifetime of ions on their charge states in the core region, and at elaborating a numerical model of ion charge dispersion, not only for the core plasma but for extracted beams as well. The calculated data are in good agreement with the experimental results on charge distributions and magnitudes of currents of beams extracted from the 14 GHz DECRIS source. copyright 1996 American Institute of Physics

  6. Acoustic Emission Source Location Using a Distributed Feedback Fiber Laser Rosette

    Directory of Open Access Journals (Sweden)

    Fang Li

    2013-10-01

    Full Text Available This paper proposes an approach for acoustic emission (AE) source localization in a large marble stone using distributed feedback (DFB) fiber lasers. The aim of this study is to detect damage in structures such as those found in civil applications. The directional sensitivity of the DFB fiber laser is investigated by calculating a location coefficient using a method of digital signal analysis. Here, autocorrelation is used to extract the location coefficient from the periodic AE signal, and the wavelet packet energy is calculated to obtain the location coefficient of a burst AE source. Normalization is applied to eliminate the influence of the distance and intensity of the AE source. Then a new location algorithm based on the location coefficient is presented and tested to determine the location of an AE source using a Delta (Δ) DFB fiber laser rosette configuration. The advantages of the proposed algorithm over traditional methods based on fiber Bragg gratings (FBGs) include higher strain resolution for AE detection and the ability to take two different types of AE source into account for location.

  7. 137Cs source dose distribution using the Fricke Xylenol Gel dosimetry

    International Nuclear Information System (INIS)

    Sato, R.; De Almeida, A.; Moreira, M.V.

    2009-01-01

    Dosimetric measurements close to radioisotope sources, such as those used in brachytherapy, require high spatial resolution to avoid incorrect results in the steep dose gradient region. In this work the Fricke Xylenol Gel dosimeter was used to obtain the spatial dose distribution. The readings from a 137Cs source were performed using two methods: visible spectrophotometry and CCD camera images. Good agreement with the Sievert summation method was found for the transversal axis dose profile, within uncertainties of 4% and 5% for the spectrophotometer and CCD camera, respectively. Our results show that the dosimeter is adequate for brachytherapy dosimetry and, owing to its relatively fast and easy preparation and reading, it is recommended for quality control in brachytherapy applications.

  8. On Distributions of Emission Sources and Speed-of-Sound in Proton-Proton (Proton-Antiproton Collisions

    Directory of Open Access Journals (Sweden)

    Li-Na Gao

    2015-01-01

    Full Text Available The revised (three-source) Landau hydrodynamic model is used in this paper to study the (pseudo)rapidity distributions of charged particles produced in proton-proton and proton-antiproton collisions at high energies. The central source is assumed to contribute with a Gaussian function which covers the rapidity distribution region as widely as possible. The target and projectile sources are assumed to emit particles isotropically in their respective rest frames. The model calculations obtained with a Monte Carlo method are fitted to the experimental data over an energy range from 0.2 to 13 TeV. The values of the squared speed-of-sound parameter in different collisions are then extracted from the width of the rapidity distributions.

  9. Z-Source-Inverter-Based Flexible Distributed Generation System Solution for Grid Power Quality Improvement

    DEFF Research Database (Denmark)

    Blaabjerg, Frede; Vilathgamuwa, D. M.; Loh, Poh Chiang

    2009-01-01

    Distributed generation (DG) systems are usually connected to the grid using power electronic converters. Power delivered from such DG sources depends on factors like energy availability and load demand. The converters used in power conversion do not operate at their full capacity all the time. As a single-stage buck-boost inverter, the recently proposed Z-source inverter (ZSI) is a good candidate for future DG systems. This paper presents a controller design for a ZSI-based DG system to improve the power quality of distribution systems. The proposed control method is tested with simulation results obtained using

  10. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems.

    Science.gov (United States)

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-03-12

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS receives is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional algorithm based on estimating signal parameters via the rotational invariance technique (ESPRIT) is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariance relationships are constructed. Then, we extend ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that spectrum peak searching is avoided. Therefore, compared to traditional 2D DOA estimation algorithms, the proposed algorithm imposes significantly lower computational complexity. The intersecting point of two rays, which come from two different directions measured by two uniform rectangular arrays (URAs), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm.
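    The rotational-invariance idea behind these estimators is easiest to see in classical 1D ESPRIT for point sources, sketched below; the paper's extension handles 2D angles and distributed sources, which this toy version does not. Array size, angles and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
M, T, thetas = 8, 200, np.deg2rad([-20.0, 35.0])   # sensors, snapshots, true DOAs

# Half-wavelength ULA steering matrix and noisy snapshots.
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(thetas)))
S = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))
X = A @ S + 0.05 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))

# Signal subspace from the sample covariance matrix.
R = X @ X.conj().T / T
w, V = np.linalg.eigh(R)
Us = V[:, -2:]                     # eigenvectors of the 2 largest eigenvalues

# Rotational invariance between the two overlapping subarrays.
Phi, *_ = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)
doas = np.rad2deg(np.arcsin(np.angle(np.linalg.eigvals(Phi)) / np.pi))
print(np.sort(doas))               # approx. [-20, 35]
```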

  11. The Competition Between a Localised and Distributed Source of Buoyancy

    Science.gov (United States)

    Partridge, Jamie; Linden, Paul

    2012-11-01

    We propose a new mathematical model to study the competition between localised and distributed sources of buoyancy within a naturally ventilated filling box. The main controlling parameters in this configuration are the buoyancy fluxes of the distributed and local sources, specifically their ratio Ψ. The steady state dynamics of the flow are heavily dependent on this parameter. For large Ψ, where the distributed source dominates, we find the space becomes well mixed, as expected if driven by a distributed source alone. Conversely, for small Ψ we find the space reaches a stable two-layer stratification. This is analogous to the classical case of a purely local source, but here the lower layer is buoyant compared to the ambient, due to the constant flux of buoyancy emanating from the distributed source. The ventilation flow rate, the buoyancy of the layers and also the location of the interface height, which separates the two-layer stratification, are obtainable from the model. To validate the theoretical model, small scale laboratory experiments were carried out. Water was used as the working medium with buoyancy being driven directly by temperature differences. Theoretical results were compared with experimental data and overall good agreement was found. A CASE award project with Arup.

  12. GLOBAL OPTIMIZATION METHODS FOR GRAVITATIONAL LENS SYSTEMS WITH REGULARIZED SOURCES

    International Nuclear Information System (INIS)

    Rogers, Adam; Fiege, Jason D.

    2012-01-01

    Several approaches exist to model gravitational lens systems. In this study, we apply global optimization methods to find the optimal set of lens parameters using a genetic algorithm. We treat the full optimization procedure as a two-step process: an analytical description of the source plane intensity distribution is used to find an initial approximation to the optimal lens parameters; the second stage of the optimization uses a pixelated source plane with the semilinear method to determine an optimal source. Regularization is handled by means of an iterative method and the generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) functions that are commonly used in standard image deconvolution problems. This approach simultaneously estimates the optimal regularization parameter and the number of degrees of freedom in the source. Using the GCV and UPRE functions, we are able to justify an estimation of the number of source degrees of freedom found in previous work. We test our approach by applying our code to a subset of the lens systems included in the SLACS survey.
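    The GCV criterion used here balances data fit against the effective degrees of freedom; for a Tikhonov-regularized linear problem it can be evaluated cheaply from the SVD, as in this generic sketch (not the authors' lens-modeling code; the test problem is synthetic).

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((60, 40))          # toy forward operator
x_true = rng.standard_normal(40)
b = A @ x_true + 0.1 * rng.standard_normal(60)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
beta = U.T @ b

def gcv(lam):
    """GCV(lam) = m * ||residual||^2 / trace(I - A_lam)^2, via SVD filter factors."""
    f = s**2 / (s**2 + lam**2)             # Tikhonov filter factors
    resid = np.sum(((1 - f) * beta) ** 2) + (b @ b - beta @ beta)
    return len(b) * resid / (len(b) - f.sum()) ** 2

lams = np.logspace(-4, 1, 100)
lam_opt = lams[np.argmin([gcv(l) for l in lams])]
f = s**2 / (s**2 + lam_opt**2)
x_reg = Vt.T @ (f * beta / s)              # regularized solution at the GCV minimum
print(lam_opt, np.linalg.norm(x_reg - x_true))
```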

  13. CMP reflection imaging via interferometry of distributed subsurface sources

    Science.gov (United States)

    Kim, D.; Brown, L. D.; Quiros, D. A.

    2015-12-01

    The theoretical foundations of recovering body wave energy via seismic interferometry are well established. However, in practice such recovery remains problematic. Here, synthetic seismograms computed for subsurface sources are used to evaluate the geometrical combinations of realistic ambient source and receiver distributions that result in useful recovery of virtual body waves. This study illustrates how surface receiver arrays that span a limited distribution of sources can be processed to produce virtual shot gathers, which result in CMP gathers that can be effectively stacked with traditional normal moveout corrections. To verify the feasibility of the approach in practice, seismic recordings of 50 aftershocks following the magnitude 5.8 Virginia earthquake of August 2011 have been processed using seismic interferometry to produce seismic reflection images of the crustal structure above and beneath the aftershock cluster. Although monotonic noise proved to be problematic by significantly reducing the number of usable recordings, the edited dataset resulted in stacked seismic sections characterized by coherent reflections that resemble those seen on a nearby conventional reflection survey. In particular, "virtual" reflections at travel times of 3 to 4 seconds suggest reflectors at approximately 7 to 12 km depth that would seem to correspond to imbricate thrust structures formed during the Appalachian orogeny. The approach described here represents a promising new means of body wave imaging of 3D structure that can be applied to a wide array of geologic and energy problems. Unlike other imaging techniques using natural sources, this technique does not require precise source locations or times. It can thus exploit aftershocks too small for conventional analyses. This method can be applied to any type of microseismic cloud, whether tectonic, volcanic or man-made.

  14. Multipath interference test method for distributed amplifiers

    Science.gov (United States)

    Okada, Takahiro; Aida, Kazuo

    2005-12-01

    A method for testing distributed amplifiers is presented; the multipath interference (MPI) is detected as a beat spectrum between the multipath signal and the direct signal using a binary frequency shifted keying (FSK) test signal. The lightwave source is composed of a DFB-LD that is directly modulated by a pulse stream passing through an equalizer, and emits an FSK signal with a frequency deviation of about 430 MHz at a repetition rate of 80-100 kHz. The receiver consists of a photodiode and an electrical spectrum analyzer (ESA). The baseband power spectrum peak appearing at the FSK frequency deviation can be converted to an amount of MPI using a calibration chart. The test method has improved the minimum detectable MPI to as low as -70 dB, compared with -50 dB for the conventional test method. The detailed design and performance of the proposed method are discussed, including the MPI simulator for the calibration procedure, computer simulations for evaluating the error caused by the FSK repetition rate and the fiber length under test, and experiments on single-mode fibers and a distributed Raman amplifier.

  15. An improved in situ method for determining depth distributions of gamma-ray emitting radionuclides

    International Nuclear Information System (INIS)

    Benke, R.R.; Kearfott, K.J.

    2001-01-01

    In situ gamma-ray spectrometry determines the quantities of radionuclides in some medium with a portable detector. The main limitation of in situ gamma-ray spectrometry lies in determining the depth distribution of radionuclides. This limitation is addressed by developing an improved in situ method for determining the depth distributions of gamma-ray emitting radionuclides in large area sources. This paper implements a unique collimator design with conventional radiation detection equipment. Cylindrically symmetric collimators were fabricated to allow only those gamma-rays emitted from a selected range of polar angles (measured off the detector axis) to be detected. Positioned with its axis normal to the surface of the medium, each collimator enables the detection of gamma-rays emitted from a different range of polar angles and preferential depths. Previous in situ methods require a priori knowledge of the depth distribution shape. However, the absolute method presented in this paper determines the depth distribution as a histogram and does not rely on such assumptions. Other advantages over previous in situ methods are that this method requires only a single gamma-ray emission, provides more detailed depth information, and offers a superior ability to characterize complex depth distributions. Collimated spectrometer measurements of buried area sources demonstrated the ability of the method to yield accurate depth information. Based on the results of actual measurements, this method increases the potential of in situ gamma-ray spectrometry as an independent characterization tool in situations with unknown radionuclide depth distributions

  16. The Impact of Source Distribution on Scalar Transport over Forested Hills

    Science.gov (United States)

    Ross, Andrew N.; Harman, Ian N.

    2015-08-01

    Numerical simulations of neutral flow over a two-dimensional, isolated, forested ridge are conducted to study the effects of scalar source distribution on scalar concentrations and fluxes over forested hills. Three different constant-flux sources are considered that span a range of idealized but ecologically important source distributions: a source at the ground, one uniformly distributed through the canopy, and one decaying with depth in the canopy. A fourth source type, where the in-canopy source depends on both the wind speed and the difference in concentration between the canopy and a reference concentration on the leaf, designed to mimic deposition, is also considered. The simulations show that the topographically-induced perturbations to the scalar concentration and fluxes are quantitatively dependent on the source distribution. The net impact is a balance of different processes affecting both advection and turbulent mixing, and can be significant even for moderate topography. Sources that have significant input in the deep canopy or at the ground exhibit a larger magnitude advection and turbulent flux-divergence terms in the canopy. The flows have identical velocity fields and so the differences are entirely due to the different tracer concentration fields resulting from the different source distributions. These in-canopy differences lead to larger spatial variations in above-canopy scalar fluxes for sources near the ground compared to cases where the source is predominantly located near the canopy top. Sensitivity tests show that the most significant impacts are often seen near to or slightly downstream of the flow separation or reattachment points within the canopy flow. The qualitative similarities to previous studies using periodic hills suggest that important processes occurring over isolated and periodic hills are not fundamentally different. The work has important implications for the interpretation of flux measurements over forests, even in

  17. Methods to determine fast-ion distribution functions from multi-diagnostic measurements

    DEFF Research Database (Denmark)

    Jacobsen, Asger Schou; Salewski, Mirko

    Understanding the behaviour of fast ions in a fusion plasma is very important, since the fusion-born alpha particles are expected to be the main source of heating in a fusion power plant. Preferably, the entire fast-ion velocity-space distribution function would be measured. However, no fast-ion diagnostic measures the distribution function directly. By combining several fast-ion diagnostic views, it is possible to infer the distribution function using a tomography approach. Several inversion methods for solving this tomography problem in velocity space are implemented and compared. It is found that the best quality is obtained when using inversion methods which penalise steep gradients
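    Penalising steep gradients amounts to first-order Tikhonov regularization: augment the forward matrix with a scaled finite-difference operator and solve the stacked least-squares system. A generic 1D sketch follows; the actual tomography uses velocity-space weight functions computed from the diagnostics, whereas the matrix and distribution below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50                                    # velocity-space bins (1D toy problem)
W = rng.random((30, n))                   # toy diagnostic weight-function matrix
f_true = np.exp(-0.5 * ((np.arange(n) - 25) / 6.0) ** 2)   # smooth distribution
m = W @ f_true + 0.01 * rng.standard_normal(30)            # noisy measurements

# First-difference operator penalising steep gradients in the solution.
L = np.diff(np.eye(n), axis=0)
lam = 1.0                                 # regularization strength (hand-tuned here)

# Solve min ||W f - m||^2 + lam^2 ||L f||^2 as a stacked least-squares problem.
A = np.vstack([W, lam * L])
rhs = np.concatenate([m, np.zeros(n - 1)])
f_hat, *_ = np.linalg.lstsq(A, rhs, rcond=None)
print(np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
```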

  18. Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources

    Directory of Open Access Journals (Sweden)

    Xiang Gao

    2016-07-01

    Full Text Available This paper addresses the problem of mapping the odor distribution derived from a chemical source using multi-sensor integration and a reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To solve both of the above challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors, and fuses sensor data collected at different positions. Initially, a multi-sensor integration method, together with the path of airflow, was used to map the pattern of odor particle movement. Then, more sensors are introduced at specific regions to determine the probable location of the odor source. Finally, the results of an odor source location simulation and a real experiment are presented.

  19. Voltage Based Detection Method for High Impedance Fault in a Distribution System

    Science.gov (United States)

    Thomas, Mini Shaji; Bhaskar, Namrata; Prakash, Anupama

    2016-09-01

    High-impedance faults (HIFs) on distribution feeders cannot be detected by conventional protection schemes, as HIFs are characterized by their low fault current level and waveform distortion due to the nonlinearity of the ground return path. This paper proposes a method to identify HIFs in a distribution system and isolate the faulty section, in order to reduce downtime. The method is based on voltage measurements along the distribution feeder and utilizes the sequence components of the voltages. Three models of high impedance faults have been considered, and source-side and load-side breaking of the conductor have been studied in this work to capture a wide range of scenarios. The effect of the neutral grounding of the source-side transformer is also accounted for in this study. The results show that the algorithm detects HIFs accurately and rapidly. Thus, the faulty section can be isolated and service can be restored to the rest of the consumers.
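    Since the detection relies on the sequence components of the measured feeder voltages, the core computation is the standard symmetrical-component (Fortescue) transform, sketched below. The phasors and the detection threshold are illustrative, not the paper's settings.

```python
import numpy as np

a = np.exp(2j * np.pi / 3)                       # 120-degree rotation operator
# Fortescue transform: phase voltages -> zero/positive/negative sequence.
A_inv = np.array([[1, 1,    1   ],
                  [1, a,    a**2],
                  [1, a**2, a   ]]) / 3

def sequence_components(Va, Vb, Vc):
    return A_inv @ np.array([Va, Vb, Vc])

# Balanced feeder vs. a distorted HIF-like condition (illustrative phasors).
V0, V1, V2 = sequence_components(1.0, a**2, a)   # healthy: only V1 is nonzero
print(abs(V0), abs(V1), abs(V2))
V0, V1, V2 = sequence_components(0.8, a**2, a)   # phase-A sag under a HIF
# A simple detector flags the section when |V2|/|V1| exceeds a threshold.
print("HIF suspected" if abs(V2) / abs(V1) > 0.05 else "normal")
```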

  20. The adaptive collision source method for discrete ordinates radiation transport

    International Nuclear Information System (INIS)

    Walters, William J.; Haghighat, Alireza

    2017-01-01

    Highlights: • A new adaptive quadrature method to solve the discrete ordinates transport equation. • The adaptive collision source (ACS) method splits the flux into n'th collided components. • The uncollided flux requires a high quadrature order; this is lowered with the number of collisions. • ACS automatically applies an appropriate quadrature order to each collided component. • The adaptive quadrature is 1.5–4 times more efficient than uniform quadrature. - Abstract: A novel collision source method has been developed to solve the Linear Boltzmann Equation (LBE) more efficiently by adaptation of the angular quadrature order. The angular adaptation method is unique in that the flux from each scattering source iteration is obtained, with potentially a different quadrature order used for each. Traditionally, the flux from every iteration is combined, with the same quadrature applied to the combined flux. Since the scattering process tends to distribute the radiation more evenly over angles (i.e., make it more isotropic), the quadrature requirements generally decrease with each iteration. This method allows for an optimal use of processing power, by using a high order quadrature for the first iterations that need it, before shifting to lower order quadratures for the remaining iterations. This is essentially an extension of the first collision source method, and is referred to as the adaptive collision source (ACS) method. The ACS methodology has been implemented in the 3-D, parallel, multigroup discrete ordinates code TITAN. This code was tested on several simple and complex fixed-source problems. The ACS implementation in TITAN has shown a reduction in computation time by a factor of 1.5–4 on the fixed-source test problems, for the same desired level of accuracy, as compared to the standard TITAN code.
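    The decomposition underlying both the first collision source and ACS methods writes the flux as a sum of collided components, each obtained from a transport sweep whose source is the scattering of the previous component; in ACS the quadrature order may differ for each n. Schematically (a standard form, not quoted from the paper):

```latex
\psi = \sum_{n=0}^{\infty} \psi^{(n)}, \qquad
\Omega \cdot \nabla \psi^{(0)} + \Sigma_t \psi^{(0)} = q_{\text{ext}}, \qquad
\Omega \cdot \nabla \psi^{(n)} + \Sigma_t \psi^{(n)} =
\int \Sigma_s(\Omega' \to \Omega)\, \psi^{(n-1)}(\Omega')\, d\Omega' .
```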

  1. Brightness distribution data on 2918 radio sources at 365 MHz

    International Nuclear Information System (INIS)

    Cotton, W.D.; Owen, F.N.; Ghigo, F.D.

    1975-01-01

    This paper is the second in a series describing the results of a program attempting to fit models of the brightness distribution to radio sources observed at 365 MHz with the Bandwidth Synthesis Interferometer (BSI) operated by the University of Texas Radio Astronomy Observatory. Results for a further 2918 radio sources are given. An unresolved model and three symmetric extended models with angular sizes in the range 10--70 arcsec were attempted for each radio source. In addition, for 348 sources for which other observations of brightness distribution are published, the reference to the observations and a brief description are included

  2. North Slope, Alaska: Source rock distribution, richness, thermal maturity, and petroleum charge

    Science.gov (United States)

    Peters, K.E.; Magoon, L.B.; Bird, K.J.; Valin, Z.C.; Keller, M.A.

    2006-01-01

    Four key marine petroleum source rock units were identified, characterized, and mapped in the subsurface to better understand the origin and distribution of petroleum on the North Slope of Alaska. These marine source rocks, from oldest to youngest, include four intervals: (1) Middle-Upper Triassic Shublik Formation, (2) basal condensed section in the Jurassic-Lower Cretaceous Kingak Shale, (3) Cretaceous pebble shale unit, and (4) Cretaceous Hue Shale. Well logs for more than 60 wells and total organic carbon (TOC) and Rock-Eval pyrolysis analyses for 1183 samples in 125 well penetrations of the source rocks were used to map the present-day thickness of each source rock and the quantity (TOC), quality (hydrogen index), and thermal maturity (Tmax) of the organic matter. Based on assumptions related to carbon mass balance and regional distributions of TOC, the present-day source rock quantity and quality maps were used to determine the extent of fractional conversion of the kerogen to petroleum and to map the original TOC (TOCo) and the original hydrogen index (HIo) prior to thermal maturation. The quantity and quality of oil-prone organic matter in Shublik Formation source rock generally exceeded that of the other units prior to thermal maturation (commonly TOCo > 4 wt.% and HIo > 600 mg hydrocarbon/g TOC), although all are likely sources for at least some petroleum on the North Slope. We used Rock-Eval and hydrous pyrolysis methods to calculate expulsion factors and petroleum charge for each of the four source rocks in the study area. Without attempting to identify the correct methods, we conclude that calculations based on Rock-Eval pyrolysis overestimate expulsion factors and petroleum charge because low pressure and rapid removal of thermally cracked products by the carrier gas retards cross-linking and pyrobitumen formation that is otherwise favored by natural burial maturation. Expulsion factors and petroleum charge based on hydrous pyrolysis may also be high

  3. Effect of source angular distribution on the evaluation of gamma-ray skyshine

    Energy Technology Data Exchange (ETDEWEB)

    Sheu, R.D.; Jiang, S.H. [Dept. of Engineering and System Science, National Tsing Hua Univ., Taiwan (China); Chang, B.J.; Chen, I.J. [Division of Health Physics, Inst. of Nuclear Energy Research, Taiwan (China)

    2000-03-01

    The effect of the angular distribution of the equivalent point source on the analysis of skyshine dose rates was investigated in detail. The dedicated skyshine codes SKYDOSE and McSKY were revised to include the capability of dealing with an anisotropic source. It was found that replacing the cosine-distributed source with an isotropic source overestimates the skyshine dose rates for large roof-subtended angles and underestimates them for small roof-subtended angles. For a building with roof shielding, however, replacing the cosine-distributed source with an isotropic source will always underestimate the skyshine dose rates. The skyshine dose rates from a volume source calculated by the dedicated skyshine codes agree very well with those of the MCNP Monte Carlo calculation. (author)

  4. Two Dimensional Verification of the Dose Distribution of Gamma Knife Model C using Monte Carlo Simulation with a Virtual Source

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae-Hoon; Kim, Yong-Kyun; Lee, Cheol Ho; Son, Jaebum; Lee, Sangmin; Kim, Dong Geon; Choi, Joonbum; Jang, Jae Yeong [Hanyang University, Seoul (Korea, Republic of); Chung, Hyun-Tai [Seoul National University, Seoul (Korea, Republic of)

    2016-10-15

    Gamma Knife model C contains 201 60Co sources located on a spherical surface, so that each beam is focused on the center of the sphere. In previous work, we simulated the Gamma Knife model C with a Monte Carlo simulation code based on Geant4. Instead of the 201-beam multi-collimation system, we constructed a single collimation system that collects the source parameters of particles passing through the collimator helmet. Using this virtual source, we drastically reduced the simulation time needed to transport the 201 gamma beams to the target. The gamma index has been widely used to compare two dose distributions in cancer radiotherapy. Gamma index pass rates were compared between results calculated using the virtual source method and the original method, and results measured using radiochromic films. The virtual source method significantly reduces the simulation time of a Gamma Knife Model C and provides absorbed dose distributions equivalent to those of the original method, with a gamma index pass rate close to 100% under 1 mm/3% criteria. On the other hand, it gives a slightly narrower dose distribution compared to the film measurement, with a gamma index pass rate of 94%. A more accurate and sophisticated examination of the accuracy of the simulation and film measurement is necessary.
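    The gamma-index comparison used above can be sketched in 1D: an evaluation point passes if some reference point lies within the combined distance-to-agreement and dose-difference ellipsoid. The criteria below mirror the 1 mm / 3% figures quoted, but the dose profiles are synthetic.

```python
import numpy as np

x = np.arange(0.0, 40.0, 0.2)                    # position (mm)
ref = np.exp(-((x - 20.0) / 6.0) ** 2)           # reference dose profile
ev = np.exp(-((x - 20.4) / 6.0) ** 2)            # evaluated profile, 0.4 mm shift

def gamma_pass_rate(x, ref, ev, dta=1.0, dd=0.03):
    """1D gamma index: fraction of points with min sqrt((dx/dta)^2 + (dD/dd)^2) <= 1."""
    gammas = []
    for xi, de in zip(x, ev):
        g2 = ((x - xi) / dta) ** 2 + ((ref - de) / (dd * ref.max())) ** 2
        gammas.append(np.sqrt(g2.min()))
    return np.mean(np.array(gammas) <= 1.0)

print(gamma_pass_rate(x, ref, ev))               # close to 1.0 for a small shift
```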

  5. A hybrid source-driven method to compute fast neutron fluence in reactor pressure vessel - 017

    International Nuclear Information System (INIS)

    Ren-Tai, Chiang

    2010-01-01

    A hybrid source-driven method is developed to compute fast neutron fluence with neutron energy greater than 1 MeV in nuclear reactor pressure vessel (RPV). The method determines neutron flux by solving a steady-state neutron transport equation with hybrid neutron sources composed of peripheral fixed fission neutron sources and interior chain-reacted fission neutron sources. The relative rod-by-rod power distribution of the peripheral assemblies in a nuclear reactor obtained from reactor core depletion calculations and subsequent rod-by-rod power reconstruction is employed as the relative rod-by-rod fixed fission neutron source distribution. All fissionable nuclides other than U-238 (such as U-234, U-235, U-236, Pu-239 etc) are replaced with U-238 to avoid counting the fission contribution twice and to preserve fast neutron attenuation for heavy nuclides in the peripheral assemblies. An example is provided to show the feasibility of the method. Since the interior fuels only have a marginal impact on RPV fluence results due to rapid attenuation of interior fast fission neutrons, a generic set or one of several generic sets of interior fuels can be used as the driver and only the neutron sources in the peripheral assemblies will be changed in subsequent hybrid source-driven fluence calculations. Consequently, this hybrid source-driven method can simplify and reduce cost for fast neutron fluence computations. This newly developed hybrid source-driven method should be a useful and simplified tool for computing fast neutron fluence at selected locations of interest in RPV of contemporary nuclear power reactors. (authors)

  6. The Space-, Time-, and Energy-distribution of Neutrons from a Pulsed Plane Source

    Energy Technology Data Exchange (ETDEWEB)

    Claesson, Arne

    1962-05-15

    The space-, time- and energy-distribution of neutrons from a pulsed, plane, high-energy source in an infinite medium is determined in a diffusion approximation. For simplicity, the moderator is first assumed to be hydrogen gas, but it is also shown that the method can be used for a moderator of arbitrary mass.
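
    For illustration, the purely spatial part of such a solution is easy to evaluate: in a one-speed diffusion approximation, a plane pulse in an infinite absorbing medium spreads as a Gaussian while decaying exponentially. The sketch below is a hypothetical one-speed reduction (the report also resolves the energy variable and slowing-down in hydrogen); the values of D, tau, and s0 are illustrative, not taken from the report.

    ```python
    import numpy as np

    def plane_pulse_density(x_cm, t_s, D=1.0e5, tau=2.0e-4, s0=1.0):
        """One-speed diffusion of a plane pulse in an infinite medium:
        dn/dt = D d2n/dx2 - n/tau,  n(x, 0) = s0 * delta(x).
        D (cm^2/s) and tau (s) are illustrative placeholder values."""
        return (s0 / np.sqrt(4.0 * np.pi * D * t_s)
                * np.exp(-x_cm ** 2 / (4.0 * D * t_s))
                * np.exp(-t_s / tau))

    # the pulse spreads as sqrt(t) while decaying through absorption
    for t in (1e-5, 1e-4, 1e-3):
        print(t, plane_pulse_density(np.array([0.0, 10.0, 30.0]), t))
    ```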

  7. The Galactic Distribution of Massive Star Formation from the Red MSX Source Survey

    Science.gov (United States)

    Figura, Charles C.; Urquhart, J. S.

    2013-01-01

    Massive stars inject enormous amounts of energy into their environments in the form of UV radiation and molecular outflows, creating HII regions and enriching the local chemistry. These effects provide feedback mechanisms that help regulate star formation in the region and may trigger the formation of subsequent generations of stars. Understanding the mechanics of massive star formation is therefore an important key to understanding this process and its role in shaping the dynamics of galactic structure. The Red MSX Source (RMS) survey is a multi-wavelength investigation of ~1200 massive young stellar objects (MYSOs) and ultra-compact HII (UCHII) regions identified from a sample of colour-selected sources from the Midcourse Space Experiment (MSX) point source catalog and the Two Micron All Sky Survey. We present a study of over 900 MYSOs and UCHII regions investigated by the RMS survey. We review the methods used to determine distances and investigate the radial galactocentric distribution of these sources in the context of the observed structure of the Galaxy. The distribution of MYSOs and UCHII regions is found to be spatially correlated with the spiral arms and the Galactic bar. We examine the radial distribution of MYSOs and UCHII regions, find variations in the star formation rate between the inner and outer Galaxy, and discuss the implications for star formation throughout the Galactic disc.

  8. An evaluation of the methods of determining excited state population distributions from sputtering sources

    International Nuclear Information System (INIS)

    Snowdon, K.J.; Andresen, B.; Veje, E.

    1978-01-01

    The method of calculating relative initial level populations of excited states of sputtered atoms is developed in principle and compared with those in current use. The reason that the latter, although mathematically different, have generally led to similar population distributions is outlined. (Auth.)

  9. Source splitting via the point source method

    International Nuclear Information System (INIS)

    Potthast, Roland; Fazi, Filippo M; Nelson, Philip A

    2010-01-01

    We introduce a new algorithm for source identification and field splitting based on the point source method (Potthast 1998 A point-source method for inverse acoustic and electromagnetic obstacle scattering problems IMA J. Appl. Math. 61 119-40; Potthast R 1996 A fast new method to solve inverse scattering problems Inverse Problems 12 731-42). The task is to separate the sound fields u_j, j = 1, ..., n, of n sound sources supported in different bounded domains G_1, ..., G_n in R^3, from measurements of the field on some microphone array, mathematically speaking from the knowledge of the sum of the fields u = u_1 + ... + u_n on some open subset Λ of a plane. The main idea of the scheme is to calculate filter functions g_1, ..., g_n, to construct u_l for l = 1, ..., n from u|_Λ in the form u_l(x) = ∫_Λ g_{l,x}(y) u(y) ds(y), l = 1, ..., n. (1) We provide the complete mathematical theory for the field splitting via the point source method; in particular, we describe uniqueness, solvability of the problem, and convergence and stability of the algorithm. In the second part we describe the practical realization of the splitting for real data measurements carried out at the Institute of Sound and Vibration Research at Southampton, UK. A practical demonstration of the original recording and the splitting results for real data is available online.
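
    Once the filter functions have been computed, the splitting step in Eq. (1) is just a quadrature over the measurement plane. Below is a minimal sketch of that discretised step only; the construction of the filters g themselves, which is the heart of the point source method, is not shown.

    ```python
    import numpy as np

    def split_field(u_meas, g_filter, dA):
        """Discretised Eq. (1): u_l(x) = sum_i g_{l,x}(y_i) u(y_i) dA, where
        u_meas holds the measured total field u(y_i) on the planar array and
        g_filter holds the precomputed filter kernel g_{l,x}(y_i) for one
        source l and one reconstruction point x; dA is the grid area element."""
        return dA * np.sum(np.asarray(g_filter) * np.asarray(u_meas))
    ```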

  10. The evaluation of the earthquake hazard using the exponential distribution method for different seismic source regions in and around Ağrı

    Energy Technology Data Exchange (ETDEWEB)

    Bayrak, Yusuf, E-mail: ybayrak@agri.edu.tr [Ağrı İbrahim Çeçen University, Ağrı/Turkey (Turkey); Türker, Tuğba, E-mail: tturker@ktu.edu.tr [Karadeniz Technical University, Department of Geophysics, Trabzon/Turkey (Turkey)

    2016-04-18

    The aim of this study is to evaluate the earthquake hazard for different seismic source regions in and around Ağrı using the exponential distribution method. A homogeneous earthquake catalog of 456 events covering the instrumental period (1900-2015) has been compiled for Ağrı and its vicinity from several catalogs, including those of the Bogazici University Kandilli Observatory and Earthquake Research Institute (KOERI), the National Earthquake Monitoring Center (NEMC), TUBITAK, TURKNET, the International Seismological Centre (ISC), and IRIS. Ağrı and its vicinity are divided into 7 seismic source regions on the basis of the epicenter distribution of the instrumental-period earthquakes, focal mechanism solutions, and the existing tectonic structures. For each of the 7 source regions, the average magnitude is calculated within the specified magnitude ranges, and the largest differences between the observed and expected cumulative probabilities over the determined magnitude classes are found. The recurrence periods and the annual numbers of earthquakes are then estimated for Ağrı and its vicinity. As a result, occurrence probabilities of earthquakes of magnitude 3.2 and above were determined for the 7 seismic source regions: greater than magnitude 6.7 for Region 1, 4.7 for Region 2, 5.2 for Region 3, 6.2 for Region 4, 5.7 for Region 5, 7.2 for Region 6, and 6.2 for Region 7. The highest observed magnitude among the 7 seismic source regions of Ağrı and its vicinity is estimated as magnitude 7, in Region 6. For Region 6, the occurrence years of future earthquakes were estimated for the determined magnitudes; for a magnitude 7.2 event the estimate is 158
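
    As an illustration of the exponential-distribution idea, the recurrence period of events above a target magnitude can be estimated from a catalog as sketched below. The catalog here is synthetic and the completeness magnitude is a guessed placeholder; only the 456-event, 1900-2015 framing is taken from the abstract.

    ```python
    import numpy as np

    def recurrence_period(mags, years, m_min, m_target):
        """Recurrence period of events with magnitude >= m_target, assuming
        magnitudes above the completeness level m_min follow an exponential
        (Gutenberg-Richter type) distribution with rate parameter beta."""
        m = np.asarray(mags)
        m = m[m >= m_min]
        beta = 1.0 / np.mean(m - m_min)       # ML estimate of the exponential rate
        annual_rate = len(m) / years          # events with M >= m_min per year
        rate_target = annual_rate * np.exp(-beta * (m_target - m_min))
        return 1.0 / rate_target              # years between M >= m_target events

    # hypothetical catalog: 456 events over 115 years (1900-2015)
    rng = np.random.default_rng(0)
    mags = 4.0 + rng.exponential(scale=0.5, size=456)
    print(recurrence_period(mags, years=115, m_min=4.0, m_target=7.2))
    ```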

  11. Neutron distribution modeling based on integro-probabilistic approach of discrete ordinates method

    International Nuclear Information System (INIS)

    Khromov, V.V.; Kryuchkov, E.F.; Tikhomirov, G.V.

    1992-01-01

    This paper describes a universal nodal method for calculating the neutron distribution in reactor and shielding problems, based on the use of influence functions and factors of locally integrated volume and surface neutron sources in phase subregions. The method avoids the limited capabilities of the collision-probability method with respect to the detailed calculation of the angular neutron flux dependence, scattering anisotropy, and empty channels. The proposed method may be considered a modification of the discrete ordinates (Sn) method with the advantage of eliminating ray effects. The theory and algorithm of the method are described, followed by examples of its application to the calculation of the neutron distribution in a three-dimensional model of a fusion reactor blanket and in a highly heterogeneous reactor with an empty channel.

  12. Obtaining source current density related to irregularly structured electromagnetic target field inside human body using hybrid inverse/FDTD method.

    Science.gov (United States)

    Han, Jijun; Yang, Deqiang; Sun, Houjun; Xin, Sherman Xuegang

    2017-01-01

    The inverse method is inherently suitable for calculating the distribution of source current density related to an irregularly structured electromagnetic target field. However, the present form of the inverse method cannot handle complex field-tissue interactions. A novel hybrid inverse/finite-difference time domain (FDTD) method is proposed that can account for the complex field-tissue interactions in the inverse design of the source current density related to an irregularly structured electromagnetic target field. A Huygens' equivalent surface is established as a bridge that combines the inverse and FDTD methods. The distribution of the radiofrequency (RF) magnetic field on the Huygens' equivalent surface is obtained using the FDTD method, taking into account the complex field-tissue interactions within the human body model. The magnetic field obtained on the Huygens' equivalent surface is then regarded as the new target field, and the current density on the designated source surface is derived using the inverse method. The homogeneity of the target magnetic field and the specific energy absorption rate are calculated to verify the proposed method.

  13. Distribution and Source Identification of Pb Contamination in industrial soil

    Science.gov (United States)

    Ko, M. S.

    2017-12-01

    INTRODUCTION Lead (Pb) is a toxic element that induces neurotoxic effects in humans, because Pb competes with Ca in the nervous system. Lead is classified as a chalcophile element, and galena (PbS) is its major mineral. Although Pb is not an abundant element in nature, various anthropogenic sources have enhanced Pb enrichment in the environment since the Industrial Revolution. Representative anthropogenic sources are batteries, paint, mining, smelting, and the combustion of fossil fuel. Isotope analysis is widely used to identify Pb contamination sources. Pb has four stable isotopes in nature: 208Pb, 207Pb, 206Pb, and 204Pb. Because these isotopes are stable, their ratios are maintained during physical and chemical fractionation, so variations in Pb isotope abundances and relative ratios can point to a specific contamination source. In this study, the distribution and isotope ratios of Pb in industrial soil were used to identify the Pb contamination source and its dispersion pathways. MATERIALS AND METHODS Soil samples were collected at depths of 0-6 m in an industrial area in Korea. The collected soil samples were dried and sieved below 2 mm. Soil pH measurement, aqua regia digestion, and TCLP were carried out on the sieved samples, and isotope analysis was performed to determine the Pb isotope abundances. RESULTS AND DISCUSSION The study area is land developed for the promotion of industrial facilities. It was forest in 1980, and satellite images show how the land use changed over time; these changes imply the possibility that contaminated soil was brought in from outside. The Pb concentrations in the core samples were higher in the deeper soil than in the topsoil. In particular, the sample from 4 m depth showed the highest Pb concentration, approximately 1500 mg/kg, indicating that a distinct Pb source exists at 4 m depth. CONCLUSIONS This study investigated the distribution and source identification of Pb in industrial soil. The land use and Pb
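
    A common way to turn such isotope ratios into a source apportionment is two-endmember mixing. The sketch below is a generic illustration, not the procedure of this study, and all ratio values are hypothetical.

    ```python
    def source_fraction(r_sample, r_natural, r_anthropogenic):
        """Two-endmember mixing of 206Pb/207Pb ratios: the fraction of Pb
        attributable to the anthropogenic endmember, assuming simple linear
        mixing of ratios (adequate when the endmember Pb concentrations are
        similar)."""
        return (r_natural - r_sample) / (r_natural - r_anthropogenic)

    # hypothetical ratios: natural background 1.20, smelter-related Pb 1.10
    print(source_fraction(r_sample=1.13, r_natural=1.20, r_anthropogenic=1.10))  # 0.7
    ```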

  14. A diffusion-theoretical method to calculate the neutron flux distribution in multisphere configurations

    International Nuclear Information System (INIS)

    Schuerrer, F.

    1980-01-01

    For characterizing heterogeneous configurations of pebble-bed reactors, the fine structure of the flux distribution as well as the determination of the macroscopic neutron-physical quantities are of interest. When calculating the system parameters of Wigner-Seitz cells, the usual codes for neutron spectrum calculation neglect the modulation of the neutron flux by the influence of neighbouring spheres. To judge the error arising from that procedure, it is necessary to determine the flux distribution in the surroundings of a spherical fuel element. In the present paper, an approximation method for calculating the flux distribution in the two-sphere model is developed. The method is based on the exactly solvable problem of determining the flux of a point neutron source in an infinite medium that contains a spherical perturbation zone eccentric to the point source. An iteration method, which superposes secondary fields and alternately satisfies the continuity conditions on the surface of each of the two fuel elements, yields successively improved approximations. (orig.)

  15. High frequency seismic signal generated by landslides on complex topographies: from point source to spatially distributed sources

    Science.gov (United States)

    Mangeney, A.; Kuehnert, J.; Capdeville, Y.; Durand, V.; Stutzmann, E.; Kone, E. H.; Sethi, S.

    2017-12-01

    During their flow along the topography, landslides generate seismic waves in a wide frequency range. These so-called landquakes can be recorded at very large distances (a few hundred km for large landslides). The recorded signals depend on the landslide seismic source and on the seismic wave propagation. If the wave propagation is well understood, the seismic signals can be inverted for the seismic source and thus used to obtain information on the landslide properties and dynamics. Analysis and modeling of long-period seismic signals (10-150 s) have helped in this way to discriminate between different landslide scenarios and to constrain rheological parameters (e.g., Favreau et al., 2010). This was possible because topography poorly affects wave propagation at these long periods, and the landslide seismic source can be approximated as a point source. In the near field and at higher frequencies (> 1 Hz), the spatial extent of the source has to be taken into account, and the influence of the topography on the recorded seismic signal should be quantified in order to extract information on the landslide properties and dynamics. The characteristic signatures of distributed sources and varying topographies are studied as a function of frequency and recording distance. The time-dependent spatial distribution of the forces applied to the ground by the landslide is obtained using granular flow numerical modeling on 3D topography. The generated seismic waves are simulated using the spectral element method. The simulated seismic signal is compared to observed seismic data from rockfalls at the Dolomieu Crater of Piton de la Fournaise (La Réunion). Favreau, P., Mangeney, A., Lucas, A., Crosta, G., and Bouchut, F. (2010). Numerical modeling of landquakes. Geophysical Research Letters, 37(15):1-5.

  16. MEG (Magnetoencephalography) multipolar modeling of distributed sources using RAP-MUSIC (Recursively Applied and Projected Multiple Signal Characterization)

    Energy Technology Data Exchange (ETDEWEB)

    Mosher, J. C. (John C.); Baillet, S. (Sylvain); Jerbi, K. (Karim); Leahy, R. M. (Richard M.)

    2001-01-01

    We describe the use of truncated multipolar expansions for producing dynamic images of cortical neural activation from measurements of the magnetoencephalogram. We use a signal-subspace method to find the locations of a set of multipolar sources, each of which represents a region of activity in the cerebral cortex. Our method builds up an estimate of the sources in a recursive manner, i.e., we first search for point current dipoles, then magnetic dipoles, and finally first-order multipoles. The dynamic behavior of these sources is then computed using a linear fit to the spatiotemporal data. The final step in the procedure is to map each of the multipolar sources onto an equivalent distributed source on the cortical surface. The method is illustrated through an application to epileptic interictal MEG data.

  17. Wave resistance calculation method combining Green functions based on Rankine and Kelvin source

    Directory of Open Access Journals (Sweden)

    LI Jingyu

    2017-12-01

    Full Text Available [Objectives] At present, the Boundary Element Method (BEM) for wave-making resistance mostly uses a model in which the velocity distribution near the hull is solved first and the pressure integral is then calculated using the Bernoulli equation. However, this wave-making resistance model is complex and has low accuracy. [Methods] To address this problem, the present paper derives a compound method for the quick calculation of ship wave resistance, using the Rankine source Green function to solve for the hull surface source density and combining it with the Lagally theorem for source point force calculation based on the Kelvin source Green function, so as to obtain the wave resistance. A case study of the Wigley model is given. [Results] The results show that, in contrast to the thin-ship method of linear wave resistance theory, this method has higher precision, and in contrast to a method that uses the Kelvin source Green function throughout, it has better computational efficiency. [Conclusions] In general, the algorithm in this paper provides a compromise between precision and efficiency in wave-making resistance calculation.

  18. Deformation due to distributed sources in micropolar thermodiffusive medium

    Directory of Open Access Journals (Sweden)

    Sachin Kaushal

    2010-10-01

    Full Text Available The general solution to the field equations in a micropolar generalized thermodiffusive medium, in the context of G-L theory, is investigated by applying the Laplace and Fourier transforms for various sources. An application of distributed normal forces, thermal sources, or potential sources is considered to show the utility of the problem. To obtain the solution in physical form, a numerical inversion technique has been applied. The transformed components of stress, temperature distribution, and chemical potential for G-L theory and CT theory are depicted graphically, and the results are compared analytically to show the impact of diffusion, relaxation times, and micropolarity on these quantities. Some special cases of interest are also deduced from the present investigation.

  19. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local Universe

    Energy Technology Data Exchange (ETDEWEB)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene, E-mail: mertsch@nbi.ku.dk, E-mail: mohamed.rameez@nbi.ku.dk, E-mail: tamborra@nbi.ku.dk [Niels Bohr International Academy, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen (Denmark)

    2017-03-01

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between "warm" spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10{sup −6} Mpc{sup −3} and neutrino luminosity L{sub ν} ≲ 10{sup 42} erg s{sup −1} (10{sup 41} erg s{sup −1}) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At luminosities as low as those probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring a statistically significant direct observation of a point source.

  20. Estimating true human and animal host source contribution in quantitative microbial source tracking using the Monte Carlo method.

    Science.gov (United States)

    Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan

    2010-09-01

    Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genomes. The factors that can lead to false positive and false negative information in qPCR results are well defined, and it is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and to help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin, and the measurement error is derived from the sample precision error of replicated qPCR reactions. The Monte Carlo method is then applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which the expected value, confidence interval, and other statistical characteristics can easily be evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment, and other environmental models. The model was validated by both statistical simulations and real-world samples. It was able to correct the intrinsic false information associated with the qPCR assays and to output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error: the model performed reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvement on the precision of sample processing and q
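
    The Monte Carlo step can be sketched as follows. The distributions used here (beta distributions for the false-signal probabilities, a log-normal precision error) are placeholder assumptions standing in for the distributions the authors estimate from reference fecal samples of known origin.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def true_conc_distribution(c_obs, n=100_000):
        """Monte Carlo correction sketch: sample the false-positive contribution,
        the false-negative signal loss, and a log10-scale precision error, then
        invert the observed qPCR marker concentration."""
        p_fp = rng.beta(2, 50, n)          # fraction of signal that is false positive
        p_fn = rng.beta(5, 45, n)          # fraction of true signal lost
        err = rng.normal(0.0, 0.1, n)      # replicate precision error, log10 units
        c_meas = c_obs * 10.0 ** err       # propagate measurement error (symmetric)
        return c_meas * (1.0 - p_fp) / (1.0 - p_fn)

    c_true = true_conc_distribution(c_obs=1.0e4)
    print(np.mean(c_true), np.percentile(c_true, [2.5, 97.5]))  # mean and 95% interval
    ```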

  1. Distributed source term analysis, a new approach to nuclear material inventory verification

    CERN Document Server

    Beddingfield, D H

    2002-01-01

    The Distributed Source-Term Analysis (DSTA) technique is a new approach to measuring in-process material holdup that is a significant departure from traditional holdup measurement methodology. The DSTA method is a means of determining the mass of nuclear material within a large, diffuse volume using passive neutron counting. It is a more efficient approach than traditional methods of holdup measurement and inventory verification: the time spent in performing a DSTA measurement and analysis is a fraction of that required by traditional techniques, and the error ascribed to a DSTA survey result is generally less than that from traditional methods. Also, the negative bias ascribed to gamma-ray methods is greatly diminished because the DSTA method uses neutrons, which are more penetrating than gamma rays.

  2. Distributed source term analysis, a new approach to nuclear material inventory verification

    International Nuclear Information System (INIS)

    Beddingfield, D.H.; Menlove, H.O.

    2002-01-01

    The Distributed Source-Term Analysis (DSTA) technique is a new approach to measuring in-process material holdup that is a significant departure from traditional holdup measurement methodology. The DSTA method is a means of determining the mass of nuclear material within a large, diffuse volume using passive neutron counting. It is a more efficient approach than traditional methods of holdup measurement and inventory verification: the time spent in performing a DSTA measurement and analysis is a fraction of that required by traditional techniques, and the error ascribed to a DSTA survey result is generally less than that from traditional methods. Also, the negative bias ascribed to γ-ray methods is greatly diminished because the DSTA method uses neutrons, which are more penetrating than γ-rays

  3. A new hydraulic regulation method on district heating system with distributed variable-speed pumps

    International Nuclear Information System (INIS)

    Wang, Hai; Wang, Haiying; Zhu, Tong

    2017-01-01

    Highlights: • A hydraulic regulation method was presented for district heating with distributed variable-speed pumps. • Information and automation technologies were utilized to support the proposed method. • A new hydraulic model was developed for distributed variable-speed pumps. • A new optimization model was developed based on a genetic algorithm. • Two scenarios of a multi-source looped system were illustrated to validate the method. - Abstract: Compared with the hydraulic configuration based on a conventional central circulating pump, a district heating system with a distributed variable-speed-pump configuration can often save 30-50% of the power consumed by circulating pumps with frequency inverters. However, hydraulic regulation of a distributed variable-speed-pump configuration is more complicated, since all distributed pumps need to be adjusted to their designated flow rates. Especially in a multi-source looped heating network, where the distributed pumps have strongly coupled and severely non-linear hydraulic connections with each other, it is rather difficult to maintain hydraulic balance during regulation. In this paper, with the help of advanced automation and information technologies, a new hydraulic regulation method is proposed to achieve on-site hydraulic balance for district heating systems with distributed variable-speed-pump configurations. The proposed method comprises a new hydraulic model, developed to suit the distributed variable-speed-pump configuration, and a calibration model with a genetic algorithm. By carrying out the proposed method step by step, the flow rates of all distributed pumps can be progressively adjusted to their designated values. A hypothetical district heating system with 2 heat sources and 10 substations is taken as a case study to illustrate the feasibility of the proposed method. Two scenarios were investigated. In Scenario I, the

  4. Noise source separation of diesel engine by combining binaural sound localization method and blind source separation method

    Science.gov (United States)

    Yao, Jiachi; Xiang, Yang; Qian, Sichong; Li, Shengyang; Wu, Shaowei

    2017-11-01

    In order to separate and identify the combustion noise and the piston-slap noise of a diesel engine, a noise source separation and identification method that combines a binaural sound localization method with a blind source separation method is proposed. Because a diesel engine has many complex noise sources, a lead-covering treatment was applied to the engine during the noise and vibration test to isolate the interference noise from cylinders No. 1-5; only the No. 6 cylinder parts were left bare. Two microphones simulating the human ears were used to measure the radiated noise signals 1 m away from the engine. First, the binaural sound localization method is used to separate the noise sources located in different places. Then, for noise sources in the same place, the blind source separation method is used to further separate and identify them. Finally, a coherence function method, continuous wavelet time-frequency analysis, and prior knowledge of the diesel engine are combined to verify the separation results. The results show that the proposed method can effectively separate and identify the combustion noise and the piston-slap noise of a diesel engine, whose frequency content is concentrated at 4350 Hz and 1988 Hz, respectively. Compared with the blind source separation method alone, the proposed method has superior separation and identification performance, and the separation results contain fewer interference components from other noise.
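
    For the same-location separation step, an independent component analysis such as FastICA is one standard blind source separation technique. The snippet below is an illustrative stand-in (the paper does not specify FastICA), using two synthetic narrow-band signals at the reported 4350 Hz and 1988 Hz and a hypothetical 2 x 2 mixing matrix in place of the two-microphone measurement.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    fs = 48_000
    t = np.arange(fs) / fs
    s = np.c_[np.sin(2 * np.pi * 4350 * t),            # "combustion" tone
              np.sign(np.sin(2 * np.pi * 1988 * t))]   # "piston-slap" square wave
    x = s @ np.array([[1.0, 0.6],
                      [0.5, 1.0]])                     # observed microphone mixtures

    ica = FastICA(n_components=2, random_state=0)
    s_est = ica.fit_transform(x)                       # separated source estimates
    ```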

  5. Y-Source Boost DC/DC Converter for Distributed Generation

    DEFF Research Database (Denmark)

    Siwakoti, Yam P.; Loh, Poh Chiang; Blaabjerg, Frede

    2015-01-01

    This paper introduces a versatile Y-source boost dc/dc converter intended for distributed power generation, where high gain is often demanded. The proposed converter uses a Y-source impedance network realized with a tightly coupled three-winding inductor for high voltage boosting that is presently...

  6. Supply and distribution for γ-ray sources

    International Nuclear Information System (INIS)

    Yamamoto, Takeo

    1997-01-01

    The Japan Atomic Energy Research Institute (JAERI) is the only facility that supplies and distributes radioisotopes (RI) in Japan. The γ-ray sources are 192Ir and 169Yb for non-destructive examination, and 192Ir, 198Au and 153Gd for clinical use; all of the domestic demand for these is currently met by domestic products. Meanwhile, the imported γ-ray sources are 60Co for medical and industrial uses, including the sterilization of medical instruments, 137Cs for blood irradiation, and 241Am for industrial measurements. The major overseas suppliers are Nordion International Inc. and Amersham International plc. RI products on the market are divided into two groups: primary products, which are supplied in liquid or solid form after chemical or physical treatment of radioactive materials obtained from a reactor, and secondary products, which are final products after various processing steps. Generally, the secondary products are the ones used in practice. In Japan, both domestic and imported products are supplied to users via the Japan Radioisotope Association (JRIA). The association participates in the sales and distribution of the secondary products and in the processing of the primary products into sealed sources. Furthermore, stable supply systems for these products are essentially established according to the half-life of each nuclide, provided there is no reactor accident. (M.N.)

  7. Compressive Sensing Based Source Localization for Controlled Acoustic Signals Using Distributed Microphone Arrays

    Directory of Open Access Journals (Sweden)

    Wei Ke

    2017-01-01

    Full Text Available In order to enhance the accuracy of sound source localization in noisy and reverberant environments, this paper proposes an adaptive sound source localization method based on distributed microphone arrays. Since sound sources lie at only a few points in the discrete spatial domain, the method can exploit this inherent sparsity to convert the localization problem into a sparse recovery problem based on compressive sensing (CS) theory. A two-step discrete cosine transform (DCT) based feature extraction approach is utilized to cover both the short-time and long-time properties of the acoustic signals and to reduce the dimensions of the sparse model. In addition, an online dictionary learning (DL) method is used to adjust the dictionary to match the changes in the audio signals, so that the sparse solution better represents the location estimates. Moreover, an improved block-sparse reconstruction algorithm using approximate l0 norm minimization is proposed to enhance reconstruction performance for sparse signals under low signal-to-noise ratio (SNR) conditions. The effectiveness of the proposed scheme is demonstrated by simulation and experimental results, where substantial improvements in localization performance are obtained in noisy and reverberant conditions.
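
    The sparse-recovery view can be illustrated with a generic greedy solver. Below, orthogonal matching pursuit stands in for the paper's approximate l0 block-sparse algorithm, and the dictionary is a random placeholder rather than a learned one; the grid indices and sizes are hypothetical.

    ```python
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(1)
    n_meas, n_grid = 64, 400
    A = rng.standard_normal((n_meas, n_grid))    # placeholder steering dictionary
    x = np.zeros(n_grid)
    x[[57, 233]] = [1.0, 0.8]                    # two active source grid points
    y = A @ x + 0.01 * rng.standard_normal(n_meas)

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2).fit(A, y)
    print(np.nonzero(omp.coef_)[0])              # recovered locations: [57 233]
    ```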

  8. Presence of thallium in the environment: sources of contaminations, distribution and monitoring methods.

    Science.gov (United States)

    Karbowska, Bozena

    2016-11-01

    Thallium is released into the biosphere from both natural and anthropogenic sources. It is generally present in the environment at low levels; however, human activity has greatly increased its content. Atmospheric emission and deposition from industrial sources have resulted in increased concentrations of thallium in the vicinity of mineral smelters and coal-burning facilities. Increased levels of thallium are found in vegetables, fruit and farm animals. Thallium is toxic even at very low concentrations and tends to accumulate in the environment once it enters the food chain. Thallium and thallium-based compounds exhibit higher water solubility compared to other heavy metals. They are therefore also more mobile (e.g. in soil), generally more bioavailable and tend to bioaccumulate in living organisms. The main aim of this review was to summarize the recent data regarding the actual level of thallium content in environmental niches and to elucidate the most significant sources of thallium in the environment. The review also includes an overview of analytical methods, which are commonly applied for determination of thallium in fly ash originating from industrial combustion of coal, in surface and underground waters, in soils and sediments (including soil derived from different parent materials), in plant and animal tissues as well as in human organisms.

  9. Sediment sources and their Distribution in Chwaka Bay, Zanzibar ...

    African Journals Online (AJOL)

    This work establishes the sediment sources, character and distribution in Chwaka Bay using (i) stable isotope compositions of organic carbon (OC) and nitrogen, (ii) contents of OC, nitrogen and CaCO3, (iii) C/N ratios, (iv) the distribution of sediment mean grain size and sorting, and (v) the thickness of unconsolidated sediments.

  10. Non-iterative method to calculate the periodical distribution of temperature in reactors with thermal regeneration

    International Nuclear Information System (INIS)

    Sanchez de Alsina, O.L.; Scaricabarozzi, R.A.

    1982-01-01

    A non-iterative matrix method for calculating the periodic temperature distribution in reactors with thermal regeneration is presented. In the case of an exothermic reaction, a source term is included. A computer code was developed to calculate the final temperature distribution in the solids and the outlet temperatures of the gases. Results obtained for the oxidation of ethane in air, using the Dietrich kinetic data, are presented. This method is more advantageous than iterative methods. (E.G.)

  11. The effect of energy distribution of external source on source multiplication in fast assemblies

    International Nuclear Information System (INIS)

    Karam, R.A.; Vakilian, M.

    1976-02-01

    The essence of this study is the effect of the energy distribution of an external source on the detection rate as a function of k-effective in fast assemblies. This effect was studied in a fission chamber as a function of k, using the ABN cross-section set and the Mach 1 code. It was found that with a source having a fission spectrum, the reciprocal count rate versus mass relationship is linear down to a k-effective of 0.59. For a thermal source, linearity was never achieved. (author)

  12. Comparison of three methods of restoration of cosmic radio source profiles

    International Nuclear Information System (INIS)

    Malov, I.F.; Frolov, V.A.

    1986-01-01

    The effectiveness of three methods for restoring the radio brightness distribution over a source (the main solution, fitting, and the minimum-phase method, MPM) was compared on the basis of data on the modulus and phase of the luminosity function (LF) of 15 cosmic radio sources. It is concluded that the MPM can successfully compete with the other known methods. Its obvious advantage over the fitting method is that it gives an unambiguous and direct restoration, and its main advantage over the main solution is the feasibility of restoration in the absence of data on the LF phase, which reduces restoration errors.

  13. Improved Mirror Source Method in Room Acoustics

    Science.gov (United States)

    Mechel, F. P.

    2002-10-01

    Most authors in room acoustics regard the mirror source method (MS method) as the only exact method for evaluating sound fields in auditoria, yet evidently nobody applies it. The reason for this discrepancy is the abundantly high number of mirror sources reported as necessary in the literature, although such estimates of the needed number of mirror sources are mostly used to justify more or less heuristic modifications of the MS method. The present, intentionally tutorial article accentuates the analytical foundations of the MS method, whereby the number of needed mirror sources is already reduced. Further, the task of field evaluation in three-dimensional spaces is reduced to a sequence of tasks in two-dimensional room edges. This not only allows easier geometrical computations in two dimensions, but the sound field in a corner area can also be represented by a single (directional) source sitting on the corner line, so that only this "corner source" must be mirror-reflected in the further process. This procedure gives a drastic reduction of the number of needed equivalent sources. Finally, the traditional MS method is not applicable in rooms with convex corners (where the angle between the corner flanks, measured on the room side, exceeds 180°). In such cases, the MS method is combined here with the second principle of superposition (PSP), which reduces the scattering task at a convex corner to two sub-tasks between one flank and the median plane of the room wedge, i.e., always in concave corner areas where the MS method can be applied.
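
    For contrast with Mechel's corner-source reduction, the classical construction the article improves upon can be sketched in a few lines: for a rectangular (shoebox) room, the image sources are obtained by mirroring the source across the walls and translating by whole rooms. This is the textbook Allen-Berkley-style construction, not the method of the article.

    ```python
    import numpy as np
    from itertools import product

    def image_sources(src, room, order):
        """All image sources of a point source in a rectangular room
        [0,Lx] x [0,Ly] x [0,Lz] lying within +/-order mirrored-room
        translations along each axis (includes the original source)."""
        imgs = []
        for n in product(range(-order, order + 1), repeat=3):
            for eps in product((1, -1), repeat=3):
                # mirror across walls (eps) and translate by whole rooms (2*n*L)
                imgs.append([eps[i] * src[i] + 2 * n[i] * room[i] for i in range(3)])
        return np.unique(np.round(imgs, 9), axis=0)

    # images of a source in a 5 x 4 x 3 m room, one room translation per axis
    print(len(image_sources([1.0, 2.0, 1.5], [5.0, 4.0, 3.0], order=1)))  # 216
    ```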

  14. A virtual source method for Monte Carlo simulation of Gamma Knife Model C

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Hoon; Kim, Yong Kyun [Hanyang University, Seoul (Korea, Republic of); Chung, Hyun Tai [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2016-05-15

    The Monte Carlo simulation method has been used for the dosimetry of radiation treatment. Monte Carlo simulation determines the paths and dosimetry of particles using random numbers. Recently, owing to the fast processing abilities of computers, it has become possible to treat a patient more precisely, but long simulation times are still needed to reduce the statistical uncertainty of the results. When the particles are generated from the cobalt sources in a simulation, many of them are cut off by the collimation system, so accurate simulations take a long time. For efficiency, we generated a virtual source with the phase-space distribution acquired from a single Gamma Knife channel. We performed simulations using the virtual sources on the 201 channels and compared the measurements with the simulations using the virtual sources and the real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with the virtual source executed about 50 times faster than with the original source code, and there was no statistically significant difference in the simulated results.

  15. A virtual source method for Monte Carlo simulation of Gamma Knife Model C

    International Nuclear Information System (INIS)

    Kim, Tae Hoon; Kim, Yong Kyun; Chung, Hyun Tai

    2016-01-01

    The Monte Carlo simulation method has been used for the dosimetry of radiation treatment. Monte Carlo simulation determines the paths and dosimetry of particles using random numbers. Recently, owing to the fast processing abilities of computers, it has become possible to treat a patient more precisely, but long simulation times are still needed to reduce the statistical uncertainty of the results. When the particles are generated from the cobalt sources in a simulation, many of them are cut off by the collimation system, so accurate simulations take a long time. For efficiency, we generated a virtual source with the phase-space distribution acquired from a single Gamma Knife channel. We performed simulations using the virtual sources on the 201 channels and compared the measurements with the simulations using the virtual sources and the real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with the virtual source executed about 50 times faster than with the original source code, and there was no statistically significant difference in the simulated results

  16. Strong source heat transfer simulations based on a Galerkin/gradient-least-squares method

    International Nuclear Information System (INIS)

    Franca, L.P.; Carmo, E.G.D. do.

    1989-05-01

    Heat conduction problems with temperature-dependent strong sources are modeled by an equation with a Laplacian term, a linear term, and a given source distribution term. When the linear temperature-dependent source term is much larger than the Laplacian term, we have a singular perturbation problem. In this case, boundary layers are formed to satisfy the Dirichlet boundary conditions. Although this is an elliptic equation, the standard Galerkin method solution is contaminated by spurious oscillations in the neighborhood of the boundary layers. Herein we employ a Galerkin/gradient-least-squares method which eliminates all pathological phenomena of the Galerkin method. The method is constructed by adding to the Galerkin method a mesh-dependent term obtained from the least-squares form of the gradient of the Euler-Lagrange equation. Error estimates and numerical simulations in one and several dimensions are given, attesting to the good stability and accuracy properties of the method.

  17. Minimum-phase distribution of cosmic source brightness

    International Nuclear Information System (INIS)

    Gal'chenko, A.A.; Malov, I.F.; Mogil'nitskaya, L.F.; Frolov, V.A.

    1984-01-01

    Minimum-phase distributions of brightness (profiles) for the cosmic radio sources 3C 144 (wavelength lambda = 21 cm), 3C 338 (lambda = 3.5 m), and 3C 353 (lambda = 31.3 cm and 3.5 m) are obtained. A real possibility of recovering the profile from fragments of the modulus of its Fourier image is demonstrated.

  18. A method for determining the analytical form of a radionuclide depth distribution using multiple gamma spectrometry measurements

    Energy Technology Data Exchange (ETDEWEB)

    Dewey, Steven Clifford, E-mail: sdewey001@gmail.com [United States Air Force School of Aerospace Medicine, Occupational Environmental Health Division, Health Physics Branch, Radiation Analysis Laboratories, 2350 Gillingham Drive, Brooks City-Base, TX 78235 (United States); Whetstone, Zachary David, E-mail: zacwhets@umich.edu [Radiological Health Engineering Laboratory, Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, 1906 Cooley Building, Ann Arbor, MI 48109-2104 (United States); Kearfott, Kimberlee Jane, E-mail: kearfott@umich.edu [Radiological Health Engineering Laboratory, Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, 1906 Cooley Building, Ann Arbor, MI 48109-2104 (United States)

    2011-06-15

    When characterizing environmental radioactivity, whether in the soil or within concrete building structures undergoing remediation or decommissioning, it is highly desirable to know the radionuclide depth distribution. This is typically modeled using continuous analytical expressions whose forms are believed to best represent the true source distributions, and in situ gamma-ray spectroscopic measurements are combined with these models to fully describe the source. Currently, the choice of analytical expressions is based upon prior experimental core sampling results at similar locations, any known site history, or radionuclide transport models. This paper presents a method, employing multiple in situ measurements at a single site, for determining the analytical form that best represents the true depth distribution present. The measurements can be made using a variety of geometries, each of which has a different sensitivity variation with the source spatial distribution. Using non-linear least-squares numerical optimization methods, the results can be fit to a collection of analytical models and the parameters of each model determined. The analytical expression that results in the fit with the lowest residual is selected as the most accurate representation. A cursory examination is made of the effects of measurement errors on the method. - Highlights: • A new method for determining the radionuclide distribution as a function of depth is presented. • Multiple measurements are used, with enough measurements to determine the unknowns in the analytical functions that might describe the distribution. • The measurements must be as independent as possible, which is achieved through special collimation of the detector. • Although the effects of measurement errors may be significant, an improvement over other methods is anticipated.
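
    The model-selection step can be sketched as below. This is a simplified stand-in: in the paper each data point is a differently collimated in situ measurement whose modelled response depends on the trial distribution, whereas here two hypothetical two-parameter profiles are fitted directly to synthetic depth-response data and the lowest weighted residual wins.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # two hypothetical two-parameter depth profiles a(z)
    models = {
        "exponential": lambda z, a0, k: a0 * np.exp(-k * z),
        "linear":      lambda z, a0, b: a0 - b * z,
    }

    def best_model(z, data, sigma):
        """Fit each candidate profile by non-linear least squares and return
        the model with the smallest weighted residual sum of squares."""
        best, best_rss = None, np.inf
        for name, f in models.items():
            try:
                popt, _ = curve_fit(f, z, data, p0=(data.max(), 1.0),
                                    sigma=sigma, maxfev=5000)
            except RuntimeError:        # fit failed to converge
                continue
            rss = np.sum(((data - f(z, *popt)) / sigma) ** 2)
            if rss < best_rss:
                best, best_rss = name, rss
        return best, best_rss

    z = np.linspace(0.0, 30.0, 8)       # depths in cm (hypothetical)
    data = 100.0 * np.exp(-0.15 * z) + np.random.default_rng(5).normal(0, 2, 8)
    print(best_model(z, data, sigma=np.full(8, 2.0)))   # -> ('exponential', ...)
    ```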

  19. A new method to estimate heat source parameters in gas metal arc welding simulation process

    International Nuclear Information System (INIS)

    Jia, Xiaolei; Xu, Jie; Liu, Zhaoheng; Huang, Shaojie; Fan, Yu; Sun, Zhi

    2014-01-01

    Highlights: • A new method for accurate estimation of heat source parameters is presented. • Partial least-squares regression analysis is recommended within the method. • Welding experiment results verified the accuracy of the proposed method. -- Abstract: Heat source parameters are usually chosen by experience in the welding simulation process, which introduces errors into the simulation results (e.g. temperature distribution and residual stress). In this paper, a new method was developed to estimate heat source parameters accurately in welding simulation. To reduce the simulation complexity, a sensitivity analysis of the heat source parameters was carried out. The relationships between the heat source parameters and the weld pool characteristics (fusion width (W), penetration depth (D) and peak temperature (Tp)) were obtained with both multiple regression analysis (MRA) and partial least-squares regression analysis (PLSRA), with different regression models employed in each method, and the two methods were compared. A welding experiment was carried out to verify the method. The results showed that both the MRA and the PLSRA are feasible and accurate for the prediction of heat source parameters in welding simulation. However, the PLSRA is recommended for its advantage of requiring less simulation data.
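
    A minimal sketch of the recommended PLSRA step, with synthetic placeholder data: weld-pool characteristics (W, D, Tp) from a batch of simulations are regressed onto the heat source parameters that produced them, so that new pool measurements can be mapped back to parameter estimates. The parameter ranges and mixing matrix below are invented for illustration.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    # 40 simulated cases: 3 heat source parameters -> 3 pool characteristics
    params = rng.uniform([2.0, 2.0, 1.0], [6.0, 6.0, 4.0], size=(40, 3))
    pool = params @ rng.uniform(0.5, 1.5, size=(3, 3)) + rng.normal(0, 0.05, (40, 3))

    pls = PLSRegression(n_components=2).fit(pool, params)
    print(pls.predict(pool[:1]))   # estimated heat source parameters for one case
    ```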

  20. Panchromatic spectral energy distributions of Herschel sources

    Science.gov (United States)

    Berta, S.; Lutz, D.; Santini, P.; Wuyts, S.; Rosario, D.; Brisbin, D.; Cooray, A.; Franceschini, A.; Gruppioni, C.; Hatziminaoglou, E.; Hwang, H. S.; Le Floc'h, E.; Magnelli, B.; Nordon, R.; Oliver, S.; Page, M. J.; Popesso, P.; Pozzetti, L.; Pozzi, F.; Riguccini, L.; Rodighiero, G.; Roseboom, I.; Scott, D.; Symeonidis, M.; Valtchanov, I.; Viero, M.; Wang, L.

    2013-03-01

    Combining far-infrared Herschel photometry from the PACS Evolutionary Probe (PEP) and Herschel Multi-tiered Extragalactic Survey (HerMES) guaranteed time programs with ancillary datasets in the GOODS-N, GOODS-S, and COSMOS fields, it is possible to sample the 8-500 μm spectral energy distributions (SEDs) of galaxies with at least 7-10 bands. Extending to the UV, optical, and near-infrared, the number of bands increases up to 43. We reproduce the distribution of galaxies in a carefully selected restframe ten-color space, based on this rich dataset, using a superposition of multivariate Gaussian modes. We use this model to classify galaxies and build median SEDs of each class, which are then fitted with a modified version of the magphys code that combines stellar light, emission from dust heated by stars, and a possible warm dust contribution heated by an active galactic nucleus (AGN). The color distribution of galaxies in each of the considered fields can be well described with a combination of 6-9 classes, spanning a large range of far- to near-infrared luminosity ratios, as well as different strengths of the AGN contribution to the bolometric luminosity. The defined Gaussian grouping is used to identify rare or odd sources. The zoology of outliers includes Herschel-detected ellipticals, very blue z ~ 1 Ly-break galaxies, quiescent spirals, and torus-dominated AGN with star formation. Out of these groups and outliers, a new template library is assembled, consisting of 32 SEDs describing the intrinsic scatter in the restframe UV-to-submm colors of infrared galaxies. This library is tested against L(IR) estimates with and without Herschel data included, and compared to eight other popular methods often adopted in the literature. When implementing Herschel photometry, these approaches produce L(IR) values consistent with each other within a median absolute deviation of 10-20%, the scatter being dominated more by fine tuning of the codes, rather than by the choice of
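
    The Gaussian-mode decomposition of the color space can be sketched with a standard mixture fit. The snippet below is illustrative only: it uses synthetic 10-dimensional "colors" and two components, whereas the paper fits 6-9 modes to real restframe colors and inspects low-likelihood objects as outliers.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)
    # synthetic stand-in for a ten-color restframe space with two populations
    colours = np.vstack([rng.normal(0.0, 1.0, (500, 10)),
                         rng.normal(4.0, 1.0, (300, 10))])

    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          random_state=0).fit(colours)
    labels = gmm.predict(colours)          # class assignment for each galaxy
    log_like = gmm.score_samples(colours)  # low values flag rare or odd sources
    ```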

  1. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder; distributed video coding (DVC) is one example. This paper considers the use of Low-Density Parity-Check Accumulate (LDPCA) codes in a DSC scheme with feedback. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  2. A method to analyze "source-sink" structure of non-point source pollution based on remote sensing technology.

    Science.gov (United States)

    Jiang, Mengzhen; Chen, Haiying; Chen, Qinghui

    2013-11-01

    With the purpose of providing a scientific basis for environmental planning on non-point source pollution prevention and control, and of improving pollution regulation efficiency, this paper established a Grid Landscape Contrast Index based on the Location-weighted Landscape Contrast Index, according to the "source-sink" theory. The spatial distribution of non-point source pollution around the Jiulongjiang Estuary could be worked out by utilizing high-resolution remote sensing images. The results showed that the "source" area for nitrogen and phosphorus in the Jiulongjiang Estuary was 534.42 km² in 2008, and the "sink" area was 172.06 km². The "source" of non-point source pollution was distributed mainly over Xiamen Island, most of Haicang, the east of Jiaomei, and the river banks of Gangwei and Shima, while the "sink" was distributed over the southwest of Xiamen Island and the west of Shima. Generally speaking, the intensity of the "source" gets weaker as the distance from the sea boundary increases, while the "sink" gets stronger. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Review and evaluation of spark source mass spectrometry as an analytical method

    International Nuclear Information System (INIS)

    Beske, H.E.

    1981-01-01

    The analytical features and most important fields of application of spark source mass spectrometry are described with respect to the trace analysis of high-purity materials and the multielement analysis of technical alloys, geochemical and cosmochemical, biological and radioactive materials, as well as environmental analysis. Comparisons are made with other analytical methods. The distribution of the method as well as opportunities for contract analysis are indicated, and development trends are discussed. (orig.)

  4. A matrix-inversion method for gamma-source mapping from gamma-count data - 59082

    International Nuclear Information System (INIS)

    Bull, Richard K.; Adsley, Ian; Burgess, Claire

    2012-01-01

    Gamma-ray counting is often used to survey the distribution of active waste material in various locations. Ideally the output from such surveys would be a map of the activity of the waste. In this paper a simple matrix-inversion method is presented. This allows an array of gamma-count data to be converted into an array of source activities. For each survey area the response matrix is computed using the gamma-shielding code Microshield [1]. This matrix links the activity array to the count array, and the activity array is then obtained via matrix inversion. The method was tested on artificially created arrays of count data to which statistical noise had been added, and it was able to reproduce, quite faithfully, the original activity distribution used to generate the dataset. The method has been applied to a number of practical cases, including the distribution of activated objects in a hot cell and activated Nimonic springs among fuel-element debris in vaults at a nuclear plant. (authors)
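
    The unfolding step itself is a single linear solve. Below is a minimal sketch with a hypothetical 3-cell response matrix; in practice each element of R would come from a shielding calculation (such as Microshield) for the survey geometry.

    ```python
    import numpy as np

    def unfold_activities(R, counts):
        """Solve counts = R @ activities for the activity array. R[i, j] is the
        count rate at detector position i per unit activity in cell j. lstsq is
        used so the system may also be over-determined (more count points than
        cells)."""
        act, *_ = np.linalg.lstsq(R, np.asarray(counts), rcond=None)
        return act

    # hypothetical well-conditioned 3-cell example
    R = np.array([[1.0, 0.3, 0.1],
                  [0.3, 1.0, 0.3],
                  [0.1, 0.3, 1.0]])
    true_act = np.array([5.0, 1.0, 2.0])
    print(unfold_activities(R, R @ true_act))    # recovers [5. 1. 2.]
    ```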

  5. Power Law Distributions in the Experiment for Adjustment of the Ion Source of the NBI System

    International Nuclear Information System (INIS)

    Han Xiaopu; Hu Chundong

    2005-01-01

    The empirical adjustment process in an experiment on the ion source of the neutral beam injector system for the HT-7 Tokamak is reported in this paper. For data obtained under the same conditions, when the arc current intensities of all shots are arranged in decreasing rank, the distributions of the arc current intensity follow power laws, and the distribution obtained in the condition with the cryo-pump corresponds to a double Pareto distribution. Using a similar analysis, the distributions of the arc duration are also close to power laws. These power-law distributions are formed quite naturally rather than being the result of purposeful seeking.

  6. Parameter optimization in biased decoy-state quantum key distribution with both source errors and statistical fluctuations

    Science.gov (United States)

    Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin

    2017-10-01

    The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model of full parameter optimization in biased decoy-state QKD with phase-randomized sources. We adopt this model to carry out simulations of two widely used sources: the weak coherent source (WCS) and the heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate, and that when source errors and statistical fluctuations are taken into account, the performance of decoy-state QKD using an HSPS suffers less than that of decoy-state QKD using a WCS.

  7. Distributed Remote Vector Gaussian Source Coding with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider a distributed remote source coding problem, where a sequence of observations of source vectors is available at the encoder. The problem is to specify the optimal rate for encoding the observations subject to a covariance matrix distortion constraint and in the presence...

  8. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Science.gov (United States)

    2010-01-01

    16 CFR Part 1512 (Commercial Practices, Requirements for Bicycles), Table 4 to Part 1512 - Relative Energy Distribution of Sources:

    Wavelength (nanometers)   Relative energy
    380                       9.79
    390                       12.09
    400                       14.71
    410                       17.68
    420                       21...

  9. Distribution network planning method considering distributed generation for peak cutting

    International Nuclear Information System (INIS)

    Ouyang Wu; Cheng Haozhong; Zhang Xiubin; Yao Liangzhong

    2010-01-01

    Conventional distribution planning methods based on peak load bring about large investment, high risk, and low utilization efficiency. A distribution network planning method considering distributed generation (DG) for peak cutting is proposed in this paper. The new integrated distribution network planning method with DG implementation aims to minimize the sum of feeder investments, DG investments, energy loss cost, and the additional cost of DG for peak cutting. Using a solution technique combining a genetic algorithm (GA) with a heuristic approach, the proposed model determines the optimal planning scheme, including the feeder network and the siting and sizing of DG. The strategy for the siting and sizing of DG, which is based on the radial structure of the distribution network, reduces the complexity of solving the optimization model and eases the computational burden substantially. Furthermore, the operation schedule of DG at the different load levels is also provided.

  10. Methods of computer experiment in gamma-radiation technologies using new radiation sources

    CERN Document Server

    Bratchenko, M I; Rozhkov, V V

    2001-01-01

    Presented is the methodology of applying computer modeling to the physical substantiation of new irradiation technologies and to the irradiator design workflow. Modeling tasks for irradiation technologies are structured, along with computerized methods for their solution and appropriate types of software. A comparative analysis of available packages for Monte Carlo modeling of electromagnetic processes in media is carried out with respect to their application to irradiation technology problems. The results of code approbation and preliminary data on gamma-radiation absorbed dose distributions for nuclides of conventional sources and prospective europium-based gamma sources are presented.

  11. Calculation Methods for Wallenius’ Noncentral Hypergeometric Distribution

    DEFF Research Database (Denmark)

    Fog, Agner

    2008-01-01

    Two different probability distributions are both known in the literature as "the" noncentral hypergeometric distribution. Wallenius' noncentral hypergeometric distribution can be described by an urn model without replacement with bias. Fisher's noncentral hypergeometric distribution...... is the conditional distribution of independent binomial variates given their sum. No reliable calculation method for Wallenius' noncentral hypergeometric distribution has hitherto been described in the literature. Several new methods for calculating probabilities from Wallenius' noncentral hypergeometric...... distribution are derived. Range of applicability, numerical problems, and efficiency are discussed for each method. Approximations to the mean and variance are also discussed. This distribution has important applications in models of biased sampling and in models of evolutionary systems....
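
    The urn model itself is easy to simulate, which gives a baseline against which any calculation method can be checked. A minimal Monte Carlo sketch in Python (parameters chosen arbitrarily for illustration):

      import numpy as np
      rng = np.random.default_rng(0)

      def wallenius_pmf_mc(x, n, m1, m2, omega, trials=100_000):
          # P(X = x) for Wallenius' distribution: n sequential draws without
          # replacement from an urn with m1 "red" balls of weight omega and
          # m2 "white" balls of weight 1; each draw is biased by the total
          # weight of the balls still in the urn.
          hits = 0
          for _ in range(trials):
              r, w, taken = m1, m2, 0
              for _ in range(n):
                  if rng.uniform() < omega*r/(omega*r + w):
                      r -= 1
                      taken += 1
                  else:
                      w -= 1
              hits += (taken == x)
          return hits/trials

      print(wallenius_pmf_mc(x=3, n=6, m1=8, m2=12, omega=2.0))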

  12. Freeze drying method for preparing radiation source material

    International Nuclear Information System (INIS)

    Mosley, W.C.; Smith, P.K.

    1976-01-01

    Fabrication of a neutron source is specifically claimed. A palladium/californium solution is freeze dried to form a powder which, through conventional powder metallurgy, is shaped into a source containing the californium evenly distributed through a palladium metal matrix. (E.C.B.)

  13. Methods for Distributed Optimal Energy Management

    DEFF Research Database (Denmark)

    Brehm, Robert

    The presented research deals with the fundamental underlying methods and concepts of how the growing number of distributed generation units based on renewable energy resources and distributed storage devices can be most efficiently integrated into the existing utility grid. In contrast...... to conventional centralised optimal energy flow management systems, herein the focus is set on how optimal energy management can be achieved in a decentralised distributed architecture such as a multi-agent system. Distributed optimisation methods are introduced, targeting optimisation of energy flow in virtual......-consumption of renewable energy resources in low voltage grids. It can be shown that this method prevents mutual discharging of batteries and prevents peak loads, and that a supervisory control instance can dictate the level of autarchy from the utility grid. Further it is shown that the problem of optimal energy flow management...

  14. Identifying (subsurface) anthropogenic heat sources that influence temperature in the drinking water distribution system

    Science.gov (United States)

    Agudelo-Vera, Claudia M.; Blokker, Mirjam; de Kater, Henk; Lafort, Rob

    2017-09-01

    The water temperature in the drinking water distribution system and at customers' taps approaches the surrounding soil temperature at a depth of 1 m. Water temperature is an important determinant of water quality. In the Netherlands drinking water is distributed without additional residual disinfectant, and the temperature of drinking water at customers' taps is not allowed to exceed 25 °C. In recent decades, the urban (sub)surface has become increasingly occupied by various types of infrastructure, some of which can be heat sources. Only recently have these anthropogenic sources and their influence on the underground been studied, and only on coarse spatial scales. Little is known about the urban shallow underground heat profile on small spatial scales, of the order of 10 m × 10 m. Routine water quality samples at the tap in urban areas have revealed locations in the city, so-called hotspots, with soil temperatures up to 7 °C warmer than in the surrounding rural areas. Yet the sources and the locations of these hotspots have not been identified. With climate change, the soil temperature in these hotspots is expected to exceed 25 °C during a warm summer. The objective of this paper is to find a method to identify heat sources and urban characteristics that locally influence the soil temperature. The proposed method combines mapping of urban anthropogenic heat sources, retrospective modelling of the soil temperature, analysis of water temperature measurements at the tap, and extensive soil temperature measurements. This approach provided insight into the typical range of variation of the urban soil temperature, and it is a first step towards identifying areas with potential underground heat stress and towards thermal management of the urban underground.

  15. A practical method for in-situ thickness determination using energy distribution of beta particles

    International Nuclear Information System (INIS)

    Yalcin, S.; Gurler, O.; Gundogdu, O.; Bradley, D.A.

    2012-01-01

    This paper discusses a method to determine the thickness of an absorber using the energy distribution of beta particles. An empirical relationship was obtained between the absorber thickness and the energy distribution of beta particles transmitted through it. The thickness of a polyethylene radioactive source cover was determined by exploiting this relationship, which had largely been left unexploited, allowing us to determine the in-situ cover thickness of beta sources in a fast, cheap and non-destructive way. - Highlights: ► A practical, in-situ determination of unknown cover thickness. ► Cheap and readily available compared to other techniques. ► Beta energy spectrum.
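
    The abstract does not reproduce the empirical relationship, but the general workflow (calibrate a thickness-energy curve, then invert it for an unknown cover) can be sketched as follows; the exponential form and all numbers are illustrative assumptions, not the authors' fit.

      import numpy as np
      from scipy.optimize import curve_fit

      # assumed calibration pairs: absorber thickness (mm) vs mean beta energy (keV)
      t = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
      E = np.array([570.0, 505.0, 447.0, 396.0, 350.0])

      model = lambda t, E0, k: E0*np.exp(-k*t)     # assumed functional form
      (E0, k), _ = curve_fit(model, t, E, p0=(600.0, 1.0))

      E_obs = 430.0   # mean energy measured through the unknown cover
      print("estimated cover thickness: %.2f mm" % (np.log(E0/E_obs)/k))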

  16. A practical method for in-situ thickness determination using energy distribution of beta particles

    Energy Technology Data Exchange (ETDEWEB)

    Yalcin, S., E-mail: syalcin@kastamonu.edu.tr [Kastamonu University, Education Faculty, 37200 Kastamonu (Turkey); Gurler, O. [Physics Department, Faculty of Arts and Sciences, Uludag University, Gorukle Campus, 16059 Bursa (Turkey); Gundogdu, O. [Kocaeli University, Umuttepe Campus, 41380 Kocaeli (Turkey); Bradley, D.A. [CNRP, Department of Physics, Faculty of Engineering and Physical Sciences, University of Surrey, Guildford GU2 7XH (United Kingdom)

    2012-01-15

    This paper discusses a method to determine the thickness of an absorber using the energy distribution of beta particles. An empirical relationship was obtained between the absorber thickness and the energy distribution of beta particles transmitted through it. The thickness of a polyethylene radioactive source cover was determined by exploiting this relationship, which had largely been left unexploited, allowing us to determine the in-situ cover thickness of beta sources in a fast, cheap and non-destructive way. - Highlights: ► A practical, in-situ determination of unknown cover thickness. ► Cheap and readily available compared to other techniques. ► Beta energy spectrum.

  17. Continuous-variable quantum key distribution with Gaussian source noise

    International Nuclear Information System (INIS)

    Shen Yujie; Peng Xiang; Yang Jian; Guo Hong

    2011-01-01

    Source noise affects the security of continuous-variable quantum key distribution (CV QKD) and is difficult to analyze. We propose a model to characterize Gaussian source noise through introducing a neutral party (Fred) who induces the noise with a general unitary transformation. Without knowing Fred's exact state, we derive the security bounds for both reverse and direct reconciliations and show that the bound for reverse reconciliation is tight.

  18. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks.

    Science.gov (United States)

    Ma, Junjie; Meng, Fansheng; Zhou, Yuexi; Wang, Yeyao; Shi, Ping

    2018-02-16

    Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths.
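
    The outer layer of such a search (particles = mobile nodes, fitness = agreement between modelled and measured concentrations) can be sketched with a standard global-best PSO. Everything below, including the Gaussian plume stand-in for the real spectrometry-based measurements, is an illustrative assumption:

      import numpy as np
      rng = np.random.default_rng(3)

      true_src = np.array([62.0, 38.0])                  # unknown in practice
      nodes = rng.uniform(0, 100, (12, 2))               # node positions (m)
      meas = np.exp(-np.sum((nodes - true_src)**2, 1)/800.0)  # assumed plume

      def misfit(p):
          pred = np.exp(-np.sum((nodes - p)**2, 1)/800.0)
          return np.sum((pred - meas)**2)

      # standard global-best PSO over candidate source positions
      P = rng.uniform(0, 100, (30, 2))
      V = np.zeros_like(P)
      pbest, pval = P.copy(), np.array([misfit(p) for p in P])
      gbest = pbest[pval.argmin()]
      for _ in range(80):
          V = (0.7*V + 1.5*rng.uniform(size=P.shape)*(pbest - P)
                     + 1.5*rng.uniform(size=P.shape)*(gbest - P))
          P = np.clip(P + V, 0, 100)
          v = np.array([misfit(p) for p in P])
          better = v < pval
          pbest[better], pval[better] = P[better], v[better]
          gbest = pbest[pval.argmin()]
      print("estimated source position:", np.round(gbest, 1))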

  19. Radial dose distribution of 192Ir and 137Cs seed sources

    International Nuclear Information System (INIS)

    Thomason, C.; Higgins, P.

    1989-01-01

    The radial dose distributions in water around 192Ir seed sources with both platinum and stainless steel encapsulation have been measured using LiF thermoluminescent dosimeters (TLD) for distances of 1 to 12 cm along the perpendicular bisector of the source to determine the effect of source encapsulation. Similar measurements also have been made around a 137Cs seed source of comparable dimensions. The data were fit to a third order polynomial to obtain an empirical equation for the radial dose factor which then can be used in dosimetry. The coefficients of this equation for each of the three sources are given. The radial dose factor of the stainless steel encapsulated 192Ir and that of the platinum encapsulated 192Ir agree to within 2%. The radial dose distributions measured here for 192Ir with either type of encapsulation and for 137Cs are indistinguishable from those of other authors when considering the uncertainties involved. For clinical dosimetry based on isotropic point or line source models, any of these equations may be used without significantly affecting accuracy.
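
    Fitting a third-order polynomial to measured radial dose factors, as described, is a one-liner in practice. A sketch with made-up TLD readings (not the paper's data):

      import numpy as np

      # illustrative TLD readings: distance r (cm) vs radial dose factor g(r)
      r = np.array([1, 2, 3, 4, 5, 6, 8, 10, 12], dtype=float)
      g = np.array([1.000, 0.998, 0.993, 0.985, 0.974, 0.960, 0.925, 0.880, 0.828])

      c3, c2, c1, c0 = np.polyfit(r, g, 3)     # third-order fit, as in the record
      print("g(r) = %.4f + %.4f r + %.5f r^2 + %.6f r^3" % (c0, c1, c2, c3))
      print("interpolated g(7 cm) = %.3f" % np.polyval([c3, c2, c1, c0], 7.0))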

  20. Source Coding for Wireless Distributed Microphones in Reverberant Environments

    DEFF Research Database (Denmark)

    Zahedi, Adel

    2016-01-01

    . However, it comes with the price of several challenges, including the limited power and bandwidth resources for wireless transmission of audio recordings. In such a setup, we study the problem of source coding for the compression of the audio recordings before the transmission in order to reduce the power...... consumption and/or transmission bandwidth by reduction in the transmission rates. Source coding for wireless microphones in reverberant environments has several special characteristics which make it more challenging in comparison with regular audio coding. The signals which are acquired by the microphones......Modern multimedia systems are more and more shifting toward distributed and networked structures. This includes audio systems, where networks of wireless distributed microphones are replacing the traditional microphone arrays. This allows for flexibility of placement and high spatial diversity...

  1. Future prospects for ECR ion sources with improved charge state distributions

    International Nuclear Information System (INIS)

    Alton, G.D.

    1995-01-01

    Despite the steady advance in the technology of the ECR ion source, present art forms have not yet reached their full potential in terms of charge state and intensity within a particular charge state, in part because of the narrow-bandwidth, single-frequency microwave radiation used to heat the plasma electrons. This article identifies fundamentally important methods which may enhance the performance of ECR ion sources through the use of: (1) a tailored magnetic field configuration (spatial domain) in combination with single-frequency microwave radiation to create a large, uniformly distributed ECR ''volume''; or (2) broadband frequency-domain techniques (variable-frequency, broadband, or multiple-discrete-frequency microwave radiation), derived from standard TWT technology, to transform the resonant plasma ''surfaces'' of traditional ECR ion sources into resonant plasma ''volumes''. The creation of a large ECR plasma ''volume'' permits coupling of more power into the plasma, resulting in the heating of a much larger electron population to higher energies, thereby producing higher charge state ions and much higher intensities within a particular charge state than possible in present forms of the source. The ECR ion source concepts described in this article offer exciting opportunities to significantly advance the state of the art of ECR technology and, as a consequence, open new opportunities in fundamental and applied research and for a variety of industrial applications.

  2. Tsunami Simulation Method Assimilating Ocean Bottom Pressure Data Near a Tsunami Source Region

    Science.gov (United States)

    Tanioka, Yuichiro

    2018-02-01

    A new method was developed to reproduce the tsunami height distribution in and around the source area, at a certain time, from a large number of ocean bottom pressure sensors, without information on an earthquake source. A dense cabled observation network called S-NET, which consists of 150 ocean bottom pressure sensors, was installed recently along a wide portion of the seafloor off Kanto, Tohoku, and Hokkaido in Japan. However, in the source area, the ocean bottom pressure sensors cannot observe directly an initial ocean surface displacement. Therefore, we developed the new method. The method was tested and functioned well for a synthetic tsunami from a simple rectangular fault with an ocean bottom pressure sensor network using 10 arc-min, or 20 km, intervals. For a test case that is more realistic, ocean bottom pressure sensors with 15 arc-min intervals along the north-south direction and sensors with 30 arc-min intervals along the east-west direction were used. In the test case, the method also functioned well enough to reproduce the tsunami height field in general. These results indicated that the method could be used for tsunami early warning by estimating the tsunami height field just after a great earthquake without the need for earthquake source information.

  3. Neutron Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code

    Science.gov (United States)

    Faghihi, F.; Mehdizadeh, S.; Hadad, K.

    The neutron fluence rate of a low-intensity Pu-Be source is measured by neutron activation analysis (NAA) of 197Au foils. In addition, the neutron fluence rate distribution versus energy is calculated using the MCNP-4B code based on the ENDF/B-V library. The theoretical simulation, together with the experimental work, establishes confidence in the code for further research. In the theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated and the relative neutron fluence rate versus energy is calculated using the MCNP-4B code. The variations of the fast and thermal neutron fluence rates, measured by the NAA method and calculated with the MCNP code, are compared.
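
    The NAA step rests on the standard activation relation A = φσN(1 − e^(−λt_irr)), solved for the flux φ. A small Python sketch, with every foil and activity number invented for illustration:

      import numpy as np

      # flux from the activation relation A = phi*sigma*N*(1 - exp(-lambda*t_irr));
      # all numbers below are illustrative assumptions
      sigma = 98.65e-24                  # 197Au(n,gamma) thermal cross-section, cm^2
      lam = np.log(2)/(2.6947*86400)     # 198Au decay constant, 1/s
      N = 0.050/196.97*6.022e23          # atoms in a 50 mg gold foil
      t_irr = 6*3600                     # irradiation time, s
      A = 1.2e4                          # activity at end of irradiation, Bq

      phi = A/(sigma*N*(1 - np.exp(-lam*t_irr)))
      print("thermal neutron fluence rate = %.2e n/cm^2/s" % phi)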

  4. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA

    Science.gov (United States)

    Cosandier-Rimélé, D.; Ramantani, G.; Zentner, J.; Schulze-Bonhage, A.; Dümpelmann, M.

    2017-10-01

    Objective. Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. Approach. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. Main results. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. Significance. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  5. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA.

    Science.gov (United States)

    Cosandier-Rimélé, D; Ramantani, G; Zentner, J; Schulze-Bonhage, A; Dümpelmann, M

    2017-10-01

    Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  6. Development of an asymmetric multiple-position neutron source (AMPNS) method to monitor the criticality of a degraded reactor core

    International Nuclear Information System (INIS)

    Kim, S.S.; Levine, S.H.

    1985-01-01

    An analytical/experimental method has been developed to monitor the subcritical reactivity and unfold the k∞ distribution of a degraded reactor core. The method uses several fixed neutron detectors and a Cf-252 neutron source placed sequentially in multiple positions in the core. Therefore, it is called the Asymmetric Multiple Position Neutron Source (AMPNS) method. The AMPNS method employs nucleonic codes to analyze the neutron multiplication of a Cf-252 neutron source. An optimization program, GPM, is utilized to unfold the k∞ distribution of the degraded core, in which the desired performance measure minimizes the error between the calculated and the measured count rates of the degraded reactor core. The analytical/experimental approach is validated by performing experiments using the Penn State Breazeale TRIGA Reactor (PSBR). A significant result of this study is that it provides a method to monitor the criticality of a damaged core during the recovery period.

  7. Spatial distribution and source apportionment of water pollution in different administrative zones of Wen-Rui-Tang (WRT) river watershed, China.

    Science.gov (United States)

    Yang, Liping; Mei, Kun; Liu, Xingmei; Wu, Laosheng; Zhang, Minghua; Xu, Jianming; Wang, Fan

    2013-08-01

    Water quality degradation in river systems has caused great concern all over the world. Identifying the spatial distribution and sources of water pollutants is the very first step for efficient water quality management. A set of water samples collected bimonthly at 12 monitoring sites in 2009 and 2010 were analyzed to determine the spatial distribution of critical parameters and to apportion the sources of pollutants in the Wen-Rui-Tang (WRT) river watershed, near the East China Sea. The 12 monitoring sites were divided into three administrative zones (urban, suburban, and rural), considering differences in land use and population density. Multivariate statistical methods [one-way analysis of variance, principal component analysis (PCA), and absolute principal component score-multiple linear regression (APCS-MLR) methods] were used to investigate the spatial distribution of water quality and to apportion the pollution sources. Results showed that most water quality parameters did not differ significantly between the urban and suburban zones, whereas these two zones showed worse water quality than the rural zone. Based on the PCA and APCS-MLR analysis, urban domestic sewage and commercial/service pollution, suburban domestic sewage together with fluorine point-source pollution, and agricultural nonpoint-source pollution with rural domestic sewage were identified as the main pollution sources in the urban, suburban, and rural zones, respectively. Understanding the water pollution characteristics of different administrative zones provides insight for effective water management policy-making, especially in areas that span several administrative zones.
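
    The APCS-MLR receptor-modelling step mentioned here follows a standard recipe: PCA on standardized species concentrations, conversion of the scores to absolute principal component scores via a true-zero sample, then regression of the total load on those scores. A minimal sketch with random stand-in data (not the study's measurements):

      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(7)
      X = rng.lognormal(size=(60, 8))                     # stand-in species matrix
      y = X[:, :3].sum(axis=1) + rng.normal(0, 0.1, 60)   # stand-in total load

      scaler = StandardScaler().fit(X)
      pca = PCA(n_components=3).fit(scaler.transform(X))
      S = pca.transform(scaler.transform(X))
      S0 = pca.transform(scaler.transform(np.zeros((1, 8))))  # true-zero sample
      APCS = S - S0                                       # absolute PC scores

      reg = LinearRegression().fit(APCS, y)
      mean_contrib = reg.coef_*APCS.mean(axis=0)          # mean contribution per source
      print(np.round(mean_contrib, 2), "unexplained:", round(reg.intercept_, 2))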

  8. Alpha-particle autoradiography by solid state track detectors to spatial distribution of radioactivity in alpha-counting source

    International Nuclear Information System (INIS)

    Ishigure, Nobuhito; Nakano, Takashi; Enomoto, Hiroko; Koizumi, Akira; Miyamoto, Katsuhiro

    1989-01-01

    A technique of autoradiography using solid state track detectors is described by which the spatial distribution of radioactivity in an alpha-counting source can easily be visualized. As solid state track detectors, a polymer of allyl diglycol carbonate was used. An advantage of the present technique is that the alpha-emitters, which require special care from the standpoint of radiation protection, can be handled in the light throughout the whole course of autoradiography, whereas in conventional autoradiography they must be handled, with difficulty, in the dark. The technique was applied to a rough examination of the self-absorption of plutonium sources prepared by different methods: source (A) was prepared by drying at room temperature; (B) by drying under an infrared lamp; (C) by drying in an ammonia atmosphere after redissolving the deposit with a drop of distilled water following complete evaporation under an infrared lamp; and (D) by drying under an infrared lamp after adding a drop of diluted neutral detergent. The differences in the spatial distributions of radioactivity could clearly be observed on the autoradiographs. For example, source (C) showed the most diffuse distribution, suggesting that its self-absorption was the smallest. The autoradiographic observations were in accordance with the results of alpha-spectrometry with a silicon surface-barrier detector. (author)

  9. Source signature estimation from multimode surface waves via mode-separated virtual real source method

    Science.gov (United States)

    Gao, Lingli; Pan, Yudi

    2018-05-01

    The correct estimation of the seismic source signature is crucial to exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way for source signature estimation. However, when encountering multimode surface waves, which are commonly seen in the shallow seismic survey, strong spurious events appear in seismic interferometric results. These spurious events introduce errors in the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method. In this method, multimode surface waves are mode separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded in the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, while strong spurious oscillation occurs in the estimated source signature if we do not apply mode separation first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating seismic source signature from shallow seismic shot gathers containing multimode surface waves.

  10. Multi-source analysis reveals latitudinal and altitudinal shifts in range of Ixodes ricinus at its northern distribution limit

    Directory of Open Access Journals (Sweden)

    Kristoffersen Anja B

    2011-05-01

    Full Text Available Abstract Background There is increasing evidence for a latitudinal and altitudinal shift in the distribution range of Ixodes ricinus. The reported incidence of tick-borne disease in humans is on the rise in many European countries and has raised political concern and attracted media attention. It is disputed which factors are responsible for these trends, though many ascribe shifts in distribution range to climate change. Any possible climate effect would be most easily noticeable close to the tick's geographical distribution limits. In Norway, the northern limit of this species in Europe, no documentation of changes in range has been published. The objectives of this study were to describe the distribution of I. ricinus in Norway and to evaluate whether any range shifts have occurred relative to historical descriptions. Methods Multiple data sources - such as tick-sighting reports from veterinarians, hunters, and the general public - and surveillance of human and animal tick-borne diseases were compared to describe the present distribution of I. ricinus in Norway. Correlation between data sources and visual comparison of maps revealed spatial consistency. In order to identify the main spatial pattern of tick abundance, a principal component analysis (PCA) was used to obtain a weighted mean of four data sources. The weighted mean explained 67% of the variation of the data sources covering Norway's 430 municipalities and was used to depict the present distribution of I. ricinus. To evaluate whether any geographical range shift has occurred in recent decades, the present distribution was compared to historical data from 1943 and 1983. Results Tick-borne disease and/or observations of I. ricinus were reported in municipalities up to an altitude of 583 metres above sea level (MASL), and the species is now present in coastal municipalities north to approximately 69°N. Conclusion I. ricinus is currently found further north and at higher altitudes than described in

  11. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Junjie Ma

    2018-02-01

    Full Text Available Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths.

  12. Quantum key distribution with an unknown and untrusted source

    Science.gov (United States)

    Zhao, Yi; Qi, Bing; Lo, Hoi-Kwong

    2009-03-01

    The security of a standard bi-directional "plug & play" quantum key distribution (QKD) system has been an open question for a long time. This is mainly because its source is equivalently controlled by an eavesdropper, which means the source is unknown and untrusted. Qualitative discussion on this subject has been made previously. In this paper, we present the first quantitative security analysis on a general class of QKD protocols whose sources are unknown and untrusted. The securities of the standard BB84 protocol, the weak+vacuum decoy state protocol, and the one-decoy decoy state protocol, with unknown and untrusted sources, are rigorously proved. We derive rigorous lower bounds to the secure key generation rates of the above three protocols. Our numerical simulation results show that QKD with an untrusted source gives a key generation rate that is close to that with a trusted source. Our work is published in [1]. [1] Y. Zhao, B. Qi, and H.-K. Lo, Phys. Rev. A 77, 052327 (2008).

  13. CDFMC: a program that calculates the fixed neutron source distribution for a BWR using Monte Carlo

    International Nuclear Information System (INIS)

    Gomez T, A.M.; Xolocostli M, J.V.; Palacios H, J.C.

    2006-01-01

    The three-dimensional neutron flux calculation using the synthesis method requires the determination of the neutron flux in two two-dimensional configurations as well as in a one-dimensional one. Most standard guides for calculating the neutron flux or fluence in the vessel of a nuclear reactor place special emphasis on the appropriate calculation of the fixed neutron source that must be provided to the transport code used, so that sufficiently accurate flux values can be obtained. The reactor core assembly configuration is based on X-Y geometry, but the problem considered is solved in R-θ geometry, so an appropriate mapping is needed to find the source term associated with the R-θ intervals starting from a source distribution in rectangular coordinates. To develop the CDFMC computer program (source distribution calculation using Monte Carlo), it was necessary to develop a mapping approach independent of those found in the literature. The mesh-overlapping method used here is based on a technique of random point generation, commonly known as the Monte Carlo technique. Although the 'randomness' of this technique implies errors in the calculations, it is well known that increasing the number of randomly generated points used to measure an area or some other quantity of interest increases the precision of the method. In the particular case of the CDFMC computer program, the developed technique shows good general behaviour when a considerably high number of points is used (greater than or equal to one hundred thousand), which keeps the calculation errors of the order of 1%. (Author)
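
    The mesh-overlapping idea is simple to reproduce: sample random points uniformly in each X-Y assembly cell and tally the cell's source density into whichever R-θ bin contains each point. A self-contained Python sketch with an invented 4×4 source map (the program itself is not available here):

      import numpy as np
      rng = np.random.default_rng(5)

      src = rng.uniform(0.5, 1.5, (4, 4))   # invented X-Y source map (4x4 cells)
      pitch = 15.0                          # assumed assembly pitch, cm
      nr, nth, n_cell = 8, 12, 20_000       # R-theta mesh and points per cell

      r_edges = np.linspace(0.0, 4*pitch*np.sqrt(2), nr + 1)
      th_edges = np.linspace(0.0, np.pi/2, nth + 1)
      tally = np.zeros((nr, nth))

      # sample points uniformly inside each X-Y cell and deposit the cell's
      # source density into the R-theta bin containing each point
      for i in range(4):
          for j in range(4):
              x = rng.uniform(i*pitch, (i + 1)*pitch, n_cell)
              y = rng.uniform(j*pitch, (j + 1)*pitch, n_cell)
              r, th = np.hypot(x, y), np.arctan2(y, x)
              ir = np.clip(np.searchsorted(r_edges, r, side='right') - 1, 0, nr - 1)
              it = np.clip(np.searchsorted(th_edges, th, side='right') - 1, 0, nth - 1)
              np.add.at(tally, (ir, it), src[i, j])

      print(np.round(tally/tally.sum(), 4))   # normalized R-theta source term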

  14. Use of Monte Carlo Methods in the modeling of the dose/INAK distribution of natural radioactive sources: First studies

    Energy Technology Data Exchange (ETDEWEB)

    Bezerra, Luis R.A.; Vieira, Jose W.; Amaral, Romilton dos S.; Santos Junior, Jose A. dos; Silva, Arykerne N.C. da; Silva, Alberto A. da; Damascena, Kennedy F.; Santos Junior, Otavio P.; Medeiros, Nilson V.S.; Santos, Josineide M.N. dos, E-mail: jaraujo@ufpe.br, E-mail: romilton@ufpe.br, E-mail: kennedy.eng.ambiental@gmail.com, E-mail: nvsmedeiros@gmail.com, E-mail: josineide.santos@ufpe.br, E-mail: arykerne.silva@ufpe.br, E-mail: luis.rodrigo@vitoria.ifpe.edu.br, E-mail: otavio.santos@vitoria.ifpe.edu.br, E-mail: s, E-mail: jose.wilson@recife.ifpe.edu.br, E-mail: alberto.silva@barreiros.ifpe.edu.br, E-mail: jose.wilson59@uol.com.br [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear; Instituto Federal de Educacao, Ciencia e Tecnologia de Pernambuco (IFPE), PE (Brazil); Universidade de Pernambuco (UPE), Recife, PE (Brazil)

    2017-11-01

    One of the forms of exposure to which the world population is subjected daily is natural radiation, which covers exposure to sources of cosmic and terrestrial origin; the latter accounts for about 84.1% of all exposure due to natural radiation. Some research groups have been estimating the dose distribution in the radiosensitive organs and tissues of people exposed to gamma radiation using Computational Exposure Models (MCE). An MCE is composed, fundamentally, of an anthropomorphic simulator (phantom), a Monte Carlo code and a radioactive source algorithm. The Group of Computational Dosimetry and Embedded Systems (DCSE), together with the Radioecology group (RAE), has been developing a variety of MCEs to simulate exposure to natural environmental gamma radiation. Such models estimate the dose distribution absorbed by the organs and tissues radiosensitive to ionizing radiation from a flat portion of the ground, in which photons emerge from within a circle of radius r and reach a person in an orthostatic position at the centre of the circle. In this work we investigated the exposure of an individual to a gamma-emitting radioactive cloud of potassium-40, which emits a characteristic photon of energy 1461 keV. The number of histories was optimized to obtain dose/kerma values in air with low dispersion and viable computational time on the available PCs, statistically validating the results. To do so, the MCE MSTA, composed of the MASH (Male Adult meSH) phantom in an orthostatic position coupled to the EGSnrc code, was adapted with the planar source algorithm. (author)

  15. Calibration methods for ECE systems with microwave sources

    International Nuclear Information System (INIS)

    Tubbing, B.J.D.; Kissel, S.E.

    1987-01-01

    The authors investigated the feasibility of two methods for the calibration of electron cyclotron emission (ECE) systems, both based on the use of a microwave source. In the first method, called the Antenna Pattern Integration (API) method, the microwave source is scanned in space so as to simulate a large-area blackbody source. In the second method, called the Untuned Cavity (UC) method, an untuned cavity fed by the microwave source is used to simulate a blackbody. For both methods, the hardware required to perform partly automated calibrations was developed. The microwave-based methods were compared with a large-area blackbody calibration on two different ECE systems, a Michelson interferometer and a grating polychromator. The API method was found to be more successful than the UC method. (author)

  16. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    International Nuclear Information System (INIS)

    Gora, D.; Bernardini, E.; Cruz Silva, A.H.

    2011-04-01

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)

  17. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    Energy Technology Data Exchange (ETDEWEB)

    Gora, D. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institute of Nuclear Physics PAN, Cracow (Poland); Bernardini, E.; Cruz Silva, A.H. [Institute of Nuclear Physics PAN, Cracow (Poland)

    2011-04-15

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)

  18. Confusion-limited extragalactic source survey at 4.755 GHz. I. Source list and areal distributions

    International Nuclear Information System (INIS)

    Ledden, J.E.; Broderick, J.J.; Condon, J.J.; Brown, R.L.

    1980-01-01

    A confusion-limited 4.755-GHz survey covering 0.00956 sr between right ascensions 07h05m and 18h near declination +35° has been made with the NRAO 91-m telescope. The survey found 237 sources and is complete above 15 mJy. Source counts between 15 and 100 mJy were obtained directly. The P(D) distribution was used to determine the number counts between 0.5 and 13.2 mJy, to search for anisotropy in the density of faint extragalactic sources, and to set a 99%-confidence upper limit of 1.83 mK to the rms temperature fluctuation of the 2.7-K cosmic microwave background on angular scales smaller than 7.3 arcmin. The discrete-source density, normalized to the static Euclidean slope, falls off sufficiently rapidly below 100 mJy that no new population of faint flat-spectrum sources is required to explain the 4.755-GHz source counts.

  19. Reconstruction of Sound Source Pressures in an Enclosure Using the Phased Beam Tracing Method

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho; Ih, Jeong-Guon

    2009-01-01

    . First, surfaces of an extended source are divided into reasonably small segments. From each source segment, one beam is projected into the field and all emitted beams are traced. Radiated beams from the source reach array sensors after traveling various paths including the wall reflections. Collecting...... all the pressure histories at the field points, source-observer relations can be constructed in a matrix-vector form for each frequency. By multiplying the measured field data with the pseudo-inverse of the calculated transfer function, one obtains the distribution of source pressure. An omni......-directional sphere and a cubic source in a rectangular enclosure were taken as examples in the simulation tests. A reconstruction error was investigated by Monte Carlo simulation in terms of field point locations. When the source information was reconstructed by the present method, it was shown that the sound power...
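
    The reconstruction step described above reduces to a linear inverse problem p = Hq solved with a pseudo-inverse. In the paper H comes from phased beam tracing through the room's reflections; the sketch below uses a random complex stand-in matrix just to show the mechanics:

      import numpy as np
      rng = np.random.default_rng(9)

      n_seg, n_mic = 24, 40     # source segments, array sensors
      # stand-in complex transfer matrix at one frequency (the paper builds
      # this from phased beam tracing through the wall reflections)
      H = rng.normal(size=(n_mic, n_seg)) + 1j*rng.normal(size=(n_mic, n_seg))
      q_true = rng.normal(size=n_seg) + 1j*rng.normal(size=n_seg)
      p = H @ q_true + 0.01*(rng.normal(size=n_mic) + 1j*rng.normal(size=n_mic))

      q_hat = np.linalg.pinv(H) @ p   # reconstructed source pressures
      err = np.linalg.norm(q_hat - q_true)/np.linalg.norm(q_true)
      print("relative reconstruction error: %.3f" % err)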

  20. Chemical and isotopic methods for characterization of pollutant sources in rain water

    International Nuclear Information System (INIS)

    Verma, M.P.

    1996-01-01

    Acid rain formation is related to industrial pollution. An isotopic and chemical study of the spatial and temporal distribution of the acidity in rain gives information about the acidity source. The predominant species in acid rain are nitrates and sulfates. Rain monitoring requires the determination of anionic species such as HCO3⁻, Cl⁻, SO4²⁻ and NO3⁻, as well as pH. The cations Na⁺, K⁺, Ca²⁺ and Mg²⁺ were also analyzed to check the quality of the analyses. All of these species, except HCO3⁻, can be determined with sufficient accuracy by modern equipment such as liquid chromatographs, atomic absorption spectrometers, etc. The HCO3⁻ concentration is determined by traditional methods such as acid-base titration. This work presents the fundamental concepts of the titration method for samples with low alkalinity (carbonic species), such as rain water. A general overview of the isotopic methods for characterizing the origin of pollutant sources in rain is also given. (Author)

  1. Size distribution, directional source contributions and pollution status of PM from Chengdu, China during a long-term sampling campaign.

    Science.gov (United States)

    Shi, Guo-Liang; Tian, Ying-Ze; Ma, Tong; Song, Dan-Lin; Zhou, Lai-Dong; Han, Bo; Feng, Yin-Chang; Russell, Armistead G

    2017-06-01

    Long-term and synchronous monitoring of PM10 and PM2.5 was conducted in Chengdu, China, from 2007 to 2013. The levels, variations, compositions and size distributions were investigated. The sources were quantified by two-way and three-way receptor models (PMF2, ME2-2way and ME2-3way). Consistent results were found: the primary source categories contributed 63.4% (PMF2), 64.8% (ME2-2way) and 66.8% (ME2-3way) to PM10, and contributed 60.9% (PMF2), 65.5% (ME2-2way) and 61.0% (ME2-3way) to PM2.5. Secondary sources contributed 31.8% (PMF2), 32.9% (ME2-2way) and 31.7% (ME2-3way) to PM10, and 35.0% (PMF2), 33.8% (ME2-2way) and 36.0% (ME2-3way) to PM2.5. The size distribution of the source categories was estimated better by the ME2-3way method. The three-way model can simultaneously consider chemical species, temporal variability and PM sizes, while a two-way model independently computes datasets of different sizes. A method called source directional apportionment (SDA) was employed to quantify the contributions from various directions for each source category. Crustal dust from east-north-east (ENE) contributed the most to both PM10 (12.7%) and PM2.5 (9.7%) in Chengdu, followed by crustal dust from south-east (SE) for PM10 (9.8%) and secondary nitrate & secondary organic carbon from ENE for PM2.5 (9.6%). Source contributions from different directions are associated with meteorological conditions, source locations and emission patterns during the sampling period. These findings and methods provide useful tools to better understand PM pollution status and to develop effective pollution control strategies.

  2. Demonstration of a collimated in situ method for determining depth distributions using gamma-ray spectrometry

    CERN Document Server

    Benke, R R

    2002-01-01

    In situ gamma-ray spectrometry uses a portable detector to quantify radionuclides in materials. The main shortcoming of in situ gamma-ray spectrometry has been its inability to determine radionuclide depth distributions. Novel collimator designs were paired with a commercial in situ gamma-ray spectrometry system to overcome this limitation for large area sources. Positioned with their axes normal to the material surface, the cylindrically symmetric collimators limited the detection of unattenuated gamma-rays to those arriving from a selected range of polar angles (measured off the detector axis). Although this approach does not alleviate the need for some knowledge of the gamma-ray attenuation characteristics of the materials being measured, the collimation method presented in this paper represents an absolute method that determines the depth distribution as a histogram, while other in situ methods require a priori knowledge of the depth distribution shape. Other advantages over previous in situ methods are that this method d...

  3. Fast Reliability Assessing Method for Distribution Network with Distributed Renewable Energy Generation

    Science.gov (United States)

    Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming

    2018-01-01

    This paper proposes a fast reliability assessment method for distribution grids with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distribution characteristics of wind speed and solar irradiance, respectively, and models of the wind farm, solar park and local load are built for reliability assessment. Then, based on production cost simulation, probability discretization and linearized power flow, an optimal power flow problem whose objective is the minimum cost of conventional power generation is solved. Thus a reliability assessment for the distribution grid is implemented quickly and accurately. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices; a simulation of the IEEE RBTS BUS6 system in MATLAB indicates that the fast reliability assessment method calculates the reliability indices much faster than the Monte Carlo method while maintaining accuracy.
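
    The probability-discretization idea can be illustrated for the wind part alone: discretize the Weibull wind-speed density into bins, map each bin through the turbine power curve, and enumerate the resulting generation-load states to get LOLP and EENS in closed form. All numbers below are illustrative assumptions, not the RBTS BUS6 data:

      import numpy as np

      # Weibull wind-speed model discretized into bins (all numbers assumed)
      k, c = 2.0, 8.0                                  # shape, scale (m/s)
      v_ci, v_r, v_co, P_r = 3.0, 12.0, 25.0, 2.0      # turbine curve; rated MW
      cdf = lambda v: 1.0 - np.exp(-(v/c)**k)

      v_edges = np.linspace(0.0, 30.0, 61)
      p_wind = np.diff(cdf(v_edges))                   # bin probabilities
      v_mid = 0.5*(v_edges[1:] + v_edges[:-1])
      P_out = np.where(v_mid < v_ci, 0.0,
               np.where(v_mid < v_r, P_r*(v_mid - v_ci)/(v_r - v_ci),
                np.where(v_mid < v_co, P_r, 0.0)))

      loads = np.array([1.5, 2.5])                     # two load states, MW
      p_load = np.array([0.7, 0.3])
      P_conv = 1.0                                     # firm conventional MW

      short = loads[None, :] - (P_conv + P_out[:, None])   # shortfall matrix
      prob = p_wind[:, None]*p_load[None, :]
      LOLP = prob[short > 0].sum()
      EENS = (prob*np.clip(short, 0.0, None)).sum()*8760   # MWh per year
      print("LOLP = %.4f, EENS = %.1f MWh/yr" % (LOLP, EENS))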

  4. Quantum key distribution with entangled photon sources

    International Nuclear Information System (INIS)

    Ma Xiongfeng; Fung, Chi-Hang Fred; Lo, H.-K.

    2007-01-01

    A parametric down-conversion (PDC) source can be used as either a triggered single-photon source or an entangled-photon source in quantum key distribution (QKD). The triggering PDC QKD has already been studied in the literature. On the other hand, a model and a post-processing protocol for the entanglement PDC QKD are still missing. We fill in this important gap by proposing such a model and a post-processing protocol for the entanglement PDC QKD. Although the PDC model is proposed to study the entanglement-based QKD, we emphasize that our generic model may also be useful for other non-QKD experiments involving a PDC source. Since an entangled PDC source is a basis-independent source, we apply Koashi and Preskill's security analysis to the entanglement PDC QKD. We also investigate the entanglement PDC QKD with two-way classical communications. We find that the recurrence scheme increases the key rate and the Gottesman-Lo protocol helps tolerate higher channel losses. By simulating a recent 144-km open-air PDC experiment, we compare three implementations: entanglement PDC QKD, triggering PDC QKD, and coherent-state QKD. The simulation result suggests that the entanglement PDC QKD can tolerate higher channel losses than the coherent-state QKD. The coherent-state QKD with decoy states is able to achieve the highest key rate in the low- and medium-loss regions. By applying the Gottesman-Lo two-way post-processing protocol, the entanglement PDC QKD can tolerate up to 70 dB combined channel losses (35 dB for each channel) provided that the PDC source is placed in between Alice and Bob. After considering statistical fluctuations, the PDC setup can tolerate up to 53 dB channel losses.

  5. Affordable non-traditional source data mining for context assessment to improve distributed fusion system robustness

    Science.gov (United States)

    Bowman, Christopher; Haith, Gary; Steinberg, Alan; Morefield, Charles; Morefield, Michael

    2013-05-01

    This paper describes methods to affordably improve the robustness of distributed fusion systems by opportunistically leveraging non-traditional data sources. Adaptive methods help find relevant data, create models, and characterize the model quality. These methods can also measure the conformity of this non-traditional data with fusion system products, including situation modeling and mission impact prediction. Non-traditional data can improve the quantity, quality, availability, timeliness, and diversity of the baseline fusion system sources and therefore can improve prediction and estimation accuracy and robustness at all levels of fusion. Techniques are described that automatically learn to characterize and search non-traditional contextual data to enable operators to integrate the data with high-level fusion systems and ontologies. These techniques apply the extension of the Data Fusion & Resource Management Dual Node Network (DNN) technical architecture at Level 4. The DNN architecture supports effective assessment and management of the expanded portfolio of data sources, entities of interest, models, and algorithms, including data pattern discovery and context conformity. Affordable model-driven and data-driven data mining methods to discover unknown models from non-traditional and 'big data' sources are used to automatically learn entity behaviors and correlations with fusion products [14 and 15]. This paper describes our context assessment software development and the demonstration of context assessment of non-traditional data for comparison with an intelligence, surveillance and reconnaissance fusion product based upon an IED POIs workflow.

  6. Dark Energy Survey Year 1 Results: Cross-Correlation Redshifts in the DES -- Calibration of the Weak Lensing Source Redshift Distributions

    Energy Technology Data Exchange (ETDEWEB)

    Davis, C.; et al.

    2017-10-06

    We present the calibration of the Dark Energy Survey Year 1 (DES Y1) weak lensing source galaxy redshift distributions from clustering measurements. By cross-correlating the positions of source galaxies with luminous red galaxies selected by the redMaGiC algorithm we measure the redshift distributions of the source galaxies as placed into different tomographic bins. These measurements constrain any such shifts to an accuracy of ~0.02 and can be computed even when the clustering measurements do not span the full redshift range. The highest-redshift source bin is not constrained by the clustering measurements because of the minimal redshift overlap with the redMaGiC galaxies. We compare our constraints with those obtained from COSMOS 30-band photometry and find that our two very different methods produce consistent constraints.

  7. The Integration of Renewable Energy Sources into Electric Power Distribution Systems, Vol. II Utility Case Assessments

    Energy Technology Data Exchange (ETDEWEB)

    Zaininger, H.W.

    1994-01-01

    Electric utility distribution system impacts associated with the integration of renewable energy sources such as photovoltaics (PV) and wind turbines (WT) are considered in this project. The impacts are expected to vary from site to site according to the following characteristics: the local solar insolation and/or wind characteristics, the renewable energy source penetration level, whether battery or other energy storage systems are applied, and local utility distribution design standards and planning practices. Small, distributed renewable energy sources are connected to the utility distribution system like other, similar kW- and MW-scale equipment and loads. Residential applications are expected to be connected to single-phase 120/240-V secondaries. Larger kW-scale applications may be connected to three-phase secondaries, and larger hundred-kW- and MW-scale applications, such as MW-scale windfarms or PV plants, may be connected to electric utility primary systems via customer-owned primary and secondary collection systems. In any case, the installation of small, distributed renewable energy sources is expected to have a significant impact on local utility distribution primary and secondary system economics. Small, distributed renewable energy sources installed on utility distribution systems will also produce non-site-specific utility generation system benefits, such as energy and capacity displacement benefits, in addition to the local site-specific distribution system benefits. Although generation system benefits are not site-specific, they are utility-specific, and they vary significantly among utilities in different regions. In addition, transmission system benefits, environmental benefits and other benefits may apply. These benefits also vary significantly among utilities and regions. Seven utility case studies considering PV, WT, and battery storage were conducted to identify a range of potential renewable energy source distribution system applications. The

  8. Searching Malware and Sources of Its Distribution in the Internet

    Directory of Open Access Journals (Sweden)

    L. L. Protsenko

    2011-09-01

    Full Text Available This article presents, for the first time, an algorithm developed by the author for finding malware and the sources of its distribution, based on HijackThis logs published on the Internet.

  9. Application of Phasor Measurement Units for Protection of Distribution Networks with High Penetration of Photovoltaic Sources

    Science.gov (United States)

    Meskin, Matin

    The rate of integration of distributed generation (DG) units at the distribution level, to meet growth in demand, is increasing as a reasonable alternative to costly network expansion. This integration brings many advantages to consumers and power grids, but also gives rise to challenges in protection and control. Recent research has brought to light the negative effects of DG units on short-circuit currents and overcurrent (OC) protection systems in distribution networks. Changes in the direction of fault current flow, increases or decreases in fault current magnitude, protection blinding, feeder sympathy trips, nuisance trips of interrupting devices, and the disruption of coordination between protective devices are some potential impacts of DG integration. Among other types of DG units, the integration of renewable energy resources into the electric grid has grown greatly in recent years. In particular, the interconnection of photovoltaic (PV) sources to medium voltage (MV) distribution networks has increased rapidly in the last decade. In this work, the effect of PV sources on conventional OC relays in MV distribution networks is shown. It is indicated that PV output fluctuation, due to changes in solar radiation, causes the magnitude and direction of the current to change haphazardly. These variations may result in poor operation of OC relays as the main protective devices in MV distribution networks. In other words, due to the bi-directional power flow and the fluctuation of current magnitude occurring in the presence of PV sources, a specific setting of OC relays is difficult to realize. Therefore, OC relays may operate under normal conditions. To improve OC relay operation, a voltage-dependent overcurrent protection is proposed. Although this new method prevents the OC relay from maloperation, its ability to detect earth faults and high-impedance faults is poor. Thus, a

  10. SU-F-T-24: Impact on Source Position and Dose Distribution Due to Curvature of HDR Transfer Tubes

    Energy Technology Data Exchange (ETDEWEB)

    Khan, A; Yue, N [Rutgers University, New Brunswick, NJ (United States)

    2016-06-15

    Purpose: Brachytherapy is a highly targeted form of radiotherapy. While this may produce ideal dose distributions on the treatment planning system, a small error in source location can lead to a change in the delivered dose distribution. The purpose of this study is to quantify the source position error due to curvature of the transfer tubes and the impact this may have on the dose distribution. Methods: Since the source travels along the midline of the tube, an estimate of the positioning error for various angles of curvature was determined using geometric properties of the tube. Based on the range of values, a specific shift was chosen to alter the treatment plans for a number of cervical cancer patients who had undergone HDR brachytherapy boost using tandem and ovoids. The impact on dose to the target and organs at risk was determined and checked against guidelines outlined by the radiation oncologist. Results: The estimated positioning error was 2 mm short of the expected position (a curved tube can only cause the source to fall short of its position in a straight tube). The quantitative impact on the dose distribution is still being analyzed. Conclusion: The accepted positioning tolerance for the source position of an HDR brachytherapy unit is plus or minus 1 mm. If there is an additional 2 mm discrepancy due to tube curvature, the source can be 1 mm to 3 mm short of the expected location. While we always attempt to keep the tubes straight, in some cases, such as with tandem and ovoids, the tandem connector does not extend as far out from the patient, so the ovoid tubes always contain some degree of curvature. The dose impact of this may be significant.
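
    The arc-chord geometry behind this estimate is easy to reproduce. The sketch below (our illustration with assumed dimensions, not the authors' calculation) treats a tube segment as a circular arc: the source travels a fixed cable length along the tube midline, so its tip falls short of the straight-tube position by roughly the arc-chord difference.

    ```python
    import math

    def curvature_shortfall(arc_length_mm: float, bend_radius_mm: float) -> float:
        """Shortfall of the source tip when a tube segment of the given length
        is bent into a circular arc of the given radius instead of lying straight.
        The delivered cable length is fixed, so the straight-line (chord) distance
        covered is shorter than the arc length; the difference is the shortfall."""
        half_angle = arc_length_mm / (2.0 * bend_radius_mm)   # radians
        chord = 2.0 * bend_radius_mm * math.sin(half_angle)
        return arc_length_mm - chord

    # Illustrative numbers only: a 300 mm segment bent to a 750 mm radius
    # falls about 2 mm short, the order of magnitude reported above.
    print(round(curvature_shortfall(300.0, 750.0), 2))
    ```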

  11. Distributed quantum computing with single photon sources

    International Nuclear Information System (INIS)

    Beige, A.; Kwek, L.C.

    2005-01-01

    Full text: Distributed quantum computing requires the ability to perform nonlocal gate operations between the distant nodes (stationary qubits) of a large network. To achieve this, it has been proposed to interconvert stationary qubits with flying qubits. In contrast to this, we show that distributed quantum computing only requires the ability to encode stationary qubits into flying qubits but not the conversion of flying qubits into stationary qubits. We describe a scheme for the realization of an eventually deterministic controlled phase gate by performing measurements on pairs of flying qubits. Our scheme could be implemented with a linear optics quantum computing setup including sources for the generation of single photons on demand, linear optics elements and photon detectors. In the presence of photon loss and finite detector efficiencies, the scheme could be used to build large cluster states for one way quantum computing with a high fidelity. (author)

  12. Investigation of Anisotropy Caused by Cylinder Applicator on Dose Distribution around Cs-137 Brachytherapy Source using MCNP4C Code

    Directory of Open Access Journals (Sweden)

    Sedigheh Sina

    2011-06-01

    Full Text Available Introduction: Brachytherapy is a type of radiotherapy in which radioactive sources are used in the proximity of tumors, normally for treatment of malignancies of the head, prostate and cervix. Materials and Methods: The Cs-137 Selectron source is a low-dose-rate (LDR) brachytherapy source used in a remote afterloading system for treatment of different cancers. This system uses active and inactive spherical sources of 2.5 mm diameter, which can be used in different configurations inside the applicator to obtain different dose distributions. In this study, the dose distribution at different distances from the source was first obtained around a single pellet inside the applicator in a water phantom, using the MCNP4C Monte Carlo code. The simulations were then repeated for six active pellets in the applicator and for six point sources. Results: The anisotropy of the dose distribution due to the presence of the applicator was obtained by dividing the dose at each distance and angle by the dose at the same distance and an angle of 90 degrees. According to the results, the doses decreased towards the applicator tips. For example, for points at distances of 5 and 7 cm from the source and an angle of 165 degrees, the discrepancies reached 5.8% and 5.1%, respectively. By increasing the number of pellets to six, these values reached 30% for the angle of 5 degrees. Discussion and Conclusion: The results indicate that the presence of the applicator causes a significant dose decrease at the tip of the applicator compared with the dose in the transverse plane. However, treatment planning systems assume an isotropic dose distribution around the source, and this causes significant errors in treatment planning, which are not negligible, especially for a large number of sources inside the applicator.
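
    In our notation (not necessarily the authors'), the anisotropy quantity described above is simply the ratio of the dose at a given angle to the transverse-plane dose at the same radial distance:

    $$F(r,\theta) = \frac{D(r,\theta)}{D(r,90^\circ)}$$

    so F = 1 everywhere for the isotropic distribution assumed by the treatment planning systems, while the reported values correspond to F dropping to about 0.94 at 165 degrees for a single pellet and to about 0.70 at 5 degrees for six pellets.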

  13. New reversing freeform lens design method for LED uniform illumination with extended source and near field

    Science.gov (United States)

    Zhao, Zhili; Zhang, Honghai; Zheng, Huai; Liu, Sheng

    2018-03-01

    In light-emitting diode (LED) array illumination (e.g. LED backlighting), obtaining high uniformity under the harsh conditions of a large distance-height ratio (DHR), an extended source and a near-field target is a key and challenging issue. In this study, we present a new reversing freeform lens design algorithm based on the illuminance distribution function (IDF) instead of the traditional light intensity distribution, which allows uniform LED illumination under the above-mentioned harsh conditions. The IDF of the freeform lens can be obtained by the proposed mathematical method, considering the effects of large DHR, extended source and near-field target at the same time. To prove the claims, a slim direct-lit LED backlight with DHR equal to 4 is designed. In comparison with traditional lenses, the illuminance uniformity of the LED backlight with the new lens increases significantly from 0.45 to 0.84, and CV(RMSE) decreases dramatically from 0.24 to 0.03 under the harsh conditions. Meanwhile, the luminance uniformity of the LED backlight with the new lens reaches 0.92 under the conditions of extended source and near field. This new method provides a practical and effective way to solve the problem of large DHR, extended source and near field for LED array illumination.
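
    The CV(RMSE) figure of merit quoted above is easy to evaluate on a simulated illuminance map. The snippet below is a minimal sketch assuming the common backlighting convention of a root-mean-square deviation taken against the mean illuminance level (the abstract does not spell out its exact definition):

    ```python
    import numpy as np

    def cv_rmse(illuminance: np.ndarray) -> float:
        """Coefficient of variation of the RMSE of an illuminance map, here
        computed against the mean level; lower values mean a flatter target."""
        mean = illuminance.mean()
        rmse = np.sqrt(np.mean((illuminance - mean) ** 2))
        return rmse / mean

    rng = np.random.default_rng(1)
    flat = 100.0 + rng.normal(0.0, 3.0, (64, 64))      # nearly uniform map
    peaked = 100.0 + 40.0 * rng.random((64, 64))       # strongly varying map
    print(round(cv_rmse(flat), 3), round(cv_rmse(peaked), 3))
    ```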

  14. Correlated Sources in Distributed Networks--Data Transmission, Common Information Characterization and Inferencing

    Science.gov (United States)

    Liu, Wei

    2011-01-01

    Correlation is often present among observations in a distributed system. This thesis deals with various design issues when correlated data are observed at distributed terminals, including: communicating correlated sources over interference channels, characterizing the common information among dependent random variables, and testing the presence of…

  15. Distributed Remote Vector Gaussian Source Coding for Wireless Acoustic Sensor Networks

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider the problem of remote vector Gaussian source coding for a wireless acoustic sensor network. Each node receives messages from multiple nodes in the network and decodes these messages using its own measurement of the sound field as side information. The node's measurement and the estimates of the source resulting from decoding the received messages are then jointly encoded and transmitted to a neighboring node in the network. We show that for this distributed source coding scenario, one can encode a so-called conditional sufficient statistic of the sources instead of jointly...

  16. Dataset for Testing Contamination Source Identification Methods for Water Distribution Networks

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset includes the results of a simulation study using the source inversion techniques available in the Water Security Toolkit. The data was created to test...

  17. Electron Source Brightness and Illumination Semi-Angle Distribution Measurement in a Transmission Electron Microscope.

    Science.gov (United States)

    Börrnert, Felix; Renner, Julian; Kaiser, Ute

    2018-05-21

    The electron source brightness is an important parameter in an electron microscope, yet reliable and simple brightness measurement routes are not easily found. A determination method for the illumination semi-angle distribution in transmission electron microscopy is even less well documented. Herein, we report a simple measurement route for both quantities and demonstrate it on a state-of-the-art instrument. The reduced axial brightness of the FEI X-FEG with a monochromator was determined to be larger than 10⁸ A/(m² sr V).
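
    For context, the reduced brightness quoted above is conventionally defined (our notation; the paper may differ in detail) as the beam current per unit area and solid angle, normalized by the accelerating voltage:

    $$B_r = \frac{I}{A\,\Omega\,U}$$

    where I is the current passing through area A into solid angle Ω and U is the accelerating voltage. Dividing by U makes the quantity invariant under acceleration, which is why it is the preferred figure of merit for comparing sources, and it yields exactly the units A/(m² sr V) reported in the abstract.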

  18. The Approximate Bayesian Computation methods in the localization of the atmospheric contamination source

    International Nuclear Information System (INIS)

    Kopka, P; Wawrzynczak, A; Borysiewicz, M

    2015-01-01

    In many areas of application, a central problem is the solution of an inverse problem, especially the estimation of unknown model parameters so as to model the underlying dynamics of a physical system precisely. In this situation, Bayesian inference is a powerful tool for combining observed data with prior knowledge to obtain the probability distribution of the searched parameters. We have applied the modern methodology named Sequential Approximate Bayesian Computation (S-ABC) to the problem of tracing an atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamical systems; sequential methods can significantly increase the efficiency of the ABC. In the presented algorithm, the input data are the on-line arriving concentrations of the released substance registered by a distributed sensor network in the OVER-LAND ATMOSPHERIC DISPERSION (OLAD) experiment. The algorithm outputs are the probability distributions of the contamination source parameters, i.e. its location, release rate, speed and direction of movement, start time and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where the model parameters best fitted to the observable data must be found. (paper)
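
    To make the idea concrete, here is a minimal plain ABC rejection sampler (our sketch on a hypothetical toy problem; the paper's S-ABC refines the tolerance over several sequential rounds, which is not shown):

    ```python
    import numpy as np

    def abc_rejection(observed, simulate, prior_sample, distance, eps, n_draws=20000):
        """Plain ABC rejection: keep parameter draws whose simulated data fall
        within tolerance eps of the observations; the kept draws approximate
        the posterior without ever evaluating a likelihood."""
        accepted = []
        for _ in range(n_draws):
            theta = prior_sample()                       # draw from the prior
            if distance(simulate(theta), observed) < eps:
                accepted.append(theta)
        return np.array(accepted)

    # Hypothetical toy problem: infer a scalar release rate from noisy readings.
    rng = np.random.default_rng(0)
    observed = 2.5 + rng.normal(0.0, 0.1, size=5)        # "sensor" data
    posterior = abc_rejection(
        observed=observed,
        simulate=lambda th: th + rng.normal(0.0, 0.1, size=5),
        prior_sample=lambda: rng.uniform(0.0, 10.0),
        distance=lambda a, b: abs(a.mean() - b.mean()),
        eps=0.05,
    )
    print(len(posterior), posterior.mean())              # mean should be near 2.5
    ```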

  19. Source inversion in the full-wave tomography; Full wave tomography ni okeru source inversion

    Energy Technology Data Exchange (ETDEWEB)

    Tsuchiya, T [DIA Consultants Co. Ltd., Tokyo (Japan)

    1997-10-22

    In order to consider the effects of vibration source characteristics in full-wave tomography (FWT), a study was performed on a method to invert vibration source parameters together with the V(p)/V(s) distribution. The study extended an analysis method based on the gradient method of Tarantola and the subspace method of Sambridge, and conducted numerical experiments. Experiment No. 1 performed inversion of only the vibration source parameters, and experiment No. 2 executed simultaneous inversion of the V(p)/V(s) distribution and the vibration source parameters. The discussions revealed that an effective analytical procedure would be as follows: in order to predict maximum stress, the average vibration source parameters and the property parameters are first inverted simultaneously; in order to estimate each vibration source parameter with high accuracy, the property parameters are fixed and each vibration source parameter is inverted individually; and finally the derived vibration source parameters are fixed and the property parameters are again inverted from the initial values. 5 figs., 2 tabs.

  20. Advanced neutron imaging methods with a potential to benefit from pulsed sources

    International Nuclear Information System (INIS)

    Strobl, M.; Kardjilov, N.; Hilger, A.; Penumadu, D.; Manke, I.

    2011-01-01

    During the last decade neutron imaging has seen significant improvements in instrumentation, detection and spatial resolution, and a variety of new applications and methods have been explored. As a consequence of this outstanding development, various techniques of neutron imaging nowadays go far beyond a two- and three-dimensional mapping of attenuation coefficients for a broad range of samples. Neutron imaging has become sensitive to neutron scattering in the small-angle scattering range as well as with respect to Bragg scattering. The corresponding methods potentially provide spatially resolved and volumetric data revealing microstructural inhomogeneities, texture variations, crystalline phase distributions and even strains in bulk samples. Other techniques allow for the detection of refractive index distributions through phase-sensitive measurements, and the utilization of polarized neutrons enables radiographic and tomographic investigations of magnetic fields and properties as well as of electrical currents within massive samples. All these advanced methods utilize or depend on wavelength-dependent signals and are hence well suited to profit significantly from pulsed neutron sources, as will be discussed.

  1. Models, methods and software for distributed knowledge acquisition for the automated construction of integrated expert systems knowledge bases

    International Nuclear Information System (INIS)

    Dejneko, A.O.

    2011-01-01

    Based on an analysis of existing models, methods and means of acquiring knowledge, a base method of automated knowledge acquisition has been chosen. On the basis of this method, a new approach to integrating information acquired from knowledge sources of different typologies has been proposed, and the concept of distributed knowledge acquisition, with the aim of computerized formation of the most complete and consistent models of problem areas, has been introduced. An original algorithm for distributed knowledge acquisition from databases, based on the construction of binary decision trees, has been developed.

  2. A Comparison of Distribution Free and Non-Distribution Free Factor Analysis Methods

    Science.gov (United States)

    Ritter, Nicola L.

    2012-01-01

    Many researchers recognize that factor analysis can be conducted on both correlation matrices and variance-covariance matrices. Although most researchers extract factors from non-distribution free or parametric methods, researchers can also extract factors from distribution free or non-parametric methods. The nature of the data dictates the method…

  3. Reconstruction of far-field tsunami amplitude distributions from earthquake sources

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.

    2016-01-01

    The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.
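
    The tapered Pareto form referred to above has a convenient closed-form survival function. In the standard parameterization (our notation; the paper's symbols may differ), with power-law exponent β, corner amplitude a_c and observation threshold a_t,

    $$P(A > a) = \left(\frac{a_t}{a}\right)^{\beta} \exp\!\left(\frac{a_t - a}{a_c}\right), \qquad a \ge a_t,$$

    so the distribution behaves like a pure power law well below a_c and is exponentially tapered above it, which is exactly why empirical catalogs that are too short tend to underestimate the corner amplitude.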

  4. Characteristics of radon concentration distributions measurement with two-filter method in the storage rooms of Ra-Be neutron source and the adjacent laboratories

    International Nuclear Information System (INIS)

    Liu Shuhuan; Chu Jun; Zhao Yaolin; Bao Lihong; Chen Wei; Wu Yuelei

    2012-01-01

    The basic principle of radon measurement with the two-filter method is introduced in this paper. The radon concentration levels in the storage rooms and the adjacent laboratories were measured and compared using an FT-648 radon measurement instrument. The measurement results showed that the radon concentration levels in the Ra-Be neutron source storage rooms are not higher than those in the adjacent laboratories, so it can be deduced that no radon gas leaked out from the shielded and sealed Ra-Be neutron sources. The radon concentrations measured in the laboratories were near the average level of the statistical results for indoor dwellings' radon concentrations in Xi'an, and the values did not exceed the national standard limit (200 Bq·m⁻³). Furthermore, it was found that the radon concentration measured on rainy or cloudy days is lower than that on sunny days, and that good ventilation can effectively decrease the indoor radon concentration level. Meanwhile, the mechanisms by which factors such as weather conditions (rainy, sunny and cloudy), ventilation and the measurement time of day influence the radon concentration distributions in the laboratories are preliminarily analyzed. The measurement results in this work provide reference data for assessing the safe storage of Ra-Be neutron sources and the radiation protection of experimenters in the laboratories. (authors)

  5. Generalized Analysis of a Distribution Separation Method

    Directory of Open Access Journals (Sweden)

    Peng Zhang

    2016-04-01

    Full Text Available Separating two probability distributions from a mixture model that is made up of a combination of the two is essential to a wide range of applications. For example, in information retrieval (IR), there often exists a mixture distribution consisting of a relevance distribution that we need to estimate and an irrelevance distribution that we hope to get rid of. Recently, a distribution separation method (DSM) was proposed to approximate the relevance distribution by separating a seed irrelevance distribution from the mixture distribution. It was successfully applied to an IR task, namely pseudo-relevance feedback (PRF), where the query expansion model is often a mixture term distribution. Although initially developed in the context of IR, DSM is indeed a general mathematical formulation for probability distribution separation. Thus, it is important to further generalize its basic analysis and to explore its connections to other related methods. In this article, we first extend DSM's theoretical analysis, which was originally based on the Pearson correlation coefficient, to entropy-related measures, including the KL-divergence (Kullback-Leibler divergence), the symmetrized KL-divergence and the JS-divergence (Jensen-Shannon divergence). Second, we investigate the distribution separation idea in a well-known method, namely the mixture model feedback (MMF) approach. We prove that MMF also complies with the linear combination assumption, and then DSM's linear separation algorithm can largely simplify the EM algorithm in MMF. These theoretical analyses, as well as further empirical evaluation results, demonstrate the advantages of our DSM approach.
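
    The linear-combination assumption mentioned above can be written out directly (our notation): if the observed mixture over terms w is

    $$p_{\text{mix}}(w) = \lambda\, p_{\text{irr}}(w) + (1-\lambda)\, p_{\text{rel}}(w),$$

    then, once the mixing weight λ and the seed irrelevance distribution are fixed, the relevance distribution follows by simple algebra,

    $$p_{\text{rel}}(w) = \frac{p_{\text{mix}}(w) - \lambda\, p_{\text{irr}}(w)}{1-\lambda},$$

    and the practical work of a method like DSM lies in choosing λ so that the separated distribution remains a valid (non-negative, normalized) distribution.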

  6. Method of forecasting power distribution

    International Nuclear Information System (INIS)

    Kaneto, Kunikazu.

    1981-01-01

    Purpose: To obtain highly accurate forecasts by reflecting the signals from neutron detectors disposed in the reactor core in the forecast results. Method: An on-line computer transfers to a simulator process data, such as coolant temperature and flow rate in each section, and various measurement signals, such as control rod positions, from the nuclear reactor. The simulator calculates the present power distribution before the control operation. The signals of the neutron detectors at each position in the reactor core are estimated from this power distribution, and errors are determined from the estimated and measured values to obtain a smooth error distribution in the axial direction. Then, input conditions for the time to be forecast are set by a data setter. The simulator calculates the forecast power distribution after the control operation based on the set conditions, and the forecast power distribution is corrected using the error distribution. (Yoshino, Y.)

  7. Measurement-device-independent quantum key distribution with correlated source-light-intensity errors

    Science.gov (United States)

    Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin

    2018-04-01

    We present an analysis of measurement-device-independent quantum key distribution with correlated source-light-intensity errors. Numerical results show that our approach can greatly improve the key rate compared with prior results, especially under large intensity fluctuations and channel attenuation, provided the intensity fluctuations of the different sources are correlated.

  8. Boosting up quantum key distribution by learning statistics of practical single-photon sources

    International Nuclear Information System (INIS)

    Adachi, Yoritoshi; Yamamoto, Takashi; Koashi, Masato; Imoto, Nobuyuki

    2009-01-01

    We propose a simple quantum-key-distribution (QKD) scheme for practical single-photon sources (SPSs), which works even with a moderate suppression of the second-order correlation g⁽²⁾ of the source. The scheme utilizes a passive preparation of a decoy state, by monitoring a fraction of the signal via an additional beam splitter and a detector at the sender's side, to guard against photon-number-splitting attacks. We show that the achievable distance increases with the precision with which the sub-Poissonian tendency is confirmed in the higher photon-number distribution of the source, rather than with the actual suppression of multiphoton emission events. We present an example of the secure key generation rate in the case of a poor SPS with g⁽²⁾ = 0.19, in which no secure key is produced with the conventional QKD scheme, and show that learning the photon-number distribution up to several photon numbers is sufficient for achieving almost the same distance as with an ideal SPS.

  9. Calculation of neutron interior source distribution within subcritical fission-chain reacting systems for a prescribed power density generation

    International Nuclear Information System (INIS)

    Moraes, Leonardo R.C.; Alves Filho, Hermes; Barros, Ricardo C.

    2017-01-01

    Accelerator Driven Systems (ADS) are subcritical systems stabilized by stationary external sources of neutrons. A system is subcritical when the removal of neutrons by absorption and leakage exceeds their production by fission, so that the chain reaction tends to shut down; any such system can be stabilized by including time-independent external sources of neutrons. The goal of this work is to determine the intensity of the uniform and isotropic sources of neutrons that must be added inside all fuel regions of a subcritical system so that it becomes stabilized, generating a prescribed distribution of electric power. A computer program has been developed in the Java language to estimate the intensity of the stationary sources of neutrons that must be included in the fuel regions to drive the subcritical system at a fixed power distribution prescribed by the user. The mathematical model used to achieve this goal was the energy multigroup, slab-geometry neutron transport equation in the discrete ordinates (S_N) formulation, and the response matrix method was applied to solve the forward and the adjoint S_N problems. Numerical results are given to verify the present method. (author)

  10. Calculation of neutron interior source distribution within subcritical fission-chain reacting systems for a prescribed power density generation

    Energy Technology Data Exchange (ETDEWEB)

    Moraes, Leonardo R.C.; Alves Filho, Hermes; Barros, Ricardo C., E-mail: lrcmoraes@iprj.uerj.br, E-mail: halves@iprj.uerj.br, E-mail: ricardob@iprj.uerj.br [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Programa de Pós-Graduação em Modelagem Computacional

    2017-07-01

    Accelerator Driven Systems (ADS) are subcritical systems stabilized by stationary external sources of neutrons. A system is subcritical when the removal of neutrons by absorption and leakage exceeds their production by fission, so that the chain reaction tends to shut down; any such system can be stabilized by including time-independent external sources of neutrons. The goal of this work is to determine the intensity of the uniform and isotropic sources of neutrons that must be added inside all fuel regions of a subcritical system so that it becomes stabilized, generating a prescribed distribution of electric power. A computer program has been developed in the Java language to estimate the intensity of the stationary sources of neutrons that must be included in the fuel regions to drive the subcritical system at a fixed power distribution prescribed by the user. The mathematical model used to achieve this goal was the energy multigroup, slab-geometry neutron transport equation in the discrete ordinates (S_N) formulation, and the response matrix method was applied to solve the forward and the adjoint S_N problems. Numerical results are given to verify the present method. (author)
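
    At its core, prescribing the power distribution and solving for the source intensities is a linear problem. The toy numpy sketch below (our illustration with made-up numbers, not the authors' Java code) assumes a response matrix R that maps region-wise source intensities q to region-wise power densities p, so the required sources follow from solving R q = p:

    ```python
    import numpy as np

    # Assumed 3-region response matrix: column j holds the power density each
    # region receives per unit source intensity placed in fuel region j.
    R = np.array([[0.80, 0.15, 0.05],
                  [0.20, 0.70, 0.10],
                  [0.05, 0.25, 0.70]])
    p_prescribed = np.array([120.0, 100.0, 80.0])   # prescribed power densities

    q = np.linalg.solve(R, p_prescribed)            # required source intensities
    print(q)
    print(R @ q - p_prescribed)                     # residual ~ 0: prescription met
    ```

    In the paper, the entries of such a matrix would come from the forward and adjoint S_N transport solutions rather than being given a priori.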

  11. Sampling Methods for Wallenius' and Fisher's Noncentral Hypergeometric Distributions

    DEFF Research Database (Denmark)

    Fog, Agner

    2008-01-01

    Several methods for generating variates with univariate and multivariate Wallenius' and Fisher's noncentral hypergeometric distributions are developed. Methods for the univariate distributions include: simulation of urn experiments, inversion by binary search, inversion by chop-down search from the mode, ratio-of-uniforms rejection method, and rejection by sampling in the tau domain. Methods for the multivariate distributions include: simulation of urn experiments, conditional method, Gibbs sampling, and Metropolis-Hastings sampling. These methods are useful for Monte Carlo simulation of models of biased sampling and models of evolution and for calculating moments and quantiles of the distributions.
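
    Of the univariate methods listed, the urn experiment is the most direct to sketch. The snippet below (a minimal illustration, not the paper's optimized generator) draws one Wallenius variate for two colors: balls are taken one at a time without replacement, each draw favoring the first color in proportion to its weight omega:

    ```python
    import random

    def wallenius_urn(m1, m2, n, omega, rng):
        """One variate of Wallenius' noncentral hypergeometric distribution via
        the urn experiment: n sequential draws without replacement, where the
        odds of taking a color-1 ball are omega*m1 : m2 at each step."""
        x = 0                                   # color-1 balls drawn so far
        for _ in range(n):
            if rng.random() < omega * m1 / (omega * m1 + m2):
                x += 1
                m1 -= 1
            else:
                m2 -= 1
        return x

    rng = random.Random(0)
    print([wallenius_urn(20, 30, 10, omega=2.0, rng=rng) for _ in range(8)])
    ```

    The sequential nature of the simulation is what distinguishes Wallenius' distribution from Fisher's, where the draws are effectively simultaneous.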

  12. Integrated Power Flow and Short Circuit Calculation Method for Distribution Network with Inverter Based Distributed Generation

    Directory of Open Access Journals (Sweden)

    Shan Yang

    2016-01-01

    Full Text Available Power flow calculation and short circuit calculation are the basis of theoretical research on distribution networks with inverter-based distributed generation. The similarity of the equivalent models of inverter-based distributed generation during normal and fault conditions of the distribution network, and the differences between power flow and short circuit calculation, are analyzed in this paper. An integrated power flow and short circuit calculation method for distribution networks with inverter-based distributed generation is then proposed. The proposed method represents the inverter-based distributed generation as an I-θ bus, which makes it suitable for calculating the power flow of a distribution network with current-limited inverter-based distributed generation. The low-voltage ride-through capability of inverter-based distributed generation can be considered as well. Finally, tests of power flow and short circuit current calculation are performed on a 33-bus distribution network. The results of the proposed method are contrasted with those of the traditional method and the simulation method, which verifies the effectiveness of the integrated method suggested in this paper.

  13. Review of Congestion Management Methods for Distribution Networks with High Penetration of Distributed Energy Resources

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei; Liu, Zhaoxi

    2014-01-01

    This paper reviews the existing congestion management methods for distribution networks with high penetration of DERs documented in the recent research literature. The congestion management methods reviewed can be grouped into two categories: market methods and direct control methods. The market methods consist of dynamic tariffs, distribution capacity markets, shadow prices and flexible service markets. The direct control methods comprise network reconfiguration, reactive power control and active power control. Based on the review of the existing methods...

  14. Sources and distribution of anthropogenic radionuclides in different marine environments

    International Nuclear Information System (INIS)

    Holm, E.

    1997-01-01

    Knowledge of the distribution in time and space of radiologically important radionuclides from different sources in different marine environments is important for assessing the dose commitment following controlled or accidental releases and for detecting possible new sources. Present sources from nuclear explosion tests, releases from nuclear facilities and the Chernobyl accident provide a tool for such studies; the different sources can be distinguished by their different isotopic and radionuclide compositions. Results show that radiocaesium behaves rather conservatively in the south and north Atlantic, while plutonium has a residence time of about 8 years. On the other hand, enhanced concentrations of plutonium are found in surface waters in arctic regions, where vertical mixing is small and ice formation plays an important role. Significantly increased concentrations of plutonium are also found below the oxic layer in anoxic basins due to geochemical concentration. (author)

  15. Voltage management of distribution networks with high penetration of distributed photovoltaic generation sources

    Science.gov (United States)

    Alyami, Saeed

    Installation of photovoltaic (PV) units could lead to great challenges to the existing electrical systems. Issues such as voltage rise, protection coordination, islanding detection, harmonics, and increased or changed short-circuit levels need to be carefully addressed before we can see a wide adoption of this environmentally friendly technology. Voltage rise or overvoltage issues are particularly important to address when deploying more PV systems in distribution networks. This dissertation proposes a comprehensive solution to deal with voltage violations in distribution networks, from controlling PV power outputs and the electricity consumption of smart appliances in real time to optimal placement of PVs at the planning stage. The dissertation is composed of three parts: the literature review, the work that has already been done and the future research tasks. An overview of renewable energy generation and its challenges is given in Chapter 1. The overall literature survey, motivation and scope of the study are also outlined in that chapter. Detailed literature reviews are given in the remaining chapters. The overvoltage and undervoltage phenomena in typical distribution networks with integration of PVs are further explained in Chapter 2. Possible approaches for voltage quality control are also discussed in this chapter, followed by a discussion of the importance of load management for PHEVs and appliances and its benefits to electric utilities and end users. A new real power capping method is presented in Chapter 3 to prevent overvoltage by adaptively setting the power caps for PV inverters in real time. The proposed method can maintain voltage profiles below a pre-set upper limit while maximizing the PV generation and fairly distributing the real power curtailments among all the PV systems in the network. As a result, each of the PV systems in the network has an equal opportunity to generate electricity and shares the responsibility of voltage

  16. Coded aperture imaging of alpha source spatial distribution

    International Nuclear Information System (INIS)

    Talebitaher, Alireza; Shutler, Paul M.E.; Springham, Stuart V.; Rawat, Rajdeep S.; Lee, Paul

    2012-01-01

    The Coded Aperture Imaging (CAI) technique has been applied with CR-39 nuclear track detectors to image alpha particle source spatial distributions. The experimental setup comprised a ²²⁶Ra source of alpha particles, a laser-machined CAI mask, and CR-39 detectors, arranged inside a vacuum enclosure. Three different alpha particle source shapes were synthesized by using a linear translator to move the ²²⁶Ra source within the vacuum enclosure. The coded mask pattern used is based on a Singer Cyclic Difference Set, with 400 pixels and 57 open square holes (representing ρ = 1/7 = 14.3% open fraction). After etching of the CR-39 detectors, the area, circularity, mean optical density and positions of all candidate tracks were measured by an automated scanning system. Appropriate criteria were used to select alpha particle tracks, and a decoding algorithm applied to the (x, y) data produced the decoded image of the source. Signal-to-Noise Ratio (SNR) values obtained for alpha particle CAI images were found to be substantially better than those for corresponding pinhole images, although the CAI-SNR values were below the predictions of theoretical formulae. Monte Carlo simulations of CAI and pinhole imaging were performed in order to validate the theoretical SNR formulae and also our CAI decoding algorithm. There was found to be good agreement between the theoretical formulae and SNR values obtained from simulations. Possible reasons for the lower SNR obtained for the experimental CAI study are discussed.

  17. Reduction Method for Active Distribution Networks

    DEFF Research Database (Denmark)

    Raboni, Pietro; Chen, Zhe

    2013-01-01

    On-line security assessment is traditionally performed by Transmission System Operators at the transmission level, ignoring the effective response of distributed generators and small loads; on the other hand, the required computation time and amount of real-time data for including distribution networks would be too large. In this paper an adaptive aggregation method for subsystems with power-electronic-interfaced generators and voltage-dependent loads is proposed. With this tool it may be relatively easier to include distribution networks in security assessment. The method is validated by comparing the results obtained in PSCAD® with the detailed network model and with the reduced one. Moreover, the control schemes of a wind turbine and a photovoltaic plant included in the detailed network model are described.

  18. Methods for differentiating identity and sources of mixed petroleum pollutants in the environment

    International Nuclear Information System (INIS)

    Kaplan, I.R.; Alimi, H.; Lee, R.P.

    1993-01-01

    When crude or refined oil products enter the environment they begin to degrade by numerous microbiological or physical processes. The result of such changes is to alter the molecular composition of the product so that its source is unrecognizable by application of conventional EPA-type methodology. Numerous methods have been devised in the petroleum exploration industry to characterize source rock bitumens and reservoir hydrocarbons. A modification of these methods has been successfully applied at the authors' company to identify the sources of fugitive hydrocarbons. For mildly altered products, a statistical comparison is made using pattern recognition of the n-alkane distribution between C10 and C35 for heavy products and between C3 and C10 for gasoline-range products. For highly altered products, a search is made for complex organic molecules that have undergone the least alteration, which include long-chain polynuclear aromatic hydrocarbons and the polycyclic paraffinic hydrocarbons. These biomarker compounds have many isomeric forms which help characterize their sources. Elemental composition, especially sulfur, vanadium and nickel and other transition and base metals, helps differentiate crude oil from refined products. Lead alkyls and MTBE are especially useful in determining the residence time of gasoline products in soil and ground water. Petroporphyrin characterization can help differentiate crude oil from heavy refined oils or fluids. Stable isotope ratios are particularly useful for differentiating sources of highly altered petroleum products.

  19. Spatial distribution of saline water and possible sources of intrusion ...

    African Journals Online (AJOL)

    The spatial distribution of saline water and possible sources of intrusion into Lekki lagoon and the transitional effects on the lacustrine ichthyofaunal characteristics were studied during March 2006 and February 2008. The water quality analysis indicated that salinity has drastically increased recently in the lagoon (0.007 to ...

  20. A Survey of Sound Source Localization Methods in Wireless Acoustic Sensor Networks

    Directory of Open Access Journals (Sweden)

    Maximo Cobos

    2017-01-01

    Full Text Available Wireless acoustic sensor networks (WASNs) are formed by a distributed group of acoustic-sensing devices featuring audio playing and recording capabilities. Current mobile computing platforms offer great possibilities for the design of audio-related applications involving acoustic-sensing nodes. In this context, acoustic source localization is one of the application domains that have attracted the most attention of the research community over the last decades. In general terms, the localization of acoustic sources can be achieved by studying energy, temporal and/or directional features from the incoming sound at different microphones and using a suitable model that relates those features to the spatial location of the source (or sources) of interest. This paper reviews common approaches for source localization in WASNs that are focused on different types of acoustic features, namely the energy of the incoming signals, their time of arrival (TOA) or time difference of arrival (TDOA), the direction of arrival (DOA), and the steered response power (SRP) resulting from combining multiple microphone signals. Additionally, we discuss methods not only aimed at localizing acoustic sources but also designed to locate the nodes themselves in the network. Finally, we discuss current challenges and frontiers in this field.
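
    As a concrete taste of the TDOA family surveyed above, the sketch below (our deliberately simple illustration, not any specific algorithm from the survey) localizes a source on a 2-D grid by picking the point whose predicted time differences of arrival, relative to a reference microphone, best match the measured ones in a least-squares sense:

    ```python
    import numpy as np

    def tdoa_grid_localize(mics, tdoas, c=343.0):
        """Brute-force TDOA localization: evaluate every grid point and return
        the one whose predicted TDOAs (vs. reference mic 0) best fit the data."""
        xs = np.linspace(0.0, 10.0, 101)
        grid = np.array([[x, y] for x in xs for y in xs])
        d = np.linalg.norm(grid[:, None, :] - mics[None, :, :], axis=2)
        predicted = (d[:, 1:] - d[:, :1]) / c         # TDOAs for every grid point
        errors = ((predicted - tdoas) ** 2).sum(axis=1)
        return grid[errors.argmin()]

    mics = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    source = np.array([3.0, 7.0])
    dists = np.linalg.norm(mics - source, axis=1)
    tdoas = (dists[1:] - dists[0]) / 343.0            # noise-free measurements
    print(tdoa_grid_localize(mics, tdoas))            # recovers ~ [3. 7.]
    ```

    Real systems replace the grid search with closed-form or iterative solvers and must cope with noisy TDOA estimates, but the underlying geometry is the same.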

  1. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local universe

    DEFF Research Database (Denmark)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene

    2017-01-01

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between 'warm' spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10⁻⁶ Mpc⁻³ and neutrino luminosity L...

  2. Flows and Stratification of an Enclosure Containing Both Localised and Vertically Distributed Sources of Buoyancy

    Science.gov (United States)

    Partridge, Jamie; Linden, Paul

    2013-11-01

    We examine the flows and stratification established in a naturally ventilated enclosure containing both a localised and a vertically distributed source of buoyancy. The enclosure is ventilated through upper and lower openings which connect the space to an external ambient. Small-scale laboratory experiments were carried out with water as the working medium and buoyancy driven directly by temperature differences. A point-source plume gave localised heating, while the distributed source was driven by a controllable heater mat located in the side wall of the enclosure. The transient temperatures, as well as steady-state temperature profiles, were recorded and are reported here. The temperature profiles inside the enclosure were found to depend on the effective opening area A*, a combination of the upper and lower openings, and on the ratio of buoyancy fluxes from the distributed and localised sources, Ψ = B_w/B_p. Industrial CASE award with ARUP.

  3. Optimized computational method for determining the beta dose distribution using a multiple-element thermoluminescent dosimeter system

    International Nuclear Information System (INIS)

    Shen, L.; Levine, S.H.; Catchen, G.L.

    1987-01-01

    This paper describes an optimization method for determining the beta dose distribution in tissue, together with the associated testing and verification. The method uses electron transport theory and optimization techniques to analyze the responses of a three-element thermoluminescent dosimeter (TLD) system. Specifically, the method determines the effective beta energy distribution incident on the dosimeter system, so that the system performs as a beta spectrometer. Electron transport theory provides the mathematical model for performing the optimization calculation. In this calculation, parameters are determined that produce the calculated doses for each of the chip/absorber components in the three-element TLD system; the resulting optimized parameters describe an effective incident beta distribution. This method can be used to determine the beta dose specifically at 7 mg·cm⁻² or at any depth of interest. The doses at 7 mg·cm⁻² in tissue determined by this method are compared to those experimentally determined using an extrapolation chamber. For a great variety of pure beta sources having different incident beta energy distributions, good agreement is found. The results are also compared to those produced by a commonly used empirical algorithm. Although the optimization method produces somewhat better results, its main advantage is that its performance is not sensitive to the specific method of calibration.
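
    The unfolding step can be pictured as a small constrained least-squares problem. The sketch below is our hypothetical illustration (made-up response numbers, not the paper's transport-theory model): each column of A holds the computed response of the three chip/absorber elements to one trial beta energy group, and non-negative least squares recovers effective group weights from the three measured readings:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    A = np.array([[0.9, 0.5, 0.2],     # thin-absorber chip response per group
                  [0.4, 0.7, 0.5],     # medium-absorber chip
                  [0.1, 0.3, 0.8]])    # thick-absorber chip (assumed values)
    readings = np.array([0.62, 0.55, 0.38])   # measured chip doses (assumed)

    weights, residual = nnls(A, readings)     # non-negative group weights
    print(weights, residual)
    ```

    With the effective spectrum in hand, the dose at 7 mg·cm⁻² (or any other depth) would follow by folding the weights with depth-dose curves for each energy group.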

  4. Production, Distribution, and Applications of Californium-252 Neutron Sources

    International Nuclear Information System (INIS)

    Balo, P.A.; Knauer, J.B.; Martin, R.C.

    1999-01-01

    The radioisotope ²⁵²Cf is routinely encapsulated into compact, portable, intense neutron sources with a 2.6-year half-life. A source the size of a person's little finger can emit up to 10¹¹ neutrons/s. Californium-252 is used commercially as a reliable, cost-effective neutron source for prompt gamma neutron activation analysis (PGNAA) of coal, cement, and minerals, as well as for detection and identification of explosives, land mines, and unexploded military ordnance. Other uses are neutron radiography, nuclear waste assays, reactor start-up sources, calibration standards, and cancer therapy. The inherent safety of source encapsulations is demonstrated by 30 years of experience and by U.S. Bureau of Mines tests of source survivability during explosions. The production and distribution center for the U.S. Department of Energy (DOE) Californium Program is the Radiochemical Engineering Development Center (REDC) at Oak Ridge National Laboratory (ORNL). DOE sells ²⁵²Cf to commercial reencapsulators domestically and internationally. Sealed ²⁵²Cf sources are also available for loan to agencies and subcontractors of the U.S. government and to universities for educational, research, and medical applications. The REDC has established the Californium User Facility (CUF) for Neutron Science to make its large inventory of ²⁵²Cf sources available to researchers for irradiations inside uncontaminated hot cells. Experiments at the CUF include a land mine detection system, neutron damage testing of solid-state detectors, irradiation of human cancer cells for boron neutron capture therapy experiments, and irradiation of rice to induce genetic mutations.

  5. A new method of optimal capacitor switching based on minimum spanning tree theory in distribution systems

    Science.gov (United States)

    Li, H. W.; Pan, Z. Y.; Ren, Y. B.; Wang, J.; Gan, Y. L.; Zheng, Z. Z.; Wang, W.

    2018-03-01

    According to the radial operation characteristics of distribution systems, this paper proposes a new method for optimal capacitor switching based on minimum spanning trees. Firstly, taking minimal active power loss as the objective function and disregarding the capacity constraints of capacitors and source, the paper uses the Prim algorithm, one of the minimum-spanning-tree algorithms, to obtain the power supply ranges of the capacitors and the source. Then, with the capacity constraints of capacitors considered, the capacitors are ranked by the method of breadth-first search. In order of capacitor ranking from high to low, each capacitor's compensation capacity is calculated based on its power supply range. Finally, the IEEE 69-bus system is adopted to test the accuracy and practicality of the proposed algorithm.
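
    For readers unfamiliar with it, Prim's algorithm grows a spanning tree from a root node by repeatedly attaching the cheapest edge that reaches a new node. Below is a minimal generic sketch (ours, not the paper's implementation; in this application the edge weights could encode electrical distance on the feeder):

    ```python
    import heapq

    def prim_mst(n_nodes, edges, root=0):
        """Generic Prim's algorithm. `edges` maps undirected pairs (u, v) to
        weights; returns the tree edges (parent, child, weight) in the order
        nodes join the tree rooted at `root`."""
        adj = {u: [] for u in range(n_nodes)}
        for (u, v), w in edges.items():
            adj[u].append((w, u, v))
            adj[v].append((w, v, u))
        visited, tree = {root}, []
        heap = list(adj[root])
        heapq.heapify(heap)
        while heap and len(visited) < n_nodes:
            w, u, v = heapq.heappop(heap)
            if v in visited:
                continue
            visited.add(v)
            tree.append((u, v, w))
            for edge in adj[v]:
                if edge[2] not in visited:
                    heapq.heappush(heap, edge)
        return tree

    # toy 5-bus feeder with bus 0 as the source
    edges = {(0, 1): 1.0, (1, 2): 2.0, (0, 3): 2.5, (3, 2): 0.5, (1, 4): 1.2}
    print(prim_mst(5, edges))
    ```

    Running Prim once from the source and once from each capacitor bus is one way to partition the feeder into the "power supply ranges" the abstract refers to.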

  6. Geometric effects in alpha particle detection from distributed air sources

    International Nuclear Information System (INIS)

    Gil, L.R.; Leitao, R.M.S.; Marques, A.; Rivera, A.

    1994-08-01

    Geometric effects associated with the detection of alpha particles from distributed air sources, as occur in radon and thoron measurements, are revisited. The volume outside which no alpha particle may reach the entrance window of the detector is defined and determined analytically for rectangular- and cylindrical-symmetry geometries. (author). 3 figs.

  7. Radiation sources and methods for producing them

    International Nuclear Information System (INIS)

    Malson, H.A.; Moyer, S.E.; Honious, H.B.; Janzow, E.F.

    1979-01-01

    The radiation sources contain a substrate with an electrically conducting, non-radioactive metal surface, onto which a layer of a metal isotope of the scandium group, together with a percentage of non-radioactive binding metal, is coated by means of an electroplating method. Besides examples of β sources (¹⁴⁷Pm), γ sources (²⁴¹Am), and neutron sources (²⁵²Cf), an α-radiation source (²⁴¹Am, ²⁴⁴Cm, ²³⁸Pu) for smoke detectors is described. Extensive tables and a bibliography are given. (DG)

  8. The Source Equivalence Acceleration Method

    International Nuclear Information System (INIS)

    Everson, Matthew S.; Forget, Benoit

    2015-01-01

    Highlights: • We present a new acceleration method, the Source Equivalence Acceleration Method. • SEAM forms an equivalent coarse group problem for any spatial method. • Equivalence is also formed across different spatial methods and angular quadratures. • Testing is conducted using OpenMOC and performance is compared with CMFD. • Results show that SEAM is preferable for very expensive transport calculations. - Abstract: Fine-group whole-core reactor analysis remains one of the long sought goals of the reactor physics community. Such a detailed analysis is typically too computationally expensive to be realized on anything except the largest of supercomputers. Recondensation using the Discrete Generalized Multigroup (DGM) method, though, offers a relatively cheap alternative to solving the fine group transport problem. DGM, however, suffered from inconsistencies when applied to high-order spatial methods. While an exact spatial recondensation method was developed and provided full spatial consistency with the fine group problem, this approach substantially increased memory requirements for realistic problems. The method described in this paper, called the Source Equivalence Acceleration Method (SEAM), forms a coarse-group problem which preserves the fine-group problem even when using higher order spatial methods. SEAM allows recondensation to converge to the fine-group solution with minimal memory requirements and little additional overhead. This method also provides for consistency when using different spatial methods and angular quadratures between the coarse group and fine group problems. SEAM was implemented in OpenMOC, a 2D MOC code developed at MIT, and its performance tested against Coarse Mesh Finite Difference (CMFD) acceleration on the C5G7 benchmark problem and on a 361 group version of the problem. For extremely expensive transport calculations, SEAM was able to outperform CMFD, resulting in speed-ups of 20–45 relative to the normal power

  9. Optimal planning of multiple distributed generation sources in distribution networks: A new approach

    Energy Technology Data Exchange (ETDEWEB)

    AlRashidi, M.R., E-mail: malrash2002@yahoo.com [Department of Electrical Engineering, College of Technological Studies, Public Authority for Applied Education and Training (PAAET) (Kuwait); AlHajri, M.F., E-mail: mfalhajri@yahoo.com [Department of Electrical Engineering, College of Technological Studies, Public Authority for Applied Education and Training (PAAET) (Kuwait)

    2011-10-15

    Highlights: → A new hybrid PSO for optimal DGs placement and sizing. → Statistical analysis to fine tune PSO parameters. → Novel constraint handling mechanism to handle different constraints types. - Abstract: An improved particle swarm optimization algorithm (PSO) is presented for optimal planning of multiple distributed generation sources (DG). This problem can be divided into two sub-problems: the DG optimal size (continuous optimization) and location (discrete optimization) to minimize real power losses. The proposed approach addresses the two sub-problems simultaneously using an enhanced PSO algorithm capable of handling multiple DG planning in a single run. A design of experiment is used to fine tune the proposed approach via proper analysis of PSO parameters interaction. The proposed algorithm treats the problem constraints differently by adopting a radial power flow algorithm to satisfy the equality constraints, i.e. power flows in distribution networks, while the inequality constraints are handled by making use of some of the PSO features. The proposed algorithm was tested on the practical 69-bus power distribution system. Different test cases were considered to validate the proposed approach consistency in detecting optimal or near optimal solution. Results are compared with those of Sequential Quadratic Programming.

  10. Optimal planning of multiple distributed generation sources in distribution networks: A new approach

    International Nuclear Information System (INIS)

    AlRashidi, M.R.; AlHajri, M.F.

    2011-01-01

    Highlights: → A new hybrid PSO for optimal DGs placement and sizing. → Statistical analysis to fine tune PSO parameters. → Novel constraint handling mechanism to handle different constraints types. - Abstract: An improved particle swarm optimization algorithm (PSO) is presented for optimal planning of multiple distributed generation sources (DG). This problem can be divided into two sub-problems: the DG optimal size (continuous optimization) and location (discrete optimization) to minimize real power losses. The proposed approach addresses the two sub-problems simultaneously using an enhanced PSO algorithm capable of handling multiple DG planning in a single run. A design of experiment is used to fine tune the proposed approach via proper analysis of PSO parameters interaction. The proposed algorithm treats the problem constraints differently by adopting a radial power flow algorithm to satisfy the equality constraints, i.e. power flows in distribution networks, while the inequality constraints are handled by making use of some of the PSO features. The proposed algorithm was tested on the practical 69-bus power distribution system. Different test cases were considered to validate the proposed approach consistency in detecting optimal or near optimal solution. Results are compared with those of Sequential Quadratic Programming.
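
    For orientation, the core PSO update that both records describe enhancing is only a few lines. The sketch below is a bare-bones generic PSO (ours, not the paper's enhanced variant with its constraint handling), applied to a hypothetical one-dimensional stand-in for "loss as a function of DG size":

    ```python
    import numpy as np

    def pso_minimize(loss, bounds, n_particles=30, n_iter=200, seed=0):
        """Bare-bones PSO: particles move under inertia plus attraction toward
        their personal best and the swarm best; positions are clipped to bounds."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        x = rng.uniform(lo, hi, (n_particles, len(lo)))
        v = np.zeros_like(x)
        pbest, pbest_val = x.copy(), np.array([loss(p) for p in x])
        gbest = pbest[pbest_val.argmin()]
        for _ in range(n_iter):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            val = np.array([loss(p) for p in x])
            better = val < pbest_val
            pbest[better], pbest_val[better] = x[better], val[better]
            gbest = pbest[pbest_val.argmin()]
        return gbest, pbest_val.min()

    # toy loss minimized at a 1.8 MW unit (made-up numbers)
    best_x, best_f = pso_minimize(lambda s: (s[0] - 1.8) ** 2 + 0.1, bounds=[(0.0, 5.0)])
    print(best_x, best_f)
    ```

    The paper's contributions sit on top of this skeleton: mixed discrete/continuous coding of bus locations and sizes, parameters tuned by a designed experiment, and a radial power flow inside the loss evaluation.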

  11. Distribution-independent hierarchical N-body methods

    International Nuclear Information System (INIS)

    Aluru, S.

    1994-01-01

    The N-body problem is to simulate the motion of N particles under the influence of mutual force fields based on an inverse square law. The problem has applications in several domains including astrophysics, molecular dynamics, fluid dynamics, radiosity methods in computer graphics and numerical complex analysis. Research efforts have focused on reducing the O(N²) time per iteration required by the naive algorithm of computing each pairwise interaction. Widely respected among these are the Barnes-Hut and Greengard methods. Greengard claims his algorithm reduces the complexity to O(N) time per iteration. Throughout this thesis, we concentrate on rigorous, distribution-independent, worst-case analysis of the N-body methods. We show that Greengard's algorithm is not O(N), as claimed. Both Barnes-Hut and Greengard's methods depend on the same data structure, which we show is distribution-dependent. For the distribution that results in the smallest running time, we show that Greengard's algorithm is Ω(N log² N) in two dimensions and Ω(N log⁴ N) in three dimensions. We have designed a hierarchical data structure whose size depends entirely upon the number of particles and is independent of their distribution. We show that both Greengard's and Barnes-Hut algorithms can be used in conjunction with this data structure to reduce their complexity. Apart from reducing the complexity of the Barnes-Hut algorithm, the data structure also permits more accurate error estimation. We present two- and three-dimensional algorithms for creating the data structure. The multipole method designed using this data structure has a complexity of O(N log N) in two dimensions and O(N log² N) in three dimensions.

  12. Identifying and characterizing major emission point sources as a basis for geospatial distribution of mercury emissions inventories

    Science.gov (United States)

    Steenhuisen, Frits; Wilson, Simon J.

    2015-07-01

    Mercury is a global pollutant that poses threats to ecosystems and human health. Due to its global transport, mercury contamination is found in regions of the Earth that are remote from major emissions areas, including the Polar regions. Global anthropogenic emission inventories identify important sectors and industries responsible for emissions at a national level; however, to be useful for air transport modelling, more precise information on the locations of emissions is required. This paper describes the methodology applied, and the results of work that was conducted, to assign anthropogenic mercury emissions to point sources as part of geospatial mapping of the 2010 global anthropogenic mercury emissions inventory prepared by AMAP/UNEP. Major point-source emission sectors addressed in this work account for about 850 tonnes of the emissions included in the 2010 inventory. This work allocated more than 90% of these emissions to some 4600 identified point source locations, including significantly more point source locations in Africa, Asia, Australia and South America than had been identified during previous work to geospatially distribute the 2005 global inventory. The results demonstrate the utility and the limitations of using existing, mainly public domain resources to accomplish this work. Assumptions necessary to make use of selected online resources are discussed, as are artefacts that can arise when these assumptions are applied to assign (national-sector) emissions estimates to point sources in various countries and regions. Notwithstanding the limitations of the available information, the value of this procedure over alternative methods commonly used to geospatially distribute emissions, such as use of 'proxy' datasets to represent emissions patterns, is illustrated. Improvements in information that would facilitate greater use of these methods in future work to assign emissions to point sources are discussed. These include improvements to both national

  13. Uncertainty analysis methods for quantification of source terms using a large computer code

    International Nuclear Information System (INIS)

    Han, Seok Jung

    1997-02-01

    Quantification of uncertainties in the source term estimations by a large computer code, such as MELCOR and MAAP, is an essential process of the current probabilistic safety assessments (PSAs). The main objectives of the present study are (1) to investigate the applicability of a combined procedure of the response surface method (RSM) based on input determined from a statistical design and the Latin hypercube sampling (LHS) technique for the uncertainty analysis of CsI release fractions under a hypothetical severe accident sequence of a station blackout at Young-Gwang nuclear power plant using the MAAP3.0B code as a benchmark problem; and (2) to propose a new measure of uncertainty importance based on distributional sensitivity analysis. On the basis of the results obtained in the present work, the RSM is recommended as a principal tool for the overall uncertainty analysis in source term quantifications, while the LHS is used in the calculations of standardized regression coefficients (SRC) and standardized rank regression coefficients (SRRC) to determine the subset of the most important input parameters in the final screening step and to check the cumulative distribution functions (cdfs) obtained by the RSM. Verification of the response surface model for its sufficient accuracy is a prerequisite for the reliability of the final results obtained by the combined procedure proposed in the present work. In the present study, a new measure has been developed that utilizes the metric distance obtained from cumulative distribution functions (cdfs). The measure has been evaluated for three different cases of distributions in order to assess its characteristics: in the first two cases the distributions are known analytical distributions, while in the third the distribution is unknown. The first case uses symmetric analytical distributions; the second consists of two asymmetric distributions with nonzero skewness.
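
    As a concrete illustration of the LHS step described above, the sketch below draws a stratified sample with scipy; the parameter dimensions, bounds, and the regression remark are illustrative placeholders, not MAAP inputs:

        import numpy as np
        from scipy.stats import qmc

        sampler = qmc.LatinHypercube(d=3, seed=1)      # 3 hypothetical uncertain inputs
        unit = sampler.random(n=50)                    # 50 stratified samples in [0,1)^3
        lo = [0.1, 300.0, 1e-4]                        # hypothetical lower bounds
        hi = [0.9, 900.0, 1e-2]                        # hypothetical upper bounds
        X = qmc.scale(unit, lo, hi)                    # input matrix, one code run per row

        # Regressing the code outputs on X (raw and rank-transformed) would give
        # the SRC/SRRC importance measures mentioned in the abstract.
        print(X.shape)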

  14. Space power distribution of soft x-ray source ANGARA-5-1

    Energy Technology Data Exchange (ETDEWEB)

    Dyabilin, K S [High Energy Density Research Center, Moscow (Russian Federation); Fortov, V E; Grabovskij, E V; Lebedev, M E; Smirnov, V P [Troitsk Inst. of Innovative and Fusion Research, Troitsk (Russian Federation)

    1997-12-31

    The contribution deals with the investigation of shock waves in condensed targets generated by intense pulses of soft X radiation. Main attention is paid to the spatial distribution of the soft x-ray power, which strongly influences the shock wave front uniformity. A hot z-pinch plasma with a temperature of 60-100 eV, produced by an imploding double liner in the ANGARA-5-1 machine, was used as the source of x rays. The maximum pinch current was as high as 3.5 MA. In order to eliminate the thermal heating of the targets, thick stepped Al/Pb, Sn/Pb, or pure Pb targets were used. The velocity of the shock waves was determined by means of optical methods. Very uniform shock waves and shock pressures of up to several hundreds of GPa have been achieved. (J.U.). 3 figs., 2 refs.

  15. Impact source identification in finite isotropic plates using a time-reversal method: theoretical study

    International Nuclear Information System (INIS)

    Chen, Chunlin; Yuan, Fuh-Gwo

    2010-01-01

    This paper aims to identify impact sources on plate-like structures based on the synthetic time-reversal (T-R) concept using an array of sensors. The impact source characteristics, namely, impact location and impact loading time history, are reconstructed using the invariance of the time-reversal concept, reciprocal theory, and signal processing algorithms. Numerical verification for two finite isotropic plates under low and high velocity impacts is performed to demonstrate the versatility of the synthetic T-R method for impact source identification. The results show that the impact location and time history of the impact force with various shapes and frequency bands can be readily obtained with only four sensors distributed around the impact location. The effects of time duration and the inaccuracy in the estimated impact location on the accuracy of the time history of the impact force using the T-R method are investigated. Since the T-R technique retraces all the multi-paths of reflected waves from the geometrical boundaries back to the impact location, it is well suited for quantifying the impact characteristics for complex structures. In addition, this method is robust against noise and it is suggested that a small number of sensors is sufficient to quantify the impact source characteristics through simple computation; thus it holds promise for the development of passive structural health monitoring (SHM) systems for impact monitoring in near real-time.
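
    The refocusing property that the T-R concept exploits can be illustrated with a toy one-dimensional example: re-propagating a time-reversed multipath record through the same medium compresses the echoes back into a single peak. The channel taps and pulse below are invented for illustration and are not the paper's plate model:

        import numpy as np

        rng = np.random.default_rng(0)
        h = np.zeros(256)
        h[[10, 57, 130, 201]] = [1.0, 0.6, 0.4, 0.25]         # multipath echoes
        s = np.exp(-0.5 * ((np.arange(64) - 32) / 6.0) ** 2)  # impact-like pulse
        r = np.convolve(s, h) + 0.01 * rng.standard_normal(64 + 256 - 1)

        # Convolving the time-reversed record with the same impulse response
        # compresses the spread-out echo energy back into one focused peak.
        focus = np.convolve(r[::-1], h)
        print(int(np.argmax(np.abs(focus))))                  # the refocused peak sample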

  16. Two-step design method for highly compact three-dimensional freeform optical system for LED surface light source.

    Science.gov (United States)

    Mao, Xianglong; Li, Hongtao; Han, Yanjun; Luo, Yi

    2014-10-20

    Designing an illumination system for a surface light source with a strict compactness requirement is quite challenging, especially for the general three-dimensional (3D) case. In accordance with the two key features of an expected illumination distribution, i.e., a well-controlled boundary and a precise illumination pattern, a two-step design method is proposed in this paper for highly compact 3D freeform illumination systems. In the first step, a target shape scaling strategy is combined with an iterative feedback modification algorithm to generate an optimized freeform optical system with a well-controlled boundary of the target distribution. In the second step, a set of selected radii of the system obtained in the first step are optimized to further improve the illuminating quality within the target region. The method is quite flexible and effective for designing highly compact optical systems with almost no restriction on the shape of the desired target field. As examples, three highly compact freeform lenses with a ratio of the lens center height h to the maximum source dimension D of ≤ 2.5:1 are designed for LED surface light sources to form a uniform illumination distribution on a rectangular, a cross-shaped, and a complex cross-pierced target plane, respectively. A high light control efficiency of η > 0.7 as well as a low relative standard illumination deviation of RSD < 0.07 are obtained simultaneously for all three design examples.

  17. Study of Different Tissue Density Effects on the Dose Distribution of a 103Pd Brachytherapy Source Model MED3633

    Directory of Open Access Journals (Sweden)

    Ali Asghar Mowlavi

    2010-09-01

    Introduction: Clinical application of encapsulated radioactive brachytherapy sources has a major role in cancer treatment. In the present research, the effects of different tissue densities on the dose distribution of a 103Pd brachytherapy source in a spherical phantom of 50 cm radius have been studied. Material and Methods: Absorbed dose in tissue depends on tissue density, but this dependence is difficult to resolve in measurements. Therefore, we applied the MCNP code to evaluate the effect of density on the dose distribution. 103Pd brachytherapy sources are used to treat prostate, breast and other cancers. Results: Absorbed dose has been calculated and presented around a source placed in the center of the phantom for different tissue densities. Also, we derived the anisotropy and radial dose functions and compared our Monte Carlo results with the experimental results of Rivard and Li et al. for F(r, θ) and g(r) in 1.040 g/cm3 tissue. Conclusion: The results of this study show that relative dose variations around the source center are very considerable at different densities, because of the presence of a photoabsorber (Au-Cu alloy) in the source core. Dose variation exceeds 80% at the point (Z = 2.4 mm, Y = 0 mm). Computed values of the anisotropy and radial dose functions are in good agreement with the experimental results of Rivard and Li et al.
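
    For context, the F(r, θ) and g(r) functions above belong to the TG-43 dose-calculation formalism. A minimal sketch of the point-source form of that formalism is given below; the dose-rate constant and tabulated g(r) values are placeholders, not MED3633 data:

        import numpy as np

        Sk = 1.0                 # air-kerma strength, U (placeholder)
        Lam = 0.68               # dose-rate constant, cGy/(h*U) (placeholder)
        r0 = 1.0                 # TG-43 reference distance, cm
        r_tab = np.array([0.5, 1.0, 2.0, 3.0, 5.0])        # cm
        g_tab = np.array([1.30, 1.00, 0.55, 0.32, 0.13])   # radial dose function (placeholder)

        def dose_rate(r, F=1.0):
            """Dose rate at radius r (cm); F is the anisotropy function value."""
            g = np.interp(r, r_tab, g_tab)       # interpolate tabulated g(r)
            G, G0 = 1.0 / r**2, 1.0 / r0**2      # point-source geometry function
            return Sk * Lam * (G / G0) * g * F

        print(dose_rate(2.0))    # cGy/h at 2 cm on the transverse axis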

  18. Method of measuring the current density distribution and emittance of pulsed electron beams

    International Nuclear Information System (INIS)

    Schilling, H.B.

    1979-07-01

    This method of current density measurement employs an array of many Faraday cups, each cup being terminated by an integrating capacitor. The voltages of the capacitors are subsequently displayed on a scope, thus giving the complete current density distribution with one shot. In the case of emittance measurements, a moveable small-diameter aperture is inserted at some distance in front of the cup array. Typical results with a two-cathode, two-energy electron source are presented. (orig.)

  19. Methods for forming particles from single source precursors

    Science.gov (United States)

    Fox, Robert V [Idaho Falls, ID; Rodriguez, Rene G [Pocatello, ID; Pak, Joshua [Pocatello, ID

    2011-08-23

    Single source precursors are subjected to carbon dioxide to form particles of material. The carbon dioxide may be in a supercritical state. Single source precursors also may be subjected to supercritical fluids other than supercritical carbon dioxide to form particles of material. The methods may be used to form nanoparticles. In some embodiments, the methods are used to form chalcopyrite materials. Devices such as, for example, semiconductor devices may be fabricated that include such particles. Methods of forming semiconductor devices include subjecting single source precursors to carbon dioxide to form particles of semiconductor material, and establishing electrical contact between the particles and an electrode.

  20. An Adjoint Sensitivity Method Applied to Time Reverse Imaging of Tsunami Source for the 2009 Samoa Earthquake

    Science.gov (United States)

    Hossen, M. Jakir; Gusman, Aditya; Satake, Kenji; Cummins, Phil R.

    2018-01-01

    We have previously developed a tsunami source inversion method based on "Time Reverse Imaging" and demonstrated that it is computationally very efficient and able to reproduce the tsunami source model with good accuracy using tsunami data of the 2011 Tohoku earthquake tsunami. In this paper, we applied this approach to the 2009 Samoa earthquake tsunami, triggered by a doublet earthquake consisting of both normal and thrust faulting. Our results show that the method is quite capable of recovering the source model associated with normal and thrust faulting. We found that the inversion result is highly sensitive to certain stations that must be removed from the inversion. We applied an adjoint sensitivity method to find the optimal set of stations in order to estimate a realistic source model, and found that the inversion result improves significantly once the optimal set of stations is used. In addition, from the reconstructed source model we estimated the slip distribution of the fault, from which we successfully determined the dipping orientation of the fault plane for the normal fault earthquake. Our results suggest that the fault plane dips toward the northeast.

  1. Distributed MIMO-ISAR Sub-image Fusion Method

    Directory of Open Access Journals (Sweden)

    Gu Wenkun

    2017-02-01

    The fast fluctuation of a maneuvering target's radar cross-section often affects the imaging performance stability of traditional monostatic Inverse Synthetic Aperture Radar (ISAR). To address this problem, in this study we propose an imaging method based on the fusion of sub-images from frequency-diversity distributed Multiple-Input Multiple-Output Inverse Synthetic Aperture Radar (MIMO-ISAR). First, we establish the analytic expression of a two-dimensional ISAR sub-image acquired by the different channels of distributed MIMO-ISAR. Then, we derive the distance and azimuth distortion factors of the images acquired by the different channels. By compensating for the distortion of the ISAR images, we ultimately realize distributed MIMO-ISAR fusion imaging. Simulations verify the validity of this imaging method.

  2. The versatile biopolymer chitosan: potential sources, evaluation of extraction methods and applications.

    Science.gov (United States)

    Kaur, Surinder; Dhillon, Gurpreet Singh

    2014-05-01

    Among the biopolymers, chitin and its derivative chitosan (CTS) have been receiving increasing attention. Both are composed of randomly distributed β-(1-4)-linked d-glucosamine and N-acetyl glucosamine units. On commercial scale, CTS is mainly obtained from the crustacean shells. The chemical methods employed for extraction of CTS from crustacean shells are laden with many disadvantages. Waste fungal biomass represents a potential biological source of CTS, in fact with superior physico-chemical properties, such as high degree of deacetylation, low molecular weight, devoid of protein contamination and high bioactivity. Researchers around the globe are attempting to commercialize CTS production and extraction from fungal sources. Fungi are promising and environmentally benign source of CTS and they have the potential to completely replace crustacean-derived CTS. Waste fungal biomass resulting from various pharmaceutical and biotechnological industries is grown on inexpensive agro-industrial wastes and its by-products are a rich and inexpensive source of CTS. CTS is emerging as an important natural polymer having broad range of applications in different fields. In this context, the present review discusses the potential sources of CTS and their advantages and disadvantages. This review also deals with potential applications of CTS in different fields. Finally, the various attributes of CTS sought in different applications are discussed.

  3. Imaging phase holdup distribution of three phase flow systems using dual source gamma ray tomography

    International Nuclear Information System (INIS)

    Varma, Rajneesh; Al-Dahhan, Muthanna; O'Sullivan, Joseph

    2008-01-01

    Multiphase reaction and process systems are used in abundance in the chemical and biochemical industry. Tomography has been successfully employed to visualize the hydrodynamics of multiphase systems. Most tomography methods (gamma ray, x-ray, and electrical capacitance and resistance) have been successfully implemented for two-phase dynamic systems. However, a significant number of chemical and biochemical systems consist of three dynamic phases. Research effort directed towards the development of tomography techniques to image such dynamic systems has met with partial success, limited to specific systems and operating conditions. A dual source tomography scanner has been developed that uses the 661 keV and 1332 keV photo peaks from 137Cs and 60Co for imaging three-phase systems. A new approach has been developed and applied that uses the polyenergetic Alternating Minimization (A-M) algorithm, developed by O'Sullivan and Benac (2007), for imaging the holdup distribution in three-phase dynamic systems. The new approach avoids the traditional post-image-processing route to the holdup distribution, in which attenuation images of the mixed flow obtained from gamma ray photons of two different energies are used to determine the holdup of the three phases. In this approach, the holdup images are reconstructed directly from the gamma ray transmission data. The dual source gamma ray tomography scanner and the algorithm were validated using a three-phase phantom. Based on this validation, three-phase holdup studies were carried out in a slurry bubble column containing gas, liquid, and solid phases in a dynamic state using dual energy gamma ray tomography. The key results of the holdup distribution studies in the slurry bubble column, along with the validation of the dual source gamma ray tomography system, are presented and discussed

  4. Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques

    Directory of Open Access Journals (Sweden)

    Valenzise G

    2009-01-01

    In the past few years, a large number of techniques have been proposed to identify whether a multimedia content has been tampered with or not. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, especially due to the large amount of data required for this task. We propose a novel hashing scheme which exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges from 20% to 70%.

  5. Extending the alias Monte Carlo sampling method to general distributions

    International Nuclear Information System (INIS)

    Edwards, A.L.; Rathkopf, J.A.; Smidt, R.K.

    1991-01-01

    The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods. It equals the accuracy of table lookup and the speed of equally probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest. We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 12 figs., 2 tabs
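
    A minimal sketch of the discrete alias method (Vose's construction), the starting point on which the paper's continuous piecewise-linear extension builds, is shown below; table construction is O(n) and each sample costs one uniform index plus one comparison:

        import random

        def build_alias(p):
            n = len(p)
            scaled = [n * x for x in p]              # rescale so the mean bin mass is 1
            small = [i for i, x in enumerate(scaled) if x < 1.0]
            large = [i for i, x in enumerate(scaled) if x >= 1.0]
            prob, alias = [0.0] * n, [0] * n
            while small and large:
                s, l = small.pop(), large.pop()
                prob[s], alias[s] = scaled[s], l     # fill bin s with a slice of l
                scaled[l] -= 1.0 - scaled[s]
                (small if scaled[l] < 1.0 else large).append(l)
            for i in small + large:                  # leftovers carry exactly unit mass
                prob[i] = 1.0
            return prob, alias

        def sample(prob, alias):
            i = random.randrange(len(prob))          # pick a bin uniformly...
            return i if random.random() < prob[i] else alias[i]   # ...or take its alias

        prob, alias = build_alias([0.1, 0.2, 0.3, 0.4])
        counts = [0] * 4
        for _ in range(100000):
            counts[sample(prob, alias)] += 1
        print([c / 100000 for c in counts])          # ~ [0.1, 0.2, 0.3, 0.4]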

  6. The Sources and Methods of Engineering Design Requirement

    DEFF Research Database (Denmark)

    Li, Xuemeng; Zhang, Zhinan; Ahmed-Kristensen, Saeema

    2014-01-01

    to be defined in a new context. This paper focuses on understanding the design requirement sources at the requirement elicitation phase. It aims at proposing an improved design requirement source classification considering emerging markets and presenting current methods for eliciting requirement for each source...

  7. Advection-diffusion model for the simulation of air pollution distribution from a point source emission

    Science.gov (United States)

    Ulfah, S.; Awalludin, S. A.; Wahidin

    2018-01-01

    The advection-diffusion model is one of the mathematical models that can be used to understand the distribution of air pollutants in the atmosphere. This work uses the 2D time-dependent advection-diffusion model to simulate the distribution of air pollution in order to find out whether pollutants are more concentrated at ground level or near the source of emission under particular atmospheric conditions such as stable, unstable, and neutral conditions. Wind profile, eddy diffusivity, and temperature are considered as parameters in the model. The model is solved using an explicit finite difference method and visualized by a computer program developed with the Lazarus programming software. The results show that atmospheric conditions alone do not conclusively determine the concentration level of pollutants, as each parameter in the model has its own effect under each atmospheric condition.
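
    A one-dimensional analogue of the explicit finite-difference scheme described above can be sketched in a few lines; the paper treats the 2D case, and the wind speed, diffusivity, and emission rate below are hypothetical values chosen to satisfy the explicit-scheme stability limits:

        import numpy as np

        nx, dx, dt = 200, 10.0, 0.5          # grid points, spacing (m), time step (s)
        u, K = 2.0, 5.0                      # wind speed (m/s), eddy diffusivity (m^2/s)
        C = np.zeros(nx)                     # pollutant concentration
        src = 20                             # index of the point-source emission

        for _ in range(2000):
            C[src] += 1.0 * dt               # constant emission rate (hypothetical)
            Cn = C.copy()
            # upwind advection + centred diffusion, forward in time
            Cn[1:-1] = (C[1:-1]
                        - u * dt / dx * (C[1:-1] - C[:-2])
                        + K * dt / dx**2 * (C[2:] - 2 * C[1:-1] + C[:-2]))
            Cn[0] = Cn[-1] = 0.0             # open boundaries
            C = Cn

        print(C.argmax(), C.max())           # concentration peak sits downwind of src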

  8. Relationship of Source Selection Methods to Contract Outcomes: an Analysis of Air Force Source Selection

    Science.gov (United States)

    2015-12-01

    Capt Jacques Lamoureux, USAF, December 2015. This thesis analyzes the contract management process, with special emphasis on the source selection methods of tradeoff and lowest price technically acceptable (LPTA). On some occasions, performance is terminated early; this can occur due to either mutual agreement or a breach of contract by one of the parties (Garrett).

  9. 10 CFR 32.74 - Manufacture and distribution of sources or devices containing byproduct material for medical use.

    Science.gov (United States)

    2010-01-01

    10 CFR 32.74: Manufacture and distribution of sources or devices containing byproduct material for medical use. Code of Federal Regulations, Title 10 (Energy), Part 32 (Specific Domestic Licenses to Manufacture or Transfer Certain Items Containing Byproduct Material), Generally Licensed Items, § 32.74; 2010-01-01 edition.

  10. Probing the Spatial Distribution of the Interstellar Dust Medium by High Angular Resolution X-ray Halos of Point Sources

    Science.gov (United States)

    Xiang, Jingen

    X-rays are absorbed and scattered by dust grains when they travel through the interstellar medium. Scattering within small angles results in an X-ray "halo". The halo properties are significantly affected by the energy of the radiation, the optical depth of the scattering, the grain size distributions and compositions, and the spatial distribution of dust along the line of sight (LOS). Analyzing X-ray halo properties is therefore an important tool for studying the size distribution and spatial distribution of interstellar grains, which play a central role in the astrophysical study of the interstellar medium, such as the thermodynamics and chemistry of the gas and the dynamics of star formation. With excellent angular resolution, good energy resolution and a broad energy band, the Chandra ACIS is so far the best instrument for studying X-ray halos. But the direct images of bright sources obtained with ACIS usually suffer from severe pileup, which prevents us from obtaining the halos at small angles. We first improve the method proposed by Yao et al. to resolve the X-ray dust scattering halos of point sources from the zeroth order data in CC-mode or the first order data in TE-mode with Chandra HETG/ACIS. Using this method we re-analyze the Cygnus X-1 data observed with Chandra. We then study the X-ray dust scattering halos around 17 bright X-ray point sources using Chandra data. All sources were observed with the HETG/ACIS in CC-mode or TE-mode. Using the interstellar grain models WD01 and MRN to fit the halo profiles, we obtain the hydrogen column densities and the spatial distributions of the scattering dust grains along the lines of sight (LOS) to these sources. We find a good linear correlation not only between the scattering hydrogen column density from the WD01 model and the one from the MRN model, but also between N_{H} derived from spectral fits and the one derived from the grain models WD01 and MRN (except for GX 301-2 and Vela X-1): N

  11. Investigating The Neutron Flux Distribution Of The Miniature Neutron Source Reactor MNSR Type

    International Nuclear Information System (INIS)

    Nguyen Hoang Hai; Do Quang Binh

    2011-01-01

    Neutron flux distribution is an important characteristic of a nuclear reactor. In this article, four-group neutron flux distributions of the miniature neutron source reactor MNSR type along the radial and axial directions are investigated for the case in which the control rod is fully withdrawn. In addition, the effect of the control rod position on the thermal neutron flux distribution is also studied. The group constants for all reactor components are generated by the WIMSD code, and the neutron flux distributions are calculated by the CITATION code. The results show that the control rod position affects the flux distribution only in the region around the control rod. (author)

  12. Methods of formation of efficiency indexes of electric power sources integration in regional electric power systems

    International Nuclear Information System (INIS)

    Marder, L.I.; Myzin, A.I.

    1993-01-01

    A methodological approach to assessing the efficiency of the integration process within the Unified electric power system is given, together with the selection of a rational areal structure and concentration of power-generating source capacities. The formation of an economic functional according to alternative scenarios, including cost components that take account of regional interests, is considered. A method for estimating and distributing the effect of integrated electric power production in power systems under new economic conditions is proposed

  13. A practical algorithm for distribution state estimation including renewable energy sources

    Energy Technology Data Exchange (ETDEWEB)

    Niknam, Taher [Electronic and Electrical Department, Shiraz University of Technology, Modares Blvd., P.O. 71555-313, Shiraz (Iran); Firouzi, Bahman Bahmani [Islamic Azad University Marvdasht Branch, Marvdasht (Iran)

    2009-11-15

    Renewable energy is energy that is in continuous supply over time. These kinds of energy sources are divided into five principal renewable sources of energy: the sun, the wind, flowing water, biomass and heat from within the earth. According to some studies carried out by research institutes, about 25% of new generation will be provided by Renewable Energy Sources (RESs) in the near future. Therefore, it is necessary to study the impact of RESs on power systems, especially on distribution networks. This paper presents a practical Distribution State Estimation (DSE) including RESs and some practical considerations. The proposed algorithm is based on the combination of Nelder-Mead simplex search and Particle Swarm Optimization (PSO) algorithms, called PSO-NM. The proposed algorithm can estimate load and RES output values by a Weighted Least-Squares (WLS) approach. Some practical considerations are var compensators, Voltage Regulators (VRs), and Under Load Tap Changer (ULTC) transformer modeling, which usually have nonlinear and discrete characteristics, as well as unbalanced three-phase power flow equations. The comparison results with other evolutionary optimization algorithms such as original PSO, Honey Bee Mating Optimization (HBMO), Neural Networks (NNs), Ant Colony Optimization (ACO), and Genetic Algorithm (GA) for a test system demonstrate that PSO-NM is extremely effective and efficient for DSE problems. (author)
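
    The flavor of the hybrid PSO-NM search can be sketched as follows: a particle swarm explores globally, and a Nelder-Mead simplex refines the best point found. The weighted least-squares objective here is a tiny stand-in, not the paper's DSE formulation:

        import numpy as np
        from scipy.optimize import minimize

        def wls(x):                                  # hypothetical WLS objective
            z = np.array([1.02, 0.97, 0.35])         # "measurements"
            h = np.array([x[0], x[1], x[0] - x[1]])  # "measurement model"
            w = np.array([100.0, 100.0, 50.0])       # measurement weights
            return float(np.sum(w * (z - h) ** 2))

        rng = np.random.default_rng(0)
        n, dim, w_in, c1, c2 = 30, 2, 0.7, 1.5, 1.5  # swarm size and PSO constants
        x = rng.uniform(-2, 2, (n, dim))
        v = np.zeros((n, dim))
        pbest = x.copy()
        pcost = np.array([wls(p) for p in x])
        g = pbest[pcost.argmin()].copy()             # global best

        for _ in range(100):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            v = w_in * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            cost = np.array([wls(p) for p in x])
            better = cost < pcost
            pbest[better], pcost[better] = x[better], cost[better]
            g = pbest[pcost.argmin()].copy()

        res = minimize(wls, g, method='Nelder-Mead')  # simplex refinement of the best
        print(res.x, res.fun)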

  14. Distribution of hadron intranuclear cascade for large distance from a source

    International Nuclear Information System (INIS)

    Bibin, V.L.; Kazarnovskij, M.V.; Serezhnikov, S.V.

    1985-01-01

    Analytical solution of the problem of three-component hadron cascade development for large distances from a source is obtained in the framework of a series of simplifying assumptions. It makes possible to understand physical mechanisms of the process studied and to obtain approximate asymptotic expressions for hadron distribution functions

  15. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm.

    Science.gov (United States)

    Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J

    2016-02-07

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm(2)). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) relative to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated, with the former presenting the dominant effect.
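
    The multiplicative MLEM update at the core of such an approach can be sketched generically on a toy linear model y = A s; the system matrix and true source below are synthetic placeholders rather than a ray-traced accelerator geometry:

        import numpy as np

        rng = np.random.default_rng(1)
        A = rng.uniform(0.0, 1.0, (80, 40))      # stand-in for the ray-trace matrix
        s_true = np.exp(-0.5 * ((np.arange(40) - 20) / 3.0) ** 2)
        y = A @ s_true                           # "measured" fluence profile

        s = np.ones(40)                          # flat, non-negative starting estimate
        norm = A.T @ np.ones(80)                 # sensitivity (column sums)
        for _ in range(200):
            s *= (A.T @ (y / (A @ s))) / norm    # multiplicative MLEM update

        print(np.max(np.abs(s - s_true)))        # iterates converge toward the true source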

  16. Risk Assessment for Distribution Systems Using an Improved PEM-Based Method Considering Wind and Photovoltaic Power Distribution

    Directory of Open Access Journals (Sweden)

    Qingwu Gong

    2017-03-01

    The intermittency and variability of highly penetrated distributed generators (DGs) can pose many critical security and economic risks to distribution systems. This paper applies a suitable mathematical distribution to model the output variability and uncertainty of DGs. Four risk indices, namely EENS (expected energy not supplied), PLC (probability of load curtailment), EFLC (expected frequency of load curtailment), and SI (severity index), are established to reflect the system risk level of the distribution system. For the given mathematical distribution of the DGs' output power, an improved PEM (point estimate method)-based method is proposed to calculate these four system risk indices. In this improved PEM-based method, an enumeration method is used to list the states of the distribution system, an improved PEM is developed to deal with the uncertainties of DGs, and the value of load curtailment in the distribution system is calculated by an optimal power flow algorithm. Finally, the effectiveness and advantages of this proposed PEM-based method for distribution system risk assessment are verified by testing a modified IEEE 30-bus system. Simulation results show that the proposed PEM-based method has high computational accuracy and greatly reduced computational cost compared with other risk assessment methods, and is very effective for risk assessments.

  17. PROCESS CAPABILITY ESTIMATION FOR NON-NORMALLY DISTRIBUTED DATA USING ROBUST METHODS - A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Yerriswamy Wooluru

    2016-06-01

    Process capability indices are very important process quality assessment tools in the automotive industry. The common process capability indices (PCIs) Cp, Cpk, and Cpm are widely used in practice. The use of these PCIs is based on the assumption that the process is in control and its output is normally distributed. In practice, normality is not always fulfilled. Indices developed based on the normality assumption are very sensitive to non-normal processes. When the distribution of a product quality characteristic is non-normal, Cp and Cpk indices calculated using conventional methods often lead to erroneous interpretation of process capability. In the literature, various methods have been proposed for surrogate process capability indices under non-normality, but few literature sources offer a comprehensive evaluation and comparison of their ability to capture true capability in non-normal situations. In this paper, five methods are reviewed and a capability evaluation is carried out for data pertaining to the resistivity of silicon wafers. The final results revealed that the Burr-based percentile method is better than the Clements method. Modelling of non-normal data and the Box-Cox transformation method using statistical software (Minitab 14) provide reasonably good results, as they are very promising methods for non-normal and moderately skewed data (skewness <= 1.5).
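
    The two simplest routes discussed above can be sketched as follows: the conventional Cp/Cpk calculation (which presumes normality) and the Box-Cox transformation route for skewed data. The sample and specification limits are hypothetical, not the silicon-wafer resistivity data:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        x = rng.lognormal(mean=0.0, sigma=0.4, size=500)   # skewed "resistivity" sample
        LSL, USL = 0.3, 3.0                                # hypothetical spec limits

        def cp_cpk(data, lsl, usl):
            mu, sd = data.mean(), data.std(ddof=1)
            cp = (usl - lsl) / (6 * sd)
            cpk = min(usl - mu, mu - lsl) / (3 * sd)
            return cp, cpk

        print(cp_cpk(x, LSL, USL))                # misleading when the data are skewed

        xt, lam = stats.boxcox(x)                 # transform toward normality
        lsl_t = stats.boxcox(np.array([LSL]), lmbda=lam)[0]   # transform limits too
        usl_t = stats.boxcox(np.array([USL]), lmbda=lam)[0]
        print(cp_cpk(xt, lsl_t, usl_t))           # indices on the transformed scale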

  18. Source-based neurofeedback methods using EEG recordings: training altered brain activity in a functional brain source derived from blind source separation

    Science.gov (United States)

    White, David J.; Congedo, Marco; Ciorciari, Joseph

    2014-01-01

    A developing literature explores the use of neurofeedback in the treatment of a range of clinical conditions, particularly ADHD and epilepsy, whilst neurofeedback also provides an experimental tool for studying the functional significance of endogenous brain activity. A critical component of any neurofeedback method is the underlying physiological signal which forms the basis for the feedback. While the past decade has seen the emergence of fMRI-based protocols training spatially confined BOLD activity, traditional neurofeedback has utilized a small number of electrode sites on the scalp. As scalp EEG at a given electrode site reflects a linear mixture of activity from multiple brain sources and artifacts, efforts to successfully acquire some level of control over the signal may be confounded by these extraneous sources. Further, in the event of successful training, these traditional neurofeedback methods are likely influencing multiple brain regions and processes. The present work describes the use of source-based signal processing methods in EEG neurofeedback. The feasibility and potential utility of such methods were explored in an experiment training increased theta oscillatory activity in a source derived from Blind Source Separation (BSS) of EEG data obtained during completion of a complex cognitive task (spatial navigation). Learned increases in theta activity were observed in two of the four participants to complete 20 sessions of neurofeedback targeting this individually defined functional brain source. Source-based EEG neurofeedback methods using BSS may offer important advantages over traditional neurofeedback, by targeting the desired physiological signal in a more functionally and spatially specific manner. Having provided preliminary evidence of the feasibility of these methods, future work may study a range of clinically and experimentally relevant brain processes where individual brain sources may be targeted by source-based EEG neurofeedback.
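
    The essence of deriving a functional source by BSS can be illustrated with FastICA standing in for the BSS algorithm used in the study; the mixed signals below are synthetic, not EEG recordings, and the electrode mixing matrix is invented:

        import numpy as np
        from sklearn.decomposition import FastICA

        t = np.linspace(0, 8, 2000)
        s1 = np.sin(2 * np.pi * 6 * t)                     # 6 Hz "theta-like" source
        s2 = np.sign(np.sin(2 * np.pi * 1 * t))            # slow artifact-like source
        S = np.c_[s1, s2] + 0.05 * np.random.default_rng(3).standard_normal((2000, 2))
        A = np.array([[1.0, 0.6], [0.4, 1.0]])             # mixing into two "electrodes"
        X = S @ A.T                                        # scalp-level mixtures

        ica = FastICA(n_components=2, random_state=0)
        S_hat = ica.fit_transform(X)                       # unmixed source estimates

        # A neurofeedback signal could then track the power of one component:
        theta_power = np.mean(S_hat[:, 0] ** 2)
        print(theta_power)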

  19. STATCONT: A statistical continuum level determination method for line-rich sources

    Science.gov (United States)

    Sánchez-Monge, Á.; Schilke, P.; Ginsburg, A.; Cesaroni, R.; Schmiedeke, A.

    2018-01-01

    STATCONT is a python-based tool designed to determine the continuum emission level in spectral data, in particular for sources with line-rich spectra. The tool inspects the intensity distribution of a given spectrum and automatically determines the continuum level by using different statistical approaches. The different methods included in STATCONT are tested against synthetic data. We conclude that the sigma-clipping algorithm provides the most accurate continuum level determination, together with information on the uncertainty in its determination. This uncertainty can be used to correct the final continuum emission level, resulting in what is here called the 'corrected sigma-clipping method' (c-SCM). The c-SCM has been tested against more than 750 different synthetic spectra reproducing typical conditions found towards astronomical sources. The continuum level is determined with a discrepancy of less than 1% in 50% of the cases, and less than 5% in 90% of the cases, provided at least 10% of the channels are line free. The main products of STATCONT are the continuum emission level, together with a conservative value of its uncertainty, and datacubes containing only spectral line emission, i.e., continuum-subtracted datacubes. STATCONT also includes the option to estimate the spectral index when different files covering different frequency ranges are provided.
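
    The sigma-clipping step at the heart of the method can be sketched in plain numpy as below; the clipping threshold and synthetic spectrum are illustrative only, not STATCONT's actual implementation:

        import numpy as np

        def continuum_sigma_clip(spectrum, kappa=3.0, max_iter=20):
            s = np.asarray(spectrum, dtype=float)
            mask = np.ones(s.size, dtype=bool)
            for _ in range(max_iter):
                mu, sd = s[mask].mean(), s[mask].std()
                new = np.abs(s - mu) < kappa * sd     # keep channels near the mean
                if np.array_equal(new, mask):
                    break                             # converged: nothing more clipped
                mask = new
            return s[mask].mean(), s[mask].std()      # continuum level and uncertainty

        rng = np.random.default_rng(4)
        spec = 1.0 + 0.02 * rng.standard_normal(1024) # continuum near 1.0 plus noise
        spec[100:140] += 5.0 * rng.random(40)         # injected emission lines
        print(continuum_sigma_clip(spec))             # ~ (1.0, 0.02)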

  20. Energy spectra unfolding of fast neutron sources using the group method of data handling and decision tree algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hosseini, Seyed Abolfazl, E-mail: sahosseini@sharif.edu [Department of Energy Engineering, Sharif University of Technology, Tehran 8639-11365 (Iran, Islamic Republic of); Afrakoti, Iman Esmaili Paeen [Faculty of Engineering & Technology, University of Mazandaran, Pasdaran Street, P.O. Box: 416, Babolsar 47415 (Iran, Islamic Republic of)

    2017-04-11

    Accurate unfolding of the energy spectrum of a neutron source gives important information about unknown neutron sources. The obtained information is useful in many areas like nuclear safeguards, nuclear nonproliferation, and homeland security. In the present study, the energy spectrum of a poly-energetic fast neutron source is reconstructed using the developed computational codes based on the Group Method of Data Handling (GMDH) and Decision Tree (DT) algorithms. The neutron pulse height distribution (neutron response function) in the considered NE-213 liquid organic scintillator has been simulated using the developed MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). The developed computational codes based on the GMDH and DT algorithms use some data for training, testing and validation steps. In order to prepare the required data, 4000 randomly generated energy spectra distributed over 52 bins are used. The randomly generated energy spectra and the neutron pulse height distributions simulated by MCNPX-ESUT for each energy spectrum are used as the output and input data. Since there is no need to solve the inverse problem with an ill-conditioned response matrix, the unfolded energy spectrum has the highest accuracy. The 241Am-9Be and 252Cf neutron sources are used in the validation step of the calculation. The unfolded energy spectra for the used fast neutron sources have an excellent agreement with the reference ones. Also, the accuracy of the unfolded energy spectra obtained using the GMDH is slightly better than those obtained from the DT. The results obtained in the present study have good accuracy in comparison with the previously published paper based on the logsig and tansig transfer functions. - Highlights: • The neutron pulse height distribution was simulated using MCNPX-ESUT. • The energy spectrum of the neutron source was unfolded using GMDH. • The energy spectrum of the neutron source was

  1. Strategies for satellite-based monitoring of CO2 from distributed area and point sources

    Science.gov (United States)

    Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David

    2014-05-01

    Atmospheric CO2 budgets are controlled by the strengths, as well as the spatial and temporal variabilities of CO2 sources and sinks. Natural CO2 sources and sinks are dominated by the vast areas of the oceans and the terrestrial biosphere. In contrast, anthropogenic and geogenic CO2 sources are dominated by distributed area and point sources, which may constitute as much as 70% of anthropogenic (e.g., Duren & Miller, 2012), and over 80% of geogenic emissions (Burton et al., 2013). Comprehensive assessments of CO2 budgets necessitate robust and highly accurate satellite remote sensing strategies that address the competing and often conflicting requirements for sampling over disparate space and time scales. Spatial variability: The spatial distribution of anthropogenic sources is dominated by patterns of production, storage, transport and use. In contrast, geogenic variability is almost entirely controlled by endogenic geological processes, except where surface gas permeability is modulated by soil moisture. Satellite remote sensing solutions will thus have to vary greatly in spatial coverage and resolution to address distributed area sources and point sources alike. Temporal variability: While biogenic sources are dominated by diurnal and seasonal patterns, anthropogenic sources fluctuate over a greater variety of time scales from diurnal, weekly and seasonal cycles, driven by both economic and climatic factors. Geogenic sources typically vary in time scales of days to months (geogenic sources sensu stricto are not fossil fuels but volcanoes, hydrothermal and metamorphic sources). Current ground-based monitoring networks for anthropogenic and geogenic sources record data on minute- to weekly temporal scales. Satellite remote sensing solutions would have to capture temporal variability through revisit frequency or point-and-stare strategies. Space-based remote sensing offers the potential of global coverage by a single sensor. However, no single combination of orbit

  2. Forward-weighted CADIS method for variance reduction of Monte Carlo calculations of distributions and multiple localized quantities

    International Nuclear Information System (INIS)

    Wagner, J. C.; Blakeman, E. D.; Peplow, D. E.

    2009-01-01

    This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is a variation on the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for some time to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain approximately uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented in the ADVANTG/MCNP framework and has been fully automated within the MAVRIC sequence of SCALE 6. Results of the application of the method to enabling the calculation of dose rates throughout an entire full-scale pressurized-water reactor facility are presented and discussed. (authors)

  3. An ESPRIT-Based Approach for 2-D Localization of Incoherently Distributed Sources in Massive MIMO Systems

    Science.gov (United States)

    Hu, Anzhong; Lv, Tiejun; Gao, Hui; Zhang, Zhang; Yang, Shaoshi

    2014-10-01

    In this paper, an estimation of signal parameters via rotational invariance techniques (ESPRIT) approach is proposed for two-dimensional (2-D) localization of incoherently distributed (ID) sources in large-scale/massive multiple-input multiple-output (MIMO) systems. The traditional ESPRIT-based methods are valid only for one-dimensional (1-D) localization of ID sources. By contrast, in the proposed approach the signal subspace is constructed for estimating the nominal azimuth and elevation directions-of-arrival and the angular spreads. The proposed estimator enjoys closed-form expressions and hence bypasses searching over the entire feasible field. Therefore, it imposes significantly lower computational complexity than conventional 2-D estimation approaches. Our analysis shows that the estimation performance of the proposed approach improves when large-scale/massive MIMO systems are employed. The approximate Cramér-Rao bound of the proposed estimator for 2-D localization is also derived. Numerical results demonstrate that although the proposed estimation method is comparable with the traditional 2-D estimators in terms of performance, it benefits from a remarkably lower computational complexity.
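
    To make the rotational-invariance idea concrete, the sketch below implements classical 1-D point-source ESPRIT on a uniform linear array; the paper's 2-D extension to incoherently distributed sources is considerably more involved, and the array geometry and noise level here are invented:

        import numpy as np

        rng = np.random.default_rng(5)
        M, N, d = 8, 200, 0.5                    # sensors, snapshots, spacing (wavelengths)
        theta = np.deg2rad([-20.0, 35.0])        # true directions of arrival
        A = np.exp(2j * np.pi * d * np.outer(np.arange(M), np.sin(theta)))
        S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
        X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

        R = X @ X.conj().T / N                   # sample covariance
        w, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
        Es = V[:, -2:]                           # signal subspace (two sources)
        # Rotational invariance between the two shifted subarrays:
        Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
        phases = np.angle(np.linalg.eigvals(Phi))
        print(np.rad2deg(np.arcsin(phases / (2 * np.pi * d))))  # ~ {-20, 35}, any order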

  4. Dynamic Subsidy Method for Congestion Management in Distribution Networks

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei

    2016-01-01

    Dynamic subsidy (DS) is a locational price paid by the distribution system operator (DSO) to its customers in order to shift energy consumption to designated hours and nodes. It is promising for demand side management and congestion management. This paper proposes a new DS method for congestion management in distribution networks, including the market mechanism, the mathematical formulation through a two-level optimization, and the method solving the optimization by tightening the constraints and linearization. Case studies were conducted with a one node system and the Bus 4 distribution network of the Roy Billinton Test System (RBTS) with high penetration of electric vehicles (EVs) and heat pumps (HPs). The case studies demonstrate the efficacy of the DS method for congestion management in distribution networks. Studies in this paper show that the DS method offers the customers a fair opportunity

  5. Temperature distribution of a simplified rotor due to a uniform heat source

    Science.gov (United States)

    Welzenbach, Sarah; Fischer, Tim; Meier, Felix; Werner, Ewald; kyzy, Sonun Ulan; Munz, Oliver

    2018-03-01

    In gas turbines, high combustion efficiency as well as operational safety are required. Thus, labyrinth seal systems with honeycomb liners are commonly used. In the case of rubbing events in the seal system, the components can be damaged due to cyclic thermal and mechanical loads. Temperature differences occurring at labyrinth seal fins during rubbing events can be determined by considering a single heat source acting periodically on the surface of a rotating cylinder. Existing literature analysing the temperature distribution on rotating cylindrical bodies due to a stationary heat source is reviewed. The temperature distribution on the circumference of a simplified labyrinth seal fin is calculated using an available and easy to implement analytical approach. A finite element model of the simplified labyrinth seal fin is created and the numerical results are compared to the analytical results. The temperature distributions calculated by the analytical and the numerical approaches coincide for low sliding velocities, while there are discrepancies of the calculated maximum temperatures for higher sliding velocities. The use of the analytical approach allows the conservative estimation of the maximum temperatures arising in labyrinth seal fins during rubbing events. At the same time, high calculation costs can be avoided.

  6. Cathode power distribution system and method of using the same for power distribution

    Science.gov (United States)

    Williamson, Mark A; Wiedmeyer, Stanley G; Koehl, Eugene R; Bailey, James L; Willit, James L; Barnes, Laurel A; Blaskovitz, Robert J

    2014-11-11

    Embodiments include a cathode power distribution system and/or method of using the same for power distribution. The cathode power distribution system includes a plurality of cathode assemblies. Each cathode assembly of the plurality of cathode assemblies includes a plurality of cathode rods. The system also includes a plurality of bus bars configured to distribute current to each of the plurality of cathode assemblies. The plurality of bus bars include a first bus bar configured to distribute the current to first ends of the plurality of cathode assemblies and a second bus bar configured to distribute the current to second ends of the plurality of cathode assemblies.

  7. Performance Analysis of Fission and Surface Source Iteration Method for Domain Decomposed Monte Carlo Whole-Core Calculation

    International Nuclear Information System (INIS)

    Jo, Yu Gwon; Oh, Yoo Min; Park, Hyang Kyu; Park, Kang Soon; Cho, Nam Zin

    2016-01-01

    In this paper, two issues in the FSS iteration method, i.e., the waiting time for surface source data and the variance biases in local tallies, are investigated for the domain decomposed, 3-D continuous-energy whole-core calculation. The fission sources are provided as usual, while the surface sources are provided by banking MC particles crossing local domain boundaries. The surface sources serve as boundary conditions for nonoverlapping local problems, so that each local problem can be solved independently. The first issue is quantifying the waiting time of processors to receive surface source data; by using nonblocking communication, the 'time penalty' of waiting for the arrival of the surface source data is reduced. The other important issue is underestimation of the sample variance of the tally because of additional inter-iteration correlations in surface sources. From the numerical results on a 3-D whole-core test problem, it is observed that the time penalty is negligible in the FSS iteration method and that the real variances of both pin powers and assembly powers are estimated by the HB method. For these purposes, three cases are tested: Case 1 (1 local domain), Case 2 (4 local domains), and Case 3 (16 local domains). For both Cases 2 and 3, the time penalties for waiting are negligible compared to the source-tracking times. However, for finer divisions of local domains, the loss of parallel efficiency caused by the different number of sources for local domains in symmetric locations becomes larger due to the stochastic errors in source distributions. For all test cases, the HB method estimates the real variances of local tallies very well. However, it is also noted that the real variances of local tallies estimated by the HB method are slightly smaller than the real variances obtained from 30 independent batch runs, and the deviations become larger for finer divisions of local domains. The batch size used for the HB

  8. Computation of the efficiency distribution of a multichannel focusing collimator

    International Nuclear Information System (INIS)

    Balasubramanian, A.; Venkateswaran, T.V.

    1977-01-01

    This article describes two computer methods of calculating the point source efficiency distribution functions of a focusing collimator with round tapered holes. The first method, which computes only the geometric efficiency distribution, is adequate for low energy collimators, while the second method, which computes both geometric and penetration efficiencies, can be used for medium and high energy collimators. The scatter contribution to the efficiency is not taken into account. In the first method, the efficiency distribution of a single cone of the collimator is obtained and the data are used for computing the distribution of the whole collimator. For high energy collimators, the entire detector region is imagined to be divided into elemental areas. The efficiency of an elemental area is computed after suitably weighting for the penetration within the collimator septa, which is determined by three-dimensional geometric techniques. The method of computing the line source efficiency distribution from the point source distribution is also explained. The formulations have been tested by computing the efficiency distributions of several commercial collimators and of collimators fabricated by us. (Auth.)

  9. Distributed Optimization System

    Science.gov (United States)

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  10. Light source distribution and scattering phase function influence light transport in diffuse multi-layered media

    Science.gov (United States)

    Vaudelle, Fabrice; L'Huillier, Jean-Pierre; Askoura, Mohamed Lamine

    2017-06-01

    Red and near-infrared light is often used as a diagnostic and imaging probe for highly scattering media such as biological tissues, fruits and vegetables. Part of the diffusely reflected light carries interesting information related to the tissue subsurface, whereas light recorded at larger distances may probe deeper into the interrogated turbid tissues. However, modelling diffusive events occurring at short source-detector distances requires consideration of both the distribution of the light sources and the scattering phase functions. In this report, a modified Monte Carlo model is used to compute light transport in curved and multi-layered tissue samples which are covered with a thin and highly diffusing tissue layer. Different light source distributions (ballistic, diffuse or Lambertian) are tested with specific scattering phase functions (modified or unmodified Henyey-Greenstein, Gegenbauer and Mie) to compute the amount of backscattered and transmitted light in apple and human skin structures. Comparisons between simulation results and experiments carried out with a multispectral imaging setup confirm the soundness of the theoretical strategy and may explain the role of the skin on light transport in whole and half-cut apples. Other computational results show that a Lambertian source distribution combined with a Henyey-Greenstein phase function provides a higher photon density in the stratum corneum than in the upper dermis layer. Furthermore, it is also shown that the scattering phase function may affect the shape and the magnitude of the Bidirectional Reflectance Distribution Function (BRDF) exhibited at the skin surface.
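
    The scattering phase function enters a Monte Carlo photon transport code through the sampling of the deflection angle at each scattering event. For the Henyey-Greenstein function this sampling has the closed form sketched below; the anisotropy factor is a typical tissue-like value chosen for illustration:

        import numpy as np

        def sample_hg_cos_theta(g, xi):
            """Inverse-CDF sample of cos(theta) for HG anisotropy factor g."""
            if abs(g) < 1e-6:
                return 2.0 * xi - 1.0                 # isotropic limit
            t = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
            return (1.0 + g * g - t * t) / (2.0 * g)

        rng = np.random.default_rng(6)
        g = 0.9                                       # forward-peaked, tissue-like
        mu = np.array([sample_hg_cos_theta(g, x) for x in rng.random(100000)])
        print(mu.mean())                              # ~ g, by definition of g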

  11. Neutron source multiplication method

    International Nuclear Information System (INIS)

    Clayton, E.D.

    1985-01-01

    Extensive use has been made of neutron source multiplication in thousands of measurements of critical masses and configurations and in subcritical neutron-multiplication measurements in situ that provide data for criticality prevention and control in nuclear materials operations. There is continuing interest in developing reliable methods for monitoring the reactivity, or k_eff, of plant operations, but the required measurements are difficult to carry out and interpret on the far subcritical configurations usually encountered. The relationship between neutron multiplication and reactivity is briefly discussed and data presented to illustrate problems associated with the absolute measurement of neutron multiplication and reactivity in subcritical systems. A number of curves of inverse multiplication have been selected from a variety of experiments showing variations observed in multiplication during the course of critical and subcritical experiments where different methods of reactivity addition were used, with different neutron source detector position locations. Concern is raised regarding the meaning and interpretation of k_eff as might be measured in a far subcritical system because of the modal effects and spectrum differences that exist between the subcritical and critical systems. Because of this, the calculation of k_eff identical with unity for the critical assembly, although necessary, may not be sufficient to assure safety margins in calculations pertaining to far subcritical systems. Further study is needed on the interpretation and meaning of k_eff in the far subcritical system
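
    The inverse-multiplication curves mentioned above are typically used by extrapolating 1/M to zero to estimate the critical loading. The toy numbers below assume, purely for illustration, that k_eff grows linearly with mass, which is exactly the kind of assumption the article cautions about in far subcritical systems:

        import numpy as np

        mass = np.array([2.0, 4.0, 6.0, 8.0, 10.0])       # kg, hypothetical loadings
        k = 0.095 * mass                                  # pretend k_eff grows linearly
        counts = 1.0 / (1.0 - k)                          # source multiplication M
        inv_m = 1.0 / counts                              # 1/M = 1 - k_eff

        slope, intercept = np.polyfit(mass, inv_m, 1)     # linear fit of the 1/M curve
        print(-intercept / slope)                         # ~10.5 kg, where k_eff -> 1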

  12. Electrical power distribution control methods, electrical energy demand monitoring methods, and power management devices

    Science.gov (United States)

    Chassin, David P [Pasco, WA; Donnelly, Matthew K [Kennewick, WA; Dagle, Jeffery E [Richland, WA

    2011-12-06

    Electrical power distribution control methods, electrical energy demand monitoring methods, and power management devices are described. In one aspect, an electrical power distribution control method includes providing electrical energy from an electrical power distribution system, applying the electrical energy to a load, providing a plurality of different values for a threshold at a plurality of moments in time and corresponding to an electrical characteristic of the electrical energy, and adjusting an amount of the electrical energy applied to the load responsive to an electrical characteristic of the electrical energy triggering one of the values of the threshold at the respective moment in time.
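
    A minimal sketch of the claimed control loop, cast as a hypothetical underfrequency load-shedding example (the patent covers any monitored electrical characteristic and any time-varying threshold schedule):

      def adjust_load(frequency_hz, thresholds, t, load_kw, shed_fraction=0.5):
          # thresholds: time-sorted (time, value) pairs giving the threshold in
          # force at each moment; shed part of the load when the monitored
          # characteristic (here grid frequency) falls below the active value
          active = [v for ts, v in thresholds if ts <= t][-1]
          return load_kw * (1.0 - shed_fraction) if frequency_hz < active else load_kw

      # a 1 kW load sheds half its demand during an underfrequency event
      print(adjust_load(59.95, [(0, 59.90), (3600, 59.98)], t=4000, load_kw=1.0))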

  13. A method to measure depth distributions of implanted ions

    International Nuclear Information System (INIS)

    Arnesen, A.; Noreland, T.

    1977-04-01

    A new variant of the radiotracer method for depth distribution determinations has been tested. Depth distributions of radioactive implanted ions are determined by dissolving thin, uniform layers of evaporated material from the surface of a backing and by measuring the activity before and after the layer removal. The method has been used to determine depth distributions for 25 keV and 50 keV ⁵⁷Co ions in aluminium and gold. (Auth.)
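
    The concentration profile implied by this measurement scheme can be written schematically as

        n(x_i) \propto \frac{A_{i-1} - A_i}{\Delta x_i},

    where A_i is the activity remaining after the i-th layer of thickness \Delta x_i has been dissolved and x_i is the cumulative depth removed.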

  14. FDTD verification of deep-set brain tumor hyperthermia using a spherical microwave source distribution

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, D. [20th Intelligence Squadron, Offutt AFB, NE (United States); Rappaport, C.M. [Northeastern Univ., Boston, MA (United States). Center for Electromagnetics Research; Terzuoli, A.J. Jr. [Air Force Inst. of Tech., Dayton, OH (United States). Graduate School of Engineering

    1996-10-01

    Although use of noninvasive microwave hyperthermia to treat cancer is problematic in many human body structures, careful selection of the source electric field distribution around the entire surface of the head can generate a tightly focused global power density maximum at the deepest point within the brain. An analytic prediction of the optimum volume field distribution in a layered concentric head model, based on summing spherical harmonic modes, is derived and presented. This ideal distribution is then verified using a three-dimensional finite difference time domain (FDTD) simulation with a discretized, MRI-based head model excited by the spherical source. The numerical computation gives a dissipated power pattern very similar to the analytic prediction. This study demonstrates that microwave hyperthermia can theoretically be a feasible cancer treatment modality for tumors in the head, providing a well-resolved hot-spot at depth without overheating any other healthy tissue.

  15. Comparison of estimation methods for fitting Weibull distribution to ...

    African Journals Online (AJOL)

    Comparison of estimation methods for fitting the Weibull distribution to the natural stand of Oluwa Forest Reserve, Ondo State, Nigeria. ... Journal of Research in Forestry, Wildlife and Environment ... The result revealed that the maximum likelihood method was more accurate in fitting the Weibull distribution to the natural stand.
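
    For reference, the maximum-likelihood fit compared in such studies can be sketched as follows; the diameter data and parameter values are synthetic, not the reserve's.

      import numpy as np
      from scipy import stats

      # Synthetic stand diameters (cm); real stands are often Weibull-like
      dbh = stats.weibull_min.rvs(c=2.3, scale=18.0, size=500,
                                  random_state=np.random.default_rng(0))

      # Maximum-likelihood fit of the two-parameter Weibull (location fixed at 0)
      shape, loc, scale = stats.weibull_min.fit(dbh, floc=0)
      print(f"shape = {shape:.2f}, scale = {scale:.2f} cm")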

  16. Guidelines for the calibration of low energy photon sources and beta-ray brachytherapy sources

    International Nuclear Information System (INIS)

    2000-01-01

    With the development of improved methods of implanting brachytherapy sources in a precise manner for treating prostate cancer and other disease processes, there has been a tremendous growth in the use of low energy photon sources, such as ¹²⁵I and ¹⁰³Pd brachytherapy seeds. Low energy photon sources have the advantage of easier shielding and also lower the dose to normal tissue. However, the dose distributions around these sources are affected by the details of construction of the source and its encapsulation more than other sources used for brachytherapy treatments, such as ¹⁹²Ir. With the increasing number of new low energy photon sources on the market, care should be taken with regard to their traceability to primary standards. It cannot be assumed that a calibration factor for an ionization chamber that is valid for one type of low energy photon source is automatically valid for another source, even if both use the same isotope. Moreover, the method used to calculate the dose must also take into account the structure of the source and the encapsulation. The dose calculation algorithm that is valid for one type of low energy source may not be valid for another source, even if in both cases the same radionuclide is used. Simple "point source" approximations, i.e. where the source is modeled as a point, should be avoided, as such methods do not account for any details of the source construction. In this document, the dose calculation formalism adopted for low energy photon sources is that recommended by the American Association of Physicists in Medicine (AAPM) as outlined by Task Group-43 (TG-43). This method accounts for the source and capsule geometry. The AAPM recommends brachytherapy photon sources to be specified in terms of 'Air Kerma Strength', which is also used in the formalism mentioned above. On the other hand, the International Commission on Radiation Units and Measurements (ICRU) recommends that the specification be done in terms of Reference Air
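
    The TG-43 formalism mentioned here expresses the dose rate around a seed as

        \dot{D}(r,\theta) = S_K\,\Lambda\,\frac{G(r,\theta)}{G(r_0,\theta_0)}\,g(r)\,F(r,\theta),

    with S_K the air kerma strength, \Lambda the dose rate constant, G the geometry function, g(r) the radial dose function and F(r,\theta) the 2D anisotropy function, referenced to r_0 = 1 cm and \theta_0 = \pi/2.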

  17. Measurement of subcritical multiplication by the interval distribution method

    International Nuclear Information System (INIS)

    Nelson, G.W.

    1985-01-01

    The prompt decay constant or the subcritical neutron multiplication may be determined by measuring the distribution of the time intervals between successive neutron counts. The distribution data are analyzed by least-squares fitting to a theoretical distribution function derived from a point reactor probability model. Published results of measurements with one- and two-detector systems are discussed. The nearer the system is to delayed critical, the shorter the data collection times and the smaller the statistical errors. Several of the measurements indicate that a shorter data collection time and higher accuracy are possible with the interval distribution method than with the Feynman variance method

  18. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined subject to the constraint of correct DSC decoding, which allows the proposed algorithm to achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.

  19. A practical two-way system of quantum key distribution with untrusted source

    International Nuclear Information System (INIS)

    Chen Ming-Juan; Liu Xiang

    2011-01-01

    The most severe problem of a two-way 'plug-and-play' (p and p) quantum key distribution system is that the source can be controlled by the eavesdropper. This kind of source is defined as an "untrusted source". This paper discusses the effects of the fluctuation of internal transmittance on the final key generation rate and the transmission distance. The security of the standard BB84 protocol, the one-decoy state protocol, and the weak+vacuum decoy state protocol, with untrusted sources and fluctuation of the internal transmittance, is studied. It is shown that the one-decoy state is sensitive to the statistical fluctuation but the weak+vacuum decoy state is only slightly affected by the fluctuation. It is also shown that both the maximum secure transmission distance and the final key generation rate are reduced when Alice's laboratory transmittance fluctuation is considered. (general)

  20. Improvement of spatial discretization error on the semi-analytic nodal method using the scattered source subtraction method

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Tatsumi, Masahiro

    2006-01-01

    In this paper, the scattered source subtraction (SSS) method is newly proposed to improve the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. In the SSS method, the scattered source is subtracted from both sides of the diffusion or transport equation to make the spatial variation of the source term small. The same neutron balance equation is still used in the SSS method. Since the SSS method just modifies the coefficients of the node coupling equations (those used to evaluate the response of partial currents), its implementation is easy. Validity of the present method is verified through test calculations carried out in PWR multi-assembly configurations. The calculation results show that the SSS method can significantly improve the spatial discretization error. Since the SSS method does not have any negative impact on execution time, convergence behavior or memory requirement, it will be useful for reducing the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. (author)
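
    Schematically, in one-group form (an illustration of the idea, not the paper's multigroup derivation), the subtraction reads

        -\nabla\cdot D\nabla\phi + \Sigma_t\,\phi = \Sigma_s\,\phi + Q
        \quad\Longrightarrow\quad
        -\nabla\cdot D\nabla\phi + (\Sigma_t - \Sigma_s)\,\phi = Q,

    so the flat-source approximation has to represent only the remaining source Q, which varies less across a node than \Sigma_s\phi + Q does.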

  1. Energy-Based Acoustic Source Localization Methods: A Survey

    Directory of Open Access Journals (Sweden)

    Wei Meng

    2017-02-01

    Full Text Available Energy-based source localization is an important problem in wireless sensor networks (WSNs), which has been studied actively in the literature. Numerous localization algorithms, e.g., maximum likelihood estimation (MLE) and nonlinear least-squares (NLS) methods, have been reported. In the literature, there are relevant review papers for localization in WSNs, e.g., for distance-based localization. However, not much work related to energy-based source localization is covered in the existing review papers. Energy-based methods are proposed and specially designed for WSNs due to their limited sensor capabilities. This paper aims to give a comprehensive review of these different algorithms for energy-based single and multiple source localization problems, their merits and demerits, and to point out possible future research directions.
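
    The energy-decay model underlying most of the surveyed algorithms can be written in a commonly used generic form as

        y_i = g_i\,\frac{S}{\lVert \mathbf{r}_i - \mathbf{r}_s \rVert^{\alpha}} + \varepsilon_i, \qquad (\hat{\mathbf{r}}_s,\hat{S}) = \arg\min_{\mathbf{r},S} \sum_i \Bigl(y_i - g_i\,\frac{S}{\lVert \mathbf{r}_i - \mathbf{r} \rVert^{\alpha}}\Bigr)^2,

    where y_i is the acoustic energy read by sensor i with gain g_i at position \mathbf{r}_i, S and \mathbf{r}_s are the unknown source energy and location, \alpha (typically near 2) is the decay exponent and \varepsilon_i is noise; MLE and NLS estimators differ mainly in how this misfit is weighted and minimized.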

  2. Standard test method for distribution coefficients of inorganic species by the batch method

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method covers the determination of distribution coefficients of chemical species to quantify uptake onto solid materials by a batch sorption technique. It is a laboratory method primarily intended to assess sorption of dissolved ionic species subject to migration through pores and interstices of site-specific geomedia. It may also be applied to other materials such as manufactured adsorption media and construction materials. Application of the results to long-term field behavior is not addressed in this method. Distribution coefficients for radionuclides in selected geomedia are commonly determined for the purpose of assessing potential migratory behavior of contaminants in the subsurface of contaminated sites and waste disposal facilities. This test method is also applicable to parametric studies of the variables and mechanisms which contribute to the measured distribution coefficient. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement a...
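
    In batch sorption work the distribution coefficient is conventionally computed from the initial and equilibrium solution concentrations (the standard definition, paraphrased rather than quoted from the standard):

        K_d = \frac{C_0 - C_{\mathrm{eq}}}{C_{\mathrm{eq}}}\cdot\frac{V}{m},

    where C_0 and C_{\mathrm{eq}} are the solution concentrations before and after contact, V is the solution volume and m is the dry mass of the solid, giving K_d in units such as mL/g.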

  3. Research on neutron source multiplication method in nuclear critical safety

    International Nuclear Information System (INIS)

    Zhu Qingfu; Shi Yongqian; Hu Dingsheng

    2005-01-01

    The paper concerns research on the neutron source multiplication method in nuclear criticality safety. Based on the neutron diffusion equation with an external neutron source, the effective sub-critical multiplication factor k_s is deduced; k_s differs from the effective neutron multiplication factor k_eff in the case of a sub-critical system with an external neutron source. A verification experiment on the sub-critical system indicates that the parameter measured with the neutron source multiplication method is k_s, and that k_s is related to the external neutron source position in the sub-critical system and to the external neutron source spectrum. The relation between k_s and k_eff and their effect on nuclear criticality safety is discussed. (author)
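
    One common way of writing the distinction (a textbook formulation consistent with, but not quoted from, this paper) is

        k_s = \frac{\langle F\phi_s \rangle}{\langle F\phi_s \rangle + \langle S \rangle}, \qquad M = \frac{1}{1 - k_s},

    where \phi_s is the source-driven flux, F the fission production operator and S the external source: k_s weighs fission neutrons against all injected neutrons, so it depends on the source position and spectrum, whereas k_eff is a property of the fundamental mode alone.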

  4. General and Simple Decision Method for DG Penetration Level in View of Voltage Regulation at Distribution Substation Transformers

    Directory of Open Access Journals (Sweden)

    Joon-Ho Choi

    2013-09-01

    Full Text Available A distribution system was designed and operated considering unidirectional power flow from a utility source to end-use loads. Large penetrations of distributed generation (DG) into the existing distribution system cause a variety of technical problems, such as frequent tap changing of the on-load tap changer (OLTC) transformer, local voltage rise, protection coordination issues, exceeded short-circuit capacity, and harmonic distortion. In view of voltage regulation, the intermittent fluctuation of the DG output power results in frequent tap changing operations of the OLTC transformer. Thus, many utilities limit the penetration level of DG and are eager to find reasonable penetration limits of DG in the distribution system. To overcome this technical problem, utilities have developed new voltage regulation methods for distribution systems with large DG penetration levels. In this paper, the impact of DG on OLTC operations controlled by the line drop compensation (LDC) method is analyzed. In addition, a generalized methodology for determining the DG penetration limits of a distribution substation transformer is proposed. The proposed DG penetration limits could be adopted for a simplified interconnection process in DG interconnection guidelines.
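
    For orientation, LDC regulates a computed remote voltage rather than the substation bus voltage itself (the standard formulation, with generic settings):

        V_{\mathrm{cmp}} = V_{\mathrm{bus}} - (R_{\mathrm{set}} + jX_{\mathrm{set}})\,I_{\mathrm{line}},

    and the OLTC tap moves whenever |V_{\mathrm{cmp}}| leaves the regulation bandwidth; because DG output reduces or reverses I_{\mathrm{line}}, it shifts the compensated voltage and hence the tap behaviour, which is the effect analyzed in the paper.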

  5. Theoretical method for determining particle distribution functions of classical systems

    International Nuclear Information System (INIS)

    Johnson, E.

    1980-01-01

    An equation which involves the triplet distribution function and the three-particle direct correlation function is obtained. This equation was derived using an analogue of the Ornstein--Zernike equation. The new equation is used to develop a variational method for obtaining the triplet distribution function of uniform one-component atomic fluids from the pair distribution function. The variational method may be used with the first and second equations in the YBG hierarchy to obtain pair and triplet distribution functions. It should be easy to generalize the results to the n-particle distribution function
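
    For context, the two-particle Ornstein-Zernike relation to which the new equation is analogous reads

        h(r_{12}) = c(r_{12}) + \rho \int c(r_{13})\,h(r_{32})\,d\mathbf{r}_3,

    where h = g - 1 is the total and c the direct pair correlation function; the new equation plays the same role one level up, linking the triplet distribution function to the three-particle direct correlation function.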

  6. Microseismic imaging using a source-independent full-waveform inversion method

    KAUST Repository

    Wang, Hanchen

    2016-09-06

    Using full waveform inversion (FWI) to locate and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, waveform inversion of microseismic events faces considerable nonlinearity due to the unknown source location (space) and function (time). We develop a source-independent FWI of microseismic events to invert for the source image, source function and velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet along the z-axis is extracted to check the accuracy of the inverted source image and velocity model. The angle gather is also calculated to verify the velocity model. By inverting for the source image, source wavelet and velocity model together, the proposed method produces good estimates of the source location, ignition time and background velocity for part of the SEG overthrust model.

  8. Inverse modelling of fluvial sediment connectivity identifies characteristics and spatial distribution of sediment sources in a large river network.

    Science.gov (United States)

    Schmitt, R. J. P.; Bizzi, S.; Kondolf, G. M.; Rubin, Z.; Castelletti, A.

    2016-12-01

    Field and laboratory evidence indicates that the spatial distribution of transport in both alluvial and bedrock rivers is an adaptation to sediment supply. Sediment supply, in turn, depends on the spatial distribution and properties (e.g., grain sizes and supply rates) of individual sediment sources. Analyzing the distribution of transport capacity in a river network could hence clarify the spatial distribution and properties of sediment sources. Yet, the challenges include (a) identifying the magnitude and spatial distribution of transport capacity for each of multiple grain sizes being transported simultaneously, and (b) estimating source grain sizes and supply rates, both at network scales. Herein, we approach the problem of identifying the spatial distribution of sediment sources and the resulting network sediment fluxes in a major, poorly monitored tributary (80,000 km2) of the Mekong. To this end, we apply the CASCADE modeling framework (Schmitt et al., 2016). CASCADE calculates transport capacities and sediment fluxes for multiple grain sizes on the network scale based on remotely sensed morphology and modelled hydrology. CASCADE is run in an inverse Monte Carlo approach for 7500 random initializations of source grain sizes. In all runs, the supply of each source is inferred from the minimum downstream transport capacity for the source grain size. Results for each realization are compared to the sparse available sedimentary records. Only 1% of initializations reproduced the sedimentary record. Results for these realizations revealed a spatial pattern in source supply rates, grain sizes, and network sediment fluxes that correlated well with map-derived patterns in lithology and river morphology. Hence, we propose that observable river hydro-morphology contains information on upstream source properties that can be back-calculated using an inverse modeling approach. Such an approach could be coupled to more detailed models of hillslope processes in future to derive integrated models
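
    The accept/reject core of such an inverse Monte Carlo loop can be sketched as follows; the one-branch network, capacity law, and tolerance are toy stand-ins, not the CASCADE framework itself.

      import numpy as np

      rng = np.random.default_rng(42)
      n_draws, n_reaches = 7500, 50

      # Toy reach-by-reach transport capacity for 1 mm grains (kt/yr), decreasing
      # downstream; capacity for other sizes scales inversely with grain size
      capacity_1mm = np.linspace(5.0, 1.0, n_reaches)

      def downstream_min_capacity(d50_mm):
          # a source's supply is limited by the minimum capacity downstream of it
          return np.min(capacity_1mm / d50_mm)

      observed_flux, tol = 2.0, 0.2      # stand-in for a sparse sedimentary record
      accepted = []
      for _ in range(n_draws):
          d50 = rng.lognormal(np.log(1.0), 0.8)      # random source grain size (mm)
          if abs(downstream_min_capacity(d50) - observed_flux) < tol:
              accepted.append(d50)
      print(f"{100 * len(accepted) / n_draws:.1f}% of draws reproduce the record")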

  9. Three-Phase Harmonic Analysis Method for Unbalanced Distribution Systems

    Directory of Open Access Journals (Sweden)

    Jen-Hao Teng

    2014-01-01

    Full Text Available Due to the unbalanced features of distribution systems, a three-phase harmonic analysis method is essential to accurately analyze the harmonic impact on distribution systems. Moreover, harmonic analysis is the basic tool for harmonic filter design and harmonic resonance mitigation; therefore, the computational performance should also be efficient. An accurate and efficient three-phase harmonic analysis method for unbalanced distribution systems is proposed in this paper. The variations of bus voltages, bus current injections and branch currents affected by harmonic current injections can be analyzed by two relationship matrices developed from the topological characteristics of distribution systems. Some useful formulas are then derived to solve the three-phase harmonic propagation problem. After the harmonic propagation for each harmonic order is calculated, the total harmonic distortion (THD) for bus voltages can be calculated accordingly. The proposed method has better computational performance, since the time-consuming full admittance matrix inverse employed by the commonly-used harmonic analysis methods is not necessary in the solution procedure. In addition, the proposed method can provide novel viewpoints in calculating the branch currents and bus voltages under harmonic pollution, which are vital for harmonic filter design. Test results demonstrate the effectiveness and efficiency of the proposed method.
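
    The THD figure computed in the final step has the usual definition

        \mathrm{THD}_V = \frac{\sqrt{\sum_{h=2}^{H} |V_h|^2}}{|V_1|} \times 100\,\%,

    where V_1 is the fundamental bus voltage and V_h the h-th harmonic component obtained from the per-order propagation solutions.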

  10. Herschel-ATLAS: Dust Temperature and Redshift Distribution of SPIRE and PACS Detected Sources Using Submillimetre Colours

    Science.gov (United States)

    Amblard, A.; Cooray, Asantha; Serra, P.; Temi, P.; Barton, E.; Negrello, M.; Auld, R.; Baes, M.; Baldry, I. K.; Bamford, S.; hide

    2010-01-01

    We present colour-colour diagrams of detected sources in the Herschel-ATLAS Science Demonstration Field from 100 to 500 μm using both PACS and SPIRE. We fit isothermal modified-blackbody spectral energy distribution (SED) models in order to extract the dust temperature of sources with counterparts in GAMA or SDSS with either a spectroscopic or a photometric redshift. For a subsample of 331 sources detected in at least three FIR bands with significance greater than 3σ, we find an average dust temperature of (28 ± 8) K. For sources with no known redshifts, we populate the colour-colour diagram with a large number of SEDs generated with a broad range of dust temperatures and emissivity parameters and compare to colours of observed sources to establish the redshift distribution of those samples. For another subsample of 1686 sources with fluxes above 35 mJy at 350 μm and detected at 250 and 500 μm with a significance greater than 3σ, we find an average redshift of 2.2 ± 0.6.
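
    An isothermal modified-blackbody fit of this kind can be sketched as below; the flux values and the fixed emissivity index β = 1.5 are illustrative assumptions, not the survey's data.

      import numpy as np
      from scipy.optimize import curve_fit

      h, k, c = 6.626e-34, 1.381e-23, 2.998e8   # SI constants

      def greybody(nu, amp, T, beta=1.5):
          # Isothermal modified blackbody: S_nu = amp * nu**beta * B_nu(T)
          b_nu = 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))
          return amp * nu**beta * b_nu

      wavelengths_um = np.array([100.0, 160.0, 250.0, 350.0, 500.0])  # PACS+SPIRE
      nu = c / (wavelengths_um * 1e-6)
      flux_jy = np.array([0.10, 0.18, 0.20, 0.15, 0.08])              # made-up SED
      popt, _ = curve_fit(greybody, nu, flux_jy, p0=[1e-5, 30.0])
      print(f"T_dust = {popt[1]:.1f} K")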

  11. Methods of forming single source precursors, methods of forming polymeric single source precursors, and single source precursors formed by such methods

    Science.gov (United States)

    Fox, Robert V.; Rodriguez, Rene G.; Pak, Joshua J.; Sun, Chivin; Margulieux, Kelsey R.; Holland, Andrew W.

    2014-09-09

    Methods of forming single source precursors (SSPs) include forming intermediate products having the empirical formula ½{L₂N(μ-X)₂M′X₂}₂, and reacting MER with the intermediate products to form SSPs of the formula L₂N(μ-ER)₂M′(ER)₂, wherein L is a Lewis base, M is a Group IA atom, N is a Group IB atom, M′ is a Group IIIB atom, each E is a Group VIB atom, each X is a Group VIIA atom or a nitrate group, and each R group is an alkyl, aryl, vinyl, (per)fluoro alkyl, (per)fluoro aryl, silane, or carbamato group. Methods of forming polymeric or copolymeric SSPs include reacting at least one of HE¹R¹E¹H and MER with one or more substances having the empirical formula L₂N(μ-ER)₂M′(ER)₂ or L₂N(μ-X)₂M′(X)₂ to form a polymeric or copolymeric SSP. New SSPs and intermediate products are formed by such methods.

  12. Imaging Seismic Source Variations Using Back-Projection Methods at El Tatio Geyser Field, Northern Chile

    Science.gov (United States)

    Kelly, C. L.; Lawrence, J. F.

    2014-12-01

    During October 2012, 51 geophones and 6 broadband seismometers were deployed in an ~50 × 50 m region surrounding a periodically erupting columnar geyser in the El Tatio Geyser Field, Chile. The dense array served as the seismic framework for a collaborative project to study the mechanics of complex hydrothermal systems. Contemporaneously, complementary geophysical measurements (including down-hole temperature and pressure, discharge rates, thermal imaging, water chemistry, and video) were also collected. Located on the western flanks of the Andes Mountains at an elevation of 4200 m, El Tatio is the third largest geyser field in the world. Its non-pristine condition makes it an ideal location to perform minimally invasive geophysical studies. The El Jefe Geyser was chosen for its easily accessible conduit and extremely periodic eruption cycle (~120 s). During approximately 2 weeks of continuous recording, we recorded ~2500 nighttime eruptions which lack cultural noise from tourism. With ample data, we aim to study how the source varies spatially and temporally during each phase of the geyser's eruption cycle. We are developing a new back-projection processing technique to improve source imaging for diffuse signals. Our method was previously applied to the Sierra Negra volcano system, which also exhibits repeating harmonic and diffuse seismic sources. We back-project correlated seismic signals from the receivers to their sources, assuming linear source-to-receiver paths and a known velocity model (obtained from ambient noise tomography). We apply polarization filters to isolate individual and concurrent geyser energy associated with P and S phases. We generate 4D, time-lapsed images of the geyser source field that illustrate how the source distribution changes through the eruption cycle. We compare images for pre-eruption, co-eruption, post-eruption and quiescent periods. We use our images to assess eruption mechanics in the system (i.e. top-down vs. bottom-up) and

  13. Optimal placement and sizing of wind / solar based DG sources in distribution system

    Science.gov (United States)

    Guan, Wanlin; Guo, Niao; Yu, Chunlai; Chen, Xiaoguang; Yu, Haiyang; Liu, Zhipeng; Cui, Jiapeng

    2017-06-01

    Proper placement and sizing of Distributed Generation (DG) in distribution system can obtain maximum potential benefits. This paper proposes quantum particle swarm algorithm (QPSO) based wind turbine generation unit (WTGU) and photovoltaic (PV) array placement and sizing approach for real power loss reduction and voltage stability improvement of distribution system. Performance modeling of wind and solar generation system are described and classified into PQ\\PQ (V)\\PI type models in power flow. Considering the WTGU and PV based DGs in distribution system is geographical restrictive, the optimal area and DG capacity limits of each bus in the setting area need to be set before optimization, the area optimization method is proposed . The method has been tested on IEEE 33-bus radial distribution systems to demonstrate the performance and effectiveness of the proposed method.

  14. Model of charge-state distributions for electron cyclotron resonance ion source plasmas

    Directory of Open Access Journals (Sweden)

    D. H. Edgell

    1999-12-01

    Full Text Available A computer model for the ion charge-state distribution (CSD) in an electron cyclotron resonance ion source (ECRIS) plasma is presented that incorporates non-Maxwellian distribution functions, multiple atomic species, and ion confinement due to the ambipolar potential well that arises from confinement of the electron cyclotron resonance (ECR) heated electrons. Atomic processes incorporated into the model include multiple ionization and multiple charge exchange, with rate coefficients calculated for non-Maxwellian electron distributions. The electron distribution function is calculated using a Fokker-Planck code with an ECR heating term. This eliminates the electron temperature as an arbitrary user input. The model produces results that are a good match to CSD data from the ANL-ECRII ECRIS. Extending the model to 1D axial will also allow the model to determine the plasma and electrostatic potential profiles, further eliminating arbitrary user input to the model.

  15. Modulating Function-Based Method for Parameter and Source Estimation of Partial Differential Equations

    KAUST Repository

    Asiri, Sharefa M.

    2017-10-08

    Partial Differential Equations (PDEs) are commonly used to model complex systems that arise for example in biology, engineering, chemistry, and elsewhere. The parameters (or coefficients) and the source of PDE models are often unknown and are estimated from available measurements. Despite its importance, solving the estimation problem is mathematically and numerically challenging, especially when the measurements are corrupted by noise, which is often the case. Various methods have been proposed to solve estimation problems in PDEs, which can be classified into optimization methods and recursive methods. The optimization methods are usually computationally heavy, especially when the number of unknowns is large. In addition, they are sensitive to the initial guess and stop condition, and they suffer from a lack of robustness to noise. Recursive methods, such as observer-based approaches, are limited by their dependence on some structural properties such as observability and identifiability, which might be lost when approximating the PDE numerically. Moreover, most of these methods provide asymptotic estimates, which might not be useful for control applications, for example. An alternative non-asymptotic approach with less computational burden has been proposed in engineering fields based on the so-called modulating functions. In this dissertation, we propose to mathematically and numerically analyze the modulating functions based approaches. We also propose to extend these approaches to different situations. The contributions of this thesis are as follows. (i) Provide a mathematical analysis of the modulating function-based method (MFBM) which includes: its well-posedness, statistical properties, and estimation errors. (ii) Provide a numerical analysis of the MFBM through some estimation problems, and study the sensitivity of the method to the modulating functions' parameters. (iii) Propose an effective algorithm for selecting the method's design parameters

  16. The Emergent Capabilities of Distributed Satellites and Methods for Selecting Distributed Satellite Science Missions

    Science.gov (United States)

    Corbin, B. A.; Seager, S.; Ross, A.; Hoffman, J.

    2017-12-01

    Distributed satellite systems (DSS) have emerged as an effective and cheap way to conduct space science, thanks to advances in the small satellite industry. However, relatively few space science missions have utilized multiple assets to achieve their primary scientific goals. Previous research on methods for evaluating mission concept designs has shown that distributed systems are rarely competitive with monolithic systems, partially because it is difficult to quantify the added value of DSSs over monolithic systems. Comparatively little research has focused on how DSSs can be used to achieve new, fundamental space science goals that cannot be achieved with monolithic systems, or on how to choose a design from a larger possible tradespace of options. There are seven emergent capabilities of distributed satellites: shared sampling, simultaneous sampling, self-sampling, census sampling, stacked sampling, staged sampling, and sacrifice sampling. These capabilities are either fundamentally, analytically, or operationally unique in their application to distributed science missions, and they can be leveraged to achieve science goals that are either impossible or difficult and costly to achieve with monolithic systems. The Responsive Systems Comparison (RSC) method combines Multi-Attribute Tradespace Exploration with Epoch-Era Analysis to examine benefits, costs, and flexible options in complex systems over the mission lifecycle. Modifications to the RSC method as it exists in previously published literature were made in order to more accurately characterize how value is derived from space science missions. New metrics help rank designs by the value derived over their entire mission lifecycle and show more accurate cumulative value distributions. The RSC method was applied to four case study science missions that leveraged the emergent capabilities of distributed satellites to achieve their primary science goals. In all four case studies, RSC showed how scientific value was

  17. Fast temperature optimization of multi-source hyperthermia applicators with reduced-order modeling of 'virtual sources'

    International Nuclear Information System (INIS)

    Cheng, K-S; Stakhursky, Vadim; Craciunescu, Oana I; Stauffer, Paul; Dewhirst, Mark; Das, Shiva K

    2008-01-01

    The goal of this work is to build the foundation for facilitating real-time magnetic resonance image guided patient treatment for heating systems with a large number of physical sources (e.g. antennas). Achieving this goal requires knowledge of how the temperature distribution will be affected by changing each source individually, which requires time expenditure on the order of the square of the number of sources. To reduce computation time, we propose a model reduction approach that combines a smaller number of predefined source configurations (fewer than the number of actual sources) that are most likely to heat the tumor. The source configurations consist of magnitude and phase source excitation values for each actual source and may be computed from a CT scan based plan or a simplified generic model of the corresponding patient anatomy. Each pre-calculated source configuration is considered a 'virtual source'. We assume that the actual best source settings can be represented effectively as weighted combinations of the virtual sources. In the context of optimization, each source configuration is treated equivalently to one physical source. This model reduction approach is tested on a patient upper-leg tumor model (with and without temperature-dependent perfusion), heated using a 140 MHz ten-antenna cylindrical mini-annular phased array. Numerical simulations demonstrate that using only a few pre-defined source configurations can achieve temperature distributions comparable to those from full optimizations using all physical sources. The method yields close to optimal temperature distributions when using source configurations determined from a simplified model of the tumor, even when the tumor position is erroneously assumed to be ∼2.0 cm away from the actual position, as often happens in practical clinical application of pre-treatment planning. The method also appears to be robust under conditions of changing, nonlinear, temperature-dependent perfusion. The

  18. Cross correlations of quantum key distribution based on single-photon sources

    International Nuclear Information System (INIS)

    Dong Shuangli; Wang Xiaobo; Zhang Guofeng; Sun Jianhu; Zhang Fang; Xiao Liantuan; Jia Suotang

    2009-01-01

    We theoretically analyze the second-order correlation function in a quantum key distribution system with real single-photon sources. Based on single-event photon statistics, the influence of the modification caused by an eavesdropper's intervention and the effects of background signals on the cross correlations between authorized partners are presented. On this basis, we have shown a secure range of correlation against the intercept-resend attacks.

  19. Feasibility study on X-ray source with pinhole imaging method

    International Nuclear Information System (INIS)

    Qiu Rui; Li Junli

    2007-01-01

    In order to verify the feasibility of studying an X-ray source with the pinhole imaging method, and to optimize the design of the X-ray pinhole imaging system, an X-ray pinhole imaging equipment was set up. The change of the image due to changes in the position and intensity of the X-ray source was estimated with a mathematical method and validated with experiment. The results show that the changes in spot position and spot gray level are linearly related to the changes in the position and intensity of the X-ray source, so it is feasible to study an X-ray source with the pinhole imaging method in this application. The results provide some references for the design of X-ray pinhole imaging systems. (authors)

  20. Stochastic Industrial Source Detection Using Lower Cost Methods

    Science.gov (United States)

    Thoma, E.; George, I. J.; Brantley, H.; Deshmukh, P.; Cansler, J.; Tang, W.

    2017-12-01

    Hazardous air pollutants (HAPs) can be emitted from a variety of sources in industrial facilities, energy production, and commercial operations. Stochastic industrial sources (SISs) represent a subcategory of emissions from fugitive leaks, variable area sources, malfunctioning processes, and improperly controlled operations. From the shared perspective of industries and communities, cost-effective detection of mitigable SIS emissions can yield benefits such as safer working environments, cost saving through reduced product loss, lower air shed pollutant impacts, and improved transparency and community relations. Methods for SIS detection can be categorized by their spatial regime of operation, ranging from component-level inspection to high-sensitivity kilometer scale surveys. Methods can be temporally intensive (providing snap-shot measures) or sustained in both time-integrated and continuous forms. Each method category has demonstrated utility; however, broad adoption (or routine use) has thus far been limited by cost and implementation viability. Described here is a subset of SIS methods explored by the U.S. EPA's next generation emission measurement (NGEM) program that focuses on lower cost methods and models. An emerging systems approach that combines multiple forms to help compensate for the reduced performance factors of lower cost systems is discussed. A case study of a multi-day HAP emission event observed by a combination of low cost sensors, open-path spectroscopy, and passive samplers is detailed. Early field results of a novel field gas chromatograph coupled with a fast HAP concentration sensor are described. Progress toward near real-time inverse source triangulation assisted by pre-modeled facility profiles using the Los Alamos Quick Urban & Industrial Complex (QUIC) model is discussed.

  1. A novel design method for ground source heat pump

    Directory of Open Access Journals (Sweden)

    Dong Xing-Jie

    2014-01-01

    Full Text Available This paper proposes a novel design method for ground source heat pumps. Ground source heat pump operation can be controlled using several parameters, such as the total length of buried pipe, the spacing between wells, the thermal properties of the soil, the thermal resistance of the well, the initial soil temperature, and the annual dynamic load. By studying the effect of well number and well spacing, we conclude that as the number of wells increases, the inlet and outlet water temperatures decrease in summer and increase in winter, which enhances the efficiency of the ground source heat pump. The well spacing only slightly affects the water temperatures, but it affects the soil temperature to some extent. Ground source heat pump operation matched with a cooling tower is also investigated as a way to achieve thermal balance. This method greatly facilitates ground source heat pump design.

  2. Studies on the supposition of liquid source for irradiation and its dose distribution, (1)

    International Nuclear Information System (INIS)

    Yoshimura, Seiji; Nishida, Tsuneo

    1977-01-01

    Recently, radioisotopes have been used and applied in many fields, and applications of irradiation effects are expected to attract particular attention in the future. To date, irradiation sources have been solid materials sealed into capsules of various kinds. We therefore consider using a liquid radioisotope as the irradiation source, because it offers some advantages over a solid source, such as freedom in its shape and the ease with which it can be added to or attenuated. In these experiments we measured the dose distribution produced by a columnar liquid source. We expect that such sources will be put to practical use. (auth.)

  3. D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things.

    Science.gov (United States)

    Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B

    2018-01-01

    Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of classical DSC by employing the decoding delay concept, which enables the use of the maximum correlated portion of sensor samples during event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications having a massive number of sensors, towards the realization of the Internet of Sensing Things (IoST).

  4. A Comparative Study of Distribution System Parameter Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
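
    The state-vector augmentation idea reduces to a Kalman update in which an unknown line parameter rides along with the electrical state. The sketch below is generic: the measurement function, Jacobian, and the appended parameter are placeholders, not the paper's model.

      import numpy as np

      def kf_update(x, P, z, h, H, R):
          # One linearized Kalman measurement update. The last entry of x is the
          # augmented parameter (e.g. a branch resistance) estimated jointly with
          # the electrical state from combined measurement snapshots.
          y = z - h(x)                       # innovation
          S = H @ P @ H.T + R                # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
          x_new = x + K @ y
          P_new = (np.eye(len(x)) - K @ H) @ P
          return x_new, P_new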

  5. Selective structural source identification

    Science.gov (United States)

    Totaro, Nicolas

    2018-04-01

    In the field of acoustic source reconstruction, the inverse Patch Transfer Function (iPTF) method has recently been proposed and has shown satisfactory results whatever the shape of the vibrating surface and whatever the acoustic environment. These two interesting features are due to the virtual acoustic volume concept underlying the iPTF methods. The aim of the present article is to show how this concept of a virtual subsystem can be used in structures to reconstruct the applied force distribution. Virtual boundary conditions can be applied on a part of the structure, called the virtual testing structure, to identify the force distribution applied in that zone regardless of the presence of other sources outside the zone under consideration. In the present article, the applicability of the method is demonstrated only on planar structures. However, the final example shows how the method can be applied to a planar structure of complex shape with point-welded stiffeners, even in the tested zone. In that case, if the virtual testing structure includes the stiffeners, the identified force distribution exhibits only the positions of the externally applied forces. If the virtual testing structure does not include the stiffeners, the identified force distribution permits localizing the forces due to the coupling between the structure and the stiffeners through the welded points, as well as those due to the external forces. This is why this approach is considered here a selective structural source identification method. It is demonstrated that this approach clearly falls in the same framework as the Force Analysis Technique, the Virtual Fields Method and the 2D spatial Fourier transform. Even if this approach has a lot in common with the latter, it has some interesting particularities, such as its low sensitivity to measurement noise.

  6. Enhancing the performance of the measurement-device-independent quantum key distribution with heralded pair-coherent sources

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Feng; Zhang, Chun-Hui; Liu, Ai-Ping [Institute of Signal Processing Transmission, Nanjing University of Posts and Telecommunications, Nanjing 210003 (China); Key Lab of Broadband Wireless Communication and Sensor Network Technology, Nanjing University of Posts and Telecommunications, Ministry of Education, Nanjing 210003 (China); Wang, Qin, E-mail: qinw@njupt.edu.cn [Institute of Signal Processing Transmission, Nanjing University of Posts and Telecommunications, Nanjing 210003 (China); Key Lab of Broadband Wireless Communication and Sensor Network Technology, Nanjing University of Posts and Telecommunications, Ministry of Education, Nanjing 210003 (China); Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026 (China)

    2016-04-01

    In this paper, we propose to implement the heralded pair-coherent source into the measurement-device-independent quantum key distribution. By comparing its performance with other existing schemes, we demonstrate that our new scheme can overcome many shortcomings existing in current schemes, and show excellent behavior in the quantum key distribution. Moreover, even when taking the statistical fluctuation into account, we can still obtain quite high key generation rate at very long transmission distance by using our new scheme. - Highlights: • Implement the heralded pair-coherent source into the measurement-device-independent quantum key distribution. • Overcome many shortcomings existing in current schemes and show excellent behavior. • Obtain quite high key generation rate even when taking statistical fluctuation into account.

  7. A stationary computed tomography system with cylindrically distributed sources and detectors.

    Science.gov (United States)

    Chen, Yi; Xi, Yan; Zhao, Jun

    2014-01-01

    The temporal resolution of current computed tomography (CT) systems is limited by the rotation speed of their gantries. A helical interlaced source detector array (HISDA) CT, which is a stationary CT system with distributed X-ray sources and detectors, is presented in this paper to overcome the aforementioned limitation and achieve high temporal resolution. Projection data can be obtained from different angles in a short time and do not require source, detector, or object motion. Axial coverage speed is increased further by employing a parallel scan scheme. Interpolation is employed to approximate the missing data in the gaps, and then a Katsevich-type reconstruction algorithm is applied to enable an approximate reconstruction. The proposed algorithm suppressed the cone beam and gap-induced artifacts in HISDA CT. The results also suggest that gap-induced artifacts can be reduced by employing a large helical pitch for a fixed gap height. HISDA CT is a promising 3D dynamic imaging architecture given its good temporal resolution and stationary advantage.

  8. Tau method approximation of the Hubbell rectangular source integral

    International Nuclear Information System (INIS)

    Kalla, S.L.; Khajah, H.G.

    2000-01-01

    The Tau method is applied to obtain expansions, in terms of Chebyshev polynomials, which approximate the Hubbell rectangular source integral: I(a,b) = \int_0^b \frac{1}{\sqrt{1+x^2}} \arctan\!\left(\frac{a}{\sqrt{1+x^2}}\right) dx. This integral corresponds to the response of an omni-directional radiation detector situated over a corner of a plane isotropic rectangular source. A discussion of the error in the Tau method approximation follows
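
    For a quick numerical cross-check of such expansions, the integral can be evaluated by quadrature and compared against a least-squares Chebyshev surrogate (plain curve fitting, not the paper's Tau method):

      import numpy as np
      from numpy.polynomial.chebyshev import Chebyshev
      from scipy.integrate import quad

      def hubbell(a, b):
          # Direct quadrature of the Hubbell rectangular-source integral
          f = lambda x: np.arctan(a / np.sqrt(1 + x**2)) / np.sqrt(1 + x**2)
          return quad(f, 0.0, b)[0]

      # Least-squares Chebyshev surrogate in b on [0, 2] for fixed a = 1
      a = 1.0
      bs = np.linspace(0.0, 2.0, 101)
      cheb = Chebyshev.fit(bs, [hubbell(a, b) for b in bs], deg=8)
      print(abs(cheb(1.5) - hubbell(a, 1.5)))   # surrogate error at a test point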

  9. [Sampling methods for PM2.5 from stationary sources: a review].

    Science.gov (United States)

    Jiang, Jing-Kun; Deng, Jian-Guo; Li, Zhen; Li, Xing-Hua; Duan, Lei; Hao, Ji-Ming

    2014-05-01

    The new Chinese national ambient air quality standard was published in 2012 and will be implemented in 2016. To meet the requirements of this new standard, monitoring and controlling PM2.5 emission from stationary sources is very important. However, so far there is no national standard method for sampling PM2.5 from stationary sources. Different sampling methods for PM2.5 from stationary sources and the relevant international standards are reviewed in this study, covering methods for PM2.5 sampling in flue gas and methods for PM2.5 sampling after dilution. Both advantages and disadvantages of these sampling methods are discussed. For environmental management, methods for PM2.5 sampling in flue gas, such as the impactor and the virtual impactor, are suggested as a standard to determine filterable PM2.5. To evaluate the environmental and health effects of PM2.5 from stationary sources, a standard dilution method for sampling of total PM2.5 should be established.

  10. The use of cluster analysis method for the localization of acoustic emission sources detected during the hydrotest of PWR pressure vessels

    International Nuclear Information System (INIS)

    Liska, J.; Svetlik, M.; Slama, K.

    1982-01-01

    The acoustic emission method is a promising tool for checking reactor pressure vessel integrity. Localization of emission sources is the first and most important step in processing emission signals. The paper describes an emission source localization method based on cluster analysis of the set of points depicting the emission events in the plane of the coordinates of their occurrence. The method constructs the minimum spanning tree of this point set and partitions it into fragments corresponding to point clusters. Furthermore, the probability distribution laws of the minimum spanning tree edge length are considered for one and for several clusters, with the aim of finding the optimum length of the critical edge for partitioning the tree. Practical application of the method is demonstrated by localizing the emission sources detected during a hydrotest of a pressure vessel used for testing the reactor pressure vessel covers. (author)
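
    The clustering step can be sketched with standard tools as below; the event coordinates and the critical edge length are assumed toy values, not the paper's data.

      import numpy as np
      from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
      from scipy.spatial.distance import pdist, squareform

      # Toy emission-event coordinates in the plane of occurrence: two tight
      # clusters standing in for two localized acoustic emission sources
      rng = np.random.default_rng(0)
      events = np.vstack([rng.normal((0.0, 0.0), 0.05, (40, 2)),
                          rng.normal((1.0, 1.0), 0.05, (40, 2))])

      # Build the minimum spanning tree and delete edges longer than a critical
      # length; the surviving fragments are the candidate source clusters
      mst = minimum_spanning_tree(squareform(pdist(events))).toarray()
      critical_edge = 0.3                # assumed threshold, not the paper's value
      mst[mst > critical_edge] = 0.0
      n_sources, labels = connected_components(mst > 0, directed=False)
      print(n_sources, "clusters located")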

  11. Comparing four methods to estimate usual intake distributions

    NARCIS (Netherlands)

    Souverein, O.W.; Dekkers, A.L.; Geelen, A.; Haubrock, J.; Vries, de J.H.M.; Ocke, M.C.; Harttig, U.; Boeing, H.; Veer, van 't P.

    2011-01-01

    Background/Objectives: The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As ‘true’ usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data

  12. The radio spectral energy distribution of infrared-faint radio sources

    Science.gov (United States)

    Herzog, A.; Norris, R. P.; Middelberg, E.; Seymour, N.; Spitler, L. R.; Emonts, B. H. C.; Franzen, T. M. O.; Hunstead, R.; Intema, H. T.; Marvil, J.; Parker, Q. A.; Sirothia, S. K.; Hurley-Walker, N.; Bell, M.; Bernardi, G.; Bowman, J. D.; Briggs, F.; Cappallo, R. J.; Callingham, J. R.; Deshpande, A. A.; Dwarakanath, K. S.; For, B.-Q.; Greenhill, L. J.; Hancock, P.; Hazelton, B. J.; Hindson, L.; Johnston-Hollitt, M.; Kapińska, A. D.; Kaplan, D. L.; Lenc, E.; Lonsdale, C. J.; McKinley, B.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Morgan, J.; Oberoi, D.; Offringa, A.; Ord, S. M.; Prabu, T.; Procopio, P.; Udaya Shankar, N.; Srivani, K. S.; Staveley-Smith, L.; Subrahmanyan, R.; Tingay, S. J.; Wayth, R. B.; Webster, R. L.; Williams, A.; Williams, C. L.; Wu, C.; Zheng, Q.; Bannister, K. W.; Chippendale, A. P.; Harvey-Smith, L.; Heywood, I.; Indermuehle, B.; Popping, A.; Sault, R. J.; Whiting, M. T.

    2016-10-01

    Context. Infrared-faint radio sources (IFRS) are a class of radio-loud (RL) active galactic nuclei (AGN) at high redshifts (z ≥ 1.7) that are characterised by their relative infrared faintness, resulting in enormous radio-to-infrared flux density ratios of up to several thousand. Aims: Because of their optical and infrared faintness, it is very challenging to study IFRS at these wavelengths. However, IFRS are relatively bright in the radio regime with 1.4 GHz flux densities of a few to a few tens of mJy. Therefore, the radio regime is the most promising wavelength regime in which to constrain their nature. We aim to test the hypothesis that IFRS are young AGN, particularly GHz peaked-spectrum (GPS) and compact steep-spectrum (CSS) sources that have a low frequency turnover. Methods: We use the rich radio data set available for the Australia Telescope Large Area Survey fields, covering the frequency range between 150 MHz and 34 GHz with up to 19 wavebands from different telescopes, and build radio spectral energy distributions (SEDs) for 34 IFRS. We then study the radio properties of this class of object with respect to turnover, spectral index, and behaviour towards higher frequencies. We also present the highest-frequency radio observations of an IFRS, observed with the Plateau de Bure Interferometer at 105 GHz, and model the multi-wavelength and radio-far-infrared SED of this source. Results: We find IFRS usually follow single power laws down to observed frequencies of around 150 MHz. Mostly, the radio SEDs are steep; IFRS show statistically significantly steeper radio SEDs than the broader RL AGN population. Our analysis reveals that the fractions of GPS and CSS sources in the population of IFRS are consistent with the fractions in the broader RL AGN population. We find that at least % of IFRS contain young AGN, although the fraction might be significantly higher as suggested by the steep SEDs and the compact morphology of IFRS. The detailed multi

  13. Methods and Tools for Profiling and Control of Distributed Systems

    Directory of Open Access Journals (Sweden)

    Sukharev Roman

    2017-01-01

    Full Text Available The article analyzes and standardizes methods for profiling distributed systems that focus on simulation to conduct experiments and build a graph model of the system. The theory of queueing networks is used for simulation modeling of distributed systems receiving and processing user requests. To automate the above profiling method, a software application was developed with a modular structure similar to that of a SCADA system.

  14. HARVESTING, INTEGRATING AND DISTRIBUTING LARGE OPEN GEOSPATIAL DATASETS USING FREE AND OPEN-SOURCE SOFTWARE

    Directory of Open Access Journals (Sweden)

    R. Oliveira

    2016-06-01

    Full Text Available Federal, State and Local government agencies in the USA are investing heavily in the dissemination of Open Data sets produced by each of them. The main driver behind this thrust is to increase agencies' transparency and accountability, as well as to improve citizens' awareness. However, not all Open Data sets are easy to access and integrate with other Open Data sets, even ones available from the same agency. The City and County of Denver Open Data Portal distributes several types of geospatial datasets; one of them is the city parcels information containing 224,256 records. Although this data layer contains many pieces of information, it is incomplete for some custom purposes. Open-source software was used to first collect data from diverse City of Denver Open Data sets, then upload them to a repository in the Cloud where they were processed using a PostgreSQL installation on the Cloud and Python scripts. Our method was able to extract non-spatial information from a ‘not-ready-to-download’ source that could then be combined with the initial data set to enhance its potential use.

  15. The dislocation distribution function near a crack tip generated by external sources

    International Nuclear Information System (INIS)

    Lung, C.W.; Deng, K.M.

    1988-06-01

    The dislocation distribution function near a crack tip generated by external sources is calculated. It is similar in shape to the curves calculated for the crack-tip emission case, but the quantitative difference is quite large. The image force enlarges the negative dislocation zone but does not change the form of the curve. (author). 10 refs, 3 figs

  16. From the Kirsch-Kress potential method via the range test to the singular sources method

    International Nuclear Information System (INIS)

    Potthast, R; Schulz, J

    2005-01-01

    We review three reconstruction methods for inverse obstacle scattering problems. We analyse the relation between the Kirsch-Kress potential method (1986), the range test of Kusiak, Potthast and Sylvester (2003) and the singular sources method of Potthast (2000). In particular, we show that the range test is a logical extension of the Kirsch-Kress method into the category of sampling methods employing the tool of domain sampling. We then show how a multi-wave version of the range test can be set up and work out its relation to the singular sources method. Numerical examples and demonstrations are provided

  17. Global inventory of NOx sources

    International Nuclear Information System (INIS)

    Delmas, R.; Serca, D.; Jambert, C.

    1997-01-01

    Nitrogen oxides are key compounds for the oxidation capacity of the troposphere. Their concentration depends on the proximity of sources because of their short atmospheric lifetime. An accurate knowledge of the distribution of their sources and sinks is therefore crucial. At the global scale, the dominant sources of nitrogen oxides - combustion of fossil fuel (about 50%) and biomass burning (about 20%) - are basically anthropogenic. Natural sources, including lightning and microbial activity in soils, therefore represent less than 30% of total emissions. Fertilizer use in agriculture constitutes an anthropogenic perturbation of the microbial source. The methods used to estimate the magnitude and distribution of these dominant sources of nitrogen oxides are discussed. Some minor sources which may play a specific role in tropospheric chemistry, such as NOx emission from aircraft in the upper troposphere or input from production in the stratosphere by N2O photodissociation, are also considered

  18. Micro-seismic Imaging Using a Source Independent Waveform Inversion Method

    KAUST Repository

    Wang, Hanchen

    2016-04-18

    Micro-seismology is attracting more and more attention in the exploration seismology community. The main goal in micro-seismic imaging is to find the source location and the ignition time in order to track the fracture expansion, which will help engineers monitor the reservoirs. Conventional imaging methods work fine in this field but there are many limitations, such as manual picking, incorrect migration velocity and low signal-to-noise ratio (S/N). In traditional surface survey imaging, full waveform inversion (FWI) is widely used. The FWI method updates the velocity model by minimizing the misfit between the observed data and the predicted data. Using FWI to locate and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield, and it overcomes the difficulties of manual picking and of an incorrect migration velocity model. However, the technique of waveform inversion of micro-seismic events faces its own problems. There is significant nonlinearity due to the unknown source location (space) and function (time). We have developed a source independent FWI of micro-seismic events to simultaneously invert for the source image, source function and velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. To examine the accuracy of the inverted source image and velocity model, the extended image for the source wavelet along the z-axis is extracted. The angle gather is also calculated to check the applicability of the migration velocity. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and the background velocity in synthetic experiments with parts of both the Marmousi and SEG models.
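
    The core of the source-independent trick, convolving each data set with a reference trace taken from the other so that the unknown source wavelet cancels, can be illustrated with a small numerical sketch on synthetic data (an illustration of the principle, not the authors' code):

```python
import numpy as np

def conv_misfit(d_obs, d_syn, ref_idx=0):
    """Source-independent misfit: convolve each observed trace with a
    reference synthetic trace and vice versa, so the unknown source
    signature (including ignition time) cancels in the comparison."""
    r_obs, r_syn = d_obs[ref_idx], d_syn[ref_idx]
    resid = 0.0
    for o, s in zip(d_obs, d_syn):
        a = np.convolve(o, r_syn)   # observed trace * synthetic reference
        b = np.convolve(s, r_obs)   # synthetic trace * observed reference
        resid += np.sum((a - b) ** 2)
    return 0.5 * resid

# Toy check: two datasets that differ only by the source wavelet give
# (near-)zero misfit, unlike a plain least-squares comparison.
rng = np.random.default_rng(0)
g = rng.standard_normal((5, 200))            # stand-in Green's functions
w1, w2 = rng.standard_normal(30), rng.standard_normal(30)
d1 = np.array([np.convolve(t, w1) for t in g])
d2 = np.array([np.convolve(t, w2) for t in g])
print(conv_misfit(d1, d2))                   # ~0 up to rounding
```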

  19. Distributed gas detection system and method

    Science.gov (United States)

    Challener, William Albert; Palit, Sabarni; Karp, Jason Harris; Kasten, Ansas Matthias; Choudhury, Niloy

    2017-11-21

    A distributed gas detection system includes one or more hollow core fibers disposed in different locations, one or more solid core fibers optically coupled with the one or more hollow core fibers and configured to receive light of one or more wavelengths from a light source, and an interrogator device configured to receive at least some of the light propagating through the one or more solid core fibers and the one or more hollow core fibers. The interrogator device is configured to identify the location of a gas-of-interest by examining absorption of at least one of the wavelengths of the light within at least one of the hollow core fibers.
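
    A toy version of the interrogation logic, assuming simple Beer-Lambert absorption and purely illustrative fiber locations, constants and threshold (none of which come from the patent record above), might look like this:

```python
import numpy as np

# Hypothetical per-fiber readings: transmitted power at the absorbing
# (probe) wavelength normalized by a non-absorbed reference wavelength.
locations = ["hollow fiber A", "hollow fiber B", "hollow fiber C"]
transmission = np.array([0.98, 0.61, 0.97])

# Beer-Lambert: I/I0 = exp(-alpha * C * L)  =>  C = -ln(I/I0) / (alpha * L)
alpha_L = 2.0                       # assumed absorptivity x path length
conc = -np.log(transmission) / alpha_L
for loc, c in zip(locations, conc):
    flag = "  <-- gas detected" if c > 0.05 else ""
    print(f"{loc}: relative concentration {c:.3f}{flag}")
```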

  20. A review on the sources and spatial-temporal distributions of Pb in Jiaozhou Bay

    Science.gov (United States)

    Yang, Dongfang; Zhang, Jie; Wang, Ming; Zhu, Sixi; Wu, Yunjie

    2017-12-01

    This paper provides a review of the sources and the spatial and temporal variations of Pb in Jiaozhou Bay, based on investigations of Pb in surface waters in different seasons during 1979-1983. The strengths of the Pb sources in Jiaozhou Bay showed increasing trends, and the pollution level of Pb in this bay was slight to moderate in the early stage of reform and opening-up. Pb contents in the marine bay were mainly determined by the strength and frequency of Pb inputs from human activities, and Pb could move from high-content areas to low-content areas in the ocean interior. Surface waters in the ocean were polluted by human activities, and bottom waters were polluted through the vertical movement of water. The spatial distribution of Pb in the waters proceeded in three steps: (1) Pb was transferred to surface waters in the bay, (2) Pb was transported within the surface waters, and (3) Pb was transferred to and accumulated in the bottom waters.

  1. Dark Energy Survey Year 1 Results: Redshift distributions of the weak lensing source galaxies

    Energy Technology Data Exchange (ETDEWEB)

    Hoyle, B.; et al.

    2017-08-04

    We describe the derivation and validation of redshift distribution estimates and their uncertainties for the galaxies used as weak lensing sources in the Dark Energy Survey (DES) Year 1 cosmological analyses. The Bayesian Photometric Redshift (BPZ) code is used to assign galaxies to four redshift bins between z=0.2 and 1.3, and to produce initial estimates of the lensing-weighted redshift distributions $n^i_{PZ}(z)$ for bin i. Accurate determination of cosmological parameters depends critically on knowledge of $n^i$ but is insensitive to bin assignments or redshift errors for individual galaxies. The cosmological analyses allow for shifts $n^i(z)=n^i_{PZ}(z-\Delta z^i)$ to correct the mean redshift of $n^i(z)$ for biases in $n^i_{\rm PZ}$. The $\Delta z^i$ are constrained by comparison of independently estimated 30-band photometric redshifts of galaxies in the COSMOS field to BPZ estimates made from the DES griz fluxes, for a sample matched in fluxes, pre-seeing size, and lensing weight to the DES weak-lensing sources. In companion papers, the $\Delta z^i$ are further constrained by the angular clustering of the source galaxies around red galaxies with secure photometric redshifts at 0.15 < z < 0.9. The clustering and COSMOS validation methods produce consistent estimates of $\Delta z^i$, with combined uncertainties of $\sigma_{\Delta z^i}=$0.015, 0.013, 0.011, and 0.022 in the four bins. We marginalize over these in all analyses to follow, which does not diminish the constraining power significantly. Repeating the photo-z procedure using the Directional Neighborhood Fitting (DNF) algorithm instead of BPZ, or using the $n^i(z)$ directly estimated from COSMOS, yields no discernible difference in cosmological inferences.
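
    The shift correction $n^i(z)=n^i_{PZ}(z-\Delta z^i)$ is simply a translation of the estimated distribution. A minimal sketch of applying it, using an illustrative Gaussian $n_{PZ}$ rather than DES data, is:

```python
import numpy as np

def shift_nz(z, n_pz, dz):
    """Apply the mean-redshift correction n(z) = n_PZ(z - dz) by
    linear interpolation, then renormalize to unit integral."""
    n = np.interp(z - dz, z, n_pz, left=0.0, right=0.0)
    return n / np.trapz(n, z)

# Illustrative bin: a Gaussian n_PZ centred at z = 0.6, shifted by dz = 0.015.
z = np.linspace(0.0, 2.0, 401)
n_pz = np.exp(-0.5 * ((z - 0.6) / 0.1) ** 2)
n = shift_nz(z, n_pz, dz=0.015)
print("mean z before:", np.trapz(z * n_pz, z) / np.trapz(n_pz, z))
print("mean z after: ", np.trapz(z * n, z))
```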

  2. Dark Energy Survey Year 1 Results: Redshift distributions of the weak lensing source galaxies

    Science.gov (United States)

    Hoyle, B.; Gruen, D.; Bernstein, G. M.; Rau, M. M.; De Vicente, J.; Hartley, W. G.; Gaztanaga, E.; DeRose, J.; Troxel, M. A.; Davis, C.; Alarcon, A.; MacCrann, N.; Prat, J.; Sánchez, C.; Sheldon, E.; Wechsler, R. H.; Asorey, J.; Becker, M. R.; Bonnett, C.; Carnero Rosell, A.; Carollo, D.; Carrasco Kind, M.; Castander, F. J.; Cawthon, R.; Chang, C.; Childress, M.; Davis, T. M.; Drlica-Wagner, A.; Gatti, M.; Glazebrook, K.; Gschwend, J.; Hinton, S. R.; Hoormann, J. K.; Kim, A. G.; King, A.; Kuehn, K.; Lewis, G.; Lidman, C.; Lin, H.; Macaulay, E.; Maia, M. A. G.; Martini, P.; Mudd, D.; Möller, A.; Nichol, R. C.; Ogando, R. L. C.; Rollins, R. P.; Roodman, A.; Ross, A. J.; Rozo, E.; Rykoff, E. S.; Samuroff, S.; Sevilla-Noarbe, I.; Sharp, R.; Sommer, N. E.; Tucker, B. E.; Uddin, S. A.; Varga, T. N.; Vielzeuf, P.; Yuan, F.; Zhang, B.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Bechtol, K.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Busha, M. T.; Capozzi, D.; Carretero, J.; Crocce, M.; D'Andrea, C. B.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Diehl, H. T.; Doel, P.; Eifler, T. F.; Estrada, J.; Evrard, A. E.; Fernandez, E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Giannantonio, T.; Goldstein, D. A.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Jarvis, M.; Jeltema, T.; Johnson, M. W. G.; Johnson, M. D.; Kirk, D.; Krause, E.; Kuhlmann, S.; Kuropatkin, N.; Lahav, O.; Li, T. S.; Lima, M.; March, M.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Miquel, R.; Nord, B.; O'Neill, C. R.; Plazas, A. A.; Romer, A. K.; Sako, M.; Sanchez, E.; Santiago, B.; Scarpine, V.; Schindler, R.; Schubnell, M.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Tucker, D. L.; Vikram, V.; Walker, A. R.; Weller, J.; Wester, W.; Wolf, R. C.; Yanny, B.; Zuntz, J.; DES Collaboration

    2018-04-01

    We describe the derivation and validation of redshift distribution estimates and their uncertainties for the populations of galaxies used as weak lensing sources in the Dark Energy Survey (DES) Year 1 cosmological analyses. The Bayesian Photometric Redshift (BPZ) code is used to assign galaxies to four redshift bins between z ≈ 0.2 and ≈1.3, and to produce initial estimates of the lensing-weighted redshift distributions n^i_PZ(z) ∝ dn^i/dz for members of bin i. Accurate determination of cosmological parameters depends critically on knowledge of n^i but is insensitive to bin assignments or redshift errors for individual galaxies. The cosmological analyses allow for shifts n^i(z)=n^i_PZ(z-Δz^i) to correct the mean redshift of n^i(z) for biases in n^i_PZ. The Δz^i are constrained by comparison of independently estimated 30-band photometric redshifts of galaxies in the COSMOS field to BPZ estimates made from the DES griz fluxes, for a sample matched in fluxes, pre-seeing size, and lensing weight to the DES weak-lensing sources. In companion papers, the Δz^i of the three lowest redshift bins are further constrained by the angular clustering of the source galaxies around red galaxies with secure photometric redshifts at 0.15 < z < 0.9. This paper details the BPZ and COSMOS procedures, and demonstrates that the cosmological inference is insensitive to details of the n^i(z) beyond the choice of Δz^i. The clustering and COSMOS validation methods produce consistent estimates of Δz^i in the bins where both can be applied, with combined uncertainties of σ_{Δz^i} = 0.015, 0.013, 0.011, and 0.022 in the four bins. Repeating the photo-z procedure instead using the Directional Neighborhood Fitting (DNF) algorithm, or using the n^i(z) estimated from the matched sample in COSMOS, yields no discernible difference in cosmological inferences.

  3. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    International Nuclear Information System (INIS)

    Sig Drellack, Lance Prothro

    2007-01-01

    The simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and in transport parameters, including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility, is considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions

  4. SoC-Based Droop Method for Distributed Energy Storage in DC Microgrid Applications

    DEFF Research Database (Denmark)

    Lu, Xiaonan; Sun, Kai; Guerrero, Josep M.

    2012-01-01

    With the progress of distributed generation nowadays, microgrids are employed to integrate different renewable energy sources into a given area. Because several kinds of renewable sources have DC outputs, the DC microgrid has drawn more attention recently. Meanwhile, to deal with the uncertainty

  5. Geometric calibration of a stationary digital breast tomosynthesis system based on distributed carbon nanotube X-ray source arrays.

    Directory of Open Access Journals (Sweden)

    Changhui Jiang

    Full Text Available Stationary digital breast tomosynthesis (sDBT) with distributed X-ray sources based on carbon nanotube (CNT) field emission cathodes has been recently proposed as an approach that can prevent motion blur produced by traditional DBT systems. In this paper, we simulate a geometric calibration method based on a proposed multi-source CNT X-ray sDBT system. This method is a projection matrix-based approach with seven geometric parameters, all of which can be obtained from only one projection datum of the phantom. To our knowledge, this study reports the first application of this approach in a CNT-based multi-beam X-ray sDBT system. The simulation results showed that the extracted geometric parameters from the calculated projection matrix are extremely close to the input values and that the proposed method is effective and reliable for a square sDBT system. In addition, a traditional cone-beam computed tomography (CT) system was also simulated, and the uncalibrated and calibrated geometric parameters were used in image reconstruction based on the filtered back-projection (FBP) method. The results indicated that the images reconstructed with calibrated geometric parameters have fewer artifacts and are closer to the reference image. All the simulation tests showed that this geometric calibration method is optimized for sDBT systems but can also be applied to other application-specific CT imaging systems.

  6. Geometric calibration of a stationary digital breast tomosynthesis system based on distributed carbon nanotube X-ray source arrays.

    Science.gov (United States)

    Jiang, Changhui; Zhang, Na; Gao, Juan; Hu, Zhanli

    2017-01-01

    Stationary digital breast tomosynthesis (sDBT) with distributed X-ray sources based on carbon nanotube (CNT) field emission cathodes has been recently proposed as an approach that can prevent motion blur produced by traditional DBT systems. In this paper, we simulate a geometric calibration method based on a proposed multi-source CNT X-ray sDBT system. This method is a projection matrix-based approach with seven geometric parameters, all of which can be obtained from only one projection datum of the phantom. To our knowledge, this study reports the first application of this approach in a CNT-based multi-beam X-ray sDBT system. The simulation results showed that the extracted geometric parameters from the calculated projection matrix are extremely close to the input values and that the proposed method is effective and reliable for a square sDBT system. In addition, a traditional cone-beam computed tomography (CT) system was also simulated, and the uncalibrated and calibrated geometric parameters were used in image reconstruction based on the filtered back-projection (FBP) method. The results indicated that the images reconstructed with calibrated geometric parameters have fewer artifacts and are closer to the reference image. All the simulation tests showed that this geometric calibration method is optimized for sDBT systems but can also be applied to other application-specific CT imaging systems.
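
    The seven-parameter calibration scheme itself is not spelled out in the record, but the general idea of reading geometry off a projection matrix can be sketched with the standard RQ-based decomposition P ~ K[R|t] used in camera calibration (an assumed stand-in for illustration, not the authors' algorithm):

```python
import numpy as np
from scipy.linalg import rq

def decompose_projection(P):
    """Split a 3x4 projection matrix P ~ K [R | t] into intrinsics K,
    rotation R and translation t via RQ factorization (a generic
    decomposition, not the paper's seven-parameter sDBT scheme)."""
    M = P[:, :3]
    K, R = rq(M)
    # Fix signs so that K has a positive diagonal and R stays a rotation.
    S = np.diag(np.sign(np.diag(K)))
    K, R = K @ S, S @ R
    t = np.linalg.solve(K, P[:, 3])
    return K / K[2, 2], R, t

# Self-check: rebuild P from a synthetic K0, R0, t0 and recover them.
K0 = np.array([[1200., 0., 256.],
               [0., 1200., 200.],
               [0., 0., 1.]])
theta = 0.1
R0 = np.array([[np.cos(theta), -np.sin(theta), 0.],
               [np.sin(theta),  np.cos(theta), 0.],
               [0., 0., 1.]])
t0 = np.array([5., -2., 100.])
P = K0 @ np.hstack([R0, t0[:, None]])
K, R, t = decompose_projection(P)
print(np.allclose(K, K0), np.allclose(R, R0), np.allclose(t, t0))
```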

  7. Integrated Power Flow and Short Circuit Calculation Method for Distribution Network with Inverter Based Distributed Generation

    OpenAIRE

    Yang, Shan; Tong, Xiangqian

    2016-01-01

    Power flow calculation and short circuit calculation are the basis of theoretical research for distribution networks with inverter based distributed generation. The similarity of the equivalent model for inverter based distributed generation during normal and fault conditions of the distribution network, and the differences between power flow and short circuit calculation, are analyzed in this paper. Then an integrated power flow and short circuit calculation method for distribution networks with inverter based distributed generation is proposed.

  8. Ion-source dependence of the distributions of internuclear separations in 2-MeV HeH+ beams

    International Nuclear Information System (INIS)

    Kanter, E.P.; Gemmell, D.S.; Plesser, I.; Vager, Z.

    1981-01-01

    Experiments involving the use of MeV molecular-ion beams have yielded new information on atomic collisions in solids. A central part of the analysis of such experiments is a knowledge of the distribution of internuclear separations in the incident beam. In an attempt to determine how these distributions depend on ion-source gas conditions, we have studied foil-induced dissociations of H2+, H3+, HeH+, and OH2+ ions. Although changes of ion-source gas composition and pressure were found to have no measurable influence on the vibrational state populations of the beams reaching our target, for HeH+ we found that beams produced in our rf source were vibrationally hotter than beams produced in a duoplasmatron. This was also seen in studies of neutral fragments and transmitted molecules

  9. Free-Space Quantum Key Distribution with a High Generation Rate KTP Waveguide Photon-Pair Source

    Science.gov (United States)

    Wilson, J.; Chaffee, D.; Wilson, N.; Lekki, J.; Tokars, R.; Pouch, J.; Lind, A.; Cavin, J.; Helmick, S.; Roberts, T.

    2016-01-01

    NASA awarded Small Business Innovative Research (SBIR) contracts to AdvR, Inc. to develop a high generation rate source of entangled photons that could be used to explore quantum key distribution (QKD) protocols. The final product, a photon pair source using a dual-element periodically poled potassium titanyl phosphate (KTP) waveguide, was delivered to NASA Glenn Research Center in June of 2015. This paper describes the source, its characterization, and its performance in a B92 (Bennett, 1992) protocol QKD experiment.

  10. A finite-difference contrast source inversion method

    International Nuclear Information System (INIS)

    Abubakar, A; Hu, W; Habashy, T M; Van den Berg, P M

    2008-01-01

    We present a contrast source inversion (CSI) algorithm using a finite-difference (FD) approach as its backbone for reconstructing the unknown material properties of inhomogeneous objects embedded in a known inhomogeneous background medium. Unlike the CSI method using the integral equation (IE) approach, the FD-CSI method can readily employ an arbitrary inhomogeneous medium as its background. The ability to use an inhomogeneous background medium has made this algorithm very suitable to be used in through-wall imaging and time-lapse inversion applications. Similar to the IE-CSI algorithm the unknown contrast sources and contrast function are updated alternately to reconstruct the unknown objects without requiring the solution of the full forward problem at each iteration step in the optimization process. The FD solver is formulated in the frequency domain and it is equipped with a perfectly matched layer (PML) absorbing boundary condition. The FD operator used in the FD-CSI method is only dependent on the background medium and the frequency of operation, thus it does not change throughout the inversion process. Therefore, at least for the two-dimensional (2D) configurations, where the size of the stiffness matrix is manageable, the FD stiffness matrix can be inverted using a non-iterative inversion matrix approach such as a Gauss elimination method for the sparse matrix. In this case, an LU decomposition needs to be done only once and can then be reused for multiple source positions and in successive iterations of the inversion. Numerical experiments show that this FD-CSI algorithm has an excellent performance for inverting inhomogeneous objects embedded in an inhomogeneous background medium
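
    The remark about reusing a single LU decomposition across source positions and inversion iterations can be illustrated with a sparse toy operator. Here a shifted 2-D Laplacian stands in for the real frequency-domain FD operator with PML; the grid size and shift are illustrative assumptions:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Toy stiffness matrix on an n x n grid (a damped/shifted Laplacian so
# the system is well conditioned; a real FD-CSI operator would encode
# the background medium, the frequency and a PML).
n = 50
k2 = 0.5
T = sp.diags([1, -2, 1], [-1, 0, 1], (n, n))
lap = sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))
A = (lap - k2 * sp.eye(n * n)).tocsc()

lu = splu(A)                              # factor once...
for src in range(4):                      # ...reuse for every source
    b = np.zeros(n * n)
    b[(src + 1) * n + n // 2] = 1.0       # point source, moved each time
    u = lu.solve(b)
    print(f"source {src}: field norm {np.linalg.norm(u):.4f}")
```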

  11. HOW DO FIRMS SOURCE EXTERNAL KNOWLEDGE FOR INNOVATION? ANALYSING EFFECTS OF DIFFERENT KNOWLEDGE SOURCING METHODS

    OpenAIRE

    KI H. KANG; JINA KANG

    2009-01-01

    In the era of "open innovation", external knowledge is a very important source for technology innovation. In this paper, we investigate the relationship between external knowledge and the performance of technology innovation. The effect of external knowledge on the performance of technology innovation can vary with different external knowledge sourcing methods. We identify three ways of external knowledge sourcing: information transfer from informal networks, R&D collaboration and technology acquisition.

  12. Generalized hybrid Monte Carlo - CMFD methods for fission source convergence

    International Nuclear Information System (INIS)

    Wolters, Emily R.; Larsen, Edward W.; Martin, William R.

    2011-01-01

    In this paper, we generalize the recently published 'CMFD-Accelerated Monte Carlo' method and present two new methods that reduce the statistical error in CMFD-Accelerated Monte Carlo. The CMFD-Accelerated Monte Carlo method uses Monte Carlo to estimate nonlinear functionals used in low-order CMFD equations for the eigenfunction and eigenvalue. The Monte Carlo fission source is then modified to match the resulting CMFD fission source in a 'feedback' procedure. The two proposed methods differ from CMFD-Accelerated Monte Carlo in the definition of the required nonlinear functionals, but they have identical CMFD equations. The proposed methods are compared with CMFD-Accelerated Monte Carlo on a high dominance ratio test problem. All hybrid methods converge the Monte Carlo fission source almost immediately, leading to a large reduction in the number of inactive cycles required. The proposed methods stabilize the fission source more efficiently than CMFD-Accelerated Monte Carlo, leading to a reduction in the number of active cycles required. Finally, as in CMFD-Accelerated Monte Carlo, the apparent variance of the eigenfunction is approximately equal to the real variance, so the real error is well-estimated from a single calculation. This is an advantage over standard Monte Carlo, in which the real error can be underestimated due to inter-cycle correlation. (author)

  13. Seismic tomography inversion in the case that sources and receivers are distributed out of a 2-D plane; Shingen jushinten ga nijigen heimennai ni nai baai no danseiha tomography kaiseki ni kansuru kosatsu

    Energy Technology Data Exchange (ETDEWEB)

    Yokota, T; Miyazaki, T [Geological Survey of Japan, Tsukuba (Japan); Rokugawa, S; Matsushima, J [The University of Tokyo, Tokyo (Japan). Faculty of Engineering; Ashida, Y [Kyoto University, Kyoto (Japan). Faculty of Engineering

    1996-10-01

    Seismic tomography inversion was studied for the case in which sources and receivers are not distributed on a 2-D plane. In tomography experiments, existing wells are generally used, and in such cases the sources and receivers are frequently not distributed on a 2-D plane. A 2.5-D analysis method combining a 2-D structure with 3-D ray-tracing was therefore developed. This method is characterized by the smaller memory required for the ray-tracing calculation and by using the same velocity-determination algorithm as the 2-D analysis method. Previous methods, in which the analysis is generally carried out by projecting sources and receivers onto an assumed 2-D plane, derive correct results in the case of constant velocity and straight rays, but incorrect results otherwise. Application of full 3-D tomography requires a large amount of memory and converges poorly because of the large number of parameters. The 2.5-D analysis method avoids these drawbacks. The method was applied to data obtained in the Ogiri area, Kagoshima prefecture. 5 refs., 3 figs., 2 tabs.

  14. Distribution and sources of particulate organic matter in the Indian monsoonal estuaries during monsoon

    Digital Repository Service at National Institute of Oceanography (India)

    Sarma, V.V.S.S.; Krishna, M.S.; Prasad, V.R.; Kumar, B.S.K.; Naidu, S.A.; Rao, G.D.; Viswanadham, R.; Sridevi, T.; Kumar, P.P.; Reddy, N.P.C.

    The distribution and sources of particulate organic carbon (POC) and nitrogen (PN) in 27 Indian estuaries were examined during the monsoon using the content and isotopic composition of carbon and nitrogen. Higher phytoplankton biomass was noticed...

  15. Measuring the plutonium distribution in fuel elements by the gamma scanning method

    International Nuclear Information System (INIS)

    Gorobets, A.K.; Leshchenko, Yu.I.; Semenov, A.L.

    1982-01-01

    An on-line system designed for measuring the Pu distribution along the length of fresh fuel elements with vibrocompacted UO2-PuO2 fuel rods by the γ-scanning method is described. An algorithm for processing the measurement results and the procedure for determining the calibration parameters necessary for valid signal separation by means of a two-channel analyzer and for evaluating the self-absorption effect are considered. The scanning unit of the device consists of two NaI(Tl) detectors simultaneously detecting γ-radiation from opposite sides of the measured fuel rod section. A cesium source with Eγ = 660 keV is used for fuel scanning. On the basis of the analysis of results obtained from studying BOR-60 experimental fuel elements with 400 mm long fuel rods by means of the described device, the conclusion is made that fuel element scanning during 20 min (scanning step 4 mm, measuring time at each step 10 s) makes it possible to determine the Pu distribution with an error of less than ±4% at a confidence probability of 0.68

  16. MIVOC Method at the mVINIS Ion Source

    International Nuclear Information System (INIS)

    Jovovic, J.; Cvetic, J.; Dobrosavljevic, A.; Nedeljkovic, T.; Draganic, I.

    2007-01-01

    We have used the well-known metal-ions-from-volatile-compounds (MIVOC) method with the mVINIS Ion Source to produce multiply charged ion beams from solid substances. Using this method, very intense, stable multiply charged ion beams of several solid substances with high melting points were obtained. The yields and spectrum of the multiply charged ion beams obtained from Hf are presented. A hafnium ion beam spectrum was recorded at an ECR ion source for the first time. We have utilized the multiply charged ion beams from solid substances to irradiate polymer, fullerene and glassy carbon samples at the channel for the modification of materials (L3A). (author)

  17. MIVOC method at the mVINIS ion source

    Directory of Open Access Journals (Sweden)

    Jovović Jovica

    2007-01-01

    Full Text Available Based on the metal-ions-from-volatile-compounds (MIVOC) method with the mVINIS ion source, we have produced multiply charged ion beams from solid substances. Highly intense, stable multiply charged ion beams of several solid substances with high melting points were extracted using this method. The spectrum of multiply charged ion beams obtained from the element hafnium is presented here. For the first time ever, hafnium ion beam spectra were recorded at an electron cyclotron resonance ion source. Multiply charged ion beams from solid substances were used to irradiate the polymer, fullerene and glassy carbon samples at the channel for the modification of materials.

  18. Multi-level methods and approximating distribution functions

    International Nuclear Information System (INIS)

    Wilson, D.; Baker, R. E.

    2016-01-01

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
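
    As context for the exact algorithms being accelerated, Gillespie's direct method fits in a few lines. The sketch below simulates a simple birth-death network with illustrative rate constants (a textbook example, not one taken from the paper):

```python
import random

def gillespie_birth_death(k_birth, k_death, x0, t_end, rng):
    """Gillespie's direct method for a birth-death process
    (0 --k_birth--> X, X --k_death--> 0). Returns the state at t_end."""
    t, x = 0.0, x0
    while True:
        a1, a2 = k_birth, k_death * x       # reaction propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)            # exponential time to next event
        if t > t_end:
            return x
        x += 1 if rng.random() < a1 / a0 else -1

rng = random.Random(42)
samples = [gillespie_birth_death(10.0, 1.0, 0, 20.0, rng)
           for _ in range(5000)]
# The stationary distribution is Poisson(k_birth / k_death) = Poisson(10),
# so the sample mean should be close to 10.
print("sample mean:", sum(samples) / len(samples))
```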

  19. Multi-level methods and approximating distribution functions

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E. [Mathematical Institute, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom)

    2016-07-15

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.

  20. Calculation of dose for β point and sphere sources in soft tissue

    International Nuclear Information System (INIS)

    Sun Fuyin; Yuan Shuyu; Tan Jian

    1999-01-01

    Objective: To compare the results of the distribution of dose rate calculated by three typical methods for point and sphere sources of β nuclides. Methods: The distributions of dose rate from 32P β point and sphere sources in soft tissue were calculated by the three methods published in references [1], [2] and [3], respectively, and compared. Results: For a point source of 3.7 × 10^7 Bq (1 mCi), the variations among the calculation results of the three formulas are within 10% if r ≤ 0.35 g/cm^2, r being the distance from the source, and larger than 10% if r > 0.35 g/cm^2. For a sphere source whose volume is 50 μl and activity is 3.7 × 10^7 Bq (1 mCi), the variations are within 10% if z ≤ 0.15 g/cm^2, z being the distance from the surface of the sphere source to a point outside the sphere. Conclusion: The agreement among the distributions of dose rate calculated by the three methods for point and sphere β sources is good if the distances from the point source or the surface of the sphere source to the points observed are small, and poor if they are large

  1. Spatial distribution, environmental risk and source of heavy metals in street dust from an industrial city in semi-arid area of China

    Directory of Open Access Journals (Sweden)

    Han Xiufeng

    2017-06-01

    Full Text Available Environmental risks associated with Co, Cr, Cu, Mn, Ni, Pb, V and Zn in street dust collected from Baotou, a medium-sized industrial city in a semi-arid area of northwest China, were assessed using the enrichment factor and the potential ecological risk index. The spatial distributions and sources of the metals in the dust were analyzed on the basis of geostatistical methods and multivariate statistical analysis, respectively. The results indicate that street dust in Baotou has elevated heavy metal concentrations, especially of Co, Cr, Cu, Pb and Zn. Co in the dust was significantly enriched. Cr and Pb showed moderate to significant enrichment. Cu and Zn showed minimal to moderate enrichment, whereas Mn, Ni and V showed deficient to minimal enrichment. The ecological risk levels of Co and Pb in the dust were moderate to considerable and low to moderate, respectively, whereas the other heavy metals studied presented low ecological risk. Different distribution patterns were found among the analyzed heavy metals. Three main sources of these heavy metals were identified: Cr, Mn, Ni and V originated from nature and industrial activities; Cu, Pb and Zn derived mainly from traffic sources; and Co was mainly from construction sources.

  2. PROGRAMMING OF METHODS FOR THE NEEDS OF LOGISTICS DISTRIBUTION SOLVING PROBLEMS

    Directory of Open Access Journals (Sweden)

    Andrea Štangová

    2014-06-01

    Full Text Available Logistics has become one of the dominant factors affecting the successful management, competitiveness and mentality of the global economy. Distribution logistics materializes the connection between production and the consumer market. It uses different methodologies and methods of multicriterial evaluation and allocation. This thesis addresses the problem of the costs of securing the distribution of a product. It was therefore relevant to design a software product that would be helpful in solving the problems related to distribution logistics. Elodis, an electronic distribution logistics program, was designed on the basis of a theoretical analysis of the issues of distribution logistics and an analysis of the software products market. The program uses multicriterial evaluation methods to determine the appropriate type, and mathematical and geometrical methods to determine an appropriate allocation, of the distribution center, warehouse and company.

  3. Identification of Potential Sources of Mercury (Hg) in Farmland Soil Using a Decision Tree Method in China.

    Science.gov (United States)

    Zhong, Taiyang; Chen, Dongmei; Zhang, Xiuying

    2016-11-09

    Identification of the sources of soil mercury (Hg) on the provincial scale is helpful for enacting effective policies to prevent further contamination and to take reclamation measures. The natural and anthropogenic sources of Hg in Chinese farmland soil, and their contributions, were identified based on a decision tree method. The results showed that the concentration of Hg in parent materials was most strongly associated with the general spatial distribution pattern of Hg concentration on the provincial scale. The decision tree analysis achieved an 89.70% total accuracy in simulating the influence of human activities on the additions of Hg in farmland soil. Human activities, for example the production of coke, application of fertilizers, discharge of wastewater, discharge of solid waste, and the production of non-ferrous metals, were the main external sources of a large amount of Hg in the farmland soil.
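
    A minimal sketch of the decision tree idea follows, with hypothetical site-level predictors and labels; the feature names and values are illustrative, not the variables or data used in the study:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical training data: one row per sampling site, with candidate
# predictors of anthropogenic Hg addition.
X = np.array([[120.,  5., 0.3],   # [fertilizer use, coke output, wastewater]
              [ 10.,  0., 0.1],
              [200., 30., 0.8],
              [ 15.,  1., 0.2],
              [180., 25., 0.9],
              [  8.,  0., 0.1]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = Hg elevated beyond parent material

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["fertilizer", "coke", "wastewater"]))
```

    The printed tree makes the attribution rules explicit, which is the interpretability property that motivates using decision trees for source identification.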

  4. Accuracy of Dual-Energy Virtual Monochromatic CT Numbers: Comparison between the Single-Source Projection-Based and Dual-Source Image-Based Methods.

    Science.gov (United States)

    Ueguchi, Takashi; Ogihara, Ryota; Yamada, Sachiko

    2018-03-21

    To investigate the accuracy of dual-energy virtual monochromatic computed tomography (CT) numbers obtained by two typical hardware and software implementations: the single-source projection-based method and the dual-source image-based method. A phantom with different tissue equivalent inserts was scanned with both single-source and dual-source scanners. A fast kVp-switching feature was used on the single-source scanner, whereas a tin filter was used on the dual-source scanner. Virtual monochromatic CT images of the phantom at energy levels of 60, 100, and 140 keV were obtained by both projection-based (on the single-source scanner) and image-based (on the dual-source scanner) methods. The accuracy of virtual monochromatic CT numbers for all inserts was assessed by comparing measured values to their corresponding true values. Linear regression analysis was performed to evaluate the dependency of measured CT numbers on tissue attenuation, method, and their interaction. Root mean square values of systematic error over all inserts at 60, 100, and 140 keV were approximately 53, 21, and 29 Hounsfield unit (HU) with the single-source projection-based method, and 46, 7, and 6 HU with the dual-source image-based method, respectively. Linear regression analysis revealed that the interaction between the attenuation and the method had a statistically significant effect on the measured CT numbers at 100 and 140 keV. There were attenuation-, method-, and energy level-dependent systematic errors in the measured virtual monochromatic CT numbers. CT number reproducibility was comparable between the two scanners, and CT numbers had better accuracy with the dual-source image-based method at 100 and 140 keV.

  5. A gamma-source method of measuring soil moisture

    International Nuclear Information System (INIS)

    Al-Jeboori, M.A.; Ameen, I.A.

    1986-01-01

    Water content in a soil column was measured using a NaI scintillation detector and 5 mCi of Cs-137 as a gamma source. The measurements were done with a backscatter gauge, restricted to scattering angles of less than π/2 to overcome the effect of soil type. A 3 cm air gap was maintained between the front of the detector and the wall of the soil container in order to increase the counting rate. The distance between the center of the source and the center of the backscattering detector was 14 cm. The accuracy of the measurements was 0.63. For comparison, a direct-ray method was used to measure the soil moisture; the results gave an error of 0.65. Results of the two methods were compared with the gravimetric method, which gave errors of 0.18 g/g and 0.17 g/g for the direct and backscatter methods, respectively. The quick direct method was used to determine the gravimetric and volumetric percentage constants, which were found to be 1.62 and 0.865, respectively. The method was then used to measure the water content in the layers of the soil column. (6 tabs., 4 figs., 12 refs.)

  6. SOILD: A computer model for calculating the effective dose equivalent from external exposure to distributed gamma sources in soil

    International Nuclear Information System (INIS)

    Chen, S.Y.; LePoire, D.; Yu, C.; Schafetz, S.; Mehta, P.

    1991-01-01

    The SOLID computer model was developed for calculating the effective dose equivalent from external exposure to distributed gamma sources in soil. It is designed to assess external doses under various exposure scenarios that may be encountered in environmental restoration programs. The model's four major functional features address (1) dose versus source depth in soil, (2) shielding by clean cover soil, (3) area of contamination, and (4) nonuniform distribution of sources. The model is also capable of adjusting doses when there are variations in soil densities for both source and cover soils. The model is supported by a database of approximately 500 radionuclides. 4 refs

  7. Computational methods for high-energy source shielding

    International Nuclear Information System (INIS)

    Armstrong, T.W.; Cloth, P.; Filges, D.

    1983-01-01

    The computational methods for high-energy radiation transport related to shielding of the SNQ spallation source are outlined. The basic approach is to couple radiation-transport computer codes which use Monte Carlo methods and discrete ordinates methods. A code system is suggested that incorporates state-of-the-art radiation-transport techniques. The stepwise verification of that system is briefly summarized. The complexity of the resulting code system suggests a more straightforward code specially tailored for thick-shield calculations. A short guideline for the future development of such a Monte Carlo code is given

  8. Disentangling the major source areas for an intense aerosol advection in the Central Mediterranean on the basis of Potential Source Contribution Function modeling of chemical and size distribution measurements

    Science.gov (United States)

    Petroselli, Chiara; Crocchianti, Stefano; Moroni, Beatrice; Castellini, Silvia; Selvaggi, Roberta; Nava, Silvia; Calzolai, Giulia; Lucarelli, Franco; Cappelletti, David

    2018-05-01

    In this paper, we combined a Potential Source Contribution Function (PSCF) analysis of daily chemical aerosol composition data with hourly aerosol size distributions, with the aim of disentangling the major source areas during a complex and rapidly modulating advection event impacting Central Italy in 2013. Chemical data include an ample set of metals obtained by Proton Induced X-ray Emission (PIXE), main soluble ions from ion chromatography, and elemental and organic carbon (EC, OC) obtained by thermo-optical measurements. Size distributions have been recorded with an optical particle counter for eight calibrated size classes in the 0.27-10 μm range. We demonstrated the usefulness of the approach by the positive identification of two very different source areas impacting during the transport event. In particular, biomass burning from Eastern Europe and desert dust from Saharan sources have been discriminated based on both chemistry and size distribution time evolution. Hourly back trajectories (BT) provided the best results in comparison with 6 h or 24 h based calculations.
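
    The PSCF statistic itself is simple: for each grid cell (i, j), PSCF_ij = m_ij / n_ij, where n counts all back-trajectory points falling in the cell and m counts those associated with polluted arrivals at the receptor. A sketch with synthetic trajectory points (illustrative data and grid, not the study's) follows:

```python
import numpy as np

def pscf(lons, lats, polluted, lon_edges, lat_edges, min_count=5):
    """Potential Source Contribution Function on a lon/lat grid:
    PSCF_ij = m_ij / n_ij, with poorly sampled cells masked out."""
    n, _, _ = np.histogram2d(lons, lats, bins=[lon_edges, lat_edges])
    m, _, _ = np.histogram2d(lons[polluted], lats[polluted],
                             bins=[lon_edges, lat_edges])
    out = np.full(n.shape, np.nan)
    ok = n >= min_count                 # suppress low-count cells
    out[ok] = m[ok] / n[ok]
    return out

# Illustrative call: trajectory points plus a boolean flag marking points
# from days when the receptor concentration exceeded its 75th percentile.
rng = np.random.default_rng(1)
lons, lats = rng.uniform(0, 30, 2000), rng.uniform(35, 55, 2000)
polluted = rng.random(2000) < 0.3
grid = pscf(lons, lats, polluted, np.arange(0, 31, 1.0), np.arange(35, 56, 1.0))
print("max PSCF:", np.nanmax(grid))
```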

  9. Empirical tests of Zipf's law mechanism in open source Linux distribution.

    Science.gov (United States)

    Maillart, T; Sornette, D; Spaeth, S; von Krogh, G

    2008-11-21

    Zipf's power law is a ubiquitous empirical regularity found in many systems, thought to result from proportional growth. Here, we establish empirically the usually assumed ingredients of stochastic growth models that have been previously conjectured to be at the origin of Zipf's law. We use exceptionally detailed data on the evolution of open source software projects in Linux distributions, which offer a remarkable example of a growing complex self-organizing adaptive system, exhibiting Zipf's law over four full decades.
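
    The mechanics of a basic rank-size estimate of the Zipf exponent can be sketched as follows; this is an ordinary least-squares fit on synthetic Pareto data, quoted for illustration only and not necessarily the estimator used by the authors:

```python
import numpy as np

def zipf_exponent(sizes):
    """Fit the rank-size relation size ~ rank^(-mu) on log-log axes.
    (OLS is shown for brevity; a maximum-likelihood fit is more
    rigorous for careful empirical work.)"""
    s = np.sort(np.asarray(sizes, dtype=float))[::-1]
    ranks = np.arange(1, len(s) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(s), 1)
    return -slope

# Illustrative check on synthetic Pareto-distributed "package sizes":
# a tail exponent of ~1 corresponds to the classic Zipf case mu ~ 1.
rng = np.random.default_rng(0)
sizes = rng.pareto(1.0, 10_000) + 1.0
print("estimated exponent:", round(zipf_exponent(sizes), 2))
```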

  10. Distribution and Sources of Black Carbon in the Arctic

    Science.gov (United States)

    Qi, Ling

    The Arctic is warming at twice the global rate over recent decades. To slow down this warming trend, there is growing interest in reducing the impact of short-lived climate forcers, such as black carbon (BC), because the benefits of mitigation are seen more quickly relative to CO2 reduction. To propose efficient mitigation policies, it is imperative to improve our understanding of the BC distribution in the Arctic and to identify its sources. In this dissertation, we investigate the sensitivity of BC in the Arctic, including BC concentrations in snow (BCsnow) and BC concentrations in air (BCair), to emissions, dry deposition and wet scavenging using the global 3-D chemical transport model (CTM) GEOS-Chem. By including flaring emissions, estimating the dry deposition velocity using the resistance-in-series method, and including the Wegener-Bergeron-Findeisen (WBF) process in wet scavenging, simulated BCsnow in the eight Arctic sub-regions agrees with the observations within a factor of two, and simulated BCair falls within the uncertainty range of the observations. Specifically, we find that natural gas flaring emissions in the Western Extreme North of Russia (WENR) strongly enhance BCsnow (by up to ~50%) and BCair (by 20-32%) during the snow season in the so-called 'Arctic front', but have negligible impact on BC in the free troposphere. The updated dry deposition velocity over snow and ice is much larger than those used in most global CTMs and agrees better with observation. The resulting BCsnow changes marginally because of the offsetting of higher dry and lower wet deposition fluxes. In contrast, surface BCair decreases strongly due to the faster dry deposition (by 27-68%). WBF occurs when the environmental vapor pressure is in between the saturation vapor pressures of ice crystals and water drops in mixed-phase clouds. As a result, water drops evaporate and release the BC particles in them back into the interstitial air. In most CTMs, WBF is either missing or represented by a uniform and low BC

  11. Analytic solution of field distribution and demagnetization function of ideal hollow cylindrical field source

    Science.gov (United States)

    Xu, Xiaonong; Lu, Dingwei; Xu, Xibin; Yu, Yang; Gu, Min

    2017-09-01

    The Halbach-type hollow cylindrical permanent magnet array (HCPMA) is a volume-compact and energy-conserving field source, which has attracted intense interest in many practical applications. Here, using the complex variable integration method based on the Biot-Savart law (including current distributions inside the body and on the surfaces of the magnet), we derive analytical field solutions for an ideal multipole HCPMA in the entire space, including the interior of the magnet. The analytic field expression inside the array material is used to construct an analytic demagnetization function, with which we can explain the origin of demagnetization phenomena in the HCPMA by taking into account an ideal magnetic hysteresis loop with finite coercivity. These analytical field expressions and demagnetization functions provide deeper insight into the nature of such permanent magnet array systems and offer guidance in designing optimized array systems.
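
    For orientation, the familiar limiting case of this family of field sources, the ideal dipolar (k = 2) Halbach cylinder, has a uniform interior flux density set only by the remanence B_r and the radius ratio. This textbook relation is quoted here for context and is not the paper's general multipole result:

```latex
% Ideal dipolar (k = 2) Halbach cylinder with inner radius r_i,
% outer radius r_o and remanence B_r:
B_{\text{inside}} = B_r \,\ln\!\left(\frac{r_o}{r_i}\right)
```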

  12. Distribution, sources and health risk assessment of mercury in kindergarten dust

    Science.gov (United States)

    Sun, Guangyi; Li, Zhonggen; Bi, Xiangyang; Chen, Yupeng; Lu, Shuangfang; Yuan, Xin

    2013-07-01

    Mercury (Hg) contamination in urban areas is a hot issue in environmental research. In this study, the distribution, sources and health risk of Hg in dust from 69 kindergartens in Wuhan, China, were investigated. In comparison with most other cities, the concentrations of total mercury (THg) and methylmercury (MeHg) were significantly elevated, ranging from 0.15 to 10.59 mg kg-1 and from 0.64 to 3.88 μg kg-1, respectively. Among the five different urban areas, the educational area had the highest concentrations of THg and MeHg. GIS mapping was used to identify the hot-spot areas and assess the potential pollution sources of Hg. The emissions of coal-fired power plants and coking plants were the main sources of THg in the dust, whereas the contributions of municipal solid waste (MSW) landfills and iron- and steel-smelting related industries were not significant. However, the emission of MSW landfills was considered to be an important source of MeHg in the studied area. The result of the health risk assessment indicated that kindergarten dust posed a high adverse health risk, in terms of Hg contamination, to children living in the educational area (hazard index HI = 6.89).

  13. Predicting volume of distribution with decision tree-based regression methods using predicted tissue:plasma partition coefficients.

    Science.gov (United States)

    Freitas, Alex A; Limbu, Kriti; Ghafourian, Taravat

    2015-01-01

    Volume of distribution is an important pharmacokinetic property that indicates the extent of a drug's distribution in the body tissues. This paper addresses the problem of how to estimate the apparent volume of distribution at steady state (Vss) of chemical compounds in the human body using decision tree-based regression methods from the area of data mining (or machine learning), and discusses the pros and cons of several different types of decision tree-based regression methods. The regression methods predict Vss using, as predictive features, both the compounds' molecular descriptors and the compounds' tissue:plasma partition coefficients (Kt:p), which are often used in physiologically-based pharmacokinetics. This work has therefore assessed whether the data mining-based prediction of Vss can be made more accurate by using as input not only the compounds' molecular descriptors but also (a subset of) their predicted Kt:p values. Comparison of the models that used only molecular descriptors, in particular the Bagging decision tree (mean fold error of 2.33), with those employing predicted Kt:p values in addition to the molecular descriptors, such as the Bagging decision tree using adipose Kt:p (mean fold error of 2.29), indicated that the use of predicted Kt:p values as descriptors may be beneficial for the accurate prediction of Vss using decision trees if prior feature selection is applied. The decision tree-based models presented in this work have an accuracy that is reasonable and similar to the accuracy of reported Vss inter-species extrapolations in the literature. The estimation of Vss for new compounds in drug discovery will benefit from methods that are able to integrate large and varied sources of data and from flexible non-linear data mining methods such as decision trees, which can produce interpretable models. Graphical Abstract: Decision trees for the prediction of tissue partition coefficients and the volume of distribution of drugs.
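
    A minimal sketch of the bagged-tree setup follows, with synthetic molecular descriptors and a predicted adipose Kt:p appended as an extra feature column, mirroring the paper's best-performing configuration. All values, shapes and the target are illustrative placeholders, not the paper's data set:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

# Hypothetical feature matrix: 10 molecular descriptors per compound
# (e.g. logP, polar surface area, ...) plus one predicted adipose Kt:p.
rng = np.random.default_rng(0)
descriptors = rng.standard_normal((200, 10))
kt_p_adipose = rng.lognormal(size=(200, 1))    # predicted, not measured
X = np.hstack([descriptors, kt_p_adipose])
log_vss = rng.standard_normal(200)             # placeholder target (log Vss)

# BaggingRegressor's default base learner is a decision tree, matching
# the "Bagging decision tree" named in the abstract.
model = BaggingRegressor(n_estimators=100, random_state=0).fit(X, log_vss)
print(model.predict(X[:5]))
```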

  14. Positron energy distributions from a hybrid positron source based on channeling radiation

    International Nuclear Information System (INIS)

    Azadegan, B.; Mahdipour, A.; Dabagov, S.B.; Wagner, W.

    2013-01-01

    A hybrid positron source which is based on the generation of channeling radiation by relativistic electrons channeled along different crystallographic planes and axes of a tungsten single crystal, and the subsequent conversion of the radiation into e+e− pairs in an amorphous tungsten target, is described. The photon spectra of channeling radiation are calculated using the Doyle–Turner approximation for the continuum potentials and the classical equations of motion for the channeled particles to obtain their trajectories, velocities and accelerations. The spectral-angular distributions of channeling radiation are found by applying classical electrodynamics. Finally, the conversion of the radiation into e+e− pairs and the energy distributions of the positrons are simulated using the GEANT4 package

  15. Source of spill ripple in the RF-KO slow-extraction method with FM and AM

    International Nuclear Information System (INIS)

    Noda, K.; Furukawa, T.; Shibuya, S.; Muramatsu, M.; Uesugi, T.; Kanazawa, M.; Torikoshi, M.; Takada, E.; Yamada, S.

    2002-01-01

    The RF-knockout (RF-KO) slow-extraction method with frequency modulation (FM) and amplitude modulation (AM) has brought high-accuracy irradiation to the treatment of cancer tumors that move with respiration, because of its quick response to beam start/stop. However, the beam spill extracted from a synchrotron ring through RF-KO slow-extraction has a large ripple with a frequency of around 1 kHz related to the FM. The spill ripple will disturb the lateral dose distribution in beam scanning methods. Thus, the source of the spill ripple has been investigated through experiments and simulations. There are two tune regions for the extraction process in the RF-KO method: the extraction region and the diffusion region. The particles in the extraction region can be extracted due to amplitude growth through the transverse RF field only when its frequency matches the tune in the extraction region. For a large chromaticity, however, the particles in the extraction region can be extracted through the synchrotron oscillation even when the frequency does not match the tune in the extraction region. Thus, the spill structure during one period of the FM strongly depends on the horizontal chromaticity. These structures are repeated with the repetition frequency of the FM, which is the very source of the spill ripple in the RF-KO method

  16. Evaluation of methods to leak test sealed radiation sources

    International Nuclear Information System (INIS)

    Arbeau, N.D.; Scott, C.K.

    1987-04-01

    The methods for the leak testing of sealed radiation sources were reviewed. One hundred and thirty-one equipment vendors were surveyed to identify commercially available leak test instruments. The equipment is summarized in tabular form by radiation type and detector type for easy reference. The radiation characteristics of the licensed sources were reviewed and summarized in a format that can be used to select the most suitable detection method. A test kit is proposed for use by inspectors when verifying a licensee's test procedures. The general elements of leak test procedures are discussed

  17. Simulations of a spectral gamma-ray logging tool response to a surface source distribution on the borehole wall

    International Nuclear Information System (INIS)

    Wilson, R.D.; Conaway, J.G.

    1991-01-01

    We have developed Monte Carlo and discrete ordinates simulation models for the large-detector spectral gamma-ray (SGR) logging tool in use at the Nevada Test Site. Application of the simulation models produced spectra for source layers on the borehole wall, either from potassium-bearing mudcakes or from plate-out of radon daughter products. Simulations show that the shape and magnitude of gamma-ray spectra from sources distributed on the borehole wall depend on radial position within the air-filled borehole as well as on hole diameter. No such dependence is observed for sources uniformly distributed in the formation. In addition, sources on the borehole wall produce anisotropic angular fluxes at the higher scattered energies and at the source energy. These differences in borehole effects and in angular flux are important to the process of correcting SGR logs for the presence of potassium mudcakes; they also suggest a technique for distinguishing between spectral contributions from formation sources and sources on the borehole wall. These results imply the existence of a standoff effect not present for spectra measured in air-filled boreholes from formation sources. 5 refs., 11 figs

  18. Isotopes, Inventories and Seasonality: Unraveling Methane Source Distribution in the Complex Landscapes of the United Kingdom.

    Science.gov (United States)

    Lowry, D.; Fisher, R. E.; Zazzeri, G.; Lanoisellé, M.; France, J.; Allen, G.; Nisbet, E. G.

    2017-12-01

    Unlike the big open landscapes of many continents, with large area sources dominated by one particular methane emission type that can be isotopically characterized by flight measurements and sampling, the complex patchwork of urban, fossil and agricultural methane sources across NW Europe requires detailed ground surveys for characterization (Zazzeri et al., 2017). Here we outline the findings from multiple seasonal urban and rural measurement campaigns in the United Kingdom. These surveys aim to: 1) assess source distribution and baseline in regions of planned fracking, and relate these to on-site continuous baseline climatology; 2) characterize spatial and seasonal differences in the isotopic signatures of the UNFCCC source categories; and 3) assess the spatial validity of the 1 × 1 km UK inventory for large continuous emitters, proposed point sources, and seasonal/ephemeral emissions. The UK inventory suggests that 90% of methane emissions come from 3 source categories: ruminants, landfill and gas distribution. Bag sampling and GC-IRMS δ13C analysis shows that landfill gives a constant signature of -57 ±3 ‰ throughout the year. Fugitive gas emissions are regionally consistent, depending on the North Sea supply regions feeding the network (-41 ± 2 ‰ in N England, -37 ± 2 ‰ in SE England). Ruminant, mostly cattle, emissions are far more complex, as the animals spend winters in barns and summers in fields, but are essentially a mix of 2 end members, breath at -68 ±3 ‰ and manure at -51 ±3 ‰, resulting in broad summer field emission plumes of -64 ‰ and point winter barn emission plumes of -58 ‰. The inventory correctly locates emission hotspots from landfill, larger sewage treatment plants and gas compressor stations, giving a broad overview of emission distribution for regional model validation. Mobile surveys are adding an extra layer of detail to this which, combined with isotopic characterization, has identified the spatial distribution of gas pipe leaks.
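
    The two end-member figures above amount to a simple isotopic mass balance. A minimal Python sketch of that calculation, using the δ13C values quoted in the abstract (the mixing fractions are illustrative):

      # Two end-member mass balance: delta_mix = f*delta_breath + (1 - f)*delta_manure
      def mix_delta13c(f_breath, d_breath=-68.0, d_manure=-51.0):
          """Return the d13C (per mil) of a breath/manure methane mixture."""
          return f_breath * d_breath + (1.0 - f_breath) * d_manure

      # A summer field plume near -64 per mil implies roughly 3/4 breath-derived methane:
      f = (-64.0 - (-51.0)) / (-68.0 - (-51.0))  # inverting the mixing equation
      print(f"breath fraction for -64 per mil: {f:.2f}")   # ~0.76
      print(f"check: {mix_delta13c(f):.1f} per mil")       # -64.0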

  19. Time-dependent anisotropic distributed source capability in the transient 3-D transport code TORT-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    The transient 3-D discrete ordinates transport code TORT-TD has been extended to account for time-dependent anisotropic distributed external sources. The extension aims at the simulation of the pulsed neutron source in the YALINA-Thermal subcritical assembly. Since feedback effects are not relevant in this zero-power configuration, this offers a unique opportunity to validate the time-dependent neutron kinetics of TORT-TD with experimental data. The extensions made in TORT-TD to incorporate a time-dependent anisotropic external source are described. The steady state of the YALINA-Thermal assembly and its response to an artificial square-wave source pulse sequence have been analysed with TORT-TD using pin-wise homogenised cross sections in 18 prompt energy groups with P1 scattering order and 8 delayed neutron groups. The results demonstrate the applicability of TORT-TD to subcritical problems with a time-dependent external source. (authors)

  20. Angular and mass resolved energy distribution measurements with a gallium liquid metal ion source

    International Nuclear Information System (INIS)

    Marriott, Philip

    1987-06-01

    Ionisation and energy broadening mechanisms relevant to liquid metal ion sources are discussed. A review of experimental results giving a picture of source operation, and a discussion of the emission mechanisms thought to occur for the ionic species and droplets emitted, is presented. Further work is suggested by this review, and an analysis system for angular and mass resolved energy distribution measurements of liquid metal ion source beams has been constructed. The energy analyser has been calibrated, and a series of measurements, both on and off the beam axis, of 69Ga+, Ga++ and Ga2+ ions emitted at various currents from a gallium source has been performed. A comparison is made between these results and published work where possible, and the results are discussed with the aim of determining the emission and energy spread mechanisms operating in the gallium liquid metal ion source. (author)

  1. Distributional patterns of arsenic concentrations in contaminant plumes offer clues to the source of arsenic in groundwater at landfills

    Science.gov (United States)

    Harte, Philip T.

    2015-01-01

    The distributional pattern of dissolved arsenic concentrations from landfill plumes can provide clues to the source of arsenic contamination. Under simple idealized conditions, arsenic concentrations along flow paths in aquifers proximal to a landfill will decrease under anthropogenic sources but potentially increase under in situ sources. This paper presents several conceptual distributional patterns of arsenic in groundwater based on the arsenic source under idealized conditions. An example of advanced subsurface mapping of dissolved arsenic with geophysical surveys, chemical monitoring, and redox fingerprinting is presented for a landfill site in New Hampshire with a complex flow pattern. Tools to assist in the mapping of arsenic in groundwater ultimately provide information on the source of contamination. Once an understanding of the arsenic contamination is achieved, appropriate remedial strategies can then be formulated.

  2. A Method for the Analysis of Information Use in Source-Based Writing

    Science.gov (United States)

    Sormunen, Eero; Heinstrom, Jannica; Romu, Leena; Turunen, Risto

    2012-01-01

    Introduction: Past research on source-based writing assignments has hesitated to scrutinize how students actually use information afforded by sources. This paper introduces a method for the analysis of text transformations from sources to texts composed. The method is aimed to serve scholars in building a more detailed understanding of how…

  3. Source preparations for alpha and beta measurements

    Energy Technology Data Exchange (ETDEWEB)

    Holm, E. [Risoe National Lab., Roskilde (Denmark)]

    2001-01-01

    For alpha-particle emitters of interest in environmental studies, electrodeposition or co-precipitation as fluorides are the most common methods. For electrodeposition, stainless steel is generally used as the cathode material, but other metals such as Ni, Ag, and Cu have also shown promising results. The use of anode materials other than platinum, such as graphite, should be investigated. For other purposes, such as optimal resolution, more sophisticated methods are used, often at the cost of poorer recovery. For beta-particle emitters, the type of detection system determines the source preparation. Methods similar to those for alpha-particle emitters, electrodeposition or precipitation techniques, can be used. Because the beta pulse-height distribution is a continuous energy distribution, high resolution is not required. Thicker sources from the precipitates, or a stable isotopic carrier, can be accepted, but corrections for absorption in the source must then be made. (au)

  4. QACD: A method for the quantitative assessment of compositional distribution in geologic materials

    Science.gov (United States)

    Loocke, M. P.; Lissenberg, J. C. J.; MacLeod, C. J.

    2017-12-01

    In order to fully understand the petrogenetic history of a rock, it is critical to obtain a thorough characterization of the chemical and textural relationships of its mineral constituents. Element mapping combines microanalytical techniques that allow the analysis of major and minor elements at high spatial resolution (e.g., electron microbeam analysis) with 2D mapping of samples, providing unprecedented detail regarding the growth histories and compositional distributions of minerals within a sample. We present a method for the acquisition and processing of large-area X-ray element maps obtained by energy-dispersive X-ray spectrometry (EDS) to produce a quantitative assessment of compositional distribution (QACD) of mineral populations within geologic materials. By optimizing the conditions at which the EDS X-ray element maps are acquired, we are able to obtain full thin-section quantitative element maps for most major elements in relatively short amounts of time. Such maps can be used not only to accurately identify all phases and calculate mineral modes for a sample (e.g., a petrographic thin section), but, critically, to enable a complete quantitative assessment of their compositions. The QACD method has been incorporated into a python-based, easy-to-use graphical user interface (GUI) called Quack. The Quack software facilitates the generation of mineral modes, element and molar-ratio maps and the quantification of full-sample compositional distributions. The open-source nature of the Quack software provides a versatile platform which can be easily adapted and modified to suit the needs of the user.
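
    The core of such a workflow, classifying co-registered element maps pixel by pixel and tallying modal percentages, can be sketched in a few lines of Python. The thresholds and phase rules below are invented for illustration and are not Quack's actual classification logic:

      import numpy as np

      rng = np.random.default_rng(0)
      mg = rng.random((100, 100))   # stand-ins for calibrated Mg and Ca count maps
      ca = rng.random((100, 100))

      phase = np.full(mg.shape, "other", dtype=object)
      phase[(mg > 0.6) & (ca < 0.4)] = "olivine"       # Mg-rich, Ca-poor (toy rule)
      phase[(ca > 0.6) & (mg < 0.4)] = "plagioclase"   # Ca-rich, Mg-poor (toy rule)

      labels, counts = np.unique(phase, return_counts=True)
      modes = dict(zip(labels, 100.0 * counts / phase.size))
      print(modes)  # area percent of each phase across the map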

  5. Methods to estimate distribution and range extent of grizzly bears in the Greater Yellowstone Ecosystem

    Science.gov (United States)

    Haroldson, Mark A.; Schwartz, Charles C.; Thompson, Daniel J.; Bjornlie, Daniel D.; Gunther, Kerry A.; Cain, Steven L.; Tyers, Daniel B.; Frey, Kevin L.; Aber, Bryan C.

    2014-01-01

    The distribution of the Greater Yellowstone Ecosystem grizzly bear (Ursus arctos) population has expanded into areas unoccupied since the early 20th century. Up-to-date information on the area and extent of this distribution is crucial for federal, state, and tribal wildlife and land managers to make informed decisions regarding grizzly bear management. The most recent estimate of grizzly bear distribution (2004) utilized fixed-kernel density estimators to describe distribution. This method was complex and computationally time consuming and excluded observations of unmarked bears. Our objective was to develop a technique to estimate grizzly bear distribution that would allow for the use of all verified grizzly bear location data, while being simple enough to be updated more frequently. We placed all verified grizzly bear locations from all sources from 1990 to 2004 and 1990 to 2010 onto a 3-km × 3-km grid and used zonal analysis and ordinary kriging to develop a predicted surface of grizzly bear distribution. We compared the area and extent of the 2004 kriging surface with the previous 2004 effort and evaluated changes in grizzly bear distribution from 2004 to 2010. The 2004 kriging surface was 2.4% smaller than the previous fixed-kernel estimate, but more closely represented the data. Grizzly bear distribution increased 38.3% from 2004 to 2010, with most expansion in the northern and southern regions of the range. This technique can be used to provide a current estimate of grizzly bear distribution for management and conservation applications.
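
    A minimal sketch of the grid-and-krige step, assuming the PyKrige package and toy cell values in place of the verified bear locations (the study's actual GIS workflow is not specified beyond zonal analysis and ordinary kriging):

      import numpy as np
      from pykrige.ok import OrdinaryKriging  # assumes PyKrige is installed

      # Toy 3-km cell centers (km) and occupancy values (1 = verified location in cell).
      x = np.array([0.0, 3.0, 6.0, 9.0, 3.0, 6.0])
      y = np.array([0.0, 0.0, 3.0, 6.0, 6.0, 9.0])
      z = np.array([1.0, 1.0, 0.0, 1.0, 0.0, 1.0])

      ok = OrdinaryKriging(x, y, z, variogram_model="linear")
      gridx = np.arange(0.0, 9.1, 3.0)
      gridy = np.arange(0.0, 9.1, 3.0)
      zhat, var = ok.execute("grid", gridx, gridy)  # predicted occupancy surface
      print(np.asarray(zhat).round(2))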

  6. Quasi-homogeneous partial coherent source modeling of multimode optical fiber output using the elementary source method

    Science.gov (United States)

    Fathy, Alaa; Sabry, Yasser M.; Khalil, Diaa A.

    2017-10-01

    Multimode fibers (MMF) have many applications in illumination, spectroscopy, sensing and even in optical communication systems. In this work, we present a model for the MMF output field that treats the fiber end as a quasi-homogeneous source. The fiber end is modeled by a group of partially coherent elementary sources, spatially shifted and uncorrelated with each other. The elementary source distribution is derived from the far-field intensity measurement, while the weighting function of the sources is derived from the fiber-end intensity measurement. The model is compared with practical measurements for fibers with different core/cladding diameters, at different propagation distances, and for different input excitations: laser, white light and LED. The obtained results show a normalized root mean square error of less than 8% in the intensity profile in most cases, even when the fiber end surface is not perfectly cleaved. The comparison with the Gaussian-Schell model results also shows better agreement with the measurements. In addition, the complex degree of coherence derived from the model results is compared with the theoretical predictions of the modified van Cittert-Zernike equation, showing very good agreement, which strongly supports the assumption that the large-core MMF can be considered a quasi-homogeneous source.

  7. Methods of assessing grain-size distribution during grain growth

    DEFF Research Database (Denmark)

    Tweed, Cherry J.; Hansen, Niels; Ralph, Brian

    1985-01-01

    This paper considers methods of obtaining grain-size distributions and ways of describing them. In order to collect statistically useful amounts of data, an automatic image analyzer is used, and the resulting data are subjected to a series of tests that evaluate the differences between two related distributions (before and after grain growth). The distributions are measured from two-dimensional sections, and both the data and the corresponding true three-dimensional grain-size distributions (obtained by stereological analysis) are collected. The techniques described here are illustrated by reference...
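
    The abstract does not name the statistical tests used; one standard choice for comparing two measured size distributions is the two-sample Kolmogorov-Smirnov test, sketched here with synthetic lognormal grain diameters:

      import numpy as np
      from scipy.stats import ks_2samp

      rng = np.random.default_rng(1)
      before = rng.lognormal(mean=1.0, sigma=0.4, size=500)  # grain diameters, um
      after = rng.lognormal(mean=1.2, sigma=0.5, size=500)   # after grain growth

      stat, p = ks_2samp(before, after)
      print(f"KS statistic {stat:.3f}, p-value {p:.2e}")  # small p: distributions differ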

  8. The effect of the volumetric heat source distribution of the fuel pellet on the minimum DNBR ratio

    International Nuclear Information System (INIS)

    Hordosy, G.; Kereszturi, A.; Maroti, L.; Trosztel, I.

    1995-01-01

    The radial power distribution in a VVER-440 type fuel assembly is strongly non-uniform as a result of the water-gap between the shrouds and the moderator filled central tube. Consequently, it can be expected that the power density inside a single fuel rod is inhomogeneous, as well. In the paper the methodology and the results of coupled thermohydraulic and neutronic calculations are presented. The objective of the analysis was the investigation of the heat source distribution and the determination of the possible extent of the power non-uniformity in a corner rod which has always the highest peaking factor in a VVER-440 type assembly. The results of the analysis revealed that there can be a strong non-uniformity of power distribution inside a fuel pellet, and the effect depends first of all on the general assembly conditions, while the local subchannel parameters have only a slight influence on the pellet heat source distribution. (author)

  9. Spatial distribution and sources of heavy metals in natural pasture soil around copper-molybdenum mine in Northeast China.

    Science.gov (United States)

    Wang, Zhiqiang; Hong, Chen; Xing, Yi; Wang, Kang; Li, Yifei; Feng, Lihui; Ma, Silu

    2018-06-15

    The characterization of the content and sources of heavy metals is essential to assess the potential threat of metals to human health. The present study collected 140 topsoil samples around a Cu-Mo mine (Wunugetushan, China) and investigated the concentrations and spatial distribution patterns of Cr, Ni, Zn, Cu, Mo and Cd in soil using multivariate and geostatistical analytical methods. Results indicated that the average concentrations of the six heavy metals, especially Cu and Mo, were clearly higher than the local background values. Correlation analysis and principal component analysis divided these metals into three groups: Cr and Ni, Cu and Mo, and Zn and Cd. Meanwhile, the spatial distribution maps indicated that Cr and Ni in soil had no notable anthropogenic inputs and were mainly controlled by natural factors, because their spatial maps exhibited non-point-source contamination. The concentrations of Cu and Mo gradually decreased with distance from the mine area, suggesting that mining activities may be crucial in the spreading of contaminants. Soil contamination with Zn was associated with livestock manure produced from grazing. In addition, the environmental risk of heavy metal pollution was assessed by the geo-accumulation index. All the results revealed that the spatial distribution of heavy metals in soil was in agreement with local human activities. Investigating and identifying the origin of heavy metals in pasture soil will lay the foundation for taking effective measures to preserve soil from the long-term accumulation of heavy metals.
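
    The grouping step is standard multivariate practice. A minimal scikit-learn sketch, with a random matrix standing in for the 140-sample by 6-metal data set:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(2)
      X = rng.lognormal(size=(140, 6))  # columns: Cr, Ni, Zn, Cu, Mo, Cd

      Xs = StandardScaler().fit_transform(X)   # standardize before PCA
      pca = PCA(n_components=3).fit(Xs)
      print(pca.explained_variance_ratio_.round(2))
      print(pca.components_.round(2))  # metals loading together suggest a shared source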

  10. EVALUATING THE NOVEL METHODS ON SPECIES DISTRIBUTION MODELING IN COMPLEX FOREST

    Directory of Open Access Journals (Sweden)

    C. H. Tu

    2012-07-01

    The prediction of species distributions has become a focus in ecology. To obtain predictions more effectively and accurately, some novel methods have been proposed recently, such as support vector machines (SVM) and maximum entropy (MAXENT). However, high complexity in the forest, like that in Taiwan, makes the modeling even harder. In this study, we aim to explore which method is more applicable to species distribution modeling in complex forest. Castanopsis carlesii (long-leaf chinkapin, LLC), which grows widely in Taiwan, was chosen as the target species because its seeds are an important food source for animals. We overlaid the tree samples on layers of altitude, slope, aspect, terrain position, and a vegetation index derived from SPOT-5 images, and developed three models, MAXENT, SVM, and decision tree (DT), to predict the potential habitat of LLC. We evaluated these models using two sets of independent samples from different sites and examined the effect of forest complexity by changing the background sample size (BSZ). In the less complex case (small BSZ), the accuracies of the SVM (kappa = 0.87) and DT (0.86) models were slightly higher than that of MAXENT (0.84). In the more complex case (large BSZ), MAXENT kept a high kappa value (0.85), whereas the SVM (0.61) and DT (0.57) models dropped significantly because they limited the predicted habitat to areas close to the samples. Therefore, the MAXENT model was more applicable for predicting a species' potential habitat in complex forest, whereas the SVM and DT models tended to underestimate the potential habitat of LLC.
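
    Agreement in such comparisons is scored with Cohen's kappa. A minimal sketch of the evaluation step with scikit-learn (toy presence/absence labels, not the study's validation data):

      from sklearn.metrics import cohen_kappa_score

      observed  = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]  # independent validation sites
      predicted = [1, 1, 0, 1, 1, 0, 1, 0, 0, 1]  # model output at those sites
      print(f"kappa = {cohen_kappa_score(observed, predicted):.2f}")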

  11. Analyzed method for calculating the distribution of electrostatic field

    International Nuclear Information System (INIS)

    Lai, W.

    1981-01-01

    An analytical method for calculating the distribution of the electrostatic field under any given axial gradient in tandem accelerators is described. This method achieves satisfactory accuracy compared with the results of numerical calculation.

  12. OpenMC In Situ Source Convergence Detection

    Energy Technology Data Exchange (ETDEWEB)

    Aldrich, Garrett Allen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Univ. of California, Davis, CA (United States); Dutta, Soumya [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); The Ohio State Univ., Columbus, OH (United States); Woodring, Jonathan Lee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-07

    We designed and implemented an in situ version of particle source convergence detection for the OpenMC particle transport simulator. OpenMC is a Monte Carlo-based particle simulator for neutron criticality calculations. For the transport simulation to be accurate, source particles must converge on a spatial distribution. Typically, convergence is obtained by iterating the simulation for a user-settable, fixed number of steps, and it is assumed that convergence is achieved. We instead implement a method to detect convergence, using a stochastic oscillator to identify convergence of the source particles based on their accumulated Shannon entropy. Using our in situ convergence detection, we are able to detect the point of convergence and begin tallying results for the full simulation once the proper source distribution has been confirmed. Our method ensures that the simulation is started neither too early, through a user setting overly optimistic parameters, nor too late, through overly conservative ones.
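
    The monitored quantity is the Shannon entropy of fission-source sites binned on a spatial mesh. A minimal sketch of the idea (the windowed stopping rule below is illustrative, not the paper's stochastic-oscillator detector):

      import numpy as np

      def shannon_entropy(site_counts):
          """Entropy (bits) of source sites binned on a spatial mesh."""
          p = site_counts / site_counts.sum()
          p = p[p > 0]
          return -(p * np.log2(p)).sum()

      def converged(entropy_trace, window=20, tol=1e-3):
          """Declare convergence when the windowed mean entropy stops drifting."""
          if len(entropy_trace) < 2 * window:
              return False
          recent = np.mean(entropy_trace[-window:])
          previous = np.mean(entropy_trace[-2 * window:-window])
          return abs(recent - previous) < tol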

  13. The P1-approximation for the Distribution of Neutrons from a Pulsed Source in Hydrogen

    International Nuclear Information System (INIS)

    Claesson, A.

    1963-12-01

    The asymptotic distribution of neutrons from a pulsed, high-energy source in an infinite moderator was obtained earlier in a 'diffusion' approximation. In that paper the cross section was assumed constant over the whole energy region, and the time derivative of the first moment was disregarded. Here, an analytic expression is first obtained for the density in a P1 approximation. The result, however, is very complicated, and it is shown that an asymptotic solution can be found in a simpler way. By taking into account the low hydrogen scattering cross section at the source energy, it follows that the space dependence of the distribution is weaker than that obtained earlier. The importance of keeping the time derivative of the first moment is further shown in a perturbation approximation

  14. Regulatory actions to expand the offer of distributed generation from renewable energy sources in Brazil

    International Nuclear Information System (INIS)

    Pepitone da Nóbrega, André; Cabral Carvalho, Carlos Eduardo

    2015-01-01

    The composition of the Brazilian electric energy matrix has undergone transformations in recent years. However, it has still maintained significant participation of renewable energy sources, in particular hydropower plants of various magnitudes. Reasons for the growth of other renewable sources of energy, such as wind and solar, include the fact that the remaining hydropower capacity is mainly located in the Amazon, far from the centers of consumption, the necessity of diversifying the energy mix and reducing dependence on hydrologic regimes, the increase in environmental restrictions, and rising civil construction and land costs. Wind power generation has grown most significantly in Brazil. Positive results in the latest energy auctions show that wind power generation has reached competitive pricing. Solar energy is still incipient in Brazil, despite its high potential for conversion into electric energy; this energy source enters the Brazilian electric energy matrix mainly through solar power plants and distributed generation. Biomass thermal plants, mainly those that use sugar cane bagasse, also have an important role in renewable generation in Brazil. This paper aims to present an overview of the present situation and discuss the actions and regulations to expand the offer of renewable distributed generation in Brazil, mainly from wind, solar and biomass energy sources. (full text)

  15. THE ADVANTAGES OF THE DISTRIBUTION FUNCTION AS A METHOD OF GRAPHICAL REPRESENTATION OF THE ECONOMIC STRUCTURE OF SOCIETY

    Directory of Open Access Journals (Sweden)

    V. A. Kapitanov

    2018-01-01

    The aim of the paper is to compare three different methods of graphical representation of inequality: frequency polygons, Lorenz curves and distribution functions. It is shown that for the representation of real (i.e. incomplete) data, the last is the most appropriate. The method of investigation consists in verifying whether each method of graphical representation of inequality meets the following three requirements: (1) insensitivity of the method to the quantization of the data; (2) sensitivity to the width of the entire range of income, from zero to the income of the richest person, given that information about the wealthy members of society might be incomplete; (3) visibility, meaning that the curve describing the inequality must have characteristic points (extremes, bends) so that it can be identified, and that features of the economic structure of society are reflected in the qualitative behavior of the curves. This last requirement stems from the need to infer, from the form of the curve, the mechanism of the movement of goods in society that produced it. The work analyzed direct data on the incomes of Russian citizens published by ROSSTAT (Federal State Statistics Service), Forbes magazine and the Federal Tax Service; indirect data on incomes inferred from the distribution of car prices (from two independent sources) and real estate; and data from the Credit Suisse Research Institute on property inequality in Russia. The main conclusions are as follows. The course of the curves that characterize the real distribution of the population by income suggests that there is only one mechanism for the movement of goods in society: a mechanism of rank exchange, in which the interaction of rich and poor economic agents is characterized by a shift in market prices in favor of the rich, the more so the more resources the latter have. The frequency polygons (and therefore the histograms) do not

  16. Preparation of protactinium measurement source by electroplating method

    International Nuclear Information System (INIS)

    Li Zongwei; Yang Weifan; Fang Keming; Yuan Shuanggui; Guo Junsheng; Pan Qiangyan

    1998-01-01

    An electroplating method for the preparation of Pa sources is described, and the main factors influencing the electroplating of Pa (such as the pH value of the solution, the electroplating time and the current density) are tested and discussed with 233Pa as a tracer. A thin and uniform electroplated Pa layer of 1 mg/cm2 on a thin stainless steel disk was obtained. The Pa source was measured by an HPGe detector to determine the chemical efficiency

  17. Calculating Error Percentage in Using Water Phantom Instead of Soft Tissue Concerning 103Pd Brachytherapy Source Distribution via Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    OL Ahmadi

    2015-12-01

    Introduction: 103Pd is a low-energy source used in brachytherapy. According to the standards of the American Association of Physicists in Medicine, determining the dosimetric parameters of brachytherapy sources before clinical application is considered significantly important. Therefore, the present study aimed to compare the dosimetric parameters of the source in a water phantom and in soft tissue. Methods: Following the TG-43U1 protocol, the dosimetric parameters around the 103Pd source were compared between a water phantom with a density of 0.998 g/cm3 and soft tissue with a density of 1.04 g/cm3, on the longitudinal and transverse axes, using the MCNP4C code, and the relative differences between the two conditions were computed. Results: The simulations indicated that the radial dose function and the anisotropy function in the water phantom agree well with those in soft tissue up to a distance of 1.5 cm from the source. With increasing distance the difference grows, reaching 4% at 6 cm from the source. Conclusions: The soft-tissue results compared with the water-phantom results show a 4% relative difference at a distance of 6 cm from the source. Therefore, water-phantom results can be used in practical applications instead of soft tissue with a maximum error of 4%, and the distance-dependent differences obtained with the soft-tissue phantom can be corrected for.

  18. Fast optical source for quantum key distribution based on semiconductor optical amplifiers.

    Science.gov (United States)

    Jofre, M; Gardelein, A; Anzolin, G; Amaya, W; Capmany, J; Ursin, R; Peñate, L; Lopez, D; San Juan, J L; Carrasco, J A; Garcia, F; Torcal-Milla, F J; Sanchez-Brea, L M; Bernabeu, E; Perdigues, J M; Jennewein, T; Torres, J P; Mitchell, M W; Pruneri, V

    2011-02-28

    A novel integrated optical source capable of emitting faint pulses with different polarization states and with different intensity levels at 100 MHz has been developed. The source relies on a single laser diode followed by four semiconductor optical amplifiers and thin-film polarizers, connected through a fiber network. The use of a single laser ensures a high level of indistinguishability in time and spectrum for the pulses of the four different polarizations and three different intensity levels. The applicability of the source is demonstrated in the lab through a free-space quantum key distribution experiment using the decoy-state BB84 protocol. We achieved a lower-bound secure key rate of the order of 3.64 Mbps and a quantum bit error ratio as low as 1.14×10⁻², while the lower-bound secure key rate became 187 bps for an equivalent attenuation of 35 dB. To our knowledge, this is the fastest polarization-encoded QKD system reported so far. The performance, reduced size, low power consumption and the fact that the components used can be space qualified make the source particularly suitable for secure satellite communication.

  19. Development of source term evaluation method for Korean Next Generation Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Keon Jae; Cheong, Jae Hak; Park, Jin Baek; Kim, Guk Gee [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-10-15

    In its earlier phase, this project investigated several design features of the radioactive waste processing system, methods to predict nuclide concentrations in the primary coolant, the basic concept of the next-generation reactor, and its safety goals. In the present phase, several source term prediction methods are evaluated together. The detailed contents are: evaluation of models for nuclide concentrations in the reactor coolant system; evaluation of the primary and secondary coolant concentrations of the reference nuclear power plant (NPP); investigation of the parameters used in source term evaluation, namely basic PWR parameters and operational parameters; the radionuclide removal systems and adjustment values of the reference NPP; and a suggested source term prediction method for the next-generation NPP.

  20. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    Science.gov (United States)

    Yeh, Pen-Shu (Inventor)

    1998-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
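
    A minimal numeric sketch of the double-difference idea (toy integer samples in the two-band framing; not the patented implementation):

      import numpy as np

      band1 = np.array([10, 12, 15, 14, 13], dtype=np.int64)
      band2 = np.array([11, 14, 18, 16, 14], dtype=np.int64)

      cross_delta = band2 - band1          # cross-delta removes band-to-band correlation
      double_diff = np.diff(cross_delta)   # adjacent-delta of the cross-delta
      print(double_diff)                   # small residuals, cheap to entropy-code

      # Lossless recovery: rebuild the cross-delta, then add band1 back.
      rebuilt = np.concatenate(([cross_delta[0]],
                                cross_delta[0] + np.cumsum(double_diff)))
      assert np.array_equal(rebuilt + band1, band2)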

  1. Size distribution and sources of humic-like substances in particulate matter at an urban site during winter.

    Science.gov (United States)

    Park, Seungshik; Son, Se-Chang

    2016-01-01

    This study investigates the size distribution and possible sources of humic-like substances (HULIS) in ambient aerosol particles collected at an urban site in Gwangju, Korea during the winter of 2015. A total of 10 sets of size-segregated aerosol samples were collected using a 10-stage Micro-Orifice Uniform Deposit Impactor (MOUDI), and the samples were analyzed to determine the mass as well as the presence of ionic species (Na(+), NH4(+), K(+), Ca(2+), Mg(2+), Cl(-), NO3(-), and SO4(2-)), water-soluble organic carbon (WSOC) and HULIS. The separation and quantification of the size-resolved HULIS components from the MOUDI samples was accomplished using a Hydrophilic-Lipophilic Balanced (HLB) solid phase extraction method and a total organic carbon analyzer, respectively. The entire sampling period was divided into two periods: non-Asian dust (NAD) and Asian dust (AD) periods. The contributions of water-soluble organic mass (WSOM = 1.9 × WSOC) and HULIS (=1.9 × HULIS-C) to fine particles (PM1.8) were approximately two times higher in the NAD samples (23.2 and 8.0%) than in the AD samples (12.8 and 4.2%). However, the HULIS-C/WSOC ratio in PM1.8 showed little difference between the NAD (0.35 ± 0.07) and AD (0.35 ± 0.05) samples. The HULIS exhibited a unimodal size distribution (mode at 0.55 μm) during NAD and a bimodal distribution (modes at 0.32 and 1.8 μm) during AD, which was quite similar to the mass size distributions of particulate matter, WSOC, NO3(-), SO4(2-), and NH4(+) in both the NAD and AD samples. The size distribution characteristics and the results of the correlation analyses indicate that the sources of HULIS varied according to the particle size. In the fine mode (≤1.8 μm), the HULIS composition during the NAD period was strongly associated with secondary organic aerosol (SOA) formation processes similar to those of secondary ionic species (cloud processing and/or heterogeneous reactions) and primary emissions during the biomass burning period, and during

  2. Methods of forming single source precursors, methods of forming polymeric single source precursors, and single source precursors and intermediate products formed by such methods

    Science.gov (United States)

    Fox, Robert V.; Rodriguez, Rene G.; Pak, Joshua J.; Sun, Chivin; Margulieux, Kelsey R.; Holland, Andrew W.

    2012-12-04

    Methods of forming single source precursors (SSPs) include forming intermediate products having the empirical formula ½{L₂N(μ-X)₂M′X₂}₂, and reacting MER with the intermediate products to form SSPs of the formula L₂N(μ-ER)₂M′(ER)₂, wherein L is a Lewis base, M is a Group IA atom, N is a Group IB atom, M′ is a Group IIIB atom, each E is a Group VIB atom, each X is a Group VIIA atom or a nitrate group, and each R group is an alkyl, aryl, vinyl, (per)fluoro alkyl, (per)fluoro aryl, silane, or carbamato group. Methods of forming polymeric or copolymeric SSPs include reacting at least one of HE¹R¹E¹H and MER with one or more substances having the empirical formula L₂N(μ-ER)₂M′(ER)₂ or L₂N(μ-X)₂M′(X)₂ to form a polymeric or copolymeric SSP. New SSPs and intermediate products are formed by such methods.

  3. Thermal Analysis of a Cracked Half-plane under Moving Point Heat Source

    Directory of Open Access Journals (Sweden)

    He Kuanfang

    2017-09-01

    The heat conduction in a half-plane with an insulated crack subjected to a moving point heat source is investigated. The analytical solution and numerical means are combined to analyze the transient temperature distribution of a cracked half-plane under a moving point heat source. The transient temperature distribution of the half-plane structure under the moving point heat source is first obtained by the moving-coordinate method; the heat conduction equation with the thermal boundary condition of an insulated crack face is then reduced to a singular integral equation by applying Fourier transforms, and solved numerically. Numerical examples of the temperature distribution on the cracked half-plane structure under a moving point heat source are presented and discussed in detail.
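
    For orientation, the crack-free limit of this problem has a classical closed form. A sketch of Rosenthal's quasi-steady temperature field for a point source of power Q moving at speed v over a half-space (background only, not the paper's singular-integral solution; the material constants are illustrative):

      import numpy as np

      def rosenthal(xi, y, z, Q=1000.0, v=0.01, k=50.0, alpha=1.4e-5, T0=293.0):
          """T(xi, y, z) in K for a moving point source; xi = x - v*t (source frame).

          Q in W, v in m/s, k in W/(m K), alpha in m^2/s.
          """
          R = np.sqrt(xi**2 + y**2 + z**2)
          return T0 + Q / (2.0 * np.pi * k * R) * np.exp(-v * (R + xi) / (2.0 * alpha))

      xi = np.linspace(-0.05, 0.02, 8)  # behind and ahead of the source (m)
      print(rosenthal(xi, y=0.005, z=0.0).round(1))  # steeper gradient ahead of the source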

  4. Effect of tissue inhomogeneity on dose distribution of point sources of low-energy electrons

    International Nuclear Information System (INIS)

    Kwok, C.S.; Bialobzyski, P.J.; Yu, S.K.; Prestwich, W.V.

    1990-01-01

    Perturbation in dose distributions of point sources of low-energy electrons at planar interfaces of cortical bone (CB) and red marrow (RM) was investigated experimentally and by the Monte Carlo codes EGS and the TIGER series. Ultrathin LiF thermoluminescent dosimeters were used to measure the dose distributions of point sources of 204Tl and 147Pm in RM. When the point sources were at 12 mg/cm2 from a planar interface of CB- and RM-equivalent plastics, dose enhancement ratios in RM averaged over the region 0-12 mg/cm2 from the interface were measured to be 1.08±0.03 (SE) and 1.03±0.03 (SE) for 204Tl and 147Pm, respectively. The Monte Carlo codes predicted 1.05±0.02 and 1.01±0.02 for the two nuclides, respectively. However, EGS gave a consistently 3% higher dose in the dose-scoring region than the TIGER series when point sources of monoenergetic electrons up to 0.75 MeV were considered, both in the homogeneous RM situation and in the CB-RM heterogeneous situation. By means of the TIGER series, it was demonstrated that aluminum, which is normally assumed to be equivalent to CB in radiation dosimetry, leads to an overestimation of the backscattering of low-energy electrons in soft tissue at a CB-soft-tissue interface by as much as a factor of 2

  5. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Barni Mauro

    2007-01-01

    This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits significant potential in many application fields, until now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity.

  6. Community shift of biofilms developed in a full-scale drinking water distribution system switching from different water sources.

    Science.gov (United States)

    Li, Weiying; Wang, Feng; Zhang, Junpeng; Qiao, Yu; Xu, Chen; Liu, Yao; Qian, Lin; Li, Wenming; Dong, Bingzhi

    2016-02-15

    The bacterial communities of biofilms in drinking water distribution systems (DWDS) with various water sources have rarely been reported. In this research, biofilms were sampled at three points (A, B, and C) during the river water source phase (phase I), the interim period (phase II) and the reservoir water source phase (phase III), and the biofilm community was determined using the 454-pyrosequencing method. Results showed that microbial diversity declined in phase II but increased in phase III. The primary phylum was Proteobacteria during all three phases, and the dominant class at points A and B was Betaproteobacteria (>49%) throughout; at point C, however, it changed to Holophagae in phase II (62.7%) and Actinobacteria in phase III (35.6%), which was closely related to the water quality there. A more remarkable community shift was found at the genus level. In addition, the analysis showed that water quality parameters could jointly and significantly affect microbial diversity, while the nutrient composition (e.g. the C/N ratio) of the water environment might determine the microbial community. Furthermore, Mycobacterium spp. and Pseudomonas spp. were detected in the biofilm, which warrants attention. This study revealed that water source switching had a substantial impact on the biofilm community.

  7. System and Method for Monitoring Distributed Asset Data

    Science.gov (United States)

    Gorinevsky, Dimitry (Inventor)

    2015-01-01

    A computer-based monitoring system, and a monitoring method implemented in computer software, for detecting, estimating, and reporting the condition states, their changes, and anomalies for many assets. The assets are of the same type, are operated over a period of time, and are outfitted with data collection systems. The proposed monitoring method accounts for the variability of working conditions for each asset by using a regression model that characterizes asset performance. The assets are of the same type but not identical. The proposed monitoring method accounts for asset-to-asset variability; it also accounts for drifts and trends in the asset condition and data. The proposed monitoring system can perform distributed processing of massive amounts of historical data without discarding any useful information, in cases where moving all the asset data into one central computing system might be infeasible. The overall processing includes distributed preprocessing of data records from each asset to produce compressed data.

  8. Development and application of the controlled source audio-frequency magnetotelluric (CSAMT) method. Results of experiments at the Akagi, Sakurajima and Kuju volcanoes

    Energy Technology Data Exchange (ETDEWEB)

    Kusunoki, Ken' ichiro; Suzuki, Koichi

    1988-03-01

    The Central Research Institute of the Electric Power Industry has carried out prospecting experiments in various places with the magnetotelluric (MT) method, which employs natural electromagnetic waves, and has confirmed the effectiveness of the method in estimating the location of faults, the distribution of rock bodies, and the structure of geothermal sources. With improved accuracy, the MT method, well suited to approximate prospecting over wide areas, was expected to become useful for determining detailed geothermal structures directly under prospective geothermal well sites. Improving the accuracy required increasing the variety and intensity of the electromagnetic waves. Consequently, we developed the first domestic unit for the controlled source audio-frequency magnetotelluric (CSAMT) method. The unit, which generates electromagnetic waves artificially, is useful for prospecting underground structures. Fundamental experiments on the transmission and reception of electromagnetic waves were carried out in preparation for full-scale prospecting; the structures of volcanoes were then surveyed, yielding the thickness distribution of shirasu layers and the heat transfer route from magma reservoirs up to the ground surface.

  9. The frequency-independent control method for distributed generation systems

    DEFF Research Database (Denmark)

    Naderi, Siamak; Pouresmaeil, Edris; Gao, Wenzhong David

    2012-01-01

    In this paper a novel frequency-independent control method suitable for distributed generation (DG) is presented. This strategy is derived based on the abc/αβ and abc/dq transformations of the ac system variables. The active and reactive currents injected by the DG are controlled...
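
    For reference, the two coordinate transformations the strategy is built on can be sketched directly (amplitude-invariant Clarke convention; sign and scaling conventions vary):

      import numpy as np

      def abc_to_alphabeta(a, b, c):
          """Clarke transform: three-phase quantities to the stationary alpha-beta frame."""
          alpha = (2.0 * a - b - c) / 3.0
          beta = (b - c) / np.sqrt(3.0)
          return alpha, beta

      def alphabeta_to_dq(alpha, beta, theta):
          """Park rotation by the grid angle theta into the synchronous dq frame."""
          d = alpha * np.cos(theta) + beta * np.sin(theta)
          q = -alpha * np.sin(theta) + beta * np.cos(theta)
          return d, q

      w, t = 2.0 * np.pi * 50.0, 0.001  # 50 Hz balanced set
      a, b, c = np.cos(w*t), np.cos(w*t - 2*np.pi/3), np.cos(w*t + 2*np.pi/3)
      print(alphabeta_to_dq(*abc_to_alphabeta(a, b, c), theta=w*t))  # ~(1.0, 0.0)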

  10. Studies on the method of producing radiographic 170Tm source

    International Nuclear Information System (INIS)

    Maeda, Sho

    1976-08-01

    A method of producing radiographic 170Tm sources has been studied, including target preparation, neutron irradiation, handling of the irradiated target in the hot cell, and source capsules. On the basis of the results, practical 170Tm radiographic sources (29-49 Ci, with pellets 3 mm in diameter and 3 mm long) were produced on a trial basis by neutron irradiation in the JMTR. (auth.)

  11. Size distributions of micro-bubbles generated by a pressurized dissolution method

    Science.gov (United States)

    Taya, C.; Maeda, Y.; Hosokawa, S.; Tomiyama, A.; Ito, Y.

    2012-03-01

    The size of micro-bubbles is widely distributed, in the range of one to several hundred micrometers, and depends on the generation method, flow conditions and elapsed time after bubble generation. Although a size distribution of micro-bubbles should be taken into account to improve accuracy in numerical simulations of flows with micro-bubbles, the variety of size distributions makes it difficult to introduce them in simulations. On the other hand, several models, such as the Rosin-Rammler equation and the Nukiyama-Tanasawa equation, have been proposed to represent the size distribution of particles or droplets. The applicability of these models to the size distribution of micro-bubbles has not been examined yet. In this study, we therefore measure the size distribution of micro-bubbles generated by a pressurized dissolution method using phase Doppler anemometry (PDA), and investigate the applicability of the available models to the measured size distributions. The experimental apparatus consists of a pressurized tank, in which air is dissolved in liquid under high pressure, a decompression nozzle, in which micro-bubbles are generated due to the pressure reduction, a rectangular duct and an upper tank. Experiments are conducted for several liquid volumetric fluxes in the decompression nozzle. Measurements are carried out in the downstream region of the decompression nozzle and in the upper tank. The experimental results indicate that (1) the Nukiyama-Tanasawa equation represents the size distribution of micro-bubbles generated by the pressurized dissolution method well, whereas the Rosin-Rammler equation fails in this representation, (2) the bubble size distribution can be evaluated by using the Nukiyama-Tanasawa equation without individual bubble diameters, when the mean bubble diameter and the skewness of the distribution are given, and (3) an evaluation method of visibility based on the bubble size distribution and bubble
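
    The two candidate models are simple closed forms; a sketch with illustrative (not fitted) parameters:

      import numpy as np

      def rosin_rammler_cdf(d, d_e=50.0, n=2.0):
          """Cumulative fraction of bubbles smaller than diameter d (um)."""
          return 1.0 - np.exp(-(d / d_e) ** n)

      def nukiyama_tanasawa_pdf(d, a=1.0, b=0.05, p=2.0, q=1.0):
          """Unnormalized number density n(d) = a d^p exp(-b d^q)."""
          return a * d**p * np.exp(-b * d**q)

      d = np.linspace(1.0, 200.0, 5)
      print(rosin_rammler_cdf(d).round(3))
      print(nukiyama_tanasawa_pdf(d).round(1))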

  12. Combination of acoustical radiosity and the image source method

    DEFF Research Database (Denmark)

    Koutsouris, Georgios I; Brunskog, Jonas; Jeong, Cheol-Ho

    2013-01-01

    A combined model for room acoustic predictions is developed, aiming to treat both diffuse and specular reflections in a unified way. Two established methods are incorporated: acoustical radiosity, accounting for the diffuse part, and the image source method, accounting for the specular part...

  13. Fundamental-mode sources in approach to critical experiments

    International Nuclear Information System (INIS)

    Goda, J.; Busch, R.

    2000-01-01

    An equivalent fundamental-mode source is an imaginary source that is distributed identically in space, energy, and angle to the fundamental-mode fission source. Therefore, it produces the same neutron multiplication as the fundamental-mode fission source. Even if two source distributions produce the same number of spontaneous fission neutrons, they will not necessarily contribute equally toward the multiplication of a given system. A method of comparing the relative importance of source distributions is needed. A factor, denoted g* and defined as the ratio of the fixed-source multiplication to the fundamental-mode multiplication, is used to convert a given source strength to its equivalent fundamental-mode source strength. This factor is of interest to criticality safety as it relates to the 1/M method of approach to critical. Ideally, a plot of 1/M versus k_eff is linear. However, since 1/M = (1 - k_eff)/g*, the plot will be linear only if g* is constant with k_eff. When g* increases with k_eff, the 1/M plot is said to be conservative because the critical mass is underestimated. However, it is possible for g* to decrease with k_eff, yielding a nonconservative 1/M plot. A better understanding of g* would help predict whether a given approach to critical will be conservative or nonconservative. The equivalent fundamental-mode source strength g*S can be determined by experiment. The experimental method was tested on the XIX-1 core of the Fast Critical Assembly at the Japan Atomic Energy Research Institute. The results showed a 30% difference between measured and calculated values. However, the XIX-1 core had a significant population of intermediate-energy neutrons, whose presence may have made the cross-section set used for the predicted values less than ideal for this system
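
    The dependence of the 1/M plot on g* follows directly from the quoted relation. A small numeric sketch (the g* trends are invented to illustrate the conservative and nonconservative cases):

      import numpy as np

      keff = np.linspace(0.80, 0.99, 5)
      g_profiles = {
          "constant": np.full_like(keff, 1.0),
          "rising": np.linspace(1.0, 1.5, keff.size),    # g* grows toward critical
          "falling": np.linspace(1.0, 0.7, keff.size),   # g* shrinks toward critical
      }
      for label, g in g_profiles.items():
          inv_m = (1.0 - keff) / g  # 1/M = (1 - k_eff)/g*
          print(label, inv_m.round(4))
      # Per the abstract: a rising g* gives a conservative 1/M plot (critical mass
      # underestimated); a falling g* gives a nonconservative one.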

  14. Method of triggering the vacuum arc in source with a resistor

    International Nuclear Information System (INIS)

    Zheng Le; Lan Zhaohui; Long Jidong; Peng Yufei; Li Jie; Yang Zhen; Dong Pan; Shi Jinshui

    2014-01-01

    Background: The metal vapor vacuum arc (MEVVA) ion source is a common source that provides a strong metal ion flow. To trigger this ion source, a high-voltage trigger pulse generator and a high-voltage isolation pulse transformer are needed, which makes the power supply system complex. Purpose: To simplify the power supply system, a trigger method using a resistor was introduced, and some characteristics of this method were studied. Methods: The ion flow provided at different main arc currents was measured, as well as the trigger current. The main arc current and the ion current were recorded for different trigger resistances. Results: Experimental results showed that, within a certain range of resistances, the larger the resistance value, the more difficult it was to successfully trigger the source, and the rising edge of the main arc became slower as the trigger time increased. However, increasing the resistance value had hardly any impact on the intensity of the ion flow finally extracted. The ion flow became stronger with increasing main arc current. Conclusion: The power supply system of the ion source is simplified by using the trigger method with a resistor. Only a suitable resistor is needed to complete the conversion from trigger to main arc initiation. (authors)

  15. Radiation source reconstruction with known geometry and materials using the adjoint

    International Nuclear Information System (INIS)

    Hykes, Joshua M.; Azmy, Yousry Y.

    2011-01-01

    We present a method to estimate an unknown isotropic source distribution, in space and energy, using detector measurements when the geometry and material composition are known. The estimated source distribution minimizes the difference between the measured and computed responses of detectors located at a selected number of points within the domain. In typical methods, a forward flux calculation is performed for each source guess in an iterative process. In contrast, we use the adjoint flux to compute the responses. Potential applications of the proposed method include determining the distribution of radio-contaminants following a nuclear event, monitoring the flow of radioactive fluids in pipes to determine hold-up locations, and retroactive reconstruction of radiation fields using workers' detectors' readings. After presenting the method, we describe a numerical test problem to demonstrate the preliminary viability of the method. As expected, using the adjoint flux reduces the number of transport solves to be proportional to the number of detector measurements, in contrast to methods using the forward flux that require a typically larger number proportional to the number of spatial mesh cells. (author)
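
    In linear-algebra terms, each detector reading is an inner product of that detector's adjoint flux with the unknown source, so the reconstruction reduces to a least-squares fit. A toy sketch with a random response matrix standing in for the computed adjoint fluxes:

      import numpy as np

      rng = np.random.default_rng(3)
      A = rng.random((25, 10))        # A[i, j]: response of detector i to a unit source in bin j
      s_true = np.zeros(10)
      s_true[[2, 7]] = [2.0, 1.0]     # two hot spots
      m = A @ s_true                  # simulated detector measurements

      s_hat, *_ = np.linalg.lstsq(A, m, rcond=None)  # minimize ||A s - m||
      print(np.clip(s_hat, 0.0, None).round(2))      # recovers bins 2 and 7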

  16. Path-integral method for the source apportionment of photochemical pollutants

    Science.gov (United States)

    Dunker, A. M.

    2015-06-01

    A new, path-integral method is presented for apportioning the concentrations of pollutants predicted by a photochemical model to emissions from different sources. A novel feature of the method is that it can apportion the difference in a species concentration between two simulations. For example, the anthropogenic ozone increment, which is the difference between a simulation with all emissions present and another simulation with only the background (e.g., biogenic) emissions included, can be allocated to the anthropogenic emission sources. The method is based on an existing, exact mathematical equation. This equation is applied to relate the concentration difference between simulations to line or path integrals of first-order sensitivity coefficients. The sensitivities describe the effects of changing the emissions and are accurately calculated by the decoupled direct method. The path represents a continuous variation of emissions between the two simulations, and each path can be viewed as a separate emission-control strategy. The method does not require auxiliary assumptions, e.g., whether ozone formation is limited by the availability of volatile organic compounds (VOCs) or nitrogen oxides (NOx), and can be used for all the species predicted by the model. A simplified configuration of the Comprehensive Air Quality Model with Extensions (CAMx) is used to evaluate the accuracy of different numerical integration procedures and the dependence of the source contributions on the path. A Gauss-Legendre formula using three or four points along the path gives good accuracy for apportioning the anthropogenic increments of ozone, nitrogen dioxide, formaldehyde, and nitric acid. Source contributions to these increments were obtained for paths representing proportional control of all anthropogenic emissions together, control of NOx emissions before VOC emissions, and control of VOC emissions before NOx emissions. There are similarities in the source contributions from the
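
    The quadrature step can be sketched directly: integrate the path-parameterized sensitivity with a few-point Gauss-Legendre rule. The sensitivity function below is made up; in the paper it comes from the decoupled direct method:

      import numpy as np

      def sensitivity(t):
          """Toy dC/dt along the emission path, t in [0, 1]."""
          return 3.0 + 2.0 * t - 1.5 * t**2

      nodes, weights = np.polynomial.legendre.leggauss(4)  # 4-point rule on [-1, 1]
      t = 0.5 * (nodes + 1.0)                              # map nodes to [0, 1]
      increment = 0.5 * np.dot(weights, sensitivity(t))
      print(f"apportioned increment: {increment:.4f}")     # exact value: 3.5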

  17. Computerized method for X-ray angular distribution simulation in radiological systems

    International Nuclear Information System (INIS)

    Marques, Marcio A.; Oliveira, Henrique J.Q. de; Frere, Annie F.; Schiabel, Homero; Marques, Paulo M.A.

    1996-01-01

    A method to simulate the changes in the X-ray angular distribution (the Heel effect) in radiologic imaging systems is presented. This simulation method is designed to predict images for any exposure technique, considering that this distribution is the cause of the intensity variation along the radiation field

  18. An improvement of source-jerk method for measuring high antireactivities of reactor systems

    Energy Technology Data Exchange (ETDEWEB)

    Bosevski, T; Spiric, V [Institute of nuclear sciences Boris Kidric, Vinca, Beograd (Yugoslavia)

    1965-10-15

    In this paper the well-known source-jerk method /1/ is modified, yielding a method for the experimental determination of negative reactivities of reactor systems: retaining the basic idea of the source-jerk method, a new experimental procedure and analysis were developed. The analysis and numerical preparation allow direct application of the method to heavy water and graphite systems. Compared with the source-jerk method, the experimental procedure and the interpretation of results are faster, simpler and more exact (author)

  19. Advanced airflow distribution methods for reduction of personal exposure to indoor pollutants

    DEFF Research Database (Denmark)

    Cao, Guangyu; Kosonen, Risto; Melikov, Arsen

    2016-01-01

    The main objective of this study is to identify airflow distribution methods that can protect occupants from exposure to various indoor pollutants. The increasing exposure of occupants to indoor pollutants shows that there is an urgent need to develop advanced airflow distribution methods to reduce that exposure. This article presents some of the latest developments in advanced airflow distribution methods for reducing indoor exposure in various types of buildings.

  20. A method for statistically comparing spatial distribution maps

    Directory of Open Access Journals (Sweden)

    Reynolds Mary G

    2009-01-01

    Abstract. Background: Ecological niche modeling is a method for estimating species distributions based on certain ecological parameters. Thus far, empirical determination of significant differences between independently generated distribution maps for a single species (maps created through equivalent processes, but with different ecological input parameters) has been challenging. Results: We describe a method for comparing model outcomes which allows a statistical evaluation of whether the strength of prediction and breadth of predicted areas are measurably different between projected distributions. To create ecological niche models for statistical comparison, we utilized GARP (Genetic Algorithm for Rule-Set Production) software to generate ecological niche models of human monkeypox in Africa. We created several models, keeping the case location input records constant for each model but varying the ecological input data. In order to assess the relative importance of each ecological parameter included in the development of the individual predicted distributions, we performed pixel-to-pixel comparisons between model outcomes and calculated the mean difference in pixel scores. We used a two-sample Student's t-test (taking as the null hypothesis that both maps were identical to each other regardless of which input parameters were used) to examine whether the mean difference in corresponding pixel scores from one map to another was greater than would be expected by chance alone. We also utilized weighted kappa statistics, frequency distributions, and percent difference to look at the disparities in pixel scores. Multiple independent statistical tests indicated precipitation as the single most important independent ecological parameter in the niche model for human monkeypox disease. Conclusion: In addition to improving our understanding of the natural factors influencing the distribution of human monkeypox disease, such pixel-to-pixel comparison
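    For readers who want to reproduce the comparison step, the Python sketch below performs a pixel-to-pixel comparison of two hypothetical prediction rasters and runs the Student's t-test on corresponding pixel scores (implemented here as a paired test); the GARP models, weighted kappa and frequency-distribution analyses of the study are not reproduced.

```python
import numpy as np
from scipy import stats

# map_a and map_b are hypothetical prediction rasters (pixel scores in [0, 1]).
rng = np.random.default_rng(0)
map_a = rng.random((100, 100))
map_b = np.clip(map_a + rng.normal(0.02, 0.05, (100, 100)), 0, 1)

diff = (map_a - map_b).ravel()
print("mean pixel difference:", diff.mean())

# Null hypothesis: both maps are identical, i.e. the mean of the
# pixel-wise differences is zero.
t_stat, p_value = stats.ttest_rel(map_a.ravel(), map_b.ravel())
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
```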

  1. On Road Study of Colorado Front Range Greenhouse Gases Distribution and Sources

    Science.gov (United States)

    Petron, G.; Hirsch, A.; Trainer, M. K.; Karion, A.; Kofler, J.; Sweeney, C.; Andrews, A.; Kolodzey, W.; Miller, B. R.; Miller, L.; Montzka, S. A.; Kitzis, D. R.; Patrick, L.; Frost, G. J.; Ryerson, T. B.; Robers, J. M.; Tans, P.

    2008-12-01

    The Global Monitoring Division and Chemical Sciences Division of the NOAA Earth System Research Laboratory teamed up over the summer of 2008 to experiment with a new measurement strategy for characterizing greenhouse gas distributions and sources in the Colorado Front Range. Combining expertise in greenhouse gas measurements and in local- to regional-scale air quality intensive campaigns, we built the 'Hybrid Lab'. A continuous CO2 and CH4 cavity ring-down spectroscopic analyzer (Picarro, Inc.), a CO gas-filter correlation instrument (Thermo Environmental, Inc.) and a continuous UV absorption ozone monitor (2B Technologies, Inc., model 202SC) were installed securely onboard a 2006 Toyota Prius hybrid vehicle, with an inlet bringing in outside air from a few meters above the ground. To better characterize point and distributed sources, air samples were taken with a Portable Flask Package (PFP) for later multiple-species analysis in the lab. A GPS unit hooked up to the ozone analyzer and another installed on the PFP kept track of our location, allowing us to map measured concentrations onto the driving route using Google Earth. The Hybrid Lab went out for several drives in the vicinity of the NOAA Boulder Atmospheric Observatory (BAO) tall tower located in Erie, CO, covering areas from Boulder, Denver, Longmont, Fort Collins and Greeley. Enhancements in CO2 and CO and destruction of ozone mainly reflect emissions from traffic. Methane enhancements, however, are clearly correlated with nearby point sources (landfill, feedlot, natural gas compressor ...) or with larger-scale air masses advected from NE Colorado, where oil and gas drilling operations are widespread. The multiple-species analysis (hydrocarbons, CFCs, HFCs) of the air samples collected along the way brings insightful information about the methane sources at play. We will present results of the analysis and interpretation of the Hybrid Lab Front Range Study and conclude with perspectives

  2. Agent paradigm and services technology for distributed Information Sources

    Directory of Open Access Journals (Sweden)

    Hakima Mellah

    2011-10-01

    The complexity of information arises from interacting information sources (IS), and could be better exploited with respect to the relevance of information. In a distributed IS system, relevant information has content that is connected with other contents in the information network, and is used for a certain purpose. The key point of the proposed model is its contribution to information system agility according to a three-dimensional view involving the content, the use and the structure. This reflects the relevance of information complexity and of effective methodologies that manage the complexity through the self-organization principle. This contribution focuses primarily on presenting some factors that lead to and trigger self-organization in a Service Oriented Architecture (SOA), and on how a self-organization mechanism can be integrated into it.

  3. A New High-Precision Correction Method of Temperature Distribution in Model Stellar Atmospheres

    Directory of Open Access Journals (Sweden)

    Sapar A.

    2013-06-01

    The main features of the temperature correction methods suggested and used in the modeling of plane-parallel stellar atmospheres are discussed, and the main features of the new method are described. A derivation is presented of the formulae for a version of the Unsöld-Lucy method used by us in the SMART (Stellar Model Atmospheres and Radiative Transport) software for modeling stellar atmospheres. The method corrects the model temperature distribution by minimizing the differences of the flux from its accepted constant value and by requiring the absence of a flux gradient, meaning that local source and sink terms of radiation must be equal. The final relative flux constancy obtainable by the method with the SMART code turned out to have a precision of the order of 0.5%. Some of the rapidly converging iteration steps can be useful before starting the high-precision model correction. Corrections of both the flux value and its gradient, as in the Unsöld-Lucy method, are unavoidably needed to obtain high-precision flux constancy. A new temperature correction method to obtain high-precision flux constancy for plane-parallel LTE model stellar atmospheres is proposed and studied. The non-linear optimization is carried out by least squares, in which the Levenberg-Marquardt correction method and thereafter an additional correction by a Broyden iteration loop were applied. Small finite differences of temperature (δT/T = 10⁻³) are used in the computations. A single Jacobian step appears to be mostly sufficient to get flux constancy of the order of 10⁻² %. The dual numbers and their generalization, the dual complex numbers (the duplex numbers), make it possible to obtain the derivatives automatically in the nilpotent part of the dual numbers. A version of the SMART software is being refactored to dual and duplex numbers, which makes it possible to get rid of the finite differences as an additional source of lowering precision of the
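    The dual-number idea mentioned at the end of the record can be illustrated in a few lines. The Python sketch below is a minimal, hypothetical implementation (unrelated to the SMART code): for a dual number a + b·ε with ε² = 0, evaluating f(a + 1·ε) yields f(a) + f′(a)·ε, so the derivative appears in the nilpotent part with no finite differences.

```python
# Minimal dual-number class: only + and * are needed for polynomials.
class Dual:
    def __init__(self, re, eps=0.0):
        self.re, self.eps = re, eps

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.re + o.re, self.eps + o.eps)

    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0
        return Dual(self.re * o.re, self.re * o.eps + self.eps * o.re)

    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x   # f'(x) = 6x + 2

y = f(Dual(2.0, 1.0))          # seed the nilpotent part with 1
print(y.re, y.eps)             # 16.0 14.0  ->  f(2), f'(2)
```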

  4. Mathematical methods linear algebra normed spaces distributions integration

    CERN Document Server

    Korevaar, Jacob

    1968-01-01

    Mathematical Methods, Volume I: Linear Algebra, Normed Spaces, Distributions, Integration focuses on advanced mathematical tools used in applications and the basic concepts of algebra, normed spaces, integration, and distributions.The publication first offers information on algebraic theory of vector spaces and introduction to functional analysis. Discussions focus on linear transformations and functionals, rectangular matrices, systems of linear equations, eigenvalue problems, use of eigenvectors and generalized eigenvectors in the representation of linear operators, metric and normed vector

  5. PVIS-4, Pressure vessel irradiation, source preparation

    International Nuclear Information System (INIS)

    Wasastjerna, Frej

    2003-01-01

    1 - Description of program or function: The program prepares a fixed neutron source distribution in radial, (r,θ), (r,z) or (r,θ,z) geometry for ANISN, DORT or TORT. The user can input the source distribution in some relatively compact form (typically a few variables defining the spectrum, 10 values for the axial source distribution and, for the horizontal distribution, the values at the center and corners of each of the outermost fuel bundles and the average value for each interior bundle). The program then creates the required source arrays, such as 96*, 97* and 98* arrays for DORT. 2 - Methods: Each required operation is performed by a separate module (a set of subprograms). HORIHX or HORISQ takes a source distribution in the transverse plane, given at the center and corners of each fuel bundle in hexagonal or square geometry, and transforms it into (r,θ) or radial geometry. In the latter case, the output distribution may be averaged in the azimuthal direction or azimuthal maxima may be obtained. FOUR takes an axial distribution, specified as a histogram, and approximates this with a Fourier series. This is then used to obtain a histogram distribution for a different axial mesh. SQPIN takes a 3-D pin-wise distribution in square geometry and transforms it into a radial, (r,θ), (r,z) or (r,θ,z) distribution. FISPEC calculates a group-wise energy spectrum from any of several different functional forms. Several components with different forms may be combined into one spectrum. COMBI combines the space and energy distributions prepared by the other modules and presents the result in a format appropriate for the SN programs in the DOORS system. 3 - Restrictions on the complexity of the problem: At present a reactor core of hexagonal fuel bundles must have 30- or 60-degree symmetry, a core of square bundles must have 45- or 90-degree symmetry (except if the sq-pin option is used). Other core geometries are not supported for the input distribution. Only cylindrical
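    The role of the FOUR module (approximating an axial histogram with a Fourier series and re-averaging it onto a different mesh) can be sketched as follows. This Python fragment is a minimal stand-in with a hypothetical 10-bin input distribution, not the actual PVIS-4 code.

```python
import numpy as np

# Hypothetical 10-bin axial source histogram on [0, 1].
z_edges = np.linspace(0.0, 1.0, 11)
hist = np.array([0.2, 0.5, 0.8, 1.0, 1.1, 1.1, 1.0, 0.8, 0.5, 0.2])

z_mid = 0.5 * (z_edges[:-1] + z_edges[1:])
n_terms = 4
# Least-squares fit of hist(z) ~ sum_k c_k * sin(k * pi * z), k = 1..4
basis = np.sin(np.pi * np.outer(np.arange(1, n_terms + 1), z_mid)).T
coef, *_ = np.linalg.lstsq(basis, hist, rcond=None)

def series(z):
    return sum(c * np.sin((k + 1) * np.pi * z) for k, c in enumerate(coef))

# Re-average the fitted series onto a finer 25-bin axial mesh.
new_edges = np.linspace(0.0, 1.0, 26)
new_hist = [np.mean(series(np.linspace(a, b, 20)))
            for a, b in zip(new_edges[:-1], new_edges[1:])]
print(np.round(new_hist, 3))
```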

  6. Distributed optimization for systems design : an augmented Lagrangian coordination method

    NARCIS (Netherlands)

    Tosserams, S.

    2008-01-01

    This thesis presents a coordination method for the distributed design optimization of engineering systems. The design of advanced engineering systems such as aircrafts, automated distribution centers, and microelectromechanical systems (MEMS) involves multiple components that together realize the

  7. Confidence Limits for the Indirect Effect: Distribution of the Product and Resampling Methods

    Science.gov (United States)

    MacKinnon, David P.; Lockwood, Chondra M.; Williams, Jason

    2010-01-01

    The most commonly used method to test an indirect effect is to divide the estimate of the indirect effect by its standard error and compare the resulting z statistic with a critical value from the standard normal distribution. Confidence limits for the indirect effect are also typically based on critical values from the standard normal distribution. This article uses a simulation study to demonstrate that confidence limits are imbalanced because the distribution of the indirect effect is normal only in special cases. Two alternatives for improving the performance of confidence limits for the indirect effect are evaluated: (a) a method based on the distribution of the product of two normal random variables, and (b) resampling methods. In Study 1, confidence limits based on the distribution of the product are more accurate than methods based on an assumed normal distribution but confidence limits are still imbalanced. Study 2 demonstrates that more accurate confidence limits are obtained using resampling methods, with the bias-corrected bootstrap the best method overall. PMID:20157642
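    As a concrete illustration of the resampling approach the study recommends, the Python sketch below computes a percentile bootstrap confidence interval for the indirect effect a·b on simulated data; the bias-corrected adjustment that performed best in the study is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)          # mediator
y = 0.3 * m + 0.1 * x + rng.normal(size=n)

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]            # slope of m on x
    b = np.linalg.lstsq(np.column_stack([m, x, np.ones_like(x)]),
                        y, rcond=None)[0][0]   # slope of y on m, adjusting for x
    return a * b

boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)           # resample cases with replacement
    boot[i] = indirect(x[idx], m[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```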

  8. Development of source term evaluation method for Korean Next Generation Reactor(III)

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Geon Jae; Park, Jin Baek; Lee, Yeong Il; Song, Min Cheonl; Lee, Ho Jin [Korea Advanced Institue of Science and Technology, Taejon (Korea, Republic of)

    1998-06-15

    This project investigated the irradiation characteristics of MOX fuel, methods to predict nuclide concentrations in the primary and secondary coolant of a core containing 100% MOX fuel, and the development of a source term evaluation tool. In this study, several source term prediction methods are evaluated. The detailed contents of this project are: an evaluation of models for nuclide concentrations in the reactor coolant system, an evaluation of the primary and secondary coolant concentrations of a reference nuclear power plant using purely MOX fuel, and a suggested source term prediction method for an NPP with a core using MOX fuel.

  9. Micro-seismic imaging using a source function independent full waveform inversion method

    Science.gov (United States)

    Wang, Hanchen; Alkhalifah, Tariq

    2018-03-01

    At the heart of micro-seismic event measurement is the task of estimating the location of micro-seismic source events, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional micro-seismic source locating methods require, in many cases, manual picking of traveltime arrivals, which not only involves manual effort and human interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces severe nonlinearity due to the unknown source locations (space) and functions (time). We developed a source-function-independent full waveform inversion of micro-seismic events to invert for the source image, source function and velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet in the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and background velocity for the synthetic examples used here, like those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
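    The convolution trick at the heart of the method can be demonstrated with a toy example. In the Python sketch below (hypothetical impulse responses, not the paper's FWI code), convolving observed traces with a modeled reference trace and vice versa cancels the unknown source wavelet, so traces that share the same Green's functions give zero misfit even when the wavelet guess is wrong.

```python
import numpy as np

def convolved_misfit(d_obs, d_syn, ref_idx=0):
    # d_obs, d_syn: sequences of (n_time,) traces per receiver
    obs_ref = d_obs[ref_idx]
    syn_ref = d_syn[ref_idx]
    misfit = 0.0
    for obs, syn in zip(d_obs, d_syn):
        lhs = np.convolve(obs, syn_ref)   # observed * modeled reference
        rhs = np.convolve(syn, obs_ref)   # modeled * observed reference
        misfit += 0.5 * np.sum((lhs - rhs) ** 2)
    return misfit

# Toy check: same Green's functions, different source wavelets -> zero misfit.
rng = np.random.default_rng(2)
green = rng.normal(size=(3, 50))          # hypothetical impulse responses
w_true = np.array([1.0, -0.5, 0.2])       # "observed" wavelet
w_guess = np.array([0.3, 1.0])            # wrong wavelet guess
d_obs = [np.convolve(g, w_true) for g in green]
d_syn = [np.convolve(g, w_guess) for g in green]
print(convolved_misfit(d_obs, d_syn))     # ~0: wavelet mismatch cancels
```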

  11. Nonuniformity correction of infrared cameras by reading radiance temperatures with a spatially nonhomogeneous radiation source

    International Nuclear Information System (INIS)

    Gutschwager, Berndt; Hollandt, Jörg

    2017-01-01

    We present a novel method for the nonuniformity correction (NUC) of infrared cameras and focal plane arrays (FPA) in a wide optical spectral range by reading radiance temperatures and by applying a radiation source with an unknown and spatially nonhomogeneous radiance temperature distribution. The benefit of this novel method is that it works with the display and calculation of radiance temperatures, that it can be applied to radiation sources of arbitrary spatial radiance temperature distribution, and that it only requires sufficient temporal stability of this distribution during the measurement process. In contrast, an earlier method calculated the NUC from readings of monitored radiance values. Both methods are based on the recording of several (at least three) images of a radiation source and a purposeful row and column shift of these subsequent images relative to the first, primary image. The mathematical procedure is explained in detail. Its numerical verification with a source of predefined nonhomogeneous radiance temperature distribution and a thermal imager of predefined nonuniform FPA responsivity is presented. (paper)
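    A one-dimensional toy version of the shift-based idea is sketched below in Python: two recordings of the same unknown, nonhomogeneous source, displaced by one pixel, determine the pixel-offset differences, which chain up to the offset profile up to an additive constant. The published method uses at least three 2D-shifted images and works in radiance temperatures; neither refinement is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 64
scene = np.cumsum(rng.normal(size=n + 1))     # unknown nonuniform source
offset = rng.normal(scale=0.5, size=n)        # unknown per-pixel offsets

obs0 = scene[:n] + offset                     # unshifted recording
obs1 = scene[1:n + 1] + offset                # recording shifted by 1 pixel

# obs0[i] - obs1[i-1] = offset[i] - offset[i-1]: the scene terms cancel
d = obs0[1:] - obs1[:-1]
recovered = np.concatenate([[0.0], np.cumsum(d)])   # offsets up to a constant

print(np.allclose(recovered - recovered.mean(),
                  offset - offset.mean()))          # True
```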

  12. Absolute nuclear material assay using count distribution (LAMBDA) space

    Science.gov (United States)

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2012-06-05

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
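    The sampling idea described in the embodiment can be sketched as follows; the chain-size distribution here is a hypothetical geometric stand-in for the analytically computed fission-chain distributions of the patent, and the die-away time is likewise illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
t_end, die_away = 1.0, 1e-4                  # seconds (illustrative values)
n_chains = rng.poisson(5000)                 # fission-chain start events
starts = rng.uniform(0.0, t_end, n_chains)
sizes = rng.geometric(p=0.6, size=n_chains)  # stand-in chain-size distribution

# Spread each chain's detected neutrons in time around its start.
events = np.concatenate([
    t0 + rng.exponential(die_away, k) for t0, k in zip(starts, sizes)])

gates = np.arange(0.0, t_end, 1e-3)          # 1 ms count gates
counts, _ = np.histogram(events, bins=gates)
# Chains cluster counts within a gate, so the variance exceeds the mean
# (the excess over Poisson is what carries the assay information).
print(counts[:10], counts.mean(), counts.var())
```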

  13. Method of controlling power distribution in FBR type reactors

    International Nuclear Information System (INIS)

    Sawada, Shusaku; Kaneto, Kunikazu.

    1982-01-01

    Purpose: To attain power distribution flattening with ease by obtaining a radial power distribution of substantially constant shape, independent of the burn-up cycle. Method: As fuel burning proceeds, the radial power distribution is affected by the accumulation of fission products in the inner blanket fuel assemblies, which varies their effect as neutron-absorbing substances. Taking note of this fact, the power distribution in a heterogeneous FBR type reactor is controlled by varying the core residence period of the inner blanket assemblies in accordance with their charging density in the reactor core. (Kawakami, Y.)

  14. The interplay of various sources of noise on reliability of species distribution models hinges on ecological specialisation.

    Science.gov (United States)

    Soultan, Alaaeldin; Safi, Kamran

    2017-01-01

    Digitized species occurrence data provide an unprecedented source of information for ecologists and conservationists. Species distribution model (SDM) has become a popular method to utilise these data for understanding the spatial and temporal distribution of species, and for modelling biodiversity patterns. Our objective is to study the impact of noise in species occurrence data (namely sample size and positional accuracy) on the performance and reliability of SDM, considering the multiplicative impact of SDM algorithms, species specialisation, and grid resolution. We created a set of four 'virtual' species characterized by different specialisation levels. For each of these species, we built the suitable habitat models using five algorithms at two grid resolutions, with varying sample sizes and different levels of positional accuracy. We assessed the performance and reliability of the SDM according to classic model evaluation metrics (Area Under the Curve and True Skill Statistic) and model agreement metrics (Overall Concordance Correlation Coefficient and geographic niche overlap) respectively. Our study revealed that species specialisation had by far the most dominant impact on the SDM. In contrast to previous studies, we found that for widespread species, low sample size and low positional accuracy were acceptable, and useful distribution ranges could be predicted with as few as 10 species occurrences. Range predictions for narrow-ranged species, however, were sensitive to sample size and positional accuracy, such that useful distribution ranges required at least 20 species occurrences. Against expectations, the MAXENT algorithm poorly predicted the distribution of specialist species at low sample size.

  15. Tetrodotoxin: Chemistry, Toxicity, Source, Distribution and Detection

    Directory of Open Access Journals (Sweden)

    Vaishali Bane

    2014-02-01

    Full Text Available Tetrodotoxin (TTX is a naturally occurring toxin that has been responsible for human intoxications and fatalities. Its usual route of toxicity is via the ingestion of contaminated puffer fish which are a culinary delicacy, especially in Japan. TTX was believed to be confined to regions of South East Asia, but recent studies have demonstrated that the toxin has spread to regions in the Pacific and the Mediterranean. There is no known antidote to TTX which is a powerful sodium channel inhibitor. This review aims to collect pertinent information available to date on TTX and its analogues with a special emphasis on the structure, aetiology, distribution, effects and the analytical methods employed for its detection.

  16. Source of spill ripple in the RF-KO slow-extraction method with FM and AM

    CERN Document Server

    Noda, K; Shibuya, S; Muramatsu, M; Uesugi, T; Kanazawa, M; Torikoshi, M; Takada, E; Yamada, S

    2002-01-01

    The RF-knockout (RF-KO) slow-extraction method with frequency modulation (FM) and amplitude modulation (AM) has brought high-accuracy irradiation to the treatment of a cancer tumor moving with respiration, because of a quick response to beam start/stop. However, a beam spill extracted from a synchrotron ring through RF-KO slow-extraction has a huge ripple with a frequency of around 1 kHz related to the FM. The spill ripple will disturb the lateral dose distribution in the beam scanning methods. Thus, the source of the spill ripple has been investigated through experiments and simulations. There are two tune regions for the extraction process through the RF-KO method: the extraction region and the diffusion region. The particles in the extraction region can be extracted due to amplitude growth through the transverse RF field, only when its frequency matches with the tune in the extraction region. For a large chromaticity, however, the particles in the extraction region can be extracted through the synchrotron ...

  17. The new fabrication method of standard surface sources

    Energy Technology Data Exchange (ETDEWEB)

    Sato, Yasushi E-mail: yss.sato@aist.go.jp; Hino, Yoshio; Yamada, Takahiro; Matsumoto, Mikio

    2004-04-01

    We developed a new fabrication method for standard surface sources by using an inkjet printer with inks in which a radioactive material is mixed to print on a sheet of paper. Three printed test patterns have been prepared: (1) 100 mmx100 mm uniformity-test patterns, (2) positional-resolution test patterns with different widths and intervals of straight lines, and (3) logarithmic intensity test patterns with different radioactive intensities. The results revealed that the fabricated standard surface sources had high uniformity, high positional resolution, arbitrary shapes and a broad intensity range.

  18. Dose distribution considerations of medium energy electron beams at extended source-to-surface distance

    International Nuclear Information System (INIS)

    Saw, Cheng B.; Ayyangar, Komanduri M.; Pawlicki, Todd; Korb, Leroy J.

    1995-01-01

    Purpose: To determine the effects of extended source-to-surface distance (SSD) on dose distributions for a range of medium energy electron beams and cone sizes. Methods and Materials: The depth-dose curves and isodose distributions of 6 MeV, 10 MeV, and 14 MeV electron beams from a dual photon and multielectron energies linear accelerator were studied. To examine the influence of cone size, the smallest and the largest cone sizes available were used. Measurements were carried out in a water phantom with the water surface set at three different SSDs from 101 to 116 cm. Results: In the region between the phantom surface and the depth of maximum dose, the depth-dose decreases as the SSD increases for all electron beam energies. The effects of extended SSD in the region beyond the depth of maximum dose are unobservable and, hence, considered minimal. Extended SSD effects are apparent for higher electron beam energy with small cone size causing the depth of maximum dose and the rapid dose fall-off region to shift deeper into the phantom. However, the change in the depth-dose curve is small. On the other hand, the rapid dose fall-off region is essentially unaltered when the large cone is used. The penumbra enlarges and electron beam flatness deteriorates with increasing SSD
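    For context, clinical practice usually accounts for the output loss at extended SSD with an inverse-square (effective-SSD) model. The Python sketch below shows that conventional formula; it is quoted from general electron-beam dosimetry practice, not from this record, and the effective SSD and dmax values are hypothetical.

```python
def output_correction(ssd_eff, dmax, gap):
    """Ratio of output at extended SSD (nominal + gap) to calibration SSD.

    ssd_eff : effective source-to-surface distance (cm), measured per beam
    dmax    : depth of maximum dose (cm)
    gap     : extra air gap beyond the calibration SSD (cm)
    """
    return ((ssd_eff + dmax) / (ssd_eff + dmax + gap)) ** 2

# Example: 10 MeV beam, hypothetical ssd_eff = 85 cm, dmax = 2.3 cm,
# treated 16 cm beyond the calibration SSD (cf. the 101 -> 116 cm setups).
print(output_correction(85.0, 2.3, 16.0))  # ~0.71: noticeable output loss
```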

  19. Optimal Allocation of Generalized Power Sources in Distribution Network Based on Multi-Objective Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Li Ran

    2017-01-01

    The optimal allocation of generalized power sources in distribution networks is studied. A simple index of voltage stability is put forward. Considering the investment and operation benefit, voltage stability, and the pollution emissions of generalized power sources in the distribution network, a multi-objective optimization planning model is established, and a multi-objective particle swarm optimization algorithm is proposed to solve it. In order to improve the global search ability, the strategies of fast non-dominated sorting, elitism and crowding distance are adopted in this algorithm. Finally, the model and algorithm are tested on the IEEE 33-node system to find the best configuration of generalized power sources. The results show that with reasonable access of generalized power sources to the active distribution network, the investment benefit and voltage stability of the system are improved, and the proposed algorithm has better global search capability.
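    Two of the strategies named in the abstract, non-dominated sorting (via the dominance test) and crowding distance, are sketched below in Python; the full particle-swarm optimizer and the IEEE 33-node power-flow model are beyond the scope of this fragment.

```python
import numpy as np

def dominates(f1, f2):
    # f1 dominates f2 if it is no worse in every objective and better in one
    # (minimization assumed).
    return np.all(f1 <= f2) and np.any(f1 < f2)

def crowding_distance(front):
    # front: (n_points, n_objectives) array of objective values
    n, m = front.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(front[:, j])
        dist[order[0]] = dist[order[-1]] = np.inf   # keep boundary points
        span = front[order[-1], j] - front[order[0], j]
        if span > 0:
            dist[order[1:-1]] += (front[order[2:], j]
                                  - front[order[:-2], j]) / span
    return dist

front = np.array([[1.0, 5.0], [2.0, 3.0], [4.0, 2.0], [6.0, 1.0]])
print(dominates(np.array([1.0, 2.0]), np.array([2.0, 3.0])))  # True
print(crowding_distance(front))   # boundary points get inf
```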

  20. Prediction of broadband ground-motion time histories: Hybrid low/high-frequency method with correlated random source parameters

    Science.gov (United States)

    Liu, P.; Archuleta, R.J.; Hartzell, S.H.

    2006-01-01

    We present a new method for calculating broadband time histories of ground motion based on a hybrid low-frequency/high-frequency approach with correlated source parameters. Using a finite-difference method we calculate low-frequency synthetics (< 1 Hz) in a 3D velocity structure. We also compute broadband synthetics in a 1D velocity model using a frequency-wavenumber method. The low frequencies from the 3D calculation are combined with the high frequencies from the 1D calculation by using matched filtering at a crossover frequency of 1 Hz. The source description, common to both the 1D and 3D synthetics, is based on correlated random distributions for the slip amplitude, rupture velocity, and rise time on the fault. This source description allows for the specification of source parameters independent of any a priori inversion results. In our broadband modeling we include correlation between slip amplitude, rupture velocity, and rise time, as suggested by dynamic fault modeling. The method of using correlated random source parameters is flexible and can be easily modified to adjust to our changing understanding of earthquake ruptures. A realistic attenuation model is common to both the 3D and 1D calculations that form the low- and high-frequency components of the broadband synthetics. The value of Q is a function of the local shear-wave velocity. To produce more accurate high-frequency amplitudes and durations, the 1D synthetics are corrected with a randomized, frequency-dependent radiation pattern. The 1D synthetics are further corrected for local site and nonlinear soil effects by using a 1D nonlinear propagation code and a generic velocity structure appropriate for the site's National Earthquake Hazards Reduction Program (NEHRP) site classification. The entire procedure is validated by comparison with the 1994 Northridge, California, strong ground motion data set. The bias and error found here for response spectral acceleration are similar to the best results that have been published by
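    The combination step can be illustrated directly. The Python sketch below merges a stand-in low-frequency synthetic and a stand-in high-frequency synthetic at the 1 Hz crossover using zero-phase Butterworth filters; the actual 3D finite-difference and frequency-wavenumber synthetics are, of course, not reproduced.

```python
import numpy as np
from scipy.signal import butter, filtfilt

dt = 0.01                                   # 100 Hz sampling
t = np.arange(0, 40, dt)
rng = np.random.default_rng(3)
syn_3d = np.sin(2 * np.pi * 0.3 * t)        # stand-in low-frequency synthetic
syn_1d = rng.normal(size=t.size)            # stand-in high-frequency synthetic

f_cross, fs = 1.0, 1.0 / dt
b_lo, a_lo = butter(4, f_cross / (fs / 2), btype="low")
b_hi, a_hi = butter(4, f_cross / (fs / 2), btype="high")

# Zero-phase filtering keeps the two bands aligned at the crossover.
broadband = filtfilt(b_lo, a_lo, syn_3d) + filtfilt(b_hi, a_hi, syn_1d)
print(broadband[:5])
```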

  1. Cellular Neural Network-Based Methods for Distributed Network Intrusion Detection

    Directory of Open Access Journals (Sweden)

    Kang Xie

    2015-01-01

    In view of the problems of current distributed-architecture intrusion detection systems (DIDS), a new online distributed intrusion detection model based on cellular neural networks (CNN) is proposed, in which a discrete-time CNN (DTCNN) is used as a weak classifier in each local node and a state-controlled CNN (SCCNN) is used as the global detection method. We further propose a new method for designing the template parameters of the SCCNN by solving a linear matrix inequality. Experimental results based on the KDD CUP 99 dataset show its feasibility and effectiveness. Emerging evidence indicates that this new approach is amenable to parallel and analog very-large-scale integration (VLSI) implementation, which allows the distributed intrusion detection to be performed better.

  2. Lead concentration distribution and source tracing of urban/suburban aquatic sediments in two typical famous tourist cities: Haikou and Sanya, China.

    Science.gov (United States)

    Dong, Zhicheng; Bao, Zhengyu; Wu, Guoai; Fu, Yangrong; Yang, Yi

    2010-11-01

    The content and spatial distribution of lead in the aquatic systems of two Chinese tropical cities in Hainan province (Haikou and Sanya) show an unequal distribution of lead between the urban and suburban areas. The lead content is significantly higher in the urban area (72.3 mg/kg) than in the suburbs (15.0 mg/kg) in Haikou, but nearly equal in Sanya (41.6 and 43.9 mg/kg). The frequency distribution histograms suggest that the lead in Haikou and in Sanya derives from different natural and/or anthropogenic sources. The isotopic compositions indicate that urban sediment lead in Haikou originates mainly from anthropogenic sources (automobile exhaust, atmospheric deposition, etc.), which contribute much more than the natural sources, while natural lead (basalt and sea sands) is still dominant in the suburban areas of Haikou. In Sanya, the primary source is natural (soils and sea sands).

  3. Polycyclic Aromatic Hydrocarbons in the Dagang Oilfield (China: Distribution, Sources, and Risk Assessment

    Directory of Open Access Journals (Sweden)

    Haihua Jiao

    2015-05-01

    The levels of 16 polycyclic aromatic hydrocarbons (PAHs) were investigated in 27 upper-layer (0–25 cm) soil samples collected from the Dagang Oilfield (China) in April 2013 to estimate their distribution, possible sources, and the potential risks posed. The total concentrations of PAHs (∑PAHs) varied between 103.6 µg·kg−1 and 5872 µg·kg−1, with a mean concentration of 919.8 µg·kg−1; increased concentrations were noted along a gradient from arable desert soil (mean 343.5 µg·kg−1), to oil well areas (mean 627.3 µg·kg−1), to urban and residential zones (mean 1856 µg·kg−1). Diagnostic ratios showed diverse sources of PAHs, including petroleum, liquid fossil fuels, and biomass combustion. Combustion sources were most significant for PAHs in arable desert soils and residential zones, while petroleum sources were significant in oilfield areas. Based on their carcinogenicity, PAHs were classified as carcinogenic (B) or not classified/non-carcinogenic (NB). The total concentrations of carcinogenic PAHs (∑BPAHs) varied from 13.3 µg·kg−1 to 4397 µg·kg−1 across all samples, with a mean concentration of 594.4 µg·kg−1. The results suggest that the oilfield soil is subject to a certain level of ecological and environmental risk.

  4. Ion source development for uranium-logging neutron tube

    International Nuclear Information System (INIS)

    Bacon, F.M.; O'Hagan, J.B.

    1977-03-01

    Ion beam current and mass distributions have been measured for a Penning-type ion source in a uranium-logging neutron tube. For a discharge current of 1 A and a gas pressure of 1.3 Pa, the beam current was about 65 mA and the mass distribution was 5 percent D+, 80 percent D2+, and 15 percent D3+. A demountable version of this source was built to determine how geometry changes could affect the ion beam current and mass distribution. A factor of three increase in beam current was achieved by decreasing the depth of the plasma expansion cup to zero. The only method by which the mass distribution was significantly modified was by dissociating the gas in the source with a hot tungsten filament. The atomic percentage was increased to 40 percent with a filament at about 3000 K

  5. Gluon field distribution in baryons

    International Nuclear Information System (INIS)

    Bissey, F.; Cao, F-G.; Kitson, A.; Lasscock, B.G.; Leinweber, D.B.; Signal, A.I.; Williams, A.G.; Zanotti, J.M.

    2005-01-01

    Methods for revealing the distribution of gluon fields within the three-quark static-baryon potential are presented. In particular, we outline methods for studying the sensitivity of the source on the emerging vacuum response for the three-quark system. At the same time, we explore the possibility of revealing gluon-field distributions in three-quark systems in QCD without the use of gauge-dependent smoothing techniques. Renderings of flux tubes from a preliminary high-statistics study on a 12³ x 24 lattice are presented

  6. Determination of disintegration rates of a 60Co point source and volume sources by the sum-peak method

    International Nuclear Information System (INIS)

    Kawano, Takao; Ebihara, Hiroshi

    1990-01-01

    The disintegration rates of 60Co as a point source (<2 mm in diameter on a thin plastic disc) and as volume sources (10-100 mL solutions in a polyethylene bottle) are determined by the sum-peak method. The sum-peak formula gives the exact disintegration rate for the point source at different positions from the detector. However, increasing the volume of the solution results in enlarged deviations from the true disintegration rate. Extended sources must be treated as an amalgam of many point sources. (author)
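    For context, the sum-peak relation for a two-photon cascade emitter such as 60Co is commonly written as follows (a Brinkman-type formula quoted from the general literature, not from this record):

```latex
% A_1, A_2 : full-energy-peak count rates of the two gamma lines
% A_{12}   : count rate of the sum (coincidence) peak
% T        : total count rate of the spectrum
\[
  N_0 \;=\; T \;+\; \frac{A_1 A_2}{A_{12}}
\]
% N_0 is the source disintegration rate, obtained without any detection
% efficiency calibration, which is why the method is "absolute".
```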

  7. Methods for reconstruction of the density distribution of nuclear power

    International Nuclear Information System (INIS)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.

    2015-01-01

    Highlights: • Two methods for reconstruction of the pin power distribution are presented. • The ARM method uses an analytical solution of the 2D diffusion equation. • The PRM method uses a polynomial solution without boundary conditions. • The maximum errors in pin power reconstruction occur in the peripheral water region. • The errors are significantly smaller in the inner area of the core. - Abstract: In the analytical reconstruction method (ARM), the two-dimensional (2D) neutron diffusion equation is analytically solved for two energy groups (2G) and homogeneous nodes with the dimensions of a fuel assembly (FA). The solution employs a 2D fourth-order expansion for the axial leakage term. The Nodal Expansion Method (NEM) provides the solution average values, namely the four average partial currents on the surfaces of the node, the average flux in the node and the multiplying factor of the problem. The expansion coefficients for the axial leakage are determined directly from the NEM method or can be determined in the reconstruction method. A new polynomial reconstruction method (PRM) is implemented based on the 2D expansion for the axial leakage term. The ARM method uses the four average currents on the surfaces of the node and the four average fluxes in the corners of the node as boundary conditions, and the average flux in the node as a consistency condition. To determine the average fluxes in the corners of the node, an analytical solution is employed. This analytical solution uses the average fluxes on the surfaces of the node as boundary conditions, and discontinuities in the corners are incorporated. The polynomial and analytical solutions of the PRM and ARM methods, respectively, represent the homogeneous flux distributions. The detailed distributions inside a FA are estimated by the product of the homogeneous distribution and a local heterogeneous form function. Moreover, form functions of power are used. The results show that the methods have good accuracy when compared with reference values and
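    The final reconstruction step shared by both methods, multiplying the smooth homogeneous solution by a heterogeneous form function, can be sketched in a few lines of Python with hypothetical 17x17 arrays:

```python
import numpy as np

nx = ny = 17                                # pins per assembly side (typical PWR)
x = np.linspace(-1.0, 1.0, nx)
X, Y = np.meshgrid(x, x)

phi_hom = 1.0 - 0.3 * (X**2 + Y**2)         # smooth homogeneous flux shape
# Hypothetical precomputed heterogeneous form function of the assembly.
form = 1.0 + 0.05 * np.cos(3 * np.pi * X) * np.cos(3 * np.pi * Y)

pin_power = phi_hom * form                  # heterogeneous pin-wise estimate
pin_power /= pin_power.mean()               # normalize to unit assembly power
print(pin_power.max(), pin_power.min())
```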

  8. Identification of reactor failure states using noise methods, and spatial power distribution

    International Nuclear Information System (INIS)

    Vavrin, J.; Blazek, J.

    1981-01-01

    A survey is given of the results achieved. Methodical means and programs were developed for the control computer which may be used in noise diagnostics and in the control of reactor power distribution. Statistical methods of processing the noise components of the signals of measured variables were used for identifying failures of reactors. The method of the synthesis of the neutron flux was used for modelling and evaluating the reactor power distribution. For monitoring and controlling the power distribution a mathematical model of the reactor was constructed suitable for control computers. The uses of noise analysis methods are recommended and directions of further development shown. (J.P.)

  9. The P{sub 1}-approximation for the Distribution of Neutrons from a Pulsed Source in Hydrogen

    Energy Technology Data Exchange (ETDEWEB)

    Claesson, A

    1963-12-15

    The asymptotic distribution of neutrons from a pulsed, high energy source in an infinite moderator has been obtained earlier in a 'diffusion' approximation. In that paper the cross section was assumed to be constant over the whole energy region and the time derivative of the first moment was disregarded. Here, first, an analytic expression is obtained for the density in a P{sub 1} -approximation. However, the result is very complicated, and it is shown that an asymptotic solution can be found in a simpler way. By taking into account the low hydrogen scattering cross section at the source energy it follows that the space dependence of the distribution is less than that obtained earlier. The importance of keeping the time derivative of the first moment is further shown in a perturbation approximation.

  10. A formal method for identifying distinct states of variability in time-varying sources: SGR A* as an example

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, L.; Witzel, G.; Ghez, A. M. [Department of Physics and Astronomy, University of California, Los Angeles, CA 90095-1547 (United States); Longstaff, F. A. [UCLA Anderson School of Management, University of California, Los Angeles, CA 90095-1481 (United States)

    2014-08-10

    Continuously time variable sources are often characterized by their power spectral density and flux distribution. These quantities can undergo dramatic changes over time if the underlying physical processes change. However, some changes can be subtle and not distinguishable using standard statistical approaches. Here, we report a methodology that aims to identify distinct but similar states of time variability. We apply this method to the Galactic supermassive black hole, where 2.2 μm flux is observed from a source associated with Sgr A* and where two distinct states have recently been suggested. Our approach is taken from mathematical finance and works with conditional flux density distributions that depend on the previous flux value. The discrete, unobserved (hidden) state variable is modeled as a stochastic process and the transition probabilities are inferred from the flux density time series. Using the most comprehensive data set to date, in which all Keck and a majority of the publicly available Very Large Telescope data have been merged, we show that Sgr A* is sufficiently described by a single intrinsic state. However, the observed flux densities exhibit two states: noise dominated and source dominated. Our methodology reported here will prove extremely useful to assess the effects of the putative gas cloud G2 that is on its way toward the black hole and might create a new state of variability.
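    A simplified stand-in for the state-inference step is sketched below: a two-component Gaussian hidden Markov model fitted to a synthetic log-flux series with the hmmlearn package. The record's actual model works with conditional flux-density distributions borrowed from mathematical finance, so this is only an illustration of how transition probabilities can be inferred from a time series.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Synthetic series: a quiet (noise-dominated) stretch followed by a
# brighter (source-dominated) stretch, both lognormal.
rng = np.random.default_rng(4)
quiet = rng.lognormal(mean=0.0, sigma=0.3, size=800)
flare = rng.lognormal(mean=1.0, sigma=0.6, size=200)
flux = np.concatenate([quiet, flare])[:, None]

model = GaussianHMM(n_components=2, covariance_type="full", n_iter=100)
model.fit(np.log(flux))                      # EM fit on the log flux
print("transition matrix:\n", model.transmat_)
print("state means (log flux):", model.means_.ravel())
```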

  11. Sensitivity Analysis of Dynamic Tariff Method for Congestion Management in Distribution Networks

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei; Liu, Zhaoxi

    2015-01-01

    The dynamic tariff (DT) method is designed for the distribution system operator (DSO) to alleviate congestions that might occur in a distribution network with high penetration of distributed energy resources (DERs). Sensitivity analysis of the DT method is crucial because of its decentralized control manner. The sensitivity analysis can obtain the changes of the optimal energy planning, and thereby of the line loading profiles, over infinitely small changes of parameters by differentiating the KKT conditions of the convex quadratic programming over which the DT method is formed. Three case...

  12. Test of methods for retrospective activity size distribution determination from filter samples

    International Nuclear Information System (INIS)

    Meisenberg, Oliver; Tschiersch, Jochen

    2015-01-01

    Determining the activity size distribution of radioactive aerosol particles requires sophisticated and heavy equipment, which makes measurements at a large number of sites difficult and expensive. Therefore, three methods for a retrospective determination of size distributions from aerosol filter samples in the laboratory were tested for their applicability. Extraction into a carrier liquid with subsequent nebulisation showed size distributions with a slight but correctable bias towards larger diameters compared with the original size distribution. Yields on the order of 1% could be achieved. Sonication-assisted extraction into a carrier liquid caused a coagulation mode to appear in the size distribution. Sonication-assisted extraction into the air did not show acceptable results due to small yields. The method of extraction into a carrier liquid without sonication was applied to aerosol samples from Chernobyl in order to calculate inhalation dose coefficients for 137Cs based on the individual size distribution. The effective dose coefficient is about half of that calculated with a default reference size distribution. - Highlights: • Activity size distributions can be recovered after aerosol sampling on filters. • Extraction into a carrier liquid and subsequent nebulisation is appropriate. • This facilitates the determination of activity size distributions for individuals. • Size distributions from this method can be used for individual dose coefficients. • Dose coefficients were calculated for the workers at the new Chernobyl shelter

  13. Hydrogen distribution in a containment with a high-velocity hydrogen-steam source

    International Nuclear Information System (INIS)

    Bloom, G.R.; Muhlestein, L.D.; Postma, A.K.; Claybrook, S.W.

    1982-09-01

    Hydrogen mixing and distribution tests are reported for a modeled high velocity hydrogen-steam release from a postulated small pipe break or release from a pressurizer relief tank rupture disk into the lower compartment of an Ice Condenser Plant. The tests, which in most cases used helium as a simulant for hydrogen, demonstrated that the lower compartment gas was well mixed for both hydrogen release conditions used. The gas concentration differences between any spatial locations were less than 3 volume percent during the hydrogen/steam release period and were reduced to less than 0.5 volume percent within 20 minutes after termination of the hydrogen source. The high velocity hydrogen/steam jet provided the dominant mixing mechanism; however, natural convection and forced air recirculation played important roles in providing a well mixed atmosphere following termination of the hydrogen source. 5 figures, 4 tables

  14. Evaluation of the dose distribution for prostate implants using various 125I and 103Pd sources

    International Nuclear Information System (INIS)

    Meigooni, Ali S.; Luerman, Christine M.; Sowards, Keith T.

    2009-01-01

    Recently, several different models of 125 I and 103 Pd brachytherapy sources have been introduced in order to meet the increasing demand for prostate seed implants. These sources have different internal structures; hence, their TG-43 dosimetric parameters are not the same. In this study, the effects of the dosimetric differences among the sources on their clinical applications were evaluated. The quantitative and qualitative evaluations were performed by comparisons of dose distributions and dose volume histograms of prostate implants calculated for various designs of 125 I and 103 Pd sources. These comparisons were made for an identical implant scheme with the same number of seeds for each source. The results were compared with the Amersham model 6711 seed for 125 I and the Theragenics model 200 seed for 103 Pd using the same implant scheme.
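    The comparison rests on the TG-43 formalism, whose standard dose-rate equation (quoted from the AAPM TG-43 literature, not from this record) is:

```latex
% S_K     : air-kerma strength of the seed
% \Lambda : dose-rate constant (differs between seed models)
% G       : geometry function; g(r): radial dose function
% F       : 2D anisotropy function; (r_0, \theta_0) = (1 cm, 90 degrees)
\[
  \dot{D}(r,\theta) \;=\; S_K \,\Lambda\,
  \frac{G(r,\theta)}{G(r_0,\theta_0)}\, g(r)\, F(r,\theta)
\]
```

    Seed models with different internal structures differ mainly in Λ, g(r) and F(r,θ), which is why an identical implant scheme can yield different dose volume histograms.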

  15. The distribution of polarized radio sources >15 μJy IN GOODS-N

    International Nuclear Information System (INIS)

    Rudnick, L.; Owen, F. N.

    2014-01-01

    We present deep Very Large Array observations of the polarization of radio sources in the GOODS-N field at 1.4 GHz at resolutions of 1.6'' and 10''. At 1.6'', we find that the peak flux cumulative number count distribution is N(>p) ∼ 45(p/30 μJy)^(−0.6) per square degree above a detection threshold of 14.5 μJy. This represents a break from the steeper slopes at higher flux densities, resulting in fewer sources predicted for future surveys with the Square Kilometer Array and its precursors. It provides a significant challenge for using background rotation measures (RMs) to study clusters of galaxies or individual galaxies. Most of the polarized sources are well above our detection limit, and they are also radio galaxies that are well resolved even at 10'', with redshifts from ∼0.2-1.9. We determined a total polarized flux for each source by integrating the 10'' polarized intensity maps, as will be done by upcoming surveys such as POSSUM. These total polarized fluxes are a factor of two higher, on average, than the peak polarized flux at 1.6''; this would increase the number counts by ∼50% at a fixed flux level. The detected sources have RMs with a characteristic rms scatter of ∼11 rad m^−2 around the local Galactic value, after eliminating likely outliers. The median fractional polarization from all total intensity sources does not continue the trend of increasing at lower flux densities, as seen for stronger sources. The changes in the polarization characteristics seen at these low fluxes likely represent the increasing dominance of star-forming galaxies.

  16. Qualitative analysis of precipiation distribution in Poland with use of different data sources

    Directory of Open Access Journals (Sweden)

    J. Walawender

    2008-04-01

    Geographical Information Systems (GIS) can be used to integrate data from different sources and in different formats to perform innovative spatial and temporal analysis. GIS can also be applied in climatic research to manage, investigate and display all kinds of weather data.

    The main objective of this study is to demonstrate that GIS is a useful tool to examine and visualise precipitation distribution obtained from different data sources: ground measurements, satellite and radar data.

    Three selected days (30 cases) with convective rainfall situations were analysed. Firstly, a scalable GRID-based approach was applied to store the data from the three different sources in a comparable layout. Then, a geoprocessing algorithm was created within the ArcGIS 9.2 environment. The algorithm included GRID definition, reclassification and raster algebra. All of the calculations and procedures were performed automatically. Finally, contingency tables and pie charts were created to show the relationship between ground measurements and both satellite- and radar-derived data. The results were visualised on maps.

  17. Coupled double-distribution-function lattice Boltzmann method for the compressible Navier-Stokes equations.

    Science.gov (United States)

    Li, Q; He, Y L; Wang, Y; Tao, W Q

    2007-11-01

    A coupled double-distribution-function lattice Boltzmann method is developed for the compressible Navier-Stokes equations. Different from existing thermal lattice Boltzmann methods, this method can recover the compressible Navier-Stokes equations with a flexible specific-heat ratio and Prandtl number. In the method, a density distribution function based on a multispeed lattice is used to recover the compressible continuity and momentum equations, while the compressible energy equation is recovered by an energy distribution function. The energy distribution function is then coupled to the density distribution function via the thermal equation of state. In order to obtain an adjustable specific-heat ratio, a constant related to the specific-heat ratio is introduced into the equilibrium energy distribution function. Two different coupled double-distribution-function lattice Boltzmann models are also proposed in the paper. Numerical simulations are performed for the Riemann problem, the double-Mach-reflection problem, and the Couette flow with a range of specific-heat ratios and Prandtl numbers. The numerical results are found to be in excellent agreement with analytical and/or other solutions.

  18. Discussion on the source survey method in a natural evaporation pond

    International Nuclear Information System (INIS)

    Dai Xiaoshu; Fan Chengrong; Fu Yunshan

    2014-01-01

    A natural evaporation pond was to be decommissioned. The survey of the pond focused on investigating the radioactive contamination distribution and estimating the total amount of deposits in the pond, in order to provide support for subsequent decommissioning activities. Based on the source survey of the pond, this paper introduces how radiation measurements and sampling (of water and sediment) were implemented in the water. A movable work platform was built on the pond to facilitate sampling and measurement. In addition, a sludge sampler was designed to accurately determine the amount of sampling and its depth. This paper also describes the distribution of sampling points. (authors)

  19. A novel method for detecting light source for digital images forensic

    Science.gov (United States)

    Roy, A. K.; Mitra, S. K.; Agrawal, R.

    2011-06-01

    Image manipulation has been practiced for centuries. Manipulated images are intended to alter facts: facts of ethics, morality, politics, sex, celebrity or chaos. Image forensic science is used to detect these manipulations in digital images. There are several standard ways to analyze an image for manipulation, each with some limitations. Also, very few methods try to capitalize on the way the image was taken by the camera. We propose a new method based on light and its shading, as light and shade are the fundamental inputs that may carry all the information in the image. The proposed method measures the direction of the light source and uses this light-based technique to identify intentional partial manipulation of the digital image in question. The method was tested on known manipulated images and correctly identified the light sources. The light source of an image is measured in terms of angle. The experimental results show the robustness of the methodology.
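    One common way to estimate the light-source direction (in the spirit of occluding-contour techniques from the forensics literature, not necessarily the exact algorithm of this paper) is a linear least-squares fit of a Lambertian shading model, sketched below in Python:

```python
import numpy as np

# Under a Lambertian model, intensity I = N . L + A, where N is the surface
# normal, L the 2D light direction and A an ambient term; samples of contour
# normals and intensities therefore give L by linear least squares.
rng = np.random.default_rng(8)
true_L = np.array([np.cos(np.deg2rad(30)), np.sin(np.deg2rad(30))])
ambient = 0.2

# Hypothetical contour normals facing the light (so N . L > 0 everywhere).
angles = np.deg2rad(30) + rng.uniform(-1.2, 1.2, 40)
N = np.column_stack([np.cos(angles), np.sin(angles)])
I = N @ true_L + ambient + rng.normal(0, 0.01, 40)       # observed intensity

A = np.column_stack([N, np.ones(len(I))])
(Lx, Ly, amb), *_ = np.linalg.lstsq(A, I, rcond=None)
print(np.rad2deg(np.arctan2(Ly, Lx)), amb)               # ~30 degrees, ~0.2
```

    Inconsistent estimated angles across objects in the same photograph are then the forensic cue for a composite.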

  20. Comparative analysis of methods and sources of financing of the transport organizations activity

    Science.gov (United States)

    Gorshkov, Roman

    2017-10-01

    The article analyzes methods of financing transport organizations under conditions of limited investment resources. A comparative analysis of these methods is carried out, and a classification of investments and of the methods and sources of financial support for projects being implemented to date is presented. In order to select the optimal sources of financing for the projects, various methods of financial management and financial support for the activities of a transport organization were analyzed and considered from the perspective of their advantages and limitations. The result of the study is a set of recommendations on the selection of optimal sources and methods of financing for transport organizations.

  1. Acoustic Source Localization via Subspace Based Method Using Small Aperture MEMS Arrays

    Directory of Open Access Journals (Sweden)

    Xin Zhang

    2014-01-01

    Small-aperture microphone arrays provide many advantages for portable devices and hearing-aid equipment. In this paper, a subspace-based localization method is proposed for acoustic sources using small-aperture arrays. The effects of array aperture on localization are analyzed by using the array response (array manifold). Besides array aperture, the frequency of the acoustic source and the variance of the signal power are simulated to demonstrate how to optimize localization performance, which is carried out by introducing frequency error with the proposed method. The proposed method for a 5 mm array aperture is validated by simulations and experiments with MEMS microphone arrays. Different types of acoustic sources can be localized with the highest precision of 6 degrees, even in the presence of wind noise and other noises. Furthermore, the proposed method reduces the computational complexity compared with other methods.
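    A minimal narrowband subspace (MUSIC-style) bearing estimator for a small uniform linear array is sketched below in Python; the 5 mm MEMS geometry and the frequency-error handling of the paper are not reproduced.

```python
import numpy as np

def music_spectrum(X, n_sources, d, wavelength, angles):
    # X: (n_mics, n_snapshots) complex baseband snapshots
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    eigval, eigvec = np.linalg.eigh(R)         # eigenvalues ascending
    En = eigvec[:, :-n_sources]                # noise subspace
    n_mics = X.shape[0]
    k = 2 * np.pi / wavelength
    p = []
    for th in angles:
        a = np.exp(1j * k * d * np.arange(n_mics) * np.sin(th))  # steering vector
        p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(p)

# Toy data: one source at +20 degrees, 4 mics, half-wavelength spacing.
rng = np.random.default_rng(5)
n_mics, n_snap, wavelength = 4, 400, 0.05
d = wavelength / 2
theta0 = np.deg2rad(20.0)
a0 = np.exp(1j * 2 * np.pi / wavelength * d * np.arange(n_mics) * np.sin(theta0))
s = rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)
X = np.outer(a0, s) + 0.1 * (rng.normal(size=(n_mics, n_snap))
                             + 1j * rng.normal(size=(n_mics, n_snap)))

angles = np.deg2rad(np.linspace(-90, 90, 361))
est = angles[np.argmax(music_spectrum(X, 1, d, wavelength, angles))]
print(np.rad2deg(est))   # close to 20
```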

  2. Comparisons of calculated and measured spectral distributions of neutrons from a 14-MeV neutron source inside the Tokamak Fusion Test Reactor

    International Nuclear Information System (INIS)

    Santoro, R.T.; Barnes, J.M.; Alsmiller, R.G. Jr.; Emmett, M.B.; Drischler, J.D.

    1985-12-01

    A recent paper presented neutron spectral distributions (energy ≥ 0.91 MeV) measured at various locations around the Tokamak Fusion Test Reactor (TFTR) at the Princeton Plasma Physics Laboratory. The neutron source for the series of measurements was a small D-T generator placed at various positions in the TFTR vacuum chamber. In the present paper the results of neutron transport calculations are presented and compared with these experimental data. The calculations were carried out using Monte Carlo methods and a very detailed model of the TFTR and the TFTR test cell. The calculated and experimental fluences per unit energy are compared in absolute units and are found to be in substantial agreement for five different combinations of source and detector positions

  3. Determination of Key Risk Supervision Areas around River-Type Water Sources Affected by Multiple Risk Sources: A Case Study of Water Sources along the Yangtze’s Nanjing Section

    Directory of Open Access Journals (Sweden)

    Qi Zhou

    2017-02-01

    Full Text Available To provide a reference for risk management of water sources, this study screens the key risk supervision areas around river-type water sources (hereinafter the water sources) threatened by multiple fixed risk sources (hereinafter the risk sources), and establishes a comprehensive methodological system. Specifically, it comprises: (1) a method of partitioning risk-source-concentrated sub-regions for screening key risk supervision areas around water sources; (2) an approach for determining sub-regional risk indexes (SrRI), which characterize the scale of sub-regional risks considering factors such as the risk distribution intensity within sub-regions, the risk indexes of risk sources (RIRS, characterizing the risk scale of individual risk sources) and the number of risk sources; and (3) a method of calculating a sub-region's risk threat to the water sources (SrTWS), which considers the positional relationship between water sources and sub-regions as well as SrRI, together with criteria for determining key supervision sub-regions. Favorable effects are achieved by applying this methodological system to the sub-regions distributed around water sources along the Yangtze's Nanjing section. Results revealed that the key sub-regions needing supervision were SD16, SD06, SD21, SD26, SD15, SD03, SD02, SD32, SD10, SD11, SD14, SD05, SD27, etc., in order of criticality. The sub-region posing the greatest risk threat to the water sources was SD16, located in the middle reaches of the Yangtze River. In general, sub-regions along the upper Yangtze reaches posed greater threats to water sources than the lower-reach sub-regions, other than SD26 and SD21. Upstream water sources were less subject to the threats of sub-regions than the downstream sources, other than NJ09B and NJ03.

  4. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.

    Science.gov (United States)

    Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan

    2017-12-06

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.
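
    The evidence-combination step can be illustrated with Dempster's rule applied to two discretized travel-time estimates (a toy sketch of the Dempster-Shafer machinery the record cites; the bins and mass assignments are invented for illustration):

```python
from itertools import product

# Toy Dempster-Shafer combination of two travel-time evidence sources.
# Focal elements are sets of travel-time bins (in minutes); masses sum to 1.
m1 = {frozenset({"10-15"}): 0.6, frozenset({"10-15", "15-20"}): 0.4}  # point detectors
m2 = {frozenset({"15-20"}): 0.5, frozenset({"10-15", "15-20"}): 0.5}  # interval detectors

def combine(m1, m2):
    """Dempster's rule: intersect focal elements, renormalize by 1 - conflict."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

for focal, mass in combine(m1, m2).items():
    print(set(focal), round(mass, 3))
```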

  5. Distribution and behavior of tritium in the environment near its sources

    International Nuclear Information System (INIS)

    Fukui, Masami

    1994-01-01

    Of the long-lived radionuclides migrating globally through the environment, tritium was investigated because it is a latent source of detectable pollution at the Kyoto University Research Reactor Institute (KURRI); moreover, mass transport in an in situ situation could be studied as well. Discharge rates of tritium from the operation of reactors and reprocessing plants throughout the world are given, together with data on the background level of tritium in the environment and the resulting collective effective dose equivalent commitment. The behavior of tritium that has leaked from a heavy water facility in the Kyoto University Research Reactor (KURR) is dealt with by modeling its transport between air, water, and building materials. The spatial distributions of tritium in the air within the facilities of the KURR and KUCA (Kyoto University Critical Assembly), as measured by a convenient monitoring method that uses small water basins as passive samplers, are also shown. Tritium discharged as liquid effluents into a reservoir for monitoring and into a retention pond at the KURRI site was monitored, and its behavior was clarified by use of a box model and/or a classical dispersion equation. The purpose of this research is to keep radiation exposure of workers and the public as low as reasonably achievable, taking into account economic and social considerations (the ALARA concept). (author)

  6. Stable source reconstruction from a finite number of measurements in the multi-frequency inverse source problem

    DEFF Research Database (Denmark)

    Karamehmedovic, Mirza; Kirkeby, Adrian; Knudsen, Kim

    2018-01-01

    setting: From measurements made at a finite set of frequencies we uniquely determine and reconstruct sources in a subspace spanned by finitely many Fourier-Bessel functions. Further, we obtain a constructive criterion for identifying a minimal set of measurement frequencies sufficient for reconstruction......, and under an additional, mild assumption, the reconstruction method is shown to be stable. Our analysis is based on a singular value decomposition of the source-to-measurement forward operators and the distribution of positive zeros of the Bessel functions of the first kind. The reconstruction method...

  7. Rainfall Deduction Method for Estimating Non-Point Source Pollution Load for Watershed

    OpenAIRE

    Cai, Ming; Li, Huai-en; KAWAKAMI, Yoji

    2004-01-01

    Water pollution can be divided into point source pollution (PSP) and non-point source pollution (NSP). As point source pollution has been brought under control, non-point source pollution is becoming the main pollution source. Prediction of NSP load is becoming increasingly important in water pollution control and planning at the watershed scale. Considering the shortage of NSP monitoring data in China, a practical estimation method for non-point source pollution load --- rainfall deduction met...

  8. Post-quantum attacks on key distribution schemes in the presence of weakly stochastic sources

    International Nuclear Information System (INIS)

    Al–Safi, S W; Wilmott, C M

    2015-01-01

    It has been established that the security of quantum key distribution protocols can be severely compromised were one to permit an eavesdropper to possess a very limited knowledge of the random sources used between the communicating parties. While such knowledge should always be expected in realistic experimental conditions, the result itself opened a new line of research to fully account for real-world weak randomness threats to quantum cryptography. Here we expand on this novel idea by describing a key distribution scheme that is provably secure against general attacks by a post-quantum adversary. We then discuss possible security consequences for such schemes under the assumption of weak randomness. (paper)

  9. Phenotypic and Genotypic Eligible Methods for Salmonella Typhimurium Source Tracking.

    Science.gov (United States)

    Ferrari, Rafaela G; Panzenhagen, Pedro H N; Conte-Junior, Carlos A

    2017-01-01

    Salmonellosis is one of the most common causes of foodborne infection and a leading cause of human gastroenteritis. Throughout the last decade, Salmonella enterica serotype Typhimurium (ST) has shown an increasing number of reports, with the simultaneous emergence of multidrug-resistant isolates such as phage type DT104. Therefore, to successfully control this microorganism, it is important to attribute salmonellosis to the exact source. Studies of Salmonella source attribution have been performed to determine the main foods and food-production animals involved, toward which control efforts should be correctly directed. Hence, the selection of an ST subtyping method depends on the particular problem to which efforts must be directed, the resources, and the data available. Generally, before choosing a molecular subtyping approach, phenotyping approaches such as serotyping, phage typing, and antimicrobial resistance profiling are implemented as the screening stage of an investigation, and the results are computed using frequency-matching models (i.e., the Dutch, Hald and Asymmetric Island models). Currently, due to the advancement of molecular tools such as PFGE, MLVA, MLST, CRISPR, and WGS, more precise results have been obtained, but even with these technologies there are still gaps to be elucidated. To address this issue, an important question needs to be answered: which subtyping methods are currently suitable for ST source attribution? This review presents the most frequently applied subtyping methods used to characterize ST, analyses the major available microbial subtyping attribution models, and considers the use of conventional phenotyping methods, as well as the most applied genotypic tools, in the context of their potential applicability to investigating ST source tracking.

  10. Phenotypic and Genotypic Eligible Methods for Salmonella Typhimurium Source Tracking

    Directory of Open Access Journals (Sweden)

    Rafaela G. Ferrari

    2017-12-01

    Full Text Available Salmonellosis is one of the most common causes of foodborne infection and a leading cause of human gastroenteritis. Throughout the last decade, Salmonella enterica serotype Typhimurium (ST) has shown an increasing number of reports, with the simultaneous emergence of multidrug-resistant isolates such as phage type DT104. Therefore, to successfully control this microorganism, it is important to attribute salmonellosis to the exact source. Studies of Salmonella source attribution have been performed to determine the main foods and food-production animals involved, toward which control efforts should be correctly directed. Hence, the selection of an ST subtyping method depends on the particular problem to which efforts must be directed, the resources, and the data available. Generally, before choosing a molecular subtyping approach, phenotyping approaches such as serotyping, phage typing, and antimicrobial resistance profiling are implemented as the screening stage of an investigation, and the results are computed using frequency-matching models (i.e., the Dutch, Hald and Asymmetric Island models). Currently, due to the advancement of molecular tools such as PFGE, MLVA, MLST, CRISPR, and WGS, more precise results have been obtained, but even with these technologies there are still gaps to be elucidated. To address this issue, an important question needs to be answered: which subtyping methods are currently suitable for ST source attribution? This review presents the most frequently applied subtyping methods used to characterize ST, analyses the major available microbial subtyping attribution models, and considers the use of conventional phenotyping methods, as well as the most applied genotypic tools, in the context of their potential applicability to investigating ST source tracking.

  11. Bayesian mixture models for source separation in MEG

    International Nuclear Information System (INIS)

    Calvetti, Daniela; Homa, Laura; Somersalo, Erkki

    2011-01-01

    This paper discusses the problem of imaging electromagnetic brain activity from measurements of the induced magnetic field outside the head. This imaging modality, magnetoencephalography (MEG), is known to be severely ill posed, and in order to obtain useful estimates for the activity map, complementary information needs to be used to regularize the problem. In this paper, a particular emphasis is on finding non-superficial focal sources that induce a magnetic field that may be confused with noise due to external sources and with distributed brain noise. The data are assumed to come from a mixture of a focal source and a spatially distributed possibly virtual source; hence, to differentiate between those two components, the problem is solved within a Bayesian framework, with a mixture model prior encoding the information that different sources may be concurrently active. The mixture model prior combines one density that favors strongly focal sources and another that favors spatially distributed sources, interpreted as clutter in the source estimation. Furthermore, to address the challenge of localizing deep focal sources, a novel depth sounding algorithm is suggested, and it is shown with simulated data that the method is able to distinguish between a signal arising from a deep focal source and a clutter signal. (paper)

  12. Security analysis of an untrusted source for quantum key distribution: passive approach

    International Nuclear Information System (INIS)

    Zhao Yi; Qi Bing; Lo, H-K; Qian Li

    2010-01-01

    We present a passive approach to the security analysis of quantum key distribution (QKD) with an untrusted source. A complete proof of its unconditional security is also presented. This scheme has significant advantages in real-life implementations as it does not require fast optical switching or a quantum random number generator. The essential idea is to use a beam splitter to split each input pulse. We show that we can characterize the source using a cross-estimate technique without active routing of each pulse. We have derived analytical expressions for the passive estimation scheme. Moreover, using simulations, we have considered four real-life imperfections: additional loss introduced by the 'plug and play' structure, inefficiency of the intensity monitor, noise of the intensity monitor, and statistical fluctuation introduced by finite data size. Our simulation results show that the passive estimate of an untrusted source remains useful in practice, despite these four imperfections. Also, we have performed preliminary experiments, confirming the utility of our proposal in real-life applications. Our proposal makes it possible to implement the 'plug and play' QKD with the security guaranteed, while keeping the implementation practical.

  13. Probabilist methods applied to electric source problems in nuclear safety

    International Nuclear Information System (INIS)

    Carnino, A.; Llory, M.

    1979-01-01

    Nuclear safety analysts have frequently been asked to quantify safety margins and evaluate hazards. To do so, probabilistic methods have proved to be the most promising. Without completely replacing deterministic safety analysis, they are now commonly used at the reliability or availability stages of systems, as well as for determining likely accident sequences. In this paper an application to the problem of electric power sources is described, together with the methods used: the calculation of the probable loss of all electric sources of a pressurized water nuclear power station, the evaluation of the reliability of the diesels by event trees of failures, and the determination of the accident sequences which could be brought about by the 'total electric source loss' initiator and affect the installation or the environment. (author)

  14. Planck Early Results. XV. Spectral Energy Distributions and Radio Continuum Spectra of Northern Extragalactic Radio Sources

    Science.gov (United States)

    Aatrokoski, J.; Ade, P. A. R.; Aghanim, N.; Aller, H. D.; Aller, M. F.; Angelakis, E.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; et al.

    2011-01-01

    Spectral energy distributions (SEDs) and radio continuum spectra are presented for a northern sample of 104 extragalactic radio sources, based on the Planck Early Release Compact Source Catalogue (ERCSC) and simultaneous multifrequency data. The nine Planck frequencies, from 30 to 857 GHz, are complemented by a set of simultaneous observations ranging from radio to gamma-rays. This is the first extensive frequency coverage in the radio and millimetre domains for an essentially complete sample of extragalactic radio sources, and it shows how the individual shocks, each in their own phase of development, shape the radio spectra as they move in the relativistic jet. The SEDs presented in this paper were fitted with second and third degree polynomials to estimate the frequencies of the synchrotron and inverse Compton (IC) peaks, and the spectral indices of low and high frequency radio data, including the Planck ERCSC data, were calculated. SED modelling methods are discussed, with an emphasis on proper physical modelling of the synchrotron bump using multiple components. Planck ERCSC data also suggest that the original accelerated electron energy spectrum could be much harder than commonly thought, with power-law index around 1.5 instead of the canonical 2.5. The implications of this are discussed for the acceleration mechanisms effective in blazar shocks. Furthermore, in many cases the Planck data indicate that gamma-ray emission must originate in the same shocks that produce the radio emission.

  15. Environmental DNA method for estimating salamander distribution in headwater streams, and a comparison of water sampling methods.

    Science.gov (United States)

    Katano, Izumi; Harada, Ken; Doi, Hideyuki; Souma, Rio; Minamoto, Toshifumi

    2017-01-01

    Environmental DNA (eDNA) has recently been used for detecting the distribution of macroorganisms in various aquatic habitats. In this study, we applied an eDNA method to estimate the distribution of the Japanese clawed salamander, Onychodactylus japonicus, in headwater streams. Additionally, we compared the detection of eDNA and hand-capturing methods used for determining the distribution of O. japonicus. For eDNA detection, we designed a qPCR primer/probe set for O. japonicus using the 12S rRNA region. We detected the eDNA of O. japonicus at all sites (with the exception of one), where we also observed them by hand-capturing. Additionally, we detected eDNA at two sites where we were unable to observe individuals using the hand-capturing method. Moreover, we found that eDNA concentrations and detection rates of the two water sampling areas (stream surface and under stones) were not significantly different, although the eDNA concentration in the water under stones was more varied than that on the surface. We, therefore, conclude that eDNA methods could be used to determine the distribution of macroorganisms inhabiting headwater systems by using samples collected from the surface of the water.

  16. Comparison of Two Methods Used to Model Shape Parameters of Pareto Distributions

    Science.gov (United States)

    Liu, C.; Charpentier, R.R.; Su, J.

    2011-01-01

    Two methods are compared for estimating the shape parameters of Pareto field-size (or pool-size) distributions for petroleum resource assessment. Both methods assume mature exploration in which most of the larger fields have been discovered. Both methods use the sizes of larger discovered fields to estimate the numbers and sizes of smaller fields: (1) the tail-truncated method uses a plot of field size versus size rank, and (2) the log-geometric method uses data binned in field-size classes and the ratios of adjacent bin counts. Simulation experiments were conducted using discovered oil and gas pool-size distributions from four petroleum systems in Alberta, Canada and using Pareto distributions generated by Monte Carlo simulation. The estimates of the shape parameters of the Pareto distributions, calculated by both the tail-truncated and log-geometric methods, generally stabilize where discovered pool numbers are greater than 100. However, with fewer than 100 discoveries, these estimates can vary greatly with each new discovery. The estimated shape parameters of the tail-truncated method are more stable and larger than those of the log-geometric method where the number of discovered pools is more than 100. Both methods, however, tend to underestimate the shape parameter. Monte Carlo simulation was also used to create sequences of discovered pool sizes by sampling from a Pareto distribution with a discovery process model using a defined exploration efficiency (in order to show how biased the sampling was in favor of larger fields being discovered first). A higher (more biased) exploration efficiency gives better estimates of the Pareto shape parameters. © 2011 International Association for Mathematical Geosciences.
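
    A small numerical sketch of the two estimators as described in the record (a simplified reading, with synthetic data, doubling size classes, and an arbitrary top-100 truncation; the paper's exact binning and truncation rules are not reproduced): the tail-truncated estimator fits log(size) against log(rank), and the log-geometric estimator uses ratios of counts in geometric size bins.

```python
import numpy as np

rng = np.random.default_rng(42)
shape = 1.2                                  # true Pareto shape parameter
sizes = rng.pareto(shape, 500) + 1.0         # synthetic "discovered pool" sizes

# (1) Tail-truncated style estimate: for a Pareto tail, rank ~ N * size^-a,
# so the slope of log(size) vs log(rank) over the largest pools is -1/a.
top = np.sort(sizes)[::-1][:100]
ranks = np.arange(1, top.size + 1)
slope = np.polyfit(np.log(ranks), np.log(top), 1)[0]
shape_tail = -1.0 / slope

# (2) Log-geometric style estimate: with size classes doubling in width,
# the expected ratio of adjacent bin counts is 2^a, so a = log2(ratio).
edges = 2.0 ** np.arange(0, 10)
counts, _ = np.histogram(sizes, bins=edges)
valid = (counts[1:] > 0) & (counts[:-1] > 0)
ratios = counts[:-1][valid] / counts[1:][valid]
shape_geo = np.log2(np.mean(ratios))

print(f"tail-truncated estimate: {shape_tail:.2f}")
print(f"log-geometric estimate:  {shape_geo:.2f}")
```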

  17. Intracorporeal Heat Distribution from Fully Implantable Energy Sources for Mechanical Circulatory Support: A Computational Proof-of-Concept Study

    Directory of Open Access Journals (Sweden)

    Jacopo Biasetti

    2017-10-01

    Full Text Available Mechanical circulatory support devices, such as total artificial hearts and left ventricular assist devices, rely on external energy sources for their continuous operation. Clinically approved power supplies rely on percutaneous cables connecting an external energy source to the implanted device with the associated risk of infections. One alternative, investigated in the 70s and 80s, employs a fully implanted nuclear power source. The heat generated by the nuclear decay can be converted into electricity to power circulatory support devices. Due to the low conversion efficiencies, substantial levels of waste heat are generated and must be dissipated to avoid tissue damage, heat stroke, and death. The present work computationally evaluates the ability of the blood flow in the descending aorta to remove the locally generated waste heat for subsequent full-body distribution and dissipation, with the specific aim of investigating methods for containment of local peak temperatures within physiologically acceptable limits. To this aim, coupled fluid–solid heat transfer computational models of the blood flow in the human aorta and different heat exchanger architectures are developed. Particle tracking is used to evaluate temperature histories of cells passing through the heat exchanger region. The use of the blood flow in the descending aorta as a heat sink proves to be a viable approach for the removal of waste heat loads. With the basic heat exchanger design, blood thermal boundary layer temperatures exceed 50°C, possibly damaging blood cells and proteins. Improved designs of the heat exchanger, with the addition of fins and heat guides, allow for drastically lower blood temperatures, possibly leading to a more biocompatible implant. The ability to maintain blood temperatures at biologically compatible levels will ultimately allow for the body-wise distribution, and subsequent dissipation, of heat loads with minimum effects on the human physiology.

  18. Intracorporeal Heat Distribution from Fully Implantable Energy Sources for Mechanical Circulatory Support: A Computational Proof-of-Concept Study.

    Science.gov (United States)

    Biasetti, Jacopo; Pustavoitau, Aliaksei; Spazzini, Pier Giorgio

    2017-01-01

    Mechanical circulatory support devices, such as total artificial hearts and left ventricular assist devices, rely on external energy sources for their continuous operation. Clinically approved power supplies rely on percutaneous cables connecting an external energy source to the implanted device with the associated risk of infections. One alternative, investigated in the 70s and 80s, employs a fully implanted nuclear power source. The heat generated by the nuclear decay can be converted into electricity to power circulatory support devices. Due to the low conversion efficiencies, substantial levels of waste heat are generated and must be dissipated to avoid tissue damage, heat stroke, and death. The present work computationally evaluates the ability of the blood flow in the descending aorta to remove the locally generated waste heat for subsequent full-body distribution and dissipation, with the specific aim of investigating methods for containment of local peak temperatures within physiologically acceptable limits. To this aim, coupled fluid-solid heat transfer computational models of the blood flow in the human aorta and different heat exchanger architectures are developed. Particle tracking is used to evaluate temperature histories of cells passing through the heat exchanger region. The use of the blood flow in the descending aorta as a heat sink proves to be a viable approach for the removal of waste heat loads. With the basic heat exchanger design, blood thermal boundary layer temperatures exceed 50°C, possibly damaging blood cells and proteins. Improved designs of the heat exchanger, with the addition of fins and heat guides, allow for drastically lower blood temperatures, possibly leading to a more biocompatible implant. The ability to maintain blood temperatures at biologically compatible levels will ultimately allow for the body-wise distribution, and subsequent dissipation, of heat loads with minimum effects on the human physiology.

  19. A robust fusion method for multiview distributed video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Ascenso, Joao; Brites, Catarina

    2014-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the redundancy of the source (video) at the decoder side, as opposed to predictive coding, where the encoder leverages the redundancy. To exploit the correlation between views, multiview predictive video codecs require the encoder...... with a robust fusion system able to improve the quality of the fused SI along the decoding process through a learning process using already decoded data. We shall here take the approach of fusing the estimated distributions of the SIs, as opposed to a conventional fusion algorithm based on the fusion of pixel...... values. The proposed solution is able to achieve gains up to 0.9 dB in Bjøntegaard difference when compared with the best-performing (in a RD sense) single-SI DVC decoder, chosen as the better of an inter-view and a temporal SI-based decoder...

  20. An improvement of the source-jerk method for measuring high antireactivities of a reactor system

    Energy Technology Data Exchange (ETDEWEB)

    Bosevski, T; Spiric, V [Institut za nuklearne nauke 'Boris Kidric', Vinca, Belgrade (Yugoslavia)

    1966-07-01

    In this paper the well-known source jerk method (1) is modified, yielding a method for the experimental determination of negative reactivities of reactor systems in which, starting from the basic idea of the source jerk method, a new experimental procedure and an exact analysis were developed. The analysis and numerical preparation allow direct application of the method to heavy water and graphite systems. Compared with the source jerk method, the experimental procedure and the interpretation of results are faster, simpler and more exact (author)

  1. Distributed Interior-point Method for Loosely Coupled Problems

    DEFF Research Database (Denmark)

    Pakazad, Sina Khoshfetrat; Hansson, Anders; Andersen, Martin Skovgaard

    2014-01-01

    In this paper, we put forth distributed algorithms for solving loosely coupled unconstrained and constrained optimization problems. Such problems are usually solved using algorithms that are based on a combination of decomposition and first order methods. These algorithms are commonly very slow a...

  2. The eGo grid model: An open-source and open-data based synthetic medium-voltage grid model for distribution power supply systems

    Science.gov (United States)

    Amme, J.; Pleßmann, G.; Bühler, J.; Hülk, L.; Kötter, E.; Schwaegerl, P.

    2018-02-01

    The increasing integration of renewable energy into the electricity supply system creates new challenges for distribution grids. The planning and operation of distribution systems requires appropriate grid models that consider the heterogeneity of existing grids. In this paper, we describe a novel method to generate synthetic medium-voltage (MV) grids, which we applied in our DIstribution Network GeneratOr (DINGO). DINGO is open-source software and uses freely available data. Medium-voltage grid topologies are synthesized based on location and electricity demand in defined demand areas. For this purpose, we use GIS data containing demand areas with high-resolution spatial data on physical properties, land use, energy, and demography. The grid topology is treated as a capacitated vehicle routing problem (CVRP) combined with a local search metaheuristics. We also consider the current planning principles for MV distribution networks, paying special attention to line congestion and voltage limit violations. In the modelling process, we included power flow calculations for validation. The resulting grid model datasets contain 3608 synthetic MV grids in high resolution, covering all of Germany and taking local characteristics into account. We compared the modelled networks with real network data. In terms of number of transformers and total cable length, we conclude that the method presented in this paper generates realistic grids that could be used to implement a cost-optimised electrical energy system.
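
    The topology step can be illustrated with a toy greedy capacitated-routing heuristic (this is not DINGO's algorithm, which solves a CVRP combined with local-search metaheuristics; the substation location, node demands and feeder capacity below are invented):

```python
import math

# Toy greedy construction of capacitated feeder routes from a substation:
# repeatedly extend a route to the nearest unserved node that still fits,
# opening a new feeder when the capacity limit is reached.
substation = (0.0, 0.0)
nodes = {                      # name: ((x, y) in km, peak demand in kW)
    "A": ((1, 2), 40), "B": ((2, 1), 30), "C": ((5, 5), 60),
    "D": ((6, 4), 50), "E": ((1, 5), 20),
}
capacity = 100                 # per-feeder limit, kW (invented)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

unserved = dict(nodes)
routes = []
while unserved:
    route, load, pos = [], 0, substation
    while True:
        fits = [n for n, (pt, d) in unserved.items() if load + d <= capacity]
        if not fits:
            break
        nxt = min(fits, key=lambda n: dist(pos, unserved[n][0]))
        pt, d = unserved.pop(nxt)
        route.append(nxt)
        load += d
        pos = pt
    routes.append((route, load))

print(routes)   # -> [(['A', 'B', 'E'], 90), (['C'], 60), (['D'], 50)]
```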

  3. Analytical method for determining the channel-temperature distribution

    International Nuclear Information System (INIS)

    Kurbatov, I.M.

    1992-01-01

    The distribution of the predicted temperature over the volume or cross section of the active zone is important for thermal calculations of reactors taking into account random deviations. This requires a laborious calculation which includes the following steps: separation of the nominal temperature field, within the temperature range, into intervals, in each of which the temperature is set equal to its average value in the interval; determination of the number of channels whose temperature falls within each interval; construction of the channel-temperature distribution in each interval in accordance with the weighted error function; and summation of the number of channels with the same temperature over all intervals. This procedure can be greatly simplified with the help of methods which eliminate numerous variant calculations when the nominal temperature field is "refined" up to the optimal field according to different criteria. In the present paper a universal analytical method is proposed for determining, by changing the coefficients in the channel-temperature distribution function, the form of this function that reflects all conditions of operation of the elements in the active zone. The problem is solved for the temperature of the coolant at the outlet from the reactor channels

  4. Analysis of inconsistent source sampling in monte carlo weight-window variance reduction methods

    Directory of Open Access Journals (Sweden)

    David P. Griesheimer

    2017-09-01

    Full Text Available The application of Monte Carlo (MC) to large-scale fixed-source problems has recently become possible with new hybrid methods that automate generation of parameters for variance reduction techniques. Two common variance reduction techniques, weight windows and source biasing, have been automated and popularized by the consistent adjoint-driven importance sampling (CADIS) method. This method uses the adjoint solution from an inexpensive deterministic calculation to define a consistent set of weight windows and source particles for a subsequent MC calculation. One of the motivations for source consistency is to avoid the splitting or rouletting of particles at birth, which requires computational resources. However, it is not always possible or desirable to implement such consistency, which results in inconsistent source biasing. This paper develops an original framework that mathematically expresses the coupling of the weight window and source biasing techniques, allowing the authors to explore the impact of inconsistent source sampling on the variance of MC results. A numerical experiment supports this new framework and suggests that certain classes of problems may be relatively insensitive to inconsistent source sampling schemes with moderate levels of splitting and rouletting.
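
    A toy sketch of the weight-window mechanics the record discusses (a generic splitting/rouletting loop on particle weight, not the CADIS machinery itself; the window bounds and survival weight are illustrative):

```python
import random

# Illustrative weight-window check: split particles that are too heavy,
# play Russian roulette on particles that are too light.
W_LOW, W_HIGH, W_SURVIVE = 0.5, 2.0, 1.0

def apply_weight_window(particles):
    """particles: list of weights; returns the post-window population."""
    out = []
    for w in particles:
        if w > W_HIGH:                      # split into n lighter copies
            n = int(w / W_SURVIVE + 0.5)
            out.extend([w / n] * n)
        elif w < W_LOW:                     # roulette: survive with p = w/W_SURVIVE
            if random.random() < w / W_SURVIVE:
                out.append(W_SURVIVE)
        else:
            out.append(w)                   # inside the window: unchanged
    return out

random.seed(0)
print(apply_weight_window([0.1, 0.8, 3.7]))  # roulette, keep, split
```

    Note that both branches preserve the expected total weight, which is what keeps the variance reduction unbiased.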

  5. Optimal reactive power and voltage control in distribution networks with distributed generators by fuzzy adaptive hybrid particle swarm optimisation method

    DEFF Research Database (Denmark)

    Chen, Shuheng; Hu, Weihao; Su, Chi

    2015-01-01

    A new and efficient methodology for optimal reactive power and voltage control of distribution networks with distributed generators, based on fuzzy adaptive hybrid PSO (FAHPSO), is proposed. The objective is to minimize the comprehensive cost, consisting of the power loss and the operation cost of transformers... The algorithm is implemented in the VC++ 6.0 programming language, and the corresponding numerical experiments are carried out on a modified version of the IEEE 33-node distribution system with two newly installed distributed generators and eight newly installed capacitor banks. The numerical results prove... that the proposed method can find a more promising control schedule of all transformers, all capacitors and all distributed generators with less time consumption, compared with the other listed artificial intelligence methods...

  6. Application of Micro-coprecipitation Method to Alpha Source Preparation for Measuring Alpha Nuclides

    International Nuclear Information System (INIS)

    Lee, Myung Ho; Park, Jong Ho; Oh, Se Jin; Song, Byung Chul; Song, Kyuseok

    2011-01-01

    Among source preparation techniques, electrodeposition is a commonly used method for preparing sources for alpha spectrometry, because it is simple and produces a very thin deposit, which is essential for high resolution of the alpha peak. Recently, micro-coprecipitation with rare earths has been used to prepare sources for alpha spectrometry. In this work, the Pu, Am and Cm isotopes were purified from interfering nuclides and elements with a TRU resin in radioactive waste samples, and the activity concentrations of the Pu, Am and Cm isotopes were determined by radiation counting methods after alpha source preparation by micro-coprecipitation. After the Pu isotopes in the radioactive waste samples were separated from the other nuclides with an anion exchange resin, the Am isotopes were purified with a TRU resin and an anion exchange resin, or with a TRU resin alone. Activity concentrations and chemical recoveries of 241Am purified with the TRU resin were similar to those obtained with the TRU resin and anion exchange resin combined. In this study, to save analytical time and cost, the Am isotopes were purified with the TRU resin without using an additional anion exchange resin. After comparing the electrodeposition method with the micro-coprecipitation method, the micro-coprecipitation method was chosen for the alpha source preparation, because it is simpler and more reliable for source preparation of the Pu, Am and Cm isotopes

  7. Heuristic derivation of the Rossi-alpha formula for a pulsed neutron source

    International Nuclear Information System (INIS)

    Baeten, P.

    2004-01-01

    Expressions for the Rossi-alpha distribution for a pulsed neutron source were derived using a heuristic derivation based on the method of joint detection probability. This heuristic technique was chosen over the more rigorous master equation method because of its simplicity and the complementarity of the two techniques. The derived equations also take into account the presence of delayed neutrons and intrinsic neutron sources, which often cannot be neglected in source-driven subcritical cores. The obtained expressions showed that the ratio of the correlated to the uncorrelated signal in the Rossi-Alpha distribution for a Pulsed Source (RAPS) is strongly increased compared with the standard Rossi-alpha distribution for a continuous source. It was also demonstrated that with the RAPS technique four independent measurement quantities, instead of three with the standard Rossi-alpha technique, can be determined. Hence, it is no longer necessary to combine the Rossi-alpha technique with another method to measure the reactivity expressed in dollars. Both properties, the increased signal-to-noise ratio of the correlated signal and the measurement of a fourth quantity, make the RAPS technique an excellent candidate for the measurement of kinetic parameters in source-driven subcritical assemblies
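
    For orientation, the classic continuous-source Rossi-alpha distribution that the pulsed-source (RAPS) variant generalizes has the familiar two-component form (a textbook expression quoted for context, not the paper's derived RAPS equations):

```latex
% Rossi-alpha distribution for a continuous source: the probability per unit
% time of a second detection at delay tau after a given first detection is an
% uncorrelated (accidental) floor plus an exponentially decaying correlated term.
\[
  p(\tau) \;=\; \underbrace{C}_{\text{uncorrelated}}
          \;+\; \underbrace{A\,e^{-\alpha\tau}}_{\text{correlated}},
  \qquad
  \alpha \;=\; \frac{1 - k\,(1-\beta)}{\Lambda},
\]
% with k the multiplication factor, beta the delayed-neutron fraction and
% Lambda the prompt-neutron generation time.
```

    The increased correlated-to-uncorrelated ratio the record reports corresponds to a larger A/C in this picture.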

  8. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks

    Directory of Open Access Journals (Sweden)

    Chaoyang Shi

    2017-12-01

    Full Text Available Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.

  9. Standardization method for alpha and beta surface sources

    Energy Technology Data Exchange (ETDEWEB)

    Sahagia, M; Grigorescu, E L; Razdolescu, A C; Ivan, C [Institute of Physics and Nuclear Engineering, Institute of Atomic Physics, PO Box MG-6, R-76900 Bucharest, (Romania)

    1994-01-01

    The installation and method for the standardization of large surface alpha and beta sources are presented. A multiwire, flow-type proportional counter and the associated electronics are used. The counter is placed in a lead shield. The response of the system, in (s^-1/Bq) or (s^-1/(particle x s^-1)), was determined for 241Am, 239Pu, 147Pm, 204Tl, 90(Sr+Y) and 137Cs using standard sources of different dimensions, from a few mm^2 to 180 x 220 mm^2. The system was legally attested for expanded uncertainties of ±7%. (Author).

  10. Size distribution of chemical elements and their source apportionment in ambient coarse, fine, and ultrafine particles in Shanghai urban summer atmosphere.

    Science.gov (United States)

    Lü, Senlin; Zhang, Rui; Yao, Zhenkun; Yi, Fei; Ren, Jingjing; Wu, Minghong; Feng, Man; Wang, Qingyue

    2012-01-01

    Ambient coarse particles (diameter 1.8-10 μm), fine particles (diameter 0.1-1.8 μm), and ultrafine particles (diameter < 0.1 μm) in the Shanghai urban summer atmosphere were investigated. Source apportionment of the chemical elements was analyzed by means of an enrichment factor method. Our results showed that the average mass concentrations of coarse particles, fine particles and ultrafine particles in the summer air were 9.38 +/- 2.18, 8.82 +/- 3.52, and 2.02 +/- 0.41 μg/m3, respectively. The mass percentage of the fine particles accounted for 51.47% of the total mass of PM10, indicating that fine particles are the major component of the Shanghai ambient particles. SEM/EDX results showed that the coarse particles were dominated by minerals, the fine particles by soot aggregates and fly ashes, and the ultrafine particles by soot particles and unidentified particles. SRXRF results demonstrated that crustal elements were mainly distributed in the coarse particles, while heavy metals were present in higher proportions in the fine particles. Source apportionment revealed that Si, K, Ca, Fe, Mn, Rb, and Sr were from crustal sources, and S, Cl, Cu, Zn, As, Se, Br, and Pb from anthropogenic sources. Levels of P, V, Cr, and Ni in particles might be contributed by multiple sources and need further investigation.

  11. Efficiency correction for disk sources using coaxial High-Purity Ge detectors

    International Nuclear Information System (INIS)

    Chatani, Hiroshi.

    1993-03-01

    Efficiency correction factors for disk sources were determined using closed-ended coaxial High-Purity Ge (HPGe) detectors whose relative efficiencies with respect to a 3"x3" NaI(Tl) at 1.3 MeV γ-rays were 30% and 10%, respectively. Parameters for the correction by the mapping method were obtained systematically, using several monoenergetic (i.e. free of coincidence summing losses) γ-ray sources produced by irradiation in the Kyoto University Reactor (KUR) core. It was found that (1) the systematics of the Gaussian fitting parameters, calculated from the relative efficiency distributions of the HPGe detectors, as a function of γ-ray energy are recognized, (2) the efficiency distributions deviate from Gaussian distributions outside the radii of the HPGe crystals, and (3) the mapping method is of practical use with satisfactory accuracy, as shown by comparison with the disk source measurements. (author)

  12. A Preliminary Analysis of the Economics of Using Distributed Energy as a Source of Reactive Power Supply

    Energy Technology Data Exchange (ETDEWEB)

    Li, Fangxing [ORNL; Kueck, John D [ORNL; Rizy, D Tom [ORNL; King, Thomas F [ORNL

    2006-04-01

    A major blackout affecting 50 million people in the Northeast United States, where insufficient reactive power supply was an issue, and an increased number of filings made to the Federal Energy Regulatory Commission by generators for reactive power have led to a closer look at reactive power supply and compensation. The Northeastern Massachusetts region is one such area where there is an insufficiency in reactive power compensation. Distributed energy, due to its close proximity to loads, seems to be a viable option for solving any present or future reactive power shortage problems. Industry experts believe that supplying reactive power from synchronized distributed energy sources can be 2 to 3 times more effective than providing reactive support in bulk from longer distances at the transmission or generation level. Several technology options are available to supply reactive power from distributed energy sources, such as small generators, synchronous condensers, fuel cells or microturbines. In addition, simple payback analysis indicates that investments in DG to provide reactive power can be recouped in less than 5 years when capacity payments for providing reactive power are larger than $5,000/kVAR and the DG capital and installation costs are lower than $30/kVAR. However, the current institutional arrangements for reactive power compensation present a significant barrier to wider adoption of distributed energy as a source of reactive power. Furthermore, there is a significant difference between how generators and transmission owners/providers are compensated for reactive power supplied. The situation for distributed energy sources is even more difficult, as there are no arrangements to compensate independent DE owners interested in supplying reactive power to the grid other than those for very large IPPs. There are comparable functionality barriers as well, as these smaller devices do not have the control and communications requirements necessary for automatic

  13. FINANCING OF INVESTMENT PROJECTS OF GAS DISTRIBUTION ENTERPRISES AS A FACTOR OF THEIR DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Svitlana Korol

    2016-03-01

    Full Text Available The article considers theoretical questions concerning the formation of sources for financing investments, and analyzes the investment activities of gas utilities by source of funding. The purpose of this article is to identify priority sources for financing the investment activities of gas distribution enterprises. Methodology of the research: to achieve this goal the author used methods of theoretical generalization; statistical and financial methods in the study of the dynamics and structure of investment; tabular methods to display the structure of the main sources of financing of the investment program of gas distribution enterprises; and methods of consistency and comparison, to determine the relationship between the main components of investment financing sources. As a result of the research, the structure of the sources financing the investment activities of gas distribution enterprises is determined by critical retrospective analysis. It is established that the main sources of financing the investment program are the tariffs for the transportation and supply of gas set by the national commission carrying out state regulation in the areas of energy and utilities (NCREU). The structure of the main financing sources of the investment program of gas distribution enterprises is presented. It is shown that the level of funding depends on the size of the NCREU tariffs and on gas consumption. The scientific novelty of the article lies in addressing the lack, in domestic and foreign research, of work on selecting priority sources of financing for the investment programs of gas distribution enterprises. The practical significance is that the theoretical concepts, practical results and conclusions of the article, which reveal the essence of the problem of investment financing sources, can be used in the activity of gas distribution enterprises, taking into account the current state of development of the economy. Keywords: investment resources, financing

  14. Optimization of Excitation in FDTD Method and Corresponding Source Modeling

    Directory of Open Access Journals (Sweden)

    B. Dimitrijevic

    2015-04-01

    Full Text Available Source and excitation modeling in the FDTD formulation has a significant impact on the method's performance and the required simulation time. Since abrupt source introduction yields intense numerical transients throughout the computational domain, a generally accepted solution is to introduce the source slowly, using appropriate shaping functions in time. The main goal of the optimization presented in this paper is to find a balance between two opposing demands: minimal required computation time and acceptable degradation of simulation performance. Reducing the time necessary for source activation and deactivation is an important issue, especially in the design of microwave structures, where the simulation is repeated intensively during device parameter optimization. The proposed optimized source models are realized and tested within an in-house FDTD simulation environment.
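
    A common shaping function of the kind the record describes is a raised-cosine ramp that brings a sinusoidal source up smoothly over a number of time steps (a generic sketch, not the paper's optimized model; the ramp length is the tunable trade-off parameter between startup time and spurious transients):

```python
import numpy as np

# Raised-cosine ramp for gradual source introduction in an FDTD loop.
# n_ramp controls the activation time: a shorter ramp saves steps, but a
# too-abrupt turn-on excites spurious broadband transients in the domain.
def soft_source(n, f0, dt, n_ramp):
    """Source value at time step n: ramped sinusoid."""
    ramp = 0.5 * (1.0 - np.cos(np.pi * n / n_ramp)) if n < n_ramp else 1.0
    return ramp * np.sin(2.0 * np.pi * f0 * n * dt)

f0, dt = 10e9, 1.0e-12        # 10 GHz source, 1 ps time step (illustrative)
samples = [soft_source(n, f0, dt, n_ramp=300) for n in range(1000)]
print(min(samples), max(samples))  # envelope grows smoothly to unit amplitude
```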

  15. A Reconstruction Method for the Estimation of Temperatures of Multiple Sources Applied for Nanoparticle-Mediated Hyperthermia.

    Science.gov (United States)

    Steinberg, Idan; Tamir, Gil; Gannot, Israel

    2018-03-16

    Solid malignant tumors are one of the leading causes of death worldwide. Complete removal is often not possible, and alternative methods such as focused hyperthermia are used. Precise control of the hyperthermia process is imperative for the successful application of such treatment. To that end, this research presents a fast method that enables the estimation of deep-tissue heat distribution by capturing and processing the transient temperature at the boundary, based on a bio-heat transfer model. The theoretical model is rigorously developed and thoroughly validated by a series of experiments. A 10-fold improvement in resolution and visibility is demonstrated on tissue-mimicking phantoms. The inverse problem is demonstrated as well, with a successful application of the model for imaging heat sources embedded in deep tissue, thereby giving the physician the ability to dynamically evaluate the efficiency of the hyperthermia treatment in real time.
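
    The forward side of such an estimation scheme is commonly the Pennes bio-heat equation; below is a minimal 1D explicit finite-difference sketch (a generic textbook model, not the authors' reconstruction method; all tissue parameters and the source strength are nominal values chosen for illustration):

```python
import numpy as np

# 1D explicit finite-difference solution of the Pennes bioheat equation:
#   rho_c * dT/dt = k * d2T/dx2 - perf * (T - T_a) + Q(x)
rho_c = 3.6e6      # volumetric heat capacity of tissue, J/(m^3 K) (nominal)
k = 0.5            # thermal conductivity, W/(m K)
perf = 2.0e3       # blood perfusion term w_b*rho_b*c_b, W/(m^3 K)
T_a = 37.0         # arterial blood temperature, deg C

nx, dx, dt = 101, 1e-3, 0.05   # 10 cm domain; stable since dt < dx^2*rho_c/(2k)
T = np.full(nx, 37.0)
Q = np.zeros(nx)
Q[45:56] = 3.0e4               # embedded heat source, W/m^3 (hypothetical)

for _ in range(int(1800 / dt)):            # simulate 30 minutes of heating
    lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
    T += dt / rho_c * (k * lap - perf * (T - T_a) + Q)
    T[0] = T[-1] = 37.0                    # body-temperature boundaries

print(f"peak temperature: {T.max():.1f} deg C at x = {T.argmax()*dx*100:.1f} cm")
```

    The inverse problem the record addresses would then ask: given only the boundary temperature history, recover Q(x).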

  16. The occurrence and distribution of a group of organic micropollutants in Mexico City's water sources.

    Science.gov (United States)

    Félix-Cañedo, Thania E; Durán-Álvarez, Juan C; Jiménez-Cisneros, Blanca

    2013-06-01

    The occurrence and distribution of a group of 17 organic micropollutants in surface and groundwater sources from Mexico City were determined. Water samples were taken from 7 wells, 4 dams and 15 tanks where surface water and groundwater are mixed and stored before distribution. Results evidenced the occurrence of seven of the target compounds in groundwater: salicylic acid, diclofenac, di-2-ethylhexylphthalate (DEHP), butylbenzylphthalate (BBP), triclosan, bisphenol A (BPA) and 4-nonylphenol (4-NP). In surface water, 11 target pollutants were detected: the same found in groundwater, as well as naproxen, ibuprofen, ketoprofen and gemfibrozil. In groundwater, the concentration ranges of salicylic acid, 4-NP and DEHP, the most frequently found compounds, were 1-464, 1-47 and 19-232 ng/L, respectively; in surface water, these ranges were 29-309, 89-655 and 75-2,282 ng/L, respectively. Eleven target compounds were detected in mixed water. Concentrations in mixed water were higher than those determined in groundwater but lower than those detected in surface water. In contrast to the ground and surface water, the pesticide 2,4-D was found in mixed water, indicating that some pollutants can reach areas where they are not originally present in the local water sources. Concentrations of the organic micropollutants found in this study were similar to or lower than those reported for water sources in developed countries. This study provides information that enriches the state of the art on the occurrence of organic micropollutants in water sources worldwide, notably in megacities of developing countries. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. Improvement of Source Number Estimation Method for Single Channel Signal.

    Directory of Open Access Journals (Sweden)

    Zhi Dong

    Full Text Available Source number estimation methods for single channel signals have been investigated, and improvements for each method are suggested in this work. Firstly, the single channel data are converted to multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin's disk estimation (GDE) and minimum description length (MDL), are introduced to estimate the source number of the received signal. Previous results have shown that MDL, based on information theoretic criteria (ITC), obtains superior performance to GDE at low SNR, but it cannot handle signals containing colored noise. On the contrary, the GDE method can eliminate the influence of colored noise, but its performance at low SNR is not satisfactory. In order to resolve these problems and contradictions, this work makes substantial improvements to both methods. A diagonal loading technique is employed to improve the MDL method, and a jackknife technique is used to optimize the data covariance matrix in order to improve the performance of the GDE method. Simulation results illustrate that the performance of the original methods is improved considerably.
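
    A compact sketch of the ITC-based estimation step described here, applying the standard MDL criterion to the eigenvalues of a diagonally loaded sample covariance matrix built from delayed copies of a single-channel signal (illustrative signal and parameters; not the paper's exact algorithm):

```python
import numpy as np

rng = np.random.default_rng(7)

# Single channel -> pseudo multi-channel by stacking M delayed copies.
N, M = 4000, 8
t = np.arange(N + M)
x = np.sin(0.31 * t) + 0.7 * np.sin(0.83 * t) + 0.3 * rng.standard_normal(N + M)
X = np.stack([x[m:m + N] for m in range(M)])       # M x N delay matrix

R = X @ X.T / N
R += 1e-2 * np.trace(R) / M * np.eye(M)            # diagonal loading

lam = np.sort(np.linalg.eigvalsh(R))[::-1]         # descending eigenvalues

def mdl(k):
    """MDL criterion for k sources among M channels, N snapshots."""
    tail = lam[k:]
    g = np.exp(np.mean(np.log(tail))) / np.mean(tail)  # geometric/arithmetic mean
    return -N * (M - k) * np.log(g) + 0.5 * k * (2 * M - k) * np.log(N)

k_hat = min(range(M), key=mdl)
print(f"estimated number of sources: {k_hat}")  # expect 4: each real sinusoid spans a 2-D subspace
```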

  18. Morphology, chemistry and distribution of neoformed spherulites in agricultural land affected by metallurgical point-source pollution

    NARCIS (Netherlands)

    Leguedois, S.; Oort, van F.; Jongmans, A.G.; Chevalier, P.

    2004-01-01

    Metal distribution patterns in superficial soil horizons of agricultural land affected by metallurgical point-source pollution were studied using optical and electron microscopy, synchrotron radiation and spectroscopy analyses. The site is located in northern France, at the center of a former entry

  19. Distribution, partitioning and sources of polycyclic aromatic hydrocarbons in the water–SPM–sediment system of Lake Chaohu, China

    Energy Technology Data Exchange (ETDEWEB)

    Qin, Ning [MOE Laboratory for Earth Surface Processes, College of Urban and Environmental Sciences, Peking University, Beijing 100871 (China); State Key Laboratory of Environmental Criteria and Risk Assessment, Chinese Research Academy of Environmental Sciences, Beijing 100012 (China); He, Wei; Kong, Xiang-Zhen; Liu, Wen-Xiu; He, Qi-Shuang; Yang, Bin; Wang, Qing-Mei; Yang, Chen; Jiang, Yu-Jiao [MOE Laboratory for Earth Surface Processes, College of Urban and Environmental Sciences, Peking University, Beijing 100871 (China); Jorgensen, Sven Erik [Section of Toxicology and Environmental Chemistry, Institute A, University of Copenhagen, University Park 2, DK 2100, Copenhagen Ø (Denmark); Xu, Fu-Liu, E-mail: xufl@urban.pku.edu.cn [MOE Laboratory for Earth Surface Processes, College of Urban and Environmental Sciences, Peking University, Beijing 100871 (China); Zhao, Xiao-Li, E-mail: zhaoxiaoli_zxl@126.com [State Key Laboratory of Environmental Criteria and Risk Assessment, Chinese Research Academy of Environmental Sciences, Beijing 100012 (China)

    2014-10-15

    The residual levels of polycyclic aromatic hydrocarbons (PAHs) in the water, suspended particulate matter (SPM) and sediment from Lake Chaohu were measured with a gas chromatograph–mass spectrometer (GC–MS). The spatial–temporal distributions and the SPM–water partition of PAHs and their influencing factors were investigated. The potential sources and contributions of PAHs in the sediment were estimated by positive matrix factorization (PMF) and probabilistic stable isotopic analysis (PSIA). The results showed that the average residual levels of total PAHs (PAH16) in the water, SPM and sediment were 170.7 ± 70.8 ng/L, 210.7 ± 160.7 ng/L and 908.5 ± 1878.1 ng/g dry weight, respectively. The same spatial distribution trend of PAH16 in the water, SPM and sediment was found from high to low: river inflows > western lake > eastern lake > water source area. There was an obvious seasonal trend of PAH16 in the water, while no obvious seasonal trend was found in the SPM. The residues and distributions of PAHs in the water, SPM and sediment relied heavily on carbon content. Significant Pearson correlations were found between log K_oc and log K_ow as well as some hydro-meteorological factors. Three major sources of PAHs, including coal and biomass combustion and vehicle emissions, were identified. - Highlights: • Highest residual level of total PAHs in the SPM was detected. • Similar spatial trend of PAH16 in the water, SPM and sediment. • PAHs distributions in the water-sediment system relied heavily on organic carbon. • Correlations between log K_oc and log K_ow as well as hydro-meteorological factors. • Coal and biomass combustion and vehicle emissions were three major sources of PAHs.
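
    For context, the partition coefficients the record correlates follow the standard relation K_oc = K_d / f_oc, with K_d the sediment-water distribution ratio. A back-of-the-envelope sketch (the two concentrations reuse the record's reported PAH16 averages; the organic-carbon fraction is an invented placeholder):

```python
import math

# Illustrative sediment-water partitioning numbers (K_oc = K_d / f_oc).
c_sediment = 908.5            # ng/g dry weight (record's sediment average)
c_water = 170.7               # ng/L (record's water average)
f_oc = 0.02                   # hypothetical sediment organic-carbon fraction

kd = (c_sediment * 1000.0) / c_water   # (ng/kg)/(ng/L) -> L/kg
koc = kd / f_oc
print(f"log Kd  = {math.log10(kd):.2f}")    # ~3.73
print(f"log Koc = {math.log10(koc):.2f}")   # ~5.43
```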

  20. Convolutive Blind Source Separation Methods

    DEFF Research Database (Denmark)

    Pedersen, Michael Syskind; Larsen, Jan; Kjems, Ulrik

    2008-01-01

    During the past decades, much attention has been given to the separation of mixed sources, in particular for the blind case where both the sources and the mixing process are unknown and only recordings of the mixtures are available. In several situations it is desirable to recover all sources from...... the recorded mixtures, or at least to segregate a particular source. Furthermore, it may be useful to identify the mixing process itself to reveal information about the physical mixing system. In some simple mixing models each recording consists of a sum of differently weighted source signals. However, in many...... real-world applications, such as in acoustics, the mixing process is more complex. In such systems, the mixtures are weighted and delayed, and each source contributes to the sum with multiple delays corresponding to the multiple paths by which an acoustic signal propagates to a microphone...

  1. A Modular GIS-Based Software Architecture for Model Parameter Estimation using the Method of Anchored Distributions (MAD)

    Science.gov (United States)

    Ames, D. P.; Osorio-Murillo, C.; Over, M. W.; Rubin, Y.

    2012-12-01

    The Method of Anchored Distributions (MAD) is an inverse modeling technique that is well-suited for estimation of spatially varying parameter fields using limited observations and Bayesian methods. This presentation will discuss the design, development, and testing of a free software implementation of the MAD technique using the open source DotSpatial geographic information system (GIS) framework, R statistical software, and the MODFLOW groundwater model. This new tool, dubbed MAD-GIS, is built using a modular architecture that supports the integration of external analytical tools and models for key computational processes including a forward model (e.g. MODFLOW, HYDRUS) and geostatistical analysis (e.g. R, GSLIB). The GIS-based graphical user interface provides a relatively simple way for new users of the technique to prepare the spatial domain, to identify observation and anchor points, to perform the MAD analysis using a selected forward model, and to view results. MAD-GIS uses the Managed Extensibility Framework (MEF) provided by the Microsoft .NET programming platform to support integration of different modeling and analytical tools at run-time through a custom "driver." Each driver establishes a connection with external programs through a programming interface, which provides the elements for communicating with the core MAD software. This presentation gives an example of adapting MODFLOW to serve as the external forward model in MAD-GIS for inferring the distribution functions of key MODFLOW parameters. Additional drivers for other models are being developed, and it is expected that the open source nature of the project will engender the development of additional model drivers by 3rd-party scientists.

  2. Use of ultrasonic array method for positioning multiple partial discharge sources in transformer oil.

    Science.gov (United States)

    Xie, Qing; Tao, Junhan; Wang, Yongqiang; Geng, Jianghai; Cheng, Shuyi; Lü, Fangcheng

    2014-08-01

    Fast and accurate positioning of partial discharge (PD) sources in transformer oil is very important for the safe, stable operation of power systems because it allows timely elimination of insulation faults. There is usually more than one PD source once an insulation fault occurs in the transformer oil. This study, which has both theoretical and practical significance, proposes a method of identifying multiple PD sources in the transformer oil. The method combines the two-sided correlation transformation algorithm for broadband signal focusing with the modified Gerschgorin disk estimator. The multiple signal classification (MUSIC) method is then used to determine the directions of arrival of signals from multiple PD sources. The ultrasonic array positioning method is based on multi-platform direction finding and global optimization searching. A 4 × 4 square planar ultrasonic sensor array and an ultrasonic array detection platform were built to test the method of identifying and positioning multiple PD sources. The obtained results verify the validity and the engineering practicability of this method.
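
    The record's multiple-signal-classification step can be illustrated with a minimal narrowband MUSIC sketch on a uniform linear array (the paper itself uses a 4 × 4 planar array with broadband focusing); every number below is made up for the demonstration.

        # Minimal narrowband MUSIC sketch on a uniform linear array.
        import numpy as np
        from scipy.signal import find_peaks

        n_sensors, n_sources, n_snapshots = 8, 2, 500
        d_over_lambda = 0.5                       # element spacing in wavelengths
        true_doas = np.deg2rad([-20.0, 35.0])

        def steering(theta):
            k = np.arange(n_sensors)[:, None]
            return np.exp(-2j * np.pi * d_over_lambda * k * np.sin(np.atleast_1d(theta)))

        rng = np.random.default_rng(2)
        S = (rng.standard_normal((n_sources, n_snapshots))
             + 1j * rng.standard_normal((n_sources, n_snapshots)))
        X = steering(true_doas) @ S
        X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

        R = X @ X.conj().T / n_snapshots            # sample covariance matrix
        _, vecs = np.linalg.eigh(R)                 # eigenvalues in ascending order
        En = vecs[:, : n_sensors - n_sources]       # noise subspace

        grid = np.deg2rad(np.linspace(-90.0, 90.0, 1801))
        P = 1.0 / np.sum(np.abs(En.conj().T @ steering(grid)) ** 2, axis=0)
        pk, _ = find_peaks(P)
        best = pk[np.argsort(P[pk])[-n_sources:]]
        print(np.rad2deg(np.sort(grid[best])))      # close to [-20, 35]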

  3. The synthesis method for design of electron flow sources

    Science.gov (United States)

    Alexahin, Yu I.; Molodozhenzev, A. Yu

    1997-01-01

    A synthesis method for designing a relativistic, magnetically focused beam source is described in this paper. It makes it possible to find the electrode shapes necessary to produce laminar space-charge flows. Electron guns with shielded cathodes designed with this method were analyzed using the EGUN code; the obtained results show agreement between the synthesis and analysis calculations [1]. This method of electron gun calculation may also be applied to immersed electron flows, which are of interest for EBIS electron gun design.

  4. Electron energy distributions and electron impact source functions in Ar/N{sub 2} inductively coupled plasmas using pulsed power

    Energy Technology Data Exchange (ETDEWEB)

    Logue, Michael D., E-mail: mdlogue@umich.edu; Kushner, Mark J., E-mail: mjkush@umich.edu [Department of Electrical Engineering and Computer Science, University of Michigan, 1301 Beal Ave., Ann Arbor, Michigan 48109-2122 (United States)

    2015-01-28

    In plasma materials processing, such as plasma etching, control of the time-averaged electron energy distributions (EEDs) in the plasma allows for control of the time-averaged electron impact source functions of reactive species in the plasma and their fluxes to surfaces. One potential method for refining the control of EEDs is through the use of pulsed power. Inductively coupled plasmas (ICPs) are attractive for using pulsed power in this manner because the EEDs are dominantly controlled by the ICP power, as opposed to the bias power applied to the substrate. In this paper, we discuss results from a computational investigation of EEDs and electron impact source functions in low pressure (5–50 mTorr) ICPs sustained in Ar/N{sub 2} for various duty cycles. We find that EEDs, and thus source functions, can be controlled by pulsing the ICP power, with the greatest variability of the EEDs located within the skin depth of the electromagnetic field. The transit time of hot electrons produced in the skin depth at the onset of the power pulse produces a delay in the response of the EEDs as a function of distance from the coils. The choice of ICP pressure has a large impact on the dynamics of the EEDs, whereas the duty cycle has a small influence on time-averaged EEDs and source functions.

  5. Quantitative method for measurement of the Goos-Hanchen effect based on source divergence considerations

    International Nuclear Information System (INIS)

    Gray, Jeffrey F.; Puri, Ashok

    2007-01-01

    In this paper we report on a method for quantitative measurement and characterization of the Goos-Hanchen effect based upon the real-world performance of optical sources. A numerical model of a nonideal plane wave is developed in terms of uniform divergence properties. This model is applied to the Goos-Hanchen shift equations to determine beam shift displacement characteristics, which provides quantitative estimates of finite shifts near the critical angle. As a potential technique for carrying out a meaningful comparison with experiments, a classical method of edge detection is discussed. To this end a line spread Green's function is defined which can be used to determine the effective transfer function of the near-critical-angle behavior of divergent plane waves. The process yields a distributed (blurred) output with a line spread function characteristic of the inverse square root nature of the Goos-Hanchen shift equation. A parameter of interest for measurement is given by the edge shift function. Modern imaging and image processing methods provide suitable techniques for exploiting the edge shift phenomena to attain refractive index sensitivities of the order of 10{sup -6}, comparable with recent results reported in the literature.

  6. The Chandra Source Catalog: X-ray Aperture Photometry

    Science.gov (United States)

    Kashyap, Vinay; Primini, F. A.; Glotfelty, K. J.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, I. N.; Evans, J. D.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    The Chandra Source Catalog (CSC) represents a reanalysis of the entire set of ACIS and HRC imaging observations over the 9-year Chandra mission. We describe here the method by which fluxes are measured for detected sources. Source detection is carried out on a uniform basis, using the CIAO tool wavdetect. Source fluxes are estimated post facto using a Bayesian method that accounts for background, spatial resolution effects, and contamination from nearby sources. We use gamma-function prior distributions, which can be either non-informative or, in cases where previous observations of the same source exist, strongly informative. The current implementation is, however, limited to non-informative priors. The resulting posterior probability density functions allow us to report the flux and a robust credible range on it.
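
    A much-simplified sketch of the Bayesian step: with Poisson counts, no background or contamination (which the CSC method does model), and a gamma prior, the posterior on the source rate is again a gamma distribution, giving a flux estimate and credible range directly. The prior and data values below are illustrative.

        # Conjugate gamma-Poisson sketch under simplifying assumptions.
        from scipy.stats import gamma

        alpha0, beta0 = 1.0, 0.0    # non-informative-style prior Gamma(alpha0, beta0)
        counts = 42                 # counts in the source aperture
        exposure = 1.0e4            # effective exposure (s); rate in counts/s

        alpha, beta = alpha0 + counts, beta0 + exposure
        posterior = gamma(a=alpha, scale=1.0 / beta)

        print("posterior mean rate:", posterior.mean())
        print("68% credible range:", posterior.ppf([0.16, 0.84]))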

  7. Full-Scale Turbofan Engine Noise-Source Separation Using a Four-Signal Method

    Science.gov (United States)

    Hultgren, Lennart S.; Arechiga, Rene O.

    2016-01-01

    Contributions from the combustor to the overall propulsion noise of civilian transport aircraft are starting to become important due to turbofan design trends and expected advances in the mitigation of other noise sources. During on-ground, static-engine acoustic tests, combustor noise is generally sub-dominant to other engine noise sources because of the absence of in-flight effects. Consequently, noise-source separation techniques are needed to extract combustor-noise information from the total noise signature in order to make further progress. A novel four-signal source-separation method is applied to data from a static, full-scale engine test and compared to previous methods. The new method is, in a sense, a combination of the two- and three-signal techniques and represents an attempt to alleviate some of the weaknesses of each of those approaches. This work is supported by the NASA Advanced Air Vehicles Program, Advanced Air Transport Technology Project, Aircraft Noise Reduction Subproject and the NASA Glenn Faculty Fellowship Program.

  8. Optimal Planning Method of On-load Capacity Regulating Distribution Transformers in Urban Distribution Networks after Electric Energy Replacement Considering Uncertainties

    Directory of Open Access Journals (Sweden)

    Yu Su

    2018-06-01

    Full Text Available Electric energy replacement is the umbrella term for the use of electric energy to replace oil (e.g., electric automobiles), coal (e.g., electric heating), and gas (e.g., electric cooking appliances), which increases the electrical load peak, causing greater valley/peak differences. On-load capacity regulating distribution transformers have been used to deal with loads with great valley/peak differences, so reasonably replacing conventional distribution transformers with on-load capacity regulating distribution transformers can effectively cope with load changes after electric energy replacement and reduce the no-load losses of distribution transformers. Before planning for on-load capacity regulating distribution transformers, the nodal effective load considering uncertainties within the life cycle after electric energy replacement was obtained by a Monte Carlo method. Then, according to the loss relation between on-load capacity regulating distribution transformers and conventional distribution transformers, three characteristic indexes of the annual continuous apparent power curve and replacement criteria for on-load capacity regulating distribution transformers were put forward in this paper, and a set of distribution transformer replaceable points was obtained. Next, based on cost-benefit analysis, a planning model of on-load capacity regulating distribution transformers was put forward which consists of an investment profitability index within the life cycle, an investment cost recouping index and a capacity regulating cost index. The branch and bound method was used to solve the planning model within the replaceable point set to obtain the upgrading and reconstruction scheme of distribution transformers under a certain investment. Finally, planning analysis of on-load capacity regulating distribution transformers was carried out for electric energy replacement points in one urban distribution network under three scenarios: certain load, uncertain load and nodal
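
    The record does not spell out the Monte Carlo step, so the sketch below is only a toy version of the idea: sample an uncertain nodal load profile after electric energy replacement many times and read an effective load off the resulting peak-load distribution; all numbers are invented.

        # Toy Monte Carlo sampling of an uncertain nodal load profile.
        import numpy as np

        rng = np.random.default_rng(3)
        n_trials, hours = 10000, 24

        base = 800 + 200 * np.sin(np.linspace(0, 2 * np.pi, hours))   # kW profile
        growth = rng.normal(1.10, 0.05, size=(n_trials, 1))           # replacement uptake
        noise = rng.normal(1.0, 0.08, size=(n_trials, hours))         # hourly deviation

        samples = base * growth * noise                  # (n_trials, hours)
        peak = samples.max(axis=1)                       # peak load per trial
        print("95th-percentile effective peak load [kW]:", np.percentile(peak, 95))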

  9. Study and Analysis of an Intelligent Microgrid Energy Management Solution with Distributed Energy Sources

    Directory of Open Access Journals (Sweden)

    Swaminathan Ganesan

    2017-09-01

    Full Text Available In this paper, a robust energy management solution which will facilitate the optimum and economic control of energy flows throughout a microgrid network is proposed. Renewable energy sources, whose penetration is increasing, are highly intermittent in nature; the proposed solution demonstrates highly efficient management of such generation. This study enables precise management of power flows by forecasting renewable energy generation, estimating the availability of energy at the storage batteries, and invoking the appropriate mode of operation, based on the load demand, to achieve efficient and economic operation. The predefined mode of operation is derived from an expert rule set and schedules the load and distributed energy sources along with the utility grid.

  10. Development of methods for DSM and distribution automation planning

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M.; Seppaelae, A.; Kekkonen, V.; Koreneff, G. [VTT Energy, Espoo (Finland)

    1996-12-31

    In the de-regulated electricity market, the power trading companies have to face new problems. The biggest challenges are caused by the uncertainty in the load magnitudes. In order to minimize the risks in power purchase and also in retail sales, the power traders should have as reliable and accurate estimates for hourly demands of their customers as possible. New tools have been developed for the distribution load estimation and for the management of energy balances of the trading companies. These tools are based on the flexible combination of the information available from several sources, like direct customer measurements, network measurements, load models and statistical data. These functions also serve as an information source for higher level activities of the electricity selling companies. These activities and the associated functions have been studied in the prototype system called DEM, which is now being developed for the operation of Finnish utilities in the newly de-regulated power market

  11. Development of methods for DSM and distribution automation planning

    International Nuclear Information System (INIS)

    Lehtonen, M.; Seppaelae, A.; Kekkonen, V.; Koreneff, G.

    1996-01-01

    In the de-regulated electricity market, the power trading companies have to face new problems. The biggest challenges are caused by the uncertainty in the load magnitudes. In order to minimize the risks in power purchase and also in retail sales, the power traders should have as reliable and accurate estimates for hourly demands of their customers as possible. New tools have been developed for the distribution load estimation and for the management of energy balances of the trading companies. These tools are based on the flexible combination of the information available from several sources, like direct customer measurements, network measurements, load models and statistical data. These functions also serve as an information source for higher level activities of the electricity selling companies. These activities and the associated functions have been studied in the prototype system called DEM, which is now being developed for the operation of Finnish utilities in the newly de-regulated power market

  12. Development of methods for DSM and distribution automation planning

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M; Seppaelae, A; Kekkonen, V; Koreneff, G [VTT Energy, Espoo (Finland)

    1997-12-31

    In the de-regulated electricity market, the power trading companies have to face new problems. The biggest challenges are caused by the uncertainty in the load magnitudes. In order to minimize the risks in power purchase and also in retail sales, the power traders should have as reliable and accurate estimates for hourly demands of their customers as possible. New tools have been developed for the distribution load estimation and for the management of energy balances of the trading companies. These tools are based on the flexible combination of the information available from several sources, like direct customer measurements, network measurements, load models and statistical data. These functions also serve as an information source for higher level activities of the electricity selling companies. These activities and the associated functions have been studied in the prototype system called DEM, which is now being developed for the operation of Finnish utilities in the newly de-regulated power market

  13. Developing an Open Source, Reusable Platform for Distributed Collaborative Information Management in the Early Detection Research Network

    Science.gov (United States)

    Hart, Andrew F.; Verma, Rishi; Mattmann, Chris A.; Crichton, Daniel J.; Kelly, Sean; Kincaid, Heather; Hughes, Steven; Ramirez, Paul; Goodale, Cameron; Anton, Kristen; et al.

    2012-01-01

    For the past decade, the NASA Jet Propulsion Laboratory, in collaboration with Dartmouth University, has served as the center for informatics for the Early Detection Research Network (EDRN). The EDRN is a multi-institution research effort funded by the U.S. National Cancer Institute (NCI) and tasked with identifying and validating biomarkers for the early detection of cancer. As the distributed network has grown, increasingly formal processes have been developed for the acquisition, curation, storage, and dissemination of heterogeneous research information assets, and an informatics infrastructure has emerged. In this paper we discuss the evolution of EDRN informatics, its success as a mechanism for distributed information integration, and the potential sustainability and reuse benefits of emerging efforts to make the platform components themselves open source. We describe our experience transitioning a large closed-source software system to a community-driven, open source project at the Apache Software Foundation, and point to lessons learned that will guide our present efforts to promote the reuse of the EDRN informatics infrastructure by a broader community.

  14. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    Science.gov (United States)

    Hortos, William S.

    2008-04-01

    The proposed distributed wavelet-based algorithms compress sensor data received at the nodes of a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content, as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at
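
    The in-cluster lifting transform mentioned above can be illustrated by a single Haar-style predict/update level; the paper's distributed, multi-level variant splits these steps across neighboring nodes, which this minimal single-machine sketch does not attempt.

        # One level of a Haar-style lifting wavelet transform (predict/update).
        import numpy as np

        def haar_lifting_forward(x):
            even, odd = x[0::2].astype(float), x[1::2].astype(float)
            detail = odd - even             # predict: odd samples from even ones
            approx = even + detail / 2.0    # update: preserve the running average
            return approx, detail

        def haar_lifting_inverse(approx, detail):
            even = approx - detail / 2.0
            odd = detail + even
            x = np.empty(even.size + odd.size)
            x[0::2], x[1::2] = even, odd
            return x

        x = np.array([2.0, 4.0, 4.0, 8.0, 9.0, 9.0])   # even-length signal assumed
        a, d = haar_lifting_forward(x)
        assert np.allclose(haar_lifting_inverse(a, d), x)
        print(a, d)   # smooth approximation plus small, compressible details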

  15. Characterization of a Distributed Plasma Ionization Source (DPIS) for Ion Mobility Spectrometry and Mass Spectrometry

    International Nuclear Information System (INIS)

    Waltman, Melanie J.; Dwivedi, Prabha; Hill, Herbert; Blanchard, William C.; Ewing, Robert G.

    2008-01-01

    A recently developed atmospheric pressure ionization source, the distributed plasma ionization source (DPIS), was characterized and compared to commonly used atmospheric pressure ionization sources with both mass spectrometry and ion mobility spectrometry. The source consisted of two electrodes of different sizes separated by a thin dielectric. Application of a high RF voltage across the electrodes generated a plasma in air, yielding both positive and negative ions depending on the polarity of the applied potential. These reactant ions subsequently ionized the analyte vapors. The reactant ions generated were similar to those created in a conventional point-to-plane corona discharge ion source. The positive reactant ions generated by the source were mass identified as solvated protons of general formula (H{sub 2}O){sub n}H{sup +}, with (H{sub 2}O){sub 2}H{sup +} as the most abundant reactant ion. The negative reactant ions produced were mass identified primarily as CO{sub 3}{sup -}, NO{sub 3}{sup -}, NO{sub 2}{sup -}, O{sub 3}{sup -} and O{sub 2}{sup -} of various relative intensities. The predominant ion and relative ion ratios varied depending upon source construction and supporting gas flow rates. A few compounds, including drugs, explosives and environmental pollutants, were selected to evaluate the new ionization source. The source was operated continuously for several months and, although deterioration was observed visually, the source continued to produce ions at a rate similar to that of the initial conditions. The results indicated that the DPIS may have a longer operating life than a conventional corona discharge source.

  16. Methods for evaluating information sources

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2012-01-01

    The article briefly presents and discusses 12 different approaches to the evaluation of information sources (for example a Wikipedia entry or a journal article): (1) the checklist approach; (2) classical peer review; (3) modified peer review; (4) evaluation based on examining the coverage of controversial views; (5) evidence-based evaluation; (6) comparative studies; (7) author credentials; (8) publisher reputation; (9) journal impact factor; (10) sponsoring: tracing the influence of economic, political, and ideological interests; (11) book reviews and book reviewing; and (12) broader criteria. Reading a text is often not a simple process. All the methods discussed here are steps on the way to learning how to read, understand, and criticize texts. According to hermeneutics this involves the subjectivity of the reader, and that subjectivity is influenced, more or less, by different theoretical...

  17. Power distribution arrangement

    DEFF Research Database (Denmark)

    2010-01-01

    An arrangement and a method for distributing power supplied by a power source to two or more loads (e.g., electrical vehicular systems) is disclosed, where a representation of the power taken by a particular one of the loads from the source is measured. The measured representation of the amount of power taken from the source by the particular one of the loads is compared to a threshold to provide an overload signal in the event the representation exceeds the threshold. Control signals dependent on the occurrence of the overload signal are provided such that the control signal decreases the output power of the power circuit in case the overload signal occurs.

  18. A normalization method for combination of laboratory test results from different electronic healthcare databases in a distributed research network.

    Science.gov (United States)

    Yoon, Dukyong; Schuemie, Martijn J; Kim, Ju Han; Kim, Dong Ki; Park, Man Young; Ahn, Eun Kyoung; Jung, Eun-Young; Park, Dong Kyun; Cho, Soo Yeon; Shin, Dahye; Hwang, Yeonsoo; Park, Rae Woong

    2016-03-01

    Distributed research networks (DRNs) afford statistical power by integrating observational data from multiple partners for retrospective studies. However, laboratory test results across care sites are derived using different assays from varying patient populations, making it difficult to simply combine data for analysis. Additionally, existing normalization methods are not suitable for retrospective studies. We normalized laboratory results from different data sources by adjusting for heterogeneous clinico-epidemiologic characteristics of the data and called this the subgroup-adjusted normalization (SAN) method. Subgroup-adjusted normalization renders the means and standard deviations of distributions identical under population structure-adjusted conditions. To evaluate its performance, we compared SAN with existing methods for simulated and real datasets consisting of blood urea nitrogen, serum creatinine, hematocrit, hemoglobin, serum potassium, and total bilirubin. Various clinico-epidemiologic characteristics can be applied together in SAN. For simplicity of comparison, age and gender were used to adjust population heterogeneity in this study. In simulations, SAN had the lowest standardized difference in means (SDM) and Kolmogorov-Smirnov values for all tests; on the real datasets, SAN performed better than the other normalization methods. The SAN method is applicable in a DRN environment and should facilitate analysis of data integrated across DRN partners for retrospective observational studies. Copyright © 2015 John Wiley & Sons, Ltd.
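
    The exact SAN procedure is defined in the paper; the toy sketch below only shows the core idea of matching means and standard deviations within clinico-epidemiologic subgroups (here, gender and an age band) before pooling across sites. The data and grouping choices are invented.

        # Toy subgroup-based normalization across sites, in the spirit of SAN.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(4)
        df = pd.DataFrame({
            "site": rng.choice(["A", "B"], 1000),
            "gender": rng.choice(["F", "M"], 1000),
            "age_band": rng.choice(["<40", "40-65", ">65"], 1000),
            "creatinine": rng.normal(1.0, 0.3, 1000),
        })
        df.loc[df.site == "B", "creatinine"] *= 1.2   # simulate an assay offset

        # Standardize within site x gender x age subgroup ...
        grp = df.groupby(["site", "gender", "age_band"])["creatinine"]
        z = (df["creatinine"] - grp.transform("mean")) / grp.transform("std")

        # ... then rescale to a common reference distribution per subgroup
        ref = df.groupby(["gender", "age_band"])["creatinine"]
        df["creatinine_san"] = z * ref.transform("std") + ref.transform("mean")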

  19. Parallel Resolved Open Source CFD-DEM: Method, Validation and Application

    Directory of Open Access Journals (Sweden)

    A. Hager

    2014-03-01

    Full Text Available In the following paper the authors present a fully parallelized Open Source method for calculating the interaction of immersed bodies and the surrounding fluid. A combination of computational fluid dynamics (CFD) and a discrete element method (DEM) accounts for the physics of both the fluid and the particles. The objects considered are relatively big compared to the cells of the fluid mesh, i.e. they cover several cells each. Thus this fictitious domain method (FDM) is called resolved. The implementation is realized within the Open Source framework CFDEMcoupling (www.cfdem.com), which provides an interface between OpenFOAM® based CFD solvers and the DEM software LIGGGHTS (www.liggghts.com). While both LIGGGHTS and OpenFOAM® were already parallelized, only a recent improvement of the algorithm permits the fully parallel computation of resolved problems. Along with a detailed description of the method, its implementation and recent improvements, a number of application and validation examples are presented in the scope of this paper.

  20. Loss distribution approach for operational risk capital modelling under Basel II: Combining different data sources for risk estimation

    Directory of Open Access Journals (Sweden)

    Pavel V. Shevchenko

    2013-07-01

    Full Text Available The management of operational risk in the banking industry has undergone significant changes over the last decade due to substantial changes in the operational risk environment. Globalization, deregulation, the use of complex financial products and changes in information technology have resulted in exposure to new risks very different from market and credit risks. In response, the Basel Committee on Banking Supervision has developed a regulatory framework, referred to as Basel II, that introduced an operational risk category and corresponding capital requirements. Over the past five years, major banks in most parts of the world have received accreditation under the Basel II Advanced Measurement Approach (AMA) by adopting the loss distribution approach (LDA), despite there being a number of unresolved methodological challenges in its implementation. Different approaches and methods are still under hot debate. In this paper, we review methods proposed in the literature for combining different data sources (internal data, external data and scenario analysis), which is one of the regulatory requirements for the AMA.
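
    A bare-bones LDA calculation for a single risk cell can be sketched as a compound Poisson-lognormal simulation; in practice, combining internal data, external data and scenario analysis (the subject of the paper) is what pins down the frequency and severity parameters, which are purely illustrative here.

        # Compound Poisson-lognormal aggregate loss simulation (illustrative).
        import numpy as np

        rng = np.random.default_rng(5)
        n_years = 20_000
        lam, mu, sigma = 25.0, 9.0, 1.8          # assumed annual frequency/severity

        annual_losses = np.array([
            rng.lognormal(mu, sigma, n).sum()    # sum of that year's loss events
            for n in rng.poisson(lam, n_years)
        ])

        var_999 = np.quantile(annual_losses, 0.999)   # Basel II capital ~ 99.9% VaR
        print(f"99.9% VaR of annual aggregate loss: {var_999:,.0f}")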

  1. Localization Accuracy of Distributed Inverse Solutions for Electric and Magnetic Source Imaging of Interictal Epileptic Discharges in Patients with Focal Epilepsy.

    Science.gov (United States)

    Heers, Marcel; Chowdhury, Rasheda A; Hedrich, Tanguy; Dubeau, François; Hall, Jeffery A; Lina, Jean-Marc; Grova, Christophe; Kobayashi, Eliane

    2016-01-01

    Distributed inverse solutions aim to realistically reconstruct the origin of interictal epileptic discharges (IEDs) from noninvasively recorded electroencephalography (EEG) and magnetoencephalography (MEG) signals. Our aim was to compare the performance of different distributed inverse solutions in localizing IEDs: coherent maximum entropy on the mean (cMEM) and hierarchical Bayesian implementations of independent identically distributed sources (IID, minimum norm prior) and spatially coherent sources (COH, spatial smoothness prior). Source maxima (i.e., the vertex with the maximum source amplitude) of IEDs in 14 EEG and 19 MEG studies from 15 patients with focal epilepsy were analyzed. We visually compared their concordance with intracranial EEG (iEEG) based on 17 cortical regions of interest, and their spatial dispersion around source maxima. Magnetic source imaging (MSI) maxima from cMEM were most often confirmed by iEEG (cMEM: 14/19, COH: 9/19, IID: 8/19 studies). COH electric source imaging (ESI) maxima co-localized best with iEEG (cMEM: 8/14, COH: 11/14, IID: 10/14 studies). In addition, cMEM was less spatially spread than COH and IID for ESI and MSI (p < 0.001, Bonferroni-corrected post hoc t test). The highest positive predictive values for cortical regions with IEDs in iEEG could be obtained with cMEM for MSI and with COH for ESI. Additional realistic EEG/MEG simulations confirmed our findings. Accurate spatially extended sources, as found in cMEM (ESI and MSI) and COH (ESI), are desirable for source imaging of IEDs because this might influence surgical decisions. Our simulations suggest that COH and IID overestimate the spatial extent of the generators compared to cMEM.

  2. Dose distribution around Ir192 brachytherapy source in non-full scattering conditions: comparison of in-phantom measurements and Nucletron-Oldelft plato system calculations

    International Nuclear Information System (INIS)

    Jastrzembski, Michal; Kabacinska, Renata; Makarewicz, Roman

    1996-01-01

    Introduction: When the doses measured in vivo during gynaecological brachytherapy are compared with those computed with the Nucletron-Oldelft brachytherapy treatment planning system (BPS), a high level of uncertainty appears. For points located close to the media border, this is partly due to the lack of scattering in this region. The influence of the lack of scattering on the dose distribution has been investigated, and the measured data have been compared to those given by the Nucletron-Oldelft BPS. Materials and methods: Profiles in a large water phantom (PTW MP3 system) were measured in directions perpendicular to the long axis of the fixed source at varied water levels and at varied source-to-detector distances. Normalization values for the curves were acquired by absolute dose measurements. The obtained data were compared to profiles calculated in the same axes by the Nucletron-Oldelft BPS. Results: The lack of scattering in the region close to the water surface (up to 8 cm) results in a significant drop in the measured dose. The decrease depends both on the distance from the medium border and on the distance from the source. For a source-to-detector distance of 6.5 cm, the difference between calculated and measured dose is 8% for 3 cm and 21% for 1 cm of water above the source. Profiles in this region become flattened and asymmetric according to the drop in dose level. Conclusions: The lack of scattering in the region close to the patient's skin results in a significant drop in dose which is not taken into account by the Nucletron-Oldelft BPS. This means that the dose distribution calculated in this region by the system is not correct.

  3. Simultaneous distribution of AC and DC power

    Science.gov (United States)

    Polese, Luigi Gentile

    2015-09-15

    A system and method for the transport and distribution of both AC (alternating current) power and DC (direct current) power over wiring infrastructure normally used for distributing AC power only, for example, the electrical wires of residential and/or commercial buildings, is disclosed and taught. The system and method permit the combining of AC and DC power sources and the simultaneous distribution of the resulting power over the same wiring. At the utilization site a complementary device permits the separation of the DC power from the AC power and their reconstruction, for use in conventional AC-only and DC-only devices.

  4. Analytical method for reconstruction pin to pin of the nuclear power density distribution

    International Nuclear Information System (INIS)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.

    2013-01-01

    An accurate and efficient method for the pin-by-pin reconstruction of the nuclear power density distribution, involving the analytical solution of the two-dimensional neutron diffusion equation for two energy groups in homogeneous nodes, is presented. The boundary conditions used for the analytical solution are the four currents or fluxes on the surfaces of the node, which are obtained by the Nodal Expansion Method (NEM), and the four fluxes at the vertices of the node, calculated using the finite difference method. The analytical solution found is the homogeneous distribution of the neutron flux. Detailed pin-by-pin distributions inside a fuel assembly are estimated as the product of the homogeneous flux distribution and a local heterogeneous form function. Furthermore, the form functions of flux and power are used. The results obtained with this method have good accuracy when compared with reference values. (author)
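
    The final reconstruction step described above amounts to modulating a smooth homogeneous nodal flux with a precomputed heterogeneous form function; the sketch below shows that multiplication on an assumed 17 × 17 pin lattice with invented values.

        # Toy pin-power reconstruction: homogeneous flux times form function.
        import numpy as np

        n = 17                                   # assumed 17 x 17 pin lattice
        y, x = np.mgrid[0:n, 0:n] / (n - 1)

        # Smooth homogeneous (nodal) flux over the fuel assembly
        phi_hom = 1.0 + 0.3 * np.sin(np.pi * x) * np.sin(np.pi * y)

        # Heterogeneous form function from lattice calculations (illustrative)
        form = np.ones((n, n))
        form[::4, ::4] = 0.85                    # e.g., guide tubes depress power

        power = phi_hom * form                   # detailed pin-wise power shape
        power *= n * n / power.sum()             # normalize to unit average
        print(power.max(), power.min())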

  5. Analytical method for reconstruction pin to pin of the nuclear power density distribution

    Energy Technology Data Exchange (ETDEWEB)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S., E-mail: ppessoa@con.ufrj.br, E-mail: fernando@con.ufrj.br, E-mail: aquilino@imp.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil)

    2013-07-01

    An accurate and efficient method for the pin-by-pin reconstruction of the nuclear power density distribution, involving the analytical solution of the two-dimensional neutron diffusion equation for two energy groups in homogeneous nodes, is presented. The boundary conditions used for the analytical solution are the four currents or fluxes on the surfaces of the node, which are obtained by the Nodal Expansion Method (NEM), and the four fluxes at the vertices of the node, calculated using the finite difference method. The analytical solution found is the homogeneous distribution of the neutron flux. Detailed pin-by-pin distributions inside a fuel assembly are estimated as the product of the homogeneous flux distribution and a local heterogeneous form function. Furthermore, the form functions of flux and power are used. The results obtained with this method have good accuracy when compared with reference values. (author)

  6. Study of classification and disposed method for disused sealed radioactive source in Korea

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Suk Hoon; Kim, Ju Youl; Lee, Seung Hee [FNC Technology Co., Ltd.,Yongin (Korea, Republic of)

    2016-09-15

    In accordance with the classification system for radioactive waste in Korea, all disused sealed radioactive sources (DSRSs) fall under the category of EW, VLLW or LILW, and should be managed in compliance with the restrictions on the disposal method. In this study, the management and disposal methods are derived in consideration of the half-lives of the radionuclides contained in the source and the A/D value (i.e., the activity A of the source divided by the D value for the relevant radionuclide), which is used to provide an initial ranking of the relative risk of sources, in addition to the domestic classification scheme and disposal method, based on the characteristic analysis and a review of the management practices of the IAEA and foreign countries. For all the DSRSs being stored (as of March 2015) in the centralized temporary disposal facility for radioisotope wastes, the applicability of the derived result is confirmed by performing the characteristic analysis and case studies assessing the quantity and volume of DSRSs to be managed by each method. However, the methodology derived from this study is not applicable to the following sources: (i) DSRSs without information on their radioactivity, and (ii) DSRSs for which the specific activity and/or the source-specific A/D value cannot be calculated. Accordingly, it is essential to identify the inherent characteristics of each DSRS prior to implementation of this management and disposal method.
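
    As a sketch of how an A/D value yields an initial risk ranking, the function below buckets a source into the five categories of the IAEA source categorization scheme as commonly quoted (Category 1 for A/D ≥ 1000 down to Category 5); both the cut-offs and the example D value should be treated as illustrative rather than authoritative.

        # Illustrative A/D-based source categorization (cut-offs as commonly
        # quoted for the IAEA scheme; verify against the current standard).
        def source_category(activity_tbq: float, d_value_tbq: float) -> int:
            ratio = activity_tbq / d_value_tbq
            if ratio >= 1000: return 1
            if ratio >= 10:   return 2
            if ratio >= 1:    return 3
            if ratio >= 0.01: return 4
            return 5

        # Example: a 100 TBq source with an assumed D value of 0.03 TBq
        print(source_category(100.0, 0.03))   # -> 1 (highest relative risk)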

  7. Field distribution of a source and energy absorption in an inhomogeneous magneto-active plasma

    International Nuclear Information System (INIS)

    Galushko, N.P.; Erokhin, N.S.; Moiseev, S.S.

    1975-01-01

    In the present paper the distribution of source fields in a magnetoactive plasma is studied from the standpoint of the possibility of an effective SHF heating of an inhomogeneous plasma in both the high (ω ≈ ω{sub pe}) and low (ω ≈ ω{sub pi}) frequency ranges, where ω{sub pe} and ω{sub pi} are the electron and ion plasma frequencies. The localization of the HF energy absorption regions in cold and hot plasma and the effect of plasma inhomogeneity and source dimensions on the absorption efficiency are investigated. The linear wave transformation in an inhomogeneous hot plasma is taken into consideration. Attention is paid to the difference between the region localization for collisional and non-collisional absorption. It has been shown that the HF energy dissipation in plasma particle collisions is localized in the region of thin jets going from the source; the radiation field has a sharp peak in this region. At the same time, non-collisional HF energy dissipation is spread over the plasma volume as a result of Cherenkov and cyclotron wave attenuation. The essential contribution to the source field from resonances due to standing wave excitation in an inhomogeneous plasma shell near the source is pointed out.

  8. Uncertainty Management of Dynamic Tariff Method for Congestion Management in Distribution Networks

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei; Cheng, Lin

    2016-01-01

    The dynamic tariff (DT) method is designed for the distribution system operator (DSO) to alleviate congestions that might occur in a distribution network with high penetration of distributed energy resources (DERs). Uncertainty management is required for the decentralized DT method because the DT is determined based on optimal day-ahead energy planning with forecasted parameters such as day-ahead energy prices and energy needs which might be different from the parameters used by aggregators. The uncertainty management is to quantify and mitigate the risk of the congestion when employing...

  9. Community Based Distribution of Child Spacing Methods at ...

    African Journals Online (AJOL)

    uses volunteer CBD agents. Mrs. E.F. Pelekamoyo, Service Delivery Officer, National Family Welfare Council of Malawi, Private Bag 308, Lilongwe 3, Malawi. Community Based Distribution of Child Spacing Methods ... than us at the Hospital; male motivators, by talking to their male counterparts, help them to accept that their ...

  10. Information-theoretic methods for estimating of complicated probability distributions

    CERN Document Server

    Zong, Zhi

    2006-01-01

    Mixing various disciplines frequently produces something profound and far-reaching. Cybernetics is such an often-quoted example. The mix of information theory, statistics and computing technology has proved to be very useful, leading to the recent development of information-theory based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is the fundamental task for quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur...

  11. Information system architecture to support transparent access to distributed, heterogeneous data sources

    International Nuclear Information System (INIS)

    Brown, J.C.

    1994-08-01

    Quality situation assessment and decision making require access to multiple sources of data and information. Insufficient accessibility to data exists in many large corporations and government agencies. By utilizing current advances in computer technology, today's situation analysts have a wealth of information at their disposal. There are many potential solutions to the information accessibility problem using today's technology. The United States Department of Energy (US-DOE) faced this problem when dealing with one class of problems in the US. The result of their efforts has been the creation of the Tank Waste Information Network System -- TWINS. The TWINS solution combines many technologies to address problems in several areas, such as user interfaces, transparent access to multiple data sources, and integrated data access. Data related to the complex are currently distributed throughout several US-DOE installations. Over time, each installation has adopted its own set of standards for information management. Heterogeneous hardware and software platforms exist both across the complex and within a single installation. Standards for information management vary between US-DOE mission areas within installations. These factors contribute to the complexity of accessing information in a manner that enhances the performance and decision making process of the analysts. This paper presents one approach taken by the DOE to resolve the problem of distributed, heterogeneous, multi-media information management for the HLW Tank complex. The information system architecture developed for the DOE by the TWINS effort is one that is adaptable to other problem domains and uses

  12. Energy efficiency optimization in distribution transformers considering Spanish distribution regulation policy

    International Nuclear Information System (INIS)

    Pezzini, Paola; Gomis-Bellmunt, Oriol; Frau-Valenti, Joan; Sudria-Andreu, Antoni

    2010-01-01

    In transmission and distribution systems, the high number of installed transformers, a source of losses in networks, suggests a good potential for energy savings. This paper presents how the Spanish distribution regulation policy, Royal Decree 222/2008, affects the overall energy efficiency of distribution transformers. The objective of a utility is to maximize its benefit and, in case of failures, to install a suitably chosen transformer in order to maximize profit. Here, a novel method to optimize energy efficiency, considering the constraints set by the Spanish distribution regulation policy, is presented; its aim is to achieve the objectives of the utility when installing new transformers. The resulting increase in overall energy efficiency is a clear result that can help in meeting the requirements of European environmental plans, such as the '20-20-20' action plan.

  13. Optimal operation management of fuel cell/wind/photovoltaic power sources connected to distribution networks

    Science.gov (United States)

    Niknam, Taher; Kavousifard, Abdollah; Tabatabaei, Sajad; Aghaei, Jamshid

    2011-10-01

    In this paper a new multiobjective modified honey bee mating optimization (MHBMO) algorithm is presented to investigate the distribution feeder reconfiguration (DFR) problem considering renewable energy sources (RESs) (photovoltaic, fuel cell and wind energy units) connected to the distribution network. The objective functions of the problem to be minimized are the electrical active power losses, the voltage deviations, the total electrical energy costs and the total emissions of RESs and substations. During the optimization process, the proposed algorithm finds a set of non-dominated (Pareto) optimal solutions which are stored in an external memory called the repository. Since the objective functions investigated are not the same, a fuzzy clustering algorithm is utilized to keep the size of the repository within the specified limits. Moreover, a fuzzy-based decision maker is adopted to select the 'best' compromise solution among the non-dominated optimal solutions of the multiobjective optimization problem. To verify the feasibility and effectiveness of the proposed algorithm, two standard distribution test systems are used as case studies.
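
    The repository of non-dominated solutions mentioned above can be illustrated with a small Pareto-dominance filter for minimization objectives (losses, voltage deviation, cost, emissions); the candidate solutions here are random stand-ins.

        # Extract the non-dominated (Pareto) set for minimization objectives.
        import numpy as np

        def pareto_mask(F):
            """F: (n_solutions, n_objectives), all objectives minimized."""
            n = F.shape[0]
            mask = np.ones(n, dtype=bool)
            for i in range(n):
                if not mask[i]:
                    continue
                # j dominates i if j is <= in every objective and < in at least one
                dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
                if dominated.any():
                    mask[i] = False
            return mask

        F = np.random.default_rng(6).random((200, 4))   # 200 candidate solutions
        print("non-dominated solutions:", pareto_mask(F).sum())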

  14. Open source intelligence: A tool to combat illicit trafficking

    Energy Technology Data Exchange (ETDEWEB)

    Sjoeberg, J [Swedish Armed Forces HQ, Stockholm (Sweden)

    2001-10-01

    The purpose of my presentation is to provide some thoughts on Open Sources and how Open Sources can be used as tools for detecting illicit trafficking and proliferation. To fulfill this purpose I would like to deal with the following points during my presentation: What is Open Source? How can it be defined? - Different sources - Methods. Open Source information can be defined as publicly available information as well as other unclassified information that has limited public distribution or access to it. It comes in print, electronic or oral form. It can be found distributed either to the mass public by print or electronic media or to a much more limited customer base like companies, experts or specialists of some kind, including the so-called gray literature. Open Source information is not a single source but a multi-source. Thus, the term Open Source says nothing about the information itself; it only refers to whether the information is classified or not.

  15. Open source intelligence: A tool to combat illicit trafficking

    International Nuclear Information System (INIS)

    Sjoeberg, J.

    2001-01-01

    The purpose of my presentation is to provide some thoughts on Open Sources and how Open Sources can be used as tools for detecting illicit trafficking and proliferation. To fulfill this purpose I would like to deal with the following points during my presentation: What is Open Source? How can it be defined? - Different sources - Methods. Open Source information can be defined as publicly available information as well as other unclassified information that has limited public distribution or access to it. It comes in print, electronic or oral form. It can be found distributed either to the mass public by print or electronic media or to a much more limited customer base like companies, experts or specialists of some kind, including the so-called gray literature. Open Source information is not a single source but a multi-source. Thus, the term Open Source says nothing about the information itself; it only refers to whether the information is classified or not.

  16. Quantifying the isotopic composition of NOx emission sources: An analysis of collection methods

    Science.gov (United States)

    Fibiger, D.; Hastings, M.

    2012-04-01

    We analyze various collection methods for nitrogen oxides, NOx (NO2 and NO), used to evaluate the nitrogen isotopic composition (δ15N). Atmospheric NOx is a major contributor to acid rain deposition upon its conversion to nitric acid; it also plays a significant role in determining air quality through the production of tropospheric ozone. NOx is released by both anthropogenic (fossil fuel combustion, biomass burning, aircraft emissions) and natural (lightning, biogenic production in soils) sources. Global concentrations of NOx are rising because of increased anthropogenic emissions, while natural source emissions also contribute significantly to the global NOx burden. The contributions of both natural and anthropogenic sources and their considerable variability in space and time make it difficult to attribute local NOx concentrations (and, thus, nitric acid) to a particular source. Several recent studies suggest that variability in the isotopic composition of nitric acid deposition is related to variability in the isotopic signatures of NOx emission sources. Nevertheless, the isotopic composition of most NOx sources has not been thoroughly constrained. Ultimately, the direct capture and quantification of the nitrogen isotopic signatures of NOx sources will allow for the tracing of NOx emissions sources and their impact on environmental quality. Moreover, this will provide a new means by which to verify emissions estimates and atmospheric models. We present laboratory results of methods used for capturing NOx from air into solution. A variety of methods have been used in field studies, but no independent laboratory verification of the efficiencies of these methods has been performed. When analyzing isotopic composition, it is important that NOx be collected quantitatively or the possibility of fractionation must be constrained. We have found that collection efficiency can vary widely under different conditions in the laboratory and fractionation does not vary

  17. A simple nodal force distribution method in refined finite element meshes

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jai Hak [Chungbuk National University, Chungju (Korea, Republic of); Shin, Kyu In [Gentec Co., Daejeon (Korea, Republic of); Lee, Dong Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Cho, Seungyon [National Fusion Research Institute, Daejeon (Korea, Republic of)

    2017-05-15

    In finite element analyses, mesh refinement is frequently performed to obtain accurate stress or strain values or to accurately define the geometry. After mesh refinement, equivalent nodal forces should be calculated at the nodes in the refined mesh. If field variables and material properties are available at the integration points in each element, then accurate equivalent nodal forces can be calculated using adequate numerical integration. However, in certain circumstances, equivalent nodal forces cannot be calculated because field variable data are not available. In this study, a very simple nodal force distribution method is proposed. Nodal forces of the original finite element mesh are distributed to the nodes of refined meshes to satisfy the equilibrium conditions. The effect of element size should also be considered in determining the magnitude of the distributed nodal forces. A program was developed based on the proposed method, and several example problems were solved to verify the accuracy and effectiveness of the proposed method. The results show that an accurate stress field can be obtained from refined meshes using the proposed nodal force distribution method. In the example problems, the difference between the obtained maximum stress and the target stress value was less than 6 % in models with 8-node hexahedral elements and less than 1 % in models with 20-node hexahedral elements or 10-node tetrahedral elements.
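
    A one-dimensional toy version of the distribution idea: each force at an original node is spread over the refined nodes it covers with weights that sum to one, so the total force (equilibrium) is preserved; the paper's method works in 3-D and also weighs element size.

        # 1-D toy redistribution of coarse-mesh nodal forces to a refined mesh.
        import numpy as np

        coarse_x = np.array([0.0, 1.0, 2.0])
        coarse_F = np.array([0.0, 10.0, 0.0])       # force at the middle node

        fine_x = np.linspace(0.0, 2.0, 9)           # refined mesh nodes

        # Linear (hat-function) weights of each coarse node at the fine nodes
        fine_F = np.zeros_like(fine_x)
        for xc, Fc in zip(coarse_x, coarse_F):
            w = np.clip(1.0 - np.abs(fine_x - xc) / 1.0, 0.0, None)  # support = 1.0
            fine_F += Fc * w / w.sum()              # normalize to keep the total

        assert np.isclose(fine_F.sum(), coarse_F.sum())   # equilibrium preserved
        print(fine_F)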

  18. Distributed Reactive Power Control based Conservation Voltage Reduction in Active Distribution Systems

    Directory of Open Access Journals (Sweden)

    EMIROGLU, S.

    2017-11-01

    Full Text Available This paper proposes a distributed reactive power control based approach to deploy a Volt/VAr optimization (VVO)/Conservation Voltage Reduction (CVR) algorithm in a distribution network with distributed generation (DG) units and distribution static synchronous compensators (D-STATCOMs). A three-phase VVO/CVR problem is formulated and the reactive power references of D-STATCOMs and DGs are determined in a distributed way by decomposing the VVO/CVR problem into voltage and reactive power control. The main purpose is to determine the coordination between the voltage regulator (VR) and reactive power sources (capacitors, D-STATCOMs and DGs) based on VVO/CVR. The study shows that the reactive power injection capability of DG units may play an important role in VVO/CVR. In addition, it is shown that the coordination of the VR and reactive power sources not only saves more energy and power but also reduces power losses. Moreover, the proposed VVO/CVR algorithm reduces the computational burden and finds fast solutions. To illustrate the effectiveness of the proposed method, the VVO/CVR is performed on the IEEE 13-node test feeder considering unbalanced loading and line configurations. The tests are performed taking practical voltage-dependent load modeling and different customer types into consideration to improve accuracy.

  19. Rock sampling. [method for controlling particle size distribution

    Science.gov (United States)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  20. Finite difference applied to the reconstruction method of the nuclear power density distribution

    International Nuclear Information System (INIS)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.

    2016-01-01

    Highlights: • A method for reconstruction of the power density distribution is presented. • The method uses discretization by finite differences of the 2D neutron diffusion equation. • The discretization is performed on homogeneous meshes with the dimensions of a fuel cell. • The discretization is combined with flux distributions on the four node surfaces. • The maximum errors in reconstruction occur in the peripheral water region. - Abstract: In this reconstruction method the two-dimensional (2D) neutron diffusion equation is discretized by finite differences, applied to two energy groups (2G), on meshes with fuel-pin cell dimensions. The Nodal Expansion Method (NEM) makes use of surface discontinuity factors of the node and provides, for the reconstruction method, the effective multiplication factor of the problem and the four surface-averaged fluxes in homogeneous nodes the size of a fuel assembly (FA). The reconstruction process combines the discretized 2D diffusion equation by finite differences with flux distributions on the four surfaces of the nodes. These distributions are obtained, for each surface, from a fourth-order one-dimensional (1D) polynomial expansion with five coefficients to be determined. The conditions necessary for determining the coefficients are the three average fluxes on consecutive surfaces of three nodes and the two corner fluxes between these three surface fluxes. Corner fluxes of the node are determined using a third-order 1D polynomial expansion with four coefficients. This reconstruction method uses heterogeneous nuclear parameters directly, providing the heterogeneous neutron flux distribution and the detailed nuclear power density distribution within the FAs. The results obtained with this method have good accuracy and efficiency when compared with reference values.
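
    The 1D expansion step described above can be sketched directly: a fourth-order polynomial has five coefficients, pinned down by three surface-average fluxes and two corner fluxes, giving a 5 × 5 linear system; the flux values below are invented.

        # Solve for the five coefficients of a quartic flux expansion from
        # three interval averages and two point (corner) values.
        import numpy as np

        h = 1.0                                    # assumed node width
        # p(x) = c0 + c1 x + c2 x^2 + c3 x^3 + c4 x^4 on [-1.5h, 1.5h]

        def avg_row(a, b):
            """Row expressing the average of p over [a, b]."""
            k = np.arange(5)
            return (b ** (k + 1) - a ** (k + 1)) / ((k + 1) * (b - a))

        def point_row(x):
            return x ** np.arange(5)

        A = np.vstack([
            avg_row(-1.5 * h, -0.5 * h),   # average flux on the left node surface
            avg_row(-0.5 * h, 0.5 * h),    # average flux on the central surface
            avg_row(0.5 * h, 1.5 * h),     # average flux on the right surface
            point_row(-0.5 * h),           # corner flux between left and center
            point_row(0.5 * h),            # corner flux between center and right
        ])
        b = np.array([0.95, 1.00, 0.90, 0.97, 0.96])   # illustrative flux values
        c = np.linalg.solve(A, b)                      # five polynomial coefficients
        print(c)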