WorldWideScience

Sample records for source distribution predictions

  1. GIS Based Distributed Runoff Predictions in Variable Source Area Watersheds Employing the SCS-Curve Number

    Science.gov (United States)

    Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.

    2003-04-01

    Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed within an integrated GIS modeling environment a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and spatial extent of saturated areas and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated in an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provided a physically-based method that gives realistic results for watersheds with VSA hydrology.

  2. Hybrid ATDL-gamma distribution model for predicting area source acid gas concentrations

    Energy Technology Data Exchange (ETDEWEB)

    Jakeman, A J; Taylor, J A

    1985-01-01

    An air quality model is developed to predict the distribution of concentrations of acid gas in an urban airshed. The model is hybrid in character, combining reliable features of a deterministic ATDL-based model with statistical distributional approaches. The gamma distribution was identified from a range of distributional models as the best model. The paper shows that the assumptions of a previous hybrid model may be relaxed and presents a methodology for characterizing the uncertainty associated with model predictions. Results are demonstrated for the 98-percentile predictions of 24-h average data over annual periods at six monitoring sites. This percentile relates to the World Health Organization goal for acid gas concentrations.
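
    A minimal illustration of the distributional side of such a hybrid model: fit a gamma distribution to a year of 24-h average concentrations and read off the 98th percentile quoted above. The synthetic data, the scipy-based fit, and the fixed zero location are assumptions of this sketch, not the authors' procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# stand-in for a year of 24-h average acid gas concentrations at one monitoring site
daily_conc = rng.gamma(shape=2.0, scale=15.0, size=365)

# fit a gamma distribution (location fixed at zero) and read off the 98th percentile
shape, loc, scale = stats.gamma.fit(daily_conc, floc=0.0)
p98 = stats.gamma.ppf(0.98, shape, loc=loc, scale=scale)
print(f"98-percentile 24-h concentration: {p98:.1f} (same units as the input data)")
```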

  3. Escherichia coli at Ohio Bathing Beaches--Distribution, Sources, Wastewater Indicators, and Predictive Modeling

    Science.gov (United States)

    Francy, Donna S.; Gifford, Amie M.; Darner, Robert A.

    2003-01-01

    Results of studies during the recreational seasons of 2000 and 2001 strengthen the science that supports monitoring of our Nation's beaches. Water and sediment samples were collected and analyzed for concentrations of Escherichia coli (E. coli). Ancillary water-quality and environmental data were collected or compiled to determine their relation to E. coli concentrations. Data were collected at three Lake Erie urban beaches (Edgewater, Villa Angela, and Huntington), two Lake Erie beaches in a less populated area (Mentor Headlands and Fairport Harbor), and one inland-lake beach (Mosquito Lake). The distribution of E. coli in water and sediments within the bathing area, outside the bathing area, and near the swash zone was investigated at the three Lake Erie urban beaches and at Mosquito Lake. (The swash zone is the zone that is alternately covered and exposed by waves.) Lake-bottom sediments from outside the bathing area were not significant deposition areas for E. coli. In contrast, interstitial water and subsurface sediments from near the swash zone were enriched with E. coli. For example, E. coli concentrations were as high as 100,000 colonies per 100 milliliters in some interstitial waters. Although there are no standards for E. coli in swash-zone materials, the high concentrations found at some locations warrant concern for public health. Studies were done at Mosquito Lake to identify sources of fecal contamination to the lake and bathing beach. Escherichia coli concentrations decreased with distance from a suspected source of fecal contamination that is north of the beach but increased at the bathing beach. This evidence indicated that elevated E. coli concentrations at the bathing beach are of local origin rather than from transport of bacteria from sites to the north. Samples collected from the three Lake Erie urban beaches and Mosquito Lake were analyzed to determine whether wastewater indicators could be used as surrogates for E. coli at bathing beaches

  4. Using a topographic index to distribute variable source area runoff predicted with the SCS curve-number equation

    Science.gov (United States)

    Lyon, Steve W.; Walter, M. Todd; Gérard-Marchant, Pierre; Steenhuis, Tammo S.

    2004-10-01

    Because the traditional Soil Conservation Service curve-number (SCS-CN) approach continues to be used ubiquitously in water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed and tested a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Predicting the location of source areas is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point-source pollution. The method presented here used the traditional SCS-CN approach to predict runoff volume and spatial extent of saturated areas and a topographic index, like that used in TOPMODEL, to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was applied to two subwatersheds of the Delaware basin in the Catskill Mountains region of New York State and one watershed in south-eastern Australia to produce runoff-probability maps. Observed saturated area locations in the watersheds agreed with the distributed CN-VSA method. Results showed good agreement with those obtained from the previously validated soil moisture routing (SMR) model. When compared with the traditional SCS-CN method, the distributed CN-VSA method predicted a similar total volume of runoff, but vastly different locations of runoff generation. Thus, the distributed CN-VSA approach provides a physically based method that is simple enough to be incorporated into water quality models, and other tools that currently use the traditional SCS-CN method, while still adhering to the principles of VSA hydrology.
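
    A minimal sketch of the CN-VSA idea described above, assuming the standard SCS-CN runoff equation and a simple rule that places the watershed-average runoff on the cells with the highest topographic index; the saturated-fraction rule (Af = Q/Pe) and the per-cell runoff assignment are simplifications for illustration, not the authors' exact formulation.

```python
import numpy as np

def cn_vsa_runoff_map(p, cn, topo_index):
    """Distribute SCS-CN runoff over a watershed using a topographic wetness index.

    p          : storm rainfall depth (inches)
    cn         : curve number for the watershed
    topo_index : 1-D array of topographic index values, one per cell
    """
    s = 1000.0 / cn - 10.0                       # potential maximum retention (inches)
    ia = 0.2 * s                                 # initial abstraction
    pe = max(p - ia, 0.0)                        # effective rainfall reaching the soil
    q = pe ** 2 / (pe + s) if pe > 0 else 0.0    # watershed-average runoff (SCS-CN)

    runoff = np.zeros_like(topo_index, dtype=float)
    if pe > 0:
        sat_fraction = q / pe                            # fraction of area assumed saturated
        n_sat = int(round(sat_fraction * topo_index.size))
        wettest = np.argsort(topo_index)[::-1][:n_sat]   # cells most prone to saturation
        runoff[wettest] = pe                             # saturated cells shed the effective rain
    return runoff                                        # mean(runoff) ~= q by construction

# toy check: 2-inch storm, CN 75, synthetic topographic index for 10,000 cells
ti = np.random.default_rng(42).lognormal(mean=2.0, sigma=0.5, size=10_000)
q_map = cn_vsa_runoff_map(2.0, 75, ti)
print(q_map.mean())
```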

  5. Predictable return distributions

    DEFF Research Database (Denmark)

    Pedersen, Thomas Quistgaard

    trace out the entire distribution. A univariate quantile regression model is used to examine stock and bond return distributions individually, while a multivariate model is used to capture their joint distribution. An empirical analysis on US data shows that certain parts of the return distributions......-of-sample analyses show that the relative accuracy of the state variables in predicting future returns varies across the distribution. A portfolio study shows that an investor with power utility can obtain economic gains by applying the empirical return distribution in portfolio decisions instead of imposing...

  6. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
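
    For binary sources whose dependence is a binary symmetric crossover with probability p, the Slepian-Wolf bound referenced above is the conditional entropy H(X|Y) = Hb(p); the short sketch below evaluates that bound for a few crossover values. The binary-symmetric dependence model is an assumption of this illustration, not the paper's block-candidate model.

```python
import math

def h_b(p):
    """Binary entropy in bits per symbol."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

# Slepian-Wolf bound H(X|Y) = h_b(p) when X = Y xor noise with crossover probability p
for p in (0.01, 0.05, 0.10, 0.20):
    print(f"crossover {p:.2f}: minimum syndrome rate H(X|Y) = {h_b(p):.3f} bits/symbol")
```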

  7. Different Predictive Control Strategies for Active Load Management in Distributed Power Systems with High Penetration of Renewable Energy Sources

    DEFF Research Database (Denmark)

    Zong, Yi; Bindner, Henrik W.; Gehrke, Oliver

    2013-01-01

    In order to achieve a Danish energy supply based on 100% renewable energy from combinations of wind, biomass, wave and solar power in 2050 and to cover 50% of the Danish electricity consumption by wind power in 2020, it requires more renewable energy in buildings and industries (e.g. cold stores, greenhouses, etc.), and to coordinate the management of large numbers of distributed energy resources with the smart grid solution. This paper presents different predictive control (Genetic Algorithm-based and Model Predictive Control-based) strategies that schedule controlled loads in the industrial and residential sectors, based on dynamic power price and weather forecast, considering users’ comfort settings to meet an optimization objective, such as maximum profit or minimum energy consumption. Some field tests were carried out on a facility for intelligent, active and distributed power systems, which...
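
    As a hedged illustration of price-based load scheduling (not the project's GA or MPC implementation), a flexible load can be placed in the cheapest forecast hours subject to a daily energy requirement; the price profile, load limits, and use of scipy's linear-programming routine are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import linprog

hours = 24
# assumed day-ahead price profile (EUR/kWh), purely illustrative
price = 0.20 + 0.10 * np.sin(np.linspace(0.0, 2.0 * np.pi, hours))
max_power = 50.0       # kW limit of the flexible load (e.g. a cold-store compressor)
energy_needed = 400.0  # kWh that must be consumed over the day to hold temperature

# minimize sum(price[t] * p[t]) subject to 0 <= p[t] <= max_power and sum(p) = energy_needed
res = linprog(c=price,
              A_eq=np.ones((1, hours)), b_eq=[energy_needed],
              bounds=[(0.0, max_power)] * hours,
              method="highs")
print("hourly schedule (kW):", np.round(res.x, 1))
print("daily energy cost (EUR):", round(float(price @ res.x), 2))
```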

  8. Using open source computational tools for predicting human metabolic stability and additional absorption, distribution, metabolism, excretion, and toxicity properties.

    Science.gov (United States)

    Gupta, Rishi R; Gifford, Eric M; Liston, Ted; Waller, Chris L; Hohman, Moses; Bunin, Barry A; Ekins, Sean

    2010-11-01

    Ligand-based computational models could be more readily shared between researchers and organizations if they were generated with open source molecular descriptors [e.g., chemistry development kit (CDK)] and modeling algorithms, because this would negate the requirement for proprietary commercial software. We initially evaluated open source descriptors and model building algorithms using a training set of approximately 50,000 molecules and a test set of approximately 25,000 molecules with human liver microsomal metabolic stability data. A C5.0 decision tree model demonstrated that CDK descriptors together with a set of Smiles Arbitrary Target Specification (SMARTS) keys had good statistics [κ = 0.43, sensitivity = 0.57, specificity = 0.91, and positive predicted value (PPV) = 0.64], equivalent to those of models built with commercial Molecular Operating Environment 2D (MOE2D) and the same set of SMARTS keys (κ = 0.43, sensitivity = 0.58, specificity = 0.91, and PPV = 0.63). Extending the dataset to ∼193,000 molecules and generating a continuous model using Cubist with a combination of CDK and SMARTS keys or MOE2D and SMARTS keys confirmed this observation. When the continuous predictions and actual values were binned to get a categorical score we observed a similar κ statistic (0.42). The same combination of descriptor set and modeling method was applied to passive permeability and P-glycoprotein efflux data with similar model testing statistics. In summary, open source tools demonstrated predictive results comparable to those of commercial software with attendant cost savings. We discuss the advantages and disadvantages of open source descriptors and the opportunity for their use as a tool for organizations to share data precompetitively, avoiding repetition and assisting drug discovery.
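
    The reported model statistics (κ, sensitivity, specificity, PPV) all follow from a 2x2 confusion matrix; a small sketch with made-up counts (not the published dataset) shows how they are computed.

```python
def classification_stats(tp, fp, fn, tn):
    """Cohen's kappa, sensitivity, specificity and positive predicted value (PPV)
    from the four cells of a 2x2 confusion matrix."""
    n = tp + fp + fn + tn
    p_obs = (tp + tn) / n                                           # observed agreement
    p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance agreement
    kappa = (p_obs - p_exp) / (1.0 - p_exp)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return kappa, sensitivity, specificity, ppv

# made-up counts for a stable/unstable metabolic stability classifier
kappa, sens, spec, ppv = classification_stats(tp=1200, fp=680, fn=900, tn=7220)
print(f"kappa={kappa:.2f}  sensitivity={sens:.2f}  specificity={spec:.2f}  PPV={ppv:.2f}")
```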

  9. Relationship between the Prediction Accuracy of Tsunami Inundation and Relative Distribution of Tsunami Source and Observation Arrays: A Case Study in Tokyo Bay

    Science.gov (United States)

    Takagawa, T.

    2017-12-01

    A rapid and precise tsunami forecast based on offshore monitoring is getting attention to reduce human losses due to devastating tsunami inundation. We developed a forecast method based on the combination of hierarchical Bayesian inversion with pre-computed database and rapid post-computing of tsunami inundation. The method was applied to Tokyo bay to evaluate the efficiency of observation arrays against three tsunamigenic earthquakes. One is a scenario earthquake at Nankai trough and the other two are historic ones of Genroku in 1703 and Enpo in 1677. In general, rich observation array near the tsunami source has an advantage in both accuracy and rapidness of tsunami forecast. To examine the effect of observation time length we used four types of data with the lengths of 5, 10, 20 and 45 minutes after the earthquake occurrences. Prediction accuracy of tsunami inundation was evaluated by the simulated tsunami inundation areas around Tokyo bay due to target earthquakes. The shortest time length of accurate prediction varied with target earthquakes. Here, accurate prediction means the simulated values fall within the 95% credible intervals of prediction. In Enpo earthquake case, 5-minutes observation is enough for accurate prediction for Tokyo bay, but 10-minutes and 45-minutes are needed in the case of Nankai trough and Genroku, respectively. The difference of the shortest time length for accurate prediction shows the strong relationship with the relative distance from the tsunami source and observation arrays. In the Enpo case, offshore tsunami observation points are densely distributed even in the source region. So, accurate prediction can be rapidly achieved within 5 minutes. This precise prediction is useful for early warnings. Even in the worst case of Genroku, where less observation points are available near the source, accurate prediction can be obtained within 45 minutes. This information can be useful to figure out the outline of the hazard in an early

  10. Over-Distribution in Source Memory

    Science.gov (United States)

    Brainerd, C. J.; Reyna, V. F.; Holliday, R. E.; Nakamura, K.

    2012-01-01

    Semantic false memories are confounded with a second type of error, over-distribution, in which items are attributed to contradictory episodic states. Over-distribution errors have proved to be more common than false memories when the two are disentangled. We investigated whether over-distribution is prevalent in another classic false memory paradigm: source monitoring. It is. Conventional false memory responses (source misattributions) were predominantly over-distribution errors, but unlike semantic false memory, over-distribution also accounted for more than half of true memory responses (correct source attributions). Experimental control of over-distribution was achieved via a series of manipulations that affected either recollection of contextual details or item memory (concreteness, frequency, list-order, number of presentation contexts, and individual differences in verbatim memory). A theoretical model was used to analyze the data (conjoint process dissociation) that predicts that (a) over-distribution is directly proportional to item memory but inversely proportional to recollection and (b) item memory is not a necessary precondition for recollection of contextual details. The results were consistent with both predictions. PMID:21942494

  11. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side offering shifting processing steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide...

  12. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  13. Distributed power sources for Mars colonization

    International Nuclear Information System (INIS)

    Miley, George H.; Shaban, Yasser

    2003-01-01

    One of the fundamental needs for Mars colonization is an abundant source of energy. The total energy system will probably use a mixture of sources based on solar energy, fuel cells, and nuclear energy. Here we concentrate on the possibility of developing a distributed system employing several unique new types of nuclear energy sources, specifically small fusion devices using inertial electrostatic confinement and portable 'battery type' proton reaction cells

  14. Source distribution dependent scatter correction for PVI

    International Nuclear Information System (INIS)

    Barney, J.S.; Harrop, R.; Dykstra, C.J.

    1993-01-01

    Source distribution dependent scatter correction methods which incorporate different amounts of information about the source position and material distribution have been developed and tested. The techniques use image to projection integral transformation incorporating varying degrees of information on the distribution of scattering material, or convolution subtraction methods, with some information about the scattering material included in one of the convolution methods. To test the techniques, the authors apply them to data generated by Monte Carlo simulations which use geometric shapes or a voxelized density map to model the scattering material. Source position and material distribution have been found to have some effect on scatter correction. An image to projection method which incorporates a density map produces accurate scatter correction but is computationally expensive. Simpler methods, both image to projection and convolution, can also provide effective scatter correction

  15. Image authentication using distributed source coding.

    Science.gov (United States)

    Lin, Yao-Chung; Varodayan, David; Girod, Bernd

    2012-01-01

    We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.

  16. Microseism Source Distribution Observed from Ireland

    Science.gov (United States)

    Craig, David; Bean, Chris; Donne, Sarah; Le Pape, Florian; Möllhoff, Martin

    2017-04-01

    Ocean generated microseisms (OGM) are recorded globally with similar spectral features observed everywhere. The generation mechanism for OGM and their subsequent propagation to continental regions has led to their use as a proxy for sea-state characteristics. Also many modern seismological methods make use of OGM signals. For example, the Earth's crust and upper mantle can be imaged using "ambient noise tomography". For many of these methods an understanding of the source distribution is necessary to properly interpret the results. OGM recorded on near coastal seismometers are known to be related to the local ocean wavefield. However, contributions from more distant sources may also be present. This is significant for studies attempting to use OGM as a proxy for sea-state characteristics such as significant wave height. Ireland has a highly energetic ocean wave climate and is close to one of the major source regions for OGM. This provides an ideal location to study an OGM source region in detail. Here we present the source distribution observed from seismic arrays in Ireland. The region is shown to consist of several individual source areas. These source areas show some frequency dependence and generally occur at or near the continental shelf edge. We also show some preliminary results from an off-shore OBS network to the North-West of Ireland. The OBS network includes instruments on either side of the shelf and should help interpret the array observations.

  17. Quantum key distribution with entangled photon sources

    International Nuclear Information System (INIS)

    Ma Xiongfeng; Fung, Chi-Hang Fred; Lo, H.-K.

    2007-01-01

    A parametric down-conversion (PDC) source can be used as either a triggered single-photon source or an entangled-photon source in quantum key distribution (QKD). The triggering PDC QKD has already been studied in the literature. On the other hand, a model and a post-processing protocol for the entanglement PDC QKD are still missing. We fill in this important gap by proposing such a model and a post-processing protocol for the entanglement PDC QKD. Although the PDC model is proposed to study the entanglement-based QKD, we emphasize that our generic model may also be useful for other non-QKD experiments involving a PDC source. Since an entangled PDC source is a basis-independent source, we apply Koashi and Preskill's security analysis to the entanglement PDC QKD. We also investigate the entanglement PDC QKD with two-way classical communications. We find that the recurrence scheme increases the key rate and the Gottesman-Lo protocol helps tolerate higher channel losses. By simulating a recent 144-km open-air PDC experiment, we compare three implementations: entanglement PDC QKD, triggering PDC QKD, and coherent-state QKD. The simulation result suggests that the entanglement PDC QKD can tolerate higher channel losses than the coherent-state QKD. The coherent-state QKD with decoy states is able to achieve highest key rate in the low- and medium-loss regions. By applying the Gottesman-Lo two-way post-processing protocol, the entanglement PDC QKD can tolerate up to 70 dB combined channel losses (35 dB for each channel) provided that the PDC source is placed in between Alice and Bob. After considering statistical fluctuations, the PDC setup can tolerate up to 53 dB channel losses

  18. Fiber optic distributed temperature sensing for fire source localization

    Science.gov (United States)

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Sigrist, Markus W.; Li, Jun; Dong, Fengzhong

    2017-08-01

    A method for localizing a fire source based on a distributed temperature sensor system is proposed. Two sections of optical fibers were placed orthogonally to each other as the sensing elements. A tray of alcohol was lit to act as a fire outbreak in a cabinet with an uneven ceiling to simulate a real scene of fire. Experiments were carried out to demonstrate the feasibility of the method. Rather large fluctuations and systematic errors with respect to predicting the exact room coordinates of the fire source caused by the uneven ceiling were observed. Two mathematical methods (smoothing recorded temperature curves and finding temperature peak positions) to improve the prediction accuracy are presented, and the experimental results indicate that the fluctuation ranges and systematic errors are significantly reduced. The proposed scheme is simple and appears reliable enough to locate a fire source in large spaces.
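
    A minimal sketch of the two post-processing steps mentioned above: smooth each fiber's temperature trace, then take the peak position on each of the two orthogonal fibers as the room coordinates of the fire. The moving-average smoother, array layout, and synthetic traces are assumptions, not the authors' exact algorithm.

```python
import numpy as np

def smooth(trace, window=5):
    """Moving-average smoothing of a distributed temperature trace."""
    kernel = np.ones(window) / window
    return np.convolve(trace, kernel, mode="same")

def locate_fire(temp_x, temp_y, positions_x, positions_y):
    """Estimate (x, y) of the fire from the peak of each smoothed fiber trace."""
    x = positions_x[np.argmax(smooth(temp_x))]
    y = positions_y[np.argmax(smooth(temp_y))]
    return x, y

# synthetic traces: 200 sampling points along each 10 m fiber, hot spot near (3.2 m, 6.5 m)
rng = np.random.default_rng(7)
pos = np.linspace(0.0, 10.0, 200)
temp_x = 25 + 40 * np.exp(-((pos - 3.2) ** 2) / 0.5) + rng.normal(0, 1, pos.size)
temp_y = 25 + 40 * np.exp(-((pos - 6.5) ** 2) / 0.5) + rng.normal(0, 1, pos.size)
print(locate_fire(temp_x, temp_y, pos, pos))
```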

  19. Perceived loudness of spatially distributed sound sources

    DEFF Research Database (Denmark)

    Song, Woo-keun; Ellermeier, Wolfgang; Minnaar, Pauli

    2005-01-01

    psychoacoustic attributes into account. Therefore, a method for deriving loudness maps was developed in an earlier study [Song, Internoise2004, paper 271]. The present experiment investigates to which extent perceived loudness depends on the distribution of individual sound sources. Three loudspeakers were positioned 1.5 m from the centre of the listener’s head, one straight ahead, and two 10 degrees to the right and left, respectively. Six participants matched the loudness of either one, or two simultaneous sounds (narrow-band noises with 1-kHz, and 3.15-kHz centre frequencies) to a 2-kHz, 60-dB SPL narrow-band noise placed in the frontal loudspeaker. The two sounds were either originating from the central speaker, or from the two offset loudspeakers. It turned out that the subjects perceived the noises to be softer when they were distributed in space. In addition, loudness was calculated from the recordings

  20. Distributed quantum computing with single photon sources

    International Nuclear Information System (INIS)

    Beige, A.; Kwek, L.C.

    2005-01-01

    Full text: Distributed quantum computing requires the ability to perform nonlocal gate operations between the distant nodes (stationary qubits) of a large network. To achieve this, it has been proposed to interconvert stationary qubits with flying qubits. In contrast to this, we show that distributed quantum computing only requires the ability to encode stationary qubits into flying qubits but not the conversion of flying qubits into stationary qubits. We describe a scheme for the realization of an eventually deterministic controlled phase gate by performing measurements on pairs of flying qubits. Our scheme could be implemented with a linear optics quantum computing setup including sources for the generation of single photons on demand, linear optics elements and photon detectors. In the presence of photon loss and finite detector efficiencies, the scheme could be used to build large cluster states for one way quantum computing with a high fidelity. (author)

  1. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  2. Predictive access control for distributed computation

    DEFF Research Database (Denmark)

    Yang, Fan; Hankin, Chris; Nielson, Flemming

    2013-01-01

    We show how to use aspect-oriented programming to separate security and trust issues from the logical design of mobile, distributed systems. The main challenge is how to enforce various types of security policies, in particular predictive access control policies, i.e., policies based on the future behavior of a program. A novel feature of our approach is that we can define policies concerning secondary use of data....

  3. Panchromatic spectral energy distributions of Herschel sources

    Science.gov (United States)

    Berta, S.; Lutz, D.; Santini, P.; Wuyts, S.; Rosario, D.; Brisbin, D.; Cooray, A.; Franceschini, A.; Gruppioni, C.; Hatziminaoglou, E.; Hwang, H. S.; Le Floc'h, E.; Magnelli, B.; Nordon, R.; Oliver, S.; Page, M. J.; Popesso, P.; Pozzetti, L.; Pozzi, F.; Riguccini, L.; Rodighiero, G.; Roseboom, I.; Scott, D.; Symeonidis, M.; Valtchanov, I.; Viero, M.; Wang, L.

    2013-03-01

    Combining far-infrared Herschel photometry from the PACS Evolutionary Probe (PEP) and Herschel Multi-tiered Extragalactic Survey (HerMES) guaranteed time programs with ancillary datasets in the GOODS-N, GOODS-S, and COSMOS fields, it is possible to sample the 8-500 μm spectral energy distributions (SEDs) of galaxies with at least 7-10 bands. Extending to the UV, optical, and near-infrared, the number of bands increases up to 43. We reproduce the distribution of galaxies in a carefully selected restframe ten colors space, based on this rich data-set, using a superposition of multivariate Gaussian modes. We use this model to classify galaxies and build median SEDs of each class, which are then fitted with a modified version of the magphys code that combines stellar light, emission from dust heated by stars and a possible warm dust contribution heated by an active galactic nucleus (AGN). The color distribution of galaxies in each of the considered fields can be well described with the combination of 6-9 classes, spanning a large range of far- to near-infrared luminosity ratios, as well as different strength of the AGN contribution to bolometric luminosities. The defined Gaussian grouping is used to identify rare or odd sources. The zoology of outliers includes Herschel-detected ellipticals, very blue z ~ 1 Ly-break galaxies, quiescent spirals, and torus-dominated AGN with star formation. Out of these groups and outliers, a new template library is assembled, consisting of 32 SEDs describing the intrinsic scatter in the restframe UV-to-submm colors of infrared galaxies. This library is tested against L(IR) estimates with and without Herschel data included, and compared to eight other popular methods often adopted in the literature. When implementing Herschel photometry, these approaches produce L(IR) values consistent with each other within a median absolute deviation of 10-20%, the scatter being dominated more by fine tuning of the codes, rather than by the choice of
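
    A hedged sketch of the classification step described above, fitting a superposition of multivariate Gaussian modes to galaxy colors and flagging low-likelihood objects as outliers; the random stand-in data, the scikit-learn estimator, and BIC-based model selection are assumptions of this illustration, not the paper's pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
colors = rng.normal(size=(5000, 10))   # stand-in for ten restframe colors of 5000 galaxies

# fit 6-9 multivariate Gaussian modes and keep the model preferred by the BIC
models = [GaussianMixture(n_components=k, covariance_type="full", random_state=0).fit(colors)
          for k in range(6, 10)]
best = min(models, key=lambda m: m.bic(colors))

classes = best.predict(colors)          # class label per galaxy, used to build median SEDs
log_dens = best.score_samples(colors)   # very low values flag rare or odd sources (outliers)
outliers = np.argsort(log_dens)[:50]    # e.g. the 50 least likely objects
print(best.n_components, classes[:10], outliers[:5])
```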

  4. Electromagnetic projectile acceleration utilizing distributed energy sources

    International Nuclear Information System (INIS)

    Parker, J.V.

    1982-01-01

    Circuit equations are derived for an electromagnetic projectile accelerator (railgun) powered by a large number of capacitive discharge circuits distributed along its length. The circuit equations are put into dimensionless form and the parameters governing the solutions derived. After specializing the equations to constant spacing between circuits, the case of lossless rails and negligible drag is analyzed to show that the electrical to kinetic energy transfer efficiency is equal to sigma/2, where sigma = 2mS/(Lq0^2) and m is the projectile mass, S the distance between discharge circuits, L the rail inductance per unit length, and q0 the charge on the first stage capacitor. For sigma = 2 complete transfer of electrical to kinetic energy is predicted while for sigma > 2 the projectile-discharge circuit system is unstable. Numerical solutions are presented for both lossless rails and for finite rail resistance. When rail resistance is included, >70% transfer is calculated for accelerators of arbitrary length. The problem of projectile startup is considered and a simple modification of the first two stages is described which provides proper startup. Finally, the results of the numerical solutions are applied to a practical railgun design. A research railgun designed for repeated operation at 50 km/sec is described. It would have an overall length of 77 m, an electrical efficiency of 81%, a stored energy per stage of 105 kJ, and a charge transfer of <50 C per stage. A railgun of this design appears to be practicable with current pulsed power technology
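
    A quick numeric check of the dimensionless parameter defined above, sigma = 2mS/(Lq0^2), and of the lossless-rail efficiency sigma/2; all parameter values below are illustrative assumptions, not taken from the paper.

```python
def railgun_sigma(m_kg, s_m, l_h_per_m, q0_c):
    """Dimensionless parameter sigma = 2*m*S / (L * q0^2) from the circuit analysis."""
    return 2.0 * m_kg * s_m / (l_h_per_m * q0_c ** 2)

# illustrative values only: 10 g projectile, 1 m spacing between discharge circuits,
# 0.5 uH/m rail inductance per unit length, 200 C charge on the first stage capacitor
sigma = railgun_sigma(m_kg=0.010, s_m=1.0, l_h_per_m=0.5e-6, q0_c=200.0)
efficiency = sigma / 2.0 if sigma <= 2.0 else None   # sigma > 2 is unstable per the abstract
print(f"sigma = {sigma:.2f}, predicted lossless-rail transfer efficiency = {efficiency}")
```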

  5. Coded aperture imaging of alpha source spatial distribution

    International Nuclear Information System (INIS)

    Talebitaher, Alireza; Shutler, Paul M.E.; Springham, Stuart V.; Rawat, Rajdeep S.; Lee, Paul

    2012-01-01

    The Coded Aperture Imaging (CAI) technique has been applied with CR-39 nuclear track detectors to image alpha particle source spatial distributions. The experimental setup comprised: a 226Ra source of alpha particles, a laser-machined CAI mask, and CR-39 detectors, arranged inside a vacuum enclosure. Three different alpha particle source shapes were synthesized by using a linear translator to move the 226Ra source within the vacuum enclosure. The coded mask pattern used is based on a Singer Cyclic Difference Set, with 400 pixels and 57 open square holes (representing ρ = 1/7 = 14.3% open fraction). After etching of the CR-39 detectors, the area, circularity, mean optical density and positions of all candidate tracks were measured by an automated scanning system. Appropriate criteria were used to select alpha particle tracks, and a decoding algorithm applied to the (x, y) data produced the de-coded image of the source. Signal to Noise Ratio (SNR) values obtained for alpha particle CAI images were found to be substantially better than those for corresponding pinhole images, although the CAI-SNR values were below the predictions of theoretical formulae. Monte Carlo simulations of CAI and pinhole imaging were performed in order to validate the theoretical SNR formulae and also our CAI decoding algorithm. There was found to be good agreement between the theoretical formulae and SNR values obtained from simulations. Possible reasons for the lower SNR obtained for the experimental CAI study are discussed.

  6. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous units. The approach is inspired by smart-grid electric power production and consumption systems, where the flexibility of a large number of power producing and/or power consuming units can be exploited in a smart-grid solution. The objective is to accommodate the load variation on the grid, arising on one hand from varying consumption, on the other hand by natural variations in power production e.g. from wind turbines. The approach presented is based on quadratic optimization and possesses the properties of low algorithmic complexity and of scalability. In particular, the proposed design methodology

  7. Computer program for source distribution process in radiation facility

    International Nuclear Information System (INIS)

    Al-Kassiri, H.; Abdul Ghani, B.

    2007-08-01

    Computer simulation for dose distribution using Visual Basic has been done according to the arrangement and activities of Co-60 sources. This program provides dose distribution in treated products depending on the product density and desired dose. The program is useful for optimization of sources distribution during the loading process. There is good agreement between calculated data from the program and experimental data. (Author)

  8. Sources And Compositional Distribution Of Polycyclic Aromatic ...

    African Journals Online (AJOL)

    For molecular mass 178, an anthracene to anthracene plus phenanthrene ratio ≤ 0.10 was taken as indication of petroleum related sources, while a ratio > 0.10 indicated dominance of combustion related sources. For molecular mass 202, a fluoranthene to fluoranthene plus pyrene ratio ≤ 0.50 was indication of petroleum ...
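
    The two isomer-ratio rules above translate directly into a simple classification check; the concentrations below are made up for illustration, and the combustion label for mass-202 ratios above 0.50 is an assumed completion by analogy with the mass-178 rule (the abstract is truncated at that point).

```python
def pah_source_hints(anthracene, phenanthrene, fluoranthene, pyrene):
    """Apply the molecular mass 178 and 202 isomer-ratio rules to PAH concentrations
    (any consistent unit, e.g. ng/g)."""
    an_ratio = anthracene / (anthracene + phenanthrene)
    fl_ratio = fluoranthene / (fluoranthene + pyrene)
    mass178 = "petroleum-related" if an_ratio <= 0.10 else "combustion-related"
    # the > 0.50 branch is an assumed completion of the truncated rule above
    mass202 = "petroleum-related" if fl_ratio <= 0.50 else "combustion-related"
    return {"An/(An+Phe)": round(an_ratio, 2), "mass 178": mass178,
            "Fl/(Fl+Py)": round(fl_ratio, 2), "mass 202": mass202}

# illustrative concentrations, not data from the study
print(pah_source_hints(anthracene=5.0, phenanthrene=80.0, fluoranthene=60.0, pyrene=40.0))
```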

  9. Distributed coding of multiview sparse sources with joint recovery

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Deligiannis, Nikos; Forchhammer, Søren

    2016-01-01

    In support of applications involving multiview sources in distributed object recognition using lightweight cameras, we propose a new method for the distributed coding of sparse sources as visual descriptor histograms extracted from multiview images. The problem is challenging due to the computati...... transform (SIFT) descriptors extracted from multiview images shows that our method leads to bit-rate saving of up to 43% compared to the state-of-the-art distributed compressed sensing method with independent encoding of the sources....

  10. Tetrodotoxin: Chemistry, Toxicity, Source, Distribution and Detection

    Directory of Open Access Journals (Sweden)

    Vaishali Bane

    2014-02-01

    Tetrodotoxin (TTX) is a naturally occurring toxin that has been responsible for human intoxications and fatalities. Its usual route of toxicity is via the ingestion of contaminated puffer fish which are a culinary delicacy, especially in Japan. TTX was believed to be confined to regions of South East Asia, but recent studies have demonstrated that the toxin has spread to regions in the Pacific and the Mediterranean. There is no known antidote to TTX which is a powerful sodium channel inhibitor. This review aims to collect pertinent information available to date on TTX and its analogues with a special emphasis on the structure, aetiology, distribution, effects and the analytical methods employed for its detection.

  11. Model predictive control for Z-source power converter

    DEFF Research Database (Denmark)

    Mo, W.; Loh, P.C.; Blaabjerg, Frede

    2011-01-01

    This paper presents Model Predictive Control (MPC) of impedance-source (commonly known as Z-source) power converter. Output voltage control and current control for Z-source inverter are analyzed and simulated. With MPC's ability of multi- system variables regulation, load current and voltage...

  12. Ionizing nightglow: sources, intensity, and spatial distribution

    International Nuclear Information System (INIS)

    Young, J.M.; Troy, B.E. Jr.; Johnson, C.Y.; Holmes, J.C.

    1975-01-01

    Photometers carried aboard an Aerobee rocket mapped the ultraviolet night sky at White Sands, New Mexico. Maps for five 300 A passbands in the wavelength range 170 to 1400 A reveal spatial radiation patterns unique to each spectral subregion. The major ultraviolet features seen in these maps are ascribed to a variety of sources: 1) solar Lyman α (1216 A) and Lyman β (1026 A), resonantly scattered by geocoronal hydrogen; 2) solar HeII (304 A) resonantly scattered by ionized helium in the Earth's plasmasphere; 3) solar HeI (584 A) resonantly scattered by neutral helium in the interstellar wind and Doppler shifted so that it penetrates the Earth's helium blanket; and 4) starlight in the 912 to 1400 A band, primarily from early-type stars in the Orion region. Not explained is the presence of small, but measurable, albedo signals observed near the peak of flight. Intensities vary from several kilorayleighs for Lyman α to a few rayleighs for HeII. (auth)

  13. Brightness distribution data on 2918 radio sources at 365 MHz

    International Nuclear Information System (INIS)

    Cotton, W.D.; Owen, F.N.; Ghigo, F.D.

    1975-01-01

    This paper is the second in a series describing the results of a program attempting to fit models of the brightness distribution to radio sources observed at 365 MHz with the Bandwidth Synthesis Interferometer (BSI) operated by the University of Texas Radio Astronomy Observatory. Results for a further 2918 radio sources are given. An unresolved model and three symmetric extended models with angular sizes in the range 10--70 arcsec were attempted for each radio source. In addition, for 348 sources for which other observations of brightness distribution are published, the reference to the observations and a brief description are included

  14. Distributional sources for Newman's holomorphic Coulomb field

    International Nuclear Information System (INIS)

    Kaiser, Gerald

    2004-01-01

    Newman (1973 J. Math. Phys. 14 102-3) considered the holomorphic extension E-tilde(z) of the Coulomb field E(x) in R^3. From an analysis of its multipole expansion, he concluded that the real and imaginary parts E(x+iy)≡Re E-tilde(x+iy), H(x+iy)≡Im E-tilde(x+iy), viewed as functions of x, are the electric and magnetic fields generated by a spinning ring of charge R. This represents the EM part of the Kerr-Newman solution to the Einstein-Maxwell equations (Newman E T and Janis A I 1965 J. Math. Phys. 6 915-7; Newman E T et al 1965 J. Math. Phys. 6 918-9). As already pointed out in Newman and Janis (1965 J. Math. Phys. 6 915-7), this interpretation is somewhat problematic since the fields are double-valued. To make them single-valued, a branch cut must be introduced so that R is replaced by a charged disc D having R as its boundary. In the context of curved spacetime, D becomes a spinning disc of charge and mass representing the singularity of the Kerr-Newman solution. Here we confirm the above interpretation of E and H without resorting to asymptotic expansions, by computing the charge and current densities directly as distributions in R^3 supported in D. This will show that D spins rigidly at the critical rate so that its rim R moves at the speed of light

  15. Predicting Statistical Distributions of Footbridge Vibrations

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2009-01-01

    The paper considers vibration response of footbridges to pedestrian loading. Employing Newmark and Monte Carlo simulation methods, a statistical distribution of bridge vibration levels is calculated modelling walking parameters such as step frequency and stride length as random variables...

  16. Searching Malware and Sources of Its Distribution in the Internet

    Directory of Open Access Journals (Sweden)

    L. L. Protsenko

    2011-09-01

    The article presents, for the first time, an algorithm developed by the author for searching for malware and the sources of its distribution, based on HijackThis logs published on the Internet.

  17. Thematic and spatial resolutions affect model-based predictions of tree species distribution.

    Science.gov (United States)

    Liang, Yu; He, Hong S; Fraser, Jacob S; Wu, ZhiWei

    2013-01-01

    Subjective decisions of thematic and spatial resolutions in characterizing environmental heterogeneity may affect the characterizations of spatial pattern and the simulation of occurrence and rate of ecological processes, and in turn, model-based tree species distribution. Thus, this study quantified the importance of thematic and spatial resolutions, and their interaction in predictions of tree species distribution (quantified by species abundance). We investigated how model-predicted species abundances changed and whether tree species with different ecological traits (e.g., seed dispersal distance, competitive capacity) had different responses to varying thematic and spatial resolutions. We used the LANDIS forest landscape model to predict tree species distribution at the landscape scale and designed a series of scenarios with different thematic (different numbers of land types) and spatial resolutions combinations, and then statistically examined the differences of species abundance among these scenarios. Results showed that both thematic and spatial resolutions affected model-based predictions of species distribution, but thematic resolution had a greater effect. Species ecological traits affected the predictions. For species with moderate dispersal distance and relatively abundant seed sources, predicted abundance increased as thematic resolution increased. However, for species with long seeding distance or high shade tolerance, thematic resolution had an inverse effect on predicted abundance. When seed sources and dispersal distance were not limiting, the predicted species abundance increased with spatial resolution and vice versa. Results from this study may provide insights into the choice of thematic and spatial resolutions for model-based predictions of tree species distribution.

  18. Distributed Model Predictive Control via Dual Decomposition

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle

    2014-01-01

    This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...

  19. Sediment sources and their Distribution in Chwaka Bay, Zanzibar ...

    African Journals Online (AJOL)

    This work establishes sediment sources, character and their distribution in Chwaka Bay using (i) stable isotopes compositions of organic carbon (OC) and nitrogen, (ii) contents of OC, nitrogen and CaCO3, (iii) C/N ratios, (iv) distribution of sediment mean grain size and sorting, and (v) thickness of unconsolidated sediments.

  20. Activity distribution of a cobalt-60 teletherapy source

    International Nuclear Information System (INIS)

    Jaffray, D.A.; Munro, P.; Battista, J.J.; Fenster, A.

    1991-01-01

    In the course of quantifying the effect of radiation source size on the spatial resolution of portal images, a concentric ring structure in the activity distribution of a Cobalt-60 teletherapy source has been observed. The activity distribution was measured using a strip integral technique and confirmed independently by a contact radiograph of an identical but inactive source replica. These two techniques suggested that this concentric ring structure is due to the packing configuration of the small 60Co pellets that constitute the source. The source modulation transfer function (MTF) showed that this ring structure has a negligible influence on the spatial resolution of therapy images when compared to the effect of the large size of the 60Co source

  1. Distributed Remote Vector Gaussian Source Coding with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider a distributed remote source coding problem, where a sequence of observations of source vectors is available at the encoder. The problem is to specify the optimal rate for encoding the observations subject to a covariance matrix distortion constraint and in the presence...

  2. The Competition Between a Localised and Distributed Source of Buoyancy

    Science.gov (United States)

    Partridge, Jamie; Linden, Paul

    2012-11-01

    We propose a new mathematical model to study the competition between localised and distributed sources of buoyancy within a naturally ventilated filling box. The main controlling parameters in this configuration are the buoyancy fluxes of the distributed and local source, specifically their ratio Ψ. The steady state dynamics of the flow are heavily dependent on this parameter. For large Ψ, where the distributed source dominates, we find the space becomes well mixed as expected if driven by a distributed source alone. Conversely, for small Ψ we find the space reaches a stable two-layer stratification. This is analogous to the classical case of a purely local source but here the lower layer is buoyant compared to the ambient, due to the constant flux of buoyancy emanating from the distributed source. The ventilation flow rate, buoyancy of the layers and also the location of the interface height, which separates the two-layer stratification, are obtainable from the model. To validate the theoretical model, small scale laboratory experiments were carried out. Water was used as the working medium with buoyancy being driven directly by temperature differences. Theoretical results were compared with experimental data and overall good agreement was found. A CASE award project with Arup.

  3. Application of 'SPICE' to predict temperature distribution in heat pipes

    Energy Technology Data Exchange (ETDEWEB)

    Li, H M; Liu, Y; Damodaran, M [Nanyang Technological Univ., Singapore (SG). School of Mechanical and Production Engineering

    1991-11-01

    This article presents a new alternative approach to predict temperature distribution in heat pipes. In this method, temperature distribution in a heat pipe, modelled as an analogous electrical circuit, is predicted by applying SPICE, a general-purpose circuit simulation program. SPICE is used to simulate electrical circuit designs before the prototype is assembled. Useful predictions are obtained for heat pipes with and without adiabatic sections and for heat pipes with various evaporator and condenser lengths. Comparison of the predicted results with experiments demonstrates fairly good agreement. It is also shown how interdisciplinary developments could be used appropriately. (author).

  4. Precise Mapping Of A Spatially Distributed Radioactive Source

    International Nuclear Information System (INIS)

    Beck, A.; Caras, I.; Piestum, S.; Sheli, E.; Melamud, Y.; Berant, S.; Kadmon, Y.; Tirosh, D.

    1999-01-01

    Spatial distribution measurement of radioactive sources is a routine task in the nuclear industry. The precision of each measurement depends upon the specific application. However, the technological edge of this precision is motivated by the production of standards for calibration. Within this definition, the most demanding field is the calibration of standards for medical equipment. In this paper, a semi-empirical method for controlling the measurement precision is demonstrated, using a relatively simple laboratory apparatus. The spatial distribution of the source radioactivity is measured as part of the quality assurance tests, during the production of flood sources. These sources are further used in calibration of medical gamma cameras. A typical flood source is a 40 x 60 cm² plate with an activity of 10 mCi (or more) of 57Co isotope. The measurement set-up is based on a single NaI(Tl) scintillator with a photomultiplier tube, moving on an X-Y table which scans the flood source. In this application the source is required to have a uniform activity distribution over its surface

  5. Prediction of spatial distribution for some land use allometric ...

    African Journals Online (AJOL)

    Prediction of spatial distribution for some land use allometric characteristics in land use planning models with geostatistic and Geographical Information System (GIS) (Case study: Boein and Miandasht, Isfahan Province, Iran)

  6. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate (LDPCA) codes in a DSC scheme with feed-back. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  7. Continuous-variable quantum key distribution with Gaussian source noise

    International Nuclear Information System (INIS)

    Shen Yujie; Peng Xiang; Yang Jian; Guo Hong

    2011-01-01

    Source noise affects the security of continuous-variable quantum key distribution (CV QKD) and is difficult to analyze. We propose a model to characterize Gaussian source noise through introducing a neutral party (Fred) who induces the noise with a general unitary transformation. Without knowing Fred's exact state, we derive the security bounds for both reverse and direct reconciliations and show that the bound for reverse reconciliation is tight.

  8. Predictive Analytics for Coordinated Optimization in Distribution Systems

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Rui [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-04-13

    This talk will present NREL's work on developing predictive analytics that enables the optimal coordination of all the available resources in distribution systems to achieve the control objectives of system operators. Two projects will be presented. One focuses on developing short-term state forecasting-based optimal voltage regulation in distribution systems; and the other one focuses on actively engaging electricity consumers to benefit distribution system operations.

  9. Maxent modelling for predicting the potential distribution of Thai Palms

    DEFF Research Database (Denmark)

    Tovaranonte, Jantrararuk; Barfod, Anders S.; Overgaard, Anne Blach

    2011-01-01

    on presence data. The aim was to identify potential hot spot areas, assess the determinants of palm distribution ranges, and provide a firmer knowledge base for future conservation actions. We focused on a relatively small number of climatic, environmental and spatial variables in order to avoid overprediction of species distribution ranges. The models with the best predictive power were found by calculating the area under the curve (AUC) of receiver-operating characteristic (ROC). Here, we provide examples of contrasting predicted species distribution ranges as well as a map of modeled palm diversity...
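
    A minimal sketch of the AUC evaluation mentioned above for a presence/background model; the labels, suitability scores, and use of scikit-learn are assumptions of this illustration, not the study's data or workflow.

```python
from sklearn.metrics import roc_auc_score

# presence (1) / background (0) labels and model-predicted habitat suitability, illustrative only
y_true = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
y_score = [0.91, 0.78, 0.55, 0.40, 0.12, 0.30, 0.62, 0.84, 0.05, 0.22]
print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")
```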

  10. Minimum-phase distribution of cosmic source brightness

    International Nuclear Information System (INIS)

    Gal'chenko, A.A.; Malov, I.F.; Mogil'nitskaya, L.F.; Frolov, V.A.

    1984-01-01

    Minimum-phase distributions of brightness (profiles) for cosmic radio sources 3C 144 (the wave lambda=21 cm), 3C 338 (lambda=3.5 m), and 3C 353 (lambda=31.3 cm and 3.5 m) are obtained. A real possibility for the profile recovery from modulus fragments of its Fourier image is shown

  11. Geometric effects in alpha particle detection from distributed air sources

    International Nuclear Information System (INIS)

    Gil, L.R.; Leitao, R.M.S.; Marques, A.; Rivera, A.

    1994-08-01

    Geometric effects associated to detection of alpha particles from distributed air sources, as it happens in Radon and Thoron measurements, are revisited. The volume outside which no alpha particle may reach the entrance window of the detector is defined and determined analytically for rectangular and cylindrical symmetry geometries. (author). 3 figs

  12. Spatial distribution of saline water and possible sources of intrusion ...

    African Journals Online (AJOL)

    The spatial distribution of saline water and possible sources of intrusion into Lekki lagoon and transitional effects on the lacustrine ichthyofaunal characteristics were studied during March, 2006 and February, 2008. The water quality analysis indicated that, salinity has drastically increased recently in the lagoon (0.007 to ...

  13. Galactic distribution of X-ray burst sources

    International Nuclear Information System (INIS)

    Lewin, W.H.G.; Hoffman, J.A.; Doty, J.; Clark, G.W.; Swank, J.H.; Becker, R.H.; Pravdo, S.H.; Serlemitsos, P.J.

    1977-01-01

    It is stated that 18 X-ray burst sources have been observed to date, applying the following definition for these bursts - rise times of less than a few seconds, durations of seconds to minutes, and recurrence in some regular pattern. If single burst events that meet the criteria of rise time and duration, but not recurrence are included, an additional seven sources can be added. A sky map is shown indicating their positions. The sources are spread along the galactic equator and cluster near low galactic longitudes, and their distribution is different from that of the observed globular clusters. Observations based on the SAS-3 X-ray observatory studies and the Goddard X-ray Spectroscopy Experiment on OSO-9 are described. The distribution of the sources is examined and the effect of uneven sky exposure on the observed distribution is evaluated. It has been suggested that the bursts are perhaps produced by remnants of disrupted globular clusters and specifically supermassive black holes. This would imply the existence of a new class of unknown objects, and at present is merely an ad hoc method of relating the burst sources to globular clusters. (U.K.)

  14. FDTD verification of deep-set brain tumor hyperthermia using a spherical microwave source distribution

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, D. [20th Intelligence Squadron, Offutt AFB, NE (United States); Rappaport, C.M. [Northeastern Univ., Boston, MA (United States). Center for Electromagnetics Research; Terzuoli, A.J. Jr. [Air Force Inst. of Tech., Dayton, OH (United States). Graduate School of Engineering

    1996-10-01

    Although use of noninvasive microwave hyperthermia to treat cancer is problematic in many human body structures, careful selection of the source electric field distribution around the entire surface of the head can generate a tightly focused global power density maximum at the deepest point within the brain. An analytic prediction of the optimum volume field distribution in a layered concentric head model based on summing spherical harmonic modes is derived and presented. This ideal distribution is then verified using a three-dimensional finite difference time domain (FDTD) simulation with a discretized, MRI-based head model excited by the spherical source. The numerical computation gives a dissipated power pattern very similar to the analytic prediction. This study demonstrates that microwave hyperthermia can theoretically be a feasible cancer treatment modality for tumors in the head, providing a well-resolved hot spot at depth without overheating any other healthy tissue.

  15. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its...... strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...... correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking...

  16. Deformation due to distributed sources in micropolar thermodiffusive medium

    Directory of Open Access Journals (Sweden)

    Sachin Kaushal

    2010-10-01

    Full Text Available The general solution to the field equations of a micropolar generalized thermodiffusive medium in the context of G-L theory is investigated by applying the Laplace and Fourier transforms for various sources. An application of distributed normal forces, thermal sources, or potential sources is taken to show the utility of the problem. To obtain the solution in physical form, a numerical inversion technique is applied. The transformed components of stress, temperature distribution, and chemical potential for G-L theory and CT theory are depicted graphically, and the results are compared analytically to show the impact of diffusion, relaxation times, and micropolarity on these quantities. Some special cases of interest are also deduced from the present investigation.

  17. Comments on the Redshift Distribution of 44,200 SDSS Quasars: Evidence for Predicted Preferred Redshifts?

    OpenAIRE

    Bell, M. B.

    2004-01-01

    A Sloan Digital Sky Survey (SDSS) source sample containing 44,200 quasar redshifts is examined. Although arguments have been put forth to explain some of the structure observed in the redshift distribution, it is argued here that this structure may just as easily be explained by the presence of previously predicted preferred redshifts.

  18. Theoretical predictions of lactate and hydrogen ion distributions in tumours.

    Directory of Open Access Journals (Sweden)

    Maymona Al-Husari

    Full Text Available High levels of lactate and H(+)-ions play an important role in the invasive and metastatic cascade of some tumours. We develop a mathematical model of cellular pH regulation focusing on the activity of the Na(+)/H(+) exchanger (NHE) and the lactate/H(+) symporter (MCT) to investigate the spatial correlations of extracellular lactate and H(+)-ions. We highlight a crucial role for blood vessel perfusion rates in determining the spatial correlation between these two species. We also predict critical roles for blood lactate and the activity of the MCTs and NHEs in the direction of the cellular pH gradient in the tumour. We also incorporate experimentally determined heterogeneous distributions of the NHE and MCT transporters. We show that this can give rise to a higher intracellular pH and a lower intracellular lactate but does not affect the direction of the reversed cellular pH gradient or the redistribution of protons away from the glycolytic source. On the other hand, including intercellular gap junction communication in our model can give rise to a reversed cellular pH gradient and can influence the levels of pH.

  19. Stand diameter distribution modelling and prediction based on Richards function.

    Directory of Open Access Journals (Sweden)

    Ai-guo Duan

    Full Text Available The objective of this study was to introduce the application of the Richards equation to modelling and prediction of stand diameter distributions. The long-term repeated measurement data sets, consisting of 309 diameter frequency distributions from Chinese fir (Cunninghamia lanceolata) plantations in southern China, were used: 150 stands served as fitting data and the other 159 stands were used for testing. The nonlinear regression method (NRM) or the maximum likelihood estimation method (MLEM) was applied to estimate the parameters of the models, and the parameter prediction method (PPM) and parameter recovery method (PRM) were used to predict the diameter distributions of unknown stands. Four main conclusions were obtained: (1) the R distribution gave a more accurate simulation than the three-parameter Weibull function; (2) the parameters p, q and r of the R distribution proved to be its scale, location and shape parameters, and have a close relationship with stand characteristics, which means the parameters of the R distribution have a good theoretical interpretation; (3) the ordinate of the inflection point of the R distribution is significantly related to its skewness and kurtosis, and the fitted main distribution range for the cumulative diameter distribution of Chinese fir plantations was 0.4∼0.6; (4) the goodness-of-fit test showed that diameter distributions of unknown stands can be well estimated by applying the R distribution based on PRM, or the combination of PPM and PRM, when only the quadratic mean DBH, or the quadratic mean DBH plus stand age, is known; the non-rejection rates were near 80%, higher than the 72.33% non-rejection rate of the three-parameter Weibull function based on the combination of PPM and PRM.
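
    A brief sketch of the nonlinear-regression (NRM) fitting step described above, assuming a generalized-logistic (Richards-type) form for the cumulative diameter distribution with scale p, location q, and shape r; both the parameterization and the stand data below are illustrative assumptions rather than the study's actual formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def richards_cdf(d, p, q, r):
    """Richards-type cumulative diameter distribution.

    Assumed parameterization (not necessarily the paper's): p = scale,
    q = location, r = shape.
    """
    return (1.0 + np.exp(-(d - q) / p)) ** (-r)

# Illustrative stand data: DBH classes (cm) and cumulative relative frequency
dbh = np.array([6, 8, 10, 12, 14, 16, 18, 20, 22, 24], dtype=float)
cum_freq = np.array([0.03, 0.10, 0.24, 0.42, 0.60, 0.75, 0.86, 0.93, 0.97, 1.0])

# Nonlinear regression fit of the three parameters
params, _ = curve_fit(richards_cdf, dbh, cum_freq, p0=[2.0, 13.0, 1.0], maxfev=10000)
p, q, r = params
print(f"scale p={p:.2f}, location q={q:.2f}, shape r={r:.2f}")
```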

  20. Sources and distribution of anthropogenic radionuclides in different marine environments

    International Nuclear Information System (INIS)

    Holm, E.

    1997-01-01

    The knowledge of the distribution in time and space radiologically important radionuclides from different sources in different marine environments is important for assessment of dose commitment following controlled or accidental releases and for detecting eventual new sources. Present sources from nuclear explosion tests, releases from nuclear facilities and the Chernobyl accident provide a tool for such studies. The different sources can be distinguished by different isotopic and radionuclide composition. Results show that radiocaesium behaves rather conservatively in the south and north Atlantic while plutonium has a residence time of about 8 years. On the other hand enhanced concentrations of plutonium in surface waters in arctic regions where vertical mixing is small and iceformation plays an important role. Significantly increased concentrations of plutonium are also found below the oxic layer in anoxic basins due to geochemical concentration. (author)

  1. A Heuristic Approach to Distributed Generation Source Allocation for Electrical Power Distribution Systems

    Directory of Open Access Journals (Sweden)

    M. Sharma

    2010-12-01

    Full Text Available The recent trends in electrical power distribution system operation and management are aimed at improving system conditions in order to render good service to the customer. The reforms in the distribution sector have given major scope for employment of distributed generation (DG) resources, which will boost system performance. This paper proposes a heuristic technique for the allocation of a distributed generation source in a distribution system. The allocation is determined based on the overall improvement in network performance parameters, such as reduction in system losses, improvement in voltage stability, and improvement in the voltage profile. The proposed Network Performance Enhancement Index (NPEI), along with the heuristic rules, facilitates determination of a feasible location and corresponding capacity of the DG source. The developed approach is tested with different test systems to ascertain its effectiveness.

  2. Optimal Prediction of Moving Sound Source Direction in the Owl.

    Directory of Open Access Journals (Sweden)

    Weston Cox

    2015-07-01

    Full Text Available Capturing nature's statistical structure in behavioral responses is at the core of the ability to function adaptively in the environment. Bayesian statistical inference describes how sensory and prior information can be combined optimally to guide behavior. An outstanding open question of how neural coding supports Bayesian inference is how sensory cues are optimally integrated over time. Here we address what neural response properties allow a neural system to perform Bayesian prediction, i.e., predicting where a source will be in the near future given sensory information and prior assumptions. The work here shows that a population vector decoder will perform Bayesian prediction when the neurons encode the target dynamics with shifting receptive fields. We test the model using the system that underlies sound localization in barn owls. Neurons in the owl's midbrain show shifting receptive fields for moving sources that are consistent with the predictions of the model. We predict that neural populations can be specialized to represent the statistics of dynamic stimuli to allow for a vector read-out of Bayes-optimal predictions.
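
    A hedged sketch of the central idea: a population vector read-out over neurons whose receptive fields are shifted in the direction of source motion yields a prediction of the future source direction. The tuning curves, shift, and noise level are invented for illustration and do not reproduce the owl data.

```python
import numpy as np

rng = np.random.default_rng(0)

preferred = np.linspace(-90, 90, 37)      # preferred azimuths (deg)
width = 10.0                              # tuning width (deg)
shift = 5.0                               # receptive-field shift encoding expected motion (deg)

def responses(azimuth):
    """Gaussian-like tuning curves whose peaks lead the moving source by `shift`."""
    return np.exp(-0.5 * ((azimuth - (preferred - shift)) / width) ** 2)

def population_vector(rates):
    """Vector average of preferred directions weighted by firing rates."""
    angles = np.deg2rad(preferred)
    return np.rad2deg(np.arctan2(np.sum(rates * np.sin(angles)),
                                 np.sum(rates * np.cos(angles))))

current_azimuth = 20.0
rates = responses(current_azimuth) + 0.02 * rng.standard_normal(preferred.size)
# The decoded direction leads the current position, i.e. it predicts the near future
print("decoded (predicted) azimuth:", round(population_vector(rates), 1))
```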

  3. Do Staphylococcus epidermidis Genetic Clusters Predict Isolation Sources?

    Science.gov (United States)

    Tolo, Isaiah; Thomas, Jonathan C.; Fischer, Rebecca S. B.; Brown, Eric L.; Gray, Barry M.

    2016-01-01

    Staphylococcus epidermidis is a ubiquitous colonizer of human skin and a common cause of medical device-associated infections. The extent to which the population genetic structure of S. epidermidis distinguishes commensal from pathogenic isolates is unclear. Previously, Bayesian clustering of 437 multilocus sequence types (STs) in the international database revealed a population structure of six genetic clusters (GCs) that may reflect the species' ecology. Here, we first verified the presence of six GCs, including two (GC3 and GC5) with significant admixture, in an updated database of 578 STs. Next, a single nucleotide polymorphism (SNP) assay was developed that accurately assigned 545 (94%) of 578 STs to GCs. Finally, the hypothesis that GCs could distinguish isolation sources was tested by SNP typing and GC assignment of 154 isolates from hospital patients with bacteremia and those with blood culture contaminants and from nonhospital carriage. GC5 was isolated almost exclusively from hospital sources. GC1 and GC6 were isolated from all sources but were overrepresented in isolates from nonhospital and infection sources, respectively. GC2, GC3, and GC4 were relatively rare in this collection. No association was detected between fdh-positive isolates (GC2 and GC4) and nonhospital sources. Using a machine learning algorithm, GCs predicted hospital and nonhospital sources with 80% accuracy and predicted infection and contaminant sources with 45% accuracy, which was comparable to the results seen with a combination of five genetic markers (icaA, IS256, sesD [bhp], mecA, and arginine catabolic mobile element [ACME]). Thus, analysis of population structure with subgenomic data shows the distinction of hospital and nonhospital sources and the near-inseparability of sources within a hospital. PMID:27076664

  4. Distributed estimation based on observations prediction in wireless sensor networks

    KAUST Repository

    Bouchoucha, Taha

    2015-03-19

    We consider wireless sensor networks (WSNs) used for distributed estimation of unknown parameters. Due to the limited bandwidth, sensor nodes quantize their noisy observations before transmission to a fusion center (FC) for the estimation process. In this letter, the correlation between observations is exploited to reduce the mean-square error (MSE) of the distributed estimation. Specifically, sensor nodes generate local predictions of their observations and then transmit the quantized prediction errors (innovations) to the FC rather than the quantized observations. The analytic and numerical results show that transmitting the innovations rather than the observations mitigates the effect of quantization noise and hence reduces the MSE. © 2015 IEEE.
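
    A DPCM-style sketch of the general idea in this letter: when observations are temporally correlated, quantizing prediction errors (innovations) lets a fixed-rate quantizer cover a smaller range, which lowers the quantization noise seen at the fusion center. The AR(1) observation model, bit budget, and quantizer are assumptions for illustration, not the letter's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize(x, lo, hi, bits):
    """Uniform quantizer with 2**bits levels on [lo, hi]."""
    levels = 2 ** bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((np.asarray(x, dtype=float) - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step

# Temporally correlated observations of an unknown parameter theta
theta, rho, sigma, T, bits = 1.0, 0.95, 1.0, 5000, 3
noise = np.zeros(T)
for t in range(1, T):
    noise[t] = rho * noise[t - 1] + np.sqrt(1 - rho ** 2) * sigma * rng.standard_normal()
obs = theta + noise

# Scheme A: quantize the raw observations over their full dynamic range
qa = quantize(obs, obs.min(), obs.max(), bits)

# Scheme B: quantize innovations against the previously reconstructed value;
# the innovation range is narrower, so the same bit budget gives finer steps
innov_range = 4 * sigma * np.sqrt(1 - rho ** 2)
qb = np.empty(T)
qb[0] = quantize(obs[0], obs.min(), obs.max(), bits)
for t in range(1, T):
    innovation = obs[t] - qb[t - 1]
    qb[t] = qb[t - 1] + quantize(innovation, -innov_range, innov_range, bits)

print("reconstruction MSE, quantized observations:", np.mean((qa - obs) ** 2))
print("reconstruction MSE, quantized innovations: ", np.mean((qb - obs) ** 2))
print("parameter estimate from innovation scheme: ", qb.mean())
```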

  5. ASSERT and COBRA predictions of flow distribution in vertical bundles

    International Nuclear Information System (INIS)

    Tahir, A.; Carver, M.B.

    1983-01-01

    COBRA and ASSERT are subchannel codes which compute flow and enthalpy distributions in rod bundles. COBRA is a well-known code; ASSERT is under development at CRNL. This paper gives a comparison of the two codes with boiling experiments in vertical seven-rod bundles. ASSERT predictions of the void distribution are shown to be in good agreement with reported experimental results, while COBRA predictions are unsatisfactory. The mixing models in both COBRA and ASSERT are briefly discussed. The reasons for the failure of COBRA-IV and the success of ASSERT in simulating the experiments are highlighted

  6. Determining profile of dose distribution for PD-103 brachytherapy source

    International Nuclear Information System (INIS)

    Berkay, Camgoz; Mehmet, N. Kumru; Gultekin, Yegin

    2006-01-01

    Full text: Brachytherapy is a form of radiotherapy for cancer treatment in which cancerous cells are destroyed by radiation. Because experiments on living tissue are hazardous, brachytherapy sources are generally studied theoretically using computer simulation. The general concept of the treatment is to place the radioactive source inside the cancerous region of the affected tissue. The computational studies use the Monte Carlo method, which is based in principle on random number generation. The palladium radioisotope Pd-103 is an LDR (low dose rate) source. The radioactive material is encapsulated in a titanium cylinder 3 mm long and 0.25 mm in radius, containing two Pd-103 pellets. The differential effects of the two pellets cannot be investigated experimentally because the source dimensions are small compared with the measurement distances, so simulation is the only option. The aim of the dosimetric study is to determine the radial and angular absorbed dose distribution in tissue. Computer-based methods are indispensable in nuclear physics, since radiation work poses hazards to scientists and to people exposed to radiation. When the hazard exceeds recommended limits or physical conditions are unsuitable (long working times, uneconomical experiments, inadequate sensitivity of materials, etc.), it is unavoidable to simulate the work before carrying it out in practice. In medicine, the use of radiation for cancer treatment requires computational work; some computational studies are clinical routine, while others serve scientific development. In brachytherapy studies there are significant differences between experimental measurements and theoretical (computer-based) results, with experimental errors larger than those of the simulated values. In the design of a new brachytherapy source it is important to consider detailed

  7. Source Coding for Wireless Distributed Microphones in Reverberant Environments

    DEFF Research Database (Denmark)

    Zahedi, Adel

    2016-01-01

    . However, it comes with the price of several challenges, including the limited power and bandwidth resources for wireless transmission of audio recordings. In such a setup, we study the problem of source coding for the compression of the audio recordings before the transmission in order to reduce the power...... consumption and/or transmission bandwidth by reduction in the transmission rates. Source coding for wireless microphones in reverberant environments has several special characteristics which make it more challenging in comparison with regular audio coding. The signals which are acquired by the microphones......Modern multimedia systems are more and more shifting toward distributed and networked structures. This includes audio systems, where networks of wireless distributed microphones are replacing the traditional microphone arrays. This allows for flexibility of placement and high spatial diversity...

  8. Predicting volume of distribution with decision tree-based regression methods using predicted tissue:plasma partition coefficients.

    Science.gov (United States)

    Freitas, Alex A; Limbu, Kriti; Ghafourian, Taravat

    2015-01-01

    Volume of distribution is an important pharmacokinetic property that indicates the extent of a drug's distribution in the body tissues. This paper addresses the problem of how to estimate the apparent volume of distribution at steady state (Vss) of chemical compounds in the human body using decision tree-based regression methods from the area of data mining (or machine learning). Hence, the pros and cons of several different types of decision tree-based regression methods have been discussed. The regression methods predict Vss using, as predictive features, both the compounds' molecular descriptors and the compounds' tissue:plasma partition coefficients (Kt:p) - often used in physiologically-based pharmacokinetics. Therefore, this work has assessed whether the data mining-based prediction of Vss can be made more accurate by using as input not only the compounds' molecular descriptors but also (a subset of) their predicted Kt:p values. Comparison of the models that used only molecular descriptors, in particular, the Bagging decision tree (mean fold error of 2.33), with those employing predicted Kt:p values in addition to the molecular descriptors, such as the Bagging decision tree using adipose Kt:p (mean fold error of 2.29), indicated that the use of predicted Kt:p values as descriptors may be beneficial for accurate prediction of Vss using decision trees if prior feature selection is applied. Decision tree-based models presented in this work have an accuracy that is reasonable and similar to the accuracy of reported Vss inter-species extrapolations in the literature. The estimation of Vss for new compounds in drug discovery will benefit from methods that are able to integrate large and varied sources of data and flexible non-linear data mining methods such as decision trees, which can produce interpretable models. Graphical Abstract: Decision trees for the prediction of tissue partition coefficient and volume of distribution of drugs.
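
    A minimal sketch of the modelling setup described above: a Bagging ensemble of regression trees (scikit-learn's default base learner) predicting log10(Vss) from molecular descriptors augmented with a predicted adipose Kt:p, evaluated by a fold-error statistic. The random feature matrix and synthetic Vss values are placeholders for the real compound data.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_compounds = 200
descriptors = rng.standard_normal((n_compounds, 8))        # stand-ins for logP, MW, PSA, ...
adipose_ktp = rng.lognormal(mean=0.0, sigma=0.5, size=(n_compounds, 1))
X = np.hstack([descriptors, np.log10(adipose_ktp)])        # descriptors + predicted Kt:p
y = (0.4 * descriptors[:, 0] + 0.6 * np.log10(adipose_ktp[:, 0])
     + 0.1 * rng.standard_normal(n_compounds))             # synthetic log10(Vss)

# Bagging ensemble of regression trees (the default base estimator is a decision tree)
model = BaggingRegressor(n_estimators=100, random_state=0)
pred = cross_val_predict(model, X, y, cv=5)

# Fold error of the kind reported above: 10**|log10(predicted) - log10(observed)|
fold_error = 10 ** np.abs(pred - y)
print("mean fold error:", round(fold_error.mean(), 2))
```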

  9. Quantum key distribution with an unknown and untrusted source

    Science.gov (United States)

    Zhao, Yi; Qi, Bing; Lo, Hoi-Kwong

    2009-03-01

    The security of a standard bi-directional "plug & play" quantum key distribution (QKD) system has been an open question for a long time. This is mainly because its source is equivalently controlled by an eavesdropper, which means the source is unknown and untrusted. Qualitative discussion on this subject has been made previously. In this paper, we present the first quantitative security analysis on a general class of QKD protocols whose sources are unknown and untrusted. The securities of standard BB84 protocol, weak+vacuum decoy state protocol, and one-decoy decoy state protocol, with unknown and untrusted sources are rigorously proved. We derive rigorous lower bounds to the secure key generation rates of the above three protocols. Our numerical simulation results show that QKD with an untrusted source gives a key generation rate that is close to that with a trusted source. Our work is published in [1]. [1] Y. Zhao, B. Qi, and H.-K. Lo, Phys. Rev. A, 77:052327 (2008).

  10. Supply and distribution for γ-ray sources

    International Nuclear Information System (INIS)

    Yamamoto, Takeo

    1997-01-01

    Japan Atomic Energy Research Institute (JAERI) is the only facility that supplies and distributes radioisotopes (RI) in Japan. The γ-ray sources supplied are 192 Ir and 169 Yb for non-destructive examination and 192 Ir, 198 Au and 153 Gd for clinical use; all of these demands in Japan are met by domestic products at present. Meanwhile, the imported γ-ray sources are 60 Co sources for medical and industrial uses, including sterilization of medical instruments, 137 Cs for blood irradiation, and 241 Am for industrial measurements. The major overseas suppliers are Nordion International Inc. and Amersham International plc. RI products on the market are divided into two groups: the primary products, which are supplied in liquid or solid form after chemical or physical treatment of radioactive materials obtained from a reactor, and the secondary products, which are final products after various processing. Generally, the secondary products are the ones used in practice. In Japan, both domestic and imported products are supplied to users via JRIA (Japan Radioisotope Association). The association participates in the sales and distribution of the secondary products and also in the processing of the primary ones into sealed sources. Furthermore, stable supply systems for these products are largely established, according to the half-life of each nuclide, provided there is no accident in the reactor. (M.N.)

  11. The effect of energy distribution of external source on source multiplication in fast assemblies

    International Nuclear Information System (INIS)

    Karam, R.A.; Vakilian, M.

    1976-02-01

    The essence of this study is the effect of the energy distribution of an external source on the detection rate as a function of k-effective in fast assemblies. This effect, as a function of k, was studied for a fission chamber, using the ABN cross-section set and the Mach 1 code. It was found that with a source having a fission spectrum, the reciprocal count rate versus mass relationship is linear down to a k-effective of 0.59. For a thermal source, linearity was never achieved. (author)
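
    A worked illustration, under the standard subcritical-multiplication relation, of why the reciprocal count rate is expected to vary linearly with k-effective for a fission-spectrum source: the count rate scales as S/(1 - k_eff). The spectral-mismatch factor used for the thermal-like source is an invented toy, not the paper's model.

```python
import numpy as np

k_eff = np.linspace(0.3, 0.95, 14)
S = 1.0e6                                         # source emission rate (illustrative)

# Fission-spectrum source: count rate ~ S / (1 - k), so 1/CR is linear in (1 - k)
count_rate_fission_like = S / (1.0 - k_eff)

# Toy spectral mismatch: a thermal-like source whose effective worth varies with k
source_efficiency = 0.6 + 0.4 * k_eff
count_rate_thermal_like = source_efficiency * S / (1.0 - k_eff)

for k, cf, ct in zip(k_eff, count_rate_fission_like, count_rate_thermal_like):
    print(f"k_eff={k:.2f}  1/CR(fission)={1/cf:.2e}  1/CR(thermal)={1/ct:.2e}")
```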

  12. Prediction of wind energy distribution in complex terrain using CFD

    DEFF Research Database (Denmark)

    Xu, Chang; Li, Chenqi; Yang, Jianchuan

    2013-01-01

    Based on linear models, WAsP software predicts wind energy distribution, with a good accuracy for flat terrain, but with a large error under complicated topography. In this paper, numerical simulations are carried out using the FLUENT software on a mesh generated by the GAMBIT and ARGIS software ...

  13. Nonparametric Bayesian predictive distributions for future order statistics

    Science.gov (United States)

    Richard A. Johnson; James W. Evans; David W. Green

    1999-01-01

    We derive the predictive distribution for a specified order statistic, determined from a future random sample, under a Dirichlet process prior. Two variants of the approach are treated and some limiting cases studied. A practical application to monitoring the strength of lumber is discussed including choices of prior expectation and comparisons made to a Bayesian...
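
    A hedged Monte Carlo sketch of the kind of predictive distribution discussed above: future observations are generated from the Dirichlet-process (Polya-urn) predictive rule given the observed sample, and the order statistic of interest is tabulated. The concentration parameter, base measure, and lumber-strength numbers are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

observed = np.array([34.1, 41.5, 38.2, 45.0, 36.7, 40.3, 42.8, 39.9])  # e.g. strengths (MPa)
alpha = 2.0                                        # DP concentration (assumed)
g0 = lambda size: rng.normal(40.0, 5.0, size)      # base measure G0 (assumed)

def future_sample(m):
    """Draw a future sample of size m from the Polya-urn predictive rule."""
    data = list(observed)
    out = []
    for _ in range(m):
        if rng.random() < alpha / (alpha + len(data)):
            x = g0(1)[0]                           # new draw from G0
        else:
            x = data[rng.integers(len(data))]      # tie to an existing value
        data.append(x)
        out.append(x)
    return np.array(out)

m, r, reps = 10, 2, 5000                           # predict the 2nd smallest of 10 future pieces
order_stats = np.array([np.sort(future_sample(m))[r - 1] for _ in range(reps)])
print("predictive mean of the 2nd order statistic:", order_stats.mean().round(2))
print("5th-95th percentile interval:", np.percentile(order_stats, [5, 95]).round(2))
```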

  14. Performance prediction model for distributed applications on multicore clusters

    CSIR Research Space (South Africa)

    Khanyile, NP

    2012-07-01

    Full Text Available discusses some of the shortcomings of this law in the current age. We propose a theoretical model for predicting the behavior of a distributed algorithm given the network restrictions of the cluster used. The paper focuses on the impact of latency...
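
    A hedged sketch of an Amdahl-style runtime model extended with a communication-latency term, the kind of latency-aware extension this record alludes to; the constants (serial fraction, message counts, latency) are illustrative and not taken from the paper.

```python
def predicted_runtime(n_nodes, serial_fraction=0.05, t_single=100.0,
                      messages_per_node=50, latency=0.002):
    """Predicted wall-clock time (s) on a cluster of n_nodes (toy model)."""
    compute = t_single * (serial_fraction + (1 - serial_fraction) / n_nodes)
    communication = messages_per_node * n_nodes * latency   # grows with cluster size
    return compute + communication

for n in (1, 2, 4, 8, 16, 32, 64):
    t = predicted_runtime(n)
    print(f"{n:3d} nodes: predicted {t:7.2f} s, speed-up {predicted_runtime(1) / t:5.2f}x")
```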

  15. Production, Distribution, and Applications of Californium-252 Neutron Sources

    International Nuclear Information System (INIS)

    Balo, P.A.; Knauer, J.B.; Martin, R.C.

    1999-01-01

    The radioisotope 252 Cf is routinely encapsulated into compact, portable, intense neutron sources with a 2.6-year half-life. A source the size of a person's little finger can emit up to 10^11 neutrons/s. Californium-252 is used commercially as a reliable, cost-effective neutron source for prompt gamma neutron activation analysis (PGNAA) of coal, cement, and minerals, as well as for detection and identification of explosives, land mines, and unexploded military ordnance. Other uses are neutron radiography, nuclear waste assays, reactor start-up sources, calibration standards, and cancer therapy. The inherent safety of source encapsulations is demonstrated by 30 years of experience and by U.S. Bureau of Mines tests of source survivability during explosions. The production and distribution center for the U.S. Department of Energy (DOE) Californium Program is the Radiochemical Engineering Development Center (REDC) at Oak Ridge National Laboratory (ORNL). DOE sells 252 Cf to commercial reencapsulators domestically and internationally. Sealed 252 Cf sources are also available for loan to agencies and subcontractors of the U.S. government and to universities for educational, research, and medical applications. The REDC has established the Californium User Facility (CUF) for Neutron Science to make its large inventory of 252 Cf sources available to researchers for irradiations inside uncontaminated hot cells. Experiments at the CUF include a land mine detection system, neutron damage testing of solid-state detectors, irradiation of human cancer cells for boron neutron capture therapy experiments, and irradiation of rice to induce genetic mutations

  16. A joint calibration model for combining predictive distributions

    Directory of Open Access Journals (Sweden)

    Patrizia Agati

    2013-05-01

    Full Text Available In many research fields, as for example in probabilistic weather forecasting, valuable predictive information about a future random phenomenon may come from several, possibly heterogeneous, sources. Forecast combining methods have been developed over the years in order to deal with ensembles of sources: the aim is to combine several predictions in such a way as to improve forecast accuracy and reduce the risk of bad forecasts. In this context, we propose the use of a Bayesian approach to information combining, which consists in treating the predictive probability density functions (pdfs) from the individual ensemble members as data in a Bayesian updating problem. The likelihood function is shown to be proportional to the product of the pdfs, adjusted by a joint "calibration function" describing the predicting skill of the sources (Morris, 1977). In this paper, after rephrasing Morris' algorithm in a predictive context, we propose to model the calibration function in terms of bias, scale and correlation and to estimate its parameters according to the least squares criterion. The performance of our method is investigated and compared with that of Bayesian Model Averaging (Raftery, 2005) on simulated data.
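
    A simplified sketch of the product-of-pdfs combination described above, for the special case of independent Gaussian member forecasts: each forecast is first bias- and scale-calibrated, and the calibrated densities are then multiplied, which for Gaussians reduces to a precision-weighted combination. The calibration coefficients are assumed here rather than estimated by least squares as in the paper, and the correlation term is omitted.

```python
import numpy as np

# Each source issues a Gaussian predictive pdf N(mu, sigma^2); bias/scale values are assumed
forecasts = [
    {"mu": 12.1, "sigma": 1.5, "bias": 0.4, "scale": 1.1},   # source 1
    {"mu": 10.8, "sigma": 2.0, "bias": -0.2, "scale": 0.9},  # source 2
    {"mu": 11.6, "sigma": 1.2, "bias": 0.1, "scale": 1.0},   # source 3
]

precisions, weighted_means = [], []
for f in forecasts:
    mu_cal = f["mu"] - f["bias"]                  # remove bias
    sigma_cal = f["sigma"] * f["scale"]           # rescale spread
    w = 1.0 / sigma_cal ** 2                      # precision
    precisions.append(w)
    weighted_means.append(w * mu_cal)

# Product of Gaussian pdfs (flat prior) -> precision-weighted mean and variance
combined_var = 1.0 / np.sum(precisions)
combined_mean = combined_var * np.sum(weighted_means)
print(f"combined forecast: mean={combined_mean:.2f}, sd={np.sqrt(combined_var):.2f}")
```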

  17. Distribution and Source Identification of Pb Contamination in industrial soil

    Science.gov (United States)

    Ko, M. S.

    2017-12-01

    INTRODUCTION Lead (Pb) is a toxic element that induces neurotoxic effects in humans because Pb competes with Ca in the nervous system. Lead is classified as a chalcophile element, and galena (PbS) is its major mineral. Although Pb is not an abundant element in nature, various anthropogenic sources have enhanced Pb enrichment in the environment since the Industrial Revolution. Representative anthropogenic sources are batteries, paint, mining, smelting, and the combustion of fossil fuels. Isotope analysis is widely used to identify Pb contamination sources. Pb has four stable isotopes in nature: 208Pb, 207Pb, 206Pb, and 204Pb. Because these isotopes are stable, their ratios are maintained during physical and chemical fractionation; variations in Pb isotope abundances and ratios can therefore point to a particular contamination source. In this study, the distribution and isotope ratios of Pb in industrial soil were used to identify the Pb contamination source and dispersion pathways. MATERIALS AND METHODS Soil samples were collected at depths of 0-6 m from an industrial area in Korea. The collected soil samples were dried and sieved to under 2 mm. Soil pH measurement, aqua-regia digestion, and TCLP were carried out on the sieved samples, and isotope analysis was carried out to determine the Pb isotope abundances. RESULTS AND DISCUSSION The study area is land developed for industrial facilities. It was forest in 1980, and satellite images show the alteration of land use over time. These changes in land use suggest the possibility that contaminated soil was brought in from elsewhere. The Pb concentrations in core samples were higher in the deeper soil than in the topsoil. In particular, the 4 m sample showed the highest Pb concentration, approximately 1500 mg/kg, indicating that a Pb source exists at 4 m depth. CONCLUSIONS This study investigated the distribution and source identification of Pb in industrial soil. The land use and Pb

  18. Confronting species distribution model predictions with species functional traits.

    Science.gov (United States)

    Wittmann, Marion E; Barnes, Matthew A; Jerde, Christopher L; Jones, Lisa A; Lodge, David M

    2016-02-01

    Species distribution models are valuable tools in studies of biogeography, ecology, and climate change and have been used to inform conservation and ecosystem management. However, species distribution models typically incorporate only climatic variables and species presence data. Model development or validation rarely considers functional components of species traits or other types of biological data. We implemented a species distribution model (Maxent) to predict global climate habitat suitability for Grass Carp (Ctenopharyngodon idella). We then tested the relationship between the degree of climate habitat suitability predicted by Maxent and the individual growth rates of both wild (N = 17) and stocked (N = 51) Grass Carp populations using correlation analysis. The Grass Carp Maxent model accurately reflected the global occurrence data (AUC = 0.904). Observations of Grass Carp growth rate covered six continents and ranged from 0.19 to 20.1 g day(-1). Species distribution model predictions were correlated (r = 0.5, 95% CI (0.03, 0.79)) with observed growth rates for wild Grass Carp populations but were not correlated (r = -0.26, 95% CI (-0.5, 0.012)) with stocked populations. Further, a review of the literature indicates that the few studies for other species that have previously assessed the relationship between the degree of predicted climate habitat suitability and species functional traits have also discovered significant relationships. Thus, species distribution models may provide inferences beyond just where a species may occur, providing a useful tool to understand the linkage between species distributions and underlying biological mechanisms.

  19. Do predictions from Species Sensitivity Distributions match with field data?

    International Nuclear Information System (INIS)

    Smetanová, S.; Bláha, L.; Liess, M.; Schäfer, R.B.; Beketov, M.A.

    2014-01-01

    Species Sensitivity Distribution (SSD) is a statistical model that can be used to predict effects of contaminants on biological communities, but only few comparisons of this model with field studies have been conducted so far. In the present study we used measured pesticides concentrations from streams in Germany, France, and Finland, and we used SSD to calculate msPAF (multiple substance potentially affected fraction) values based on maximum toxic stress at localities. We compared these SSD-based predictions with the actual effects on stream invertebrates quantified by the SPEAR pesticides bioindicator. The results show that the msPAFs correlated well with the bioindicator, however, the generally accepted SSD threshold msPAF of 0.05 (5% of species are predicted to be affected) severely underestimated the observed effects (msPAF values causing significant effects are 2–1000-times lower). These results demonstrate that validation with field data is required to define the appropriate thresholds for SSD predictions. - Highlights: • We validated the statistical model Species Sensitivity Distribution with field data. • Good correlation was found between the model predictions and observed effects. • But, the generally accepted threshold msPAF 0.05 severely underestimated the effects. - Comparison of the SSD-based prediction with the field data evaluated with the SPEAR pesticides index shows that SSD threshold msPAF of 0.05 severely underestimates the effects observed in the field
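
    A hedged sketch of a typical msPAF calculation of the kind referred to above: each substance's potentially affected fraction (PAF) is read off a log-normal SSD at the measured concentration, and the fractions are combined by response addition. The SSD parameters and concentrations below are invented for illustration, not taken from the monitoring data.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical SSDs: substance -> (mean of log10 EC50 [ug/L], sd of log10 EC50)
ssd = {
    "pesticide_A": (1.2, 0.7),
    "pesticide_B": (0.5, 0.6),
    "pesticide_C": (2.0, 0.8),
}
measured = {"pesticide_A": 0.8, "pesticide_B": 0.05, "pesticide_C": 3.0}  # ug/L

pafs = []
for substance, (mu, sd) in ssd.items():
    # Fraction of species whose sensitivity threshold lies below the measured level
    paf = norm.cdf((np.log10(measured[substance]) - mu) / sd)
    pafs.append(paf)
    print(f"{substance}: PAF = {paf:.3f}")

ms_paf = 1.0 - np.prod([1.0 - p for p in pafs])   # response addition across substances
print(f"msPAF = {ms_paf:.3f}  (the conventional threshold discussed above is 0.05)")
```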

  20. DBH Prediction Using Allometry Described by Bivariate Copula Distribution

    Science.gov (United States)

    Xu, Q.; Hou, Z.; Li, B.; Greenberg, J. A.

    2017-12-01

    Forest biomass mapping based on single tree detection from the airborne laser scanning (ALS) usually depends on an allometric equation that relates diameter at breast height (DBH) with per-tree aboveground biomass. The incapability of the ALS technology in directly measuring DBH leads to the need to predict DBH with other ALS-measured tree-level structural parameters. A copula-based method is proposed in the study to predict DBH with the ALS-measured tree height and crown diameter using a dataset measured in the Lassen National Forest in California. Instead of exploring an explicit mathematical equation that explains the underlying relationship between DBH and other structural parameters, the copula-based prediction method utilizes the dependency between cumulative distributions of these variables, and solves the DBH based on an assumption that for a single tree, the cumulative probability of each structural parameter is identical. Results show that compared with the bench-marking least-square linear regression and the k-MSN imputation, the copula-based method obtains better accuracy in the DBH for the Lassen National Forest. To assess the generalization of the proposed method, prediction uncertainty is quantified using bootstrapping techniques that examine the variability of the RMSE of the predicted DBH. We find that the copula distribution is reliable in describing the allometric relationship between tree-level structural parameters, and it contributes to the reduction of prediction uncertainty.
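
    A simplified sketch of the quantile-matching idea described above: a tree is assumed to sit at the same cumulative probability in the DBH distribution as in the height distribution, so the predicted DBH is the DBH quantile matching the height's cumulative probability. Empirical CDFs on synthetic training data stand in for the fitted copula marginals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set (field-measured trees): correlated height and DBH
height_train = rng.gamma(shape=9.0, scale=2.5, size=500)            # m
dbh_train = 2.0 * height_train + 3.0 * rng.standard_normal(500)     # cm

def empirical_cdf(sample, value):
    """Fraction of the training sample at or below `value`."""
    return np.searchsorted(np.sort(sample), value, side="right") / len(sample)

def predict_dbh(height_new):
    u = empirical_cdf(height_train, height_new)           # cumulative prob. of the height
    return np.quantile(dbh_train, np.clip(u, 0.0, 1.0))   # DBH at the matching quantile

for h in (15.0, 22.0, 30.0):
    print(f"ALS-measured height {h:.1f} m -> predicted DBH {predict_dbh(h):.1f} cm")
```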

  1. CMP reflection imaging via interferometry of distributed subsurface sources

    Science.gov (United States)

    Kim, D.; Brown, L. D.; Quiros, D. A.

    2015-12-01

    The theoretical foundations of recovering body wave energy via seismic interferometry are well established. However, in practice such recovery remains problematic. Here, synthetic seismograms computed for subsurface sources are used to evaluate the geometrical combinations of realistic ambient source and receiver distributions that result in useful recovery of virtual body waves. This study illustrates how surface receiver arrays that span a limited distribution of sources can be processed to produce virtual shot gathers that result in CMP gathers which can be effectively stacked with traditional normal moveout corrections. To verify the feasibility of the approach in practice, seismic recordings of 50 aftershocks following the magnitude 5.8 Virginia earthquake of August 2011 have been processed using seismic interferometry to produce seismic reflection images of the crustal structure above and beneath the aftershock cluster. Although monotonic noise proved to be problematic by significantly reducing the number of usable recordings, the edited dataset resulted in stacked seismic sections characterized by coherent reflections that resemble those seen on a nearby conventional reflection survey. In particular, "virtual" reflections at travel times of 3 to 4 seconds suggest reflectors at approximately 7 to 12 km depth that would seem to correspond to imbricate thrust structures formed during the Appalachian orogeny. The approach described here represents a promising new means of body wave imaging of 3D structure that can be applied to a wide array of geologic and energy problems. Unlike other imaging techniques using natural sources, this technique does not require precise source locations or times. It can thus exploit aftershocks too small for conventional analyses. This method can be applied to any type of microseismic cloud, whether tectonic, volcanic or man-made.

  2. Atmospheric dispersion prediction and source estimation of hazardous gas using artificial neural network, particle swarm optimization and expectation maximization

    Science.gov (United States)

    Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang

    2018-04-01

    Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source are becoming increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not offer high efficiency and accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO) and expectation maximization (EM). The novel method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict concentration distributions accurately and efficiently. PSO and EM are applied to estimate the source parameters, which effectively accelerates convergence. The method is verified on the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.
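
    A compact sketch of the source-estimation step: a basic global-best particle swarm searches over the release rate and source location to minimize the mismatch between sensor readings and a forward dispersion model. A crude Gaussian plume stands in for the trained ANN used in the paper, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def plume(q, x0, y0, xs, ys, u=3.0, h=2.0):
    """Ground-level concentration from a crude Gaussian plume (wind along +x)."""
    dx, dy = xs - x0, ys - y0
    c = np.zeros_like(xs, dtype=float)
    d = dx > 1.0                                     # only points downwind of the source
    sy, sz = 0.10 * dx[d], 0.06 * dx[d]              # dispersion grows with distance
    c[d] = (q / (np.pi * u * sy * sz)
            * np.exp(-dy[d] ** 2 / (2 * sy ** 2))
            * np.exp(-h ** 2 / (2 * sz ** 2)))
    return c

# Synthetic sensor readings produced by a hidden "true" source (Q=5, x0=10, y0=30)
xs, ys = np.meshgrid(np.arange(50.0, 500.0, 50.0), np.arange(-200.0, 201.0, 50.0))
xs, ys = xs.ravel(), ys.ravel()
obs = plume(5.0, 10.0, 30.0, xs, ys)

def cost(p):
    return np.sum((plume(p[0], p[1], p[2], xs, ys) - obs) ** 2)

# Basic global-best PSO over (Q, x0, y0)
lo, hi = np.array([0.1, -50.0, -100.0]), np.array([20.0, 50.0, 100.0])
pos = rng.uniform(lo, hi, size=(40, 3))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    costs = np.array([cost(p) for p in pos])
    better = costs < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], costs[better]
    gbest = pbest[pbest_cost.argmin()].copy()

print("estimated (Q, x0, y0):", np.round(gbest, 2))
```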

  3. Popularity prediction tool for ATLAS distributed data management

    International Nuclear Information System (INIS)

    Beermann, T; Maettig, P; Stewart, G; Lassnig, M; Garonne, V; Barisits, M; Vigne, R; Serfon, C; Goossens, L; Nairz, A; Molfetas, A

    2014-01-01

    This paper describes a popularity prediction tool for data-intensive data management systems, such as ATLAS distributed data management (DDM). It is fed by the DDM popularity system, which produces historical reports about ATLAS data usage, providing information about files, datasets, users and sites where data was accessed. The tool described in this contribution uses this historical information to make a prediction about the future popularity of data. It finds trends in the usage of data using a set of neural networks and a set of input parameters and predicts the number of accesses in the near-term future. This information can then be used in a second step to improve the distribution of replicas at sites, taking into account the cost of creating new replicas (bandwidth and load on the storage system) compared to the gain of having new ones (faster access to data for analysis). To evaluate the benefit of the redistribution, a grid simulator is introduced that is able to replay real workloads on different data distributions. This article describes the popularity prediction method and the simulator that is used to evaluate the redistribution.
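
    A hedged sketch of the prediction step: a small feed-forward network maps a window of recent weekly access counts for a dataset to the accesses expected in the following week. Synthetic popularity traces stand in for the DDM popularity reports, and scikit-learn's MLPRegressor stands in for the paper's set of neural networks.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def synthetic_trace(weeks=60):
    """Decaying-interest weekly access counts with noise (a common popularity shape)."""
    t = np.arange(weeks)
    peak, decay = rng.uniform(100, 2000), rng.uniform(0.02, 0.15)
    return np.maximum(0.0, peak * np.exp(-decay * t) + rng.normal(0, 20, weeks))

window = 8                                   # weeks of history used as input features
X, y = [], []
for _ in range(300):                         # 300 datasets' popularity traces
    trace = synthetic_trace()
    for i in range(len(trace) - window):
        X.append(trace[i:i + window])
        y.append(trace[i + window])
X, y = np.array(X), np.array(y)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X, y)
print("predicted accesses next week for 3 sample windows:", model.predict(X[:3]).round(1))
```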

  4. Popularity Prediction Tool for ATLAS Distributed Data Management

    Science.gov (United States)

    Beermann, T.; Maettig, P.; Stewart, G.; Lassnig, M.; Garonne, V.; Barisits, M.; Vigne, R.; Serfon, C.; Goossens, L.; Nairz, A.; Molfetas, A.; Atlas Collaboration

    2014-06-01

    This paper describes a popularity prediction tool for data-intensive data management systems, such as ATLAS distributed data management (DDM). It is fed by the DDM popularity system, which produces historical reports about ATLAS data usage, providing information about files, datasets, users and sites where data was accessed. The tool described in this contribution uses this historical information to make a prediction about the future popularity of data. It finds trends in the usage of data using a set of neural networks and a set of input parameters and predicts the number of accesses in the near-term future. This information can then be used in a second step to improve the distribution of replicas at sites, taking into account the cost of creating new replicas (bandwidth and load on the storage system) compared to the gain of having new ones (faster access to data for analysis). To evaluate the benefit of the redistribution, a grid simulator is introduced that is able to replay real workloads on different data distributions. This article describes the popularity prediction method and the simulator that is used to evaluate the redistribution.

  5. Predicting Polylepis distribution: vulnerable and increasingly important Andean woodlands

    Directory of Open Access Journals (Sweden)

    Brian R. Zutta

    2012-11-01

    Full Text Available Polylepis woodlands are a vital resource for preserving biodiversity and hydrological functions, which will be altered by climate change and challenge the sustainability of local human communities. However, these high-altitude Andean ecosystems are becoming increasingly vulnerable due to anthropogenic pressure, including fragmentation, deforestation and the increase in livestock. Predicting the distribution of native woodlands has become increasingly important to counteract the negative effects of climate change through reforestation and conservation. The objective of this study was to develop and analyze distribution models of two species that form extensive woodlands along the Andes, namely Polylepis sericea and P. weberbaueri. This study utilized the program Maxent, climate and remotely sensed environmental layers at 1 km resolution. The predicted distribution model for P. sericea indicated that the species could be located in a variety of habitats along the Andean Cordillera, while P. weberbaueri was restricted to the high elevations of southern Peru and Bolivia. For both species, elevation and temperature metrics were the most significant factors for predicted distribution. Further model refinement of Polylepis and other Andean species using increasingly available satellite data demonstrates the potential to help define areas of diversity and improve conservation strategies for the Andes.

  6. Predicted and measured velocity distribution in a model heat exchanger

    International Nuclear Information System (INIS)

    Rhodes, D.B.; Carlucci, L.N.

    1984-01-01

    This paper presents a comparison between numerical predictions, using the porous media concept, and measurements of the two-dimensional isothermal shell-side velocity distributions in a model heat exchanger. Computations and measurements were done with and without tubes present in the model. The effect of tube-to-baffle leakage was also investigated. The comparison was made to validate certain porous media concepts used in a computer code being developed to predict the detailed shell-side flow in a wide range of shell-and-tube heat exchanger geometries

  7. Combining disparate data sources for improved poverty prediction and mapping.

    Science.gov (United States)

    Pokhriyal, Neeti; Jacques, Damien Christophe

    2017-11-14

    More than 330 million people are still living in extreme poverty in Africa. Timely, accurate, and spatially fine-grained baseline data are essential to determining policy in favor of reducing poverty. The potential of "Big Data" to estimate socioeconomic factors in Africa has been proven. However, most current studies are limited to using a single data source. We propose a computational framework to accurately predict the Global Multidimensional Poverty Index (MPI) at the finest spatial granularity and coverage of 552 communes in Senegal using environmental data (related to food security, economic activity, and accessibility to facilities) and call data records (capturing individualistic, spatial, and temporal aspects of people). Our framework is based on Gaussian Process regression, a Bayesian learning technique, providing uncertainty associated with predictions. We perform model selection using elastic net regularization to prevent overfitting. Our results empirically prove the superior accuracy when using disparate data (Pearson correlation of 0.91). Our approach is used to accurately predict important dimensions of poverty: health, education, and standard of living (Pearson correlation of 0.84-0.86). All predictions are validated using deprivations calculated from census. Our approach can be used to generate poverty maps frequently, and its diagnostic nature is likely to assist policy makers in designing better interventions for poverty eradication. Copyright © 2017 the Author(s). Published by PNAS.
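
    A minimal sketch of the pipeline described above: elastic-net regularization selects informative features, and Gaussian Process regression then predicts the MPI together with a per-commune uncertainty. The feature matrix and MPI values are random placeholders for the environmental and call-data-record covariates.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n_communes, n_features = 552, 30
X = rng.standard_normal((n_communes, n_features))          # placeholder covariates
signal = X[:, :3] @ np.array([0.8, -0.5, 0.3])
mpi = 1.0 / (1.0 + np.exp(-(signal + 0.2 * rng.standard_normal(n_communes))))  # fake MPI in (0, 1)

# Step 1: elastic-net regularization as a guard against overfitting / feature selection
enet = ElasticNetCV(cv=5).fit(X, mpi)
selected = np.abs(enet.coef_) > 1e-6
if not selected.any():
    selected[:] = True

# Step 2: Gaussian Process regression, giving a prediction and its uncertainty
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(X[:400, selected], mpi[:400])
mean, std = gpr.predict(X[400:, selected], return_std=True)
print("first 5 predicted MPI values:", np.round(mean[:5], 3))
print("their predictive std devs:  ", np.round(std[:5], 3))
```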

  8. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance entails not merely a gradual improvement but rather a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  9. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Barni Mauro

    2007-01-01

    Full Text Available This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, up till now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.
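
    A small illustration of the coset (syndrome) idea behind DSC, using a (7,4) Hamming code: the encoder transmits only the 3-bit syndrome of its word, and the decoder recovers the word exactly from the syndrome plus side information that differs in at most one bit. This is a textbook toy, not the multilevel coset codes proposed in the paper.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is the binary
# representation of j+1 (row 0 is the least significant bit)
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome(word):
    return H @ word % 2

def decode(syn, y):
    """Find the word in the coset with syndrome `syn` closest (Hamming) to y."""
    diff = (syndrome(y) + syn) % 2                 # syndrome of the error pattern x XOR y
    if not diff.any():
        return y.copy()
    # For the Hamming code, this syndrome equals the binary index of the flipped bit
    pos = int("".join(map(str, diff[::-1])), 2) - 1
    x = y.copy()
    x[pos] ^= 1
    return x

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 7)                          # source word at the encoder
y = x.copy()
y[rng.integers(0, 7)] ^= 1                         # correlated side information (one bit differs)

s = syndrome(x)                                    # only these 3 bits are transmitted
print("recovered x correctly:", np.array_equal(decode(s, y), x))
```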

  10. Experimental Validation of Energy Resources Integration in Microgrids via Distributed Predictive Control

    DEFF Research Database (Denmark)

    Mantovani, Giancarlo; Costanzo, Giuseppe Tommaso; Marinelli, Mattia

    2014-01-01

    This paper presents an innovative control scheme for the management of energy consumption in commercial buildings with local energy production, such as photovoltaic panels or a wind turbine, and an energy storage unit. The presented scheme is based on distributed model predictive controllers, which...... sources, a vanadium redox battery system, resistive load, and a point of common coupling to the national grid. Several experiments are carried out to assess the performance of the control scheme in managing local energy production and consumption....

  11. Characterization of DNAPL Source Zone Architecture and Prediction of Associated Plume Response: Progress and Perspectives

    Science.gov (United States)

    Abriola, L. M.; Pennell, K. D.; Ramsburg, C. A.; Miller, E. L.; Christ, J.; Capiro, N. L.; Mendoza-Sanchez, I.; Boroumand, A.; Ervin, R. E.; Walker, D. I.; Zhang, H.

    2012-12-01

    It is now widely recognized that the distribution of contaminant mass will control both the evolution of aqueous phase plumes and the effectiveness of many source zone remediation technologies at sites contaminated by dense nonaqueous phase liquids (DNAPLs). Advances in the management of sites containing DNAPL source zones, however, are currently hampered by the difficulty associated with characterizing subsurface DNAPL 'architecture'. This presentation provides an overview of recent research, integrating experimental and mathematical modeling studies, designed to improve our ability to characterize DNAPL distributions and predict associated plume response. Here emphasis is placed on estimation of the most information-rich DNAPL architecture metrics, through a combination of localized in situ tests and more readily available plume transect concentration observations. Estimated metrics will then serve as inputs to an upscaled screening model for prediction of long term plume response. Machine learning techniques were developed and refined to identify a variety of source zone metrics and associated confidence intervals through the processing of down gradient concentration data. Estimated metrics include the volumes and volume percentages of DNAPL in pools and ganglia, as well as their ratio (pool fraction). Multiphase flow and transport simulations provided training data for model development and assessment that are representative of field-scale DNAPL source zones and their evolving plumes. Here, a variety of release and site heterogeneity (sequential Gaussian permeability) conditions were investigated. Push-pull tracer tests were also explored as a means to provide localized in situ observations to refine these metric estimates. Here, two-dimensional aquifer cell experiments and mathematical modeling were used to quantify upscaled interphase mass transfer rates and the interplay between injection and extraction rates, local source zone architecture, and tracer

  12. Probabilistic source term predictions for use with decision support systems

    International Nuclear Information System (INIS)

    Grindon, E.; Kinniburgh, C.G.

    2003-01-01

    Full text: Decision Support Systems for use in off-site emergency management, following an incident at a Nuclear Power Plant (NPP) within Europe, are becoming accepted as a useful and appropriate tool to aid decision makers. An area which is not so well developed is the 'upstream' prediction of the source term released into the environment. Rapid prediction of this source term is crucial to the appropriate early management of a nuclear emergency. The initial source term prediction would today typically be based on simple tabulations taking little, or no, account of plant status. It is the interface between the inward-looking plant control room team and the outward-looking off-site emergency management team that needs to be addressed. This is not an easy proposition, as these two distinct disciplines have little common basis from which to communicate their immediate findings and concerns. Within the Euratom Fifth Framework Programme (FP5), complementary approaches are being developed for the pre-release stage, each based on software tools to help bridge this gap. Traditionally, source terms (or releases into the environment) provided for use with Decision Support Systems are estimated on a deterministic basis. These approaches use a single, deterministic assumption about plant status. The associated source term represents the 'best estimate' based on available information. No information is provided on the potential for uncertainty in the source term estimate. Using probabilistic methods, the outcome is typically a number of possible plant states, each with an associated source term and probability. These represent both the best estimate and the spread of the likely source term. However, this is a novel approach and the usefulness of such source term prediction tools is yet to be tested on a wide scale. The benefits of probabilistic source term estimation are presented here using, as an example, the SPRINT tool developed within the FP5 STERPS project. System for the

  13. Moving Towards Dynamic Ocean Management: How Well Do Modeled Ocean Products Predict Species Distributions?

    Directory of Open Access Journals (Sweden)

    Elizabeth A. Becker

    2016-02-01

    Full Text Available Species distribution models are now widely used in conservation and management to predict suitable habitat for protected marine species. The primary sources of dynamic habitat data have been in situ and remotely sensed oceanic variables (both are considered "measured data"), but now ocean models can provide historical estimates and forecast predictions of relevant habitat variables such as temperature, salinity, and mixed layer depth. To assess the performance of modeled ocean data in species distribution models, we present a case study for cetaceans that compares models based on output from a data-assimilative implementation of the Regional Ocean Modeling System (ROMS) to those based on measured data. Specifically, we used seven years of cetacean line-transect survey data collected between 1991 and 2009 to develop predictive habitat-based models of cetacean density for 11 species in the California Current Ecosystem. Two different generalized additive models were compared: one built with a full suite of ROMS output and another built with a full suite of measured data. Model performance was assessed using the percentage of explained deviance, root mean squared error (RMSE), observed-to-predicted density ratios, and visual inspection of predicted and observed distributions. Predicted distribution patterns were similar for models using ROMS output and measured data, and showed good concordance between observed sightings and model predictions. Quantitative measures of predictive ability were also similar between model types, and RMSE values were almost identical. The overall demonstrated success of the ROMS-based models opens new opportunities for dynamic species management and biodiversity monitoring because ROMS output is available in near real time and can be forecast.

  14. Agent paradigm and services technology for distributed Information Sources

    Directory of Open Access Journals (Sweden)

    Hakima Mellah

    2011-10-01

    Full Text Available The complexity of information arises from interacting information sources (IS) and could be better exploited with respect to the relevance of information. In a distributed IS system, relevant information has content that is connected with other content in the information network and is used for a certain purpose. The key point of the proposed model is to contribute to information system agility according to a three-dimensional view involving the content, the use and the structure. This reflects the relevance of information complexity and of effective methodologies, based on self-organization principles, for managing that complexity. This contribution primarily presents some factors that lead to and trigger self-organization in a Service Oriented Architecture (SOA) and how a self-organization mechanism can be integrated into it.

  15. Building predictive models of soil particle-size distribution

    Directory of Open Access Journals (Sweden)

    Alessandro Samuel-Rosa

    2013-04-01

    Full Text Available Is it possible to build predictive models (PMs of soil particle-size distribution (psd in a region with complex geology and a young and unstable land-surface? The main objective of this study was to answer this question. A set of 339 soil samples from a small slope catchment in Southern Brazil was used to build PMs of psd in the surface soil layer. Multiple linear regression models were constructed using terrain attributes (elevation, slope, catchment area, convergence index, and topographic wetness index. The PMs explained more than half of the data variance. This performance is similar to (or even better than) that of the conventional soil mapping approach. For some size fractions, the PM performance can reach 70 %. The largest uncertainties were observed in geologically more complex areas. Therefore, significant improvements in the predictions can only be achieved if accurate geological data is made available. Meanwhile, PMs built on terrain attributes are efficient in predicting the particle-size distribution (psd of soils in regions of complex geology.
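
    A hedged sketch of the modelling approach described above: a multiple linear regression of one particle-size fraction on terrain attributes. The data below are synthetic and the coefficients invented; only the structure (the five predictors and an explained-variance check) mirrors the abstract.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(42)
    n = 339
    X = np.column_stack([
        rng.normal(450, 60, n),     # elevation (m)
        rng.gamma(2.0, 4.0, n),     # slope (%)
        rng.lognormal(6, 1, n),     # catchment area (m^2)
        rng.normal(0, 10, n),       # convergence index
        rng.normal(8, 2, n),        # topographic wetness index
    ])
    clay = 20 + 0.02 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 5, n)  # synthetic clay content (%)

    model = LinearRegression().fit(X, clay)
    print(f"Explained variance (R^2): {model.score(X, clay):.2f}")    # the study reports >0.5, up to ~0.7
    ```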

  16. Popularity Prediction Tool for ATLAS Distributed Data Management

    CERN Document Server

    Beermann, T; The ATLAS collaboration; Stewart, G; Lassnig, M; Garonne, V; Barisits, M; Vigne, R; Serfon, C; Goossens, L; Nairz, A; Molfetas, A

    2013-01-01

    This paper describes a popularity prediction tool for data-intensive data management systems, such as ATLAS distributed data management (DDM). It is fed by the DDM popularity system, which produces historical reports about ATLAS data usage, providing information about files, datasets, users and sites where data was accessed. The tool described in this contribution uses this historical information to make a prediction about the future popularity of data. It finds trends in the usage of data using a set of neural networks and a set of input parameters and predicts the number of accesses in the near term future. This information can then be used in a second step to improve the distribution of replicas at sites, taking into account the cost of creating new replicas (bandwidth and load on the storage system) compared to the gain of having new ones (faster access of data for analysis). To evaluate the benefit of the redistribution a grid simulator is introduced that is able to replay real workload on different data distri...
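
    To make the idea concrete, here is a hedged Python sketch of a small neural-network regressor trained on historical access counts to predict near-term accesses; it is not the ATLAS DDM code, and the sliding-window feature construction and synthetic data are assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    weeks = 120
    accesses = np.clip(rng.poisson(50, weeks) + 30 * np.sin(np.arange(weeks) / 8.0), 0, None)

    window = 8   # use the last 8 weeks of accesses to predict the next week
    X = np.array([accesses[i:i + window] for i in range(weeks - window)])
    y = accesses[window:]

    model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
    model.fit(X[:-10], y[:-10])                  # hold out the last 10 weeks for a sanity check
    print("predicted next-week accesses:", np.round(model.predict(X[-10:]), 1))
    ```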

  17. Popularity Prediction Tool for ATLAS Distributed Data Management

    CERN Document Server

    Beermann, T; The ATLAS collaboration; Stewart, G; Lassnig, M; Garonne, V; Barisits, M; Vigne, R; Serfon, C; Goossens, L; Nairz, A; Molfetas, A

    2014-01-01

    This paper describes a popularity prediction tool for data-intensive data management systems, such as ATLAS distributed data management (DDM). It is fed by the DDM popularity system, which produces historical reports about ATLAS data usage, providing information about files, datasets, users and sites where data was accessed. The tool described in this contribution uses this historical information to make a prediction about the future popularity of data. It finds trends in the usage of data using a set of neural networks and a set of input parameters and predicts the number of accesses in the near term future. This information can then be used in a second step to improve the distribution of replicas at sites, taking into account the cost of creating new replicas (bandwidth and load on the storage system) compared to the gain of having new ones (faster access of data for analysis). To evaluate the benefit of the redistribution a grid simulator is introduced that is able to replay real workload on different data distri...

  18. Predicting Dynamical Crime Distribution From Environmental and Social Influences

    Directory of Open Access Journals (Sweden)

    Simon Garnier

    2018-05-01

    Full Text Available Understanding how social and environmental factors contribute to the spatio-temporal distribution of criminal activities is a fundamental question in modern criminology. Thanks to the development of statistical techniques such as Risk Terrain Modeling (RTM), it is possible to precisely evaluate the criminogenic contribution of environmental features at a given location. However, the role of social information in shaping the distribution of criminal acts is largely understudied by the criminological research literature. In this paper we investigate the existence of spatio-temporal correlations between successive robbery events, after controlling for environmental influences as estimated by RTM. We begin by showing that a robbery event increases the likelihood of future robberies at and in the neighborhood of its location. This event-dependent influence decreases exponentially with time and as an inverse function of the distance to the original event. We then combine event-dependence and environmental influences in a simulation model to predict robbery patterns at the scale of a large city (Newark, NJ). We show that this model significantly improves upon the predictions of RTM alone and of a model taking into account event-dependence only, when tested against real data that were not used to calibrate either model. We conclude that combining risk from exposure (past events) and vulnerability (environment), following from the Theory of Risky Places, when modeling crime distribution can improve crime suppression and prevention efforts by providing more accurate forecasting of the most likely locations of criminal events.
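
    The event-dependence described above can be sketched as a simple additive risk kernel that decays exponentially with elapsed time and as an inverse function of distance; the functional form follows the abstract, but the parameter values below are illustrative assumptions.

    ```python
    import numpy as np

    def event_risk(location, now, past_events, amp=1.0, tau_days=30.0, d0=100.0):
        """Sum of contributions from past events at a given location and time."""
        risk = 0.0
        for (x, y, t) in past_events:              # t in days, (x, y) in metres
            dt = now - t
            if dt < 0:
                continue
            dist = np.hypot(location[0] - x, location[1] - y)
            risk += amp * np.exp(-dt / tau_days) / (1.0 + dist / d0)
        return risk

    past = [(120.0, 80.0, 0.0), (500.0, 450.0, 5.0), (130.0, 95.0, 12.0)]
    print(f"risk near the first event: {event_risk((125.0, 85.0), 14.0, past):.3f}")
    print(f"risk far from all events:  {event_risk((5000.0, 5000.0), 14.0, past):.3f}")
    ```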

  19. Modified ensemble Kalman filter for nuclear accident atmospheric dispersion: prediction improved and source estimated.

    Science.gov (United States)

    Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y

    2014-09-15

    Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimate of the radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term, which greatly influence the accuracy of the atmospheric dispersion models, are usually poorly known in the early phase of the emergency. In this study, a modified ensemble Kalman filter data assimilation method in conjunction with a Lagrangian puff-model is proposed to simultaneously improve the model prediction and reconstruct the source terms for short range atmospheric dispersion using the off-site environmental monitoring data. Four main uncertainty parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in nuclear power plant accident emergency management but also in other similar situations where hazardous material is released into the atmosphere. Copyright © 2014 Elsevier B.V. All rights reserved.
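
    For readers unfamiliar with the assimilation step, the following is a bare-bones stochastic ensemble Kalman filter analysis update; it is not the authors' modified filter (which also reconstructs the source term), and the state size, observation operator and noise levels are synthetic placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_state, n_obs, n_ens = 10, 4, 50

    ensemble = rng.normal(1.0, 0.3, size=(n_state, n_ens))     # prior ensemble of concentrations
    H = np.zeros((n_obs, n_state)); H[np.arange(n_obs), [1, 3, 5, 7]] = 1.0  # observe 4 grid points
    R = 0.05 * np.eye(n_obs)                                   # observation error covariance
    y = np.array([1.4, 0.9, 1.2, 1.1])                         # monitoring data (synthetic)

    A = ensemble - ensemble.mean(axis=1, keepdims=True)        # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                                  # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)               # Kalman gain

    # Perturbed-observation update applied member by member
    y_pert = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    analysis = ensemble + K @ (y_pert - H @ ensemble)
    print("prior mean:   ", np.round(ensemble.mean(axis=1), 2))
    print("analysis mean:", np.round(analysis.mean(axis=1), 2))
    ```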

  20. Spatial distribution of carbon sources and sinks in Canada's forests

    International Nuclear Information System (INIS)

    Chen, Jing M.; Weimin, Ju; Liu, Jane; Cihlar, Josef; Chen, Wenjun

    2003-01-01

    Annual spatial distributions of carbon sources and sinks in Canada's forests at 1 km resolution are computed for the period from 1901 to 1998 using ecosystem models that integrate remote sensing images, gridded climate, soils and forest inventory data. GIS-based fire scar maps for most regions of Canada are used to develop a remote sensing algorithm for mapping and dating forest burned areas in the 25 yr prior to 1998. These mapped and dated burned areas are used in combination with inventory data to produce a complete image of forest stand age in 1998. Empirical NPP–age relationships were used to simulate the annual variations of forest growth and carbon balance in 1 km pixels, each treated as a homogeneous forest stand. Annual CO2 flux data from four sites were used for model validation. Averaged over the period 1990-1998, the carbon source and sink map for Canada's forests shows the following features: (i) large spatial variations corresponding to the patchiness of recent fire scars and productive forests and (ii) a general south-to-north gradient of decreasing carbon sink strength and increasing source strength. This gradient results mostly from differential effects of temperature increase on growing season length, nutrient mineralization and heterotrophic respiration at different latitudes as well as from uneven nitrogen deposition. The results from the present study are compared with those of two previous studies. The comparison suggests that the overall positive effects of non-disturbance factors (climate, CO2 and nitrogen) outweighed the effects of increased disturbances in the last two decades, making Canada's forests a carbon sink in the 1980s and 1990s. Comparisons of the modeled results with tower-based eddy covariance measurements of net ecosystem exchange at four forest stands indicate that the sink values from the present study may be underestimated

  1. Modeling the distribution of Culex tritaeniorhynchus to predict Japanese encephalitis distribution in the Republic of Korea

    Directory of Open Access Journals (Sweden)

    Penny Masuoka

    2010-11-01

    Full Text Available Over 35,000 cases of Japanese encephalitis (JE) are reported worldwide each year. Culex tritaeniorhynchus is the primary vector of the JE virus, while wading birds are natural reservoirs and swine amplifying hosts. As part of a JE risk analysis, the ecological niche modeling programme, Maxent, was used to develop a predictive model for the distribution of Cx. tritaeniorhynchus in the Republic of Korea, using mosquito collection data, temperature, precipitation, elevation, land cover and the normalized difference vegetation index (NDVI). The resulting probability maps from the model were consistent with the known environmental limitations of the mosquito with low probabilities predicted for forest covered mountains. July minimum temperature and land cover were the most important variables in the model. Elevation, summer NDVI (July-September), precipitation in July, summer minimum temperature (May-August) and maximum temperature for fall and winter months also contributed to the model. Comparison of the Cx. tritaeniorhynchus model to the distribution of JE cases in the Republic of Korea from 2001 to 2009 showed that cases among a highly vaccinated Korean population were located in high-probability areas for Cx. tritaeniorhynchus. No recent JE cases were reported from the eastern coastline, where higher probabilities of mosquitoes were predicted, but where only small numbers of pigs are raised. The geographical distribution of reported JE cases corresponded closely with the predicted high-probability areas for Cx. tritaeniorhynchus, making the map a useful tool for health risk analysis that could be used for planning preventive public health measures.
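
    The study used the Maxent software; as a much simplified stand-in for the same presence/background idea, the sketch below fits a regularised logistic model to synthetic presence and background points described by a few environmental covariates. It is not the Maxent algorithm, and all covariate values are invented.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    n_bg, n_pres = 2000, 200

    # covariates: [July minimum temperature (deg C), summer NDVI, elevation (m)]
    background = np.column_stack([rng.normal(18, 4, n_bg), rng.uniform(0.2, 0.9, n_bg),
                                  rng.uniform(0, 1200, n_bg)])
    presence = np.column_stack([rng.normal(22, 2, n_pres), rng.uniform(0.4, 0.8, n_pres),
                                rng.uniform(0, 300, n_pres)])

    X = np.vstack([background, presence])
    y = np.concatenate([np.zeros(n_bg), np.ones(n_pres)])

    clf = LogisticRegression(C=1.0, max_iter=1000).fit(X, y)
    cell = np.array([[23.0, 0.6, 150.0]])           # a warm, vegetated, low-elevation cell
    print(f"relative suitability: {clf.predict_proba(cell)[0, 1]:.2f}")
    ```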

  2. Distribution of tessera terrain on Venus: Prediction for Magellan

    International Nuclear Information System (INIS)

    Bindschadler, D.L.; Head, J.W.; Kreslavsky, M.A.; Shkuratov, Yu.G.; Ivanov, M.A.; Basilevsky, A.T.

    1990-01-01

    Tessera terrain is the dominant tectonic unit in the northern hemisphere of Venus and is characterized by complex sets of intersecting structural trends and distinctive radar properties due to a high degree of meter and sub-meter scale (5 cm to 10 m) roughness. Based on these distinctive radar properties, a prediction of the global distribution of tessera can be made using Pioneer Venus (PV) reflectivity and roughness data. Where available, Venera 15/16 and Arecibo images and PV diffuse scattering data were used to evaluate the prediction. From this assessment, the authors conclude that most of the regions with prediction values greater than 0.6 (out of 1) are likely to be tessera, and are almost certain to be tectonically deformed. Lada Terra and Phoebe Regio are very likely to contain tessera terrain, while much of Aphrodite Terra is most likely to be either tessera or a landform which has not yet been recognized on Venus. This prediction map will assist in targeting Magellan investigations of Venus tectonics

  3. Model Predictive Control for Distributed Microgrid Battery Energy Storage Systems

    DEFF Research Database (Denmark)

    Morstyn, Thomas; Hredzak, Branislav; Aguilera, Ricardo P.

    2018-01-01

    This brief proposes a new convex model predictive control (MPC) strategy for dynamic optimal power flow between battery energy storage (ES) systems distributed in an ac microgrid. The proposed control strategy uses a new problem formulation, based on a linear d–q reference frame voltage ..., and converter current constraints to be addressed. In addition, nonlinear variations in the charge and discharge efficiencies of lithium ion batteries are analyzed and included in the control strategy. Real-time digital simulations were carried out for an islanded microgrid based on the IEEE 13 bus prototypical feeder, with distributed battery ES systems and intermittent photovoltaic generation. It is shown that the proposed control strategy approaches the performance of a strategy based on nonconvex optimization, while reducing the required computation time by a factor of 1000, making it suitable for a real...
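
    A toy convex dispatch problem in the spirit of the brief described above (not its d–q voltage formulation): schedule one battery over a horizon to flatten a net-load profile subject to power and energy limits. The cvxpy modelling package and all numerical values are assumptions made for illustration.

    ```python
    import numpy as np
    import cvxpy as cp

    T = 24
    net_load = 3.0 + 2.0 * np.sin(np.linspace(0, 2 * np.pi, T))   # kW, synthetic profile

    p = cp.Variable(T)          # battery power, +discharge / -charge (kW)
    e = cp.Variable(T + 1)      # stored energy (kWh)

    constraints = [e[0] == 5.0, e[T] >= 5.0,
                   e[1:] == e[:-1] - p,        # 1 h steps; losses ignored for brevity
                   cp.abs(p) <= 2.0,           # converter power limit
                   e >= 0.0, e <= 10.0]        # energy capacity limits

    objective = cp.Minimize(cp.sum_squares(net_load - p))   # flatten the grid exchange
    cp.Problem(objective, constraints).solve()
    print("battery schedule (kW):", np.round(p.value, 2))
    ```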

  4. Distributed predictive control of spiral wave in cardiac excitable media

    International Nuclear Information System (INIS)

    Zheng-Ning, Gan; Xin-Ming, Cheng

    2010-01-01

    In this paper, we propose distributed predictive control strategies for spiral waves in cardiac excitable media. The modified FitzHugh–Nagumo model was used to approximate the cardiac excitable media. Based on the control-Lyapunov theory, we obtained the distributed control equation, which consists of a positive control-Lyapunov function and a positive cost function. Using the equation, we investigate two kinds of robust control strategies: the time-dependent distributed control strategy and the space-time dependent distributed control strategy. The feasibility of the strategies was demonstrated via an illustrative example, in which the spiral wave was prevented from occurring and the possibility of inducing ventricular fibrillation was eliminated. The strategies are helpful in designing various cardiac devices. Since the second strategy is more efficient and robust than the first one, and the response time in the second strategy is far less than that in the first one, the second strategy is suitable for quick-response control systems. In addition, our spatiotemporal control strategies, especially the second strategy, can be applied to other cardiac models, even to other reaction-diffusion systems. (general)
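
    For orientation only, the next sketch integrates a single-cell FitzHugh–Nagumo model; the paper works with a modified, spatially extended version plus a control-Lyapunov feedback, neither of which is reproduced here, and the parameter values are standard textbook choices rather than those of the paper.

    ```python
    # Minimal single-cell FitzHugh-Nagumo integration (explicit Euler).
    a, b, eps, I_ext = 0.7, 0.8, 0.08, 0.5
    dt, steps = 0.05, 4000
    v, w = -1.0, 1.0
    trace = []
    for _ in range(steps):
        dv = v - v ** 3 / 3.0 - w + I_ext
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
        trace.append(v)

    print(f"membrane variable after {steps} steps: v = {v:.3f}")
    print(f"max/min of v over the run: {max(trace):.2f} / {min(trace):.2f}")
    ```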

  5. Natural ventilation in an enclosure induced by a heat source distributed uniformly over a vertical wall

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Z.D.; Li, Y.; Mahoney, J. [CSIRO Building, Construction and Engineering, Advanced Thermo-Fluids Technologies Lab., Highett, VIC (Australia)

    2001-05-01

    A simple multi-layer stratification model is suggested for displacement ventilation in a single-zone building driven by a heat source distributed uniformly over a vertical wall. Theoretical expressions are obtained for the stratification interface height and ventilation flow rate and compared with those obtained by an existing model available in the literature. Experiments were also carried out using a recently developed fine-bubble modelling technique. It was shown that the experimental results obtained using the fine-bubble technique are in good agreement with the theoretical predictions. (Author)

  6. Evaluation of Airborne Remote Sensing Techniques for Predicting the Distribution of Energetic Compounds on Impact Areas

    National Research Council Canada - National Science Library

    Graves, Mark R; Dove, Linda P; Jenkins, Thomas F; Bigl, Susan; Walsh, Marianne E; Hewitt, Alan D; Lambert, Dennis; Perron, Nancy; Ramsey, Charles; Gamey, Jeff; Beard, Les; Doll, William E; Magoun, Dale

    2007-01-01

    .... These sampling approaches do not accurately account for the distribution of such contaminants over the landscape due to the distributed nature of explosive compound sources throughout impact areas...

  7. Affordable non-traditional source data mining for context assessment to improve distributed fusion system robustness

    Science.gov (United States)

    Bowman, Christopher; Haith, Gary; Steinberg, Alan; Morefield, Charles; Morefield, Michael

    2013-05-01

    This paper describes methods to affordably improve the robustness of distributed fusion systems by opportunistically leveraging non-traditional data sources. Adaptive methods help find relevant data, create models, and characterize the model quality. These methods can also measure the conformity of this non-traditional data with fusion system products including situation modeling and mission impact prediction. Non-traditional data can improve the quantity, quality, availability, timeliness, and diversity of the baseline fusion system sources and therefore can improve prediction and estimation accuracy and robustness at all levels of fusion. Techniques are described that automatically learn to characterize and search non-traditional contextual data to enable operators to integrate the data with high-level fusion systems and ontologies. These techniques apply the extension of the Data Fusion & Resource Management Dual Node Network (DNN) technical architecture at Level 4. The DNN architecture supports effective assessment and management of the expanded portfolio of data sources, entities of interest, models, and algorithms including data pattern discovery and context conformity. Affordable model-driven and data-driven data mining methods to discover unknown models from non-traditional and `big data' sources are used to automatically learn entity behaviors and correlations with fusion products [14, 15]. This paper describes our context assessment software development, and the demonstration of context assessment of non-traditional data to compare to an intelligence, surveillance and reconnaissance fusion product based upon an IED POIs workflow.

  8. 99Tc in the environment. Sources, distribution and methods

    International Nuclear Information System (INIS)

    Garcia-Leon, Manuel

    2005-01-01

    99Tc is a β-emitter, Emax = 294 keV, with a very long half-life (T1/2 = 2.11 x 10^5 y). It is mainly produced in the fission of 235U and 239Pu at a rate of about 6%. This rate together with its long half-life makes it a significant nuclide in the whole nuclear fuel cycle, from which it can be introduced into the environment at different rates depending on the cycle step. A gross estimation shows that adding all the possible sources, at least 2000 TBq had been released into the environment up to 2000 and that up to the middle of the nineties of the last century some 64000 TBq had been produced worldwide. Nuclear explosions have liberated some 160 TBq into the environment. In this work, the environmental distribution of 99Tc as well as the methods for its determination will be discussed. Emphasis is put on the environmental relevance of 99Tc, mainly with regard to the future committed radiation dose received by the population and to the problem of nuclear waste management. Its determination at environmental levels is a challenging task. For that, special mention is made of the mass spectrometric methods for its measurement. (author)

  9. Effect of tissue inhomogeneity on dose distribution of point sources of low-energy electrons

    International Nuclear Information System (INIS)

    Kwok, C.S.; Bialobzyski, P.J.; Yu, S.K.; Prestwich, W.V.

    1990-01-01

    Perturbation in dose distributions of point sources of low-energy electrons at planar interfaces of cortical bone (CB) and red marrow (RM) was investigated experimentally and by Monte Carlo codes EGS and the TIGER series. Ultrathin LiF thermoluminescent dosimeters were used to measure the dose distributions of point sources of 204Tl and 147Pm in RM. When the point sources were at 12 mg/cm2 from a planar interface of CB and RM equivalent plastics, dose enhancement ratios in RM averaged over the region 0–12 mg/cm2 from the interface were measured to be 1.08±0.03 (SE) and 1.03±0.03 (SE) for 204Tl and 147Pm, respectively. The Monte Carlo codes predicted 1.05±0.02 and 1.01±0.02 for the two nuclides, respectively. However, EGS gave consistently 3% higher dose in the dose scoring region than the TIGER series when point sources of monoenergetic electrons up to 0.75 MeV energy were considered in the homogeneous RM situation or in the CB and RM heterogeneous situation. By means of the TIGER series, it was demonstrated that aluminum, which is normally assumed to be equivalent to CB in radiation dosimetry, leads to an overestimation of backscattering of low-energy electrons in soft tissue at a CB–soft-tissue interface by as much as a factor of 2

  10. Precision predictions for Higgs differential distributions at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Ebert, Markus

    2017-08-15

    After the discovery of a Standard-Model-like Higgs boson at the LHC a central aspect of the LHC physics program is to study the Higgs boson's couplings to Standard Model particles in detail in order to elucidate the nature of the Higgs mechanism and to search for hints of physics beyond the Standard Model. This requires precise theory predictions for both inclusive and differential Higgs cross sections. In this thesis we focus on the application of resummation techniques in the framework of Soft-Collinear Effective Theory (SCET) to obtain accurate predictions with reliable theory uncertainties for various observables. We first consider transverse momentum distributions, where the resummation of large logarithms in momentum (or distribution) space has been a long-standing open question. We show that its two-dimensional nature leads to additional difficulties not observed in one-dimensional observables such as thrust, and solving the associated renormalization group equations (RGEs) in momentum space thus requires a very careful scale setting. This is achieved using distributional scale setting, a new technique to solve differential equations such as RGEs directly in distribution space, as it allows one to treat logarithmic plus distributions like ordinary logarithms. We show that the momentum space solution fundamentally differs from the standard resummation in Fourier space by different boundary terms to all orders in perturbation theory and hence provides an interesting and complementary approach to obtain new insight into the all-order perturbative and nonperturbative structure of transverse momentum distributions. Our work lays the ground for a detailed numerical study of the momentum space resummation. We then show that in the case of a discovery of a new heavy color-singlet resonance such as a heavy Higgs boson, one can reliably and model-independently infer its production mechanism by dividing the data into two mutually exclusive jet bins. The method is

  11. Optimal planning of multiple distributed generation sources in distribution networks: A new approach

    Energy Technology Data Exchange (ETDEWEB)

    AlRashidi, M.R., E-mail: malrash2002@yahoo.com [Department of Electrical Engineering, College of Technological Studies, Public Authority for Applied Education and Training (PAAET) (Kuwait); AlHajri, M.F., E-mail: mfalhajri@yahoo.com [Department of Electrical Engineering, College of Technological Studies, Public Authority for Applied Education and Training (PAAET) (Kuwait)

    2011-10-15

    Highlights: → A new hybrid PSO for optimal DGs placement and sizing. → Statistical analysis to fine tune PSO parameters. → Novel constraint handling mechanism to handle different constraints types. - Abstract: An improved particle swarm optimization algorithm (PSO) is presented for optimal planning of multiple distributed generation sources (DG). This problem can be divided into two sub-problems: the DG optimal size (continuous optimization) and location (discrete optimization) to minimize real power losses. The proposed approach addresses the two sub-problems simultaneously using an enhanced PSO algorithm capable of handling multiple DG planning in a single run. A design of experiment is used to fine tune the proposed approach via proper analysis of PSO parameters interaction. The proposed algorithm treats the problem constraints differently by adopting a radial power flow algorithm to satisfy the equality constraints, i.e. power flows in distribution networks, while the inequality constraints are handled by making use of some of the PSO features. The proposed algorithm was tested on the practical 69-bus power distribution system. Different test cases were considered to validate the proposed approach consistency in detecting optimal or near optimal solution. Results are compared with those of Sequential Quadratic Programming.

  12. Optimal planning of multiple distributed generation sources in distribution networks: A new approach

    International Nuclear Information System (INIS)

    AlRashidi, M.R.; AlHajri, M.F.

    2011-01-01

    Highlights: → A new hybrid PSO for optimal DGs placement and sizing. → Statistical analysis to fine tune PSO parameters. → Novel constraint handling mechanism to handle different constraints types. - Abstract: An improved particle swarm optimization algorithm (PSO) is presented for optimal planning of multiple distributed generation sources (DG). This problem can be divided into two sub-problems: the DG optimal size (continuous optimization) and location (discrete optimization) to minimize real power losses. The proposed approach addresses the two sub-problems simultaneously using an enhanced PSO algorithm capable of handling multiple DG planning in a single run. A design of experiment is used to fine tune the proposed approach via proper analysis of PSO parameters interaction. The proposed algorithm treats the problem constraints differently by adopting a radial power flow algorithm to satisfy the equality constraints, i.e. power flows in distribution networks, while the inequality constraints are handled by making use of some of the PSO features. The proposed algorithm was tested on the practical 69-bus power distribution system. Different test cases were considered to validate the proposed approach consistency in detecting optimal or near optimal solution. Results are compared with those of Sequential Quadratic Programming.
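
    A much-simplified particle swarm sketch in the spirit of the planning approach in the two records above: it optimises DG sizes at fixed candidate buses against a surrogate loss function. The real method also optimises locations and evaluates losses with a radial power flow; here the loss is an invented quadratic stand-in and all limits are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def losses(sizes):                        # surrogate for real power losses (MW)
        target = np.array([1.2, 0.8, 0.5])    # invented 'ideal' DG sizes
        return 0.05 + np.sum((sizes - target) ** 2)

    n_particles, n_dims, iters = 20, 3, 200
    x = rng.uniform(0, 2, (n_particles, n_dims)); v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([losses(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, 0.0, 2.0)          # keep sizes within assumed unit limits
        vals = np.array([losses(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()

    print("best DG sizes (MW):", np.round(gbest, 3), " losses:", round(losses(gbest), 4))
    ```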

  13. Spatial Regression and Prediction of Water Quality in a Watershed with Complex Pollution Sources.

    Science.gov (United States)

    Yang, Xiaoying; Liu, Qun; Luo, Xingzhang; Zheng, Zheng

    2017-08-16

    Fast economic development, burgeoning population growth, and rapid urbanization have led to complex pollution sources contributing to water quality deterioration simultaneously in many developing countries including China. This paper explored the use of spatial regression to evaluate the impacts of watershed characteristics on ambient total nitrogen (TN) concentration in a heavily polluted watershed and make predictions across the region. Regression results have confirmed the substantial impact on TN concentration by a variety of point and non-point pollution sources. In addition, spatial regression has yielded better performance than ordinary regression in predicting TN concentrations. Due to its best performance in cross-validation, the river distance based spatial regression model was used to predict TN concentrations across the watershed. The prediction results have revealed a distinct pattern in the spatial distribution of TN concentrations and identified three critical sub-regions in priority for reducing TN loads. Our study results have indicated that spatial regression could potentially serve as an effective tool to facilitate water pollution control in watersheds under diverse physical and socio-economical conditions.

  14. Technical note: Combining quantile forecasts and predictive distributions of streamflows

    Science.gov (United States)

    Bogner, Konrad; Liechti, Katharina; Zappa, Massimiliano

    2017-11-01

    The enhanced availability of many different hydro-meteorological modelling and forecasting systems raises the issue of how to optimally combine this wealth of information. In particular, the use of deterministic and probabilistic forecasts with sometimes widely divergent predicted future streamflow values makes it even more complicated for decision makers to sift out the relevant information. In this study, multiple streamflow forecasts will be aggregated based on several different predictive distributions and quantile forecasts. For this combination, the Bayesian model averaging (BMA) approach, the non-homogeneous Gaussian regression (NGR), also known as the ensemble model output statistics (EMOS) technique, and a novel method called Beta-transformed linear pooling (BLP) will be applied. With the help of the quantile score (QS) and the continuous ranked probability score (CRPS), the combination results for the Sihl River in Switzerland with about 5 years of forecast data will be compared and the differences between the raw and optimally combined forecasts will be highlighted. The results demonstrate the importance of applying proper forecast combination methods for decision makers in the field of flood and water resource management.
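
    As a small numerical illustration of the two verification scores mentioned above, the sketch below evaluates the quantile (pinball) score and a sample-based CRPS estimate for a synthetic ensemble forecast; the ensemble and observation values are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    ensemble = rng.normal(120.0, 15.0, size=500)   # forecast members (m^3/s)
    obs = 135.0                                    # observed streamflow (m^3/s)

    def quantile_score(forecast_q, obs, tau):
        """Pinball loss for a single quantile forecast at level tau."""
        diff = obs - forecast_q
        return max(tau * diff, (tau - 1.0) * diff)

    def crps_sample(members, obs):
        """CRPS estimated from an ensemble: E|X - y| - 0.5 E|X - X'|."""
        term1 = np.mean(np.abs(members - obs))
        term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
        return term1 - term2

    q90 = np.quantile(ensemble, 0.9)
    print(f"QS at tau=0.9: {quantile_score(q90, obs, 0.9):.2f}")
    print(f"CRPS: {crps_sample(ensemble, obs):.2f}")
    ```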

  15. Prediction of oil droplet size distribution in agitated aquatic environments

    International Nuclear Information System (INIS)

    Khelifa, A.; Lee, K.; Hill, P.S.

    2004-01-01

    Oil spilled at sea undergoes many transformations based on physical, biological and chemical processes. Vertical dispersion is the hydrodynamic mechanism controlled by turbulent mixing due to breaking waves, vertical velocity, density gradients and other environmental factors. Spilled oil is dispersed in the water column as small oil droplets. In order to estimate the mass of an oil slick in the water column, it is necessary to know how the droplets formed. Also, the vertical dispersion and fate of oil spilled in aquatic environments can be modelled if the droplet-size distribution of the oil droplets is known. An oil spill remediation strategy can then be implemented. This paper presented a newly developed Monte Carlo model to predict droplet-size distribution due to Brownian motion, turbulence and a differential settling at equilibrium. A kinematic model was integrated into the proposed model to simulate droplet breakage. The key physical input of the model is the maximum droplet size permissible in the simulation. Laboratory studies were found to be in good agreement with field studies. 26 refs., 1 tab., 5 figs

  16. Calculation of spatial distribution of the EURACOS II converter source

    International Nuclear Information System (INIS)

    Santo, A.C.F. de

    1985-01-01

    The neutron spatial flux from the EURACOS (Enriched Uranium Converter Source) device is obtained and adjusted to experimental measurements. The EURACOS device is a converter source consisting of a circular plate of highly enriched uranium (90%). The converter provides an intense source of fast neutrons with an energy spectrum close to the fission spectrum. (M.C.K.) [pt

  17. Distribution and Sources of Black Carbon in the Arctic

    Science.gov (United States)

    Qi, Ling

    The Arctic is warming at twice the global rate over recent decades. To slow down this warming trend, there is growing interest in reducing the impact from short-lived climate forcers, such as black carbon (BC), because the benefits of mitigation are seen more quickly relative to CO2 reduction. To propose efficient mitigation policies, it is imperative to improve our understanding of BC distribution in the Arctic and to identify the sources. In this dissertation, we investigate the sensitivity of BC in the Arctic, including BC concentrations in snow (BCsnow) and BC concentrations in air (BCair), to emissions, dry deposition and wet scavenging using a global 3-D chemical transport model (CTM) GEOS-Chem. By including flaring emissions, estimating dry deposition velocity using the resistance-in-series method, and including Wegener-Bergeron-Findeisen (WBF) in wet scavenging, simulated BCsnow in the eight Arctic sub-regions agree with the observations within a factor of two, and simulated BCair fall within the uncertainty range of observations. Specifically, we find that natural gas flaring emissions in Western Extreme North of Russia (WENR) strongly enhance BCsnow (by up to ∼50%) and BCair (by 20-32%) during the snow season in the so-called 'Arctic front', but have negligible impact on BC in the free troposphere. The updated dry deposition velocity over snow and ice is much larger than those used in most global CTMs and agrees better with observation. The resulting BCsnow changes marginally because of the offsetting of higher dry and lower wet deposition fluxes. In contrast, surface BCair decreases strongly due to the faster dry deposition (by 27-68%). WBF occurs when the environmental vapor pressure is in between the saturation vapor pressure of ice crystals and water drops in mixed-phase clouds. As a result, water drops evaporate and release BC particles in them back into the interstitial air. In most CTMs, WBF is either missing or represented by a uniform and low BC

  18. Using geomorphological variables to predict the spatial distribution of plant species in agricultural drainage networks.

    Science.gov (United States)

    Rudi, Gabrielle; Bailly, Jean-Stéphane; Vinatier, Fabrice

    2018-01-01

    To optimize ecosystem services provided by agricultural drainage networks (ditches) in headwater catchments, we need to manage the spatial distribution of plant species living in these networks. Geomorphological variables have been shown to be important predictors of plant distribution in other ecosystems because they control the water regime, the sediment deposition rates and the sun exposure in the ditches. Whether such variables may be used to predict plant distribution in agricultural drainage networks is unknown. We collected presence and absence data for 10 herbaceous plant species in a subset of a network of drainage ditches (35 km long) within a Mediterranean agricultural catchment. We simulated their spatial distribution with GLM and Maxent models using geomorphological variables and distance to natural lands and roads. Models were validated using k-fold cross-validation. We then compared the mean Area Under the Curve (AUC) values obtained for each model and other metrics derived from the confusion matrices between observed and predicted variables. Based on the results of all metrics, the models were efficient at predicting the distribution of seven species out of ten, confirming the relevance of geomorphological variables and distance to natural lands and roads to explain the occurrence of plant species in this Mediterranean catchment. In particular, the importance of the landscape geomorphological variables, i.e. the geomorphological features encompassing a broad environment around the ditch, has been highlighted. This suggests that agro-ecological measures for managing ecosystem services provided by ditch plants should focus on the control of the hydrological and sedimentological connectivity at the catchment scale. For example, the density of the ditch network could be modified or the spatial distribution of vegetative filter strips used for sediment trapping could be optimized. In addition, the vegetative filter strips could constitute
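
    The validation procedure described above can be sketched as k-fold cross-validation of a presence/absence model scored by AUC; a logistic-regression GLM on synthetic geomorphological predictors stands in for the study's GLM and Maxent models, and the predictor names and coefficients are assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(8)
    n = 400
    X = np.column_stack([
        rng.normal(0, 1, n),          # e.g. upslope contributing area (standardised)
        rng.normal(0, 1, n),          # e.g. ditch slope (standardised)
        rng.exponential(200, n),      # e.g. distance to natural lands (m)
    ])
    logit = 0.8 * X[:, 0] - 0.5 * X[:, 1] - 0.004 * X[:, 2]
    presence = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

    model = LogisticRegression(max_iter=1000)
    auc = cross_val_score(model, X, presence, cv=5, scoring="roc_auc")
    print("mean AUC over 5 folds:", round(auc.mean(), 3))
    ```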

  19. A Predictive Model for Microbial Counts on Beaches where Intertidal Sand is the Primary Source

    Science.gov (United States)

    Feng, Zhixuan; Reniers, Ad; Haus, Brian K.; Solo-Gabriele, Helena M.; Wang, John D.; Fleming, Lora E.

    2015-01-01

    Human health protection at recreational beaches requires accurate and timely information on microbiological conditions to issue advisories. The objective of this study was to develop a new numerical mass balance model for enterococci levels on nonpoint source beaches. The significant advantage of this model is its easy implementation, and it provides a detailed description of the cross-shore distribution of enterococci that is useful for beach management purposes. The performance of the balance model was evaluated by comparing predicted exceedances of a beach advisory threshold value to field data, and to a traditional regression model. Both the balance model and regression equation predicted approximately 70% the advisories correctly at the knee depth and over 90% at the waist depth. The balance model has the advantage over the regression equation in its ability to simulate spatiotemporal variations of microbial levels, and it is recommended for making more informed management decisions. PMID:25840869

  20. A Distributed Model Predictive Control approach for the integration of flexible loads, storage and renewables

    DEFF Research Database (Denmark)

    Ferrarini, Luca; Mantovani, Giancarlo; Costanzo, Giuseppe Tommaso

    2014-01-01

    This paper presents an innovative solution based on distributed model predictive controllers to integrate the control and management of energy consumption, energy storage, PV and wind generation at customer side. The overall goal is to enable an advanced prosumer to autoproduce part of the energy he needs with renewable sources and, at the same time, to optimally exploit the thermal and electrical storages, to trade off its comfort requirements with different pricing schemes (including real-time pricing), and apply optimal control techniques rather than sub-optimal heuristics.

  1. The distribution of polarized radio sources >15 μJy IN GOODS-N

    International Nuclear Information System (INIS)

    Rudnick, L.; Owen, F. N.

    2014-01-01

    We present deep Very Large Array observations of the polarization of radio sources in the GOODS-N field at 1.4 GHz at resolutions of 1.6″ and 10″. At 1.6″, we find that the peak flux cumulative number count distribution is N(>p) ∼ 45 (p/30 μJy)^(−0.6) per square degree above a detection threshold of 14.5 μJy. This represents a break from the steeper slopes at higher flux densities, resulting in fewer sources predicted for future surveys with the Square Kilometer Array and its precursors. It provides a significant challenge for using background rotation measures (RMs) to study clusters of galaxies or individual galaxies. Most of the polarized sources are well above our detection limit, and they are also radio galaxies that are well-resolved even at 10″, with redshifts from ∼0.2-1.9. We determined a total polarized flux for each source by integrating the 10″ polarized intensity maps, as will be done by upcoming surveys such as POSSUM. These total polarized fluxes are a factor of two higher, on average, than the peak polarized flux at 1.6″; this would increase the number counts by ∼50% at a fixed flux level. The detected sources have RMs with a characteristic rms scatter of ∼11 rad m^−2 around the local Galactic value, after eliminating likely outliers. The median fractional polarization from all total intensity sources does not continue the trend of increasing at lower flux densities, as seen for stronger sources. The changes in the polarization characteristics seen at these low fluxes likely represent the increasing dominance of star-forming galaxies.

  2. Predicting weed problems in maize cropping by species distribution modelling

    Directory of Open Access Journals (Sweden)

    Bürger, Jana

    2014-02-01

    Full Text Available Increasing maize cultivation and changed cropping practices promote the selection of typical maize weeds that may also profit strongly from climate change. Predicting potential weed problems is of high interest for plant production. Within the project KLIFF, experiments were combined with species distribution modelling for this task in the region of Lower Saxony, Germany. For our study, we modelled ecological and damage niches of nine weed species that are significant and widespread in maize cropping in a number of European countries. Species distribution models describe the ecological niche of a species, that is, the environmental conditions under which a species can maintain a vital population. It is also possible to estimate a damage niche, i.e. the conditions under which a species causes damage in agricultural crops. For this, we combined occurrence data from European national databases with high resolution climate, soil and land use data. Models were also projected to simulated climate conditions for the time horizon 2070 - 2100 in order to estimate climate change effects. Modelling results indicate favourable conditions for typical maize weed occurrence virtually all over the study region, but only a few species are important in maize cropping. This is in good accordance with the findings of an earlier maize weed monitoring. Reaction to changing climate conditions is species-specific: for some species it is neutral (E. crus-galli), other species may gain (Polygonum persicaria) or lose (Viola arvensis) large areas of suitable habitats. All species with damage potential under present conditions will remain important in maize cropping, and some more species will gain regional importance (Calystegia sepium, Setaria viridis).

  3. Distribution and Sources of Nitrate-Nitrogen in Kansas Groundwater

    Directory of Open Access Journals (Sweden)

    Margaret A. Townsend

    2001-01-01

    Full Text Available Kansas is primarily an agricultural state. Irrigation water and fertilizer use data show long-term increasing trends. Similarly, nitrate-N concentrations in groundwater show long-term increases and exceed the drinking-water standard of 10 mg/l in many areas. A statistical analysis of nitrate-N data collected for local and regional studies in Kansas from 1990 to 1998 (747 samples) found significant relationships between nitrate-N concentration and the depth, age, and geographic location of wells. Sources of nitrate-N have been identified for 297 water samples by using nitrogen stable isotopes. Of these samples, 48% showed fertilizer sources (+2 to +8) and 34% showed either animal waste sources (+10 to +15) with nitrate-N greater than 10 mg/l, or an indication that enrichment processes had occurred (+10 or above, with variable nitrate-N), or both. Ultimate sources for nitrate include nonpoint sources associated with past farming and fertilization practices, and point sources such as animal feed lots, septic systems, and commercial fertilizer storage units. Detection of nitrate from various sources in aquifers of different depths in geographically varied areas of the state indicates that nonpoint and point sources currently impact and will continue to impact groundwater under current land uses.

  4. Predicting the Distribution of Yellowfin Tuna in Philippine Waters

    Science.gov (United States)

    Perez, G. J. P.; Leonardo, E. M.

    2015-12-01

    The Philippines is considered a major tuna producer in the Western and Central Pacific Ocean, both for domestic consumption and on an industrial scale. However, with the ever-increasing demand of a growing population, it has always been a challenge to achieve sustainable fishing. The creation of satellite-derived potential fishing zone maps is a technology that has been adopted by advanced countries for almost three decades already and has led to reductions in search times of up to 40%. In this study, a Generalized Additive Model (GAM) is developed to predict the distribution of the Yellowfin tuna species in seas surrounding the Philippines based on the Catch-Per-Unit-Effort (CPUE) index. Level 3 gridded chlorophyll-a and sea surface temperature from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua satellite of the National Aeronautics and Space Administration (NASA) are the main input parameters of the model. Chlorophyll-a is linked with the presence of phytoplankton, which indicates primary productivity and suggests potential regions of fish aggregation. Fish also prefer to stay in regions where the temperature is stable, thus sea surface temperature fronts serve as a guide to locate concentrations of fish schools. Historical monthly tuna catch data from the Western and Central Pacific Fisheries Commission (WCPFC) are used to train the model. The resulting predictions are converted to potential fishing zone maps and are evaluated within and beyond the historical time range of the training data used. Diagnostic tests involving the adjusted R2 value, GAM residual plots and the root mean square error value are used to assess the accuracy of the model. The generated maps were able to confirm locations of known tuna fishing grounds in Mindanao and other parts of the country, as well as detect their seasonality and interannual variability. To improve the performance of the model, ancillary data such as surface winds reanalysis from National Centers for

  5. Uncertainties of predictions from parton distributions 1, experimental errors

    CERN Document Server

    Martin, A D; Stirling, William James; Thorne, R S; CERN. Geneva

    2003-01-01

    We determine the uncertainties on observables arising from the errors on the experimental data that are fitted in the global MRST2001 parton analysis. By diagonalizing the error matrix we produce sets of partons suitable for use within the framework of linear propagation of errors, which is the most convenient method for calculating the uncertainties. Despite the potential limitations of this approach we find that it can be made to work well in practice. This is confirmed by our alternative approach of using the more rigorous Lagrange multiplier method to determine the errors on physical quantities directly. As particular examples we determine the uncertainties on the predictions of the charged-current deep-inelastic structure functions, on the cross-sections for W production and for Higgs boson production via gluon--gluon fusion at the Tevatron and the LHC, on the ratio of W-minus to W-plus production at the LHC and on the moments of the non-singlet quark distributions. We discuss the corresponding uncertain...

  6. Prediction of HAMR Debris Population Distribution Released from GEO Space

    Science.gov (United States)

    Rosengren, A.; Scheeres, D.

    2012-09-01

    in inclination. When the nodal rate of the system is commensurate with the nodal rate of the Moon, the perturbations build up more effectively over long periods to produce significant effects on the orbit. Such resonances, which occur for a class of HAMR objects that are not cleared out of orbit, give rise to strongly changing dynamics over longer time periods. In this paper, we present the averaged model, and discuss its fundamental predictions and comparisons with explicit long-term numerical integrations of HAMR objects in GEO space. Using this tool, we study a range of HAMR objects, released in geostationary orbit, with various area-to-mass ratios, and predict the spatiotemporal distribution of the population. We identified a unique systematic structure associated with their distribution in inclination and ascending node phase space. Given that HAMR objects are the most difficult to target from an observational point of view, this work will have many implications for the space surveillance community, and will allow observers to implement better search strategies for this class of debris.

  7. X-Ray imager power source on distribution trailers

    International Nuclear Information System (INIS)

    Johns, B.R.

    1996-01-01

    This Acceptance for Beneficial Use documents the work completed on the addition of an X-ray cable reel on distribution trailer HO-64-3533 for core sampling equipment. Work and documentation remaining to be completed is identified

  8. Economic Model Predictive Control for Large-Scale and Distributed Energy Systems

    DEFF Research Database (Denmark)

    Standardi, Laura

    In this thesis, we consider control strategies for large and distributed energy systems that are important for the implementation of smart grid technologies. An electrical grid has to ensure reliability and avoid long-term interruptions in the power supply. Moreover, the share of Renewable Energy Sources (RESs) in the smart grids is increasing. These energy sources bring uncertainty to the production due to their fluctuations. Hence, smart grids need suitable control systems that are able to continuously balance power production and consumption. We apply the Economic Model Predictive Control (EMPC) strategy to optimise the economic performances of the energy systems and to balance the power production and consumption. In the case of large-scale energy systems, the electrical grid connects a high number of power units. Because of this, the related control problem involves a high number of variables...

  9. Prediction of calcite Cement Distribution in Shallow Marine Sandstone Reservoirs using Seismic Data

    Energy Technology Data Exchange (ETDEWEB)

    Bakke, N.E.

    1996-12-31

    This doctoral thesis investigates how calcite cemented layers can be detected by reflection seismic data and how seismic data combined with other methods can be used to predict lateral variation in calcite cementation in shallow marine sandstone reservoirs. Focus is on the geophysical aspects. Sequence stratigraphy and stochastic modelling aspects are only covered superficially. Possible sources of calcite in shallow marine sandstone are grouped into internal and external sources depending on their location relative to the presently cemented rock. Well data and seismic data from the Troll Field in the Norwegian North Sea have been analysed. Tuning amplitudes from stacks of thin calcite cemented layers are analysed. Tuning effects are constructive or destructive interference of pulses resulting from two or more closely spaced reflectors. The zero-offset tuning amplitude is shown to depend on calcite content in the stack and vertical stack size. The relationship is found by regression analysis based on extensive seismic modelling. The results are used to predict calcite distribution in a synthetic and a real data example. It is found that describing calcite cemented beds in shallow marine sandstone reservoirs is not a deterministic problem. Hence seismic inversion and sequence stratigraphy interpretation of well data have been combined in a probabilistic approach to produce models of calcite cemented barriers constrained by a maximum amount of information. It is concluded that seismic data can provide valuable information on distribution of calcite cemented beds in reservoirs where the background sandstones are relatively homogeneous. 63 refs., 78 figs., 10 tabs.

  10. The dose distribution surrounding sup 192 Ir and sup 137 Cs seed sources

    Energy Technology Data Exchange (ETDEWEB)

    Thomason, C [Wisconsin Univ., Madison, WI (USA). Dept. of Medical Physics; Mackie, T R [Wisconsin Univ., Madison, WI (USA). Dept. of Medical Physics Wisconsin Univ., Madison, WI (USA). Dept. of Human Oncology; Lindstrom, M J [Wisconsin Univ., Madison, WI (USA). Biostatistics Center; Higgins, P D [Cleveland Clinic Foundation, OH (USA). Dept. of Radiation Oncology

    1991-04-01

    Dose distributions in water were measured using LiF thermoluminescent dosemeters for 192Ir seed sources with stainless steel and with platinum encapsulation to determine the effect of differing encapsulation. Dose distribution was measured for a 137Cs seed source. In addition, dose distributions surrounding these sources were calculated using the EGS4 Monte Carlo code and were compared to measured data. The two methods are in good agreement for all three sources. Tables are given describing dose distribution surrounding each source as a function of distance and angle. Specific dose constants were also determined from results of Monte Carlo simulation. This work confirms the use of the EGS4 Monte Carlo code in modelling 192Ir and 137Cs seed sources to obtain brachytherapy dose distributions. (author).

  11. The dose distribution surrounding 192Ir and 137Cs seed sources

    International Nuclear Information System (INIS)

    Thomason, C.; Mackie, T.R.; Wisconsin Univ., Madison, WI; Lindstrom, M.J.; Higgins, P.D.

    1991-01-01

    Dose distributions in water were measured using LiF thermoluminescent dosemeters for 192Ir seed sources with stainless steel and with platinum encapsulation to determine the effect of differing encapsulation. Dose distribution was measured for a 137Cs seed source. In addition, dose distributions surrounding these sources were calculated using the EGS4 Monte Carlo code and were compared to measured data. The two methods are in good agreement for all three sources. Tables are given describing dose distribution surrounding each source as a function of distance and angle. Specific dose constants were also determined from results of Monte Carlo simulation. This work confirms the use of the EGS4 Monte Carlo code in modelling 192Ir and 137Cs seed sources to obtain brachytherapy dose distributions. (author)

  12. Volatile Organic Compounds: Characteristics, distribution and sources in urban schools

    Science.gov (United States)

    Mishra, Nitika; Bartsch, Jennifer; Ayoko, Godwin A.; Salthammer, Tunga; Morawska, Lidia

    2015-04-01

    Long-term exposure to organic pollutants, both inside and outside school buildings, may affect children's health and influence their learning performance. Since children spend a significant amount of time in school, air quality, especially in classrooms, plays a key role in determining the health risks associated with exposure at schools. Within this context, the present study investigated the ambient concentrations of Volatile Organic Compounds (VOCs) in 25 primary schools in Brisbane with the aim of quantifying the indoor and outdoor VOC concentrations, identifying VOC sources and their contributions, and, based on these, proposing mitigation measures to reduce VOC exposure in schools. One of the most important findings is the occurrence of indoor sources, indicated by the I/O ratio >1 in 19 schools. Principal Component Analysis with Varimax rotation was used to identify common sources of VOCs, and source contribution was calculated using an Absolute Principal Component Scores technique. The results showed that outdoors 47% of VOCs were contributed by petrol vehicle exhaust, while indoors cleaning products had the highest contribution (41%), followed by air fresheners and art and craft activities. These findings point to the need for a range of basic precautions during the selection, use and storage of cleaning products and materials to reduce the risk from these sources.
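
    A simplified illustration of the receptor-modelling step described above: a principal component analysis of VOC concentration profiles groups species that co-vary. The study additionally applied Varimax rotation and the Absolute Principal Component Scores technique, which are not reproduced here, and the concentration matrix below is synthetic.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(4)
    n_samples = 150
    traffic = rng.gamma(2.0, 1.0, n_samples)      # hidden 'vehicle exhaust' factor
    cleaning = rng.gamma(2.0, 1.0, n_samples)     # hidden 'cleaning products' factor

    # synthetic VOC species loading on the two hidden sources
    voc = np.column_stack([
        3.0 * traffic + rng.normal(0, 0.3, n_samples),    # benzene-like
        2.5 * traffic + rng.normal(0, 0.3, n_samples),    # toluene-like
        2.0 * cleaning + rng.normal(0, 0.3, n_samples),   # limonene-like
        1.5 * cleaning + rng.normal(0, 0.3, n_samples),   # ethanol-like
    ])

    pca = PCA(n_components=2)
    pca.fit(StandardScaler().fit_transform(voc))
    print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))
    print("component loadings:\n", np.round(pca.components_, 2))
    ```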

  13. Acetone in the atmosphere: Distribution, sources, and sinks

    Science.gov (United States)

    Singh, H. B.; O'Hara, D.; Herlth, D.; Sachse, W.; Blake, D. R.; Bradshaw, J. D.; Kanakidou, M.; Crutzen, P. J.

    1994-01-01

    Acetone (CH3COCH3) was found to be the dominant nonmethane organic species present in the atmosphere sampled primarily over eastern Canada (0-6 km, 35 deg-65 deg N) during ABLE3B (July to August 1990). A concentration range of 357 to 2310 ppt (= 10^-12 v/v) with a mean value of 1140 +/- 413 ppt was measured. Under extremely clean conditions, generally involving Arctic flows, lowest (background) mixing ratios of 550 +/- 100 ppt were present in much of the troposphere studied. Correlations between atmospheric mixing ratios of acetone and select species such as C2H2, CO, C3H8, C2Cl4 and isoprene provided important clues to its possible sources and to the causes of its atmospheric variability. Biomass burning as a source of acetone has been identified for the first time. By using atmospheric data and three-dimensional photochemical models, a global acetone source of 40-60 Tg (= 10^12 g)/yr is estimated to be present. Secondary formation from the atmospheric oxidation of precursor hydrocarbons (principally propane, isobutane, and isobutene) provides the single largest source (51%). The remainder is attributable to biomass burning (26%), direct biogenic emissions (21%), and primary anthropogenic emissions (3%). Atmospheric removal of acetone is estimated to be due to photolysis (64%), reaction with OH radicals (24%), and deposition (12%). Model calculations also suggest that acetone photolysis contributed significantly to PAN formation (100-200 ppt) in the middle and upper troposphere of the sampled region and may be important globally. While the source-sink equation appears to be roughly balanced, much more atmospheric and source data, especially from the southern hemisphere, are needed to reliably quantify the atmospheric budget of acetone.

  14. Gas source localization and gas distribution mapping with a micro-drone

    Energy Technology Data Exchange (ETDEWEB)

    Neumann, Patrick P.

    2013-07-01

    -based GSL algorithm uses gas and wind measurements to reason about the trajectory of a gas patch since it was released by the gas source until it reaches the measurement position of the micro-drone. Because of the chaotic nature of wind, an uncertainty about the wind direction has to be considered in the reconstruction process, which extends this trajectory to a patch path envelope (PPE). In general, the PPE describes the envelope of an area which the gas patch has passed with high probability. Then, the weights of the particles are updated based on the PPE. Given a uniform wind field over the search space and a single gas source, the reconstruction of multiple trajectories at different measurement locations using sufficient gas and wind measurements can lead to an accurate estimate of the gas source location, whose distance to the true source location is used as the main performance criterion. Simulations and real-world experiments are used to validate the proposed method. The aspect of environmental monitoring with a micro-drone is also discussed. Two different sampling approaches are suggested in order to address this problem. One method is the use of a predefined sweeping trajectory to explore the target area with the micro-drone in real-world gas distribution mapping experiments. As an alternative sampling approach an adaptive strategy is presented, which suggests next sampling points based on an artificial potential field to direct the micro-drone towards areas of high predictive mean and high predictive variance, while maximizing the coverage area. The purpose of the sensor planning component is to reduce the time that is necessary to converge to the final gas distribution model or to reliably identify important parameters of the distribution such as areas of high concentration. It is demonstrated that gas distribution models can provide an accurate estimate of the location of stationary gas sources. These strategies have been successfully tested in a variety of real
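
    A minimal sketch of the weight-update idea behind the envelope-based localisation is given below, under strong simplifying assumptions: a uniform wind field, a straight-line upwind envelope whose width is set by an assumed wind-direction uncertainty, and synthetic gas "hits". It is not the thesis' algorithm, only an illustration of how candidate source locations gain weight when they fall inside the envelope reconstructed from a measurement.

```python
import numpy as np

# Toy particle-filter sketch of the envelope idea: candidate source locations (particles)
# gain weight when a gas 'hit' at a measurement position can be explained by a patch that
# travelled from the particle along the (uncertain) wind direction.  All values assumed.

rng = np.random.default_rng(1)
true_source = np.array([8.0, 3.0])
wind_dir = np.radians(200.0)           # direction the wind blows toward
sigma_wind = np.radians(25.0)          # assumed wind-direction uncertainty (envelope width)

particles = rng.uniform(0.0, 10.0, size=(2000, 2))
weights = np.full(len(particles), 1.0 / len(particles))

def gas_detected(position):
    """Synthetic 'hit' if the position lies roughly downwind of the true source."""
    offset = position - true_source
    angle = np.arctan2(offset[1], offset[0])
    return np.linalg.norm(offset) > 0.5 and abs(np.angle(np.exp(1j * (angle - wind_dir)))) < np.radians(15.0)

for position in rng.uniform(0.0, 10.0, size=(100, 2)):     # measurement locations
    if not gas_detected(position):
        continue
    upwind = wind_dir + np.pi                               # envelope axis points back upwind
    to_particles = particles - position
    bearings = np.arctan2(to_particles[:, 1], to_particles[:, 0])
    deviation = np.angle(np.exp(1j * (bearings - upwind)))  # wrapped angular deviation
    weights *= np.exp(-0.5 * (deviation / sigma_wind) ** 2) + 1e-12
    weights /= weights.sum()

estimate = (weights[:, None] * particles).sum(axis=0)
print("estimated source:", np.round(estimate, 2), " true source:", true_source)
```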

  15. Gas source localization and gas distribution mapping with a micro-drone

    International Nuclear Information System (INIS)

    Neumann, Patrick P.

    2013-01-01

    uses gas and wind measurements to reason about the trajectory of a gas patch since it was released by the gas source until it reaches the measurement position of the micro-drone. Because of the chaotic nature of wind, an uncertainty about the wind direction has to be considered in the reconstruction process, which extends this trajectory to a patch path envelope (PPE). In general, the PPE describes the envelope of an area which the gas patch has passed with high probability. Then, the weights of the particles are updated based on the PPE. Given a uniform wind field over the search space and a single gas source, the reconstruction of multiple trajectories at different measurement locations using sufficient gas and wind measurements can lead to an accurate estimate of the gas source location, whose distance to the true source location is used as the main performance criterion. Simulations and real-world experiments are used to validate the proposed method. The aspect of environmental monitoring with a micro-drone is also discussed. Two different sampling approaches are suggested in order to address this problem. One method is the use of a predefined sweeping trajectory to explore the target area with the micro-drone in real-world gas distribution mapping experiments. As an alternative sampling approach an adaptive strategy is presented, which suggests next sampling points based on an artificial potential field to direct the micro-drone towards areas of high predictive mean and high predictive variance, while maximizing the coverage area. The purpose of the sensor planning component is to reduce the time that is necessary to converge to the final gas distribution model or to reliably identify important parameters of the distribution such as areas of high concentration. It is demonstrated that gas distribution models can provide an accurate estimate of the location of stationary gas sources. These strategies have been successfully tested in a variety of real

  16. Gas source localization and gas distribution mapping with a micro-drone

    Energy Technology Data Exchange (ETDEWEB)

    Neumann, Patrick P.

    2013-07-01

    uses gas and wind measurements to reason about the trajectory of a gas patch since it was released by the gas source until it reaches the measurement position of the micro-drone. Because of the chaotic nature of wind, an uncertainty about the wind direction has to be considered in the reconstruction process, which extends this trajectory to a patch path envelope (PPE). In general, the PPE describes the envelope of an area which the gas patch has passed with high probability. Then, the weights of the particles are updated based on the PPE. Given a uniform wind field over the search space and a single gas source, the reconstruction of multiple trajectories at different measurement locations using sufficient gas and wind measurements can lead to an accurate estimate of the gas source location, whose distance to the true source location is used as the main performance criterion. Simulations and real-world experiments are used to validate the proposed method. The aspect of environmental monitoring with a micro-drone is also discussed. Two different sampling approaches are suggested in order to address this problem. One method is the use of a predefined sweeping trajectory to explore the target area with the micro-drone in real-world gas distribution mapping experiments. As an alternative sampling approach an adaptive strategy is presented, which suggests next sampling points based on an artificial potential field to direct the micro-drone towards areas of high predictive mean and high predictive variance, while maximizing the coverage area. The purpose of the sensor planning component is to reduce the time that is necessary to converge to the final gas distribution model or to reliably identify important parameters of the distribution such as areas of high concentration. It is demonstrated that gas distribution models can provide an accurate estimate of the location of stationary gas sources. These strategies have been successfully tested in a variety of real

  17. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4. Table 4 to Part 1512—Relative Energy Distribution of Sources. Wavelength (nanometers) / Relative energy: 380 / 9.79; 390 / 12.09; 400 / 14.71; 410 / 17.68; 420 / 21...

  18. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Denglong [Fuli School of Food Equipment Engineering and Science, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); Zhang, Zaoxiao, E-mail: zhangzx@mail.xjtu.edu.cn [State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); School of Chemical Engineering and Technology, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China)

    2016-07-05

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with a Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as the radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models, which take many inputs based on the original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models that combine the classic Gaussian model with MLA algorithms is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identify the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, or network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.
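
    The Gaussian-MLA idea, feeding the analytical plume prediction to a machine-learning regressor and then inverting the source parameters against the trained model, can be sketched as below. The plume constants, the SVR settings and the use of scipy's differential evolution in place of PSO are assumptions made for a self-contained example, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.svm import SVR

def gaussian_plume(x, y, q, u=2.0, h=1.0):
    """Simplified ground-level Gaussian plume for a point source at the origin; the
    dispersion coefficients grow with downwind distance (illustrative constants)."""
    x = np.maximum(x, 1e-3)
    sigma_y = 0.22 * x ** 0.9
    sigma_z = 0.20 * x ** 0.85
    return (q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-0.5 * (y / sigma_y) ** 2) * np.exp(-0.5 * (h / sigma_z) ** 2))

rng = np.random.default_rng(2)
xs, ys = rng.uniform(5.0, 100.0, 300), rng.uniform(-30.0, 30.0, 300)
true_q = 50.0
observed = gaussian_plume(xs, ys, true_q) * (1.0 + 0.3 * rng.normal(size=xs.size))

# Gaussian-SVM idea: the analytical plume prediction is one of the regressor's inputs,
# so the machine-learning model only has to correct the residual structure.
features = np.column_stack([xs, ys, gaussian_plume(xs, ys, true_q)])
model = SVR(kernel="rbf", C=10.0, epsilon=1e-4).fit(features, observed)

# Source estimation: search for the emission rate whose Gaussian-SVM prediction best
# matches the observations (differential evolution stands in for the paper's PSO).
def misfit(params):
    feats = np.column_stack([xs, ys, gaussian_plume(xs, ys, params[0])])
    return np.mean((model.predict(feats) - observed) ** 2)

result = differential_evolution(misfit, bounds=[(1.0, 200.0)], seed=0)
print(f"estimated emission rate: {result.x[0]:.1f} (true {true_q})")
```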

  19. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    International Nuclear Information System (INIS)

    Ma, Denglong; Zhang, Zaoxiao

    2016-01-01

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with a Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as the radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models, which take many inputs based on the original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models that combine the classic Gaussian model with MLA algorithms is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identify the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, or network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.

  20. Responsiveness of performance and morphological traits to experimental submergence predicts field distribution pattern of wetland plants

    NARCIS (Netherlands)

    Luo, Fang-Li; Huang, Lin; Lei, Ting; Xue, Wei; Li, Hong-Li; Yu, Fei-Hai; Cornelissen, J.H.C.

    2016-01-01

    Question: Plant trait mean values and trait responsiveness to different environmental regimes are both important determinants of plant field distribution, but the degree to which plant trait means vs trait responsiveness predict plant distribution has rarely been compared quantitatively. Because

  1. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    International Nuclear Information System (INIS)

    Sig Drellack, Lance Prothro

    2007-01-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The
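
    As a much-reduced illustration of the Monte Carlo treatment of parameter uncertainty described above (and not of the UGTA models themselves), the sketch below samples hypothetical transport parameters, propagates them through a simplified 1-D advection-dispersion estimate with decay, and reports percentiles and rank correlations as a crude sensitivity measure.

```python
import numpy as np
from scipy.special import erfc
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n = 5000

# Hypothetical parameter distributions (not UGTA values): groundwater velocity (m/yr),
# dispersivity (m) and retardation factor; decay constant for an example nuclide (1/yr).
velocity = rng.lognormal(np.log(5.0), 0.5, n)
dispersivity = rng.lognormal(np.log(20.0), 0.4, n)
retardation = rng.uniform(1.0, 10.0, n)
decay = np.log(2.0) / 24100.0

def breakthrough(c0, x, t, v, alpha, R, lam):
    """Simplified 1-D advection-dispersion estimate: an Ogata-Banks-type erfc term for a
    continuous source, multiplied by a radioactive-decay factor (illustrative only)."""
    D = alpha * v
    return 0.5 * c0 * np.exp(-lam * t) * erfc((x - v * t / R) / (2.0 * np.sqrt(D * t / R)))

conc = breakthrough(1.0, x=5000.0, t=1000.0, v=velocity, alpha=dispersivity,
                    R=retardation, lam=decay)

print("median relative concentration:", np.median(conc))
print("95th percentile:", np.percentile(conc, 95))
for name, samples in [("velocity", velocity), ("dispersivity", dispersivity),
                      ("retardation", retardation)]:
    rho, _ = spearmanr(samples, conc)
    print(f"rank correlation with {name}: {rho:+.2f}")
```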

  2. Effect of source angular distribution on the evaluation of gamma-ray skyshine

    Energy Technology Data Exchange (ETDEWEB)

    Sheu, R.D.; Jiang, S.H. [Dept. of Engineering and System Science, National Tsing Hua Univ., Taiwan (China); Chang, B.J.; Chen, I.J. [Division of Health Physics, Inst. of Nuclear Energy Research, Taiwan (China)

    2000-03-01

    The effect of the angular distribution of the equivalent point source on the analysis of skyshine dose rates was investigated in detail. The dedicated skyshine codes SKYDOSE and McSKY were revised to include the capability of dealing with an anisotropic source. It was found that replacing the cosine-distributed source with an isotropic source overestimates the skyshine dose rates for large roof-subtended angles and underestimates them for small roof-subtended angles. For buildings with roof shielding, however, replacing the cosine-distributed source with an isotropic source always underestimates the skyshine dose rates. The skyshine dose rates from a volume source calculated by the dedicated skyshine codes agree very well with those of the MCNP Monte Carlo calculation. (author)
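
    The geometric driver of these differences is how many source photons the two angular distributions send into a given cone about the vertical. The small Monte Carlo sketch below samples isotropic and cosine-distributed emission directions and compares the fraction falling within several assumed half-angles; it does not attempt to reproduce the transport results of SKYDOSE, McSKY or MCNP.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000

# Polar angle measured from the vertical.  For an isotropic source the direction cosine
# mu = cos(theta) is uniform on (0, 1); for a cosine-distributed surface source,
# mu = sqrt(U) with U uniform on (0, 1).
mu_iso = rng.random(n)
mu_cos = np.sqrt(rng.random(n))

for half_angle_deg in (10.0, 30.0, 60.0, 80.0):
    mu_cut = np.cos(np.radians(half_angle_deg))
    frac_iso = np.mean(mu_iso > mu_cut)       # fraction emitted within the cone (isotropic)
    frac_cos = np.mean(mu_cos > mu_cut)       # fraction emitted within the cone (cosine)
    print(f"half-angle {half_angle_deg:4.0f} deg: isotropic {frac_iso:.3f}, "
          f"cosine {frac_cos:.3f}, ratio iso/cos {frac_iso / frac_cos:.2f}")
```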

  3. Family of Quantum Sources for Improving Near Field Accuracy in Transducer Modeling by the Distributed Point Source Method

    Directory of Open Access Journals (Sweden)

    Dominique Placko

    2016-10-01

    The distributed point source method, or DPSM, developed in the last decade has been used for solving various engineering problems—such as elastic and electromagnetic wave propagation, electrostatic, and fluid flow problems. Based on a semi-analytical formulation, the DPSM solution is generally built by superimposing point source solutions or Green's functions. However, the DPSM solution can also be obtained by superimposing elemental solutions of volume sources having some source density called the equivalent source density (ESD). In earlier works mostly point sources were used. In this paper the DPSM formulation is modified to introduce a new kind of ESD, replacing the classical single point source by a family of point sources that are referred to as quantum sources. The proposed formulation with these quantum sources does not change the dimension of the global matrix to be inverted to solve the problem when compared with the classical point source-based DPSM formulation. To assess the performance of this new formulation, the ultrasonic field generated by a circular planar transducer was compared with the classical DPSM formulation and the analytical solution. The results show a significant improvement in the near field computation.
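
    A minimal sketch of the DPSM superposition idea is given below for a 2-D scalar problem, including the quantum-source refinement in which each equivalent source density is carried by a small cluster of point sources sharing a single unknown strength, so the linear system keeps its size. The Green's function approximation, cluster geometry and transducer dimensions are assumptions made for illustration, not the paper's formulation.

```python
import numpy as np

# Minimal DPSM-style sketch (2-D, scalar): point-source clusters placed just behind a
# "transducer" line radiate approximate free-space Green's functions, and their strengths
# are found by matching a prescribed pressure on the face.  All values are illustrative.

k = 2.0 * np.pi / 1.0e-3                               # wavenumber for a 1 mm wavelength

def green(obs, src):
    """Approximate 2-D free-space Green's function (far-field form of the Hankel function)."""
    r = np.linalg.norm(obs - src, axis=-1)
    return np.exp(1j * k * r) / np.sqrt(r + 1e-12)

n_src = 20
face_x = np.linspace(-5e-3, 5e-3, n_src)
targets = np.column_stack([face_x, np.zeros(n_src)])   # matching points on the face

offsets = np.array([[-0.1e-3, -0.4e-3], [0.1e-3, -0.4e-3],
                    [-0.1e-3, -0.6e-3], [0.1e-3, -0.6e-3]])   # 4 "quantum sources" per ESD

def column(center):
    """One matrix column: the summed Green's functions of a cluster sharing one strength."""
    return sum(green(targets, center + off) for off in offsets)

A = np.column_stack([column(np.array([x, 0.0])) for x in face_x])
prescribed = np.ones(n_src, dtype=complex)             # uniform "piston" pressure on the face
strengths = np.linalg.solve(A, prescribed)

axis = np.column_stack([np.zeros(50), np.linspace(1e-3, 30e-3, 50)])
field = sum(s * sum(green(axis, np.array([x, 0.0]) + off) for off in offsets)
            for s, x in zip(strengths, face_x))
print("axial |p| at the first five points:", np.abs(field[:5]).round(3))
```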

  4. Depletion of heterogeneous source species pools predicts future invasion rates

    Science.gov (United States)

    Andrew M. Liebhold; Eckehard G. Brockerhoff; Mark Kimberley; Jacqueline Beggs

    2017-01-01

    Predicting how increasing rates of global trade will result in new establishments of potentially damaging invasive species is a question of critical importance to the development of national and international policies aimed at minimizing future invasions. Centuries of historical movement and establishment of invading species may have depleted the supply of species...

  5. Modeling, robust and distributed model predictive control for freeway networks

    NARCIS (Netherlands)

    Liu, S.

    2016-01-01

    In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of

  6. Distribution and sources of 226Ra in groundwater of arid region

    DEFF Research Database (Denmark)

    Zheng, M. J.; Murad, A.; Zhou, X. D.

    2016-01-01

    As a part of characterizing radioactivity in groundwater of the eastern Arabian Peninsula, a first systematic evaluation of 226Ra activity in groundwater indicates a wide range (0.65-203.66 mBq L-1) with an average of 17.56 mBq L-1. Adsorption/desorption processes, groundwater residence time and uranium concentration are the main controlling factors of 226Ra distribution in groundwater of the different aquifers. Estimation of the 226Ra effective dose from water ingestion suggests a potential risk of drinking water from the carbonate aquifer.

  7. Distributed control system for the National Synchrotron Light Source

    International Nuclear Information System (INIS)

    Batchelor, K.; Culwick, B.B.; Goldstick, J.; Sheehan, J.; Smith, J.

    1979-01-01

    Until recently, accelerator and similar control systems have used modular interface hardware such as CAMAC or DATACON which translated digital computer commands transmitted over some data link into hardware device status and monitoring variables. Such modules possessed little more than local buffering capability in the processing of commands and data. The advent of the micro-processor has made available low cost small computers of significant computational capability. This paper describes how micro-computers including such micro-processors and associated memory, input/output devices and interrupt facilities have been incorporated into a distributed system for the control of the NSLS

  8. Space distribution of extragalactic sources - Cosmology versus evolution

    International Nuclear Information System (INIS)

    Cavaliere, A.; Maccacaro, T.

    1990-01-01

    Alternative cosmologies have been recurrently invoked to explain in terms of global spacetime structure the apparent large increase, with increasing redshift, in the average luminosity of active galactic nuclei. These models interestingly seek to avoid the complexities of the canonical interpretation in terms of intrinsic population evolutions in a Friedmann universe. However, a problem of consistency for these cosmologies is pointed out, since they have to include also other classes of extragalactic sources, such as clusters of galaxies and BL Lac objects, for which there is preliminary evidence of a different behavior. 40 refs

  9. Visibility from roads predict the distribution of invasive fishes in agricultural ponds.

    Science.gov (United States)

    Kizuka, Toshikazu; Akasaka, Munemitsu; Kadoya, Taku; Takamura, Noriko

    2014-01-01

    Propagule pressure and habitat characteristics are important factors used to predict the distribution of invasive alien species. For species exhibiting strong propagule pressure because of human-mediated introduction of species, indicators of introduction potential must represent the behavioral characteristics of humans. This study examined 64 agricultural ponds to assess the visibility of ponds from surrounding roads and its value as a surrogate of propagule pressure to explain the presence and absence of two invasive fish species. A three-dimensional viewshed analysis using a geographic information system quantified the visual exposure of respective ponds to humans. Binary classification trees were developed as a function of their visibility from roads, as well as five environmental factors: river density, connectivity with upstream dam reservoirs, pond area, chlorophyll a concentration, and pond drainage. Traditional indicators of human-mediated introduction (road density and proportion of urban land-use area) were alternatively included for comparison instead of visual exposure. The presence of Bluegill (Lepomis macrochirus) was predicted by the ponds' higher visibility from roads and pond connection with upstream dam reservoirs. Results suggest that fish stocking into ponds and their dispersal from upstream sources facilitated species establishment. Largemouth bass (Micropterus salmoides) distribution was constrained by chlorophyll a concentration, suggesting their lower adaptability to various environments than that of Bluegill. Based on misclassifications from classification trees for Bluegill, pond visual exposure to roads showed greater predictive capability than traditional indicators of human-mediated introduction. Pond visibility is an effective predictor of invasive species distribution. Its wider use might improve management and mitigate further invasion. The visual exposure of recipient ecosystems to humans is important for many invasive species that

  10. Testing and intercomparison of model predictions of radionuclide migration from a hypothetical area source

    International Nuclear Information System (INIS)

    O'Brien, R.S.; Yu, C.; Zeevaert, T.; Olyslaegers, G.; Amado, V.; Setlow, L.W.; Waggitt, P.W.

    2008-01-01

    This work was carried out as part of the International Atomic Energy Agency's EMRAS program. One aim of the work was to develop scenarios for testing computer models designed for simulating radionuclide migration in the environment, and to use these scenarios for testing the models and comparing predictions from different models. This paper presents the results of the development and testing of a hypothetical area source of NORM waste/residue using two complex computer models and one screening model. There are significant differences in the methods used to model groundwater flow between the complex models. The hypothetical source was used because of its relative simplicity and because of difficulties encountered in finding comprehensive, well-validated data sets for real sites. The source consisted of a simple repository of uniform thickness, with 1 Bq g -1 of uranium-238 ( 238 U) (in secular equilibrium with its decay products) distributed uniformly throughout the waste. These approximate real situations, such as engineered repositories, waste rock piles, tailings piles and landfills. Specification of the site also included the physical layout, vertical stratigraphic details, soil type for each layer of material, precipitation and runoff details, groundwater flow parameters, and meteorological data. Calculations were carried out with and without a cover layer of clean soil above the waste, for people working and living at different locations relative to the waste. The predictions of the two complex models showed several differences which need more detailed examination. The scenario is available for testing by other modelers. It can also be used as a planning tool for remediation work or for repository design, by changing the scenario parameters and running the models for a range of different inputs. Further development will include applying models to real scenarios and integrating environmental impact assessment methods with the safety assessment tools currently

  11. Operator aids for prediction of source term attenuation

    International Nuclear Information System (INIS)

    Powers, D.A.

    2004-01-01

    Simplified expressions for the attenuation of radionuclide releases by sprays and by water pools are devised. These expressions are obtained by correlation of the 10th, 50th and 90th percentiles of uncertainty distributions for the water pool decontamination factor and the spray decontamination coefficient. These uncertainty distributions were obtained by Monte Carlo uncertainty analyses using detailed, mechanistic models of the pools and sprays. Uncertainties considered in the analyses include uncertainties in the phenomena and uncertainties in the initial and boundary conditions dictated by the progression of severe accidents. Final results are graphically displayed in terms of the decontamination factor achieved at selected levels of conservatism versus pool depth and water subcooling or, in the case of sprays, versus time. (author)

  12. Distributed estimation based on observations prediction in wireless sensor networks

    KAUST Repository

    Bouchoucha, Taha; Ahmed, Mohammed F A; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2015-01-01

    We consider wireless sensor networks (WSNs) used for distributed estimation of unknown parameters. Due to the limited bandwidth, sensor nodes quantize their noisy observations before transmission to a fusion center (FC) for the estimation process

  13. High-Lift Propeller Noise Prediction for a Distributed Electric Propulsion Flight Demonstrator

    Science.gov (United States)

    Nark, Douglas M.; Buning, Pieter G.; Jones, William T.; Derlaga, Joseph M.

    2017-01-01

    Over the past several years, the use of electric propulsion technologies within aircraft design has received increased attention. The characteristics of electric propulsion systems open up new areas of the aircraft design space, such as the use of distributed electric propulsion (DEP). In this approach, electric motors are placed in many different locations to achieve increased efficiency through integration of the propulsion system with the airframe. Under a project called Scalable Convergent Electric Propulsion Technology Operations Research (SCEPTOR), NASA is designing a flight demonstrator aircraft that employs many "high-lift propellers" distributed upstream of the wing leading edge and two cruise propellers (one at each wingtip). As the high-lift propellers are operational at low flight speeds (take-off/approach flight conditions), the impact of the DEP configuration on the aircraft noise signature is also an important design consideration. This paper describes efforts toward the development of a multi-fidelity aerodynamic and acoustic methodology for DEP high-lift propeller aeroacoustic modeling. Specifically, the PAS, OVERFLOW 2, and FUN3D codes are used to predict the aerodynamic performance of a baseline high-lift propeller blade set. Blade surface pressure results from the aerodynamic predictions are then used with PSU-WOPWOP and the F1A module of the NASA second generation Aircraft NOise Prediction Program to predict the isolated high-lift propeller noise source. Comparisons of predictions indicate that general trends related to angle of attack effects at the blade passage frequency are captured well with the various codes. Results for higher harmonics of the blade passage frequency appear consistent for the CFD based methods. Conversely, evidence of the need for a study of the effects of increased azimuthal grid resolution on the PAS based results is indicated and will be pursued in future work. Overall, the results indicate that the computational

  14. Prediction of residence time distributions in food processing machinery

    DEFF Research Database (Denmark)

    Karlson, Torben; Friis, Alan; Szabo, Peter

    1996-01-01

    The velocity field in a co-rotating disc scraped surface heat exchanger (CDHE) is calculated using a finite element method. The residence time distribution for the CDHE is then obtained by tracing particles introduced in the inlet.
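
    The particle-tracing idea can be illustrated with a much simpler velocity field than the finite-element CDHE solution: the sketch below traces passive particles through an analytic parabolic channel profile and builds the residence time distribution from the exit times. All dimensions and velocities are assumed values.

```python
import numpy as np

# Illustrative sketch: passive particles are traced through an analytic channel flow and
# the residence time distribution (RTD) is taken as the histogram of exit times.

length, height = 1.0, 0.1        # channel dimensions (m)
u_mean = 0.05                    # mean axial velocity (m/s)

def velocity(pos):
    """Poiseuille-like axial velocity profile; no wall-normal component."""
    y = pos[:, 1]
    u = 6.0 * u_mean * (y / height) * (1.0 - y / height)
    return np.column_stack([u, np.zeros_like(u)])

rng = np.random.default_rng(5)
n = 2000
pos = np.column_stack([np.zeros(n), rng.uniform(0.01 * height, 0.99 * height, n)])
exit_time = np.full(n, np.nan)
active = np.ones(n, dtype=bool)

dt, t = 0.05, 0.0
while active.any() and t < 600.0:
    pos[active] += velocity(pos[active]) * dt      # explicit Euler step of the trajectories
    t += dt
    left = active & (pos[:, 0] >= length)
    exit_time[left] = t
    active &= ~left

times = exit_time[np.isfinite(exit_time)]
density, edges = np.histogram(times, bins=20, density=True)
print(f"mean residence time {times.mean():.1f} s (plug-flow value {length / u_mean:.1f} s)")
print("RTD density over", round(edges[0], 1), "-", round(edges[-1], 1), "s:", np.round(density, 3))
```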

  15. The interplay of various sources of noise on reliability of species distribution models hinges on ecological specialisation.

    Science.gov (United States)

    Soultan, Alaaeldin; Safi, Kamran

    2017-01-01

    Digitized species occurrence data provide an unprecedented source of information for ecologists and conservationists. Species distribution model (SDM) has become a popular method to utilise these data for understanding the spatial and temporal distribution of species, and for modelling biodiversity patterns. Our objective is to study the impact of noise in species occurrence data (namely sample size and positional accuracy) on the performance and reliability of SDM, considering the multiplicative impact of SDM algorithms, species specialisation, and grid resolution. We created a set of four 'virtual' species characterized by different specialisation levels. For each of these species, we built the suitable habitat models using five algorithms at two grid resolutions, with varying sample sizes and different levels of positional accuracy. We assessed the performance and reliability of the SDM according to classic model evaluation metrics (Area Under the Curve and True Skill Statistic) and model agreement metrics (Overall Concordance Correlation Coefficient and geographic niche overlap) respectively. Our study revealed that species specialisation had by far the most dominant impact on the SDM. In contrast to previous studies, we found that for widespread species, low sample size and low positional accuracy were acceptable, and useful distribution ranges could be predicted with as few as 10 species occurrences. Range predictions for narrow-ranged species, however, were sensitive to sample size and positional accuracy, such that useful distribution ranges required at least 20 species occurrences. Against expectations, the MAXENT algorithm poorly predicted the distribution of specialist species at low sample size.

  16. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System.

    Science.gov (United States)

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Li, Jun; Sigrist, Markus W; Dong, Fengzhong

    2016-06-06

    We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is transformed to two dimensions. The method of the source location is verified with experiments using burning alcohol as fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source also for long sensing ranges.
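
    A toy version of the dual-line idea is sketched below: the along-fibre temperature peak fixes one coordinate, and the ratio of peak amplitudes on the two parallel fibres fixes the other, assuming a Gaussian hot-air footprint of known spread under the ceiling. The geometry, footprint model and noise level are assumptions, not the paper's processing chain.

```python
import numpy as np

# Toy dual-fibre localisation: peak position along the fibres gives x, and the ratio of
# peak amplitudes on the two fibres gives y for an assumed Gaussian ceiling footprint.

rng = np.random.default_rng(6)
fiber_y = (1.0, 3.0)                  # y positions of the two fibres (m)
x = np.arange(0.0, 20.0, 0.1)         # sampling points along each fibre (m)
source = np.array([12.3, 1.8])        # fire location to be recovered
sigma = 2.0                           # assumed spread of the hot-air footprint (m)

def profile(y_fiber):
    d2 = (x - source[0]) ** 2 + (y_fiber - source[1]) ** 2
    return 20.0 + 60.0 * np.exp(-d2 / (2.0 * sigma ** 2)) + rng.normal(0.0, 0.3, x.size)

t1, t2 = profile(fiber_y[0]), profile(fiber_y[1])
rise1, rise2 = t1 - np.median(t1), t2 - np.median(t2)

# x estimate: amplitude-weighted peak position over both fibres.
x_est = (x[np.argmax(rise1)] * rise1.max() + x[np.argmax(rise2)] * rise2.max()) / (rise1.max() + rise2.max())

# y estimate: for Gaussian footprints, ln(peak1/peak2) is linear in the source's y position.
y_est = 0.5 * (fiber_y[0] + fiber_y[1]) - sigma ** 2 * np.log(rise1.max() / rise2.max()) / (fiber_y[1] - fiber_y[0])
print(f"estimated source: ({x_est:.1f}, {y_est:.1f}) m   true: ({source[0]}, {source[1]}) m")
```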

  17. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System

    Directory of Open Access Journals (Sweden)

    Miao Sun

    2016-06-01

    We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is transformed to two dimensions. The method of the source location is verified with experiments using burning alcohol as fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source also for long sensing ranges.

  18. Collective phenomena in synchrotron radiation sources. Prediction, diagnostics, countermeasures

    International Nuclear Information System (INIS)

    Khan, S.

    2006-01-01

    This book helps to dispel the notion that collective phenomena, which have become increasingly important in modern storage rings, are an obscure and inaccessible topic. Despite an emphasis on synchrotron light sources, the basic concepts presented here are valid for other facilities as well. Graduate students, scientists and engineers working in an accelerator environment will find this to be a systematic exposition of the principles behind collective instabilities and lifetime-limiting effects. Experimental methods to identify and characterize collective effects are also surveyed. Among other measures to improve the performance of a projected or existing facility, a detailed account of feedback control of instabilities is given. (orig.)

  19. Reconstruction of far-field tsunami amplitude distributions from earthquake sources

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.

    2016-01-01

    The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.
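
    The tapered Pareto model referred to above has survival function S(x) = (x_t/x)^beta exp((x_t - x)/x_c) for x >= x_t, with power-law exponent beta and corner amplitude x_c. The sketch below draws synthetic amplitudes from that form and recovers both parameters by maximum likelihood; the threshold and parameter values are assumed, not those of the tide gauge catalogs.

```python
import numpy as np
from scipy.optimize import minimize

x_t = 0.05                                   # observation threshold (m), assumed
true_beta, true_xc = 1.0, 0.6

def sample(n, beta, xc, rng):
    """Draw amplitudes by numerical inversion of the tapered Pareto survival function."""
    grid = np.linspace(x_t, 50.0 * xc, 200000)
    surv = (x_t / grid) ** beta * np.exp((x_t - grid) / xc)
    idx = np.minimum(np.searchsorted(-surv, -rng.random(n)), grid.size - 1)
    return grid[idx]

data = sample(400, true_beta, true_xc, np.random.default_rng(7))

def neg_log_like(params):
    beta, xc = params
    if beta <= 0.0 or xc <= 0.0:
        return np.inf
    # density f(x) = (beta/x + 1/x_c) * S(x)
    return -np.sum(np.log(beta / data + 1.0 / xc)
                   + beta * np.log(x_t / data) + (x_t - data) / xc)

fit = minimize(neg_log_like, x0=[0.5, 1.0], method="Nelder-Mead")
print(f"MLE exponent {fit.x[0]:.2f} (true {true_beta}), corner amplitude {fit.x[1]:.2f} m (true {true_xc} m)")
```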

  20. Prediction method for thermal ratcheting of a cylinder subjected to axially moving temperature distribution

    International Nuclear Information System (INIS)

    Wada, Hiroshi; Igari, Toshihide; Kitade, Shoji.

    1989-01-01

    A prediction method was proposed for plastic ratcheting of a cylinder, which was subjected to axially moving temperature distribution without primary stress. First, a mechanism of this ratcheting was proposed, which considered the movement of temperature distribution as a driving force of this phenomenon. Predictive equations of the ratcheting strain for two representative temperature distributions were proposed based on this mechanism by assuming the elastic-perfectly-plastic material behavior. Secondly, an elastic-plastic analysis was made on a cylinder subjected to the representative two temperature distributions. Analytical results coincided well with the predicted results, and the applicability of the proposed equations was confirmed. (author)

  1. Predictive Distribution of the Dirichlet Mixture Model by the Local Variational Inference Method

    DEFF Research Database (Denmark)

    Ma, Zhanyu; Leijon, Arne; Tan, Zheng-Hua

    2014-01-01

    the predictive likelihood of the new upcoming data, especially when the amount of training data is small. The Bayesian estimation of a Dirichlet mixture model (DMM) is, in general, not analytically tractable. In our previous work, we have proposed a global variational inference-based method for approximately calculating the posterior distributions of the parameters in the DMM analytically. In this paper, we extend our previous study for the DMM and propose an algorithm to calculate the predictive distribution of the DMM with the local variational inference (LVI) method. The true predictive distribution of the DMM is analytically intractable. By considering the concave property of the multivariate inverse beta function, we introduce an upper-bound to the true predictive distribution. As the global minimum of this upper-bound exists, the problem is reduced to seek an approximation to the true predictive distribution...

  2. Nitrogen deposition to the United States: distribution, sources, and processes

    Directory of Open Access Journals (Sweden)

    L. Zhang

    2012-05-01

    We simulate nitrogen deposition over the US in 2006–2008 by using the GEOS-Chem global chemical transport model at 1/2°×2/3° horizontal resolution over North America and adjacent oceans. US emissions of NOx and NH3 in the model are 6.7 and 2.9 Tg N a−1 respectively, including a 20% natural contribution for each. Ammonia emissions are a factor of 3 lower in winter than summer, providing a good match to US network observations of NHx (≡NH3 gas + ammonium aerosol) and ammonium wet deposition fluxes. Model comparisons to observed deposition fluxes and surface air concentrations of oxidized nitrogen species (NOy) show overall good agreement but excessive wintertime HNO3 production over the US Midwest and Northeast. This suggests that the model overestimates N2O5 hydrolysis in aerosols, and a possible factor is inhibition by aerosol nitrate. Model results indicate a total nitrogen deposition flux of 6.5 Tg N a−1 over the contiguous US, including 4.2 as NOy and 2.3 as NHx. Domestic anthropogenic, foreign anthropogenic, and natural sources contribute respectively 78%, 6%, and 16% of total nitrogen deposition over the contiguous US in the model. The domestic anthropogenic contribution generally exceeds 70% in the east and in populated areas of the west, and is typically 50–70% in remote areas of the west. Total nitrogen deposition in the model exceeds 10 kg N ha−1 a−1 over 35% of the contiguous US.

  3. How the Assumed Size Distribution of Dust Minerals Affects the Predicted Ice Forming Nuclei

    Science.gov (United States)

    Perlwitz, Jan P.; Fridlind, Ann M.; Garcia-Pando, Carlos Perez; Miller, Ron L.; Knopf, Daniel A.

    2015-01-01

    The formation of ice in clouds depends on the availability of ice forming nuclei (IFN). Dust aerosol particles are considered the most important source of IFN at a global scale. Recent laboratory studies have demonstrated that the mineral feldspar provides the most efficient dust IFN for immersion freezing and, together with kaolinite, for deposition ice nucleation, and that the phyllosilicates illite and montmorillonite (a member of the smectite group) are of secondary importance. A few studies have applied global models that simulate mineral specific dust to predict the number and geographical distribution of IFN. These studies have been based on the simple assumption that the mineral composition of soil as provided in data sets from the literature translates directly into the mineral composition of the dust aerosols. However, these tables are based on measurements of wet-sieved soil where dust aggregates are destroyed to a large degree. In consequence, the size distribution of dust is shifted to smaller sizes, and phyllosilicates like illite, kaolinite, and smectite are only found in the size range below 2 μm. In contrast, in measurements of the mineral composition of dust aerosols, the largest mass fraction of these phyllosilicates is found in the size range above 2 μm as part of dust aggregates. Conversely, the mass fraction of feldspar is smaller in this size range, varying with the geographical location. This may have a significant effect on the predicted IFN number and its geographical distribution. An improved mineral specific dust aerosol module has recently been implemented in the NASA GISS Earth System ModelE2. The dust module takes into consideration the disaggregated state of wet-sieved soil, on which the tables of soil mineral fractions are based. To simulate the atmospheric cycle of the minerals, the mass size distribution of each mineral in aggregates that are emitted from undispersed parent soil is reconstructed. In the current study, we test the null

  4. Predictive modeling of coral disease distribution within a reef system.

    Directory of Open Access Journals (Sweden)

    Gareth J Williams

    2010-02-01

    Diseases often display complex and distinct associations with their environment due to differences in etiology, modes of transmission between hosts, and the shifting balance between pathogen virulence and host resistance. Statistical modeling has been underutilized in coral disease research to explore the spatial patterns that result from this triad of interactions. We tested the hypotheses that: (1) coral diseases show distinct associations with multiple environmental factors, (2) incorporating interactions (synergistic collinearities) among environmental variables is important when predicting coral disease spatial patterns, and (3) modeling overall coral disease prevalence (the prevalence of multiple diseases as a single proportion value) will increase predictive error relative to modeling the same diseases independently. Four coral diseases: Porites growth anomalies (PorGA), Porites tissue loss (PorTL), Porites trematodiasis (PorTrem), and Montipora white syndrome (MWS), and their interactions with 17 predictor variables were modeled using boosted regression trees (BRT) within a reef system in Hawaii. Each disease showed distinct associations with the predictors. Environmental predictors showing the strongest overall associations with the coral diseases were both biotic and abiotic. PorGA was optimally predicted by a negative association with turbidity, PorTL and MWS by declines in butterflyfish and juvenile parrotfish abundance respectively, and PorTrem by a modal relationship with Porites host cover. Incorporating interactions among predictor variables contributed to the predictive power of our models, particularly for PorTrem. Combining diseases (using overall disease prevalence as the model response) led to an average six-fold increase in cross-validation predictive deviance over modeling the diseases individually. We therefore recommend coral diseases to be modeled separately, unless known to have etiologies that respond in a similar manner to...

  5. Sources and distribution of trace elements in Estonian peat

    Science.gov (United States)

    Orru, Hans; Orru, Mall

    2006-10-01

    This paper presents the results of the distribution of trace elements in Estonian mires. Sixty-four mires, representative of the different landscape units, were analyzed for the content of 16 trace elements (Cr, Mn, Ni, Cu, Zn, and Pb using AAS; Cd by GF-AAS; Hg by the cold vapour method; and V, Co, As, Sr, Mo, Th, and U by XRF) as well as other peat characteristics (peat type, degree of humification, pH and ash content). The results of the research show that concentrations of trace elements in peat are generally low: V 3.8 ± 0.6, Cr 3.1 ± 0.2, Mn 35.1 ± 2.7, Co 0.50 ± 0.05, Ni 3.7 ± 0.2, Cu 4.4 ± 0.3, Zn 10.0 ± 0.7, As 2.4 ± 0.3, Sr 21.9 ± 0.9, Mo 1.2 ± 0.2, Cd 0.12 ± 0.01, Hg 0.05 ± 0.01, Pb 3.3 ± 0.2, Th 0.47 ± 0.05, U 1.3 ± 0.2 μg g-1 and S 0.25 ± 0.02%. Statistical analyses on this large database showed that Co has the highest positive correlations with many elements and ash content. As, Ni, Mo, ash content and pH are also significantly correlated. The lowest abundance of most trace elements was recorded in mires fed only by precipitation (ombrotrophic), and the highest in mires fed by groundwater and springs (minerotrophic), which are situated in the flood plains of river valleys. Concentrations usually differ between the superficial, middle and bottom peat layers, but the significance decreases depending on the type of mire in the following order: transitional mires - raised bogs - fens. Differences among mire types are highest for the superficial but not significant for the basal peat layers. The use of peat with high concentrations of trace elements in agriculture, horticulture, as fuel, for water purification etc. may pose a risk for humans: via the food chain, through inhalation, drinking water etc.

  6. Rapidly locating sources and predicting contaminant dispersion in buildings

    International Nuclear Information System (INIS)

    Sohn, Michael D.; Reynolds, Pamela; Gadgil, Ashok J.; Sextro, Richard G.

    2002-01-01

    Contaminant releases in or near a building can lead to significant human exposures unless prompt response measures are taken. However, selecting the proper response depends in part on knowing the source locations, the amounts released, and the dispersion characteristics of the pollutants. We present an approach that estimates this information in real time. It uses Bayesian statistics to interpret measurements from sensors placed in the building yielding best estimates and uncertainties for the release conditions, including the operating state of the building. Because the method is fast, it continuously updates the estimates as measurements stream in from the sensors. We show preliminary results for characterizing a gas release in a three-floor, multi-room building at the Dugway Proving Grounds, Utah, USA
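
    One way to picture the Bayesian updating described above is a discrete library of candidate release scenarios with precomputed sensor responses, whose probabilities are renormalised as each reading arrives. The sketch below does this with synthetic scenarios and a Gaussian measurement-error model; it illustrates only the inference step, not the authors' building dispersion model.

```python
import numpy as np

# Sketch of sequential Bayesian scenario selection with hypothetical, synthetic data.

rng = np.random.default_rng(8)
n_scenarios, n_sensors = 50, 6
scenarios = np.column_stack([rng.integers(0, 10, n_scenarios),        # release room index
                             rng.uniform(1.0, 100.0, n_scenarios)])   # released mass (g)
predicted = rng.lognormal(0.0, 1.0, (n_scenarios, n_sensors)) * scenarios[:, 1:2] / 10.0

true_idx = 17                                        # the scenario that actually occurred
sigma = 0.3 * predicted[true_idx] + 1e-3             # assumed measurement noise level

posterior = np.full(n_scenarios, 1.0 / n_scenarios)  # uniform prior over scenarios
for sensor in range(n_sensors):                      # readings stream in one sensor at a time
    reading = predicted[true_idx, sensor] + rng.normal(0.0, sigma[sensor])
    likelihood = np.exp(-0.5 * ((reading - predicted[:, sensor]) / sigma[sensor]) ** 2)
    posterior *= likelihood
    posterior /= posterior.sum()
    best = np.argmax(posterior)
    print(f"after sensor {sensor}: best scenario {best} "
          f"(room {int(scenarios[best, 0])}, {scenarios[best, 1]:.0f} g), P = {posterior[best]:.2f}")
```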

  7. Predicting Spatial Distribution of Key Honeybee Pests in Kenya Using Remotely Sensed and Bioclimatic Variables: Key Honeybee Pests Distribution Models

    Directory of Open Access Journals (Sweden)

    David M. Makori

    2017-02-01

    Bee keeping is indispensable to global food production. It is an alternate income source, especially in rural underdeveloped African settlements, and an important forest conservation incentive. However, dwindling honeybee colonies around the world are attributed to pests and diseases whose spatial distribution and influences are not well established. In this study, we used remotely sensed data to improve the reliability of pest ecological niche (EN) models to attain reliable pest distribution maps. Occurrence data on four pests (Aethina tumida, Galleria mellonella, Oplostomus haroldi and Varroa destructor) were collected from apiaries within four main agro-ecological regions responsible for over 80% of Kenya’s bee keeping. Africlim bioclimatic and derived normalized difference vegetation index (NDVI) variables were used to model their ecological niches using Maximum Entropy (MaxEnt). Combined precipitation variables had a high positive logit influence on all remotely sensed and biotic models’ performance. Remotely sensed vegetation variables had a substantial effect on the model, contributing up to 40.8% for G. mellonella, and regions with high rainfall seasonality were predicted to be high-risk areas. Projections (to 2055) indicated that, with the current climate change trend, these regions will experience increased honeybee pest risk. We conclude that honeybee pests could be modelled using bioclimatic data and remotely sensed variables in MaxEnt. Although the bioclimatic data were most relevant in all model results, incorporating vegetation seasonality variables to improve mapping the ‘actual’ habitat of key honeybee pests and to identify risk and containment zones needs to be further investigated.

  8. Distribution and sources of particulate organic matter in the Indian monsoonal estuaries during monsoon

    Digital Repository Service at National Institute of Oceanography (India)

    Sarma, V.V.S.S.; Krishna, M.S.; Prasad, V.R.; Kumar, B.S.K.; Naidu, S.A.; Rao, G.D.; Viswanadham, R.; Sridevi, T.; Kumar, P.P.; Reddy, N.P.C.

    The distribution and sources of particulate organic carbon (POC) and nitrogen (PN) in 27 Indian estuaries were examined during the monsoon using the content and isotopic composition of carbon and nitrogen. Higher phytoplankton biomass was noticed...

  9. Distributed Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus Fogtmann; Vandenberghe, Lieven; Poulsen, Niels Kjølstad

    2016-01-01

    Integration of a large number of flexible consumers in a smart grid requires a scalable power balancing strategy. We formulate the control problem as an optimization problem to be solved repeatedly by the aggregator in a model predictive control framework. To solve the large-scale control problem...

  10. Somatic cell count distributions during lactation predict clinical mastitis

    NARCIS (Netherlands)

    Green, M.J.; Green, L.E.; Schukken, Y.H.; Bradley, A.J.; Peeler, E.J.; Barkema, H.W.; Haas, de Y.; Collis, V.J.; Medley, G.F.

    2004-01-01

    This research investigated somatic cell count (SCC) records during lactation, with the purpose of identifying distribution characteristics (mean and measures of variation) that were most closely associated with clinical mastitis. Three separate data sets were used, one containing quarter SCC (n =

  11. Optimal operation of water distribution networks by predictive control ...

    African Journals Online (AJOL)

    This paper presents an approach for the operational optimisation of potable water distribution networks. The maximisation of the use of low-cost power (e.g. overnight pumping) and the maintenance of a target chlorine concentration at final delivery points were defined as important optimisation objectives. The first objective ...

  12. Distributed BOLD-response in association cortex vector state space predicts reaction time during selective attention.

    Science.gov (United States)

    Musso, Francesco; Konrad, Andreas; Vucurevic, Goran; Schäffner, Cornelius; Friedrich, Britta; Frech, Peter; Stoeter, Peter; Winterer, Georg

    2006-02-15

    Human cortical information processing is thought to be dominated by distributed activity in vector state space (Churchland, P.S., Sejnowski, T.J., 1992. The Computational Brain. MIT Press, Cambridge.). In principle, it should be possible to quantify distributed brain activation with independent component analysis (ICA) through vector-based decomposition, i.e., through a separation of a mixture of sources. Using event-related functional magnetic resonance imaging (fMRI) during a selective attention-requiring task (visual oddball), we explored how the number of independent components within activated cortical areas is related to reaction time. Prior to ICA, the activated cortical areas were determined on the basis of a General linear model (GLM) voxel-by-voxel analysis of the target stimuli (checkerboard reversal). Two activated cortical areas (temporoparietal cortex, medial prefrontal cortex) were further investigated as these cortical regions are known to be the sites of simultaneously active electromagnetic generators which give rise to the compound event-related potential P300 during oddball task conditions. We found that the number of independent components more strongly predicted reaction time than the overall level of "activation" (GLM BOLD-response) in the left temporoparietal area whereas in the medial prefrontal cortex both ICA and GLM predicted reaction time equally well. Comparable correlations were not seen when principal components were used instead of independent components. These results indicate that the number of independently activated components, i.e., a high level of cortical activation complexity in cortical vector state space, may index particularly efficient information processing during selective attention-requiring tasks. To our best knowledge, this is the first report describing a potential relationship between neuronal generators of cognitive processes, the associated electrophysiological evidence for the existence of distributed networks

  13. Investigating The Neutron Flux Distribution Of The Miniature Neutron Source Reactor MNSR Type

    International Nuclear Information System (INIS)

    Nguyen Hoang Hai; Do Quang Binh

    2011-01-01

    Neutron flux distribution is an important characteristic of a nuclear reactor. In this article, the four-energy-group neutron flux distributions of the miniature neutron source reactor (MNSR) type along the radial and axial directions are investigated for the case in which the control rod is fully withdrawn. In addition, the effect of the control rod position on the thermal neutron flux distribution is also studied. The group constants for all reactor components are generated by the WIMSD code, and the neutron flux distributions are calculated by the CITATION code. The results show that the control rod position only affects the flux distribution in the region around the control rod. (author)

  14. Predictive Model for the Analysis of the Effects of Underwater Impulsive Sources on Marine Life

    National Research Council Canada - National Science Library

    Lazauski, Colin J

    2007-01-01

    A method is provided to predict the biological consequences to marine animals from exposure to multiple underwater impulsive sources by simulating underwater explosions over a defined period of time...

  15. Predicting the distribution of intensive poultry farming in Thailand

    OpenAIRE

    Van Boeckel, Thomas P; Thanapongtharm, Weerapong; Robinson, Timothy; D’Aietti, Laura; Gilbert, Marius

    2012-01-01

    Intensification of animal production can be an important factor in the emergence of infectious diseases because changes in production structure influence disease transmission patterns. In 2004 and 2005, Thailand was subject to two highly pathogenic avian influenza epidemic waves and large surveys were conducted of the poultry sector, providing detailed spatial data on various poultry types. This study analysed these data with the aim of establishing the distributions of extensive and intensiv...

  16. Correlated Sources in Distributed Networks--Data Transmission, Common Information Characterization and Inferencing

    Science.gov (United States)

    Liu, Wei

    2011-01-01

    Correlation is often present among observations in a distributed system. This thesis deals with various design issues when correlated data are observed at distributed terminals, including: communicating correlated sources over interference channels, characterizing the common information among dependent random variables, and testing the presence of…

  17. Y-Source Boost DC/DC Converter for Distributed Generation

    DEFF Research Database (Denmark)

    Siwakoti, Yam P.; Loh, Poh Chiang; Blaabjerg, Frede

    2015-01-01

    This paper introduces a versatile Y-source boost dc/dc converter intended for distributed power generation, where high gain is often demanded. The proposed converter uses a Y-source impedance network realized with a tightly coupled three-winding inductor for high voltage boosting that is presently...

  18. Measurement-device-independent quantum key distribution with correlated source-light-intensity errors

    Science.gov (United States)

    Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin

    2018-04-01

    We present an analysis for measurement-device-independent quantum key distribution with correlated source-light-intensity errors. Numerical results show that the results here can greatly improve the key rate especially with large intensity fluctuations and channel attenuation compared with prior results if the intensity fluctuations of different sources are correlated.

  19. Prediction of sound transmission loss through multilayered panels by using Gaussian distribution of directional incident energy

    Science.gov (United States)

    Kang; Ih; Kim; Kim

    2000-03-01

    In this study, a new prediction method is suggested for the sound transmission loss (STL) of multilayered panels of infinite extent. Conventional methods such as the random- or field-incidence approaches often give significant discrepancies in predicting the STL of multilayered panels when compared with experiments. In this paper, appropriate directional distributions of incident energy for predicting the STL of multilayered panels are proposed. In order to find a weighting function to represent the directional distribution of incident energy on the wall in a reverberation chamber, numerical simulations using a ray-tracing technique are carried out. Simulation results reveal that the directional distribution can be approximately expressed by a Gaussian distribution function in terms of the angle of incidence. The Gaussian function is applied to predict the STL of various multilayered panel configurations as well as single panels. The predictions agree well with the measurements, which validates the proposed Gaussian-function approach.
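
    The core idea, averaging an angle-dependent transmission coefficient with a Gaussian directional weighting instead of the classical field-incidence assumption, can be sketched as follows. The single-panel mass-law transmission coefficient stands in for the paper's multilayer model, and the Gaussian parameters are placeholders rather than the values identified from the ray-tracing simulations.

```python
# Illustrative sketch: average an angle-dependent transmission coefficient with a
# Gaussian directional weighting and convert to STL in dB. The single-panel mass
# law stands in for the multilayer model; theta0_deg and sigma_deg are placeholders.
import numpy as np

def tau_mass_law(theta, f, m_s=10.0, rho0=1.21, c0=343.0):
    # Oblique-incidence mass-law transmission coefficient of a limp panel
    # (surface mass m_s in kg/m^2), used only as a stand-in wall model.
    x = (2 * np.pi * f * m_s * np.cos(theta)) / (2 * rho0 * c0)
    return 1.0 / (1.0 + x ** 2)

def stl_gaussian(f, theta0_deg=0.0, sigma_deg=40.0, n=2000):
    theta = np.linspace(0.0, np.pi / 2 - 1e-6, n)
    # Gaussian weighting of incident energy over the angle of incidence,
    # combined with the usual sin(theta)*cos(theta) projection factor.
    w = np.exp(-0.5 * ((np.degrees(theta) - theta0_deg) / sigma_deg) ** 2)
    w *= np.sin(theta) * np.cos(theta)
    tau_avg = np.sum(tau_mass_law(theta, f) * w) / np.sum(w)
    return -10.0 * np.log10(tau_avg)

for f in (250, 500, 1000, 2000):
    print(f"{f:5d} Hz  STL ~ {stl_gaussian(f):.1f} dB")
```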

  20. Assessment of subchannel code ASSERT-PV for flow-distribution predictions

    International Nuclear Information System (INIS)

    Nava-Dominguez, A.; Rao, Y.F.; Waddington, G.M.

    2014-01-01

    Highlights: • Assessment of the subchannel code ASSERT-PV 3.2 for the prediction of flow distribution. • Open literature and in-house experimental data to quantify ASSERT-PV predictions. • Model changes assessed against vertical and horizontal flow experiments. • Improvement of flow-distribution predictions under CANDU-relevant conditions. - Abstract: This paper reports an assessment of the recently released subchannel code ASSERT-PV 3.2 for the prediction of flow-distribution in fuel bundles, including subchannel void fraction, quality and mass fluxes. Experimental data from open literature and from in-house tests are used to assess the flow-distribution models in ASSERT-PV 3.2. The prediction statistics using the recommended model set of ASSERT-PV 3.2 are compared to those from previous code versions. Separate-effects sensitivity studies are performed to quantify the contribution of each flow-distribution model change or enhancement to the improvement in flow-distribution prediction. The assessment demonstrates significant improvement in the prediction of flow-distribution in horizontal fuel channels containing CANDU bundles

  1. Assessment of subchannel code ASSERT-PV for flow-distribution predictions

    Energy Technology Data Exchange (ETDEWEB)

    Nava-Dominguez, A., E-mail: navadoma@aecl.ca; Rao, Y.F., E-mail: raoy@aecl.ca; Waddington, G.M., E-mail: waddingg@aecl.ca

    2014-08-15

    Highlights: • Assessment of the subchannel code ASSERT-PV 3.2 for the prediction of flow distribution. • Open literature and in-house experimental data to quantify ASSERT-PV predictions. • Model changes assessed against vertical and horizontal flow experiments. • Improvement of flow-distribution predictions under CANDU-relevant conditions. - Abstract: This paper reports an assessment of the recently released subchannel code ASSERT-PV 3.2 for the prediction of flow-distribution in fuel bundles, including subchannel void fraction, quality and mass fluxes. Experimental data from open literature and from in-house tests are used to assess the flow-distribution models in ASSERT-PV 3.2. The prediction statistics using the recommended model set of ASSERT-PV 3.2 are compared to those from previous code versions. Separate-effects sensitivity studies are performed to quantify the contribution of each flow-distribution model change or enhancement to the improvement in flow-distribution prediction. The assessment demonstrates significant improvement in the prediction of flow-distribution in horizontal fuel channels containing CANDU bundles.

  2. Research on Fault Prediction of Distribution Network Based on Large Data

    Directory of Open Access Journals (Sweden)

    Jinglong Zhou

    2017-01-01

    With the continuous development of information technology and the improvement of distribution automation, the amount of on-line monitoring and statistical data is increasing rapidly. This paper applies big-data techniques to the distribution system and describes the technologies used to collect, analyse and process distribution-system data. An artificial neural network mining algorithm combined with big data is investigated for fault diagnosis and prediction in the distribution network.

  3. Predicting the distribution of contamination from a chlorinated hydrocarbon release

    Energy Technology Data Exchange (ETDEWEB)

    Lupo, M.J. [K.W. Brown Environmental Services, College Station, TX (United States); Moridis, G.J. [Lawrence Berkeley Laboratory, Berkeley, CA (United States)

    1995-03-01

    The T2VOC model with the T2CG1 conjugate gradient package was used to simulate the motion of a dense chlorinated hydrocarbon plume released from an industrial plant. The release involved thousands of kilograms of trichloroethylene (TCE) and other chemicals that were disposed of onsite over a period of nearly twenty years. After the disposal practice ceased, an elongated plume was discovered. Because much of the plume underlies a developed area, it was of interest to study the migration history of the plume to determine the distribution of the contamination.

  4. Gaze distribution analysis and saliency prediction across age groups.

    Science.gov (United States)

    Krishna, Onkar; Helo, Andrea; Rämä, Pia; Aizawa, Kiyoharu

    2018-01-01

    Knowledge of the human visual system helps to develop better computational models of visual attention. State-of-the-art models have been developed to mimic the visual attention system of young adults that, however, largely ignore the variations that occur with age. In this paper, we investigated how visual scene processing changes with age and we propose an age-adapted framework that helps to develop a computational model that can predict saliency across different age groups. Our analysis uncovers how the explorativeness of an observer varies with age, how well saliency maps of an age group agree with fixation points of observers from the same or different age groups, and how age influences the center bias tendency. We analyzed the eye movement behavior of 82 observers belonging to four age groups while they explored visual scenes. Explorativeness was quantified in terms of the entropy of a saliency map, and the area under the curve (AUC) metric was used to quantify the agreement analysis and the center bias tendency. Analysis results were used to develop age-adapted saliency models. Our results suggest that the proposed age-adapted saliency model outperforms existing saliency models in predicting the regions of interest across age groups.
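
    The entropy measure of explorativeness mentioned above can be written down directly. The sketch below uses synthetic saliency maps; it is only meant to show how a compact (focused) map yields lower entropy than a widely spread one.

```python
# Sketch: "explorativeness" as the Shannon entropy of a normalized saliency map.
# The two maps below are synthetic; a compact map yields lower entropy.
import numpy as np

def saliency_entropy(saliency):
    p = saliency.ravel().astype(float)
    p = p / p.sum()              # treat the map as a probability distribution
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
focused = np.zeros((64, 64))
focused[28:36, 28:36] = 1.0      # fixations concentrated in a small region
diffuse = rng.random((64, 64))   # saliency spread over the whole scene

print("entropy (focused):", round(saliency_entropy(focused), 2), "bits")
print("entropy (diffuse):", round(saliency_entropy(diffuse), 2), "bits")
```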

  5. Voltage management of distribution networks with high penetration of distributed photovoltaic generation sources

    Science.gov (United States)

    Alyami, Saeed

    Installation of photovoltaic (PV) units could lead to great challenges to the existing electrical systems. Issues such as voltage rise, protection coordination, islanding detection, harmonics, increased or changed short-circuit levels, etc., need to be carefully addressed before we can see a wide adoption of this environmentally friendly technology. Voltage rise or overvoltage issues are of particular importance to be addressed for deploying more PV systems to distribution networks. This dissertation proposes a comprehensive solution to deal with the voltage violations in distribution networks, from controlling PV power outputs and electricity consumption of smart appliances in real time to optimal placement of PVs at the planning stage. The dissertation is composed of three parts: the literature review, the work that has already been done and the future research tasks. An overview on renewable energy generation and its challenges are given in Chapter 1. The overall literature survey, motivation and the scope of study are also outlined in the chapter. Detailed literature reviews are given in the rest of chapters. The overvoltage and undervoltage phenomena in typical distribution networks with integration of PVs are further explained in Chapter 2. Possible approaches for voltage quality control are also discussed in this chapter, followed by the discussion on the importance of the load management for PHEVs and appliances and its benefits to electric utilities and end users. A new real power capping method is presented in Chapter 3 to prevent overvoltage by adaptively setting the power caps for PV inverters in real time. The proposed method can maintain voltage profiles below a pre-set upper limit while maximizing the PV generation and fairly distributing the real power curtailments among all the PV systems in the network. As a result, each of the PV systems in the network has equal opportunity to generate electricity and shares the responsibility of voltage

  6. Flows and Stratification of an Enclosure Containing Both Localised and Vertically Distributed Sources of Buoyancy

    Science.gov (United States)

    Partridge, Jamie; Linden, Paul

    2013-11-01

    We examine the flows and stratification established in a naturally ventilated enclosure containing both a localised and vertically distributed source of buoyancy. The enclosure is ventilated through upper and lower openings which connect the space to an external ambient. Small scale laboratory experiments were carried out with water as the working medium and buoyancy being driven directly by temperature differences. A point source plume gave localised heating while the distributed source was driven by a controllable heater mat located in the side wall of the enclosure. The transient temperatures, as well as steady state temperature profiles, were recorded and are reported here. The temperature profiles inside the enclosure were found to be dependent on the effective opening area A*, a combination of the upper and lower openings, and the ratio of buoyancy fluxes from the distributed and localised source Ψ =Bw/Bp . Industrial CASE award with ARUP.

  7. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local universe

    DEFF Research Database (Denmark)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene

    2017-01-01

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe....... Assuming that the distribution of the neutrino sources follows that of matter we look for correlations between `warm' spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance...... (including that of IceCube-Gen2) we demonstrate that sources with local density exceeding $10^{-6} \\, \\text{Mpc}^{-3}$ and neutrino luminosity $L_{\

  8. Coordinated control of active and reactive power of distribution network with distributed PV cluster via model predictive control

    Science.gov (United States)

    Ji, Yu; Sheng, Wanxing; Jin, Wei; Wu, Ming; Liu, Haitao; Chen, Feng

    2018-02-01

    A coordinated optimal control method for the active and reactive power of a distribution network with distributed PV clusters, based on model predictive control, is proposed in this paper. The method divides the control process into long-time-scale optimal control and short-time-scale optimal control with multi-step optimization. Because the original optimization models are non-convex and nonlinear, and therefore hard to solve directly, they are transformed into a second-order cone programming problem. An improved IEEE 33-bus distribution network system is used to analyse the feasibility and effectiveness of the proposed control method.
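
    The second-order cone reformulation mentioned in the abstract can be illustrated on a toy problem. The sketch below (using cvxpy, an assumed tool rather than the authors') chooses the active and reactive power of a single PV inverter subject to an apparent-power limit expressed as a second-order cone constraint; it is not the paper's multi-step, multi-bus MPC model, and all values are placeholders.

```python
# Toy illustration of the second-order cone idea (not the paper's multi-bus,
# multi-step MPC model): pick active/reactive power of one PV inverter to track
# a setpoint while respecting the apparent-power rating ||(p, q)|| <= s_max.
# cvxpy is an assumed tool here; all numerical values are placeholders.
import cvxpy as cp

p = cp.Variable()             # active power injection (p.u.)
q = cp.Variable()             # reactive power injection (p.u.)
p_avail, s_max = 0.9, 1.0     # available PV power and inverter rating (p.u.)
q_ref = -0.5                  # reactive power requested by the upper layer (p.u.)

objective = cp.Minimize(cp.square(p - p_avail) + cp.square(q - q_ref))
constraints = [
    cp.norm(cp.hstack([p, q]), 2) <= s_max,   # second-order cone constraint
    p >= 0,
]
cp.Problem(objective, constraints).solve()
print(f"p = {p.value:.3f} p.u., q = {q.value:.3f} p.u.")
```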

  9. [Prediction of potential geographic distribution of Lyme disease in Qinghai province with Maximum Entropy model].

    Science.gov (United States)

    Zhang, Lin; Hou, Xuexia; Liu, Huixin; Liu, Wei; Wan, Kanglin; Hao, Qin

    2016-01-01

    The aim of this study was to predict the potential geographic distribution of Lyme disease in Qinghai by using the Maximum Entropy model (MaxEnt). The sero-diagnosis data of Lyme disease in 6 counties (Huzhu, Zeku, Tongde, Datong, Qilian and Xunhua) and the environmental and anthropogenic data including altitude, human footprint, normalized difference vegetation index (NDVI) and temperature in Qinghai province since 1990 were collected. Using the data from Huzhu, Zeku and Tongde, the prediction of the potential distribution of Lyme disease in Qinghai was conducted with MaxEnt. The prediction results were compared with the human sero-prevalence of Lyme disease in Datong, Qilian and Xunhua counties in Qinghai. Three hot spots of Lyme disease were predicted in Qinghai, all in the eastern forest areas. Furthermore, NDVI played the most important role in the model prediction, followed by human footprint. Datong, Qilian and Xunhua counties are all in eastern Qinghai. Xunhua was in hot spot area Ⅱ, Datong was close to the north of hot spot area Ⅲ, while Qilian, with the lowest sero-prevalence of Lyme disease, was not in the hot spot areas. The data were well modeled in MaxEnt (Area Under Curve=0.980). The actual distribution of Lyme disease in Qinghai was consistent with the results of the model prediction. MaxEnt could be used in predicting the potential distribution patterns of Lyme disease. The distribution of vegetation and the range and intensity of human activity might be related to the Lyme disease distribution.

  10. Prediction of in-phantom dose distribution using in-air neutron beam characteristics for BNCS

    Energy Technology Data Exchange (ETDEWEB)

    Verbeke, Jerome M.

    1999-12-14

    A monoenergetic neutron beam simulation study is carried out to determine the optimal neutron energy range for treatment of rheumatoid arthritis using radiation synovectomy. The goal of the treatment is the ablation of diseased synovial membranes in joints, such as knees and fingers. This study focuses on human knee joints. Two figures-of-merit are used to measure the neutron beam quality, the ratio of the synovium absorbed dose to the skin absorbed dose, and the ratio of the synovium absorbed dose to the bone absorbed dose. It was found that (a) thermal neutron beams are optimal for treatment, (b) similar absorbed dose rates and therapeutic ratios are obtained with monodirectional and isotropic neutron beams. Computation of the dose distribution in a human knee requires the simulation of particle transport from the neutron source to the knee phantom through the moderator. A method was developed to predict the dose distribution in a knee phantom from any neutron and photon beam spectra incident on the knee. This method was revealed to be reasonably accurate and enabled one to reduce by a factor of 10 the particle transport simulation time by modeling the moderator only.

  11. Prediction of in-phantom dose distribution using in-air neutron beam characteristics for BNCS

    International Nuclear Information System (INIS)

    Verbeke, Jerome M.

    1999-01-01

    A monoenergetic neutron beam simulation study is carried out to determine the optimal neutron energy range for treatment of rheumatoid arthritis using radiation synovectomy. The goal of the treatment is the ablation of diseased synovial membranes in joints, such as knees and fingers. This study focuses on human knee joints. Two figures-of-merit are used to measure the neutron beam quality, the ratio of the synovium absorbed dose to the skin absorbed dose, and the ratio of the synovium absorbed dose to the bone absorbed dose. It was found that (a) thermal neutron beams are optimal for treatment, (b) similar absorbed dose rates and therapeutic ratios are obtained with monodirectional and isotropic neutron beams. Computation of the dose distribution in a human knee requires the simulation of particle transport from the neutron source to the knee phantom through the moderator. A method was developed to predict the dose distribution in a knee phantom from any neutron and photon beam spectra incident on the knee. This method was revealed to be reasonably accurate and enabled one to reduce by a factor of 10 the particle transport simulation time by modeling the moderator only

  12. The Integration of Renewable Energy Sources into Electric Power Distribution Systems, Vol. II Utility Case Assessments

    Energy Technology Data Exchange (ETDEWEB)

    Zaininger, H.W.

    1994-01-01

    Electric utility distribution system impacts associated with the integration of renewable energy sources such as photovoltaics (PV) and wind turbines (WT) are considered in this project. The impacts are expected to vary from site to site according to the following characteristics: the local solar insolation and/or wind characteristics, renewable energy source penetration level, whether battery or other energy storage systems are applied, and local utility distribution design standards and planning practices. Small, distributed renewable energy sources are connected to the utility distribution system like other, similar kW- and MW-scale equipment and loads. Residential applications are expected to be connected to single-phase 120/240-V secondaries. Larger kW-scale applications may be connected to three-phase secondaries, and larger hundred-kW and MW-scale applications, such as MW-scale windfarms or PV plants, may be connected to electric utility primary systems via customer-owned primary and secondary collection systems. In any case, the installation of small, distributed renewable energy sources is expected to have a significant impact on local utility distribution primary and secondary system economics. Small, distributed renewable energy sources installed on utility distribution systems will also produce nonsite-specific utility generation system benefits such as energy and capacity displacement benefits, in addition to the local site-specific distribution system benefits. Although generation system benefits are not site-specific, they are utility-specific, and they vary significantly among utilities in different regions. In addition, transmission system benefits, environmental benefits and other benefits may apply. These benefits also vary significantly among utilities and regions. Seven utility case studies considering PV, WT, and battery storage were conducted to identify a range of potential renewable energy source distribution system applications. The

  13. Confusion-limited extragalactic source survey at 4.755 GHz. I. Source list and areal distributions

    International Nuclear Information System (INIS)

    Ledden, J.E.; Broderick, J.J.; Condon, J.J.; Brown, R.L.

    1980-01-01

    A confusion-limited 4.755-GHz survey covering 0.00956 sr between right ascensions 07h05m and 18h near declination +35° has been made with the NRAO 91-m telescope. The survey found 237 sources and is complete above 15 mJy. Source counts between 15 and 100 mJy were obtained directly. The P(D) distribution was used to determine the number counts between 0.5 and 13.2 mJy, to search for anisotropy in the density of faint extragalactic sources, and to set a 99%-confidence upper limit of 1.83 mK to the rms temperature fluctuation of the 2.7-K cosmic microwave background on angular scales smaller than 7.3 arcmin. The discrete-source density, normalized to the static Euclidean slope, falls off sufficiently rapidly below 100 mJy that no new population of faint flat-spectrum sources is required to explain the 4.755-GHz source counts

  14. Evaluation of Airborne Remote Sensing Techniques for Predicting the Distribution of Energetic Compounds on Impact Areas

    National Research Council Canada - National Science Library

    Graves, Mark R; Dove, Linda P; Jenkins, Thomas F; Bigl, Susan; Walsh, Marianne E; Hewitt, Alan D; Lambert, Dennis; Perron, Nancy; Ramsey, Charles; Gamey, Jeff; Beard, Les; Doll, William E; Magoun, Dale

    2007-01-01

    .... Remote sensing and geographic information system (GIS) technologies were utilized to assist in the development of enhanced sampling strategies to better predict the landscape-scale distribution of energetic compounds...

  15. Studies on the supposition of liquid source for irradiation and its dose distribution, (1)

    International Nuclear Information System (INIS)

    Yoshimura, Seiji; Nishida, Tsuneo

    1977-01-01

    Radioisotopes have recently been used and applied in many fields, and applications of irradiation effects are expected to attract particular attention in the future. Irradiation sources have so far been solid materials sealed in various capsules. Here we consider instead the use of a liquid radioisotope as the irradiation source, which offers advantages over a solid source in the freedom of its shape and the ease with which it can be attenuated. In these experiments we measured the dose distribution produced by a columnar liquid source, with a view to practical use. (auth.)

  16. Distributed Remote Vector Gaussian Source Coding for Wireless Acoustic Sensor Networks

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider the problem of remote vector Gaussian source coding for a wireless acoustic sensor network. Each node receives messages from multiple nodes in the network and decodes these messages using its own measurement of the sound field as side information. The node’s measurement...... and the estimates of the source resulting from decoding the received messages are then jointly encoded and transmitted to a neighboring node in the network. We show that for this distributed source coding scenario, one can encode a so-called conditional sufficient statistic of the sources instead of jointly...

  17. High frequency seismic signal generated by landslides on complex topographies: from point source to spatially distributed sources

    Science.gov (United States)

    Mangeney, A.; Kuehnert, J.; Capdeville, Y.; Durand, V.; Stutzmann, E.; Kone, E. H.; Sethi, S.

    2017-12-01

    During their flow along the topography, landslides generate seismic waves in a wide frequency range. These so-called landquakes can be recorded at very large distances (a few hundreds of km for large landslides). The recorded signals depend on the landslide seismic source and the seismic wave propagation. If the wave propagation is well understood, the seismic signals can be inverted for the seismic source and thus can be used to get information on the landslide properties and dynamics. Analysis and modeling of long-period seismic signals (10-150 s) have helped in this way to discriminate between different landslide scenarios and to constrain rheological parameters (e.g. Favreau et al., 2010). This was possible as topography poorly affects wave propagation at these long periods and the landslide seismic source can be approximated as a point source. In the near-field and at higher frequencies (> 1 Hz) the spatial extent of the source has to be taken into account and the influence of the topography on the recorded seismic signal should be quantified in order to extract information on the landslide properties and dynamics. The characteristic signature of distributed sources and varying topographies is studied as a function of frequency and recording distance. The time-dependent spatial distribution of the forces applied to the ground by the landslide is obtained using granular flow numerical modeling on 3D topography. The generated seismic waves are simulated using the spectral element method. The simulated seismic signal is compared to observed seismic data from rockfalls at the Dolomieu Crater of Piton de la Fournaise (La Réunion). Favreau, P., Mangeney, A., Lucas, A., Crosta, G., and Bouchut, F. (2010). Numerical modeling of landquakes. Geophysical Research Letters, 37(15):1-5.

  18. A Monte Carlo Method for the Analysis of Gamma Radiation Transport from Distributed Sources in Laminated Shields

    International Nuclear Information System (INIS)

    Leimdoerfer, M.

    1964-02-01

    A description is given of a method for calculating the penetration and energy deposition of gamma radiation, based on Monte Carlo techniques. The essential feature is the application of the exponential transformation to promote the transport of penetrating quanta and to balance the steep spatial variations of the source distributions which appear in secondary gamma emission problems. The estimated statistical errors in a number of sample problems, involving concrete shields with thicknesses up to 500 cm, are shown to be quite favorable, even at relatively short computing times. A practical reactor shielding problem is also shown and the predictions compared with measurements

  19. A Monte Carlo Method for the Analysis of Gamma Radiation Transport from Distributed Sources in Laminated Shields

    Energy Technology Data Exchange (ETDEWEB)

    Leimdoerfer, M

    1964-02-15

    A description is given of a method for calculating the penetration and energy deposition of gamma radiation, based on Monte Carlo techniques. The essential feature is the application of the exponential transformation to promote the transport of penetrating quanta and to balance the steep spatial variations of the source distributions which appear in secondary gamma emission problems. The estimated statistical errors in a number of sample problems, involving concrete shields with thicknesses up to 500 cm, are shown to be quite favorable, even at relatively short computing times. A practical reactor shielding problem is also shown and the predictions compared with measurements.
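
    The exponential transformation described in this report is, in modern terms, an importance-sampling scheme on the free-path length. The 1-D, absorption-only sketch below estimates slab transmission with a reduced attenuation coefficient and compensating statistical weights; it only illustrates the variance-reduction idea, not the laminated-shield code of the report.

```python
# 1-D, absorption-only sketch of the exponential transformation: free paths are
# sampled from a reduced attenuation coefficient mu_b = mu*(1 - bias) and each
# history carries the importance-sampling weight f(d)/g(d). Estimates exp(-mu*T).
import numpy as np

def transmission_mc(mu, thickness, bias=0.0, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    mu_b = mu * (1.0 - bias)                      # biased (stretched-path) coefficient
    d = rng.exponential(1.0 / mu_b, size=n)       # free paths from the biased pdf
    w = (mu / mu_b) * np.exp(-(mu - mu_b) * d)    # weight = true pdf / biased pdf
    score = (d >= thickness) * w                  # tally weighted crossings
    return score.mean(), score.std(ddof=1) / np.sqrt(n)

mu, T = 0.2, 50.0                                 # deep penetration: exp(-10) ~ 4.5e-5
for b in (0.0, 0.5, 0.8):
    est, err = transmission_mc(mu, T, bias=b)
    print(f"bias={b:.1f}  estimate={est:.3e}  relative std. error={err / est:.1%}")
```

    Stronger biasing pushes more histories across the slab, so the statistical error of the deep-penetration estimate drops markedly for the same number of histories, which is the effect the report describes for thick concrete shields.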

  20. A calculation of dose distribution around 32P spherical sources and its clinical application

    International Nuclear Information System (INIS)

    Ohara, Ken; Tanaka, Yoshiaki; Nishizawa, Kunihide; Maekoshi, Hisashi

    1977-01-01

    In order to avoid radiation hazards in the radiation therapy of craniopharyngioma using 32P, it is helpful to prepare a detailed dose distribution in the vicinity of the source in the tissue. Valley's method is used for the calculations. A problem with the method is pointed out and the method itself is refined numerically: it extends the region of xi where an approximate polynomial is available, and it determines the optimum degree of the polynomial as 9. The usefulness of the polynomial is examined by comparison with Berger's scaled absorbed dose distribution F(xi) and with Valley's result. The dose and dose rate distributions around uniformly distributed spherical sources are computed from the termwise integration of our polynomial of degree 9 over the range of xi from 0 to 1.7. The dose distributions from the spherical surface to a point 0.5 cm outside the source are given for source radii of 0.5, 0.6, 0.7, 1.0, and 1.5 cm, respectively. The therapeutic dose for a craniopharyngioma which has a spherically shaped cyst, and the absorbed dose to the normal tissue (oculomotor nerve), are obtained from these dose rate distributions. (auth.)

  1. Determining the temperature and density distribution from a Z-pinch radiation source

    International Nuclear Information System (INIS)

    Matuska, W.; Lee, H.

    1997-01-01

    High temperature radiation sources exceeding one hundred eV can be produced via z-pinches using currently available pulsed power. The usual approach to compare the z-pinch simulation and experimental data is to convert the radiation output at the source, whose temperature and density distributions are computed from the 2-D MHD code, into simulated data such as a spectrometer reading. This conversion process involves a radiation transfer calculation through the axially symmetric source, assuming local thermodynamic equilibrium (LTE), and folding the radiation that reaches the detector with the frequency-dependent response function. In this paper the authors propose a different approach by which they can determine the temperature and density distributions of the radiation source directly from the spatially resolved spectral data. This unfolding process is reliable and unambiguous for the ideal case where LTE holds and the source is axially symmetric. In reality, imperfect LTE and axial symmetry will introduce inaccuracies into the unfolded distributions. The authors use a parameter optimization routine to find the temperature and density distributions that best fit the data. They know from their past experience that the radiation source resulting from the implosion of a thin foil does not exhibit good axial symmetry. However, recent experiments carried out at Sandia National Laboratory using multiple wire arrays were very promising to achieve reasonably good symmetry. For these experiments the method will provide a valuable diagnostic tool

  2. The Impact of Source Distribution on Scalar Transport over Forested Hills

    Science.gov (United States)

    Ross, Andrew N.; Harman, Ian N.

    2015-08-01

    Numerical simulations of neutral flow over a two-dimensional, isolated, forested ridge are conducted to study the effects of scalar source distribution on scalar concentrations and fluxes over forested hills. Three different constant-flux sources are considered that span a range of idealized but ecologically important source distributions: a source at the ground, one uniformly distributed through the canopy, and one decaying with depth in the canopy. A fourth source type, where the in-canopy source depends on both the wind speed and the difference in concentration between the canopy and a reference concentration on the leaf, designed to mimic deposition, is also considered. The simulations show that the topographically-induced perturbations to the scalar concentration and fluxes are quantitatively dependent on the source distribution. The net impact is a balance of different processes affecting both advection and turbulent mixing, and can be significant even for moderate topography. Sources that have significant input in the deep canopy or at the ground exhibit a larger magnitude advection and turbulent flux-divergence terms in the canopy. The flows have identical velocity fields and so the differences are entirely due to the different tracer concentration fields resulting from the different source distributions. These in-canopy differences lead to larger spatial variations in above-canopy scalar fluxes for sources near the ground compared to cases where the source is predominantly located near the canopy top. Sensitivity tests show that the most significant impacts are often seen near to or slightly downstream of the flow separation or reattachment points within the canopy flow. The qualitative similarities to previous studies using periodic hills suggest that important processes occurring over isolated and periodic hills are not fundamentally different. The work has important implications for the interpretation of flux measurements over forests, even in

  3. DUSTMS-D: DISPOSAL UNIT SOURCE TERM - MULTIPLE SPECIES - DISTRIBUTED FAILURE DATA INPUT GUIDE.

    Energy Technology Data Exchange (ETDEWEB)

    SULLIVAN, T.M.

    2006-01-01

    Performance assessment of a low-level waste (LLW) disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). Many of these physical processes are influenced by the design of the disposal facility (e.g., how the engineered barriers control infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This has been done and the resulting models have been incorporated into the computer code DUST-MS (Disposal Unit Source Term-Multiple Species). The DUST-MS computer code is designed to model water flow, container degradation, release of contaminants from the wasteform to the contacting solution and transport through the subsurface media. Water flow through the facility over time is modeled using tabular input. Container degradation models include three types of failure rates: (a) instantaneous (all containers in a control volume fail at once), (b) uniformly distributed failures (containers fail at a linear rate between a specified starting and ending time), and (c) gaussian failure rates (containers fail at a rate determined by a mean failure time, standard deviation and gaussian distribution). Wasteform release models include four release mechanisms: (a) rinse with partitioning (inventory is released instantly upon container failure subject to equilibrium partitioning (sorption) with
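
    The three container-failure models named in the abstract translate directly into cumulative failed fractions as functions of time. The sketch below is illustrative only; the parameter names and values are not DUST-MS input syntax.

```python
# Cumulative fraction of containers failed versus time for the three failure
# models named in the abstract. Parameter names and values are illustrative,
# not DUST-MS input syntax.
import numpy as np
from scipy.stats import norm

def failed_fraction(t, model, **p):
    t = np.asarray(t, dtype=float)
    if model == "instantaneous":          # all containers fail at t_fail
        return (t >= p["t_fail"]).astype(float)
    if model == "uniform":                # linear ramp between start and end times
        return np.clip((t - p["start"]) / (p["end"] - p["start"]), 0.0, 1.0)
    if model == "gaussian":               # normal CDF set by mean and std. deviation
        return norm.cdf(t, loc=p["mean"], scale=p["std"])
    raise ValueError(f"unknown model: {model}")

t = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 300.0])   # years
print(failed_fraction(t, "instantaneous", t_fail=100.0))
print(failed_fraction(t, "uniform", start=50.0, end=250.0))
print(failed_fraction(t, "gaussian", mean=150.0, std=40.0))
```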

  4. Statistical measurement of the gamma-ray source-count distribution as a function of energy

    Science.gov (United States)

    Zechlin, H.-S.; Cuoco, A.; Donato, F.; Fornengo, N.; Regis, M.

    2017-01-01

    Photon counts statistics have recently been proven to provide a sensitive observable for characterizing gamma-ray source populations and for measuring the composition of the gamma-ray sky. In this work, we generalize the use of the standard 1-point probability distribution function (1pPDF) to decompose the high-latitude gamma-ray emission observed with Fermi-LAT into: (i) point-source contributions, (ii) the Galactic foreground contribution, and (iii) a diffuse isotropic background contribution. We analyze gamma-ray data in five adjacent energy bands between 1 and 171 GeV. We measure the source-count distribution dN/dS as a function of energy, and demonstrate that our results extend current measurements from source catalogs to the regime of so far undetected sources. Our method improves the sensitivity for resolving point-source populations by about one order of magnitude in flux. The dN/dS distribution as a function of flux is found to be compatible with a broken power law. We derive upper limits on further possible breaks as well as the angular power of unresolved sources. We discuss the composition of the gamma-ray sky and capabilities of the 1pPDF method.
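
    The broken power law quoted for dN/dS has a simple functional form, sketched below with placeholder indices and break flux rather than the paper's best-fit values.

```python
# Shape of a broken power law dN/dS, the form reported as compatible with the
# measured source counts. Normalization, indices and break flux are placeholders,
# not the paper's best-fit values.
import numpy as np

def broken_power_law(S, A=1.0, S_b=1e-10, n1=2.0, n2=1.6):
    """dN/dS with slope -n1 above the break flux S_b and -n2 below it."""
    S = np.asarray(S, dtype=float)
    return np.where(S >= S_b,
                    A * (S / S_b) ** (-n1),
                    A * (S / S_b) ** (-n2))

S = np.logspace(-12, -8, 5)     # photon flux grid (cm^-2 s^-1), illustrative only
print(broken_power_law(S))
```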

  5. An empirical evaluation of classification algorithms for fault prediction in open source projects

    Directory of Open Access Journals (Sweden)

    Arvinder Kaur

    2018-01-01

    Creating high-quality software has become difficult as the size and complexity of developed software have grown. Predicting software quality in early phases helps to reduce testing resources. Various statistical and machine learning techniques are used to predict software quality. In this paper, six machine learning models have been used for software quality prediction on five open source projects. A variety of metrics have been evaluated for the software, including the C & K, Henderson & Sellers and McCabe suites. Results show that Random Forest and Bagging produce good results, while Naïve Bayes is the least preferable for prediction.
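
    The comparison described can be reproduced in outline with scikit-learn. The sketch below evaluates the three headline classifiers on synthetic, imbalanced data standing in for the metric sets of the five projects; the dataset and scoring choices are assumptions, not the authors' exact protocol.

```python
# Outline of the reported comparison: Random Forest, Bagging and Naive Bayes
# evaluated by cross-validated ROC AUC. Synthetic, imbalanced data stand in for
# the defect data sets and metric suites of the five projects.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=12, weights=[0.8, 0.2],
                           random_state=0)     # roughly 20% "defective" modules
models = {
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "Bagging":       BaggingClassifier(random_state=0),
    "Naive Bayes":   GaussianNB(),
}
for name, clf in models.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:13s} mean ROC AUC = {auc:.3f}")
```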

  6. Predictive models of threatened plant species distribution in the Iberian arid south-east

    OpenAIRE

    Benito, Blas M.

    2013-01-01

    Poster on the distribution of three rare, endemic and endangered annual plants of arid zones in the south-eastern Iberian peninsula. Presented at the workshop "Predictive Modelling of Species Distribution: New Tools for the XXI Century" (Baeza, Spain, November 2005).

  7. On distributed model predictive control for vehicle platooning with a recursive feasibility guarantee

    NARCIS (Netherlands)

    Shi, Shengling; Lazar, Mircea

    2017-01-01

    This paper proposes a distributed model predictive control algorithm for vehicle platooning and more generally networked systems in a chain structure. The distributed models of the vehicle platoon are coupled through the input of the preceding vehicles. Using the principles of robust model

  8. Model Predictive Control of Z-source Neutral Point Clamped Inverter

    DEFF Research Database (Denmark)

    Mo, Wei; Loh, Poh Chiang; Blaabjerg, Frede

    2011-01-01

    This paper presents Model Predictive Control (MPC) of the Z-source Neutral Point Clamped (NPC) inverter. For illustration, current control of a Z-source NPC grid-connected inverter is analyzed and simulated. With MPC’s advantage of easily including system constraints, load current, impedance network...... response are obtained at the same time with a formulated Z-source NPC inverter network model. Steady-state and transient-state simulation results of MPC are presented, showing the good reference-tracking ability of this method. It provides a new control method for the Z-source NPC inverter...

  9. Prediction of monthly average global solar radiation based on statistical distribution of clearness index

    International Nuclear Information System (INIS)

    Ayodele, T.R.; Ogunjuyigbe, A.S.O.

    2015-01-01

    In this paper, the probability distribution of the clearness index is proposed for the prediction of global solar radiation. First, the clearness index is obtained from past global solar radiation data; then, the parameters of the appropriate distribution that best fits the clearness index are determined. The global solar radiation is thereafter predicted from the clearness index using inverse transformation of the cumulative distribution function. To validate the proposed method, eight years of global solar radiation data (2000–2007) for Ibadan, Nigeria are used to determine the parameters of the appropriate probability distribution for the clearness index. The calculated parameters are then used to predict the future monthly average global solar radiation for the following year (2008). The predicted values are compared with the measured values using four statistical tests: the Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE) and the coefficient of determination (R²). The proposed method is also compared to existing regression models. The results show that the logistic distribution provides the best fit for the clearness index of Ibadan and the proposed method is effective in predicting the monthly average global solar radiation, with an overall RMSE of 0.383 MJ/m²/day, MAE of 0.295 MJ/m²/day, MAPE of 2% and R² of 0.967. - Highlights: • Distribution of clearness index is proposed for prediction of global solar radiation. • The clearness index is obtained from the past data of global solar radiation. • The parameters of distribution that best fit the clearness index are determined. • Solar radiation is predicted from the clearness index using inverse transformation. • The method is effective in predicting the monthly average global solar radiation.
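
    The prediction step described, fitting a distribution to historical clearness-index values and inverting its CDF, can be sketched as follows. The historical data below are synthetic, and the extraterrestrial radiation value is a placeholder; only the fit-then-inverse-transform mechanics follow the abstract.

```python
# Fit a logistic distribution to historical clearness-index values and predict by
# inverse-transform sampling of the fitted CDF, following the procedure described.
# The historical sample and the extraterrestrial radiation H0 are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
kt_history = np.clip(rng.normal(0.55, 0.08, size=96), 0.05, 0.85)   # past monthly k_t

loc, scale = stats.logistic.fit(kt_history)     # the two logistic parameters

u = rng.uniform(size=12)                                        # one draw per month
kt_pred = np.clip(stats.logistic.ppf(u, loc=loc, scale=scale), 0.0, 1.0)
H0 = 30.0                          # monthly-mean extraterrestrial radiation, MJ/m^2/day
H_pred = kt_pred * H0              # predicted global radiation = k_t * H0
print(np.round(H_pred, 2))
```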

  10. Z-Source-Inverter-Based Flexible Distributed Generation System Solution for Grid Power Quality Improvement

    DEFF Research Database (Denmark)

    Blaabjerg, Frede; Vilathgamuwa, D. M.; Loh, Poh Chiang

    2009-01-01

    Distributed generation (DG) systems are usually connected to the grid using power electronic converters. Power delivered from such DG sources depends on factors like energy availability and load demand. The converters used in power conversion do not operate with their full capacity all the time......-stage buck-boost inverter, recently proposed Z-source inverter (ZSI) is a good candidate for future DG systems. This paper presents a controller design for a ZSI-based DG system to improve power quality of distribution systems. The proposed control method is tested with simulation results obtained using...

  11. Predicting fundamental and realized distributions based on thermal niche: A case study of a freshwater turtle

    Science.gov (United States)

    Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco; Ribeiro, Bruno R.

    2018-04-01

    Species distribution models (SDM) have been broadly used in ecology to address theoretical and practical problems. Currently, there are two main approaches to generate SDMs: (i) correlative models, which are based on species occurrences and environmental predictor layers, and (ii) process-based models, which are constructed from species' functional traits and physiological tolerances. The distributions estimated by each approach are based on different components of the species' niche. Predictions of correlative models approach species' realized niches, while predictions of process-based models are more akin to the species' fundamental niche. Here, we integrated the predictions of fundamental and realized distributions of the freshwater turtle Trachemys dorbigni. The fundamental distribution was estimated using data on T. dorbigni's egg incubation temperature, and the realized distribution was estimated using species occurrence records. Both types of distributions were estimated using the same regression approaches (logistic regression and support vector machines), considering both macroclimatic and microclimatic temperatures. The realized distribution of T. dorbigni was generally nested within its fundamental distribution, reinforcing the theoretical assumption that the species' realized niche is a subset of its fundamental niche. Both modelling algorithms produced similar results, but microtemperature generated better results than macrotemperature for the incubation model. Finally, our results reinforce the conclusion that species' realized distributions are constrained by factors other than thermal tolerances alone.

  12. A hybrid approach to advancing quantitative prediction of tissue distribution of basic drugs in human

    International Nuclear Information System (INIS)

    Poulin, Patrick; Ekins, Sean; Theil, Frank-Peter

    2011-01-01

    A general toxicity of basic drugs is related to phospholipidosis in tissues. Therefore, it is essential to predict the tissue distribution of basic drugs to facilitate an initial estimate of that toxicity. The objective of the present study was to further assess the original prediction method that consisted of using the binding to red blood cells measured in vitro for the unbound drug (RBCu) as a surrogate for tissue distribution, by correlating it to unbound tissue:plasma partition coefficients (Kpu) of several tissues, and finally to predict volume of distribution at steady-state (Vss) in humans under in vivo conditions. This correlation method demonstrated inaccurate predictions of Vss for particular basic drugs that did not follow the original correlation principle. Therefore, the novelty of this study is to provide clarity on the actual hypotheses to identify i) the impact of pharmacological mode of action on the generic correlation of RBCu-Kpu, ii) additional mechanisms of tissue distribution for the outlier drugs, iii) molecular features and properties that differentiate compounds as outliers in the original correlation analysis in order to facilitate its applicability domain alongside the properties already used so far, and finally iv) to present a novel and refined correlation method that is superior to what has been previously published for the prediction of human Vss of basic drugs. Applying a refined correlation method after identifying outliers would facilitate the prediction of more accurate distribution parameters as key inputs used in physiologically based pharmacokinetic (PBPK) and phospholipidosis models.

  13. [Effects of sampling plot number on tree species distribution prediction under climate change].

    Science.gov (United States)

    Liang, Yu; He, Hong-Shi; Wu, Zhi-Wei; Li, Xiao-Na; Luo, Xu

    2013-05-01

    Based on the neutral landscapes under different degrees of landscape fragmentation, this paper studied the effects of sampling plot number on the prediction of tree species distribution at landscape scale under climate change. The tree species distribution was predicted by the coupled modeling approach which linked an ecosystem process model with a forest landscape model, and three contingent scenarios and one reference scenario of sampling plot numbers were assumed. The differences between the three scenarios and the reference scenario under different degrees of landscape fragmentation were tested. The results indicated that the effects of sampling plot number on the prediction of tree species distribution depended on the tree species life history attributes. For the generalist species, the prediction of their distribution at landscape scale needed more plots. Except for the extreme specialist, landscape fragmentation degree also affected the effects of sampling plot number on the prediction. With the increase of simulation period, the effects of sampling plot number on the prediction of tree species distribution at landscape scale could be changed. For generalist species, more plots are needed for the long-term simulation.

  14. On-line test of power distribution prediction system for boiling water reactors

    International Nuclear Information System (INIS)

    Nishizawa, Y.; Kiguchi, T.; Kobayashi, S.; Takumi, K.; Tanaka, H.; Tsutsumi, R.; Yokomi, M.

    1982-01-01

    A power distribution prediction system for boiling water reactors has been developed and its on-line performance test has proceeded at an operating commercial reactor. This system predicts the power distribution or thermal margin in advance of control rod operations and core flow rate change. This system consists of an on-line computer system, an operator's console with a color cathode-ray tube, and plant data input devices. The main functions of this system are present power distribution monitoring, power distribution prediction, and power-up trajectory prediction. The calculation method is based on a simplified nuclear thermal-hydraulic calculation, which is combined with a method of model identification to the actual reactor core state. It has been ascertained by the on-line test that the predicted power distribution (readings of traversing in-core probe) agrees with the measured data within 6% root-mean-square. The computing time required for one prediction calculation step is less than or equal to 1.5 min by an HIDIC-80 on-line computer

  15. Prediction of vertical distribution and ambient development temperature of Baltic cod, Gadus morhua L., eggs

    DEFF Research Database (Denmark)

    Wieland, Kai; Jarre, Astrid

    1997-01-01

    An artificial neural network (ANN) model was established to predict the vertical distribution of Baltic cod eggs. Data from vertical distribution sampling in the Bornholm Basin over the period 1986-1995 were used to train and test the network, while data sets from sampling in 1996 were used...... for validation. The model explained 82% of the variance between observed and predicted relative frequencies of occurrence of the eggs in relation to salinity, temperature and oxygen concentration; The ANN fitted all observations satisfactorily except for one sampling date, where an exceptional hydrographic...... situation was observed. Mean ambient temperatures, calculated from the predicted vertical distributions of the eggs and used for the computation of egg developmental times, were overestimated by 0.05 degrees C on average. This corresponds to an error in prediction of egg developmental time of less than 1%...

  16. Predicting dihedral angle probability distributions for protein coil residues from primary sequence using neural networks

    DEFF Research Database (Denmark)

    Helles, Glennie; Fonseca, Rasmus

    2009-01-01

    The flanking residues seem to have a significant influence on the dihedral angles adopted by the individual amino acids in coil segments. In this work we attempt to predict a probability distribution of these dihedral angles based on the flanking residues. While attempts to predict dihedral angles of coil segments have been done previously, none have, to our knowledge, presented comparable results for the probability distribution of dihedral angles. Results: In this paper we develop an artificial neural network that uses an input-window of amino acids to predict a dihedral angle probability distribution for the middle residue in the input-window. The trained neural network shows a significant improvement (4-68%) in predicting the most probable bin (covering a 30°×30° area of the dihedral angle space) for all amino acids in the data set compared to first order statistics. An accuracy comparable to that of secondary...
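
    The basic architecture described, an input window of residues mapped to a probability distribution over 30°×30° dihedral-angle bins, can be sketched with a small softmax classifier. The data below are random placeholders, so the sketch shows only the shape of the problem, not the reported accuracy.

```python
# Shape of the problem only: a window of flanking residues (one-hot encoded) is
# mapped to a probability distribution over 30x30 degree (phi, psi) bins via a
# softmax output. Labels are random placeholders, not real coil-residue data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
window, n_aa, n_bins = 5, 20, 144          # 144 = (360/30)**2 dihedral-angle bins
n_samples = 1000

X = rng.integers(0, n_aa, size=(n_samples, window))
X_onehot = np.eye(n_aa)[X].reshape(n_samples, window * n_aa)
y = rng.integers(0, n_bins, size=n_samples)        # placeholder bin labels

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
net.fit(X_onehot, y)
probs = net.predict_proba(X_onehot[:1])            # distribution over the bins
print(probs.shape, float(probs.sum()))
```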

  17. Characterization of a Distributed Plasma Ionization Source (DPIS) for Ion Mobility Spectrometry and Mass Spectrometry

    International Nuclear Information System (INIS)

    Waltman, Melanie J.; Dwivedi, Prabha; Hill, Herbert; Blanchard, William C.; Ewing, Robert G.

    2008-01-01

    A recently developed atmospheric pressure ionization source, a distributed plasma ionization source (DPIS), was characterized and compared to commonly used atmospheric pressure ionization sources with both mass spectrometry and ion mobility spectrometry. The source consisted of two electrodes of different sizes separated by a thin dielectric. Application of a high RF voltage across the electrodes generated plasma in air, yielding both positive and negative ions depending on the polarity of the applied potential. These reactant ions subsequently ionized the analyte vapors. The reactant ions generated were similar to those created in a conventional point-to-plane corona discharge ion source. The positive reactant ions generated by the source were mass-identified as solvated protons of general formula (H2O)nH+ with (H2O)2H+ as the most abundant reactant ion. The negative reactant ions produced were mass-identified primarily as CO3-, NO3-, NO2-, O3- and O2- of various relative intensities. The predominant ion and relative ion ratios varied depending upon source construction and supporting gas flow rates. A few compounds including drugs, explosives and environmental pollutants were selected to evaluate the new ionization source. The source was operated continuously for several months and, although deterioration was observed visually, the source continued to produce ions at a rate similar to that under the initial conditions. The results indicated that the DPIS may have a longer operating life than a conventional corona discharge.

  18. Topographic Metric Predictions of Soil redistribution and Organic Carbon Distribution in Croplands

    Science.gov (United States)

    Mccarty, G.; Li, X.

    2017-12-01

    Landscape topography is a key factor controlling soil redistribution and soil organic carbon (SOC) distribution in Iowa croplands (USA). In this study, we adopted a combined approach based on carbon (13C) and cesium (137Cs) isotope tracers, and digital terrain analysis to understand patterns of SOC redistribution and carbon sequestration dynamics as influenced by landscape topography in tilled cropland under long-term corn/soybean management. The fallout radionuclide 137Cs was used to estimate soil redistribution rates and a Lidar-derived DEM was used to obtain a set of topographic metrics for digital terrain analysis. Soil redistribution rates and patterns of SOC distribution were examined across 560 sampling locations at two field sites as well as at larger scale within the watershed. We used δ13C content in SOC to partition C3- and C4-plant-derived C density at 127 locations in one of the two field sites, with corn being the primary source of C4 C. Topography-based models were developed to simulate SOC distribution and soil redistribution using stepwise ordinary least square regression (SOLSR) and stepwise principal component regression (SPCR). All topography-based models developed through SPCR and SOLSR demonstrated good simulation performance, explaining more than 62% of the variability in SOC density and soil redistribution rates across the two field sites with intensive samplings. However, the SOLSR models showed lower reliability than the SPCR models in predicting SOC density at the watershed scale. Spatial patterns of C3-derived SOC density were highly related to those of SOC density. Topographic metrics exerted substantial influence on C3-derived SOC density, with the SPCR model accounting for 76.5% of the spatial variance. In contrast, C4-derived SOC density had poor spatial structure, likely reflecting the substantial contribution of corn vegetation to recently sequestered SOC density. Results of this study highlighted the utility of topographic SPCR models for scaling

  19. Two sample Bayesian prediction intervals for order statistics based on the inverse exponential-type distributions using right censored sample

    Directory of Open Access Journals (Sweden)

    M.M. Mohie El-Din

    2011-10-01

    In this paper, two-sample Bayesian prediction intervals for order statistics (OS) are obtained. This prediction is based on a certain class of inverse exponential-type distributions using a right-censored sample. A general class of prior density functions is used and the predictive cumulative function is obtained for the two-sample case. The class of inverse exponential-type distributions includes several important distributions such as the inverse Weibull distribution, the inverse Burr distribution, the log-logistic distribution, the inverse Pareto distribution and the inverse paralogistic distribution. Special cases of the inverse Weibull model such as the inverse exponential model and the inverse Rayleigh model are considered.

  20. An appraisal of wind speed distribution prediction by soft computing methodologies: A comparative study

    International Nuclear Information System (INIS)

    Petković, Dalibor; Shamshirband, Shahaboddin; Anuar, Nor Badrul; Saboohi, Hadi; Abdul Wahab, Ainuddin Wahid; Protić, Milan; Zalnezhad, Erfan; Mirhashemi, Seyed Mohammad Amin

    2014-01-01

    Highlights: • Probabilistic distribution functions of wind speed. • Two-parameter Weibull probability distribution. • To build an effective prediction model of the distribution of wind speed. • Support vector regression application as probability function for wind speed. - Abstract: The probabilistic distribution of wind speed is among the more significant wind characteristics in examining wind energy potential and the performance of wind energy conversion systems. When the wind speed probability distribution is known, the wind energy distribution can be easily obtained. Therefore, the probability distribution of wind speed is a very important piece of information required in assessing wind energy potential. For this reason, a large number of studies have been established concerning the use of a variety of probability density functions to describe wind speed frequency distributions. Although the two-parameter Weibull distribution is a widely used and accepted method, solving the function is very challenging. In this study, the polynomial and radial basis functions (RBF) are applied as the kernel function of support vector regression (SVR) to estimate the two parameters of the Weibull distribution function according to previously established analytical methods. Rather than minimizing the observed training error, SVR_poly and SVR_rbf attempt to minimize the generalization error bound, so as to achieve generalized performance. According to the experimental results, enhanced predictive accuracy and capability of generalization can be achieved using the SVR approach compared to other soft computing methodologies.
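
    As context for the record above, the two Weibull parameters in question can be recovered from a wind-speed sample by a standard maximum-likelihood fit; the paper instead trains SVR models with polynomial and RBF kernels to estimate them. The sketch below uses the conventional fit purely to illustrate what the two parameters describe; the data are synthetic.

```python
# What the two Weibull parameters describe: shape k and scale c fitted to a
# synthetic wind-speed sample by maximum likelihood (the paper instead estimates
# them with polynomial- and RBF-kernel SVR models).
from math import gamma

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wind = stats.weibull_min.rvs(c=2.1, scale=6.5, size=2000, random_state=rng)  # m/s

k, loc, c = stats.weibull_min.fit(wind, floc=0)   # two-parameter fit (location = 0)
print(f"shape k = {k:.2f}, scale c = {c:.2f} m/s")

# Wind power density is proportional to E[v^3]; with the fitted Weibull:
mean_v3 = c ** 3 * gamma(1 + 3 / k)
print(f"E[v^3] = {mean_v3:.1f} (m/s)^3")
```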

  1. The electron density and temperature distributions predicted by bow shock models of Herbig-Haro objects

    International Nuclear Information System (INIS)

    Noriega-Crespo, A.; Bohm, K.H.; Raga, A.C.

    1990-01-01

    The observable spatial electron density and temperature distributions for a series of simple bow shock models, which are of special interest in the study of Herbig-Haro (H-H) objects, are computed. The spatial electron density and temperature distributions are derived from forbidden line ratios. It should be possible to use these results to recognize whether an observed electron density or temperature distribution can be attributed to a bow shock, as is the case in some Herbig-Haro objects. As an example, the empirical and predicted distributions for H-H 1 are compared. The predicted electron temperature distributions give the correct temperature range, and they show very good diagnostic possibilities if the forbidden O III (4959 + 5007)/4363 line ratio is used. 44 refs
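
    For context, the [O III] temperature diagnostic referred to above is commonly quoted in the low-density limit in the following approximate form (the coefficients are the ones usually given in standard nebular-astrophysics references and vary slightly with the adopted atomic data):

        \frac{j_{\lambda 4959} + j_{\lambda 5007}}{j_{\lambda 4363}}
          \;\simeq\;
          \frac{7.90\,\exp\!\left(3.29\times 10^{4}/T_e\right)}
               {1 + 4.5\times 10^{-4}\, n_e\, T_e^{-1/2}}

    Measuring the line ratio on the left thus yields the electron temperature T_e once the electron density n_e is approximately known.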

  2. Measurement and prediction of aromatic solute distribution coefficients for aqueous-organic solvent systems. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, J.R.; Luthy, R.G.

    1984-06-01

    Experimental and modeling activities were performed to assess techniques for measurement and prediction of distribution coefficients for aromatic solutes between water and immiscible organic solvents. Experiments were performed to measure distribution coefficients in both clean water and wastewater systems, and to assess treatment of a wastewater by solvent extraction. The theoretical portions of this investigation were directed towards development of techniques for prediction of solute-solvent/water distribution coefficients. Experiments were performed to assess treatment of a phenolic-laden coal conversion wastewater by solvent extraction. The results showed that solvent extraction for recovery of phenolic material offered several wastewater processing advantages. Distribution coefficients were measured in clean water and wastewater systems for aromatic solutes of varying functionality with different solvent types. It was found that distribution coefficients for these compounds in clean water systems were not statistically different from distribution coefficients determined in a complex coal conversion process wastewater. These and other aromatic solute distribution coefficient data were employed for evaluation of modeling techniques for prediction of solute-solvent/water distribution coefficients. Eight solvents were selected in order to represent various chemical classes: toluene and benzene (aromatics), hexane and heptane (alkanes), n-octanol (alcohols), n-butyl acetate (esters), diisopropyl ether (ethers), and methylisobutyl ketone (ketones). The aromatic solutes included: nonpolar compounds such as benzene, toluene and naphthalene, phenolic compounds such as phenol, cresol and catechol, nitrogenous aromatics such as aniline, pyridine and aminonaphthalene, and other aromatic solutes such as naphthol, quinolinol and halogenated compounds. 100 references, 20 figures, 34 tables.

  3. Power Law Distributions in the Experiment for Adjustment of the Ion Source of the NBI System

    International Nuclear Information System (INIS)

    Han Xiaopu; Hu Chundong

    2005-01-01

    The experiential adjustment process in an experiment on the ion source of the neutral beam injector system for the HT-7 Tokamak is reported in this paper. With regard to the data obtained in the same condition, in arranging the arc current intensities of every shot with a decay rank, the distributions of the arc current intensity correspond to the power laws, and the distribution obtained in the condition with the cryo-pump corresponds to the double Pareto distribution. Using the similar study method, the distributions of the arc duration are close to the power laws too. These power law distributions are formed rather naturally instead of being the results of purposeful seeking

  4. Evaluation of the dose distribution for prostate implants using various 125I and 103Pd sources

    International Nuclear Information System (INIS)

    Meigooni, Ali S.; Luerman, Christine M.; Sowards, Keith T.

    2009-01-01

    Recently, several different models of 125I and 103Pd brachytherapy sources have been introduced in order to meet the increasing demand for prostate seed implants. These sources have different internal structures; hence, their TG-43 dosimetric parameters are not the same. In this study, the effects of the dosimetric differences among the sources on their clinical applications were evaluated. The quantitative and qualitative evaluations were performed by comparisons of dose distributions and dose volume histograms of prostate implants calculated for various designs of 125I and 103Pd sources. These comparisons were made for an identical implant scheme with the same number of seeds for each source. The results were compared with the Amersham model 6711 seed for 125I and the Theragenics model 200 seed for 103Pd using the same implant scheme.

  5. Time-dependent anisotropic distributed source capability in transient 3-d transport code tort-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    The transient 3-D discrete ordinates transport code TORT-TD has been extended to account for time-dependent anisotropic distributed external sources. The extension aims at the simulation of the pulsed neutron source in the YALINA-Thermal subcritical assembly. Since feedback effects are not relevant in this zero-power configuration, this offers a unique opportunity to validate the time-dependent neutron kinetics of TORT-TD with experimental data. The extensions made in TORT-TD to incorporate a time-dependent anisotropic external source are described. The steady state of the YALINA-Thermal assembly and its response to an artificial square-wave source pulse sequence have been analysed with TORT-TD using pin-wise homogenised cross sections in 18 prompt energy groups with P1 scattering order and 8 delayed neutron groups. The results demonstrate the applicability of TORT-TD to subcritical problems with a time-dependent external source. (authors)

  6. Angular and mass resolved energy distribution measurements with a gallium liquid metal ion source

    International Nuclear Information System (INIS)

    Marriott, Philip

    1987-06-01

    Ionisation and energy broadening mechanisms relevant to liquid metal ion sources are discussed. A review of experimental results giving a picture of source operation and a discussion of the emission mechanisms thought to occur for the ionic species and droplets emitted is presented. Further work is suggested by this review and an analysis system for angular and mass resolved energy distribution measurements of liquid metal ion source beams has been constructed. The energy analyser has been calibrated and a series of measurements, both on and off the beam axis, of 69Ga+, Ga++ and Ga2+ ions emitted at various currents from a gallium source has been performed. A comparison is made between these results and published work where possible, and the results are discussed with the aim of determining the emission and energy spread mechanisms operating in the gallium liquid metal ion source. (author)

  7. Predicting habitat distribution to conserve seagrass threatened by sea level rise

    Science.gov (United States)

    Saunders, M. I.; Baldock, T.; Brown, C. J.; Callaghan, D. P.; Golshani, A.; Hamylton, S.; Hoegh-guldberg, O.; Leon, J. X.; Lovelock, C. E.; Lyons, M. B.; O'Brien, K.; Mumby, P.; Phinn, S. R.; Roelfsema, C. M.

    2013-12-01

    Sea level rise (SLR) over the 21st century will cause significant redistribution of valuable coastal habitats. Seagrasses form extensive and highly productive meadows in shallow coastal seas that support high biodiversity, including economically valuable and threatened species. Predictive habitat models can inform local management actions that will be required to conserve seagrass faced with multiple stressors. We developed novel modelling approaches, based on extensive field data sets, to examine the effects of sea level rise and other stressors on two representative seagrass habitats in Australia. First, we modelled interactive effects of SLR, water clarity and adjacent land use on estuarine seagrass meadows in Moreton Bay, Southeast Queensland. The extent of suitable seagrass habitat was predicted to decline by 17% by 2100 due to SLR alone, but losses were predicted to be significantly reduced through improvements in water quality (Fig 1a) and by allowing space for seagrass migration with inundation. The rate of sedimentation in seagrass strongly affected the area of suitable habitat for seagrass in sea level rise scenarios (Fig 1b). Further research to understand spatial, temporal and environmental variability of sediment accretion in seagrass is required. Second, we modelled changes in wave energy distribution due to predicted SLR in a linked coral reef and seagrass ecosystem at Lizard Island, Great Barrier Reef. Scenarios where the water depth over the coral reef deepened due to SLR and minimal reef accretion resulted in larger waves propagating shoreward, changing the existing hydrodynamic conditions sufficiently to reduce the area of suitable habitat for seagrass. In a scenario where accretion of the coral reef was severely compromised (e.g. warming, acidification, overfishing), the probability of the presence of seagrass declined significantly. Management to maintain coral health will therefore also benefit seagrasses subject to SLR in reef environments. Further

  8. Light source distribution and scattering phase function influence light transport in diffuse multi-layered media

    Science.gov (United States)

    Vaudelle, Fabrice; L'Huillier, Jean-Pierre; Askoura, Mohamed Lamine

    2017-06-01

    Red and near-infrared light is often used as a diagnostic and imaging probe for highly scattering media such as biological tissues, fruits and vegetables. Part of the diffusively reflected light gives interesting information related to the tissue subsurface, whereas light recorded at further distances may probe deeper into the interrogated turbid tissues. However, modelling diffusive events occurring at short source-detector distances requires considering both the distribution of the light sources and the scattering phase functions. In this report, a modified Monte Carlo model is used to compute light transport in curved and multi-layered tissue samples which are covered with a thin and highly diffusing tissue layer. Different light source distributions (ballistic, diffuse or Lambertian) are tested with specific scattering phase functions (modified or not modified Henyey-Greenstein, Gegenbauer and Mie) to compute the amount of backscattered and transmitted light in apple and human skin structures. Comparisons between simulation results and experiments carried out with a multispectral imaging setup confirm the soundness of the theoretical strategy and may explain the role of the skin on light transport in whole and half-cut apples. Other computational results show that a Lambertian source distribution combined with a Henyey-Greenstein phase function provides a higher photon density in the stratum corneum than in the upper dermis layer. Furthermore, it is also shown that the scattering phase function may affect the shape and the magnitude of the Bidirectional Reflectance Distribution Function (BRDF) exhibited at the skin surface.
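
    As a hedged illustration of one ingredient named above, the Henyey-Greenstein phase function can be sampled in closed form inside a Monte Carlo photon-transport loop; the anisotropy value below is an assumption, not a parameter taken from the study.

        # Sketch: sample the scattering-angle cosine from the Henyey-Greenstein
        # phase function, a standard step in Monte Carlo light-transport codes.
        import numpy as np

        def sample_hg_cos_theta(g, rng):
            """Draw cos(theta) from the Henyey-Greenstein distribution with anisotropy g."""
            xi = rng.random()
            if abs(g) < 1e-6:                     # isotropic limit
                return 2.0 * xi - 1.0
            frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
            return (1.0 + g * g - frac * frac) / (2.0 * g)

        rng = np.random.default_rng(1)
        g = 0.9                                    # assumed forward-peaked anisotropy
        samples = np.array([sample_hg_cos_theta(g, rng) for _ in range(100000)])
        print("mean cos(theta) ~", samples.mean())  # should be close to g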

  9. The dislocation distribution function near a crack tip generated by external sources

    International Nuclear Information System (INIS)

    Lung, C.W.; Deng, K.M.

    1988-06-01

    The dislocation distribution function near a crack tip generated by external sources is calculated. It is similar in shape to the curves calculated for the crack-tip emission case, but the quantitative difference is quite large. The image force enlarges the negative dislocation zone but does not change the form of the curve. (author). 10 refs, 3 figs

  10. Distribution of hadron intranuclear cascade for large distance from a source

    International Nuclear Information System (INIS)

    Bibin, V.L.; Kazarnovskij, M.V.; Serezhnikov, S.V.

    1985-01-01

    An analytical solution of the problem of three-component hadron cascade development at large distances from a source is obtained in the framework of a series of simplifying assumptions. It makes it possible to understand the physical mechanisms of the process studied and to obtain approximate asymptotic expressions for the hadron distribution functions

  11. The Space-, Time-, and Energy-distribution of Neutrons from a Pulsed Plane Source

    Energy Technology Data Exchange (ETDEWEB)

    Claesson, Arne

    1962-05-15

    The space-, time- and energy-distribution of neutrons from a pulsed, plane, high energy source in an infinite medium is determined in a diffusion approximation. For simplicity the moderator is first assumed to be hydrogen gas but it is also shown that the method can be used for a moderator of arbitrary mass.

  12. A Predictive Distribution Model for Cooperative Braking System of an Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Hongqiang Guo

    2014-01-01

    Full Text Available A predictive distribution model for a series cooperative braking system of an electric vehicle is proposed, which can solve the real-time problem of optimum braking force distribution. To build the predictive distribution model, three disciplines are first considered: the maximum regenerative energy recovery capability, the maximum generating efficiency and the optimum braking stability. An off-line process optimization stream is then designed, in which the optimal Latin hypercube design (Opt LHD) method and a radial basis function neural network (RBFNN) are utilized. In order to decouple the variables between different disciplines, a concurrent subspace design (CSD) algorithm is suggested. The established predictive distribution model is verified in a dynamic simulation. The off-line optimization results show that the proposed process optimization stream can improve the regenerative energy recovery efficiency and optimize the braking stability simultaneously. Further simulation tests demonstrate that the predictive distribution model can achieve high prediction accuracy and is very beneficial for the cooperative braking system.
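
    A minimal sketch of the two generic building blocks named in this record, Latin hypercube sampling and a radial-basis-function surrogate, using SciPy; the braking objective and feature names are invented placeholders rather than the authors' multidisciplinary model.

        # Sketch: Latin hypercube design plus an RBF surrogate for an offline-optimised map.
        import numpy as np
        from scipy.stats import qmc
        from scipy.interpolate import RBFInterpolator

        def braking_objective(x):
            # Placeholder for an expensive multi-disciplinary evaluation
            # (regeneration efficiency, stability, ...); purely illustrative.
            speed, demand = x[:, 0], x[:, 1]
            return 0.6 * demand + 0.2 * np.sin(3 * speed) * demand

        sampler = qmc.LatinHypercube(d=2, seed=0)
        X = sampler.random(n=100)                 # normalised (speed, braking demand) samples
        y = braking_objective(X)

        surrogate = RBFInterpolator(X, y)         # fast surrogate usable online
        x_query = np.array([[0.5, 0.8]])
        print("predicted regenerative share:", surrogate(x_query))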

  13. A New Method for the 2D DOA Estimation of Coherently Distributed Sources

    Directory of Open Access Journals (Sweden)

    Liang Zhou

    2014-03-01

    Full Text Available The purpose of this paper is to develop a new technique for estimating the two-dimensional (2D) direction-of-arrivals (DOAs) of coherently distributed (CD) sources, which can effectively estimate the central azimuth and central elevation of CD sources at a lower computational cost. Using a special L-shape array, a new approach for parametric estimation of CD sources is proposed. The proposed method is based on two rotational invariance relations under a small angular approximation, and estimates the two rotational matrices which depict these relations using the propagator technique. The central DOA estimates are then obtained by utilizing the primary diagonal elements of the two rotational matrices. Simulation results indicate that the proposed method exhibits a good performance under small angular spread and can be applied to the multisource scenario where different sources may have different angular distribution shapes. Without any peak-finding search or eigendecomposition of the high-dimensional sample covariance matrix, the proposed method has significantly reduced computational cost compared with existing methods, and is thus beneficial to real-time processing and engineering realization. In addition, our approach is also a robust estimator which does not depend on the angular distribution shape of the CD sources.

  14. Establishment of a Practical Approach for Characterizing the Source of Particulates in Water Distribution Systems

    Directory of Open Access Journals (Sweden)

    Seon-Ha Chae

    2016-02-01

    Full Text Available Water quality complaints related to particulate matter and discolored water can be troublesome for water utilities in terms of follow-up investigations and implementation of appropriate actions because particulate matter can enter from a variety of sources; moreover, physicochemical processes can affect the water quality during the purification and transportation processes. The origin of particulates can be attributed to sources such as background organic/inorganic materials from water sources, water treatment plants, water distribution pipelines that have deteriorated, and rehabilitation activities in the water distribution systems. In this study, a practical method is proposed for tracing particulate sources. The method entails collecting information related to hydraulic, water quality, and structural conditions, employing a network flow-path model, and establishing a database of physicochemical properties for tubercles and slimes. The proposed method was implemented within two city water distribution systems that were located in Korea. These applications were conducted to demonstrate the practical applicability of the method for providing solutions to customer complaints. The results of the field studies indicated that the proposed method would be feasible for investigating the sources of particulates and for preparing appropriate action plans for complaints related to particulate matter.

  15. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local Universe

    Energy Technology Data Exchange (ETDEWEB)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene, E-mail: mertsch@nbi.ku.dk, E-mail: mohamed.rameez@nbi.ku.dk, E-mail: tamborra@nbi.ku.dk [Niels Bohr International Academy, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen (Denmark)

    2017-03-01

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between 'warm' spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10⁻⁶ Mpc⁻³ and neutrino luminosity L_ν ≲ 10⁴² erg s⁻¹ (10⁴¹ erg s⁻¹) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring statistically significant direct observation of a point source.

  16. Regional climate model downscaling may improve the prediction of alien plant species distributions

    Science.gov (United States)

    Liu, Shuyan; Liang, Xin-Zhong; Gao, Wei; Stohlgren, Thomas J.

    2014-12-01

    Distributions of invasive species are commonly predicted with species distribution models that build upon the statistical relationships between observed species presence data and climate data. We used field observations, climate station data, and Maximum Entropy species distribution models for 13 invasive plant species in the United States, and then compared the models with inputs from a General Circulation Model (hereafter GCM-based models) and a downscaled Regional Climate Model (hereafter, RCM-based models). We also compared species distributions based on either GCM-based or RCM-based models for the present (1990-1999) to the future (2046-2055). RCM-based species distribution models replicated observed distributions remarkably better than GCM-based models for all invasive species under the current climate. This was shown for the presence locations of the species, and by using four common statistical metrics to compare modeled distributions. For two widespread invasive taxa (Bromus tectorum or cheatgrass, and Tamarix spp. or tamarisk), GCM-based models failed miserably to reproduce observed species distributions. In contrast, RCM-based species distribution models closely matched observations. Future species distributions may be significantly affected by using GCM-based inputs. Because invasive plant species often show high resilience and low rates of local extinction, RCM-based species distribution models may perform better than GCM-based species distribution models for planning containment programs for invasive species.

  17. Using Bayesian Belief Network (BBN) modelling for Rapid Source Term Prediction. RASTEP Phase 1

    International Nuclear Information System (INIS)

    Knochenhauer, M.; Swaling, V.H.; Alfheim, P.

    2012-09-01

    The project is connected to the development of RASTEP, a computerized source term prediction tool aimed at providing a basis for improving off-site emergency management. RASTEP uses Bayesian belief networks (BBN) to model severe accident progression in a nuclear power plant in combination with pre-calculated source terms (i.e., amount, timing, and pathway of released radio-nuclides). The output is a set of possible source terms with associated probabilities. In the NKS project, a number of complex issues associated with the integration of probabilistic and deterministic analyses are addressed. This includes issues related to the method for estimating source terms, signal validation, and sensitivity analysis. One major task within Phase 1 of the project addressed the problem of how to make the source term module flexible enough to give reliable and valid output throughout the accident scenario. Of the alternatives evaluated, it is recommended that RASTEP is connected to a fast running source term prediction code, e.g., MARS, with a possibility of updating source terms based on real-time observations. (Author)

  18. Using Bayesian Belief Network (BBN) modelling for Rapid Source Term Prediction. RASTEP Phase 1

    Energy Technology Data Exchange (ETDEWEB)

    Knochenhauer, M.; Swaling, V.H.; Alfheim, P. [Scandpower AB, Sundbyberg (Sweden)

    2012-09-15

    The project is connected to the development of RASTEP, a computerized source term prediction tool aimed at providing a basis for improving off-site emergency management. RASTEP uses Bayesian belief networks (BBN) to model severe accident progression in a nuclear power plant in combination with pre-calculated source terms (i.e., amount, timing, and pathway of released radio-nuclides). The output is a set of possible source terms with associated probabilities. In the NKS project, a number of complex issues associated with the integration of probabilistic and deterministic analyses are addressed. This includes issues related to the method for estimating source terms, signal validation, and sensitivity analysis. One major task within Phase 1 of the project addressed the problem of how to make the source term module flexible enough to give reliable and valid output throughout the accident scenario. Of the alternatives evaluated, it is recommended that RASTEP is connected to a fast running source term prediction code, e.g., MARS, with a possibility of updating source terms based on real-time observations. (Author)
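
    A toy, hand-enumerated Bayesian update in the spirit of the approach described in the two records above; the two-node structure, the states and the probabilities are invented for illustration and do not correspond to the actual RASTEP networks.

        # Toy sketch: update the probability of a source-term class given one plant observation.
        # Structure, states and numbers are invented purely for illustration.

        # Prior over hypothetical source-term classes.
        prior = {"small_release": 0.7, "large_release": 0.3}

        # Assumed likelihood of observing "containment pressure high" under each class.
        likelihood_high_pressure = {"small_release": 0.2, "large_release": 0.9}

        def posterior(prior, likelihood):
            unnorm = {k: prior[k] * likelihood[k] for k in prior}
            z = sum(unnorm.values())
            return {k: v / z for k, v in unnorm.items()}

        post = posterior(prior, likelihood_high_pressure)
        print(post)   # e.g. {'small_release': ~0.34, 'large_release': ~0.66}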

  19. Uncertainties in predicting species distributions under climate change: a case study using Tetranychus evansi (Acari: Tetranychidae), a widespread agricultural pest.

    Science.gov (United States)

    Meynard, Christine N; Migeon, Alain; Navajas, Maria

    2013-01-01

    Many species are shifting their distributions due to climate change and to increasing international trade that allows dispersal of individuals across the globe. In the case of agricultural pests, such range shifts may heavily impact agriculture. Species distribution modelling may help to predict potential changes in pest distributions. However, these modelling strategies are subject to large uncertainties coming from different sources. Here we used the case of the tomato red spider mite (Tetranychus evansi), an invasive pest that affects some of the most important agricultural crops worldwide, to show how uncertainty may affect forecasts of the potential range of the species. We explored three aspects of uncertainty: (1) species prevalence; (2) modelling method; and (3) variability in environmental responses between mites belonging to two invasive clades of T. evansi. Consensus techniques were used to forecast the potential range of the species under current and two different climate change scenarios for 2080, and variance between model projections were mapped to identify regions of high uncertainty. We revealed large predictive variations linked to all factors, although prevalence had a greater influence than the statistical model once the best modelling strategies were selected. The major areas threatened under current conditions include tropical countries in South America and Africa, and temperate regions in North America, the Mediterranean basin and Australia. Under future scenarios, the threat shifts towards northern Europe and some other temperate regions in the Americas, whereas tropical regions in Africa present a reduced risk. Analysis of niche overlap suggests that the current differential distribution of mites of the two clades of T. evansi can be partially attributed to environmental niche differentiation. Overall this study shows how consensus strategies and analysis of niche overlap can be used jointly to draw conclusions on invasive threat

  20. Uncertainties in predicting species distributions under climate change: a case study using Tetranychus evansi (Acari: Tetranychidae), a widespread agricultural pest.

    Directory of Open Access Journals (Sweden)

    Christine N Meynard

    Full Text Available Many species are shifting their distributions due to climate change and to increasing international trade that allows dispersal of individuals across the globe. In the case of agricultural pests, such range shifts may heavily impact agriculture. Species distribution modelling may help to predict potential changes in pest distributions. However, these modelling strategies are subject to large uncertainties coming from different sources. Here we used the case of the tomato red spider mite (Tetranychus evansi), an invasive pest that affects some of the most important agricultural crops worldwide, to show how uncertainty may affect forecasts of the potential range of the species. We explored three aspects of uncertainty: (1) species prevalence; (2) modelling method; and (3) variability in environmental responses between mites belonging to two invasive clades of T. evansi. Consensus techniques were used to forecast the potential range of the species under current and two different climate change scenarios for 2080, and variance between model projections were mapped to identify regions of high uncertainty. We revealed large predictive variations linked to all factors, although prevalence had a greater influence than the statistical model once the best modelling strategies were selected. The major areas threatened under current conditions include tropical countries in South America and Africa, and temperate regions in North America, the Mediterranean basin and Australia. Under future scenarios, the threat shifts towards northern Europe and some other temperate regions in the Americas, whereas tropical regions in Africa present a reduced risk. Analysis of niche overlap suggests that the current differential distribution of mites of the two clades of T. evansi can be partially attributed to environmental niche differentiation. Overall this study shows how consensus strategies and analysis of niche overlap can be used jointly to draw conclusions on invasive

  1. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Junjie Ma

    2018-02-01

    Full Text Available Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths.

  2. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks.

    Science.gov (United States)

    Ma, Junjie; Meng, Fansheng; Zhou, Yuexi; Wang, Yeyao; Shi, Ping

    2018-02-16

    Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths.
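
    A compact particle swarm optimisation sketch illustrating the outer search loop described in these records; the plume model, objective and constants are placeholders and do not reproduce the Dual-PSO formulation.

        # Sketch: vanilla PSO locating the maximum of a placeholder concentration field.
        import numpy as np

        rng = np.random.default_rng(2)
        true_source = np.array([7.0, 3.0])                 # hypothetical pollution source

        def concentration(p):
            # Placeholder plume model: concentration decays with distance from the source.
            return np.exp(-np.linalg.norm(p - true_source, axis=-1))

        n, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5       # assumed PSO constants
        pos = rng.uniform(0, 10, size=(n, 2))              # mobile-node positions
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), concentration(pos)
        gbest = pbest[np.argmax(pbest_val)]

        for _ in range(iters):
            r1, r2 = rng.random((n, 1)), rng.random((n, 1))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = pos + vel
            val = concentration(pos)
            better = val > pbest_val
            pbest[better], pbest_val[better] = pos[better], val[better]
            gbest = pbest[np.argmax(pbest_val)]

        print("estimated source position:", gbest)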

  3. Radial dose distribution of 192Ir and 137Cs seed sources

    International Nuclear Information System (INIS)

    Thomason, C.; Higgins, P.

    1989-01-01

    The radial dose distributions in water around 192Ir seed sources with both platinum and stainless steel encapsulation have been measured using LiF thermoluminescent dosimeters (TLD) for distances of 1 to 12 cm along the perpendicular bisector of the source to determine the effect of source encapsulation. Similar measurements also have been made around a 137Cs seed source of comparable dimensions. The data were fit to a third order polynomial to obtain an empirical equation for the radial dose factor which then can be used in dosimetry. The coefficients of this equation for each of the three sources are given. The radial dose factor of the stainless steel encapsulated 192Ir and that of the platinum encapsulated 192Ir agree to within 2%. The radial dose distributions measured here for 192Ir with either type of encapsulation and for 137Cs are indistinguishable from those of other authors when considering the uncertainties involved. For clinical dosimetry based on isotropic point or line source models, any of these equations may be used without significantly affecting accuracy

  4. Interpreting predictive maps of disease: highlighting the pitfalls of distribution models in epidemiology

    Directory of Open Access Journals (Sweden)

    Nicola A. Wardrop

    2014-11-01

    Full Text Available The application of spatial modelling to epidemiology has increased significantly over the past decade, delivering enhanced understanding of the environmental and climatic factors affecting disease distributions and providing spatially continuous representations of disease risk (predictive maps). These outputs provide significant information for disease control programmes, allowing spatial targeting and tailored interventions. However, several factors (e.g. sampling protocols or temporal disease spread) can influence predictive mapping outputs. This paper proposes a conceptual framework which defines several scenarios and their potential impact on resulting predictive outputs, using simulated data to provide an exemplar. It is vital that researchers recognise these scenarios and their influence on predictive models and their outputs, as a failure to do so may lead to inaccurate interpretation of predictive maps. As long as these considerations are kept in mind, predictive mapping will continue to contribute significantly to epidemiological research and disease control planning.

  5. Sources, occurrence and predicted aquatic impact of legacy and contemporary pesticides in streams

    International Nuclear Information System (INIS)

    McKnight, Ursula S.; Rasmussen, Jes J.; Kronvang, Brian; Binning, Philip J.; Bjerg, Poul L.

    2015-01-01

    We couple current findings of pesticides in surface and groundwater to the history of pesticide usage, focusing on the potential contribution of legacy pesticides to the predicted ecotoxicological impact on benthic macroinvertebrates in headwater streams. Results suggest that groundwater, in addition to precipitation and surface runoff, is an important source of pesticides (particularly legacy herbicides) entering surface water. In addition to current-use active ingredients, legacy pesticides, metabolites and impurities are important for explaining the estimated total toxicity attributable to pesticides. Sediment-bound insecticides were identified as the primary source for predicted ecotoxicity. Our results support recent studies indicating that highly sorbing chemicals contribute and even drive impacts on aquatic ecosystems. They further indicate that groundwater contaminated by legacy and contemporary pesticides may impact adjoining streams. Stream observations of soluble and sediment-bound pesticides are valuable for understanding the long-term fate of pesticides in aquifers, and should be included in stream monitoring programs. - Highlights: • Findings comprised a range of contemporary and banned legacy pesticides in streams. • Groundwater is a significant pathway for some herbicides entering streams. • Legacy pesticides increased predicted aquatic toxicity by four orders of magnitude. • Sediment-bound insecticides were identified as the primary source for ecotoxicity. • Stream monitoring programs should include legacy pesticides to assess impacts. - Legacy pesticides, particularly sediment-bound insecticides were identified as the primary source for predicted ecotoxicity impacting benthic macroinvertebrates in headwater streams

  6. Memory for Textual Conflicts Predicts Sourcing When Adolescents Read Multiple Expository Texts

    Science.gov (United States)

    Stang Lund, Elisabeth; Bråten, Ivar; Brante, Eva W.; Strømsø, Helge I.

    2017-01-01

    This study investigated whether memory for conflicting information predicted mental representation of source-content links (i.e., who said what) in a sample of 86 Norwegian adolescent readers. Participants read four texts presenting conflicting claims about sun exposure and health. With differences in gender, prior knowledge, and interest…

  7. Predicting plant distribution in a heterogeneous Alpine landscape: does soil matter?

    Science.gov (United States)

    Buri, Aline; Cianfrani, Carmen; Pradervand, Jean-Nicolas; Guisan, Antoine

    2016-04-01

    Topographic and climatic factors are usually used to predict plant distribution because they are known to explain plant presence or absence. Soil properties have been widely shown to influence plant growth and distributions. However, they are rarely taken into account as predictors in plant species distribution models (SDM) of edaphically heterogeneous landscapes; when they are, interpolation techniques are typically used to project soil factors in space. In heterogeneous landscapes such as the Alps, where soil properties change abruptly as a function of environmental conditions over short distances, interpolation techniques require a very large number of samples to be efficient. This is costly and time-consuming, and brings more errors than a predictive approach for an equivalent number of samples. In this study we aimed to assess whether soil properties may be generalized over entire mountainous geographic extents and can improve predictions of plant distributions over traditional topo-climatic predictors. First, we used a predictive approach to map two soil properties based on field measurements in the western Swiss Alps region: soil pH and the ratio of stable isotopes 13C/12C (called δ13CSOM). We used ensemble forecasting techniques combining several predictive algorithms to build models of the geographic variation of both soil properties and projected them over the entire study area. As predictive factors, we employed very high resolution topo-climatic data. In a second step, the output maps from the previous task were used as inputs for regional vegetation models. We added the predicted soil properties to a set of basic topo-climatic predictors known to be important for modelling plant species. We then modelled the distribution of 156 plant species inhabiting the study area. Finally, we compared the quality of the models with and without soil properties as predictors to evaluate their effect on the predictive power of our models

  8. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems.

    Science.gov (United States)

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-03-12

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS receives is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via the rotational invariance technique (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariance relationships are constructed. Then, we extend ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that spectrum peak searching is avoided. Therefore, compared to traditional 2D DOA estimation algorithms, the proposed algorithm imposes significantly lower computational complexity. The intersecting point of two rays, which come from two different directions measured by two uniform rectangle arrays (URA), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm.

  9. Open source machine-learning algorithms for the prediction of optimal cancer drug therapies.

    Science.gov (United States)

    Huang, Cai; Mezencev, Roman; McDonald, John F; Vannberg, Fredrik

    2017-01-01

    Precision medicine is a rapidly growing area of modern medical science and open source machine-learning codes promise to be a critical component for the successful development of standardized and automated analysis of patient data. One important goal of precision cancer medicine is the accurate prediction of optimal drug therapies from the genomic profiles of individual patient tumors. We introduce here an open source software platform that employs a highly versatile support vector machine (SVM) algorithm combined with a standard recursive feature elimination (RFE) approach to predict personalized drug responses from gene expression profiles. Drug specific models were built using gene expression and drug response data from the National Cancer Institute panel of 60 human cancer cell lines (NCI-60). The models are highly accurate in predicting the drug responsiveness of a variety of cancer cell lines including those comprising the recent NCI-DREAM Challenge. We demonstrate that predictive accuracy is optimized when the learning dataset utilizes all probe-set expression values from a diversity of cancer cell types without pre-filtering for genes generally considered to be "drivers" of cancer onset/progression. Application of our models to publicly available ovarian cancer (OC) patient gene expression datasets generated predictions consistent with observed responses previously reported in the literature. By making our algorithm "open source", we hope to facilitate its testing in a variety of cancer types and contexts leading to community-driven improvements and refinements in subsequent applications.

  10. Open source machine-learning algorithms for the prediction of optimal cancer drug therapies.

    Directory of Open Access Journals (Sweden)

    Cai Huang

    Full Text Available Precision medicine is a rapidly growing area of modern medical science and open source machine-learning codes promise to be a critical component for the successful development of standardized and automated analysis of patient data. One important goal of precision cancer medicine is the accurate prediction of optimal drug therapies from the genomic profiles of individual patient tumors. We introduce here an open source software platform that employs a highly versatile support vector machine (SVM) algorithm combined with a standard recursive feature elimination (RFE) approach to predict personalized drug responses from gene expression profiles. Drug specific models were built using gene expression and drug response data from the National Cancer Institute panel of 60 human cancer cell lines (NCI-60). The models are highly accurate in predicting the drug responsiveness of a variety of cancer cell lines including those comprising the recent NCI-DREAM Challenge. We demonstrate that predictive accuracy is optimized when the learning dataset utilizes all probe-set expression values from a diversity of cancer cell types without pre-filtering for genes generally considered to be "drivers" of cancer onset/progression. Application of our models to publicly available ovarian cancer (OC) patient gene expression datasets generated predictions consistent with observed responses previously reported in the literature. By making our algorithm "open source", we hope to facilitate its testing in a variety of cancer types and contexts leading to community-driven improvements and refinements in subsequent applications.
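
    A generic scikit-learn sketch of the SVM-plus-recursive-feature-elimination pattern described in the two records above; random data stand in for expression profiles and drug responses, and nothing here reproduces the published platform.

        # Sketch: linear SVM with recursive feature elimination on placeholder expression data.
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.feature_selection import RFE
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        X = rng.normal(size=(60, 500))        # 60 cell lines x 500 probe sets (placeholder)
        y = X[:, 0] * 0.8 - X[:, 1] * 0.5 + rng.normal(0, 0.1, 60)   # synthetic drug response

        estimator = SVR(kernel="linear")
        selector = RFE(estimator, n_features_to_select=20, step=0.1)
        selector.fit(X, y)

        scores = cross_val_score(SVR(kernel="linear"), X[:, selector.support_], y, cv=5)
        print("selected features:", np.flatnonzero(selector.support_)[:10])
        print("cross-validated R^2:", scores.mean())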

  11. Neural correlates of encoding processes predicting subsequent cued recall and source memory.

    Science.gov (United States)

    Angel, Lucie; Isingrini, Michel; Bouazzaoui, Badiâa; Fay, Séverine

    2013-03-06

    In this experiment, event-related potentials were used to examine whether the neural correlates of encoding processes predicting subsequent successful recall differed from those predicting successful source memory retrieval. During encoding, participants studied lists of words and were instructed to memorize each word and the list in which it occurred. At test, they had to complete stems (the first four letters) with a studied word and then make a judgment of the initial temporal context (i.e. list). Event-related potentials recorded during encoding were segregated according to subsequent memory performance to examine subsequent memory effects (SMEs) reflecting successful cued recall (cued recall SME) and successful source retrieval (source memory SME). Data showed a cued recall SME on parietal electrode sites from 400 to 1200 ms and a late inverted cued recall SME on frontal sites in the 1200-1400 ms period. Moreover, a source memory SME was reported from 400 to 1400 ms on frontal areas. These findings indicate that patterns of encoding-related activity predicting successful recall and source memory are clearly dissociated.

  12. Robust distributed model predictive control of linear systems with structured time-varying uncertainties

    Science.gov (United States)

    Zhang, Langwen; Xie, Wei; Wang, Jingcheng

    2017-11-01

    In this work, synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, are designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into solving a minimisation optimisation problem with linear matrix inequality constraints. An iterative online algorithm with an adjustable maximum number of iterations is proposed to coordinate the distributed controllers to achieve a global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.
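
    A minimal nominal MPC sketch for a single small subsystem, included only to make the receding-horizon idea concrete; the robust min-max/LMI synthesis and the distributed coordination of the paper are not reproduced, and the system matrices and weights are arbitrary.

        # Sketch: nominal finite-horizon MPC for one small linear subsystem (not the robust
        # LMI-based design of the paper). System matrices and weights are arbitrary.
        import numpy as np
        import cvxpy as cp

        A = np.array([[1.0, 0.1], [0.0, 1.0]])
        B = np.array([[0.0], [0.1]])
        T, x0 = 20, np.array([1.0, 0.0])

        x = cp.Variable((2, T + 1))
        u = cp.Variable((1, T))
        cost, constraints = 0, [x[:, 0] == x0]
        for t in range(T):
            cost += cp.sum_squares(x[:, t]) + 0.1 * cp.sum_squares(u[:, t])
            constraints += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t],
                            cp.abs(u[:, t]) <= 1.0]

        cp.Problem(cp.Minimize(cost), constraints).solve()
        print("first control move:", u.value[:, 0])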

  13. Multivariate models for prediction of rheological characteristics of filamentous fermentation broth from the size distribution

    DEFF Research Database (Denmark)

    Petersen, Nanna; Stocks, S.; Gernaey, Krist

    2008-01-01

    fermentations conducted in 550 L pilot scale tanks were characterized with respect to particle size distribution, biomass concentration, and rheological properties. The rheological properties were described using the Herschel-Bulkley model. Estimation of all three parameters in the Herschel-Bulkley model (yield...... in filamentous fermentations. It was therefore chosen to fix this parameter to the average value, thereby significantly decreasing the standard deviation of the estimates of the remaining rheological parameters. Using a PLSR model, a reasonable prediction of apparent viscosity (μ_app), yield stress (τ_y), and consistency index (K) could be made from the size distributions, biomass concentration, and process information. This provides a predictive method with high predictive power for the rheology of fermentation broth, with the advantage over previous models that τ_y and K can be predicted as well as μ...
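
    For reference, the Herschel-Bulkley model relates shear stress to shear rate as τ = τ_y + K·γ̇^n; a hedged curve-fitting sketch with made-up rheometer data is shown below.

        # Sketch: fit the Herschel-Bulkley model tau = tau_y + K * gamma_dot**n
        # to synthetic rheometer data (values are illustrative, not broth measurements).
        import numpy as np
        from scipy.optimize import curve_fit

        def herschel_bulkley(gamma_dot, tau_y, K, n):
            return tau_y + K * gamma_dot ** n

        gamma_dot = np.linspace(0.1, 100.0, 50)                     # shear rate, 1/s
        tau_true = herschel_bulkley(gamma_dot, 2.0, 0.8, 0.5)       # assumed parameters
        tau_meas = tau_true + np.random.default_rng(4).normal(0, 0.1, gamma_dot.size)

        popt, _ = curve_fit(herschel_bulkley, gamma_dot, tau_meas, p0=[1.0, 1.0, 0.5])
        tau_y, K, n = popt
        print(f"tau_y={tau_y:.2f} Pa, K={K:.2f} Pa·s^n, n={n:.2f}")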

  14. Prediction of the neutrons subcritical multiplication using the diffusion hybrid equation with external neutron sources

    Energy Technology Data Exchange (ETDEWEB)

    Costa da Silva, Adilson; Carvalho da Silva, Fernando [COPPE/UFRJ, Programa de Engenharia Nuclear, Caixa Postal 68509, 21941-914, Rio de Janeiro (Brazil); Senra Martinez, Aquilino, E-mail: aquilino@lmp.ufrj.br [COPPE/UFRJ, Programa de Engenharia Nuclear, Caixa Postal 68509, 21941-914, Rio de Janeiro (Brazil)

    2011-07-15

    Highlights: > We proposed a new neutron diffusion hybrid equation with external neutron source. > A coarse mesh finite difference method for the adjoint flux and reactivity calculation was developed. > 1/M curve to predict the criticality condition is used. - Abstract: We used the neutron diffusion hybrid equation, in cartesian geometry with external neutron sources to predict the subcritical multiplication of neutrons in a pressurized water reactor, using a 1/M curve to predict the criticality condition. A Coarse Mesh Finite Difference Method was developed for the adjoint flux calculation and to obtain the reactivity values of the reactor. The results obtained were compared with benchmark values in order to validate the methodology presented in this paper.

  15. Prediction of the neutrons subcritical multiplication using the diffusion hybrid equation with external neutron sources

    International Nuclear Information System (INIS)

    Costa da Silva, Adilson; Carvalho da Silva, Fernando; Senra Martinez, Aquilino

    2011-01-01

    Highlights: → We proposed a new neutron diffusion hybrid equation with external neutron source. → A coarse mesh finite difference method for the adjoint flux and reactivity calculation was developed. → 1/M curve to predict the criticality condition is used. - Abstract: We used the neutron diffusion hybrid equation, in cartesian geometry with external neutron sources to predict the subcritical multiplication of neutrons in a pressurized water reactor, using a 1/M curve to predict the criticality condition. A Coarse Mesh Finite Difference Method was developed for the adjoint flux calculation and to obtain the reactivity values of the reactor. The results obtained were compared with benchmark values in order to validate the methodology presented in this paper.
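
    A small numerical sketch of the 1/M extrapolation mentioned in both records above: as a subcritical system approaches criticality the multiplication M grows, so 1/M extrapolated to zero predicts the critical configuration. The count rates below are invented.

        # Sketch: classic 1/M extrapolation to predict the critical control parameter.
        # Count rates are invented; in practice they come from source-range detectors.
        import numpy as np

        control_param = np.array([0.0, 0.2, 0.4, 0.6, 0.8])    # e.g. normalised rod withdrawal
        count_rate = np.array([100.0, 125.0, 170.0, 260.0, 520.0])

        inv_M = count_rate[0] / count_rate                      # 1/M relative to the first point
        slope, intercept = np.polyfit(control_param, inv_M, 1)  # linear extrapolation
        critical_estimate = -intercept / slope                  # where 1/M -> 0
        print("predicted critical parameter ~", round(critical_estimate, 2))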

  16. Nonparametric Tree-Based Predictive Modeling of Storm Outages on an Electric Distribution Network.

    Science.gov (United States)

    He, Jichao; Wanik, David W; Hartman, Brian M; Anagnostou, Emmanouil N; Astitha, Marina; Frediani, Maria E B

    2017-03-01

    This article compares two nonparametric tree-based models, quantile regression forests (QRF) and Bayesian additive regression trees (BART), for predicting storm outages on an electric distribution network in Connecticut, USA. We evaluated point estimates and prediction intervals of outage predictions for both models using high-resolution weather, infrastructure, and land use data for 89 storm events (including hurricanes, blizzards, and thunderstorms). We found that spatially BART predicted more accurate point estimates than QRF. However, QRF produced better prediction intervals for high spatial resolutions (2-km grid cells and towns), while BART predictions aggregated to coarser resolutions (divisions and service territory) more effectively. We also found that the predictive accuracy was dependent on the season (e.g., tree-leaf condition, storm characteristics), and that the predictions were most accurate for winter storms. Given the merits of each individual model, we suggest that BART and QRF be implemented together to show the complete picture of a storm's potential impact on the electric distribution network, which would allow for a utility to make better decisions about allocating prestorm resources. © 2016 Society for Risk Analysis.
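
    Quantile regression forests and BART have dedicated implementations; as a rough stand-in for the prediction-interval idea discussed above, gradient boosting with a quantile loss in scikit-learn can produce lower and upper bounds, as sketched below with placeholder storm features.

        # Sketch: prediction intervals for storm outages via quantile-loss gradient boosting
        # (a stand-in for QRF/BART; features and data are placeholders).
        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        rng = np.random.default_rng(5)
        X = rng.normal(size=(500, 3))            # e.g. wind gust, rainfall, tree-leaf index
        y = 10 + 5 * X[:, 0] + rng.gamma(2.0, 2.0, 500)   # synthetic outage counts

        lower = GradientBoostingRegressor(loss="quantile", alpha=0.1).fit(X, y)
        upper = GradientBoostingRegressor(loss="quantile", alpha=0.9).fit(X, y)
        median = GradientBoostingRegressor(loss="quantile", alpha=0.5).fit(X, y)

        x_new = X[:5]
        print("point estimate:", median.predict(x_new))
        print("80% interval:", list(zip(lower.predict(x_new), upper.predict(x_new))))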

  17. Model of charge-state distributions for electron cyclotron resonance ion source plasmas

    Directory of Open Access Journals (Sweden)

    D. H. Edgell

    1999-12-01

    Full Text Available A computer model for the ion charge-state distribution (CSD) in an electron cyclotron resonance ion source (ECRIS) plasma is presented that incorporates non-Maxwellian distribution functions, multiple atomic species, and ion confinement due to the ambipolar potential well that arises from confinement of the electron cyclotron resonance (ECR) heated electrons. Atomic processes incorporated into the model include multiple ionization and multiple charge exchange with rate coefficients calculated for non-Maxwellian electron distributions. The electron distribution function is calculated using a Fokker-Planck code with an ECR heating term. This eliminates the electron temperature as an arbitrary user input. The model produces results that are a good match to CSD data from the ANL-ECRII ECRIS. Extending the model to 1D axial will also allow the model to determine the plasma and electrostatic potential profiles, further eliminating arbitrary user input to the model.

  18. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, which is implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for distributed lossless compression in order to improve the quality of side information. The optimal quantization step is determined according to the restriction of correct DSC decoding, which makes the proposed algorithm achieve near-lossless compression. Moreover, an effective rate distortion algorithm is introduced for the proposed algorithm to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.

  19. A practical two-way system of quantum key distribution with untrusted source

    International Nuclear Information System (INIS)

    Chen Ming-Juan; Liu Xiang

    2011-01-01

    The most severe problem of a two-way 'plug-and-play' (p and p) quantum key distribution system is that the source can be controlled by the eavesdropper. This kind of source is defined as an "untrusted source". This paper discusses the effects of the fluctuation of internal transmittance on the final key generation rate and the transmission distance. The security of the standard BB84 protocol, the one-decoy state protocol, and the weak+vacuum decoy state protocol, with untrusted sources and the fluctuation of internal transmittance, is studied. It is shown that the one-decoy state is sensitive to the statistical fluctuation but the weak+vacuum decoy state is only slightly affected by the fluctuation. It is also shown that both the maximum secure transmission distance and the final key generation rate are reduced when Alice's laboratory transmittance fluctuation is considered. (general)

  20. Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources

    Directory of Open Access Journals (Sweden)

    Xiang Gao

    2016-07-01

    Full Text Available This paper addresses the problem of mapping odor distribution derived from a chemical source using multi-sensor integration and reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To solve both of the above challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors, and combine sensors’ data at different positions. Initially, a multi-sensor integration method, together with the path of airflow was used to map the pattern of odor particle movement. Then, more sensors are introduced at specific regions to determine the probable location of the odor source. Finally, the results of odor source location simulation and a real experiment are presented.

  1. Prediction of vertical distribution and ambient development temperature of Baltic cod, Gadus morhua L., eggs

    DEFF Research Database (Denmark)

    Wieland, Kai; Jarre, Astrid

    1997-01-01

    An artificial neural network (ANN) model was established to predict the vertical distribution of Baltic cod eggs. Data from vertical distribution sampling in the Bornholm Basin over the period 1986-1995 were used to train and test the network, while data sets from sampling in 1996 were used...... for validation. The model explained 82% of the variance between observed and predicted relative frequencies of occurrence of the eggs in relation to salinity, temperature and oxygen concentration. The ANN fitted all observations satisfactorily except for one sampling date, where an exceptional hydrographic...
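
    How such an ANN could be set up is sketched below with synthetic hydrographic data and a small scikit-learn network; this is not the original model or the Bornholm Basin data, and the response surface is an assumption made only for illustration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic "observations": salinity (psu), temperature (deg C), oxygen (ml/l).
n = 500
X = np.column_stack([
    rng.uniform(7, 20, n),      # salinity
    rng.uniform(2, 10, n),      # temperature
    rng.uniform(0, 9, n),       # oxygen
])
# Assumed response: relative egg frequency peaks near a salinity of ~14.5 psu
# provided the oxygen concentration is sufficient.
y = np.exp(-0.5 * ((X[:, 0] - 14.5) / 2.0) ** 2) * (X[:, 2] > 2)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
ann.fit(X_tr, y_tr)
print(f"explained variance on held-out data: R^2 = {ann.score(X_te, y_te):.2f}")
```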

  2. The effects of model and data complexity on predictions from species distributions models

    DEFF Research Database (Denmark)

    García-Callejas, David; Bastos, Miguel

    2016-01-01

    How complex a model needs to be to provide useful predictions is a matter of continuing debate across the environmental sciences. In the species distribution modelling literature, studies have demonstrated that more complex models tend to provide better fits. However, studies have also shown...... that predictive performance does not always increase with complexity. Testing of species distribution models is challenging because independent data for testing are often lacking, but a more general problem is that model complexity has never been formally described in such studies. Here, we systematically...

  3. Enhanced effects of biotic interactions on predicting multispecies spatial distribution of submerged macrophytes after eutrophication.

    Science.gov (United States)

    Song, Kun; Cui, Yichong; Zhang, Xijin; Pan, Yingji; Xu, Junli; Xu, Kaiqin; Da, Liangjun

    2017-10-01

    Water eutrophication creates unfavorable environmental conditions for submerged macrophytes. In these situations, biotic interactions may be particularly important for explaining and predicting the occurrence of submerged macrophytes. Here, we evaluate the role of biotic interactions in predicting the spatial occurrence of submerged macrophytes in 1959 and 2009 in Dianshan Lake in eastern China, which has been eutrophic since the 1980s. For the four common species that occurred in both 1959 and 2009, null species distribution models based on abiotic variables and full models based on both abiotic and biotic variables were developed using generalized linear models (GLM) and boosted regression trees (BRT) to determine whether the biotic variables improved model performance. Hierarchical Bayesian joint species distribution models capable of detecting pairwise biotic interactions were established for each species in both periods to evaluate changes in the biotic interactions. In most of the GLM and BRT models, the full models performed better than the null models in predicting species presence/absence, and the relative importance of the biotic variables in the full models increased from less than 50% in 1959 to more than 50% in 2009 for each species. Moreover, the co-occurrence correlation of each pairwise species interaction was higher in 2009 than in 1959. The findings suggest that biotic interactions, which tend to be positive, play a more important role in the spatial distribution of multispecies assemblages of macrophytes and should be included in prediction models to improve accuracy when forecasting macrophyte distributions under eutrophication stress.
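
    The null-versus-full comparison can be sketched as follows with synthetic presence/absence data; plain logistic regression stands in for the paper's GLM/BRT models, and the predictors and effect sizes are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 800

# Abiotic predictors (e.g. water depth, transparency) and one biotic predictor
# (presence of a companion macrophyte species); all values are synthetic.
depth = rng.uniform(0.5, 4.0, n)
secchi = rng.uniform(0.2, 2.5, n)
companion = rng.binomial(1, 0.4, n)
logit = -1.5 - 0.8 * depth + 1.2 * secchi + 1.8 * companion
presence = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_null = np.column_stack([depth, secchi])               # abiotic only
X_full = np.column_stack([depth, secchi, companion])    # abiotic + biotic
idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=0)

for name, X in [("null (abiotic)", X_null), ("full (abiotic + biotic)", X_full)]:
    model = LogisticRegression(max_iter=1000).fit(X[idx_tr], presence[idx_tr])
    auc = roc_auc_score(presence[idx_te], model.predict_proba(X[idx_te])[:, 1])
    print(f"{name:25s} AUC = {auc:.3f}")
```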

  4. The Galactic Distribution of Massive Star Formation from the Red MSX Source Survey

    Science.gov (United States)

    Figura, Charles C.; Urquhart, J. S.

    2013-01-01

    Massive stars inject enormous amounts of energy into their environments in the form of UV radiation and molecular outflows, creating HII regions and enriching local chemistry. These effects provide feedback mechanisms that aid in regulating star formation in the region, and may trigger the formation of subsequent generations of stars. Understanding the mechanics of massive star formation presents an important key to understanding this process and its role in shaping the dynamics of galactic structure. The Red MSX Source (RMS) survey is a multi-wavelength investigation of ~1200 massive young stellar objects (MYSO) and ultra-compact HII (UCHII) regions identified from a sample of colour-selected sources from the Midcourse Space Experiment (MSX) point source catalog and Two Micron All Sky Survey. We present a study of over 900 MYSO and UCHII regions investigated by the RMS survey. We review the methods used to determine distances, and investigate the radial galactocentric distribution of these sources in the context of the observed structure of the Galaxy. The distribution of MYSO and UCHII regions is found to be spatially correlated with the spiral arms and galactic bar. We examine the radial distribution of MYSOs and UCHII regions and find variations in the star formation rate between the inner and outer Galaxy and discuss the implications for star formation throughout the galactic disc.

  5. Incorporation of lysosomal sequestration in the mechanistic model for prediction of tissue distribution of basic drugs.

    Science.gov (United States)

    Assmus, Frauke; Houston, J Brian; Galetin, Aleksandra

    2017-11-15

    The prediction of tissue-to-plasma water partition coefficients (Kpu) from in vitro and in silico data using the tissue-composition based model (Rodgers & Rowland, J Pharm Sci. 2005, 94(6):1237-48.) is well established. However, distribution of basic drugs, in particular into lysosome-rich lung tissue, tends to be under-predicted by this approach. The aim of this study was to develop an extended mechanistic model for the prediction of Kpu which accounts for lysosomal sequestration and the contribution of different cell types in the tissue of interest. The extended model is based on compound-specific physicochemical properties and tissue composition data to describe drug ionization, distribution into tissue water and drug binding to neutral lipids, neutral phospholipids and acidic phospholipids in tissues, including lysosomes. Physiological data on the types of cells contributing to lung, kidney and liver, their lysosomal content and lysosomal pH were collated from the literature. The predictive power of the extended mechanistic model was evaluated using a dataset of 28 basic drugs (pKa ≥ 7.8, 17 β-blockers, 11 structurally diverse drugs) for which experimentally determined Kpu data in rat tissue have been reported. Accounting for the lysosomal sequestration in the extended mechanistic model improved the accuracy of Kpu predictions in lung compared to the original Rodgers model (56% drugs within 2-fold or 88% within 3-fold of observed values). Reduction in the extent of Kpu under-prediction was also evident in liver and kidney. However, consideration of lysosomal sequestration increased the occurrence of over-predictions, yielding overall comparable model performances for kidney and liver, with 68% and 54% of Kpu values within 2-fold error, respectively. High lysosomal concentration ratios relative to cytosol (>1000-fold) were predicted for the drugs investigated; the extent differed depending on the lysosomal pH and concentration of acidic phospholipids among
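
    The lysosomal term in such a model rests on pH partitioning: if only the neutral form of a monoprotic base crosses membranes, the total (ionized plus neutral) concentration ratio between a lysosome and the cytosol follows from the Henderson-Hasselbalch relation. The sketch below uses illustrative pKa and pH values and ignores the phospholipid-binding terms of the full tissue-composition model.

```python
def lysosome_to_cytosol_ratio(pka: float, ph_lys: float = 5.0, ph_cyt: float = 7.2) -> float:
    """Total-concentration ratio for a monoprotic base under pH partitioning,
    assuming only the un-ionized species equilibrates across the membrane."""
    return (1 + 10 ** (pka - ph_lys)) / (1 + 10 ** (pka - ph_cyt))

# Stronger bases are trapped more effectively in the acidic lysosomal compartment
# (the pH values above are assumed; binding to acidic phospholipids is not included).
for pka in (7.8, 9.0, 10.0):
    print(f"pKa {pka:4.1f}: lysosome/cytosol ratio ~ {lysosome_to_cytosol_ratio(pka):.0f}")
```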

  6. Mathematical model of heat transfer to predict distribution of hardness through the Jominy bar

    International Nuclear Information System (INIS)

    Lopez, E.; Hernandez, J. B.; Solorio, G.; Vergara, H. J.; Vazquez, O.; Garnica, F.

    2013-01-01

    The heat transfer coefficient at the bottom surface of a Jominy end-quench specimen was estimated by solving the inverse heat conduction problem. A mathematical model based on the finite-difference method was developed to predict thermal paths and the volume fractions of transformed phases. The model was implemented in the commercial package Microsoft Visual Basic 6. The calculated thermal paths and final phase distribution were used to evaluate the hardness distribution along an AISI 4140 Jominy bar. (Author)
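
    The finite-difference core of such a model can be sketched as a 1-D explicit scheme along the bar axis with a convective (quenched) boundary at the bottom face. The material properties, heat-transfer coefficient and bar dimensions below are illustrative assumptions, not the values identified in the paper, and radial surface losses are ignored.

```python
import numpy as np

# 1-D explicit finite-difference cooling of a Jominy end-quench bar (illustrative values).
L = 0.10                                  # bar length, m
n = 101                                   # axial nodes
dx = L / (n - 1)
k, rho, cp = 40.0, 7800.0, 600.0          # W/m/K, kg/m^3, J/kg/K (assumed steel-like)
alpha = k / (rho * cp)
h = 10000.0                               # quench heat-transfer coefficient, W/m^2/K (assumed)
T_water, T0 = 25.0, 850.0                 # quenchant and initial temperatures, deg C

dt = 0.4 * dx * dx / alpha                # stable explicit time step
T = np.full(n, T0)
for _ in range(int(60.0 / dt)):           # simulate 60 s of cooling
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    # Convective boundary at the quenched end (node 0); insulated far end.
    Tn[0] = T[0] + dt / (rho * cp * dx) * (h * (T_water - T[0]) + k * (T[1] - T[0]) / dx)
    Tn[-1] = Tn[-2]
    T = Tn

for x_mm in (1.5, 10.0, 25.0, 50.0):
    i = int(round(x_mm / 1000.0 / dx))
    print(f"{x_mm:5.1f} mm from the quenched end: T = {T[i]:6.1f} deg C after 60 s")
```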

  7. Validation of model predictions of pore-scale fluid distributions during two-phase flow

    Science.gov (United States)

    Bultreys, Tom; Lin, Qingyang; Gao, Ying; Raeini, Ali Q.; AlRatrout, Ahmed; Bijeljic, Branko; Blunt, Martin J.

    2018-05-01

    Pore-scale two-phase flow modeling is an important technology to study a rock's relative permeability behavior. To investigate if these models are predictive, the calculated pore-scale fluid distributions which determine the relative permeability need to be validated. In this work, we introduce a methodology to quantitatively compare models to experimental fluid distributions in flow experiments visualized with microcomputed tomography. First, we analyzed five repeated drainage-imbibition experiments on a single sample. In these experiments, the exact fluid distributions were not fully repeatable on a pore-by-pore basis, while the global properties of the fluid distribution were. Then two fractional flow experiments were used to validate a quasistatic pore network model. The model correctly predicted the fluid present in more than 75% of pores and throats in drainage and imbibition. To quantify what this means for the relevant global properties of the fluid distribution, we compare the main flow paths and the connectivity across the different pore sizes in the modeled and experimental fluid distributions. These essential topology characteristics matched well for drainage simulations, but not for imbibition. This suggests that the pore-filling rules in the network model we used need to be improved to make reliable predictions of imbibition. The presented analysis illustrates the potential of our methodology to systematically and robustly test two-phase flow models to aid in model development and calibration.

  8. An efficient central DOA tracking algorithm for multiple incoherently distributed sources

    Science.gov (United States)

    Hassen, Sonia Ben; Samet, Abdelaziz

    2015-12-01

    In this paper, we develop a new tracking method for the direction-of-arrival (DOA) parameters of multiple incoherently distributed (ID) sources. The new approach is based on a simple covariance-fitting optimization technique that exploits the central and non-central moments of the source angular power densities to estimate the central DOAs. The current estimates are treated as measurements provided to a Kalman filter that models the dynamics of the directional changes of the moving sources. The covariance-fitting-based algorithm and Kalman filtering theory are then combined to formulate an adaptive tracking algorithm. Our algorithm is compared to the fast approximated power iteration-total least square-estimation of signal parameters via rotational invariance technique (FAPI-TLS-ESPRIT) algorithm, which uses the TLS-ESPRIT method and subspace updating via the FAPI algorithm. It is shown that the proposed algorithm offers excellent DOA tracking performance and outperforms the FAPI-TLS-ESPRIT method, especially at low signal-to-noise ratio (SNR) values. Moreover, the performance of both methods improves as the SNR increases, and this improvement is more pronounced for the FAPI-TLS-ESPRIT method. However, their performance degrades when the number of sources increases. It is also shown that our method depends on the form of the angular distribution function when tracking the central DOAs. Finally, it is shown that the more widely the sources are spaced, the more accurately the proposed method tracks the DOAs.
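
    The Kalman stage of such a tracker can be sketched in a few lines: a constant-velocity state model for the central DOA, with the covariance-fitting estimate entering as a noisy angle measurement. The source motion, noise levels and measurement simulation below are illustrative assumptions, not the algorithm of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model for the central DOA
H = np.array([[1.0, 0.0]])              # only the central DOA is measured
Q = 0.01 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[4.0]])                   # variance of the covariance-fitting estimate (assumed)

x = np.array([0.0, 0.0])                # state: [DOA (deg), DOA rate (deg/step)]
P = 10.0 * np.eye(2)

true_doa = 10.0
for k in range(30):
    true_doa += 0.5                                 # the source drifts by 0.5 deg per step
    z = true_doa + rng.normal(0.0, 2.0)             # noisy central-DOA "measurement"
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    if k % 5 == 0:
        print(f"step {k:2d}: true {true_doa:6.2f} deg, tracked {x[0]:6.2f} deg")
```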

  9. Age-related schema reliance of judgments of learning in predicting source memory.

    Science.gov (United States)

    Shi, Liang-Zi; Tang, Wei-Hai; Liu, Xi-Ping

    2012-01-01

    Source memory refers to the mental processes of encoding the origin of information and making attributions to it. We investigated schematic effects on the source attributions of younger and older adults for different schema-based item types, and their use of schemas in judgments of learning (JOLs) when estimating source memory. Participants studied statements presented by two speakers introduced as a doctor or a lawyer: those in the schema-after-encoding condition were informed of the speakers' occupations only before retrieval, while those in the schema-before-encoding condition were given the schematic information prior to study. Immediately after learning each item, participants judged the likelihood that it would later be correctly attributed to its original source. In the test, they completed a source attribution task. The results showed a double-edged effect of schemas: schema reliance improved source memory for schema-consistent items while impairing it for schema-inconsistent items, even when the schematic information was presented prior to encoding. Compared with younger adults, older adults benefited more from schema-based compensatory mechanisms. Both younger and older adults could base JOLs on schematic information provided before encoding, and these schema-based JOLs predicted source memory more accurately than JOLs made without schema support. However, even in the schema-after-encoding condition, older adults were able to make metacognitive judgments as accurately as younger adults did, though they showed considerable impairment in source memory itself.

  10. Microscopic prediction of speech intelligibility in spatially distributed speech-shaped noise for normal-hearing listeners.

    Science.gov (United States)

    Geravanchizadeh, Masoud; Fallah, Ali

    2015-12-01

    A binaural and psychoacoustically motivated intelligibility model, based on a well-known monaural microscopic model, is proposed. This model simulates a phoneme recognition task in the presence of spatially distributed speech-shaped noise in anechoic scenarios. In the proposed model, binaural advantage effects are considered by generating a feature vector for a dynamic-time-warping speech recognizer. This vector consists of three subvectors incorporating two monaural subvectors to model the better-ear hearing, and a binaural subvector to simulate the binaural unmasking effect. The binaural unit of the model is based on equalization-cancellation theory. This model operates blindly, which means separate recordings of speech and noise are not required for the predictions. Speech intelligibility tests were conducted with 12 normal-hearing listeners by collecting speech reception thresholds (SRTs) in the presence of single and multiple sources of speech-shaped noise. The comparison of the model predictions with the measured binaural SRTs, and with the predictions of a macroscopic binaural model called extended equalization-cancellation, shows that this approach predicts the intelligibility in anechoic scenarios with good precision. The squared correlation coefficient (r²) and the mean absolute error between the model predictions and the measurements are 0.98 and 0.62 dB, respectively.

  11. Prediction future asset price which is non-concordant with the historical distribution

    Science.gov (United States)

    Seong, Ng Yew; Hin, Pooi Ah

    2015-12-01

    This paper attempts to predict the major characteristics of a future asset price that is non-concordant with the distribution estimated from today's price and the prices on a large number of previous days. The three major characteristics of the i-th non-concordant asset price are: the length of the interval between the occurrence times of the previous and the present non-concordant asset prices; an indicator taking the value -1 or 1 according to whether the non-concordant price is extremely small or extremely large; and the degree of non-concordance, given by the negative logarithm of the probability of the left or right tail whose end point is the observed future price. The vector of the three major characteristics of the next non-concordant price is modelled as dependent on the vectors corresponding to the present and the l - 1 previous non-concordant prices via a 3-dimensional conditional distribution derived from a 3(l + 1)-dimensional power-normal mixture distribution. The marginal distribution of each of the three major characteristics can then be derived from the conditional distribution. The mean of the j-th marginal distribution is an estimate of the j-th characteristic of the next non-concordant price, while the 100(α/2)% and 100(1 - α/2)% points of the j-th marginal distribution can be used to form a prediction interval for that characteristic. The performance measures of these estimates and prediction intervals indicate that the fitted conditional distribution is satisfactory. Thus, incorporating the distribution of the characteristics of the next non-concordant price into the asset price model has good potential to yield a more realistic model.

  12. Post-quantum attacks on key distribution schemes in the presence of weakly stochastic sources

    International Nuclear Information System (INIS)

    Al–Safi, S W; Wilmott, C M

    2015-01-01

    It has been established that the security of quantum key distribution protocols can be severely compromised if an eavesdropper is permitted even a very limited knowledge of the random sources used by the communicating parties. While such knowledge should always be expected under realistic experimental conditions, the result itself opened a new line of research to fully account for real-world weak-randomness threats to quantum cryptography. Here we expand on this idea by describing a key distribution scheme that is provably secure against general attacks by a post-quantum adversary. We then discuss possible security consequences for such schemes under the assumption of weak randomness. (paper)

  13. Study and Analysis of an Intelligent Microgrid Energy Management Solution with Distributed Energy Sources

    Directory of Open Access Journals (Sweden)

    Swaminathan Ganesan

    2017-09-01

    Full Text Available In this paper, a robust energy management solution that facilitates the optimum and economic control of energy flows throughout a microgrid network is proposed. Renewable energy sources, whose penetration is increasing, are highly intermittent in nature; the proposed solution demonstrates highly efficient energy management in their presence. The study enables precise management of power flows by forecasting renewable energy generation, estimating the energy available in the storage batteries, and invoking the appropriate mode of operation based on the load demand, so as to achieve efficient and economic operation. The predefined modes of operation are derived from an expert rule set and schedule the load and the distributed energy sources along with the utility grid.

  14. Calculating method for confinement time and charge distribution of ions in electron cyclotron resonance sources

    International Nuclear Information System (INIS)

    Dougar-Jabon, V.D.; Umnov, A.M.; Kutner, V.B.

    1996-01-01

    It is common knowledge that the electrostatic potential well in the core plasma of electron cyclotron resonance sources exerts strict control over the generation of ions in high charge states. This work aims to find the dependence of ion lifetime on charge state in the core region and to develop a numerical model of the ion charge distribution, not only for the core plasma but for extracted beams as well. The calculated data are in good agreement with the experimental results on the charge distributions and current magnitudes of beams extracted from the 14 GHz DECRIS source. copyright 1996 American Institute of Physics

  15. Impacts of Spatio-Variability of Source Morphology on Field-Scale Predictions of Subsurface Contaminant Transport

    National Research Council Canada - National Science Library

    Hatfield, Kirk

    1998-01-01

    ... (organic immiscible liquids distribution and composition) and aquifer properties on predicting solute transport in saturated groundwater systems contaminated with residual Organic Immiscible Liquids (OIL's...

  16. Strategies for satellite-based monitoring of CO2 from distributed area and point sources

    Science.gov (United States)

    Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David

    2014-05-01

    Atmospheric CO2 budgets are controlled by the strengths, as well as the spatial and temporal variabilities of CO2 sources and sinks. Natural CO2 sources and sinks are dominated by the vast areas of the oceans and the terrestrial biosphere. In contrast, anthropogenic and geogenic CO2 sources are dominated by distributed area and point sources, which may constitute as much as 70% of anthropogenic (e.g., Duren & Miller, 2012), and over 80% of geogenic emissions (Burton et al., 2013). Comprehensive assessments of CO2 budgets necessitate robust and highly accurate satellite remote sensing strategies that address the competing and often conflicting requirements for sampling over disparate space and time scales. Spatial variability: The spatial distribution of anthropogenic sources is dominated by patterns of production, storage, transport and use. In contrast, geogenic variability is almost entirely controlled by endogenic geological processes, except where surface gas permeability is modulated by soil moisture. Satellite remote sensing solutions will thus have to vary greatly in spatial coverage and resolution to address distributed area sources and point sources alike. Temporal variability: While biogenic sources are dominated by diurnal and seasonal patterns, anthropogenic sources fluctuate over a greater variety of time scales from diurnal, weekly and seasonal cycles, driven by both economic and climatic factors. Geogenic sources typically vary in time scales of days to months (geogenic sources sensu stricto are not fossil fuels but volcanoes, hydrothermal and metamorphic sources). Current ground-based monitoring networks for anthropogenic and geogenic sources record data on minute- to weekly temporal scales. Satellite remote sensing solutions would have to capture temporal variability through revisit frequency or point-and-stare strategies. Space-based remote sensing offers the potential of global coverage by a single sensor. However, no single combination of orbit

  17. Novel Radiobiological Gamma Index for Evaluation of 3-Dimensional Predicted Dose Distribution

    Energy Technology Data Exchange (ETDEWEB)

    Sumida, Iori, E-mail: sumida@radonc.med.osaka-u.ac.jp [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Osaka (Japan); Yamaguchi, Hajime; Kizaki, Hisao; Aboshi, Keiko; Tsujii, Mari; Yoshikawa, Nobuhiko; Yamada, Yuji [Department of Radiation Oncology, NTT West Osaka Hospital, Osaka (Japan); Suzuki, Osamu; Seo, Yuji [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Osaka (Japan); Isohashi, Fumiaki [Department of Radiation Oncology, NTT West Osaka Hospital, Osaka (Japan); Yoshioka, Yasuo [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Osaka (Japan); Ogawa, Kazuhiko [Department of Radiation Oncology, NTT West Osaka Hospital, Osaka (Japan)

    2015-07-15

    Purpose: To propose a gamma index-based dose evaluation index that integrates the radiobiological parameters of tumor control probability (TCP) and normal tissue complication probability (NTCP). Methods and Materials: Fifteen prostate and head and neck (H&N) cancer patients received intensity modulated radiation therapy. Before treatment, patient-specific quality assurance was conducted via beam-by-beam analysis, and beam-specific dose error distributions were generated. The predicted 3-dimensional (3D) dose distribution was calculated by back-projection of the relative dose error distribution per beam. A 3D gamma analysis of different organs (prostate: clinical [CTV] and planned target volumes [PTV], rectum, bladder, femoral heads; H&N: gross tumor volume [GTV], CTV, spinal cord, brain stem, both parotids) was performed using predicted and planned dose distributions under a 2%/2 mm tolerance, and the physical gamma passing rate was calculated. TCP and NTCP values were calculated for voxels with physical gamma indices (PGI) >1. We propose a new radiobiological gamma index (RGI) to quantify the radiobiological effects of TCP and NTCP and calculate radiobiological gamma passing rates. Results: The mean RGI gamma passing rates for prostate cases were significantly different compared with those of PGI (P<.03–.001). The mean RGI gamma passing rates for H&N cases (except for GTV) were significantly different compared with those of PGI (P<.001). Differences in gamma passing rates between PGI and RGI were due to dose differences between the planned and predicted dose distributions. The radiobiological gamma distribution was visualized to identify areas where the dose was radiobiologically important. Conclusions: RGI was proposed to integrate radiobiological effects into PGI. This index would assist physicians and medical physicists not only in physical evaluations of treatment delivery accuracy, but also in clinical evaluations of the predicted dose distribution.
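
    The physical gamma index that the proposed radiobiological index builds on can be sketched for a one-dimensional dose profile as follows. This is a standard global 2%/2 mm gamma on synthetic doses; the TCP/NTCP weighting introduced in the paper is not reproduced here.

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, x, dose_tol=0.02, dist_tol=2.0):
    """Global 1-D gamma index: for each reference point, the minimum combined
    dose-difference / distance-to-agreement metric over all evaluated points."""
    d_norm = dose_tol * dose_ref.max()
    gammas = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dd = (dose_eval - di) / d_norm
        dx = (x - xi) / dist_tol
        gammas[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return gammas

# Synthetic planned and predicted profiles on a 1 mm grid (illustrative only).
x = np.arange(0.0, 100.0, 1.0)                              # position, mm
planned = 100.0 / (1 + np.exp((x - 60.0) / 3.0))            # sigmoid field edge
predicted = 1.02 * 100.0 / (1 + np.exp((x - 61.0) / 3.0))   # ~2% and ~1 mm off

g = gamma_1d(planned, predicted, x)
print(f"gamma passing rate (2%/2 mm): {100.0 * np.mean(g <= 1.0):.1f}%")
```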

  18. What are the most crucial soil factors for predicting the distribution of alpine plant species?

    Science.gov (United States)

    Buri, A.; Pinto-Figueroa, E.; Yashiro, E.; Guisan, A.

    2017-12-01

    Species distribution models (SDMs) are now commonly used to predict, in space and time, the distributions of organisms living in the critical zone. The realized environmental niche concept behind the development of SDMs implies that many environmental factors must be accounted for simultaneously to predict species distributions. Climatic and topographic factors are usually included first, whereas soil factors are frequently neglected, mainly because of the paucity of soil information available spatially and temporally. Furthermore, among existing studies, most included soil pH only, or few other soil parameters. In this study we aimed to identify which soil factors are most crucial for explaining alpine plant distributions and, among those identified, which ones further improve the predictive power of plant SDMs. To test the relative importance of the soil factors, we built plant SDMs using as predictors 52 measured soil properties of various types, such as organic/inorganic compounds, chemical/physical properties, water-related variables, mineral composition and grain-size distribution. We added them separately to a standard set of topo-climatic predictors (temperature, slope, solar radiation and topographic position). We used ensemble forecasting techniques combining several predictive algorithms to model the distribution of 116 plant species over 250 sites in the Swiss Alps. We recorded the variable importance for each model and compared the quality of models including each soil property (one at a time) as a predictor to models with only topo-climatic predictors. Results show that 46% of the soil properties tested became the second most important variable, after air temperature, in explaining the spatial distribution of alpine plant species. Moreover, we also found that the addition of certain soil factors, such as bulk soil water density, could improve the quality of some plant species models by over 80%. We confirm that soil p

  19. The P1-approximation for the Distribution of Neutrons from a Pulsed Source in Hydrogen

    International Nuclear Information System (INIS)

    Claesson, A.

    1963-12-01

    The asymptotic distribution of neutrons from a pulsed, high energy source in an infinite moderator has been obtained earlier in a 'diffusion' approximation. In that paper the cross section was assumed to be constant over the whole energy region and the time derivative of the first moment was disregarded. Here, first, an analytic expression is obtained for the density in a P1-approximation. However, the result is very complicated, and it is shown that an asymptotic solution can be found in a simpler way. By taking into account the low hydrogen scattering cross section at the source energy it follows that the space dependence of the distribution is less than that obtained earlier. The importance of keeping the time derivative of the first moment is further shown in a perturbation approximation.

  20. Medial temporal lobe reinstatement of content-specific details predicts source memory

    Science.gov (United States)

    Liang, Jackson C.; Preston, Alison R.

    2016-01-01

    Leading theories propose that when remembering past events, medial temporal lobe (MTL) structures reinstate the neural patterns that were active when those events were initially encoded. Accurate reinstatement is hypothesized to support detailed recollection of memories, including their source. While several studies have linked cortical reinstatement to successful retrieval, indexing reinstatement within the MTL network and its relationship to memory performance has proved challenging. Here, we addressed this gap in knowledge by having participants perform an incidental encoding task, during which they visualized people, places, and objects in response to adjective cues. During a surprise memory test, participants saw studied and novel adjectives and indicated the imagery task they performed for each adjective. A multivariate pattern classifier was trained to discriminate the imagery tasks based on functional magnetic resonance imaging (fMRI) responses from hippocampus and MTL cortex at encoding. The classifier was then tested on MTL patterns during the source memory task. We found that MTL encoding patterns were reinstated during successful source retrieval. Moreover, when participants made source misattributions, errors were predicted by reinstatement of incorrect source content in MTL cortex. We further observed a gradient of content-specific reinstatement along the anterior-posterior axis of hippocampus and MTL cortex. Within anterior hippocampus, we found that reinstatement of person content was related to source memory accuracy, whereas reinstatement of place information across the entire hippocampal axis predicted correct source judgments. Content-specific reinstatement was also graded across MTL cortex, with PRc patterns evincing reactivation of people and more posterior regions, including PHc, showing evidence for reinstatement of places and objects. Collectively, these findings provide key evidence that source recollection relies on reinstatement of past

  1. Temporal-spatial distribution of non-point source pollution in a drinking water source reservoir watershed based on SWAT

    Directory of Open Access Journals (Sweden)

    M. Wang

    2015-05-01

    Full Text Available The conservation of drinking water source reservoirs is closely related to regional economic development and people's livelihoods. Research on the non-point source pollution characteristics of their watersheds is crucial for reservoir security. The Tang Pu Reservoir watershed was selected as the study area. A non-point source pollution model of the Tang Pu Reservoir was established based on the SWAT (Soil and Water Assessment Tool) model. The model was calibrated and used to analyse the temporal-spatial distribution patterns of total nitrogen (TN) and total phosphorus (TP). The results showed that the losses of TN and TP in the reservoir watershed were related to precipitation in the flood season, and the annual changes showed an "M" shape. The contributions to the TN and TP losses were 84.5% and 85.3% in high-flow years, 70.3% and 69.7% in low-flow years, and 62.9% and 63.3% in normal-flow years, respectively. The TN and TP mainly arise from Wangtan, Gulai and Wangyuan towns, among others. In addition, the sources of TN and TP were found to be spatially consistent.

  2. Spatial distribution of charged particles along the ion-optical axis in electron cyclotron resonance ion sources. Experimental results

    International Nuclear Information System (INIS)

    Panitzsch, Lauri

    2013-01-01

    The experimental determination of the spatial distribution of charged particles along the ion-optical axis in electron cyclotron resonance ion sources (ECRIS) defines the focus of this thesis. The spatial distributions of different ion species were obtained in the object plane of the bending magnet (∼45 cm downstream from the plasma electrode) and in the plane of the plasma electrode itself, both in high spatial resolution. The results show that each of the different ion species forms a bloated, triangular structure in the aperture of the plasma electrode. The geometry and the orientation of these structures are defined by the superposition of the radial and axial magnetic fields. The radial extent of each structure is defined by the charge of the ion. Higher charge states occupy smaller, more concentrated structures. The total current density increases towards the center of the plasma electrode. The circular and star-like structures that can be observed in the beam profiles of strongly focused, extracted ion beams are each dominated by ions of a single charge state. In addition, the spatially resolved current density distribution of charged particles in the plasma chamber that impinge on the plasma electrode was determined, differentiating between ions and electrons. The experimental results of this work show that the electrons of the plasma are strongly connected to the magnetic field lines in the source and thus spatially well confined in a triangular-like structure. The intensity of the electrons increases towards the center of the plasma electrode and the plasma chamber, as well. These electrons are surrounded by a spatially far less confined and less intense ion population. All the findings mentioned above were already predicted in parts by simulations of different groups. However, the results presented within this thesis represent the first (and by now only) direct experimental verification of those predictions and are qualitatively transferable to other

  3. Spatial distribution of charged particles along the ion-optical axis in electron cyclotron resonance ion sources. Experimental results

    Energy Technology Data Exchange (ETDEWEB)

    Panitzsch, Lauri

    2013-02-08

    The experimental determination of the spatial distribution of charged particles along the ion-optical axis in electron cyclotron resonance ion sources (ECRIS) defines the focus of this thesis. The spatial distributions of different ion species were obtained in the object plane of the bending magnet (∼45 cm downstream from the plasma electrode) and in the plane of the plasma electrode itself, both in high spatial resolution. The results show that each of the different ion species forms a bloated, triangular structure in the aperture of the plasma electrode. The geometry and the orientation of these structures are defined by the superposition of the radial and axial magnetic fields. The radial extent of each structure is defined by the charge of the ion. Higher charge states occupy smaller, more concentrated structures. The total current density increases towards the center of the plasma electrode. The circular and star-like structures that can be observed in the beam profiles of strongly focused, extracted ion beams are each dominated by ions of a single charge state. In addition, the spatially resolved current density distribution of charged particles in the plasma chamber that impinge on the plasma electrode was determined, differentiating between ions and electrons. The experimental results of this work show that the electrons of the plasma are strongly connected to the magnetic field lines in the source and thus spatially well confined in a triangular-like structure. The intensity of the electrons increases towards the center of the plasma electrode and the plasma chamber, as well. These electrons are surrounded by a spatially far less confined and less intense ion population. All the findings mentioned above were already predicted in parts by simulations of different groups. However, the results presented within this thesis represent the first (and by now only) direct experimental verification of those predictions and are qualitatively transferable to

  4. Cross correlations of quantum key distribution based on single-photon sources

    International Nuclear Information System (INIS)

    Dong Shuangli; Wang Xiaobo; Zhang Guofeng; Sun Jianhu; Zhang Fang; Xiao Liantuan; Jia Suotang

    2009-01-01

    We theoretically analyze the second-order correlation function in a quantum key distribution system with realistic single-photon sources. Based on single-event photon statistics, we present the influence of the modification caused by an eavesdropper's intervention and the effects of background signals on the cross correlations between the authorized partners. On this basis, we show a range of correlation values that is secure against intercept-resend attacks.

  5. Empirical tests of Zipf's law mechanism in open source Linux distribution.

    Science.gov (United States)

    Maillart, T; Sornette, D; Spaeth, S; von Krogh, G

    2008-11-21

    Zipf's power law is a ubiquitous empirical regularity found in many systems, thought to result from proportional growth. Here, we establish empirically the usually assumed ingredients of stochastic growth models that have been previously conjectured to be at the origin of Zipf's law. We use exceptionally detailed data on the evolution of open source software projects in Linux distributions, which offer a remarkable example of a growing complex self-organizing adaptive system, exhibiting Zipf's law over four full decades.
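
    A rank-size check of this kind can be reproduced on any set of sizes in a few lines. The sketch below uses synthetic heavy-tailed "package sizes" and a log-log least-squares fit, which is a cruder estimator than the methodology of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "package sizes" drawn from a heavy-tailed (Pareto) distribution.
sizes = (rng.pareto(a=1.0, size=5000) + 1.0) * 10.0

# Zipf plot data: size of the r-th largest item versus its rank r.
ranked = np.sort(sizes)[::-1]
ranks = np.arange(1, ranked.size + 1)

# Least-squares slope of log(size) vs log(rank); Zipf's law corresponds to a slope near -1.
slope, intercept = np.polyfit(np.log(ranks), np.log(ranked), 1)
print(f"fitted rank-size exponent: {slope:.2f} (Zipf's law predicts about -1)")
```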

  6. Predicting cycle time distributions for integrated processing workstations : an aggregate modeling approach

    NARCIS (Netherlands)

    Veeger, C.P.L.; Etman, L.F.P.; Lefeber, A.A.J.; Adan, I.J.B.F.; Herk, van J.; Rooda, J.E.

    2011-01-01

    To predict cycle time distributions of integrated processing workstations, detailed simulation models are almost exclusively used; these models require considerable development and maintenance effort. As an alternative, we propose an aggregate model that is a lumped-parameter representation of the

  7. Climate change and plant distribution: local models predict high-elevation persistence

    DEFF Research Database (Denmark)

    Randin, Christophe F.; Engler, Robin; Normand, Signe

    2009-01-01

    Mountain ecosystems will likely be affected by global warming during the 21st century, with substantial biodiversity loss predicted by species distribution models (SDMs). Depending on the geographic extent, elevation range, and spatial resolution of data used in making these models, different rates...

  8. Real-time distributed economic model predictive control for complete vehicle energy management

    NARCIS (Netherlands)

    Romijn, Constantijn; Donkers, Tijs; Kessels, John; Weiland, Siep

    2017-01-01

    In this paper, a real-time distributed economic model predictive control approach for complete vehicle energy management (CVEM) is presented using a receding control horizon in combination with a dual decomposition. The dual decomposition allows the CVEM optimization problem to be solved by solving

  9. A Random Forest Approach to Predict the Spatial Distribution of Sediment Pollution in an Estuarine System

    Science.gov (United States)

    Modeling the magnitude and distribution of sediment-bound pollutants in estuaries is often limited by incomplete knowledge of the site and inadequate sample density. To address these modeling limitations, a decision-support tool framework was conceived that predicts sediment cont...

  10. The predictive skill of species distribution models for plankton in a changing climate

    DEFF Research Database (Denmark)

    Brun, Philipp Georg; Kiørboe, Thomas; Licandro, Priscilla

    2016-01-01

    Statistical species distribution models (SDMs) are increasingly used to project spatial relocations of marine taxa under future climate change scenarios. However, tests of their predictive skill in the real-world are rare. Here, we use data from the Continuous Plankton Recorder program, one...... null models, is essential to assess the robustness of projections of marine planktonic species under climate change...

  11. Predictive analytics for truck arrival time estimation : a field study at a European distribution center

    NARCIS (Netherlands)

    van der Spoel, Sjoerd; Amrit, Chintan Amrit; van Hillegersberg, Jos

    2017-01-01

    Distribution centres (DCs) are the hubs connecting transport streams in the supply chain. The synchronisation of coming and going cargo at a DC requires reliable arrival times. To achieve this, a reliable method to predict arrival times is needed. A literature review was performed to find the

  12. Predicting the spatial distribution of leaf litterfall in a mixed deciduous forest

    NARCIS (Netherlands)

    Staelens, Jeroen; Nachtergale, Lieven; Luyssaert, Sebastiaan

    2004-01-01

    An accurate prediction of the spatial distribution of litterfall can improve insight in the interaction between the canopy layer and forest floor characteristics, which is a key feature in forest nutrient cycling. Attempts to model the spatial variability of litterfall have been made across forest

  13. Prediction With Dimension Reduction of Multiple Molecular Data Sources for Patient Survival

    Directory of Open Access Journals (Sweden)

    Adam Kaplan

    2017-07-01

    Full Text Available Predictive modeling from high-dimensional genomic data is often preceded by a dimension reduction step, such as principal component analysis (PCA). However, the application of PCA is not straightforward for multisource data, wherein multiple sources of 'omics data measure different but related biological components. In this article, we use recent advances in the dimension reduction of multisource data for predictive modeling. In particular, we apply exploratory results from Joint and Individual Variation Explained (JIVE), an extension of PCA for multisource data, to the prediction of differing response types. We conduct simulations to illustrate the practical advantages and interpretability of our approach. As an application example, we consider predicting survival for patients with glioblastoma multiforme from 3 data sources measuring messenger RNA expression, microRNA expression, and DNA methylation. We also introduce a method to estimate JIVE scores for new samples that were not used in the initial dimension reduction and study its theoretical properties; this method is implemented in the R package R.JIVE on CRAN, in the function jive.predict.

  14. Boosting up quantum key distribution by learning statistics of practical single-photon sources

    International Nuclear Information System (INIS)

    Adachi, Yoritoshi; Yamamoto, Takashi; Koashi, Masato; Imoto, Nobuyuki

    2009-01-01

    We propose a simple quantum-key-distribution (QKD) scheme for practical single-photon sources (SPSs), which works even with a moderate suppression of the second-order correlation g(2) of the source. The scheme utilizes a passive preparation of a decoy state, monitoring a fraction of the signal via an additional beam splitter and a detector at the sender's side to detect photon-number-splitting attacks. We show that the achievable distance increases with the precision with which the sub-Poissonian tendency is confirmed in the higher photon-number distribution of the source, rather than with the actual suppression of multiphoton emission events. We present an example of the secure key generation rate for a poor SPS with g(2) = 0.19, for which no secure key is produced with the conventional QKD scheme, and show that learning the photon-number distribution up to several photon numbers is sufficient for achieving almost the same distance as with an ideal SPS.

  15. Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques

    Directory of Open Access Journals (Sweden)

    Valenzise G

    2009-01-01

    Full Text Available In the past few years, a large number of techniques have been proposed to identify whether multimedia content has been illegally tampered with. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, mainly because of the large amount of data required for this task. We propose a novel hashing scheme that exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content-user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges from 20% to 70%.

  16. Polycyclic Aromatic Hydrocarbons in the Dagang Oilfield (China): Distribution, Sources, and Risk Assessment

    Directory of Open Access Journals (Sweden)

    Haihua Jiao

    2015-05-01

    Full Text Available The levels of 16 polycyclic aromatic hydrocarbons (PAHs) were investigated in 27 upper-layer (0–25 cm) soil samples collected from the Dagang Oilfield (China) in April 2013 to estimate their distribution, possible sources, and the potential risks posed. The total concentrations of PAHs (∑PAHs) varied between 103.6 µg·kg−1 and 5872 µg·kg−1, with a mean concentration of 919.8 µg·kg−1; increased concentrations were noted along a gradient from arable desert soil (mean 343.5 µg·kg−1), to oil well areas (mean 627.3 µg·kg−1), to urban and residential zones (mean 1856 µg·kg−1). Diagnostic ratios indicated diverse sources of PAHs, including petroleum, liquid fossil fuels, and biomass combustion. Combustion sources were most significant for PAHs in arable desert soils and residential zones, while petroleum sources were significant in oilfield areas. Based on their carcinogenicity, PAHs were classified as carcinogenic (B) or not classified/non-carcinogenic (NB). The total concentrations of carcinogenic PAHs (∑BPAHs) varied from 13.3 µg·kg−1 to 4397 µg·kg−1 across all samples, with a mean concentration of 594.4 µg·kg−1. The results suggest that the oilfield soil is subject to a certain level of ecological risk.

  17. Prediction of thermal coagulation from the instantaneous strain distribution induced by high-intensity focused ultrasound

    Science.gov (United States)

    Iwasaki, Ryosuke; Takagi, Ryo; Tomiyasu, Kentaro; Yoshizawa, Shin; Umemura, Shin-ichiro

    2017-07-01

    Targeting the ultrasound beam and predicting thermal lesion formation in advance are requirements for monitoring high-intensity focused ultrasound (HIFU) treatment safely and reproducibly. To visualize the HIFU focal zone, we utilized an acoustic radiation force impulse (ARFI) imaging-based method. After displacements are induced inside the tissue by a pulsed HIFU exposure, called the push pulse, the distribution of axial displacements starts to expand and move. To improve prediction accuracy by acquiring RF data immediately after and during the HIFU push pulse exposure, we attempted methods using extrapolation estimation and HIFU noise elimination. The distributions obtained by going back in the time domain from the end of the push pulse exposure are in good agreement with tissue coagulation at the center. The results suggest that the proposed focal zone visualization, which employs pulsed HIFU together with the high-speed ARFI imaging method, is useful for predicting thermal coagulation in advance.

  18. Planck early results. XV. Spectral energy distributions and radio continuum spectra of northern extragalactic radio sources

    DEFF Research Database (Denmark)

    Aatrokoski, J.; Lähteenmäki, A.; Lavonen, N.

    2011-01-01

    Spectral energy distributions (SEDs) and radio continuum spectra are presented for a northern sample of 104 extragalactic radio sources, based on the Planck Early Release Compact Source Catalogue (ERCSC) and simultaneous multifrequency data. The nine Planck frequencies, from 30 to 857 GHz......, are complemented by a set of simultaneous observations ranging from radio to gamma-rays. This is the first extensive frequency coverage in the radio and millimetre domains for an essentially complete sample of extragalactic radio sources, and it shows how the individual shocks, each in their own phase...... of development, shape the radio spectra as they move in the relativistic jet. The SEDs presented in this paper were fitted with second and third degree polynomials to estimate the frequencies of the synchrotron and inverse Compton (IC) peaks, and the spectral indices of low and high frequency radio data...
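
    Locating a spectral peak with a low-order polynomial, as done for these SEDs, can be sketched as follows. The flux densities are synthetic and a single parabola in log-log space is fitted, which is a simplification of the Planck analysis.

```python
import numpy as np

# Synthetic SED: log10(flux density) versus log10(frequency) with a broad synchrotron bump.
freqs_ghz = np.array([30, 44, 70, 100, 143, 217, 353, 545, 857])   # Planck bands, GHz
log_nu = np.log10(freqs_ghz * 1e9)
true_peak = np.log10(2e11)                                          # assumed peak at 200 GHz
noise = np.random.default_rng(5).normal(0.0, 0.02, log_nu.size)
log_S = -0.4 * (log_nu - true_peak) ** 2 + 0.5 + noise

# Second-degree polynomial fit; the peak frequency is the vertex of the parabola.
c2, c1, c0 = np.polyfit(log_nu, log_S, 2)
log_nu_peak = -c1 / (2 * c2)
print(f"estimated peak: {10 ** log_nu_peak / 1e9:.0f} GHz (input peak {10 ** true_peak / 1e9:.0f} GHz)")
```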

  19. Blind Separation of Nonstationary Sources Based on Spatial Time-Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Zhang Yimin

    2006-01-01

    Full Text Available Blind source separation (BSS) based on spatial time-frequency distributions (STFDs) provides improved performance over blind source separation methods based on second-order statistics, when dealing with signals that are localized in the time-frequency (t-f) domain. In this paper, we propose the use of STFD matrices for both whitening and recovery of the mixing matrix, which are two stages commonly required in many BSS methods, to provide robust BSS performance to noise. In addition, a simple method is proposed to select the auto- and cross-term regions of the time-frequency distribution (TFD). To further improve the BSS performance, t-f grouping techniques are introduced to reduce the number of signals under consideration, and to allow the receiver array to separate more sources than the number of array sensors, provided that the sources have disjoint t-f signatures. With the use of one or more techniques proposed in this paper, improved performance of blind separation of nonstationary signals can be achieved.

  20. Regulatory actions to expand the offer of distributed generation from renewable energy sources in Brazil

    International Nuclear Information System (INIS)

    Pepitone da Nóbrega, André; Cabral Carvalho, Carlos Eduardo

    2015-01-01

    The composition of the Brazilian electric energy matrix has undergone transformations in recent years. However, it has still maintained significant participation of renewable energy sources, in particular hydropower plants of various magnitudes. Reasons for the growth of other renewable sources of energy, such as wind and solar, include the fact that the remaining hydropower capacity is mainly located in the Amazon, far from the centers of consumption; the necessity of diversifying the energy mix and reducing dependence on hydrologic regimes; the increase in environmental restrictions; and the increase in civil construction and land costs. Wind power generation has grown most significantly in Brazil, and positive results in the latest energy auctions show that it has reached competitive pricing. Solar energy is still incipient in Brazil, despite its high potential for conversion into electric energy; its participation in the Brazilian electric energy matrix mainly involves solar power plants and distributed generation. Biomass thermal plants, mainly those that use sugar cane bagasse, also play an important role in renewable generation in Brazil. This paper aims to present an overview of the present situation and to discuss the actions and regulations needed to expand the offer of renewable distributed generation in Brazil, mainly from wind, solar and biomass energy sources. (full text)

  1. Multi-scale approach for predicting fish species distributions across coral reef seascapes.

    Directory of Open Access Journals (Sweden)

    Simon J Pittman

    Full Text Available Two of the major limitations to effective management of coral reef ecosystems are a lack of information on the spatial distribution of marine species and a paucity of data on the interacting environmental variables that drive distributional patterns. Advances in marine remote sensing, together with the novel integration of landscape ecology and advanced niche modelling techniques, provide an unprecedented opportunity to reliably model and map marine species distributions across many kilometres of coral reef ecosystems. We developed a multi-scale approach using three-dimensional seafloor morphology and across-shelf location to predict spatial distributions for five common Caribbean fish species. Seascape topography was quantified from high-resolution bathymetry at five spatial scales (5-300 m radii) surrounding fish survey sites. Model performance and map accuracy were assessed for two high-performing machine-learning algorithms: Boosted Regression Trees (BRT) and Maximum Entropy Species Distribution Modelling (MaxEnt). The three most important predictors were geographical location across the shelf, followed by a measure of topographic complexity. Predictor contributions differed among species, yet rarely changed across spatial scales. BRT provided 'outstanding' model predictions (AUC > 0.9) for three of five fish species. MaxEnt provided 'outstanding' model predictions for two of five species, with the remaining three models considered 'excellent' (AUC = 0.8-0.9). In contrast, MaxEnt spatial predictions were markedly more accurate (92% map accuracy) than BRT (68% map accuracy). We demonstrate that reliable spatial predictions for a range of key fish species can be achieved by modelling the interaction between geographical location across the shelf and the topographic heterogeneity of seafloor structure. This multi-scale, analytic approach is an important new cost-effective tool to accurately delineate essential fish habitat and support

  2. Predicting Posttraumatic Stress Symptom Prevalence and Local Distribution after an Earthquake with Scarce Data.

    Science.gov (United States)

    Dussaillant, Francisca; Apablaza, Mauricio

    2017-08-01

    After a major earthquake, the assignment of scarce mental health emergency personnel to different geographic areas is crucial to the effective management of the crisis. The scarce information that is available in the aftermath of a disaster may be valuable in helping predict where the populations most in need are located. The objectives of this study were to derive algorithms to predict posttraumatic stress (PTS) symptom prevalence and local distribution after an earthquake and to test whether there are algorithms that require few input data and are still reasonably predictive. A rich database of PTS symptoms, reported after Chile's 2010 earthquake and tsunami, was used. Several model specifications for the mean and centiles of the distribution of PTS symptoms, together with posttraumatic stress disorder (PTSD) prevalence, were estimated via linear and quantile regressions. The models varied in the set of covariates included. Adjusted R2 for the most liberal specifications (in terms of numbers of covariates included) ranged from 0.62 to 0.74, depending on the outcome. When only including peak ground acceleration (PGA), poverty rate, and household damage in linear and quadratic form, predictive capacity was still good (adjusted R2 values from 0.59 to 0.67 were obtained). Information about local poverty, household damage, and PGA can be used as an aid to predict PTS symptom prevalence and local distribution after an earthquake. This can help improve the assignment of mental health personnel to the affected localities. Dussaillant F, Apablaza M. Predicting posttraumatic stress symptom prevalence and local distribution after an earthquake with scarce data. Prehosp Disaster Med. 2017;32(4):357-367.
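
    The centile models described here can be sketched with a standard quantile regression. The data below are synthetic, with PGA, poverty rate and household damage as covariates and an assumed heteroscedastic symptom score; statsmodels' QuantReg stands in for the authors' specifications.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 400

# Synthetic locality-level covariates: peak ground acceleration (g), poverty rate, damage share.
pga = rng.uniform(0.05, 0.8, n)
poverty = rng.uniform(0.0, 0.4, n)
damage = rng.uniform(0.0, 0.6, n)
# Assumed data-generating process for a PTS symptom score, with spread growing with PGA.
symptoms = 10 + 40 * pga + 30 * poverty + 25 * damage + rng.normal(0.0, 5 + 15 * pga, n)

X = sm.add_constant(np.column_stack([pga, poverty, damage]))
for q in (0.5, 0.9):
    res = sm.QuantReg(symptoms, X).fit(q=q)
    print(f"q = {q:.1f} centile coefficients (const, PGA, poverty, damage):",
          np.round(res.params, 1))
```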

  3. Model Predictive Control techniques with application to photovoltaic, DC Microgrid, and a multi-sourced hybrid energy system

    Science.gov (United States)

    Shadmand, Mohammad Bagher

    Renewable energy sources continue to gain popularity. However, two major limitations prevent widespread adoption: the availability and variability of the electricity generated, and the cost of the equipment. The focus of this dissertation is Model Predictive Control (MPC) for optimally sized photovoltaic (PV), DC microgrid, and multi-sourced hybrid energy systems. The main applications considered are: maximum power point tracking (MPPT) by MPC, droop predictive control of a DC microgrid, MPC of a grid-interaction inverter, and MPC of a capacitor-less VAR compensator based on a matrix converter (MC). This dissertation first investigates a multi-objective optimization technique for a hybrid distribution system. The variability of a high-penetration PV scenario is also studied when incorporated into the microgrid concept. Emerging PV technologies have enabled the creation of contoured and conformal PV surfaces; the effect of using non-planar PV modules on variability is also analyzed. The proposed predictive control to achieve maximum power point for isolated and grid-tied PV systems speeds up the control loop because it predicts the error before the switching signal is applied to the converter. The low conversion efficiency of PV cells makes it important to always operate at the maximum possible power point so that the system remains economical. Thus the proposed MPPT technique can capture more energy than conventional MPPT techniques from the same amount of installed solar panels. Because of the MPPT requirement, the output voltage of the converter may vary. Therefore a droop control is needed to feed multiple arrays of photovoltaic systems to a DC bus in a microgrid community. Development of a droop control technique by means of predictive control is another application of this dissertation. Reactive power, denoted as Volt Ampere Reactive (VAR), has several undesirable consequences on AC power system networks, such as reduction in power transfer capability and increase in
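
    The statement that the controller "predicts the error before the switching signal is applied" is the core of finite-control-set MPC. The sketch below is a heavily simplified, hypothetical illustration for a boost converter: the converter parameters, the one-step current model, and the reference current are assumptions, not the dissertation's converter design.

```python
# Finite-control-set MPC sketch for MPPT on a boost converter (illustrative).
def predict_current(i_now, v_pv, v_dc, switch_on, L=1e-3, Ts=10e-6):
    """One-step prediction of inductor current for a candidate switch state."""
    dv = v_pv if switch_on else v_pv - v_dc   # simple averaged inductor model
    return i_now + (dv / L) * Ts

def mpc_mppt_step(i_now, i_ref, v_pv, v_dc):
    """Pick the switch state whose predicted current is closest to the MPPT reference."""
    candidates = {state: predict_current(i_now, v_pv, v_dc, state) for state in (True, False)}
    return min(candidates, key=lambda s: abs(candidates[s] - i_ref))

# Example step: panel at 30 V, DC bus at 48 V, measured 4.0 A, reference 4.2 A.
print(mpc_mppt_step(i_now=4.0, i_ref=4.2, v_pv=30.0, v_dc=48.0))  # -> True (switch on)
```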

  4. Predicting the probability of slip in gait: methodology and distribution study.

    Science.gov (United States)

    Gragg, Jared; Yang, James

    2016-01-01

    The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
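
    The single-integral form lends itself to a direct trapezoidal-rule evaluation. In the sketch below the available friction is taken as normal and the required friction as lognormal purely for illustration; the study's point is precisely that any pair of distributions can be plugged in, and the parameter values are assumptions.

```python
# Probability of slip as a single integral evaluated with the trapezoidal rule.
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

available = stats.norm(loc=0.50, scale=0.08)   # available friction (illustrative)
required = stats.lognorm(s=0.25, scale=0.22)   # required friction (illustrative)

mu = np.linspace(0.0, 1.2, 2001)
# Slip occurs when the required friction exceeds the available friction:
# P(slip) = integral over mu of f_available(mu) * P(required > mu)
integrand = available.pdf(mu) * required.sf(mu)
print(f"Estimated probability of slip per step: {trapezoid(integrand, mu):.4f}")
```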

  5. A Popularity Based Prediction and Data Redistribution Tool for ATLAS Distributed Data Management

    CERN Document Server

    Beermann, T; The ATLAS collaboration; Maettig, P

    2014-01-01

    This paper presents a system to predict future data popularity for data-intensive systems, such as ATLAS distributed data management (DDM). Using these predictions it is possible to make a better distribution of data, helping to reduce the waiting time for jobs that use this data. This system is based on a tracer infrastructure that is able to monitor and store historical data accesses and which is used to create popularity reports. These reports provide detailed summaries about data accesses in the past, including information about the accessed files, the involved users and the sites. From these past data it is then possible to make near-term forecasts of data popularity. The prediction system introduced in this paper makes use of both simple prediction methods and predictions made by neural networks. The best prediction method depends on the type of data, and the data are carefully filtered for use in either system. The second part of the paper introduces a system that effectively ...
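
    One of the "simple prediction methods" that such a system could combine with neural-network forecasts is a smoothing of past access counts. The sketch below is a generic exponentially weighted moving average over hypothetical weekly tracer counts; it is an assumption for illustration, not the actual DDM popularity algorithm.

```python
# Exponentially weighted moving average forecast of next-period accesses.
def ewma_forecast(accesses, alpha=0.5):
    """Forecast the next period's access count from a per-period history."""
    level = accesses[0]
    for count in accesses[1:]:
        level = alpha * count + (1 - alpha) * level
    return level

weekly_accesses = [120, 90, 150, 300, 280, 310]   # hypothetical counts from tracer reports
print(f"Predicted accesses next week: {ewma_forecast(weekly_accesses):.0f}")
```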

  6. Using Bayesian Belief Network (BBN) modelling for rapid source term prediction. Final report

    International Nuclear Information System (INIS)

    Knochenhauer, M.; Swaling, V.H.; Dedda, F.D.; Hansson, F.; Sjoekvist, S.; Sunnegaerd, K.

    2013-10-01

    The project presented in this report deals with a number of complex issues related to the development of a tool for rapid source term prediction (RASTEP), based on a plant model represented as a Bayesian belief network (BBN) and a source term module which is used for assigning relevant source terms to BBN end states. Thus, RASTEP uses a BBN to model severe accident progression in a nuclear power plant in combination with pre-calculated source terms (i.e., amount, composition, timing, and release path of released radio-nuclides). The output is a set of possible source terms with associated probabilities. One major issue has been the integration of probabilistic and deterministic analyses, dealing with the challenge of making the source term determination flexible enough to give reliable and valid output throughout the accident scenario. The potential for connecting RASTEP to a fast-running source term prediction code has been explored, as well as alternative ways of improving the deterministic connections of the tool. As part of the investigation, a comparison of two deterministic severe accident analysis codes has been performed. A second important task has been to develop a general method by which experts' beliefs can be included in a systematic way when defining the conditional probability tables (CPTs) in the BBN. The proposed method includes expert judgement in a systematic way when defining the CPTs of a BBN. Using this iterative method results in a reliable BBN even though expert judgements, with their associated uncertainties, have been used. It also simplifies verification and validation of the considerable amounts of quantitative data included in a BBN. (Author)
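
    The basic mechanics of mapping plant states to source-term probabilities through conditional probability tables can be shown with a toy network. The two nodes, their states, the prior, and every CPT value below are invented for illustration; they are not RASTEP's plant model or its expert-elicited numbers.

```python
# Toy Bayesian-belief-network calculation: marginalise expert-style CPTs into
# a probability distribution over source-term categories.
from itertools import product

p_core_damage = 0.10                          # assumed prior from plant indications
cpt_containment = {True: 0.30, False: 0.01}   # P(containment fails | core damage state)

# P(source-term category | core damage, containment failure), illustrative values.
cpt_source_term = {
    (True, True):   {"large_release": 0.60, "small_release": 0.30, "negligible": 0.10},
    (True, False):  {"large_release": 0.05, "small_release": 0.45, "negligible": 0.50},
    (False, True):  {"large_release": 0.01, "small_release": 0.09, "negligible": 0.90},
    (False, False): {"large_release": 0.00, "small_release": 0.01, "negligible": 0.99},
}

posterior = {"large_release": 0.0, "small_release": 0.0, "negligible": 0.0}
for core, containment in product([True, False], repeat=2):
    p_core = p_core_damage if core else 1.0 - p_core_damage
    p_cont = cpt_containment[core] if containment else 1.0 - cpt_containment[core]
    for category, p_cat in cpt_source_term[(core, containment)].items():
        posterior[category] += p_core * p_cont * p_cat

print(posterior)   # a set of possible source terms with associated probabilities
```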

  7. Using Bayesian Belief Network (BBN) modelling for rapid source term prediction. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Knochenhauer, M.; Swaling, V.H.; Dedda, F.D.; Hansson, F.; Sjoekvist, S.; Sunnegaerd, K. [Lloyd's Register Consulting AB, Sundbyberg (Sweden)]

    2013-10-15

    The project presented in this report deals with a number of complex issues related to the development of a tool for rapid source term prediction (RASTEP), based on a plant model represented as a Bayesian belief network (BBN) and a source term module which is used for assigning relevant source terms to BBN end states. Thus, RASTEP uses a BBN to model severe accident progression in a nuclear power plant in combination with pre-calculated source terms (i.e., amount, composition, timing, and release path of released radio-nuclides). The output is a set of possible source terms with associated probabilities. One major issue has been the integration of probabilistic and deterministic analyses, dealing with the challenge of making the source term determination flexible enough to give reliable and valid output throughout the accident scenario. The potential for connecting RASTEP to a fast-running source term prediction code has been explored, as well as alternative ways of improving the deterministic connections of the tool. As part of the investigation, a comparison of two deterministic severe accident analysis codes has been performed. A second important task has been to develop a general method by which experts' beliefs can be included in a systematic way when defining the conditional probability tables (CPTs) in the BBN. The proposed method includes expert judgement in a systematic way when defining the CPTs of a BBN. Using this iterative method results in a reliable BBN even though expert judgements, with their associated uncertainties, have been used. It also simplifies verification and validation of the considerable amounts of quantitative data included in a BBN. (Author)

  8. Predicting induced radioactivity for the accelerator operations at the Taiwan Photon Source.

    Science.gov (United States)

    Sheu, R J; Jiang, S H

    2010-12-01

    This study investigates the characteristics of induced radioactivity due to the operations of a 3-GeV electron accelerator at the Taiwan Photon Source (TPS). According to the beam loss analysis, the authors set two representative irradiation conditions for the activation analysis. The FLUKA Monte Carlo code has been used to predict the isotope inventories, residual activities, and remanent dose rates as a function of time. The calculation model itself is simple but conservative for the evaluation of induced radioactivity in a light source facility. This study highlights the importance of beam loss scenarios and demonstrates the great advantage of using FLUKA in comparing the predicted radioactivity with corresponding regulatory limits. The calculated results lead to the conclusion that, due to fairly low electron consumption, the radioactivity induced in the accelerator components and surrounding concrete walls of the TPS is rather moderate and manageable, while the possible activation of air and cooling water in the tunnel and their environmental releases are negligible.

  9. Effects of the infectious period distribution on predicted transitions in childhood disease dynamics.

    Science.gov (United States)

    Krylova, Olga; Earn, David J D

    2013-07-06

    The population dynamics of infectious diseases occasionally undergo rapid qualitative changes, such as transitions from annual to biennial cycles or to irregular dynamics. Previous work, based on the standard seasonally forced 'susceptible-exposed-infectious-removed' (SEIR) model has found that transitions in the dynamics of many childhood diseases result from bifurcations induced by slow changes in birth and vaccination rates. However, the standard SEIR formulation assumes that the stage durations (latent and infectious periods) are exponentially distributed, whereas real distributions are narrower and centred around the mean. Much recent work has indicated that realistically distributed stage durations strongly affect the dynamical structure of seasonally forced epidemic models. We investigate whether inferences drawn from previous analyses of transitions in patterns of measles dynamics are robust to the shapes of the stage duration distributions. As an illustrative example, we analyse measles dynamics in New York City from 1928 to 1972. We find that with a fixed mean infectious period in the susceptible-infectious-removed (SIR) model, the dynamical structure and predicted transitions vary substantially as a function of the shape of the infectious period distribution. By contrast, with fixed mean latent and infectious periods in the SEIR model, the shapes of the stage duration distributions have a less dramatic effect on model dynamical structure and predicted transitions. All these results can be understood more easily by considering the distribution of the disease generation time as opposed to the distributions of individual disease stages. Numerical bifurcation analysis reveals that for a given mean generation time the dynamics of the SIR and SEIR models for measles are nearly equivalent and are insensitive to the shapes of the disease stage distributions.
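
    The effect of a narrower, more realistic stage-duration distribution is commonly implemented with the "linear chain trick": an Erlang (gamma-shaped) stage is represented by k identical exponential sub-stages in series, with k = 1 recovering the standard exponential assumption. The SIR sketch below uses illustrative parameter values rather than the measles fits and shows the construction.

```python
# SIR model with an Erlang-distributed infectious period via k sub-stages.
from scipy.integrate import solve_ivp

def sir_erlang(t, y, beta, gamma, k):
    s, *i_stages, r = y
    i_total = sum(i_stages)
    ds = -beta * s * i_total
    di = [beta * s * i_total - k * gamma * i_stages[0]]                 # first sub-stage
    di += [k * gamma * (i_stages[j - 1] - i_stages[j]) for j in range(1, k)]
    dr = k * gamma * i_stages[-1]
    return [ds, *di, dr]

k = 4                        # number of sub-stages; larger k narrows the distribution
beta, gamma = 1.5, 1.0 / 5   # transmission rate and 1/mean infectious period (assumed)
y0 = [0.999, 0.001] + [0.0] * (k - 1) + [0.0]
sol = solve_ivp(sir_erlang, (0, 120), y0, args=(beta, gamma, k))
print("Final susceptible fraction:", round(float(sol.y[0, -1]), 4))
```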

  10. Isotopes, Inventories and Seasonality: Unraveling Methane Source Distribution in the Complex Landscapes of the United Kingdom.

    Science.gov (United States)

    Lowry, D.; Fisher, R. E.; Zazzeri, G.; Lanoisellé, M.; France, J.; Allen, G.; Nisbet, E. G.

    2017-12-01

    Unlike the big open landscapes of many continents, where large area sources dominated by one particular methane emission type can be isotopically characterized by flight measurements and sampling, the complex patchwork of urban, fossil and agricultural methane sources across NW Europe requires detailed ground surveys for characterization (Zazzeri et al., 2017). Here we outline the findings from multiple seasonal urban and rural measurement campaigns in the United Kingdom. These surveys aim to: 1) assess source distribution and baseline in regions of planned fracking, and relate these to on-site continuous baseline climatology; 2) characterize spatial and seasonal differences in the isotopic signatures of the UNFCCC source categories; and 3) assess the spatial validity of the 1 x 1 km UK inventory for large continuous emitters, proposed point sources, and seasonal / ephemeral emissions. The UK inventory suggests that 90% of methane emissions are from 3 source categories: ruminants, landfill and gas distribution. Bag sampling and GC-IRMS delta13C analysis shows that landfill gives a constant signature of -57 ±3 ‰ throughout the year. Fugitive gas emissions are consistent regionally, depending on the North Sea supply regions feeding the network (-41 ± 2 ‰ in N England, -37 ± 2 ‰ in SE England). Ruminant, mostly cattle, emissions are far more complex, as these animals spend winters in barns and summers in fields, but are essentially a mix of 2 end members, breath at -68 ±3 ‰ and manure at -51 ±3 ‰, resulting in broad summer field emission plumes of -64 ‰ and point winter barn emission plumes of -58 ‰. The inventory correctly locates emission hotspots from landfill, larger sewage treatment plants and gas compressor stations, giving a broad overview of emission distribution for regional model validation. Mobile surveys are adding an extra layer of detail to this which, combined with isotopic characterization, has identified the spatial distribution of gas pipe leaks
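
    The reported seasonal plume signatures follow directly from a two end-member mass balance on the breath and manure signatures quoted above. The mixing fractions in the sketch are back-of-envelope assumptions chosen only to reproduce plumes near the quoted -64 ‰ and -58 ‰ values.

```python
# Two end-member delta-13C mixing for ruminant methane emissions.
def mixed_delta13c(f_breath, d_breath=-68.0, d_manure=-51.0):
    """Bulk signature (permil) of a plume mixing breath and manure methane."""
    return f_breath * d_breath + (1.0 - f_breath) * d_manure

print(round(mixed_delta13c(0.76), 1))   # ~ -64 permil, like the summer field plumes
print(round(mixed_delta13c(0.41), 1))   # ~ -58 permil, like the winter barn plumes
```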

  11. 10 CFR 32.74 - Manufacture and distribution of sources or devices containing byproduct material for medical use.

    Science.gov (United States)

    2010-01-01

    Title 10 (Energy), 2010-01-01 edition: Specific Domestic Licenses to Manufacture or Transfer Certain Items Containing Byproduct Material; Generally Licensed Items; § 32.74 Manufacture and distribution of sources or devices containing byproduct material for...

  12. Site-to-Source Finite Fault Distance Probability Distribution in Probabilistic Seismic Hazard and the Relationship Between Minimum Distances

    Science.gov (United States)

    Ortega, R.; Gutierrez, E.; Carciumaru, D. D.; Huesca-Perez, E.

    2017-12-01

    We present a method to compute the conditional and unconditional probability density function (PDF) of the finite fault distance distribution (FFDD). Two cases are described: lines and areas. The case of lines has a simple analytical solution, while in the case of areas the geometrical probability of a fault, based on the strike, dip, and fault segment vertices, is obtained using the projection of spheres onto a piecewise rectangular surface. The cumulative distribution is computed by measuring the projection of a sphere of radius r in an effective area using an algorithm that estimates the area of a circle within a rectangle. In addition, we introduce the finite fault distance metric: the distance at which the maximum stress release occurs within the fault plane and generates the peak ground motion. With this distance, the appropriate ground motion prediction equations (GMPEs) can then be applied for PSHA. The conditional probability of distance given magnitude is also presented using different scaling laws. A simple model with a constant distribution of the centroid at the geometrical mean is discussed; in this model hazard is reduced at the edges because the effective size is reduced. Nowadays there is a trend toward using extended source distances in PSHA; however, it is not possible to separate the fault geometry from the GMPE. With this new approach, it is possible to add fault rupture models, separating geometrical and propagation effects.
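
    For the line-source case, the distance distribution can be checked numerically. The sketch below is a Monte Carlo approximation with an assumed site location and fault trace; it only illustrates the quantity being derived and is not the paper's analytical or spherical-projection solution.

```python
# Monte Carlo estimate of a site-to-finite-fault distance distribution (line case).
import numpy as np

rng = np.random.default_rng(42)
site = np.array([10.0, 5.0])                                     # site (km), assumed
fault_a, fault_b = np.array([0.0, 0.0]), np.array([30.0, 0.0])   # fault trace, assumed

# Sample candidate rupture points uniformly along the fault trace.
u = rng.uniform(0.0, 1.0, 100_000)
points = fault_a + u[:, None] * (fault_b - fault_a)
distances = np.linalg.norm(points - site, axis=1)

# Empirical CDF of the finite-fault distance at a few probe radii.
for r in (5.0, 10.0, 20.0, 30.0):
    print(f"P(distance <= {r:4.1f} km) = {np.mean(distances <= r):.3f}")
```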

  13. Evaluating the impact of improvements to the FLAMBE smoke source model on forecasts of aerosol distribution from NAAPS

    Science.gov (United States)

    Hyer, E. J.; Reid, J. S.

    2006-12-01

    As more forecast models aim to include aerosol and chemical species, there is a need for source functions for biomass burning emissions that are accurate, robust, and operable in real-time. NAAPS is a global aerosol forecast model running every six hours and forecasting distributions of biomass burning, industrial sulfate, dust, and sea salt aerosols. This model is run operationally by the U.S. Navy as an aid to planning. The smoke emissions used as input to the model are calculated from the data collected by the FLAMBE system, driven by near-real-time active fire data from GOES WF_ABBA and MODIS Rapid Response. The smoke source function uses land cover data to predict properties of detected fires based on literature data from experimental burns. This scheme is very sensitive to the choice of land cover data sets. In areas of rapid land cover change, the use of static land cover data can produce artifactual changes in emissions unrelated to real changes in fire patterns. In South America, this change may be as large as 40% over five years. We demonstrate the impact of a modified land cover scheme on FLAMBE emissions and NAAPS forecasts, including a fire size algorithm developed using MODIS burned area data. We also describe the effects of corrections to emissions estimates for cloud and satellite coverage. We outline areas where existing data sources are incomplete and improvements are required to achieve accurate modeling of biomass burning emissions in real time.

  14. Measurements and predictions of the air distribution systems in high compute density (Internet) data centers

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jinkyun [HIMEC (Hanil Mechanical Electrical Consultants) Ltd., Seoul 150-103 (Korea); Department of Architectural Engineering, Yonsei University, Seoul 120-749 (Korea); Lim, Taesub; Kim, Byungseon Sean [Department of Architectural Engineering, Yonsei University, Seoul 120-749 (Korea)

    2009-10-15

    When equipment power density increases, a critical goal of a data center cooling system is to separate the equipment exhaust air from the equipment intake air in order to prevent the IT server from overheating. Cooling systems for data centers are primarily differentiated according to the way they distribute air. The six combinations of flooded and locally ducted air distribution make up the vast majority of all installations, except fully ducted air distribution methods. Once the air distribution system (ADS) is selected, there are other elements that must be integrated into the system design. In this research, the design parameters and IT environmental aspects of the cooling system were studied with a high heat density data center. CFD simulation analysis was carried out in order to compare the heat removal efficiencies of various air distribution systems. The IT environment of an actual operating data center is measured to validate a model for predicting the effect of different air distribution systems. A method for planning and design of the appropriate air distribution system is described. IT professionals versed in precision air distribution mechanisms, components, and configurations can work more effectively with mechanical engineers to ensure the specification and design of optimized cooling solutions. (author)

  15. Knowledge-based prediction of three-dimensional dose distributions for external beam radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Shiraishi, Satomi; Moore, Kevin L., E-mail: kevinmoore@ucsd.edu [Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, California 92093 (United States)

    2016-01-15

    Purpose: To demonstrate knowledge-based 3D dose prediction for external beam radiotherapy. Methods: Using previously treated plans as training data, an artificial neural network (ANN) was trained to predict a dose matrix based on patient-specific geometric and planning parameters, such as the closest distance (r) to the planning target volume (PTV) and organs-at-risk (OARs). Twenty-three prostate and 43 stereotactic radiosurgery/radiotherapy (SRS/SRT) cases with at least one nearby OAR were studied. All were planned with volumetric-modulated arc therapy to prescription doses of 81 Gy for prostate and 12-30 Gy for SRS. Using these clinically approved plans, ANNs were trained to predict the dose matrix, and the predictive accuracy was evaluated using the dose difference between the clinical plan and the prediction, δD = D_clin − D_pred. The mean (⟨δD_r⟩), standard deviation (σ_δD_r), and their interquartile range (IQR) for the training plans were evaluated at a 2-3 mm interval from the PTV boundary (r_PTV) to assess prediction bias and precision. Initially, unfiltered models trained using all plans in the cohorts were created for each treatment site. These models predict approximately the average quality of OAR sparing. By emphasizing during training a subset of plans that exhibited superior-to-average OAR sparing, refined models were created to predict high-quality rectum sparing for prostate and brainstem sparing for SRS. Using the refined model, potentially suboptimal plans were identified where the model predicted that further sparing of the OARs was achievable. Replans were performed to test whether the OAR sparing could be improved as predicted by the model. Results: The refined models demonstrated highly accurate dose distribution prediction. For prostate cases, the average prediction bias for all voxels, irrespective of organ delineation, ranged from −1% to 0% with a maximum IQR of 3% over r_PTV ∈ [−6, 30] mm. The
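
    The core construction, a regression from simple geometric features to voxel dose, can be sketched with a small neural network. The dose generator, the single feature (signed distance to the PTV boundary), and the network size below are all assumptions for illustration; they stand in for the clinical plans and feature set used to train the actual models.

```python
# Toy knowledge-based dose prediction: voxel dose as a function of distance to PTV.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
r_ptv = rng.uniform(-6.0, 30.0, 5000)   # mm from PTV surface (negative = inside), assumed
prescription = 81.0                      # Gy, as for the prostate cohort

# Hypothetical dose fall-off used only to generate example training voxels.
dose = prescription / (1.0 + np.exp(0.6 * r_ptv)) + rng.normal(0.0, 1.0, r_ptv.size)

ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=0)
ann.fit(r_ptv.reshape(-1, 1), dose)

# Prediction bias, delta-D = D_clin - D_pred, averaged over all voxels.
d_pred = ann.predict(r_ptv.reshape(-1, 1))
print("Mean delta-D (Gy):", round(float(np.mean(dose - d_pred)), 3))
```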

  16. Knowledge-based prediction of three-dimensional dose distributions for external beam radiotherapy

    International Nuclear Information System (INIS)

    Shiraishi, Satomi; Moore, Kevin L.

    2016-01-01

    Purpose: To demonstrate knowledge-based 3D dose prediction for external beam radiotherapy. Methods: Using previously treated plans as training data, an artificial neural network (ANN) was trained to predict a dose matrix based on patient-specific geometric and planning parameters, such as the closest distance (r) to the planning target volume (PTV) and organs-at-risk (OARs). Twenty-three prostate and 43 stereotactic radiosurgery/radiotherapy (SRS/SRT) cases with at least one nearby OAR were studied. All were planned with volumetric-modulated arc therapy to prescription doses of 81 Gy for prostate and 12-30 Gy for SRS. Using these clinically approved plans, ANNs were trained to predict the dose matrix, and the predictive accuracy was evaluated using the dose difference between the clinical plan and the prediction, δD = D_clin − D_pred. The mean (⟨δD_r⟩), standard deviation (σ_δD_r), and their interquartile range (IQR) for the training plans were evaluated at a 2-3 mm interval from the PTV boundary (r_PTV) to assess prediction bias and precision. Initially, unfiltered models trained using all plans in the cohorts were created for each treatment site. These models predict approximately the average quality of OAR sparing. By emphasizing during training a subset of plans that exhibited superior-to-average OAR sparing, refined models were created to predict high-quality rectum sparing for prostate and brainstem sparing for SRS. Using the refined model, potentially suboptimal plans were identified where the model predicted that further sparing of the OARs was achievable. Replans were performed to test whether the OAR sparing could be improved as predicted by the model. Results: The refined models demonstrated highly accurate dose distribution prediction. For prostate cases, the average prediction bias for all voxels, irrespective of organ delineation, ranged from −1% to 0% with a maximum IQR of 3% over r_PTV ∈ [−6, 30] mm. The average prediction error was less

  17. Comparing predictive models of glioblastoma multiforme built using multi-institutional and local data sources.

    Science.gov (United States)

    Singleton, Kyle W; Hsu, William; Bui, Alex A T

    2012-01-01

    The growing amount of electronic data collected from patient care and clinical trials is motivating the creation of national repositories where multiple institutions share data about their patient cohorts. Such efforts aim to provide sufficient sample sizes for data mining and predictive modeling, ultimately improving treatment recommendations and patient outcome prediction. While these repositories offer the potential to improve our understanding of a disease, potential issues need to be addressed to ensure that multi-site data and resultant predictive models are useful to non-contributing institutions. In this paper we examine the challenges of utilizing National Cancer Institute datasets for modeling glioblastoma multiforme. We created several types of prognostic models and compared their results against models generated using data solely from our institution. While overall model performance between the data sources was similar, different variables were selected during model generation, suggesting that mapping data resources between models is not a straightforward issue.

  18. PHENOstruct: Prediction of human phenotype ontology terms using heterogeneous data sources.

    Science.gov (United States)

    Kahanda, Indika; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa

    2015-01-01

    The human phenotype ontology (HPO) was recently developed as a standardized vocabulary for describing the phenotypic abnormalities associated with human diseases. At present, only a small fraction of human protein-coding genes have HPO annotations, but researchers believe that a large portion of currently unannotated genes are related to disease phenotypes. Therefore, it is important to predict gene-HPO term associations using accurate computational methods. In this work we demonstrate the performance advantage of the structured SVM approach, which was shown to be highly effective for Gene Ontology term prediction, in comparison to several baseline methods. Furthermore, we highlight a collection of informative data sources suitable for the problem of predicting gene-HPO associations, including large-scale literature mining data.

  19. THE ENVIRONMENT AND DISTRIBUTION OF EMITTING ELECTRONS AS A FUNCTION OF SOURCE ACTIVITY IN MARKARIAN 421

    International Nuclear Information System (INIS)

    Mankuzhiyil, Nijil; Ansoldi, Stefano; Persic, Massimo; Tavecchio, Fabrizio

    2011-01-01

    For the high-frequency-peaked BL Lac object Mrk 421, we study the variation of the spectral energy distribution (SED) as a function of source activity, from quiescent to active. We use a fully automatized χ2-minimization procedure, instead of the 'eyeball' procedure more commonly used in the literature, to model nine SED data sets with a one-zone synchrotron self-Compton (SSC) model and examine how the model parameters vary with source activity. The latter issue can finally be addressed now, because simultaneous broadband SEDs (spanning from optical to very high energy photons) have finally become available. Our results suggest that in Mrk 421 the magnetic field (B) decreases with source activity, whereas the electron spectrum's break energy (γ_br) and the Doppler factor (δ) increase; the other SSC parameters turn out to be uncorrelated with source activity. In the SSC framework, these results are interpreted in a picture where the synchrotron power and peak frequency remain constant with varying source activity, through a combination of decreasing magnetic field and increasing number density of γ ≤ γ_br electrons: since this leads to an increased electron-photon scattering efficiency, the resulting Compton power increases, and so does the total (= synchrotron plus Compton) emission.

  20. Acoustic Emission Source Location Using a Distributed Feedback Fiber Laser Rosette

    Directory of Open Access Journals (Sweden)

    Fang Li

    2013-10-01

    Full Text Available This paper proposes an approach for acoustic emission (AE) source localization in a large marble stone using distributed feedback (DFB) fiber lasers. The aim of this study is to detect damage in structures such as those found in civil applications. The directional sensitivity of the DFB fiber laser is investigated by calculating a location coefficient using a method of digital signal analysis. In this method, autocorrelation is used to extract the location coefficient from a periodic AE signal, and wavelet packet energy is calculated to obtain the location coefficient of a burst AE source. Normalization is applied to eliminate the influence of the distance and intensity of the AE source. A new location algorithm based on the location coefficient is then presented and tested to determine the location of an AE source using a Delta (Δ) DFB fiber laser rosette configuration. The advantages of the proposed algorithm over traditional methods based on fiber Bragg gratings (FBGs) include higher strain resolution for AE detection and the ability to take into account two different types of AE source for location.

  1. Future prospects for ECR ion sources with improved charge state distributions

    International Nuclear Information System (INIS)

    Alton, G.D.

    1995-01-01

    Despite the steady advance in the technology of the ECR ion source, present art forms have not yet reached their full potential in terms of charge state and intensity within a particular charge state, in part because of the narrow-bandwidth, single-frequency microwave radiation used to heat the plasma electrons. This article identifies fundamentally important methods which may enhance the performance of ECR ion sources through the use of: (1) a tailored magnetic field configuration (spatial domain) in combination with single-frequency microwave radiation to create a large, uniformly distributed ECR ''volume'', or (2) broadband frequency-domain techniques (variable-frequency, broad-band frequency, or multiple-discrete-frequency microwave radiation), derived from standard TWT technology, to transform the resonant plasma ''surfaces'' of traditional ECR ion sources into resonant plasma ''volumes''. The creation of a large ECR plasma ''volume'' permits coupling of more power into the plasma, resulting in the heating of a much larger electron population to higher energies, thereby producing higher charge state ions and much higher intensities within a particular charge state than possible in present forms of the source. The ECR ion source concepts described in this article offer exciting opportunities to significantly advance the state of the art of ECR technology and, as a consequence, open new opportunities in fundamental and applied research and for a variety of industrial applications

  2. Field distribution of a source and energy absorption in an inhomogeneous magneto-active plasma

    International Nuclear Information System (INIS)

    Galushko, N.P.; Erokhin, N.S.; Moiseev, S.S.

    1975-01-01

    In the present paper the distribution of source fields in a magnetoactive plasma is studied from the standpoint of the possibility of effective SHF heating of an inhomogeneous plasma in both the high (ω ≈ ω_pe) and low (ω ≈ ω_pi) frequency ranges, where ω_pe and ω_pi are the electron and ion plasma frequencies. The localization of the HF energy absorption regions in cold and hot plasma and the effect of plasma inhomogeneity and source dimensions on the absorption efficiency are investigated. The linear wave transformation in an inhomogeneous hot plasma is taken into consideration. Attention is paid to the difference between the localization of the absorption regions for collisional and non-collisional absorption. It is shown that the HF energy dissipation in plasma particle collisions is localized in the region of thin jets going from the source; the radiation field has a sharp peak in this region. At the same time, non-collisional HF energy dissipation is spread over the plasma volume as a result of Cherenkov and cyclotron wave attenuation. The essential contribution to the source field from resonances due to standing wave excitation in an inhomogeneous plasma shell near the source is pointed out

  3. Study (Prediction) of Main Pipes Break Rates in Water Distribution Systems Using Intelligent and Regression Methods

    Directory of Open Access Journals (Sweden)

    Massoud Tabesh

    2011-07-01

    Full Text Available Optimum operation of water distribution networks is one of the priorities of sustainable development of water resources, considering the need to increase efficiency and decrease water losses. One of the key subjects in optimum operational management of water distribution systems is the preparation of rehabilitation and replacement schemes, prediction of pipe break rates, and evaluation of network reliability. Several approaches have been presented in recent years regarding the prediction of pipe failure rates, each requiring particular data sets. Deterministic models based on age, deterministic multi-variable models, and stochastic group modeling are examples of solutions that relate pipe break rates to parameters like age, material, and diameter. In this paper, besides the mentioned parameters, additional factors such as pipe depth and hydraulic pressure are considered as well. Then, using the multi-variable regression method, intelligent approaches (artificial neural network and neuro-fuzzy models), and the evolutionary polynomial regression (EPR) method, pipe burst rates are predicted. To evaluate the results of the different approaches, a case study is carried out in a part of the Mashhad water distribution network. The results show the capability and advantages of the ANN and EPR methods in predicting pipe break rates, in comparison with the neuro-fuzzy and multi-variable regression methods.
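
    A minimal version of the multi-variable regression step can be sketched as follows. The covariates match those named above (age, diameter, depth, hydraulic pressure), but the data are synthetic placeholders rather than the Mashhad network records, and the ANN, neuro-fuzzy, and EPR variants are not reproduced here.

```python
# Multi-variable regression sketch for pipe break rates (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 500
age = rng.uniform(1, 50, n)           # years
diameter = rng.uniform(60, 400, n)    # mm
depth = rng.uniform(0.8, 2.5, n)      # m
pressure = rng.uniform(20, 60, n)     # m head

# Hypothetical generating relationship, for illustration only.
breaks = 0.02 * age - 0.002 * diameter + 0.5 / depth + 0.01 * pressure
breaks += rng.normal(0.0, 0.1, n)     # breaks per km per year (synthetic)

X = np.column_stack([age, diameter, depth, pressure])
X_tr, X_te, y_tr, y_te = train_test_split(X, breaks, random_state=0)
reg = LinearRegression().fit(X_tr, y_tr)
print("Held-out R^2:", round(reg.score(X_te, y_te), 3))
```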

  4. Life prediction for white OLED based on LSM under lognormal distribution

    Science.gov (United States)

    Zhang, Jianping; Liu, Fang; Liu, Yu; Wu, Helen; Zhu, Wenqing; Wu, Wenli; Wu, Liang

    2012-09-01

    In order to acquire the reliability information of white Organic Light Emitting Display (OLED) devices, three groups of OLED constant stress accelerated life tests (CSALTs) were carried out to obtain failure data for the samples. A lognormal distribution function was applied to describe the OLED life distribution, and the accelerated life equation was determined by the least squares method (LSM). The Kolmogorov-Smirnov test was performed to verify whether the white OLED life follows a lognormal distribution. Author-developed software was employed to predict the average life and the median life. The numerical results indicate that the white OLED life follows a lognormal distribution, and that the accelerated life equation follows the inverse power law. The estimated life information of the white OLED provides manufacturers and customers with important guidelines.
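
    The analysis chain described above, fitting a lognormal life distribution at each stress level and then fitting an inverse power law to the medians by least squares, can be sketched directly. The failure times and stress levels below are invented placeholders, not the white-OLED CSALT data.

```python
# Lognormal life fits per stress level, then a least-squares inverse power law.
import numpy as np
from scipy import stats

tests = {                                   # stress level : failure times (h), synthetic
    1.5: [420, 510, 630, 700, 820],
    2.0: [210, 260, 300, 360, 410],
    2.5: [120, 140, 165, 190, 220],
}

stresses, medians = [], []
for stress, times in tests.items():
    shape, loc, scale = stats.lognorm.fit(times, floc=0)   # scale = exp(mu) = median life
    stresses.append(stress)
    medians.append(scale)

# Inverse power law: median_life = C * stress**(-m)  ->  linear fit in log-log space.
slope, intercept = np.polyfit(np.log(stresses), np.log(medians), 1)
m, C = -slope, np.exp(intercept)
print(f"Fitted exponent m = {m:.2f}; predicted median life at unit stress = {C:.0f} h")
```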

  5. Predicting moisture content and density distribution of Scots pine by microwave scanning of sawn timber

    International Nuclear Information System (INIS)

    Johansson, J.; Hagman, O.; Fjellner, B.A.

    2003-01-01

    This study was carried out to investigate the possibility of calibrating a prediction model for the moisture content and density distribution of Scots pine (Pinus sylvestris) using microwave sensors. The material initially had green moisture content and was thereafter dried in several steps to zero moisture content. At each step, all the pieces were weighed, scanned with a microwave sensor (Satimo, 9.4 GHz), and computed tomography (CT)-scanned with a medical CT scanner (Siemens Somatom AR.T.). The output variables from the microwave sensor were used as predictors, and CT images that correlated with known moisture content were used as response variables. Multivariate models to predict average moisture content and density were calibrated using partial least squares (PLS) regression. The models for average moisture content and density were applied at the pixel level, and the distribution was visualized. The results show that it is possible to predict both moisture content distribution and density distribution with high accuracy using microwave sensors. (author)
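
    The calibration step, regressing the microwave sensor outputs onto known moisture content with partial least squares, can be sketched as below. The three "sensor output" columns and the generating relationship are synthetic placeholders, not the Satimo variables or the CT-derived references.

```python
# PLS calibration sketch: microwave sensor outputs -> average moisture content.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n = 300
X = rng.normal(size=(n, 3))                                   # hypothetical sensor outputs
y = 60 + 25 * X[:, 0] - 15 * X[:, 1] + rng.normal(0, 2, n)    # moisture content (%), synthetic

pls = PLSRegression(n_components=2).fit(X, y)
print("Calibration R^2:", round(pls.score(X, y), 3))
```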

  6. Do abundance distributions and species aggregation correctly predict macroecological biodiversity patterns in tropical forests?

    Science.gov (United States)

    Wiegand, Thorsten; Lehmann, Sebastian; Huth, Andreas; Fortin, Marie‐Josée

    2016-01-01

    Aim: It has been recently suggested that different 'unified theories of biodiversity and biogeography' can be characterized by three common 'minimal sufficient rules': (1) species abundance distributions follow a hollow curve, (2) species show intraspecific aggregation, and (3) species are independently placed with respect to other species. Here, we translate these qualitative rules into a quantitative framework and assess if these minimal rules are indeed sufficient to predict multiple macroecological biodiversity patterns simultaneously. Location: Tropical forest plots in Barro Colorado Island (BCI), Panama, and in Sinharaja, Sri Lanka. Methods: We assess the predictive power of the three rules using dynamic and spatial simulation models in combination with census data from the two forest plots. We use two different versions of the model: (1) a neutral model and (2) an extended model that allowed for species differences in dispersal distances. In a first step we derive model parameterizations that correctly represent the three minimal rules (i.e. the model quantitatively matches the observed species abundance distribution and the distribution of intraspecific aggregation). In a second step we applied the parameterized models to predict four additional spatial biodiversity patterns. Results: Species-specific dispersal was needed to quantitatively fulfil the three minimal rules. The model with species-specific dispersal correctly predicted the species-area relationship, but failed to predict the distance decay, the relationship between species abundances and aggregations, and the distribution of a spatial co-occurrence index of all abundant species pairs. These results were consistent over the two forest plots. Main conclusions: The three 'minimal sufficient' rules only provide an incomplete approximation of the stochastic spatial geometry of biodiversity in tropical forests. The assumption of independent interspecific placements is most

  7. Improved Predictions of the Geographic Distribution of Invasive Plants Using Climatic Niche Models

    Science.gov (United States)

    Ramírez-Albores, Jorge E.; Bustamante, Ramiro O.

    2016-01-01

    Climatic niche models for invasive plants are usually constructed with occurrence records taken from the literature and collections. Because these data neither discriminate among life-cycle stages of plants (adult or juvenile) nor the origin of individuals (naturally established or man-planted), the resulting models may mispredict the distribution ranges of these species. We propose that more accurate predictions could be obtained by modelling climatic niches with data from naturally established individuals, particularly with occurrence records of juvenile plants, because this would restrict the predictions of the models to those sites where climatic conditions allow the recruitment of the species. To test this proposal, we focused on the Peruvian peppertree (Schinus molle), a South American species that has widely invaded Mexico. Three climatic niche models were constructed for this species using a high-resolution dataset gathered in the field. The first model included all occurrence records, irrespective of the life-cycle stage or origin of peppertrees (generalized niche model). The second model only included occurrence records of naturally established mature individuals (adult niche model), while the third model was constructed with occurrence records of naturally established juvenile plants (regeneration niche model). When the models were compared, the generalized climatic niche model predicted the presence of peppertrees at sites located farther beyond the climatic thresholds that naturally established individuals can tolerate, suggesting that human activities influence the distribution of this invasive species. The adult and regeneration climatic niche models concurred in their predictions of the distribution of peppertrees, suggesting that naturally established adult trees only occur at sites where climatic conditions allow the recruitment of juvenile stages. These results support the proposal that climatic niches of invasive plants should be modelled with data of

  8. Identifying (subsurface) anthropogenic heat sources that influence temperature in the drinking water distribution system

    Science.gov (United States)

    Agudelo-Vera, Claudia M.; Blokker, Mirjam; de Kater, Henk; Lafort, Rob

    2017-09-01

    The water temperature in the drinking water distribution system and at customers' taps approaches the surrounding soil temperature at a depth of 1 m. Water temperature is an important determinant of water quality. In the Netherlands drinking water is distributed without additional residual disinfectant and the temperature of drinking water at customers' taps is not allowed to exceed 25 °C. In recent decades, the urban (sub)surface has been getting more occupied by various types of infrastructures, and some of these can be heat sources. Only recently have the anthropogenic sources and their influence on the underground been studied on coarse spatial scales. Little is known about the urban shallow underground heat profile on small spatial scales, of the order of 10 m × 10 m. Routine water quality samples at the tap in urban areas have shown up locations - so-called hotspots - in the city, with relatively high soil temperatures - up to 7 °C warmer - compared to the soil temperatures in the surrounding rural areas. Yet the sources and the locations of these hotspots have not been identified. It is expected that with climate change during a warm summer the soil temperature in the hotspots can be above 25 °C. The objective of this paper is to find a method to identify heat sources and urban characteristics that locally influence the soil temperature. The proposed method combines mapping of urban anthropogenic heat sources, retrospective modelling of the soil temperature, analysis of water temperature measurements at the tap, and extensive soil temperature measurements. This approach provided insight into the typical range of the variation of the urban soil temperature, and it is a first step to identifying areas with potential underground heat stress towards thermal underground management in cities.

  9. North Slope, Alaska: Source rock distribution, richness, thermal maturity, and petroleum charge

    Science.gov (United States)

    Peters, K.E.; Magoon, L.B.; Bird, K.J.; Valin, Z.C.; Keller, M.A.

    2006-01-01

    Four key marine petroleum source rock units were identified, characterized, and mapped in the subsurface to better understand the origin and distribution of petroleum on the North Slope of Alaska. These marine source rocks, from oldest to youngest, include four intervals: (1) Middle-Upper Triassic Shublik Formation, (2) basal condensed section in the Jurassic-Lower Cretaceous Kingak Shale, (3) Cretaceous pebble shale unit, and (4) Cretaceous Hue Shale. Well logs for more than 60 wells and total organic carbon (TOC) and Rock-Eval pyrolysis analyses for 1183 samples in 125 well penetrations of the source rocks were used to map the present-day thickness of each source rock and the quantity (TOC), quality (hydrogen index), and thermal maturity (Tmax) of the organic matter. Based on assumptions related to carbon mass balance and regional distributions of TOC, the present-day source rock quantity and quality maps were used to determine the extent of fractional conversion of the kerogen to petroleum and to map the original TOC (TOCo) and the original hydrogen index (HIo) prior to thermal maturation. The quantity and quality of oil-prone organic matter in Shublik Formation source rock generally exceeded that of the other units prior to thermal maturation (commonly TOCo > 4 wt.% and HIo > 600 mg hydrocarbon/g TOC), although all are likely sources for at least some petroleum on the North Slope. We used Rock-Eval and hydrous pyrolysis methods to calculate expulsion factors and petroleum charge for each of the four source rocks in the study area. Without attempting to identify the correct methods, we conclude that calculations based on Rock-Eval pyrolysis overestimate expulsion factors and petroleum charge because low pressure and rapid removal of thermally cracked products by the carrier gas retards cross-linking and pyrobitumen formation that is otherwise favored by natural burial maturation. Expulsion factors and petroleum charge based on hydrous pyrolysis may also be high

  10. The Density Functional Theory of Flies: Predicting distributions of interacting active organisms

    Science.gov (United States)

    Kinkhabwala, Yunus; Valderrama, Juan; Cohen, Itai; Arias, Tomas

    On October 2nd, 2016, 52 people were crushed in a stampede when a crowd panicked at a religious gathering in Ethiopia. The ability to predict the state of a crowd and whether it is susceptible to such transitions could help prevent such catastrophes. While current techniques such as agent based models can predict transitions in emergent behaviors of crowds, the assumptions used to describe the agents are often ad hoc and the simulations are computationally expensive making their application to real-time crowd prediction challenging. Here, we pursue an orthogonal approach and ask whether a reduced set of variables, such as the local densities, are sufficient to describe the state of a crowd. Inspired by the theoretical framework of Density Functional Theory, we have developed a system that uses only measurements of local densities to extract two independent crowd behavior functions: (1) preferences for locations and (2) interactions between individuals. With these two functions, we have accurately predicted how a model system of walking Drosophila melanogaster distributes itself in an arbitrary 2D environment. In addition, this density-based approach measures properties of the crowd from only observations of the crowd itself without any knowledge of the detailed interactions and thus it can make predictions about the resulting distributions of these flies in arbitrary environments, in real-time. This research was supported in part by ARO W911NF-16-1-0433.

  11. Predicting the distribution of bed material accumulation using river network sediment budgets

    Science.gov (United States)

    Wilkinson, Scott N.; Prosser, Ian P.; Hughes, Andrew O.

    2006-10-01

    Assessing the spatial distribution of bed material accumulation in river networks is important for determining the impacts of erosion on downstream channel form and habitat and for planning erosion and sediment management. A model that constructs spatially distributed budgets of bed material sediment is developed to predict the locations of accumulation following land use change. For each link in the river network, GIS algorithms are used to predict bed material supply from gullies, river banks, and upstream tributaries and to compare total supply with transport capacity. The model is tested in the 29,000 km2 Murrumbidgee River catchment in southeast Australia. It correctly predicts the presence or absence of accumulation in 71% of river links, which is significantly better performance than previous models, which do not account for spatial variability in sediment supply and transport capacity. Representing transient sediment storage is important for predicting smaller accumulations. Bed material accumulation is predicted in 25% of the river network, indicating its importance as an environmental problem in Australia.
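
    The budgeting logic, comparing total bed-material supply with transport capacity link by link and routing the transportable load downstream, can be sketched on a toy network. The three links, their supplies, and their capacities are invented numbers, not the Murrumbidgee data.

```python
# Link-by-link bed-material budget on a toy river network (topological order).
links = {
    "A": {"supply": 30.0, "capacity": 50.0, "downstream": "C"},
    "B": {"supply": 80.0, "capacity": 60.0, "downstream": "C"},
    "C": {"supply": 20.0, "capacity": 90.0, "downstream": None},
}

inflow = {name: 0.0 for name in links}
for name in ("A", "B", "C"):                       # upstream links processed first
    link = links[name]
    total_supply = link["supply"] + inflow[name]   # local sources plus upstream delivery
    accumulation = max(0.0, total_supply - link["capacity"])
    outflow = total_supply - accumulation          # what the link can pass downstream
    if link["downstream"]:
        inflow[link["downstream"]] += outflow
    print(f"link {name}: bed material accumulation = {accumulation:.1f} kt/yr")
```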

  12. Temperature distribution of a simplified rotor due to a uniform heat source

    Science.gov (United States)

    Welzenbach, Sarah; Fischer, Tim; Meier, Felix; Werner, Ewald; kyzy, Sonun Ulan; Munz, Oliver

    2018-03-01

    In gas turbines, high combustion efficiency as well as operational safety are required. Thus, labyrinth seal systems with honeycomb liners are commonly used. In the case of rubbing events in the seal system, the components can be damaged due to cyclic thermal and mechanical loads. Temperature differences occurring at labyrinth seal fins during rubbing events can be determined by considering a single heat source acting periodically on the surface of a rotating cylinder. Existing literature analysing the temperature distribution on rotating cylindrical bodies due to a stationary heat source is reviewed. The temperature distribution on the circumference of a simplified labyrinth seal fin is calculated using an available and easy-to-implement analytical approach. A finite element model of the simplified labyrinth seal fin is created and the numerical results are compared to the analytical results. The temperature distributions calculated by the analytical and the numerical approaches coincide for low sliding velocities, while there are discrepancies in the calculated maximum temperatures for higher sliding velocities. The use of the analytical approach allows a conservative estimation of the maximum temperatures arising in labyrinth seal fins during rubbing events. At the same time, high calculation costs can be avoided.

  13. A practical algorithm for distribution state estimation including renewable energy sources

    Energy Technology Data Exchange (ETDEWEB)

    Niknam, Taher [Electronic and Electrical Department, Shiraz University of Technology, Modares Blvd., P.O. 71555-313, Shiraz (Iran); Firouzi, Bahman Bahmani [Islamic Azad University Marvdasht Branch, Marvdasht (Iran)

    2009-11-15

    Renewable energy is energy that is in continuous supply over time. These kinds of energy sources are divided into five principal renewable sources of energy: the sun, the wind, flowing water, biomass and heat from within the earth. According to some studies carried out by the research institutes, about 25% of the new generation will be generated by Renewable Energy Sources (RESs) in the near future. Therefore, it is necessary to study the impact of RESs on the power systems, especially on the distribution networks. This paper presents a practical Distribution State Estimation (DSE) including RESs and some practical consideration. The proposed algorithm is based on the combination of Nelder-Mead simplex search and Particle Swarm Optimization (PSO) algorithms, called PSO-NM. The proposed algorithm can estimate load and RES output values by Weighted Least-Square (WLS) approach. Some practical considerations are var compensators, Voltage Regulators (VRs), Under Load Tap Changer (ULTC) transformer modeling, which usually have nonlinear and discrete characteristics, and unbalanced three-phase power flow equations. The comparison results with other evolutionary optimization algorithms such as original PSO, Honey Bee Mating Optimization (HBMO), Neural Networks (NNs), Ant Colony Optimization (ACO), and Genetic Algorithm (GA) for a test system demonstrate that PSO-NM is extremely effective and efficient for the DSE problems. (author)
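
    At the heart of the estimator is a weighted least-squares (WLS) mismatch between measured and calculated quantities. The sketch below minimises such an objective with SciPy's Nelder-Mead for brevity rather than the hybrid PSO-NM search, and the measurement vector, weights, and measurement model h(x) are trivial placeholders, not a real feeder model.

```python
# Weighted least-squares state estimation sketch with a placeholder model.
import numpy as np
from scipy.optimize import minimize

z = np.array([1.02, 0.98, 0.30])           # measurements (p.u.), assumed
weights = np.array([100.0, 100.0, 25.0])   # 1/variance of each measurement, assumed

def h(x):
    """Placeholder measurement functions of the two-element state vector x."""
    return np.array([x[0], x[1], x[0] - x[1]])

def wls_objective(x):
    residual = z - h(x)
    return float(np.sum(weights * residual**2))

result = minimize(wls_objective, x0=np.array([1.0, 1.0]), method="Nelder-Mead")
print("Estimated state:", result.x.round(4))
```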

  14. A review on the sources and spatial-temporal distributions of Pb in Jiaozhou Bay

    Science.gov (United States)

    Yang, Dongfang; Zhang, Jie; Wang, Ming; Zhu, Sixi; Wu, Yunjie

    2017-12-01

    This paper provides a review of the sources, spatial distribution, and temporal variations of Pb in Jiaozhou Bay, based on investigations of Pb in surface and bottom waters in different seasons during 1979-1983. The strengths of the Pb sources in Jiaozhou Bay showed increasing trends, and the pollution level of Pb in this bay was slight or moderate in the early stage of reform and opening-up. Pb contents in the marine bay were mainly determined by the strength and frequency of Pb inputs from human activities, and Pb could move from high-content areas to low-content areas in the ocean interior. Surface waters were polluted directly by human activities, and bottom waters were polluted through vertical water exchange. The process of spatial distribution of Pb in the waters included three steps: (1) Pb was transferred to surface waters in the bay, (2) Pb was transported within the surface waters, and (3) Pb was transferred to and accumulated in bottom waters.

  15. D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things.

    Science.gov (United States)

    Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B

    2018-01-01

    Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce the power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of classical DSC by employing the decoding delay concept, which enables the use of the maximum correlated portion of sensor samples during the event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications with massive numbers of sensors, towards the realization of the Internet of Sensing Things (IoST).

  16. Hierarchical spatial models for predicting pygmy rabbit distribution and relative abundance

    Science.gov (United States)

    Wilson, T.L.; Odei, J.B.; Hooten, M.B.; Edwards, T.C.

    2010-01-01

    Conservationists routinely use species distribution models to plan conservation, restoration and development actions, while ecologists use them to infer process from pattern. These models tend to work well for common or easily observable species, but are of limited utility for rare and cryptic species. This may be because honest accounting of known observation bias and spatial autocorrelation is rarely included, thereby limiting statistical inference of resulting distribution maps. We specified and implemented a spatially explicit Bayesian hierarchical model for a cryptic mammal species (pygmy rabbit Brachylagus idahoensis). Our approach used two levels of indirect sign that are naturally hierarchical (burrows and faecal pellets) to build a model that allows for inference on regression coefficients as well as spatially explicit model parameters. We also produced maps of rabbit distribution (occupied burrows) and relative abundance (number of burrows expected to be occupied by pygmy rabbits). The model demonstrated statistically rigorous spatial prediction by including spatial autocorrelation and measurement uncertainty. We demonstrated the flexibility of our modelling framework by depicting probabilistic distribution predictions using different assumptions of pygmy rabbit habitat requirements. Spatial representations of the variance of posterior predictive distributions were obtained to evaluate heterogeneity in model fit across the spatial domain. Leave-one-out cross-validation was conducted to evaluate the overall model fit. Synthesis and applications. Our method draws on the strengths of previous work, thereby bridging and extending two active areas of ecological research: species distribution models and multi-state occupancy modelling. Our framework can be extended to encompass both larger extents and other species for which direct estimation of abundance is difficult. © 2010 The Authors. Journal compilation © 2010 British Ecological Society.

  17. Shielding Characteristics Using an Ultrasonic Configurable Fan Artificial Noise Source to Generate Modes - Experimental Measurements and Analytical Predictions

    Science.gov (United States)

    Sutliff, Daniel L.; Walker, Bruce E.

    2014-01-01

    An Ultrasonic Configurable Fan Artificial Noise Source (UCFANS) was designed, built, and tested in support of the NASA Langley Research Center's 14x22 wind tunnel test of the Hybrid Wing Body (HWB) full 3-D 5.8% scale model. The UCFANS is a 5.8% rapid prototype scale model of a high-bypass turbofan engine that can generate the tonal signature of proposed engines using artificial sources (no flow). The purpose of the program was to provide an estimate of the acoustic shielding benefits possible from mounting an engine on the upper surface of a wing; a flat plate model was used as the shielding surface. Simple analytical simulations were used to preview the radiation patterns - Fresnel knife-edge diffraction was coupled with a dense phased array of point sources to compute shielded and unshielded sound pressure distributions for potential test geometries and excitation modes. Contour plots of sound pressure levels, and integrated power levels, from nacelle alone and shielded configurations for both the experimental measurements and the analytical predictions are presented in this paper.
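
    The shielding estimate described above (Fresnel knife-edge diffraction applied to an array of point sources) can be sketched as follows. The geometry, the source strengths and the ITU-style knife-edge loss approximation are assumptions chosen for illustration; this is not the authors' simulation code.

        # Minimal sketch: shielded level behind a knife edge (flat-plate shield) for a
        # small array of coherent point sources.  All geometry and constants are assumed.
        import numpy as np

        def knife_edge_loss_db(h, d1, d2, wavelength):
            """Approximate knife-edge diffraction loss (dB) from the Fresnel parameter nu."""
            nu = h * np.sqrt(2.0 * (d1 + d2) / (wavelength * d1 * d2))
            if nu < -0.78:
                return 0.0                     # edge well below the line of sight: no extra loss
            return 6.9 + 20.0 * np.log10(np.sqrt((nu - 0.1)**2 + 1.0) + nu - 0.1)

        def shielded_level_db(sources, observer, edge_x, edge_h, freq_hz, c=343.0):
            """Coherently sum point sources, attenuating each by its knife-edge loss."""
            lam = c / freq_hz
            k = 2.0 * np.pi / lam
            p = 0.0 + 0.0j
            for (xs, zs), amp in sources:
                r = np.hypot(observer[0] - xs, observer[1] - zs)
                d1, d2 = abs(edge_x - xs), abs(observer[0] - edge_x)
                h = edge_h - zs                # edge height above the source (simplified geometry)
                loss = 10.0 ** (-knife_edge_loss_db(h, d1, d2, lam) / 20.0)
                p += amp * loss * np.exp(-1j * k * r) / max(r, 1e-6)
            return 20.0 * np.log10(abs(p) + 1e-12)

        sources = [((0.0, 0.0), 1.0), ((0.02, 0.0), 1.0)]    # two in-phase point sources
        print(shielded_level_db(sources, observer=(5.0, -1.0),
                                edge_x=1.0, edge_h=0.5, freq_hz=10000.0))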

  18. Dual-Source Linear Energy Prediction (LINE-P) Model in the Context of WSNs.

    Science.gov (United States)

    Ahmed, Faisal; Tamberg, Gert; Le Moullec, Yannick; Annus, Paul

    2017-07-20

    Energy harvesting technologies such as miniature power solar panels and micro wind turbines are increasingly used to help power wireless sensor network nodes. However, a major drawback of energy harvesting is its varying and intermittent characteristic, which can negatively affect the quality of service. This calls for careful design and operation of the nodes, possibly by means of, e.g., dynamic duty cycling and/or dynamic frequency and voltage scaling. In this context, various energy prediction models have been proposed in the literature; however, they are typically compute-intensive or only suitable for a single type of energy source. In this paper, we propose Linear Energy Prediction "LINE-P", a lightweight, yet relatively accurate model based on approximation and sampling theory; LINE-P is suitable for dual-source energy harvesting. Simulations and comparisons against existing similar models have been conducted with low and medium resolutions (i.e., 60 and 22 min intervals/24 h) for the solar energy source (low variations) and with high resolutions (15 min intervals/24 h) for the wind energy source. The results show that the accuracy of the solar-based and wind-based predictions is up to approximately 98% and 96%, respectively, while requiring a lower complexity and memory than the other models. For the cases where LINE-P's accuracy is lower than that of other approaches, it still has the advantage of lower computing requirements, making it more suitable for embedded implementation, e.g., in wireless sensor network coordinator nodes or gateways.
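
    The sketch below is only a loose illustration of a lightweight predictor for harvested energy; LINE-P itself is built on approximation and sampling theory, and the slot length, blending weights and data used here are assumptions.

        # Illustrative only: predict the next day's per-slot harvested energy from past
        # days by blending the long-term profile with the most recent day.
        import numpy as np

        def predict_next_day(history, slots_per_day):
            """history holds whole past days, flattened; returns one predicted day."""
            days = np.asarray(history).reshape(-1, slots_per_day)
            baseline = days.mean(axis=0)          # long-term per-slot profile
            recent = days[-1]                     # most recent day, to track slow trends
            return 0.5 * baseline + 0.5 * recent  # simple linear combination

        # Example: 3 days of solar harvest sampled every 60 min (24 slots/day, arbitrary units).
        rng = np.random.default_rng(0)
        profile = np.clip(np.sin(np.linspace(0.0, np.pi, 24)), 0.0, None) * 100.0
        history = np.concatenate([profile + rng.normal(0.0, 5.0, 24) for _ in range(3)])
        print(np.round(predict_next_day(history, slots_per_day=24), 1))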

  19. Further comprehension of natural gas accumulation, distribution, and prediction prospects in China

    Directory of Open Access Journals (Sweden)

    Jun Li

    2017-06-01

    Full Text Available In-depth research reveals that natural gas accumulation and distribution are characterized by cyclicity, sequence, equilibrium, traceability, and multi-stage reservoir formation. To be specific, every geotectonic cycle represents a gas reservoir forming system in which natural gas is generated, migrates, accumulates, and forms reservoirs in a certain play. Essentially, hydrocarbon accumulation occurs when migration force and resistance reach an equilibrium; in this situation, the closer to the source rock, the higher the accumulation efficiency. Historically, reservoirs were formed in multiple phases. Moreover, zones within and adjacent to source rocks, unconformity belts, and faulted anticline belts are favorable areas for finding large gas fields. Apart from the common unconformity belts and faulted anticline belts, in-source and near-source zones should be considered critical targets for future exploration. Subsequent exploration should focus on the Upper Palaeozoic in the southeastern Ordos Basin, the Triassic in the southwestern Sichuan Basin, the Jurassic in the northern section of the Kuqa Depression, and other zones where no great breakthroughs have been made. Keywords: Large gas field, Distribution characteristics, Potential zone, Prospect

  20. The electron-dose distribution surrounding an 192Ir wire brachytherapy source investigated using EGS4 simulations and GafChromic film

    International Nuclear Information System (INIS)

    Cheung, Y.C.; Yu, P.K.N.; Young, E.C.M.; Wong, T.P.Y.

    1997-01-01

    The steep dose gradient around 192Ir brachytherapy wire implants is predicted by the EGS4 (PRESTA version) Monte Carlo simulation. When considering radiation absorbing regions close to the wire source, the accurate dose distribution cannot be calculated by the GE Target II Sun Sparc treatment-planning system. Experiments using GafChromic™ film have been performed to prove the validity of the EGS4 user code when calculating the dose close to the wire source in a low energy range. (Author)

  1. Imaging phase holdup distribution of three phase flow systems using dual source gamma ray tomography

    International Nuclear Information System (INIS)

    Varma, Rajneesh; Al-Dahhan, Muthanna; O'Sullivan, Joseph

    2008-01-01

    Full text: Multiphase reaction and process systems are used in abundance in the chemical and biochemical industry. Tomography has been successfully employed to visualize the hydrodynamics of multiphase systems. Most tomography methods (gamma ray, X-ray, and electrical capacitance and resistance) have been successfully implemented for two-phase dynamic systems. However, a significant number of chemical and biochemical systems consist of three dynamic phases. Research effort directed towards the development of tomography techniques to image such dynamic systems has met with partial success, for specific systems with applicability to limited operating conditions. A dual source tomography scanner has been developed that uses the 661 keV and 1332 keV photopeaks from 137Cs and 60Co for imaging three-phase systems. A new approach has been developed and applied that uses the polyenergetic Alternating Minimization (A-M) algorithm, developed by O'Sullivan and Benac (2007), for imaging the holdup distribution in three-phase dynamic systems. The new approach avoids the traditional post-image-processing approach to determining the holdup distribution, in which attenuation images of the mixed flow obtained from gamma ray photons of two different energies are used to determine the holdup of the three phases. In this approach the holdup images are reconstructed directly from the gamma ray transmission data. The dual source gamma ray tomography scanner and the algorithm were validated using a three-phase phantom. Based on the validation, three-phase holdup studies were carried out in a slurry bubble column containing gas, liquid, and solid phases in a dynamic state using dual energy gamma ray tomography. The key results of the holdup distribution studies in the slurry bubble column, along with the validation of the dual source gamma ray tomography system, are presented and discussed

  2. Spatiotemporal trends in Canadian domestic wild boar production and habitat predict wild pig distribution

    DEFF Research Database (Denmark)

    Michel, Nicole; Laforge, Michel; van Beest, Floris

    2017-01-01

    ... wild boar and test the propagule pressure hypothesis to improve predictive ability of an existing habitat-based model of wild pigs. We reviewed spatiotemporal patterns in domestic wild boar production across ten Canadian provinces during 1991–2011 and evaluated the ability of wild boar farm distribution to improve predictive models of wild pig occurrence using a resource selection probability function for wild pigs in Saskatchewan. Domestic wild boar production in Canada increased from 1991 to 2001 followed by sharp declines in all provinces. The distribution of domestic wild boar farms in 2006 ...... eradication of wild pigs is rarely feasible after establishment over large areas, effective management will depend on strengthening regulations and enforcement of containment practices for Canadian domestic wild boar farms. Initiation of coordinated provincial and federal efforts to implement population ...

  3. Advection-diffusion model for the simulation of air pollution distribution from a point source emission

    Science.gov (United States)

    Ulfah, S.; Awalludin, S. A.; Wahidin

    2018-01-01

    The advection-diffusion model is one of the mathematical models that can be used to understand the distribution of air pollutants in the atmosphere. This study uses a time-dependent 2D advection-diffusion model to simulate the air pollution distribution in order to find out whether the pollutants are more concentrated at ground level or near the source of emission under particular atmospheric conditions such as stable, unstable, and neutral conditions. Wind profile, eddy diffusivity, and temperature are considered as parameters in the model. The model is solved using an explicit finite difference method and visualized by a computer program developed in the Lazarus programming environment. The results show that atmospheric conditions alone do not conclusively determine the level of pollutant concentration, as each parameter in the model has its own effect under each atmospheric condition.
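
    A minimal explicit finite-difference step for a time-dependent 2D advection-diffusion model of a continuous point release might look as follows; the grid spacing, wind speed, eddy diffusivity and boundary treatment are placeholder assumptions, not the values used in the study.

        # Illustrative explicit finite-difference scheme: upwind advection along x,
        # diffusion along z, a no-flux ground boundary and a continuous point source.
        import numpy as np

        nx, nz = 200, 60
        dx, dz, dt = 10.0, 2.0, 0.5        # m, m, s
        u, K = 3.0, 1.5                     # wind speed (m/s), eddy diffusivity (m^2/s)
        C = np.zeros((nz, nx))              # concentration field C[z, x]
        src_k, src_i, q = 10, 2, 5.0        # source cell indices and emission rate

        # Stability of the explicit scheme requires u*dt/dx <= 1 and 2*K*dt/dz**2 <= 1.
        assert u * dt / dx <= 1.0 and 2.0 * K * dt / dz**2 <= 1.0

        for _ in range(2000):
            adv = -u * (C[:, 1:-1] - C[:, :-2]) / dx                              # upwind advection
            dif = K * (C[2:, 1:-1] - 2.0 * C[1:-1, 1:-1] + C[:-2, 1:-1]) / dz**2  # vertical diffusion
            C[1:-1, 1:-1] += dt * (adv[1:-1, :] + dif)
            C[0, :] = C[1, :]                                                     # no-flux ground
            C[src_k, src_i] += q * dt                                             # point source

        print("ground-level max:", C[0].max(), "max at source height:", C[src_k].max())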

  4. Positron energy distributions from a hybrid positron source based on channeling radiation

    International Nuclear Information System (INIS)

    Azadegan, B.; Mahdipour, A.; Dabagov, S.B.; Wagner, W.

    2013-01-01

    A hybrid positron source which is based on the generation of channeling radiation by relativistic electrons channeled along different crystallographic planes and axes of a tungsten single crystal and subsequent conversion of radiation into e⁺e⁻ pairs in an amorphous tungsten target is described. The photon spectra of channeling radiation are calculated using the Doyle–Turner approximation for the continuum potentials and classical equations of motion for channeled particles to obtain their trajectories, velocities and accelerations. The spectral-angular distributions of channeling radiation are found applying classical electrodynamics. Finally, the conversion of radiation into e⁺e⁻ pairs and the energy distributions of positrons are simulated using the GEANT4 package

  5. 137Cs source dose distribution using the Fricke Xylenol Gel dosimetry

    International Nuclear Information System (INIS)

    Sato, R.; De Almeida, A.; Moreira, M.V.

    2009-01-01

    Dosimetric measurements close to radioisotope sources, such as those used in brachytherapy, require high spatial resolution to avoid incorrect results in the steep dose gradient region. In this work the Fricke Xylenol Gel dosimeter was used to obtain the spatial dose distribution. The readings from a 137 Cs source were performed using two methods, visible spectrophotometer and CCD camera images. Good agreement with the Sievert summation method was found for the transversal axis dose profile within uncertainties of 4% and 5%, for the spectrophotometer and CCD camera respectively. Our results show that the dosimeter is adequate for brachytherapy dosimetry and, owing to its relatively fast and easy preparation and reading, it is recommended for quality control in brachytherapy applications.

  6. Hydrogen distribution in a containment with a high-velocity hydrogen-steam source

    International Nuclear Information System (INIS)

    Bloom, G.R.; Muhlestein, L.D.; Postma, A.K.; Claybrook, S.W.

    1982-09-01

    Hydrogen mixing and distribution tests are reported for a modeled high velocity hydrogen-steam release from a postulated small pipe break or release from a pressurizer relief tank rupture disk into the lower compartment of an Ice Condenser Plant. The tests, which in most cases used helium as a simulant for hydrogen, demonstrated that the lower compartment gas was well mixed for both hydrogen release conditions used. The gas concentration differences between any spatial locations were less than 3 volume percent during the hydrogen/steam release period and were reduced to less than 0.5 volume percent within 20 minutes after termination of the hydrogen source. The high velocity hydrogen/steam jet provided the dominant mixing mechanism; however, natural convection and forced air recirculation played important roles in providing a well mixed atmosphere following termination of the hydrogen source. 5 figures, 4 tables

  7. An open source platform for multi-scale spatially distributed simulations of microbial ecosystems

    Energy Technology Data Exchange (ETDEWEB)

    Segre, Daniel [Boston Univ., MA (United States)

    2014-08-14

    The goal of this project was to develop a tool for facilitating simulation, validation and discovery of multiscale dynamical processes in microbial ecosystems. This led to the development of an open-source software platform for Computation Of Microbial Ecosystems in Time and Space (COMETS). COMETS performs spatially distributed time-dependent flux balance based simulations of microbial metabolism. Our plan involved building the software platform itself, calibrating and testing it through comparison with experimental data, and integrating simulations and experiments to address important open questions on the evolution and dynamics of cross-feeding interactions between microbial species.

  8. Electron Source Brightness and Illumination Semi-Angle Distribution Measurement in a Transmission Electron Microscope.

    Science.gov (United States)

    Börrnert, Felix; Renner, Julian; Kaiser, Ute

    2018-05-21

    The electron source brightness is an important parameter in an electron microscope. Reliable and easy brightness measurement routes are not easily found. A determination method for the illumination semi-angle distribution in transmission electron microscopy is even less well documented. Herein, we report a simple measurement route for both entities and demonstrate it on a state-of-the-art instrument. The reduced axial brightness of the FEI X-FEG with a monochromator was determined to be larger than 10⁸ A/(m² sr V).
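
    For context, one commonly used definition of the reduced axial brightness (an assumption here, not necessarily the exact estimator of the paper) relates the probe current I, the effective source diameter d, the illumination semi-angle α and the accelerating voltage U:

        B_r \;=\; \frac{I}{A\,\Omega\,U}
            \;\approx\; \frac{4\,I}{\pi^{2}\, d^{2}\, \alpha^{2}\, U},
            \qquad A = \tfrac{\pi}{4} d^{2}, \quad \Omega \approx \pi \alpha^{2}.

    Estimating B_r in this way requires the illumination semi-angle distribution mentioned above, which is one reason the two measurements are naturally reported together.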

  9. Distribution, sources and health risk assessment of mercury in kindergarten dust

    Science.gov (United States)

    Sun, Guangyi; Li, Zhonggen; Bi, Xiangyang; Chen, Yupeng; Lu, Shuangfang; Yuan, Xin

    2013-07-01

    Mercury (Hg) contamination in urban area is a hot issue in environmental research. In this study, the distribution, sources and health risk of Hg in dust from 69 kindergartens in Wuhan, China, were investigated. In comparison with most other cities, the concentrations of total mercury (THg) and methylmercury (MeHg) were significantly elevated, ranging from 0.15 to 10.59 mg kg⁻¹ and from 0.64 to 3.88 μg kg⁻¹, respectively. Among the five different urban areas, the educational area had the highest concentrations of THg and MeHg. The GIS mapping was used to identify the hot-spot areas and assess the potential pollution sources of Hg. The emissions of coal-power plants and coking plants were the main sources of THg in the dust, whereas the contributions of municipal solid waste (MSW) landfills and iron and steel smelting related industries were not significant. However, the emission of MSW landfills was considered to be an important source of MeHg in the studied area. The result of health risk assessment indicated that there was a high adverse health effect of the kindergarten dust in terms of Hg contamination on the children living in the educational area (Hazard index (HI) = 6.89).

  10. Root distribution of Nitraria sibirica with seasonally varying water sources in a desert habitat.

    Science.gov (United States)

    Zhou, Hai; Zhao, Wenzhi; Zheng, Xinjun; Li, Shoujuan

    2015-07-01

    In water-limited environments, the water sources used by desert shrubs are critical to understanding hydrological processes. Here we studied the oxygen stable isotope ratios (δ¹⁸O) of stem water of Nitraria sibirica as well as those of precipitation, groundwater and soil water from different layers to identify the possible water sources for the shrub. The results showed that the shrub used a mixture of soil water, recent precipitation and groundwater, with shallow lateral roots and deeply penetrating tap (sinker) roots, in different seasons. During the wet period (in spring), a large proportion of stem water in N. sibirica was from snow melt and recent precipitation, but use of these sources declined sharply with the decreasing summer rain at the site. At the height of summer, N. sibirica mainly utilized deep soil water from its tap roots, not only supporting the growth of shoots but also keeping the shallow lateral roots well-hydrated. This flexibility allowed the plants to maintain normal metabolic processes during prolonged periods when little precipitation occurs and upper soil layers become extremely dry. With the increase in precipitation that occurs as winter approaches, the percentage of water in the stem base of a plant derived from the tap roots (deep soil water or ground water) decreased again. These results suggested that the shrub's root distribution and morphology were the most important determinants of its ability to utilize different water sources, and that its adjustment to water availability was significant for acclimation to the desert habitat.

  11. Distributions and sources of volatile chlorocarbons and bromocarbons in the Yellow Sea and East China Sea

    International Nuclear Information System (INIS)

    Yang, Bin; Yang, Gui-Peng; Lu, Xiao-Lan; Li, Li; He, Zhen

    2015-01-01

    Highlights: • Concentrations of the six VHOC were determined in the Yellow Sea and East China Sea. • VHOC distributions were affected by anthropogenic, biologic and hydrographic factors. • Diurnal variations of the six VHOC were observed. • Relationships between VHOC and related parameters were discussed. • Sources of the six VHOC were identified by principal component analysis. - Abstract: Six volatile halogenated organic compounds (VHOC), namely, chloroform, carbon tetrachloride, trichloroethylene, bromodichloromethane, dibromochloromethane, and bromoform, were studied in the Yellow Sea and East China Sea from April to May, 2009. The spatial variability of these VHOC was influenced by various factors, including anthropogenic inputs, biogenic production and complicated hydrographic features such as Changjiang Diluted Water, Yellow Sea Cold Water Mass, and Kuroshio Current. Diurnal study results showed that factors such as solar irradiation, biological activity, and tide affected the abundance of these VHOC. Correlation analyses revealed that bromodichloromethane was positively correlated with chlorophyll a in surface seawater. Principal component analysis suggested that chlorinated compounds like carbon tetrachloride originated from anthropogenic sources whereas brominated compounds such as bromodichloromethane originated from biogenic sources. Sources of other chlorinated and brominated compounds may not be governed by biological processes in the marine environment

  12. Fast optical source for quantum key distribution based on semiconductor optical amplifiers.

    Science.gov (United States)

    Jofre, M; Gardelein, A; Anzolin, G; Amaya, W; Capmany, J; Ursin, R; Peñate, L; Lopez, D; San Juan, J L; Carrasco, J A; Garcia, F; Torcal-Milla, F J; Sanchez-Brea, L M; Bernabeu, E; Perdigues, J M; Jennewein, T; Torres, J P; Mitchell, M W; Pruneri, V

    2011-02-28

    A novel integrated optical source capable of emitting faint pulses with different polarization states and with different intensity levels at 100 MHz has been developed. The source relies on a single laser diode followed by four semiconductor optical amplifiers and thin film polarizers, connected through a fiber network. The use of a single laser ensures high level of indistinguishability in time and spectrum of the pulses for the four different polarizations and three different levels of intensity. The applicability of the source is demonstrated in the lab through a free space quantum key distribution experiment which makes use of the decoy state BB84 protocol. We achieved a lower bound secure key rate of the order of 3.64 Mbps and a quantum bit error ratio as low as 1.14×10⁻² while the lower bound secure key rate became 187 bps for an equivalent attenuation of 35 dB. To our knowledge, this is the fastest polarization encoded QKD system which has been reported so far. The performance, reduced size, low power consumption and the fact that the components used can be space qualified make the source particularly suitable for secure satellite communication.

  13. Plans for a Collaboratively Developed Distributed Control System for the Spallation Neutron Source

    International Nuclear Information System (INIS)

    DeVan, W.R.; Gurd, D.P.; Hammonds, J.; Lewis, S.A.; Smith, J.D.

    1999-01-01

    The Spallation Neutron Source (SNS) is an accelerator-based pulsed neutron source to be built in Oak Ridge, Tennessee. The facility has five major sections - a "front end" consisting of a 65 keV H⁻ ion source followed by a 2.5 MeV RFQ; a 1 GeV linac; a storage ring; a 1 MW spallation neutron target (upgradeable to 2 MW); the conventional facilities to support these machines and a suite of neutron scattering instruments to exploit them. These components will be designed and implemented by five collaborating institutions: Lawrence Berkeley National Laboratory (Front End); Los Alamos National Laboratory (Linac); Brookhaven National Laboratory (Storage Ring); Argonne National Laboratory (Instruments); and Oak Ridge National Laboratory (Neutron Source and Conventional Facilities). It is proposed to implement a fully integrated control system for all aspects of this complex. The system will be developed collaboratively, with some degree of local autonomy for distributed systems, but centralized accountability. Technical integration will be based upon the widely-used EPICS control system toolkit, and a complete set of hardware and software standards. The scope of the integrated control system includes site-wide timing and synchronization, networking and machine protection. This paper discusses the technical and organizational issues of planning a large control system to be developed collaboratively at five different institutions, the approaches being taken to address those issues, as well as some of the particular technical challenges for the SNS control system

  14. On Road Study of Colorado Front Range Greenhouse Gases Distribution and Sources

    Science.gov (United States)

    Petron, G.; Hirsch, A.; Trainer, M. K.; Karion, A.; Kofler, J.; Sweeney, C.; Andrews, A.; Kolodzey, W.; Miller, B. R.; Miller, L.; Montzka, S. A.; Kitzis, D. R.; Patrick, L.; Frost, G. J.; Ryerson, T. B.; Robers, J. M.; Tans, P.

    2008-12-01

    The Global Monitoring Division and Chemical Sciences Division of the NOAA Earth System Research Laboratory teamed up over the summer of 2008 to experiment with a new measurement strategy to characterize greenhouse gas distributions and sources in the Colorado Front Range. Combining expertise in greenhouse gas measurements and in local- to regional-scale air quality intensive campaigns, we built the 'Hybrid Lab'. A continuous CO2 and CH4 cavity ring down spectroscopic analyzer (Picarro, Inc.), a CO gas-filter correlation instrument (Thermo Environmental, Inc.) and a continuous UV absorption ozone monitor (2B Technologies, Inc., model 202SC) have been installed securely onboard a 2006 Toyota Prius Hybrid vehicle with an inlet bringing in outside air from a few meters above the ground. To better characterize point and distributed sources, air samples were taken with a Portable Flask Package (PFP) for later multiple species analysis in the lab. A GPS unit hooked up to the ozone analyzer and another one installed on the PFP kept track of our location, allowing us to map measured concentrations along the driving route using Google Earth. The Hybrid Lab went out for several drives in the vicinity of the NOAA Boulder Atmospheric Observatory (BAO) tall tower located in Erie, CO, covering areas from Boulder, Denver, Longmont, Fort Collins and Greeley. Enhancements in CO2 and CO and destruction of ozone mainly reflect emissions from traffic. Methane enhancements, however, are clearly correlated with nearby point sources (landfill, feedlot, natural gas compressor ...) or with larger-scale air masses advected from NE Colorado, where oil and gas drilling operations are widespread. The multiple species analysis (hydrocarbons, CFCs, HFCs) of the air samples collected along the way provides insightful information about the methane sources at play. We will present results of the analysis and interpretation of the Hybrid Lab Front Range Study and conclude with perspectives

  15. The occurrence and distribution of a group of organic micropollutants in Mexico City's water sources.

    Science.gov (United States)

    Félix-Cañedo, Thania E; Durán-Álvarez, Juan C; Jiménez-Cisneros, Blanca

    2013-06-01

    The occurrence and distribution of a group of 17 organic micropollutants in surface and groundwater sources from Mexico City were determined. Water samples were taken from 7 wells, 4 dams and 15 tanks where surface and groundwater are mixed and stored before distribution. Results showed the occurrence of seven of the target compounds in groundwater: salicylic acid, diclofenac, di-2-ethylhexylphthalate (DEHP), butylbenzylphthalate (BBP), triclosan, bisphenol A (BPA) and 4-nonylphenol (4-NP). In surface water, 11 target pollutants were detected: the same found in groundwater as well as naproxen, ibuprofen, ketoprofen and gemfibrozil. In groundwater, concentration ranges of salicylic acid, 4-NP and DEHP, the most frequently found compounds, were 1-464, 1-47 and 19-232 ng/L, respectively; while in surface water, these ranges were 29-309, 89-655 and 75-2,282 ng/L, respectively. Eleven target compounds were detected in mixed water. Concentrations in mixed water were higher than those determined in groundwater but lower than those detected in surface water. Unlike in ground and surface water, the pesticide 2,4-D was found in mixed water, indicating that some pollutants can reach areas where they are not originally present in the local water sources. Concentrations of the organic micropollutants found in this study were similar to or lower than those reported for water sources in developed countries. This study provides information that enriches the state of the art on the occurrence of organic micropollutants in water sources worldwide, notably in megacities of developing countries. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. CDFMC: a program that calculates the fixed neutron source distribution for a BWR using Monte Carlo

    International Nuclear Information System (INIS)

    Gomez T, A.M.; Xolocostli M, J.V.; Palacios H, J.C.

    2006-01-01

    The three-dimensional neutron flux calculation using the synthesis method requires the determination of the neutron flux in two two-dimensional configurations as well as in a one-dimensional one. Most standard guides for calculating the neutron flux or fluence in the vessel of a nuclear reactor place special emphasis on the appropriate calculation of the fixed neutron source that must be provided to the transport code used, so that sufficiently accurate flux values can be found. The reactor core assembly configuration is based on X-Y geometry; however, the problem considered is solved in R-θ geometry, so an appropriate mapping is needed to find the source term associated with the R-θ intervals starting from a source distribution in rectangular coordinates. To develop the CDFMC computer program (Source Distribution Calculation using Monte Carlo), it was necessary to develop a mapping approach independent of those available in the literature. The mesh-overlapping method used here is based on the generation of random points, a technique commonly known as Monte Carlo. Although the 'randomness' of this technique implies errors in the calculations, it is well known that increasing the number of randomly generated points used to measure an area, or some other quantity of interest, increases the precision of the method. In the particular case of the CDFMC computer program, the developed technique reaches good overall behavior when a considerably high number of points is used (greater than or equal to one hundred thousand), which ensures calculation errors on the order of 1%. (Author)
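
    The mesh-overlap idea can be sketched as follows: random points sampled over the X-Y core carry the source strength of the assembly they fall in and are scored into R-θ bins. The mesh sizes, the uniform-within-assembly assumption and all names below are illustrative; this is not the CDFMC implementation.

        # Toy Monte Carlo mapping of an assembly-wise (x, y) source map onto an (r, theta) mesh.
        import numpy as np

        def map_xy_to_rtheta(source_xy, pitch, r_edges, th_edges, n_points=100000, seed=1):
            """source_xy[j, i] is the source strength of the assembly in row j, column i."""
            rng = np.random.default_rng(seed)
            ny, nx = source_xy.shape
            out = np.zeros((len(r_edges) - 1, len(th_edges) - 1))
            # Sample points uniformly over the core; each carries its assembly's source value.
            x = rng.uniform(0.0, nx * pitch, n_points)
            y = rng.uniform(0.0, ny * pitch, n_points)
            w = source_xy[(y // pitch).astype(int), (x // pitch).astype(int)]
            r, th = np.hypot(x, y), np.arctan2(y, x)
            ir = np.digitize(r, r_edges) - 1
            it = np.digitize(th, th_edges) - 1
            ok = (ir >= 0) & (ir < out.shape[0]) & (it >= 0) & (it < out.shape[1])
            np.add.at(out, (ir[ok], it[ok]), w[ok])
            return out / n_points      # statistical error falls roughly as 1/sqrt(n_points)

        core = np.array([[1.0, 0.8],
                         [0.8, 0.5]])  # toy 2x2 assembly source map
        print(map_xy_to_rtheta(core, pitch=20.0,
                               r_edges=np.linspace(0.0, 60.0, 7),
                               th_edges=np.linspace(0.0, np.pi / 2.0, 5)).round(4))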

  17. Multivariate models for prediction of rheological characteristics of filamentous fermentation broth from the size distribution.

    Science.gov (United States)

    Petersen, Nanna; Stocks, Stuart; Gernaey, Krist V

    2008-05-01

    The main purpose of this article is to demonstrate that principal component analysis (PCA) and partial least squares regression (PLSR) can be used to extract information from particle size distribution data and predict rheological properties. Samples from commercially relevant Aspergillus oryzae fermentations conducted in 550 L pilot scale tanks were characterized with respect to particle size distribution, biomass concentration, and rheological properties. The rheological properties were described using the Herschel-Bulkley model. Estimation of all three parameters in the Herschel-Bulkley model (yield stress τ_y, consistency index K, and flow behavior index n) resulted in a large standard deviation of the parameter estimates. The flow behavior index was not found to be correlated with any of the other measured variables and previous studies have suggested a constant value of the flow behavior index in filamentous fermentations. It was therefore chosen to fix this parameter to the average value thereby decreasing the standard deviation of the estimates of the remaining rheological parameters significantly. Using a PLSR model, a reasonable prediction of apparent viscosity (μ_app), yield stress (τ_y), and consistency index (K) could be made from the size distributions, biomass concentration, and process information. This provides a predictive method with a high predictive power for the rheology of fermentation broth, and with the advantage over previous models that τ_y and K can be predicted as well as μ_app. Validation on an independent test set yielded a root mean square error of 1.21 Pa for τ_y, 0.209 Pa·s^n for K, and 0.0288 Pa·s for μ_app, corresponding to R² = 0.95, R² = 0.94, and R² = 0.95, respectively. Copyright 2007 Wiley Periodicals, Inc.
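
    A hedged sketch of the modelling idea (not the published model) is given below: a PLS regression from particle-size-distribution channels plus biomass concentration to the Herschel-Bulkley parameters, with the flow-behavior index n held fixed. All data and parameter values are synthetic assumptions.

        # Sketch only: PLS regression predicting tau_y and K from size-distribution features.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(42)
        n_samples, n_bins = 120, 30
        X_size = rng.lognormal(0.0, 0.4, (n_samples, n_bins))      # size-distribution "channels"
        biomass = rng.uniform(10.0, 40.0, (n_samples, 1))          # g/L
        X = np.hstack([X_size, biomass])
        tau_y = 0.3 * biomass[:, 0] + X_size[:, :10].sum(axis=1) + rng.normal(0, 0.5, n_samples)
        K = 0.05 * biomass[:, 0] + 0.2 * X_size[:, 10:20].sum(axis=1) + rng.normal(0, 0.2, n_samples)
        Y = np.column_stack([tau_y, K])                            # predict tau_y and K jointly

        X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)
        pls = PLSRegression(n_components=5).fit(X_tr, Y_tr)
        Y_hat = pls.predict(X_te)
        print("R^2 (tau_y, K):", [round(r2_score(Y_te[:, i], Y_hat[:, i]), 2) for i in range(2)])

        # Apparent viscosity then follows from the Herschel-Bulkley model at a chosen shear rate.
        n_fixed, gamma_dot = 0.45, 100.0
        mu_app = Y_hat[:, 0] / gamma_dot + Y_hat[:, 1] * gamma_dot ** (n_fixed - 1.0)
        print("predicted mu_app (first 3):", np.round(mu_app[:3], 3))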

  18. Comparison of predicted far-field temperatures for discrete and smeared heat sources

    International Nuclear Information System (INIS)

    Ryder, E.E.

    1992-01-01

    A fundamental concern in the design of the potential repository at Yucca Mountain, Nevada is the response of the host rock to the emplacement of heat-generating waste. The thermal perturbation of the rock mass has implications regarding the structural, hydrologic, and geochemical performance of the potential repository. The phenomenological coupling of many of these performance aspects makes repository thermal modeling a difficult task. For many of the more complex, coupled models, it is often necessary to reduce the geometry of the potential repository to a smeared heat-source approximation. Such simplifications have impacts on induced thermal profiles that in turn may influence other predicted responses through one- or two-way thermal couplings. The effect of waste emplacement layout on host-rock thermal response was chosen as the primary emphasis of this study. Using a consistent set of modeling and input assumptions, far-field thermal response predictions were made for discrete-source as well as plate-source approximations of the repository geometry. Input values used in the simulations are consistent with a design-basis areal power density (APD) of 80 kW/acre as would be achieved assuming a 2010 emplacement start date, a levelized receipt schedule, and a limitation on available area as published in previous design studies. It was found that edge effects resulting from general repository layout have a significant influence on the shapes and extents of isothermal profiles, and should be accounted for in far-field modeling efforts

  19. Prediction of Near-Field Wave Attenuation Due to a Spherical Blast Source

    Science.gov (United States)

    Ahn, Jae-Kwang; Park, Duhee

    2017-11-01

    Empirical and theoretical far-field attenuation relationships, which do not capture the near-field response, are most often used to predict the peak amplitude of a blast wave. Jiang et al. (Vibration due to a buried explosive source. PhD Thesis, Curtin University, Western Australian School of Mines, 1993) present rigorous wave equations that simulate the near-field attenuation due to a spherical blast source in damped and undamped media. However, the effects of loading frequency and the velocity of the medium have not yet been investigated. We perform a suite of axisymmetric, dynamic finite difference analyses to simulate the propagation of stress waves induced by a spherical blast source and to quantify the near-field attenuation. A broad range of loading frequencies, wave velocities, and damping ratios are used in the simulations. The near-field effect is revealed to be proportional to the rise time of the impulse load and the wave velocity. We propose an empirical additive function to the theoretical far-field attenuation curve to predict the near-field range and attenuation. The proposed curve is validated against measurements recorded in a test blast.
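
    Purely as an illustration of the structure described above (a theoretical far-field attenuation curve plus an additive near-field correction whose range grows with rise time and wave velocity), one might write the following; the functional forms and constants are assumptions, not the empirical function proposed in the paper.

        # Illustrative shape only: far-field attenuation plus an assumed additive near-field term.
        import numpy as np

        def far_field(r, a0=1.0, r0=1.0, n=1.0, alpha=0.02):
            """Geometric spreading (r^-n) with material damping exp(-alpha*(r - r0))."""
            return a0 * (r0 / r) ** n * np.exp(-alpha * (r - r0))

        def with_near_field(r, rise_time, velocity, k=0.5):
            """Add an assumed correction that decays within the estimated near-field range."""
            near_range = k * rise_time * velocity               # crude near-field extent (m)
            correction = far_field(near_range) * np.exp(-r / max(near_range, 1e-9))
            return far_field(r) + correction

        r = np.linspace(1.0, 100.0, 200)
        print(np.round(with_near_field(r, rise_time=2e-3, velocity=2000.0)[:5], 4))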

  20. A maximum entropy model for predicting wild boar distribution in Spain

    Directory of Open Access Journals (Sweden)

    Jaime Bosch

    2014-09-01

    Full Text Available Wild boar (Sus scrofa) populations in many areas of the Palearctic including the Iberian Peninsula have grown continuously over the last century. This increase has led to numerous different types of conflicts due to the damage these mammals can cause to agriculture, the problems they create in the conservation of natural areas, and the threat they pose to animal health. In the context of both wildlife management and the design of health programs for disease control, it is essential to know how wild boar are distributed on a large spatial scale. Given that quantifying the distribution of wild species using census techniques is virtually impossible in large-scale studies, modeling techniques must instead be used to estimate animals' distributions, densities, and abundances. In this study, the potential distribution of wild boar in Spain was predicted by integrating presence data and environmental variables into a MaxEnt approach. We built and tested models using 100 bootstrapped replicates. For each replicate or simulation, presence data was divided into two subsets that were used for model fitting (60% of the data) and cross-validation (40% of the data). The final model was found to be accurate with an area under the receiver operating characteristic curve (AUC) value of 0.79. Six explanatory variables for predicting wild boar distribution were identified on the basis of the percentage of their contribution to the model. The model exhibited a high degree of predictive accuracy, which has been confirmed by its agreement with satellite images and field surveys.
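
    The evaluation workflow described above (100 replicates, a 60%/40% fitting/cross-validation split, and AUC scoring) can be sketched as follows, with a presence-background logistic regression standing in for MaxEnt; the environmental data are synthetic and the stand-in model is an assumption, not the study's MaxEnt fit.

        # Sketch of the replicate/split/AUC evaluation loop with synthetic presence-background data.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(7)
        n_pres, n_back, n_env = 400, 2000, 6
        presence = rng.normal(0.5, 1.0, (n_pres, n_env))    # environmental values at occurrences
        background = rng.normal(0.0, 1.0, (n_back, n_env))  # background (pseudo-absence) sample
        X = np.vstack([presence, background])
        y = np.r_[np.ones(n_pres), np.zeros(n_back)]

        aucs = []
        for _ in range(100):                                 # replicates: fresh random 60/40 split each time
            idx = rng.permutation(len(y))
            cut = int(0.6 * len(idx))                        # 60% model fitting / 40% cross-validation
            fit, val = idx[:cut], idx[cut:]
            model = LogisticRegression(max_iter=1000).fit(X[fit], y[fit])
            aucs.append(roc_auc_score(y[val], model.predict_proba(X[val])[:, 1]))
        print("mean AUC over replicates:", round(float(np.mean(aucs)), 3))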

  1. Potential Distribution Predicted for Rhynchophorus ferrugineus in China under Different Climate Warming Scenarios.

    Directory of Open Access Journals (Sweden)

    Xuezhen Ge

    Full Text Available As the primary pest of palm trees, Rhynchophorus ferrugineus (Olivier) (Coleoptera: Curculionidae) has caused serious harm to palms since it first invaded China. The present study used CLIMEX 1.1 to predict the potential distribution of R. ferrugineus in China according to both current climate data (1981-2010) and future climate warming estimates based on simulated climate data for the 2020s (2011-2040) provided by the Tyndall Center for Climate Change Research (TYN SC 2.0). Additionally, the Ecoclimatic Index (EI) values calculated for different climatic conditions (current and future, as simulated by the B2 scenario) were compared. Areas with a suitable climate for R. ferrugineus distribution were located primarily in central China according to the current climate data, with the northern boundary of the distribution reaching 40.1°N and including Tibet, north Sichuan, central Shaanxi, south Shanxi, and east Hebei. There was little difference in the potential distribution predicted by the four emission scenarios according to future climate warming estimates. The primary prediction under future climate warming models was that, compared with the current climate model, the number of highly favorable habitats would increase significantly and expand into northern China, whereas the number of both favorable and marginally favorable habitats would decrease. Contrast analysis of EI values suggested that climate change and the density of site distribution were the main factors driving the changes in EI values. These results will help to improve control measures, prevent the spread of this pest, and revise the targeted quarantine areas.

  2. High resolution stationary digital breast tomosynthesis using distributed carbon nanotube x-ray source array.

    Science.gov (United States)

    Qian, Xin; Tucker, Andrew; Gidcumb, Emily; Shan, Jing; Yang, Guang; Calderon-Colon, Xiomara; Sultana, Shabana; Lu, Jianping; Zhou, Otto; Spronk, Derrek; Sprenger, Frank; Zhang, Yiheng; Kennedy, Don; Farbizio, Tom; Jing, Zhenxue

    2012-04-01

    The purpose of this study is to investigate the feasibility of increasing the system spatial resolution and scanning speed of the Hologic Selenia Dimensions digital breast tomosynthesis (DBT) scanner by replacing the rotating mammography x-ray tube with a specially designed carbon nanotube (CNT) x-ray source array, which generates all the projection images needed for tomosynthesis reconstruction by electronically activating individual x-ray sources without any mechanical motion. The stationary digital breast tomosynthesis (s-DBT) design aims to (i) increase the system spatial resolution by eliminating image blurring due to x-ray tube motion and (ii) reduce the scanning time. Low spatial resolution and long scanning time are the two main technical limitations of current DBT technology. A CNT x-ray source array was designed and evaluated against a set of targeted system performance parameters. Simulations were performed to determine the maximum anode heat load at the desired focal spot size and to design the electron focusing optics. Field emission current from the CNT cathode was measured for an extended period of time to determine the stable lifetime of the CNT cathode for an expected clinical operation scenario. The source array was manufactured, tested, and integrated with a Selenia scanner. An electronic control unit was developed to interface the source array with the detection system and to scan and regulate x-ray beams. The performance of the s-DBT system was evaluated using physical phantoms. The spatially distributed CNT x-ray source array comprised 31 individually addressable x-ray sources covering a 30° angular span with 1° pitch and an isotropic focal spot size of 0.6 mm at full width at half-maximum. Stable operation at 28 kVp anode voltage and 38 mA tube current was demonstrated with extended lifetime and good source-to-source consistency. For the standard imaging protocol of 15 views over 14°, 100 mAs dose, and 2 × 2 detector binning, the projection

  3. Predicting the geographical distribution of two invasive termite species from occurrence data.

    Science.gov (United States)

    Tonini, Francesco; Divino, Fabio; Lasinio, Giovanna Jona; Hochmair, Hartwig H; Scheffrahn, Rudolf H

    2014-10-01

    Predicting the potential habitat of species under both current and future climate change scenarios is crucial for monitoring invasive species and understanding a species' response to different environmental conditions. Frequently, the only data available on a species is the location of its occurrence (presence-only data). Using occurrence records only, two models were used to predict the geographical distribution of two destructive invasive termite species, Coptotermes gestroi (Wasmann) and Coptotermes formosanus Shiraki. The first model uses a Bayesian linear logistic regression approach adjusted for presence-only data while the second one is the widely used maximum entropy approach (Maxent). Results show that the predicted distributions of both C. gestroi and C. formosanus are strongly linked to urban development. The impact of future scenarios such as climate warming and population growth on the biotic distribution of both termite species was also assessed. Future climate warming seems to affect their projected probability of presence to a lesser extent than population growth. The Bayesian logistic approach outperformed Maxent consistently in all models according to evaluation criteria such as model sensitivity and ecological realism. The importance of further studies for an explicit treatment of residual spatial autocorrelation and a more comprehensive comparison between both statistical approaches is suggested.

  4. Predicting the potential distribution of the amphibian pathogen Batrachochytrium dendrobatidis in East and Southeast Asia.

    Science.gov (United States)

    Moriguchi, Sachiko; Tominaga, Atsushi; Irwin, Kelly J; Freake, Michael J; Suzuki, Kazutaka; Goka, Koichi

    2015-04-08

    Batrachochytrium dendrobatidis (Bd) is the pathogen responsible for chytridiomycosis, a disease that is associated with a worldwide amphibian population decline. In this study, we predicted the potential distribution of Bd in East and Southeast Asia based on limited occurrence data. Our goal was to design an effective survey area where efforts to detect the pathogen can be focused. We generated ecological niche models using the maximum-entropy approach, with alleviation of multicollinearity and spatial autocorrelation. We applied eigenvector-based spatial filters as independent variables, in addition to environmental variables, to resolve spatial autocorrelation, and compared the model's accuracy and the degree of spatial autocorrelation with those of a model estimated using only environmental variables. We were able to identify areas of high suitability for Bd with accuracy. Among the environmental variables, factors related to temperature and precipitation were more effective in predicting the potential distribution of Bd than factors related to land use and cover type. Our study successfully predicted the potential distribution of Bd in East and Southeast Asia. This information should now be used to prioritize survey areas and generate a surveillance program to detect the pathogen.

  5. Distributed Learning, Recognition, and Prediction by ART and ARTMAP Neural Networks.

    Science.gov (United States)

    Carpenter, Gail A.

    1997-11-01

    A class of adaptive resonance theory (ART) models for learning, recognition, and prediction with arbitrarily distributed code representations is introduced. Distributed ART neural networks combine the stable fast learning capabilities of winner-take-all ART systems with the noise tolerance and code compression capabilities of multilayer perceptrons. With a winner-take-all code, the unsupervised model dART reduces to fuzzy ART and the supervised model dARTMAP reduces to fuzzy ARTMAP. With a distributed code, these networks automatically apportion learned changes according to the degree of activation of each coding node, which permits fast as well as slow learning without catastrophic forgetting. Distributed ART models replace the traditional neural network path weight with a dynamic weight equal to the rectified difference between coding node activation and an adaptive threshold. Thresholds increase monotonically during learning according to a principle of atrophy due to disuse. However, monotonic change at the synaptic level manifests itself as bidirectional change at the dynamic level, where the result of adaptation resembles long-term potentiation (LTP) for single-pulse or low frequency test inputs but can resemble long-term depression (LTD) for higher frequency test inputs. This paradoxical behavior is traced to dual computational properties of phasic and tonic coding signal components. A parallel distributed match-reset-search process also helps stabilize memory. Without the match-reset-search system, dART becomes a type of distributed competitive learning network.
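
    A toy rendering of the dynamic-weight idea described above (the path signal is the rectified difference between a coding node's activation and an adaptive threshold that can only increase) is given below; the parameter values and the exact learning rule are illustrative, not the dART/dARTMAP equations.

        # Toy sketch: dynamic weights as rectified differences with monotonically rising thresholds.
        import numpy as np

        class DynamicWeights:
            def __init__(self, n_coding, n_output, lr=0.1):
                self.tau = np.zeros((n_coding, n_output))   # adaptive thresholds (only increase)
                self.lr = lr

            def signal(self, y):
                """Dynamic weight: rectified difference [y_j - tau_jk]^+ on each path."""
                return np.maximum(y[:, None] - self.tau, 0.0)

            def learn(self, y, k):
                """Raise thresholds on paths to the active output k, apportioned by the
                distributed activation y; thresholds never exceed the activation itself."""
                self.tau[:, k] = np.minimum(y, self.tau[:, k] + self.lr * self.signal(y)[:, k])

        dw = DynamicWeights(n_coding=4, n_output=2)
        y = np.array([0.8, 0.4, 0.1, 0.0])                  # distributed coding-layer activation
        for _ in range(5):
            dw.learn(y, k=0)
        print(dw.signal(y)[:, 0].round(3))                  # path signals shrink as thresholds rise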

  6. Predictive modeling and mapping of Malayan Sun Bear (Helarctos malayanus) distribution using maximum entropy.

    Science.gov (United States)

    Nazeri, Mona; Jusoff, Kamaruzaman; Madani, Nima; Mahmud, Ahmad Rodzi; Bahman, Abdul Rani; Kumar, Lalit

    2012-01-01

    One of the available tools for mapping the geographical distribution and potential suitable habitats is species distribution models. These techniques are very helpful for finding poorly known distributions of species in poorly sampled areas, such as the tropics. Maximum Entropy (MaxEnt) is a recently developed modeling method that can be successfully calibrated using a relatively small number of records. In this research, the MaxEnt model was applied to describe the distribution and identify the key factors shaping the potential distribution of the vulnerable Malayan Sun Bear (Helarctos malayanus) in one of the main remaining habitats in Peninsular Malaysia. MaxEnt results showed that even though Malaysian sun bear habitat is tied with tropical evergreen forests, it lives in a marginal threshold of bio-climatic variables. On the other hand, current protected area networks within Peninsular Malaysia do not cover most of the sun bear's potential suitable habitats. Assuming that the predicted suitability map covers the sun bear's actual distribution, future climate change, forest degradation and illegal hunting could potentially severely affect the sun bear's population.

  7. Predictive modeling and mapping of Malayan Sun Bear (Helarctos malayanus) distribution using maximum entropy.

    Directory of Open Access Journals (Sweden)

    Mona Nazeri

    Full Text Available One of the available tools for mapping the geographical distribution and potential suitable habitats is species distribution models. These techniques are very helpful for finding poorly known distributions of species in poorly sampled areas, such as the tropics. Maximum Entropy (MaxEnt) is a recently developed modeling method that can be successfully calibrated using a relatively small number of records. In this research, the MaxEnt model was applied to describe the distribution and identify the key factors shaping the potential distribution of the vulnerable Malayan Sun Bear (Helarctos malayanus) in one of the main remaining habitats in Peninsular Malaysia. MaxEnt results showed that even though Malaysian sun bear habitat is tied with tropical evergreen forests, it lives in a marginal threshold of bio-climatic variables. On the other hand, current protected area networks within Peninsular Malaysia do not cover most of the sun bear's potential suitable habitats. Assuming that the predicted suitability map covers the sun bear's actual distribution, future climate change, forest degradation and illegal hunting could potentially severely affect the sun bear's population.

  8. Asymptotically Constant-Risk Predictive Densities When the Distributions of Data and Target Variables Are Different

    Directory of Open Access Journals (Sweden)

    Keisuke Yano

    2014-05-01

    Full Text Available We investigate the asymptotic construction of constant-risk Bayesian predictive densities under the Kullback–Leibler risk when the distributions of data and target variables are different and have a common unknown parameter. It is known that the Kullback–Leibler risk is asymptotically equal to a trace of the product of two matrices: the inverse of the Fisher information matrix for the data and the Fisher information matrix for the target variables. We assume that the trace has a unique maximum point with respect to the parameter. We construct asymptotically constant-risk Bayesian predictive densities using a prior depending on the sample size. Further, we apply the theory to the subminimax estimator problem and the prediction based on the binary regression model.
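
    In the notation suggested by the abstract (and up to the paper's exact normalization, which is assumed here), the asymptotic Kullback–Leibler risk of a predictive density based on N observations takes the form

        \mathbb{E}_{x^{N}}\!\left[ D_{\mathrm{KL}}\!\left( q(\,\cdot \mid \theta) \,\middle\|\, \hat{q}(\,\cdot \mid x^{N}) \right) \right]
            \;\sim\; \frac{1}{2N}\, \operatorname{tr}\!\left\{ I_{x}(\theta)^{-1}\, \tilde{I}_{y}(\theta) \right\},

    where I_x is the Fisher information matrix of the data distribution and Ĩ_y that of the target-variable distribution; the sample-size-dependent prior is then chosen so that this leading term becomes constant in θ, with the trace assumed to have a unique maximum point.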

  9. Localization Accuracy of Distributed Inverse Solutions for Electric and Magnetic Source Imaging of Interictal Epileptic Discharges in Patients with Focal Epilepsy.

    Science.gov (United States)

    Heers, Marcel; Chowdhury, Rasheda A; Hedrich, Tanguy; Dubeau, François; Hall, Jeffery A; Lina, Jean-Marc; Grova, Christophe; Kobayashi, Eliane

    2016-01-01

    Distributed inverse solutions aim to realistically reconstruct the origin of interictal epileptic discharges (IEDs) from noninvasively recorded electroencephalography (EEG) and magnetoencephalography (MEG) signals. Our aim was to compare the performance of different distributed inverse solutions in localizing IEDs: coherent maximum entropy on the mean (cMEM), hierarchical Bayesian implementations of independent identically distributed sources (IID, minimum norm prior) and spatially coherent sources (COH, spatial smoothness prior). Source maxima (i.e., the vertex with the maximum source amplitude) of IEDs in 14 EEG and 19 MEG studies from 15 patients with focal epilepsy were analyzed. We visually compared their concordance with intracranial EEG (iEEG) based on 17 cortical regions of interest and their spatial dispersion around source maxima. Magnetic source imaging (MSI) maxima from cMEM were most often confirmed by iEEG (cMEM: 14/19, COH: 9/19, IID: 8/19 studies). COH electric source imaging (ESI) maxima co-localized best with iEEG (cMEM: 8/14, COH: 11/14, IID: 10/14 studies). In addition, cMEM was less spatially spread than COH and IID for ESI and MSI (p < 0.001 Bonferroni-corrected post hoc t test). Highest positive predictive values for cortical regions with IEDs in iEEG could be obtained with cMEM for MSI and with COH for ESI. Additional realistic EEG/MEG simulations confirmed our findings. Accurate spatially extended sources, as found in cMEM (ESI and MSI) and COH (ESI) are desirable for source imaging of IEDs because this might influence surgical decision. Our simulations suggest that COH and IID overestimate the spatial extent of the generators compared to cMEM.

  10. Linking macroecology and community ecology: refining predictions of species distributions using biotic interaction networks.

    Science.gov (United States)

    Staniczenko, Phillip P A; Sivasubramaniam, Prabu; Suttle, K Blake; Pearson, Richard G

    2017-06-01

    Macroecological models for predicting species distributions usually only include abiotic environmental conditions as explanatory variables, despite knowledge from community ecology that all species are linked to other species through biotic interactions. This disconnect is largely due to the different spatial scales considered by the two sub-disciplines: macroecologists study patterns at large extents and coarse resolutions, while community ecologists focus on small extents and fine resolutions. A general framework for including biotic interactions in macroecological models would help bridge this divide, as it would allow for rigorous testing of the role that biotic interactions play in determining species ranges. Here, we present an approach that combines species distribution models with Bayesian networks, which enables the direct and indirect effects of biotic interactions to be modelled as propagating conditional dependencies among species' presences. We show that including biotic interactions in distribution models for species from a California grassland community results in better range predictions across the western USA. This new approach will be important for improving estimates of species distributions and their dynamics under environmental change. © 2017 The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.

  11. Performance prediction of a synchronization link for distributed aerospace wireless systems.

    Science.gov (United States)

    Wang, Wen-Qin; Shao, Huaizong

    2013-01-01

    For reasons of stealth and other operational advantages, distributed aerospace wireless systems have received much attention in recent years. In a distributed aerospace wireless system, since the transmitter and receiver are placed on separate platforms that use independent master oscillators, there is no cancellation of low-frequency phase noise as in the monostatic case. Thus, highly accurate time and frequency synchronization techniques are required for distributed wireless systems. The use of a dedicated synchronization link to quantify and compensate oscillator frequency instability is investigated in this paper. With mathematical statistical models of phase noise, closed-form analytic expressions for the synchronization link performance are derived. The possible error contributions, including oscillator, phase-locked loop, and receiver noise, are quantified. The link synchronization performance is predicted by utilizing knowledge of the statistical models, system error contributions, and sampling considerations. Simulation results show that effective synchronization error compensation can be achieved by using this dedicated synchronization link.

  12. Fast ignition: Dependence of the ignition energy on source and target parameters for particle-in-cell-modelled energy and angular distributions of the fast electrons

    Energy Technology Data Exchange (ETDEWEB)

    Bellei, C.; Divol, L.; Kemp, A. J.; Key, M. H.; Larson, D. J.; Strozzi, D. J.; Marinak, M. M.; Tabak, M.; Patel, P. K. [Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, California 94550 (United States)

    2013-05-15

    The energy and angular distributions of the fast electrons predicted by particle-in-cell (PIC) simulations differ from those historically assumed in ignition designs of the fast ignition scheme. Using a particular 3D PIC calculation, we show how the ignition energy varies as a function of source-fuel distance, source size, and density of the pre-compressed fuel. The large divergence of the electron beam implies that the ignition energy scales with density more weakly than the ρ⁻² scaling for an idealized beam [S. Atzeni, Phys. Plasmas 6, 3316 (1999)], for any realistic source that is at some distance from the dense deuterium-tritium fuel. Due to the strong dependence of the ignition energy on source-fuel distance, the use of magnetic or electric fields seems essential for the purpose of decreasing the ignition energy.

  13. eTOXlab, an open source modeling framework for implementing predictive models in production environments.

    Science.gov (United States)

    Carrió, Pau; López, Oriol; Sanz, Ferran; Pastor, Manuel

    2015-01-01

    Computational models based on Quantitative Structure-Activity Relationship (QSAR) methodologies are widely used tools for predicting the biological properties of new compounds. In many instances, such models are used routinely in industry (e.g. the food, cosmetic or pharmaceutical industry) for the early assessment of the biological properties of new compounds. However, most of the tools currently available for developing QSAR models are not well suited for supporting the whole QSAR model life cycle in production environments. We have developed eTOXlab, an open source modeling framework designed to be used at the core of a self-contained virtual machine that can be easily deployed in production environments, providing predictions as web services. eTOXlab consists of a collection of object-oriented Python modules with methods mapping common tasks of standard modeling workflows. This framework allows building and validating QSAR models as well as predicting the properties of new compounds using either a command line interface or a graphical user interface (GUI). Simple models can be easily generated by setting a few parameters, while more complex models can be implemented by overriding pieces of the original source code. eTOXlab benefits from the object-oriented capabilities of Python to provide high flexibility: any model implemented using eTOXlab inherits the features implemented in the parent model, like common tools and services or the automatic exposure of the models as prediction web services. The particular eTOXlab architecture as a self-contained, portable prediction engine allows building models with confidential information within corporate facilities, which can be safely exported and used for prediction without disclosing the structures of the training series. The software presented here provides full support to the specific needs of users that want to develop, use and maintain predictive models in corporate environments. The technologies used by e

  14. Influence of covariate distribution on the predictive performance of pharmacokinetic models in paediatric research

    Science.gov (United States)

    Piana, Chiara; Danhof, Meindert; Della Pasqua, Oscar

    2014-01-01

    Aims The accuracy of model-based predictions often reported in paediatric research has not been thoroughly characterized. The aim of this exercise is therefore to evaluate the role of covariate distributions when a pharmacokinetic model is used for simulation purposes. Methods Plasma concentrations of a hypothetical drug were simulated in a paediatric population using a pharmacokinetic model in which body weight was correlated with clearance and volume of distribution. Two subgroups of children were then selected from the overall population according to a typical study design, in which pre-specified body weight ranges (10–15 kg and 30–40 kg) were used as inclusion criteria. The simulated data sets were then analyzed using non-linear mixed effects modelling. Model performance was assessed by comparing the accuracy of AUC predictions obtained for each subgroup, based on the model derived from the overall population and by extrapolation of the model parameters across subgroups. Results Our findings show that systemic exposure as well as pharmacokinetic parameters cannot be accurately predicted from the pharmacokinetic model obtained from a population with a different covariate range from the one explored during model building. Predictions were accurate only when a model was used for prediction in a subgroup of the initial population. Conclusions In contrast to current practice, the use of pharmacokinetic modelling in children should be limited to interpolations within the range of values observed during model building. Furthermore, the covariate point estimate must be kept in the model even when predictions refer to a subset different from the original population. PMID:24433411
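
    A hedged toy simulation of the design issue discussed above (all parameter values are invented, not taken from the study): clearance scales allometrically with body weight, so an AUC estimated in the 10-15 kg band is biased when extrapolated to the 30-40 kg band unless the covariate relation is carried along.

```python
import numpy as np

# Hedged illustration: weight-dependent clearance with log-normal
# between-subject variability; AUC = dose / CL.  Parameter values are
# assumptions for illustration only.
rng = np.random.default_rng(2)
dose = 100.0                                    # mg, assumed
wt_low = rng.uniform(10, 15, 500)               # kg, first inclusion band
wt_high = rng.uniform(30, 40, 500)              # kg, second inclusion band

def auc(wt):
    # allometric clearance (exponent 0.75) with 20% variability
    cl = 2.0 * (wt / 20.0) ** 0.75 * np.exp(rng.normal(0, 0.2, wt.size))
    return dose / cl

auc_low, auc_high = auc(wt_low), auc(wt_high)
print("mean AUC 10-15 kg:", auc_low.mean().round(1),
      "| mean AUC 30-40 kg:", auc_high.mean().round(1),
      "| bias if extrapolated:", (auc_low.mean() / auc_high.mean()).round(2), "x")
```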

  15. A two-stage predictive model to simultaneous control of trihalomethanes in water treatment plants and distribution systems: adaptability to treatment processes.

    Science.gov (United States)

    Domínguez-Tello, Antonio; Arias-Borrego, Ana; García-Barrera, Tamara; Gómez-Ariza, José Luis

    2017-10-01

    Trihalomethanes (TTHMs) and other disinfection by-products (DBPs) are formed in drinking water by the reaction of chlorine with organic precursors contained in the source water, in two consecutive, linked stages: the first starts at the treatment plant, and the second continues along the distribution system (DS) through the reaction of residual chlorine with organic precursors not removed. Following this approach, this study aimed at developing a two-stage empirical model for predicting the formation of TTHMs in the water treatment plant and subsequently their evolution along the water distribution system (WDS). The aim of the two-stage model was to improve the predictive capability for a wide range of water treatment and distribution system scenarios. The two-stage model was developed using multiple regression analysis from a database (January 2007 to July 2012) covering three different treatment processes (conventional and advanced) in the water supply system of the Aljaraque area (southwest of Spain). The new model was then validated using a more recent database from the same water supply system (January 2011 to May 2015). The validation results indicated no significant difference between predicted and observed TTHM values (R² = 0.874) across the distribution systems studied, proving the adaptability of the new model to the boundary conditions. Finally, the predictive capability of the new model was compared with 17 other models selected from the literature, showing satisfactory prediction results and excellent adaptability to treatment processes.
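
    A minimal sketch of a generic two-stage regression in the spirit of the approach above (variables, coefficients and data are hypothetical, not the published Aljaraque model): stage 1 regresses plant-outlet TTHM on raw-water and treatment variables, and stage 2 regresses tap TTHM on the stage-1 prediction plus distribution-system variables.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical two-stage TTHM regression on synthetic data.
rng = np.random.default_rng(3)
n = 300
toc, cl2, temp = rng.uniform(2, 6, n), rng.uniform(1, 3, n), rng.uniform(10, 28, n)
t_ds, cl2_res = rng.uniform(1, 72, n), rng.uniform(0.2, 1.0, n)     # residence time, residual Cl2

tthm_plant = 5 + 6 * toc + 4 * cl2 + 0.8 * temp + rng.normal(0, 3, n)   # synthetic "observed"
tthm_tap = tthm_plant + 0.4 * t_ds * cl2_res + rng.normal(0, 3, n)

X1 = np.c_[toc, cl2, temp]
stage1 = LinearRegression().fit(X1, tthm_plant)                 # formation in the plant
X2 = np.c_[stage1.predict(X1), t_ds, cl2_res]
stage2 = LinearRegression().fit(X2, tthm_tap)                   # evolution along the DS
print("stage-1 R2:", round(stage1.score(X1, tthm_plant), 2),
      "| stage-2 R2:", round(stage2.score(X2, tthm_tap), 2))
```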

  16. Predicting protein-protein interactions from multimodal biological data sources via nonnegative matrix tri-factorization.

    Science.gov (United States)

    Wang, Hua; Huang, Heng; Ding, Chris; Nie, Feiping

    2013-04-01

    Protein interactions are central to all the biological processes and structural scaffolds in living organisms, because they orchestrate a number of cellular processes such as metabolic pathways and immunological recognition. Several high-throughput methods, for example, yeast two-hybrid system and mass spectrometry method, can help determine protein interactions, which, however, suffer from high false-positive rates. Moreover, many protein interactions predicted by one method are not supported by another. Therefore, computational methods are necessary and crucial to complete the interactome expeditiously. In this work, we formulate the problem of predicting protein interactions from a new mathematical perspective--sparse matrix completion, and propose a novel nonnegative matrix factorization (NMF)-based matrix completion approach to predict new protein interactions from existing protein interaction networks. Through using manifold regularization, we further develop our method to integrate different biological data sources, such as protein sequences, gene expressions, protein structure information, etc. Extensive experimental results on four species, Saccharomyces cerevisiae, Drosophila melanogaster, Homo sapiens, and Caenorhabditis elegans, have shown that our new methods outperform related state-of-the-art protein interaction prediction methods.
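
    To make the matrix-completion view concrete, here is a minimal masked NMF sketch (plain two-factor NMF on synthetic data; the paper's method additionally uses tri-factorization and manifold regularization over multiple data sources): unobserved entries are excluded from the updates, and high reconstructed scores on those entries are candidate new interactions.

```python
import numpy as np

# Hedged sketch of masked NMF completion on a synthetic interaction matrix.
rng = np.random.default_rng(4)
n, k = 60, 5
A = (rng.random((n, n)) < 0.08).astype(float)
A = np.maximum(A, A.T)                          # symmetric interaction matrix (1 = interacts)
M = (rng.random((n, n)) < 0.7).astype(float)    # mask: 1 = observed entry
M = np.maximum(M, M.T)

W, H = rng.random((n, k)) + 0.1, rng.random((k, n)) + 0.1
for _ in range(300):                            # multiplicative updates on observed entries only
    WH = W @ H
    W *= ((M * A) @ H.T) / ((M * WH) @ H.T + 1e-9)
    WH = W @ H
    H *= (W.T @ (M * A)) / (W.T @ (M * WH) + 1e-9)

scores = (W @ H) * (1 - M)                      # reconstructed scores for unobserved pairs
i, j = np.unravel_index(np.argmax(scores), scores.shape)
print("top predicted new interaction:", (i, j), "score", round(scores[i, j], 3))
```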

  17. XTALOPT version r11: An open-source evolutionary algorithm for crystal structure prediction

    Science.gov (United States)

    Avery, Patrick; Falls, Zackary; Zurek, Eva

    2018-01-01

    Version 11 of XTALOPT, an evolutionary algorithm for crystal structure prediction, has now been made available for download from the CPC library or the XTALOPT website, http://xtalopt.github.io. Whereas the previous versions of XTALOPT were published under the Gnu Public License (GPL), the current version is made available under the 3-Clause BSD License, which is an open source license that is recognized by the Open Source Initiative. Importantly, the new version can be executed via a command line interface (i.e., it does not require the use of a Graphical User Interface). Moreover, the new version is written as a stand-alone program, rather than an extension to AVOGADRO.

  18. Information system architecture to support transparent access to distributed, heterogeneous data sources

    International Nuclear Information System (INIS)

    Brown, J.C.

    1994-08-01

    Quality situation assessment and decision making require access to multiple sources of data and information. Insufficient accessibility to data exists for many large corporations and Government agencies. By utilizing current advances in computer technology, today's situation analysts have a wealth of information at their disposal. There are many potential solutions to the information accessibility problem using today's technology. The United States Department of Energy (US-DOE) faced this problem when dealing with one class of problem in the US. The result of their efforts has been the creation of the Tank Waste Information Network System -- TWINS. The TWINS solution combines many technologies to address problems in several areas such as User Interfaces, Transparent Access to Multiple Data Sources, and Integrated Data Access. Data related to the complex are currently distributed throughout several US-DOE installations. Over time, each installation has adopted its own set of standards for information management. Heterogeneous hardware and software platforms exist both across the complex and within a single installation. Standards for information management vary between US-DOE mission areas within installations. These factors contribute to the complexity of accessing information in a manner that enhances the performance and decision making process of the analysts. This paper presents one approach taken by the DOE to resolve the problem of distributed, heterogeneous, multi-media information management for the HLW Tank complex. The information system architecture developed for the DOE by the TWINS effort is one that is adaptable to other problem domains and uses

  19. Pu and 137Cs in the Yangtze River estuary sediments: distribution and source identification.

    Science.gov (United States)

    Liu, Zhiyong; Zheng, Jian; Pan, Shaoming; Dong, Wei; Yamada, Masatoshi; Aono, Tatsuo; Guo, Qiuju

    2011-03-01

    Pu isotopes and (137)Cs were analyzed using sector-field ICP-MS and γ spectrometry, respectively, in surface sediment and core sediment samples from the Yangtze River estuary. (239+240)Pu activities and (240)Pu/(239)Pu atom ratios (>0.18) show a generally increasing trend from land to sea and from north to south in the estuary. This spatial distribution pattern indicates that the Pacific Proving Grounds (PPG) source Pu transported by ocean currents was intensively scavenged into the suspended sediment under favorable conditions and mixed with riverine sediment as the water circulated in the estuary. This process is the main control on the distribution of Pu in the estuary. Moreover, Pu is also an important indicator for monitoring changes of environmental radioactivity in the estuary, as the river basin is currently the site of extensive human activities and the sea level is rising because of global climate change. For the core sediment samples, the maximum peak of (239+240)Pu activity was observed at a depth of 172 cm. The sedimentation rate was estimated, on the basis of the Pu maximum deposition peak in 1963-1964, to be 4.1 cm/a. The contributions of the PPG close-in fallout Pu (44%) and the riverine Pu (45%) in Yangtze River estuary sediments are equally important for the total Pu deposition in the estuary, which challenges the current hypothesis that riverine Pu input was the major source of the Pu budget in this area.

  20. An Active Power Sharing Method among Distributed Energy Sources in an Islanded Series Micro-Grid

    Directory of Open Access Journals (Sweden)

    Wei-Man Yang

    2014-11-01

    Active power-sharing among distributed energy sources (DESs) is not only an important way to realize optimal operation of micro-grids, but also the key to maintaining stability for islanded operation. Due to the unique configuration of series micro-grids (SMGs), the power-sharing methods adopted in ordinary AC, DC, and hybrid AC/DC systems cannot be directly applied to SMGs. Power-sharing in one SMG with multiple DESs involves two aspects. On the one hand, capacitor voltage stability based on an energy storage system (ESS) in the DC link must be maintained. This is essentially a problem of power allocation between the generating unit and the ESS in the DES; an extensively researched, similar problem is off-grid distributed power generation, for which there are good solutions. On the other hand, power-sharing among DESs should be considered to optimize the operation of a series micro-grid. In this paper, a novel method combining master control with auxiliary control is proposed. The master action of a quasi-proportional resonant controller is responsible for the stability of the islanded SMG; the auxiliary action, based on state of charge (SOC), realizes coordinated allocation of load power among the sources. At the same time, it is important to ensure that the auxiliary control does not influence the master action.
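
    As a toy illustration of the auxiliary, SOC-based allocation idea only (not the paper's quasi-proportional resonant master controller), load power can be shared among sources in proportion to their state of charge:

```python
# Hedged sketch: sources with higher state of charge pick up a
# proportionally larger share of the islanded load.  Values are invented.
def share_load(p_load_kw, socs):
    total = sum(socs)
    return [p_load_kw * s / total for s in socs]

socs = [0.9, 0.6, 0.3]              # hypothetical SOCs of three DES units
print(share_load(30.0, socs))       # -> [15.0, 10.0, 5.0] kW
```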

  1. Sources and distribution of sedimentary organic matter along the Andong salt marsh, Hangzhou Bay

    Science.gov (United States)

    Yuan, Hong-Wei; Chen, Jian-Fang; Ye, Ying; Lou, Zhang-Hua; Jin, Ai-Min; Chen, Xue-Gang; Jiang, Zong-Pei; Lin, Yu-Shih; Chen, Chen-Tung Arthur; Loh, Pei Sun

    2017-10-01

    Lignin oxidation products, δ13C values, C/N ratios and particle size were used to investigate the sources, distribution and chemical stability of sedimentary organic matter (OM) along the Andong salt marsh located in the southwestern end of Hangzhou Bay, China. Terrestrial OM was highest at the upper marshes and decreased closer to the sea, and the distribution of sedimentary total organic carbon (TOC) was influenced mostly by particle size. Terrestrial OM with a C3 signature was the predominant source of sedimentary OM in the Spartina alterniflora-dominated salt marsh system. This means that aside from contributions from the local marsh plants, the Andong salt marsh received input mostly from the Qiantang River and the Changjiang Estuary. Transect C, which was situated nearer to the Qiantang River mouth, was most likely influenced by input from the Qiantang River. Likewise, a nearby creek could be transporting materials from Hangzhou Bay into Transect A (farther east than Transect C), as Transect A showed a signal resembling that of the Changjiang Estuary. The predominance of terrestrial OM in the Andong salt marsh despite overall reductions in sedimentary and terrestrial OM input from the rivers is most likely due to increased contributions of sedimentary and terrestrial OM from erosion. This study shows that lower salt marsh accretion due to the presence of reservoirs upstream may be counterbalanced by increased erosion from the surrounding coastal areas.

  2. [Distribution and sources of oxygen and sulfur heterocyclic aromatic compounds in surface soil of Beijing, China].

    Science.gov (United States)

    He, Guang-Xiu; Zhang, Zhi-Huan; Peng, Xu-Yang; Zhu, Lei; Lu, Ling

    2011-11-01

    62 surface soil samples were collected from different environmental function zones in Beijing. Sulfur and oxygen heterocyclic aromatic compounds were detected by GC/MS. The objectives of this study were to identify the composition and distribution of these compounds and to discuss their sources. The results showed that the oxygen and sulfur heterocyclic aromatic compounds in the surface soils mainly comprised the dibenzofuran, methyl- and C2-dibenzofuran series, the dibenzothiophene, methyl-, C2- and C3-dibenzothiophene series, and the benzonaphthothiophene series. The composition and distribution of the oxygen and sulfur heterocyclic aromatic compounds varied among the different environmental function zones, with some factory sites and the urban area being the most seriously affected. In Beijing, the degree of contamination by oxygen and sulfur heterocyclic aromatic compounds in the northern surface soil was higher than that in the south. There were good linear correlations between the concentrations of the dibenzofuran series and the fluorene series, as well as between the dibenzothiophene series and the dibenzofuran series. The oxygen and sulfur heterocyclic aromatic compounds in the surface soil were mainly derived from combustion products of oil and coal and the direct input of mineral oil. There were some variations in the pollution sources of the different environmental function zones.

  3. Developing and Validating a Survival Prediction Model for NSCLC Patients Through Distributed Learning Across 3 Countries.

    Science.gov (United States)

    Jochems, Arthur; Deist, Timo M; El Naqa, Issam; Kessler, Marc; Mayo, Chuck; Reeves, Jackson; Jolly, Shruti; Matuszak, Martha; Ten Haken, Randall; van Soest, Johan; Oberije, Cary; Faivre-Finn, Corinne; Price, Gareth; de Ruysscher, Dirk; Lambin, Philippe; Dekker, Andre

    2017-10-01

    Tools for survival prediction for non-small cell lung cancer (NSCLC) patients treated with chemoradiation or radiation therapy are of limited quality. In this work, we developed a predictive model of survival at 2 years. The model is based on a large volume of historical patient data and serves as a proof of concept to demonstrate the distributed learning approach. Clinical data from 698 lung cancer patients, treated with curative intent with chemoradiation or radiation therapy alone, were collected and stored at 2 different cancer institutes (559 patients at Maastro Clinic [Netherlands] and 139 at the University of Michigan [United States]). The model was further validated on 196 patients originating from The Christie (United Kingdom). A Bayesian network model was adapted for distributed learning (the animation can be viewed at https://www.youtube.com/watch?v=ZDJFOxpwqEA). Two-year posttreatment survival was chosen as the endpoint. The Maastro Clinic cohort data are publicly available at https://www.cancerdata.org/publication/developing-and-validating-survival-prediction-model-nsclc-patients-through-distributed, and the developed models can be found at www.predictcancer.org. Variables included in the final model were T and N category, age, performance status, and total tumor dose. The model has an area under the curve (AUC) of 0.66 on the external validation set and an AUC of 0.62 on a 5-fold cross validation. A model based on the T and N category alone performed with an AUC of 0.47 on the validation set, significantly worse than our model. Learning the model in a centralized or distributed fashion yields a minor difference in the probabilities of the conditional probability tables (0.6%); the discriminative performance of the models on the validation set is similar (P=.26). Distributed learning from federated databases allows learning of predictive models on data originating from multiple institutions while avoiding many of the data-sharing barriers. We believe that

  4. A contribution to the analysis of the activity distribution of a radioactive source trapped inside a cylindrical volume, using the M.C.N.P.X. code

    International Nuclear Information System (INIS)

    Portugal, L.; Oliveira, C.; Trindade, R.; Paiva, I.

    2006-01-01

    Orphan sources, activated materials or contaminated materials with natural or artificial radionuclides have been detected in scrap metal products destined for recycling. The consequences of the melting of a source during the process could include economic, environmental and social impacts. From the point of view of radioactive waste management, a scenario of 100 ton of contaminated steel in one piece is a major problem. So, it is of great importance to develop a methodology that would allow us to predict the activity distribution inside a volume of steel. In previous work we were able to distinguish between the cases where the source is disseminated over the entire cylinder and the cases where it is concentrated in different volumes. Now the main goal is to distinguish between different radii of spherical source geometries trapped inside the cylinder. For this, a methodology was proposed based on the ratio of the counts in two regions of the gamma spectrum, obtained with a sodium iodide detector, using the M.C.N.P.X. Monte Carlo simulation code. These calculated ratios allow us to determine a function r = aR² + bR + c, where R is the ratio between the counts of the two regions of the gamma spectrum and r is the radius of the source. For simulation purposes six ⁶⁰Co sources were used (a point source, four spheres of 5 cm, 10 cm, 15 cm and 20 cm radius, and the overall contaminated cylinder) trapped inside two types of matrix, concrete and stainless steel. The methodology applied has been shown to predict and distinguish accurately the distribution of a source inside a material, roughly independently of the matrix and density considered. (authors)
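
    The calibration step described above can be sketched as follows (the ratio-radius pairs below are invented placeholders, not the actual MCNPX results): fit r = aR² + bR + c to simulated pairs, then invert it for a measured spectral ratio.

```python
import numpy as np

# Hedged sketch of the quadratic calibration r = a*R**2 + b*R + c.
# The (ratio, radius) pairs are illustrative placeholders only.
R_sim = np.array([0.42, 0.51, 0.63, 0.78, 0.95])   # ratio of counts in two spectral regions
r_sim = np.array([0.0, 5.0, 10.0, 15.0, 20.0])     # corresponding source radius [cm]

a, b, c = np.polyfit(R_sim, r_sim, 2)               # least-squares quadratic fit
R_measured = 0.70                                   # hypothetical measured ratio
r_estimate = a * R_measured**2 + b * R_measured + c
print(f"r = {a:.2f}*R^2 + {b:.2f}*R + {c:.2f}; estimated radius = {r_estimate:.1f} cm")
```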

  5. A contribution to the analysis of the activity distribution of a radioactive source trapped inside a cylindrical volume, using the M.C.N.P.X. code

    Energy Technology Data Exchange (ETDEWEB)

    Portugal, L.; Oliveira, C.; Trindade, R.; Paiva, I. [Instituto Tecnologico e Nuclear, Dpto. Proteccao Radiologica e Seguranca Nuclear, Sacavem (Portugal)

    2006-07-01

    Orphan sources, activated materials or contaminated materials with natural or artificial radionuclides have been detected in scrap metal products destined for recycling. The consequences of the melting of a source during the process could include economic, environmental and social impacts. From the point of view of radioactive waste management, a scenario of 100 ton of contaminated steel in one piece is a major problem. So, it is of great importance to develop a methodology that would allow us to predict the activity distribution inside a volume of steel. In previous work we were able to distinguish between the cases where the source is disseminated over the entire cylinder and the cases where it is concentrated in different volumes. Now the main goal is to distinguish between different radii of spherical source geometries trapped inside the cylinder. For this, a methodology was proposed based on the ratio of the counts in two regions of the gamma spectrum, obtained with a sodium iodide detector, using the M.C.N.P.X. Monte Carlo simulation code. These calculated ratios allow us to determine a function r = aR{sup 2} + bR + c, where R is the ratio between the counts of the two regions of the gamma spectrum and r is the radius of the source. For simulation purposes six {sup 60}Co sources were used (a point source, four spheres of 5 cm, 10 cm, 15 cm and 20 cm radius, and the overall contaminated cylinder) trapped inside two types of matrix, concrete and stainless steel. The methodology applied has been shown to predict and distinguish accurately the distribution of a source inside a material, roughly independently of the matrix and density considered. (authors)

  6. Energy models for commercial energy prediction and substitution of renewable energy sources

    International Nuclear Information System (INIS)

    Iniyan, S.; Suganthi, L.; Samuel, Anand A.

    2006-01-01

    In this paper, three models are presented, namely the Modified Econometric Mathematical (MEM) model, the Mathematical Programming Energy-Economy-Environment (MPEEE) model, and the Optimal Renewable Energy Mathematical (OREM) model. The actual demand for coal, oil and electricity is predicted using the MEM model based on economic, technological and environmental factors. The results were used in the MPEEE model, which determines the optimum allocation of commercial energy sources based on environmental limitations. The gap between the actual energy demand from the MEM model and the optimal energy use from the MPEEE model has to be met by renewable energy sources. The study develops an OREM model that would facilitate effective utilization of renewable energy sources in India, based on cost, efficiency, social acceptance, reliability, potential and demand. The economic variations in solar energy systems and the inclusion of an environmental constraint are also analyzed with the OREM model. The OREM model will help policy makers in the formulation and implementation of strategies concerning renewable energy sources in India for the next two decades.

  7. Applied Distributed Model Predictive Control for Energy Efficient Buildings and Ramp Metering

    Science.gov (United States)

    Koehler, Sarah Muraoka

    Industrial large-scale control problems present an interesting algorithmic design challenge. A number of controllers must cooperate in real-time on a network of embedded hardware with limited computing power in order to maximize system efficiency while respecting constraints and despite communication delays. Model predictive control (MPC) can automatically synthesize a centralized controller which optimizes an objective function subject to a system model, constraints, and predictions of disturbance. Unfortunately, the computations required by model predictive controllers for large-scale systems often limit its industrial implementation only to medium-scale slow processes. Distributed model predictive control (DMPC) enters the picture as a way to decentralize a large-scale model predictive control problem. The main idea of DMPC is to split the computations required by the MPC problem amongst distributed processors that can compute in parallel and communicate iteratively to find a solution. Some popularly proposed solutions are distributed optimization algorithms such as dual decomposition and the alternating direction method of multipliers (ADMM). However, these algorithms ignore two practical challenges: substantial communication delays present in control systems and also problem non-convexity. This thesis presents two novel and practically effective DMPC algorithms. The first DMPC algorithm is based on a primal-dual active-set method which achieves fast convergence, making it suitable for large-scale control applications which have a large communication delay across its communication network. In particular, this algorithm is suited for MPC problems with a quadratic cost, linear dynamics, forecasted demand, and box constraints. We measure the performance of this algorithm and show that it significantly outperforms both dual decomposition and ADMM in the presence of communication delay. The second DMPC algorithm is based on an inexact interior point method which is
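
    For context, the consensus form of ADMM mentioned above can be written in a few lines. This generic textbook sketch (not the thesis's primal-dual active-set or inexact interior-point algorithms) shows three agents with local quadratic costs agreeing on a shared variable through parallel local solves, averaging, and dual updates.

```python
import numpy as np

# Consensus ADMM on a toy problem: each agent i holds the cost
# 0.5*(x_i - a_i)^2 and the agents must agree on a shared value z.
a = np.array([1.0, 4.0, 7.0])        # hypothetical local targets of 3 controllers
rho = 1.0
x, z, u = np.zeros(3), 0.0, np.zeros(3)
for _ in range(50):
    x = (a + rho * (z - u)) / (1 + rho)   # parallel local minimizations
    z = np.mean(x + u)                    # consensus (averaging) step
    u = u + x - z                         # dual updates
print("consensus value:", round(z, 3), "(analytic optimum is the mean:", a.mean(), ")")
```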

  8. SOILD: A computer model for calculating the effective dose equivalent from external exposure to distributed gamma sources in soil

    International Nuclear Information System (INIS)

    Chen, S.Y.; LePoire, D.; Yu, C.; Schafetz, S.; Mehta, P.

    1991-01-01

    The SOILD computer model was developed for calculating the effective dose equivalent from external exposure to distributed gamma sources in soil. It is designed to assess external doses under various exposure scenarios that may be encountered in environmental restoration programs. The model's four major functional features address (1) dose versus source depth in soil, (2) shielding by clean cover soil, (3) area of contamination, and (4) nonuniform distribution of sources. The model is also capable of adjusting doses when there are variations in soil densities for both the source and cover soils. The model is supported by a data base of approximately 500 radionuclides. 4 refs

  9. Sources, distribution and export coefficient of phosphorus in lowland polders of Lake Taihu Basin, China.

    Science.gov (United States)

    Huang, Jiacong; Gao, Junfeng; Jiang, Yong; Yin, Hongbin; Amiri, Bahman Jabbarian

    2017-12-01

    Identifying phosphorus (P) sources, distribution and export from lowland polders is important for P pollution management; however, it is challenging due to the high complexity of hydrological and P transport processes in lowland areas. In this study, the spatial pattern and temporal dynamics of the P export coefficient (PEC) from all the 2539 polders in Lake Taihu Basin, China, were estimated using a coupled P model describing P dynamics in a polder system. The estimated amount of P export from polders in Lake Taihu Basin during 2013 was 1916.2 t/yr, with a spatially-averaged PEC of 1.8 kg/ha/yr. PEC had peak values (more than 4.0 kg/ha/yr) in the polders near or within the large cities, and was high during the rice-cropping season. Sensitivity analysis based on the coupled P model revealed that the sensitive factors controlling the PEC varied spatially and changed through time. Precipitation and air temperature were the most sensitive factors controlling PEC. Culvert control and fertilization were sensitive factors controlling PEC during some periods. This study demonstrated an estimation of PEC from 2539 polders in Lake Taihu Basin, and an identification of sensitive environmental factors affecting PEC. The investigation of polder P export at the watershed scale is helpful for water managers to learn the distribution of P sources, to identify key P sources, and thus to achieve best management practices in controlling P export from lowland areas. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Accurate bearing remaining useful life prediction based on Weibull distribution and artificial neural network

    Science.gov (United States)

    Ben Ali, Jaouher; Chebel-Morello, Brigitte; Saidi, Lotfi; Malinowski, Simon; Fnaiech, Farhat

    2015-05-01

    Accurate remaining useful life (RUL) prediction of critical assets is an important challenge in condition-based maintenance to improve reliability and decrease machine breakdowns and maintenance costs. Bearings are among the most important components in industry that need to be monitored, and their RUL should be predicted. The challenge of this study is to propose an original feature able to evaluate the health state of bearings and to estimate their RUL by Prognostics and Health Management (PHM) techniques. In this paper, the proposed method is based on the data-driven prognostic approach. The combination of the Simplified Fuzzy Adaptive Resonance Theory Map (SFAM) neural network and the Weibull distribution (WD) is explored. WD is used only in the training phase to fit measurements and to avoid areas of fluctuation in the time domain. The SFAM training process is based on fitted measurements at the present and previous inspection time points as input, whereas the SFAM testing process is based on real measurements at the present and previous inspections. Thanks to the fuzzy learning process, SFAM has a strong ability and good performance in learning nonlinear time series. As output, seven classes are defined: a healthy bearing and six states of bearing degradation. In order to find the optimal RUL prediction, a smoothing phase is proposed in this paper. Experimental results show that the proposed method can reliably predict the RUL of rolling element bearings (REBs) based on vibration signals. The proposed prediction approach can be applied to the prognostics of various other mechanical assets.
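
    A hedged sketch of the Weibull ingredient only (the SFAM network, feature extraction and smoothing steps are omitted, and the lifetimes are synthetic): fit a two-parameter Weibull to bearing lifetimes and compute a conditional mean remaining life for a bearing that has survived to a given operating time.

```python
import numpy as np
from scipy.stats import weibull_min

# Hedged sketch: Weibull fit plus mean residual life, on synthetic lifetimes.
lifetimes = weibull_min.rvs(2.0, scale=300.0, size=200, random_state=5)   # hours

shape, loc, scale = weibull_min.fit(lifetimes, floc=0)       # fit (location fixed at 0)
t0 = 250.0                                                   # current operating hours
t = np.linspace(t0, 5 * scale, 2000)
surv = weibull_min.sf(t, shape, loc, scale)
rul = np.trapz(surv, t) / weibull_min.sf(t0, shape, loc, scale)   # mean residual life
print(f"shape={shape:.2f}, scale={scale:.0f} h, expected RUL at {t0:.0f} h: {rul:.0f} h")
```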

  11. Spatially distributed flame transfer functions for predicting combustion dynamics in lean premixed gas turbine combustors

    Energy Technology Data Exchange (ETDEWEB)

    Kim, K.T.; Lee, J.G.; Quay, B.D.; Santavicca, D.A. [Center for Advanced Power Generation, Department of Mechanical and Nuclear Engineering, Pennsylvania State University, University Park, PA (United States)

    2010-09-15

    The present paper describes a methodology to improve the accuracy of prediction of the eigenfrequencies and growth rates of self-induced instabilities and demonstrates its application to a laboratory-scale, swirl-stabilized, lean-premixed, gas turbine combustor. The influence of the spatial heat release distribution is accounted for using local flame transfer function (FTF) measurements. The two-microphone technique and CH{sup *} chemiluminescence intensity measurements are used to determine the input (inlet velocity perturbation) and the output functions (heat release oscillation), respectively, for the local flame transfer functions. The experimentally determined local flame transfer functions are superposed using the flame transfer function superposition principle, and the result is incorporated into an analytic thermoacoustic model, in order to predict the linear stability characteristics of a given system. Results show that when the flame length is not acoustically compact the model prediction calculated using the local flame transfer functions is better than the prediction made using the global flame transfer function. In the case of a flame in the compact flame regime, accurate predictions of eigenfrequencies and growth rates can be obtained using the global flame transfer function. It was also found that the general response characteristics of the local FTF (gain and phase) are qualitatively the same as those of the global FTF. (author)

  12. Optimal operation management of fuel cell/wind/photovoltaic power sources connected to distribution networks

    Science.gov (United States)

    Niknam, Taher; Kavousifard, Abdollah; Tabatabaei, Sajad; Aghaei, Jamshid

    2011-10-01

    In this paper a new multiobjective modified honey bee mating optimization (MHBMO) algorithm is presented to investigate the distribution feeder reconfiguration (DFR) problem considering renewable energy sources (RESs) (photovoltaics, fuel cell and wind energy) connected to the distribution network. The objective functions of the problem to be minimized are the electrical active power losses, the voltage deviations, the total electrical energy costs and the total emissions of RESs and substations. During the optimization process, the proposed algorithm finds a set of non-dominated (Pareto) optimal solutions which are stored in an external memory called repository. Since the objective functions investigated are not the same, a fuzzy clustering algorithm is utilized to handle the size of the repository in the specified limits. Moreover, a fuzzy-based decision maker is adopted to select the 'best' compromised solution among the non-dominated optimal solutions of multiobjective optimization problem. In order to see the feasibility and effectiveness of the proposed algorithm, two standard distribution test systems are used as case studies.

  13. Security analysis of an untrusted source for quantum key distribution: passive approach

    International Nuclear Information System (INIS)

    Zhao Yi; Qi Bing; Lo, H-K; Qian Li

    2010-01-01

    We present a passive approach to the security analysis of quantum key distribution (QKD) with an untrusted source. A complete proof of its unconditional security is also presented. This scheme has significant advantages in real-life implementations as it does not require fast optical switching or a quantum random number generator. The essential idea is to use a beam splitter to split each input pulse. We show that we can characterize the source using a cross-estimate technique without active routing of each pulse. We have derived analytical expressions for the passive estimation scheme. Moreover, using simulations, we have considered four real-life imperfections: additional loss introduced by the 'plug and play' structure, inefficiency of the intensity monitor, noise of the intensity monitor, and statistical fluctuation introduced by finite data size. Our simulation results show that the passive estimate of an untrusted source remains useful in practice, despite these four imperfections. Also, we have performed preliminary experiments, confirming the utility of our proposal in real-life applications. Our proposal makes it possible to implement 'plug and play' QKD with the security guaranteed, while keeping the implementation practical.

  14. Qualitative analysis of precipitation distribution in Poland with use of different data sources

    Directory of Open Access Journals (Sweden)

    J. Walawender

    2008-04-01

    Geographical Information Systems (GIS) can be used to integrate data from different sources and in different formats to perform innovative spatial and temporal analyses. GIS can also be applied in climatic research to manage, investigate and display all kinds of weather data.

    The main objective of this study is to demonstrate that GIS is a useful tool to examine and visualise precipitation distribution obtained from different data sources: ground measurements, satellite and radar data.

    Three selected days (30 cases) with convective rainfall situations were analysed. Firstly, a scalable GRID-based approach was applied to store data from the three different sources in a comparable layout. Then, a geoprocessing algorithm was created within the ArcGIS 9.2 environment. The algorithm included GRID definition, reclassification and raster algebra. All of the calculations and procedures were performed automatically. Finally, contingency tables and pie charts were created to show the relationship between ground measurements and both satellite- and radar-derived data. The results were visualised on maps.

  15. A stationary computed tomography system with cylindrically distributed sources and detectors.

    Science.gov (United States)

    Chen, Yi; Xi, Yan; Zhao, Jun

    2014-01-01

    The temporal resolution of current computed tomography (CT) systems is limited by the rotation speed of their gantries. A helical interlaced source detector array (HISDA) CT, which is a stationary CT system with distributed X-ray sources and detectors, is presented in this paper to overcome the aforementioned limitation and achieve high temporal resolution. Projection data can be obtained from different angles in a short time and do not require source, detector, or object motion. Axial coverage speed is increased further by employing a parallel scan scheme. Interpolation is employed to approximate the missing data in the gaps, and then a Katsevich-type reconstruction algorithm is applied to enable an approximate reconstruction. The proposed algorithm suppressed the cone beam and gap-induced artifacts in HISDA CT. The results also suggest that gap-induced artifacts can be reduced by employing a large helical pitch for a fixed gap height. HISDA CT is a promising 3D dynamic imaging architecture given its good temporal resolution and stationary advantage.

  16. Compressing Sensing Based Source Localization for Controlled Acoustic Signals Using Distributed Microphone Arrays

    Directory of Open Access Journals (Sweden)

    Wei Ke

    2017-01-01

    In order to enhance the accuracy of sound source localization in noisy and reverberant environments, this paper proposes an adaptive sound source localization method based on distributed microphone arrays. Since sound sources lie at a few points in the discrete spatial domain, our method can exploit this inherent sparsity to convert the localization problem into a sparse recovery problem based on compressive sensing (CS) theory. In this method, a two-step discrete cosine transform (DCT)-based feature extraction approach is utilized to cover both short-time and long-time properties of acoustic signals and reduce the dimensions of the sparse model. In addition, an online dictionary learning (DL) method is used to adjust the dictionary to match the changes of audio signals, so that the sparse solution can better represent the location estimates. Moreover, we propose an improved block-sparse reconstruction algorithm using approximate l0 norm minimization to enhance reconstruction performance for sparse signals in low signal-to-noise ratio (SNR) conditions. The effectiveness of the proposed scheme is demonstrated by simulation results and experimental results, where substantial improvement in localization performance can be obtained in noisy and reverberant conditions.
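
    The sparse-recovery core of such methods can be illustrated generically (this uses orthogonal matching pursuit on a random dictionary, not the paper's DCT features or block-sparse approximate-l0 algorithm): the source occupies one cell of a discretized location grid, so its indicator vector is sparse and can be recovered from a small number of measurements.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# Generic sparse-recovery sketch standing in for the paper's algorithm.
rng = np.random.default_rng(6)
n_grid, n_meas = 100, 30
A = rng.normal(size=(n_meas, n_grid)) / np.sqrt(n_meas)   # measurement dictionary
x_true = np.zeros(n_grid); x_true[42] = 1.0               # source at grid cell 42
y = A @ x_true + 0.01 * rng.normal(size=n_meas)           # noisy measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=1).fit(A, y)
print("estimated source cell:", int(np.flatnonzero(omp.coef_)[0]))
```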

  17. Planck Early Results. XV. Spectral Energy Distributions and Radio Continuum Spectra of Northern Extragalactic Radio Sources

    Science.gov (United States)

    Aatrokoski, J.; Ade, P. A. R.; Aghanim, N.; Aller, H. D.; Aller, M. F.; Angelakis, E.; Amaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; et al.

    2011-01-01

    Spectral energy distributions (SEDs) and radio continuum spectra are presented for a northern sample of 104 extragalactic radio sources, based on the Planck Early Release Compact Source Catalogue (ERCSC) and simultaneous multifrequency data. The nine Planck frequencies, from 30 to 857 GHz, are complemented by a set of simultaneous observations ranging from radio to gamma-rays. This is the first extensive frequency coverage in the radio and millimetre domains for an essentially complete sample of extragalactic radio sources, and it shows how the individual shocks, each in their own phase of development, shape the radio spectra as they move in the relativistic jet. The SEDs presented in this paper were fitted with second- and third-degree polynomials to estimate the frequencies of the synchrotron and inverse Compton (IC) peaks, and the spectral indices of low and high frequency radio data, including the Planck ERCSC data, were calculated. SED modelling methods are discussed, with an emphasis on proper physical modelling of the synchrotron bump using multiple components. Planck ERCSC data also suggest that the original accelerated electron energy spectrum could be much harder than commonly thought, with a power-law index around 1.5 instead of the canonical 2.5. The implications of this are discussed for the acceleration mechanisms effective in blazar shocks. Furthermore, in many cases the Planck data indicate that the gamma-ray emission must originate in the same shocks that produce the radio emission.
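
    The peak-frequency estimate described above amounts to a low-order polynomial fit in log-log space; a sketch with synthetic SED points (not Planck ERCSC values) follows.

```python
import numpy as np

# Fit log(nu*F_nu) vs log(nu) with a second-degree polynomial and take its
# maximum as the synchrotron peak.  The SED points below are synthetic.
nu = np.array([30, 44, 70, 100, 143, 217, 353, 545, 857]) * 1e9     # Hz
log_nu = np.log10(nu)
log_nuFnu = -12.0 - 0.3 * (log_nu - 11.5) ** 2 \
            + np.random.default_rng(7).normal(0, 0.02, log_nu.size)

coef = np.polyfit(log_nu, log_nuFnu, 2)          # second-degree fit
log_nu_peak = -coef[1] / (2 * coef[0])           # vertex of the parabola
print(f"estimated synchrotron peak: {10**log_nu_peak:.2e} Hz")
```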

  18. Prediction of metabolic flux distribution from gene expression data based on the flux minimization principle.

    Directory of Open Access Journals (Sweden)

    Hyun-Seob Song

    Prediction of possible flux distributions in a metabolic network provides detailed phenotypic information that links metabolism to cellular physiology. To estimate metabolic steady-state fluxes, the most common approach is to solve a set of macroscopic mass balance equations subject to stoichiometric constraints while attempting to optimize an assumed optimal objective function. This assumption is justifiable in specific cases but may be invalid when tested across different conditions, cell populations, or other organisms. With the aim of providing a more consistent and reliable prediction of flux distributions over a wide range of conditions, in this article we propose a framework that uses the flux minimization principle to predict active metabolic pathways from mRNA expression data. The proposed algorithm minimizes a weighted sum of flux magnitudes, while biomass production can be bounded to fit an ample range from very low to very high values according to the analyzed context. We have formulated the flux weights as a function of the corresponding enzyme reaction's gene expression value, enabling the creation of context-specific fluxes based on a generic metabolic network. In case studies of wild-type Saccharomyces cerevisiae, and wild-type and mutant Escherichia coli strains, our method achieved high prediction accuracy, as gauged by correlation coefficients and sums of squared error, with respect to the experimentally measured values. In contrast to other approaches, our method was able to provide quantitative predictions for both model organisms under a variety of conditions. Our approach requires no prior knowledge or assumption of a context-specific metabolic functionality and does not require trial-and-error parameter adjustments. Thus, our framework is of general applicability for modeling the transcription-dependent metabolism of bacteria and yeasts.
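
    A toy version of the weighted flux-minimization idea (the network, expression values and bounds are invented; the published framework operates on genome-scale models): minimize a weighted sum of flux magnitudes subject to steady state and a biomass bound, with weights derived from gene expression so that poorly expressed reactions are penalized. With irreversible reactions, |v| = v and the problem reduces to a linear program.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix: uptake -> A, two parallel routes A -> B,
# and export of B as a stand-in for biomass.
S = np.array([[ 1, -1, -1,  0],     # metabolite A: uptake - R2 - R3
              [ 0,  1,  1, -1]])    # metabolite B: R2 + R3 - biomass export
expression = np.array([0.9, 0.8, 0.1, 0.9])     # hypothetical normalized expression
weights = 1.0 / (expression + 1e-3)             # low expression -> high penalty

res = linprog(c=weights,
              A_eq=S, b_eq=np.zeros(2),
              bounds=[(0, 10), (0, 10), (0, 10), (1, 10)])   # biomass flux >= 1
print("fluxes:", np.round(res.x, 3))   # flux is routed through the well-expressed R2
```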

  19. The effects of sampling bias and model complexity on the predictive performance of MaxEnt species distribution models.

    Science.gov (United States)

    Syfert, Mindy M; Smith, Matthew J; Coomes, David A

    2013-01-01

    Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely-distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences, using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness and necessity of sampling-bias correction within MaxEnt.
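
    One common way to build the kind of sampling-effort ("bias") grid that can be supplied to MaxEnt is to rasterize the coordinates of all georeferenced records of the wider taxonomic group and smooth them; the sketch below uses random stand-in coordinates, not the New Zealand datasets, and is not the authors' exact procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hedged sketch of a sampling-effort ("bias") grid from record coordinates.
rng = np.random.default_rng(8)
lon = np.concatenate([rng.normal(172, 0.5, 3000), rng.uniform(166, 179, 500)])
lat = np.concatenate([rng.normal(-43, 0.5, 3000), rng.uniform(-47, -34, 500)])

effort, _, _ = np.histogram2d(lon, lat, bins=(120, 120),
                              range=[[166, 179], [-47, -34]])
bias = gaussian_filter(effort, sigma=2)          # smooth clustered survey effort
bias = 1e-3 + bias / bias.max()                  # bias grids must stay strictly positive
print("bias grid shape:", bias.shape,
      "min/max:", bias.min().round(3), bias.max().round(3))
```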

  20. Predicting the distribution pattern of small carnivores in response to environmental factors in the Western Ghats.

    Science.gov (United States)

    Kalle, Riddhika; Ramesh, Tharmalingam; Qureshi, Qamar; Sankar, Kalyanasundaram

    2013-01-01

    Due to their secretive habits, predicting the pattern of spatial distribution of small carnivores has been typically challenging, yet for conservation management it is essential to understand the association between this group of animals and environmental factors. We applied maximum entropy modeling (MaxEnt) to build distribution models and identify environmental predictors including bioclimatic variables, forest and land cover type, topography, vegetation index and anthropogenic variables for six small carnivore species in Mudumalai Tiger Reserve. Species occurrence records were collated from camera-traps and vehicle transects during the years 2010 and 2011. We used the average training gain from forty model runs for each species to select the best set of predictors. The area under the curve (AUC) of the receiver operating characteristic plot (ROC) ranged from 0.81 to 0.93 for the training data and 0.72 to 0.87 for the test data. In habitat models for F. chaus, P. hermaphroditus, and H. smithii "distance to village" and precipitation of the warmest quarter emerged as some of the most important variables. "Distance to village" and aspect were important for V. indica while "distance to village" and precipitation of the coldest quarter were significant for H. vitticollis. "Distance to village", precipitation of the warmest quarter and land cover were influential variables in the distribution of H. edwardsii. The map of predicted probabilities of occurrence showed potentially suitable habitats accounting for 46 km(2) of the reserve for F. chaus, 62 km(2) for V. indica, 30 km(2) for P. hermaphroditus, 63 km(2) for H. vitticollis, 45 km(2) for H. smithii and 28 km(2) for H. edwardsii. Habitat heterogeneity driven by the east-west climatic gradient was correlated with the spatial distribution of small carnivores. This study exemplifies the usefulness of modeling small carnivore distribution to prioritize and direct conservation planning for habitat specialists in

  1. Sources, occurrence and predicted aquatic impact of legacy and contemporary pesticides in streams.

    Science.gov (United States)

    McKnight, Ursula S; Rasmussen, Jes J; Kronvang, Brian; Binning, Philip J; Bjerg, Poul L

    2015-05-01

    We couple current findings of pesticides in surface and groundwater to the history of pesticide usage, focusing on the potential contribution of legacy pesticides to the predicted ecotoxicological impact on benthic macroinvertebrates in headwater streams. Results suggest that groundwater, in addition to precipitation and surface runoff, is an important source of pesticides (particularly legacy herbicides) entering surface water. In addition to current-use active ingredients, legacy pesticides, metabolites and impurities are important for explaining the estimated total toxicity attributable to pesticides. Sediment-bound insecticides were identified as the primary source for predicted ecotoxicity. Our results support recent studies indicating that highly sorbing chemicals contribute and even drive impacts on aquatic ecosystems. They further indicate that groundwater contaminated by legacy and contemporary pesticides may impact adjoining streams. Stream observations of soluble and sediment-bound pesticides are valuable for understanding the long-term fate of pesticides in aquifers, and should be included in stream monitoring programs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Predictive networks: a flexible, open source, web application for integration and analysis of human gene networks.

    Science.gov (United States)

    Haibe-Kains, Benjamin; Olsen, Catharina; Djebbari, Amira; Bontempi, Gianluca; Correll, Mick; Bouton, Christopher; Quackenbush, John

    2012-01-01

    Genomics provided us with an unprecedented quantity of data on the genes that are activated or repressed in a wide range of phenotypes. We have increasingly come to recognize that defining the networks and pathways underlying these phenotypes requires both the integration of multiple data types and the development of advanced computational methods to infer relationships between the genes and to estimate the predictive power of the networks through which they interact. To address these issues we have developed Predictive Networks (PN), a flexible, open-source, web-based application and data services framework that enables the integration, navigation, visualization and analysis of gene interaction networks. The primary goal of PN is to allow biomedical researchers to evaluate experimentally derived gene lists in the context of large-scale gene interaction networks. The PN analytical pipeline involves two key steps. The first is the collection of a comprehensive set of known gene interactions derived from a variety of publicly available sources. The second is to use these 'known' interactions together with gene expression data to infer robust gene networks. The PN web application is accessible from http://predictivenetworks.org. The PN code base is freely available at https://sourceforge.net/projects/predictivenets/.

  3. Predictive modeling of deep-sea fish distribution in the Azores

    Science.gov (United States)

    Parra, Hugo E.; Pham, Christopher K.; Menezes, Gui M.; Rosa, Alexandra; Tempera, Fernando; Morato, Telmo

    2017-11-01

    Understanding the link between fish and their habitat is essential for an ecosystem approach to fisheries management. However, determining such relationship is challenging, especially for deep-sea species. In this study, we applied generalized additive models (GAMs) to relate presence-absence and relative abundance data of eight economically-important fish species to environmental variables (depth, slope, aspect, substrate type, bottom temperature, salinity and oxygen saturation). We combined 13 years of catch data collected from systematic longline surveys performed across the region. Overall, presence-absence GAMs performed better than abundance models and predictions made for the observed data successfully predicted the occurrence of the eight deep-sea fish species. Depth was the most influential predictor of all fish species occurrence and abundance distributions, whereas other factors were found to be significant for some species but did not show such a clear influence. Our results predicted that despite the extensive Azores EEZ, the habitats available for the studied deep-sea fish species are highly limited and patchy, restricted to seamounts slopes and summits, offshore banks and island slopes. Despite some identified limitations, our GAMs provide an improved knowledge of the spatial distribution of these commercially important fish species in the region.
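
    As a hedged stand-in for the GAMs used above (synthetic data, with depth as the dominant predictor to mimic the qualitative finding), a spline-plus-logistic-regression presence/absence model can be sketched as:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer   # requires scikit-learn >= 1.0
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# GAM-like presence/absence sketch on synthetic depth/temperature data.
rng = np.random.default_rng(9)
depth = rng.uniform(100, 1500, 1000)                     # m
temp = 15 - 0.008 * depth + rng.normal(0, 0.5, 1000)     # degC, correlated with depth
p = 1 / (1 + np.exp(0.01 * (depth - 600)))               # occurrence peaks in shallower water
y = rng.random(1000) < p

X = np.c_[depth, temp]
model = make_pipeline(SplineTransformer(n_knots=6, degree=3),
                      LogisticRegression(max_iter=2000)).fit(X, y)
print("training AUC:", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 2))
```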

  4. Prediction of the low-velocity distribution from the pore structure in simple porous media

    Science.gov (United States)

    de Anna, Pietro; Quaife, Bryan; Biros, George; Juanes, Ruben

    2017-12-01

    The macroscopic properties of fluid flow and transport through porous media are a direct consequence of the underlying pore structure. However, precise relations that characterize flow and transport from the statistics of pore-scale disorder have remained elusive. Here we investigate the relationship between pore structure and the resulting fluid flow and asymptotic transport behavior in two-dimensional geometries of nonoverlapping circular posts. We derive an analytical relationship between the pore throat size distribution, f_λ ∼ λ^(−β), and the distribution of the low fluid velocities, f_u ∼ u^(−β/2), based on a conceptual model of porelets (the flow established within each pore throat, here a Hagen-Poiseuille flow). Our model allows us to make predictions, within a continuous-time random-walk framework, for the asymptotic statistics of the spreading of fluid particles along their own trajectories. These predictions are confirmed by high-fidelity simulations of Stokes flow and advective transport. The proposed framework can be extended to other configurations which can be represented as a collection of known flow distributions.

  5. Effects of predicted climatic changes on distribution of organic contaminants in brackish water mesocosms.

    Science.gov (United States)

    Ripszam, M; Gallampois, C M J; Berglund, Å; Larsson, H; Andersson, A; Tysklind, M; Haglund, P

    2015-06-01

    Predicted consequences of future climate change in the northern Baltic Sea include increases in sea surface temperatures and terrestrial dissolved organic carbon (DOC) runoff. These changes are expected to alter the environmental distribution of anthropogenic organic contaminants (OCs). To assess likely shifts in their distributions, outdoor mesocosms were employed to mimic pelagic ecosystems at two temperatures and two DOC concentrations, current: 15°C and 4 mg DOC L⁻¹ and, within ranges of predicted increases, 18°C and 6 mg DOC L⁻¹, respectively. Selected organic contaminants were added to the mesocosms to monitor changes in their distribution induced by the treatments. OC partitioning to particulate matter and sedimentation were enhanced at the higher DOC concentration, at both temperatures, while higher losses and lower partitioning of OCs to DOC were observed at the higher temperature. No combined effects of higher temperature and DOC on partitioning were observed, possibly because of the balancing nature of these processes. Therefore, changes in OCs' fates may largely depend on whether they are most sensitive to temperature or DOC concentration rises. Bromoanilines, phenanthrene, biphenyl and naphthalene were sensitive to the rise in DOC concentration, whereas organophosphates, chlorobenzenes (PCBz) and polychlorinated biphenyls (PCBs) were more sensitive to temperature. Mitotane and diflufenican were sensitive to both temperature and DOC concentration rises individually, but not in combination. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Human Papilloma Virus: Prevalence, distribution and predictive value to lymphatic metastasis in penile carcinoma

    Directory of Open Access Journals (Sweden)

    Aluizio Goncalves da Fonseca

    2013-07-01

    Objectives To evaluate the prevalence, distribution and association of HPV with histological patterns of worse prognosis in penile cancer, in order to evaluate its predictive value for inguinal metastasis, as well as to evaluate other previously reported prognostic factors. Material and Methods Tumor samples of 82 patients with penile carcinoma were tested in order to establish the prevalence and distribution of genotypic HPV using PCR. HPV status was correlated with histopathological factors and the presence of inguinal metastasis. The influence of several histological characteristics was also correlated with inguinal disease-free survival. Results Follow-up varied from 1 to 71 months (median 22 months). HPV DNA was identified in 60.9% of samples, with a higher prevalence of types 11 and 6 (64% and 32%, respectively). There was no significant correlation of the histological characteristics of worse prognosis of penile cancer with HPV status. Five-year inguinal disease-free survival also showed no influence of HPV status (p = 0.45). The only independent pathologic factors for inguinal metastasis were: stage T ≥ T1b-T4 (p = 0.02), lymphovascular invasion (p = 0.04) and infiltrative invasion (p = 0.03). Conclusions HPV status and distribution showed no correlation with histological aspects of worse prognosis, nor any predictive value for lymphatic metastasis in penile carcinoma.

  7. Methods for Prediction of Temperature Distribution in Flashover Caused by Backdraft Fire

    Directory of Open Access Journals (Sweden)

    Guowei Zhang

    2014-01-01

    Accurately predicting the temperature distribution in a flashover fire is a key issue for evacuation and fire-fighting. Many good flashover fire experiments have been conducted, but most of them were carried out in enclosures with fixed openings; fire development and temperature distribution in flashover caused by backdraft fire have not received enough attention. In order to study the flashover phenomenon caused by backdraft fire, a full-scale fire experiment was conducted in an abandoned office building. The process of fire development and the temperature distributions in the room and corridor were recorded separately during the experiment. The experiment shows that fire development in an enclosure is closely affected by the room ventilation. Unlike existing temperature curves, which have only one temperature peak, the temperature in a flashover caused by backdraft may have more than one peak value, and there is a linear relationship between the maximum peak temperature and the distance from the fire compartment. Based on the BFD curve and the experimental data, mathematical models are finally proposed to predict the temperature curve in a flashover fire caused by backdraft. These conclusions and the experimental data obtained in this paper could provide a valuable reference for fire simulation, hazard assessment, and fire protection design.
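
    For reference, one common parameterization of the BFD-type curve mentioned above is T(t) = T_a + ΔT_max·exp(−(ln t − ln t_m)²/s_c); the sketch below uses illustrative parameter values, not the experiment's fitted ones, and multiple peaks could be represented by summing several such curves.

```python
import numpy as np

# Hedged sketch of a BFD-type compartment temperature curve; the
# parameterization and all values are assumptions for illustration.
def bfd_curve(t_min, T_ambient=20.0, dT_max=900.0, t_peak=25.0, shape=1.0):
    return T_ambient + dT_max * np.exp(-((np.log(t_min) - np.log(t_peak)) ** 2) / shape)

t = np.linspace(1, 120, 5)                    # minutes
print([round(T, 1) for T in bfd_curve(t)])    # temperatures in degC
```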

  8. Evaluation and distribution of doses received by Cuban population due to environmental sources of radioactivity

    International Nuclear Information System (INIS)

    Zerquera, Juan T.; Prendes Alonso, Miguel; Fernandez Gomez, Isis M.; Lopez Bejerano, Gladys

    2008-01-01

    Full text: Within a national research project supported by the Nuclear Energy Agency of the Ministry of Science, Technology and Environment of the Republic of Cuba, the doses received by the Cuban population from environmental sources of radiation were assessed. Direct measurements were made of sources that, according to UNSCEAR data, account for 90% of the average total dose to the world population, and dose estimates were obtained for the different components of the total dose: exposure to cosmic radiation, external terrestrial radiation, potassium contained in the human body, and inhalation and ingestion of radionuclides present in the environment. Using these results, the total dose to the Cuban population from environmental radiation sources was estimated and the contributions of the different dose components were assessed. This was carried out through a Monte Carlo simulation of the total dose using the parameters of the dose distributions obtained for the different contributors (components) to the total dose. On this basis the average total effective dose to the Cuban population from environmental sources was estimated as 1.1 ± 0.3 mSv per year. This low value is in the range of doses estimated by UNSCEAR for the world population due to natural background and can be explained by the specifics of the Cuban environment: most of the population lives at sea level or at low altitudes, soils have a relatively low content of primordial radionuclides, and dwellings have high ventilation rates. (author)
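
    The total-dose estimate above is obtained by Monte Carlo sampling of the individual dose components and summing them. A minimal Python sketch of that idea follows; the component list and lognormal parameters are hypothetical placeholders, not the distributions fitted in the study.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000  # simulated individuals

      # Hypothetical lognormal parameters (mean and sigma of ln(dose in mSv/a))
      # for each contributor; the study fitted its own distributions.
      components = {
          "cosmic": (np.log(0.30), 0.2),
          "terrestrial_gamma": (np.log(0.35), 0.3),
          "internal_potassium": (np.log(0.17), 0.1),
          "inhalation_ingestion": (np.log(0.30), 0.4),
      }

      total = np.zeros(n)
      for mu, sigma in components.values():
          total += rng.lognormal(mean=mu, sigma=sigma, size=n)

      print(f"mean total dose: {total.mean():.2f} mSv/a, std: {total.std():.2f} mSv/a")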

  9. Application of Phasor Measurement Units for Protection of Distribution Networks with High Penetration of Photovoltaic Sources

    Science.gov (United States)

    Meskin, Matin

    The rate of integration of distributed generation (DG) units at the distribution level to meet the growth in demand is increasing, as a reasonable alternative to costly network expansion. This integration brings many advantages to consumers and power grids, but also gives rise to new challenges in protection and control. Recent research has brought to light the negative effects of DG units on short circuit currents and overcurrent (OC) protection systems in distribution networks. Changes in the direction of fault current flow, increases or decreases in fault current magnitude, protection blindness, feeder sympathy trips, nuisance trips of interrupting devices, and the disruption of coordination between protective devices are some potential impacts of DG unit integration. Among the various types of DG units, the integration of renewable energy resources into the electric grid has grown rapidly in recent years. In particular, the interconnection of photovoltaic (PV) sources to medium voltage (MV) distribution networks has increased rapidly in the last decade. In this work, the effect of PV sources on conventional OC relays in MV distribution networks is shown. It is indicated that PV output fluctuation, due to changes in solar radiation, causes the magnitude and direction of the current to change haphazardly. These variations may result in the poor operation of OC relays as the main protective devices in MV distribution networks. In other words, because of the bi-directional power flow and the fluctuation of current magnitude in the presence of PV sources, a suitable setting of OC relays is difficult to realize, and OC relays may trip under normal conditions. To improve OC relay operation, a voltage-dependent overcurrent protection is proposed. Although this new method prevents the OC relay from maloperation, its ability to detect earth faults and high impedance faults is poor. Thus, a

  10. Distribution, partitioning and sources of polycyclic aromatic hydrocarbons in Daliao River water system in dry season, China

    International Nuclear Information System (INIS)

    Guo Wei; He Mengchang; Yang Zhifeng; Lin Chunye; Quan Xiangchun; Men Bing

    2009-01-01

    Eighteen polycyclic aromatic hydrocarbons (PAHs) were analyzed in 29 surface water, 29 suspended particulate matter (SPM), 28 sediment, and 10 pore water samples from the Daliao River water system in the dry season. The total PAH concentration ranged from 570.2 to 2318.6 ng L⁻¹ in surface water, from 151.0 to 28483.8 ng L⁻¹ in SPM, from 102.9 to 3419.2 ng g⁻¹ in sediment and from 6.3 to 46.4 μg L⁻¹ in pore water. The concentration of dissolved PAHs was higher than that of particulate PAHs at many sites, but the opposite was generally observed at sites of wastewater discharge. The dissolved level of PAHs was much higher in the pore water than in the water column. Generally, the water column of the polluted branch streams contained higher PAH contents than their mainstream. The environmental behavior and fate of PAHs were examined in relation to physicochemical parameters such as pH, organic carbon, SPM content, water content and grain size in sediments. Results showed that organic carbon was the primary factor controlling the distribution of the PAHs in the Daliao River water system. Partitioning of PAHs between the sediment solid phase and the pore water phase was studied, and the measured relationship between log Koc and log Kow of PAHs in some sediments was compared with predicted values. PAHs other than naphthalene and acenaphthylene would accumulate largely in the sediment of the Daliao River water system. The sources of PAHs were evaluated using ratios of specific PAH compounds and the different wastewater discharge sources, indicating that combustion was the main source of PAH input.

  11. Open-source chemogenomic data-driven algorithms for predicting drug-target interactions.

    Science.gov (United States)

    Hao, Ming; Bryant, Stephen H; Wang, Yanli

    2018-02-06

    While novel technologies such as high-throughput screening have advanced together with significant investment by pharmaceutical companies during the past decades, the success rate for drug development has not yet improved, prompting researchers to look for new strategies of drug discovery. Drug repositioning is a potential approach to solve this dilemma. However, experimental identification and validation of potential drug targets encoded by the human genome is both costly and time-consuming. Therefore, effective computational approaches have been proposed to facilitate drug repositioning, which have proved to be successful in drug discovery. Doubtlessly, the availability of open-accessible data from basic chemical biology research and the success of human genome sequencing are crucial to develop effective in silico drug repositioning methods allowing the identification of potential targets for existing drugs. In this work, we review several chemogenomic data-driven computational algorithms with source codes publicly accessible for predicting drug-target interactions (DTIs). We organize these algorithms by model properties and model evolutionary relationships. We re-implemented five representative algorithms in the R programming language and compared them by means of mean percentile ranking, a new recall-based evaluation metric in the DTI prediction research field. We anticipate that this review will be objective and helpful to researchers who would like to further improve existing algorithms or need to choose appropriate algorithms to infer potential DTIs in their projects. The source codes for DTI predictions are available at: https://github.com/minghao2016/chemogenomicAlg4DTIpred. Published by Oxford University Press 2018. This work is written by US Government employees and is in the public domain in the US.
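
    Mean percentile ranking, the comparison metric mentioned above, is the average normalised rank of the known targets in each drug's predicted score list (lower is better). One plausible way to compute it is sketched below in Python; the score and interaction matrices are random illustrative data, and the exact ranking convention used in the review may differ.

      import numpy as np

      def mean_percentile_ranking(scores, interactions):
          """scores: (n_drugs, n_targets) predicted scores, higher = more likely.
          interactions: boolean matrix of known drug-target pairs."""
          ranks = []
          for s, y in zip(scores, interactions):
              order = np.argsort(-s)               # best-scored target first
              pos = np.empty_like(order)
              pos[order] = np.arange(len(s))       # rank position of every target
              ranks.extend((pos[y] + 1) / len(s))  # normalise to (0, 1]
          return float(np.mean(ranks))             # lower is better

      rng = np.random.default_rng(0)
      scores = rng.random((5, 100))                # illustrative predictions
      known = rng.random((5, 100)) < 0.05          # illustrative known interactions
      print(mean_percentile_ranking(scores, known))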

  12. HPSLPred: An Ensemble Multi-Label Classifier for Human Protein Subcellular Location Prediction with Imbalanced Source.

    Science.gov (United States)

    Wan, Shixiang; Duan, Yucong; Zou, Quan

    2017-09-01

    Predicting the subcellular localization of proteins is an important and challenging problem. Traditional experimental approaches are often expensive and time-consuming. Consequently, a growing number of research efforts employ a series of machine learning approaches to predict the subcellular location of proteins. There are two main challenges among the state-of-the-art prediction methods. First, most of the existing techniques are designed to deal with multi-class rather than multi-label classification, which ignores connections between multiple labels. In reality, multiple locations of particular proteins imply that there are vital and unique biological significances that deserve special focus and cannot be ignored. Second, techniques for handling imbalanced data in multi-label classification problems are necessary, but never employed. For solving these two issues, we have developed an ensemble multi-label classifier called HPSLPred, which can be applied for multi-label classification with an imbalanced protein source. For convenience, a user-friendly webserver has been established at http://server.malab.cn/HPSLPred. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Development and validation of a new virtual source model for portal image prediction and treatment quality control

    International Nuclear Information System (INIS)

    Chabert, Isabelle

    2015-01-01

    Intensity-Modulated Radiation Therapy (IMRT) requires extensive verification procedures to ensure correct dose delivery. Electronic Portal Imaging Devices (EPIDs) are widely used for quality assurance in radiotherapy and also for dosimetric verification. For the latter application, the images obtained during the treatment session can be compared to a pre-calculated reference image in order to highlight dose delivery errors. The quality control performance depends on (1) the accuracy of the pre-calculated reference image and (2) the ability of the image-comparison tool to detect errors. These two key points were studied during this PhD work. We chose to use a Monte Carlo (MC)-based method developed in the laboratory, based on the DPGLM (Dirichlet process generalized linear model) de-noising technique, to predict high-resolution reference images. A model of the studied linear accelerator (linac Synergy, Elekta, Crawley, UK) was first developed using the PENELOPE MC codes and then commissioned using measurements acquired at the Hopital Nord in Marseille. A 71 GB phase space file (PSF) stored under the flattening filter was then analyzed to build a new kind of virtual source model (VSM) based on correlated histograms (200 MB). This new and compact VSM is as accurate as the PSF for calculating dose distributions in water if histogram sampling is based on an adaptive method. The associated EPID modelling in PENELOPE suggests that the hypotheses about the linac primary source were too simple and should be reconsidered. The use of the VSM to predict high-resolution portal images nevertheless led to excellent results. The VSM and the associated linac and EPID MC models were used to detect errors in IMRT treatment plans. A preliminary study was conducted in which treatment errors were deliberately introduced into portal image calculations (primary source parameters, phantom position and morphology changes). The γ-index commonly used in clinical routine appears to be less effective than the

  14. Study of burden distribution characteristics (IV): the development of a distribution predicting model in which coke collapse has been taken into account

    Energy Technology Data Exchange (ETDEWEB)

    Kamisaka, E; Okuno, Y; Irita, T; Matsuzaki, M; Isoyama, T; Kunitomo, K

    1984-01-01

    Using results quoted in a previous report (see Tetsu To Hagane, Vol. 68, page S 701, 1982), coke collapse has been quantified by means of landslide theory, according to which the stability of the burden is given by a safety factor which equals resistance moment/sliding moment. This has enabled coke collapse to be introduced in a model for predicting burden distribution. Application of this model has resulted in more accurate predictions of burden distribution, the computed values being in close agreement with the results of distribution experiments. 1 reference.

  15. Multi-Source Multi-Target Dictionary Learning for Prediction of Cognitive Decline.

    Science.gov (United States)

    Zhang, Jie; Li, Qingyang; Caselli, Richard J; Thompson, Paul M; Ye, Jieping; Wang, Yalin

    2017-06-01

    Alzheimer's Disease (AD) is the most common type of dementia. Identifying correct biomarkers may determine pre-symptomatic AD subjects and enable early intervention. Recently, multi-task sparse feature learning has been successfully applied to many computer vision and biomedical informatics problems. It aims to improve generalization performance by exploiting the features shared among different tasks. However, most of the existing algorithms are formulated as supervised learning schemes, whose drawback is either an insufficient number of features or missing label information. To address these challenges, we formulate an unsupervised framework for multi-task sparse feature learning based on a novel dictionary learning algorithm. To solve the unsupervised learning problem, we propose a two-stage Multi-Source Multi-Target Dictionary Learning (MMDL) algorithm. In stage 1, we propose a multi-source dictionary learning method to utilize the common and individual sparse features in different time slots. In stage 2, supported by a rigorous theoretical analysis, we develop a multi-task learning method to solve the missing label problem. Empirical studies on an N = 3970 longitudinal brain image data set, which involves 2 sources and 5 targets, demonstrate the improved prediction accuracy and speed efficiency of MMDL in comparison with other state-of-the-art algorithms.

  16. Finite Element Modelling of a Pattern of Temperature Distribution during Travelling Heat Source from Oxyacetylene Flame

    Directory of Open Access Journals (Sweden)

    Alkali Adam Umar

    2014-07-01

    Full Text Available A 3D finite element model was developed to analyse the conduction temperature distribution on a type 304 stainless steel workpiece. An experimental heating-only test was conducted using the input parameters from the FEM model, which predicted the temperature field on the 304 stainless steel workpieces. A similar temperature pattern was observed for both the FEM model and the experiment. Conduction was observed to be the dominant heat transfer mode. Maximum temperatures occurred in the regions of contact between the flame and the workpieces. The maximum temperature attained during the two investigated runs was 355°C. Even so, the austenite crystal morphology was retained in the preheated workpiece.

  17. DemQSAR: predicting human volume of distribution and clearance of drugs.

    Science.gov (United States)

    Demir-Kavuk, Ozgur; Bentzien, Jörg; Muegge, Ingo; Knapp, Ernst-Walter

    2011-12-01

    In silico methods characterizing molecular compounds with respect to pharmacologically relevant properties can accelerate the identification of new drugs and reduce their development costs. Quantitative structure-activity/-property relationship (QSAR/QSPR) correlate structure and physico-chemical properties of molecular compounds with a specific functional activity/property under study. Typically a large number of molecular features are generated for the compounds. In many cases the number of generated features exceeds the number of molecular compounds with known property values that are available for learning. Machine learning methods tend to overfit the training data in such situations, i.e. the method adjusts to very specific features of the training data, which are not characteristic for the considered property. This problem can be alleviated by diminishing the influence of unimportant, redundant or even misleading features. A better strategy is to eliminate such features completely. Ideally, a molecular property can be described by a small number of features that are chemically interpretable. The purpose of the present contribution is to provide a predictive modeling approach, which combines feature generation, feature selection, model building and control of overtraining into a single application called DemQSAR. DemQSAR is used to predict human volume of distribution (VD(ss)) and human clearance (CL). To control overtraining, quadratic and linear regularization terms were employed. A recursive feature selection approach is used to reduce the number of descriptors. The prediction performance is as good as the best predictions reported in the recent literature. The example presented here demonstrates that DemQSAR can generate a model that uses very few features while maintaining high predictive power. A standalone DemQSAR Java application for model building of any user defined property as well as a web interface for the prediction of human VD(ss) and CL is
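
    The general recipe described above (descriptor generation, regularisation against overtraining, and recursive elimination of unimportant features) can be illustrated with a short scikit-learn sketch; this is not the DemQSAR implementation, and the descriptor matrix and target values are random placeholders standing in for a property such as log VD(ss).

      import numpy as np
      from sklearn.feature_selection import RFE
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 500))   # 200 compounds x 500 descriptors (placeholder)
      y = X[:, :10] @ rng.normal(size=10) + rng.normal(scale=0.5, size=200)

      # L2 (quadratic) regularisation limits overtraining; recursive feature
      # elimination prunes redundant or misleading descriptors.
      model = RFE(estimator=Ridge(alpha=1.0), n_features_to_select=20, step=0.1)
      scores = cross_val_score(model, X, y, cv=5, scoring="r2")
      print("cross-validated R^2:", round(scores.mean(), 3))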

  18. Distribution of Short-Term and Lifetime Predicted Risks of Cardiovascular Diseases in Peruvian Adults.

    Science.gov (United States)

    Quispe, Renato; Bazo-Alvarez, Juan Carlos; Burroughs Peña, Melissa S; Poterico, Julio A; Gilman, Robert H; Checkley, William; Bernabé-Ortiz, Antonio; Huffman, Mark D; Miranda, J Jaime

    2015-08-07

    Short-term risk assessment tools for prediction of cardiovascular disease events are widely recommended in clinical practice and are used largely for single time-point estimations; however, persons with low predicted short-term risk may have higher risks across longer time horizons. We estimated short-term and lifetime cardiovascular disease risk in a pooled population from 2 studies of Peruvian populations. Short-term risk was estimated using the atherosclerotic cardiovascular disease Pooled Cohort Risk Equations. Lifetime risk was evaluated using the algorithm derived from the Framingham Heart Study cohort. Using previously published thresholds, participants were classified into 3 categories: low short-term and low lifetime risk, low short-term and high lifetime risk, and high short-term predicted risk. We also compared the distribution of these risk profiles across educational level, wealth index, and place of residence. We included 2844 participants (50% men, mean age 55.9 years [SD 10.2 years]) in the analysis. Approximately 1 of every 3 participants (34% [95% CI 33 to 36]) had a high short-term estimated cardiovascular disease risk. Among those with a low short-term predicted risk, more than half (54% [95% CI 52 to 56]) had a high lifetime predicted risk. Short-term and lifetime predicted risks were higher for participants with lower versus higher wealth indexes and educational levels and for those living in urban versus rural areas. A substantial proportion of Peruvian adults were classified as low short-term risk but high lifetime risk. Vulnerable adults, such as those from low socioeconomic status and those living in urban areas, may need greater attention regarding cardiovascular preventive strategies. © 2015 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.

  19. Distribution of Short-Term and Lifetime Predicted Risks of Cardiovascular Diseases in Peruvian Adults

    Science.gov (United States)

    Quispe, Renato; Bazo-Alvarez, Juan Carlos; Burroughs Peña, Melissa S; Poterico, Julio A; Gilman, Robert H; Checkley, William; Bernabé-Ortiz, Antonio; Huffman, Mark D; Miranda, J Jaime

    2015-01-01

    Background Short-term risk assessment tools for prediction of cardiovascular disease events are widely recommended in clinical practice and are used largely for single time-point estimations; however, persons with low predicted short-term risk may have higher risks across longer time horizons. Methods and Results We estimated short-term and lifetime cardiovascular disease risk in a pooled population from 2 studies of Peruvian populations. Short-term risk was estimated using the atherosclerotic cardiovascular disease Pooled Cohort Risk Equations. Lifetime risk was evaluated using the algorithm derived from the Framingham Heart Study cohort. Using previously published thresholds, participants were classified into 3 categories: low short-term and low lifetime risk, low short-term and high lifetime risk, and high short-term predicted risk. We also compared the distribution of these risk profiles across educational level, wealth index, and place of residence. We included 2844 participants (50% men, mean age 55.9 years [SD 10.2 years]) in the analysis. Approximately 1 of every 3 participants (34% [95% CI 33 to 36]) had a high short-term estimated cardiovascular disease risk. Among those with a low short-term predicted risk, more than half (54% [95% CI 52 to 56]) had a high lifetime predicted risk. Short-term and lifetime predicted risks were higher for participants with lower versus higher wealth indexes and educational levels and for those living in urban versus rural areas. Conclusions A substantial proportion of Peruvian adults were classified as low short-term risk but high lifetime risk. Vulnerable adults, such as those from low socioeconomic status and those living in urban areas, may need greater attention regarding cardiovascular preventive strategies. PMID:26254303

  20. Emphysema Distribution and Diffusion Capacity Predict Emphysema Progression in Human Immunodeficiency Virus Infection

    Science.gov (United States)

    Leung, Janice M; Malagoli, Andrea; Santoro, Antonella; Besutti, Giulia; Ligabue, Guido; Scaglioni, Riccardo; Dai, Darlene; Hague, Cameron; Leipsic, Jonathon; Sin, Don D.; Man, SF Paul; Guaraldi, Giovanni

    2016-01-01

    Background Chronic obstructive pulmonary disease (COPD) and emphysema are common amongst patients with human immunodeficiency virus (HIV). We sought to determine the clinical factors that are associated with emphysema progression in HIV. Methods 345 HIV-infected patients enrolled in an outpatient HIV metabolic clinic with ≥2 chest computed tomography scans made up the study cohort. Images were qualitatively scored for emphysema based on percentage involvement of the lung. Emphysema progression was defined as any increase in emphysema score over the study period. Univariate analyses of clinical, respiratory, and laboratory data, as well as multivariable logistic regression models, were performed to determine clinical features significantly associated with emphysema progression. Results 17.4% of the cohort were emphysema progressors. Emphysema progression was most strongly associated with having a low baseline diffusion capacity of carbon monoxide (DLCO) and having combination centrilobular and paraseptal emphysema distribution. In adjusted models, the odds ratio (OR) for emphysema progression for every 10% increase in DLCO percent predicted was 0.58 (95% confidence interval [CI] 0.41–0.81). The equivalent OR (95% CI) for centrilobular and paraseptal emphysema distribution was 10.60 (2.93–48.98). Together, these variables had an area under the curve (AUC) statistic of 0.85 for predicting emphysema progression. This was an improvement over the performance of spirometry (forced expiratory volume in 1 second to forced vital capacity ratio), which predicted emphysema progression with an AUC of only 0.65. Conclusion Combined paraseptal and centrilobular emphysema distribution and low DLCO could identify HIV patients who may experience emphysema progression. PMID:27902753

  1. Predicting plant invasions under climate change: are species distribution models validated by field trials?

    Science.gov (United States)

    Sheppard, Christine S; Burns, Bruce R; Stanley, Margaret C

    2014-09-01

    Climate change may facilitate alien species invasion into new areas, particularly for species from warm native ranges introduced into areas currently marginal for temperature. Although conclusions from modelling approaches and experimental studies are generally similar, combining the two approaches has rarely occurred. The aim of this study was to validate species distribution models by conducting field trials in sites of differing suitability as predicted by the models, thus increasing confidence in their ability to assess invasion risk. Three recently naturalized alien plants in New Zealand were used as study species (Archontophoenix cunninghamiana, Psidium guajava and Schefflera actinophylla): they originate from warm native ranges, are woody bird-dispersed species and of concern as potential weeds. Seedlings were grown in six sites across the country, differing both in climate and suitability (as predicted by the species distribution models). Seedling growth and survival were recorded over two summers and one or two winter seasons, and temperature and precipitation were monitored hourly at each site. Additionally, alien seedling performances were compared to those of closely related native species (Rhopalostylis sapida, Lophomyrtus bullata and Schefflera digitata). Furthermore, half of the seedlings were sprayed with pesticide, to investigate whether enemy release may influence performance. The results showed large differences in growth and survival of the alien species among the six sites. In the more suitable sites, performance was frequently higher compared to the native species. Leaf damage from invertebrate herbivory was low for both alien and native seedlings, with little evidence that the alien species should have an advantage over the native species because of enemy release. Correlations between performance in the field and predicted suitability of species distribution models were generally high. The projected increase in minimum temperature and reduced

  2. Distributed Sensor Network for meteorological observations and numerical weather Prediction Calculations

    Directory of Open Access Journals (Sweden)

    Á. Vas

    2013-06-01

    Full Text Available The prediction of weather generally means the solution of differential equations on the basis of measured initial conditions, where the data of close and distant neighboring points are used in the calculations. It requires the maintenance of expensive weather stations and supercomputers. However, if weather stations are not only capable of measuring but can also communicate with each other, then these smart sensors can also be used to run the forecasting calculations. This achieves the highest possible level of parallelization without collecting the measured data in one place. Furthermore, if more nodes are involved, the result becomes more accurate, but the computing power required from each node does not increase. Our Distributed Sensor Network for meteorological sensing and numerical weather Prediction Calculations (DSN-PC) can be applied in several different areas where sensing and numerical calculations, even the solution of differential equations, are needed.
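
    The parallelisation idea in this record is that each station keeps only its own value and repeatedly exchanges it with neighbouring stations, so the governing equations can be relaxed without gathering all measurements centrally. A toy Python illustration of such a neighbour-exchange step (a Jacobi-type relaxation on a 1-D line of nodes) is given below; the values and iteration count are purely illustrative and this is not the DSN-PC system itself.

      import numpy as np

      # one "node" per station on a 1-D line; values are an initial temperature field
      field = np.array([15.0, 17.5, 21.0, 19.0, 16.5, 14.0])

      for _ in range(50):                       # each sweep could run in parallel
          new = field.copy()
          # every interior node averages its two neighbours (Jacobi relaxation),
          # needing only local communication with adjacent nodes
          new[1:-1] = 0.5 * (field[:-2] + field[2:])
          field = new

      print(np.round(field, 2))                 # relaxes toward a linear profile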

  3. Artificial neural network application for predicting soil distribution coefficient of nickel

    International Nuclear Information System (INIS)

    Falamaki, Amin

    2013-01-01

    The distribution (or partition) coefficient (Kd) is a widely used parameter for modeling contaminant and radionuclide transport as well as for risk analysis. Selection of this parameter may cause significant error in predicting the impacts of contaminant migration or site-remediation options. In this regard, various models have been presented to predict Kd values for different contaminants, especially heavy metals and radionuclides. In this study, an artificial neural network (ANN) is used to present a simplified model for predicting the Kd of nickel. The main objective is to develop a more accurate model with a minimal number of parameters, which can be determined experimentally or selected from a review of different studies. In addition, the effects of training as well as the type of the network are considered. The Kd of Ni is strongly dependent on soil pH, and mathematical relationships between pH and the Kd of nickel have recently been presented. In this study, the same database used for those models was used to verify that neural networks may be a more useful tool for predicting Kd. Two different types of ANN, multilayer perceptron and radial basis function, were used to investigate the effect of the network geometry on the results. In addition, each network was trained with 80% and 90% of the data and tested on the remaining 20% and 10%, respectively. The results of the networks were then compared with the results of the mathematical models. Although the networks were trained with only 80% and 90% of the data, the results show that all the networks predict with higher accuracy than the mathematical models, which were derived from 100% of the data. More training of a network increases its accuracy. The multilayer perceptron network used in this study predicts better than the radial basis function network. - Highlights: ► Simplified models for predicting the Kd of nickel are presented using artificial neural networks. ► Multilayer perceptron and radial basis function networks were used to predict the Kd of nickel in
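
    The setup described above, a multilayer perceptron trained on 80-90% of pH-Kd pairs and tested on the remainder, can be sketched in a few lines with scikit-learn; the pH and log Kd values below are synthetic placeholders, not the database used in the study.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(7)
      pH = rng.uniform(4.0, 9.0, size=200)
      # synthetic stand-in for an empirical pH - log Kd relationship
      log_kd = 0.8 * pH - 2.0 + rng.normal(scale=0.3, size=pH.size)

      X_train, X_test, y_train, y_test = train_test_split(
          pH.reshape(-1, 1), log_kd, test_size=0.2, random_state=0)  # 80/20 split

      mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
      mlp.fit(X_train, y_train)
      print("test R^2:", round(mlp.score(X_test, y_test), 3))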

  4. Analytic solution of field distribution and demagnetization function of ideal hollow cylindrical field source

    Science.gov (United States)

    Xu, Xiaonong; Lu, Dingwei; Xu, Xibin; Yu, Yang; Gu, Min

    2017-09-01

    The Halbach type hollow cylindrical permanent magnet array (HCPMA) is a volume-compact and energy-conserving field source, which has attracted intense interest in many practical applications. Here, using the complex variable integration method based on the Biot-Savart Law (including current distributions inside the body and on the surfaces of the magnet), we derive analytical field solutions for an ideal multipole HCPMA in the entire space, including the interior of the magnet. The analytic field expression inside the array material is used to construct an analytic demagnetization function, with which we can explain the origin of demagnetization phenomena in the HCPMA by taking into account an ideal magnetic hysteresis loop with finite coercivity. These analytical field expressions and demagnetization functions provide deeper insight into the nature of such permanent magnet array systems and offer guidance in designing optimized array systems.

  5. Who bears the environmental burden in China? An analysis of the distribution of industrial pollution sources

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Chunbo [School of Agricultural and Resource Economics, University of Western Australia, 35 Stirling Highway, Crawley, 6009, Western Australia (Australia)

    2010-07-15

    A remaining challenge for environmental inequality researchers is to translate the principles developed in the U.S. to China, which is experiencing the staggering environmental impacts of its astounding economic growth and social changes. This study builds on contemporary U.S. environmental justice literature and examines the issue of environmental inequality in China through an analysis of the geographical distribution of industrial pollution sources in Henan province. This study attempts to answer two central questions: (1) whether environmental inequality exists in China and, if it does, (2) what socioeconomic lenses can be used to identify environmental inequality. The study found that: (1) race and income, the two common lenses used in many U.S. studies, play different roles in the Chinese context; (2) rural residents and especially rural migrants are disproportionately exposed to industrial pollution. (author)

  6. Distributed source term analysis, a new approach to nuclear material inventory verification

    CERN Document Server

    Beddingfield, D H

    2002-01-01

    The Distributed Source-Term Analysis (DSTA) technique is a new approach to measuring in-process material holdup that is a significant departure from traditional hold-up measurement methodology. The DSTA method is a means of determining the mass of nuclear material within a large, diffuse volume using passive neutron counting. The DSTA method is a more efficient approach than traditional methods of holdup measurement and inventory verification. The time spent in performing DSTA measurement and analysis is a fraction of that required by traditional techniques. The error ascribed to a DSTA survey result is generally less than that from traditional methods. Also, the negative bias ascribed to gamma-ray methods is greatly diminished because the DSTA method uses neutrons which are more penetrating than gamma-rays.

  7. Distributed source term analysis, a new approach to nuclear material inventory verification

    International Nuclear Information System (INIS)

    Beddingfield, D.H.; Menlove, H.O.

    2002-01-01

    The Distributed Source-Term Analysis (DSTA) technique is a new approach to measuring in-process material holdup that is a significant departure from traditional hold-up measurement methodology. The DSTA method is a means of determining the mass of nuclear material within a large, diffuse volume using passive neutron counting. The DSTA method is a more efficient approach than traditional methods of holdup measurement and inventory verification. The time spent in performing DSTA measurement and analysis is a fraction of that required by traditional techniques. The error ascribed to a DSTA survey result is generally less than that from traditional methods. Also, the negative bias ascribed to γ-ray methods is greatly diminished because the DSTA method uses neutrons which are more penetrating than γ-rays

  8. Reference-Frame-Independent and Measurement-Device-Independent Quantum Key Distribution Using One Single Source

    Science.gov (United States)

    Li, Qian; Zhu, Changhua; Ma, Shuquan; Wei, Kejin; Pei, Changxing

    2018-04-01

    Measurement-device-independent quantum key distribution (MDI-QKD) is immune to all detector side-channel attacks. However, practical implementations of MDI-QKD, which require two-photon interferences from separated independent single-photon sources and a nontrivial reference alignment procedure, are still challenging with current technologies. Here, we propose a scheme that significantly reduces the experimental complexity of two-photon interferences and eliminates reference frame alignment by the combination of plug-and-play and reference frame independent MDI-QKD. Simulation results show that the secure communication distance can be up to 219 km in the finite-data case and the scheme has good potential for practical MDI-QKD systems.

  9. Distribution and Source of Sedimentary Polycyclic Aromatic Hydrocarbons (PAHs) in River Sediment of Jakarta

    Directory of Open Access Journals (Sweden)

    Rinawati Rinawati

    2017-11-01

    Full Text Available In this study, the distribution and source identification of sedimentary PAHs from 13 rivers running through Jakarta City were investigated. Freeze-dried sediment samples were extracted by pressurized fluid extraction and purified by two steps of column chromatography. PAHs were identified and quantified by gas chromatography-mass spectrometry (GC-MS). High concentrations of PAHs, ranging from 1992 to 17635 ng/g-dw, were observed at all sampling locations. Ratios of alkylated PAHs to parent PAHs exhibited both petrogenic and pyrogenic signatures, with predominantly petrogenic inputs. High hopane concentrations (4238-40375 ng/g dry sediment) supported the petrogenic input to Jakarta’s rivers. The high concentrations of PAHs are an indicator of organic micropollution in the urban aquatic environment of Jakarta and may have the potential to cause adverse effects on the environment.

  10. Space power distribution of soft x-ray source ANGARA-5-1

    Energy Technology Data Exchange (ETDEWEB)

    Dyabilin, K S [High Energy Density Research Center, Moscow (Russian Federation); Fortov, V E; Grabovskij, E V; Lebedev, M E; Smirnov, V P [Troitsk Inst. of Innovative and Fusion Research, Troitsk (Russian Federation)

    1997-12-31

    The contribution deals with the investigation of shock waves in condensed targets generated by intense pulses of soft X radiation. Main attention is paid to the spatial distribution of the soft x-ray power, which strongly influences the uniformity of the shock wave front. A hot z-pinch plasma with a temperature of 60-100 eV, produced by an imploding double liner in the ANGARA-5-1 machine, was used as the source of x rays. The maximum pinch current was as high as 3.5 MA. In order to eliminate the thermal heating of the targets, thick stepped Al/Pb, Sn/Pb, or pure Pb targets were used. The velocity of shock waves was determined by means of optical methods. Very uniform shock waves and shock pressures of up to several hundreds of GPa have been achieved. (J.U.). 3 figs., 2 refs.

  11. Predictions of Gene Family Distributions in Microbial Genomes: Evolution by Gene Duplication and Modification

    International Nuclear Information System (INIS)

    Yanai, Itai; Camacho, Carlos J.; DeLisi, Charles

    2000-01-01

    A universal property of microbial genomes is the considerable fraction of genes that are homologous to other genes within the same genome. The process by which these homologues are generated is not well understood, but sequence analysis of 20 microbial genomes unveils a recurrent distribution of gene family sizes. We show that a simple evolutionary model based on random gene duplication and point mutations fully accounts for these distributions and permits predictions for the number of gene families in genomes not yet complete. Our findings are consistent with the notion that a genome evolves from a set of precursor genes to a mature size by gene duplications and increasing modifications. (c) 2000 The American Physical Society
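
    The evolutionary model summarised above grows a genome by repeated random duplication, with accumulated modifications occasionally producing a new family. A crude Python simulation of such a birth-and-divergence process is sketched below to show how a long-tailed family-size distribution emerges; the parameters are arbitrary and the rules are simplified relative to the published model.

      import random
      from collections import Counter

      def simulate_families(precursors=50, final_size=4000, p_new_family=0.02, seed=0):
          """Grow a genome by duplication; with a small probability the copy has
          diverged enough (through modifications) to found a new family."""
          random.seed(seed)
          genes = list(range(precursors))           # family label of every gene
          next_family = precursors
          while len(genes) < final_size:
              copy = random.choice(genes)           # duplicate a randomly chosen gene
              if random.random() < p_new_family:    # modification founds a new family
                  genes.append(next_family)
                  next_family += 1
              else:
                  genes.append(copy)
          return Counter(genes)                     # family label -> number of members

      size_counts = Counter(simulate_families().values())
      for size in sorted(size_counts):
          print(f"families of size {size:3d}: {size_counts[size]}")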

  12. Predictions of Gene Family Distributions in Microbial Genomes: Evolution by Gene Duplication and Modification

    Energy Technology Data Exchange (ETDEWEB)

    Yanai, Itai; Camacho, Carlos J.; DeLisi, Charles

    2000-09-18

    A universal property of microbial genomes is the considerable fraction of genes that are homologous to other genes within the same genome. The process by which these homologues are generated is not well understood, but sequence analysis of 20 microbial genomes unveils a recurrent distribution of gene family sizes. We show that a simple evolutionary model based on random gene duplication and point mutations fully accounts for these distributions and permits predictions for the number of gene families in genomes not yet complete. Our findings are consistent with the notion that a genome evolves from a set of precursor genes to a mature size by gene duplications and increasing modifications. (c) 2000 The American Physical Society.

  13. A distributed predictive control approach for periodic flow-based networks: application to drinking water systems

    Science.gov (United States)

    Grosso, Juan M.; Ocampo-Martinez, Carlos; Puig, Vicenç

    2017-10-01

    This paper proposes a distributed model predictive control approach designed to work in a cooperative manner for controlling flow-based networks showing periodic behaviours. Under this distributed approach, local controllers cooperate in order to enhance the performance of the whole flow network, avoiding the use of a coordination layer. Instead, controllers use both the monolithic model of the network and the given global cost function to optimise their local control inputs while taking into account the effect of their decisions on the remaining subsystems that make up the entire network. In this sense, a global (all-to-all) communication strategy is considered. Although Pareto optimality cannot be reached due to the existence of non-sparse coupling constraints, asymptotic convergence to a Nash equilibrium is guaranteed. The resultant strategy is tested and its effectiveness is shown when applied to a large-scale complex flow-based network: the Barcelona drinking water supply system.

  14. Predictive typing of drug-induced neurological sufferings from studies of the distribution of labelled drugs

    International Nuclear Information System (INIS)

    Takasu, T.

    1980-01-01

    A drug given to an animal becomes widely distributed throughout the body, acting on living mechanisms or structures, and is gradually excreted. Some drugs can remain in some parts of the body for a long period. For example, ¹⁴C-chloramphenicol was found to remain preferentially in the salivary gland, liver and bone marrow of mice 24 hours after its oral administration. If such a drug is given repeatedly, it could possibly accumulate gradually in these organs. Thus, when its accumulation in a particular part of the body exceeds a certain level, the living mechanism or structure may be injured. The harmful effects of a drug under repeated administration are called its chronic toxicity. The author discusses whether it is possible to predict the toxicity of a drug by studying its distribution in relation to time and, if so, at which points in time. This problem is studied especially in relation to the nervous system. (Auth.)

  15. Distributed model predictive control for constrained nonlinear systems with decoupled local dynamics.

    Science.gov (United States)

    Zhao, Meng; Ding, Baocang

    2015-03-01

    This paper considers the distributed model predictive control (MPC) of nonlinear large-scale systems with dynamically decoupled subsystems. Based on the coupled states in the overall cost function of centralized MPC, the neighbors of each subsystem are identified and fixed, and the overall objective function is decomposed into the local optimizations. In order to guarantee the closed-loop stability of the distributed MPC algorithm, the overall compatibility constraint of the centralized MPC algorithm is decomposed into each local controller. The communication load between each subsystem and its neighbors is relatively low: only the current states before optimization and the optimized input variables after optimization are transferred. For each local controller, the quasi-infinite horizon MPC algorithm is adopted, and the global closed-loop system is proven to be exponentially stable. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  16. CFD prediction of flow and phase distribution in fuel assemblies with spacers

    Energy Technology Data Exchange (ETDEWEB)

    Anglart, H.; Nylund, O. [ABB Atom AB, Vasteras (Sweden); Kurul, N. [Rensselaer Polytechnic Institute, Troy, NY (United States)] [and others

    1995-09-01

    This paper is concerned with the modeling and computation of multi-dimensional two-phase flows in BWR fuel assemblies. The modeling principles are presented based on using a two-fluid model in which lateral interfacial effects are accounted for. This model has been used to evaluate the velocity fields of both vapor and liquid phases, as well as phase distribution, between fuel elements in geometries similar to BWR fuel bundles. Furthermore, this model has been used to predict, in a detailed mechanistic manner, the effects of spacers on flow and phase distribution between, and pressure drop along, fuel elements. The related numerical simulations have been performed using a CFD computer code, CFDS-FLOW3D.

  17. Maxent modeling for predicting the potential geographical distribution of two peony species under climate change.

    Science.gov (United States)

    Zhang, Keliang; Yao, Linjun; Meng, Jiasong; Tao, Jun

    2018-09-01

    Paeonia (Paeoniaceae), an economically important plant genus, includes many popular ornamentals and medicinal plant species used in traditional Chinese medicine. Little is known about the properties of the habitat distribution and the important eco-environmental factors shaping the suitability. Based on high-resolution environmental data for current and future climate scenarios, we modeled the present and future suitable habitat for P. delavayi and P. rockii by Maxent, evaluated the importance of environmental factors in shaping their distribution, and identified distribution shifts under climate change scenarios. The results showed that the moderate and high suitable areas for P. delavayi and P. rockii encompassed ca. 4.46×10⁵ km² and 1.89×10⁵ km², respectively. Temperature seasonality and isothermality were identified as the most critical factors shaping P. delavayi distribution, and UVB-4 and annual precipitation were identified as the most critical for shaping P. rockii distribution. Under the scenario with a low concentration of greenhouse gas emissions (RCP2.6), the range of both species increased as global warming intensified; however, under the scenario with higher concentrations of emissions (RCP8.5), the suitable habitat range of P. delavayi decreased while P. rockii increased. Overall, our prediction showed that a shift in distribution of suitable habitat to higher elevations would gradually become more significant. The information gained from this study should provide a useful reference for implementing long-term conservation and management strategies for these species. Copyright © 2018. Published by Elsevier B.V.

  18. The problem of predicting the size distribution of sediment supplied by hillslopes to rivers

    Science.gov (United States)

    Sklar, Leonard S.; Riebe, Clifford S.; Marshall, Jill A.; Genetti, Jennifer; Leclere, Shirin; Lukens, Claire L.; Merces, Viviane

    2017-01-01

    Sediments link hillslopes to river channels. The size of sediments entering channels is a key control on river morphodynamics across a range of scales, from channel response to human land use to landscape response to changes in tectonic and climatic forcing. However, very little is known about what controls the size distribution of particles eroded from bedrock on hillslopes, and how particle sizes evolve before sediments are delivered to channels. Here we take the first steps toward building a geomorphic transport law to predict the size distribution of particles produced on hillslopes and supplied to channels. We begin by identifying independent variables that can be used to quantify the influence of five key boundary conditions: lithology, climate, life, erosion rate, and topography, which together determine the suite of geomorphic processes that produce and transport sediments on hillslopes. We then consider the physical and chemical mechanisms that determine the initial size distribution of rock fragments supplied to the hillslope weathering system, and the duration and intensity of weathering experienced by particles on their journey from bedrock to the channel. We propose a simple modeling framework with two components. First, the initial rock fragment sizes are set by the distribution of spacing between fractures in unweathered rock, which is influenced by stresses encountered by rock during exhumation and by rock resistance to fracture propagation. That initial size distribution is then transformed by a weathering function that captures the influence of climate and mineralogy on chemical weathering potential, and the influence of erosion rate and soil depth on residence time and the extent of particle size reduction. Model applications illustrate how spatial variation in weathering regime can lead to bimodal size distributions and downstream fining of channel sediment by down-valley fining of hillslope sediment supply, two examples of hillslope control on

  19. Distributed Model Predictive Control for Active Power Control of Wind Farm

    DEFF Research Database (Denmark)

    Zhao, Haoran; Wu, Qiuwei; Rasmussen, Claus Nygaard

    2014-01-01

    This paper presents the active power control of a wind farm using the Distributed Model Predictive Controller (D-MPC) via dual decomposition. Different from the conventional centralized wind farm control, multiple objectives such as power reference tracking performance and wind turbine load can be considered to achieve a trade-off between them. Additionally, D-MPC is based on communication among the subsystems. Through the interaction among the neighboring subsystems, the global optimization could be achieved, which significantly reduces the computation burden. It is suitable for the modern large-scale wind farm control.

  20. Robust Distributed Model Predictive Load Frequency Control of Interconnected Power System

    Directory of Open Access Journals (Sweden)

    Xiangjie Liu

    2013-01-01

    Full Text Available Considering the load frequency control (LFC) of a large-scale power system, a robust distributed model predictive control (RDMPC) scheme is presented. The system uncertainty due to power system parameter variation, along with the generation rate constraints (GRC), is included in the synthesis procedure. The entire power system is composed of several control areas, and the problem is formulated as a convex optimization problem with linear matrix inequalities (LMI) that can be solved efficiently. It minimizes an upper bound on a robust performance objective for each subsystem. Simulation results show good dynamic response and robustness in the presence of power system dynamic uncertainties.

  1. Impact of different satellite soil moisture products on the predictions of a continuous distributed hydrological model

    Science.gov (United States)

    Laiolo, P.; Gabellani, S.; Campo, L.; Silvestro, F.; Delogu, F.; Rudari, R.; Pulvirenti, L.; Boni, G.; Fascetti, F.; Pierdicca, N.; Crapolicchio, R.; Hasenauer, S.; Puca, S.

    2016-06-01

    The reliable estimation of hydrological variables in space and time is of fundamental importance in operational hydrology to improve flood predictions and the description of the hydrological cycle. Remotely sensed data now offer a chance to improve hydrological models, especially in environments with scarce ground-based data. The aim of this work is to update the state variables of a physically based, distributed and continuous hydrological model using four different satellite-derived datasets (three soil moisture products and a land surface temperature measurement) and one soil moisture analysis, in order to evaluate, even with a non-optimal technique, the impact on the hydrological cycle. The experiments were carried out for a small catchment in northern Italy for the period July 2012-June 2013. The products were pre-processed according to their own characteristics and then assimilated into the model using a simple nudging technique. The benefits for the model discharge predictions were tested against observations. The analysis showed a general improvement of the model discharge predictions, even with a simple assimilation technique, for all the assimilation experiments; the Nash-Sutcliffe model efficiency coefficient increased from 0.6 (for the model without assimilation) to 0.7, and errors on discharge were reduced by up to 10%. Additional value was found in the rainy season (autumn): all the assimilation experiments reduced the errors by up to 20%. This demonstrates that the discharge prediction of a distributed hydrological model, which works at fine resolution in a small basin, can be improved by assimilating coarse-scale satellite-derived data.
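
    Two ingredients of the experiment above, the nudging update and the Nash-Sutcliffe efficiency, have simple textbook forms; a Python sketch of both is given below. The relaxation factor and the series are illustrative only and do not reproduce the study's hydrological model.

      import numpy as np

      def nudge(model_state, observation, gain=0.3):
          """Relax the model state toward the observed value (0 < gain <= 1)."""
          return model_state + gain * (observation - model_state)

      def nash_sutcliffe(sim, obs):
          sim, obs = np.asarray(sim, float), np.asarray(obs, float)
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      satellite_sm = [0.25, 0.27, 0.30, 0.28]   # illustrative soil moisture retrievals
      state = 0.20                              # illustrative model soil moisture state
      for obs in satellite_sm:
          state = nudge(state, obs)
      print("nudged state:", round(state, 3))
      print("NSE example:", round(nash_sutcliffe([1.0, 2.1, 2.9], [1.0, 2.0, 3.0]), 3))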

  2. Electromagnetic Modeling of Distributed-Source-Excitation of Coplanar Waveguides: Applications to Traveling-Wave Photomixers

    Science.gov (United States)

    Pasqualini, Davide; Neto, Andrea; Wyss, Rolf A.

    2001-01-01

    In this work an electromagnetic model and subsequent design is presented for a traveling-wave, coplanar waveguide (CPW) based source that will operate in the THz frequency regime. The radio frequency (RF) driving current is a result of photoexcitation of a thin GaAs membrane using two frequency-offset lasers. The GaAs film is grown by molecular-beam-epitaxy (MBE) and displays sub-ps carrier lifetimes which enable the material conductivity to be modulated at a very high rate. The RF current flows between electrodes deposited on the GaAs membrane which are biased with a DC voltage source. The electrodes form a CPW and are terminated with a double slot antenna that couples the power to a quasi-optical system. The membrane is suspended above a metallic reflector to launch all radiation in one direction. The theoretical investigation and consequent design is performed in two steps. The first step consists of a direct evaluation of the magnetic current distribution on an infinitely extended coplanar waveguide excited by an impressed electric current distributed over a finite area. The result of the analysis is the difference between the incident angle of the laser beams and the length of the excited area that maximizes the RF power coupled to the CPW. The optimal values for both parameters are found as functions of the CPW and membrane dimensions as well as the dielectric constants of the layers. In the second step, a design is presented of a double slot antenna that matches the CPW characteristic impedance and gives good overall performance. The design is presently being implemented and measurements will soon be available.

  3. Sources and distribution of yttrium and rare earth elements in surface sediments from Tagus estuary, Portugal.

    Science.gov (United States)

    Brito, Pedro; Prego, Ricardo; Mil-Homens, Mário; Caçador, Isabel; Caetano, Miguel

    2018-04-15

    The distribution and sources of yttrium and rare-earth elements (YREE) in surface sediments were studied on 78 samples collected in the Tagus estuary (SW Portugal, SW Europe). Yttrium and total REE contents ranged from 2.4 to 32 mg·kg⁻¹ and 18 to 210 mg·kg⁻¹, respectively, and exhibited significant correlations with sediment grain-size, Al, Fe, Mg and Mn, suggesting a preferential association to fine-grained material (e.g. aluminosilicates but also Al hydroxides and Fe oxyhydroxides). The PAAS (Post-Archean Australian Shale) normalized patterns display three distinct YREE fractionation pattern groups along the Tagus estuary: a first group, characterized by medium to coarse-grained material, a depleted and almost flat PAAS-normalized pattern, with a positive anomaly of Eu, representing one of the lithogenic components; a second group, characterized mainly by fine-grained sediment, with higher shale-normalized ratios and an enrichment of LREE relative to HREE, associated with waste water treatment plant (WWTP) outfalls, located in the northern margin; and, a third group, of fine-grained material, marked by a significant enrichment of Y, a depletion of Ce and an enrichment of HREE over LREE, located near an inactive chemical-industrial complex (e.g. pyrite roast plant, chemical and phosphorous fertilizer industries), in the southern margin. The data allow the quantification of the YREE contents and its spatial distribution in the surface sediments of the Tagus estuary, identifying the main potential sources and confirming the use of rare earth elements as tracers of anthropogenic activities in highly hydrodynamic estuaries. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Predicting occupancy for pygmy rabbits in Wyoming: an independent evaluation of two species distribution models

    Science.gov (United States)

    Germaine, Stephen S.; Ignizio, Drew; Keinath, Doug; Copeland, Holly

    2014-01-01

    Species distribution models are an important component of natural-resource conservation planning efforts. Independent, external evaluation of their accuracy is important before they are used in management contexts. We evaluated the classification accuracy of two species distribution models designed to predict the distribution of pygmy rabbit Brachylagus idahoensis habitat in southwestern Wyoming, USA. The Nature Conservancy model was deductive and based on published information and expert opinion, whereas the Wyoming Natural Diversity Database model was statistically derived using historical observation data. We randomly selected 187 evaluation survey points throughout southwestern Wyoming in areas predicted to be habitat and areas predicted to be nonhabitat for each model. The Nature Conservancy model correctly classified 39 of 77 (50.6%) unoccupied evaluation plots and 65 of 88 (73.9%) occupied plots for an overall classification success of 63.3%. The Wyoming Natural Diversity Database model correctly classified 53 of 95 (55.8%) unoccupied plots and 59 of 88 (67.0%) occupied plots for an overall classification success of 61.2%. Based on 95% asymptotic confidence intervals, classification success of the two models did not differ. The models jointly classified 10.8% of the area as habitat and 47.4% of the area as nonhabitat, but were discordant in classifying the remaining 41.9% of the area. To evaluate how anthropogenic development affected model predictive success, we surveyed 120 additional plots among three density levels of gas-field road networks. Classification success declined sharply for both models as road-density level increased beyond 5 km of roads per km-squared area. Both models were more effective at predicting habitat than nonhabitat in relatively undeveloped areas, and neither was effective at accounting for the effects of gas-energy-development road networks. Resource managers who wish to know the amount of pygmy rabbit habitat present in an

  5. Free-Space Quantum Key Distribution with a High Generation Rate KTP Waveguide Photon-Pair Source

    Science.gov (United States)

    Wilson, J.; Chaffee, D.; Wilson, N.; Lekki, J.; Tokars, R.; Pouch, J.; Lind, A.; Cavin, J.; Helmick, S.; Roberts, T.

    2016-01-01

    NASA awarded Small Business Innovative Research (SBIR) contracts to AdvR, Inc. to develop a high-generation-rate source of entangled photons that could be used to explore quantum key distribution (QKD) protocols. The final product, a photon-pair source using a dual-element periodically-poled potassium titanyl phosphate (KTP) waveguide, was delivered to NASA Glenn Research Center in June of 2015. This paper describes the source, its characterization, and its performance in a B92 (Bennett, 1992) protocol QKD experiment.
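
    For readers unfamiliar with B92, the sifting logic of the protocol can be sketched in a few lines of Python. This is a toy simulation of an ideal, lossless channel with no eavesdropper; it does not model the KTP waveguide source or the delivered hardware, and all names are illustrative.

    import random

    N_ROUNDS = 100_000

    def b92_round():
        """One B92 round over an ideal, lossless channel with no eavesdropper."""
        alice_bit = random.randint(0, 1)      # 0 -> |0>,  1 -> |+>
        bob_basis = random.choice("ZX")       # Bob measures in a randomly chosen basis
        if bob_basis == "Z":
            # |0> always gives outcome 0; |+> gives 0 or 1 with equal probability.
            outcome = 0 if alice_bit == 0 else random.randint(0, 1)
            # Outcome 1 is conclusive: it rules out |0>, so Alice must have sent bit 1.
            return (alice_bit, 1) if outcome == 1 else None
        else:
            # |+> always gives "+"; |0> gives "+" or "-" with equal probability.
            outcome = 0 if alice_bit == 1 else random.randint(0, 1)   # 1 means "-"
            # Outcome "-" is conclusive: it rules out |+>, so Alice must have sent bit 0.
            return (alice_bit, 0) if outcome == 1 else None

    results = [b92_round() for _ in range(N_ROUNDS)]
    sifted = [r for r in results if r is not None]
    errors = sum(a != b for a, b in sifted)
    print(f"sifted fraction = {len(sifted) / N_ROUNDS:.3f} (ideal: 0.25), errors = {errors}")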

  6. Luminosity distribution in the central regions of Messier 87: Isothermal core, point source, or black hole

    International Nuclear Information System (INIS)

    de Vaucouleurs, G.; Nieto, J.

    1979-01-01

    A combination of photographic and photoelectric photometry with the McDonald 2 m reflector is used to derive a precise mean luminosity profile μ_B(r*) of M87 (jet excluded) at ≈0''.6 resolution out to r* = 70''. Within 8'' of the center the luminosity is less than predicted by extrapolation of the r^1/4 law defined by the main body of the galaxy (8''-70''); the structural length of the underlying isothermal core is α = 2''.78 = 170 pc, the mass of the ''black hole'' is M_0 = 1.7×10^9 M_sun, and the luminosity of the point source (B_0 = 16.95, M_0 = -13.55) equals 4.2% of the integrated luminosity B(6'') = 13.52 of the galaxy within r* = 6''. These results agree closely with and confirm the work of the Hale team. Comparison of the McDonald and Hale data suggests that the central source may have been slightly brighter (≈0.5 mag) in 1964 than in 1975-1977

  7. Inverse modelling of fluvial sediment connectivity identifies characteristics and spatial distribution of sediment sources in a large river network.

    Science.gov (United States)

    Schmitt, R. J. P.; Bizzi, S.; Kondolf, G. M.; Rubin, Z.; Castelletti, A.

    2016-12-01

    Field and laboratory evidence indicates that the spatial distribution of transport in both alluvial and bedrock rivers is an adaptation to sediment supply. Sediment supply, in turn, depends on the spatial distribution and properties (e.g., grain sizes and supply rates) of individual sediment sources. Analyzing the distribution of transport capacity in a river network could hence clarify the spatial distribution and properties of sediment sources. Yet, challenges include a) identifying the magnitude and spatial distribution of transport capacity for each of multiple grain sizes being simultaneously transported, and b) estimating source grain sizes and supply rates, both at network scales. Herein, we approach the problem of identifying the spatial distribution of sediment sources and the resulting network sediment fluxes in a major, poorly monitored tributary (80,000 km²) of the Mekong. To this end, we apply the CASCADE modeling framework (Schmitt et al., 2016). CASCADE calculates transport capacities and sediment fluxes for multiple grain sizes on the network scale based on remotely-sensed morphology and modelled hydrology. CASCADE is run in an inverse Monte Carlo approach for 7500 random initializations of source grain sizes. In all runs, the supply of each source is inferred from the minimum downstream transport capacity for the source grain size. Results for each realization are compared to the sparse available sedimentary records. Only 1% of initializations reproduced the sedimentary record. Results for these realizations revealed a spatial pattern in source supply rates, grain sizes, and network sediment fluxes that correlated well with map-derived patterns in lithology and river morphology. Hence, we propose that observable river hydro-morphology contains information on upstream source properties that can be back-calculated using an inverse modeling approach. Such an approach could be coupled to more detailed models of hillslope processes in the future to derive integrated models
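
    The inverse Monte Carlo step described above (draw random source grain sizes, set each source's supply to the minimum transport capacity met along its downstream path, keep only realizations that reproduce the sedimentary record) can be illustrated with a toy sketch; the network, capacities and "record" below are synthetic placeholders, not CASCADE or the Mekong data.

    import numpy as np

    rng = np.random.default_rng(42)

    # Toy network: 3 sources feeding one downstream gauge along fixed reach paths.
    # capacity[r, k] is the transport capacity of reach r for grain-size class k;
    # in CASCADE this derives from remotely sensed morphology and modelled hydrology.
    n_sources, n_classes, n_reaches = 3, 4, 5
    capacity = rng.uniform(1e3, 1e5, size=(n_reaches, n_classes))
    paths = [[0, 3, 4], [1, 3, 4], [2, 4]]          # reaches traversed by each source

    def supply_for(grain_class):
        """Supply of each source = minimum capacity encountered along its downstream path."""
        return np.array([capacity[p, k].min() for p, k in zip(paths, grain_class)])

    # Synthetic "sedimentary record": the flux implied by a hidden true grain-size assignment.
    observed_flux = supply_for(np.array([1, 2, 0])).sum()

    accepted = []
    for _ in range(7500):                            # inverse Monte Carlo over source grain sizes
        gc = rng.integers(0, n_classes, size=n_sources)
        if abs(supply_for(gc).sum() - observed_flux) / observed_flux < 0.05:
            accepted.append(gc)

    print(f"accepted {len(accepted)} of 7500 realizations")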

  8. A statistical kinematic source inversion approach based on the QUESO library for uncertainty quantification and prediction

    Science.gov (United States)

    Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo

    2014-05-01

    Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the question of which of the provided kinematic source inversion solutions is most reliable and most robust, and — more generally — how accurate fault parameterization and solution predictions are. These issues are not addressed in "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based a) on a forward-modeling scheme that computes synthetic (body-)waves for a given kinematic rupture model, and b) on the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library that uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e., for which the exact rupture history is known) in an attempt to identify the cross-over at which further model discretization (spatial and temporal resolution of the parameter space) is no longer accompanied by a decreasing misfit. Identification of this cross-over is of importance as it reveals the resolution power of the studied data set (i.e., teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by the data. In addition, the Bayesian approach allows for mapping complete posterior probability density functions of the desired kinematic source parameters, thus enabling us to rigorously assess the uncertainties in earthquake source inversions.
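
    QUESO itself is a C++ library, but the Bayesian sampling idea can be illustrated with a minimal Metropolis-Hastings sketch on a synthetic one-parameter "rupture" problem; the forward model, prior bounds and noise level below are placeholders, not the teleseismic body-wave solver used in the study.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy forward model mapping a single kinematic parameter (e.g. a rupture velocity)
    # to a "waveform": a Gaussian pulse whose arrival time depends on the parameter.
    def forward(v_r, t):
        return np.exp(-0.5 * ((t - 10.0 / v_r) / 3.0) ** 2)

    t = np.linspace(0.0, 20.0, 200)
    true_vr = 2.7
    data = forward(true_vr, t) + rng.normal(0.0, 0.05, t.size)   # synthetic earthquake + noise

    def log_posterior(v_r, sigma=0.05):
        if not 1.0 < v_r < 5.0:                     # uniform prior bounds (assumed)
            return -np.inf
        resid = data - forward(v_r, t)
        return -0.5 * np.sum((resid / sigma) ** 2)  # Gaussian likelihood, flat prior inside bounds

    # Metropolis-Hastings sampling of the posterior
    samples, v = [], 2.0
    logp = log_posterior(v)
    for _ in range(20000):
        v_new = v + rng.normal(0.0, 0.05)
        logp_new = log_posterior(v_new)
        if np.log(rng.uniform()) < logp_new - logp:
            v, logp = v_new, logp_new
        samples.append(v)

    post = np.array(samples[5000:])                 # discard burn-in
    print(f"posterior mean = {post.mean():.3f}, std = {post.std():.3f}")   # should bracket 2.7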

  9. Dose distribution considerations of medium energy electron beams at extended source-to-surface distance

    International Nuclear Information System (INIS)

    Saw, Cheng B.; Ayyangar, Komanduri M.; Pawlicki, Todd; Korb, Leroy J.

    1995-01-01

    Purpose: To determine the effects of extended source-to-surface distance (SSD) on dose distributions for a range of medium-energy electron beams and cone sizes. Methods and Materials: The depth-dose curves and isodose distributions of 6 MeV, 10 MeV, and 14 MeV electron beams from a linear accelerator with dual photon and multiple electron energies were studied. To examine the influence of cone size, the smallest and the largest cone sizes available were used. Measurements were carried out in a water phantom with the water surface set at three different SSDs from 101 to 116 cm. Results: In the region between the phantom surface and the depth of maximum dose, the depth dose decreases as the SSD increases for all electron beam energies. The effects of extended SSD in the region beyond the depth of maximum dose are unobservable and, hence, considered minimal. Extended SSD effects are apparent for the higher electron beam energy with the small cone size, causing the depth of maximum dose and the rapid dose fall-off region to shift deeper into the phantom. However, the change in the depth-dose curve is small. On the other hand, the rapid dose fall-off region is essentially unaltered when the large cone is used. The penumbra enlarges and electron beam flatness deteriorates with increasing SSD

  10. Spatial Distribution, Sources Apportionment and Health Risk of Metals in Topsoil in Beijing, China

    Directory of Open Access Journals (Sweden)

    Chunyuan Sun

    2016-07-01

    In order to acquire the pollution feature and regularities of distribution of metals in the topsoil within the sixth ring road in Beijing, a total of 46 soil samples were collected, and the concentrations of twelve elements (Nickel, Ni, Lithium, Li, Vanadium, V, Cobalt, Co, Barium, Ba, Strontium, Sr, Chrome, Cr, Molybdenum, Mo, Copper, Cu, Cadmium, Cd, Zinc, Zn, Lead, Pb) were analyzed. Geostatistics and multivariate statistics were conducted to identify spatial distribution characteristics and sources. In addition, the health risk of the analyzed heavy metals to humans (adult) was evaluated by a U.S. Environmental Protection Agency health risk assessment model. The results indicate that these metals show notable spatial variation. The concentration of Cr was high in the west and low in the east, while that of Mo was high in the north and low in the south. High concentrations of Cu, Cd, Zn, and Pb were found in the central part of the city. The average enrichment degree of Cd is 5.94, reaching the standard of significant enrichment. The accumulation of Cr, Mo, Cu, Cd, Zn, and Pb is influenced by anthropogenic activity, including vehicle exhaust, coal burning, and industrial processes. Health risk assessment shows that both non-carcinogenic and carcinogenic risks of the selected heavy metals are within the safety standard and the rank of the carcinogenic risk of the four heavy metals is Cr > Co > Ni > Cd.

  11. Spatial Distribution, Sources Apportionment and Health Risk of Metals in Topsoil in Beijing, China.

    Science.gov (United States)

    Sun, Chunyuan; Zhao, Wenji; Zhang, Qianzhong; Yu, Xue; Zheng, Xiaoxia; Zhao, Jiayin; Lv, Ming

    2016-07-20

    In order to acquire the pollution feature and regularities of distribution of metals in the topsoil within the sixth ring road in Beijing, a total of 46 soil samples were collected, and the concentrations of twelve elements (Nickel, Ni, Lithium, Li, Vanadium, V, Cobalt, Co, Barium, Ba, Strontium, Sr, Chrome, Cr, Molybdenum, Mo, Copper, Cu, Cadmium, Cd, Zinc, Zn, Lead, Pb) were analyzed. Geostatistics and multivariate statistics were conducted to identify spatial distribution characteristics and sources. In addition, the health risk of the analyzed heavy metals to humans (adult) was evaluated by a U.S. Environmental Protection Agency health risk assessment model. The results indicate that these metals show notable spatial variation. The concentration of Cr was high in the west and low in the east, while that of Mo was high in the north and low in the south. High concentrations of Cu, Cd, Zn, and Pb were found in the central part of the city. The average enrichment degree of Cd is 5.94, reaching the standard of significant enrichment. The accumulation of Cr, Mo, Cu, Cd, Zn, and Pb is influenced by anthropogenic activity, including vehicle exhaust, coal burning, and industrial processes. Health risk assessment shows that both non-carcinogenic and carcinogenic risks of selected heavy metals are within the safety standard and the rank of the carcinogenic risk of the four heavy metals is Cr > Co > Ni > Cd.
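
    Since the abstract invokes a U.S. EPA health risk assessment model without stating its equations, the commonly used soil-ingestion form of that model is summarized below for orientation; the general expressions are standard, but all exposure parameters are study-specific and are not taken from this paper.

    \[
    \mathrm{ADD} = \frac{C \times \mathrm{IngR} \times \mathrm{EF} \times \mathrm{ED}}{\mathrm{BW} \times \mathrm{AT}},
    \qquad
    \mathrm{HQ} = \frac{\mathrm{ADD}}{\mathrm{RfD}},
    \qquad
    \mathrm{HI} = \sum_i \mathrm{HQ}_i,
    \qquad
    \mathrm{CR} = \mathrm{ADD} \times \mathrm{SF}
    \]

    Here C is the metal concentration in soil, IngR the soil ingestion rate, EF and ED the exposure frequency and duration, BW the body weight, AT the averaging time, RfD the reference dose, and SF the carcinogenic slope factor; HI < 1 and a CR within roughly 10⁻⁶ to 10⁻⁴ are the usual screening thresholds for "within the safety standard".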

  12. Analysis of ultrasonically rotating droplet using moving particle semi-implicit and distributed point source methods

    Science.gov (United States)

    Wada, Yuji; Yuge, Kohei; Tanaka, Hiroki; Nakamura, Kentaro

    2016-07-01

    Numerical analysis of the rotation of an ultrasonically levitated droplet with a free surface boundary is discussed. The ultrasonically levitated droplet is often reported to rotate owing to the surface tangential component of the acoustic radiation force. To observe the torque from an acoustic wave and clarify the mechanism underlying the phenomenon, it is effective to take advantage of numerical simulation using the distributed point source method (DPSM) and the moving particle semi-implicit (MPS) method, both of which do not require a calculation grid or mesh. In this paper, the numerical treatment of the viscoacoustic torque, which emerges from the viscous boundary layer and governs the acoustical droplet rotation, is discussed. The Reynolds stress traction force is calculated from the DPSM result using the idea of effective normal particle velocity through the boundary layer and applied as input to the MPS surface particles. A droplet levitated in an acoustic chamber is simulated using the proposed calculation method. The droplet is vertically supported by a plane standing wave from an ultrasonic driver and subjected to a rotating sound field excited by two acoustic sources on the side wall with different phases. The rotation of the droplet is successfully reproduced numerically, and its acceleration is discussed and compared with values in the literature.

  13. Distribution and source analysis of aluminum in rivers near Xi'an City, China.

    Science.gov (United States)

    Wang, Dongqi; He, Yanling; Liang, Jidong; Liu, Pei; Zhuang, Pengyu

    2013-02-01

    To study the status and sources of aluminum (Al) contamination, a total of 21 sampling sites along six rivers near Xi'an City (Shaanxi province, China) were investigated during 2008-2010. The results indicated that the average concentration of total Al (Al(t)) in the six rivers increased by 1.6 times from 2008 to 2010. The spatial distribution of Al(t) concentrations in the rivers near Xi'an City differed significantly, ranging from 367 μg/L (Bahe River) to 1,978 μg/L (Taiping River). The Al(t) concentration was highest near an industrial area for pulp and paper-making (2,773 μg/L), where the Al level greatly exceeded the water quality criteria of both the USA (Criterion Continuous Concentration, 87 μg/L) and Canada (100 μg/L). The average concentration of inorganic monomeric aluminum (Al(im)) was 72 μg/L, which could pose a threat to fish and other aquatic life in the rivers. The concentrations of exchangeable Al (Al(ex)) in the sampled sediment of the Taiping River were relatively high, making this an alternative explanation for the increasing Al concentrations in the rivers near Xi'an City. Furthermore, an increasing Al level has been detected in the upstream watershed near Xi'an City in recent years, which might indicate another notable pollution source of Al.

  14. Dose Distributions of an 192Ir Brachytherapy Source in Different Media

    Directory of Open Access Journals (Sweden)

    C. H. Wu

    2014-01-01

    This study used the MCNPX code to investigate the brachytherapy 192Ir dose distributions in water, bone, and lung tissue and performed radiophotoluminescent glass dosimeter measurements to verify the obtained MCNPX results. The results showed that the dose-rate constant, radial dose function, and anisotropy function in water were highly consistent with data in the literature. However, the lung dose near the source would be overestimated by up to 12% if the lung tissue is assumed to be water; hence, if a tumor is located in the lung, the tumor dose will be overestimated when the material density is not taken into consideration. In contrast, the lung dose far from the source would be underestimated by up to 30%. Radial dose functions were found to depend not only on the phantom size but also on the material density. The phantom size affects the radial dose function in bone more than those in the other tissues. On the other hand, the anisotropy function in lung tissue was not dependent on the radial distance. Our simulation results could serve as valid clinical reference data and be used to improve the accuracy of the doses delivered during brachytherapy applied to patients with lung cancer.
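
    The quantities named above (dose-rate constant, radial dose function, anisotropy function) are those of the AAPM TG-43 dose-calculation formalism; for orientation, its line-source form is summarized below, quoted from the general formalism rather than from this paper.

    \[
    \dot{D}(r,\theta) = S_K \, \Lambda \, \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)} \, g_L(r) \, F(r,\theta),
    \qquad r_0 = 1\ \mathrm{cm}, \quad \theta_0 = \pi/2
    \]

    where S_K is the air-kerma strength, Λ the dose-rate constant, G_L the line-source geometry function, g_L(r) the radial dose function, and F(r,θ) the 2D anisotropy function. The study's point is that g_L(r) and the absolute dose depend on the surrounding medium and phantom size, which the water-based reference formalism does not capture by itself.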

  15. HARVESTING, INTEGRATING AND DISTRIBUTING LARGE OPEN GEOSPATIAL DATASETS USING FREE AND OPEN-SOURCE SOFTWARE

    Directory of Open Access Journals (Sweden)

    R. Oliveira

    2016-06-01

    Federal, State and Local government agencies in the USA are investing heavily in the dissemination of Open Data sets produced by each of them. The main driver behind this thrust is to increase agencies’ transparency and accountability, as well as to improve citizens’ awareness. However, not all Open Data sets are easy to access and integrate with other Open Data sets available even from the same agency. The City and County of Denver Open Data Portal distributes several types of geospatial datasets, one of which is the city parcels information containing 224,256 records. Although this data layer contains many pieces of information, it is incomplete for some custom purposes. Open-source software was used first to collect data from diverse City of Denver Open Data sets, then to upload them to a repository in the Cloud, where they were processed using a PostgreSQL installation on the Cloud and Python scripts. Our method was able to extract non-spatial information from a ‘not-ready-to-download’ source that could then be combined with the initial data set to enhance its potential use.
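
    A minimal sketch of the harvest-and-load pattern described above, assuming a reachable PostgreSQL instance; the URL, credentials, table layout and column names are hypothetical and do not correspond to the actual Denver Open Data endpoints or schema.

    import csv
    import io

    import psycopg2
    import requests

    CSV_URL = "https://data.example.org/denver/parcels.csv"      # hypothetical endpoint
    DSN = "host=localhost dbname=opendata user=etl password=secret"

    def harvest():
        resp = requests.get(CSV_URL, timeout=60)
        resp.raise_for_status()
        rows = csv.DictReader(io.StringIO(resp.text))

        with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
            cur.execute("""
                CREATE TABLE IF NOT EXISTS parcels (
                    parcel_id TEXT PRIMARY KEY,
                    land_use  TEXT,
                    area_sqft NUMERIC
                )
            """)
            # Load each record; additional non-spatial attributes harvested elsewhere
            # could later be joined onto this table to enrich the original layer.
            for row in rows:
                cur.execute(
                    "INSERT INTO parcels (parcel_id, land_use, area_sqft) "
                    "VALUES (%s, %s, %s) ON CONFLICT (parcel_id) DO NOTHING",
                    (row["PARCEL_ID"], row["LAND_USE"], row["AREA_SQFT"]),
                )

    if __name__ == "__main__":
        harvest()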

  16. Development of a hemispherical rotational modulation collimator system for imaging spatial distribution of radiation sources

    Science.gov (United States)

    Na, M.; Lee, S.; Kim, G.; Kim, H. S.; Rho, J.; Ok, J. G.

    2017-12-01

    Detecting and mapping the spatial distribution of radioactive materials is of great importance for environmental and security issues. We design and present a novel hemispherical rotational modulation collimator (H-RMC) system which can visualize the location of a radiation source by collecting signals from incident rays that pass through collimator masks. The H-RMC system comprises a servo motor-controlled rotating module and a hollow, heavy-metallic hemisphere with slits/slats equally spaced at the same angle subtended from the main axis. In addition, we also designed an auxiliary instrument to test the imaging performance of the H-RMC system, comprising a high-precision x- and y-axis staging station on which radiation sources of various shapes can be mounted. We fabricated the H-RMC system, which can be operated in a fully automated fashion through a computer-based controller, and verified the accuracy and reproducibility of the system by measuring the rotational and linear positions with respect to the programmed values. Our H-RMC system may provide a pivotal tool for spatial radiation imaging with high reliability and accuracy.

  17. Distributions, Sources, and Backward Trajectories of Atmospheric Polycyclic Aromatic Hydrocarbons at Lake Small Baiyangdian, Northern China

    Directory of Open Access Journals (Sweden)

    Ning Qin

    2012-01-01

    Air samples were collected seasonally at Lake Small Baiyangdian, a shallow lake in northern China, between October 2007 and September 2008. Gas-phase, particulate-phase and dust-fall concentrations of polycyclic aromatic hydrocarbons (PAHs) were measured using a gas chromatograph-mass spectrometer (GC-MS). The distribution and partitioning of atmospheric PAHs were studied, and the major sources were identified; the backward trajectories of air masses starting from the center of Lake Small Baiyangdian were calculated for the entire year. The following results were obtained: (1) The total concentration of the 16 priority-controlled PAHs (PAH16) was 417.2±299.8 ng·m⁻³ in the gas phase, 150.9±99.2 ng·m⁻³ in the particulate phase, and 6930.2±3206.5 ng·g⁻¹ in dust fall. (2) Vehicle emission, coal combustion, and biomass combustion were the major sources in the Small Baiyangdian atmosphere and accounted for 28.9%, 45.1% and 26.0% of the total PAHs, respectively. (3) Winter was dominated by northwesterly air-mass pathways carrying relatively high PAH pollution. Summer showed a dominant, relatively clean southern pathway, whereas the trajectories in autumn and spring might be associated with high pollution from Shanxi or Henan province.

  18. Macronutrient Distribution and Dietary Sources in the Spanish Population: Findings from the ANIBES Study

    Directory of Open Access Journals (Sweden)

    Emma Ruiz

    2016-03-01

    Our aim was to analyze dietary macronutrient intake and its main sources according to sex and age. Results were derived from the ANIBES (“Anthropometry, Intake and Energy Balance in Spain”) cross-sectional study using a nationally-representative sample of the Spanish population (9–75 years old). Mean dietary protein intake was 74.5 ± 22.4 g/day, with meat and meat products as the main sources (33.0%). Mean carbohydrate intake was 185.4 ± 60.9 g/day and was higher in children and adolescents; grains (49%), mainly bread, were the main contributor. Milk and dairy products (23%) ranked first for sugar intake. Mean lipid intake was 78.1 ± 26.1 g/day and was higher in younger age groups; contributions were mainly from oils and fats (32.5%; olive oil 25.6%) and meat and meat products (22.0%). Lipid profiles showed relatively high monounsaturated fatty acid intake, of which olive oil contributed 38.8%. Saturated fatty acids were mainly (>70%) combined from meat and meat products, milk and dairy products and oils and fats. Polyunsaturated fatty acids were mainly from oils and fats (31.5%). The macronutrient intake and distribution in the Spanish population is far from population reference intakes and nutritional goals, especially for children and adolescents.

  19. Availability of added sugars in Brazil: distribution, food sources and time trends.

    Science.gov (United States)

    Levy, Renata Bertazzi; Claro, Rafael Moreira; Bandoni, Daniel Henrique; Mondini, Lenise; Monteiro, Carlos Augusto

    2012-03-01

    To describe the regional and socio-economic distribution of added-sugar consumption in Brazil in 2002/03, its main food sources, and trends over the past 15 years. The study used data from Household Budget Surveys since the 1980s on the type and quantity of food and beverages bought by Brazilian families. Different indicators were analyzed: the percentage of calories from added sugar in total dietary energy, and the shares of table sugar and of sugar added to processed food in total sugar calories. In 2002/03, of the total energy available for consumption, 16.7% came from added sugar in all regional and socio-economic strata. The ratio of table sugar to sugar added to processed food was inversely related to income. Although this ratio fell over the past 15 years, sugar added to processed food doubled, especially through the consumption of soft drinks and cookies. Brazilians consume more sugar than the levels recommended by the WHO, and the sources of sugar consumption have changed significantly.

  20. Bacterial composition in a metropolitan drinking water distribution system utilizing different source waters.

    Science.gov (United States)

    Gomez-Alvarez, Vicente; Humrighouse, Ben W; Revetta, Randy P; Santo Domingo, Jorge W

    2015-03-01

    We investigated the bacterial composition of water samples from two service areas within a drinking water distribution system (DWDS), each associated with a different primary source of water (groundwater, GW; surface water, SW) and a different treatment process. Community analysis based on 16S rRNA gene clone libraries indicated that Actinobacteria (Mycobacterium spp.) and α-Proteobacteria represented nearly 43 and 38% of the total sequences, respectively. Sequences closely related to Legionella, Pseudomonas, and Vibrio spp. were also identified. In spite of the high number of sequences (71%) shared in both areas, multivariable analysis revealed significant differences between the GW and SW areas. While the dominant phylotypes did not contribute significantly to the ordination of samples, the populations associated with the core of phylotypes (1-10% in each sample) contributed significantly to the differences between the two service areas. Diversity indices indicate that the microbial community inhabiting the SW area is more diverse and contains more distantly related species coexisting with local assemblages as compared with the GW area. The bacterial community structures of the SW and GW service areas were dissimilar, suggesting that their respective source water and/or water quality parameters shaped by the treatment processes may contribute to the differences in community structure observed.

  1. Distribution and sources of n-alkanes in surface sediments of Taihu Lake, China

    Directory of Open Access Journals (Sweden)

    Yu Yunlong

    2016-03-01

    The last study of n-alkanes in the surface sediments of Taihu Lake dates from 2000, when only 13 surface sediment samples were analysed. In order to obtain a comprehensive and up-to-date understanding of n-alkanes in the surface sediments of Taihu Lake, 41 surface sediment samples were analyzed by GC-MS. n-Alkanes from C10 to C37 were detected, and the total concentrations ranged from 2109 ng g⁻¹ to 9096 ng g⁻¹ (dry weight). There was a strong odd-carbon predominance in long-chain n-alkanes and an even-carbon predominance in short-chain n-alkanes. Combining this finding with the analysis of wax n-alkanes (WaxCn), the carbon preference index (CPI), the unresolved complex mixture (UCM), hopanes and steranes, it was concluded that the long-chain n-alkanes were mainly from terrigenous higher plants and that the short-chain n-alkanes mainly originated from bacteria and algae in the lake; compared with previous studies, there were no obvious anthropogenic petrogenic inputs. The terrestrial and aquatic hydrocarbons ratio (TAR) and C21−/C25+ indicated that terrigenous input was higher than aquatic sources and that the nearshore n-alkanes were mainly from land-derived sources. Moreover, the distribution of short-chain n-alkanes presented a relatively uniform pattern, while the long-chain n-alkanes presented a trend of concentrations dropping from nearshore areas to the middle of the lake.

  2. Distribution, regional sources and deposition fluxes of organochlorine pesticides in precipitation in Guangzhou, South China

    Science.gov (United States)

    Huang, De-Yin; Peng, Ping'an; Xu, Yi-Gang; Sun, Cui-Xiang; Deng, Hong-Mei; Deng, Yun-Yun

    2010-07-01

    We analyzed rainwater collected from multiple sites in Guangzhou, China, from March to August 2005, with the aim of characterizing the distribution, regional sources and deposition fluxes of organochlorine pesticides (OCPs) in South China. Eight species of organochlorine pesticides were detected, including hexachlorocyclohexanes (HCHs), dichlorodiphenyltrichloroethanes (DDTs), and endosulfans. Volume-weighted mean monthly total concentrations varied from 3.65 ± 0.95 to 9.37 ± 2.63 ng L⁻¹, and the estimated total wet deposition flux was about 11.43 ± 3.27 µg m⁻² during the monitoring period. Pesticides were mainly detected in the dissolved phase. Distribution coefficients between the particulate and dissolved phases in March and April were generally higher than in other months. HCHs, p,p'-DDD and p,p'-DDT in precipitation were attributed to both residues and the present usage of insecticides in the Pearl River Delta. The concentrations of p,p'-DDD + p,p'-DDT were relatively high from April to August, which was related to the usage of antifouling paints containing DDT for fishing ships in seaports of the South China Sea in summer. In contrast, endosulfans were relatively high in March, which was related to their seasonal atmospheric transport from cotton fields in eastern China by the Asian winter monsoon. The consistency of the variation of endosulfan, p,p'-DDD and p,p'-DDT concentrations with the alternation of the summer and winter monsoons suggested that the Asian monsoon plays an important role in the long-range transport of OCPs. In addition, the wet deposition of OCPs may influence not only Pearl River water but also the surface land distributions of pesticides in the Guangzhou area, especially for endosulfans, p,p'-DDD and p,p'-DDT.

  3. Bayesian Belief Networks for predicting drinking water distribution system pipe breaks

    International Nuclear Information System (INIS)

    Francis, Royce A.; Guikema, Seth D.; Henneman, Lucas

    2014-01-01

    In this paper, we use Bayesian Belief Networks (BBNs) to construct a knowledge model for pipe breaks in a water zone. To the authors’ knowledge, this is the first attempt to model drinking water distribution system pipe breaks using BBNs. Development of expert systems such as BBNs for analyzing drinking water distribution system data is not only important for pipe break prediction, but is also a first step in preventing water loss and water quality deterioration through the application of machine learning techniques to facilitate data-based distribution system monitoring and asset management. Due to the difficulties in collecting, preparing, and managing drinking water distribution system data, most pipe break models can be classified as “statistical–physical” or “hypothesis-generating.” We develop the BBN with the hope of contributing to the “hypothesis-generating” class of models, while demonstrating the possibility that BBNs might also be used as “statistical–physical” models. Our model is learned from pipe breaks and covariate data from a mid-Atlantic United States (U.S.) drinking water distribution system network. BBN models are learned using a constraint-based method, a score-based method, and a hybrid method. Model evaluation is based on log-likelihood scoring. Sensitivity analysis using mutual information criterion is also reported. While our results indicate general agreement with prior results reported in pipe break modeling studies, they also suggest that it may be difficult to select among model alternatives. This model uncertainty may mean that more research is needed for understanding whether additional pipe break risk factors beyond age, break history, pipe material, and pipe diameter might be important for asset management planning. - Highlights: • We show Bayesian Networks for predictive and diagnostic management of water distribution systems. • Our model may enable system operators and managers to prioritize system
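
    As a minimal, self-contained illustration of the score-based flavour of such models, the sketch below compares candidate parent sets for a discretized "break" node by in-sample log-likelihood on synthetic data; it is not the constraint-based, score-based or hybrid learners used in the study, and a practical score such as BIC would add a penalty for the extra parameters of larger parent sets.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)

    # Synthetic pipe inventory: discretized covariates and a break indicator.
    n = 2000
    df = pd.DataFrame({
        "material": rng.choice(["iron", "pvc"], n),
        "age":      rng.choice(["old", "new"], n),
    })
    p_break = np.where((df.material == "iron") & (df.age == "old"), 0.30, 0.05)
    df["break_"] = rng.uniform(size=n) < p_break

    def loglik(data, parents, child="break_", alpha=1.0):
        """Log-likelihood of `child` given a parent set, with Laplace-smoothed CPT estimates."""
        ll = 0.0
        groups = data.groupby(list(parents)) if parents else [((), data)]
        for _, g in groups:
            counts = g[child].value_counts().reindex([False, True], fill_value=0).to_numpy()
            probs = (counts + alpha) / (counts.sum() + 2 * alpha)
            ll += (counts * np.log(probs)).sum()
        return ll

    # Score-based comparison of three candidate structures for the break node.
    for parents in ([], ["material"], ["material", "age"]):
        print(parents, round(loglik(df, parents), 1))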

  4. Effects of source and receiver locations in predicting room transfer functions by a phased beam tracing method

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho; Ih, Jeong-Guon

    2012-01-01

    The accuracy of a phased beam tracing method in predicting transfer functions is investigated with a special focus on the positions of the source and receiver. Simulated transfer functions for various source-receiver pairs using the phased beam tracing method were compared with analytical Green’s...

  5. Trace elements in particulate matter from metropolitan regions of Northern China: Sources, concentrations and size distributions.

    Science.gov (United States)

    Pan, Yuepeng; Tian, Shili; Li, Xingru; Sun, Ying; Li, Yi; Wentworth, Gregory R; Wang, Yuesi

    2015-12-15

    Public concerns over airborne trace elements (TEs) in metropolitan areas are increasing, but long-term and multi-site observations of size-resolved aerosol TEs in China are still lacking. Here, we identify highly elevated levels of atmospheric TEs in megacities and industrial sites in the Beijing-Tianjin-Hebei urban agglomeration relative to background areas, with the annual mean values of As, Pb, Ni, Cd and Mn exceeding the acceptable limits of the World Health Organization. Despite the spatial variability in concentrations, the size distribution pattern of each trace element was quite similar across the region. Crustal elements Al and Fe were mainly found in coarse particles (2.1-9 μm), whereas the main fraction of toxic metals, such as Cu, Zn, As, Se, Cd and Pb, was found in submicron particles; these metals were enriched by over 100-fold relative to the Earth's crust. The size distributions of Na, Mg, K, Ca, V, Cr, Mn, Ni, Mo and Ba were bimodal, with two peaks at 0.43-0.65 μm and 4.7-5.8 μm. The combination of the size distribution information, principal component analysis and an air mass back trajectory model offered a robust technique for distinguishing the main sources of airborne TEs, e.g., soil dust, fossil fuel combustion and industrial emissions, at different sites. In addition, higher elemental concentrations coincided with westerly flow, indicating that polluted soil and fugitive dust were major sources of TEs on the regional scale. However, the contribution of coal burning, the iron industry/oil combustion and non-ferrous smelters to atmospheric metal pollution in Northern China should be given more attention. Considering that the concentrations of heavy metals associated with fine particles in the target region were significantly higher than those at other Asian sites, the implementation of strict environmental standards in China is required to reduce the amounts of these hazardous pollutants released into the atmosphere. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Dark Energy Survey Year 1 Results: Redshift distributions of the weak lensing source galaxies

    Energy Technology Data Exchange (ETDEWEB)

    Hoyle, B.; et al.

    2017-08-04

    We describe the derivation and validation of redshift distribution estimates and their uncertainties for the galaxies used as weak lensing sources in the Dark Energy Survey (DES) Year 1 cosmological analyses. The Bayesian Photometric Redshift (BPZ) code is used to assign galaxies to four redshift bins between z=0.2 and 1.3, and to produce initial estimates of the lensing-weighted redshift distributions $n^i_{PZ}(z)$ for bin i. Accurate determination of cosmological parameters depends critically on knowledge of $n^i$ but is insensitive to bin assignments or redshift errors for individual galaxies. The cosmological analyses allow for shifts $n^i(z)=n^i_{PZ}(z-\Delta z^i)$ to correct the mean redshift of $n^i(z)$ for biases in $n^i_{\rm PZ}$. The $\Delta z^i$ are constrained by comparison of independently estimated 30-band photometric redshifts of galaxies in the COSMOS field to BPZ estimates made from the DES griz fluxes, for a sample matched in fluxes, pre-seeing size, and lensing weight to the DES weak-lensing sources. In companion papers, the $\Delta z^i$ are further constrained by the angular clustering of the source galaxies around red galaxies with secure photometric redshifts at 0.15 < z < 0.9.

  7. Dark Energy Survey Year 1 Results: Redshift distributions of the weak lensing source galaxies

    Science.gov (United States)

    Hoyle, B.; Gruen, D.; Bernstein, G. M.; Rau, M. M.; De Vicente, J.; Hartley, W. G.; Gaztanaga, E.; DeRose, J.; Troxel, M. A.; Davis, C.; Alarcon, A.; MacCrann, N.; Prat, J.; Sánchez, C.; Sheldon, E.; Wechsler, R. H.; Asorey, J.; Becker, M. R.; Bonnett, C.; Carnero Rosell, A.; Carollo, D.; Carrasco Kind, M.; Castander, F. J.; Cawthon, R.; Chang, C.; Childress, M.; Davis, T. M.; Drlica-Wagner, A.; Gatti, M.; Glazebrook, K.; Gschwend, J.; Hinton, S. R.; Hoormann, J. K.; Kim, A. G.; King, A.; Kuehn, K.; Lewis, G.; Lidman, C.; Lin, H.; Macaulay, E.; Maia, M. A. G.; Martini, P.; Mudd, D.; Möller, A.; Nichol, R. C.; Ogando, R. L. C.; Rollins, R. P.; Roodman, A.; Ross, A. J.; Rozo, E.; Rykoff, E. S.; Samuroff, S.; Sevilla-Noarbe, I.; Sharp, R.; Sommer, N. E.; Tucker, B. E.; Uddin, S. A.; Varga, T. N.; Vielzeuf, P.; Yuan, F.; Zhang, B.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Bechtol, K.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Busha, M. T.; Capozzi, D.; Carretero, J.; Crocce, M.; D'Andrea, C. B.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Diehl, H. T.; Doel, P.; Eifler, T. F.; Estrada, J.; Evrard, A. E.; Fernandez, E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Giannantonio, T.; Goldstein, D. A.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Jarvis, M.; Jeltema, T.; Johnson, M. W. G.; Johnson, M. D.; Kirk, D.; Krause, E.; Kuhlmann, S.; Kuropatkin, N.; Lahav, O.; Li, T. S.; Lima, M.; March, M.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Miquel, R.; Nord, B.; O'Neill, C. R.; Plazas, A. A.; Romer, A. K.; Sako, M.; Sanchez, E.; Santiago, B.; Scarpine, V.; Schindler, R.; Schubnell, M.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Tucker, D. L.; Vikram, V.; Walker, A. R.; Weller, J.; Wester, W.; Wolf, R. C.; Yanny, B.; Zuntz, J.; DES Collaboration

    2018-04-01

    We describe the derivation and validation of redshift distribution estimates and their uncertainties for the populations of galaxies used as weak lensing sources in the Dark Energy Survey (DES) Year 1 cosmological analyses. The Bayesian Photometric Redshift (BPZ) code is used to assign galaxies to four redshift bins between z ≈ 0.2 and ≈1.3, and to produce initial estimates of the lensing-weighted redshift distributions n^i_PZ(z)∝ dn^i/dz for members of bin i. Accurate determination of cosmological parameters depends critically on knowledge of n^i but is insensitive to bin assignments or redshift errors for individual galaxies. The cosmological analyses allow for shifts n^i(z)=n^i_PZ(z-Δz^i) to correct the mean redshift of n^i(z) for biases in n^i_PZ. The Δz^i are constrained by comparison of independently estimated 30-band photometric redshifts of galaxies in the COSMOS field to BPZ estimates made from the DES griz fluxes, for a sample matched in fluxes, pre-seeing size, and lensing weight to the DES weak-lensing sources. In companion papers, the Δz^i of the three lowest redshift bins are further constrained by the angular clustering of the source galaxies around red galaxies with secure photometric redshifts at 0.15 < z < 0.9. This paper details the BPZ and COSMOS procedures, and demonstrates that the cosmological inference is insensitive to details of the n^i(z) beyond the choice of Δz^i. The clustering and COSMOS validation methods produce consistent estimates of Δz^i in the bins where both can be applied, with combined uncertainties of σ_{Δz^i}=0.015, 0.013, 0.011, and 0.022 in the four bins. Repeating the photo-z procedure instead using the Directional Neighborhood Fitting (DNF) algorithm, or using the n^i(z) estimated from the matched sample in COSMOS, yields no discernible difference in cosmological inferences.

  8. Climatic associations of British species distributions show good transferability in time but low predictive accuracy for range change.

    Directory of Open Access Journals (Sweden)

    Giovanni Rapacciuolo

    Conservation planners often wish to predict how species distributions will change in response to environmental changes. Species distribution models (SDMs) are the primary tool for making such predictions. Many methods are widely used; however, they all make simplifying assumptions, and predictions can therefore be subject to high uncertainty. With global change well underway, field records of observed range shifts are increasingly being used for testing SDM transferability. We used an unprecedented distribution dataset documenting recent range changes of British vascular plants, birds, and butterflies to test whether correlative SDMs based on climate change provide useful approximations of potential distribution shifts. We modelled past species distributions from climate using nine single techniques and a consensus approach, and projected the geographical extent of these models to a more recent time period based on climate change; we then compared model predictions with recent observed distributions in order to estimate the temporal transferability and prediction accuracy of our models. We also evaluated the relative effect of methodological and taxonomic variation on the performance of SDMs. Models showed good transferability in time when assessed using widespread metrics of accuracy. However, models had low accuracy in predicting where occupancy status changed between time periods, especially for declining species. Model performance varied greatly among species within major taxa, but there was also considerable variation among modelling frameworks. Past climatic associations of British species distributions retain high explanatory power when transferred to recent times--due to their accuracy in predicting the large areas retained by species--but fail to capture relevant predictors of change. We strongly emphasize the need for caution when using SDMs to predict shifts in species distributions: high explanatory power on temporally-independent records

  9. The source and distribution of thermogenic dissolved organic matter in the ocean

    Science.gov (United States)

    Dittmar, T.; Suryaputra, I. G. N. A.; Paeng, J.

    2009-04-01

    Thermogenic organic matter (ThOM) is abundant in the environment. ThOM is produced at elevated temperature and pressure in deep sediments and the Earth's crust, and it is also a residue of fossil fuel and biomass burning ("black carbon"). Because of its refractory character, it accumulates in soils and sediments and, therefore, may sequester carbon from active cycles. It was hypothesized that a significant component of marine dissolved organic matter (DOM) might be thermogenic. Here we present a detailed data set on the distribution of thermogenic DOM in major water masses of the deep and surface ocean. In addition, several potential sources of thermogenic DOM to the ocean were investigated: active seeps of brine fluids in the deep Gulf of Mexico, rivers, estuaries and submarine groundwaters. Studies on deep-sea hydrothermal vents and aerosol deposition are ongoing. All DOM samples were isolated from seawater via solid phase extraction (SPE-DOM). ThOM was quantified in the extracts as benzene-polycarboxylic acids (BPCAs) after nitric acid oxidation via high-performance liquid chromatography and diode array detection (HPLC-DAD). BPCAs are produced exclusively from fused ring systems and are therefore unambiguous molecular tracers for ThOM. In addition to BPCA determination, the molecular composition and structure of ThOM was characterized in detail via ultrahigh resolution mass spectrometry (FT-ICR-MS). All marine and river DOM samples yielded significant amounts of BPCAs. The cold-seep system in the deep Gulf of Mexico, as well as black-water rivers (like the Suwannee River), was particularly rich in ThOM. Up to 10% of total dissolved organic carbon was thermogenic in both systems. The most abundant BPCA was benzene-pentacarboxylic acid (B5CA). The molecular composition of BPCAs and the FT-ICR-MS data indicate a relatively small number (5-8) of fused aromatic rings per molecule. Overall, the molecular BPCA patterns were very similar independent of the source of ThOM.

  10. Hydrological-niche models predict water plant functional group distributions in diverse wetland types.

    Science.gov (United States)

    Deane, David C; Nicol, Jason M; Gehrig, Susan L; Harding, Claire; Aldridge, Kane T; Goodman, Abigail M; Brookes, Justin D

    2017-06-01

    Human use of water resources threatens environmental water supplies. If resource managers are to develop policies that avoid unacceptable ecological impacts, some means to predict ecosystem response to changes in water availability is necessary. This is difficult to achieve at spatial scales relevant for water resource management because of the high natural variability in ecosystem hydrology and ecology. Water plant functional groups classify species with similar hydrological niche preferences together, allowing a qualitative means to generalize community responses to changes in hydrology. We tested the potential of functional groups for making quantitative predictions of water plant functional group distributions across diverse wetland types over a large geographical extent. We sampled wetlands covering a broad range of hydrogeomorphic and salinity conditions in South Australia, collecting both hydrological and floristic data from 687 quadrats across 28 wetland hydrological gradients. We built hydrological-niche models for eight water plant functional groups using a range of candidate models combining different surface inundation metrics. We then tested the predictive performance of top-ranked individual and averaged models for each functional group. Cross-validation showed that the models achieved acceptable predictive performance, with correct classification rates in the range 0.68-0.95. Model predictions can be made at any spatial scale for which hydrological data are available and could be implemented in a geographical information system. We show that the response of water plant functional groups to inundation is consistent enough across diverse wetland types to quantify the probability of hydrological impacts over regional spatial scales. © 2017 by the Ecological Society of America.

  11. An Open-Source Web-Based Tool for Resource-Agnostic Interactive Translation Prediction

    Directory of Open Access Journals (Sweden)

    Daniel Torregrosa

    2014-09-01

    We present a web-based open-source tool for interactive translation prediction (ITP) and describe its underlying architecture. ITP systems assist human translators by making context-based computer-generated suggestions as they type. Most of the ITP systems in the literature are strongly coupled with a statistical machine translation system that is conveniently adapted to provide the suggestions. Our system, however, follows a resource-agnostic approach, and suggestions are obtained from any unmodified black-box bilingual resource. This paper reviews our ITP method and describes the architecture of Forecat, a web tool, partly based on the recent technology of web components, that eases the use of our ITP approach in any web application requiring this kind of translation assistance. We also evaluate the performance of our method when using an unmodified Moses-based statistical machine translation system as the bilingual resource.

  12. Development of unfolding method to obtain pin-wise source strength distribution from PWR spent fuel assembly measurement

    International Nuclear Information System (INIS)

    Sitompul, Yos Panagaman; Shin, Hee-Sung; Park, Se-Hwan; Oh, Jong Myeong; Seo, Hee; Kim, Ho Dong

    2013-01-01

    An unfolding method has been developed to obtain a pin-wise source strength distribution of a 14 × 14 pressurized water reactor (PWR) spent fuel assembly. Sixteen measured gamma dose rates at the 16 control rod guide tubes of an assembly are unfolded to 179 pin-wise source strengths of the assembly. The method iteratively calculates and optimizes five coefficients of a quadratic fitting function for the X-Y source strength distribution. The pin-wise source strengths are obtained at the sixth iteration, with a maximum difference between two sequential iterations of about 0.2%. The relative distribution of pin-wise source strength from the unfolding is checked by comparison with the design code (Westinghouse APA code). The result shows that the relative distributions from the unfolding and the design code are consistent within a 5% difference. The absolute value of the pin-wise source strength is also checked by reproducing the dose rates at the measurement points. The result shows that the pin-wise source strengths from the unfolding reproduce the dose rates within a 2% difference. (author)
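
    The fitting step can be pictured as a least-squares estimate of a five-coefficient quadratic surface from the 16 guide-tube values, evaluated at the 179 pin positions; the basis (1, x, y, x², y²), the geometry and the data below are assumptions made for illustration, and the paper's iterative loop that reproduces the measured dose rates is not shown.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical assembly coordinates for the 16 guide tubes and 179 fuel pins;
    # random placeholders stand in for the real 14 x 14 lattice positions.
    guide_xy = rng.uniform(-7.0, 7.0, size=(16, 2))
    pin_xy = rng.uniform(-7.0, 7.0, size=(179, 2))
    measured = 1.0 + 0.02 * guide_xy[:, 0] - 0.01 * guide_xy[:, 1] ** 2 + rng.normal(0, 0.01, 16)

    def design(xy):
        """Five-term quadratic basis assumed here: 1, x, y, x^2, y^2."""
        x, y = xy[:, 0], xy[:, 1]
        return np.column_stack([np.ones_like(x), x, y, x ** 2, y ** 2])

    # Least-squares estimate of the five coefficients from the guide-tube values,
    # then evaluation of the fitted surface at every pin position.
    coeffs, *_ = np.linalg.lstsq(design(guide_xy), measured, rcond=None)
    pin_strength = design(pin_xy) @ coeffs
    pin_strength /= pin_strength.sum()        # report as a relative (normalized) distribution
    print(coeffs.round(4))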

  13. Evaluation of distribution coefficients for the prediction of strontium and cesium migration in a uniform sand

    International Nuclear Information System (INIS)

    Reynolds, W.D.; Gillham, R.W.; Cherry, J.A.

    1982-01-01

    The validity of using a distribution coefficient (K_d) in the mathematical prediction of strontium and cesium transport through uniform saturated sand was investigated by comparing measured breakthrough curves with curves of simulations using the advection-dispersion and the advection equations. Values for K_d were determined by batch equilibration tests and, indirectly, by fitting the mathematical model to breakthrough data from column experiments. Although the advection-dispersion equation accurately represented the breakthrough curves for two nonreactive solutes (chloride and tritium), neither it nor the advection equation provided close representations of the strontium and cesium curves. The simulated breakthrough curves for strontium and cesium were nearly symmetrical, whereas the data curves were very asymmetrical, with long tails. Column experiments with different pore-water velocities indicated that the shape of the normalized breakthrough curves was not sensitive to velocity. This suggests that the asymmetry of the measured curves was the result of nonlinear partitioning of the cations between the solid and liquid phases, rather than nonequilibrium effects. The results indicate that the distribution coefficient, when used in advection-dispersion models for prediction of the migration of strontium and cesium in field situations, can result in significant error
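
    For orientation, the way a distribution coefficient is normally embedded in such predictions is through a retardation factor in the one-dimensional advection-dispersion equation, under the assumption of linear, instantaneous, reversible sorption; this standard form, not reproduced from the paper itself, is

    \[
    R \, \frac{\partial C}{\partial t} = D \, \frac{\partial^2 C}{\partial x^2} - v \, \frac{\partial C}{\partial x},
    \qquad
    R = 1 + \frac{\rho_b}{\theta} K_d
    \]

    where C is the solute concentration, D the dispersion coefficient, v the pore-water velocity, ρ_b the bulk density and θ the volumetric water content. The asymmetric, long-tailed breakthrough curves reported above are evidence that this linear-equilibrium assumption breaks down for strontium and cesium.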

  14. Remotely Sensed High-Resolution Global Cloud Dynamics for Predicting Ecosystem and Biodiversity Distributions.

    Directory of Open Access Journals (Sweden)

    Adam M Wilson

    2016-03-01

    Cloud cover can influence numerous important ecological processes, including reproduction, growth, survival, and behavior, yet our assessment of its importance at the appropriate spatial scales has remained remarkably limited. If captured over a large extent yet at sufficiently fine spatial grain, cloud cover dynamics may provide key information for delineating a variety of habitat types and predicting species distributions. Here, we develop new near-global, fine-grain (≈1 km) monthly cloud frequencies from 15 years of twice-daily Moderate Resolution Imaging Spectroradiometer (MODIS) satellite images that expose spatiotemporal cloud cover dynamics of previously undocumented global complexity. We demonstrate that cloud cover varies strongly in its geographic heterogeneity and that the direct, observation-based nature of cloud-derived metrics can improve predictions of habitat, ecosystem, and species distributions with reduced spatial autocorrelation compared to commonly used interpolated climate data. These findings support the fundamental role of remote sensing as an effective lens through which to understand and globally monitor the fine-grain spatial variability of key biodiversity and ecosystem properties.

  15. Multi-Model Prediction for Demand Forecast in Water Distribution Networks

    Directory of Open Access Journals (Sweden)

    Rodrigo Lopez Farias

    2018-03-01

    This paper presents a multi-model predictor called Qualitative Multi-Model Predictor Plus (QMMP+) for demand forecasting in water distribution networks. QMMP+ is based on the decomposition of the quantitative and qualitative information of the time series. The quantitative component (i.e., the daily consumption) is forecasted, and the pattern mode is estimated using a Nearest Neighbor (NN) classifier and a Calendar. The patterns are updated via a simple Moving Average scheme. The NN classifier and the Calendar are executed simultaneously every period, and the most suitable model for prediction is selected using a probabilistic approach. The proposed solution for water demand forecasting is compared against Radial Basis Function Artificial Neural Networks (RBF-ANN), the statistical Autoregressive Integrated Moving Average (ARIMA), and Double Seasonal Holt-Winters (DSHW) approaches, providing the best results when applied to real demand data from the Barcelona Water Distribution Network. QMMP+ has demonstrated that the special modelling treatment of water consumption patterns improves the forecasting accuracy.
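
    The decomposition into a predicted daily total and a normalized hourly pattern, with a simple moving-average pattern update, can be sketched as follows; the calendar-only pattern selection and the persistence-based daily total are deliberate simplifications, and the NN classifier and probabilistic model selection of QMMP+ are not reproduced.

    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic hourly demand for 60 days: distinct weekday and weekend shapes plus noise.
    base = {0: np.sin(np.linspace(0, 2 * np.pi, 24)) + 2.0,        # weekday shape
            1: 0.5 * np.sin(np.linspace(0, 2 * np.pi, 24)) + 1.5}  # weekend shape
    day_class = [int(d % 7 >= 5) for d in range(60)]               # calendar: 0 weekday, 1 weekend
    demand = np.array([base[k] * rng.uniform(0.9, 1.1) for k in day_class])

    patterns = {0: np.ones(24) / 24, 1: np.ones(24) / 24}          # normalized pattern per class
    alpha = 0.2                                                    # moving-average update weight

    forecasts = []
    for d in range(1, 60):
        k = day_class[d]                              # qualitative part: pattern mode from calendar
        daily_total = demand[d - 1].sum()             # quantitative part: naive daily-total forecast
        forecasts.append(daily_total * patterns[k])   # distribute the total over the hours
        observed = demand[d]
        patterns[k] = (1 - alpha) * patterns[k] + alpha * observed / observed.sum()

    err = np.abs(np.array(forecasts) - demand[1:]).mean()
    print(f"mean absolute hourly error: {err:.3f}")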

  16. Quantification of the impact of precipitation spatial distribution uncertainty on predictive uncertainty of a snowmelt runoff model

    Science.gov (United States)

    Jacquin, A. P.

    2012-04-01

    This study is intended to quantify the impact of uncertainty about precipitation spatial distribution on the predictive uncertainty of a snowmelt runoff model. This problem is especially relevant in mountain catchments with a sparse precipitation observation network and relatively short precipitation records. The model analysed is a conceptual watershed model operating at a monthly time step. The model divides the catchment into five elevation zones, where the fifth zone corresponds to the catchment's glaciers. Precipitation amounts at each elevation zone i are estimated as the product of the observed precipitation at a station and a precipitation factor FPi. If other precipitation data are not available, these precipitation factors must be adjusted during the calibration process and are thus seen as parameters of the model. In the case of the fifth zone, glaciers are seen as an inexhaustible source of water that melts when the snow cover is depleted. The catchment case study is the Aconcagua River at Chacabuquito, located in the Andean region of Central Chile. The model's predictive uncertainty is measured in terms of the output variance of the mean squared error of the Box-Cox transformed discharge, the relative volumetric error, and the weighted average of snow water equivalent in the elevation zones at the end of the simulation period. Sobol's variance decomposition (SVD) method is used for assessing the impact of precipitation spatial distribution, represented by the precipitation factors FPi, on the model's predictive uncertainty. In the SVD method, the first-order effect of a parameter (or group of parameters) indicates the fraction of predictive uncertainty that could be reduced if the true value of this parameter (or group) was known. Similarly, the total effect of a parameter (or group) measures the fraction of predictive uncertainty that would remain if the true value of this parameter (or group) was unknown, but all the remaining model parameters could be fixed
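
    The first-order and total effects described above can be estimated, for example, with the SALib package (an assumed tool, not necessarily the one used in the study); the three-factor toy function below stands in for the monthly snowmelt runoff model and its precipitation factors.

    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    # Toy stand-in for the watershed model: a scalar output driven by three precipitation factors.
    def model(fp):
        return 1.5 * fp[:, 0] + 0.5 * fp[:, 1] ** 2 + 0.1 * fp[:, 0] * fp[:, 2]

    problem = {
        "num_vars": 3,
        "names": ["FP1", "FP2", "FP3"],
        "bounds": [[0.5, 2.0]] * 3,
    }

    X = saltelli.sample(problem, 1024)     # Saltelli sampling scheme for Sobol' indices
    Y = model(X)
    Si = sobol.analyze(problem, Y)

    # S1: fraction of output variance removed if the factor were known exactly.
    # ST: fraction that would remain if all other factors were fixed at their true values.
    for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
        print(f"{name}: S1 = {s1:.2f}, ST = {st:.2f}")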

  17. SESAM – a new framework integrating macroecological and species distribution models for predicting spatio-temporal patterns of species assemblages

    DEFF Research Database (Denmark)

    Guisan, Antoine; Rahbek, Carsten

    2011-01-01

    Two different approaches currently prevail for predicting spatial patterns of species assemblages. The first approach (macroecological modelling, MEM) focuses directly on realized properties of species assemblages, whereas the second approach (stacked species distribution modelling, S-SDM) starts...

  18. The radio spectral energy distribution of infrared-faint radio sources

    Science.gov (United States)

    Herzog, A.; Norris, R. P.; Middelberg, E.; Seymour, N.; Spitler, L. R.; Emonts, B. H. C.; Franzen, T. M. O.; Hunstead, R.; Intema, H. T.; Marvil, J.; Parker, Q. A.; Sirothia, S. K.; Hurley-Walker, N.; Bell, M.; Bernardi, G.; Bowman, J. D.; Briggs, F.; Cappallo, R. J.; Callingham, J. R.; Deshpande, A. A.; Dwarakanath, K. S.; For, B.-Q.; Greenhill, L. J.; Hancock, P.; Hazelton, B. J.; Hindson, L.; Johnston-Hollitt, M.; Kapińska, A. D.; Kaplan, D. L.; Lenc, E.; Lonsdale, C. J.; McKinley, B.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Morgan, J.; Oberoi, D.; Offringa, A.; Ord, S. M.; Prabu, T.; Procopio, P.; Udaya Shankar, N.; Srivani, K. S.; Staveley-Smith, L.; Subrahmanyan, R.; Tingay, S. J.; Wayth, R. B.; Webster, R. L.; Williams, A.; Williams, C. L.; Wu, C.; Zheng, Q.; Bannister, K. W.; Chippendale, A. P.; Harvey-Smith, L.; Heywood, I.; Indermuehle, B.; Popping, A.; Sault, R. J.; Whiting, M. T.

    2016-10-01

    Context. Infrared-faint radio sources (IFRS) are a class of radio-loud (RL) active galactic nuclei (AGN) at high redshifts (z ≥ 1.7) that are characterised by their relative infrared faintness, resulting in enormous radio-to-infrared flux density ratios of up to several thousand. Aims: Because of their optical and infrared faintness, it is very challenging to study IFRS at these wavelengths. However, IFRS are relatively bright in the radio regime, with 1.4 GHz flux densities of a few to a few tens of mJy. Therefore, the radio regime is the most promising wavelength regime in which to constrain their nature. We aim to test the hypothesis that IFRS are young AGN, particularly GHz peaked-spectrum (GPS) and compact steep-spectrum (CSS) sources that have a low-frequency turnover. Methods: We use the rich radio data set available for the Australia Telescope Large Area Survey fields, covering the frequency range between 150 MHz and 34 GHz with up to 19 wavebands from different telescopes, and build radio spectral energy distributions (SEDs) for 34 IFRS. We then study the radio properties of this class of object with respect to turnover, spectral index, and behaviour towards higher frequencies. We also present the highest-frequency radio observations of an IFRS, observed with the Plateau de Bure Interferometer at 105 GHz, and model the multi-wavelength and radio-far-infrared SED of this source. Results: We find IFRS usually follow single power laws down to observed frequencies of around 150 MHz. Mostly, the radio SEDs are steep, and IFRS show statistically significantly steeper radio SEDs than the broader RL AGN population. Our analysis reveals that the fractions of GPS and CSS sources in the population of IFRS are consistent with the fractions in the broader RL AGN population. We find that at least % of IFRS contain young AGN, although the fraction might be significantly higher as suggested by the steep SEDs and the compact morphology of IFRS. The detailed multi

  19. A Random Forest approach to predict the spatial distribution of sediment pollution in an estuarine system.

    Directory of Open Access Journals (Sweden)

    Eric S Walsh

    Modeling the magnitude and distribution of sediment-bound pollutants in estuaries is often limited by incomplete knowledge of the site and inadequate sample density. To address these modeling limitations, a decision-support tool framework was conceived that predicts sediment contamination from the sub-estuary to broader estuary extent. For this study, a Random Forest (RF) model was implemented to predict the distribution of a model contaminant, triclosan (5-chloro-2-(2,4-dichlorophenoxy)phenol) (TCS), in Narragansett Bay, Rhode Island, USA. TCS is an unregulated contaminant used in many personal care products. The RF explanatory variables were associated with TCS transport and fate (proxies) and direct and indirect environmental entry. The continuous RF TCS concentration predictions were discretized into three levels of contamination (low, medium, and high) for three different quantile thresholds. The RF model explained 63% of the variance with a minimum number of variables. Total organic carbon (TOC), a transport and fate proxy, was a strong predictor of TCS contamination, causing a mean squared error increase of 59% when compared to permutations of randomized values of TOC. Additionally, combined sewer overflow discharge (environmental entry) and sand (transport and fate proxy) were strong predictors. The discretization models identified a TCS area of greatest concern in the northern reach of Narragansett Bay (Providence River sub-estuary), which was validated with independent test samples. This decision-support tool performed well at the sub-estuary extent and provided the means to identify areas of concern and prioritize bay-wide sampling.
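
    A minimal scikit-learn sketch of the workflow described above: fit a Random Forest regressor, rank predictors by permutation importance, and discretize the continuous predictions at quantile thresholds. The predictors and response are synthetic stand-ins, not the Narragansett Bay data.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)

    # Synthetic sediment samples; the predictors mirror the kinds of variables named in the
    # abstract (TOC, sand fraction, CSO discharge) but the values and response are made up.
    n = 300
    X = pd.DataFrame({
        "toc_pct":  rng.uniform(0.1, 8.0, n),
        "sand_pct": rng.uniform(0.0, 90.0, n),
        "cso_km":   rng.uniform(0.1, 20.0, n),    # distance to nearest combined sewer overflow
    })
    y = 5.0 * X.toc_pct - 0.05 * X.sand_pct + 10.0 / X.cso_km + rng.normal(0, 1.0, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
    print(f"R^2 on held-out samples: {rf.score(X_te, y_te):.2f}")

    # Permutation importance mirrors the "MSE increase under randomized values" used to rank predictors.
    imp = permutation_importance(rf, X_te, y_te, n_repeats=20, random_state=0)
    for name, mean in zip(X.columns, imp.importances_mean):
        print(f"{name}: {mean:.3f}")

    # Discretize continuous predictions into low / medium / high classes at chosen quantiles.
    thresholds = np.quantile(rf.predict(X_te), [0.33, 0.66])
    classes = np.digitize(rf.predict(X_te), thresholds)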

  20. Does scale matter? A systematic review of incorporating biological realism when predicting changes in species distributions.

    Science.gov (United States)

    Record, Sydne; Strecker, Angela; Tuanmu, Mao-Ning; Beaudrot, Lydia; Zarnetske, Phoebe; Belmaker, Jonathan; Gerstner, Beth

    2018-01-01

    There is ample evidence that biotic factors, such as biotic interactions and dispersal capacity, can affe