WorldWideScience

Sample records for source distribution predictions

  1. Over-Distribution in Source Memory

    Science.gov (United States)

    Brainerd, C. J.; Reyna, V. F.; Holliday, R. E.; Nakamura, K.

    2012-01-01

    Semantic false memories are confounded with a second type of error, over-distribution, in which items are attributed to contradictory episodic states. Over-distribution errors have proved to be more common than false memories when the two are disentangled. We investigated whether over-distribution is prevalent in another classic false memory paradigm: source monitoring. It is. Conventional false memory responses (source misattributions) were predominantly over-distribution errors, but unlike semantic false memory, over-distribution also accounted for more than half of true memory responses (correct source attributions). Experimental control of over-distribution was achieved via a series of manipulations that affected either recollection of contextual details or item memory (concreteness, frequency, list-order, number of presentation contexts, and individual differences in verbatim memory). A theoretical model (conjoint process dissociation) was used to analyze the data; it predicts that (a) over-distribution is directly proportional to item memory but inversely proportional to recollection and (b) item memory is not a necessary precondition for recollection of contextual details. The results were consistent with both predictions. PMID:21942494

  2. Using a topographic index to distribute variable source area runoff predicted with the SCS curve-number equation

    Science.gov (United States)

    Lyon, Steve W.; Walter, M. Todd; Gérard-Marchant, Pierre; Steenhuis, Tammo S.

    2004-10-01

    Because the traditional Soil Conservation Service curve-number (SCS-CN) approach continues to be used ubiquitously in water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed and tested a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Predicting the location of source areas is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point-source pollution. The method presented here used the traditional SCS-CN approach to predict runoff volume and spatial extent of saturated areas and a topographic index, like that used in TOPMODEL, to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was applied to two subwatersheds of the Delaware basin in the Catskill Mountains region of New York State and one watershed in south-eastern Australia to produce runoff-probability maps. Observed saturated area locations in the watersheds agreed with the distributed CN-VSA method. Results showed good agreement with those obtained from the previously validated soil moisture routing (SMR) model. When compared with the traditional SCS-CN method, the distributed CN-VSA method predicted a similar total volume of runoff, but vastly different locations of runoff generation. Thus, the distributed CN-VSA approach provides a physically based method that is simple enough to be incorporated into water quality models, and other tools that currently use the traditional SCS-CN method, while still adhering to the principles of VSA hydrology.
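
    As a rough illustration of the distributed CN-VSA idea described above, the minimal sketch below computes storm runoff with the standard SCS-CN equation, takes the saturated-area fraction as dQ/dP, and flags the wettest cells of a topographic-index map as the runoff source area. Variable names, the synthetic gamma-distributed wetness index, and the storm/CN values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def scs_cn_runoff(P, CN, lambda_ia=0.2):
    """Classic SCS-CN runoff depth (same units as P, here mm)."""
    S = 25400.0 / CN - 254.0           # potential retention (mm), CN in percent units
    Ia = lambda_ia * S                 # initial abstraction
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

def distribute_runoff(P, CN, twi):
    """Spread watershed-average runoff over the wettest cells.

    twi: 1-D array of topographic wetness index values, one per grid cell.
    The fraction of the watershed generating saturation-excess runoff is
    taken as Af = dQ/dP, and the Af highest-ranked cells by twi are flagged
    as the runoff source area (a simplified reading of the CN-VSA idea).
    """
    Q = scs_cn_runoff(P, CN)
    S = 25400.0 / CN - 254.0
    Ia = 0.2 * S
    Af = 0.0 if P <= Ia else 1.0 - (S / (P - Ia + S)) ** 2   # equals dQ/dP
    threshold = np.quantile(twi, 1.0 - Af) if Af > 0 else np.inf
    source_area = twi >= threshold      # boolean map of runoff source cells
    return Q, Af, source_area

# Example: 50 mm storm, CN = 75, synthetic wetness-index field
rng = np.random.default_rng(0)
twi = rng.gamma(shape=4.0, scale=2.0, size=10_000)
Q, Af, src = distribute_runoff(50.0, 75.0, twi)
print(f"runoff = {Q:.1f} mm, saturated fraction = {Af:.2f}, cells flagged = {src.sum()}")
```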

  3. GIS Based Distributed Runoff Predictions in Variable Source Area Watersheds Employing the SCS-Curve Number

    Science.gov (United States)

    Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.

    2003-04-01

    Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. Within an integrated GIS modeling environment, we developed a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and spatial extent of saturated areas and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated in an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provided a physically-based method that gives realistic results for watersheds with VSA hydrology.

  4. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  5. Hybrid ATDL-gamma distribution model for predicting area source acid gas concentrations

    Energy Technology Data Exchange (ETDEWEB)

    Jakeman, A J; Taylor, J A

    1985-01-01

    An air quality model is developed to predict the distribution of concentrations of acid gas in an urban airshed. The model is hybrid in character, combining reliable features of a deterministic ATDL-based model with statistical distributional approaches. The gamma distribution was identified from a range of distributional models as the best model. The paper shows that the assumptions of a previous hybrid model may be relaxed and presents a methodology for characterizing the uncertainty associated with model predictions. Results are demonstrated for the 98th-percentile predictions of 24-h average data over annual periods at six monitoring sites. This percentile relates to the World Health Organization goal for acid gas concentrations.
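
    The distributional step of such a hybrid model can be sketched as follows: fit a gamma distribution to a year of 24-h average concentrations and read off the 98th percentile. This is only the statistical half of the approach (the ATDL dispersion coupling is not shown), and the data below are synthetic placeholders; SciPy is assumed.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for one year of 24-h average acid gas concentrations
# at a monitoring site (the paper couples these to an ATDL dispersion model).
rng = np.random.default_rng(1)
daily_conc = rng.gamma(shape=2.5, scale=8.0, size=365)   # arbitrary units

# Fit a two-parameter gamma distribution (location fixed at zero).
shape, loc, scale = stats.gamma.fit(daily_conc, floc=0.0)

# 98th-percentile concentration implied by the fitted distribution,
# the statistic compared against the WHO-style goal in the abstract.
p98_fitted = stats.gamma.ppf(0.98, shape, loc=loc, scale=scale)
p98_empirical = np.quantile(daily_conc, 0.98)

print(f"fitted 98th percentile:    {p98_fitted:.1f}")
print(f"empirical 98th percentile: {p98_empirical:.1f}")
```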

  6. Thematic and spatial resolutions affect model-based predictions of tree species distribution.

    Science.gov (United States)

    Liang, Yu; He, Hong S; Fraser, Jacob S; Wu, ZhiWei

    2013-01-01

    Subjective decisions of thematic and spatial resolutions in characterizing environmental heterogeneity may affect the characterizations of spatial pattern and the simulation of occurrence and rate of ecological processes, and in turn, model-based tree species distribution. Thus, this study quantified the importance of thematic and spatial resolutions, and their interaction in predictions of tree species distribution (quantified by species abundance). We investigated how model-predicted species abundances changed and whether tree species with different ecological traits (e.g., seed dispersal distance, competitive capacity) had different responses to varying thematic and spatial resolutions. We used the LANDIS forest landscape model to predict tree species distribution at the landscape scale and designed a series of scenarios with different thematic (different numbers of land types) and spatial resolutions combinations, and then statistically examined the differences of species abundance among these scenarios. Results showed that both thematic and spatial resolutions affected model-based predictions of species distribution, but thematic resolution had a greater effect. Species ecological traits affected the predictions. For species with moderate dispersal distance and relatively abundant seed sources, predicted abundance increased as thematic resolution increased. However, for species with long seeding distance or high shade tolerance, thematic resolution had an inverse effect on predicted abundance. When seed sources and dispersal distance were not limiting, the predicted species abundance increased with spatial resolution and vice versa. Results from this study may provide insights into the choice of thematic and spatial resolutions for model-based predictions of tree species distribution.

  7. Fiber optic distributed temperature sensing for fire source localization

    Science.gov (United States)

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Sigrist, Markus W.; Li, Jun; Dong, Fengzhong

    2017-08-01

    A method for localizing a fire source based on a distributed temperature sensor system is proposed. Two sections of optical fibers were placed orthogonally to each other as the sensing elements. A tray of alcohol was lit to act as a fire outbreak in a cabinet with an uneven ceiling to simulate a real scene of fire. Experiments were carried out to demonstrate the feasibility of the method. Rather large fluctuations and systematic errors with respect to predicting the exact room coordinates of the fire source caused by the uneven ceiling were observed. Two mathematical methods (smoothing recorded temperature curves and finding temperature peak positions) to improve the prediction accuracy are presented, and the experimental results indicate that the fluctuation ranges and systematic errors are significantly reduced. The proposed scheme is simple and appears reliable enough to locate a fire source in large spaces.
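
    The two post-processing steps mentioned above (smoothing the recorded temperature curves and locating the temperature peak along each of the two orthogonal fibres) can be sketched as below. The traces, hot-spot location, and smoothing window are synthetic assumptions for illustration only.

```python
import numpy as np

def locate_peak(positions, temperatures, window=11):
    """Smooth a distributed-temperature trace and return the peak position."""
    kernel = np.ones(window) / window
    smooth = np.convolve(temperatures, kernel, mode="same")   # moving average
    return positions[np.argmax(smooth)]

# Synthetic traces for two orthogonal fibres on the ceiling (metres).
pos = np.linspace(0.0, 10.0, 500)
rng = np.random.default_rng(2)
hot_x, hot_y = 3.2, 6.8                                        # "true" fire location
trace_x = 25 + 40 * np.exp(-((pos - hot_x) ** 2) / 0.5) + rng.normal(0, 2, pos.size)
trace_y = 25 + 40 * np.exp(-((pos - hot_y) ** 2) / 0.5) + rng.normal(0, 2, pos.size)

x_est = locate_peak(pos, trace_x)
y_est = locate_peak(pos, trace_y)
print(f"estimated fire location: ({x_est:.2f} m, {y_est:.2f} m)")
```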

  8. Predictable return distributions

    DEFF Research Database (Denmark)

    Pedersen, Thomas Quistgaard

    trace out the entire distribution. A univariate quantile regression model is used to examine stock and bond return distributions individually, while a multivariate model is used to capture their joint distribution. An empirical analysis on US data shows that certain parts of the return distributions......-of-sample analyses show that the relative accuracy of the state variables in predicting future returns varies across the distribution. A portfolio study shows that an investor with power utility can obtain economic gains by applying the empirical return distribution in portfolio decisions instead of imposing...
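
    A generic sketch of the univariate step described in this (truncated) abstract is shown below: regress return quantiles on a predictive state variable so that several fitted quantiles trace out the conditional return distribution. The data are synthetic, the state variable is a stand-in for predictors such as the dividend-price ratio, and statsmodels is assumed; this is not the author's specification.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic monthly returns whose location and dispersion depend on a state variable.
rng = np.random.default_rng(3)
n = 600
state = rng.normal(0.0, 1.0, n)
returns = 0.004 + 0.002 * state + (0.04 + 0.01 * state) * rng.normal(size=n)

X = sm.add_constant(state)
quantiles = [0.05, 0.25, 0.50, 0.75, 0.95]

# One quantile regression per quantile traces out the conditional distribution.
for q in quantiles:
    res = sm.QuantReg(returns, X).fit(q=q)
    const, slope = res.params
    print(f"q={q:.2f}: intercept={const:+.4f}, slope on state={slope:+.4f}")
```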

  9. Relationship between the Prediction Accuracy of Tsunami Inundation and Relative Distribution of Tsunami Source and Observation Arrays: A Case Study in Tokyo Bay

    Science.gov (United States)

    Takagawa, T.

    2017-12-01

    A rapid and precise tsunami forecast based on offshore monitoring is attracting attention as a way to reduce human losses due to devastating tsunami inundation. We developed a forecast method that combines hierarchical Bayesian inversion with a pre-computed database and rapid post-computing of tsunami inundation. The method was applied to Tokyo Bay to evaluate the efficiency of observation arrays against three tsunamigenic earthquakes: a scenario earthquake at the Nankai trough and the historic Genroku (1703) and Enpo (1677) earthquakes. In general, a rich observation array near the tsunami source has an advantage in both the accuracy and the rapidness of the tsunami forecast. To examine the effect of observation time length, we used four types of data with lengths of 5, 10, 20 and 45 minutes after the earthquake occurrence. Prediction accuracy of tsunami inundation was evaluated from the simulated tsunami inundation areas around Tokyo Bay due to the target earthquakes. The shortest time length giving accurate prediction varied with the target earthquake. Here, accurate prediction means the simulated values fall within the 95% credible intervals of the prediction. In the Enpo case, a 5-minute observation is enough for accurate prediction for Tokyo Bay, but 10 minutes and 45 minutes are needed in the cases of the Nankai trough and Genroku, respectively. The shortest time length for accurate prediction shows a strong relationship with the relative distance between the tsunami source and the observation arrays. In the Enpo case, offshore tsunami observation points are densely distributed even in the source region, so accurate prediction can be achieved rapidly within 5 minutes. This precise prediction is useful for early warnings. Even in the worst case of Genroku, where fewer observation points are available near the source, accurate prediction can be obtained within 45 minutes. This information can be useful to figure out the outline of the hazard in an early

  10. Predicting volume of distribution with decision tree-based regression methods using predicted tissue:plasma partition coefficients.

    Science.gov (United States)

    Freitas, Alex A; Limbu, Kriti; Ghafourian, Taravat

    2015-01-01

    Volume of distribution is an important pharmacokinetic property that indicates the extent of a drug's distribution in the body tissues. This paper addresses the problem of how to estimate the apparent volume of distribution at steady state (Vss) of chemical compounds in the human body using decision tree-based regression methods from the area of data mining (or machine learning). Hence, the pros and cons of several different types of decision tree-based regression methods have been discussed. The regression methods predict Vss using, as predictive features, both the compounds' molecular descriptors and the compounds' tissue:plasma partition coefficients (Kt:p) - often used in physiologically-based pharmacokinetics. Therefore, this work has assessed whether the data mining-based prediction of Vss can be made more accurate by using as input not only the compounds' molecular descriptors but also (a subset of) their predicted Kt:p values. Comparison of the models that used only molecular descriptors, in particular, the Bagging decision tree (mean fold error of 2.33), with those employing predicted Kt:p values in addition to the molecular descriptors, such as the Bagging decision tree using adipose Kt:p (mean fold error of 2.29), indicated that the use of predicted Kt:p values as descriptors may be beneficial for accurate prediction of Vss using decision trees if prior feature selection is applied. Decision tree based models presented in this work have an accuracy that is reasonable and similar to the accuracy of reported Vss inter-species extrapolations in the literature. The estimation of Vss for new compounds in drug discovery will benefit from methods that are able to integrate large and varied sources of data and flexible non-linear data mining methods such as decision trees, which can produce interpretable models. Graphical Abstract: Decision trees for the prediction of tissue partition coefficient and volume of distribution of drugs.
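
    The modelling setup can be sketched with scikit-learn's BaggingRegressor (whose default base learner is a decision tree) and the geometric mean fold error quoted above. Everything in the block below is a synthetic placeholder: the descriptor matrix, the "adipose Kt:p" column, and the relationship to log Vss are illustrative assumptions, not the paper's dataset.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Placeholder feature matrix: a handful of molecular descriptors plus one
# predicted tissue:plasma partition coefficient (e.g. adipose Kt:p).
n_compounds, n_descriptors = 300, 8
descriptors = rng.normal(size=(n_compounds, n_descriptors))
adipose_ktp = rng.lognormal(mean=0.0, sigma=0.7, size=(n_compounds, 1))
X = np.hstack([descriptors, adipose_ktp])

# Synthetic log10 Vss (L/kg) loosely tied to the features.
log_vss = (0.3 * descriptors[:, 0] - 0.2 * descriptors[:, 1]
           + 0.5 * np.log10(adipose_ktp[:, 0]) + rng.normal(0, 0.3, n_compounds))

X_tr, X_te, y_tr, y_te = train_test_split(X, log_vss, test_size=0.25, random_state=0)

# Bagging with the default decision-tree base learner.
model = BaggingRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Geometric mean fold error, the accuracy measure quoted in the abstract
# (fold error = 10^|log10(predicted) - log10(observed)| when Vss is in log10 units).
gmfe = 10 ** np.mean(np.abs(pred - y_te))
print(f"geometric mean fold error: {gmfe:.2f}")
```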

  11. FDTD verification of deep-set brain tumor hyperthermia using a spherical microwave source distribution

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, D. [20th Intelligence Squadron, Offutt AFB, NE (United States); Rappaport, C.M. [Northeastern Univ., Boston, MA (United States). Center for Electromagnetics Research; Terzuoli, A.J. Jr. [Air Force Inst. of Tech., Dayton, OH (United States). Graduate School of Engineering

    1996-10-01

    Although use of noninvasive microwave hyperthermia to treat cancer is problematic in many human body structures, careful selection of the source electric field distribution around the entire surface of the head can generate a tightly focused global power density maximum at the deepest point within the brain. An analytic prediction of the optimum volume field distribution in a layered concentric head model based on summing spherical harmonic modes is derived and presented. This ideal distribution is then verified using a three-dimensional finite difference time domain (FDTD) simulation with a discretized, MRI-based head model excited by the spherical source. The numerical computation gives a very similar dissipated power pattern as the analytic prediction. This study demonstrates that microwave hyperthermia can theoretically be a feasible cancer treatment modality for tumors in the head, providing a well-resolved hot-spot at depth without overheating any other healthy tissue.

  12. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side offering shifting processing...... steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from...... the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide...

  13. Modified ensemble Kalman filter for nuclear accident atmospheric dispersion: prediction improved and source estimated.

    Science.gov (United States)

    Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y

    2014-09-15

    Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimation of radioactive material distribution in short range (about 50 km) is in urgent need for population sheltering and evacuation planning. However, the meteorological data and the source term which greatly influence the accuracy of the atmospheric dispersion models are usually poorly known at the early phase of the emergency. In this study, a modified ensemble Kalman filter data assimilation method in conjunction with a Lagrangian puff-model is proposed to simultaneously improve the model prediction and reconstruct the source terms for short range atmospheric dispersion using the off-site environmental monitoring data. Four main uncertainty parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in the nuclear power plant accident emergency management but also in other similar situation where hazardous material is released into the atmosphere. Copyright © 2014 Elsevier B.V. All rights reserved.
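
    A generic sketch of the core step is given below: one stochastic ensemble Kalman filter analysis update applied to an augmented state (release rate, plume rise, wind speed, wind direction). The "dispersion model" here is a toy observation operator standing in for the paper's Lagrangian puff model, and all numerical values are illustrative assumptions.

```python
import numpy as np

def enkf_update(ensemble, observations, obs_operator, obs_error_std, rng):
    """One stochastic ensemble Kalman filter analysis step.

    ensemble : (n_members, n_state) array of augmented states, e.g.
               [release rate, plume rise, wind speed, wind direction].
    obs_operator : function mapping a state vector to predicted
                   concentrations at the monitoring stations.
    """
    n_members, _ = ensemble.shape
    predicted = np.array([obs_operator(x) for x in ensemble])      # (n_members, n_obs)

    x_mean = ensemble.mean(axis=0)
    y_mean = predicted.mean(axis=0)
    X = ensemble - x_mean
    Y = predicted - y_mean

    # Sample covariances and Kalman gain.
    Pxy = X.T @ Y / (n_members - 1)
    Pyy = Y.T @ Y / (n_members - 1) + np.diag(np.full(observations.size, obs_error_std**2))
    K = Pxy @ np.linalg.inv(Pyy)

    # Perturbed-observation update of each member.
    perturbed = observations + rng.normal(0.0, obs_error_std, size=predicted.shape)
    return ensemble + (perturbed - predicted) @ K.T

# Toy "dispersion model": concentrations at 3 stations from a 4-element state.
stations = np.array([[1.0, 0.2], [2.0, -0.1], [3.0, 0.3]])
def toy_dispersion(state):
    q, h, u, theta = state
    d = np.linalg.norm(stations, axis=1)
    return q / (u * (d + h)) * np.cos(np.deg2rad(theta - 270.0))

rng = np.random.default_rng(5)
truth = np.array([100.0, 50.0, 5.0, 260.0])
obs = toy_dispersion(truth) + rng.normal(0.0, 0.05, 3)

ens = np.column_stack([rng.normal(80, 30, 50), rng.normal(40, 15, 50),
                       rng.normal(5, 1, 50), rng.normal(250, 20, 50)])
ens = enkf_update(ens, obs, toy_dispersion, 0.05, rng)
print("posterior mean state:", ens.mean(axis=0).round(2))
```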

  14. Source distribution dependent scatter correction for PVI

    International Nuclear Information System (INIS)

    Barney, J.S.; Harrop, R.; Dykstra, C.J.

    1993-01-01

    Source distribution dependent scatter correction methods which incorporate different amounts of information about the source position and material distribution have been developed and tested. The techniques use image-to-projection integral transformation incorporating varying degrees of information on the distribution of scattering material, or convolution subtraction methods, with some information about the scattering material included in one of the convolution methods. To test the techniques, the authors apply them to data generated by Monte Carlo simulations which use geometric shapes or a voxelized density map to model the scattering material. Source position and material distribution have been found to have some effect on scatter correction. An image-to-projection method which incorporates a density map produces accurate scatter correction but is computationally expensive. Simpler methods, both image-to-projection and convolution, can also provide effective scatter correction.

  15. Atmospheric dispersion prediction and source estimation of hazardous gas using artificial neural network, particle swarm optimization and expectation maximization

    Science.gov (United States)

    Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang

    2018-04-01

    Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source have become increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not offer both high efficiency and high accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO) and expectation maximization (EM). The method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict the concentration distribution accurately and efficiently. PSO and EM are applied to estimate the source parameters, which effectively accelerates the convergence. The method is verified against the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.
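
    The surrogate-model half of this approach can be sketched as follows: generate a library of pre-computed dispersion scenarios and train a neural network to map (release rate, wind speed, receptor location) to concentration. The scenario generator below is a deliberately simplified Gaussian plume, scikit-learn's MLPRegressor stands in for whatever network the authors used, and the PSO/EM source-estimation stage is not shown; all parameter ranges are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def gaussian_plume(q, u, x, y, sigma=25.0, h=10.0):
    """Very simplified ground-level Gaussian plume (stand-in for the real model)."""
    sy = sz = sigma * np.sqrt(np.maximum(x, 1.0) / 100.0)
    return (q / (np.pi * u * sy * sz)) * np.exp(-y**2 / (2 * sy**2)) * np.exp(-h**2 / (2 * sz**2))

# Pre-computed scenario library: sample release rate, wind speed and receptor location.
rng = np.random.default_rng(6)
n = 5_000
q = rng.uniform(0.5, 5.0, n)        # release rate (kg/s)
u = rng.uniform(1.0, 10.0, n)       # wind speed (m/s)
x = rng.uniform(50.0, 2000.0, n)    # downwind distance (m)
y = rng.uniform(-300.0, 300.0, n)   # crosswind distance (m)
features = np.column_stack([q, u, x, y])
target = np.log10(gaussian_plume(q, u, x, y) + 1e-12)   # log-concentration is easier to learn

# Train the surrogate; once trained it evaluates almost instantly,
# which is what makes it attractive for emergency-response use.
ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
ann.fit(features, target)

test = np.array([[2.0, 3.0, 500.0, 50.0]])
print("surrogate log10 C:", ann.predict(test)[0].round(3))
print("plume     log10 C:", np.log10(gaussian_plume(2.0, 3.0, 500.0, 50.0) + 1e-12).round(3))
```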

  16. Image authentication using distributed source coding.

    Science.gov (United States)

    Lin, Yao-Chung; Varodayan, David; Girod, Bernd

    2012-01-01

    We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.

  17. Coded aperture imaging of alpha source spatial distribution

    International Nuclear Information System (INIS)

    Talebitaher, Alireza; Shutler, Paul M.E.; Springham, Stuart V.; Rawat, Rajdeep S.; Lee, Paul

    2012-01-01

    The Coded Aperture Imaging (CAI) technique has been applied with CR-39 nuclear track detectors to image alpha particle source spatial distributions. The experimental setup comprised: a 226Ra source of alpha particles, a laser-machined CAI mask, and CR-39 detectors, arranged inside a vacuum enclosure. Three different alpha particle source shapes were synthesized by using a linear translator to move the 226Ra source within the vacuum enclosure. The coded mask pattern used is based on a Singer Cyclic Difference Set, with 400 pixels and 57 open square holes (representing ρ = 1/7 = 14.3% open fraction). After etching of the CR-39 detectors, the area, circularity, mean optical density and positions of all candidate tracks were measured by an automated scanning system. Appropriate criteria were used to select alpha particle tracks, and a decoding algorithm applied to the (x, y) data produced the de-coded image of the source. Signal to Noise Ratio (SNR) values obtained for alpha particle CAI images were found to be substantially better than those for corresponding pinhole images, although the CAI-SNR values were below the predictions of theoretical formulae. Monte Carlo simulations of CAI and pinhole imaging were performed in order to validate the theoretical SNR formulae and also our CAI decoding algorithm. There was found to be good agreement between the theoretical formulae and SNR values obtained from simulations. Possible reasons for the lower SNR obtained for the experimental CAI study are discussed.

  18. Characterization of DNAPL Source Zone Architecture and Prediction of Associated Plume Response: Progress and Perspectives

    Science.gov (United States)

    Abriola, L. M.; Pennell, K. D.; Ramsburg, C. A.; Miller, E. L.; Christ, J.; Capiro, N. L.; Mendoza-Sanchez, I.; Boroumand, A.; Ervin, R. E.; Walker, D. I.; Zhang, H.

    2012-12-01

    It is now widely recognized that the distribution of contaminant mass will control both the evolution of aqueous phase plumes and the effectiveness of many source zone remediation technologies at sites contaminated by dense nonaqueous phase liquids (DNAPLs). Advances in the management of sites containing DNAPL source zones, however, are currently hampered by the difficulty associated with characterizing subsurface DNAPL 'architecture'. This presentation provides an overview of recent research, integrating experimental and mathematical modeling studies, designed to improve our ability to characterize DNAPL distributions and predict associated plume response. Here emphasis is placed on estimation of the most information-rich DNAPL architecture metrics, through a combination of localized in situ tests and more readily available plume transect concentration observations. Estimated metrics will then serve as inputs to an upscaled screening model for prediction of long term plume response. Machine learning techniques were developed and refined to identify a variety of source zone metrics and associated confidence intervals through the processing of down gradient concentration data. Estimated metrics include the volumes and volume percentages of DNAPL in pools and ganglia, as well as their ratio (pool fraction). Multiphase flow and transport simulations provided training data for model development and assessment that are representative of field-scale DNAPL source zones and their evolving plumes. Here, a variety of release and site heterogeneity (sequential Gaussian permeability) conditions were investigated. Push-pull tracer tests were also explored as a means to provide localized in situ observations to refine these metric estimates. Here, two-dimensional aquifer cell experiments and mathematical modeling were used to quantify upscaled interphase mass transfer rates and the interplay between injection and extraction rates, local source zone architecture, and tracer

  19. Precise Mapping Of A Spatially Distributed Radioactive Source

    International Nuclear Information System (INIS)

    Beck, A.; Caras, I.; Piestum, S.; Sheli, E.; Melamud, Y.; Berant, S.; Kadmon, Y.; Tirosh, D.

    1999-01-01

    Spatial distribution measurement of radioactive sources is a routine task in the nuclear industry. The precision of each measurement depends upon the specific application. However, the technological edge of this precision is motivated by the production of standards for calibration. Within this definition, the most demanding field is the calibration of standards for medical equipment. In this paper, a semi-empirical method for controlling the measurement precision is demonstrated, using a relatively simple laboratory apparatus. The spatial distribution of the source radioactivity is measured as part of the quality assurance tests, during the production of flood sources. These sources are further used in calibration of medical gamma cameras. A typical flood source is a 40 × 60 cm² plate with an activity of 10 mCi (or more) of the 57Co isotope. The measurement set-up is based on a single NaI(Tl) scintillator with a photomultiplier tube, moving on an X-Y table which scans the flood source. In this application the source is required to have a uniform activity distribution over its surface.

  20. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.

  1. Galactic distribution of X-ray burst sources

    International Nuclear Information System (INIS)

    Lewin, W.H.G.; Hoffman, J.A.; Doty, J.; Clark, G.W.; Swank, J.H.; Becker, R.H.; Pravdo, S.H.; Serlemitsos, P.J.

    1977-01-01

    It is stated that 18 X-ray burst sources have been observed to date, applying the following definition for these bursts: rise times of less than a few seconds, durations of seconds to minutes, and recurrence in some regular pattern. If single burst events that meet the criteria of rise time and duration, but not recurrence, are included, an additional seven sources can be added. A sky map is shown indicating their positions. The sources are spread along the galactic equator and cluster near low galactic longitudes, and their distribution is different from that of the observed globular clusters. Observations based on the SAS-3 X-ray observatory studies and the Goddard X-ray Spectroscopy Experiment on OSO-8 are described. The distribution of the sources is examined and the effect of uneven sky exposure on the observed distribution is evaluated. It has been suggested that the bursts are perhaps produced by remnants of disrupted globular clusters and specifically supermassive black holes. This would imply the existence of a new class of unknown objects, and at present is merely an ad hoc method of relating the burst sources to globular clusters. (U.K.)

  2. Assessment of subchannel code ASSERT-PV for flow-distribution predictions

    International Nuclear Information System (INIS)

    Nava-Dominguez, A.; Rao, Y.F.; Waddington, G.M.

    2014-01-01

    Highlights: • Assessment of the subchannel code ASSERT-PV 3.2 for the prediction of flow distribution. • Open literature and in-house experimental data to quantify ASSERT-PV predictions. • Model changes assessed against vertical and horizontal flow experiments. • Improvement of flow-distribution predictions under CANDU-relevant conditions. - Abstract: This paper reports an assessment of the recently released subchannel code ASSERT-PV 3.2 for the prediction of flow-distribution in fuel bundles, including subchannel void fraction, quality and mass fluxes. Experimental data from open literature and from in-house tests are used to assess the flow-distribution models in ASSERT-PV 3.2. The prediction statistics using the recommended model set of ASSERT-PV 3.2 are compared to those from previous code versions. Separate-effects sensitivity studies are performed to quantify the contribution of each flow-distribution model change or enhancement to the improvement in flow-distribution prediction. The assessment demonstrates significant improvement in the prediction of flow-distribution in horizontal fuel channels containing CANDU bundles

  4. Moving Towards Dynamic Ocean Management: How Well Do Modeled Ocean Products Predict Species Distributions?

    Directory of Open Access Journals (Sweden)

    Elizabeth A. Becker

    2016-02-01

    Species distribution models are now widely used in conservation and management to predict suitable habitat for protected marine species. The primary sources of dynamic habitat data have been in situ and remotely sensed oceanic variables (both are considered “measured data”), but now ocean models can provide historical estimates and forecast predictions of relevant habitat variables such as temperature, salinity, and mixed layer depth. To assess the performance of modeled ocean data in species distribution models, we present a case study for cetaceans that compares models based on output from a data assimilative implementation of the Regional Ocean Modeling System (ROMS) to those based on measured data. Specifically, we used seven years of cetacean line-transect survey data collected between 1991 and 2009 to develop predictive habitat-based models of cetacean density for 11 species in the California Current Ecosystem. Two different generalized additive models were compared: one built with a full suite of ROMS output and another built with a full suite of measured data. Model performance was assessed using the percentage of explained deviance, root mean squared error (RMSE), observed-to-predicted density ratios, and visual inspection of predicted and observed distributions. Predicted distribution patterns were similar for models using ROMS output and measured data, and showed good concordance between observed sightings and model predictions. Quantitative measures of predictive ability were also similar between model types, and RMSE values were almost identical. The overall demonstrated success of the ROMS-based models opens new opportunities for dynamic species management and biodiversity monitoring because ROMS output is available in near real time and can be forecast.

  5. Distributed power sources for Mars colonization

    International Nuclear Information System (INIS)

    Miley, George H.; Shaban, Yasser

    2003-01-01

    One of the fundamental needs for Mars colonization is an abundant source of energy. The total energy system will probably use a mixture of sources based on solar energy, fuel cells, and nuclear energy. Here we concentrate on the possibility of developing a distributed system employing several unique new types of nuclear energy sources, specifically small fusion devices using inertial electrostatic confinement and portable 'battery type' proton reaction cells

  6. Distributed coding of multiview sparse sources with joint recovery

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Deligiannis, Nikos; Forchhammer, Søren

    2016-01-01

    In support of applications involving multiview sources in distributed object recognition using lightweight cameras, we propose a new method for the distributed coding of sparse sources as visual descriptor histograms extracted from multiview images. The problem is challenging due to the computati...... transform (SIFT) descriptors extracted from multiview images shows that our method leads to bit-rate saving of up to 43% compared to the state-of-the-art distributed compressed sensing method with independent encoding of the sources....

  7. Activity distribution of a cobalt-60 teletherapy source

    International Nuclear Information System (INIS)

    Jaffray, D.A.; Munro, P.; Battista, J.J.; Fenster, A.

    1991-01-01

    In the course of quantifying the effect of radiation source size on the spatial resolution of portal images, a concentric ring structure in the activity distribution of a Cobalt-60 teletherapy source has been observed. The activity distribution was measured using a strip integral technique and confirmed independently by a contact radiograph of an identical but inactive source replica. These two techniques suggested that this concentric ring structure is due to the packing configuration of the small 60Co pellets that constitute the source. The source modulation transfer function (MTF) showed that this ring structure has a negligible influence on the spatial resolution of therapy images when compared to the effect of the large size of the 60Co source

  8. Spatial Regression and Prediction of Water Quality in a Watershed with Complex Pollution Sources.

    Science.gov (United States)

    Yang, Xiaoying; Liu, Qun; Luo, Xingzhang; Zheng, Zheng

    2017-08-16

    Fast economic development, burgeoning population growth, and rapid urbanization have led to complex pollution sources contributing to water quality deterioration simultaneously in many developing countries including China. This paper explored the use of spatial regression to evaluate the impacts of watershed characteristics on ambient total nitrogen (TN) concentration in a heavily polluted watershed and make predictions across the region. Regression results have confirmed the substantial impact on TN concentration by a variety of point and non-point pollution sources. In addition, spatial regression has yielded better performance than ordinary regression in predicting TN concentrations. Due to its best performance in cross-validation, the river distance based spatial regression model was used to predict TN concentrations across the watershed. The prediction results have revealed a distinct pattern in the spatial distribution of TN concentrations and identified three critical sub-regions in priority for reducing TN loads. Our study results have indicated that spatial regression could potentially serve as an effective tool to facilitate water pollution control in watersheds under diverse physical and socio-economical conditions.
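
    The general idea can be sketched by contrasting ordinary least squares with generalized least squares under a spatially correlated error covariance. The block below uses an exponential covariance in Euclidean distance purely for illustration; the study's best-performing model used river distance, and the sites, covariates, and covariance parameters here are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic monitoring sites: coordinates (km), two watershed covariates,
# and log TN concentration with spatially correlated errors.
n = 80
coords = rng.uniform(0, 100, size=(n, 2))
X = np.column_stack([np.ones(n), rng.uniform(0, 1, n), rng.uniform(0, 1, n)])
dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
true_cov = 0.2 * np.exp(-dists / 20.0)                     # exponential covariance
errors = rng.multivariate_normal(np.zeros(n), true_cov)
y = X @ np.array([0.5, 1.2, -0.8]) + errors

# Ordinary least squares (ignores spatial correlation).
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Generalized least squares with an assumed exponential covariance in distance
# (river distance would replace Euclidean distance in the study's best model).
C = 0.2 * np.exp(-dists / 20.0) + 1e-6 * np.eye(n)
Ci = np.linalg.inv(C)
beta_gls = np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)

print("OLS coefficients:", beta_ols.round(3))
print("GLS coefficients:", beta_gls.round(3))
```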

  9. Gas source localization and gas distribution mapping with a micro-drone

    International Nuclear Information System (INIS)

    Neumann, Patrick P.

    2013-01-01

    uses gas and wind measurements to reason about the trajectory of a gas patch since it was released by the gas source until it reaches the measurement position of the micro-drone. Because of the chaotic nature of wind, an uncertainty about the wind direction has to be considered in the reconstruction process, which extends this trajectory to a patch path envelope (PPE). In general, the PPE describes the envelope of an area which the gas patch has passed with high probability. Then, the weights of the particles are updated based on the PPE. Given a uniform wind field over the search space and a single gas source, the reconstruction of multiple trajectories at different measurement locations using sufficient gas and wind measurements can lead to an accurate estimate of the gas source location, whose distance to the true source location is used as the main performance criterion. Simulations and real-world experiments are used to validate the proposed method. The aspect of environmental monitoring with a micro-drone is also discussed. Two different sampling approaches are suggested in order to address this problem. One method is the use of a predefined sweeping trajectory to explore the target area with the micro-drone in real-world gas distribution mapping experiments. As an alternative sampling approach an adaptive strategy is presented, which suggests next sampling points based on an artificial potential field to direct the micro-drone towards areas of high predictive mean and high predictive variance, while maximizing the coverage area. The purpose of the sensor planning component is to reduce the time that is necessary to converge to the final gas distribution model or to reliably identify important parameters of the distribution such as areas of high concentration. It is demonstrated that gas distribution models can provide an accurate estimate of the location of stationary gas sources. These strategies have been successfully tested in a variety of real

  10. Gas source localization and gas distribution mapping with a micro-drone

    Energy Technology Data Exchange (ETDEWEB)

    Neumann, Patrick P.

    2013-07-01

    -based GSL algorithm uses gas and wind measurements to reason about the trajectory of a gas patch since it was released by the gas source until it reaches the measurement position of the micro-drone. Because of the chaotic nature of wind, an uncertainty about the wind direction has to be considered in the reconstruction process, which extends this trajectory to a patch path envelope (PPE). In general, the PPE describes the envelope of an area which the gas patch has passed with high probability. Then, the weights of the particles are updated based on the PPE. Given a uniform wind field over the search space and a single gas source, the reconstruction of multiple trajectories at different measurement locations using sufficient gas and wind measurements can lead to an accurate estimate of the gas source location, whose distance to the true source location is used as the main performance criterion. Simulations and real-world experiments are used to validate the proposed method. The aspect of environmental monitoring with a micro-drone is also discussed. Two different sampling approaches are suggested in order to address this problem. One method is the use of a predefined sweeping trajectory to explore the target area with the micro-drone in real-world gas distribution mapping experiments. As an alternative sampling approach an adaptive strategy is presented, which suggests next sampling points based on an artificial potential field to direct the micro-drone towards areas of high predictive mean and high predictive variance, while maximizing the coverage area. The purpose of the sensor planning component is to reduce the time that is necessary to converge to the final gas distribution model or to reliably identify important parameters of the distribution such as areas of high concentration. It is demonstrated that gas distribution models can provide an accurate estimate of the location of stationary gas sources. These strategies have been successfully tested in a variety of real

  12. Microseism Source Distribution Observed from Ireland

    Science.gov (United States)

    Craig, David; Bean, Chris; Donne, Sarah; Le Pape, Florian; Möllhoff, Martin

    2017-04-01

    Ocean generated microseisms (OGM) are recorded globally with similar spectral features observed everywhere. The generation mechanism for OGM and their subsequent propagation to continental regions has led to their use as a proxy for sea-state characteristics. Also, many modern seismological methods make use of OGM signals. For example, the Earth's crust and upper mantle can be imaged using “ambient noise tomography”. For many of these methods an understanding of the source distribution is necessary to properly interpret the results. OGM recorded on near-coastal seismometers are known to be related to the local ocean wavefield. However, contributions from more distant sources may also be present. This is significant for studies attempting to use OGM as a proxy for sea-state characteristics such as significant wave height. Ireland has a highly energetic ocean wave climate and is close to one of the major source regions for OGM. This provides an ideal location to study an OGM source region in detail. Here we present the source distribution observed from seismic arrays in Ireland. The region is shown to consist of several individual source areas. These source areas show some frequency dependence and generally occur at or near the continental shelf edge. We also show some preliminary results from an offshore OBS network to the north-west of Ireland. The OBS network includes instruments on either side of the shelf and should help interpret the array observations.

  13. Computer program for source distribution process in radiation facility

    International Nuclear Information System (INIS)

    Al-Kassiri, H.; Abdul Ghani, B.

    2007-08-01

    A computer simulation of dose distribution, written in Visual Basic, has been carried out according to the arrangement and activities of Co-60 sources. The program provides the dose distribution in treated products depending on the product density and the desired dose, and is useful for optimizing the source distribution during the loading process. There is good agreement between the data calculated by the program and experimental data. (Author)
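
    A much-simplified sketch of what such a dose-distribution calculation might look like is given below: a point-kernel sum over source pencils with inverse-square fall-off and straight-line exponential attenuation in the product (no buildup factor, no scatter). The geometry, activities, and product density are purely illustrative assumptions, not the program described above.

```python
import numpy as np

# Approximate mass attenuation coefficient of water at ~1.25 MeV (Co-60), cm^2/g.
MU_RHO = 0.0573

def dose_rate(point, sources, activities, density):
    """Relative dose rate at `point` (cm) from point sources of given activities (Ci)."""
    point = np.asarray(point, dtype=float)
    total = 0.0
    for src, act in zip(sources, activities):
        r = np.linalg.norm(point - np.asarray(src, dtype=float))
        attenuation = np.exp(-MU_RHO * density * r)   # product assumed along the whole path
        total += act * attenuation / r**2
    return total

# Source rack: four pencils along a vertical line (illustrative arrangement only).
sources = [(0.0, 0.0, z) for z in (10.0, 20.0, 30.0, 40.0)]
activities = [8.0, 10.0, 10.0, 8.0]

# Relative dose profile on a horizontal line through a product of density 0.3 g/cm^3.
for x in np.linspace(20.0, 80.0, 4):
    d = dose_rate((x, 0.0, 25.0), sources, activities, density=0.3)
    print(f"x = {x:5.1f} cm  relative dose rate = {d:.4f}")
```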

  14. Comments on the Redshift Distribution of 44,200 SDSS Quasars: Evidence for Predicted Preferred Redshifts?

    OpenAIRE

    Bell, M. B.

    2004-01-01

    A Sloan Digital Sky Survey (SDSS) source sample containing 44,200 quasar redshifts is examined. Although arguments have been put forth to explain some of the structure observed in the redshift distribution, it is argued here that this structure may just as easily be explained by the presence of previously predicted preferred redshifts.

  15. Radiation Source Mapping with Bayesian Inverse Methods

    Science.gov (United States)

    Hykes, Joshua Michael

    We present a method to map the spectral and spatial distributions of radioactive sources using a small number of detectors. Locating and identifying radioactive materials is important for border monitoring, accounting for special nuclear material in processing facilities, and in clean-up operations. Most methods to analyze these problems make restrictive assumptions about the distribution of the source. In contrast, the source-mapping method presented here allows an arbitrary three-dimensional distribution in space and a flexible group and gamma peak distribution in energy. To apply the method, the system's geometry and materials must be known. A probabilistic Bayesian approach is used to solve the resulting inverse problem (IP) since the system of equations is ill-posed. The probabilistic approach also provides estimates of the confidence in the final source map prediction. A set of adjoint flux, discrete ordinates solutions, obtained in this work by the Denovo code, are required to efficiently compute detector responses from a candidate source distribution. These adjoint fluxes are then used to form the linear model to map the state space to the response space. The test for the method is simultaneously locating a set of 137Cs and 60Co gamma sources in an empty room. This test problem is solved using synthetic measurements generated by a Monte Carlo (MCNP) model and using experimental measurements that we collected for this purpose. With the synthetic data, the predicted source distributions identified the locations of the sources to within tens of centimeters, in a room with an approximately four-by-four meter floor plan. Most of the predicted source intensities were within a factor of ten of their true value. The chi-square value of the predicted source was within a factor of five from the expected value based on the number of measurements employed. With a favorable uniform initial guess, the predicted source map was nearly identical to the true distribution
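
    A minimal sketch of the Gaussian-linear version of this inversion is shown below: detector responses d = A s, a Gaussian prior on the source, and Gaussian measurement noise give a closed-form posterior mean and covariance. In the work described above the sensitivity matrix comes from adjoint Denovo flux solutions; here A is random, the prior and noise levels are assumptions, and positivity of the source is not enforced.

```python
import numpy as np

rng = np.random.default_rng(8)

# d = A s + noise, where row i of A holds the adjoint-flux-derived sensitivity
# of detector i to each source voxel (A is random here, purely for illustration).
n_detectors, n_voxels = 12, 40
A = rng.uniform(0.0, 1.0, size=(n_detectors, n_voxels)) / n_voxels

s_true = np.zeros(n_voxels)
s_true[[5, 23]] = [80.0, 40.0]                 # two point-like sources
noise_std = 0.02
d = A @ s_true + rng.normal(0.0, noise_std, n_detectors)

# Gaussian prior on the source (mean s0, covariance P) and Gaussian noise (R):
#   posterior covariance  Sigma = (A^T R^-1 A + P^-1)^-1
#   posterior mean        mu    = Sigma (A^T R^-1 d + P^-1 s0)
s0 = np.full(n_voxels, 1.0)
P_inv = np.eye(n_voxels) / 50.0**2
R_inv = np.eye(n_detectors) / noise_std**2

Sigma = np.linalg.inv(A.T @ R_inv @ A + P_inv)
mu = Sigma @ (A.T @ R_inv @ d + P_inv @ s0)
std = np.sqrt(np.diag(Sigma))                  # per-voxel confidence estimate

# With far fewer detectors than voxels the problem is ill-posed, so the result
# depends strongly on the prior; report the posterior at the true source voxels.
print("posterior mean at voxels [5, 23]:", mu[[5, 23]].round(1))
print("posterior std  at voxels [5, 23]:", std[[5, 23]].round(1))
```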

  16. The dose distribution surrounding 192Ir and 137Cs seed sources

    International Nuclear Information System (INIS)

    Thomason, C.; Mackie, T.R.; Wisconsin Univ., Madison, WI; Lindstrom, M.J.; Higgins, P.D.

    1991-01-01

    Dose distributions in water were measured using LiF thermoluminescent dosemeters for 192Ir seed sources with stainless steel and with platinum encapsulation to determine the effect of differing encapsulation. Dose distribution was measured for a 137Cs seed source. In addition, dose distributions surrounding these sources were calculated using the EGS4 Monte Carlo code and were compared to measured data. The two methods are in good agreement for all three sources. Tables are given describing dose distribution surrounding each source as a function of distance and angle. Specific dose constants were also determined from results of Monte Carlo simulation. This work confirms the use of the EGS4 Monte Carlo code in modelling 192Ir and 137Cs seed sources to obtain brachytherapy dose distributions. (author)

  17. A Heuristic Approach to Distributed Generation Source Allocation for Electrical Power Distribution Systems

    Directory of Open Access Journals (Sweden)

    M. Sharma

    2010-12-01

    The recent trends in electrical power distribution system operation and management are aimed at improving system conditions in order to render good service to the customer. The reforms in the distribution sector have given major scope for the employment of distributed generation (DG) resources, which can boost system performance. This paper proposes a heuristic technique for the allocation of a distributed generation source in a distribution system. The allocation is determined based on the overall improvement in network performance parameters such as reduction in system losses, improvement in voltage stability, and improvement in voltage profile. The proposed Network Performance Enhancement Index (NPEI), along with the heuristic rules, facilitates determination of a feasible location and the corresponding capacity of the DG source. The developed approach is tested on different test systems to ascertain its effectiveness.
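
    The ranking step of such a heuristic can be sketched as below. The actual NPEI definition is in the paper; the composite index, its weights, and the per-bus load-flow results used here are hypothetical placeholders intended only to illustrate how candidate DG locations might be scored and compared.

```python
def npei(loss_with, loss_without, vmin_with, vmin_without, weights=(0.6, 0.4)):
    """Hypothetical composite index (bigger is better).

    Combines relative loss reduction with worst-bus voltage improvement.
    The real NPEI is defined in the paper; terms and weights here are placeholders.
    """
    w_loss, w_volt = weights
    loss_term = (loss_without - loss_with) / loss_without
    volt_term = (vmin_with - vmin_without) / vmin_without
    return w_loss * loss_term + w_volt * volt_term

# Precomputed load-flow results with a DG unit placed at each candidate bus
# (illustrative numbers, e.g. from a small radial feeder study).
candidate_buses = [6, 14, 25, 30]
loss_without = 210.0          # kW, base-case feeder losses
vmin_without = 0.91           # p.u., base-case minimum bus voltage
loss_with = {6: 150.0, 14: 135.0, 25: 160.0, 30: 170.0}
vmin_with = {6: 0.94, 14: 0.95, 25: 0.93, 30: 0.92}

scores = {b: npei(loss_with[b], loss_without, vmin_with[b], vmin_without)
          for b in candidate_buses}
best = max(scores, key=scores.get)
print({b: round(s, 3) for b, s in scores.items()})
print(f"heuristic picks bus {best} for the DG unit")
```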

  18. Model predictive control for Z-source power converter

    DEFF Research Database (Denmark)

    Mo, W.; Loh, P.C.; Blaabjerg, Frede

    2011-01-01

    This paper presents Model Predictive Control (MPC) of an impedance-source (commonly known as Z-source) power converter. Output voltage control and current control for the Z-source inverter are analyzed and simulated. With MPC's ability to regulate multiple system variables, load current and voltage......

  19. Experimental Validation of Energy Resources Integration in Microgrids via Distributed Predictive Control

    DEFF Research Database (Denmark)

    Mantovani, Giancarlo; Costanzo, Giuseppe Tommaso; Marinelli, Mattia

    2014-01-01

    This paper presents an innovative control scheme for the management of energy consumption in commercial buildings with local energy production, such as photovoltaic panels or a wind turbine, and an energy storage unit. The presented scheme is based on distributed model predictive controllers, which...... sources, a vanadium redox battery system, resistive load, and a point of common coupling to the national grid. Several experiments are carried out to assess the performance of the control scheme in managing local energy production and consumption.

  20. A joint calibration model for combining predictive distributions

    Directory of Open Access Journals (Sweden)

    Patrizia Agati

    2013-05-01

    In many research fields, as for example in probabilistic weather forecasting, valuable predictive information about a future random phenomenon may come from several, possibly heterogeneous, sources. Forecast combining methods have been developed over the years in order to deal with ensembles of sources: the aim is to combine several predictions in such a way as to improve forecast accuracy and reduce the risk of bad forecasts. In this context, we propose the use of a Bayesian approach to information combining, which consists of treating the predictive probability density functions (pdfs) from the individual ensemble members as data in a Bayesian updating problem. The likelihood function is shown to be proportional to the product of the pdfs, adjusted by a joint “calibration function” describing the predicting skill of the sources (Morris, 1977). In this paper, after rephrasing Morris' algorithm in a predictive context, we propose to model the calibration function in terms of bias, scale and correlation and to estimate its parameters according to the least squares criterion. The performance of our method is investigated and compared with that of Bayesian Model Averaging (Raftery, 2005) on simulated data.
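
    In symbols, the combining rule described above can be sketched as follows; the precise form of the calibration function c is the paper's (following Morris, 1977) and is only indicated generically here.

```latex
% Combining n predictive densities f_1,...,f_n for a quantity of interest theta:
% the updated (combined) density is proportional to the prior times the product
% of the individual densities, adjusted by a joint calibration function c(.)
% that encodes the sources' bias, scale and correlation.
\[
  p\bigl(\theta \mid f_1,\dots,f_n\bigr)
  \;\propto\;
  p(\theta)\, c\bigl(f_1,\dots,f_n \mid \theta\bigr) \prod_{i=1}^{n} f_i(\theta)
\]
```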

  1. Different Predictive Control Strategies for Active Load Management in Distributed Power Systems with High Penetration of Renewable Energy Sources

    DEFF Research Database (Denmark)

    Zong, Yi; Bindner, Henrik W.; Gehrke, Oliver

    2013-01-01

    In order to achieve a Danish energy supply based on 100% renewable energy from combinations of wind, biomass, wave and solar power in 2050 and to cover 50% of the Danish electricity consumption by wind power in 2020, it requires more renewable energy in buildings and industries (e.g. cold stores......, greenhouses, etc.), and to coordinate the management of large numbers of distributed energy resources with the smart grid solution. This paper presents different predictive control (Genetic Algorithm-based and Model Predictive Control-based) strategies that schedule controlled loads in the industrial...... and residential sectors, based on dynamic power price and weather forecast, considering users’ comfort settings to meet an optimization objective, such as maximum profit or minimum energy consumption. Some field tests were carried out on a facility for intelligent, active and distributed power systems, which...

  2. Escherichia coli at Ohio Bathing Beaches--Distribution, Sources, Wastewater Indicators, and Predictive Modeling

    Science.gov (United States)

    Francy, Donna S.; Gifford, Amie M.; Darner, Robert A.

    2003-01-01

    Results of studies during the recreational seasons of 2000 and 2001 strengthen the science that supports monitoring of our Nation's beaches. Water and sediment samples were collected and analyzed for concentrations of Escherichia coli (E. coli). Ancillary water-quality and environmental data were collected or compiled to determine their relation to E. coli concentrations. Data were collected at three Lake Erie urban beaches (Edgewater, Villa Angela, and Huntington), two Lake Erie beaches in a less populated area (Mentor Headlands and Fairport Harbor), and one inland-lake beach (Mosquito Lake). The distribution of E. coli in water and sediments within the bathing area, outside the bathing area, and near the swash zone was investigated at the three Lake Erie urban beaches and at Mosquito Lake. (The swash zone is the zone that is alternately covered and exposed by waves.) Lake-bottom sediments from outside the bathing area were not significant deposition areas for E. coli. In contrast, interstitial water and subsurface sediments from near the swash zone were enriched with E. coli. For example, E. coli concentrations were as high as 100,000 colonies per 100 milliliters in some interstitial waters. Although there are no standards for E. coli in swash-zone materials, the high concentrations found at some locations warrant concern for public health. Studies were done at Mosquito Lake to identify sources of fecal contamination to the lake and bathing beach. Escherichia coli concentrations decreased with distance from a suspected source of fecal contamination that is north of the beach but increased at the bathing beach. This evidence indicated that elevated E. coli concentrations at the bathing beach are of local origin rather than from transport of bacteria from sites to the north. Samples collected from the three Lake Erie urban beaches and Mosquito Lake were analyzed to determine whether wastewater indicators could be used as surrogates for E. coli at bathing beaches

  3. The Competition Between a Localised and Distributed Source of Buoyancy

    Science.gov (United States)

    Partridge, Jamie; Linden, Paul

    2012-11-01

    We propose a new mathematical model to study the competition between localised and distributed sources of buoyancy within a naturally ventilated filling box. The main controlling parameters in this configuration are the buoyancy fluxes of the distributed and local source, specifically their ratio Ψ. The steady state dynamics of the flow are heavily dependent on this parameter. For large Ψ, where the distributed source dominates, we find the space becomes well mixed, as expected if driven by a distributed source alone. Conversely, for small Ψ we find the space reaches a stable two-layer stratification. This is analogous to the classical case of a purely local source, but here the lower layer is buoyant compared to the ambient, due to the constant flux of buoyancy emanating from the distributed source. The ventilation flow rate, buoyancy of the layers and also the location of the interface height, which separates the two-layer stratification, are obtainable from the model. To validate the theoretical model, small-scale laboratory experiments were carried out. Water was used as the working medium with buoyancy being driven directly by temperature differences. Theoretical results were compared with experimental data and overall good agreement was found. A CASE award project with Arup.

  4. Using open source computational tools for predicting human metabolic stability and additional absorption, distribution, metabolism, excretion, and toxicity properties.

    Science.gov (United States)

    Gupta, Rishi R; Gifford, Eric M; Liston, Ted; Waller, Chris L; Hohman, Moses; Bunin, Barry A; Ekins, Sean

    2010-11-01

    Ligand-based computational models could be more readily shared between researchers and organizations if they were generated with open source molecular descriptors [e.g., chemistry development kit (CDK)] and modeling algorithms, because this would negate the requirement for proprietary commercial software. We initially evaluated open source descriptors and model building algorithms using a training set of approximately 50,000 molecules and a test set of approximately 25,000 molecules with human liver microsomal metabolic stability data. A C5.0 decision tree model demonstrated that CDK descriptors together with a set of Smiles Arbitrary Target Specification (SMARTS) keys had good statistics [κ = 0.43, sensitivity = 0.57, specificity = 0.91, and positive predicted value (PPV) = 0.64], equivalent to those of models built with commercial Molecular Operating Environment 2D (MOE2D) and the same set of SMARTS keys (κ = 0.43, sensitivity = 0.58, specificity = 0.91, and PPV = 0.63). Extending the dataset to ∼193,000 molecules and generating a continuous model using Cubist with a combination of CDK and SMARTS keys or MOE2D and SMARTS keys confirmed this observation. When the continuous predictions and actual values were binned to get a categorical score we observed a similar κ statistic (0.42). The same combination of descriptor set and modeling method was applied to passive permeability and P-glycoprotein efflux data with similar model testing statistics. In summary, open source tools demonstrated predictive results comparable to those of commercial software with attendant cost savings. We discuss the advantages and disadvantages of open source descriptors and the opportunity for their use as a tool for organizations to share data precompetitively, avoiding repetition and assisting drug discovery.
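
    The modelling workflow described above (open-source descriptors feeding a decision tree classifier) can be sketched as follows. This is only an illustrative stand-in: RDKit descriptors replace the CDK descriptors and SMARTS keys used in the paper, a CART tree replaces C5.0, and the molecules and stability labels are invented.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import cohen_kappa_score, recall_score

# Tiny illustrative set: SMILES strings and binary stability labels (1 = stable).
smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC",
          "CC(C)Cc1ccc(cc1)C(C)C(=O)O"]
labels = np.array([1, 1, 0, 0, 1])

def featurize(smi):
    """A handful of simple 2D descriptors standing in for the CDK/SMARTS sets."""
    mol = Chem.MolFromSmiles(smi)
    return [Descriptors.MolWt(mol),
            Descriptors.MolLogP(mol),
            Descriptors.TPSA(mol),
            Descriptors.NumRotatableBonds(mol)]

X = np.array([featurize(s) for s in smiles])

# C5.0 is not available in scikit-learn; a CART decision tree is used as a stand-in.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)

# Evaluated on the training molecules only, purely to show the metric calls.
pred = clf.predict(X)
print("kappa:", cohen_kappa_score(labels, pred),
      "sensitivity:", recall_score(labels, pred))
```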

  5. Economic Model Predictive Control for Large-Scale and Distributed Energy Systems

    DEFF Research Database (Denmark)

    Standardi, Laura

    In this thesis, we consider control strategies for large and distributed energy systems that are important for the implementation of smart grid technologies. An electrical grid has to ensure reliability and avoid long-term interruptions in the power supply. Moreover, the share of Renewable Energy Sources (RESs) in the smart grids is increasing. These energy sources bring uncertainty to the production due to their fluctuations. Hence, smart grids need suitable control systems that are able to continuously balance power production and consumption. We apply the Economic Model Predictive Control (EMPC) strategy to optimise the economic performances of the energy systems and to balance the power production and consumption. In the case of large-scale energy systems, the electrical grid connects a high number of power units. Because of this, the related control problem involves a high number of variables...

  6. Popularity Prediction Tool for ATLAS Distributed Data Management

    Science.gov (United States)

    Beermann, T.; Maettig, P.; Stewart, G.; Lassnig, M.; Garonne, V.; Barisits, M.; Vigne, R.; Serfon, C.; Goossens, L.; Nairz, A.; Molfetas, A.; Atlas Collaboration

    2014-06-01

    This paper describes a popularity prediction tool for data-intensive data management systems, such as ATLAS distributed data management (DDM). It is fed by the DDM popularity system, which produces historical reports about ATLAS data usage, providing information about files, datasets, users and sites where data was accessed. The tool described in this contribution uses this historical information to make a prediction about the future popularity of data. It finds trends in the usage of data using a set of neural networks and a set of input parameters and predicts the number of accesses in the near-term future. This information can then be used in a second step to improve the distribution of replicas at sites, taking into account the cost of creating new replicas (bandwidth and load on the storage system) compared to the gain of having new ones (faster access of data for analysis). To evaluate the benefit of the redistribution, a grid simulator is introduced that is able to replay real workloads on different data distributions. This article describes the popularity prediction method and the simulator that is used to evaluate the redistribution.
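
    The core idea, learning a mapping from historical access counts to near-term future accesses, can be sketched as follows. This is not the ATLAS tool itself: a small scikit-learn network stands in for the paper's neural networks, and the weekly access counts are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic weekly access counts for one dataset (stand-in for DDM popularity reports).
weeks = np.arange(104)
accesses = 50 + 30 * np.sin(weeks / 8.0) + rng.poisson(5, size=weeks.size)

# Sliding window: use the last 8 weeks of accesses to predict the next week.
window = 8
X = np.array([accesses[i:i + window] for i in range(len(accesses) - window)])
y = accesses[window:]

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
model.fit(X[:-10], y[:-10])            # train on all but the last 10 weeks
forecast = model.predict(X[-10:])      # predict the held-out weeks
print(np.round(forecast, 1))
```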

  7. Popularity prediction tool for ATLAS distributed data management

    International Nuclear Information System (INIS)

    Beermann, T; Maettig, P; Stewart, G; Lassnig, M; Garonne, V; Barisits, M; Vigne, R; Serfon, C; Goossens, L; Nairz, A; Molfetas, A

    2014-01-01

    This paper describes a popularity prediction tool for data-intensive data management systems, such as ATLAS distributed data management (DDM). It is fed by the DDM popularity system, which produces historical reports about ATLAS data usage, providing information about files, datasets, users and sites where data was accessed. The tool described in this contribution uses this historical information to make a prediction about the future popularity of data. It finds trends in the usage of data using a set of neural networks and a set of input parameters and predicts the number of accesses in the near-term future. This information can then be used in a second step to improve the distribution of replicas at sites, taking into account the cost of creating new replicas (bandwidth and load on the storage system) compared to the gain of having new ones (faster access of data for analysis). To evaluate the benefit of the redistribution, a grid simulator is introduced that is able to replay real workloads on different data distributions. This article describes the popularity prediction method and the simulator that is used to evaluate the redistribution.

  8. High-Lift Propeller Noise Prediction for a Distributed Electric Propulsion Flight Demonstrator

    Science.gov (United States)

    Nark, Douglas M.; Buning, Pieter G.; Jones, William T.; Derlaga, Joseph M.

    2017-01-01

    Over the past several years, the use of electric propulsion technologies within aircraft design has received increased attention. The characteristics of electric propulsion systems open up new areas of the aircraft design space, such as the use of distributed electric propulsion (DEP). In this approach, electric motors are placed in many different locations to achieve increased efficiency through integration of the propulsion system with the airframe. Under a project called Scalable Convergent Electric Propulsion Technology Operations Research (SCEPTOR), NASA is designing a flight demonstrator aircraft that employs many "high-lift propellers" distributed upstream of the wing leading edge and two cruise propellers (one at each wingtip). As the high-lift propellers are operational at low flight speeds (take-off/approach flight conditions), the impact of the DEP configuration on the aircraft noise signature is also an important design consideration. This paper describes efforts toward the development of a multi-fidelity aerodynamic and acoustic methodology for DEP high-lift propeller aeroacoustic modeling. Specifically, the PAS, OVERFLOW 2, and FUN3D codes are used to predict the aerodynamic performance of a baseline high-lift propeller blade set. Blade surface pressure results from the aerodynamic predictions are then used with PSU-WOPWOP and the F1A module of the NASA second generation Aircraft NOise Prediction Program to predict the isolated high-lift propeller noise source. Comparisons of predictions indicate that general trends related to angle of attack effects at the blade passage frequency are captured well with the various codes. Results for higher harmonics of the blade passage frequency appear consistent for the CFD based methods. Conversely, evidence of the need for a study of the effects of increased azimuthal grid resolution on the PAS based results is indicated and will be pursued in future work. Overall, the results indicate that the computational

  9. The Impact of Source Distribution on Scalar Transport over Forested Hills

    Science.gov (United States)

    Ross, Andrew N.; Harman, Ian N.

    2015-08-01

    Numerical simulations of neutral flow over a two-dimensional, isolated, forested ridge are conducted to study the effects of scalar source distribution on scalar concentrations and fluxes over forested hills. Three different constant-flux sources are considered that span a range of idealized but ecologically important source distributions: a source at the ground, one uniformly distributed through the canopy, and one decaying with depth in the canopy. A fourth source type, where the in-canopy source depends on both the wind speed and the difference in concentration between the canopy and a reference concentration on the leaf, designed to mimic deposition, is also considered. The simulations show that the topographically-induced perturbations to the scalar concentration and fluxes are quantitatively dependent on the source distribution. The net impact is a balance of different processes affecting both advection and turbulent mixing, and can be significant even for moderate topography. Sources that have significant input in the deep canopy or at the ground exhibit a larger magnitude advection and turbulent flux-divergence terms in the canopy. The flows have identical velocity fields and so the differences are entirely due to the different tracer concentration fields resulting from the different source distributions. These in-canopy differences lead to larger spatial variations in above-canopy scalar fluxes for sources near the ground compared to cases where the source is predominantly located near the canopy top. Sensitivity tests show that the most significant impacts are often seen near to or slightly downstream of the flow separation or reattachment points within the canopy flow. The qualitative similarities to previous studies using periodic hills suggest that important processes occurring over isolated and periodic hills are not fundamentally different. The work has important implications for the interpretation of flux measurements over forests, even in

  10. Confronting species distribution model predictions with species functional traits.

    Science.gov (United States)

    Wittmann, Marion E; Barnes, Matthew A; Jerde, Christopher L; Jones, Lisa A; Lodge, David M

    2016-02-01

    Species distribution models are valuable tools in studies of biogeography, ecology, and climate change and have been used to inform conservation and ecosystem management. However, species distribution models typically incorporate only climatic variables and species presence data. Model development or validation rarely considers functional components of species traits or other types of biological data. We implemented a species distribution model (Maxent) to predict global climate habitat suitability for Grass Carp (Ctenopharyngodon idella). We then tested the relationship between the degree of climate habitat suitability predicted by Maxent and the individual growth rates of both wild (N = 17) and stocked (N = 51) Grass Carp populations using correlation analysis. The Grass Carp Maxent model accurately reflected the global occurrence data (AUC = 0.904). Observations of Grass Carp growth rate covered six continents and ranged from 0.19 to 20.1 g day⁻¹. Species distribution model predictions were correlated (r = 0.5, 95% CI (0.03, 0.79)) with observed growth rates for wild Grass Carp populations but were not correlated (r = -0.26, 95% CI (-0.5, 0.012)) with stocked populations. Further, a review of the literature indicates that the few studies for other species that have previously assessed the relationship between the degree of predicted climate habitat suitability and species functional traits have also discovered significant relationships. Thus, species distribution models may provide inferences beyond just where a species may occur, providing a useful tool to understand the linkage between species distributions and underlying biological mechanisms.

  11. Brightness distribution data on 2918 radio sources at 365 MHz

    International Nuclear Information System (INIS)

    Cotton, W.D.; Owen, F.N.; Ghigo, F.D.

    1975-01-01

    This paper is the second in a series describing the results of a program attempting to fit models of the brightness distribution to radio sources observed at 365 MHz with the Bandwidth Synthesis Interferometer (BSI) operated by the University of Texas Radio Astronomy Observatory. Results for a further 2918 radio sources are given. An unresolved model and three symmetric extended models with angular sizes in the range 10--70 arcsec were attempted for each radio source. In addition, for 348 sources for which other observations of brightness distribution are published, the reference to the observations and a brief description are included

  12. The dose distribution surrounding 192Ir and 137Cs seed sources

    Energy Technology Data Exchange (ETDEWEB)

    Thomason, C [Wisconsin Univ., Madison, WI (USA). Dept. of Medical Physics; Mackie, T R [Wisconsin Univ., Madison, WI (USA). Dept. of Medical Physics Wisconsin Univ., Madison, WI (USA). Dept. of Human Oncology; Lindstrom, M J [Wisconsin Univ., Madison, WI (USA). Biostatistics Center; Higgins, P D [Cleveland Clinic Foundation, OH (USA). Dept. of Radiation Oncology

    1991-04-01

    Dose distributions in water were measured using LiF thermoluminescent dosemeters for 192Ir seed sources with stainless steel and with platinum encapsulation to determine the effect of differing encapsulation. Dose distribution was measured for a 137Cs seed source. In addition, dose distributions surrounding these sources were calculated using the EGS4 Monte Carlo code and were compared to measured data. The two methods are in good agreement for all three sources. Tables are given describing dose distribution surrounding each source as a function of distance and angle. Specific dose constants were also determined from results of Monte Carlo simulation. This work confirms the use of the EGS4 Monte Carlo code in modelling 192Ir and 137Cs seed sources to obtain brachytherapy dose distributions. (author).

  13. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...

  14. Effect of source angular distribution on the evaluation of gamma-ray skyshine

    Energy Technology Data Exchange (ETDEWEB)

    Sheu, R.D.; Jiang, S.H. [Dept. of Engineering and System Science, National Tsing Hua Univ., Taiwan (China); Chang, B.J.; Chen, I.J. [Division of Health Physics, Inst. of Nuclear Energy Research, Taiwan (China)

    2000-03-01

    The effect of the angular distribution of the equivalent point source on the analysis of the skyshine dose rates was investigated in detail. The dedicated skyshine codes SKYDOSE and McSKY were revised to include the capability of dealing with an anisotropic source. It was found that replacing the cosine-distributed source with an isotropic source overestimates the skyshine dose rates for large roof-subtended angles and causes underestimation for small roof-subtended angles. For buildings with roof shielding, however, replacing the cosine-distributed source with an isotropic source always underestimates the skyshine dose rates. The skyshine dose rates from a volume source calculated by the dedicated skyshine code agree very well with those of the MCNP Monte Carlo calculation. (author)
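
    The geometric effect described above can be illustrated with a few lines of Monte Carlo sampling. This is only a sketch of the angular distributions themselves (not a SKYDOSE/McSKY calculation): it compares the fraction of emissions at polar angles beyond an assumed roof-subtended half-angle for an isotropic and a cosine-distributed source.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Polar angle (measured from the vertical) for emissions into the upper hemisphere.
u = rng.random(n)
theta_iso = np.degrees(np.arccos(u))            # isotropic: cos(theta) uniform on [0, 1]
theta_cos = np.degrees(np.arccos(np.sqrt(u)))   # cosine-distributed (Lambertian) source

# Fraction of photons emitted at angles larger than an assumed roof-subtended half-angle.
for half_angle in (30.0, 60.0):
    f_iso = np.mean(theta_iso > half_angle)
    f_cos = np.mean(theta_cos > half_angle)
    print(f"> {half_angle:.0f} deg:  isotropic {f_iso:.3f}  cosine {f_cos:.3f}")
```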

  15. Application of 'SPICE' to predict temperature distribution in heat pipes

    Energy Technology Data Exchange (ETDEWEB)

    Li, H M; Liu, Y; Damodaran, M [Nanyang Technological Univ., Singapore (SG). School of Mechanical and Production Engineering

    1991-11-01

    This article presents a new alternative approach to predict temperature distribution in heat pipes. In this method, temperature distribution in a heat pipe, modelled as an analogous electrical circuit, is predicted by applying SPICE, a general-purpose circuit simulation program. SPICE is used to simulate electrical circuit designs before the prototype is assembled. Useful predictions are obtained for heat pipes with and without adiabatic sections and for heat pipes with various evaporator and condenser lengths. Comparison of the predicted results with experiments demonstrates fairly good agreement. It is also shown how interdisciplinary developments could be used appropriately. (author).
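
    The thermal-electrical analogy used in the SPICE approach can be sketched with a simple nodal analysis: temperature plays the role of voltage, heat flow the role of current, and thermal resistances the role of resistors. The network topology and resistance values below are illustrative assumptions, not data from the article.

```python
import numpy as np

# Thermal-electrical analogy: temperature <-> voltage, heat flow <-> current,
# thermal resistance <-> electrical resistance. Values below are illustrative only.
Q = 50.0          # W, heat input at the evaporator wall
T_amb = 25.0      # degC, ambient (the circuit "ground" reached through R_sink)
R_evap = 0.10     # K/W, evaporator wall -> vapor core
R_cond = 0.12     # K/W, vapor core -> condenser wall
R_sink = 0.50     # K/W, condenser wall -> ambient

g1, g2, g3 = 1.0 / R_evap, 1.0 / R_cond, 1.0 / R_sink

# Nodal (Kirchhoff) equations, the same system a SPICE DC analysis would assemble.
# Nodes: 0 evaporator wall, 1 vapor core, 2 condenser wall.
G = np.array([[ g1,      -g1,      0.0],
              [-g1,  g1 + g2,     -g2],
              [0.0,      -g2, g2 + g3]])
rhs = np.array([Q, 0.0, g3 * T_amb])

T = np.linalg.solve(G, rhs)
print("evaporator wall %.1f C, vapor %.1f C, condenser wall %.1f C" % tuple(T))
```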

  16. Predictive Analytics for Coordinated Optimization in Distribution Systems

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Rui [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-04-13

    This talk will present NREL's work on developing predictive analytics that enable the optimal coordination of all the available resources in distribution systems to achieve the control objectives of system operators. Two projects will be presented: one focuses on developing short-term state forecasting-based optimal voltage regulation in distribution systems, and the other on actively engaging electricity consumers to benefit distribution system operations.

  17. On-line test of power distribution prediction system for boiling water reactors

    International Nuclear Information System (INIS)

    Nishizawa, Y.; Kiguchi, T.; Kobayashi, S.; Takumi, K.; Tanaka, H.; Tsutsumi, R.; Yokomi, M.

    1982-01-01

    A power distribution prediction system for boiling water reactors has been developed, and its on-line performance test has been carried out at an operating commercial reactor. This system predicts the power distribution or thermal margin in advance of control rod operations and core flow rate changes. This system consists of an on-line computer system, an operator's console with a color cathode-ray tube, and plant data input devices. The main functions of this system are present power distribution monitoring, power distribution prediction, and power-up trajectory prediction. The calculation method is based on a simplified nuclear thermal-hydraulic calculation, which is combined with a method of model identification to the actual reactor core state. It has been ascertained by the on-line test that the predicted power distribution (readings of the traversing in-core probe) agrees with the measured data within 6% root-mean-square. The computing time required for one prediction calculation step is less than or equal to 1.5 min on an HIDIC-80 on-line computer.

  18. A Predictive Distribution Model for Cooperative Braking System of an Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Hongqiang Guo

    2014-01-01

    Full Text Available A predictive distribution model for a series cooperative braking system of an electric vehicle is proposed, which can solve the real-time problem of the optimum braking force distribution. To obtain the predictive distribution model, three disciplines are first considered: the maximum regenerative energy recovery capability, the maximum generating efficiency and the optimum braking stability. An off-line process optimization stream is then designed, in which the optimal Latin hypercube design (Opt LHD) method and a radial basis function neural network (RBFNN) are utilized. In order to decouple the variables between different disciplines, a concurrent subspace design (CSD) algorithm is suggested. The established predictive distribution model is verified in a dynamic simulation. The off-line optimization results show that the proposed process optimization stream can improve the regenerative energy recovery efficiency and optimize the braking stability simultaneously. Further simulation tests demonstrate that the predictive distribution model can achieve high prediction accuracy and is very beneficial for the cooperative braking system.
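
    The off-line/on-line surrogate idea described above can be sketched as follows, using SciPy's Latin hypercube sampler and radial-basis-function interpolator as stand-ins for the Opt LHD method and the RBFNN; the braking-force objective function and its inputs are invented for illustration.

```python
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

# Off-line stage: sample the input space (vehicle speed, demanded deceleration)
# with a Latin hypercube design, standing in for the Opt LHD of the paper.
sampler = qmc.LatinHypercube(d=2, seed=0)
unit = sampler.random(n=80)
X = qmc.scale(unit, l_bounds=[5.0, 0.5], u_bounds=[120.0, 6.0])  # km/h, m/s^2

def regen_fraction(speed, decel):
    """Hypothetical optimal regenerative share of the total braking force."""
    return np.clip(0.9 * np.exp(-decel / 4.0) * (speed / 120.0) ** 0.3, 0.0, 1.0)

y = regen_fraction(X[:, 0], X[:, 1])

# Fit the RBF surrogate (stand-in for the RBFNN) once, off-line.
surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")

# On-line stage: the fitted surrogate is cheap enough to evaluate in real time.
query = np.array([[60.0, 2.0], [100.0, 4.5]])
print(np.round(surrogate(query), 3))
```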

  19. Using geomorphological variables to predict the spatial distribution of plant species in agricultural drainage networks.

    Science.gov (United States)

    Rudi, Gabrielle; Bailly, Jean-Stéphane; Vinatier, Fabrice

    2018-01-01

    To optimize ecosystem services provided by agricultural drainage networks (ditches) in headwater catchments, we need to manage the spatial distribution of plant species living in these networks. Geomorphological variables have been shown to be important predictors of plant distribution in other ecosystems because they control the water regime, the sediment deposition rates and the sun exposure in the ditches. Whether such variables may be used to predict plant distribution in agricultural drainage networks is unknown. We collected presence and absence data for 10 herbaceous plant species in a subset of a network of drainage ditches (35 km long) within a Mediterranean agricultural catchment. We simulated their spatial distribution with GLM and Maxent models using geomorphological variables and distance to natural lands and roads. Models were validated using k-fold cross-validation. We then compared the mean Area Under the Curve (AUC) values obtained for each model and other metrics issued from the confusion matrices between observed and predicted variables. Based on the results of all metrics, the models were efficient at predicting the distribution of seven species out of ten, confirming the relevance of geomorphological variables and distance to natural lands and roads in explaining the occurrence of plant species in this Mediterranean catchment. In particular, the importance of the landscape geomorphological variables, i.e. the geomorphological features encompassing a broad environment around the ditch, has been highlighted. This suggests that agro-ecological measures for managing ecosystem services provided by ditch plants should focus on the control of the hydrological and sedimentological connectivity at the catchment scale. For example, the density of the ditch network could be modified or the spatial distribution of vegetative filter strips used for sediment trapping could be optimized. In addition, the vegetative filter strips could constitute
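
    The presence/absence modelling and evaluation step can be sketched with a logistic-regression GLM and k-fold cross-validated AUC. The predictors, the response, and their relationship below are synthetic assumptions, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(42)
n = 300

# Synthetic predictors standing in for geomorphological variables and distances.
slope     = rng.uniform(0.0, 15.0, n)      # local slope (%)
upslope   = rng.lognormal(3.0, 1.0, n)     # upslope contributing area (m2)
dist_road = rng.uniform(0.0, 500.0, n)     # distance to roads (m)
X = np.column_stack([slope, upslope, dist_road])

# Synthetic presence/absence generated from an assumed response, for illustration.
logit = -1.0 + 0.15 * slope + 0.0005 * upslope - 0.004 * dist_road
presence = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Logistic-regression GLM with k-fold cross-validated AUC, as in the evaluation above.
glm = LogisticRegression(max_iter=1000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(glm, X, presence, cv=cv, scoring="roc_auc")
print("mean cross-validated AUC: %.2f" % auc.mean())
```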

  20. Deformation due to distributed sources in micropolar thermodiffusive medium

    Directory of Open Access Journals (Sweden)

    Sachin Kaushal

    2010-10-01

    Full Text Available The general solution to the field equations in a micropolar generalized thermodiffusive medium in the context of G-L theory is investigated by applying the Laplace and Fourier transforms for various sources. An application of distributed normal forces, thermal sources or potential sources has been taken to show the utility of the problem. To obtain the solution in the physical domain, a numerical inversion technique has been applied. The transformed components of stress, temperature distribution and chemical potential for G-L theory and CT theory have been depicted graphically, and the results are compared analytically to show the impact of diffusion, relaxation times and micropolarity on these quantities. Some special cases of interest are also deduced from the present investigation.

  1. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance entails not only a gradual improvement but is rather a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance makes it possible to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
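
    A Gaussian-process regressor is one simple way to obtain both a predictive mean and a predictive variance of the kind discussed above; the sketch below uses synthetic transect measurements and is not one of the algorithms reviewed in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

# Synthetic 1-D gas concentration measurements along a transect (arbitrary units).
x_obs = rng.uniform(0.0, 10.0, 40).reshape(-1, 1)
conc = np.exp(-0.5 * ((x_obs[:, 0] - 4.0) / 1.5) ** 2) + 0.1 * rng.standard_normal(40)

# Kernel with a noise term so the model separates fluctuation from the spatial trend.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x_obs, conc)

# Predictive mean and standard deviation over a grid: the variance map is what
# enables likelihood-based model evaluation and informed sensor placement.
x_grid = np.linspace(0.0, 10.0, 101).reshape(-1, 1)
mean, std = gp.predict(x_grid, return_std=True)
print("log marginal likelihood: %.1f" % gp.log_marginal_likelihood_value_)
print("suggested next measurement location: %.2f" % x_grid[np.argmax(std), 0])
```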

  2. Visibility from roads predict the distribution of invasive fishes in agricultural ponds.

    Science.gov (United States)

    Kizuka, Toshikazu; Akasaka, Munemitsu; Kadoya, Taku; Takamura, Noriko

    2014-01-01

    Propagule pressure and habitat characteristics are important factors used to predict the distribution of invasive alien species. For species exhibiting strong propagule pressure because of human-mediated introduction of species, indicators of introduction potential must represent the behavioral characteristics of humans. This study examined 64 agricultural ponds to assess the visibility of ponds from surrounding roads and its value as a surrogate of propagule pressure to explain the presence and absence of two invasive fish species. A three-dimensional viewshed analysis using a geographic information system quantified the visual exposure of respective ponds to humans. Binary classification trees were developed as a function of their visibility from roads, as well as five environmental factors: river density, connectivity with upstream dam reservoirs, pond area, chlorophyll a concentration, and pond drainage. Traditional indicators of human-mediated introduction (road density and proportion of urban land-use area) were alternatively included for comparison instead of visual exposure. The presence of Bluegill (Lepomis macrochirus) was predicted by the ponds' higher visibility from roads and pond connection with upstream dam reservoirs. Results suggest that fish stocking into ponds and their dispersal from upstream sources facilitated species establishment. Largemouth bass (Micropterus salmoides) distribution was constrained by chlorophyll a concentration, suggesting their lower adaptability to various environments than that of Bluegill. Based on misclassifications from classification trees for Bluegill, pond visual exposure to roads showed greater predictive capability than traditional indicators of human-mediated introduction. Pond visibility is an effective predictor of invasive species distribution. Its wider use might improve management and mitigate further invasion. The visual exposure of recipient ecosystems to humans is important for many invasive species that

  3. Effect of tissue inhomogeneity on dose distribution of point sources of low-energy electrons

    International Nuclear Information System (INIS)

    Kwok, C.S.; Bialobzyski, P.J.; Yu, S.K.; Prestwich, W.V.

    1990-01-01

    Perturbation in dose distributions of point sources of low-energy electrons at planar interfaces of cortical bone (CB) and red marrow (RM) was investigated experimentally and by Monte Carlo codes EGS and the TIGER series. Ultrathin LiF thermoluminescent dosimeters were used to measure the dose distributions of point sources of 204Tl and 147Pm in RM. When the point sources were at 12 mg/cm2 from a planar interface of CB and RM equivalent plastics, dose enhancement ratios in RM averaged over the region 0–12 mg/cm2 from the interface were measured to be 1.08±0.03 (SE) and 1.03±0.03 (SE) for 204Tl and 147Pm, respectively. The Monte Carlo codes predicted 1.05±0.02 and 1.01±0.02 for the two nuclides, respectively. However, EGS gave consistently 3% higher dose in the dose scoring region than the TIGER series when point sources of monoenergetic electrons up to 0.75 MeV energy were considered in the homogeneous RM situation or in the CB and RM heterogeneous situation. By means of the TIGER series, it was demonstrated that aluminum, which is normally assumed to be equivalent to CB in radiation dosimetry, leads to an overestimation of backscattering of low-energy electrons in soft tissue at a CB–soft-tissue interface by as much as a factor of 2.

  4. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  5. Maxent modelling for predicting the potential distribution of Thai Palms

    DEFF Research Database (Denmark)

    Tovaranonte, Jantrararuk; Barfod, Anders S.; Overgaard, Anne Blach

    2011-01-01

    on presence data. The aim was to identify potential hot spot areas, assess the determinants of palm distribution ranges, and provide a firmer knowledge base for future conservation actions. We focused on a relatively small number of climatic, environmental and spatial variables in order to avoid...... overprediction of species distribution ranges. The models with the best predictive power were found by calculating the area under the curve (AUC) of receiver-operating characteristic (ROC). Here, we provide examples of contrasting predicted species distribution ranges as well as a map of modeled palm diversity...

  6. Prediction of calcite Cement Distribution in Shallow Marine Sandstone Reservoirs using Seismic Data

    Energy Technology Data Exchange (ETDEWEB)

    Bakke, N.E.

    1996-12-31

    This doctoral thesis investigates how calcite cemented layers can be detected by reflection seismic data and how seismic data combined with other methods can be used to predict lateral variation in calcite cementation in shallow marine sandstone reservoirs. Focus is on the geophysical aspects. Sequence stratigraphy and stochastic modelling aspects are only covered superficially. Possible sources of calcite in shallow marine sandstone are grouped into internal and external sources depending on their location relative to the presently cemented rock. Well data and seismic data from the Troll Field in the Norwegian North Sea have been analysed. Tuning amplitudes from stacks of thin calcite cemented layers are analysed. Tuning effects are constructive or destructive interference of pulses resulting from two or more closely spaced reflectors. The zero-offset tuning amplitude is shown to depend on calcite content in the stack and vertical stack size. The relationship is found by regression analysis based on extensive seismic modelling. The results are used to predict calcite distribution in a synthetic and a real data example. It is found that describing calcite cemented beds in shallow marine sandstone reservoirs is not a deterministic problem. Hence seismic inversion and sequence stratigraphy interpretation of well data have been combined in a probabilistic approach to produce models of calcite cemented barriers constrained by a maximum amount of information. It is concluded that seismic data can provide valuable information on distribution of calcite cemented beds in reservoirs where the background sandstones are relatively homogeneous. 63 refs., 78 figs., 10 tabs.

  7. Y-Source Boost DC/DC Converter for Distributed Generation

    DEFF Research Database (Denmark)

    Siwakoti, Yam P.; Loh, Poh Chiang; Blaabjerg, Frede

    2015-01-01

    This paper introduces a versatile Y-source boost dc/dc converter intended for distributed power generation, where high gain is often demanded. The proposed converter uses a Y-source impedance network realized with a tightly coupled three-winding inductor for high voltage boosting that is presently...

  8. Supply and distribution for γ-ray sources

    International Nuclear Information System (INIS)

    Yamamoto, Takeo

    1997-01-01

    Japan Atomic Energy Research Institute (JAERI) is the only facility that supplies and distributes radioisotopes (RI) in Japan. The domestically supplied γ-ray sources are 192Ir and 169Yb for non-destructive examination and 192Ir, 198Au and 153Gd for clinical use. All of these demands in Japan are met with domestic products at present. Meanwhile, the imported γ-ray sources are 60Co sources for medical and industrial uses, including sterilization of medical instruments, 137Cs for irradiation of blood and 241Am for industrial measurements. The major overseas suppliers are Nordion International Inc. and Amersham International plc. RI products on the market are divided into two groups: the primary products, which are supplied in liquid or solid form after chemical or physical treatment of radioactive materials obtained from a reactor, and the secondary products, which are final products after various processing steps. Generally these secondary products are used in practice. In Japan, both the domestic and the imported products are supplied to users via the JRIA (Japan Radioisotope Association). The association participates in the sales and distribution of the secondary products and also in the processing of the primary ones into sealed sources. Furthermore, stable supply systems for these products are largely established according to the half-life of each nuclide, provided there is no accident in the reactor. (M.N.)

  9. Predictive Distribution of the Dirichlet Mixture Model by the Local Variational Inference Method

    DEFF Research Database (Denmark)

    Ma, Zhanyu; Leijon, Arne; Tan, Zheng-Hua

    2014-01-01

    the predictive likelihood of the new upcoming data, especially when the amount of training data is small. The Bayesian estimation of a Dirichlet mixture model (DMM) is, in general, not analytically tractable. In our previous work, we have proposed a global variational inference-based method for approximately...... calculating the posterior distributions of the parameters in the DMM analytically. In this paper, we extend our previous study of the DMM and propose an algorithm to calculate the predictive distribution of the DMM with the local variational inference (LVI) method. The true predictive distribution of the DMM...... is analytically intractable. By considering the concave property of the multivariate inverse beta function, we introduce an upper-bound to the true predictive distribution. As the global minimum of this upper-bound exists, the problem is reduced to seeking an approximation to the true predictive distribution...

  10. Do Staphylococcus epidermidis Genetic Clusters Predict Isolation Sources?

    Science.gov (United States)

    Tolo, Isaiah; Thomas, Jonathan C.; Fischer, Rebecca S. B.; Brown, Eric L.; Gray, Barry M.

    2016-01-01

    Staphylococcus epidermidis is a ubiquitous colonizer of human skin and a common cause of medical device-associated infections. The extent to which the population genetic structure of S. epidermidis distinguishes commensal from pathogenic isolates is unclear. Previously, Bayesian clustering of 437 multilocus sequence types (STs) in the international database revealed a population structure of six genetic clusters (GCs) that may reflect the species' ecology. Here, we first verified the presence of six GCs, including two (GC3 and GC5) with significant admixture, in an updated database of 578 STs. Next, a single nucleotide polymorphism (SNP) assay was developed that accurately assigned 545 (94%) of 578 STs to GCs. Finally, the hypothesis that GCs could distinguish isolation sources was tested by SNP typing and GC assignment of 154 isolates from hospital patients with bacteremia and those with blood culture contaminants and from nonhospital carriage. GC5 was isolated almost exclusively from hospital sources. GC1 and GC6 were isolated from all sources but were overrepresented in isolates from nonhospital and infection sources, respectively. GC2, GC3, and GC4 were relatively rare in this collection. No association was detected between fdh-positive isolates (GC2 and GC4) and nonhospital sources. Using a machine learning algorithm, GCs predicted hospital and nonhospital sources with 80% accuracy and predicted infection and contaminant sources with 45% accuracy, which was comparable to the results seen with a combination of five genetic markers (icaA, IS256, sesD [bhp], mecA, and arginine catabolic mobile element [ACME]). Thus, analysis of population structure with subgenomic data shows the distinction of hospital and nonhospital sources and the near-inseparability of sources within a hospital. PMID:27076664

  11. Stand diameter distribution modelling and prediction based on Richards function.

    Directory of Open Access Journals (Sweden)

    Ai-guo Duan

    Full Text Available The objective of this study was to introduce the application of the Richards equation to the modelling and prediction of stand diameter distributions. Long-term repeated measurement data sets, consisting of 309 diameter frequency distributions from Chinese fir (Cunninghamia lanceolata) plantations in southern China, were used: 150 stands were used as fitting data, and the other 159 stands were used for testing. The nonlinear regression method (NRM) or the maximum likelihood estimates method (MLEM) was applied to estimate the parameters of the models, and the parameter prediction method (PPM) and parameter recovery method (PRM) were used to predict the diameter distributions of unknown stands. Four main conclusions were obtained: (1) the R distribution presented a more accurate simulation than the three-parameter Weibull function; (2) the parameters p, q and r of the R distribution proved to be its scale, location and shape parameters, and have a deep relationship with stand characteristics, which means the parameters of the R distribution have a good theoretical interpretation; (3) the ordinate of the inflection point of the R distribution has significant relativity with its skewness and kurtosis, and the fitted main distribution range for the cumulative diameter distribution of Chinese fir plantations was 0.4∼0.6; (4) the goodness-of-fit test showed that diameter distributions of unknown stands can be well estimated by applying the R distribution based on PRM, or the combination of PPM and PRM, under the condition that only the quadratic mean DBH, or the quadratic mean DBH plus stand age, is known; the non-rejection rates were near 80%, which are higher than the 72.33% non-rejection rate of the three-parameter Weibull function based on the combination of PPM and PRM.
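
    A least-squares fit of a Richards-type (generalized logistic) cumulative distribution to an empirical cumulative diameter frequency can be sketched as follows. The parameterization, the synthetic DBH data, and the starting values are assumptions for illustration, not the authors' exact R distribution or data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import weibull_min

def richards_cdf(d, p, q, r):
    """Richards-type (generalized logistic) CDF: p ~ scale, q ~ location, r ~ shape.
    This parameterization is an assumption made for illustration."""
    return (1.0 + np.exp(-(d - q) / p)) ** (-1.0 / r)

rng = np.random.default_rng(7)

# Synthetic stand: 200 DBH values (cm), drawn from a Weibull as a stand-in dataset.
dbh = weibull_min.rvs(c=2.5, scale=18.0, size=200, random_state=rng)

# Empirical cumulative diameter frequency.
d_sorted = np.sort(dbh)
ecdf = np.arange(1, d_sorted.size + 1) / d_sorted.size

# Least-squares fit of the Richards CDF to the empirical distribution.
params, _ = curve_fit(richards_cdf, d_sorted, ecdf, p0=[3.0, 15.0, 1.0],
                      bounds=([0.1, 0.0, 0.1], [50.0, 60.0, 20.0]))
p, q, r = params
print("fitted scale p=%.2f, location q=%.2f, shape r=%.2f" % (p, q, r))
```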

  12. The interplay of various sources of noise on reliability of species distribution models hinges on ecological specialisation.

    Science.gov (United States)

    Soultan, Alaaeldin; Safi, Kamran

    2017-01-01

    Digitized species occurrence data provide an unprecedented source of information for ecologists and conservationists. Species distribution model (SDM) has become a popular method to utilise these data for understanding the spatial and temporal distribution of species, and for modelling biodiversity patterns. Our objective is to study the impact of noise in species occurrence data (namely sample size and positional accuracy) on the performance and reliability of SDM, considering the multiplicative impact of SDM algorithms, species specialisation, and grid resolution. We created a set of four 'virtual' species characterized by different specialisation levels. For each of these species, we built the suitable habitat models using five algorithms at two grid resolutions, with varying sample sizes and different levels of positional accuracy. We assessed the performance and reliability of the SDM according to classic model evaluation metrics (Area Under the Curve and True Skill Statistic) and model agreement metrics (Overall Concordance Correlation Coefficient and geographic niche overlap) respectively. Our study revealed that species specialisation had by far the most dominant impact on the SDM. In contrast to previous studies, we found that for widespread species, low sample size and low positional accuracy were acceptable, and useful distribution ranges could be predicted with as few as 10 species occurrences. Range predictions for narrow-ranged species, however, were sensitive to sample size and positional accuracy, such that useful distribution ranges required at least 20 species occurrences. Against expectations, the MAXENT algorithm poorly predicted the distribution of specialist species at low sample size.

  13. Predicting Polylepis distribution: vulnerable and increasingly important Andean woodlands

    Directory of Open Access Journals (Sweden)

    Brian R. Zutta

    2012-11-01

    Full Text Available Polylepis woodlands are a vital resource for preserving biodiversity and hydrological functions, which will be altered by climate change and challenge the sustainability of local human communities. However, these high-altitude Andean ecosystems are becoming increasingly vulnerable due to anthropogenic pressure including fragmentation, deforestation and the increase in livestock. Predicting the distribution of native woodlands has become increasingly important to counteract the negative effects of climate change through reforestation and conservation. The objective of this study was to develop and analyze the distribution models of two species that form extensive woodlands along the Andes, namely Polylepis sericea and P. weberbaueri. This study utilized the program Maxent, and climate and remotely sensed environmental layers at 1 km resolution. The predicted distribution model for P. sericea indicated that the species could be located in a variety of habitats along the Andean Cordillera, while P. weberbaueri was restricted to the high elevations of southern Peru and Bolivia. For both species, elevation and temperature metrics were the most significant factors for predicted distribution. Further model refinement of Polylepis and other Andean species using increasingly available satellite data demonstrates the potential to help define areas of diversity and improve conservation strategies for the Andes.

  14. Sediment sources and their Distribution in Chwaka Bay, Zanzibar ...

    African Journals Online (AJOL)

    This work establishes sediment sources, character and their distribution in Chwaka Bay using (i) stable isotope compositions of organic carbon (OC) and nitrogen, (ii) contents of OC, nitrogen and CaCO3, (iii) C/N ratios, (iv) distribution of sediment mean grain size and sorting, and (v) thickness of unconsolidated sediments.

  15. The electron-dose distribution surrounding an 192Ir wire brachytherapy source investigated using EGS4 simulations and GafChromic film

    International Nuclear Information System (INIS)

    Cheung, Y.C.; Yu, P.K.N.; Young, E.C.M.; Wong, T.P.Y.

    1997-01-01

    The steep dose gradient around 192Ir brachytherapy wire implants is predicted by the EGS4 (PRESTA version) Monte Carlo simulation. When considering radiation-absorbing regions close to the wire source, the accurate dose distribution cannot be calculated by the GE Target II Sun Sparc treatment-planning system. Experiments using GafChromic™ film have been performed to prove the validity of the EGS4 user code when calculating the dose close to the wire source in a low energy range. (Author)

  16. [Effects of sampling plot number on tree species distribution prediction under climate change].

    Science.gov (United States)

    Liang, Yu; He, Hong-Shi; Wu, Zhi-Wei; Li, Xiao-Na; Luo, Xu

    2013-05-01

    Based on the neutral landscapes under different degrees of landscape fragmentation, this paper studied the effects of sampling plot number on the prediction of tree species distribution at landscape scale under climate change. The tree species distribution was predicted by the coupled modeling approach which linked an ecosystem process model with a forest landscape model, and three contingent scenarios and one reference scenario of sampling plot numbers were assumed. The differences between the three scenarios and the reference scenario under different degrees of landscape fragmentation were tested. The results indicated that the effects of sampling plot number on the prediction of tree species distribution depended on the tree species life history attributes. For the generalist species, the prediction of their distribution at landscape scale needed more plots. Except for the extreme specialist, landscape fragmentation degree also affected the effects of sampling plot number on the prediction. With the increase of simulation period, the effects of sampling plot number on the prediction of tree species distribution at landscape scale could be changed. For generalist species, more plots are needed for the long-term simulation.

  17. Optimal Prediction of Moving Sound Source Direction in the Owl.

    Directory of Open Access Journals (Sweden)

    Weston Cox

    2015-07-01

    Full Text Available Capturing nature's statistical structure in behavioral responses is at the core of the ability to function adaptively in the environment. Bayesian statistical inference describes how sensory and prior information can be combined optimally to guide behavior. An outstanding open question is how neural coding supports Bayesian inference, including how sensory cues are optimally integrated over time. Here we address what neural response properties allow a neural system to perform Bayesian prediction, i.e., predicting where a source will be in the near future given sensory information and prior assumptions. The work here shows that the population vector decoder will perform Bayesian prediction when the receptive fields of the neurons encode the target dynamics with shifting receptive fields. We test the model using the system that underlies sound localization in barn owls. Neurons in the owl's midbrain show shifting receptive fields for moving sources that are consistent with the predictions of the model. We predict that neural populations can be specialized to represent the statistics of dynamic stimuli to allow for a vector read-out of Bayes-optimal predictions.
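
    A population vector read-out with shifted receptive fields can be sketched as follows; the tuning curves, the amount of shift, and the noise model are illustrative assumptions, not the owl data.

```python
import numpy as np

rng = np.random.default_rng(5)

n_neurons = 64
preferred = np.linspace(-np.pi, np.pi, n_neurons, endpoint=False)  # preferred directions (rad)

def rates(stimulus_dir, lead=0.0, base=5.0, gain=20.0, kappa=2.0):
    """Cosine-like (von Mises) tuning; a positive 'lead' shifts each receptive
    field ahead of the current source direction (an assumed form of the shift)."""
    return base + gain * np.exp(kappa * (np.cos(stimulus_dir - (preferred - lead)) - 1.0))

def population_vector(r):
    """Decoded direction = angle of the rate-weighted sum of preferred-direction vectors."""
    return np.arctan2(np.sum(r * np.sin(preferred)), np.sum(r * np.cos(preferred)))

# A source currently at 30 deg; with receptive fields shifted by 10 deg, the
# decoded direction reads out a location slightly ahead of the current one.
current_dir = np.deg2rad(30.0)
spike_counts = rng.poisson(rates(current_dir, lead=np.deg2rad(10.0)))
print("decoded direction: %.1f deg (current source at 30.0 deg)"
      % np.rad2deg(population_vector(spike_counts)))
```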

  18. The effect of energy distribution of external source on source multiplication in fast assemblies

    International Nuclear Information System (INIS)

    Karam, R.A.; Vakilian, M.

    1976-02-01

    The essence of this study is the effect of the energy distribution of a source on the detection rate as a function of K effective in fast assemblies. This effectiveness, as a function of K, was studied in a fission chamber, using the ABN cross-section set and the Mach 1 code. It was found that with a source which has a fission spectrum, the reciprocal count rate versus mass relationship is linear down to a K effective of 0.59. For a thermal source, the linearity was never achieved. (author)

  19. A Distributed Model Predictive Control approach for the integration of flexible loads, storage and renewables

    DEFF Research Database (Denmark)

    Ferrarini, Luca; Mantovani, Giancarlo; Costanzo, Giuseppe Tommaso

    2014-01-01

    This paper presents an innovative solution based on distributed model predictive controllers to integrate the control and management of energy consumption, energy storage, and PV and wind generation at the customer side. The overall goal is to enable an advanced prosumer to auto-produce part of the energy...... it needs with renewable sources and, at the same time, to optimally exploit the thermal and electrical storages, to trade off its comfort requirements against different pricing schemes (including real-time pricing), and to apply optimal control techniques rather than sub-optimal heuristics....

  20. Affordable non-traditional source data mining for context assessment to improve distributed fusion system robustness

    Science.gov (United States)

    Bowman, Christopher; Haith, Gary; Steinberg, Alan; Morefield, Charles; Morefield, Michael

    2013-05-01

    This paper describes methods to affordably improve the robustness of distributed fusion systems by opportunistically leveraging non-traditional data sources. Adaptive methods help find relevant data, create models, and characterize the model quality. These methods can also measure the conformity of this non-traditional data with fusion system products, including situation modeling and mission impact prediction. Non-traditional data can improve the quantity, quality, availability, timeliness, and diversity of the baseline fusion system sources and therefore can improve prediction and estimation accuracy and robustness at all levels of fusion. Techniques are described that automatically learn to characterize and search non-traditional contextual data to enable operators to integrate the data with high-level fusion systems and ontologies. These techniques apply the extension of the Data Fusion & Resource Management Dual Node Network (DNN) technical architecture at Level 4. The DNN architecture supports effective assessment and management of the expanded portfolio of data sources, entities of interest, models, and algorithms, including data pattern discovery and context conformity. Affordable model-driven and data-driven data mining methods to discover unknown models from non-traditional and `big data' sources are used to automatically learn entity behaviors and correlations with fusion products [14, 15]. This paper describes our context assessment software development and the demonstration of context assessment of non-traditional data for comparison with an intelligence, surveillance and reconnaissance fusion product based upon an IED POIs workflow.

  1. Predicting plant distribution in an heterogeneous Alpine landscape: does soil matter?

    Science.gov (United States)

    Buri, Aline; Cianfrani, Carmen; Pradervand, Jean-Nicolas; Guisan, Antoine

    2016-04-01

    Topographic and climatic factors are usually used to predict plant distribution because they are known to explain species presence or absence. Soil properties have been widely shown to influence plant growth and distributions. However, they are rarely taken into account as predictors in plant species distribution models (SDMs) in an edaphically heterogeneous landscape. Or, when they are, interpolation techniques are used to project soil factors in space. In a heterogeneous landscape, such as the Alps region, where soil properties change abruptly as a function of environmental conditions over short distances, interpolation techniques require huge quantities of samples to be efficient. This is costly and time consuming, and brings more errors than a predictive approach for an equivalent number of samples. In this study we aimed to assess whether soil properties may be generalized over entire mountainous geographic extents and can improve predictions of plant distributions over traditional topo-climatic predictors. First, we used a predictive approach to map two soil properties based on field measurements in the western Swiss Alps region: the soil pH and the ratio of stable isotopes 13C/12C (called δ13CSOM). We used ensemble forecasting techniques, combining several predictive algorithms, to build models of the geographic variation in the values of both soil properties and projected them over the entire study area. As predictive factors, we employed very high resolution topo-climatic data. In a second step, the output maps from the previous task were used as input for regional vegetation models. We integrated the predicted soil properties into a set of basic topo-climatic predictors known to be important for modelling plant species. Then we modelled the distribution of 156 plant species inhabiting the study area. Finally, we compared the quality of the models with and without soil properties as predictors to evaluate their effect on the predictive power of our models

  2. Minimum-phase distribution of cosmic source brightness

    International Nuclear Information System (INIS)

    Gal'chenko, A.A.; Malov, I.F.; Mogil'nitskaya, L.F.; Frolov, V.A.

    1984-01-01

    Minimum-phase distributions of brightness (profiles) for the cosmic radio sources 3C 144 (wavelength lambda = 21 cm), 3C 338 (lambda = 3.5 m), and 3C 353 (lambda = 31.3 cm and 3.5 m) are obtained. A real possibility of recovering the profile from fragments of the modulus of its Fourier image is shown.

  3. Building predictive models of soil particle-size distribution

    Directory of Open Access Journals (Sweden)

    Alessandro Samuel-Rosa

    2013-04-01

    Full Text Available Is it possible to build predictive models (PMs) of soil particle-size distribution (psd) in a region with complex geology and a young and unstable land-surface? The main objective of this study was to answer this question. A set of 339 soil samples from a small slope catchment in Southern Brazil was used to build PMs of psd in the surface soil layer. Multiple linear regression models were constructed using terrain attributes (elevation, slope, catchment area, convergence index, and topographic wetness index). The PMs explained more than half of the data variance. This performance is similar to (or even better than) that of the conventional soil mapping approach. For some size fractions, the PM performance can reach 70%. The largest uncertainties were observed in geologically more complex areas. Therefore, significant improvements in the predictions can only be achieved if accurate geological data are made available. Meanwhile, PMs built on terrain attributes are efficient in predicting the particle-size distribution (psd) of soils in regions of complex geology.
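
    The core of the predictive-model construction, a multiple linear regression of a particle-size fraction on the five terrain attributes listed above, can be sketched as follows; the synthetic data stand in for the 339 field samples and are assumptions for illustration only.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score

        # Columns: elevation, slope, catchment area, convergence index, topographic wetness index.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(339, 5))                                        # placeholder terrain attributes
        y = 20 + 3 * X[:, 0] - 2 * X[:, 4] + rng.normal(scale=4, size=339)   # fake clay fraction (%)

        pm = LinearRegression().fit(X, y)       # predictive model of one size fraction
        print(r2_score(y, pm.predict(X)))       # share of variance explained (R^2)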

  4. Popularity Prediction Tool for ATLAS Distributed Data Management

    CERN Document Server

    Beermann, T; The ATLAS collaboration; Stewart, G; Lassnig, M; Garonne, V; Barisits, M; Vigne, R; Serfon, C; Goossens, L; Nairz, A; Molfetas, A

    2013-01-01

    This paper describes a popularity prediction tool for data-intensive data management systems, such as ATLAS distributed data management (DDM). It is fed by the DDM popularity system, which produces historical reports about ATLAS data usage, providing information about files, datasets, users and sites where data was accessed. The tool described in this contribution uses this historical information to make a prediction about the future popularity of data. It finds trends in the usage of data using a set of neural networks and a set of input parameters and predicts the number of accesses in the near-term future. This information can then be used in a second step to improve the distribution of replicas at sites, taking into account the cost of creating new replicas (bandwidth and load on the storage system) compared to the gain of having new ones (faster access of data for analysis). To evaluate the benefit of the redistribution, a grid simulator is introduced that is able to replay real workloads on different data distri...
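
    A toy sketch of the prediction step, training a small neural network on a sliding window of historical weekly access counts to forecast near-term accesses, is shown below. The window length, network size, and Poisson-generated history are assumptions; the real tool uses richer DDM popularity features and a set of neural networks.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def make_training_set(weekly_accesses, window=8):
            X, y = [], []
            for i in range(len(weekly_accesses) - window):
                X.append(weekly_accesses[i:i + window])    # past `window` weeks
                y.append(weekly_accesses[i + window])      # next week's access count
            return np.array(X), np.array(y)

        weekly_accesses = np.random.poisson(lam=20, size=104).astype(float)  # fake two-year history
        X, y = make_training_set(weekly_accesses)
        model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
        model.fit(X, y)
        next_week = model.predict(weekly_accesses[-8:].reshape(1, -1))       # near-term prediction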

  5. Popularity Prediction Tool for ATLAS Distributed Data Management

    CERN Document Server

    Beermann, T; The ATLAS collaboration; Stewart, G; Lassnig, M; Garonne, V; Barisits, M; Vigne, R; Serfon, C; Goossens, L; Nairz, A; Molfetas, A

    2014-01-01

    This paper describes a popularity prediction tool for data-intensive data management systems, such as ATLAS distributed data management (DDM). It is fed by the DDM popularity system, which produces historical reports about ATLAS data usage, providing information about files, datasets, users and sites where data was accessed. The tool described in this contribution uses this historical information to make a prediction about the future popularity of data. It finds trends in the usage of data using a set of neural networks and a set of input parameters and predicts the number of accesses in the near-term future. This information can then be used in a second step to improve the distribution of replicas at sites, taking into account the cost of creating new replicas (bandwidth and load on the storage system) compared to the gain of having new ones (faster access of data for analysis). To evaluate the benefit of the redistribution, a grid simulator is introduced that is able to replay real workloads on different data distri...

  6. Predictive access control for distributed computation

    DEFF Research Database (Denmark)

    Yang, Fan; Hankin, Chris; Nielson, Flemming

    2013-01-01

    We show how to use aspect-oriented programming to separate security and trust issues from the logical design of mobile, distributed systems. The main challenge is how to enforce various types of security policies, in particular predictive access control policies, i.e., policies based on the future behavior of a program. A novel feature of our approach is that we can define policies concerning secondary use of data.

  7. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low-Density Parity-Check Accumulate (LDPCA) codes in a DSC scheme with feedback. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  8. DBH Prediction Using Allometry Described by Bivariate Copula Distribution

    Science.gov (United States)

    Xu, Q.; Hou, Z.; Li, B.; Greenberg, J. A.

    2017-12-01

    Forest biomass mapping based on single-tree detection from airborne laser scanning (ALS) usually depends on an allometric equation that relates diameter at breast height (DBH) to per-tree aboveground biomass. Because ALS cannot measure DBH directly, DBH must be predicted from other ALS-measured tree-level structural parameters. A copula-based method is proposed in this study to predict DBH from the ALS-measured tree height and crown diameter, using a dataset measured in the Lassen National Forest in California. Instead of exploring an explicit mathematical equation that explains the underlying relationship between DBH and other structural parameters, the copula-based prediction method utilizes the dependency between the cumulative distributions of these variables and solves for DBH under the assumption that, for a single tree, the cumulative probability of each structural parameter is identical. Results show that, compared with benchmarking least-squares linear regression and k-MSN imputation, the copula-based method obtains better DBH accuracy for the Lassen National Forest. To assess the generalization of the proposed method, prediction uncertainty is quantified using bootstrapping techniques that examine the variability of the RMSE of the predicted DBH. We find that the copula distribution is reliable in describing the allometric relationship between tree-level structural parameters, and it contributes to the reduction of prediction uncertainty.
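
    The central idea, that a tree sits at the same cumulative probability in each marginal distribution, can be sketched with simple quantile matching: fit marginals on training trees, push a new tree's height and crown diameter through their CDFs, and invert the DBH CDF at that probability. The lognormal marginals and the plain averaging of the two probabilities are illustrative assumptions, not the paper's fitted copula.

        import numpy as np
        from scipy import stats

        def fit_marginal(x):
            # Frozen lognormal fitted to one structural parameter (assumed marginal family).
            return stats.lognorm(*stats.lognorm.fit(x, floc=0))

        def predict_dbh(train_height, train_crown, train_dbh, new_height, new_crown):
            f_h = fit_marginal(train_height)
            f_c = fit_marginal(train_crown)
            f_d = fit_marginal(train_dbh)
            u = 0.5 * (f_h.cdf(new_height) + f_c.cdf(new_crown))  # shared cumulative probability
            return f_d.ppf(u)                                     # invert the DBH marginal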

  9. Distributed Remote Vector Gaussian Source Coding with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider a distributed remote source coding problem, where a sequence of observations of source vectors is available at the encoder. The problem is to specify the optimal rate for encoding the observations subject to a covariance matrix distortion constraint and in the presence...

  10. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Science.gov (United States)

    2010-01-01

    16 CFR Part 1512, Table 4 (Requirements for Bicycles, Federal Hazardous Substances Act Regulations, 2010-01-01) gives the relative energy distribution of sources by wavelength. Wavelength (nanometers) and relative energy: 380, 9.79; 390, 12.09; 400, 14.71; 410, 17.68; 420, 21...

  11. Natural ventilation in an enclosure induced by a heat source distributed uniformly over a vertical wall

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Z.D.; Li, Y.; Mahoney, J. [CSIRO Building, Construction and Engineering, Advanced Thermo-Fluids Technologies Lab., Highett, VIC (Australia)

    2001-05-01

    A simple multi-layer stratification model is suggested for displacement ventilation in a single-zone building driven by a heat source distributed uniformly over a vertical wall. Theoretical expressions are obtained for the stratification interface height and ventilation flow rate and compared with those obtained by an existing model available in the literature. Experiments were also carried out using a recently developed fine-bubble modelling technique. It was shown that the experimental results obtained using the fine-bubble technique are in good agreement with the theoretical predictions. (Author)

  12. The distribution of polarized radio sources >15 μJy IN GOODS-N

    International Nuclear Information System (INIS)

    Rudnick, L.; Owen, F. N.

    2014-01-01

    We present deep Very Large Array observations of the polarization of radio sources in the GOODS-N field at 1.4 GHz at resolutions of 1.6'' and 10''. At 1.6'', we find that the peak flux cumulative number count distribution is N(>p) ≈ 45 (p/30 μJy)^(-0.6) per square degree above a detection threshold of 14.5 μJy. This represents a break from the steeper slopes at higher flux densities, resulting in fewer sources predicted for future surveys with the Square Kilometer Array and its precursors. It provides a significant challenge for using background rotation measures (RMs) to study clusters of galaxies or individual galaxies. Most of the polarized sources are well above our detection limit, and they are also radio galaxies that are well resolved even at 10'', with redshifts from ∼0.2-1.9. We determined a total polarized flux for each source by integrating the 10'' polarized intensity maps, as will be done by upcoming surveys such as POSSUM. These total polarized fluxes are a factor of two higher, on average, than the peak polarized flux at 1.6''; this would increase the number counts by ∼50% at a fixed flux level. The detected sources have RMs with a characteristic rms scatter of ∼11 rad m^(-2) around the local Galactic value, after eliminating likely outliers. The median fractional polarization from all total intensity sources does not continue the trend of increasing at lower flux densities, as seen for stronger sources. The changes in the polarization characteristics seen at these low fluxes likely represent the increasing dominance of star-forming galaxies.

  13. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    International Nuclear Information System (INIS)

    Sig Drellack, Lance Prothro

    2007-01-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The

  14. A two-stage predictive model to simultaneous control of trihalomethanes in water treatment plants and distribution systems: adaptability to treatment processes.

    Science.gov (United States)

    Domínguez-Tello, Antonio; Arias-Borrego, Ana; García-Barrera, Tamara; Gómez-Ariza, José Luis

    2017-10-01

    The trihalomethanes (TTHMs) and other disinfection by-products (DBPs) are formed in drinking water by the reaction of chlorine with organic precursors contained in the source water, in two consecutive and linked stages: the first at the treatment plant and the second along the distribution system (DS), by reaction of residual chlorine with the organic precursors not removed. Following this approach, this study aimed at developing a two-stage empirical model for predicting the formation of TTHMs in the water treatment plant and subsequently their evolution along the water distribution system (WDS). The aim of the two-stage model was to improve the predictive capability for a wide range of water treatment and distribution system scenarios. The two-stage model was developed using multiple regression analysis from a database (January 2007 to July 2012) covering three different treatment processes (conventional and advanced) in the water supply system of the Aljaraque area (southwest of Spain). The new model was then validated using a recent database from the same water supply system (January 2011 to May 2015). The validation results indicated no significant difference between the predicted and observed TTHM values (R² = 0.874) in the distribution systems studied, proving the adaptability of the new model to the boundary conditions. Finally, the predictive capability of the new model was compared with 17 other models selected from the literature, showing satisfactory prediction results and excellent adaptability to treatment processes.
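
    The two-stage structure, one regression for TTHM formed in the plant and a second for its evolution along the distribution system, can be sketched as below. The predictor names (plant water-quality variables, residence time, residual chlorine) are plausible assumptions; the published model's exact terms and coefficients differ.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        def fit_two_stage(plant_X, plant_tthm, ds_residence, ds_residual_cl, ds_tthm):
            # Stage 1: TTHM at the treatment plant outlet from plant water-quality predictors.
            stage1 = LinearRegression().fit(plant_X, plant_tthm)
            tthm_plant = stage1.predict(plant_X)
            # Stage 2: TTHM in the network from the stage-1 output plus network predictors.
            X2 = np.column_stack([tthm_plant, ds_residence, ds_residual_cl])
            stage2 = LinearRegression().fit(X2, ds_tthm)
            return stage1, stage2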

  15. Neural correlates of encoding processes predicting subsequent cued recall and source memory.

    Science.gov (United States)

    Angel, Lucie; Isingrini, Michel; Bouazzaoui, Badiâa; Fay, Séverine

    2013-03-06

    In this experiment, event-related potentials were used to examine whether the neural correlates of encoding processes predicting subsequent successful recall differed from those predicting successful source memory retrieval. During encoding, participants studied lists of words and were instructed to memorize each word and the list in which it occurred. At test, they had to complete stems (the first four letters) with a studied word and then make a judgment about the initial temporal context (i.e., list). Event-related potentials recorded during encoding were segregated according to subsequent memory performance to examine subsequent memory effects (SMEs) reflecting successful cued recall (cued recall SME) and successful source retrieval (source memory SME). The data showed a cued recall SME at parietal electrode sites from 400 to 1200 ms and a late inverted cued recall SME at frontal sites in the 1200-1400 ms period. Moreover, a source memory SME was observed from 400 to 1400 ms over frontal areas. These findings indicate that the patterns of encoding-related activity predicting successful recall and source memory are clearly dissociated.

  16. Continuous-variable quantum key distribution with Gaussian source noise

    International Nuclear Information System (INIS)

    Shen Yujie; Peng Xiang; Yang Jian; Guo Hong

    2011-01-01

    Source noise affects the security of continuous-variable quantum key distribution (CV QKD) and is difficult to analyze. We propose a model to characterize Gaussian source noise through introducing a neutral party (Fred) who induces the noise with a general unitary transformation. Without knowing Fred's exact state, we derive the security bounds for both reverse and direct reconciliations and show that the bound for reverse reconciliation is tight.

  17. Determining the temperature and density distribution from a Z-pinch radiation source

    International Nuclear Information System (INIS)

    Matuska, W.; Lee, H.

    1997-01-01

    High temperature radiation sources exceeding one hundred eV can be produced via z-pinches using currently available pulsed power. The usual approach to comparing z-pinch simulations and experimental data is to convert the radiation output at the source, whose temperature and density distributions are computed from a 2-D MHD code, into simulated data such as a spectrometer reading. This conversion process involves a radiation transfer calculation through the axially symmetric source, assuming local thermodynamic equilibrium (LTE), and folding the radiation that reaches the detector with the frequency-dependent response function. In this paper the authors propose a different approach by which they can determine the temperature and density distributions of the radiation source directly from spatially resolved spectral data. This unfolding process is reliable and unambiguous for the ideal case where LTE holds and the source is axially symmetric. In reality, imperfect LTE and axial symmetry will introduce inaccuracies into the unfolded distributions. The authors use a parameter optimization routine to find the temperature and density distributions that best fit the data. They know from past experience that the radiation source resulting from the implosion of a thin foil does not exhibit good axial symmetry. However, recent experiments carried out at Sandia National Laboratory using multiple wire arrays were very promising for achieving reasonably good symmetry. For these experiments the method will provide a valuable diagnostic tool.

  18. Prediction of spatial distribution for some land use allometric ...

    African Journals Online (AJOL)

    Prediction of spatial distribution for some land use allometric characteristics in land use planning models with geostatistic and Geographical Information System (GIS) (Case study: Boein and Miandasht, Isfahan Province, Iran)

  19. ASSERT and COBRA predictions of flow distribution in vertical bundles

    International Nuclear Information System (INIS)

    Tahir, A.; Carver, M.B.

    1983-01-01

    COBRA and ASSERT are subchannel codes which compute flow and enthalpy distributions in rod bundles. COBRA is a well-known code; ASSERT is under development at CRNL. This paper compares the two codes against boiling experiments in vertical seven-rod bundles. ASSERT predictions of the void distribution are shown to be in good agreement with reported experimental results, while COBRA predictions are unsatisfactory. The mixing models in both COBRA and ASSERT are briefly discussed. The reasons for the failure of COBRA-IV and the success of ASSERT in simulating the experiments are highlighted.

  20. Radial dose distribution of 192Ir and 137Cs seed sources

    International Nuclear Information System (INIS)

    Thomason, C.; Higgins, P.

    1989-01-01

    The radial dose distributions in water around 192Ir seed sources with both platinum and stainless steel encapsulation have been measured using LiF thermoluminescent dosimeters (TLD) for distances of 1 to 12 cm along the perpendicular bisector of the source to determine the effect of source encapsulation. Similar measurements also have been made around a 137Cs seed source of comparable dimensions. The data were fit to a third order polynomial to obtain an empirical equation for the radial dose factor which then can be used in dosimetry. The coefficients of this equation for each of the three sources are given. The radial dose factor of the stainless steel encapsulated 192Ir and that of the platinum encapsulated 192Ir agree to within 2%. The radial dose distributions measured here for 192Ir with either type of encapsulation and for 137Cs are indistinguishable from those of other authors when considering the uncertainties involved. For clinical dosimetry based on isotropic point or line source models, any of these equations may be used without significantly affecting accuracy.
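
    The third-order polynomial fit used to obtain the empirical radial dose factor equation can be reproduced as follows; the distances and dose-factor values here are placeholders, not the paper's TLD measurements.

        import numpy as np

        r = np.array([1, 2, 3, 4, 6, 8, 10, 12], dtype=float)           # distance along bisector (cm)
        f = np.array([1.00, 1.01, 1.01, 1.00, 0.97, 0.93, 0.88, 0.82])  # hypothetical radial dose factors

        coeffs = np.polyfit(r, f, deg=3)            # empirical third-order fit
        radial_dose_factor = np.poly1d(coeffs)      # usable in clinical dosimetry calculations
        print(radial_dose_factor(5.0))              # evaluate the factor at 5 cm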

  1. Source Coding for Wireless Distributed Microphones in Reverberant Environments

    DEFF Research Database (Denmark)

    Zahedi, Adel

    2016-01-01

    Modern multimedia systems are more and more shifting toward distributed and networked structures. This includes audio systems, where networks of wireless distributed microphones are replacing the traditional microphone arrays. This allows for flexibility of placement and high spatial diversity. However, it comes with the price of several challenges, including the limited power and bandwidth resources for wireless transmission of audio recordings. In such a setup, we study the problem of source coding for the compression of the audio recordings before transmission, in order to reduce the power consumption and/or transmission bandwidth by reducing the transmission rates. Source coding for wireless microphones in reverberant environments has several special characteristics which make it more challenging in comparison with regular audio coding. The signals which are acquired by the microphones...

  2. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    International Nuclear Information System (INIS)

    Ma, Denglong; Zhang, Zaoxiao

    2016-01-01

    Highlights: • The intelligent network models were built to predict contaminant gas concentrations. • The improved network models coupled with a Gaussian dispersion model were presented. • The new model has high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as the radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models, with too many inputs based on the original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion as well as a good forward model for the emission source parameter identification problem.
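
    A hedged sketch of the Gaussian-MLA coupling is given below: a classic Gaussian plume prediction is used as one input feature of a machine learning regressor (here an SVM) trained on measured concentrations. The dispersion-coefficient power laws and SVR settings are rough assumptions, not the paper's calibrated model.

        import numpy as np
        from sklearn.svm import SVR

        def gaussian_plume(q, u, x, y, z, h):
            # Classic Gaussian plume with ground reflection; x > 0 is the downwind distance.
            sig_y, sig_z = 0.08 * x**0.9, 0.06 * x**0.9     # assumed stability-class fits
            return (q / (2 * np.pi * u * sig_y * sig_z)
                    * np.exp(-y**2 / (2 * sig_y**2))
                    * (np.exp(-(z - h)**2 / (2 * sig_z**2)) + np.exp(-(z + h)**2 / (2 * sig_z**2))))

        def fit_gaussian_svm(samples, measured):
            # samples: rows of (q, u, x, y, z, h) monitoring records; measured: observed concentrations.
            plume = np.array([gaussian_plume(*s) for s in samples])
            X = np.column_stack([plume, samples])           # plume output + raw parameters
            return SVR(C=10.0, epsilon=0.01).fit(X, measured)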

  3. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Denglong [Fuli School of Food Equipment Engineering and Science, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); Zhang, Zaoxiao, E-mail: zhangzx@mail.xjtu.edu.cn [State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); School of Chemical Engineering and Technology, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China)

    2016-07-05

    Highlights: • The intelligent network models were built to predict contaminant gas concentrations. • The improved network models coupled with a Gaussian dispersion model were presented. • The new model has high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as the radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models, with too many inputs based on the original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion as well as a good forward model for the emission source parameter identification problem.

  4. Model Predictive Control of Z-source Neutral Point Clamped Inverter

    DEFF Research Database (Denmark)

    Mo, Wei; Loh, Poh Chiang; Blaabjerg, Frede

    2011-01-01

    This paper presents Model Predictive Control (MPC) of the Z-source Neutral Point Clamped (NPC) inverter. For illustration, current control of a Z-source NPC grid-connected inverter is analyzed and simulated. With MPC's advantage of easily including system constraints, the load current and impedance network responses are obtained at the same time with a formulated Z-source NPC inverter network model. Steady-state and transient simulation results of MPC are presented, which show the good reference-tracking ability of this method. It provides a new control method for the Z-source NPC inverter...

  5. How the Assumed Size Distribution of Dust Minerals Affects the Predicted Ice Forming Nuclei

    Science.gov (United States)

    Perlwitz, Jan P.; Fridlind, Ann M.; Garcia-Pando, Carlos Perez; Miller, Ron L.; Knopf, Daniel A.

    2015-01-01

    The formation of ice in clouds depends on the availability of ice forming nuclei (IFN). Dust aerosol particles are considered the most important source of IFN at a global scale. Recent laboratory studies have demonstrated that the mineral feldspar provides the most efficient dust IFN for immersion freezing and, together with kaolinite, for deposition ice nucleation, and that the phyllosilicates illite and montmorillonite (a member of the smectite group) are of secondary importance. A few studies have applied global models that simulate mineral-specific dust to predict the number and geographical distribution of IFN. These studies have been based on the simple assumption that the mineral composition of soil, as provided in data sets from the literature, translates directly into the mineral composition of the dust aerosols. However, these tables are based on measurements of wet-sieved soil, where dust aggregates are destroyed to a large degree. In consequence, the size distribution of dust is shifted to smaller sizes, and phyllosilicates like illite, kaolinite, and smectite are only found in the size range below 2 μm. In contrast, in measurements of the mineral composition of dust aerosols, the largest mass fraction of these phyllosilicates is found in the size range above 2 μm as part of dust aggregates. Conversely, the mass fraction of feldspar is smaller in this size range, varying with the geographical location. This may have a significant effect on the predicted IFN number and its geographical distribution. An improved mineral-specific dust aerosol module has recently been implemented in the NASA GISS Earth System ModelE2. The dust module takes into consideration the disaggregated state of wet-sieved soil, on which the tables of soil mineral fractions are based. To simulate the atmospheric cycle of the minerals, the mass size distribution of each mineral in aggregates that are emitted from undispersed parent soil is reconstructed. In the current study, we test the null

  6. Localization Accuracy of Distributed Inverse Solutions for Electric and Magnetic Source Imaging of Interictal Epileptic Discharges in Patients with Focal Epilepsy.

    Science.gov (United States)

    Heers, Marcel; Chowdhury, Rasheda A; Hedrich, Tanguy; Dubeau, François; Hall, Jeffery A; Lina, Jean-Marc; Grova, Christophe; Kobayashi, Eliane

    2016-01-01

    Distributed inverse solutions aim to realistically reconstruct the origin of interictal epileptic discharges (IEDs) from noninvasively recorded electroencephalography (EEG) and magnetoencephalography (MEG) signals. Our aim was to compare the performance of different distributed inverse solutions in localizing IEDs: coherent maximum entropy on the mean (cMEM), hierarchical Bayesian implementations of independent identically distributed sources (IID, minimum norm prior) and spatially coherent sources (COH, spatial smoothness prior). Source maxima (i.e., the vertex with the maximum source amplitude) of IEDs in 14 EEG and 19 MEG studies from 15 patients with focal epilepsy were analyzed. We visually compared their concordance with intracranial EEG (iEEG) based on 17 cortical regions of interest and their spatial dispersion around source maxima. Magnetic source imaging (MSI) maxima from cMEM were most often confirmed by iEEG (cMEM: 14/19, COH: 9/19, IID: 8/19 studies). COH electric source imaging (ESI) maxima co-localized best with iEEG (cMEM: 8/14, COH: 11/14, IID: 10/14 studies). In addition, cMEM was less spatially spread than COH and IID for ESI and MSI (p < 0.001 Bonferroni-corrected post hoc t test). Highest positive predictive values for cortical regions with IEDs in iEEG could be obtained with cMEM for MSI and with COH for ESI. Additional realistic EEG/MEG simulations confirmed our findings. Accurate spatially extended sources, as found in cMEM (ESI and MSI) and COH (ESI) are desirable for source imaging of IEDs because this might influence surgical decision. Our simulations suggest that COH and IID overestimate the spatial extent of the generators compared to cMEM.

  7. Prediction of monthly average global solar radiation based on statistical distribution of clearness index

    International Nuclear Information System (INIS)

    Ayodele, T.R.; Ogunjuyigbe, A.S.O.

    2015-01-01

    In this paper, a probability distribution of the clearness index is proposed for the prediction of global solar radiation. First, the clearness index is obtained from past global solar radiation data; then, the parameters of the appropriate distribution that best fits the clearness index are determined. The global solar radiation is thereafter predicted from the clearness index using an inverse transformation of the cumulative distribution function. To validate the proposed method, eight years of global solar radiation data (2000–2007) for Ibadan, Nigeria are used to determine the parameters of the appropriate probability distribution for the clearness index. The calculated parameters are then used to predict the future monthly average global solar radiation for the following year (2008). The predicted values are compared with the measured values using four statistical tests: the root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and the coefficient of determination (R²). The proposed method is also compared to existing regression models. The results show that the logistic distribution provides the best fit for the clearness index of Ibadan and that the proposed method is effective in predicting the monthly average global solar radiation, with an overall RMSE of 0.383 MJ/m²/day, MAE of 0.295 MJ/m²/day, MAPE of 2% and R² of 0.967. - Highlights: • A distribution of the clearness index is proposed for prediction of global solar radiation. • The clearness index is obtained from past global solar radiation data. • The parameters of the distribution that best fits the clearness index are determined. • Solar radiation is predicted from the clearness index using an inverse transformation. • The method is effective in predicting the monthly average global solar radiation.
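
    The prediction step, fitting a logistic distribution to historical clearness indices and inverse-transforming cumulative probabilities back to radiation, can be sketched as follows; the clearness-index values and the extraterrestrial radiation h0 are placeholders, not the Ibadan data.

        import numpy as np
        from scipy import stats

        kt_hist = np.array([0.48, 0.52, 0.55, 0.50, 0.47, 0.45,
                            0.42, 0.44, 0.49, 0.53, 0.51, 0.50])   # hypothetical monthly clearness indices
        loc, scale = stats.logistic.fit(kt_hist)                   # best-fit logistic parameters

        def predict_radiation(prob, h0):
            kt = stats.logistic.ppf(prob, loc=loc, scale=scale)    # inverse CDF of the clearness index
            return kt * h0                                         # global radiation, in the units of h0

        print(predict_radiation(0.5, h0=35.0))                     # median estimate for a month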

  8. Predicting the probability of slip in gait: methodology and distribution study.

    Science.gov (United States)

    Gragg, Jared; Yang, James

    2016-01-01

    The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
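
    The single-integral form of the slip probability, P(available < required) = ∫ f_req(u) · F_avail(u) du, is straightforward to evaluate with the trapezoidal rule, as sketched below. The lognormal choices and parameter values are illustrative; the method allows any combination of distributions for the available and required friction.

        import numpy as np
        from scipy import stats

        f_req = stats.lognorm(s=0.25, scale=0.20)     # required friction coefficient (assumed)
        f_avail = stats.lognorm(s=0.20, scale=0.45)   # available friction coefficient (assumed)

        u = np.linspace(0.0, 1.5, 20001)
        p_slip = np.trapz(f_req.pdf(u) * f_avail.cdf(u), u)   # trapezoidal single integral
        print(p_slip)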

  9. A Predictive Model for Microbial Counts on Beaches where Intertidal Sand is the Primary Source

    Science.gov (United States)

    Feng, Zhixuan; Reniers, Ad; Haus, Brian K.; Solo-Gabriele, Helena M.; Wang, John D.; Fleming, Lora E.

    2015-01-01

    Human health protection at recreational beaches requires accurate and timely information on microbiological conditions to issue advisories. The objective of this study was to develop a new numerical mass balance model for enterococci levels on nonpoint source beaches. The significant advantage of this model is its easy implementation, and it provides a detailed description of the cross-shore distribution of enterococci that is useful for beach management purposes. The performance of the balance model was evaluated by comparing predicted exceedances of a beach advisory threshold value to field data, and to a traditional regression model. Both the balance model and regression equation predicted approximately 70% the advisories correctly at the knee depth and over 90% at the waist depth. The balance model has the advantage over the regression equation in its ability to simulate spatiotemporal variations of microbial levels, and it is recommended for making more informed management decisions. PMID:25840869

  10. Distributed estimation based on observations prediction in wireless sensor networks

    KAUST Repository

    Bouchoucha, Taha

    2015-03-19

    We consider wireless sensor networks (WSNs) used for distributed estimation of unknown parameters. Due to the limited bandwidth, sensor nodes quantize their noisy observations before transmission to a fusion center (FC) for the estimation process. In this letter, the correlation between observations is exploited to reduce the mean-square error (MSE) of the distributed estimation. Specifically, sensor nodes generate local predictions of their observations and then transmit the quantized prediction errors (innovations) to the FC rather than the quantized observations. The analytic and numerical results show that transmitting the innovations rather than the observations mitigates the effect of quantization noise and hence reduces the MSE. © 2015 IEEE.
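
    A toy sketch of "transmit the quantized innovation instead of the quantized observation" is given below: sensor and fusion center share the same one-step predictor, so only the small prediction error crosses the link. The AR(1) predictor, step size, and synthetic observations are assumptions for illustration.

        import numpy as np

        def uniform_quantize(x, step):
            return step * np.round(x / step)

        rho, step = 0.9, 0.05
        obs = np.cumsum(0.1 * np.random.randn(100)) + 1.0      # fake correlated observations
        recon = np.zeros_like(obs)
        recon[0] = uniform_quantize(obs[0], step)
        for k in range(1, len(obs)):
            pred = rho * recon[k - 1]                          # prediction shared by sensor and FC
            e_q = uniform_quantize(obs[k] - pred, step)        # quantized innovation sent over the link
            recon[k] = pred + e_q                              # fusion-center reconstruction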

  11. Microscopic prediction of speech intelligibility in spatially distributed speech-shaped noise for normal-hearing listeners.

    Science.gov (United States)

    Geravanchizadeh, Masoud; Fallah, Ali

    2015-12-01

    A binaural and psychoacoustically motivated intelligibility model, based on a well-known monaural microscopic model, is proposed. This model simulates a phoneme recognition task in the presence of spatially distributed speech-shaped noise in anechoic scenarios. In the proposed model, binaural advantage effects are considered by generating a feature vector for a dynamic-time-warping speech recognizer. This vector consists of three subvectors: two monaural subvectors to model better-ear hearing, and a binaural subvector to simulate the binaural unmasking effect. The binaural unit of the model is based on equalization-cancellation theory. The model operates blindly, which means separate recordings of speech and noise are not required for the predictions. Speech intelligibility tests were conducted with 12 normal-hearing listeners by collecting speech reception thresholds (SRTs) in the presence of single and multiple sources of speech-shaped noise. The comparison of the model predictions with the measured binaural SRTs, and with the predictions of a macroscopic binaural model called extended equalization-cancellation, shows that this approach predicts intelligibility in anechoic scenarios with good precision. The square of the correlation coefficient (r²) and the mean absolute error between the model predictions and the measurements are 0.98 and 0.62 dB, respectively.

  12. Prediction method for thermal ratcheting of a cylinder subjected to axially moving temperature distribution

    International Nuclear Information System (INIS)

    Wada, Hiroshi; Igari, Toshihide; Kitade, Shoji.

    1989-01-01

    A prediction method was proposed for plastic ratcheting of a cylinder, which was subjected to axially moving temperature distribution without primary stress. First, a mechanism of this ratcheting was proposed, which considered the movement of temperature distribution as a driving force of this phenomenon. Predictive equations of the ratcheting strain for two representative temperature distributions were proposed based on this mechanism by assuming the elastic-perfectly-plastic material behavior. Secondly, an elastic-plastic analysis was made on a cylinder subjected to the representative two temperature distributions. Analytical results coincided well with the predicted results, and the applicability of the proposed equations was confirmed. (author)

  13. Quantum key distribution with an unknown and untrusted source

    Science.gov (United States)

    Zhao, Yi; Qi, Bing; Lo, Hoi-Kwong

    2009-03-01

    The security of a standard bi-directional "plug & play" quantum key distribution (QKD) system has been an open question for a long time. This is mainly because its source is equivalently controlled by an eavesdropper, which means the source is unknown and untrusted. Qualitative discussion on this subject has been made previously. In this paper, we present the first quantitative security analysis of a general class of QKD protocols whose sources are unknown and untrusted. The security of the standard BB84 protocol, the weak+vacuum decoy state protocol, and the one-decoy decoy state protocol, with unknown and untrusted sources, is rigorously proved. We derive rigorous lower bounds on the secure key generation rates of the above three protocols. Our numerical simulation results show that QKD with an untrusted source gives a key generation rate that is close to that with a trusted source. Our work is published in [1]. [1] Y. Zhao, B. Qi, and H.-K. Lo, Phys. Rev. A 77, 052327 (2008).

  14. Do predictions from Species Sensitivity Distributions match with field data?

    International Nuclear Information System (INIS)

    Smetanová, S.; Bláha, L.; Liess, M.; Schäfer, R.B.; Beketov, M.A.

    2014-01-01

    Species Sensitivity Distribution (SSD) is a statistical model that can be used to predict effects of contaminants on biological communities, but only few comparisons of this model with field studies have been conducted so far. In the present study we used measured pesticides concentrations from streams in Germany, France, and Finland, and we used SSD to calculate msPAF (multiple substance potentially affected fraction) values based on maximum toxic stress at localities. We compared these SSD-based predictions with the actual effects on stream invertebrates quantified by the SPEAR pesticides bioindicator. The results show that the msPAFs correlated well with the bioindicator, however, the generally accepted SSD threshold msPAF of 0.05 (5% of species are predicted to be affected) severely underestimated the observed effects (msPAF values causing significant effects are 2–1000-times lower). These results demonstrate that validation with field data is required to define the appropriate thresholds for SSD predictions. - Highlights: • We validated the statistical model Species Sensitivity Distribution with field data. • Good correlation was found between the model predictions and observed effects. • But, the generally accepted threshold msPAF 0.05 severely underestimated the effects. - Comparison of the SSD-based prediction with the field data evaluated with the SPEAR pesticides index shows that SSD threshold msPAF of 0.05 severely underestimates the effects observed in the field
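
    A minimal sketch of the msPAF calculation is shown below: each pesticide's potentially affected fraction (PAF) comes from a log-normal SSD, and the fractions are combined by response addition. The example SSD parameters and concentrations are made up for illustration.

        import numpy as np
        from scipy import stats

        def paf(conc, log10_mu, log10_sigma):
            # Fraction of species affected at concentration `conc`, given a log-normal SSD.
            return stats.norm.cdf((np.log10(conc) - log10_mu) / log10_sigma)

        def ms_paf(concs, mus, sigmas):
            pafs = [paf(c, m, s) for c, m, s in zip(concs, mus, sigmas)]
            return 1.0 - np.prod([1.0 - p for p in pafs])      # response addition over substances

        print(ms_paf(concs=[0.1, 0.02], mus=[1.2, 0.3], sigmas=[0.7, 0.6]))  # hypothetical µg/L mixture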

  15. Blind Separation of Nonstationary Sources Based on Spatial Time-Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Zhang Yimin

    2006-01-01

    Full Text Available Blind source separation (BSS) based on spatial time-frequency distributions (STFDs) provides improved performance over blind source separation methods based on second-order statistics when dealing with signals that are localized in the time-frequency (t-f) domain. In this paper, we propose the use of STFD matrices for both whitening and recovery of the mixing matrix, which are two stages commonly required in many BSS methods, to provide BSS performance that is robust to noise. In addition, a simple method is proposed to select the auto- and cross-term regions of the time-frequency distribution (TFD). To further improve the BSS performance, t-f grouping techniques are introduced to reduce the number of signals under consideration, and to allow the receiver array to separate more sources than the number of array sensors, provided that the sources have disjoint t-f signatures. With the use of one or more techniques proposed in this paper, improved performance of blind separation of nonstationary signals can be achieved.

  16. Research on Fault Prediction of Distribution Network Based on Large Data

    Directory of Open Access Journals (Sweden)

    Jinglong Zhou

    2017-01-01

    Full Text Available With the continuous development of information technology and the improvement of distribution automation levels, the amount of on-line monitoring and statistical data in distribution systems is increasing, and big data techniques are being applied to distribution system data. This paper describes the technologies used to collect, analyze and process distribution system data, and investigates artificial neural network mining algorithms together with big data for fault diagnosis and prediction in the distribution network.

  17. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems.

    Science.gov (United States)

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-03-12

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS receives is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via the rotational invariance technique (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariance relationships are constructed. Then, we extend ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that spectrum peak searching is avoided. Therefore, compared to traditional 2D DOA estimation algorithms, the proposed algorithm has significantly lower computational complexity. The intersecting point of two rays, which come from two different directions measured by two uniform rectangular arrays (URAs), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm.

  18. Simulated and measured neutron/gamma light output distribution for poly-energetic neutron/gamma sources

    Science.gov (United States)

    Hosseini, S. A.; Zangian, M.; Aghabozorgi, S.

    2018-03-01

    In the present paper, the light output distribution due to poly-energetic neutron/gamma (neutron or gamma) source was calculated using the developed MCNPX-ESUT-PE (MCNPX-Energy engineering of Sharif University of Technology-Poly Energetic version) computational code. The simulation of light output distribution includes the modeling of the particle transport, the calculation of scintillation photons induced by charged particles, simulation of the scintillation photon transport and considering the light resolution obtained from the experiment. The developed computational code is able to simulate the light output distribution due to any neutron/gamma source. In the experimental step of the present study, the neutron-gamma discrimination based on the light output distribution was performed using the zero crossing method. As a case study, 241Am-9Be source was considered and the simulated and measured neutron/gamma light output distributions were compared. There is an acceptable agreement between the discriminated neutron/gamma light output distributions obtained from the simulation and experiment.

  19. Confusion-limited extragalactic source survey at 4.755 GHz. I. Source list and areal distributions

    International Nuclear Information System (INIS)

    Ledden, J.E.; Broderick, J.J.; Condon, J.J.; Brown, R.L.

    1980-01-01

    A confusion-limited 4.755-GHz survey covering 0.00956 sr between right ascensions 07h05m and 18h near declination +35° has been made with the NRAO 91-m telescope. The survey found 237 sources and is complete above 15 mJy. Source counts between 15 and 100 mJy were obtained directly. The P(D) distribution was used to determine the number counts between 0.5 and 13.2 mJy, to search for anisotropy in the density of faint extragalactic sources, and to set a 99%-confidence upper limit of 1.83 mK on the rms temperature fluctuation of the 2.7-K cosmic microwave background on angular scales smaller than 7.3 arcmin. The discrete-source density, normalized to the static Euclidean slope, falls off sufficiently rapidly below 100 mJy that no new population of faint flat-spectrum sources is required to explain the 4.755-GHz source counts.

  20. Novel Radiobiological Gamma Index for Evaluation of 3-Dimensional Predicted Dose Distribution

    Energy Technology Data Exchange (ETDEWEB)

    Sumida, Iori, E-mail: sumida@radonc.med.osaka-u.ac.jp [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Osaka (Japan); Yamaguchi, Hajime; Kizaki, Hisao; Aboshi, Keiko; Tsujii, Mari; Yoshikawa, Nobuhiko; Yamada, Yuji [Department of Radiation Oncology, NTT West Osaka Hospital, Osaka (Japan); Suzuki, Osamu; Seo, Yuji [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Osaka (Japan); Isohashi, Fumiaki [Department of Radiation Oncology, NTT West Osaka Hospital, Osaka (Japan); Yoshioka, Yasuo [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Osaka (Japan); Ogawa, Kazuhiko [Department of Radiation Oncology, NTT West Osaka Hospital, Osaka (Japan)

    2015-07-15

    Purpose: To propose a gamma index-based dose evaluation index that integrates the radiobiological parameters of tumor control (TCP) and normal tissue complication probabilities (NTCP). Methods and Materials: Fifteen prostate and head and neck (H&N) cancer patients received intensity modulated radiation therapy. Before treatment, patient-specific quality assurance was conducted via beam-by-beam analysis, and beam-specific dose error distributions were generated. The predicted 3-dimensional (3D) dose distribution was calculated by back-projection of relative dose error distribution per beam. A 3D gamma analysis of different organs (prostate: clinical [CTV] and planned target volumes [PTV], rectum, bladder, femoral heads; H&N: gross tumor volume [GTV], CTV, spinal cord, brain stem, both parotids) was performed using predicted and planned dose distributions under 2%/2 mm tolerance and physical gamma passing rate was calculated. TCP and NTCP values were calculated for voxels with physical gamma indices (PGI) >1. We propose a new radiobiological gamma index (RGI) to quantify the radiobiological effects of TCP and NTCP and calculate radiobiological gamma passing rates. Results: The mean RGI gamma passing rates for prostate cases were significantly different compared with those of PGI (P<.03–.001). The mean RGI gamma passing rates for H&N cases (except for GTV) were significantly different compared with those of PGI (P<.001). Differences in gamma passing rates between PGI and RGI were due to dose differences between the planned and predicted dose distributions. Radiobiological gamma distribution was visualized to identify areas where the dose was radiobiologically important. Conclusions: RGI was proposed to integrate radiobiological effects into PGI. This index would assist physicians and medical physicists not only in physical evaluations of treatment delivery accuracy, but also in clinical evaluations of predicted dose distribution.
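
    For reference, the physical gamma index (PGI) underlying the proposed RGI can be sketched in one dimension under the 2%/2 mm tolerance used above: for each evaluation point, gamma is the minimum over reference points of sqrt((dose difference / dose tolerance)^2 + (distance / DTA)^2), and gamma <= 1 passes. The dose profiles below are placeholders, not clinical distributions, and the radiobiological (TCP/NTCP) weighting of the RGI is not reproduced here.

        import numpy as np

        def gamma_1d(x_mm, d_ref, d_eval, dose_tol=0.02, dta_mm=2.0):
            d_norm = dose_tol * d_ref.max()               # global dose-difference criterion
            g = np.empty(len(x_mm))
            for i, (xe, de) in enumerate(zip(x_mm, d_eval)):
                g[i] = np.sqrt(((de - d_ref) / d_norm) ** 2
                               + ((xe - x_mm) / dta_mm) ** 2).min()
            return g

        x = np.linspace(-50, 50, 201)                              # mm
        planned = np.exp(-x**2 / (2 * 20.0**2))                    # planned dose profile
        predicted = 1.02 * np.exp(-(x - 1.0)**2 / (2 * 20.0**2))   # back-projected predicted profile
        passing_rate = 100.0 * np.mean(gamma_1d(x, planned, predicted) <= 1.0)
        print(passing_rate)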

  1. Nonparametric Bayesian predictive distributions for future order statistics

    Science.gov (United States)

    Richard A. Johnson; James W. Evans; David W. Green

    1999-01-01

    We derive the predictive distribution for a specified order statistic, determined from a future random sample, under a Dirichlet process prior. Two variants of the approach are treated and some limiting cases studied. A practical application to monitoring the strength of lumber is discussed including choices of prior expectation and comparisons made to a Bayesian...

  2. Uncertainties in predicting species distributions under climate change: a case study using Tetranychus evansi (Acari: Tetranychidae), a widespread agricultural pest.

    Science.gov (United States)

    Meynard, Christine N; Migeon, Alain; Navajas, Maria

    2013-01-01

    Many species are shifting their distributions due to climate change and to increasing international trade that allows dispersal of individuals across the globe. In the case of agricultural pests, such range shifts may heavily impact agriculture. Species distribution modelling may help to predict potential changes in pest distributions. However, these modelling strategies are subject to large uncertainties coming from different sources. Here we used the case of the tomato red spider mite (Tetranychus evansi), an invasive pest that affects some of the most important agricultural crops worldwide, to show how uncertainty may affect forecasts of the potential range of the species. We explored three aspects of uncertainty: (1) species prevalence; (2) modelling method; and (3) variability in environmental responses between mites belonging to two invasive clades of T. evansi. Consensus techniques were used to forecast the potential range of the species under current and two different climate change scenarios for 2080, and variance between model projections were mapped to identify regions of high uncertainty. We revealed large predictive variations linked to all factors, although prevalence had a greater influence than the statistical model once the best modelling strategies were selected. The major areas threatened under current conditions include tropical countries in South America and Africa, and temperate regions in North America, the Mediterranean basin and Australia. Under future scenarios, the threat shifts towards northern Europe and some other temperate regions in the Americas, whereas tropical regions in Africa present a reduced risk. Analysis of niche overlap suggests that the current differential distribution of mites of the two clades of T. evansi can be partially attributed to environmental niche differentiation. Overall this study shows how consensus strategies and analysis of niche overlap can be used jointly to draw conclusions on invasive threat

  2. Uncertainties in predicting species distributions under climate change: a case study using Tetranychus evansi (Acari: Tetranychidae), a widespread agricultural pest.

    Directory of Open Access Journals (Sweden)

    Christine N Meynard

    Full Text Available Many species are shifting their distributions due to climate change and to increasing international trade that allows dispersal of individuals across the globe. In the case of agricultural pests, such range shifts may heavily impact agriculture. Species distribution modelling may help to predict potential changes in pest distributions. However, these modelling strategies are subject to large uncertainties coming from different sources. Here we used the case of the tomato red spider mite (Tetranychus evansi), an invasive pest that affects some of the most important agricultural crops worldwide, to show how uncertainty may affect forecasts of the potential range of the species. We explored three aspects of uncertainty: (1) species prevalence; (2) modelling method; and (3) variability in environmental responses between mites belonging to two invasive clades of T. evansi. Consensus techniques were used to forecast the potential range of the species under current and two different climate change scenarios for 2080, and variance between model projections were mapped to identify regions of high uncertainty. We revealed large predictive variations linked to all factors, although prevalence had a greater influence than the statistical model once the best modelling strategies were selected. The major areas threatened under current conditions include tropical countries in South America and Africa, and temperate regions in North America, the Mediterranean basin and Australia. Under future scenarios, the threat shifts towards northern Europe and some other temperate regions in the Americas, whereas tropical regions in Africa present a reduced risk. Analysis of niche overlap suggests that the current differential distribution of mites of the two clades of T. evansi can be partially attributed to environmental niche differentiation. Overall this study shows how consensus strategies and analysis of niche overlap can be used jointly to draw conclusions on invasive

  4. A calculation of dose distribution around 32P spherical sources and its clinical application

    International Nuclear Information System (INIS)

    Ohara, Ken; Tanaka, Yoshiaki; Nishizawa, Kunihide; Maekoshi, Hisashi

    1977-01-01

    In order to avoid radiation hazards in the radiation therapy of craniopharyngioma using 32P, it is helpful to prepare a detailed dose distribution in the vicinity of the source in the tissue. Valley's method is used for the calculations. A problem with the method is pointed out and the method itself is refined numerically: the region of xi over which an approximate polynomial is available is extended, and the optimum degree of the polynomial is determined to be 9. The usefulness of the polynomial is examined by comparison with Berger's scaled absorbed dose distribution F(xi) and Valley's result. The dose and dose rate distributions around uniformly distributed spherical sources are computed from the termwise integration of our polynomial of degree 9 over the range of xi from 0 to 1.7. The dose distributions, calculated from the spherical surface to a point 0.5 cm outside the source, are given for source radii of 0.5, 0.6, 0.7, 1.0, and 1.5 cm. The therapeutic dose for a craniopharyngioma that has a spherically shaped cyst, and the absorbed dose to the normal tissue (oculomotor nerve), are obtained from these dose rate distributions. (auth.)

  5. [Prediction of potential geographic distribution of Lyme disease in Qinghai province with Maximum Entropy model].

    Science.gov (United States)

    Zhang, Lin; Hou, Xuexia; Liu, Huixin; Liu, Wei; Wan, Kanglin; Hao, Qin

    2016-01-01

    To predict the potential geographic distribution of Lyme disease in Qinghai province by using the Maximum Entropy model (MaxEnt). The sero-diagnosis data of Lyme disease in 6 counties (Huzhu, Zeku, Tongde, Datong, Qilian and Xunhua) and the environmental and anthropogenic data including altitude, human footprint, normalized difference vegetation index (NDVI) and temperature in Qinghai province since 1990 were collected. By using the data of Huzhu, Zeku and Tongde, the prediction of the potential distribution of Lyme disease in Qinghai was conducted with MaxEnt. The prediction results were compared with the human sero-prevalence of Lyme disease in Datong, Qilian and Xunhua counties in Qinghai. Three hot spots of Lyme disease were predicted in Qinghai, all in the eastern forest areas. Furthermore, the NDVI played the most important role in the model prediction, followed by human footprint. Datong, Qilian and Xunhua counties are all in eastern Qinghai. Xunhua was in hot spot area Ⅱ, Datong was close to the north of hot spot area Ⅲ, while Qilian, with the lowest sero-prevalence of Lyme disease, was not in the hot spot areas. The data were well modeled in MaxEnt (Area Under Curve = 0.980). The actual distribution of Lyme disease in Qinghai was consistent with the results of the model prediction. MaxEnt could be used in predicting the potential distribution patterns of Lyme disease. The distribution of vegetation and the range and intensity of human activity might be related to the Lyme disease distribution.
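
    A minimal presence/background sketch of this style of habitat-suitability modelling is given below. Logistic regression on presence cells versus random background cells is used as a lightweight stand-in for MaxEnt, and the covariate values (NDVI, human footprint, altitude, temperature) are simulated placeholders rather than the Qinghai data.

```python
# Minimal presence/background sketch (a stand-in for MaxEnt, not the study's
# workflow): logistic regression on presence cells vs. random background cells.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical covariates [NDVI, human footprint, altitude (km), temperature (C)]
presence = rng.normal([0.6, 30.0, 2.5, 8.0], [0.1, 10.0, 0.4, 2.0], size=(40, 4))
background = rng.normal([0.3, 15.0, 3.5, 4.0], [0.2, 15.0, 0.8, 4.0], size=(400, 4))

X = np.vstack([presence, background])
y = np.r_[np.ones(len(presence)), np.zeros(len(background))]

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

# Relative habitat-suitability scores (0..1) for a few new grid cells
new_cells = rng.normal([0.5, 25.0, 2.8, 7.0], [0.2, 12.0, 0.6, 3.0], size=(5, 4))
print(model.predict_proba(new_cells)[:, 1].round(2))
```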

  6. Modeling the distribution of Culex tritaeniorhynchus to predict Japanese encephalitis distribution in the Republic of Korea

    Directory of Open Access Journals (Sweden)

    Penny Masuoka

    2010-11-01

    Full Text Available Over 35,000 cases of Japanese encephalitis (JE) are reported worldwide each year. Culex tritaeniorhynchus is the primary vector of the JE virus, while wading birds are natural reservoirs and swine amplifying hosts. As part of a JE risk analysis, the ecological niche modeling programme, Maxent, was used to develop a predictive model for the distribution of Cx. tritaeniorhynchus in the Republic of Korea, using mosquito collection data, temperature, precipitation, elevation, land cover and the normalized difference vegetation index (NDVI). The resulting probability maps from the model were consistent with the known environmental limitations of the mosquito, with low probabilities predicted for forest-covered mountains. July minimum temperature and land cover were the most important variables in the model. Elevation, summer NDVI (July-September), precipitation in July, summer minimum temperature (May-August) and maximum temperature for fall and winter months also contributed to the model. Comparison of the Cx. tritaeniorhynchus model to the distribution of JE cases in the Republic of Korea from 2001 to 2009 showed that cases among a highly vaccinated Korean population were located in high-probability areas for Cx. tritaeniorhynchus. No recent JE cases were reported from the eastern coastline, where higher probabilities of mosquitoes were predicted, but where only small numbers of pigs are raised. The geographical distribution of reported JE cases corresponded closely with the predicted high-probability areas for Cx. tritaeniorhynchus, making the map a useful tool for health risk analysis that could be used for planning preventive public health measures.

  7. Multi-scale approach for predicting fish species distributions across coral reef seascapes.

    Directory of Open Access Journals (Sweden)

    Simon J Pittman

    Full Text Available Two of the major limitations to effective management of coral reef ecosystems are a lack of information on the spatial distribution of marine species and a paucity of data on the interacting environmental variables that drive distributional patterns. Advances in marine remote sensing, together with the novel integration of landscape ecology and advanced niche modelling techniques, provide an unprecedented opportunity to reliably model and map marine species distributions across many kilometres of coral reef ecosystems. We developed a multi-scale approach using three-dimensional seafloor morphology and across-shelf location to predict spatial distributions for five common Caribbean fish species. Seascape topography was quantified from high resolution bathymetry at five spatial scales (5-300 m radii) surrounding fish survey sites. Model performance and map accuracy were assessed for two high performing machine-learning algorithms: Boosted Regression Trees (BRT) and Maximum Entropy Species Distribution Modelling (MaxEnt). The three most important predictors were geographical location across the shelf, followed by a measure of topographic complexity. Predictor contribution differed among species, yet rarely changed across spatial scales. BRT provided 'outstanding' model predictions (AUC > 0.9) for three of five fish species. MaxEnt provided 'outstanding' model predictions for two of five species, with the remaining three models considered 'excellent' (AUC = 0.8-0.9). In contrast, MaxEnt spatial predictions were markedly more accurate (92% map accuracy) than BRT (68% map accuracy). We demonstrate that reliable spatial predictions for a range of key fish species can be achieved by modelling the interaction between the geographical location across the shelf and the topographic heterogeneity of seafloor structure. This multi-scale, analytic approach is an important new cost-effective tool to accurately delineate essential fish habitat and support

  8. Quantum key distribution with entangled photon sources

    International Nuclear Information System (INIS)

    Ma Xiongfeng; Fung, Chi-Hang Fred; Lo, H.-K.

    2007-01-01

    A parametric down-conversion (PDC) source can be used as either a triggered single-photon source or an entangled-photon source in quantum key distribution (QKD). The triggering PDC QKD has already been studied in the literature. On the other hand, a model and a post-processing protocol for the entanglement PDC QKD are still missing. We fill in this important gap by proposing such a model and a post-processing protocol for the entanglement PDC QKD. Although the PDC model is proposed to study the entanglement-based QKD, we emphasize that our generic model may also be useful for other non-QKD experiments involving a PDC source. Since an entangled PDC source is a basis-independent source, we apply Koashi and Preskill's security analysis to the entanglement PDC QKD. We also investigate the entanglement PDC QKD with two-way classical communications. We find that the recurrence scheme increases the key rate and the Gottesman-Lo protocol helps tolerate higher channel losses. By simulating a recent 144-km open-air PDC experiment, we compare three implementations: entanglement PDC QKD, triggering PDC QKD, and coherent-state QKD. The simulation result suggests that the entanglement PDC QKD can tolerate higher channel losses than the coherent-state QKD. The coherent-state QKD with decoy states is able to achieve the highest key rate in the low- and medium-loss regions. By applying the Gottesman-Lo two-way post-processing protocol, the entanglement PDC QKD can tolerate up to 70 dB combined channel losses (35 dB for each channel) provided that the PDC source is placed in between Alice and Bob. After considering statistical fluctuations, the PDC setup can tolerate up to 53 dB channel losses.

  9. Predictions for an invaded world: A strategy to predict the distribution of native and non-indigenous species at multiple scales

    Science.gov (United States)

    Reusser, D.A.; Lee, H.

    2008-01-01

    Habitat models can be used to predict the distributions of marine and estuarine non-indigenous species (NIS) over several spatial scales. At an estuary scale, our goal is to predict the estuaries most likely to be invaded, but at a habitat scale, the goal is to predict the specific locations within an estuary that are most vulnerable to invasion. As an initial step in evaluating several habitat models, model performance for a suite of benthic species with reasonably well-known distributions on the Pacific coast of the US needs to be compared. We discuss the utility of non-parametric multiplicative regression (NPMR) for predicting habitat- and estuary-scale distributions of native and NIS. NPMR incorporates interactions among variables, allows qualitative and categorical variables, and utilizes data on absence as well as presence. Preliminary results indicate that NPMR generally performs well at both spatial scales and that distributions of NIS are predicted as well as those of native species. For most species, latitude was the single best predictor, although similar model performance could be obtained at both spatial scales with combinations of other habitat variables. Errors of commission were more frequent at a habitat scale, with omission and commission errors approximately equal at an estuary scale. © 2008 International Council for the Exploration of the Sea. Published by Oxford Journals. All rights reserved.

  10. The Integration of Renewable Energy Sources into Electric Power Distribution Systems, Vol. II Utility Case Assessments

    Energy Technology Data Exchange (ETDEWEB)

    Zaininger, H.W.

    1994-01-01

    Electric utility distribution system impacts associated with the integration of renewable energy sources such as photovoltaics (PV) and wind turbines (WT) are considered in this project. The impacts are expected to vary from site to site according to the following characteristics: the local solar insolation and/or wind characteristics, renewable energy source penetration level, whether battery or other energy storage systems are applied, and local utility distribution design standards and planning practices. Small, distributed renewable energy sources are connected to the utility distribution system like other, similar kW- and MW-scale equipment and loads. Residential applications are expected to be connected to single-phase 120/240-V secondaries. Larger kW-scale applications may be connected to three-phase secondaries, and larger hundred-kW- and MW-scale applications, such as MW-scale windfarms or PV plants, may be connected to electric utility primary systems via customer-owned primary and secondary collection systems. In any case, the installation of small, distributed renewable energy sources is expected to have a significant impact on local utility distribution primary and secondary system economics. Small, distributed renewable energy sources installed on utility distribution systems will also produce nonsite-specific utility generation system benefits such as energy and capacity displacement benefits, in addition to the local site-specific distribution system benefits. Although generation system benefits are not site-specific, they are utility-specific, and they vary significantly among utilities in different regions. In addition, transmission system benefits, environmental benefits and other benefits may apply. These benefits also vary significantly among utilities and regions. Seven utility case studies considering PV, WT, and battery storage were conducted to identify a range of potential renewable energy source distribution system applications. The

  11. A Monte Carlo Method for the Analysis of Gamma Radiation Transport from Distributed Sources in Laminated Shields

    Energy Technology Data Exchange (ETDEWEB)

    Leimdoerfer, M

    1964-02-15

    A description is given of a method for calculating the penetration and energy deposition of gamma radiation, based on Monte Carlo techniques. The essential feature is the application of the exponential transformation to promote the transport of penetrating quanta and to balance the steep spatial variations of the source distributions which appear in secondary gamma emission problems. The estimated statistical errors in a number of sample problems, involving concrete shields with thicknesses up to 500 cm, are shown to be quite favorable, even at relatively short computing times. A practical reactor shielding problem is also shown and the predictions compared with measurements.
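
    The exponential transformation mentioned above can be illustrated with a toy one-dimensional example: path lengths are sampled from a biased ("stretched") exponential and each history carries a weight that restores an unbiased estimate. The absorption-only setup, cross sections and slab thickness below are assumptions for illustration, not the report's laminated-shield model.

```python
# Toy 1-D illustration of the exponential transformation (path-length biasing)
# used to drive Monte Carlo particles through thick shields.  Absorption-only,
# mono-energetic; Sigma, thickness and the biasing parameter are assumptions.
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0          # true total cross section (1/cm)
sigma_b = 0.1        # biased ("stretched") cross section (1/cm)
thickness = 30.0     # slab thickness (cm); analog MC would need ~1e13 histories
n = 200_000

# Sample path lengths from the biased exponential and carry the weight
# w = p(x)/q(x) that restores an unbiased estimate.
x = rng.exponential(1.0 / sigma_b, size=n)
w = (sigma / sigma_b) * np.exp(-(sigma - sigma_b) * x)

transmitted = w * (x > thickness)
estimate = transmitted.mean()
std_err = transmitted.std(ddof=1) / np.sqrt(n)

print(f"biased MC : {estimate:.3e} +/- {std_err:.1e}")
print(f"analytic  : {np.exp(-sigma * thickness):.3e}")
```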

  12. A Monte Carlo Method for the Analysis of Gamma Radiation Transport from Distributed Sources in Laminated Shields

    International Nuclear Information System (INIS)

    Leimdoerfer, M.

    1964-02-01

    A description is given of a method for calculating the penetration and energy deposition of gamma radiation, based on Monte Carlo techniques. The essential feature is the application of the exponential transformation to promote the transport of penetrating quanta and to balance the steep spatial variations of the source distributions which appear in secondary gamma emission problems. The estimated statistical errors in a number of sample problems, involving concrete shields with thicknesses up to 500 cm, are shown to be quite favorable, even at relatively short computing times. A practical reactor shielding problem is also shown and the predictions compared with measurements

  13. Searching Malware and Sources of Its Distribution in the Internet

    Directory of Open Access Journals (Sweden)

    L. L. Protsenko

    2011-09-01

    Full Text Available The article presents, for the first time, an algorithm developed by the author for searching for malware and the sources of its distribution, based on HijackThis logs published on the Internet.

  14. Hierarchical spatial models for predicting pygmy rabbit distribution and relative abundance

    Science.gov (United States)

    Wilson, T.L.; Odei, J.B.; Hooten, M.B.; Edwards, T.C.

    2010-01-01

    Conservationists routinely use species distribution models to plan conservation, restoration and development actions, while ecologists use them to infer process from pattern. These models tend to work well for common or easily observable species, but are of limited utility for rare and cryptic species. This may be because honest accounting of known observation bias and spatial autocorrelation are rarely included, thereby limiting statistical inference of resulting distribution maps. We specified and implemented a spatially explicit Bayesian hierarchical model for a cryptic mammal species (pygmy rabbit Brachylagus idahoensis). Our approach used two levels of indirect sign that are naturally hierarchical (burrows and faecal pellets) to build a model that allows for inference on regression coefficients as well as spatially explicit model parameters. We also produced maps of rabbit distribution (occupied burrows) and relative abundance (number of burrows expected to be occupied by pygmy rabbits). The model demonstrated statistically rigorous spatial prediction by including spatial autocorrelation and measurement uncertainty. We demonstrated flexibility of our modelling framework by depicting probabilistic distribution predictions using different assumptions of pygmy rabbit habitat requirements. Spatial representations of the variance of posterior predictive distributions were obtained to evaluate heterogeneity in model fit across the spatial domain. Leave-one-out cross-validation was conducted to evaluate the overall model fit. Synthesis and applications. Our method draws on the strengths of previous work, thereby bridging and extending two active areas of ecological research: species distribution models and multi-state occupancy modelling. Our framework can be extended to encompass both larger extents and other species for which direct estimation of abundance is difficult. © 2010 The Authors. Journal compilation © 2010 British Ecological Society.

  15. Statistical measurement of the gamma-ray source-count distribution as a function of energy

    Science.gov (United States)

    Zechlin, H.-S.; Cuoco, A.; Donato, F.; Fornengo, N.; Regis, M.

    2017-01-01

    Photon-count statistics have recently been proven to provide a sensitive observable for characterizing gamma-ray source populations and for measuring the composition of the gamma-ray sky. In this work, we generalize the use of the standard 1-point probability distribution function (1pPDF) to decompose the high-latitude gamma-ray emission observed with Fermi-LAT into: (i) point-source contributions, (ii) the Galactic foreground contribution, and (iii) a diffuse isotropic background contribution. We analyze gamma-ray data in five adjacent energy bands between 1 and 171 GeV. We measure the source-count distribution dN/dS as a function of energy, and demonstrate that our results extend current measurements from source catalogs to the regime of so far undetected sources. Our method improves the sensitivity for resolving point-source populations by about one order of magnitude in flux. The dN/dS distribution as a function of flux is found to be compatible with a broken power law. We derive upper limits on further possible breaks as well as the angular power of unresolved sources. We discuss the composition of the gamma-ray sky and capabilities of the 1pPDF method.
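
    A small sketch of a broken power-law differential source-count distribution dN/dS, and of the number of sources it implies above a flux threshold, is given below; the normalisation, break flux and indices are illustrative placeholders rather than the Fermi-LAT fit.

```python
# Sketch of a broken power-law differential source-count distribution dN/dS
# and the implied number of sources above a flux threshold.  The normalisation
# and indices below are illustrative placeholders, not the Fermi-LAT values.
import numpy as np

A = 1.0e11        # dN/dS normalisation at the break (per unit flux per sr)
S_b = 1.0e-8      # break flux (photons cm^-2 s^-1)
n1, n2 = 2.5, 1.6 # indices above / below the break (n1 > 2 so N is finite)

def dN_dS(S):
    S = np.asarray(S, dtype=float)
    return np.where(S >= S_b, A * (S / S_b) ** (-n1), A * (S / S_b) ** (-n2))

def N_above(S_min, S_max=1.0e-4, num=200_000):
    """Sources per steradian with flux in [S_min, S_max] (log-grid trapezoid)."""
    S = np.logspace(np.log10(S_min), np.log10(S_max), num)
    return np.trapz(dN_dS(S), S)

for S_min in (1e-10, 1e-9, 1e-8):
    print(f"S > {S_min:.0e}: ~{N_above(S_min):.3g} sources / sr")
```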

  16. Performance prediction model for distributed applications on multicore clusters

    CSIR Research Space (South Africa)

    Khanyile, NP

    2012-07-01

    Full Text Available discusses some of the shortcomings of this law in the current age. We propose a theoretical model for predicting the behavior of a distributed algorithm given the network restrictions of the cluster used. The paper focuses on the impact of latency...

  17. Predicting the distribution of bed material accumulation using river network sediment budgets

    Science.gov (United States)

    Wilkinson, Scott N.; Prosser, Ian P.; Hughes, Andrew O.

    2006-10-01

    Assessing the spatial distribution of bed material accumulation in river networks is important for determining the impacts of erosion on downstream channel form and habitat and for planning erosion and sediment management. A model that constructs spatially distributed budgets of bed material sediment is developed to predict the locations of accumulation following land use change. For each link in the river network, GIS algorithms are used to predict bed material supply from gullies, river banks, and upstream tributaries and to compare total supply with transport capacity. The model is tested in the 29,000 km2 Murrumbidgee River catchment in southeast Australia. It correctly predicts the presence or absence of accumulation in 71% of river links, which is significantly better performance than previous models, which do not account for spatial variability in sediment supply and transport capacity. Representing transient sediment storage is important for predicting smaller accumulations. Bed material accumulation is predicted in 25% of the river network, indicating its importance as an environmental problem in Australia.

  18. Prediction future asset price which is non-concordant with the historical distribution

    Science.gov (United States)

    Seong, Ng Yew; Hin, Pooi Ah

    2015-12-01

    This paper attempts to predict the major characteristics of a future asset price that is non-concordant with the distribution estimated from today's price and the prices on a large number of previous days. The three major characteristics of the i-th non-concordant asset price are: the length of the interval between the occurrence times of the previous and the present non-concordant prices; an indicator taking the value -1 or 1 according to whether the non-concordant price is extremely small or extremely large; and the degree of non-concordance, given by the negative logarithm of the probability of the left or right tail whose end point is the observed future price. The vector of the three major characteristics of the next non-concordant price is modelled as dependent on the vectors corresponding to the present and l - 1 previous non-concordant prices via a 3-dimensional conditional distribution derived from a 3(l + 1)-dimensional power-normal mixture distribution. The marginal distribution for each of the three major characteristics can then be derived from the conditional distribution. The mean of the j-th marginal distribution is an estimate of the j-th characteristic of the next non-concordant price, while the 100(α/2)% and 100(1 - α/2)% points of the j-th marginal distribution can be used to form a prediction interval for that characteristic. The performance measures of these estimates and prediction intervals indicate that the fitted conditional distribution is satisfactory, so incorporating the distribution of the characteristics of the next non-concordant price into the asset price model has good potential to yield a more realistic model.
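
    The point estimate and prediction interval described above (the mean and the 100(α/2)% and 100(1 - α/2)% points of a marginal distribution) can be sketched generically by sampling from whatever distribution has been fitted. In the sketch below a two-component normal mixture stands in for the power-normal mixture, whose exact form is not reproduced.

```python
# Generic sketch of the point estimate and prediction interval described above:
# the mean and the 100(alpha/2)% / 100(1 - alpha/2)% points of a fitted marginal
# distribution.  A placeholder two-component normal mixture stands in for the
# paper's power-normal mixture.
import numpy as np

rng = np.random.default_rng(2)
alpha = 0.05

# Draw from a stand-in marginal distribution of the next characteristic
# (e.g. the waiting time to the next non-concordant price), in arbitrary units.
component = rng.random(100_000) < 0.7
samples = np.where(component,
                   rng.normal(5.0, 1.0, 100_000),   # "typical" regime
                   rng.normal(12.0, 3.0, 100_000))  # "long-wait" regime

point_estimate = samples.mean()
lower, upper = np.quantile(samples, [alpha / 2, 1 - alpha / 2])
print(f"point estimate: {point_estimate:.2f}")
print(f"{100 * (1 - alpha):.0f}% prediction interval: [{lower:.2f}, {upper:.2f}]")
```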

  19. Topographic Metric Predictions of Soil redistribution and Organic Carbon Distribution in Croplands

    Science.gov (United States)

    Mccarty, G.; Li, X.

    2017-12-01

    Landscape topography is a key factor controlling soil redistribution and soil organic carbon (SOC) distribution in Iowa croplands (USA). In this study, we adopted a combined approach based on carbon (13C) and cesium (137Cs) isotope tracers and digital terrain analysis to understand patterns of SOC redistribution and carbon sequestration dynamics as influenced by landscape topography in tilled cropland under long-term corn/soybean management. The fallout radionuclide 137Cs was used to estimate soil redistribution rates, and a Lidar-derived DEM was used to obtain a set of topographic metrics for digital terrain analysis. Soil redistribution rates and patterns of SOC distribution were examined across 560 sampling locations at two field sites as well as at a larger scale within the watershed. We used the δ13C content in SOC to partition C3- and C4-plant-derived C density at 127 locations in one of the two field sites, with corn being the primary source of C4 carbon. Topography-based models were developed to simulate SOC distribution and soil redistribution using stepwise ordinary least square regression (SOLSR) and stepwise principal component regression (SPCR). All topography-based models developed through SPCR and SOLSR demonstrated good simulation performance, explaining more than 62% of the variability in SOC density and soil redistribution rates across the two field sites with intensive sampling. However, the SOLSR models showed lower reliability than the SPCR models in predicting SOC density at the watershed scale. Spatial patterns of C3-derived SOC density were highly related to those of SOC density. Topographic metrics exerted substantial influence on C3-derived SOC density, with the SPCR model accounting for 76.5% of the spatial variance. In contrast, C4-derived SOC density had poor spatial structure, likely reflecting the substantial contribution of corn vegetation to recently sequestered SOC density. Results of this study highlighted the utility of topographic SPCR models for scaling
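
    A minimal principal-component-regression sketch in the spirit of the SPCR models is shown below: topographic metrics are reduced to principal components and SOC density is regressed on them. The data are simulated and the stepwise component-selection step is omitted.

```python
# Minimal principal-component-regression sketch in the spirit of the SPCR models:
# topographic metrics -> principal components -> linear regression on SOC density.
# Data are simulated; the stepwise component-selection step is omitted.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 560  # sampling locations, as in the study

# Hypothetical correlated topographic metrics (slope, curvature, wetness index, ...)
latent = rng.normal(size=(n, 3))
topo = latent @ rng.normal(size=(3, 8)) + 0.3 * rng.normal(size=(n, 8))

# Simulated SOC density driven by the latent terrain structure plus noise
soc = 2.0 + latent @ np.array([1.5, -0.8, 0.4]) + 0.5 * rng.normal(size=n)

model = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
r2 = cross_val_score(model, topo, soc, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.2f} +/- {r2.std():.2f}")
```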

  20. A maximum entropy model for predicting wild boar distribution in Spain

    Directory of Open Access Journals (Sweden)

    Jaime Bosch

    2014-09-01

    Full Text Available Wild boar (Sus scrofa) populations in many areas of the Palearctic, including the Iberian Peninsula, have grown continuously over the last century. This increase has led to numerous different types of conflicts due to the damage these mammals can cause to agriculture, the problems they create in the conservation of natural areas, and the threat they pose to animal health. In the context of both wildlife management and the design of health programs for disease control, it is essential to know how wild boar are distributed on a large spatial scale. Given that quantifying the distribution of wild species using census techniques is virtually impossible in large-scale studies, modeling techniques have to be used instead to estimate animals’ distributions, densities, and abundances. In this study, the potential distribution of wild boar in Spain was predicted by integrating presence data and environmental variables into a MaxEnt approach. We built and tested models using 100 bootstrapped replicates. For each replicate or simulation, presence data was divided into two subsets that were used for model fitting (60% of the data) and cross-validation (40% of the data). The final model was found to be accurate, with an area under the receiver operating characteristic curve (AUC) value of 0.79. Six explanatory variables for predicting wild boar distribution were identified on the basis of the percentage of their contribution to the model. The model exhibited a high degree of predictive accuracy, which has been confirmed by its agreement with satellite images and field surveys.
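
    The evaluation protocol described above (repeated 60/40 splits scored with AUC) can be sketched as follows; a generic boosted-tree classifier stands in for MaxEnt and the presence/background covariates are simulated placeholders.

```python
# Sketch of the evaluation protocol: repeated 60/40 presence/background splits
# scored with AUC.  A boosted-tree classifier stands in for MaxEnt and the
# environmental covariates are simulated placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_pres, n_back, n_env = 300, 1500, 6

X = np.vstack([rng.normal(0.5, 1.0, size=(n_pres, n_env)),
               rng.normal(0.0, 1.0, size=(n_back, n_env))])
y = np.r_[np.ones(n_pres), np.zeros(n_back)]

aucs = []
for rep in range(25):            # 25 replicates here (the study used 100)
    X_fit, X_val, y_fit, y_val = train_test_split(
        X, y, train_size=0.6, stratify=y, random_state=rep)
    clf = GradientBoostingClassifier(random_state=rep).fit(X_fit, y_fit)
    aucs.append(roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1]))

print(f"mean AUC over replicates: {np.mean(aucs):.3f} +/- {np.std(aucs):.3f}")
```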

  1. Distributed quantum computing with single photon sources

    International Nuclear Information System (INIS)

    Beige, A.; Kwek, L.C.

    2005-01-01

    Full text: Distributed quantum computing requires the ability to perform nonlocal gate operations between the distant nodes (stationary qubits) of a large network. To achieve this, it has been proposed to interconvert stationary qubits with flying qubits. In contrast to this, we show that distributed quantum computing only requires the ability to encode stationary qubits into flying qubits but not the conversion of flying qubits into stationary qubits. We describe a scheme for the realization of an eventually deterministic controlled phase gate by performing measurements on pairs of flying qubits. Our scheme could be implemented with a linear optics quantum computing setup including sources for the generation of single photons on demand, linear optics elements and photon detectors. In the presence of photon loss and finite detector efficiencies, the scheme could be used to build large cluster states for one way quantum computing with a high fidelity. (author)

  2. Prediction of temperature and HAZ in thermal-based processes with Gaussian heat source by a hybrid GA-ANN model

    Science.gov (United States)

    Fazli Shahri, Hamid Reza; Mahdavinejad, Ramezanali

    2018-02-01

    Thermal-based processes with a Gaussian heat source often produce excessive temperatures that can create thermally affected layers in specimens. The temperature distribution and the heat-affected zone (HAZ) of materials are therefore two critical factors influenced by the process parameters. Measuring the HAZ thickness and temperature distribution within these processes is not only difficult but also expensive. This research aims to gain valuable knowledge of these factors by predicting the process with a novel combinatory model. In this study, an integrated Artificial Neural Network (ANN) and genetic algorithm (GA) model was used to predict the HAZ and temperature distribution of the specimens. To this end, a full factorial series of experiments was first conducted by applying a Gaussian heat flux to Ti-6Al-4V, and the specimen temperature was measured by infrared thermography. The HAZ width of each sample was determined by measuring the microhardness. The experimental data were then used to create a GA-ANN model, and the efficiency of the GA in designing and optimizing the ANN architecture was investigated. The GA was used to determine the optimal number of neurons in the hidden layer, and the learning rate and momentum coefficient of both the output and hidden layers of the ANN. Finally, the reliability of the models was assessed against the experimental results and statistical indicators. The results demonstrated that the combinatory model predicted the HAZ and temperature more effectively than a trial-and-error ANN model.
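
    A compact sketch of the GA-over-ANN idea is given below: a small genetic algorithm searches the number of hidden neurons, the learning rate and the momentum of an MLP regressor, scored by cross-validated error. The target data are synthetic and the GA operators are generic choices, not the authors' implementation.

```python
# Compact sketch of a GA tuning ANN hyperparameters (hidden neurons, learning
# rate, momentum).  Synthetic data stand in for the measured HAZ/temperature
# responses; this is not the authors' implementation.
import warnings
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

warnings.filterwarnings("ignore")
rng = np.random.default_rng(5)

# Toy process-parameter -> response data standing in for the experiments
X = rng.uniform(-1, 1, size=(120, 3))
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] ** 2 - X[:, 2] + 0.05 * rng.normal(size=120)

def fitness(genome):
    neurons, lr, momentum = int(genome[0]), genome[1], genome[2]
    net = MLPRegressor(hidden_layer_sizes=(neurons,), solver="sgd",
                       learning_rate_init=lr, momentum=momentum,
                       max_iter=800, random_state=0)
    return cross_val_score(net, X, y, cv=3, scoring="neg_mean_squared_error").mean()

def random_genome():
    return np.array([rng.integers(2, 30), rng.uniform(1e-3, 1e-1), rng.uniform(0.1, 0.95)])

pop = [random_genome() for _ in range(10)]
for gen in range(5):                                   # a few GA generations
    scores = np.array([fitness(g) for g in pop])
    order = np.argsort(scores)[::-1]                   # higher (less negative) is better
    parents = [pop[i] for i in order[:4]]              # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = rng.choice(len(parents), 2, replace=False)
        child = np.where(rng.random(3) < 0.5, parents[a], parents[b])  # uniform crossover
        child = child + np.array([rng.integers(-2, 3),                 # mutation
                                  rng.normal(0, 5e-3), rng.normal(0, 0.05)])
        child[0] = np.clip(child[0], 2, 40)
        child[1] = np.clip(child[1], 1e-4, 0.2)
        child[2] = np.clip(child[2], 0.0, 0.99)
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(f"best: neurons={int(best[0])}, lr={best[1]:.4f}, momentum={best[2]:.2f}")
```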

  3. Prediction With Dimension Reduction of Multiple Molecular Data Sources for Patient Survival

    Directory of Open Access Journals (Sweden)

    Adam Kaplan

    2017-07-01

    Full Text Available Predictive modeling from high-dimensional genomic data is often preceded by a dimension reduction step, such as principal component analysis (PCA). However, the application of PCA is not straightforward for multisource data, wherein multiple sources of ‘omics data measure different but related biological components. In this article, we use recent advances in the dimension reduction of multisource data for predictive modeling. In particular, we apply exploratory results from Joint and Individual Variation Explained (JIVE), an extension of PCA for multisource data, for prediction of differing response types. We conduct simulations to illustrate the practical advantages and interpretability of our approach. As an application example, we consider predicting survival for patients with glioblastoma multiforme from 3 data sources measuring messenger RNA expression, microRNA expression, and DNA methylation. We also introduce a method to estimate JIVE scores for new samples that were not used in the initial dimension reduction and study its theoretical properties; this method is implemented in the R package R.JIVE on CRAN, in the function jive.predict.
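
    The general "reduce each source, then predict" pipeline can be sketched as below. This is deliberately not JIVE (it simply concatenates per-source PCA scores and ignores the joint/individual decomposition) and it does not use the R.JIVE package; the three 'omics blocks and the outcome are simulated.

```python
# Heavily simplified stand-in for multi-source dimension reduction + prediction:
# per-source PCA scores are concatenated and passed to a classifier.  This is
# NOT JIVE and does not use R.JIVE; all data below are simulated placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n = 150                                  # patients
shared = rng.normal(size=(n, 2))         # latent signal shared across sources

blocks = []
for p in (500, 300, 400):                # mRNA, miRNA, methylation (dimensions assumed)
    loadings = rng.normal(size=(2, p))
    blocks.append(shared @ loadings + rng.normal(size=(n, p)))

# Binary outcome (e.g. survival beyond some landmark) driven by the shared signal
y = (shared[:, 0] + 0.5 * shared[:, 1] + 0.5 * rng.normal(size=n) > 0).astype(int)

scores = [PCA(n_components=5).fit_transform(StandardScaler().fit_transform(B))
          for B in blocks]
X = np.hstack(scores)

auc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f}")
```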

  4. CMP reflection imaging via interferometry of distributed subsurface sources

    Science.gov (United States)

    Kim, D.; Brown, L. D.; Quiros, D. A.

    2015-12-01

    The theoretical foundations of recovering body wave energy via seismic interferometry are well established. However, in practice such recovery remains problematic. Here, synthetic seismograms computed for subsurface sources are used to evaluate the geometrical combinations of realistic ambient source and receiver distributions that result in useful recovery of virtual body waves. This study illustrates how surface receiver arrays that span a limited distribution of sources can be processed to reproduce virtual shot gathers, which result in CMP gathers that can be effectively stacked with traditional normal moveout corrections. To verify the feasibility of the approach in practice, seismic recordings of 50 aftershocks following the magnitude-5.8 Virginia earthquake of August 2011 were processed using seismic interferometry to produce seismic reflection images of the crustal structure above and beneath the aftershock cluster. Although monotonic noise proved problematic by significantly reducing the number of usable recordings, the edited dataset resulted in stacked seismic sections characterized by coherent reflections that resemble those seen on a nearby conventional reflection survey. In particular, "virtual" reflections at travel times of 3 to 4 seconds suggest reflectors at approximately 7 to 12 km depth that would seem to correspond to imbricate thrust structures formed during the Appalachian orogeny. The approach described here represents a promising new means of body-wave imaging of 3D structure that can be applied to a wide array of geologic and energy problems. Unlike other imaging techniques using natural sources, this technique does not require precise source locations or times. It can thus exploit aftershocks too small for conventional analyses. This method can be applied to any type of microseismic cloud, whether tectonic, volcanic or man-made.

  5. Correlated Sources in Distributed Networks--Data Transmission, Common Information Characterization and Inferencing

    Science.gov (United States)

    Liu, Wei

    2011-01-01

    Correlation is often present among observations in a distributed system. This thesis deals with various design issues when correlated data are observed at distributed terminals, including: communicating correlated sources over interference channels, characterizing the common information among dependent random variables, and testing the presence of…

  6. Distributed Remote Vector Gaussian Source Coding for Wireless Acoustic Sensor Networks

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider the problem of remote vector Gaussian source coding for a wireless acoustic sensor network. Each node receives messages from multiple nodes in the network and decodes these messages using its own measurement of the sound field as side information. The node’s measurement ... and the estimates of the source resulting from decoding the received messages are then jointly encoded and transmitted to a neighboring node in the network. We show that for this distributed source coding scenario, one can encode a so-called conditional sufficient statistic of the sources instead of jointly...

  7. Nonparametric Tree-Based Predictive Modeling of Storm Outages on an Electric Distribution Network.

    Science.gov (United States)

    He, Jichao; Wanik, David W; Hartman, Brian M; Anagnostou, Emmanouil N; Astitha, Marina; Frediani, Maria E B

    2017-03-01

    This article compares two nonparametric tree-based models, quantile regression forests (QRF) and Bayesian additive regression trees (BART), for predicting storm outages on an electric distribution network in Connecticut, USA. We evaluated point estimates and prediction intervals of outage predictions for both models using high-resolution weather, infrastructure, and land use data for 89 storm events (including hurricanes, blizzards, and thunderstorms). We found that spatially BART predicted more accurate point estimates than QRF. However, QRF produced better prediction intervals for high spatial resolutions (2-km grid cells and towns), while BART predictions aggregated to coarser resolutions (divisions and service territory) more effectively. We also found that the predictive accuracy was dependent on the season (e.g., tree-leaf condition, storm characteristics), and that the predictions were most accurate for winter storms. Given the merits of each individual model, we suggest that BART and QRF be implemented together to show the complete picture of a storm's potential impact on the electric distribution network, which would allow for a utility to make better decisions about allocating prestorm resources. © 2016 Society for Risk Analysis.
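
    Point estimates together with prediction intervals, as compared in the study, can be sketched with quantile gradient boosting (loss="quantile"), used here only as a stand-in for QRF and BART; the weather/infrastructure features and outage counts below are simulated.

```python
# Sketch of point estimates plus prediction intervals for storm outage counts.
# Quantile gradient boosting stands in for the QRF and BART models of the study;
# weather/infrastructure features are simulated placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
n = 2000
X = rng.normal(size=(n, 4))   # e.g. peak wind, rainfall, tree density, asset count
rate = np.exp(1.0 + 0.8 * X[:, 0] + 0.4 * X[:, 2])
y = rng.poisson(rate).astype(float)          # outages per grid cell

fit, val = slice(0, 1500), slice(1500, None)

models = {q: GradientBoostingRegressor(loss="quantile", alpha=q, random_state=0)
          .fit(X[fit], y[fit]) for q in (0.1, 0.5, 0.9)}

lower = models[0.1].predict(X[val])
median = models[0.5].predict(X[val])
upper = models[0.9].predict(X[val])

coverage = np.mean((y[val] >= lower) & (y[val] <= upper))
print(f"median prediction for first 3 cells: {median[:3].round(1)}")
print(f"empirical 80% interval coverage: {coverage:.2f}")
```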

  8. Testing and intercomparison of model predictions of radionuclide migration from a hypothetical area source

    International Nuclear Information System (INIS)

    O'Brien, R.S.; Yu, C.; Zeevaert, T.; Olyslaegers, G.; Amado, V.; Setlow, L.W.; Waggitt, P.W.

    2008-01-01

    This work was carried out as part of the International Atomic Energy Agency's EMRAS program. One aim of the work was to develop scenarios for testing computer models designed for simulating radionuclide migration in the environment, and to use these scenarios for testing the models and comparing predictions from different models. This paper presents the results of the development and testing of a hypothetical area source of NORM waste/residue using two complex computer models and one screening model. There are significant differences in the methods used to model groundwater flow between the complex models. The hypothetical source was used because of its relative simplicity and because of difficulties encountered in finding comprehensive, well-validated data sets for real sites. The source consisted of a simple repository of uniform thickness, with 1 Bq g-1 of uranium-238 (238U) (in secular equilibrium with its decay products) distributed uniformly throughout the waste. These approximate real situations, such as engineered repositories, waste rock piles, tailings piles and landfills. Specification of the site also included the physical layout, vertical stratigraphic details, soil type for each layer of material, precipitation and runoff details, groundwater flow parameters, and meteorological data. Calculations were carried out with and without a cover layer of clean soil above the waste, for people working and living at different locations relative to the waste. The predictions of the two complex models showed several differences which need more detailed examination. The scenario is available for testing by other modelers. It can also be used as a planning tool for remediation work or for repository design, by changing the scenario parameters and running the models for a range of different inputs. Further development will include applying models to real scenarios and integrating environmental impact assessment methods with the safety assessment tools currently

  9. A New Method for the 2D DOA Estimation of Coherently Distributed Sources

    Directory of Open Access Journals (Sweden)

    Liang Zhou

    2014-03-01

    Full Text Available The purpose of this paper is to develop a new technique for estimating the two-dimensional (2D) direction-of-arrivals (DOAs) of coherently distributed (CD) sources, which can effectively estimate the central azimuth and central elevation of CD sources at a low computational cost. Using a special L-shape array, a new approach for parametric estimation of CD sources is proposed. The proposed method is based on two rotational invariance relations under the small angular approximation, and estimates the two rotational matrices that describe these relations using the propagator technique. The central DOA estimates are then obtained from the main diagonal elements of the two rotational matrices. Simulation results indicate that the proposed method exhibits good performance under small angular spread and can be applied to multisource scenarios where different sources may have different angular distribution shapes. Without any peak-finding search or eigendecomposition of the high-dimensional sample covariance matrix, the proposed method significantly reduces the computational cost compared with existing methods, and is thus well suited to real-time processing and engineering realization. In addition, our approach is a robust estimator that does not depend on the angular distribution shape of the CD sources.

  10. Perceived loudness of spatially distributed sound sources

    DEFF Research Database (Denmark)

    Song, Woo-keun; Ellermeier, Wolfgang; Minnaar, Pauli

    2005-01-01

    psychoacoustic attributes into account. Therefore, a method for deriving loudness maps was developed in an earlier study [Song, Internoise2004, paper 271]. The present experiment investigates to what extent perceived loudness depends on the distribution of individual sound sources. Three loudspeakers were positioned 1.5 m from the centre of the listener’s head, one straight ahead, and two 10 degrees to the right and left, respectively. Six participants matched the loudness of either one, or two simultaneous sounds (narrow-band noises with 1-kHz, and 3.15-kHz centre frequencies) to a 2-kHz, 60-dB SPL narrow-band noise placed in the frontal loudspeaker. The two sounds were either originating from the central speaker, or from the two offset loudspeakers. It turned out that the subjects perceived the noises to be softer when they were distributed in space. In addition, loudness was calculated from the recordings

  11. Reconstruction of far-field tsunami amplitude distributions from earthquake sources

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.

    2016-01-01

    The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.
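
    The tapered Pareto fitting step can be sketched as below, using the common parameterization with survival function S(x) = (a/x)^beta * exp((a - x)/xc) for x >= a, and maximum-likelihood estimation on simulated (not tide-gauge) amplitudes.

```python
# Sketch of maximum-likelihood fitting of a tapered Pareto distribution to
# amplitude data, with threshold a, power-law index beta and corner amplitude xc.
# The data here are simulated, not tide-gauge observations.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
a, beta_true, xc_true = 0.1, 1.2, 2.0     # metres (illustrative values)

# A tapered Pareto variate is the minimum of a Pareto(a, beta) variate and
# a + Exponential(xc) variate, since their survival functions multiply.
n = 500
u = rng.random(n)
pareto = a * (1.0 - u) ** (-1.0 / beta_true)
expon = a + rng.exponential(xc_true, n)
x = np.minimum(pareto, expon)

def neg_log_lik(params):
    beta, xc = params
    if beta <= 0 or xc <= 0:
        return np.inf
    # log f(x) = log(beta/x + 1/xc) + beta*log(a/x) + (a - x)/xc
    return -np.sum(np.log(beta / x + 1.0 / xc) + beta * np.log(a / x) + (a - x) / xc)

fit = minimize(neg_log_lik, x0=[1.0, 1.0], method="Nelder-Mead")
beta_hat, xc_hat = fit.x
print(f"beta = {beta_hat:.2f} (true {beta_true}), corner = {xc_hat:.2f} m (true {xc_true})")
```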

  12. Fundamental-mode sources in approach to critical experiments

    International Nuclear Information System (INIS)

    Goda, J.; Busch, R.

    2000-01-01

    An equivalent fundamental-mode source is an imaginary source that is distributed identically in space, energy, and angle to the fundamental-mode fission source. Therefore, it produces the same neutron multiplication as the fundamental-mode fission source. Even if two source distributions produce the same number of spontaneous fission neutrons, they will not necessarily contribute equally toward the multiplication of a given system. A method of comparing the relative importance of source distributions is needed. A factor, denoted as g* and defined as the ratio of the fixed-source multiplication to the fundamental-mode multiplication, is used to convert a given source strength to its equivalent fundamental-mode source strength. This factor is of interest to criticality safety as it relates to the 1/M method of approach to critical. Ideally, a plot of 1/M versus κeff is linear. However, since 1/M = (1 - κeff)/g*, the plot will be linear only if g* is constant with κeff. When g* increases with κeff, the 1/M plot is said to be conservative because the critical mass is underestimated. However, it is possible for g* to decrease with κeff, yielding a nonconservative 1/M plot. A better understanding of g* would help predict whether a given approach to critical will be conservative or nonconservative. The equivalent fundamental-mode source strength g*S can be predicted by experiment. The experimental method was tested on the XIX-1 core on the Fast Critical Assembly at the Japan Atomic Energy Research Institute. The results showed a 30% difference between measured and calculated values. However, the XIX-1 reactor had significant intermediate-energy neutrons. The presence of intermediate-energy neutrons may have made the cross-section set used for predicted values less than ideal for the system
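
    The relation 1/M = (1 - κeff)/g* can be illustrated numerically: with a constant g* the 1/M plot extrapolates to the true critical point, while made-up increasing or decreasing g*(κeff) trends bend it toward conservative or nonconservative predictions.

```python
# Numeric illustration of 1/M = (1 - keff)/g*.  A constant g* gives a linear
# 1/M plot; a g* that rises with keff makes the extrapolation conservative,
# while a falling g* makes it nonconservative.  The g*(keff) trends below are
# made up for illustration only.
import numpy as np

keff = np.linspace(0.5, 0.99, 8)

g_star = {
    "constant g*   ": np.full_like(keff, 1.0),
    "g* increasing ": np.linspace(0.8, 1.6, keff.size),
    "g* decreasing ": np.linspace(1.6, 0.8, keff.size),
}

for label, g in g_star.items():
    inv_M = (1.0 - keff) / g
    # Extrapolate the last two points linearly to 1/M = 0 to mimic the
    # graphical estimate of the critical configuration.
    slope = (inv_M[-1] - inv_M[-2]) / (keff[-1] - keff[-2])
    keff_at_zero = keff[-1] - inv_M[-1] / slope
    print(f"{label} predicted critical keff ~ {keff_at_zero:.3f}")
```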

  13. Theoretical predictions of lactate and hydrogen ion distributions in tumours.

    Directory of Open Access Journals (Sweden)

    Maymona Al-Husari

    Full Text Available High levels of lactate and H(+)-ions play an important role in the invasive and metastatic cascade of some tumours. We develop a mathematical model of cellular pH regulation focusing on the activity of the Na(+)/H(+) exchanger (NHE) and the lactate/H(+) symporter (MCT) to investigate the spatial correlations of extracellular lactate and H(+)-ions. We highlight a crucial role for blood vessel perfusion rates in determining the spatial correlation between these two cations. We also predict critical roles for blood lactate, the activity of the MCTs and NHEs on the direction of the cellular pH gradient in the tumour. We also incorporate experimentally determined heterogeneous distributions of the NHE and MCT transporters. We show that this can give rise to a higher intracellular pH and a lower intracellular lactate but does not affect the direction of the reversed cellular pH gradient or redistribution of protons away from the glycolytic source. On the other hand, including intercellular gap junction communication in our model can give rise to a reversed cellular pH gradient and can influence the levels of pH.

  14. Validation of model predictions of pore-scale fluid distributions during two-phase flow

    Science.gov (United States)

    Bultreys, Tom; Lin, Qingyang; Gao, Ying; Raeini, Ali Q.; AlRatrout, Ahmed; Bijeljic, Branko; Blunt, Martin J.

    2018-05-01

    Pore-scale two-phase flow modeling is an important technology to study a rock's relative permeability behavior. To investigate if these models are predictive, the calculated pore-scale fluid distributions which determine the relative permeability need to be validated. In this work, we introduce a methodology to quantitatively compare models to experimental fluid distributions in flow experiments visualized with microcomputed tomography. First, we analyzed five repeated drainage-imbibition experiments on a single sample. In these experiments, the exact fluid distributions were not fully repeatable on a pore-by-pore basis, while the global properties of the fluid distribution were. Then two fractional flow experiments were used to validate a quasistatic pore network model. The model correctly predicted the fluid present in more than 75% of pores and throats in drainage and imbibition. To quantify what this means for the relevant global properties of the fluid distribution, we compare the main flow paths and the connectivity across the different pore sizes in the modeled and experimental fluid distributions. These essential topology characteristics matched well for drainage simulations, but not for imbibition. This suggests that the pore-filling rules in the network model we used need to be improved to make reliable predictions of imbibition. The presented analysis illustrates the potential of our methodology to systematically and robustly test two-phase flow models to aid in model development and calibration.

  15. Measurement-device-independent quantum key distribution with correlated source-light-intensity errors

    Science.gov (United States)

    Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin

    2018-04-01

    We present an analysis for measurement-device-independent quantum key distribution with correlated source-light-intensity errors. Numerical results show that the results here can greatly improve the key rate especially with large intensity fluctuations and channel attenuation compared with prior results if the intensity fluctuations of different sources are correlated.

  16. Characterization and modeling of the heat source

    Energy Technology Data Exchange (ETDEWEB)

    Glickstein, S.S.; Friedman, E.

    1993-10-01

    A description of the input energy source is basic to any numerical modeling formulation designed to predict the outcome of the welding process. The source is fundamental and unique to each joining process. The resultant output of any numerical model will be affected by the initial description of both the magnitude and distribution of the input energy of the heat source. Thus, calculated weld shape, residual stresses, weld distortion, cooling rates, metallurgical structure, material changes due to excessive temperatures and potential weld defects are all influenced by the initial characterization of the heat source. An understanding of both the physics and the mathematical formulation of these sources is essential for describing the input energy distribution. This section provides a brief review of the physical phenomena that influence the input energy distributions and discusses several different models of heat sources that have been used in simulating arc welding, high energy density welding and resistance welding processes. Both simplified and detailed models of the heat source are discussed.

  17. Boosting up quantum key distribution by learning statistics of practical single-photon sources

    International Nuclear Information System (INIS)

    Adachi, Yoritoshi; Yamamoto, Takashi; Koashi, Masato; Imoto, Nobuyuki

    2009-01-01

    We propose a simple quantum-key-distribution (QKD) scheme for practical single-photon sources (SPSs), which works even with a moderate suppression of the second-order correlation g(2) of the source. The scheme utilizes a passive preparation of a decoy state by monitoring a fraction of the signal via an additional beam splitter and a detector at the sender's side to monitor photon-number splitting attacks. We show that the achievable distance increases with the precision with which the sub-Poissonian tendency is confirmed in higher photon-number distribution of the source, rather than with actual suppression of the multiphoton emission events. We present an example of the secure key generation rate in the case of a poor SPS with g(2) = 0.19, in which no secure key is produced with the conventional QKD scheme, and show that learning the photon-number distribution up to several numbers is sufficient for achieving almost the same distance as that of an ideal SPS.

  18. Prediction of sound transmission loss through multilayered panels by using Gaussian distribution of directional incident energy

    Science.gov (United States)

    Kang; Ih; Kim; Kim

    2000-03-01

    In this study, a new prediction method is suggested for the sound transmission loss (STL) of multilayered panels of infinite extent. Conventional methods such as the random or field incidence approach often give significant discrepancies in predicting the STL of multilayered panels when compared with experiments. In this paper, appropriate directional distributions of incident energy for predicting the STL of multilayered panels are proposed. In order to find a weighting function representing the directional distribution of incident energy on the wall of a reverberation chamber, numerical simulations using a ray-tracing technique were carried out. The simulation results reveal that the directional distribution can be approximately expressed by a Gaussian distribution function in terms of the angle of incidence. The Gaussian function is applied to predict the STL of various multilayered panel configurations as well as single panels. Comparisons between measurement and prediction show good agreement, which validates the proposed Gaussian function approach.
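
    A single-panel sketch of the directional averaging is given below, comparing field-incidence averaging (0-78 degrees) with a Gaussian-weighted average of the incident energy. The simple mass law replaces the full multilayer model, and the Gaussian centre and width are placeholders rather than the values fitted from the ray-tracing simulations.

```python
# Single-panel mass-law sketch comparing field-incidence averaging (0-78 deg)
# with a Gaussian-weighted directional average of the incident energy.  The
# Gaussian centre/width are placeholders and the mass law replaces the full
# multilayer transfer model.
import numpy as np

rho0, c0 = 1.21, 343.0          # air density (kg/m^3) and speed of sound (m/s)
m = 10.0                        # panel surface mass (kg/m^2)
f = np.array([250.0, 500.0, 1000.0, 2000.0])   # frequencies (Hz)

def tau(theta, freq):
    """Mass-law transmission coefficient of a limp panel at oblique incidence."""
    omega = 2.0 * np.pi * freq
    return 1.0 / (1.0 + (omega * m * np.cos(theta) / (2.0 * rho0 * c0)) ** 2)

def stl(freq, weight, theta_max=np.pi / 2):
    theta = np.linspace(1e-4, theta_max, 2000)
    w = weight(theta) * np.sin(theta) * np.cos(theta)
    tau_eff = np.trapz(tau(theta[None, :], freq[:, None]) * w, theta) / np.trapz(w, theta)
    return -10.0 * np.log10(tau_eff)

field = stl(f, lambda th: np.ones_like(th), theta_max=np.radians(78))
gauss = stl(f, lambda th: np.exp(-(np.degrees(th) - 0.0) ** 2 / (2 * 40.0 ** 2)))

for fi, a, b in zip(f, field, gauss):
    print(f"{fi:6.0f} Hz  field incidence: {a:5.1f} dB   Gaussian weighting: {b:5.1f} dB")
```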

  19. Distribution and Source Identification of Pb Contamination in industrial soil

    Science.gov (United States)

    Ko, M. S.

    2017-12-01

    INTRODUCTION Lead (Pb) is a toxic element that induces neurotoxic effects in humans because Pb competes with Ca in the nervous system. Lead is classified as a chalcophile element, and galena (PbS) is its major mineral. Although Pb is not an abundant element in nature, various anthropogenic sources have enhanced Pb enrichment in the environment since the Industrial Revolution. Representative anthropogenic sources are batteries, paint, mining, smelting, and combustion of fossil fuel. Isotope analysis is widely used to identify Pb contamination sources. Lead has four stable isotopes in nature: 208Pb, 207Pb, 206Pb, and 204Pb. Because these isotopes are stable, their ratios are maintained during physical and chemical fractionation, so variations in Pb isotope abundances and ratios can point to a particular contamination source. In this study, the distribution and isotope ratios of Pb in industrial soil were used to identify the Pb contamination source and dispersion pathways. MATERIALS AND METHODS Soil samples were collected at depths of 0-6 m from an industrial area in Korea. The collected soil samples were dried and sieved under 2 mm. Soil pH measurement, aqua-regia digestion and TCLP were carried out on the sieved soil samples. Isotope analysis was carried out to determine the abundances of the Pb isotopes. RESULTS AND DISCUSSION The study area is land developed to promote industrial facilities. The area was forest in 1980, and satellite images show the alteration of land use over time. These changes in land use imply the possibility that contaminated soil was brought in from outside. The Pb concentrations in core samples were higher in deeper soil than in topsoil. In particular, the soil sample from 4 m depth showed the highest Pb concentration, approximately 1500 mg/kg. This result indicates that a Pb source exists at 4 m depth. CONCLUSIONS This study investigated the distribution and source identification of Pb in industrial soil. The land use and Pb

  20. Predictive modeling of deep-sea fish distribution in the Azores

    Science.gov (United States)

    Parra, Hugo E.; Pham, Christopher K.; Menezes, Gui M.; Rosa, Alexandra; Tempera, Fernando; Morato, Telmo

    2017-11-01

    Understanding the link between fish and their habitat is essential for an ecosystem approach to fisheries management. However, determining such relationship is challenging, especially for deep-sea species. In this study, we applied generalized additive models (GAMs) to relate presence-absence and relative abundance data of eight economically-important fish species to environmental variables (depth, slope, aspect, substrate type, bottom temperature, salinity and oxygen saturation). We combined 13 years of catch data collected from systematic longline surveys performed across the region. Overall, presence-absence GAMs performed better than abundance models and predictions made for the observed data successfully predicted the occurrence of the eight deep-sea fish species. Depth was the most influential predictor of all fish species occurrence and abundance distributions, whereas other factors were found to be significant for some species but did not show such a clear influence. Our results predicted that despite the extensive Azores EEZ, the habitats available for the studied deep-sea fish species are highly limited and patchy, restricted to seamounts slopes and summits, offshore banks and island slopes. Despite some identified limitations, our GAMs provide an improved knowledge of the spatial distribution of these commercially important fish species in the region.

  1. Study of burden distribution characteristics (IV): the development of a distribution predicting model in which coke collapse has been taken into account

    Energy Technology Data Exchange (ETDEWEB)

    Kamisaka, E; Okuno, Y; Irita, T; Matsuzaki, M; Isoyama, T; Kunitomo, K

    1984-01-01

    Using results quoted in a previous report (see Tetsu To Hagane, Vol. 68, page S 701, 1982), coke collapse has been quantified by means of landslide theory, according to which the stability of the burden is given by a safety factor which equals resistance moment/sliding moment. This has enabled coke collapse to be introduced in a model for predicting burden distribution. Application of this model has resulted in more accurate predictions of burden distribution, the computed values being in close agreement with the results of distribution experiments. 1 reference.

  2. Determining profile of dose distribution for Pd-103 brachytherapy source

    International Nuclear Information System (INIS)

    Berkay, Camgoz; Mehmet, N. Kumru; Gultekin, Yegin

    2006-01-01

    Full text: Brachytherapy is a particular form of radiotherapy for cancer treatment in which cancerous cells are destroyed using radiation. Because it is hazardous to perform experiments on living tissue, brachytherapy sources are generally studied theoretically using computer simulation. The general concept of the treatment is to place the radioactive source in the cancerous area of the affected tissue. The computer studies use the Monte Carlo method, a mathematical method based in principle on random number generation. The palladium radioisotope Pd-103 is an LDR (low dose rate) source. The main radioactive material is encapsulated in a titanium cylinder 3 mm in length and 0.25 mm in radius; there are two separate parts of Pd-103 inside the titanium cylinder. It is impossible to investigate experimentally the differential effects coming from the two parts, because the source dimensions are small compared with the measurement distances, so simulation is the only available method. Dosimetric studies aim to determine the absorbed dose distribution in tissue, both radially and angularly. In nuclear physics, researchers are obliged to use computer-based methods, since radiation studies carry hazards for the scientist and for people exposed to radiation. When the hazard exceeds recommended limits or the physical conditions are not suitable (long working times, uneconomical experiments, inadequate sensitivity of materials, etc.), it is unavoidable to simulate works and experiments before putting scientific methods into practice. In the medical field, the use of radiation for cancer treatment requires computational work; some computational studies are routine in clinics and others serve scientific development purposes. In brachytherapy studies there are significant differences between experimental measurements and theoretical (computer-based) output data, and the errors of data taken from experimental studies are larger than the errors of simulation values. In the design of a new brachytherapy source it is important to consider detailed

  3. Further comprehension of natural gas accumulation, distribution, and prediction prospects in China

    Directory of Open Access Journals (Sweden)

    Jun Li

    2017-06-01

    Full Text Available In-depth research reveals that natural gas accumulation and distribution are characterized by cyclicity, sequence, equilibrium, traceability, and multiple stages. To be specific, every geotectonic cycle represents a gas reservoir forming system in which natural gas is generated, migrated, accumulated, and formed into a reservoir in a certain play. Essentially, hydrocarbon accumulation occurs when migration force and resistance reach an equilibrium; in this situation, the closer to the source rock, the higher the accumulation efficiency. Historically, reservoirs were formed in multiple phases. Moreover, zones in source rocks and adjacent to source rocks, unconformity belts, and faulted anticline belts are favorable areas for finding large gas fields. Apart from the common unconformity belts and faulted anticline belts, in-source and near-source zones should be considered as critical targets for future exploration. Subsequent exploration should focus on the Upper Palaeozoic in the southeastern Ordos Basin, the Triassic in the southwestern Sichuan Basin, the Jurassic in the northern section of the Kuqa Depression, and other zones where no great breakthroughs have been made. Keywords: Large gas field, Distribution characteristics, Potential zone, Prospect

  4. Sources, occurrence and predicted aquatic impact of legacy and contemporary pesticides in streams

    International Nuclear Information System (INIS)

    McKnight, Ursula S.; Rasmussen, Jes J.; Kronvang, Brian; Binning, Philip J.; Bjerg, Poul L.

    2015-01-01

    We couple current findings of pesticides in surface and groundwater to the history of pesticide usage, focusing on the potential contribution of legacy pesticides to the predicted ecotoxicological impact on benthic macroinvertebrates in headwater streams. Results suggest that groundwater, in addition to precipitation and surface runoff, is an important source of pesticides (particularly legacy herbicides) entering surface water. In addition to current-use active ingredients, legacy pesticides, metabolites and impurities are important for explaining the estimated total toxicity attributable to pesticides. Sediment-bound insecticides were identified as the primary source for predicted ecotoxicity. Our results support recent studies indicating that highly sorbing chemicals contribute and even drive impacts on aquatic ecosystems. They further indicate that groundwater contaminated by legacy and contemporary pesticides may impact adjoining streams. Stream observations of soluble and sediment-bound pesticides are valuable for understanding the long-term fate of pesticides in aquifers, and should be included in stream monitoring programs. - Highlights: • Findings comprised a range of contemporary and banned legacy pesticides in streams. • Groundwater is a significant pathway for some herbicides entering streams. • Legacy pesticides increased predicted aquatic toxicity by four orders of magnitude. • Sediment-bound insecticides were identified as the primary source for ecotoxicity. • Stream monitoring programs should include legacy pesticides to assess impacts. - Legacy pesticides, particularly sediment-bound insecticides were identified as the primary source for predicted ecotoxicity impacting benthic macroinvertebrates in headwater streams

  5. Development of unfolding method to obtain pin-wise source strength distribution from PWR spent fuel assembly measurement

    International Nuclear Information System (INIS)

    Sitompul, Yos Panagaman; Shin, Hee-Sung; Park, Se-Hwan; Oh, Jong Myeong; Seo, Hee; Kim, Ho Dong

    2013-01-01

    An unfolding method has been developed to obtain a pin-wise source strength distribution of a 14 × 14 pressurized water reactor (PWR) spent fuel assembly. Sixteen measured gamma dose rates at 16 control rod guide tubes of an assembly are unfolded to 179 pin-wise source strengths of the assembly. The method calculates and optimizes five coefficients of the quadratic fitting function for X-Y source strength distribution, iteratively. The pin-wise source strengths are obtained at the sixth iteration, with a maximum difference between two sequential iterations of about 0.2%. The relative distribution of pin-wise source strength from the unfolding is checked using a comparison with the design code (Westinghouse APA code). The result shows that the relative distribution from the unfolding and design code is consistent within a 5% difference. The absolute value of the pin-wise source strength is also checked by reproducing the dose rates at the measurement points. The result shows that the pin-wise source strengths from the unfolding reproduce the dose rates within a 2% difference. (author)
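
    A minimal sketch of the unfolding idea is given below, assuming a hypothetical pin lattice, detector layout, and 1/r^2 response kernel (the paper's geometry, dose-response model, and iteration scheme are not reproduced). Because the quadratic surface is linear in its five coefficients, a single least-squares solve suffices in this toy version.

```python
# Minimal sketch of the unfolding idea: pin-wise source strengths are constrained
# to a 5-coefficient quadratic surface S(x, y) and fitted to a handful of dose-rate
# measurements. Geometry, kernel, and data below are hypothetical, not the authors'.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 14x14 pin lattice (the real assembly has 179 fuel pins plus guide tubes).
xs, ys = np.meshgrid(np.arange(14), np.arange(14))
pins = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)

# Quadratic design matrix: S(x, y) = c0 + c1*x + c2*y + c3*x^2 + c4*y^2
def quad_basis(p):
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x**2, y**2])

# Hypothetical detector positions (stand-ins for the 16 guide tubes) and a simple
# 1/r^2 response kernel mapping pin strengths to detector dose rates.
detectors = rng.uniform(0, 13, size=(16, 2))
r2 = ((detectors[:, None, :] - pins[None, :, :]) ** 2).sum(-1) + 1.0
G = 1.0 / r2                                    # (16 detectors x 196 pins)

true_c = np.array([100.0, 2.0, -1.5, -0.3, -0.2])
doses = G @ (quad_basis(pins) @ true_c)
doses *= 1 + 0.01 * rng.standard_normal(16)     # 1% measurement noise

# Unfold: least-squares fit of the 5 coefficients, then evaluate pin-wise strengths.
A = G @ quad_basis(pins)
c_hat, *_ = np.linalg.lstsq(A, doses, rcond=None)
pin_strengths = quad_basis(pins) @ c_hat
print("recovered coefficients:", np.round(c_hat, 2))
```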

  6. Sources and distribution of anthropogenic radionuclides in different marine environments

    International Nuclear Information System (INIS)

    Holm, E.

    1997-01-01

    The knowledge of the distribution in time and space of radiologically important radionuclides from different sources in different marine environments is important for assessing the dose commitment following controlled or accidental releases and for detecting possible new sources. Present sources from nuclear explosion tests, releases from nuclear facilities, and the Chernobyl accident provide a tool for such studies. The different sources can be distinguished by their different isotopic and radionuclide compositions. Results show that radiocaesium behaves rather conservatively in the south and north Atlantic, while plutonium has a residence time of about 8 years. On the other hand, enhanced concentrations of plutonium are found in surface waters in arctic regions, where vertical mixing is small and ice formation plays an important role. Significantly increased concentrations of plutonium are also found below the oxic layer in anoxic basins due to geochemical concentration. (author)

  7. Medial temporal lobe reinstatement of content-specific details predicts source memory

    Science.gov (United States)

    Liang, Jackson C.; Preston, Alison R.

    2016-01-01

    Leading theories propose that when remembering past events, medial temporal lobe (MTL) structures reinstate the neural patterns that were active when those events were initially encoded. Accurate reinstatement is hypothesized to support detailed recollection of memories, including their source. While several studies have linked cortical reinstatement to successful retrieval, indexing reinstatement within the MTL network and its relationship to memory performance has proved challenging. Here, we addressed this gap in knowledge by having participants perform an incidental encoding task, during which they visualized people, places, and objects in response to adjective cues. During a surprise memory test, participants saw studied and novel adjectives and indicated the imagery task they performed for each adjective. A multivariate pattern classifier was trained to discriminate the imagery tasks based on functional magnetic resonance imaging (fMRI) responses from hippocampus and MTL cortex at encoding. The classifier was then tested on MTL patterns during the source memory task. We found that MTL encoding patterns were reinstated during successful source retrieval. Moreover, when participants made source misattributions, errors were predicted by reinstatement of incorrect source content in MTL cortex. We further observed a gradient of content-specific reinstatement along the anterior-posterior axis of hippocampus and MTL cortex. Within anterior hippocampus, we found that reinstatement of person content was related to source memory accuracy, whereas reinstatement of place information across the entire hippocampal axis predicted correct source judgments. Content-specific reinstatement was also graded across MTL cortex, with PRc patterns evincing reactivation of people and more posterior regions, including PHc, showing evidence for reinstatement of places and objects. Collectively, these findings provide key evidence that source recollection relies on reinstatement of past
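
    The sketch below illustrates the cross-phase decoding logic (train a classifier on encoding patterns, apply it to retrieval patterns) with synthetic "voxel" data; a generic multinomial classifier stands in for the study's actual MVPA pipeline.

```python
# Minimal sketch of cross-phase pattern classification with synthetic "voxel" data;
# a generic multinomial classifier stands in for the study's MVPA pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_voxels, n_trials = 200, 90
labels = np.repeat(["person", "place", "object"], n_trials // 3)

# Hypothetical class-specific activity templates plus noise.
templates = {c: rng.standard_normal(n_voxels) for c in ["person", "place", "object"]}
encoding = np.array([templates[c] + 0.8 * rng.standard_normal(n_voxels) for c in labels])
retrieval = np.array([templates[c] + 1.2 * rng.standard_normal(n_voxels) for c in labels])

# Train on encoding patterns, test on retrieval patterns (reinstatement evidence).
clf = LogisticRegression(max_iter=2000).fit(encoding, labels)
reinstatement = clf.predict_proba(retrieval)        # evidence for each source class
accuracy = (clf.predict(retrieval) == labels).mean()
print("cross-phase decoding accuracy:", round(accuracy, 2))
```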

  8. A Popularity Based Prediction and Data Redistribution Tool for ATLAS Distributed Data Management

    CERN Document Server

    Beermann, T; The ATLAS collaboration; Maettig, P

    2014-01-01

    This paper presents a system to predict future data popularity for data-intensive systems, such as ATLAS distributed data management (DDM). Using these predictions it is possible to make a better distribution of data, helping to reduce the waiting time for jobs using this data. The system is based on a tracer infrastructure that is able to monitor and store historical data accesses and which is used to create popularity reports. These reports provide detailed summaries of data accesses in the past, including information about the accessed files, the involved users and the sites. From this past data it is then possible to make near-term forecasts of data popularity in the future. The prediction system introduced in this paper makes use of simple prediction methods as well as predictions made by neural networks. The best prediction method depends on the type of data, and the data is carefully filtered for use in either system. The second part of the paper introduces a system that effectively ...

  9. A contribution to the analysis of the activity distribution of a radioactive source trapped inside a cylindrical volume, using the M.C.N.P.X. code

    International Nuclear Information System (INIS)

    Portugal, L.; Oliveira, C.; Trindade, R.; Paiva, I.

    2006-01-01

    Orphan sources, activated materials, or materials contaminated with natural or artificial radionuclides have been detected in scrap metal products destined for recycling. The consequences of melting a source during the process could include economic, environmental and social impacts. From the point of view of radioactive waste management, a scenario of 100 ton of contaminated steel in one piece is a major problem. So, it is of great importance to develop a methodology that would allow us to predict the activity distribution inside a volume of steel. In previous work we were able to distinguish between the cases where the source is disseminated over the entire cylinder and the cases where it is concentrated in different volumes. Now the main goal is to distinguish between different radii of spherical source geometries trapped inside the cylinder. For this, a methodology was proposed based on the ratio of the counts of two regions of the gamma spectrum, obtained with a sodium iodide detector, using the M.C.N.P.X. Monte Carlo simulation code. These calculated ratios allow us to determine a function r = aR² + bR + c, where R is the ratio between the counts of the two regions of the gamma spectrum and r is the radius of the source. For simulation purposes six 60Co sources were used (a point source, four spheres of 5 cm, 10 cm, 15 cm and 20 cm radius, and the overall contaminated cylinder) trapped inside two types of matrix, concrete and stainless steel. The methodology applied has been shown to predict and distinguish accurately the distribution of a source inside a material, roughly independently of the matrix and density considered. (authors)
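
    A minimal sketch of the ratio-to-radius calibration is shown below; the (R, r) calibration pairs are invented, whereas in the study they would come from the MCNPX simulations.

```python
# Minimal sketch of the ratio-to-radius calibration r = a*R^2 + b*R + c; the
# (R, r) calibration pairs below are hypothetical, not the simulated MCNPX values.
import numpy as np

# Hypothetical calibration: spectral-region count ratios R simulated for known
# source radii r (a point source is approximated by r = 0).
R_calib = np.array([0.42, 0.55, 0.67, 0.78, 0.90, 1.00])
r_calib = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])   # cm

a, b, c = np.polyfit(R_calib, r_calib, deg=2)             # r = a*R^2 + b*R + c
print(f"r = {a:.2f}*R^2 + {b:.2f}*R + {c:.2f}")

R_measured = 0.71                                          # ratio from a new spectrum
r_estimated = a * R_measured**2 + b * R_measured + c
print("estimated source radius ~", round(r_estimated, 1), "cm")
```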

  10. A contribution to the analysis of the activity distribution of a radioactive source trapped inside a cylindrical volume, using the M.C.N.P.X. code

    Energy Technology Data Exchange (ETDEWEB)

    Portugal, L.; Oliveira, C.; Trindade, R.; Paiva, I. [Instituto Tecnologico e Nuclear, Dpto. Proteccao Radiologica e Seguranca Nuclear, Sacavem (Portugal)

    2006-07-01

    Orphan sources, activated materials, or materials contaminated with natural or artificial radionuclides have been detected in scrap metal products destined for recycling. The consequences of melting a source during the process could include economic, environmental and social impacts. From the point of view of radioactive waste management, a scenario of 100 ton of contaminated steel in one piece is a major problem. So, it is of great importance to develop a methodology that would allow us to predict the activity distribution inside a volume of steel. In previous work we were able to distinguish between the cases where the source is disseminated over the entire cylinder and the cases where it is concentrated in different volumes. Now the main goal is to distinguish between different radii of spherical source geometries trapped inside the cylinder. For this, a methodology was proposed based on the ratio of the counts of two regions of the gamma spectrum, obtained with a sodium iodide detector, using the M.C.N.P.X. Monte Carlo simulation code. These calculated ratios allow us to determine a function r = aR{sup 2} + bR + c, where R is the ratio between the counts of the two regions of the gamma spectrum and r is the radius of the source. For simulation purposes six {sup 60}Co sources were used (a point source, four spheres of 5 cm, 10 cm, 15 cm and 20 cm radius, and the overall contaminated cylinder) trapped inside two types of matrix, concrete and stainless steel. The methodology applied has been shown to predict and distinguish accurately the distribution of a source inside a material, roughly independently of the matrix and density considered. (authors)

  11. Source inversion in the full-wave tomography; Full wave tomography ni okeru source inversion

    Energy Technology Data Exchange (ETDEWEB)

    Tsuchiya, T [DIA Consultants Co. Ltd., Tokyo (Japan)

    1997-10-22

    In order to account for the characteristics of the vibration source in full-wave tomography (FWT), a study has been performed on a method to invert vibration source parameters together with the V(p)/V(s) distribution. The study extends an analysis method that takes as its basis the gradient method of Tarantola and the subspace method of Sambridge, and numerical experiments were conducted. Experiment No. 1 performed inversion of only the vibration source parameters, and experiment No. 2 executed simultaneous inversion of the V(p)/V(s) distribution and the vibration source parameters. The discussion revealed that an effective analytical procedure would be as follows: in order to predict maximum stress, the average vibration source parameters and the property parameters are first inverted simultaneously; in order to estimate each vibration source parameter with high accuracy, the property parameters are fixed and each vibration source parameter is inverted individually; and finally the derived vibration source parameters are fixed and the property parameters are inverted again from the initial values. 5 figs., 2 tabs.

  12. Spatial distribution of saline water and possible sources of intrusion ...

    African Journals Online (AJOL)

    The spatial distribution of saline water and possible sources of intrusion into Lekki lagoon and transitional effects on the lacustrine ichthyofaunal characteristics were studied during March, 2006 and February, 2008. The water quality analysis indicated that, salinity has drastically increased recently in the lagoon (0.007 to ...

  13. Predicted and measured velocity distribution in a model heat exchanger

    International Nuclear Information System (INIS)

    Rhodes, D.B.; Carlucci, L.N.

    1984-01-01

    This paper presents a comparison between numerical predictions, using the porous media concept, and measurements of the two-dimensional isothermal shell-side velocity distributions in a model heat exchanger. Computations and measurements were done with and without tubes present in the model. The effect of tube-to-baffle leakage was also investigated. The comparison was made to validate certain porous media concepts used in a computer code being developed to predict the detailed shell-side flow in a wide range of shell-and-tube heat exchanger geometries

  14. A hybrid approach to advancing quantitative prediction of tissue distribution of basic drugs in human

    International Nuclear Information System (INIS)

    Poulin, Patrick; Ekins, Sean; Theil, Frank-Peter

    2011-01-01

    A general toxicity of basic drugs is related to phospholipidosis in tissues. Therefore, it is essential to predict the tissue distribution of basic drugs to facilitate an initial estimate of that toxicity. The objective of the present study was to further assess the original prediction method that consisted of using the binding to red blood cells measured in vitro for the unbound drug (RBCu) as a surrogate for tissue distribution, by correlating it to unbound tissue:plasma partition coefficients (Kpu) of several tissues, and finally to predict volume of distribution at steady-state (Vss) in humans under in vivo conditions. This correlation method demonstrated inaccurate predictions of Vss for particular basic drugs that did not follow the original correlation principle. Therefore, the novelty of this study is to provide clarity on the actual hypotheses to identify i) the impact of pharmacological mode of action on the generic correlation of RBCu-Kpu, ii) additional mechanisms of tissue distribution for the outlier drugs, iii) molecular features and properties that differentiate compounds as outliers in the original correlation analysis in order to facilitate its applicability domain alongside the properties already used so far, and finally iv) to present a novel and refined correlation method that is superior to what has been previously published for the prediction of human Vss of basic drugs. Applying a refined correlation method after identifying outliers would facilitate the prediction of more accurate distribution parameters as key inputs used in physiologically based pharmacokinetic (PBPK) and phospholipidosis models.

  15. Model Predictive Control techniques with application to photovoltaic, DC Microgrid, and a multi-sourced hybrid energy system

    Science.gov (United States)

    Shadmand, Mohammad Bagher

    Renewable energy sources continue to gain popularity. However, two major limitations prevent widespread adoption: the availability and variability of the electricity generated, and the cost of the equipment. The focus of this dissertation is Model Predictive Control (MPC) for optimally sized photovoltaic (PV), DC microgrid, and multi-sourced hybrid energy systems. The main applications considered are: maximum power point tracking (MPPT) by MPC, droop predictive control of a DC microgrid, MPC of a grid-interaction inverter, and MPC of a capacitor-less VAR compensator based on a matrix converter (MC). This dissertation first investigates a multi-objective optimization technique for a hybrid distribution system. The variability of a high-penetration PV scenario is also studied when incorporated into the microgrid concept. Emerging PV technologies have enabled the creation of contoured and conformal PV surfaces; the effect of using non-planar PV modules on variability is also analyzed. The proposed predictive control to achieve the maximum power point for isolated and grid-tied PV systems speeds up the control loop, since it predicts the error before the switching signal is applied to the converter. Because of the low conversion efficiency of PV cells, the system must always operate at the maximum possible power point to be economical; the proposed MPPT technique can therefore capture more energy than conventional MPPT techniques from the same amount of installed solar panels. Because of the MPPT requirement, the output voltage of the converter may vary, so a droop control is needed to feed multiple arrays of photovoltaic systems to a DC bus in a microgrid community. The development of a droop control technique by means of predictive control is another application of this dissertation. Reactive power, denoted as Volt Ampere Reactive (VAR), has several undesirable consequences on the AC power system network, such as reduction in power transfer capability and increase in

  16. Prediction of vertical distribution and ambient development temperature of Baltic cod, Gadus morhua L., eggs

    DEFF Research Database (Denmark)

    Wieland, Kai; Jarre, Astrid

    1997-01-01

    An artificial neural network (ANN) model was established to predict the vertical distribution of Baltic cod eggs. Data from vertical distribution sampling in the Bornholm Basin over the period 1986-1995 were used to train and test the network, while data sets from sampling in 1996 were used for validation. The model explained 82% of the variance between observed and predicted relative frequencies of occurrence of the eggs in relation to salinity, temperature and oxygen concentration. The ANN fitted all observations satisfactorily except for one sampling date, where an exceptional hydrographic situation was observed. Mean ambient temperatures, calculated from the predicted vertical distributions of the eggs and used for the computation of egg developmental times, were overestimated by 0.05 degrees C on average. This corresponds to an error in prediction of egg developmental time of less than 1%...
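
    The sketch below shows the general shape of such a model, assuming synthetic hydrographic data and a scikit-learn multilayer perceptron in place of the study's network and survey data.

```python
# Minimal sketch of an ANN mapping hydrography to relative egg frequency; synthetic
# data and a scikit-learn MLP stand in for the study's network and survey data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n = 400
salinity = rng.uniform(7, 17, n)          # psu
temperature = rng.uniform(2, 10, n)       # deg C
oxygen = rng.uniform(10, 100, n)          # % saturation

# Hypothetical response: eggs concentrate near a salinity/oxygen "window".
freq = np.exp(-((salinity - 12) / 2) ** 2) * np.clip(oxygen, 0, 60) / 60
freq += 0.05 * rng.standard_normal(n)

X = np.column_stack([salinity, temperature, oxygen])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                   random_state=0))
model.fit(X, freq)
print("R^2 on training data:", round(model.score(X, freq), 2))
```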

  17. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local universe

    DEFF Research Database (Denmark)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene

    2017-01-01

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter we look for correlations between `warm' spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2) we demonstrate that sources with local density exceeding $10^{-6} \, \text{Mpc}^{-3}$ and neutrino luminosity $L_{\

  18. An efficient central DOA tracking algorithm for multiple incoherently distributed sources

    Science.gov (United States)

    Hassen, Sonia Ben; Samet, Abdelaziz

    2015-12-01

    In this paper, we develop a new tracking method for the direction of arrival (DOA) parameters assuming multiple incoherently distributed (ID) sources. The new approach is based on a simple covariance fitting optimization technique exploiting the central and noncentral moments of the source angular power densities to estimate the central DOAs. The current estimates are treated as measurements provided to the Kalman filter that model the dynamic property of directional changes for the moving sources. Then, the covariance-fitting-based algorithm and the Kalman filtering theory are combined to formulate an adaptive tracking algorithm. Our algorithm is compared to the fast approximated power iteration-total least square-estimation of signal parameters via rotational invariance technique (FAPI-TLS-ESPRIT) algorithm using the TLS-ESPRIT method and the subspace updating via FAPI-algorithm. It will be shown that the proposed algorithm offers an excellent DOA tracking performance and outperforms the FAPI-TLS-ESPRIT method especially at low signal-to-noise ratio (SNR) values. Moreover, the performances of the two methods increase as the SNR values increase. This increase is more prominent with the FAPI-TLS-ESPRIT method. However, their performances degrade when the number of sources increases. It will be also proved that our method depends on the form of the angular distribution function when tracking the central DOAs. Finally, it will be shown that the more the sources are spaced, the more the proposed method can exactly track the DOAs.
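
    As a rough illustration of the tracking stage, the sketch below feeds noisy central-DOA estimates (stand-ins for the covariance-fitting output) to a constant-velocity Kalman filter; the state model and noise levels are invented.

```python
# Minimal sketch of the tracking stage: noisy central-DOA estimates are smoothed by
# a constant-velocity Kalman filter. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(4)
dt, n_steps = 1.0, 60
true_doa = 20 + 0.5 * np.arange(n_steps)              # deg, slowly moving source
measurements = true_doa + 2.0 * rng.standard_normal(n_steps)

F = np.array([[1, dt], [0, 1]])                       # state: [DOA, DOA rate]
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)                                  # process noise
R = np.array([[4.0]])                                 # measurement noise (2 deg std)

x = np.array([measurements[0], 0.0])
P = np.eye(2)
tracked = []
for z in measurements:
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the latest DOA "measurement"
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    tracked.append(x[0])

print("final tracking error (deg):", round(abs(tracked[-1] - true_doa[-1]), 2))
```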

  19. Flows and Stratification of an Enclosure Containing Both Localised and Vertically Distributed Sources of Buoyancy

    Science.gov (United States)

    Partridge, Jamie; Linden, Paul

    2013-11-01

    We examine the flows and stratification established in a naturally ventilated enclosure containing both a localised and vertically distributed source of buoyancy. The enclosure is ventilated through upper and lower openings which connect the space to an external ambient. Small scale laboratory experiments were carried out with water as the working medium and buoyancy being driven directly by temperature differences. A point source plume gave localised heating while the distributed source was driven by a controllable heater mat located in the side wall of the enclosure. The transient temperatures, as well as steady state temperature profiles, were recorded and are reported here. The temperature profiles inside the enclosure were found to be dependent on the effective opening area A*, a combination of the upper and lower openings, and the ratio of buoyancy fluxes from the distributed and localised source Ψ =Bw/Bp . Industrial CASE award with ARUP.

  20. Predictive models of threatened plant species distribution in the Iberian arid south-east

    OpenAIRE

    Benito, Blas M.

    2013-01-01

    Poster on the distribution of three rare, endemic and endangered annual plants of arid zones in the south-eastern Iberian peninsula. Presented in the workshop "Predictive Modelling of Species Distribution: New Tools for the XXI Century (Baeza, Spain, november 2005).

  1. Prediction of in-phantom dose distribution using in-air neutron beam characteristics for BNCS

    International Nuclear Information System (INIS)

    Verbeke, Jerome M.

    1999-01-01

    A monoenergetic neutron beam simulation study is carried out to determine the optimal neutron energy range for treatment of rheumatoid arthritis using radiation synovectomy. The goal of the treatment is the ablation of diseased synovial membranes in joints, such as knees and fingers. This study focuses on human knee joints. Two figures-of-merit are used to measure the neutron beam quality, the ratio of the synovium absorbed dose to the skin absorbed dose, and the ratio of the synovium absorbed dose to the bone absorbed dose. It was found that (a) thermal neutron beams are optimal for treatment, (b) similar absorbed dose rates and therapeutic ratios are obtained with monodirectional and isotropic neutron beams. Computation of the dose distribution in a human knee requires the simulation of particle transport from the neutron source to the knee phantom through the moderator. A method was developed to predict the dose distribution in a knee phantom from any neutron and photon beam spectra incident on the knee. This method was revealed to be reasonably accurate and enabled one to reduce by a factor of 10 the particle transport simulation time by modeling the moderator only

  2. Prediction of in-phantom dose distribution using in-air neutron beam characteristics for BNCS

    Energy Technology Data Exchange (ETDEWEB)

    Verbeke, Jerome M.

    1999-12-14

    A monoenergetic neutron beam simulation study is carried out to determine the optimal neutron energy range for treatment of rheumatoid arthritis using radiation synovectomy. The goal of the treatment is the ablation of diseased synovial membranes in joints, such as knees and fingers. This study focuses on human knee joints. Two figures-of-merit are used to measure the neutron beam quality, the ratio of the synovium absorbed dose to the skin absorbed dose, and the ratio of the synovium absorbed dose to the bone absorbed dose. It was found that (a) thermal neutron beams are optimal for treatment, (b) similar absorbed dose rates and therapeutic ratios are obtained with monodirectional and isotropic neutron beams. Computation of the dose distribution in a human knee requires the simulation of particle transport from the neutron source to the knee phantom through the moderator. A method was developed to predict the dose distribution in a knee phantom from any neutron and photon beam spectra incident on the knee. This method was revealed to be reasonably accurate and enabled one to reduce by a factor of 10 the particle transport simulation time by modeling the moderator only.

  3. Panchromatic spectral energy distributions of Herschel sources

    Science.gov (United States)

    Berta, S.; Lutz, D.; Santini, P.; Wuyts, S.; Rosario, D.; Brisbin, D.; Cooray, A.; Franceschini, A.; Gruppioni, C.; Hatziminaoglou, E.; Hwang, H. S.; Le Floc'h, E.; Magnelli, B.; Nordon, R.; Oliver, S.; Page, M. J.; Popesso, P.; Pozzetti, L.; Pozzi, F.; Riguccini, L.; Rodighiero, G.; Roseboom, I.; Scott, D.; Symeonidis, M.; Valtchanov, I.; Viero, M.; Wang, L.

    2013-03-01

    Combining far-infrared Herschel photometry from the PACS Evolutionary Probe (PEP) and Herschel Multi-tiered Extragalactic Survey (HerMES) guaranteed time programs with ancillary datasets in the GOODS-N, GOODS-S, and COSMOS fields, it is possible to sample the 8-500 μm spectral energy distributions (SEDs) of galaxies with at least 7-10 bands. Extending to the UV, optical, and near-infrared, the number of bands increases up to 43. We reproduce the distribution of galaxies in a carefully selected restframe ten colors space, based on this rich data-set, using a superposition of multivariate Gaussian modes. We use this model to classify galaxies and build median SEDs of each class, which are then fitted with a modified version of the magphys code that combines stellar light, emission from dust heated by stars and a possible warm dust contribution heated by an active galactic nucleus (AGN). The color distribution of galaxies in each of the considered fields can be well described with the combination of 6-9 classes, spanning a large range of far- to near-infrared luminosity ratios, as well as different strength of the AGN contribution to bolometric luminosities. The defined Gaussian grouping is used to identify rare or odd sources. The zoology of outliers includes Herschel-detected ellipticals, very blue z ~ 1 Ly-break galaxies, quiescent spirals, and torus-dominated AGN with star formation. Out of these groups and outliers, a new template library is assembled, consisting of 32 SEDs describing the intrinsic scatter in the restframe UV-to-submm colors of infrared galaxies. This library is tested against L(IR) estimates with and without Herschel data included, and compared to eight other popular methods often adopted in the literature. When implementing Herschel photometry, these approaches produce L(IR) values consistent with each other within a median absolute deviation of 10-20%, the scatter being dominated more by fine tuning of the codes, rather than by the choice of
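
    The following sketch reproduces the grouping step in miniature: a mixture of multivariate Gaussians is fitted to synthetic "colors", galaxies are assigned to classes, and low-likelihood objects are flagged as candidate outliers. The data and number of components are invented.

```python
# Minimal sketch of grouping sources in a rest-frame color space with a mixture of
# multivariate Gaussians; the 10-D color data and component count are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
n_colors = 10

# Hypothetical "classes" of galaxies as Gaussian blobs in color space.
centers = rng.normal(0, 2, size=(4, n_colors))
colors = np.vstack([c + 0.3 * rng.standard_normal((300, n_colors)) for c in centers])

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(colors)

labels = gmm.predict(colors)               # class membership -> median SED per class
log_like = gmm.score_samples(colors)       # low values flag rare or odd sources
outliers = np.argsort(log_like)[:10]
print("class sizes:", np.bincount(labels), "| candidate outliers:", outliers[:5])
```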

  4. Using Bayesian Belief Network (BBN) modelling for Rapid Source Term Prediction. RASTEP Phase 1

    International Nuclear Information System (INIS)

    Knochenhauer, M.; Swaling, V.H.; Alfheim, P.

    2012-09-01

    The project is connected to the development of RASTEP, a computerized source term prediction tool aimed at providing a basis for improving off-site emergency management. RASTEP uses Bayesian belief networks (BBN) to model severe accident progression in a nuclear power plant in combination with pre-calculated source terms (i.e., amount, timing, and pathway of released radio-nuclides). The output is a set of possible source terms with associated probabilities. In the NKS project, a number of complex issues associated with the integration of probabilistic and deterministic analyses are addressed. This includes issues related to the method for estimating source terms, signal validation, and sensitivity analysis. One major task within Phase 1 of the project addressed the problem of how to make the source term module flexible enough to give reliable and valid output throughout the accident scenario. Of the alternatives evaluated, it is recommended that RASTEP is connected to a fast running source term prediction code, e.g., MARS, with a possibility of updating source terms based on real-time observations. (Author)
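
    A minimal sketch of the underlying idea, reduced to a single observation node, is given below: discrete accident scenarios (each of which would map to a pre-calculated source term) have their probabilities updated by Bayes' rule. The scenarios, priors, and conditional probabilities are invented for illustration and do not come from RASTEP.

```python
# Minimal sketch of the BBN idea with one observation node: posteriors over a few
# accident scenarios (each tied to a pre-calculated source term) are updated by
# Bayes' rule. Scenarios, priors, and CPT values are invented for illustration.
import numpy as np

scenarios = ["intact containment", "filtered venting", "early containment failure"]
prior = np.array([0.90, 0.08, 0.02])

# CPT: P(high containment pressure observed | scenario)
p_high_pressure = np.array([0.05, 0.70, 0.95])

def posterior(observed_high_pressure: bool) -> np.ndarray:
    likelihood = p_high_pressure if observed_high_pressure else 1 - p_high_pressure
    unnorm = prior * likelihood
    return unnorm / unnorm.sum()

post = posterior(observed_high_pressure=True)
for s, p in zip(scenarios, post):
    print(f"P({s} | high pressure) = {p:.3f}")
# Each scenario would then be mapped to its pre-calculated source term.
```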

  5. Using Bayesian Belief Network (BBN) modelling for Rapid Source Term Prediction. RASTEP Phase 1

    Energy Technology Data Exchange (ETDEWEB)

    Knochenhauer, M.; Swaling, V.H.; Alfheim, P. [Scandpower AB, Sundbyberg (Sweden)

    2012-09-15

    The project is connected to the development of RASTEP, a computerized source term prediction tool aimed at providing a basis for improving off-site emergency management. RASTEP uses Bayesian belief networks (BBN) to model severe accident progression in a nuclear power plant in combination with pre-calculated source terms (i.e., amount, timing, and pathway of released radio-nuclides). The output is a set of possible source terms with associated probabilities. In the NKS project, a number of complex issues associated with the integration of probabilistic and deterministic analyses are addressed. This includes issues related to the method for estimating source terms, signal validation, and sensitivity analysis. One major task within Phase 1 of the project addressed the problem of how to make the source term module flexible enough to give reliable and valid output throughout the accident scenario. Of the alternatives evaluated, it is recommended that RASTEP is connected to a fast running source term prediction code, e.g., MARS, with a possibility of updating source terms based on real-time observations. (Author)

  6. Production, Distribution, and Applications of Californium-252 Neutron Sources

    International Nuclear Information System (INIS)

    Balo, P.A.; Knauer, J.B.; Martin, R.C.

    1999-01-01

    The radioisotope 252 Cf is routinely encapsulated into compact, portable, intense neutron sources with a 2.6-year half-life. A source the size of a person's little finger can emit up to 10^11 neutrons/s. Californium-252 is used commercially as a reliable, cost-effective neutron source for prompt gamma neutron activation analysis (PGNAA) of coal, cement, and minerals, as well as for detection and identification of explosives, land mines, and unexploded military ordnance. Other uses are neutron radiography, nuclear waste assays, reactor start-up sources, calibration standards, and cancer therapy. The inherent safety of source encapsulations is demonstrated by 30 years of experience and by U.S. Bureau of Mines tests of source survivability during explosions. The production and distribution center for the U.S. Department of Energy (DOE) Californium Program is the Radiochemical Engineering Development Center (REDC) at Oak Ridge National Laboratory (ORNL). DOE sells 252 Cf to commercial reencapsulators domestically and internationally. Sealed 252 Cf sources are also available for loan to agencies and subcontractors of the U.S. government and to universities for educational, research, and medical applications. The REDC has established the Californium User Facility (CUF) for Neutron Science to make its large inventory of 252 Cf sources available to researchers for irradiations inside uncontaminated hot cells. Experiments at the CUF include a land mine detection system, neutron damage testing of solid-state detectors, irradiation of human cancer cells for boron neutron capture therapy experiments, and irradiation of rice to induce genetic mutations

  7. Geometric effects in alpha particle detection from distributed air sources

    International Nuclear Information System (INIS)

    Gil, L.R.; Leitao, R.M.S.; Marques, A.; Rivera, A.

    1994-08-01

    Geometric effects associated with the detection of alpha particles from distributed air sources, as occurs in radon and thoron measurements, are revisited. The volume outside which no alpha particle may reach the entrance window of the detector is defined and determined analytically for rectangular and cylindrical symmetry geometries. (author). 3 figs

  8. Open source machine-learning algorithms for the prediction of optimal cancer drug therapies.

    Science.gov (United States)

    Huang, Cai; Mezencev, Roman; McDonald, John F; Vannberg, Fredrik

    2017-01-01

    Precision medicine is a rapidly growing area of modern medical science and open source machine-learning codes promise to be a critical component for the successful development of standardized and automated analysis of patient data. One important goal of precision cancer medicine is the accurate prediction of optimal drug therapies from the genomic profiles of individual patient tumors. We introduce here an open source software platform that employs a highly versatile support vector machine (SVM) algorithm combined with a standard recursive feature elimination (RFE) approach to predict personalized drug responses from gene expression profiles. Drug specific models were built using gene expression and drug response data from the National Cancer Institute panel of 60 human cancer cell lines (NCI-60). The models are highly accurate in predicting the drug responsiveness of a variety of cancer cell lines including those comprising the recent NCI-DREAM Challenge. We demonstrate that predictive accuracy is optimized when the learning dataset utilizes all probe-set expression values from a diversity of cancer cell types without pre-filtering for genes generally considered to be "drivers" of cancer onset/progression. Application of our models to publicly available ovarian cancer (OC) patient gene expression datasets generated predictions consistent with observed responses previously reported in the literature. By making our algorithm "open source", we hope to facilitate its testing in a variety of cancer types and contexts leading to community-driven improvements and refinements in subsequent applications.
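
    The sketch below illustrates the SVM-plus-RFE construction on synthetic expression data using scikit-learn; it is not the released platform, and the NCI-60 data, feature counts, and tuning are not reproduced.

```python
# Minimal sketch of the SVM + RFE pipeline on synthetic expression data; the real
# models were trained on NCI-60 expression profiles and measured drug responses.
import numpy as np
from sklearn.svm import SVR
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_lines, n_probes = 60, 500                 # e.g. 60 cell lines, 500 probe sets
X = rng.standard_normal((n_lines, n_probes))
true_w = np.zeros(n_probes)
true_w[:20] = rng.standard_normal(20)       # only a few probes carry signal
y = X @ true_w + 0.5 * rng.standard_normal(n_lines)   # hypothetical drug response

# Linear SVM regressor wrapped in recursive feature elimination.
selector = RFE(estimator=SVR(kernel="linear"), n_features_to_select=50, step=0.1)
scores = cross_val_score(selector, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", np.round(scores, 2))
```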

  9. Open source machine-learning algorithms for the prediction of optimal cancer drug therapies.

    Directory of Open Access Journals (Sweden)

    Cai Huang

    Full Text Available Precision medicine is a rapidly growing area of modern medical science and open source machine-learning codes promise to be a critical component for the successful development of standardized and automated analysis of patient data. One important goal of precision cancer medicine is the accurate prediction of optimal drug therapies from the genomic profiles of individual patient tumors. We introduce here an open source software platform that employs a highly versatile support vector machine (SVM) algorithm combined with a standard recursive feature elimination (RFE) approach to predict personalized drug responses from gene expression profiles. Drug specific models were built using gene expression and drug response data from the National Cancer Institute panel of 60 human cancer cell lines (NCI-60). The models are highly accurate in predicting the drug responsiveness of a variety of cancer cell lines including those comprising the recent NCI-DREAM Challenge. We demonstrate that predictive accuracy is optimized when the learning dataset utilizes all probe-set expression values from a diversity of cancer cell types without pre-filtering for genes generally considered to be "drivers" of cancer onset/progression. Application of our models to publicly available ovarian cancer (OC) patient gene expression datasets generated predictions consistent with observed responses previously reported in the literature. By making our algorithm "open source", we hope to facilitate its testing in a variety of cancer types and contexts leading to community-driven improvements and refinements in subsequent applications.

  10. Spatial distribution of charged particles along the ion-optical axis in electron cyclotron resonance ion sources. Experimental results

    International Nuclear Information System (INIS)

    Panitzsch, Lauri

    2013-01-01

    The experimental determination of the spatial distribution of charged particles along the ion-optical axis in electron cyclotron resonance ion sources (ECRIS) defines the focus of this thesis. The spatial distributions of different ion species were obtained in the object plane of the bending magnet (∼45 cm downstream from the plasma electrode) and in the plane of the plasma electrode itself, both in high spatial resolution. The results show that each of the different ion species forms a bloated, triangular structure in the aperture of the plasma electrode. The geometry and the orientation of these structures are defined by the superposition of the radial and axial magnetic fields. The radial extent of each structure is defined by the charge of the ion. Higher charge states occupy smaller, more concentrated structures. The total current density increases towards the center of the plasma electrode. The circular and star-like structures that can be observed in the beam profiles of strongly focused, extracted ion beams are each dominated by ions of a single charge state. In addition, the spatially resolved current density distribution of charged particles in the plasma chamber that impinge on the plasma electrode was determined, differentiating between ions and electrons. The experimental results of this work show that the electrons of the plasma are strongly connected to the magnetic field lines in the source and thus spatially well confined in a triangular-like structure. The intensity of the electrons increases towards the center of the plasma electrode and the plasma chamber, as well. These electrons are surrounded by a spatially far less confined and less intense ion population. All the findings mentioned above were already predicted in parts by simulations of different groups. However, the results presented within this thesis represent the first (and by now only) direct experimental verification of those predictions and are qualitatively transferable to other

  11. Spatial distribution of charged particles along the ion-optical axis in electron cyclotron resonance ion sources. Experimental results

    Energy Technology Data Exchange (ETDEWEB)

    Panitzsch, Lauri

    2013-02-08

    The experimental determination of the spatial distribution of charged particles along the ion-optical axis in electron cyclotron resonance ion sources (ECRIS) defines the focus of this thesis. The spatial distributions of different ion species were obtained in the object plane of the bending magnet ({approx}45 cm downstream from the plasma electrode) and in the plane of the plasma electrode itself, both in high spatial resolution. The results show that each of the different ion species forms a bloated, triangular structure in the aperture of the plasma electrode. The geometry and the orientation of these structures are defined by the superposition of the radial and axial magnetic fields. The radial extent of each structure is defined by the charge of the ion. Higher charge states occupy smaller, more concentrated structures. The total current density increases towards the center of the plasma electrode. The circular and star-like structures that can be observed in the beam profiles of strongly focused, extracted ion beams are each dominated by ions of a single charge state. In addition, the spatially resolved current density distribution of charged particles in the plasma chamber that impinge on the plasma electrode was determined, differentiating between ions and electrons. The experimental results of this work show that the electrons of the plasma are strongly connected to the magnetic field lines in the source and thus spatially well confined in a triangular-like structure. The intensity of the electrons increases towards the center of the plasma electrode and the plasma chamber, as well. These electrons are surrounded by a spatially far less confined and less intense ion population. All the findings mentioned above were already predicted in parts by simulations of different groups. However, the results presented within this thesis represent the first (and by now only) direct experimental verification of those predictions and are qualitatively transferable to

  12. Climatic associations of British species distributions show good transferability in time but low predictive accuracy for range change.

    Directory of Open Access Journals (Sweden)

    Giovanni Rapacciuolo

    Full Text Available Conservation planners often wish to predict how species distributions will change in response to environmental changes. Species distribution models (SDMs) are the primary tool for making such predictions. Many methods are widely used; however, they all make simplifying assumptions, and predictions can therefore be subject to high uncertainty. With global change well underway, field records of observed range shifts are increasingly being used for testing SDM transferability. We used an unprecedented distribution dataset documenting recent range changes of British vascular plants, birds, and butterflies to test whether correlative SDMs based on climate change provide useful approximations of potential distribution shifts. We modelled past species distributions from climate using nine single techniques and a consensus approach, and projected the geographical extent of these models to a more recent time period based on climate change; we then compared model predictions with recent observed distributions in order to estimate the temporal transferability and prediction accuracy of our models. We also evaluated the relative effect of methodological and taxonomic variation on the performance of SDMs. Models showed good transferability in time when assessed using widespread metrics of accuracy. However, models had low accuracy in predicting where occupancy status changed between time periods, especially for declining species. Model performance varied greatly among species within major taxa, but there was also considerable variation among modelling frameworks. Past climatic associations of British species distributions retain a high explanatory power when transferred to recent time--due to their accuracy in predicting large areas retained by species--but fail to capture relevant predictors of change. We strongly emphasize the need for caution when using SDMs to predict shifts in species distributions: high explanatory power on temporally-independent records

  13. On Distributions of Emission Sources and Speed-of-Sound in Proton-Proton (Proton-Antiproton Collisions

    Directory of Open Access Journals (Sweden)

    Li-Na Gao

    2015-01-01

    Full Text Available The revised (three-source) Landau hydrodynamic model is used in this paper to study the (pseudo)rapidity distributions of charged particles produced in proton-proton and proton-antiproton collisions at high energies. The central source is assumed to contribute with a Gaussian function which covers the rapidity distribution region as widely as possible. The target and projectile sources are assumed to emit particles isotropically in their respective rest frames. The model calculations obtained with a Monte Carlo method are fitted to the experimental data over an energy range from 0.2 to 13 TeV. The values of the squared speed-of-sound parameter in different collisions are then extracted from the width of the rapidity distributions.

  14. Optimal planning of multiple distributed generation sources in distribution networks: A new approach

    Energy Technology Data Exchange (ETDEWEB)

    AlRashidi, M.R., E-mail: malrash2002@yahoo.com [Department of Electrical Engineering, College of Technological Studies, Public Authority for Applied Education and Training (PAAET) (Kuwait); AlHajri, M.F., E-mail: mfalhajri@yahoo.com [Department of Electrical Engineering, College of Technological Studies, Public Authority for Applied Education and Training (PAAET) (Kuwait)

    2011-10-15

    Highlights: {yields} A new hybrid PSO for optimal DGs placement and sizing. {yields} Statistical analysis to fine tune PSO parameters. {yields} Novel constraint handling mechanism to handle different constraints types. - Abstract: An improved particle swarm optimization algorithm (PSO) is presented for optimal planning of multiple distributed generation sources (DG). This problem can be divided into two sub-problems: the DG optimal size (continuous optimization) and location (discrete optimization) to minimize real power losses. The proposed approach addresses the two sub-problems simultaneously using an enhanced PSO algorithm capable of handling multiple DG planning in a single run. A design of experiment is used to fine tune the proposed approach via proper analysis of PSO parameters interaction. The proposed algorithm treats the problem constraints differently by adopting a radial power flow algorithm to satisfy the equality constraints, i.e. power flows in distribution networks, while the inequality constraints are handled by making use of some of the PSO features. The proposed algorithm was tested on the practical 69-bus power distribution system. Different test cases were considered to validate the proposed approach consistency in detecting optimal or near optimal solution. Results are compared with those of Sequential Quadratic Programming.
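
    A textbook PSO on a stand-in loss surface is sketched below to illustrate the optimization loop; the paper's hybrid algorithm additionally embeds a radial power-flow solver, handles the discrete siting decision, and enforces network constraints.

```python
# Minimal textbook PSO on a stand-in loss surface; the paper's hybrid PSO embeds a
# radial power-flow solver, discrete DG siting, and network constraint handling.
import numpy as np

rng = np.random.default_rng(7)

def losses(sizes):
    # Hypothetical real-power-loss proxy: minimised near DG sizes of 1.2 and 0.8 MW.
    return (sizes[:, 0] - 1.2) ** 2 + (sizes[:, 1] - 0.8) ** 2 + 0.05

n_particles, n_iter, dim = 20, 100, 2
lo, hi = 0.0, 3.0                                   # MW bounds for each DG
pos = rng.uniform(lo, hi, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest, pbest_val = pos.copy(), losses(pos)
gbest = pbest[np.argmin(pbest_val)]

w, c1, c2 = 0.7, 1.5, 1.5                           # inertia and acceleration weights
for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = losses(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best DG sizes (MW):", np.round(gbest, 2), "| loss proxy:", round(pbest_val.min(), 3))
```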

  15. Optimal planning of multiple distributed generation sources in distribution networks: A new approach

    International Nuclear Information System (INIS)

    AlRashidi, M.R.; AlHajri, M.F.

    2011-01-01

    Highlights: → A new hybrid PSO for optimal DGs placement and sizing. → Statistical analysis to fine tune PSO parameters. → Novel constraint handling mechanism to handle different constraints types. - Abstract: An improved particle swarm optimization algorithm (PSO) is presented for optimal planning of multiple distributed generation sources (DG). This problem can be divided into two sub-problems: the DG optimal size (continuous optimization) and location (discrete optimization) to minimize real power losses. The proposed approach addresses the two sub-problems simultaneously using an enhanced PSO algorithm capable of handling multiple DG planning in a single run. A design of experiment is used to fine tune the proposed approach via proper analysis of PSO parameters interaction. The proposed algorithm treats the problem constraints differently by adopting a radial power flow algorithm to satisfy the equality constraints, i.e. power flows in distribution networks, while the inequality constraints are handled by making use of some of the PSO features. The proposed algorithm was tested on the practical 69-bus power distribution system. Different test cases were considered to validate the proposed approach consistency in detecting optimal or near optimal solution. Results are compared with those of Sequential Quadratic Programming.

  16. Negative Binomial Distribution and the multiplicity moments at the LHC

    International Nuclear Information System (INIS)

    Praszalowicz, Michal

    2011-01-01

    In this work we show that the latest LHC data on the multiplicity moments C2-C5 are well described by a two-step model in the form of a convolution of the Poisson distribution with an energy-dependent source function. For the source function we take the Γ distribution, whose convolution with the Poisson distribution gives the Negative Binomial Distribution. No unexpected behavior of the Negative Binomial Distribution parameter k is found. We also give predictions for the higher energies of 10 and 14 TeV.
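
    The two-step construction can be sketched numerically as below, assuming illustrative parameter values: per-event mean multiplicities are drawn from a Γ source function, Poisson fluctuations are superimposed, and the normalized moments are then computed.

```python
# Minimal sketch of the two-step model: the per-event mean multiplicity is drawn from
# a Gamma "source" distribution and the observed multiplicity from a Poisson around
# it (the compound is a Negative Binomial). Parameter values here are illustrative.
import numpy as np

rng = np.random.default_rng(8)
n_events = 200_000
mean_n, k = 30.0, 1.5          # hypothetical mean multiplicity and NBD shape parameter

# Gamma source function with mean <n> and shape k, then Poisson fluctuations.
nbar = rng.gamma(shape=k, scale=mean_n / k, size=n_events)
n = rng.poisson(nbar)

moments = {q: np.mean(n.astype(float) ** q) / np.mean(n) ** q for q in range(2, 6)}
print("C2..C5:", {q: round(v, 3) for q, v in moments.items()})
# For reference, the compound distribution gives C2 = 1 + 1/k + 1/<n>.
```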

  17. Predicting Dynamical Crime Distribution From Environmental and Social Influences

    Directory of Open Access Journals (Sweden)

    Simon Garnier

    2018-05-01

    Full Text Available Understanding how social and environmental factors contribute to the spatio-temporal distribution of criminal activities is a fundamental question in modern criminology. Thanks to the development of statistical techniques such as Risk Terrain Modeling (RTM), it is possible to evaluate precisely the criminogenic contribution of environmental features to a given location. However, the role of social information in shaping the distribution of criminal acts is largely understudied in the criminological research literature. In this paper we investigate the existence of spatio-temporal correlations between successive robbery events, after controlling for environmental influences as estimated by RTM. We begin by showing that a robbery event increases the likelihood of future robberies at and in the neighborhood of its location. This event-dependent influence decreases exponentially with time and as an inverse function of the distance to the original event. We then combine event-dependence and environmental influences in a simulation model to predict robbery patterns at the scale of a large city (Newark, NJ). We show that this model significantly improves upon the predictions of RTM alone, and of a model taking into account event-dependence only, when tested against real data that were not used to calibrate either model. We conclude that combining risk from exposure (past events) and vulnerability (environment), following from the Theory of Risky Places, when modeling crime distribution can improve crime suppression and prevention efforts by providing more accurate forecasting of the most likely locations of criminal events.
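
    The event-dependent term can be sketched as below; the kernel form follows the qualitative description above (exponential decay in time, inverse decay with distance), but the functional details and parameter values are invented rather than taken from the fitted model.

```python
# Minimal sketch of the event-dependent risk term: each past robbery adds risk that
# decays exponentially in time and as an inverse function of distance. The kernel
# form and parameter values are illustrative, not the fitted values from the paper.
import numpy as np

def event_risk(location, t_now, events, amplitude=1.0, tau_days=14.0, d0=100.0):
    """Sum of contributions from past events at `location` and time `t_now`.

    events: rows of (x, y, t) with coordinates in metres and times in days.
    """
    events = np.asarray(events, dtype=float)
    dt = t_now - events[:, 2]
    dist = np.hypot(location[0] - events[:, 0], location[1] - events[:, 1])
    valid = dt >= 0                                  # only events in the past count
    contrib = amplitude * np.exp(-dt / tau_days) / (1.0 + dist / d0)
    return float(contrib[valid].sum())

past_events = [(120, 80, 0.0), (500, 420, 5.0), (130, 95, 9.0)]
print(round(event_risk((125, 85), t_now=10.0, events=past_events), 3))
# Total risk would add a static environmental (RTM) term to this event-dependent term.
```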

  18. Multi-Model Prediction for Demand Forecast in Water Distribution Networks

    Directory of Open Access Journals (Sweden)

    Rodrigo Lopez Farias

    2018-03-01

    Full Text Available This paper presents a multi-model predictor called Qualitative Multi-Model Predictor Plus (QMMP+) for demand forecast in water distribution networks. QMMP+ is based on the decomposition of the quantitative and qualitative information of the time-series. The quantitative component (i.e., the daily consumption prediction) is forecasted and the pattern mode estimated using a Nearest Neighbor (NN) classifier and a Calendar. The patterns are updated via a simple Moving Average scheme. The NN classifier and the Calendar are executed simultaneously every period and the most suited model for prediction is selected using a probabilistic approach. The proposed solution for water demand forecast is compared against Radial Basis Function Artificial Neural Networks (RBF-ANN), the statistical Autoregressive Integrated Moving Average (ARIMA), and Double Seasonal Holt-Winters (DSHW) approaches, providing the best results when applied to real demand of the Barcelona Water Distribution Network. QMMP+ has demonstrated that the special modelling treatment of water consumption patterns improves the forecasting accuracy.

  19. Distribution of tessera terrain on Venus: Prediction for Magellan

    International Nuclear Information System (INIS)

    Bindschadler, D.L.; Head, J.W.; Kreslavsky, M.A.; Shkuratov, Yu.G.; Ivanov, M.A.; Basilevsky, A.T.

    1990-01-01

    Tessera terrain is the dominant tectonic unit in the northern hemisphere of Venus and is characterized by complex sets of intersecting structural trends and distinctive radar properties due to a high degree of meter and sub-meter scale (5 cm to 10 m) roughness. Based on these distinctive radar properties, a prediction of the global distribution of tessera can be made using Pioneer Venus (PV) reflectivity and roughness data. Where available, Venera 15/16 and Arecibo images and PV diffuse scattering data were used to evaluate the prediction. From this assessment, the authors conclude that most of the regions with prediction values greater than 0.6 (out of 1) are likely to be tessera, and are almost certain to be tectonically deformed. Lada Terra and Phoebe Regio are very likely to contain tessera terrain, while much of Aphrodite Terra is most likely to be either tessera or a landform which has not yet been recognized on Venus. This prediction map will assist in targeting Magellan investigations of Venus tectonics

  20. The effects of model and data complexity on predictions from species distributions models

    DEFF Research Database (Denmark)

    García-Callejas, David; Bastos, Miguel

    2016-01-01

    How complex a model needs to be to provide useful predictions is a matter of continuous debate across environmental sciences. In the species distributions modelling literature, studies have demonstrated that more complex models tend to provide better fits. However, studies have also shown...... that predictive performance does not always increase with complexity. Testing of species distributions models is challenging because independent data for testing are often lacking, but a more general problem is that model complexity has never been formally described in such studies. Here, we systematically...

  1. Shielding Characteristics Using an Ultrasonic Configurable Fan Artificial Noise Source to Generate Modes - Experimental Measurements and Analytical Predictions

    Science.gov (United States)

    Sutliff, Daniel L.; Walker, Bruce E.

    2014-01-01

    An Ultrasonic Configurable Fan Artificial Noise Source (UCFANS) was designed, built, and tested in support of the NASA Langley Research Center's 14x22 wind tunnel test of the Hybrid Wing Body (HWB) full 3-D 5.8% scale model. The UCFANS is a 5.8% rapid prototype scale model of a high-bypass turbofan engine that can generate the tonal signature of proposed engines using artificial sources (no flow). The purpose of the program was to provide an estimate of the acoustic shielding benefits possible from mounting an engine on the upper surface of a wing; a flat plate model was used as the shielding surface. Simple analytical simulations were used to preview the radiation patterns - Fresnel knife-edge diffraction was coupled with a dense phased array of point sources to compute shielded and unshielded sound pressure distributions for potential test geometries and excitation modes. Contour plots of sound pressure levels, and integrated power levels, from nacelle alone and shielded configurations for both the experimental measurements and the analytical predictions are presented in this paper.

  2. On distributed model predictive control for vehicle platooning with a recursive feasibility guarantee

    NARCIS (Netherlands)

    Shi, Shengling; Lazar, Mircea

    2017-01-01

    This paper proposes a distributed model predictive control algorithm for vehicle platooning and more generally networked systems in a chain structure. The distributed models of the vehicle platoon are coupled through the input of the preceding vehicles. Using the principles of robust model

  3. High frequency seismic signal generated by landslides on complex topographies: from point source to spatially distributed sources

    Science.gov (United States)

    Mangeney, A.; Kuehnert, J.; Capdeville, Y.; Durand, V.; Stutzmann, E.; Kone, E. H.; Sethi, S.

    2017-12-01

    During their flow along the topography, landslides generate seismic waves in a wide frequency range. These so-called landquakes can be recorded at very large distances (a few hundreds of km for large landslides). The recorded signals depend on the landslide seismic source and the seismic wave propagation. If the wave propagation is well understood, the seismic signals can be inverted for the seismic source and thus can be used to get information on the landslide properties and dynamics. Analysis and modeling of long period seismic signals (10-150s) have helped in this way to discriminate between different landslide scenarios and to constrain rheological parameters (e.g. Favreau et al., 2010). This was possible as topography poorly affects wave propagation at these long periods and the landslide seismic source can be approximated as a point source. In the near-field and at higher frequencies (> 1 Hz) the spatial extent of the source has to be taken into account and the influence of the topography on the recorded seismic signal should be quantified in order to extract information on the landslide properties and dynamics. The characteristic signature of distributed sources and varying topographies is studied as a function of frequency and recording distance. The time-dependent spatial distribution of the forces applied to the ground by the landslide are obtained using granular flow numerical modeling on 3D topography. The generated seismic waves are simulated using the spectral element method. The simulated seismic signal is compared to observed seismic data from rockfalls at the Dolomieu Crater of Piton de la Fournaise (La Réunion). Favreau, P., Mangeney, A., Lucas, A., Crosta, G., and Bouchut, F. (2010). Numerical modeling of landquakes. Geophysical Research Letters, 37(15):1-5.

  4. Predicting dihedral angle probability distributions for protein coil residues from primary sequence using neural networks

    DEFF Research Database (Denmark)

    Helles, Glennie; Fonseca, Rasmus

    2009-01-01

    residue in the input-window. The trained neural network shows a significant improvement (4-68%) in predicting the most probable bin (covering a 30°×30° area of the dihedral angle space) for all amino acids in the data set compared to first order statistics. An accuracy comparable to that of secondary...... seem to have a significant influence on the dihedral angles adopted by the individual amino acids in coil segments. In this work we attempt to predict a probability distribution of these dihedral angles based on the flanking residues. While attempts to predict dihedral angles of coil segments have been...... done previously, none have, to our knowledge, presented comparable results for the probability distribution of dihedral angles. Results: In this paper we develop an artificial neural network that uses an input-window of amino acids to predict a dihedral angle probability distribution for the middle...

  5. Two sample Bayesian prediction intervals for order statistics based on the inverse exponential-type distributions using right censored sample

    Directory of Open Access Journals (Sweden)

    M.M. Mohie El-Din

    2011-10-01

    Full Text Available In this paper, two sample Bayesian prediction intervals for order statistics (OS) are obtained. This prediction is based on a certain class of the inverse exponential-type distributions using a right censored sample. A general class of prior density functions is used and the predictive cumulative function is obtained in the two samples case. The class of the inverse exponential-type distributions includes several important distributions such as the inverse Weibull distribution, the inverse Burr distribution, the loglogistic distribution, the inverse Pareto distribution and the inverse paralogistic distribution. Special cases of the inverse Weibull model such as the inverse exponential model and the inverse Rayleigh model are considered.

  6. What are the most crucial soil factors for predicting the distribution of alpine plant species?

    Science.gov (United States)

    Buri, A.; Pinto-Figueroa, E.; Yashiro, E.; Guisan, A.

    2017-12-01

    Nowadays the use of species distribution models (SDM) is common for predicting in space and time the distribution of organisms living in the critical zone. The realized environmental niche concept behind the development of SDM implies that many environmental factors must be accounted for simultaneously to predict species distributions. Climatic and topographic factors are often included as primary predictors, whereas soil factors are frequently neglected, mainly due to the paucity of soil information available spatially and temporally. Furthermore, among existing studies, most included soil pH only, or few other soil parameters. In this study we aimed at identifying which soil factors are most crucial for explaining alpine plant distributions and, among those identified, which ones further improve the predictive power of plant SDMs. To test the relative importance of the soil factors, we performed plant SDMs using as predictors 52 measured soil properties of various types, such as organic/inorganic compounds, chemical/physical properties, water-related variables, mineral composition, and grain size distribution. We added them separately to a standard set of topo-climatic predictors (temperature, slope, solar radiation and topographic position). We used ensemble forecasting techniques combining several predictive algorithms to model the distribution of 116 plant species over 250 sites in the Swiss Alps. We recorded the variable importance for each model and compared the quality of the models including the different soil properties (one at a time) as predictors to models having only topo-climatic variables as predictors. Results show that 46% of the soil properties tested became the second most important variable, after air temperature, in explaining the spatial distribution of alpine plant species. Moreover, we also found that the addition of certain soil factors, such as bulk soil water density, could improve the quality of some plant species models by over 80%. We confirm that soil p

  7. Predicting Posttraumatic Stress Symptom Prevalence and Local Distribution after an Earthquake with Scarce Data.

    Science.gov (United States)

    Dussaillant, Francisca; Apablaza, Mauricio

    2017-08-01

    After a major earthquake, the assignment of scarce mental health emergency personnel to different geographic areas is crucial to the effective management of the crisis. The scarce information that is available in the aftermath of a disaster may be valuable in helping predict where the populations most in need are located. The objectives of this study were to derive algorithms to predict posttraumatic stress (PTS) symptom prevalence and local distribution after an earthquake and to test whether there are algorithms that require few input data and are still reasonably predictive. A rich database of PTS symptoms, collected after Chile's 2010 earthquake and tsunami, was used. Several model specifications for the mean and centiles of the distribution of PTS symptoms, together with posttraumatic stress disorder (PTSD) prevalence, were estimated via linear and quantile regressions. The models varied in the set of covariates included. Adjusted R2 for the most liberal specifications (in terms of numbers of covariates included) ranged from 0.62 to 0.74, depending on the outcome. When only including peak ground acceleration (PGA), poverty rate, and household damage in linear and quadratic form, predictive capacity was still good (adjusted R2 values from 0.59 to 0.67 were obtained). Information about local poverty, household damage, and PGA can be used as an aid to predict PTS symptom prevalence and local distribution after an earthquake. This can be of help to improve the assignment of mental health personnel to the affected localities. Dussaillant F, Apablaza M. Predicting posttraumatic stress symptom prevalence and local distribution after an earthquake with scarce data. Prehosp Disaster Med. 2017;32(4):357-367.
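
    The regression setup described (linear and quantile regressions of PTS symptoms on PGA, poverty rate, and household damage in linear and quadratic form) could be reproduced along the following lines; the column names and the synthetic data are placeholders, not the Chilean survey data.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # placeholder data standing in for locality-level covariates
        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            "pga": rng.uniform(0.1, 0.8, 200),
            "poverty": rng.uniform(0.0, 0.4, 200),
            "damage": rng.uniform(0.0, 1.0, 200),
        })
        df["pts"] = 2 * df.pga + 3 * df.poverty + 4 * df.damage + rng.normal(0, 0.5, 200)

        formula = "pts ~ pga + poverty + damage + I(damage ** 2)"
        mean_fit = smf.ols(formula, df).fit()            # mean of the symptom distribution
        p90_fit = smf.quantreg(formula, df).fit(q=0.9)   # upper centile of the distribution

        print(mean_fit.rsquared, p90_fit.params["pga"])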

  8. Prediction of wind energy distribution in complex terrain using CFD

    DEFF Research Database (Denmark)

    Xu, Chang; Li, Chenqi; Yang, Jianchuan

    2013-01-01

    Based on linear models, the WAsP software predicts wind energy distribution with good accuracy for flat terrain, but with large errors over complex topography. In this paper, numerical simulations are carried out using the FLUENT software on a mesh generated by the GAMBIT and ARGIS software ...

  9. Family of Quantum Sources for Improving Near Field Accuracy in Transducer Modeling by the Distributed Point Source Method

    Directory of Open Access Journals (Sweden)

    Dominique Placko

    2016-10-01

    Full Text Available The distributed point source method, or DPSM, developed in the last decade has been used for solving various engineering problems, such as elastic and electromagnetic wave propagation, electrostatic, and fluid flow problems. Based on a semi-analytical formulation, the DPSM solution is generally built by superimposing the point source solutions or Green's functions. However, the DPSM solution can also be obtained by superimposing elemental solutions of volume sources having some source density called the equivalent source density (ESD). In earlier works mostly point sources were used. In this paper the DPSM formulation is modified to introduce a new kind of ESD, replacing the classical single point source by a family of point sources that are referred to as quantum sources. The proposed formulation with these quantum sources does not change the dimension of the global matrix to be inverted to solve the problem when compared with the classical point-source-based DPSM formulation. To assess the performance of this new formulation, the ultrasonic field generated by a circular planar transducer was compared with the classical DPSM formulation and the analytical solution. The results show a significant improvement in the near-field computation.

  10. Distributed BOLD-response in association cortex vector state space predicts reaction time during selective attention.

    Science.gov (United States)

    Musso, Francesco; Konrad, Andreas; Vucurevic, Goran; Schäffner, Cornelius; Friedrich, Britta; Frech, Peter; Stoeter, Peter; Winterer, Georg

    2006-02-15

    Human cortical information processing is thought to be dominated by distributed activity in vector state space (Churchland, P.S., Sejnowski, T.J., 1992. The Computational Brain. MIT Press, Cambridge.). In principle, it should be possible to quantify distributed brain activation with independent component analysis (ICA) through vector-based decomposition, i.e., through a separation of a mixture of sources. Using event-related functional magnetic resonance imaging (fMRI) during a selective attention-requiring task (visual oddball), we explored how the number of independent components within activated cortical areas is related to reaction time. Prior to ICA, the activated cortical areas were determined on the basis of a general linear model (GLM) voxel-by-voxel analysis of the target stimuli (checkerboard reversal). Two activated cortical areas (temporoparietal cortex, medial prefrontal cortex) were further investigated as these cortical regions are known to be the sites of simultaneously active electromagnetic generators which give rise to the compound event-related potential P300 during oddball task conditions. We found that the number of independent components more strongly predicted reaction time than the overall level of "activation" (GLM BOLD-response) in the left temporoparietal area, whereas in the medial prefrontal cortex both ICA and GLM predicted reaction time equally well. Comparable correlations were not seen when principal components were used instead of independent components. These results indicate that the number of independently activated components, i.e., a high level of cortical activation complexity in cortical vector state space, may index particularly efficient information processing during selective attention-requiring tasks. To the best of our knowledge, this is the first report describing a potential relationship between neuronal generators of cognitive processes, the associated electrophysiological evidence for the existence of distributed networks

  11. The electron density and temperature distributions predicted by bow shock models of Herbig-Haro objects

    International Nuclear Information System (INIS)

    Noriega-Crespo, A.; Bohm, K.H.; Raga, A.C.

    1990-01-01

    The observable spatial electron density and temperature distributions for a series of simple bow shock models, which are of special interest in the study of Herbig-Haro (H-H) objects, are computed. The spatial electron density and temperature distributions are derived from forbidden line ratios. It should be possible to use these results to recognize whether an observed electron density or temperature distribution can be attributed to a bow shock, as is the case in some Herbig-Haro objects. As an example, the empirical and predicted distributions for H-H 1 are compared. The predicted electron temperature distributions give the correct temperature range and they show very good diagnostic possibilities if the forbidden O III (4959 + 5007)/4363 wavelength ratio is used. 44 refs
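
    For reference, the temperature sensitivity exploited by this diagnostic is the standard nebular [O III] relation (quoted here from general knowledge of nebular astrophysics, e.g. Osterbrock's textbook form, rather than from this record); in the low-density limit the ratio depends almost exclusively on the electron temperature T_e:

        \frac{j_{4959} + j_{5007}}{j_{4363}}
          \simeq \frac{7.90\, \exp\!\left(3.29\times 10^{4}/T_e\right)}
                      {1 + 4.5\times 10^{-4}\, n_e\, T_e^{-1/2}} .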

  12. Improved Predictions of the Geographic Distribution of Invasive Plants Using Climatic Niche Models

    Science.gov (United States)

    Ramírez-Albores, Jorge E.; Bustamante, Ramiro O.

    2016-01-01

    Climatic niche models for invasive plants are usually constructed with occurrence records taken from the literature and collections. Because these data neither discriminate among life-cycle stages of plants (adult or juvenile) nor the origin of individuals (naturally established or man-planted), the resulting models may mispredict the distribution ranges of these species. We propose that more accurate predictions could be obtained by modelling climatic niches with data on naturally established individuals, particularly with occurrence records of juvenile plants, because this would restrict the predictions of models to those sites where climatic conditions allow the recruitment of the species. To test this proposal, we focused on the Peruvian peppertree (Schinus molle), a South American species that has largely invaded Mexico. Three climatic niche models were constructed for this species using a high-resolution dataset gathered in the field. The first model included all occurrence records, irrespective of the life-cycle stage or origin of peppertrees (generalized niche model). The second model only included occurrence records of naturally established mature individuals (adult niche model), while the third model was constructed with occurrence records of naturally established juvenile plants (regeneration niche model). When models were compared, the generalized climatic niche model predicted the presence of peppertrees in sites located farther beyond the climatic thresholds that naturally established individuals can tolerate, suggesting that human activities influence the distribution of this invasive species. The adult and regeneration climatic niche models concurred in their predictions about the distribution of peppertrees, suggesting that naturally established adult trees only occur in sites where climatic conditions allow the recruitment of juvenile stages. These results support the proposal that climatic niches of invasive plants should be modelled with data of

  13. The Galactic Distribution of Massive Star Formation from the Red MSX Source Survey

    Science.gov (United States)

    Figura, Charles C.; Urquhart, J. S.

    2013-01-01

    Massive stars inject enormous amounts of energy into their environments in the form of UV radiation and molecular outflows, creating HII regions and enriching local chemistry. These effects provide feedback mechanisms that aid in regulating star formation in the region, and may trigger the formation of subsequent generations of stars. Understanding the mechanics of massive star formation presents an important key to understanding this process and its role in shaping the dynamics of galactic structure. The Red MSX Source (RMS) survey is a multi-wavelength investigation of ~1200 massive young stellar objects (MYSO) and ultra-compact HII (UCHII) regions identified from a sample of colour-selected sources from the Midcourse Space Experiment (MSX) point source catalog and Two Micron All Sky Survey. We present a study of over 900 MYSO and UCHII regions investigated by the RMS survey. We review the methods used to determine distances, and investigate the radial galactocentric distribution of these sources in context with the observed structure of the galaxy. The distribution of MYSO and UCHII regions is found to be spatially correlated with the spiral arms and galactic bar. We examine the radial distribution of MYSOs and UCHII regions and find variations in the star formation rate between the inner and outer Galaxy and discuss the implications for star formation throughout the galactic disc.

  14. Predicting the geographical distribution of two invasive termite species from occurrence data.

    Science.gov (United States)

    Tonini, Francesco; Divino, Fabio; Lasinio, Giovanna Jona; Hochmair, Hartwig H; Scheffrahn, Rudolf H

    2014-10-01

    Predicting the potential habitat of species under both current and future climate change scenarios is crucial for monitoring invasive species and understanding a species' response to different environmental conditions. Frequently, the only data available on a species is the location of its occurrence (presence-only data). Using occurrence records only, two models were used to predict the geographical distribution of two destructive invasive termite species, Coptotermes gestroi (Wasmann) and Coptotermes formosanus Shiraki. The first model uses a Bayesian linear logistic regression approach adjusted for presence-only data while the second one is the widely used maximum entropy approach (Maxent). Results show that the predicted distributions of both C. gestroi and C. formosanus are strongly linked to urban development. The impact of future scenarios such as climate warming and population growth on the biotic distribution of both termite species was also assessed. Future climate warming seems to affect their projected probability of presence to a lesser extent than population growth. The Bayesian logistic approach outperformed Maxent consistently in all models according to evaluation criteria such as model sensitivity and ecological realism. The importance of further studies for an explicit treatment of residual spatial autocorrelation and a more comprehensive comparison between both statistical approaches is suggested.

  15. Predicting fundamental and realized distributions based on thermal niche: A case study of a freshwater turtle

    Science.gov (United States)

    Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco; Ribeiro, Bruno R.

    2018-04-01

    Species distribution models (SDM) have been broadly used in ecology to address theoretical and practical problems. Currently, there are two main approaches to generating SDMs: (i) correlative models, which are based on species occurrences and environmental predictor layers, and (ii) process-based models, which are constructed from species' functional traits and physiological tolerances. The distributions estimated by each approach are based on different components of the species niche. Predictions of correlative models approach species' realized niches, while predictions of process-based models are more akin to the species' fundamental niche. Here, we integrated the predictions of the fundamental and realized distributions of the freshwater turtle Trachemys dorbigni. The fundamental distribution was estimated using data on T. dorbigni's egg incubation temperature, and the realized distribution was estimated using species occurrence records. Both types of distributions were estimated using the same regression approaches (logistic regression and support vector machines), both considering macroclimatic and microclimatic temperatures. The realized distribution of T. dorbigni was generally nested in its fundamental distribution, reinforcing the theoretical assumption that the species' realized niche is a subset of its fundamental niche. Both modelling algorithms produced similar results, but microtemperature generated better results than macrotemperature for the incubation model. Finally, our results reinforce the conclusion that species' realized distributions are constrained by factors other than just thermal tolerances.

  16. Predicting moisture content and density distribution of Scots pine by microwave scanning of sawn timber

    International Nuclear Information System (INIS)

    Johansson, J.; Hagman, O.; Fjellner, B.A.

    2003-01-01

    This study was carried out to investigate the possibility of calibrating a prediction model for the moisture content and density distribution of Scots pine (Pinus sylvestris) using microwave sensors. The material was initially at green moisture content and was thereafter dried in several steps to zero moisture content. At each step, all the pieces were weighed, scanned with a microwave sensor (Satimo, 9.4 GHz), and computed tomography (CT)-scanned with a medical CT scanner (Siemens Somatom AR.T.). The output variables from the microwave sensor were used as predictors, and CT images that correlated with known moisture content were used as response variables. Multivariate models to predict average moisture content and density were calibrated using partial least squares (PLS) regression. The models for average moisture content and density were applied at the pixel level, and the distribution was visualized. The results show that it is possible to predict both moisture content distribution and density distribution with high accuracy using microwave sensors. (author)
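
    A minimal sketch of the calibration step (PLS regression from microwave-sensor outputs to average moisture content) is given below; the array shapes and the synthetic data are placeholders for the Satimo sensor outputs and the CT-derived responses.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        # placeholder predictor matrix (microwave sensor outputs per board)
        # and response vector (CT-derived average moisture content, %)
        rng = np.random.default_rng(1)
        X = rng.normal(size=(120, 8))
        y = 30 + X @ rng.normal(size=8) + rng.normal(scale=1.0, size=120)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        pls = PLSRegression(n_components=3).fit(X_tr, y_tr)

        print("R^2 on held-out boards:", pls.score(X_te, y_te))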

  17. Probabilistic source term predictions for use with decision support systems

    International Nuclear Information System (INIS)

    Grindon, E.; Kinniburgh, C.G.

    2003-01-01

    Full text: Decision Support Systems for use in off-site emergency management, following an incident at a Nuclear Power Plant (NPP) within Europe, are becoming accepted as a useful and appropriate tool to aid decision makers. An area which is not so well developed is the 'upstream' prediction of the source term released into the environment. Rapid prediction of this source term is crucial to the appropriate early management of a nuclear emergency. The initial source term prediction would today typically be based on simple tabulations taking little, or no, account of plant status. It is the interface between the inward looking plant control room team and the outward looking off-site emergency management team that needs to be addressed. This is not an easy proposition, as these two distinct disciplines have little common basis from which to communicate their immediate findings and concerns. Within the Euratom Fifth Framework Programme (FP5), complementary approaches are being developed for the pre-release stage, each based on software tools to help bridge this gap. Traditionally, source terms (or releases into the environment) provided for use with Decision Support Systems are estimated on a deterministic basis. These approaches use a single, deterministic assumption about plant status. The associated source term represents the 'best estimate' based on available information. No information is provided on the potential for uncertainty in the source term estimate. Using probabilistic methods, the outcome is typically a number of possible plant states, each with an associated source term and probability. These represent both the best estimate and the spread of the likely source term. However, this is a novel approach and the usefulness of such source term prediction tools is yet to be tested on a wide scale. The benefits of probabilistic source term estimation are presented here, using, as an example, the SPRINT tool developed within the FP5 STERPS project. System for the

  18. Life prediction for white OLED based on LSM under lognormal distribution

    Science.gov (United States)

    Zhang, Jianping; Liu, Fang; Liu, Yu; Wu, Helen; Zhu, Wenqing; Wu, Wenli; Wu, Liang

    2012-09-01

    In order to acquire reliability information for white Organic Light Emitting Display (OLED) devices, three groups of OLED constant stress accelerated life tests (CSALTs) were carried out to obtain failure data from samples. A lognormal distribution function was applied to describe the OLED life distribution, and the accelerated life equation was determined by the least squares method (LSM). The Kolmogorov-Smirnov test was performed to verify whether the white OLED life follows a lognormal distribution. Author-developed software was employed to predict the average life and the median life. The numerical results indicate that the white OLED life follows a lognormal distribution, and that the accelerated life equation agrees with the inverse power law. The estimated life information for the white OLED provides manufacturers and customers with important guidelines.
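
    The fitting chain described (lognormal life at each stress level, with the median life tied to stress through an inverse power law whose parameters are estimated by least squares) can be sketched as follows; the stress levels and failure times are invented for illustration only.

        import numpy as np

        # invented CSALT data: failure times (h) at three constant stress levels
        tests = {1.0: [5200, 6100, 7400], 1.5: [1900, 2300, 2800], 2.0: [900, 1100, 1300]}

        # lognormal assumption: log failure times are normal, so the median life
        # at each stress level is exp(mean of log t)
        stresses = np.array(sorted(tests))
        log_medians = np.array([np.mean(np.log(tests[s])) for s in stresses])

        # inverse power law: t50 = C * S**(-n)  =>  log t50 = log C - n * log S
        slope, logC = np.polyfit(np.log(stresses), log_medians, 1)
        n = -slope

        use_stress = 0.5                       # extrapolate to a use-level stress
        t50_use = np.exp(logC) * use_stress ** (-n)
        print(f"estimated exponent n = {n:.2f}, median life at use stress = {t50_use:.0f} h")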

  19. Establishment of a Practical Approach for Characterizing the Source of Particulates in Water Distribution Systems

    Directory of Open Access Journals (Sweden)

    Seon-Ha Chae

    2016-02-01

    Full Text Available Water quality complaints related to particulate matter and discolored water can be troublesome for water utilities in terms of follow-up investigations and implementation of appropriate actions because particulate matter can enter from a variety of sources; moreover, physicochemical processes can affect the water quality during the purification and transportation processes. The origin of particulates can be attributed to sources such as background organic/inorganic materials from water sources, water treatment plants, water distribution pipelines that have deteriorated, and rehabilitation activities in the water distribution systems. In this study, a practical method is proposed for tracing particulate sources. The method entails collecting information related to hydraulic, water quality, and structural conditions, employing a network flow-path model, and establishing a database of physicochemical properties for tubercles and slimes. The proposed method was implemented within two city water distribution systems that were located in Korea. These applications were conducted to demonstrate the practical applicability of the method for providing solutions to customer complaints. The results of the field studies indicated that the proposed method would be feasible for investigating the sources of particulates and for preparing appropriate action plans for complaints related to particulate matter.

  20. Coordinated control of active and reactive power of distribution network with distributed PV cluster via model predictive control

    Science.gov (United States)

    Ji, Yu; Sheng, Wanxing; Jin, Wei; Wu, Ming; Liu, Haitao; Chen, Feng

    2018-02-01

    A coordinated optimal control method for the active and reactive power of a distribution network with a distributed PV cluster, based on model predictive control, is proposed in this paper. The method divides the control process into long-time-scale optimal control and short-time-scale optimal control with multi-step optimization. The models are transformed into a second-order cone programming problem because the original optimization models are non-convex and nonlinear and therefore hard to solve directly. An improved IEEE 33-bus distribution network system is used to analyse the feasibility and the effectiveness of the proposed control method.
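
    The second-order cone reformulation mentioned here typically relies on relaxing the quadratic branch-flow relation P^2 + Q^2 = l*v into a cone constraint. The single-branch cvxpy sketch below, with made-up limits, only illustrates that relaxation; it is not the paper's multi-step, multi-feeder PV-cluster model.

        import cvxpy as cp

        P = cp.Variable()   # branch active power flow (p.u.)
        Q = cp.Variable()   # branch reactive power flow (p.u.)
        l = cp.Variable()   # squared branch current
        v = cp.Variable()   # squared sending-end voltage

        constraints = [
            # SOC relaxation of P**2 + Q**2 <= l * v (rotated cone written as a norm)
            cp.norm(cp.hstack([2 * P, 2 * Q, l - v])) <= l + v,
            v >= 0.95**2, v <= 1.05**2,           # made-up voltage limits
            l >= 0, l <= 0.1,
            P >= 0.2, Q >= 0.05,                  # made-up demand to be served
        ]
        # toy objective: minimise losses r*l with an arbitrary resistance r = 0.01
        prob = cp.Problem(cp.Minimize(0.01 * l), constraints)
        prob.solve()
        print(prob.status, float(l.value))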

  1. Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques

    Directory of Open Access Journals (Sweden)

    Valenzise G

    2009-01-01

    Full Text Available In the past few years, a large number of techniques have been proposed to identify whether a multimedia content has been illegally tampered with or not. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, especially due to the large amount of data required for this task. We propose a novel hashing scheme which exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges from 20% to 70%.
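
    The sensing stage of such a hash (random projections of a time-frequency representation, in the spirit of compressive sensing) can be sketched as below; the spectrogram size, projection count, and sign quantization are illustrative assumptions, and the distributed source coding (LDPC syndrome) stage is not shown.

        import numpy as np

        rng = np.random.default_rng(3)

        def audio_hash(tf_repr, n_proj=200, seed=42):
            """Compact binary hash: signs of random projections of a flattened
            time-frequency representation (sketch of the sensing stage only)."""
            proj_rng = np.random.default_rng(seed)        # shared between provider and user
            A = proj_rng.normal(size=(n_proj, tf_repr.size))
            return (A @ tf_repr.ravel() > 0).astype(np.uint8)

        original = rng.normal(size=(64, 100))             # stand-in spectrogram
        tampered = original.copy()
        tampered[10:20, 40:50] += 3.0                     # localized, sparse modification

        h1, h2 = audio_hash(original), audio_hash(tampered)
        print("fraction of differing hash bits:", np.mean(h1 != h2))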

  2. Numerical simulation and experimental validation of the three-dimensional flow field and relative analyte concentration distribution in an atmospheric pressure ion source.

    Science.gov (United States)

    Poehler, Thorsten; Kunte, Robert; Hoenen, Herwart; Jeschke, Peter; Wissdorf, Walter; Brockmann, Klaus J; Benter, Thorsten

    2011-11-01

    In this study, the validation and analysis of steady state numerical simulations of the gas flows within a multi-purpose ion source (MPIS) are presented. The experimental results were obtained with particle image velocimetry (PIV) measurements in a non-scaled MPIS. Two-dimensional time-averaged velocity and turbulent kinetic energy distributions are presented for two dry gas volume flow rates. The numerical results of the validation simulations are in very good agreement with the experimental data. All significant flow features have been correctly predicted within the accuracy of the experiments. For technical reasons, the experiments were conducted at room temperature. Thus, numerical simulations of ionization conditions at two operating points of the MPIS are also presented. It is clearly shown that the dry gas volume flow rate has the most significant impact on the overall flow pattern within the APLI source; far less critical is the (larger) nebulization gas flow. In addition to the approximate solution of Reynolds-Averaged Navier-Stokes equations, a transport equation for the relative analyte concentration has been solved. The results yield information on the three-dimensional analyte distribution within the source. It becomes evident that for ion transport into the MS ion transfer capillary, electromagnetic forces are at least as important as fluid dynamic forces. However, only the fluid dynamics determines the three-dimensional distribution of analyte gas. Thus, local flow phenomena in close proximity to the spray shield are strongly impacting on the ionization efficiency.

  3. Z-Source-Inverter-Based Flexible Distributed Generation System Solution for Grid Power Quality Improvement

    DEFF Research Database (Denmark)

    Blaabjerg, Frede; Vilathgamuwa, D. M.; Loh, Poh Chiang

    2009-01-01

    Distributed generation (DG) systems are usually connected to the grid using power electronic converters. Power delivered from such DG sources depends on factors like energy availability and load demand. The converters used in power conversion do not operate with their full capacity all the time......-stage buck-boost inverter, recently proposed Z-source inverter (ZSI) is a good candidate for future DG systems. This paper presents a controller design for a ZSI-based DG system to improve power quality of distribution systems. The proposed control method is tested with simulation results obtained using...

  4. 10 CFR 32.74 - Manufacture and distribution of sources or devices containing byproduct material for medical use.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Manufacture and distribution of sources or devices... SPECIFIC DOMESTIC LICENSES TO MANUFACTURE OR TRANSFER CERTAIN ITEMS CONTAINING BYPRODUCT MATERIAL Generally Licensed Items § 32.74 Manufacture and distribution of sources or devices containing byproduct material for...

  5. Prediction of Near-Field Wave Attenuation Due to a Spherical Blast Source

    Science.gov (United States)

    Ahn, Jae-Kwang; Park, Duhee

    2017-11-01

    Empirical and theoretical far-field attenuation relationships, which do not capture the near-field response, are most often used to predict the peak amplitude of a blast wave. Jiang et al. (Vibration due to a buried explosive source. PhD Thesis, Curtin University, Western Australian School of Mines, 1993) present rigorous wave equations that simulate the near-field attenuation of a spherical blast source in damped and undamped media. However, the effects of loading frequency and medium velocity have not yet been investigated. We perform a suite of axisymmetric, dynamic finite difference analyses to simulate the propagation of stress waves induced by a spherical blast source and to quantify the near-field attenuation. A broad range of loading frequencies, wave velocities, and damping ratios are used in the simulations. The near-field effect is revealed to be proportional to the rise time of the impulse load and to the wave velocity. We propose an empirical additive function to the theoretical far-field attenuation curve to predict the near-field range and attenuation. The proposed curve is validated against measurements recorded in a test blast.

  6. Climate change and plant distribution: local models predict high-elevation persistence

    DEFF Research Database (Denmark)

    Randin, Christophe F.; Engler, Robin; Normand, Signe

    2009-01-01

    Mountain ecosystems will likely be affected by global warming during the 21st century, with substantial biodiversity loss predicted by species distribution models (SDMs). Depending on the geographic extent, elevation range, and spatial resolution of data used in making these models, different rates...

  7. Investigating The Neutron Flux Distribution Of The Miniature Neutron Source Reactor MNSR Type

    International Nuclear Information System (INIS)

    Nguyen Hoang Hai; Do Quang Binh

    2011-01-01

    The neutron flux distribution is an important characteristic of a nuclear reactor. In this article, the four-energy-group neutron flux distributions of the miniature neutron source reactor (MNSR) type along the radial and axial directions are investigated for the case in which the control rod is fully withdrawn. In addition, the effect of control rod position on the thermal neutron flux distribution is also studied. The group constants for all reactor components are generated by the WIMSD code, and the neutron flux distributions are calculated by the CITATION code. The results show that the control rod position affects the thermal neutron flux distribution only in the region around the control rod. (author)

  8. Potential Distribution Predicted for Rhynchophorus ferrugineus in China under Different Climate Warming Scenarios.

    Directory of Open Access Journals (Sweden)

    Xuezhen Ge

    Full Text Available As the primary pest of palm trees, Rhynchophorus ferrugineus (Olivier) (Coleoptera: Curculionidae) has caused serious harm to palms since it first invaded China. The present study used CLIMEX 1.1 to predict the potential distribution of R. ferrugineus in China according to both current climate data (1981-2010) and future climate warming estimates based on simulated climate data for the 2020s (2011-2040) provided by the Tyndall Center for Climate Change Research (TYN SC 2.0). Additionally, the Ecoclimatic Index (EI) values calculated for the different climatic conditions (current and future, as simulated by the B2 scenario) were compared. Areas with a suitable climate for R. ferrugineus distribution were located primarily in central China according to the current climate data, with the northern boundary of the distribution reaching to 40.1°N and including Tibet, north Sichuan, central Shaanxi, south Shanxi, and east Hebei. There was little difference in the potential distribution predicted by the four emission scenarios according to future climate warming estimates. The primary prediction under the future climate warming models was that, compared with the current climate model, the number of highly favorable habitats would increase significantly and expand into northern China, whereas the number of both favorable and marginally favorable habitats would decrease. Contrast analysis of EI values suggested that climate change and the density of site distribution were the main factors driving the changes in EI values. These results will help to improve control measures, prevent the spread of this pest, and revise the targeted quarantine areas.

  9. Predicting habitat distribution to conserve seagrass threatened by sea level rise

    Science.gov (United States)

    Saunders, M. I.; Baldock, T.; Brown, C. J.; Callaghan, D. P.; Golshani, A.; Hamylton, S.; Hoegh-guldberg, O.; Leon, J. X.; Lovelock, C. E.; Lyons, M. B.; O'Brien, K.; Mumby, P.; Phinn, S. R.; Roelfsema, C. M.

    2013-12-01

    Sea level rise (SLR) over the 21st century will cause significant redistribution of valuable coastal habitats. Seagrasses form extensive and highly productive meadows in shallow coastal seas support high biodiversity, including economically valuable and threatened species. Predictive habitat models can inform local management actions that will be required to conserve seagrass faced with multiple stressors. We developed novel modelling approaches, based on extensive field data sets, to examine the effects of sea level rise and other stressors on two representative seagrass habitats in Australia. First, we modelled interactive effects of SLR, water clarity and adjacent land use on estuarine seagrass meadows in Moreton Bay, Southeast Queensland. The extent of suitable seagrass habitat was predicted to decline by 17% by 2100 due to SLR alone, but losses were predicted to be significantly reduced through improvements in water quality (Fig 1a) and by allowing space for seagrass migration with inundation. The rate of sedimentation in seagrass strongly affected the area of suitable habitat for seagrass in sea level rise scenarios (Fig 1b). Further research to understand spatial, temporal and environmental variability of sediment accretion in seagrass is required. Second, we modelled changes in wave energy distribution due to predicted SLR in a linked coral reef and seagrass ecosystem at Lizard Island, Great Barrier Reef. Scenarios where the water depth over the coral reef deepened due to SLR and minimal reef accretion, resulted in larger waves propagating shoreward, changing the existing hydrodynamic conditions sufficiently to reduce area of suitable habitat for seagrass. In a scenario where accretion of the coral reef was severely compromised (e.g. warming, acidification, overfishing), the probability of the presence of seagrass declined significantly. Management to maintain coral health will therefore also benefit seagrasses subject to SLR in reef environments. Further

  10. Vaginal drug distribution modeling.

    Science.gov (United States)

    Katz, David F; Yuan, Andrew; Gao, Yajing

    2015-09-15

    This review presents and applies fundamental mass transport theory describing the diffusion and convection driven mass transport of drugs to the vaginal environment. It considers sources of variability in the predictions of the models. It illustrates use of model predictions of microbicide drug concentration distribution (pharmacokinetics) to gain insights about drug effectiveness in preventing HIV infection (pharmacodynamics). The modeling compares vaginal drug distributions after different gel dosage regimens, and it evaluates consequences of changes in gel viscosity due to aging. It compares vaginal mucosal concentration distributions of drugs delivered by gels vs. intravaginal rings. Finally, the modeling approach is used to compare vaginal drug distributions across species with differing vaginal dimensions. Deterministic models of drug mass transport into and throughout the vaginal environment can provide critical insights about the mechanisms and determinants of such transport. This knowledge, and the methodology that obtains it, can be applied and translated to multiple applications, involving the scientific underpinnings of vaginal drug distribution and the performance evaluation and design of products, and their dosage regimens, that achieve it. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. A practical algorithm for distribution state estimation including renewable energy sources

    Energy Technology Data Exchange (ETDEWEB)

    Niknam, Taher [Electronic and Electrical Department, Shiraz University of Technology, Modares Blvd., P.O. 71555-313, Shiraz (Iran); Firouzi, Bahman Bahmani [Islamic Azad University Marvdasht Branch, Marvdasht (Iran)

    2009-11-15

    Renewable energy is energy that is in continuous supply over time. These kinds of energy sources are divided into five principal renewable sources of energy: the sun, the wind, flowing water, biomass and heat from within the earth. According to some studies carried out by research institutes, about 25% of the new generation will be generated by Renewable Energy Sources (RESs) in the near future. Therefore, it is necessary to study the impact of RESs on power systems, especially on the distribution networks. This paper presents a practical Distribution State Estimation (DSE) including RESs and some practical considerations. The proposed algorithm is based on the combination of Nelder-Mead simplex search and Particle Swarm Optimization (PSO) algorithms, called PSO-NM. The proposed algorithm can estimate load and RES output values by a Weighted Least-Squares (WLS) approach. Some practical considerations are var compensators, Voltage Regulators (VRs), Under Load Tap Changer (ULTC) transformer modeling, which usually have nonlinear and discrete characteristics, and unbalanced three-phase power flow equations. The comparison results with other evolutionary optimization algorithms such as original PSO, Honey Bee Mating Optimization (HBMO), Neural Networks (NNs), Ant Colony Optimization (ACO), and Genetic Algorithm (GA) for a test system demonstrate that PSO-NM is extremely effective and efficient for the DSE problems. (author)
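
    The WLS estimation step referred to here has the standard closed form for a linear (or linearised) measurement model; the tiny numpy sketch below uses an invented two-state, three-measurement example rather than the paper's feeder model with RES outputs.

        import numpy as np

        # invented linearised measurement model: z = H x + e, weights = 1/sigma^2
        H = np.array([[1.0, 0.0],
                      [0.0, 1.0],
                      [1.0, 1.0]])          # e.g. two injections and one flow-type measurement
        sigma = np.array([0.01, 0.01, 0.02])
        W = np.diag(1.0 / sigma**2)

        x_true = np.array([0.98, 1.02])
        rng = np.random.default_rng(2)
        z = H @ x_true + rng.normal(scale=sigma)

        # weighted least-squares estimate: x = (H^T W H)^{-1} H^T W z
        x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
        print(x_hat)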

  12. Enhanced effects of biotic interactions on predicting multispecies spatial distribution of submerged macrophytes after eutrophication.

    Science.gov (United States)

    Song, Kun; Cui, Yichong; Zhang, Xijin; Pan, Yingji; Xu, Junli; Xu, Kaiqin; Da, Liangjun

    2017-10-01

    Water eutrophication creates unfavorable environmental conditions for submerged macrophytes. In these situations, biotic interactions may be particularly important for explaining and predicting the occurrence of submerged macrophytes. Here, we evaluate the roles of biotic interactions in predicting the spatial occurrence of submerged macrophytes in 1959 and 2009 for Dianshan Lake in eastern China, which has been eutrophic since the 1980s. For the four common species that occurred in 1959 and 2009, null species distribution models based on abiotic variables and full models based on both abiotic and biotic variables were developed using generalized linear models (GLM) and boosted regression trees (BRT) to determine whether the biotic variables improved the model performance. Hierarchical Bayesian-based joint species distribution models capable of detecting paired biotic interactions were established for each species in both periods to evaluate the changes in the biotic interactions. In most of the GLM and BRT models, the full models showed better performance than the null models in predicting the species presence/absence, and the relative importance of the biotic variables in the full models increased from less than 50% in 1959 to more than 50% in 2009 for each species. Moreover, the co-occurrence correlation of each paired species interaction was higher in 2009 than in 1959. The findings suggest that biotic interactions, which tend to be positive, play more important roles in the spatial distribution of multispecies assemblages of macrophytes and should be included in prediction models to improve prediction accuracy when forecasting macrophytes' distribution under eutrophication stress.
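
    The null-versus-full comparison described (GLM and boosted regression trees with and without biotic predictors, judged on out-of-sample performance) can be sketched as follows; the synthetic data and the use of scikit-learn's gradient boosting as a stand-in for BRT are assumptions made for illustration.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(4)
        n = 500
        abiotic = rng.normal(size=(n, 3))                 # e.g. depth, turbidity, nutrients
        cooccur = rng.binomial(1, 0.5, size=(n, 1))       # presence of a companion species
        X_full = np.hstack([abiotic, cooccur])
        logit = abiotic[:, 0] + 1.5 * cooccur[:, 0] - 0.5
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        Xn_tr, Xn_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
            abiotic, X_full, y, random_state=0)

        for name, model in [("GLM", LogisticRegression(max_iter=1000)),
                            ("BRT", GradientBoostingClassifier())]:
            auc_null = roc_auc_score(y_te, model.fit(Xn_tr, y_tr).predict_proba(Xn_te)[:, 1])
            auc_full = roc_auc_score(y_te, model.fit(Xf_tr, y_tr).predict_proba(Xf_te)[:, 1])
            print(name, "null AUC %.2f  full AUC %.2f" % (auc_null, auc_full))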

  13. Distribution of hadron intranuclear cascade for large distance from a source

    International Nuclear Information System (INIS)

    Bibin, V.L.; Kazarnovskij, M.V.; Serezhnikov, S.V.

    1985-01-01

    An analytical solution of the problem of three-component hadron cascade development at large distances from a source is obtained within a series of simplifying assumptions. This makes it possible to understand the physical mechanisms of the process studied and to obtain approximate asymptotic expressions for the hadron distribution functions.

  14. PHENOstruct: Prediction of human phenotype ontology terms using heterogeneous data sources.

    Science.gov (United States)

    Kahanda, Indika; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa

    2015-01-01

    The human phenotype ontology (HPO) was recently developed as a standardized vocabulary for describing the phenotype abnormalities associated with human diseases. At present, only a small fraction of human protein coding genes have HPO annotations. But, researchers believe that a large portion of currently unannotated genes are related to disease phenotypes. Therefore, it is important to predict gene-HPO term associations using accurate computational methods. In this work we demonstrate the performance advantage of the structured SVM approach which was shown to be highly effective for Gene Ontology term prediction in comparison to several baseline methods. Furthermore, we highlight a collection of informative data sources suitable for the problem of predicting gene-HPO associations, including large scale literature mining data.

  15. Strategies for satellite-based monitoring of CO2 from distributed area and point sources

    Science.gov (United States)

    Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David

    2014-05-01

    Atmospheric CO2 budgets are controlled by the strengths, as well as the spatial and temporal variabilities of CO2 sources and sinks. Natural CO2 sources and sinks are dominated by the vast areas of the oceans and the terrestrial biosphere. In contrast, anthropogenic and geogenic CO2 sources are dominated by distributed area and point sources, which may constitute as much as 70% of anthropogenic (e.g., Duren & Miller, 2012), and over 80% of geogenic emissions (Burton et al., 2013). Comprehensive assessments of CO2 budgets necessitate robust and highly accurate satellite remote sensing strategies that address the competing and often conflicting requirements for sampling over disparate space and time scales. Spatial variability: The spatial distribution of anthropogenic sources is dominated by patterns of production, storage, transport and use. In contrast, geogenic variability is almost entirely controlled by endogenic geological processes, except where surface gas permeability is modulated by soil moisture. Satellite remote sensing solutions will thus have to vary greatly in spatial coverage and resolution to address distributed area sources and point sources alike. Temporal variability: While biogenic sources are dominated by diurnal and seasonal patterns, anthropogenic sources fluctuate over a greater variety of time scales from diurnal, weekly and seasonal cycles, driven by both economic and climatic factors. Geogenic sources typically vary in time scales of days to months (geogenic sources sensu stricto are not fossil fuels but volcanoes, hydrothermal and metamorphic sources). Current ground-based monitoring networks for anthropogenic and geogenic sources record data on minute- to weekly temporal scales. Satellite remote sensing solutions would have to capture temporal variability through revisit frequency or point-and-stare strategies. Space-based remote sensing offers the potential of global coverage by a single sensor. However, no single combination of orbit

  16. Fast ignition: Dependence of the ignition energy on source and target parameters for particle-in-cell-modelled energy and angular distributions of the fast electrons

    Energy Technology Data Exchange (ETDEWEB)

    Bellei, C.; Divol, L.; Kemp, A. J.; Key, M. H.; Larson, D. J.; Strozzi, D. J.; Marinak, M. M.; Tabak, M.; Patel, P. K. [Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, California 94550 (United States)

    2013-05-15

    The energy and angular distributions of the fast electrons predicted by particle-in-cell (PIC) simulations differ from those historically assumed in ignition designs of the fast ignition scheme. Using a particular 3D PIC calculation, we show how the ignition energy varies as a function of source-fuel distance, source size, and density of the pre-compressed fuel. The large divergence of the electron beam implies that the ignition energy scales with density more weakly than the ρ⁻² scaling for an idealized beam [S. Atzeni, Phys. Plasmas 6, 3316 (1999)], for any realistic source that is at some distance from the dense deuterium-tritium fuel. Due to the strong dependence of ignition energy with source-fuel distance, the use of magnetic or electric fields seems essential for the purpose of decreasing the ignition energy.

  17. Measurement and prediction of aromatic solute distribution coefficients for aqueous-organic solvent systems. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, J.R.; Luthy, R.G.

    1984-06-01

    Experimental and modeling activities were performed to assess techniques for measurement and prediction of distribution coefficients for aromatic solutes between water and immiscible organic solvents. Experiments were performed to measure distribution coefficients in both clean water and wastewater systems, and to assess treatment of a wastewater by solvent extraction. The theoretical portions of this investigation were directed towards development of techniques for prediction of solute-solvent/water distribution coefficients. Experiments were performed to assess treatment of a phenolic-laden coal conversion wastewater by solvent extraction. The results showed that solvent extraction for recovery of phenolic material offered several wastewater processing advantages. Distribution coefficients were measured in clean water and wastewater systems for aromatic solutes of varying functionality with different solvent types. It was found that distribution coefficients for these compounds in clean water systems were not statistically different from distribution coefficients determined in a complex coal conversion process wastewater. These and other aromatic solute distribution coefficient data were employed for evaluation of modeling techniques for prediction of solute-solvent/water distribution coefficients. Eight solvents were selected in order to represent various chemical classes: toluene and benzene (aromatics), hexane and heptane (alkanes), n-octanol (alcohols), n-butyl acetate (esters), diisopropyl ether (ethers), and methylisobutyl ketone (ketones). The aromatic solutes included: nonpolar compounds such as benzene, toluene and naphthalene, phenolic compounds such as phenol, cresol and catechol, nitrogenous aromatics such as aniline, pyridine and aminonaphthalene, and other aromatic solutes such as naphthol, quinolinol and halogenated compounds. 100 references, 20 figures, 34 tables.

  18. Computer Prediction of Air Quality in Livestock Buildings

    DEFF Research Database (Denmark)

    Svidt, Kjeld; Bjerg, Bjarne

    In modern livestock buildings the design of ventilation systems is important in order to obtain good air quality. The use of Computational Fluid Dynamics for predicting the air distribution makes it possible to include the effect of room geometry and heat sources in the design process. This paper... presents numerical prediction of air flow in a livestock building compared with laboratory measurements. An example of the calculation of contaminant distribution is given, and the future possibilities of the method are discussed....

  19. Economic Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus

    Model Predictive Control (MPC) can be used to control the energy distribution in a Smart Grid with a high share of stochastic energy production from renewable energy sources like wind. Heat pumps for heating residential buildings can exploit the slow heat dynamics of a building to store heat and thereby shift the heat pump power consumption to periods with both low electricity prices and a high fraction of green energy in the grid.
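
    As a rough illustration of this idea (not the thesis implementation), the sketch below formulates one day of heat pump scheduling as a linear program: minimize electricity cost subject to a first-order building model and a comfort band. The building coefficients, prices and temperatures are invented for the example.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative one-day heat pump scheduling problem (all parameters invented).
N = 24                                                        # hourly steps
price = 0.20 + 0.15 * np.sin(np.linspace(0, 2 * np.pi, N))    # electricity price
T_out = 3.0 * np.sin(np.linspace(0, 2 * np.pi, N))            # outdoor temperature [degC]
a, g = 0.9, 0.5      # building model: T[k+1] = a*T[k] + (1-a)*T_out[k] + g*p[k]
T0 = 21.0            # initial indoor temperature [degC]
T_min, T_max, p_max = 20.0, 24.0, 5.0

# Indoor temperature is affine in the power profile p: T = M @ p + d
M = np.zeros((N, N))
d = np.zeros(N)
T_free = T0
for k in range(N):
    T_free = a * T_free + (1 - a) * T_out[k]   # zero-heating response
    d[k] = T_free
    for j in range(k + 1):
        M[k, j] = a ** (k - j) * g

# Comfort band T_min <= T <= T_max as linear inequalities A_ub @ p <= b_ub
A_ub = np.vstack([M, -M])
b_ub = np.concatenate([T_max - d, d - T_min])

res = linprog(c=price, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, p_max)] * N, method="highs")
print("feasible:", res.success, "| electricity cost:", round(res.fun, 2))
print("heat pump power profile [kW]:", np.round(res.x, 2))
```

    In a receding-horizon setting the same optimisation would be re-solved every hour with updated price and weather forecasts, applying only the first step of the computed profile.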

  20. Prediction of the neutrons subcritical multiplication using the diffusion hybrid equation with external neutron sources

    Energy Technology Data Exchange (ETDEWEB)

    Costa da Silva, Adilson; Carvalho da Silva, Fernando [COPPE/UFRJ, Programa de Engenharia Nuclear, Caixa Postal 68509, 21941-914, Rio de Janeiro (Brazil); Senra Martinez, Aquilino, E-mail: aquilino@lmp.ufrj.br [COPPE/UFRJ, Programa de Engenharia Nuclear, Caixa Postal 68509, 21941-914, Rio de Janeiro (Brazil)

    2011-07-15

    Highlights: • We proposed a new neutron diffusion hybrid equation with external neutron source. • A coarse mesh finite difference method for the adjoint flux and reactivity calculation was developed. • A 1/M curve is used to predict the criticality condition. - Abstract: We used the neutron diffusion hybrid equation, in Cartesian geometry with external neutron sources, to predict the subcritical multiplication of neutrons in a pressurized water reactor, using a 1/M curve to predict the criticality condition. A Coarse Mesh Finite Difference Method was developed for the adjoint flux calculation and to obtain the reactivity values of the reactor. The results obtained were compared with benchmark values in order to validate the methodology presented in this paper.
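
    The 1/M extrapolation referred to here can be illustrated with a short sketch: the inverse multiplication, proportional to the ratio of a reference count rate to the current count rate, is fit linearly against the control parameter and extrapolated to zero. All numbers below are invented for the example.

```python
import numpy as np

# Hypothetical subcritical approach data: control parameter x (e.g. fuel loading
# step or rod withdrawal) and detector counts C(x). Not data from the paper.
x      = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
counts = np.array([1000.0, 1250.0, 1670.0, 2500.0, 5000.0])

# Subcritical multiplication M is proportional to the count rate, so 1/M is
# proportional to C0/C(x); criticality corresponds to 1/M -> 0.
inv_m = counts[0] / counts

# Linear fit of 1/M versus x and extrapolation to 1/M = 0
slope, intercept = np.polyfit(x, inv_m, 1)
x_critical = -intercept / slope
print(f"predicted critical configuration at x ≈ {x_critical:.2f}")
```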

  1. Prediction of the neutrons subcritical multiplication using the diffusion hybrid equation with external neutron sources

    International Nuclear Information System (INIS)

    Costa da Silva, Adilson; Carvalho da Silva, Fernando; Senra Martinez, Aquilino

    2011-01-01

    Highlights: → We proposed a new neutron diffusion hybrid equation with external neutron source. → A coarse mesh finite difference method for the adjoint flux and reactivity calculation was developed. → A 1/M curve is used to predict the criticality condition. - Abstract: We used the neutron diffusion hybrid equation, in Cartesian geometry with external neutron sources, to predict the subcritical multiplication of neutrons in a pressurized water reactor, using a 1/M curve to predict the criticality condition. A Coarse Mesh Finite Difference Method was developed for the adjoint flux calculation and to obtain the reactivity values of the reactor. The results obtained were compared with benchmark values in order to validate the methodology presented in this paper.

  2. The influence of coarse-scale environmental features on current and predicted future distributions of narrow-range endemic crayfish populations

    Science.gov (United States)

    Dyer, Joseph J.; Brewer, Shannon K.; Worthington, Thomas A.; Bergey, Elizabeth A.

    2013-01-01

    1. A major limitation to effective management of narrow-range crayfish populations is the paucity of information on the spatial distribution of crayfish species and a general understanding of the interacting environmental variables that drive current and future potential distributional patterns. 2. Maximum Entropy Species Distribution Modeling Software (MaxEnt) was used to predict the current and future potential distributions of four endemic crayfish species in the Ouachita Mountains. Current distributions were modelled using climate, geology, soils, land use, landform and flow variables thought to be important to lotic crayfish. Potential changes in the distribution were forecast by using models trained on current conditions and projecting onto the landscape predicted under climate-change scenarios. 3. The modelled distribution of the four species closely resembled the perceived distribution of each species but also predicted populations in streams and catchments where they had not previously been collected. Soils, elevation and winter precipitation and temperature were most strongly related to current distributions and represented 65–87% of the predictive power of the models. Model accuracy was high for all models, and model predictions of new populations were verified through additional field sampling. 4. Current models created using two spatial resolutions (1 and 4.5 km²) showed that fine-resolution data more accurately represented current distributions. For three of the four species, the 1-km² resolution models resulted in more conservative predictions. However, the modelled distributional extent of Orconectes leptogonopodus was similar regardless of data resolution. Field validations indicated 1-km² resolution models were more accurate than 4.5-km² resolution models. 5. Future projected (4.5-km² resolution) model distributions indicated three of the four endemic species would have truncated ranges with low occurrence probabilities under the low-emission scenario.
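
    MaxEnt itself is a dedicated package; purely as an illustration of the presence/background workflow it implements (environmental covariates in, a relative suitability surface out), a regularized logistic regression can stand in. Everything below — covariate names, values, and the species response — is synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic environmental covariates for presence and background points
# (columns: winter precipitation, winter temperature, elevation) -- placeholders.
presence   = rng.normal(loc=[1200, 2.0, 350], scale=[150, 1.0, 80],  size=(60, 3))
background = rng.normal(loc=[900, 5.0, 250],  scale=[300, 3.0, 150], size=(500, 3))

X = np.vstack([presence, background])
y = np.concatenate([np.ones(len(presence)), np.zeros(len(background))])

# Regularized logistic regression as a crude presence/background suitability model
model = LogisticRegression(C=1.0, max_iter=1000).fit(X, y)

# Relative suitability of a new grid cell (covariate values are illustrative)
cell = np.array([[1150, 2.5, 330]])
print("relative suitability:", round(model.predict_proba(cell)[0, 1], 3))
```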

  3. An appraisal of wind speed distribution prediction by soft computing methodologies: A comparative study

    International Nuclear Information System (INIS)

    Petković, Dalibor; Shamshirband, Shahaboddin; Anuar, Nor Badrul; Saboohi, Hadi; Abdul Wahab, Ainuddin Wahid; Protić, Milan; Zalnezhad, Erfan; Mirhashemi, Seyed Mohammad Amin

    2014-01-01

    Highlights: • Probabilistic distribution functions of wind speed. • Two-parameter Weibull probability distribution. • To build an effective prediction model of distribution of wind speed. • Support vector regression application as probability function for wind speed. - Abstract: The probabilistic distribution of wind speed is among the more significant wind characteristics in examining wind energy potential and the performance of wind energy conversion systems. When the wind speed probability distribution is known, the wind energy distribution can be easily obtained. Therefore, the probability distribution of wind speed is a very important piece of information required in assessing wind energy potential. For this reason, a large number of studies have been established concerning the use of a variety of probability density functions to describe wind speed frequency distributions. Although the two-parameter Weibull distribution comprises a widely used and accepted method, solving the function is very challenging. In this study, the polynomial and radial basis functions (RBF) are applied as the kernel function of support vector regression (SVR) to estimate two parameters of the Weibull distribution function according to previously established analytical methods. Rather than minimizing the observed training error, SVR_poly and SVR_rbf attempt to minimize the generalization error bound, so as to achieve generalized performance. According to the experimental results, enhanced predictive accuracy and capability of generalization can be achieved using the SVR approach compared to other soft computing methodologies.
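
    The idea of regressing Weibull parameters with kernel SVR can be sketched on synthetic data: wind-speed records are drawn from known Weibull distributions, simple statistics serve as input features, and one SVR per parameter is trained. The feature set and data here are assumptions made for the sketch, not those of the study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Synthetic training set drawn from known Weibull distributions
shape_k = rng.uniform(1.5, 3.0, 200)        # Weibull shape parameter k
scale_c = rng.uniform(4.0, 10.0, 200)       # Weibull scale parameter c [m/s]
X = []
for k, c in zip(shape_k, scale_c):
    v = c * rng.weibull(k, 500)             # synthetic wind-speed record
    X.append([v.mean(), v.std()])           # simple statistics as features
X = np.asarray(X)

# One SVR per Weibull parameter, with RBF and polynomial kernels as in the paper
svr_k = make_pipeline(StandardScaler(), SVR(kernel="rbf")).fit(X, shape_k)
svr_c = make_pipeline(StandardScaler(), SVR(kernel="poly", degree=3)).fit(X, scale_c)

# Estimate the parameters for a "new site" from its wind statistics
v_new = 7.0 * rng.weibull(2.0, 500)
features = np.array([[v_new.mean(), v_new.std()]])
print("estimated k:", round(float(svr_k.predict(features)[0]), 2),
      "| estimated c [m/s]:", round(float(svr_c.predict(features)[0]), 2))
```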

  4. The Density Functional Theory of Flies: Predicting distributions of interacting active organisms

    Science.gov (United States)

    Kinkhabwala, Yunus; Valderrama, Juan; Cohen, Itai; Arias, Tomas

    On October 2nd, 2016, 52 people were crushed in a stampede when a crowd panicked at a religious gathering in Ethiopia. The ability to predict the state of a crowd and whether it is susceptible to such transitions could help prevent such catastrophes. While current techniques such as agent based models can predict transitions in emergent behaviors of crowds, the assumptions used to describe the agents are often ad hoc and the simulations are computationally expensive making their application to real-time crowd prediction challenging. Here, we pursue an orthogonal approach and ask whether a reduced set of variables, such as the local densities, are sufficient to describe the state of a crowd. Inspired by the theoretical framework of Density Functional Theory, we have developed a system that uses only measurements of local densities to extract two independent crowd behavior functions: (1) preferences for locations and (2) interactions between individuals. With these two functions, we have accurately predicted how a model system of walking Drosophila melanogaster distributes itself in an arbitrary 2D environment. In addition, this density-based approach measures properties of the crowd from only observations of the crowd itself without any knowledge of the detailed interactions and thus it can make predictions about the resulting distributions of these flies in arbitrary environments, in real-time. This research was supported in part by ARO W911NF-16-1-0433.

  5. Applied Distributed Model Predictive Control for Energy Efficient Buildings and Ramp Metering

    Science.gov (United States)

    Koehler, Sarah Muraoka

    Industrial large-scale control problems present an interesting algorithmic design challenge. A number of controllers must cooperate in real-time on a network of embedded hardware with limited computing power in order to maximize system efficiency while respecting constraints and despite communication delays. Model predictive control (MPC) can automatically synthesize a centralized controller which optimizes an objective function subject to a system model, constraints, and predictions of disturbance. Unfortunately, the computations required by model predictive controllers for large-scale systems often limit its industrial implementation only to medium-scale slow processes. Distributed model predictive control (DMPC) enters the picture as a way to decentralize a large-scale model predictive control problem. The main idea of DMPC is to split the computations required by the MPC problem amongst distributed processors that can compute in parallel and communicate iteratively to find a solution. Some popularly proposed solutions are distributed optimization algorithms such as dual decomposition and the alternating direction method of multipliers (ADMM). However, these algorithms ignore two practical challenges: substantial communication delays present in control systems and also problem non-convexity. This thesis presents two novel and practically effective DMPC algorithms. The first DMPC algorithm is based on a primal-dual active-set method which achieves fast convergence, making it suitable for large-scale control applications which have a large communication delay across its communication network. In particular, this algorithm is suited for MPC problems with a quadratic cost, linear dynamics, forecasted demand, and box constraints. We measure the performance of this algorithm and show that it significantly outperforms both dual decomposition and ADMM in the presence of communication delay. The second DMPC algorithm is based on an inexact interior point method which is

  6. An empirical evaluation of classification algorithms for fault prediction in open source projects

    Directory of Open Access Journals (Sweden)

    Arvinder Kaur

    2018-01-01

    Creating software of high quality has become difficult these days, given that the size and complexity of the developed software are high. Predicting the quality of software in early phases helps to reduce testing resources. Various statistical and machine learning techniques are used for prediction of the quality of the software. In this paper, six machine learning models have been used for software quality prediction on five open source software projects. A variety of metrics have been evaluated for the software, including C & K, Henderson & Sellers, McCabe etc. Results show that Random Forest and Bagging produce good results while Naïve Bayes is least preferable for prediction.
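
    A comparison of this kind is straightforward to reproduce in outline; the sketch below uses a synthetic metrics table in place of metrics mined from real open source projects and compares three of the classifiers named in the abstract by cross-validated AUC.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a class-level metrics table (C&K, McCabe, ...) with a
# binary fault label; a real study would load metrics mined from the projects.
X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           weights=[0.8, 0.2], random_state=0)

models = {
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Bagging": BaggingClassifier(random_state=0),
    "NaiveBayes": GaussianNB(),
}
for name, clf in models.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:>12}: mean AUC = {auc:.3f}")
```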

  7. Acoustic Emission Source Location Using a Distributed Feedback Fiber Laser Rosette

    Directory of Open Access Journals (Sweden)

    Fang Li

    2013-10-01

    This paper proposes an approach for acoustic emission (AE) source localization in a large marble stone using distributed feedback (DFB) fiber lasers. The aim of this study is to detect damage in structures such as those found in civil applications. The directional sensitivity of the DFB fiber laser is investigated by calculating a location coefficient using a method of digital signal analysis. In this, autocorrelation is used to extract the location coefficient from the periodic AE signal, and wavelet packet energy is calculated to obtain the location coefficient of a burst AE source. Normalization is applied to eliminate the influence of the distance and intensity of the AE source. A new location algorithm based on the location coefficient is then presented and tested to determine the location of an AE source using a Delta (Δ) DFB fiber laser rosette configuration. The advantages of the proposed algorithm over traditional methods based on fiber Bragg gratings (FBG) include a higher strain resolution for AE detection and the ability to take into account two different types of AE source for location.

  8. Dual-Source Linear Energy Prediction (LINE-P) Model in the Context of WSNs.

    Science.gov (United States)

    Ahmed, Faisal; Tamberg, Gert; Le Moullec, Yannick; Annus, Paul

    2017-07-20

    Energy harvesting technologies such as miniature power solar panels and micro wind turbines are increasingly used to help power wireless sensor network nodes. However, a major drawback of energy harvesting is its varying and intermittent characteristic, which can negatively affect the quality of service. This calls for careful design and operation of the nodes, possibly by means of, e.g., dynamic duty cycling and/or dynamic frequency and voltage scaling. In this context, various energy prediction models have been proposed in the literature; however, they are typically compute-intensive or only suitable for a single type of energy source. In this paper, we propose Linear Energy Prediction "LINE-P", a lightweight, yet relatively accurate model based on approximation and sampling theory; LINE-P is suitable for dual-source energy harvesting. Simulations and comparisons against existing similar models have been conducted with low and medium resolutions (i.e., 60 and 22 min intervals/24 h) for the solar energy source (low variations) and with high resolutions (15 min intervals/24 h) for the wind energy source. The results show that the accuracy of the solar-based and wind-based predictions is up to approximately 98% and 96%, respectively, while requiring a lower complexity and memory than the other models. For the cases where LINE-P's accuracy is lower than that of other approaches, it still has the advantage of lower computing requirements, making it more suitable for embedded implementation, e.g., in wireless sensor network coordinator nodes or gateways.

  9. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local Universe

    Energy Technology Data Exchange (ETDEWEB)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene, E-mail: mertsch@nbi.ku.dk, E-mail: mohamed.rameez@nbi.ku.dk, E-mail: tamborra@nbi.ku.dk [Niels Bohr International Academy, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen (Denmark)

    2017-03-01

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between 'warm' spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10⁻⁶ Mpc⁻³ and neutrino luminosity L_ν ≲ 10⁴² erg s⁻¹ (10⁴¹ erg s⁻¹) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring statistically significant direct observation of a point source.

  10. Effects of the infectious period distribution on predicted transitions in childhood disease dynamics.

    Science.gov (United States)

    Krylova, Olga; Earn, David J D

    2013-07-06

    The population dynamics of infectious diseases occasionally undergo rapid qualitative changes, such as transitions from annual to biennial cycles or to irregular dynamics. Previous work, based on the standard seasonally forced 'susceptible-exposed-infectious-removed' (SEIR) model has found that transitions in the dynamics of many childhood diseases result from bifurcations induced by slow changes in birth and vaccination rates. However, the standard SEIR formulation assumes that the stage durations (latent and infectious periods) are exponentially distributed, whereas real distributions are narrower and centred around the mean. Much recent work has indicated that realistically distributed stage durations strongly affect the dynamical structure of seasonally forced epidemic models. We investigate whether inferences drawn from previous analyses of transitions in patterns of measles dynamics are robust to the shapes of the stage duration distributions. As an illustrative example, we analyse measles dynamics in New York City from 1928 to 1972. We find that with a fixed mean infectious period in the susceptible-infectious-removed (SIR) model, the dynamical structure and predicted transitions vary substantially as a function of the shape of the infectious period distribution. By contrast, with fixed mean latent and infectious periods in the SEIR model, the shapes of the stage duration distributions have a less dramatic effect on model dynamical structure and predicted transitions. All these results can be understood more easily by considering the distribution of the disease generation time as opposed to the distributions of individual disease stages. Numerical bifurcation analysis reveals that for a given mean generation time the dynamics of the SIR and SEIR models for measles are nearly equivalent and are insensitive to the shapes of the disease stage distributions.
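
    A standard way to move from the exponential stage durations of the basic SIR/SEIR formulation to the narrower, more realistic gamma (Erlang) distributions discussed here is the "linear chain trick": the infectious compartment is split into n sequential sub-stages while keeping the mean infectious period fixed. The sketch below uses illustrative parameters, not the measles fits from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir_erlang(t, y, beta, gamma, n):
    """SIR model with an Erlang(n)-distributed infectious period via n sub-stages of I."""
    S, *I, R = y
    I_total = sum(I)
    dS = -beta * S * I_total
    dI = [beta * S * I_total - n * gamma * I[0]]
    for j in range(1, n):
        dI.append(n * gamma * I[j - 1] - n * gamma * I[j])
    dR = n * gamma * I[-1]
    return [dS, *dI, dR]

# Illustrative parameters: the mean infectious period 1/gamma is fixed; only the
# shape of its distribution changes with n. n = 1 recovers the exponential SIR.
beta, gamma = 1.5, 1.0
for n in (1, 3, 10):
    y0 = [0.99, 0.01] + [0.0] * (n - 1) + [0.0]
    sol = solve_ivp(sir_erlang, (0, 30), y0, args=(beta, gamma, n))
    print(f"n = {n:2d} sub-stages: final susceptible fraction = {sol.y[0, -1]:.3f}")
```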

  11. Temperature distribution of a simplified rotor due to a uniform heat source

    Science.gov (United States)

    Welzenbach, Sarah; Fischer, Tim; Meier, Felix; Werner, Ewald; kyzy, Sonun Ulan; Munz, Oliver

    2018-03-01

    In gas turbines, high combustion efficiency as well as operational safety are required. Thus, labyrinth seal systems with honeycomb liners are commonly used. In the case of rubbing events in the seal system, the components can be damaged due to cyclic thermal and mechanical loads. Temperature differences occurring at labyrinth seal fins during rubbing events can be determined by considering a single heat source acting periodically on the surface of a rotating cylinder. Existing literature analysing the temperature distribution on rotating cylindrical bodies due to a stationary heat source is reviewed. The temperature distribution on the circumference of a simplified labyrinth seal fin is calculated using an available and easy to implement analytical approach. A finite element model of the simplified labyrinth seal fin is created and the numerical results are compared to the analytical results. The temperature distributions calculated by the analytical and the numerical approaches coincide for low sliding velocities, while there are discrepancies of the calculated maximum temperatures for higher sliding velocities. The use of the analytical approach allows the conservative estimation of the maximum temperatures arising in labyrinth seal fins during rubbing events. At the same time, high calculation costs can be avoided.

  12. Degree of polarization and source counts of faint radio sources from Stacking Polarized intensity

    International Nuclear Information System (INIS)

    Stil, J. M.; George, S. J.; Keller, B. W.; Taylor, A. R.

    2014-01-01

    We present stacking polarized intensity as a means to study the polarization of sources that are too faint to be detected individually in surveys of polarized radio sources. Stacking offers not only high sensitivity to the median signal of a class of radio sources, but also avoids a detection threshold in polarized intensity, and therefore an arbitrary exclusion of sources with a low percentage of polarization. Correction for polarization bias is done through a Monte Carlo analysis and tested on a simulated survey. We show that the nonlinear relation between the real polarized signal and the detected signal requires knowledge of the shape of the distribution of fractional polarization, which we constrain using the ratio of the upper quartile to the lower quartile of the distribution of stacked polarized intensities. Stacking polarized intensity for NRAO VLA Sky Survey (NVSS) sources down to the detection limit in Stokes I, we find a gradual increase in median fractional polarization that is consistent with a trend that was noticed before for bright NVSS sources, but is much more gradual than found by previous deep surveys of radio polarization. Consequently, the polarized radio source counts derived from our stacking experiment predict fewer polarized radio sources for future surveys with the Square Kilometre Array and its pathfinders.

  13. Numerical Prediction of Wave Patterns Due to Motion of 3D Bodies by Kelvin-Havelock Sources

    Directory of Open Access Journals (Sweden)

    Ghassemi Hassan

    2016-12-01

    This paper discusses the numerical evaluation of the hydrodynamic characteristics of submerged and surface-piercing moving bodies. Generally, two main classes of potential methods are used for the hydrodynamic analysis of steadily moving bodies: Rankine and Kelvin-Havelock singularity distributions. In this paper, Kelvin-Havelock sources are used to simulate the moving bodies, and the free surface wave patterns are then obtained. The numerical evaluation of the potential distribution of a Kelvin-Havelock source is presented and discussed in full. Numerical results are calculated and presented for a 2D cylinder, a single source, two parallel moving sources, a sphere, an ellipsoid and a standard Wigley hull in different situations, showing acceptable agreement with results from other studies and experiments.

  14. North Slope, Alaska: Source rock distribution, richness, thermal maturity, and petroleum charge

    Science.gov (United States)

    Peters, K.E.; Magoon, L.B.; Bird, K.J.; Valin, Z.C.; Keller, M.A.

    2006-01-01

    Four key marine petroleum source rock units were identified, characterized, and mapped in the subsurface to better understand the origin and distribution of petroleum on the North Slope of Alaska. These marine source rocks, from oldest to youngest, include four intervals: (1) Middle-Upper Triassic Shublik Formation, (2) basal condensed section in the Jurassic-Lower Cretaceous Kingak Shale, (3) Cretaceous pebble shale unit, and (4) Cretaceous Hue Shale. Well logs for more than 60 wells and total organic carbon (TOC) and Rock-Eval pyrolysis analyses for 1183 samples in 125 well penetrations of the source rocks were used to map the present-day thickness of each source rock and the quantity (TOC), quality (hydrogen index), and thermal maturity (Tmax) of the organic matter. Based on assumptions related to carbon mass balance and regional distributions of TOC, the present-day source rock quantity and quality maps were used to determine the extent of fractional conversion of the kerogen to petroleum and to map the original TOC (TOCo) and the original hydrogen index (HIo) prior to thermal maturation. The quantity and quality of oil-prone organic matter in Shublik Formation source rock generally exceeded that of the other units prior to thermal maturation (commonly TOCo > 4 wt.% and HIo > 600 mg hydrocarbon/g TOC), although all are likely sources for at least some petroleum on the North Slope. We used Rock-Eval and hydrous pyrolysis methods to calculate expulsion factors and petroleum charge for each of the four source rocks in the study area. Without attempting to identify the correct methods, we conclude that calculations based on Rock-Eval pyrolysis overestimate expulsion factors and petroleum charge because low pressure and rapid removal of thermally cracked products by the carrier gas retards cross-linking and pyrobitumen formation that is otherwise favored by natural burial maturation. Expulsion factors and petroleum charge based on hydrous pyrolysis may also be high

  15. Prediction of residential radon exposure of the whole Swiss population: comparison of model-based predictions with measurement-based predictions.

    Science.gov (United States)

    Hauri, D D; Huss, A; Zimmermann, F; Kuehni, C E; Röösli, M

    2013-10-01

    Radon plays an important role for human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimate mean radon exposure in the Swiss population: model-based predictions at individual level and measurement-based predictions based on measurements aggregated at municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows predicting radon levels at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
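
    The measurement-based approach described here reduces to a floor-corrected, population-weighted average of municipality means; a minimal sketch with invented data (not the Swiss dataset) looks as follows.

```python
import numpy as np
import pandas as pd

# Illustrative municipality-level data: mean measured radon concentration, a
# floor-distribution correction factor, and population (all values invented).
df = pd.DataFrame({
    "municipality": ["A", "B", "C"],
    "mean_radon_bq_m3": [65.0, 120.0, 80.0],
    "floor_correction": [0.95, 0.90, 1.05],   # corrects for over-sampled ground floors
    "population": [12000, 3000, 45000],
})

corrected = df["mean_radon_bq_m3"] * df["floor_correction"]
population_weighted_mean = np.average(corrected, weights=df["population"])
print(f"population-weighted mean exposure: {population_weighted_mean:.1f} Bq/m³")
```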

  16. Understanding the dynamics in distribution of invasive alien plant species under predicted climate change in Western Himalaya.

    Science.gov (United States)

    Thapa, Sunil; Chitale, Vishwas; Rijal, Srijana Joshi; Bisht, Neha; Shrestha, Bharat Babu

    2018-01-01

    Invasive alien plant species (IAPS) can pose severe threats to biodiversity and the stability of native ecosystems; therefore, predicting the distribution of IAPS plays a crucial role in effective planning and management of ecosystems. In the present study, we use the Maximum Entropy (MaxEnt) modelling approach to predict the potential distribution of eleven IAPS under future climatic conditions under RCP 2.6 and RCP 8.5 in part of the Kailash Sacred Landscape region in the Western Himalaya. Based on the model predictions, the distribution of most of these invasive plants is expected to expand under future climatic scenarios, which might pose a serious threat to native ecosystems through competition for resources in the study area. Native scrublands and subtropical needle-leaved forests will be the ecosystems most affected by the expansion of these IAPS. The present study is the first of its kind in the Kailash Sacred Landscape in the field of invasive plants, and the predictions of potential distribution under future climatic conditions from our study could help decision makers in planning and managing these forest ecosystems effectively.

  17. Light source distribution and scattering phase function influence light transport in diffuse multi-layered media

    Science.gov (United States)

    Vaudelle, Fabrice; L'Huillier, Jean-Pierre; Askoura, Mohamed Lamine

    2017-06-01

    Red and near-infrared light is often used as a diagnostic and imaging probe for highly scattering media such as biological tissues, fruits and vegetables. Part of the diffusively reflected light gives interesting information related to the tissue subsurface, whereas light recorded at further distances may probe deeper into the interrogated turbid tissues. However, modelling diffusive events occurring at short source-detector distances requires considering both the distribution of the light sources and the scattering phase functions. In this report, a modified Monte Carlo model is used to compute light transport in curved and multi-layered tissue samples which are covered with a thin and highly diffusing tissue layer. Different light source distributions (ballistic, diffuse or Lambertian) are tested with specific scattering phase functions (modified or not modified Henyey-Greenstein, Gegenbauer and Mie) to compute the amount of backscattered and transmitted light in apple and human skin structures. Comparisons between simulation results and experiments carried out with a multispectral imaging setup confirm the soundness of the theoretical strategy and may explain the role of the skin on light transport in whole and half-cut apples. Other computational results show that a Lambertian source distribution combined with a Henyey-Greenstein phase function provides a higher photon density in the stratum corneum than in the upper dermis layer. Furthermore, it is also shown that the scattering phase function may affect the shape and the magnitude of the Bidirectional Reflectance Distribution Function (BRDF) exhibited at the skin surface.
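
    The Henyey-Greenstein phase function mentioned here has a closed-form inverse CDF, so scattering angles can be sampled directly in a Monte Carlo photon step. A minimal sketch (g values chosen for illustration; the mean cosine of the sampled angles should recover g):

```python
import numpy as np

def sample_hg_costheta(g, rng, n):
    """Sample scattering-angle cosines from the Henyey-Greenstein phase function."""
    xi = rng.random(n)
    if abs(g) < 1e-6:                      # isotropic limit
        return 2.0 * xi - 1.0
    frac = (1.0 - g**2) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g**2 - frac**2) / (2.0 * g)

rng = np.random.default_rng(0)
for g in (0.0, 0.8, 0.95):                 # g ~ 0.8-0.95 is typical for soft tissue
    mu = sample_hg_costheta(g, rng, 200_000)
    print(f"g = {g:.2f}: sampled mean cos(theta) = {mu.mean():.3f}")
```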

  18. A STATISTICAL APPROACH TO RECOGNIZING SOURCE CLASSES FOR UNASSOCIATED SOURCES IN THE FIRST FERMI-LAT CATALOG

    Energy Technology Data Exchange (ETDEWEB)

    Ackermann, M. [Deutsches Elektronen Synchrotron DESY, D-15738 Zeuthen (Germany); Ajello, M.; Allafort, A.; Berenji, B.; Blandford, R. D.; Bloom, E. D.; Borgland, A. W.; Buehler, R. [W. W. Hansen Experimental Physics Laboratory, Kavli Institute for Particle Astrophysics and Cosmology, Department of Physics and SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305 (United States); Antolini, E.; Bonamente, E. [Istituto Nazionale di Fisica Nucleare, Sezione di Perugia, I-06123 Perugia (Italy); Baldini, L.; Bellazzini, R.; Bregeon, J. [Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, I-56127 Pisa (Italy); Ballet, J. [Laboratoire AIM, CEA-IRFU/CNRS/Universite Paris Diderot, Service d' Astrophysique, CEA Saclay, 91191 Gif sur Yvette (France); Barbiellini, G. [Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, I-34127 Trieste (Italy); Bastieri, D. [Istituto Nazionale di Fisica Nucleare, Sezione di Padova, I-35131 Padova (Italy); Bouvier, A. [Santa Cruz Institute for Particle Physics, Department of Physics and Department of Astronomy and Astrophysics, University of California at Santa Cruz, Santa Cruz, CA 95064 (United States); Brandt, T. J. [CNRS, IRAP, F-31028 Toulouse Cedex 4 (France); Brigida, M. [Dipartimento di Fisica ' M. Merlin' dell' Universita e del Politecnico di Bari, I-70126 Bari (Italy); Bruel, P., E-mail: monzani@slac.stanford.edu, E-mail: vilchez@cesr.fr, E-mail: salvetti@lambrate.inaf.it, E-mail: elizabeth.c.ferrara@nasa.gov [Laboratoire Leprince-Ringuet, Ecole polytechnique, CNRS/IN2P3, Palaiseau (France); and others

    2012-07-01

    The Fermi Large Area Telescope (LAT) First Source Catalog (1FGL) provided spatial, spectral, and temporal properties for a large number of γ-ray sources using a uniform analysis method. After correlating with the most-complete catalogs of source types known to emit γ rays, 630 of these sources are 'unassociated' (i.e., have no obvious counterparts at other wavelengths). Here, we employ two statistical analyses of the primary γ-ray characteristics for these unassociated sources in an effort to correlate their γ-ray properties with the active galactic nucleus (AGN) and pulsar populations in 1FGL. Based on the correlation results, we classify 221 AGN-like and 134 pulsar-like sources in the 1FGL unassociated sources. The results of these source 'classifications' appear to match the expected source distributions, especially at high Galactic latitudes. While useful for planning future multiwavelength follow-up observations, these analyses use limited inputs, and their predictions should not be considered equivalent to 'probable source classes' for these sources. We discuss multiwavelength results and catalog cross-correlations to date, and provide new source associations for 229 Fermi-LAT sources that had no association listed in the 1FGL catalog. By validating the source classifications against these new associations, we find that the new association matches the predicted source class in ≈80% of the sources.

  19. A STATISTICAL APPROACH TO RECOGNIZING SOURCE CLASSES FOR UNASSOCIATED SOURCES IN THE FIRST FERMI-LAT CATALOG

    International Nuclear Information System (INIS)

    Ackermann, M.; Ajello, M.; Allafort, A.; Berenji, B.; Blandford, R. D.; Bloom, E. D.; Borgland, A. W.; Buehler, R.; Antolini, E.; Bonamente, E.; Baldini, L.; Bellazzini, R.; Bregeon, J.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bouvier, A.; Brandt, T. J.; Brigida, M.; Bruel, P.

    2012-01-01

    The Fermi Large Area Telescope (LAT) First Source Catalog (1FGL) provided spatial, spectral, and temporal properties for a large number of γ-ray sources using a uniform analysis method. After correlating with the most-complete catalogs of source types known to emit γ rays, 630 of these sources are 'unassociated' (i.e., have no obvious counterparts at other wavelengths). Here, we employ two statistical analyses of the primary γ-ray characteristics for these unassociated sources in an effort to correlate their γ-ray properties with the active galactic nucleus (AGN) and pulsar populations in 1FGL. Based on the correlation results, we classify 221 AGN-like and 134 pulsar-like sources in the 1FGL unassociated sources. The results of these source 'classifications' appear to match the expected source distributions, especially at high Galactic latitudes. While useful for planning future multiwavelength follow-up observations, these analyses use limited inputs, and their predictions should not be considered equivalent to 'probable source classes' for these sources. We discuss multiwavelength results and catalog cross-correlations to date, and provide new source associations for 229 Fermi-LAT sources that had no association listed in the 1FGL catalog. By validating the source classifications against these new associations, we find that the new association matches the predicted source class in ∼80% of the sources.

  20. Multivariate models for prediction of rheological characteristics of filamentous fermentation broth from the size distribution

    DEFF Research Database (Denmark)

    Petersen, Nanna; Stocks, S.; Gernaey, Krist

    2008-01-01

    fermentations conducted in 550 L pilot scale tanks were characterized with respect to particle size distribution, biomass concentration, and rheological properties. The rheological properties were described using the Herschel-Bulkley model. Estimation of all three parameters in the Herschel-Bulkley model (yield stress, consistency index and flow index) … in filamentous fermentations. It was therefore chosen to fix this parameter to the average value, thereby decreasing the standard deviation of the estimates of the remaining rheological parameters significantly. Using a PLSR model, a reasonable prediction of apparent viscosity (μ_app), yield stress (τ_y), and consistency index (K) could be made from the size distributions, biomass concentration, and process information. This provides a predictive method with a high predictive power for the rheology of fermentation broth, with the advantage over previous models that τ_y and K can be predicted as well as μ_app.
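
    The Herschel-Bulkley model referred to above relates shear stress to shear rate as τ = τ_y + K·γ̇ⁿ; fitting its parameters to rheometer data is a small nonlinear regression. The data below are invented for the sketch and are not the pilot-scale broth measurements from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(shear_rate, tau_y, K, n):
    """Herschel-Bulkley model: tau = tau_y + K * gamma_dot**n."""
    return tau_y + K * shear_rate**n

# Illustrative rheometer data: shear rate [1/s] and shear stress [Pa]
gamma_dot = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
tau = np.array([6.0, 6.6, 8.1, 10.0, 13.1, 20.5, 30.1])

params, _ = curve_fit(herschel_bulkley, gamma_dot, tau, p0=(5.0, 1.0, 0.7))
tau_y, K, n = params
print(f"yield stress = {tau_y:.2f} Pa, consistency K = {K:.2f}, flow index n = {n:.2f}")
```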

  1. A method for uncertainty quantification in the life prediction of gas turbine components

    Energy Technology Data Exchange (ETDEWEB)

    Lodeby, K.; Isaksson, O.; Jaervstraat, N. [Volvo Aero Corporation, Trolhaettan (Sweden)

    1998-12-31

    A failure in an aircraft jet engine can have severe consequences which cannot be accepted, and high requirements are therefore placed on engine reliability. Consequently, assessment of the reliability of life predictions used in design and maintenance is important. To assess the validity of the predicted life, a method to quantify the contribution to the total uncertainty in the life prediction from different uncertainty sources is developed. The method is a structured approach for uncertainty quantification that uses a generic description of the life prediction process. It is based on an approximate error propagation theory combined with a unified treatment of random and systematic errors. The result is an approximate statistical distribution for the predicted life. The method is applied to life predictions for three different jet engine components. The total uncertainty was of a reasonable order of magnitude and a good qualitative picture of the distribution of the uncertainty contributions from the different sources was obtained. The relative importance of the uncertainty sources differs between the three components. It is also highly dependent on the methods and assumptions used in the life prediction. Advantages and disadvantages of this method are discussed. (orig.) 11 refs.
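
    The propagation step of such an analysis can be illustrated with a generic first-order (Gaussian) error propagation cross-checked by Monte Carlo; the life model and all input uncertainties below are toy placeholders, not the structured random/systematic treatment used in the report.

```python
import numpy as np

def predicted_life(stress, temperature):
    """Toy life model (illustrative only): life decreases with stress and temperature."""
    return 1.0e20 * stress**-4.0 * np.exp(-temperature / 150.0)

# Nominal inputs and standard uncertainties (placeholders)
x0    = np.array([300.0, 450.0])        # stress [MPa], temperature [degC]
sigma = np.array([15.0, 10.0])

# First-order propagation: Var(L) ~ sum_i (dL/dx_i)^2 * sigma_i^2
eps = 1e-4
grad = np.array([
    (predicted_life(x0[0] + eps, x0[1]) - predicted_life(x0[0] - eps, x0[1])) / (2 * eps),
    (predicted_life(x0[0], x0[1] + eps) - predicted_life(x0[0], x0[1] - eps)) / (2 * eps),
])
life_nominal = predicted_life(*x0)
sigma_life_linear = np.sqrt(np.sum((grad * sigma) ** 2))

# Monte Carlo cross-check of the propagated spread
rng = np.random.default_rng(0)
samples = rng.normal(x0, sigma, size=(100_000, 2))
lives = predicted_life(samples[:, 0], samples[:, 1])
print(f"nominal life: {life_nominal:.3g} cycles")
print(f"std (first-order): {sigma_life_linear:.3g}, std (Monte Carlo): {lives.std():.3g}")
```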

  2. Dose distribution and dosimetry parameters calculation of MED3633 Palladium-103 source in water phantom using MCNP

    International Nuclear Information System (INIS)

    Mowlavi, A. A.; Binesh, A.; Moslehitabar, H.

    2006-01-01

    Palladium-103 (¹⁰³Pd) is a brachytherapy source for cancer treatment. Monte Carlo codes are usually applied to calculate dose distributions and shielding effects. A Monte Carlo calculation of the dose distribution in a water phantom due to a MED3633 ¹⁰³Pd source is presented in this work. Materials and Methods: The dose distribution around the ¹⁰³Pd Model MED3633 source located in the center of a 30×30×30 m³ water phantom cube was calculated with the MCNP code using the Monte Carlo method. The percentage depth dose variation along the different axes parallel and perpendicular to the source was also calculated. Then, the isodose curves for 100%, 75%, 50% and 25% percentage depth dose and the dosimetry parameters of the TG-43 protocol were determined. Results: The results show that the Monte Carlo method can accurately calculate dose deposition in the high-gradient region near the source. The isodose curves and dosimetric characteristics obtained for the MED3633 ¹⁰³Pd source are in good agreement with published results. Conclusion: The isodose curves of the MED3633 ¹⁰³Pd source have been derived from dose calculations with the MCNP code. The calculated dosimetry parameters for the source agree well with published Monte Carlo and experimental values.
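
    Once the TG-43 parameters are tabulated, the dose rate along the transverse axis follows from the 1D (point-source) formalism D(r) = S_K·Λ·(r0/r)²·g(r)·φ_an(r). The sketch below shows only the bookkeeping; every numerical value is a placeholder and not the published MED3633 consensus data.

```python
import numpy as np

# 1D (point-source) TG-43 dose-rate formalism:
#   D(r) = S_K * Lambda * (r0 / r)**2 * g(r) * phi_an(r)
# All numbers below are placeholders for illustration only.
S_K     = 1.0          # air-kerma strength [U]
Lambda  = 0.68         # dose-rate constant [cGy h^-1 U^-1] (placeholder)
r0      = 1.0          # reference distance [cm]
r_grid  = np.array([0.5, 1.0, 2.0, 3.0, 5.0])          # distances [cm]
g_r     = np.array([1.30, 1.00, 0.55, 0.30, 0.09])     # radial dose function (placeholder)
phi_an  = np.array([0.92, 0.94, 0.96, 0.97, 0.98])     # anisotropy factor (placeholder)

dose_rate = S_K * Lambda * (r0 / r_grid) ** 2 * g_r * phi_an
for r, d in zip(r_grid, dose_rate):
    print(f"r = {r:.1f} cm: dose rate = {d:.4f} cGy/h per U")
```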

  3. Predicting Wetland Distribution Changes under Climate Change and Human Activities in a Mid- and High-Latitude Region

    Directory of Open Access Journals (Sweden)

    Dandan Zhao

    2018-03-01

    Wetlands in the mid- and high-latitudes are particularly vulnerable to environmental changes and have declined dramatically in recent decades. Climate change and human activities are arguably the most important factors driving wetland distribution changes, which will have important implications for wetland ecological functions and services. We analyzed the importance of driving variables for wetland distribution and investigated the relative importance of climatic factors and human activity factors in driving historical wetland distribution changes. We predicted wetland distribution changes under climate change and human activities over the 21st century using the Random Forest model in a mid- and high-latitude region of Northeast China. Climate change scenarios included three Representative Concentration Pathways (RCPs) based on five general circulation models (GCMs) downloaded from the Coupled Model Intercomparison Project, Phase 5 (CMIP5). The three scenarios (RCP 2.6, RCP 4.5, and RCP 8.5) predicted radiative forcing to peak at 2.6, 4.5, and 8.5 W/m² by the 2100s, respectively. Our results showed that the variables with high importance scores were agricultural population proportion, warmness index, distance to water body, coldness index, and annual mean precipitation; climatic variables were given higher importance scores than human activity variables on average. Average predicted wetland areas among the three emission scenarios were 340,000 ha, 123,000 ha, and 113,000 ha for the 2040s, 2070s, and 2100s, respectively. The average change percentage in predicted wetland area among the three periods was greatest under the RCP 8.5 emission scenario followed by the RCP 4.5 and RCP 2.6 emission scenarios, at 78%, 64%, and 55%, respectively. Losses in predicted wetland distribution were generally around agricultural lands and expanded continually from the north to the whole region over time, while the gains were mostly associated with grasslands and water in the

  4. Predicting cycle time distributions for integrated processing workstations : an aggregate modeling approach

    NARCIS (Netherlands)

    Veeger, C.P.L.; Etman, L.F.P.; Lefeber, A.A.J.; Adan, I.J.B.F.; Herk, van J.; Rooda, J.E.

    2011-01-01

    To predict cycle time distributions of integrated processing workstations, detailed simulation models are almost exclusively used; these models require considerable development and maintenance effort. As an alternative, we propose an aggregate model that is a lumped-parameter representation of the

  5. Precision predictions for Higgs differential distributions at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Ebert, Markus

    2017-08-15

    After the discovery of a Standard-Model-like Higgs boson at the LHC a central aspect of the LHC physics program is to study the Higgs boson's couplings to Standard Model particles in detail in order to elucidate the nature of the Higgs mechanism and to search for hints of physics beyond the Standard Model. This requires precise theory predictions for both inclusive and differential Higgs cross sections. In this thesis we focus on the application of resummation techniques in the framework of Soft-Collinear Effective Theory (SCET) to obtain accurate predictions with reliable theory uncertainties for various observables. We first consider transverse momentum distributions, where the resummation of large logarithms in momentum (or distribution) space has been a long-standing open question. We show that its two-dimensional nature leads to additional difficulties not observed in one-dimensional observables such as thrust, and solving the associated renormalization group equations (RGEs) in momentum space thus requires a very careful scale setting. This is achieved using distributional scale setting, a new technique to solve differential equations such as RGEs directly in distribution space, as it allows one to treat logarithmic plus distributions like ordinary logarithms. We show that the momentum space solution fundamentally differs from the standard resummation in Fourier space by different boundary terms to all orders in perturbation theory and hence provides an interesting and complementary approach to obtain new insight into the all-order perturbative and nonperturbative structure of transverse momentum distributions. Our work lays the ground for a detailed numerical study of the momentum space resummation. We then show that in the case of a discovery of a new heavy color-singlet resonance such as a heavy Higgs boson, one can reliably and model-independently infer its production mechanism by dividing the data into two mutually exclusive jet bins. The method is

  6. Methods for Prediction of Temperature Distribution in Flashover Caused by Backdraft Fire

    Directory of Open Access Journals (Sweden)

    Guowei Zhang

    2014-01-01

    Accurately predicting the temperature distribution in a flashover fire is a key issue for evacuation and fire-fighting. Many good flashover fire experiments have been conducted, but most of these experiments were carried out in enclosures with fixed openings; research on fire development and temperature distribution in flashover caused by backdraft fire has not received enough attention. In order to study the flashover phenomenon caused by backdraft fire, a full-scale fire experiment was conducted in an abandoned office building. The process of fire development and the temperature distributions in the room and corridor were separately recorded during the experiment. The experiment shows that fire development in an enclosure is closely affected by the room ventilation. Unlike existing temperature curves, which have only one temperature peak, the temperature in flashover caused by backdraft may have more than one peak value, and there is a linear relationship between the maximum peak temperature and the distance away from the fire compartment. Based on the BFD curve and experimental data, mathematical models are finally proposed to predict the temperature curve in flashover fire caused by backdraft. These conclusions and the experimental data obtained in this paper could provide a valuable reference for fire simulation, hazard assessment, and fire protection design.
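
    For orientation only: the BFD curve mentioned here is commonly quoted in the form T = T_a + ΔT_max·exp(−(ln t − ln t_m)²/s_c); whether the paper uses exactly this parameterisation is not stated in the abstract, and the parameter values below are invented for the sketch.

```python
import numpy as np

def bfd_temperature(t_min, T_ambient, dT_peak, t_peak_min, shape):
    """Commonly quoted BFD fire curve: T = T_a + dT_max * exp(-(ln t - ln t_m)^2 / s_c)."""
    z = (np.log(t_min) - np.log(t_peak_min)) ** 2 / shape
    return T_ambient + dT_peak * np.exp(-z)

# Illustrative parameters only (the paper fits its own models to experimental data)
t = np.linspace(1, 120, 5)        # time [min]
T = bfd_temperature(t, T_ambient=20.0, dT_peak=900.0, t_peak_min=30.0, shape=1.6)
for ti, Ti in zip(t, T):
    print(f"t = {ti:5.1f} min: gas temperature ≈ {Ti:6.1f} °C")
```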

  7. Using Bayesian Belief Network (BBN) modelling for rapid source term prediction. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Knochenhauer, M.; Swaling, V.H.; Dedda, F.D.; Hansson, F.; Sjoekvist, S.; Sunnegaerd, K. [Lloyd's Register Consulting AB, Sundbyberg (Sweden)

    2013-10-15

    The project presented in this report deals with a number of complex issues related to the development of a tool for rapid source term prediction (RASTEP), based on a plant model represented as a Bayesian belief network (BBN) and a source term module which is used for assigning relevant source terms to BBN end states. Thus, RASTEP uses a BBN to model severe accident progression in a nuclear power plant in combination with pre-calculated source terms (i.e., amount, composition, timing, and release path of released radionuclides). The output is a set of possible source terms with associated probabilities. One major issue is associated with the integration of probabilistic and deterministic analyses, dealing with the challenge of making the source term determination flexible enough to give reliable and valid output throughout the accident scenario. The potential for connecting RASTEP to a fast-running source term prediction code has been explored, as well as alternative ways of improving the deterministic connections of the tool. As part of the investigation, a comparison of two deterministic severe accident analysis codes has been performed. A second important task has been to develop a general method where experts' beliefs can be included in a systematic way when defining the conditional probability tables (CPTs) in the BBN. The proposed method includes expert judgement in a systematic way when defining the CPTs of a BBN. Using this iterative method results in a reliable BBN even though expert judgements, with their associated uncertainties, have been used. It also simplifies verification and validation of the considerable amounts of quantitative data included in a BBN. (Author)
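
    The basic mechanics of such a BBN-based predictor — observe plant symptoms, update end-state probabilities — can be sketched with a toy three-node network. The structure, states and numbers are invented for illustration (the real RASTEP model is far larger), and pgmpy is assumed as the modelling library.

```python
# Toy Bayesian belief network in the spirit of RASTEP (invented structure/numbers).
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("SprayWorks", "ContainmentIntact"),
                         ("ContainmentIntact", "SourceTerm")])

cpd_spray = TabularCPD("SprayWorks", 2, [[0.9], [0.1]])          # states: yes / no
cpd_cont  = TabularCPD("ContainmentIntact", 2,
                       [[0.95, 0.60],    # P(intact | spray yes / no)
                        [0.05, 0.40]],
                       evidence=["SprayWorks"], evidence_card=[2])
cpd_st    = TabularCPD("SourceTerm", 2,
                       [[0.98, 0.20],    # P(small release | containment intact / failed)
                        [0.02, 0.80]],
                       evidence=["ContainmentIntact"], evidence_card=[2])
model.add_cpds(cpd_spray, cpd_cont, cpd_st)
assert model.check_model()

# Observing a plant symptom (spray unavailable) updates the source term probabilities
posterior = VariableElimination(model).query(
    variables=["SourceTerm"], evidence={"SprayWorks": 1})
print(posterior)
```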

  8. Using Bayesian Belief Network (BBN) modelling for rapid source term prediction. Final report

    International Nuclear Information System (INIS)

    Knochenhauer, M.; Swaling, V.H.; Dedda, F.D.; Hansson, F.; Sjoekvist, S.; Sunnegaerd, K.

    2013-10-01

    The project presented in this report deals with a number of complex issues related to the development of a tool for rapid source term prediction (RASTEP), based on a plant model represented as a Bayesian belief network (BBN) and a source term module which is used for assigning relevant source terms to BBN end states. Thus, RASTEP uses a BBN to model severe accident progression in a nuclear power plant in combination with pre-calculated source terms (i.e., amount, composition, timing, and release path of released radionuclides). The output is a set of possible source terms with associated probabilities. One major issue is associated with the integration of probabilistic and deterministic analyses, dealing with the challenge of making the source term determination flexible enough to give reliable and valid output throughout the accident scenario. The potential for connecting RASTEP to a fast-running source term prediction code has been explored, as well as alternative ways of improving the deterministic connections of the tool. As part of the investigation, a comparison of two deterministic severe accident analysis codes has been performed. A second important task has been to develop a general method where experts' beliefs can be included in a systematic way when defining the conditional probability tables (CPTs) in the BBN. The proposed method includes expert judgement in a systematic way when defining the CPTs of a BBN. Using this iterative method results in a reliable BBN even though expert judgements, with their associated uncertainties, have been used. It also simplifies verification and validation of the considerable amounts of quantitative data included in a BBN. (Author)

  9. Bayesian Belief Networks for predicting drinking water distribution system pipe breaks

    International Nuclear Information System (INIS)

    Francis, Royce A.; Guikema, Seth D.; Henneman, Lucas

    2014-01-01

    In this paper, we use Bayesian Belief Networks (BBNs) to construct a knowledge model for pipe breaks in a water zone. To the authors’ knowledge, this is the first attempt to model drinking water distribution system pipe breaks using BBNs. Development of expert systems such as BBNs for analyzing drinking water distribution system data is not only important for pipe break prediction, but is also a first step in preventing water loss and water quality deterioration through the application of machine learning techniques to facilitate data-based distribution system monitoring and asset management. Due to the difficulties in collecting, preparing, and managing drinking water distribution system data, most pipe break models can be classified as “statistical–physical” or “hypothesis-generating.” We develop the BBN with the hope of contributing to the “hypothesis-generating” class of models, while demonstrating the possibility that BBNs might also be used as “statistical–physical” models. Our model is learned from pipe breaks and covariate data from a mid-Atlantic United States (U.S.) drinking water distribution system network. BBN models are learned using a constraint-based method, a score-based method, and a hybrid method. Model evaluation is based on log-likelihood scoring. Sensitivity analysis using mutual information criterion is also reported. While our results indicate general agreement with prior results reported in pipe break modeling studies, they also suggest that it may be difficult to select among model alternatives. This model uncertainty may mean that more research is needed for understanding whether additional pipe break risk factors beyond age, break history, pipe material, and pipe diameter might be important for asset management planning. - Highlights: • We show Bayesian Networks for predictive and diagnostic management of water distribution systems. • Our model may enable system operators and managers to prioritize system

  10. Robust distributed model predictive control of linear systems with structured time-varying uncertainties

    Science.gov (United States)

    Zhang, Langwen; Xie, Wei; Wang, Jingcheng

    2017-11-01

    In this work, synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, are designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into solving a minimisation optimisation problem with linear matrix inequality constraints. An iterative online algorithm with adjustable maximum iteration is proposed to coordinate the distributed controllers to achieve a global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.

  11. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, which is implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for the distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined subject to the constraint of correct DSC decoding, which allows the proposed algorithm to achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced for the proposed algorithm to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.

  12. A practical two-way system of quantum key distribution with untrusted source

    International Nuclear Information System (INIS)

    Chen Ming-Juan; Liu Xiang

    2011-01-01

    The most severe problem of a two-way 'plug-and-play' (p and p) quantum key distribution system is that the source can be controlled by the eavesdropper. This kind of source is defined as an "untrusted source". This paper discusses the effects of the fluctuation of internal transmittance on the final key generation rate and the transmission distance. The security of the standard BB84 protocol, the one-decoy state protocol, and the weak+vacuum decoy state protocol, with untrusted sources and fluctuation of the internal transmittance, is studied. It is shown that the one-decoy state is sensitive to the statistical fluctuation but the weak+vacuum decoy state is only slightly affected by the fluctuation. It is also shown that both the maximum secure transmission distance and the final key generation rate are reduced when the fluctuation of Alice's laboratory transmittance is considered. (general)

  13. Evaluation of Airborne Remote Sensing Techniques for Predicting the Distribution of Energetic Compounds on Impact Areas

    National Research Council Canada - National Science Library

    Graves, Mark R; Dove, Linda P; Jenkins, Thomas F; Bigl, Susan; Walsh, Marianne E; Hewitt, Alan D; Lambert, Dennis; Perron, Nancy; Ramsey, Charles; Gamey, Jeff; Beard, Les; Doll, William E; Magoun, Dale

    2007-01-01

    .... These sampling approaches do not accurately account for the distribution of such contaminants over the landscape due to the distributed nature of explosive compound sources throughout impact areas...

  14. Predicting induced radioactivity for the accelerator operations at the Taiwan Photon Source.

    Science.gov (United States)

    Sheu, R J; Jiang, S H

    2010-12-01

    This study investigates the characteristics of induced radioactivity due to the operations of a 3-GeV electron accelerator at the Taiwan Photon Source (TPS). According to the beam loss analysis, the authors set two representative irradiation conditions for the activation analysis. The FLUKA Monte Carlo code has been used to predict the isotope inventories, residual activities, and remanent dose rates as a function of time. The calculation model itself is simple but conservative for the evaluation of induced radioactivity in a light source facility. This study highlights the importance of beam loss scenarios and demonstrates the great advantage of using FLUKA in comparing the predicted radioactivity with corresponding regulatory limits. The calculated results lead to the conclusion that, due to fairly low electron consumption, the radioactivity induced in the accelerator components and surrounding concrete walls of the TPS is rather moderate and manageable, while the possible activation of air and cooling water in the tunnel and their environmental releases are negligible.

  15. Prediction of vertical distribution and ambient development temperature of Baltic cod, Gadus morhua L., eggs

    DEFF Research Database (Denmark)

    Wieland, Kai; Jarre, Astrid

    1997-01-01

    An artificial neural network (ANN) model was established to predict the vertical distribution of Baltic cod eggs. Data from vertical distribution sampling in the Bornholm Basin over the period 1986-1995 were used to train and test the network, while data sets from sampling in 1996 were used for validation. The model explained 82% of the variance between observed and predicted relative frequencies of occurrence of the eggs in relation to salinity, temperature and oxygen concentration. The ANN fitted all observations satisfactorily except for one sampling date, where an exceptional hydrographic...
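
    As a rough illustration of the kind of model described (a feed-forward network mapping salinity, temperature and oxygen to a relative frequency of egg occurrence), here is a minimal sketch on synthetic data; the architecture, data and train/validation split are assumptions, not those of the study:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Synthetic hydrography: salinity [psu], temperature [deg C], oxygen [ml/l].
X = np.column_stack([rng.uniform(7, 18, 500),
                     rng.uniform(2, 10, 500),
                     rng.uniform(0, 8, 500)])
# Invented "relative frequency of occurrence", peaking at mid salinity and high oxygen.
y = np.exp(-((X[:, 0] - 12.0) / 3.0) ** 2) * (X[:, 2] / 8.0)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X[:400], y[:400])                                        # "training/test" years
print("held-out R^2:", round(model.score(X[400:], y[400:]), 3))    # "validation" year
```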

  16. Predicting the distributions of predator (snow leopard) and prey (blue sheep) under climate change in the Himalaya.

    Science.gov (United States)

    Aryal, Achyut; Shrestha, Uttam Babu; Ji, Weihong; Ale, Som B; Shrestha, Sujata; Ingty, Tenzing; Maraseni, Tek; Cockfield, Geoff; Raubenheimer, David

    2016-06-01

    Future climate change is likely to affect distributions of species, disrupt biotic interactions, and cause spatial incongruity of predator-prey habitats. Understanding the impacts of future climate change on species distribution will help in the formulation of conservation policies to reduce the risks of future biodiversity losses. Using a species distribution modeling approach with MaxEnt, we modeled current and future distributions of snow leopard (Panthera uncia) and its common prey, blue sheep (Pseudois nayaur), and observed the changes in niche overlap in the Nepal Himalaya. Annual mean temperature is the major climatic factor responsible for the snow leopard and blue sheep distributions in the energy-deficient environments of high altitudes. Currently, about 15.32% and 15.93% of the area of the Nepal Himalaya is suitable for snow leopard and blue sheep habitat, respectively. The bioclimatic models show that the current suitable habitats of both snow leopard and blue sheep will be reduced under future climate change. The predicted suitable habitat of the snow leopard decreases when blue sheep habitat is incorporated in the model. Our climate-only model shows that only 11.64% (17,190 km²) of the area of Nepal is suitable for the snow leopard under the current climate, and the suitable habitat is reduced to 5,435 km² (reduced by 24.02%) after incorporating the predicted distribution of blue sheep. The predicted distribution of the snow leopard is reduced by 14.57% in 2030 and by 21.57% in 2050 when the predicted distribution of blue sheep is included, compared with reductions of 1.98% in 2030 and 3.80% in 2050 under the climate-only model. It is predicted that future climate may alter the predator-prey spatial interaction, inducing a lower degree of overlap and a higher degree of mismatch between snow leopard and blue sheep niches. This suggests increased energetic costs of finding preferred prey for snow leopards - a species already facing energetic constraints due to the

  17. Predicting bottlenose dolphin distribution along Liguria coast (northwestern Mediterranean Sea) through different modeling techniques and indirect predictors.

    Science.gov (United States)

    Marini, C; Fossa, F; Paoli, C; Bellingeri, M; Gnone, G; Vassallo, P

    2015-03-01

    Habitat modeling is an important tool to investigate the quality of the habitat for a species within a certain area, to predict species distribution and to understand the ecological processes behind it. Many species have been investigated by means of habitat modeling techniques, mainly to support effective management and protection policies, and cetaceans play an important role in this context. The bottlenose dolphin (Tursiops truncatus) has been investigated with habitat modeling techniques since 1997. The objectives of this work were to predict the distribution of the bottlenose dolphin in a coastal area through the use of static morphological features and to compare the prediction performances of three different modeling techniques: Generalized Linear Model (GLM), Generalized Additive Model (GAM) and Random Forest (RF). Four static variables were tested: depth, bottom slope, distance from the 100 m bathymetric contour and distance from the coast. RF proved to be both the most accurate and the most precise modeling technique, with very high distribution probabilities predicted in presence cells (90.4% of mean predicted probabilities) and with 66.7% of presence cells having a predicted probability between 90% and 100%. The bottlenose dolphin distribution obtained with RF allowed the identification of specific areas with particularly high presence probability along the coastal zone; the recognition of these core areas may be the starting point for developing effective management practices to improve T. truncatus protection. Copyright © 2014 Elsevier Ltd. All rights reserved.
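
    A compact way to reproduce this kind of comparison (a logistic GLM versus a random forest on static predictors such as depth, slope and distance from the coast) is sketched below on synthetic presence/absence data; the predictors, data and scoring are illustrative, not the study's:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 1000
depth = rng.uniform(5, 500, n)        # m
slope = rng.uniform(0, 30, n)         # degrees
dist_coast = rng.uniform(0, 20, n)    # km
X = np.column_stack([depth, slope, dist_coast])
# Invented presences concentrated in shallow water close to the coast.
p = 1.0 / (1.0 + np.exp(0.02 * depth + 0.3 * dist_coast - 4.0))
y = rng.binomial(1, p)

for name, clf in [("GLM (logistic)", LogisticRegression(max_iter=1000)),
                  ("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0))]:
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean cross-validated AUC = {auc:.3f}")
```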

  18. Distributed predictive control of spiral wave in cardiac excitable media

    International Nuclear Information System (INIS)

    Zheng-Ning, Gan; Xin-Ming, Cheng

    2010-01-01

    In this paper, we propose distributed predictive control strategies for spiral waves in cardiac excitable media. The modified FitzHugh–Nagumo model was used as an approximate description of the cardiac excitable media. Based on control-Lyapunov theory, we obtained the distributed control equation, which consists of a positive control-Lyapunov function and a positive cost function. Using this equation, we investigate two kinds of robust control strategies: a time-dependent distributed control strategy and a space-time-dependent distributed control strategy. The feasibility of the strategies was demonstrated via an illustrative example, in which the spiral wave was prevented from occurring and the possibility of inducing ventricular fibrillation was eliminated. The strategies are helpful in designing various cardiac devices. Since the second strategy is more efficient and robust than the first, and its response time is far shorter, it is better suited to quick-response control systems. In addition, our spatiotemporal control strategies, especially the second one, can be applied to other cardiac models, and even to other reaction-diffusion systems. (general)
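
    The FitzHugh–Nagumo equations mentioned above are a standard two-variable description of excitable media. The sketch below integrates only the space-clamped (non-spatial) version to show repeated excitations; the diffusion term and the paper's control terms are deliberately left out, and the parameter values are generic textbook choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

def fitzhugh_nagumo(t, y, a=0.7, b=0.8, eps=0.08, I=0.5):
    """Space-clamped FitzHugh-Nagumo: fast voltage v, slow recovery w."""
    v, w = y
    dv = v - v**3 / 3.0 - w + I
    dw = eps * (v + a - b * w)
    return [dv, dw]

sol = solve_ivp(fitzhugh_nagumo, (0.0, 200.0), [-1.0, 1.0], max_step=0.1)
v = sol.y[0]
upward_crossings = np.sum((v[:-1] < 1.0) & (v[1:] >= 1.0))
print("excitations (upward crossings of v = 1):", int(upward_crossings))
```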

  19. Development and validation of a new virtual source model for portal image prediction and treatment quality control

    International Nuclear Information System (INIS)

    Chabert, Isabelle

    2015-01-01

    Intensity-Modulated Radiation Therapy (IMRT) requires extensive verification procedures to ensure correct dose delivery. Electronic Portal Imaging Devices (EPIDs) are widely used for quality assurance in radiotherapy, and also for dosimetric verification. For this latter application, the images obtained during the treatment session can be compared to a pre-calculated reference image in order to highlight dose delivery errors. The quality control performance depends (1) on the accuracy of the pre-calculated reference image and (2) on the ability of the tool used to compare images to detect errors. These two key points were studied during this PhD work. We chose to use a Monte Carlo (MC)-based method developed in the laboratory, based on the DPGLM (Dirichlet process generalized linear model) de-noising technique, to predict high-resolution reference images. A model of the studied linear accelerator (linac Synergy, Elekta, Crawley, UK) was first developed using the PENELOPE MC codes, and then commissioned using measurements acquired at the Hôpital Nord in Marseille. A 71 GB phase-space file (PSF) stored under the flattening filter was then analysed to build a new kind of virtual source model (VSM) based on correlated histograms (200 MB). This new and compact VSM is as accurate as the PSF for calculating dose distributions in water, provided the histogram sampling is based on an adaptive method. The associated EPID modelling in PENELOPE suggests that the hypotheses about the linac primary source were too simple and should be reconsidered. The use of the VSM to predict high-resolution portal images nevertheless led to excellent results. The VSM, associated with the linac and EPID MC models, was used to detect errors in IMRT treatment plans. A preliminary study was conducted introducing deliberate treatment errors into the portal image calculations (primary source parameters, phantom position and morphology changes). The γ-index commonly used in clinical routine appears to be less effective than the

  20. Memory for Textual Conflicts Predicts Sourcing When Adolescents Read Multiple Expository Texts

    Science.gov (United States)

    Stang Lund, Elisabeth; Bråten, Ivar; Brante, Eva W.; Strømsø, Helge I.

    2017-01-01

    This study investigated whether memory for conflicting information predicted mental representation of source-content links (i.e., who said what) in a sample of 86 Norwegian adolescent readers. Participants read four texts presenting conflicting claims about sun exposure and health. With differences in gender, prior knowledge, and interest…

  1. Do abundance distributions and species aggregation correctly predict macroecological biodiversity patterns in tropical forests?

    Science.gov (United States)

    Wiegand, Thorsten; Lehmann, Sebastian; Huth, Andreas; Fortin, Marie‐Josée

    2016-01-01

    Abstract Aim It has been recently suggested that different ‘unified theories of biodiversity and biogeography’ can be characterized by three common ‘minimal sufficient rules’: (1) species abundance distributions follow a hollow curve, (2) species show intraspecific aggregation, and (3) species are independently placed with respect to other species. Here, we translate these qualitative rules into a quantitative framework and assess if these minimal rules are indeed sufficient to predict multiple macroecological biodiversity patterns simultaneously. Location Tropical forest plots in Barro Colorado Island (BCI), Panama, and in Sinharaja, Sri Lanka. Methods We assess the predictive power of the three rules using dynamic and spatial simulation models in combination with census data from the two forest plots. We use two different versions of the model: (1) a neutral model and (2) an extended model that allowed for species differences in dispersal distances. In a first step we derive model parameterizations that correctly represent the three minimal rules (i.e. the model quantitatively matches the observed species abundance distribution and the distribution of intraspecific aggregation). In a second step we applied the parameterized models to predict four additional spatial biodiversity patterns. Results Species‐specific dispersal was needed to quantitatively fulfil the three minimal rules. The model with species‐specific dispersal correctly predicted the species–area relationship, but failed to predict the distance decay, the relationship between species abundances and aggregations, and the distribution of a spatial co‐occurrence index of all abundant species pairs. These results were consistent over the two forest plots. Main conclusions The three ‘minimal sufficient’ rules only provide an incomplete approximation of the stochastic spatial geometry of biodiversity in tropical forests. The assumption of independent interspecific placements is most

  2. Real-time distributed economic model predictive control for complete vehicle energy management

    NARCIS (Netherlands)

    Romijn, Constantijn; Donkers, Tijs; Kessels, John; Weiland, Siep

    2017-01-01

    In this paper, a real-time distributed economic model predictive control approach for complete vehicle energy management (CVEM) is presented using a receding control horizon in combination with a dual decomposition. The dual decomposition allows the CVEM optimization problem to be solved by solving

  3. Predicting the spatial distribution of leaf litterfall in a mixed deciduous forest

    NARCIS (Netherlands)

    Staelens, Jeroen; Nachtergale, Lieven; Luyssaert, Sebastiaan

    2004-01-01

    An accurate prediction of the spatial distribution of litterfall can improve insight in the interaction between the canopy layer and forest floor characteristics, which is a key feature in forest nutrient cycling. Attempts to model the spatial variability of litterfall have been made across forest

  4. Inverse modelling of fluvial sediment connectivity identifies characteristics and spatial distribution of sediment sources in a large river network.

    Science.gov (United States)

    Schmitt, R. J. P.; Bizzi, S.; Kondolf, G. M.; Rubin, Z.; Castelletti, A.

    2016-12-01

    Field and laboratory evidence indicates that the spatial distribution of transport in both alluvial and bedrock rivers is an adaptation to sediment supply. Sediment supply, in turn, depends on spatial distribution and properties (e.g., grain sizes and supply rates) of individual sediment sources. Analyzing the distribution of transport capacity in a river network could hence clarify the spatial distribution and properties of sediment sources. Yet, challenges include a) identifying magnitude and spatial distribution of transport capacity for each of multiple grain sizes being simultaneously transported, and b) estimating source grain sizes and supply rates, both at network scales. Herein, we approach the problem of identifying the spatial distribution of sediment sources and the resulting network sediment fluxes in a major, poorly monitored tributary (80,000 km²) of the Mekong. To this end, we apply the CASCADE modeling framework (Schmitt et al., 2016). CASCADE calculates transport capacities and sediment fluxes for multiple grain sizes on the network scale based on remotely sensed morphology and modelled hydrology. CASCADE is run in an inverse Monte Carlo approach for 7500 random initializations of source grain sizes. In all runs, the supply of each source is inferred from the minimum downstream transport capacity for the source grain size. Results for each realization are compared to sparse available sedimentary records. Only 1% of the initializations reproduced the sedimentary record. Results for these realizations revealed a spatial pattern in source supply rates, grain sizes, and network sediment fluxes that correlated well with map-derived patterns in lithology and river morphology. Hence, we propose that observable river hydro-morphology contains information on upstream source properties that can be back-calculated using an inverse modeling approach. Such an approach could be coupled to more detailed models of hillslope processes in future to derive integrated models

  5. Herschel-ATLAS: Dust Temperature and Redshift Distribution of SPIRE and PACS Detected Sources Using Submillimetre Colours

    Science.gov (United States)

    Amblard, A.; Cooray, Asantha; Serra, P.; Temi, P.; Barton, E.; Negrello, M.; Auld, R.; Baes, M.; Baldry, I. K.; Bamford, S.

    2010-01-01

    We present colour-colour diagrams of detected sources in the Herschel-ATLAS Science Demonstration Field from 100 to 500 μm using both PACS and SPIRE. We fit isothermal modified-blackbody spectral energy distribution (SED) models in order to extract the dust temperature of sources with counterparts in GAMA or SDSS with either a spectroscopic or a photometric redshift. For a subsample of 331 sources detected in at least three FIR bands with significance greater than 30σ, we find an average dust temperature of (28 ± 8) K. For sources with no known redshifts, we populate the colour-colour diagram with a large number of SEDs generated with a broad range of dust temperatures and emissivity parameters and compare to colours of observed sources to establish the redshift distribution of those samples. For another subsample of 1686 sources with fluxes above 35 mJy at 350 μm and detected at 250 and 500 μm with a significance greater than 3σ, we find an average redshift of 2.2 ± 0.6.
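
    The dust temperatures quoted above come from fitting an isothermal modified blackbody, S_nu proportional to nu^beta B_nu(T), to the far-infrared photometry. A minimal sketch of such a fit on made-up fluxes, with the emissivity index beta held fixed (band set and values are illustrative, not the survey data):

```python
import numpy as np
from scipy.optimize import curve_fit

h, k, c = 6.626e-34, 1.381e-23, 2.998e8   # SI constants

def modified_blackbody(wavelength_um, amplitude, temperature, beta=1.5):
    """Isothermal grey body S_nu ~ nu^beta * B_nu(T); constants folded into 'amplitude'."""
    nu = c / (wavelength_um * 1e-6)                               # Hz
    planck_shape = (nu / 1e12) ** 3 / np.expm1(h * nu / (k * temperature))
    return amplitude * (nu / 1e12) ** beta * planck_shape

bands_um = np.array([100.0, 160.0, 250.0, 350.0, 500.0])  # PACS + SPIRE wavelengths
rng = np.random.default_rng(3)
fluxes = modified_blackbody(bands_um, 100.0, 28.0) * (1 + 0.05 * rng.normal(size=bands_um.size))

popt, _ = curve_fit(modified_blackbody, bands_um, fluxes, p0=(50.0, 20.0))
print(f"recovered dust temperature: {popt[1]:.1f} K (input was 28 K)")
```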

  6. Predicting Spatial Distribution of Key Honeybee Pests in Kenya Using Remotely Sensed and Bioclimatic Variables: Key Honeybee Pests Distribution Models

    Directory of Open Access Journals (Sweden)

    David M. Makori

    2017-02-01

    Full Text Available Beekeeping is indispensable to global food production. It is an alternate income source, especially in rural underdeveloped African settlements, and an important forest conservation incentive. However, dwindling honeybee colonies around the world are attributed to pests and diseases whose spatial distribution and influences are not well established. In this study, we used remotely sensed data to improve the reliability of pest ecological niche (EN) models to attain reliable pest distribution maps. Occurrence data on four pests (Aethina tumida, Galleria mellonella, Oplostomus haroldi and Varroa destructor) were collected from apiaries within four main agro-ecological regions responsible for over 80% of Kenya's beekeeping. Africlim bioclimatic and derived normalized difference vegetation index (NDVI) variables were used to model their ecological niches using Maximum Entropy (MaxEnt). Combined precipitation variables had a high positive logit influence on all remotely sensed and biotic models' performance. Remotely sensed vegetation variables had a substantial effect on the model, contributing up to 40.8% for G. mellonella, and regions with high rainfall seasonality were predicted to be high-risk areas. Projections (to 2055) indicated that, with the current climate change trend, these regions will experience increased honeybee pest risk. We conclude that honeybee pests could be modelled using bioclimatic data and remotely sensed variables in MaxEnt. Although the bioclimatic data were most relevant in all model results, incorporating vegetation seasonality variables to improve mapping of the 'actual' habitat of key honeybee pests and to identify risk and containment zones needs to be further investigated.

  7. Model of charge-state distributions for electron cyclotron resonance ion source plasmas

    Directory of Open Access Journals (Sweden)

    D. H. Edgell

    1999-12-01

    Full Text Available A computer model for the ion charge-state distribution (CSD) in an electron cyclotron resonance ion source (ECRIS) plasma is presented that incorporates non-Maxwellian distribution functions, multiple atomic species, and ion confinement due to the ambipolar potential well that arises from confinement of the electron cyclotron resonance (ECR) heated electrons. Atomic processes incorporated into the model include multiple ionization and multiple charge exchange with rate coefficients calculated for non-Maxwellian electron distributions. The electron distribution function is calculated using a Fokker-Planck code with an ECR heating term. This eliminates the electron temperature as an arbitrary user input. The model produces results that are a good match to CSD data from the ANL-ECRII ECRIS. Extending the model to 1D (axial) will also allow the model to determine the plasma and electrostatic potential profiles, further eliminating arbitrary user input to the model.

  8. Predictive modelling of grain size distributions from marine electromagnetic profiling data using end-member analysis and a radial basis function network

    Science.gov (United States)

    Baasch, B.; Müller, H.; von Dobeneck, T.

    2018-04-01

    In this work we present a new methodology to predict grain-size distributions from geophysical data. Specifically, electric conductivity and magnetic susceptibility of seafloor sediments recovered from electromagnetic profiling data are used to predict grain-size distributions along shelf-wide survey lines. Field data from the NW Iberian shelf are investigated and reveal a strong relation between the electromagnetic properties and grain-size distribution. The workflow presented here combines unsupervised and supervised machine learning techniques. Nonnegative matrix factorisation is used to determine grain-size end-members from sediment surface samples. Four end-members were found which well represent the variety of sediments in the study area. A radial-basis function network modified for prediction of compositional data is then used to estimate the abundances of these end-members from the electromagnetic properties. The end-members together with their predicted abundances are finally back-transformed to grain-size distributions. A minimum spatial variation constraint is implemented in the training of the network to avoid overfitting and to respect the spatial distribution of sediment patterns. The predicted models are tested via leave-one-out cross-validation, revealing high prediction accuracy with coefficients of determination (R²) between 0.76 and 0.89. The predicted grain-size distributions represent the well-known sediment facies and patterns on the NW Iberian shelf and provide new insights into their distribution, transition and dynamics. This study suggests that electromagnetic benthic profiling in combination with machine learning techniques is a powerful tool to estimate the grain-size distribution of marine sediments.
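
    A toy version of the two-stage workflow described above — unsupervised end-member extraction followed by a supervised map from the electromagnetic properties to end-member abundances — can be pieced together with scikit-learn; a kernel ridge regressor with an RBF kernel stands in for the paper's radial-basis-function network, and all data and dimensions below are invented:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(4)
n_samples, n_bins, n_members = 200, 30, 4

# Step 1: end-member analysis of grain-size distributions (rows sum to 1).
true_members = rng.dirichlet(np.ones(n_bins) * 0.3, size=n_members)
true_abund = rng.dirichlet(np.ones(n_members), size=n_samples)
gsd = true_abund @ true_members                     # "observed" grain-size distributions
nmf = NMF(n_components=n_members, init="nndsvda", max_iter=2000)
abundances = nmf.fit_transform(gsd)                 # end-member abundances
end_members = nmf.components_                       # end-member distributions

# Step 2: predict abundances from electromagnetic properties (conductivity, susceptibility).
em_props = true_abund @ rng.normal(size=(n_members, 2)) + 0.05 * rng.normal(size=(n_samples, 2))
model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=1.0).fit(em_props[:150], abundances[:150])
pred_abund = model.predict(em_props[150:])

# Back-transform: predicted abundances times end-members give predicted distributions.
pred_gsd = np.clip(pred_abund, 0.0, None) @ end_members
print("mean absolute error of predicted distributions:", float(np.abs(pred_gsd - gsd[150:]).mean()))
```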

  9. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System

    Directory of Open Access Journals (Sweden)

    Miao Sun

    2016-06-01

    Full Text Available We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is transformed to two dimensions. The method of the source location is verified with experiments using burning alcohol as fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source also for long sensing ranges.

  10. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System.

    Science.gov (United States)

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Li, Jun; Sigrist, Markus W; Dong, Fengzhong

    2016-06-06

    We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is transformed to two dimensions. The method of the source location is verified with experiments using burning alcohol as fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source also for long sensing ranges.

  11. 137Cs source dose distribution using the Fricke Xylenol Gel dosimetry

    International Nuclear Information System (INIS)

    Sato, R.; De Almeida, A.; Moreira, M.V.

    2009-01-01

    Dosimetric measurements close to radioisotope sources, such as those used in brachytherapy, require high spatial resolution to avoid incorrect results in the steep dose gradient region. In this work the Fricke Xylenol Gel dosimeter was used to obtain the spatial dose distribution. The readings from a 137Cs source were performed using two methods, a visible spectrophotometer and CCD camera images. Good agreement with the Sievert summation method was found for the transverse-axis dose profile, within uncertainties of 4% and 5% for the spectrophotometer and CCD camera, respectively. Our results show that the dosimeter is adequate for brachytherapy dosimetry and, owing to its relatively fast and easy preparation and reading, it is recommended for quality control in brachytherapy applications.
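
    The "Sievert summation" used as the reference above treats the line source as a series of point segments whose contributions are summed at each measurement point. A bare-bones version of that idea — inverse-square summation only, ignoring filtration, attenuation and scatter, so a simplification rather than the full Sievert integral, with an assumed source length:

```python
import numpy as np

def line_source_profile(active_length_cm, n_segments, distances_cm, strength_per_cm=1.0):
    """Sum point-kernel (1/r^2) contributions of a 1D source laid along the y-axis,
    evaluated at points on the transverse axis at distance x from the source centre."""
    seg_y = np.linspace(-active_length_cm / 2.0, active_length_cm / 2.0, n_segments)
    seg_strength = strength_per_cm * active_length_cm / n_segments
    return np.array([np.sum(seg_strength / (x**2 + seg_y**2)) for x in distances_cm])

distances = np.array([0.5, 1.0, 2.0, 3.0, 5.0])      # cm from the source centre
profile = line_source_profile(3.0, 200, distances)    # assumed 3 cm active length
print("relative transverse-axis profile:", profile / profile[0])
```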

  12. Species distribution models of tropical deep-sea snappers.

    Directory of Open Access Journals (Sweden)

    Céline Gomez

    Full Text Available Deep-sea fisheries provide an important source of protein to Pacific Island countries and territories that are highly dependent on fish for food security. However, spatial management of these deep-sea habitats is hindered by insufficient data. We developed species distribution models using spatially limited presence data for the main harvested species in the Western Central Pacific Ocean. We used bathymetric and water temperature data to develop presence-only species distribution models for the commercially exploited deep-sea snappers Etelis Cuvier 1828, Pristipomoides Valenciennes 1830, and Aphareus Cuvier 1830. We evaluated the performance of four different algorithms (CTA, GLM, MARS, and MAXENT) within the BIOMOD framework to obtain an ensemble of predicted distributions. We projected these predictions across the Western Central Pacific Ocean to produce maps of potential deep-sea snapper distributions in 32 countries and territories. Depth was consistently the best predictor of presence for all species groups across all models. Bathymetric slope was consistently the poorest predictor. Temperature at depth was a good predictor of presence for GLM only. Model precision was highest for MAXENT and CTA. There were strong regional patterns in predicted distribution of suitable habitat, with the largest areas of suitable habitat (>35% of the Exclusive Economic Zone) predicted in seven South Pacific countries and territories (Fiji, Matthew & Hunter, Nauru, New Caledonia, Tonga, Vanuatu and Wallis & Futuna). Predicted habitat also varied among species, with the proportion of predicted habitat highest for Aphareus and lowest for Etelis. Despite data paucity, the relationship between deep-sea snapper presence and their environments was sufficiently strong to predict their distribution across a large area of the Pacific Ocean. Our results therefore provide a strong baseline for designing monitoring programs that balance resource exploitation and

  13. Cross correlations of quantum key distribution based on single-photon sources

    International Nuclear Information System (INIS)

    Dong Shuangli; Wang Xiaobo; Zhang Guofeng; Sun Jianhu; Zhang Fang; Xiao Liantuan; Jia Suotang

    2009-01-01

    We theoretically analyze the second-order correlation function in a quantum key distribution system with real single-photon sources. Based on single-event photon statistics, the influence of the modification caused by an eavesdropper's intervention and the effects of background signals on the cross correlations between authorized partners are presented. On this basis, we have shown a secure range of correlation against the intercept-resend attacks.
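
    The second-order correlation at zero delay that this analysis rests on can be estimated directly from photon-counting statistics as g2(0) = <n(n-1)> / <n>^2. A short numerical check that a Poissonian (coherent) source gives g2(0) close to 1 while an ideal single-photon stream gives 0 — purely illustrative, not the paper's eavesdropping or background model:

```python
import numpy as np

def g2_zero(counts):
    """Estimate g2(0) = <n(n-1)> / <n>^2 from per-pulse photon counts."""
    counts = np.asarray(counts, dtype=float)
    return np.mean(counts * (counts - 1.0)) / np.mean(counts) ** 2

rng = np.random.default_rng(5)
coherent = rng.poisson(0.5, size=200_000)            # attenuated-laser-like pulses
single_photon = rng.binomial(1, 0.5, size=200_000)   # ideal sub-Poissonian source

print("coherent source:      g2(0) ~", round(g2_zero(coherent), 3))   # close to 1
print("single-photon source: g2(0) =", g2_zero(single_photon))        # exactly 0
```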

  14. Interpreting predictive maps of disease: highlighting the pitfalls of distribution models in epidemiology

    Directory of Open Access Journals (Sweden)

    Nicola A. Wardrop

    2014-11-01

    Full Text Available The application of spatial modelling to epidemiology has increased significantly over the past decade, delivering enhanced understanding of the environmental and climatic factors affecting disease distributions and providing spatially continuous representations of disease risk (predictive maps). These outputs provide significant information for disease control programmes, allowing spatial targeting and tailored interventions. However, several factors (e.g. sampling protocols or temporal disease spread) can influence predictive mapping outputs. This paper proposes a conceptual framework which defines several scenarios and their potential impact on resulting predictive outputs, using simulated data to provide an exemplar. It is vital that researchers recognise these scenarios and their influence on predictive models and their outputs, as a failure to do so may lead to inaccurate interpretation of predictive maps. As long as these considerations are kept in mind, predictive mapping will continue to contribute significantly to epidemiological research and disease control planning.

  15. Incorporation of lysosomal sequestration in the mechanistic model for prediction of tissue distribution of basic drugs.

    Science.gov (United States)

    Assmus, Frauke; Houston, J Brian; Galetin, Aleksandra

    2017-11-15

    The prediction of tissue-to-plasma water partition coefficients (Kpu) from in vitro and in silico data using the tissue-composition based model (Rodgers & Rowland, J Pharm Sci. 2005, 94(6):1237-48) is well established. However, distribution of basic drugs, in particular into lysosome-rich lung tissue, tends to be under-predicted by this approach. The aim of this study was to develop an extended mechanistic model for the prediction of Kpu which accounts for lysosomal sequestration and the contribution of different cell types in the tissue of interest. The extended model is based on compound-specific physicochemical properties and tissue composition data to describe drug ionization, distribution into tissue water and drug binding to neutral lipids, neutral phospholipids and acidic phospholipids in tissues, including lysosomes. Physiological data on the types of cells contributing to lung, kidney and liver, their lysosomal content and lysosomal pH were collated from the literature. The predictive power of the extended mechanistic model was evaluated using a dataset of 28 basic drugs (pKa ≥ 7.8; 17 β-blockers, 11 structurally diverse drugs) for which experimentally determined Kpu data in rat tissue have been reported. Accounting for the lysosomal sequestration in the extended mechanistic model improved the accuracy of Kpu predictions in lung compared to the original Rodgers model (56% of drugs within 2-fold or 88% within 3-fold of observed values). Reduction in the extent of Kpu under-prediction was also evident in liver and kidney. However, consideration of lysosomal sequestration increased the occurrence of over-predictions, yielding overall comparable model performances for kidney and liver, with 68% and 54% of Kpu values within 2-fold error, respectively. High lysosomal concentration ratios relative to cytosol (>1000-fold) were predicted for the drugs investigated; the extent differed depending on the lysosomal pH and concentration of acidic phospholipids among

  16. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators controlled by an online MPC-like algorithm, and a lower level of autonomous units. The approach is inspired by smart-grid electric power production and consumption systems, where the flexibility of a large number of power-producing and/or power-consuming units can be exploited in a smart-grid solution. The objective is to accommodate the load variation on the grid, arising on the one hand from varying consumption and on the other hand from natural variations in power production, e.g. from wind turbines. The approach presented is based on quadratic optimization and possesses the properties of low algorithmic complexity and of scalability. In particular, the proposed design methodology
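
    The quadratic-optimization core of such a scheme can be illustrated with a single-step toy allocation: spread a required load change over flexible units with different quadratic adjustment costs. This ignores the receding horizon, the aggregator layer and all operating constraints of the actual design, and every number below is hypothetical:

```python
import numpy as np

def allocate(total_change, quad_costs):
    """Minimize sum_i c_i * p_i^2 subject to sum_i p_i = total_change.
    The Lagrange conditions give p_i proportional to 1 / c_i."""
    inv = 1.0 / np.asarray(quad_costs, dtype=float)
    return total_change * inv / inv.sum()

quad_costs = np.array([1.0, 2.0, 4.0, 8.0])   # hypothetical per-unit adjustment costs
setpoints = allocate(10.0, quad_costs)         # accommodate a 10 MW load variation
print("per-unit setpoints [MW]:", np.round(setpoints, 3), " total:", setpoints.sum())
```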

  17. Studies on the supposition of liquid source for irradiation and its dose distribution, (1)

    International Nuclear Information System (INIS)

    Yoshimura, Seiji; Nishida, Tsuneo

    1977-01-01

    Recently, radioisotopes have been used and applied in many different fields, and applications of irradiation effects are expected to attract particular attention in the future. To date, sources for irradiation have been solid materials sealed into capsules of various kinds. Here we consider instead the use of a liquid radioisotope as the irradiation source, since it offers some advantages over a solid source, such as freedom in shaping the source and ease of adjusting the attenuation. In these experiments we measured the dose distribution produced by a columnar liquid source. We expect that such sources will be put to practical use. (auth.)

  18. Predicting cyclohexane/water distribution coefficients for the SAMPL5 challenge using MOSCED and the SMD solvation model

    Science.gov (United States)

    Diaz-Rodriguez, Sebastian; Bozada, Samantha M.; Phifer, Jeremy R.; Paluch, Andrew S.

    2016-11-01

    We present blind predictions using the solubility parameter based method MOSCED submitted for the SAMPL5 challenge on calculating cyclohexane/water distribution coefficients at 298 K. Reference data to parameterize MOSCED was generated with knowledge only of chemical structure by performing solvation free energy calculations using electronic structure calculations in the SMD continuum solvent. To maintain simplicity and use only a single method, we approximate the distribution coefficient with the partition coefficient of the neutral species. Over the final SAMPL5 set of 53 compounds, we achieved an average unsigned error of 2.2 ± 0.2 log units (ranking 15 out of 62 entries), the correlation coefficient (R) was 0.6 ± 0.1 (ranking 35), and 72 ± 6% of the predictions had the correct sign (ranking 30). While used here to predict cyclohexane/water distribution coefficients at 298 K, MOSCED is broadly applicable, allowing one to predict temperature dependent infinite dilution activity coefficients in any solvent for which parameters exist, and provides a means by which an excess Gibbs free energy model may be parameterized to predict composition dependent phase equilibrium.
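
    The neutral-species approximation described above can be written out explicitly. For a monoprotic acid with acidity constant $\mathrm{p}K_a$, the cyclohexane/water distribution coefficient at a given pH is related to the partition coefficient $P$ of the neutral form by the standard relation (stated here as background, not quoted from the paper):

$$
\log D_{\mathrm{pH}} \;=\; \log P \;-\; \log\!\left(1 + 10^{\,\mathrm{pH}-\mathrm{p}K_a}\right) \;\approx\; \log P \qquad \text{when } \mathrm{pH} \ll \mathrm{p}K_a,
$$

    with the exponent reversed, $10^{\,\mathrm{p}K_a-\mathrm{pH}}$, for a monoprotic base. Dropping the ionization correction entirely is the "partition coefficient of the neutral species" simplification adopted in the entry above.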

  19. D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things.

    Science.gov (United States)

    Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B

    2018-01-01

    Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce the power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose the Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of the classical DSC by employing the decoding delay concept, which enables the use of the maximum correlated portion of sensor samples during the event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of the correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications having a massive number of sensors, towards the realization of the Internet of Sensing Things (IoST).

  20. Analysis of the Source System of Nantun Group in Huhehu Depression of Hailar Basin

    Science.gov (United States)

    Li, Yue; Li, Junhui; Wang, Qi; Lv, Bingyang; Zhang, Guannan

    2017-10-01

    The Huhehu Depression will be a new exploration frontier of the Hailar Basin in the future, but it is currently at a low level of exploration. Little research has been done on the source system of the Nantun Group, so a fine depiction of the source system would be significant for reconstructing the sedimentary system, delineating the reservoir distribution and predicting favorable areas. In this paper, we comprehensively use several methods, such as palaeo-landform analysis, light and heavy mineral assemblages and seismic reflection characteristics, to study the source system of the Nantun Group in detail from different perspectives and at different levels. The results show that the source system of the Huhehu Depression derives from the Xilinbeir bulge to the east and the Bayan Mountain uplift to the west, which are surrounded by the basin. The slope belt is the main source, and the southern bulge is the secondary source. The distribution of the source system determines the distribution of the sedimentary system and the regularity of the sand body distribution.

  1. Multi-source analysis reveals latitudinal and altitudinal shifts in range of Ixodes ricinus at its northern distribution limit

    Directory of Open Access Journals (Sweden)

    Kristoffersen Anja B

    2011-05-01

    Full Text Available Abstract Background There is increasing evidence for a latitudinal and altitudinal shift in the distribution range of Ixodes ricinus. The reported incidence of tick-borne disease in humans is on the rise in many European countries and has raised political concern and attracted media attention. It is disputed which factors are responsible for these trends, though many ascribe shifts in distribution range to climate changes. Any possible climate effect would be most easily noticeable close to the tick's geographical distribution limits. In Norway - being the northern limit of this species in Europe - no documentation of changes in range has been published. The objectives of this study were to describe the distribution of I. ricinus in Norway and to evaluate if any range shifts have occurred relative to historical descriptions. Methods Multiple data sources - such as tick-sighting reports from veterinarians, hunters, and the general public - and surveillance of human and animal tick-borne diseases were compared to describe the present distribution of I. ricinus in Norway. Correlation between data sources and visual comparison of maps revealed spatial consistency. In order to identify the main spatial pattern of tick abundance, a principal component analysis (PCA) was used to obtain a weighted mean of four data sources. The weighted mean explained 67% of the variation of the data sources covering Norway's 430 municipalities and was used to depict the present distribution of I. ricinus. To evaluate if any geographical range shift has occurred in recent decades, the present distribution was compared to historical data from 1943 and 1983. Results Tick-borne disease and/or observations of I. ricinus was reported in municipalities up to an altitude of 583 metres above sea level (MASL) and is now present in coastal municipalities north to approximately 69°N. Conclusion I. ricinus is currently found further north and at higher altitudes than described in

  2. Color Shift Failure Prediction for Phosphor-Converted White LEDs by Modeling Features of Spectral Power Distribution with a Nonlinear Filter Approach

    Directory of Open Access Journals (Sweden)

    Jiajie Fan

    2017-07-01

    Full Text Available With the expanding application of light-emitting diodes (LEDs), the color quality of white LEDs has attracted much attention in several color-sensitive application fields, such as museum lighting, healthcare lighting and displays. Reliability concerns for white LEDs are changing from the luminous efficiency to color quality. However, most of the current available research on the reliability of LEDs is still focused on luminous flux depreciation rather than color shift failure. The spectral power distribution (SPD), defined as the radiant power distribution emitted by a light source at a range of visible wavelength, contains the most fundamental luminescence mechanisms of a light source. SPD is used as the quantitative inference of an LED's optical characteristics, including color coordinates that are widely used to represent the color shift process. Thus, to model the color shift failure of white LEDs during aging, this paper first extracts the features of an SPD, representing the characteristics of blue LED chips and phosphors, by multi-peak curve-fitting and modeling them with statistical functions. Then, because the shift processes of extracted features in aged LEDs are always nonlinear, a nonlinear state-space model is then developed to predict the color shift failure time within a self-adaptive particle filter framework. The results show that: (1) the failure mechanisms of LEDs can be identified by analyzing the extracted features of SPD with statistical curve-fitting and (2) the developed method can dynamically and accurately predict the color coordinates, correlated color temperatures (CCTs), and color rendering indexes (CRIs) of phosphor-converted (pc) white LEDs, and also can estimate the residual color life.

  3. Color Shift Failure Prediction for Phosphor-Converted White LEDs by Modeling Features of Spectral Power Distribution with a Nonlinear Filter Approach.

    Science.gov (United States)

    Fan, Jiajie; Mohamed, Moumouni Guero; Qian, Cheng; Fan, Xuejun; Zhang, Guoqi; Pecht, Michael

    2017-07-18

    With the expanding application of light-emitting diodes (LEDs), the color quality of white LEDs has attracted much attention in several color-sensitive application fields, such as museum lighting, healthcare lighting and displays. Reliability concerns for white LEDs are changing from the luminous efficiency to color quality. However, most of the current available research on the reliability of LEDs is still focused on luminous flux depreciation rather than color shift failure. The spectral power distribution (SPD), defined as the radiant power distribution emitted by a light source at a range of visible wavelength, contains the most fundamental luminescence mechanisms of a light source. SPD is used as the quantitative inference of an LED's optical characteristics, including color coordinates that are widely used to represent the color shift process. Thus, to model the color shift failure of white LEDs during aging, this paper first extracts the features of an SPD, representing the characteristics of blue LED chips and phosphors, by multi-peak curve-fitting and modeling them with statistical functions. Then, because the shift processes of extracted features in aged LEDs are always nonlinear, a nonlinear state-space model is then developed to predict the color shift failure time within a self-adaptive particle filter framework. The results show that: (1) the failure mechanisms of LEDs can be identified by analyzing the extracted features of SPD with statistical curve-fitting and (2) the developed method can dynamically and accurately predict the color coordinates, correlated color temperatures (CCTs), and color rendering indexes (CRIs) of phosphor-converted (pc)-white LEDs, and also can estimate the residual color life.

  4. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA

    Science.gov (United States)

    Cosandier-Rimélé, D.; Ramantani, G.; Zentner, J.; Schulze-Bonhage, A.; Dümpelmann, M.

    2017-10-01

    Objective. Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. Approach. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. Main results. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. Significance. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  5. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA.

    Science.gov (United States)

    Cosandier-Rimélé, D; Ramantani, G; Zentner, J; Schulze-Bonhage, A; Dümpelmann, M

    2017-10-01

    Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  6. Enhancing the performance of the measurement-device-independent quantum key distribution with heralded pair-coherent sources

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Feng; Zhang, Chun-Hui; Liu, Ai-Ping [Institute of Signal Processing Transmission, Nanjing University of Posts and Telecommunications, Nanjing 210003 (China); Key Lab of Broadband Wireless Communication and Sensor Network Technology, Nanjing University of Posts and Telecommunications, Ministry of Education, Nanjing 210003 (China); Wang, Qin, E-mail: qinw@njupt.edu.cn [Institute of Signal Processing Transmission, Nanjing University of Posts and Telecommunications, Nanjing 210003 (China); Key Lab of Broadband Wireless Communication and Sensor Network Technology, Nanjing University of Posts and Telecommunications, Ministry of Education, Nanjing 210003 (China); Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026 (China)

    2016-04-01

    In this paper, we propose to implement the heralded pair-coherent source into the measurement-device-independent quantum key distribution. By comparing its performance with other existing schemes, we demonstrate that our new scheme can overcome many shortcomings existing in current schemes, and show excellent behavior in the quantum key distribution. Moreover, even when taking the statistical fluctuation into account, we can still obtain quite high key generation rate at very long transmission distance by using our new scheme. - Highlights: • Implement the heralded pair-coherent source into the measurement-device-independent quantum key distribution. • Overcome many shortcomings existing in current schemes and show excellent behavior. • Obtain quite high key generation rate even when taking statistical fluctuation into account.

  7. A stationary computed tomography system with cylindrically distributed sources and detectors.

    Science.gov (United States)

    Chen, Yi; Xi, Yan; Zhao, Jun

    2014-01-01

    The temporal resolution of current computed tomography (CT) systems is limited by the rotation speed of their gantries. A helical interlaced source detector array (HISDA) CT, which is a stationary CT system with distributed X-ray sources and detectors, is presented in this paper to overcome the aforementioned limitation and achieve high temporal resolution. Projection data can be obtained from different angles in a short time and do not require source, detector, or object motion. Axial coverage speed is increased further by employing a parallel scan scheme. Interpolation is employed to approximate the missing data in the gaps, and then a Katsevich-type reconstruction algorithm is applied to enable an approximate reconstruction. The proposed algorithm suppressed the cone beam and gap-induced artifacts in HISDA CT. The results also suggest that gap-induced artifacts can be reduced by employing a large helical pitch for a fixed gap height. HISDA CT is a promising 3D dynamic imaging architecture given its good temporal resolution and stationary advantage.

  8. Linking macroecology and community ecology: refining predictions of species distributions using biotic interaction networks.

    Science.gov (United States)

    Staniczenko, Phillip P A; Sivasubramaniam, Prabu; Suttle, K Blake; Pearson, Richard G

    2017-06-01

    Macroecological models for predicting species distributions usually only include abiotic environmental conditions as explanatory variables, despite knowledge from community ecology that all species are linked to other species through biotic interactions. This disconnect is largely due to the different spatial scales considered by the two sub-disciplines: macroecologists study patterns at large extents and coarse resolutions, while community ecologists focus on small extents and fine resolutions. A general framework for including biotic interactions in macroecological models would help bridge this divide, as it would allow for rigorous testing of the role that biotic interactions play in determining species ranges. Here, we present an approach that combines species distribution models with Bayesian networks, which enables the direct and indirect effects of biotic interactions to be modelled as propagating conditional dependencies among species' presences. We show that including biotic interactions in distribution models for species from a California grassland community results in better range predictions across the western USA. This new approach will be important for improving estimates of species distributions and their dynamics under environmental change. © 2017 The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.

  9. Combining disparate data sources for improved poverty prediction and mapping.

    Science.gov (United States)

    Pokhriyal, Neeti; Jacques, Damien Christophe

    2017-11-14

    More than 330 million people are still living in extreme poverty in Africa. Timely, accurate, and spatially fine-grained baseline data are essential to determining policy in favor of reducing poverty. The potential of "Big Data" to estimate socioeconomic factors in Africa has been proven. However, most current studies are limited to using a single data source. We propose a computational framework to accurately predict the Global Multidimensional Poverty Index (MPI) at the finest spatial granularity and coverage of 552 communes in Senegal using environmental data (related to food security, economic activity, and accessibility to facilities) and call data records (capturing individualistic, spatial, and temporal aspects of people). Our framework is based on Gaussian Process regression, a Bayesian learning technique, providing uncertainty associated with predictions. We perform model selection using elastic net regularization to prevent overfitting. Our results empirically prove the superior accuracy when using disparate data (Pearson correlation of 0.91). Our approach is used to accurately predict important dimensions of poverty: health, education, and standard of living (Pearson correlation of 0.84-0.86). All predictions are validated using deprivations calculated from census. Our approach can be used to generate poverty maps frequently, and its diagnostic nature is likely to assist policy makers in designing better interventions for poverty eradication. Copyright © 2017 the Author(s). Published by PNAS.
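
    The core modelling ingredients described above — elastic-net model selection followed by Gaussian Process regression that returns an uncertainty with every prediction — can be sketched in a few lines with scikit-learn; the covariates and "MPI" values below are synthetic stand-ins, not the Senegal data:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 10))                                    # covariates per commune
y = 0.6 * X[:, 0] - 0.3 * X[:, 3] + 0.1 * rng.normal(size=300)    # synthetic "MPI"

# Model selection: elastic net keeps only informative covariates (guards against overfitting).
enet = ElasticNetCV(cv=5).fit(X, y)
keep = np.flatnonzero(np.abs(enet.coef_) > 1e-3)

# Gaussian Process regression gives a mean prediction plus an uncertainty per commune.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True, random_state=0)
gp.fit(X[:250][:, keep], y[:250])
mean, std = gp.predict(X[250:][:, keep], return_std=True)
print("first 3 predictions:", np.round(mean[:3], 3), "+/-", np.round(std[:3], 3))
```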

  10. Impacts of Spatio-Variability of Source Morphology on Field-Scale Predictions of Subsurface Contaminant Transport

    National Research Council Canada - National Science Library

    Hatfield, Kirk

    1998-01-01

    ... (organic immiscible liquids distribution and composition) and aquifer properties on predicting solute transport in saturated groundwater systems contaminated with residual Organic Immiscible Liquids (OIL's...

  11. Research on cross-project software defect prediction based on transfer learning

    Science.gov (United States)

    Chen, Ya; Ding, Xiaoming

    2018-04-01

    To address the two challenges in cross-project software defect prediction - the difference in data distribution between the source-project and target-project datasets and the class imbalance in the data - we propose a cross-project software defect prediction method based on transfer learning, named NTrA. First, the class imbalance of the source-project data is resolved using the Augmented Neighborhood Cleaning Algorithm. Second, the data gravity method is used to assign different weights on the basis of the attribute similarity between the source-project and target-project data. Finally, a defect prediction model is constructed using the TrAdaBoost algorithm. Experiments were conducted using data from NASA and SOFTLAB, respectively, taken from the published PROMISE repository. The results show that the method achieves good values of recall and F-measure and good prediction results.
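
    The weighting idea sketched above — giving source-project instances more influence the more similar they are to the target-project data — can be illustrated with a plain similarity-based sample weighting passed to any classifier. This is a loose illustration of the principle, not an implementation of NTrA, the data gravity method or TrAdaBoost, and the data are synthetic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, recall_score
from sklearn.metrics.pairwise import euclidean_distances

rng = np.random.default_rng(7)
# Synthetic defect data: source project and a target project with a shifted distribution.
Xs = rng.normal(0.0, 1.0, size=(400, 5))
ys = (Xs[:, 0] + Xs[:, 1] > 1.2).astype(int)
Xt = rng.normal(0.3, 1.1, size=(200, 5))
yt = (Xt[:, 0] + Xt[:, 1] > 1.2).astype(int)

# Similarity-based weights: source instances close to the target data weigh more.
mean_dist = euclidean_distances(Xs, Xt).mean(axis=1)
weights = 1.0 / (1.0 + mean_dist**2)

clf = LogisticRegression(max_iter=1000, class_weight="balanced")   # crude imbalance handling
clf.fit(Xs, ys, sample_weight=weights)        # train on weighted source-project data
pred = clf.predict(Xt)                         # predict defects in the target project
print("recall:", round(recall_score(yt, pred), 3), " F-measure:", round(f1_score(yt, pred), 3))
```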

  12. The effects of sampling bias and model complexity on the predictive performance of MaxEnt species distribution models.

    Science.gov (United States)

    Syfert, Mindy M; Smith, Matthew J; Coomes, David A

    2013-01-01

    Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely-distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness and essentialness of sampling bias correction within MaxEnt.

  13. Responsiveness of performance and morphological traits to experimental submergence predicts field distribution pattern of wetland plants

    NARCIS (Netherlands)

    Luo, Fang-Li; Huang, Lin; Lei, Ting; Xue, Wei; Li, Hong-Li; Yu, Fei-Hai; Cornelissen, J.H.C.

    2016-01-01

    Question: Plant trait mean values and trait responsiveness to different environmental regimes are both important determinants of plant field distribution, but the degree to which plant trait means vs trait responsiveness predict plant distribution has rarely been compared quantitatively. Because

  14. Source localization of rhythmic ictal EEG activity: a study of diagnostic accuracy following STARD criteria.

    Science.gov (United States)

    Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana; Åkeson, Per; Pedersen, Birthe; Pinborg, Lars H; Ziebell, Morten; Jespersen, Bo; Fuglsang-Frederiksen, Anders

    2013-10-01

    Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model. Source localization of rhythmic ictal scalp EEG activity was performed in 42 consecutive cases fulfilling inclusion criteria. The study was designed according to recommendations for studies on diagnostic accuracy (STARD). The initial ictal EEG signals were selected using a standardized method, based on frequency analysis and voltage distribution of the ictal activity. A distributed source model-local autoregressive average (LAURA)-was used for the source localization. Sensitivity, specificity, and measurement of agreement (kappa) were determined based on the reference standard-the consensus conclusion of the multidisciplinary epilepsy surgery team. Predictive values were calculated from the surgical outcome of the operated patients. To estimate the clinical value of the ictal source analysis, we compared the likelihood ratios of concordant and discordant results. Source localization was performed blinded to the clinical data, and before the surgical decision. Reference standard was available for 33 patients. The ictal source localization had a sensitivity of 70% and a specificity of 76%. The mean measurement of agreement (kappa) was 0.61, corresponding to substantial agreement (95% confidence interval (CI) 0.38-0.84). Twenty patients underwent resective surgery. The positive predictive value (PPV) for seizure freedom was 92% and the negative predictive value (NPV) was 43%. The likelihood ratio was nine times higher for the concordant results, as compared with the discordant ones. Source localization of rhythmic ictal activity using a distributed source model (LAURA) for the ictal EEG signals selected with a standardized method
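
The accuracy figures reported above follow from a standard 2×2 comparison of the localization result against the reference standard; a short sketch of those textbook definitions (sensitivity, specificity, Cohen's kappa, predictive values, and the positive likelihood ratio), using made-up counts rather than the study's data:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Standard accuracy measures from a 2x2 table of test result vs. reference standard."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                         # positive predictive value
    npv = tn / (tn + fn)                         # negative predictive value
    lr_pos = sensitivity / (1 - specificity)     # positive likelihood ratio
    n = tp + fp + fn + tn
    po = (tp + tn) / n                           # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2   # chance agreement
    kappa = (po - pe) / (1 - pe)                 # Cohen's kappa
    return dict(sensitivity=sensitivity, specificity=specificity,
                ppv=ppv, npv=npv, lr_pos=lr_pos, kappa=kappa)

# Illustrative counts only (not the study's data).
print(diagnostic_accuracy(tp=14, fp=4, fn=6, tn=9))
```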

  15. Geometric discretization of the multidimensional Dirac delta distribution - Application to the Poisson equation with singular source terms

    Science.gov (United States)

    Egan, Raphael; Gibou, Frédéric

    2017-10-01

    We present a discretization method for the multidimensional Dirac distribution. We show its applicability in the context of integration problems, and for discretizing Dirac-distributed source terms in Poisson equations with constant or variable diffusion coefficients. The discretization is cell-based and can thus be applied in a straightforward fashion to Quadtree/Octree grids. The method produces second-order accurate results for integration. Superlinear convergence is observed when it is used to model Dirac-distributed source terms in Poisson equations: the observed order of convergence is 2 or slightly smaller. The method is consistent with the discretization of Dirac delta distribution for codimension one surfaces presented in [1,2]. We present Quadtree/Octree construction procedures to preserve convergence and present various numerical examples, including multi-scale problems that are intractable with uniform grids.
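
The cell-based Quadtree/Octree scheme of the paper is not reproduced here, but the basic notion of discretizing a Dirac-distributed source can be illustrated in one dimension with a linear-hat regularization whose weights sum to 1/h, so that the discrete integral of f(x) delta(x - x0) recovers f(x0) to second order; everything below is a generic textbook-style sketch, not the authors' method.

```python
import numpy as np

def hat_delta(x, x0, h):
    """Linear-hat regularization of delta(x - x0) on a uniform grid of spacing h.

    Only the two nodes bracketing x0 get nonzero weight; the weights sum to 1/h,
    so sum_i w_i * f(x_i) * h is the linear interpolant of f at x0.
    """
    return np.maximum(0.0, 1.0 - np.abs(x - x0) / h) / h

h = 0.01
x = np.arange(0.0, 1.0 + h, h)
x0 = 0.437                                   # source location, deliberately off-grid
f = np.sin(np.pi * x)

approx = np.sum(hat_delta(x, x0, h) * f) * h
print(approx, np.sin(np.pi * x0))            # discrete integral vs. exact f(x0)
```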

  16. Predicting the potential distribution of the amphibian pathogen Batrachochytrium dendrobatidis in East and Southeast Asia.

    Science.gov (United States)

    Moriguchi, Sachiko; Tominaga, Atsushi; Irwin, Kelly J; Freake, Michael J; Suzuki, Kazutaka; Goka, Koichi

    2015-04-08

    Batrachochytrium dendrobatidis (Bd) is the pathogen responsible for chytridiomycosis, a disease that is associated with a worldwide amphibian population decline. In this study, we predicted the potential distribution of Bd in East and Southeast Asia based on limited occurrence data. Our goal was to design an effective survey area where efforts to detect the pathogen can be focused. We generated ecological niche models using the maximum-entropy approach, with alleviation of multicollinearity and spatial autocorrelation. We applied eigenvector-based spatial filters as independent variables, in addition to environmental variables, to resolve spatial autocorrelation, and compared the model's accuracy and the degree of spatial autocorrelation with those of a model estimated using only environmental variables. We were able to identify areas of high suitability for Bd with accuracy. Among the environmental variables, factors related to temperature and precipitation were more effective in predicting the potential distribution of Bd than factors related to land use and cover type. Our study successfully predicted the potential distribution of Bd in East and Southeast Asia. This information should now be used to prioritize survey areas and generate a surveillance program to detect the pathogen.

  17. The dislocation distribution function near a crack tip generated by external sources

    International Nuclear Information System (INIS)

    Lung, C.W.; Deng, K.M.

    1988-06-01

The dislocation distribution function near a crack tip generated by external sources is calculated. It is similar in shape to the curves calculated for the crack-tip emission case, but the quantitative difference is quite large. The image forces enlarge the negative dislocation zone but do not change the form of the curve. (author). 10 refs, 3 figs

  18. A review on the sources and spatial-temporal distributions of Pb in Jiaozhou Bay

    Science.gov (United States)

    Yang, Dongfang; Zhang, Jie; Wang, Ming; Zhu, Sixi; Wu, Yunjie

    2017-12-01

This paper provides a review of the sources, spatial distribution and temporal variations of Pb in Jiaozhou Bay, based on investigations of Pb in surface and bottom waters in different seasons during 1979-1983. The strengths of the Pb sources in Jiaozhou Bay showed increasing trends, and the pollution level of Pb in the bay was slight or moderate in the early stage of reform and opening-up. Pb contents in the marine bay were mainly determined by the strength and frequency of Pb inputs from human activities, and Pb could move from high-content areas to low-content areas in the ocean interior. Surface waters were polluted directly by human activities, whereas bottom waters were polluted through vertical transport in the water column. The spatial distribution of Pb in the waters developed in three steps: 1) Pb was transferred to surface waters in the bay, 2) Pb was transferred within the surface waters, and 3) Pb was transferred to and accumulated in bottom waters.

  19. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks.

    Science.gov (United States)

    Ma, Junjie; Meng, Fansheng; Zhou, Yuexi; Wang, Yeyao; Shi, Ping

    2018-02-16

    Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths.
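
The two nested PSO procedures of Dual-PSO are tied to the UV-visible spectra and the plume model used by the authors; a bare-bones particle swarm searching for a source position from synthetic concentration readings gives the flavour of the outer loop. The plume model, parameters, and objective below are all illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

true_source = np.array([60.0, 35.0])
sensors = rng.uniform(0, 100, size=(8, 2))        # node positions in a 100 m x 100 m area

def concentration(src, pts):
    """Toy isotropic plume: concentration decays with squared distance from the source."""
    return 1.0 / (1.0 + np.linalg.norm(pts - src, axis=1) ** 2)

readings = concentration(true_source, sensors)    # noiseless synthetic measurements

def objective(candidate):
    """Mismatch between predicted and measured concentrations at the nodes."""
    return np.sum((concentration(candidate, sensors) - readings) ** 2)

# Standard global-best PSO.
n_particles, n_iter = 30, 200
pos = rng.uniform(0, 100, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 100)
    val = np.array([objective(p) for p in pos])
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(gbest)   # should land close to true_source
```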

  20. Model Predictive Control for Distributed Microgrid Battery Energy Storage Systems

    DEFF Research Database (Denmark)

    Morstyn, Thomas; Hredzak, Branislav; Aguilera, Ricardo P.

    2018-01-01

This brief proposes a new convex model predictive control (MPC) strategy for dynamic optimal power flow between battery energy storage (ES) systems distributed in an ac microgrid. The proposed control strategy uses a new problem formulation, based on a linear d–q reference frame voltage … and converter current constraints to be addressed. In addition, nonlinear variations in the charge and discharge efficiencies of lithium ion batteries are analyzed and included in the control strategy. Real-time digital simulations were carried out for an islanded microgrid based on the IEEE 13 bus prototypical feeder, with distributed battery ES systems and intermittent photovoltaic generation. It is shown that the proposed control strategy approaches the performance of a strategy based on nonconvex optimization, while reducing the required computation time by a factor of 1000, making it suitable for a real…
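
As a rough illustration of why a convex formulation matters here, the sketch below poses a single-battery dispatch over a short horizon as a quadratic program; the dynamics, limits, cost, and data are invented for illustration and do not reproduce the brief's d–q power-flow model (the cvxpy package is assumed to be available).

```python
import cvxpy as cp
import numpy as np

T, dt = 24, 1.0                                       # horizon steps and step length (h)
demand = 2.0 + np.sin(np.linspace(0, 2 * np.pi, T))   # forecast net load (kW), illustrative

p = cp.Variable(T)                                    # battery discharge power (kW); negative = charging
soc = cp.Variable(T + 1)                              # stored energy (kWh)

constraints = [soc[0] == 5.0,                         # initial state of charge
               soc[1:] == soc[:-1] - p * dt,          # simple loss-free energy balance
               soc >= 1.0, soc <= 10.0,               # energy limits
               p >= -3.0, p <= 3.0]                   # converter power limits

# Serve as much of the net load as possible while lightly penalizing battery effort.
cost = cp.sum_squares(demand - p) + 0.01 * cp.sum_squares(p)
problem = cp.Problem(cp.Minimize(cost), constraints)
problem.solve()

print(p.value[:5])   # in receding-horizon MPC only the first decision would be applied
```

Because the problem is a convex quadratic program, each receding-horizon step solves quickly and reliably, which is the property the brief exploits at a much larger scale.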

  1. Ion-source dependence of the distributions of internuclear separations in 2-MeV HeH+ beams

    International Nuclear Information System (INIS)

    Kanter, E.P.; Gemmell, D.S.; Plesser, I.; Vager, Z.

    1981-01-01

Experiments involving the use of MeV molecular-ion beams have yielded new information on atomic collisions in solids. A central part of the analyses of such experiments is a knowledge of the distribution of internuclear separations contained in the incident beam. In an attempt to determine how these distributions depend on ion-source gas conditions, we have studied foil-induced dissociations of H2+, H3+, HeH+, and OH2+ ions. Although changes of ion-source gas compositions and pressure were found to have no measurable influence on the vibrational state populations of the beams reaching our target, for HeH+ we found that beams produced in our rf source were vibrationally hotter than beams produced in a duoplasmatron

  2. Free-Space Quantum Key Distribution with a High Generation Rate KTP Waveguide Photon-Pair Source

    Science.gov (United States)

Wilson, J.; Chaffee, D.; Wilson, N.; Lekki, J.; Tokars, R.; Pouch, J.; Lind, A.; Cavin, J.; Helmick, S.; Roberts, T.

    2016-01-01

NASA awarded Small Business Innovative Research (SBIR) contracts to AdvR, Inc to develop a high generation rate source of entangled photons that could be used to explore quantum key distribution (QKD) protocols. The final product, a photon pair source using a dual-element periodically-poled potassium titanyl phosphate (KTP) waveguide, was delivered to NASA Glenn Research Center in June of 2015. This paper describes the source, its characterization, and its performance in a B92 (Bennett, 1992) protocol QKD experiment.

  3. Study (Prediction of Main Pipes Break Rates in Water Distribution Systems Using Intelligent and Regression Methods

    Directory of Open Access Journals (Sweden)

    Massoud Tabesh

    2011-07-01

Full Text Available Optimum operation of water distribution networks is one of the priorities of sustainable development of water resources, considering the need to increase efficiency and decrease water losses. Key subjects in the optimum operational management of water distribution systems are the preparation of rehabilitation and replacement schemes, the prediction of pipe break rates, and the evaluation of network reliability. Several approaches to predicting pipe failure rates have been presented in recent years, each of which requires particular data sets. Deterministic age-based and multivariable models and stochastic group modeling are examples of solutions that relate pipe break rates to parameters such as age, material and diameter. In this paper, besides the above parameters, additional factors such as pipe depth and hydraulic pressure are considered as well. Pipe burst rates are then predicted using a multivariable regression method, intelligent approaches (artificial neural network and neuro-fuzzy models), and the evolutionary polynomial regression (EPR) method. To evaluate the results of the different approaches, a case study is carried out on part of the Mashhad water distribution network. The results show the capability and advantages of the ANN and EPR methods in predicting pipe break rates, in comparison with the neuro-fuzzy and multivariable regression methods.
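
A compact sketch of the multivariable-regression baseline mentioned above; the predictor set (age, diameter, depth, hydraulic pressure) follows the paper, but the data and the linear form are purely synthetic assumptions.

```python
import numpy as np

# Columns: pipe age (yr), diameter (mm), installation depth (m), hydraulic pressure (m head).
# Synthetic records for illustration only.
rng = np.random.default_rng(7)
X = np.column_stack([rng.uniform(5, 50, 200),                  # age
                     rng.choice([100, 150, 200, 300], 200),    # diameter
                     rng.uniform(0.8, 2.5, 200),               # depth
                     rng.uniform(20, 60, 200)])                # pressure
true_coef = np.array([0.02, -0.001, -0.05, 0.01])
y = 0.3 + X @ true_coef + rng.normal(0, 0.05, 200)             # break rate (breaks/km/yr)

A = np.column_stack([np.ones(len(X)), X])                      # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)                   # ordinary least squares fit
pred = A @ coef
print(coef)                                 # intercept plus one coefficient per predictor
print(np.sqrt(np.mean((pred - y) ** 2)))    # in-sample RMSE of the fitted break rates
```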

  4. Sources, occurrence and predicted aquatic impact of legacy and contemporary pesticides in streams.

    Science.gov (United States)

    McKnight, Ursula S; Rasmussen, Jes J; Kronvang, Brian; Binning, Philip J; Bjerg, Poul L

    2015-05-01

    We couple current findings of pesticides in surface and groundwater to the history of pesticide usage, focusing on the potential contribution of legacy pesticides to the predicted ecotoxicological impact on benthic macroinvertebrates in headwater streams. Results suggest that groundwater, in addition to precipitation and surface runoff, is an important source of pesticides (particularly legacy herbicides) entering surface water. In addition to current-use active ingredients, legacy pesticides, metabolites and impurities are important for explaining the estimated total toxicity attributable to pesticides. Sediment-bound insecticides were identified as the primary source for predicted ecotoxicity. Our results support recent studies indicating that highly sorbing chemicals contribute and even drive impacts on aquatic ecosystems. They further indicate that groundwater contaminated by legacy and contemporary pesticides may impact adjoining streams. Stream observations of soluble and sediment-bound pesticides are valuable for understanding the long-term fate of pesticides in aquifers, and should be included in stream monitoring programs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Distribution and sources of particulate organic matter in the Indian monsoonal estuaries during monsoon

    Digital Repository Service at National Institute of Oceanography (India)

    Sarma, V.V.S.S.; Krishna, M.S.; Prasad, V.R.; Kumar, B.S.K.; Naidu, S.A.; Rao, G.D.; Viswanadham, R.; Sridevi, T.; Kumar, P.P.; Reddy, N.P.C.

    The distribution and sources of particulate organic carbon (POC) and nitrogen (PN) in 27 Indian estuaries were examined during the monsoon using the content and isotopic composition of carbon and nitrogen. Higher phytoplankton biomass was noticed...

  6. Future prospects for ECR ion sources with improved charge state distributions

    International Nuclear Information System (INIS)

    Alton, G.D.

    1995-01-01

Despite the steady advance in the technology of the ECR ion source, present art forms have not yet reached their full potential in terms of charge state and intensity within a particular charge state, in part because of the narrow-bandwidth, single-frequency microwave radiation used to heat the plasma electrons. This article identifies fundamentally important methods which may enhance the performance of ECR ion sources through the use of: (1) a tailored magnetic field configuration (spatial domain) in combination with single-frequency microwave radiation to create a large, uniformly distributed ECR "volume", or (2) broadband frequency-domain techniques (variable-frequency, broad-band frequency, or multiple-discrete-frequency microwave radiation), derived from standard TWT technology, to transform the resonant plasma "surfaces" of traditional ECR ion sources into resonant plasma "volumes". The creation of a large ECR plasma "volume" permits coupling of more power into the plasma, resulting in the heating of a much larger electron population to higher energies, thereby producing higher charge state ions and much higher intensities within a particular charge state than possible in present forms of the source. The ECR ion source concepts described in this article offer exciting opportunities to significantly advance the state of the art of ECR technology and, as a consequence, open new opportunities in fundamental and applied research and for a variety of industrial applications

  7. Performance prediction of a synchronization link for distributed aerospace wireless systems.

    Science.gov (United States)

    Wang, Wen-Qin; Shao, Huaizong

    2013-01-01

For reasons of stealth and other operational advantages, distributed aerospace wireless systems have received much attention in recent years. In a distributed aerospace wireless system, since the transmitter and receiver are placed on separate platforms that use independent master oscillators, there is no cancellation of low-frequency phase noise as in the monostatic case. Thus, highly accurate time and frequency synchronization techniques are required for distributed wireless systems. The use of a dedicated synchronization link to quantify and compensate oscillator frequency instability is investigated in this paper. With mathematical statistical models of phase noise, closed-form analytic expressions for the synchronization link performance are derived. The possible error contributions, including oscillator, phase-locked loop, and receiver noise, are quantified. The link synchronization performance is predicted by utilizing the knowledge of the statistical models, system error contributions, and sampling considerations. Simulation results show that effective synchronization error compensation can be achieved by using this dedicated synchronization link.

  8. Prediction of thermal coagulation from the instantaneous strain distribution induced by high-intensity focused ultrasound

    Science.gov (United States)

    Iwasaki, Ryosuke; Takagi, Ryo; Tomiyasu, Kentaro; Yoshizawa, Shin; Umemura, Shin-ichiro

    2017-07-01

Accurate targeting of the ultrasound beam and advance prediction of thermal lesion formation are requirements for monitoring high-intensity focused ultrasound (HIFU) treatment with safety and reproducibility. To visualize the HIFU focal zone, we utilized an acoustic radiation force impulse (ARFI) imaging-based method. After displacements were induced inside the tissue with a pulsed HIFU exposure (the push pulse), the distribution of axial displacements started to expand and move. To acquire RF data during and immediately after the HIFU push pulse exposure and thereby improve prediction accuracy, we attempted methods using extrapolation estimation and HIFU noise elimination. The distributions traced back in the time domain from the end of the push pulse exposure are in good agreement with tissue coagulation at the center. The results suggest that the proposed focal zone visualization, employing a pulsed HIFU push pulse together with the high-speed ARFI imaging method, is useful for predicting thermal coagulation in advance.

  9. Emphysema Distribution and Diffusion Capacity Predict Emphysema Progression in Human Immunodeficiency Virus Infection

    Science.gov (United States)

    Leung, Janice M; Malagoli, Andrea; Santoro, Antonella; Besutti, Giulia; Ligabue, Guido; Scaglioni, Riccardo; Dai, Darlene; Hague, Cameron; Leipsic, Jonathon; Sin, Don D.; Man, SF Paul; Guaraldi, Giovanni

    2016-01-01

    Background Chronic obstructive pulmonary disease (COPD) and emphysema are common amongst patients with human immunodeficiency virus (HIV). We sought to determine the clinical factors that are associated with emphysema progression in HIV. Methods 345 HIV-infected patients enrolled in an outpatient HIV metabolic clinic with ≥2 chest computed tomography scans made up the study cohort. Images were qualitatively scored for emphysema based on percentage involvement of the lung. Emphysema progression was defined as any increase in emphysema score over the study period. Univariate analyses of clinical, respiratory, and laboratory data, as well as multivariable logistic regression models, were performed to determine clinical features significantly associated with emphysema progression. Results 17.4% of the cohort were emphysema progressors. Emphysema progression was most strongly associated with having a low baseline diffusion capacity of carbon monoxide (DLCO) and having combination centrilobular and paraseptal emphysema distribution. In adjusted models, the odds ratio (OR) for emphysema progression for every 10% increase in DLCO percent predicted was 0.58 (95% confidence interval [CI] 0.41–0.81). The equivalent OR (95% CI) for centrilobular and paraseptal emphysema distribution was 10.60 (2.93–48.98). Together, these variables had an area under the curve (AUC) statistic of 0.85 for predicting emphysema progression. This was an improvement over the performance of spirometry (forced expiratory volume in 1 second to forced vital capacity ratio), which predicted emphysema progression with an AUC of only 0.65. Conclusion Combined paraseptal and centrilobular emphysema distribution and low DLCO could identify HIV patients who may experience emphysema progression. PMID:27902753

  10. Open-source chemogenomic data-driven algorithms for predicting drug-target interactions.

    Science.gov (United States)

    Hao, Ming; Bryant, Stephen H; Wang, Yanli

    2018-02-06

While novel technologies such as high-throughput screening have advanced together with significant investment by pharmaceutical companies during the past decades, the success rate of drug development has not improved, prompting researchers to look for new strategies of drug discovery. Drug repositioning is a potential approach to solve this dilemma. However, experimental identification and validation of potential drug targets encoded by the human genome is both costly and time-consuming. Therefore, effective computational approaches have been proposed to facilitate drug repositioning, and they have proved to be successful in drug discovery. Undoubtedly, the availability of open-accessible data from basic chemical biology research and the success of human genome sequencing are crucial to develop effective in silico drug repositioning methods allowing the identification of potential targets for existing drugs. In this work, we review several chemogenomic data-driven computational algorithms with source codes publicly accessible for predicting drug-target interactions (DTIs). We organize these algorithms by model properties and model evolutionary relationships. We re-implemented five representative algorithms in the R programming language, and compared these algorithms by means of mean percentile ranking, a new recall-based evaluation metric in the DTI prediction research field. We anticipate that this review will be objective and helpful to researchers who would like to further improve existing algorithms or need to choose appropriate algorithms to infer potential DTIs in their projects. The source codes for DTI predictions are available at: https://github.com/minghao2016/chemogenomicAlg4DTIpred. Published by Oxford University Press 2018. This work is written by US Government employees and is in the public domain in the US.
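
The comparison above relies on mean percentile ranking; the sketch below shows how such a ranking metric is commonly computed (for each known interaction, the percentile position of the true target in the drug's ranked prediction list, averaged over all known pairs, lower being better). Whether the review uses exactly this convention is not stated here, so treat this as a generic illustration.

```python
import numpy as np

def mean_percentile_ranking(scores, true_pairs):
    """scores: (n_drugs, n_targets) array of predicted interaction scores.
    true_pairs: iterable of (drug_index, target_index) known interactions.
    Returns the mean percentile rank of the true targets (0 = best, 1 = worst)."""
    n_targets = scores.shape[1]
    ranks = []
    for d, t in true_pairs:
        order = np.argsort(-scores[d])                   # targets sorted by descending score
        ranks.append(np.where(order == t)[0][0] / (n_targets - 1))
    return float(np.mean(ranks))

# Toy example: 2 drugs x 5 targets, both true targets ranked first -> MPR = 0.0.
scores = np.array([[0.9, 0.1, 0.4, 0.2, 0.8],
                   [0.3, 0.7, 0.2, 0.6, 0.1]])
print(mean_percentile_ranking(scores, [(0, 0), (1, 1)]))
```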

  11. Quantification of the impact of precipitation spatial distribution uncertainty on predictive uncertainty of a snowmelt runoff model

    Science.gov (United States)

    Jacquin, A. P.

    2012-04-01

This study is intended to quantify the impact of uncertainty about precipitation spatial distribution on predictive uncertainty of a snowmelt runoff model. This problem is especially relevant in mountain catchments with a sparse precipitation observation network and relatively short precipitation records. The model analysed is a conceptual watershed model operating at a monthly time step. The model divides the catchment into five elevation zones, where the fifth zone corresponds to the catchment's glaciers. Precipitation amounts at each elevation zone i are estimated as the product between observed precipitation at a station and a precipitation factor FPi. If other precipitation data are not available, these precipitation factors must be adjusted during the calibration process and are thus seen as parameters of the model. In the case of the fifth zone, glaciers are seen as an inexhaustible source of water that melts when the snow cover is depleted. The catchment case study is Aconcagua River at Chacabuquito, located in the Andean region of Central Chile. The model's predictive uncertainty is measured in terms of the output variance of the mean squared error of the Box-Cox transformed discharge, the relative volumetric error, and the weighted average of snow water equivalent in the elevation zones at the end of the simulation period. Sobol's variance decomposition (SVD) method is used for assessing the impact of precipitation spatial distribution, represented by the precipitation factors FPi, on the model's predictive uncertainty. In the SVD method, the first order effect of a parameter (or group of parameters) indicates the fraction of predictive uncertainty that could be reduced if the true value of this parameter (or group) was known. Similarly, the total effect of a parameter (or group) measures the fraction of predictive uncertainty that would remain if the true value of this parameter (or group) was unknown, but all the remaining model parameters could be fixed
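
For reference, the first-order and total effects described above are the standard Sobol' sensitivity indices; a compact statement of the usual definitions (the notation is assumed here, not copied from the abstract):

```latex
% First-order index of parameter (group) X_i for model output Y:
S_i = \frac{\operatorname{Var}_{X_i}\!\big[\mathbb{E}_{X_{\sim i}}(Y \mid X_i)\big]}{\operatorname{Var}(Y)}
% fraction of predictive variance that would vanish if X_i were known exactly.

% Total-effect index of X_i:
S_{T_i} = 1 - \frac{\operatorname{Var}_{X_{\sim i}}\!\big[\mathbb{E}_{X_i}(Y \mid X_{\sim i})\big]}{\operatorname{Var}(Y)}
% fraction of predictive variance that remains when everything except X_i is fixed.
```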

  12. Influence of covariate distribution on the predictive performance of pharmacokinetic models in paediatric research

    Science.gov (United States)

    Piana, Chiara; Danhof, Meindert; Della Pasqua, Oscar

    2014-01-01

    Aims The accuracy of model-based predictions often reported in paediatric research has not been thoroughly characterized. The aim of this exercise is therefore to evaluate the role of covariate distributions when a pharmacokinetic model is used for simulation purposes. Methods Plasma concentrations of a hypothetical drug were simulated in a paediatric population using a pharmacokinetic model in which body weight was correlated with clearance and volume of distribution. Two subgroups of children were then selected from the overall population according to a typical study design, in which pre-specified body weight ranges (10–15 kg and 30–40 kg) were used as inclusion criteria. The simulated data sets were then analyzed using non-linear mixed effects modelling. Model performance was assessed by comparing the accuracy of AUC predictions obtained for each subgroup, based on the model derived from the overall population and by extrapolation of the model parameters across subgroups. Results Our findings show that systemic exposure as well as pharmacokinetic parameters cannot be accurately predicted from the pharmacokinetic model obtained from a population with a different covariate range from the one explored during model building. Predictions were accurate only when a model was used for prediction in a subgroup of the initial population. Conclusions In contrast to current practice, the use of pharmacokinetic modelling in children should be limited to interpolations within the range of values observed during model building. Furthermore, the covariate point estimate must be kept in the model even when predictions refer to a subset different from the original population. PMID:24433411

  13. Developing and Validating a Survival Prediction Model for NSCLC Patients Through Distributed Learning Across 3 Countries.

    Science.gov (United States)

    Jochems, Arthur; Deist, Timo M; El Naqa, Issam; Kessler, Marc; Mayo, Chuck; Reeves, Jackson; Jolly, Shruti; Matuszak, Martha; Ten Haken, Randall; van Soest, Johan; Oberije, Cary; Faivre-Finn, Corinne; Price, Gareth; de Ruysscher, Dirk; Lambin, Philippe; Dekker, Andre

    2017-10-01

Tools for survival prediction for non-small cell lung cancer (NSCLC) patients treated with chemoradiation or radiation therapy are of limited quality. In this work, we developed a predictive model of survival at 2 years. The model is based on a large volume of historical patient data and serves as a proof of concept to demonstrate the distributed learning approach. Clinical data from 698 lung cancer patients, treated with curative intent with chemoradiation or radiation therapy alone, were collected and stored at 2 different cancer institutes (559 patients at the Maastro clinic, Netherlands, and 139 at the University of Michigan, United States). The model was further validated on 196 patients originating from The Christie (United Kingdom). A Bayesian network model was adapted for distributed learning (the animation can be viewed at https://www.youtube.com/watch?v=ZDJFOxpwqEA). Two-year posttreatment survival was chosen as the endpoint. The Maastro clinic cohort data are publicly available at https://www.cancerdata.org/publication/developing-and-validating-survival-prediction-model-nsclc-patients-through-distributed, and the developed models can be found at www.predictcancer.org. Variables included in the final model were T and N category, age, performance status, and total tumor dose. The model has an area under the curve (AUC) of 0.66 on the external validation set and an AUC of 0.62 on a 5-fold cross validation. A model based on the T and N category performed with an AUC of 0.47 on the validation set, significantly worse than our model. Learning the model in a centralized or distributed fashion yields a minor difference in the probabilities of the conditional probability tables (0.6%); the discriminative performance of the models on the validation set is similar (P=.26). Distributed learning from federated databases allows learning of predictive models on data originating from multiple institutions while avoiding many of the data-sharing barriers. We believe that

  14. Spatiotemporal trends in Canadian domestic wild boar production and habitat predict wild pig distribution

    DEFF Research Database (Denmark)

    Michel, Nicole; Laforge, Michel; van Beest, Floris

    2017-01-01

… wild boar and test the propagule pressure hypothesis to improve predictive ability of an existing habitat-based model of wild pigs. We reviewed spatiotemporal patterns in domestic wild boar production across ten Canadian provinces during 1991–2011 and evaluated the ability of wild boar farm distribution to improve predictive models of wild pig occurrence using a resource selection probability function for wild pigs in Saskatchewan. Domestic wild boar production in Canada increased from 1991 to 2001 followed by sharp declines in all provinces. The distribution of domestic wild boar farms in 2006 … eradication of wild pigs is rarely feasible after establishment over large areas, effective management will depend on strengthening regulations and enforcement of containment practices for Canadian domestic wild boar farms. Initiation of coordinated provincial and federal efforts to implement population …

  15. Prediction of the low-velocity distribution from the pore structure in simple porous media

    Science.gov (United States)

    de Anna, Pietro; Quaife, Bryan; Biros, George; Juanes, Ruben

    2017-12-01

The macroscopic properties of fluid flow and transport through porous media are a direct consequence of the underlying pore structure. However, precise relations that characterize flow and transport from the statistics of pore-scale disorder have remained elusive. Here we investigate the relationship between pore structure and the resulting fluid flow and asymptotic transport behavior in two-dimensional geometries of nonoverlapping circular posts. We derive an analytical relationship between the pore throat size distribution f_λ ~ λ^(−β) and the distribution of the low fluid velocities f_u ~ u^(−β/2), based on a conceptual model of porelets (the flow established within each pore throat, here a Hagen-Poiseuille flow). Our model allows us to make predictions, within a continuous-time random-walk framework, for the asymptotic statistics of the spreading of fluid particles along their own trajectories. These predictions are confirmed by high-fidelity simulations of Stokes flow and advective transport. The proposed framework can be extended to other configurations which can be represented as a collection of known flow distributions.

  16. Comparison of predicted far-field temperatures for discrete and smeared heat sources

    International Nuclear Information System (INIS)

    Ryder, E.E.

    1992-01-01

A fundamental concern in the design of the potential repository at Yucca Mountain, Nevada, is the response of the host rock to the emplacement of heat-generating waste. The thermal perturbation of the rock mass has implications for the structural, hydrologic, and geochemical performance of the potential repository. The phenomenological coupling of many of these performance aspects makes repository thermal modeling a difficult task. For many of the more complex, coupled models, it is often necessary to reduce the geometry of the potential repository to a smeared heat-source approximation. Such simplifications affect the induced thermal profiles, which in turn may influence other predicted responses through one- or two-way thermal couplings. The effect of waste emplacement layout on host-rock thermal response was chosen as the primary emphasis of this study. Using a consistent set of modeling and input assumptions, far-field thermal response predictions were made for discrete-source as well as plate-source approximations of the repository geometry. Input values used in the simulations are consistent with a design-basis areal power density (APD) of 80 kW/acre, as would be achieved assuming a 2010 emplacement start date, a levelized receipt schedule, and a limitation on available area as published in previous design studies. It was found that edge effects resulting from the general repository layout have a significant influence on the shapes and extents of isothermal profiles, and should be accounted for in far-field modeling efforts

  17. Predictive Model for the Analysis of the Effects of Underwater Impulsive Sources on Marine Life

    National Research Council Canada - National Science Library

    Lazauski, Colin J

    2007-01-01

    A method is provided to predict the biological consequences to marine animals from exposure to multiple underwater impulsive sources by simulating underwater explosions over a defined period of time...

  18. Age-related schema reliance of judgments of learning in predicting source memory.

    Science.gov (United States)

    Shi, Liang-Zi; Tang, Wei-Hai; Liu, Xi-Ping

    2012-01-01

Source memory refers to the mental processes of encoding and making attributions to the origin of information. We investigated schematic effects on the source attributions of younger and older adults for different schema-based types of items, and their use of schemas in judgments of learning (JOLs) when estimating source memory. Participants studied statements presented by two speakers described either as a doctor or as a lawyer: those in the schema-after-encoding condition were informed of the speakers' occupations only before retrieval, while those in the schema-before-encoding condition were given the schematic information prior to study. Immediately after learning each item, participants judged the likelihood that it would later be correctly attributed to its original source. In the test, they completed a source attribution task. The results showed a two-edged effect of schemas: schema reliance improved source memory for schema-consistent items while impairing it for schema-inconsistent items, even when schematic information was presented prior to encoding. Compared with younger adults, older adults benefited more from schema-based compensatory mechanisms. Both younger and older adults could make JOLs based on before-encoding schematic information, and the schema-based JOLs were more accurate in predicting source memory than JOLs made without schema support. However, even in the schema-after-encoding condition, older adults were able to make metacognitive judgments as accurately as younger adults did, though they showed substantial impairments in source memory itself.

  19. Regional climate model downscaling may improve the prediction of alien plant species distributions

    Science.gov (United States)

    Liu, Shuyan; Liang, Xin-Zhong; Gao, Wei; Stohlgren, Thomas J.

    2014-12-01

Distributions of invasive species are commonly predicted with species distribution models that build upon the statistical relationships between observed species presence data and climate data. We used field observations, climate station data, and Maximum Entropy species distribution models for 13 invasive plant species in the United States, and then compared the models with inputs from a General Circulation Model (hereafter GCM-based models) and a downscaled Regional Climate Model (hereafter RCM-based models). We also compared species distributions based on either GCM-based or RCM-based models for the present (1990-1999) to the future (2046-2055). RCM-based species distribution models replicated observed distributions remarkably better than GCM-based models for all invasive species under the current climate. This was shown for the presence locations of the species, and by using four common statistical metrics to compare modeled distributions. For two widespread invasive taxa (Bromus tectorum or cheatgrass, and Tamarix spp. or tamarisk), GCM-based models failed miserably to reproduce observed species distributions. In contrast, RCM-based species distribution models closely matched observations. Future species distributions may be significantly affected by using GCM-based inputs. Because invasive plant species often show high resilience and low rates of local extinction, RCM-based species distribution models may perform better than GCM-based species distribution models for planning containment programs for invasive species.

  20. Alpha-particle autoradiography by solid state track detectors to spatial distribution of radioactivity in alpha-counting source

    International Nuclear Information System (INIS)

    Ishigure, Nobuhito; Nakano, Takashi; Enomoto, Hiroko; Koizumi, Akira; Miyamoto, Katsuhiro

    1989-01-01

A technique of autoradiography using solid state track detectors is described by which the spatial distribution of radioactivity in an alpha-counting source can easily be visualized. As solid state track detectors, a polymer of allyl diglycol carbonate was used. An advantage of the present technique is that the alpha-emitters can be handled in the light throughout the whole course of autoradiography, whereas in conventional autoradiography the alpha-emitters, which require special care from the standpoint of radiation protection, must be handled in the dark with difficulty. This technique was applied to a rough examination of the self-absorption of plutonium sources prepared by the following different methods: source (A) was prepared by drying at room temperature, (B) by drying under an infrared lamp, (C) by drying in an ammonia atmosphere after redissolution with a drop of distilled water following complete evaporation under an infrared lamp, and (D) by drying under an infrared lamp after adding a drop of diluted neutral detergent. The differences in the spatial distributions of radioactivity could clearly be observed on the autoradiographs. For example, source (C) showed the most diffuse distribution, which suggested that its self-absorption was the smallest. The present autoradiographic observations were in accordance with the results of alpha-spectrometry with a silicon surface-barrier detector. (author)

  1. SOILD: A computer model for calculating the effective dose equivalent from external exposure to distributed gamma sources in soil

    International Nuclear Information System (INIS)

    Chen, S.Y.; LePoire, D.; Yu, C.; Schafetz, S.; Mehta, P.

    1991-01-01

The SOLID computer model was developed for calculating the effective dose equivalent from external exposure to distributed gamma sources in soil. It is designed to assess external doses under various exposure scenarios that may be encountered in environmental restoration programs. The model's four major functional features address (1) dose versus source depth in soil, (2) shielding by clean cover soil, (3) area of contamination, and (4) nonuniform distribution of sources. The model is also capable of adjusting doses when there are variations in soil densities for both source and cover soils. The model is supported by a data base of approximately 500 radionuclides. 4 refs

  2. Disentangling the major source areas for an intense aerosol advection in the Central Mediterranean on the basis of Potential Source Contribution Function modeling of chemical and size distribution measurements

    Science.gov (United States)

    Petroselli, Chiara; Crocchianti, Stefano; Moroni, Beatrice; Castellini, Silvia; Selvaggi, Roberta; Nava, Silvia; Calzolai, Giulia; Lucarelli, Franco; Cappelletti, David

    2018-05-01

In this paper, we combined a Potential Source Contribution Function (PSCF) analysis of daily chemical aerosol composition data with hourly aerosol size distributions, with the aim of disentangling the major source areas during a complex and rapidly modulating advection event that impacted Central Italy in 2013. The chemical data include an ample set of metals obtained by Proton Induced X-ray Emission (PIXE), the main soluble ions from ion chromatography, and elemental and organic carbon (EC, OC) obtained by thermo-optical measurements. Size distributions were recorded with an optical particle counter for eight calibrated size classes in the 0.27-10 μm range. We demonstrated the usefulness of the approach by the positive identification of two very different source areas impacting during the transport event. In particular, biomass burning from Eastern Europe and desert dust from Saharan sources were discriminated based on both chemistry and size distribution time evolution. Hourly back-trajectories provided the best results in comparison with 6 h or 24 h based calculations.
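
For context, the PSCF value of a grid cell is conventionally defined as the conditional probability that back-trajectory segments passing over that cell are associated with concentrations above a chosen criterion; a standard statement of the definition follows (the weighting factor is left generic and is an assumption here, not a detail taken from the paper):

```latex
% Potential Source Contribution Function for grid cell (i, j):
\mathrm{PSCF}_{ij} = \frac{m_{ij}}{n_{ij}}\, W(n_{ij})
% n_{ij}: number of trajectory endpoints falling in cell (i, j)
% m_{ij}: endpoints in (i, j) associated with samples exceeding the criterion value
% W(n_{ij}): optional weight that down-weights cells crossed by few trajectories
```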

  3. Empirical tests of Zipf's law mechanism in open source Linux distribution.

    Science.gov (United States)

    Maillart, T; Sornette, D; Spaeth, S; von Krogh, G

    2008-11-21

    Zipf's power law is a ubiquitous empirical regularity found in many systems, thought to result from proportional growth. Here, we establish empirically the usually assumed ingredients of stochastic growth models that have been previously conjectured to be at the origin of Zipf's law. We use exceptionally detailed data on the evolution of open source software projects in Linux distributions, which offer a remarkable example of a growing complex self-organizing adaptive system, exhibiting Zipf's law over four full decades.

  4. Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources

    Directory of Open Access Journals (Sweden)

    Xiang Gao

    2016-07-01

    Full Text Available This paper addresses the problem of mapping odor distribution derived from a chemical source using multi-sensor integration and reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To solve both of the above challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors, and combine sensors’ data at different positions. Initially, a multi-sensor integration method, together with the path of airflow was used to map the pattern of odor particle movement. Then, more sensors are introduced at specific regions to determine the probable location of the odor source. Finally, the results of odor source location simulation and a real experiment are presented.

  5. Energy models for commercial energy prediction and substitution of renewable energy sources

    International Nuclear Information System (INIS)

    Iniyan, S.; Suganthi, L.; Samuel, Anand A.

    2006-01-01

In this paper, three models are proposed, namely the Modified Econometric Mathematical (MEM) model, the Mathematical Programming Energy-Economy-Environment (MPEEE) model, and the Optimal Renewable Energy Mathematical (OREM) model. The actual demand for coal, oil and electricity is predicted using the MEM model based on economic, technological and environmental factors. The results were used in the MPEEE model, which determines the optimum allocation of commercial energy sources based on environmental limitations. The gap between the actual energy demand from the MEM model and the optimal energy use from the MPEEE model has to be met by renewable energy sources. The study develops an OREM model that would facilitate effective utilization of renewable energy sources in India, based on cost, efficiency, social acceptance, reliability, potential and demand. The economic variations in solar energy systems and the inclusion of an environmental constraint are also analyzed with the OREM model. The OREM model will help policy makers in the formulation and implementation of strategies concerning renewable energy sources in India for the next two decades

  6. Distribution, sources and health risk assessment of mercury in kindergarten dust

    Science.gov (United States)

    Sun, Guangyi; Li, Zhonggen; Bi, Xiangyang; Chen, Yupeng; Lu, Shuangfang; Yuan, Xin

    2013-07-01

Mercury (Hg) contamination in urban areas is a hot issue in environmental research. In this study, the distribution, sources and health risk of Hg in dust from 69 kindergartens in Wuhan, China, were investigated. In comparison with most other cities, the concentrations of total mercury (THg) and methylmercury (MeHg) were significantly elevated, ranging from 0.15 to 10.59 mg kg-1 and from 0.64 to 3.88 μg kg-1, respectively. Among the five different urban areas, the educational area had the highest concentrations of THg and MeHg. GIS mapping was used to identify the hot-spot areas and assess the potential pollution sources of Hg. The emissions of coal-fired power plants and coking plants were the main sources of THg in the dust, whereas the contributions of municipal solid waste (MSW) landfills and iron and steel smelting related industries were not significant. However, the emission of MSW landfills was considered to be an important source of MeHg in the studied area. The health risk assessment indicated a high adverse health effect, in terms of Hg contamination of kindergarten dust, on the children living in the educational area (hazard index (HI) = 6.89).

  7. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Junjie Ma

    2018-02-01

Full Text Available Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths.

  8. XTALOPT: An open-source evolutionary algorithm for crystal structure prediction

    Science.gov (United States)

    Lonie, David C.; Zurek, Eva

    2011-02-01

The implementation and testing of XTALOPT, an evolutionary algorithm for crystal structure prediction, is outlined. We present our new periodic displacement (ripple) operator which is ideally suited to extended systems. It is demonstrated that hybrid operators, which combine two pure operators, reduce the number of duplicate structures in the search. This allows for better exploration of the potential energy surface of the system in question, while simultaneously zooming in on the most promising regions. A continuous workflow, which makes better use of computational resources as compared to traditional generation based algorithms, is employed. Various parameters in XTALOPT are optimized using a novel benchmarking scheme. XTALOPT is available under the GNU Public License, has been interfaced with various codes commonly used to study extended systems, and has an easy to use, intuitive graphical interface. Program summary: Program title: XTALOPT; Catalogue identifier: AEGX_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGX_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: GPL v2.1 or later [1]; No. of lines in distributed program, including test data, etc.: 36 849; No. of bytes in distributed program, including test data, etc.: 1 149 399; Distribution format: tar.gz; Programming language: C++; Computer: PCs, workstations, or clusters; Operating system: Linux; Classification: 7.7; External routines: QT [2], OpenBabel [3], AVOGADRO [4], SPGLIB [8] and one of: VASP [5], PWSCF [6], GULP [7]. Nature of problem: Predicting the crystal structure of a system from its stoichiometry alone remains a grand challenge in computational materials science, chemistry, and physics. Solution method: Evolutionary algorithms are stochastic search techniques which use concepts from biological evolution in order to locate the global minimum on their potential energy surface. Our evolutionary algorithm, XTALOPT, is freely

  9. Distributed Learning, Recognition, and Prediction by ART and ARTMAP Neural Networks.

    Science.gov (United States)

    Carpenter, Gail A.

    1997-11-01

    A class of adaptive resonance theory (ART) models for learning, recognition, and prediction with arbitrarily distributed code representations is introduced. Distributed ART neural networks combine the stable fast learning capabilities of winner-take-all ART systems with the noise tolerance and code compression capabilities of multilayer perceptrons. With a winner-take-all code, the unsupervised model dART reduces to fuzzy ART and the supervised model dARTMAP reduces to fuzzy ARTMAP. With a distributed code, these networks automatically apportion learned changes according to the degree of activation of each coding node, which permits fast as well as slow learning without catastrophic forgetting. Distributed ART models replace the traditional neural network path weight with a dynamic weight equal to the rectified difference between coding node activation and an adaptive threshold. Thresholds increase monotonically during learning according to a principle of atrophy due to disuse. However, monotonic change at the synaptic level manifests itself as bidirectional change at the dynamic level, where the result of adaptation resembles long-term potentiation (LTP) for single-pulse or low frequency test inputs but can resemble long-term depression (LTD) for higher frequency test inputs. This paradoxical behavior is traced to dual computational properties of phasic and tonic coding signal components. A parallel distributed match-reset-search process also helps stabilize memory. Without the match-reset-search system, dART becomes a type of distributed competitive learning network.

  10. Method to Locate Contaminant Source and Estimate Emission Strength

    Directory of Open Access Journals (Sweden)

    Qu Hongquan

    2013-01-01

Full Text Available People are greatly concerned about air quality in confined spaces such as spacecraft, aircraft, and submarines. As residence times in such confined spaces increase, contaminant pollution has become a main factor endangering life. It is urgent to identify a contaminant source rapidly so that prompt remedial action can be taken. A source identification procedure should be able to locate the position of the contaminant source and to estimate its emission strength. In this paper, an identification method was developed to achieve these two aims. The method is based on a discrete concentration stochastic model. With this model, a sensitivity analysis algorithm was derived to locate the source position, and a Kalman filter was used to further estimate the contaminant emission strength. The method can track and predict the source strength dynamically, and it can also predict the distribution of contaminant concentration. Simulation results demonstrate the merits of the method.
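
A minimal sketch of the strength-estimation step under strong simplifying assumptions: a single well-mixed zone, a known source location, and a linear concentration response, so that a small Kalman filter tracks the (possibly drifting) emission rate from noisy concentration measurements. The sensitivity-analysis step that locates the source is not reproduced, and none of the numerical values come from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setting: one well-mixed zone, dC/dt = q/V - (Q/V) * C, discretized with step dt.
V, Q, dt = 50.0, 0.05, 60.0             # zone volume (m3), ventilation flow (m3/s), step (s)
a = 1.0 - Q / V * dt                    # fraction of concentration retained per step
true_q = 2.0e-4                         # true emission rate, unknown to the filter

# State x = [concentration C, emission rate q]; q modelled as a slowly drifting random walk.
F = np.array([[a, dt / V],
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])              # only the concentration is measured
Qn = np.diag([1e-10, 1e-12])            # process noise covariance
Rn = np.array([[4e-8]])                 # measurement noise covariance (sensor std 2e-4)

x = np.array([0.0, 0.0])                # initial guess: clean air, no source
P = np.diag([1e-4, 1e-6])

C = 0.0
for _ in range(300):
    C = a * C + dt / V * true_q         # simulate the zone
    z = C + rng.normal(0.0, 2e-4)       # noisy sensor reading

    x = F @ x                           # Kalman predict
    P = F @ P @ F.T + Qn
    S = H @ P @ H.T + Rn                # Kalman update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(x[1], true_q)                     # estimated vs. true emission strength
```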

  11. Positron energy distributions from a hybrid positron source based on channeling radiation

    International Nuclear Information System (INIS)

    Azadegan, B.; Mahdipour, A.; Dabagov, S.B.; Wagner, W.

    2013-01-01

A hybrid positron source which is based on the generation of channeling radiation by relativistic electrons channeled along different crystallographic planes and axes of a tungsten single crystal and the subsequent conversion of the radiation into e+e− pairs in an amorphous tungsten target is described. The photon spectra of channeling radiation are calculated using the Doyle–Turner approximation for the continuum potentials and classical equations of motion for channeled particles to obtain their trajectories, velocities and accelerations. The spectral-angular distributions of channeling radiation are found applying classical electrodynamics. Finally, the conversion of radiation into e+e− pairs and the energy distributions of positrons are simulated using the GEANT4 package

  12. Prediction of plasma-induced damage distribution during silicon nitride etching using advanced three-dimensional voxel model

    Energy Technology Data Exchange (ETDEWEB)

    Kuboi, Nobuyuki, E-mail: Nobuyuki.Kuboi@jp.sony.com; Tatsumi, Tetsuya; Kinoshita, Takashi; Shigetoshi, Takushi; Fukasawa, Masanaga; Komachi, Jun; Ansai, Hisahiro [Device and Material Research Group, RDS Platform, Sony Corporation, 4-14-1 Asahi-cho, Atsugi, Kanagawa 243-0014 (Japan)

    2015-11-15

The authors modeled SiN film etching with hydrofluorocarbon (CHxFy/Ar/O2) plasma considering physical (ion bombardment) and chemical reactions in detail, including the reactivity of radicals (C, F, O, N, and H), the area ratio of Si dangling bonds, the outflux of N and H, the dependence of the H/N ratio on the polymer layer, and generation of by-products (HCN, C2N2, NH, HF, OH, and CH, in addition to CO, CF2, SiF2, and SiF4) as ion assistance process parameters for the first time. The model was consistent with the measured C-F polymer layer thickness, etch rate, and selectivity dependence on process variation for SiN, SiO2, and Si film etching. To analyze the three-dimensional (3D) damage distribution affected by the etched profile, the authors developed an advanced 3D voxel model that can predict the time-evolution of the etched profile and damage distribution. The model includes some new concepts for gas transportation in the pattern using a fluid model and the property of voxels called “smart voxels,” which contain details of the history of the etching situation. Using this 3D model, the authors demonstrated metal–oxide–semiconductor field-effect transistor SiN side-wall etching that consisted of the main-etch step with CF4/Ar/O2 plasma and an over-etch step with CH3F/Ar/O2 plasma under the assumption of a realistic process and pattern size. A large amount of Si damage induced by irradiated hydrogen occurred in the source/drain region, a Si recess depth of 5 nm was generated, and the dislocated Si was distributed in a 10 nm deeper region than the Si recess, which was consistent with experimental data for a capacitively coupled plasma. An especially large amount of Si damage was also found at the bottom edge region of the metal–oxide–semiconductor field-effect transistors. Furthermore, our simulation results for bulk fin-type field-effect transistor side-wall etching

  13. Depth to the bottom of magnetic sources (DBMS) from aeromagnetic data of Central India using modified centroid method for fractal distribution of sources

    Science.gov (United States)

    Bansal, A. R.; Anand, S. P.; Rajaram, Mita; Rao, V. K.; Dimri, V. P.

    2013-09-01

    The depth to the bottom of the magnetic sources (DBMS) has been estimated from aeromagnetic data of Central India. The conventional centroid method of DBMS estimation assumes a random, uniform, uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on a scaling distribution of sources has been proposed. Shallower DBMS values are found for the south-western region: values as low as 22 km occur in the south-west Deccan-trap-covered regions, and values as deep as 43 km in the Chhattisgarh Basin. In most places the DBMS is much shallower than the Moho depth found earlier from seismic studies and may represent thermal, compositional, or petrological boundaries. The large variation in the DBMS indicates the complex nature of the Indian crust.
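
    For readers unfamiliar with the centroid method, the sketch below shows a minimal version of the spectral workflow, assuming a radially averaged power spectrum with wavenumbers in rad/km: the top depth comes from the high-wavenumber slope of ln sqrt(P(k)), the centroid depth from the low-wavenumber slope of ln(sqrt(P(k))/k), and DBMS = 2·Z_centroid − Z_top. The k**beta whitening stands in for the fractal-source correction; the wavenumber bands and function name are assumptions rather than the authors' exact choices.

        import numpy as np

        def dbms_centroid(k, power, beta=0.0, band_top=(0.5, 1.5), band_cent=(0.05, 0.4)):
            """Depth to the bottom of magnetic sources via the centroid method.

            k        : radial wavenumbers (rad/km, excluding zero) of the radially
                       averaged power spectrum
            power    : radially averaged power spectrum P(k)
            beta     : scaling exponent; beta > 0 whitens the spectrum with k**beta
                       as a stand-in for the fractal-source correction (beta = 0
                       recovers the conventional uncorrelated-source assumption)
            band_top, band_cent : assumed wavenumber bands for the two fits
            """
            amp = np.sqrt(power * k**beta)

            def fitted_depth(yvals, band):
                m = (k >= band[0]) & (k <= band[1])
                slope, _ = np.polyfit(k[m], yvals[m], 1)
                return -slope                       # depth = -slope for k in rad/km

            z_top = fitted_depth(np.log(amp), band_top)        # depth to top
            z_cent = fitted_depth(np.log(amp / k), band_cent)  # centroid depth
            return 2.0 * z_cent - z_top                        # DBMS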

  14. The Space-, Time-, and Energy-distribution of Neutrons from a Pulsed Plane Source

    Energy Technology Data Exchange (ETDEWEB)

    Claesson, Arne

    1962-05-15

    The space-, time- and energy-distribution of neutrons from a pulsed, plane, high energy source in an infinite medium is determined in a diffusion approximation. For simplicity the moderator is first assumed to be hydrogen gas but it is also shown that the method can be used for a moderator of arbitrary mass.

  15. Distributed least-squares estimation of a remote chemical source via convex combination in wireless sensor networks.

    Science.gov (United States)

    Cao, Meng-Li; Meng, Qing-Hao; Zeng, Ming; Sun, Biao; Li, Wei; Ding, Cheng-Jun

    2014-06-27

    This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN). Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE) method to solve the chemical source localization (CSL) problem using a WSN. The DLSE method is realized by iteratively conducting convex combination of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source.

  16. Distributed Least-Squares Estimation of a Remote Chemical Source via Convex Combination in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Meng-Li Cao

    2014-06-01

    Full Text Available This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN). Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE) method to solve the chemical source localization (CSL) problem using a WSN. The DLSE method is realized by iteratively conducting convex combination of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source.

  17. Site-to-Source Finite Fault Distance Probability Distribution in Probabilistic Seismic Hazard and the Relationship Between Minimum Distances

    Science.gov (United States)

    Ortega, R.; Gutierrez, E.; Carciumaru, D. D.; Huesca-Perez, E.

    2017-12-01

    We present a method to compute the conditional and unconditional probability density function (PDF) of the finite fault distance distribution (FFDD). Two cases are described: lines and areas. The case of lines has a simple analytical solution, while in the case of areas the geometrical probability of a fault based on the strike, dip, and fault segment vertices is obtained using the projection of spheres onto a piecewise rectangular surface. The cumulative distribution is computed by measuring the projection of a sphere of radius r onto an effective area using an algorithm that estimates the area of a circle within a rectangle. In addition, we introduce a finite fault distance metric: the distance at which the maximum stress release occurs within the fault plane and generates the peak ground motion. The appropriate ground motion prediction equations (GMPE) can then be applied for PSHA. The conditional probability of distance given magnitude is also presented using different scaling laws. A simple model with a constant distribution of the centroid at the geometrical mean is discussed; in this model, hazard is reduced at the edges because the effective size is reduced. There is currently a trend toward using extended-source distances in PSHA; however, it is not possible to separate the fault geometry from the GMPE. With this new approach, it is possible to add fault rupture models that separate geometrical and propagation effects.
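
    The sketch below is not the authors' sphere-projection algorithm; it is a hedged Monte Carlo stand-in that builds an empirical site-to-rupture distance distribution by sampling points uniformly over a planar rectangular fault defined by an assumed origin, strike and dip, which is often enough to visualize what an FFDD looks like.

        import numpy as np

        def fault_distance_samples(site, origin, strike_deg, dip_deg,
                                   length_km, width_km, n=100000, seed=0):
            """Empirical site-to-rupture distance samples for a planar fault.

            Points are drawn uniformly over a rectangular fault plane given its
            top-edge origin (east, north, depth in km, depth positive down),
            strike and dip.  A Monte Carlo stand-in for the FFDD, not the
            authors' sphere-projection algorithm.
            """
            rng = np.random.default_rng(seed)
            s, d = np.radians(strike_deg), np.radians(dip_deg)
            u_strike = np.array([np.sin(s), np.cos(s), 0.0])
            u_dip = np.array([np.cos(s) * np.cos(d), -np.sin(s) * np.cos(d), np.sin(d)])
            a = rng.uniform(0.0, length_km, n)[:, None]
            b = rng.uniform(0.0, width_km, n)[:, None]
            pts = np.asarray(origin, dtype=float) + a * u_strike + b * u_dip
            return np.linalg.norm(pts - np.asarray(site, dtype=float), axis=1)

        # Hypothetical usage: a histogram of the samples approximates the FFDD
        d_samples = fault_distance_samples(site=[30.0, 0.0, 0.0], origin=[0.0, 0.0, 5.0],
                                           strike_deg=0.0, dip_deg=45.0,
                                           length_km=40.0, width_km=15.0)
        pdf, edges = np.histogram(d_samples, bins=50, density=True)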

  18. Evaluating the impact of improvements to the FLAMBE smoke source model on forecasts of aerosol distribution from NAAPS

    Science.gov (United States)

    Hyer, E. J.; Reid, J. S.

    2006-12-01

    As more forecast models aim to include aerosol and chemical species, there is a need for source functions for biomass burning emissions that are accurate, robust, and operable in real-time. NAAPS is a global aerosol forecast model running every six hours and forecasting distributions of biomass burning, industrial sulfate, dust, and sea salt aerosols. This model is run operationally by the U.S. Navy as an aid to planning. The smoke emissions used as input to the model are calculated from the data collected by the FLAMBE system, driven by near-real-time active fire data from GOES WF_ABBA and MODIS Rapid Response. The smoke source function uses land cover data to predict properties of detected fires based on literature data from experimental burns. This scheme is very sensitive to the choice of land cover data sets. In areas of rapid land cover change, the use of static land cover data can produce artifactual changes in emissions unrelated to real changes in fire patterns. In South America, this change may be as large as 40% over five years. We demonstrate the impact of a modified land cover scheme on FLAMBE emissions and NAAPS forecasts, including a fire size algorithm developed using MODIS burned area data. We also describe the effects of corrections to emissions estimates for cloud and satellite coverage. We outline areas where existing data sources are incomplete and improvements are required to achieve accurate modeling of biomass burning emissions in real time.

  19. Multivariate models for prediction of rheological characteristics of filamentous fermentation broth from the size distribution.

    Science.gov (United States)

    Petersen, Nanna; Stocks, Stuart; Gernaey, Krist V

    2008-05-01

    The main purpose of this article is to demonstrate that principal component analysis (PCA) and partial least squares regression (PLSR) can be used to extract information from particle size distribution data and predict rheological properties. Samples from commercially relevant Aspergillus oryzae fermentations conducted in 550 L pilot scale tanks were characterized with respect to particle size distribution, biomass concentration, and rheological properties. The rheological properties were described using the Herschel-Bulkley model. Estimation of all three parameters in the Herschel-Bulkley model (yield stress τ_y, consistency index K, and flow behavior index n) resulted in a large standard deviation of the parameter estimates. The flow behavior index was not found to be correlated with any of the other measured variables, and previous studies have suggested a constant value of the flow behavior index in filamentous fermentations. It was therefore chosen to fix this parameter to the average value, thereby decreasing the standard deviation of the estimates of the remaining rheological parameters significantly. Using a PLSR model, a reasonable prediction of apparent viscosity (μ_app), yield stress (τ_y), and consistency index (K) could be made from the size distributions, biomass concentration, and process information. This provides a method with high predictive power for the rheology of fermentation broth, with the advantage over previous models that τ_y and K can be predicted as well as μ_app. Validation on an independent test set yielded a root mean square error of 1.21 Pa for τ_y, 0.209 Pa·s^n for K, and 0.0288 Pa·s for μ_app, corresponding to R² = 0.95, R² = 0.94, and R² = 0.95, respectively. Copyright 2007 Wiley Periodicals, Inc.
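
    The Herschel-Bulkley model referred to above is τ = τ_y + K·γ̇^n. The sketch below mirrors the reduced-parameter fit described in the record: with n held fixed (here at an assumed value), the model becomes linear in τ_y and K and can be fitted by ordinary least squares; the synthetic data and function name are illustrative only.

        import numpy as np

        def fit_herschel_bulkley(shear_rate, shear_stress, n=0.45):
            """Fit tau = tau_y + K * gamma_dot**n with the flow index n held fixed.

            Fixing n (here an assumed value) makes the model linear in tau_y and K,
            so ordinary least squares suffices, mirroring the reduced-parameter fit
            described in the record.
            """
            A = np.column_stack([np.ones_like(shear_rate), shear_rate**n])
            (tau_y, K), *_ = np.linalg.lstsq(A, shear_stress, rcond=None)
            return tau_y, K

        # Synthetic broth rheology data for illustration
        gamma = np.linspace(1.0, 100.0, 30)
        tau = 3.0 + 0.8 * gamma**0.45 + np.random.default_rng(1).normal(0.0, 0.1, 30)
        print(fit_herschel_bulkley(gamma, tau))      # approximately (3.0, 0.8)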

  20. Simulations of a spectral gamma-ray logging tool response to a surface source distribution on the borehole wall

    International Nuclear Information System (INIS)

    Wilson, R.D.; Conaway, J.G.

    1991-01-01

    We have developed Monte Carlo and discrete ordinates simulation models for the large-detector spectral gamma-ray (SGR) logging tool in use at the Nevada Test Site. Application of the simulation models produced spectra for source layers on the borehole wall, either from potassium-bearing mudcakes or from plate-out of radon daughter products. Simulations show that the shape and magnitude of gamma-ray spectra from sources distributed on the borehole wall depend on radial position within the air-filled borehole as well as on hole diameter. No such dependence is observed for sources uniformly distributed in the formation. In addition, sources on the borehole wall produce anisotropic angular fluxes at the higher scattered energies and at the source energy. These differences in borehole effects and in angular flux are important to the process of correcting SGR logs for the presence of potassium mudcakes; they also suggest a technique for distinguishing between spectral contributions from formation sources and sources on the borehole wall. These results imply the existence of a standoff effect not present for spectra measured in air-filled boreholes from formation sources. 5 refs., 11 figs

  1. Training algorithms evaluation for artificial neural network to temporal prediction of photovoltaic generation

    International Nuclear Information System (INIS)

    Arantes Monteiro, Raul Vitor; Caixeta Guimarães, Geraldo; Rocio Castillo, Madeleine; Matheus Moura, Fabrício Augusto; Tamashiro, Márcio Augusto

    2016-01-01

    Current energy policies are encouraging the connection of power generation based on low-polluting technologies, mainly those using renewable sources, to distribution networks. Hence, it becomes increasingly important to understand the technical challenges posed by high penetration of PV systems on the grid, especially considering the effects of the intermittence of this source on the power quality, reliability and stability of the electric distribution system. This intermittence can affect the distribution networks to which the PV systems are attached, causing overvoltage, undervoltage and frequency oscillations. In order to predict these disturbances, artificial neural networks are used. This article analyzes 3 training algorithms used in artificial neural networks for temporal prediction of the active power generated by photovoltaic panels. It was concluded that the algorithm with the best performance among the 3 analyzed was Levenberg-Marquardt.
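
    A rough flavour of such a comparison is sketched below using scikit-learn, which does not offer Levenberg-Marquardt, so the available solvers ('lbfgs', 'sgd', 'adam') are compared instead on a toy one-step-ahead prediction of a synthetic PV-like power series; the window length, network size and data are assumptions for illustration, not the study's setup.

        import numpy as np
        from sklearn.metrics import mean_squared_error
        from sklearn.neural_network import MLPRegressor

        # Toy PV-like series: clipped daily sinusoid (96 samples per day) plus noise
        rng = np.random.default_rng(0)
        t = np.arange(1000)
        power = np.clip(np.sin(2 * np.pi * t / 96), 0.0, None) + 0.05 * rng.normal(size=t.size)

        # One-step-ahead prediction from a window of the previous 8 samples
        lag = 8
        X = np.array([power[i:i + lag] for i in range(len(power) - lag)])
        y = power[lag:]
        X_tr, X_te, y_tr, y_te = X[:800], X[800:], y[:800], y[800:]

        for solver in ("lbfgs", "sgd", "adam"):      # Levenberg-Marquardt is not offered
            model = MLPRegressor(hidden_layer_sizes=(20,), solver=solver,
                                 max_iter=2000, random_state=0).fit(X_tr, y_tr)
            print(solver, mean_squared_error(y_te, model.predict(X_te)))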

  2. Isotopes, Inventories and Seasonality: Unraveling Methane Source Distribution in the Complex Landscapes of the United Kingdom.

    Science.gov (United States)

    Lowry, D.; Fisher, R. E.; Zazzeri, G.; Lanoisellé, M.; France, J.; Allen, G.; Nisbet, E. G.

    2017-12-01

    Unlike the big open landscapes of many continents with large area sources dominated by one particular methane emission type that can be isotopically characterized by flight measurements and sampling, the complex patchwork of urban, fossil and agricultural methane sources across NW Europe requires detailed ground surveys for characterization (Zazzeri et al., 2017). Here we outline the findings from multiple seasonal urban and rural measurement campaigns in the United Kingdom. These surveys aim to: 1) Assess source distribution and baseline in regions of planned fracking, and relate to on-site continuous baseline climatology. 2) Characterize spatial and seasonal differences in the isotopic signatures of the UNFCCC source categories, and 3) Assess the spatial validity of the 1 x 1 km UK inventory for large continuous emitters, proposed point sources, and seasonal / ephemeral emissions. The UK inventory suggests that 90% of methane emissions are from 3 source categories: ruminants, landfill and gas distribution. Bag sampling and GC-IRMS δ13C analysis shows that landfill gives a constant signature of -57 ±3 ‰ throughout the year. Fugitive gas emissions are consistent regionally depending on the North Sea supply regions feeding the network (-41 ± 2 ‰ in N England, -37 ± 2 ‰ in SE England). Ruminant, mostly cattle, emissions are far more complex as these spend winters in barns and summers in fields, but are essentially a mix of 2 end members, breath at -68 ±3 ‰ and manure at -51 ±3 ‰, resulting in broad summer field emission plumes of -64 ‰ and point winter barn emission plumes of -58 ‰. The inventory correctly locates emission hotspots from landfill, larger sewage treatment plants and gas compressor stations, giving a broad overview of emission distribution for regional model validation. Mobile surveys are adding an extra layer of detail to this which, combined with isotopic characterization, has identified the spatial distribution of gas pipe leaks

  3. Time-dependent anisotropic distributed source capability in transient 3-d transport code tort-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    The transient 3-D discrete ordinates transport code TORT-TD has been extended to account for time-dependent anisotropic distributed external sources. The extension aims at the simulation of the pulsed neutron source in the YALINA-Thermal subcritical assembly. Since feedback effects are not relevant in this zero-power configuration, this offers a unique opportunity to validate the time-dependent neutron kinetics of TORT-TD with experimental data. The extensions made in TORT-TD to incorporate a time-dependent anisotropic external source are described. The steady state of the YALINA-Thermal assembly and its response to an artificial square-wave source pulse sequence have been analysed with TORT-TD using pin-wise homogenised cross sections in 18 prompt energy groups with P1 scattering order and 8 delayed neutron groups. The results demonstrate the applicability of TORT-TD to subcritical problems with a time-dependent external source. (authors)

  4. Angular and mass resolved energy distribution measurements with a gallium liquid metal ion source

    International Nuclear Information System (INIS)

    Marriott, Philip

    1987-06-01

    Ionisation and energy broadening mechanisms relevant to liquid metal ion sources are discussed. A review of experimental results giving a picture of source operation and a discussion of the emission mechanisms thought to occur for the ionic species and droplets emitted is presented. Further work is suggested by this review and an analysis system for angular and mass resolved energy distribution measurements of liquid metal ion source beams has been constructed. The energy analyser has been calibrated and a series of measurements, both on and off the beam axis, of 69Ga+, Ga++ and Ga2+ ions emitted at various currents from a gallium source has been performed. A comparison is made between these results and published work where possible, and the results are discussed with the aim of determining the emission and energy spread mechanisms operating in the gallium liquid metal ion source. (author)

  5. Distributional patterns of arsenic concentrations in contaminant plumes offer clues to the source of arsenic in groundwater at landfills

    Science.gov (United States)

    Harte, Philip T.

    2015-01-01

    The distributional pattern of dissolved arsenic concentrations from landfill plumes can provide clues to the source of arsenic contamination. Under simple idealized conditions, arsenic concentrations along flow paths in aquifers proximal to a landfill will decrease under anthropogenic sources but potentially increase under in situ sources. This paper presents several conceptual distributional patterns of arsenic in groundwater based on the arsenic source under idealized conditions. An example of advanced subsurface mapping of dissolved arsenic with geophysical surveys, chemical monitoring, and redox fingerprinting is presented for a landfill site in New Hampshire with a complex flow pattern. Tools to assist in the mapping of arsenic in groundwater ultimately provide information on the source of contamination. Once an understanding of the arsenic contamination is achieved, appropriate remedial strategies can then be formulated.

  6. A nodal model to predict vertical temperature distribution in a room with floor heating and displacement ventilation

    DEFF Research Database (Denmark)

    Wu, Xiaozhou; Olesen, Bjarne W.; Fang, Lei

    2013-01-01

    In this paper, the development of a nodal model that predicts vertical temperature distribution in a typical office room with floor heating and displacement ventilation (FHDV) is described. The vertical air flow distribution is first determined according to the principle of displacement ventilati...

  7. Effects of source and receiver locations in predicting room transfer functions by a phased beam tracing method

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho; Ih, Jeong-Guon

    2012-01-01

    The accuracy of a phased beam tracing method in predicting transfer functions is investigated with a special focus on the positions of the source and receiver. Simulated transfer functions for various source-receiver pairs using the phased beam tracing method were compared with analytical Green’s...

  8. Development of a predictive system for SLM product quality

    Science.gov (United States)

    Park, H. S.; Tran, N. H.; Nguyen, D. S.

    2017-08-01

    Recently, layer-by-layer manufacturing, or additive manufacturing (AM), has been used in many application fields. Selective laser melting (SLM) is the most attractive method for building parts layer by layer from metallic powders. However, industrial application of AM in general and SLM in particular still faces barriers related to the quality of the manufactured parts, which is affected by high residual stresses and large deformations. The SLM process is characterized by a concentrated heat source and fast solidification, which lead to large thermal stresses. The aim of this research is to develop a system for predicting the quality of printed parts during the SLM process by simulation, taking the temperature distribution in the workpiece into account. To implement the system, a model for predicting the temperature distribution was established, and the influences of the process parameters on the temperature distribution were analysed. The thermal model, which relates the printing parameters to the temperature distribution, is used to optimize the printing process parameters. These results are then used to calculate residual stresses and predict workpiece deformation. The functionality of the proposed predictive system is demonstrated through a case study on an aluminium part manufactured on a MetalSys150 SLM machine.
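
    As a first-order stand-in for the temperature-distribution step, the sketch below evaluates the classical quasi-steady Rosenthal point-source solution, T = T0 + Q/(2πkR)·exp(−v(ξ+R)/(2α)), which is often used for quick screening of melt-pool temperatures in SLM; the material and process values are placeholders, and the record's own thermal model is likely more elaborate.

        import numpy as np

        def rosenthal_temperature(xi, y, z, power=200.0, absorptivity=0.4, speed=0.8,
                                  k=20.0, rho=7800.0, cp=500.0, T0=293.0):
            """Quasi-steady Rosenthal point-source estimate of the temperature field.

            xi, y, z : coordinates (m) in a frame moving with the laser
                       (xi along the scan direction, negative behind the beam)
            Remaining arguments are placeholder laser/material values (W, m/s,
            W/mK, kg/m3, J/kgK, K).  Screening estimate only; it diverges at the
            beam and ignores latent heat, the powder layer and the finite spot.
            """
            alpha = k / (rho * cp)                  # thermal diffusivity
            Q = absorptivity * power                # absorbed laser power
            R = np.maximum(np.sqrt(xi**2 + y**2 + z**2), 1e-6)
            return T0 + Q / (2.0 * np.pi * k * R) * np.exp(-speed * (xi + R) / (2.0 * alpha))

        # Hypothetical usage: 1 mm behind the beam on the scan track (roughly 900 K here)
        print(rosenthal_temperature(xi=-1e-3, y=0.0, z=0.0))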

  9. The effect of the volumetric heat source distribution of the fuel pellet on the minimum DNBR ratio

    International Nuclear Information System (INIS)

    Hordosy, G.; Kereszturi, A.; Maroti, L.; Trosztel, I.

    1995-01-01

    The radial power distribution in a VVER-440 type fuel assembly is strongly non-uniform as a result of the water gap between the shrouds and the moderator-filled central tube. Consequently, the power density inside a single fuel rod can be expected to be inhomogeneous as well. In the paper, the methodology and the results of coupled thermohydraulic and neutronic calculations are presented. The objective of the analysis was the investigation of the heat source distribution and the determination of the possible extent of the power non-uniformity in a corner rod, which always has the highest peaking factor in a VVER-440 type assembly. The results of the analysis revealed that there can be a strong non-uniformity of the power distribution inside a fuel pellet, and that the effect depends first of all on the general assembly conditions, while the local subchannel parameters have only a slight influence on the pellet heat source distribution. (author)

  10. Mechanistic variables can enhance predictive models of endotherm distributions: The American pika under current, past, and future climates

    Science.gov (United States)

    Mathewson, Paul; Moyer-Horner, Lucas; Beever, Erik; Briscoe, Natalie; Kearney, Michael T.; Yahn, Jeremiah; Porter, Warren P.

    2017-01-01

    How climate constrains species’ distributions through time and space is an important question in the context of conservation planning for climate change. Despite increasing awareness of the need to incorporate mechanism into species distribution models (SDMs), mechanistic modeling of endotherm distributions remains limited in this literature. Using the American pika (Ochotona princeps) as an example, we present a framework whereby mechanism can be incorporated into endotherm SDMs. Pika distribution has repeatedly been found to be constrained by warm temperatures, so we used Niche Mapper, a mechanistic heat-balance model, to convert macroclimate data to pika-specific surface activity time in summer across the western United States. We then explored the difference between using a macroclimate predictor (summer temperature) and using a mechanistic predictor (predicted surface activity time) in SDMs. Both approaches accurately predicted pika presences in current and past climate regimes. However, the activity models predicted 8–19% less habitat loss in response to annual temperature increases of ~3–5 °C predicted in the region by 2070, suggesting that pikas may be able to buffer some climate change effects through behavioral thermoregulation that can be captured by mechanistic modeling. Incorporating mechanism added value to the modeling by providing increased confidence in areas where different modeling approaches agreed and providing a range of outcomes in areas of disagreement. It also provided a more proximate variable relating animal distribution to climate, allowing investigations into how unique habitat characteristics and intraspecific phenotypic variation may allow pikas to exist in areas outside those predicted by generic SDMs. Only a small number of easily obtainable data are required to parameterize this mechanistic model for any endotherm, and its use can improve SDM predictions by explicitly modeling a widely applicable direct physiological effect

  11. Mechanistic variables can enhance predictive models of endotherm distributions: the American pika under current, past, and future climates.

    Science.gov (United States)

    Mathewson, Paul D; Moyer-Horner, Lucas; Beever, Erik A; Briscoe, Natalie J; Kearney, Michael; Yahn, Jeremiah M; Porter, Warren P

    2017-03-01

    How climate constrains species' distributions through time and space is an important question in the context of conservation planning for climate change. Despite increasing awareness of the need to incorporate mechanism into species distribution models (SDMs), mechanistic modeling of endotherm distributions remains limited in this literature. Using the American pika (Ochotona princeps) as an example, we present a framework whereby mechanism can be incorporated into endotherm SDMs. Pika distribution has repeatedly been found to be constrained by warm temperatures, so we used Niche Mapper, a mechanistic heat-balance model, to convert macroclimate data to pika-specific surface activity time in summer across the western United States. We then explored the difference between using a macroclimate predictor (summer temperature) and using a mechanistic predictor (predicted surface activity time) in SDMs. Both approaches accurately predicted pika presences in current and past climate regimes. However, the activity models predicted 8-19% less habitat loss in response to annual temperature increases of ~3-5 °C predicted in the region by 2070, suggesting that pikas may be able to buffer some climate change effects through behavioral thermoregulation that can be captured by mechanistic modeling. Incorporating mechanism added value to the modeling by providing increased confidence in areas where different modeling approaches agreed and providing a range of outcomes in areas of disagreement. It also provided a more proximate variable relating animal distribution to climate, allowing investigations into how unique habitat characteristics and intraspecific phenotypic variation may allow pikas to exist in areas outside those predicted by generic SDMs. Only a small number of easily obtainable data are required to parameterize this mechanistic model for any endotherm, and its use can improve SDM predictions by explicitly modeling a widely applicable direct physiological effect

  12. SU-F-T-24: Impact of Source Position and Dose Distribution Due to Curvature of HDR Transfer Tubes

    Energy Technology Data Exchange (ETDEWEB)

    Khan, A; Yue, N [Rutgers University, New Brunswick, NJ (United States)

    2016-06-15

    Purpose: Brachytherapy is a highly targeted form of radiotherapy. While this may lead to ideal dose distributions in the treatment planning system, a small error in source location can lead to a change in the dose distribution. The purpose of this study is to quantify the source position error due to curvature of the transfer tubes and the impact this may have on the dose distribution. Methods: Since the source travels along the midline of the tube, an estimate of the positioning error for various angles of curvature was determined using geometric properties of the tube. Based on the range of values, a specific shift was chosen to alter the treatment plans for a number of cervical cancer patients who had undergone HDR brachytherapy boost using tandem and ovoids. The impact on dose to the target and organs at risk was determined and checked against guidelines outlined by the radiation oncologist. Results: The estimated positioning error was 2 mm short of the expected position (a curved tube can only cause the source to not reach as far as with a straight tube). The quantitative impact on the dose distribution is still being analyzed. Conclusion: The accepted tolerance for the source position of an HDR brachytherapy unit is plus or minus 1 mm. If there is an additional 2 mm discrepancy due to tube curvature, the source can end up 1 mm to 3 mm short of the expected location. While we always attempt to keep the tubes straight, in some cases such as tandem and ovoids the tandem connector does not extend as far out from the patient, so the ovoid tubes always contain some degree of curvature. The dose impact of this may be significant.

  13. The P1-approximation for the Distribution of Neutrons from a Pulsed Source in Hydrogen

    International Nuclear Information System (INIS)

    Claesson, A.

    1963-12-01

    The asymptotic distribution of neutrons from a pulsed, high energy source in an infinite moderator has been obtained earlier in a 'diffusion' approximation. In that paper the cross section was assumed to be constant over the whole energy region and the time derivative of the first moment was disregarded. Here, first, an analytic expression is obtained for the density in a P1-approximation. However, the result is very complicated, and it is shown that an asymptotic solution can be found in a simpler way. By taking into account the low hydrogen scattering cross section at the source energy it follows that the space dependence of the distribution is less than that obtained earlier. The importance of keeping the time derivative of the first moment is further shown in a perturbation approximation.

  14. Regulatory actions to expand the offer of distributed generation from renewable energy sources in Brazil

    International Nuclear Information System (INIS)

    Pepitone da Nóbrega, André; Cabral Carvalho, Carlos Eduardo

    2015-01-01

    The composition of the Brazilian electric energy matrix has undergone transformations in recent years, yet it has maintained a significant share of renewable energy sources, in particular hydropower plants of various magnitudes. Reasons for the growth of other renewable sources, such as wind and solar, include the fact that the remaining hydropower capacity is mainly located in the Amazon, far from centers of consumption; the need to diversify the energy mix and reduce dependence on hydrologic regimes; increasing environmental restrictions; and rising civil construction and land costs. Wind power generation has grown most significantly in Brazil, and positive results in the latest energy auctions show that it has reached competitive pricing. Solar energy is still incipient in Brazil, despite its high potential for conversion into electric energy; its contribution to the Brazilian electric energy matrix mainly involves solar power plants and distributed generation. Biomass thermal plants, mainly those using sugar cane bagasse, also play an important role in renewable generation in Brazil. This paper aims to present an overview of the present situation and to discuss the actions and regulations to expand the supply of renewable distributed generation in Brazil, mainly from wind, solar and biomass energy sources. (full text)

  15. Geophysical Prediction Technology Based on Organic Carbon Content in Source Rocks of the Huizhou Sag, the South China Sea

    Directory of Open Access Journals (Sweden)

    Yang Wei

    2017-08-01

    Full Text Available Due to the high exploration cost, the limited number of source rock drilling wells and the scarcity of test samples for Total Organic Carbon content (TOC) in the Huizhou sag, TOC prediction for source rocks in this area and assessment of the resource potential of the basin face great challenges. Previous TOC prediction studies usually adopted well-log assessment methods, but such data are confined to a single 'point', and regional prediction of the source bed on seismic profiles has largely depended on the recognition of seismic facies, making it difficult to quantify TOC. In this study, we combined source rock geological characteristics with logging and seismic responses and built a mathematical relation between a quasi-TOC curve and seismic data, based on the TOC logging data of a single well and its internal seismic attribute. The result suggests that the relationship is not the purely linear one assumed by previous workers, but a complicated non-linear one. Therefore, a neural network algorithm and SVMs were introduced to obtain the optimum relationship between the quasi-TOC curve and the seismic attribute, and TOC prediction was then realized through seismic inversion.

  16. CFD prediction of flow and phase distribution in fuel assemblies with spacers

    Energy Technology Data Exchange (ETDEWEB)

    Anglart, H.; Nylund, O. [ABB Atom AB, Vasteras (Sweden); Kurul, N. [Rensselaer Polytechnic Institute, Troy, NY (United States)] [and others

    1995-09-01

    This paper is concerned with the modeling and computation of multi-dimensional two-phase flows in BWR fuel assemblies. The modeling principles are presented based on using a two-fluid model in which lateral interfacial effects are accounted for. This model has been used to evaluate the velocity fields of both vapor and liquid phases, as well as phase distribution, between fuel elements in geometries similar to BWR fuel bundles. Furthermore, this model has been used to predict, in a detailed mechanistic manner, the effects of spacers on flow and phase distribution between, and pressure drop along, fuel elements. The related numerical simulations have been performed using a CFD computer code, CFDS-FLOW3D.

  17. Mathematical model of heat transfer to predict distribution of hardness through the Jominy bar

    International Nuclear Information System (INIS)

    Lopez, E.; Hernandez, J. B.; Solorio, G.; Vergara, H. J.; Vazquez, O.; Garnica, F.

    2013-01-01

    The heat transfer coefficient at the bottom surface of a Jominy end-quench specimen was estimated by solving the inverse heat conduction problem. A mathematical model based on the finite-difference method was developed to predict thermal paths and the volume fractions of transformed phases. The model was implemented in the commercial package Microsoft Visual Basic v. 6. The calculated thermal paths and final phase distribution were used to evaluate the hardness distribution along an AISI 4140 Jominy bar. (Author)
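
    A minimal version of such a finite-difference model is sketched below: an explicit 1-D conduction scheme with a convective boundary at the quenched end, driven by a heat transfer coefficient like the one recovered from the inverse solution; the material properties and bar dimensions are placeholder values, not the paper's.

        import numpy as np

        def jominy_cooling_curves(h=10000.0, L=0.1, nx=51, t_end=60.0,
                                  k=40.0, rho=7800.0, cp=520.0, T0=1123.0, Tq=298.0):
            """Explicit 1-D finite-difference model of a Jominy end-quench bar.

            h : heat transfer coefficient at the quenched end (W/m^2K), e.g. the
                value recovered from the inverse conduction solution
            L : bar length (m); the far end and lateral surface are taken adiabatic
            Returns (axial positions, temperature history).  Placeholder properties.
            """
            dx = L / (nx - 1)
            alpha = k / (rho * cp)
            dt = 0.25 * dx**2 / alpha               # stable explicit time step
            T = np.full(nx, T0)
            history = [T.copy()]
            for _ in range(int(t_end / dt)):
                Tn = T.copy()
                Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
                # Convective boundary at the quenched end (node 0), ghost-node form
                Tn[0] = T[0] + alpha * dt / dx**2 * (
                    2.0 * (T[1] - T[0]) - 2.0 * dx * h / k * (T[0] - Tq))
                Tn[-1] = Tn[-2]                     # insulated far end
                T = Tn
                history.append(T.copy())
            return np.linspace(0.0, L, nx), np.array(history)

        # Hypothetical usage: cooling curves for an assumed h of 10 kW/m^2K
        x, T_hist = jominy_cooling_curves()
        print(T_hist[-1][:5])        # temperatures near the quenched end after 60 s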

  18. Fast optical source for quantum key distribution based on semiconductor optical amplifiers.

    Science.gov (United States)

    Jofre, M; Gardelein, A; Anzolin, G; Amaya, W; Capmany, J; Ursin, R; Peñate, L; Lopez, D; San Juan, J L; Carrasco, J A; Garcia, F; Torcal-Milla, F J; Sanchez-Brea, L M; Bernabeu, E; Perdigues, J M; Jennewein, T; Torres, J P; Mitchell, M W; Pruneri, V

    2011-02-28

    A novel integrated optical source capable of emitting faint pulses with different polarization states and with different intensity levels at 100 MHz has been developed. The source relies on a single laser diode followed by four semiconductor optical amplifiers and thin film polarizers, connected through a fiber network. The use of a single laser ensures high level of indistinguishability in time and spectrum of the pulses for the four different polarizations and three different levels of intensity. The applicability of the source is demonstrated in the lab through a free space quantum key distribution experiment which makes use of the decoy state BB84 protocol. We achieved a lower bound secure key rate of the order of 3.64 Mbps and a quantum bit error ratio as low as 1.14×10⁻² while the lower bound secure key rate became 187 bps for an equivalent attenuation of 35 dB. To our knowledge, this is the fastest polarization encoded QKD system which has been reported so far. The performance, reduced size, low power consumption and the fact that the components used can be space qualified make the source particularly suitable for secure satellite communication.

  19. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of the Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  20. Investigation of Anisotropy Caused by Cylinder Applicator on Dose Distribution around Cs-137 Brachytherapy Source using MCNP4C Code

    Directory of Open Access Journals (Sweden)

    Sedigheh Sina

    2011-06-01

    Full Text Available Introduction: Brachytherapy is a type of radiotherapy in which radioactive sources are placed in proximity to tumors, normally for treatment of malignancies of the head, prostate and cervix. Materials and Methods: The Cs-137 Selectron source is a low-dose-rate (LDR) brachytherapy source used in a remote afterloading system for treatment of different cancers. This system uses active and inactive spherical sources of 2.5 mm diameter, which can be arranged in different configurations inside the applicator to obtain different dose distributions. In this study, the dose distribution at different distances from the source was first obtained around a single pellet inside the applicator in a water phantom using the MCNP4C Monte Carlo code. The simulations were then repeated for six active pellets in the applicator and for six point sources. Results: The anisotropy of the dose distribution due to the presence of the applicator was obtained by dividing the dose at each distance and angle by the dose at the same distance at an angle of 90 degrees. According to the results, the dose decreases towards the applicator tips. For example, for points at distances of 5 and 7 cm from the source and an angle of 165 degrees, the discrepancies reached 5.8% and 5.1%, respectively. By increasing the number of pellets to six, these values reached 30% at an angle of 5 degrees. Discussion and Conclusion: The results indicate that the presence of the applicator causes a significant dose decrease at the tip of the applicator compared with the dose in the transverse plane. However, treatment planning systems assume an isotropic dose distribution around the source, which causes non-negligible errors in treatment planning, especially for a large number of sources inside the applicator.
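
    The anisotropy measure used above is simply the dose at each distance and angle divided by the dose at the same distance on the transverse (90-degree) axis; the sketch below applies that ratio to a small hypothetical dose table such as one tallied from a Monte Carlo run.

        import numpy as np

        def anisotropy_ratio(dose, angles_deg):
            """Dose anisotropy of a table D(r, theta) relative to the transverse axis.

            dose       : 2-D array, rows = radial distances, columns = polar angles
            angles_deg : 1-D array of the polar angles of the columns
            Returns dose(r, theta) / dose(r, 90 deg), the ratio used in the record.
            """
            i90 = int(np.argmin(np.abs(np.asarray(angles_deg) - 90.0)))
            return dose / dose[:, [i90]]

        # Hypothetical 3 x 5 dose table (arbitrary units) at angles 5-165 degrees
        angles = np.array([5.0, 45.0, 90.0, 135.0, 165.0])
        dose = np.array([[0.70, 0.95, 1.00, 0.96, 0.72],
                         [0.68, 0.94, 1.00, 0.95, 0.70],
                         [0.65, 0.93, 1.00, 0.94, 0.69]])
        print(anisotropy_ratio(dose, angles))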

  1. Proposed Use of the NASA Ames Nebula Cloud Computing Platform for Numerical Weather Prediction and the Distribution of High Resolution Satellite Imagery

    Science.gov (United States)

    Limaye, Ashutosh S.; Molthan, Andrew L.; Srikishen, Jayanthi

    2010-01-01

    The development of the Nebula Cloud Computing Platform at NASA Ames Research Center provides an open-source solution for the deployment of scalable computing and storage capabilities relevant to the execution of real-time weather forecasts and the distribution of high resolution satellite data to the operational weather community. Two projects at Marshall Space Flight Center may benefit from use of the Nebula system. The NASA Short-term Prediction Research and Transition (SPoRT) Center facilitates the use of unique NASA satellite data and research capabilities in the operational weather community by providing datasets relevant to numerical weather prediction, and satellite data sets useful in weather analysis. SERVIR provides satellite data products for decision support, emphasizing environmental threats such as wildfires, floods, landslides, and other hazards, with interests in numerical weather prediction in support of disaster response. The Weather Research and Forecast (WRF) model Environmental Modeling System (WRF-EMS) has been configured for Nebula cloud computing use via the creation of a disk image and deployment of repeated instances. Given the available infrastructure within Nebula and the "infrastructure as a service" concept, the system appears well-suited for the rapid deployment of additional forecast models over different domains, in response to real-time research applications or disaster response. Future investigations into Nebula capabilities will focus on the development of a web mapping server and load balancing configuration to support the distribution of high resolution satellite data sets to users within the National Weather Service and international partners of SERVIR.

  2. Parameter optimization in biased decoy-state quantum key distribution with both source errors and statistical fluctuations

    Science.gov (United States)

    Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin

    2017-10-01

    The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of the practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model of full parameter optimization in biased decoy-state QKD with phase-randomized sources. Besides, we adopt this model to carry out simulations of two widely used sources: weak coherent source (WCS) and heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. And when taking source errors and statistical fluctuations into account, the performance of decoy-state QKD using HSPS suffered less than that of decoy-state QKD using WCS.

  3. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Barni Mauro

    2007-01-01

    Full Text Available This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, up till now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.

  4. Influence of the Human Skin Tumor Type in Photodynamic Therapy Analysed by a Predictive Model

    Directory of Open Access Journals (Sweden)

    I. Salas-García

    2012-01-01

    Full Text Available Photodynamic Therapy (PDT) modeling allows the prediction of the treatment results depending on the lesion properties, the photosensitizer distribution, or the optical source characteristics. We employ a predictive PDT model and apply it to different skin tumors. It takes into account the optical radiation distribution, a nonhomogeneous topical photosensitizer spatial-temporal distribution, and the time-dependent photochemical interaction. The predicted singlet oxygen molecular concentrations with varying optical irradiance are compared and could be directly related to the necrosis area. The results show a strong dependence on the particular lesion. This suggests the need to design optimal PDT treatment protocols adapted to the specific patient and lesion.

  5. Power Law Distributions in the Experiment for Adjustment of the Ion Source of the NBI System

    International Nuclear Information System (INIS)

    Han Xiaopu; Hu Chundong

    2005-01-01

    The empirical adjustment process of the ion source of the neutral beam injector system for the HT-7 Tokamak is reported in this paper. For data obtained under the same conditions, when the arc current intensities of all shots are arranged in decreasing rank order, the distributions of the arc current intensity follow power laws, and the distribution obtained with the cryo-pump follows a double Pareto distribution. Using the same approach, the distributions of the arc duration are also close to power laws. These power-law distributions arise rather naturally instead of being the result of deliberate tuning.
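
    The rank-ordering check described above can be reproduced in a few lines: sort the arc currents in decreasing order, plot rank against value on log-log axes, and fit the slope; a single straight line indicates a power law, while a double Pareto shows two distinct slopes. The sketch below uses synthetic Pareto-distributed currents, not the HT-7 data.

        import numpy as np

        def rank_order_slope(values):
            """Slope of log(rank) versus log(value) for a decreasing rank plot.

            Sorting the values in decreasing order and regressing log(rank) on
            log(value) gives an approximately straight line for a power law; a
            double Pareto shows up as two distinct slopes.
            """
            v = np.sort(np.asarray(values, dtype=float))[::-1]   # decreasing rank
            ranks = np.arange(1, v.size + 1)
            slope, _ = np.polyfit(np.log(v), np.log(ranks), 1)
            return slope

        # Synthetic stand-in for arc currents: Pareto-distributed values
        samples = (np.random.default_rng(2).pareto(2.5, size=500) + 1.0) * 100.0
        print(rank_order_slope(samples))        # roughly -2.5 for this synthetic case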

  6. Predicted altitudinal shifts and reduced spatial distribution of Leishmania infantum vector species under climate change scenarios in Colombia.

    Science.gov (United States)

    González, Camila; Paz, Andrea; Ferro, Cristina

    2014-01-01

    Visceral leishmaniasis (VL) is caused by the trypanosomatid parasite Leishmania infantum (=Leishmania chagasi), and is epidemiologically relevant due to its wide geographic distribution, the number of annual cases reported and the increase in its co-infection with HIV. Two vector species have been incriminated in the Americas: Lutzomyia longipalpis and Lutzomyia evansi. In Colombia, L. longipalpis is distributed along the Magdalena River Valley while L. evansi is only found in the northern part of the country. Regarding the epidemiology of the disease, in Colombia the incidence of VL has decreased over the last few years without any intervention being implemented. Additionally, changes in transmission cycles have been reported, with urban transmission occurring on the Caribbean Coast. In Europe and North America climate change seems to be driving a latitudinal shift of leishmaniasis transmission. Here, we explored the spatial distribution of the two known vector species of L. infantum in Colombia and projected their future distributions under climate change scenarios to establish the expansion potential of the disease. An updated database including L. longipalpis and L. evansi collection records from Colombia was compiled. Ecological niche models were performed for each species using the Maxent software and 13 Worldclim bioclimatic coverages. Projections were made for the pessimistic CSIRO A2 scenario, which predicts the largest increase in temperature under no emission reduction, and the optimistic Hadley B2 scenario, which predicts the minimum increase in temperature. The database contained 23 records for L. evansi and 39 records for L. longipalpis, distributed along the Magdalena River Valley and the Caribbean Coast, where the potential distribution areas of both species were also predicted by Maxent. Climate change projections showed a general overall reduction in the spatial distribution of the two vector species, promoting a shift in altitudinal distribution for L

  7. A GIS model predicting potential distributions of a lineage: a test case on hermit spiders (Nephilidae: Nephilengys).

    Science.gov (United States)

    Năpăruş, Magdalena; Kuntner, Matjaž

    2012-01-01

    Although numerous studies model species distributions, these models are almost exclusively on single species, while studies of evolutionary lineages are preferred as they by definition study closely related species with shared history and ecology. Hermit spiders, genus Nephilengys, represent an ecologically important but relatively species-poor lineage with a globally allopatric distribution. Here, we model Nephilengys global habitat suitability based on known localities and four ecological parameters. We geo-referenced 751 localities for the four most studied Nephilengys species: N. cruentata (Africa, New World), N. livida (Madagascar), N. malabarensis (S-SE Asia), and N. papuana (Australasia). For each locality we overlaid four ecological parameters: elevation, annual mean temperature, annual mean precipitation, and land cover. We used linear backward regression within ArcGIS to select two best fit parameters per species model, and ModelBuilder to map areas of high, moderate and low habitat suitability for each species within its directional distribution. For Nephilengys cruentata suitable habitats are mid elevation tropics within Africa (natural range), a large part of Brazil and the Guianas (area of synanthropic spread), and even North Africa, Mediterranean, and Arabia. Nephilengys livida is confined to its known range with suitable habitats being mid-elevation natural and cultivated lands. Nephilengys malabarensis, however, ranges across the Equator throughout Asia where the model predicts many areas of high ecological suitability in the wet tropics. Its directional distribution suggests the species may potentially spread eastwards to New Guinea where the suitable areas of N. malabarensis largely surpass those of the native N. papuana, a species that prefers dry forests of Australian (sub)tropics. Our model is a customizable GIS tool intended to predict current and future potential distributions of globally distributed terrestrial lineages. Its predictive

  8. A GIS model predicting potential distributions of a lineage: a test case on hermit spiders (Nephilidae: Nephilengys.

    Directory of Open Access Journals (Sweden)

    Magdalena Năpăruş

    Full Text Available BACKGROUND: Although numerous studies model species distributions, these models are almost exclusively on single species, while studies of evolutionary lineages are preferred as they by definition study closely related species with shared history and ecology. Hermit spiders, genus Nephilengys, represent an ecologically important but relatively species-poor lineage with a globally allopatric distribution. Here, we model Nephilengys global habitat suitability based on known localities and four ecological parameters. METHODOLOGY/PRINCIPAL FINDINGS: We geo-referenced 751 localities for the four most studied Nephilengys species: N. cruentata (Africa, New World), N. livida (Madagascar), N. malabarensis (S-SE Asia), and N. papuana (Australasia). For each locality we overlaid four ecological parameters: elevation, annual mean temperature, annual mean precipitation, and land cover. We used linear backward regression within ArcGIS to select two best fit parameters per species model, and ModelBuilder to map areas of high, moderate and low habitat suitability for each species within its directional distribution. For Nephilengys cruentata suitable habitats are mid elevation tropics within Africa (natural range), a large part of Brazil and the Guianas (area of synanthropic spread), and even North Africa, Mediterranean, and Arabia. Nephilengys livida is confined to its known range with suitable habitats being mid-elevation natural and cultivated lands. Nephilengys malabarensis, however, ranges across the Equator throughout Asia where the model predicts many areas of high ecological suitability in the wet tropics. Its directional distribution suggests the species may potentially spread eastwards to New Guinea where the suitable areas of N. malabarensis largely surpass those of the native N. papuana, a species that prefers dry forests of Australian (sub)tropics. CONCLUSIONS: Our model is a customizable GIS tool intended to predict current and future potential

  9. Remotely Sensed High-Resolution Global Cloud Dynamics for Predicting Ecosystem and Biodiversity Distributions.

    Directory of Open Access Journals (Sweden)

    Adam M Wilson

    2016-03-01

    Full Text Available Cloud cover can influence numerous important ecological processes, including reproduction, growth, survival, and behavior, yet our assessment of its importance at the appropriate spatial scales has remained remarkably limited. If captured over a large extent yet at sufficiently fine spatial grain, cloud cover dynamics may provide key information for delineating a variety of habitat types and predicting species distributions. Here, we develop new near-global, fine-grain (≈1 km) monthly cloud frequencies from 15 y of twice-daily Moderate Resolution Imaging Spectroradiometer (MODIS) satellite images that expose spatiotemporal cloud cover dynamics of previously undocumented global complexity. We demonstrate that cloud cover varies strongly in its geographic heterogeneity and that the direct, observation-based nature of cloud-derived metrics can improve predictions of habitats, ecosystem, and species distributions with reduced spatial autocorrelation compared to commonly used interpolated climate data. These findings support the fundamental role of remote sensing as an effective lens through which to understand and globally monitor the fine-grain spatial variability of key biodiversity and ecosystem properties.

  10. Distributed video coding with multiple side information

    DEFF Research Database (Denmark)

    Huang, Xin; Brites, C.; Ascenso, J.

    2009-01-01

    Distributed Video Coding (DVC) is a new video coding paradigm which mainly exploits the source statistics at the decoder based on the availability of some decoder side information. The quality of the side information has a major impact on the DVC rate-distortion (RD) performance, in the same way the quality of the predictions had a major impact in predictive video coding. In this paper, a DVC solution exploiting multiple side information is proposed; the multiple side information is generated by frame interpolation and frame extrapolation targeting to improve the side information of a single

  11. On Road Study of Colorado Front Range Greenhouse Gases Distribution and Sources

    Science.gov (United States)

    Petron, G.; Hirsch, A.; Trainer, M. K.; Karion, A.; Kofler, J.; Sweeney, C.; Andrews, A.; Kolodzey, W.; Miller, B. R.; Miller, L.; Montzka, S. A.; Kitzis, D. R.; Patrick, L.; Frost, G. J.; Ryerson, T. B.; Robers, J. M.; Tans, P.

    2008-12-01

    The Global Monitoring Division and Chemical Sciences Division of the NOAA Earth System Research Laboratory teamed up over the summer of 2008 to experiment with a new measurement strategy for characterizing greenhouse gas distributions and sources in the Colorado Front Range. Combining expertise in greenhouse gas measurements and in local- to regional-scale air quality intensive campaigns, we built the 'Hybrid Lab'. A continuous CO2 and CH4 cavity ring-down spectroscopic analyzer (Picarro, Inc.), a CO gas-filter correlation instrument (Thermo Environmental, Inc.) and a continuous UV absorption ozone monitor (2B Technologies, Inc., model 202SC) were installed securely onboard a 2006 Toyota Prius hybrid vehicle with an inlet bringing in outside air from a few meters above the ground. To better characterize point and distributed sources, air samples were taken with a Portable Flask Package (PFP) for later multiple-species analysis in the lab. A GPS unit hooked up to the ozone analyzer and another installed on the PFP kept track of our location, allowing us to map measured concentrations along the driving route using Google Earth. The Hybrid Lab went out for several drives in the vicinity of the NOAA Boulder Atmospheric Observatory (BAO) tall tower located in Erie, CO, covering areas from Boulder, Denver, Longmont, Fort Collins and Greeley. Enhancements in CO2 and CO and destruction of ozone mainly reflect emissions from traffic. Methane enhancements, however, are clearly correlated with nearby point sources (landfill, feedlot, natural gas compressor ...) or with larger-scale air masses advected from NE Colorado, where oil and gas drilling operations are widespread. The multiple-species analysis (hydrocarbons, CFCs, HFCs) of the air samples collected along the way provides insightful information about the methane sources at play. We will present results of the analysis and interpretation of the Hybrid Lab Front Range Study and conclude with perspectives

  12. Agent paradigm and services technology for distributed Information Sources

    Directory of Open Access Journals (Sweden)

    Hakima Mellah

    2011-10-01

    Full Text Available The complexity of information arises from interacting information sources (IS), and could be better exploited with respect to the relevance of information. In a distributed IS system, relevant information has content that is connected with other content in the information network, and is used for a certain purpose. The key point of the proposed model is its contribution to information system agility according to a three-dimensional view involving the content, the use and the structure. This reflects the relevance of information complexity and of effective methodologies, based on a self-organization principle, to manage that complexity. This contribution focuses primarily on presenting some factors that lead to and trigger self-organization in a Service Oriented Architecture (SOA) and on how a self-organization mechanism can be integrated into it.

  13. Prediction of residual stress distribution in multi-stacked thin film by curvature measurement and iterative FEA

    International Nuclear Information System (INIS)

    Choi, Hyeon Chang; Park, Jun Hyub

    2005-01-01

    In this study, the residual stress distribution in multi-stacked films produced by the MEMS (Micro-Electro-Mechanical System) process is predicted using the Finite Element Method (FEM). We develop a finite element program for REsidual Stress Analysis (RESA) in multi-stacked films. The RESA predicts the distribution of the residual stress field in a multi-stacked film. Curvatures of the multi-stacked film and of the single layers which make up the multi-stacked film are used as the input to the RESA. Measuring those curvatures is easier than measuring a distribution of residual stress. To verify the RESA, mean stresses and stress gradients of single layers and multilayers are measured. The mean stresses are calculated from the curvatures of the deposited wafer by using Stoney's equation. The stress gradients are calculated from the vertical deflection at the end of a cantilever beam. To measure the mean stress of each layer in the multi-stacked film, we measure the curvature of the wafer with the film after etching the multi-stacked film layer by layer
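
    For reference, the mean film stress referred to above is conventionally obtained from the measured change in wafer curvature through Stoney's equation; the form below is quoted from general thin-film practice rather than from this paper, so symbols and sign conventions may differ from those used by the authors:

        \[ \sigma_f = \frac{E_s\, h_s^{2}}{6\,(1-\nu_s)\, h_f}\left(\frac{1}{R_{\mathrm{post}}} - \frac{1}{R_{\mathrm{pre}}}\right) \]

    where E_s and ν_s are the substrate Young's modulus and Poisson ratio, h_s and h_f the substrate and film thicknesses, and R_pre, R_post the wafer radii of curvature before and after film deposition.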

  14. Ecology, distribution, and predictive occurrence modeling of Palmer's chipmunk (Tamias palmeri): a high-elevation small mammal endemic to the Spring Mountains in southern Nevada, USA

    Science.gov (United States)

    Lowrey, Chris E.; Longshore, Kathleen M.; Riddle, Brett R.; Mantooth, Stacy

    2016-01-01

    Although montane sky islands surrounded by desert scrub and shrub steppe comprise a large part of the biological diversity of the Basin and Range Province of southwestern North America, comprehensive ecological and population demographic studies for high-elevation small mammals within these areas are rare. Here, we examine the ecology and population parameters of the Palmer’s chipmunk (Tamias palmeri) in the Spring Mountains of southern Nevada, and present a predictive GIS-based distribution and probability of occurrence model at both home range and geographic spatial scales. Logistic regression analyses and Akaike Information Criterion model selection found variables of forest type, slope, and distance to water sources as predictive of chipmunk occurrence at the geographic scale. At the home range scale, increasing population density, decreasing overstory canopy cover, and decreasing understory canopy cover contributed to increased survival rates.
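
    A minimal sketch of the kind of occurrence modeling described above (logistic regression over candidate habitat variables, compared by AIC) is given below; the variable names and data are invented for illustration and are not the study's survey data.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from itertools import combinations

        # Hypothetical survey table: one row per trap site (not the study's data).
        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            "present":        rng.binomial(1, 0.4, 200),    # chipmunk detected?
            "forest_conifer": rng.binomial(1, 0.5, 200),    # forest-type dummy
            "slope_deg":      rng.uniform(0, 40, 200),
            "dist_water_m":   rng.uniform(0, 2000, 200),
        })

        candidates = ["forest_conifer", "slope_deg", "dist_water_m"]
        fits = []
        for k in range(1, len(candidates) + 1):
            for subset in combinations(candidates, k):
                X = sm.add_constant(df[list(subset)])
                res = sm.Logit(df["present"], X).fit(disp=0)
                fits.append((res.aic, subset))

        # The lowest AIC identifies the most parsimonious candidate model.
        for aic, subset in sorted(fits):
            print("AIC = %7.1f  predictors = %s" % (aic, ", ".join(subset)))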

  15. Distribution, partitioning and sources of polycyclic aromatic hydrocarbons in Daliao River water system in dry season, China

    International Nuclear Information System (INIS)

    Guo Wei; He Mengchang; Yang Zhifeng; Lin Chunye; Quan Xiangchun; Men Bing

    2009-01-01

    Eighteen polycyclic aromatic hydrocarbons (PAHs) were analyzed in 29 surface water, 29 suspended particulate matter (SPM), 28 sediment, and 10 pore water samples from the Daliao River water system in the dry season. The total PAH concentration ranged from 570.2 to 2318.6 ng L−1 in surface water, from 151.0 to 28483.8 ng L−1 in SPM, from 102.9 to 3419.2 ng g−1 in sediment and from 6.3 to 46.4 µg L−1 in pore water. The concentration of dissolved PAHs was higher than that of particulate PAHs at many sites, but the opposite was generally observed at sites of wastewater discharge. The soluble level of PAHs was much higher in the pore water than in the water column. Generally, the water columns of the polluted branch streams contained higher contents of PAHs than their mainstream. The environmental behaviors and fates of PAHs were examined according to physicochemical parameters such as pH, organic carbon, SPM content, water content and grain size in sediments. Results showed that organic carbon was the primary factor controlling the distribution of the PAHs in the Daliao River water system. Partitioning of PAHs between the sediment solid phase and the pore water phase was studied, and the relationship between log Koc and log Kow of PAHs on some sediments was compared with the predicted values. PAHs other than naphthalene and acenaphthylene would be accumulated largely in the sediment of the Daliao River water system. The sources of PAHs were evaluated employing ratios of specific PAH compounds and different wastewater discharge sources, indicating that combustion was the main source of PAH input.

  16. Technical note: Combining quantile forecasts and predictive distributions of streamflows

    Science.gov (United States)

    Bogner, Konrad; Liechti, Katharina; Zappa, Massimiliano

    2017-11-01

    The enhanced availability of many different hydro-meteorological modelling and forecasting systems raises the issue of how to optimally combine this great deal of information. Especially the usage of deterministic and probabilistic forecasts with sometimes widely divergent predicted future streamflow values makes it even more complicated for decision makers to sift out the relevant information. In this study multiple streamflow forecast information will be aggregated based on several different predictive distributions and quantile forecasts. For this combination the Bayesian model averaging (BMA) approach, the non-homogeneous Gaussian regression (NGR), also known as the ensemble model output statistics (EMOS) technique, and a novel method called Beta-transformed linear pooling (BLP) will be applied. With the help of the quantile score (QS) and the continuous ranked probability score (CRPS), the combination results for the Sihl River in Switzerland with about 5 years of forecast data will be compared and the differences between the raw and optimally combined forecasts will be highlighted. The results demonstrate the importance of applying proper forecast combination methods for decision makers in the field of flood and water resource management.
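
    As a toy illustration of the general idea (pool several predictive distributions and verify the pooled forecast with the CRPS), the sketch below uses a plain linear pool with fixed weights and a sample-based CRPS estimator; it is not the BMA/NGR/BLP estimation of the paper, and all numbers are made up.

        import numpy as np

        rng = np.random.default_rng(42)

        def crps_from_sample(sample, obs):
            # Sample-based CRPS estimator: E|X - y| - 0.5 * E|X - X'|
            s = np.asarray(sample, dtype=float)
            return np.mean(np.abs(s - obs)) - 0.5 * np.mean(np.abs(s[:, None] - s[None, :]))

        # Three hypothetical predictive distributions for tomorrow's streamflow (m3/s).
        members = [rng.normal(120, 15, 1000),
                   rng.normal(135, 25, 1000),
                   rng.normal(110, 10, 1000)]
        weights = [0.5, 0.3, 0.2]              # fixed, illustrative pooling weights

        # Linear pool: draw from each member in proportion to its weight.
        pooled = np.concatenate([m[:int(w * 1000)] for m, w in zip(members, weights)])

        observed = 128.0                       # hypothetical verifying observation
        for name, s in zip(["member 1", "member 2", "member 3", "linear pool"],
                           members + [pooled]):
            print("%-11s CRPS = %6.2f" % (name, crps_from_sample(s, observed)))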

  17. Predictive modeling and mapping of Malayan Sun Bear (Helarctos malayanus) distribution using maximum entropy.

    Science.gov (United States)

    Nazeri, Mona; Jusoff, Kamaruzaman; Madani, Nima; Mahmud, Ahmad Rodzi; Bahman, Abdul Rani; Kumar, Lalit

    2012-01-01

    One of the available tools for mapping the geographical distribution and potential suitable habitats is species distribution models. These techniques are very helpful for finding poorly known distributions of species in poorly sampled areas, such as the tropics. Maximum Entropy (MaxEnt) is a recently developed modeling method that can be successfully calibrated using a relatively small number of records. In this research, the MaxEnt model was applied to describe the distribution and identify the key factors shaping the potential distribution of the vulnerable Malayan Sun Bear (Helarctos malayanus) in one of the main remaining habitats in Peninsular Malaysia. MaxEnt results showed that even though Malaysian sun bear habitat is tied to tropical evergreen forests, the species lives within a marginal threshold of bio-climatic variables. On the other hand, current protected area networks within Peninsular Malaysia do not cover most of the sun bear's potentially suitable habitat. Assuming that the predicted suitability map covers the sun bear's actual distribution, future climate change, forest degradation and illegal hunting could potentially severely affect the sun bear's population.
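
    MaxEnt itself is normally run through the dedicated MaxEnt software or packages built around it; as a rough, hedged stand-in, a presence-background model of the same flavour can be sketched with a penalized logistic regression (closely related to MaxEnt), with every predictor and record below invented for illustration:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Hypothetical bioclimatic predictors (annual mean temperature, annual
        # precipitation, forest cover fraction) at presence records and at
        # randomly placed background points.
        presence   = np.column_stack([rng.normal(26, 1.0, 150),
                                      rng.normal(2500, 300, 150),
                                      rng.uniform(0.6, 1.0, 150)])
        background = np.column_stack([rng.normal(24, 3.0, 2000),
                                      rng.normal(2000, 600, 2000),
                                      rng.uniform(0.0, 1.0, 2000)])

        X = np.vstack([presence, background])
        y = np.concatenate([np.ones(len(presence)), np.zeros(len(background))])

        # L2-penalised logistic regression as a crude presence-background model;
        # class_weight="balanced" compensates for the many background points.
        sdm = LogisticRegression(C=1.0, class_weight="balanced", max_iter=1000).fit(X, y)

        # Relative habitat suitability for one new grid cell (hypothetical values).
        cell = np.array([[25.5, 2300.0, 0.8]])
        print("relative suitability:", sdm.predict_proba(cell)[0, 1])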

  18. Predictive modeling and mapping of Malayan Sun Bear (Helarctos malayanus) distribution using maximum entropy.

    Directory of Open Access Journals (Sweden)

    Mona Nazeri

    Full Text Available One of the available tools for mapping the geographical distribution and potential suitable habitats is species distribution models. These techniques are very helpful for finding poorly known distributions of species in poorly sampled areas, such as the tropics. Maximum Entropy (MaxEnt) is a recently developed modeling method that can be successfully calibrated using a relatively small number of records. In this research, the MaxEnt model was applied to describe the distribution and identify the key factors shaping the potential distribution of the vulnerable Malayan Sun Bear (Helarctos malayanus) in one of the main remaining habitats in Peninsular Malaysia. MaxEnt results showed that even though Malaysian sun bear habitat is tied to tropical evergreen forests, the species lives within a marginal threshold of bio-climatic variables. On the other hand, current protected area networks within Peninsular Malaysia do not cover most of the sun bear's potentially suitable habitat. Assuming that the predicted suitability map covers the sun bear's actual distribution, future climate change, forest degradation and illegal hunting could potentially severely affect the sun bear's population.

  19. Distributed Sensor Network for meteorological observations and numerical weather Prediction Calculations

    Directory of Open Access Journals (Sweden)

    Á. Vas

    2013-06-01

    Full Text Available The prediction of weather generally means the solution of differential equations on the basis of measured initial conditions, where the data of close and distant neighboring points are used for the calculations. It requires the maintenance of expensive weather stations and supercomputers. However, if weather stations are not only capable of measuring but can also communicate with each other, then these smart sensors can also be applied to run forecasting calculations. This applies the highest possible level of parallelization without collecting the measured data in one place. Furthermore, if more nodes are involved, the result becomes more accurate, but the computing power required from one node does not increase. Our Distributed Sensor Network for meteorological sensing and numerical weather Prediction Calculations (DSN-PC) can be applied in several different areas where sensing and numerical calculations, even the solution of differential equations, are needed.

  20. Predicting the distribution of commercially important invertebrate stocks under future climate.

    Directory of Open Access Journals (Sweden)

    Bayden D Russell

    Full Text Available The future management of commercially exploited species is challenging because techniques used to predict the future distribution of stocks under climate change are currently inadequate. We projected the future distribution and abundance of two commercially harvested abalone species (blacklip abalone, Haliotis rubra, and greenlip abalone, H. laevigata) inhabiting coastal South Australia, using multiple species distribution models (SDMs) and for decadal time slices through to 2100. Projections are based on two contrasting global greenhouse gas emissions scenarios. The SDMs identified August (winter) Sea Surface Temperature (SST) as the best descriptor of abundance and forecast that warming of winter temperatures under both scenarios may be beneficial to both species by allowing increased abundance and expansion into previously uninhabited coasts. This range expansion is unlikely to be realised, however, as March SST is projected to exceed temperatures which cause up to 10-fold increases in juvenile mortality. By linking fine-resolution forecasts of sea surface temperature under different climate change scenarios to SDMs and physiological experiments, we provide a practical first approximation of the potential impact of climate-induced change on two species of marine invertebrates in the same fishery.

  1. Evaluation of Airborne Remote Sensing Techniques for Predicting the Distribution of Energetic Compounds on Impact Areas

    National Research Council Canada - National Science Library

    Graves, Mark R; Dove, Linda P; Jenkins, Thomas F; Bigl, Susan; Walsh, Marianne E; Hewitt, Alan D; Lambert, Dennis; Perron, Nancy; Ramsey, Charles; Gamey, Jeff; Beard, Les; Doll, William E; Magoun, Dale

    2007-01-01

    .... Remote sensing and geographic information system (GIS) technologies were utilized to assist in the development of enhanced sampling strategies to better predict the landscape-scale distribution of energetic compounds...

  2. Identifying (subsurface) anthropogenic heat sources that influence temperature in the drinking water distribution system

    Science.gov (United States)

    Agudelo-Vera, Claudia M.; Blokker, Mirjam; de Kater, Henk; Lafort, Rob

    2017-09-01

    The water temperature in the drinking water distribution system and at customers' taps approaches the surrounding soil temperature at a depth of 1 m. Water temperature is an important determinant of water quality. In the Netherlands drinking water is distributed without additional residual disinfectant and the temperature of drinking water at customers' taps is not allowed to exceed 25 °C. In recent decades, the urban (sub)surface has been getting more occupied by various types of infrastructures, and some of these can be heat sources. Only recently have the anthropogenic sources and their influence on the underground been studied on coarse spatial scales. Little is known about the urban shallow underground heat profile on small spatial scales, of the order of 10 m × 10 m. Routine water quality samples at the tap in urban areas have shown up locations - so-called hotspots - in the city, with relatively high soil temperatures - up to 7 °C warmer - compared to the soil temperatures in the surrounding rural areas. Yet the sources and the locations of these hotspots have not been identified. It is expected that with climate change during a warm summer the soil temperature in the hotspots can be above 25 °C. The objective of this paper is to find a method to identify heat sources and urban characteristics that locally influence the soil temperature. The proposed method combines mapping of urban anthropogenic heat sources, retrospective modelling of the soil temperature, analysis of water temperature measurements at the tap, and extensive soil temperature measurements. This approach provided insight into the typical range of the variation of the urban soil temperature, and it is a first step to identifying areas with potential underground heat stress towards thermal underground management in cities.

  3. Prediction of oil droplet size distribution in agitated aquatic environments

    International Nuclear Information System (INIS)

    Khelifa, A.; Lee, K.; Hill, P.S.

    2004-01-01

    Oil spilled at sea undergoes many transformations based on physical, biological and chemical processes. Vertical dispersion is the hydrodynamic mechanism controlled by turbulent mixing due to breaking waves, vertical velocity, density gradients and other environmental factors. Spilled oil is dispersed in the water column as small oil droplets. In order to estimate the mass of an oil slick in the water column, it is necessary to know how the droplets formed. Also, the vertical dispersion and fate of oil spilled in aquatic environments can be modelled if the droplet-size distribution of the oil droplets is known. An oil spill remediation strategy can then be implemented. This paper presented a newly developed Monte Carlo model to predict droplet-size distribution due to Brownian motion, turbulence and a differential settling at equilibrium. A kinematic model was integrated into the proposed model to simulate droplet breakage. The key physical input of the model is the maximum droplet size permissible in the simulation. Laboratory studies were found to be in good agreement with field studies. 26 refs., 1 tab., 5 figs

  4. The GIS and analysis of earthquake damage distribution of the 1303 Hongtong M=8 earthquake

    Science.gov (United States)

    Gao, Meng-Tan; Jin, Xue-Shen; An, Wei-Ping; Lü, Xiao-Jian

    2004-07-01

    The geographic information system of the 1303 Hongtong M=8 earthquake has been established. Using the spatial analysis functions of GIS, the spatial distribution characteristics of damage and the isoseismals of the earthquake are studied. By comparison with the standard earthquake intensity attenuation relationship, the abnormal damage distribution of the earthquake is identified, and its relationship with tectonics, site conditions and basins is analyzed. The influence on ground motion of the earthquake source and of the underground structures near the source is also studied. The implications of the abnormal damage distribution for seismic zonation, earthquake-resistant design, earthquake prediction and earthquake emergency response are discussed.

  5. Optimal Allocation of Generalized Power Sources in Distribution Network Based on Multi-Objective Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Li Ran

    2017-01-01

    Full Text Available The optimal allocation of generalized power sources in a distribution network is studied. A simple index of voltage stability is put forward. Considering the investment and operation benefit, the voltage stability and the pollution emissions of generalized power sources in the distribution network, a multi-objective optimization planning model is established. A multi-objective particle swarm optimization algorithm is proposed to solve the model. In order to improve the global search ability, the strategies of fast non-dominated sorting, elitism and crowding distance are adopted in this algorithm. Finally, the model and algorithm were tested on the IEEE 33-node system to find the best configuration of generalized power sources. The results show that with reasonable access of generalized power sources to the active distribution network, the investment benefit and the voltage stability of the system are improved, and the proposed algorithm has better global search capability.

  6. DemQSAR: predicting human volume of distribution and clearance of drugs.

    Science.gov (United States)

    Demir-Kavuk, Ozgur; Bentzien, Jörg; Muegge, Ingo; Knapp, Ernst-Walter

    2011-12-01

    In silico methods characterizing molecular compounds with respect to pharmacologically relevant properties can accelerate the identification of new drugs and reduce their development costs. Quantitative structure-activity/-property relationships (QSAR/QSPR) correlate structure and physico-chemical properties of molecular compounds with a specific functional activity/property under study. Typically a large number of molecular features are generated for the compounds. In many cases the number of generated features exceeds the number of molecular compounds with known property values that are available for learning. Machine learning methods tend to overfit the training data in such situations, i.e. the method adjusts to very specific features of the training data that are not characteristic of the considered property. This problem can be alleviated by diminishing the influence of unimportant, redundant or even misleading features. A better strategy is to eliminate such features completely. Ideally, a molecular property can be described by a small number of features that are chemically interpretable. The purpose of the present contribution is to provide a predictive modeling approach which combines feature generation, feature selection, model building and control of overtraining into a single application called DemQSAR. DemQSAR is used to predict human volume of distribution (VD(ss)) and human clearance (CL). To control overtraining, quadratic and linear regularization terms were employed. A recursive feature selection approach is used to reduce the number of descriptors. The prediction performance is as good as the best predictions reported in the recent literature. The example presented here demonstrates that DemQSAR can generate a model that uses very few features while maintaining high predictive power. A standalone DemQSAR Java application for model building of any user defined property as well as a web interface for the prediction of human VD(ss) and CL is
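
    DemQSAR itself is a dedicated application; purely as a generic, much-simplified sketch of the same workflow (regularization against overtraining plus recursive feature selection over a wide descriptor matrix), the scikit-learn snippet below uses invented data and is not DemQSAR's actual code.

        import numpy as np
        from sklearn.feature_selection import RFE
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)

        # Hypothetical descriptor matrix: 300 compounds x 2000 molecular descriptors,
        # i.e. far more features than training compounds (the overfitting-prone regime).
        X = rng.normal(size=(300, 2000))
        # Hypothetical target (e.g. log VDss) depending on a few descriptors plus noise.
        y = X[:, :5] @ np.array([1.0, -0.8, 0.5, 0.3, -0.2]) + rng.normal(0, 0.3, 300)

        # Quadratic (L2) regularization limits overtraining; recursive feature
        # elimination prunes the descriptor set down to a small, interpretable subset.
        model = RFE(estimator=Ridge(alpha=10.0), n_features_to_select=20, step=0.1)

        scores = cross_val_score(model, X, y, cv=5, scoring="r2")
        print("cross-validated R^2: %.2f +/- %.2f" % (scores.mean(), scores.std()))

        model.fit(X, y)
        print("descriptors kept:", np.flatnonzero(model.support_)[:10], "...")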

  7. XTALOPT version r11: An open-source evolutionary algorithm for crystal structure prediction

    Science.gov (United States)

    Avery, Patrick; Falls, Zackary; Zurek, Eva

    2018-01-01

    Version 11 of XTALOPT, an evolutionary algorithm for crystal structure prediction, has now been made available for download from the CPC library or the XTALOPT website, http://xtalopt.github.io. Whereas the previous versions of XTALOPT were published under the Gnu Public License (GPL), the current version is made available under the 3-Clause BSD License, which is an open source license that is recognized by the Open Source Initiative. Importantly, the new version can be executed via a command line interface (i.e., it does not require the use of a Graphical User Interface). Moreover, the new version is written as a stand-alone program, rather than an extension to AVOGADRO.

  8. Artificial neural network application for predicting soil distribution coefficient of nickel

    International Nuclear Information System (INIS)

    Falamaki, Amin

    2013-01-01

    The distribution (or partition) coefficient (Kd) is an applicable parameter for modeling contaminant and radionuclide transport as well as risk analysis. Selection of this parameter may cause significant error in predicting the impacts of contaminant migration or site-remediation options. In this regard, various models have been presented to predict Kd values for different contaminants, especially heavy metals and radionuclides. In this study, an artificial neural network (ANN) is used to present a simplified model for predicting the Kd of nickel. The main objective is to develop a more accurate model with a minimal number of parameters, which can be determined experimentally or selected from a review of different studies. In addition, the effects of training as well as the type of the network are considered. The Kd of Ni is strongly dependent on the pH of the soil, and mathematical relationships between pH and the Kd of nickel have been presented recently. In this study, the same database used for those models was used to verify that a neural network may be a more useful tool for predicting Kd. Two different types of ANN, multilayer perceptron and radial basis function, were used to investigate the effect of the network geometry on the results. In addition, each network was trained with 80 and 90% of the data and tested on the remaining 20 and 10% of the data, respectively. The results of the networks were then compared with the results of the mathematical models. Although the networks were trained with only 80 and 90% of the data, the results show that all the networks predict with higher accuracy than the mathematical models, which were derived from 100% of the data. More training of a network increases its accuracy. The multilayer perceptron network used in this study predicts better than the radial basis function network. - Highlights: ► Simplified models for predicting Kd of nickel presented using artificial neural networks. ► Multilayer perceptron and radial basis function used to predict Kd of nickel in
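
    A minimal sketch of this kind of network (pH in, Kd of Ni out, with an 80/20 split) is shown below using a scikit-learn multilayer perceptron; the synthetic data stand in for the study's soil database and the linear pH-log Kd trend is only a placeholder.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(7)

        # Synthetic stand-in for the soil database: Kd of Ni rising with pH.
        pH = rng.uniform(3.0, 9.0, 200)
        log_kd = 0.45 * pH - 1.0 + rng.normal(0, 0.15, 200)   # log10 Kd (L/kg), invented

        # 80/20 split, mirroring the "train on 80%, test on 20%" setup described above.
        X_train, X_test, y_train, y_test = train_test_split(
            pH.reshape(-1, 1), log_kd, test_size=0.2, random_state=0)

        net = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                           max_iter=5000, random_state=0).fit(X_train, y_train)

        print("test R^2:", round(net.score(X_test, y_test), 3))
        print("predicted log Kd at pH 6.5:", round(float(net.predict([[6.5]])[0]), 2))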

  9. The predictive skill of species distribution models for plankton in a changing climate

    DEFF Research Database (Denmark)

    Brun, Philipp Georg; Kiørboe, Thomas; Licandro, Priscilla

    2016-01-01

    Statistical species distribution models (SDMs) are increasingly used to project spatial relocations of marine taxa under future climate change scenarios. However, tests of their predictive skill in the real-world are rare. Here, we use data from the Continuous Plankton Recorder program, one...... null models, is essential to assess the robustness of projections of marine planktonic species under climate change...

  10. Why Do I Feel More Confident? Bandura's Sources Predict Preservice Teachers' Latent Changes in Teacher Self-Efficacy

    Science.gov (United States)

    Pfitzner-Eden, Franziska

    2016-01-01

    Teacher self-efficacy (TSE) is associated with a multitude of positive outcomes for teachers and students. However, the development of TSE is an under-researched area. Bandura (1997) proposed four sources of self-efficacy: mastery experiences, vicarious experiences, verbal persuasion, and physiological and affective states. This study introduces a first instrument to assess the four sources for TSE in line with Bandura's conception. Gathering evidence of convergent validity, the contribution that each source made to the development of TSE during a practicum at a school was explored for two samples of German preservice teachers. The first sample (N = 359) were beginning preservice teachers who completed an observation practicum. The second sample (N = 395) were advanced preservice teachers who completed a teaching practicum. The source measure showed good reliability, construct validity, and convergent validity. Latent true change modeling was applied to explore how the sources predicted changes in TSE. Three different models were compared. As expected, results showed that TSE changes in both groups were significantly predicted by mastery experiences, with a stronger relationship in the advanced group. Further, the results indicated that mastery experiences were largely informed by the other three sources to varying degrees depending on the type of practicum. Implications for the practice of teacher education are discussed in light of the results. PMID:27807422

  11. Lead concentration distribution and source tracing of urban/suburban aquatic sediments in two typical famous tourist cities: Haikou and Sanya, China.

    Science.gov (United States)

    Dong, Zhicheng; Bao, Zhengyu; Wu, Guoai; Fu, Yangrong; Yang, Yi

    2010-11-01

    The content and spatial distribution of lead in the aquatic systems of two Chinese tropical cities in Hainan province (Haikou and Sanya) show an unequal distribution of lead between the urban and the suburban areas. The lead content is significantly higher in the urban area (72.3 mg/kg) than in the suburbs (15.0 mg/kg) in Haikou, but roughly equal in Sanya (41.6 and 43.9 mg/kg). The frequency distribution histograms suggest that the lead in Haikou and in Sanya derives from different natural and/or anthropogenic sources. The isotopic compositions indicate that urban sediment lead in Haikou originates mainly from anthropogenic sources (automobile exhaust, atmospheric deposition, etc.), which contribute much more than the natural sources, while natural lead (basalt and sea sands) is still dominant in the suburban areas of Haikou. In Sanya, the primary source is natural (soils and sea sands).
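
    A common way to quantify this kind of natural-versus-anthropogenic split (given here only as a generic illustration, not necessarily the exact calculation used in this study) is a two-endmember isotope mixing model, e.g. on the 206Pb/207Pb ratio:

        \[ f_{\mathrm{anthro}} = \frac{(^{206}\mathrm{Pb}/^{207}\mathrm{Pb})_{\mathrm{sample}} - (^{206}\mathrm{Pb}/^{207}\mathrm{Pb})_{\mathrm{natural}}}{(^{206}\mathrm{Pb}/^{207}\mathrm{Pb})_{\mathrm{anthro}} - (^{206}\mathrm{Pb}/^{207}\mathrm{Pb})_{\mathrm{natural}}} \]

    where f_anthro is the anthropogenic fraction of sediment lead and the two endmember ratios are measured on local natural materials (basalt, sea sand) and on candidate anthropogenic sources (vehicle exhaust, atmospheric deposition).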

  12. Polycyclic Aromatic Hydrocarbons in the Dagang Oilfield (China): Distribution, Sources, and Risk Assessment

    Directory of Open Access Journals (Sweden)

    Haihua Jiao

    2015-05-01

    Full Text Available The levels of 16 polycyclic aromatic hydrocarbons (PAHs) were investigated in 27 upper layer (0–25 cm) soil samples collected from the Dagang Oilfield (China) in April 2013 to estimate their distribution, possible sources, and potential risks posed. The total concentration of PAHs (∑PAHs) varied between 103.6 µg·kg−1 and 5872 µg·kg−1, with a mean concentration of 919.8 µg·kg−1; increased concentrations were noted along a gradient from arable desert soil (mean 343.5 µg·kg−1), to oil well areas (mean 627.3 µg·kg−1), to urban and residential zones (mean 1856 µg·kg−1). Diagnostic ratios showed diverse sources of PAHs, including petroleum, liquid fossil fuels, and biomass combustion. Combustion sources were most significant for PAHs in arable desert soils and residential zones, while petroleum sources were a significant source of PAHs in oilfield areas. Based on their carcinogenicity, PAHs were classified as carcinogenic (B) or not classified/non-carcinogenic (NB). The total concentration of carcinogenic PAHs (∑BPAHs) varied from 13.3 µg·kg−1 to 4397 µg·kg−1 across all samples, with a mean concentration of 594.4 µg·kg−1. The results suggest that the oilfield soil is subject to a certain level of ecological and environmental risk.

  13. Distributed Model Predictive Control for Active Power Control of Wind Farm

    DEFF Research Database (Denmark)

    Zhao, Haoran; Wu, Qiuwei; Rasmussen, Claus Nygaard

    2014-01-01

    This paper presents the active power control of a wind farm using the Distributed Model Predictive Controller (D-MPC) via dual decomposition. Different from the conventional centralized wind farm control, multiple objectives such as power reference tracking performance and wind turbine load can...... be considered to achieve a trade-off between them. Additionally, D-MPC is based on communication among the subsystems. Through the interaction among the neighboring subsystems, the global optimization could be achieved, which significantly reduces the computation burden. It is suitable for the modern large......-scale wind farm control....

  14. The inventory of sources, environmental releases and risk assessment for perfluorooctane sulfonate in China

    International Nuclear Information System (INIS)

    Zhang Lai; Liu Jianguo; Hu Jianxin; Liu Chao; Guo Weiguang; Wang Qiang; Wang Hong

    2012-01-01

    With a production volume of about 100 t/y, perfluorooctane sulfonates (PFOS) are mainly used for metal plating, aqueous fire-fighting foams (AFFFs) and sulfluramid in China, with use amounts of about 30–40 t/y, 25–35 t/y and 4–8 t/y, respectively. Based on the inventory of PFOS production and uses with geographic distribution deduced from statistics, an environmental risk assessment of PFOS was carried out using the EUSES model, and its environmental releases were estimated at both local and regional levels in China. While the environmental release from manufacture is significant in the Central China region, metal plating was identified as the major PFOS release source at the regional level. The East China region shows the strongest PFOS emissions. Though the predicted environmental concentrations (PECs) did not exceed the current relevant predicted no-effect concentrations (PNECs) in the risk characterization for PFOS, higher PECs were estimated around major PFOS release sources, indicating undesirable environmental risk at the local level. - Highlights: ► Inventory of production and uses of perfluorooctane sulfonate (PFOS) in China with geographical distribution. ► Characteristics of PFOS release sources and distribution consistent with the social-economic situation in China. ► Effective model-predicted results of PFOS environmental risk assessment at local and regional scales compared with relevant environmental monitoring data. - Inventory of PFOS production and use with sectoral and regional distribution in China; environmental releases and risk status were indicated at both the local and regional levels of the country.

  15. Knowledge-based prediction of three-dimensional dose distributions for external beam radiotherapy

    International Nuclear Information System (INIS)

    Shiraishi, Satomi; Moore, Kevin L.

    2016-01-01

    Purpose: To demonstrate knowledge-based 3D dose prediction for external beam radiotherapy. Methods: Using previously treated plans as training data, an artificial neural network (ANN) was trained to predict a dose matrix based on patient-specific geometric and planning parameters, such as the closest distance (r) to the planning target volume (PTV) and organs at risk (OARs). Twenty-three prostate and 43 stereotactic radiosurgery/radiotherapy (SRS/SRT) cases with at least one nearby OAR were studied. All were planned with volumetric-modulated arc therapy to prescription doses of 81 Gy for prostate and 12–30 Gy for SRS. Using these clinically approved plans, ANNs were trained to predict the dose matrix and the predictive accuracy was evaluated using the dose difference between the clinical plan and the prediction, δD = D_clin − D_pred. The mean (⟨δD(r)⟩), standard deviation (σ_δD(r)), and their interquartile range (IQR) for the training plans were evaluated at 2–3 mm intervals from the PTV boundary (r_PTV) to assess prediction bias and precision. Initially, unfiltered models, trained using all plans in the cohorts, were created for each treatment site. These models predict approximately the average quality of OAR sparing. By emphasizing during training a subset of plans that exhibited better-than-average OAR sparing, refined models were created to predict high-quality rectum sparing for prostate and brainstem sparing for SRS. Using the refined model, potentially suboptimal plans were identified where the model predicted that further sparing of the OARs was achievable. Replans were performed to test if the OAR sparing could be improved as predicted by the model. Results: The refined models demonstrated highly accurate dose distribution prediction. For prostate cases, the average prediction bias for all voxels irrespective of organ delineation ranged from −1% to 0% with a maximum IQR of 3% over r_PTV ∈ [−6, 30] mm. The average prediction error was less

  16. Multi-Source Multi-Target Dictionary Learning for Prediction of Cognitive Decline.

    Science.gov (United States)

    Zhang, Jie; Li, Qingyang; Caselli, Richard J; Thompson, Paul M; Ye, Jieping; Wang, Yalin

    2017-06-01

    Alzheimer's Disease (AD) is the most common type of dementia. Identifying correct biomarkers may determine pre-symptomatic AD subjects and enable early intervention. Recently, multi-task sparse feature learning has been successfully applied to many computer vision and biomedical informatics problems. It aims to improve the generalization performance by exploiting the shared features among different tasks. However, most of the existing algorithms are formulated as a supervised learning scheme, whose drawback is either insufficient feature numbers or missing label information. To address these challenges, we formulate an unsupervised framework for multi-task sparse feature learning based on a novel dictionary learning algorithm. To solve the unsupervised learning problem, we propose a two-stage Multi-Source Multi-Target Dictionary Learning (MMDL) algorithm. In stage 1, we propose a multi-source dictionary learning method to utilize the common and individual sparse features in different time slots. In stage 2, supported by a rigorous theoretical analysis, we develop a multi-task learning method to solve the missing label problem. Empirical studies on an N = 3970 longitudinal brain image data set, which involves 2 sources and 5 targets, demonstrate the improved prediction accuracy and speed efficiency of MMDL in comparison with other state-of-the-art algorithms.

  17. eTOXlab, an open source modeling framework for implementing predictive models in production environments.

    Science.gov (United States)

    Carrió, Pau; López, Oriol; Sanz, Ferran; Pastor, Manuel

    2015-01-01

    Computational models based on Quantitative Structure-Activity Relationship (QSAR) methodologies are widely used tools for predicting the biological properties of new compounds. In many instances, such models are used routinely in industry (e.g. the food, cosmetic or pharmaceutical industry) for the early assessment of the biological properties of new compounds. However, most of the tools currently available for developing QSAR models are not well suited to supporting the whole QSAR model life cycle in production environments. We have developed eTOXlab, an open source modeling framework designed to be used at the core of a self-contained virtual machine that can be easily deployed in production environments, providing predictions as web services. eTOXlab consists of a collection of object-oriented Python modules with methods mapping common tasks of standard modeling workflows. This framework allows building and validating QSAR models as well as predicting the properties of new compounds using either a command line interface or a graphical user interface (GUI). Simple models can be easily generated by setting a few parameters, while more complex models can be implemented by overriding pieces of the original source code. eTOXlab benefits from the object-oriented capabilities of Python to provide high flexibility: any model implemented using eTOXlab inherits the features implemented in the parent model, like common tools and services or the automatic exposure of the models as prediction web services. The particular eTOXlab architecture as a self-contained, portable prediction engine allows building models with confidential information within corporate facilities, which can be safely exported and used for prediction without disclosing the structures of the training series. The software presented here provides full support to the specific needs of users that want to develop, use and maintain predictive models in corporate environments. The technologies used by e

  18. Distributed control system for parallel-connected DC boost converters

    Science.gov (United States)

    Goldsmith, Steven

    2017-08-15

    The disclosed invention is a distributed control system for operating a DC bus fed by disparate DC power sources that service a known or unknown load. The voltage sources vary in v-i characteristics and have time-varying, maximum supply capacities. Each source is connected to the bus via a boost converter, which may have different dynamic characteristics and power transfer capacities, but are controlled through PWM. The invention tracks the time-varying power sources and apportions their power contribution while maintaining the DC bus voltage within the specifications. A central digital controller solves the steady-state system for the optimal duty cycle settings that achieve a desired power supply apportionment scheme for a known or predictable DC load. A distributed networked control system is derived from the central system that utilizes communications among controllers to compute a shared estimate of the unknown time-varying load through shared bus current measurements and bus voltage measurements.

  19. The P1-approximation for the Distribution of Neutrons from a Pulsed Source in Hydrogen

    Energy Technology Data Exchange (ETDEWEB)

    Claesson, A

    1963-12-15

    The asymptotic distribution of neutrons from a pulsed, high energy source in an infinite moderator has been obtained earlier in a 'diffusion' approximation. In that paper the cross section was assumed to be constant over the whole energy region and the time derivative of the first moment was disregarded. Here, first, an analytic expression is obtained for the density in a P1-approximation. However, the result is very complicated, and it is shown that an asymptotic solution can be found in a simpler way. By taking into account the low hydrogen scattering cross section at the source energy it follows that the space dependence of the distribution is less than that obtained earlier. The importance of keeping the time derivative of the first moment is further shown in a perturbation approximation.

  20. Neutrons Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code

    Science.gov (United States)

    Faghihi, F.; Mehdizadeh, S.; Hadad, K.

    The neutron fluence rate of a low-intensity Pu-Be source is measured by Neutron Activation Analysis (NAA) of 197Au foils. Also, the neutron fluence rate distribution versus energy is calculated using the MCNP-4B code based on the ENDF/B-V library. The theoretical simulation, together with our experimental work, is a new experience for Iranians in establishing confidence in the code for further research. In our theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated and the relative neutron fluence rate versus energy is calculated using the MCNP-4B code. The variations of the fast and also thermal neutron fluence rates, which are obtained by the NAA method and the MCNP code, are compared.

  1. Hydrogen distribution in a containment with a high-velocity hydrogen-steam source

    International Nuclear Information System (INIS)

    Bloom, G.R.; Muhlestein, L.D.; Postma, A.K.; Claybrook, S.W.

    1982-09-01

    Hydrogen mixing and distribution tests are reported for a modeled high velocity hydrogen-steam release from a postulated small pipe break or release from a pressurizer relief tank rupture disk into the lower compartment of an Ice Condenser Plant. The tests, which in most cases used helium as a simulant for hydrogen, demonstrated that the lower compartment gas was well mixed for both hydrogen release conditions used. The gas concentration differences between any spatial locations were less than 3 volume percent during the hydrogen/steam release period and were reduced to less than 0.5 volume percent within 20 minutes after termination of the hydrogen source. The high velocity hydrogen/steam jet provided the dominant mixing mechanism; however, natural convection and forced air recirculation played important roles in providing a well mixed atmosphere following termination of the hydrogen source. 5 figures, 4 tables

  2. Evaluation of the dose distribution for prostate implants using various 125I and 103Pd sources

    International Nuclear Information System (INIS)

    Meigooni, Ali S.; Luerman, Christine M.; Sowards, Keith T.

    2009-01-01

    Recently, several different models of 125I and 103Pd brachytherapy sources have been introduced in order to meet the increasing demand for prostate seed implants. These sources have different internal structures; hence, their TG-43 dosimetric parameters are not the same. In this study, the effects of the dosimetric differences among the sources on their clinical applications were evaluated. The quantitative and qualitative evaluations were performed by comparisons of dose distributions and dose volume histograms of prostate implants calculated for various designs of 125I and 103Pd sources. These comparisons were made for an identical implant scheme with the same number of seeds for each source. The results were compared with the Amersham model 6711 seed for 125I and the Theragenics model 200 seed for 103Pd using the same implant scheme.

  3. Predictions of malaria vector distribution in Belize based on multispectral satellite data.

    Science.gov (United States)

    Roberts, D R; Paris, J F; Manguin, S; Harbach, R E; Woodruff, R; Rejmankova, E; Polanco, J; Wullschleger, B; Legters, L J

    1996-03-01

    Use of multispectral satellite data to predict arthropod-borne disease trouble spots depends on a clear understanding of the environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environment relationships in Belize. Four of eight sites that were predicted to be high probability locations for the presence of An. pseudopunctipennis were positive, and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.

  4. Prediction uncertainty assessment of a systems biology model requires a sample of the full probability distribution of its parameters

    Directory of Open Access Journals (Sweden)

    Simon van Mourik

    2014-06-01

    Full Text Available Multi-parameter models in systems biology are typically 'sloppy': some parameters or combinations of parameters may be hard to estimate from data, whereas others are not. One might expect that parameter uncertainty automatically leads to uncertain predictions, but this is not the case. We illustrate this by showing that the prediction uncertainty of each of six sloppy models varies enormously among different predictions. Statistical approximations of parameter uncertainty may lead to dramatic errors in prediction uncertainty estimation. We argue that prediction uncertainty assessment must therefore be performed on a per-prediction basis using a full computational uncertainty analysis. In practice this is feasible by providing a model with a sample or ensemble representing the distribution of its parameters. Within a Bayesian framework, such a sample may be generated by a Markov Chain Monte Carlo (MCMC) algorithm that infers the parameter distribution based on experimental data. Matlab code for generating the sample (with the Differential Evolution Markov Chain sampler) and for the subsequent uncertainty analysis using such a sample is supplied as Supplemental Information.
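
    The authors supply Matlab code; purely as a language-neutral illustration of the same workflow (sample the parameter posterior, then push the entire sample through the model for each prediction of interest), the sketch below uses a toy exponential-decay model and a plain random-walk Metropolis sampler instead of the Differential Evolution Markov Chain sampler. All model details and data are invented.

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy model: exponential decay x(t) = x0 * exp(-k t), parameters theta = (x0, k).
        t_obs = np.linspace(0, 10, 15)
        theta_true = np.array([2.0, 0.3])
        y_obs = theta_true[0] * np.exp(-theta_true[1] * t_obs) \
                + rng.normal(0, 0.05, t_obs.size)

        def log_post(theta):
            x0, k = theta
            if x0 <= 0 or k <= 0:
                return -np.inf                     # flat priors restricted to (0, inf)
            resid = y_obs - x0 * np.exp(-k * t_obs)
            return -0.5 * np.sum((resid / 0.05) ** 2)

        # Random-walk Metropolis as a stand-in for the DE-MC sampler of the paper.
        theta = np.array([1.0, 0.1])               # starting guess
        lp = log_post(theta)
        chain = []
        for _ in range(20000):
            prop = theta + rng.normal(0.0, [0.05, 0.01])
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain.append(theta)
        chain = np.array(chain[5000:])             # discard burn-in

        # Per-prediction uncertainty: push the whole parameter sample through the model.
        t_new = 15.0
        pred = chain[:, 0] * np.exp(-chain[:, 1] * t_new)
        print("prediction at t = 15: median %.3f, 95%% interval (%.3f, %.3f)"
              % (np.median(pred), *np.percentile(pred, [2.5, 97.5])))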

  5. On the origin of low energy tail for monoenergetic neutron sources

    International Nuclear Information System (INIS)

    Kornilov, N.V.; Kagalenko, A.B.

    1995-01-01

    The problems of data processing encountered when measuring inelastic neutron scattering cross sections for separated nuclear levels are studied. A model describing the neutron energy distribution of monoenergetic neutron sources is developed. The factors which make the major contributions to the formation of the spectrometer response function are discussed. It is shown that the model predicts well the neutron energy distribution from a metallic Li target. The model parameters should be estimated on the basis of experimental data. Neutron scattering on the target environment contributes strongly to the low-energy region of the neutron spectrum. An additional neutron source is introduced into the model in order to describe the low-energy peak asymmetry (the so-called low-energy tail). The dependence of the tail neutron contribution on incident energy and angle turns out to be rather unexpected. The conclusion is made that it is difficult to explain the origin and the properties of the tail neutron source by slit proton scattering or by regularities in the Li-nuclei distribution. 3 refs., 6 figs

  6. Qualitative analysis of precipiation distribution in Poland with use of different data sources

    Directory of Open Access Journals (Sweden)

    J. Walawender

    2008-04-01

    Full Text Available Geographical Information Systems (GIS) can be used to integrate data from different sources and in different formats to perform innovative spatial and temporal analysis. GIS can also be applied in climatic research to manage, investigate and display all kinds of weather data.

    The main objective of this study is to demonstrate that GIS is a useful tool to examine and visualise precipitation distribution obtained from different data sources: ground measurements, satellite and radar data.

    Three selected days (30 cases) with convective rainfall situations were analysed. Firstly, a scalable GRID-based approach was applied to store data from the three different sources in a comparable layout. Then, a geoprocessing algorithm was created within the ArcGIS 9.2 environment. The algorithm included: GRID definition, reclassification and raster algebra. All of the calculations and procedures were performed automatically. Finally, contingency tables and pie charts were created to show the relationship between ground measurements and both satellite- and radar-derived data. The results were visualised on maps.
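
    The reclassification, raster algebra and contingency-table steps mentioned above can be sketched outside ArcGIS with plain numpy, as below; the grids, threshold and sizes are synthetic placeholders rather than the study's data.

        import numpy as np

        rng = np.random.default_rng(5)

        # Synthetic precipitation grids on a common GRID layout (mm/day); in the study
        # these would be the gauge-, satellite- and radar-derived rasters.
        gauge_grid = rng.gamma(1.2, 4.0, size=(100, 100))
        radar_grid = gauge_grid * rng.normal(1.0, 0.3, size=(100, 100)).clip(min=0)

        def reclassify(grid, threshold=1.0):
            # Reclassification step: 1 = precipitation detected, 0 = dry.
            return (grid >= threshold).astype(int)

        g, r = reclassify(gauge_grid), reclassify(radar_grid)

        # Raster algebra on the reclassified grids gives a 2x2 contingency table.
        hits         = int(np.sum((g == 1) & (r == 1)))
        misses       = int(np.sum((g == 1) & (r == 0)))
        false_alarms = int(np.sum((g == 0) & (r == 1)))
        correct_neg  = int(np.sum((g == 0) & (r == 0)))
        print("hits %d, misses %d, false alarms %d, correct negatives %d"
              % (hits, misses, false_alarms, correct_neg))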

  7. Distributed amplifier using Josephson vortex flow transistors

    International Nuclear Information System (INIS)

    McGinnis, D.P.; Beyer, J.B.; Nordman, J.E.

    1986-01-01

    A wide-band traveling wave amplifier using vortex flow transistors is proposed. A vortex flow transistor is a long Josephson junction used as a current-controlled voltage source. The duality of this device with respect to the field-effect transistor is exploited. A circuit model of this device is proposed, and a distributed amplifier utilizing 50 vortex flow transistors is predicted to have useful gain to 100 GHz

  8. Prediction of fission mass-yield distributions based on cross section calculations

    International Nuclear Information System (INIS)

    Hambsch, F.-J.; G.Vladuca; Tudora, Anabella; Oberstedt, S.; Ruskov, I.

    2005-01-01

    For the first time, fission mass-yield distributions have been predicted based on an extended statistical model for fission cross section calculations. In this model, the concept of the multi-modality of the fission process has been incorporated. The three most dominant fission modes, the two asymmetric standard I (S1) and standard II (S2) modes and the symmetric superlong (SL) mode are taken into account. De-convoluted fission cross sections for S1, S2 and SL modes for 235,238U(n, f) and 237Np(n, f), based on experimental branching ratios, were calculated for the first time in the incident neutron energy range from 0.01 to 5.5 MeV providing good agreement with the experimental fission cross section data. The branching ratios obtained from the modal fission cross section calculations have been used to deduce the corresponding fission yield distributions, including mean values also for incident neutron energies hitherto not accessible to experiment

  9. Development of a semi-automated method for subspecialty case distribution and prediction of intraoperative consultations in surgical pathology

    Directory of Open Access Journals (Sweden)

    Raul S Gonzalez

    2015-01-01

    Full Text Available Background: In many surgical pathology laboratories, operating room schedules are prospectively reviewed to determine specimen distribution to different subspecialty services and to predict the number and nature of potential intraoperative consultations for which prior medical records and slides require review. At our institution, such schedules were manually converted into easily interpretable, surgical pathology-friendly reports to facilitate these activities. This conversion, however, was time-consuming and arguably a non-value-added activity. Objective: Our goal was to develop a semi-automated method of generating these reports that improved their readability while taking less time to perform than the manual method. Materials and Methods: A dynamic Microsoft Excel workbook was developed to automatically convert published operating room schedules into different tabular formats. Based on the surgical procedure descriptions in the schedule, a list of linked keywords and phrases was utilized to sort cases by subspecialty and to predict potential intraoperative consultations. After two trial-and-optimization cycles, the method was incorporated into standard practice. Results: The workbook distributed cases to appropriate subspecialties and accurately predicted intraoperative requests. Users indicated that they spent 1-2 h fewer per day on this activity than before, and team members preferred the formatting of the newer reports. Comparison of the manual and semi-automatic predictions showed that the mean daily difference in predicted versus actual intraoperative consultations underwent no statistically significant changes before and after implementation for most subspecialties. Conclusions: A well-designed, lean, and simple information technology solution to determine subspecialty case distribution and prediction of intraoperative consultations in surgical pathology is approximately as accurate as the gold standard manual method and requires less
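
    The published tool is an Excel workbook; the same keyword-matching logic can be sketched in a few lines of Python, as below, where the subspecialty names, keywords and schedule entries are invented for illustration and are not the institution's actual lists.

        # Hypothetical keyword/phrase map from procedure description to subspecialty.
        SUBSPECIALTY_KEYWORDS = {
            "gastrointestinal": ["colectomy", "whipple", "esophagectomy"],
            "genitourinary":    ["prostatectomy", "nephrectomy", "cystectomy"],
            "thoracic":         ["lobectomy", "wedge resection", "pneumonectomy"],
        }
        FROZEN_SECTION_HINTS = ["margin", "frozen", "sentinel node"]

        def triage(procedure_description):
            # Sort a case to a subspecialty service and flag likely intraoperative
            # consultations from the operating room schedule text.
            text = procedure_description.lower()
            service = next((s for s, kws in SUBSPECIALTY_KEYWORDS.items()
                            if any(k in text for k in kws)), "unassigned")
            frozen_likely = any(h in text for h in FROZEN_SECTION_HINTS)
            return service, frozen_likely

        schedule = ["Robotic prostatectomy with frozen section of margins",
                    "Right upper lobectomy, sentinel node biopsy"]
        for case in schedule:
            print(case, "->", triage(case))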

  10. The MACHO Project HST Follow-Up: The Large Magellanic Cloud Microlensing Source Stars

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, C.A.; /LLNL, Livermore /UC, Berkeley; Drake, A.J.; /Caltech; Cook, K.H.; /LLNL, Livermore /UC, Berkeley; Bennett, D.P.; /Caltech /Notre Dame U.; Popowski, P.; /Garching, Max Planck Inst.; Dalal, N.; /Toronto U.; Nikolaev, S.; /LLNL, Livermore; Alcock, C.; /Caltech /Harvard-Smithsonian Ctr. Astrophys.; Axelrod, T.S.; /Arizona U.; Becker, A.C. /Washington U., Seattle; Freeman, K.C.; /Res. Sch. Astron. Astrophys., Weston Creek; Geha, M.; /Yale U.; Griest, K.; /UC, San Diego; Keller, S.C.; /LLNL, Livermore; Lehner, M.J.; /Harvard-Smithsonian Ctr. Astrophys. /Taipei, Inst. Astron. Astrophys.; Marshall, S.L.; /SLAC; Minniti, D.; /Rio de Janeiro, Pont. U. Catol. /Vatican Astron. Observ.; Pratt, M.R.; /Aradigm, Hayward; Quinn, P.J.; /Western Australia U.; Stubbs, C.W.; /UC, Berkeley /Harvard U.; Sutherland, W.; /Oxford U. /Oran, Sci. Tech. U. /Garching, Max Planck Inst. /McMaster U.

    2009-06-25

    We present Hubble Space Telescope (HST) WFPC2 photometry of 13 microlensed source stars from the 5.7 year Large Magellanic Cloud (LMC) survey conducted by the MACHO Project. The microlensing source stars are identified by deriving accurate centroids in the ground-based MACHO images using difference image analysis (DIA) and then transforming the DIA coordinates to the HST frame. None of these sources is coincident with a background galaxy, which rules out the possibility that the MACHO LMC microlensing sample is contaminated with misidentified supernovae or AGN in galaxies behind the LMC. This supports the conclusion that the MACHO LMC microlensing sample has only a small amount of contamination due to non-microlensing forms of variability. We compare the WFPC2 source star magnitudes with the lensed flux predictions derived from microlensing fits to the light curve data. In most cases the source star brightness is accurately predicted. Finally, we develop a statistic which constrains the location of the Large Magellanic Cloud (LMC) microlensing source stars with respect to the distributions of stars and dust in the LMC and compare this to the predictions of various models of LMC microlensing. This test excludes at ≳90% confidence level models where more than 80% of the source stars lie behind the LMC. Exotic models that attempt to explain the excess LMC microlensing optical depth seen by MACHO with a population of background sources are disfavored or excluded by this test. Models in which most of the lenses reside in a halo or spheroid distribution associated with either the Milky Way or the LMC are consistent with these data, but LMC halo or spheroid models are favored by the combined MACHO and EROS microlensing results.

  11. Prediction of clearance, volume of distribution and half-life by allometric scaling and by use of plasma concentrations predicted from pharmacokinetic constants: a comparative study.

    Science.gov (United States)

    Mahmood, I

    1999-08-01

    Pharmacokinetic parameters (clearance, CL, volume of distribution in the central compartment, VdC, and elimination half-life, t1/2beta) predicted by an empirical allometric approach have been compared with parameters predicted from plasma concentrations calculated by use of the pharmacokinetic constants A, B, alpha and beta, where A and B are the intercepts on the Y axis of the plot of plasma concentration against time and alpha and beta are the rate constants, both pairs of constants being for the distribution and elimination phases, respectively. The pharmacokinetic parameters of cefpiramide, actisomide, troglitazone, procaterol, moxalactam and ciprofloxacin were scaled from animal data obtained from the literature. Three methods were used to generate plots for the prediction of clearance in man: dependence of clearance on body weight (simple allometric equation); dependence of the product of clearance and maximum life-span potential (MLP) on body weight; and dependence of the product of clearance and brain weight on body weight. Plasma concentrations of the drugs were predicted in man by use of A, B, alpha and beta obtained from animal data. The predicted plasma concentrations were then used to calculate CL, VdC and t1/2beta. The pharmacokinetic parameters predicted by use of both approaches were compared with measured values. The results indicate that simple allometry did not predict clearance satisfactorily for actisomide, troglitazone, procaterol and ciprofloxacin. Use of MLP or the product of clearance and brain weight improved the prediction of clearance for these four drugs. Except for troglitazone, VdC and t1/2beta predicted for man by use of the allometric approach were comparable with measured values for the drugs studied. CL, VdC and t1/2beta predicted by use of pharmacokinetic constants were comparable with values predicted by simple allometry. Thus, if simple allometry failed to predict clearance of a drug, so did the pharmacokinetic constant
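
    The simple allometric equation referred to above is CL = a·W^b fitted across species on log-log axes, with the MLP or brain-weight corrections applied when the exponent is high; a minimal sketch with invented animal data (the species values, the 0.71-1.0 exponent range and the human MLP of roughly 8.18e5 h are quoted as commonly used approximations, not from this paper) follows.

        import numpy as np

        # Hypothetical preclinical data: body weight W (kg) and clearance CL (mL/min).
        W  = np.array([0.25, 2.5, 10.0])          # e.g. rat, rabbit, dog
        CL = np.array([2.0, 15.0, 48.0])

        # Simple allometry: CL = a * W**b, fitted as a straight line in log-log space.
        b, log_a = np.polyfit(np.log(W), np.log(CL), 1)
        cl_human = np.exp(log_a) * 70.0 ** b       # extrapolate to a 70-kg human
        print("exponent b = %.2f, simple-allometry human CL = %.0f mL/min" % (b, cl_human))

        # MLP correction ("rule of exponents"): when b is roughly 0.71-1.0, fit CL*MLP
        # against W and divide the human extrapolation by the human MLP (~8.18e5 h).
        MLP_h = np.array([4.7e4, 7.0e4, 1.7e5])   # illustrative maximum life-spans (hours)
        b2, log_a2 = np.polyfit(np.log(W), np.log(CL * MLP_h), 1)
        cl_human_mlp = np.exp(log_a2) * 70.0 ** b2 / 8.18e5
        print("MLP-corrected human CL = %.0f mL/min" % cl_human_mlp)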

  12. ALMA observations of lensed Herschel sources: testing the dark matter halo paradigm

    Science.gov (United States)

    Amvrosiadis, A.; Eales, S. A.; Negrello, M.; Marchetti, L.; Smith, M. W. L.; Bourne, N.; Clements, D. L.; De Zotti, G.; Dunne, L.; Dye, S.; Furlanetto, C.; Ivison, R. J.; Maddox, S. J.; Valiante, E.; Baes, M.; Baker, A. J.; Cooray, A.; Crawford, S. M.; Frayer, D.; Harris, A.; Michałowski, M. J.; Nayyeri, H.; Oliver, S.; Riechers, D. A.; Serjeant, S.; Vaccari, M.

    2018-04-01

    With the advent of wide-area submillimetre surveys, a large number of high-redshift gravitationally lensed dusty star-forming galaxies have been revealed. Because of the simplicity of the selection criteria for candidate lensed sources in such surveys, identified as those with S500 μm > 100 mJy, uncertainties associated with the modelling of the selection function are expunged. The combination of these attributes makes submillimetre surveys ideal for the study of strong lens statistics. We carried out a pilot study of the lensing statistics of submillimetre-selected sources by making observations with the Atacama Large Millimeter Array (ALMA) of a sample of strongly lensed sources selected from surveys carried out with the Herschel Space Observatory. We attempted to reproduce the distribution of image separations for the lensed sources using a halo mass function taken from a numerical simulation that contains both dark matter and baryons. We used three different density distributions, one based on analytical fits to the haloes formed in the EAGLE simulation and two density distributions [Singular Isothermal Sphere (SIS) and SISSA] that have been used before in lensing studies. We found that we could reproduce the observed distribution with all three density distributions, as long as we imposed an upper mass transition of ˜1013 M⊙ for the SIS and SISSA models, above which we assumed that the density distribution could be represented by a Navarro-Frenk-White profile. We show that we would need a sample of ˜500 lensed sources to distinguish between the density distributions, which is practical given the predicted number of lensed sources in the Herschel surveys.

  13. Predicting plant invasions under climate change: are species distribution models validated by field trials?

    Science.gov (United States)

    Sheppard, Christine S; Burns, Bruce R; Stanley, Margaret C

    2014-09-01

    Climate change may facilitate alien species invasion into new areas, particularly for species from warm native ranges introduced into areas currently marginal for temperature. Although conclusions from modelling approaches and experimental studies are generally similar, combining the two approaches has rarely occurred. The aim of this study was to validate species distribution models by conducting field trials in sites of differing suitability as predicted by the models, thus increasing confidence in their ability to assess invasion risk. Three recently naturalized alien plants in New Zealand were used as study species (Archontophoenix cunninghamiana, Psidium guajava and Schefflera actinophylla): they originate from warm native ranges, are woody bird-dispersed species and of concern as potential weeds. Seedlings were grown in six sites across the country, differing both in climate and suitability (as predicted by the species distribution models). Seedling growth and survival were recorded over two summers and one or two winter seasons, and temperature and precipitation were monitored hourly at each site. Additionally, alien seedling performances were compared to those of closely related native species (Rhopalostylis sapida, Lophomyrtus bullata and Schefflera digitata). Furthermore, half of the seedlings were sprayed with pesticide, to investigate whether enemy release may influence performance. The results showed large differences in growth and survival of the alien species among the six sites. In the more suitable sites, performance was frequently higher compared to the native species. Leaf damage from invertebrate herbivory was low for both alien and native seedlings, with little evidence that the alien species should have an advantage over the native species because of enemy release. Correlations between performance in the field and predicted suitability of species distribution models were generally high. The projected increase in minimum temperature and reduced

  14. DUSTMS-D: DISPOSAL UNIT SOURCE TERM - MULTIPLE SPECIES - DISTRIBUTED FAILURE DATA INPUT GUIDE.

    Energy Technology Data Exchange (ETDEWEB)

    SULLIVAN, T.M.

    2006-01-01

    Performance assessment of a low-level waste (LLW) disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). Many of these physical processes are influenced by the design of the disposal facility (e.g., how the engineered barriers control infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This has been done and the resulting models have been incorporated into the computer code DUST-MS (Disposal Unit Source Term-Multiple Species). The DUST-MS computer code is designed to model water flow, container degradation, release of contaminants from the wasteform to the contacting solution and transport through the subsurface media. Water flow through the facility over time is modeled using tabular input. Container degradation models include three types of failure rates: (a) instantaneous (all containers in a control volume fail at once), (b) uniformly distributed failures (containers fail at a linear rate between a specified starting and ending time), and (c) gaussian failure rates (containers fail at a rate determined by a mean failure time, standard deviation and gaussian distribution). Wasteform release models include four release mechanisms: (a) rinse with partitioning (inventory is released instantly upon container failure subject to equilibrium partitioning (sorption) with
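
    The three container-failure options listed above (instantaneous, uniformly distributed, and Gaussian failure rates) can be visualized as cumulative failure fractions over time. The sketch below is a schematic reading of that description with invented failure times; it is not an excerpt of DUST-MS.

```python
import numpy as np
from scipy.stats import norm

t = np.linspace(0.0, 500.0, 501)   # years (hypothetical)

def instantaneous(t, t_fail=100.0):
    """All containers in the control volume fail at once at t_fail."""
    return (t >= t_fail).astype(float)

def uniform(t, t_start=50.0, t_end=300.0):
    """Containers fail at a linear rate between t_start and t_end."""
    return np.clip((t - t_start) / (t_end - t_start), 0.0, 1.0)

def gaussian(t, t_mean=200.0, sigma=40.0):
    """Failure times follow a Gaussian with given mean and standard deviation."""
    return norm.cdf(t, loc=t_mean, scale=sigma)

for name, frac in [("instantaneous", instantaneous(t)),
                   ("uniform", uniform(t)),
                   ("gaussian", gaussian(t))]:
    print(f"{name:14s} fraction failed at t=250 y: {frac[t == 250.0][0]:.2f}")
```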

  15. Two Dimensional Verification of the Dose Distribution of Gamma Knife Model C using Monte Carlo Simulation with a Virtual Source

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae-Hoon; Kim, Yong-Kyun; Lee, Cheol Ho; Son, Jaebum; Lee, Sangmin; Kim, Dong Geon; Choi, Joonbum; Jang, Jae Yeong [Hanyang University, Seoul (Korea, Republic of); Chung, Hyun-Tai [Seoul National University, Seoul (Korea, Republic of)

    2016-10-15

    Gamma Knife model C contains 201 {sup 60}Co sources located on a spherical surface, so that each beam is concentrated on the center of the sphere. In previous work, we simulated the Gamma Knife model C with a Monte Carlo code built on Geant4. Instead of the full 201-source multi-collimation system, we made one single collimation system that collects the source parameters of particles passing through the collimator helmet. Using this virtual source, we drastically reduced the simulation time needed to transport the 201 gamma beams to the target. The gamma index is widely used to compare two dose distributions in cancer radiotherapy. Gamma index pass rates were compared between the two calculated results, obtained using the virtual source method and the original method, and measured results obtained using radiochromic films. The virtual source method significantly reduces the simulation time of a Gamma Knife model C and provides absorbed dose distributions equivalent to those of the original method, with a gamma index pass rate close to 100% under the 1 mm/3% criteria. On the other hand, it gives a slightly narrower dose distribution than the film measurement, with a gamma index pass rate of 94%. A more accurate and thorough examination of the accuracy of both the simulation and the film measurement is necessary.
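
    The gamma-index comparison mentioned above can be illustrated with a brute-force global gamma calculation on two small 2-D dose grids (dose-difference tolerance taken as a fraction of the maximum reference dose, distance-to-agreement in mm). The toy doses below are illustrative; this is not the evaluation code used in the cited work.

```python
import numpy as np

def gamma_pass_rate(ref, eva, spacing_mm=1.0, dose_tol=0.03, dta_mm=1.0):
    """Global 2-D gamma index pass rate (brute force, small grids only).

    ref, eva : 2-D dose arrays on the same grid
    dose_tol : dose-difference criterion as a fraction of max(ref)
    dta_mm   : distance-to-agreement criterion in mm
    """
    dd = dose_tol * ref.max()
    ny, nx = ref.shape
    ys, xs = np.mgrid[0:ny, 0:nx]
    gammas = np.empty_like(ref, dtype=float)
    for iy in range(ny):
        for ix in range(nx):
            dist2 = ((ys - iy) ** 2 + (xs - ix) ** 2) * spacing_mm ** 2
            dose2 = (eva - ref[iy, ix]) ** 2
            gammas[iy, ix] = np.sqrt(dist2 / dta_mm ** 2 + dose2 / dd ** 2).min()
    return 100.0 * np.mean(gammas <= 1.0)

# Toy example: a Gaussian "shot" and a slightly blurred copy of it
y, x = np.mgrid[-20:21, -20:21] * 1.0
ref = np.exp(-(x ** 2 + y ** 2) / (2 * 8.0 ** 2))
eva = np.exp(-(x ** 2 + y ** 2) / (2 * 8.5 ** 2))
print(f"pass rate: {gamma_pass_rate(ref, eva):.1f}%")
```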

  16. Human Papilloma Virus: Prevalence, distribution and predictive value to lymphatic metastasis in penile carcinoma

    Directory of Open Access Journals (Sweden)

    Aluizio Goncalves da Fonseca

    2013-07-01

    Full Text Available Objectives To evaluate the prevalence, distribution and association of HPV with histological patterns of worse prognosis in penile cancer, in order to assess its predictive value for inguinal metastasis, as well as to evaluate other previously reported prognostic factors. Material and Methods Tumor samples of 82 patients with penile carcinoma were tested in order to establish the prevalence and distribution of HPV genotypes using PCR. HPV status was correlated with histopathological factors and the presence of inguinal metastasis. The influence of several histological characteristics on inguinal disease-free survival was also examined. Results Follow-up varied from 1 to 71 months (median 22 months). HPV DNA was identified in 60.9% of the samples, with a higher prevalence of types 11 and 6 (64% and 32%, respectively). There was no significant correlation between the histological characteristics of worse prognosis of penile cancer and HPV status. Five-year inguinal disease-free survival also showed no influence of HPV status (p = 0.45). The only independent pathologic factors for inguinal metastasis were: stage T ≥ T1b-T4 (p = 0.02), lymphovascular invasion (p = 0.04) and infiltrative invasion (p = 0.03). Conclusions HPV status and distribution showed no correlation with histological features of worse prognosis, nor any predictive value for lymphatic metastasis in penile carcinoma.

  17. Effects of predicted climatic changes on distribution of organic contaminants in brackish water mesocosms.

    Science.gov (United States)

    Ripszam, M; Gallampois, C M J; Berglund, Å; Larsson, H; Andersson, A; Tysklind, M; Haglund, P

    2015-06-01

    Predicted consequences of future climate change in the northern Baltic Sea include increases in sea surface temperatures and terrestrial dissolved organic carbon (DOC) runoff. These changes are expected to alter environmental distribution of anthropogenic organic contaminants (OCs). To assess likely shifts in their distributions, outdoor mesocosms were employed to mimic pelagic ecosystems at two temperatures and two DOC concentrations, current: 15°C and 4 mg DOC L(-1) and, within ranges of predicted increases, 18°C and 6 mg DOC L(-1), respectively. Selected organic contaminants were added to the mesocosms to monitor changes in their distribution induced by the treatments. OC partitioning to particulate matter and sedimentation were enhanced at the higher DOC concentration, at both temperatures, while higher losses and lower partitioning of OCs to DOC were observed at the higher temperature. No combined effects of higher temperature and DOC on partitioning were observed, possibly because of the balancing nature of these processes. Therefore, changes in OCs' fates may largely depend on whether they are most sensitive to temperature or DOC concentration rises. Bromoanilines, phenanthrene, biphenyl and naphthalene were sensitive to the rise in DOC concentration, whereas organophosphates, chlorobenzenes (PCBz) and polychlorinated biphenyls (PCBs) were more sensitive to temperature. Mitotane and diflufenican were sensitive to both temperature and DOC concentration rises individually, but not in combination. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Post-quantum attacks on key distribution schemes in the presence of weakly stochastic sources

    International Nuclear Information System (INIS)

    Al–Safi, S W; Wilmott, C M

    2015-01-01

    It has been established that the security of quantum key distribution protocols can be severely compromised were one to permit an eavesdropper to possess even a very limited knowledge of the random sources used between the communicating parties. While such knowledge should always be expected in realistic experimental conditions, the result itself opened a new line of research to fully account for real-world weak randomness threats to quantum cryptography. Here we expand on this novel idea by describing a key distribution scheme that is provably secure against general attacks by a post-quantum adversary. We then discuss possible security consequences for such schemes under the assumption of weak randomness. (paper)

  19. Security analysis of an untrusted source for quantum key distribution: passive approach

    International Nuclear Information System (INIS)

    Zhao Yi; Qi Bing; Lo, H-K; Qian Li

    2010-01-01

    We present a passive approach to the security analysis of quantum key distribution (QKD) with an untrusted source. A complete proof of its unconditional security is also presented. This scheme has significant advantages in real-life implementations as it does not require fast optical switching or a quantum random number generator. The essential idea is to use a beam splitter to split each input pulse. We show that we can characterize the source using a cross-estimate technique without active routing of each pulse. We have derived analytical expressions for the passive estimation scheme. Moreover, using simulations, we have considered four real-life imperfections: additional loss introduced by the 'plug and play' structure, inefficiency of the intensity monitor, noise of the intensity monitor, and statistical fluctuation introduced by finite data size. Our simulation results show that the passive estimate of an untrusted source remains useful in practice, despite these four imperfections. Also, we have performed preliminary experiments, confirming the utility of our proposal in real-life applications. Our proposal makes it possible to implement the 'plug and play' QKD with the security guaranteed, while keeping the implementation practical.

  20. Predicting Phosphorus Dynamics Across Physiographic Regions Using a Mixed Hortonian Non-Hortonian Hydrology Model

    Science.gov (United States)

    Collick, A.; Easton, Z. M.; Auerbach, D.; Buchanan, B.; Kleinman, P. J. A.; Fuka, D.

    2017-12-01

    Predicting phosphorus (P) loss from agricultural watersheds depends on accurate representation of the hydrological and chemical processes governing P mobility and transport. In complex landscapes, P predictions are complicated by a broad range of soils with and without restrictive layers, a wide variety of agricultural management, and variable hydrological drivers. The Soil and Water Assessment Tool (SWAT) is a watershed model commonly used to predict runoff and non-point source pollution transport, but it is typically initialized with either a Hortonian (traditional SWAT) or a non-Hortonian (SWAT-VSA) conceptualization alone. Many shallow soils underlain by a restricting layer commonly generate saturation excess runoff from variable source areas (VSA), which is well represented in a re-conceptualized version, SWAT-VSA. However, many watersheds exhibit traits of both infiltration excess and saturation excess hydrology internally, depending on the hydrologic distance from the stream, the distribution of soils across the landscape, and the characteristics of restricting layers. The objective of this research is to provide an initial look at integrating distributed predictive capabilities that consider both Hortonian and non-Hortonian solutions simultaneously within a single SWAT-VSA initialization. We compare results from all three conceptual watershed initializations against measured surface runoff and stream P loads, and highlight the model's ability to drive sub-field management of P. All three initializations predict discharge similarly well (daily Nash-Sutcliffe efficiencies above 0.5), but the new conceptual SWAT-VSA initialization performed best in predicting P export from the watershed, while also identifying critical source areas - those areas generating large runoff and P losses at the sub-field level. These results support the use of mixed Hortonian non-Hortonian SWAT-VSA initializations in predicting watershed-scale P losses and identifying critical source areas of P loss in landscapes
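
    Surface runoff in SWAT-type initializations is typically generated with the SCS curve-number equation; as a point of reference, the sketch below applies that equation to a few hypothetical sub-field units whose effective curve numbers differ with landscape position. The CN values and storm depth are illustrative, and this is not SWAT code.

```python
def scs_runoff(p_in, cn):
    """SCS curve-number runoff (inches) for a rainfall depth p_in (inches)."""
    s = 1000.0 / cn - 10.0          # potential maximum retention
    ia = 0.2 * s                    # initial abstraction (conventional 20% of S)
    if p_in <= ia:
        return 0.0
    return (p_in - ia) ** 2 / (p_in - ia + s)

# Hypothetical sub-field units: wetter, frequently saturated positions are
# assigned a higher effective curve number (values are illustrative).
storm = 2.5   # inches of rainfall
for label, cn in [("dry upslope", 65), ("midslope", 75), ("near-stream, wet", 90)]:
    print(f"{label:18s} CN={cn}  Q={scs_runoff(storm, cn):.2f} in")
```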

  1. Knowledge-based prediction of three-dimensional dose distributions for external beam radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Shiraishi, Satomi; Moore, Kevin L., E-mail: kevinmoore@ucsd.edu [Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, California 92093 (United States)

    2016-01-15

    Purpose: To demonstrate knowledge-based 3D dose prediction for external beam radiotherapy. Methods: Using previously treated plans as training data, an artificial neural network (ANN) was trained to predict a dose matrix based on patient-specific geometric and planning parameters, such as the closest distance (r) to planning target volume (PTV) and organ-at-risks (OARs). Twenty-three prostate and 43 stereotactic radiosurgery/radiotherapy (SRS/SRT) cases with at least one nearby OAR were studied. All were planned with volumetric-modulated arc therapy to prescription doses of 81 Gy for prostate and 12–30 Gy for SRS. Using these clinically approved plans, ANNs were trained to predict dose matrix and the predictive accuracy was evaluated using the dose difference between the clinical plan and prediction, δD = D{sub clin} − D{sub pred}. The mean (〈δD{sub r}〉), standard deviation (σ{sub δD{sub r}}), and their interquartile range (IQR) for the training plans were evaluated at a 2–3 mm interval from the PTV boundary (r{sub PTV}) to assess prediction bias and precision. Initially, unfiltered models which were trained using all plans in the cohorts were created for each treatment site. The models predict approximately the average quality of OAR sparing. Emphasizing a subset of plans that exhibited superior to the average OAR sparing during training, refined models were created to predict high-quality rectum sparing for prostate and brainstem sparing for SRS. Using the refined model, potentially suboptimal plans were identified where the model predicted further sparing of the OARs was achievable. Replans were performed to test if the OAR sparing could be improved as predicted by the model. Results: The refined models demonstrated highly accurate dose distribution prediction. For prostate cases, the average prediction bias for all voxels irrespective of organ delineation ranged from −1% to 0% with maximum IQR of 3% over r{sub PTV} ∈ [ − 6, 30] mm. The

  2. pH prediction by artificial neural networks for the drinking water of the distribution system of Hyderabad city

    International Nuclear Information System (INIS)

    Memon, N.A.; Unar, M.A.; Ansari, A.K.

    2012-01-01

    In this research, a feed-forward ANN (Artificial Neural Network) model is developed and validated for predicting the pH at 10 different locations of the drinking water distribution system of Hyderabad city. The developed model is an MLP (Multilayer Perceptron) trained with the back-propagation algorithm. The data for training and testing the model are collected through weekly experimental analyses carried out as part of routine examinations for maintaining the quality of drinking water in the city. Seventeen parameters are taken into consideration, including pH. All of these parameters are taken as input variables for the model, and pH is then predicted for three phases: raw water of the river Indus, treated water in the treatment plants, and treated water in the distribution system. The training and testing results reveal that MLP neural networks are highly predictive of the pH of river water and of untreated and treated water at all locations of the drinking water distribution system of Hyderabad city. The optimum input and output weights are generated with a minimum MSE (Mean Square Error) < 5%. Experimental, predicted and tested values of pH are plotted, and the effectiveness of the model is determined by calculating the coefficient of correlation (R2 = 0.999) of trained and tested results. (author)
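
    A feed-forward MLP with back-propagation of the kind described can be prototyped in a few lines with scikit-learn. The sketch below uses synthetic inputs standing in for the water-quality parameters; it is only meant to show the shape of such a model, not the Hyderabad data or the exact 17-parameter network of the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples, n_params = 300, 16          # 16 synthetic inputs predicting pH
X = rng.normal(size=(n_samples, n_params))
y = 7.2 + 0.3 * X[:, 0] - 0.2 * X[:, 1] + 0.05 * rng.normal(size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

# Multilayer perceptron trained by back-propagation (one small hidden layer)
mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
mlp.fit(scaler.transform(X_train), y_train)
print("test R^2:", round(mlp.score(scaler.transform(X_test), y_test), 3))
```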

  3. SPANDOM - source projection analytic nodal discrete ordinates method

    International Nuclear Information System (INIS)

    Kim, Tae Hyeong; Cho, Nam Zin

    1994-01-01

    We describe a new discrete ordinates nodal method for the two-dimensional transport equation. We solve the discrete ordinates equation analytically after the source term is projected and represented in polynomials. The method is applied to two fast reactor benchmark problems and compared with the TWOHEX code. The results indicate that the present method accurately predicts not only the multiplication factor but also the flux distribution.

  4. A GIS-based multi-source and multi-box modeling approach (GMSMB) for air pollution assessment--a North American case study.

    Science.gov (United States)

    Wang, Bao-Zhen; Chen, Zhi

    2013-01-01

    This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants on local and regional scales. In this method, an extended multi-box model combined with a multi-source and multi-grid Gaussian model are developed within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, a large amount of data including emission sources, air quality monitoring, meteorological data, and spatial location information required for air quality modeling are brought into an integrated modeling environment. This allows the spatial variation in source distribution and meteorological conditions to be analyzed in greater quantitative detail. The developed modeling approach was applied to predict the spatial concentration distribution of four air pollutants (CO, NO(2), SO(2) and PM(2.5)) for the State of California. The modeling results are compared with the monitoring data. Good agreement is obtained, demonstrating that the developed modeling approach can deliver an effective air pollution assessment on both regional and local scales to support air pollution control and management planning.
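
    The point-source component of such an approach is commonly a Gaussian plume formulation; the sketch below evaluates the textbook steady-state plume equation (with ground reflection) for a single stack. The emission rate, stack height and dispersion coefficients are arbitrary placeholders, and this is not the GMSMB implementation; in practice the dispersion coefficients would come from stability-class correlations evaluated at the receptor's downwind distance.

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration (g/m^3) from a point source.

    q: emission rate (g/s), u: wind speed (m/s), h: effective stack height (m),
    y: crosswind offset (m), z: receptor height (m),
    sigma_y, sigma_z: lateral and vertical dispersion coefficients (m).
    """
    lateral = np.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (np.exp(-(z - h) ** 2 / (2 * sigma_z ** 2))
                + np.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))  # ground reflection
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative numbers: 100 g/s source, 4 m/s wind, 50 m effective stack height,
# dispersion coefficients chosen arbitrarily for a receptor ~1 km downwind.
c = gaussian_plume(q=100.0, u=4.0, y=0.0, z=0.0, h=50.0, sigma_y=70.0, sigma_z=35.0)
print(f"ground-level concentration: {c * 1e6:.1f} ug/m^3")
```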

  5. Estimating true human and animal host source contribution in quantitative microbial source tracking using the Monte Carlo method.

    Science.gov (United States)

    Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan

    2010-09-01

    Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genome. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It could perform reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvement on the precision of sample processing and q
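
    The Monte Carlo correction described above can be sketched as repeated sampling of the false-signal and detection probabilities together with the measurement error, followed by inversion of a total-probability relation for the true marker concentration. The Beta and log-normal parameters below are placeholders, and the single-equation inversion is a deliberate simplification of the full model developed in the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 100_000

# Observed marker concentration (copies/100 mL) and replicate precision error
c_obs = 5_000.0
precision_sd = 0.1            # sd of the log10 measurement error (illustrative)

# Probabilities estimated from reference fecal samples of known origin
# (Beta parameters below are placeholders, not values from the study).
sens = rng.beta(45, 5, n_draws)          # probability the assay detects its host marker
false_sig = rng.beta(2, 48, n_draws)     # fraction of signal from non-target sources

obs = c_obs * 10 ** rng.normal(0.0, precision_sd, n_draws)

# Invert a simplified total-probability relation:
#   observed ~ sensitivity * true + false-signal fraction * observed
c_true = obs * (1.0 - false_sig) / sens

lo, med, hi = np.percentile(c_true, [2.5, 50, 97.5])
print(f"true concentration: median {med:.0f}, 95% interval [{lo:.0f}, {hi:.0f}]")
```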

  6. Robust Distributed Model Predictive Load Frequency Control of Interconnected Power System

    Directory of Open Access Journals (Sweden)

    Xiangjie Liu

    2013-01-01

    Full Text Available Considering the load frequency control (LFC) of a large-scale power system, a robust distributed model predictive control (RDMPC) scheme is presented. The system uncertainty due to power system parameter variation, along with the generation rate constraints (GRC), is included in the synthesis procedure. The entire power system is composed of several control areas, and the problem is formulated as a convex optimization problem with linear matrix inequalities (LMI) that can be solved efficiently. It minimizes an upper bound on a robust performance objective for each subsystem. Simulation results show good dynamic response and robustness in the presence of power system dynamic uncertainties.

  7. Distributed model predictive control for constrained nonlinear systems with decoupled local dynamics.

    Science.gov (United States)

    Zhao, Meng; Ding, Baocang

    2015-03-01

    This paper considers the distributed model predictive control (MPC) of nonlinear large-scale systems with dynamically decoupled subsystems. Based on the coupled states in the overall cost function of centralized MPC, the neighbors of each subsystem are identified and fixed, and the overall objective function is decomposed into the local optimizations. In order to guarantee the closed-loop stability of the distributed MPC algorithm, the overall compatibility constraint of the centralized MPC algorithm is decomposed into each local controller. The communication burden between each subsystem and its neighbors is relatively low: only the current states before optimization and the optimized input variables after optimization are transferred. For each local controller, the quasi-infinite-horizon MPC algorithm is adopted, and the global closed-loop system is proven to be exponentially stable. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Predicting occupancy for pygmy rabbits in Wyoming: an independent evaluation of two species distribution models

    Science.gov (United States)

    Germaine, Stephen S.; Ignizio, Drew; Keinath, Doug; Copeland, Holly

    2014-01-01

    Species distribution models are an important component of natural-resource conservation planning efforts. Independent, external evaluation of their accuracy is important before they are used in management contexts. We evaluated the classification accuracy of two species distribution models designed to predict the distribution of pygmy rabbit Brachylagus idahoensis habitat in southwestern Wyoming, USA. The Nature Conservancy model was deductive and based on published information and expert opinion, whereas the Wyoming Natural Diversity Database model was statistically derived using historical observation data. We randomly selected 187 evaluation survey points throughout southwestern Wyoming in areas predicted to be habitat and areas predicted to be nonhabitat for each model. The Nature Conservancy model correctly classified 39 of 77 (50.6%) unoccupied evaluation plots and 65 of 88 (73.9%) occupied plots for an overall classification success of 63.3%. The Wyoming Natural Diversity Database model correctly classified 53 of 95 (55.8%) unoccupied plots and 59 of 88 (67.0%) occupied plots for an overall classification success of 61.2%. Based on 95% asymptotic confidence intervals, classification success of the two models did not differ. The models jointly classified 10.8% of the area as habitat and 47.4% of the area as nonhabitat, but were discordant in classifying the remaining 41.9% of the area. To evaluate how anthropogenic development affected model predictive success, we surveyed 120 additional plots among three density levels of gas-field road networks. Classification success declined sharply for both models as road-density level increased beyond 5 km of roads per km-squared area. Both models were more effective at predicting habitat than nonhabitat in relatively undeveloped areas, and neither was effective at accounting for the effects of gas-energy-development road networks. Resource managers who wish to know the amount of pygmy rabbit habitat present in an

  9. Predictive analytics for truck arrival time estimation : a field study at a European distribution center

    NARCIS (Netherlands)

    van der Spoel, Sjoerd; Amrit, Chintan Amrit; van Hillegersberg, Jos

    2017-01-01

    Distribution centres (DCs) are the hubs connecting transport streams in the supply chain. The synchronisation of coming and going cargo at a DC requires reliable arrival times. To achieve this, a reliable method to predict arrival times is needed. A literature review was performed to find the

  10. Predicting protein-protein interactions from multimodal biological data sources via nonnegative matrix tri-factorization.

    Science.gov (United States)

    Wang, Hua; Huang, Heng; Ding, Chris; Nie, Feiping

    2013-04-01

    Protein interactions are central to all the biological processes and structural scaffolds in living organisms, because they orchestrate a number of cellular processes such as metabolic pathways and immunological recognition. Several high-throughput methods, for example, yeast two-hybrid system and mass spectrometry method, can help determine protein interactions, which, however, suffer from high false-positive rates. Moreover, many protein interactions predicted by one method are not supported by another. Therefore, computational methods are necessary and crucial to complete the interactome expeditiously. In this work, we formulate the problem of predicting protein interactions from a new mathematical perspective--sparse matrix completion, and propose a novel nonnegative matrix factorization (NMF)-based matrix completion approach to predict new protein interactions from existing protein interaction networks. Through using manifold regularization, we further develop our method to integrate different biological data sources, such as protein sequences, gene expressions, protein structure information, etc. Extensive experimental results on four species, Saccharomyces cerevisiae, Drosophila melanogaster, Homo sapiens, and Caenorhabditis elegans, have shown that our new methods outperform related state-of-the-art protein interaction prediction methods.
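
    The matrix-completion view can be illustrated with a plain two-factor nonnegative factorization restricted to observed entries; the paper's tri-factorization and manifold regularization terms are deliberately omitted here. The multiplicative updates below minimize a masked Frobenius objective on random synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rank = 60, 5

# Synthetic binary "interaction" matrix with most entries unobserved
true = (rng.random((n, rank)) @ rng.random((rank, n)) > 1.1).astype(float)
mask = rng.random((n, n)) < 0.3          # ~30% of entries observed
X = true * mask

W = rng.random((n, rank)) + 0.1
H = rng.random((rank, n)) + 0.1
eps = 1e-9

# Multiplicative updates for  min ||mask * (X - W H)||_F^2  with W, H >= 0
for _ in range(200):
    WH = W @ H
    W *= ((mask * X) @ H.T) / ((mask * WH) @ H.T + eps)
    WH = W @ H
    H *= (W.T @ (mask * X)) / (W.T @ (mask * WH) + eps)

scores = W @ H                            # completed scores for unobserved pairs
print("reconstruction error on observed entries:",
      round(float(np.linalg.norm(mask * (X - scores))), 3))
```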

  11. Combining public participatory surveillance and occupancy modelling to predict the distributional response of Ixodes scapularis to climate change.

    Science.gov (United States)

    Lieske, David J; Lloyd, Vett K

    2018-03-01

    Ixodes scapularis, a known vector of Borrelia burgdorferi sensu stricto (Bbss), is undergoing range expansion in many parts of Canada. The province of New Brunswick, which borders jurisdictions with established populations of I. scapularis, constitutes a range expansion zone for this species. To better understand the current and potential future distribution of this tick under climate change projections, this study applied occupancy modelling to distributional records of adult ticks that successfully overwintered, obtained through passive surveillance. This study indicates that I. scapularis occurs throughout the southernmost portion of the province, in close proximity to coastlines and major waterways. Milder winter conditions, as indicated by the number of degree days, were associated with tick occurrence, yielding a model with a predictive accuracy of 0.845 (range: 0.828-0.893). Both RCP 4.5 and RCP 8.5 climate projections predict that a significant proportion of the province (roughly a quarter to a third) will be highly suitable for I. scapularis by the 2080s. Comparison with cases of canine infection shows good spatial agreement with baseline model predictions, but the presence of canine Borrelia infections beyond the climate envelope, defined by the highest probabilities of tick occurrence, suggests the presence of Bbss-carrying ticks distributed by long-range dispersal events. This research demonstrates that predictive statistical modelling of multi-year surveillance information is an efficient way to identify areas where I. scapularis is most likely to occur, and can be used to guide subsequent active sampling efforts in order to better understand fine-scale species distributional patterns. Copyright © 2018 The Authors. Published by Elsevier GmbH. All rights reserved.

  12. Spectral intensity dependence and isotropy of sources stronger than 0.1 Jy at 2700 MHz

    International Nuclear Information System (INIS)

    Balonek, T.J.; Broderick, J.J.; Condon, J.J.; Crawford, D.F.; Jauncey, D.L.

    1975-01-01

    The 1000-foot (305 m) telescope of the National Astronomy and Ionosphere Center was used to measure 430 MHz flux densities of sources stronger than 0.1 Jy at 2700 MHz. Distributions of the resulting two-point spectral indices α (430, 2700) of sources in the intensity range 0.1 ≤ S < 0.35 Jy were compared with α (318, 2700) distributions of sources stronger than 0.35 Jy at 2700 MHz. The median normal-component spectral index and fraction of flat-spectrum sources in the faintest sample do not continue the previously discovered trend toward increased spectral steepening of faint sources. This result differs from the prediction of simple evolutionary cosmological models and therefore favors the alternative explanation that local source-density inhomogeneities are responsible for the observed intensity dependence of spectral indices

  13. Prediction of pathogen growth on iceberg lettuce under real temperature history during distribution from farm to table.

    Science.gov (United States)

    Koseki, Shigenobu; Isobe, Seiichiro

    2005-10-25

    The growth of pathogenic bacteria Escherichia coli O157:H7, Salmonella spp., and Listeria monocytogenes on iceberg lettuce under constant and fluctuating temperatures was modelled in order to estimate the microbial safety of this vegetable during distribution from the farm to the table. Firstly, we examined pathogen growth on lettuce at constant temperatures, ranging from 5 to 25 degrees C, and then we obtained the growth kinetic parameters (lag time, maximum growth rate (μmax), and maximum population density (MPD)) using the Baranyi primary growth model. The parameters were similar to those predicted by the pathogen modelling program (PMP), with the exception of MPD. The MPD of each pathogen on lettuce was 2-4 log(10) CFU/g lower than that predicted by PMP. Furthermore, the MPD of pathogens decreased with decreasing temperature. The relationship between μmax and temperature was linear, in accordance with the Ratkowsky secondary model, as was the relationship between the MPD and temperature. Predictions of pathogen growth under fluctuating temperature used the Baranyi primary microbial growth model along with the Ratkowsky secondary model and the MPD equation. The fluctuating temperature profile used in this study was the real temperature history measured during distribution from the field at harvesting to the retail store. Overall predictions for each pathogen agreed well with observed viable counts in most cases. The bias and root mean square error (RMSE) of the prediction were small. The prediction in which μmax was based on PMP showed a trend of overestimation relative to the prediction based on lettuce. However, the prediction concerning E. coli O157:H7 and Salmonella spp. on lettuce greatly overestimated growth in the case of a temperature history starting relatively high, such as 25 degrees C for 5 h. In contrast, the overall prediction of L. monocytogenes under the same circumstances agreed with the observed data.
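
    The secondary-model step can be sketched as a Ratkowsky square-root relation supplying a temperature-dependent maximum growth rate to a reduced (lag-free) Baranyi-type primary model integrated over a fluctuating temperature history. All parameter values and the temperature trace below are invented for illustration and do not reproduce the study's fitted models.

```python
import numpy as np

def mu_max(temp_c, b=0.025, t_min=-1.0):
    """Ratkowsky square-root secondary model: sqrt(mu_max) = b * (T - Tmin)."""
    return np.maximum(b * (temp_c - t_min), 0.0) ** 2   # 1/h

# Hypothetical temperature history from harvest to retail (hours, deg C)
hours = np.arange(0, 48, 0.5)
temp = np.where(hours < 5, 25.0, np.where(hours < 24, 10.0, 5.0))

# Reduced (lag-free) primary model: dy/dt = mu_max(T) * (1 - exp(y - ymax)), y = ln N
y = np.log(1e2)                 # initial count ~1e2 CFU/g
y_max = np.log(1e7)             # maximum population density (lower on lettuce)
log_counts = []
for i in range(1, len(hours)):
    dt = hours[i] - hours[i - 1]
    y += mu_max(temp[i]) * (1.0 - np.exp(y - y_max)) * dt
    log_counts.append(y / np.log(10))

print(f"predicted increase: {log_counts[-1] - 2.0:.2f} log10 CFU/g over 48 h")
```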

  14. Spatial distribution and source apportionment of water pollution in different administrative zones of Wen-Rui-Tang (WRT) river watershed, China.

    Science.gov (United States)

    Yang, Liping; Mei, Kun; Liu, Xingmei; Wu, Laosheng; Zhang, Minghua; Xu, Jianming; Wang, Fan

    2013-08-01

    Water quality degradation in river systems has caused great concerns all over the world. Identifying the spatial distribution and sources of water pollutants is the very first step for efficient water quality management. A set of water samples collected bimonthly at 12 monitoring sites in 2009 and 2010 was analyzed to determine the spatial distribution of critical parameters and to apportion the sources of pollutants in the Wen-Rui-Tang (WRT) river watershed, near the East China Sea. The 12 monitoring sites were divided into three administrative zones (urban, suburban, and rural), considering differences in land use and population density. Multivariate statistical methods [one-way analysis of variance, principal component analysis (PCA), and absolute principal component score-multiple linear regression (APCS-MLR) methods] were used to investigate the spatial distribution of water quality and to apportion the pollution sources. Results showed that most water quality parameters had no significant difference between the urban and suburban zones, whereas these two zones showed worse water quality than the rural zone. Based on PCA and APCS-MLR analysis, urban domestic sewage and commercial/service pollution, suburban domestic sewage along with fluorine point source pollution, and agricultural nonpoint source pollution with rural domestic sewage pollution were identified as the main pollution sources in the urban, suburban, and rural zones, respectively. Understanding the water pollution characteristics of different administrative zones could provide insights for effective water management policy-making, especially in areas that span multiple administrative zones.
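
    The PCA/APCS-MLR sequence can be sketched with scikit-learn: standardize the water-quality matrix, extract principal components, form absolute principal component scores by subtracting the score of an artificial zero-concentration sample, and regress each pollutant on those scores to apportion source contributions. The data below are random placeholders, and the implementation is a schematic of the general receptor-modelling recipe, not the cited analysis.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.lognormal(mean=1.0, sigma=0.5, size=(120, 8))   # 120 samples, 8 parameters

scaler = StandardScaler().fit(X)
Z = scaler.transform(X)

pca = PCA(n_components=3).fit(Z)
scores = pca.transform(Z)

# Absolute principal component scores: subtract the score of an artificial
# sample with zero concentration for every parameter.
z0 = scaler.transform(np.zeros((1, X.shape[1])))
apcs = scores - pca.transform(z0)

# Regress one pollutant (column 0) on the APCS to apportion its concentration
# among the retained "sources"; the intercept is the unexplained part.
reg = LinearRegression().fit(apcs, X[:, 0])
contrib = reg.coef_ * apcs.mean(axis=0)
print("mean contribution of each source:", np.round(contrib, 2),
      "  unexplained:", round(reg.intercept_, 2))
```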

  15. Analysis of mixed mode microwave distribution manifolds

    International Nuclear Information System (INIS)

    White, T.L.

    1982-09-01

    The 28-GHz microwave distribution manifold used in the ELMO Bumpy Torus-Scale (EBT-S) experiments consists of a toroidal metallic cavity, whose dimensions are much greater than a wavelength, fed by a source of microwave power. Equalization of the mixed mode power distribution to the 24 cavities of EBT-S is accomplished by empirically adjusting the coupling irises which are equally spaced around the manifold. The performance of the manifold to date has been very good, yet no analytical models exist for optimizing manifold transmission efficiency or for scaling this technology to the EBT-P manifold design. The present report develops a general model for mixed mode microwave distribution manifolds based on isotropic plane wave sources of varying amplitudes that are distributed toroidally around the manifold. The calculated manifold transmission efficiency for the most recent EBT-S coupling iris modification is 90%. This agrees with the average measured transmission efficiency. Also, the model predicts the coupling iris areas required to balance the distribution of microwave power while maximizing transmission efficiency, and losses in the waveguide feeds connecting the irises to the cavities of EBT are calculated using an approach similar to the calculation of manifold losses. The model will be used to evaluate EBT-P manifold designs

  16. Large-eddy simulation of convective boundary layer generated by highly heated source with open source code, OpenFOAM

    International Nuclear Information System (INIS)

    Hattori, Yasuo; Suto, Hitoshi; Eguchi, Yuzuru; Sano, Tadashi; Shirai, Koji; Ishihara, Shuji

    2011-01-01

    Spatial and temporal characteristics of turbulence structures in the close vicinity of a heat source, a horizontal upward-facing round plate heated to high temperature, are examined using well-resolved large-eddy simulations. Verification is carried out through comparison with experiments: the predicted statistics, including the PDF of temperature fluctuations, agree well with measurements, indicating that the present simulations can appropriately reproduce turbulence structures near the heat source. The reproduced three-dimensional thermal and fluid fields in the close vicinity of the heat source reveal the development of coherent structures along the surface: stationary, streaky flow patterns appear near the edge, and these patterns randomly shift to cell-like patterns with incursion into the center region, resulting in thermal-plume meandering. Both patterns have very thin structures, but the depth of the streaky structures is considerably smaller than that of the cell-like patterns; this discrepancy causes the layered structures. These structures are the source of peculiar turbulence characteristics whose prediction is quite difficult with RANS-type turbulence models. The understanding of such structures obtained in the present study should help improve the turbulence models used in nuclear engineering. (author)

  17. A parietal biomarker for ADHD liability: as predicted by the Distributed Effects Perspective Model of ADHD

    Directory of Open Access Journals (Sweden)

    T. Sigi eHale

    2015-05-01

    Full Text Available Background: We previously hypothesized that poor task-directed sensory information processing should be indexed by increased weighting of right hemisphere (RH) biased attention and visuo-perceptual brain functions during task operations, and have demonstrated this phenotype in ADHD across multiple studies, using multiple methodologies. However, in our recent Distributed Effects Model of ADHD, we surmised that this phenotype is not ADHD specific, but rather more broadly reflective of any circumstance that disrupts the induction and maintenance of an emergent task-directed neural architecture. Under this view, increased weighting of RH biased attention and visuo-perceptual brain functions is expected to generally index neurocognitive sets that are not optimized for task-directed thought and action, and when durably expressed, liability for ADHD. Method: The current study tested this view by examining whether previously identified rightward parietal EEG asymmetry in ADHD was associated with common ADHD characteristics and comorbidities (i.e., ADHD risk factors). Results: Barring one exception (non-right-handedness), we found that it was. Rightward parietal asymmetry was associated with carrying the DRD4-7R risk allele, being male, having mood disorder, and having anxiety disorder. However, differences in the specific expression of rightward parietal asymmetry were observed, which are discussed in relation to possible unique mechanisms underlying ADHD liability in different ADHD risk factors. Conclusion: Rightward parietal asymmetry appears to be a durable feature of ADHD liability, as predicted by the Distributed Effects Perspective Model of ADHD. Moreover, variability in the expression of this phenotype may shed light on different sources of ADHD liability.

  18. Dark Energy Survey Year 1 Results: Cross-Correlation Redshifts in the DES -- Calibration of the Weak Lensing Source Redshift Distributions

    Energy Technology Data Exchange (ETDEWEB)

    Davis, C.; et al.

    2017-10-06

    We present the calibration of the Dark Energy Survey Year 1 (DES Y1) weak lensing source galaxy redshift distributions from clustering measurements. By cross-correlating the positions of source galaxies with luminous red galaxies selected by the redMaGiC algorithm, we measure the redshift distributions of the source galaxies as placed into different tomographic bins. These measurements constrain shifts in the assumed redshift distributions to an accuracy of ~0.02 and can be computed even when the clustering measurements do not span the full redshift range. The highest-redshift source bin is not constrained by the clustering measurements because of the minimal redshift overlap with the redMaGiC galaxies. We compare our constraints with those obtained from COSMOS 30-band photometry and find that our two very different methods produce consistent constraints.

  19. The occurrence and distribution of a group of organic micropollutants in Mexico City's water sources.

    Science.gov (United States)

    Félix-Cañedo, Thania E; Durán-Álvarez, Juan C; Jiménez-Cisneros, Blanca

    2013-06-01

    The occurrence and distribution of a group of 17 organic micropollutants in surface and groundwater sources from Mexico City was determined. Water samples were taken from 7 wells, 4 dams and 15 tanks where surface and groundwater are mixed and stored before distribution. Results evidenced the occurrence of seven of the target compounds in groundwater: salicylic acid, diclofenac, di-2-ethylhexylphthalate (DEHP), butylbenzylphthalate (BBP), triclosan, bisphenol A (BPA) and 4-nonylphenol (4-NP). In surface water, 11 target pollutants were detected: the same as those found in groundwater, as well as naproxen, ibuprofen, ketoprofen and gemfibrozil. In groundwater, concentration ranges of salicylic acid, 4-NP and DEHP, the most frequently found compounds, were 1-464, 1-47 and 19-232 ng/L, respectively; while in surface water, these ranges were 29-309, 89-655 and 75-2,282 ng/L, respectively. Eleven target compounds were detected in mixed water. Concentrations in mixed water were higher than those determined in groundwater but lower than those detected in surface water. Unlike in ground and surface water, the pesticide 2,4-D was found in mixed water, indicating that some pollutants can reach areas where they are not originally present in the local water sources. Concentrations of the organic micropollutants found in this study were similar to or lower than those reported in water sources from developed countries. This study provides information that enriches the state of the art on the occurrence of organic micropollutants in water sources worldwide, notably in megacities of developing countries. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. [Predictive distribution and planting GAP of Cyathula officinalis in China based on 3S technology and MaxEnt modelling].

    Science.gov (United States)

    Wu, Ming-Yan; He, Lan; Chen, Jia-Li; Dong, Guang; Cheng, Wu-Xue

    2017-11-01

    Research on the predictive distribution and planting GAP of Cyathula officinalis in China helps provide a scientific basis for its protection and planting popularization. Based on data from 63 distribution sites and 49 ecological variables, and using the MaxEnt ecological niche model and 3S technology, we performed a quantitative analysis of the suitable distribution and planting GAP of C. officinalis in China. Our results show that: ① the area of suitable distribution of C. officinalis is about 634 385.80 km² in total, and mainly in Northeastern and Southeastern Sichuan, Northern and Southeastern Yunnan, Western and Southwestern Guizhou, Southwestern and Northeastern Chongqing, Southwestern Shaanxi, Southeastern Gansu, Western Guangxi, Southeastern Tibet. ② The main ecological factors determining the potential distribution are precipitation, altitude, minimum temperature of the coldest month, soil type, and monthly mean temperature. ③ The planting GAP regions are mainly in Guangyuan, Mianyang, Ya'an, Leshan, Liangshan, Panzhihua of Sichuan province, Hanzhong of Shaanxi province, Dali, Nujiang, Chuxiong, Baoshan, Qujing, Wenshan of Yunnan province, and the southwestern autonomous prefecture in Guizhou province. The results are of great significance for understanding the growth environment, predicting the potential distribution and promoting planting popularization of C. officinalis. Copyright© by the Chinese Pharmaceutical Association.

  1. An inverse source location algorithm for radiation portal monitor applications

    International Nuclear Information System (INIS)

    Miller, Karen A.; Charlton, William S.

    2010-01-01

    Radiation portal monitors are being deployed at border crossings throughout the world to prevent the smuggling of nuclear and radiological materials; however, a tension exists between security and the free-flow of commerce. Delays at ports-of-entry have major economic implications, so it is imperative to minimize portal monitor screening time. We have developed an algorithm to locate a radioactive source using a distributed array of detectors, specifically for use at border crossings. To locate the source, we formulated an optimization problem where the objective function describes the least-squares difference between the actual and predicted detector measurements. The predicted measurements are calculated by solving the 3-D deterministic neutron transport equation given an estimated source position. The source position is updated using the steepest descent method, where the gradient of the objective function with respect to the source position is calculated using adjoint transport calculations. If the objective function is smaller than the convergence criterion, then the source position has been identified. This paper presents the derivation of the underlying equations in the algorithm as well as several computational test cases used to characterize its accuracy.
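
    The optimization loop described above — a least-squares mismatch between measured and predicted detector responses, minimized by steepest descent on the source position — can be sketched with a toy 1/r² forward model standing in for the deterministic transport solve and a finite-difference gradient standing in for the adjoint calculation. The detector layout, source strength and step size below are all illustrative.

```python
import numpy as np

detectors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])

def predicted_counts(src, strength=5.0e4):
    """Toy forward model: detector response falls off as 1/r^2
    (a stand-in for the deterministic transport solve)."""
    r2 = np.sum((detectors - src) ** 2, axis=1)
    return strength / np.maximum(r2, 1e-6)

def objective(src, measured):
    """Least-squares mismatch between measured and predicted counts."""
    return np.sum((predicted_counts(src) - measured) ** 2)

true_src = np.array([6.5, 3.0])
measured = predicted_counts(true_src)

src = np.array([5.0, 5.0])        # initial guess for the source position
h, step_len = 1e-4, 0.02
for _ in range(400):
    # Finite-difference gradient (a stand-in for the adjoint calculation)
    grad = np.array([
        (objective(src + [h, 0.0], measured) - objective(src - [h, 0.0], measured)) / (2 * h),
        (objective(src + [0.0, h], measured) - objective(src - [0.0, h], measured)) / (2 * h),
    ])
    norm = np.linalg.norm(grad)
    if norm < 1e-6:
        break
    src = src - step_len * grad / norm   # fixed-length steepest-descent step

print("estimated source position:", np.round(src, 1))
```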

  2. Morphology, chemistry and distribution of neoformed spherulites in agricultural land affected by metallurgical point-source pollution

    NARCIS (Netherlands)

    Leguedois, S.; Oort, van F.; Jongmans, A.G.; Chevalier, P.

    2004-01-01

    Metal distribution patterns in superficial soil horizons of agricultural land affected by metallurgical point-source pollution were studied using optical and electron microscopy, synchrotron radiation and spectroscopy analyses. The site is located in northern France, at the center of a former entry

  3. Distribution, partitioning and sources of polycyclic aromatic hydrocarbons in the water–SPM–sediment system of Lake Chaohu, China

    Energy Technology Data Exchange (ETDEWEB)

    Qin, Ning [MOE Laboratory for Earth Surface Processes, College of Urban and Environmental Sciences, Peking University, Beijing 100871 (China); State Key Laboratory of Environmental Criteria and Risk Assessment, Chinese Research Academy of Environmental Sciences, Beijing 100012 (China); He, Wei; Kong, Xiang-Zhen; Liu, Wen-Xiu; He, Qi-Shuang; Yang, Bin; Wang, Qing-Mei; Yang, Chen; Jiang, Yu-Jiao [MOE Laboratory for Earth Surface Processes, College of Urban and Environmental Sciences, Peking University, Beijing 100871 (China); Jorgensen, Sven Erik [Section of Toxicology and Environmental Chemistry, Institute A, University of Copenhagen, University Park 2, DK 2100, Copenhagen Ø (Denmark); Xu, Fu-Liu, E-mail: xufl@urban.pku.edu.cn [MOE Laboratory for Earth Surface Processes, College of Urban and Environmental Sciences, Peking University, Beijing 100871 (China); Zhao, Xiao-Li, E-mail: zhaoxiaoli_zxl@126.com [State Key Laboratory of Environmental Criteria and Risk Assessment, Chinese Research Academy of Environmental Sciences, Beijing 100012 (China)

    2014-10-15

    The residual levels of polycyclic aromatic hydrocarbons (PAHs) in the water, suspended particulate matter (SPM) and sediment from Lake Chaohu were measured with a gas chromatograph–mass spectrometer (GC–MS). The spatial–temporal distributions and the SPM–water partition of PAHs and their influencing factors were investigated. The potential sources and contributions of PAHs in the sediment were estimated by positive matrix factorization (PMF) and probabilistic stable isotopic analysis (PSIA). The results showed that the average residual levels of total PAHs (PAH16) in the water, SPM and sediment were 170.7 ± 70.8 ng/L, 210.7 ± 160.7 ng/L and 908.5 ± 1878.1 ng/g dry weight, respectively. The same spatial distribution trend of PAH16 in the water, SPM and sediment was found from high to low: river inflows > western lake > eastern lake > water source area. There was an obvious seasonal trend of PAH16 in the water, while no obvious seasonal trend was found in the SPM. The residues and distributions of PAHs in the water, SPM and sediment relied heavily on carbon content. Significant Pearson correlations were found between LogK{sub oc} and LogK{sub ow} as well as some hydro-meteorological factors. Three major sources of PAHs, including coal and biomass combustion and vehicle emissions, were identified. - Highlights: • Highest residual level of total PAHs in the SPM was detected. • Similar spatial trend of PAH16 in the water, SPM and sediment. • PAHs distributions in the water-sediment system relied heavily on organic carbon. • Correlations between LogK{sub oc} and LogK{sub ow} as well as hydro-meteorological factors. • Coal and biomass combustions and vehicle emissions were three major sources of PAHs.

  4. Asymptotically Constant-Risk Predictive Densities When the Distributions of Data and Target Variables Are Different

    Directory of Open Access Journals (Sweden)

    Keisuke Yano

    2014-05-01

    Full Text Available We investigate the asymptotic construction of constant-risk Bayesian predictive densities under the Kullback–Leibler risk when the distributions of data and target variables are different and have a common unknown parameter. It is known that the Kullback–Leibler risk is asymptotically equal to a trace of the product of two matrices: the inverse of the Fisher information matrix for the data and the Fisher information matrix for the target variables. We assume that the trace has a unique maximum point with respect to the parameter. We construct asymptotically constant-risk Bayesian predictive densities using a prior depending on the sample size. Further, we apply the theory to the subminimax estimator problem and the prediction based on the binary regression model.
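
    In symbols, the asymptotic statement quoted above can be written as follows (the 1/(2n) normalization and the notation are assumptions of this sketch, not taken from the abstract):

```latex
\mathbb{E}_\theta\!\left[\,
  D_{\mathrm{KL}}\!\big( q(\,\cdot \mid \theta) \,\big\|\, \hat{q}_n \big)
\right]
\;\approx\;
\frac{1}{2n}\,\operatorname{tr}\!\left\{ I_D(\theta)^{-1}\, \tilde{I}_T(\theta) \right\}
```

    where I_D is the Fisher information matrix for the data distribution, \tilde{I}_T is the Fisher information matrix for the target variables, and \hat{q}_n is the predictive density built from n observations.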

  5. Comparison of experimental pulse-height distributions in germanium detectors with integrated-tiger-series-code predictions

    International Nuclear Information System (INIS)

    Beutler, D.E.; Halbleib, J.A.; Knott, D.P.

    1989-01-01

    This paper reports pulse-height distributions in two different types of Ge detectors measured for a variety of medium-energy x-ray bremsstrahlung spectra. These measurements have been compared to predictions using the integrated tiger series (ITS) Monte Carlo electron/photon transport code. In general, the authors find excellent agreement between experiments and predictions using no free parameters. These results demonstrate that the ITS codes can predict the combined bremsstrahlung production and energy deposition with good precision (within measurement uncertainties). The one region of disagreement observed occurs for low-energy (<50 keV) photons using low-energy bremsstrahlung spectra. In this case the ITS codes appear to underestimate the produced and/or absorbed radiation by almost an order of magnitude

  6. Electron energy spectrum produced in radio sources by turbulent, resonant acceleration

    International Nuclear Information System (INIS)

    Eilek, J.A.; Henriksen, R.N.

    1984-01-01

    We consider relativistic particle acceleration by resonant Alfven waves which are driven internally in a radio source from fully developed fluid turbulence. We find that self-similar behavior as described by Lacombe, f(p) ∝ p^(-s) but with s ≈ 4.5, arises self-consistently when this turbulent wave driving coexists with synchrotron losses. The coupling of the wave and particle distributions provides feedback which drives an arbitrary initial distribution to the stable, self-similar form. The model predicts that turbulent plasma in a radio source should evolve toward a synchrotron spectral index 0.5 ≲ α ≲ 1.0 in one particle lifetime, and that the average spectrum of most sources should also be in this range. The theory may also be applicable to other turbulent sites, such as cosmic-ray reacceleration in the interstellar medium

  7. Study and Analysis of an Intelligent Microgrid Energy Management Solution with Distributed Energy Sources

    Directory of Open Access Journals (Sweden)

    Swaminathan Ganesan

    2017-09-01

    Full Text Available In this paper, a robust energy management solution that facilitates the optimum and economic control of energy flows throughout a microgrid network is proposed. Renewable energy sources, whose penetration is increasing, are highly intermittent in nature; the proposed solution nevertheless demonstrates highly efficient energy management. The study enables precise management of power flows by forecasting renewable energy generation, estimating the energy available in the storage batteries, and invoking the appropriate mode of operation based on the load demand, so as to achieve efficient and economic operation. The predefined mode of operation is derived from an expert rule set and schedules the load and distributed energy sources along with the utility grid.

  8. Developing an Open Source, Reusable Platform for Distributed Collaborative Information Management in the Early Detection Research Network

    Science.gov (United States)

Hart, Andrew F.; Verma, Rishi; Mattmann, Chris A.; Crichton, Daniel J.; Kelly, Sean; Kincaid, Heather; Hughes, Steven; Ramirez, Paul; Goodale, Cameron; Anton, Kristen

    2012-01-01

For the past decade, the NASA Jet Propulsion Laboratory, in collaboration with Dartmouth University, has served as the center for informatics for the Early Detection Research Network (EDRN). The EDRN is a multi-institution research effort funded by the U.S. National Cancer Institute (NCI) and tasked with identifying and validating biomarkers for the early detection of cancer. As the distributed network has grown, increasingly formal processes have been developed for the acquisition, curation, storage, and dissemination of heterogeneous research information assets, and an informatics infrastructure has emerged. In this paper we discuss the evolution of EDRN informatics, its success as a mechanism for distributed information integration, and the potential sustainability and reuse benefits of emerging efforts to make the platform components themselves open source. We describe our experience transitioning a large closed-source software system to a community-driven, open source project at the Apache Software Foundation, and point to lessons learned that will guide our present efforts to promote the reuse of the EDRN informatics infrastructure by a broader community.

  9. Comparison of four modeling tools for the prediction of potential distribution for non-indigenous weeds in the United States

    Science.gov (United States)

    Magarey, Roger; Newton, Leslie; Hong, Seung C.; Takeuchi, Yu; Christie, Dave; Jarnevich, Catherine S.; Kohl, Lisa; Damus, Martin; Higgins, Steven I.; Miller, Leah; Castro, Karen; West, Amanda; Hastings, John; Cook, Gericke; Kartesz, John; Koop, Anthony

    2018-01-01

This study compares four models for predicting the potential distribution of non-indigenous weed species in the conterminous U.S. The comparison focused on evaluating modeling tools and protocols as currently used for weed risk assessment or for predicting the potential distribution of invasive weeds. We used six weed species (three highly invasive and three less invasive non-indigenous species) that have been established in the U.S. for more than 75 years. The experiment involved providing non-U.S. location data to users familiar with one of the four evaluated techniques, who then developed predictive models that were applied to the United States without knowing the identity of the species or its U.S. distribution. We compared a simple GIS climate matching technique known as Proto3, a simple climate matching tool CLIMEX Match Climates, the correlative model MaxEnt, and a process model known as the Thornley Transport Resistance (TTR) model. Two experienced users ran each modeling tool except TTR, which had one user. Models were trained with global species distribution data excluding any U.S. data, and then were evaluated using the current known U.S. distribution. The influence of weed species identity and modeling tool on prevalence and sensitivity effects was compared using a generalized linear mixed model. The choice of modeling tool itself had low statistical significance, while weed species identity alone accounted for 69.1% and 48.5% of the variance for prevalence and sensitivity, respectively. These results suggest that simple modeling tools might perform as well as complex ones in the case of predicting potential distribution for a weed not yet present in the United States. Considerations of model accuracy should also be balanced with those of reproducibility and ease of use. More important than the choice of modeling tool is the construction of robust protocols and testing both new and experienced users under blind test conditions that approximate operational conditions.

  10. A Random Forest Approach to Predict the Spatial Distribution of Sediment Pollution in an Estuarine System

    Science.gov (United States)

    Modeling the magnitude and distribution of sediment-bound pollutants in estuaries is often limited by incomplete knowledge of the site and inadequate sample density. To address these modeling limitations, a decision-support tool framework was conceived that predicts sediment cont...
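
    The record is truncated, but the named approach (a Random Forest predicting sediment pollution from sparse samples plus spatial covariates) can be sketched roughly as below; the covariate names and synthetic data are purely illustrative stand-ins for real survey data.

```python
# Rough sketch of a Random Forest spatial-prediction workflow; feature names
# and synthetic data are hypothetical stand-ins for real estuarine covariates.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.uniform(0, 10, n),    # depth_m
    rng.uniform(0, 1, n),     # fraction_fines
    rng.uniform(0, 5, n),     # distance_to_outfall_km
])
# Synthetic "pollution" response just to make the example runnable.
y = 2.0 * X[:, 1] - 0.3 * X[:, 2] + 0.1 * X[:, 0] + rng.normal(0, 0.2, n)

model = RandomForestRegressor(n_estimators=300, random_state=0)
print("CV R^2:", round(float(cross_val_score(model, X, y, cv=5).mean()), 3))

model.fit(X, y)
# Predict on a hypothetical grid of unsampled locations.
grid = np.column_stack([np.full(4, 5.0), np.linspace(0, 1, 4), np.linspace(0, 5, 4)])
print("Predicted concentrations:", model.predict(grid).round(2))
```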

  11. Characterization of a Distributed Plasma Ionization Source (DPIS) for Ion Mobility Spectrometry and Mass Spectrometry

    International Nuclear Information System (INIS)

    Waltman, Melanie J.; Dwivedi, Prabha; Hill, Herbert; Blanchard, William C.; Ewing, Robert G.

    2008-01-01

A recently developed atmospheric pressure ionization source, a distributed plasma ionization source (DPIS), was characterized and compared to commonly used atmospheric pressure ionization sources with both mass spectrometry and ion mobility spectrometry. The source consisted of two electrodes of different sizes separated by a thin dielectric. Application of a high RF voltage across the electrodes generated a plasma in air, yielding both positive and negative ions depending on the polarity of the applied potential. These reactant ions subsequently ionized the analyte vapors. The reactant ions generated were similar to those created in a conventional point-to-plane corona discharge ion source. The positive reactant ions generated by the source were mass identified as solvated protons of general formula (H2O)nH+ with (H2O)2H+ as the most abundant reactant ion. The negative reactant ions produced were mass identified primarily as CO3-, NO3-, NO2-, O3- and O2- of various relative intensities. The predominant ion and relative ion ratios varied depending upon source construction and supporting gas flow rates. A few compounds including drugs, explosives and environmental pollutants were selected to evaluate the new ionization source. The source was operated continuously for several months and, although deterioration was observed visually, it continued to produce ions at a rate similar to that of the initial conditions. The results indicated that the DPIS may have a longer operating life than a conventional corona discharge source.

  12. Light-flavor sea-quark distributions in the nucleon in the SU(3) chiral quark soliton model. I. Phenomenological predictions

    International Nuclear Information System (INIS)

    Wakamatsu, M.

    2003-01-01

Theoretical predictions are given for the light-flavor sea-quark distributions in the nucleon, including the strange quark ones, on the basis of the flavor SU(3) version of the chiral quark soliton model. Careful account is taken of the SU(3) symmetry breaking effects due to the mass difference $\Delta m_s$ between the strange and nonstrange quarks, which is the only parameter necessary for the flavor SU(3) generalization of the model. Particular emphasis is put on the light-flavor sea-quark asymmetry as exemplified by the observables $\bar{d}(x)-\bar{u}(x)$, $\bar{d}(x)/\bar{u}(x)$, $\Delta\bar{u}(x)-\Delta\bar{d}(x)$, as well as on the particle-antiparticle asymmetry of the strange quark distributions represented by $s(x)-\bar{s}(x)$, $s(x)/\bar{s}(x)$, $\Delta s(x)-\Delta\bar{s}(x)$, etc. As for the unpolarized sea-quark distributions, the predictions of the model seem qualitatively consistent with the available phenomenological information provided by the NMC data for $\bar{d}(x)-\bar{u}(x)$, the E866 data for $\bar{d}(x)/\bar{u}(x)$, the CCFR data and the fit of Barone et al. for $s(x)/\bar{s}(x)$, etc. The model is shown to give several unique predictions also for the spin-dependent sea-quark distributions, such that $\Delta s(x) \ll \Delta\bar{s}(x) \lesssim 0$ and $\Delta\bar{d}(x) < 0 < \Delta\bar{u}(x)$, although the verification of these predictions must await more elaborate experimental investigations in the near future

  13. Predicting Environmental Suitability for a Rare and Threatened Species (Lao Newt, Laotriton laoensis) Using Validated Species Distribution Models

    Science.gov (United States)

    Chunco, Amanda J.; Phimmachak, Somphouthone; Sivongxay, Niane; Stuart, Bryan L.

    2013-01-01

The Lao newt (Laotriton laoensis) is a recently described species currently known only from northern Laos. Little is known about the species, but it is threatened as a result of overharvesting. We integrated field survey results with climate and altitude data to predict the geographic distribution of this species using the niche modeling program Maxent, and we validated these predictions by using interviews with local residents to confirm model predictions of presence and absence. The results of the validated Maxent models were then used to characterize the environmental conditions of areas predicted suitable for L. laoensis. Finally, we overlaid the resulting model with a map of current national protected areas in Laos to determine whether or not any land predicted to be suitable for this species is coincident with a national protected area. We found that both area under the curve (AUC) values and interview data provided strong support for the predictive power of these models, and we suggest that interview data could be used more widely in species distribution niche modeling. Our results further indicated that this species is most likely geographically restricted to high altitude regions (i.e., over 1,000 m elevation) in northern Laos and that only a minute fraction of suitable habitat is currently protected. This work thus emphasizes that increased protection efforts, including listing this species as endangered and the establishment of protected areas in the region predicted to be suitable for L. laoensis, are urgently needed. PMID:23555808

  14. MEG (Magnetoencephalography) multipolar modeling of distributed sources using RAP-MUSIC (Recursively Applied and Projected Multiple Signal Characterization)

    Energy Technology Data Exchange (ETDEWEB)

    Mosher, J. C. (John C.); Baillet, S. (Sylvain); Jerbi, K. (Karim); Leahy, R. M. (Richard M.)

    2001-01-01

    We describe the use of truncated multipolar expansions for producing dynamic images of cortical neural activation from measurements of the magnetoencephalogram. We use a signal-subspace method to find the locations of a set of multipolar sources, each of which represents a region of activity in the cerebral cortex. Our method builds up an estimate of the sources in a recursive manner, i.e. we first search for point current dipoles, then magnetic dipoles, and finally first order multipoles. The dynamic behavior of these sources is then computed using a linear fit to the spatiotemporal data. The final step in the procedure is to map each of the multipolar sources into an equivalent distributed source on the cortical surface. The method is illustrated through an application to epileptic interictal MEG data.
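
    As a rough illustration of the signal-subspace scanning step described above (not the authors' implementation, and simplified to a single fixed-orientation source scan over random stand-in lead fields):

```python
# Simplified subspace-scanning sketch in the spirit of (RAP-)MUSIC:
# estimate the signal subspace from the spatiotemporal data, then score
# candidate source topographies by their subspace correlation.
# The forward fields here are random stand-ins for a real MEG lead-field.
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_times, n_candidates, rank = 64, 200, 500, 3

L = rng.normal(size=(n_sensors, n_candidates))           # hypothetical lead fields
true_idx = 123
data = np.outer(L[:, true_idx], np.sin(np.linspace(0, 20, n_times)))
data += 0.05 * rng.normal(size=data.shape)                # sensor noise

# Signal subspace from the SVD of the data matrix.
U, _, _ = np.linalg.svd(data, full_matrices=False)
Us = U[:, :rank]

# Subspace correlation of each candidate topography with the signal subspace.
G = L / np.linalg.norm(L, axis=0)                         # unit-norm columns
subcorr = np.linalg.norm(Us.T @ G, axis=0)                # values in [0, 1]

print("best candidate:", int(np.argmax(subcorr)), "score:", round(float(subcorr.max()), 3))
```

    In the recursive variants, the contribution of each found source is projected out of the data before the next scan, which is the step that lets the method move from dipoles to higher-order multipoles as described in the record.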

  15. Measurements and predictions of the air distribution systems in high compute density (Internet) data centers

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jinkyun [HIMEC (Hanil Mechanical Electrical Consultants) Ltd., Seoul 150-103 (Korea); Department of Architectural Engineering, Yonsei University, Seoul 120-749 (Korea); Lim, Taesub; Kim, Byungseon Sean [Department of Architectural Engineering, Yonsei University, Seoul 120-749 (Korea)

    2009-10-15

When equipment power density increases, a critical goal of a data center cooling system is to separate the equipment exhaust air from the equipment intake air in order to prevent the IT servers from overheating. Cooling systems for data centers are primarily differentiated according to the way they distribute air. Aside from fully ducted air distribution methods, the six combinations of flooded and locally ducted air distribution make up the vast majority of all installations. Once the air distribution system (ADS) is selected, there are other elements that must be integrated into the system design. In this research, the design parameters and IT environmental aspects of the cooling system were studied for a high heat density data center. CFD simulation analysis was carried out in order to compare the heat removal efficiencies of various air distribution systems. The IT environment of an actual operating data center was measured to validate a model for predicting the effect of different air distribution systems. A method for planning and design of the appropriate air distribution system is described. IT professionals versed in precision air distribution mechanisms, components, and configurations can work more effectively with mechanical engineers to ensure the specification and design of optimized cooling solutions. (author)

  16. Imaging phase holdup distribution of three phase flow systems using dual source gamma ray tomography

    International Nuclear Information System (INIS)

    Varma, Rajneesh; Al-Dahhan, Muthanna; O'Sullivan, Joseph

    2008-01-01

Full text: Multiphase reaction and process systems are used in abundance in the chemical and biochemical industry. Tomography has been successfully employed to visualize the hydrodynamics of multiphase systems. Most tomography methods (gamma ray, x-ray, and electrical capacitance and resistance) have been successfully implemented for two-phase dynamic systems. However, a significant number of chemical and biochemical systems consist of three dynamic phases. Research effort directed towards the development of tomography techniques to image such dynamic systems has met with partial success, for specific systems and limited operating conditions. A dual source tomography scanner has been developed that uses the 661 keV and 1332 keV photopeaks from 137Cs and 60Co for imaging three phase systems. A new approach has been developed and applied that uses the polyenergetic Alternating Minimization (A-M) algorithm, developed by O'Sullivan and Benac (2007), for imaging the holdup distribution in three-phase dynamic systems. The new approach avoids the traditional post-image-processing approach, in which attenuation images of the mixed flow obtained from gamma ray photons of two different energies are used to determine the holdup of the three phases; here the holdup images are reconstructed directly from the gamma ray transmission data. The dual source gamma ray tomography scanner and the algorithm were validated using a three phase phantom. Based on the validation, three phase holdup studies were carried out in a slurry bubble column containing gas, liquid and solid phases in a dynamic state using dual energy gamma ray tomography. The key results of the holdup distribution studies in the slurry bubble column, along with the validation of the dual source gamma ray tomography system, are presented and discussed

  17. Large-Scale Prediction of Seagrass Distribution Integrating Landscape Metrics and Environmental Factors: The Case of Cymodocea nodosa (Mediterranean–Atlantic)

    KAUST Repository

    Chefaoui, Rosa M.

    2015-05-05

    Understanding the factors that affect seagrass meadows encompassing their entire range of distribution is challenging yet important for their conservation. Here, we predict the realized and potential distribution for the species Cymodocea nodosa modelling its environmental niche in the Mediterranean and adjacent Atlantic coastlines. We use a combination of environmental variables and landscape metrics to perform a suite of predictive algorithms which enables examination of the niche and find suitable habitats for the species. The most relevant environmental variables defining the distribution of C. nodosa were sea surface temperature (SST) and salinity. We found suitable habitats at SST from 5.8 °C to 26.4 °C and salinity ranging from 17.5 to 39.3. Optimal values of mean winter wave height ranged between 1.2 and 1.5 m, while waves higher than 2.5 m seemed to limit the presence of the species. The influence of nutrients and pH, despite having weight on the models, was not so clear in terms of ranges that confine the distribution of the species. Landscape metrics able to capture variation in the coastline enhanced significantly the accuracy of the models, despite the limitations caused by the scale of the study. We found potential suitable areas not occupied by the seagrass mainly in coastal regions of North Africa and the Adriatic coast of Italy. The present study describes the realized and potential distribution of a seagrass species, providing the first global model of the factors that can be shaping the environmental niche of C. nodosa throughout its range. We identified the variables constraining its distribution as well as thresholds delineating its environmental niche. Landscape metrics showed promising prospects for the prediction of coastal species dependent on the shape of the coast. By contrasting predictive approaches, we defined the variables affecting the distributional areas that seem unsuitable for C. nodosa as well as those suitable habitats not

  18. Large-Scale Prediction of Seagrass Distribution Integrating Landscape Metrics and Environmental Factors: The Case of Cymodocea nodosa (Mediterranean–Atlantic)

    KAUST Repository

Chefaoui, Rosa M.; Assis, Jorge; Duarte, Carlos M.; Serrão, Ester A.

    2015-01-01

    Understanding the factors that affect seagrass meadows encompassing their entire range of distribution is challenging yet important for their conservation. Here, we predict the realized and potential distribution for the species Cymodocea nodosa modelling its environmental niche in the Mediterranean and adjacent Atlantic coastlines. We use a combination of environmental variables and landscape metrics to perform a suite of predictive algorithms which enables examination of the niche and find suitable habitats for the species. The most relevant environmental variables defining the distribution of C. nodosa were sea surface temperature (SST) and salinity. We found suitable habitats at SST from 5.8 °C to 26.4 °C and salinity ranging from 17.5 to 39.3. Optimal values of mean winter wave height ranged between 1.2 and 1.5 m, while waves higher than 2.5 m seemed to limit the presence of the species. The influence of nutrients and pH, despite having weight on the models, was not so clear in terms of ranges that confine the distribution of the species. Landscape metrics able to capture variation in the coastline enhanced significantly the accuracy of the models, despite the limitations caused by the scale of the study. We found potential suitable areas not occupied by the seagrass mainly in coastal regions of North Africa and the Adriatic coast of Italy. The present study describes the realized and potential distribution of a seagrass species, providing the first global model of the factors that can be shaping the environmental niche of C. nodosa throughout its range. We identified the variables constraining its distribution as well as thresholds delineating its environmental niche. Landscape metrics showed promising prospects for the prediction of coastal species dependent on the shape of the coast. By contrasting predictive approaches, we defined the variables affecting the distributional areas that seem unsuitable for C. nodosa as well as those suitable habitats not

  19. Prediction of metabolic flux distribution from gene expression data based on the flux minimization principle.

    Directory of Open Access Journals (Sweden)

    Hyun-Seob Song

Full Text Available Prediction of possible flux distributions in a metabolic network provides detailed phenotypic information that links metabolism to cellular physiology. To estimate metabolic steady-state fluxes, the most common approach is to solve a set of macroscopic mass balance equations subjected to stoichiometric constraints while attempting to optimize an assumed optimal objective function. This assumption is justifiable in specific cases but may be invalid when tested across different conditions, cell populations, or other organisms. With the aim of providing a more consistent and reliable prediction of flux distributions over a wide range of conditions, in this article we propose a framework that uses the flux minimization principle to predict active metabolic pathways from mRNA expression data. The proposed algorithm minimizes a weighted sum of flux magnitudes, while biomass production can be bounded to fit an ample range from very low to very high values according to the analyzed context. We have formulated the flux weights as a function of the corresponding enzyme reaction's gene expression value, enabling the creation of context-specific fluxes based on a generic metabolic network. In case studies of wild-type Saccharomyces cerevisiae, and wild-type and mutant Escherichia coli strains, our method achieved high prediction accuracy, as gauged by correlation coefficients and sums of squared error, with respect to the experimentally measured values. In contrast to other approaches, our method was able to provide quantitative predictions for both model organisms under a variety of conditions. Our approach requires no prior knowledge or assumption of a context-specific metabolic functionality and does not require trial-and-error parameter adjustments. Thus, our framework is of general applicability for modeling the transcription-dependent metabolism of bacteria and yeasts.
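
    The optimization described (minimizing an expression-weighted sum of flux magnitudes subject to steady-state mass balance and a biomass bound) can be sketched as a linear program on a toy network; the stoichiometry, weights and bounds below are hypothetical and are not the authors' model.

```python
# Toy sketch of expression-weighted flux minimization:
#   minimize  sum_i w_i * |v_i|   subject to  S v = 0, lb <= v <= ub,
# with |v_i| handled by splitting each flux into forward/backward parts.
# The stoichiometry, weights and bounds are illustrative only.
import numpy as np
from scipy.optimize import linprog

# 2 metabolites x 4 reactions (last reaction is a "biomass" export).
S = np.array([[ 1, -1,  0,  0],
              [ 0,  1, -1, -1]], dtype=float)
w = np.array([1.0, 0.2, 0.5, 0.0])       # lower weight = higher expression
lb = np.array([0.0, -10.0, 0.0, 1.0])    # force at least 1 unit of biomass
ub = np.array([10.0, 10.0, 10.0, 10.0])

n = S.shape[1]
# Split v = v_plus - v_minus, both nonnegative; weights apply to both parts.
c = np.concatenate([w, w])
A_eq = np.hstack([S, -S])
b_eq = np.zeros(S.shape[0])
bounds = [(0, ub[i]) for i in range(n)] + [(0, max(0.0, -lb[i])) for i in range(n)]
# Enforce positive lower bounds (e.g. the biomass demand) on the forward part.
for i in range(n):
    if lb[i] > 0:
        bounds[i] = (lb[i], ub[i])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
v = res.x[:n] - res.x[n:]
print("fluxes:", np.round(v, 3))
```

    In the paper's setting the weights would come from gene expression (highly expressed enzymes get small weights), so the LP preferentially routes flux through well-expressed reactions.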

  20. Field distribution of a source and energy absorption in an inhomogeneous magneto-active plasma

    International Nuclear Information System (INIS)

    Galushko, N.P.; Erokhin, N.S.; Moiseev, S.S.

    1975-01-01

In the present paper the distribution of source fields in a magnetoactive plasma is studied from the standpoint of the possibility of an effective SHF heating of an inhomogeneous plasma in both the high ($\omega \approx \omega_{pe}$) and low ($\omega \approx \omega_{pi}$) frequency ranges, where $\omega_{pe}$ and $\omega_{pi}$ are the electron and ion plasma frequencies. The localization of the HF energy absorption regions in cold and hot plasma and the effect of plasma inhomogeneity and source dimensions on the absorption efficiency are investigated. The linear wave transformation in an inhomogeneous hot plasma is taken into consideration. Attention is paid to the difference between the region localization for collisional and non-collisional absorption. It has been shown that the HF energy dissipation in plasma particle collisions is localized in the region of thin jets going from the source; the radiation field has a sharp peak in this region. At the same time, non-collisional HF energy dissipation is spread over the plasma volume as a result of Cherenkov and cyclotron wave attenuation. The essential contribution to the source field from resonances due to standing wave excitation in an inhomogeneous plasma shell near the source is pointed out

  1. Comparing predictive models of glioblastoma multiforme built using multi-institutional and local data sources.

    Science.gov (United States)

    Singleton, Kyle W; Hsu, William; Bui, Alex A T

    2012-01-01

    The growing amount of electronic data collected from patient care and clinical trials is motivating the creation of national repositories where multiple institutions share data about their patient cohorts. Such efforts aim to provide sufficient sample sizes for data mining and predictive modeling, ultimately improving treatment recommendations and patient outcome prediction. While these repositories offer the potential to improve our understanding of a disease, potential issues need to be addressed to ensure that multi-site data and resultant predictive models are useful to non-contributing institutions. In this paper we examine the challenges of utilizing National Cancer Institute datasets for modeling glioblastoma multiforme. We created several types of prognostic models and compared their results against models generated using data solely from our institution. While overall model performance between the data sources was similar, different variables were selected during model generation, suggesting that mapping data resources between models is not a straightforward issue.

  2. Information system architecture to support transparent access to distributed, heterogeneous data sources

    International Nuclear Information System (INIS)

    Brown, J.C.

    1994-08-01

Quality situation assessment and decision making require access to multiple sources of data and information. Insufficient accessibility to data exists for many large corporations and Government agencies. By utilizing current advances in computer technology, today's situation analysts have a wealth of information at their disposal. There are many potential solutions to the information accessibility problem using today's technology. The United States Department of Energy (US-DOE) faced this problem when dealing with one class of problem in the US. The result of their efforts has been the creation of the Tank Waste Information Network System -- TWINS. The TWINS solution combines many technologies to address problems in several areas such as User Interfaces, Transparent Access to Multiple Data Sources, and Integrated Data Access. Data related to the complex is currently distributed throughout several US-DOE installations. Over time, each installation has adopted its own set of standards as related to information management. Heterogeneous hardware and software platforms exist both across the complex and within a single installation. Standards for information management vary between US-DOE mission areas within installations. These factors contribute to the complexity of accessing information in a manner that enhances the performance and decision making process of the analysts. This paper presents one approach taken by the DOE to resolve the problem of distributed, heterogeneous, multi-media information management for the HLW Tank complex. The information system architecture developed for the DOE by the TWINS effort is one that is adaptable to other problem domains and uses

  3. Calculation of neutron interior source distribution within subcritical fission-chain reacting systems for a prescribed power density generation

    International Nuclear Information System (INIS)

    Moraes, Leonardo R.C.; Alves Filho, Hermes; Barros, Ricardo C.

    2017-01-01

Accelerator Driven Systems (ADS) are sub-critical systems stabilized by stationary external sources of neutrons. A system is subcritical when the removal by absorption and leakage exceeds the production by fission and tends to shut down. On the other hand, any subcritical system can be stabilized by including time-independent external sources of neutrons. The goal of this work is to determine the intensity of uniform and isotropic sources of neutrons that must be added inside all fuel regions of a subcritical system so that it becomes stabilized, generating a prescribed distribution of electric power. A computer program has been developed in Java language to estimate the intensity of stationary sources of neutrons that must be included in the fuel regions to drive the subcritical system with a fixed power distribution prescribed by the user. The mathematical model used to achieve this goal was the energy multigroup, slab-geometry neutron transport equation in the discrete ordinates (S_N) formulation and the response matrix method was applied to solve the forward and the adjoint S_N problems. Numerical results are given to verify the present. (author)

  4. Calculation of neutron interior source distribution within subcritical fission-chain reacting systems for a prescribed power density generation

    Energy Technology Data Exchange (ETDEWEB)

    Moraes, Leonardo R.C.; Alves Filho, Hermes; Barros, Ricardo C., E-mail: lrcmoraes@iprj.uerj.br, E-mail: halves@iprj.uerj.br, E-mail: ricardob@iprj.uerj.br [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Programa de Pós-Graduação em Modelagem Computacional

    2017-07-01

Accelerator Driven Systems (ADS) are sub-critical systems stabilized by stationary external sources of neutrons. A system is subcritical when the removal by absorption and leakage exceeds the production by fission and tends to shut down. On the other hand, any subcritical system can be stabilized by including time-independent external sources of neutrons. The goal of this work is to determine the intensity of uniform and isotropic sources of neutrons that must be added inside all fuel regions of a subcritical system so that it becomes stabilized, generating a prescribed distribution of electric power. A computer program has been developed in Java language to estimate the intensity of stationary sources of neutrons that must be included in the fuel regions to drive the subcritical system with a fixed power distribution prescribed by the user. The mathematical model used to achieve this goal was the energy multigroup, slab-geometry neutron transport equation in the discrete ordinates (S_N) formulation and the response matrix method was applied to solve the forward and the adjoint S_N problems. Numerical results are given to verify the present. (author)

  5. Optimal operation management of fuel cell/wind/photovoltaic power sources connected to distribution networks

    Science.gov (United States)

    Niknam, Taher; Kavousifard, Abdollah; Tabatabaei, Sajad; Aghaei, Jamshid

    2011-10-01

In this paper a new multiobjective modified honey bee mating optimization (MHBMO) algorithm is presented to investigate the distribution feeder reconfiguration (DFR) problem considering renewable energy sources (RESs) (photovoltaics, fuel cell and wind energy) connected to the distribution network. The objective functions of the problem to be minimized are the electrical active power losses, the voltage deviations, the total electrical energy costs and the total emissions of RESs and substations. During the optimization process, the proposed algorithm finds a set of non-dominated (Pareto) optimal solutions which are stored in an external memory called the repository. Since the objective functions investigated are of different natures, a fuzzy clustering algorithm is utilized to keep the size of the repository within the specified limits. Moreover, a fuzzy-based decision maker is adopted to select the 'best' compromise solution among the non-dominated optimal solutions of the multiobjective optimization problem. In order to assess the feasibility and effectiveness of the proposed algorithm, two standard distribution test systems are used as case studies.
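
    The repository-and-compromise machinery described above can be illustrated independently of the MHBMO search itself; the sketch below keeps only non-dominated solutions and then picks a compromise with a simple linear fuzzy membership, all on made-up objective values rather than the paper's test systems.

```python
# Sketch of two pieces described in the abstract: (1) keeping a repository of
# non-dominated (Pareto) solutions and (2) a fuzzy-membership based choice of
# the "best" compromise. Objective vectors here are random placeholders for
# (losses, voltage deviation, cost, emissions), all to be minimized.
import numpy as np

rng = np.random.default_rng(7)
F = rng.uniform(size=(50, 4))          # 50 candidate solutions, 4 objectives

def non_dominated(F):
    keep = []
    for i, fi in enumerate(F):
        dominated = any(np.all(fj <= fi) and np.any(fj < fi)
                        for j, fj in enumerate(F) if j != i)
        if not dominated:
            keep.append(i)
    return np.array(keep)

rep = F[non_dominated(F)]

# Linear fuzzy membership per objective: 1 at the best value, 0 at the worst.
f_min, f_max = rep.min(axis=0), rep.max(axis=0)
mu = (f_max - rep) / np.where(f_max > f_min, f_max - f_min, 1.0)
score = mu.sum(axis=1) / mu.sum()      # normalized aggregate membership
best = rep[np.argmax(score)]
print("repository size:", len(rep), "compromise objectives:", np.round(best, 3))
```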

  6. Do pseudo-absence selection strategies influence species distribution models and their predictions? An information-theoretic approach based on simulated data

    Directory of Open Access Journals (Sweden)

    Guisan Antoine

    2009-04-01

Full Text Available Abstract Background: Multiple logistic regression is precluded from many practical applications in ecology that aim to predict the geographic distributions of species because it requires absence data, which are rarely available or are unreliable. In order to use multiple logistic regression, many studies have simulated "pseudo-absences" through a number of strategies, but it is unknown how the choice of strategy influences models and their geographic predictions of species. In this paper we evaluate the effect of several prevailing pseudo-absence strategies on the predictions of the geographic distribution of a virtual species whose "true" distribution and relationship to three environmental predictors was predefined. We evaluated the effect of using (a) real absences, (b) pseudo-absences selected randomly from the background, and (c) two-step approaches: pseudo-absences selected from low-suitability areas predicted by either Ecological Niche Factor Analysis (ENFA) or BIOCLIM. We compared how the choice of pseudo-absence strategy affected model fit, predictive power, and information-theoretic model selection results. Results: Models built with true absences had the best predictive power, best discriminatory power, and the "true" model (the one that contained the correct predictors) was supported by the data according to AIC, as expected. Models based on random pseudo-absences had among the lowest fit, but yielded the second highest AUC value (0.97), and the "true" model was also supported by the data. Models based on two-step approaches had intermediate fit, the lowest predictive power, and the "true" model was not supported by the data. Conclusion: If ecologists wish to build parsimonious GLM models that will allow them to make robust predictions, a reasonable approach is to use a large number of randomly selected pseudo-absences, and perform model selection based on an information-theoretic approach. However, the resulting models can be expected to have
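
    The "random background pseudo-absence" strategy that performed well here is easy to emulate; the sketch below draws random background points, fits a multiple logistic regression, and reports AUC. The environmental predictors and the "virtual species" are entirely synthetic and only stand in for the paper's setup.

```python
# Sketch of the random pseudo-absence strategy: presences plus background
# points drawn at random, then a multiple logistic regression and AUC.
# Predictors and the virtual species below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_background, n_presence = 1000, 200

# Three synthetic environmental predictors over the "landscape".
env = rng.normal(size=(20000, 3))
suitability = 1 / (1 + np.exp(-(1.5 * env[:, 0] - 1.0 * env[:, 1])))

presence_idx = rng.choice(len(env), n_presence, p=suitability / suitability.sum())
background_idx = rng.choice(len(env), n_background)      # random pseudo-absences

X = np.vstack([env[presence_idx], env[background_idx]])
y = np.concatenate([np.ones(n_presence), np.zeros(n_background)])

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print("coefficients:", np.round(model.coef_[0], 2), "AUC:", round(auc, 3))
```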

  7. Hardware-in-the-Loop Simulation of a Distribution System with Air Conditioners under Model Predictive Control: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Sparn, Bethany F [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ruth, Mark F [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnamurthy, Dheepak [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Pratt, Annabelle [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Lunacek, Monte S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Jones, Wesley B [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Wu, Hongyu [Kansas State University; Mittal, Saurabh [Mitre Corporation; Marks, Jesse [University of Missouri

    2017-08-01

Many have proposed that responsive load provided by distributed energy resources (DERs) and demand response (DR) is an option to provide flexibility to the grid and especially to distribution feeders. However, because responsive load involves a complex interplay between tariffs and DER and DR technologies, it is challenging to test and evaluate options without negatively impacting customers. This paper describes a hardware-in-the-loop (HIL) simulation system that has been developed to reduce the cost of evaluating the impact of advanced controllers (e.g., model predictive controllers) and technologies (e.g., responsive appliances). The HIL simulation system combines large-scale software simulation with a small set of representative building equipment hardware. It is used to perform HIL simulation of a distribution feeder and the loads on it under various tariff structures. In the reported HIL simulation, loads include many simulated air conditioners and one physical air conditioner. Independent model predictive controllers manage operations of all air conditioners under a time-of-use tariff. Results from this HIL simulation and a discussion of future development work of the system are presented.

  8. The Comparison Study of Short-Term Prediction Methods to Enhance the Model Predictive Controller Applied to Microgrid Energy Management

    Directory of Open Access Journals (Sweden)

    César Hernández-Hernández

    2017-06-01

Full Text Available Electricity load forecasting, optimal power system operation and energy management play key roles that can bring significant operational advantages to microgrids. This paper studies how methods based on time series and neural networks can be used to predict energy demand and production, allowing them to be combined with model predictive control. Comparisons of different prediction methods and different optimum energy distribution scenarios are provided, permitting us to determine when short-term energy prediction models should be used. The proposed prediction models, in combination with the model predictive control strategy, appear to be a promising solution for energy management in microgrids. The controller manages the purchase and sale of electricity to the power grid, maximizing the use of renewable energy sources and managing the use of the energy storage system. Simulations were performed under different solar irradiation conditions. The obtained results are encouraging for future practical implementation.
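
    A minimal version of the kind of comparison the paper describes is sketched below: naive persistence versus a short autoregressive model for a demand or generation series that would feed the predictive controller. The series is synthetic and the AR fit is ordinary least squares, not the paper's models.

```python
# Sketch comparing two short-term forecasters for a load/generation series:
# naive persistence vs. a small autoregressive model fit by least squares.
# The daily-cycle-plus-noise series below is synthetic.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(24 * 30)                                   # 30 days, hourly
series = 10 + 3 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.5, t.size)

p = 24                                                   # AR order: one day of lags
train, test = series[: 24 * 25], series[24 * 25 - p:]

# Fit AR(p) coefficients by ordinary least squares on the training window.
X = np.column_stack([train[i : len(train) - p + i] for i in range(p)])
y = train[p:]
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(y)), X]), y, rcond=None)

# One-step-ahead forecasts over the test window.
Xt = np.column_stack([test[i : len(test) - p + i] for i in range(p)])
yt = test[p:]
ar_pred = np.column_stack([np.ones(len(yt)), Xt]) @ coef
persistence = test[p - 1 : -1]

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print("persistence RMSE:", round(rmse(persistence, yt), 3))
print("AR(24) RMSE:     ", round(rmse(ar_pred, yt), 3))
```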

  9. Resource selection models are useful in predicting fine-scale distributions of black-footed ferrets in prairie dog colonies

    Science.gov (United States)

    Eads, David A.; Jachowski, David S.; Biggins, Dean E.; Livieri, Travis M.; Matchett, Marc R.; Millspaugh, Joshua J.

    2012-01-01

    Wildlife-habitat relationships are often conceptualized as resource selection functions (RSFs)—models increasingly used to estimate species distributions and prioritize habitat conservation. We evaluated the predictive capabilities of 2 black-footed ferret (Mustela nigripes) RSFs developed on a 452-ha colony of black-tailed prairie dogs (Cynomys ludovicianus) in the Conata Basin, South Dakota. We used the RSFs to project the relative probability of occurrence of ferrets throughout an adjacent 227-ha colony. We evaluated performance of the RSFs using ferret space use data collected via postbreeding spotlight surveys June–October 2005–2006. In home ranges and core areas, ferrets selected the predicted "very high" and "high" occurrence categories of both RSFs. Count metrics also suggested selection of these categories; for each model in each year, approximately 81% of ferret locations occurred in areas of very high or high predicted occurrence. These results suggest usefulness of the RSFs in estimating the distribution of ferrets throughout a black-tailed prairie dog colony. The RSFs provide a fine-scale habitat assessment for ferrets that can be used to prioritize releases of ferrets and habitat restoration for prairie dogs and ferrets. A method to quickly inventory the distribution of prairie dog burrow openings would greatly facilitate application of the RSFs.

  10. Three-dimensional fuel pin model validation by prediction of hydrogen distribution in cladding and comparison with experiment

    Energy Technology Data Exchange (ETDEWEB)

    Aly, A. [North Carolina State Univ., Raleigh, NC (United States); Avramova, Maria [North Carolina State Univ., Raleigh, NC (United States); Ivanov, Kostadin [Pennsylvania State Univ., University Park, PA (United States); Motta, Arthur [Pennsylvania State Univ., University Park, PA (United States); Lacroix, E. [Pennsylvania State Univ., University Park, PA (United States); Manera, Annalisa [Univ. of Michigan, Ann Arbor, MI (United States); Walter, D. [Univ. of Michigan, Ann Arbor, MI (United States); Williamson, R. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gamble, K. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-10-29

To correctly describe and predict the hydrogen distribution in the cladding, multi-physics coupling is needed to provide accurate three-dimensional azimuthal, radial, and axial temperature distributions in the cladding. High-fidelity reactor-physics codes coupled with a sub-channel code, as well as with a computational fluid dynamics (CFD) tool, have been used to calculate detailed temperature distributions. These high-fidelity coupled neutronics/thermal-hydraulics code systems are coupled further with the fuel-performance BISON code with a kernel (module) for hydrogen. Both hydrogen migration and precipitation/dissolution are included in the model. Results from this multi-physics analysis are validated utilizing calculations of hydrogen distribution using models informed by data from hydrogen experiments and PIE data.

  11. Predictions of Gene Family Distributions in Microbial Genomes: Evolution by Gene Duplication and Modification

    International Nuclear Information System (INIS)

    Yanai, Itai; Camacho, Carlos J.; DeLisi, Charles

    2000-01-01

    A universal property of microbial genomes is the considerable fraction of genes that are homologous to other genes within the same genome. The process by which these homologues are generated is not well understood, but sequence analysis of 20 microbial genomes unveils a recurrent distribution of gene family sizes. We show that a simple evolutionary model based on random gene duplication and point mutations fully accounts for these distributions and permits predictions for the number of gene families in genomes not yet complete. Our findings are consistent with the notion that a genome evolves from a set of precursor genes to a mature size by gene duplications and increasing modifications. (c) 2000 The American Physical Society
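
    The kind of model the authors describe (growth of a genome by random duplication, with duplicates occasionally diverging into new families) can be caricatured in a few lines; the rates below are arbitrary, chosen only to show that such a process yields a long-tailed family-size distribution.

```python
# Caricature of evolution by gene duplication and modification: at each step a
# random gene is duplicated, and with some probability the copy diverges enough
# to found a new single-member family. Rates and sizes are arbitrary.
import random
from collections import Counter

random.seed(0)
p_new_family = 0.3          # chance a duplicate is modified into a new family
families = [0]              # family label of each gene; start from one gene
next_label = 1

while len(families) < 4000:                  # grow to a "mature" genome size
    copy = random.choice(families)           # random gene duplication
    if random.random() < p_new_family:       # modification creates a new family
        families.append(next_label)
        next_label += 1
    else:
        families.append(copy)

sizes = Counter(Counter(families).values())  # family size -> number of families
for size in sorted(sizes)[:8]:
    print(f"families of size {size:2d}: {sizes[size]}")
```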

  12. Predictions of Gene Family Distributions in Microbial Genomes: Evolution by Gene Duplication and Modification

    Energy Technology Data Exchange (ETDEWEB)

    Yanai, Itai; Camacho, Carlos J.; DeLisi, Charles

    2000-09-18

    A universal property of microbial genomes is the considerable fraction of genes that are homologous to other genes within the same genome. The process by which these homologues are generated is not well understood, but sequence analysis of 20 microbial genomes unveils a recurrent distribution of gene family sizes. We show that a simple evolutionary model based on random gene duplication and point mutations fully accounts for these distributions and permits predictions for the number of gene families in genomes not yet complete. Our findings are consistent with the notion that a genome evolves from a set of precursor genes to a mature size by gene duplications and increasing modifications. (c) 2000 The American Physical Society.

  13. Biological and ecological characteristics of soft ticks (Ixodida: Argasidae and their impact for predicting tick and associated disease distribution

    Directory of Open Access Journals (Sweden)

    Vial L.

    2009-09-01

Full Text Available As evidence of global changes is accumulating, scientists are challenged to detect distribution changes of vectors, reservoirs and pathogens caused by anthropogenic and/or environmental changes. Statistical and mathematical distribution models are emerging for ixodid hard ticks, whereas no prediction has ever been developed for argasid ticks. These latter organisms remain poorly known and under-reported; they differ from hard ticks in many structural, biological and ecological properties, which complicates direct adaptation of hard tick models. However, a review of the bibliographic resources concerning these ticks suggests that distribution modelling based on the natural niche concept and using environmental factors, especially climate, is also possible, bearing in mind the scale of prediction and the specificities of soft ticks, including their nidicolous lifestyle, indiscriminate host feeding, short bloodmeal duration, and a flexible development cycle with diapause periods.

  14. Calculating method for confinement time and charge distribution of ions in electron cyclotron resonance sources

    International Nuclear Information System (INIS)

    Dougar-Jabon, V.D.; Umnov, A.M.; Kutner, V.B.

    1996-01-01

It is common knowledge that the electrostatic pit in a core plasma of electron cyclotron resonance sources exerts strict control over the generation of ions in high charge states. This work is aimed at finding a dependence of the lifetime of ions on their charge states in the core region and at elaborating a numerical model of ion charge dispersion not only for the core plasmas but for extracted beams as well. The calculated data are in good agreement with the experimental results on charge distributions and magnitudes for currents of beams extracted from the 14 GHz DECRIS source. copyright 1996 American Institute of Physics

  15. Assessing the pollution risk of a groundwater source field at western Laizhou Bay under seawater intrusion

    International Nuclear Information System (INIS)

    Zeng, Xiankui; Wu, Jichun; Wang, Dong; Zhu, Xiaobin

    2016-01-01

Coastal areas have great significance for human living, economy and society development in the world. With the rapid increase of pressures from human activities and climate change, the safety of groundwater resources is under the threat of seawater intrusion in coastal areas. The area of Laizhou Bay is one of the most seriously seawater-intruded areas in China, since the seawater intrusion phenomenon was first recognized there in the middle of the 1970s. This study assessed the pollution risk of a groundwater source field of the western Laizhou Bay area by inferring the probability distribution of groundwater Cl− concentration. The numerical model of the seawater intrusion process is built by using SEAWAT4. The parameter uncertainty of this model is evaluated by Markov Chain Monte Carlo (MCMC) simulation, and DREAM(ZS) is used as the sampling algorithm. Then, the predictive distribution of Cl− concentration at the groundwater source field is inferred by using the samples of model parameters obtained from MCMC. After that, the pollution risk of the groundwater source field is assessed by the predictive quantiles of Cl− concentration. The results of model calibration and verification demonstrate that the DREAM(ZS)-based MCMC is efficient and reliable for estimating model parameters under the current observations. At the 95% confidence level, the groundwater source point will not be polluted by seawater intrusion in the next five years (2015–2019). In addition, the 2.5% and 97.5% predictive quantiles show that the Cl− concentration of the groundwater source field always varies between 175 mg/l and 200 mg/l. - Highlights: • The parameter uncertainty of the seawater intrusion model is evaluated by MCMC. • The groundwater source field won’t be polluted by seawater intrusion in the next 5 years. • The pollution risk is assessed by the predictive quantiles of Cl− concentration
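
    The last step reported here, turning MCMC parameter samples into predictive quantiles of Cl− concentration, is generic enough to sketch. The "forward model" below is a placeholder for the calibrated SEAWAT4 run, and all parameter names and numbers are invented.

```python
# Sketch of posterior-predictive quantiles from MCMC parameter samples.
# forward_model() is a placeholder for the calibrated seawater-intrusion model;
# parameter names, values and outputs are invented for illustration.
import numpy as np

rng = np.random.default_rng(11)
posterior_samples = rng.normal(loc=[1.0, 0.2], scale=[0.1, 0.02], size=(5000, 2))

def forward_model(theta, years=5):
    """Toy stand-in: predicted Cl- concentration (mg/l) at the source field."""
    k, trend = theta
    return 180.0 * k + trend * 10.0 * years + rng.normal(0, 2.0)

predictions = np.array([forward_model(theta) for theta in posterior_samples])
lo, hi = np.percentile(predictions, [2.5, 97.5])
print(f"95% predictive interval for Cl-: {lo:.1f} - {hi:.1f} mg/l")
print("P(Cl- > 250 mg/l):", float((predictions > 250).mean()))
```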

  16. Assessing the pollution risk of a groundwater source field at western Laizhou Bay under seawater intrusion

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Xiankui; Wu, Jichun; Wang, Dong, E-mail: wangdong@nju.edu.cn; Zhu, Xiaobin

    2016-07-15

Coastal areas have great significance for human living, economy and society development in the world. With the rapid increase of pressures from human activities and climate change, the safety of groundwater resources is under the threat of seawater intrusion in coastal areas. The area of Laizhou Bay is one of the most seriously seawater-intruded areas in China, since the seawater intrusion phenomenon was first recognized there in the middle of the 1970s. This study assessed the pollution risk of a groundwater source field of the western Laizhou Bay area by inferring the probability distribution of groundwater Cl− concentration. The numerical model of the seawater intrusion process is built by using SEAWAT4. The parameter uncertainty of this model is evaluated by Markov Chain Monte Carlo (MCMC) simulation, and DREAM(ZS) is used as the sampling algorithm. Then, the predictive distribution of Cl− concentration at the groundwater source field is inferred by using the samples of model parameters obtained from MCMC. After that, the pollution risk of the groundwater source field is assessed by the predictive quantiles of Cl− concentration. The results of model calibration and verification demonstrate that the DREAM(ZS)-based MCMC is efficient and reliable for estimating model parameters under the current observations. At the 95% confidence level, the groundwater source point will not be polluted by seawater intrusion in the next five years (2015–2019). In addition, the 2.5% and 97.5% predictive quantiles show that the Cl− concentration of the groundwater source field always varies between 175 mg/l and 200 mg/l. - Highlights: • The parameter uncertainty of the seawater intrusion model is evaluated by MCMC. • The groundwater source field won’t be polluted by seawater intrusion in the next 5 years. • The pollution risk is assessed by the predictive quantiles of Cl− concentration.

  17. Spatial distribution of carbon sources and sinks in Canada's forests

    International Nuclear Information System (INIS)

    Chen, Jing M.; Weimin, Ju; Liu, Jane; Cihlar, Josef; Chen, Wenjun

    2003-01-01

Annual spatial distributions of carbon sources and sinks in Canada's forests at 1 km resolution are computed for the period from 1901 to 1998 using ecosystem models that integrate remote sensing images, gridded climate, soils and forest inventory data. GIS-based fire scar maps for most regions of Canada are used to develop a remote sensing algorithm for mapping and dating forest burned areas in the 25 yr prior to 1998. These mapped and dated burned areas are used in combination with inventory data to produce a complete image of forest stand age in 1998. Empirical NPP-age relationships were used to simulate the annual variations of forest growth and carbon balance in 1 km pixels, each treated as a homogeneous forest stand. Annual CO2 flux data from four sites were used for model validation. Averaged over the period 1990-1998, the carbon source and sink map for Canada's forests shows the following features: (i) large spatial variations corresponding to the patchiness of recent fire scars and productive forests and (ii) a general south-to-north gradient of decreasing carbon sink strength and increasing source strength. This gradient results mostly from differential effects of temperature increase on growing season length, nutrient mineralization and heterotrophic respiration at different latitudes as well as from uneven nitrogen deposition. The results from the present study are compared with those of two previous studies. The comparison suggests that the overall positive effects of non-disturbance factors (climate, CO2 and nitrogen) outweighed the effects of increased disturbances in the last two decades, making Canada's forests a carbon sink in the 1980s and 1990s. Comparisons of the modeled results with tower-based eddy covariance measurements of net ecosystem exchange at four forest stands indicate that the sink values from the present study may be underestimated

  18. Distribution and Identification of Sources of Heavy Metals in the Voghji River Basin Impacted by Mining Activities (Armenia

    Directory of Open Access Journals (Sweden)

    A. V. Gabrielyan

    2018-01-01

Full Text Available The objective of this research is to assess the distribution of heavy metals in the waters and sediments of the Voghji River and its tributaries impacted by mining activity, and to reveal the actual source of each heavy metal in the environment in order to assess the level of heavy metal pollution. The Voghji River, with its two main tributaries (Geghi and Norashenik), drains two mining regions. To identify the distribution and pollution sources of heavy metals, water and sediment samples were collected from eight sampling sites. The results of statistical analysis based on data sets for the period 2014–2016 showed that, under the influence of drainage water and wastewater from the mining regions, heavy metal contents in the Voghji River basin increased dramatically. The waters of the Voghji River were highly polluted by Mn, Co, Cu, Zn, Mo, Cd, and Pb. The relative contents of the metals were strongly altered by the anthropogenic impact, disturbing the geochemical balance of the Voghji River. Based on heavy metal contents alone, the water quality at the source of the Voghji River belongs to “good” chemical status, and at the sources of the Geghi and Norashenik Rivers it is “moderate.” The water quality of the Voghji and Norashenik Rivers deteriorates sharply under the influence of mining activity, falling to “bad” chemical status. The research revealed the pollution sources of each metal.

  19. Probing the Spatial Distribution of the Interstellar Dust Medium by High Angular Resolution X-ray Halos of Point Sources

    Science.gov (United States)

    Xiang, Jingen

X-rays are absorbed and scattered by dust grains when they travel through the interstellar medium. The scattering within small angles results in an X-ray "halo". The halo properties are significantly affected by the energy of radiation, the optical depth of the scattering, the grain size distributions and compositions, and the spatial distribution of dust along the line of sight (LOS). Therefore analyzing the X-ray halo properties is an important tool for studying the size distribution and spatial distribution of interstellar grains, which play a central role in the astrophysical study of the interstellar medium, such as the thermodynamics and chemistry of the gas and the dynamics of star formation. With excellent angular resolution, good energy resolution and a broad energy band, the Chandra ACIS is so far the best instrument for studying X-ray halos. But the direct images of bright sources obtained with ACIS usually suffer from severe pileup, which prevents us from obtaining the halos at small angles. We first improve the method proposed by Yao et al. to resolve the X-ray dust scattering halos of point sources from the zeroth order data in CC-mode or the first order data in TE mode with Chandra HETG/ACIS. Using this method we re-analyze the Cygnus X-1 data observed with Chandra. Then we study the X-ray dust scattering halos around 17 bright X-ray point sources using Chandra data. All sources were observed with the HETG/ACIS in CC-mode or TE-mode. Using the interstellar grain models WD01 and MRN to fit the halo profiles, we obtain the hydrogen column densities and the spatial distributions of the scattering dust grains along the lines of sight (LOS) to these sources. We find there is a good linear correlation not only between the scattering hydrogen column density from the WD01 model and the one from the MRN model, but also between N_{H} derived from spectral fits and the one derived from the grain models WD01 and MRN (except for GX 301-2 and Vela X-1): N

  20. Predicting climate change impacts on the distribution of the threatened Garcinia indica in the Western Ghats, India

    Directory of Open Access Journals (Sweden)

    Malay Pramanik

Full Text Available In recent years, climate change has become a major threat, and its effects on the geographic distribution of many plant species have been widely documented. However, the impacts of climate change on the distribution of ecologically vulnerable medicinal species remain largely unknown. The identification of suitable habitat for a species under climate change scenarios is a significant step towards the mitigation of biodiversity decline. The study, therefore, aims to predict the impact of current and future climatic scenarios on the distribution of the threatened Garcinia indica across the northern Western Ghats using Maximum Entropy (MaxEnt) modelling. The future projections were made for the years 2050 and 2070 under all Representative Concentration Pathway (RCP) scenarios (2.6, 4.5, 6.0, and 8.5) using 56 species occurrence records and 19 bioclimatic predictors from the BCC-CSM1.1 model of the Intergovernmental Panel on Climate Change’s (IPCC) 5th assessment. The bioclimatic variables were reduced to a smaller number of variables after a multicollinearity test, and their contributions were assessed using a jackknife test. The AUC value of 0.956 ± 0.023 indicates that the model performs with excellent accuracy. The study identified that temperature seasonality (39.5 ± 3.1%), isothermality (19.2 ± 1.6%), and annual precipitation (12.7 ± 1.7%) would be the major influencing variables in the current and future distribution. The model predicted 10.50% (19318.7 sq. km) of the study area as moderately to very highly suitable, while 82.60% (151904 sq. km) of the study area was identified as ‘unsuitable’ or ‘very low suitable’. Our predictions of climate change impacts on habitat suitability suggest that there will be a drastic reduction in suitability of 5.29% and 5.69% under RCP 8.5 for 2050 and 2070, respectively. Objective and Significance: The primary objective of this study is to identify the potential distribution of medicinally and