WorldWideScience

Sample records for sources require large

  1. International Requirements for Large Integration of Renewable Energy Sources

    DEFF Research Database (Denmark)

    Molina-Garcia, Angel; Hansen, Anca Daniela; Muljadi, Ed

    2017-01-01

    Most European countries have concerns about the integration of large amounts of renewable energy sources (RES) into electric power systems, and this is currently a topic of growing interest. In January 2008, the European Commission published the 2020 package, which proposes committing the European Union to a 20% reduction in greenhouse gas emissions, to achieving a target of deriving 20% of the European Union's final energy consumption from renewable sources, and to achieving a 20% improvement in energy efficiency, all by the year 2020 [1]. Member states have different individual goals to meet these overall objectives, and they each need to provide a detailed roadmap describing how they will meet these legally binding targets [2]. At this time, RES are an indispensable part of the global energy mix, which has been partially motivated by the continuous increases in hydropower as well as the rapid...

  2. Requirements for Ion Sources

    International Nuclear Information System (INIS)

    Scrivens, R

    2013-01-01

    Ion sources produce beams for a large variety of different physical experiments, industrial processes and medical applications. In order to characterize the beam delivered by them, a list of requirements is necessary. In this chapter the list of principal requirements is specified and definitions for them are given. (author)

  3. Ion source requirements for pulsed spallation neutron sources

    International Nuclear Information System (INIS)

    Alonso, J.R.

    1996-01-01

    The neutron scattering community has endorsed the need for a high-power (1 to 5 MW) accelerator-driven source of neutrons for materials research. Properly configured, the accelerator could produce very short (sub-microsecond) bursts of cold neutrons, a time structure offering advantages over the continuous flux from a reactor for a large class of experiments. The recent cancellation of the ANS reactor project has increased the urgency to develop a comprehensive strategy based on the best technological scenarios. Studies to date have built on the experience from ISIS (the 160 kW source in the UK), and call for a high-current (approx. 100 mA peak) H⁻ source-linac combination injecting into one or more accumulator rings in which the beam may be further accelerated. The 1 to 5 GeV proton beam is extracted in a single turn and brought to the target-moderator stations. The high current, high duty factor, high brightness, and high reliability required of the ion source present a very large challenge to the ion source community. A workshop held in Berkeley in October 1994 analyzed in detail the source requirements for proposed accelerator scenarios and the present performance capabilities of different H⁻ source technologies, and identified necessary R&D efforts to bridge the gap. copyright 1996 American Institute of Physics

  4. Ion source requirements for pulsed spallation neutron sources

    International Nuclear Information System (INIS)

    Alonso, J.R.

    1995-10-01

    The neutron scattering community has endorsed the need for a high-power (1 to 5 MW) accelerator-driven source of neutrons for materials research. Properly configured, the accelerator could produce very short (sub-microsecond) bursts of cold neutrons, a time structure offering advantages over the continuous flux from a reactor for a large class of experiments. The recent cancellation of the ANS reactor project has increased the urgency to develop a comprehensive strategy based on the best technological scenarios. Studies to date have built on the experience from ISIS (the 160 kW source in the UK), and call for a high-current (approx. 100 mA peak) H⁻ source-linac combination injecting into one or more accumulator rings in which the beam may be further accelerated. The 1 to 5 GeV proton beam is extracted in a single turn and brought to the target-moderator stations. The high current, high duty factor, high brightness, and high reliability required of the ion source present a very large challenge to the ion source community. A workshop held in Berkeley in October 1994 analyzed in detail the source requirements for proposed accelerator scenarios and the present performance capabilities of different H⁻ source technologies, and identified necessary R&D efforts to bridge the gap

  5. Large source test stand for H-(D-) ion source

    International Nuclear Information System (INIS)

    Larson, R.; McKenzie-Wilson, R.

    1981-01-01

    The Brookhaven National Laboratory Neutral Beam Group has constructed a large source test stand for testing of the various source modules under development. The first objective of the BNL program is to develop a source module capable of delivering 10 A of H⁻ (D⁻) at 25 kV operating in the steady-state mode with satisfactory gas and power efficiency. The large source test stand contains gas supply and vacuum pumping systems, source cooling systems, magnet power supplies and magnet cooling systems, two arc power supplies rated at 25 kW and 50 kW, a large battery-driven power supply and an extractor electrode power supply. Figure 1 is a front view of the vacuum vessel showing the control racks with the 36'' vacuum valves and refrigerated baffles mounted behind. Figure 2 shows the rear view of the vessel with a BNL Mk V magnetron source mounted in the source aperture and also shows the cooled magnet coils. Currently two types of sources are under test: a large magnetron source and a hollow cathode discharge source

  6. Advanced Neutron Sources: Plant Design Requirements

    International Nuclear Information System (INIS)

    1990-07-01

    The Advanced Neutron Source (ANS) is a new, world-class facility for research using hot, thermal, cold, and ultra-cold neutrons. At the heart of the facility is a 350-MW(th), heavy-water-cooled and -moderated reactor. The reactor is housed in a central reactor building, with supporting equipment located in an adjoining reactor support building. An array of cold neutron guides fans out into a large guide hall, housing about 30 neutron research stations. Office, laboratory, and shop facilities are included to provide a complete user facility. The ANS is scheduled to begin operation at the Oak Ridge National Laboratory at the end of the decade. This Plant Design Requirements document defines the plant-level requirements for the design, construction, and operation of the ANS. This document also defines and provides input to the individual System Design Description (SDD) documents. Together, this Plant Design Requirements document and the set of SDD documents will define and control the baseline configuration of the ANS

  7. Use of large sources and accelerators

    International Nuclear Information System (INIS)

    1969-01-01

    A comprehensive review of applications of large radiation sources and accelerators in industrial processing was made at a symposium held in Munich during August. Reports presented dealt with industrial work already proved to be practical, projects in an advanced stage of development and with others in which there appears to be significant potential. (author)

  8. FERMI LARGE AREA TELESCOPE FIRST SOURCE CATALOG

    International Nuclear Information System (INIS)

    Abdo, A. A.; Ackermann, M.; Ajello, M.; Allafort, A.; Bechtol, K.; Berenji, B.; Blandford, R. D.; Bloom, E. D.; Antolini, E.; Bonamente, E.; Atwood, W. B.; Axelsson, M.; Baldini, L.; Bellazzini, R.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bisello, D.; Baughman, B. M.; Belli, F.

    2010-01-01

    We present a catalog of high-energy gamma-ray sources detected by the Large Area Telescope (LAT), the primary science instrument on the Fermi Gamma-ray Space Telescope (Fermi), during the first 11 months of the science phase of the mission, which began on 2008 August 4. The First Fermi-LAT catalog (1FGL) contains 1451 sources detected and characterized in the 100 MeV to 100 GeV range. Source detection was based on the average flux over the 11 month period, and the threshold likelihood Test Statistic is 25, corresponding to a significance of just over 4σ. The 1FGL catalog includes source location regions, defined in terms of elliptical fits to the 95% confidence regions and power-law spectral fits as well as flux measurements in five energy bands for each source. In addition, monthly light curves are provided. Using a protocol defined before launch we have tested for several populations of gamma-ray sources among the sources in the catalog. For individual LAT-detected sources we provide firm identifications or plausible associations with sources in other astronomical catalogs. Identifications are based on correlated variability with counterparts at other wavelengths, or on spin or orbital periodicity. For the catalogs and association criteria that we have selected, 630 of the sources are unassociated. Care was taken to characterize the sensitivity of the results to the model of interstellar diffuse gamma-ray emission used to model the bright foreground, with the result that 161 sources at low Galactic latitudes and toward bright local interstellar clouds are flagged as having properties that are strongly dependent on the model or as potentially being due to incorrectly modeled structure in the Galactic diffuse emission.

  9. Large area solid target neutron source

    International Nuclear Information System (INIS)

    Crawford, J.C.; Bauer, W.

    1974-01-01

    A potentially useful neutron source may result from the combination of a solid deuterium-tritium loaded target with the large area, high energy ion beams from ion sources being developed for neutral beam injection. The resulting neutron source would have a large radiating area and thus produce the sizable experimental volume necessary for future studies of bulk and synergistic surface radiation effects as well as experiments on engineering samples and small components. With a 200 keV D⁺/T⁺ beam and 40 kW/cm² power dissipation on a 200 cm² target spot, a total neutron yield of about 4 × 10¹⁵ n/s may be achieved. Although the useable neutron flux from this source is limited to 1 to 2 × 10¹³ n/cm²/s, this flux can be produced 3 cm in front of the target and over about 300 cm³ of experimental volume. Problems of total power dissipation, sputtering, isotopic flushing and thermal dissociation are reviewed. Neutron flux profiles and potential experimental configurations are presented and compared to other neutron source concepts. (U.S.)
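
    A quick back-of-the-envelope check of the quoted figures is sketched below in Python; the isotropic thin-disc surface-source approximation and the on-axis field point are assumptions made for this estimate, not part of the original design study.

    ```python
    import math

    # Quoted operating point: 200 keV D+/T+ beam, 40 kW/cm^2 over a 200 cm^2 target spot.
    power_density_kw_cm2 = 40.0
    spot_area_cm2 = 200.0
    beam_power_mw = power_density_kw_cm2 * spot_area_cm2 / 1000.0    # ~8 MW on target
    beam_current_a = beam_power_mw * 1e6 / 200e3                     # ~40 A at 200 kV

    # Quoted total yield, spread uniformly over the spot.
    yield_n_per_s = 4e15
    source_per_area = yield_n_per_s / spot_area_cm2                  # ~2e13 n/cm^2/s emitted

    # Uncollided on-axis flux of an isotropically emitting thin disc of radius R at distance d:
    #   phi(d) = (S_A / 4) * ln(1 + R^2 / d^2)
    radius_cm = math.sqrt(spot_area_cm2 / math.pi)                   # ~8 cm
    d_cm = 3.0
    flux = source_per_area / 4.0 * math.log(1.0 + radius_cm**2 / d_cm**2)

    print(f"beam power ~ {beam_power_mw:.1f} MW, beam current ~ {beam_current_a:.0f} A")
    print(f"on-axis flux at {d_cm:.0f} cm ~ {flux:.1e} n/cm^2/s")
    ```

    The estimate comes out near 1 × 10¹³ n/cm²/s, at the low end of the 1 to 2 × 10¹³ n/cm²/s band quoted in the abstract, as expected for an uncollided on-axis approximation.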

  10. Industrial sources in Norway -- Regulations and requirements

    International Nuclear Information System (INIS)

    Saxeboel, G.

    2001-01-01

    On 12 May 2000, a new Act on radiation protection passed the Norwegian parliament. The report explains the requirements for the licensing process of sealed industrial sources and provides information, in particular, on the national inventory of industrial gauges, industrial radiography and logging sources. (author)

  11. Advanced Neutron Source: Plant Design Requirements

    International Nuclear Information System (INIS)

    1990-07-01

    The Advanced Neutron Source will be a new world-class facility for research using hot, thermal, cold, and ultra-cold neutrons. The heart of the facility will be a 330-MW (fission), heavy-water cooled and heavy-water moderated reactor. The reactor will be housed in a central reactor building, with supporting equipment located in an adjoining reactor support building. An array of cold neutron guides will fan out into a large guide hall, housing about 30 neutron research stations. Appropriate office, laboratory, and shop facilities will be included to provide a complete facility for users. The ANS is scheduled to begin operation at the Oak Ridge National Laboratory early in the next decade. This PDR document defines the plant-level requirements for the design, construction, and operation of ANS. It also defines and provides input to the individual System Design Description (SDD) documents. Together, this PDR document and the set of SDD documents will define and control the baseline configuration of ANS

  12. Application of large radiation sources in chemical processing industry

    International Nuclear Information System (INIS)

    Krishnamurthy, K.

    1977-01-01

    Large radiation sources and their application in chemical processing industry are described. A reference has also been made to the present developments in this field in India. Radioactive sources, notably ⁶⁰Co, are employed in production of wood-plastic and concrete-polymer composites, vulcanised rubbers, polymers, sulfochlorinated paraffin hydrocarbons and in a number of other applications which require deep penetration and high reliability of source. Machine sources of electrons are used in production of heat shrinkable plastics, insulation materials for cables, curing of paints etc. Radiation sources have also been used for sewage hygienisation. As for the scene in India, ⁶⁰Co sources, gamma chambers and batch irradiators are manufactured. A list of the on-going R and D projects and organisations engaged in research in this field is given. (M.G.B.)

  13. TWRS configuration management requirement source document

    International Nuclear Information System (INIS)

    Vann, J.M.

    1997-01-01

    The TWRS Configuration Management (CM) Requirement Source document prescribes CM as a basic product life-cycle function by which work and activities are conducted or accomplished. This document serves as the requirements basis for the TWRS CM program. The objective of the TWRS CM program is to establish consistency among requirements, physical/functional configuration, information, and documentation for TWRS and TWRS products, and to maintain this consistency throughout the life-cycle of TWRS and the product, particularly as changes are being made

  14. Quality Assurance Source Requirements Traceability Database

    International Nuclear Information System (INIS)

    MURTHY, R.; NAYDENOVA, A.; DEKLEVER, R.; BOONE, A.

    2006-01-01

    At the Yucca Mountain Project the Project Requirements Processing System assists in the management of relationships between regulatory and national/industry standards source criteria and Quality Assurance Requirements and Description document (DOE/RW-0333P) requirements to create compliance matrices representing the respective relationships. The matrices are submitted to the U.S. Nuclear Regulatory Commission to assist in the commission's review, interpretation, and concurrence with the Yucca Mountain Project QA program document. The tool is highly customized to meet the needs of the Office of Civilian Radioactive Waste Management Office of Quality Assurance

  15. Plasma and Ion Sources in Large Area Coatings: A Review

    Energy Technology Data Exchange (ETDEWEB)

    Anders, Andre

    2005-02-28

    Efficient deposition of high-quality coatings often requires controlled application of excited or ionized particles. These particles are either condensing (film-forming) or assisting by providing energy and momentum to the film growth process, resulting in densification, sputtering/etching, modification of stress, roughness, texture, etc. In this review, the technical means are surveyed enabling large area application of ions and plasmas, with ion energies ranging from a few eV to a few keV. Both semiconductor-type large area (single wafer or batch processing with ~1000 cm²) and in-line web and glass-coating-type large area (>10⁷ m² annually) are considered. Characteristics and differences between plasma and ion sources are explained. The latter include gridded and gridless sources. Many examples are given, including sources based on DC, RF, and microwave discharges, some with special geometries like hollow cathodes and E×B configurations.

  16. Development of very large helicon plasma source

    International Nuclear Information System (INIS)

    Shinohara, Shunjiro; Tanikawa, Takao

    2004-01-01

    We have developed a very large volume, high-density helicon plasma source, 75 cm in diameter and 486 cm in axial length; the full width at half maximum of the plasma density profile is up to ∼42 cm, with good plasma uniformity along the z axis. By the use of a spiral antenna located just outside the end of the vacuum chamber, through a quartz-glass window, the plasma can be initiated with a very low value of radio frequency (rf) power, and a plasma density above 10¹² cm⁻³ is successfully produced with less than several hundred watts, achieving excellent discharge efficiency. It is possible to control the radial density profile in this device by changing the magnetic field configurations near the antenna and/or the antenna radiation-field patterns

  17. FRX-C Large Source Modification

    International Nuclear Information System (INIS)

    Chrien, R.E.; Tuszewski, M.; Yavornik, E.J.

    1985-01-01

    The FRX-C Large Source Modification (LSM) consists of a larger discharge tube and a larger radius coil connected to the existing FRX-C collector plates and capacitor banks. The objectives of LSM are to (1) study the size dependences of processes governing FRC formation and poloidal flux trapping in order to improve the design of larger field-reversed theta pinch devices, (2) increase the parameter s (number of local ion gyroradii between the field null and separatrix) to seek access to predicted new regimes of improved confinement and possible instability, (3) search for evidence of internal tilt instability at higher values of s where the mode is predicted to grow more rapidly, and (4) observe the effect of s and larger size on FRC confinement. In this paper we will discuss the construction of LSM, the experimental plan, and preliminary experimental results

  18. FRX-C/T large source modification

    International Nuclear Information System (INIS)

    Tuszewski, M.; Chrien, R.E.; Yavornik, E.J.; Armstrong, W.T.; Hugrass, W.; Linford, R.K.; McKenna, K.F.; Rej, D.J.; Siemon, R.E.

    1986-01-01

    FRC experiments on FRX-C/T during the past two years have been devoted to translation studies and, more recently, to formation studies in situ. The interest in the latter became stronger as FRX-D was proposed. It is intended that, in summer 1985, FRX-C/T be modified with a larger coil that would be essentially a half-scale prototype of FRX-D. This would allow studies of flux trapping with increased size and with the associated enhanced axial dynamics. The results from these formation studies, with and without some improved formation techniques, may influence the design of the FRX-D source in a major way. In addition, this FRX-C/T Large Source Modification may allow the study of FRCs with somewhat increased values of s̄, in the range 2-3. This may be sufficient to reach a new stability regime where the predicted internal tilt mode may occur. Finally, the larger coil size should improve FRC confinement times by about 50%, and may allow, with a minimum of plasma decompression, the study of FRCs with x_s in the range 0.6-0.7 in possible future translation experiments

  19. Large Data Visualization with Open-Source Tools

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Visualization and post-processing of large data have become increasingly challenging and require more and more tools to support the diversity of data to process. In this seminar, we will present a suite of open-source tools supported and developed by Kitware to perform large-scale data visualization and analysis. In particular, we will present ParaView, an open-source tool for parallel visualization of massive datasets, the Visualization Toolkit (VTK), an open-source toolkit for scientific visualization, and Tangelohub, a suite of tools for large data analytics. About the speaker: Julien Jomier is directing Kitware's European subsidiary in Lyon, France, where he focuses on European business development. Julien works on a variety of projects in the areas of parallel and distributed computing, mobile computing, image processing, and visualization. He is one of the developers of the Insight Toolkit (ITK), the Visualization Toolkit (VTK), and ParaView. Julien is also leading the CDash project, an open-source co...
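
    For orientation only: the parallel visualization workflows mentioned above can be scripted through ParaView's Python interface (the paraview.simple module, run with pvpython). The snippet below is a minimal sketch and not material from the seminar; the file name is a placeholder.

    ```python
    # Minimal pvpython sketch: load a dataset, render it, and save a screenshot.
    # "data.vtu" is a placeholder; use any file format ParaView can read.
    from paraview.simple import OpenDataFile, Show, Render, SaveScreenshot

    reader = OpenDataFile("data.vtu")   # create a reader proxy for the dataset
    Show(reader)                        # add a representation to the active view
    Render()                            # draw the scene
    SaveScreenshot("data_view.png")     # write the rendered frame to disk
    ```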

  20. Large area ion and plasma beam sources

    Energy Technology Data Exchange (ETDEWEB)

    Waldorf, J. [IPT Ionen- und Plasmatech. GmbH, Kaiserslautern (Germany)

    1996-06-01

    In the past a number of ion beam sources utilizing different methods for plasma excitation have been developed. Nevertheless, widespread use in industrial applications has not materialized, since the sources were often not able to fulfill specific demands such as broad homogeneous ion beams, compatibility with reactive gases, low ion energies at high ion current densities, or electrical neutrality of the beam. Our contribution demonstrates the technical capabilities of rf ion and plasma beam sources, which can overcome the above-mentioned disadvantages. The physical principles and features of the respective sources are presented. We report on effective low-pressure plasma excitation by electron cyclotron wave resonance (ECWR) for the generation of dense homogeneous plasmas, and on the rf plasma beam extraction method for the generation of broad low-energy plasma beams. Some applications, such as direct plasma beam deposition of a-C:H and ion beam assisted deposition of Al and Cu with tailored thin film properties, are discussed. (orig.).

  1. Large area ion and plasma beam sources

    International Nuclear Information System (INIS)

    Waldorf, J.

    1996-01-01

    In the past a number of ion beam sources utilizing different methods for plasma excitation have been developed. Nevertheless, widespread use in industrial applications has not materialized, since the sources were often not able to fulfill specific demands such as broad homogeneous ion beams, compatibility with reactive gases, low ion energies at high ion current densities, or electrical neutrality of the beam. Our contribution demonstrates the technical capabilities of rf ion and plasma beam sources, which can overcome the above-mentioned disadvantages. The physical principles and features of the respective sources are presented. We report on effective low-pressure plasma excitation by electron cyclotron wave resonance (ECWR) for the generation of dense homogeneous plasmas, and on the rf plasma beam extraction method for the generation of broad low-energy plasma beams. Some applications, such as direct plasma beam deposition of a-C:H and ion beam assisted deposition of Al and Cu with tailored thin film properties, are discussed. (orig.)

  2. Automated Determination of Magnitude and Source Length of Large Earthquakes

    Science.gov (United States)

    Wang, D.; Kawakatsu, H.; Zhuang, J.; Mori, J. J.; Maeda, T.; Tsuruoka, H.; Zhao, X.

    2017-12-01

    Rapid determination of earthquake magnitude is important for estimating shaking damage and tsunami hazards. However, due to the complexity of the source process, accurately estimating the magnitude of great earthquakes within minutes of the origin time is still a challenge. Mw is an accurate estimate for large earthquakes, but calculating Mw requires the whole wave train, including P, S, and surface phases, which takes tens of minutes to reach stations at teleseismic distances. To speed up the calculation, methods using the W phase and body waves have been developed for rapidly estimating earthquake size. Besides these methods, which involve Green's functions and inversions, there are other approaches that use empirically calibrated relations to estimate earthquake magnitudes, usually for large earthquakes. Their simple implementation and straightforward calculation have made these approaches widely used at institutions such as the Pacific Tsunami Warning Center, the Japan Meteorological Agency, and the USGS. Here we develop an approach originating from Hara [2007], estimating magnitude from the P-wave displacement and the source duration. We instead introduce a back-projection technique [Wang et al., 2016] to estimate the source duration using array data from the high-sensitivity seismograph network (Hi-net). The introduction of back-projection improves the method in two ways. First, the source duration can be accurately determined by the seismic array. Second, the results can be calculated more rapidly, and data from more distant stations are not required. We propose to develop an automated system for determining fast and reliable source information for large shallow seismic events based on real-time data from a dense regional array and global data, for earthquakes that occur at distances of roughly 30°- 85° from the array center. This system can offer fast and robust estimates of the magnitudes and rupture extents of large earthquakes in 6 to 13 min (plus
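
    A Hara-type estimate combines the peak P-wave displacement, the source duration (here obtained from back-projection), and the epicentral distance in a simple log-linear regression. The sketch below shows only that functional form; the coefficients a-d and the input numbers are placeholders, not the published calibration.

    ```python
    import math

    def hara_style_magnitude(p_disp_max_m, duration_s, distance_km,
                             a=1.0, b=1.0, c=1.0, d=5.0):
        """Illustrative Hara-type magnitude estimate.

        M = a*log10(peak P displacement) + b*log10(source duration)
            + c*log10(epicentral distance) + d

        The coefficients a-d are placeholders; the regression coefficients
        published by Hara [2007] would be used in practice.
        """
        return (a * math.log10(p_disp_max_m)
                + b * math.log10(duration_s)
                + c * math.log10(distance_km)
                + d)

    # Hypothetical inputs, for illustration only.
    print(hara_style_magnitude(p_disp_max_m=0.01, duration_s=60.0, distance_km=6000.0))
    ```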

  3. The Sources and Methods of Engineering Design Requirement

    DEFF Research Database (Denmark)

    Li, Xuemeng; Zhang, Zhinan; Ahmed-Kristensen, Saeema

    2014-01-01

    to be defined in a new context. This paper focuses on understanding the design requirement sources at the requirement elicitation phase. It aims at proposing an improved design requirement source classification considering emerging markets, and at presenting current methods for eliciting requirements for each source...

  4. Operational source receptor calculations for large agglomerations

    Science.gov (United States)

    Gauss, Michael; Shamsudheen, Semeena V.; Valdebenito, Alvaro; Pommier, Matthieu; Schulz, Michael

    2016-04-01

    For air quality policy, an important question is how much of the air pollution within an urbanized region can be attributed to local sources and how much of it is imported through long-range transport. This is critical information for a correct assessment of the effectiveness of potential emission measures. The ratio between indigenous and long-range-transported air pollution for a given region depends on its geographic location, the size of its area, the strength and spatial distribution of emission sources, and the time of year, but also, very strongly, on the current meteorological conditions, which change from day to day and thus make it important to provide such calculations in near-real time to support short-term legislation. Similarly, long-term analysis over longer periods (e.g., one year), or of specific air quality episodes in the past, can help to scientifically underpin multi-regional agreements and long-term legislation. Within the European MACC projects (Monitoring Atmospheric Composition and Climate) and the transition to the operational CAMS service (Copernicus Atmosphere Monitoring Service), the computationally efficient EMEP MSC-W air quality model has been applied, with detailed emission data, comprehensive calculations of chemistry and microphysics, and high-quality meteorological forecast data (up to 96-hour forecasts), to provide source-receptor calculations on a regular basis in forecast mode. In its current state, the product allows the user to choose among different regions and regulatory pollutants (e.g., ozone and PM) to assess the effectiveness of hypothetical reductions in air pollutant emissions that are implemented immediately, either within the agglomeration or outside it. The effects are visualized as bar charts showing the resulting changes in air pollution levels within the agglomeration as a function of time (hourly resolution, 0 to 4 days into the future). The bar charts not only allow assessing the effects of emission
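
    Conceptually, the attribution behind those bar charts can be expressed as a perturbation experiment: rerun the model with the agglomeration's emissions scaled down and divide the concentration change by the scaling factor. The sketch below illustrates that idea only; the arrays, the 15% reduction, and the assumption of a linear response are illustrative and not the CAMS product itself.

    ```python
    import numpy as np

    def local_contribution(conc_base, conc_reduced, reduction_fraction=0.15):
        """Estimate the indigenous (local) share of a pollutant concentration.

        conc_base    : concentrations from the reference model run
        conc_reduced : concentrations from a run with local emissions cut by
                       `reduction_fraction`
        Assumes an approximately linear response over the reduction range and
        scales the difference up to a 100% emission change.
        """
        return (np.asarray(conc_base) - np.asarray(conc_reduced)) / reduction_fraction

    # Illustrative hourly values only (µg/m^3).
    base = [42.0, 55.0, 61.0]
    reduced = [39.5, 51.0, 56.5]
    print(local_contribution(base, reduced))   # estimated local contribution per hour
    ```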

  5. Requirements Specification for Open Source Software Selection

    OpenAIRE

    YANG, YING

    2008-01-01

    Open source software has been widely used. The software world is enjoying the advantages of collaboration and cooperation in software development and use with the advent of the open source movement. However, little research is concerned with practical guidelines for OSS selection. It is hard for an organization to decide whether it should use OSS or not, and to select an appropriate package from a number of OSS candidates. This thesis studies how to select an open source software f...

  6. 46 CFR 111.10-4 - Power requirements, generating sources.

    Science.gov (United States)

    2010-10-01

    ... ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Power Supply § 111.10-4 Power requirements, generating sources. (a) The aggregate capacity of the electric ship's service generating sources required in § 111.10-3 must... or sources must be sufficient to supply those services necessary to provide normal operational...

  7. A large source of low-volatility secondary organic aerosol

    DEFF Research Database (Denmark)

    Ehn, Mikael; Thornton, Joel A.; Kleist, Einhard

    2014-01-01

    ...radiation and by acting as cloud condensation nuclei. The quantitative assessment of such climate effects remains hampered by a number of factors, including an incomplete understanding of how biogenic VOCs contribute to the formation of atmospheric secondary organic aerosol. The growth of newly formed particles from sizes of less than three nanometres up to the sizes of cloud condensation nuclei (about one hundred nanometres) in many continental ecosystems requires abundant, essentially non-volatile organic vapours, but the sources and compositions of such vapours remain unknown. Here we investigate the oxidation of VOCs, in particular the terpene α-pinene, under atmospherically relevant conditions in chamber experiments. We find that a direct pathway leads from several biogenic VOCs, such as monoterpenes, to the formation of large amounts of extremely low-volatility vapours. These vapours form...

  8. 30 CFR 49.40 - Requirements for large coal mines.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Requirements for large coal mines. 49.40 Section 49.40 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR EDUCATION AND TRAINING MINE RESCUE TEAMS Mine Rescue Teams for Underground Coal Mines § 49.40 Requirements for large coal...

  9. Large area, surface discharge pumped, vacuum ultraviolet light source

    Science.gov (United States)

    Sze, R.C.; Quigley, G.P.

    1996-12-17

    A large area, surface discharge pumped, vacuum ultraviolet (VUV) light source is disclosed. A contamination-free VUV light source having a 225 cm² emission area in the 240-340 nm region of the electromagnetic spectrum, with an average output power in this band of about 2 J/cm² at a wall-plug efficiency of approximately 5%, is described. Only ceramic and metal parts are employed in this surface discharge source. Because of the contamination-free, high photon energy and flux, and short pulse characteristics of the source, it is suitable for semiconductor and flat panel display material processing. 3 figs.

  10. Getting Grip on Security Requirements Elicitation by Structuring and Reusing Security Requirements Sources

    Directory of Open Access Journals (Sweden)

    Christian Schmitt

    2015-07-01

    This paper presents a model for structuring and reusing security requirements sources. The model serves as blueprint for the development of an organization-specific repository, which provides relevant security requirements sources, such as security information and knowledge sources and relevant compliance obligations, in a structured and reusable form. The resulting repository is intended to be used by development teams during the elicitation and analysis of security requirements with the goal to understand the security problem space, incorporate all relevant requirements sources, and to avoid unnecessary effort for identifying, understanding, and correlating applicable security requirements sources on a project-wise basis. We start with an overview and categorization of important security requirements sources, followed by the description of the generic model. To demonstrate the applicability and benefits of the model, the instantiation approach and details of the resulting repository of security requirements sources are presented.
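
    As a reading aid, the repository described above could be represented by a small data structure that groups requirement sources by category for reuse during elicitation. The classes, fields, and the ISO/IEC 27001 entry below are illustrative assumptions, not the authors' schema.

    ```python
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class RequirementSource:
        """One security requirements source, e.g. a standard, knowledge source, or law."""
        name: str                                    # e.g. "ISO/IEC 27001" (illustrative)
        category: str                                # e.g. "standard", "compliance obligation"
        clauses: List[str] = field(default_factory=list)

    @dataclass
    class SecuritySourceRepository:
        """Organization-specific, reusable collection of requirement sources."""
        sources: List[RequirementSource] = field(default_factory=list)

        def by_category(self, category: str) -> List[RequirementSource]:
            """Return all sources of a given category for use during elicitation."""
            return [s for s in self.sources if s.category == category]

    repo = SecuritySourceRepository([RequirementSource("ISO/IEC 27001", "standard", ["A.5", "A.8"])])
    print([s.name for s in repo.by_category("standard")])
    ```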

  11. Estimated spatial requirements of the medium- to large-sized ...

    African Journals Online (AJOL)

    Conservation planning in the Cape Floristic Region (CFR) of South Africa, a recognised world plant diversity hotspot, required information on the estimated spatial requirements of selected medium- to large-sized mammals within each of 102 Broad Habitat Units (BHUs) delineated according to key biophysical parameters.

  12. Large seismic source imaging from old analogue seismograms

    Science.gov (United States)

    Caldeira, Bento; Buforn, Elisa; Borges, José; Bezzeghoud, Mourad

    2017-04-01

    In this work we present a procedure to recover ground motions, in a proper digital form, from old seismograms on analogue physical supports (paper or microfilm) in order to study the source rupture process through the application of modern finite-source inversion tools. Whatever the quality of the analogue data and of the available digitizing technologies, recovering ground motions with accurate metrics from old seismograms is often an intricate procedure. Frequently the general parameters of the analogue instrument response that allow the shape of the ground motions to be recovered (free periods and damping) are known, but the magnification that allows their metric to be recovered is dubious. It is in these situations that the procedure applies. The procedure is based on assigning the moment magnitude value to the integral of the apparent Source Time Function (STF), estimated by deconvolution of a synthetic elementary seismogram from the related observed seismogram corrected with an instrument response affected by an improper magnification. Two delicate issues in the process are 1) the calculation of the synthetic elementary seismograms, which must consider later phases if applied to large earthquakes (the portions of signal should be 3 or 4 times longer than the rupture time), and 2) the deconvolution used to calculate the apparent STF. In the present version of the procedure, the Direct Solution Method was used to compute the elementary seismograms, and the deconvolution was processed in the time domain by an iterative algorithm that constrains the STF to remain positive and time-limited. The method was examined using synthetic data to test its accuracy and robustness. Finally, a set of 17 real old analogue seismograms from the 1939 Santa Maria (Azores) earthquake (Mw=7.1) was used to recover the waveforms in the required digital form, from which inversion allows the finite-source rupture model (slip distribution) to be computed. Acknowledgements: This work is co
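
    The deconvolution step can be pictured as a projected (Landweber-type) iteration in the time domain: after each gradient update, the STF estimate is clipped to be non-negative and confined to a fixed duration window. The sketch below is a generic implementation of that idea under those assumptions, not the authors' exact algorithm.

    ```python
    import numpy as np

    def iterative_positive_deconvolution(observed, elementary, n_stf_samples,
                                         n_iter=200, step=None):
        """Estimate an apparent STF f such that (elementary * f) approximates observed.

        Constraints applied at each iteration: f >= 0 and f is limited in time to
        n_stf_samples samples (assumed <= len(observed)).
        """
        g = np.asarray(elementary, dtype=float)
        d = np.asarray(observed, dtype=float)
        n = len(d)
        # Convolution operator: column j of G holds the elementary seismogram delayed by j.
        G = np.zeros((n, n_stf_samples))
        for j in range(n_stf_samples):
            m = min(len(g), n - j)
            G[j:j + m, j] = g[:m]
        if step is None:
            step = 1.0 / np.linalg.norm(G, 2) ** 2     # stable gradient step size
        f = np.zeros(n_stf_samples)
        for _ in range(n_iter):
            f = f + step * (G.T @ (d - G @ f))         # gradient step on ||d - G f||^2
            f = np.clip(f, 0.0, None)                  # positivity constraint
        return f
    ```

    Scaling the integral of the recovered STF to the known seismic moment then fixes the unknown magnification, which is the essence of the procedure described above.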

  13. 40 CFR 63.2343 - What are my requirements for emission sources not requiring control?

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 12 2010-07-01 2010-07-01 true What are my requirements for emission sources not requiring control? 63.2343 Section 63.2343 Protection of Environment ENVIRONMENTAL PROTECTION... (Non-Gasoline) What This Subpart Covers § 63.2343 What are my requirements for emission sources not...

  14. Updating Geospatial Data from Large Scale Data Sources

    Science.gov (United States)

    Zhao, R.; Chen, J.; Wang, D.; Shang, Y.; Wang, Z.; Li, X.; Ai, T.

    2011-08-01

    In the past decades, many geospatial databases have been established at national, regional and municipal levels around the world. Nowadays, it is widely recognized that how to update these established geospatial databases and keep them up to date is critical for their value. Consequently, more and more effort has been devoted to the continuous updating of these geospatial databases. Currently, there exist two main types of methods for geospatial database updating: directly updating with remote sensing images or field surveying materials, and indirectly updating with other updated data such as newly updated larger-scale data. The former method is fundamental, because the update data sources in both methods ultimately derive from field surveying and remote sensing. The latter method is often more economical and faster than the former. Therefore, after the larger-scale database is updated, the smaller-scale database should be updated correspondingly in order to keep the consistency of the multi-scale geospatial database. In this situation, it is very reasonable to apply map generalization technology to the process of geospatial database updating. The latter is recognized as one of the most promising methods of geospatial database updating, especially in a collaborative updating environment in terms of map scale, i.e., where different-scale databases are produced and maintained separately by different-level organizations, such as in China. This paper focuses on applying digital map generalization to the updating of geospatial databases from larger scales in the collaborative updating environment for SDI. The requirements for applying map generalization to spatial database updating are analyzed first. A brief review of geospatial data updating based on digital map generalization is then given. Based on the requirements analysis and review, we analyze the key factors for implementing the updating of geospatial data from large scale including technical

  15. Coordinating a Large, Amalgamated REU Program with Multiple Funding Sources

    Science.gov (United States)

    Fiorini, Eugene; Myers, Kellen; Naqvi, Yusra

    2017-01-01

    In this paper, we discuss the challenges of organizing a large REU program amalgamated from multiple funding sources, including diverse participants, mentors, and research projects. We detail the program's structure, activities, and recruitment, and we hope to demonstrate that the organization of this REU is not only beneficial to its…

  16. Monte Carlo modelling of large scale NORM sources using MCNP.

    Science.gov (United States)

    Wallace, J D

    2013-12-01

    The representative Monte Carlo modelling of large scale planar sources (for comparison to external environmental radiation fields) is undertaken using substantial diameter and thin profile planar cylindrical sources. The relative impact of source extent, soil thickness and sky-shine are investigated to guide decisions relating to representative geometries. In addition, the impact of source to detector distance on the nature of the detector response, for a range of source sizes, has been investigated. These investigations, using an MCNP based model, indicate a soil cylinder of greater than 20 m diameter and of no less than 50 cm depth/height, combined with a 20 m deep sky section above the soil cylinder, are needed to representatively model the semi-infinite plane of uniformly distributed NORM sources. Initial investigation of the effect of detector placement indicate that smaller source sizes may be used to achieve a representative response at shorter source to detector distances. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  17. Fabrication of large diameter alumino-silicate K+ sources

    International Nuclear Information System (INIS)

    Baca, D.; Chacon-Golcher, E.; Kwan, J.W.; Wu, J.K.

    2003-01-01

    Alumino-silicate K⁺ sources have been used in HIF experiments for many years. For example, the Neutralized Transport Expt. (NTX) and the High Current Transport Expt. (HCX) are now using this type of ion source with diameters of 2.54 cm and 10 cm, respectively. These sources have demonstrated ion currents of 80 mA and 700 mA for typical HIF pulse lengths of 5-10 µs. The corresponding current density is ∼10-15 mA/cm², but much higher current density has been observed using smaller size sources. Recently we have improved our fabrication techniques and, therefore, are able to reliably produce large diameter ion sources with a high quality emitter surface without defects. This note provides a detailed description of the procedures employed in the fabrication process. The variables in the processing steps affecting surface quality, such as substrate porosity, powder size distribution, coating technique on large area concave surfaces, drying, and heat firing temperature, have been investigated
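
    A quick consistency check of the quoted figures is sketched below; it assumes the stated currents are drawn from the full circular emitter area, which is an assumption of this sketch.

    ```python
    import math

    def current_density_ma_cm2(current_ma, diameter_cm):
        """Average emitted current density for a circular emitter surface."""
        area_cm2 = math.pi * (diameter_cm / 2.0) ** 2
        return current_ma / area_cm2

    print(current_density_ma_cm2(80.0, 2.54))    # NTX source -> ~15.8 mA/cm^2
    print(current_density_ma_cm2(700.0, 10.0))   # HCX source -> ~8.9 mA/cm^2
    ```

    Both values sit close to the ∼10-15 mA/cm² figure quoted above.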

  18. Fermi Large Area Telescope Bright Gamma-ray Source List

    Energy Technology Data Exchange (ETDEWEB)

    Abdo, Aous A.; /Naval Research Lab, Wash., D.C.; Ackermann, M.; /KIPAC, Menlo Park /SLAC; Ajello, M.; /KIPAC, Menlo Park /SLAC; Atwood, W.B.; /UC, Santa Cruz; Axelsson, M.; /Stockholm U., OKC /Stockholm U.; Baldini, L.; /INFN, Pisa; Ballet, J.; /DAPNIA, Saclay; Band, D.L.; /NASA, Goddard /NASA, Goddard; Barbiellini, Guido; /INFN, Trieste /Trieste U.; Bastieri, Denis; /INFN, Padua /Padua U.; Bechtol, K.; /KIPAC, Menlo Park /SLAC; Bellazzini, R.; /INFN, Pisa; Berenji, B.; /KIPAC, Menlo Park /SLAC; Bignami, G.F.; /Pavia U.; Bloom, Elliott D.; /KIPAC, Menlo Park /SLAC; Bonamente, E.; /INFN, Perugia /Perugia U.; Borgland, A.W.; /KIPAC, Menlo Park /SLAC; Bregeon, J.; /INFN, Pisa; Brigida, M.; /Bari U. /INFN, Bari; Bruel, P.; /Ecole Polytechnique; Burnett, Thompson H.; /Washington U., Seattle /Bari U. /INFN, Bari /KIPAC, Menlo Park /SLAC /IASF, Milan /IASF, Milan /DAPNIA, Saclay /ASDC, Frascati /INFN, Perugia /Perugia U. /KIPAC, Menlo Park /SLAC /George Mason U. /Naval Research Lab, Wash., D.C. /NASA, Goddard /KIPAC, Menlo Park /SLAC /INFN, Perugia /Perugia U. /KIPAC, Menlo Park /SLAC /Montpellier U. /Sonoma State U. /Stockholm U., OKC /Royal Inst. Tech., Stockholm /Stockholm U. /KIPAC, Menlo Park /SLAC /ASDC, Frascati /NASA, Goddard /Maryland U. /Naval Research Lab, Wash., D.C. /INFN, Trieste /Pavia U. /Bari U. /INFN, Bari /KIPAC, Menlo Park /SLAC /UC, Santa Cruz /KIPAC, Menlo Park /SLAC /KIPAC, Menlo Park /SLAC /KIPAC, Menlo Park /SLAC /Montpellier U. /Bari U. /INFN, Bari /Ecole Polytechnique /NASA, Goddard; /more authors..

    2009-05-15

    Following its launch in 2008 June, the Fermi Gamma-ray Space Telescope (Fermi) began a sky survey in August. The Large Area Telescope (LAT) on Fermi in three months produced a deeper and better resolved map of the γ-ray sky than any previous space mission. We present here initial results for energies above 100 MeV for the 205 most significant (statistical significance greater than ∼10σ) γ-ray sources in these data. These are the best characterized and best localized point-like (i.e., spatially unresolved) γ-ray sources in the early mission data.

  19. General-purpose heat source development. Phase I: design requirements

    International Nuclear Information System (INIS)

    Snow, E.C.; Zocher, R.W.

    1978-09-01

    Studies have been performed to determine the necessary design requirements for a ²³⁸PuO₂ General-Purpose Heat Source (GPHS). Systems and missions applications, as well as accident conditions, were considered. The results of these studies, along with the recommended GPHS design requirements, are given in this report

  20. Fuel-cycle financing, capital requirements and sources of funds

    International Nuclear Information System (INIS)

    Manderbach, R.W.

    1977-01-01

    An issue of global importance today is the economic case for nuclear power and the conservation of precious fossil resources. An important question is whether sufficient financial resources can be attracted to the nuclear industry in order to develop a complete fuel-cycle industry capable of meeting the requirements of a global nuclear power industry. Future growth of the nuclear power industry will depend largely on the timely development of a private competitive industry covering the total fuel cycle. The report of the Edison Electric Institute on Nuclear Fuels Supply estimates that by 1985 initial capital investment in the nuclear fuel cycle will total US$15×10⁹ and that by the year 2000, US$60×10⁹ will be required. Although the amount of funding projected is manageable from a global availability standpoint, there is a hesitancy to commit financial resources to certain segments of the fuel cycle because of the many unresolved problems in connection with the nuclear industry: uncertainty regarding local and international governmental regulations and legislation, and environmental and alternative technological considerations, coupled with the substantial long-term capital commitments needed in each of the several segments of the processes. Activities associated with the nuclear fuel cycle have unique investment requirements, with investments needed in many diverse, unrelated fields such as resource development and high-technology processes. This paper examines sources of capital on a national scale, such as net earnings, depreciation, the capital market and public subsidies, and, in the broader context, capital investments in highly industrialized and developing countries. Possible areas of government guarantees and financing and the situation regarding the financing of fuel-cycle projects in the USA and in other countries are also discussed. Comments are included on the money market and investment climate in developing countries, particularly regarding the development of uranium resources

  1. LLNL large-area inductively coupled plasma (ICP) source: Experiments

    International Nuclear Information System (INIS)

    Richardson, R.A.; Egan, P.O.; Benjamin, R.D.

    1995-05-01

    We describe initial experiments with a large (76-cm diameter) plasma source chamber to explore the problems associated with large-area inductively coupled plasma (ICP) sources used to produce high density plasmas suitable for processing 400-mm semiconductor wafers. Our experiments typically use a 640-mm diameter planar ICP coil driven at 13.56 MHz. Plasma and system data are taken in Ar and N₂ over the pressure range 3-50 mtorr. RF inductive power was run up to 2000 W, but data were typically taken over the range 100-1000 W. Diagnostics include optical emission spectroscopy, Langmuir probes, and B probes, as well as electrical circuit measurements. The B and E-M measurements are compared with models based on commercial E-M codes. Initial indications are that uniform plasmas suitable for 400-mm processing are attainable

  2. Characteristics of large scale ionic source for JT-60

    Energy Technology Data Exchange (ETDEWEB)

    Fujiwara, Yukio; Honda, Atsushi; Inoue, Takashi [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment] [and others

    1997-02-01

    The Neutral Beam Injection (NBI) system is expected to play an important role in plasma current drive and plasma control, not only for raising plasma temperature but also in next-generation tokamak fusion reactors such as JT-60 and ITER. The Japan Atomic Energy Research Institute has been developing high-energy, high-current ion sources for about 10 years. Commissioning tests of the large negative ion source No. 1 for JT-60 were carried out from June to October 1995. In this series of tests, a 400 keV, 13.5 A deuterium negative ion beam was successfully accelerated for 0.12 s at a low gas pressure of 0.22 Pa. It was also shown that the electron current could be controlled efficiently even for the deuterium negative ion beam. The test results are described in detail. (G.K.)
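
    For scale, simple arithmetic on the quoted test-shot parameters (the assumption that the full 13.5 A is accelerated to 400 keV for the entire 0.12 s pulse is made only for this sketch):

    ```python
    # Back-of-the-envelope figures for the quoted JT-60 negative ion beam test shot.
    beam_power_w = 400e3 * 13.5            # ~5.4 MW of beam power
    pulse_energy_j = beam_power_w * 0.12   # ~0.65 MJ delivered per pulse
    print(beam_power_w / 1e6, "MW;", pulse_energy_j / 1e3, "kJ")
    ```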

  3. Large area UV light source with a semiconductor cathode

    International Nuclear Information System (INIS)

    Salamov, B. G.; Ciftci, Y. Oe.; Colakoglu, K.

    2002-01-01

    The light emission (LE) in the UV and visible (blue) range generated by a planar gas discharge system (PGDS) with a semiconductor cathode (SC) is studied. The new light source offers high-intensity narrow-band emission at various UV and visible wavelengths (330-440 nm). Spectra in N₂ are presented, as well as intensity vs. pressure curves for the main peaks of the spectrum. The use of this source offers several advantages: a PGDS can be an extremely efficient energy converter, transforming and amplifying a relatively low-powered photon flux incident on the receiving surface of the SC into a flux of high-energy particles (electrons, ions, photons) over extended areas. Thus, extremely bright UV and visible sources can be built. The LE characteristics of the space in the PGDS are complex, depending on the emitting medium and species. By using IR light to excite the SC of the system, we have shown that the discharge light emission (DLE) of the device with N₂ in the gap can serve as an efficient source of UV radiation if the gas pressure and electric field are sufficiently high. This is realized owing to the stabilisation of the spatially homogeneous mode of the discharge in a narrow gap with a large emitting area of the SC. Special features of the DLE render it highly promising for the development of sources with a large area of the emitting surface, high spatial uniformity of UV radiation, and fast dynamics. These low-cost, high-power light sources can provide an interesting alternative to conventional UV lamps

  4. Open source tools for large-scale neuroscience.

    Science.gov (United States)

    Freeman, Jeremy

    2015-06-01

    New technologies for monitoring and manipulating the nervous system promise exciting biology but pose challenges for analysis and computation. Solutions can be found in the form of modern approaches to distributed computing, machine learning, and interactive visualization. But embracing these new technologies will require a cultural shift: away from independent efforts and proprietary methods and toward an open source and collaborative neuroscience. Copyright © 2015 The Author. Published by Elsevier Ltd. All rights reserved.

  5. Fuel cycle financing, capital requirements and sources of funds

    International Nuclear Information System (INIS)

    Manderbach, R.W.

    1977-01-01

    An issue of global importance today is the economic case for nuclear power and the conservation of precious fossil resources. A question important to all of us is whether sufficient financial resources can be attracted to the nuclear industry in order to develop a complete fuel cycle industry capable of meeting the requirements of a global nuclear power industry. Future growth of the nuclear power industry will depend to a large extent on the timely development of a private competitive industry covering the total fuel cycle. The report of the Edison Electric Institute on Nuclear Fuels Supply estimates that by 1985 initial capital investment in the nuclear fuel cycle will total $15 billion and by the year 2000, $60 billion will be required. Although the amount of funding projected is undoubtedly manageable from a global availability standpoint, there is a hesitancy to commit financial resources to certain segments of the fuel cycle. This is because of the many unresolved problems in connection with the nuclear industry, such as uncertainty regarding local and international governmental regulations and legislation, and environmental and alternative technological considerations, coupled, of course, with the substantial long-term capital commitments needed in each of the several segments of the processes. Activities associated with the nuclear fuel cycle have unique investment requirements. Investments are needed in many diverse, unrelated fields such as resource development and high-technology processes, some of which are not yet fully commercialized. Sources of capital will be examined on a national scale, such as net earnings, depreciation, the capital market and public subsidies. The paper also examines, in the broader context, capital investments in highly industrialized and developing countries, as well as discussing possible areas of government guarantees and financing. The intensive capital required in certain segments of the cycle, which are to be developed by private

  6. Shape accuracy requirements on starshades for large and small apertures

    Science.gov (United States)

    Shaklan, Stuart B.; Marchen, Luis; Cady, Eric

    2017-09-01

    Starshades have been designed to work with large and small telescopes alike. With smaller telescopes, the targets tend to be brighter and closer to the Solar System, and their putative planetary systems span angles that require starshades with radii of 10-30 m at distances of tens of Mm. With larger apertures, the light-collecting power enables studies of more numerous, fainter systems, requiring larger, more distant starshades with radii >50 m at distances of hundreds of Mm. Characterization using infrared wavelengths requires even larger starshades. A mitigating approach is to observe planets between the petals, where one can observe regions closer to the star but with reduced throughput and increased instrument scatter. We compare the starshade shape requirements, including petal shape, petal positioning, and other key terms, for the WFIRST 26 m starshade and the HabEx 72 m starshade concepts, over a range of working angles and telescope sizes. We also compare starshades having rippled and smooth edges and show that their performance is nearly identical.
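
    The size/distance combinations above are driven by the simple geometric inner working angle, IWA ≈ starshade radius / telescope-starshade separation. The sketch below evaluates that relation for representative values taken from the ranges quoted in the abstract; the specific radius/separation pairings are assumptions for illustration, not the mission designs.

    ```python
    import math

    RAD_TO_MAS = 180.0 / math.pi * 3600.0 * 1000.0   # radians -> milliarcseconds

    def inner_working_angle_mas(radius_m, separation_m):
        """Geometric inner working angle of a starshade: IWA ~ R / z (small-angle limit)."""
        return radius_m / separation_m * RAD_TO_MAS

    # Representative (radius, separation) pairs picked from the quoted ranges, for illustration.
    print(inner_working_angle_mas(13.0, 30e6))    # ~13 m radius at 30 Mm  -> ~89 mas
    print(inner_working_angle_mas(36.0, 120e6))   # ~36 m radius at 120 Mm -> ~62 mas
    ```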

  7. 41 CFR 51-5.2 - Mandatory source requirement.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Mandatory source requirement. 51-5.2 Section 51-5.2 Public Contracts and Property Management Other Provisions Relating to... such as the Defense Logistics Agency and the General Services Administration, and certain commercial...

  8. A new large-scale plasma source with plasma cathode

    International Nuclear Information System (INIS)

    Yamauchi, K.; Hirokawa, K.; Suzuki, H.; Satake, T.

    1996-01-01

    A new large-scale plasma source (200 mm in diameter) with a plasma cathode has been investigated. The plasma has good spatial uniformity, operates at low electron temperature, and is highly ionized at a relatively low gas pressure of about 10⁻⁴ Torr. The plasma source consists of a plasma chamber and a plasma cathode generator. The plasma chamber has an anode which is 200 mm in diameter and 150 mm in length, is made of 304 stainless steel, and acts as a plasma expansion cup. A filament-cathode-like plasma, the ''plasma cathode'', is placed on the central axis of this source. To improve the plasma spatial uniformity in the plasma chamber, a disk-shaped, floating electrode is placed between the plasma chamber and the plasma cathode. The 200 mm diameter plasma is measured using Langmuir probes. As a result, the discharge voltage is relatively low (30-120 V), the plasma space potential is almost equal to the discharge voltage and can be easily controlled, the electron temperature is several electron volts, the plasma density is about 10¹⁰ cm⁻³, and the plasma density varies by about 10% over a 100 mm diameter. (Author)
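
    Densities of this order are typically extracted from the ion-saturation branch of a Langmuir probe characteristic via the Bohm criterion. The sketch below shows that standard relation; the argon working gas, probe area, and readings are illustrative assumptions, not the authors' data.

    ```python
    import math

    E = 1.602e-19      # elementary charge, C
    M_AR = 6.63e-26    # argon ion mass, kg (illustrative working gas)

    def density_from_ion_saturation(i_sat_a, probe_area_m2, te_ev, ion_mass_kg=M_AR):
        """Bohm-sheath estimate: I_sat ~ 0.6 * n * e * A * sqrt(k*Te / M_i)."""
        bohm_speed = math.sqrt(te_ev * E / ion_mass_kg)   # k*Te expressed in joules
        return i_sat_a / (0.6 * E * probe_area_m2 * bohm_speed)

    # Illustrative reading: 10 µA of ion saturation current on a 4 mm^2 probe at Te = 3 eV.
    n_m3 = density_from_ion_saturation(1.0e-5, 4.0e-6, 3.0)
    print(f"{n_m3 / 1e6:.1e} cm^-3")   # of order 10^10 cm^-3, the same order as quoted above
    ```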

  9. Estimating Source Duration for Moderate and Large Earthquakes in Taiwan

    Science.gov (United States)

    Chang, Wen-Yen; Hwang, Ruey-Der; Ho, Chien-Yin; Lin, Tzu-Wei

    2017-04-01

    Constructing a relationship between seismic moment (M0) and source duration (t) is important for seismic hazard assessment in Taiwan, where earthquakes are quite active. In this study, we used a proposed inversion process based on teleseismic P-waves to derive the M0-t relationship in the Taiwan region for the first time. Fifteen earthquakes with Mw 5.5-7.1 and focal depths of less than 40 km were adopted. The inversion process could simultaneously determine the source duration, focal depth, and pseudo radiation patterns of the direct P-wave and two depth phases, from which M0 and fault plane solutions were estimated. Results showed that the estimated t, ranging from 2.7 to 24.9 s, varies with the one-third power of M0. That is, M0 is proportional to t³, and the relationship between them is M0 = 0.76×10²³ t³, with M0 in dyne-cm and t in seconds. The M0-t relationship derived from this study is very close to those determined from global moderate to large earthquakes. To further check the validity of the derived relationship, we used the constructed M0-t relationship to infer the source duration of the 1999 Chi-Chi (Taiwan) earthquake, with M0 = 2-5×10²⁷ dyne-cm (corresponding to Mw = 7.5-7.7), to be approximately 29-40 s, in agreement with many previous studies of its source duration (28-42 s).
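
    The Chi-Chi inference quoted above follows directly from inverting the fitted scaling; a quick check (units as in the abstract: M0 in dyne-cm, t in seconds):

    ```python
    def duration_from_moment(m0_dyne_cm, coeff=0.76e23):
        """Invert M0 = 0.76e23 * t**3 (M0 in dyne-cm, t in seconds)."""
        return (m0_dyne_cm / coeff) ** (1.0 / 3.0)

    # 1999 Chi-Chi earthquake, M0 = 2-5 x 10^27 dyne-cm as quoted in the abstract.
    print(duration_from_moment(2e27))   # ~29.7 s
    print(duration_from_moment(5e27))   # ~40.4 s  -> consistent with the quoted 29-40 s
    ```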

  10. Large-scale fracture mechanics testing -- requirements and possibilities

    International Nuclear Information System (INIS)

    Brumovsky, M.

    1993-01-01

    Application of fracture mechanics to very important and/or complicated structures, like reactor pressure vessels, also raises questions about the reliability and precision of such calculations. These problems become more pronounced under elastic-plastic loading conditions and/or in parts with non-homogeneous materials (base metal and austenitic cladding, property gradient changes through the material thickness) or with non-homogeneous stress fields (nozzles, bolt threads, residual stresses etc.). For such special cases some verification by large-scale testing is necessary and valuable. This paper discusses problems connected with the planning of such experiments with respect to their limitations and to the requirements for a good transfer of the results to an actual vessel. At the same time, the possibilities of small-scale model experiments are analysed, mostly in connection with transferring results between standard, small-scale and large-scale experiments. Experience from 30 years of large-scale testing at SKODA is used as an example to support this analysis. 1 fig

  11. Source requirements for flash x-radiography diagnostics

    International Nuclear Information System (INIS)

    Posey, L.D.

    1975-01-01

    Electron beam sources capable of being used for x-ray cinematography were evaluated with respect to their applicability for the detection of LMFBR fuel motion. In the study each source type was coupled with a common detector system in order to determine source requirements. The basis for this determination was the proposed experiment matrix for the ANL SAREF program. The experimental situations considered corresponded to partial, single, and multiple subassemblies and operating power densities of 250 watts/gm to 10^6 watts/gm. The electron beam source types considered were the LINAC, the Linear Induction Accelerator, and the Relativistic Electron Beam Accelerator. The background (neutron and gamma) from the driver reactor core and the test assembly itself was found to be a very important factor in sizing the electron beam sources. It is possible, however, that through the use of selective filtering techniques, differentiation between signal and background may be enhanced. The results of this work indicate that the Linear Induction Accelerator should be able to satisfy most experimental requirements up to and including full subassembly test configurations. Reasonable resolution should be attained for these configurations, although it will be determined to a substantial degree by the effects of photon buildup and scattering

  12. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources, NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR) and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings and all of the action items are aligned with NERSC strategic plans.

  13. Large Deployable Reflector (LDR) Requirements for Space Station Accommodations

    Science.gov (United States)

    Crowe, D. A.; Clayton, M. J.; Runge, F. C.

    1985-01-01

    Top level requirements for assembly and integration of the Large Deployable Reflector (LDR) Observatory at the Space Station are examined. Concepts are currently under study for LDR which will provide a sequel to the Infrared Astronomy Satellite and the Space Infrared Telescope Facility. LDR will provide a spectacular capability over a very broad spectral range. The Space Station will provide an essential facility for the initial assembly and check out of LDR, as well as a necessary base for refurbishment, repair and modification. By providing a manned platform, the Space Station will remove the time constraint on assembly associated with use of the Shuttle alone. Personnel safety during necessary EVA is enhanced by the presence of the manned facility.

  15. Investigation of a large volume negative hydrogen ion source

    International Nuclear Information System (INIS)

    Courteille, C.; Bruneteau, A.M.; Bacal, M.

    1995-01-01

    The electron and negative ion densities and temperatures are reported for a large volume hybrid multicusp negative ion source. Based on the scaling laws an analysis is made of the plasma formation and loss processes. It is shown that the positive ions are predominantly lost to the walls, although the observed scaling law is n+ ∝ Id^0.57. However, the total plasma loss scales linearly with the discharge current, in agreement with the theoretical model. The negative ion formation and loss is also discussed. It is shown that at low pressure (1 mTorr) the negative ion wall loss becomes a significant part of the total loss. The dependence of n-/ne versus the electron temperature is reported. When the negative ion wall loss is negligible, all the data on n-/ne versus the electron temperature fit a single curve. copyright 1995 American Institute of Physics

  16. Redefining Requirements of Ancillary Services for Technology Agnostic Sources

    DEFF Research Database (Denmark)

    Bondy, Daniel Esteban Morales; MacDonald, Jason; Kara, Emre Can

    2018-01-01

    New sources for ancillary services are needed, yet the requirements for service provision in most countries are explicitly formulated for traditional generators. This wastes the potential of new technologies to deliver ancillary services. In order to harness this potential, we propose to parameterize the requirements of ancillary services so that reserves can be built by combining the advantageous properties of different technologies. The proposal is exemplified through a laboratory test in which it is shown that the system needs can be covered through cheaper and smaller reserves.

  17. Relations between source parameters for large Persian earthquakes

    Directory of Open Access Journals (Sweden)

    Majid Nemati

    2015-11-01

    Empirical relationships for magnitude scales and fault parameters were produced using 436 Iranian intraplate earthquakes from recent regional databases, since continental events represent a large portion of the total seismicity of Iran. The relations between different source parameters of the earthquakes were derived using input information provided by the databases for events after 1900. The suggested equations for magnitude scales relate the body-wave, surface-wave and local magnitude scales to the scalar moment of the earthquakes. The dependence of source parameters such as surface and subsurface rupture length and maximum surface displacement on the moment magnitude was also investigated for some well documented earthquakes. To this end, ordinary linear regression procedures were employed for all relations. Our evaluations reveal fair agreement between the obtained relations and equations described in other worldwide and regional works in the literature. The M0-mb and M0-MS equations correlate well with the worldwide relations, and both the M0-MS and M0-ML relations agree well with regional studies in Taiwan. The equations derived in this study mainly confirm the results of global investigations of the rupture length of historical and instrumental events. However, some relations, such as MW-MN and MN-ML, differ markedly from available regional works (e.g., American and Canadian).
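
    The ordinary linear regression mentioned above can be sketched in a few lines; the example below uses synthetic magnitudes (not the study's catalogue) to fit a log-linear relation between MS and scalar moment.

```python
# Ordinary least-squares fit of log10(M0) against surface-wave magnitude MS,
# the type of regression used to build magnitude-moment relations.
# Synthetic data only; the coefficients here are not the paper's results.
import numpy as np

rng = np.random.default_rng(0)
ms = rng.uniform(4.5, 7.5, 200)                            # hypothetical MS values
log_m0 = 1.5 * ms + 16.1 + rng.normal(0.0, 0.2, ms.size)   # synthetic log10(M0) in dyne-cm

slope, intercept = np.polyfit(ms, log_m0, 1)               # OLS fit
print(f"log10(M0) ~ {slope:.2f} * MS + {intercept:.2f}")
```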

  18. Large Scale Computing and Storage Requirements for High Energy Physics

    International Nuclear Information System (INIS)

    Gerber, Richard A.; Wasserman, Harvey

    2010-01-01

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes

  20. Towards large and powerful radio frequency driven negative ion sources for fusion

    International Nuclear Information System (INIS)

    Heinemann, B; Fantz, U; Kraus, W; Schiesko, L; Wimmer, C; Wünderlich, D; Bonomo, F; Fröschle, M; Nocentini, R; Riedl, R

    2017-01-01

    The ITER neutral beam system will be equipped with radio-frequency (RF) negative ion sources, based on the IPP Garching prototype source design. Up to 100 kW at 1 MHz is coupled to the RF driver, out of which the plasma expands into the main source chamber. Compared to arc driven sources, RF sources are maintenance free and without evaporation of tungsten. The modularity of the driver concept makes it possible to supply large source volumes. The prototype source (one driver) demonstrated operation in hydrogen and deuterium up to one hour with ITER relevant parameters. The ELISE test facility is operating with a source of half the ITER size (four drivers) in order to validate the modular source concept and to gain early operational experience at ITER relevant dimensions. A large variety of diagnostics helps to improve the understanding of the relevant physics and its link to source performance. Most of the negative ions are produced on a caesiated surface by conversion of hydrogen atoms. Cs conditioning and distribution have been optimized in order to achieve high ion currents which are stable in time. A magnetic filter field is needed to reduce the electron temperature and co-extracted electron current. The influence of different field topologies and strengths on the source performance, plasma and beam properties is being investigated. The results achieved in short pulse operation are close to or even exceed the ITER requirements with respect to the extracted ion currents. However, the extracted negative ion current for long pulse operation (up to 1 h) is limited by the increase of the co-extracted electron current, especially in deuterium operation. (paper)

  1. Performance of the BATMAN RF source with a large racetrack shaped driver

    Science.gov (United States)

    Kraus, W.; Schiesko, L.; Wimmer, C.; Fantz, U.; Heinemann, B.

    2017-08-01

    In the negative ion sources in neutral beam injection systems (NBI) of future fusion reactors, the plasma is generated in up to eight cylindrical RF sources ("drivers") from which it expands into the main volume. For these large sources, in particular those used in the future DEMO NBI, a high RF efficiency and operational reliability are required. To achieve this it could be favorable to substitute each pair of drivers with one larger one. To investigate this option the cylindrical driver of the BATMAN source at IPP Garching has been replaced by a large source with a racetrack shaped base area and tested using the same extraction system. The main differences are a five times larger source volume and a different position of the Cs oven, which is mounted onto the driver's back plate and not onto the expansion volume. The conditioning characteristics and the plasma symmetry in front of the plasma grid were very similar. The extracted H- current densities jex are comparable to those achieved with the small driver at the same power. Because no saturation of jex occurred at 0.6 Pa at high power and the source allows high power operation, a maximum value of 45.1 mA/cm^2 at 103 kW has been reached. Sputtered Cu from the walls of the expansion volume affected the performance at low pressure, particularly in deuterium. The experiments will therefore be continued with Mo coating of all inner walls.

  2. Prospects for accelerator neutron sources for large volume minerals analysis

    International Nuclear Information System (INIS)

    Clayton, C.G.; Spackman, R.

    1988-01-01

    The electron Linac can be regarded as a practical source of thermal neutrons for activation analysis of large volume mineral samples. With a suitable target and moderator, a neutron flux of about 10^10 n/cm/s over 2-3 kg of rock can be generated. The proton Linac gives the possibility of a high neutron yield (>10^12 n/s) of fast neutrons at selected energies. For the electron Linac, targets of W-U and W-Be are discussed. The advantages and limitations of the system are demonstrated for the analysis of gold in rocks and ores and for platinum in chromitite. These elements were selected as they are most likely to justify an accelerator installation at the present time. Errors due to self shielding in gold particles for thermal neutrons are discussed. The proton Linac is considered for neutrons generated from a lithium target through the 7Li(p,n)7Be reaction. The analysis of gold by fast neutron activation is considered. This approach avoids particle self-absorption and, by appropriate proton energy selection, avoids potentially dominating interfering reactions. The analysis of 235U in the presence of 238U and 232Th is also considered. (author)

  3. Large Wind Turbine Design Characteristics and R and D Requirements

    Science.gov (United States)

    Lieblein, S. (Editor)

    1979-01-01

    Detailed technical presentations on large wind turbine research and development activities sponsored by public and private organizations are presented. Both horizontal and vertical axis machines are considered with emphasis on their structural design.

  4. Application of large radiation sources in Asia and the Pacific - a review

    International Nuclear Information System (INIS)

    Iya, V.K.

    1977-01-01

    The current status of the applications of large radiation sources on an industrial scale in the countries of Asia and the Pacific Region has been reviewed. The present R and D programmes and the major centres engaged in these programmes are described. As far as commercialization is concerned, the radiation processing industry is now well established in Japan, Australia, India and Israel. The major industrial uses of large radiation sources have been for: (1) sterilization of medical products, (2) food preservation, (3) cross-linking of polyethylene and (4) production of composite materials from polymer and wood or bamboo or bagasse. A table is given which indicates the current status of clearance of irradiated food in the countries under consideration. Finally, technological requirements in these countries for development and application of radiation processing are spelled out and discussed. (M.G.B.)

  5. Broadband frequency ECR ion source concepts with large resonant plasma volumes

    International Nuclear Information System (INIS)

    Alton, G.D.

    1995-01-01

    New techniques are proposed for enhancing the performances of ECR ion sources. The techniques are based on the use of high-power, variable-frequency, multiple-discrete-frequency, or broadband microwave radiation, derived from standard TWT technology, to effect large resonant ''volume'' ECR sources. The creation of a large ECR plasma ''volume'' permits coupling of more power into the plasma, resulting in the heating of a much larger electron population to higher energies, the effect of which is to produce higher charge state distributions and much higher intensities within a particular charge state than possible in present forms of the ECR ion source. If successful, these developments could significantly impact future accelerator designs and accelerator-based, heavy-ion-research programs by providing multiply-charged ion beams with the energies and intensities required for nuclear physics research from existing ECR ion sources. The methods described in this article can be used to retrofit any ECR ion source predicated on B-minimum plasma confinement techniques

  6. Large scale integration of intermittent renewable energy sources in the Greek power sector

    International Nuclear Information System (INIS)

    Voumvoulakis, Emmanouil; Asimakopoulou, Georgia; Danchev, Svetoslav; Maniatis, George; Tsakanikas, Aggelos

    2012-01-01

    As a member of the European Union, Greece has committed to achieve ambitious targets for the penetration of renewable energy sources (RES) in gross electricity consumption by 2020. Large scale integration of RES requires a suitable mixture of compatible generation units, in order to deal with the intermittency of wind velocity and solar irradiation. The scope of this paper is to examine the impact of large scale integration of intermittent energy sources, required to meet the 2020 RES target, on the generation expansion plan, the fuel mix and the spinning reserve requirements of the Greek electricity system. We perform hourly simulation of the intermittent RES generation to estimate residual load curves on a monthly basis, which are then input into a WASP-IV model of the Greek power system. We find that the decarbonisation effort, with the rapid entry of RES and the abolishment of the grandfathering of CO2 allowances, will radically transform the Greek electricity sector over the next 10 years, which has wide-reaching policy implications. - Highlights: ► Greece needs 8.8 to 9.3 GW additional RES installations by 2020. ► RES capacity credit varies between 12.2% and 15.3%, depending on interconnections. ► Without institutional changes, the reserve requirements will be more than double. ► New CCGT installed capacity will probably exceed the cost-efficient level. ► Competitive pressures should be introduced in segments other than day-ahead market.
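
    The residual-load step described above amounts to subtracting simulated wind and PV output from hourly demand; the sketch below illustrates the idea with synthetic 24-hour profiles (shapes and values are assumptions, and the WASP-IV input formatting is not shown).

```python
# Build an hourly residual-load curve from demand and intermittent RES output.
# All profiles are synthetic stand-ins for the simulated generation described above.
import numpy as np

hours = np.arange(24)
demand = 6000 + 1500 * np.sin((hours - 6) / 24 * 2 * np.pi)    # MW, hypothetical daily shape
wind = np.full(24, 1200.0)                                     # MW, hypothetical flat wind output
pv = 900 * np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None)  # MW, daylight hours only

residual = np.clip(demand - wind - pv, 0, None)                # load left for dispatchable units
print(residual.round(0))
```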

  7. Cybele: a large size ion source of module construction for Tore-Supra injector

    International Nuclear Information System (INIS)

    Simonin, A.; Garibaldi, P.

    2005-01-01

    A 70 keV 40 A hydrogen beam injector has been developed at Cadarache for plasma diagnostic purposes (MSE and charge exchange diagnostics) on the Tore-Supra Tokamak. This injector operates daily with a large size ion source (called Pagoda) which does not completely fulfill all the requirements of the present experiment. As a consequence, the development of a new ion source (called Cybele) is underway, whose objective is to provide a high proton rate (>80%) and a current density of 160 mA/cm^2 with 5% uniformity over the whole extraction surface for long shot operation (from 1 to 100 s). Moreover, the main particularity of Cybele is its modular construction concept: it is composed of five source modules vertically juxtaposed, with a special orientation which fits the curved extraction surface of the injector; this curvature ensures a geometrical focusing of the neutral beam 7 m downstream in the Tore-Supra chamber. Cybele will be tested first in positive ion production for the Tore-Supra injector, and afterwards in negative ion production mode; its modular concept could be advantageous for ensuring plasma uniformity over the large extraction surface (about 1 m^2) of the ITER neutral beam injector. A module prototype (called the Drift Source) has already been developed and optimized in the laboratory for both positive and negative ion production, where it has met the ITER ion source requirements in terms of D- current density (200 A/m^2), source pressure (0.3 Pa), uniformity and arc efficiency (0.015 A D-/kW). (authors)

  8. MILDOS-AREA: An enhanced version of MILDOS for large-area sources

    International Nuclear Information System (INIS)

    Yuan, Y.C.; Wang, J.H.C.; Zielen, A.

    1989-06-01

    The MILDOS-AREA computer code is a modified version of the MILDOS code, which estimates the radiological impacts of airborne emissions from uranium mining and milling facilities or any other large-area source involving emissions of radioisotopes of the uranium-238 series. MILDOS-AREA is designed for execution on personal computers. The modifications incorporated in the MILDOS-AREA code provide enhanced capabilities for calculating doses from large-area sources and update dosimetry calculations. The major revision from the original MILDOS code is the treatment of atmospheric dispersion from area sources: MILDOS-AREA substitutes a finite element integration approach for the virtual-point method (the algorithm used in the original MILDOS code) when specified by the user. Other revisions include the option of using Martin-Tickvart dispersion coefficients in place of Briggs coefficients for a given source, consideration of plume reflection, and updated internal dosimetry calculations based on the most recent recommendations of the International Commission on Radiological Protection and the age-specific dose calculation methodology developed by Oak Ridge National Laboratory. This report also discusses changes in computer code structure incorporated into MILDOS-AREA, summarizes data input requirements, and provides instructions for installing and using the program on personal computers. 15 refs., 9 figs., 26 tabs
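
    The area-source treatment described above can be pictured as summing many small point-source plume contributions over the source footprint. The sketch below is purely illustrative (made-up dispersion coefficients and geometry) and is not the MILDOS-AREA finite element algorithm itself.

```python
# Illustration of integrating a ground-level Gaussian plume over a finite area source
# by summing small surface elements. Dispersion coefficients and numbers are invented.
import numpy as np

def sigma_y(x):  # hypothetical lateral dispersion coefficient (m) vs downwind distance
    return 0.08 * x / np.sqrt(1 + 0.0001 * x)

def sigma_z(x):  # hypothetical vertical dispersion coefficient (m)
    return 0.06 * x / np.sqrt(1 + 0.0015 * x)

def point_conc(q, x, y, u=3.0):
    """Ground-level concentration from a ground-level point source of strength q."""
    if x <= 0:
        return 0.0
    sy, sz = sigma_y(x), sigma_z(x)
    return q / (np.pi * sy * sz * u) * np.exp(-0.5 * (y / sy) ** 2)

def area_conc(q_total, half_width, receptor_x, receptor_y, n=50):
    """Sum contributions of n x n elements of a square area source centred at the origin."""
    xs = np.linspace(-half_width, half_width, n)
    ys = np.linspace(-half_width, half_width, n)
    q_elem = q_total / n**2
    return sum(point_conc(q_elem, receptor_x - sx, receptor_y - sy)
               for sx in xs for sy in ys)

print(area_conc(q_total=1.0, half_width=100.0, receptor_x=500.0, receptor_y=0.0))
```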

  9. Simulation requirements for the Large Deployable Reflector (LDR)

    Science.gov (United States)

    Soosaar, K.

    1984-01-01

    Simulation tools for the large deployable reflector (LDR) are discussed. These tools are often transfer-function equations. However, transfer functions are inadequate for representing time-varying systems with multiple control systems of overlapping bandwidths and multi-input, multi-output characteristics. Frequency-domain approaches are useful design tools, but a full-up simulation is needed. Because the high-frequency, multi-degree-of-freedom components encountered would require a dedicated computer, non-real-time simulation is preferred. Large numerical analysis software programs are useful only to receive inputs and provide output to the next block, and should be kept out of the direct simulation loop. The following blocks make up the simulation. The thermal model block is a classical heat transfer program; it is a non-steady-state program. The quasistatic block deals with problems associated with rigid body control of reflector segments. The steady state block assembles data into equations of motion and dynamics. A differential raytrace is obtained to establish the change in wave aberrations. The observation scene is described. The focal plane module converts the photon intensity impinging on it into electron streams or into permanent film records.

  10. Large area negative ion source for high voltage neutral beams

    International Nuclear Information System (INIS)

    Poulsen, P.; Hooper, E.B. Jr.

    1979-11-01

    A source of negative deuterium ions in the multi-ampere range is described that can readily be extrapolated to the reactor-scale beams (10 A or more of neutral beam) of interest for future experiments and reactors. The negative ion source is based upon the double charge exchange process. A beam of positive ions is created and accelerated to an energy at which the attachment process D + M → D- + M+ proceeds efficiently. The positive ions are atomically neutralized either in D2 or in the charge exchange medium M. The atomic species then make a second charge exchange collision in the charge exchange target to form D-. For a sufficiently thick target, the beam reaches an equilibrium fraction of negative ions. For reasons of efficiency, the target is typically an alkali metal vapor; this experiment uses sodium. The beam of negative ions can be accelerated to high (>200 keV) energy, the electrons stripped from the ions, and a high energy neutral beam formed

  11. Irradiator design with large-volume source cylinders

    International Nuclear Information System (INIS)

    Eichholz, G.G.; Craft, T.F.; Suh, D.Y.

    1985-01-01

    To provide for economic utilization of prospective vitrified cesium-137 waste elements, a study was conducted for a conceptual irradiator system based on these elements for the commercial sterilization of sewage sludge for land spreading as fertilizer. A literature study showed that dried sludge could be sterilized more efficiently than wet. Adequate destruction of E. coli in sludge could be obtained with radiation doses as low as 150 kR. However, a dose of about 1 megarad is generally regarded as mandatory. Two cesium waste concentrations had been proposed. The one incorporating lower concentrations of Cs-137 and a surface dose of 20 kR/h was insufficiently active. Work, therefore, concentrated on the more active source cylinders, which are 18 cm in diameter with a specific activity of 16 to 17 Ci/cc. The conceptual design envisages the dry sludge passing horizontally by a conveyor system, past two rows of source elements in a three-pass array. A computer program has been developed to produce isodose contours and to calculate integrated doses for various source-target configurations
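
    A program of the kind mentioned above typically integrates point-kernel dose-rate contributions along the source and over the exposure time. The following is a rough, hypothetical sketch (assumed activity, geometry and conveyor speed; attenuation and buildup in the sludge are neglected), not the study's actual code.

```python
# Point-kernel sketch: exposure rate from a source cylinder modelled as a stack of
# point sources, integrated over a conveyor pass. All numbers are illustrative assumptions.
import numpy as np

GAMMA_CONST = 0.33   # R*m^2/(h*Ci), approximate exposure-rate constant for Cs-137

def exposure_rate(activity_ci_per_point, source_points, receptor):
    """Inverse-square sum of point-source contributions at a receptor position [m], in R/h."""
    r2 = np.sum((source_points - receptor) ** 2, axis=1)
    return np.sum(GAMMA_CONST * activity_ci_per_point / r2)

# Approximate an 18 cm diameter, 1 m tall source cylinder by 21 point sources on its axis.
z = np.linspace(-0.5, 0.5, 21)
cylinder = np.column_stack([np.zeros_like(z), np.zeros_like(z), z])
per_point = 5000.0 / z.size            # hypothetical total activity of 5000 Ci

# Integrate the exposure to a sludge element passing the source at 0.3 m lateral distance.
dt = 1.0                               # s
times = np.arange(0.0, 600.0, dt)      # hypothetical 10 minute pass
positions = -1.5 + 0.005 * times       # conveyor speed 5 mm/s, hypothetical
rates = np.array([exposure_rate(per_point, cylinder, np.array([x, 0.3, 0.0]))
                  for x in positions])
total_R = rates.sum() * dt / 3600.0    # convert (R/h)*s to R
print(f"Integrated exposure over the pass: {total_R:.1f} R")
```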

  12. 40 CFR 74.16 - Application requirements for combustion sources.

    Science.gov (United States)

    2010-07-01

    ... combustion sources. 74.16 Section 74.16 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... for combustion sources. (a) Opt-in permit application. Each complete opt-in permit application for a combustion source shall contain the following elements in a format prescribed by the Administrator: (1...

  13. Seismic and source characteristics of large chemical explosions. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Adushkin, V.V.; Kostuchenko, V.N.; Pernik, L.M.; Sultanov, D.D.; Zcikanovsky, V.I.

    1995-01-01

    From its establishment in 1947, the Institute for Dynamics of the Geospheres RAS (formerly the Special Sector of the Institute for Physics of the Earth, RAS) has provided scientific observations of the effects of nuclear explosions, as well as large-scale detonations of high explosives (HE), on the environment. This report presents the principal results of instrumental observations of various large-scale chemical explosions conducted in the former Soviet Union between 1957 and 1989. In line with the principal aim of the work, tamped and equivalent chemical explosions with total weights from several hundred to several thousand tons were selected. In particular, the selected explosions were used to study scaling laws for excavation explosions and the seismic effect of tamped explosions, and were also used for dam construction for hydropower stations and for soil melioration. Instrumental data on surface explosions in the same weight range intended to test military equipment and special objects are not included.

  14. A large-area RF source for negative hydrogen ions

    International Nuclear Information System (INIS)

    Frank, P.; Feist, J. H.; Kraus, W.; Speth, E.; Heinemann, B.; Probst, F.; Trainham, R.; Jacquot, C.

    1998-01-01

    In a collaboration with CEA Cadarache, IPP is presently developing an rf source in which the production of negative ions (H-/D-) is being investigated. It utilizes PINI-size rf sources with an external antenna and, as a first step, a small extraction system with a 48 cm^2 net extraction area. First results from BATMAN (Bavarian Test Machine for Negative Ions) show (without Cs) a linear dependence of the negative ion yield on rf power, without any sign of saturation. At elevated pressure (1.6 Pa) a current density of 4.5 mA/cm^2 H- (without Cs) has been found so far. At medium pressure (0.6 Pa) the current density is lower by approximately a factor of 5, but preliminary results with cesium injection show a relative increase by almost the same factor in this pressure range. Langmuir probe measurements indicate an electron temperature Te > 2 eV close to the plasma grid with a moderate magnetic filter (700 G cm). Attempts to improve the performance by using different magnetic configurations and different wall materials are under way

  15. 2π proportional counting chamber for large-area-coated β sources

    Indian Academy of Sciences (India)

    Pramana – Journal of Physics, Volume 86, Issue 6. 2π proportional counting chamber for large-area-coated β sources ... A provision is made for change of the source and immediate measurement of source activity. These sources are used to calibrate the efficiency of contamination monitors at radiological ...

  16. 40 CFR 74.17 - Application requirements for process sources. [Reserved

    Science.gov (United States)

    2010-07-01

    ... requirements for process sources. [Reserved] ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Application requirements for process sources. [Reserved] 74.17 Section 74.17 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY...

  17. Harmonic Instability Source Identification in Large Wind Farms

    DEFF Research Database (Denmark)

    Ebrahimzadeh, Esmaeil; Blaabjerg, Frede; Wang, Xiongfei

    2017-01-01

    A large-scale power-electronics-based power system like a wind farm introduces passive and active impedances. The interactions between the active and passive impedances can lead to harmonic-frequency oscillations above the fundamental frequency, which can be called harmonic instability. This paper presents an approach to identify which wind turbine and which bus contribute most to harmonic instability problems. In the approach, a wind farm is modeled as a Multi-Input Multi-Output (MIMO) dynamic system. The poles of the MIMO transfer matrix are used to predict the system instability, and eigenvalue sensitivity analysis with respect to the elements of the MIMO matrix locates the most influential buses of the wind farm. Time-domain simulations in the PSCAD software environment for a 400-MW wind farm validate that the presented approach is an effective tool to determine the main ...
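
    The bus-ranking step rests on a standard result: the sensitivity of an eigenvalue of a matrix to one of its elements is a product of the corresponding left- and right-eigenvector components. The sketch below illustrates this on a small, made-up matrix; it is not the paper's wind farm model.

```python
# Eigenvalue sensitivity with respect to the elements of a system matrix, the kind of
# analysis used to rank contributing buses. The 3x3 matrix here is purely illustrative.
import numpy as np

A = np.array([[-1.0, 0.4, 0.0],
              [0.3, -0.5, 0.2],
              [0.0, 0.6, -2.0]])               # hypothetical small state/MIMO matrix

eigvals, right = np.linalg.eig(A)
left = np.linalg.inv(right).conj().T            # columns are left eigenvectors

k = np.argmax(eigvals.real)                     # least-damped (most critical) mode
y, x = left[:, k], right[:, k]
sensitivity = np.outer(y.conj(), x) / (y.conj() @ x)   # d(lambda_k)/d(A_ij)

print("critical eigenvalue:", eigvals[k])
print("largest |sensitivity| entry (i, j):",
      np.unravel_index(np.abs(sensitivity).argmax(), A.shape))
```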

  18. Regulatory requirements of radiation and radioactive sources in India

    International Nuclear Information System (INIS)

    Sundara Rao, I.S.

    1993-01-01

    Manufacture and supply of radiation sources, their use and the disposal of radioactive materials are regulated through the application of the Safe Disposal of Radioactive Wastes Rules, 1987. Salient aspects of these rules are discussed.

  19. Data requirements and data sources for biodiversity priority area ...

    Indian Academy of Sciences (India)

    Unknown

    CSIRO Sustainable Ecosystems, Tropical Forest Research Centre and the Rainforest Co-operative Research Centre, ... kinds of data that already exist, and sources of those data. ...... ment of the realized qualitative niche: environmental niches.

  20. A Requirements-Based Exploration of Open-Source Software Development Projects--Towards a Natural Language Processing Software Analysis Framework

    Science.gov (United States)

    Vlas, Radu Eduard

    2012-01-01

    Open source projects do have requirements; they are, however, mostly informal, text descriptions found in requests, forums, and other correspondence. Understanding such requirements provides insight into the nature of open source projects. Unfortunately, manual analysis of natural language requirements is time-consuming, and for large projects,…

  1. Source and special nuclear material sealing and labeling requirements

    International Nuclear Information System (INIS)

    Jordan, K.N.

    1978-04-01

    The purpose of this document is to define requirements for the use of tamper-indicating seals and identifying labels on SS Material containers at Rockwell Hanford Operations. The requirements defined in this document are applicable to all Rockwell Hanford Operations employees involved in handling, processing, packaging, transferring, shipping, receiving or storing SS Material

  2. Laser wakefield accelerator based light sources: potential applications and requirements

    Energy Technology Data Exchange (ETDEWEB)

    Albert, F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). NIF and Photon Sciences; Thomas, A. G. [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of Nuclear Engineering and Radiological Sciences; Mangles, S. P.D. [Imperial College, London (United Kingdom). Blackett Lab.; Banerjee, S. [Univ. of Nebraska, Lincoln, NE (United States); Corde, S. [SLAC National Accelerator Lab., Menlo Park, CA (United States); Flacco, A. [ENSTA, CNRS, Ecole Polytechnique, Palaiseau (France); Litos, M. [SLAC National Accelerator Lab., Menlo Park, CA (United States); Neely, D. [Science and Technology Facilities Council (STFC), Oxford (United Kingdom). Rutherford Appleton Lab. (RAL). Central Laser Facility; Viera, J. [Univ. of Lisbon (Portugal). GoLP-Inst. de Plasmas e Fusao Nuclear-Lab. Associado; Najmudin, Z. [Imperial College, London (United Kingdom). Blackett Lab.; Bingham, R. [Science and Technology Facilities Council (STFC), Oxford (United Kingdom). Rutherford Appleton Lab. (RAL). Central Laser Facility; Joshi, C. [Univ. of California, Los Angeles, CA (United States). Dept. of Electrical Engineering; Katsouleas, T. [Duke Univ., Durham, NC (United States). Platt School of Engineering

    2015-01-15

    In this article we review the prospects of laser wakefield accelerators as next generation light sources for applications. This work arose as a result of discussions held at the 2013 Laser Plasma Accelerators Workshop. X-ray phase contrast imaging, X-ray absorption spectroscopy, and nuclear resonance fluorescence are highlighted as potential applications for laser-plasma based light sources. We discuss ongoing and future efforts to improve the properties of radiation from plasma betatron emission and Compton scattering using laser wakefield accelerators for these specific applications.

  3. Transformation to cloud services sourcing: Required IT governance capabilities

    NARCIS (Netherlands)

    Joha, A.; Janssen, M.F.W.H.A.

    2012-01-01

    The sourcing of cloud services is a relatively new type of service delivery model in which an organization gets access to IT services via a cloud service provider that is delivering services over the web to many users on a pay per use or period basis. Even though the importance of IT governance is

  4. A combination of permanent magnet and magnetic coil for a large diameter ion source

    International Nuclear Information System (INIS)

    Uramoto, Joshin; Kubota, Yusuke; Miyahara, Akira.

    1980-02-01

    A large diameter ion source for fast neutral beam injection is designed with a magnetic field (which we call the ''Uramoto field'') composed of a circular ferrite permanent magnet and a usual coreless magnetic coil. As the magnetic field is reduced abruptly in the discharge anode, an ion source with a uniform ion current density over a large diameter is produced easily without the ''button'' of the ORNL duoPIGatron-type ion source (a floating electrode to diffuse the axial plasma flow radially). (author)

  5. The impact of a large penetration of intermittent sources on the power system operation and planning

    Science.gov (United States)

    Ausin, Juan Carlos

    This research investigated the impact on the power system of a large penetration of intermittent renewable sources, mainly wind and photovoltaic generation. Currently, electrical utilities deal with wind and PV plants as if they were sources of negative demand, that is to say, they have no control over the power output produced. In this way, the grid absorbs all the power fluctuation as if it were coming from a common load. With the level of wind penetration growing so quickly, there is growing concern amongst the utilities and the grid operators, as they will have to deal with a much higher level of fluctuation. In the same way, the potential cost reduction of PV technologies suggests that a similar development may be expected for solar production in the mid term. The first part of the research was focused on the issues that affect utility planning and reinforcement decision making. Although DG is located mainly on the distribution network, a large penetration may alter the flows, not only on the distribution lines, but also on the transmission system and through the transmission - distribution interfaces. The optimal capacity and production costs for the UK transmission network have been calculated for several combinations of load profiles and typical wind/PV output scenarios. A full economic analysis is developed, showing the benefits and disadvantages that a large penetration of these distributed generators may have on transmission system operator reinforcement strategies. Closely related to planning factors are institutional, regulatory, and economic considerations, such as transmission pricing, which may hamper the integration of renewable energy technologies into the electric utility industry. The second part of the research related to the impact of intermittent renewable energy technologies on the second by second, minute by minute, and half-hour by half-hour operations of power systems. If a large integration of these new generators partially replaces the

  6. Leveraging human oversight and intervention in large-scale parallel processing of open-source data

    Science.gov (United States)

    Casini, Enrico; Suri, Niranjan; Bradshaw, Jeffrey M.

    2015-05-01

    The popularity of cloud computing, along with the increased availability of cheap storage, has led to the necessity of elaborating and transforming large volumes of open-source data, all in parallel. One way to handle such extensive volumes of information properly is to take advantage of distributed computing frameworks like Map-Reduce. Unfortunately, an entirely automated approach that excludes human intervention is often unpredictable and error prone. Highly accurate data processing and decision-making can be achieved by supporting an automatic process through human collaboration, in a variety of environments such as warfare, cyber security and threat monitoring. Although this mutual participation seems easily exploitable, human-machine collaboration in the field of data analysis presents several challenges. First, due to the asynchronous nature of human intervention, it is necessary to verify that once a correction is made, all the necessary reprocessing is propagated down the processing chain. Second, it is often necessary to minimize the amount of reprocessing in order to optimize the use of limited resources. To address these strict requirements, this paper introduces improvements to an innovative approach for human-machine collaboration in the processing of large amounts of open-source data in parallel.
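
    One way to picture the reprocessing constraint described above is a chunked map-reduce pass with cached partial results, where a human correction invalidates only the affected chunk. The toy sketch below is purely illustrative and is not the framework developed in the paper.

```python
# Toy sketch of bounding reprocessing after a human correction: only the chunk that
# contains the corrected record is re-mapped, then the reduce step is re-run.
from collections import Counter

def map_chunk(records):
    """Per-chunk 'map': count tokens in a list of text records."""
    counts = Counter()
    for rec in records:
        counts.update(rec.split())
    return counts

def reduce_chunks(partials):
    """'Reduce': merge the per-chunk partial counts."""
    total = Counter()
    for p in partials:
        total.update(p)
    return total

chunks = [["open source data", "cloud computing"], ["threat monitoring data"]]
partials = [map_chunk(c) for c in chunks]      # initial parallel pass (results cached)

# A human analyst corrects a record in chunk 0; only that chunk is re-mapped.
chunks[0][1] = "cloud computing data"
partials[0] = map_chunk(chunks[0])
print(reduce_chunks(partials))
```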

  7. Large shift in source of fine sediment in the upper Mississippi River

    Science.gov (United States)

    Belmont, P.; Gran, K.B.; Schottler, S.P.; Wilcock, P.R.; Day, S.S.; Jennings, C.; Lauer, J.W.; Viparelli, E.; Willenbring, J.K.; Engstrom, D.R.; Parker, G.

    2011-01-01

    Although sediment is a natural constituent of rivers, excess loading to rivers and streams is a leading cause of impairment and biodiversity loss. Remedial actions require identification of the sources and mechanisms of sediment supply. This task is complicated by the scale and complexity of large watersheds as well as changes in climate and land use that alter the drivers of sediment supply. Previous studies in Lake Pepin, a natural lake on the Mississippi River, indicate that sediment supply to the lake has increased 10-fold over the past 150 years. Herein we combine geochemical fingerprinting and a suite of geomorphic change detection techniques with a sediment mass balance for a tributary watershed to demonstrate that, although the sediment loading remains very large, the dominant source of sediment has shifted from agricultural soil erosion to accelerated erosion of stream banks and bluffs, driven by increased river discharge. Such hydrologic amplification of natural erosion processes calls for a new approach to watershed sediment modeling that explicitly accounts for channel and floodplain dynamics that amplify or dampen landscape processes. Further, this finding illustrates a new challenge in remediating nonpoint sediment pollution and indicates that management efforts must expand from soil erosion to factors contributing to increased water runoff. ?? 2011 American Chemical Society.

  8. Transmission grid requirements with scattered and flutuating renewable electricity sources

    DEFF Research Database (Denmark)

    Østergaard, Poul Alberg

    2002-01-01

    Denmark is in a situation with many scattered sources of electricity that are not controlled by the central load dispatch. At the same time, Denmark is being used as an electricity transit corridor between Norway/Sweden and Germany. Through energy systems analyses and load-flow analyses, it is determined that if scattered load balancing is introduced, electricity transit is enabled to a higher degree than if central load balancing is maintained.

  9. A simple method for determining the activity of large-area beta sources constructed from anodized aluminum foils

    International Nuclear Information System (INIS)

    Stanga, D.

    2014-01-01

    A simple method has been developed for determining the activity of large-area beta reference sources in anodized aluminum foils. It is based on the modeling of the transmission of beta rays through thin foils in planar geometry using Monte Carlo simulation. The method was checked experimentally and measurement results show that the activity of large-area beta reference sources in anodized aluminum foils can be measured with standard uncertainties smaller than the limit of 10% required by ISO 8769. - Highlights: • A method for determining the activity of large-area beta sources is presented. • The method is based on a model of electron transport in planar geometry. • The method makes use of linear programming for determining the activity. • The uncertainty of the method is smaller than 10%
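
    The activity determination described above can be caricatured as solving for the emission rate that best reproduces count rates measured behind foils of modelled transmission. The numbers below are invented, and the paper's Monte Carlo transmission modelling and linear-programming step are not reproduced; this is only a least-squares illustration of the idea.

```python
# Recover a surface emission rate from count rates behind foils of known (modelled)
# transmission. Transmission values, count rates and efficiency are hypothetical.
import numpy as np

transmission = np.array([0.62, 0.45, 0.31])      # modelled beta transmission per foil
counts_per_s = np.array([1240.0, 905.0, 615.0])  # measured count rates (hypothetical)
eff_2pi = 0.5                                    # ideal 2-pi geometry factor

# Least-squares estimate of the emission rate consistent with all three measurements.
design = transmission.reshape(-1, 1) * eff_2pi
rate, *_ = np.linalg.lstsq(design, counts_per_s, rcond=None)
print(f"Estimated surface emission rate: {rate[0]:.0f} s^-1")
```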

  10. Data requirements and data sources for biodiversity priority area ...

    Indian Academy of Sciences (India)

    Unknown

    tions required for a priority areas analysis. An important ..... Analysis Project, etc. Even before bias can be assessed and decisions taken either to proceed with existing data, model expected data. (see below), collect new data, or reject the data, those data have to be ..... Caudill 1990). They are a form of artificial intelligence,.

  11. 22 CFR 228.13 - Special source rules requiring procurement from the United States.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Special source rules requiring procurement from... ON SOURCE, ORIGIN AND NATIONALITY FOR COMMODITIES AND SERVICES FINANCED BY USAID Conditions Governing Source and Nationality of Commodity Procurement Transactions for USAID Financing § 228.13 Special source...

  12. A Nationwide Overview of Sight-Singing Requirements of Large-Group Choral Festivals

    Science.gov (United States)

    Norris, Charles E.

    2004-01-01

    The purpose of this study was to examine sight-singing requirements at junior and senior high school large-group ratings-based choral festivals throughout the United States. Responses to the following questions were sought from each state: (1) Are there ratings-based large-group choral festivals? (2) Is sight-singing a requirement? (3) Are there…

  13. Radio Follow-up on All Unassociated Gamma-Ray Sources from the Third Fermi Large Area Telescope Source Catalog

    Energy Technology Data Exchange (ETDEWEB)

    Schinzel, Frank K. [National Radio Astronomy Observatory, P.O. Box O, Socorro, NM 87801 (United States); Petrov, Leonid [Astrogeo Center, Falls Church, VA 22043 (United States); Taylor, Gregory B. [Department of Physics and Astronomy, University of New Mexico, Albuquerque, NM 87131 (United States); Edwards, Philip G., E-mail: fschinze@nrao.edu [CSIRO Astronomy and Space Science, P.O. Box 76, Epping, 1710 NSW (Australia)

    2017-04-01

    The third Fermi Large Area Telescope γ-ray source catalog (3FGL) contains over 1000 objects for which there is no known counterpart at other wavelengths. The physical origin of the γ-ray emission from those objects is unknown. Such objects are commonly referred to as unassociated and mostly do not exhibit significant γ-ray flux variability. We performed a survey of all unassociated γ-ray sources found in 3FGL using the Australia Telescope Compact Array and Very Large Array in the range 4.0–10.0 GHz. We found 2097 radio candidates for association with γ-ray sources. The follow-up with very long baseline interferometry for a subset of those candidates yielded 142 new associations with active galactic nuclei that are γ-ray sources, provided alternative associations for seven objects, and improved positions for another 144 known associations to the milliarcsecond level of accuracy. In addition, for 245 unassociated γ-ray sources we did not find a single compact radio source above 2 mJy within 3σ of their γ-ray localization. A significant fraction of these empty fields, 39%, are located away from the Galactic plane. We also found 36 extended radio sources that are candidates for association with a corresponding γ-ray object, 19 of which are most likely supernova remnants or H II regions, whereas 17 could be radio galaxies.

  14. Development of technology for the large-scale preparation of 60Co polymer film source

    International Nuclear Information System (INIS)

    Udhayakumar, J.; Pardeshi, G.S.; Gandhi, Shymala S.; Chakravarty, Rubel; Kumar, Manoj; Dash, Ashutosh; Venkatesh, Meera

    2008-01-01

    60Co sources (∼37 kBq) in the form of a thin film are widely used in position identification of perforation in offshore oil-well explorations. This paper describes the large-scale preparation of such sources using a radioactive polymer containing 60Co. 60Co was extracted into chloroform containing 8-hydroxyquinoline. The chloroform layer was mixed with polymethyl methacrylate (PMMA) polymer. A large film was prepared using the polymer solution containing the complex. The polymer film was then cut into circular sources, mounted on a source holder and supplied to various users

  15. Observations of X-ray sources in the Large Magellanic cloud by the OSO-7 satellite

    International Nuclear Information System (INIS)

    Markert, T.H.; Clark, G.W.

    1975-01-01

    Observations of the Large Magellanic Cloud with the 1-40 keV X-ray detectors on the OSO-7 satellite are reported. Results include the discovery of a previously unreported source LMC X-5, measurements of the spectral characteristics of four sources, and observations of their variability on time scales of months

  16. SIproc: an open-source biomedical data processing platform for large hyperspectral images.

    Science.gov (United States)

    Berisha, Sebastian; Chang, Shengyuan; Saki, Sam; Daeinejad, Davar; He, Ziqi; Mankar, Rupali; Mayerich, David

    2017-04-10

    There has recently been significant interest within the vibrational spectroscopy community to apply quantitative spectroscopic imaging techniques to histology and clinical diagnosis. However, many of the proposed methods require collecting spectroscopic images that have a similar region size and resolution to the corresponding histological images. Since spectroscopic images contain significantly more spectral samples than traditional histology, the resulting data sets can approach hundreds of gigabytes to terabytes in size. This makes them difficult to store and process, and the tools available to researchers for handling large spectroscopic data sets are limited. Fundamental mathematical tools, such as MATLAB, Octave, and SciPy, are extremely powerful but require that the data be stored in fast memory. This memory limitation becomes impractical for even modestly sized histological images, which can be hundreds of gigabytes in size. In this paper, we propose an open-source toolkit designed to perform out-of-core processing of hyperspectral images. By taking advantage of graphical processing unit (GPU) computing combined with adaptive data streaming, our software alleviates common workstation memory limitations while achieving better performance than existing applications.
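
    The out-of-core pattern described above can be illustrated with a memory-mapped cube processed slab by slab, so that only a small block is ever resident in memory. The sketch below is a generic stand-in (small made-up dimensions, CPU-only); it does not reproduce SIproc's GPU streaming implementation.

```python
# Out-of-core illustration: stream a hyperspectral cube from disk in row slabs and
# accumulate a mean spectrum without loading the whole array. Dimensions are made up.
import numpy as np

rows, cols, bands = 256, 256, 128
cube = np.memmap("cube.dat", dtype=np.float32, mode="w+",
                 shape=(rows, cols, bands))        # placeholder file standing in for real data

mean_spectrum = np.zeros(bands, dtype=np.float64)
slab = 32                                          # rows processed per pass
for r0 in range(0, rows, slab):
    block = np.asarray(cube[r0:r0 + slab])         # only this slab is resident in RAM
    mean_spectrum += block.reshape(-1, bands).sum(axis=0)
mean_spectrum /= rows * cols
print(mean_spectrum.shape)
```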

  17. Verification of surface source's characteristics using large-area 2π gas flow counter

    International Nuclear Information System (INIS)

    Abu Naser Waheed, M.M.; Mikami, S.; Kobayashi, H.; Noda, K.

    1998-09-01

    The Power Reactor and Nuclear Fuel Development Corporation (PNC) has a large-area 2π gas flow counter for measuring the activity of surface sources of alpha or beta ray emitters. Surface sources are used for the calibration of radiation measuring equipment for radiation control. With frequent use, the surfaces of these sources tend to deteriorate because of unwanted accidental incidents. For better calibration of radiation measuring instruments, the emission rates of these sources have to be checked periodically with the large-area 2π gas flow counter. In this paper, eight U3O8 surface sources were selected from the many sources at PNC Tokai Works and their activity was measured with the 2π gas flow counter. The results were compared with the values certified by the Japan Radio Isotope Association (JRIA). It is evident from the comparison that the surface sources are in good condition, i.e., the sources are reliable for calibrating the radiation control instruments. (author)

  18. Technology, safety, and costs of decommissioning a reference large irradiator and reference sealed sources

    Energy Technology Data Exchange (ETDEWEB)

    Haffner, D.R.; Villelgas, A.J. [Pacific Northwest Lab., Richland, WA (United States)]

    1996-01-01

    This report contains the results of a study sponsored by the US Nuclear Regulatory Commission (NRC) to examine the decommissioning of large radioactive irradiators and their respective facilities, and a broad spectrum of sealed radioactive sources and their respective devices. Conceptual decommissioning activities are identified, and the technology, safety, and costs (in early 1993 dollars) associated with decommissioning the reference large irradiator and sealed source facilities are evaluated. The study provides bases and background data for possible future NRC rulemaking regarding decommissioning, for evaluation of the reasonableness of planned decommissioning actions, and for determining if adequate funds are reserved by the licensees for decommissioning of their large irradiator or sealed source facilities. Another purpose of this study is to provide background and information to assist licensees in planning and carrying out the decommissioning of their sealed radioactive sources and respective facilities.

  19. Technology, safety, and costs of decommissioning a reference large irradiator and reference sealed sources

    International Nuclear Information System (INIS)

    Haffner, D.R.; Villelgas, A.J.

    1996-01-01

    This report contains the results of a study sponsored by the US Nuclear Regulatory Commission (NRC) to examine the decommissioning of large radioactive irradiators and their respective facilities, and a broad spectrum of sealed radioactive sources and their respective devices. Conceptual decommissioning activities are identified, and the technology, safety, and costs (in early 1993 dollars) associated with decommissioning the reference large irradiator and sealed source facilities are evaluated. The study provides bases and background data for possible future NRC rulemaking regarding decommissioning, for evaluation of the reasonableness of planned decommissioning actions, and for determining if adequate funds are reserved by the licensees for decommissioning of their large irradiator or sealed source facilities. Another purpose of this study is to provide background and information to assist licensees in planning and carrying out the decommissioning of their sealed radioactive sources and respective facilities

  20. From Collective Knowledge to Intelligence : Pre-Requirements Analysis of Large and Complex Systems

    NARCIS (Netherlands)

    Liang, Peng; Avgeriou, Paris; He, Keqing; Xu, Lai

    2010-01-01

    Requirements engineering is essentially a social collaborative activity in which involved stakeholders have to closely work together to communicate, elicit, negotiate, define, confirm, and finally come up with the requirements for the system to be implemented or upgraded. In the development of large

  1. Large-q correlations from a Hubble-type pion source

    International Nuclear Information System (INIS)

    Barghouty, A.F.; Miller, J.; Frankel, K.A.

    1993-01-01

    In two-pion correlation measurements from relativistic nuclear collisions, the correlation function, C₂(q), appears to exhibit an oscillatory structure at large (q ≥ 100 MeV/c) relative momentum. If real, this structure may have consequences for the determination of the space-time extent of the pion source. A qualitatively similar feature is seen in cellular automaton simulations of a Lorentz gas. It has been argued phenomenologically that the q-dependent oscillations can arise from an interplay between successive scattering probabilities and density variations of an exploding pion source. To further illustrate this interplay we consider a Hubble-type free expansion model for the source in which the density is time-folded from an initial Gaussian. This allows the source expansion to enter as a dynamical variable in the source density ρ[r(t); t] and thus in C₂, along with any signature of the interplay between scattering and source density.

  2. Non-equilibrium thermodynamics theory of econometric source discovery for large data analysis

    Science.gov (United States)

    van Bergem, Rutger; Jenkins, Jeffrey; Benachenhou, Dalila; Szu, Harold

    2014-05-01

    Almost all consumer and firm transactions are achieved using computers and as a result gives rise to increasingly large amounts of data available for analysts. The gold standard in Economic data manipulation techniques matured during a period of limited data access, and the new Large Data Analysis (LDA) paradigm we all face may quickly obfuscate most tools used by Economists. When coupled with an increased availability of numerous unstructured, multi-modal data sets, the impending 'data tsunami' could have serious detrimental effects for Economic forecasting, analysis, and research in general. Given this reality we propose a decision-aid framework for Augmented-LDA (A-LDA) - a synergistic approach to LDA which combines traditional supervised, rule-based Machine Learning (ML) strategies to iteratively uncover hidden sources in large data, the artificial neural network (ANN) Unsupervised Learning (USL) at the minimum Helmholtz free energy for isothermal dynamic equilibrium strategies, and the Economic intuitions required to handle problems encountered when interpreting large amounts of Financial or Economic data. To make the ANN USL framework applicable to economics we define the temperature, entropy, and energy concepts in Economics from non-equilibrium molecular thermodynamics of Boltzmann viewpoint, as well as defining an information geometry, on which the ANN can operate using USL to reduce information saturation. An exemplar of such a system representation is given for firm industry equilibrium. We demonstrate the traditional ML methodology in the economics context and leverage firm financial data to explore a frontier concept known as behavioral heterogeneity. Behavioral heterogeneity on the firm level can be imagined as a firm's interactions with different types of Economic entities over time. These interactions could impose varying degrees of institutional constraints on a firm's business behavior. We specifically look at behavioral heterogeneity for firms

  3. A global catalogue of large SO2 sources and emissions derived from the Ozone Monitoring Instrument

    Directory of Open Access Journals (Sweden)

    V. E. Fioletov

    2016-09-01

    Sulfur dioxide (SO2) measurements from the Ozone Monitoring Instrument (OMI) satellite sensor processed with the new principal component analysis (PCA) algorithm were used to detect large point emission sources or clusters of sources. A total of 491 continuously emitting point sources releasing from about 30 kt yr−1 to more than 4000 kt yr−1 of SO2 have been identified and grouped by country and by primary source origin: volcanoes (76 sources); power plants (297); smelters (53); and sources related to the oil and gas industry (65). The sources were identified using different methods, including through OMI measurements themselves applied to a new emission detection algorithm, and their evolution during the 2005–2014 period was traced by estimating annual emissions from each source. For volcanic sources, the study focused on continuous degassing, and emissions from explosive eruptions were excluded. Emissions from degassing volcanic sources were measured, many for the first time, and collectively they account for about 30 % of total SO2 emissions estimated from OMI measurements, but that fraction has increased in recent years given that cumulative global emissions from power plants and smelters are declining while emissions from the oil and gas industry remained nearly constant. Anthropogenic emissions from the USA declined by 80 % over the 2005–2014 period, as did emissions from western and central Europe, whereas emissions from India nearly doubled, and emissions from other large SO2-emitting regions (South Africa, Russia, Mexico, and the Middle East) remained fairly constant. In total, OMI-based estimates account for about a half of total reported anthropogenic SO2 emissions; the remaining half is likely related to sources emitting less than 30 kt yr−1 and not detected by OMI.

  4. A Global Catalogue of Large SO2 Sources and Emissions Derived from the Ozone Monitoring Instrument

    Science.gov (United States)

    Fioletov, Vitali E.; McLinden, Chris A.; Krotkov, Nickolay; Li, Can; Joiner, Joanna; Theys, Nicolas; Carn, Simon; Moran, Mike D.

    2016-01-01

    Sulfur dioxide (SO2) measurements from the Ozone Monitoring Instrument (OMI) satellite sensor processed with the new principal component analysis (PCA) algorithm were used to detect large point emission sources or clusters of sources. A total of 491 continuously emitting point sources releasing from about 30 kt yr−1 to more than 4000 kt yr−1 of SO2 have been identified and grouped by country and by primary source origin: volcanoes (76 sources); power plants (297); smelters (53); and sources related to the oil and gas industry (65). The sources were identified using different methods, including through OMI measurements themselves applied to a new emission detection algorithm, and their evolution during the 2005–2014 period was traced by estimating annual emissions from each source. For volcanic sources, the study focused on continuous degassing, and emissions from explosive eruptions were excluded. Emissions from degassing volcanic sources were measured, many for the first time, and collectively they account for about 30% of total SO2 emissions estimated from OMI measurements, but that fraction has increased in recent years given that cumulative global emissions from power plants and smelters are declining while emissions from the oil and gas industry remained nearly constant. Anthropogenic emissions from the USA declined by 80% over the 2005–2014 period, as did emissions from western and central Europe, whereas emissions from India nearly doubled, and emissions from other large SO2-emitting regions (South Africa, Russia, Mexico, and the Middle East) remained fairly constant. In total, OMI-based estimates account for about a half of total reported anthropogenic SO2 emissions; the remaining half is likely related to sources emitting less than 30 kt yr−1 and not detected by OMI.

  5. Analysis on Dangerous Source of Large Safety Accident in Storage Tank Area

    Science.gov (United States)

    Wang, Tong; Li, Ying; Xie, Tiansheng; Liu, Yu; Zhu, Xueyuan

    2018-01-01

    The difference between a major safety accident and a general accident is that the consequences of a major accident are particularly serious. This study examines which factors in a storage tank area directly or indirectly lead to major safety accidents. Based on the three kinds of hazard-source theory and a consequence-cause analysis of major safety accidents, the paper analyzes the hazard sources of major accidents in tank areas from four aspects: energy sources, direct causes of major accidents, missing management, and environmental impacts. From the analysis of the three kinds of hazard sources and of the environment, the main risk factors are derived and an AHP evaluation model is established; after rigorous and scientific calculation, the weights of the four categories of risk factors and of the factors within each category are obtained, as sketched below. The results of the analytic hierarchy process show that management causes are the most important, followed by environmental factors, direct causes, and energy sources. It should be noted that although the direct causes have relatively low overall importance, the failure of emergency measures and the failure of prevention and control facilities carry greater weight within that category.
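
    The AHP weighting step mentioned above can be sketched as follows; the pairwise comparison matrix is illustrative only (not the paper's data), and the category weights come from the principal eigenvector of that matrix.

```python
# Sketch of an AHP weighting step. The reciprocal pairwise comparison matrix
# below is a made-up example; AHP derives factor weights from the principal
# eigenvector of such a matrix.
import numpy as np

def ahp_weights(pairwise):
    """Return normalized weights from a reciprocal pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    return principal / principal.sum()

# Hypothetical comparisons among: management, environment, direct cause, energy source
A = np.array([[1.0, 3.0, 5.0, 7.0],
              [1/3, 1.0, 3.0, 5.0],
              [1/5, 1/3, 1.0, 3.0],
              [1/7, 1/5, 1/3, 1.0]])
print(ahp_weights(A))  # management receives the largest weight in this example
```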

  6. The European large area ISO survey - III. 90-μm extragalactic source counts

    DEFF Research Database (Denmark)

    Efstathiou, A.; Oliver, S.; Rowan-Robinson, M.

    2000-01-01

    We present results and source counts at 90 μm extracted from the preliminary analysis of the European Large Area ISO Survey (ELAIS). The survey covered about 12 deg² of the sky in four main areas and was carried out with the ISOPHOT instrument onboard the Infrared Space Observatory (ISO...... or small groups of galaxies, suggesting that the sample may include a significant fraction of luminous infrared galaxies. The source counts extracted from a reliable subset of the detected sources are in agreement with strongly evolving models of the starburst galaxy population....

  7. Comparison of different source calculations in two-nucleon channel at large quark mass

    Science.gov (United States)

    Yamazaki, Takeshi; Ishikawa, Ken-ichi; Kuramashi, Yoshinobu

    2018-03-01

    We investigate a systematic error coming from higher excited state contributions in the energy shift of a light nucleus in the two-nucleon channel by comparing two different source calculations with the exponential and wall sources. Since it is hard to obtain a clear signal of the wall source correlation function in a plateau region, we employ a large quark mass, corresponding to a pion mass of 0.8 GeV, in quenched QCD. We discuss the systematic error in the spin-triplet channel of the two-nucleon system, and the volume dependence of the energy shift.

  8. A large angle cold neutron bender using sequential garland reflections for pulsed neutron source

    Energy Technology Data Exchange (ETDEWEB)

    Ebisawa, T.; Tasaki, S. [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst; Soyama, K.; Suzuki, J. [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    We discuss the basic structure and performance of a new cold neutron bender using sequential garland reflections, designed to bend a neutron beam of large divergence through a large angle. Using this bender at a pulsed neutron source, one could not only avoid frame overlap for cold neutrons but also install multiple spectrometers on a single cold guide and obtain polarized neutron beams if necessary. (author)

  9. Establishment and application of a large calibration device of artificial radionuclide plane source

    International Nuclear Information System (INIS)

    Hu Mingkao; Zhang Jiyun; Wang Xinxing; Zhang Sheng

    2010-01-01

    With the expansion of the application fields of nuclear techniques and the development of the economy, more and more airborne, vehicle-mounted, and other large γ spectrometers are applied in environmental radiation monitoring of artificial radioactive nuclides. In order to ensure the reliability of the monitoring results, a large calibration device based on an artificial radionuclide plane source has been established. The paper introduces the construction history of the device and the results of its application. (authors)

  10. A large angle cold neutron bender using sequential garland reflections for pulsed neutron source

    International Nuclear Information System (INIS)

    Ebisawa, T.; Tasaki, S.; Soyama, K.; Suzuki, J.

    2001-01-01

    We discuss the basic structure and performance of a new cold neutron bender using sequential garland reflections, designed to bend a neutron beam of large divergence through a large angle. Using this bender at a pulsed neutron source, one could not only avoid frame overlap for cold neutrons but also install multiple spectrometers on a single cold guide and obtain polarized neutron beams if necessary. (author)

  11. Large-eddy simulation of convective boundary layer generated by highly heated source with open source code, OpenFOAM

    International Nuclear Information System (INIS)

    Hattori, Yasuo; Suto, Hitoshi; Eguchi, Yuzuru; Sano, Tadashi; Shirai, Koji; Ishihara, Shuji

    2011-01-01

    Spatial and temporal characteristics of turbulence structures in the close vicinity of a heat source, a horizontal upward-facing round plate heated to high temperature, are examined using well-resolved large-eddy simulations. Verification is carried out through comparison with experiments: the predicted statistics, including the PDF of temperature fluctuations, agree well with measurements, indicating that the present simulations can appropriately reproduce turbulence structures near the heat source. The reproduced three-dimensional thermal and fluid fields in the close vicinity of the heat source reveal how coherent structures develop along the surface: stationary, streaky flow patterns appear near the edge, and these patterns randomly shift to cell-like patterns as they move into the center region, resulting in thermal-plume meandering. Both patterns have very thin structures, but the depth of the streaky structures is considerably smaller than that of the cell-like patterns; this discrepancy produces layered structures. These structures are the source of peculiar turbulence characteristics whose prediction is quite difficult with RANS-type turbulence models. The understanding of such structures obtained in the present study should help improve the turbulence models used in nuclear engineering. (author)

  12. Numerical simulation of seismic wave propagation from land-excited large volume air-gun source

    Science.gov (United States)

    Cao, W.; Zhang, W.

    2017-12-01

    The land-excited large-volume air-gun source can be used to study regional underground structures and to detect temporal velocity changes. The air-gun source is characterized by rich low-frequency energy (from bubble oscillation, 2-8 Hz) and high repeatability. It can be excited in rivers, reservoirs, or man-made pools. Numerical simulation of seismic wave propagation from the air-gun source helps in understanding the energy partitioning and the characteristics of the waveforms recorded at stations. However, the effective energy recorded at a distant station comes from the process of bubble oscillation, which cannot be approximated by a single point source. We propose a method to simulate seismic wave propagation from the land-excited large-volume air-gun source with the finite difference method. The process can be divided into three parts: bubble oscillation and source coupling, solid-fluid coupling, and propagation in the solid medium. For the first part, the wavelet of the bubble oscillation can be simulated by a bubble model. We use a wave injection method that combines the bubble wavelet with the elastic wave equation to achieve the source coupling (a much-simplified illustration is sketched below). Then, the solid-fluid boundary condition is implemented along the water bottom. The last part is the seismic wave propagation in the solid medium, which can be readily implemented by the finite difference method. Our method can obtain accurate waveforms for the land-excited large-volume air-gun source. Based on the above forward modeling technology, we analyze the excited P wave and the energy of the converted S wave for different water-body shapes. We study two land-excited large-volume air-gun fields, one at Binchuan in Yunnan, and the other at Hutubi in Xinjiang. The Binchuan site in Yunnan uses a large irregular reservoir, and the waveform records show a clear S wave; the Hutubi site in Xinjiang uses a small man-made pool, and the waveform records show a very weak S wave. Better understanding of
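
    As a much-simplified illustration of the wave-injection idea, the sketch below injects a low-frequency source wavelet into a 1-D acoustic finite-difference grid; the paper's scheme is 3-D elastic with a bubble-oscillation wavelet and solid-fluid coupling, none of which is reproduced here, and all numerical values are hypothetical.

```python
# 1-D acoustic finite-difference sketch with "wave injection" of a source wavelet.
import numpy as np

nx, nt = 400, 800
dx, dt, c = 5.0, 1e-3, 2000.0          # 5 m grid, 1 ms step, 2 km/s wave speed
r2 = (c * dt / dx) ** 2                # (Courant number)^2 = 0.16, stable

t = np.arange(nt) * dt
f0, t0 = 6.0, 0.15                     # Ricker wavelet near the bubble band (2-8 Hz)
wavelet = (1 - 2 * (np.pi * f0 * (t - t0)) ** 2) * np.exp(-(np.pi * f0 * (t - t0)) ** 2)

src_ix, rec_ix = 50, 300               # source and receiver grid indices
p_prev, p = np.zeros(nx), np.zeros(nx)
seismogram = np.zeros(nt)
for it in range(nt):
    p_next = p.copy()                  # fixed (p = 0) boundaries at both ends
    p_next[1:-1] = (2 * p[1:-1] - p_prev[1:-1]
                    + r2 * (p[2:] - 2 * p[1:-1] + p[:-2]))
    p_next[src_ix] += wavelet[it] * dt ** 2   # inject the source wavelet at one node
    p_prev, p = p, p_next
    seismogram[it] = p[rec_ix]

print("direct arrival expected near t =", (rec_ix - src_ix) * dx / c, "s")
print("peak recorded amplitude:", np.abs(seismogram).max())
```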

  13. Utilising identifier error variation in linkage of large administrative data sources

    Directory of Open Access Journals (Sweden)

    Katie Harron

    2017-02-01

    Abstract Background Linkage of administrative data sources often relies on probabilistic methods using a set of common identifiers (e.g. sex, date of birth, postcode). Variation in data quality on an individual or organisational level (e.g. by hospital) can result in clustering of identifier errors, violating the assumption of independence between identifiers required for traditional probabilistic match weight estimation. This potentially introduces selection bias to the resulting linked dataset. We aimed to measure variation in identifier error rates in a large English administrative data source (Hospital Episode Statistics; HES) and to incorporate this information into match weight calculation. Methods We used 30,000 randomly selected HES hospital admissions records of patients aged 0–1, 5–6 and 18–19 years, for 2011/2012, linked via NHS number with data from the Personal Demographic Service (PDS; our gold standard). We calculated identifier error rates for sex, date of birth and postcode and used multi-level logistic regression to investigate associations with individual-level attributes (age, ethnicity, and gender) and organisational variation. We then derived: (i) weights incorporating dependence between identifiers; (ii) attribute-specific weights (varying by age, ethnicity and gender); and (iii) organisation-specific weights (by hospital). Results were compared with traditional match weights using a simulation study. Results Identifier errors (where values disagreed in linked HES-PDS records) or missing values were found in 0.11% of records for sex and date of birth and in 53% of records for postcode. Identifier error rates differed significantly by age, ethnicity and sex (p < 0.0005). Errors were less frequent in males, in 5–6 year olds and 18–19 year olds compared with infants, and were lowest for the Asian ethnic group. A simulation study demonstrated that substantial bias was introduced into estimated readmission rates in the presence
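
    The match-weight calculation referred to above follows the usual Fellegi-Sunter form, sketched below with identifier-specific error rates. The m- and u-probabilities are illustrative stand-ins (only the 0.11% and 53% error rates come from the abstract), and in the paper they additionally vary by attribute and by hospital.

```python
# Fellegi-Sunter-style match weights with identifier-specific error rates.
# m = P(identifiers agree | records are a true match), u = P(agree | non-match).
import math

def agreement_weight(m, u):
    return math.log2(m / u)

def disagreement_weight(m, u):
    return math.log2((1 - m) / (1 - u))

identifiers = {
    # name: (m-probability, u-probability) -- illustrative values only
    "sex":           (1 - 0.0011, 0.50),
    "date_of_birth": (1 - 0.0011, 1 / 365.25 / 80),
    "postcode":      (1 - 0.53,   1 / 20000),
}

# One hypothetical candidate record pair: sex and DOB agree, postcode disagrees.
record_pair_agreement = {"sex": True, "date_of_birth": True, "postcode": False}
total = sum(agreement_weight(*identifiers[k]) if agree
            else disagreement_weight(*identifiers[k])
            for k, agree in record_pair_agreement.items())
print(f"total match weight: {total:.2f}")
```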

  14. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    International Nuclear Information System (INIS)

    Sig Drellack, Lance Prothro

    2007-01-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The

  15. Summary of Energy Assessment Requirements under the Area Source Boiler Rule

    Science.gov (United States)

    This document provides an overview of the energy assessment requirements for the national emission standards for hazardous air pollutants (NESHAP) for area sources: industrial, commercial and Institutional boilers, 40 CFR Part 63, Subpart JJJJJJ.

  16. Conceptual design of a permanent ring magnet based helicon plasma source module intended to be used in a large size fusion grade ion source

    Energy Technology Data Exchange (ETDEWEB)

    Pandey, Arun; Sudhir, Dass; Bandyopadhyay, M., E-mail: mainak@iter-india.org; Chakraborty, A.

    2016-02-15

    A conceptual design of a permanent magnet based single-driver helicon plasma source module, along with its design approach, is described in this paper. The module unit is intended to be used in a large-size ion source. The conceptual design of the helicon source module has been carried out using a computer code, HELIC. The magnetic field topology for the ring magnet is simulated with another code, BFieldM, and the magnetic field values obtained from the calculation are used as input to the HELIC calculation for the conceptual design. The module is conceptualized around a cylindrical glass vessel to produce plasma of diameter ∼50 mm and height ∼50 mm. The inner diameter of the permanent ring magnets is of the same dimension, with a thickness of ∼10 mm each, placed slightly above the backplate to maintain the required magnetic field. The simulated results show that for hydrogen gas the expected plasma density can be as high as ∼10¹²–10¹³ cm⁻³ in the proposed helicon source configuration using a 1 kW, 13.56 MHz RF generator. An experimental setup to characterize a helicon source module unit, consisting of a cylindrical glass (plasma) chamber along with the vacuum system, RF power supplies, probes and data acquisition system, is being installed.

  17. Implementing ergonomics in large-scale engineering design. Communicating and negotiating requirements in an organizational context

    Energy Technology Data Exchange (ETDEWEB)

    Wulff, Ingrid Anette

    1997-12-31

    This thesis investigates under what conditions ergonomic criteria are being adhered to in engineering design. Specifically, the thesis discusses (1) the ergonomic criteria implementation process, (2) designer recognition of ergonomic requirements and the organization of ergonomics, (3) issues important for the implementation of ergonomic requirements, (4) how different means for experience transfer in design and operation are evaluated by the designers, (5) how designers ensure usability of offshore work places, and (6) how project members experience and cope with the large amount of documentation in large-scale engineering. 84 refs., 11 figs., 18 tabs.

  18. Distribution of hadron intranuclear cascade for large distance from a source

    International Nuclear Information System (INIS)

    Bibin, V.L.; Kazarnovskij, M.V.; Serezhnikov, S.V.

    1985-01-01

    An analytical solution of the problem of three-component hadron cascade development at large distances from a source is obtained within a series of simplifying assumptions. This makes it possible to understand the physical mechanisms of the process studied and to obtain approximate asymptotic expressions for the hadron distribution functions.

  19. Large-region acoustic source mapping using a movable array and sparse covariance fitting.

    Science.gov (United States)

    Zhao, Shengkui; Tuna, Cagdas; Nguyen, Thi Ngoc Tho; Jones, Douglas L

    2017-01-01

    Large-region acoustic source mapping is important for city-scale noise monitoring. Approaches using a single-position measurement scheme to scan large regions using small arrays cannot provide clean acoustic source maps, while deploying large arrays spanning the entire region of interest is prohibitively expensive. A multiple-position measurement scheme is applied to scan large regions at multiple spatial positions using a movable array of small size. Based on the multiple-position measurement scheme, a sparse-constrained multiple-position vectorized covariance matrix fitting approach is presented. In the proposed approach, the overall sample covariance matrix of the incoherent virtual array is first estimated using the multiple-position array data and then vectorized using the Khatri-Rao (KR) product. A linear model is then constructed for fitting the vectorized covariance matrix and a sparse-constrained reconstruction algorithm is proposed for recovering source powers from the model. The user parameter settings are discussed. The proposed approach is tested on a 30 m × 40 m region and a 60 m × 40 m region using simulated and measured data. Much cleaner acoustic source maps and lower sound pressure level errors are obtained compared to the beamforming approaches and the previous sparse approach [Zhao, Tuna, Nguyen, and Jones, Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP) (2016)].
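
    A toy version of the vectorized covariance-fitting model is sketched below for a single small linear array; the geometry, the angular grid, and the use of plain non-negative least squares in place of the paper's sparse-constrained reconstruction are all simplifying assumptions.

```python
# Khatri-Rao covariance-fitting sketch: vec(R) = (A* (KR) A) p + sigma^2 vec(I),
# with source powers p >= 0 recovered here by non-negative least squares.
import numpy as np
from scipy.optimize import nnls

def steering_matrix(sensor_pos, grid_angles, wavelength):
    # Far-field steering vectors for a uniform linear array (illustrative geometry).
    k = 2 * np.pi / wavelength
    return np.exp(1j * k * np.outer(sensor_pos, np.sin(grid_angles)))

M, wavelength = 8, 0.5
pos = np.arange(M) * wavelength / 2
grid = np.linspace(-np.pi / 2, np.pi / 2, 61)
A = steering_matrix(pos, grid, wavelength)

# Simulated covariance from two on-grid sources plus white noise.
true_p = np.zeros(len(grid)); true_p[20] = 4.0; true_p[45] = 1.0
R = (A * true_p) @ A.conj().T + 0.1 * np.eye(M)

# Columns of the Khatri-Rao design matrix: kron(a_i*, a_i); last column models noise.
KR = np.column_stack([np.kron(A[:, i].conj(), A[:, i]) for i in range(A.shape[1])])
design = np.column_stack([KR, np.eye(M).reshape(-1, 1)])
b = R.reshape(-1, order="F")                      # column-major vec(R)
x, _ = nnls(np.vstack([design.real, design.imag]),
            np.concatenate([b.real, b.imag]))
print("recovered source powers at the true grid indices:", x[20], x[45])
```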

  20. Orphan sources and the challenges: requirement for the prevention of malevolent use of radioactive sources and preparedness for radiological emergencies

    International Nuclear Information System (INIS)

    Pradeepkumar, K.S.; Sharma, D.N.

    2006-01-01

    Challenges from smuggled or illegally transported radioactive sources intended to threaten society are similar to the radiological emergencies possible from misplaced or lost radioactive sources. While a large number of radioactive sources are transported and in use the world over, emergency preparedness and response systems are not as well developed as those for nuclear facilities. After the terrorist attack on the W.T.C., there is worldwide concern about the malicious use of radioactive material, calling for improvement of emergency response systems and for international cooperation to prevent illicit trafficking of radioactive sources and material. Extremely sensitive state-of-the-art monitoring systems installed at appropriate locations, together with periodic mobile radiation monitoring around suspected areas, can act as a deterrent and can prevent the illicit trafficking of radioactive sources. Unless every nation ensures strict administrative control over its sources and implements state-of-the-art systems and methodologies for early detection and prevention of illegal movement of sources within its territory and across its boundaries, the challenges from orphan sources will remain forever. The issues and challenges of man-made radiological emergencies, remedial measures, and the methodology for prevention and management of such emergencies are discussed here. The threat from an orphan source depends on many parameters: the type and quantity of the radionuclide, and its physical and chemical form, which influence dispersion in air, deposition, solubility, migration in soil, etc., can change the radiological consequences when the source is accidentally crushed along with scrap or is used for malevolent purposes. Depending on the level of environmental contamination, the long-term effects of a radiological emergency can vary significantly. Development of capabilities for quick detection, assessment and response is essential if prevention of theft/misuse of such sources

  1. A Fieldable-Prototype Large-Area Gamma-ray Imager for Orphan Source Search

    Energy Technology Data Exchange (ETDEWEB)

    Ziock, Klaus-Peter [ORNL; Fabris, Lorenzo [ORNL; Carr, Dennis [Lawrence Livermore National Laboratory (LLNL); Collins, Jeff [Lawrence Livermore National Laboratory (LLNL); Cunningham, Mark F [Lawrence Livermore National Laboratory (LLNL); Habte Ghebretatios, Frezghi [ORNL; Karnowski, Thomas Paul [ORNL; Marchant, William [University of California, Berkeley

    2008-01-01

    We have constructed a unique instrument for use in the search for orphan sources. The system uses gamma-ray imaging to "see through" the natural background variations that effectively limit the search range of normal devices to ~10 m. The imager is mounted in a 4.9-m-long trailer and can be towed by a large personal vehicle. Source locations are determined both in range and along the direction of travel as the vehicle moves. A fully inertial platform coupled to a Global Positioning System receiver is used to map the gamma-ray images onto overhead geospatial imagery. The resulting images provide precise source locations, allowing rapid follow-up work. The instrument simultaneously searches both sides of the street to a distance of 50 m (100-m swath) for millicurie-class sources with near-perfect performance.

  2. The MACHO Project HST Follow-Up: The Large Magellanic Cloud Microlensing Source Stars

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, C.A.; /LLNL, Livermore /UC, Berkeley; Drake, A.J.; /Caltech; Cook, K.H.; /LLNL, Livermore /UC, Berkeley; Bennett, D.P.; /Caltech /Notre Dame U.; Popowski, P.; /Garching, Max Planck Inst.; Dalal, N.; /Toronto U.; Nikolaev, S.; /LLNL, Livermore; Alcock, C.; /Caltech /Harvard-Smithsonian Ctr. Astrophys.; Axelrod, T.S.; /Arizona U.; Becker, A.C. /Washington U., Seattle; Freeman, K.C.; /Res. Sch. Astron. Astrophys., Weston Creek; Geha, M.; /Yale U.; Griest, K.; /UC, San Diego; Keller, S.C.; /LLNL, Livermore; Lehner, M.J.; /Harvard-Smithsonian Ctr. Astrophys. /Taipei, Inst. Astron. Astrophys.; Marshall, S.L.; /SLAC; Minniti, D.; /Rio de Janeiro, Pont. U. Catol. /Vatican Astron. Observ.; Pratt, M.R.; /Aradigm, Hayward; Quinn, P.J.; /Western Australia U.; Stubbs, C.W.; /UC, Berkeley /Harvard U.; Sutherland, W.; /Oxford U. /Oran, Sci. Tech. U. /Garching, Max Planck Inst. /McMaster U.

    2009-06-25

    We present Hubble Space Telescope (HST) WFPC2 photometry of 13 microlensed source stars from the 5.7 year Large Magellanic Cloud (LMC) survey conducted by the MACHO Project. The microlensing source stars are identified by deriving accurate centroids in the ground-based MACHO images using difference image analysis (DIA) and then transforming the DIA coordinates to the HST frame. None of these sources is coincident with a background galaxy, which rules out the possibility that the MACHO LMC microlensing sample is contaminated with misidentified supernovae or AGN in galaxies behind the LMC. This supports the conclusion that the MACHO LMC microlensing sample has only a small amount of contamination due to non-microlensing forms of variability. We compare the WFPC2 source star magnitudes with the lensed flux predictions derived from microlensing fits to the light curve data. In most cases the source star brightness is accurately predicted. Finally, we develop a statistic which constrains the location of the Large Magellanic Cloud (LMC) microlensing source stars with respect to the distributions of stars and dust in the LMC and compare this to the predictions of various models of LMC microlensing. This test excludes, at ≳90% confidence level, models in which more than 80% of the source stars lie behind the LMC. Exotic models that attempt to explain the excess LMC microlensing optical depth seen by MACHO with a population of background sources are disfavored or excluded by this test. Models in which most of the lenses reside in a halo or spheroid distribution associated with either the Milky Way or the LMC are consistent with these data, but LMC halo or spheroid models are favored by the combined MACHO and EROS microlensing results.

  3. Computing and data handling requirements for SSC [Superconducting Super Collider] and LHC [Large Hadron Collider] experiments

    International Nuclear Information System (INIS)

    Lankford, A.J.

    1990-05-01

    A number of issues for computing and data handling in the online environment at future high-luminosity, high-energy colliders, such as the Superconducting Super Collider (SSC) and Large Hadron Collider (LHC), are outlined. Requirements for trigger processing, data acquisition, and online processing are discussed. Some aspects of possible solutions are sketched. 6 refs., 3 figs

  4. Power Generation from a Radiative Thermal Source Using a Large-Area Infrared Rectenna

    Science.gov (United States)

    Shank, Joshua; Kadlec, Emil A.; Jarecki, Robert L.; Starbuck, Andrew; Howell, Stephen; Peters, David W.; Davids, Paul S.

    2018-05-01

    Electrical power generation from a moderate-temperature thermal source by means of direct conversion of infrared radiation is important and highly desirable for energy harvesting from waste heat and micropower applications. Here, we demonstrate direct rectified power generation from an unbiased large-area nanoantenna-coupled tunnel diode rectifier called a rectenna. Using a vacuum radiometric measurement technique with irradiation from a temperature-stabilized thermal source, a generated power density of 8 nW/cm² is observed at a source temperature of 450 °C for the unbiased rectenna across an optimized load resistance. The optimized load resistance for peak power generation at each temperature coincides with the tunnel diode resistance at zero bias and corresponds to the impedance-matching condition for a rectifying antenna. Current-voltage measurements of a thermally illuminated large-area rectenna show that the current zero crossing shifts into the second quadrant, indicating rectification. Photon-assisted tunneling in the unbiased rectenna is modeled as the mechanism for the large short-circuit photocurrents observed, where the photon energy serves as an effective bias across the tunnel junction. The measured current and voltage across the load resistor as a function of the thermal source temperature represent direct current electrical power generation.
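
    The load-matching statement above is the familiar maximum-power-transfer condition for a Thevenin source; the numbers below are hypothetical and only illustrate why the peak sits at a load equal to the diode's zero-bias resistance.

```python
# Numerical illustration (not the paper's data): power delivered to a load from a
# source with internal resistance R peaks when R_load = R, at P = V_oc^2 / (4R).
import numpy as np

r_diode = 500.0                      # hypothetical zero-bias tunnel-diode resistance, ohms
v_oc = 2e-3                          # hypothetical open-circuit photovoltage, volts
r_load = np.logspace(1, 5, 200)      # candidate load resistances, 10 ohm to 100 kohm

power = v_oc**2 * r_load / (r_diode + r_load)**2
best = r_load[np.argmax(power)]
print(f"optimum load ~ {best:.0f} ohm (diode zero-bias resistance {r_diode:.0f} ohm)")
print(f"peak power {power.max()*1e9:.2f} nW, consistent with V_oc^2/(4R) = "
      f"{v_oc**2 / (4 * r_diode) * 1e9:.2f} nW")
```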

  5. Evaluating Sources of Risks in Large Engineering Projects: The Roles of Equivocality and Uncertainty

    Directory of Open Access Journals (Sweden)

    Leena Pekkinen

    2015-11-01

    Contemporary project risk management literature introduces uncertainty, i.e., the lack of information, as a fundamental basis of project risks. In this study the authors assert that equivocality, i.e., the existence of multiple and conflicting interpretations, can also serve as a basis of risks. With an in-depth empirical investigation of a large complex engineering project the authors identified risk sources having their bases in the situations where uncertainty or equivocality was the predominant attribute. The information processing theory proposes different managerial practices for risk management based on the sources of risks in uncertainty or equivocality.

  6. FERMI/LARGE AREA TELESCOPE BRIGHT GAMMA-RAY SOURCE LIST

    International Nuclear Information System (INIS)

    Abdo, A. A.; Ackermann, M.; Ajello, M.; Bechtol, K.; Berenji, B.; Blandford, R. D.; Bloom, E. D.; Borgland, A. W.; Atwood, W. B.; Axelsson, M.; Battelino, M.; Baldini, L.; Bellazzini, R.; Ballet, J.; Band, D. L.; Barbiellini, G.; Bastieri, D.; Baughman, B. M.; Bignami, G. F.; Bonamente, E.

    2009-01-01

    Following its launch in 2008 June, the Fermi Gamma-ray Space Telescope (Fermi) began a sky survey in August. The Large Area Telescope (LAT) on Fermi in three months produced a deeper and better resolved map of the γ-ray sky than any previous space mission. We present here initial results for energies above 100 MeV for the 205 most significant (statistical significance greater than ∼10σ) γ-ray sources in these data. These are the best characterized and best localized point-like (i.e., spatially unresolved) γ-ray sources in the early mission data.

  7. Manufacturing of large size RF based -ve ion source with 8 drivers-challenges and learnings

    International Nuclear Information System (INIS)

    Joshi, Jaydeep; Patel, Hitesh; Singh, Mahendrajit; Bandyopadhyay, Mainak; Chakraborty, Arun

    2017-01-01

    The Radio Frequency (RF) ion source for the ITER Diagnostic Neutral Beam (DNB) system is an 8-driver-based ion source in which the desired plasma density is produced by inductive coupling of RF power. The present paper describes the experience of developing a manufacturing design to meet the source requirements, the feasibility assessment, the prototyping carried out, parallel experiments in support of manufacturing, and the realization of sub-components along with the quality inspection activities performed. Additionally, the paper presents the observations in terms of deviations and non-conformities encountered, as part of the learning for future components.

  8. Simple emittance measurement of H- beams from a large plasma source

    International Nuclear Information System (INIS)

    Guharay, S.K.; Tsumori, K.; Hamabe, M.; Takeiri, Y.; Kaneko, O.; Kuroda, T.

    1996-03-01

    An emittance meter has been developed using the pepper-pot method. Kapton foils are used to detect the intensity distributions of small beamlets at the 'image' plane of the pepper-pot. The emittance of H⁻ beams from a large plasma source for the neutral beam injector of the Large Helical Device (LHD) has been measured. The normalized emittance (95%) of a 6 mA H⁻ beam with an emission current density of about 10 mA/cm² is ∼0.59 mm mrad. The present system is very simple, and it eliminates many complexities of the existing schemes. (author)

  9. Impact of large field angles on the requirements for deformable mirror in imaging satellites

    Science.gov (United States)

    Kim, Jae Jun; Mueller, Mark; Martinez, Ty; Agrawal, Brij

    2018-04-01

    For certain imaging satellite missions, a large aperture with a wide field of view is needed. In order to achieve diffraction-limited performance, the mirror surface Root Mean Square (RMS) error has to be less than 0.05 waves; in the case of visible light, less than 30 nm. This requirement is difficult to meet, as the large aperture will need to be segmented in order to fit inside a launch vehicle shroud. To relax this requirement and to compensate for the residual wavefront error, Micro-Electro-Mechanical System (MEMS) deformable mirrors can be considered in the aft optics of the optical system. MEMS deformable mirrors are affordable and consume low power, but are small in size. Due to the major reduction in pupil size at the deformable mirror, the effective field angle is magnified by the diameter ratio of the primary and deformable mirror. For wide field-of-view imaging, the required deformable mirror correction is field-angle dependent, impacting the required parameters of a deformable mirror such as size, number of actuators, and actuator stroke. In this paper, a representative telescope and deformable mirror system model is developed, and the deformable mirror correction is simulated to study the impact of large field angles on correcting a wavefront error using a deformable mirror in the aft optics.
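
    The field-angle magnification quoted above follows directly from the pupil demagnification; the short sketch below works through the arithmetic with hypothetical aperture sizes (not the paper's values).

```python
# Illustrative numbers only: demagnifying the pupil from a large primary onto a
# small MEMS deformable mirror magnifies the field angle by the diameter ratio,
# which drives the DM size, actuator-count, and stroke requirements.
D_primary_m = 1.5          # hypothetical primary aperture diameter
D_dm_m = 0.01              # hypothetical 10 mm MEMS deformable mirror
field_angle_deg = 0.2      # hypothetical field angle at the telescope

magnification = D_primary_m / D_dm_m
effective_angle_deg = field_angle_deg * magnification
print(f"angular magnification: {magnification:.0f}x")
print(f"effective field angle at the DM: {effective_angle_deg:.1f} deg")
```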

  10. Review of particle-in-cell modeling for the extraction region of large negative hydrogen ion sources for fusion

    Science.gov (United States)

    Wünderlich, D.; Mochalskyy, S.; Montellano, I. M.; Revel, A.

    2018-05-01

    Particle-in-cell (PIC) codes have been used since the early 1960s for calculating self-consistently the motion of charged particles in plasmas, taking into account external electric and magnetic fields as well as the fields created by the particles themselves. Due to the very small time steps (of the order of the inverse plasma frequency) and mesh sizes used, the computational requirements can be very high, and they increase drastically with increasing plasma density and size of the calculation domain. Thus, usually small computational domains and/or reduced dimensionality are used. In recent years, the available central processing unit (CPU) power has strongly increased. Together with massive parallelization of the codes, it is now possible to describe in 3D the extraction of charged particles from a plasma, using calculation domains with an edge length of several centimeters, consisting of one extraction aperture, the plasma in the direct vicinity of the aperture, and a part of the extraction system. Large negative hydrogen or deuterium ion sources are essential parts of the neutral beam injection (NBI) systems in future fusion devices like the international fusion experiment ITER and the demonstration reactor (DEMO). For ITER NBI, RF-driven sources with a source area of 0.9 × 1.9 m² and 1280 extraction apertures will be used. The extraction of negative ions is accompanied by the co-extraction of electrons, which are deflected onto an electron dump. Typically, the maximum extracted negative ion current is limited by the amount and the temporal instability of the co-extracted electrons, especially for operation in deuterium. Different PIC codes are available for the extraction region of large driven negative ion sources for fusion. Additionally, some effort is ongoing in developing codes that describe in a simplified manner (coarser mesh or reduced dimensionality) the plasma of the whole ion source. The presentation first gives a brief overview of the current status of the ion
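
    For readers unfamiliar with the method, the sketch below shows the kick-drift particle-push step at the core of a PIC cycle in a deliberately stripped-down 1-D electrostatic form; a real extraction-region code is 3-D, solves the fields self-consistently on a mesh, and includes magnetic deflection, none of which appears here, and the field value is a hypothetical stand-in.

```python
# Stripped-down kick-drift particle push (the "particle" half of a PIC cycle),
# 1-D electrostatic, with a fixed field replacing the self-consistent field solve.
import numpy as np

q_m = -1.759e11            # electron charge-to-mass ratio, C/kg
dt = 1.0e-12               # time step: a fraction of the inverse plasma frequency
E = -1.0e4                 # V/m, hypothetical constant accelerating field

rng = np.random.default_rng(0)
x = np.zeros(10000)                        # particle positions, m
v = rng.normal(0.0, 1.0e5, 10000)          # initial thermal velocities, m/s

for _ in range(2000):                      # 2 ns of motion
    v += q_m * E * dt                      # "kick": accelerate in the field
    x += v * dt                            # "drift": move the particles
print(f"mean velocity after 2 ns: {v.mean():.3e} m/s")
print(f"mean displacement: {x.mean():.3e} m")
```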

  11. Abnormally large energy spread of electron beams extracted from plasma sources

    Energy Technology Data Exchange (ETDEWEB)

    Winter, H [Technische Univ., Vienna (Austria). Inst. fuer Allgemeine Physik

    1976-07-01

    Intense electron beams extracted from DUOPLASMATRON-plasma cathodes show a high degree of modulation in intensity and an abnormally large energy spread; these facts cannot be explained simply by the temperature of the plasma electrons and the discharge structure. However, an analysis of the discharge stability behaviour and the interaction of source- and extracted beam-plasma leads to an explanation for the observed effects.

  12. Design specific joint optimization of masks and sources on a very large scale

    Science.gov (United States)

    Lai, K.; Gabrani, M.; Demaris, D.; Casati, N.; Torres, A.; Sarkar, S.; Strenski, P.; Bagheri, S.; Scarpazza, D.; Rosenbluth, A. E.; Melville, D. O.; Wächter, A.; Lee, J.; Austel, V.; Szeto-Millstone, M.; Tian, K.; Barahona, F.; Inoue, T.; Sakamoto, M.

    2011-04-01

    Joint optimization (JO) of source and mask together is known to produce better SMO solutions than sequential optimization of the source and the mask. However, large-scale JO problems are very difficult to solve because the global impact of the source variables causes an enormous number of mask variables to be coupled together. This work presents innovations that minimize this runtime bottleneck. The proposed SMO parallelization algorithm allows separate mask regions to be processed efficiently across multiple CPUs in a high-performance computing (HPC) environment, despite the fact that a truly joint optimization is being carried out with source variables that interact across the entire mask. Building on this engine, a progressive deletion (PD) method was developed that can directly compute "binding constructs" for the optimization, i.e., our method can essentially determine the particular feature content which limits the process window attainable by the optimum source. This method allows us to minimize the uncertainty inherent in different clustering/ranking methods when seeking an overall optimum source that results from the use of heuristic metrics. An objective benchmarking of the effectiveness of different pattern sampling methods was performed during post-optimization analysis. The PD method serves as a gold standard against which to develop optimum pattern clustering/ranking algorithms. With this work, it is shown that it is not necessary to exhaustively optimize the entire mask together with the source in order to identify these binding clips. If the number of clips to be optimized exceeds the practical limit of the parallel SMO engine, one can start with a pattern selection step to achieve high clip-count compression before SMO. With this LSSO capability one can address the challenging problem of layout-specific design, or improve the technology source as cell layouts and sample layouts replace lithography test structures in the development cycle.

  13. Requirements and concept design for large earth survey telescope for SEOS

    Science.gov (United States)

    Mailhot, P.; Bisbee, J.

    1975-01-01

    The efforts of a one-year program of Requirements Analysis and Conceptual Design for the Large Earth Survey Telescope for the Synchronous Earth Observatory Satellite are summarized. A 1.4 meter aperture Cassegrain telescope with a 0.6 deg field of view is shown to do an excellent job of satisfying the observational requirements for a wide range of earth resources and meteorological applications. The telescope provides imagery or thermal mapping in ten spectral bands at one time in a field-sharing grouping of linear detector arrays. Pushbroom scanning is accomplished by spacecraft slew.

  14. Sizing and scaling requirements of a large-scale physical model for code validation

    International Nuclear Information System (INIS)

    Khaleel, R.; Legore, T.

    1990-01-01

    Model validation is an important consideration in application of a code for performance assessment and therefore in assessing the long-term behavior of the engineered and natural barriers of a geologic repository. Scaling considerations relevant to porous media flow are reviewed. An analysis approach is presented for determining the sizing requirements of a large-scale, hydrology physical model. The physical model will be used to validate performance assessment codes that evaluate the long-term behavior of the repository isolation system. Numerical simulation results for sizing requirements are presented for a porous medium model in which the media properties are spatially uncorrelated

  15. Sources

    International Nuclear Information System (INIS)

    Duffy, L.P.

    1991-01-01

    This paper discusses the sources of radiation in the narrow perspective of radioactivity, and in the even narrower perspective of those sources that concern environmental management and restoration activities at DOE facilities, as well as a few related sources: sources of irritation, sources of inflammatory jingoism, and sources of information. First, the sources of irritation fall into three categories: no reliable scientific ombudsman to speak without bias or prejudice for the public good; technical jargon with unclear definitions within the radioactive nomenclature; and a scientific community that keeps a low profile with regard to public information. The next area of personal concern is the sources of inflammation. These include such things as: plutonium being described as the most dangerous substance known to man; the amount of plutonium required to make a bomb; talk of transuranic waste containing plutonium and its health effects; TMI-2 and Chernobyl being described as Siamese twins; inadequate information on low-level disposal sites and current regulatory requirements under 10 CFR 61; and enhanced engineered waste disposal not being presented to the public accurately. There are also numerous sources of disinformation regarding low-level and high-level radiation, along with the elusive nature of the scientific community, the limited resources of federal and state health agencies to address comparative risk, and regulatory agencies speaking out without the support of the scientific community

  16. Design and fabrication of a large rectangular magnetic cusp plasma source for high intensity neutral beam injectors

    International Nuclear Information System (INIS)

    Biagi, L.A.; Berkner, K.H.; Ehlers, K.W.; Paterson, J.A.; Porter, J.R.

    1979-11-01

    The design and fabrication techniques for a large, rectangular magnetic bucket plasma source are described. This source is compatible with the accelerator structures for the TFTR and DIII neutral-beam systems

  17. Solder bond requirement for large, built-up, high-performance conductors

    International Nuclear Information System (INIS)

    Willig, R.L.

    1981-01-01

    Some large built-up conductors fabricated for large superconducting magnets are designed to operate above the maximum recovery current. Because the stability of these conductors is sensitive to the quality of the solder bond joining the composite superconductor to the high-conductivity substrate, a minimum bond requirement is necessary. The present analysis finds that the superconductor is unstable and becomes abruptly resistive when there are temperature excursions into the current-sharing region of a poorly bonded conductor. This abrupt transition produces eddy current heating in the vicinity of the superconducting filaments and causes a sharp reduction in the minimum propagating zone (MPZ) energy. This sensitivity of the MPZ energy to the solder bond contact area is used to specify a minimum bond requirement. For the superconducting MHD magnet built for the Component Development Integration Facility (CDIF), the minimum bonded surface area is 0.68 cm²/cm, which is 44% of the composite perimeter. 5 refs

  18. Using Soluble Reactive Phosphorus and Ammonia to Identify Point Source Discharge from Large Livestock Facilities

    Science.gov (United States)

    Borrello, M. C.; Scribner, M.; Chessin, K.

    2013-12-01

    A growing body of research draws attention to the negative environmental impacts on surface water from large livestock facilities. These impacts are mostly in the form of excessive nutrient loading resulting in significantly decreased oxygen levels. Over-application of animal waste on fields as well as direct discharge into surface water from facilities themselves has been identified as the main contributor to the development of hypoxic zones in Lake Erie, Chesapeake Bay and the Gulf of Mexico. Some regulators claim enforcement of water quality laws is problematic because of the nature and pervasiveness of non-point source impacts. Any direct discharge by a facility is a violation of permits governed by the Clean Water Act, unless the facility has special dispensation for discharge. Previous research by the principal author and others has shown runoff and underdrain transport are the main mechanisms by which nutrients enter surface water. This study utilized previous work to determine if the effects of non-point source discharge can be distinguished from direct (point-source) discharge using simple nutrient analysis and dissolved oxygen (DO) parameters. Nutrient and DO parameters were measured from three sites: 1. A stream adjacent to a field receiving manure, upstream of a large livestock facility with a history of direct discharge, 2. The same stream downstream of the facility and 3. A stream in an area relatively unimpacted by large-scale agriculture (control site). Results show that calculating a simple Pearson correlation coefficient (r) of soluble reactive phosphorus (SRP) and ammonia over time as well as temperature and DO, distinguishes non-point source from point source discharge into surface water. The r value for SRP and ammonia for the upstream site was 0.01 while the r value for the downstream site was 0.92. The control site had an r value of 0.20. Likewise, r values were calculated on temperature and DO for each site. High negative correlations
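
    The screening statistic used above is an ordinary Pearson correlation between the SRP and ammonia time series at a site; the concentrations below are made up for illustration, while the reported values (r = 0.92 downstream, 0.01 upstream, 0.20 at the control site) come from the abstract.

```python
# Pearson correlation between SRP and ammonia time series at one monitoring site.
import numpy as np

def pearson_r(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical co-varying concentrations, as a direct (point-source) discharge
# would produce; non-point runoff tends to decouple the two nutrients.
srp_mg_l     = [0.05, 0.21, 0.18, 0.40, 0.09, 0.33]
ammonia_mg_l = [0.10, 0.55, 0.47, 0.98, 0.20, 0.81]
print(f"r = {pearson_r(srp_mg_l, ammonia_mg_l):.2f}")  # high r suggests a common point source
```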

  19. Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources

    Science.gov (United States)

    Jia, Z.; Zhan, Z.

    2017-12-01

    Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties in existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess reliability and uncertainty of obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain first-order rupture complexity of large earthquakes robustly. Additionally, relatively few parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and application of MHS method on real earthquakes show that our method can capture major features of large earthquake rupture process, and provide information for more detailed rupture history analysis.

  20. A Large Neutrino Detector Facility at the Spallation Neutron Source at Oak Ridge National Laboratory

    International Nuclear Information System (INIS)

    Efremenko, Y.V.

    1999-01-01

    The ORLaND (Oak Ridge Large Neutrino Detector) collaboration proposes to construct a large neutrino detector in an underground experimental hall adjacent to the first target station of the Spallation Neutron Source (SNS) at the Oak Ridge National Laboratory. The main mission of a large (2000 ton) scintillation-Cherenkov detector is to measure ν̄_μ → ν̄_e neutrino oscillation parameters more accurately than they can be determined in other experiments, or to significantly extend the covered parameter space (down to sin²2θ ≤ 10⁻⁴). In addition to the neutrino oscillation measurements, ORLaND would be capable of making precise measurements of sin²θ_W, searching for a magnetic moment of the muon neutrino, and investigating the anomaly in the KARMEN time spectrum, which has been attributed to a new neutral particle. With the same facility an extensive program of measurements of neutrino-nucleus cross sections is also planned in support of nuclear astrophysics

  1. Sources, classification, and disposal of radioactive wastes: History and legal and regulatory requirements

    International Nuclear Information System (INIS)

    Kocher, D.C.

    1991-01-01

    This report discusses the following topics: (1) early definitions of different types (classes) of radioactive waste developed prior to definitions in laws and regulations; (2) sources of different classes of radioactive waste; (3) current laws and regulations addressing classification of radioactive wastes; and requirements for disposal of different waste classes. Relationship between waste classification and requirements for permanent disposal is emphasized; (4) federal and state responsibilities for radioactive wastes; and (5) distinctions between radioactive wastes produced in civilian and defense sectors

  2. Type B package for the transport of large medical and industrial sources

    International Nuclear Information System (INIS)

    Brown, Darrell Dwaine; Noss, Philip W.

    2010-01-01

    AREVA Federal Services LLC, under contract to the Los Alamos National Laboratory's Offsite Source Recovery Project, is developing a new Type B(U)-96 package for the transport of unwanted or abandoned high-activity gamma and neutron radioactive sealed sources (sources). The sources were used primarily in medical or industrial devices, and are of domestic (USA) or foreign origin. To promote public safety and mitigate the possibility of loss or misuse, the Offsite Source Recovery Project is recovering and managing sources worldwide. The package, denoted the LANL-B, is designed to accommodate the sources within an internal gamma shield. The sources are located either in the IAEA's Long Term Storage Shield (LTSS) or within intact medical or industrial irradiation devices. As the sources are already shielded separately, the package does not include any shielding of its own. A particular challenge in the design of the LANL-B has been weight. Since the LTSS shield weighs approximately 5,000 lb (2,270 kg), and the total package gross weight must be limited to 10,000 lb (4,540 kg), the net weight of the package was limited to 5,000 lb, for an efficiency of 50% (i.e., the payload weight is 50% of the gross weight of the package). This required implementation of a light-weight bell-jar concept, in which the containment takes the form of a vertical bell that is bolted to a base. A single impact limiter is used on the bottom to protect the elastomer seals and bolted joint. A top-end impact is mitigated by the deformation of a torispherically shaped head. Impacts in various orientations on the bottom end are mitigated by a cylindrical, polyurethane foam-filled impact limiter. Internally, energy is absorbed using honeycomb blocks at each end, which fill the torispherical head volumes. As many of the sources are considered to be in normal form, the LANL-B package offers leak-tight containment using an elastomer seal at the joint between the bell and the base, as well as on the

  3. Large-scale fluctuations in the cosmic ionizing background: the impact of beamed source emission

    Science.gov (United States)

    Suarez, Teresita; Pontzen, Andrew

    2017-12-01

    When modelling the ionization of gas in the intergalactic medium after reionization, it is standard practice to assume a uniform radiation background. This assumption is not always appropriate; models with radiative transfer show that large-scale ionization rate fluctuations can have an observable impact on statistics of the Lyman α forest. We extend such calculations to include beaming of sources, which has previously been neglected but which is expected to be important if quasars dominate the ionizing photon budget. Beaming has two effects: first, the physical number density of ionizing sources is enhanced relative to that directly observed; and secondly, the radiative transfer itself is altered. We calculate both effects in a hard-edged beaming model where each source has a random orientation, using an equilibrium Boltzmann hierarchy in terms of spherical harmonics. By studying the statistical properties of the resulting ionization rate and H I density fields at redshift z ∼ 2.3, we find that the two effects partially cancel each other; combined, they constitute a maximum 5 per cent correction to the power spectrum P_{H I}(k) at k = 0.04 h Mpc⁻¹. On very large scales (low k), the effects of beaming should be considered when interpreting future observational data sets.

  4. Simulation of RF power and multi-cusp magnetic field requirement for H{sup −} ion sources

    Energy Technology Data Exchange (ETDEWEB)

    Pathak, Manish [Ion Source Lab., Proton Linac & Superconducting Cavities Division, Raja Ramanna Centre for Advanced Technology, Indore, Madhya Pradesh 452013 (India); Senecha, V.K., E-mail: kumarvsen@gmail.com [Ion Source Lab., Proton Linac & Superconducting Cavities Division, Raja Ramanna Centre for Advanced Technology, Indore, Madhya Pradesh 452013 (India); Homi Bhabha National Institute, Raja Ramanna Centre for Advanced Technology, Indore, Madhya Pradesh 452013 (India); Kumar, Rajnish; Ghodke, Dharmraj V. [Ion Source Lab., Proton Linac & Superconducting Cavities Division, Raja Ramanna Centre for Advanced Technology, Indore, Madhya Pradesh 452013 (India)

    2016-12-01

    A computer simulation study of a multi-cusp RF-based H{sup −} ion source has been carried out using the energy and particle balance equations for an inductively coupled, uniformly dense plasma, considering sheath formation near the boundary wall of the plasma chamber, for an RF ion source used as a high-current injector for the 1 GeV H{sup −} linac project for SNS applications. The average reaction rates for the different reactions responsible for H{sup −} ion production and destruction have been considered in the simulation model. The RF power requirement of the caesium-free H{sup −} ion source for the maximum possible H{sup −} ion beam current has been derived by evaluating the required current and RF voltage fed to the coil antenna using a transformer model for the Inductively Coupled Plasma (ICP). Different parameters of the RF-based H{sup −} ion source, such as the excited hydrogen molecular density, the H{sup −} ion density, and the RF voltage and current of the RF antenna, have been calculated through simulations in the presence and absence of the multicusp magnetic field in order to observe the effect of the multicusp field distinctly. The RF power evaluated for different H{sup −} ion current values has been compared with experimentally reported results, showing reasonably good agreement considering the fact that some RF power is reflected from the plasma medium. The results obtained have helped in understanding the optimum field strength and field-free regions suitable for volume-emission-based H{sup −} ion sources. The compact RF ion source exhibits nearly 6 times better efficiency compared to the large-diameter ion source.
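
    A minimal, volume-averaged power-balance sketch of the kind alluded to in this record is shown below. It estimates the plasma density sustainable for a given absorbed RF power from the standard global-model relation P_abs ≈ e·n_e·u_B·A_eff·E_T, where u_B is the Bohm speed and E_T the total energy lost per electron-ion pair. All numerical values (chamber size, electron temperature, E_T, absorbed power) are illustrative assumptions, not parameters from the paper.

        import numpy as np

        E_CHARGE = 1.602e-19      # C
        M_ION    = 2 * 1.67e-27   # kg, simplified: molecular hydrogen ion mass

        # Illustrative driver/chamber parameters (assumptions, not the paper's values):
        P_abs = 20e3              # absorbed RF power, W
        T_e   = 5.0               # electron temperature, eV
        E_T   = 100.0             # total energy lost per electron-ion pair, eV
        R, Lc = 0.10, 0.20        # chamber radius and length, m

        u_B   = np.sqrt(E_CHARGE * T_e / M_ION)   # Bohm velocity, m/s
        A_eff = 2 * np.pi * R * (R + Lc)          # loss area (simplified: full wall area)

        # Global power balance: P_abs ~ e * n_e * u_B * A_eff * E_T
        n_e = P_abs / (E_CHARGE * E_T * u_B * A_eff)   # m^-3
        print(f"Bohm speed u_B ~ {u_B:.2e} m/s")
        print(f"estimated plasma density n_e ~ {n_e:.2e} m^-3 ({n_e / 1e6:.2e} cm^-3)")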

  5. Attenuation Model Using the Large-N Array from the Source Physics Experiment

    Science.gov (United States)

    Atterholt, J.; Chen, T.; Snelson, C. M.; Mellors, R. J.

    2017-12-01

    The Source Physics Experiment (SPE) consists of a series of chemical explosions at the Nevada National Security Site. SPE seeks to better characterize the influence of subsurface heterogeneities on seismic wave propagation and energy dissipation from explosions. As a part of this experiment, SPE-5, a 5000 kg TNT equivalent chemical explosion, was detonated in 2016. During the SPE-5 experiment, a Large-N array of 996 geophones (half 3-component and half z-component) was deployed. This array covered an area that includes loosely consolidated alluvium (weak rock) and weathered granite (hard rock), and recorded the SPE-5 explosion as well as 53 weight drops. We use these Large-N recordings to develop an attenuation model of the area to better characterize how geologic structures influence source energy partitioning. We found a clear variation in seismic attenuation for different rock types: high attenuation (low Q) for alluvium and low attenuation (high Q) for granite. The attenuation structure correlates well with local geology, and will be incorporated into the large simulation effort of the SPE program to validate predictive models. (LA-UR-17-26382)

  6. Influence of starch source in the required hydrolysis time for the ...

    African Journals Online (AJOL)

    Influence of starch source in the required hydrolysis time for the production of maltodextrins with different dextrose equivalent. José Luis Montañez Soto, Luis Medina García, José Venegas González, Aurea Bernardino Nicanor, Leopoldo González Cruz ...

  7. From system requirements to source code: transitions in UML and RUP

    Directory of Open Access Journals (Sweden)

    Stanisław Wrycza

    2011-06-01

    Full Text Available Among UML-related books there are many manuals explaining the language specification. Only some of them concentrate on the practical aspects of using the UML language effectively with CASE tools and RUP. The current paper presents transitions from the system requirements specification to structural source code, which are useful while developing an information system.
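
    As a simple illustration of the requirements-to-code transition described in this record, the sketch below maps a single hypothetical UML class (its name, attributes and operations are invented for the example, not taken from the paper) onto structural source code.

        from dataclasses import dataclass, field

        # Hypothetical UML class "Order" derived from a requirement such as
        # "the system shall let a customer add items and check out an order":
        #   attributes: orderId: int, items: List[str], total: float
        #   operations: addItem(name, price), checkout(): bool
        @dataclass
        class Order:
            order_id: int
            items: list[str] = field(default_factory=list)
            total: float = 0.0

            def add_item(self, name: str, price: float) -> None:
                """Maps the UML operation addItem(name, price)."""
                self.items.append(name)
                self.total += price

            def checkout(self) -> bool:
                """Maps the UML operation checkout(); True when the order is payable."""
                return bool(self.items) and self.total > 0.0

        order = Order(order_id=1)
        order.add_item("widget", 9.99)
        print(order.checkout())  # True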

  8. 40 CFR 63.11163 - What are the standards and compliance requirements for new sources?

    Science.gov (United States)

    2010-07-01

    ... Nonferrous Metals Area Sources-Zinc, Cadmium, and Beryllium Primary Zinc Production Facilities § 63.11163... of the baghouse and upstream of any wet scrubber. (viii) Where multiple detectors are required, the system's instrumentation and alarm may be shared among detectors. (2) You must develop and submit to the...

  9. Performance analysis on a large scale borehole ground source heat pump in Tianjin cultural centre

    Science.gov (United States)

    Yin, Baoquan; Wu, Xiaoting

    2018-02-01

    In this paper, the temperature distribution of the geothermal field for a vertical borehole ground-coupled heat pump was tested and analysed. Besides the borehole ground-coupled heat pump, the system comprises ice storage, a heat-supply network and a cooling tower. According to operating data covering nearly three years, the constant-temperature zone lies at ground depths of 40 m to 120 m, with a temperature gradient of about 3.0°C/100 m. The soil temperature dropped significantly in the heating season, increased significantly in the cooling season, and recovered in the transitional season. With the energy-balanced design of heating and cooling and the soil's thermal inertia, the soil temperature stayed within a relatively stable range and the ground-source heat pump system operated with relatively high efficiency. The ground-source heat pump was shown to be applicable to large-scale utilization.

  10. Practical sublimation source for large-scale chromium gettering in fusion devices

    Energy Technology Data Exchange (ETDEWEB)

    Simpkins, J E; Gabbard, W A; Emerson, L C; Mioduszewski, P K [Oak Ridge National Lab., TN (USA)

    1984-05-01

    This paper describes the fabrication and testing of a large-scale chromium sublimation source that resembles the VARIAN Ti-Ball(TM) in its design. The device consists of a hollow chromium sphere with a diameter of approximately 3 cm and an incandescent filament for radiation heating from inside the ball. We also discuss the gettering technique utilizing this source. The experimental arrangement consists of an ultrahigh vacuum (UHV) system instrumented for total and partial pressure measurements, a film thickness monitor, thermocouples, an optical pyrometer, and appropriate instrumentation to measure the heating power. The results show the temperature and corresponding sublimation rate of the Cr-ball as functions of input power. In addition, an example of the total pumping speed of a gettered surface is shown.

  11. A practical sublimation source for large-scale chromium gettering in fusion devices

    International Nuclear Information System (INIS)

    Simpkins, J.E.; Gabbard, W.A.; Emerson, L.C.; Mioduszewski, P.K.

    1984-01-01

    This paper describes the fabrication and testing of a large-scale chromium sublimation source that resembles the VARIAN Ti-Ball(TM) in its design. The device consists of a hollow chromium sphere with a diameter of approximately 3 cm and an incandescent filament for radiation heating from inside the ball. We also discuss the gettering technique utilizing this source. The experimental arrangement consists of an ultrahigh vacuum (UHV) system instrumented for total and partial pressure measurements, a film thickness monitor, thermocouples, an optical pyrometer, and appropriate instrumentation to measure the heating power. The results show the temperature and corresponding sublimation rate of the Cr-ball as functions of input power. In addition, an example of the total pumping speed of a gettered surface is shown. (orig.)

  12. Practical sublimation source for large-scale chromium gettering in fusion devices

    International Nuclear Information System (INIS)

    Simpkins, J.E.; Emerson, L.C.; Mioduszewski, P.K.

    1983-01-01

    This paper describes the technique of chromium gettering with a large-scale sublimation source which resembles the VARIAN Ti-Ball in its design. It consists of a hollow chromium sphere with a diameter of approximately 3 cm and an incandescent filament for radiation heating from inside the ball. While the fabrication of the source is described in a companion paper, we discuss here the gettering technique. The experimental arrangement consists of a UHV system instrumented for total- and partial-pressure measurements, a film-thickness monitor, thermocouples, an optical pyrometer, and appropriate instrumentation to measure the heating power. The results show the temperature and corresponding sublimation rate of the Cr-Ball as functions of input power. In addition, an example of the total pumping speed of a gettered surface is shown

  13. Concept of large scale PV-WT-PSH energy sources coupled with the national power system

    Directory of Open Access Journals (Sweden)

    Jurasz Jakub

    2017-01-01

    Full Text Available Intermittent/non-dispatchable energy sources are characterized by a significant variation of their energy yield over time. In the majority of cases their role in energy systems is marginalized. However, even in Poland, which relies strongly on its hard coal and brown coal fired power plants, wind generation is starting to play a significant role in terms of installed capacity. This paper briefly introduces a concept of wind- (WT) and solar- (PV) powered pumped-storage hydroelectricity (PSH), which seems to be a viable option for solving the problem of the variable nature of PV and WT generation. Additionally, we summarize the results of the research we have conducted so far on the integration of variable renewable energy sources (VRES) into energy systems and present conclusions which refer specifically to the prospects of large-scale PV-WT-PSH operating as a part of the Polish energy system.

  14. REQUIREMENTS FOR SYSTEMS DEVELOPMENT LIFE CYCLE MODELS FOR LARGE-SCALE DEFENSE SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kadir Alpaslan DEMIR

    2015-10-01

    Full Text Available Large-scale defense system projects are strategic for maintaining and increasing the national defense capability. Therefore, governments spend billions of dollars in the acquisition and development of large-scale defense systems. The scale of defense systems is always increasing and the costs to build them are skyrocketing. Today, defense systems are software intensive and they are either a system of systems or a part of one. Historically, the project performances observed in the development of these systems have been significantly poor when compared to other types of projects. It is obvious that the currently used systems development life cycle models are insufficient to address today's challenges of building these systems. Using a systems development life cycle model that is specifically designed for large-scale defense system developments and is effective in dealing with today's and near-future challenges will help to improve project performances. The first step in the development of a large-scale defense systems development life cycle model is the identification of requirements for such a model. This paper contributes to the body of literature in the field by providing a set of requirements for systems development life cycle models for large-scale defense systems. Furthermore, a research agenda is proposed.

  15. The tail wags the dog: managing large telescope construction projects with lagging requirements and creeping scope

    Science.gov (United States)

    Warner, Mark

    2014-08-01

    In a perfect world, large telescopes would be developed and built in logical, sequential order. First, scientific requirements would be agreed upon, vetted, and fully developed. From these, instrument designers would define their own subsystem requirements and specifications, and then flesh out preliminary designs. This in turn would allow optic designers to specify lens and mirror requirements, which would permit telescope mounts and drives to be designed. Finally, software and safety systems, enclosures and domes, buildings, foundations, and infrastructure would be specified and developed. Unfortunately, the order of most large telescope projects is the opposite of this sequence. We don't live in a perfect world. Scientists usually don't want to commit to operational requirements until late in the design process, instrument designers frequently change and update their designs due to improving filter and camera technologies, and mount and optics engineers seem to live by the words "more" and "better" throughout their own design processes. Amplifying this is the fact that site construction of buildings and domes is usually among the earliest critical-path items on the schedule, and is often subject to lengthy permitting and environmental processes. These facility and support items therefore must get underway quickly, often before operational requirements are fully considered. Mirrors and mounts also have very long lead times for fabrication, which in turn necessitates that they are specified and purchased early. All of these factors can result in expensive and time-consuming change orders when requirements are finalized and/or shift late in the process. This paper discusses some of these issues encountered on large, multi-year construction projects. It also presents some techniques and ideas to minimize these effects on schedule and cost. Included is a discussion of the role of Interface Control Documents (ICDs), the importance (and danger) of making big

  16. mmpdb: An Open-Source Matched Molecular Pair Platform for Large Multiproperty Data Sets.

    Science.gov (United States)

    Dalke, Andrew; Hert, Jérôme; Kramer, Christian

    2018-05-29

    Matched molecular pair analysis (MMPA) enables the automated and systematic compilation of medicinal chemistry rules from compound/property data sets. Here we present mmpdb, an open-source matched molecular pair (MMP) platform to create, compile, store, retrieve, and use MMP rules. mmpdb is suitable for the large data sets typically found in pharmaceutical and agrochemical companies and provides new algorithms for fragment canonicalization and stereochemistry handling. The platform is written in Python and based on the RDKit toolkit. It is freely available from https://github.com/rdkit/mmpdb .
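
    The core matched-pair idea that the platform automates can be sketched without RDKit: fragment each compound into a (core, substituent) pair, then group records sharing a core so that property differences can be attributed to the substituent swap. The fragment strings and pIC50 values below are invented placeholders, and mmpdb itself performs canonical fragmentation, indexing and statistics far more rigorously than this toy grouping.

        from collections import defaultdict
        from itertools import combinations

        # Invented (core, R-group, property) records standing in for a fragmented
        # compound set; in mmpdb these would come from canonical RDKit fragmentation.
        records = [
            ("c1ccccc1[*]", "F",   6.2),   # core with attachment point, substituent, pIC50
            ("c1ccccc1[*]", "Cl",  6.8),
            ("c1ccccc1[*]", "OMe", 5.9),
            ("C1CCNCC1[*]", "F",   7.1),
            ("C1CCNCC1[*]", "Cl",  7.6),
        ]

        by_core = defaultdict(list)
        for core, rgroup, prop in records:
            by_core[core].append((rgroup, prop))

        # Every pair of compounds sharing a core is a matched molecular pair; the
        # property delta is the transformation "rule" that MMPA compiles across a data set.
        for core, members in by_core.items():
            for (r1, p1), (r2, p2) in combinations(members, 2):
                print(f"{r1:>3} >> {r2:<3}  core={core}  delta_pIC50={p2 - p1:+.1f}")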

  17. Dye molecules as single-photon sources and large optical nonlinearities on a chip

    International Nuclear Information System (INIS)

    Hwang, J; Hinds, E A

    2011-01-01

    We point out that individual organic dye molecules, deposited close to optical waveguides on a photonic chip, can act as single-photon sources. A thin silicon nitride strip waveguide is expected to collect 28% of the photons from a single dibenzoterrylene molecule. These molecules can also provide large, localized optical nonlinearities, which are enough to discriminate between one photon or two through a differential phase shift of 2 0 per photon. This new atom-photon interface may be used as a resource for processing quantum information.

  18. Physics-Based Hazard Assessment for Critical Structures Near Large Earthquake Sources

    Science.gov (United States)

    Hutchings, L.; Mert, A.; Fahjan, Y.; Novikova, T.; Golara, A.; Miah, M.; Fergany, E.; Foxall, W.

    2017-09-01

    We argue that for critical structures near large earthquake sources: (1) the ergodic assumption, recent history, and simplified descriptions of the hazard are not appropriate to rely on for earthquake ground motion prediction and can lead to a mis-estimation of the hazard and risk to structures; (2) a physics-based approach can address these issues; (3) a physics-based source model must be provided to generate realistic phasing effects from finite rupture and to model near-source ground motion correctly; (4) wave propagation and site response should be site specific; (5) a much wider search of possible sources of ground motion can be achieved computationally with a physics-based approach; (6) unless one utilizes a physics-based approach, the hazard and risk to structures has unknown uncertainties; (7) uncertainties can be reduced with a physics-based approach, but not with an ergodic approach; (8) computational power and computer codes have advanced to the point that risk to structures can be calculated directly from source and site-specific ground motions. Spanning the variability of potential ground motion in a predictive situation is especially difficult for near-source areas, but that is the distance at which the hazard is the greatest. The basis of a "physics-based" approach is ground-motion synthesis derived from physics and an understanding of the earthquake process. This is an overview paper and results from previous studies are used to make the case for these conclusions. Our premise is that 50 years of strong motion records is insufficient to capture all possible ranges of site and propagation path conditions, rupture processes, and spatial geometric relationships between source and site. Predicting future earthquake scenarios is necessary; models that have little or no physical basis but have been tested and adjusted to fit available observations can only "predict" what happened in the past, which should be considered description as opposed to prediction

  19. Using mobile source emission reductions to offset stationary source rule requirements

    International Nuclear Information System (INIS)

    Nazemi, M.A.; Beruldsen, K.J.

    1993-01-01

    A number of mobile source strategies have been evaluated that could potentially be used as an alternative means of compliance with existing stationary source regulations, at a lower cost. The evaluation was spurred by both public and private sector interest in identifying the lowest-cost air pollution reduction strategies, and by the realization that mobile sources are the predominant contributor to the air pollution problem in the South Coast Air Quality Basin. Strategies evaluated included removing older vehicles from the in-use population, use of alternative fuels, inspection and maintenance measures, application of remote sensing technology, exceeding AVR requirements, as well as a number of other strategies. Key implementation issues have been identified so that the viability of each mobile source strategy could be assessed. These issues include: (1) quantification of emissions benefits, (2) determining whether the mobile source strategy would generate emission reductions surplus to existing and planned mobile source regulations, and (3) assessing the potential for enforceability. The results of the evaluation indicate that there are a number of promising mobile source emission strategies that could provide quantifiable, surplus, and enforceable emission reductions

  20. Design, development and integration of a large scale multiple source X-ray computed tomography system

    International Nuclear Information System (INIS)

    Malcolm, Andrew A.; Liu, Tong; Ng, Ivan Kee Beng; Teng, Wei Yuen; Yap, Tsi Tung; Wan, Siew Ping; Kong, Chun Jeng

    2013-01-01

    X-ray Computed Tomography (CT) allows visualisation of the physical structures in the interior of an object without physically opening or cutting it. This technology supports a wide range of applications in the non-destructive testing, failure analysis or performance evaluation of industrial products and components. Of the numerous factors that influence the performance characteristics of an X-ray CT system, the energy level in the X-ray spectrum to be used is one of the most significant. The ability of the X-ray beam to penetrate a given thickness of a specific material is directly related to the maximum available energy level in the beam. Higher energy levels allow penetration of thicker components made of denser materials. In response to local industry demand, and in support of ongoing research activity in the area of 3D X-ray imaging for industrial inspection, the Singapore Institute of Manufacturing Technology (SIMTech) engaged in the design, development and integration of a large-scale multiple-source X-ray computed tomography system based on X-ray sources operating at higher energies than previously available in the Institute. The system consists of a large-area direct digital X-ray detector (410 x 410 mm), a multiple-axis manipulator system, a 225 kV open-tube microfocus X-ray source and a 450 kV closed-tube millifocus X-ray source. The 225 kV X-ray source can be operated in either transmission or reflection mode. The body of the 6-axis manipulator system is fabricated from heavy-duty steel onto which high-precision linear and rotary motors have been mounted in order to achieve high accuracy, stability and repeatability. A source-detector distance of up to 2.5 m can be achieved. The system is controlled by a proprietary X-ray CT operating system developed by SIMTech. The system currently can accommodate samples up to 0.5 x 0.5 x 0.5 m in size with weight up to 50 kg. These specifications will be increased to 1.0 x 1.0 x 1.0 m and 100 kg in the future

  1. Very broad beam metal ion source for large area ion implantation application

    International Nuclear Information System (INIS)

    Brown, I.; Anders, S.; Dickinson, M.R.; MacGill, R.A.; Yao, X.

    1993-01-01

    The authors have made and operated a very broad beam version of the vacuum arc ion source and used it to carry out high-energy metal ion implantation of a particularly large substrate. A multiple-cathode vacuum arc plasma source was coupled to a 50 cm diameter beam extractor (multiple aperture, accel-decel configuration) operated at a net extraction voltage of up to 50 kV. The metal ion species chosen were Ni and Ta. The mean ion charge state for Ni and Ta vacuum arc plasmas is 1.8 and 2.9, respectively, and so the mean ion energies were up to about 90 and 145 keV, respectively. The ion source was operated in a repetitively pulsed mode with pulse length 250 μs and a repetition rate of several pulses per second. The extracted beam had a gaussian profile with FWHM about 35 cm, giving a nominal beam area of about 1,000 cm². The current of Ni or Ta metal ions in the beam was up to several amperes. The targets for the ion implantation were a number of 24-inch long, highly polished Cu rails from an electromagnetic rail gun. The rails were located about 80 cm away from the ion source extractor grids, and were moved across a diameter of the vessel in such a way as to maximize the uniformity of the implant along the rail. The saturation retained dose for Ta was limited to about 4 × 10¹⁶ cm⁻² because of the rather severe sputtering, in accordance with theoretical expectations for these implantation conditions. Here they describe the ion source, the implantation procedure, and the kinds of implants that can be produced in this way

  2. Seismic Imaging of the Source Physics Experiment Site with the Large-N Seismic Array

    Science.gov (United States)

    Chen, T.; Snelson, C. M.; Mellors, R. J.

    2017-12-01

    The Source Physics Experiment (SPE) consists of a series of chemical explosions at the Nevada National Security Site. The goal of SPE is to understand seismic wave generation and propagation from these explosions. To achieve this goal, we need an accurate geophysical model of the SPE site. A Large-N seismic array that was deployed at the SPE site during one of the chemical explosions (SPE-5) helps us construct a high-resolution local geophysical model. The Large-N seismic array consists of 996 geophones and covers an area of approximately 2 × 2.5 km. The array is located at the northern end of the Yucca Flat basin, at a transition from Climax Stock (granite) to Yucca Flat (alluvium). In addition to the SPE-5 explosion, the Large-N array also recorded 53 weight drops. Using the Large-N seismic array recordings, we perform body-wave and surface-wave velocity analysis and obtain 3D seismic images of the SPE site for the top approximately 1 km of the crust. The imaging results show clear variation of geophysical parameters with local geological structure, including a heterogeneous weathering layer and various rock types. The results of this work are being incorporated into the larger 3D modeling effort of the SPE program to validate the predictive models developed for the site.

  3. Optimization of the plasma parameters for the high current and uniform large-scale pulse arc ion source of the VEST-NBI system

    International Nuclear Information System (INIS)

    Jung, Bongki; Park, Min; Heo, Sung Ryul; Kim, Tae-Seong; Jeong, Seung Ho; Chang, Doo-Hee; Lee, Kwang Won; In, Sang-Ryul

    2016-01-01

    Highlights: • A high-power magnetic bucket-type arc plasma source for the VEST NBI system is developed with modifications based on the prototype plasma source for KSTAR. • Plasma parameters during the pulse are measured to characterize the plasma source. • High plasma density and good uniformity are achieved at a low operating pressure below 1 Pa. • The required ion beam current density is confirmed by analysis of the plasma parameters and the results of a particle balance model. - Abstract: A large-scale hydrogen arc plasma source was developed at the Korea Atomic Energy Research Institute for the high-power pulsed NBI system of VEST, which is a compact spherical tokamak at Seoul National University. One of the research targets of VEST is to study innovative tokamak operating scenarios. For this purpose, a high-current-density, uniform, large-scale pulsed plasma source is required to satisfy the target ion beam power efficiently. Therefore, the plasma parameters of the ion source, such as the electron density, temperature, and plasma uniformity, are optimized by changing the operating conditions of the plasma source. Furthermore, the ion species of the hydrogen plasma source are analyzed using a particle balance model in order to increase the monatomic fraction, which is another essential parameter for increasing the ion beam current density. In conclusion, efficient operating conditions are presented based on the optimized plasma parameters, and the extractable ion beam current is calculated.

  4. NOx emissions from large point sources: variability in ozone production, resulting health damages and economic costs

    International Nuclear Information System (INIS)

    Mauzerall, D.L.; Namsoug Kim

    2005-01-01

    We present a proof-of-concept analysis of the measurement of the health damage of ozone (O₃) produced from nitrogen oxides (NOx = NO + NO₂) emitted by individual large point sources in the eastern United States. We use a regional atmospheric model of the eastern United States, the Comprehensive Air quality Model with Extensions (CAMx), to quantify the variable impact that a fixed quantity of NOx emitted from individual sources can have on the downwind concentration of surface O₃, depending on temperature and local biogenic hydrocarbon emissions. We also examine the dependence of resulting O₃-related health damages on the size of the exposed population. The investigation is relevant to the increasingly widely used 'cap and trade' approach to NOx regulation, which presumes that shifts of emission over time and space, holding the total fixed over the course of the summer O₃ season, will have minimal effect on the environmental outcome. By contrast, we show that a shift of a unit of NOx emissions from one place or time to another could result in large changes in resulting health effects due to O₃ formation and exposure. We indicate how the type of modeling carried out here might be used to attach externality-correcting prices to emissions. Charging emitters fees that are commensurate with the damage caused by their NOx emissions would create an incentive for emitters to reduce emissions at times and in locations where they cause the largest damage. (author)

  5. Honeycomblike large area LaB6 plasma source for Multi-Purpose Plasma facility

    International Nuclear Information System (INIS)

    Woo, Hyun-Jong; Chung, Kyu-Sun; You, Hyun-Jong; Lee, Myoung-Jae; Lho, Taihyeop; Choh, Kwon Kook; Yoon, Jung-Sik; Jung, Yong Ho; Lee, Bongju; Yoo, Suk Jae; Kwon, Myeon

    2007-01-01

    A Multi-Purpose Plasma (MP²) facility has been renovated from the Hanbit mirror device [Kwon et al., Nucl. Fusion 43, 686 (2003)] by adopting the same philosophy as the diversified plasma simulator (DiPS) [Chung et al., Contrib. Plasma Phys. 46, 354 (2006)]: installing two plasma sources, a LaB₆ (dc) and a helicon (rf) plasma source, and creating three distinct simulators: a divertor plasma simulator, a space propulsion simulator, and an astrophysics simulator. During the first renovation stage, a honeycomb-like large-area LaB₆ (HLA-LaB₆) cathode was developed for the divertor plasma simulator to improve resistance against thermal-shock fragility for large, high-density plasma generation. The HLA-LaB₆ cathode is composed of one inner cathode of 4 in. diameter and six outer cathodes of 2 in. diameter, along with separate graphite heaters. The first plasma is generated with Ar gas and its properties are measured by electric probes for various discharge currents and magnetic field configurations. The plasma density at the middle of the central cell reaches up to 2.6 × 10¹² cm⁻³, while the electron temperature remains around 3-3.5 eV at a low discharge current of less than 45 A and a magnetic field intensity of 870 G. Unique features of the electrical properties of the heaters and of the plasma density profiles are explained by comparison with those of a single LaB₆ cathode of 4 in. diameter in DiPS

  6. Fast and accurate detection of spread source in large complex networks.

    Science.gov (United States)

    Paluch, Robert; Lu, Xiaoyan; Suchecki, Krzysztof; Szymański, Bolesław K; Hołyst, Janusz A

    2018-02-06

    Spread over complex networks is a ubiquitous process with increasingly wide applications. Locating spread sources is often important, e.g. finding the first patient in an epidemic, or the source of rumor spreading in a social network. Pinto, Thiran and Vetterli introduced an algorithm (PTVA) to solve the important case of this problem in which a limited set of nodes act as observers and report the times at which the spread reached them. PTVA uses all observers to find a solution. Here we propose a new approach in which observers with low-quality information (i.e. with large spread encounter times) are ignored and potential sources are selected based on the likelihood gradient from high-quality observers. The original complexity of PTVA is O(N^α), where α ∈ (3,4) depends on the network topology and the number of observers (N denotes the number of nodes in the network). Our Gradient Maximum Likelihood Algorithm (GMLA) reduces this complexity to O(N² log N). Extensive numerical tests performed on synthetic networks and the real Gnutella network, with the limitation that the identities of spreaders are unknown to observers, demonstrate that for scale-free networks under this limitation GMLA yields higher-quality localization results than PTVA does.
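
    The observer-based localization idea can be illustrated with a deliberately simplified stand-in (it is neither PTVA nor GMLA): score every candidate node by how consistent the reported arrival times are with shortest-path travel times from that node, and keep the most consistent candidate. The graph, spread speed and observer choice below are synthetic assumptions.

        import networkx as nx

        def locate_source(G, observer_times, speed=1.0):
            """Toy source localization: for each candidate node, compare observed
            arrival times with shortest-path travel times from that node and
            return the candidate with the lowest variance of the offsets."""
            best_node, best_score = None, float("inf")
            for candidate in G.nodes:
                dist = nx.shortest_path_length(G, source=candidate)
                if any(obs not in dist for obs in observer_times):
                    continue  # candidate cannot reach every observer
                offsets = [t - dist[obs] / speed for obs, t in observer_times.items()]
                mean = sum(offsets) / len(offsets)
                score = sum((o - mean) ** 2 for o in offsets)
                if score < best_score:
                    best_node, best_score = candidate, score
            return best_node

        # Synthetic example: the spread starts at node 0 and the five nearest
        # (highest-quality, i.e. earliest-reached) observers report hop-count times.
        G = nx.erdos_renyi_graph(50, 0.08, seed=1)
        true_dist = nx.shortest_path_length(G, source=0)
        observers = sorted(true_dist, key=true_dist.get)[1:6]
        observer_times = {obs: float(true_dist[obs]) for obs in observers}
        print("estimated source:", locate_source(G, observer_times))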

  7. Neural ensemble communities: Open-source approaches to hardware for large-scale electrophysiology

    Science.gov (United States)

    Siegle, Joshua H.; Hale, Gregory J.; Newman, Jonathan P.; Voigts, Jakob

    2014-01-01

    One often-overlooked factor when selecting a platform for large-scale electrophysiology is whether or not a particular data acquisition system is “open” or “closed”: that is, whether or not the system’s schematics and source code are available to end users. Open systems have a reputation for being difficult to acquire, poorly documented, and hard to maintain. With the arrival of more powerful and compact integrated circuits, rapid prototyping services, and web-based tools for collaborative development, these stereotypes must be reconsidered. We discuss some of the reasons why multichannel extracellular electrophysiology could benefit from open-source approaches and describe examples of successful community-driven tool development within this field. In order to promote the adoption of open-source hardware and to reduce the need for redundant development efforts, we advocate a move toward standardized interfaces that connect each element of the data processing pipeline. This will give researchers the flexibility to modify their tools when necessary, while allowing them to continue to benefit from the high-quality products and expertise provided by commercial vendors. PMID:25528614

  8. Mining the mind research network: a novel framework for exploring large scale, heterogeneous translational neuroscience research data sources.

    Directory of Open Access Journals (Sweden)

    Henry Jeremy Bockholt

    2010-04-01

    Full Text Available A neuroinformatics (NI) system is critical to brain imaging research in order to shorten the time between study conception and results. Such a NI system is required to scale well when large numbers of subjects are studied. Further, when multiple sites participate in research projects, organizational issues become increasingly difficult. Optimized NI applications mitigate these problems. Additionally, NI software enables coordination across multiple studies, leveraging advantages potentially leading to exponential research discoveries. The web-based Mind Research Network (MRN) database system has been designed and improved through our experience with 200 research studies and 250 researchers from 7 different institutions. The MRN tools permit the collection, management, reporting and efficient use of large-scale, heterogeneous data sources, e.g., multiple institutions, multiple principal investigators, multiple research programs and studies, and multimodal acquisitions. We have collected and analyzed data sets on thousands of research participants and have set up a framework to automatically analyze the data, thereby making efficient, practical data mining of this vast resource possible. This paper presents a comprehensive framework for capturing and analyzing heterogeneous neuroscience research data sources that has been fully optimized for end-users to perform novel data mining.

  9. Mining the Mind Research Network: A Novel Framework for Exploring Large Scale, Heterogeneous Translational Neuroscience Research Data Sources

    Science.gov (United States)

    Bockholt, Henry J.; Scully, Mark; Courtney, William; Rachakonda, Srinivas; Scott, Adam; Caprihan, Arvind; Fries, Jill; Kalyanam, Ravi; Segall, Judith M.; de la Garza, Raul; Lane, Susan; Calhoun, Vince D.

    2009-01-01

    A neuroinformatics (NI) system is critical to brain imaging research in order to shorten the time between study conception and results. Such a NI system is required to scale well when large numbers of subjects are studied. Further, when multiple sites participate in research projects organizational issues become increasingly difficult. Optimized NI applications mitigate these problems. Additionally, NI software enables coordination across multiple studies, leveraging advantages potentially leading to exponential research discoveries. The web-based, Mind Research Network (MRN), database system has been designed and improved through our experience with 200 research studies and 250 researchers from seven different institutions. The MRN tools permit the collection, management, reporting and efficient use of large scale, heterogeneous data sources, e.g., multiple institutions, multiple principal investigators, multiple research programs and studies, and multimodal acquisitions. We have collected and analyzed data sets on thousands of research participants and have set up a framework to automatically analyze the data, thereby making efficient, practical data mining of this vast resource possible. This paper presents a comprehensive framework for capturing and analyzing heterogeneous neuroscience research data sources that has been fully optimized for end-users to perform novel data mining. PMID:20461147

  10. Dietary items as possible sources of {sup 137}Cs in large carnivores in the Gorski Kotar forest ecosystem, Western Croatia

    Energy Technology Data Exchange (ETDEWEB)

    Šprem, Nikica, E-mail: nsprem@agr.hr [University of Zagreb, Faculty of Agriculture, Department of Fisheries, Beekeeping, Game Management and Special Zoology, Svetošimunska cesta 25, 10000 Zagreb (Croatia); Piria, Marina; Barišić, Domagoj [University of Zagreb, Faculty of Agriculture, Department of Fisheries, Beekeeping, Game Management and Special Zoology, Svetošimunska cesta 25, 10000 Zagreb (Croatia); Kusak, Josip [University of Zagreb, Veterinary Faculty, Department of Biology, Heinzelova 55, 10000 Zagreb (Croatia); Barišić, Delko [Laboratory for Radioecology, Centre for Marine and Environmental Research, Ruđer Bošković Institute, PO Box 160, Bijenička 54, 10002 Zagreb (Croatia)

    2016-01-15

    The mountain forest ecosystem of Gorski Kotar is distant from any significant sources of environmental pollution, though recent findings have revealed that this region is among the most intensely {sup 137}Cs-contaminated areas in Croatia. Therefore, the aim of this study was to investigate the {sup 137}Cs and {sup 40}K load in three large predator species in this mountain forest ecosystem. Radionuclide mass activities were determined by the gamma-spectrometric method in the muscle tissue of brown bear (47), wolf (7), lynx (1) and golden jackal (2). The highest {sup 137}Cs mass activity was found in lynx (153 Bq kg{sup −1}), followed by brown bear (132 Bq kg{sup −1}), wolf (22.2 Bq kg{sup −1}), and golden jackal (2.48 Bq kg{sup −1}). Analysis of 63 samples of dietary items suggests that not all of them are potentially dominant sources of {sup 137}Cs for wildlife. The most important sources of radionuclides for the higher parts of the food chain in the study area were found to be the mushroom wood hedgehog (Hydnum repandum), with a transfer factor (TF) of 5.166, and the blueberry (Vaccinium myrtillus) among plant species (TF = 2.096). Food items of animal origin indicated higher mass activity of radionuclides and are therefore possible moderate bioindicators of environmental pollution. The results also revealed that unknown wild animal food sources are a possible caesium source in the study region, and further study is required to illuminate this issue. - Highlights: • Radionuclide mass activities were determined by the gamma-spectrometric method. • The highest {sup 137}Cs mass activities were 132 Bq kg{sup −1} in brown bear, 22.2 Bq kg{sup −1} in wolf and 153 Bq kg{sup −1} in lynx. • The best bioindicators are the wood hedgehog (TF = 5.166) and blueberry (TF = 2.096).

  11. Anthropogenic Methane Emissions in California's San Joaquin Valley: Characterizing Large Point Source Emitters

    Science.gov (United States)

    Hopkins, F. M.; Duren, R. M.; Miller, C. E.; Aubrey, A. D.; Falk, M.; Holland, L.; Hook, S. J.; Hulley, G. C.; Johnson, W. R.; Kuai, L.; Kuwayama, T.; Lin, J. C.; Thorpe, A. K.; Worden, J. R.; Lauvaux, T.; Jeong, S.; Fischer, M. L.

    2015-12-01

    Methane is an important atmospheric pollutant that contributes to global warming and tropospheric ozone production. Methane mitigation could reduce near-term climate change and improve air quality, but is hindered by a lack of knowledge of anthropogenic methane sources. Recent work has shown that methane emissions are not evenly distributed in space, or across emission sources, suggesting that a large fraction of anthropogenic methane comes from a few "super-emitters." We studied the distribution of super-emitters in California's southern San Joaquin Valley, where elevated levels of atmospheric CH4 have also been observed from space. Here, we define super-emitters as methane plumes that could be reliably detected (i.e., a plume observed more than once in the same location) under varying wind conditions by airborne thermal infrared remote sensing. The detection limit for this technique was determined to be 4.5 kg CH4 h-1 by a controlled release experiment, corresponding to a column methane enhancement at the point of emission greater than 20% above local background levels. We surveyed a major oil production field and an area with a high concentration of large dairies using a variety of airborne and ground-based measurements. Repeated airborne surveys (n=4) with the Hyperspectral Thermal Emission Spectrometer revealed 28 persistent methane plumes emanating from oil field infrastructure, including tanks, wells, and processing facilities. The likelihood that a given source type was a super-emitter varied from roughly 1/3 for processing facilities to 1/3000 for oil wells. Eleven persistent plumes were detected in the dairy area, and all were associated with wet manure management. The majority (11/14) of manure lagoons in the study area were super-emitters. Comparing to a California methane emissions inventory for the surveyed areas, we estimate that super-emitters comprise a minimum of 9% of inventoried dairy emissions and 13% of inventoried oil emissions in this region.

  12. Transmission-grid requirements with scattered and fluctuating renewable electricity-sources

    DEFF Research Database (Denmark)

    Østergaard, Poul Alberg

    2003-01-01

    The article analyses the requirements on the transmission grids in a year-2020 situation with power balancing (matching production and consumption) performed, as it is now, by the few large power plants, and in a year-2020 situation with geographically scattered power balancing using e.g. CHP plants, heat pumps...

  13. Pulse power requirements for large aperture optical switches based on plasma electrode Pockels cells

    International Nuclear Information System (INIS)

    Rhodes, M.A.; Taylor, J.

    1992-06-01

    We discuss very large-aperture optical switches (greater than 30 x 30 cm) as an enabling technology for inertial confinement fusion drivers based on multipass laser amplifiers. Large-scale laser fusion drivers such as the Nova laser have been based on single-pass amplifier designs in part because of the unavailability of a suitable large-aperture switch. We are developing an optical switch based on a Pockels cell employing plasma electrodes. A plasma-electrode Pockels cell (PEPC) is a longitudinal-mode Pockels cell in which a plasma discharge is formed on each side of an electro-optic crystal (typically KDP or deuterated KDP, often designated KD*P). The plasmas formed on either side of the crystal act as transparent electrodes for a switching pulse and are intended to allow uniform charging of the entire crystal. The switching pulse is a nominally rectangular high-voltage pulse equal to the half-wave voltage V_π (about 8 kV for KD*P or 17 kV for KDP) and is applied across the crystal via the plasma electrodes. When the crystal is charged to V_π, the polarization of an incoming, linearly polarized laser beam is rotated by 90 degrees. When used in conjunction with an appropriate, passive polarizer, an optical switch is thus realized. A switch with a clear aperture of 37 x 37 cm is now under construction for the Beamlet laser, which will serve as a test bed for this switch as well as other technologies required for an advanced Nova laser design. In this paper, we discuss the unique power electronics requirements of PEPC optical switches
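
    To illustrate why the switching pulse must reach the half-wave voltage, the sketch below evaluates the standard longitudinal Pockels-cell relations: retardation Γ = π·V/V_π and transmission through a crossed polarizer T = sin²(Γ/2). The V_π value used is only the approximate KD*P figure quoted above; everything else is generic textbook physics rather than the Beamlet design.

        import numpy as np

        V_PI = 8.0e3  # half-wave voltage, V (approximate KD*P figure quoted above)

        def switch_transmission(V):
            """Longitudinal Pockels cell between crossed polarizers:
            retardation Gamma = pi * V / V_pi, transmitted fraction T = sin^2(Gamma / 2)."""
            gamma = np.pi * V / V_PI
            return np.sin(gamma / 2.0) ** 2

        for V in (0.0, 0.5 * V_PI, 0.9 * V_PI, V_PI):
            print(f"V = {V / 1e3:4.1f} kV -> T = {switch_transmission(V):.3f}")
        # Only at the full half-wave voltage is the polarization rotated by 90 degrees
        # and the beam switched out with (ideally) 100% efficiency.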

  14. Comparison of radiation shielding requirements for HDR brachytherapy using 169Yb and 192Ir sources

    International Nuclear Information System (INIS)

    Lymperopoulou, G.; Papagiannis, P.; Sakelliou, L.; Georgiou, E.; Hourdakis, C. J.; Baltas, D.

    2006-01-01

    169Yb has received a renewed focus lately as an alternative to 192Ir sources for high dose rate (HDR) brachytherapy. Following the results of a recent work by our group which proved 169Yb to be a good candidate for HDR prostate brachytherapy, this work seeks to quantify the radiation shielding requirements for 169Yb HDR brachytherapy applications in comparison to the corresponding requirements for the current 192Ir HDR brachytherapy standard. Monte Carlo simulation (MC) is used to obtain 169Yb and 192Ir broad beam transmission data through lead and concrete. Results are fitted to an analytical equation which can be used to readily calculate the barrier thickness required to achieve a given dose rate reduction. Shielding requirements for a HDR brachytherapy treatment room facility are presented as a function of distance, occupancy, dose limit, and facility workload, using analytical calculations for both 169Yb and 192Ir HDR sources. The barrier thickness required for 169Yb is lower than that for 192Ir by a factor of 4-5 for lead and 1.5-2 for concrete. Regarding 169Yb HDR brachytherapy applications, the lead shielding requirements do not exceed 15 mm, even in highly conservative case scenarios. This allows for the construction of a lead door in most cases, thus avoiding the construction of a space-consuming, specially designed maze. The effects of source structure, attenuation by the patient, and scatter conditions within an actual treatment room on the above-noted findings are also discussed using corresponding MC simulation results
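
    Broad-beam transmission fits of the kind mentioned in this record are commonly expressed with the three-parameter Archer model, B(x) = [(1 + β/α)·e^(αγx) − β/α]^(−1/γ), which can be inverted analytically for the barrier thickness that achieves a required transmission B. The sketch below shows that inversion with placeholder α, β, γ values, since the paper's fitted coefficients for 169Yb and 192Ir are not reproduced here.

        import math

        def archer_transmission(x_cm, alpha, beta, gamma):
            """Broad-beam transmission B(x) = [(1 + b/a) * exp(a*g*x) - b/a]**(-1/g)."""
            r = beta / alpha
            return ((1.0 + r) * math.exp(alpha * gamma * x_cm) - r) ** (-1.0 / gamma)

        def barrier_thickness(B_required, alpha, beta, gamma):
            """Invert the Archer fit for the thickness giving transmission B_required."""
            r = beta / alpha
            return math.log((B_required ** (-gamma) + r) / (1.0 + r)) / (alpha * gamma)

        # Placeholder fit coefficients (per cm of shielding material) -- NOT the
        # paper's fitted values for 169Yb or 192Ir in lead or concrete.
        alpha, beta, gamma = 2.0, 1.5, 0.9
        B_target = 1e-3                       # required dose-rate reduction factor
        x = barrier_thickness(B_target, alpha, beta, gamma)
        print(f"thickness for B = {B_target:g}: {x:.2f} cm "
              f"(check: B(x) = {archer_transmission(x, alpha, beta, gamma):.1e})")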

  15. An examination of source material requirements contained in 10 CFR Part 40

    International Nuclear Information System (INIS)

    Nussbaumer, D.; Smith, D.A.; Wiblin, C.

    1992-10-01

    This report identifies issues for consideration for rule-making to update the requirements for source material in 10 CFR Part 40 and examines options for resolving these issues. The contemplated rulemaking is intended to update 10 CFR Part 40 to reflect current radiation protection principles and regulatory practices. It is expected that such an update would make requirements for the control of source material more comparable to those pertaining to byproduct material contained in 10 CFR Part 30. The newer biological data and dose calculation methodology reflected in revised 10 CFR Part 20 will be used in analyses of potential regulatory amendments. This report presents historical background information and discussion on the various issues identified and makes preliminary recommendations concerning needed regulatory changes and approaches to rulemaking

  16. The requirements for low-temperature plasma ionization support miniaturization of the ion source.

    Science.gov (United States)

    Kiontke, Andreas; Holzer, Frank; Belder, Detlev; Birkemeyer, Claudia

    2018-06-01

    Ambient ionization mass spectrometry (AI-MS), the ionization of samples under ambient conditions, enables fast and simple analysis of samples without or with little sample preparation. Due to their simple construction and low resource consumption, plasma-based ionization methods in particular are considered ideal for use in mobile analytical devices. However, systematic investigations that have attempted to identify the optimal configuration of a plasma source to achieve the sensitive detection of target molecules are still rare. We therefore used a low-temperature plasma ionization (LTPI) source based on dielectric barrier discharge with helium employed as the process gas to identify the factors that most strongly influence the signal intensity in the mass spectrometry of species formed by plasma ionization. In this study, we investigated several construction-related parameters of the plasma source and found that a low wall thickness of the dielectric, a small outlet spacing, and a short distance between the plasma source and the MS inlet are needed to achieve optimal signal intensity with a process-gas flow rate of as little as 10 mL/min. In conclusion, this type of ion source is especially well suited for downscaling, which is usually required in mobile devices. Our results provide valuable insights into the LTPI mechanism; they reveal the potential to further improve its implementation and standardization for mobile mass spectrometry as well as our understanding of the requirements and selectivity of this technique. Graphical abstract Optimized parameters of a dielectric barrier discharge plasma for ionization in mass spectrometry. The electrode size, shape, and arrangement, the thickness of the dielectric, and distances between the plasma source, sample, and MS inlet are marked in red. The process gas (helium) flow is shown in black.

  17. 20 CFR 416.919n - Informing the medical source of examination scheduling, report content, and signature requirements.

    Science.gov (United States)

    2010-04-01

    ... scheduling, report content, and signature requirements. 416.919n Section 416.919n Employees' Benefits SOCIAL... medical source of examination scheduling, report content, and signature requirements. The medical sources... report containing all of the elements in paragraph (c). (e) Signature requirements. All consultative...

  18. 20 CFR 404.1519n - Informing the medical source of examination scheduling, report content, and signature requirements.

    Science.gov (United States)

    2010-04-01

    ... scheduling, report content, and signature requirements. 404.1519n Section 404.1519n Employees' Benefits... medical source of examination scheduling, report content, and signature requirements. The medical sources... report containing all of the elements in paragraph (c). (e) Signature requirements. All consultative...

  19. Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Wasserman, Harvey

    2011-03-31

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility supporting research within the Department of Energy's Office of Science. NERSC provides high-performance computing (HPC) resources to approximately 4,000 researchers working on about 400 projects. In addition to hosting large-scale computing facilities, NERSC provides the support and expertise scientists need to effectively and efficiently use HPC systems. In February 2010, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Basic Energy Sciences (BES) held a workshop to characterize HPC requirements for BES research through 2013. The workshop was part of NERSC's legacy of anticipating users' future needs and deploying the necessary resources to meet these demands. Workshop participants reached a consensus on several key findings, in addition to achieving the workshop's goal of collecting and characterizing computing requirements. The key requirements for scientists conducting research in BES are: (1) larger allocations of computational resources; (2) continued support for standard application software packages; (3) adequate job turnaround time and throughput; and (4) guidance and support for using future computer architectures. This report expands upon these key points and presents others. Several 'case studies' are included as significant representative samples of the needs of science teams within BES. Research teams' scientific goals, computational methods of solution, current and 2013 computing requirements, and special software and support needs are summarized in these case studies. Also included are researchers' strategies for computing in the highly parallel, 'multi-core' environment that is expected to dominate HPC architectures over the next few years. NERSC has strategic plans and initiatives already underway that address key workshop findings. This report includes a

  20. Large-Eddy Simulation of Chemically Reactive Pollutant Transport from a Point Source in Urban Area

    Science.gov (United States)

    Du, Tangzheng; Liu, Chun-Ho

    2013-04-01

    Most air pollutants are chemically reactive, so using an inert scalar as the tracer in pollutant dispersion modelling would often overlook their impact on urban inhabitants. In this study, large-eddy simulation (LES) is used to examine the plume dispersion of chemically reactive pollutants in a hypothetical atmospheric boundary layer (ABL) in neutral stratification. The irreversible chemistry mechanism of ozone (O3) titration is integrated into the LES model. Nitric oxide (NO) is emitted from an elevated point source into a rectangular spatial domain doped with O3. The LES results compare well with the wind-tunnel results available in the literature. Afterwards, the LES model is applied to idealized two-dimensional (2D) street canyons of unity aspect ratio to study the behaviour of chemically reactive plumes over idealized urban roughness. The relations among the various reaction and turbulence time scales and the associated dimensionless numbers are analysed.
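
    As a minimal illustration of the titration chemistry coupled into the LES, the sketch below integrates the single irreversible reaction NO + O3 -> NO2 + O2 in a well-mixed box with no transport, using an assumed rate constant of about 2 × 10⁻¹⁴ cm³ molecule⁻¹ s⁻¹ near room temperature and arbitrary initial concentrations; it is a box-model toy, not the study's LES chemistry module.

        # Well-mixed box, single irreversible reaction NO + O3 -> NO2 + O2.
        K  = 2.0e-14          # assumed rate constant, cm^3 molecule^-1 s^-1 (~298 K)
        DT = 0.1              # time step, s
        NO, O3, NO2 = 2.5e12, 1.0e12, 0.0   # arbitrary initial values, molecules cm^-3

        for _ in range(int(600 / DT)):      # integrate 10 minutes with forward Euler
            d = K * NO * O3 * DT            # molecules cm^-3 consumed this step
            NO, O3, NO2 = NO - d, O3 - d, NO2 + d

        print(f"after 10 min: NO = {NO:.2e}, O3 = {O3:.2e}, NO2 = {NO2:.2e} molecules/cm^3")
        # Near the source the excess NO titrates O3 away; with dilution further downwind
        # the reaction slows, which is why reactive and inert tracers behave differently.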

  1. Requirements and principles for the implementation and construction of large-scale geographic information systems

    Science.gov (United States)

    Smith, Terence R.; Menon, Sudhakar; Star, Jeffrey L.; Estes, John E.

    1987-01-01

    This paper provides a brief survey of the history, structure and functions of 'traditional' geographic information systems (GIS), and then suggests a set of requirements that large-scale GIS should satisfy, together with a set of principles for their satisfaction. These principles, which include the systematic application of techniques from several subfields of computer science to the design and implementation of GIS and the integration of techniques from computer vision and image processing into standard GIS technology, are discussed in some detail. In particular, the paper provides a detailed discussion of questions relating to appropriate data models, data structures and computational procedures for the efficient storage, retrieval and analysis of spatially-indexed data.
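
    The record stresses data structures for the efficient storage and retrieval of spatially-indexed data. The snippet below is a hedged, minimal sketch of one such structure, a uniform grid index over point features; the class, feature names and coordinates are invented, and it only stands in for the more sophisticated structures (quadtrees, R-trees) a production GIS would use.

```python
from collections import defaultdict

class GridIndex:
    """Minimal uniform-grid spatial index for point features (illustrative only)."""

    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size
        self.cells = defaultdict(list)

    def _key(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def insert(self, feature_id, x, y):
        self.cells[self._key(x, y)].append((feature_id, x, y))

    def query(self, xmin, ymin, xmax, ymax):
        """Return features whose coordinates fall inside the query rectangle."""
        i0, j0 = self._key(xmin, ymin)
        i1, j1 = self._key(xmax, ymax)
        hits = []
        for i in range(i0, i1 + 1):
            for j in range(j0, j1 + 1):
                for fid, x, y in self.cells.get((i, j), []):
                    if xmin <= x <= xmax and ymin <= y <= ymax:
                        hits.append(fid)
        return hits

index = GridIndex(cell_size=10.0)
index.insert("well_1", 3.2, 7.9)
index.insert("well_2", 45.0, 12.5)
print(index.query(0, 0, 20, 20))   # -> ['well_1']
```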

  2. Beam calorimetry at the large negative ion source test facility ELISE: Experimental setup and latest results

    International Nuclear Information System (INIS)

    Nocentini, Riccardo; Bonomo, Federica; Ricci, Marina; Pimazzoni, Antonio; Fantz, Ursel; Heinemann, Bernd; Riedl, Rudolf; Wünderlich, Dirk

    2016-01-01

    Highlights: • ELISE is the first step in the European roadmap for the development of the ITER NBI. • Several beam diagnostic tools have been installed; the latest results are presented. • A Gaussian fit procedure has been implemented to characterize the large ion beam. • Average beamlet group inhomogeneity is at most 13%, close to the ITER target of 10%. • Beam divergence measured by the calorimeter agrees with the BES measurements within 30%. - Abstract: The test facility ELISE is the first step within the European roadmap for the development of the ITER NBI system. ELISE is equipped with a 1 × 0.9 m² radio frequency negative ion source (half the ITER source size) and an ITER-like 3-grid extraction system which can extract an H⁻ or D⁻ beam for 10 s every 3 min (limited by the available power supplies) with a total acceleration voltage of up to 60 kV. In the beam line of ELISE several beam diagnostic tools have been installed with the aim of evaluating beam intensity, divergence and uniformity. A copper diagnostic calorimeter offers the possibility to measure the beam power density profile with high resolution. The measurements are performed by an IR micro-bolometer camera and 48 thermocouples embedded in the calorimeter. A Gaussian fit procedure has been implemented in order to characterize the large negative ion beam produced by ELISE. The latest results obtained from the beam calorimetry at ELISE show that the average beamlet group inhomogeneity is at most 13%. The measured beam divergence agrees with the one measured by beam emission spectroscopy within 30%.
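
    As a rough illustration of the Gaussian fit procedure mentioned above, the sketch below fits a 2D Gaussian to a synthetic power-density map with SciPy; the grid size, beam parameters and noise level are assumptions, not ELISE data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in for a calorimeter power-density map; the real analysis
# fits profiles measured by the IR camera and the embedded thermocouples.
def gauss2d(xy, amp, x0, y0, sx, sy, offset):
    x, y = xy
    return (amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                           + (y - y0) ** 2 / (2 * sy ** 2))) + offset).ravel()

x, y = np.meshgrid(np.linspace(-0.5, 0.5, 60), np.linspace(-0.9, 0.9, 100))
true = gauss2d((x, y), 10.0, 0.05, -0.1, 0.12, 0.20, 0.5).reshape(x.shape)
data = true + np.random.normal(0, 0.2, true.shape)    # add measurement noise

p0 = [data.max(), 0, 0, 0.1, 0.1, 0]                  # initial guess
popt, _ = curve_fit(gauss2d, (x, y), data.ravel(), p0=p0)
print("fitted centre (m):", popt[1], popt[2], " widths (m):", popt[3], popt[4])
```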

  3. Sources of nitrate contamination and age of water in large karstic springs of Florida

    Science.gov (United States)

    Katz, B.G.

    2004-01-01

    In response to concerns about the steady increase in nitrate concentrations over the past several decades in many of Florida's first magnitude spring waters (discharge ≥2.8 m³/s), multiple isotopic and other chemical tracers were analyzed in water samples from 12 large springs to assess sources and timescales of nitrate contamination. Nitrate-N concentrations in spring waters ranged from 0.50 to 4.2 mg/L, and δ15N values of nitrate in spring waters ranged from 2.6 to 7.9 per mil. Most δ15N values were below 6 per mil indicating that inorganic fertilizers were the dominant source of nitrogen in these waters. Apparent ages of groundwater discharging from springs ranged from 5 to about 35 years, based on multi-tracer analyses (CFC-12, CFC-113, SF6, 3H/3He) and a piston flow assumption; however, apparent tracer ages generally were not concordant. The most reliable spring-water ages appear to be based on tritium and 3He data, because concentrations of CFCs and SF6 in several spring waters were much higher than would be expected from equilibration with modern atmospheric concentrations. Data for all tracers were most consistent with output curves for exponential and binary mixing models that represent mixtures of water in the Upper Floridan aquifer recharged since the early 1960s. Given that groundwater transit times are on the order of decades and are related to the prolonged input of nitrogen from multiple sources to the aquifer, nitrate could persist in groundwater that flows toward springs for several decades due to slow transport of solutes through the aquifer matrix.
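
    The piston-flow and exponential (lumped-parameter) models mentioned above can be written as simple convolutions of an atmospheric tracer input with an assumed transit-time distribution. The sketch below is illustrative only: the input history, tracer choice and mean transit times are stand-ins, not the study's calibrated values.

```python
import numpy as np

# Illustrative lumped-parameter models: predicted spring-water tracer
# concentration for a piston-flow model (PFM) and an exponential model (EMM).
# The atmospheric input history and mean transit times below are made up.
years = np.arange(1950, 2005)
c_in = np.interp(years, [1950, 1963, 1970, 2004], [5, 1000, 100, 10])  # e.g. tritium, TU
half_life = 12.32                                # tritium half-life (yr)
lam = np.log(2) / half_life

def piston_flow(sample_year, mean_age):
    t_recharge = sample_year - mean_age
    return np.interp(t_recharge, years, c_in) * np.exp(-lam * mean_age)

def exponential_model(sample_year, mean_age):
    ages = sample_year - years[years <= sample_year]
    weights = np.exp(-ages / mean_age) / mean_age        # exponential age distribution
    conc = np.interp(sample_year - ages, years, c_in) * np.exp(-lam * ages)
    return np.sum(conc * weights) / np.sum(weights)

for tau in (5, 15, 35):
    print(tau, piston_flow(2004, tau), exponential_model(2004, tau))
```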

  4. Large Scale Integration of Renewable Power Sources into the Vietnamese Power System

    Science.gov (United States)

    Kies, Alexander; Schyska, Bruno; Thanh Viet, Dinh; von Bremen, Lueder; Heinemann, Detlev; Schramm, Stefan

    2017-04-01

    The Vietnamese Power system is expected to expand considerably in upcoming decades. Power capacities installed are projected to grow from 39 GW in 2015 to 129.5 GW by 2030. Installed wind power capacities are expected to grow to 6 GW (0.8 GW 2015) and solar power capacities to 12 GW (0.85 GW 2015). This goes hand in hand with an increase of the renewable penetration in the power mix from 1.3% from wind and photovoltaics (PV) in 2015 to 5.4% by 2030. The overall potential for wind power in Vietnam is estimated to be around 24 GW. Moreover, the up-scaling of renewable energy sources was formulated as one of the prioritized targets of the Vietnamese government in the National Power Development Plan VII. In this work, we investigate the transition of the Vietnamese power system towards high shares of renewables. For this purpose, we jointly optimise the expansion of renewable generation facilities for wind and PV, and the transmission grid within renewable build-up pathways until 2030 and beyond. To simulate the Vietnamese power system and its generation from renewable sources, we use highly spatially and temporally resolved historical weather and load data and the open source modelling toolbox Python for Power System Analysis (PyPSA). We show that the highest potential of renewable generation for wind and PV is observed in southern Vietnam and discuss the resulting need for transmission grid extensions depending on the optimal pathway. Furthermore, we show that the smoothing effect of wind power has considerable beneficial effects and that the Vietnamese hydro power potential can be efficiently used to provide balancing opportunities. This work is part of the R&D Project "Analysis of the Large Scale Integration of Renewable Power into the Future Vietnamese Power System" (GIZ, 2016-2018).
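
    PyPSA, named in the abstract, builds a network of buses, loads and generators and then solves a linear optimal power flow / capacity-expansion problem. The single-bus sketch below is a hedged toy example (made-up costs, capacity factors and load; an installed LP solver is required), not the study's Vietnam model; older PyPSA releases expose the solve step as n.lopf() rather than n.optimize().

```python
import pandas as pd
import pypsa   # open-source toolbox named in the abstract

# Minimal single-bus sketch: co-optimize wind, solar and backup capacity
# against an invented 24-hour load profile.
snapshots = pd.date_range("2030-01-01", periods=24, freq="h")
wind_cf  = pd.Series([0.3 + 0.2 * ((t.hour % 12) / 12) for t in snapshots], snapshots)
solar_cf = pd.Series([max(0.0, 0.8 - abs(t.hour - 12) / 12) for t in snapshots], snapshots)
load     = pd.Series(5000.0, snapshots)          # MW, illustrative

n = pypsa.Network()
n.set_snapshots(snapshots)
n.add("Bus", "VN")
n.add("Load", "demand", bus="VN", p_set=load)
n.add("Generator", "wind",  bus="VN", p_nom_extendable=True,
      capital_cost=1.1e5, p_max_pu=wind_cf)      # assumed annualized cost per MW
n.add("Generator", "solar", bus="VN", p_nom_extendable=True,
      capital_cost=6.0e4, p_max_pu=solar_cf)
n.add("Generator", "gas",   bus="VN", p_nom_extendable=True,
      capital_cost=4.0e4, marginal_cost=50.0)

n.optimize()   # older PyPSA releases use n.lopf() instead
print(n.generators.p_nom_opt)
```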

  5. Molecular evolution in court: analysis of a large hepatitis C virus outbreak from an evolving source.

    Science.gov (United States)

    González-Candelas, Fernando; Bracho, María Alma; Wróbel, Borys; Moya, Andrés

    2013-07-19

    Molecular phylogenetic analyses are used increasingly in the epidemiological investigation of outbreaks and transmission cases involving rapidly evolving RNA viruses. Here, we present the results of such an analysis that contributed to the conviction of an anesthetist as being responsible for the infection of 275 of his patients with hepatitis C virus. We obtained sequences of the NS5B and E1-E2 regions in the viral genome for 322 patients suspected to have been infected by the doctor, and for 44 local, unrelated controls. The analysis of 4,184 cloned sequences of the E1-E2 region allowed us to exclude 47 patients from the outbreak. A subset of patients had known dates of infection. We used these data to calibrate a relaxed molecular clock and to determine a rough estimate of the time of infection for each patient. A similar analysis led to an estimate for the time of infection of the source. The date turned out to be 10 years before the detection of the outbreak. The number of patients infected was small at first, but it increased substantially in the months before the detection of the outbreak. We have developed a procedure to integrate molecular phylogenetic reconstructions of rapidly evolving viral populations into a forensic setting adequate for molecular epidemiological analysis of outbreaks and transmission events. We applied this procedure to a large outbreak of hepatitis C virus caused by a single source and the results obtained played a key role in the trial that led to the conviction of the suspected source.

  6. NEAR-INFRARED POLARIZATION SOURCE CATALOG OF THE NORTHEASTERN REGIONS OF THE LARGE MAGELLANIC CLOUD

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jaeyeong; Pak, Soojong [School of Space Research, Kyung Hee University, 1 Seocheon-dong, Giheung-gu, Yongin, Gyeonggi-do 446-701 (Korea, Republic of); Jeong, Woong-Seob; Park, Won-Kee [Korea Astronomy and Space Science Institute, 776 Daedeok-daero, Yuseong-gu, Daejeon 305-348 (Korea, Republic of); Tamura, Motohide, E-mail: jaeyeong@khu.ac.kr, E-mail: jeongws@kasi.re.kr [The University of Tokyo/National Astronomical Observatory of Japan/Astrobiology Center, 2-21-1 Osawa, Mitaka, Tokyo 181-8588 (Japan)

    2016-01-15

    We present a near-infrared band-merged photometric and polarimetric catalog for the 39′ × 69′ fields in the northeastern part of the Large Magellanic Cloud (LMC), which were observed using SIRPOL, an imaging polarimeter of the InfraRed Survey Facility. This catalog lists 1858 sources brighter than 14 mag in the H band with a polarization signal-to-noise ratio greater than three in the J, H, or Ks bands. Based on the relationship between the extinction and the polarization degree, we argue that the polarization mostly arises from dichroic extinctions caused by local interstellar dust in the LMC. This catalog allows us to map polarization structures to examine the global geometry of the local magnetic field, and to show a statistical analysis of the polarization of each field to understand its polarization properties. In the selected fields with coherent polarization position angles, we estimate magnetic field strengths in the range of 3−25 μG using the Chandrasekhar–Fermi method. This implies the presence of large-scale magnetic fields on a scale of around 100 parsecs. When comparing mid- and far-infrared dust emission maps, we confirmed that the polarization patterns are well aligned with molecular clouds around the star-forming regions.
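
    The Chandrasekhar–Fermi estimate referred to above relates the plane-of-sky field strength to the gas density, the velocity dispersion and the dispersion of polarization position angles, B ≈ Q·sqrt(4πρ)·σ_v/σ_θ. The sketch below evaluates that formula in cgs units with purely illustrative inputs; the density, velocity dispersion and angle dispersion are not taken from the catalog.

```python
import numpy as np

def cf_field_strength(n_h2_cm3, sigma_v_kms, sigma_theta_deg, q=0.5, mu=2.8):
    """Plane-of-sky B (gauss) from the Chandrasekhar-Fermi method, cgs units.
    q is the usual correction factor (~0.5); mu the mean mass per H2 in units of m_H."""
    m_h = 1.6735575e-24                    # g
    rho = mu * m_h * n_h2_cm3              # g cm^-3
    sigma_v = sigma_v_kms * 1.0e5          # cm s^-1
    sigma_theta = np.deg2rad(sigma_theta_deg)
    return q * np.sqrt(4.0 * np.pi * rho) * sigma_v / sigma_theta

# Illustrative inputs only (not catalog values)
b_gauss = cf_field_strength(n_h2_cm3=100.0, sigma_v_kms=2.0, sigma_theta_deg=10.0)
print(f"B ~ {b_gauss * 1e6:.1f} microgauss")
```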

  7. Large Scale Water Vapor Sources Relative to the October 2000 Piedmont Flood

    Science.gov (United States)

    Turato, Barbara; Reale, Oreste; Siccardi, Franco

    2003-01-01

    Very intense mesoscale or synoptic-scale rainfall events can occasionally be observed in the Mediterranean region without any deep cyclone developing over the areas affected by precipitation. In these perplexing cases the synoptic situation can superficially look similar to cases in which very little precipitation occurs. These situations could possibly baffle the operational weather forecasters. In this article, the major precipitation event that affected Piedmont (Italy) between 13 and 16 October 2000 is investigated. This is one of the cases in which no intense cyclone was observed within the Mediterranean region at any time, only a moderate system was present, and yet exceptional rainfall and flooding occurred. The emphasis of this study is on the moisture origin and transport. Moisture and energy balances are computed on different space- and time-scales, revealing that precipitation exceeds evaporation over an area inclusive of Piedmont and the northwestern Mediterranean region, on a time-scale encompassing the event and about two weeks preceding it. This is suggestive of an important moisture contribution originating from outside the region. A synoptic and dynamic analysis is then performed to outline the potential mechanisms that could have contributed to the large-scale moisture transport. The central part of the work uses a quasi-isentropic water-vapor back trajectory technique. The moisture sources obtained by this technique are compared with the results of the balances and with the synoptic situation, to unveil possible dynamic mechanisms and physical processes involved. It is found that moisture sources on a variety of atmospheric scales contribute to this event. First, an important contribution is caused by the extratropical remnants of former tropical storm Leslie. The large-scale environment related to this system allows a significant amount of moisture to be carried towards Europe. This happens on a time-scale of about 5-15 days preceding the

  8. Plant protection system optimization studies to mitigate consequences of large breaks in the Advanced Neutron Source Reactor

    International Nuclear Information System (INIS)

    Khayat, M.I.; March-Leuba, J.

    1993-01-01

    This paper documents some of the optimization studies performed to maximize the performance of the engineered safety features and scram systems to mitigate the consequences of large breaks in the primary cooling system of the Advanced Neutron Source (ANS) Reactor

  9. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    Science.gov (United States)

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

    Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than minimizing the residual between model-calculated and measured arrivals. The results of numerical examples and in situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
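
    For orientation, the sketch below shows the conventional sensor-pair (hyperbolic/TDOA) misfit that the VFOM builds on: each pair of sensors constrains the source to a hyperboloid, and a location is sought where those constraints intersect. It is a generic baseline with invented geometry, velocity and picking errors, not the published VFOM objective function.

```python
import numpy as np
from scipy.optimize import minimize

# Invented sensor geometry (metres) and an assumed P-wave velocity (m/s).
sensors = np.array([[0, 0, 0], [800, 0, 0], [0, 800, 0],
                    [0, 0, 600], [800, 800, 300]], float)
v = 5000.0
true_src = np.array([350.0, 420.0, 150.0])
arrivals = np.linalg.norm(sensors - true_src, axis=1) / v
arrivals += np.random.normal(0, 2e-3, arrivals.size)   # simulated picking errors (s)

def pair_misfit(x):
    d = np.linalg.norm(sensors - x, axis=1)
    total = 0.0
    for i in range(len(sensors)):
        for j in range(i + 1, len(sensors)):
            # each sensor pair constrains the source to one hyperboloid sheet
            total += ((d[i] - d[j]) / v - (arrivals[i] - arrivals[j])) ** 2
    return total

sol = minimize(pair_misfit, x0=np.array([400.0, 400.0, 200.0]), method="Nelder-Mead")
print("estimated source location:", sol.x)
```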

  10. Jet emission in young radio sources: A Fermi large area telescope gamma-ray view

    Energy Technology Data Exchange (ETDEWEB)

    Migliori, G.; Siemiginowska, A. [Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138 (United States); Kelly, B. C. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93107 (United States); Stawarz, Ł. [Institute of Space and Astronautical Science, JAXA, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252-5210 (Japan); Celotti, A. [Scuola Internazionale Superiore di Studi Avanzati (SISSA), via Bonomea, 265-34136 Trieste (Italy); Begelman, M. C., E-mail: migliori@cfa.harvard.edu [JILA, University of Colorado and National Institute of Standards and Technology, 440 UCB, Boulder, CO 80309-0440 (United States)

    2014-01-10

    We investigate the contribution of the beamed jet component to the high-energy emission in young and compact extragalactic radio sources, focusing for the first time on the γ-ray band. We derive predictions on the γ-ray luminosities associated with the relativistic jet assuming a leptonic radiative model. The high-energy emission is produced via Compton scattering by the relativistic electrons in a spherical region at the considered scales (≲10 kpc). Simulations show a wide range of γ-ray luminosities, with intensities up to ∼10⁴⁶-10⁴⁸ erg s⁻¹ depending on the assumed jet parameters. We find a highly linear relation between the simulated X-ray and γ-ray luminosities that can be used to select candidates for γ-ray detection. We compare the simulated luminosity distributions in the radio, X-ray, and γ-ray regimes with observations for the largest sample of X-ray-detected young radio quasars. Our analysis of ∼4-yr Fermi Large Area Telescope (LAT) data does not yield any statistically significant detections. However, the majority of the model-predicted γ-ray fluxes for the sample are near or below the current Fermi-LAT flux threshold and compatible with the derived upper limits. Our study gives constraints on the minimum jet power (L_jet,kin/L_disk > 0.01) of a potential jet contribution to the X-ray emission in the most compact sources (≲ 1 kpc) and on the particle-to-magnetic field energy density ratio that are in broad agreement with equipartition assumptions.

  11. Scalable Metadata Management for a Large Multi-Source Seismic Data Repository

    Energy Technology Data Exchange (ETDEWEB)

    Gaylord, J. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dodge, D. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Magana-Zook, S. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Barno, J. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Knapp, D. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-04-11

    In this work, we implemented the key metadata management components of a scalable seismic data ingestion framework to address limitations in our existing system, and to position it for anticipated growth in volume and complexity. We began the effort with an assessment of open source data flow tools from the Hadoop ecosystem. We then began the construction of a layered architecture that is specifically designed to address many of the scalability and data quality issues we experience with our current pipeline. This included implementing basic functionality in each of the layers, such as establishing a data lake, designing a unified metadata schema, tracking provenance, and calculating data quality metrics. Our original intent was to test and validate the new ingestion framework with data from a large-scale field deployment in a temporary network. This delivered somewhat unsatisfying results, since the new system immediately identified fatal flaws in the data relatively early in the pipeline. Although this is a correct result it did not allow us to sufficiently exercise the whole framework. We then widened our scope to process all available metadata from over a dozen online seismic data sources to further test the implementation and validate the design. This experiment also uncovered a higher than expected frequency of certain types of metadata issues that challenged us to further tune our data management strategy to handle them. Our result from this project is a greatly improved understanding of real world data issues, a validated design, and prototype implementations of major components of an eventual production framework. This successfully forms the basis of future development for the Geophysical Monitoring Program data pipeline, which is a critical asset supporting multiple programs. It also positions us very well to deliver valuable metadata management expertise to our sponsors, and has already resulted in an NNSA Office of Defense Nuclear Nonproliferation

  12. Model-independent requirements to the source of positrons in the galactic centre

    International Nuclear Information System (INIS)

    Aharonyan, F.A.

    1986-01-01

    The main requirements on the positron source in the galactic centre, following from the observational data in a wide range of electromagnetic waves, are formulated. The most probable mechanism providing an efficiency of positron production of 10% is pair production in photon-photon collisions. This mechanism can be realized (a) in a thermal e⁺e⁻ pair-dominated, weakly relativistic plasma and (b) in the development of a nonthermal electromagnetic cascade initiated by relativistic particles in the field of X-rays. Gamma-astronomical observations in the region of E_γ ≥ 10¹¹ eV can be crucial in the choice of the model.

  13. Moisture Sources and Large-Scale Dynamics Associated with a Flash Flood Event in Portugal

    Science.gov (United States)

    Liberato, Margarida L. R.; Ramos, Alexandre M.; Trigo, Ricardo M.; Trigo, Isabel F.; María Durán-Quesada, Ana; Nieto, Raquel; Gimeno, Luis

    2013-04-01

    On 18-19 November 1983, the region of Lisbon, in Portugal, was affected by a heavy precipitation event, soon followed by flash flooding, urban inundations and a burst of landslides around Lisbon [Zêzere et al., 2005] causing considerable infrastructure damage and human fatalities. With a total of 95.6 mm in 24 h observed at the longest serving station in Portugal (Lisbon's Dom Luiz Observatory), this was the rainiest day during the twentieth century and one of the rainiest registered since 1864. We found that this event was triggered by the transport of tropical and subtropical moisture associated with an extratropical cyclone. The low favored a large stream of (sub) tropical air that extended over more than 10° of latitude and across the North Atlantic Ocean, carrying a large amount of moisture originally from lower latitudes, a so-called atmospheric river. The stationary position of the jet stream along the East Atlantic Ocean through Iberia caused a strong enhancement of the precipitation associated with the moist air. A Lagrangian analysis of the transport of moisture in the Euro-Atlantic sector was performed based on the methodology developed by Stohl and James [2004, 2005], using the FLEXPART model. This Lagrangian methodology was employed to show that the evaporative sources for the precipitation falling over the area of Lisbon were distributed over large sectors of the tropical-subtropical North Atlantic Ocean and included a significant contribution from the (sub) tropics. This study [Liberato et al., 2012] aims to provide an example of the application of distinct Lagrangian techniques to achieve a better understanding of the relation between extratropical cyclones and the occurrence of a heavy precipitation event on the Iberian Peninsula. Acknowledgments: This work was partially supported by FEDER (Fundo Europeu de Desenvolvimento Regional) funds through the COMPETE (Programa Operacional Factores de Competitividade) Programme and by national funds
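
    In the Stohl and James diagnostic cited above, each air parcel's evaporation minus precipitation is tracked as e − p = m·dq/dt along its trajectory, and net uptakes over candidate source regions are summed. The sketch below applies that bookkeeping to one invented parcel; the parcel mass, time step and humidity series are assumptions, not FLEXPART output.

```python
import numpy as np

# Sketch of the Stohl & James (2004) bookkeeping for a single air parcel of
# mass m: evaporation minus precipitation along the trajectory is e - p = m * dq/dt.
dt_s = 6 * 3600.0                       # 6-hourly trajectory output (s)
m_parcel = 2.0e12                       # kg per parcel (depends on the model setup)

# specific humidity (kg/kg) of one parcel at successive 6-h steps (invented)
q = np.array([0.0041, 0.0052, 0.0067, 0.0065, 0.0049, 0.0031])

e_minus_p = m_parcel * np.diff(q) / dt_s        # kg s^-1 per trajectory segment
for k, val in enumerate(e_minus_p):
    tag = "moisture uptake (source)" if val > 0 else "moisture loss (precipitation)"
    print(f"segment {k}: E-P = {val:,.0f} kg/s -> {tag}")
```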

  14. Integrating large-scale cogeneration of hydrogen and electricity from wind and nuclear sources (NUWINDTM)

    International Nuclear Information System (INIS)

    Miller, A. I.; Duffey, R. B.

    2008-01-01

    As carbon-free fuels, hydrogen and electricity are headed for major roles in replacing hydrocarbons as the world constrains carbon dioxide (CO2) emissions. This will apply particularly to the transport sector. A general trend toward electric drive on-board vehicles is already evident and hydrogen converted to electricity by a fuel cell is likely to be a major source of on-board electricity. The major car manufacturers continue to invest heavily in this option and significant government initiatives in both the USA and Canada are beginning demonstration deployments of the infrastructure needed for hydrogen refueling. However, early adoption of hydrogen as a transport fuel may well be concentrated on heavy-duty transportation: trains, ships and trucks, where battery storage of electricity is unlikely to be practical. But both hydrogen and electricity are secondary fuels and are only effective if the source of the primary energy is a low CO2 emitter such as nuclear and wind. A competitive cost is also essential and, to achieve this, one must rely on off-peak electricity prices. This paper examines historical data for electricity prices and the actual output of the main wind farms in Ontario to show how nuclear and wind can be combined to generate hydrogen by water electrolysis at prices that are competitive with fossil-based hydrogen production. The NuWind™ concept depends on operating electrolysis cells over an extended range of current densities to accommodate the inherent variability of wind and of electricity prices as they vary in open markets. The cost of co-producing hydrogen with electricity originating from nuclear plants (80%) and from wind turbines (20%) is very close to that of production from a constantly available electricity source. In contrast, the price of hydrogen produced using electricity from wind alone is estimated to cost about $1500/tonne more than hydrogen from NuWind or nuclear alone because the electrolysis facility must be much larger

  15. A novel design for sap flux data acquisition in large research plots using open source components

    Science.gov (United States)

    Hawthorne, D. A.; Oishi, A. C.

    2017-12-01

    Sap flux sensors are a widely-used tool for estimating in-situ, tree-level transpiration rates. These probes are installed in the stems of multiple trees within a study area and are typically left in place throughout the year. Sensors vary in their design and theory of operation, but all require electrical power for a heating element and produce at least one analog signal that must be digitized for storage. There are two topologies traditionally adopted to energize these sensors and gather the data from them. In one, a single data logger and power source are used. Dedicated cables radiate out from the logger to supply power to each of the probes and retrieve analog signals. In the other layout, a standalone data logger is located at each monitored tree. Batteries must then be distributed throughout the plot to service these loggers. We present a hybrid solution based on industrial control systems that employs a central data logger and battery, but co-locates digitizing hardware with the sensors at each tree. Each hardware node is able to communicate and share power over wire links with neighboring nodes. The resulting network provides a fault-tolerant path between the logger and each sensor. The approach is optimized to limit disturbance of the study plot, protect signal integrity and to enhance system reliability. This open-source implementation is built on the Arduino micro-controller system and employs RS485 and Modbus communications protocols. It is supported by laptop based management software coded in Python. The system is designed to be readily fabricated and programmed by non-experts. It works with a variety of sap-flux measurement techniques and it is able to interface to additional environmental sensors.

  16. 40 CFR 51.914 - What new source review requirements apply for 8-hour ozone nonattainment areas?

    Science.gov (United States)

    2010-07-01

    ... apply for 8-hour ozone nonattainment areas? 51.914 Section 51.914 Protection of Environment... Standard § 51.914 What new source review requirements apply for 8-hour ozone nonattainment areas? The requirements for new source review for the 8-hour ozone standard are located in § 51.165 of this part. [70 FR...

  17. An evaluation of information sources and requirements for nuclear plant-aging research with life-extension implications

    International Nuclear Information System (INIS)

    Jacobs, P.T.

    1986-01-01

    Information requirements for plant-aging and life-extension research are discussed. Various information sources that have been used in plant-aging studies and reliability assessments are described. Data-base searches and analyses were performed for a specific system using several data bases and plant sources. Comments are provided on the results using the various information sources

  18. Particle generation methods applied in large-scale experiments on aerosol behaviour and source term studies

    International Nuclear Information System (INIS)

    Swiderska-Kowalczyk, M.; Gomez, F.J.; Martin, M.

    1997-01-01

    In aerosol research, aerosols of known size, shape, and density are highly desirable because most aerosol properties depend strongly on particle size. However, constant and reproducible generation of aerosol particles whose size and concentration can be easily controlled can be achieved only in laboratory-scale tests. In large-scale experiments, different generation methods for various elements and compounds have been applied. This work presents, in brief form, a review of applications of these methods used in large-scale experiments on aerosol behaviour and source term. The description of the generation method and the transport conditions of the generated aerosol is followed by the properties of the obtained aerosol, the aerosol instrumentation used, and the scheme of the aerosol generation system, wherever available. Information concerning the particular purpose of the aerosol generation and reference number(s) is given at the end of each case. The methods reviewed are: evaporation-condensation, using furnace heating and using a plasma torch; atomization of liquid, using compressed air nebulizers, ultrasonic nebulizers and atomization of liquid suspension; and dispersion of powders. Among the projects included in this work are: ACE, LACE, GE Experiments, EPRI Experiments, LACE-Spain, UKAEA Experiments, BNWL Experiments, ORNL Experiments, MARVIKEN, SPARTA and DEMONA. The main chemical compounds studied are: Ba, Cs, CsOH, CsI, Ni, Cr, NaI, TeO2, UO2, Al2O3, Al2SiO5, B2O3, Cd, CdO, Fe2O3, MnO, SiO2, AgO, SnO2, Te, U3O8, BaO, CsCl, CsNO3, urania, RuO2, TiO2, Al(OH)3, BaSO4, Eu2O3 and Sn. (Author)

  19. Safety requirements and options for a large size fast neutron reactor

    International Nuclear Information System (INIS)

    Cogne, F.; Megy, J.; Robert, E.; Benmergui, A.; Villeneuve, J.

    1977-01-01

    Starting from the experience gained in the safety evaluation of the PHENIX reactor, and from results already obtained in the safety studies on fast neutron reactors, the French regulatory bodies have defined since 1973 what the requirements and recommendations should be regarding the safety of the first large-size "prototype" fast neutron power plant of 1200 MWe. These requirements and recommendations, while not compulsory given the continuing evolution of this type of reactor, will be used as a basis for the technical regulation that will be established in France in this field. They define in particular the care to be taken in the following areas, which are essential for safety: the protection systems, the primary coolant system, the prevention of accidents at the core level, the measures to be taken with regard to the whole-core accident and to the containment, the protection against sodium fires, and the design as a function of external aggressions. In applying these recommendations, the CREYS-MALVILLE plant designers have tried to achieve redundancy in the safety-related systems and have justified the safety of the design with regard to the various phenomena involved. In particular, the extensive research made at the level of the fuel and of the core instrumentation makes it possible to achieve the best defence against the development of core accidents. The overall examination of the measures taken, from the standpoint of prevention and surveillance as well as from the standpoint of means of action, led the French regulatory bodies to propose the construction permit of the CREYS-MALVILLE plant, provided that additional examinations by the regulatory bodies be made during the construction of the plant on some technological aspects not fully clarified at the authorization time. The conservatism of the corresponding requirements should be demonstrated prior to the commissioning of the power plant. To pursue a programme on reactors of this type, or even more

  20. A high-throughput system for high-quality tomographic reconstruction of large datasets at Diamond Light Source.

    Science.gov (United States)

    Atwood, Robert C; Bodey, Andrew J; Price, Stephen W T; Basham, Mark; Drakopoulos, Michael

    2015-06-13

    Tomographic datasets collected at synchrotrons are becoming very large and complex, and, therefore, need to be managed efficiently. Raw images may have high pixel counts, and each pixel can be multidimensional and associated with additional data such as those derived from spectroscopy. In time-resolved studies, hundreds of tomographic datasets can be collected in sequence, yielding terabytes of data. Users of tomographic beamlines are drawn from various scientific disciplines, and many are keen to use tomographic reconstruction software that does not require a deep understanding of reconstruction principles. We have developed Savu, a reconstruction pipeline that enables users to rapidly reconstruct data to consistently create high-quality results. Savu is designed to work in an 'orthogonal' fashion, meaning that data can be converted between projection and sinogram space throughout the processing workflow as required. The Savu pipeline is modular and allows processing strategies to be optimized for users' purposes. In addition to the reconstruction algorithms themselves, it can include modules for identification of experimental problems, artefact correction, general image processing and data quality assessment. Savu is open source, open licensed and 'facility-independent': it can run on standard cluster infrastructure at any institution.

  1. Impacts of large-scale Intermittent Renewable Energy Sources on electricity systems, and how these can be modeled

    NARCIS (Netherlands)

    Brouwer, Anne Sjoerd; Van Den Broek, Machteld; Seebregts, Ad; Faaij, André

    The electricity sector in OECD countries is on the brink of a large shift towards low-carbon electricity generation. Power systems after 2030 may consist largely of two low-carbon generator types: Intermittent Renewable Energy Sources (IRES) such as wind and solar PV and thermal generators such as

  2. Optimization and Thermoeconomics Research of a Large Reclaimed Water Source Heat Pump System

    Directory of Open Access Journals (Sweden)

    Zi-ping Zhang

    2013-01-01

    This work describes a large reclaimed water source heat pump system (RWSHPS) and elaborates on the composition of the system and its design principles. According to the characteristics of the reclaimed water and taking into account the initial investment, the project is divided into two stages: the first stage adopts a distributed heat pump heating system and the second adopts a combination of centralized and decentralized systems. We analyze the heating capacity of the RWSHPS: when the phase II project is completed, the system can provide hydronic heating water with supply and return water temperatures of 55°C/15°C and meet the hydronic heating demand of 8 million square meters of residential buildings. A thermoeconomic analysis of the RWSHPS and a gas boiler system shows that the RWSHPS has clear advantages compared with the gas boiler heating system; both its thermal efficiency and its economic efficiency are relatively high. It provides a reference for future applications of the RWSHPS.

  3. Forward Modeling of Large-scale Structure: An Open-source Approach with Halotools

    Science.gov (United States)

    Hearin, Andrew P.; Campbell, Duncan; Tollerud, Erik; Behroozi, Peter; Diemer, Benedikt; Goldbaum, Nathan J.; Jennings, Elise; Leauthaud, Alexie; Mao, Yao-Yuan; More, Surhud; Parejko, John; Sinha, Manodeep; Sipöcz, Brigitta; Zentner, Andrew

    2017-11-01

    We present the first stable release of Halotools (v0.2), a community-driven Python package designed to build and test models of the galaxy-halo connection. Halotools provides a modular platform for creating mock universes of galaxies starting from a catalog of dark matter halos obtained from a cosmological simulation. The package supports many of the common forms used to describe galaxy-halo models: the halo occupation distribution, the conditional luminosity function, abundance matching, and alternatives to these models that include effects such as environmental quenching or variable galaxy assembly bias. Satellite galaxies can be modeled to live in subhalos or to follow custom number density profiles within their halos, including spatial and/or velocity bias with respect to the dark matter profile. The package has an optimized toolkit to make mock observations on a synthetic galaxy population—including galaxy clustering, galaxy-galaxy lensing, galaxy group identification, RSD multipoles, void statistics, pairwise velocities and others—allowing direct comparison to observations. Halotools is object-oriented, enabling complex models to be built from a set of simple, interchangeable components, including those of your own creation. Halotools has an automated testing suite and is exhaustively documented on http://halotools.readthedocs.io, which includes quickstart guides, source code notes and a large collection of tutorials. The documentation is effectively an online textbook on how to build and study empirical models of galaxy formation with Python.
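
    A minimal usage sketch, assuming a recent Halotools release and the API documented at the link above: populate the package's small built-in test catalog with a prebuilt HOD model and measure the real-space two-point correlation function. The model name, threshold and binning are illustrative choices, not recommendations from the paper.

```python
import numpy as np
from halotools.sim_manager import FakeSim
from halotools.empirical_models import PrebuiltHodModelFactory
from halotools.mock_observables import tpcf

# Populate a tiny fake halo catalog (shipped with the package for testing)
# with a prebuilt HOD model; real analyses would use a cached simulation catalog.
halocat = FakeSim()
model = PrebuiltHodModelFactory("zheng07", threshold=-20)
model.populate_mock(halocat)

gals = model.mock.galaxy_table
pos = np.vstack((gals["x"], gals["y"], gals["z"])).T   # comoving Mpc/h

rbins = np.logspace(-1, 1.2, 12)
xi = tpcf(pos, rbins, period=halocat.Lbox)              # real-space 2-point function
print(xi)
```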

  4. Source of vacuum electromagnetic zero-point energy and Dirac's large numbers hypothesis

    International Nuclear Information System (INIS)

    Simaciu, I.; Dumitrescu, G.

    1993-01-01

    The stochastic electrodynamics states that the zero-point fluctuation of the vacuum (ZPF) is an electromagnetic zero-point radiation with spectral density ρ(ω) = ħω³/2π²c³. Protons, free electrons and atoms are sources for this radiation. Each of them absorbs and emits energy by interacting with the ZPF. At equilibrium the ZPF radiation is scattered by dipoles. The scattered radiation spectral density is ρ(ω,r) = ρ(ω)σ(ω)/4πr². The spectral density of the dipole radiation of the Universe is ρ = ∫₀^R n ρ(ω,r) 4πr² dr. But if σ_atom ≈ σ_e ≈ σ_T, then ρ ≈ ρ(ω)σ_T R n. Moreover, if ρ = ρ(ω), then σ_T R n = 1. With R = GM/c² and σ_T ≅ (e²/m_e c²)² ∝ r_e², the condition σ_T R n = 1 is equivalent to R/r_e = e²/G m_p m_e, i.e. the cosmological coincidence discussed in the context of Dirac's large-numbers hypothesis. (Author)
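
    The closing coincidence can be checked numerically. The sketch below evaluates the electrostatic-to-gravitational force ratio e²/(4πε₀ G m_p m_e) with CODATA constants and compares it with R/r_e for an assumed Universe mass of about 10⁵³ kg (an order-of-magnitude figure not taken from the paper); both come out near the famous 10³⁹-10⁴⁰ "large number".

```python
from scipy import constants as c

# Electrostatic-to-gravitational force ratio between a proton and an electron
coulomb_to_gravity = c.e**2 / (4 * c.pi * c.epsilon_0 * c.G * c.m_p * c.m_e)
print(f"e^2 / (4*pi*eps0 * G * m_p * m_e) ~ {coulomb_to_gravity:.2e}")   # ~2.3e39

# Compare with R / r_e, taking R = G*M/c^2 for an assumed Universe mass
# M ~ 1e53 kg (a commonly quoted order-of-magnitude estimate, not from the paper).
M_universe = 1e53
R = c.G * M_universe / c.c**2
r_e = c.physical_constants["classical electron radius"][0]
print(f"R / r_e ~ {R / r_e:.2e}")
```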

  5. A large common-source outbreak of norovirus gastroenteritis in a hotel in Singapore, 2012.

    Science.gov (United States)

    Raj, P; Tay, J; Ang, L W; Tien, W S; Thu, M; Lee, P; Pang, Q Y; Tang, Y L; Lee, K Y; Maurer-Stroh, S; Gunalan, V; Cutter, J; Goh, K T

    2017-02-01

    An outbreak of gastroenteritis affected 453 attendees (attack rate 28·5%) of six separate events held at a hotel in Singapore. Active case detection, case-control studies, hygiene inspections and microbial analysis of food, environmental and stool samples were conducted to determine the aetiology of the outbreak and the modes of transmission. The only commonality was the food, crockery and cutlery provided and/or handled by the hotel's Chinese banquet kitchen. Stool specimens from 34 cases and 15 food handlers were positive for norovirus genogroup II. The putative index case was one of eight norovirus-positive food handlers who had worked while they were symptomatic. Several food samples and remnants tested positive for Escherichia coli or high faecal coliforms, aerobic plate counts and/or total coliforms, indicating poor food hygiene. This large common-source outbreak of norovirus gastroenteritis was caused by the consumption of contaminated food and/or contact with contaminated crockery or cutlery provided or handled by the hotel's Chinese banquet kitchen.

  6. Study of the nonequilibrium state of superconductors by large quasiparticle injection from an external current source

    International Nuclear Information System (INIS)

    Iguchi, I.

    1977-01-01

    We have studied the nonequilibrium state of superconductors by injecting large numbers of quasiparticles from an external current source into a superconducting film of a tunnel junction with low tunnel resistance (typically 0.1–1 Ω for a junction area ≈10⁻⁴ cm²). It was observed that there was a critical tunnel current density at which a voltage appeared locally in the part of a superconducting film confined to the junction area. Its values ranged from 10² to 10³ A/cm² for bath temperatures well below T_c. Followed by this voltage onset, a transition region corresponding to the nonequilibrium intermediate resistive state was also observed. For further increase of the tunnel current, the local film resistance developed beyond the value of its normal resistance, suggesting that the nonequilibrium state extends far beyond the voltage onset point. A theory based on the modified Rothwarf-Taylor equations and Parker's T* model is presented to compare with the experimental results. The calculated critical current density yielded almost the same order of magnitude as those found experimentally. The detailed behavior, however, deviates from the theoretical predictions although the film makes a second-order transition in the broad range of temperatures. It is also shown using four-terminal analysis that our observations and those by Wong, Yeh, and Langenberg are essentially the same

  7. Forward Modeling of Large-scale Structure: An Open-source Approach with Halotools

    Energy Technology Data Exchange (ETDEWEB)

    Hearin, Andrew P.; Campbell, Duncan; Tollerud, Erik; Behroozi, Peter; Diemer, Benedikt; Goldbaum, Nathan J.; Jennings, Elise; Leauthaud, Alexie; Mao, Yao-Yuan; More, Surhud; Parejko, John; Sinha, Manodeep; Sipöcz, Brigitta; Zentner, Andrew

    2017-10-18

    We present the first stable release of Halotools (v0.2), a community-driven Python package designed to build and test models of the galaxy-halo connection. Halotools provides a modular platform for creating mock universes of galaxies starting from a catalog of dark matter halos obtained from a cosmological simulation. The package supports many of the common forms used to describe galaxy-halo models: the halo occupation distribution, the conditional luminosity function, abundance matching, and alternatives to these models that include effects such as environmental quenching or variable galaxy assembly bias. Satellite galaxies can be modeled to live in subhalos or to follow custom number density profiles within their halos, including spatial and/or velocity bias with respect to the dark matter profile. The package has an optimized toolkit to make mock observations on a synthetic galaxy population—including galaxy clustering, galaxy–galaxy lensing, galaxy group identification, RSD multipoles, void statistics, pairwise velocities and others—allowing direct comparison to observations. Halotools is object-oriented, enabling complex models to be built from a set of simple, interchangeable components, including those of your own creation. Halotools has an automated testing suite and is exhaustively documented on http://halotools.readthedocs.io, which includes quickstart guides, source code notes and a large collection of tutorials. The documentation is effectively an online textbook on how to build and study empirical models of galaxy formation with Python.

  8. Mantle sources and magma evolution of the Rooiberg lavas, Bushveld Large Igneous Province, South Africa

    Science.gov (United States)

    Günther, T.; Haase, K. M.; Klemd, R.; Teschner, C.

    2018-06-01

    We report a new whole-rock dataset of major and trace element abundances and 87Sr/86Sr-143Nd/144Nd isotope ratios for basaltic to rhyolitic lavas from the Rooiberg continental large igneous province (LIP). The formation of the Paleoproterozoic Rooiberg Group is contemporaneous with and spatially related to the layered intrusion of the Bushveld Complex, which stratigraphically separates the volcanic succession. Our new data confirm the presence of low- and high-Ti mafic and intermediate lavas (basaltic-andesitic compositions) with > 4 wt% MgO, as well as evolved rocks (andesitic-rhyolitic compositions) characterized by lower MgO contents and different trace element ratios (Nb/Y and Ti/Y), indicating a different petrogenesis. MELTS modelling shows that the evolved lavas are formed by fractional crystallization from the mafic low-Ti lavas at low-to-moderate pressures (about 4 kbar). Primitive mantle-normalized trace element patterns of the Rooiberg rocks show an enrichment of large ion lithophile elements (LILE) and rare-earth elements (REE), pronounced negative anomalies of Nb, Ta, P, Ti and a positive Pb anomaly. Unaltered Rooiberg lavas have negative ɛNdi (-5.2 to -9.4) and radiogenic ɛSri (6.6 to 105) ratios (at 2061 Ma). These data overlap with isotope and trace element compositions of purported parental melts to the Bushveld Complex, especially for the lower zone. We suggest that the Rooiberg suite originated from a source similar to the composition of the B1-magma suggested as parental to the Bushveld Lower Zone, or that the lavas represent eruptive successions of fractional crystallization products related to the ultramafic cumulates that were forming at depth. The Rooiberg magmas may have formed with 10-20% crustal assimilation during fractionation of a very primitive mantle-derived melt within the upper crust of the Kaapvaal Craton. Alternatively, the magmas represent mixtures of melts from a primitive, sub-lithospheric mantle plume and an enriched sub-continental lithospheric mantle (SCLM

  9. Matrix light and pixel light: optical system architecture and requirements to the light source

    Science.gov (United States)

    Spinger, Benno; Timinger, Andreas L.

    2015-09-01

    Modern automotive headlamps enable improved functionality for more driving comfort and safety. Matrix or pixel light headlamps are not restricted to either pure low-beam functionality or pure high beam. Light in the direction of oncoming traffic is selectively switched off, a potential hazard can be marked via an isolated beam, and the illumination on the road can even follow a bend. The optical architectures that enable these advanced functionalities are diverse. Electromechanical shutters and lens units moved by electric motors were the first ways to realize these systems. Switching multiple LED light sources is a more elegant and mechanically robust solution. While many basic functionalities can already be realized with a limited number of LEDs, an increasing number of pixels will lead to more driving comfort and better visibility. The required optical system needs not only to generate a desired beam distribution with high angular dynamics, but also to guarantee minimal stray light and cross talk between the different pixels. The direct projection of the LED array via a lens is a simple but not very efficient optical system. We discuss different optical elements for pre-collimating the light with minimal cross talk and improved contrast between neighboring pixels. Depending on the selected optical system, we derive the basic light source requirements: luminance, surface area, contrast, flux and color homogeneity.

  10. Radar Emission Sources Identification Based on Hierarchical Agglomerative Clustering for Large Data Sets

    Directory of Open Access Journals (Sweden)

    Janusz Dudczyk

    2016-01-01

    More advanced recognition methods, which may recognize particular copies of radars of the same type, are called identification. The identification process of radar devices is a more specialized task which requires methods based on the analysis of distinctive features. These features are extracted from the signals coming from the identified devices. Such a process is called Specific Emitter Identification (SEI). The identification of radar emission sources with the use of classic techniques based on the statistical analysis of basic measurable parameters of a signal, such as Radio Frequency, Amplitude, Pulse Width, or Pulse Repetition Interval, is not sufficient for SEI problems. This paper presents a method of hierarchical data clustering which is used in the process of radar identification. The Hierarchical Agglomerative Clustering Algorithm (HACA), based on the Generalized Agglomerative Scheme (GAS), implemented and used in the research method is parameterized; therefore, it is possible to compare the results. The results of clustering are presented as dendrograms in this paper. The results of grouping and identification based on HACA are compared with other SEI methods in order to assess the degree of their usefulness and effectiveness for systems of the ESM/ELINT class.
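
    As a hedged illustration of the agglomerative scheme described above, the sketch below clusters synthetic pulse descriptors (RF, pulse width, PRI) with SciPy's hierarchical clustering; the emitter parameters and the choice of Ward linkage are assumptions, not the paper's parameterization.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Illustrative only: cluster intercepted pulses by basic measurable parameters
# (RF in MHz, pulse width in us, PRI in us) from two invented emitters.
rng = np.random.default_rng(0)
emitter_a = rng.normal([9400.0, 1.2, 1000.0], [2.0, 0.05, 5.0], size=(30, 3))
emitter_b = rng.normal([9410.0, 0.8, 1250.0], [2.0, 0.05, 5.0], size=(30, 3))
pulses = np.vstack([emitter_a, emitter_b])

# standardize features so no single parameter dominates the distance metric
z = (pulses - pulses.mean(axis=0)) / pulses.std(axis=0)

tree = linkage(z, method="ward")            # one generalized agglomerative scheme
labels = fcluster(tree, t=2, criterion="maxclust")
print(np.bincount(labels))                  # -> two groups of 30 pulses each
```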

  11. On the possibility of the multiple inductively coupled plasma and helicon plasma sources for large-area processes

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jin-Won; Lee, Yun-Seong, E-mail: leeeeys@kaist.ac.kr; Chang, Hong-Young [Low-temperature Plasma Laboratory, Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon 305-701 (Korea, Republic of); An, Sang-Hyuk [Agency of Defense Development, Yuseong-gu, Daejeon 305-151 (Korea, Republic of)

    2014-08-15

    In this study, we attempted to determine the possibility of using multiple inductively coupled plasma (ICP) and helicon plasma sources for large-area processes. Experiments were performed with one and two coils to measure plasma and electrical parameters, and a circuit simulation was performed to measure the current at each coil in the 2-coil experiment. Based on the results, we could confirm the possibility of multiple ICP sources, owing to the direct change of impedance with current and the saturation of impedance due to the skin-depth effect. However, a helicon plasma source is difficult to adapt to multiple sources, due to the consistent change of real impedance caused by mode transition and the low uniformity of the B-field confinement. As a result, it is expected that ICP can be adapted to multiple sources for large-area processes.

  12. Uncertainty analysis methods for quantification of source terms using a large computer code

    International Nuclear Information System (INIS)

    Han, Seok Jung

    1997-02-01

    Quantification of uncertainties in the source term estimations by a large computer code, such as MELCOR and MAAP, is an essential process of current probabilistic safety assessments (PSAs). The main objectives of the present study are (1) to investigate the applicability of a combined procedure of the response surface method (RSM), based on input determined from a statistical design, and the Latin hypercube sampling (LHS) technique for the uncertainty analysis of CsI release fractions under a hypothetical severe accident sequence of a station blackout at the Young-Gwang nuclear power plant, using the MAAP3.0B code as a benchmark problem; and (2) to propose a new measure of uncertainty importance based on distributional sensitivity analysis. On the basis of the results obtained in the present work, the RSM is recommended to be used as a principal tool for an overall uncertainty analysis in source term quantifications, while using the LHS in the calculations of standardized regression coefficients (SRC) and standardized rank regression coefficients (SRRC) to determine the subset of the most important input parameters in the final screening step and to check the cumulative distribution functions (cdfs) obtained by the RSM. Verification of the response surface model for its sufficient accuracy is a prerequisite for the reliability of the final results obtained by the combined procedure proposed in the present work. In the present study a new measure has been developed to utilize the metric distance obtained from cumulative distribution functions (cdfs). The measure has been evaluated for three different cases of distributions in order to assess its characteristics: in the first and second cases the distributions are known analytical distributions, while in the third case the distribution is unknown. The first case is given by symmetric analytical distributions. The second case consists of two asymmetric distributions whose skewness is non-zero
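
    A minimal sketch of the LHS-plus-SRC step described above, with a toy analytic response standing in for a MAAP/MELCOR run: draw a Latin hypercube sample (SciPy >= 1.7), evaluate the response, and regress the standardized output on the standardized inputs to obtain SRCs. Input ranges and the response function are invented.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(1)
sampler = qmc.LatinHypercube(d=3, seed=1)          # three uncertain inputs
u = sampler.random(n=200)                          # samples on the unit cube
# scale to assumed input ranges (purely illustrative parameters)
x = qmc.scale(u, l_bounds=[0.1, 300.0, 1.0], u_bounds=[0.9, 600.0, 10.0])

# toy "source term" response plus noise in place of a severe-accident code run
y = 0.05 * x[:, 0] + 0.0004 * x[:, 1] - 0.002 * x[:, 2] + rng.normal(0, 0.01, 200)

# SRCs: regress the standardized output on the standardized inputs
xs = (x - x.mean(axis=0)) / x.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(xs, ys, rcond=None)
print("standardized regression coefficients:", src)
```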

  13. Escript: Open Source Environment For Solving Large-Scale Geophysical Joint Inversion Problems in Python

    Science.gov (United States)

    Gross, Lutz; Altinay, Cihan; Fenwick, Joel; Smith, Troy

    2014-05-01

    The program package escript has been designed for solving mathematical modeling problems using Python, see Gross et al. (2013). Its development and maintenance have been funded by the Australian Commonwealth to provide open source software infrastructure for the Australian Earth Science community (recent funding by the Australian Geophysical Observing System EIF (AGOS) and the AuScope Collaborative Research Infrastructure Scheme (CRIS)). The key concepts of escript are based on the terminology of spatial functions and partial differential equations (PDEs) - an approach providing abstraction from the underlying spatial discretization method (i.e. the finite element method (FEM)). This feature presents a programming environment to the user which is easy to use even for complex models. Because implementations are independent of the underlying data structures, simulations are easily portable across desktop computers and scalable compute clusters without modifications to the program code. escript has been successfully applied in a variety of applications including modeling mantle convection, melting processes, volcanic flow, earthquakes, faulting, multi-phase flow, block caving and mineralization (see Poulet et al. 2013). The recent escript release (see Gross et al. (2013)) provides an open framework for solving joint inversion problems for geophysical data sets (potential field, seismic and electro-magnetic). The strategy is based on the idea of formulating the inversion problem as an optimization problem with PDE constraints, where the cost function is defined by the data defect and the regularization term for the rock properties, see Gross & Kemp (2013). This approach of first-optimize-then-discretize avoids the assemblage of the - in general - dense sensitivity matrix as used in conventional approaches where discrete programming techniques are applied to the discretized problem (first-discretize-then-optimize). In this paper we will discuss the mathematical framework for

  14. Development of guidance on applications of regulatory requirements for regulating large, contaminated equipment and large decommissioning and decontamination (D and D) components

    International Nuclear Information System (INIS)

    Pope, R.B.; Easton, E.P.; Cook, J.R.; Boyle, R.W.

    1997-01-01

    In 1985, the International Atomic Energy Agency issued revised regulations for the safe transport of radioactive material. Significant were major changes to requirements for Low Specific Activity (LSA) material and Surface Contaminated Objects (SCOs). As these requirements were adopted into regulations in the United States, it was recognised that guidance on how to apply these requirements to large, contaminated/activated pieces of equipment and decommissioning and decontamination (D and D) objects would be needed both by the regulators and those regulated to clarify technical uncertainties and ensure implementation. Thus, the US Department of Transportation and the US Nuclear Regulatory Commission, with assistance of staff from Oak Ridge National Laboratory, are preparing regulatory guidance which will present examples of acceptable methods for demonstrating compliance with the revised rules for large items. Concepts being investigated for inclusion in the pending guidance are discussed in this paper. Under current plans, the guidance will be issued for public comment before final issuance in 1997. (Author)

  15. Development of guidance on applications of regulatory requirements for regulating large, contaminated equipment and large decommissioning and decontamination (D and D) components

    International Nuclear Information System (INIS)

    Pope, R.B.; Easton, E.P.; Cook, J.R.; Boyle, R.W.

    1997-01-01

    In 1985, the International Atomic Energy Agency issued revised regulations for the safe transport of radioactive material. Significant were major changes to requirements for Low Specific Activity material and Surface Contaminated Objects. As these requirements were adopted into regulations in the US, it was recognized that guidance on how to apply these requirements to large, contaminated/activated pieces of equipment and decommissioning and decontamination objects would be needed both by the regulators and those regulated to clarify technical uncertainties and ensure implementation. Thus, the US Department of Transportation and the US Nuclear Regulatory Commission, with assistance of staff from Oak Ridge National Laboratory, are preparing regulatory guidance which will present examples of acceptable methods for demonstrating compliance with the revised rules for large items. Concepts being investigated for inclusion in the pending guidance are discussed in this paper. Under current plans, the guidance will be issued for public comment before final issuance in 1997

  16. 50 CFR 216.92 - Dolphin-safe requirements for tuna harvested in the ETP by large purse seine vessels.

    Science.gov (United States)

    2010-10-01

    ... 50 Wildlife and Fisheries 7 2010-10-01 2010-10-01 false Dolphin-safe requirements for tuna... MAMMALS REGULATIONS GOVERNING THE TAKING AND IMPORTING OF MARINE MAMMALS Dolphin Safe Tuna Labeling § 216.92 Dolphin-safe requirements for tuna harvested in the ETP by large purse seine vessels. (a) U.S...

  17. The Potential and Utilization of Unused Energy Sources for Large-Scale Horticulture Facility Applications under Korean Climatic Conditions

    Directory of Open Access Journals (Sweden)

    In Tak Hyun

    2014-07-01

    Full Text Available As the use of fossil fuel has increased, not only in construction, but also in agriculture due to the drastic industrial development in recent times, the problems of heating costs and global warming are getting worse. Therefore, introduction of more reliable and environmentally-friendly alternative energy sources has become urgent and the same trend is found in large-scale horticulture facilities. In this study, among many alternative energy sources, we investigated the reserves and the potential of various different unused energy sources which have infinite potential, but are nowadays wasted due to limitations in their utilization. In addition, we utilized available unused energy as a heat source for a heat pump in a large-scale horticulture facility and analyzed its feasibility through EnergyPlus simulation modeling. Accordingly, the discharge flow rate from the Fan Coil Unit (FCU) in the horticulture facility, the discharge air temperature, and the return temperature were analyzed. The performance and heat consumption of each heat source were compared with those of conventional boilers. The result showed that the power load of the heat pump was decreased and thus the heat efficiency was increased as the temperature of the heat source was increased. Among the analyzed heat sources, power plant waste heat which had the highest heat source temperature consumed the least electric energy and showed the highest efficiency.

  18. Power flow modelling in electric networks with renewable energy sources in large areas

    International Nuclear Information System (INIS)

    Buhawa, Z. M.; Dvorsky, E.

    2012-01-01

    In many regions of the world there is great potential for utilizing grid-connected renewable power generating systems with capacities of thousands of MW. The optimal utilization of these sources depends on the power flow possibilities through the power network to which they are connected. It is necessary to take into account the long distances between high-output electric power sources and the centres of power consumption, as well as the uneven distribution of the power sources. The article presents solution possibilities for the Libyan region utilizing wind energy sources in the northern inshore regions. (Authors)

  19. Evaluation of Required Water Sources during Extended Loss of All AC Power for CANDU NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Woo Jae; Lee, Kyung Jin; Kim, Min Ki; Kim, Keon Yeop; Park, Da Hee; Oh, Seo Bin [FNC Technology Co., Yongin (Korea, Republic of); Chang, Young Jin; Byun, Choong Seop [KHNP, Daejeon (Korea, Republic of)

    2016-10-15

    The Fukushima accident was caused by a Station Black-Out (SBO) that lasted many hours, triggered by a natural disaster, and resulted in reactor core damage. The purpose of this study is to evaluate the water sources required to maintain hot standby conditions for up to 72 hours during an Extended Loss of All AC Power (ELAP) situation. The analysis was performed with the CATHENA code, which has been developed for best-estimate transient simulation of CANDU plants. This study was carried out to evaluate the strategy for maintaining hot standby conditions during an ELAP situation in CANDU reactors. In this analysis, water was supplied to the steam generators (SGs) by opening the main steam safety valves (MSSVs) and by gravity feed. This can cool the core without damage until the dousing tank is depleted. Before dousing tank depletion, the emergency water supply pump becomes available through emergency power restoration and continuously feeds water to the SGs. It is therefore expected that the reactor core can be cooled without damage for 72 hours if the water sources are sufficient. This result is useful for developing a strategy against SBO, including ELAP situations.

  20. Economics of intermittent renewable energy sources: four essays on large-scale integration into European power systems

    International Nuclear Information System (INIS)

    Henriot, Arthur

    2014-01-01

    This thesis centres on issues of economic efficiency originating from the large-scale development of intermittent renewable energy sources (RES) in Europe. The flexible resources that are necessary to cope with their specificities (variability, low-predictability, site specificity) are already known, but adequate signals are required to foster efficient operation and investment in these resources. A first question is to what extent intermittent RES can remain out of the market at times when they are the main driver of investment and operation in power systems. A second question is whether the current market design is adapted to their specificities. These two questions are tackled in four distinct contributions. The first chapter is a critical literature review. This analysis introduces and confronts two (often implicit) paradigms for RES integration. It then identifies and discusses a set of evolutions required to develop a market design adapted to the large-scale development of RES, such as new definitions of the products exchanged and reorganisation of the sequence of electricity markets. In the second chapter, an analytical model is used to assess the potential of intra-day markets as a flexibility provider to intermittent RES with low production predictability. This study highlights and demonstrates how the potential of intra-day markets is heavily dependent on the evolution of the forecast errors. The third chapter focuses on the benefits of curtailing the production by intermittent RES, as a tool to smooth out their variability and reduce overall generation costs. Another analytical model is employed to anatomise the relationship between these benefits and a set of pivotal parameters. Special attention is also paid to the allocation of these benefits between the different stakeholders. In the fourth chapter, a numerical simulation is used to evaluate the ability of the European transmission system operators to tackle the investment wave required in order to

  1. Working group report on the required atomic database for neutral hydrogen beam penetration in large tokamaks

    International Nuclear Information System (INIS)

    Cox, M.; Boley, C.D.; Janev, R.K.

    1989-01-01

    This report discusses the required atomic database for the physical processes involved in the beam attenuation kinetics, when multistep processes are included, i.e., electron and proton impact processes, impurity-ion impact processes, radiative processes, as well as Lorentz field ionization. It also discusses the required accuracies of different parts of the database in order to achieve the overall accuracy of about 10 percent that is required for the total beam stopping power cross section. 3 refs

  2. 77 FR 6463 - Revisions to Labeling Requirements for Blood and Blood Components, Including Source Plasma...

    Science.gov (United States)

    2012-02-08

    ... Blood Components, Including Source Plasma; Correction AGENCY: Food and Drug Administration, HHS. ACTION..., Including Source Plasma,'' which provided incorrect publication information regarding a 60-day notice that...

  3. Survey of high-voltage pulse technology suitable for large-scale plasma source ion implantation processes

    International Nuclear Information System (INIS)

    Reass, W.A.

    1994-01-01

    Many new plasma process ideas are finding their way from the research lab to the manufacturing plant floor. These require high voltage (HV) pulse power equipment, which must be optimized for application, system efficiency, and reliability. Although no single HV pulse technology is suitable for all plasma processes, various classes of high voltage pulsers may offer a greater versatility and economy to the manufacturer. Technology developed for existing radar and particle accelerator modulator power systems can be utilized to develop a modern large scale plasma source ion implantation (PSII) system. The HV pulse networks can be broadly defined by two classes of systems: those that generate the voltage directly, and those that use some type of pulse forming network and step-up transformer. This article will examine these HV pulse technologies and discuss their applicability to the specific PSII process. Typical systems that will be reviewed will include high power solid state, hard tube systems such as crossed-field "hollow beam" switch tubes and planar tetrodes, and "soft" tube systems with crossatrons and thyratrons. Results will be tabulated and suggestions provided for a particular PSII process.

  4. Radiological source tracking in oil/gas, medical and other industries: requirements and specifications for passive RFID technology

    Energy Technology Data Exchange (ETDEWEB)

    Dowla, Farid U. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-01

    Subsurface sensors that employ radioisotopes, such as 241Am-Be and 137Cs, for reservoir characterization must be tracked for safety and security reasons. Other radiological sources are also widely used in medicine. The radiological source containers, in both applications, are small, mobile and used widely worldwide. The nuclear sources pose radiological dispersal device (RDD) security risks. Security concerns with the industrial use of radionuclide sources are in fact quite high, as it is estimated that each year hundreds of sealed sources go missing, either lost or stolen. Risk mitigation efforts include enhanced regulations, source-use guidelines, and research and development on electronic tracking of sources. This report summarizes the major elements of the requirements and operational concepts of nuclear sources with the goal of developing automated electronic tagging and locating systems.

  5. Sewage sludge ash (SSA) from large and small incineration plants as a potential source of phosphorus - Polish case study.

    Science.gov (United States)

    Smol, Marzena; Kulczycka, Joanna; Kowalski, Zygmunt

    2016-12-15

    The aim of this research is to present the possibility of using the sewage sludge ash (SSA) generated in incineration plants as a secondary source of phosphorus (P). The importance of issues related to P recovery from waste materials results from European Union (EU) legislation, which identified phosphorus as a critical raw material (CRM). Due to the risk of a supply shortage and its impact on the economy, which is greater than for other raw materials, proper management of phosphorus resources is required in order to achieve global P security. Based on available databases and literature, an analysis of the potential use of SSA for P-recovery in Poland was conducted. Currently, approx. 43,000 Mg/year of SSA is produced in large and small incineration plants and, according to the Polish National Waste Management Plan 2014 (NWMP), further steady growth is predicted. This indicates a great potential to recycle phosphorus from SSA and to reintroduce it into the value chain as a component of fertilisers which can be applied directly on fields. The amount of SSA generated in installations, both large and small, varies, and this means that new and different P recovery technology solutions must be developed and put into use in the years to come (e.g. mobile/stationary P recovery installations). The creation of a database focused on the collection and sharing of data about the amount of P recovered in EU and Polish installations is identified as a helpful tool in the development of an efficient P management model for Poland. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Successful large-scale hatchery culture of sandfish (Holothuria scabra) using micro-algae concentrates as a larval food source

    Directory of Open Access Journals (Sweden)

    Thane A. Militz

    2018-02-01

    Full Text Available This paper reports methodology for large-scale hatchery culture of sandfish, Holothuria scabra, in the absence of live, cultured micro-algae. We demonstrate how commercially-available micro-algae concentrates can be incorporated into hatchery protocols as the sole larval food source to completely replace live, cultured micro-algae. Micro-algae concentrates supported comparable hatchery production of sandfish to that of live, cultured micro-algae traditionally used in large-scale hatchery culture. The hatchery protocol presented allowed a single technician to achieve production of more than 18,800 juvenile sandfish at 40 days post-fertilisation in a low-resource hatchery in Papua New Guinea. Growth of auricularia larvae fed micro-algae concentrates was represented by the equation length (μm) = 307.8 × ln(day) + 209.2 (R² = 0.93), while survival over the entire 40-day hatchery cycle was described by the equation survival = 2 × day^−1.06 (R² = 0.74). These results show that micro-algae concentrates have great potential for simplifying hatchery culture of sea cucumbers by reducing infrastructural and technical resources required for live micro-algae culture. The hatchery methodology described in this study is likely to have applicability to low-resource hatcheries throughout the Indo-Pacific and could support regional expansion of sandfish hatchery production.
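    The two regressions reported above can be evaluated directly; the short sketch below plugs a day number into the growth and survival equations quoted in the abstract (treating survival as a fraction of the initial larval number, which is an interpretation rather than something stated in the record).

```python
# Evaluate the growth and survival regressions reported for H. scabra larvae.
# Survival is treated here as a fraction of the initial number; that reading is an assumption.
import math

def larval_length_um(day):
    """Length (um) = 307.8 * ln(day) + 209.2 (R^2 = 0.93)."""
    return 307.8 * math.log(day) + 209.2

def survival_fraction(day):
    """Survival = 2 * day^(-1.06) (R^2 = 0.74)."""
    return 2.0 * day ** -1.06

for d in (5, 20, 40):
    print(d, round(larval_length_um(d)), round(survival_fraction(d), 3))
```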

  7. Aerofoil broadband and tonal noise modelling using stochastic sound sources and incorporated large scale fluctuations

    Science.gov (United States)

    Proskurov, S.; Darbyshire, O. R.; Karabasov, S. A.

    2017-12-01

    The present work discusses modifications to the stochastic Fast Random Particle Mesh (FRPM) method featuring both tonal and broadband noise sources. The technique relies on the combination of incorporated vortex-shedding resolved flow available from Unsteady Reynolds-Averaged Navier-Stokes (URANS) simulation with the fine-scale turbulence FRPM solution generated via the stochastic velocity fluctuations in the context of vortex sound theory. In contrast to the existing literature, our method encompasses a unified treatment for broadband and tonal acoustic noise sources at the source level, thus, accounting for linear source interference as well as possible non-linear source interaction effects. When sound sources are determined, for the sound propagation, Acoustic Perturbation Equations (APE-4) are solved in the time-domain. Results of the method's application for two aerofoil benchmark cases, with both sharp and blunt trailing edges are presented. In each case, the importance of individual linear and non-linear noise sources was investigated. Several new key features related to the unsteady implementation of the method were tested and brought into the equation. Encouraging results have been obtained for benchmark test cases using the new technique which is believed to be potentially applicable to other airframe noise problems where both tonal and broadband parts are important.

  8. REQUIREMENTS TO THE LIMITATION OF POPULATION EXPOSURE FROM THE NATURAL IONIZING IRRADIATION SOURCES IN INDUSTRIAL CONDITIONS

    Directory of Open Access Journals (Sweden)

    I. P. Stamat

    2010-01-01

    Full Text Available The paper presents conceptually new requirements for the limitation of population exposure from natural ionizing irradiation sources in industrial conditions, introduced into the Basic Sanitary Rules of Radiation Safety (OSPORB-99/2010). It is shown that the introduction of these requirements is aimed, first of all, at resolving a variety of previously existing serious contradictions in the organization of radiation safety control and supervision of the impact of natural ionizing irradiation sources in industry.

  9. Reflection processing of the large-N seismic data from the Source Physics Experiment (SPE)

    Energy Technology Data Exchange (ETDEWEB)

    Paschall, Olivia C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-18

    The purpose of the SPE is to develop a more physics-based model for nuclear explosion identification to understand the development of S-waves from explosion sources in order to enhance nuclear test ban treaty monitoring.

  10. Conceptual requirements for large fusion experiment control, data, robotics, and management systems

    International Nuclear Information System (INIS)

    Gaudreau, M.P.J.; Sullivan, J.D.

    1987-05-01

    The conceptual system requirements for the control, data, robotics, and project management (CDRM) system for the next generation of fusion experiments are developed by drawing on the success of the Tara control and data system. The requirements are described in terms of an integrated but separable matrix of well-defined interfaces among the various systems and subsystems. The study stresses modularity, performance, cost effectiveness, and exportability

  11. Large-scale use of mosquito larval source management for malaria control in Africa: a cost analysis.

    Science.gov (United States)

    Worrall, Eve; Fillinger, Ulrike

    2011-11-08

    At present, large-scale use of two malaria vector control methods, long-lasting insecticidal nets (LLINs) and indoor residual spraying (IRS) is being scaled up in Africa with substantial funding from donors. A third vector control method, larval source management (LSM), has been historically very successful and is today widely used for mosquito control globally, except in Africa. With increasing risk of insecticide resistance and a shift to more exophilic vectors, LSM is now under re-evaluation for use against afro-tropical vector species. Here the costs of this intervention were evaluated. The 'ingredients approach' was used to estimate the economic and financial costs per person protected per year (pppy) for large-scale LSM using microbial larvicides in three ecologically diverse settings: (1) the coastal metropolitan area of Dar es Salaam in Tanzania, (2) a highly populated Kenyan highland area (Vihiga District), and (3) a lakeside setting in rural western Kenya (Mbita Division). Two scenarios were examined to investigate the cost implications of using alternative product formulations. Sensitivity analyses on product prices were carried out. The results show that for programmes using the same granular formulation larviciding costs the least pppy in Dar es Salaam (US$0.94), approximately 60% more in Vihiga District (US$1.50) and the most in Mbita Division (US$2.50). However, these costs are reduced substantially if an alternative water-dispensable formulation is used; in Vihiga, this would reduce costs to US$0.79 and, in Mbita Division, to US$1.94. Larvicide and staff salary costs each accounted for approximately a third of the total economic costs per year. The cost pppy depends mainly on: (1) the type of formulation required for treating different aquatic habitats, (2) the human population density relative to the density of aquatic habitats and (3) the potential to target the intervention in space and/or time. Costs for LSM compare favourably with costs for IRS

  12. Large-scale use of mosquito larval source management for malaria control in Africa: a cost analysis

    Science.gov (United States)

    2011-01-01

    Background At present, large-scale use of two malaria vector control methods, long-lasting insecticidal nets (LLINs) and indoor residual spraying (IRS) is being scaled up in Africa with substantial funding from donors. A third vector control method, larval source management (LSM), has been historically very successful and is today widely used for mosquito control globally, except in Africa. With increasing risk of insecticide resistance and a shift to more exophilic vectors, LSM is now under re-evaluation for use against afro-tropical vector species. Here the costs of this intervention were evaluated. Methods The 'ingredients approach' was used to estimate the economic and financial costs per person protected per year (pppy) for large-scale LSM using microbial larvicides in three ecologically diverse settings: (1) the coastal metropolitan area of Dar es Salaam in Tanzania, (2) a highly populated Kenyan highland area (Vihiga District), and (3) a lakeside setting in rural western Kenya (Mbita Division). Two scenarios were examined to investigate the cost implications of using alternative product formulations. Sensitivity analyses on product prices were carried out. Results The results show that for programmes using the same granular formulation larviciding costs the least pppy in Dar es Salaam (US$0.94), approximately 60% more in Vihiga District (US$1.50) and the most in Mbita Division (US$2.50). However, these costs are reduced substantially if an alternative water-dispensable formulation is used; in Vihiga, this would reduce costs to US$0.79 and, in Mbita Division, to US$1.94. Larvicide and staff salary costs each accounted for approximately a third of the total economic costs per year. The cost pppy depends mainly on: (1) the type of formulation required for treating different aquatic habitats, (2) the human population density relative to the density of aquatic habitats and (3) the potential to target the intervention in space and/or time. Conclusion Costs for LSM
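    The headline metric in this record is the cost per person protected per year (pppy), obtained with the 'ingredients approach' by summing annualized ingredient costs and dividing by the protected population; the sketch below shows that arithmetic with purely illustrative ingredient values, not the study's data.

```python
# Minimal sketch of the 'ingredients approach': sum annualized ingredient costs and divide
# by the population protected. All ingredient values below are illustrative placeholders.
def cost_pppy(annual_costs, population_protected):
    return sum(annual_costs.values()) / population_protected

ingredients = {
    "larvicide": 120_000.0,        # US$/year (illustrative)
    "staff_salaries": 110_000.0,   # US$/year (illustrative)
    "equipment_transport": 60_000.0,
}
# Prints ~0.94 only because the placeholder inputs were chosen to land near that value.
print(round(cost_pppy(ingredients, population_protected=310_000), 2))
```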

  13. A Lidar-derived evaluation of watershed-scale large woody debris sources and recruitment mechanisms: costal Maine, USA

    Science.gov (United States)

    A. ​Kasprak; F. J. Magilligan; K. H. Nislow; N. P. Snyder

    2012-01-01

    In‐channel large woody debris (LWD) promotes quality aquatic habitat through sediment sorting, pool scouring and in‐stream nutrient retention and transport. LWD recruitment occurs by numerous ecological and geomorphic mechanisms including channel migration, mass wasting and natural tree fall, yet LWD sourcing on the watershed scale remains poorly constrained. We...

  14. The Contribution of International Large-Scale Assessments to Educational Research: Combining Individual and Institutional Data Sources

    Science.gov (United States)

    Strietholt, Rolf; Scherer, Ronny

    2018-01-01

    The present paper aims to discuss how data from international large-scale assessments (ILSAs) can be utilized and combined, even with other existing data sources, in order to monitor educational outcomes and study the effectiveness of educational systems. We consider different purposes of linking data, namely, extending outcomes measures,…

  15. Temperature field due to time-dependent heat sources in a large rectangular grid - Derivation of analytical solution

    International Nuclear Information System (INIS)

    Claesson, J.; Probert, T.

    1996-01-01

    The temperature field in rock due to a large rectangular grid of heat-releasing canisters containing nuclear waste is studied. By superposition, the solution is divided into different parts. There is a global temperature field due to the large rectangular canister area, while a local field accounts for the remaining heat source problem. The global field is reduced to a single integral. The local field is also solved analytically using solutions for a finite line heat source and for an infinite grid of point sources. The local solution is reduced to three parts, each of which depends on two spatial coordinates only. The temperatures at the envelope of a canister are given by a single thermal resistance, which is given by an explicit formula. The results are illustrated by a few numerical examples dealing with the KBS-3 concept for storage of nuclear waste. 8 refs
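    For readers who want a concrete building block, the classical temperature rise around a continuous point heat source of constant strength Q in an infinite medium with thermal conductivity lambda and diffusivity a is the standard textbook kernel below (not the paper's own derivation):

```latex
T(r,t) \;=\; \frac{Q}{4\pi\lambda r}\,\operatorname{erfc}\!\left(\frac{r}{2\sqrt{a\,t}}\right)
```

    A finite line source is obtained by integrating this kernel along the canister axis, and a grid of canisters by summing it over the source positions, which is the kind of superposition the abstract refers to.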

  16. External heating and current drive source requirements towards steady-state operation in ITER

    Science.gov (United States)

    Poli, F. M.; Kessel, C. E.; Bonoli, P. T.; Batchelor, D. B.; Harvey, R. W.; Snyder, P. B.

    2014-07-01

    Steady state scenarios envisaged for ITER aim at optimizing the bootstrap current, while maintaining sufficient confinement and stability to provide the necessary fusion yield. Non-inductive scenarios will need to operate with internal transport barriers (ITBs) in order to reach adequate fusion gain at typical currents of 9 MA. However, the large pressure gradients associated with ITBs in regions of weak or negative magnetic shear can be conducive to ideal MHD instabilities, reducing the no-wall limit. The E × B flow shear from toroidal plasma rotation is expected to be low in ITER, with a major role in the ITB dynamics being played by magnetic geometry. Combinations of heating and current drive (H/CD) sources that sustain reversed magnetic shear profiles throughout the discharge are the focus of this work. Time-dependent transport simulations indicate that a combination of electron cyclotron (EC) and lower hybrid (LH) waves is a promising route towards steady state operation in ITER. The LH forms and sustains expanded barriers and the EC deposition at mid-radius freezes the bootstrap current profile, stabilizing the barrier and leading to confinement levels 50% higher than typical H-mode energy confinement times. Using LH spectra centred on a parallel refractive index of 1.75-1.85, the performance of these plasma scenarios is close to the ITER target of 9 MA non-inductive current, global confinement gain H98 = 1.6 and fusion gain Q = 5.

  17. Large Scale Computing and Storage Requirements for Biological and Environmental Research

    Energy Technology Data Exchange (ETDEWEB)

    DOE Office of Science, Biological and Environmental Research Program Office (BER),

    2009-09-30

    In May 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of Biological and Environmental Research (BER) held a workshop to characterize HPC requirements for BER-funded research over the subsequent three to five years. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. Chief among them: scientific progress in BER-funded research is limited by current allocations of computational resources. Additionally, growth in mission-critical computing -- combined with new requirements for collaborative data manipulation and analysis -- will demand ever increasing computing, storage, network, visualization, reliability and service richness from NERSC. This report expands upon these key points and adds others. It also presents a number of "case studies" as significant representative samples of the needs of science teams within BER. Workshop participants were asked to codify their requirements in this "case study" format, summarizing their science goals, methods of solution, current and 3-5 year computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, "multi-core" environment that is expected to dominate HPC architectures over the next few years.

  18. The spallation neutron source SINQ. A new large facility for research at PSI

    International Nuclear Information System (INIS)

    Bauer, G.S.; Crawford, J.F.

    1994-01-01

    This document is intended to familiarize the non-specialist with the principles of neutron scattering and some of its applications. It presents an overview of the foundations of neutron scattering, the basic types of instruments used, and their principles of operation. The design concept and some technical details of the spallation neutron source are described for the benefit of the scientifically or technically interested reader. In future this source will form the heart of the instruments available to PSI's wide community of neutron scattering researchers. (author) 32 figs., 1 tab

  19. Media Use and Source Trust among Muslims in Seven Countries: Results of a Large Random Sample Survey

    Directory of Open Access Journals (Sweden)

    Steven R. Corman

    2013-12-01

    Full Text Available Despite the perceived importance of media in the spread of and resistance against Islamist extremism, little is known about how Muslims use different kinds of media to get information about religious issues, and what sources they trust when doing so. This paper reports the results of a large, random sample survey among Muslims in seven countries in Southeast Asia, West Africa and Western Europe, which helps fill this gap. Results show a diverse set of profiles of media use and source trust that differ by country, with overall low trust in mediated sources of information. Based on these findings, we conclude that mass media is still the most common source of religious information for Muslims, but that trust in mediated information is low overall. This suggests that media are probably best used to persuade opinion leaders, who will then carry anti-extremist messages through more personal means.

  20. Large Municipal Waste Combustors (LMWC): New Source Performance Standards (NSPS) and Emissions Guidelines

    Science.gov (United States)

    Learn about the NSPS, emission guidelines and compliance times for large municipal waste combustors (MWC) by reading the rule summary, rule history and the Federal Register citations and supporting documents.

  1. Large Deployable Reflector (LDR) system concept and technology definition study. Analysis of space station requirements for LDR

    Science.gov (United States)

    Agnew, Donald L.; Vinkey, Victor F.; Runge, Fritz C.

    1989-01-01

    A study was conducted to determine how the Large Deployable Reflector (LDR) might benefit from the use of the space station for assembly, checkout, deployment, servicing, refurbishment, and technology development. Requirements that must be met by the space station to supply benefits for a selected scenario are summarized. Quantitative and qualitative data are supplied. Space station requirements for LDR which may be utilized by other missions are identified. A technology development mission for LDR is outlined and requirements summarized. A preliminary experiment plan is included. Space Station Data Base SAA 0020 and TDM 2411 are updated.

  2. Large Deployable Reflector (LDR) system concept and technology definition study. Analysis of space station requirements for LDR

    Science.gov (United States)

    Agnew, Donald L.; Vinkey, Victor F.; Runge, Fritz C.

    1989-04-01

    A study was conducted to determine how the Large Deployable Reflector (LDR) might benefit from the use of the space station for assembly, checkout, deployment, servicing, refurbishment, and technology development. Requirements that must be met by the space station to supply benefits for a selected scenario are summarized. Quantitative and qualitative data are supplied. Space station requirements for LDR which may be utilized by other missions are identified. A technology development mission for LDR is outlined and requirements summarized. A preliminary experiment plan is included. Space Station Data Base SAA 0020 and TDM 2411 are updated.

  3. Assessment of the technology required to develop photovoltaic power system for large scale national energy applications

    Science.gov (United States)

    Lutwack, R.

    1974-01-01

    A technical assessment of a program to develop photovoltaic power system technology for large-scale national energy applications was made by analyzing and judging the alternative candidate photovoltaic systems and development tasks. A program plan was constructed based on achieving the 10 year objective of a program to establish the practicability of large-scale terrestrial power installations using photovoltaic conversion arrays costing less than $0.50/peak W. Guidelines for the tasks of a 5 year program were derived from a set of 5 year objectives deduced from the 10 year objective. This report indicates the need for an early emphasis on the development of the single-crystal Si photovoltaic system for commercial utilization; a production goal of 5 × 10^8 peak W/year of $0.50 cells was projected for the year 1985. The developments of other photovoltaic conversion systems were assigned to longer range development roles. The status of the technology developments and the applicability of solar arrays in particular power installations, ranging from houses to central power plants, was scheduled to be verified in a series of demonstration projects. The budget recommended for the first 5 year phase of the program is $268.5M.
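    The production goal and cell price quoted above imply an annual module value that is easy to check; the one-off calculation below simply multiplies the two figures taken from the abstract.

```python
# Implied annual cell value at the 1985 goal: 5e8 peak W/year at $0.50 per peak W.
annual_peak_watts = 5e8
price_per_peak_watt = 0.50
print(f"${annual_peak_watts * price_per_peak_watt / 1e6:.0f}M per year")  # $250M per year
```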

  4. Large Scale Computing and Storage Requirements for Fusion Energy Sciences: Target 2017

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard

    2014-05-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,500 users working on some 650 projects that involve nearly 600 codes in a wide variety of scientific disciplines. In March 2013, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Fusion Energy Sciences (FES) held a review to characterize High Performance Computing (HPC) and storage requirements for FES research through 2017. This report is the result.

  5. Interactive effects of large- and small-scale sources of feral honey-bees for sunflower in the Argentine Pampas.

    Directory of Open Access Journals (Sweden)

    Agustín Sáez

    Full Text Available Pollinators for animal pollinated crops can be provided by natural and semi-natural habitats, ranging from large vegetation remnants to small areas of non-crop land in an otherwise highly modified landscape. It is unknown, however, how different small- and large-scale habitat patches interact as pollinator sources. In the intensively managed Argentine Pampas, we studied the additive and interactive effects of large expanses (up to 2200 ha) of natural habitat, represented by untilled isolated "sierras", and narrow (3-7 m wide) strips of semi-natural habitat, represented by field margins, as pollinator sources for sunflower (Helianthus annus). We estimated visitation rates by feral honey-bees, Apis mellifera, and native flower visitors (as a group) at 1, 5, 25, 50 and 100 m from a field margin in 17 sunflower fields 0-10 km distant from the nearest sierra. Honey-bees dominated the pollinator assemblage accounting for >90% of all visits to sunflower inflorescences. Honey-bee visitation was strongly affected by proximity to the sierras decreasing by about 70% in the most isolated fields. There was also a decline in honey-bee visitation with distance from the field margin, which was apparent with increasing field isolation, but undetected in fields nearby large expanses of natural habitat. The probability of observing a native visitor decreased with isolation from the sierras, but in other respects visitation by flower visitors other than honey-bees was mostly unaffected by the habitat factors assessed in this study. Overall, we found strong hierarchical and interactive effects between the study large and small-scale pollinator sources. These results emphasize the importance of preserving natural habitats and managing actively field verges in the absence of large remnants of natural habitat for improving pollinator services.

  6. Interactive effects of large- and small-scale sources of feral honey-bees for sunflower in the Argentine Pampas.

    Science.gov (United States)

    Sáez, Agustín; Sabatino, Malena; Aizen, Marcelo A

    2012-01-01

    Pollinators for animal pollinated crops can be provided by natural and semi-natural habitats, ranging from large vegetation remnants to small areas of non-crop land in an otherwise highly modified landscape. It is unknown, however, how different small- and large-scale habitat patches interact as pollinator sources. In the intensively managed Argentine Pampas, we studied the additive and interactive effects of large expanses (up to 2200 ha) of natural habitat, represented by untilled isolated "sierras", and narrow (3-7 m wide) strips of semi-natural habitat, represented by field margins, as pollinator sources for sunflower (Helianthus annus). We estimated visitation rates by feral honey-bees, Apis mellifera, and native flower visitors (as a group) at 1, 5, 25, 50 and 100 m from a field margin in 17 sunflower fields 0-10 km distant from the nearest sierra. Honey-bees dominated the pollinator assemblage accounting for >90% of all visits to sunflower inflorescences. Honey-bee visitation was strongly affected by proximity to the sierras decreasing by about 70% in the most isolated fields. There was also a decline in honey-bee visitation with distance from the field margin, which was apparent with increasing field isolation, but undetected in fields nearby large expanses of natural habitat. The probability of observing a native visitor decreased with isolation from the sierras, but in other respects visitation by flower visitors other than honey-bees was mostly unaffected by the habitat factors assessed in this study. Overall, we found strong hierarchical and interactive effects between the study large and small-scale pollinator sources. These results emphasize the importance of preserving natural habitats and managing actively field verges in the absence of large remnants of natural habitat for improving pollinator services.

  7. Requirements to Create a Persistent, Open Source, Mirror World for Military Applications

    National Research Council Canada - National Science Library

    Sanders, Kent

    2007-01-01

    .... Solutions to these problems are proposed and analyzed, including using existing commercial and open source projects in development, using projects already deployed, or the feasibility of developing...

  8. Energetic and Economic Assessment of Pipe Network Effects on Unused Energy Source System Performance in Large-Scale Horticulture Facilities

    Directory of Open Access Journals (Sweden)

    Jae Ho Lee

    2015-04-01

    Full Text Available As the use of fossil fuel has increased, not only in construction, but also in agriculture due to the drastic industrial development in recent times, the problems of heating costs and global warming are getting worse. Therefore, the introduction of more reliable and environmentally-friendly alternative energy sources has become urgent and the same trend is found in large-scale horticulture facilities. In this study, among many alternative energy sources, we investigated the reserves and the potential of various different unused energy sources which have infinite potential, but are nowadays wasted due to limitations in their utilization. This study investigated the effects of the distance between the greenhouse and the actual heat source by taking into account the heat transfer taking place inside the pipe network. This study considered CO2 emissions and economic aspects to determine the optimal heat source. Payback period analysis against initial investment cost shows that a heat pump based on a power plant’s waste heat has the shortest payback period of 7.69 years at a distance of 0 km. On the other hand, a heat pump based on geothermal heat showed the shortest payback period of 10.17 years at a distance of 5 km, indicating that heat pumps utilizing geothermal heat were the most effective model if the heat transfer inside the pipe network between the greenhouse and the actual heat source is taken into account.
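    A minimal sketch of the payback-period comparison mentioned above: the extra capital cost of the heat-pump system is divided by its annual operating savings relative to a conventional boiler. All figures below are placeholders chosen only to reproduce a 7.69-year example; they are not values from the study.

```python
# Simple (undiscounted) payback period: extra capital cost / annual operating savings.
# All figures are placeholders, not values from the study.
def payback_years(extra_capex, annual_savings):
    return extra_capex / annual_savings

print(round(payback_years(extra_capex=230_000.0, annual_savings=29_900.0), 2))  # ~7.69 years
```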

  9. Complex active regions as the main source of extreme and large solar proton events

    Science.gov (United States)

    Ishkov, V. N.

    2013-12-01

    A study of solar proton sources indicated that solar flare events responsible for ≥2000 pfu proton fluxes mostly occur in complex active regions (CARs), i.e., in transition structures between active regions and activity complexes. Different classes of similar structures and their relation to solar proton events (SPEs) and evolution, depending on the origination conditions, are considered. Arguments in favor of the fact that sunspot groups with extreme dimensions are CARs are presented. An analysis of the flare activity in a CAR resulted in the detection of "physical" boundaries, which separate magnetic structures of the same polarity and are responsible for the independent development of each structure.

  10. Early mortality experience in a large military cohort and a comparison of mortality data sources

    Directory of Open Access Journals (Sweden)

    Smith Besa

    2010-05-01

    Full Text Available Abstract Background Complete and accurate ascertainment of mortality is critically important in any longitudinal study. Tracking of mortality is particularly essential among US military members because of unique occupational exposures (e.g., worldwide deployments as well as combat experiences). Our study objectives were to describe the early mortality experience of Panel 1 of the Millennium Cohort, consisting of participants in a 21-year prospective study of US military service members, and to assess data sources used to ascertain mortality. Methods A population-based random sample (n = 256,400) of all US military service members on service rosters as of October 1, 2000, was selected for study recruitment. Among this original sample, 214,388 had valid mailing addresses, were not in the pilot study, and comprised the group referred to in this study as the invited sample. Panel 1 participants were enrolled from 2001 to 2003, represented all armed service branches, and included active-duty, Reserve, and National Guard members. Crude death rates, as well as age- and sex-adjusted overall and age-adjusted, category-specific death rates were calculated and compared for participants (n = 77,047) and non-participants (n = 137,341) based on data from the Social Security Administration Death Master File, Department of Veterans Affairs (VA) files, and the Department of Defense Medical Mortality Registry, 2001-2006. Numbers of deaths identified by these three data sources, as well as the National Death Index, were compared for 2001-2004. Results There were 341 deaths among the participants for a crude death rate of 80.7 per 100,000 person-years (95% confidence interval [CI]: 72.2, 89.3) compared to 820 deaths and a crude death rate of 113.2 per 100,000 person-years (95% CI: 105.4, 120.9) for non-participants. Age-adjusted, category-specific death rates highlighted consistently higher rates among study non-participants. Although there were advantages and
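    The crude rates in this record are deaths divided by person-years of follow-up, scaled to 100,000; the sketch below recomputes the participant rate with a normal-approximation confidence interval, back-calculating the person-years from the reported rate (so the inputs are approximate, not the study's raw data).

```python
# Crude death rate per 100,000 person-years, with an approximate 95% CI based on the
# normal approximation to the Poisson death count. Person-years are back-calculated
# from the reported rate, so the inputs are approximate.
import math

def crude_rate(deaths, person_years, per=100_000):
    rate = deaths / person_years * per
    se = math.sqrt(deaths) / person_years * per
    return rate, rate - 1.96 * se, rate + 1.96 * se

participant_py = 341 / 80.7 * 100_000      # ~422,600 person-years implied by the abstract
print([round(x, 1) for x in crude_rate(341, participant_py)])   # ~[80.7, 72.1, 89.3]
```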

  11. Large aperture contact ionized Cs+1 ion source for an induction linac

    International Nuclear Information System (INIS)

    Abbott, S.; Chupp, W.; Faltens, A.; Herrmannsfeldt, W.; Hoyer, E.; Keefe, D.; Kim, C.H.; Rosenblum, S.; Shiloh, J.

    1979-03-01

    A 500 keV, one-ampere Cs+1 ion beam has been generated by contact ionization with a 30 cm dia. iridium hot plate. Reproducibility of space charge limited ion current waveforms at repetition rates up to 1 Hz has been verified. The beam is characterized as very bright and suitable as an ion source for the induction linac based heavy ion fusion scheme. The hot anode plate was found to be reliable and self-cleaning during operation.

  12. Development of a large proton accelerator for innovative researches; development of high power RF source

    Energy Technology Data Exchange (ETDEWEB)

    Chung, K. H.; Lee, K. O.; Shin, H. M.; Chung, I. Y. [KAPRA, Seoul (Korea); Kim, D. I. [Inha University, Incheon (Korea); Noh, S. J. [Dankook University, Seoul (Korea); Ko, S. K. [Ulsan University, Ulsan (Korea); Lee, H. J. [Cheju National University, Cheju (Korea); Choi, W. H. [Korea Advanced Institute of Science and Technology, Taejeon (Korea)

    2002-05-01

    This study was performed with the objective of designing and developing the KOMAC proton accelerator RF system. For the development of the high power RF source for the CCDTL (coupled cavity drift tube linac), a medium power RF system using a UHF klystron for broadcasting was integrated, and with this RF system we obtained basic design data, operating experience and code-validity test data. Based on the medium power RF system experimental data, the high power RF system for the CCDTL was designed and its performance was analyzed. 16 refs., 64 figs., 27 tabs. (Author)

  13. Sustainability of Open-Source Software Organizations as Underpinning for Sustainable Interoperability on Large Scales

    Science.gov (United States)

    Fulker, D. W.; Gallagher, J. H. R.

    2015-12-01

    OPeNDAP's Hyrax data server is an open-source framework fostering interoperability via easily-deployed Web services. Compatible with solutions listed in the (PA001) session description—federation, rigid standards and brokering/mediation—the framework can support tight or loose coupling, even with dependence on community-contributed software. Hyrax is a Web-services framework with a middleware-like design and a handler-style architecture that together reduce the interoperability challenge (for N datatypes and M user contexts) to an O(N+M) problem, similar to brokering. Combined with an open-source ethos, this reduction makes Hyrax a community tool for gaining interoperability. E.g., in its response to the Big Earth Data Initiative (BEDI), NASA references OPeNDAP-based interoperability. Assuming its suitability, the question becomes: how sustainable is OPeNDAP, a small not-for-profit that produces open-source software, i.e., has no software-sales? In other words, if geoscience interoperability depends on OPeNDAP and similar organizations, are those entities in turn sustainable? Jim Collins (in Good to Great) highlights three questions that successful companies can answer (paraphrased here): What is your passion? Where is your world-class excellence? What drives your economic engine? We attempt to shed light on OPeNDAP sustainability by examining these. Passion: OPeNDAP has a focused passion for improving the effectiveness of scientific data sharing and use, as deeply-cooperative community endeavors. Excellence: OPeNDAP has few peers in remote, scientific data access. Skills include computer science with experience in data science, (operational, secure) Web services, and software design (for servers and clients, where the latter vary from Web pages to standalone apps and end-user programs). Economic Engine: OPeNDAP is an engineering services organization more than a product company, despite software being key to OPeNDAP's reputation. In essence, provision of
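    The O(N+M) claim in this record can be illustrated with a toy handler registry: one reader per data format (N of them) and one encoder per response type (M of them) compose into all N×M conversion paths without writing N×M dedicated converters. The sketch below is purely illustrative and has no relation to the actual Hyrax code base.

```python
# Toy illustration of the handler-style decomposition: N readers + M encoders compose
# into N*M conversion paths. Illustrative only; not the Hyrax/OPeNDAP implementation.
readers = {                      # one handler per data format (N of these)
    "netcdf": lambda path: {"source": path, "values": [1, 2, 3]},
    "hdf5":   lambda path: {"source": path, "values": [4, 5, 6]},
}
encoders = {                     # one response encoder per user context (M of these)
    "json": lambda data: str(data),
    "csv":  lambda data: ",".join(map(str, data["values"])),
}

def serve(fmt, ctx, path):
    return encoders[ctx](readers[fmt](path))

print(serve("netcdf", "csv", "example.nc"))   # 1,2,3
```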

  14. XUV synchrotron optical components for the Advanced Light Source: Summary of the requirements and the developmental program

    International Nuclear Information System (INIS)

    McKinney, W.; Irick, S.; Lunt, D.

    1992-07-01

    We give a brief summary of the requirements for water cooled optical components for the Advanced Light Source (ALS), a third generation synchrotron radiation source under construction at Lawrence Berkeley Laboratory (LBL). Materials choices, surface figure and smoothness specifications, and metrology systems for measuring the plated metal surfaces are discussed. Results from a finished water cooled copper alloy mirror will be used to demonstrate the state of the art in optical metrology with the Takacs Long Trace Profiler (LTP II)

  15. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    Science.gov (United States)

    Dong, Xianlei; Bollen, Johan

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.

  16. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    Directory of Open Access Journals (Sweden)

    Xianlei Dong

    Full Text Available Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.
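    A heavily simplified sketch of the type of computation this record describes: normalize a basket of search-volume series, average them into a behavioral index, and correlate that index with an official confidence series. The data generated below are synthetic and the pipeline is only an assumption about the general approach, not the authors' model.

```python
# Simplified sketch: build a behavioral index from normalized search-volume series and
# correlate it with an official consumer-confidence series. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
weeks = 52
official = 100 + np.cumsum(rng.normal(0, 1, weeks))          # synthetic confidence series
terms = official[None, :] * rng.uniform(0.8, 1.2, (5, 1)) \
        + rng.normal(0, 5, (5, weeks))                       # 5 hypothetical query-term series

z = (terms - terms.mean(axis=1, keepdims=True)) / terms.std(axis=1, keepdims=True)
behavioral_index = z.mean(axis=0)                            # simple average of z-scores
print(round(float(np.corrcoef(behavioral_index, official)[0, 1]), 2))
```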

  17. Requirements for the register of physical persons for the preparation, use and handling radioactive sources

    International Nuclear Information System (INIS)

    1998-07-01

    This norm establishes the registration process for superior-level professionals authorized for the preparation, use and handling of radioactive sources. The norm applies to the individuals (physical persons) applying for registration for the preparation, use and handling of radioactive sources in radioactive installations in industry, agriculture, teaching and research.

  18. A Tracking Analyst for large 3D spatiotemporal data from multiple sources (case study: Tracking volcanic eruptions in the atmosphere)

    Science.gov (United States)

    Gad, Mohamed A.; Elshehaly, Mai H.; Gračanin, Denis; Elmongui, Hicham G.

    2018-02-01

    This research presents a novel Trajectory-based Tracking Analyst (TTA) that can track and link spatiotemporally variable data from multiple sources. The proposed technique uses trajectory information to determine the positions of time-enabled and spatially variable scatter data at any given time through a combination of along trajectory adjustment and spatial interpolation. The TTA is applied in this research to track large spatiotemporal data of volcanic eruptions (acquired using multi-sensors) in the unsteady flow field of the atmosphere. The TTA enables tracking injections into the atmospheric flow field, the reconstruction of the spatiotemporally variable data at any desired time, and the spatiotemporal join of attribute data from multiple sources. In addition, we were able to create a smooth animation of the volcanic ash plume at interactive rates. The initial results indicate that the TTA can be applied to a wide range of multiple-source data.

  19. HARVESTING, INTEGRATING AND DISTRIBUTING LARGE OPEN GEOSPATIAL DATASETS USING FREE AND OPEN-SOURCE SOFTWARE

    Directory of Open Access Journals (Sweden)

    R. Oliveira

    2016-06-01

    Full Text Available Federal, State and Local government agencies in the USA are investing heavily in the dissemination of Open Data sets produced by each of them. The main driver behind this thrust is to increase agencies’ transparency and accountability, as well as to improve citizens’ awareness. However, not all Open Data sets are easy to access and integrate with other Open Data sets available even from the same agency. The City and County of Denver Open Data Portal distributes several types of geospatial datasets; one of them is the city parcels information layer, containing 224,256 records. Although this data layer contains many pieces of information, it is incomplete for some custom purposes. Open-source software was used to first collect data from diverse City of Denver Open Data sets, then upload them to a repository in the Cloud, where they were processed using a PostgreSQL installation on the Cloud and Python scripts. Our method was able to extract non-spatial information from a ‘not-ready-to-download’ source that could then be combined with the initial data set to enhance its potential use.

  20. A novel absorption refrigeration cycle for heat sources with large temperature change

    International Nuclear Information System (INIS)

    Yan, Xiaona; Chen, Guangming; Hong, Daliang; Lin, Shunrong; Tang, Liming

    2013-01-01

    To increase the use efficiency of available thermal energy in waste gas/water, a novel, highly efficient absorption refrigeration cycle, regarded as an improved single-effect/double-lift configuration, is proposed. The improved cycle, using an evaporator/absorber (E/A), increases the coefficient of performance and reduces the irreversible loss. Water–lithium bromide is used as the working pair and a simulation study under steady working conditions is conducted. The results show that the temperature of the discharged waste gas is about 20 °C lower than that of the conventional single-effect cycle, and the novel cycle we propose can achieve more cooling capacity per unit mass of waste gas/water at the simulated working conditions. -- Graphical abstract: Pressure – temperature diagram for water – lithium bromide. Highlights: ► A novel waste heat-driven absorption refrigeration cycle is presented. ► The novel cycle can reject heat at much lower temperature. ► The available temperature range of heat source of the proposed cycle is wider. ► Multiple heat sources with different temperatures can be used in the novel cycle

  1. Oil crops: requirements and possibilities for their utilization as an energy source

    International Nuclear Information System (INIS)

    Boerner, G.; Schoenefeldt, J.; Mehring, I.

    1995-01-01

    Although vegetable oils have been used as an energy source for centuries, they were used almost exclusively in oil lamps. Their value as a foodstuff and the availability and low price of mineral oil had for a long time kept them from being seriously considered as a potential energy source. Now, owing to the increasing cost of fossil fuel, particularly oil, and increasing industrial energy consumption, as well as the negative impact of fossil fuel use on the environment, there is interest in a number of alternative energy sources, including vegetable oils. The discussion in this paper focuses on the use of untreated vegetable oils, particularly rapeseed oil. The energy potential of rapeseed oil is explored first. Then, conditions under which the use of oil crops as an energy source is feasible are briefly discussed; two concepts for decentralized oil-seed processing are described and, finally, future possibilities for use of vegetable oils as a fuel source are reviewed. (author)

  2. Requirements for accuracy of superconducting coils in the Large Helical Device

    Energy Technology Data Exchange (ETDEWEB)

    Yamazaki, K; Yanagi, N; Ji, H; Kaneko, H; Ohyabu, N; Satow, T; Morimoto, S; Yamamoto, J; Motojima, O [National Inst. for Fusion Science, Chikusa, Nagoya (Japan); LHD Design Group

    1993-01-01

    Irregular magnetic fields resonate with the rational surfaces of the magnetic confinement system, form magnetic islands and ergodic layers, and destroy the plasma confinement. To avoid this confinement destruction, the requirement of an accuracy of 10^-4 in the magnetic field is adopted as the magnetic-accuracy design criterion for the LHD machine. Following this criterion, the width of the undesirable magnetic island is kept less than one tenth of the plasma radius. The irregular magnetic field from the superconducting (SC) helical and poloidal coils is produced by winding irregularity, installing irregularity, cooling-down deformations and electromagnetic deformations. The local irregularities such as feeders, layer connections, adjacent-conductor connections of the coils also produce an error field. The eddy currents on the supporting shell structure of SC coils, the cryostat, etc. are also evaluated. All irregular effects are analyzed using Fourier decomposition and field mapping methods for the LHD design, and it is confirmed that the present design of the superconducting coil system satisfies the design criterion for these field irregularities. (orig.).

  3. Sources and accumulation of plutonium in a large Western Pacific marginal sea: The South China Sea.

    Science.gov (United States)

    Wu, Junwen; Dai, Minhan; Xu, Yi; Zheng, Jian

    2018-01-01

    In order to examine the sources of plutonium (Pu) and elaborate its scavenging and accumulation processes, ²⁴⁰Pu/²³⁹Pu atom ratios and ²³⁹⁺²⁴⁰Pu activities in the water column of the South China Sea (SCS) were determined and compared with our previously reported data for the sediments. Consistently high ²⁴⁰Pu/²³⁹Pu atom ratios, ranging from 0.184 to 0.250 (average = 0.228±0.015) and indicative of non-global fallout Pu sources, were observed both in the surface water and at depth during 2012-2014. The spatial distribution of the ²⁴⁰Pu/²³⁹Pu atom ratio in the SCS showed a decreasing trend away from the Luzon Strait, which was very consistent with the introduction pathway of the Kuroshio Current. The Kuroshio had an even heavier Pu isotopic ratio ranging from 0.250 to 0.263 (average = 0.255±0.006), traceable to the non-global fallout Pu signature from the Pacific Proving Grounds (PPG). Using a simple two end-member mixing model, we further revealed that this PPG source contributed 41±17% of the Pu in the SCS water column. The ²³⁹⁺²⁴⁰Pu activities in the SCS surface seawater varied from 1.59 to 2.94 mBq m⁻³, with an average of 2.34±0.38 mBq m⁻³. Such an activity level was ~40% higher than that in the Kuroshio. The distribution of ²³⁹⁺²⁴⁰Pu in the surface seawater further showed a general trend of increase from the Kuroshio to the SCS basin, suggesting significant accumulation of Pu within the SCS. The ²³⁹⁺²⁴⁰Pu inventory of the water column in the SCS basin at the SEATS station, with a total depth of ~3840 m, was estimated to be ~29 Bq m⁻², which was substantially higher than the sediment core estimates made for the SCS basin (3.75 Bq m⁻²) but much lower than the sediment core estimates made for the shelf of the northern SCS (365.6 Bq m⁻²). Such differences were determined by the lower scavenging efficiency of Pu in the SCS basin compared to the northern SCS shelf. Copyright © 2017 Elsevier B.V. All rights reserved.
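
    A minimal sketch of a two end-member mixing calculation on atom ratios is given below. The global-fallout ratio of ~0.18 is a commonly adopted literature value and the PPG end-member used here is a hypothetical placeholder, so the numbers are illustrative rather than the exact end-members of the study; the measured ratio is assumed to be a ²³⁹Pu-weighted linear mixture of the two end-members.

        # Two end-member mixing on 240Pu/239Pu atom ratios (illustrative end-members).
        def ppg_fraction(r_sample, r_global=0.18, r_ppg=0.30):
            """Fraction of 239Pu attributable to the PPG end-member, assuming the
            sample ratio is a linear (239Pu-weighted) mixture of the two sources."""
            return (r_sample - r_global) / (r_ppg - r_global)

        # SCS mean atom ratio reported above.
        print(f"PPG fraction for a ratio of 0.228: {ppg_fraction(0.228):.0%}")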

  4. The effect of the novel internal-type linear inductive antenna for large area magnetized inductive plasma source

    Science.gov (United States)

    Lee, S. H.; Shulika, Olga.; Kim, K. N.; Yeom, G. Y.; Lee, J. K.

    2004-09-01

    As the technology of plasma processing progresses, there is a continuing demand for higher plasma density, uniformity over large areas and greater control over plasma parameters to optimize the processes of etching, deposition and surface treatment. Traditionally, external planar ICP sources for low-pressure, high-density plasma have had limited scale-up capability because of the high impedance that accompanies a large antenna, and because of the cost and thickness of the dielectric material needed to generate uniform plasma. In this study a novel internal-type linear inductive antenna system (1,020 mm × 830 mm × 437 mm) with permanent magnet arrays is investigated to improve both the plasma density and the uniformity of the LAPS (Large Area Plasma Source) for FPD processing. Plasma discharges are enhanced because the inductance of the novel antenna (termed the double comb antenna) is lower than that of the serpentine-type antenna, and the magnetic confinement of electrons increases the power absorption efficiency. The uniformity is improved by reducing the standing wave effect, which arises because the total length of the antenna is comparable to the driving rf wavelength and would otherwise cause plasma nonuniformity. To describe the discharge phenomenon we have developed a magnetized two-dimensional fluid simulation. This work was supported by the National Research Laboratory (NRL) Program of the Korea Ministry of Science and Technology. [References] 1. J.K.Lee, Lin Meng, Y.K.Shin, H.J.Lee and T.H.Chung, "Modeling and Simulation of a Large-Area Plasma Source", Jpn. J. Appl. Phys. Vol.36 (1997) pp. 5714-5723. 2. S.E.Park, B.U.Cho, Y.J.Lee, G.Y.Yeom, and J.K.Lee, "The Characteristics of Large Area Processing Plasmas", IEEE Trans. Plasma Sci., Vol.31, No.4 (2003) pp. 628-637.

  5. Science case and requirements for the MOSAIC concept for a multi-object spectrograph for the European extremely large telescope

    International Nuclear Information System (INIS)

    Evans, C.J.; Puech, M.; Bonifacio, P.; Hammer, F.; Jagourel, P.; Caffau, E.; Disseau, K.; Flores, H.; Huertas-Company, M.; Mei, S.; Aussel, H.

    2014-01-01

    Over the past 18 months we have revisited the science requirements for a multi-object spectrograph (MOS) for the European Extremely Large Telescope (E-ELT). These efforts span the full range of E-ELT science and include input from a broad cross-section of astronomers across the ESO partner countries. In this contribution we summarise the key cases relating to studies of high-redshift galaxies, galaxy evolution, and stellar populations, with a more expansive presentation of a new case relating to the detection of exoplanets in stellar clusters. A general requirement is the need for two observational modes to best exploit the large (=40 arcmin²) patrol field of the E-ELT. The first mode ('high multiplex') requires integrated-light (or coarsely resolved) optical/near-IR spectroscopy of ≥100 objects simultaneously. The second ('high definition'), enabled by wide-field adaptive optics, requires spatially resolved near-IR spectroscopy of ≥10 objects/sub-fields. Within the context of the conceptual study for an ELT-MOS called MOSAIC, we summarise the top-level requirements from each case and introduce the next steps in the design process. (authors)

  6. Microsoft C#.NET program and electromagnetic depth sounding for large loop source

    Science.gov (United States)

    Prabhakar Rao, K.; Ashok Babu, G.

    2009-07-01

    A program, in the C# (C Sharp) language with the Microsoft .NET Framework, is developed to compute the normalized vertical magnetic field of a horizontal rectangular loop source placed on the surface of an n-layered earth. The field can be calculated either inside or outside the loop. Five C# classes, with member functions in each class, are designed to compute the kernel, the Hankel transform integral, the coefficients for cubic spline interpolation between computed values, and the normalized vertical magnetic field. The program computes the vertical magnetic field in the frequency domain using integral expressions evaluated by a combination of straightforward numerical integration and the digital filter technique. The code utilizes different object-oriented programming (OOP) features. It finally computes the amplitude and phase of the normalized vertical magnetic field. The computed results are presented for geometric and parametric soundings. The code was developed in Microsoft .NET Visual Studio 2003 and uses various system class libraries.
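
    The numerical core described above is the evaluation of Hankel-transform integrals. A minimal Python sanity check (not the C# code of the paper) is to integrate a simple kernel against J0 and compare with the known closed form ∫₀^∞ exp(-aλ) J0(λr) dλ = 1/√(a² + r²); the truncation limit below is an assumption that works because the kernel decays quickly.

        # Numerical Hankel-type integral versus its closed form.
        import numpy as np
        from scipy.integrate import quad
        from scipy.special import j0

        def hankel_j0(kernel, r, lam_max=200.0):
            """Evaluate the truncated integral of kernel(lam) * J0(lam * r) over lam."""
            value, _ = quad(lambda lam: kernel(lam) * j0(lam * r), 0.0, lam_max, limit=500)
            return value

        a, r = 1.5, 2.0
        numeric = hankel_j0(lambda lam: np.exp(-a * lam), r)
        exact = 1.0 / np.hypot(a, r)
        print(f"numeric = {numeric:.6f}, exact = {exact:.6f}")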

  7. Large-scale computation at PSI scientific achievements and future requirements

    International Nuclear Information System (INIS)

    Adelmann, A.; Markushin, V.

    2008-11-01

    ' (SNSP-HPCN) is discussing this complex. Scientific results made possible by PSI's engagement at CSCS (named Horizon) are summarised and PSI's future high-performance computing requirements are evaluated. The data collected show the current situation, and a 5-year extrapolation of the users' needs with respect to HPC resources is made. In consequence this report can serve as a basis for future strategic decisions with respect to a non-existing HPC road-map for PSI. PSI's institutional HPC area started, hardware-wise, approximately in 1999 with the assembly of a 32-processor LINUX cluster called Merlin. Merlin was upgraded several times, lastly in 2007. The Merlin cluster at PSI is used for small-scale parallel jobs, and is the only general-purpose computing system at PSI. Several dedicated small-scale clusters followed the Merlin scheme. Many of the clusters are used to analyse data from experiments at PSI or CERN, because dedicated clusters are most efficient. The intellectual and financial involvement in the procurement (including a machine update in 2007) results in a PSI share of 25 % of the available computing resources at CSCS. The (over) usage of available computing resources by PSI scientists is demonstrated. We actually get more computing cycles than we have paid for. The reason is the fair-share policy implemented on the Horizon machine. This policy allows us to get cycles, with a low priority, even when our bi-monthly share is used. Five important observations can be drawn from the analysis of the scientific output and the survey of future requirements of the main PSI HPC users: (1) High-performance computing is a main pillar in many important PSI research areas; (2) there is a shortfall of roughly 10 times the current computing resources (measured in available core-hours per year); (3) there is a trend to use on the order of 600 processors per average production run; (4) the disk and tape storage growth is dramatic; (5) small HPC clusters located

  8. Large-scale computation at PSI scientific achievements and future requirements

    Energy Technology Data Exchange (ETDEWEB)

    Adelmann, A.; Markushin, V

    2008-11-15

    and Networking' (SNSP-HPCN) is discussing this complex. Scientific results made possible by PSI's engagement at CSCS (named Horizon) are summarised and PSI's future high-performance computing requirements are evaluated. The data collected show the current situation, and a 5-year extrapolation of the users' needs with respect to HPC resources is made. In consequence this report can serve as a basis for future strategic decisions with respect to a non-existing HPC road-map for PSI. PSI's institutional HPC area started, hardware-wise, approximately in 1999 with the assembly of a 32-processor LINUX cluster called Merlin. Merlin was upgraded several times, lastly in 2007. The Merlin cluster at PSI is used for small-scale parallel jobs, and is the only general-purpose computing system at PSI. Several dedicated small-scale clusters followed the Merlin scheme. Many of the clusters are used to analyse data from experiments at PSI or CERN, because dedicated clusters are most efficient. The intellectual and financial involvement in the procurement (including a machine update in 2007) results in a PSI share of 25 % of the available computing resources at CSCS. The (over) usage of available computing resources by PSI scientists is demonstrated. We actually get more computing cycles than we have paid for. The reason is the fair-share policy implemented on the Horizon machine. This policy allows us to get cycles, with a low priority, even when our bi-monthly share is used. Five important observations can be drawn from the analysis of the scientific output and the survey of future requirements of the main PSI HPC users: (1) High-performance computing is a main pillar in many important PSI research areas; (2) there is a shortfall of roughly 10 times the current computing resources (measured in available core-hours per year); (3) there is a trend to use on the order of 600 processors per average production run; (4) the disk and tape storage growth

  9. Large mobile thrombus in non-atherosclerotic thoracic aorta as the source of peripheral arterial embolism

    Directory of Open Access Journals (Sweden)

    Brkovic Zoran

    2005-11-01

    Full Text Available The presence of thrombi in the atherosclerotic and/or aneurysmatic aorta with peripheral arterial embolism is a common scenario. Thrombus formation in a morphologically normal aorta, however, is a rare event. A 50-year-old woman was admitted to the emergency department for pain, coldness, and anesthesia in the left foot. She had a 25-year history of cigarette smoking, a history of postmenopausal hormone replacement therapy (HRT), hypercholesterolemia and hyperfibrinogenemia. An extensive serologic survey for hypercoagulability, including antiphospholipid antibodies, and for vasculitis disorders was negative. Transesophageal echocardiography revealed a large, pedunculated and hypermobile thrombus attached to the aortic wall 5 cm distal to the left subclavian artery. The patient was admitted to the surgery department, where a 15 cm long fresh parietal thrombus was removed from the aorta, which showed no macroscopic wall lesions or any other morphologic abnormalities. This case report demonstrates that a large, pedunculated thrombus can develop in a morphologically intact aorta in a postmenopausal woman with thrombogenic conditions such as hyperfibrinogenemia, hypercholesterolemia, smoking and HRT. For these patients, profiling the individual risk and weighing the benefits against the potential risks is warranted before prescribing HRT.

  10. Operation Modeling of Power Systems Integrated with Large-Scale New Energy Power Sources

    Directory of Open Access Journals (Sweden)

    Hui Li

    2016-10-01

    Full Text Available In most current methods of probabilistic power system production simulation, the output characteristics of new energy power generation (NEPG) have not been comprehensively considered. In this paper, the power output characteristics of wind power generation and photovoltaic power generation are first analyzed with statistical methods applied to their historical operating data. Then the characteristic indexes and the filtering principle of the NEPG historical output scenarios are introduced with the confidence level, and a calculation model of NEPG's credible capacity is proposed. Based on this, taking the minimum production cost or the best energy-saving and emission-reduction effect as the optimization objective, a power system operation model with large-scale integration of NEPG is established, considering the power balance, the electricity balance and the peak balance. Besides, the constraints of the operating characteristics of different power generation types, the maintenance schedule, the load reserve, the emergency reserve, the water abandonment and the transmitting capacity between different areas are also considered. With the proposed power system operation model, operation simulations are carried out on the actual Northwest power grid of China, which resolve the accommodation of new energy power under different system operating conditions. The simulation results verify the validity of the proposed power system operation model in the accommodation analysis for a power system penetrated with large-scale NEPG.
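
    A heavily simplified, single-period version of such a dispatch model is sketched below: production cost is minimized subject to a power-balance equality and unit limits, with a fixed credible new-energy injection. Unit data, costs and the load level are illustrative assumptions, not parameters of the Northwest grid study.

        # Toy single-period dispatch with a fixed credible new-energy injection.
        from scipy.optimize import linprog

        load_mw = 900.0
        new_energy_mw = 150.0                  # credible NEPG output assumed available
        cost = [30.0, 45.0, 80.0]              # $/MWh for three conventional units
        p_min = [200.0, 50.0, 0.0]
        p_max = [600.0, 300.0, 250.0]

        # Minimize cost subject to sum(P_i) = load - new_energy, P_min <= P_i <= P_max.
        res = linprog(c=cost,
                      A_eq=[[1.0, 1.0, 1.0]], b_eq=[load_mw - new_energy_mw],
                      bounds=list(zip(p_min, p_max)), method="highs")
        print("dispatch (MW):", [round(p, 1) for p in res.x],
              "| cost ($/h):", round(res.fun, 1))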

  11. Particle Events as a Possible Source of Large Ozone Loss during Magnetic Polarity Transitions

    Science.gov (United States)

    vonKoenig, M.; Burrows, J. P.; Chipperfield, M. P.; Jackman, C. H.; Kallenrode, M.-B.; Kuenzi, K. F.; Quack, M.

    2002-01-01

    The energy deposition in the mesosphere and stratosphere during large extraterrestrial charged-particle precipitation events has been known for some time to contribute to ozone losses through the formation of potential ozone-destroying species such as NOx and HOx. These impacts have been measured and can be reproduced fairly well with chemistry models. In the recent past, however, even the impact of the largest solar proton events on the total amount of ozone has been small compared to the dynamical variability of ozone and to anthropogenic impacts like the Antarctic 'ozone hole'. This is due to the shielding effect of the magnetic field. However, there is evidence that the Earth's magnetic field may approach a reversal. This could lead to a decrease of the magnetic field strength to less than 25% of its usual value over a period of several centuries. We show that with realistic estimates of very large solar proton events, scenarios similar to the Antarctic ozone hole of the 1990s may occur during a magnetic polarity transition.

  12. The 100 strongest radio point sources in the field of the Large Magellanic Cloud at 1.4 GHz

    Directory of Open Access Journals (Sweden)

    Payne J.L.

    2009-01-01

    Full Text Available We present the 100 strongest 1.4 GHz point sources from a new mosaic image in the direction of the Large Magellanic Cloud (LMC). The observations making up the mosaic were made using the Australia Telescope Compact Array (ATCA) over a ten-year period and were combined with Parkes single-dish data at 1.4 GHz to complete the image for short spacings. An initial list of co-identifications within 10″ at 0.843, 4.8 and 8.6 GHz consisted of 2682 sources. Elimination of extended objects and artifact noise allowed the creation of a refined list containing 1988 point sources. Most of these are presumed to be background objects seen through the LMC; a small portion may represent compact H ii regions, young SNRs and radio planetary nebulae. For the 1988 point sources we find a preliminary average spectral index (α) of −0.53 and present a 1.4 GHz image showing source locations in the direction of the LMC.

  13. The 100 Strongest Radio Point Sources in the Field of the Large Magellanic Cloud at 1.4 GHz

    Directory of Open Access Journals (Sweden)

    Payne, J. L.

    2009-06-01

    Full Text Available We present the 100 strongest 1.4 GHz point sources from a new mosaic image in the direction of the Large Magellanic Cloud (LMC). The observations making up the mosaic were made using the Australia Telescope Compact Array (ATCA) over a ten-year period and were combined with Parkes single-dish data at 1.4 GHz to complete the image for short spacings. An initial list of co-identifications within 10 arcsec at 0.843, 4.8 and 8.6 GHz consisted of 2682 sources. Elimination of extended objects and artifact noise allowed the creation of a refined list containing 1988 point sources. Most of these are presumed to be background objects seen through the LMC; a small portion may represent compact HII regions, young SNRs and radio planetary nebulae. For the 1988 point sources we find a preliminary average spectral index (α) of −0.53 and present a 1.4 GHz image showing source locations in the direction of the LMC.

  14. Radiological Threat Reduction (RTR) program: implementing physical security to protect large radioactive sources worldwide

    International Nuclear Information System (INIS)

    Lowe, Daniel L.

    2004-01-01

    The U.S. Department of Energy's Radiological Threat Reduction (RTR) Program strives to reduce the threat of a Radiological Dispersion Device (RDD) incident that could affect U.S. interests worldwide. Sandia National Laboratories supports the RTR program on many different levels. Sandia works directly with DOE to develop strategies, including the selection of countries to receive support and the identification of radioactive materials to be protected. Sandia also works with DOE in the development of guidelines and in training DOE project managers in physical protection principles. Other support to DOE includes performing rapid assessments and providing guidance for establishing foreign regulatory and knowledge infrastructure. Sandia works directly with foreign governments to establish the cooperative agreements necessary to implement the RTR Program efforts to protect radioactive sources. Once the necessary agreements are in place, Sandia works with in-country organizations to implement various security-related initiatives, such as installing security systems and searching for (and securing) orphaned radioactive sources. The radioactive materials of interest to the RTR program include Cobalt 60, Cesium 137, Strontium 90, Iridium 192, Radium 226, Plutonium 238, Americium 241, Californium 252, and others. Security systems are implemented using a standardized approach that provides consistency throughout the RTR program efforts at Sandia. The approach incorporates a series of major tasks that overlap in order to provide continuity. The major task sequence is to: establish in-country contacts - integrators, obtain material characterizations, perform site assessments and vulnerability assessments, develop upgrade plans, procure and install equipment, conduct acceptance testing and performance testing, develop procedures, and conduct training. Other tasks are incorporated as appropriate and commonly include support for reconfiguring infrastructure and developing security

  15. Non-linear vibrating systems excited by a nonideal energy source with a large slope characteristic

    Science.gov (United States)

    González-Carbajal, Javier; Domínguez, Jaime

    2017-11-01

    This paper revisits the problem of an unbalanced motor attached to a fixed frame by means of a nonlinear spring and a linear damper. The excitation provided by the motor is, in general, nonideal, which means it is affected by the vibratory response. Since the system behaviour is highly dependent on the order of magnitude of the motor characteristic slope, the case of large slope is considered herein. Some Perturbation Methods are applied to the system of equations, which allows transforming the original 4D system into a much simpler 2D system. The fixed points of this reduced system and their stability are carefully studied. We find the existence of a Hopf bifurcation which, to the authors' knowledge, has not been addressed before in the literature. These analytical results are supported by numerical simulations. We also compare our approach and results with those published by other authors.

  16. Openwebglobe - AN Open Source Sdk for Creating Large-Scale Virtual Globes on a Webgl Basis

    Science.gov (United States)

    Loesch, B.; Christen, M.; Nebiker, S.

    2012-07-01

    This paper introduces the OpenWebGlobe project (www.openwebglobe.org) and the OpenWebGlobe SDK (Software Development Kit) - an open-source virtual globe environment using WebGL. Unlike other (web-based) 3d geovisualisation technologies and toolkits, the OpenWebGlobe SDK not only supports the content authoring and web visualization aspects, but also the data processing functionality for generating multi-terabyte terrain, image, map and 3d point cloud data sets in high-performance and cloud-based parallel computing environments. The OpenWebGlobe architecture is described and the paper outlines the processing and viewer functionality provided by the OpenWebGlobe SDK. It then discusses the generation and updating of a global 3d base map using OpenStreetMap data and finally presents two showcases employing the technology: a) implementing an interactive national 3d geoportal incorporating high-resolution national geodata sets and b) implementing a 3d geoinformation service supporting the real-time incorporation of 3d point cloud data.

  17. Temperature field due to time-dependent heat sources in a large rectangular grid. Application for the KBS-3 repository

    International Nuclear Information System (INIS)

    Probert, T.; Claesson, Johan

    1997-04-01

    In the KBS-3 concept, canisters containing nuclear waste are deposited along parallel tunnels over a large rectangular area deep below the ground surface. The temperature field in the rock due to such a rectangular grid of heat-releasing canisters is studied. An analytical solution to this problem for an arbitrary heat source has been presented in a preceding paper. The complete solution is summarized in this paper. By superposition, the solution is divided into two main parts: a global temperature field due to the large rectangular canister area, and a local field that accounts for the remaining heat-source problem. In this sequel to the first report, the local solution is discussed in detail. The local solution consists of three parts corresponding to line heat sources along the tunnels, point heat sources along a tunnel and a line heat source along a canister. Each part depends on two special variables only. These parts are illustrated in dimensionless form. Inside the repository the local temperature field is periodic in the horizontal directions and has a short extent in the vertical direction. This allows us to look at the solution in a parallelepiped around a canister. The solution in the parallelepiped is valid for all canisters that are not too close to the repository edges. The total temperature field is calculated for the KBS-3 case, using a heat release that is valid for the first 10 000 years after deposition. The temperature field is shown in 23 figures in order to illustrate different aspects of the complex thermal process.
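
    The basic building block behind such superposition solutions is the temperature rise from a continuous point heat source of constant strength in an infinite medium, ΔT(r, t) = Q/(4πλr)·erfc(r/(2√(at))). The sketch below evaluates this expression; the rock properties and source strength are illustrative assumptions, not KBS-3 design values, and the real canister power decays with time.

        # Temperature rise from a constant-strength point heat source in rock.
        import numpy as np
        from scipy.special import erfc

        lam = 3.0          # thermal conductivity, W/(m K)   (illustrative)
        rho_c = 2.2e6      # volumetric heat capacity, J/(m3 K)
        a = lam / rho_c    # thermal diffusivity, m2/s
        Q = 1000.0         # source strength, W

        def dT_point(r_m, t_s):
            return Q / (4.0 * np.pi * lam * r_m) * erfc(r_m / (2.0 * np.sqrt(a * t_s)))

        year = 3.156e7     # seconds per year
        for t_yr in (1, 10, 100):
            print(f"t = {t_yr:>3d} yr: dT at r = 10 m is {dT_point(10.0, t_yr * year):5.2f} K")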

  18. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim

    2015-01-01

    In this paper, we investigate the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum number of transmit/receive antennas required to satisfy different outage probability constraints. We study the effect of spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.

  19. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz

    2015-11-12

    In this paper, we investigate the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum number of transmit/receive antennas required to satisfy different outage probability constraints. We study the effect of spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.
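
    A hedged Monte Carlo sketch of the kind of question addressed above (though not the paper's analytical approach) is to estimate the open-loop outage probability of an i.i.d. Rayleigh MIMO channel and find the smallest symmetric antenna count that meets a target. The rate target, SNR, outage target and trial count are illustrative assumptions.

        # Monte Carlo outage probability versus number of antennas (i.i.d. Rayleigh MIMO).
        import numpy as np

        rng = np.random.default_rng(0)

        def outage_prob(n, snr_db=10.0, rate_bps_hz=8.0, trials=20000):
            snr = 10.0 ** (snr_db / 10.0)
            count = 0
            for _ in range(trials):
                H = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2.0)
                cap = np.log2(np.linalg.det(np.eye(n) + (snr / n) * H @ H.conj().T).real)
                count += cap < rate_bps_hz
            return count / trials

        for n in range(2, 9):
            p = outage_prob(n)
            print(f"N = {n}: outage ≈ {p:.3f}")
            if p <= 1e-2:
                print(f"-> N = {n} transmit/receive antennas meet a 1% outage target")
                break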

  20. Load Frequency Control by use of a Number of Both Heat Pump Water Heaters and Electric Vehicles in Power System with a Large Integration of Renewable Energy Sources

    Science.gov (United States)

    Masuta, Taisuke; Shimizu, Koichiro; Yokoyama, Akihiko

    In Japan, from the viewpoints of global warming countermeasures and energy security, it is expected that a smart grid will be established as a power system into which a large amount of generation from renewable energy sources, such as wind power generation and photovoltaic generation, can be integrated. Measures for power system stability and reliability are necessary because a large integration of these renewable energy sources causes problems in power systems, e.g. frequency fluctuation and distribution voltage rise, and the Battery Energy Storage System (BESS) is one effective solution to these problems. Because of the high cost of the BESS, our research group has studied the application of controllable loads such as the Heat Pump Water Heater (HPWH) and the Electric Vehicle (EV) to power system control, in order to reduce the required capacity of the BESS. This paper proposes a new coordinated Load Frequency Control (LFC) method for the conventional power plants, the BESS, the HPWHs, and the EVs. The performance of the proposed LFC method is evaluated by numerical simulations conducted on a power system model with a large integration of wind power generation and photovoltaic generation.
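
    A minimal per-unit sketch of the frequency-response problem the proposed LFC addresses is an aggregated swing equation in which part of the renewable imbalance is absorbed by frequency-responsive controllable loads. All constants and gains below are illustrative assumptions, not the paper's model.

        # Aggregated swing-equation response with frequency-responsive controllable loads.
        import numpy as np

        H, D = 4.0, 1.0        # inertia constant (s) and load damping (p.u.)
        K_load = 15.0          # aggregate HPWH/EV response gain (p.u. power per p.u. freq)
        dt, t_end = 0.01, 30.0

        t = np.arange(0.0, t_end, dt)
        dP_renewable = 0.05 * np.sin(2 * np.pi * t / 20.0)  # slow renewable fluctuation, p.u.

        df = np.zeros_like(t)
        for k in range(1, len(t)):
            p_ctrl = -K_load * df[k - 1]                    # controllable loads counteract df
            ddf = (dP_renewable[k - 1] + p_ctrl - D * df[k - 1]) / (2.0 * H)
            df[k] = df[k - 1] + dt * ddf

        peak = np.max(np.abs(df))
        print(f"peak frequency deviation: {peak:.4f} p.u. ({peak * 50:.3f} Hz on a 50 Hz system)")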

  1. Oil crops: requirements and possibilities for their utilization as an energy source

    Energy Technology Data Exchange (ETDEWEB)

    Boerner, G; Schoenefeldt, J; Mehring, I [OeHMI Forschung und Ingenieurtechnik GmbH, Magdeburg (Germany)

    1995-12-01

    Although vegetable oils have been used as an energy source for centuries, they were used almost exclusively in oil lamps. Their value as a foodstuff and the availability and low price of mineral oil had for a long time kept them from being seriously considered as a potential energy source. Now, owing to the increasing cost of fossil fuel, particularly oil, and increasing industrial energy consumption, as well as the negative impact of fossil fuel use on the environment, there is interest in a number of alternative energy sources, including vegetable oils. The discussion in this paper focuses on the use of untreated vegetable oils, particularly rapeseed oil. The energy potential of rapeseed oil is explored first. Then, conditions under which the use of oil crops as an energy source is feasible are briefly discussed; two concepts for decentralized oil-seed processing are described and, finally, future possibilities for use of vegetable oils as a fuel source are reviewed. (author) 5 refs, 4 figs, 4 tabs

  2. Identifying the Source of Large-Scale Atmospheric Variability in Jupiter

    Science.gov (United States)

    Orton, Glenn

    2011-01-01

    We propose to use the unique mid-infrared filtered imaging and spectroscopic capabilities of the Subaru COMICS instrument to determine the mechanisms associated with recent unusual, rapid albedo and color transformations of several of Jupiter's bands, particularly its South Equatorial Belt (SEB), as a means to understand the coupling between its dynamics and chemistry. These observations will characterize the temperature, degree of cloud cover, and distribution of minor gases that serve as indirect tracers of vertical motions in regions that will be undergoing unusual large-scale changes in dynamics and chemistry: the SEB, as well as regions near the equator and Jupiter's North Temperate Belt. COMICS is ideal for this investigation because of its efficiency in doing both imaging and spectroscopy, its 24.5-μm filter that is unique to 8-meter-class telescopes, and its wide field of view that allows imaging of nearly all of Jupiter's disk, coupled with high diffraction-limited angular resolution and optimal mid-infrared atmospheric transparency.

  3. FERMI LARGE AREA TELESCOPE OBSERVATION OF A GAMMA-RAY SOURCE AT THE POSITION OF ETA CARINAE

    International Nuclear Information System (INIS)

    Abdo, A. A.; Ackermann, M.; Ajello, M.; Allafort, A.; Bechtol, K.; Berenji, B.; Blandford, R. D.; Borgland, A. W.; Bouvier, A.; Baldini, L.; Bellazzini, R.; Bregeon, J.; Brez, A.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bonamente, E.; Brandt, T. J.; Brigida, M.; Bruel, P.

    2010-01-01

    The Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope detected a γ-ray source that is spatially consistent with the location of Eta Carinae. This source has been persistently bright since the beginning of the LAT survey observations (from 2008 August to 2009 July, the time interval considered here). The γ-ray signal is detected significantly throughout the LAT energy band (i.e., up to ∼100 GeV). The 0.1-100 GeV energy spectrum is well represented by a combination of a cutoff power-law model ( 10 GeV). The total flux (>100 MeV) is 3.7(+0.3/−0.1) × 10⁻⁷ photons s⁻¹ cm⁻², with additional systematic uncertainties of 10%, and consistent with the average flux measured by AGILE. The light curve obtained by Fermi is consistent with steady emission. Our observations do not confirm the presence of a γ-ray flare in 2008 October, as reported by Tavani et al., although we cannot exclude that a flare lasting only a few hours escaped detection by the Fermi LAT. We also do not find any evidence for γ-ray variability that correlates with the large X-ray variability of Eta Carinae observed during 2008 December and 2009 January. We are thus not able to establish an unambiguous identification of the LAT source with Eta Carinae.

  4. The London Charter and the Seville Principles as sources of requirements for e-archaeology systems development purposes

    Directory of Open Access Journals (Sweden)

    Juan M. Carrillo Gea

    2013-11-01

    Full Text Available Requirements engineering (RE) is a discipline of critical importance in software development. This paper provides a process and a set of software artefacts to help in the production of e-archaeology systems, with emphasis on requirements reuse and standards. In particular, two important guidelines in the field of e-archaeology, the London Charter and the Principles of Seville, are presented as sources of requirements to be considered as a starting point for developing this type of system.

  5. J1649+2635: A Grand-Design Spiral with a Large Double-Lobed Radio Source

    Science.gov (United States)

    Mao, Minnie Y.; Owen, Frazer; Duffin, Ryan; Keel, Bill; Lacy, Mark; Momjian, Emmanuel; Morrison, Glenn; Mroczkowski, Tony; Neff, Susan; Norris, Ray P.; et al.

    2014-01-01

    We report the discovery of a grand-design spiral galaxy associated with a double-lobed radio source. J1649+2635 (z = 0.0545) is a red spiral galaxy with a prominent bulge that is associated with a L(1.4 GHz) ≈ 10²⁴ W Hz⁻¹ double-lobed radio source that spans almost 100 kpc. J1649+2635 has a black hole mass of M_BH ≈ 3-7 × 10⁸ solar masses and SFR ≈ 0.26-2.6 solar masses per year. The galaxy hosts an approximately 96 kpc diffuse optical halo, which is unprecedented for spiral galaxies. We find that J1649+2635 resides in an overdense environment with a mass of M_dyn = 7.7(+7.9/-4.3) × 10¹³ solar masses, likely a galaxy group below the detection threshold of the ROSAT All-Sky Survey. We suggest one possible scenario for the association of double-lobed radio emission with J1649+2635: the source may be similar to a Seyfert galaxy, located in a denser-than-normal environment. The study of spiral galaxies that host large-scale radio emission is important because, although rare in the local Universe, these sources may be more common at high redshifts.

  6. Hydrogen atom temperature measured with wavelength-modulated laser absorption spectroscopy in large scale filament arc negative hydrogen ion source

    International Nuclear Information System (INIS)

    Nakano, H.; Goto, M.; Tsumori, K.; Kisaki, M.; Ikeda, K.; Nagaoka, K.; Osakabe, M.; Takeiri, Y.; Kaneko, O.; Nishiyama, S.; Sasaki, K.

    2015-01-01

    The velocity distribution function of hydrogen atoms is one of the useful parameters for understanding particle dynamics, from negative hydrogen production to extraction, in a negative hydrogen ion source. The hydrogen atom temperature is one indicator of the velocity distribution function. To assess the feasibility of hydrogen atom temperature measurement in a large-scale filament-arc negative hydrogen ion source for fusion, a model calculation of wavelength-modulated laser absorption spectroscopy of the hydrogen Balmer alpha line was performed. By utilizing a wide-range tunable diode laser, we successfully obtained a hydrogen atom temperature of ∼3000 K in the vicinity of the plasma grid electrode. The hydrogen atom temperature increases with the arc power, and first decreases and then becomes constant with increasing hydrogen gas filling pressure.
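
    The standard Doppler-broadening relation behind such a measurement converts the Gaussian full width at half maximum of the Balmer-alpha line into a temperature, T = m_H c² (Δλ/λ₀)² / (8 ln 2 k_B). The sketch below applies it to a hypothetical linewidth and ignores instrumental, Stark and fine-structure contributions.

        # Doppler temperature from a measured Balmer-alpha FWHM (illustrative linewidth).
        import numpy as np

        k_B = 1.380649e-23     # J/K
        m_H = 1.6735575e-27    # kg
        c = 2.99792458e8       # m/s
        lambda0 = 656.28e-9    # Balmer-alpha wavelength, m

        def doppler_temperature(fwhm_m):
            """Temperature whose Gaussian Doppler FWHM equals fwhm_m."""
            return m_H * c**2 * (fwhm_m / lambda0) ** 2 / (8.0 * np.log(2.0) * k_B)

        fwhm = 0.025e-9        # 25 pm FWHM, a hypothetical measurement
        print(f"T_H ≈ {doppler_temperature(fwhm):.0f} K")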

  7. 40 CFR 63.11527 - What are the monitoring requirements for new and existing sources?

    Science.gov (United States)

    2010-07-01

    ... fabric filters that are discharged to the atmosphere through a stack, the bag leak detector sensor must... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for...

  8. 75 FR 15655 - Requirements for Control Technology Determinations for Major Sources in Accordance With Clean Air...

    Science.gov (United States)

    2010-03-30

    ... Electroplating, plating, polishing, anodizing, and coloring. 336 Manufacturers of motor vehicle parts and... 1994 and amended several times since then; they are contained in subpart B, 40 CFR 63.50 through 63.56...; May 30, 2003) to allow a source additional time to compile the information necessary for the...

  9. sources

    Directory of Open Access Journals (Sweden)

    Shu-Yin Chiang

    2002-01-01

    Full Text Available In this paper, we study simplified models of an ATM (Asynchronous Transfer Mode) multiplexer network with Bernoulli random traffic sources. Based on the model, the performance measures are analyzed under different output service schemes.
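
    A minimal discrete-time sketch of such a model is given below: independent Bernoulli sources feed a multiplexer that serves one cell per slot, and excess cells overflow a finite buffer. The source count, cell probability and buffer size are illustrative assumptions, not parameters from the paper.

        # Discrete-time simulation of a multiplexer fed by Bernoulli traffic sources.
        import numpy as np

        rng = np.random.default_rng(1)
        n_sources, p_cell = 9, 0.1        # offered load = 0.9 cells/slot
        buffer_size, n_slots = 10, 200000

        queue, lost, arrived = 0, 0, 0
        for _ in range(n_slots):
            arrivals = rng.binomial(n_sources, p_cell)   # cells generated this slot
            arrived += arrivals
            queue += arrivals
            if queue > buffer_size:                      # overflow is lost
                lost += queue - buffer_size
                queue = buffer_size
            queue = max(queue - 1, 0)                    # serve one cell per slot

        print(f"mean load = {arrived / n_slots:.2f} cells/slot, "
              f"cell loss ratio ≈ {lost / max(arrived, 1):.2e}")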

  10. Large Industrial Point Sources in Italy: a focus on mercury concentrations resulting from three seasonal ship-borne measurements

    Directory of Open Access Journals (Sweden)

    Bencardino M.

    2013-04-01

    Full Text Available In Italy there are 25 Large Industrial Point Sources whose mercury emissions to air exceed the established threshold of 10 kg year⁻¹. Many of these mercury point sources, mostly distributed along the Italian coastal area, are located at sites qualified as National Interest Rehabilitation Sites because of documented contamination in qualitative and/or quantitative terms and of potential health impact. Atmospheric mercury emissions related to Italian Large Industrial Point Sources, with a value of 1.04 Mg yr⁻¹ for 2007, make a non-negligible contribution, accounting on their own for more than 10% of the total mercury emissions resulting from all activity sectors at a national level. Thermal power stations, pig iron and steel production, and basic inorganic chemical production are the main contributing industrial activities. In order to assess how mercury species concentrations and distribution in the Marine Boundary Layer (MBL) change with proximity to large industrial sites, measurements of atmospheric mercury were performed during three oceanographic campaigns aboard the Research Vessel (R.V.) Urania of the Italian CNR. Collection of GEM, GOM and PBM was conducted across the Adriatic Sea during autumn 2004 (27th of October to 12th of November) and summer 2005 (17th to 29th of June), and across the Tyrrhenian Sea during autumn 2007 (12th of September to 1st of October). Analyses were carried out with reference to the periods in which the R.V. Urania stopped close to the main Italian industrial contaminated sites. Explorative statistical parameters of atmospheric mercury species were computed over each single stop-period and then compared with the overall cruise campaign measurements. Results are herein presented and discussed.

  11. Comparison of cluster-based and source-attribution methods for estimating transmission risk using large HIV sequence databases.

    Science.gov (United States)

    Le Vu, Stéphane; Ratmann, Oliver; Delpech, Valerie; Brown, Alison E; Gill, O Noel; Tostevin, Anna; Fraser, Christophe; Volz, Erik M

    2018-06-01

    Phylogenetic clustering of HIV sequences from a random sample of patients can reveal epidemiological transmission patterns, but interpretation is hampered by limited theoretical support, and the statistical properties of clustering analysis remain poorly understood. Alternatively, source attribution methods allow fitting of HIV transmission models and thereby quantify aspects of disease transmission. A simulation study was conducted to assess error rates of clustering methods for detecting transmission risk factors. We modeled HIV epidemics among men who have sex with men and generated phylogenies comparable to those that can be obtained from HIV surveillance data in the UK. Clustering and source attribution approaches were applied to evaluate their ability to identify patient attributes as transmission risk factors. We find that commonly used methods show a misleading association between cluster size or odds of clustering and covariates that are correlated with time since infection, regardless of their influence on transmission. Clustering methods usually have higher error rates and lower sensitivity than the source attribution method for identifying transmission risk factors, but neither method provides robust estimates of transmission risk ratios. Source attribution can alleviate the drawbacks of phylogenetic clustering, but formal population genetic modeling may be required to estimate quantitative transmission risk factors. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  12. The Berry Amendment: Requiring Defense Procurement to Come From Domestic Sources

    National Research Council Canada - National Science Library

    Grasso, Valerie B

    2005-01-01

    The Berry Amendment requires the Department of Defense (DOD) to give preference in procurement to domestically produced, manufactured, or home grown products, notably food, clothing, fabrics, and specialty metals...

  13. The Berry Amendment: Requiring Defense Procurement to Come from Domestic Sources

    National Research Council Canada - National Science Library

    Grasso, Valerie B

    2008-01-01

    ...; these provisions later became the Berry Amendment. The Berry Amendment requires DOD to give preference in procurement to domestically produced, manufactured, or home grown products, notably food, clothing, fabrics, and specialty metals...

  14. The Berry Amendment: Requiring Defense Procurement to Come from Domestic Sources

    National Research Council Canada - National Science Library

    Grasso, Valerie B

    2006-01-01

    The Berry Amendment requires the Department of Defense (DoD) to give preference in procurement to domestically produced, manufactured, or home-grown products, notably food, clothing, fabrics, and specialty metals. To protect the U.S...

  15. The Berry Amendment: Requiring Defense Procurement to Come from Domestic Sources

    National Research Council Canada - National Science Library

    Grasso, Valerie B

    2008-01-01

    ...; these provisions later became the Berry Amendment. The Berry Amendment requires DoD to give preference in procurement to domestically produced, manufactured, or home-grown products, notably food, clothing, fabrics, and specialty metals...

  16. The Berry Amendment: Requiring Defense Procurement To Come From Domestic Sources

    National Research Council Canada - National Science Library

    Bailey Grasso, Valerie

    2005-01-01

    The Berry Amendment requires the Department of Defense (DOD) to give preference in procurement to domestically produced, manufactured, or home grown products, notably food, clothing, fabrics, and specialty metals...

  17. The Berry Amendment: Requiring Defense Procurement to Come from Domestic Sources

    National Research Council Canada - National Science Library

    Grasso, Valerie B

    2005-01-01

    The Berry Amendment requires the Department of Defense (DoD) to give preference in procurement to domestically produced, manufactured, or home-grown products, notably food, clothing, fabrics, and specialty metals. To protect the U.S...

  18. Methodology for Quantitative Analysis of Large Liquid Samples with Prompt Gamma Neutron Activation Analysis using Am-Be Source

    International Nuclear Information System (INIS)

    Idiri, Z.; Mazrou, H.; Beddek, S.; Amokrane, A.

    2009-01-01

    An optimized set-up for prompt gamma neutron activation analysis (PGNAA) with an Am-Be source is described and used for the analysis of large liquid samples. A methodology for quantitative analysis is proposed: it consists of normalizing the prompt gamma count rates with thermal neutron flux measurements carried out with a He-3 detector and with gamma attenuation factors calculated using MCNP-5. Both the relative and absolute methods are considered. This methodology is then applied to the determination of cadmium in industrial phosphoric acid. The same sample is then analyzed by the inductively coupled plasma (ICP) method. Our results are in good agreement with those obtained with the ICP method.
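
    One way to write the relative method described above is sketched below: the prompt-gamma count rate of the sample is normalized by the measured thermal-neutron flux and by a calculated gamma attenuation factor, then scaled by a standard of known concentration. All numerical values are illustrative placeholders, not data from the study.

        # Relative-method concentration from normalized prompt-gamma count rates.
        def concentration_relative(c_std, rate_sample, rate_std,
                                   flux_sample, flux_std,
                                   atten_sample, atten_std):
            norm_sample = rate_sample / (flux_sample * atten_sample)
            norm_std = rate_std / (flux_std * atten_std)
            return c_std * norm_sample / norm_std

        c_cd = concentration_relative(c_std=50.0,                         # mg/L Cd in standard
                                      rate_sample=120.0, rate_std=100.0,  # counts/s
                                      flux_sample=0.95e4, flux_std=1.0e4, # n/(cm2 s)
                                      atten_sample=0.90, atten_std=0.92)
        print(f"Cd concentration ≈ {c_cd:.1f} mg/L")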

  19. Plant protection system optimization studies to mitigate consequences of large breaks in the advanced neutron source reactor

    International Nuclear Information System (INIS)

    Khayat, M.I.; March-Leuba, J.

    1993-01-01

    This paper documents some of the optimization studies performed to maximize the performance of the engineered safety features and scram systems in mitigating the consequences of large breaks in the primary cooling system of the Advanced Neutron Source (ANS) reactor. The ANS is a new basic and applied research facility based on a powerful steady-state research reactor that provides beams of neutrons for measurements and experiments in the fields of materials science and engineering, biology, chemistry, materials analysis, and nuclear science. To achieve the high neutron fluxes needed for these state-of-the-art experiments, the ANS design has a very high power density core (330 MW fission with an active volume of 67.6 L) surrounded by a large heavy-water reflector, where most neutrons are moderated. This design maximizes the number of neutrons available for experiments but results in a low heat capacity core that creates unique challenges for the design of the plant protection system

  20. On the electron extraction in a large RF-driven negative hydrogen ion source for the ITER NBI system

    International Nuclear Information System (INIS)

    Franzen, P; Wünderlich, D; Fantz, U

    2014-01-01

    The test facility ELISE, equipped with a large RF-driven ion source (1 × 0.9 m²) of half the size of the ion source for the ITER neutral beam injection (NBI) system, has been constructed over the last three years at the Max-Planck-Institut für Plasmaphysik (IPP), Garching, and is now operational. The first measurements of the dependence of the co-extracted electron currents on various operational parameters have been performed. ELISE has the unique feature that the electron currents can be measured individually on both extraction grid segments, leading to vertical spatial resolution. Although performed in volume operation, where the negative hydrogen ions are created solely in the plasma volume, the results are very encouraging for operation with caesium, which is necessary in order to achieve the relevant negative ion currents for the ITER NBI injectors. The amount of co-extracted electrons could be suppressed sufficiently with moderate magnetic filter fields and by plasma grid bias. Furthermore, the electron extraction is more or less decoupled from the main plasma, as the observed vertical asymmetry of electron extraction is not correlated at all with the plasma asymmetry, which is in any case rather small. Both effects are superior to the experience from the small IPP prototype source; the reason for these encouraging results is most probably the larger size of the source as well as the new geometry of the source, which has unbiased areas in its centre. The reasons, however, for the observed asymmetry of the extracted electron currents and their dependencies on various operational parameters are not well understood. (paper)

  1. A hybrid algorithm for stochastic single-source capacitated facility location problem with service level requirements

    Directory of Open Access Journals (Sweden)

    Hosseinali Salemi

    2016-04-01

    Full Text Available Facility location models are observed in many diverse areas such as communication networks, transportation, and distribution systems planning. They play a significant role in supply chain and operations management and are one of the main topics on the strategic agenda of contemporary manufacturing and service companies, with long-lasting effects. We define a new approach for solving the stochastic single-source capacitated facility location problem (SSSCFLP). Customers with stochastic demand are assigned to a set of capacitated facilities that are selected to serve them. It is demonstrated that the problem can be transformed into a deterministic Single Source Capacitated Facility Location Problem (SSCFLP) for a Poisson demand distribution. A hybrid algorithm which combines a Lagrangian heuristic with an adjusted mixture of ant colony and genetic optimization is proposed to find lower and upper bounds for this problem. Computational results on various instances with distinct properties indicate that the proposed solution approach is efficient.
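
    For orientation, a toy greedy baseline for the deterministic single-source problem is sketched below (it is not the Lagrangian/ant-colony/genetic hybrid of the paper): customers are taken in decreasing-demand order and each is assigned to the cheapest facility with remaining capacity, opening facilities on demand. The data are illustrative.

        # Greedy baseline for a tiny single-source capacitated facility location instance.
        fixed_cost = [100.0, 120.0, 90.0]
        capacity = [50.0, 60.0, 40.0]
        demand = [20.0, 25.0, 15.0, 30.0]
        assign_cost = [  # assign_cost[customer][facility]
            [10.0, 14.0, 20.0],
            [12.0,  9.0, 16.0],
            [22.0, 11.0,  8.0],
            [ 9.0, 18.0, 15.0],
        ]

        remaining = capacity[:]
        opened, assignment, total = set(), {}, 0.0
        for j in sorted(range(len(demand)), key=lambda j: -demand[j]):
            feasible = [i for i in range(len(capacity)) if remaining[i] >= demand[j]]
            best = min(feasible,
                       key=lambda i: assign_cost[j][i] + (0.0 if i in opened else fixed_cost[i]))
            if best not in opened:
                opened.add(best)
                total += fixed_cost[best]
            remaining[best] -= demand[j]
            assignment[j] = best
            total += assign_cost[j][best]

        print("assignment:", assignment, "| opened:", sorted(opened), "| total cost:", total)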

  2. The Berry Amendment: Requiring Defense Procurement to Come from Domestic Sources

    Science.gov (United States)

    2014-02-24

    they could produce U.S.-made athletic footwear for military personnel. H.R. 1960, the House-proposed National Defense Authorization Act (NDAA) for...Federal Prison Industries' Proposed Military Clothing Production Expansion - Assessing Existing Protections for Workers, Business, and FPI's Federal...and purpose of the Berry Amendment and legislative proposals to amend the application of domestic source restrictions, as well as potential options

  3. OSS4EVA: Using Open-Source Tools to Fulfill Digital Preservation Requirements

    Directory of Open Access Journals (Sweden)

    Heidi Dowding

    2016-10-01

    Full Text Available This paper builds on the findings of a workshop held at the 2015 International Conference on Digital Preservation (iPRES), entitled “Using Open-Source Tools to Fulfill Digital Preservation Requirements” (OSS4PRES hereafter). This day-long workshop brought together participants from across the library and archives community, including practitioners, proprietary vendors, and representatives from open-source projects. The resulting conversations were surprisingly revealing: while OSS’ significance within the preservation landscape was made clear, participants noted that there are a number of roadblocks that discourage or altogether prevent its use in many organizations. Overcoming these challenges will be necessary to further widespread, sustainable OSS adoption within the digital preservation community. This article will mine the rich discussions that took place at OSS4PRES to (1) summarize the workshop’s key themes and major points of debate, (2) provide a comprehensive analysis of the opportunities, gaps, and challenges that using OSS entails at a philosophical, institutional, and individual level, and (3) offer a tangible set of recommendations for future work designed to broaden community engagement and enhance the sustainability of open-source initiatives, drawing on both participants’ experience as well as additional research.

  4. Ozonation for source treatment of pharmaceuticals in hospital wastewater - ozone lifetime and required ozone dose

    DEFF Research Database (Denmark)

    Hansen, Kamilla Marie Speht; Spiliotopoulou, Aikaterini; Chhetri, Ravi Kumar

    2016-01-01

    Ozonation aimed at removing pharmaceuticals was studied in an effluent from an experimental pilot system using staged moving bed biofilm reactor (MBBR) tanks for the optimal biological treatment of wastewater from a medical care unit of Aarhus University Hospital. Dissolved organic carbon (DOC) and pH in the samples varied considerably, and the effect of these two parameters on ozone lifetime and on the efficiency of ozone in removing pharmaceuticals was determined. The pH in the effluent varied from 5.0 to 9.0, resulting in approximately a doubling of the required ozone dose at the highest pH for each pharmaceutical. DOC varied from 6 to 20 mg-DOC/L. The ozone required for removing each pharmaceutical varied linearly with DOC, and thus ozone doses normalized to DOC (specific ozone dose) agreed between water samples (typically within 15%). At neutral pH the specific ozone dose required

  5. Probing Large-scale Coherence between Spitzer IR and Chandra X-Ray Source-subtracted Cosmic Backgrounds

    Energy Technology Data Exchange (ETDEWEB)

    Cappelluti, N.; Urry, M. [Yale Center for Astronomy and Astrophysics, P.O. Box 208120, New Haven, CT 06520 (United States); Arendt, R. [University of Maryland, Baltimore County, 1000 Hilltop Circle, Baltimore, MD 21250 (United States); Kashlinsky, A. [Observational Cosmology Laboratory, NASA Goddard Space Flight Center, Code 665, Greenbelt, MD 20771 (United States); Li, Y.; Hasinger, G. [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States); Helgason, K. [Department of Astronomy, Yale University, P.O. Box 208101, New Haven, CT 06520 (United States); Natarajan, P. [Max Planck Institute for Astrophysics, Karl-Schwarzschild-Str. 1, D-85748 Garching (Germany); Finoguenov, A. [Max-Planck-Institut für extraterrestrische Physik, Postfach 1312, D-85741, Garching bei München (Germany)

    2017-09-20

    We present new measurements of the large-scale clustering component of the cross-power spectra of the source-subtracted Spitzer-IRAC cosmic infrared background and Chandra-ACIS cosmic X-ray background surface brightness fluctuations. Our investigation uses data from the Chandra Deep Field South, Hubble Deep Field North, Extended Groth Strip/AEGIS field, and UDS/SXDF surveys, comprising 1160 Spitzer hours and ∼12 Ms of Chandra data collected over a total area of 0.3 deg². We report the first (>5σ) detection of a cross-power signal on large angular scales >20″ between the [0.5-2] keV and the 3.6 and 4.5 μm bands, at ∼5σ and 6.3σ significance, respectively. The correlation with harder X-ray bands is marginally significant. Comparing the new observations with existing models for the contribution of the known unmasked source population at z < 7, we find an excess of about an order of magnitude at 5σ confidence. We discuss possible interpretations for the origin of this excess in terms of the contribution from accreting early black holes (BHs), including both direct collapse BHs and primordial BHs, as well as from scattering in the interstellar medium and intra-halo light.
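
    The basic measurement behind such analyses is the azimuthally averaged cross-power spectrum of two mean-subtracted maps, P₁₂(q) = ⟨F₁(q) F₂*(q)⟩. The sketch below computes it for synthetic Gaussian maps that share a common component; map size, amplitudes and binning are arbitrary illustrative choices, not the survey data.

        # Azimuthally averaged cross-power spectrum of two synthetic 2D maps.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 256
        common = rng.standard_normal((n, n))
        map_ir = common + 0.5 * rng.standard_normal((n, n))  # stand-in for the IR map
        map_x = common + 2.0 * rng.standard_normal((n, n))   # stand-in for the X-ray map

        f1 = np.fft.fft2(map_ir - map_ir.mean())
        f2 = np.fft.fft2(map_x - map_x.mean())
        cross = (f1 * np.conj(f2)).real / n**2               # 2D cross power (arbitrary units)

        # Azimuthal average in radial bins of spatial frequency |q|.
        qx, qy = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
        q = np.hypot(qx, qy)
        bins = np.linspace(0.0, 0.5, 20)
        which = np.digitize(q.ravel(), bins)
        p12 = [cross.ravel()[which == b].mean() for b in range(1, len(bins))]
        print("P_12 in the first few |q| bins:", np.round(p12[:4], 3))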

  6. DEVELOPMENT OF THE MODEL OF GALACTIC INTERSTELLAR EMISSION FOR STANDARD POINT-SOURCE ANALYSIS OF FERMI LARGE AREA TELESCOPE DATA

    Energy Technology Data Exchange (ETDEWEB)

    Acero, F.; Ballet, J. [Laboratoire AIM, CEA-IRFU/CNRS/Université Paris Diderot, Service d’Astrophysique, CEA Saclay, F-91191 Gif sur Yvette (France); Ackermann, M.; Buehler, R. [Deutsches Elektronen Synchrotron DESY, D-15738 Zeuthen (Germany); Ajello, M. [Department of Physics and Astronomy, Clemson University, Kinard Lab of Physics, Clemson, SC 29634-0978 (United States); Albert, A.; Baldini, L.; Bloom, E. D.; Bottacini, E.; Caliandro, G. A.; Cameron, R. A. [W. W. Hansen Experimental Physics Laboratory, Kavli Institute for Particle Astrophysics and Cosmology, Department of Physics and SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305 (United States); Barbiellini, G. [Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, I-34127 Trieste (Italy); Bastieri, D. [Istituto Nazionale di Fisica Nucleare, Sezione di Padova, I-35131 Padova (Italy); Bellazzini, R. [Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, I-56127 Pisa (Italy); Bissaldi, E. [Istituto Nazionale di Fisica Nucleare, Sezione di Bari, I-70126 Bari (Italy); Bonino, R. [Istituto Nazionale di Fisica Nucleare, Sezione di Torino, I-10125 Torino (Italy); Brandt, T. J.; Buson, S. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Bregeon, J. [Laboratoire Univers et Particules de Montpellier, Université Montpellier, CNRS/IN2P3, Montpellier (France); Bruel, P., E-mail: isabelle.grenier@cea.fr, E-mail: casandjian@cea.fr [Laboratoire Leprince-Ringuet, École polytechnique, CNRS/IN2P3, Palaiseau (France); and others

    2016-04-01

    Most of the celestial γ rays detected by the Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope originate from the interstellar medium when energetic cosmic rays interact with interstellar nucleons and photons. Conventional point-source and extended-source studies rely on the modeling of this diffuse emission for accurate characterization. Here, we describe the development of the Galactic Interstellar Emission Model (GIEM), which is the standard adopted by the LAT Collaboration and is publicly available. This model is based on a linear combination of maps for interstellar gas column density in Galactocentric annuli and for the inverse-Compton emission produced in the Galaxy. In the GIEM, we also include large-scale structures like Loop I and the Fermi bubbles. The measured gas emissivity spectra confirm that the cosmic-ray proton density decreases with Galactocentric distance beyond 5 kpc from the Galactic Center. The measurements also suggest a softening of the proton spectrum with Galactocentric distance. We observe that the Fermi bubbles have boundaries with a shape similar to a catenary at latitudes below 20° and we observe an enhanced emission toward their base extending in the north and south Galactic directions and located within ∼4° of the Galactic Center.

  7. Influence of starch source in the required hydrolysis time for the ...

    African Journals Online (AJOL)

    Jose Luis Montañez Soto

    2012-08-28

    Aug 28, 2012 ... The maltodextrins are defined by the Food and Drug Administration .... using a Brookfield viscometer LVT model, serial number 59073 .... mechanical properties and high resistance to chemical or ... understood that these mathematical expressions were ... predicted satisfactorily the required hydrolysis time to.

  8. Dietary items as possible sources of 137Cs in large carnivores in the Gorski Kotar forest ecosystem, Western Croatia

    International Nuclear Information System (INIS)

    Šprem, Nikica; Piria, Marina; Barišić, Domagoj; Kusak, Josip; Barišić, Delko

    2016-01-01

    The mountain forest ecosystem of Gorski Kotar is distant from any significant sources of environmental pollution, though recent findings have revealed that this region is among the most intensely 137Cs-contaminated areas in Croatia. Therefore, the aim of this study was to investigate the 137Cs and 40K load in large predator species in the mountain forest ecosystem. Radionuclide mass activities were determined by the gamma-spectrometric method in the muscle tissue of brown bear (47), wolf (7), lynx (1) and golden jackal (2). The highest 137Cs mass activity was found in lynx (153 Bq kg−1), followed by brown bear (132 Bq kg−1), wolf (22.2 Bq kg−1), and golden jackal (2.48 Bq kg−1). Analysis of 63 samples of dietary items suggests that not all of them are potentially dominant sources of 137Cs for wildlife. The most important sources of radionuclides for the higher parts of the food chain in the study area were found to be the mushroom wood hedgehog (Hydnum repandum), with a transfer factor (TF) of 5.166, and blueberry (Vaccinium myrtillus) among plant species (TF = 2.096). Food items of animal origin showed higher mass activities of radionuclides and are therefore possible moderate bioindicators of environmental pollution. The results also revealed that unknown wild animal food sources may be a caesium source in the study region, and further study is required to illuminate this issue. - Highlights: • Radionuclide mass activities were determined by the gamma-spectrometric method. • The highest 137Cs mass activities were 153 Bq kg−1 in lynx, 132 Bq kg−1 in brown bear and 22.2 Bq kg−1 in wolf. • The best bioindicators are wood hedgehog (TF = 5.166) and blueberry (TF = 2.096).

  9. FR-type radio sources in COSMOS: relation of radio structure to size, accretion modes and large-scale environment

    Science.gov (United States)

    Vardoulaki, Eleni; Faustino Jimenez Andrade, Eric; Delvecchio, Ivan; Karim, Alexander; Smolčić, Vernesa; Magnelli, Benjamin; Bertoldi, Frank; Schinnener, Eva; Sargent, Mark; Finoguenov, Alexis; VLA COSMOS Team

    2018-01-01

    The radio sources associated with active galactic nuclei (AGN) can exhibit a variety of radio structures, from simple to more complex, giving rise to a variety of classification schemes. The question which still remains open, given that deeper surveys are revealing new populations of radio sources, is whether this plethora of radio structures can be attributed to the physical properties of the host or to the environment. Here we present an analysis of the radio structure of radio-selected AGN from the VLA-COSMOS Large Project at 3 GHz (JVLA-COSMOS; Smolčić et al.) in relation to: 1) their linear projected size, 2) the Eddington ratio, and 3) the environment their hosts lie within. We classify these as FRI (jet-like) and FRII (lobe-like) based on the FR-type classification scheme, and compare them to a sample of jet-less radio AGN in JVLA-COSMOS. We measure their linear projected sizes using a semi-automatic machine learning technique. Their Eddington ratios are calculated from X-ray data available for COSMOS. As environmental probes we take the X-ray groups (hundreds of kpc) and the density fields (∼Mpc scale) in COSMOS. We find that FRII radio sources are on average larger than FRIs, which agrees with the literature. But contrary to past studies, we find no dichotomy in FR objects in JVLA-COSMOS given their Eddington ratios, as on average they exhibit similar values. Furthermore, our results show that the large-scale environment does not explain the observed dichotomy in lobe- and jet-like FR-type objects, as both types are found in similar environments; it does, however, affect the shape of the radio structure, introducing bends for objects closer to the centre of an X-ray group.

  10. New source terms and the implications for emergency planning requirements at nuclear power plants in the United States

    International Nuclear Information System (INIS)

    Kaiser, G.D.; Cheok, M.C.

    1987-01-01

    This paper begins with a brief review of current approaches to source term driven changes to NRC emergency planning requirements and addresses significant differences between them. Approaches by IDCOR and EPRI, industry submittals to NRC, and alternative risk-based evaluations have been considered. Important issues are discussed, such as the role of Protective Action Guides in determining the radius of the emergency planning zone (EPZ). The significance of current trends towards the prediction of longer warning times and longer durations of release in new source terms is assessed. These trends may help to relax the current notification time requirements. Finally, the implications of apparent support in the regulations for a threshold in warning time, beyond which ad hoc protective measures are adequate, are discussed

  11. On-site meteorological instrumentation requirements to characterize diffusion from point sources: workshop report. Final report Sep 79-Sep 80

    International Nuclear Information System (INIS)

    Strimaitis, D.; Hoffnagle, G.; Bass, A.

    1981-04-01

    Results of a workshop entitled 'On-Site Meteorological Instrumentation Requirements to Characterize Diffusion from Point Sources' are summarized and reported. The workshop was sponsored by the U.S. Environmental Protection Agency in Raleigh, North Carolina, on January 15-17, 1980. Its purpose was to provide EPA with a thorough examination of the meteorological instrumentation and data collection requirements needed to characterize airborne dispersion of air contaminants from point sources and to recommend, based on an expert consensus, specific measurement techniques and accuracies. Secondary purposes of the workshop were to (1) make recommendations to the National Weather Service (NWS) about collecting and archiving meteorological data that would best support air quality dispersion modeling objectives and (2) make recommendations on standardization of meteorological data reporting and quality assurance programs

  12. Review of window and filter requirements for commissioning of the Advanced Photon Source insertion device beamlines

    International Nuclear Information System (INIS)

    Kuzay, T.M.; Wang, Zhibi.

    1994-01-01

    The Advanced Photon Source (APS) is building 16 insertion device (ID) front ends for the first phase of the project. Eleven of these are to be equipped with the APS Undulator A and the other five with a Wiggler-A-type source. The Undulator A front ends are designed to operate in a "windowless" mode using an APS-designed differential pump. However, during beamline commissioning and early operations of the storage ring, it is prudent to install windows to ensure storage ring vacuum safety before easing into windowless operation. The window designed for this interim period, however, may not meet all the needs of a user's scientific program. In the early phases of the project, through commissioning and the start of operations, such a window will permit the user to prepare for his program, while allowing both the user and the facility operators to gain experience for safe phasing into eventual windowless operations. In this report, we present analysis and design options for a variety of windows particularly suited to either the APS Undulator A front ends or as user windows located in the first optics enclosure (FOE)

  13. A large source of dust missing in Particulate Matter emission inventories? Wind erosion of post-fire landscapes

    Directory of Open Access Journals (Sweden)

    N.S. Wagenbrenner

    2017-02-01

    Wind erosion of soils burned by wildfire contributes substantial particulate matter (PM) in the form of dust to the atmosphere, but the magnitude of this dust source is largely unknown. It is important to accurately quantify dust emissions because they can impact human health, degrade visibility, exacerbate dust-on-snow issues (including snowmelt timing, snow chemistry, and avalanche danger), and affect ecological and biogeochemical cycles, precipitation regimes, and the Earth's radiation budget. We used a novel modeling approach in which local-scale winds were used to drive a high-resolution dust emission model parameterized for burned soils to provide a first estimate of post-fire PM emissions. The dust emission model was parameterized with dust flux measurements from a 2010 fire scar. Here we present a case study to demonstrate the ability of the modeling framework to capture the onset and dynamics of a post-fire dust event and then use the modeling framework to estimate PM emissions from burn scars left by wildfires in U.S. western sagebrush landscapes during 2012. Modeled emissions from 1.2 million ha of burned soil totaled 32.1 Tg (11.7–352 Tg) of dust as PM10 and 12.8 Tg (4.68–141 Tg) as PM2.5. Despite the relatively large uncertainties in these estimates and a number of underlying assumptions, these first estimates of annual post-fire dust emissions suggest that post-fire PM emissions could substantially increase current annual PM estimates in the U.S. National Emissions Inventory during high fire activity years. Given the potential for post-fire scars to be a large source of PM, further on-site PM flux measurements are needed to improve emission parameterizations and constrain these first estimates.

  14. Automated classification of seismic sources in a large database: a comparison of Random Forests and Deep Neural Networks.

    Science.gov (United States)

    Hibert, Clement; Stumpf, André; Provost, Floriane; Malet, Jean-Philippe

    2017-04-01

    In the past decades, the increasing quality of seismic sensors and the capability to transfer large quantities of data remotely have led to a fast densification of local, regional and global seismic networks for near real-time monitoring of crustal and surface processes. This technological advance permits the use of seismology to document geological and natural/anthropogenic processes (volcanoes, ice-calving, landslides, snow and rock avalanches, geothermal fields), but has also led to an ever-growing quantity of seismic data. This wealth of seismic data makes the construction of complete seismicity catalogs, which include earthquakes but also other sources of seismic waves, more challenging and very time-consuming, as this critical pre-processing stage is classically done by human operators and because hundreds of thousands of seismic signals have to be processed. To overcome this issue, the development of automatic methods for the processing of continuous seismic data appears to be a necessity. The classification algorithm should satisfy the need for a method that is robust, precise and versatile enough to be deployed to monitor the seismicity in very different contexts. In this study, we evaluate the ability of two machine learning algorithms, Random Forest and Deep Neural Network classifiers, for the analysis of seismic sources at the Piton de la Fournaise volcano. We gathered a catalog of more than 20,000 events belonging to 8 classes of seismic sources. We define 60 attributes, based on the waveform, the frequency content and the polarization of the seismic waves, to parameterize the recorded seismic signals. We show that both algorithms provide similar positive classification rates, with values exceeding 90% of the events. When trained with a sufficient number of events, the rate of positive identification can reach 99%. These very high rates of positive identification open the perspective of an operational implementation of these algorithms for near-real time monitoring of
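
    The Random Forest half of such a workflow can be sketched in a few lines with scikit-learn; the attribute file name, column layout, and hyperparameters below are hypothetical placeholders, since the study's 60 attributes and 8 classes come from its own catalog:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import classification_report
        from sklearn.model_selection import train_test_split

        # Hypothetical attribute table: one row per event, 60 waveform/spectral/
        # polarization attributes followed by an integer label (one of 8 classes).
        data = np.loadtxt("event_attributes.csv", delimiter=",", skiprows=1)
        X, y = data[:, :60], data[:, 60].astype(int)

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.25, stratify=y, random_state=0)

        clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
        clf.fit(X_train, y_train)

        # Per-class precision and recall; the study reports >90% correct classification.
        print(classification_report(y_test, clf.predict(X_test)))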

  15. The Ĝ Infrared Search for Extraterrestrial Civilizations with Large Energy Supplies. III. The Reddest Extended Sources in WISE

    Science.gov (United States)

    Griffith, Roger L.; Wright, Jason T.; Maldonado, Jessica; Povich, Matthew S.; Sigurđsson, Steinn; Mullan, Brendan

    2015-04-01

    Nearby Type III (galaxy-spanning) Kardashev supercivilizations would have high mid-infrared (MIR) luminosities. We have used the Wide-field Infrared Survey Explorer (WISE) to survey ∼1 × 10^5 galaxies for extreme MIR emission, 10^3 times more galaxies than the only previous such search. We have calibrated the WISE All-sky Catalog pipeline products to improve their photometry for extended sources. We present 563 extended sources with |b| ≥ 10° and red MIR colors, having visually vetted them to remove artifacts. No galaxies in our sample host an alien civilization reprocessing more than 85% of its starlight into the MIR, and only 50 galaxies, including Arp 220, have MIR luminosities consistent with >50% reprocessing. Ninety of these (likely) extragalactic sources have little literature presence; in most cases, they are likely barely resolved galaxies or pairs of galaxies undergoing large amounts of star formation. Five are new to science and deserve further study. The Be star 48 Librae sits within a MIR nebula, and we suggest that it may be creating dust. WISE, 2MASS, and Spitzer imagery shows that IRAS 04287+6444 is consistent with a previously unnoticed, heavily extinguished cluster of young stellar objects. We identify five "passive" (i.e., red) spiral galaxies with unusually high MIR and low NUV luminosity. We search a set of H i dark galaxies for MIR emission and find none. These 90 poorly understood sources and 5 anomalous passive spirals deserve follow-up via both SETI and conventional astrophysics.

  16. Cryogenic refrigeration requirements for superconducting insertion devices in a light source

    International Nuclear Information System (INIS)

    Green, Michael A.; Green, Michael A.; Green, Michael A.

    2003-01-01

    This report discusses cryogenic cooling of superconducting insertion devices for modern light sources. The introductory part of the report discusses the difference between wigglers and undulators and how the bore temperature may affect the performance of the magnets. The steps one would take to reduce the gap between the cold magnet poles are discussed. One section of the report is devoted to showing how one would calculate the heat that enters the device. Sources of heat include heat entering through the vacuum chamber, heating due to stray electrons and synchrotron radiation, heating due to image currents on the bore, heat flow by conduction and radiation, and heat transfer into the cryostat through the magnet leads. A section of the report is devoted to cooling options such as small cryo-coolers and larger conventional helium refrigerators. This section contains a discussion as to when it is appropriate to use small coolers that do not have J-T circuits. Candidate small cryo-coolers are discussed in this section of the report. Cooling circuits for cooling with a conventional refrigerator are also discussed. A section of the report is devoted to vibration isolation and how this may affect how the cooling is attached to the device. Vibration isolation using straps is compared to vibration isolation using helium heat pipes. The vibration isolation of a conventional refrigeration system is also discussed. Finally, the cool down of an insertion device is discussed. The device can be cooled down either using liquid nitrogen and liquid helium or using the cooler that keeps the device cold over the long haul

  17. Mechanical problems in turbomachines, steam and gas turbines. Large steam turbine manufacturing requirements to fulfill customer needs for electric power

    International Nuclear Information System (INIS)

    Brazzini, R.

    1975-01-01

    The needs of customers for large steam turbines for electric power are examined. The choices and decisions made by the utility about the equipment are dealt with after considering the evolution of power demand on the French network. These decisions and choices mainly result from a technical and economic optimization of production equipment: choice of field-proven solutions, trend to lower steam characteristics, trend to higher output of the units (i.e. size effect), spreading standardization of machines and components (policy of technical as well as technological levels, i.e. mass production effect), and standardization of the external characteristics of units of the same output level and even of some main components. The requirements turbine manufacturers have to meet fall into two categories: on one side, gaining experience and know-how, the capability of making high quality experiments, output capacity, and the will to hold a high efficiency level; on the other side, meeting the technical requirements related to the contracts. Among these requirements, one can differentiate those dealing with the service expected from the turbine, which define the responsibility limits of the manufacturer, and those tending to ensure interchangeability, improve availability of the equipment, increase safety, and make operation and maintenance easier [fr

  18. Technical requirements for the actinide source-term waste test program

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, M.L.F.; Molecke, M.A.

    1993-10-01

    This document defines the technical requirements for a test program designed to measure time-dependent concentrations of actinide elements from contact-handled transuranic (CH TRU) waste immersed in brines similar to those found in the underground workings of the Waste Isolation Pilot Plant (WIPP). This test program will determine the influences of TRU waste constituents on the concentrations of dissolved and suspended actinides relevant to the performance of the WIPP. These influences (which include pH, Eh, complexing agents, sorbent phases, and colloidal particles) can affect solubilities and colloidal mobilization of actinides. The test concept involves fully inundating several TRU waste types with simulated WIPP brines in sealed containers and monitoring the concentrations of actinide species in the leachate as a function of time. The results from this program will be used to test numeric models of actinide concentrations derived from laboratory studies. The model is required for WIPP performance assessment with respect to the Environmental Protection Agency's 40 CFR Part 191B.

  19. Technical requirements for the actinide source-term waste test program

    International Nuclear Information System (INIS)

    Phillips, M.L.F.; Molecke, M.A.

    1993-10-01

    This document defines the technical requirements for a test program designed to measure time-dependent concentrations of actinide elements from contact-handled transuranic (CH TRU) waste immersed in brines similar to those found in the underground workings of the Waste Isolation Pilot Plant (WIPP). This test program will determine the influences of TRU waste constituents on the concentrations of dissolved and suspended actinides relevant to the performance of the WIPP. These influences (which include pH, Eh, complexing agents, sorbent phases, and colloidal particles) can affect solubilities and colloidal mobilization of actinides. The test concept involves fully inundating several TRU waste types with simulated WIPP brines in sealed containers and monitoring the concentrations of actinide species in the leachate as a function of time. The results from this program will be used to test numeric models of actinide concentrations derived from laboratory studies. The model is required for WIPP performance assessment with respect to the Environmental Protection Agency's 40 CFR Part 191B

  20. Performance requirements of an inertial-fusion-energy source for hydrogen production

    International Nuclear Information System (INIS)

    Hovingh, J.

    1983-01-01

    Performance of an inertial fusion system for the production of hydrogen is compared to a tandem-mirror-system hydrogen producer. Both systems use the General Atomic sulfur-iodine hydrogen-production cycle and produce no net electric power to the grid. An ICF-driven hydrogen producer will have higher system gains and lower electrical-consumption ratios than the design point for the tandem-mirror system if the inertial-fusion-energy gain ηQ > 8.8. For the ICF system to have a higher hydrogen production rate per unit fusion power than the tandem-mirror system requires that ηQ > 17. These can be achieved utilizing realistic laser and pellet performances

  1. Current radiation protection requirements governing the use of radioactive sources for medical purposes

    International Nuclear Information System (INIS)

    Dumenigo Gonzalez, Cruz; De la Fuente Punch, Andres; Quevedo Garcia, Jose; Diaz Guerra, Pedro; Lopez Forteza, Yamil

    2004-01-01

    With the recent endorsement of the Regulations 'Basic Standard for Radiological Safety' and of the Guides 'For the implementation of the Safety Regulations in the Practice of Radiotherapy' and 'For the implementation of the Safety Regulations in the Practice of Nuclear Medicine', the basic regulatory framework for the conduct of these two practices in the Republic of Cuba has been completed. The principles of these regulations are in full agreement with the recommendations of the International Atomic Energy Agency and the World Health Organization. For the purpose of establishing the policy that governs the implementation of these new regulations, the Regulatory Authority (CNSN) carried out an evaluation of the achievability of the requirements included. The present paper shows the results of the safety evaluation carried out for the users' institutions in the light of the new regulations. This evaluation was based on the analysis of the documentation submitted by users when applying for a licence, as well as on the results of the periodic inspections conducted by the Regulatory Authority. The authors developed a methodology for identifying the non-correspondences with the requirements of the prevailing regulations in each of the users' institutions. By categorizing the non-correspondences according to their importance for safety, the methodology makes it possible to establish a priority order for resolving them, so that the limited resources existing in the country can be used optimally. The authors consider that, in spite of the non-correspondences identified, safety in the conduct of the practices is not compromised

  2. Galaxy evolution and large-scale structure in the far-infrared. II. The IRAS faint source survey

    International Nuclear Information System (INIS)

    Lonsdale, C.J.; Hacking, P.B.; Conrow, T.P.; Rowan-Robinson, M.

    1990-01-01

    The new IRAS Faint Source Survey data base is used to confirm the conclusion of Hacking et al. (1987) that the 60 micron source counts fainter than about 0.5 Jy lie in excess of predictions based on nonevolving model populations. The existence of an anisotropy between the northern and southern Galactic caps discovered by Rowan-Robinson et al. (1986) and Needham and Rowan-Robinson (1988) is confirmed, and it is found to extend below their sensitivity limit to about 0.3 Jy in 60 micron flux density. The count anisotropy at f(60) greater than 0.3 Jy can be interpreted reasonably as due to the Local Supercluster; however, no one structure accounting for the fainter anisotropy can be easily identified in either optical or far-IR two-dimensional sky distributions. The far-IR galaxy sky distributions are considerably smoother than distributions from the published optical galaxy catalogs. It is likely that structures of the large size discussed here have been discriminated against in earlier studies due to insufficient volume sampling. 105 refs

  3. Attenuation of contaminant plumes in homogeneous aquifers: Sensitivity to source function at moderate to large peclet numbers

    International Nuclear Information System (INIS)

    Selander, W.N.; Lane, F.E.; Rowat, J.H.

    1995-05-01

    A groundwater mass transfer calculation is an essential part of the performance assessment for radioactive waste disposal facilities. AECL's IRUS (Intrusion Resistant Underground Structure) facility, which is designed for the near-surface disposal of low-level radioactive waste (LLRW), is to be situated in the sandy overburden at AECL's Chalk River Laboratories. Flow in the sandy aquifers at the proposed IRUS site is relatively homogeneous and advection-dominated (large Peclet numbers). Mass transfer along the mean direction of flow from the IRUS site may be described using the one-dimensional advection-dispersion equation, for which a Green's function representation of downstream radionuclide flux is convenient. This report shows that in advection-dominated aquifers, dispersive attenuation of initial contaminant releases depends principally on two time scales: the source duration and the pulse breakthrough time. Numerical investigation shows further that the maximum downstream flux or concentration depends on these time scales in a simple characteristic way that is minimally sensitive to the shape of the initial source pulse. (author). 11 refs., 2 tabs., 3 figs
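
    For reference, the one-dimensional advection-dispersion equation and the Green's function for an instantaneous unit release at x = 0 can be written in a generic form with constant pore velocity v and dispersion coefficient D (the notation here is ours, not necessarily the report's):

        \frac{\partial C}{\partial t} + v \frac{\partial C}{\partial x} = D \frac{\partial^2 C}{\partial x^2},
        \qquad
        G(x, t) = \frac{1}{\sqrt{4 \pi D t}} \exp\!\left[-\frac{(x - v t)^2}{4 D t}\right]

    The downstream concentration produced by a finite-duration source release rate m(t) then follows by convolution, C(x, t) = \int_0^t m(\tau)\, G(x, t - \tau)\, d\tau, which is why, at large Peclet numbers, the attenuation depends mainly on the source duration and on the breakthrough time, of order x/v.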

  4. Sources of machine-induced background in the ATLAS and CMS detectors at the CERN Large Hadron Collider

    Energy Technology Data Exchange (ETDEWEB)

    Bruce, R.; et al.,

    2013-11-21

    One source of experimental background in the CERN Large Hadron Collider (LHC) is particles entering the detectors from the machine. These particles are created in cascades, caused by upstream interactions of beam protons with residual gas molecules or collimators. We estimate the losses on the collimators with SixTrack and simulate the showers with FLUKA and MARS to obtain the flux and distribution of particles entering the ATLAS and CMS detectors. We consider some machine configurations used in the first LHC run, with focus on 3.5 TeV operation as in 2011. Results from FLUKA and MARS are compared and a very good agreement is found. An analysis of logged LHC data provides, for different processes, absolute beam loss rates, which are used together with further simulations of vacuum conditions to normalize the results to rates of particles entering the detectors. We assess the relative importance of background from elastic and inelastic beam-gas interactions, and the leakage out of the LHC collimation system, and show that beam-gas interactions are the dominating source of machine-induced background for the studied machine scenarios. Our results serve as a starting point for the experiments to perform further simulations in order to estimate the resulting signals in the detectors.

  5. SPITZER IRS SPECTRA OF LUMINOUS 8 μm SOURCES IN THE LARGE MAGELLANIC CLOUD: TESTING COLOR-BASED CLASSIFICATIONS

    International Nuclear Information System (INIS)

    Buchanan, Catherine L.; Kastner, Joel H.; Hrivnak, Bruce J.; Sahai, Raghvendra

    2009-01-01

    We present archival Spitzer Infrared Spectrograph (IRS) spectra of 19 luminous 8 μm selected sources in the Large Magellanic Cloud (LMC). The object classes derived from these spectra and from an additional 24 spectra in the literature are compared with classifications based on Two Micron All Sky Survey (2MASS)/MSX (J, H, K, and 8 μm) colors in order to test the 'JHK8' (Kastner et al.) classification scheme. The IRS spectra confirm the classifications of 22 of the 31 sources that can be classified under the JHK8 system. The spectroscopic classification of 12 objects that were unclassifiable in the JHK8 scheme allow us to characterize regions of the color-color diagrams that previously lacked spectroscopic verification, enabling refinements to the JHK8 classification system. The results of these new classifications are consistent with previous results concerning the identification of the most infrared-luminous objects in the LMC. In particular, while the IRS spectra reveal several new examples of asymptotic giant branch (AGB) stars with O-rich envelopes, such objects are still far outnumbered by carbon stars (C-rich AGB stars). We show that Spitzer IRAC/MIPS color-color diagrams provide improved discrimination between red supergiants and oxygen-rich and carbon-rich AGB stars relative to those based on 2MASS/MSX colors. These diagrams will enable the most luminous IR sources in Local Group galaxies to be classified with high confidence based on their Spitzer colors. Such characterizations of stellar populations will continue to be possible during Spitzer's warm mission through the use of IRAC [3.6]-[4.5] and 2MASS colors.

  6. Open-source web-enabled data management, analyses, and visualization of very large data in geosciences using Jupyter, Apache Spark, and community tools

    Science.gov (United States)

    Chaudhary, A.

    2017-12-01

    Current simulation models and sensors are producing high-resolution, high-velocity data in the geosciences domain. Knowledge discovery from these complex and large datasets requires tools that are capable of handling very large data and providing interactive data analytics features to researchers. To this end, Kitware and its collaborators are producing the open-source tools GeoNotebook, GeoJS, Gaia, and Minerva for geosciences that use hardware-accelerated graphics and advancements in parallel and distributed processing (Celery and Apache Spark) and can be loosely coupled to solve real-world use cases. GeoNotebook (https://github.com/OpenGeoscience/geonotebook) is co-developed by Kitware and NASA-Ames and is an extension to the Jupyter Notebook. It provides interactive visualization and Python-based analysis of geospatial data and, depending on the backend (KTile or GeoPySpark), can handle data sizes from hundreds of gigabytes to terabytes. GeoNotebook uses GeoJS (https://github.com/OpenGeoscience/geojs) to render very large geospatial data on the map using the WebGL and Canvas2D APIs. GeoJS is more than just a GIS library, as users can create scientific plots such as vector and contour plots and can embed InfoVis plots using D3.js. GeoJS aims for high-performance visualization and interactive data exploration of scientific and geospatial location-aware datasets and supports features such as Point, Line, and Polygon, and advanced features such as Pixelmap, Contour, Heatmap, and Choropleth. Our other open-source tool Minerva (https://github.com/kitware/minerva) is a geospatial application that is built on top of the open-source web-based data management system Girder (https://github.com/girder/girder), which provides the ability to access data from HDFS or Amazon S3 buckets and provides capabilities to perform visualization and analyses on geosciences data in a web environment using GDAL and GeoPandas wrapped in a unified API provided by Gaia (https

  7. General-purpose parallel algorithm based on CUDA for source pencils' deployment of large γ irradiator

    International Nuclear Information System (INIS)

    Yang Lei; Gong Xueyu; Wang Ling

    2013-01-01

    Combined with a standard mathematical model for evaluating the quality of deployment results, a new high-performance parallel algorithm for source pencil deployment was obtained by using a plant growth simulation algorithm fully parallelized with the CUDA execution model, so the corresponding code can run on a GPU. Several instances at various scales were then used to test the new version of the algorithm. The results show that, building on the advantages of the older versions, the performance of the new one is improved by more than 500 times compared with the CPU version, and by about 30 times compared with the CPU-plus-GPU hybrid version. The computation time of the new version is less than ten minutes for an irradiator whose activity is less than 111 PBq. For a single GTX275 GPU, the new version can handle irradiators of up to 167 PBq with a computation time of no more than 25 minutes, and with multiple GPUs this capacity can be extended further. Overall, the new GPU version of the algorithm can satisfy the source pencil deployment requirements of any domestic irradiator and is highly competitive. (authors)
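
    The step that benefits most from the GPU is scoring many candidate pencil arrangements at once, one thread per candidate. A minimal Numba/CUDA sketch of that idea is given below; the dose kernel, the variance-based quality metric, and all array shapes are illustrative assumptions, not the paper's actual mathematical model or its plant growth simulation search:

        import numpy as np
        from numba import cuda

        @cuda.jit
        def score_candidates(candidates, grid, scores):
            # candidates: (n_cand, n_pencils, 2) pencil positions; grid: (n_grid, 2)
            # evaluation points; scores: (n_cand,) dose-uniformity metric (lower is better).
            i = cuda.grid(1)
            if i < candidates.shape[0]:
                s = 0.0
                s2 = 0.0
                for g in range(grid.shape[0]):
                    dose = 0.0
                    for p in range(candidates.shape[1]):
                        dx = grid[g, 0] - candidates[i, p, 0]
                        dy = grid[g, 1] - candidates[i, p, 1]
                        dose += 1.0 / (dx * dx + dy * dy + 1.0)  # crude 1/r^2 proxy
                    s += dose
                    s2 += dose * dose
                n = grid.shape[0]
                scores[i] = s2 / n - (s / n) ** 2  # variance of dose over the grid

        candidates = np.random.rand(4096, 20, 2).astype(np.float32)
        grid = np.random.rand(256, 2).astype(np.float32)
        scores = np.zeros(len(candidates), dtype=np.float32)
        score_candidates[(len(candidates) + 255) // 256, 256](candidates, grid, scores)
        print("best candidate:", scores.argmin())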

  8. Pumping requirements and options for molecular beam epitaxy and gas source molecular beam epitaxy/chemical beam epitaxy

    International Nuclear Information System (INIS)

    McCollum, M.J.; Plano, M.A.; Haase, M.A.; Robbins, V.M.; Jackson, S.L.; Cheng, K.Y.; Stillman, G.E.

    1989-01-01

    This paper discusses the use of gas sources in growth by MBE as a result of current interest in the growth of InP/InGaAsP/InGaAs lattice matched to InP. For gas flows greater than a few sccm, pumping speed requirements dictate the use of turbomolecular or diffusion pumps. GaAs samples with high p-type mobilities have been grown with a diffusion-pumped molecular beam epitaxy system. According to the authors, this demonstration of the inherent cleanliness of a properly designed diffusion pumping system indicates that a diffusion pump is an excellent, inexpensive and reliable choice for growth by molecular beam epitaxy and gas source molecular beam epitaxy/chemical beam epitaxy

  9. Considerations for Probabilistic Analyses to Assess Potential Changes to Large-Break LOCA Definition for ECCS Requirements

    International Nuclear Information System (INIS)

    Wilkowski, G.; Rudland, D.; Wolterman, R.; Krishnaswamy, P.; Scott, P.; Rahman, S.; Fairbanks, C.

    2002-01-01

    The U.S. NRC has undertaken a study to explore changes to the body of Part 50 of the U.S. Code of Federal Regulations to incorporate risk-informed attributes. One of the regulations selected for this study is 10 CFR 50.46, "Acceptance Criteria for Emergency Core Cooling Systems for Light-Water Nuclear Power Reactors". These changes will potentially enhance safety and reduce unnecessary burden on utilities. Specific attention is being paid to redefining the maximum pipe break size for LB-LOCA by determining the spectrum of pipe diameter (or equivalent opening area) versus failure probabilities. In this regard, it is necessary to ensure that all contributors to probabilistic failures are accounted for when redefining ECCS requirements. This paper describes initial efforts being conducted for the U.S. NRC on redefining the LB-LOCA requirements. Consideration of the major contributors to probabilistic failure, and deterministic aspects for modeling them, are being addressed. At this time three major contributors to probabilistic failures are being considered: (1) Analyses of the failure probability from cracking mechanisms that could involve rupture or large opening areas from either through-wall or surface flaws, whether the pipe system was approved for leak-before-break (LBB) or not. (2) Future degradation mechanisms, such as the recent occurrence of PWSCC in PWR piping, need to be included. This degradation mechanism was not recognized as an issue when LBB was approved for many plants or when the initial risk-informed inspection plans were developed. (3) Indirect causes of loss of pressure-boundary integrity other than cracks in the pipe system should also be included. The failure probability from probabilistic fracture mechanics will not account for these other indirect causes that could result in a large opening in the pressure boundary, i.e., failure of bolts on a steam generator manway, flanges, and valves; outside force damage from the

  10. Infrared-faint radio sources are at high redshifts. Spectroscopic redshift determination of infrared-faint radio sources using the Very Large Telescope

    Science.gov (United States)

    Herzog, A.; Middelberg, E.; Norris, R. P.; Sharp, R.; Spitler, L. R.; Parker, Q. A.

    2014-07-01

    Context. Infrared-faint radio sources (IFRS) are characterised by relatively high radio flux densities and associated faint or even absent infrared and optical counterparts. The resulting extremely high radio-to-infrared flux density ratios up to several thousands were previously known only for high-redshift radio galaxies (HzRGs), suggesting a link between the two classes of object. However, the optical and infrared faintness of IFRS makes their study difficult. Prior to this work, no redshift was known for any IFRS in the Australia Telescope Large Area Survey (ATLAS) fields which would help to put IFRS in the context of other classes of object, especially of HzRGs. Aims: This work aims at measuring the first redshifts of IFRS in the ATLAS fields. Furthermore, we test the hypothesis that IFRS are similar to HzRGs, that they are higher-redshift or dust-obscured versions of these massive galaxies. Methods: A sample of IFRS was spectroscopically observed using the Focal Reducer and Low Dispersion Spectrograph 2 (FORS2) at the Very Large Telescope (VLT). The data were calibrated based on the Image Reduction and Analysis Facility (IRAF) and redshifts extracted from the final spectra, where possible. This information was then used to calculate rest-frame luminosities, and to perform the first spectral energy distribution modelling of IFRS based on redshifts. Results: We found redshifts of 1.84, 2.13, and 2.76, for three IFRS, confirming the suggested high-redshift character of this class of object. These redshifts and the resulting luminosities show IFRS to be similar to HzRGs, supporting our hypothesis. We found further evidence that fainter IFRS are at even higher redshifts. Conclusions: Considering the similarities between IFRS and HzRGs substantiated in this work, the detection of IFRS, which have a significantly higher sky density than HzRGs, increases the number of active galactic nuclei in the early universe and adds to the problems of explaining the formation of

  11. Estimation of distance error by fuzzy set theory required for strength determination of HDR (192)Ir brachytherapy sources.

    Science.gov (United States)

    Kumar, Sudhir; Datta, D; Sharma, S D; Chourasiya, G; Babu, D A R; Sharma, D N

    2014-04-01

    Verification of the strength of high dose rate (HDR) (192)Ir brachytherapy sources on receipt from the vendor is an important component of an institutional quality assurance program. Either reference air-kerma rate (RAKR) or air-kerma strength (AKS) is the recommended quantity to specify the strength of gamma-emitting brachytherapy sources. The use of a Farmer-type cylindrical ionization chamber of sensitive volume 0.6 cm(3) is one of the recommended methods for measuring the RAKR of HDR (192)Ir brachytherapy sources. While using the cylindrical chamber method, it is required to determine the positioning error of the ionization chamber with respect to the source, which is called the distance error. An attempt has been made to apply fuzzy set theory to estimate the subjective uncertainty associated with the distance error. A simplified approach of applying fuzzy set theory to the quantification of the uncertainty associated with the distance error has been proposed. In order to express the uncertainty in the framework of fuzzy sets, the uncertainty index was estimated and was found to be within 2.5%, which further indicates that the possibility of error in measuring such a distance may be of this order. It is observed that the relative distance l_i estimated by the analytical method and by the fuzzy set theoretic approach are consistent with each other. The crisp values of l_i estimated using the analytical method lie within the bounds computed using fuzzy set theory. This indicates that the l_i values estimated using analytical methods are within 2.5% uncertainty. This value of uncertainty in distance measurement should be incorporated in the uncertainty budget while estimating the expanded uncertainty in HDR (192)Ir source strength measurement.
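
    A minimal sketch of how a triangular fuzzy distance error can be propagated through the inverse-square dependence of the chamber reading on distance is shown below; the membership function, its spread, and the way the uncertainty index is summarized here are illustrative assumptions, not the paper's exact construction:

        import numpy as np

        # Triangular fuzzy number for the source-to-chamber distance (cm):
        # nominal value with an assumed +/- positioning spread (illustrative).
        d_nominal, spread = 10.0, 0.125

        alphas = np.linspace(0.0, 1.0, 11)
        half_widths = []
        for a in alphas:
            lo = d_nominal - (1.0 - a) * spread   # alpha-cut lower bound
            hi = d_nominal + (1.0 - a) * spread   # alpha-cut upper bound
            # Propagate through the inverse-square distance correction and record
            # the relative half-width of the resulting interval.
            k_lo, k_hi = (d_nominal / hi) ** 2, (d_nominal / lo) ** 2
            half_widths.append(0.5 * (k_hi - k_lo))

        # Relative half-width at the support (alpha = 0), in percent: a simple
        # stand-in for an "uncertainty index" of the distance error.
        print("uncertainty index ~ %.1f%%" % (100.0 * half_widths[0]))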

  12. A review of the feasibility of using a system based upon photobiology as a large scale source of renewable energy

    Energy Technology Data Exchange (ETDEWEB)

    Cogdell, R J; Hawthornthwaite, A M

    1992-07-01

    This report critically reviews the feasibility of using a photobiological system to produce energy on a commercial scale. Two possible candidates have been identified. The first is to use our knowledge of the initial light-driven charge separation reaction in photosynthesis as a blueprint on which chemists could base the design of more efficient second generation solar cells. The details of this possibility are being considered in a separate report. The second is to harness the ability of anaerobic photosynthetic bacteria to support light-driven hydrogen production. This second option is the definite front runner; however, as with other schemes to harness solar energy, it does have the drawback that large surface areas will be required in order to produce commercially significant quantities of energy. A detailed evaluation of photobiological hydrogen production has been presented. (author).

  13. A large mantle water source for the northern San Andreas Fault System: A ghost of subduction past

    Science.gov (United States)

    Kirby, Stephen H.; Wang, Kelin; Brocher, Thomas M.

    2014-01-01

    Recent research indicates that the shallow mantle of the Cascadia subduction margin under the near-coastal Pacific Northwest U.S. is cold and partially serpentinized, storing large quantities of water in this wedge-shaped region. Such a wedge probably formed to the south in California during an earlier period of subduction. We show by numerical modeling that after subduction ceased with the creation of the San Andreas Fault System (SAFS), the mantle wedge warmed, slowly releasing its water over a period of more than 25 Ma by serpentine dehydration into the crust above. This deep, long-term water source could facilitate fault slip in the San Andreas Fault System at low shear stresses by raising pore pressures in a broad region above the wedge. Moreover, the location and breadth of the water release from this model give insights into the position and breadth of the SAFS. Such a mantle source of water also likely plays a role in the occurrence of the Non-Volcanic Tremor (NVT) that has been reported along the SAFS in central California. This process of water release from mantle depths could also mobilize mantle serpentinite from the wedge above the dehydration front, permitting upward emplacement of serpentinite bodies by faulting or by diapiric ascent. Specimens of serpentinite collected from tectonically emplaced serpentinite blocks along the SAFS show mineralogical and structural evidence of high fluid pressures during ascent from depth. Serpentinite dehydration may also lead to tectonic mobility along other plate boundaries that succeed subduction, such as other continental transforms, collision zones, or present-day subduction zones where spreading centers are subducting.

  14. Constraining the sources and cycling of dissolved organic carbon in a large oligotrophic lake using radiocarbon analyses

    Science.gov (United States)

    Zigah, Prosper K.; Minor, Elizabeth C.; McNichol, Ann P.; Xu, Li; Werne, Josef P.

    2017-07-01

    We measured the concentrations and isotopic compositions of solid phase extracted (SPE) dissolved organic carbon (DOC) and high molecular weight (HMW) DOC and their constituent organic components in order to better constrain the sources and cycling of DOC in a large oligotrophic lacustrine system (Lake Superior, North America). SPE DOC constituted a significant proportion (41-71%) of the lake DOC relative to HMW DOC (10-13%). Substantial contribution of 14C-depleted components to both SPE DOC (Δ14C = 25-43‰) and HMW DOC (Δ14C = 22-32‰) was evident during spring mixing, and depressed their radiocarbon values relative to the lake dissolved inorganic carbon (DIC; Δ14C ∼ 59‰). There was preferential removal of 14C-depleted (older) and thermally recalcitrant components from HMW DOC and SPE DOC in the summer. Contemporary photoautotrophic addition to HMW DOC was observed during summer stratification in contrast to SPE DOC, which decreased in concentration during stratification. Serial thermal oxidation radiocarbon analysis revealed a diversity of sources (both contemporary and older) within the SPE DOC, and also showed distinct components within the HMW DOC. The thermally labile components of HMW DOC were 14C-enriched and are attributed to heteropolysaccharides (HPS), peptides/amide and amino sugars (AMS) relative to the thermally recalcitrant components reflecting the presence of older material, perhaps carboxylic-rich alicyclic molecules (CRAM). The solvent extractable lipid-like fraction of HMW DOC was very 14C-depleted (as old as 1270-2320 14C years) relative to the carbohydrate-like and protein-like substances isolated by acid hydrolysis of HMW DOC. Our data constrain relative influences of contemporary DOC and old DOC, and DOC cycling in a modern freshwater ecosystem.

  15. An Integrated Pipeline of Open Source Software Adapted for Multi-CPU Architectures: Use in the Large-Scale Identification of Single Nucleotide Polymorphisms

    Directory of Open Access Journals (Sweden)

    B. Jayashree

    2007-01-01

    The large amounts of EST sequence data available from a single species of an organism, as well as for several species within a genus, provide an easy source for the identification of intra- and interspecies single nucleotide polymorphisms (SNPs). In the case of model organisms, the data available are numerous, given the degree of redundancy in the deposited EST data. There are several available bioinformatics tools that can be used to mine this data; however, using them requires a certain level of expertise: the tools have to be used sequentially with accompanying format conversion, and steps like clustering and assembly of sequences become time-intensive jobs even for moderately sized datasets. We report here a pipeline of open source software, extended to run on multiple CPU architectures, that can be used to mine large EST datasets for SNPs and identify restriction sites for assaying the SNPs, so that cost-effective CAPS assays can be developed for SNP genotyping in genetics and breeding applications. At the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), the pipeline has been implemented to run on a Paracel high-performance system consisting of four dual AMD Opteron processors running Linux with MPICH. The pipeline can be accessed through user-friendly web interfaces at http://hpc.icrisat.cgiar.org/PBSWeb and is available on request for academic use. We have validated the developed pipeline by mining chickpea ESTs for interspecies SNPs, developing CAPS assays for SNP genotyping, and confirming restriction digestion patterns at the sequence level.
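
    The restriction-site step of such a pipeline (finding an enzyme that cuts one SNP allele but not the other, so that the SNP can be scored as a CAPS marker on a gel) can be sketched with Biopython; the 41-bp allele sequences and the enzyme shortlist below are made-up examples, not data from the chickpea ESTs:

        from Bio.Seq import Seq
        from Bio.Restriction import RestrictionBatch

        # Hypothetical flanking sequence around a [G/A] SNP (variant base at
        # position 21); a real pipeline would take these from assembled EST contigs.
        ref = Seq("ACGTACGTACGTACGTACGTGAATTCGTACGTACGTACGTA")
        alt = Seq("ACGTACGTACGTACGTACGTAAATTCGTACGTACGTACGTA")

        for enz in RestrictionBatch(["EcoRI", "HindIII", "TaqI", "MspI"]):
            cuts_ref, cuts_alt = enz.search(ref), enz.search(alt)
            if bool(cuts_ref) != bool(cuts_alt):
                # The enzyme distinguishes the alleles: a candidate CAPS assay.
                print(enz, "cuts the", "reference" if cuts_ref else "alternate",
                      "allele at", cuts_ref or cuts_alt)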

  16. Search for Extended Sources in the Galactic Plane Using Six Years of Fermi -Large Area Telescope Pass 8 Data above 10 GeV

    Energy Technology Data Exchange (ETDEWEB)

    Ackermann, M.; Buehler, R. [Deutsches Elektronen Synchrotron DESY, D-15738 Zeuthen (Germany); Ajello, M. [Department of Physics and Astronomy, Clemson University, Kinard Lab of Physics, Clemson, SC 29634-0978 (United States); Baldini, L. [Università di Pisa and Istituto Nazionale di Fisica Nucleare, Sezione di Pisa I-56127 Pisa (Italy); Ballet, J. [Laboratoire AIM, CEA-IRFU/CNRS/Université Paris Diderot, Service d’Astrophysique, CEA Saclay, F-91191 Gif sur Yvette (France); Barbiellini, G. [Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, I-34127 Trieste (Italy); Bastieri, D. [Istituto Nazionale di Fisica Nucleare, Sezione di Padova, I-35131 Padova (Italy); Bellazzini, R. [Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, I-56127 Pisa (Italy); Bissaldi, E.; Caragiulo, M. [Istituto Nazionale di Fisica Nucleare, Sezione di Bari, I-70126 Bari (Italy); Bloom, E. D.; Bottacini, E.; Cameron, R. A. [W. W. Hansen Experimental Physics Laboratory, Kavli Institute for Particle Astrophysics and Cosmology, Department of Physics and SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305 (United States); Bonino, R. [Istituto Nazionale di Fisica Nucleare, Sezione di Torino, I-10125 Torino (Italy); Brandt, T. J.; Castro, D. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Bregeon, J. [Laboratoire Univers et Particules de Montpellier, Université Montpellier, CNRS/IN2P3, F-34095 Montpellier (France); Bruel, P. [Laboratoire Leprince-Ringuet, École polytechnique, CNRS/IN2P3, F-91128 Palaiseau (France); Caraveo, P. A. [INAF-Istituto di Astrofisica Spaziale e Fisica Cosmica Milano, via E. Bassini 15, I-20133 Milano (Italy); Cavazzuti, E., E-mail: jcohen@astro.umd.edu, E-mail: elizabeth.a.hays@nasa.gov [Agenzia Spaziale Italiana (ASI) Science Data Center, I-00133 Roma (Italy); and others

    2017-07-10

    The spatial extension of a γ -ray source is an essential ingredient to determine its spectral properties, as well as its potential multiwavelength counterpart. The capability to spatially resolve γ -ray sources is greatly improved by the newly delivered Fermi -Large Area Telescope (LAT) Pass 8 event-level analysis, which provides a greater acceptance and an improved point-spread function, two crucial factors for the detection of extended sources. Here, we present a complete search for extended sources located within 7° from the Galactic plane, using 6 yr of Fermi -LAT data above 10 GeV. We find 46 extended sources and provide their morphological and spectral characteristics. This constitutes the first catalog of hard Fermi -LAT extended sources, named the Fermi Galactic Extended Source Catalog, which allows a thorough study of the properties of the Galactic plane in the sub-TeV domain.

  17. Search for Extended Sources in the Galactic Plane Using Six Years of Fermi -Large Area Telescope Pass 8 Data above 10 GeV

    International Nuclear Information System (INIS)

    Ackermann, M.; Buehler, R.; Ajello, M.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bellazzini, R.; Bissaldi, E.; Caragiulo, M.; Bloom, E. D.; Bottacini, E.; Cameron, R. A.; Bonino, R.; Brandt, T. J.; Castro, D.; Bregeon, J.; Bruel, P.; Caraveo, P. A.; Cavazzuti, E.

    2017-01-01

    The spatial extension of a γ -ray source is an essential ingredient to determine its spectral properties, as well as its potential multiwavelength counterpart. The capability to spatially resolve γ -ray sources is greatly improved by the newly delivered Fermi -Large Area Telescope (LAT) Pass 8 event-level analysis, which provides a greater acceptance and an improved point-spread function, two crucial factors for the detection of extended sources. Here, we present a complete search for extended sources located within 7° from the Galactic plane, using 6 yr of Fermi -LAT data above 10 GeV. We find 46 extended sources and provide their morphological and spectral characteristics. This constitutes the first catalog of hard Fermi -LAT extended sources, named the Fermi Galactic Extended Source Catalog, which allows a thorough study of the properties of the Galactic plane in the sub-TeV domain.

  18. Technical basis and programmatic requirements for large block testing of coupled thermal-mechanical-hydrological-chemical processes

    International Nuclear Information System (INIS)

    Lin, Wunan.

    1993-09-01

    This document contains the technical basis and programmatic requirements for a scientific investigation plan that governs tests on a large block of tuff for understanding the coupled thermal-mechanical-hydrological-chemical (TMHC) processes. This study is part of the field testing described in Section 8.3.4.2.4.4.1 of the Site Characterization Plan (SCP) for the Yucca Mountain Project. The first, and most important, objective is to understand the coupled TMHC processes in order to develop models that will predict the performance of a nuclear waste repository. The block and fracture properties (including hydrology and geochemistry) can be well characterized from at least five exposed surfaces, and the block can be dismantled for post-test examinations. The second objective is to provide preliminary data for development of models that will predict the quality and quantity of water in the near-field environment of a repository over the current 10,000 year regulatory period of radioactive decay. The third objective is to develop and evaluate the various measurement systems and techniques that will later be employed in the Engineered Barrier System Field Tests (EBSFT)

  19. Advanced Neutron Source Reactor (ANSR) phenomena identification and ranking (PIR) for large break loss of coolant accidents (LBLOCA)

    International Nuclear Information System (INIS)

    Ruggles, A.E.; Cheng, L.Y.; Dimenna, R.A.; Griffith, P.; Wilson, G.E.

    1994-06-01

    A team of experts in reactor analysis conducted a phenomena identification and ranking (PIR) exercise for a large break loss-of-coolant accident (LBLOCA) in the Advanced Neutron Source Reactor (ANSR). The LBLOCA transient is broken into two separate parts for the PIR exercise. The first part considers the initial depressurization of the system that follows the opening of the break. The second part of the transient includes long-term decay heat removal after the reactor is shut down and the system is depressurized. A PIR is developed for each part of the LBLOCA. The ranking results are reviewed to establish whether models in the RELAP5-MOD3 thermal-hydraulic code are adequate for use in ANSR LBLOCA simulations. Deficiencies in the RELAP5-MOD3 code are identified, and existing data or models are recommended to improve the code for this application. Experiments are also suggested to establish models for situations judged to be beyond current knowledge. The applicability of the ANSR PIR results is reviewed for the entire set of transients important to the ANSR safety analysis

  20. Influences of large-scale convection and moisture source on monthly precipitation isotope ratios observed in Thailand, Southeast Asia

    Science.gov (United States)

    Wei, Zhongwang; Lee, Xuhui; Liu, Zhongfang; Seeboonruang, Uma; Koike, Masahiro; Yoshimura, Kei

    2018-04-01

    Many paleoclimatic records in Southeast Asia rely on rainfall isotope ratios as proxies for past hydroclimatic variability. However, the physical processes controlling modern rainfall isotopic behavior in the region are poorly constrained. Here, we combined isotopic measurements at six sites across Thailand with an isotope-incorporated atmospheric circulation model (IsoGSM) and the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model to investigate the factors that govern the variability of precipitation isotope ratios in this region. Results show that rainfall isotope ratios are correlated both with local rainfall amount and with regional outgoing longwave radiation, suggesting that rainfall isotope ratios in this region are controlled not only by local rain amount (amount effect) but also by large-scale convection. As the region is a transition zone between the Indian monsoon and the western North Pacific monsoon, the spatial differences in observed precipitation isotopes among sites are associated with moisture source. These results highlight the importance of regional processes in determining rainfall isotope ratios in the tropics and provide constraints on the interpretation of paleo-precipitation isotope records in the context of regional climate dynamics.

  1. ALTERNATIVE SOURCE APPORTIONMENT IN THE SURROUNDING REGION OF A LARGE STEEL INDUSTRY APPLYING Tillandsia usneoides AS BIOMONITOR

    Directory of Open Access Journals (Sweden)

    Laura Benevides dos Santos

    From the beginning of its operation, this large steel industrial complex in the Santa Cruz Industrial District, Rio de Janeiro, Brazil, with an annual capacity of 5 million tons, has been at the center of controversy related to its atmospheric emissions. Since the air filter used for its routine air particulate monitoring network is not appropriate for a source apportionment study, biomonitoring was tested as an alternative way to carry out this evaluation. Thus, the bromeliad species Tillandsia usneoides was used as a bioindicator in the Santa Cruz Industrial District. Six samplings were performed over a period of approximately one year. The results showed that the sampling point located inside the industrial complex presented higher elemental concentration values for all samples. Among the quantifiable elements found in the biomonitor samples, iron seems to be the element that best represents the emissions from the steelworks complex, which was corroborated by the analysis of dust jar samples collected inside the complex area.

  2. Evaluation of thermal stress in the anode chamber wall of a large volume magnetic bucket ion source

    International Nuclear Information System (INIS)

    Wells, Russell; Horiike, Hiroshi; Kuriyama, Masaaki; Ohara, Yoshihiro

    1984-02-01

    Thermal stress analysis was performed on the plasma chamber of the Large Volume Magnetic Multipole Bucket Ion Source (LVB) designed for use on the JT-60 NBI system. The energy absorbed by the walls of the plasma chambers of neutral beam injectors is of the order of 1% of the accelerator electrical drain power. A previous study indicates that a moderately high heat flux, of about 600 W/cm², is concentrated on the magnetic field cusp lines during normal full-power operation. Abnormal arc discharges during conditioning of a stainless steel LVB produced localized melting of the stainless steel at several locations near the cusp lines. The power contained in abnormal arc discharges (arc spots) was estimated from the observed melting. Thermal stress analysis was performed numerically on representative sections of the copper LVB design for both stable and abnormal arc discharge conditions. Results show that this chamber should not fail due to thermal fatigue stresses arising from normal arc discharges. However, fatigue failure may occur after several hundred to a few thousand arc spots of 30 ms duration at any one location. Limited arc discharge operation of the copper bucket was performed to partially verify the chamber's durability. (author)

  3. Environmental Monitoring and Characterization of Radiation Sources on UF Campus Using a Large Volume NaI Detector

    Science.gov (United States)

    Bruner, Jesse A.; Gardiner, Hannah E.; Jordan, Kelly A.; Baciak, James E.

    2016-09-01

    Environmental radiation surveys are important for applications such as safety and regulation. This is especially true for areas exposed to emissions from nuclear reactors, such as the University of Florida Training Reactor (UFTR). At the University of Florida, surveys are performed using the RSX-1 NaI detector, developed by Radiation Solutions Inc. The detector uses incoming gamma rays and an Advanced Digital Spectrometer module to produce a linear energy spectrum. These spectra can then be analyzed in real time on a personal computer using the built-in software, RadAssist. We report on radiation levels around the University of Florida campus using two mobile detection platforms, car-borne and cart-borne. The car-borne surveys provide a larger, broader map of campus radiation levels. The cart-borne surveys, on the other hand, provide a more detailed radiation map because of their ability to reach places on campus that cars cannot. Throughout the survey data there are consistent radon decay product energy peaks, in addition to other sources such as medical I-131 detected in a large crowd of people. Finally, we investigate further applications of this mobile detection platform, such as tracking the Ar-41 plume emitted from the UFTR and detection of potential environmental hazards.

  4. THE PHYSICS OF PROTOPLANETESIMAL DUST AGGLOMERATES. VI. EROSION OF LARGE AGGREGATES AS A SOURCE OF MICROMETER-SIZED PARTICLES

    International Nuclear Information System (INIS)

    Schraepler, Rainer; Blum, Juergen

    2011-01-01

    Observed protoplanetary disks contain a large amount of micrometer-sized particles. Dullemond and Dominik pointed out for the first time the difficulty of explaining the strong mid-infrared excess of classical T Tauri stars without any dust-retention mechanisms. Because high relative velocities between micrometer-sized and macroscopic particles exist in protoplanetary disks, we present experimental results on the erosion of macroscopic agglomerates consisting of micrometer-sized spherical particles via the impact of micrometer-sized particles. We find that after an initial phase, in which an impacting particle erodes up to 10 particles of an agglomerate, the impacting particles compress the agglomerate's surface, which partly passivates the agglomerate against erosion. Due to this effect, the erosion halts for impact velocities up to ∼30 m s⁻¹ within our error bars. For higher velocities, the erosion is reduced by an order of magnitude. This outcome is explained and confirmed by a numerical model. In a next step, we build an analytical disk model and implement the experimentally found erosive effect. The model shows that erosion is a strong source of micrometer-sized particles in a protoplanetary disk. Finally, we use the stationary solution of this model to explain the amount of micrometer-sized particles in the observational infrared data of Furlan et al.

  5. Large-scale preparation of CdS quantum dots by direct thermolysis of a single-source precursor

    Energy Technology Data Exchange (ETDEWEB)

    Li Zhiguo; Cai Wei; Sui Jiehe [School of Material Science and Engineering, Harbin Institute of Technology, Harbin, Heilongjiang 150001 (China)

    2008-01-23

    CdS quantum dots (QDs) have been synthesized on a large scale, based on the direct thermolysis of one single-source precursor, (Me₄N)₄[S₄Cd₁₀(SPh)₁₆], in hexadecylamine (HDA). Transmission electron microscopy (TEM) observations show that the CdS QDs are well-defined, nearly spherical particles. The clear lattice fringes in high-resolution TEM (HRTEM) images confirm the crystalline nature of the QDs. The broad diffraction in the x-ray diffraction (XRD) pattern and the diffuse diffraction rings of the selected-area electron diffraction (SAED) pattern are typical of nanometer-sized particles and indicative of the hexagonal phase of the CdS QDs. The absorption spectra confirm quantum confinement of the CdS QDs. The synthesis process for the CdS QDs was investigated by ultraviolet-visible (UV-vis) absorption spectroscopy. The results demonstrate that the nucleation and growth stages were separated automatically in a homogeneous system.

  6. REE and Isotopic Compositions of Lunar Basalts Demonstrate Partial Melting of Hybridized Mantle Sources after Cumulate Overturn is Required

    Science.gov (United States)

    Dygert, N. J.; Liang, Y.

    2017-12-01

    Lunar basalts maintain an important record of the composition of the lunar interior. Much of our understanding of the Moon's early evolution comes from studying their petrogenesis. Recent experimental work has advanced our knowledge of major and trace element fractionation during lunar magma ocean (LMO) crystallization [e.g., 1-3], which produced heterogeneous basalt sources in the Moon's mantle. With the new experimental constraints, we can evaluate isotopic and trace element signatures in lunar basalts in unprecedented detail, refining inferences about the Moon's dynamic history. Two petrogenetic models are invoked to explain the compositions of the basalts. The assimilation model argues they formed as primitive melts of early LMO cumulates that assimilated late LMO cumulates as they migrated upward. The cumulate overturn model argues that dense LMO cumulates sank into the lunar interior, producing hybridized sources that melted to form the basalts. Here we compare predicted Ce/Yb and Hf and Nd isotopes of partial melts of LMO cumulates with measured compositions of lunar basalts to evaluate whether they could have formed by end-member petrogenetic models. LMO crystallization models suggest all LMO cumulates have chondrite-normalized Ce/Yb below 1, whereas some basalts have Ce/Yb above 1.5; these could not have formed by assimilation of any LMO cumulate or residual liquid (or KREEP basalt, which has isotopically negative ɛNd and ɛHf). In contrast, basalt REE patterns and isotopes can easily be modeled assuming partial melting of hybridized mantle sources, indicating overturn may be required. A chemical requirement for overturn independently confirms that late LMO cumulates are sufficiently low in viscosity to sink into the lunar interior, as suggested by recent rock deformation experiments [4]. Overturned, low-viscosity late LMO cumulates would be relatively stable around the core [5]. High Ce/Yb basalts require that overturned cumulates were mixed back into the overlying mantle by convection within a few
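
    As a minimal, hypothetical illustration of the partial-melting reasoning above (not the authors' model), the sketch below applies the standard modal batch-melting relation C_l = C_0 / (D + F(1 - D)) with placeholder source concentrations and bulk partition coefficients to show how a hybridized source can yield melts with elevated Ce/Yb.

      # Illustrative batch-melting sketch; source concentrations (chondrite-
      # normalized) and bulk partition coefficients are placeholder values.
      def batch_melt(c0, D, F):
          """Melt concentration for modal batch melting."""
          return c0 / (D + F * (1.0 - D))

      c0_ce, c0_yb = 2.0, 1.0       # hypothetical hybridized-source abundances
      D_ce, D_yb = 0.02, 0.20       # hypothetical bulk partition coefficients
      for F in (0.02, 0.05, 0.10):  # melt fractions
          ratio = batch_melt(c0_ce, D_ce, F) / batch_melt(c0_yb, D_yb, F)
          print(f"F = {F:.2f}: melt (Ce/Yb)_N = {ratio:.2f}")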

  7. Determination of the in-containment source term for a Large-Break Loss of Coolant Accident

    International Nuclear Information System (INIS)

    2001-04-01

    This is the report of a project that focused on one of the most important design basis accidents: the Large Break Loss Of Coolant Accident (LBLOCA) for pressurised water reactors. The first step in the calculation of the radiological consequences of this accident is the determination of the source term inside the containment. This work deals with this part of the calculation of the LBLOCA radiological consequences, for which a previous benchmark (1988) had shown wide variations in the licensing practices adopted by European countries. The calculation of this source term may naturally be split into several steps (see chapter II), corresponding to several physical stages in the release of fission products: fraction of core failure, release from the damaged fuel, airborne part of the release and the release into the reactor coolant system and the sumps, chemical behaviour of iodine in the aqueous and gas phases, and natural and spray removal in the containment atmosphere. A chapter is devoted to each of these topics. In addition, two other chapters deal with the basic assumptions used to define the accidental sequence and the nuclides to be considered when computing doses associated with the LBLOCA. The report describes where there is agreement between the partner organisations and where there are still differences in approach. For example, there is agreement concerning the percentage of failed fuel which could be used in future licensing assessments (however, this subject is still under discussion in France, where a lower value is conceivable). For existing plants, AVN (Belgium) wishes to keep the initial licensing assumptions. For the release from damaged fuel, there is not complete agreement: AVN (Belgium) wishes to maintain its present approach, whereas IPSN (France), GRS (Germany) and NNC (UK) prefer to use their own methodologies, which result in slightly different values from those proposed for a common position. There are presently no recommendations on the release of fuel particulates

  8. Adaptation of a web-based, open source electronic medical record system platform to support a large study of tuberculosis epidemiology

    Directory of Open Access Journals (Sweden)

    Fraser Hamish SF

    2012-11-01

    Full Text Available Abstract Background In 2006, we were funded by the US National Institutes of Health to implement a study of tuberculosis epidemiology in Peru. The study required a secure information system to manage data from a target goal of 16,000 subjects who needed to be followed for at least one year. With previous experience in the development and deployment of web-based medical record systems for TB treatment in Peru, we chose to use the OpenMRS open source electronic medical record system platform to develop the study information system. Supported by a core technical and management team and a large and growing worldwide community, OpenMRS is now being used in more than 40 developing countries. We adapted the OpenMRS platform to better support foreign languages. We added a new module to support double data entry, linkage to an existing laboratory information system, automatic upload of GPS data from handheld devices, and better security and auditing of data changes. We added new reports for study managers and developed data extraction tools for research staff and statisticians. Further adaptation to handle direct entry of laboratory data occurred after the study was launched. Results Data collection in the OpenMRS system began in September 2009. By August 2011 a total of 9,256 participants had been enrolled, 102,274 forms and 13,829 laboratory results had been entered, and there were 208 users. The system is now entirely supported by the Peruvian study staff and programmers. Conclusions The information system served the study objectives well despite requiring some significant adaptations mid-stream. OpenMRS has more tools and capabilities than it did in 2008 and requires fewer adaptations for future projects. OpenMRS can be an effective research data system in resource-poor environments, especially for organizations using or considering it for clinical care as well as research.
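
    The double data entry mentioned above boils down to comparing two independent entries of the same form and flagging any fields that disagree. The sketch below illustrates that check in isolation; it is not the OpenMRS module itself, and the field names are hypothetical.

      # Hedged sketch of double data-entry reconciliation; field names are
      # hypothetical placeholders, not OpenMRS concepts.
      def discrepancies(entry_a: dict, entry_b: dict) -> dict:
          keys = set(entry_a) | set(entry_b)
          return {k: (entry_a.get(k), entry_b.get(k))
                  for k in keys if entry_a.get(k) != entry_b.get(k)}

      first  = {"subject_id": "P-0042", "smear_result": "negative", "weight_kg": 54}
      second = {"subject_id": "P-0042", "smear_result": "positive", "weight_kg": 54}
      print(discrepancies(first, second))  # {'smear_result': ('negative', 'positive')}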

  9. Source to Sink Tectonic Fate of Large Oceanic Turbidite Systems and the Rupturing of Great and Giant Megathrust Earthquakes (Invited)

    Science.gov (United States)

    Scholl, D. W.; Kirby, S. H.; von Huene, R.

    2010-12-01

    OCEAN FLOOR OBSERVATIONS: Oceanic turbidite systems accumulate above igneous oceanic crust and are commonly huge in areal and volumetric dimensions. For example, the volume of the Zodiac fan of the Gulf of Alaska is roughly 300,000 cubic km. Other large oceanic systems construct the Amazon cone, flood the Bay of Bengal abyss, and accumulate along trench axes to thicknesses of 1 to 7 km and lengths of 1000 to 3000 km, e.g., the Aleutian-Alaska, Sumatra-Andaman, Makran, and south central Chile Trenches. THE ROCK RECORD: Despite the large dimensions of oceanic turbidite systems, they are poorly preserved in the rock record. This includes oceanic systems deposited in passive-margin oceans, e.g., the Paleozoic Iapetus and Rheic oceans of the Atlantic realm. This circumstance does not apply to Cretaceous and Early Tertiary rock sequences of the north Pacific rim, where oceanic turbidite deposits are preserved as accretionary complexes, e.g., the Catalina-Pelona-Orocopia-Rand schist of California and the Chugach-Kodiak complex of Alaska. These rock bodies are exhumed crustal underplates of once deeply (15-30 km) subducted oceanic turbidite systems. PATH FROM SOURCE TO TECTONIC SINK: The fate of most oceanic turbidite systems is to be removed from the sea floor and, ultimately, destroyed. This circumstance is unavoidable because most of them are deposited on lower-plate crust destined for destruction in a subduction zone. During the past 4-5 Myr alone, a volume of 1-1.5 million cubic km of sediment sourced from the glaciated drainages of the Gulf of Alaska flooded the 3000-km-long Aleutian-Alaska trench axis. A small part of this volume accumulated tectonically as a narrow, 10-30-km-wide accretionary frontal prism, but about 80 percent was subducted and entered the subduction channel separating the two plates. The subduction channel, roughly 1 km thick, conveys the trench turbidite deposits landward down dip along the rupturing width of the seismogenic zone. SEISMIC CONSEQUENCE

  10. Radiation Protection and Safety of Radiation Sources: International Basic Safety Standards. General Safety Requirements. Pt. 3 (Chinese Edition)

    International Nuclear Information System (INIS)

    2014-01-01

    This publication is the new edition of the International Basic Safety Standards. The edition is co-sponsored by seven other international organizations — European Commission (EC/Euratom), FAO, ILO, OECD/NEA, PAHO, UNEP and WHO. It replaces the interim edition that was published in November 2011 and the previous edition of the International Basic Safety Standards, which was published in 1996. It has been extensively revised and updated to take account of the latest findings of the United Nations Scientific Committee on the Effects of Atomic Radiation and the latest recommendations of the International Commission on Radiological Protection. The publication details the requirements for the protection of people and the environment from the harmful effects of ionizing radiation and for the safety of radiation sources. All circumstances of radiation exposure are considered.

  11. Radiation protection and safety of radiation sources: International basic safety standards. General safety requirements. Pt. 3 (French Edition)

    International Nuclear Information System (INIS)

    2016-01-01

    This publication is the new edition of the International Basic Safety Standards. The edition is co-sponsored by seven other international organizations — European Commission (EC/Euratom), FAO, ILO, OECD/NEA, PAHO, UNEP and WHO. It replaces the interim edition that was published in November 2011 and the previous edition of the International Basic Safety Standards, which was published in 1996. It has been extensively revised and updated to take account of the latest findings of the United Nations Scientific Committee on the Effects of Atomic Radiation and the latest recommendations of the International Commission on Radiological Protection. The publication details the requirements for the protection of people and the environment from the harmful effects of ionizing radiation and for the safety of radiation sources. All circumstances of radiation exposure are considered.

  12. Radiation Protection and Safety of Radiation Sources: International Basic Safety Standards. General Safety Requirements. Pt. 3 (Arabic Edition)

    International Nuclear Information System (INIS)

    2015-01-01

    This publication is the new edition of the International Basic Safety Standards. The edition is co-sponsored by seven other international organizations — European Commission (EC/Euratom), FAO, ILO, OECD/NEA, PAHO, UNEP and WHO. It replaces the interim edition that was published in November 2011 and the previous edition of the International Basic Safety Standards, which was published in 1996. It has been extensively revised and updated to take account of the latest findings of the United Nations Scientific Committee on the Effects of Atomic Radiation and the latest recommendations of the International Commission on Radiological Protection. The publication details the requirements for the protection of people and the environment from the harmful effects of ionizing radiation and for the safety of radiation sources. All circumstances of radiation exposure are considered.

  13. IAEA Conference on Large Radiation Sources in Industry (Warsaw 1959): Which technologies of radiation processing survived and why?

    International Nuclear Information System (INIS)

    Zagorski, Z.P.

    1999-01-01

    The IAEA organized an International Conference on Large Radiation Sources in Industry in Warsaw from 8 to 12 September 1959. The proceedings of the Conference were published in two volumes totalling 925 pages. This report analyses which technologies presented at the Conference have survived and why. The analysis is interesting because already in the fifties practically the full range of possibilities of radiation processing had been explored, and partially implemented; not many new technologies were presented at subsequent IAEA conferences on the same theme. Already at the time of the Warsaw Conference the important role of the economics of a technology was recognized. The present report divides the achievements of the Conference into two groups: the first concerns technologies which have not been implemented in the following decades, and the second comprises those that form the basis of highly profitable, unsubsidized commercial production. The criterion for a technology belonging to the second group is the value of the quotient of the cost of the ready, saleable product, diminished by the cost of the raw material before processing, to the expense of radiation processing, i.e. the sum of the irradiation cost and such operations as transportation of the object to and from the irradiation facility. A low value of this quotient, compared with successful technologies, bodes ill for the commercial future of a proposal. A special position among the objects of radiation processing is occupied by technologies directed towards the protection or improvement of the environment; market economics does not apply here and implementation has to be subsidized. (author)
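
    The profitability criterion described above is simple arithmetic; the worked example below evaluates it with purely hypothetical cost figures to show how a high quotient separates viable processes from unattractive ones.

      # Worked example of the profitability quotient: (value of saleable product
      # minus raw-material cost) divided by the total radiation-processing cost
      # (irradiation plus transport to and from the facility). Figures are
      # illustrative only.
      def quotient(product_value, raw_material_cost, irradiation_cost, transport_cost):
          return (product_value - raw_material_cost) / (irradiation_cost + transport_cost)

      print(quotient(product_value=10.0, raw_material_cost=6.0,
                     irradiation_cost=0.5, transport_cost=0.1))  # ~6.7 -> favourable
      print(quotient(product_value=10.0, raw_material_cost=9.0,
                     irradiation_cost=2.0, transport_cost=0.5))  # ~0.4 -> unattractive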

  14. Predictive Big Data Analytics: A Study of Parkinson's Disease Using Large, Complex, Heterogeneous, Incongruent, Multi-Source and Incomplete Observations.

    Science.gov (United States)

    Dinov, Ivo D; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W; Price, Nathan D; Van Horn, John D; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M; Dauer, William; Toga, Arthur W

    2016-01-01

    A unique archive of Big Data on Parkinson's Disease is collected, managed and disseminated by the Parkinson's Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson's disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data (large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources) all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. We evaluated several complementary model-based predictive approaches
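
    Steps (i) and (ii) above, cohort rebalancing followed by classification with cross-validation, can be illustrated with a short generic sketch on synthetic data; it is not the authors' pipeline and does not use PPMI records.

      # Hedged sketch: simple oversampling of the minority class followed by a
      # cross-validated classifier, on a synthetic imbalanced cohort.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.utils import resample

      X, y = make_classification(n_samples=600, n_features=20,
                                 weights=[0.85, 0.15], random_state=0)
      minority = np.where(y == 1)[0]
      majority = np.where(y == 0)[0]
      up = resample(minority, replace=True, n_samples=len(majority), random_state=0)
      idx = np.concatenate([majority, up])            # rebalanced index set
      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      scores = cross_val_score(clf, X[idx], y[idx], cv=5)
      print("balanced-cohort CV accuracy:", scores.mean().round(3))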

  15. Predictive Big Data Analytics: A Study of Parkinson's Disease Using Large, Complex, Heterogeneous, Incongruent, Multi-Source and Incomplete Observations.

    Directory of Open Access Journals (Sweden)

    Ivo D Dinov

    Full Text Available A unique archive of Big Data on Parkinson's Disease is collected, managed and disseminated by the Parkinson's Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson's disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data (large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources) all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. We evaluated several complementary model

  16. Osteocytes, not Osteoblasts or Lining Cells, are the Main Source of the RANKL Required for Osteoclast Formation in Remodeling Bone.

    Directory of Open Access Journals (Sweden)

    Jinhu Xiong

    Full Text Available The cytokine receptor activator of nuclear factor kappa B ligand (RANKL), encoded by the Tnfsf11 gene, is essential for osteoclastogenesis, and previous studies have shown that deletion of the Tnfsf11 gene using a Dmp1-Cre transgene reduces osteoclast formation in cancellous bone by more than 70%. However, the Dmp1-Cre transgene used in those studies leads to recombination in osteocytes, osteoblasts, and lining cells, making it unclear whether one or more of these cell types produce the RANKL required for osteoclast formation in cancellous bone. Because osteoblasts, osteocytes, and lining cells have distinct locations and functions, distinguishing which of these cell types are sources of RANKL is essential for understanding the orchestration of bone remodeling. To distinguish between these possibilities, we have now created transgenic mice expressing the Cre recombinase under the control of regulatory elements of the Sost gene, which is expressed in osteocytes but not osteoblasts or lining cells in murine bone. Activity of the Sost-Cre transgene in osteocytes, but not osteoblasts or lining cells, was confirmed by crossing Sost-Cre transgenic mice with tdTomato and R26R Cre-reporter mice, which express tdTomato fluorescent protein or LacZ, respectively, only in cells expressing the Cre recombinase or their descendants. Deletion of the Tnfsf11 gene in Sost-Cre mice led to a threefold decrease in osteoclast number in cancellous bone and increased cancellous bone mass, mimicking the skeletal phenotype of mice in which the Tnfsf11 gene was deleted using the Dmp1-Cre transgene. These results demonstrate that osteocytes, not osteoblasts or lining cells, are the main source of the RANKL required for osteoclast formation in remodeling cancellous bone.

  17. Seasonal Shifts in Primary Water Source Type: A Comparison of Largely Pastoral Communities in Uganda and Tanzania

    Directory of Open Access Journals (Sweden)

    Amber L. Pearson

    2016-01-01

    Full Text Available Many water-related illnesses show an increase during the wet season. This is often due to fecal contamination from runoff; yet it is unknown whether seasonal changes in water availability may also play a role in increased illness via changes in the type of primary water source used by households. Very little is known about the dynamic aspects of access to water and changes in source type across seasons, particularly in semi-arid regions with annual water scarcity. The research questions in this study were: (1) To what degree do households in Uganda (UG) and Tanzania (TZ) change primary water source type between wet and dry seasons?; and (2) How might seasonal changes relate to water quality and health? Using spatial survey data from 92 households each in UG and TZ, this study found that, from wet to dry season, 26% (UG) and 9% (TZ) of households switched from a source with higher risk of contamination to a source with lower risk. By comparison, only 20% (UG) and 0% (TZ) of households switched from a source with lower risk of contamination to a source with higher risk of contamination. This research suggests that one pathway through which water-related disease prevalence may differ across seasons is the use of water sources with higher risk of contamination, and that households with access to sources with lower risks of contamination sometimes choose to use more contaminated sources.

  18. Monte Carlo study of radial energy deposition from primary and secondary particles for narrow and large proton beamlet source models

    International Nuclear Information System (INIS)

    Peeler, Christopher R; Titt, Uwe

    2012-01-01

    In spot-scanning intensity-modulated proton therapy, numerous unmodulated proton beam spots are delivered over a target volume to produce a prescribed dose distribution. To accurately model field size-dependent output factors for beam spots, the energy deposition at positions radial to the central axis of the beam must be characterized. In this study, we determined the difference in the central axis dose for spot-scanned fields that results from secondary particle doses by investigating energy deposition radial to the proton beam central axis resulting from primary protons and secondary particles for mathematical point source and distributed source models. The largest difference in the central axis dose from secondary particles resulting from the use of a mathematical point source and a distributed source model was approximately 0.43%. Thus, we conclude that the central axis dose for a spot-scanned field is effectively independent of the source model used to calculate the secondary particle dose. (paper)
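
    A stripped-down numerical illustration of the comparison described above (not the study's Monte Carlo setup) is to broaden a point source and a Gaussian distributed source by the same lateral scattering and compare the fluence scored in a small on-axis region; all widths and the scoring radius below are assumed values.

      # Hedged Monte Carlo sketch: on-axis fluence for a point source vs a
      # Gaussian "distributed" source, each with identical lateral scattering.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 200_000
      sigma_scatter = 5.0     # mm, lateral spread from transport (assumed)
      sigma_source  = 3.0     # mm, spot size of the distributed source (assumed)
      score_r = 1.0           # mm, radius of the central-axis scoring region

      def on_axis_fraction(sigma_src):
          x0 = rng.normal(0.0, sigma_src, n) if sigma_src > 0 else np.zeros(n)
          y0 = rng.normal(0.0, sigma_src, n) if sigma_src > 0 else np.zeros(n)
          x = x0 + rng.normal(0.0, sigma_scatter, n)
          y = y0 + rng.normal(0.0, sigma_scatter, n)
          return np.mean(x**2 + y**2 < score_r**2)

      point = on_axis_fraction(0.0)
      distributed = on_axis_fraction(sigma_source)
      print(f"relative difference on axis: {abs(point - distributed) / point:.2%}")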

  19. Intertidal Sandbar Welding as a Primary Source of Sediment for Dune Growth: Evidence from a Large Scale Field Experiment

    Science.gov (United States)

    Cohn, N.; Ruggiero, P.; de Vries, S.

    2016-12-01

    Dunes provide the first line of defense from elevated water levels in low-lying coastal systems, limiting potentially major flooding, economic damages, and loss of livelihood. Despite the well documented importance of healthy dunes, our predictive ability of dune growth, particularly following erosive storm events, remains poor - resulting in part from traditionally studying the wet and dry beach as separate entities. In fact, however, dune recovery and growth is closely tied to the subtidal morphology and the nearshore hydrodynamic conditions, necessitating treating the entire coastal zone from the shoreface to the backshore as an integrated system. In this context, to further improve our understanding of the physical processes allowing for beach and dune growth during fair weather conditions, a large field experiment, the Sandbar-aEolian Dune EXchange EXperiment, was performed in summer 2016 in southwestern Washington, USA. Measurements of nearshore and atmospheric hydrodynamics, in-situ sediment transport, and morphology change provide insight into the time and space scales of nearshore-beach-dune exchanges along a rapidly prograding stretch of coast over a 6 week period. As part of this experiment, the hypothesis that dune growth is limited by the welding of intertidal sandbars to the shoreline (Houser, 2009) was tested. Using laser particle counters, bed elevation sensors (sonar altimeters and Microsoft Kinect), continuously logging sediment traps, RGB and IR cameras, and repeat morphology surveys (terrestrial lidar, kite based structure from motion, and RTK GPS), spatial and temporal trends in aeolian sediment transport were assessed in relation to the synoptic onshore migration and welding of intertidal sandbars. Observations from this experiment demonstrate that (1) the intertidal zone is the primary source of sediment to the dunes during non-storm conditions, (2) rates of saltation increase during later stages of bar welding but equivalent wind conditions

  20. 40 CFR Table 3 to Subpart Dd of... - Tank Control Levels for Tanks at Existing Affected Sources as Required by 40 CFR 63.685(b)(1)

    Science.gov (United States)

    2010-07-01

    ... Existing Affected Sources as Required by 40 CFR 63.685(b)(1) 3 Table 3 to Subpart DD of Part 63 Protection... Hazardous Air Pollutants from Off-Site Waste and Recovery Operations Pt. 63, Subpt. DD, Table 3 Table 3 to Subpart DD of Part 63—Tank Control Levels for Tanks at Existing Affected Sources as Required by 40 CFR 63...

  1. 40 CFR Table 4 to Subpart Dd of... - Tank Control Levels for Tanks at New Affected Sources as Required by 40 CFR 63.685(b)(2)

    Science.gov (United States)

    2010-07-01

    ... Affected Sources as Required by 40 CFR 63.685(b)(2) 4 Table 4 to Subpart DD of Part 63 Protection of... Hazardous Air Pollutants from Off-Site Waste and Recovery Operations Pt. 63, Subpt. DD, Table 4 Table 4 to Subpart DD of Part 63—Tank Control Levels for Tanks at New Affected Sources as Required by 40 CFR 63.685(b...

  2. Study on the Requirement of Nitrogen Sources by Scheffersomyces Stipitis NRRL Y-7124 to Produce Ethanol from Xylose Based-media

    DEFF Research Database (Denmark)

    Mussatto, Solange I.; Carneiro, L. M.; Roberto, I. C.

    This study aimed at evaluating the requirement of nitrogen sources by the yeast Scheffersomyces stipitis NRRL Y-7124 to produce ethanol from xylose based-media. Different nitrogen sources were evaluated, which were used to supplement a defined xylose-based medium and also the hemicellulosic hydro...

  3. RF-source development for ITER: Large area H- beam extraction, modifications for long pulse operation and design of a half size ITER source

    International Nuclear Information System (INIS)

    Kraus, W.; Heinemann, B.; Falter, H.D.; Franzen, P.; Speth, E.; Entscheva, A.; Fantz, U.; Franke, T.; Holtum, D.; Martens, Ch.; McNeely, P.; Riedl, R.; Wilhelm, R.

    2005-01-01

    With an extraction area of 152 cm², a calorimetrically measured H⁻ current density of 19.3 mA/cm² has been achieved at 0.45 Pa with 90 kW RF power. With 306 cm², the electrically measured H⁻ current has reached up to 9.7 A, corresponding to 32 mA/cm² at 100 kW. The current on the calorimeter is limited by the extraction system. Down to 0.2 Pa, only a weak dependence on the source pressure has been observed. The test bed will be upgraded to demonstrate cw operation with deuterium. Based on the tested prototype, a half-size ITER RF source of 80 cm x 90 cm with 360 kW RF power has been designed

  4. Optimizing desalinated sea water blending with other sources to meet magnesium requirements for potable and irrigation waters.

    Science.gov (United States)

    Avni, Noa; Eben-Chaime, Moshe; Oron, Gideon

    2013-05-01

    Sea water desalination provides fresh water that typically lacks minerals essential to human health and to agricultural productivity. Thus, the rising proportion of desalinated sea water consumed by both the domestic and agricultural sectors constitutes a public health risk. Research on irrigation with low-magnesium water showed that crops developed magnesium deficiency symptoms that could lead to plant death, and tomato yields were reduced by 10-15%. The World Health Organization (WHO) has reported a relationship between sudden cardiac death rates and magnesium intake deficits. An optimization model, developed and tested to provide recommendations for Water Distribution System (WDS) quality control in terms of meeting optimal water quality requirements, was run in computational experiments based on an actual regional WDS. The expected magnesium deficit due to the operation of a large Sea Water Desalination Plant (SWDP) was simulated, and an optimal operation policy was generated in which remineralization at the SWDP is combined with blending desalinated and natural water to achieve the required quality. The effects of remineralization costs and the physical layout of the WDS on the optimal policy were examined by sensitivity analysis. The sensitivity analysis indicated that blending natural and desalinated water near the treatment plants remains feasible at costs of up to 16.2 US cents/m³, considering all expenses. Additional chemical injection was used to meet quality criteria when blending was not feasible.
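
    The blending decision described above can be posed as a small linear program: choose volumes from the desalinated and natural sources that meet demand and a minimum magnesium concentration at least cost. The sketch below is only a schematic illustration with made-up prices and magnesium contents, not the paper's WDS optimization model.

      # Hedged blending sketch; prices, Mg contents and demand are illustrative.
      from scipy.optimize import linprog

      demand = 1000.0         # m3/day to deliver
      mg_min = 20.0           # g/m3 (= mg/L) required in the blend
      cost   = [0.60, 0.45]   # US$/m3: [desalinated + remineralization, natural]
      mg     = [25.0, 12.0]   # g/m3 magnesium in each source

      # minimize cost subject to: sum(v) = demand, sum(mg*v) >= mg_min*demand
      res = linprog(c=cost,
                    A_ub=[[-mg[0], -mg[1]]], b_ub=[-mg_min * demand],
                    A_eq=[[1.0, 1.0]], b_eq=[demand],
                    bounds=[(0, None), (0, None)])
      print(res.x)            # optimal volumes [desalinated, natural]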

  5. Inverse modelling of fluvial sediment connectivity identifies characteristics and spatial distribution of sediment sources in a large river network.

    Science.gov (United States)

    Schmitt, R. J. P.; Bizzi, S.; Kondolf, G. M.; Rubin, Z.; Castelletti, A.

    2016-12-01

    Field and laboratory evidence indicates that the spatial distribution of transport in both alluvial and bedrock rivers is an adaptation to sediment supply. Sediment supply, in turn, depends on the spatial distribution and properties (e.g., grain sizes and supply rates) of individual sediment sources. Analyzing the distribution of transport capacity in a river network could hence clarify the spatial distribution and properties of sediment sources. Yet challenges include (a) identifying the magnitude and spatial distribution of transport capacity for each of multiple grain sizes being transported simultaneously, and (b) estimating source grain sizes and supply rates, both at network scales. Here we approach the problem of identifying the spatial distribution of sediment sources and the resulting network sediment fluxes in a major, poorly monitored tributary (80,000 km2) of the Mekong. To do so, we apply the CASCADE modeling framework (Schmitt et al. (2016)). CASCADE calculates transport capacities and sediment fluxes for multiple grain sizes at the network scale based on remotely sensed morphology and modelled hydrology. CASCADE is run in an inverse Monte Carlo approach for 7500 random initializations of source grain sizes. In all runs, the supply of each source is inferred from the minimum downstream transport capacity for the source grain size. Results for each realization are compared to the sparse available sedimentary records. Only 1% of the initializations reproduced the sedimentary record. Results for these realizations revealed a spatial pattern in source supply rates, grain sizes, and network sediment fluxes that correlated well with map-derived patterns in lithology and river morphology. Hence, we propose that observable river hydro-morphology contains information on upstream source properties that can be back-calculated using an inverse modeling approach. Such an approach could in future be coupled to more detailed models of hillslope processes to derive integrated models
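
    The inverse Monte Carlo screening described above amounts to drawing many random source grain-size vectors, propagating each to a predicted flux, and keeping only the realizations consistent with observations. The sketch below mimics that acceptance step with entirely fabricated capacities and a single hypothetical downstream observation; it is not the CASCADE code.

      # Conceptual sketch of inverse Monte Carlo acceptance; all numbers are
      # made up and stand in for hydraulics-derived transport capacities.
      import numpy as np

      rng = np.random.default_rng(1)

      def capacities(d50_mm):
          # capacity (kt/yr) of three reaches downstream of each of two sources;
          # finer sediment -> higher capacity in this toy relation
          base = np.array([[120., 80., 60.], [200., 150., 90.]])
          return base / d50_mm[:, None]

      observed_flux = 300.0                    # kt/yr at a downstream station
      accepted = []
      for _ in range(7500):
          d50 = rng.uniform(0.2, 50.0, size=2)   # mm, random source grain sizes
          supply = capacities(d50).min(axis=1)   # supply limited by weakest reach
          if abs(supply.sum() - observed_flux) / observed_flux < 0.05:
              accepted.append(d50)
      print(f"{len(accepted)} of 7500 realizations match the record")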

  6. A study on source term assessment and waste disposal requirement of decontamination and decommissioning for the TRIGA research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Whang, Joo Ho; Lee, Kyung JIn; Lee, Jae Min; Choi, Gyu Seup; Shin, Byoung Sun [Kyunghee Univ., Seoul (Korea, Republic of)

    1999-08-15

    Objective and necessity of the project: TRIGA is the first nuclear facility in Korea for which decommissioning and decontamination have been decided. With the expected life of nuclear power plants estimated at 30 to 40 years, decommissioning work will have to be carried out around 2010, and the regulatory techniques supporting it must be developed beforehand. From the viewpoint of decommissioning and decontamination, a research reactor is small in scale but includes all the conditions of a decommissioning and decontamination project, so the rules set by the regulatory authority for its decommissioning will serve as a guide for nuclear power plants in the future. The basis of the regulatory techniques required for decommissioning the research reactor is the assurance of radiological safety and the data supporting it. The source term is a very important input, not only for the protection of workers but also for evaluating whether the planned waste disposal is appropriate when interim storage and the subsequent steps up to final disposal are considered. Content and scope of the report: the report presents the procedure for assessing the source term, which is the most important element in understanding the overall decommissioning and decontamination procedure for the TRIGA research reactor. Sampling and measurement methods are presented for determining the radioactivity inventory of the nuclear facilities. In addition, the waste classification criteria applied in other countries and the site release criteria, which constitute the final step of decommissioning and decontamination, are presented through MARSSIM. Finally, a program for the disposal of decommissioning waste is proposed, based on a comparison of domestic and foreign methods.

  7. Large-scale dam removal on the Elwha River, Washington, USA: source-to-sink sediment budget and synthesis

    Science.gov (United States)

    Warrick, Jonathan A.; Bountry, Jennifer A.; East, Amy E.; Magirl, Christopher S.; Randle, Timothy J.; Gelfenbaum, Guy R.; Ritchie, Andrew C.; Pess, George R.; Leung, Vivian; Duda, Jeff J.

    2015-01-01

    Understanding landscape responses to sediment supply changes constitutes a fundamental part of many problems in geomorphology, but opportunities to study such processes at field scales are rare. The phased removal of two large dams on the Elwha River, Washington, exposed 21 ± 3 million m3, or ~ 30 million tonnes (t), of sediment that had been deposited in the two former reservoirs, allowing a comprehensive investigation of watershed and coastal responses to a substantial increase in sediment supply. Here we provide a source-to-sink sediment budget of this sediment release during the first two years of the project (September 2011–September 2013) and synthesize the geomorphic changes that occurred to downstream fluvial and coastal landforms. Owing to the phased removal of each dam, the release of sediment to the river was a function of the amount of dam structure removed, the progradation of reservoir delta sediments, exposure of more cohesive lakebed sediment, and the hydrologic conditions of the river. The greatest downstream geomorphic effects were observed after water bodies of both reservoirs were fully drained and fine (silt and clay) and coarse (sand and gravel) sediments were spilling past the former dam sites. After both dams were spilling fine and coarse sediments, river suspended-sediment concentrations were commonly several thousand mg/L with ~ 50% sand during moderate and high river flow. At the same time, a sand and gravel sediment wave dispersed down the river channel, filling channel pools and floodplain channels, aggrading much of the river channel by ~ 1 m, reducing river channel sediment grain sizes by ~ 16-fold, and depositing ~ 2.2 million m3 of sand and gravel on the seafloor offshore of the river mouth. The total sediment budget during the first two years revealed that the vast majority (~ 90%) of the sediment released from the former reservoirs to the river passed through the fluvial system and was discharged to the coastal

  8. The Chandra Source Catalog : Automated Source Correlation

    Science.gov (United States)

    Hain, Roger; Evans, I. N.; Evans, J. D.; Glotfelty, K. J.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Primini, F. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-01-01

    Chandra Source Catalog (CSC) master source pipeline processing seeks to automatically detect sources and compute their properties. Since Chandra is a pointed mission and not a sky survey, different sky regions are observed a different number of times, at varying orientations, resolutions, and other heterogeneous conditions. While this provides an opportunity to collect data from a potentially large number of observing passes, it also creates challenges in determining the best way to combine different detection results for the most accurate characterization of the detected sources. The CSC master source pipeline correlates data from multiple observations by updating existing cataloged source information with new data from the same sky region as they become available. This process sometimes leads to relatively straightforward conclusions, such as when single sources from two observations are similar in size and position. Other observation results require more logic to combine, such as one observation finding a single, large source and another identifying multiple, smaller sources at the same position. We present examples of different overlapping source detections processed in the current version of the CSC master source pipeline. We explain how they are resolved into entries in the master source database, and examine the challenges of computing source properties for the same source detected multiple times. Future enhancements are also discussed. This work is supported by NASA contract NAS8-03060 (CXC).
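
    At its simplest, correlating detections across observations starts with a positional cross-match against the existing master source list. The sketch below shows only that first step, with made-up coordinates and an arbitrary 2-arcsecond match radius; the real CSC pipeline applies considerably more logic (extent, ambiguity, and confusion handling).

      # Simplified positional cross-match; names, positions and the match radius
      # are illustrative, not taken from the catalog.
      from math import radians, cos, hypot

      MATCH_RADIUS_ARCSEC = 2.0

      def separation_arcsec(a, b):
          """Small-angle separation between two (ra, dec) tuples in degrees."""
          dra = (a[0] - b[0]) * cos(radians(0.5 * (a[1] + b[1])))
          return hypot(dra, a[1] - b[1]) * 3600.0

      master = {"CXO J0001": (10.6847, 41.2692), "CXO J0002": (10.7021, 41.2554)}
      new_detections = [(10.6849, 41.2690), (10.7500, 41.3000)]

      for det in new_detections:
          matches = [name for name, pos in master.items()
                     if separation_arcsec(det, pos) <= MATCH_RADIUS_ARCSEC]
          print(det, "->", matches or "new master source")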

  9. 34 CFR 364.29 - What are the requirements for coordinating Federal and State sources of funding?

    Science.gov (United States)

    2010-07-01

    ... State sources of funding? 364.29 Section 364.29 Education Regulations of the Offices of the Department... and State sources of funding? (a) The State plan must describe efforts to coordinate Federal and State funding for centers and IL services. (b) The State plan must identify the amounts, sources, and purposes...

  10. Top-down estimate of a large source of atmospheric carbon monoxide associated with fuel combustion in Asia

    Energy Technology Data Exchange (ETDEWEB)

    Kasibhatla, P.; Arellano, A.; Logan, J.A.; Palmer, P.I.; Novelli, P. [Duke University, Durham, NC (United States). Nicholas School of Environmental & Earth Science

    2002-10-01

    Deriving robust regional estimates of the sources of chemically and radiatively important gases and aerosols to the atmosphere is challenging. Using an inverse modeling methodology, it was found that the source of carbon monoxide from fossil-fuel and biofuel combustion in Asia during 1994 was 350-380 Tg yr⁻¹, which is 110-140 Tg yr⁻¹ higher than bottom-up estimates derived using traditional inventory-based approaches. This discrepancy points to an important gap in our understanding of the human impact on atmospheric chemical composition.
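
    Conceptually, a top-down estimate adjusts a prior, bottom-up source so that a forward model of atmospheric transport best reproduces observed concentrations. The toy sketch below solves such a linear problem by least squares with fabricated sensitivities, priors, and observations; it is not the study's inversion setup.

      # Toy linear inversion; sensitivity matrix, prior and observations are
      # made-up numbers for illustration only.
      import numpy as np

      K = np.array([[0.8, 0.1],    # ppb of CO at station 1 per (Tg/yr) from
                    [0.3, 0.6],    # [Asian combustion, other sources]
                    [0.5, 0.4]])
      x_prior = np.array([240.0, 300.0])         # bottom-up estimate, Tg/yr
      y_obs   = np.array([310.0, 290.0, 320.0])  # observed CO enhancements, ppb

      x_hat, *_ = np.linalg.lstsq(K, y_obs, rcond=None)
      print("posterior sources (Tg/yr):", x_hat.round(1))
      print("adjustment vs prior (Tg/yr):", (x_hat - x_prior).round(1))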

  11. Results of research and development in large-scale research centers as an innovation source for firms

    International Nuclear Information System (INIS)

    Theenhaus, R.

    1978-01-01

    The twelve large-scale research centres of the Federal Republic of Germany, with their 16,000 employees, represent a considerable scientific and technical potential. Cooperation with industry on large-scale projects has already become very close, and the associated know-how transfer and contributions to innovation are largely established. The first successful steps towards utilizing the results of basic research, spin-offs, results obtained within the framework of research and development, and the provision of services are encouraging. However, there are a number of detailed problems which can only be solved jointly by all parties concerned, in particular between industry and the large-scale research centres. (orig./RW) [de

  12. Method to Determine Appropriate Source Models of Large Earthquakes Including Tsunami Earthquakes for Tsunami Early Warning in Central America

    OpenAIRE

    Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro

    2017-01-01

    Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated destructive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, first, fault parameters were estimated from the W-phase inversion, and then an appropriate fault model was determined from the fault parameters and scaling relationships with a dept...

  13. Sulphate leaching from diffuse agricultural and forest sources in a large central European catchment during 1900-2010

    Czech Academy of Sciences Publication Activity Database

    Kopáček, Jiří; Hejzlar, Josef; Porcal, Petr; Posch, M.

    470, February (2014), s. 543-550 ISSN 0048-9697 R&D Projects: GA ČR(CZ) GAP504/12/1218; GA ČR(CZ) GA526/09/0567 Institutional support: RVO:60077344 Keywords : modelling * sulphate leaching * sulphur mineralization * diffuse sources Subject RIV: DJ - Water Pollution ; Quality Impact factor: 4.099, year: 2014

  14. Host-Derived Sialic Acids Are an Important Nutrient Source Required for Optimal Bacterial Fitness In Vivo.

    Science.gov (United States)

    McDonald, Nathan D; Lubin, Jean-Bernard; Chowdhury, Nityananda; Boyd, E Fidelma

    2016-04-12

    A major challenge facing bacterial intestinal pathogens is competition for nutrient sources with the host microbiota. Vibrio cholerae is an intestinal pathogen that causes cholera, which affects millions each year; however, our knowledge of its nutritional requirements in the intestinal milieu is limited. In this study, we demonstrated that V. cholerae can grow efficiently on intestinal mucus and its component sialic acids and that a tripartite ATP-independent periplasmic SiaPQM transporter-deficient mutant, NC1777, was attenuated for colonization using a streptomycin-pretreated adult mouse model. In in vivo competition assays, NC1777 was significantly outcompeted for up to 3 days postinfection. NC1777 was also significantly outcompeted in in vitro competition assays in M9 minimal medium supplemented with intestinal mucus, indicating that sialic acid uptake is essential for fitness. Phylogenetic analyses demonstrated that the ability to utilize sialic acid was distributed among 452 bacterial species from eight phyla. The majority of species belonged to four phyla, Actinobacteria (members of Actinobacillus, Corynebacterium, Mycoplasma, and Streptomyces), Bacteroidetes (mainly Bacteroides, Capnocytophaga, and Prevotella), Firmicutes (members of Streptococcus, Staphylococcus, Clostridium, and Lactobacillus), and Proteobacteria (including Escherichia, Shigella, Salmonella, Citrobacter, Haemophilus, Klebsiella, Pasteurella, Photobacterium, Vibrio, and Yersinia species), mostly commensals and/or pathogens. Overall, our data demonstrate that the ability to take up host-derived sugars and sialic acid specifically allows V. cholerae a competitive advantage in intestinal colonization and that this is a trait that is sporadic in its occurrence and phylogenetic distribution and ancestral in some genera but horizontally acquired in others. Sialic acids are nine-carbon amino sugars that are abundant on all mucous surfaces. The deadly human pathogen Vibrio cholerae contains

  15. Modeling of methane bubbles released from large sea-floor area: Condition required for methane emission to the atmosphere

    OpenAIRE

    Yamamoto, A.; Yamanaka, Y.; Tajika, E.

    2009-01-01

    Massive methane release from sea-floor sediments, due to decomposition of methane hydrate and thermal decomposition of organic matter by volcanic outgassing, is a potential contributor to global warming. However, the degree of global warming has not been estimated because of uncertainty over the proportion of the methane flux from the sea floor that reaches the atmosphere. Massive methane release from a large sea-floor area would result in methane-saturated seawater, thus some methane would reach the atm...

  16. Host-Derived Sialic Acids Are an Important Nutrient Source Required for Optimal Bacterial Fitness In Vivo

    Directory of Open Access Journals (Sweden)

    Nathan D. McDonald

    2016-04-01

    Full Text Available A major challenge facing bacterial intestinal pathogens is competition for nutrient sources with the host microbiota. Vibrio cholerae is an intestinal pathogen that causes cholera, which affects millions each year; however, our knowledge of its nutritional requirements in the intestinal milieu is limited. In this study, we demonstrated that V. cholerae can grow efficiently on intestinal mucus and its component sialic acids and that a tripartite ATP-independent periplasmic SiaPQM transporter-deficient mutant, NC1777, was attenuated for colonization using a streptomycin-pretreated adult mouse model. In in vivo competition assays, NC1777 was significantly outcompeted for up to 3 days postinfection. NC1777 was also significantly outcompeted in in vitro competition assays in M9 minimal medium supplemented with intestinal mucus, indicating that sialic acid uptake is essential for fitness. Phylogenetic analyses demonstrated that the ability to utilize sialic acid was distributed among 452 bacterial species from eight phyla. The majority of species belonged to four phyla: Actinobacteria (members of Actinobacillus, Corynebacterium, Mycoplasma, and Streptomyces), Bacteroidetes (mainly Bacteroides, Capnocytophaga, and Prevotella), Firmicutes (members of Streptococcus, Staphylococcus, Clostridium, and Lactobacillus), and Proteobacteria (including Escherichia, Shigella, Salmonella, Citrobacter, Haemophilus, Klebsiella, Pasteurella, Photobacterium, Vibrio, and Yersinia species), mostly commensals and/or pathogens. Overall, our data demonstrate that the ability to take up host-derived sugars and sialic acid specifically allows V. cholerae a competitive advantage in intestinal colonization and that this is a trait that is sporadic in its occurrence and phylogenetic distribution and ancestral in some genera but horizontally acquired in others.

  17. A framework for quality assessment of just-in-time requirements : The case of open source feature requests

    NARCIS (Netherlands)

    Heck, P.M.; Zaidman, A.E.

    2017-01-01

    Until now, quality assessment of requirements has focused on traditional up-front requirements. Contrasting these traditional requirements are just-in-time (JIT) requirements, which are by definition incomplete, not specific and might be ambiguous when initially specified, indicating a different

  18. 40 CFR 63.11467 - What are the initial compliance demonstration requirements for new and existing sources?

    Science.gov (United States)

    2010-07-01

    ... Pollutants for Secondary Nonferrous Metals Processing Area Sources Standards, Compliance, and Monitoring... for structural integrity and fabric filter condition. You must record the results of the inspection...

  19. On the group approximation errors in description of neutron slowing-down at large distances from a source. Diffusion approach

    International Nuclear Information System (INIS)

    Kulakovskij, M.Ya.; Savitskij, V.I.

    1981-01-01

    The errors in multigroup calculations of the spatial and energy distribution of the neutron flux in a fast reactor shield, caused by the use of the group and age approximations, are considered. It is shown that at small distances from a source the age theory describes the distribution of the slowing-down density rather well. As the distance increases, the age approximation underestimates the neutron fluxes, and the error grows rapidly. At small distances from the source (up to 15 mean free paths in graphite), the multigroup diffusion approximation describes the distribution of the slowing-down density quite satisfactorily, and the results are almost independent of the number of groups. As the distance increases, the multigroup diffusion calculations considerably overestimate the slowing-down density. It is concluded that the errors inherent in the group approximation are opposite in sign to the error introduced by the age approximation and to some extent compensate for each other
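
    For reference, the age approximation invoked above has the standard textbook point-source form (not taken from the paper itself): the slowing-down density at distance r from a point source of strength S is

      q(r,\tau) = \frac{S\, e^{-r^{2}/(4\tau)}}{(4\pi\tau)^{3/2}},
      \qquad
      \tau(E) = \int_{E}^{E_{0}} \frac{D(E')}{\xi\,\Sigma_{s}(E')}\,\frac{dE'}{E'},

    where \tau is the Fermi age. Because this Gaussian kernel falls off faster than the true transport solution at deep penetration, the age form increasingly underestimates the flux at large distances, consistent with the behaviour reported in the abstract.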

  20. Polycyclic aromatic hydrocarbons (PAHs) in a large South American industrial coastal area (Santos Estuary, Southeastern Brazil): Sources and depositional history

    International Nuclear Information System (INIS)

    Martins, Cesar C.; Bicego, Marcia C.; Mahiques, Michel M.; Figueira, Rubens C.L.; Tessler, Moyses G.; Montone, Rosalinda C.

    2011-01-01

    Highlights: → In the early 1980s, the Santos Estuary became known as one of the most polluted estuaries in the world. → PAH levels were similar to values reported for marine sediments worldwide. → PAH analyses indicated multiple sources of these compounds (oil and pyrolytic origin). → The decline in oil consumption due to the world oil crisis (late 1970s) was shown. → The input of organic pollutants is a historical problem for the Santos Estuary. - Abstract: Located in southeastern Brazil, the Santos Estuary has the most important industrial and urban population area of South America. Since the 1950s, increased urbanization and industrialization near the estuary margins have caused the degradation of mangroves and increased the discharge of sewage and industrial effluents. The main objectives of this work were to determine the concentrations and sources of polycyclic aromatic hydrocarbons (PAHs) in sediment cores in order to investigate the input of these substances over the last 50 years. The PAH analyses indicated multiple sources of these compounds (oil and pyrolytic origin), basically anthropogenic contributions from the combustion of biomass, coal and fossil fuels. The distribution of PAHs in the cores was associated with the formation and development of the Cubatao industrial complex and the Santos harbour, waste disposal, the world oil crisis and the pollution control program, which resulted in a decrease in organic pollutant input to this area.

  1. Method to Determine Appropriate Source Models of Large Earthquakes Including Tsunami Earthquakes for Tsunami Early Warning in Central America

    Science.gov (United States)

    Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro

    2017-08-01

    Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated destructive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, fault parameters were first estimated from the W-phase inversion, and then an appropriate fault model was determined from those parameters and scaling relationships with a depth-dependent rigidity. The method was tested for four large earthquakes, the 1992 Nicaragua tsunami earthquake (Mw 7.7), the 2001 El Salvador earthquake (Mw 7.7), the 2004 El Astillero earthquake (Mw 7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw 7.3), which occurred off El Salvador and Nicaragua in Central America. Tsunami numerical simulations were carried out from the determined fault models. We found that the observed tsunami heights, run-up heights, and inundation areas were reasonably well reproduced by the computed ones. Therefore, for tsunami early warning purposes, our method should be able to estimate a fault model that reproduces tsunami heights near the coasts of El Salvador and Nicaragua for large earthquakes in this subduction zone.
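
    The core of the approach is a quick chain from a W-phase moment magnitude to a finite-fault model via scaling relations and a depth-dependent rigidity. The sketch below illustrates that chain in Python; the specific area scaling (log10 A = Mw - 4.0), the aspect ratio and the two-value rigidity profile are illustrative assumptions, not the relations used in the paper.

      import numpy as np

      def fault_model_from_mw(mw, depth_km):
          """Illustrative fault model from moment magnitude and centroid depth."""
          m0 = 10.0 ** (1.5 * mw + 9.1)          # seismic moment in N*m (Hanks & Kanamori)
          area_km2 = 10.0 ** (mw - 4.0)          # assumed area scaling, for illustration only
          length_km = np.sqrt(2.0 * area_km2)    # assumed aspect ratio L = 2W
          width_km = length_km / 2.0
          # Assumed depth-dependent rigidity: softer rocks near the trench give larger slip
          mu = 1.0e10 if depth_km < 15.0 else 4.0e10   # Pa
          slip_m = m0 / (mu * area_km2 * 1.0e6)
          return length_km, width_km, slip_m

      # Example: a shallow Mw 7.7 event yields roughly 100 km x 50 km with ~9 m of slip
      print(fault_model_from_mw(7.7, depth_km=10.0))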

  2. European Legislation to Prevent Loss of Control of Sources and to Recover Orphan Sources, and Other Requirements Relevant to the Scrap Metal Industry

    Energy Technology Data Exchange (ETDEWEB)

    Janssens, A.; Tanner, V.; Mundigl, S., E-mail: augustin.janssens@ec.europa.eu [European Commission (Luxembourg)

    2011-07-15

    European legislation (Council Directive 2003/122/EURATOM) has been adopted with regard to the control of high-activity sealed radioactive sources (HASS). This Directive is now part of an overall recast of current radiation protection legislation. At the same time the main Directive, 96/29/EURATOM, laying down Basic Safety Standards (BSS) for the health protection of the general public and workers against the dangers of ionizing radiation, is being revised in the light of the new recommendations of the International Commission on Radiological Protection (ICRP). The provisions for exemption and clearance are a further relevant feature of the new BSS. The current issues emerging from the revision and recast of the BSS are discussed, in the framework of the need to protect the scrap metal industry from orphan sources and to manage contaminated metal products. (author)

  3. The transport sectors potential contribution to the flexibility in the power sector required by large-scale wind power integration

    DEFF Research Database (Denmark)

    Nørgård, Per Bromand; Lund, H.; Mathiesen, B.V.

    2007-01-01

    In 2006, the Danish Society of Engineers developed a visionary plan for the Danish energy system in 2030. The paper presents and qualifies selected parts of the analyses, illustrating the transport sector's potential to contribute to the flexibility in the power sector that is necessary for large-scale integration of renewable energy in the power system, specifically wind power. In the plan, 20% of road transport is based on electricity and 20% on biofuels. This, together with other initiatives, allows for up to 55-60% wind power penetration in the power system. A fleet of 0.5 million electric vehicles in Denmark in 2030, connected to the grid 50% of the time, represents an aggregated flexible power capacity of 1-1.5 GW and an energy capacity of 10-150 GWh.

  4. A large-scale mass casualty simulation to develop the non-technical skills medical students require for collaborative teamwork.

    Science.gov (United States)

    Jorm, Christine; Roberts, Chris; Lim, Renee; Roper, Josephine; Skinner, Clare; Robertson, Jeremy; Gentilcore, Stacey; Osomanski, Adam

    2016-03-08

    There is little research on large-scale complex health care simulations designed to facilitate student learning of non-technical skills in a team-working environment. We evaluated the acceptability and effectiveness of a novel natural disaster simulation that enabled medical students to demonstrate their achievement of the non-technical skills of collaboration, negotiation and communication. In a mixed methods approach, survey data were available from 117 students and a thematic analysis was undertaken of both student qualitative comments and tutor observer participation data. Ninety-three per cent of students found the activity engaging for their learning. Three themes emerged from the qualitative data: the impact of fidelity on student learning, reflexivity on the importance of non-technical skills in clinical care, and opportunities for collaborative teamwork. Physical fidelity was sufficient for good levels of student engagement, as was sociological fidelity. We demonstrated the effectiveness of the simulation in allowing students to reflect upon and evidence their acquisition of skills in collaboration, negotiation and communication, as well as situational awareness and attending to their emotions. Students readily identified emerging learning opportunities through critical reflection. The scenarios challenged students to work together collaboratively to solve clinical problems, using a range of resources including interacting with clinical experts. A large class teaching activity, framed as a simulation of a natural disaster, is an acceptable and effective activity for medical students to develop the non-technical skills of collaboration, negotiation and communication, which are essential to team working. The design could be of value in medical schools in disaster-prone areas, including within low-resource countries, and as a feasible intervention for learning the non-technical skills that are needed for patient safety.

  5. 40 CFR Table 3 to Subpart Mmmmm of... - Performance Test Requirements for New or Reconstructed Flame Lamination Affected Sources

    Science.gov (United States)

    2010-07-01

    ... data only required for venturi scrubbers) every 15 minutes during the entire duration of each 1-hour... (pressure drop data only required for Venturi scrubbers) over the period of the performance test by... liquid flow rate, scrubber effluent pH, and pressure drop (pressure drop data only required for venturi...

  6. Occurrence, spatial distribution, sources, and risks of polychlorinated biphenyls and heavy metals in surface sediments from a large eutrophic Chinese lake (Lake Chaohu)

    DEFF Research Database (Denmark)

    He, Wei; Bai, Ze-Lin; Liu, Wen-Xiu

    2016-01-01

    Surface sediment from large and eutrophic Lake Chaohu was investigated to determine the occurrence, spatial distribution, sources, and risks of polychlorinated biphenyls (PCBs) and heavy metals in one of the five biggest freshwater lakes in China. Total concentration of PCBs (Σ34PCBs) in Lake...... and microbial degradation accounted for 34.2 % and 65.8 % of total PCBs using PMF, and PMF revealed that natural and anthropogenic sources of heavy metals accounted for 38.1 % and 61.8 %, respectively. CA indicated that some toxic heavy metals (e.g., Cd, In, Tl, and Hg) were associated with Ca–Na–Mg minerals......, and Hg were at levels of environmental concern. The sediment in the drinking water source area (DWSA) was threatened by heavy metals from other areas, and some fundamental solutions were proposed to protect the DWSA....

  7. Detecting Large-Scale Brain Networks Using EEG: Impact of Electrode Density, Head Modeling and Source Localization

    Science.gov (United States)

    Liu, Quanying; Ganzetti, Marco; Wenderoth, Nicole; Mantini, Dante

    2018-01-01

    Resting state networks (RSNs) in the human brain were recently detected using high-density electroencephalography (hdEEG). This was done by using an advanced analysis workflow to estimate neural signals in the cortex and to assess functional connectivity (FC) between distant cortical regions. FC analyses were conducted either using temporal (tICA) or spatial independent component analysis (sICA). Notably, EEG-RSNs obtained with sICA were very similar to RSNs retrieved with sICA from functional magnetic resonance imaging data. It still remains to be clarified, however, what technological aspects of hdEEG acquisition and analysis primarily influence this correspondence. Here we examined to what extent the detection of EEG-RSN maps by sICA depends on the electrode density, the accuracy of the head model, and the source localization algorithm employed. Our analyses revealed that the collection of EEG data using a high-density montage is crucial for RSN detection by sICA, but also the use of appropriate methods for head modeling and source localization have a substantial effect on RSN reconstruction. Overall, our results confirm the potential of hdEEG for mapping the functional architecture of the human brain, and highlight at the same time the interplay between acquisition technology and innovative solutions in data analysis. PMID:29551969
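
    The pipeline element examined here, spatial ICA applied to source-reconstructed signals, can be sketched compactly: given a time-by-voxel matrix of cortical activity, spatial ICA seeks maps that are statistically independent across voxels. The snippet below illustrates this with scikit-learn's FastICA on random data standing in for hdEEG source estimates; the dimensions and component count are arbitrary choices, not those of the study.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      n_times, n_voxels = 2000, 500                  # placeholder sizes for source-space EEG data
      X = rng.standard_normal((n_times, n_voxels))   # stand-in for band-limited source activity

      # Spatial ICA: samples are voxels, features are time points, so the recovered
      # components are spatial maps that are independent over the cortex.
      ica = FastICA(n_components=20, random_state=0, max_iter=500)
      spatial_maps = ica.fit_transform(X.T)   # shape (n_voxels, n_components)
      time_courses = ica.mixing_              # shape (n_times, n_components)
      print(spatial_maps.shape, time_courses.shape)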

  8. Connecting the small scale to the large scale: young massive stars and their environments from the Red MSX Source Survey.

    Science.gov (United States)

    Figura, Charles C.; Urquhart, James S.; Morgan, Lawrence

    2015-01-01

    We have conducted a detailed multi-wavelength investigation of a variety of massive star forming regions in order to characterise the impact of the interactions between the substructure of the dense protostellar clumps and their local environment, including feedback from the embedded proto-cluster. A selection of 70 MYSOs and HII regions identified by the RMS survey have been followed up with observations of the ammonia (1,1) and (2,2) inversion transitions made with the KFPA on the GBT. These maps have been combined with archival CO data to investigate the thermal and kinematic structure of the extended envelopes down to the dense clumps. We complement this larger-scale picture with high resolution near- and mid-infrared images to probe the properties of the embedded objects themselves. We present an overview of several sources from this sample that illustrate some of the interactions that we observe. We find that high molecular column densities and kinetic temperatures are coincident with embedded sources and with shocks and outflows as exhibited in gas kinematics.

  9. Radiological risk assessment for the public under the loss of medium and large sources using bayesian methodology

    International Nuclear Information System (INIS)

    Kim, Joo Yeon; Jang, Han Ki; Lee, Jai Ki

    2005-01-01

    Bayesian methodology is appropriate for use in PRA because subjective knowledge as well as objective data can be applied to the assessment. In this study, radiological risk based on Bayesian methodology is assessed for the loss of a source in field radiography. The exposure scenario for the lost source presented by the U.S. NRC is reconstructed by considering the domestic situation, and Bayes' theorem is applied to the updating of the failure probabilities of safety functions. For the updated failure probabilities, the 5% Bayes credible intervals obtained with the Jeffreys prior distribution are lower than those obtained with a vague prior distribution. It is noted that the Jeffreys prior distribution is appropriate for risk assessment of systems having very low failure probabilities. It is also shown that the mean of the expected annual dose to the public based on Bayesian methodology is higher than the dose based on classical methodology, because the means of the updated probabilities are higher than the classical probabilities. Databases for radiological risk assessment are sparse domestically. In summary, Bayesian methodology can be applied as a useful alternative for risk assessment, and this study will contribute to risk-informed regulation in the field of radiation safety.
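
    The comparison between Jeffreys and vague priors in the abstract can be reproduced with a standard beta-binomial update of a failure probability. The sketch below is a minimal illustration of that calculation; the failure counts used are made-up placeholders, not data from the study.

      from scipy.stats import beta

      def credible_interval(failures, trials, a0, b0, level=0.90):
          """Posterior Beta(a0 + k, b0 + n - k) mean and credible interval for a failure probability."""
          a = a0 + failures
          b = b0 + trials - failures
          lo, hi = beta.ppf([(1 - level) / 2, (1 + level) / 2], a, b)
          return a / (a + b), lo, hi

      k, n = 1, 200   # hypothetical failure record of a safety function
      print("Jeffreys prior Beta(0.5, 0.5):", credible_interval(k, n, 0.5, 0.5))
      print("Vague (uniform) prior Beta(1, 1):", credible_interval(k, n, 1.0, 1.0))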

  10. Source apportionment of major and trace elements in aerosols during smog episodes in large cities in China

    Science.gov (United States)

    Furger, Markus; Rai, Pragati; Visser, Suzanne; Elser, Miriam; Canonaco, Francesco; Slowik, Jay G.; Huang, Ru-Jin; Prévôt, André S. H.; Baltensperger, Urs

    2017-04-01

    Air pollution in Chinese cities is one of the environmental problems China has to address to mitigate the impacts on human health, air quality and climate. Average concentrations of particulate matter exceed 100 μg m⁻³ in many places in China, and the government is developing and implementing strategies to reduce the load of pollutants by various measures. A characterization of airborne particulate matter (PM), especially its composition and sources, will help in optimizing reduction and mitigation strategies for air pollution. We collected PM10 aerosols with a rotating drum impactor (RDI) in Xi'an in December 2013 and in Beijing in January 2014 with 30-min time resolution and for three size ranges (cut-off sizes 10, 2.5 and 1 μm). Each campaign encompassed one or more high pollution episodes in the respective city. Elements from Na to Pb were analyzed with synchrotron radiation induced X-ray fluorescence spectrometry (SR-XRF), and the resulting time series were used for source apportionment performed with the Multilinear-Engine 2 (ME-2) implementation of the Positive Matrix Factorization algorithm. The preliminary computations yielded 5 factors for Beijing, namely road dust, sea salt, traffic-related, industrial, and coal combustion. For Xi'an an additional desert dust factor was found. Further refinement could be expected from including the smaller size fractions, e.g. a sulfur-rich factor for secondary sulfate or a reacted chlorine factor in the fine mode fraction.
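
    Source apportionment with PMF/ME-2 factorizes the element-by-time concentration matrix into non-negative factor profiles and factor contributions. The sketch below illustrates the idea with scikit-learn's plain NMF on synthetic data; unlike ME-2, it does not weight residuals by measurement uncertainties or apply rotational constraints, so it is only a conceptual stand-in.

      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(0)
      n_times, n_elements, n_factors = 500, 25, 5    # e.g. 30-min XRF samples, Na..Pb, 5 sources
      true_profiles = rng.random((n_factors, n_elements))
      true_contrib = rng.gamma(2.0, 1.0, size=(n_times, n_factors))
      X = true_contrib @ true_profiles + 0.01 * rng.random((n_times, n_elements))

      model = NMF(n_components=n_factors, init="nndsvda", max_iter=1000, random_state=0)
      contributions = model.fit_transform(X)        # time series of factor contributions
      profiles = model.components_                  # chemical profiles of the factors
      print(contributions.shape, profiles.shape)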

  11. Application of stochastic models in identification and apportionment of heavy metal pollution sources in the surface soils of a large-scale region.

    Science.gov (United States)

    Hu, Yuanan; Cheng, Hefa

    2013-04-16

    As heavy metals occur naturally in soils at measurable concentrations and their natural background contents have significant spatial variations, identification and apportionment of heavy metal pollution sources across large-scale regions is a challenging task. Stochastic models, including the recently developed conditional inference tree (CIT) and the finite mixture distribution model (FMDM), were applied to identify the sources of heavy metals found in the surface soils of the Pearl River Delta, China, and to apportion the contributions from natural background and human activities. Regression trees were successfully developed for the concentrations of Cd, Cu, Zn, Pb, Cr, Ni, As, and Hg in 227 soil samples from a region of over 7.2 × 10⁴ km² based on seven specific predictors relevant to the source and behavior of heavy metals: land use, soil type, soil organic carbon content, population density, gross domestic product per capita, and the lengths and classes of the roads surrounding the sampling sites. The CIT and FMDM results consistently indicate that Cd, Zn, Cu, Pb, and Cr in the surface soils of the PRD were contributed largely by anthropogenic sources, whereas As, Ni, and Hg in the surface soils mostly originated from the soil parent materials.
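
    The conditional inference tree in this study recursively splits the soil samples on predictors such as land use or population density to separate natural background from anthropogenic enrichment; partykit::ctree in R is the usual implementation. The sketch below uses an ordinary scikit-learn regression tree on synthetic data as a simplified stand-in (it splits on impurity reduction rather than significance tests), with hypothetical predictor names and a made-up Cd response.

      import numpy as np
      import pandas as pd
      from sklearn.tree import DecisionTreeRegressor, export_text

      rng = np.random.default_rng(1)
      n = 227
      soils = pd.DataFrame({
          "population_density": rng.lognormal(5, 1, n),     # hypothetical predictors
          "soil_organic_carbon": rng.normal(1.5, 0.5, n),
          "road_length_km": rng.exponential(3, n),
      })
      # Hypothetical Cd concentration: background plus an anthropogenic term tied to population
      cd = 0.1 + 0.02 * np.log(soils["population_density"]) + rng.normal(0, 0.02, n)

      tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=20, random_state=0)
      tree.fit(soils, cd)
      print(export_text(tree, feature_names=list(soils.columns)))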

  12. Liver diseases: A major, neglected global public health problem requiring urgent actions and large-scale screening.

    Science.gov (United States)

    Marcellin, Patrick; Kutala, Blaise K

    2018-02-01

    Chronic liver diseases (CLDs) represent an important, and certainly underestimated, global public health problem. CLDs are highly prevalent and silent, related to different, sometimes associated causes. The distribution of the causes of these diseases is slowly changing, and within the next decade, the proportion of virus-induced CLDs will certainly decrease significantly while the proportion of NASH will increase. There is an urgent need for effective global actions including education, prevention and early diagnosis to manage and treat CLDs, thus preventing cirrhosis-related morbidity and mortality. Our role is to increase the awareness of the public, healthcare professionals and public health authorities to encourage active policies for early management that will decrease the short- and long-term public health burden of these diseases. Because necroinflammation is the key mechanism in the progression of CLDs, it should be detected early. Thus, large-scale screening for CLDs is needed. ALT levels are an easy and inexpensive marker of liver necroinflammation and could be the first-line tool in this process.

  13. Collection of a radioactive source of ⁸³Kr to study the gas distribution dynamics in a large GRPC detector

    CERN Multimedia

    An ultra-granular hadronic calorimeter was built using Glass Resistive Plate Chamber (GRPC) detectors as the sensitive medium. The gas of these detectors, each 1 × 1 m², is constrained to be on one side of the detector. To ensure good gas distribution a prototype was built. Such a scheme could be extended to larger GRPC detectors of more than 2 m² if found efficient. To check the performance a radioactive gas could be used in association with the usual gas mixture used to operate the GRPC. The distribution of the radioactive gas can be monitored thanks to the 1 cm² resolution provided by the embedded electronics used to read out the detector. The radioactive ⁸³Kr source (obtained from ⁸³Rb decay) could be produced at the ISOLDE facility and will be used to study larger GRPC detectors at CERN.

  14. Parameterization of disorder predictors for large-scale applications requiring high specificity by using an extended benchmark dataset

    Directory of Open Access Journals (Sweden)

    Eisenhaber Frank

    2010-02-01

    Background: Algorithms designed to predict protein disorder play an important role in structural and functional genomics, as disordered regions have been reported to participate in important cellular processes. Consequently, several methods with different underlying principles for disorder prediction have been independently developed by various groups. For assessing their usability in automated workflows, we are interested in identifying parameter settings and threshold selections, under which the performance of these predictors becomes directly comparable. Results: First, we derived a new benchmark set that accounts for different flavours of disorder complemented with a similar amount of order annotation derived for the same protein set. We show that, using the recommended default parameters, the programs tested are producing a wide range of predictions at different levels of specificity and sensitivity. We identify settings, in which the different predictors have the same false positive rate. We assess conditions when sets of predictors can be run together to derive consensus or complementary predictions. This is useful in the framework of proteome-wide applications where high specificity is required such as in our in-house sequence analysis pipeline and the ANNIE webserver. Conclusions: This work identifies parameter settings and thresholds for a selection of disorder predictors to produce comparable results at a desired level of specificity over a newly derived benchmark dataset that accounts equally for ordered and disordered regions of different lengths.
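
    Putting several disorder predictors on a common footing amounts to choosing, for each predictor, the score threshold that yields one agreed false positive rate on the ordered part of the benchmark. The sketch below shows that calibration step on synthetic scores; the score distributions and the 5% target are placeholder assumptions, not values from the paper.

      import numpy as np

      def threshold_for_fpr(scores_on_ordered, target_fpr):
          """Pick the score cut-off so that only target_fpr of ordered residues are called disordered."""
          return np.quantile(scores_on_ordered, 1.0 - target_fpr)

      rng = np.random.default_rng(0)
      # Hypothetical per-residue disorder scores of two predictors on the ordered benchmark subset
      predictor_a = rng.beta(2, 5, 10000)
      predictor_b = rng.beta(1, 3, 10000)

      for name, scores in [("A", predictor_a), ("B", predictor_b)]:
          t = threshold_for_fpr(scores, target_fpr=0.05)
          print(f"predictor {name}: threshold {t:.3f}, FPR {(scores >= t).mean():.3f}")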

  15. Known and Novel Sources of Variability in the Nicotine Metabolite Ratio in a Large Sample of Treatment-Seeking Smokers

    Science.gov (United States)

    Chenoweth, Meghan J.; Novalen, Maria; Hawk, Larry W.; Schnoll, Robert A.; George, Tony P.; Cinciripini, Paul M.; Lerman, Caryn; Tyndale, Rachel F.

    2014-01-01

    Background: The ratio of 3′-hydroxycotinine to cotinine, or nicotine metabolite ratio (NMR), is strongly associated with CYP2A6 genotype, CYP2A6-mediated nicotine and cotinine metabolism, and nicotine clearance. Higher NMR (faster nicotine clearance) is associated retrospectively with heavier smoking and lower cessation rates. Methods: NMR as a predictive biomarker of cessation outcomes is being investigated (NCT01314001). In addition to strong CYP2A6-genetic influences on NMR, demographic and hormonal factors alter NMR. Here we analyzed, for the first time together, these sources of variation on NMR in smokers screened for this clinical trial (N=1672). Results: Participants (mean age=45.9) were 65.1% Caucasian, 34.9% African American, and 54.8% male. Mean NMR (SD) was higher in Caucasians vs. African Americans (0.41(0.20) vs. 0.33(0.21); P<0.001), and in females vs. males (0.41(0.22) vs. 0.37(0.20); P<0.001). Among females, birth control pill use (N=17) and hormone replacement therapy (N=14) were associated with 19.5% (P=0.09) and 29.3% (P=0.06) higher mean NMR, respectively, albeit non-significantly. BMI was negatively associated with NMR (Rho=−0.14; P<0.001), while alcohol use (Rho=0.11; P<0.001) and cigarette consumption (Rho=0.12; P<0.001) were positively associated with NMR. NMR was 16% lower in mentholated cigarette users (P<0.001). When analyzed together in a linear regression model, these predictors (each ≤2%) accounted for <8% of total NMR variation. Conclusions: While these factors significantly affected NMR, they contributed little (together <8%; each ≤2%) to total NMR variation. Impact: Thus when using NMR, for example to prospectively guide smoking cessation therapy, these sources of variation are unlikely to cause NMR misclassification. PMID:25012994
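
    The final step of the analysis is a multiple linear regression of NMR on the demographic and behavioural predictors, with the share of explained variance read off as R². A minimal sketch of that calculation on synthetic data is given below; the predictor coding and effect sizes are illustrative assumptions, not values from the trial.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(42)
      n = 1672
      X = np.column_stack([
          rng.integers(0, 2, n),          # ancestry (0 = African American, 1 = Caucasian)
          rng.integers(0, 2, n),          # sex (0 = male, 1 = female)
          rng.normal(28, 6, n),           # BMI
          rng.poisson(15, n),             # cigarettes per day
          rng.integers(0, 2, n),          # mentholated cigarette use
      ])
      # Hypothetical NMR with small effects of each predictor, mimicking "<8% variance explained"
      nmr = (0.38 + 0.04 * X[:, 0] + 0.02 * X[:, 1] - 0.003 * (X[:, 2] - 28)
             + 0.002 * (X[:, 3] - 15) - 0.04 * X[:, 4] + rng.normal(0, 0.20, n))

      model = LinearRegression().fit(X, nmr)
      print("R^2 =", round(model.score(X, nmr), 3))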

  16. Distributed chemical computing using ChemStar: an open source java remote method invocation architecture applied to large scale molecular data from PubChem.

    Science.gov (United States)

    Karthikeyan, M; Krishnan, S; Pandey, Anil Kumar; Bender, Andreas; Tropsha, Alexander

    2008-04-01

    We present the application of a Java remote method invocation (RMI) based open source architecture to distributed chemical computing. This architecture was previously employed for distributed data harvesting of chemical information from the Internet via the Google application programming interface (API; ChemXtreme). Due to its open source character and its flexibility, the underlying server/client framework can be quickly adapted to virtually every computational task that can be parallelized. Here, we present the server/client communication framework as well as an application to distributed computing of chemical properties on a large scale (currently the size of PubChem; about 18 million compounds), using both the Marvin toolkit as well as the open source JOELib package. As an application, for this set of compounds, the agreement of log P and TPSA between the packages was compared. Outliers were found to be mostly non-druglike compounds and differences could usually be explained by differences in the underlying algorithms. ChemStar is the first open source distributed chemical computing environment built on Java RMI, which is also easily adaptable to user demands due to its "plug-in architecture". The complete source codes as well as calculated properties along with links to PubChem resources are available on the Internet via a graphical user interface at http://moltable.ncl.res.in/chemstar/.

  17. Identifying sources of groundwater nitrate contamination in a large alluvial groundwater basin with highly diversified intensive agricultural production

    Science.gov (United States)

    Lockhart, K. M.; King, A. M.; Harter, T.

    2013-08-01

    Groundwater quality is a concern in alluvial aquifers underlying agricultural areas worldwide. Nitrate from land applied fertilizers or from animal waste can leach to groundwater and contaminate drinking water resources. The San Joaquin Valley, California, is an example of an agricultural landscape with a large diversity of field, vegetable, tree, nut, and citrus crops, but also confined animal feeding operations (CAFOs, here mostly dairies) that generate, store, and land apply large amounts of liquid manure. As in other such regions around the world, the rural population in the San Joaquin Valley relies almost exclusively on shallow domestic wells (≤ 150 m deep), of which many have been affected by nitrate. Variability in crops, soil type, and depth to groundwater contribute to large variability in nitrate occurrence across the underlying aquifer system. The role of these factors in controlling groundwater nitrate contamination levels is examined. Two hundred domestic wells were sampled in two sub-regions of the San Joaquin Valley, Stanislaus and Merced (Stan/Mer) and Tulare and Kings (Tul/Kings) Counties. Forty six percent of well water samples in Tul/Kings and 42% of well water samples in Stan/Mer exceeded the MCL for nitrate (10 mg/L NO3-N). For statistical analysis of nitrate contamination, 78 crop and landuse types were considered by grouping them into ten categories (CAFO, citrus, deciduous fruits and nuts, field crops, forage, native, pasture, truck crops, urban, and vineyards). Vadose zone thickness, soil type, well construction information, well proximity to dairies, and dominant landuse near the well were considered. In the Stan/Mer area, elevated nitrate levels in domestic wells most strongly correlate with the combination of very shallow (≤ 21 m) water table and the presence of either CAFO derived animal waste applications or deciduous fruit and nut crops (synthetic fertilizer applications). In Tulare County, statistical data indicate that elevated

  18. 26 CFR 1.857-4 - Tax imposed by reason of the failure to meet certain source-of-income requirements.

    Science.gov (United States)

    2010-04-01

    ..., DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Real Estate Investment Trusts § 1.857-4 Tax imposed by reason of the failure to meet certain source-of-income requirements. Section 857... 26 Internal Revenue 9 2010-04-01 2010-04-01 false Tax imposed by reason of the failure to meet...

  19. GATECloud.net: a platform for large-scale, open-source text processing on the cloud.

    Science.gov (United States)

    Tablan, Valentin; Roberts, Ian; Cunningham, Hamish; Bontcheva, Kalina

    2013-01-28

    Cloud computing is increasingly being regarded as a key enabler of the 'democratization of science', because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research--GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security and fault tolerance. We also include a cost-benefit analysis and usage evaluation.

  20. Efficient power generation from large 750°C heat sources. Application to coal-fired and nuclear power station

    International Nuclear Information System (INIS)

    Tilliette, Z.P.; Pierre, B.

    1980-03-01

    Considering the future concern about a more efficient, rational use of heat sources, and also about greater location flexibility of power plants owing to the possibility of dry cooling, closed gas cycles can offer new solutions for fossil or nuclear energy. An efficient conversion of heat into power is obtained by the combination of a main non-intercooled helium cycle with a flexible, superheated, low pressure bottoming steam cycle. Emphasis is placed on the matching of the two cycles; for that, a recuperator by-pass arrangement is used. The operation of the main gas turbocompressor does not depend upon the operation of the small steam cycle. Results are given for a conservative turbine inlet temperature of 750°C. Applications are made to a coal-fired power plant and to a gas turbine, gas-cooled nuclear reactor. Overall net plant efficiencies of 39 per cent and 46 per cent respectively are reached. For a cycle top temperature equal to 850°C, the corresponding net efficiencies would be 42 and 49 per cent.

  1. The design study of high voltage plasma focus for a large fluence neutron source by using a water capacitor bank

    International Nuclear Information System (INIS)

    Ueno, Isao; Kobata, Tadasuke

    1983-01-01

    A new possibility for a high intensity neutron source (HINS) would be opened by the plasma focus device if a high voltage capacitor bank were available. A scaling law of neutron yield for D-T gas discharge in a plasma focus device is obtained after Imshennik, Filippov and Filippova. The resulting scaling law shows the realizability of a D-T HINS by the use of plasma focus, provided that the device is operated under a high voltage condition. Until now, it has been difficult to construct a long-life high voltage capacitor bank with, for example, V0 = 300 kV, C0 = 200 μF and L0 ≈ 5 nH, as required at the HINS level. It becomes possible to design this capacitor bank by using the coaxial water capacitor which has been developed for electron and ion beam accelerators. The size of a capacitor designed for V0 = 300 kV, C0 = 1 μF is Φ5 m × 22 m. Two hundred such capacitors are used in parallel in order to obtain the 200 μF. (author)
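
    The bank parameters quoted above fix its stored energy and parallel capacitance directly. The short calculation below is a sketch using only the figures given in the abstract.

      n_caps = 200          # coaxial water capacitors in parallel
      c_unit = 1e-6         # F, capacitance of one capacitor
      v0 = 300e3            # V, charging voltage

      c_bank = n_caps * c_unit                 # parallel capacitances add: 200 uF
      energy = 0.5 * c_bank * v0 ** 2          # stored energy E = C*V^2 / 2, about 9 MJ
      print(f"bank capacitance = {c_bank * 1e6:.0f} uF, stored energy = {energy / 1e6:.1f} MJ")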

  2. Simulation of droplet impact onto a deep pool for large Froude numbers in different open-source codes

    Science.gov (United States)

    Korchagova, V. N.; Kraposhin, M. V.; Marchevsky, I. K.; Smirnova, E. V.

    2017-11-01

    A droplet impact on a deep pool can induce macro-scale or micro-scale effects like a crown splash, a high-speed jet, formation of secondary droplets or thin liquid films, etc. It depends on the diameter and velocity of the droplet, liquid properties, effects of external forces and other factors that a ratio of dimensionless criteria can account for. In the present research, we considered the droplet and the pool consist of the same viscous incompressible liquid. We took surface tension into account but neglected gravity forces. We used two open-source codes (OpenFOAM and Gerris) for our computations. We review the possibility of using these codes for simulation of processes in free-surface flows that may take place after a droplet impact on the pool. Both codes simulated several modes of droplet impact. We estimated the effect of liquid properties with respect to the Reynolds number and Weber number. Numerical simulation enabled us to find boundaries between different modes of droplet impact on a deep pool and to plot corresponding mode maps. The ratio of liquid density to that of the surrounding gas induces several changes in mode maps. Increasing this density ratio suppresses the crown splash.
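
    The regimes on those mode maps are organised by the dimensionless groups formed from droplet diameter, impact velocity and the liquid properties. The sketch below computes Reynolds, Weber and Froude numbers for a representative water droplet; the input values are illustrative, not cases from the paper.

      import math

      def impact_numbers(diameter, velocity, rho, mu, sigma, g=9.81):
          """Dimensionless groups governing droplet impact on a deep pool."""
          re = rho * velocity * diameter / mu          # Reynolds: inertia vs viscosity
          we = rho * velocity ** 2 * diameter / sigma  # Weber: inertia vs surface tension
          fr = velocity / math.sqrt(g * diameter)      # Froude: inertia vs gravity
          return re, we, fr

      # Illustrative 2 mm water droplet hitting the pool at 3 m/s
      re, we, fr = impact_numbers(2e-3, 3.0, rho=1000.0, mu=1.0e-3, sigma=0.072)
      print(f"Re = {re:.0f}, We = {we:.0f}, Fr = {fr:.1f}")   # large Fr, so gravity is negligible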

  3. Molecular cloning of the large subunit of the high-Ca2+-requiring form of human Ca2+-activated neutral protease

    International Nuclear Information System (INIS)

    Imajoh, Shinobu; Aoki, Kazumasa; Ohno, Shigeo; Emori, Yasufumi; Kawasaki, Hiroshi; Sugihara, Hidemitsu; Suzuki, Koichi

    1988-01-01

    A nearly full-length cDNA clone for the large subunit of the high-Ca2+-requiring Ca2+-activated neutral protease (mCANP) from human tissues has been isolated. The deduced protein, determined for the first time as an mCANP, has essentially the same structural features as those revealed previously for the large subunits of the low-Ca2+-requiring form (μCANP). Namely, the protein, comprising 700 amino acid residues, is characterized by four domains, containing a cysteine protease like domain and a Ca2+-binding domain. The overall amino acid sequence similarities of the mCANP large subunit with those of human μCANP and chicken CANP are 62% and 66%, respectively. These values are slightly lower than that observed between μCANP and chicken CANP (70%). Local sequence similarities vary with the domain, 73-78% in the cysteine protease like domain and 48-65% in the Ca2+-binding domain. These results suggest that CANPs with different Ca2+ sensitivities share a common evolutionary origin and that their regulatory mechanisms are similar except for the Ca2+ concentrations required for activation

  4. Biomass burning source characterization requirements in air quality models with and without data assimilation: challenges and opportunities

    Science.gov (United States)

    Hyer, E. J.; Zhang, J. L.; Reid, J. S.; Curtis, C. A.; Westphal, D. L.

    2007-12-01

    Quantitative models of the transport and evolution of atmospheric pollution have graduated from the laboratory to become a part of the operational activity of forecast centers. Scientists studying the composition and variability of the atmosphere put great efforts into developing methods for accurately specifying sources of pollution, including natural and anthropogenic biomass burning. These methods must be adapted for use in operational contexts, which impose additional strictures on input data and methods. First, only input data sources available in near real-time are suitable for use in operational applications. Second, operational applications must make use of redundant data sources whenever possible. This is a shift in philosophy: in a research context, the most accurate and complete data set will be used, whereas in an operational context, the system must be designed with maximum redundancy. The goal in an operational context is to produce, to the extent possible, consistent and timely output, given sometimes inconsistent inputs. The Naval Aerosol Analysis and Prediction System (NAAPS), a global operational aerosol analysis and forecast system, recently began incorporating assimilation of satellite-derived aerosol optical depth. Assimilation of satellite AOD retrievals has dramatically improved aerosol analyses and forecasts from this system. The use of aerosol data assimilation also changes the strategy for improving the smoke source function. The absolute magnitude of emissions events can be refined through feedback from the data assimilation system, both in real- time operations and in post-processing analysis of data assimilation results. In terms of the aerosol source functions, the largest gains in model performance are now to be gained by reducing data latency and minimizing missed detections. In this presentation, recent model development work on the Fire Locating and Monitoring of Burning Emissions (FLAMBE) system that provides smoke aerosol

  5. Derivation and characterization of human fetal MSCs: an alternative cell source for large-scale production of cardioprotective microparticles.

    Science.gov (United States)

    Lai, Ruenn Chai; Arslan, Fatih; Tan, Soon Sim; Tan, Betty; Choo, Andre; Lee, May May; Chen, Tian Sheng; Teh, Bao Ju; Eng, John Kun Long; Sidik, Harwin; Tanavde, Vivek; Hwang, Wei Sek; Lee, Chuen Neng; El Oakley, Reida Menshawe; Pasterkamp, Gerard; de Kleijn, Dominique P V; Tan, Kok Hian; Lim, Sai Kiang

    2010-06-01

    The therapeutic effects of mesenchymal stem cell (MSC) transplantation are increasingly thought to be mediated by MSC secretion. We have previously demonstrated that human ESC-derived MSCs (hESC-MSCs) produce cardioprotective microparticles in a pig model of myocardial ischemia/reperfusion (MI/R) injury. As the safety and availability of clinical grade human ESCs remain a concern, MSCs from fetal tissue sources were evaluated as alternatives. Here we derived five MSC cultures from limb, kidney and liver tissues of three first trimester aborted fetuses and, like our previously described hESC-derived MSCs, they were highly expandable and had similar telomerase activities. Each line has the potential to generate at least 10^16 to 10^19 cells or 10^7 to 10^10 doses of cardioprotective secretion for a pig model of MI/R injury. Unlike previously described fetal MSCs, they did not express pluripotency-associated markers such as Oct4, Nanog or Tra1-60. They displayed a typical MSC surface antigen profile and differentiated into adipocytes, osteocytes and chondrocytes in vitro. Global gene expression analysis by microarray and qRT-PCR revealed a typical MSC gene expression profile that was highly correlated among the five fetal MSC cultures and with that of hESC-MSCs (r² > 0.90). Like hESC-MSCs, they produced secretion that was cardioprotective in a mouse model of MI/R injury. HPLC analysis of the secretion revealed the presence of a population of microparticles with a hydrodynamic radius of 50-65 nm. This purified population of microparticles was cardioprotective at approximately 1/10 dosage of the crude secretion.

  6. Reconstruction of atmospheric trace metals pollution in Southwest China using sediments from a large and deep alpine lake: Historical trends, sources and sediment focusing.

    Science.gov (United States)

    Lin, Qi; Liu, Enfeng; Zhang, Enlou; Nath, Bibhash; Shen, Ji; Yuan, Hezhong; Wang, Rong

    2018-02-01

    Atmospheric pollution, one of the leading environmental problems in South and East Asia, and its impact on the terrestrial environmental quality remain poorly understood, particularly in alpine areas where both historical and present-day mining and smelting operations might leave an imprint. Here, we reconstructed atmospheric trace metal pollution during the past century using core sediments from a large and deep alpine lake in Southwest China. The implication of in-lake and/or in-watershed sediment focusing in pollution quantification is discussed by analyzing 15 sediment cores. Factor analysis and enrichment factors indicated Cd, Pb and Sb as the typical pollutants. Distinct peaks of Pb and Sb pollution were observed around the 1920s, but little Pb pollution was detected in recent decades, different from other studies in similar regions. Cadmium pollution was observed until the mid-1980s, synchronized with Sb. The distinctive variations in the atmospheric trace metal pollution process in Southwest China highlight the regional and sub-regional sources of metal pollutants, which should be primarily attributed to non-ferrous metal smelting emissions. Both natural and anthropogenic metals showed wide concentration ranges though exhibited similar temporal trends in the 15 cores. Spatial variations of anthropogenic metals were influenced by the remobilization of in-watershed pollutants, whereas natural metals were regulated by the detrital materials in the sub-basin. In-lake sediment focusing had little influence on the spatial distributions of all metals, different from the traditional sediment focusing pattern observed in small lakes. Anthropogenic Cd accumulation in sediments ranged from 1.5 to 10.1 mg m⁻² in a specific core with an average of 6.5 mg m⁻² for the entire lake, highlighting that a reliable whole-lake pollutant budget requires an analysis of multiple cores. Our study suggests that the management of aquatic ecosystem health should take the remobilization of in
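
    Enrichment factors of the kind used here normalise a metal to a conservative lithogenic element (commonly Al or Ti) and compare the sample ratio with a background ratio. The sketch below shows that calculation; the reference element, background values and sample concentrations are illustrative assumptions, not measurements from the cores.

      def enrichment_factor(metal_sample, ref_sample, metal_background, ref_background):
          """EF = (metal/reference)_sample / (metal/reference)_background."""
          return (metal_sample / ref_sample) / (metal_background / ref_background)

      # Hypothetical Cd data, normalised to Al (concentrations in mg/kg)
      ef_cd = enrichment_factor(metal_sample=1.2, ref_sample=65000.0,
                                metal_background=0.2, ref_background=70000.0)
      print(f"EF(Cd) = {ef_cd:.1f}")   # EF well above 1 points to an anthropogenic contribution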

  7. SCARDEC: a new technique for the rapid determination of seismic moment magnitude, focal mechanism and source time functions for large earthquakes using body-wave deconvolution

    Science.gov (United States)

    Vallée, M.; Charléty, J.; Ferreira, A. M. G.; Delouis, B.; Vergoz, J.

    2011-01-01

    Accurate and fast magnitude determination for large, shallow earthquakes is of key importance for post-seismic response and tsunami alert purposes. When no local real-time data are available, which is today the case for most subduction earthquakes, the first information comes from teleseismic body waves. Standard body-wave methods give accurate magnitudes for earthquakes up to Mw= 7-7.5. For larger earthquakes, the analysis is more complex, because of the non-validity of the point-source approximation and of the interaction between direct and surface-reflected phases. The latter effect acts as a strong high-pass filter, which complicates the magnitude determination. We here propose an automated deconvolutive approach, which does not impose any simplifying assumptions about the rupture process, thus being well adapted to large earthquakes. We first determine the source duration based on the length of the high frequency (1-3 Hz) signal content. The deconvolution of synthetic double-couple point source signals—depending on the four earthquake parameters strike, dip, rake and depth—from the windowed real data body-wave signals (including P, PcP, PP, SH and ScS waves) gives the apparent source time function (STF). We search the optimal combination of these four parameters that respects the physical features of any STF: causality, positivity and stability of the seismic moment at all stations. Once this combination is retrieved, the integration of the STFs gives directly the moment magnitude. We apply this new approach, referred to as the SCARDEC method, to most of the major subduction earthquakes in the period 1990-2010. Magnitude differences between the Global Centroid Moment Tensor (CMT) and the SCARDEC method may reach 0.2, but values are found consistent if we take into account that the Global CMT solutions for large, shallow earthquakes suffer from a known trade-off between dip and seismic moment. We show by modelling long-period surface waves of these events that
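
    Once the apparent source time function has been retrieved, the moment magnitude follows from integrating it over the rupture duration and applying the standard moment-magnitude relation Mw = (2/3)(log10 M0 - 9.1). The sketch below shows that final step on a synthetic triangular STF; the shape and duration are illustrative, not a SCARDEC result.

      import numpy as np

      def mw_from_stf(times, moment_rate):
          """Seismic moment as the time integral of the STF (trapezoidal rule), then Mw."""
          m0 = np.sum(0.5 * (moment_rate[1:] + moment_rate[:-1]) * np.diff(times))   # N*m
          return (2.0 / 3.0) * (np.log10(m0) - 9.1)

      # Synthetic triangular STF: 100 s duration, peak moment rate 9e18 N*m/s
      t = np.linspace(0.0, 100.0, 1001)
      stf = 9e18 * np.interp(t, [0.0, 50.0, 100.0], [0.0, 1.0, 0.0])
      print(f"Mw = {mw_from_stf(t, stf):.2f}")   # about Mw 7.7 for this example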

  8. Large, but not small, antigens require time- and temperature-dependent processing in accessory cells before they can be recognized by T cells

    DEFF Research Database (Denmark)

    Buus, S; Werdelin, O

    1986-01-01

    We have studied if antigens of different size and structure all require processing in antigen-presenting cells of guinea-pigs before they can be recognized by T cells. The method of mild paraformaldehyde fixation was used to stop antigen-processing in the antigen-presenting cells. As a measure...... of antigen presentation we used the proliferative response of appropriately primed T cells during a co-culture with the paraformaldehyde-fixed and antigen-exposed presenting cells. We demonstrate that the large synthetic polypeptide antigen, dinitrophenyl-poly-L-lysine, requires processing. After an initial......-dependent and consequently energy-requiring. Processing is strongly inhibited by the lysosomotrophic drug, chloroquine, suggesting a lysosomal involvement in antigen processing. The existence of a minor, non-lysosomal pathway is suggested, since small amounts of antigen were processed even at 10 degrees C, at which...

  9. [Evaluation and source analysis of the mercury pollution in soils and vegetables around a large-scale zinc smelting plant].

    Science.gov (United States)

    Liu, Fang; Wang, Shu-Xiao; Wu, Qing-Ru; Lin, Hai

    2013-02-01

    Farming soil and vegetable samples around a large-scale zinc smelter were collected for mercury content analyses, and the single pollution index method, together with the relevant regulations, was used to evaluate the pollution status of the sampled soils and vegetables. The results indicated that the surface soil and vegetables were polluted with mercury to different extents. Of the soil samples, 78% exceeded the national standard. The mercury concentration in the most severely contaminated area was 29 times higher than the background concentration, reaching the severe pollution degree. The mercury concentration in all vegetable samples exceeded the standard for non-polluted vegetables. The mercury concentration in the most severely polluted vegetables was 64.5 times the standard, and on average the mercury concentration in the vegetable samples was 25.4 times the standard. For 85% of the vegetable samples, the mercury concentrations of leaves were significantly higher than those of roots, which implies that the mercury in leaves mainly came from the atmosphere. The mercury concentrations in vegetable roots were significantly correlated with those in soils, indicating that the mercury in roots was mainly from soil. The mercury emissions from the zinc smelter have obvious impacts on the surrounding soils and vegetables. Keywords: zinc smelting; mercury pollution; soil; vegetable; mercury content
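
    The single pollution index used for the evaluation is simply the measured concentration divided by the applicable standard value. The sketch below computes it for a few hypothetical samples; the concentrations and the standard are placeholders, not values from the survey, and the class boundaries follow a commonly used convention rather than the specific regulation cited.

      def single_pollution_index(concentration, standard):
          """P_i = C_i / S_i; P_i <= 1 is commonly read as unpolluted."""
          return concentration / standard

      def pollution_class(p):
          if p <= 1.0:
              return "unpolluted"
          if p <= 2.0:
              return "slightly polluted"
          if p <= 3.0:
              return "moderately polluted"
          return "severely polluted"

      hg_standard = 0.5   # hypothetical soil Hg standard, mg/kg
      for c in [0.2, 0.8, 2.1, 14.5]:     # hypothetical sample concentrations, mg/kg
          p = single_pollution_index(c, hg_standard)
          print(f"C = {c:>5} mg/kg  ->  P = {p:4.1f} ({pollution_class(p)})")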

  10. 48 CFR 6.302-1 - Only one responsible source and no other supplies or services will satisfy agency requirements.

    Science.gov (United States)

    2010-10-01

    ... innovative concept (see definition at 2.101), or, demonstrates a unique capability of the source to provide the particular research services proposed; (B) Offers a concept or services not otherwise available to... contract is for construction of a part of a utility system and the utility company itself is the only...

  11. Dietary Lipid Sources Influence Fatty Acid Composition in Tissue of Large Yellow Croaker (Larmichthys crocea) by Regulating Triacylglycerol Synthesis and Catabolism at the Transcriptional Level.

    Directory of Open Access Journals (Sweden)

    Hong Qiu

    An 8-week feeding trial was conducted to evaluate the effects of dietary lipid sources on growth performance, fatty acid composition, rate-limiting enzyme activities and gene expression related to lipid metabolism in large yellow croaker (Larmichthys crocea). Five iso-nitrogenous and iso-lipidic experimental diets were formulated to contain different lipid sources, namely fish oil (FO), soybean oil (SO), linseed oil (LO), rapeseed oil (RO) and peanut oil (PO), respectively. Triplicate groups of 50 fish (initial weight 13.77 ± 0.07 g) were stocked in 15 floating net cages (1.5 m × 1.5 m × 2.0 m). Fish fed the diets containing RO and LO had lower weight gain and specific growth rates than those fed the FO, SO and PO diets. Survival, feed efficiency, protein efficiency ratio, hepatosomatic index, viscerasomatic index and condition factor were not significantly affected by different dietary lipid sources. Fish fed the diet containing FO had higher lipid content in whole body compared with the other groups, whereas fish fed the SO diet had the lowest muscle lipid content. Fatty acid profiles of muscle and liver reflected the fatty acid composition of the diets. Plasma glucose, triglyceride, and the enzymatic activity of aspartate aminotransferase and alanine aminotransferase were significantly influenced by different dietary lipid sources, while total protein, cholesterol, superoxide dismutase or malondialdehyde in plasma were not affected by the different dietary lipid sources. Fish fed the LO diet had lower adipose triglyceride lipase and fatty acid synthase activities in liver than those fed the diets containing FO and RO, while the LO diet resulted in the highest hepatic carnitine palmitoyltransferase-1 activity. Hepatic gene relative expression of adipose triglyceride lipase and carnitine palmitoyltransferase-1 in fish fed the PO diet was significantly higher than all other groups, whereas fish fed the SO and LO diets had lower relative expression levels of

  12. Parametric Evaluation of Large-Scale High-Temperature Electrolysis Hydrogen Production Using Different Advanced Nuclear Reactor Heat Sources

    International Nuclear Information System (INIS)

    Harvego, Edwin A.; McKellar, Michael G.; O'Brien, James E.; Herring, J. Stephen

    2009-01-01

    High Temperature Electrolysis (HTE), when coupled to an advanced nuclear reactor capable of operating at reactor outlet temperatures of 800°C to 950°C, has the potential to efficiently produce the large quantities of hydrogen needed to meet future energy and transportation needs. To evaluate the potential benefits of nuclear-driven hydrogen production, the UniSim process analysis software was used to evaluate different reactor concepts coupled to a reference HTE process design concept. The reference HTE concept included an Intermediate Heat Exchanger and intermediate helium loop to separate the reactor primary system from the HTE process loops and additional heat exchangers to transfer reactor heat from the intermediate loop to the HTE process loops. The two process loops consisted of the water/steam loop feeding the cathode side of a HTE electrolysis stack, and the sweep gas loop used to remove oxygen from the anode side. The UniSim model of the process loops included pumps to circulate the working fluids and heat exchangers to recover heat from the oxygen and hydrogen product streams to improve the overall hydrogen production efficiencies. The reference HTE process loop model was coupled to separate UniSim models developed for three different advanced reactor concepts (a high-temperature helium cooled reactor concept and two different supercritical CO2 reactor concepts). Sensitivity studies were then performed to evaluate the effect of reactor outlet temperature on the power cycle efficiency and overall hydrogen production efficiency for each of the reactor power cycles. The results of these sensitivity studies showed that overall power cycle and hydrogen production efficiencies increased with reactor outlet temperature, but the power cycles producing the highest efficiencies varied depending on the temperature range considered

  13. Two separable functional domains of simian virus 40 large T antigen: carboxyl-terminal region of simian virus 40 large T antigen is required for efficient capsid protein synthesis.

    Science.gov (United States)

    Tornow, J; Polvino-Bodnar, M; Santangelo, G; Cole, C N

    1985-01-01

    The carboxyl-terminal portion of simian virus 40 large T antigen is essential for productive infection of CV-1 and CV-1p green monkey kidney cells. Mutant dlA2459, lacking 14 base pairs at 0.193 map units, was positive for viral DNA replication, but unable to form plaques in CV-1p cells (J. Tornow and C.N. Cole, J. Virol. 47:487-494, 1983). In this report, the defect of dlA2459 is further defined. Simian virus 40 late mRNAs were transcribed, polyadenylated, spliced, and transported in dlA2459-infected cells, but the level of capsid proteins produced in infected CV-1 green monkey kidney cells was extremely low. dlA2459 large T antigen lacks those residues known to be required for adenovirus helper function, and the block to productive infection by dlA2459 occurs at the same stage of infection as the block to productive adenovirus infection of CV-1 cells. These results suggest that the adenovirus helper function is required for productive infection by simian virus 40. Mutant dlA2459 was able to grow on the Vero and BSC-1 lines of African green monkey kidney cells. Additional mutants affecting the carboxyl-terminal portion of large T were prepared. Mutant inv2408 contains an inversion of the DNA between the BamHI and BclI sites (0.144 to 0.189 map units). This inversion causes transposition of the carboxyl-terminal 26 amino acids of large T antigen and the carboxyl-terminal 18 amino acids of VP1. This mutant was viable, even though the essential information absent from dlA2459 large T antigen has been transferred to the carboxyl terminus of VP1 of inv2408. The VP1 polypeptide carrying this carboxyl-terminal portion of large T could overcome the defect of dlA2459. This indicates that the carboxyl terminus of large T antigen is a separate and separable functional domain. Images PMID:2982029

  14. Characterisation of the high dynamic range Large Pixel Detector (LPD) and its use at X-ray free electron laser sources

    Science.gov (United States)

    Veale, M. C.; Adkin, P.; Booker, P.; Coughlan, J.; French, M. J.; Hart, M.; Nicholls, T.; Schneider, A.; Seller, P.; Pape, I.; Sawhney, K.; Carini, G. A.; Hart, P. A.

    2017-12-01

    The STFC Rutherford Appleton Laboratory have delivered the Large Pixel Detector (LPD) for MHz frame rate imaging at the European XFEL. The detector system has an active area of 0.5 m × 0.5 m and consists of a million pixels on a 500 μm pitch. Sensors have been produced from 500 μm thick Hamamatsu silicon tiles that have been bump bonded to the readout ASIC using a silver epoxy and gold stud technique. Each pixel of the detector system is capable of measuring 10⁵ 12 keV photons per image readout at 4.5 MHz. In this paper results from the testing of these detectors at the Diamond Light Source and the Linac Coherent Light Source (LCLS) are presented. The performance of the detector in terms of linearity, spatial uniformity and the performance of the different ASIC gain stages is characterised.

  15. Variants for the development of electricity generating sources to meet the load requirements of Republic of Moldova

    International Nuclear Information System (INIS)

    Comendant, I.; Sula, A.

    1996-01-01

    The Institute of Power Engineering of the Academy of Sciences of Moldova elaborated a draft Energy Programme for the Republic of Moldova in 1994. Within the framework of this project, the development of electricity generating sources was studied, taking into account the possibility of utilizing the power reserves of the Romanian power system. Under conditions of an acute lack of investment and high uncertainty concerning the development of the national economy, a rational solution was found, which supposes a wide integration of the power systems of the Republic of Moldova and Romania. (author) 1 tab

  16. Primary energy sources for electricity supply in the FRG - demand and requirements as seen by the electricity supply industry

    International Nuclear Information System (INIS)

    Bierhoff, R.

    1977-01-01

    Starting from the present energy supply situation in the FRG, an attempt is made to elucidate basic tendencies for its development until 1990. The author pleads for the necessary growth by means of a series of theses. The supply of electric power, which is in the foreground here, can only be secured in the long run by greater utilization of coal and nuclear energy. Due to costs, other energy sources - playing a major role - will contribute less to the supply of electric power. (UA)

  17. Analysis of the custom design/fabrication/testing requirements for a large-hole drilling machine for use in an underground radioactive waste repository

    International Nuclear Information System (INIS)

    Grams, W.H.; Gnirk, P.F.

    1976-01-01

    This report presents an analysis of the fabrication and field test requirements for a drilling machine that would be applicable to the drilling of large diameter holes for the emplacement of radioactive waste canisters in an underground repository. On the basis of a previous study in 1975 by RE/SPEC Inc. for the Oak Ridge National Laboratory, it was concluded that none of the commercially available machines were ideally suited for the desired drilling application, and that it was doubtful whether a machine with the required capabilities would become available as a standard equipment item. The results of the current study, as presented herein, provide a definitive basis for selecting the desired specifications, estimating the design, fabrication, and testing costs, and analyzing the cost-benefit characteristics of a custom-designed drilling machine for the emplacement hole drilling task

  18. Rates of nitrogen from nitric and ammoniacal sources required by upland rice genotypes originating from Brazil and Colombia

    Directory of Open Access Journals (Sweden)

    Hector Augusto Sandoval Contreras

    2016-06-01

    The aim of this study was to evaluate the initial growth, nitrogen (N) uptake, and agronomic efficiency after the use of N fertilizers in upland rice cultivation. The experiment was conducted in a greenhouse using pots filled with surface-layer (0 to 20 cm) soil collected from the municipality of Jaguapitã, Paraná. The experimental design was completely randomized with 4 replications. A factorial scheme of 5 × 2 was used, in which the factors were 5 N rates (0, 25, 50, 75, and 100 kg ha⁻¹ N) and 2 cultivars of rice (Fedearroz Lagunas [Colombian] and IAPAR-9 [Brazilian]). The N sources tested were ammonium sulfate (Experiment I) and calcium nitrate (Experiment II). The following variables were evaluated: number of tillers per pot (NTP), dry mass of the shoots (DMS), N content in the dry mass (NCDM), and agronomic efficiency of N fertilizer (AEN). The data obtained in the experiments were evaluated using analysis of variance, and mean values were compared using Tukey's test at 5% significance for rice cultivar effects or adjusted to polynomial regression equations for N rates. Use of calcium nitrate yielded higher values of NTP, NCDM, and AEN. The cultivar Lagunas showed higher NTP, while IAPAR-9 showed higher DMS. An increase in N rates, for both sources, resulted in an increase of NTP, DMS, and NCDM; however, AEN decreased.

  19. Identification of Evidence for Key Parameters in Decision-Analytic Models of Cost Effectiveness: A Description of Sources and a Recommended Minimum Search Requirement.

    Science.gov (United States)

    Paisley, Suzy

    2016-06-01

    This paper proposes recommendations for a minimum level of searching for data for key parameters in decision-analytic models of cost effectiveness and describes sources of evidence relevant to each parameter type. Key parameters are defined as treatment effects, adverse effects, costs, resource use, health state utility values (HSUVs) and baseline risk of events. The recommended minimum requirement for treatment effects is comprehensive searching according to available methodological guidance. For other parameter types, the minimum is the searching of one bibliographic database plus, where appropriate, specialist sources and non-research-based and non-standard format sources. The recommendations draw on the search methods literature and on existing analyses of how evidence is used to support decision-analytic models. They take account of the range of research and non-research-based sources of evidence used in cost-effectiveness models and of the need for efficient searching. Consideration is given to what constitutes best evidence for the different parameter types in terms of design and scientific quality and to making transparent the judgments that underpin the selection of evidence from the options available. Methodological issues are discussed, including the differences between decision-analytic models of cost effectiveness and systematic reviews when searching and selecting evidence and comprehensive versus sufficient searching. Areas are highlighted where further methodological research is required.

  20. The SAGE-Spec Spitzer Legacy program: the life-cycle of dust and gas in the Large Magellanic Cloud. Point source classification - III

    Science.gov (United States)

    Jones, O. C.; Woods, P. M.; Kemper, F.; Kraemer, K. E.; Sloan, G. C.; Srinivasan, S.; Oliveira, J. M.; van Loon, J. Th.; Boyer, M. L.; Sargent, B. A.; McDonald, I.; Meixner, M.; Zijlstra, A. A.; Ruffle, P. M. E.; Lagadec, E.; Pauly, T.; Sewiło, M.; Clayton, G. C.; Volk, K.

    2017-09-01

    The Infrared Spectrograph (IRS) on the Spitzer Space Telescope observed nearly 800 point sources in the Large Magellanic Cloud (LMC), taking over 1000 spectra. Of these targets, 197 were observed as part of the SAGE-Spec Spitzer Legacy program; the remainder are from a variety of different calibration, guaranteed time and open time projects. We classify these point sources into types according to their infrared spectral features, continuum and spectral energy distribution shape, bolometric luminosity, cluster membership and variability information, using a decision-tree classification method. We then refine the classification using supplementary information from the astrophysical literature. We find that our IRS sample consists largely of young stellar objects (YSOs) and H II regions; post-main-sequence low-mass stars, namely (post-)asymptotic giant branch stars and planetary nebulae; and massive stars, including several rare evolutionary types. Two supernova remnants, a nova and several background galaxies were also observed. We use these classifications to improve our understanding of the stellar populations in the LMC, to study the composition and characteristics of dust species in a variety of LMC objects, and to verify the photometric classification methods used by mid-IR surveys. We discover that some widely used catalogues of objects contain considerable contamination and others are missing sources in our sample.

  1. The Training Requirements for the Workers a Legal Instrument to Ensure the Safety Use of the Ionizing Radiation Sources

    International Nuclear Information System (INIS)

    Rosca, G.; Coroianu, A.; Stanescu, G.

    2009-01-01

    Recognizing the need for an approach that is graded and commensurate with the risk associated with the practice, the Romanian Regulatory Authority developed the legal framework defining the roles, duties and responsibilities of radiation workers (RWs) and of the radiological safety officer (RPO). The licensee is responsible for providing the RWs with basic knowledge and understanding of radiation properties, good knowledge of the local rules, the operational radiation protection methods and the safety features of the devices, and on-the-job training under the supervision of an RPO or a qualified expert (RPE). Participation in a refresher course is required every 5 years

  2. Large tandem accelerators

    International Nuclear Information System (INIS)

    Jones, C.M.

    1976-01-01

    The increasing importance of energetic heavy ion beams in the study of atomic physics, nuclear physics, and materials science has partially or wholly motivated the construction of a new generation of tandem accelerators designed to operate at maximum terminal potentials in the range 14 to 30 MV. In addition, a number of older tandem accelerators are now being significantly upgraded to improve their heavy ion performance. Both of these developments have reemphasized the importance of negative heavy ion sources. The new large tandem accelerators are described, and the requirements placed on negative heavy ion source technology by these and other tandem accelerators used for the acceleration of heavy ions are discussed. First, a brief description is given of the large tandem accelerators which have been completed recently, are under construction, or are funded for construction; second, the motivation for construction of these accelerators is discussed; and last, criteria for negative ion sources for use with these accelerators are presented

  3. Accelerated Carbonation of Steel Slags Using CO{sub 2} Diluted Sources: CO{sub 2} Uptakes and Energy Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Baciocchi, Renato, E-mail: baciocchi@ing.uniroma2.it; Costa, Giulia [Department of Civil Engineering and Computer Science Engineering, University of Rome “Tor Vergata”, Rome (Italy); Polettini, Alessandra; Pomi, Raffaella; Stramazzo, Alessio [Department of Civil and Environmental Engineering, University of Rome “La Sapienza”, Rome (Italy); Zingaretti, Daniela [Department of Civil Engineering and Computer Science Engineering, University of Rome “Tor Vergata”, Rome (Italy)

    2016-01-18

    This work presents the results of carbonation experiments performed on Basic Oxygen Furnace (BOF) steel slag samples employing gas mixtures containing 40% and 10% CO{sub 2} by volume, simulating the gaseous effluents of gasification and combustion processes respectively, as well as 100% CO{sub 2} for comparison purposes. Two routes were tested, the slurry-phase (L/S = 5 l/kg, T = 100°C and Ptot = 10 bar) and the thin-film (L/S = 0.3–0.4 l/kg, T = 50°C and Ptot = 7–10 bar) routes. For each one, the CO{sub 2} uptake achieved as a function of the reaction time was analyzed, and on this basis the energy requirements associated with each carbonation route and gas mixture composition were estimated, considering the storage of the CO{sub 2} emissions of a medium-size natural-gas-fired power plant (20 MW). For the slurry-phase route, maximum CO{sub 2} uptakes ranged from around 8% at 10% CO{sub 2}, to 21.1% (BOF-a) and 29.2% (BOF-b) at 40% CO{sub 2}, and 32.5% (BOF-a) and 40.3% (BOF-b) at 100% CO{sub 2}. For the thin-film route, maximum uptakes of 13% (BOF-c) and 19.5% (BOF-d) at 40% CO{sub 2}, and 17.8% (BOF-c) and 20.2% (BOF-d) at 100% CO{sub 2} were attained. The energy requirements of the two analyzed process routes appeared to depend chiefly on the CO{sub 2} uptake of the slag. For both process routes, the minimum overall energy requirements were found for the tests with 40% CO{sub 2} flows (i.e., 1400−1600 MJ/t{sub CO{sub 2}} for the slurry-phase and 2220−2550 MJ/t{sub CO{sub 2}} for the thin-film route).

  4. A large air shower array to search for astrophysical sources emitting γ-rays with energies ≥10^14 eV

    International Nuclear Information System (INIS)

    Borione, A.; Covault, C.E.; Cronin, J.W.; Fick, B.E.; Gibbs, K.G.; Krimm, H.A.; Mascarenhas, N.C.; McKay, T.A.; Mueller, D.; Newport, B.J.; Ong, R.A.; Rosenberg, L.J.; Sanders, H.; Catanese, M.; Ciampa, D.; Green, K.D.; Kolodziejczak, J.; Matthews, J.; Nitz, D.; Sinclair, D.; Van der Velde, J.C.

    1994-01-01

    We describe the technical details and the performance of a large array which detects both the electron and muon components in extensive air showers with energies ≥10^14 eV. The array was designed to search for γ-rays from astrophysical sources. The background of cosmic rays is reduced by the selection of muon-poor events. The array consists of 1089 scintillation detectors on the surface covering an area of 0.23 km^2 and 1024 scintillation counters of 2.5 m^2 each, buried 3 m below the surface for muon detection. Each of the surface detectors has its own local electronics and local data storage controlled by a microprocessor. The array is located at Dugway, Utah USA (40.2° N, 112.8° W) where the average atmospheric depth is 870 g/cm^2. ((orig.))

  5. Mentha spicata L. infusions as sources of antioxidant phenolic compounds: emerging reserve lots with special harvest requirements.

    Science.gov (United States)

    Rita, Ingride; Pereira, Carla; Barros, Lillian; Santos-Buelga, Celestino; Ferreira, Isabel C F R

    2016-10-12

    Mentha spicata L., commonly known as spearmint, is widely used in both fresh and dry forms, for infusion preparation or in European and Indian cuisines. Recently, with the evolution of the tea market, several novel products with added value are emerging, and the standard lots have evolved to reserve lots, with special harvest requirements that confer enhanced organoleptic and sensory characteristics. The apical leaves of these lots are collected under specific conditions and therefore have a different chemical profile. In the present study, standard and reserve lots of M. spicata were assessed in terms of the antioxidants present in infusions prepared from the different lots. The reserve lots presented higher concentrations of all the identified compounds than the standard lots, with 326 and 188 μg mL-1 of total phenolic compounds, respectively. Both types of samples presented rosmarinic acid as the most abundant phenolic compound, at concentrations of 169 and 101 μg mL-1 for reserve and standard lots, respectively. The antioxidant activity was higher in the reserve lots, which had the highest total phenolic content, with EC50 values ranging from 152 to 336 μg mL-1. The obtained results provide scientific information that may allow the consumer to make a conscientious choice.

  6. Prevalence, source and severity of work-related injuries among "foreign" construction workers in a large Malaysian organisation: a cross-sectional study.

    Science.gov (United States)

    Zerguine, Haroun; Tamrin, Shamsul Bahri Mohd; Jalaludin, Juliana

    2018-02-02

    The Malaysian construction sector is regarded as critical in the field of health because of its high rates of accidents and fatalities. This research aimed to determine the prevalence, sources and severity of injuries and their association with commitment to safety among foreign construction workers. A cross-sectional study was conducted among 323 foreign construction workers from six construction projects of a large organization in Malaysia, using a simple random sampling method. Data were collected using a structured questionnaire to assess work-related injuries and safety commitment. The collected data were analysed with SPSS 22.0 using descriptive statistics and the chi-square test. The prevalence of work-related injuries over a one-year period was 22.6%; most injuries were of moderate severity (39.7%), and falls from heights were the main source (31.5%). The majority of the foreign construction workers perceived moderate to high safety commitment, which was significantly associated with work-related injuries. The results also showed a significant association of work-related injuries with the company's interest in safety and health, safety and health training, and safety equipment. Thus, implementing new procedures and providing relevant training and safety equipment should lead to a decrease in injury rates on construction sites.

  7. Energy requirements of the red kangaroo (Macropus rufus): impacts of age, growth and body size in a large desert-dwelling herbivore.

    Science.gov (United States)

    Munn, A J; Dawson, T J

    2003-09-01

    Generally, young growing mammals have resting metabolic rates (RMRs) that are proportionally greater than those of adult animals. This is seen in the red kangaroo ( Macropus rufus), a large (>20 kg) herbivorous marsupial common to arid and semi-arid inland Australia. Juvenile red kangaroos have RMRs 1.5-1.6 times those expected for adult marsupials of an equivalent body mass. When fed high-quality chopped lucerne hay, young-at-foot (YAF) kangaroos, which have permanently left the mother's pouch but are still sucking, and recently weaned red kangaroos had digestible energy intakes of 641+/-27 kJ kg(-0.75) day(-1) and 677+/-26 kJ kg(-0.75) day(-1), respectively, significantly higher than the 385+/-37 kJ kg(-0.75) day(-1) ingested by mature, non-lactating females. However, YAF and weaned red kangaroos had maintenance energy requirements (MERs) that were not significantly higher than those of mature, non-lactating females, the values ranging between 384 kJ kg(-0.75) day(-1) and 390 kJ kg(-0.75) day(-1) digestible energy. Importantly, the MER of mature female red kangaroos was 84% of that previously reported for similarly sized, but still growing, male red kangaroos. Growth was the main factor affecting the proportionally higher energy requirements of the juvenile red kangaroos relative to non-reproductive mature females. On a good quality diet, juvenile red kangaroos from permanent pouch exit until shortly after weaning (ca. 220-400 days) had average growth rates of 55 g body mass day(-1). At this level of growth, juveniles had total daily digestible energy requirements (i.e. MER plus growth energy requirements) that were 1.7-1.8 times the MER of mature, non-reproductive females. Our data suggest that the proportionally higher RMR of juvenile red kangaroos is largely explained by the additional energy needed for growth. Energy contents of the tissue gained by the YAF and weaned red kangaroos during growth were estimated to be 5.3 kJ g(-1), within the range found for

  8. The Aspergillus nidulans acuL gene encodes a mitochondrial carrier required for the utilization of carbon sources that are metabolized via the TCA cycle.

    Science.gov (United States)

    Flipphi, Michel; Oestreicher, Nathalie; Nicolas, Valérie; Guitton, Audrey; Vélot, Christian

    2014-07-01

    In Aspergillus nidulans, the utilization of acetate as sole carbon source requires several genes (acu). Most of them are also required for the utilization of fatty acids. This is the case for acuD and acuE, which encode the two glyoxylate cycle-specific enzymes, isocitrate lyase and malate synthase, respectively, but also for acuL that we have identified as AN7287, and characterized in this study. Deletion of acuL resulted in the same phenotype as the original acuL217 mutant. acuL encodes a 322-amino acid protein which displays all the structural features of a mitochondrial membrane carrier, and shares 60% identity with the Saccharomyces cerevisiae succinate/fumarate mitochondrial antiporter Sfc1p (also named Acr1p). Consistently, the AcuL protein was shown to localize in mitochondria, and partial cross-complementation was observed between the S. cerevisiae and A. nidulans homologues. Extensive phenotypic characterization suggested that the acuL gene is involved in the utilization of carbon sources that are catabolized via the TCA cycle, and therefore require gluconeogenesis. In addition, acuL proves to be co-regulated with acuD and acuE. Overall, our data suggest that AcuL could link the glyoxylate cycle to gluconeogenesis by exchanging cytoplasmic succinate for mitochondrial fumarate. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Invasive mussels alter the littoral food web of a large lake: stable isotopes reveal drastic shifts in sources and flow of energy.

    Directory of Open Access Journals (Sweden)

    Ted Ozersky

    Full Text Available We investigated how establishment of invasive dreissenid mussels impacted the structure and energy sources of the littoral benthic food web of a large temperate lake. We combined information about pre- and postdreissenid abundance, biomass, and secondary production of the littoral benthos with results of carbon and nitrogen stable isotope analysis of archival (predreissenid and recent (postdreissenid samples of all common benthic taxa. This approach enabled us to determine the importance of benthic and sestonic carbon to the littoral food web before, and more than a decade after dreissenid establishment. Long term dreissenid presence was associated with a 32-fold increase in abundance, 6-fold increase in biomass, and 14-fold increase in secondary production of the littoral benthos. Dreissenids comprised a large portion of the post-invasion benthos, making up 13, 38, and 56% of total abundance, biomass, and secondary production, respectively. The predreissenid food web was supported primarily by benthic primary production, while sestonic material was relatively more important to the postdreissenid food web. The absolute importance of both sestonic material and benthic primary production to the littoral benthos increased considerably following dreissenid establishment. Our results show drastic alterations to food web structure and suggest that dreissenid mussels redirect energy and material from the water column to the littoral benthos both through biodeposition of sestonic material as well as stimulation of benthic primary production.

  10. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System: Outage-Limited Scenario

    KAUST Repository

    Makki, Behrooz

    2016-03-22

    This paper investigates the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas required to satisfy different outage probability constraints. Our results are obtained for different fading conditions, and the effect of the power amplifiers' efficiency and of the feedback error probability on the performance of the MIMO-HARQ systems is analyzed. Then, we use some recent results on the achievable rates of finite block-length codes to analyze the effect of the codeword lengths on the system performance. Moreover, we derive closed-form expressions for the asymptotic performance of the MIMO-HARQ systems when the number of antennas increases. Our analytical and numerical results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 1972-2012 IEEE.
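
    For reference, the outage constraint that the antenna-number requirement above is stated against is conventionally defined as follows (a textbook formulation; the paper's exact setup, e.g. with HARQ retransmissions, may differ):

```latex
% Conventional MIMO outage probability with N_t transmit / N_r receive antennas,
% channel matrix H, target rate R and outage constraint epsilon:
\begin{equation}
  P_{\mathrm{out}}(R) \;=\; \Pr\!\left[\,
    \log_2 \det\!\left( \mathbf{I}_{N_r} + \frac{\mathrm{SNR}}{N_t}\,
    \mathbf{H}\mathbf{H}^{\mathsf{H}} \right) < R \,\right] \;\le\; \epsilon
\end{equation}
% The design question is then the smallest (N_t, N_r) for which the constraint
% holds under the assumed fading statistics of H.
```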

  11. Os, Nd and Sr isotope and trace element geochemistry of the Muli picrites: Insights into the mantle source of the Emeishan Large Igneous Province

    Science.gov (United States)

    Li, Jie; Xu, Ji-Feng; Suzuki, Katsuhiko; He, Bin; Xu, Yi-Gang; Ren, Zhong-Yuan

    2010-09-01

    A suite of picrites and basalts from the Muli area, in the northwestern part of the Emeishan continental flood basalt province, provides new and valuable information on the geochemistry of the Emeishan Large Igneous Province (LIP) and its source. The Muli picrites can be classified as type-1 or type-2. The former shows ocean-island basalt-like trace element characteristics, with γOs(260 Ma) and ɛNd(260 Ma) values ranging from +7.5 to +11.5 and from +6.0 to +7.8, respectively. This is the first time that picrites with highly radiogenic Os and high Os contents (up to 3.3 ppb) have been recognized in the Emeishan LIP. These characteristics probably reflect a relatively enriched component in the Emeishan LIP source. The type-2 picrites are characterized by non-radiogenic γOs(260 Ma) values ranging from -4.2 to -0.3, and they may be further subdivided into type-2A and type-2B picrites. Type-2A picrites contain moderate amounts of the light rare earth elements (LREEs), have low CeN/YbN values (1.1-2.0), and a relatively high initial ɛNd (+5.0 to +6.6). In terms of Os and Nd isotopes, the Muli type-2A picrites are similar to the Song Da komatiites of Vietnam and the Gorgona Island picrites, revealing the existence of a depleted mantle component in the Emeishan LIP source. In contrast with the type-2A picrites, type-2B lavas exhibit a negative Nb anomaly and relatively lower initial ɛNd and γOs values (Nb/La > 1.8; ɛNd(260 Ma) = -5.5 to +6.4; γOs(260 Ma) = -4.2 to -1.9), suggesting that the type-2B lavas have a depleted mantle source, similar to type-2A, but that the type-2B lavas are also influenced by various degrees of mixing of depleted plume-derived melt, sub-continental lithospheric mantle, and/or continental crust. Given that the basalts in the Muli area show similar geochemical features to those of the type-2B picrites, their origins are inferred to be similar.

  12. Occurrence, spatial distribution, sources, and risks of polychlorinated biphenyls and heavy metals in surface sediments from a large eutrophic Chinese lake (Lake Chaohu).

    Science.gov (United States)

    He, Wei; Bai, Ze-Lin; Liu, Wen-Xiu; Kong, Xiang-Zhen; Yang, Bin; Yang, Chen; Jørgensen, Sven Erik; Xu, Fu-Liu

    2016-06-01

    Surface sediment from large and eutrophic Lake Chaohu was investigated to determine the occurrence, spatial distribution, sources, and risks of polychlorinated biphenyls (PCBs) and heavy metals in one of the five biggest freshwater lakes in China. Total concentration of PCBs (Σ34PCBs) in Lake Chaohu was 672 pg g(-1) dry weight (dw), with a range of 7 to 3999 pg g(-1) dw, which was lower than other water bodies worldwide. The majority of heavy metals were detected at all sampling locations, except for Sr, B, and In. Concentrations of Al, Fe, Ca, Mn, Sr, Co, Zn, Cd, Pb, and Hg were similar to that reported for other lakes globally. Concentrations of K, Mg, Na, Li, Ga, and Ag were greater than the average, whereas those of Cr, Ni, and Cu were lower. Cluster analysis (CA) and positive matrix factorization (PMF) yielded accordant results for the source apportionment of PCBs. The technical PCBs and microbial degradation accounted for 34.2 % and 65.8 % of total PCBs using PMF, and PMF revealed that natural and anthropogenic sources of heavy metals accounted for 38.1 % and 61.8 %, respectively. CA indicated that some toxic heavy metals (e.g., Cd, In, Tl, and Hg) were associated with Ca-Na-Mg minerals rather than Fe-Mn minerals. The uncorrelated results between organic matter revealed by pyrolysis technology and heavy metals might be caused by the existence of competitive adsorption between organic matter and minerals. PCBs and heavy metals were coupling discharge without organochlorine pesticides (OCPs), but with polycyclic aromatic hydrocarbons (PAHs) and polybrominated diphenyl ethers (PBDEs). No sediment sample exceeded the toxic threshold for dioxin-like PCBs (dl-PCBs) set at 20 pg toxicity equivalency quantity (TEQ) g(-1), (max dl-PCBs, 10.9 pg TEQ g(-1)). However, concentrations of Ag, Cd, and Hg were at levels of environmental concern. The sediment in the drinking water source area (DWSA) was threatened by heavy metals from other areas, and some
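
    As an illustration of the receptor-modelling step mentioned above, the sketch below uses scikit-learn's non-negative matrix factorization as a simplified stand-in for PMF; true PMF (e.g. EPA PMF) additionally weights the residuals by per-measurement uncertainties, and the file and column names here are hypothetical:

```python
# Simplified stand-in for PMF-style source apportionment using non-negative
# matrix factorization. True PMF also weights residuals by measurement
# uncertainties; the input file and its columns are hypothetical.
import pandas as pd
from sklearn.decomposition import NMF

X = pd.read_csv("sediment_congeners.csv", index_col=0)   # rows: sites, columns: PCB congeners

nmf = NMF(n_components=2, init="nndsvda", max_iter=2000, random_state=0)
G = nmf.fit_transform(X.values)     # source contributions per site
F = nmf.components_                 # source (factor) profiles over congeners

# Fractional contribution of each factor to the total reconstructed signal
total = (G @ F).sum()
for k in range(F.shape[0]):
    share = (G[:, [k]] @ F[[k], :]).sum() / total
    print(f"factor {k}: {share:.1%} of total PCBs")
```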

  13. Connectivity of a large embayment and coastal fishery: spawning aggregations in one bay source local and broad-scale fishery replenishment.

    Science.gov (United States)

    Hamer, P A; Acevedo, S; Jenkins, G P; Newman, A

    2011-04-01

    Ichthyoplankton sampling and otolith chemistry were used to determine the importance of transient spawning aggregations of snapper Chrysophrys auratus (Sparidae) in a large embayment, Port Phillip Bay (PPB), Australia, as a source of local and broad-scale fishery replenishment. Ichthyoplankton sampling across five spawning seasons within PPB, across the narrow entrance to the bay and in adjacent coastal waters, indicated that although spawning may occur in coastal waters, the spawning aggregations within the bay were the primary source of larval recruitment to the bay. Otolith chemical signatures previously characterized for 0+ year C. auratus of two cohorts (2000 and 2001) were used as the baseline signatures to quantify the contribution that fish derived from reproduction in PPB make to fishery replenishment. Sampling of these cohorts over a 5 year period at various widely dispersed fishery regions, combined with maximum likelihood analyses of the chemistry of the 0+ year otolith portions of these older fish, indicated that C. auratus of 1 to 3+ years of age displayed both local residency and broad-scale emigration from PPB to populate coastal waters and an adjacent bay (Western Port). While the PPB fishery was consistently dominated (>70%) by locally derived fish irrespective of cohort or age, the contribution of fish that had originated from PPB to distant populations increased with age. At 4 to 5+ years of age, when C. auratus mature and fully recruit to the fishery, populations of both cohorts across the entire central and western Victorian fishery, including two major embayments and c. 800 km of coastal waters, were dominated (>70%) by fish that had originated from the spawning aggregations and nursery habitat within PPB. Dependence of this broadly dispersed fishery on replenishment from heavily targeted spawning aggregations within one embayment has significant implications for management and monitoring programmes. © 2011 The Authors. Journal of Fish

  14. A new nonlinear blind source separation method with chaos indicators for decoupling diagnosis of hybrid failures: A marine propulsion gearbox case with a large speed variation

    International Nuclear Information System (INIS)

    Li, Zhixiong; Peng, Z

    2016-01-01

    The normal operation of propulsion gearboxes ensures ship safety. Chaos indicators can efficiently indicate a state change of the gearboxes. However, accurate detection of gearbox hybrid faults using Chaos indicators is a challenging task, and detection under speed variation conditions is attracting considerable attention. A literature review suggests that gearbox vibration is a nonlinear mixture of variant vibration sources and that blind source separation (BSS) is a promising technique for fault vibration analysis, but very limited work has addressed nonlinear BSS approaches for hybrid fault decoupling diagnosis. Aiming to enhance the fault detection performance of Chaos indicators, this work presents a new nonlinear BSS algorithm for gearbox hybrid fault detection under a speed variation condition. The new method introduces the kernel spectral regression (KSR) framework into morphological component analysis (MCA). The original vibration data are projected into the reproducing kernel Hilbert space (RKHS), where the intrinsic nonlinear structure in the original data can be linearized by KSR. Thus the MCA is able to deal with nonlinear BSS in the KSR space. Reliable hybrid fault decoupling is then achieved by this new nonlinear MCA (NMCA). Subsequently, by calculating the Chaos indicators of the decoupled fault components and comparing them with benchmarks, the hybrid faults can be precisely identified. Two specially designed case studies were implemented to evaluate the proposed NMCA-Chaos method for hybrid gear fault decoupling diagnosis. The performance of the NMCA-Chaos method was compared with state-of-the-art techniques. The analysis results show high performance of the proposed method for hybrid fault detection in a marine propulsion gearbox with large speed variations.
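
    The abstract does not specify which chaos indicators are computed on the decoupled components; the sketch below shows one indicator in common use, the Grassberger-Procaccia correlation sum and the correlation dimension estimated from it, applied to a single (already separated) vibration component. It illustrates the indicator step only, not the NMCA decoupling itself:

```python
# One common chaos indicator: the Grassberger-Procaccia correlation sum computed
# on a time-delay embedding of a (separated) vibration component. A shift in the
# estimated correlation dimension between runs can flag a gearbox state change.
import numpy as np

def embed(x, dim=5, tau=4):
    """Time-delay embedding of a 1-D signal into dim-dimensional vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def correlation_sums(x, radii, dim=5, tau=4):
    """Fraction of embedded point pairs closer than each radius r."""
    Y = embed(np.asarray(x, dtype=float), dim, tau)
    d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    pair_d = d[np.triu_indices(len(Y), k=1)]
    return np.array([np.mean(pair_d < r) for r in radii])

# Toy signal standing in for a decoupled fault component
signal = np.sin(np.linspace(0, 60, 1000)) + 0.05 * np.random.randn(1000)
radii = np.logspace(-1.5, 0, 8)
C = correlation_sums(signal, radii)

# Correlation dimension ~ slope of log C(r) versus log r
slope = np.polyfit(np.log(radii), np.log(np.maximum(C, 1e-12)), 1)[0]
print("estimated correlation dimension:", slope)
```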

  15. Kinota: An Open-Source NoSQL implementation of OGC SensorThings for large-scale high-resolution real-time environmental monitoring

    Science.gov (United States)

    Miles, B.; Chepudira, K.; LaBar, W.

    2017-12-01

    The Open Geospatial Consortium (OGC) SensorThings API (STA) specification, ratified in 2016, is a next-generation open standard for enabling real-time communication of sensor data. Building on over a decade of OGC Sensor Web Enablement (SWE) Standards, STA offers a rich data model that can represent a range of sensor and phenomena types (e.g. fixed sensors sensing fixed phenomena, fixed sensors sensing moving phenomena, mobile sensors sensing fixed phenomena, and mobile sensors sensing moving phenomena) and is data agnostic. Additionally, and in contrast to previous SWE standards, STA is developer-friendly, as is evident from its convenient JSON serialization and expressive OData-based query language (with support for geospatial queries); with its Message Queue Telemetry Transport (MQTT) support, STA is also well suited to efficient real-time data publishing and discovery. All these attributes make STA potentially useful for environmental monitoring sensor networks. Here we present Kinota(TM), an Open-Source NoSQL implementation of OGC SensorThings for large-scale high-resolution real-time environmental monitoring. Kinota, which roughly stands for Knowledge from Internet of Things Analyses, relies on Cassandra as its underlying data store, a horizontally scalable, fault-tolerant open-source database that is often used to store time-series data for Big Data applications (though integration with other NoSQL or relational databases is possible). With this foundation, Kinota can scale to store data from an arbitrary number of sensors collecting data every 500 milliseconds. Additionally, the Kinota architecture is very modular, allowing adopters to replace parts of the existing implementation when desirable. The architecture is also highly portable, providing the flexibility to choose between cloud providers like Azure, Amazon, Google, etc. The scalable, flexible and cloud-friendly architecture of Kinota makes it ideal for use in next
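
    To make the interaction pattern concrete, the sketch below shows how a client might push an Observation over REST and stream new Observations over MQTT against an STA-compliant service such as Kinota, following the entity model and MQTT topic layout of the SensorThings specification; the host, port and Datastream id are placeholders, not Kinota defaults:

```python
# Sketch of a SensorThings API client: POST one Observation, then subscribe to
# new Observations over MQTT. Host, port and Datastream id are placeholders.
import json
import requests
import paho.mqtt.client as mqtt

BASE = "http://sta.example.org/v1.0"          # hypothetical service root

# Publish one observation into an existing Datastream via REST
obs = {
    "phenomenonTime": "2017-07-01T12:00:00Z",
    "result": 21.4,                            # e.g. water temperature in deg C
    "Datastream": {"@iot.id": 1},
}
requests.post(f"{BASE}/Observations", json=obs, timeout=10).raise_for_status()

# Receive new observations in real time over MQTT
def on_message(client, userdata, msg):
    print("new observation:", json.loads(msg.payload))

client = mqtt.Client()
client.on_message = on_message
client.connect("sta.example.org", 1883)
client.subscribe("v1.0/Datastreams(1)/Observations")
client.loop_forever()
```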

  16. A large point-source outbreak of Salmonella Typhimurium linked to chicken, pork and salad rolls from a Vietnamese bakery in Sydney

    Directory of Open Access Journals (Sweden)

    Beth Cullen

    2012-06-01

    Full Text Available Introduction: In January 2011, Sydney South West Public Health Unit was notified of a large number of people presenting with gastroenteritis over two days at a local hospital emergency department (ED). Methods: Case-finding was conducted through hospital EDs and general practitioners, which resulted in the notification of 154 possible cases, from which 83 outbreak cases were identified. Fifty-eight cases were interviewed about demographics, symptom profile and food histories. Stool samples were collected and submitted for analysis. An inspection was conducted at a Vietnamese bakery and food samples were collected and submitted for analysis. Further case ascertainment occurred to ensure control measures were successful. Results: Of the 58 interviewed cases, the symptom profile included diarrhoea (100%), fever (79.3%) and vomiting (89.7%). Salmonella Typhimurium multiple-locus variable-number tandem repeat analysis (MLVA) type 3-10-8-9-523 was identified in 95.9% (47/49) of stool samples. Cases reported consuming chicken, pork or salad rolls from a single Vietnamese bakery. Environmental swabs detected widespread contamination with Salmonella at the premises. Discussion: This was a large point-source outbreak associated with the consumption of Vietnamese-style pork, chicken and salad rolls. These foods have been responsible for significant outbreaks in the past. The typical ingredients of raw egg butter or mayonnaise and pate are often implicated, as are the food-handling practices in food outlets. This indicates the need for education in better food-handling practices, including the benefits of using safer products. Ongoing surveillance will monitor the success of new food regulations introduced in New South Wales during 2011 for improving food-handling practices and reducing foodborne illness.

  17. Large-eddy simulation of pollutant dispersion from a ground-level area source over urban street canyons with irreversible chemical reactions

    Science.gov (United States)

    Du, T. Z.; Liu, C.-H.; Zhao, Y. B.

    2014-10-01

    In this study, the dispersion of chemically reactive pollutants is calculated by large-eddy simulation (LES) in a neutrally stratified urban canopy layer (UCL) over urban areas. As a pilot attempt, idealized street canyons of unity building-height-to-street-width (aspect) ratio are used. Nitric oxide (NO) is emitted from the ground surface of the first street canyon into a domain doped with ozone (O3). In the absence of ultraviolet radiation, this irreversible chemistry produces nitrogen dioxide (NO2), developing a reactive plume over the rough urban surface. A range of timescales of turbulence and chemistry is used to examine the mechanism of turbulent mixing and chemical reactions in the UCL. The Damköhler number (Da) and the reaction rate (r) are analyzed along the vertical direction on a plane normal to the prevailing flow, 10 m downstream of the source. The reaction rate peaks at an elevation where the Damköhler number is equal or close to unity. Hence, comparable timescales of turbulence and reaction could enhance the chemical reactions in the plume.
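
    For reference, the quantities discussed above can be written compactly as follows (the study's exact operational definitions of the timescales may differ):

```latex
% Damkohler number and reaction rate for the irreversible NO + O3 -> NO2 + O2 step:
\begin{align}
  \mathrm{Da} &= \frac{\tau_{\mathrm{turb}}}{\tau_{\mathrm{chem}}},
  &
  r &= k\,[\mathrm{NO}]\,[\mathrm{O_3}],
\end{align}
% where tau_turb is a turbulent mixing timescale and tau_chem a chemical timescale
% (e.g. 1/(k[O3])). For Da << 1 mixing is fast relative to the chemistry (well-mixed
% limit); for Da >> 1 the chemistry outpaces mixing and reactant segregation limits r.
% The LES result above corresponds to the peak of r near the height where Da ~ 1.
```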

  18. Participant profiles according to recruitment source in a large Web-based prospective study: experience from the Nutrinet-Santé study.

    Science.gov (United States)

    Kesse-Guyot, Emmanuelle; Andreeva, Valentina; Castetbon, Katia; Vernay, Michel; Touvier, Mathilde; Méjean, Caroline; Julia, Chantal; Galan, Pilar; Hercberg, Serge

    2013-09-13

    Interest in Internet-based epidemiologic research is growing given the logistic and cost advantages. Cohort recruitment to maximally diversify the sociodemographic profiles of participants, however, remains a contentious issue. The aim of the study was to characterize the sociodemographic profiles according to the recruitment mode of adult volunteers enrolled in a Web-based cohort. The French NutriNet-Santé Web-based cohort was launched in 2009. Recruitment is ongoing and largely relies on recurrent multimedia campaigns. One month after enrollment, participants are asked how they learned about the study (e.g., general newscast or a health program on television, radio newscast, newspaper articles, Internet, personal advice, leaflet/flyers). The sociodemographic profiles of participants recruited through operative communication channels (radio, print media, Internet, advice) were compared with the profiles of those informed through television by using polytomous logistic regression. Among the 88,238 participants enrolled through the end of 2011, 30,401 (34.45%), 16,751 (18.98%), and 14,309 (16.22%) learned about the study from television, Internet, and radio newscasts, respectively. Sociodemographic profiles were varied, with 14,541 (16.5%) aged ≥60 years, 20,166 (22.9%) in younger age groups, and household incomes ranging up to more than €3700/month. Compared to employed individuals, unemployed and retired participants were less likely to be informed about the study through sources other than television (adjusted ORs 0.56-0.83). These findings can inform Web-based studies regarding the development of promising targeted or general population recruitment strategies.

  19. A novel chromosome region maintenance 1-independent nuclear export signal of the large form of hepatitis delta antigen that is required for the viral assembly.

    Science.gov (United States)

    Lee, C H; Chang, S C; Wu, C H; Chang, M F

    2001-03-16

    Hepatitis delta virus (HDV) is a satellite virus of hepatitis B virus, as it requires hepatitis B virus for virion production and transmission. We have previously demonstrated that sequences within the C-terminal 19-amino acid domain flanking the isoprenylation motif of the large hepatitis delta antigen (HDAg-L) are important for virion assembly. In this study, site-directed mutagenesis and immunofluorescence staining demonstrated that in the absence of hepatitis B virus surface antigen (HBsAg), the wild-type HDAg-L was localized in the nuclei of transfected COS7 cells. Nevertheless, in the presence of HBsAg, the HDAg-L became both nuclei- and cytoplasm-distributed in about half of the cells. An HDAg-L mutant with a substitution of Pro-205 to alanine could neither form HDV-like particles nor shift the subcellular localization in the presence of HBsAg. In addition, nuclear trafficking of HDAg-L in heterokaryons indicated that HDAg-L is a nucleocytoplasmic shuttling protein. A proline-rich HDAg peptide spanning amino acid residues 198 to 210, designated NES(HDAg-L), can function as a nuclear export signal (NES) in Xenopus oocytes. Pro-205 is critical for the NES function. Furthermore, assembly of HDV is insensitive to leptomycin B, indicating that the NES(HDAg-L) directs nuclear export of HDAg-L to the cytoplasm via a chromosome region maintenance 1-independent pathway.

  20. The very large G-protein-coupled receptor VLGR1: a component of the ankle link complex required for the normal development of auditory hair bundles.

    Science.gov (United States)

    McGee, Joann; Goodyear, Richard J; McMillan, D Randy; Stauffer, Eric A; Holt, Jeffrey R; Locke, Kirsten G; Birch, David G; Legan, P Kevin; White, Perrin C; Walsh, Edward J; Richardson, Guy P

    2006-06-14

    Sensory hair bundles in the inner ear are composed of stereocilia that can be interconnected by a variety of different link types, including tip links, horizontal top connectors, shaft connectors, and ankle links. The ankle link antigen is an epitope specifically associated with ankle links and the calycal processes of photoreceptors in chicks. Mass spectrometry and immunoblotting were used to identify this antigen as the avian ortholog of the very large G-protein-coupled receptor VLGR1, the product of the Usher syndrome USH2C (Mass1) locus. Like ankle links, Vlgr1 is expressed transiently around the base of developing hair bundles in mice. Ankle links fail to form in the cochleae of mice carrying a targeted mutation in Vlgr1 (Vlgr1/del7TM), and the bundles become disorganized just after birth. FM1-43 [N-(3-(triethylammonium)propyl)-4-(4-(dibutylamino)styryl)pyridinium dibromide] dye loading and whole-cell recordings indicate mechanotransduction is impaired in cochlear, but not vestibular, hair cells of early postnatal Vlgr1/del7TM mutant mice. Auditory brainstem recordings and distortion product measurements indicate that these mice are severely deaf by the third week of life. Hair cells from the basal half of the cochlea are lost in 2-month-old Vlgr1/del7TM mice, and retinal function is mildly abnormal in aged mutants. Our results indicate that Vlgr1 is required for formation of the ankle link complex and the normal development of cochlear hair bundles.

  1. Chemical Characterization and Source Apportionment of Size Fractionated Atmospheric Aerosols, and, Evaluating Student Attitudes and Learning in Large Lecture General Chemistry Classes

    Science.gov (United States)

    Allen, Gregory Harold

    Chemical speciation of size-fractionated atmospheric aerosols was investigated using laser desorption time-of-flight mass spectrometry (LD TOF-MS), and source apportionment was carried out using carbon-14 accelerator mass spectrometry (14C AMS). Sample collection was carried out using the Davis Rotating-drum Unit for Monitoring impact analyzer in Davis, Colfax, and Yosemite, CA. Ambient atmospheric aerosols collected during the winters of 2010/11 and 2011/12 showed a significant difference in the types of compounds found in the small and large particle size ranges. The difference was due to the increased number of oxidized carbon species that were found in the small particle size ranges, but not in the large particle size ranges. Overall, the ambient atmospheric aerosols collected during the winter in Davis, CA had an average fraction modern of F14C = 0.753 +/- 0.006, indicating that the majority of the size-fractionated particles originated from biogenic sources. Samples collected during the King Fire in Colfax, CA were used to determine the contribution of biomass burning (wildfire) aerosols. Factor analysis was used to reduce the ion signals found in the LD TOF-MS analysis of the King Fire samples. The final factor analysis generated a total of four factors that explained an overall 83% of the variance in the data set. Two of the factors correlated heavily with increased smoke events during the sample period. The increased smoke events produced a large number of highly oxidized organic aerosols (OOA2) and aromatic compounds that are indicative of biomass burning organic aerosols (WBOA). The signal intensities of the factors generated from the King Fire data were investigated in samples collected in Yosemite and Davis, CA to examine the impact of biomass burning on ambient atmospheric aerosols. In both comparison sample collections, the OOA2 and WBOA factors increased during biomass burning events located near the sampling sites. The correlation
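
    The 14C-based apportionment quoted above rests on a simple two-endmember mass balance; the arithmetic is sketched below, with the contemporary-carbon endmember (taken here as roughly 1.04) an assumed value for illustration rather than one quoted in the dissertation:

```python
# Two-endmember 14C mass balance behind the fraction-modern result quoted above.
f14c_sample = 0.753        # reported average fraction modern of the aerosol carbon
f14c_fossil = 0.0          # fossil carbon contains no 14C
f14c_contemporary = 1.04   # assumed endmember for recent biogenic/biomass carbon

frac_contemporary = (f14c_sample - f14c_fossil) / (f14c_contemporary - f14c_fossil)
print(f"contemporary (biogenic + biomass-burning) carbon: {frac_contemporary:.0%}")
print(f"fossil carbon: {1 - frac_contemporary:.0%}")
```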

  2. The flexibility requirements for power plants with CCS in a future energy system with a large share of intermittent renewable energy sources

    NARCIS (Netherlands)

    Brouwer, A. S.; van den Broek, M.; Seebregts, A.; Faaij, A. P. C.

    2013-01-01

    This paper investigates flexibility issues of future low-carbon power systems. The short-term power system impacts of intermittent renewables are identified and roughly quantified based on a review of wind integration studies. Next, the flexibility parameters of three types of power plants with CO2

  3. Biomarker and carbon isotope constraints (δ13C, Δ14C) on sources and cycling of particulate organic matter discharged by large Siberian rivers draining permafrost areas

    International Nuclear Information System (INIS)

    Winterfeld, Maria

    2014-08-01

    Circumpolar permafrost soils store about half of the global soil organic carbon pool. These huge amounts of organic matter (OM) could accumulate due to low temperatures and water-saturated soil conditions over the course of millennia. Currently most of this OM remains frozen and therefore does not take part in the active carbon cycle, making permafrost soils a globally important carbon sink. Over the last decades, mean annual air temperatures in the Arctic have increased more strongly than the global mean, and this trend is projected to continue. As a result the permafrost carbon pool is under climate pressure, possibly creating a positive climate feedback due to the thaw-induced release of greenhouse gases to the atmosphere. Arctic warming will lead to increased annual permafrost thaw depths and Arctic river runoff, likely resulting in enhanced mobilization and export of old, previously frozen soil-derived OM. Consequently, the great arctic rivers play an important role in global biogeochemical cycles by connecting the large permafrost carbon pool of their hinterlands with the arctic shelf seas and the Arctic Ocean. The first part of this thesis deals with particulate organic matter (POM) from the Lena Delta and adjacent Buor Khaya Bay. The Lena River in central Siberia is one of the major pathways translocating terrestrial OM from its southernmost reaches near Lake Baikal to the coastal zone of the Laptev Sea. The permafrost soils of the Lena catchment area store huge amounts of pre-aged OM, which is expected to be remobilized due to climate warming. To characterize the composition and vegetation sources of OM discharged by the Lena River, the lignin phenol and carbon isotopic composition (δ13C and Δ14C) in total suspended matter (TSM) from surface waters, in surface sediments from the Buor Khaya Bay, and in soils from the Lena Delta's first (Holocene) and third (Pleistocene ice complex) terraces were analyzed. The lignin compositions of these samples are

  4. EXCESS RF POWER REQUIRED FOR RF CONTROL OF THE SPALLATION NEUTRON SOURCE (SNS) LINAC, A PULSED HIGH-INTENSITY SUPERCONDUCTING PROTON ACCELERATOR

    International Nuclear Information System (INIS)

    Lynch, M.; Kwon, S.

    2001-01-01

    A high-intensity proton linac, such as that being planned for the SNS, requires accurate RF control of cavity fields for the entire pulse in order to avoid beam spill. The current design requirement for the SNS is RF field stability within ±0.5% and ±0.5° [1]. This RF control capability is achieved by the control electronics using the excess RF power to correct disturbances. To minimize the initial capital costs, the RF system is designed with 'just enough' RF power. All the usual disturbances exist, such as beam noise, klystron/HVPS noise, coupler imperfections, transport losses, turn-on and turn-off transients, etc. Because the linac is superconducting, there are added disturbances of large magnitude, including Lorentz detuning and microphonics. The effects of these disturbances and the power required to correct them are estimated, and the result shows that the highest-power systems in the SNS have just enough margin, with little or no excess margin

  5. Positron sources

    International Nuclear Information System (INIS)

    Chehab, R.

    1994-01-01

    A tentative survey of positron sources is given. Physical processes on which positron generation is based are indicated and analyzed. Explanation of the general features of electromagnetic interactions and nuclear β + decay makes it possible to predict the yield and emittance for a given optical matching system between the positron source and the accelerator. Some kinds of matching systems commonly used - mainly working with solenoidal field - are studied and the acceptance volume calculated. Such knowledge is helpful in comparing different matching systems. Since for large machines, a significant distance exists between the positron source and the experimental facility, positron emittance has to be preserved during beam transfer over large distances and methods used for that purpose are indicated. Comparison of existing positron sources leads to extrapolation to sources for future linear colliders. Some new ideas associated with these sources are also presented. (orig.)

  6. Predictive Big Data Analytics: A Study of Parkinson’s Disease Using Large, Complex, Heterogeneous, Incongruent, Multi-Source and Incomplete Observations

    Science.gov (United States)

    Dinov, Ivo D.; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W.; Price, Nathan D.; Van Horn, John D.; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M.; Dauer, William; Toga, Arthur W.

    2016-01-01

    Background A unique archive of Big Data on Parkinson’s Disease is collected, managed and disseminated by the Parkinson’s Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson’s disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data–large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources–all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Methods and Findings Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. We evaluated several
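
    As a generic illustration of the rebalance-then-classify workflow described above (not the authors' actual PPMI protocol), a minimal scikit-learn sketch with hypothetical feature and label columns might look like this:

```python
# Generic rebalance-then-classify sketch (not the authors' PPMI protocol).
# Feature/label column names are hypothetical. Note that in a rigorous analysis
# the rebalancing should be done inside each training fold to avoid leakage.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.utils import resample

df = pd.read_csv("ppmi_features.csv")   # harmonized imaging/clinical/demographic features
cases, controls = df[df["label"] == 1], df[df["label"] == 0]

# Rebalance the cohorts by up-sampling the minority class
minority, majority = (cases, controls) if len(cases) < len(controls) else (controls, cases)
balanced = pd.concat([majority,
                      resample(minority, replace=True, n_samples=len(majority), random_state=0)])

X, y = balanced.drop(columns="label"), balanced["label"]
clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0))
print("cross-validated accuracy:", scores.mean())
```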

  7. Sistemas Integrados de energías con fuentes renovables, requisitos y opciones. Integrated systems of energy with renewable sources, requirements and options

    Directory of Open Access Journals (Sweden)

    Antonio Sarmiento Sera

    2015-04-01

    Full Text Available This report considers an installation connected to the electric grid on the island of Cuba, where there was interest in supplying a given percentage of its energy from renewable sources and where a certain wind and photovoltaic potential was available. The energy requirements and options were analyzed, simulations of the alternatives were carried out with the HOMER program, and the study concluded by determining the renewable-source conditions or potentials under which each energy option is recommended, with the results presented graphically in an easily understandable form. The analysis addresses the real possibilities that, at the level of a locality, can be exploited to diversify the community energy scheme in a sustainable way through the use of renewable energy sources, adopting the variant that is most convenient from the economic and environmental points of view.

  8. Positron sources

    International Nuclear Information System (INIS)

    Chehab, R.

    1989-01-01

    A tentative survey of positron sources is given. Physical processes on which positron generation is based are indicated and analyzed. Explanation of the general features of electromagnetic interactions and nuclear β + decay makes it possible to predict the yield and emittance for a given optical matching system between the positron source and the accelerator. Some kinds of matching systems commonly used - mainly working with solenoidal fields - are studied and the acceptance volume calculated. Such knowledge is helpful in comparing different matching systems. Since for large machines, a significant distance exists between the positron source and the experimental facility, positron emittance has to be preserved during beam transfer over large distances and methods used for that purpose are indicated. Comparison of existing positron sources leads to extrapolation to sources for future linear colliders

  9. Separation and capture of CO2 from large stationary sources and sequestration in geological formations--coalbeds and deep saline aquifers.

    Science.gov (United States)

    White, Curt M; Strazisar, Brian R; Granite, Evan J; Hoffman, James S; Pennline, Henry W

    2003-06-01

    The topic of global warming as a result of increased atmospheric CO2 concentration is arguably the most important environmental issue that the world faces today. It is a global problem that will need to be solved on a global level. The link between anthropogenic emissions of CO2 with increased atmospheric CO2 levels and, in turn, with increased global temperatures has been well established and accepted by the world. International organizations such as the United Nations Framework Convention on Climate Change (UNFCCC) and the Intergovernmental Panel on Climate Change (IPCC) have been formed to address this issue. Three options are being explored to stabilize atmospheric levels of greenhouse gases (GHGs) and global temperatures without severely and negatively impacting standard of living: (1) increasing energy efficiency, (2) switching to less carbon-intensive sources of energy, and (3) carbon sequestration. To be successful, all three options must be used in concert. The third option is the subject of this review. Specifically, this review will cover the capture and geologic sequestration of CO2 generated from large point sources, namely fossil-fuel-fired power gasification plants. Sequestration of CO2 in geological formations is necessary to meet the President's Global Climate Change Initiative target of an 18% reduction in GHG intensity by 2012. Further, the best strategy to stabilize the atmospheric concentration of CO2 results from a multifaceted approach where sequestration of CO2 into geological formations is combined with increased efficiency in electric power generation and utilization, increased conservation, increased use of lower carbon-intensity fuels, and increased use of nuclear energy and renewables. This review covers the separation and capture of CO2 from both flue gas and fuel gas using wet scrubbing technologies, dry regenerable sorbents, membranes, cryogenics, pressure and temperature swing adsorption, and other advanced concepts. Existing

  10. Dissolved organic matter fluorescence at wavelength 275/342 nm as a key indicator for detection of point-source contamination in a large Chinese drinking water lake.

    Science.gov (United States)

    Zhou, Yongqiang; Jeppesen, Erik; Zhang, Yunlin; Shi, Kun; Liu, Xiaohan; Zhu, Guangwei

    2016-02-01

    Surface drinking water sources have been threatened globally, and there have been few attempts to detect point-source contamination in these waters using chromophoric dissolved organic matter (CDOM) fluorescence. To determine the optimal wavelength derived from CDOM fluorescence as an indicator of point-source contamination in drinking waters, a combination of field campaigns in Lake Qiandao and a laboratory wastewater addition experiment was used. Parallel factor (PARAFAC) analysis identified six components, including three humic-like, two tryptophan-like, and one tyrosine-like component. All metrics showed strong correlation with wastewater addition (r(2) > 0.90), and CDOM fluorescence at 275/342 nm was the most responsive wavelength to the point-source contamination in the lake. Our results suggest that pollutants in Lake Qiandao had the highest concentrations in the river mouths of upstream inflow tributaries and that the single wavelength at 275/342 nm may be adapted for online or in situ fluorescence measurements as an early warning of contamination events. This study demonstrates the potential utility of CDOM fluorescence to monitor water quality in surface drinking water sources. Copyright © 2015 Elsevier Ltd. All rights reserved.
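
    For readers wanting to reproduce the decomposition step, a minimal PARAFAC sketch on a stack of excitation-emission matrices is shown below, using tensorly's non-negative PARAFAC as a stand-in for the dedicated EEM toolchains typically used in this field; the input file and its shape are hypothetical:

```python
# PARAFAC decomposition of an excitation-emission matrix (EEM) stack with tensorly.
# The input array is hypothetical; rank=6 matches the number of components above.
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

eems = np.load("eem_stack.npy")       # shape: (samples, excitation, emission)
weights, factors = non_negative_parafac(tl.tensor(eems), rank=6,
                                        n_iter_max=500, init="random")
scores, excitation_loadings, emission_loadings = factors

# Per-sample intensity of each fluorescent component (e.g. a tryptophan-like peak)
print("component scores for the first sample:", scores[0])
```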

  11. Si/SiC-based DD hetero-structure IMPATTs as MM-wave power-source: a generalized large-signal analysis

    International Nuclear Information System (INIS)

    Mukherjee, Moumita; Tripathy, P. R.; Pati, S. P.

    2015-01-01

    A full-scale, self-consistent, non-linear, large-signal model of a double-drift hetero-structure IMPATT diode with a general doping profile is derived. This newly developed model has, for the first time, been used to analyze the large-signal characteristics of a hexagonal SiC-based double-drift IMPATT diode. Considering fabrication feasibility, the authors have studied the large-signal characteristics of Si/SiC-based hetero-structure devices. Under small-voltage modulation (∼2%, i.e. small-signal conditions) the results are in good agreement with calculations done using a linearised small-signal model. The large-signal values of the diode's negative conductance (5 × 10^6 S/m^2), susceptance (10.4 × 10^7 S/m^2), average breakdown voltage (207.6 V), and power generating efficiency (15%, RF power: 25.0 W at 94 GHz) are obtained as a function of oscillation amplitude (50% of the DC breakdown voltage) for a fixed average current density. The large-signal calculations exhibit power and efficiency saturation for large-signal (> 50%) voltage modulation, and thereafter a gradual decrease with further increasing voltage modulation. This generalized large-signal formulation is applicable to all types of IMPATT structures with distributed and narrow avalanche zones. The simulator is made more realistic by incorporating space-charge effects and realistic field- and temperature-dependent material parameters in Si and SiC. The electric field snapshots and the large-signal impedance and admittance of the diode with current excitation are expressed in closed form. This study will act as a guide for researchers fabricating high-power Si/SiC-based IMPATTs for possible application in high-power MM-wave communication systems. (paper)
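
    As a rough consistency check (not taken from the paper), the quoted conductance, breakdown voltage, modulation depth and output power can be linked through the standard relation P_RF ≈ 0.5|G|V_RF^2, from which a plausible junction area follows:

```python
# Back-of-envelope check linking the quoted large-signal quantities via
# P_RF ~ 0.5 * |G| * V_RF^2, with |G| the negative conductance per unit area.
import math

g_neg = 5.0e6          # |negative conductance|, S/m^2 (reported)
v_breakdown = 207.6    # average breakdown voltage, V (reported)
modulation = 0.5       # RF voltage amplitude as a fraction of V_breakdown (reported)
p_target = 25.0        # reported RF power at 94 GHz, W

v_rf = modulation * v_breakdown
p_per_area = 0.5 * g_neg * v_rf**2          # W per m^2 of junction area
area = p_target / p_per_area                # implied junction area, m^2
diameter_um = 2.0 * math.sqrt(area / math.pi) * 1e6
print(f"implied junction area: {area:.2e} m^2 (diameter ~ {diameter_um:.0f} um)")
```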

  12. Optical CDMA components requirements

    Science.gov (United States)

    Chan, James K.

    1998-08-01

    Optical CDMA is a complementary multiple access technology to WDMA. Optical CDMA potentially provides a large number of virtual optical channels for IXC, LEC and CLEC or supports a large number of high-speed users in a LAN. In a network, it provides asynchronous, multi-rate, multi-user communication with network scalability, re-configurability (bandwidth on demand), and network security (provided by inherent CDMA coding). However, optical CDMA technology is less mature than WDMA. The component requirements are also different from those of WDMA. We have demonstrated a video transport/switching system over a distance of 40 km using discrete optical components in our laboratory. We are currently pursuing PIC implementation. In this paper, we will describe the optical CDMA concept/features, the demonstration system, and the requirements of some critical optical components such as the broadband optical source, broadband optical amplifier, spectral spreading/de-spreading, and fixed/programmable mask.

  13. Large-format, high-speed, X-ray pnCCDs combined with electron and ion imaging spectrometers in a multipurpose chamber for experiments at 4th generation light sources

    International Nuclear Information System (INIS)

    Strueder, Lothar; Epp, Sascha; Rolles, Daniel; Hartmann, Robert; Holl, Peter; Lutz, Gerhard; Soltau, Heike; Eckart, Rouven; Reich, Christian; Heinzinger, Klaus; Thamm, Christian; Rudenko, Artem; Krasniqi, Faton; Kuehnel, Kai-Uwe; Bauer, Christian; Schroeter, Claus-Dieter; Moshammer, Robert; Techert, Simone; Miessner, Danilo; Porro, Matteo

    2010-01-01

    Fourth generation accelerator-based light sources, such as VUV and X-ray Free Electron Lasers (FEL), deliver ultra-brilliant (∼10¹²-10¹³ photons per bunch) coherent radiation in femtosecond (∼10-100 fs) pulses and, thus, require novel focal plane instrumentation in order to fully exploit their unique capabilities. As an additional challenge for detection devices, existing (FLASH, Hamburg) and future FELs (LCLS, Menlo Park; SCSS, Hyogo and the European XFEL, Hamburg) cover a broad range of photon energies from the EUV to the X-ray regime with significantly different bandwidths and pulse structures reaching up to MHz micro-bunch repetition rates. Moreover, hundreds up to trillions of fragment particles, ions, electrons or scattered photons can emerge when a single light flash impinges on matter with intensities up to 10²² W/cm². In order to meet these challenges, the Max Planck Advanced Study Group (ASG) within the Center for Free Electron Laser Science (CFEL) has designed the CFEL-ASG MultiPurpose (CAMP) chamber. It is equipped with specially developed photon and charged particle detection devices dedicated to cover large solid-angles. A variety of different targets are supported, such as atomic, (aligned) molecular and cluster jets, particle injectors for bio-samples or fixed target arrangements. CAMP houses 4π solid-angle ion and electron momentum imaging spectrometers ('reaction microscope', REMI, or 'velocity map imaging', VMI) in a unique combination with novel, large-area, broadband (50 eV-25 keV), high-dynamic-range, single-photon-counting and imaging X-ray detectors based on the pnCCDs. This instrumentation allows a new class of coherent diffraction experiments in which both electron and ion emission from the target may be simultaneously monitored. This permits the investigation of dynamic processes in this new regime of ultra-intense, high-energy radiation-matter interaction. After an introduction into the salient features of the CAMP chamber and

  14. The impact of large-scale energy storage requirements on the choice between electricity and hydrogen as the major energy carrier in a non-fossil renewables-only scenario

    International Nuclear Information System (INIS)

    Converse, Alvin O.

    2006-01-01

    The need for large-scale storage, when the energy source is subject to periods of low-energy generation, as it would be in a direct solar or wind energy system, could be the factor which justifies the choice of hydrogen, rather than electricity, as the principal energy carrier. It could also be the 'Achilles heel' of a solar-based sustainable energy system, tipping the choice to a nuclear breeder system

  15. Predictive Big Data Analytics: A Study of Parkinson's Disease Using Large, Complex, Heterogeneous, Incongruent, Multi-Source and Incomplete Observations

    OpenAIRE

    Dinov, Ivo D.; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W.; Price, Nathan D.; Van Horn, John D.; Ames, Joseph

    2016-01-01

    Background A unique archive of Big Data on Parkinson's Disease is collected, managed and disseminated by the Parkinson's Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationsh...

  16. Reporting funding source or conflict of interest in abstracts of randomized controlled trials, no evidence of a large impact on general practitioners' confidence in conclusions, a three-arm randomized controlled trial.

    Science.gov (United States)

    Buffel du Vaure, Céline; Boutron, Isabelle; Perrodeau, Elodie; Ravaud, Philippe

    2014-04-28

    Systematic reporting of funding sources is recommended in the CONSORT Statement for abstracts. However, no specific recommendation is related to the reporting of conflicts of interest (CoI). The objective was to compare physicians' confidence in the conclusions of abstracts of randomized controlled trials of pharmaceutical treatment indexed in PubMed. We planned a three-arm parallel-group randomized trial. French general practitioners (GPs) were invited to participate and were blinded to the study's aim. We used a representative sample of 75 abstracts of pharmaceutical industry-funded randomized controlled trials published in 2010 and indexed in PubMed. Each abstract was standardized and reported in three formats: 1) no mention of the funding source or CoI; 2) reporting the funding source only; and 3) reporting the funding source and CoI. GPs were randomized according to a computerized randomization on a secure Internet system at a 1:1:1 ratio to assess one abstract among the three formats. The primary outcome was GPs' confidence in the abstract conclusions (0, not at all, to 10, completely confident). The study was planned to detect a large difference with an effect size of 0.5. Between October 2012 and June 2013, among 605 GPs contacted, 354 were randomized, 118 for each type of abstract. The mean difference (95% confidence interval) in GPs' confidence in abstract findings was 0.2 (-0.6; 1.0) (P = 0.84) for abstracts reporting the funding source only versus no funding source or CoI; -0.4 (-1.3; 0.4) (P = 0.39) for abstracts reporting the funding source and CoI versus no funding source and CoI; and -0.6 (-1.5; 0.2) (P = 0.15) for abstracts reporting the funding source and CoI versus the funding source only. We found no evidence of a large impact of trial report abstracts mentioning funding sources or CoI on GPs' confidence in the conclusions of the abstracts. ClinicalTrials.gov identifier: NCT01679873.

  17. Assembling large, complex environmental metagenomes

    Energy Technology Data Exchange (ETDEWEB)

    Howe, A. C. [Michigan State Univ., East Lansing, MI (United States). Microbiology and Molecular Genetics, Plant Soil and Microbial Sciences; Jansson, J. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Earth Sciences Division; Malfatti, S. A. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Tringe, S. G. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Tiedje, J. M. [Michigan State Univ., East Lansing, MI (United States). Microbiology and Molecular Genetics, Plant Soil and Microbial Sciences; Brown, C. T. [Michigan State Univ., East Lansing, MI (United States). Microbiology and Molecular Genetics, Computer Science and Engineering

    2012-12-28

    The large volumes of sequencing data required to sample complex environments deeply pose new challenges to sequence analysis approaches. De novo metagenomic assembly effectively reduces the total amount of data to be analyzed but requires significant computational resources. We apply two pre-assembly filtering approaches, digital normalization and partitioning, to make large metagenome assemblies more computationally tractable. Using a human gut mock community dataset, we demonstrate that these methods result in assemblies nearly identical to assemblies from unprocessed data. We then assemble two large soil metagenomes from matched Iowa corn and native prairie soils. The predicted functional content and phylogenetic origin of the assembled contigs indicate significant taxonomic differences despite similar function. The assembly strategies presented are generic and can be extended to any metagenome; full source code is freely available under a BSD license.
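
    A toy sketch of the digital normalization step mentioned above (keep a read only if the median abundance of its k-mers, counted over the reads kept so far, is still below a coverage cutoff); the khmer toolkit used in the original work is not reproduced here, and the k-mer size, cutoff and reads are arbitrary placeholders.

```python
# Toy digital normalization: discard reads whose k-mers have already been seen
# at high coverage, so redundant data is removed before assembly.
from collections import Counter
from statistics import median

K = 20          # k-mer size (arbitrary for this sketch)
CUTOFF = 20     # coverage cutoff (arbitrary for this sketch)
kmer_counts = Counter()

def kmers(read, k=K):
    return [read[i:i + k] for i in range(len(read) - k + 1)]

def keep_read(read):
    """Return True if the read still adds coverage below the cutoff."""
    kms = kmers(read)
    if not kms:
        return False
    if median(kmer_counts[km] for km in kms) >= CUTOFF:
        return False            # read is redundant at the current coverage
    kmer_counts.update(kms)     # only reads we keep contribute to the counts
    return True

reads = ["ACGT" * 15, "ACGT" * 15, "TTGCA" * 12]   # placeholder reads
kept = [r for r in reads if keep_read(r)]
print(f"kept {len(kept)} of {len(reads)} reads")
```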

  18. The daily hour forecasting of the electrical energy production from renewable energy sources – a required condition for the operation of the new energy market model

    International Nuclear Information System (INIS)

    Kalpachka, Gergana; Kalpachki, Georgi

    2011-01-01

    The report presents the new energy market model in Bulgaria, with the main attention directed to the daily hour forecasting of electrical energy production from renewable energy sources. The need to develop a methodology and the most precise forecasting methods is reviewed, and some of the methods currently in use are presented. The problems related to daily hour forecasting are analysed using data from producers of electrical energy from renewable energy sources in the territory of western Bulgaria. Keywords: Renewable energy sources, daily hour forecasting, electrical energy

  19. Emission of 2-methyl-3-buten-2-ol by pines: A potentially large natural source of reactive carbon to the atmosphere

    Science.gov (United States)

    Harley, Peter; Fridd-Stroud, Verity; Greenberg, James; Guenther, Alex; Vasconcellos, PéRola

    1998-10-01

    High rates of emission of 2-methyl-3-buten-2-ol (MBO) were measured from needles of several pine species. Emissions of MBO in the light were 1 to 2 orders of magnitude higher than emissions of monoterpenes and, in contrast to monoterpene emissions from pines, were absent in the dark. MBO emissions were strongly dependent on incident light, behaving similarly to net photosynthesis. Emission rates of MBO increased exponentially with temperature up to approximately 35°C. Above approximately 42°C, emission rates declined rapidly. Emissions could be modeled using existing algorithms for isoprene emission. We propose that emissions of MBO from lodgepole and ponderosa pine are the primary source of the high concentrations of this compound, averaging 1-3 ppbv, found in ambient air samples collected in Colorado at an isolated mountain site approximately 3050 m above sea level. Subsequent field studies in a ponderosa pine plantation in California confirmed high MBO emissions, which averaged 25 μg C g⁻¹ h⁻¹ for 1-year-old needles, corrected to 30°C and a photon flux of 1000 μmol m⁻² s⁻¹. A total of 34 pine species growing at Eddy Arboretum in Placerville, California, were investigated, of which 11 exhibited high emissions of MBO (>5 μg C g⁻¹ h⁻¹), and 6 emitted small but detectable amounts. All the emitting species are of North American origin, and most are restricted to western North America. These results indicate that MBO emissions from pines may constitute a significant source of reactive carbon, and of acetone, to the atmosphere, particularly in the western United States.
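
    The "existing algorithms for isoprene emission" referred to above are commonly the light and temperature activity factors of Guenther et al. (1993); the sketch below applies that form to a basal emission rate, using the standard isoprene coefficients as placeholders (coefficients fitted specifically to MBO would differ).

```python
# Sketch of a Guenther-type light/temperature emission algorithm:
# E = E_basal * C_L(PAR) * C_T(T), with standard isoprene coefficients as placeholders.
import math

R = 8.314                          # J mol-1 K-1
ALPHA, C_L1 = 0.0027, 1.066        # light-response coefficients (isoprene defaults)
C_T1, C_T2 = 95000.0, 230000.0     # J mol-1 (isoprene defaults)
T_S, T_M = 303.0, 314.0            # K

def light_factor(par):
    """PAR in umol m-2 s-1."""
    return ALPHA * C_L1 * par / math.sqrt(1.0 + ALPHA**2 * par**2)

def temperature_factor(t_leaf):
    """Leaf temperature in K."""
    num = math.exp(C_T1 * (t_leaf - T_S) / (R * T_S * t_leaf))
    den = 1.0 + math.exp(C_T2 * (t_leaf - T_M) / (R * T_S * t_leaf))
    return num / den

def mbo_emission(e_basal, par, t_leaf_c):
    """Emission in the same units as e_basal (e.g. ug C g-1 h-1)."""
    return e_basal * light_factor(par) * temperature_factor(t_leaf_c + 273.15)

# Basal rate of 25 ug C g-1 h-1 near 30 C and PAR 1000, as quoted for 1-year-old needles
print(f"{mbo_emission(25.0, 1000.0, 30.0):.1f} ug C g-1 h-1 at standard conditions")
print(f"{mbo_emission(25.0, 500.0, 20.0):.1f} ug C g-1 h-1 at PAR 500, 20 C")
```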

  20. An advanced control system for the optimal operation and management of medium size power systems with a large penetration from renewable power sources

    Energy Technology Data Exchange (ETDEWEB)

    Nogaret, E.; Stavrakakis, G.; Kariniotakis, G. [Ecole de Mines de Paris, Centre d'Energetique, Sophia-Antipolis (France)] [and others]

    1997-10-01

    An advanced control system for the optimal operation and management of autonomous wind-diesel systems is presented. This system minimises the production costs through an on-line optimal scheduling of the power units, which takes into account the technical constraints of the diesel units, as well as short-term forecasts of the load and renewable resources. The power system security is maximised through on-line security assessment modules, which enable the power system to withstand sudden changes in the production of the renewable sources. The control system was evaluated using data from the island of Lemnos, where it has been installed and operated since January 1995. (Author)

  1. Welding simulation of large-diameter thick-walled stainless steel pipe joints. Fast computation of residual stress and influence of heat source model

    International Nuclear Information System (INIS)

    Maekawa, Akira; Serizawa, Hisashi; Nakacho, Keiji; Murakawa, Hidekazu

    2011-01-01

    There are many weld zones in the apparatus and piping installed in nuclear power plants, and the residual stress generated in these zones by the welding process is the most important factor influencing the maintenance of structural integrity. Although weld residual stress is frequently evaluated using numerical simulation, fast simulation techniques have been in demand because of the enormous calculation times required. Recently, fast weld residual stress evaluation based on accurate three-dimensional analysis has become available through development of the Iterative Substructure Method (ISM). In this study, the computational performance of the welding simulation code using the ISM was improved to obtain faster computations and more accurate welding simulation. By adding functions such as parallel processing, the computation speed was made much faster than that of the conventional finite element method code. Furthermore, the accuracy of the improved code was validated by measurements. The influence of two different weld heat source models on the simulation results was also investigated, and it was found that the moving heat source was effective in achieving accurate weld simulation for multi-pass welds. (author)

  2. A large outbreak of acute gastroenteritis in Shippensburg, Pennsylvania, 1972 revisited: evidence for common source exposure to a recombinant GII.Pg/GII.3 norovirus.

    Science.gov (United States)

    Johnson, J A; Parra, G I; Levenson, E A; Green, K Y

    2017-06-01

    Historical outbreaks can be an important source of information in the understanding of norovirus evolution and epidemiology. Here, we revisit an outbreak of undiagnosed gastroenteritis that occurred in Shippensburg, Pennsylvania in 1972. Nearly 5000 people fell ill over the course of 10 days. Symptoms included diarrhea, vomiting, stomach cramps, and fever, lasting for a median of 24 h. Using current techniques, including next-generation sequencing of full-length viral genomic amplicons, we identified an unusual norovirus recombinant (GII.Pg/GII.3) in nine of 15 available stool samples from the outbreak. This particular recombinant virus has not been reported in recent decades, although GII.3 and GII.Pg genotypes have been detected individually in current epidemic strains. The consensus nucleotide sequences were nearly identical among the four viral genomes analysed, although each strain had three to seven positions in the genome with heterogeneous non-synonymous nucleotide subpopulations. Two of these resulting amino acid polymorphisms were conserved in frequency among all four cases, consistent with common source exposure and successful transmission of a mixed viral population. Continued investigation of variant nucleotide populations and recombination events among ancestral norovirus strains such as the Shippensburg virus may provide unique insight into the origin of contemporary strains.

  3. Minerals Intake Distributions in a Large Sample of Iranian at-Risk Population Using the National Cancer Institute Method: Do They Meet Their Requirements?

    Science.gov (United States)

    Heidari, Zahra; Feizi, Awat; Azadbakht, Leila; Sarrafzadegan, Nizal

    2015-01-01

    Minerals are required for the body's normal function. The current study assessed the intake distribution of minerals and estimated the prevalence of inadequacy and excess among a representative sample of healthy middle-aged and elderly Iranian people. In this cross-sectional study, the second follow-up to the Isfahan Cohort Study (ICS), 1922 generally healthy people aged 40 and older were investigated. Dietary intakes were collected using 24-hour recalls and two or more consecutive food records. The distribution of mineral intake was estimated using the traditional (averaging dietary intake days) and National Cancer Institute (NCI) methods, and the results obtained from the two methods were compared. The prevalence of mineral intake inadequacy or excess was estimated using the estimated average requirement (EAR) cut-point method, the probability approach and the tolerable upper intake levels (UL). There were remarkable differences between values obtained using the traditional and NCI methods, particularly in the lower and upper percentiles of the estimated intake distributions. A high prevalence of inadequacy of magnesium (50-100%), calcium (21-93%) and zinc (30-55% for males > 50 years) was observed. Significant gender differences were found regarding inadequate intakes of calcium (21-76% for males vs. 45-93% for females), magnesium (92% vs. 100%), iron (0 vs. 15% for the age group 40-50 years) and zinc (29-55% vs. 0%) (all p < 0.05). Severely imbalanced intakes of magnesium, calcium and zinc were observed among the middle-aged and elderly Iranian population. Nutritional interventions and population-based education to improve healthy diets among the studied population at risk are needed.
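
    A minimal sketch of the EAR cut-point calculation mentioned above: the prevalence of inadequacy is the proportion of usual intakes below the EAR, and apparent excess the proportion above the UL. The intake values and reference numbers below are placeholders, not the study's data.

```python
# EAR cut-point method: prevalence of inadequacy = share of usual intakes < EAR;
# apparent excess = share of usual intakes > UL (all values below are placeholders).
import numpy as np

def prevalence(usual_intake, ear, ul=None):
    usual_intake = np.asarray(usual_intake, dtype=float)
    inadequate = float(np.mean(usual_intake < ear))
    excessive = float(np.mean(usual_intake > ul)) if ul is not None else None
    return inadequate, excessive

# Placeholder usual magnesium intakes (mg/day) for a small group of adults
intakes_mg = np.array([210, 260, 305, 330, 180, 290, 410, 250])
ear_mg, ul_mg = 350.0, 700.0   # example cut-offs; real reference values depend on nutrient, age and sex
inad, exc = prevalence(intakes_mg, ear_mg, ul_mg)
print(f"prevalence of inadequacy: {inad:.0%}, apparent excess: {exc:.0%}")
```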

  4. Adherence to 2016 European Society of Cardiology guidelines predicts outcome in a large real-world population of heart failure patients requiring cardiac resynchronization therapy.

    Science.gov (United States)

    Stabile, Giuseppe; Pepi, Patrizia; Palmisano, Pietro; D'Onofrio, Antonio; De Simone, Antonio; Caico, Salvatore Ivan; Pecora, Domenico; Rapacciuolo, Antonio; Arena, Giuseppe; Marini, Massimiliano; Pieragnoli, Paolo; Badolati, Sandra; Savarese, Gianluca; Maglia, Gianpiero; Iuliano, Assunta; Botto, Giovanni Luca; Malacrida, Maurizio; Bertaglia, Emanuele

    2018-04-14

    Professional guidelines are based on the best available evidence. However, patients treated in clinical practice may differ from those included in reference trials. The aim of this study was to evaluate the effects of cardiac resynchronization therapy (CRT) in a large population of patients implanted with a CRT device stratified in accordance with the 2016 European heart failure (HF) guidelines. We collected data on 930 consecutive patients from the Cardiac Resynchronization Therapy MOdular REgistry. The primary end point was a composite of death and HF hospitalization. Five hundred sixty-three (60.5%) patients met class I indications, 145 (15.6%) class IIa, 108 (11.6%) class IIb, and 114 (12.3%) class III. After a median follow-up of 1001 days, 120 patients who had an indication for CRT implantation had died and 71 had been hospitalized for HF. The time to the end point was longer in patients with a class I indication (hazard ratio 0.55; 95% confidence interval 0.39-0.76; P = .0001). After 12 months, left ventricular (LV) end-systolic volume had decreased by ≥15% in 61.5% of patients whereas in 57.5% of patients the absolute LV ejection fraction improvement was ≥5%. Adherence to class I was also associated with an absolute LV ejection fraction increase of >5% (P = .0142) and an LV end-systolic volume decrease of ≥15% (P = .0055). In our population, ∼60% of patients underwent implantation according to the 2016 European HF guidelines class I indication. Adherence to class I was associated with a lower death and HF hospitalization rate and better LV reverse remodeling. Copyright © 2018 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.

  5. AN EXTREME ANALOGUE OF ϵ AURIGAE: AN M-GIANT ECLIPSED EVERY 69 YEARS BY A LARGE OPAQUE DISK SURROUNDING A SMALL HOT SOURCE

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez, Joseph E.; Stassun, Keivan G.; Lund, Michael B.; Conroy, Kyle E. [Department of Physics and Astronomy, Vanderbilt University, 6301 Stevenson Center, Nashville, TN 37235 (United States); Siverd, Robert J. [Las Cumbres Observatory Global Telescope Network, 6740 Cortona Drive, Suite 102, Santa Barbara, CA 93117 (United States); Pepper, Joshua [Department of Physics, Lehigh University, 16 Memorial Drive East, Bethlehem, PA 18015 (United States); Tang, Sumin [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Kafka, Stella [American Association of Variable Star Observers, 49 Bay State Road, Cambridge, MA 02138 (United States); Gaudi, B. Scott; Stevens, Daniel J.; Kochanek, Christopher S. [Department of Astronomy, The Ohio State University, Columbus, OH 43210 (United States); Beatty, Thomas G. [Department of Astronomy and Astrophysics, The Pennsylvania State University, 525 Davey Lab, University Park, PA 16802 (United States); Shappee, Benjamin J. [Carnegie Observatories, 813 Santa Barbara Street, Pasadena, CA 91101 (United States)

    2016-05-01

    We present TYC 2505-672-1 as a newly discovered and remarkable eclipsing system comprising an M-type red giant that undergoes a ∼3.45 year long, near-total eclipse (depth of ∼4.5 mag) with a very long period of ∼69.1 years. TYC 2505-672-1 is now the longest-period eclipsing binary system yet discovered, more than twice as long as that of the currently longest-period system, ϵ Aurigae. We show from analysis of the light curve including both our own data and historical data spanning more than 120 years and from modeling of the spectral energy distribution, both before and during eclipse, that the red giant primary is orbited by a moderately hot source (T_eff ≈ 8000 K) that is itself surrounded by an extended, opaque circumstellar disk. From the measured ratio of luminosities, the radius of the hot companion must be in the range of 0.1–0.5 R_⊙ (depending on the assumed radius of the red giant primary), which is an order of magnitude smaller than that for a main sequence A star and 1–2 orders of magnitude larger than that for a white dwarf. The companion is therefore most likely a “stripped red giant” subdwarf-B type star destined to become a He white dwarf. It is, however, somewhat cooler than most sdB stars, implying a very low mass for this “pre-He-WD” star. The opaque disk surrounding this hot source may be a remnant of the stripping of its former hydrogen envelope. However, it is puzzling how this object became stripped, given that it is at present so distant (orbital semimajor axis of ∼24 au) from the current red giant primary star. Extrapolating from our calculated ephemeris, the next eclipse should begin in early UT 2080 April and end in mid UT 2083 September (eclipse center UT 2081 December 24). In the meantime, radial velocity observations would establish the masses of the components, and high-cadence UV observations could potentially reveal oscillations of the hot companion that would further constrain its evolutionary
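
    A small sketch extrapolating the ephemeris quoted above: with an eclipse center near UT 2081 December 24, a period of ∼69.1 yr and an eclipse duration of ∼3.45 yr, future eclipse windows follow by adding multiples of the period (the year-to-day conversion is the only assumption added here).

```python
# Extrapolate future eclipse windows of TYC 2505-672-1 from the quoted ephemeris:
# eclipse center ~UT 2081 Dec 24, period ~69.1 yr, eclipse duration ~3.45 yr.
from datetime import datetime, timedelta

PERIOD_DAYS = 69.1 * 365.25                      # orbital period in days
HALF_ECLIPSE = timedelta(days=3.45 * 365.25 / 2) # half the eclipse duration
center_2081 = datetime(2081, 12, 24)

for n in range(3):
    center = center_2081 + timedelta(days=n * PERIOD_DAYS)
    print(f"eclipse #{n}: ingress ~{(center - HALF_ECLIPSE):%Y-%m}, "
          f"center ~{center:%Y-%m-%d}, egress ~{(center + HALF_ECLIPSE):%Y-%m}")
```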

  6. A semi-parabolic wake model for large offshore wind farms based on the open source CFD solver OpenFOAM

    Directory of Open Access Journals (Sweden)

    Cabezón D.

    2014-01-01

    Full Text Available Wake effect represents one of the main sources of energy loss and uncertainty when designing offshore wind farms. Traditionally analytical models have been used to optimize and estimate power deficits. However these models have shown to underestimate wake effect and consequently overestimate output power [1, 2]. This means that analytical models can be very helpful at optimizing preliminary layouts but not as accurate as needed for an ultimate fine design. Different techniques can be found in the literature to study wind turbine wakes that include simplified kinematic models and more advanced field models, that solve flow equations with different turbulence closure schemes. See the review papers of Crespo et al. [3], Vermeer et al. [4], and Sanderse et al. [5]. Purely elliptic Computational Fluid Dynamics (CFD) models based on the actuator disk technique have been developed during the last years [6–8]. They consider wind turbine rotor as a disk where a distribution of axial forces act over the incoming air. It is a fair approach but it can still be computationally expensive for big wind farms in an operative mode. With this technique still active, an alternative approach inspired on the parabolic wake models [9, 10] is proposed. Wind turbine rotors continue to be represented as actuator disks but now the domain is split into subdomains containing one or more wind turbines. The output of each subdomain is mapped onto the input boundary of the next one until the end of the domain is reached, getting a considerable decrease on computational time, by a factor of order 10. As the model is based on the open source CFD solver OpenFOAM, it can be parallelized to speed-up convergence. The near wake is calculated so no initial wind speed deficit profiles have to be supposed as in totally parabolic models and alternative turbulence models, such as the anisotropic Reynolds Stress Model (RSM), can be used. Traditional problems of elliptic models related to
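
    A schematic, deliberately crude sketch of the subdomain-marching idea described above: each subdomain is "solved" in turn, every rotor in it imposes an actuator-disk-like velocity deficit, and the outlet profile is mapped onto the inlet of the next subdomain. The 1-D update below is only a stand-in for the actual OpenFOAM/actuator-disk solution, and all numbers are placeholders.

```python
# 1-D caricature of the semi-parabolic marching scheme: solve each subdomain in
# turn and pass its outlet velocity profile to the next subdomain's inlet.
import numpy as np

def solve_subdomain(inlet_profile, turbines_in_subdomain, ct=0.8, recovery=0.05):
    """Stand-in for the CFD solve: apply an actuator-disk-like deficit per rotor
    and a crude wake recovery, then return the outlet profile."""
    u = inlet_profile.copy()
    for span_slice in turbines_in_subdomain:       # lateral extent of each rotor
        u[span_slice] *= np.sqrt(1.0 - ct)         # far-wake deficit for thrust coefficient ct
    u += recovery * (inlet_profile - u)            # partial recovery across the subdomain
    return u

ny = 100
inlet = np.full(ny, 8.0)                               # 8 m/s uniform inflow (assumed)
subdomains = [[slice(40, 60)], [], [slice(45, 65)]]    # rotor positions per subdomain (assumed)

profile = inlet
for i, turbines in enumerate(subdomains):
    profile = solve_subdomain(profile, turbines)       # outlet of i becomes inlet of i+1
    print(f"subdomain {i}: min outlet speed {profile.min():.2f} m/s")
```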

  7. Enhanced production of electron cyclotron resonance plasma by exciting selective microwave mode on a large-bore electron cyclotron resonance ion source with permanent magnet.

    Science.gov (United States)

    Kimura, Daiju; Kurisu, Yosuke; Nozaki, Dai; Yano, Keisuke; Imai, Youta; Kumakura, Sho; Sato, Fuminobu; Kato, Yushi; Iida, Toshiyuki

    2014-02-01

    We are constructing a tandem-type ECRIS. The first stage is a large-bore source with a cylindrically comb-shaped magnet. We optimize the ion beam current and the ion saturation current by means of a mobile plate tuner. Both vary with the position of the plate tuner for 2.45 GHz, 11-13 GHz, and multi-frequency operation. Their peak positions are close to the position where the microwave mode forms a standing wave between the plate tuner and the extractor. The absorbed powers are estimated for each mode. We propose a new guiding principle: the efficient microwave mode should be selected so that its mode number fits the multipole number of the comb-shaped magnet. We obtained excitation of the selected modes using the new mobile plate tuner, enhancing the ECR efficiency.

  8. Enhanced production of electron cyclotron resonance plasma by exciting selective microwave mode on a large-bore electron cyclotron resonance ion source with permanent magnet

    Energy Technology Data Exchange (ETDEWEB)

    Kimura, Daiju, E-mail: kimura@nf.eie.eng.osaka-u.ac.jp; Kurisu, Yosuke; Nozaki, Dai; Yano, Keisuke; Imai, Youta; Kumakura, Sho; Sato, Fuminobu; Kato, Yushi; Iida, Toshiyuki [Division of Electrical, Electronic and Information Engineering, Graduate School of Engineering, Osaka University, 2-1 Yamada-oka, Suita-shi, Osaka 565-0871 (Japan)

    2014-02-15

    We are constructing a tandem-type ECRIS. The first stage is a large-bore source with a cylindrically comb-shaped magnet. We optimize the ion beam current and the ion saturation current by means of a mobile plate tuner. Both vary with the position of the plate tuner for 2.45 GHz, 11–13 GHz, and multi-frequency operation. Their peak positions are close to the position where the microwave mode forms a standing wave between the plate tuner and the extractor. The absorbed powers are estimated for each mode. We propose a new guiding principle: the efficient microwave mode should be selected so that its mode number fits the multipole number of the comb-shaped magnet. We obtained excitation of the selected modes using the new mobile plate tuner, enhancing the ECR efficiency.

  9. Source term analysis in severe accident induced by large break loss of coolant accident coincident with ship blackout for ship reactor

    International Nuclear Information System (INIS)

    Zhang Yanzhao; Zhang Fan; Zhao Xinwen; Zheng Yingfeng

    2013-01-01

    Using the MELCOR code, an accident analysis model was established for a ship reactor. The behaviors of radioactive fission products were analyzed for a severe accident induced by a large break loss of coolant accident coincident with ship blackout. The research mainly focused on the release, transport, retention and final distribution of the inert gas and CsI. The results show that 83.12% of the inert gas is released from the core, and most of it ends up in the containment. About 83.08% of the CsI is released from the core; 72.66% of this is retained in the debris and the primary system, and 27.34% is released into the containment. The results provide a reference for the evaluation of cabin dose and for nuclear emergency management. (authors)

  10. Quartz enhanced photoacoustic H₂S gas sensor based on a fiber-amplifier source and a custom tuning fork with large prong spacing

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Hongpeng; Liu, Xiaoli; Zheng, Huadan; Yin, Xukun; Ma, Weiguang; Zhang, Lei; Yin, Wangbao; Jia, Suotang [State Key Laboratory of Quantum Optics and Quantum Optics Devices, Institute of Laser Spectroscopy, Shanxi University, Taiyuan 030006 (China); Sampaolo, Angelo [Dipartimento Interateneo di Fisica, Università degli Studi di Bari and Politecnico di Bari, CNR-IFN UOS BARI, Via Amendola 173, Bari 70126 (Italy); Department of Electrical and Computer Engineering, Rice University, Houston, Texas 77005 (United States); Dong, Lei, E-mail: donglei@sxu.edu.cn [State Key Laboratory of Quantum Optics and Quantum Optics Devices, Institute of Laser Spectroscopy, Shanxi University, Taiyuan 030006 (China); Department of Electrical and Computer Engineering, Rice University, Houston, Texas 77005 (United States); Patimisco, Pietro; Spagnolo, Vincenzo [Dipartimento Interateneo di Fisica, Università degli Studi di Bari and Politecnico di Bari, CNR-IFN UOS BARI, Via Amendola 173, Bari 70126 (Italy); Tittel, Frank K. [Department of Electrical and Computer Engineering, Rice University, Houston, Texas 77005 (United States)

    2015-09-14

    A quartz enhanced photoacoustic spectroscopy (QEPAS) sensor, employing an erbium-doped fiber amplified laser source and a custom quartz tuning fork (QTF) with its two prongs spaced ∼800 μm apart, is reported. The sensor employs an acoustic micro-resonator (AmR) which is assembled in an “on-beam” QEPAS configuration. Both length and vertical position of the AmR are optimized in terms of signal-to-noise ratio, significantly improving the QEPAS detection sensitivity by a factor of ∼40, compared to the case of a sensor using a bare custom QTF. The fiber-amplifier-enhanced QEPAS sensor is applied to H₂S trace gas detection, reaching a sensitivity of ∼890 ppb at 1 s integration time, similar to those obtained with a power-enhanced QEPAS sensor equipped with a standard QTF, but with the advantages of easy optical alignment, simple installation, and long-term stability.

  11. Climate change impact on streamflow in large-scale river basins: projections and their uncertainties sourced from GCMs and RCP scenarios

    Science.gov (United States)

    Nasonova, Olga N.; Gusev, Yeugeniy M.; Kovalev, Evgeny E.; Ayzel, Georgy V.

    2018-06-01

    Climate change impact on river runoff was investigated within the framework of the second phase of the Inter-Sectoral Impact Model Intercomparison Project (ISI-MIP2) using the physically based land surface model Soil Water - Atmosphere - Plants (SWAP) (developed in the Institute of Water Problems of the Russian Academy of Sciences) and meteorological projections (for 2006-2099) simulated by five General Circulation Models (GCMs) (including GFDL-ESM2M, HadGEM2-ES, IPSL-CM5A-LR, MIROC-ESM-CHEM, and NorESM1-M) for each of four Representative Concentration Pathway (RCP) scenarios (RCP2.6, RCP4.5, RCP6.0, and RCP8.5). Eleven large-scale river basins were used in this study. First, SWAP was calibrated and validated against monthly values of measured river runoff using forcing data from the WATCH data set, and all GCM projections were bias-corrected to WATCH. Then, for each basin, 20 projections of possible changes in river runoff during the 21st century were simulated by SWAP. Analysis of the resulting hydrological projections allowed us to estimate the uncertainties arising from the application of different GCMs and RCP scenarios. On average, the contribution of the different GCMs to the uncertainty of the projected river runoff is nearly twice as large as the contribution of the RCP scenarios. At the same time, the contribution of the GCMs decreases slightly with time.
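
    A small sketch of how the relative contributions of GCMs and RCP scenarios to the projection spread can be compared, assuming the 20 projections are arranged as a 5 × 4 array of runoff changes (one value per GCM-RCP pair); the numbers below are placeholders, not ISI-MIP2 results.

```python
# Compare the spread of projected runoff change across GCMs (per RCP) with the
# spread across RCPs (per GCM); placeholder numbers, 5 GCMs x 4 RCPs.
import numpy as np

gcms = ["GFDL-ESM2M", "HadGEM2-ES", "IPSL-CM5A-LR", "MIROC-ESM-CHEM", "NorESM1-M"]
rcps = ["RCP2.6", "RCP4.5", "RCP6.0", "RCP8.5"]

# runoff_change[i, j]: projected change (%) for GCM i under RCP j (illustrative)
runoff_change = np.array([
    [ 2.0,  4.0,  5.0,  8.0],
    [-3.0, -1.0,  0.0,  2.0],
    [ 6.0,  9.0, 11.0, 15.0],
    [ 1.0,  2.0,  4.0,  6.0],
    [-1.0,  0.0,  1.0,  3.0],
])

gcm_spread = runoff_change.std(axis=0, ddof=1).mean()   # spread across GCMs, averaged over RCPs
rcp_spread = runoff_change.std(axis=1, ddof=1).mean()   # spread across RCPs, averaged over GCMs
print(f"mean spread due to GCM choice: {gcm_spread:.1f} %")
print(f"mean spread due to RCP choice: {rcp_spread:.1f} %")
print(f"ratio GCM/RCP: {gcm_spread / rcp_spread:.1f}")
```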

  12. A large-area grid ionisation chamber with high resolution for the measurement of alpha sources in samples with low specific activity

    International Nuclear Information System (INIS)

    Hoetzl, H.; Winkler, R.

    1978-06-01

    The construction and properties of a gridded ionization chamber for α-particle spectrometry of low-level large-area samples are presented. Great importance was attached to high spectrometric resolution, low background, long-term stability, simple construction and operation, and easy decontamination if necessary. Using modern charge-sensitive preamplifiers, the spectrometric resolution is 20.6 keV FWHM (0.4%) at 5.30 MeV over the total effective area of 300 m². The counting gas is an argon-methane mixture (P-10 gas) at atmospheric pressure. The background is 13 cph in the energy interval from 4 to 6 MeV and the minimum detectable activity is 0.01 pCi Pu-239 at 1000 min measuring time. Ionization chambers of this type are used for direct α-spectrometric surveillance of long-lived α-emitting nuclides in the atmosphere after electrostatic deposition of the aerosols and for the determination of α-emitting nuclides in the emissions of nuclear power plants. After plasma ashing of the aerosols on filters from the stack monitoring system, the minimum detectable concentration of e.g. Pu-239/240 in the gaseous effluent of a nuclear power plant is about 0.1 fCi per m³. (orig.) [de]
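
    A sketch of a Currie-type detection-limit estimate for a counter with the quoted background and counting time; the counting efficiency and the fraction of the 4-6 MeV background falling in the Pu-239 region of interest are not given in the record, so the values below are assumptions and the sketch is not expected to reproduce the quoted 0.01 pCi exactly.

```python
# Currie-style minimum detectable activity (MDA) for an alpha counter:
# MDA = (2.71 + 4.65 * sqrt(B)) / (t * eff), with B = background counts in the ROI.
import math

BQ_PER_PCI = 0.037

def mda_pci(background_cph, count_time_min, efficiency, roi_fraction=1.0):
    b_counts = background_cph / 60.0 * count_time_min * roi_fraction
    detectable_counts = 2.71 + 4.65 * math.sqrt(b_counts)
    activity_bq = detectable_counts / (count_time_min * 60.0 * efficiency)
    return activity_bq / BQ_PER_PCI

# Quoted background 13 cph (4-6 MeV) and 1000 min counting time; the efficiency and
# the narrower Pu-239 region-of-interest fraction are assumed values.
print(f"MDA ~ {mda_pci(13.0, 1000.0, efficiency=0.5, roi_fraction=0.1):.3f} pCi")
```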

  13. DISCOVERY OF A LARGE POPULATION OF ULTRALUMINOUS X-RAY SOURCES IN THE BULGELESS GALAXIES NGC 337 AND ESO 501-23

    Energy Technology Data Exchange (ETDEWEB)

    Somers, Garrett; Mathur, Smita; Martini, Paul; Grier, Catherine J. [Department of Astronomy, The Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States); Watson, Linda [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Ferrarese, Laura, E-mail: somers@astronomy.ohio-state.edu [Hertzberg Institute of Astrophysics, 5071 West Saanich Road, Victoria, BC V9E 2E7 (Canada)

    2013-11-01

    We have used Chandra observations of eight bulgeless disk galaxies to identify new ultraluminous X-ray source (ULX) candidates, study their high-mass X-ray binary (HMXB) population, and search for low-luminosity active galactic nuclei (AGNs). We report the discovery of 16 new ULX candidates in our sample of galaxies. Eight of these are found in the star-forming galaxy NGC 337, none of which are expected to be background contaminants. The HMXB luminosity function of NGC 337 implies a star formation rate (SFR) of 6.8 (+4.4/−3.5) M⊙ yr⁻¹, consistent at 1.5σ with a recent state-of-the-art SFR determination. We also report the discovery of a bright ULX candidate (X-1) in ESO 501-23. X-1's spectrum is well fit by an absorbed power law with Γ = 1.18 (+0.19/−0.11) and N_H = 1.13 (+7.07/−1.13) × 10²⁰ cm⁻², implying a 0.3-8 keV flux of 1.08 (+0.05/−0.07) × 10⁻¹² erg s⁻¹ cm⁻². Its X-ray luminosity (L_X) is poorly constrained due to uncertainties in the host galaxy's distance, but we argue that its spectrum implies L_X > 10⁴⁰ erg s⁻¹. An optical counterpart to this object may be present in a Hubble Space Telescope image. We also identify ULX candidates in IC 1291, PGC 3853, NGC 5964, and NGC 2805. We find no evidence of nuclear activity in the galaxies in our sample, placing a flux upper limit of 4 × 10⁻¹⁵ erg s⁻¹ cm⁻² on putative AGNs. Additionally, the Type II-P supernova SN 2011DQ in NGC 337, which exploded two months before our X-ray observation, is undetected.

  14. Analysis of drug-drug interactions among patients receiving antiretroviral regimens using data from a large open-source prescription database.

    Science.gov (United States)

    Patel, Nimish; Borg, Peter; Haubrich, Richard; McNicholl, Ian

    2018-06-14

    Results of a study of contraindicated concomitant medication use among recipients of preferred antiretroviral therapy (ART) regimens are reported. A retrospective study was conducted to evaluate concomitant medication use in a cohort of previously treatment-naive, human immunodeficiency virus (HIV)-infected U.S. patients prescribed preferred ART regimens during the period April 2014-March 2015. Data were obtained from a proprietary longitudinal prescription database; elements retrieved included age, sex, and prescription data. The outcome of interest was the frequency of drug-drug interactions (DDIs) associated with concomitant use of contraindicated medications. Data on 25,919 unique treatment-naive patients who used a preferred ART regimen were collected. Overall, there were 384 instances in which a contraindicated medication was dispensed for concurrent use with a recommended ART regimen. Rates of contraindicated concomitant medication use differed significantly by ART regimen; the highest rate (3.2%) was for darunavir plus ritonavir plus emtricitabine-tenofovir disoproxil fumarate (DRV plus RTV plus FTC/TDF), followed by elvitegravir-cobicistat-emtricitabine-tenofovir disoproxil fumarate (EVG/c/FTC/TDF)(2.8%). The highest frequencies of DDIs were associated with ART regimens that included a pharmacoenhancing agent: DRV plus RTV plus FTC/TDF (3.2%) and EVG/c/FTC/TDF (2.8%). In a large population of treatment-naive HIV-infected patients, ART regimens that contained a pharmacoenhancing agent were involved most frequently in contraindicated medication-related DDIs. All of the DDIs could have been avoided by using therapeutic alternatives within the same class not associated with a DDI. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
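
    A minimal sketch of the kind of screen such an analysis implies: given each patient's dispensed drugs, flag any co-dispensed pair that appears in a contraindicated-combination table. The drug names and the single contraindicated pair below are placeholders, not the study's interaction list.

```python
# Flag contraindicated co-dispensed drug pairs per patient (placeholder data).
from itertools import combinations

# Placeholder contraindication table; a real screen would use a curated DDI source.
CONTRAINDICATED = {frozenset({"drugA", "drugB"})}

patients = {
    "pt001": {"drugA", "drugC"},
    "pt002": {"drugA", "drugB", "drugD"},
    "pt003": {"drugC", "drugD"},
}

flagged = {
    pt: [tuple(sorted(pair)) for pair in map(frozenset, combinations(meds, 2))
         if pair in CONTRAINDICATED]
    for pt, meds in patients.items()
}
hits = {pt: pairs for pt, pairs in flagged.items() if pairs}
print(f"{len(hits)} of {len(patients)} patients with a contraindicated combination: {hits}")
```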

  15. Searching the Gamma-Ray Sky for Counterparts to Gravitational Wave Sources Fermi Gamma-Ray Burst Monitor and Large Area Telescope Observations of LVT151012 and GW151226

    Science.gov (United States)

    Racusin, J. L.; Burns, E.; Goldstein, A.; Connaughton, V.; Wilson-Hodge, C. A.; Jenke, P.; Blackburn, L.; Briggs, M. S.; Broida, J.; Camp, J.; hide

    2017-01-01

    We present the Fermi Gamma-ray Burst Monitor (GBM) and Large Area Telescope (LAT) observations of the LIGO binary black hole merger event GW151226 and candidate LVT151012. At the time of the LIGO triggers on LVT151012 and GW151226, GBM was observing 68% and 83% of the localization regions, and LAT was observing 47% and 32%, respectively. No candidate electromagnetic counterparts were detected by either the GBM or LAT. We present a detailed analysis of the GBM and LAT data over a range of timescales from seconds to years, using automated pipelines and new techniques for characterizing the flux upper bounds across large areas of the sky. Due to the partial GBM and LAT coverage of the large LIGO localization regions at the trigger times for both events, differences in source distances and masses, as well as the uncertain degree to which emission from these sources could be beamed, these non-detections cannot be used to constrain the variety of theoretical models recently applied to explain the candidate GBM counterpart to GW150914.

  16. SEARCHING THE GAMMA-RAY SKY FOR COUNTERPARTS TO GRAVITATIONAL WAVE SOURCES: FERMI GAMMA-RAY BURST MONITOR AND LARGE AREA TELESCOPE OBSERVATIONS OF LVT151012 AND GW151226

    Energy Technology Data Exchange (ETDEWEB)

    Racusin, J. L.; Camp, J.; Singer, L. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Burns, E. [Physics Dept, University of Alabama in Huntsville, 320 Sparkman Dr., Huntsville, AL 35805 (United States); Goldstein, A.; Connaughton, V.; Littenberg, T.; Cleveland, W. [Universities Space Research Association, 320 Sparkman Dr. Huntsville, AL 35806 (United States); Wilson-Hodge, C. A.; Hui, C. M. [Astrophysics Office, ZP12, NASA/Marshall Space Flight Center, Huntsville, AL 35812 (United States); Jenke, P.; Briggs, M. S.; Bhat, P. N. [CSPAR, University of Alabama in Huntsville, 320 Sparkman Dr., Huntsville, AL 35805 (United States); Blackburn, L. [LIGO, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Broida, J.; Christensen, N. [Physics and Astronomy, Carleton College, MN 55057 (United States); Shawhan, P. [Department of Physics, University of Maryland, College Park, MD 20742 (United States); Veitch, J. [University of Birmingham, Birmingham B15 2TT (United Kingdom); Fitzpatrick, G. [School of Physics, University College Dublin, Belfield, Stillorgan Road, Dublin 4 (Ireland); Gibby, M. H. [Jacobs Technology, Inc., Huntsville, AL (United States); Collaboration: Fermi LAT Collaboration; and others

    2017-01-20

    We present the Fermi Gamma-ray Burst Monitor (GBM) and Large Area Telescope (LAT) observations of the LIGO binary black hole merger event GW151226 and candidate LVT151012. At the time of the LIGO triggers on LVT151012 and GW151226, GBM was observing 68% and 83% of the localization regions, and LAT was observing 47% and 32%, respectively. No candidate electromagnetic counterparts were detected by either the GBM or LAT. We present a detailed analysis of the GBM and LAT data over a range of timescales from seconds to years, using automated pipelines and new techniques for characterizing the flux upper bounds across large areas of the sky. Due to the partial GBM and LAT coverage of the large LIGO localization regions at the trigger times for both events, differences in source distances and masses, as well as the uncertain degree to which emission from these sources could be beamed, these non-detections cannot be used to constrain the variety of theoretical models recently applied to explain the candidate GBM counterpart to GW150914.

  17. Effects of long term feeding diets differing in protein source and pre-slaughter starvation on biometry, qualitative traits and liver IGF-I expression in large rainbow trout

    Directory of Open Access Journals (Sweden)

    Emilio Tibaldi

    2010-01-01

    Full Text Available The effects of feeding two complete extruded diets differing in protein source (fish meal, FM, vs. vegetable proteins, VP) over 30 weeks, and of a subsequent 30 days of starvation, on biometry, fillet composition and liver IGF-I mRNA were studied in large rainbow trout. At the end of the feeding period, the dietary protein source had little effect on major biometry traits, dressing-out yields and overall adiposity (P>0.05), but fish given the VP diet showed a higher content of PUFA n-6 fatty acids in muscle (0.46 vs. 0.22 g/100 g fillet, P<0.05) and of all fatty acids in fillet (P<0.05), except DHA. Liver IGF-I mRNA content was little affected by the test diet and starvation.

  18. Review of areas that may require simultaneous coupled solution of the thermal hydraulic and fission product/aerosol behavior equations for source term determination

    International Nuclear Information System (INIS)

    Kress, T.S.

    1984-01-01

    In the determination of the behavior of nuclear aerosols in the reactor coolant system and in the containment for the development of severe accident source terms, present practice generally is to first perform thermal-hydraulic calculations for specific plant types and sequences and then to utilize the results as input for separate fission product/aerosol dynamic transport calculations. It is recognized that there are several areas in which the thermal hydraulics and the fission product/aerosol behavior may be significantly coupled and that it is then basically incorrect to do the analyses in such a separate manner. This review paper produces a speculative list of these potentially coupled areas and attempts to assess the importance of the coupling for as many of the specific items as time has allowed before this conference.

  19. Decree no. 76-480 of 24 May 1976 implementing Section 17 of the Amended Revenue Act for 1975 (no. 75-1242 of 27 December 1975) concerning the fees required for large nuclear installations

    International Nuclear Information System (INIS)

    1976-01-01

    This Decree was published in the Official French Gazette of 4 June 1976 and lays down measures for fixing and collecting the dues required for large nuclear installations. These dues are fixed by the Minister of Industry and Research for each operator on the basis of information supplied by the Head of the Central Service for the Safety of Nuclear Installations. The sums thus collected are used in particular to reimburse the expenses incurred for the safety analyses made by the Commissariat a l'Energie Atomique and for the inspections prescribed for installations. (N.E.A.) [fr

  20. The potential distributions, and estimated spatial requirements and population sizes, of the medium to large-sized mammals in the planning domain of the Greater Addo Elephant National Park project

    Directory of Open Access Journals (Sweden)

    A.F. Boshoff

    2002-12-01

    Full Text Available The Greater Addo Elephant National Park project (GAENP) involves the establishment of a mega biodiversity reserve in the Eastern Cape, South Africa. Conservation planning in the GAENP planning domain requires systematic information on the potential distributions and estimated spatial requirements, and population sizes of the medium to large-sized mammals. The potential distribution of each species is based on a combination of literature survey, a review of their ecological requirements, and consultation with conservation scientists and managers. Spatial requirements were estimated within 21 Mammal Habitat Classes derived from 43 Land Classes delineated by expert-based vegetation and river mapping procedures. These estimates were derived from spreadsheet models based on forage availability estimates and the metabolic requirements of the respective mammal species, and that incorporate modifications of the agriculture-based Large Stock Unit approach. The potential population size of each species was calculated by multiplying its density estimate with the area of suitable habitat. Population sizes were calculated for pristine, or near pristine, habitats alone, and then for these habitats together with potentially restorable habitats for two park planning domain scenarios. These data will enable (a) the measurement of the effectiveness of the GAENP in achieving predetermined demographic, genetic and evolutionary targets for mammals that can potentially occur in selected park sizes and configurations, (b) decisions regarding acquisition of additional land to achieve these targets to be informed, (c) the identification of species for which targets can only be met through metapopulation management, (d) park managers to be guided regarding the re-introduction of appropriate species, and (e) the application of realistic stocking rates. Where possible, the model predictions were tested by comparison with empirical data, which in general corroborated the
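
    A minimal sketch of the spreadsheet-style calculation described above: a density estimate per mammal habitat class (in practice derived from forage availability and metabolic requirements) multiplied by the area of suitable habitat, summed over classes, gives a potential population size. The species, densities and areas below are invented placeholders.

```python
# Potential population size = sum over habitat classes of (density * suitable area).
# Densities (animals per km2) and areas (km2) are placeholder values.

habitat_area_km2 = {"thicket": 1200.0, "grassland": 800.0, "fynbos": 450.0}

species_density = {             # animals per km2 per habitat class (illustrative)
    "kudu":     {"thicket": 2.5, "grassland": 0.8, "fynbos": 0.1},
    "elephant": {"thicket": 0.4, "grassland": 0.2, "fynbos": 0.0},
}

def potential_population(densities, areas):
    return sum(densities.get(habitat, 0.0) * area for habitat, area in areas.items())

for species, densities in species_density.items():
    print(f"{species}: ~{potential_population(densities, habitat_area_km2):.0f} animals")
```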

  1. IAEA news: • Newcomer countries face common challenges in nuclear infrastructure development. • Safety and licensing requirements for small modular reactors: IAEA hosts first workshop for regulators. • IAEA reaches milestone in disposal of radioactive sources

    International Nuclear Information System (INIS)

    Kollar, Lenka; Dyck, Elisabeth; Dixit, Aabha; Gaspar, Miklos; Gil, Laura

    2016-01-01

    • Newcomer countries face common challenges in nuclear infrastructure development: Countries embarking on a nuclear power programme need to make sure that the development of their legal, regulatory and support infrastructure keeps pace with the construction of the power plant itself. This is the only way to ensure that the programme proceeds in a safe, secure and sustainable way, concluded participants of a workshop on nuclear power infrastructure development hosted at the IAEA last February. • Safety and licensing requirements for small modular reactors: IAEA hosts first workshop for regulators: A new generation of advanced, prefab nuclear power reactors called small modular reactors (SMRs) could be licensed and hit the market as early as 2020, and the IAEA is helping regulators prepare for their debut. In a series of workshops that began earlier this year, the IAEA is working closely with regulators on approaches to safety and licensing ahead of potential SMR deployment worldwide. • IAEA reaches milestone in disposal of radioactive sources: Successful tests of a promising technology for moving and storing low level radioactive sealed sources are paving the way for a new disposal method for dealing with small volumes of radioactive waste around the world. The method, which involves placing and covering sealed sources in a narrow hole a few hundred metres deep, would allow countries to safely and securely take charge of their own disused radioactive sources. The proof of concept for the technology was tested in Croatia late last year — without the use of actual radioactive material.

  2. Stable isotope and noble gas constraints on the source and residence time of spring water from the Table Mountain Group Aquifer, Paarl, South Africa and implications for large scale abstraction

    Science.gov (United States)

    Miller, J. A.; Dunford, A. J.; Swana, K. A.; Palcsu, L.; Butler, M.; Clarke, C. E.

    2017-08-01

    Large scale groundwater abstraction is increasingly being used to support large urban centres especially in areas of low rainfall but presents particular challenges in the management and sustainability of the groundwater system. The Table Mountain Group (TMG) Aquifer is one of the largest and most important aquifer systems in South Africa and is currently being considered as an alternative source of potable water for the City of Cape Town, a metropolis of over four million people. The TMG aquifer is a fractured rock aquifer hosted primarily in super mature sandstones, quartzites and quartz arenites. The groundwater naturally emanates from numerous springs throughout the cape region. One set of springs were examined to assess the source and residence time of the spring water. Oxygen and hydrogen isotopes indicate that the spring water has not been subject to evaporation and in combination with Na/Cl ratios implies that recharge to the spring systems is via coastal precipitation. Although rainfall in the Cape is usually modelled on orographic rainfall, δ¹⁸O and δ²H values of some rainfall samples are strongly positive indicating a stratiform component as well. Comparing the spring water δ¹⁸O and δ²H values with those of local rainfall indicates that the springs are likely derived from continuous bulk recharge over the immediate hinterland to the springs and not through large and/or heavy downpours. Noble gas concentrations, combined with tritium and radiocarbon activities indicate that the residence time of the TMG groundwater in this area is decadal in age with a probable maximum upper limit of ∼40 years. This residence time is probably a reflection of the slow flow rate through the fractured rock aquifer and hence indicates that the interconnectedness of the fractures is the most important factor controlling groundwater flow. The short residence time of the groundwater suggest that recharge to the springs and the Table Mountain Group Aquifer as a whole is

  3. The Chandra Source Catalog: Statistical Characterization

    Science.gov (United States)

    Primini, Francis A.; Nowak, M. A.; Houck, J. C.; Davis, J. E.; Glotfelty, K. J.; Karovska, M.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Evans, I. N.; Evans, J. D.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    The Chandra Source Catalog (CSC) will ultimately contain more than ∼250,000 X-ray sources in a total area of ∼1% of the entire sky, using data from ∼10,000 separate ACIS and HRC observations of a multitude of different types of X-ray sources (see Evans et al. this conference). In order to maximize the scientific benefit of such a large, heterogeneous dataset, careful characterization of the statistical properties of the catalog, i.e., completeness, sensitivity, false source rate, and accuracy of source properties, is required. Our characterization efforts include both extensive simulations of blank-sky and point source datasets, and detailed comparisons of CSC results with those of other X-ray and optical catalogs. We present here a summary of our characterization results for CSC Release 1 and preliminary plans for future releases. This work is supported by NASA contract NAS8-03060 (CXC).
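
    A sketch of the two headline characterization quantities mentioned above, as they might be tallied from simulation bookkeeping: completeness as the fraction of injected point sources recovered per flux bin, and the false-source rate as the mean number of detections in blank-sky simulations. The detection criterion and all arrays are placeholders.

```python
# Catalog characterization from simulations: completeness = recovered/injected
# sources (per flux bin); false-source rate = spurious detections per blank-sky field.
import numpy as np

def completeness(injected_flux, recovered_mask, bins):
    """Fraction of injected sources recovered, per flux bin."""
    injected, _ = np.histogram(injected_flux, bins=bins)
    recovered, _ = np.histogram(injected_flux[recovered_mask], bins=bins)
    with np.errstate(divide="ignore", invalid="ignore"):
        return recovered / injected

# Placeholder simulation bookkeeping
flux = 10 ** np.random.uniform(-15, -13, 500)       # erg s-1 cm-2
recovered = flux > 3e-15                             # toy detection criterion
bins = np.logspace(-15, -13, 6)
print("completeness per flux bin:", np.round(completeness(flux, recovered, bins), 2))

blank_sky_detections = np.array([0, 1, 0, 0, 2])     # spurious sources in 5 blank-sky sims
print("false sources per field:", blank_sky_detections.mean())
```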

  4. Response to Question Concerning whether the Requirements for Preconstruction Review of New or Modified Air Pollution Sources Apply to the Relocation of an Existing Asphalt Concrete Plant when such Relocation does not Result in any Increase in Emissions

    Science.gov (United States)

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  5. Environmental problems connected to the use of renewable energy sources

    International Nuclear Information System (INIS)

    Mottana, A.; Pignotti, S.

    2000-01-01

    The development of renewable energy sources (FER) can represent a fundamental answer to the growing need for energy and the demand for a new environmental quality. The renewable sources, however, also have an environmental cost; although its magnitude may be of little importance in a global balance, it can have a large impact at the local level. Among the FER, the authors focus on hydroelectric power, biomass and wind energy, since they are the most relevant to the aims of this discussion. [it]

  6. Spectrometers for compact neutron sources

    Science.gov (United States)

    Voigt, J.; Böhm, S.; Dabruck, J. P.; Rücker, U.; Gutberlet, T.; Brückel, T.

    2018-03-01

    We discuss the potential for neutron spectrometers at novel accelerator-driven compact neutron sources. Such a High Brilliance Source (HBS) relies on low-energy nuclear reactions, which enable cryogenic moderators in very close proximity to the target and neutron optics at comparatively short distances from the moderator compared to existing sources. While the first effect aims at increasing the phase space density of a moderator, the second allows the extraction of a large phase space volume, which is typically requested for spectrometer applications. We find that competitive spectrometers can be realized if (a) the neutron production rate can be synchronized with the experiment repetition rate and (b) the emission characteristics of the moderator can be matched to the phase space requirements of the experiment. MCNP simulations for protons or deuterons on a beryllium target with a suitable target/moderator design yield a source brightness, from which we calculate the sample fluxes by phase space considerations for different types of spectrometers. These closely match the figures of today's spectrometers at medium-flux sources. Hence we conclude that compact neutron sources might be a viable option for next-generation neutron sources.

  7. Plume-related mantle source of super-large rare metal deposits from the Lovozero and Khibina massifs on the Kola Peninsula, Eastern part of Baltic Shield: Sr, Nd and Hf isotope systematics

    Science.gov (United States)

    Kogarko, L. N.; Lahaye, Y.; Brey, G. P.

    2010-03-01

    The world's two largest complexes of highly alkaline nepheline syenites and related rare metal loparite and eudialyte deposits, the Khibina and Lovozero massifs, occur in the central part of the Kola Peninsula. We measured for the first time in situ the trace element concentrations and the Sr, Nd and Hf isotope ratios by LA-ICP-MS (laser ablation inductively coupled plasma mass spectrometry) in loparite, eudialyte and in some other pegmatitic minerals. The results are in agreement with the whole-rock Sr and Nd isotope data, which suggest the formation of these super-large rare metal deposits in a closed magmatic system. The initial Hf, Sr, Nd isotope ratios are similar to the isotopic signatures of OIB, indicating depleted mantle as a source. This leads to the suggestion that the origin of these gigantic alkaline intrusions is connected to a deep-seated mantle source, possibly a lower mantle plume. The required combination of a depleted mantle and high rare metal enrichment in the source can be explained by the input of incompatible elements by metasomatising melts/fluids into the zones of alkaline magma generation shortly before the partial melting event (to avoid ingrowth of radiogenic isotopes). The minerals belovite and pyrochlore from the pegmatites have abnormally high 87Sr/86Sr ratios. This may be explained by closed-system isotope evolution as a result of a significant increase in Rb/Sr during the evolution of the peralkaline magma.

  8. Byproduct metals and rare-earth elements used in the production of light-emitting diodes—Overview of principal sources of supply and material requirements for selected markets

    Science.gov (United States)

    Wilburn, David R.

    2012-01-01

    The use of light-emitting diodes (LEDs) is expanding because of environmental issues and the efficiency and cost savings achieved compared with traditional incandescent lighting. The longer life and reduced power consumption of some LEDs have led to annual energy savings, reduced maintenance costs, and lower emissions of carbon dioxide, sulfur dioxide, and nitrogen oxides from powerplants because of the resulting decrease in energy consumption required for lighting applications when LEDs are used to replace less-energy-efficient sources. Metals such as arsenic, gallium, indium, and the rare-earth elements (REEs) cerium, europium, gadolinium, lanthanum, terbium, and yttrium are important mineral materials used in LED semiconductor technology. Most of the world's supply of these materials is produced as byproducts from the production of aluminum, copper, lead, and zinc. Most of the rare earths required for LED production in 2011 came from China, and most LED production facilities were located in Asia. The LED manufacturing process is complex and is undergoing much change with the growth of the industry and the changes in demand patterns of associated commodities. In many respects, the continued growth of the LED industry, particularly in the general lighting sector, is tied to its ability to increase LED efficiency and color uniformity while decreasing the costs of producing, purchasing, and operating LEDs. Research is supported by the governments of China, the European Union, Japan, the Republic of Korea, and the United States. Because of the volume of ongoing research in this sector, it is likely that the material requirements of future LEDs may be quite different from those of LEDs currently (2011) in use, as industry attempts to cut costs by reducing material requirements of expensive heavy rare-earth phosphors and increasing the sizes of wafers for economies of scale. Improved LED performance will allow customers to reduce the number of LEDs in automotive, electronic

  9. Radioisotope Power Sources

    International Nuclear Information System (INIS)

    Culwell, J. P.

    1963-01-01

    The radioisotope power programme of the US Atomic Energy Commission has brought forth a whole new technology of the use of radioisotopes as energy sources in electric power generators. Radioisotope power systems are particularly suited for remote applications where long-lived, compact, reliable power is needed. Able to perform satisfactorily under extreme environmental conditions of temperature, sunlight and electromagnetic radiation, these ''atomic batteries'' are attractive power sources for remote data collecting devices, monitoring systems, satellites and other space missions. Radioisotopes used as fuels generally are either alpha or beta emitters. Alpha emitters are the preferable fuels but are more expensive and less available than beta fuels and are generally reserved for space applications. Beta fuels separated from reactor fission wastes are being used exclusively in land and sea applications at present. It can be expected, however, that beta emitters such as strontium-90 eventually will be used in space. Development work is being carried out on generators which will use mixed fission products as fuel. This fuel will be less expensive than the pure radioisotopes since the costs of isotope separation and purification are eliminated. Prototype thermoelectric generators, fuelled with strontium-90 and caesium-137, are now in operation or being developed for use in weather stations, marine navigation aids and deep-sea monitoring devices. A plutonium-238 thermoelectric generator is in orbit, operating as the electric power source in a US Navy TRANSIT satellite. Generators are under development for use on US National Aeronautics and Space Administration missions. The large quantities of radioactivity involved in radioisotope power sources require that special attention be given to safety aspects of the units. Rigid safety requirements have been established and extensive tests have been conducted to ensure that these systems can be employed without creating undue
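    A rough sketch of the underlying power arithmetic: thermal power equals the decay rate times the energy deposited per decay, and the electrical output follows from the thermoelectric conversion efficiency. The strontium-90 half-life, average chain decay energy, fuel mass, and 5% efficiency used below are rounded illustrative values, not design figures from the programme described above.

```python
# Back-of-envelope power from a radioisotope heat source: thermal power is
# activity times the energy deposited per decay; electrical output follows
# from the thermoelectric conversion efficiency. All numbers are rounded
# textbook values used for illustration only.
import math

MEV_TO_J = 1.602e-13
AVOGADRO = 6.022e23

def specific_thermal_power(half_life_years, energy_per_decay_MeV, molar_mass_g):
    """Thermal power per gram of isotope, in W/g."""
    decay_const = math.log(2) / (half_life_years * 3.156e7)   # 1/s
    atoms_per_gram = AVOGADRO / molar_mass_g
    return decay_const * atoms_per_gram * energy_per_decay_MeV * MEV_TO_J

# Sr-90 (with the Y-90 daughter in equilibrium): ~28.8 y half-life,
# ~1.1 MeV average energy deposited per decay chain
p_thermal_per_g = specific_thermal_power(28.8, 1.1, 90.0)
mass_g = 100.0          # hypothetical fuel loading
efficiency = 0.05       # typical thermoelectric conversion efficiency

p_thermal = p_thermal_per_g * mass_g
p_electric = efficiency * p_thermal
print(f"{p_thermal_per_g:.2f} W/g thermal -> {p_electric:.1f} W electric "
      f"from {mass_g:.0f} g")
```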

  10. Investigating Primary Source Literacy

    Science.gov (United States)

    Archer, Joanne; Hanlon, Ann M.; Levine, Jennie A.

    2009-01-01

    Primary source research requires students to acquire specialized research skills. This paper presents results from a user study testing the effectiveness of a Web guide designed to convey the concepts behind "primary source literacy". The study also evaluated students' strengths and weaknesses when conducting primary source research. (Contains 3…

  11. Large Pelagics Telephone Survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Large Pelagics Telephone Survey (LPTS) collects fishing effort information directly from captains holding Highly Migratory Species (HMS) permits (required by...

  12. Technology Requirements for Information Management

    Science.gov (United States)

    Graves, Sara; Knoblock, Craig A.; Lannom, Larry

    2002-01-01

    This report provides the results of a panel study conducted into the technology requirements for information management in support of application domains of particular government interest, including digital libraries, mission operations, and scientific research. The panel concluded that it was desirable to have a coordinated program of R&D that pursues a science of information management focused on an environment typified by applications of government interest - highly distributed with very large amounts of data and a high degree of heterogeneity of sources, data, and users.

  13. Muon sources

    International Nuclear Information System (INIS)

    Parsa, Z.

    2001-01-01

    A full high energy muon collider may take considerable time to realize. However, intermediate steps in its direction are possible and could help facilitate the process. Employing an intense muon source to carry out forefront low energy research, such as the search for muon-number non-conservation, represents one interesting possibility. For example, the MECO proposal at BNL aims for 2 x 10^-17 sensitivity in its search for coherent muon-electron conversion in the field of a nucleus. To reach that goal requires the production, capture and stopping of muons at an unprecedented 10^11 μ/sec. If successful, such an effort would significantly advance the state of muon technology. More ambitious ideas for utilizing high intensity muon sources are also being explored. Building a muon storage ring for the purpose of providing intense high energy neutrino beams is particularly exciting. We present an overview of muon sources and an example of a muon storage ring based Neutrino Factory at BNL with various detector location possibilities.
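    A rough estimate shows why such an intense source is needed: a single-event sensitivity of 2 x 10^-17 corresponds to observing of order 5 x 10^16 stopped muons. The sketch below ignores capture fractions, detector acceptance, duty factor, and backgrounds, so the resulting live time is an idealized lower bound rather than an actual MECO running estimate.

```python
# Rough arithmetic behind the quoted muon rate: a single-event sensitivity of
# 2e-17 requires of order 1/2e-17 = 5e16 stopped muons (before accounting for
# capture fractions, acceptance, and duty factor, all ignored in this sketch).
sensitivity = 2e-17      # target single-event sensitivity
rate = 1e11              # stopped muons per second (quoted above)

muons_needed = 1.0 / sensitivity
live_time_s = muons_needed / rate
print(f"~{muons_needed:.1e} stopped muons, i.e. ~{live_time_s / 86400:.0f} days "
      "of ideal live time at 1e11 mu/s")
```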

  14. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

    Hydrogen production by water electrolysis represents nearly 4% of world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, and installation of large-scale hydrogen production plants will be needed. In this context, development of low-cost, large-scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and center of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large-scale electrolysers to produce hydrogen in the future. The different elect
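    The scale argument can be made concrete with the standard electrolysis arithmetic: Faraday's law fixes the charge needed per kilogram of hydrogen, and the cell voltage sets the electrical energy. The 1.8 V cell voltage and the 10 t/day fleet demand below are illustrative assumptions, not figures from the ALPHEA study.

```python
# Why "large scale": the electrical energy needed per kilogram of hydrogen
# follows from Faraday's law and the cell voltage. A practical cell voltage
# of ~1.8 V (above the 1.23 V thermodynamic minimum) is assumed here purely
# for illustration.
FARADAY = 96485.0        # C per mol of electrons
M_H2 = 2.016             # g/mol

def kwh_per_kg_h2(cell_voltage=1.8):
    moles_h2 = 1000.0 / M_H2            # mol of H2 per kg
    charge = moles_h2 * 2 * FARADAY     # 2 electrons per H2 molecule, in C
    energy_j = charge * cell_voltage
    return energy_j / 3.6e6             # J -> kWh

energy = kwh_per_kg_h2()
print(f"~{energy:.0f} kWh of electricity per kg of H2")

# A hypothetical fleet needing 10 t of H2 per day would draw roughly this
# much continuous electrolyser power:
power_mw = energy * 10_000 / 24 / 1000
print(f"~{power_mw:.0f} MW continuous for 10 t/day")
```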