WorldWideScience

Sample records for high average annual

  1. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  2. Spatial models for probabilistic prediction of wind power with application to annual-average and high temporal resolution data

    DEFF Research Database (Denmark)

    Lenzi, Amanda; Pinson, Pierre; Clemmensen, Line Katrine Harder

    2017-01-01

    average wind power generation, and for a high temporal resolution (typically wind power averages over 15-min time steps). In both cases, we use a spatial hierarchical statistical model in which spatial correlation is captured by a latent Gaussian field. We explore how such models can be handled … with stochastic partial differential approximations of Matérn Gaussian fields together with Integrated Nested Laplace Approximations. We demonstrate the proposed methods on wind farm data from Western Denmark, and compare the results to those obtained with standard geostatistical methods. The results show...

  3. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  4. GIS Tools to Estimate Average Annual Daily Traffic

    Science.gov (United States)

    2012-06-01

    This project presents five tools that were created for a geographical information system to estimate Annual Average Daily Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...

  5. Annual average equivalent dose of workers from the health area

    International Nuclear Information System (INIS)

    Daltro, T.F.L.; Campos, L.L.

    1992-01-01

    Personnel monitoring data collected between 1985 and 1991 for workers in the health area were studied, providing a general overview of how the annual average equivalent dose has changed. Two different aspects are presented: the analysis of the annual average equivalent dose in the different sectors of a hospital, and the comparison of these doses for the same sectors in different hospitals. (C.G.C.)

  6. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.

  7. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and annual sales of such lasers exceed $1 B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts, so these processes are not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulsewidth ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  8. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.; El Kenawy, Ahmed M.; Azorin-Molina, Cesar; Chura, O.; Trujillo, F.; Aguilar, Enric; Martín-Hernández, Natalia; López-Moreno, Juan Ignacio; Sanchez-Lorenzo, Arturo; Morán-Tejeda, Enrique; Revuelto, Jesús; Ycaza, P.; Friend, F.

    2015-01-01

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control

  9. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Science.gov (United States)

    2010-07-01

    ... volume of gasoline produced or imported in batch i. Si=The sulfur content of batch i determined under § 80.330. n=The number of batches of gasoline produced or imported during the averaging period. i=Individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...
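
    The averaging this provision describes is a volume-weighted mean of batch sulfur contents, Sa = Σ(Vi·Si) / Σ(Vi). A minimal sketch of that arithmetic, with invented batch values rather than anything from the regulation, might look like:

```python
# Volume-weighted annual average sulfur level, Sa = sum(Vi * Si) / sum(Vi),
# over batches of gasoline. Batch volumes and sulfur contents are invented
# for illustration; see 40 CFR 80.205 for the governing text.
batches = [
    {"volume_gal": 1_200_000, "sulfur_ppm": 25.0},
    {"volume_gal": 800_000, "sulfur_ppm": 32.0},
    {"volume_gal": 1_500_000, "sulfur_ppm": 28.5},
]

total_volume = sum(b["volume_gal"] for b in batches)
annual_average = sum(b["volume_gal"] * b["sulfur_ppm"] for b in batches) / total_volume
print(f"Annual refinery average sulfur: {annual_average:.1f} ppm")
```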

  10. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier that is being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department

  11. Global Annual Average PM2.5 Grids from MODIS and MISR Aerosol Optical Depth (AOD)

    Data.gov (United States)

    National Aeronautics and Space Administration — Global Annual PM2.5 Grids from MODIS and MISR Aerosol Optical Depth (AOD) data set represents a series of annual average grids (2001-2010) of fine particulate matter...

  12. Global Annual Average PM2.5 Grids from MODIS and MISR Aerosol Optical Depth (AOD)

    Data.gov (United States)

    National Aeronautics and Space Administration — Global Annual PM2.5 Grids from MODIS and MISR Aerosol Optical Depth (AOD) data sets represent a series of annual average grids (2001-2010) of fine particulate matter...

  13. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  14. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ~50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  15. High average power linear induction accelerator development

    International Nuclear Information System (INIS)

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs

  16. High-average-power solid state lasers

    International Nuclear Information System (INIS)

    Summers, M.A.

    1989-01-01

    In 1987, a broad-based, aggressive R&D program was initiated, aimed at developing the technologies necessary to make possible the use of solid state lasers capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and applications of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high-quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wave front aberrations in zig-zag slabs, understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs, and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs

  17. Estimation of average annual streamflows and power potentials for Alaska and Hawaii

    Energy Technology Data Exchange (ETDEWEB)

    Verdin, Kristine L. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL)

    2004-05-01

    This paper describes the work done to develop average annual streamflow estimates and power potential for the states of Alaska and Hawaii. The Elevation Derivatives for National Applications (EDNA) database was used, along with climatic datasets, to develop flow and power estimates for every stream reach in the EDNA database. Estimates of average annual streamflows were derived using state-specific regression equations, which were functions of average annual precipitation, precipitation intensity, drainage area, and other elevation-derived parameters. Power potential was calculated through the use of the average annual streamflow and the hydraulic head of each reach, which is calculated from the EDNA digital elevation model. In all, estimates of streamflow and power potential were calculated for over 170,000 stream segments in the Alaskan and Hawaiian datasets.
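
    As a rough illustration of the reach-level power calculation described above, gross hydraulic power scales as P = ρ·g·Q·H, with Q the average annual streamflow and H the hydraulic head of the reach. A minimal sketch with invented numbers (not values from the EDNA dataset):

```python
# Gross hydraulic power of a stream reach, P = rho * g * Q * H, as a sketch of
# the reach-by-reach calculation described above. Example numbers are invented.
RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

def reach_power_kw(flow_m3_s: float, head_m: float) -> float:
    """Gross hydraulic power of a reach in kilowatts."""
    return RHO_WATER * G * flow_m3_s * head_m / 1000.0

print(reach_power_kw(flow_m3_s=12.5, head_m=8.0))  # ~981 kW gross potential
```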

  18. Variation in the annual average radon concentration measured in homes in Mesa County, Colorado

    International Nuclear Information System (INIS)

    Rood, A.S.; George, J.L.; Langner, G.H. Jr.

    1990-04-01

    The purpose of this study is to examine the variability in the annual average indoor radon concentration. The TMC has been collecting annual average radon data for the past 5 years in 33 residential structures in Mesa County, Colorado. This interim report presents the data collected to date; the study is planned to continue. 62 refs., 3 figs., 12 tabs

  19. Radon and radon daughters indoors, problems in the determination of the annual average

    International Nuclear Information System (INIS)

    Swedjemark, G.A.

    1984-01-01

    The annual average of the concentration of radon and radon daughters in indoor air is required both in studies such as determining the collective dose to a population and for comparison with limits. For practical reasons, measurements are often carried out during a time period shorter than a year. Methods are presented for estimating the uncertainties due to temporal variations in an annual average calculated from measurements carried out over sampling periods of various lengths. These methods have been applied to the results from long-term measurements of radon-222 in a few houses. The possibility of using correction factors to obtain a more adequate annual average has also been studied, and some examples are given. (orig.)

  20. High-Average, High-Peak Current Injector Design

    CERN Document Server

    Biedron, S G; Virgo, M

    2005-01-01

    There is increasing interest in high-average-power (>100 kW), μm-range FELs. These machines require high peak current (~1 kA), modest transverse emittance, and beam energies of ~100 MeV. High average currents (~1 A) place additional constraints on the design of the injector. We present a design for an injector intended to produce the required peak currents at the injector itself, eliminating the need for magnetic compression within the linac. This reduces the potential for beam quality degradation due to CSR and space charge effects within magnetic chicanes.

  1. High Average Power, High Energy Short Pulse Fiber Laser System

    Energy Technology Data Exchange (ETDEWEB)

    Messerly, M J

    2007-11-13

    Recently, continuous-wave fiber laser systems with output powers in excess of 500 W with good beam quality have been demonstrated [1]. High energy, ultrafast, chirped-pulse fiber laser systems have achieved record output energies of 1 mJ [2]. However, these high-energy systems have not been scaled beyond a few watts of average output power. Fiber laser systems are attractive for many applications because they offer the promise of efficient, compact, robust systems that are turnkey. Applications such as cutting, drilling and materials processing, front-end systems for high energy pulsed lasers (such as petawatts), and laser-based sources of high-spatial-coherence, high-flux x-rays all require high energy short pulses, and two of these three applications also require high average power. The challenge in creating a high energy chirped-pulse fiber laser system is to find a way to scale the output energy while avoiding nonlinear effects and maintaining good beam quality in the amplifier fiber. To this end, our 3-year LDRD program sought to demonstrate a high energy, high average power fiber laser system. This work included exploring designs of large-mode-area optical fiber amplifiers for high energy systems as well as understanding the issues associated with chirped pulse amplification in optical fiber amplifier systems.

  2. A sampling strategy for estimating plot average annual fluxes of chemical elements from forest soils

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.; Vries, de W.

    2010-01-01

    A sampling strategy for estimating spatially averaged annual element leaching fluxes from forest soils is presented and tested in three Dutch forest monitoring plots. In this method sampling locations and times (days) are selected by probability sampling. Sampling locations were selected by

  3. Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data

    Science.gov (United States)

    Gebert, Warren A.; Walker, John F.; Kennedy, James L.

    2011-01-01

    Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.
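
    The base-flow-per-unit-area approximation mentioned above is simple arithmetic once the units are handled. A hedged sketch of that conversion, with illustrative numbers rather than values from the report:

```python
# Recharge approximated as average annual base flow divided by drainage area,
# converted from (ft^3/s per mi^2) to inches per year. Numbers are illustrative.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
SQFT_PER_SQMI = 5280.0 ** 2
INCHES_PER_FOOT = 12.0

def recharge_inches_per_year(base_flow_cfs: float, drainage_area_mi2: float) -> float:
    feet_per_year = base_flow_cfs * SECONDS_PER_YEAR / (drainage_area_mi2 * SQFT_PER_SQMI)
    return feet_per_year * INCHES_PER_FOOT

print(recharge_inches_per_year(base_flow_cfs=35.0, drainage_area_mi2=50.0))  # ~9.5 in/yr
```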

  4. Estimating Loess Plateau Average Annual Precipitation with Multiple Linear Regression Kriging and Geographically Weighted Regression Kriging

    Directory of Open Access Journals (Sweden)

    Qiutong Jin

    2016-06-01

    Estimating the spatial distribution of precipitation is an important and challenging task in hydrology, climatology, ecology, and environmental science. In order to generate a highly accurate distribution map of average annual precipitation for the Loess Plateau in China, multiple linear regression Kriging (MLRK) and geographically weighted regression Kriging (GWRK) methods were employed using precipitation data from the period 1980–2010 from 435 meteorological stations. The predictors in regression Kriging were selected by stepwise regression analysis from many auxiliary environmental factors, such as elevation (DEM), normalized difference vegetation index (NDVI), solar radiation, slope, and aspect. All predictor distribution maps had a 500 m spatial resolution. Validation precipitation data from 130 hydrometeorological stations were used to assess the prediction accuracies of the MLRK and GWRK approaches. Results showed that both prediction maps with a 500 m spatial resolution interpolated by MLRK and GWRK had a high accuracy and captured detailed spatial distribution data; however, MLRK produced a lower prediction error and a higher variance explanation than GWRK, although the differences were small, in contrast to conclusions from similar studies.
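
    Regression Kriging, as used above, is a two-step procedure: fit a trend on covariates, then spatially interpolate the residuals and add them back. The sketch below keeps that structure but, for brevity, substitutes inverse-distance weighting for true Kriging of the residuals; all station data and covariates are synthetic, not from the Loess Plateau study.

```python
import numpy as np

# Two-step structure of regression kriging: (1) fit a linear trend on covariates
# (e.g., scaled DEM, NDVI), (2) interpolate the residuals spatially and add them
# back to the trend. Inverse-distance weighting stands in for kriging here.
rng = np.random.default_rng(0)
n = 200
xy = rng.uniform(0, 100, size=(n, 2))          # station coordinates (km), synthetic
covariates = rng.normal(size=(n, 2))           # e.g., standardized elevation, NDVI
precip = 450 + 80 * covariates[:, 0] - 30 * covariates[:, 1] + rng.normal(0, 20, n)

# Step 1: linear regression trend
X = np.column_stack([np.ones(n), covariates])
beta, *_ = np.linalg.lstsq(X, precip, rcond=None)
residuals = precip - X @ beta

# Step 2: interpolate residuals to a target location and add them to the trend
def predict(target_xy, target_cov, power=2.0):
    trend = np.concatenate([[1.0], target_cov]) @ beta
    d = np.linalg.norm(xy - target_xy, axis=1)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return trend + np.sum(w * residuals) / np.sum(w)

print(predict(np.array([50.0, 50.0]), np.array([0.3, -0.1])))
```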

  5. Changes in Average Annual Precipitation in Argentina’s Pampa Region and Their Possible Causes

    Directory of Open Access Journals (Sweden)

    Silvia Pérez

    2015-01-01

    Changes in annual rainfall in five sub-regions of the Argentine Pampa Region (Rolling, Central, Mesopotamian, Flooding and Southern) were examined for the period 1941 to 2010 using data from representative locations in each sub-region. Dubious series were adjusted by means of a homogeneity test and changes in mean value were evaluated using a hydrometeorological time series segmentation method. In addition, an association was sought between shifts in mean annual rainfall and changes in large-scale atmospheric pressure systems, as measured by the Atlantic Multidecadal Oscillation (AMO), the Pacific Decadal Oscillation (PDO) and the Southern Oscillation Index (SOI). The results indicate that the Western Pampas (Central and Southern) are more vulnerable to abrupt changes in average annual rainfall than the Eastern Pampas (Mesopotamian, Rolling and Flooding). Their vulnerability is further increased by their having the lowest average rainfall. The AMO showed significant negative correlations with all sub-regions, while the PDO and SOI showed significant positive and negative correlations respectively with the Central, Flooding and Southern Pampa. The fact that the PDO and AMO are going through the phases of their cycles that tend to reduce rainfall in much of the Pampas helps explain the lower rainfall recorded in the Western Pampas sub-regions in recent years. This has had a significant impact on agriculture and the environment.

  6. High Average Power Fiber Laser for Satellite Communications, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Very high average power lasers with high electrical-top-optical (E-O) efficiency, which also support pulse position modulation (PPM) formats in the MHz-data rate...

  7. Socio-demographic predictors and average annual rates of caesarean section in Bangladesh between 2004 and 2014.

    Directory of Open Access Journals (Sweden)

    Md Nuruzzaman Khan

    Globally the rates of caesarean section (CS) have steadily increased in recent decades. This rise is not fully accounted for by increases in clinical factors which indicate the need for CS. We investigated the socio-demographic predictors of CS and the average annual rates of CS in Bangladesh between 2004 and 2014. Data were derived from four waves of nationally representative Bangladesh Demographic and Health Survey (BDHS) conducted between 2004 and 2014. Rate of change analysis was used to calculate the average annual rate of increase in CS from 2004 to 2014, by socio-demographic categories. Multi-level logistic regression was used to identify the socio-demographic predictors of CS in a cross-sectional analysis of the 2014 BDHS data. CS rates increased from 3.5% in 2004 to 23% in 2014. The average annual rate of increase in CS was higher among women of advanced maternal age (≥35 years), urban areas, and relatively high socio-economic status; with higher education, and who regularly accessed antenatal services. The multi-level logistic regression model indicated that lower (≤19) and advanced maternal age (≥35), urban location, relatively high socio-economic status, higher education, birth of few children (≤2), antenatal healthcare visits, overweight or obese were the key factors associated with increased utilization of CS. Underweight was a protective factor for CS. The use of CS has increased considerably in Bangladesh over the survey years. This rising trend and the risk of having CS vary significantly across regions and socio-economic status. Very high use of CS among women of relatively high socio-economic status and substantial urban-rural difference call for public awareness and practice guideline enforcement aimed at optimizing the use of CS.

  8. Socio-demographic predictors and average annual rates of caesarean section in Bangladesh between 2004 and 2014.

    Science.gov (United States)

    Khan, Md Nuruzzaman; Islam, M Mofizul; Shariff, Asma Ahmad; Alam, Md Mahmudul; Rahman, Md Mostafizur

    2017-01-01

    Globally the rates of caesarean section (CS) have steadily increased in recent decades. This rise is not fully accounted for by increases in clinical factors which indicate the need for CS. We investigated the socio-demographic predictors of CS and the average annual rates of CS in Bangladesh between 2004 and 2014. Data were derived from four waves of nationally representative Bangladesh Demographic and Health Survey (BDHS) conducted between 2004 and 2014. Rate of change analysis was used to calculate the average annual rate of increase in CS from 2004 to 2014, by socio-demographic categories. Multi-level logistic regression was used to identify the socio-demographic predictors of CS in a cross-sectional analysis of the 2014 BDHS data. CS rates increased from 3.5% in 2004 to 23% in 2014. The average annual rate of increase in CS was higher among women of advanced maternal age (≥35 years), urban areas, and relatively high socio-economic status; with higher education, and who regularly accessed antenatal services. The multi-level logistic regression model indicated that lower (≤19) and advanced maternal age (≥35), urban location, relatively high socio-economic status, higher education, birth of few children (≤2), antenatal healthcare visits, overweight or obese were the key factors associated with increased utilization of CS. Underweight was a protective factor for CS. The use of CS has increased considerably in Bangladesh over the survey years. This rising trend and the risk of having CS vary significantly across regions and socio-economic status. Very high use of CS among women of relatively high socio-economic status and substantial urban-rural difference call for public awareness and practice guideline enforcement aimed at optimizing the use of CS.

  9. EnviroAtlas - Average Annual Precipitation 1981-2010 by HUC12 for the Conterminous United States

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset provides the average annual precipitation by 12-digit Hydrologic Unit (HUC). The values were estimated from maps produced by the PRISM...

  10. EnviroAtlas - Annual average potential wind energy resource by 12-digit HUC for the Conterminous United States

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the annual average potential wind energy resource in kilowatt hours per square kilometer per day for each 12-digit Hydrologic Unit...

  11. High average power solid state laser power conditioning system

    International Nuclear Information System (INIS)

    Steinkraus, R.F.

    1987-01-01

    The power conditioning system for the High Average Power Laser program at Lawrence Livermore National Laboratory (LLNL) is described. The system has been operational for two years. It is high voltage, high power, fault protected, and solid state. The power conditioning system drives flashlamps that pump solid state lasers. Flashlamps are driven by silicon-controlled rectifier (SCR) switched, resonant-charged (LC) discharge pulse-forming networks (PFNs). The system uses fiber optics for control and diagnostics. Energy and thermal diagnostics are monitored by computers

  12. The annual averaged atmospheric dispersion factor and deposition factor according to methods of atmospheric stability classification

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Hae Sun; Jeong, Hyo Joon; Kim, Eun Han; Han, Moon Hee; Hwang, Won Tae [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-09-15

    This study analyzes the differences in the annual averaged atmospheric dispersion factor and ground deposition factor produced using two classification methods of atmospheric stability, which are based on a vertical temperature difference and on the standard deviation of horizontal wind direction fluctuation. The Daedeok and Wolsong nuclear sites were chosen for the assessment, and the meteorological data at 10 m were applied to the evaluation of atmospheric stability. The XOQDOQ software program was used to calculate atmospheric dispersion factors and ground deposition factors. The calculation distances were chosen at 400 m, 800 m, 1,200 m, 1,600 m, 2,400 m, and 3,200 m from the radioactive material release points. All of the atmospheric dispersion factors generated using the atmospheric stability based on the vertical temperature difference were higher than those from the standard deviation of horizontal wind direction fluctuation. On the other hand, the ground deposition factors were the same regardless of the classification method, as they were based on the graph obtained from empirical data presented in the Nuclear Regulatory Commission's Regulatory Guide 1.111, which is unrelated to the atmospheric stability for a ground-level release. These results are based on the meteorological data collected over the course of one year at the specified sites; nevertheless, the classification method of atmospheric stability using the vertical temperature difference is expected to be more conservative.

  13. High-average-power diode-pumped Yb: YAG lasers

    International Nuclear Information System (INIS)

    Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B

    1999-01-01

    A scaleable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high average power Yb:YAG lasers that utilize a rod configured gain element. Previously, this rod configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High beam quality (M² = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual rod configuration consisting of two, 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M² value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with an M² < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 nsec pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) Hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes. (2) Compound laser rods with flanged-nonabsorbing-endcaps fabricated by diffusion bonding. (3) Techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished barrel rods

  14. Thermal effects in high average power optical parametric amplifiers.

    Science.gov (United States)

    Rothhardt, Jan; Demmler, Stefan; Hädrich, Steffen; Peschel, Thomas; Limpert, Jens; Tünnermann, Andreas

    2013-03-01

    Optical parametric amplifiers (OPAs) have the reputation of being average power scalable due to the instantaneous nature of the parametric process (zero quantum defect). This Letter reveals serious challenges originating from thermal load in the nonlinear crystal caused by absorption. We investigate these thermal effects in high average power OPAs based on beta barium borate. Absorption of both pump and idler waves is identified to contribute significantly to heating of the nonlinear crystal. A temperature increase of up to 148 K with respect to the environment is observed and mechanical tensile stress up to 40 MPa is found, indicating a high risk of crystal fracture under such conditions. By restricting the idler to a wavelength range far from absorption bands and removing the crystal coating we reduce the peak temperature and the resulting temperature gradient significantly. Guidelines for further power scaling of OPAs and other nonlinear devices are given.

  15. Eighth CW and High Average Power RF Workshop

    CERN Document Server

    2014-01-01

    We are pleased to announce the next Continuous Wave and High Average RF Power Workshop, CWRF2014, to take place at Hotel NH Trieste, Trieste, Italy from 13 to 16 May, 2014. This is the eighth in the CWRF workshop series and will be hosted by Elettra - Sincrotrone Trieste S.C.p.A. (www.elettra.eu). CWRF2014 will provide an opportunity for designers and users of CW and high average power RF systems to meet and interact in a convivial environment to share experiences and ideas on applications which utilize high-power klystrons, gridded tubes, combined solid-state architectures, high-voltage power supplies, high-voltage modulators, high-power combiners, circulators, cavities, power couplers and tuners. New ideas for high-power RF system upgrades and novel ways of RF power generation and distribution will also be discussed. CWRF2014 sessions will start on Tuesday morning and will conclude on Friday lunchtime. A visit to Elettra and FERMI will be organized during the workshop. ORGANIZING COMMITTEE (OC): Al...

  16. High average power diode pumped solid state lasers for CALIOPE

    International Nuclear Information System (INIS)

    Comaskey, B.; Halpin, J.; Moran, B.

    1994-07-01

    Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers

  17. Metallurgical source-contribution analysis of PM10 annual average concentration: A dispersion modeling approach in moravian-silesian region

    Directory of Open Access Journals (Sweden)

    P. Jančík

    2013-10-01

    The goal of the article is to present an analysis of the metallurgical industry's contribution to annual average PM10 concentrations in the Moravian-Silesian Region, by means of air pollution modelling in accordance with the Czech reference methodology SYMOS´97.

  18. Modeling and forecasting monthly movement of annual average solar insolation based on the least-squares Fourier-model

    International Nuclear Information System (INIS)

    Yang, Zong-Chang

    2014-01-01

    Highlights: • Introduces a finite Fourier-series model for evaluating the monthly movement of annual average solar insolation. • Presents a forecast method for predicting its movement based on the Fourier-series model extended in the least-squares sense. • Shows that its movement is well described by a low number of harmonics, approximately a 6-term Fourier series. • Predicts its movement most accurately with fewer than 6 Fourier terms. - Abstract: Solar insolation is one of the most important measurement parameters in many fields. Modeling and forecasting the monthly movement of annual average solar insolation is of increasing importance in engineering, science and economics. In this study, Fourier analysis employing a finite Fourier series is proposed for evaluating the monthly movement of annual average solar insolation and is extended in the least-squares sense for forecasting. The conventional Fourier analysis, which is the most common analysis method in the frequency domain, cannot be directly applied for prediction. Incorporating the least-squares method, the introduced Fourier-series model is extended to predict this movement. The extended Fourier-series forecasting model obtains its optimal Fourier coefficients in the least-squares sense based on previous monthly movements. The proposed method is applied to experiments and yields satisfying results for different cities (states). It is indicated that the monthly movement of annual average solar insolation is well described by a low number of harmonics, approximately a 6-term Fourier series. The extended Fourier forecasting model predicts the monthly movement of annual average solar insolation best with fewer than 6 Fourier terms.
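
    The model described above amounts to an ordinary least-squares fit of a truncated Fourier series to twelve monthly values. A minimal sketch, with synthetic insolation values and an assumed (small) number of harmonics:

```python
import numpy as np

# Least-squares fit of a truncated Fourier series to twelve monthly values of
# annual-average solar insolation, in the spirit of the model above. The
# monthly values and the number of harmonics are assumptions for illustration.
months = np.arange(12)
insolation = np.array([2.9, 3.6, 4.5, 5.4, 6.1, 6.5,
                       6.4, 5.9, 5.0, 4.0, 3.1, 2.7])  # kWh/m^2/day, synthetic

n_harmonics = 3                     # the paper reports roughly 6 Fourier terms suffice
omega = 2.0 * np.pi / 12.0
columns = [np.ones(12)]
for k in range(1, n_harmonics + 1):
    columns.append(np.cos(k * omega * months))
    columns.append(np.sin(k * omega * months))
A = np.column_stack(columns)        # design matrix: constant term plus harmonics

coeffs, *_ = np.linalg.lstsq(A, insolation, rcond=None)
fitted = A @ coeffs
print(np.max(np.abs(fitted - insolation)))   # worst-case monthly residual of the fit
```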

  19. High-average-power laser medium based on silica glass

    Science.gov (United States)

    Fujimoto, Yasushi; Nakatsuka, Masahiro

    2000-01-01

    Silica glass is one of the most attractive materials for a high-average-power laser. We have developed a new laser material based on silica glass using a zeolite method, which is effective for uniform dispersion of rare earth ions in silica glass. A high-quality medium, which is bubble-free and has very low refractive-index distortion, is required for laser action. As the main cause of bubbling is hydroxy species remaining in the gel, we carefully chose the colloidal silica particles, the pH value of the hydrochloric acid used for hydrolysis of tetraethylorthosilicate in the sol-gel process, and the temperature and atmosphere during sintering, and thereby obtained a bubble-free transparent rare-earth-doped silica glass. The refractive-index distortion of the sample is also discussed.

  20. 40 CFR 80.825 - How is the refinery or importer annual average toxics value determined?

    Science.gov (United States)

    2010-07-01

    ... volume of applicable gasoline produced or imported in batch i. Ti = The toxics value of batch i. n = The number of batches of gasoline produced or imported during the averaging period. i = Individual batch of gasoline produced or imported during the averaging period. (b) The calculation specified in paragraph (a...

  1. Procedure for the characterization of radon potential in existing dwellings and to assess the annual average indoor radon concentration

    International Nuclear Information System (INIS)

    Collignan, Bernard; Powaga, Emilie

    2014-01-01

    Risk assessment for indoor radon exposure is based on the annual average indoor radon activity concentration. To assess the radon exposure in a building, measurement is generally performed over at least two months during the heating period in order to be representative of the annual average value, because radon presence indoors can vary strongly over time. This measurement protocol is fairly reliable but may be limiting in radon risk management, particularly during a real estate transaction, due to the duration of the measurement and the restriction of the measurement period. A previous field study defined a rapid methodology to characterize radon entry in dwellings. The objective of this study was, first, to test this methodology in various dwellings to assess its relevance with a daily test. Second, a ventilation model was used to assess numerically the air renewal of a building, the indoor air quality throughout the year and the annual average indoor radon activity concentration, based on local meteorological conditions, some building characteristics and in-situ characterization of indoor pollutant emission laws. Experimental results obtained in thirteen individual dwellings showed that it is generally possible to obtain a representative characterization of radon entry into homes. It was also possible to refine the methodology defined in the previous study. In addition, numerical assessments of the annual average indoor radon activity concentration generally showed good agreement with measured values. These results are encouraging for allowing a procedure with a short measurement time to be used to characterize the long-term radon potential of dwellings. - Highlights: • Test of a daily procedure to characterize radon potential in dwellings. • Numerical assessment of the annual radon concentration. • Procedure applied to thirteen dwellings, characterization generally satisfactory. • Procedure useful to manage radon risk in dwellings, for real

  2. Recent developments in high average power driver technology

    International Nuclear Information System (INIS)

    Prestwich, K.R.; Buttram, M.T.; Rohwein, G.J.

    1979-01-01

    Inertial confinement fusion (ICF) reactors will require driver systems operating with tens to hundreds of megawatts of average power. The pulse power technology that will be required to build such drivers is in a primitive state of development. Recent developments in repetitive pulse power are discussed. A high-voltage transformer has been developed and operated at 3 MV in a single-pulse experiment and is being tested at 1.5 MV, 5 kJ and 10 pps. A low-loss, 1 MV, 10 kJ, 10 pps Marx generator is being tested. Test results from gas-dynamic spark gaps that operate both in the 100 kV and 700 kV range are reported. A 250 kV, 1.5 kA/cm², 30 ns electron beam diode has operated stably for 1.6 × 10⁵ pulses

  3. Estimation of Annual Average Soil Loss, Based on Rusle Model in Kallar Watershed, Bhavani Basin, Tamil Nadu, India

    Science.gov (United States)

    Rahaman, S. Abdul; Aruchamy, S.; Jegankumar, R.; Ajeez, S. Abdul

    2015-10-01

    Soil erosion is a widespread environmental challenge faced in the Kallar watershed today. Erosion is defined as the movement of soil by water and wind, and it occurs in the Kallar watershed under a wide range of land uses. Erosion by water can be dramatic during storm events, resulting in wash-outs and gullies. It can also be insidious, occurring as sheet and rill erosion during heavy rains. Most of the soil lost by water erosion is lost through the processes of sheet and rill erosion. Land degradation and the subsequent soil erosion and sedimentation play a significant role in impairing water resources within sub-watersheds, watersheds and basins. Using conventional methods to assess soil erosion risk is expensive and time consuming. A comprehensive methodology that integrates remote sensing and Geographic Information Systems (GIS), coupled with the use of an empirical model (Revised Universal Soil Loss Equation, RUSLE) to assess risk, can identify and assess soil erosion potential and estimate the value of soil loss. GIS data layers including rainfall erosivity (R), soil erodibility (K), slope length and steepness (LS), cover management (C) and conservation practice (P) factors were computed to determine their effects on average annual soil loss in the study area. The final map of annual soil erosion shows a maximum soil loss of 398.58 t ha⁻¹ yr⁻¹. Based on the result, soil erosion was classified into a severity map with five classes: very low, low, moderate, high and critical. Further, the RUSLE factors were grouped into two categories, soil erosion susceptibility (A = RKLS) and soil erosion hazard (A = RKLSCP), and both were computed. It is understood that C and P are factors that can be controlled and thus can greatly reduce soil loss through management and conservation measures.
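
    The RUSLE estimate itself is a cell-by-cell product of the five factor rasters, A = R·K·LS·C·P. A minimal sketch of that step on tiny synthetic grids (values are illustrative, not from the Kallar watershed data):

```python
import numpy as np

# Cell-by-cell RUSLE computation A = R * K * LS * C * P on raster layers,
# here represented by tiny synthetic 2x2 grids. Values are placeholders.
R  = np.array([[5200.0, 5400.0], [5100.0, 5300.0]])   # rainfall erosivity
K  = np.array([[0.30, 0.28], [0.32, 0.25]])           # soil erodibility
LS = np.array([[1.8, 2.4], [0.9, 3.1]])               # slope length-steepness
C  = np.array([[0.15, 0.08], [0.20, 0.05]])           # cover management
P  = np.array([[1.0, 0.8], [1.0, 0.6]])               # conservation practice

A_susceptibility = R * K * LS          # soil erosion susceptibility (RKLS)
A_hazard = R * K * LS * C * P          # soil erosion hazard (full RUSLE)
print(A_hazard)                        # annual soil loss per cell, t/ha/yr
```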

  4. Strengthened glass for high average power laser applications

    International Nuclear Information System (INIS)

    Cerqua, K.A.; Lindquist, A.; Jacobs, S.D.; Lambropoulos, J.

    1987-01-01

    Recent advancements in high repetition rate and high average power laser systems have put increasing demands on the development of improved solid state laser materials with high thermal loading capabilities. The authors have developed a process for strengthening a commercially available Nd-doped phosphate glass utilizing an ion-exchange process. Results of thermal loading fracture tests on moderate size (160 x 15 x 8 mm) glass slabs have shown a 6-fold improvement in power loading capabilities for strengthened samples over unstrengthened slabs. Fractographic analysis of post-fracture samples has given insight into the mechanism of fracture in both unstrengthened and strengthened samples. Additional stress analysis calculations have supported these findings. In addition to processing the glass' surface during strengthening in a manner which preserves its post-treatment optical quality, the authors have developed an in-house optical fabrication technique utilizing acid polishing to minimize subsurface damage in samples prior to exchange treatment. Finally, extension of the strengthening process to alternate geometries of laser glass has produced encouraging results, which may expand the potential of strengthened glass in laser systems, making it an exciting prospect for many applications

  5. Potential of high-average-power solid state lasers

    International Nuclear Information System (INIS)

    Emmett, J.L.; Krupke, W.F.; Sooy, W.R.

    1984-01-01

    We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels

  6. Assessing the effects of land-use changes on annual average gross erosion

    Directory of Open Access Journals (Sweden)

    Armando Brath

    2002-01-01

    The effects of land-use changes on potential annual gross erosion in the uplands of the Emilia-Romagna administrative region, a broad geographical area of some 22 000 km² in northern-central Italy, have been analysed by application of the Universal Soil Loss Equation (USLE). The presence of an extended mountain chain, particularly subject to soil erosion, makes the estimation of annual gross erosion relevant in defining regional soil-conservation strategies. The USLE, derived empirically for plots, is usually applied at the basin scale. In the present study, the method is implemented in a distributed framework for the hilly and mountainous portion of Emilia-Romagna through a discretisation of the region into elementary square cells. The annual gross erosion is evaluated by combining morphological, pedological and climatic information. The stream network and the tributary area drained by each elementary cell, which are needed for the local application of the USLE, are derived automatically from a Digital Elevation Model (DEM) of grid size 250 × 250 m. The rainfall erosivity factor is evaluated from local estimates of rainfall of six-hour storm duration and two-year return period. The soil erodibility and slope length-steepness factors are derived from digital maps of land use, pedology and geomorphology. Furthermore, historical land-use maps of the district of Bologna (a large portion — 3720 km² — of the area under study) allow the effect of actual land-use changes on the soil erosion process to be assessed. The analysis shows the influence of land-use changes on annual gross erosion as well as the increasing vulnerability of upland areas to soil erosion processes during recent decades. Keywords: USLE, gross erosion, distributed modelling, land use changes, northern-central Italy

  7. [Algorithm for taking into account the average annual background of air pollution in the assessment of health risks].

    Science.gov (United States)

    Fokin, M V

    2013-01-01

    Assessment of health risks from air pollution by emissions from industrial facilities that does not take into account the average annual background air pollution does not comply with sanitary legislation. However, the Russian Federal Service for Hydrometeorology and Environmental Monitoring issues official certificates only for a limited number of areas covered by full-programme observations at stationary monitoring points. Questions of accounting for the average annual background air pollution in the evaluation of health risks from exposure to emissions from industrial facilities are considered.

  8. ASSESSMENT OF THE AVERAGE ANNUAL EFFECTIVE DOSES FOR THE INHABITANTS OF THE SETTLEMENTS LOCATED IN THE TERRITORIES CONTAMINATED DUE TO THE CHERNOBYL ACCIDENT

    Directory of Open Access Journals (Sweden)

    N. G. Vlasova

    2012-01-01

    A catalogue of the average annual effective exposure doses of the inhabitants of the territories contaminated due to the Chernobyl accident has been developed according to the method for assessing the average annual effective exposure doses of settlement inhabitants. The cost-efficacy of the use of the average annual effective dose assessment method was 250 000 USD for the current 5 years. The average annual effective dose exceeded 1 mSv/year for 191 of 2,613 Belarusian settlements. About 50 000 persons live in these settlements.

  9. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

    Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

    A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of 2.5 μs and a memory capacity of 256 x 12 bit words. The number of sweeps is selectable through a front panel control in binary steps from 2³ to 2¹². The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
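
    The 'stable averaging' idea is to keep the stored value equal to the true running average after every sweep, so the display stays calibrated throughout acquisition; coherent averaging of N sweeps then improves S/N by roughly √N, consistent with the 36 dB quoted above for 2¹² sweeps. A hedged sketch (the instrument's actual fixed-point implementation is not described here):

```python
import math

# 'Stable averaging': after sweep n the stored value is already the calibrated
# average, avg_n = avg_{n-1} + (x_n - avg_{n-1}) / n, channel by channel.
# Coherent averaging of N sweeps improves S/N by ~sqrt(N); for N = 2**12 that
# is ~36 dB, matching the figure quoted above. Sweep data here are placeholders.
def stable_average(sweeps):
    """Running per-channel average over a sequence of equal-length sweeps."""
    avg = None
    for n, sweep in enumerate(sweeps, start=1):
        if avg is None:
            avg = [float(x) for x in sweep]
        else:
            avg = [a + (x - a) / n for a, x in zip(avg, sweep)]
    return avg

print(stable_average([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]))  # [3.0, 4.0]
print(10.0 * math.log10(2 ** 12))  # ~36.1 dB maximum S/N improvement
```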

  10. Impact of trees on pollutant dispersion in street canyons: A numerical study of the annual average effects in Antwerp, Belgium.

    Science.gov (United States)

    Vranckx, Stijn; Vos, Peter; Maiheu, Bino; Janssen, Stijn

    2015-11-01

    Effects of vegetation on pollutant dispersion receive increased attention in attempts to reduce air pollutant concentration levels in the urban environment. In this study, we examine the influence of vegetation on the concentrations of traffic pollutants in urban street canyons using numerical simulations with the CFD code OpenFOAM. This CFD approach is validated against literature wind tunnel data of traffic pollutant dispersion in street canyons. The impact of trees is simulated for a variety of vegetation types and the full range of approaching wind directions at 15° intervals. All these results are combined using meteorological statistics, including effects of seasonal leaf loss, to determine the annual average effect of trees in street canyons. This analysis is performed for two pollutants, elemental carbon (EC) and PM10, using background concentrations and emission strengths for the city of Antwerp, Belgium. The results show that due to the presence of trees the annual average pollutant concentrations increase by about 8% (range 1% to 13%) for EC and by about 1.4% (range 0.2% to 2.6%) for PM10. The study indicates that this annual effect is considerably smaller than earlier estimates, which are generally based on a specific set of governing conditions (1 wind direction, fully leafed trees and peak-hour traffic emissions). Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Estimated average annual rate of change of CD4(+) T-cell counts in patients on combination antiretroviral therapy

    DEFF Research Database (Denmark)

    Mocroft, Amanda; Phillips, Andrew N; Ledergerber, Bruno

    2010-01-01

    BACKGROUND: Patients receiving combination antiretroviral therapy (cART) might continue treatment with a virologically failing regimen. We sought to identify annual change in CD4+ T-cell count according to levels of viraemia in patients on cART. METHODS: A total of 111,371 CD4+ T-cell counts … and viral load measurements in 8,227 patients were analysed. Annual change in CD4+ T-cell numbers was estimated using mixed models. RESULTS: After adjustment, the estimated average annual change in CD4+ T-cell count significantly increased when viral load was … cells/mm³, 95% confidence interval [CI] 26.6-34.3), was stable when viral load was 500-9,999 copies/ml (3.1 cells/mm³, 95% CI -5.3-11.5) and decreased when viral load was ≥10,000 copies/ml (-14.8 cells/mm³, 95% CI -4.5 to -25.1). Patients taking a boosted protease inhibitor (PI) regimen had more positive annual CD4+ T-cell...

  12. High Average Power UV Free Electron Laser Experiments At JLAB

    International Nuclear Information System (INIS)

    Douglas, David; Benson, Stephen; Evtushenko, Pavel; Gubeli, Joseph; Hernandez-Garcia, Carlos; Legg, Robert; Neil, George; Powers, Thomas; Shinn, Michelle; Tennant, Christopher; Williams, Gwyn

    2012-01-01

    Having produced 14 kW of average power at ∼2 microns, JLAB has shifted its focus to the ultraviolet portion of the spectrum. This presentation will describe the JLab UV Demo FEL, present specifics of its driver ERL, and discuss the latest experimental results from FEL experiments and machine operations.

  13. Short pulse mid-infrared amplifier for high average power

    CSIR Research Space (South Africa)

    Botha, LR

    2006-09-01

    High pressure CO2 lasers are good candidates for amplifying picosecond mid infrared pulses. High pressure CO2 lasers are notorious for being unreliable and difficult to operate. In this paper a high pressure CO2 laser is presented based on well...

  14. Picosecond mid-infrared amplifier for high average power.

    CSIR Research Space (South Africa)

    Botha, LR

    2007-04-01

    High pressure CO2 lasers are good candidates for amplifying picosecond mid infrared pulses. High pressure CO2 lasers are notorious for being unreliable and difficult to operate. In this paper a high pressure CO2 laser is presented based on well...

  15. Average Anisotropy Characteristics of High Energy Cosmic Ray ...

    Indian Academy of Sciences (India)

    Further, Shrivastava & Shukla (1996) reported that there is a high correlation between solar wind velocity and the Ap index. As is known from convection-diffusion approximation theory, solar wind velocity plays an important role in cosmic ray modulation. In the absence of solar wind data, one can use the daily values of the Ap index.

  16. Who Are Most, Average, or High-Functioning Adults?

    Science.gov (United States)

    Gregg, Noel; Coleman, Chris; Lindstrom, Jennifer; Lee, Christopher

    2007-01-01

    The growing number of high-functioning adults seeking accommodations from testing agencies and postsecondary institutions presents an urgent need to ensure reliable and valid diagnostic decision making. The potential for this population to make significant contributions to society will be greater if we provide the learning and testing…

  17. Energy stability in a high average power FEL

    International Nuclear Information System (INIS)

    Merminga, L.; Bisognano, J.; Delayen, J.

    1995-01-01

    Recirculating, energy-recovering linacs can be used as driver accelerators for high power FELs. Instabilities which arise from fluctuations of the cavity fields or beam current are investigated. Energy changes can cause beam loss on apertures, or, when coupled to M, phase oscillations. Both effects change the beam induced voltage in the cavities and can lead to unstable variations of the accelerating field. Stability analysis for small perturbations from equilibrium is performed and threshold currents are determined. Furthermore, the analytical model is extended to include feedback. Comparison with simulation results derived from direct integration of the equations of motion is presented. Design strategies to increase the instability threshold are discussed and the UV Demo FEL, proposed for construction at CEBAF, and the INP Recuperatron at Novosibirsk are used as examples

  18. Optimization and Annual Average Power Predictions of a Backward Bent Duct Buoy Oscillating Water Column Device Using the Wells Turbine.

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Christopher S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bull, Diana L [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Willits, Steven M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Fontaine, Arnold A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-08-01

    This Technical Report presents work completed by The Applied Research Laboratory at The Pennsylvania State University, in conjunction with Sandia National Labs, on the optimization of the power conversion chain (PCC) design to maximize the Average Annual Electric Power (AAEP) output of an Oscillating Water Column (OWC) device. The design consists of two independent stages. First, the design of a floating OWC, a Backward Bent Duct Buoy (BBDB), and second the design of the PCC. The pneumatic power output of the BBDB in random waves is optimized through the use of a hydrodynamically coupled, linear, frequency-domain, performance model that links the oscillating structure to internal air-pressure fluctuations. The PCC optimization is centered on the selection and sizing of a Wells Turbine and electric power generation equipment. The optimization of the PCC involves the following variables: the type of Wells Turbine (fixed or variable pitched, with and without guide vanes), the radius of the turbine, the optimal vent pressure, the sizing of the power electronics, and number of turbines. Also included in this Technical Report are further details on how rotor thrust and torque are estimated, along with further details on the type of variable frequency drive selected.
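
    To illustrate the second design stage described above, here is a minimal brute-force search over two PCC variables (Wells turbine radius and vent pressure) that maximizes an annual average electric power estimate; the power model, parameter ranges and numbers are invented placeholders, not the coupled frequency-domain model or turbine data used in the report.

        from itertools import product

        def annual_average_electric_power(radius_m, vent_pressure_kpa):
            """Hypothetical AAEP estimate (kW) for a given Wells turbine radius and vent pressure."""
            pneumatic_kw = 500.0                                        # placeholder pneumatic power
            turbine_eff = max(0.0, 0.6 - 0.1 * abs(radius_m - 1.2))     # toy efficiency curve
            vent_loss = 0.05 * max(0.0, 10.0 - vent_pressure_kpa)       # toy venting penalty
            return pneumatic_kw * turbine_eff * (1.0 - min(vent_loss, 1.0))

        candidates = product([0.8, 1.0, 1.2, 1.4], [5.0, 7.5, 10.0, 12.5])   # radius (m), vent pressure (kPa)
        best = max(candidates, key=lambda design: annual_average_electric_power(*design))
        print("best (radius, vent pressure):", best,
              "-> AAEP ~", round(annual_average_electric_power(*best), 1), "kW")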

  19. Effect of Average Annual Mean Serum Ferritin Levels on QTc Interval and QTc Dispersion in Beta-Thalassemia Major

    Directory of Open Access Journals (Sweden)

    Yazdan Ghandi

    2017-08-01

    Full Text Available Background: There is evidence indicating impaired cardiomyocytic contractility, delayed electrical conduction and increased electrophysiological heterogeneity due to iron toxicity in beta-thalassemia major patients. In the present study, we compared the electrocardiographic and echocardiographic features of beta-thalassemia major patients with a healthy control group. Materials and Methods: The average annual serum ferritin levels of fifty beta-thalassemia major patients were assessed. For each patient, corrected QT (QTc) intervals and QTc dispersions (QTcd) were calculated, and V1S and V5R were measured. All subjects underwent two-dimensional M-mode echocardiography and Doppler study and were compared with 50 healthy subjects as a control group. Results: QTc interval and dispersion were significantly higher in beta-thalassemia major patients (P = 0.001). The mean V5R (20.04 ± 4.34 vs. 17.14 ± 2.55 mm) and V1S (10.24 ± 2.62 vs. 7.83 ± 0.38 mm) were considerably higher in patients than in the control group. Peak mitral inflow velocity at early diastole and the early to late ratio in the case group were markedly higher (P
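
    For reference, a minimal sketch of how QTc and QTc dispersion are commonly obtained (Bazett's correction, dispersion as the maximum minus minimum QTc across leads); the interval values are hypothetical and only a subset of the 12 leads is shown, so this is not the study's measurement protocol.

        import math

        def qtc_bazett(qt_s, rr_s):
            """Bazett's correction: QTc = QT / sqrt(RR), both intervals in seconds."""
            return qt_s / math.sqrt(rr_s)

        rr = 0.80                                                      # R-R interval (s), hypothetical
        qt_by_lead = {"I": 0.36, "II": 0.37, "V1": 0.35, "V5": 0.38}   # subset of leads, illustrative

        qtc = {lead: qtc_bazett(qt, rr) for lead, qt in qt_by_lead.items()}
        qtc_dispersion = max(qtc.values()) - min(qtc.values())
        print({lead: round(v, 3) for lead, v in qtc.items()},
              "QTc dispersion:", round(qtc_dispersion, 3), "s")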

  20. Record high-average current from a high-brightness photoinjector

    Energy Technology Data Exchange (ETDEWEB)

    Dunham, Bruce; Barley, John; Bartnik, Adam; Bazarov, Ivan; Cultrera, Luca; Dobbins, John; Hoffstaetter, Georg; Johnson, Brent; Kaplan, Roger; Karkare, Siddharth; Kostroun, Vaclav; Li Yulin; Liepe, Matthias; Liu Xianghong; Loehl, Florian; Maxson, Jared; Quigley, Peter; Reilly, John; Rice, David; Sabol, Daniel [Cornell Laboratory for Accelerator-Based Sciences and Education, Cornell University, Ithaca, New York 14853 (United States); and others

    2013-01-21

    High-power, high-brightness electron beams are of interest for many applications, especially as drivers for free electron lasers and energy recovery linac light sources. For these particular applications, photoemission injectors are used in most cases, and the initial beam brightness from the injector sets a limit on the quality of the light generated at the end of the accelerator. At Cornell University, we have built such a high-power injector using a DC photoemission gun followed by a superconducting accelerating module. Recent results will be presented demonstrating record setting performance up to 65 mA average current with beam energies of 4-5 MeV.

  1. High-throughput machining using high average power ultrashort pulse lasers and ultrafast polygon scanner

    Science.gov (United States)

    Schille, Joerg; Schneider, Lutz; Streek, André; Kloetzer, Sascha; Loeschner, Udo

    2016-03-01

    In this paper, high-throughput ultrashort pulse laser machining is investigated on various industrial grade metals (Aluminium, Copper, Stainless steel) and Al2O3 ceramic at unprecedented processing speeds. This is achieved by using a high pulse repetition frequency picosecond laser with a maximum average output power of 270 W in conjunction with a unique, in-house developed two-axis polygon scanner. Initially, different concepts of polygon scanners are engineered and tested to find the optimal architecture for ultrafast and precise laser beam scanning. A remarkable scan speed of 1,000 m/s is achieved on the substrate, and thanks to the resulting low pulse overlap, thermal accumulation and plasma absorption effects are avoided at pulse repetition frequencies of up to 20 MHz. In order to identify optimum processing conditions for efficient high-average-power laser machining, the depths of cavities produced under varied parameter settings are analyzed and, from the results obtained, the characteristic removal values are specified. The maximum removal rate is as high as 27.8 mm3/min for Aluminium, 21.4 mm3/min for Copper, 15.3 mm3/min for Stainless steel and 129.1 mm3/min for Al2O3 when the full available laser power is applied at the optimum pulse repetition frequency.
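
    A small sketch of the two quantities behind these results: the pulse-to-pulse overlap implied by a given scan speed and repetition rate, and the volumetric removal rate obtained from a machined cavity. The spot diameter and cavity figures are illustrative assumptions, not values reported in the paper.

        def pulse_overlap(scan_speed_m_s, rep_rate_hz, spot_diameter_um):
            """Fractional overlap of consecutive pulses (0 means no overlap)."""
            pitch_um = scan_speed_m_s / rep_rate_hz * 1e6   # pulse-to-pulse spacing on the substrate
            return max(0.0, 1.0 - pitch_um / spot_diameter_um)

        def removal_rate_mm3_min(cavity_volume_mm3, machining_time_s):
            return cavity_volume_mm3 / (machining_time_s / 60.0)

        # 1,000 m/s scan speed, 20 MHz repetition rate, assumed 50 um spot diameter:
        print(f"pulse overlap: {pulse_overlap(1000.0, 20e6, 50.0):.2f}")
        # Illustrative cavity of 2.5 mm^3 machined in 5.4 s:
        print(f"removal rate: {removal_rate_mm3_min(2.5, 5.4):.1f} mm^3/min")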

  2. Potential for efficient frequency conversion at high average power using solid state nonlinear optical materials

    International Nuclear Information System (INIS)

    Eimerl, D.

    1985-01-01

    High-average-power frequency conversion using solid state nonlinear materials is discussed. Recent laboratory experience and new developments in design concepts show that current technology, a few tens of watts, may be extended by several orders of magnitude. For example, using KD*P, efficient doubling (>70%) of Nd:YAG at average powers approaching 100 KW is possible; and for doubling to the blue or ultraviolet regions, the average power may approach 1 MW. Configurations using segmented apertures permit essentially unlimited scaling of average power. High average power is achieved by configuring the nonlinear material as a set of thin plates with a large ratio of surface area to volume and by cooling the exposed surfaces with a flowing gas. The design and material fabrication of such a harmonic generator are well within current technology

  3. Average annual doses, lifetime doses and associated risk of cancer death for radiation workers in various fuel fabrication facilities in India

    International Nuclear Information System (INIS)

    Iyer, P.S.; Dhond, R.V.

    1980-01-01

    Lifetime doses based on average annual doses are estimated for radiation workers in various fuel fabrication facilities in India. For such cumulative doses, the risk of radiation-induced cancer death is computed. The methodology for arriving at these estimates and the assumptions made are discussed. Based on personnel monitoring records from 1966 to 1978, the average annual dose equivalent for radiation workers is estimated as 0.9 mSv (90 mrem), and the maximum risk of cancer death associated with this occupational dose as 1.35×10⁻⁵ a⁻¹, as compared with the risk of death due to natural causes of 7×10⁻⁴ a⁻¹ and the risk of death due to background radiation alone of 1.5×10⁻⁵ a⁻¹. (author)
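
    The quoted risk figure can be reproduced with a short calculation; note that the risk coefficient (1.5×10⁻² per Sv) and the number of working years are assumptions inferred for illustration rather than values stated in the record.

        annual_dose_sv = 0.9e-3        # 0.9 mSv average annual dose equivalent (from the record)
        risk_per_sv = 1.5e-2           # assumed risk coefficient per sievert (inferred, not stated)
        working_years = 47             # assumed working life, e.g. ages 18-65

        annual_risk = annual_dose_sv * risk_per_sv
        lifetime_dose_msv = annual_dose_sv * working_years * 1e3
        print(f"annual risk ~ {annual_risk:.2e} per year, lifetime dose ~ {lifetime_dose_msv:.0f} mSv")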

  4. Improved performance of high average power semiconductor arrays for applications in diode pumped solid state lasers

    International Nuclear Information System (INIS)

    Beach, R.; Emanuel, M.; Benett, W.; Freitas, B.; Ciarlo, D.; Carlson, N.; Sutton, S.; Skidmore, J.; Solarz, R.

    1994-01-01

    The average power performance capability of semiconductor diode laser arrays has improved dramatically over the past several years. These performance improvements, combined with cost reductions pursued by LLNL and others in the fabrication and packaging of diode lasers, have continued to reduce the price per average watt of laser diode radiation. Presently, we are at the point where the manufacturers of commercial high average power solid state laser systems used in material processing applications can now seriously consider the replacement of their flashlamp pumps with laser diode pump sources. Additionally, a low cost technique developed and demonstrated at LLNL for optically conditioning the output radiation of diode laser arrays has enabled a new and scalable average power diode-end-pumping architecture that can be simply implemented in diode pumped solid state laser systems (DPSSL's). This development allows the high average power DPSSL designer to look beyond the Nd ion for the first time. Along with high average power DPSSL's which are appropriate for material processing applications, low and intermediate average power DPSSL's are now realizable at low enough costs to be attractive for use in many medical, electronic, and lithographic applications

  5. High-throughput machining using a high-average power ultrashort pulse laser and high-speed polygon scanner

    Science.gov (United States)

    Schille, Joerg; Schneider, Lutz; Streek, André; Kloetzer, Sascha; Loeschner, Udo

    2016-09-01

    High-throughput ultrashort pulse laser machining is investigated on various industrial grade metals (aluminum, copper, and stainless steel) and Al2O3 ceramic at unprecedented processing speeds. This is achieved by using a high-average power picosecond laser in conjunction with a unique, in-house developed polygon mirror-based biaxial scanning system. Therefore, different concepts of polygon scanners are engineered and tested to find the best architecture for high-speed and precision laser beam scanning. In order to identify the optimum conditions for efficient processing when using high-average laser powers, the depths of cavities made in the samples by varying the processing parameter settings are analyzed and, from the results obtained, the characteristic removal values are specified. For overlapping pulses of optimum fluence, the removal rate is as high as 27.8 mm3/min for aluminum, 21.4 mm3/min for copper, 15.3 mm3/min for stainless steel, and 129.1 mm3/min for Al2O3, when a laser beam of 187 W average laser powers irradiates. On stainless steel, it is demonstrated that the removal rate increases to 23.3 mm3/min when the laser beam is very fast moving. This is thanks to the low pulse overlap as achieved with 800 m/s beam deflection speed; thus, laser beam shielding can be avoided even when irradiating high-repetitive 20-MHz pulses.

  6. Development of a high average current polarized electron source with long cathode operational lifetime

    Energy Technology Data Exchange (ETDEWEB)

    C. K. Sinclair; P. A. Adderley; B. M. Dunham; J. C. Hansknecht; P. Hartmann; M. Poelker; J. S. Price; P. M. Rutt; W. J. Schneider; M. Steigerwald

    2007-02-01

    Substantially more than half of the electromagnetic nuclear physics experiments conducted at the Continuous Electron Beam Accelerator Facility of the Thomas Jefferson National Accelerator Facility (Jefferson Laboratory) require highly polarized electron beams, often at high average current. Spin-polarized electrons are produced by photoemission from various GaAs-based semiconductor photocathodes, using circularly polarized laser light with photon energy slightly larger than the semiconductor band gap. The photocathodes are prepared by activation of the clean semiconductor surface to negative electron affinity using cesium and oxidation. Historically, in many laboratories worldwide, these photocathodes have had short operational lifetimes at high average current, and have often deteriorated fairly quickly in ultrahigh vacuum even without electron beam delivery. At Jefferson Lab, we have developed a polarized electron source in which the photocathodes degrade exceptionally slowly without electron emission, and in which ion back bombardment is the predominant mechanism limiting the operational lifetime of the cathodes during electron emission. We have reproducibly obtained cathode 1/e dark lifetimes over two years, and 1/e charge density and charge lifetimes during electron beam delivery of over 2×10⁵ C/cm² and 200 C, respectively. This source is able to support uninterrupted high average current polarized beam delivery to three experimental halls simultaneously for many months at a time. Many of the techniques we report here are directly applicable to the development of GaAs photoemission electron guns to deliver high average current, high brightness unpolarized beams.
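
    A sketch of how a 1/e charge lifetime, one of the figures of merit quoted above, can be extracted from quantum-efficiency decay during beam delivery; the QE data are synthetic and the single-exponential model QE(Q) = QE0·exp(-Q/tau) is an assumption made for illustration.

        import numpy as np

        charge_C = np.array([0.0, 50.0, 100.0, 150.0, 200.0])   # extracted charge (C), synthetic
        qe_percent = np.array([5.1, 3.8, 3.1, 2.3, 1.9])         # measured quantum efficiency (%), synthetic

        # Linear fit of ln(QE) versus extracted charge: the slope is -1 / tau.
        slope, intercept = np.polyfit(charge_C, np.log(qe_percent), 1)
        print(f"1/e charge lifetime ~ {-1.0 / slope:.0f} C")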

  7. Predicting Freshman Grade Point Average From College Admissions Test Scores and State High School Test Scores

    OpenAIRE

    Koretz, Daniel; Yu, C; Mbekeani, Preeya Pandya; Langi, M.; Dhaliwal, Tasminda Kaur; Braslow, David Arthur

    2016-01-01

    The current focus on assessing “college and career readiness” raises an empirical question: How do high school tests compare with college admissions tests in predicting performance in college? We explored this using data from the City University of New York and public colleges in Kentucky. These two systems differ in the choice of college admissions test, the stakes for students on the high school test, and demographics. We predicted freshman grade point average (FGPA) from high school GPA an...

  8. High-Average-Power Diffraction Pulse-Compression Gratings Enabling Next-Generation Ultrafast Laser Systems

    Energy Technology Data Exchange (ETDEWEB)

    Alessi, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-11-01

    Pulse compressors for ultrafast lasers have been identified as a technology gap in the push towards high peak power systems with high average powers for industrial and scientific applications. Gratings for ultrashort (sub-150 fs) pulse compressors are metallic and can absorb a significant percentage of laser energy, resulting in up to 40% loss as well as thermal issues which degrade on-target performance. We have developed a next generation gold grating technology which we have scaled to petawatt size. This resulted in improvements in efficiency, uniformity and processing as compared to previous substrate-etched gratings for high average power. This new design has a deposited dielectric material for the grating ridge rather than etching directly into the glass substrate. It has been observed that average powers as low as 1 W in a compressor can cause distortions in the on-target beam. We have developed and tested a method of actively cooling diffraction gratings which, in the case of gold gratings, can support a petawatt peak power laser with up to 600 W average power. We demonstrated thermo-mechanical modeling of a grating in its use environment and benchmarked it with experimental measurement. Multilayer dielectric (MLD) gratings are not yet used for these high peak power, ultrashort pulse durations due to their design challenges. We have designed and fabricated broad bandwidth, low dispersion MLD gratings suitable for delivering 30 fs pulses at high average power. This new grating design requires the use of a novel Out Of Plane (OOP) compressor, which we have modeled, designed, built and tested. This prototype compressor yielded a transmission of 90% for a pulse with 45 nm bandwidth, free of spatial and angular chirp. In order to evaluate gratings and compressors built in this project, we have commissioned a joule-class ultrafast Ti:Sapphire laser system. Combining the grating cooling and MLD technologies developed here could enable petawatt laser systems to

  9. Free-space optical communications with peak and average constraints: High SNR capacity approximation

    KAUST Repository

    Chaaban, Anas

    2015-09-07

    The capacity of the intensity-modulation direct-detection (IM-DD) free-space optical channel with both average and peak intensity constraints is studied. A new capacity lower bound is derived by using a truncated-Gaussian input distribution. Numerical evaluation shows that this capacity lower bound is nearly tight at high signal-to-noise ratio (SNR), while it is shown analytically that the gap to capacity upper bounds is a small constant at high SNR. In particular, the gap to the high-SNR asymptotic capacity of the channel under either a peak or an average constraint is small. This leads to a simple approximation of the high SNR capacity. Additionally, a new capacity upper bound is derived using sphere-packing arguments. This bound is tight at high SNR for a channel with a dominant peak constraint.

  10. Total Quality Management (TQM) Practices and School Climate amongst High, Average and Low Performance Secondary Schools

    Science.gov (United States)

    Ismail, Siti Noor

    2014-01-01

    Purpose: This study attempted to determine whether the dimensions of TQM practices are predictors of school climate. It aimed to identify the level of TQM practices and school climate in three different categories of schools, namely high, average and low performance schools. The study also sought to examine which dimensions of TQM practices…

  11. Recent advances in the development of high average power induction accelerators for industrial and environmental applications

    International Nuclear Information System (INIS)

    Neau, E.L.

    1994-01-01

    Short-pulse accelerator technology developed during the early 1960's through the late 1980's is being extended to high average power systems capable of use in industrial and environmental applications. Processes requiring high dose levels and/or high volume throughput will require systems with beam power levels from several hundreds of kilowatts to megawatts. Beam accelerating potentials can range from less than 1 MeV to as much as 10 MeV depending on the type of beam, depth of penetration required, and the density of the product being treated. This paper addresses the present status of a family of high average power systems, with output beam power levels up to 200 kW, now in operation that use saturable core switches to achieve output pulse widths of 50 to 80 nanoseconds. Inductive adders and field emission cathodes are used to generate beams of electrons or x-rays at up to 2.5 MeV over areas of 1000 cm². Similar high average power technology is being used at ≤ 1 MeV to drive repetitive ion beam sources for treatment of material surfaces over 100's of cm²

  12. Development of a high average current polarized electron source with long cathode operational lifetime

    Directory of Open Access Journals (Sweden)

    C. K. Sinclair

    2007-02-01

    Full Text Available Substantially more than half of the electromagnetic nuclear physics experiments conducted at the Continuous Electron Beam Accelerator Facility of the Thomas Jefferson National Accelerator Facility (Jefferson Laboratory) require highly polarized electron beams, often at high average current. Spin-polarized electrons are produced by photoemission from various GaAs-based semiconductor photocathodes, using circularly polarized laser light with photon energy slightly larger than the semiconductor band gap. The photocathodes are prepared by activation of the clean semiconductor surface to negative electron affinity using cesium and oxidation. Historically, in many laboratories worldwide, these photocathodes have had short operational lifetimes at high average current, and have often deteriorated fairly quickly in ultrahigh vacuum even without electron beam delivery. At Jefferson Lab, we have developed a polarized electron source in which the photocathodes degrade exceptionally slowly without electron emission, and in which ion back bombardment is the predominant mechanism limiting the operational lifetime of the cathodes during electron emission. We have reproducibly obtained cathode 1/e dark lifetimes over two years, and 1/e charge density and charge lifetimes during electron beam delivery of over 2×10⁵ C/cm² and 200 C, respectively. This source is able to support uninterrupted high average current polarized beam delivery to three experimental halls simultaneously for many months at a time. Many of the techniques we report here are directly applicable to the development of GaAs photoemission electron guns to deliver high average current, high brightness unpolarized beams.

  13. Development of high-average-power-laser medium based on silica glass

    International Nuclear Information System (INIS)

    Fujimoto, Yasushi; Nakatsuka, Masahiro

    2000-01-01

    We have developed a high-average-power laser material based on silica glass. A new method using Zeolite X is effective for homogeneously dispersing rare earth ions in silica glass to obtain a high quantum yield. A high quality medium, which is free of bubbles and has very low refractive index distortion, is required for the realization of laser action; therefore, the gelation and sintering processes must be treated carefully, including the selection of colloidal silica, the pH value for hydrolysis of tetraethylorthosilicate, and the sintering history. The quality of the sintered sample and its applications are discussed. (author)

  14. High Temperature Materials Laboratory third annual report

    Energy Technology Data Exchange (ETDEWEB)

    Tennery, V.J.; Foust, F.M.

    1990-12-01

    The High Temperature Materials Laboratory has completed its third year of operation as a designated DOE User Facility at the Oak Ridge National Laboratory. Growth of the user program is evidenced by the number of outside institutions who have executed user agreements since the facility began operation in 1987. A total of 88 nonproprietary agreements (40 university and 48 industry) and 20 proprietary agreements (1 university, 19 industry) are now in effect. Sixty-eight nonproprietary research proposals (39 from university, 28 from industry, and 1 other government facility) and 8 proprietary proposals were considered during this reporting period. Research projects active in FY 1990 are summarized.

  15. Semi-analytical wave functions in relativistic average atom model for high-temperature plasmas

    International Nuclear Information System (INIS)

    Guo Yonghui; Duan Yaoyong; Kuai Bin

    2007-01-01

    The semi-analytical method is utilized for solving a relativistic average atom model for high-temperature plasmas. Semi-analytical wave functions and the corresponding energy eigenvalues, containing only a numerical factor, are obtained by fitting the potential function in the average atom to a hydrogen-like one. The full equations for the model are enumerated, and particular attention is paid to the detailed procedures, including the numerical techniques and computer code design. When the plasma temperature is comparatively high, the semi-analytical results agree quite well with those obtained by using a full numerical method for the same model and with those calculated from slightly different physical models, and the accuracy and computational efficiency of the results are noteworthy. The drawbacks of this model are also analyzed. (authors)

  16. High average power Q-switched 1314 nm two-crystal Nd:YLF laser

    CSIR Research Space (South Africa)

    Botha, RC

    2015-02-01

    Full Text Available (Optics Letters, Vol. 40, No. 4) High average power Q-switched 1314 nm two-crystal Nd:YLF laser, R. C. Botha, W. Koen, M. J. D. Esser, C. Bollig, W. L. Combrinck, H. M. von Bergmann, and H. J. Strauss; HartRAO, P.O. Box 443...

  17. High energy, high average power solid state green or UV laser

    Science.gov (United States)

    Hackel, Lloyd A.; Norton, Mary; Dane, C. Brent

    2004-03-02

    A system for producing a green or UV output beam for illuminating a large area with relatively high beam fluence. A Nd:glass laser produces a near-infrared output by means of an oscillator that generates a high quality but low power beam, followed by multi-pass amplification in a zig-zag slab amplifier with wavefront correction in a phase conjugator at the midway point of the multi-pass amplification. The green or UV output is generated by means of conversion crystals that follow the final pass through the zig-zag slab amplifier.

  18. High average power, highly brilliant laser-produced plasma source for soft X-ray spectroscopy.

    Science.gov (United States)

    Mantouvalou, Ioanna; Witte, Katharina; Grötzsch, Daniel; Neitzel, Michael; Günther, Sabrina; Baumann, Jonas; Jung, Robert; Stiel, Holger; Kanngiesser, Birgit; Sandner, Wolfgang

    2015-03-01

    In this work, a novel laser-produced plasma source is presented which delivers pulsed broadband soft X-radiation in the range between 100 and 1200 eV. The source was designed in view of long operating hours, high stability, and cost effectiveness. It relies on a rotating and translating metal target and achieves high stability through an on-line monitoring device using a four quadrant extreme ultraviolet diode in a pinhole camera arrangement. The source can be operated with three different laser pulse durations and various target materials and is equipped with two beamlines for simultaneous experiments. Characterization measurements are presented with special emphasis on the source position and emission stability of the source. As a first application, a near edge X-ray absorption fine structure measurement on a thin polyimide foil shows the potential of the source for soft X-ray spectroscopy.

  19. Specification of optical components for a high average-power laser environment

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, J.R.; Chow, R.; Rinmdahl, K.A.; Willis, J.B.; Wong, J.N.

    1997-06-25

    Optical component specifications for the high-average-power lasers and transport system used in the Atomic Vapor Laser Isotope Separation (AVLIS) plant must address demanding system performance requirements. The need for high performance optics has to be balanced against the practical desire to reduce the supply risks of cost and schedule. This is addressed in optical system design, careful planning with the optical industry, demonstration of plant quality parts, qualification of optical suppliers and processes, comprehensive procedures for evaluation and test, and a plan for corrective action.

  20. Rf system modeling for the high average power FEL at CEBAF

    International Nuclear Information System (INIS)

    Merminga, L.; Fugitt, J.; Neil, G.; Simrock, S.

    1995-01-01

    High beam loading and energy recovery compounded by use of superconducting cavities, which requires tight control of microphonic noise, place stringent constraints on the linac rf system design of the proposed high average power FEL at CEBAF. Longitudinal dynamics imposes off-crest operation, which in turn implies a large tuning angle to minimize power requirements. Amplitude and phase stability requirements are consistent with demonstrated performance at CEBAF. A numerical model of the CEBAF rf control system is presented and the response of the system is examined under large parameter variations, microphonic noise, and beam current fluctuations. Studies of the transient behavior lead to a plausible startup and recovery scenario

  1. High average power CW FELs [Free Electron Laser] for application to plasma heating: Designs and experiments

    International Nuclear Information System (INIS)

    Booske, J.H.; Granatstein, V.L.; Radack, D.J.; Antonsen, T.M. Jr.; Bidwell, S.; Carmel, Y.; Destler, W.W.; Latham, P.E.; Levush, B.; Mayergoyz, I.D.; Zhang, Z.X.

    1989-01-01

    A short period wiggler (period ∼ 1 cm), sheet beam FEL has been proposed as a low-cost source of high average power (1 MW) millimeter-wave radiation for plasma heating and space-based radar applications. Recent calculations and experiments have confirmed the feasibility of this concept in such critical areas as rf wall heating, intercepted beam ("body") current, and high voltage (0.5 - 1 MV) sheet beam generation and propagation. Results of preliminary low-gain sheet beam FEL oscillator experiments using a field emission diode and pulse line accelerator have verified that lasing occurs at the predicted FEL frequency. Measured start oscillation currents also appear consistent with theoretical estimates. Finally, we consider the possibilities of using a short-period, superconducting planar wiggler for improved beam confinement, as well as access to the high gain, strong pump Compton regime with its potential for highly efficient FEL operation.

  2. Predicting Long-Term College Success through Degree Completion Using ACT[R] Composite Score, ACT Benchmarks, and High School Grade Point Average. ACT Research Report Series, 2012 (5)

    Science.gov (United States)

    Radunzel, Justine; Noble, Julie

    2012-01-01

    This study compared the effectiveness of ACT[R] Composite score and high school grade point average (HSGPA) for predicting long-term college success. Outcomes included annual progress towards a degree (based on cumulative credit-bearing hours earned), degree completion, and cumulative grade point average (GPA) at 150% of normal time to degree…

  3. Analysis of the distributions of hourly NO2 concentrations contributing to annual average NO2 concentrations across the European monitoring network between 2000 and 2014

    Directory of Open Access Journals (Sweden)

    C. S. Malley

    2018-03-01

    Full Text Available Exposure to nitrogen dioxide (NO2) is associated with negative human health effects, both for short-term peak concentrations and from long-term exposure to a wider range of NO2 concentrations. For the latter, the European Union has established an air quality limit value of 40 µg m⁻³ as an annual average. However, factors such as proximity and strength of local emissions, atmospheric chemistry, and meteorological conditions mean that there is substantial variation in the hourly NO2 concentrations contributing to an annual average concentration. The aim of this analysis was to quantify the nature of this variation at thousands of monitoring sites across Europe through the calculation of a standard set of chemical climatology statistics. Specifically, at each monitoring site that satisfied data capture criteria for inclusion in this analysis, annual NO2 concentrations, as well as the percentage contribution from each month, hour of the day, and hourly NO2 concentrations divided into 5 µg m⁻³ bins were calculated. Across Europe, 2010–2014 average annual NO2 concentrations (NO2AA) exceeded the annual NO2 limit value at 8 % of > 2500 monitoring sites. The application of this chemical climatology approach showed that sites with distinct monthly, hour of day, and hourly NO2 concentration bin contributions to NO2AA were not grouped into specific regions of Europe; furthermore, within relatively small geographic regions there were sites with similar NO2AA, but with differences in these contributions. Specifically, at sites with highest NO2AA, there were generally similar contributions from across the year, but there were also differences in the contribution of peak vs. moderate hourly NO2 concentrations to NO2AA, and from different hours across the day. Trends between 2000 and 2014 for 259 sites indicate that, in general, the contribution to NO2AA from winter months has increased, as has the contribution from the rush-hour periods of
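
    A compact sketch of the chemical climatology statistics described above for a single site: the percentage contribution of each month, hour of day, and 5 µg m⁻³ concentration bin to the annual NO2 level, computed here on a synthetic hourly series rather than monitoring-network data.

        import numpy as np
        import pandas as pd

        hours = pd.date_range("2014-01-01", "2014-12-31 23:00", freq="h")
        rng = np.random.default_rng(0)
        no2 = pd.Series(rng.gamma(shape=2.0, scale=15.0, size=hours.size), index=hours)  # ug/m3, synthetic

        annual_total = no2.sum()
        monthly_contrib = no2.groupby(no2.index.month).sum() / annual_total * 100
        hourly_contrib = no2.groupby(no2.index.hour).sum() / annual_total * 100
        bins = np.arange(0, no2.max() + 5, 5)
        bin_contrib = no2.groupby(pd.cut(no2, bins=bins)).sum() / annual_total * 100

        print("annual mean NO2:", round(no2.mean(), 1), "ug/m3")
        print("contribution of January:", round(monthly_contrib.loc[1], 1), "%")
        print("contribution of the 08:00 hour:", round(hourly_contrib.loc[8], 1), "%")
        print("largest single 5 ug/m3 bin contribution:", round(bin_contrib.max(), 1), "%")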

  4. Analysis of the distributions of hourly NO2 concentrations contributing to annual average NO2 concentrations across the European monitoring network between 2000 and 2014

    Science.gov (United States)

    Malley, Christopher S.; von Schneidemesser, Erika; Moller, Sarah; Braban, Christine F.; Hicks, W. Kevin; Heal, Mathew R.

    2018-03-01

    Exposure to nitrogen dioxide (NO2) is associated with negative human health effects, both for short-term peak concentrations and from long-term exposure to a wider range of NO2 concentrations. For the latter, the European Union has established an air quality limit value of 40 µg m⁻³ as an annual average. However, factors such as proximity and strength of local emissions, atmospheric chemistry, and meteorological conditions mean that there is substantial variation in the hourly NO2 concentrations contributing to an annual average concentration. The aim of this analysis was to quantify the nature of this variation at thousands of monitoring sites across Europe through the calculation of a standard set of chemical climatology statistics. Specifically, at each monitoring site that satisfied data capture criteria for inclusion in this analysis, annual NO2 concentrations, as well as the percentage contribution from each month, hour of the day, and hourly NO2 concentrations divided into 5 µg m⁻³ bins were calculated. Across Europe, 2010-2014 average annual NO2 concentrations (NO2AA) exceeded the annual NO2 limit value at 8 % of > 2500 monitoring sites. The application of this chemical climatology approach showed that sites with distinct monthly, hour of day, and hourly NO2 concentration bin contributions to NO2AA were not grouped into specific regions of Europe; furthermore, within relatively small geographic regions there were sites with similar NO2AA, but with differences in these contributions. Specifically, at sites with highest NO2AA, there were generally similar contributions from across the year, but there were also differences in the contribution of peak vs. moderate hourly NO2 concentrations to NO2AA, and from different hours across the day. Trends between 2000 and 2014 for 259 sites indicate that, in general, the contribution to NO2AA from winter months has increased, as has the contribution from the rush-hour periods of the day, while the contribution from

  5. Incidence Rates of Clinical Mastitis among Canadian Holsteins Classified as High, Average, or Low Immune Responders

    Science.gov (United States)

    Miglior, Filippo; Mallard, Bonnie A.

    2013-01-01

    The objective of this study was to compare the incidence rate of clinical mastitis (IRCM) between cows classified as high, average, or low for antibody-mediated immune responses (AMIR) and cell-mediated immune responses (CMIR). In collaboration with the Canadian Bovine Mastitis Research Network, 458 lactating Holsteins from 41 herds were immunized with a type 1 and a type 2 test antigen to stimulate adaptive immune responses. A delayed-type hypersensitivity test to the type 1 test antigen was used as an indicator of CMIR, and serum antibody of the IgG1 isotype to the type 2 test antigen was used for AMIR determination. By using estimated breeding values for these traits, cows were classified as high, average, or low responders. The IRCM was calculated as the number of cases of mastitis experienced over the total time at risk throughout the 2-year study period. High-AMIR cows had an IRCM of 17.1 cases per 100 cow-years, which was significantly lower than average and low responders, with 27.9 and 30.7 cases per 100 cow-years, respectively. Low-AMIR cows tended to have the most severe mastitis. No differences in the IRCM were noted when cows were classified based on CMIR, likely due to the extracellular nature of mastitis-causing pathogens. The results of this study demonstrate the desirability of breeding dairy cattle for enhanced immune responses to decrease the incidence and severity of mastitis in the Canadian dairy industry. PMID:23175290

  6. Research on DC-RF superconducting photocathode injector for high average power FELs

    International Nuclear Information System (INIS)

    Zhao Kui; Hao Jiankui; Hu Yanle; Zhang Baocheng; Quan Shengwen; Chen Jiaer; Zhuang Jiejia

    2001-01-01

    To obtain high average current electron beams for a high average power Free Electron Laser (FEL), a DC-RF superconducting injector is designed. It consists of a DC extraction gap, a 1+1/2-cell superconducting cavity and a coaxial input system. The DC gap, which takes the form of a Pierce configuration, is connected to the 1+1/2-cell superconducting cavity. The photocathode is attached to the negative electrode of the DC gap. The anode forms the bottom of the 1/2 cell. Simulations are made to model the beam dynamics of the electron beams extracted by the DC gap and accelerated by the superconducting cavity. High quality electron beams with emittance lower than 3 π mm mrad can be obtained. The optimization of experiments with the DC gap, as well as the design of experiments with the coaxial coupler, have all been completed. An optimized 1+1/2-cell superconducting cavity is in the process of being studied and manufactured.

  7. Development and significance of a fetal electrocardiogram recorded by signal-averaged high-amplification electrocardiography.

    Science.gov (United States)

    Hayashi, Risa; Nakai, Kenji; Fukushima, Akimune; Itoh, Manabu; Sugiyama, Toru

    2009-03-01

    Although ultrasonic diagnostic imaging and fetal heart monitors have undergone great technological improvements, the development and use of fetal electrocardiograms to evaluate fetal arrhythmias and autonomic nervous activity have not been fully established. We verified the clinical significance of the novel signal-averaged vector-projected high amplification ECG (SAVP-ECG) method in fetuses from 48 gravidas at 32-41 weeks of gestation and in 34 neonates. SAVP-ECGs from fetuses and newborns were recorded using a modified XYZ-leads system. Once noise and maternal QRS waves were removed, the P, QRS, and T wave intervals were measured from the signal-averaged fetal ECGs. We also compared fetal and neonatal heart rates (HRs), coefficients of variation of heart rate variability (CV) as a parasympathetic nervous activity, and the ratio of low to high frequency (LF/HF ratio) as a sympathetic nervous activity. The rate of detection of a fetal ECG by SAVP-ECG was 72.9%, and the fetal and neonatal QRS and QTc intervals were not significantly different. The neonatal CVs and LF/HF ratios were significantly increased compared with those in the fetus. In conclusion, we have developed a fetal ECG recording method using the SAVP-ECG system, which we used to evaluate autonomic nervous system development.
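
    For context, a brief sketch of the two heart-rate-variability indices compared in the study, the coefficient of variation (CV) of R-R intervals and the LF/HF spectral power ratio; the R-R series is synthetic and the band limits (0.04-0.15 Hz and 0.15-0.40 Hz) are the common adult conventions, which may differ from those applied to fetal recordings.

        import numpy as np
        from scipy.signal import welch

        rng = np.random.default_rng(2)
        rr_s = 0.45 + 0.02 * rng.standard_normal(600)    # synthetic fetal-like R-R intervals (s)
        cv_percent = 100.0 * rr_s.std() / rr_s.mean()    # coefficient of variation of the R-R series

        # Resample the R-R tachogram onto an even 4 Hz grid, then estimate its power spectrum.
        t = np.cumsum(rr_s)
        fs = 4.0
        grid = np.arange(t[0], t[-1], 1.0 / fs)
        rr_even = np.interp(grid, t, rr_s)
        f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)

        df = f[1] - f[0]
        lf = pxx[(f >= 0.04) & (f < 0.15)].sum() * df    # low-frequency power
        hf = pxx[(f >= 0.15) & (f < 0.40)].sum() * df    # high-frequency power
        print(f"CV = {cv_percent:.1f} %, LF/HF = {lf / hf:.2f}")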

  8. Micro-engineered first wall tungsten armor for high average power laser fusion energy systems

    Science.gov (United States)

    Sharafat, Shahram; Ghoniem, Nasr M.; Anderson, Michael; Williams, Brian; Blanchard, Jake; Snead, Lance; HAPL Team

    2005-12-01

    The high average power laser program is developing an inertial fusion energy demonstration power reactor with a solid first wall chamber. The first wall (FW) will be subject to high energy density radiation and high doses of high energy helium implantation. Tungsten has been identified as the candidate material for a FW armor. The fundamental concern is long term thermo-mechanical survivability of the armor against the effects of high temperature pulsed operation and exfoliation due to the retention of implanted helium. Even if a solid tungsten armor coating would survive the high temperature cyclic operation with minimal failure, the high helium implantation and retention would result in unacceptable material loss rates. Micro-engineered materials, such as castellated structures, plasma sprayed nano-porous coatings and refractory foams are suggested as a first wall armor material to address these fundamental concerns. A micro-engineered FW armor would have to be designed with specific geometric features that tolerate high cyclic heating loads and recycle most of the implanted helium without any significant failure. Micro-engineered materials are briefly reviewed. In particular, plasma-sprayed nano-porous tungsten and tungsten foams are assessed for their potential to accommodate inertial fusion specific loads. Tests show that nano-porous plasma spray coatings can be manufactured with high permeability to helium gas, while retaining relatively high thermal conductivities. Tungsten foams were shown to be able to overcome thermo-mechanical loads by cell rotation and deformation. Helium implantation tests have shown that pulsed implantation and heating releases significant levels of implanted helium. Helium implantation and release from tungsten was modeled using an expanded kinetic rate theory to include the effects of pulsed implantations and thermal cycles. Although significant challenges remain, micro-engineered materials are shown to constitute potential

  9. Micro-engineered first wall tungsten armor for high average power laser fusion energy systems

    International Nuclear Information System (INIS)

    Sharafat, Shahram; Ghoniem, Nasr M.; Anderson, Michael; Williams, Brian; Blanchard, Jake; Snead, Lance

    2005-01-01

    The high average power laser program is developing an inertial fusion energy demonstration power reactor with a solid first wall chamber. The first wall (FW) will be subject to high energy density radiation and high doses of high energy helium implantation. Tungsten has been identified as the candidate material for a FW armor. The fundamental concern is long term thermo-mechanical survivability of the armor against the effects of high temperature pulsed operation and exfoliation due to the retention of implanted helium. Even if a solid tungsten armor coating would survive the high temperature cyclic operation with minimal failure, the high helium implantation and retention would result in unacceptable material loss rates. Micro-engineered materials, such as castellated structures, plasma sprayed nano-porous coatings and refractory foams are suggested as a first wall armor material to address these fundamental concerns. A micro-engineered FW armor would have to be designed with specific geometric features that tolerate high cyclic heating loads and recycle most of the implanted helium without any significant failure. Micro-engineered materials are briefly reviewed. In particular, plasma-sprayed nano-porous tungsten and tungsten foams are assessed for their potential to accommodate inertial fusion specific loads. Tests show that nano-porous plasma spray coatings can be manufactured with high permeability to helium gas, while retaining relatively high thermal conductivities. Tungsten foams were shown to be able to overcome thermo-mechanical loads by cell rotation and deformation. Helium implantation tests have shown that pulsed implantation and heating releases significant levels of implanted helium. Helium implantation and release from tungsten was modeled using an expanded kinetic rate theory to include the effects of pulsed implantations and thermal cycles. Although significant challenges remain, micro-engineered materials are shown to constitute potential

  10. Predicting Freshman Grade Point Average From College Admissions Test Scores and State High School Test Scores

    Directory of Open Access Journals (Sweden)

    Daniel Koretz

    2016-09-01

    Full Text Available The current focus on assessing “college and career readiness” raises an empirical question: How do high school tests compare with college admissions tests in predicting performance in college? We explored this using data from the City University of New York and public colleges in Kentucky. These two systems differ in the choice of college admissions test, the stakes for students on the high school test, and demographics. We predicted freshman grade point average (FGPA) from high school GPA and both college admissions and high school tests in mathematics and English. In both systems, the choice of tests had only trivial effects on the aggregate prediction of FGPA. Adding either test to an equation that included the other had only trivial effects on prediction. Although the findings suggest that the choice of test might advantage or disadvantage different students, it had no substantial effect on the over- and underprediction of FGPA for students classified by race-ethnicity or poverty.
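
    A small sketch of the comparison described above: ordinary least squares prediction of FGPA from HSGPA alone versus HSGPA plus a test score, compared by R². The data are synthetic, since the CUNY and Kentucky datasets are not reproduced here, and the coefficients are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 500
        hsgpa = rng.normal(3.0, 0.5, n)
        test = 0.6 * (hsgpa - 3.0) / 0.5 + rng.normal(0.0, 0.8, n)   # correlated test score (z-units)
        fgpa = 0.7 * hsgpa + 0.1 * test + rng.normal(0.0, 0.4, n)    # synthetic freshman GPA

        def r_squared(X, y):
            X = np.column_stack([np.ones(len(y)), X])        # add intercept column
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            return 1.0 - resid.var() / y.var()

        print("R^2, HSGPA only:        ", round(r_squared(hsgpa[:, None], fgpa), 3))
        print("R^2, HSGPA + test score:", round(r_squared(np.column_stack([hsgpa, test]), fgpa), 3))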

  11. On the XFEL Schrödinger Equation: Highly Oscillatory Magnetic Potentials and Time Averaging

    KAUST Repository

    Antonelli, Paolo

    2014-01-14

    We analyse a nonlinear Schrödinger equation for the time-evolution of the wave function of an electron beam, interacting selfconsistently through a Hartree-Fock nonlinearity and through the repulsive Coulomb interaction of an atomic nucleus. The electrons are supposed to move under the action of a time dependent, rapidly periodically oscillating electromagnetic potential. This can be considered a simplified effective single particle model for an X-ray free electron laser. We prove the existence and uniqueness for the Cauchy problem and the convergence of wave-functions to corresponding solutions of a Schrödinger equation with a time-averaged Coulomb potential in the high frequency limit for the oscillations of the electromagnetic potential. © 2014 Springer-Verlag Berlin Heidelberg.

  12. Cloud-based design of high average power traveling wave linacs

    Science.gov (United States)

    Kutsaev, S. V.; Eidelman, Y.; Bruhwiler, D. L.; Moeller, P.; Nagler, R.; Barbe Welzel, J.

    2017-12-01

    The design of industrial high average power traveling wave linacs must accurately consider some specific effects. For example, acceleration of high current beam reduces power flow in the accelerating waveguide. Space charge may influence the stability of longitudinal or transverse beam dynamics. Accurate treatment of beam loading is central to the design of high-power TW accelerators, and it is especially difficult to model in the meter-scale region where the electrons are nonrelativistic. Currently, there are two types of available codes: tracking codes (e.g. PARMELA or ASTRA) that cannot solve self-consistent problems, and particle-in-cell codes (e.g. Magic 3D or CST Particle Studio) that can model the physics correctly but are very time-consuming and resource-demanding. Hellweg is a special tool for quick and accurate electron dynamics simulation in traveling wave accelerating structures. The underlying theory of this software is based on the differential equations of motion. The effects considered in this code include beam loading, space charge forces, and external magnetic fields. We present the current capabilities of the code, provide benchmarking results, and discuss future plans. We also describe the browser-based GUI for executing Hellweg in the cloud.
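
    To make the beam-loading effect mentioned above concrete, the sketch below integrates the standard steady-state power-flow model for a constant-impedance traveling-wave section (E = sqrt(2·alpha·r·P), dP/dz = -2·alpha·P - I·E) with and without beam current; the parameter values are illustrative and are not Hellweg defaults or results from the paper.

        import numpy as np

        alpha = 0.25        # field attenuation constant (1/m), illustrative
        r_shunt = 50e6      # shunt impedance per unit length (Ohm/m), illustrative
        P0 = 5e6            # input RF power (W)
        L, n = 1.0, 1000    # section length (m), integration steps
        z, dz = np.linspace(0.0, L, n, retstep=True)

        def energy_gain_MeV(i_beam_A):
            """Integrate the steady-state power-flow equation dP/dz = -2*alpha*P - I*E."""
            P = np.empty(n)
            P[0] = P0
            for k in range(n - 1):
                E = np.sqrt(2.0 * alpha * r_shunt * max(P[k], 0.0))   # E = sqrt(2*alpha*r*P)
                P[k + 1] = P[k] + dz * (-2.0 * alpha * P[k] - i_beam_A * E)
            E_field = np.sqrt(2.0 * alpha * r_shunt * np.clip(P, 0.0, None))
            return E_field.sum() * dz / 1e6

        print(f"energy gain, no beam:     {energy_gain_MeV(0.0):.1f} MeV")
        print(f"energy gain, 100 mA beam: {energy_gain_MeV(0.1):.1f} MeV")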

  13. Evaluation of machine learning algorithms for prediction of regions of high Reynolds averaged Navier Stokes uncertainty

    Science.gov (United States)

    Ling, J.; Templeton, J.

    2015-08-01

    Reynolds Averaged Navier Stokes (RANS) models are widely used in industry to predict fluid flows, despite their acknowledged deficiencies. Not only do RANS models often produce inaccurate flow predictions, but there are very limited diagnostics available to assess RANS accuracy for a given flow configuration. If experimental or higher fidelity simulation results are not available for RANS validation, there is no reliable method to evaluate RANS accuracy. This paper explores the potential of utilizing machine learning algorithms to identify regions of high RANS uncertainty. Three different machine learning algorithms were evaluated: support vector machines, Adaboost decision trees, and random forests. The algorithms were trained on a database of canonical flow configurations for which validated direct numerical simulation or large eddy simulation results were available, and were used to classify RANS results on a point-by-point basis as having either high or low uncertainty, based on the breakdown of specific RANS modeling assumptions. Classifiers were developed for three different basic RANS eddy viscosity model assumptions: the isotropy of the eddy viscosity, the linearity of the Boussinesq hypothesis, and the non-negativity of the eddy viscosity. It is shown that these classifiers are able to generalize to flows substantially different from those on which they were trained. Feature selection techniques, model evaluation, and extrapolation detection are discussed in the context of turbulence modeling applications.
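
    A minimal sketch in the spirit of this approach: a scikit-learn random forest trained to flag, point by point, where a RANS eddy-viscosity assumption is likely violated. The features and labels are synthetic stand-ins for the flow-field quantities and DNS/LES-derived markers used in the paper.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        n = 5000
        features = rng.normal(size=(n, 4))        # stand-ins for nondimensional flow-field quantities
        labels = (features[:, 0] * features[:, 1] + 0.3 * rng.normal(size=n) > 0.5).astype(int)  # "assumption violated"

        X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
        print("held-out point-wise classification accuracy:", round(clf.score(X_test, y_test), 3))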

  14. Design and component specifications for high average power laser optical systems

    Energy Technology Data Exchange (ETDEWEB)

    O'Neil, R.W.; Sawicki, R.H.; Johnson, S.A.; Sweatt, W.C.

    1987-01-01

    Laser imaging and transport systems are considered in the regime where laser-induced damage and/or thermal distortion have significant design implications. System design and component specifications are discussed and quantified in terms of the net system transport efficiency and phase budget. Optical substrate materials, figure, surface roughness, coatings, and sizing are considered in the context of visible and near-ir optical systems that have been developed at Lawrence Livermore National Laboratory for laser isotope separation applications. In specific examples of general applicability, details of the bulk and/or surface absorption, peak and/or average power damage threshold, coating characteristics and function, substrate properties, or environmental factors will be shown to drive the component size, placement, and shape in high-power systems. To avoid overstressing commercial fabrication capabilities or component design specifications, procedures will be discussed for compensating for aberration buildup, using a few carefully placed adjustable mirrors. By coupling an aggressive measurements program on substrates and coatings to the design effort, an effective technique has been established to project high-power system performance realistically and, in the process, drive technology developments to improve performance or lower cost in large-scale laser optical systems. 13 refs.

  15. Design and component specifications for high average power laser optical systems

    International Nuclear Information System (INIS)

    O'Neil, R.W.; Sawicki, R.H.; Johnson, S.A.; Sweatt, W.C.

    1987-01-01

    Laser imaging and transport systems are considered in the regime where laser-induced damage and/or thermal distortion have significant design implications. System design and component specifications are discussed and quantified in terms of the net system transport efficiency and phase budget. Optical substrate materials, figure, surface roughness, coatings, and sizing are considered in the context of visible and near-ir optical systems that have been developed at Lawrence Livermore National Laboratory for laser isotope separation applications. In specific examples of general applicability, details of the bulk and/or surface absorption, peak and/or average power damage threshold, coating characteristics and function, substrate properties, or environmental factors will be shown to drive the component size, placement, and shape in high-power systems. To avoid overstressing commercial fabrication capabilities or component design specifications, procedures will be discussed for compensating for aberration buildup, using a few carefully placed adjustable mirrors. By coupling an aggressive measurements program on substrates and coatings to the design effort, an effective technique has been established to project high-power system performance realistically and, in the process, drive technology developments to improve performance or lower cost in large-scale laser optical systems. 13 refs

  16. Estimation of Radionuclide Concentrations and Average Annual Committed Effective Dose due to Ingestion for the Population in the Red River Delta, Vietnam.

    Science.gov (United States)

    Van, Tran Thi; Bat, Luu Tam; Nhan, Dang Duc; Quang, Nguyen Hao; Cam, Bui Duy; Hung, Luu Viet

    2018-02-16

    Radioactivity concentrations of nuclides of the ²³²Th and ²³⁸U radioactive chains and ⁴⁰K, ⁹⁰Sr, ¹³⁷Cs, and ²³⁹⁺²⁴⁰Pu were surveyed for raw and cooked food of the population in the Red River delta region, Vietnam, using α- and γ-spectrometry and liquid scintillation counting techniques. The concentration of ⁴⁰K in the cooked food was the highest compared to those of other radionuclides, ranging from (23 ± 5) Bq kg⁻¹ dw (rice) to (347 ± 50) Bq kg⁻¹ dw (tofu). The ²¹⁰Po concentration in the cooked food ranged from its limit of detection (LOD) of 5 mBq kg⁻¹ dw (rice) to (4.0 ± 1.6) Bq kg⁻¹ dw (marine bivalves). The concentrations of other nuclides of the ²³²Th and ²³⁸U chains in the food were low, ranging from the LOD of 0.02 Bq kg⁻¹ dw to (1.1 ± 0.3) Bq kg⁻¹ dw. The activity concentrations of ⁹⁰Sr, ¹³⁷Cs, and ²³⁹⁺²⁴⁰Pu in the food were minor compared to those of the natural radionuclides. The average annual committed effective dose to adults in the study region was estimated and ranged from 0.24 to 0.42 mSv a⁻¹ with an average of 0.32 mSv a⁻¹, out of which rice, leafy vegetables, and tofu contributed up to 16.2%, 24.4%, and 21.3%, respectively. The committed effective doses to adults due to ingestion of a regular diet in the Red River delta region, Vietnam are within the range determined in other countries worldwide. This finding suggests that Vietnamese food is safe for human consumption with respect to radiation exposure.
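
    A sketch of the dose arithmetic behind the final result: the annual committed effective dose as the sum over foods and nuclides of activity concentration times annual intake times ingestion dose coefficient. The intake rates are assumptions, only two nuclides are included, and the dose coefficients are typical ICRP 72 adult values, so the total differs from the study's 0.32 mSv.

        foods_kg_per_year = {"rice": 150.0, "leafy_vegetable": 40.0, "tofu": 25.0}   # assumed annual intakes (kg dw)
        activity_bq_per_kg = {                                                        # Bq/kg dw, illustrative
            "rice": {"K-40": 23.0, "Po-210": 0.005},
            "leafy_vegetable": {"K-40": 120.0, "Po-210": 0.05},
            "tofu": {"K-40": 347.0, "Po-210": 0.02},
        }
        dose_coeff_sv_per_bq = {"K-40": 6.2e-9, "Po-210": 1.2e-6}   # adult ingestion coefficients (ICRP 72)

        dose_sv = sum(
            foods_kg_per_year[food] * conc * dose_coeff_sv_per_bq[nuclide]
            for food, nuclides in activity_bq_per_kg.items()
            for nuclide, conc in nuclides.items()
        )
        print(f"annual committed effective dose ~ {dose_sv * 1e3:.2f} mSv")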

  17. Sub-100 fs high average power directly blue-diode-laser-pumped Ti:sapphire oscillator

    Science.gov (United States)

    Rohrbacher, Andreas; Markovic, Vesna; Pallmann, Wolfgang; Resan, Bojan

    2016-03-01

    Ti:sapphire oscillators are a proven technology to generate sub-100 fs (even sub-10 fs) pulses in the near infrared and are widely used in many high impact scientific fields. However, the need for a bulky, expensive and complex pump source, typically a frequency-doubled multi-watt neodymium or optically pumped semiconductor laser, represents the main obstacle to more widespread use. The recent development of blue diodes emitting over 1 W has opened up the possibility of directly diode-laser-pumped Ti:sapphire oscillators. Besides lower cost and a smaller footprint, direct diode pumping provides better reliability, higher efficiency and better pointing stability, to name a few advantages. The challenges it poses are the lower absorption of Ti:sapphire at available diode wavelengths and the lower brightness compared to typical green pump lasers. For practical applications such as bio-medicine and nano-structuring, output powers in excess of 100 mW and sub-100 fs pulses are required. In this paper, we demonstrate a high average power directly blue-diode-laser-pumped Ti:sapphire oscillator without active cooling. The SESAM modelocking ensures reliable self-starting and robust operation. We will present two configurations emitting 460 mW in 82 fs pulses and 350 mW in 65 fs pulses, both operating at 92 MHz. The maximum obtained pulse energy reaches 5 nJ. A double-sided pumping scheme with two high power blue diode lasers was used for the output power scaling. The cavity design and the experimental results will be discussed in more detail.

  18. Generation and Applications of High Average Power Mid-IR Supercontinuum in Chalcogenide Fibers

    OpenAIRE

    Petersen, Christian Rosenberg

    2016-01-01

    Mid-infrared supercontinuum with up to 54.8 mW of average power and a maximum bandwidth of 1.77-8.66 μm is demonstrated by pumping tapered chalcogenide photonic crystal fibers with a MHz-repetition-rate parametric source at 4 μm

  19. Industrial applications of high-average power high-peak power nanosecond pulse duration Nd:YAG lasers

    Science.gov (United States)

    Harrison, Paul M.; Ellwi, Samir

    2009-02-01

    Within the vast range of laser materials processing applications, every type of successful commercial laser has been driven by a major industrial process. For high average power, high peak power, nanosecond pulse duration Nd:YAG DPSS lasers, the enabling process is high speed surface engineering. This includes applications such as thin film patterning and selective coating removal in markets such as the flat panel display (FPD), solar and automotive industries. Applications such as these tend to require working spots with a uniform intensity distribution and specific shapes and dimensions, so a range of innovative beam delivery systems have been developed that convert the Gaussian beam produced by the laser into rectangular and/or shaped spots, as required by the demands of each project. In this paper the authors will discuss the key parameters of this type of laser, examine why they are important for high speed surface engineering projects, and show how they affect the underlying laser-material interaction and the removal mechanism. Several case studies will be considered in the FPD and solar markets, exploring the close link between the application, the key laser characteristics and the beam delivery system that links them together.

  20. The use of induction linacs with nonlinear magnetic drive as high average power accelerators

    International Nuclear Information System (INIS)

    Birx, D.L.; Cook, E.G.; Hawkins, S.A.; Newton, M.A.; Poor, S.E.; Reginato, L.L.; Schmidt, J.A.; Smith, M.W.

    1985-01-01

    The marriage of induction linac technology with Nonlinear Magnetic Modulators has produced some unique capabilities. It appears possible to produce electron beams with average currents measured in amperes, at gradients exceeding 1 MeV/m, and with power efficiencies approaching 50%. A 2 MeV, 5 kA electron accelerator is under construction at Lawrence Livermore National Laboratory (LLNL) to allow us to demonstrate some of these concepts. Progress on this project is reported here. (orig.)

  1. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  2. Effects of solar activity and galactic cosmic ray cycles on the modulation of the annual average temperature at two sites in southern Brazil

    Science.gov (United States)

    Frigo, Everton; Antonelli, Francesco; da Silva, Djeniffer S. S.; Lima, Pedro C. M.; Pacca, Igor I. G.; Bageston, José V.

    2018-04-01

    Quasi-periodic variations in solar activity and galactic cosmic rays (GCRs) on decadal and bidecadal timescales have been suggested as a climate forcing mechanism for many regions on Earth. One of these regions is southern Brazil, where the lowest values during the last century were observed for the total geomagnetic field intensity at the Earth's surface. These low values are due to the passage of the center of the South Atlantic Magnetic Anomaly (SAMA), which crosses the Brazilian territory from east to west following a latitude of ˜ 26°. In areas with low geomagnetic intensity, such as the SAMA, the incidence of GCRs is increased. Consequently, possible climatic effects related to the GCRs tend to be maximized in this region. In this work, we investigate the relationship between the ˜ 11-year and ˜ 22-year cycles that are related to solar activity and GCRs and the annual average temperature recorded between 1936 and 2014 at two weather stations, both located near a latitude of 26° S but at different longitudes. The first of these stations (Torres - TOR) is located in the coastal region, and the other (Iraí - IRA) is located in the interior, around 450 km from the Atlantic Ocean. Sunspot data and the solar modulation potential for cosmic rays were used as proxies for the solar activity and the GCRs, respectively. Our investigation of the influence of decadal and bidecadal cycles in temperature data was carried out using the wavelet transform coherence (WTC) spectrum. The results indicate that periodicities of 11 years may have continuously modulated the climate at TOR via a nonlinear mechanism, while at IRA, the effects of this 11-year modulation period were intermittent. Four temperature maxima, separated by around 20 years, were detected in the same years at both weather stations. These temperature maxima are almost coincident with the maxima of the odd solar cycles. Furthermore, these maxima occur after transitions from even to odd solar cycles, that is

  3. Multi-Repeated Projection Lithography for High-Precision Linear Scale Based on Average Homogenization Effect

    Directory of Open Access Journals (Sweden)

    Dongxu Ren

    2016-04-01

    Full Text Available A multi-repeated photolithography method for manufacturing an incremental linear scale using projection lithography is presented. The method is based on the average homogenization effect, which periodically superposes the light intensity of different locations of pitches in the mask to produce a consistent energy distribution at a specific wavelength, so that the accuracy of a linear scale can be improved by averaging the pitch over different step distances. The method’s theoretical error is within 0.01 µm for a periodic mask with a 2-µm sine-wave error. The intensity error models in the focal plane include the rectangular grating error on the mask, the static positioning error, and the lithography lens focal plane alignment error, which affect pitch uniformity less than in the common linear scale projection lithography splicing process. It was analyzed and confirmed that increasing the number of repeated exposures of a single stripe could improve accuracy, as could adjusting the exposure spacing to achieve a set proportion of black and white stripes. According to the experimental results, the multi-repeated photolithography method readily achieves a pitch accuracy of 43 nm at any 10 locations over 1 m, and the whole-length accuracy of the linear scale is less than 1 µm/m.
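
    The averaging idea can be illustrated with a toy model (not the authors' exposure model): a mask whose local pitch error is a 2-µm sine wave is exposed repeatedly with offsets of one nominal pitch, and the averaged error shrinks as the exposed span grows toward the error period. All parameters below are arbitrary.

      import numpy as np

      pitch_um = 20.0                 # nominal grating pitch (arbitrary)
      error_amp_um = 2.0              # 2-µm sine-wave mask error, as in the abstract
      error_period_um = 500.0         # spatial period of the mask error (arbitrary)

      def local_pitch_error(x_um):
          return error_amp_um * np.sin(2 * np.pi * x_um / error_period_um)

      x = np.linspace(0.0, 1000.0, 2001)                 # positions along the scale, µm
      for n_exposures in (1, 4, 16, 25):
          offsets = np.arange(n_exposures) * pitch_um    # step by one nominal pitch per exposure
          averaged = np.mean([local_pitch_error(x + d) for d in offsets], axis=0)
          print(f"N = {n_exposures:2d} exposures: peak residual error = {np.max(np.abs(averaged)):.3f} um")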

  4. Extended averaging phase-shift schemes for Fizeau interferometry on high-numerical-aperture spherical surfaces

    Science.gov (United States)

    Burke, Jan

    2010-08-01

    Phase-shifting Fizeau interferometry on spherical surfaces is impaired by phase-shift errors that increase with the numerical aperture, unless a custom optical set-up or wavelength shifting is used. This poses a problem especially for larger numerical apertures, and requires good error tolerance of the phase-shift method used; but it also constitutes a useful testing facility for phase-shift formulae, because a vast range of phase-shift intervals can be tested in a single measurement. In this paper I show how the "characteristic polynomials" method can be used to generate a phase-shifting method for the actual numerical aperture, and analyse residual cyclical phase errors by comparing a phase map from an interferogram with a few fringes to a phase map from a nulled fringe. Unrelated to the phase-shift miscalibration, third-harmonic error fringes are found. These can be dealt with by changing the nominal phase shift from 90°/step to 60°/step and re-tailoring the evaluation formula for third-harmonic rejection. The residual error has the same frequency as the phase-shift signal itself, and can be removed by averaging measurements. Some interesting features of the characteristic polynomials for the averaged formulae emerge, which also shed some light on the mechanism that generates cyclical phase errors.
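
    For readers unfamiliar with phase-shift evaluation, the sketch below shows the generic N-bucket synchronous-detection formula with a nominal step of 2π/N (here six 60° steps); it is a textbook formula, not the tailored characteristic-polynomials formula developed in the paper.

      import numpy as np

      def n_bucket_phase(frames):
          """Recover the wrapped phase from N frames taken at nominal steps of 2*pi*k/N."""
          n = frames.shape[0]
          k = np.arange(n).reshape((n,) + (1,) * (frames.ndim - 1))
          s = np.sum(frames * np.sin(2 * np.pi * k / n), axis=0)
          c = np.sum(frames * np.cos(2 * np.pi * k / n), axis=0)
          return np.arctan2(-s, c)

      # Synthetic test with a 60 degree step (6 buckets), as suggested above for third-harmonic rejection.
      true_phase = np.linspace(-np.pi, np.pi, 5)
      frames = np.array([1.0 + 0.8 * np.cos(true_phase + 2 * np.pi * k / 6) for k in range(6)])
      error = np.angle(np.exp(1j * (n_bucket_phase(frames) - true_phase)))
      print(f"max recovery error = {np.max(np.abs(error)):.2e} rad")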

  5. Free-space optical communications with peak and average constraints: High SNR capacity approximation

    KAUST Repository

    Chaaban, Anas; Morvan, Jean-Marie; Alouini, Mohamed-Slim

    2015-01-01

    . Numerical evaluation shows that this capacity lower bound is nearly tight at high signal-to-noise ratio (SNR), while it is shown analytically that the gap to capacity upper bounds is a small constant at high SNR. In particular, the gap to the high

  6. Development of linear proton accelerators with the high average beam power

    CERN Document Server

    Bomko, V A; Egorov, A M

    2001-01-01

    A review of the current situation in the development of powerful linear proton accelerators carried out in many countries is given. The purpose of their creation is solving the problems of safe and efficient nuclear energetics on the basis of an accelerator-reactor complex. In this case a proton beam with an energy up to 1 GeV and an average current of 30 mA is required. At the same time there is a need for more powerful beams, for example, for the production of tritium and the transmutation of nuclear waste products. The creation of accelerators of such power will be followed by the construction of linear accelerators of 1 GeV but with a more moderate beam current. They are intended for the investigation of many aspects of neutron physics and neutron engineering. Problems in the creation of efficient constructions for the basic and auxiliary equipment, the reliability of the systems, and the minimization of beam losses in the process of acceleration will be solved.

  7. 7.5 MeV High Average Power Linear Accelerator System for Food Irradiation Applications

    International Nuclear Information System (INIS)

    Eichenberger, Carl; Palmer, Dennis; Wong, Sik-Lam; Robison, Greg; Miller, Bruce; Shimer, Daniel

    2005-09-01

    In December 2004 the US Food and Drug Administration (FDA) approved the use of 7.5 MeV X-rays for irradiation of food products. The increased efficiency of treatment at 7.5 MeV (versus the previous maximum allowable X-ray energy of 5 MeV) will have a significant impact on processing rates and, therefore, reduce the per-package cost of irradiation using X-rays. Titan Pulse Sciences Division is developing a new food irradiation system based on this ruling. The irradiation system incorporates a 7.5 MeV electron linear accelerator (linac) that is capable of 100 kW average power. A tantalum converter is positioned close to the exit window of the scan horn. The linac is an RF standing-wave structure based on a 5 MeV accelerator used for X-ray processing of food products. The linac is powered by a 1300 MHz (L-band) klystron tube. The electrical drive for the klystron is a solid-state modulator that uses inductive energy storage and solid-state opening switches. The system is designed to operate 7000 hours per year. Keywords: RF accelerator, solid-state modulator, X-ray processing

  8. Annual report 1989 operation of the high flux reactor

    International Nuclear Information System (INIS)

    Ahlf, J.; Gevers, A.

    1989-01-01

    In 1989 the operation of the High Flux Reactor Petten was carried out as planned. The availability was more than 100% of scheduled operating time. The average occupation of the reactor by experimental devices was 72% of the practical occupation limit. The reactor was utilized for research programmes in support of nuclear fission reactors and thermonuclear fusion, for fundamental research with neutrons and for radioisotope production. General activities in support of running irradiation programmes progressed in the normal way. Development activities addressed upgrading of irradiation devices, neutron radiography and neutron capture therapy

  9. Annual report 1990. Operation of the high flux reactor

    International Nuclear Information System (INIS)

    Ahlf, J.; Gevers, A.

    1990-01-01

    In 1990 the operation of the High Flux Reactor was carried out as planned. The availability was 96% of scheduled operating time. The average utilization of the reactor was 71% of the practical limit. The reactor was utilized for research programmes in support of nuclear fission reactors and thermonuclear fusion, for fundamental research with neutrons, for radioisotope production, and for various smaller activities. General activities in support of running irradiation programmes progressed in the normal way. Development activities addressed upgrading of irradiation devices, neutron radiography and neutron capture therapy

  10. Annual Report 1991. Operation of the high flux reactor

    International Nuclear Information System (INIS)

    Ahlf, J.; Gevers, A.

    1992-01-01

    In 1991 the operation of the High Flux Reactor was carried out as planned. The availability was more than 100% of scheduled operating time. The average utilization of the reactor was 69% of the practical limit. The reactor was utilized for research programmes in support of nuclear fission reactors and thermonuclear fusion, for fundamental research with neutrons, for radioisotope production, and for various smaller activities. Development activities addressed upgrading of irradiation devices, neutron capture therapy, neutron radiography and neutron transmutation doping of silicon. General activities in support of running irradiation programmes progressed in the normal way

  11. Pulse repetition frequency effects in a high average power x-ray preionized excimer laser

    International Nuclear Information System (INIS)

    Fontaine, B.; Forestier, B.; Delaporte, P.; Canarelli, P.

    1989-01-01

    An experimental study of wave damping in a high repetition rate excimer laser is undertaken. Excitation of the laser active medium in a subsonic loop is achieved by means of a classical discharge, through transfer capacitors. The discharge stability is controlled by a wire ion plasma (w.i.p.) X-ray gun. The strong acoustic waves induced by the excitation of the active medium may lead, at high PRF, to a decrease of the energy per pulse. First results on the influence of damping the induced density perturbations between two successive pulses are presented

  12. Choice of initial operating parameters for high average current linear accelerators

    International Nuclear Information System (INIS)

    Batchelor, K.

    1976-01-01

    In designing an accelerator for high currents it is evident that beam losses in the machine must be minimized, which implies well matched beams, and that adequate acceptance under severe space charge conditions must be met. This paper investigates the input parameters to an Alvarez type drift-tube accelerator resulting from such factors

  13. Reynolds-Averaged Turbulence Model Assessment for a Highly Back-Pressured Isolator Flowfield

    Science.gov (United States)

    Baurle, Robert A.; Middleton, Troy F.; Wilson, L. G.

    2012-01-01

    The use of computational fluid dynamics in scramjet engine component development is widespread in the existing literature. Unfortunately, the quantification of model-form uncertainties is rarely addressed with anything other than sensitivity studies, requiring that the computational results be intimately tied to and calibrated against existing test data. This practice must be replaced with a formal uncertainty quantification process for computational fluid dynamics to play an expanded role in the system design, development, and flight certification process. Due to ground test facility limitations, this expanded role is believed to be a requirement by some in the test and evaluation community if scramjet engines are to be given serious consideration as a viable propulsion device. An effort has been initiated at the NASA Langley Research Center to validate several turbulence closure models used for Reynolds-averaged simulations of scramjet isolator flows. The turbulence models considered were the Menter BSL, Menter SST, Wilcox 1998, Wilcox 2006, and the Gatski-Speziale explicit algebraic Reynolds stress models. The simulations were carried out using the VULCAN computational fluid dynamics package developed at the NASA Langley Research Center. A procedure to quantify the numerical errors was developed to account for discretization errors in the validation process. This procedure utilized the grid convergence index defined by Roache as a bounding estimate for the numerical error. The validation data were collected from a mechanically back-pressured, constant-area (1 × 2 inch) isolator model with an isolator entrance Mach number of 2.5. As expected, the model-form uncertainty was substantial for the shock-dominated, massively separated flowfield within the isolator, as evidenced by a variation of 6 duct heights in shock train length depending on the turbulence model employed. Generally speaking, the turbulence models that did not include an explicit stress limiter more closely
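
    For reference, the grid convergence index mentioned above can be computed from solutions on three systematically refined grids as sketched below; the sample values are placeholders, not data from this study.

      import math

      def observed_order(f_coarse, f_medium, f_fine, r):
          """Apparent order of accuracy for a constant grid refinement ratio r."""
          return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

      def gci_fine(f_medium, f_fine, r, p, fs=1.25):
          """Roache's fine-grid GCI: Fs * |relative error| / (r**p - 1)."""
          e_rel = abs((f_medium - f_fine) / f_fine)
          return fs * e_rel / (r**p - 1.0)

      # Hypothetical shock-train lengths (in duct heights) on coarse, medium and fine grids.
      f3, f2, f1, r = 12.0, 10.5, 10.0, 2.0
      p = observed_order(f3, f2, f1, r)
      print(f"apparent order p = {p:.2f}, fine-grid GCI = {100 * gci_fine(f2, f1, r, p):.1f}%")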

  14. Choice of initial operating parameters for high average current linear accelerators

    International Nuclear Information System (INIS)

    Batchelor, K.

    1976-01-01

    Recent emphasis on alternative energy sources together with the need for intense neutron sources for testing of materials for CTR has resulted in renewed interest in high current (approximately 100 mA) c.w. proton and deuteron linear accelerators. In designing an accelerator for such high currents, it is evident that beam losses in the machine must be minimized, which implies well matched beams, and that adequate acceptance under severe space charge conditions must be met. An investigation is presented of the input parameters to an Alvarez type drift-tube accelerator resulting from such factors. The analysis indicates that an accelerator operating at a frequency of 50 MHz is capable of accepting deuteron currents of about 0.4 amperes and proton currents of about 1.2 amperes. These values depend critically on the assumed values of beam emittance and on the ability to properly "match" this to the linac acceptance

  15. Is it better to be average? High and low performance as predictors of employee victimization.

    Science.gov (United States)

    Jensen, Jaclyn M; Patel, Pankaj C; Raver, Jana L

    2014-03-01

    Given increased interest in whether targets' behaviors at work are related to their victimization, we investigated employees' job performance level as a precipitating factor for being victimized by peers in one's work group. Drawing on rational choice theory and the victim precipitation model, we argue that perpetrators take into consideration the risks of aggressing against particular targets, such that high performers tend to experience covert forms of victimization from peers, whereas low performers tend to experience overt forms of victimization. We further contend that the motivation to punish performance deviants will be higher when performance differentials are salient, such that the effects of job performance on covert and overt victimization will be exacerbated by group performance polarization, yet mitigated when the target has high equity sensitivity (benevolence). Finally, we investigate whether victimization is associated with future performance impairments. Results from data collected at 3 time points from 576 individuals in 62 work groups largely support the proposed model. The findings suggest that job performance is a precipitating factor to covert victimization for high performers and overt victimization for low performers in the workplace with implications for subsequent performance.

  16. Gender Gaps in High School GPA and ACT Scores: High School Grade Point Average and ACT Test Score by Subject and Gender. Information Brief 2014-12

    Science.gov (United States)

    ACT, Inc., 2014

    2014-01-01

    Female students who graduated from high school in 2013 averaged higher grades than their male counterparts in all subjects, but male graduates earned higher scores on the math and science sections of the ACT. This information brief looks at high school grade point average and ACT test score by subject and gender

  17. Sliding Mode Pulsed Averaging IC Drivers for High Brightness Light Emitting Diodes

    Energy Technology Data Exchange (ETDEWEB)

    Dr. Anatoly Shteynberg, PhD

    2006-08-17

    This project developed new Light Emitting Diode (LED) driver ICs associated with specific (uniquely operated) switching power supplies that optimize performance for High Brightness LEDs (HB-LEDs). The drivers utilize a digital control core with a newly developed nonlinear, hysteretic/sliding mode controller with mixed-signal processing. The drivers are flexible enough to allow both a traditional microprocessor interface as well as other options such as "on the fly" adjustment of color and brightness. Other unique features of the newly developed drivers include AC power factor correction, high power efficiency, and substantially fewer required external components, leading to a substantial reduction in the bill of materials (BOM). Thus, the LED drivers developed in this research optimize LED performance by increasing power efficiency and power factor. Perhaps more remarkably, the LED drivers provide this improved performance at substantially reduced cost compared to present LED power electronic driver circuits. Since one of the barriers to market penetration for HB-LEDs (in particular "white" light LEDs) is cost per lumen, this research makes important contributions toward advancing SSL consumer acceptance and usage.
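
    The hysteretic (sliding-mode-like) current regulation mentioned above can be illustrated with a toy simulation of a buck-type LED string driver, in which the switch toggles whenever the inductor current leaves a band around the target; all component values and thresholds below are arbitrary, not those of the developed ICs.

      # Toy hysteretic current-mode control of an idealized buck LED driver.
      V_IN, V_LED = 24.0, 9.6          # input voltage and LED string forward voltage, V
      L = 100e-6                       # inductance, H
      I_REF, BAND = 0.35, 0.02         # target LED current and hysteresis half-band, A
      DT, T_END = 1e-8, 2e-4           # integration step and simulated interval, s

      i, switch_on, t, switch_count = 0.0, True, 0.0, 0
      while t < T_END:
          di_dt = (V_IN - V_LED) / L if switch_on else -V_LED / L   # ideal freewheeling path
          i += di_dt * DT
          if switch_on and i >= I_REF + BAND:
              switch_on, switch_count = False, switch_count + 1
          elif not switch_on and i <= I_REF - BAND:
              switch_on, switch_count = True, switch_count + 1
          t += DT

      print(f"final LED current ~ {i:.3f} A after {switch_count} switching events")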

  18. Characterization of a klystrode as a RF source for high-average-power accelerators

    International Nuclear Information System (INIS)

    Rees, D.; Keffeler, D.; Roybal, W.; Tallerico, P.J.

    1995-01-01

    The klystrode is a relatively new type of RF source that has demonstrated dc-to-RF conversion efficiencies in excess of 70% and a control characteristic uniquely different from those for klystron amplifiers. The different control characteristic allows the klystrode to achieve this high conversion efficiency while still providing a control margin for regulation of the accelerator cavity fields. The authors present test data from a 267-MHz, 250-kW, continuous-wave (CW) klystrode amplifier and contrast this data with conventional klystron performance, emphasizing the strengths and weaknesses of the klystrode technology for accelerator applications. They present test results describing that limitation for the 250-kW, CW klystrode and extrapolate the data to other frequencies. A summary of the operating regime explains the clear advantages of the klystrode technology over the klystron technology

  19. Design of a high average-power FEL driven by an existing 20 MV electrostatic-accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Kimel, I.; Elias, L.R. [Univ. of Central Florida, Orlando, FL (United States)

    1995-12-31

    There are some important applications where high average-power radiation is required. Two examples are industrial machining and space power-beaming. Unfortunately, to date no FEL has been able to show more than 10 Watts of average power. To remedy this situation we started a program geared towards the development of high average-power FELs. As a first step we are building, in our CREOL laboratory, a compact FEL which will generate close to 1 kW in CW operation. As the next step we are also engaged in the design of a much higher average-power system based on a 20 MV electrostatic accelerator. This FEL will be capable of operating CW with a power output of 60 kW. The idea is to perform a high power demonstration using the existing 20 MV electrostatic accelerator at the Tandar facility in Buenos Aires. This machine has been dedicated to accelerating heavy ions for experiments and applications in nuclear and atomic physics. The necessary adaptations required to utilize the machine to accelerate electrons will be described. An important aspect of the design of the 20 MV system is the electron beam optics through almost 30 meters of accelerating and decelerating tubes as well as the undulator. Of equal importance is a careful design of the long resonator with mirrors able to withstand high power loading with proper heat dissipation features.

  20. Mixed-mode distribution systems for high average power electron cyclotron heating

    International Nuclear Information System (INIS)

    White, T.L.; Kimrey, H.D.; Bigelow, T.S.

    1984-01-01

    The ELMO Bumpy Torus-Scale (EBT-S) experiment consists of 24 simple magnetic mirrors joined end-to-end to form a torus of closed magnetic field lines. In this paper, we first describe an 80% efficient mixed-mode unpolarized heating system which couples 28-GHz microwave power to the midplane of the 24 EBT-S cavities. The system consists of two radiused bends feeding a quasi-optical mixed-mode toroidal distribution manifold. Balancing power to the 24 cavities is determined by detailed computer ray tracing. A second 28-GHz electron cyclotron heating (ECH) system using a polarized grid high field launcher is described. The launcher penetrates the fundamental ECH resonant surface without a vacuum window with no observable breakdown up to 1 kW/cm² (source limited) with 24 kW delivered to the plasma. This system uses the same mixed-mode output as the first system but polarizes the launched power by using a grid of WR42 apertures. The efficiency of this system is 32%, but can be improved by feeding multiple launchers from a separate distribution manifold

  1. Large-eddy simulation/Reynolds-averaged Navier-Stokes hybrid schemes for high speed flows

    Science.gov (United States)

    Xiao, Xudong

    Three LES/RANS hybrid schemes have been proposed for the prediction of high speed separated flows. Each method couples the k-zeta (enstrophy) RANS model with an LES subgrid-scale one-equation model by using a blending function that is coordinate system independent. Two of these functions are based on the turbulence dissipation length scale and the grid size, while the third one has no explicit dependence on the grid. To implement the LES/RANS hybrid schemes, a new rescaling-reintroducing method is used to generate time-dependent turbulent inflow conditions. The hybrid schemes have been tested on a Mach 2.88 flow over a 25 degree compression-expansion ramp and a Mach 2.79 flow over a 20 degree compression ramp. A special computation procedure has been designed to prevent the separation zone from expanding upstream to the recycle plane. The code is parallelized using the Message Passing Interface (MPI) and is optimized for running on an IBM-SP3 parallel machine. The scheme was validated first for a flat plate. It was shown that the blending function has to be monotonic to prevent the RANS region from appearing in the LES region. In the 25 deg ramp case, the hybrid schemes provided better agreement with experiment in the recovery region. Grid refinement studies demonstrated the importance of using a grid-independent blending function and showed further improvement in agreement with experiment in the recovery region. In the 20 deg ramp case, with a relatively finer grid, the hybrid scheme characterized by the grid-independent blending function predicted the flow field well in both the separation region and the recovery region. Therefore, with an "appropriately" fine grid, the current hybrid schemes are promising for the simulation of shock wave/boundary layer interaction problems.
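
    As a generic illustration of a grid-independent blend (not the specific functions developed in this work), a hyperbolic-tangent function of the ratio of a modeled turbulence length scale to wall distance can switch the closure from RANS near the wall to LES away from it:

      import numpy as np

      def blend(wall_distance, turb_length_scale, c=1.0, n=4.0):
          """Returns ~1 (pure RANS) near the wall and ~0 (pure LES) far from it."""
          ratio = turb_length_scale / (c * np.maximum(wall_distance, 1e-12))
          return np.tanh(ratio**n)

      d = np.linspace(1e-4, 0.1, 5)        # wall distance, m (arbitrary)
      l_turb = 5e-3                        # modeled turbulence length scale, m (arbitrary)
      f = blend(d, l_turb)
      # The blended eddy viscosity would then be: nu_t = f * nu_t_RANS + (1 - f) * nu_t_SGS
      print(np.round(f, 3))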

  2. Population-averaged macaque brain atlas with high-resolution ex vivo DTI integrated into in vivo space.

    Science.gov (United States)

    Feng, Lei; Jeon, Tina; Yu, Qiaowen; Ouyang, Minhui; Peng, Qinmu; Mishra, Virendra; Pletikos, Mihovil; Sestan, Nenad; Miller, Michael I; Mori, Susumu; Hsiao, Steven; Liu, Shuwei; Huang, Hao

    2017-12-01

    Animal models of the rhesus macaque (Macaca mulatta), the most widely used nonhuman primate, have been irreplaceable in neurobiological studies. However, a population-averaged macaque brain diffusion tensor imaging (DTI) atlas, including comprehensive gray and white matter labeling as well as bony and facial landmarks guiding invasive experimental procedures, is not available. The macaque white matter tract pathways and microstructures have rarely been recorded. Here, we established a population-averaged macaque brain atlas with high-resolution ex vivo DTI integrated into in vivo space, incorporating bony and facial landmarks, and delineated the microstructures and three-dimensional pathways of major white matter tracts. In vivo MRI/DTI and ex vivo (postmortem) DTI of ten rhesus macaque brains were acquired. A single-subject macaque brain DTI template was obtained by transforming the postmortem high-resolution DTI data into in vivo space. Ex vivo DTI of the ten macaque brains was then averaged in the in vivo single-subject template space to generate the population-averaged macaque brain DTI atlas. The white matter tracts were traced with DTI-based tractography. One hundred and eighteen neural structures, including all cortical gyri, white matter tracts and subcortical nuclei, were labeled manually on population-averaged DTI-derived maps. The in vivo microstructural metrics of fractional anisotropy, axial, radial and mean diffusivity of the traced white matter tracts were measured. The population-averaged digital atlas integrated into in vivo space can be used to label the experimental macaque brain automatically. Bony and facial landmarks will be available for guiding invasive procedures. The DTI metric measurements offer unique insights into the heterogeneous microstructural profiles of different white matter tracts.
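
    The scalar metrics listed above follow from the eigenvalues of the diffusion tensor through the standard textbook definitions (this is not the atlas-construction pipeline itself):

      import numpy as np

      def dti_metrics(eigvals):
          """Fractional anisotropy, mean/axial/radial diffusivity from tensor eigenvalues."""
          l1, l2, l3 = sorted(eigvals, reverse=True)   # axial diffusivity = largest eigenvalue
          md = (l1 + l2 + l3) / 3.0                    # mean diffusivity
          rd = (l2 + l3) / 2.0                         # radial diffusivity
          num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
          fa = np.sqrt(1.5 * num / (l1**2 + l2**2 + l3**2))
          return {"FA": fa, "MD": md, "AD": l1, "RD": rd}

      # Synthetic white-matter-like eigenvalues in mm^2/s (illustrative only).
      print(dti_metrics([1.7e-3, 0.4e-3, 0.3e-3]))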

  3. The Application of Cryogenic Laser Physics to the Development of High Average Power Ultra-Short Pulse Lasers

    Directory of Open Access Journals (Sweden)

    David C. Brown

    2016-01-01

    Full Text Available Ultrafast laser physics continues to advance at a rapid pace, driven primarily by the development of more powerful and sophisticated diode-pumping sources, the development of new laser materials, and new laser and amplification approaches such as optical parametric chirped-pulse amplification. The rapid development of high average power cryogenic laser sources seems likely to play a crucial role in realizing the long-sought goal of powerful ultrafast sources that offer concomitant high peak and average powers. In this paper, we review the optical, thermal, thermo-optic and laser parameters important to cryogenic laser technology, survey recently achieved progress in lasers and laser materials, trace the progression of cryogenic laser technology, discuss its importance in ultrafast laser science, and consider what advances are likely to be achieved in the near future.

  4. Operation of the High Flux Reactor. Annual report 1985

    International Nuclear Information System (INIS)

    1985-01-01

    This year was characterized by the end of a major rebuilding of the installation during which the reactor vessel and its peripheral components were replaced by new and redesigned equipment. Both operational safety and experimental use were largely improved by the replacement. The reactor went back to routine operation on February 14, 1985, and has been operating without problem since then. All performance parameters were met. Other upgrading actions started during the year concerned new heat exchangers and improvements to the reactor building complex. The experimental load of the High Flux Reactor reached a satisfactory level with an average of 57%. New developments aimed at future safety related irradiation tests and at novel applications of neutrons from the horizontal beam tubes. A unique remote encapsulation hot cell facility became available adding new possibilities for fast breeder fuel testing and for intermediate specimen examination. The HFR Programme hosted an international meeting on development and use of reduced enrichment fuel for research reactors. All aspects of core physics, manufacture technology, and licensing of novel, proliferation-free, research reactor fuel were debated

  5. Attitudes and Opinions from the Nation's High Achieving Teens: 26th Annual Survey of High Achievers.

    Science.gov (United States)

    Who's Who among American High School Students, Lake Forest, IL.

    A national survey of 3,351 high achieving high school students (junior and senior level) was conducted. All students had A or B averages. Topics covered include lifestyles, political beliefs, violence and entertainment, education, cheating, school violence, sexual violence and date rape, peer pressure, popularity, suicide, drugs and alcohol,…

  6. High-Order Analytic Expansion of Disturbing Function for Doubly Averaged Circular Restricted Three-Body Problem

    Directory of Open Access Journals (Sweden)

    Takashi Ito

    2016-01-01

    Full Text Available Terms in the analytic expansion of the doubly averaged disturbing function for the circular restricted three-body problem using the Legendre polynomials are explicitly calculated up to the fourteenth order of the semimajor axis ratio (α) between the perturbed and perturbing bodies in the inner case (α < 1). The expansion outcome is compared with results from numerical quadrature on an equipotential surface. Comparison with direct numerical integration of the equations of motion is also presented. Overall, the high-order analytic expansion of the doubly averaged disturbing function yields a result that agrees well with the numerical quadrature and with the numerical integration. Local extremums of the doubly averaged disturbing function are quantitatively reproduced by the high-order analytic expansion even when α is large. Although the analytic expansion is not applicable in some circumstances, such as when the orbits of the perturbed and perturbing bodies cross or when strong mean motion resonance is at work, our expansion result will be useful for analytically understanding the long-term dynamical behavior of perturbed bodies in circular restricted three-body systems.
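
    The starting point of such expansions is the Legendre series of the inverse mutual distance in the inner case (α < 1); the sketch below checks a truncated series of this standard identity against the exact value (it does not reproduce the paper's doubly averaged terms).

      import numpy as np
      from scipy.special import eval_legendre

      def inverse_distance_series(alpha, cos_psi, order):
          """Truncated generating-function series: sum_n alpha**n * P_n(cos psi)."""
          return sum(alpha**n * eval_legendre(n, cos_psi) for n in range(order + 1))

      alpha, cos_psi = 0.4, 0.3
      exact = 1.0 / np.sqrt(1.0 - 2.0 * alpha * cos_psi + alpha**2)   # r'/|r - r'|
      for order in (2, 6, 14):
          approx = inverse_distance_series(alpha, cos_psi, order)
          print(f"order {order:2d}: relative error = {abs(approx - exact) / exact:.2e}")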

  7. 1980 Annual status report: operation of the high flux reactor

    International Nuclear Information System (INIS)

    1981-01-01

    HFR Petten has been operated in 1980 in fulfilment of the 1980/83 JRC Programme Decision. Both reactor operation and utilization data have been met within a few percent of the goals set out in the annual working schedule, in support of a large variety of research programmes. Major improvements to experimental facilities have been introduced during the year and future modernization has been prepared

  8. High annual risk of tuberculosis infection among nursing students in South India: a cohort study.

    Directory of Open Access Journals (Sweden)

    Devasahayam J Christopher

    Full Text Available Nurses in developing countries are frequently exposed to infectious tuberculosis (TB) patients, and have a high prevalence of TB infection. To estimate the incidence of new TB infection, we recruited a cohort of young nursing trainees at the Christian Medical College in Southern India. Annual tuberculin skin testing (TST) was conducted to assess the annual risk of TB infection (ARTI) in this cohort. 436 nursing students completed baseline two-step TST testing in 2007 and 217 were TST-negative and therefore eligible for repeat testing in 2008. 181 subjects completed a detailed questionnaire on exposure to tuberculosis from workplace and social contacts. A physician verified the questionnaire and clinical log book and screened the subjects for symptoms of active TB. The majority of nursing students (96.7%) were females, almost 84% were under 22 years of age, and 80% had BCG scars. Among those students who underwent repeat testing in 2008, 14 had TST conversions using the ATS/CDC/IDSA conversion definition of a 10 mm or greater increase over baseline. The ARTI was therefore estimated as 7.8% (95% CI: 4.3-12.8%). This was significantly higher than the national average ARTI of 1.5%. Sputum collection and caring for pulmonary TB patients were both high risk activities that were associated with TST conversions in this young nursing cohort. Our study showed a high ARTI among young nursing trainees, substantially higher than that seen in the general Indian population. Indian healthcare providers and the Indian Revised National TB Control Programme will need to implement internationally recommended TB infection control interventions to protect the health care workforce.

  9. Design of an L-band normally conducting RF gun cavity for high peak and average RF power

    Energy Technology Data Exchange (ETDEWEB)

    Paramonov, V., E-mail: paramono@inr.ru [Institute for Nuclear Research of Russian Academy of Sciences, 60-th October Anniversary prospect 7a, 117312 Moscow (Russian Federation); Philipp, S. [Deutsches Elektronen-Synchrotron DESY, Platanenallee 6, D-15738 Zeuthen (Germany); Rybakov, I.; Skassyrskaya, A. [Institute for Nuclear Research of Russian Academy of Sciences, 60-th October Anniversary prospect 7a, 117312 Moscow (Russian Federation); Stephan, F. [Deutsches Elektronen-Synchrotron DESY, Platanenallee 6, D-15738 Zeuthen (Germany)

    2017-05-11

    To provide high quality electron bunches for linear accelerators used in free electron lasers and particle colliders, RF gun cavities operate with extreme electric fields, resulting in a high pulsed RF power. The main L-band superconducting linacs of such facilities also require a long RF pulse length, resulting in a high average dissipated RF power in the gun cavity. The newly developed cavity, based on the proven advantages of the existing DESY RF gun cavities, underwent significant changes. The shape of the cells is optimized to reduce the maximal surface electric field and the RF loss power. Furthermore, the cavity is equipped with an RF probe to measure the field amplitude and phase. The elaborated cooling circuit design results in a lower temperature rise on the cavity RF surface and permits a higher dissipated RF power. The paper presents the main solutions and results of the cavity design.

  10. A high-average power tapered FEL amplifier at submillimeter frequencies using sheet electron beams and short-period wigglers

    International Nuclear Information System (INIS)

    Bidwell, S.W.; Radack, D.J.; Antonsen, T.M. Jr.; Booske, J.H.; Carmel, Y.; Destler, W.W.; Granatstein, V.L.; Levush, B.; Latham, P.E.; Zhang, Z.X.

    1990-01-01

    A high-average-power FEL amplifier operating at submillimeter frequencies is under development at the University of Maryland. Program goals are to produce a CW, ∼1 MW, FEL amplifier source at frequencies between 280 GHz and 560 GHz. To this end, a high-gain, high-efficiency, tapered FEL amplifier using a sheet electron beam and a short-period (superconducting) wiggler has been chosen. Development of this amplifier is progressing in three stages: (1) beam propagation through a long length (∼1 m) of short-period (λw = 1 cm) wiggler, (2) demonstration of a proof-of-principle amplifier experiment at 98 GHz, and (3) designs of a superconducting tapered FEL amplifier meeting the ultimate design goal specifications. 17 refs., 1 fig., 1 tab

  11. A Front End for Multipetawatt Lasers Based on a High-Energy, High-Average-Power Optical Parametric Chirped-Pulse Amplifier

    International Nuclear Information System (INIS)

    Bagnoud, V.

    2004-01-01

    We report on a high-energy, high-average-power optical parametric chirped-pulse amplifier developed as the front end for the OMEGA EP laser. The amplifier provides a gain larger than 10⁹ in two stages, leading to a total energy of 400 mJ with a pump-to-signal conversion efficiency higher than 25%

  12. High Temperature Materials Laboratory User Program: 19th Annual Report, October 1, 2005 - September 30, 2006

    Energy Technology Data Exchange (ETDEWEB)

    Pasto, Arvid [ORNL

    2007-08-01

    This Annual Report contains an overview of the High Temperature Materials Laboratory User Program and includes selected highlights of user activities for FY2006. The report is submitted to individuals within the sponsoring DOE agency and to other interested individuals.

  13. Leveraging Mechanism Simplicity and Strategic Averaging to Identify Signals from Highly Heterogeneous Spatial and Temporal Ozone Data

    Science.gov (United States)

    Brown-Steiner, B.; Selin, N. E.; Prinn, R. G.; Monier, E.; Garcia-Menendez, F.; Tilmes, S.; Emmons, L. K.; Lamarque, J. F.; Cameron-Smith, P. J.

    2017-12-01

    We summarize two methods to aid in the identification of ozone signals from underlying spatially and temporally heterogeneous data in order to help research communities avoid the sometimes burdensome computational costs of high-resolution high-complexity models. The first method utilizes simplified chemical mechanisms (a Reduced Hydrocarbon Mechanism and a Superfast Mechanism) alongside a more complex mechanism (MOZART-4) within CESM CAM-Chem to extend the number of simulated meteorological years (or add additional members to an ensemble) for a given modeling problem. The Reduced Hydrocarbon mechanism is twice as fast, and the Superfast mechanism is three times faster than the MOZART-4 mechanism. We show that simplified chemical mechanisms are largely capable of simulating surface ozone across the globe as well as the more complex chemical mechanisms, and where they are not capable, a simple standardized anomaly emulation approach can correct for their inadequacies. The second method uses strategic averaging over both temporal and spatial scales to filter out the highly heterogeneous noise that underlies ozone observations and simulations. This method allows for a selection of temporal and spatial averaging scales that match a particular signal strength (between 0.5 and 5 ppbv), and enables the identification of regions where an ozone signal can rise above the ozone noise over a given region and a given period of time. In conjunction, these two methods can be used to "scale down" chemical mechanism complexity and quantitatively determine spatial and temporal scales that could enable research communities to utilize simplified representations of atmospheric chemistry and thereby maximize their productivity and efficiency given computational constraints. While this framework is here applied to ozone data, it could also be applied to a broad range of geospatial data sets (observed or modeled) that have spatial and temporal coverage.
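
    The averaging strategy can be illustrated with a toy calculation (synthetic white noise, which understates real autocorrelation): the temporal window is widened until the spread of the window means drops below the target signal strength.

      import numpy as np

      rng = np.random.default_rng(0)
      daily_ozone = 45.0 + 8.0 * rng.standard_normal(10 * 365)   # synthetic daily surface ozone, ppbv

      target_signal_ppbv = 1.0
      for window_days in (1, 7, 30, 90, 365):
          n_windows = daily_ozone.size // window_days
          means = daily_ozone[: n_windows * window_days].reshape(n_windows, window_days).mean(axis=1)
          spread = means.std(ddof=1)
          verdict = "signal detectable" if spread < target_signal_ppbv else "signal hidden in noise"
          print(f"{window_days:4d}-day averages: spread = {spread:4.2f} ppbv -> {verdict}")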

  14. High Temperature Materials Laboratory Thirteenth Annual Report: October 1999 Through September 2000; ANNUAL

    International Nuclear Information System (INIS)

    Pasto, AE

    2001-01-01

    The High Temperature Materials Laboratory (HTML) is designed to assist American industries, universities, and governmental agencies develop advanced materials by providing a skilled staff and numerous sophisticated, often one-of-a-kind pieces of materials characterization equipment. It is a nationally designated user facility sponsored by the U.S. Department of Energy's (DOE's) Office of Transportation Technologies, Energy Efficiency and Renewable Energy. Physically, it is a 64,500-ft² building at the Oak Ridge National Laboratory (ORNL). The HTML houses six "user centers," which are clusters of specialized equipment designed for specific types of properties measurements. The HTML was conceived and built in the mid-1980s in response to the oil embargoes of the 1970s. The concept was to build a facility that would allow direct work with American industry, academia, and government laboratories in providing advanced high-temperature materials such as structural ceramics for energy-efficient engines. The HTML's scope of work has since expanded to include other, non-high-temperature materials of interest to transportation and other industries

  15. Development of a 33 kV, 20 A long pulse converter modulator for high average power klystron

    Energy Technology Data Exchange (ETDEWEB)

    Reghu, T.; Mandloi, V.; Shrivastava, Purushottam [Pulsed High Power Microwave Section, Raja Ramanna Centre for Advanced Technology, Indore 452013, M.P. (India)

    2014-05-15

    Research, design, and development of high average power, long pulse modulators for the proposed Indian Spallation Neutron Source are underway at Raja Ramanna Centre for Advanced Technology. With this objective, a prototype of long pulse modulator capable of delivering 33 kV, 20 A at 5 Hz repetition rate has been designed and developed. Three Insulated Gate Bipolar Transistors (IGBT) based switching modules driving high frequency, high voltage transformers have been used to generate high voltage output. The IGBT based switching modules are shifted in phase by 120° with respect to each other. The switching frequency is 25 kHz. Pulses of 1.6 ms pulse width, 80 μs rise time, and 70 μs fall time have been achieved at the modulator output. A droop of ±0.6% is achieved using a simple segmented digital droop correction technique. The total fault energy transferred to the load during fault has been measured by conducting wire burn tests and is found to be within 3.5 J.

  16. Development of a 33 kV, 20 A long pulse converter modulator for high average power klystron

    International Nuclear Information System (INIS)

    Reghu, T.; Mandloi, V.; Shrivastava, Purushottam

    2014-01-01

    Research, design, and development of high average power, long pulse modulators for the proposed Indian Spallation Neutron Source are underway at Raja Ramanna Centre for Advanced Technology. With this objective, a prototype of long pulse modulator capable of delivering 33 kV, 20 A at 5 Hz repetition rate has been designed and developed. Three Insulated Gate Bipolar Transistors (IGBT) based switching modules driving high frequency, high voltage transformers have been used to generate high voltage output. The IGBT based switching modules are shifted in phase by 120° with respect to each other. The switching frequency is 25 kHz. Pulses of 1.6 ms pulse width, 80 μs rise time, and 70 μs fall time have been achieved at the modulator output. A droop of ±0.6% is achieved using a simple segmented digital droop correction technique. The total fault energy transferred to the load during fault has been measured by conducting wire burn tests and is found to be within 3.5 J

  17. Development of a 33 kV, 20 A long pulse converter modulator for high average power klystron

    Science.gov (United States)

    Reghu, T.; Mandloi, V.; Shrivastava, Purushottam

    2014-05-01

    Research, design, and development of high average power, long pulse modulators for the proposed Indian Spallation Neutron Source are underway at Raja Ramanna Centre for Advanced Technology. With this objective, a prototype of long pulse modulator capable of delivering 33 kV, 20 A at 5 Hz repetition rate has been designed and developed. Three Insulated Gate Bipolar Transistors (IGBT) based switching modules driving high frequency, high voltage transformers have been used to generate high voltage output. The IGBT based switching modules are shifted in phase by 120° with respect to each other. The switching frequency is 25 kHz. Pulses of 1.6 ms pulse width, 80 μs rise time, and 70 μs fall time have been achieved at the modulator output. A droop of ±0.6% is achieved using a simple segmented digital droop correction technique. The total fault energy transferred to the load during fault has been measured by conducting wire burn tests and is found to be within 3.5 J.

  18. High average power, diode pumped petawatt laser systems: a new generation of lasers enabling precision science and commercial applications

    Science.gov (United States)

    Haefner, C. L.; Bayramian, A.; Betts, S.; Bopp, R.; Buck, S.; Cupal, J.; Drouin, M.; Erlandson, A.; Horáček, J.; Horner, J.; Jarboe, J.; Kasl, K.; Kim, D.; Koh, E.; Koubíková, L.; Maranville, W.; Marshall, C.; Mason, D.; Menapace, J.; Miller, P.; Mazurek, P.; Naylon, A.; Novák, J.; Peceli, D.; Rosso, P.; Schaffers, K.; Sistrunk, E.; Smith, D.; Spinka, T.; Stanley, J.; Steele, R.; Stolz, C.; Suratwala, T.; Telford, S.; Thoma, J.; VanBlarcom, D.; Weiss, J.; Wegner, P.

    2017-05-01

    Large laser systems that deliver optical pulses with peak powers exceeding one Petawatt (PW) have been constructed at dozens of research facilities worldwide and have fostered research in High-Energy-Density (HED) Science, High-Field and nonlinear physics [1]. Furthermore, the high intensities exceeding 1018W/cm2 allow for efficiently driving secondary sources that inherit some of the properties of the laser pulse, e.g. pulse duration, spatial and/or divergence characteristics. In the intervening decades since that first PW laser, single-shot proof-of-principle experiments have been successful in demonstrating new high-intensity laser-matter interactions and subsequent secondary particle and photon sources. These secondary sources include generation and acceleration of charged-particle (electron, proton, ion) and neutron beams, and x-ray and gamma-ray sources, generation of radioisotopes for positron emission tomography (PET), targeted cancer therapy, medical imaging, and the transmutation of radioactive waste [2, 3]. Each of these promising applications requires lasers with peak power of hundreds of terawatt (TW) to petawatt (PW) and with average power of tens to hundreds of kW to achieve the required secondary source flux.

  19. Iterative Bayesian Model Averaging: a method for the application of survival analysis to high-dimensional microarray data

    Directory of Open Access Journals (Sweden)

    Raftery Adrian E

    2009-02-01

    Full Text Available Abstract Background Microarray technology is increasingly used to identify potential biomarkers for cancer prognostics and diagnostics. Previously, we have developed the iterative Bayesian Model Averaging (BMA algorithm for use in classification. Here, we extend the iterative BMA algorithm for application to survival analysis on high-dimensional microarray data. The main goal in applying survival analysis to microarray data is to determine a highly predictive model of patients' time to event (such as death, relapse, or metastasis using a small number of selected genes. Our multivariate procedure combines the effectiveness of multiple contending models by calculating the weighted average of their posterior probability distributions. Our results demonstrate that our iterative BMA algorithm for survival analysis achieves high prediction accuracy while consistently selecting a small and cost-effective number of predictor genes. Results We applied the iterative BMA algorithm to two cancer datasets: breast cancer and diffuse large B-cell lymphoma (DLBCL data. On the breast cancer data, the algorithm selected a total of 15 predictor genes across 84 contending models from the training data. The maximum likelihood estimates of the selected genes and the posterior probabilities of the selected models from the training data were used to divide patients in the test (or validation dataset into high- and low-risk categories. Using the genes and models determined from the training data, we assigned patients from the test data into highly distinct risk groups (as indicated by a p-value of 7.26e-05 from the log-rank test. Moreover, we achieved comparable results using only the 5 top selected genes with 100% posterior probabilities. On the DLBCL data, our iterative BMA procedure selected a total of 25 genes across 3 contending models from the training data. Once again, we assigned the patients in the validation set to significantly distinct risk groups (p
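
    The prediction step of the averaging procedure reduces to a posterior-probability-weighted mean of the per-model risk scores; a minimal sketch with hypothetical models and weights (not the gene selections reported in the paper) is shown below.

      import numpy as np

      posterior_prob = np.array([0.55, 0.30, 0.15])       # P(M_k | data) for three contending models

      # Each contending model: (indices of its selected genes, Cox-type coefficients).
      models = [
          (np.array([0, 3]), np.array([0.8, -0.5])),
          (np.array([0, 1, 4]), np.array([0.6, 0.4, -0.3])),
          (np.array([2]), np.array([1.1])),
      ]

      def bma_risk(expression, models, weights):
          """expression: 1-D array of gene-expression values for one patient."""
          scores = np.array([expression[idx] @ coef for idx, coef in models])
          return float(weights @ scores)

      patient = np.array([1.2, -0.4, 0.9, 0.1, -1.0])     # hypothetical expression profile
      risk = bma_risk(patient, models, posterior_prob)
      print(f"BMA risk score = {risk:.3f} ({'high' if risk > 0.0 else 'low'}-risk; threshold illustrative)")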

  20. Overview of the HiLASE project: high average power pulsed DPSSL systems for research and industry

    Czech Academy of Sciences Publication Activity Database

    Divoký, Martin; Smrž, Martin; Chyla, Michal; Sikocinski, Pawel; Severová, Patricie; Novák, Ondřej; Huynh, Jaroslav; Nagisetty, Siva S.; Miura, Taisuke; Pilař, Jan; Slezák, Jiří; Sawicka, Magdalena; Jambunathan, Venkatesan; Vanda, Jan; Endo, Akira; Lucianetti, Antonio; Rostohar, Danijela; Mason, P.D.; Phillips, P.J.; Ertel, K.; Banerjee, S.; Hernandez-Gomez, C.; Collier, J.L.; Mocek, Tomáš

    2014-01-01

    Vol. 2, SI (2014), pp. 1-10 ISSN 2095-4719 R&D Projects: GA MŠk ED2.1.00/01.0027; GA MŠk EE2.3.20.0143; GA MŠk EE2.3.30.0057 Grant - others:HILASE(XE) CZ.1.05/2.1.00/01.0027; OP VK 6(XE) CZ.1.07/2.3.00/20.0143; OP VK 4 POSTDOK(XE) CZ.1.07/2.3.00/30.0057 Institutional support: RVO:68378271 Keywords: DPSSL * Yb3+:YAG * thin-disk * multi-slab * pulsed high average power laser Subject RIV: BH - Optics, Masers, Lasers

  1. High Recharge Areas in the Choushui River Alluvial Fan (Taiwan) Assessed from Recharge Potential Analysis and Average Storage Variation Indexes

    Directory of Open Access Journals (Sweden)

    Jui-Pin Tsai

    2015-03-01

    Full Text Available High recharge areas significantly influence the groundwater quality and quantity in regional groundwater systems. Many studies have applied recharge potential analysis (RPA) to estimate groundwater recharge potential (GRP) and have delineated high recharge areas based on the estimated GRP. However, most of these studies define the RPA parameters by supposition, and this represents a major source of uncertainty in applying RPA. To define the RPA parameter values objectively, without supposition, this study proposes a systematic method based on the theory of parameter identification. A surrogate variable, namely the average storage variation (ASV) index, is developed to calibrate the RPA parameters, because of the lack of direct GRP observations. The study results show that the correlations between the ASV indexes and the computed GRP values improved from 0.67 before calibration to 0.85 after calibration, thus indicating that the calibrated RPA parameters represent the recharge characteristics of the study area well; these data also highlight how defining the RPA parameters with ASV indexes can help to improve accuracy. The calibrated RPA parameters were used to estimate the GRP distribution of the study area, and the GRP values were graded into five levels. Areas at the high and excellent levels are defined as high recharge areas, which compose 7.92% of the study area. Overall, this study demonstrates that the developed approach can objectively define the RPA parameters and the high recharge areas of the Choushui River alluvial fan, and the results should serve as valuable references for the Taiwanese government in its efforts to conserve the groundwater quality and quantity of the study area.
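
    The calibration idea, choosing RPA factor weights that maximize the correlation between the computed GRP and the ASV index at monitoring sites, can be sketched with a coarse grid search over hypothetical data (the factor scores, ASV values and weight grid below are all invented).

      import itertools
      import numpy as np

      rng = np.random.default_rng(1)
      n_sites = 40
      factors = rng.uniform(1, 10, size=(n_sites, 3))     # e.g. lithology, slope, land-use scores
      asv_index = factors @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.5, n_sites)

      best_r, best_w = -np.inf, None
      for w in itertools.product(np.linspace(0.1, 0.8, 8), repeat=3):
          w = np.array(w) / sum(w)                        # normalise weights to sum to 1
          grp = factors @ w                               # computed GRP at the monitoring sites
          r = np.corrcoef(grp, asv_index)[0, 1]
          if r > best_r:
              best_r, best_w = r, w

      print(f"best correlation r = {best_r:.3f} with weights {np.round(best_w, 2)}")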

  2. 1981 Annual status report. High-temperature materials

    International Nuclear Information System (INIS)

    1981-01-01

    The high temperature materials programme is executed at the JRC, Petten Establishment and has for the 1980/83 programme period the objective to promote within the European Community the development of high temperature materials required for future energy technologies. A range of engineering studies is being carried out. A data bank storing factual data on alloys for high temperature applications is being developed and has reached the operational phase

  3. 1982 Annual status report: high-temperature materials

    International Nuclear Information System (INIS)

    Van de Voorde, M.

    1983-01-01

    The High Temperature Materials Programme is executed at the JRC, Petten Establishment and has, for the 1980/83 programme period, the objective to promote within the European Community the development of high temperature materials required for future energy technologies. Materials and engineering studies include: corrosion with or without load, mechanical properties under static or dynamic loads, surface protection, creep of tubular components in corrosive environments, and a high temperature materials data bank

  4. THE BARYON CYCLE AT HIGH REDSHIFTS: EFFECTS OF GALACTIC WINDS ON GALAXY EVOLUTION IN OVERDENSE AND AVERAGE REGIONS

    Energy Technology Data Exchange (ETDEWEB)

    Sadoun, Raphael [Department of Physics and Astronomy, University of Utah, Salt Lake City, UT 84112-0830 (United States); Shlosman, Isaac; Choi, Jun-Hwan; Romano-Díaz, Emilio, E-mail: raphael.sadoun@utah.edu [Department of Physics and Astronomy, University of Kentucky, Lexington, KY 40506-0055 (United States)

    2016-10-01

    We employ high-resolution cosmological zoom-in simulations focusing on a high-sigma peak and an average cosmological field at z ∼ 6–12 in order to investigate the influence of environment and baryonic feedback on galaxy evolution in the reionization epoch. Strong feedback, e.g., galactic winds, caused by elevated star formation rates (SFRs) is expected to play an important role in this evolution. We compare different outflow prescriptions: (i) constant wind velocity (CW), (ii) variable wind scaling with galaxy properties (VW), and (iii) no outflows (NW). The overdensity leads to accelerated evolution of dark matter and baryonic structures, absent from the “normal” region, and to shallow galaxy stellar mass functions at the low-mass end. Although CW shows little dependence on the environment, the more physically motivated VW model does exhibit this effect. In addition, VW can reproduce the observed specific SFR (sSFR) and the sSFR–stellar mass relation, which CW and NW fail to satisfy simultaneously. Winds also differ substantially in affecting the state of the intergalactic medium (IGM). The difference lies in the volume-filling factor of hot, high-metallicity gas, which is near unity for CW, while such gas remains confined in massive filaments for VW, and locked up in galaxies for NW. Such gas is nearly absent from the normal region. Although all wind models suffer from deficiencies, the VW model seems to be promising in correlating the outflow properties with those of host galaxies. Further constraints on the state of the IGM at high z are needed to separate different wind models.

  5. Performance study of highly efficient 520 W average power long pulse ceramic Nd:YAG rod laser

    Science.gov (United States)

    Choubey, Ambar; Vishwakarma, S. C.; Ali, Sabir; Jain, R. K.; Upadhyaya, B. N.; Oak, S. M.

    2013-10-01

    We report a performance study of a 2 at.% doped ceramic Nd:YAG rod for long pulse laser operation in the millisecond regime with pulse durations in the range of 0.5-20 ms. A maximum average output power of 520 W with 180 J maximum pulse energy has been achieved with a slope efficiency of 5.4% using a dual rod configuration, which is the highest for typical lamp pumped ceramic Nd:YAG lasers. The laser output characteristics of the ceramic Nd:YAG rod were found to be nearly equivalent or superior to those of a high-quality single-crystal Nd:YAG rod. The laser pump chamber and resonator were designed and optimized to achieve high efficiency and good beam quality, with a beam parameter product of 16 mm mrad (M^2 ~ 47). The laser output beam was efficiently coupled through a 400 μm core diameter optical fiber with 90% overall transmission efficiency. This ceramic Nd:YAG laser will be useful for various material processing applications in industry.

  6. Analysis of high-frequency energy in long-term average spectra of singing, speech, and voiceless fricatives

    Science.gov (United States)

    Monson, Brian B.; Lotto, Andrew J.; Story, Brad H.

    2012-01-01

    The human singing and speech spectrum includes energy above 5 kHz. To begin an in-depth exploration of this high-frequency energy (HFE), a database of anechoic high-fidelity recordings of singers and talkers was created and analyzed. Third-octave band analysis from the long-term average spectra showed that production level (soft vs normal vs loud), production mode (singing vs speech), and phoneme (for voiceless fricatives) all significantly affected HFE characteristics. Specifically, increased production level caused an increase in absolute HFE level, but a decrease in relative HFE level. Singing exhibited higher levels of HFE than speech in the soft and normal conditions, but not in the loud condition. Third-octave band levels distinguished phoneme class of voiceless fricatives. Female HFE levels were significantly greater than male levels only above 11 kHz. This information is pertinent to various areas of acoustics, including vocal tract modeling, voice synthesis, augmentative hearing technology (hearing aids and cochlear implants), and training/therapy for singing and speech. PMID:22978902

  7. Analysis of high-frequency energy in long-term average spectra of singing, speech, and voiceless fricatives.

    Science.gov (United States)

    Monson, Brian B; Lotto, Andrew J; Story, Brad H

    2012-09-01

    The human singing and speech spectrum includes energy above 5 kHz. To begin an in-depth exploration of this high-frequency energy (HFE), a database of anechoic high-fidelity recordings of singers and talkers was created and analyzed. Third-octave band analysis from the long-term average spectra showed that production level (soft vs normal vs loud), production mode (singing vs speech), and phoneme (for voiceless fricatives) all significantly affected HFE characteristics. Specifically, increased production level caused an increase in absolute HFE level, but a decrease in relative HFE level. Singing exhibited higher levels of HFE than speech in the soft and normal conditions, but not in the loud condition. Third-octave band levels distinguished phoneme class of voiceless fricatives. Female HFE levels were significantly greater than male levels only above 11 kHz. This information is pertinent to various areas of acoustics, including vocal tract modeling, voice synthesis, augmentative hearing technology (hearing aids and cochlear implants), and training/therapy for singing and speech.

  8. Caltrans Average Annual Daily Traffic Volumes (2004)

    Data.gov (United States)

    California Environmental Health Tracking Program — [ from http://www.ehib.org/cma/topic.jsp?topic_key=79 ] Traffic exhaust pollutants include compounds such as carbon monoxide, nitrogen oxides, particulates (fine...

  9. High energy hadron-hadron collisions. Annual progress report

    International Nuclear Information System (INIS)

    Chou, T.T.

    1979-03-01

    Work on high energy hadron-hadron collisions in the geometrical model, performed under DOE Contract No. EY-76-S-09-0946, is summarized. Specific items studied include the behavior of elastic hadron scattering at super-high energies and the existence of many dips, the computation of meson radii in the geometrical model, and hadronic matter current effects in inelastic two-body collisions.

  10. Annual report of the Division of High Temperature Engineering

    International Nuclear Information System (INIS)

    1982-10-01

    Research activities conducted in the Division of High Temperature Engineering during fiscal 1981 are described. The R&D work of the division is mainly related to a multi-purpose very high-temperature gas-cooled reactor (VHTR) and a fusion reactor. This report covers the main results obtained on materials testing, development of computer codes, heat transfer, fluid dynamics and structural mechanics, as well as the construction of the M + A (Mother and Adapter) section of HENDEL (Helium Engineering Demonstration Loop). (author)

  11. 1982 Annual status report: operation of the high flux reactor

    International Nuclear Information System (INIS)

    1983-01-01

    The high flux materials testing reactor was operated in 1982 within a few percent of the pre-set schedule, attaining 73% overall availability. Its utilization reached another record figure for its 20 years of operation: 81% excluding, and 92% including, the low-enrichment test elements irradiated during the year.

  12. The high field superconducting magnet program at LLNL: Annual report

    International Nuclear Information System (INIS)

    Miller, J.R.; Chaplin, M.R.; Kerns, J.A.; Leber, R.L.; Rosdahl, A.R.; Slack, D.S.; Summers, L.T.; Zbasnik, J.P.

    1986-01-01

    In FY 86 the program continued along several interrelated thrust areas. These thrust areas have been broadly labeled as follows: (1) Superconductor Research and Technology; (2) Magnet Systems Materials Technology; (3) Magnet Systems Design Technology; (4) High Field Test Facility; and (5) Technology Transfer

  13. Annual progress report 1988, operation of the high flux reactor

    International Nuclear Information System (INIS)

    1989-01-01

    In 1988 the High Flux Reactor Petten was routinely operated without any unforeseen event. The availability was 99% of scheduled operation. Utilization of the irradiation positions amounted to 80% of the practical occupation limit. The exploitation pattern comprised nuclear energy deployment, fundamental research with neutrons, and radioisotope production. General activities in support of running irradiation programmes progressed in the normal way. Development activities addressed upgrading of irradiation devices, neutron radiography and neutron capture therapy

  14. KEK (High Energy Accelerator Research Organization) annual report, 2005

    International Nuclear Information System (INIS)

    2006-01-01

    This report summarizes the research activities of KEK (High Energy Accelerator Research Organization) in the fiscal year 2005. Two years have passed since KEK was reorganized as an inter-university research institute corporation, and KEK continues to facilitate a wide range of research programs based on high-energy accelerators for users from universities. KEK consists of two research institutes, the Institute of Particle and Nuclear Studies (IPNS) and the Institute of Materials Structure Science (IMSS), and two laboratories, the Accelerator Laboratory and the Applied Research Laboratory. KEK has been operating four major accelerator facilities in Tsukuba: the 12 GeV Proton Synchrotron (PS), the KEK B-factory (KEKB), the Photon Factory (PF), and the Electron/Positron Injector Linac. We are now engaged in the construction of the Japan Proton Accelerator Research Complex (J-PARC) in Tokai in cooperation with the Japan Atomic Energy Agency (JAEA). The J-PARC Center was established in February 2006 to take full responsibility for the operation of J-PARC. With the progress of construction, the PS ceased operation at the end of March 2006 after a history of 26 years. The task of KEK is to play a key role in the fields of elementary particles, nuclei, materials and life science as one of the leading research facilities of the world. The fiscal year 2005 activities of both KEK employees and visiting researchers yielded excellent outcomes in these research fields. (J.P.N.)

  15. High Energy Physics Group. Annual progress report, fiscal year 1983

    International Nuclear Information System (INIS)

    1983-01-01

    Perhaps the most significant progress during the past twelve months of the Hawaii experimental program, aside from publication of results of earlier work, has been the favorable outcome of several important proposals in which a substantial fraction of our group is involved: the Mark II detector as first-up at the SLC, and DUMAND's Stage I approval, both by DOE review panels. When added to Fermilab approval of two neutrino bubble-chamber experiments at the Tevatron, E632 and E646, the major part of the Hawaii experimental program for the next few years is now well determined. Noteworthy in the SLAC/SLC/Mark II effort is the progress made in developing silicon microstrip detectors with microchip readout. Results from the IMB(H) proton decay experiment at the Morton Salt Mine, while not detecting proton decay, set the best lower limit on the proton's lifetime. Similarly, the Very High Energy Gamma Ray project is closely linked with DUMAND, at least in principle, since these gammas are expected to arise from pi-zero decay, while the neutrinos come from charged meson decay. Some signal has been seen from Cygnus X-3, and other candidates are being explored. Preparations for upgrading the Fermilab 15' Bubble Chamber have made substantial progress. Sections of the progress report are devoted to VAX computer system improvements, other hardware and software improvements, travel in support of physics experiments, publications and other public reports, and continuing analysis of data taken in years past (PEP-14, E546, and E388). High energy physics theoretical research is briefly described.

  16. KEK (National Laboratory for High Energy Physics) annual report, 1988

    International Nuclear Information System (INIS)

    1989-01-01

    Throughout this year, TRISTAN has maintained the highest energy among the electron-positron colliders in the world. After operating at 57 GeV in the center of mass with full operation of the APS-type room temperature RF accelerating system, 16 units of 5-cell superconducting RF cavities, 24 m in total length, were installed in the Nikko straight section during the summer shutdown. As a result, 30.4 GeV/beam, or 60.8 GeV in the center of mass, was achieved, beyond the original design energy goal of TRISTAN. All experimental collaborations at the four intersections have collected much interesting data in the new energy region of electron-positron collisions. The experiment SHIP, a search for highly ionizing particles, has completed data taking in the Nikko experimental hall and is going to give new limits on Dirac monopoles. At the 24th International Conference on High Energy Physics held in Munich in August 1988, as reported in the CERN Courier, for instance, the results from TRISTAN were the highlight of e+e- collision physics. Although we could not find any definite evidence for the existence of toponium below 60 GeV or of other new particles below 56 GeV, we obtained much new physics concerning interference effects between electromagnetic and weak interactions, new information about QCD, and so on. Active experiments on hadron physics with the 12 GeV main ring have also been carried out. For instance, an internal gas target experiment with a polarized proton beam was performed by a group from Texas A and M University in cooperation with a Japanese group. The KEK PS, together with Brookhaven's AGS, is now a unique proton machine in the 10 GeV energy region. (J.P.N.)

  17. THE AVERAGE ANNUAL EFFECTIVE DOSES FOR THE POPULATION IN THE SETTLEMENTS OF THE RUSSIAN FEDERATION ATTRIBUTED TO ZONES OF RADIOACTIVE CONTAMINATION DUE TO THE CHERNOBYL ACCIDENT (FOR ZONATION PURPOSES, 2014

    Directory of Open Access Journals (Sweden)

    G. Ja. Bruk

    2015-01-01

    The Chernobyl accident in 1986 is one of the largest-scale radiation accidents in the world. It led to radioactive contamination of large areas in the European part of the Russian Federation and in the neighboring countries. Today there are more than 4000 settlements, with a total population of 1.5 million, in the radioactively contaminated areas of the Russian Federation. The Bryansk region is the most intensely contaminated region; for example, the Krasnogorskiy district still has settlements with levels of soil contamination by cesium-137 exceeding 40 Ci/km2. The regions of Tula, Kaluga and Orel are also significantly affected. In addition to these four regions, there are 10 more regions with radioactively contaminated settlements. After the Chernobyl accident, the affected areas were divided into zones of radioactive contamination. The attribution of a settlement to a particular zone is determined by the level of soil contamination with 137Cs and by the value of the average annual effective dose that could be formed in the absence of: 1) active measures for radiation protection, and 2) self-limitation in consumption of local food products. The main regulatory document on this issue is the Federal law № 1244-1 (dated May 15, 1991) «On the social protection of the citizens who have been exposed to radiation as a result of the accident at the Chernobyl nuclear power plant». The law extends to territories where, since 1991, the average annual effective dose for the population exceeds 1 mSv (the value of effective dose that could be formed in the absence of active radiation protection measures and self-limitation in consumption of local food products), or where soil surface contamination with cesium-137 exceeds 1 Ci/km2. The paper presents the results of calculations of the average effective doses in 2014. The purpose was to use the dose values (SGED90) in the zonation of contaminated territories. Therefore, the

  18. KEK (National Laboratory for High Energy Physics) annual report, 1985

    International Nuclear Information System (INIS)

    Arai, Masatoshi; Kaneko, Toshiaki; Mori, Yoshiharu; Nakai, Kozi; Nakamura, Kenzo; Oide, Katsuya; Sato, Shigeru

    1986-01-01

    Aiming at the completion of the TRISTAN colliding beam complex, the laboratory was engaged in construction work throughout this year. Following the commissioning of a high current 200 MeV electron linac for positron production and of a 250 MeV positron linac in April, positrons were successfully accelerated through the existing electron linac and the accumulation ring in October. On March 21, 1986, electron-positron collisions in the accumulation ring were observed on the first trial at 5 GeV with a luminosity of about 10^28 cm^-2 s^-1. The main ring accelerator tunnel, four experimental halls and other associated buildings were completed in this fiscal year. Each of the TRISTAN experimental groups has been engaged in the construction of its own detector complex, aiming at completion of the system by the spring of 1987. In particular, large superconducting solenoid magnets were successfully operated in tests. A large computer system with FACOM M382s for TRISTAN data analysis was commissioned in October. Establishing safety measures for the whole TRISTAN project is a serious concern. The positron beam accelerated by the existing 2.5 GeV electron linac was also fed to the Photon Factory storage ring. The 12 GeV proton synchrotron restarted experiments on hadron science at the beginning of this fiscal year after a one-year shutdown. (Kako, I.)

  19. Annual report on the high temperature triaxial compression device

    International Nuclear Information System (INIS)

    Williams, N.D.; Menk, P.; Tully, R.; Houston, W.N.

    1981-01-01

    The investigation of the environmental effects on the mechanical and engineering properties of deep-sea sediments was initiated on June 15, 1980. The task is divided into three categories: first, the design and fabrication of a High Temperature Triaxial Compression Device (HITT); second, an investigation of the mechanical and engineering properties of the deep-sea sediments at temperatures ranging from 277 to 473 K; and third, assistance in the development of constitutive relationships and an analytical model which describe the temperature dependent creep deformations of the deep-sea sediments. The environmental conditions under which the soil specimens are to be tested are variations in temperature from 277 to 473 K. The corresponding water pressure will vary up to about 2.75 MPa as required to prevent boiling of the water and assure saturation of the test specimens. Two groups of tests are to be performed: first, triaxial compression tests during which strength measurements and constant head permeability determinations shall be made; second, constant stress creep tests, during which axial and lateral strains shall be measured. In addition to the aforementioned variables, data shall also be acquired to incorporate the effects of consolidation history, strain rate, and heating rate. The bulk of the triaxial tests are to be performed undrained. The strength measurement tests are to be constant-rate-of-strain tests and the creep tests are to be constant-stress tests. The study of the mechanical properties of the deep-sea sediments as a function of temperature is an integrated program

  20. Stationary average consensus protocol for a class of heterogeneous high-order multi-agent systems with application for aircraft

    Science.gov (United States)

    Rezaei, Mohammad Hadi; Menhaj, Mohammad Bagher

    2018-01-01

    This paper investigates the stationary average consensus problem for a class of heterogeneous-order multi-agent systems. The goal is to bring the positions of agents to the average of their initial positions while letting the other states converge to zero. To this end, three different consensus protocols are proposed. First, based on the auxiliary variables information among the agents under switching directed networks and state-feedback control, a protocol is proposed whereby all the agents achieve stationary average consensus. In the second and third protocols, by resorting to only measurements of relative positions of neighbouring agents under fixed balanced directed networks, two control frameworks are presented with two strategies based on state-feedback and output-feedback control. Finally, simulation results are given to illustrate the effectiveness of the proposed protocols.
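
    For orientation only (this is the classical first-order average consensus law that such protocols build on, not the authors' heterogeneous high-order design): with a_{ij} the adjacency weights of a strongly connected, weight-balanced directed graph, each agent updates

        \dot{x}_i(t) \;=\; \sum_{j \in \mathcal{N}_i} a_{ij}\,\bigl(x_j(t) - x_i(t)\bigr),
        \qquad\text{so that}\qquad
        x_i(t) \;\to\; \frac{1}{n}\sum_{j=1}^{n} x_j(0) \quad (t \to \infty),

    since weight balance keeps the sum \sum_i x_i(t) invariant while the agreement subspace is reached asymptotically.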

  1. Predictors of Learned Helplessness among Average and Mildly Gifted Girls and Boys Attending Initial High School Physics Instruction in Germany

    Science.gov (United States)

    Ziegler, Albert; Finsterwald, Monika; Grassinger, Robert

    2005-01-01

    In mathematics, physics, and chemistry, women are still considered to be at a disadvantage. In the present study, the development of the symptoms of learned helplessness was of particular interest. A study involving average and mildly gifted 8th-grade boys and girls (top 60%) investigated whether girls, regardless of ability level, experience…

  2. Phytoremediation of high phosphorus soil by annual ryegrass and common bermudagrass harvest

    Science.gov (United States)

    Removal of soil phosphorus (P) in crop harvest is a remediation option for soils high in P. This four-year field-plot study determined P uptake by annual ryegrass (ARG, Lolium multiflorum Lam.) and common bermudagrass (CB, Cynodon dactylon (L.) Pers.) from Ruston soil (fine-loamy, siliceous, thermic...

  3. The two normalization schemes of factorial moments in high energy collisions and the dependence intermittency degree on average transverse momentum

    International Nuclear Information System (INIS)

    Wu Yuanfang; Liu Lianshou

    1992-01-01

    The two different normalization schemes for factorial moments are analyzed carefully. It is found that, both in the case of fixed multiplicity and in the case of intermittency independent of multiplicity, the intermittency indexes obtained from the two normalization schemes are equal to each other. In the case of non-fixed multiplicity and intermittency depending on multiplicity, formulae expressing the intermittency indexes from the two different normalization schemes in terms of the dynamical index are given. The experimentally observed dependence of the intermittency degree on the transverse momentum cut is fully recovered by assuming that the intermittency degree depends on the average transverse momentum per event. This confirms the importance of the dependence of intermittency on average transverse momentum.
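
    As background (the abstract does not reproduce the formulae), the two normalizations usually discussed in the intermittency literature are the vertically and horizontally normalized scaled factorial moments. In standard notation, with the phase-space region divided into M bins and n_m the multiplicity in bin m, one commonly writes

        F_q^{\mathrm{ver}}(M) \;=\; \frac{1}{M}\sum_{m=1}^{M}
            \frac{\langle n_m(n_m-1)\cdots(n_m-q+1)\rangle}{\langle n_m\rangle^{q}},
        \qquad
        F_q^{\mathrm{hor}}(M) \;=\; \frac{1}{M}\sum_{m=1}^{M}
            \frac{\langle n_m(n_m-1)\cdots(n_m-q+1)\rangle}{\bigl(\langle n\rangle/M\bigr)^{q}},

    with intermittency corresponding to a power-law growth F_q(M) \propto M^{\varphi_q}, where \varphi_q is the intermittency index. Whether these match the authors' exact notation is an assumption; the forms are given only to make the comparison in the abstract concrete.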

  4. High annual radon concentration in dwellings and natural radioactivity content in nearby soil in some rural areas of Kosovo and Metohija

    Directory of Open Access Journals (Sweden)

    Gulan Ljiljana R.

    2013-01-01

    Some previous studies of radon concentration in dwellings in parts of Kosovo and Metohija have revealed a high average radon concentration, even though the detectors were exposed for only three months. In order to better design a larger study in this region, annual measurements in 25 houses were carried out as a pilot study. For each house, CR-39-based passive devices were exposed in two rooms for two consecutive six-month periods to account for seasonal variations of radon concentration. Furthermore, in order to correlate the indoor radon with radium in nearby soil and to improve the knowledge of the natural radioactivity in the region, soil samples near each house were collected and the 226Ra, 232Th and 40K activity concentrations were measured. The average indoor radon concentration turned out to be quite high (163 Bq/m3) and generally did not differ considerably between the two rooms or the two six-month periods. The natural radionuclides in soil turned out to be distributed quite uniformly. Moreover, the correlation between the 226Ra content in soil and the radon concentration in dwellings was low (R2 = 0.26). The annual effective dose from radon and its short-lived progeny (5.5 mSv on average) was calculated using the latest ICRP dose conversion factors. In comparison, the contribution to the annual effective dose of outdoor gamma exposure from natural radionuclides in soil is nearly negligible (66 μSv). In conclusion, the observed high radon levels are only partially correlated with radium in soil; moreover, a good estimate of the annual average radon concentration can be obtained from a six-month measurement with a proper choice of exposure period, which could be useful when designing large surveys.
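
    The abstract does not list the dose factors it used, so the following back-of-the-envelope sketch only illustrates the usual form of the calculation. The equilibrium factor, occupancy time and dose conversion factor below are assumed values (roughly those in use after the 2010 ICRP statement on radon), chosen because they happen to reproduce a figure close to the reported 5.5 mSv.

        # Illustrative annual-dose estimate from an average indoor radon concentration.
        # All parameter values are assumptions, not taken from the paper.
        radon_conc = 163.0          # Bq/m^3, average indoor radon concentration from the abstract
        equilibrium_factor = 0.4    # assumed radon/progeny equilibrium factor
        occupancy_hours = 7000.0    # assumed hours spent indoors per year
        dcf_nsv = 12.0              # assumed dose conversion factor, nSv per (Bq h m^-3) of EEC

        eec_exposure = radon_conc * equilibrium_factor * occupancy_hours   # Bq h m^-3 of EEC
        annual_dose_msv = eec_exposure * dcf_nsv * 1e-6                    # nSv -> mSv

        print(f"estimated annual effective dose: {annual_dose_msv:.1f} mSv")  # ~5.5 mSv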

  5. Annual electricity consumption forecasting by neural network in high energy consuming industrial sectors

    International Nuclear Information System (INIS)

    Azadeh, A.; Ghaderi, S.F.; Sohrabkhani, S.

    2008-01-01

    This paper presents an artificial neural network (ANN) approach for forecasting annual electricity consumption in high energy consuming industrial sectors. The chemicals, basic metals and non-metal minerals industries are defined as high energy consuming industries. It is claimed that, due to high fluctuations of energy consumption in high energy consuming industries, conventional regression models do not forecast energy consumption correctly and precisely. Although ANNs have typically been used to forecast short-term consumption, this paper shows that they are a more precise approach for forecasting annual consumption in such industries. Furthermore, an ANN based on a supervised multi-layer perceptron (MLP) is used to show that it can estimate the annual consumption with less error. Actual data from high energy consuming (intensive) industries in Iran from 1979 to 2003 are used to illustrate the applicability of the ANN approach. This study shows the advantage of the ANN approach through analysis of variance (ANOVA). Furthermore, the ANN forecast is compared with actual data and the conventional regression model through ANOVA to show its superiority. This is the first study to present an algorithm based on the ANN and ANOVA for forecasting long-term electricity consumption in high energy consuming industries.
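
    The abstract does not give the network architecture or the input variables, so the sketch below is only a generic illustration of the idea: a small multi-layer perceptron fitted to lagged annual consumption values. The data, the two-lag feature choice and the layer size are all assumptions, not the paper's setup.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Hypothetical annual consumption series (arbitrary units, 1979-2003); illustrative numbers only.
        consumption = np.array([12.1, 12.9, 13.4, 14.2, 15.1, 15.8, 16.9, 18.2, 19.0, 20.3,
                                21.1, 22.5, 23.8, 24.9, 26.4, 27.2, 28.9, 30.1, 31.8, 33.0,
                                34.9, 36.2, 38.1, 39.5, 41.2])

        # Use the previous two years as inputs to predict the next year (an assumed, common choice).
        X = np.column_stack([consumption[:-2], consumption[1:-1]])
        y = consumption[2:]

        model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=10000, random_state=0)
        model.fit(X[:-1], y[:-1])                      # hold out the last year for a sanity check

        pred = model.predict(X[-1:])                   # forecast the held-out year
        print(f"predicted {pred[0]:.1f} vs actual {y[-1]:.1f}")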

  6. The full annual carbon balance of Eurasian boreal forests is highly sensitive to precipitation

    Science.gov (United States)

    Öquist, Mats; Bishop, Kevin; Grelle, Achim; Klemedtsson, Leif; Köhler, Stephan; Laudon, Hjalmar; Lindroth, Anders; Ottosson Löfvenius, Mikaell; Wallin, Marcus; Nilsson, Mats

    2013-04-01

    Boreal forest biomes are identified as one of the major sinks for anthropogenic atmospheric CO2 and are also predicted to be particularly sensitive to climate change. Recent advances in understanding the carbon balance of these biomes stem mainly from eddy-covariance measurements of the net ecosystem exchange (NEE). However, NEE includes only the vertical CO2 exchange driven by photosynthesis and ecosystem respiration. A full net ecosystem carbon balance (NECB) also requires inclusion of lateral carbon export (LCE) through catchment discharge. Currently LCE is often regarded as negligible for the NECB of boreal forest ecosystems of the northern hemisphere, commonly corresponding to ~5% of annual NEE. Here we use long-term (13-year) data showing that annual LCE and NEE are strongly correlated (p=0.003); years with low C sequestration by the forest coincide with years when lateral C loss is high. The fraction of NEE lost annually through LCE varied markedly from solar radiation caused by clouds. The dual effect of precipitation implies that both the observed and the predicted increases in annual precipitation at high latitudes may reduce NECB in boreal forest ecosystems. Based on regional scaling of hydrological discharge and observed spatio-temporal variations in forest NEE we conclude that our finding is relevant for large areas of the boreal Eurasian landscape.

  7. Mitochondrial DNA Marker EST00083 Is Not Associated with High vs. Average IQ in a German Sample.

    Science.gov (United States)

    Moises, Hans W.; Yang, Liu; Kohnke, Michael; Vetter, Peter; Neppert, Jurgen; Petrill, Stephen A.; Plomin, Robert

    1998-01-01

    Tested the association of a mitochondrial DNA marker (EST00083) with high IQ in a sample of 47 German adults with high IQ scores and 77 adults with IQs estimated at lower than 110. Results do not support the hypothesis that high IQ is associated with this marker. (SLD)

  8. The measurement of power losses at high magnetic field densities or at small cross-section of test specimen using the averaging

    CERN Document Server

    Gorican, V; Hamler, A; Nakata, T

    2000-01-01

    It is difficult to achieve sufficient accuracy in power loss measurement at high magnetic flux densities, where the magnetic field strength becomes more and more distorted, or in cases where the influence of noise increases (small specimen cross-section). The influence of averaging on the accuracy of power loss measurement was studied on the cast amorphous magnetic material Metglas 2605-TCA. The results show that the accuracy of power loss measurements can be improved by averaging the data acquisition points.
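
    As a rough illustration of the idea (not the authors' measurement chain), averaging repeated, synchronously triggered acquisitions of the H(t) and B(t) waveforms before computing the loss suppresses uncorrelated noise. The sketch below assumes digitized single periods and computes the specific loss from the enclosed H-dB area; the waveforms and material constants are made up for illustration.

        import numpy as np

        def power_loss(H, B, frequency, density):
            """Specific power loss (W/kg) from one period of H (A/m) and B (T): P = f/rho * loop integral of H dB."""
            dB = np.diff(B, append=B[:1])            # closed loop over one period
            return frequency / density * np.sum(H * dB)

        def averaged_loss(H_runs, B_runs, frequency, density):
            """Average N synchronously acquired periods point-by-point before computing the loss."""
            H_avg = np.mean(H_runs, axis=0)
            B_avg = np.mean(B_runs, axis=0)
            return power_loss(H_avg, B_avg, frequency, density)

        # Illustrative use with noisy synthetic waveforms (assumed values, not Metglas data):
        t = np.linspace(0, 1, 1000, endpoint=False)
        rng = np.random.default_rng(1)
        B_runs = np.array([1.2 * np.sin(2 * np.pi * t) + rng.normal(0, 0.01, t.size) for _ in range(64)])
        H_runs = np.array([40 * np.sin(2 * np.pi * t - 0.3) + rng.normal(0, 2.0, t.size) for _ in range(64)])
        print(averaged_loss(H_runs, B_runs, frequency=50.0, density=7180.0), "W/kg")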

  9. An Analysis of Java Programming Behaviors, Affect, Perceptions, and Syntax Errors among Low-Achieving, Average, and High-Achieving Novice Programmers

    Science.gov (United States)

    Rodrigo, Ma. Mercedes T.; Andallaza, Thor Collin S.; Castro, Francisco Enrique Vicente G.; Armenta, Marc Lester V.; Dy, Thomas T.; Jadud, Matthew C.

    2013-01-01

    In this article we quantitatively and qualitatively analyze a sample of novice programmer compilation log data, exploring whether (or how) low-achieving, average, and high-achieving students vary in their grasp of these introductory concepts. High-achieving students self-reported having the easiest time learning the introductory programming…

  10. How Well Does High School Grade Point Average Predict College Performance by Student Urbanicity and Timing of College Entry? REL 2017-250

    Science.gov (United States)

    Hodara, Michelle; Lewis, Karyn

    2017-01-01

    This report is a companion to a study that found that high school grade point average was a stronger predictor of performance in college-level English and math than were standardized exam scores among first-time students at the University of Alaska who enrolled directly in college-level courses. This report examines how well high school grade…

  11. Development of high average power industrial Nd:YAG laser with peak power of 10 kW class

    International Nuclear Information System (INIS)

    Kim, Cheol Jung; Kim, Jeong Mook; Jung, Chin Mann; Kim, Soo Sung; Kim, Kwang Suk; Kim, Min Suk; Cho, Jae Wan; Kim, Duk Hyun

    1992-03-01

    We developed and commercialized an industrial pulsed Nd:YAG laser with a peak power of the 10 kW class for fine cutting and drilling applications. Several commercial models were investigated with respect to design and performance. We improved its quality to the level of commercial Nd:YAG lasers by an endurance test of each part of the laser system. The maximum peak power and average power of our laser were 10 kW and 250 W, respectively. Moreover, the laser pulse width could be controlled continuously from 0.5 msec to 20 msec. Many optical parts were localized and their cost was lowered considerably; only a few parts were imported, and almost 90% of the cost was localized. Also, to accelerate commercialization by the joint company, training and transfer of technology were pursued through the joint participation of company researchers in design and assembly from the early stage. Three Nd:YAG lasers have been assembled and will be tested in industrial manufacturing processes to prove the capability of the developed Nd:YAG laser with potential users. (Author)

  12. Cost-effectiveness of annual versus biennial screening mammography for women with high mammographic breast density.

    Science.gov (United States)

    Pataky, Reka; Ismail, Zahra; Coldman, Andrew J; Elwood, Mark; Gelmon, Karen; Hedden, Lindsay; Hislop, Greg; Kan, Lisa; McCoy, Bonnie; Olivotto, Ivo A; Peacock, Stuart

    2014-12-01

    The sensitivity of screening mammography is much lower among women who have dense breast tissue than among women who have largely fatty breasts, and the former are also at much higher risk of developing the disease. Increasing the mammography screening frequency from biennial to annual has been suggested as a policy option to address the elevated risk in this population. The purpose of this study was to assess the cost-effectiveness of annual versus biennial screening mammography among women aged 50-79 with dense breast tissue. A Markov model was constructed based on screening, diagnostic, and treatment pathways for the population-based screening and cancer care programme in British Columbia, Canada. Model probabilities and screening costs were calculated from screening programme data. Costs for breast cancer treatment were calculated from treatment data, and utility values were obtained from the literature. Incremental cost-effectiveness was expressed as cost per quality-adjusted life year (QALY), and probabilistic sensitivity analysis was conducted. Compared with biennial screening, annual screening generated an additional 0.0014 QALYs (95% CI: -0.0480 to 0.0359) at a cost of $819 ($ = Canadian dollars) per patient (95% CI: $506 to $1,185), resulting in an incremental cost-effectiveness ratio of $565,912/QALY. Annual screening had a 37.5% probability of being cost-effective at a willingness-to-pay threshold of $100,000/QALY. There is considerable uncertainty about the incremental cost-effectiveness of annual mammography. Further research on the comparative effectiveness of screening strategies for women with high mammographic breast density is warranted, particularly as digital mammography and density measurement become more widespread, before cost-effectiveness can be reevaluated.
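
    As a quick check of the headline figure, the incremental cost-effectiveness ratio is simply the incremental cost divided by the incremental QALYs; with the rounded point estimates quoted in the abstract this gives roughly $585,000/QALY, close to the reported $565,912/QALY (the gap presumably reflects rounding of the published increments). A minimal sketch:

        # Incremental cost-effectiveness ratio (ICER) from the abstract's point estimates.
        incremental_cost_cad = 819.0      # additional cost per patient, annual vs biennial screening
        incremental_qalys = 0.0014        # additional QALYs per patient

        icer = incremental_cost_cad / incremental_qalys
        print(f"ICER ~ ${icer:,.0f} per QALY")   # ~$585,000/QALY vs the reported $565,912/QALY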

  13. High-average-power 2 μm few-cycle optical parametric chirped pulse amplifier at 100 kHz repetition rate.

    Science.gov (United States)

    Shamir, Yariv; Rothhardt, Jan; Hädrich, Steffen; Demmler, Stefan; Tschernajew, Maxim; Limpert, Jens; Tünnermann, Andreas

    2015-12-01

    Sources of long-wavelength, few-cycle, high repetition rate pulses are becoming increasingly important for a plethora of applications, e.g., in high-field physics. Here, we report on the realization of a tunable optical parametric chirped pulse amplifier at 100 kHz repetition rate. At a central wavelength of 2 μm, the system delivered 33 fs pulses and 6 W of average power, corresponding to 60 μJ pulse energy with gigawatt-level peak powers. Idler absorption and the resulting crystal heating are experimentally investigated for a BBO crystal. Strategies for further power scaling to several tens of watts of average power are discussed.
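
    The quoted numbers are mutually consistent, as a two-line check shows: pulse energy is average power divided by repetition rate, and the peak power follows from the pulse energy and duration (the 0.94 shape factor below assumes a Gaussian temporal profile, which the abstract does not state).

        avg_power = 6.0          # W
        rep_rate = 100e3         # Hz
        pulse_duration = 33e-15  # s

        pulse_energy = avg_power / rep_rate                     # 60e-6 J = 60 uJ, as reported
        peak_power = 0.94 * pulse_energy / pulse_duration       # ~1.7e9 W, i.e. gigawatt-level
        print(pulse_energy * 1e6, "uJ,", peak_power / 1e9, "GW")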

  14. Reconstruction of high temporal resolution Thomson scattering data during a modulated electron cyclotron resonance heating using conditional averaging

    International Nuclear Information System (INIS)

    Kobayashi, T.; Yoshinuma, M.; Ohdachi, S.; Ida, K.; Itoh, K.; Moon, C.; Yamada, I.; Funaba, H.; Yasuhara, R.; Tsuchiya, H.; Yoshimura, Y.; Igami, H.; Shimozuma, T.; Kubo, S.; Tsujimura, T. I.; Inagaki, S.

    2016-01-01

    This paper provides a software application of the sampling scope concept for fusion research. The time evolution of Thomson scattering data is reconstructed with a high temporal resolution during a modulated electron cyclotron resonance heating (MECH) phase. The amplitude profile and the delay time profile of the heat pulse propagation are obtained from the reconstructed signal for discharges having on-axis and off-axis MECH depositions. The results are found to be consistent with the MECH deposition.

  15. Reconstruction of high temporal resolution Thomson scattering data during a modulated electron cyclotron resonance heating using conditional averaging

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, T., E-mail: kobayashi.tatsuya@LHD.nifs.ac.jp; Yoshinuma, M.; Ohdachi, S. [National Institute for Fusion Science, Toki 509-5292 (Japan); SOKENDAI (The Graduate University for Advanced Studies), Toki 509-5292 (Japan); Ida, K. [National Institute for Fusion Science, Toki 509-5292 (Japan); SOKENDAI (The Graduate University for Advanced Studies), Toki 509-5292 (Japan); Research Center for Plasma Turbulence, Kyushu University, Kasuga 816-8580 (Japan); Itoh, K. [National Institute for Fusion Science, Toki 509-5292 (Japan); Research Center for Plasma Turbulence, Kyushu University, Kasuga 816-8580 (Japan); Moon, C.; Yamada, I.; Funaba, H.; Yasuhara, R.; Tsuchiya, H.; Yoshimura, Y.; Igami, H.; Shimozuma, T.; Kubo, S.; Tsujimura, T. I. [National Institute for Fusion Science, Toki 509-5292 (Japan); Inagaki, S. [Research Center for Plasma Turbulence, Kyushu University, Kasuga 816-8580 (Japan); Research Institute for Applied Mechanics, Kyushu University, Kasuga 816-8580 (Japan)

    2016-04-15

    This paper provides a software application of the sampling scope concept for fusion research. The time evolution of Thomson scattering data is reconstructed with a high temporal resolution during a modulated electron cyclotron resonance heating (MECH) phase. The amplitude profile and the delay time profile of the heat pulse propagation are obtained from the reconstructed signal for discharges having on-axis and off-axis MECH depositions. The results are found to be consistent with the MECH deposition.
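
    The underlying reconstruction is a conditional (phase) average: each low-repetition-rate Thomson sample is assigned a phase within the known MECH modulation period, and samples accumulated over many modulation cycles are averaged phase-bin by phase-bin. The sketch below is a generic illustration of that idea, not the authors' code; the signal names, sampling rates and bin count are assumptions.

        import numpy as np

        def conditional_average(sample_times, sample_values, mod_period, t0=0.0, n_bins=50):
            """Phase-average sparse samples over a periodic modulation of period mod_period (s)."""
            phase = ((sample_times - t0) % mod_period) / mod_period      # phase in [0, 1)
            bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
            reconstructed = np.array([sample_values[bins == b].mean() if np.any(bins == b) else np.nan
                                      for b in range(n_bins)])
            bin_centers = (np.arange(n_bins) + 0.5) * mod_period / n_bins
            return bin_centers, reconstructed

        # Illustrative use: sparse Te samples reconstructed on an assumed 25 Hz MECH modulation.
        rng = np.random.default_rng(2)
        t = np.sort(rng.uniform(0.0, 10.0, 300))                         # sparse sample times (s)
        te = 1.0 + 0.1 * np.sin(2 * np.pi * t / 0.04) + rng.normal(0, 0.02, t.size)
        phase_t, te_rec = conditional_average(t, te, mod_period=0.04)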

  16. Annual CO2 budget and seasonal CO2 exchange signals at a high Arctic permafrost site on Spitsbergen, Svalbard archipelago

    DEFF Research Database (Denmark)

    Luërs, J.; Westermann, Signe; Piel, K.

    2014-01-01

    -lasting snow cover, and several months of darkness. This study presents a complete annual cycle of the CO2 net ecosystem exchange (NEE) dynamics for a high Arctic tundra area at the west coast of Svalbard based on eddy covariance flux measurements. The annual cumulative CO2 budget is close to 0 g C m-2 yr-1...

  17. Point Locations of 849 Continuous Record Streamflow Gages Used to Estmate Annual and Average Values of Water-Budget Components Based on Hydrograph Separation and PRISM Precipitation in the Appalachian Plateaus Region, 1900-2011

    Data.gov (United States)

    Department of the Interior — As part of the U.S. Geological Survey’s Groundwater Resources Program study of the Appalachian Plateaus aquifers, estimates of annual water-budget components were...

  18. Replication of Annual Cycles in Mn in Hudson River Cores: Mn Peaks During High Water Flow

    Science.gov (United States)

    Abbott, D. H.; Hutson, D.; Marrero, A. M.; Block, K. A.; Chang, C.; Cai, Y.

    2017-12-01

    Using results from an ITRAX XRF scanner, we previously reported apparent annual cycles in Mn in a single, high sedimentation rate Hudson River core, LWB1-8, taken off Yonkers, NY (Carlson et al., 2016). We replicated these results in three more high sedimentation rate cores and found stratigraphic markers that verify our inferences about the annual nature of the Mn cycles. The three new cores are LWB4-5, taken off Peekskill, NY, and LWB3-44 and LWB3-25, both taken in Haverstraw Bay. The cores are from water depths of 7-9 meters and all have high magnetic susceptibilities (typically > 30 cgs units) in their upper 1 to 2 meters. The high susceptibilities are primarily produced by magnetite from modern industrial combustion. One core, LWB1-8, has reconnaissance Cs dates that verify the annual nature of the cycles. More Cs dates are expected before the meeting. We developed several new methods of verifying the annual nature of our layer counts. The first is looking at the grain size distribution and age of layers with unusually high Mn peaks. Peaks in Si, Ni and Ti and peaks in the percentage of coarse material typically accompany the peaks in Mn. Some are visible as yellow sandy layers. The five highest peaks in Mn in LWB1-8 have layer-counted ages that correspond (within 1 year in the top meter and within 2 years in the bottom meter) to 1996, 1948, 1913, 1857 and 1790. The latter three events are the three largest historical spring freshets on the Hudson; 1996 is a year of unusually high flow rate during the spring freshet. Based on our work and previous work on Mn cycling in rivers, we infer that the peaks in Mn are produced by extreme erosional events that erode sediment and release pore water Mn into the water column. The other methods of testing our chronology involve marine storms that increase Ca and Sr and a search for fragments of the Peekskill meteorite that fell in October 1992. More information on the latter will be available by the meeting.

  19. Determination of the fission-neutron averaged cross sections of some high-energy threshold reactions of interest for reactor dosimetry

    International Nuclear Information System (INIS)

    Arribere, M.A.; Kestelman, A.J.; Korochinsky, S.; Blostein, J.J.

    2003-01-01

    For three high-threshold reactions, we have measured the cross sections averaged over a 235U fission neutron spectrum. The measured reactions, and the corresponding averaged cross sections found, are: 127I(n,2n)126I, (1.36±0.12) mb; 90Zr(n,2n)89mZr, (13.86±0.83) μb; and 58Ni(n,d+np+pn)57Co, (274±15) μb; all referred to the well-known standard of (111±3) mb for the 58Ni(n,p)58m+gCo averaged cross section. The measured cross sections are of interest in nuclear engineering for the characterization of the fast neutron component in the energy distribution of reactor neutrons. (author)
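
    For reference, the quantity being reported is the fission-spectrum-averaged cross section; in standard notation, with χ(E) the 235U fission neutron spectrum, it is

        \bar{\sigma} \;=\; \frac{\displaystyle\int_0^{\infty} \sigma(E)\,\chi(E)\,dE}
                                {\displaystyle\int_0^{\infty} \chi(E)\,dE},

    and quoting the results "referred to" the 58Ni(n,p) standard means that the measured activation ratio is scaled by the adopted (111±3) mb value of that standard.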

  20. Development of laser diode-pumped high average power solid-state laser for the pumping of Ti:sapphire CPA system

    Energy Technology Data Exchange (ETDEWEB)

    Maruyama, Yoichiro; Tei, Kazuyoku; Kato, Masaaki; Niwa, Yoshito; Harayama, Sayaka; Oba, Masaki; Matoba, Tohru; Arisawa, Takashi; Takuma, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    A laser-diode-pumped, all-solid-state, high pulse repetition frequency (PRF), high energy Nd:YAG laser using zigzag slab crystals has been developed as the pumping source for a Ti:sapphire CPA system. The pumping laser contains two main amplifiers arranged in a ring-type amplifier configuration. The maximum amplification gain of the amplifier system is 140, and saturated amplification is achieved at this high gain. The average power of the fundamental laser radiation is 250 W at a PRF of 200 Hz, with a pulse duration of around 20 ns. The average power of the second harmonic is 105 W at a PRF of 170 Hz, with a pulse duration of about 16 ns. The beam profile of the second harmonic is nearly top-hat and will be suitable for pumping a Ti:sapphire laser crystal. The wall-plug efficiency of the laser is 2.0%. (author)
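
    The per-pulse energies implied by these figures follow directly from average power divided by repetition rate, assuming the quoted powers and PRFs refer to the same operating points; a two-line check with the abstract's values:

        fundamental_energy = 250.0 / 200.0       # J/pulse at the fundamental: 1.25 J
        second_harmonic_energy = 105.0 / 170.0   # J/pulse at the second harmonic: ~0.62 J
        print(fundamental_energy, second_harmonic_energy)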

  1. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  2. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  3. Initial Beam Dynamics Simulations of a High-Average-Current Field-Emission Electron Source in a Superconducting RadioFrequency Gun

    Energy Technology Data Exchange (ETDEWEB)

    Mohsen, O. [Northern Illinois U.; Gonin, I. [Fermilab; Kephart, R. [Fermilab; Khabiboulline, T. [Fermilab; Piot, P. [Northern Illinois U.; Solyak, N. [Fermilab; Thangaraj, J. C. [Fermilab; Yakovlev, V. [Fermilab

    2018-01-05

    High-power electron beams are sought-after tools in support of a wide array of societal applications. This paper investigates the production of high-power electron beams by combining a high-current field-emission electron source with a superconducting radio-frequency (SRF) cavity. In particular, we carry out beam-dynamics simulations that demonstrate the viability of the scheme to form a ~300 kW average-power electron beam using a 1+1/2-cell SRF gun.

  4. High-average-power UV generation at 266 and 355 nm in β-BaB2O4

    International Nuclear Information System (INIS)

    Liu, K.C.; Rhoades, M.

    1987-01-01

    UV light has been generated previously by harmonic conversion from Nd:YAG lasers using the nonlinear crystals KD*P and ADP. Most previous studies have employed lasers with high peak power, because of the low harmonic-conversion efficiency of these crystals, and with low average power, because of the phase mismatch caused by temperature detuning resulting from UV absorption. A new nonlinear crystal, β-BaB2O4, has recently been reported which offers the possibility of overcoming the aforementioned problems. The authors utilized β-BaB2O4 to frequency-triple and frequency-quadruple a high-repetition-rate cw-pumped Nd:YAG laser and achieved up to 1 W average power with a Gaussian spatial distribution at 266 and 355 nm. β-BaB2O4 has demonstrated its advantages for high-average-power UV generation. Its major drawback is a low angular-acceptance bandwidth, which requires a high-quality fundamental pump beam.

  5. Soil and Water Assessment Tool model predictions of annual maximum pesticide concentrations in high vulnerability watersheds.

    Science.gov (United States)

    Winchell, Michael F; Peranginangin, Natalia; Srinivasan, Raghavan; Chen, Wenlin

    2018-05-01

    Recent national regulatory assessments of potential pesticide exposure of threatened and endangered species in aquatic habitats have led to an increased need for watershed-scale predictions of pesticide concentrations in flowing water bodies. This study was conducted to assess the ability of the uncalibrated Soil and Water Assessment Tool (SWAT) to predict annual maximum pesticide concentrations in the flowing water bodies of highly vulnerable small- to medium-sized watersheds. The SWAT was applied to 27 watersheds, largely within the midwest corn belt of the United States, ranging from 20 to 386 km2, and evaluated using consistent input data sets and an uncalibrated parameterization approach. The watersheds were selected from the Atrazine Ecological Exposure Monitoring Program and the Heidelberg Tributary Loading Program, both of which contain high temporal resolution atrazine sampling data from watersheds with exceptionally high vulnerability to atrazine exposure. The model performance was assessed based upon predictions of annual maximum atrazine concentrations over 1-d and 60-d durations, predictions that are critical in threatened and endangered species risk assessments for pesticides when evaluating potential acute and chronic exposure of aquatic organisms. The simulation results showed that for nearly half of the watersheds simulated, the uncalibrated SWAT model was able to predict annual maximum pesticide concentrations within a narrow range of uncertainty resulting from atrazine application timing patterns. An uncalibrated model's predictive performance is essential for the assessment of pesticide exposure in flowing water bodies, the majority of which have insufficient monitoring data for direct calibration, even in data-rich countries. In situations in which SWAT over- or underpredicted the annual maximum concentrations, the magnitude of the over- or underprediction was commonly less than a factor of 2, indicating that the model and uncalibrated parameterization

  6. Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow

    Science.gov (United States)

    Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke

    2017-04-01

    Quantifying the mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-second (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models with observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10^0 to 10^6 km2 in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.
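
    The abstract describes empirical regressions of annual-flow metrics on climate and catchment descriptors; a minimal sketch of that kind of model is a log-log ordinary least squares fit, shown below on hypothetical data. The predictor set, functional form and numbers are assumptions, not the authors' model.

        import numpy as np

        # Hypothetical training table: one row per gauged catchment (illustrative values only).
        rng = np.random.default_rng(3)
        n = 200
        area = rng.uniform(1e0, 1e6, n)            # km^2
        precip = rng.uniform(200, 3000, n)         # mm/yr, catchment-averaged
        slope = rng.uniform(0.1, 30, n)            # degrees
        mean_af = 1e-4 * area * precip**1.2 * slope**0.1 * rng.lognormal(0, 0.3, n)  # m^3/s (synthetic)

        # Log-log multiple regression: log(AF) = b0 + b1*log(area) + b2*log(precip) + b3*log(slope)
        X = np.column_stack([np.ones(n), np.log(area), np.log(precip), np.log(slope)])
        y = np.log(mean_af)
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)

        predicted = np.exp(X @ coef)               # back-transformed mean annual flow estimates
        r2 = 1 - np.sum((y - X @ coef) ** 2) / np.sum((y - y.mean()) ** 2)
        print("coefficients:", coef, " R^2 (log space):", round(r2, 2))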

  7. The impact of including children with intellectual disability in general education classrooms on the academic achievement of their low-, average-, and high-achieving peers.

    Science.gov (United States)

    Sermier Dessemontet, Rachel; Bless, Gérard

    2013-03-01

    This study aimed at assessing the impact of including children with intellectual disability (ID) in general education classrooms with support on the academic achievement of their low-, average-, and high-achieving peers without disability. A quasi-experimental study was conducted with an experimental group of 202 pupils from classrooms with an included child with mild or moderate ID, and a control group of 202 pupils from classrooms with no included children with special educational needs (matched pairs sample). The progress of these 2 groups in their academic achievement was compared over a period of 1 school year. No significant difference was found in the progress of the low-, average-, or high-achieving pupils from classrooms with or without inclusion. The results suggest that including children with ID in primary general education classrooms with support does not have a negative impact on the progress of pupils without disability.

  8. Investigation on repetition rate and pulse duration influences on ablation efficiency of metals using a high average power Yb-doped ultrafast laser

    Directory of Open Access Journals (Sweden)

    Lopez J.

    2013-11-01

    Ultrafast lasers provide outstanding processing quality, but their main drawback is the low removal rate per pulse compared to longer pulses. This limitation could be overcome by increasing both the average power and the repetition rate. In this paper, we report on the influence of high repetition rate and pulse duration on both the ablation efficiency and the processing quality on metals. All trials were performed with a single tunable ultrafast laser (350 fs to 10 ps).

  9. High average daily intake of PCDD/Fs and serum levels in residents living near a deserted factory producing pentachlorophenol (PCP) in Taiwan: Influence of contaminated fish consumption

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C.C. [Department of Environmental and Occupational Health, Medical College, National Cheng Kung University, Tainan, Taiwan (China); Research Center of Environmental Trace Toxic Substances, Medical College, National Cheng Kung University, Tainan, Taiwan (China); Lin, W.T. [Department of Environmental and Occupational Health, Medical College, National Cheng Kung University, Tainan, Taiwan (China); Liao, P.C. [Department of Environmental and Occupational Health, Medical College, National Cheng Kung University, Tainan, Taiwan (China); Research Center of Environmental Trace Toxic Substances, Medical College, National Cheng Kung University, Tainan, Taiwan (China); Su, H.J. [Department of Environmental and Occupational Health, Medical College, National Cheng Kung University, Tainan, Taiwan (China); Research Center of Environmental Trace Toxic Substances, Medical College, National Cheng Kung University, Tainan, Taiwan (China); Chen, H.L. [Department of Industrial Safety and Health, Hung Kuang University, Taichung, 34 Chung Chie Rd. Sha Lu, Taichung 433, Taiwan (China)]. E-mail: hsiulin@sunrise.hk.edu.tw

    2006-05-15

    An abandoned pentachlorophenol plant and the nearby area in southern Taiwan were heavily contaminated by dioxins, impurities formed in the PCP production process. The investigation showed that the average serum PCDD/F level of residents living in the nearby area (62.5 pg WHO-TEQ/g lipid) was higher than that of residents living in the non-polluted areas (22.5 and 18.2 pg WHO-TEQ/g lipid) (P < 0.05). In biota samples, the average PCDD/F level of milkfish in the sea reservoir (28.3 pg WHO-TEQ/g) was higher than that in the nearby fish farm (0.15 pg WHO-TEQ/g), and tilapia and shrimp showed a similar trend. The average daily PCDD/F intake of 38% of the participants was higher than the 4 pg WHO-TEQ/kg/day suggested by the World Health Organization. Serum PCDD/F was positively associated with average daily intake (ADI) after adjustment for age, sex, BMI, and smoking status. In addition, a prospective cohort study is suggested to determine the long-term health effects on the people living near the factory. - Inhabitants living near a deserted PCP factory are exposed to high PCDD/F levels.

  10. Short communication: Prevalence of digital dermatitis in Canadian dairy cattle classified as high, average, or low antibody- and cell-mediated immune responders.

    Science.gov (United States)

    Cartwright, S L; Malchiodi, F; Thompson-Crispi, K; Miglior, F; Mallard, B A

    2017-10-01

    Lameness is a major animal welfare issue affecting Canadian dairy producers, and it can lead to production, reproduction, and health problems in dairy cattle herds. Although several different lesions affect dairy cattle hooves, studies show that digital dermatitis is the most common lesion identified in Canadian dairy herds. It has also been shown that dairy cattle classified as having high immune response (IR) have lower incidence of disease compared with those animals with average and low IR; therefore, it has been hypothesized that IR plays a role in preventing infectious hoof lesions. The objective of this study was to compare the prevalence of digital dermatitis in Canadian dairy cattle that were classified for antibody-mediated (AMIR) and cell-mediated (CMIR) immune response. Cattle (n = 329) from 5 commercial dairy farms in Ontario were evaluated for IR using a patented test protocol that captures both AMIR and CMIR. Individuals were classified as high, average, or low responders based on standardized residuals for AMIR and CMIR. Residuals were calculated using a general linear model that included the effects of herd, parity, stage of lactation, and stage of pregnancy. Hoof health data were collected from 2011 to 2013 by the farm's hoof trimmer using Hoof Supervisor software (KS Dairy Consulting Inc., Dresser, WI). All trim events were included for each animal, and lesions were assessed as a binary trait at each trim event. Hoof health data were analyzed using a mixed model that included the effects of herd, stage of lactation (at trim date), parity (at trim date), IR category (high, average, and low), and the random effect of animal. All data were presented as prevalence within IR category. Results showed that cows with high AMIR had significantly lower prevalence of digital dermatitis than cattle with average and low AMIR. No significant difference in prevalence of digital dermatitis was observed between high, average, and low CMIR cows. These results

  11. Leaf anatomical and photosynthetic acclimation to cool temperature and high light in two winter versus two summer annuals.

    Science.gov (United States)

    Cohu, Christopher M; Muller, Onno; Adams, William W; Demmig-Adams, Barbara

    2014-09-01

    Acclimation of foliar features to cool temperature and high light was characterized in winter (Spinacia oleracea L. cv. Giant Nobel; Arabidopsis thaliana (L.) Heynhold Col-0 and ecotypes from Sweden and Italy) versus summer (Helianthus annuus L. cv. Soraya; Cucurbita pepo L. cv. Italian Zucchini Romanesco) annuals. Significant relationships existed among leaf dry mass per area, photosynthesis, leaf thickness and palisade mesophyll thickness. While the acclimatory response of the summer annuals to cool temperature and/or high light levels was limited, the winter annuals increased the number of palisade cell layers, ranging from two layers under moderate light and warm temperature to between four and five layers under cool temperature and high light. A significant relationship was also found between palisade tissue thickness and either cross-sectional area or number of phloem cells (each normalized by vein density) in minor veins among all four species and growth regimes. The two winter annuals, but not the summer annuals, thus exhibited acclimatory adjustments of minor vein phloem to cool temperature and/or high light, with more numerous and larger phloem cells and a higher maximal photosynthesis rate. The upregulation of photosynthesis in winter annuals in response to low growth temperature may thus depend on not only (1) a greater volume of photosynthesizing palisade tissue but also (2) leaf veins containing additional phloem cells and presumably capable of exporting a greater volume of sugars from the leaves to the rest of the plant. © 2014 Scandinavian Plant Physiology Society.

  12. Building a Grad Nation: Progress and Challenge in Ending the High School Dropout Epidemic. Annual Update 2015

    Science.gov (United States)

    DePaoli, Jennifer L.; Fox, Joanna Hornig; Ingram, Erin S.; Maushard, Mary; Bridgeland, John M.; Balfanz, Robert

    2015-01-01

    In 2013, the national high school graduation rate hit a record high of 81.4 percent, and for the third year in a row, the nation remained on pace to meet the 90 percent goal by the Class of 2020. This sixth annual update on America's high school dropout challenge shows that these gains have been made possible by raising graduation rates for…

  13. Building a Grad Nation: Progress and Challenge in Ending the High School Dropout Epidemic. Annual Update 2014

    Science.gov (United States)

    Balfanz, Robert; Bridgeland, John M.; Fox, Joanna Hornig; DePaoli, Jennifer L.; Ingram, Erin S.; Maushard, Mary

    2014-01-01

    This fifth annual update on America's high school dropout crisis shows that, for the first time in history, the nation has crossed the 80 percent high school graduation rate threshold and remains on pace, for the second year in a row, to meet the goal of a 90 percent high school graduation rate by the Class of 2020. This report highlights key…

  14. Annual effective dose equivalents arising from inhalation of 222Rn, 220Rn and their decay products in high background radiation area in China

    International Nuclear Information System (INIS)

    Zhang Zhonghou

    1985-01-01

    The author presents the data of on-the-spot investigations in the high background radiation area in Yangjiang County in 1975 and 1981. Monazite sand is contained in the soil of this area. The average concentrations of 222Rn in the air indoors and outdoors of the high background radiation area are 31.8 and 16.4 Bq m-3 respectively, which are equal to 2.9 and 1.5 times the average concentrations in the control area. The average concentrations of 220Rn in the air indoors and outdoors of the high background area are 167.5 and 18.4 Bq m-3, corresponding to 9.6 and 4.8 times those of the control area respectively. The average potential alpha energy concentrations for daughters of 222Rn indoors and outdoors are 0.1 and 0.097 μJ m-3, which are equal to 2.6 and 2.2 times those of the control area respectively. The average potential alpha energy concentrations for daughters of 220Rn indoors and outdoors are 0.255 and 0.053 μJ m-3, corresponding to 3.7 and 2.7 times those of the control area respectively. The average annual effective dose equivalents arising from inhalation of 222Rn, 220Rn and their decay products in the high background radiation area are estimated to be 2.8 mSv per caput, of which 40.5% arises from 220Rn and its decay products. This result is about 3 times that in the neighboring control area
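    The quoted ratios and the dose split can be checked with simple arithmetic; the short calculation below only re-derives the implied control-area concentrations and the thoron share of the 2.8 mSv annual dose from the numbers given in the record.

    ```python
    # Back-of-the-envelope check of the figures quoted above.
    rn222_indoor, rn222_outdoor = 31.8, 16.4      # Bq/m3, high-background area
    ratio_indoor, ratio_outdoor = 2.9, 1.5        # relative to the control area
    print(rn222_indoor / ratio_indoor)            # ~11 Bq/m3 implied indoors in the control area
    print(rn222_outdoor / ratio_outdoor)          # ~10.9 Bq/m3 implied outdoors in the control area

    annual_dose = 2.8                             # mSv per caput, 222Rn + 220Rn and progeny
    thoron_fraction = 0.405
    print(annual_dose * thoron_fraction)          # ~1.13 mSv from 220Rn and its decay products
    print(annual_dose * (1 - thoron_fraction))    # ~1.67 mSv from 222Rn and its decay products
    ```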

  15. Differences in Learning Characteristics Between Students With High, Average, and Low Levels of Academic Procrastination: Students’ Views on Factors Influencing Their Learning

    Directory of Open Access Journals (Sweden)

    Lennart Visser

    2018-05-01

    Within the field of procrastination, much research has been conducted on factors that have an influence on academic procrastination. Less is known about how such factors may differ for various students. In addition, not much is known about differences in the process of how factors influence students’ learning and what creates differences in procrastination behavior between students with different levels of academic procrastination. In this study learning characteristics and the self-regulation behavior of three groups of students with different levels of academic procrastination were compared. The rationale behind this was that certain learning characteristics and self-regulation behaviors may play out differently in students with different levels of academic procrastination. Participants were first-year students (N = 22) with different levels of academic procrastination enrolled in an elementary teacher education program. The selection of the participants into three groups of students (low procrastination, n = 8; average procrastination, n = 8; high procrastination, n = 6) was based on their scores on a questionnaire measuring the students’ levels of academic procrastination. From semi-structured interviews, six themes emerged that describe how students in the three groups deal with factors that influence the students’ learning: degree program choice, getting started with study activities, engagement in study activities, ways of reacting to failure, view of oneself, and study results. This study shows the importance of looking at differences in how students deal with certain factors possibly negatively influencing their learning. Within the group of students with average and high levels of academic procrastination, factors influencing their learning are regularly present. These factors lead to procrastination behavior among students with high levels of academic procrastination, but this seems not the case among students with an average

  16. Differences in Learning Characteristics Between Students With High, Average, and Low Levels of Academic Procrastination: Students' Views on Factors Influencing Their Learning.

    Science.gov (United States)

    Visser, Lennart; Korthagen, Fred A J; Schoonenboom, Judith

    2018-01-01

    Within the field of procrastination, much research has been conducted on factors that have an influence on academic procrastination. Less is known about how such factors may differ for various students. In addition, not much is known about differences in the process of how factors influence students' learning and what creates differences in procrastination behavior between students with different levels of academic procrastination. In this study learning characteristics and the self-regulation behavior of three groups of students with different levels of academic procrastination were compared. The rationale behind this was that certain learning characteristics and self-regulation behaviors may play out differently in students with different levels of academic procrastination. Participants were first-year students ( N = 22) with different levels of academic procrastination enrolled in an elementary teacher education program. The selection of the participants into three groups of students (low procrastination, n = 8; average procrastination, n = 8; high procrastination, n = 6) was based on their scores on a questionnaire measuring the students' levels of academic procrastination. From semi-structured interviews, six themes emerged that describe how students in the three groups deal with factors that influence the students' learning: degree program choice, getting started with study activities, engagement in study activities, ways of reacting to failure, view of oneself, and study results. This study shows the importance of looking at differences in how students deal with certain factors possibly negatively influencing their learning. Within the group of students with average and high levels of academic procrastination, factors influencing their learning are regularly present. These factors lead to procrastination behavior among students with high levels of academic procrastination, but this seems not the case among students with an average level of academic

  17. Differences in Learning Characteristics Between Students With High, Average, and Low Levels of Academic Procrastination: Students’ Views on Factors Influencing Their Learning

    Science.gov (United States)

    Visser, Lennart; Korthagen, Fred A. J.; Schoonenboom, Judith

    2018-01-01

    Within the field of procrastination, much research has been conducted on factors that have an influence on academic procrastination. Less is known about how such factors may differ for various students. In addition, not much is known about differences in the process of how factors influence students’ learning and what creates differences in procrastination behavior between students with different levels of academic procrastination. In this study learning characteristics and the self-regulation behavior of three groups of students with different levels of academic procrastination were compared. The rationale behind this was that certain learning characteristics and self-regulation behaviors may play out differently in students with different levels of academic procrastination. Participants were first-year students (N = 22) with different levels of academic procrastination enrolled in an elementary teacher education program. The selection of the participants into three groups of students (low procrastination, n = 8; average procrastination, n = 8; high procrastination, n = 6) was based on their scores on a questionnaire measuring the students’ levels of academic procrastination. From semi-structured interviews, six themes emerged that describe how students in the three groups deal with factors that influence the students’ learning: degree program choice, getting started with study activities, engagement in study activities, ways of reacting to failure, view of oneself, and study results. This study shows the importance of looking at differences in how students deal with certain factors possibly negatively influencing their learning. Within the group of students with average and high levels of academic procrastination, factors influencing their learning are regularly present. These factors lead to procrastination behavior among students with high levels of academic procrastination, but this seems not the case among students with an average level of academic

  18. The Texas A and M student branch's annual high school teachers' conference

    International Nuclear Information System (INIS)

    Wood, A.; Clements, M.

    1991-01-01

    To quote the American Nuclear Society (ANS) Student Constitution, the objective of a student branch is the advancement of science and engineering relating to the atomic nucleus, and of allied science and arts. The Texas A and M University (TAMU) student chapter has extended this objective to that of promoting a better understanding of the nuclear sciences by the general public. The student branch has attempted to reach these objectives by sponsoring a variety of activities designed to motivate and interest individuals to become more aware of nuclear technology and its benefits. These activities are directed toward fellow college students, high school teachers and students, and the surrounding community. One of the largest and most important activities organized by the TAMU student branch is the annual student conference

  19. Environmental friendly high efficient light source. Plasma lamp. 2006 annual report

    Energy Technology Data Exchange (ETDEWEB)

    Courret, G.

    2006-07-01

    This annual report for 2006 for the Swiss Federal Office of Energy (SFOE) reports on work being done on the development of a high-efficiency source of light based on the light emission of a plasma. The report presents a review of work done in 2006, including thermodynamics and assessment of the efficiency of the magnetron, tests with small bulbs, study of the standing wave ratio (microwave fluxes) and the development of a new coupling system to allow ignition in very small bulbs. Also, knowledge on the fillings of the bulb and induced effects of the modulator were gained. The development of a second generation of modulator to obtain higher efficiency at lower power is noted.

  20. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  1. Americans' Average Radiation Exposure

    International Nuclear Information System (INIS)

    2000-01-01

    We live with radiation every day. We receive radiation exposures from cosmic rays, from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emission from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body

  2. Amplified spontaneous emission and thermal management on a high average-power diode-pumped solid-state laser - the Lucia laser system

    International Nuclear Information System (INIS)

    Albach, D.

    2010-01-01

    The development of the laser triggered the birth of numerous fields in both scientific and industrial domains. High intensity laser pulses are a unique tool for light/matter interaction studies and applications. However, current flash-pumped glass-based systems are inherently limited in repetition-rate and efficiency. Development within recent years in the field of semiconductor lasers and gain media drew special attention to a new class of lasers, the so-called Diode Pumped Solid State Laser (DPSSL). DPSSLs are highly efficient lasers and are candidates of choice for compact, high average-power systems required for industrial applications but also as high-power pump sources for ultra-high intense lasers. The work described in this thesis takes place in the context of the 1 kilowatt average-power DPSSL program Lucia, currently under construction at the 'Laboratoire d'Utilisation des Laser Intenses' (LULI) at the Ecole Polytechnique, France. Generation of sub-10 nanosecond long pulses with energies of up to 100 joules at repetition rates of 10 hertz are mainly limited by Amplified Spontaneous Emission (ASE) and thermal effects. These limitations are the central themes of this work. Their impact is discussed within the context of a first Lucia milestone, set around 10 joules. The developed laser system is shown in detail from the oscillator level to the end of the amplification line. A comprehensive discussion of the impact of ASE and thermal effects is completed by related experimental benchmarks. The validated models are used to predict the performances of the laser system, finally resulting in a first activation of the laser system at an energy level of 7 joules in a single-shot regime and 6.6 joules at repetition rates up to 2 hertz. Limitations and further scaling approaches are discussed, followed by an outlook for the further development. (author) [fr

  3. Back pain in physically inactive students compared to physical education students with a high and average level of physical activity studying in Poland.

    Science.gov (United States)

    Kędra, Agnieszka; Kolwicz-Gańko, Aleksandra; Kędra, Przemysław; Bochenek, Anna; Czaprowski, Dariusz

    2017-11-28

    The aim of the study was (1) to characterise back pain in physically inactive students as well as in trained (with a high level of physical activity) and untrained (with an average level of physical activity) physical education (PE) students and (2) to find out whether there exist differences regarding the declared incidence of back pain (within the last 12 months) between physically inactive students and PE students as well as between trained (with a high level of physical activity) and untrained (with an average level of physical activity) PE students. The study included 1321 1st-, 2nd- and 3rd-year students (full-time bachelor degree course) of Physical Education, Physiotherapy, Pedagogy as well as Tourism and Recreation from 4 universities in Poland. A questionnaire prepared by the authors was applied as a research tool. The 10-point Visual Analogue Scale (VAS) was used to assess pain intensity. Prior to the study, the reliability of the questionnaire was assessed by conducting it on a group of 20 participants twice with a short interval. No significant differences between the results obtained in the two surveys were revealed (p > 0.05). The declared incidence of back pain within the last 12 months did not differ significantly between physically inactive students and students of physical education (p > 0.05). Back pain was more common in the group of trained students than among untrained individuals (p < 0.05). No significant differences in the declared incidence of back pain were found between physically inactive students and physical education students (p > 0.05). The trained students declared back pain more often than their untrained counterparts (p < 0.05).

  4. Performance of MgO:PPLN, KTA, and KNbO₃ for mid-wave infrared broadband parametric amplification at high average power.

    Science.gov (United States)

    Baudisch, M; Hemmer, M; Pires, H; Biegert, J

    2014-10-15

    The performance of potassium niobate (KNbO₃), MgO-doped periodically poled lithium niobate (MgO:PPLN), and potassium titanyl arsenate (KTA) were experimentally compared for broadband mid-wave infrared parametric amplification at a high repetition rate. The seed pulses, with an energy of 6.5 μJ, were amplified using 410 μJ pump energy at 1064 nm to a maximum pulse energy of 28.9 μJ at 3 μm wavelength and at a 160 kHz repetition rate in MgO:PPLN while supporting a transform limited duration of 73 fs. The high average powers of the interacting beams used in this study revealed average power-induced processes that limit the scaling of optical parametric amplification in MgO:PPLN; the pump peak intensity was limited to 3.8  GW/cm² due to nonpermanent beam reshaping, whereas in KNbO₃ an absorption-induced temperature gradient in the crystal led to permanent internal distortions in the crystal structure when operated above a pump peak intensity of 14.4  GW/cm².

  5. A highly detailed FEM volume conductor model based on the ICBM152 average head template for EEG source imaging and TCS targeting.

    Science.gov (United States)

    Haufe, Stefan; Huang, Yu; Parra, Lucas C

    2015-08-01

    In electroencephalographic (EEG) source imaging as well as in transcranial current stimulation (TCS), it is common to model the head using either three-shell boundary element (BEM) or more accurate finite element (FEM) volume conductor models. Since building FEMs is computationally demanding and labor intensive, they are often extensively reused as templates even for subjects with mismatching anatomies. BEMs can in principle be used to efficiently build individual volume conductor models; however, the limiting factor for such individualization is the high acquisition cost of structural magnetic resonance images. Here, we build a highly detailed (0.5 mm3 resolution, 6 tissue type segmentation, 231 electrodes) FEM based on the ICBM152 template, a nonlinear average of 152 adult human heads, which we call ICBM-NY. We show that, through more realistic electrical modeling, our model is as accurate as individual BEMs. Moreover, through using an unbiased population average, our model is also more accurate than FEMs built from mismatching individual anatomies. Our model is made available in Matlab format.

  6. Neutron and gamma emission from highly excited states and states with high spin. Annual progress report

    International Nuclear Information System (INIS)

    Sperber, D.

    1976-08-01

    Both classical and quantum models for the collision between heavy ions were studied. Classical models were used to account for the possibility of strong damping. Two models which account for side peaking and considerable energy loss were proposed. According to the first, the ions clutch at the distance of closest approach and the radial energy is dissipated fast in the entrance channel. This is followed by a slow motion in the exit channel up to the snapping point. According to the second model, there is an asymmetry in the conservative potential between the entrance and exit channels. The exit channel potential includes deformations. A dynamical model including transfer was developed. The trajectories are determined dynamically whereas the transfer is considered as a random process. Semi-classical calculations (first order quantum calculation) were performed to test the validity of the classical model or the sharp cut-off approximation. The main conclusion is that for energies high above the Coulomb barrier, the classical approximation is adequate but close to the barrier, it is insufficient, and quantum effects are important. It was shown that a quantum mechanical model using time dependent perturbation accounts very well for the angular distribution in strongly damped collisions. A list of publications is included

  7. High methane emissions dominate annual greenhouse gas balances 30 years after bog rewetting

    Science.gov (United States)

    Vanselow-Algan, M.; Schmidt, S. R.; Greven, M.; Fiencke, C.; Kutzbach, L.; Pfeiffer, E.-M.

    2015-02-01

    Natural peatlands are important carbon sinks and sources of methane (CH4). In contrast, drained peatlands turn from a carbon sink to a carbon source and potentially emit nitrous oxide (N2O). Rewetting of peatlands thus implies climate change mitigation. However, data about the time span that is needed for the re-establishment of the carbon sink function by restoration are scarce. We therefore investigated the annual greenhouse gas (GHG) balances of three differently vegetated bog sites 30 years after rewetting. All three vegetation communities turned out to be sources of carbon dioxide (CO2) ranging between 0.6 ± 1.43 t CO2 ha-1 yr-1 (Sphagnum-dominated vegetation) and 3.09 ± 3.86 t CO2 ha-1 yr-1 (vegetation dominated by heath). While accounting for the different global warming potential (GWP) of the three greenhouse gases, the annual GHG balance was calculated. Emissions ranged between 25 and 53 t CO2-eq ha-1 yr-1 and were dominated by large emissions of CH4 (22 up to 51 t CO2-eq ha-1 yr-1), while highest rates were found at purple moor grass (Molinia caerulea) stands. These are to our knowledge the highest CH4 emissions so far reported for bog ecosystems in temperate Europe. As the restored area was subject to large fluctuations in water table, we conclude that the high CH4 emission rates were caused by a combination of both the temporal inundation of the easily decomposable plant litter of this grass species and the plant-mediated transport through its tissues. In addition, as a result of the land use history, the mixed soil material can serve as an explanation. With regards to the long time span passed since rewetting, we note that the initial increase in CH4 emissions due to rewetting as described in the literature is not limited to a short-term period.

  8. The ETA-II linear induction accelerator and IMP wiggler: A high-average-power millimeter-wave free-electron laser for plasma heating

    International Nuclear Information System (INIS)

    Allen, S.L.; Scharlemann, E.T.

    1993-01-01

    The authors have constructed a 140-GHz free-electron laser to generate high-average-power microwaves for heating the MTX tokamak plasma. A 5.5-m steady-state wiggler (Intense Microwave, Prototype-IMP) has been installed at the end of the upgraded 60-cell ETA-II accelerator, and is configured as an FEL amplifier for the output of a 140-GHz long-pulse gyrotron. Improvements in the ETA-II accelerator include a multicable-feed power distribution network, better magnetic alignment using a stretched-wire alignment technique (SWAT), and a computerized tuning algorithm that directly minimizes the transverse sweep (corkscrew motion) of the electron beam. The upgrades were first tested on the 20-cell, 3-MeV front end of ETA-II and resulted in greatly improved energy flatness and reduced corkscrew motion. The upgrades were then incorporated into the full 60-cell configuration of ETA-II, along with modifications to allow operation in 50-pulse bursts at pulse repetition frequencies up to 5 kHz. The pulse power modifications were developed and tested on the High Average Power Test Stand (HAPTS), and have significantly reduced the voltage and timing jitter of the MAG 1D magnetic pulse compressors. The 2-3 kA, 6-7 MeV beam from ETA-II is transported to the IMP wiggler, which has been reconfigured as a laced wiggler, with both permanent magnets and electromagnets, for high magnetic field operation. Tapering of the wiggler magnetic field is completely computer controlled and can be optimized based on the output power. The microwaves from the FEL are transmitted to the MTX tokamak by a windowless quasi-optical microwave transmission system. Experiments at MTX are focused on studies of electron-cyclotron-resonance heating (ECRH) of the plasma. The authors summarize here the accelerator and pulse power modifications, and describe the status of ETA-II, IMP, and MTX operations

  9. The ETA-II linear induction accelerator and IMP wiggler: A high-average-power millimeter-wave free-electron-laser for plasma heating

    International Nuclear Information System (INIS)

    Allen, S.L.; Scharlemann, E.T.

    1992-05-01

    We have constructed a 140-GHz free-electron laser to generate high-average-power microwaves for heating the MTX tokamak plasma. A 5.5-m steady-state wiggler (Intense Microwave Prototype-IMP) has been installed at the end of the upgraded 60-cell ETA-II accelerator, and is configured as an FEL amplifier for the output of a 140-GHz long-pulse gyrotron. Improvements in the ETA-II accelerator include a multicable-feed power distribution network, better magnetic alignment using a stretched-wire alignment technique (SWAT), and a computerized tuning algorithm that directly minimizes the transverse sweep (corkscrew motion) of the electron beam. The upgrades were first tested on the 20-cell, 3-MeV front end of ETA-II and resulted in greatly improved energy flatness and reduced corkscrew motion. The upgrades were then incorporated into the full 60-cell configuration of ETA-II, along with modifications to allow operation in 50-pulse bursts at pulse repetition frequencies up to 5 kHz. The pulse power modifications were developed and tested on the High Average Power Test Stand (HAPTS), and have significantly reduced the voltage and timing jitter of the MAG 1D magnetic pulse compressors. The 2-3 kA, 6-7 MeV beam from ETA-II is transported to the IMP wiggler, which has been reconfigured as a laced wiggler, with both permanent magnets and electromagnets, for high magnetic field operation. Tapering of the wiggler magnetic field is completely computer controlled and can be optimized based on the output power. The microwaves from the FEL are transmitted to the MTX tokamak by a windowless quasi-optical microwave transmission system. Experiments at MTX are focused on studies of electron-cyclotron-resonance heating (ECRH) of the plasma. We summarize here the accelerator and pulse power modifications, and describe the status of ETA-II, IMP, and MTX operations

  10. RELATIONSHIPS BETWEEN ZONAL WIND ANOMALIES IN HIGH AND LOW TROPOSPHERE AND ANNUAL FREQUENCY OF NW PACIFIC TROPICAL CYCLONES

    Institute of Scientific and Technical Information of China (English)

    GONG Zhen-song; HE Min

    2007-01-01

    Relationships between large-scale zonal wind anomalies and the annual frequency of NW Pacific tropical cyclones, and the possible mechanisms, are investigated with correlation and composite analysis. It is indicated that when ΔU200 − ΔU850 > 0 in the eastern tropical Pacific and ΔU200 − ΔU850 < 0 in the western tropical Pacific, the Walker cell is stronger in the tropical Pacific region and the annual frequency of NW Pacific tropical cyclones is above normal. In years with such zonal wind anomalies, the circulation of the high and low troposphere and the vertical motions in the troposphere show distinct characteristics. On the time scale of short-range climate prediction, zonal wind anomalies in the high and low troposphere are useful as a preliminary signal for predicting the annual frequency of NW Pacific tropical cyclones.
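    Read as a forecasting rule, the criterion compares the vertical shear anomaly ΔU200 − ΔU850 over the eastern and western tropical Pacific. A minimal sketch of such a signal check follows; the strict zero threshold, the symmetric below-normal case, and the array inputs are assumptions, since the record only states the above-normal case explicitly.

    ```python
    import numpy as np

    def tc_frequency_signal(du200_east, du850_east, du200_west, du850_west):
        """Qualitative TC-frequency signal from zonal-wind anomalies (sketch only).

        Inputs are area-averaged zonal wind anomalies (m/s) at 200 and 850 hPa
        over the eastern and western tropical Pacific.
        """
        shear_east = np.mean(du200_east) - np.mean(du850_east)
        shear_west = np.mean(du200_west) - np.mean(du850_west)
        if shear_east > 0 and shear_west < 0:
            return "above-normal annual NW Pacific TC frequency (stronger Walker cell)"
        if shear_east < 0 and shear_west > 0:
            # the reversed pattern is assumed symmetric here; the record only states the first case
            return "below-normal annual NW Pacific TC frequency"
        return "no clear signal"
    ```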

  11. Investigation of the thermal and optical performance of a spatial light modulator with high average power picosecond laser exposure for materials processing applications

    Science.gov (United States)

    Zhu, G.; Whitehead, D.; Perrie, W.; Allegre, O. J.; Olle, V.; Li, Q.; Tang, Y.; Dawson, K.; Jin, Y.; Edwardson, S. P.; Li, L.; Dearden, G.

    2018-03-01

    Spatial light modulators (SLMs) addressed with computer generated holograms (CGHs) can create structured light fields on demand when an incident laser beam is diffracted by a phase CGH. The power handling limitations of these devices based on a liquid crystal layer have always been of some concern. With careful engineering of chip thermal management, we report the detailed optical phase and temperature response of a liquid cooled SLM exposed to picosecond laser powers up to 〈P〉 = 220 W at 1064 nm. This information is critical for determining device performance at high laser powers. SLM chip temperature rose linearly with incident laser exposure, increasing by only 5 °C at 〈P〉 = 220 W incident power, measured with a thermal imaging camera. Thermal response time with continuous exposure was 1-2 s. The optical phase response with incident power approaches 2π radians with average power up to 〈P〉 = 130 W, hence the operational limit, while above this power, liquid crystal thickness variations limit phase response to just over π radians. Modelling of the thermal and phase response with exposure is also presented, supporting experimental observations well. These remarkable performance characteristics show that liquid crystal based SLM technology is highly robust when efficiently cooled. High speed, multi-beam plasmonic surface micro-structuring at a rate R = 8 cm2 s-1 is achieved on polished metal surfaces at 〈P〉 = 25 W exposure while diffractive, multi-beam surface ablation with average power 〈P〉 = 100 W on stainless steel is demonstrated with an ablation rate of ~4 mm3 min-1. However, above 130 W, first order diffraction efficiency drops significantly in accord with the observed operational limit. Continuous exposure for a period of 45 min at a laser power of 〈P〉 = 160 W did not result in any detectable drop in diffraction efficiency, confirmed afterwards by the efficient

  12. High energy density in matter produced by heavy ion beams. Annual report 1987

    International Nuclear Information System (INIS)

    1988-08-01

    Research activities presented in this annual report were carried out in 1987 in the framework of the government-funded program 'High Energy Density in Matter Produced by Heavy Ion Beams'. It addresses fundamental problems of the generation and investigation of hot dense matter. Its initial motivation and its ultimate goal is the question whether inertial confinement can be achieved by intense heavy ion beams. The new accelerator facility SIS/ESR now under construction at GSI will provide an excellent potential for research in this field. The construction work at the new facility is on schedule. The building construction is near completion and the SIS accelerator will have its first beam at the beginning of next year. First experiments at lower intensity will start in summer 1989 and the full program will run after the cooler and storage ring ESR has become operational. Accordingly, the planning and the preparation of the high energy density experiments at this unique facility was an essential part of the activities last year. In this funding period emphasis was given to the experimental activities at the existing accelerator. In addition to a number of accelerator-oriented and instrumental developments, an experiment on beam-plasma interaction yielded its first exciting results: a significant increase of the stopping power for heavy ions in plasma was measured. Other important activities were the investigation of dielectronic recombination of highly charged ions, spectroscopic investigations aiming at the pumping of short wavelength lasers by heavy ion beams and a crossed beam experiment for the determination of Bi+ + Bi+ ionization cross sections. As in previous years theoretical work on space-charge-dominated beam dynamics as well as on hydrodynamics of dense plasmas, radiation transport and beam plasma interaction was continued, thus providing a basis for the future experiments. (orig.)

  13. High Temperature Materials Laboratory seventh annual report, October 1993--September 1994

    Energy Technology Data Exchange (ETDEWEB)

    Tennery, V.J.; Teague, P.A.

    1994-12-01

    The High Temperature Materials Laboratory (HTML) has completed its seventh year of operation as a designated Department of Energy User Facility at the Oak Ridge National Laboratory. Growth of the User Program has been demonstrated by the number of institutions executing user agreements since the HTML began operation in 1987. A total of 193 nonproprietary agreements (91 industry and 102 university) and 41 proprietary agreements (39 industry and two university) are now in effect. This represents an increase of 21 nonproprietary user agreements during FY 1994. Forty-one states are represented by these users. During FY 1994, the HTML User Program evaluated 106 nonproprietary proposals (46 from industry, 52 from universities, and 8 from other government facilities) and 8 proprietary proposals. The HTML User Advisory Committee approved about ninety-five percent of those evaluated proposals, sometimes after the prospective user revised the proposal based on comments from the Committee. This annual report discusses FY 1994 activities in the individual user centers, as well as plans for the future. It also gives statistics about users and their proposals and FY 1994 publications, and summarizes nonproprietary research projects active in FY 1994.

  14. High Temperature Materials Laboratory, Eleventh Annual Report: October 1997 through September 1998

    Energy Technology Data Exchange (ETDEWEB)

    Pasto, A.E.; Russell, B.J.

    2000-03-01

    The High Temperature Materials Laboratory (HTML) has completed its eleventh year of operation as a designated US Department of Energy User Facility at the Oak Ridge National Laboratory. This document profiles the historical growth of the HTML User and Fellowship Programs since their inception in 1987. Growth of the HTML programs has been demonstrated by the number of institutions executing user agreements and by the number of days of instrument use (user days) since the HTML began operation. A total of 522 agreements (351 industry, 156 university, and 15 other federal agency) are now in effect (452 nonproprietary and 70 proprietary). This represents an increase of 75 user agreements since the last reporting period (for FY 1997). A state-by-state summary of the nonproprietary user agreements is given in Appendix A. Forty-six states are represented. During FY 1998, the HTML User Program evaluated 80 nonproprietary proposals (32 from industry, 45 from universities, and 3 from other government facilities) and several proprietary proposals. Appendix B provides a detailed breakdown of the nonproprietary proposals received during FY 1998. The HTML User Advisory Committee approved about 95% of those proposals, sometimes after the prospective user revised the proposal based on comments from the committee. This annual report discusses activities in the individual user centers as well as plans for the future. It also gives statistics about users, proposals, and publications as well as summaries of the nonproprietary research projects active during 1998.

  15. High Temperature Materials Laboratory Thirteenth Annual Report: October 1999 Through September 2000

    Energy Technology Data Exchange (ETDEWEB)

    Pasto, AE

    2001-11-07

    The High Temperature Materials Laboratory (HTML) User Program continued to work with industrial, academic, and governmental users this year, accepting 86 new projects and developing 50 new user agreements. The table on the following page presents the breakdown of these statistics. The figure on page 2 depicts the continued growth in user agreements and user projects. You may note that our total number of proposals is nearing 1000, and we expect to achieve this number in our first proposal review meeting of FY 2001. The large number of new agreements bodes well for the future. A list of proposals to the HTML follows this section; at the end of the report, we present a list of agreements between HTML and universities and industries, broken down by state. Program highlights this year included several outstanding user projects (some of which are discussed in later sections), the annual meeting of the HTML Programs Senior Advisory Committee, the completion of a formal Multiyear Program Plan (MYPP), and finalization of a purchase agreement with JEOL for a new-generation electron microscope.

  16. High Temperature Materials Laboratory eight and ninth annual reports, October 1994 through September 1996

    Energy Technology Data Exchange (ETDEWEB)

    Pasto, A.E.; Russell, B.J.

    1997-10-01

    The High Temperature Materials Laboratory (HTML) has completed its ninth year of operation as a designated US Department of Energy User Facility at the Oak Ridge National Laboratory. This document profiles the historical growth of the HTML User and Fellowship Programs since their inception in 1987. Growth of the HTML programs has been demonstrated by the number of institutions executing user agreements, and by the number of days of instrument use (user days) since the HTML began operation. A total of 276 nonproprietary agreements (135 industry, 135 university, and 6 other federal agency) and 56 proprietary agreements are now in effect. This represents an increase of 70 nonproprietary user agreements since the last reporting period (for FY 1994). A state-by-state summary of these nonproprietary user agreements is given in Appendix A, and an alphabetical listing is provided in Appendix B. Forty-four states are represented by these users. During FY 1995 and 1996, the HTML User Program evaluated 145 nonproprietary proposals (62 from industry, 82 from universities, and 1 from other government facilities) and several proprietary proposals. The HTML User Advisory Committee approved about 95% of those proposals, frequently after the prospective user revised the proposal based on comments from the committee. This annual report discusses activities in the individual user centers, as well as plans for the future. It also gives statistics about users, proposals, and publications as well as summaries of the nonproprietary research projects active during 1995 and 1996.

  17. High methane emissions dominated annual greenhouse gas balances 30 years after bog rewetting

    Science.gov (United States)

    Vanselow-Algan, M.; Schmidt, S. R.; Greven, M.; Fiencke, C.; Kutzbach, L.; Pfeiffer, E.-M.

    2015-07-01

    Natural peatlands are important carbon sinks and sources of methane (CH4). In contrast, drained peatlands turn from a carbon sink to a carbon source and potentially emit nitrous oxide (N2O). Rewetting of peatlands thus potentially implies climate change mitigation. However, data about the time span that is needed for the re-establishment of the carbon sink function by restoration are scarce. We therefore investigated the annual greenhouse gas (GHG) balances of three differently vegetated sites of a bog ecosystem 30 years after rewetting. All three vegetation communities turned out to be sources of carbon dioxide (CO2) ranging between 0.6 ± 1.43 t CO2 ha-1 yr-1 (Sphagnum-dominated vegetation) and 3.09 ± 3.86 t CO2 ha-1 yr-1 (vegetation dominated by heath). While accounting for the different global warming potential (GWP) of CO2, CH4 and N2O, the annual GHG balance was calculated. Emissions ranged between 25 and 53 t CO2-eq ha-1 yr-1 and were dominated by large emissions of CH4 (22-51 t CO2-eq ha-1 yr-1), with highest rates found at purple moor grass (Molinia caerulea) stands. These are to our knowledge the highest CH4 emissions so far reported for bog ecosystems in temperate Europe. As the restored area was subject to large fluctuations in the water table, we assume that the high CH4 emission rates were caused by a combination of both the temporal inundation of the easily decomposable plant litter of purple moor grass and the plant-mediated transport through its tissues. In addition, as a result of the land use history, mixed soil material due to peat extraction and refilling can serve as an explanation. With regards to the long time span passed since rewetting, we note that the initial increase in CH4 emissions due to rewetting as described in the literature is not inevitably limited to a short-term period.

  18. Commercial Integrated Heat Pump with Thermal Storage --Demonstrate Greater than 50% Average Annual Energy Savings, Compared with Baseline Heat Pump and Water Heater (Go/No-Go) FY16 4th Quarter Milestone Report

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Bo [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Baxter, Van D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Rice, C. Keith [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Abu-Heiba, Ahmad [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-03-01

    For this study, we authored a new air source integrated heat pump (AS-IHP) model in EnergyPlus, and conducted building energy simulations to demonstrate greater than 50% average energy savings, in comparison to a baseline heat pump with an electric water heater, over 10 US cities, based on the EnergyPlus quick-service restaurant template building. We also assessed water heating energy saving potentials using AS-IHP versus gas heating, and pointed out climate zones where AS-IHPs are promising.
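    As a worked example of the savings metric, the snippet below averages per-city percentage savings of the AS-IHP case against the baseline; the city names and energy figures are placeholders, not results from the report.

    ```python
    # Hypothetical illustration of the savings metric: percent annual energy saving per
    # city, averaged over the simulated cities. The numbers below are placeholders.
    baseline_kwh = {"Atlanta": 52_000, "Chicago": 61_000, "Houston": 55_000}   # baseline HP + electric WH
    as_ihp_kwh = {"Atlanta": 24_000, "Chicago": 31_000, "Houston": 26_000}     # AS-IHP with thermal storage

    savings = [100 * (baseline_kwh[c] - as_ihp_kwh[c]) / baseline_kwh[c] for c in baseline_kwh]
    average_saving = sum(savings) / len(savings)
    print(f"average annual energy saving: {average_saving:.1f}%")   # go/no-go threshold was 50%
    ```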

  19. Leading Contributors to the Research Consortium's Annual Program, 1992-2011: High-Visibility Institutions, Researchers, and Topics

    Science.gov (United States)

    Cardinal, Bradley J.; Lee, Hyo

    2013-01-01

    Between 1992-2011, peer-reviewed research on the Research Consortium's annual program has been published in abstract form in the "Research Quarterly for Exercise and Sport". On the basis of frequency, high-visibility institutions, researchers, and sub-disciplinary categories were identified. Data were extracted from each abstract (N =…

  20. Arrange and average algorithm for the retrieval of aerosol parameters from multiwavelength high-spectral-resolution lidar/Raman lidar data.

    Science.gov (United States)

    Chemyakin, Eduard; Müller, Detlef; Burton, Sharon; Kolgotin, Alexei; Hostetler, Chris; Ferrare, Richard

    2014-11-01

    We present the results of a feasibility study in which a simple, automated, and unsupervised algorithm, which we call the arrange and average algorithm, is used to infer microphysical parameters (complex refractive index, effective radius, total number, surface area, and volume concentrations) of atmospheric aerosol particles. The algorithm uses backscatter coefficients at 355, 532, and 1064 nm and extinction coefficients at 355 and 532 nm as input information. Testing of the algorithm is based on synthetic optical data that are computed from prescribed monomodal particle size distributions and complex refractive indices that describe spherical, primarily fine mode pollution particles. We tested the performance of the algorithm for the "3 backscatter (β)+2 extinction (α)" configuration of a multiwavelength aerosol high-spectral-resolution lidar (HSRL) or Raman lidar. We investigated the degree to which the microphysical results retrieved by this algorithm depends on the number of input backscatter and extinction coefficients. For example, we tested "3β+1α," "2β+1α," and "3β" lidar configurations. This arrange and average algorithm can be used in two ways. First, it can be applied for quick data processing of experimental data acquired with lidar. Fast automated retrievals of microphysical particle properties are needed in view of the enormous amount of data that can be acquired by the NASA Langley Research Center's airborne "3β+2α" High-Spectral-Resolution Lidar (HSRL-2). It would prove useful for the growing number of ground-based multiwavelength lidar networks, and it would provide an option for analyzing the vast amount of optical data acquired with a future spaceborne multiwavelength lidar. The second potential application is to improve the microphysical particle characterization with our existing inversion algorithm that uses Tikhonov's inversion with regularization. This advanced algorithm has recently undergone development to allow automated and
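    The record does not spell out the algorithm, so the sketch below shows one plausible reading of a look-up-and-average retrieval: candidate aerosol models are arranged in a table of pre-computed optical coefficients, the candidates closest to the measured 3β+2α set are selected, and their microphysical parameters are averaged. All names, the distance metric, and the number of retained candidates are assumptions.

    ```python
    import numpy as np

    def arrange_and_average(measured, table_optics, table_microphysics, n_best=50):
        """One plausible reading of a look-up-table retrieval (not the authors' code).

        measured           : array of shape (5,)   -- 3 backscatter + 2 extinction coefficients
        table_optics       : array of shape (N, 5) -- the same optical set pre-computed for N
                             candidate (size distribution, refractive index) combinations
        table_microphysics : array of shape (N, M) -- effective radius, number/surface/volume
                             concentrations, refractive index, ... for each candidate
        Returns the average microphysical parameters of the n_best closest candidates.
        """
        # normalize each optical channel so the distance is not dominated by extinction
        scale = np.abs(measured)
        distance = np.sqrt(np.mean(((table_optics - measured) / scale) ** 2, axis=1))
        best = np.argsort(distance)[:n_best]
        return table_microphysics[best].mean(axis=0)
    ```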

  1. The theoretical strength of rubber: numerical simulations of polyisoprene networks at high tensile strains evidence the role of average chain tortuosity

    International Nuclear Information System (INIS)

    Hanson, David E; Barber, John L

    2013-01-01

    The ultimate stress and strain of polyisoprene rubber were studied by numerical simulations of three-dimensional random networks, subjected to tensile strains high enough to cause chain rupture. Previously published molecular chain force extension models and a numerical network construction procedure were used to perform the simulations for network crosslink densities between 2 × 10^19 and 1 × 10^20 cm-3, corresponding to experimental dicumyl-peroxide concentrations of 1–5 parts per hundred. At tensile failure (defined as the point of maximum stress), we find that the fraction of network chains ruptured is between 0.1% and 1%, depending on the crosslink density. The fraction of network chains that are taut, i.e. their end-to-end distance is greater than their unstretched contour length, ranges between 10% and 15% at failure. Our model predicts that the theoretical (defect-free) failure stress should be about twice the highest experimental value reported. For extensions approaching failure, tensile stress is dominated by the network morphology and purely enthalpic bond distortion forces and, in this regime, the model has essentially no free parameters. The average initial chain tortuosity (τ) appears to be an important statistical property of rubber networks; if the stress is scaled by τ and the tensile strain is scaled by τ^-1, we obtain a master curve for stress versus strain, valid for all crosslink densities. We derive an analytic expression for the average tortuosity, which is in agreement with values calculated in the simulations. (paper)

  2. Plasma wakefields driven by an incoherent combination of laser pulses: a path towards high-average power laser-plasma accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Benedetti, C.; Schroeder, C.B.; Esarey, E.; Leemans, W.P.

    2014-05-01

    The wakefield generated in a plasma by incoherently combining a large number of low energy laser pulses (i.e., without constraining the pulse phases) is studied analytically and by means of fully-self-consistent particle-in-cell simulations. The structure of the wakefield has been characterized and its amplitude compared with the amplitude of the wake generated by a single (coherent) laser pulse. We show that, in spite of the incoherent nature of the wakefield within the volume occupied by the laser pulses, behind this region the structure of the wakefield can be regular with an amplitude comparable or equal to that obtained from a single pulse with the same energy. Wake generation requires that the incoherent structure in the laser energy density produced by the combined pulses exists on a time scale short compared to the plasma period. Incoherent combination of multiple laser pulses may enable a technologically simpler path to high-repetition rate, high-average power laser-plasma accelerators and associated applications.

  3. Cost curves for implantation of small scale hydroelectric power plant project in function of the average annual energy production; Curvas de custo de implantacao de pequenos projetos hidreletricos em funcao da producao media anual de energia

    Energy Technology Data Exchange (ETDEWEB)

    Veja, Fausto Alfredo Canales; Mendes, Carlos Andre Bulhoes; Beluco, Alexandre

    2008-10-15

    Because of its maturity, small hydropower generation is one of the main energy sources to be considered for electrification of areas far from the national grid. Once a site with hydropower potential is identified, technical and economic studies to assess its feasibility must be carried out. Cost curves are helpful tools in the appraisal of the economic feasibility of this type of project. This paper presents a method to determine initial cost curves as a function of the average energy production of the hydropower plant, by using a set of parametric cost curves and the flow duration curve at the analyzed location. The method is illustrated using information related to 18 pre-feasibility studies carried out in 2002 in the Central-Atlantic rural region of Nicaragua. (author)
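    The chain described here (flow duration curve, to average annual energy, to a parametric cost curve) can be sketched as follows; the efficiency, head, and cost coefficients are placeholders rather than values from the Nicaraguan studies.

    ```python
    import numpy as np

    RHO_G = 9.81  # kW = 9.81 * Q[m^3/s] * H[m] * efficiency

    def average_annual_energy(exceedance, flow, head_m, efficiency=0.8, q_design=None):
        """Average annual energy (kWh/yr) from a flow duration curve.

        exceedance : exceedance probabilities (0..1), ascending
        flow       : corresponding discharges (m^3/s)
        """
        q = np.asarray(flow, dtype=float)
        if q_design is not None:
            q = np.minimum(q, q_design)              # turbine cannot use more than the design flow
        power_kw = RHO_G * q * head_m * efficiency   # power available at each exceedance level
        mean_power = np.trapz(power_kw, exceedance)  # average power over the year
        return mean_power * 8760.0                   # hours per year

    def implantation_cost(energy_kwh, a=2.5e3, b=0.8):
        """Placeholder parametric cost curve: cost = a * (annual energy in MWh)**b."""
        return a * (energy_kwh / 1000.0) ** b
    ```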

  4. Average effect estimates remain similar as evidence evolves from single trials to high-quality bodies of evidence: a meta-epidemiologic study.

    Science.gov (United States)

    Gartlehner, Gerald; Dobrescu, Andreea; Evans, Tammeka Swinson; Thaler, Kylie; Nussbaumer, Barbara; Sommer, Isolde; Lohr, Kathleen N

    2016-01-01

    The objective of our study was to use a diverse sample of medical interventions to assess empirically whether first trials rendered substantially different treatment effect estimates than reliable, high-quality bodies of evidence. We used a meta-epidemiologic study design using 100 randomly selected bodies of evidence from Cochrane reports that had been graded as high quality of evidence. To determine the concordance of effect estimates between first and subsequent trials, we applied both quantitative and qualitative approaches. For quantitative assessment, we used Lin's concordance correlation and calculated z-scores; to determine the magnitude of differences of treatment effects, we calculated standardized mean differences (SMDs) and ratios of relative risks. We determined qualitative concordance based on a two-tiered approach incorporating changes in statistical significance and magnitude of effect. First trials both overestimated and underestimated the true treatment effects in no discernible pattern. Nevertheless, depending on the definition of concordance, effect estimates of first trials were concordant with pooled subsequent studies in at least 33% but up to 50% of comparisons. The pooled magnitude of change as bodies of evidence advanced from single trials to high-quality bodies of evidence was 0.16 SMD [95% confidence interval (CI): 0.12, 0.21]. In 80% of comparisons, the difference in effect estimates was smaller than 0.5 SMDs. In first trials with large treatment effects (>0.5 SMD), however, estimates of effect substantially changed as new evidence accrued (mean change 0.68 SMD; 95% CI: 0.50, 0.86). Results of first trials often change, but the magnitude of change, on average, is small. Exceptions are first trials that present large treatment effects, which often dissipate as new evidence accrues. Copyright © 2016 Elsevier Inc. All rights reserved.
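    Lin's concordance correlation coefficient, used above to compare first-trial and pooled estimates, has a simple closed form; a minimal sketch (sample statistics, no frequency weighting or confidence interval) is given below with made-up effect estimates.

    ```python
    import numpy as np

    def lins_ccc(x, y):
        """Lin's concordance correlation coefficient between two sets of effect estimates."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()                  # population variances
        cov = np.mean((x - mx) * (y - my))
        return 2.0 * cov / (vx + vy + (mx - my) ** 2)

    # Example: effect estimates (SMDs) from first trials vs. pooled bodies of evidence
    first = [0.42, -0.10, 0.65, 0.08, 0.30]
    pooled = [0.35, -0.05, 0.40, 0.10, 0.28]
    print(lins_ccc(first, pooled))
    ```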

  5. Reynolds-averaged Navier-Stokes investigation of high-lift low-pressure turbine blade aerodynamics at low Reynolds number

    Science.gov (United States)

    Arko, Bryan M.

    Design trends for the low-pressure turbine (LPT) section of modern gas turbine engines include increasing the loading per airfoil, which promises a decreased airfoil count resulting in reduced manufacturing and operating costs. Accurate Reynolds-Averaged Navier-Stokes predictions of separated boundary layers and transition to turbulence are needed, as the lack of an economical and reliable computational model has contributed to this high-lift concept not reaching its full potential. Presented here for what is believed to be the first time applied to low-Re computations of high-lift linear cascade simulations is the Abe-Kondoh-Nagano (AKN) linear low-Re two-equation turbulence model which utilizes the Kolmogorov velocity scale for improved predictions of separated boundary layers. A second turbulence model investigated is the Kato-Launder modified version of the AKN, denoted MPAKN, which damps turbulent production in highly strained regions of flow. Fully Laminar solutions have also been calculated in an effort to elucidate the transitional quality of the turbulence model solutions. Time accurate simulations of three modern high-lift blades at a Reynolds number of 25,000 are compared to experimental data and higher-order computations in order to judge the accuracy of the results, where it is shown that the RANS simulations with highly refined grids can produce both quantitatively and qualitatively similar separation behavior as found in experiments. In particular, the MPAKN model is shown to predict the correct boundary layer behavior for all three blades, and evidence of transition is found through inspection of the components of the Reynolds Stress Tensor, spectral analysis, and the turbulence production parameter. Unfortunately, definitively stating that transition is occurring becomes an uncertain task, as similar evidence of the transition process is found in the Laminar predictions. This reveals that boundary layer reattachment may be a result of laminar

  6. Building a Grad Nation: Progress and Challenge in Ending the High School Dropout Epidemic. Annual Update, 2013

    Science.gov (United States)

    Balfanz, Robert; Bridgeland, John M.; Bruce, Mary; Fox, Joanna Hornig

    2013-01-01

    This fourth annual update on America's high school dropout crisis shows that for the first time the nation is on track to meet the goal of a 90 percent high school graduation rate by the Class of 2020--if the pace of improvement from 2006 to 2010 is sustained over the next 10 years. The greatest gains have occurred for the students of color and…

  7. Cell-Averaged discretization for incompressible Navier-Stokes with embedded boundaries and locally refined Cartesian meshes: a high-order finite volume approach

    Science.gov (United States)

    Bhalla, Amneet Pal Singh; Johansen, Hans; Graves, Dan; Martin, Dan; Colella, Phillip; Applied Numerical Algorithms Group Team

    2017-11-01

    We present a consistent cell-averaged discretization for incompressible Navier-Stokes equations on complex domains using embedded boundaries. The embedded boundary is allowed to freely cut the locally-refined background Cartesian grid. Implicit-function representation is used for the embedded boundary, which allows us to convert the required geometric moments in the Taylor series expansion (up to arbitrary order) of polynomials into an algebraic problem in lower dimensions. The computed geometric moments are then used to construct stencils for various operators like the Laplacian, divergence, gradient, etc., by solving a least-squares system locally. We also construct the inter-level data-transfer operators like prolongation and restriction for multigrid solvers using the same least-squares system approach. This allows us to retain a high order of accuracy near coarse-fine interfaces and near embedded boundaries. Canonical problems like Taylor-Green vortex flow and flow past bluff bodies will be presented to demonstrate the proposed method. U.S. Department of Energy, Office of Science, ASCR (Award Number DE-AC02-05CH11231).
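    The stencil-construction idea, fitting a local polynomial to neighbouring data by least squares and reading the operator off the fitted coefficients, can be illustrated in stripped-down form. The sketch below uses point values on a regular 2-D neighbourhood rather than the cell-averaged moments and cut cells of the actual method, so it is an analogy, not the scheme itself.

    ```python
    import numpy as np

    def laplacian_stencil(offsets, h=1.0):
        """Least-squares Laplacian stencil weights from a set of neighbour offsets.

        offsets : list of (i, j) integer offsets (including (0, 0)) of available cells.
        Fits u(x, y) ~ c0 + c1 x + c2 y + c3 x^2 + c4 x y + c5 y^2 and returns weights w
        such that sum_k w_k u_k approximates (u_xx + u_yy) at the origin.
        """
        pts = np.asarray(offsets, float) * h
        x, y = pts[:, 0], pts[:, 1]
        # one row per neighbour, one column per monomial of the quadratic basis
        V = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
        Vpinv = np.linalg.pinv(V)              # least-squares map: data -> polynomial coefficients
        # Laplacian of the fitted polynomial at the origin is 2*c3 + 2*c5
        return 2.0 * (Vpinv[3] + Vpinv[5])

    # Usage: the 5-point cross plus corner cells on a unit grid
    offsets = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]
    print(laplacian_stencil(offsets))
    ```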

  8. Variation of High-Intensity Therapeutic Ultrasound (HITU) Pressure Field Characterization: Effects of Hydrophone Choice, Nonlinearity, Spatial Averaging and Complex Deconvolution.

    Science.gov (United States)

    Liu, Yunbo; Wear, Keith A; Harris, Gerald R

    2017-10-01

    Reliable acoustic characterization is fundamental for patient safety and clinical efficacy during high-intensity therapeutic ultrasound (HITU) treatment. Technical challenges, such as measurement variation and signal analysis, still exist for HITU exposimetry using ultrasound hydrophones. In this work, four hydrophones were compared for pressure measurement: a robust needle hydrophone, a small polyvinylidene fluoride capsule hydrophone and two fiberoptic hydrophones. The focal waveform and beam distribution of a single-element HITU transducer (1.05 MHz and 3.3 MHz) were evaluated. Complex deconvolution between the hydrophone voltage signal and frequency-dependent complex sensitivity was performed to obtain pressure waveforms. Compressional pressure (p+), rarefactional pressure (p-) and focal beam distribution were compared up to 10.6/-6.0 MPa (p+/p-) (1.05 MHz) and 20.65/-7.20 MPa (3.3 MHz). The effects of spatial averaging, local non-linear distortion, complex deconvolution and hydrophone damage thresholds were investigated. This study showed a variation of no better than 10%-15% among hydrophones during HITU pressure characterization. Published by Elsevier Inc.
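    The complex deconvolution step mentioned above, dividing the hydrophone voltage spectrum by the frequency-dependent complex sensitivity and transforming back, can be sketched as follows; the band limits and the interpolation of the calibrated sensitivity are assumptions.

    ```python
    import numpy as np

    def deconvolve_pressure(voltage, fs, sens_freq, sens_complex, f_lo=0.5e6, f_hi=40e6):
        """Pressure waveform from a hydrophone voltage trace by complex deconvolution.

        voltage      : sampled voltage waveform (V)
        fs           : sampling rate (Hz)
        sens_freq    : frequencies (Hz) at which the complex sensitivity is calibrated
        sens_complex : complex sensitivity M(f) in V/Pa at those frequencies
        The division is restricted to a band where the calibration is trusted (assumed limits).
        """
        n = len(voltage)
        spectrum = np.fft.rfft(voltage)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        # interpolate magnitude and unwrapped phase of the sensitivity onto the FFT grid
        mag = np.interp(freqs, sens_freq, np.abs(sens_complex))
        phase = np.interp(freqs, sens_freq, np.unwrap(np.angle(sens_complex)))
        sens = mag * np.exp(1j * phase)
        band = (freqs >= f_lo) & (freqs <= f_hi) & (mag > 0)
        pressure_spec = np.zeros_like(spectrum)
        pressure_spec[band] = spectrum[band] / sens[band]
        return np.fft.irfft(pressure_spec, n)
    ```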

  9. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold; the article shows that they can be derived as approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
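
    A minimal numerical sketch (not the author's implementation) contrasts the quaternion barycenter with an intrinsic, Riemannian-style mean computed by iterating in the tangent space of the rotation group; the exp/log maps for rotation matrices are written out explicitly, and the test rotations are arbitrary choices.

        # Hypothetical sketch: barycenter-of-quaternions average vs. an iterative
        # geodesic (Karcher) mean on SO(3). Test rotations are about the z axis only,
        # so the geodesic mean angle is simply the arithmetic mean of the angles.
        import numpy as np

        def quat_to_mat(q):
            w, x, y, z = q / np.linalg.norm(q)
            return np.array([[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
                             [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
                             [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

        def log_so3(R):                      # rotation matrix -> axis-angle vector
            c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
            th = np.arccos(c)
            if th < 1e-12:
                return np.zeros(3)
            v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
            return th * v / (2.0 * np.sin(th))

        def exp_so3(w):                      # axis-angle vector -> rotation matrix
            th = np.linalg.norm(w)
            if th < 1e-12:
                return np.eye(3)
            k = w / th
            K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
            return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * K @ K

        angles = np.deg2rad([10.0, 20.0, 90.0])
        quats = np.array([[np.cos(a / 2), 0.0, 0.0, np.sin(a / 2)] for a in angles])

        # (a) barycenter of quaternions: renormalized arithmetic mean
        R_bar = quat_to_mat(quats.mean(axis=0))

        # (b) geodesic mean: iterate R <- R * exp(mean_i log(R^T R_i))
        Rs = [quat_to_mat(q) for q in quats]
        R = Rs[0]
        for _ in range(20):
            delta = np.mean([log_so3(R.T @ Ri) for Ri in Rs], axis=0)
            R = R @ exp_so3(delta)

        angle = lambda R: np.degrees(np.arccos(np.clip((np.trace(R) - 1) / 2, -1, 1)))
        print("quaternion barycenter angle [deg]:", angle(R_bar))   # close to, but not exactly, 40
        print("geodesic mean angle [deg]:       ", angle(R))        # ~40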

  10. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Full Text Available Calculations of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, are prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal to noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal to noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found on which kind of averaging is superior; instead of suggesting simple recipes, we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
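
    The core of the bias is Jensen's inequality: the exponential of the mean logarithm is the geometric mean, which falls below the arithmetic mean whenever the abundances vary. A hedged toy example (synthetic lognormal 'retrievals', not the paper's simulator) makes the size of the effect concrete.

        # Toy illustration of linear vs. logarithmic averaging of mixing ratios.
        # Synthetic lognormal "retrievals"; all numbers are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        true_mean = 100.0                                 # ppbv (assumed)
        sigma_log = 0.5                                   # local natural variability in log space
        x = true_mean * rng.lognormal(mean=-0.5 * sigma_log**2, sigma=sigma_log, size=100000)

        linear_avg = x.mean()                             # average the abundances directly
        log_avg = np.exp(np.log(x).mean())                # exponentiate the averaged log-retrievals

        print(f"linear averaging:      {linear_avg:7.2f}")   # ~100, unbiased here
        print(f"logarithmic averaging: {log_avg:7.2f}")      # low by ~exp(-sigma_log^2/2), i.e. ~12%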

  11. Genome-wide investigation reveals high evolutionary rates in annual model plants.

    Science.gov (United States)

    Yue, Jia-Xing; Li, Jinpeng; Wang, Dan; Araki, Hitoshi; Tian, Dacheng; Yang, Sihai

    2010-11-09

    Rates of molecular evolution vary widely among species. While significant deviations from the molecular clock have been found in many taxa, effects of life histories on molecular evolution are not fully understood. In plants, annual/perennial life history traits have long been suspected to influence the evolutionary rates at the molecular level. To date, however, the number of genes investigated on this subject is limited and the conclusions are mixed. To evaluate the possible heterogeneity in evolutionary rates between annual and perennial plants at the genomic level, we investigated 85 nuclear housekeeping genes, 10 non-housekeeping families, and 34 chloroplast genes using the genomic data from model plants including Arabidopsis thaliana and Medicago truncatula for annuals and grape (Vitis vinifera) and poplar (Populus trichocarpa) for perennials. According to the cross-comparisons among the four species, 74-82% of the nuclear genes and 71-97% of the chloroplast genes suggested higher rates of molecular evolution in the two annuals than those in the two perennials. The significant heterogeneity in evolutionary rate between annuals and perennials was consistently found both in nonsynonymous sites and synonymous sites. While a linear correlation of evolutionary rates in orthologous genes between species was observed in nonsynonymous sites, the correlation was weak or invisible in synonymous sites. This tendency was clearer in nuclear genes than in chloroplast genes, in which the overall evolutionary rate was small. The slope of the regression line was consistently lower than unity, further confirming the higher evolutionary rate in annuals at the genomic level. The higher evolutionary rate in annuals than in perennials appears to be a universal phenomenon both in nuclear and chloroplast genomes in the four dicot model plants we investigated. Therefore, such heterogeneity in evolutionary rate should result from factors that have genome-wide influence, most likely those

  12. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  13. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  14. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  15. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Role of Positive Definite Matrices. • Diffusion Tensor Imaging: 3 × 3 pd matrices model water flow at each voxel of a brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine Learning: n × n pd matrices occur as kernel matrices. (Tanvi Jain, Averaging operations on matrices.)
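
    A standard averaging operation in this setting is the geometric mean of two positive definite matrices, A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2). The short sketch below (generic, not taken from the lecture) computes it and checks two of its defining properties.

        # Hedged sketch: geometric mean of two symmetric positive definite matrices.
        import numpy as np
        from scipy.linalg import sqrtm, inv

        A = np.array([[2.0, 0.5], [0.5, 1.0]])
        B = np.array([[1.0, 0.2], [0.2, 3.0]])

        A_h = np.real(sqrtm(A))                           # A^(1/2)
        A_hi = inv(A_h)                                   # A^(-1/2)
        G = A_h @ np.real(sqrtm(A_hi @ B @ A_hi)) @ A_h   # A # B

        # Sanity checks: G is symmetric and satisfies the Riccati property G A^{-1} G = B
        print(np.allclose(G, G.T, atol=1e-10))
        print(np.allclose(G @ inv(A) @ G, B, atol=1e-8))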

  16. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
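
    For intuition, the average-energy of an ultimately periodic play can be computed directly: accumulate the edge weights and take the long-run average of the resulting energy levels, which converges to the mean energy level over one cycle. A hedged sketch with made-up weights (not a game from the paper):

        # Toy computation of the average-energy of a lasso-shaped play (prefix + cycle).
        # Weights are illustrative only.
        import numpy as np

        prefix = [2, -1]        # weights seen before entering the cycle
        cycle = [3, -1, -2]     # cycle weights (sum 0, so the energy level stays bounded)

        def average_energy(prefix, cycle, repeats=10000):
            play = prefix + cycle * repeats
            levels = np.cumsum(play)     # accumulated energy after each step
            return levels.mean()         # long-run average of the energy levels

        # With these weights the levels inside the cycle are 4, 3, 1, so the
        # average-energy converges to (4 + 3 + 1) / 3 = 8/3 ~ 2.67.
        print(average_energy(prefix, cycle))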

  17. Mixing processes in high-level waste tanks. 1998 annual progress report

    International Nuclear Information System (INIS)

    Peterson, P.F.

    1998-01-01

    Flammable gases can be generated in DOE high-level waste tanks, including radiolytic hydrogen, and during cesium precipitation from salt solutions, benzene. Under normal operating conditions the potential for deflagration or detonation from these gases is precluded by purging and ventilation systems, which remove the flammable gases and maintain a well-mixed condition in the tanks. Upon failure of the ventilation system, due to seismic or other events, however, it has proven more difficult to make strong arguments for well-mixed conditions, due to the potential for density-induced stratification which can potentially sequester fuel or oxidizer at concentrations significantly higher than average. This has complicated the task of defining the safety basis for tank operation. Waste-tank mixing processes have considerable overlap with similar large-enclosure mixing processes that occur in enclosure fires and nuclear reactor containments. Significant differences also exist, so that modeling techniques that have been developed previously can not be directly applied to waste tanks. In particular, mixing of air introduced through tank roof penetrations by buoyancy and pressure driven exchange flows, mixed convection induced by an injected high-velocity purge jet interacting with buoyancy driven flow, and onset and breakdown of stable stratification under the influence of an injected jet have not been adequately studied but are important in assessing the potential for accumulation of high-concentration pockets of fuel and oxygen. Treating these phenomena requires a combination of experiments and the development of new, more general computational models than those that have been developed for enclosure fires. U.C. Berkeley is now completing the second year of its three-year project that started in September, 1996. Excellent progress has been made in several important areas related to waste-tank ventilation and mixing processes.'

  18. Ultra-short pulse delivery at high average power with low-loss hollow core fibers coupled to TRUMPF's TruMicro laser platforms for industrial applications

    Science.gov (United States)

    Baumbach, S.; Pricking, S.; Overbuschmann, J.; Nutsch, S.; Kleinbauer, J.; Gebs, R.; Tan, C.; Scelle, R.; Kahmann, M.; Budnicki, A.; Sutter, D. H.; Killi, A.

    2017-02-01

    Multi-megawatt ultrafast laser systems at micrometer wavelength are commonly used for material processing applications, including ablation, cutting and drilling of various materials or cleaving of display glass with excellent quality. There is a need for flexible and efficient beam guidance, avoiding free space propagation of light between the laser head and the processing unit. Solid core step index fibers are only feasible for delivering laser pulses with peak powers in the kW-regime due to the optical damage threshold in bulk silica. In contrast, hollow core fibers are capable of guiding ultra-short laser pulses with orders of magnitude higher peak powers. This is possible since a micro-structured cladding confines the light within the hollow core and therefore minimizes the spatial overlap between silica and the electro-magnetic field. We report on recent results of single-mode ultra-short pulse delivery over several meters in a lowloss hollow core fiber packaged with industrial connectors. TRUMPF's ultrafast TruMicro laser platforms equipped with advanced temperature control and precisely engineered opto-mechanical components provide excellent position and pointing stability. They are thus perfectly suited for passive coupling of ultra-short laser pulses into hollow core fibers. Neither active beam launching components nor beam trackers are necessary for a reliable beam delivery in a space and cost saving packaging. Long term tests with weeks of stable operation, excellent beam quality and an overall transmission efficiency of above 85 percent even at high average power confirm the reliability for industrial applications.

  19. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold; the paper shows that they can be derived as approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.

  20. Training-induced annual changes in red blood cell profile in highly-trained endurance and speed-power athletes.

    Science.gov (United States)

    Ciekot-Sołtysiak, Monika; Kusy, Krzysztof; Podgórski, Tomasz; Zieliński, Jacek

    2017-10-24

    An extensive body of literature exists on the effects of training on haematological parameters, but the previous studies have not reported how hematological parameters respond to changes in training loads within consecutive phases of the training cycle in highly-trained athletes in extremely different sport disciplines. The aim of this study was to identify changes in red blood cell (RBC) profile in response to training loads in consecutive phases of the annual training cycle in highly-trained sprinters (8 men, aged 24 ± 3 years) and triathletes (6 men, aged 24 ± 4 years) who competed at the national and international level. Maximal oxygen uptake (VO2max), RBC, haemoglobin (Hb), haematocrit (Ht), mean corpuscular volume (MCV), mean corpuscular haemoglobin (MCH), mean corpuscular haemoglobin concentration (MCHC) and RBC distribution width (RDW) were determined in four characteristic training phases (transition, general subphase of the preparation phase, specific subphase of the preparation phase and competition phase). Our main findings are that (1) Hb, MCH and MCHC in triathletes and MCV in both triathletes and sprinters changed significantly over the annual training cycle, (2) triathletes had significantly higher values than sprinters only in case of MCH and MCHC after the transition and general preparation phases but not after the competition phase when MCH and MCHC were higher in sprinters and (3) in triathletes, Hb, MCH and MCHC substantially decreased after the competition phase, which was not observed in sprinters. The athletes maintained normal ranges of all haematological parameters in four characteristic training phases. Although highly-trained sprinters and triathletes do not significantly differ in their levels of most haematological parameters, these groups are characterized by different patterns of changes during the annual training cycle. Our results suggest that when interpreting the values of haematological parameters in speed-power and endurance

  1. High average daily intake of PCDD/Fs and serum levels in residents living near a deserted factory producing pentachlorophenol (PCP) in Taiwan: influence of contaminated fish consumption

    Energy Technology Data Exchange (ETDEWEB)

    Lee Ching-Chang; Lin Wu-Ting; Liao Po-Chi; Su Huey-Jen [Dept. of Environmental and Occupational Health/Research Center of Environmental Trace Toxic substances, Medical Coll., National Cheng Kung Univ., Tainan (Taiwan); Chen Hsiu-Lin [Inst. of Basic Medical Sciences, Medical Coll., National Cheng Kung Univ., Tainan (Taiwan)

    2004-09-15

    Many reports have suggested that PCDD/Fs (polychlorinated dibenzo-p-dioxins and dibenzofurans) contribute to immune deficiency, liver damage, human carcinogenesis, and neuromotor maturation in children. Therefore, beginning in 1999, the Taiwan Environmental Protection Agency (EPA) conducted a survey to determine serum levels of PCDD/Fs in the general populations living around 19 incinerators in Taiwan. Relatively high average serum PCDD/F levels were unexpectedly found in Tainan city, a less industrialized area in southwestern Taiwan, than in other urban areas. We therefore reviewed the usage history of the land and found that a factory situated between Hsien-Gong Li and Lu-Erh Li, two administrative units of Tainan city, had been manufacturing pentachlorophenol (PCP) between 1967 and 1982. PCDD/Fs are formed as byproducts in the PCP manufacturing process. Exposure to PCP and its derivatives via the food chain is the most significant intake route of PCDD/Fs in consumers in the European Union (EU). In Japan, in addition to combustion processes, PCP and chlornitrofen (CNP) have also been identified as the major sources of PCDD/Fs in Tokyo Bay7. A preliminary investigation showed that the soil in the PCP factory and sediments in the sea reservoir (13 hectares) near the deserted factory were seriously contaminated with PCDD/Fs (260-184,000 and 20-6220 pg I-TEQ/g, respectively), levels higher than those in other countries. Therefore, the aim of this study was to compare the PCDD/F levels of fish meat in the sea reservoir and the serum in inhabitants living in the vicinity of the closed PCP plant and other nearby areas. The data from human and other biota samples might clarify the transmission pathway of the PCDD/F contaminants from the PCP factory to local residents, provide information about the exposure status of those living in the vicinity of the deserted PCP factory, and also lead to useful suggestions for controlling PCDD/F accumulation in those living near such

  2. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve which comprises a massive bulk of 'middle-class' values, and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.

  3. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables

  4. Langmuir probe measurements in a time-fluctuating-highly ionized non-equilibrium cutting arc: Analysis of the electron retarding part of the time-averaged current-voltage characteristic of the probe

    International Nuclear Information System (INIS)

    Prevosto, L.; Mancinelli, B.; Kelly, H.

    2013-01-01

    This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating-highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data due to small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well defined groups allowing defining two quite different averaged electron temperature values. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average included not only a time-average over the time fluctuations but also a spatial-average along the probe collecting length. The fitting of the high-current region of the characteristic using such electron temperature value together with the corrections given by the fluctuation analysis showed a relevant departure of local thermal equilibrium in the arc core
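
    The standard analysis behind such measurements, fitting the exponential electron-retarding part of the current-voltage characteristic, can be sketched generically: for a Maxwellian electron population, ln I_e is linear in the probe voltage and the inverse slope gives the electron temperature in eV. The toy fit below uses synthetic data, not the authors' measurements.

        # Generic sketch: electron temperature from the retarding region of a Langmuir
        # probe characteristic, I_e = I_e0 * exp((V - V_p) / T_e) for V below the plasma
        # potential V_p, with T_e expressed in eV. Synthetic data, illustrative only.
        import numpy as np

        Te_true = 0.98                     # eV, the value quoted in the abstract
        V_p = 5.0                          # plasma potential (assumed)
        V = np.linspace(-2.0, 4.0, 40)     # voltages in the retarding region
        rng = np.random.default_rng(1)
        I_e = 1e-3 * np.exp((V - V_p) / Te_true) * (1 + 0.03 * rng.standard_normal(V.size))

        slope, intercept = np.polyfit(V, np.log(I_e), 1)
        print("fitted T_e =", 1.0 / slope, "eV")    # ~0.98 eV (~11400 K)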

  5. Langmuir probe measurements in a time-fluctuating-highly ionized non-equilibrium cutting arc: Analysis of the electron retarding part of the time-averaged current-voltage characteristic of the probe

    Energy Technology Data Exchange (ETDEWEB)

    Prevosto, L.; Mancinelli, B. [Grupo de Descargas Eléctricas, Departamento Ing. Electromecánica, Facultad Regional Venado Tuerto (UTN), Laprida 651, Venado Tuerto (2600) Santa Fe (Argentina); Kelly, H. [Grupo de Descargas Eléctricas, Departamento Ing. Electromecánica, Facultad Regional Venado Tuerto (UTN), Laprida 651, Venado Tuerto (2600) Santa Fe (Argentina); Instituto de Física del Plasma (CONICET), Departamento de Física, Facultad de Ciencias Exactas y Naturales (UBA) Ciudad Universitaria Pab. I, 1428 Buenos Aires (Argentina)

    2013-12-15

    This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating-highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data due to small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well defined groups allowing defining two quite different averaged electron temperature values. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average included not only a time-average over the time fluctuations but also a spatial-average along the probe collecting length. The fitting of the high-current region of the characteristic using such electron temperature value together with the corrections given by the fluctuation analysis showed a relevant departure of local thermal equilibrium in the arc core.

  6. Langmuir probe measurements in a time-fluctuating-highly ionized non-equilibrium cutting arc: analysis of the electron retarding part of the time-averaged current-voltage characteristic of the probe.

    Science.gov (United States)

    Prevosto, L; Kelly, H; Mancinelli, B

    2013-12-01

    This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating-highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data due to small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well defined groups allowing defining two quite different averaged electron temperature values. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average included not only a time-average over the time fluctuations but also a spatial-average along the probe collecting length. The fitting of the high-current region of the characteristic using such electron temperature value together with the corrections given by the fluctuation analysis showed a relevant departure of local thermal equilibrium in the arc core.

  7. Average nuclear surface properties

    International Nuclear Information System (INIS)

    Groote, H. von.

    1979-01-01

    The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended also for the case that there is a neutron gas instead of vacuum on the one side of the plane surface. The calculations were performed with the Thomas-Fermi Model of Syler and Blanchard. The parameters of the interaction of this model were determined by a least squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetry of the nucleon-matter ranging from symmetry beyond the neutron-drip line until the system no longer can maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model which then was fitted to experimental masses. (orig.)

  8. ZPR-3 Assembly 6F: A spherical assembly of highly enriched uranium, depleted uranium, aluminum and steel with an average 235U enrichment of 47 atom %.

    Energy Technology Data Exchange (ETDEWEB)

    Lell, R. M.; McKnight, R. D; Schaefer, R. W.; Nuclear Engineering Division

    2010-09-30

    Assembly 6F (ZPR-3/6F), the final phase of the Assembly 6 program, simulated a spherical core with a thick depleted uranium reflector. ZPR-3/6F was designed as a fast reactor physics benchmark experiment with an average core {sup 235}U enrichment of approximately 47 at.%. Approximately 81.4% of the total fissions in this assembly occur above 100 keV, approximately 18.6% occur below 100 keV, and essentially none below 0.625 eV - thus the classification as a 'fast' assembly. This assembly is Fast Reactor Benchmark No. 7 in the Cross Section Evaluation Working Group (CSEWG) Benchmark Specifications and has historically been used as a data validation benchmark assembly. Loading of ZPR-3/6F began in late December 1956, and the experimental measurements were performed in January 1957. The core consisted of highly enriched uranium (HEU) plates, depleted uranium plates, perforated aluminum plates and stainless steel plates loaded into aluminum drawers, which were inserted into the central square stainless steel tubes of a 31 x 31 matrix on a split table machine. The core unit cell consisted of three columns of 0.125 in.-wide (3.175 mm) HEU plates, three columns of 0.125 in.-wide depleted uranium plates, nine columns of 0.125 in.-wide perforated aluminum plates and one column of stainless steel plates. The maximum length of each column of core material in a drawer was 9 in. (228.6 mm). Because of the goal to produce an approximately spherical core, core fuel and diluent column lengths generally varied between adjacent drawers and frequently within an individual drawer. The axial reflector consisted of depleted uranium plates and blocks loaded in the available space in the front (core) drawers, with the remainder loaded into back drawers behind the front drawers. The radial reflector consisted of blocks of depleted uranium loaded directly into the matrix tubes. The assembly geometry approximated a reflected sphere as closely as the square matrix tubes, the drawers and the

  9. Effect of land area on average annual suburban water demand

    African Journals Online (AJOL)

    2013-10-02

    Oct 2, 2013 ... to model and synthesise all decision-making information pertaining to ... In that process the GIS environment was used to delineate suburbs by ..... It should be recalled that the effects of climate and climate change on results.

  10. Effect of land area on average annual suburban water demand ...

    African Journals Online (AJOL)

    AADD) in South Africa are based on residential plot size. This paper presents a novel, robust method for estimating suburban water demand as a function of the suburb area. Seventy suburbs, identified as being predominantly residential, were ...

  11. ZPR-3 Assembly 11: A cylindrical assembly of highly enriched uranium and depleted uranium with an average 235U enrichment of 12 atom % and a depleted uranium reflector

    International Nuclear Information System (INIS)

    Lell, R.M.; McKnight, R.D.; Tsiboulia, A.; Rozhikhin, Y.

    2010-01-01

    Over a period of 30 years, more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9 and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited for nuclear data validation and to form the basis for criticality safety benchmarks. A number of the Argonne ZPR/ZPPR critical assemblies have been evaluated as ICSBEP and IRPhEP benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was 235 U or 239 Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. ZPR-3 Assembly 11 (ZPR-3/11) was designed as a fast reactor physics benchmark experiment with an average core 235 U enrichment of approximately 12 at.% and a depleted uranium reflector. Approximately 79.7% of the total fissions in this assembly occur above 100 keV, approximately 20.3% occur below 100 keV, and essentially none below 0.625 eV - thus the classification as a 'fast' assembly. This assembly is Fast Reactor Benchmark No. 8 in the Cross Section Evaluation Working Group (CSEWG) Benchmark

  12. High Temperature Materials Laboratory Fourteenth Annual Report: October 2000 through September 2001

    Energy Technology Data Exchange (ETDEWEB)

    Pasto, A.E.

    2002-05-16

    The HTML User Program continued to work with industrial, academic, and governmental users this year, accepting 92 new projects and developing 48 new user agreements. Table 1 presents the breakdown of these statistics. Figure 1 depicts the continued growth in user agreements and user projects. You will note that the total number of HTML proposals has now exceeded 1000. Also, the large number of new agreements bodes well for the future. At the end of the report, we present a list of proposals to the HTML and a list of agreements between HTML and universities and industries, broken down by state. Program highlights this year included several outstanding user projects (some of which are highlighted in later sections), the annual meeting of the HTML Programs Senior Advisory Committee, and approval by ORNL for the construction of a building to house our new aberration-corrected electron microscope (ACEM) and several other sensitive electron and optical instruments.

  13. NRC high-level radioactive waste program. Annual progress report: Fiscal Year 1996

    International Nuclear Information System (INIS)

    Sagar, B.

    1997-01-01

    This annual status report for fiscal year 1996 documents technical work performed on ten key technical issues (KTI) that are most important to performance of the proposed geologic repository at Yucca Mountain. This report has been prepared jointly by the staff of the Nuclear Regulatory Commission (NRC) Division of Waste Management and the Center for Nuclear Waste Regulatory Analyses. The programmatic aspects of restructuring the NRC repository program in terms of KTIs are discussed and a brief summary of work accomplished is provided. The other ten chapters provide a comprehensive summary of the work in each KTI. Discussions on the probability of future volcanic activity and its consequences, impacts of structural deformation and seismicity, the nature of the near-field environment and its effects on container life and source term, flow and transport including effects of thermal loading, aspects of repository design, estimates of system performance, and activities related to the U.S. Environmental Protection Agency standard are provided.

  14. Annual CO2 budget and seasonal CO2 exchange signals at a High Arctic permafrost site on Spitsbergen, Svalbard archipelago

    Science.gov (United States)

    Lüers, J.; Westermann, S.; Piel, K.; Boike, J.

    2014-01-01

    The annual variability of CO2 exchange in most ecosystems is primarily driven by the activities of plants and soil microorganisms. However, little is known about the carbon balance and its controlling factors outside the growing season in arctic regions dominated by soil freeze/thaw-processes, long-lasting snow cover, and several months of darkness. This study presents a complete annual cycle of the CO2 net ecosystem exchange (NEE) dynamics for a High Arctic tundra area on the west coast of Svalbard based on eddy-covariance flux measurements. The annual cumulative CO2 budget is close to zero grams carbon per square meter per year, but shows a very strong seasonal variability. Four major CO2 exchange seasons have been identified. (1) During summer (ground snow-free), the CO2 exchange occurs mainly as a result of biological activity, with a predominance of strong CO2 assimilation by the ecosystem. (2) The autumn (ground snow-free or partly snow-covered) is dominated by CO2 respiration as a result of biological activity. (3) In winter and spring (ground snow-covered), low but persistent CO2 release occurs, overlain by considerable CO2 exchange events in both directions associated with changes of air masses and air and atmospheric CO2 pressure. (4) During the snow melt season (a pattern of snow-free and snow-covered areas), both meteorological and biological forcing result in a visible carbon uptake by the High Arctic ecosystem. Data related to this article are archived under: http://doi.pangaea.de/10.1594/PANGAEA.809507.

  15. Climate drives inter-annual variability in probability of high severity fire occurrence in the western United States

    Science.gov (United States)

    Keyser, Alisa; Westerling, Anthony LeRoy

    2017-05-01

    A long history of fire suppression in the western United States has significantly changed forest structure and ecological function, leading to increasingly uncharacteristic fires in terms of size and severity. Prior analyses of fire severity in California forests showed that time since last fire and fire weather conditions predicted fire severity very well, while a larger regional analysis showed that topography and climate were important predictors of high severity fire. There has not yet been a large-scale study that incorporates topography, vegetation and fire-year climate to determine regional scale high severity fire occurrence. We developed models to predict the probability of high severity fire occurrence for the western US. We predict high severity fire occurrence with some accuracy, and identify the relative importance of predictor classes in determining the probability of high severity fire. The inclusion of both vegetation and fire-year climate predictors was critical for model skill in identifying fires with high fractional fire severity. The inclusion of fire-year climate variables allows this model to forecast inter-annual variability in areas at future risk of high severity fire, beyond what slower-changing fuel conditions alone can accomplish. This allows for more targeted land management, including resource allocation for fuels reduction treatments to decrease the risk of high severity fire.

  16. Building a Grad Nation: Progress and Challenge in Raising High School Graduation Rates. Annual Update 2016

    Science.gov (United States)

    DePaoli, Jennifer L.; Balfanz, Robert; Bridgeland, John

    2016-01-01

    The nation has achieved an 82.3 percent high school graduation rate--a record high. Graduation rates rose for all student subgroups, and the number of low-graduation-rate high schools and students enrolled in them dropped again, indicating that progress has had far-reaching benefits for all students. This report is the first to analyze 2014…

  17. Prevalence of sleep duration on an average school night among 4 nationally representative successive samples of American high school students, 2007-2013.

    Science.gov (United States)

    Basch, Charles E; Basch, Corey H; Ruggles, Kelly V; Rajan, Sonali

    2014-12-11

    Consistency, quality, and duration of sleep are important determinants of health. We describe sleep patterns among demographically defined subgroups from the Youth Risk Behavior Surveillance System reported in 4 successive biennial representative samples of American high school students (2007 to 2013). Across the 4 waves of data collection, 6.2% to 7.7% of females and 8.0% to 9.4% of males reported obtaining 9 or more hours of sleep. Insufficient duration of sleep is pervasive among American high school students. Despite substantive public health implications, intervention research on this topic has received little attention.

  18. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
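
    The construction described here (an economic misery index, inflation plus unemployment, smoothed with an 11-year trailing moving average and correlated with a literary index) can be sketched with synthetic series; the real inputs are the paper's book-derived index and official U.S. statistics.

        # Hedged sketch of the paper's construction with made-up data: misery index =
        # inflation + unemployment, 11-year moving average, Pearson r with a literary index.
        import numpy as np

        rng = np.random.default_rng(42)
        years = np.arange(1930, 2001)
        inflation = 3.0 + 2.0 * rng.standard_normal(years.size)
        unemployment = 6.0 + 2.0 * rng.standard_normal(years.size)
        misery = inflation + unemployment

        window = 11                                   # years, the reported best-fit window
        kernel = np.ones(window) / window
        misery_avg = np.convolve(misery, kernel, mode="valid")   # decade-scale moving average

        # Synthetic "literary misery" that loosely tracks the smoothed economic index
        literary = misery_avg + rng.standard_normal(misery_avg.size)

        r = np.corrcoef(misery_avg, literary)[0, 1]
        print(f"Pearson r between smoothed misery and literary index: {r:.2f}")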

  19. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  20. Teachers' Adherence to Highly Effective Instructional Practices as Related to Graduation Rates in Average-Need School Districts in New York State

    Science.gov (United States)

    Yannucci, Michael J.

    2014-01-01

    The purpose of this study was to investigate school administrators' perceptions of teachers' adherence to the highly effective critical attributes of the four domains of Charlotte Danielson's "Framework for Teaching" (Planning and Preparation, The Classroom Environment, Instruction, and Professional Responsibilities) in kindergarten…

  1. The Relationship of High School Type to Persistence and Grade Point Average of First-Year Students at Faith-Based Liberal Arts Colleges

    Science.gov (United States)

    Litscher, Kenneth Michael

    2015-01-01

    Based on previous research, there are several student characteristics that have been identified to affect academic success of first-year students in college. However, there are few studies that examine if the type of high school (public, private faith-based, private secular, or homeschool) from which a student graduates affects grade point average…

  2. High Committee for transparency and information on nuclear safety: Annual activity report (January 2010 - December 2010)

    International Nuclear Information System (INIS)

    2010-01-01

    After a description of the operation of the French 'High Committee for transparency and information on nuclear safety' (HCTISN), of its missions, its organisation and its means, the progress report presents the High Committee activity for 2010 with summaries of its report on the transparency of nuclear material and waste management, its meetings, its work groups, its visits and participations to other events

  3. Measurements of spatially resolved high resolution spectra of laser-produced plasmas. FY 83 annual report

    International Nuclear Information System (INIS)

    Feldman, U.

    1984-01-01

    A high resolution grazing incidence spectrograph, provided by the Naval Research Laboratory and the Goddard Space Flight Center, has been installed on the Omega laser facility of the Laboratory for Laser Energetics (LLE) at the University of Rochester. This 3 meter instrument, with a 1200 lines/mm grating blazed at 2° 35', has produced extremely high quality spectra in the wavelength region 10 Å to 100 Å. Spectra have been obtained from glass microballoon targets that are coated with a variety of high-Z materials. Transitions from the Na-like and Ne-like ionization stages of Fe, Ni, Cu, and Kr have been identified.

  4. Fluid hydration to prevent post-ERCP pancreatitis in average- to high-risk patients receiving prophylactic rectal NSAIDs (FLUYT trial): study protocol for a randomized controlled trial.

    Science.gov (United States)

    Smeets, Xavier J N M; da Costa, David W; Fockens, Paul; Mulder, Chris J J; Timmer, Robin; Kievit, Wietske; Zegers, Marieke; Bruno, Marco J; Besselink, Marc G H; Vleggaar, Frank P; van der Hulst, Rene W M; Poen, Alexander C; Heine, Gerbrand D N; Venneman, Niels G; Kolkman, Jeroen J; Baak, Lubbertus C; Römkens, Tessa E H; van Dijk, Sven M; Hallensleben, Nora D L; van de Vrie, Wim; Seerden, Tom C J; Tan, Adriaan C I T L; Voorburg, Annet M C J; Poley, Jan-Werner; Witteman, Ben J; Bhalla, Abha; Hadithi, Muhammed; Thijs, Willem J; Schwartz, Matthijs P; Vrolijk, Jan Maarten; Verdonk, Robert C; van Delft, Foke; Keulemans, Yolande; van Goor, Harry; Drenth, Joost P H; van Geenen, Erwin J M

    2018-04-02

    Post-endoscopic retrograde cholangiopancreatography (ERCP) pancreatitis (PEP) is the most common complication of ERCP and may run a severe course. Evidence suggests that vigorous periprocedural hydration can prevent PEP, but studies to date have significant methodological drawbacks. Importantly, evidence for its added value in patients already receiving prophylactic rectal non-steroidal anti-inflammatory drugs (NSAIDs) is lacking and the cost-effectiveness of the approach has not been investigated. We hypothesize that combination therapy of rectal NSAIDs and periprocedural hydration would significantly lower the incidence of post-ERCP pancreatitis compared to rectal NSAIDs alone in moderate- to high-risk patients undergoing ERCP. The FLUYT trial is a multicenter, parallel group, open label, superiority randomized controlled trial. A total of 826 moderate- to high-risk patients undergoing ERCP that receive prophylactic rectal NSAIDs will be randomized to a control group (no fluids or normal saline with a maximum of 1.5 mL/kg/h and 3 L/24 h) or intervention group (lactated Ringer's solution with 20 mL/kg over 60 min at start of ERCP, followed by 3 mL/kg/h for 8 h thereafter). The primary endpoint is the incidence of post-ERCP pancreatitis. Secondary endpoints include PEP severity, hydration-related complications, and cost-effectiveness. The FLUYT trial design, including hydration schedule, fluid type, and sample size, maximize its power of identifying a potential difference in post-ERCP pancreatitis incidence in patients receiving prophylactic rectal NSAIDs. EudraCT: 2015-000829-37 . Registered on 18 February 2015. 13659155 . Registered on 18 May 2015.

  5. Area-averaged evapotranspiration over a heterogeneous land surface: aggregation of multi-point EC flux measurements with a high-resolution land-cover map and footprint analysis

    Science.gov (United States)

    Xu, Feinan; Wang, Weizhen; Wang, Jiemin; Xu, Ziwei; Qi, Yuan; Wu, Yueru

    2017-08-01

    The determination of area-averaged evapotranspiration (ET) at the satellite pixel scale/model grid scale over a heterogeneous land surface plays a significant role in developing and improving the parameterization schemes of the remote sensing based ET estimation models and general hydro-meteorological models. The Heihe Watershed Allied Telemetry Experimental Research (HiWATER) flux matrix provided a unique opportunity to build an aggregation scheme for area-averaged fluxes. On the basis of the HiWATER flux matrix dataset and high-resolution land-cover map, this study focused on estimating the area-averaged ET over a heterogeneous landscape with footprint analysis and multivariate regression. The procedure is as follows. Firstly, quality control and uncertainty estimation for the data of the flux matrix, including 17 eddy-covariance (EC) sites and four groups of large-aperture scintillometers (LASs), were carefully done. Secondly, the representativeness of each EC site was quantitatively evaluated; footprint analysis was also performed for each LAS path. Thirdly, based on the high-resolution land-cover map derived from aircraft remote sensing, a flux aggregation method was established combining footprint analysis and multiple-linear regression. Then, the area-averaged sensible heat fluxes obtained from the EC flux matrix were validated by the LAS measurements. Finally, the area-averaged ET of the kernel experimental area of HiWATER was estimated. Compared with the formerly used and rather simple approaches, such as the arithmetic average and area-weighted methods, the present scheme not only rests on a much better database, but also has a solid grounding in physics and mathematics for the integration of area-averaged fluxes over a heterogeneous surface. Results from this study, both instantaneous and daily ET at the satellite pixel scale, can be used for the validation of relevant remote sensing models and land surface process models. Furthermore, this work will be
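
    The aggregation idea (regress each tower's flux on the footprint-weighted land-cover fractions it sees, then apply the fitted per-cover fluxes to the land-cover fractions of the whole target area) can be sketched as follows. All numbers are hypothetical; the study itself used footprint models, the HiWATER flux matrix and an aircraft-derived land-cover map.

        # Hedged sketch of flux aggregation by multiple linear regression:
        # flux_at_tower_j ~ sum_k f[j, k] * F[k], where f[j, k] is the footprint-weighted
        # fraction of land-cover class k seen by tower j and F[k] is the per-class flux.
        import numpy as np

        # Footprint-weighted land-cover fractions for 5 hypothetical EC towers,
        # 3 cover classes (rows sum to 1).
        f = np.array([[0.8, 0.1, 0.1],
                      [0.2, 0.7, 0.1],
                      [0.1, 0.2, 0.7],
                      [0.5, 0.3, 0.2],
                      [0.3, 0.3, 0.4]])
        flux_obs = np.array([310.0, 250.0, 280.0, 290.0, 275.0])   # W m-2, made up

        # Least-squares estimate of the per-class fluxes F[k]
        F, *_ = np.linalg.lstsq(f, flux_obs, rcond=None)

        # Area-average flux: apply F[k] to the land-cover fractions of the whole area
        area_fractions = np.array([0.55, 0.25, 0.20])              # from a land-cover map (assumed)
        print("per-class fluxes [W m-2]:", F)
        print("area-averaged flux [W m-2]:", area_fractions @ F)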

  6. [High energy physics research]: Annual performance report, December 1, 1991--November 30, 1992

    International Nuclear Information System (INIS)

    Rosen, J.; Block, M.; Buchholz, D.; Gobbi, B.; Schellman, H.; Buchholz, D.; Rosen, J.; Miller, D.; Braaten, E.; Chang, D.; Oakes, R.; Schellman, H.

    1992-01-01

    The various segments of the Northwestern University high energy physics research program are reviewed. Work is centered around experimental studies done primarily at FNAL; associated theoretical efforts are included

  7. [High energy physics research]: Annual performance report, December 1, 1991--November 30, 1992. [Northwestern Univ

    Energy Technology Data Exchange (ETDEWEB)

    Rosen, J; Block, M; Buchholz, D; Gobbi, B; Schellman, H; Buchholz, D; Rosen, J; Miller, D; Braaten, E; Chang, D; Oakes, R; Schellman, H

    1992-01-01

    The various segments of the Northwestern University high energy physics research program are reviewed. Work is centered around experimental studies done primarily at FNAL; associated theoretical efforts are included.

  8. High-beta tokamak research. Annual progress report, 1 August 1982-1 August 1983

    International Nuclear Information System (INIS)

    Navratil, G.A.

    1983-08-01

    The main research objectives during the past year fell into four areas: (1) detailed observations over a range of high-beta tokamak equilibria; (2) fabrication of an improved and more flexible high-beta tokamak based on our understanding of the present Torus II; (3) extension of the pulse length to 100 μs with power crowbar operation of the equilibrium field coil sets; and (4) comparison of our equilibrium and stability observations with computational models of MHD equilibrium and stability.

  9. More controlling child-feeding practices are found among parents of boys with an average body mass index compared with parents of boys with a high body mass index.

    Science.gov (United States)

    Brann, Lynn S; Skinner, Jean D

    2005-09-01

    To determine if differences existed in mothers' and fathers' perceptions of their sons' weight, controlling child-feeding practices (ie, restriction, monitoring, and pressure to eat), and parenting styles (ie, authoritarian, authoritative, and permissive) by their sons' body mass index (BMI). One person (L.S.B.) interviewed mothers and boys using validated questionnaires and measured boys' weight and height; fathers completed questionnaires independently. Subjects were white, preadolescent boys and their parents. Boys were grouped by their BMI into an average BMI group (n=25; BMI percentile between 33rd and 68th) and a high BMI group (n=24; BMI percentile > or = 85th). Multivariate analyses of variance and analyses of variance were used. Mothers and fathers of boys with a high BMI saw their sons as more overweight (mothers P=.03, fathers P=.01) and were more concerned about their sons' weight than mothers and fathers of boys with an average BMI. In addition, fathers of boys with a high BMI monitored their sons' eating less often than fathers of boys with an average BMI (P=.006). No differences were found in parenting by boys' BMI groups for either mothers or fathers. More controlling child-feeding practices were found among mothers (pressure to eat) and fathers (pressure to eat and monitoring) of boys with an average BMI compared with parents of boys with a high BMI. A better understanding of the relationships between feeding practices and boys' weight is necessary. However, longitudinal research is needed to provide evidence of causal association.

  10. Annual evaluation of routine radiological survey/monitoring frequencies for the High Ranking Facilities Deactivating Project at Oak Ridge, Tennessee

    International Nuclear Information System (INIS)

    1998-12-01

    The Bethel Valley Watershed at the Oak Ridge National Laboratory (ORNL) has several Environmental Management (EM) facilities that are designated for deactivation and subsequent decontamination and decommissioning (D and D). The Surplus Facilities Program at ORNL provides surveillance and maintenance support for these facilities as deactivation objectives are completed to reduce the risks associated with radioactive material inventories, etc. The Bechtel Jacobs Company LLC Radiological Control (RADCON) Program has established requirements for radiological monitoring and surveying radiological conditions in these facilities. These requirements include an annual evaluation of routine radiation survey and monitoring frequencies. Radiological survey/monitoring frequencies were evaluated for two High Ranking Facilities Deactivation Project facilities, the Bulk Shielding Facility and Tower Shielding Facility. Considerable progress has been made toward accomplishing deactivation objectives, thus the routine radiological survey/monitoring frequencies are being reduced for 1999. This report identifies the survey/monitoring frequency adjustments and provides justification that the applicable RADCON Program requirements are also satisfied

  11. Drivers of inter-annual variation and long-term change in High-Arctic spider species abundances

    DEFF Research Database (Denmark)

    Bowden, Joseph J.; Hansen, Oskar L. P.; Olsen, Kent

    2018-01-01

    Understanding how species abundances vary in space and time is a central theme in ecology, yet there are few long-term field studies of terrestrial invertebrate abundances and the determinants of their dynamics. This is particularly relevant in the context of rapid climate change occurring in the Arctic. Arthropods can serve as strong indicators of ecosystem change due to their sensitivity to increasing temperatures and other environmental variables. We used spider samples collected by pitfall trapping from three different habitats (fen, mesic and arid heath) in High-Arctic Greenland to assess … interpretation of long-term trends. We used model selection to determine which climatic variables and/or previous years’ abundance best explained annual variation in species abundances over this period. We identified and used 28 566 adult spiders that comprised eight species. Most notably, the abundances of some

  12. Research in high energy physics. Annual technical progress report, December 1, 1993--November 30, 1998

    International Nuclear Information System (INIS)

    Olsen, S.L.; Tata, X.

    1996-01-01

    The high energy physics research program at the University of Hawaii is directed toward the study of the properties of the elementary particles and the application of the results of these studies to the understanding of the physical world. Experiments using high energy accelerators are aimed at searching for new particles, testing current theories, and measuring properties of the known particles. Experiments using cosmic rays address particle physics and astrophysical issues. Theoretical physics research evaluates experimental results in the context of existing theories and projects the experimental consequences of proposed new theories

  13. Annual report of the Nuclear Physics and High Energy Physics Laboratory, 1986

    International Nuclear Information System (INIS)

    Grossetete, B.

    1988-01-01

    Research within the DELPHI program; neutrino research; the H1 collaboration, which is building one of the two spectrometers for the HERA electron-proton collider; CELLO; production and decay of mesons and baryons; use of emulsions in studies of charmed and beauty particles; and the CHARM1 project which studies high energy neutrino scattering with a marble target are presented [fr

  14. Prediction of storm transfers and annual loads with data-based mechanistic models using high-frequency data

    Science.gov (United States)

    Ockenden, Mary C.; Tych, Wlodek; Beven, Keith J.; Collins, Adrian L.; Evans, Robert; Falloon, Peter D.; Forber, Kirsty J.; Hiscock, Kevin M.; Hollaway, Michael J.; Kahana, Ron; Macleod, Christopher J. A.; Villamizar, Martha L.; Wearing, Catherine; Withers, Paul J. A.; Zhou, Jian G.; Benskin, Clare McW. H.; Burke, Sean; Cooper, Richard J.; Freer, Jim E.; Haygarth, Philip M.

    2017-12-01

    Excess nutrients in surface waters, such as phosphorus (P) from agriculture, result in poor water quality, with adverse effects on ecological health and costs for remediation. However, understanding and prediction of P transfers in catchments have been limited by inadequate data and over-parameterised models with high uncertainty. We show that, with high temporal resolution data, we are able to identify simple dynamic models that capture the P load dynamics in three contrasting agricultural catchments in the UK. For a flashy catchment, a linear, second-order (two pathways) model for discharge gave high simulation efficiencies for short-term storm sequences and was useful in highlighting uncertainties in out-of-bank flows. A model with non-linear rainfall input was appropriate for predicting seasonal or annual cumulative P loads where antecedent conditions affected the catchment response. For second-order models, the time constant for the fast pathway varied between 2 and 15 h for all three catchments and for both discharge and P, confirming that high temporal resolution data are necessary to capture the dynamic responses in small catchments (10-50 km²). The models led to a better understanding of the dominant nutrient transfer modes, which will be helpful in determining phosphorus transfers following changes in precipitation patterns in the future.
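
    As an illustration of the general model structure named in the abstract (a linear, second-order model made of two parallel first-order pathways), the sketch below simulates such a transfer function in discrete time. The time constants, gains and the synthetic input series are invented for illustration; they are not the calibrated values identified in the study.

```python
import numpy as np

# Minimal sketch of a two-pathway (parallel first-order) discrete-time transfer
# function, the kind of structure identified by data-based mechanistic modelling.
# All parameter values below are illustrative only.
def two_pathway_response(u, dt_hours, tau_fast=5.0, tau_slow=80.0,
                         gain_fast=0.7, gain_slow=0.3):
    """Simulate the summed output of a fast and a slow store driven by input u."""
    a_fast = np.exp(-dt_hours / tau_fast)   # AR(1) coefficient of the fast pathway
    a_slow = np.exp(-dt_hours / tau_slow)   # AR(1) coefficient of the slow pathway
    y_fast = np.zeros_like(u)
    y_slow = np.zeros_like(u)
    for t in range(1, len(u)):
        y_fast[t] = a_fast * y_fast[t - 1] + gain_fast * (1 - a_fast) * u[t]
        y_slow[t] = a_slow * y_slow[t - 1] + gain_slow * (1 - a_slow) * u[t]
    return y_fast + y_slow

# Synthetic storm sequence at 15-min resolution (dt = 0.25 h)
rng = np.random.default_rng(0)
u = np.maximum(rng.normal(0, 1, 2000), 0)        # non-negative "effective rainfall"
y = two_pathway_response(u, dt_hours=0.25)
print(f"peak simulated load proxy: {y.max():.2f}")
```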

  15. HYFIRE II: fusion/high-temperature electrolysis conceptual-design study. Annual report

    International Nuclear Information System (INIS)

    Fillo, J.A.

    1983-08-01

    As in the previous HYFIRE design study, the current study focuses on coupling a Tokamak fusion reactor with a high-temperature blanket to a High-Temperature Electrolyzer (HTE) process to produce hydrogen and oxygen. Scaling of the STARFIRE reactor to allow a blanket power of 6000 MW(th) is also assumed. The primary difference between the two studies is the maximum inlet steam temperature to the electrolyzer. This temperature is decreased from approx. 1300 °C to approx. 1150 °C, which is closer to the maximum projected temperature of the Westinghouse fuel cell design. The process flow conditions change but the basic design philosophy and approaches to process design remain the same as before. Westinghouse assisted in the study in the areas of systems design integration, plasma engineering, balance-of-plant design, and electrolyzer technology

  16. F-Element ion chelation in highly basic media. 1998 annual progress report

    International Nuclear Information System (INIS)

    Paine, R.T.

    1998-01-01

    'A large percentage of high-level radioactive waste (HLW) produced in the DOE complex over the last thirty years temporarily resides in storage tanks maintained at highly basic pH. The final permanent waste remediation plan will probably require that liquid and solid fractions be chemically treated in order to partition and concentrate the dominant hazardous emitters from the bulk of the waste. This is no small task. Indeed, there does not exist a well developed molecular chemistry knowledge base to guide the development of suitable separations for actinide and fission products present in the strongly basic media. The goal of this project is to undertake fundamental studies of the coordination chemistry of f-element ions and their species formed in basic aqueous solutions containing common waste treatment ions (e.g., NO₃⁻, CO₃²⁻, organic carboxylates, and EDTA), as well as new waste scrubbing chelators produced in this study.'

  17. Research in high energy elementary particle physics: Annual progress report, [March 1, 1986-February 29, 1988

    International Nuclear Information System (INIS)

    Field, R.; Ramond, P.; Thorn, C.; Avery, P.; Walker, J.; Tanner, D.; Sikivie, P.; Sullivan, N.; Majeswki, S.

    1988-01-01

    This is a progress report covering the period March 1, 1986 through February 29, 1988 for the High Energy Physics program at the University of Florida (DOE Florida Demonstration Project grant FG05-86-ER40272). Our research program covers a broad range of topics in theoretical and experimental physics and includes detector development and an Axion search. Included in this report is a summary of our program and a discussion of the research progress

  18. High frequency electromagnetic impedance measurements for characterization, monitoring and verification efforts. 1998 annual progress report

    International Nuclear Information System (INIS)

    Becker, A.; Lee, K.H.; Pellerin, L.

    1998-01-01

    'Non-invasive, high-resolution imaging of the shallow subsurface is needed for delineation of buried waste, detection of unexploded ordnance, verification and monitoring of containment structures, and other environmental applications. Electromagnetic measurements at frequencies between 1 and 100 MHz are important for such applications, because the induction number of many targets is small, and it is possible to determine the dielectric permittivity in addition to the electrical conductivity of the subsurface. Earlier workers were successful in developing systems for detecting anomalous areas, but no quantifiable information was accurately determined. For high resolution imaging, accurate measurements are necessary so the field data can be mapped into the space of the subsurface parameters. The authors are developing a non-invasive method for accurately imaging the electrical conductivity and dielectric permittivity of the shallow subsurface using the plane wave impedance approach, known as the magnetotelluric (MT) method at low frequencies. Electric and magnetic sensors are being tested in a known area against theoretical predictions, thereby ensuring that the data collected with the high-frequency impedance (HFI) system will support high-resolution, multi-dimensional imaging techniques. The summary of the work to date is divided into three sections: equipment procurement, instrumentation, and theoretical developments. For most earth materials, the frequency range from 1 to 100 MHz encompasses a very difficult transition zone between the wave propagation of displacement currents and the diffusive behavior of conduction currents. Test equipment, such as signal generators and amplifiers, does not cover the entire range except at great expense. Hence the authors have divided the range of investigation into three sub-ranges: 1--10 MHz, 10--30 MHz, and 30--100 MHz. Results to date are in the lowest frequency range of 1--10 MHz. Even though conduction currents

  19. High energy particle physics at Purdue. Annual technical progress report, March 1982-March 1983

    International Nuclear Information System (INIS)

    Gaidos, J.A.; Koltick, D.S.; Loeffler, F.J.

    1983-01-01

    Progress is reported in these areas: a study of electron-positron annihilation using the High Resolution Spectrometer at SLAC; proton decay; extensive muon showers; gamma ray astronomy; the DUMAND project; theoretical work on fundamental problems in electromagnetic, weak, strong, and gravitational interactions; chi production by hadrons; p-nucleus interactions; development of the Collider Detector at Fermilab; and study of the observed hadrons as the relativistic bound states of baryons and antibaryons

  20. Testing and evaluation of solidified high-level waste forms. Joint annual progress report 1983

    International Nuclear Information System (INIS)

    Malow, G.

    1985-01-01

    A second joint programme of the European Atomic Community was started in 1981 under the indirect action programme (1980-84), Action No 5 'Testing and evaluation of the properties of various potential materials for immobilizing high activity waste'. The overall objective of the research is to test various European potential solidified high-level radioactive waste forms so as to predict their behaviour after disposal. The most important aspect is to produce data to calculate the activity release from the waste products under the attack of various aqueous solutions. The experiments were partly performed under waste repository relevant conditions and partly under simplified conditions for investigating basic activity release mechanisms. The topics of the programme were: (i) studies of basic leaching mechanisms; (ii) studies of hydrothermal leaching and surface attack of waste glasses; (iii) leach tests carried out in contact with granite at low water flow rates; (iv) static leach tests with specimen surrounded by canister and backfill materials; (v) specific isotope leach tests in slowly flowing water; (vi) leach tests of actinide spiked samples; (vii) leach tests of highly radioactive samples; (viii) tests of alpha radiation stability; (ix) studies of mechanical stability; (x) studies of mineral phases as model compounds and phase relations

  1. Experimental and theoretical high energy physics research. Annual progress report, September 1, 1991--September 31, 1992

    Energy Technology Data Exchange (ETDEWEB)

    1992-10-01

    Progress in the various components of the UCLA High-Energy Physics Research program is summarized, including some representative figures and lists of resulting presentations and published papers. Principal efforts were directed at the following: (I) UCLA hadronization model, PEP4/9 e⁺e⁻ analysis, p̄ decay; (II) ICARUS and astroparticle physics (physics goals, technical progress on electronics, data acquisition, and detector performance, long baseline neutrino beam from CERN to the Gran Sasso and ICARUS, future ICARUS program, and WIMP experiment with xenon), B physics with hadron beams and colliders, high-energy collider physics, and the φ factory project; (III) theoretical high-energy physics; (IV) H dibaryon search, search for K_L⁰ → π⁰γγ and π⁰νν̄, and detector design and construction for the FNAL-KTeV project; (V) UCLA participation in the experiment CDF at Fermilab; and (VI) VLPC/scintillating fiber R & D.

  2. Development of realistic high-resolution whole-body voxel models of Japanese adult males and females of average height and weight, and application of models to radio-frequency electromagnetic-field dosimetry

    International Nuclear Information System (INIS)

    Nagaoka, Tomoaki; Watanabe, Soichi; Sakurai, Kiyoko; Kunieda, Etsuo; Watanabe, Satoshi; Taki, Masao; Yamanaka, Yukio

    2004-01-01

    With advances in computer performance, the use of high-resolution voxel models of the entire human body has become more frequent in numerical dosimetries of electromagnetic waves. Using magnetic resonance imaging, we have developed realistic high-resolution whole-body voxel models for Japanese adult males and females of average height and weight. The developed models consist of cubic voxels of 2 mm on each side; the models are segmented into 51 anatomic regions. The adult female model is the first of its kind in the world and both are the first Asian voxel models (representing average Japanese) that enable numerical evaluation of electromagnetic dosimetry at high frequencies of up to 3 GHz. In this paper, we will also describe the basic SAR characteristics of the developed models for the VHF/UHF bands, calculated using the finite-difference time-domain method

  3. Development and characterization of solidified forms for high-level wastes: 1978. Annual report

    International Nuclear Information System (INIS)

    Ross, W.A.; Mendel, J.E.

    1979-12-01

    Development and characterization of solidified high-level waste forms are directed at determining both process properties and long-term behaviors of various solidified high-level waste forms in aqueous, thermal, and radiation environments. Waste glass properties measured as a function of composition were melt viscosity, melt electrical conductivity, devitrification, and chemical durability. The alkali metals were found to have the greatest effect upon glass properties. Titanium caused a slight decrease in viscosity and a significant increase in chemical durability in acidic solutions (pH 4). Aluminum, nickel and iron were all found to increase the formation of nickel-ferrite spinel crystals in the glass. Four multibarrier advanced waste forms were produced on a one-liter scale with simulated waste and characterized. Glass marbles encapsulated in a vacuum-cast lead alloy provided improved inertness with a minimal increase in technological complexity. Supercalcine spheres exhibited excellent inertness when coated with pyrolytic carbon and alumina and put in a metal matrix, but the processing requirements are quite complex. Tests on simulated and actual high-level waste glasses continue to suggest that thermal devitrification has a relatively small effect upon mechanical and chemical durabilities. Tests on the effects radiation has upon waste forms also continue to show changes to be relatively insignificant. Effects caused by decay of actinides can be estimated to saturate at near 10¹⁹ alpha-events/cm³ in homogeneous solids. Actually, in solidified waste forms the effects are usually observed around certain crystals as radiation causes amorphization and swelling of the crystals

  4. Performance and Reliability of Bonded Interfaces for High-temperature Packaging: Annual Progress Report

    Energy Technology Data Exchange (ETDEWEB)

    DeVoto, Douglas J. [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-10-19

    As maximum device temperatures approach 200 °C in continuous operation, sintered silver materials promise to maintain bonds at these high temperatures without excessive degradation rates. A detailed characterization of the thermal performance and reliability of sintered silver materials and processes has been initiated for the next year. Future steps in crack modeling include efforts to simulate crack propagation directly using the extended finite element method (X-FEM), a numerical technique that uses the partition of unity method for modeling discontinuities such as cracks in a system.

  5. 14th annual Results and Review Workshop on High Performance Computing in Science and Engineering

    CERN Document Server

    Nagel, Wolfgang E; Resch, Michael M; Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2011; High Performance Computing in Science and Engineering '11

    2012-01-01

    This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2011. The reports cover all fields of computational science and engineering, ranging from CFD to computational physics and chemistry, to computer science, with a special emphasis on industrially relevant applications. Presenting results for both vector systems and microprocessor-based systems, the book allows readers to compare the performance levels and usability of various architectures. As HLRS

  6. High-Temperature Air-Cooled Power Electronics Thermal Design: Annual Progress Report

    Energy Technology Data Exchange (ETDEWEB)

    Waye, Scot [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-08-01

    Power electronics that use high-temperature devices pose a challenge for thermal management. With the devices running at higher temperatures and having a smaller footprint, the heat fluxes increase from previous power electronic designs. This project overview presents an approach to examine and design thermal management strategies through cooling technologies to keep devices within temperature limits, dissipate the heat generated by the devices and protect electrical interconnects and other components for inverter, converter, and charger applications. This analysis, validation, and demonstration intends to take a multi-scale approach over the device, module, and system levels to reduce size, weight, and cost.

  7. High energy particle physics at Purdue. Annual technical progress report, March 1983-March 1984

    International Nuclear Information System (INIS)

    Gaidos, J.A.; Koltick, D.S.; Loeffler, F.J.

    1984-01-01

    Progress is reported in these areas: a study of electron-positron annihilation using the High Resolution Spectrometer; experimental study of proton decay; gamma ray astrophysics; the DUMAND project; fundamental problems in the theory of gravitational, electromagnetic, weak, and strong interactions; chi production by hadrons; study of collective phenomena; search for the onset of collective phenomena; work on the Collider Detector at Fermilab; search for a deconfined quark-gluon phase of strongly interacting matter at the FNAL proton-antiproton collider; and development of an electrodeless drift chamber

  8. High energy particle physics at Purdue. Annual progress report, March 1981-1982

    International Nuclear Information System (INIS)

    Gaidos, J.A.; Koltick, D.S.; Loeffler, F.J.; McIlwain, R.L.; Miller, D.H.; Palfrey, T.R.; Shibata, E.I.; Willmann, R.B.

    1982-01-01

    Progress is reported in these areas: study of electron positron annihilation using the High Resolution Spectrometer at PEP; experimental study of proton decay; a study of rare processes in meson spectroscopy utilizing the SLAC Hybrid Bubble Chamber System; theory of fundamental problems of gravitational, electromagnetic, weak, and strong interactions; experimental study of chi production by hadrons; p-nucleus interactions; development of the Collider Detector at Fermilab; antineutrino physics and low energy neutrino physics; and the study of the observed hadrons as the relativistic bound states of baryons and antibaryons

  9. Annual report for the High Energy Physics Program at The University of Alabama

    International Nuclear Information System (INIS)

    Baksay, L.; Busenitz, J.K.

    1993-10-01

    The High Energy Physics group at the University of Alabama is a member of the L3 collaboration studying e⁺e⁻ collisions near the Z⁰ pole at the LEP accelerator at CERN. About 2 million Z⁰ events have been accumulated and the experiment has been prolific in publishing results on the Z resonance parameters, the Z couplings to all leptons and quarks with mass less than half the Z mass, searches for new particles and interactions, and studies of strong interactions and/or weak charged current decays of quarks and leptons abundantly produced in Z decays. They are contributing to data analysis as well as to detector hardware. In particular, they are involved in a major hardware upgrade for the experiment, namely the design, construction and commissioning of a Silicon Microvertex Detector (SMD) which has successfully been installed for operation during the present grant period. A report is presented on their recent L3 activities and their plans for the next grant period of twelve months (April 1, 1994--March 31, 1995). Their main interests in data analysis are in the study of single photon final states and the physics made more accessible by the SMD, such as heavy flavor physics. Their hardware efforts continue to be concentrated on the high precision capacitive and optical alignment monitoring systems for the SMD and also include gas monitoring for the muon system. They are also planning to participate in the coming upgrade of the L3 detector

  10. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
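
    The verbal statement of the main result can be written compactly. With A_w and A_v denoting the averages of a variable x under weighting functions w and v, and E_v and Cov_v the mean and covariance taken with weights v, the relationship described reads as follows (our notation, not necessarily the paper's):

```latex
A_w - A_v \;=\; \frac{\operatorname{Cov}_v\!\left(x,\; w/v\right)}{\operatorname{E}_v\!\left(w/v\right)},
\qquad
A_w = \frac{\sum_i w_i x_i}{\sum_i w_i}, \quad
A_v = \frac{\sum_i v_i x_i}{\sum_i v_i}.
```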

  11. High Temperature Materials Laboratory sixth annual report, October 1992--September 1993

    Energy Technology Data Exchange (ETDEWEB)

    Tennery, V.J.; Foust, F.M.

    1993-12-01

    The High Temperature Materials Laboratory has completed its sixth year of operation as a designated Department of Energy User Facility at the Oak Ridge National Laboratory. Growth of the User Program is evidenced by the number of outside institutions executing user agreements since the facility began operation in 1987. A total of 172 nonproprietary agreements (88 university and 84 industry) and 35 proprietary agreements (2 university, 33 industry) are now in effect. Six other government facilities have also participated in the User Program. Thirty-eight states are represented by these interactions. Ninety-four nonproprietary research proposals (44 from universities, 47 from industry, and 3 from other government facilities) and three proprietary proposals were considered during this reporting period. Nonproprietary research projects active in FY 1993 are summarized.

  12. High Temperature Materials Laboratory fourth annual report, October 1990--September 1991

    Energy Technology Data Exchange (ETDEWEB)

    Tennery, V.J.; Foust, F.M.

    1991-12-01

    The High Temperature Materials Laboratory has completed its fourth year of operation as a designated Department of Energy User Facility at the Oak Ridge National Laboratory. Growth of the user program is evidenced by the number of outside institutions that have executed user agreements since the facility began operation in 1987. A total of 118 nonproprietary agreements (62 university and 56 industry) and 28 proprietary agreements (2 university, 26 industry) are now in effect. Five other government facilities have also participated in the user program. Sixty-five nonproprietary research proposals (38 from university, 26 from industry, and 1 other government facility) and four proprietary proposals were considered during this reporting period. Research projects active in FY 1991 are summarized.

  13. NREL/SCE High Penetration PV Integration Project: FY13 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Mather, B. A.; Shah, S.; Norris, B. L.; Dise, J. H.; Yu, L.; Paradis, D.; Katiraei, F.; Seguin, R.; Costyk, D.; Woyak, J.; Jung, J.; Russell, K.; Broadwater, R.

    2014-06-01

    In 2010, the National Renewable Energy Laboratory (NREL), Southern California Edison (SCE), Quanta Technology, Satcon Technology Corporation, Electrical Distribution Design (EDD), and Clean Power Research (CPR) teamed to analyze the impacts of high penetration levels of photovoltaic (PV) systems interconnected onto the SCE distribution system. This project was designed specifically to benefit from the experience that SCE and the project team would gain during the installation of 500 megawatts (MW) of utility-scale PV systems (with 1-5 MW typical ratings) starting in 2010 and completing in 2015 within SCE's service territory through a program approved by the California Public Utility Commission (CPUC). This report provides the findings of the research completed under the project to date.

  14. Annual Report, Fall 2016: Alternative Chemical Cleaning of Radioactive High Level Waste Tanks - Corrosion Test Results

    International Nuclear Information System (INIS)

    Wyrwas, R. B.

    2016-01-01

    The testing presented in this report is in support of the investigation of the Alternative Chemical Cleaning program to aid in developing strategies and technologies to chemically clean radioactive High Level Waste tanks prior to tank closure. The data and conclusions presented here are from the examination of the corrosion rates of A285 carbon steel and 304L stainless steel exposed to two proposed chemical cleaning solutions: acidic permanganate (0.18 M nitric acid and 0.05 M sodium permanganate) and caustic permanganate (10 M sodium hydroxide and 0.05 M sodium permanganate). These solutions have been proposed as chemical cleaning solutions for the retrieval of actinides in the sludge in the waste tanks, and were tested with both HM and PUREX sludge simulants at a 20:1 ratio.

  15. High energy density in matter produced by heavy ion beams. Annual report 1993

    International Nuclear Information System (INIS)

    1994-06-01

    The experimental activities at GSI were concentrated on progress in beam-plasma interaction experiments of heavy ions with ionized matter, plasma-lens forming devices, the experimental area for intense beams at high temperature, and charge exchange collisions of ions. The development toward higher intensities and phase space densities during 1993 for the SIS and the ESR is recorded. The possibility of funneling two beams in a two-beam RFQ is studied. Specific results are presented with respect to inertial confinement fusion (ICF). The problems of ion stopping in plasma and of pumping X-ray lasers with heavy ion beams are discussed. Various contributions deal with dense plasma effects, shocks and opacity. (HP)

  16. Annual Report, Fall 2016: Alternative Chemical Cleaning of Radioactive High Level Waste Tanks - Corrosion Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Wyrwas, R. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-09-01

    The testing presented in this report is in support of the investigation of the Alternative Chemical Cleaning program to aid in developing strategies and technologies to chemically clean radioactive High Level Waste tanks prior to tank closure. The data and conclusions presented here are from the examination of the corrosion rates of A285 carbon steel and 304L stainless steel exposed to two proposed chemical cleaning solutions: acidic permanganate (0.18 M nitric acid and 0.05 M sodium permanganate) and caustic permanganate (10 M sodium hydroxide and 0.05 M sodium permanganate). These solutions have been proposed as chemical cleaning solutions for the retrieval of actinides in the sludge in the waste tanks, and were tested with both HM and PUREX sludge simulants at a 20:1 ratio.

  17. A comparison of annual and seasonal carbon dioxide effluxes between subarctic Sweden and high-arctic Svalbard

    DEFF Research Database (Denmark)

    Björkman, Mats P.; Morgner, Elke; Björk, Robert G.

    2010-01-01

    in the literature. Winter emissions varied in their contribution to total annual production between 1 and 18%. Artificial snow drifts shortened the snow-free period by 2 weeks and decreased the annual CO2 emission by up to 20%. This study suggests that future shifts in vegetation zones may increase soil respiration...

  18. High committee for nuclear safety transparency and information. Annual activity report. June 2008 - December 2009

    International Nuclear Information System (INIS)

    2010-01-01

    This document is the first activity report of the High committee for nuclear safety transparency and information (HCTISN), created on June 18, 2008. The HCTISN is a French authority for information, dialogue and debate about the risks linked with nuclear activities and about their impacts on public health, on the environment and on nuclear safety. The committee can express its opinions and recommendations on any question within the above topics and propose any measure aimed at guaranteeing or improving transparency in the nuclear domain. This activity report offers a concise overview of the actions already undertaken: the plutonium imports from the UK, the contamination incident at the Socatri facility (an Areva-Eurodif subsidiary located at the Tricastin site), and the dismantling strategy for basic nuclear facilities. It presents the composition, organization, missions and means of the Committee, the different working groups and the follow-up of the different recommendations issued so far by the Committee. (J.S.)

  19. High Level Radioactive Waste Management: Proceedings of the second annual international conference

    International Nuclear Information System (INIS)

    1991-01-01

    The final disposal of high level radioactive waste (HLW) has been one of the most arduous problems facing the nuclear industry. This issue has many facets, which are addressed in these proceedings. The papers herein contain the most current information regarding the conditioning and disposal of HLW. Most of the needs are technical in nature, such as the best form of the waste, the integrity of storage containers, design and construction of a repository, and characterization of the geology of a repository to provide assurance that radioactive and other hazardous materials will not reach the surrounding environment. Many of the papers discuss non-US programs. Continued international cooperation and technology exchange is essential. There are other concerns that must be addressed before the final emplacement of HLW. Some of the other issues addressed in these proceedings are conformance to regulations, transportation, socioeconomics, and public education. Any impediments in these areas must be resolved along with the scientific issues before final waste disposal. This conference provides a forum for information exchange. The papers in these proceedings will provide the basis for future planning and decisions. Continued cooperation of the technical community will ultimately result in the safe disposal of HLW. Individual abstracts are indexed separately for the data base

  20. Area-averaged evapotranspiration over a heterogeneous land surface: aggregation of multi-point EC flux measurements with a high-resolution land-cover map and footprint analysis

    Directory of Open Access Journals (Sweden)

    F. Xu

    2017-08-01

    Full Text Available The determination of area-averaged evapotranspiration (ET) at the satellite pixel scale/model grid scale over a heterogeneous land surface plays a significant role in developing and improving the parameterization schemes of the remote sensing based ET estimation models and general hydro-meteorological models. The Heihe Watershed Allied Telemetry Experimental Research (HiWATER) flux matrix provided a unique opportunity to build an aggregation scheme for area-averaged fluxes. On the basis of the HiWATER flux matrix dataset and high-resolution land-cover map, this study focused on estimating the area-averaged ET over a heterogeneous landscape with footprint analysis and multivariate regression. The procedure is as follows. Firstly, quality control and uncertainty estimation for the data of the flux matrix, including 17 eddy-covariance (EC) sites and four groups of large-aperture scintillometers (LASs), were carefully done. Secondly, the representativeness of each EC site was quantitatively evaluated; footprint analysis was also performed for each LAS path. Thirdly, based on the high-resolution land-cover map derived from aircraft remote sensing, a flux aggregation method was established combining footprint analysis and multiple-linear regression. Then, the area-averaged sensible heat fluxes obtained from the EC flux matrix were validated by the LAS measurements. Finally, the area-averaged ET of the kernel experimental area of HiWATER was estimated. Compared with the formerly used and rather simple approaches, such as the arithmetic average and area-weighted methods, the present scheme not only rests on a much better database, but also has a solid grounding in physics and mathematics for the integration of area-averaged fluxes over a heterogeneous surface. Results from this study, both instantaneous and daily ET at the satellite pixel scale, can be used for the validation of relevant remote sensing models and land surface process models. Furthermore, this
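
    A minimal numerical sketch of the aggregation idea, assuming each EC tower's observed flux is a footprint-weighted mixture of per-land-cover fluxes: the class-representative fluxes are recovered by multiple linear regression and then area-weighted up to the pixel/grid scale. The number of classes, the footprint fractions and all flux values below are hypothetical, not HiWATER data.

```python
import numpy as np

# Sketch: flux aggregation over a heterogeneous surface.
# Each EC site i observes F_i ≈ sum_k f_ik * F_k, where f_ik is the fraction of
# the site's flux footprint occupied by land-cover class k (from a footprint
# model and a high-resolution land-cover map) and F_k is the unknown
# class-representative flux. Values below are synthetic.
rng = np.random.default_rng(1)
n_sites, n_classes = 17, 3

f = rng.dirichlet(np.ones(n_classes), size=n_sites)   # footprint cover fractions (rows sum to 1)
F_true = np.array([120.0, 80.0, 40.0])                # "true" class fluxes, W m-2 (invented)
obs = f @ F_true + rng.normal(0, 5, n_sites)          # tower observations with noise

# Multiple linear regression (no intercept): recover class-representative fluxes
F_hat, *_ = np.linalg.lstsq(f, obs, rcond=None)

# Area-average flux for the pixel/grid cell from map-derived class area fractions
area_frac = np.array([0.2, 0.5, 0.3])
print("estimated class fluxes:", np.round(F_hat, 1))
print("area-averaged flux:", round(float(area_frac @ F_hat), 1), "W m-2")
```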

  1. Using an Ablation Gradient Model to Characterize Annual Glacial Melt Contribution to Major Rivers in High Asia

    Science.gov (United States)

    Brodzik, M. J.; Armstrong, R. L.; Khalsa, S. J. S.; Painter, T. H.; Racoviteanu, A.; Rittger, K.

    2014-12-01

    Ice melt from mountain glaciers can represent a significant contribution to freshwater hydrological budgets, along with seasonal snow melt, rainfall and groundwater. In the rivers of High Asia, understanding the proportion of glacier ice melt is critical for water resource management of irrigation and planning for hydropower generation and human consumption. Current climate conditions are producing heterogeneous glacier responses across the Hindu Kush-Karakoram-Himalayan ranges. However, it is not yet clear how contrasting glacier patterns affect regional water resources. For example, in the Upper Indus basin, estimates of glacial contribution to runoff are often not distinguished from seasonal snow contribution, and vary widely, from as little as 15% to as much as 55%. While many studies are based on reasonable concepts, most are based on assumptions uninformed by actual snow or ice cover measurements. While straightforward temperature index models have been used to estimate glacier runoff in some Himalayan basins, application of these models in larger Himalayan basins is limited by difficulties in estimating key model parameters, particularly air temperature. Estimating glacial area from the MODIS Permanent Snow and Ice Extent (MODICE) product for the years 2000-2013, with recently released Shuttle Radar Topography Mission (SRTMGL3) elevation data, we use a simple ablation gradient approach to calculate an upper limit on the contribution of clean glacier ice melt to streamflow. We present model results for the five major rivers with glaciated headwaters in High Asia: the Brahmaputra, Ganges, Indus, Amu Darya and Syr Darya. Using GRDC historical discharge records, we characterize the annual contribution from glacier ice melt. We use MODICE interannual trends in each basin to estimate glacier ice melt uncertainties. Our results are being used in the USAID project, Contribution to High Asia Runoff from Ice and Snow (CHARIS), to inform regional-scale planning for
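
    A minimal sketch of the ablation-gradient idea: melt per unit area increases linearly with distance below the equilibrium line altitude (ELA), and total melt is the sum of that rate over the glacier's area-elevation distribution (hypsometry). The gradient, ELA and hypsometry below are invented for illustration; the study derives glacier area from MODICE and elevations from SRTMGL3.

```python
import numpy as np

def ablation_gradient_melt(band_elev_m, band_area_km2, ela_m, db_dz_mwe_per_m=0.007):
    """Annual melt (km^3 w.e.) from a linear ablation gradient below the ELA.

    band_elev_m    : mid-point elevation of each glacier elevation band (m)
    band_area_km2  : ice area in each band (km^2)
    ela_m          : equilibrium line altitude (m)
    db_dz_mwe_per_m: ablation gradient (m w.e. of melt per m of elevation below the ELA)
    """
    melt_mwe = np.maximum(0.0, (ela_m - band_elev_m) * db_dz_mwe_per_m)  # m w.e. per band
    return float(np.sum(melt_mwe * band_area_km2) * 1e-3)               # m * km^2 -> km^3 w.e.

# Hypothetical hypsometry for a single glacierized basin (100 m elevation bands)
elev = np.arange(3500, 6500, 100)
area = np.exp(-((elev - 5200) / 600.0) ** 2) * 50     # bell-shaped area distribution, km^2
print(f"upper-bound ice melt: {ablation_gradient_melt(elev, area, ela_m=5300):.2f} km^3 w.e. per year")
```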

  2. A new NDVI measure that overcomes data sparsity in cloud-covered regions predicts annual variation in ground-based estimates of high arctic plant productivity

    Science.gov (United States)

    Rune Karlsen, Stein; Anderson, Helen B.; van der Wal, René; Bremset Hansen, Brage

    2018-02-01

    Efforts to estimate plant productivity using satellite data can be frustrated by the presence of cloud cover. We developed a new method to overcome this problem, focussing on the high-arctic archipelago of Svalbard where extensive cloud cover during the growing season can prevent plant productivity from being estimated over large areas. We used a field-based time-series (2000-2009) of live aboveground vascular plant biomass data and a recently processed cloud-free MODIS-Normalised Difference Vegetation Index (NDVI) data set (2000-2014) to estimate, on a pixel-by-pixel basis, the onset of plant growth. We then summed NDVI values from onset of spring to the average time of peak NDVI to give an estimate of annual plant productivity. This remotely sensed productivity measure was then compared, at two different spatial scales, with the peak plant biomass field data. At both the local scale, surrounding the field data site, and the larger regional scale, our NDVI measure was found to predict plant biomass (adjusted R² = 0.51 and 0.44, respectively). The commonly used ‘maximum NDVI’ plant productivity index showed no relationship with plant biomass, likely due to some years having very few cloud-free images available during the peak plant growing season. Thus, we propose this new summed NDVI from onset of spring to time of peak NDVI as a proxy of large-scale plant productivity for regions such as the Arctic where climatic conditions restrict the availability of cloud-free images.
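
    A minimal sketch of the proposed proxy for one pixel, assuming a cloud-free NDVI time series: detect onset of growth (here a simple threshold crossing stands in for whatever onset criterion the authors use) and sum NDVI from onset to the long-term average date of peak NDVI. Threshold, dates and the synthetic curve are illustrative only.

```python
import numpy as np

def summed_ndvi_proxy(ndvi, dates_doy, peak_doy, onset_threshold=0.3):
    """Sum NDVI from onset of growth to the (pre-computed) average day of peak NDVI.

    ndvi            : cloud-free NDVI values for one pixel and one season
    dates_doy       : matching day-of-year for each NDVI value
    peak_doy        : long-term average day-of-year of peak NDVI for this pixel
    onset_threshold : illustrative onset criterion (first exceedance of a fixed NDVI)
    """
    ndvi = np.asarray(ndvi, dtype=float)
    dates_doy = np.asarray(dates_doy)
    above = np.nonzero(ndvi >= onset_threshold)[0]
    if above.size == 0:
        return 0.0                                   # no detectable growing season
    onset_doy = dates_doy[above[0]]
    window = (dates_doy >= onset_doy) & (dates_doy <= peak_doy)
    return float(ndvi[window].sum())

# Synthetic seasonal NDVI curve (8-day composites)
doy = np.arange(120, 280, 8)
ndvi = 0.05 + 0.5 * np.exp(-((doy - 200) / 30.0) ** 2)
print(f"summed-NDVI productivity proxy: {summed_ndvi_proxy(ndvi, doy, peak_doy=200):.2f}")
```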

  3. The annual planktonic protist community structure in an ice-free high Arctic fjord (Adventfjorden, West Spitsbergen)

    Science.gov (United States)

    Kubiszyn, A. M.; Wiktor, J. M.; Wiktor, J. M.; Griffiths, C.; Kristiansen, S.; Gabrielsen, T. M.

    2017-05-01

    We investigated the size and trophic structure of the annual planktonic protist community structure in the ice-free Adventfjorden in relation to environmental factors. Our high-resolution (weekly to monthly) study was conducted in 2012, when warm Atlantic water was advected into the fjord in winter and summer. We observed a distinct seasonality in the protist communities. The winter protist community was characterised by extremely low levels of protist abundance and biomass (primarily Dinophyceae, Ciliophora and Bacillariophyceae) in a homogenous water column. In the second half of April, the total protist abundance and biomass rapidly increased, thus initiating the spring bloom in a still well-mixed water column. The spring bloom was initially dominated by the prymnesiophyte Phaeocystis pouchetii and Bacillariophyceae (primarily from the genera Thalassiosira, Fragilariopsis and Chaetoceros) and was later strictly dominated by Phaeocystis colonies. When the bloom terminated in mid-June, the community shifted towards flagellates (Dinophyceae, Ciliophora, Cryptophyceae and nanoflagellates 3-7 μm in size) in a stratified, nutrient-depleted water column. Decreases in the light intensity decreased the protist abundance and biomass, and the fall community (Dinophyceae, Cryptophyceae and Bacillariophyceae) was followed by the winter community.

  4. Microstructural properties of high level waste concentrates and gels with raman and infrared spectroscopies. 1997 annual progress report

    International Nuclear Information System (INIS)

    Agnew, S.F.; Coarbin, R.A.; Johnston, C.T.

    1997-01-01

    'Monosodium aluminate, the phase of aluminate found in waste tanks, is only stable over a fairly narrow range of water vapor pressure (22% relative humidity at 22 °C). As a result, aluminate solids are stable at Hanford (seasonal average RH ∼20%) but are not stable at Savannah River (seasonal average RH ∼40%). Monosodium aluminate (MSA) releases water upon precipitation from solution. In contrast, trisodium aluminate (TSA) consumes water upon precipitation. As a result, MSA precipitates gradually over time while TSA undergoes rapid accelerated precipitation, often gelling its solution. Raman spectra are reported for the first time for monosodium and trisodium aluminate solids. Ternary phase diagrams can be useful for showing effects of water removal, even with concentrated waste. Kinetics of monosodium aluminate precipitation are extremely slow (several months) at room temperature but quite fast (several hours) at 60 °C. As a result, all waste simulants that contain aluminate need several days of cooking at 60 °C in order to truly represent the equilibrium state of aluminate. The high level waste (HLW) slurries that have been created at the Hanford and Savannah River Sites over the last fifty years constitute a large fraction of the remaining HLW volumes at both sites. In spite of the preponderance of these wastes, very little quantitative information is available about their physical and chemical properties other than elemental analyses.'

  5. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  6. "High/Scope Supporting the Child, the Family, the Community": A Report of the Proceedings of the High/Scope Ireland Third Annual Conference, 12th October 2004, Newry, Northern Ireland

    Science.gov (United States)

    Peyton, Lynne

    2005-01-01

    The third annual High/Scope Ireland Conference provided a forum for speakers, workshop leaders, and delegates from across Ireland, the UK, USA, Europe and South Africa to share their experiences of High/Scope in action. Research demonstrates that long term benefits for High/Scope participants include increased literacy rates, school success and…

  7. Prepared for the thirtieth annual conference on bioassay analytical and environmental chemistry. Reliable analysis of high resolution gamma spectra

    International Nuclear Information System (INIS)

    Spitz, H.B.; Buschbom, R.; Rieksts, G.A.; Palmer, H.E.

    1985-01-01

    A new method has been developed to reliably analyze pulse height-energy spectra obtained from measurements employing high resolution germanium detectors. The method employs a simple data transformation and smoothing function to calculate background, identify photopeaks, and perform isotopic analysis. This technique is elegant in its simplicity because it avoids dependence upon complex spectrum deconvolution, stripping, or other least-squares-fitting techniques which complicate the assessment of measurement reliability. A moving median was chosen for data smoothing because, unlike moving averages, medians are not dominated by extreme data points. Finally, peaks are identified whenever the difference between the background spectrum and the transformed spectrum exceeds a pre-determined number of standard deviations
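
    A minimal sketch of the described approach for a 1-D pulse-height spectrum: a moving median estimates the smooth background (medians resist being pulled up by photopeaks), and channels where the spectrum exceeds the background by more than a chosen number of standard deviations (Poisson counting statistics assumed) are flagged as peak candidates. The window width, threshold and synthetic spectrum are illustrative, not the authors' settings.

```python
import numpy as np

def find_photopeaks(counts, window=31, n_sigma=4.0):
    """Flag channels where counts exceed a moving-median background by n_sigma."""
    counts = np.asarray(counts, dtype=float)
    half = window // 2
    padded = np.pad(counts, half, mode="edge")
    # Moving-median background: unlike a moving average, it is not dominated
    # by the extreme values inside photopeaks.
    background = np.array([np.median(padded[i:i + window]) for i in range(counts.size)])
    sigma = np.sqrt(np.maximum(background, 1.0))          # Poisson estimate of channel noise
    return np.nonzero(counts - background > n_sigma * sigma)[0]

# Synthetic spectrum: smooth continuum plus two Gaussian photopeaks (Co-60-like channels)
rng = np.random.default_rng(2)
ch = np.arange(4096)
true = 200 * np.exp(-ch / 2000) \
       + 300 * np.exp(-((ch - 1332) / 3.0) ** 2) \
       + 250 * np.exp(-((ch - 1173) / 3.0) ** 2)
spectrum = rng.poisson(true)
peaks = find_photopeaks(spectrum)
print("peak channels flagged:", peaks)
```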

  8. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant dynamical effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis

  9. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007 can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
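
    For reference, the averaging model of Information Integration Theory that R-Average fits has the general form below (our sketch of the standard formulation, with an initial-state term w_0 s_0; the cited papers define the exact parameterization used by the procedure):

```latex
R \;=\; \frac{w_0 s_0 + \sum_{k=1}^{n} w_k s_k}{\,w_0 + \sum_{k=1}^{n} w_k\,}
```

    Here the s_k are subjective scale values and the w_k non-negative weights. Because the weights also appear in the denominator, adding or removing an attribute changes the effective weight of every other attribute, which is how the averaging model captures interaction-like effects without additional parameters.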

  10. Design Study for a Low-enriched Uranium Core for the High Flux Isotope Reactor, Annual Report for FY 2007

    Energy Technology Data Exchange (ETDEWEB)

    Primm, Trent [ORNL; Ellis, Ronald James [ORNL; Gehin, Jess C [ORNL; Ilas, Germina [ORNL; Miller, James Henry [ORNL; Sease, John D [ORNL

    2007-11-01

    This report documents progress made during fiscal year 2007 in studies of converting the High Flux Isotope Reactor (HFIR) from highly enriched uranium (HEU) fuel to low enriched uranium (LEU) fuel. Conversion from HEU to LEU will require a change in fuel form from uranium oxide to a uranium-molybdenum alloy. A high volume fraction U/Mo-in-Al fuel could attain the same neutron flux performance as the current HEU fuel, but materials considerations appear to preclude production and irradiation of such a fuel. A diffusion barrier would be required if Al is to be retained as the interstitial medium, and the additional volume required for this barrier would degrade performance. Attaining the high volume fraction (55 wt. %) of U/Mo assumed in the computational study while maintaining the current fuel plate acceptance level at the fuel manufacturer is unlikely, i.e. no increase in the percentage of plates rejected for non-compliance with the fuel specification. Substitution of a zirconium alloy for Al would significantly increase the weight of the fuel element, the cost of the fuel element, and introduce an as-yet untried manufacturing process. A monolithic U-10Mo foil is the choice of LEU fuel for HFIR. Preliminary calculations indicate that with a modest increase in reactor power, the flux performance of the reactor can be maintained at the current level. A linearly-graded, radial fuel thickness profile is preferred to the arched profile currently used in HEU fuel because the LEU fuel medium is a metal alloy foil rather than a powder. Developments in analysis capability and nuclear data processing techniques are underway with the goal of verifying the preliminary calculations of LEU flux performance. A conceptual study of the operational cost of an LEU fuel fabrication facility yielded the conclusion that the annual fuel cost to the HFIR would increase significantly from the current HEU fuel cycle. Though manufacturing can be accomplished with existing technology

  11. 1999 annual report

    International Nuclear Information System (INIS)

    2000-01-01

    Seventh Energy Ltd is a junior oil and gas exploration company based in Calgary, Alberta. The company's focus is in southern Alberta, with oil producing properties at Hays and Enchant, and a new gas property at Princess due to come on stream in 2000. Production decreased 26 per cent in 1999 from an average of 599 barrels of oil equivalent per day to 441 barrels of oil equivalent per day, mainly as a result of asset sales. Nevertheless, improved commodity prices increased funds from operations by 110 per cent from $751,000 to $1,577,000. The company faced serious difficulties during 1999, including high debt levels, high overhead expenses, a lack of capital to maintain the production base, a dwindling land base due to asset sales and lease expirations, and a reduced production base due to asset sales. Although some of these challenges carry over into the year 2000, the company managed to reduce its debt very significantly. With a capital budget of $4,100,000 for 2000, it expects to carry on a vigorous exploration program. The annual report explains the company's efforts during 1999 to liquidate its debt load, reviews activities in exploration and production, and provides a detailed analysis of the company's financial status, and future plans

  12. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary...... Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain....

  13. Nuclear Physics Department annual report

    International Nuclear Information System (INIS)

    1997-07-01

    This annual report presents articles and abstracts published in foreign journals, covering the following subjects: nuclear structure, nuclear reactions, applied physics, instrumentation, nonlinear phenomena and high energy physics

  14. Average wind statistics for SRP area meteorological towers

    International Nuclear Information System (INIS)

    Laurinat, J.E.

    1987-01-01

    A quality assured set of average wind statistics for the seven SRP area meteorological towers has been calculated for the five-year period 1982--1986 at the request of DOE/SR. A similar set of statistics was previously compiled for the years 1975--1979. The updated wind statistics will replace the old statistics as the meteorological input for calculating atmospheric radionuclide doses from stack releases, and will be used in the annual environmental report. This report details the methods used to average the wind statistics and to screen out bad measurements and presents wind roses generated by the averaged statistics
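
    As an illustration of what averaged wind statistics typically look like for such dose calculations, the sketch below builds a joint frequency distribution of wind direction sectors and speed classes (the table from which a wind rose is drawn) from hourly tower data. The sector count, speed class boundaries and synthetic data are our assumptions, not the SRP values or methods.

```python
import numpy as np

def joint_frequency(wind_dir_deg, wind_speed, n_sectors=16,
                    speed_bins=(0, 2, 4, 6, 8, 12, np.inf)):
    """Return an (n_sectors x n_speed_classes) table of relative frequencies."""
    wind_dir_deg = np.asarray(wind_dir_deg) % 360.0
    wind_speed = np.asarray(wind_speed)
    sector_width = 360.0 / n_sectors
    # Centre the first sector on north, as is conventional for wind roses.
    sector = (np.floor((wind_dir_deg + sector_width / 2) / sector_width) % n_sectors).astype(int)
    speed_class = np.digitize(wind_speed, speed_bins[1:-1])
    table = np.zeros((n_sectors, len(speed_bins) - 1))
    for s, c in zip(sector, speed_class):
        table[s, c] += 1
    return table / table.sum()

# Synthetic hourly record for one year (direction in degrees, speed in m/s)
rng = np.random.default_rng(3)
freq = joint_frequency(rng.uniform(0, 360, 8760), rng.gamma(2.0, 2.0, 8760))
print("relative frequency of 0-2 m/s winds from the north sector:", round(freq[0, 0], 4))
```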

  15. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  16. Woodland: dynamics of average diameters of coniferous tree stands of the principal forest types

    Directory of Open Access Journals (Sweden)

    R. A. Ziganshin

    2016-08-01

    Full Text Available The age dynamics of average diameters of coniferous tree stands of different forest types at Highland Khamar-Daban (natural woodland in the South-East Baikal Lake region) have been analysed. Aggregate data on average stand diameters by age classes, as well as on the current periodic and overall mean increment of the stands, are presented and discussed in the paper. A forest management appraisal is given. The most representative forest types were selected for analysis. There were nine of them, including three Siberian stone pine Pinus sibirica Du Tour stands, three Siberian fir Abies sibirica Ledeb. stands, one Siberian spruce Picea obovata Ledeb. stand, and two dwarf Siberian pine Pinus pumila (Pallas) Regel stands. The whole high-altitude range of mountain taiga has been evaluated. Mathematical and statistical indicators have been calculated for every forest type. Stone pine stands are the largest. The dynamics of mean stand diameters have been examined by dominant species for every forest type. Quite a number of interesting facts have been elicited. Generally, all species have maximal values of periodic annual increment in young stands, but the subsequent decrease of increment proceeds differently and is connected with the different lifetimes of the wood species. It is curious that the annual increment of the dwarf Siberian pine stands hardly decreases with aging. As for mean annual increment, it is more stable than periodic annual increment. From the fifth age class (age of stand approaching maturity) the mean annual increment of stone pine stands varies from 0.20 to 0.24 cm per year; from 0.12–0.15 to 0.18–0.21 cm per year in fir stands; from 0.18 to 0.24 cm per year in spruce stands; and from 0.02–0.03 to 0.05–0.06 cm per year in dwarf pine stands. Mean annual increment of dwarf Siberian pine increases with aging and increment of other

  17. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  18. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs are graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
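
    A minimal sketch of the first-order ARMA graph-filter idea described in the abstract (our reading of the general recursion; the coefficient names and toy graph are ours): a recursion driven by a graph shift operator converges, for a sufficiently small feedback coefficient, to a rational (ARMA) graph frequency response applied to the input signal.

```python
import numpy as np

# ARMA(1) graph filter sketch: iterate y <- psi * S @ y + phi * x.
# For |psi| * ||S|| < 1 the iteration converges to y* = phi * (I - psi S)^(-1) x,
# i.e. a rational response phi / (1 - psi * lambda) in the graph frequency domain.
def arma1_graph_filter(S, x, psi, phi, n_iter=200):
    y = np.zeros_like(x)
    for _ in range(n_iter):
        y = psi * (S @ y) + phi * x
    return y

# Toy graph: path graph on 6 nodes; shift operator = Laplacian rescaled to unit norm
A = np.diag(np.ones(5), 1) + np.diag(np.ones(5), -1)
L = np.diag(A.sum(axis=1)) - A
S = L / np.linalg.norm(L, 2)
x = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 1.0])      # graph signal to be filtered

psi, phi = 0.5, 1.0                                # |psi| * ||S|| = 0.5 < 1, so it converges
y_iter = arma1_graph_filter(S, x, psi, phi)
y_exact = phi * np.linalg.solve(np.eye(6) - psi * S, x)
print("max |iterative - closed form| =", float(np.max(np.abs(y_iter - y_exact))))
```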

  19. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff⁰ ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models

  20. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
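
    The robust averaging idea can be pictured with a minimal sketch: reject whole phase maps that contain too many voids, then prune per-pixel outliers with a median/MAD test before computing the mean and standard deviation. The thresholds and the MAD criterion are illustrative assumptions, not the rejection rules of the algorithm described above.

```python
import numpy as np

def robust_phase_average(phase_maps, min_valid_frac=0.95, k=3.5):
    """Robustly average a stack of phase maps (n_maps, H, W); NaN marks voids.

    1. Discard maps whose fraction of valid (finite) pixels is too low.
    2. Per pixel, prune samples more than k scaled-MADs from the median.
    3. Return the mean and standard deviation of the surviving samples.
    Thresholds here are illustrative, not the published criteria.
    """
    stack = np.asarray(phase_maps, dtype=float)
    valid_frac = np.mean(np.isfinite(stack), axis=(1, 2))
    stack = stack[valid_frac >= min_valid_frac]                      # step 1

    med = np.nanmedian(stack, axis=0)
    mad = 1.4826 * np.nanmedian(np.abs(stack - med), axis=0) + 1e-12
    stack = np.where(np.abs(stack - med) > k * mad, np.nan, stack)   # step 2

    return np.nanmean(stack, axis=0), np.nanstd(stack, axis=0)       # step 3

# Synthetic usage: 20 noisy maps, one of which contains a large-area defect.
rng = np.random.default_rng(0)
maps = rng.normal(0.0, 0.01, size=(20, 64, 64))
maps[3, :32, :] += 5.0                      # simulate an unwrapping artifact
avg, sigma = robust_phase_average(maps)
```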

  1. Topological quantization of ensemble averages

    International Nuclear Information System (INIS)

    Prodan, Emil

    2009-01-01

    We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states

  2. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter, so it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed, but they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of the impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
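
    As a point of comparison, conventional time domain averaging simply cuts the signal into whole periods and averages them; the sketch below assumes an integer number of samples per period, which is precisely the simplification behind the period cutting error that FTDA avoids.

```python
import numpy as np

def time_domain_average(signal, samples_per_period):
    """Conventional TDA: stack whole periods and average them.

    The leftover tail is discarded and the period is rounded to an integer
    number of samples, which is the source of the period cutting error (PCE)
    that the flexible TDA technique above is designed to eliminate.
    """
    n_periods = len(signal) // samples_per_period
    segments = signal[:n_periods * samples_per_period].reshape(
        n_periods, samples_per_period)
    return segments.mean(axis=0)

# Toy usage: a 50 Hz periodic component (plus its 3rd harmonic) buried in noise.
fs, f0 = 10_000, 50
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * 3 * f0 * t)
noisy = clean + np.random.normal(0.0, 1.0, clean.size)
averaged = time_domain_average(noisy, samples_per_period=fs // f0)
```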

  3. The effects of disjunct sampling and averaging time on maximum mean wind speeds

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Mann, J.

    2006-01-01

    Conventionally, the 50-year wind is calculated on the basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours. We call this disjunct sampling. It may also happen that the wind speeds are averaged over a longer time...

  4. The average Indian female nose.

    Science.gov (United States)

    Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh

    2011-12-01

    This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.

  5. Expanding the Annual Irrigation Maps (AIM) Product to the entire High Plains Aquifer (HPA): Addressing the Challenges of Cotton and Deficit-Irrigated Fields

    Science.gov (United States)

    Rapp, J. R.; Deines, J. M.; Kendall, A. D.; Hyndman, D. W.

    2017-12-01

    The High Plains Aquifer (HPA) is the most extensively irrigated aquifer in the continental United States and is the largest major aquifer in North America with an area of 500,000 km2. Increased demand for agricultural products has led to expanded irrigation extent, but brought with it declining groundwater levels that have made irrigation unsustainable in some locations. Understanding these irrigation dynamics and mapping irrigated areas through time are essential for future sustainable agricultural practices and hydrological modeling. Map products using remote sensing have only recently been able to track annual dynamics at relatively high spatial resolution (30 m) for a large portion of the northern HPA. However, follow-on efforts to expand these maps to the entire HPA have met with difficulty due to the challenge of distinguishing irrigation in crop types that are commonly deficit- or partially-irrigated. Expanding these maps to the full HPA requires addressing unique features of partially irrigated fields and irrigated cotton, a major water user in the southern HPA. Working in Google Earth Engine, we used all available Landsat imagery to generate annual time series of vegetation indices. We combined this information with climate covariables, planting dates, and crop-specific training data to algorithmically separate fully irrigated, partially irrigated, and non-irrigated field locations. The classification scheme was then applied to produce annual maps of irrigation across the entire HPA. The extensive use of ancillary data and the "greenness" time series for the algorithmic classification generally increased accuracy relative to previous efforts. High-accuracy, representative map products of irrigation extent capable of detecting crop type and irrigation intensity within aquifers will be an essential tool to monitor the sustainability of global aquifers and to provide a scientific basis for political and economic decisions affecting those aquifers.
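
    The classification step described above can be pictured as a supervised learner fed with per-field features from the greenness time series plus climate covariables, predicting three labels (non-, partially and fully irrigated). The random-forest choice, feature names and synthetic data below are illustrative assumptions only; the AIM workflow itself runs in Google Earth Engine on Landsat-derived time series.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 3000
# Hypothetical per-field features: peak greenness, season-integrated greenness,
# and an aridity covariable (precipitation minus reference ET, mm).
X = np.column_stack([
    rng.uniform(0.2, 0.9, n),
    rng.uniform(5.0, 40.0, n),
    rng.uniform(-600.0, 200.0, n),
])
# Synthetic labels: 0 = non-irrigated, 1 = partially irrigated, 2 = fully irrigated.
score = X[:, 0] + 0.0005 * (-X[:, 2]) + rng.normal(0.0, 0.1, n)
y = np.digitize(score, [0.7, 1.0])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```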

  6. April 25, 2003, FY2003 Progress Summary and FY2002 Program Plan, Statement of Work and Deliverables for Development of High Average Power Diode-Pumped Solid State Lasers,and Complementary Technologies, for Applications in Energy and Defense

    International Nuclear Information System (INIS)

    Meier, W; Bibeau, C

    2005-01-01

    The High Average Power Laser Program (HAPL) is a multi-institutional, synergistic effort to develop inertial fusion energy (IFE). This program is building a physics and technology base to complement the laser-fusion science being pursued by DOE Defense programs in support of Stockpile Stewardship. The primary institutions responsible for overseeing and coordinating the research activities are the Naval Research Laboratory (NRL) and Lawrence Livermore National Laboratory (LLNL). The current LLNL proposal is a companion document to the one submitted by NRL, for which the driver development element is focused on the krypton fluoride excimer laser option. The NRL and LLNL proposals also jointly pursue complementary activities with the associated rep-rated laser technologies relating to target fabrication, target injection, final optics, fusion chamber, target physics, materials and power plant economics. This proposal requests continued funding in FY03 to support LLNL in its program to build a 1 kW, 100 J, diode-pumped, crystalline laser, as well as research into high gain fusion target design, fusion chamber issues, and survivability of the final optic element. These technologies are crucial to the feasibility of inertial fusion energy power plants and also have relevance in rep-rated stewardship experiments. The HAPL Program pursues technologies needed for laser-driven IFE. System level considerations indicate that a rep-rated laser technology will be needed, operating at 5-10 Hz. Since a total energy of ∼2 MJ will ultimately be required to achieve suitable target gain with direct drive targets, the architecture must be scaleable. The Mercury Laser is intended to offer such an architecture. Mercury is a solid state laser that incorporates diodes, crystals and gas cooling technologies

  7. Research & development and growth: A Bayesian model averaging analysis

    Czech Academy of Sciences Publication Activity Database

    Horváth, Roman

    2011-01-01

    Roč. 28, č. 6 (2011), s. 2669-2673 ISSN 0264-9993. [Society for Non-linear Dynamics and Econometrics Annual Conference. Washington DC, 16.03.2011-18.03.2011] R&D Projects: GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Keywords: Research and development * Growth * Bayesian model averaging Subject RIV: AH - Economics Impact factor: 0.701, year: 2011 http://library.utia.cas.cz/separaty/2011/E/horvath-research & development and growth a bayesian model averaging analysis.pdf

  8. NERSC 2001 Annual Report; ANNUAL

    International Nuclear Information System (INIS)

    Hules, John

    2001-01-01

    The National Energy Research Scientific Computing Center (NERSC) is the primary computational resource for scientific research funded by the DOE Office of Science. The Annual Report for FY2001 includes a summary of recent computational science conducted on NERSC systems (with abstracts of significant and representative projects); information about NERSC's current systems and services; descriptions of Berkeley Lab's current research and development projects in applied mathematics, computer science, and computational science; and a brief summary of NERSC's Strategic Plan for 2002-2005

  9. Annual report 1979

    International Nuclear Information System (INIS)

    1979-12-01

    In this annual report the work done at the named institute is described. This concerns experiments with synchrotron radiation, high energy physics experiments at the PETRA and DORIS storage rings, studies of MFS interactions, and some neutrino experiments at CERN. A list of publications is included. (HSI)

  10. NIKHEF Annual Report 1982

    International Nuclear Information System (INIS)

    1983-01-01

    In this annual report 1982, the NIKHEF research programs in high-energy physics, nuclear physics and radiochemistry are described in a broad context. Next, the reports of the individual projects of section-H and section-K are described in detail. Finally, the report gives some statistical information on publications, colloquia and co-workers. (Auth.)

  11. NIKHEF Annual Report 1981

    International Nuclear Information System (INIS)

    1982-01-01

    This annual report presents the activities of the Dutch National Institute for Nuclear and High Energy Physics (NIKHEF) during its first year. Following a general introduction to the research areas in which NIKHEF is involved, 29 brief reports from the project groups are presented. Details concerning personnel, participation in councils and committees, finances, publications, colloquia and participation in congresses and schools are included. (Auth.)

  12. High prevalence of cestodes in Artemia spp. throughout the annual cycle: relationship with abundance of avian final hosts

    Science.gov (United States)

    Sánchez, Marta I.; Nikolov, Pavel N.; Georgieva, Darina D.; Georgiev, Boyko B.; Vasileva, Gergana P.; Pankov, Plamen; Paracuellos, Mariano; Lafferty, Kevin D.; Green, Andy J.

    2013-01-01

    Brine shrimp, Artemia spp., act as intermediate hosts for a range of cestode species that use waterbirds as their final hosts. These parasites can have marked influences on shrimp behavior and fecundity, generating the potential for cascading effects in hypersaline food webs. We present the first comprehensive study of the temporal dynamics of cestode parasites in natural populations of brine shrimp throughout the annual cycle. Over a 12-month period, clonal Artemia parthenogenetica were sampled in the Odiel marshes in Huelva, and the sexual Artemia salina was sampled in the Salinas de Cerrillos in Almería. Throughout the year, 4–45 % of A. parthenogenetica were infected with cestodes (mean species richness = 0.26), compared to 27–72 % of A. salina (mean species richness = 0.64). Ten cestode species were recorded. Male and female A. salina showed similar levels of parasitism. The most prevalent and abundant cestodes were those infecting the most abundant final hosts, especially the Greater Flamingo Phoenicopterus ruber. In particular, the flamingo parasite Flamingolepis liguloides had a prevalence of up to 43 % in A. parthenogenetica and 63.5 % in A. salina in a given month. Although there was strong seasonal variation in prevalence, abundance, and intensity of cestode infections, seasonal changes in bird counts were weak predictors of the dynamics of cestode infections. However, infection levels of Confluaria podicipina in A. parthenogenetica were positively correlated with the number of their black-necked grebe Podiceps nigricollis hosts. Similarly, infection levels of Anomotaenia tringae and Anomotaenia microphallos in A. salina were correlated with the number of shorebird hosts present the month before. Correlated seasonal transmission structured the cestode community, leading to more multiple infections than expected by chance.

  13. Annual report

    International Nuclear Information System (INIS)

    1986-01-01

    This is the thirty-ninth annual report of the Atomic Energy Control Board. The period covered by this report is the year ending March 31, 1986. The Atomic Energy Control Board (AECB) was established in 1946 by the Atomic Energy Control Act (AEC Act) (Revised Statutes of Canada (R.S.C.) 1970 cA19). It is a departmental corporation (Schedule B) within the meaning and purpose of the Financial Administration Act. The AECB controls the development, application and use of atomic energy in Canada, and participates on behalf of Canada in international measures of control. The AECB is also responsible for the administration of the Nuclear Liability Act (R.S.C. 1970 c29 1st Supp) as amended, including the designation of nuclear installations and the prescription of basic insurance to be carried by the operators of such nuclear installations. The AECB reports to Parliament through a designated Minister, currently the Minister of Energy, Mines and Resources.

  14. Building a Grad Nation: Progress and Challenge in Ending the High School Dropout Epidemic. Annual Update, 2010-2011

    Science.gov (United States)

    Balfanz, Robert; Bridgeland, John M.; Fox, Joanna Hornig; Moore, Laura A.

    2011-01-01

    America continues to make progress in meeting its high school dropout challenge. Leaders in education, government, nonprofits and business have awakened to the individual, social and economic costs of the dropout crisis and are working together to solve it. This year, all states, districts, and schools are required by law to calculate high school…

  15. Building a Grad Nation: Progress and Challenge in Ending the High School Dropout Epidemic. Executive Summary. Annual Update, 2012

    Science.gov (United States)

    Balfanz, Robert; Bridgeland, John M.; Bruce, Mary; Fox, Joanna Hornig

    2012-01-01

    This 2012 report shows that high school graduation rates continue to improve nationally and across many states and school districts, with 12 states accounting for the majority of new graduates over the last decade. Tennessee and New York continue to lead the nation with double-digit gains in high school graduation rates over the same period. The…

  16. Building a Grad Nation: Progress and Challenge in Ending the High School Dropout Epidemic. Annual Update, 2012

    Science.gov (United States)

    Balfanz, Robert; Bridgeland, John M.; Bruce, Mary; Fox, Joanna Hornig

    2012-01-01

    In 2010, the authors shared a Civic Marshall Plan to create a Grad Nation. Through that first report and subsequent update, they saw hopeful signs of progress in boosting high school graduation rates in communities across the country. This 2012 report shows that high school graduation rates continue to improve nationally and across many states and…

  17. Research on stable, high-efficiency, amorphous silicon multijunction modules. Annual subcontract report, 1 May 1991--30 April 1992

    Energy Technology Data Exchange (ETDEWEB)

    Catalano, A.; Bennett, M.; Chen, L.; D'Aiello, R.; Fieselmann, B.; Li, Y.; Newton, J.; Podlesny, R.; Yang, L. [Solarex Corp., Newtown, PA (United States). Thin Film Div.]

    1992-08-01

    This report describes work to demonstrate a multijunction module with a "stabilized" efficiency (600 h, 50°C, AM1.5) of 10.5%. Triple-junction devices and modules using a-Si:H alloys with carbon and germanium were developed to meet program goals. ZnO was used to provide a high optical transmission front contact. Proof of concept was obtained for several important advances deemed to be important for obtaining high (12.5%) stabilized efficiency. They were (1) stable, high-quality a-SiC:H devices and (2) high-transmission, textured ZnO. Although these developments were not scaled up and included in modules, triple-junction module efficiencies as high as 10.85% were demonstrated. NREL measured 9.62% and 9.00% indoors and outdoors, respectively. The modules are expected to lose no more than 20% of their initial performance. 28 refs.

  18. Effect of temporal averaging of meteorological data on predictions of groundwater recharge

    Directory of Open Access Journals (Sweden)

    Batalha Marcia S.

    2018-06-01

    Full Text Available Accurate estimates of infiltration and groundwater recharge are critical for many hydrologic, agricultural and environmental applications. Anticipated climate change in many regions of the world, especially in tropical areas, is expected to increase the frequency of high-intensity, short-duration precipitation events, which in turn will affect the groundwater recharge rate. Estimates of recharge are often obtained using monthly or even annually averaged meteorological time series data. In this study we employed the HYDRUS-1D software package to assess the sensitivity of groundwater recharge calculations to using meteorological time series of different temporal resolutions (i.e., hourly, daily, weekly, monthly and yearly averaged precipitation and potential evaporation rates). Calculations were applied to three sites in Brazil having different climatological conditions: a tropical savanna (the Cerrado), a humid subtropical area (the temperate southern part of Brazil), and a very wet tropical area (Amazonia). To simplify our current analysis, we did not consider any land use effects by ignoring root water uptake. Temporal averaging of meteorological data was found to lead to significant bias in predictions of groundwater recharge, with much greater estimated recharge rates in case of very uneven temporal rainfall distributions during the year involving distinct wet and dry seasons. For example, at the Cerrado site, using daily averaged data produced recharge rates of up to 9 times greater than using yearly averaged data. In all cases, an increase in the time of averaging of meteorological data led to lower estimates of groundwater recharge, especially at sites having coarse-textured soils. Our results show that temporal averaging limits the ability of simulations to predict deep penetration of moisture in response to precipitation, so that water remains in the upper part of the vadose zone subject to upward flow and evaporation.
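
    The direction of the bias reported above can be reproduced with a deliberately crude bucket model (a stand-in for HYDRUS-1D, not a replacement): recharge only occurs when storage exceeds a threshold, so smearing the same annual rainfall evenly over the year removes the intense events that drive deep drainage. All parameter values below are hypothetical.

```python
import numpy as np

def bucket_recharge(rain, pet, capacity=100.0):
    """Toy daily bucket model: recharge is the overflow above a storage capacity.

    rain and pet are daily precipitation and potential evaporation (mm/day).
    This is only an illustration of the averaging effect, not a vadose-zone model.
    """
    store, recharge = 0.5 * capacity, 0.0
    for p, e in zip(rain, pet):
        store = max(store + p - e, 0.0)
        if store > capacity:
            recharge += store - capacity
            store = capacity
    return recharge

rng = np.random.default_rng(1)
days = 365
# Savanna-like forcing: skewed daily rainfall confined to a 150-day wet season.
rain = np.where(np.arange(days) < 150, rng.gamma(0.3, 25.0, days), 0.0)
pet = np.full(days, 4.0)                                   # mm/day
smeared = np.full(days, rain.sum() / days)                 # same total, yearly averaged

print("recharge with daily forcing   :", round(bucket_recharge(rain, pet), 1), "mm")
print("recharge with averaged forcing:", round(bucket_recharge(smeared, pet), 1), "mm")
```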

  19. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  20. Advances in High-Throughput Speed, Low-Latency Communication for Embedded Instrumentation (7th Annual SFAF Meeting, 2012)

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, Scott

    2012-06-01

    Scott Jordan on "Advances in high-throughput speed, low-latency communication for embedded instrumentation" at the 2012 Sequencing, Finishing, Analysis in the Future Meeting held June 5-7, 2012 in Santa Fe, New Mexico.

  1. Forschungszentrum Juelich. Annual report 2015

    International Nuclear Information System (INIS)

    Frick, Frank; Lueers, Katja; Roegener, Wiebke; Stettien, Annette; Trautwein, Ilse; Stahl-Busse, Brigitte

    2016-07-01

    The annual report 2015 of the Forschungszentrum Juelich covers research activities, including highlights of brain science, electrically controllable quantum bits, climate science and atmosphere research; knowledge management, including education and international cooperation; and an economic survey.

  2. Forschungszentrum Juelich. Annual report 2013

    International Nuclear Information System (INIS)

    Frick, Frank; Roegener, Wiebke

    2014-07-01

    The annual report 2013 of the Forschungszentrum Juelich covers research activities, including highlights of brain science, electrically controllable quantum bits, climate science and atmosphere research; knowledge management, including education and international cooperation; and an economic survey.

  3. Annual surveillance by CA125 and transvaginal ultrasound for ovarian cancer in both high-risk and population risk women is ineffective

    DEFF Research Database (Denmark)

    Woodward, E R; Sleightholme, H V; Considine, A M

    2007-01-01

    OBJECTIVE: To assess the efficacy of annual CA125 and transvaginal ultrasound (TVU) scan as surveillance for ovarian cancer. DESIGN: Retrospective audit. SETTING: NHS Trust. POPULATION: Three hundred and forty-one asymptomatic women enrolled for ovarian cancer screening: 179 were in a high...... and local cancer registry data. MAIN OUTCOME MEASURES: Ovarian cancers occurring in study population. Sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of TVU, and CA125 as a screening tool for ovarian cancer. RESULTS: Four ovarian cancers and one endometrial...... cancer occurred. One ovarian cancer was detected at surveillance, three occurred in women who presented symptomatically between screenings. Thirty women underwent exploratory surgery because of abnormal findings at surveillance. Two women had cancer (PPV = 6.7%); one had ovarian cancer and the other...

  4. Determining Mean Annual Energy Production

    DEFF Research Database (Denmark)

    Kofoed, Jens Peter; Folley, Matt

    2016-01-01

    This robust book presents all the information required for numerical modelling of a wave energy converter, together with a comparative review of the different available techniques. The calculation of the mean annual energy production (MAEP) is critical to the assessment of the levelized cost...... of energy for a wave energy converter or wave farm. Fundamentally, the MAEP is equal to the sum of the product of the power capture of a set of sea-states and their average annual occurrence. In general, it is necessary in the calculation of the MAEP to achieve a balance between computational demand...
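
    Spelled out, the MAEP definition referred to above is an occurrence-weighted sum over the discretised sea states; the notation below is generic rather than taken from the book.

```latex
% MAEP over N binned sea states: P_i is the mean power captured in sea state i
% and T_i its average annual occurrence (hours per year), so MAEP is energy/year.
\mathrm{MAEP} \;=\; \sum_{i=1}^{N} P_i \, T_i
```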

  5. Estimation of annual effective dose from 226Ra and 228Ra due to consumption of foodstuffs by inhabitants of the high level natural radiation area of Ramsar, Iran

    International Nuclear Information System (INIS)

    Fathivand, A.A.; Asefi, M.; Amidi, A.

    2005-01-01

    Full text: A knowledge of natural radioactivity in man and his environment is important since naturally occurring radionuclides are the major source of radiation exposure to man. Radioactive nuclides present in the natural environment enter the human body mainly through food and water. Besides, measurement of naturally occurring radionuclides in the environment can be used not only as a reference when routine releases from nuclear installations or accidental radiation exposures are assessed, but also as a baseline to evaluate the impact caused by non-nuclear activities. In Iran, measurements of natural and artificial radionuclides in environmental samples in normal and high-background radiation areas have been performed by some investigators, but no information has been available on 226Ra and 228Ra in foodstuffs. Therefore we have started measurements of 226Ra and 228Ra in foodstuffs of Ramsar, a coastal city in the north of Iran known as one of the world's high level natural radiation areas, using a low-level gamma spectrometry measurement system. The results from our measurements and food consumption rates for inhabitants of Ramsar have been used to estimate the annual effective dose due to consumption of foodstuffs by inhabitants of the city. A total of 33 samples from 11 different foodstuffs, including root vegetables (beetroot), leafy vegetables (lettuce, parsley and spinach), tea, meat, chicken, pea, broad bean, rice, and cheese, were purchased from markets and analyzed for their 226Ra and 228Ra concentrations. The highest concentrations of 226Ra and 228Ra were determined in tea samples, at 1570 and 1140 mBq kg^-1 respectively, and the maximum estimated annual effective doses from 226Ra and 228Ra due to consumption of foodstuffs were determined to be 19.22 and 0.71 μSv, from rice and meat samples respectively
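
    Dose estimates of this kind multiply activity concentration by annual intake and an ingestion dose coefficient, summed over radionuclides and foodstuffs. In the sketch below the tea concentrations are taken from the abstract, while the intake figures, the rice concentrations and the dose coefficients are assumed illustrative values, not the paper's data.

```python
# Annual ingestion dose: dose (Sv/yr) = sum_i C_i (Bq/kg) * I_i (kg/yr) * DCF_i (Sv/Bq)
# Dose coefficients, intakes and the rice concentrations are illustrative assumptions.

DCF = {"Ra-226": 2.8e-7, "Ra-228": 6.9e-7}   # assumed adult ingestion coefficients, Sv/Bq

foods = {
    # food: (annual intake kg/yr, {nuclide: concentration Bq/kg})
    "tea":  (1.5,   {"Ra-226": 1.57, "Ra-228": 1.14}),   # concentrations from the abstract
    "rice": (100.0, {"Ra-226": 0.05, "Ra-228": 0.03}),   # hypothetical values
}

dose_sv = sum(intake * conc * DCF[nuclide]
              for intake, concentrations in foods.values()
              for nuclide, conc in concentrations.items())
print(f"annual effective dose ≈ {dose_sv * 1e6:.2f} µSv")
```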

  6. High temperature fracture and fatigue of ceramics. Annual technical progress report No. 6, August 15, 1994--August 14, 1995

    Energy Technology Data Exchange (ETDEWEB)

    Cox, B.

    1996-04-01

    This report covers work done in the first year of our new contract "High Temperature Fracture and Fatigue of Ceramics," which commenced in August 1995 as a follow-on from our prior contract "Mechanisms of Mechanical Fatigue in Ceramics." Our activities have consisted mainly of studies of the failure of fibrous ceramic matrix composites (CMCs) at high temperature, with a little fundamental work on the role of stress redistribution in the statistics of fracture and cracking in the presence of viscous fluids.

  7. High Energy Physics Division semiannual report of research activities. Semi-annual progress report, July 1, 1995--December 31, 1995

    International Nuclear Information System (INIS)

    Norem, J.; Bajt, D.; Rezmer, R.; Wagner, R.

    1996-10-01

    This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period July 1, 1995 - December 31, 1995. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included

  8. Physics of intense light ion beams and production of high energy density in matter. Annual report 1994

    International Nuclear Information System (INIS)

    Bluhm, H.J.

    1995-06-01

    This report presents the results obtained in 1994 within the FZK-program on 'Physics of intense ion beams and pulsed plasmas'. It describes the present status of the 6 MW, 2 TW pulsed generator KALIF-HELIA, the production and focussing of high power ion beams and numerical simulations and experiments related to the hydrodynamics of beam matter interaction. (orig.) [de

  9. DESIGN STUDY FOR A LOW-ENRICHED URANIUM CORE FOR THE HIGH FLUX ISOTOPE REACTOR, ANNUAL REPORT FOR FY 2010

    Energy Technology Data Exchange (ETDEWEB)

    Cook, David Howard [ORNL; Freels, James D [ORNL; Ilas, Germina [ORNL; Jolly, Brian C [ORNL; Miller, James Henry [ORNL; Primm, Trent [ORNL; Renfro, David G [ORNL; Sease, John D [ORNL; Pinkston, Daniel [ORNL

    2011-02-01

    This report documents progress made during FY 2010 in studies of converting the High Flux Isotope Reactor (HFIR) from highly enriched uranium (HEU) fuel to low enriched uranium (LEU) fuel. Conversion from HEU to LEU will require a change in fuel form from uranium oxide to a uranium-molybdenum alloy. With axial and radial grading of the fuel foil and an increase in reactor power to 100 MW, calculations indicate that the HFIR can be operated with LEU fuel with no degradation in performance to users from the current level. Studies are reported on support for a thermal-hydraulic test loop design, the implementation of a finite-element thermal-hydraulic analysis capability, and infrastructure tasks at HFIR to upgrade the facility for operation at 100 MW. A discussion of difficulties with preparing a fuel specification for the uranium-molybdenum alloy is provided. Continuing development in the definition of the fuel fabrication process is described.

  10. Design Study for a Low-Enriched Uranium Core for the High Flux Isotope Reactor, Annual Report for FY 2008

    Energy Technology Data Exchange (ETDEWEB)

    Primm, Trent [ORNL; Chandler, David [ORNL; Ilas, Germina [ORNL; Miller, James Henry [ORNL; Sease, John D [ORNL; Jolly, Brian C [ORNL

    2009-03-01

    This report documents progress made during FY 2008 in studies of converting the High Flux Isotope Reactor (HFIR) from highly enriched uranium (HEU) fuel to low-enriched uranium (LEU) fuel. Conversion from HEU to LEU will require a change in fuel form from uranium oxide to a uranium-molybdenum alloy. With axial and radial grading of the fuel foil and an increase in reactor power to 100 MW, calculations indicate that the HFIR can be operated with LEU fuel with no degradation in reactor performance from the current level. Results of selected benchmark studies imply that calculations of LEU performance are accurate. Scoping experiments with various manufacturing methods for forming the LEU alloy profile are presented.

  11. Annual report 1983

    International Nuclear Information System (INIS)

    Van de Vyver, R.E.

    1983-01-01

    In 1983, the various research projects in which the nuclear physics laboratory of Ghent State University is involved were continued. In the present Annual Report, the results obtained in the fields of photofission, photonuclear reactions, positron annihilation, dosimetry and nuclear theory are summarized. The new 10 MeV high duty factor linear electron accelerator is presently being installed; performance tests will be carried out until July 1984, after which this facility will gradually become available for nuclear research purposes. (AF)

  12. NERSC 1998 annual report

    Energy Technology Data Exchange (ETDEWEB)

    Hules, John (ed.)

    1999-03-01

    This 1998 annual report from the National Energy Research Scientific Computing Center (NERSC) presents the year in review in the following categories: Computational Science; Computer Science and Applied Mathematics; and Systems and Services. Also presented are science highlights in the following categories: Basic Energy Sciences; Biological and Environmental Research; Fusion Energy Sciences; High Energy and Nuclear Physics; and Advanced Scientific Computing Research and Other Projects.

  13. IKF - annual report 1982

    International Nuclear Information System (INIS)

    Schmidt-Boecking, H.; Steuer, E.

    This annual report contains extended abstracts of the work performed at the named institute during 1982, together with a list of publications. The work concerns nuclear structure and nuclear reactions, high-energy heavy-ion physics, heavy ion-atom collisions, nuclear solid-state physics, solid-state particle detectors, the application of nuclear methods and mass spectroscopy, ion source development, instrumental development and data processing, and interdisciplinary cooperation, as well as the Van de Graaff accelerator facilities. (HSI) [de

  14. Annual report 1976

    International Nuclear Information System (INIS)

    Nilsson, A.

    1976-01-01

    This annual report contains research reports from the various groups of the Research Institute of Physics, Stockholm. Reports are made by workers in the Atomic and Molecular Physics group, the Surface Physics group, the Nuclear Physics group, the group researching into High Energy Physics and related topics and the Instrumentation and Methods group. The report also contains a list of the papers published by members of the Institute during the year and a list of the theses which were presented. (B.D.)

  15. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by "exact" methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality.
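
    The quantity being approximated is the spectrum-averaged beta energy of a transition; its standard definition for an endpoint energy E_0 is shown below as background (the report's own approximation formulas are not reproduced here).

```latex
% Spectrum-averaged beta energy for a transition with endpoint energy E_0;
% N(E) is the beta energy spectrum (for allowed transitions, the usual
% Fermi shape involving the Fermi function F(Z,E)).
\bar{E}_{\beta} \;=\;
  \frac{\displaystyle\int_{0}^{E_{0}} E\,N(E)\,\mathrm{d}E}
       {\displaystyle\int_{0}^{E_{0}} N(E)\,\mathrm{d}E}
```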

  16. Quarterly, Bi-annual and Annual Reports

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Quarterly, Bi-annual and Annual Reports are periodic reports issued for public release. For the deep set fishery these reports are issued quarterly and annually....

  17. Chemical decomposition of high-level nuclear waste storage/disposal glasses under irradiation. 1997 annual progress report

    International Nuclear Information System (INIS)

    Griscom, D.L.; Merzbacher, C.I.

    1997-01-01

    'The objective of this research is to use the sensitive technique of electron spin resonance (ESR) to look for evidence of radiation-induced chemical decomposition of vitreous forms contemplated for immobilization of plutonium and/or high-level nuclear wastes, to interpret this evidence in terms of existing knowledge of glass structure, and to recommend certain materials for further study by other techniques, particularly electron microscopy and measurements of gas evolution by high-vacuum mass spectroscopy. Previous ESR studies had demonstrated that an effect of γ rays on a simple binary potassium silicate glass was to induce superoxide (O2−) and ozonide (O3−) as relatively stable products of long-term irradiation. Accordingly, some of the first experiments performed as a part of the present effort involved repeating this work. A glass of composition 44 K2O : 56 SiO2 was prepared from reagent grade K2CO3 and SiO2 powders melted in a Pt crucible in air at 1,200°C for 1.5 hr. A sample irradiated to a dose of 1 MGy (1 MGy = 10^8 rad) indeed yielded the same ESR results as before. To test the notion that the complex oxygen ions detected may be harbingers of radiation-induced phase separation or bubble formation, a small-angle neutron scattering (SANS) experiment was performed. SANS is theoretically capable of detecting voids or bubbles as small as 10 Å in diameter. A preliminary experiment was carried out with the collaboration of Dr. John Barker (NIST). The SANS spectra for the irradiated and unirradiated samples were indistinguishable. A relatively high incoherent background (probably due to the presence of protons) may obscure scattering from small gas bubbles and therefore decrease the effective resolution of this technique. No further SANS experiments are planned at this time.'

  18. Low-Enriched Uranium Fuel Conversion Activities for the High Flux Isotope Reactor, Annual Report for FY 2011

    Energy Technology Data Exchange (ETDEWEB)

    Renfro, David G [ORNL; Cook, David Howard [ORNL; Freels, James D [ORNL; Griffin, Frederick P [ORNL; Ilas, Germina [ORNL; Sease, John D [ORNL; Chandler, David [ORNL

    2012-03-01

    This report describes progress made during FY11 in ORNL activities to support converting the High Flux Isotope Reactor (HFIR) from high-enriched uranium (HEU) fuel to low-enriched uranium (LEU) fuel. Conversion from HEU to LEU will require a change in fuel form from uranium oxide to a uranium-molybdenum (UMo) alloy. With both radial and axial contouring of the fuel foil and an increase in reactor power to 100 MW, calculations indicate that the HFIR can be operated with LEU fuel with no degradation in performance to users from the current levels achieved with HEU fuel. Studies are continuing to demonstrate that the fuel thermal safety margins can be preserved following conversion. Studies are also continuing to update other aspects of the reactor steady state operation and accident response for the effects of fuel conversion. Technical input has been provided to Oregon State University in support of their hydraulic testing program. The HFIR conversion schedule was revised and provided to the GTRI program. In addition to HFIR conversion activities, technical support was provided directly to the Fuel Fabrication Capability program manager.

  19. Low-Enriched Uranium Fuel Conversion Activities for the High Flux Isotope Reactor, Annual Report for FY 2011

    International Nuclear Information System (INIS)

    Renfro, David G.; Cook, David Howard; Freels, James D.; Griffin, Frederick P.; Ilas, Germina; Sease, John D.; Chandler, David

    2012-01-01

    This report describes progress made during FY11 in ORNL activities to support converting the High Flux Isotope Reactor (HFIR) from high-enriched uranium (HEU) fuel to low-enriched uranium (LEU) fuel. Conversion from HEU to LEU will require a change in fuel form from uranium oxide to a uranium-molybdenum (UMo) alloy. With both radial and axial contouring of the fuel foil and an increase in reactor power to 100 MW, calculations indicate that the HFIR can be operated with LEU fuel with no degradation in performance to users from the current levels achieved with HEU fuel. Studies are continuing to demonstrate that the fuel thermal safety margins can be preserved following conversion. Studies are also continuing to update other aspects of the reactor steady state operation and accident response for the effects of fuel conversion. Technical input has been provided to Oregon State University in support of their hydraulic testing program. The HFIR conversion schedule was revised and provided to the GTRI program. In addition to HFIR conversion activities, technical support was provided directly to the Fuel Fabrication Capability program manager.

  20. ZPR-3 Assembly 11: A cylindrical assembly of highly enriched uranium and depleted uranium with an average 235U enrichment of 12 atom % and a depleted uranium reflector.

    Energy Technology Data Exchange (ETDEWEB)

    Lell, R. M.; McKnight, R. D.; Tsiboulia, A.; Rozhikhin, Y.; National Security; Inst. of Physics and Power Engineering

    2010-09-30

    Over a period of 30 years, more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9 and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited for nuclear data validation and to form the basis for criticality safety benchmarks. A number of the Argonne ZPR/ZPPR critical assemblies have been evaluated as ICSBEP and IRPhEP benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was 235U or 239Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. ZPR-3 Assembly 11 (ZPR-3/11) was designed as a fast reactor physics benchmark experiment with an average core 235U enrichment of approximately 12 at.% and a depleted uranium reflector. Approximately 79.7% of the total fissions in this assembly occur above 100 keV, approximately 20.3% occur below 100 keV, and essentially none below 0.625 eV - thus the classification as a 'fast' assembly. This assembly is Fast Reactor Benchmark No. 8 in the Cross Section Evaluation

  1. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  2. Studies related to chemical mechanisms of gas formation in Hanford high-level nuclear wastes. 1997 annual progress report

    International Nuclear Information System (INIS)

    Barefield, E.K.; Liotta, C.L.; Neumann, H.M.

    1997-01-01

    'Work during the past year has been concentrated in three areas: Analysis of the Relative Contributions of Thermal versus Radiolytic Pathways for Complexant Decomposition in Tank 101SY; Synthesis of Potential Precursors to HNO/NO−; and Analysis of the Kinetics of Decomposition of Piloty's Acid at High [OH−]. The undergraduate student worked on the aluminum-catalyzed reactions of nitrite ion with 2-hydroxyethylamines. This is a follow-up to earlier work done under Westinghouse Hanford and PNNL funding that will be expanded to include an exploration of the complexation of nitrite ion by aluminum when Ms. Chalfant's lab skills are sufficiently established. A brief synopsis of work in each of the first three areas follows.'

  3. DOE-DARPA High-Performance Corrosion-Resistant Materials (HPCRM), Annual HPCRM Team Meeting & Technical Review

    Energy Technology Data Exchange (ETDEWEB)

    Farmer, J; Brown, B; Bayles, B; Lemieux, T; Choi, J; Ajdelsztajn, L; Dannenberg, J; Lavernia, E; Schoenung, J; Branagan, D; Blue, C; Peter, B; Beardsley, B; Graeve, O; Aprigliano, L; Yang, N; Perepezko, J; Hildal, K; Kaufman, L; Lewandowski, J; Perepezko, J; Hildal, K; Kaufman, L; Lewandowski, J; Boudreau, J

    2007-09-21

    The overall goal is to develop high-performance corrosion-resistant iron-based amorphous-metal coatings for prolonged trouble-free use in very aggressive environments: seawater & hot geothermal brines. The specific technical objectives are: (1) Synthesize Fe-based amorphous-metal coating with corrosion resistance comparable/superior to Ni-based Alloy C-22; (2) Establish processing parameter windows for applying and controlling coating attributes (porosity, density, bonding); (3) Assess possible cost savings through substitution of Fe-based material for more expensive Ni-based Alloy C-22; (4) Demonstrate practical fabrication processes; (5) Produce quality materials and data with complete traceability for nuclear applications; and (6) Develop, validate and calibrate computational models to enable life prediction and process design.

  4. High energy physics program at Texas A and M University. Annual report, April 1, 1991--March 31, 1992

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-01

    The Texas A&M experimental high energy physics program continued to reach significant milestones in each of its research initiatives during the course of the past year. We are participating in two major operating experiments, CDF and MACRO. In CDF, the Texas A&M group has spearheaded the test beam program to recalibrate the Forward Hadron Calorimeter for the upcoming CDF data run, as well as contributing to the ongoing analysis work on jets and b-quarks. In MACRO, we have assisted in the development of the final version of the wave form digitizing system being implemented for the entire scintillator system. The construction of the first six supermodules of the detector has been completed and all six are currently taking data with streamer chambers while four have the completed scintillator counter system up and running. We have built and tested prototypes of a liquid-scintillator fiber calorimeter system, in which internally reflecting channels are imbedded in a lead matrix and filled with liquid scintillator. This approach combines the performance features of fiber calorimetry and the radiation hardness of liquid scintillator, and is being developed for forward calorimetry at the SSC. The microstrip chamber is a new technology for precision track chambers that offers the performance required for future hadron colliders. The theoretical high energy physics program has continued to develop during the past funding cycle. We have continued the study of their very successful string-derived model that unifies all known interactions; flipped SU(5), which is the leading candidate for a TOE. Work has continued on some generalizations of the symmetries of string theory, known as W algebras. These are expected to have applications in two-dimensional conformal field theory, two-dimensional extensions of gravity and topological gravity and W-string theory.

  5. Annual report for the High Energy Physics Program at Texas A and M University, October 1, 1993--September 30, 1994

    International Nuclear Information System (INIS)

    1994-10-01

    The experimental and theoretical high energy physics programs at Texas A and M University have continued their ambitious research activities over the past year. On the experimental side, the authors have continued their participation in two major operating experiments, CDF and MACRO, and each of these programs has attained significant milestones during this period. Especially noteworthy are the CDF Collaboration's paper on the 'evidence' for the top quark and MACRO's completion of the construction of the 'Attico'. In CDF, the Texas A and M group continues to play a leading role in the plans for upgrading the silicon vertex detector, as well as supporting the ongoing running of this experiment during its current data-taking run. In addition, the group has focused its analysis efforts on studies of trilepton events as well as searches for supersymmetric particles. In MACRO, the authors have continued their work on the development of the final version of the waveform digitizing system. Within the past month the final production circuits have been assembled, and they are currently being tested at Texas A and M. The authors plan to complete this testing and commission the waveform digitizing system on the MACRO detector by the end of 1994. The theoretical high energy physics program has also continued to develop during the past funding cycle. D. Nanopoulos and colleagues have continued the study of their very successful string-derived model that unifies all known interactions, flipped SU(5), which is the leading candidate for a theory of everything. C. Pope has continued his work on generalizations of the symmetries of string theory, known as W algebras.

  6. A fast, high light output scintillator for gamma ray and neutron detection. Fifth Semi-Annual Report

    International Nuclear Information System (INIS)

    Entine, Gerald; Kanai, S.; Shah, M.S.; Leonard Cirignano, M.S.; Jarek Glodo; Van Loef, Edgar V.

    2003-01-01

    In view of the attractive properties of RbGd2Br7:Ce for gamma-ray and thermal neutron detection, and the lack of larger volume crystals, the goal of the Phase I project was to perform a rigorous investigation of the crystal growth of this exciting material and explore its capabilities for gamma-ray and thermal neutron detection. The Phase I research was very successful. All technical objectives were met and in many cases exceeded expectations. We were able to produce large (>1 cm3) RbGd2Br7:Ce crystals with excellent scintillation properties and demonstrated the possibility to detect thermal neutrons. As far as we are aware, our Phase I experiment was the first to demonstrate thermal neutron detection with RbGd2Br7:Ce. Clearly, the feasibility of the proposed research was adequately proven. The Phase II research builds on the successful results obtained during Phase I. Phase II will initially focus on optimizing the RbGd2Br7:Ce growth process to produce high quality, larger volume RbGd2Br7:Ce crystals. We will continue to use the versatile Bridgman technique. During this process, crystal growth parameters will be adjusted for optimal growth conditions. Our goal is to produce high quality RbGd2Br7:Ce crystals of size 1 inch x 1 inch x 1 inch (∼16 cm3). We will work on packaging aspects that allow efficient light collection and prevent crystal degradation. We will study and measure emission spectra, light yield, scintillation decay, energy and time resolution. The effects of variation in Ce concentration on the scintillation properties of RbGd2Br7:Ce will be examined in detail. Comprehensive gamma-ray spectroscopic and imaging studies will be conducted. Also, optimization of RbGd2Br7:Ce for thermal neutron detection will be addressed. Our initial studies will determine the optimal geometry of the RbGd2Br7:Ce crystals for neutron detection. For thermal neutron detection experiments, we will produce large area, thin samples in order to minimize gamma-ray sensitivity

  7. FY05 HPCRM Annual Report: High-Performance Corrosion-Resistant Iron-Based Amorphous Metal Coatings

    International Nuclear Information System (INIS)

    Farmer, J; Choi, J; Haslam, J; Day, S; Yang, N; Headley, T; Lucadamo, G; Yio, J; Chames, J; Gardea, A; Clift, M; Blue, G; Peters, W; Rivard, J; Harper, D; Swank, D; Bayles, R; Lemieux, E; Brown, R; Wolejsza, T; Aprigliano, L; Branagan, D; Marshall, M; Meacham, B; Aprigliano, L; Branagan, D; Marshall, M; Meacham, B; Lavernia, E; Schoenung, J; Ajdelsztajn, L; Dannenberg, J; Graeve, O; Lewandowski, J; Perepezko, J; Hildal, K; Kaufman, L; Boudreau, J

    2007-01-01

    New corrosion-resistant, iron-based amorphous metals have been identified from published data or developed through combinatorial synthesis, and tested to determine their relative corrosion resistance. Many of these materials can be applied as coatings with advanced thermal spray technology. Two compositions have corrosion resistance superior to wrought nickel-based Alloy C-22 (UNS No. N06022) in some very aggressive environments, including concentrated calcium-chloride brines at elevated temperature. Two Fe-based amorphous metal formulations have been found that appear to have corrosion resistance comparable to, or better than that of Ni-based Alloy C-22, based on breakdown potential and corrosion rate. Both Cr and Mo provide corrosion resistance, B enables glass formation, and Y lowers critical cooling rate (CCR). SAM1651 has yttrium added, and has a nominal critical cooling rate of only 80 Kelvin per second, while SAM2X7 (similar to SAM2X5) has no yttrium, and a relatively high critical cooling rate of 610 Kelvin per second. Both amorphous metal formulations have strengths and weaknesses. SAM1651 (yttrium added) has a low critical cooling rate (CCR), which enables it to be rendered as a completely amorphous thermal spray coating. Unfortunately, it is relatively difficult to atomize, with powders being irregular in shape. This causes the powder to be difficult to pneumatically convey during thermal spray deposition. Gas atomized SAM1651 powder has required cryogenic milling to eliminate irregularities that make flow difficult. SAM2X5 (no yttrium) has a high critical cooling rate, which has caused problems associated with devitrification. SAM2X5 can be gas atomized to produce spherical powders of SAM2X5, which enable more facile thermal spray deposition. The reference material, nickel-based Alloy C-22, is an outstanding corrosion-resistant engineering material. Even so, crevice corrosion has been observed with C-22 in hot sodium chloride environments without buffer

  8. FY05 HPCRM Annual Report: High-Performance Corrosion-Resistant Iron-Based Amorphous Metal Coatings

    Energy Technology Data Exchange (ETDEWEB)

    Farmer, J; Choi, J; Haslam, J; Day, S; Yang, N; Headley, T; Lucadamo, G; Yio, J; Chames, J; Gardea, A; Clift, M; Blue, G; Peters, W; Rivard, J; Harper, D; Swank, D; Bayles, R; Lemieux, E; Brown, R; Wolejsza, T; Aprigliano, L; Branagan, D; Marshall, M; Meacham, B; Aprigliano, L; Branagan, D; Marshall, M; Meacham, B; Lavernia, E; Schoenung, J; Ajdelsztajn, L; Dannenberg, J; Graeve, O; Lewandowski, J; Perepezko, J; Hildal, K; Kaufman, L; Boudreau, J

    2007-09-20

    New corrosion-resistant, iron-based amorphous metals have been identified from published data or developed through combinatorial synthesis, and tested to determine their relative corrosion resistance. Many of these materials can be applied as coatings with advanced thermal spray technology. Two compositions have corrosion resistance superior to wrought nickel-based Alloy C-22 (UNS No. N06022) in some very aggressive environments, including concentrated calcium-chloride brines at elevated temperature. Two Fe-based amorphous metal formulations have been found that appear to have corrosion resistance comparable to, or better than that of Ni-based Alloy C-22, based on breakdown potential and corrosion rate. Both Cr and Mo provide corrosion resistance, B enables glass formation, and Y lowers critical cooling rate (CCR). SAM1651 has yttrium added, and has a nominal critical cooling rate of only 80 Kelvin per second, while SAM2X7 (similar to SAM2X5) has no yttrium, and a relatively high critical cooling rate of 610 Kelvin per second. Both amorphous metal formulations have strengths and weaknesses. SAM1651 (yttrium added) has a low critical cooling rate (CCR), which enables it to be rendered as a completely amorphous thermal spray coating. Unfortunately, it is relatively difficult to atomize, with powders being irregular in shape. This causes the powder to be difficult to pneumatically convey during thermal spray deposition. Gas atomized SAM1651 powder has required cryogenic milling to eliminate irregularities that make flow difficult. SAM2X5 (no yttrium) has a high critical cooling rate, which has caused problems associated with devitrification. SAM2X5 can be gas atomized to produce spherical powders of SAM2X5, which enable more facile thermal spray deposition. The reference material, nickel-based Alloy C-22, is an outstanding corrosion-resistant engineering material. Even so, crevice corrosion has been observed with C-22 in hot sodium chloride environments without buffer

  9. Pion and kaon correlations in high energy heavy-ion collisions. Annual report, April 1, 1995 - March 31, 1996

    International Nuclear Information System (INIS)

    Wolf, K.L.

    1996-01-01

    Data analysis is in progress for recent experiments performed by the NA44 collaboration with the first running of 160 A GeV ²⁰⁸Pb-induced reactions at the CERN SPS. Identified singles spectra were taken for pions, kaons, protons, deuterons, antiprotons and antideuterons. Two-pion interferometry measurements were made for semi-central-triggered ²⁰⁸Pb + Pb collisions. An upgraded multiple-particle spectrometer allows high-statistics data sets of identified particles to be collected near mid-rapidity. A second series of experiments will be performed in the fall of 1995 with more emphasis on identical-kaon interferometry and on the measurement of rare particle spectra and correlations. Modest instrumentation upgrades by TAMU are designed to improve the trigger function for better impact-parameter selection and improved collection efficiency of valid events. An effort to achieve the highest degree of projectile-target stopping is outlined and it is argued that an excitation function on the SPS is needed to better understand reaction mechanisms. Analysis of experimental results is in the final stages at LBL in the EOS collaboration for two-pion interferometry in the 1.2 A GeV Au+Au reaction, taken with full event characterization

  10. Corrosion behaviour of container materials for geological disposal of high-level waste. Joint annual progress report 1983

    International Nuclear Information System (INIS)

    1985-01-01

    Within the framework of the Community R and D programme on management and storage of radioactive waste (shared-cost action), a research activity is aiming at the assessment of the corrosion behaviour of potential container materials for the geological disposal of vitrified high-level wastes. In this report, the results obtained during the year 1983 are described. Research performed at the Studiecentrum voor Kernenergie/Centre d'Etudes de l'Energie Nucleaire (SCK/CEN) at Mol (B) concerns the corrosion behaviour in clay environments. The behaviour in salt is tested by the Kernforschungszentrum (KfK) at Karlsruhe (D). Corrosion behaviour in granitic environments is being examined by the Commissariat a l'Energie Atomique (CEA) at Fontenay-aux-Roses (F) and the Atomic Energy Research Establishment (AERE) at Harwell (UK); the former is concentrating on corrosion-resistant materials and the latter on corrosion-allowance materials. Finally, the Centre National de la Recherche Scientifique (CNRS) at Vitry (F) is examining the formation and behaviour of passive layers on the metal alloys in the various environments

  11. Annual report, spring 2015. Alternative chemical cleaning methods for high level waste tanks-corrosion test results

    Energy Technology Data Exchange (ETDEWEB)

    Wyrwas, R. B. [Savannah River Site (SRS), Aiken, SC (United States)

    2015-07-06

    The testing presented in this report is in support of the investigation of the Alternative Chemical Cleaning program to aid in developing strategies and technologies to chemically clean radioactive High Level Waste tanks prior to tank closure. The data and conclusions presented here come from the examination of the corrosion rates of A285 carbon steel and 304L stainless steel when exposed to the chemical cleaning solution composed of 0.18 M nitric acid and 0.5 wt. % oxalic acid. This solution has been proposed as a dissolution solution that would be used to remove the remaining hard heel portion of the sludge in the waste tanks. This solution was combined with the HM and PUREX simulated sludge at dilution ratios that represent the bulk oxalic cleaning process (20:1 ratio, acid solution to simulant) and the cumulative volume associated with multiple acid strikes (50:1 ratio). The testing was conducted over 28 days at 50°C and deployed two methods to investigate the corrosion conditions: passive weight-loss coupons and an active electrochemical probe were used to collect data on the corrosion rate and material performance. In addition to investigating the chemical cleaning solutions, electrochemical corrosion testing was performed on acidic and basic solutions containing sodium permanganate at room temperature to explore the corrosion impacts if these solutions were to be implemented to retrieve remaining actinides that are currently in the sludge of the tank.
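
    For orientation, weight-loss coupon results of the kind collected here are conventionally converted to a corrosion rate with the standard mass-loss relation; this is a hedged sketch of the usual ASTM G1-style formula, not an expression or value taken from this report:

        \[ \mathrm{CR} = \frac{K\,W}{A\,t\,\rho} \]

    where W is the coupon mass loss, A the exposed area, t the exposure time, ρ the alloy density, and K a units constant (for example, K = 8.76 × 10⁴ gives CR in mm/yr when W is in g, A in cm², t in h and ρ in g/cm³).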

  12. Electric power annual, 1990

    International Nuclear Information System (INIS)

    1992-01-01

    The Electric Power Annual presents a summary of electric utility statistics at the national, regional and State levels. The objective of the publication is to provide industry decisionmakers, government policy-makers, analysts and the general public with historical data that may be used in understanding US electricity markets. ''The Industry at a Glance'' section presents a profile of the electric power industry ownership and performance; a review of key statistics for the year; and projections for various aspects of the electric power industry through 2010. Subsequent sections present data on generating capability, including proposed capability additions; net generation; fossil-fuel statistics; electricity sales, revenue, and average revenue per kilowatthour sold; financial statistics; environmental statistics; and electric power transactions. In addition, appendices provide supplemental data on major disturbances and unusual occurrences. Each section contains related text and tables and refers the reader to the appropriate publication that contains more detailed data on the subject matter

  13. Experimental prediction of severe droughts on seasonal to intra-annual time scales with GFDL High-Resolution Atmosphere Model

    Science.gov (United States)

    Yu, Z.; Lin, S.

    2011-12-01

    Regional heat waves and droughts have major economic and societal impacts on regional and even global scales. For example, during and following the 2010-2011 La Nina period, severe droughts were reported in many places around the world including China, the southern US, and east Africa, causing severe hardship in China and famine in east Africa. In this study, we investigate the feasibility and predictability of severe spring-summer drought events, 3 to 6 months in advance, with the 25-km resolution Geophysical Fluid Dynamics Laboratory High-Resolution Atmosphere Model (HiRAM), which is built as a seamless weather-climate model, capable of long-term climate simulations as well as skillful seasonal predictions (e.g., Chen and Lin 2011, GRL). We adopted a similar methodology and the same (HiRAM) model as in Chen and Lin (2011), which has been used successfully for seasonal hurricane predictions. A series of initialized 7-month forecasts starting from Dec 1 are performed each year (5 members each) during the past decade (2000-2010). We will then evaluate the predictability of the severe drought events during this period by comparing model predictions vs. available observations. To evaluate the predictive skill, in this preliminary report, we will focus on the anomalies of precipitation, sea-level pressure, and 500-mb height. These anomalies will be computed as the individual model prediction minus the mean climatology obtained by an independent AMIP-type "simulation" using observed SSTs (rather than the predictive SSTs used in the forecasts) from the same model.

  14. International energy annual, 1989

    International Nuclear Information System (INIS)

    1991-02-01

    This report is prepared annually and presents the latest information and trends on world energy production, consumption, reserves, trade, and prices for five primary energy sources: petroleum, natural gas, coal, hydroelectricity, and nuclear electricity. It also presents information on petroleum products. Since the early 1980's the world's total output of primary energy has increased steadily. The annual average growth rate of energy production during the decade was 1.9 percent. Throughout the 1980's, petroleum was the world's most heavily used type of energy. In 1989, three countries--the United States, the USSR, and China--were the leading producers and consumers of world energy. Together, these countries consumed and produced almost 50 percent of the world's total energy. Global production and consumption of crude oil and natural gas liquids increased during the 1980's, despite a decline in total production and demand in the early part of the decade. World production of dry natural gas continued to rise steadily in the 1980's. For the last several years, China has been the leading producer of coal, followed by the United States. In 1989, hydroelectricity supply declined slightly from the upward trend of the last 10 years. Nuclear power generation rose slightly from the 1988 level, compared with the marked growth in earlier years. Prices for major crude oils all increased between 1988 and 1989, but remained well below the price levels at the beginning of the decade. 26 figs., 36 tabs

  15. Annual report 1982

    International Nuclear Information System (INIS)

    This annual report describes the work during 1982 of the Nuclear Physics Institute of Lyon. The completion of the SARA post-accelerator in Grenoble, realized in collaboration with the Nuclear Science Institute, makes it possible to pursue new lines of research in heavy-ion physics. A new isotope separator was realized by the nuclear spectroscopy group, and the high-energy experimental group cooperates with LAPP to build, in an international collaboration, the L3 detector for LEP. The topics covered include theoretical physics, high-energy and intermediate-energy physics, nuclear physics and interdisciplinary physics, such as solid state physics and neutronics [fr

  16. Annual report 1976

    International Nuclear Information System (INIS)

    Lindh, U.; Sundberg, O.

    1977-01-01

    The Gustaf Werner Institute (GWI) annual report for the year 1976 presents in a condensed form the scientific activities in the disciplines High Energy Physics and Physical Biology at Uppsala University. The activities in High Energy Physics fall into three domains: Research with the local accelerator, participation in collaborations at international centers and work on the rebuilding of the Uppsala synchrocyclotron. A major subject of research in Physical Biology is control of growth and differentiation, as reflected in the kinetics of biochemical reactions or in the behaviour of healthy or malignant cells at various levels of organization. (Auth.)

  17. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    A time- and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium- and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium- and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
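
    As a rough illustration of the two standard hourly-value types discussed above, a minimal sketch that builds both from synthetic 1-min data; the array names, signal and noise level are hypothetical and not observatory processing code:

        import numpy as np

        # Synthetic 1-min horizontal-field record for one day (1440 samples, in nT);
        # the amplitude and noise level here are purely illustrative.
        rng = np.random.default_rng(0)
        minutes = np.arange(1440)
        h_1min = 21000 + 30 * np.sin(2 * np.pi * minutes / 1440) + rng.normal(0, 2, 1440)

        # Instantaneous "spot" hourly values: the sample taken on the hour.
        h_spot = h_1min[::60]

        # Simple 1-h "boxcar" averages: mean of the 60 samples within each hour.
        h_boxcar = h_1min.reshape(24, 60).mean(axis=1)

        print(h_spot[:3])
        print(h_boxcar[:3])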

  18. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations
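
    For context, the governing model assumed here is a nonlinear Schrödinger equation with a periodically modulated nonlinearity coefficient; the normalization below is a hedged sketch and may differ from the paper's conventions:

        \[ i\,\partial_t u + \tfrac{1}{2}\,\partial_x^2 u + \gamma(t)\,|u|^2 u = 0, \qquad \gamma(t+T) = \gamma(t), \]

    with the averaging method replacing the rapidly varying γ(t) by its mean plus systematic corrections derived from the modulation.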

  19. DSCOVR Magnetometer Level 2 One Minute Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data

  20. DSCOVR Magnetometer Level 2 One Second Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data

  1. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.
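
    As a point of reference, a spacetime average of a scalar quantity f over a four-dimensional region V is conventionally the covariant volume-weighted mean; the expression below is a hedged sketch, since the precise choice of region and regularization is not given in this record:

        \[ \langle f \rangle_V = \frac{\int_V f\,\sqrt{-g}\;d^4x}{\int_V \sqrt{-g}\;d^4x}. \]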

  2. Average multiplications in deep inelastic processes and their interpretation

    International Nuclear Information System (INIS)

    Kiselev, A.V.; Petrov, V.A.

    1983-01-01

    Inclusive production of hadrons in deep inelastic processes is considered. It is shown that at high energies the jet evolution in deep inelastic processes is mainly of nonperturbative character. With the increase of the final hadron state energy, the leading contribution to the average multiplicity comes from a parton subprocess due to the production of massive quark and gluon jets and their further fragmentation, as the diquark contribution becomes less and less essential. The ratio of the total average multiplicity in deep inelastic processes to the average multiplicity in e⁺e⁻ annihilation at high energies tends to unity

  3. High-precision analysis on annual variations of heavy metals, lead isotopes and rare earth elements in mangrove tree rings by inductively coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Yu Kefu; Kamber, Balz S.; Lawrence, Michael G.; Greig, Alan; Zhao Jianxin

    2007-01-01

    Annual variations from 1982 to 1999 of a wide range of trace elements and reconnaissance Pb isotopes (²⁰⁷Pb/²⁰⁶Pb and ²⁰⁸Pb/²⁰⁶Pb) were analyzed by solution ICP-MS on digested ash from the mangrove Rhizophora apiculata, obtained from the Leizhou Peninsula, along the northern coast of the South China Sea. The concentrations of the majority of elements show a weak declining trend with growth from 1982 to 1999, punctuated by several high concentration spikes. The declining trends are positively correlated with ring width and negatively correlated with inferred water-use efficiency, suggesting a physiological control over metal uptake in this species. The episodic metal concentration peaks cannot be explained by lateral movement or growth activities and appear to be related to environmental pollution events. Pb isotope ratios for most samples plot along the 'Chinese Pb line' and clearly document the importance of gasoline Pb as a source of contaminant. Shale-normalised REE + Y patterns are relatively flat and consistent across the growth period, with all patterns showing a positive Ce anomaly and an elevated Y/Ho ratio. The positive Ce anomaly is observed regardless of the choice of normaliser, in contrast to previously reported REE patterns for terrestrial and marine plants. This pilot study of trace element, REE + Y and Pb isotope distribution in mangrove tree rings indicates the potential use of mangroves as monitors of historical environmental change

  4. Retrospectively reported month-to-month variation in sleeping problems of people naturally exposed to high-amplitude annual variation in daylength and/or temperature

    Directory of Open Access Journals (Sweden)

    Arcady A. Putilov

    Compared to the literature on seasonal variation in mood and well-being, reports on the seasonality of trouble sleeping are scarce and contradictory. The aim was to extend the geography of such reports using the example of people naturally exposed to high-amplitude annual variation in daylength and/or temperature. Participants were residents of Turkmenia, West Siberia, South and North Yakutia, Chukotka, and Alaska. Health and sleep-wake adaptabilities, month-to-month variation in sleeping problems, well-being and behaviors were self-assessed. More than half of the 2398 respondents acknowledged seasonality of sleeping problems. Four of the assessed sleeping problems demonstrated three different patterns of seasonal variation. Rates of the problems significantly increased in winter months with long nights and cold days (daytime sleepiness and difficulties falling and staying asleep) as well as in summer months with either long days (premature awakening and difficulties falling and staying asleep) or hot nights and days (all 4 sleeping problems). Individual differences between respondents in the pattern and level of seasonality of sleeping problems were significantly associated with differences in several other domains of individual variation, such as gender, age, ethnicity, physical health, morning-evening preference, sleep quality, and adaptability of the sleep-wake cycle. These results have practical relevance to understanding the roles played by natural environmental factors in the seasonality of sleeping problems as well as to research on the prevalence of sleep disorders and methods of their prevention and treatment in regions with large seasonal differences in temperature and daylength.

  5. High temperature turbine technology program. Phase II. Technology test and support studies. Annual technical progress report, January 1, 1979-December 31, 1979

    Energy Technology Data Exchange (ETDEWEB)

    1980-01-01

    Work performed on the High Temperature Turbine Technology Program, Phase II - Technology Test and Support Studies during the period from January 1, 1979 through December 31, 1979 is summarized. Objectives of the program elements as well as technical progress and problems encountered during this Phase II annual reporting period are presented. Progress on design, fabrication and checkout of test facilities and test rigs is described. LP turbine cascade tests were concluded. 350 hours of testing were conducted on the LP rig engine first with clean distillate fuel and then with fly ash particulates injected into the hot gas stream. Design and fabrication of the turbine spool technology rig components are described. TSTR 60° sector combustor rig fabrication and testing are reviewed. Progress in the design and fabrication of TSTR cascade rig components for operation on both distillate fuel and low Btu gas is described. The new coal-derived gaseous fuel synthesizing facility is reviewed. Results and future plans for the supporting metallurgical programs are discussed.

  6. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Background: Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results: Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo-energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for the refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in a lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion: The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary prediction [2], which
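
    A minimal, hypothetical sketch of this kind of refinement (a Metropolis Monte Carlo walk pulled toward the averaged coordinates by a harmonic restraint, with a clash penalty); the function name, weights and step sizes are assumptions for illustration, not the published algorithm:

        import numpy as np

        def refine_toward_average(start, avg, k_harm=1.0, k_clash=10.0, clash_d=3.0,
                                  steps=20000, step_size=0.05, temperature=1.0, seed=0):
            """Metropolis Monte Carlo: drive 'start' (N x 3 C-alpha coordinates) toward
            'avg' with a harmonic pseudo-energy while penalizing steric clashes."""
            rng = np.random.default_rng(seed)
            x = start.copy()

            def energy(coords):
                e_harm = k_harm * np.sum((coords - avg) ** 2)              # harmonic restraint
                d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
                iu = np.triu_indices(len(coords), k=1)
                e_clash = k_clash * np.sum(np.clip(clash_d - d[iu], 0.0, None) ** 2)
                return e_harm + e_clash

            e = energy(x)
            for _ in range(steps):
                i = rng.integers(len(x))
                trial = x.copy()
                trial[i] += rng.normal(0.0, step_size, 3)                  # move one atom
                e_trial = energy(trial)
                if e_trial < e or rng.random() < np.exp((e - e_trial) / temperature):
                    x, e = trial, e_trial                                  # Metropolis accept
            return x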

  7. Determining the optimal number of individual samples to pool for quantification of average herd levels of antimicrobial resistance genes in Danish pig herds using high-throughput qPCR

    DEFF Research Database (Denmark)

    Clasen, Julie; Mellerup, Anders; Olsen, John Elmerdahl

    2016-01-01

    The primary objective of this study was to determine the minimum number of individual fecal samples to pool together in order to obtain a representative sample for herd level quantification of antimicrobial resistance (AMR) genes in a Danish pig herd, using a novel high-throughput qPCR assay...

  8. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    Title 40, Protection of Environment (2010-07-01 edition); Environmental Protection Agency (continued); Air Programs (continued); Acid Rain Nitrogen Oxides Emission Reduction Program; § 76.11 Emissions averaging. (a) General...

  9. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  10. NIRE annual report 1996

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    The National Institute for Resources and Environment (NIRE) has an R & D concept of 'ecotechnology' that aims to protect the environment from degradation whilst promoting sustainable development. This annual report presents summaries of 32 recent research efforts on such topics as: emission control of sulfur and nitrogen oxides from advanced coal combustors; catalysts for diesel NOₓ removal; measuring dust from stationary sources; software for life cycle assessment; marine disposal of CO₂; emissions of greenhouse gases from coal mines in Japan; structural changes in coal particles during gasification; solubilization and desulfurization of high sulfur coal with trifluoromethanesulfonic acid; and oxidation mechanisms of H₂S.

  11. Annual report 1980

    International Nuclear Information System (INIS)

    1981-01-01

    This annual report contains extended abstracts about the research done at the named institute. These abstracts concern the development of accelerators and ion sources, the construction of the magnetic spectrograph and radiation detectors, the investigation of solids and microstructures by nuclear methods, the development of electronic circuits, the advances in data processing, the study of heavy ion reactions, nuclear structure, and reaction mechanisms, the research on atomic physics and the interaction of charged particles with matter, the studies in medium and high energy physics, the theoretical studies of nuclear structure, and the research in cosmochemistry. Furthermore a list of publications is added. (orig./HSI) [de

  12. Institute annual report 2004

    International Nuclear Information System (INIS)

    2005-01-01

    The mission of the ITU (Institute for Transuranium Elements) is to protect the European citizen against risks associated with the handling and storage of highly radioactive elements. The JRC (Joint Research Center) provides customer-driven scientific and technical support for the conception, development, implementation and monitoring of EU policies. In this framework, this annual report presents the ITU actions in: basic actinide research, spent fuel characterization, safety of nuclear fuels, partitioning and transmutation, alpha-immunotherapy/radiobiology, measurement of radioactivity in the environment, safeguards research and development. (A.L.B.)

  13. Annual report 1979

    International Nuclear Information System (INIS)

    1980-01-01

    This annual report contains extended abstracts about the research done at the named institute. These abstracts concern the development of accelerators and ion sources, the construction of the magnetic spectrograph and radiation detectors, the investigation of solids and microstructures by nuclear methods, the development of electronic circuits, the advances in data processing, the study of heavy ion reactions, nuclear structure, and reaction mechanisms, the research on atomic physics and the interaction of charged particles with matter, the studies in medium and high energy physics, the theoretical studies of nuclear structure, and the research in cosmochemistry. Furthermore a list of publications is added. (orig./HSI) [de

  14. Institute annual report 2004

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-07-01

    The mission of the ITU (Institute for Transuranium Elements) is to protect the European citizen against risks associated with the handling and storage of highly radioactive elements. The JRC (Joint Research Center) provides customer-driven scientific and technical support for the conception, development, implementation and monitoring of EU policies. In this framework, this annual report presents the ITU actions in: basic actinide research, spent fuel characterization, safety of nuclear fuels, partitioning and transmutation, alpha-immunotherapy/radiobiology, measurement of radioactivity in the environment, safeguards research and development. (A.L.B.)

  15. 1992 Annual report

    International Nuclear Information System (INIS)

    1993-01-01

    Annual report of the Institut de Physique Nucleaire at Orsay (France). The main themes are presented. Concerning experimental research: nuclear structure, ground states and low energy excited states, high excitation energy nuclear states, nuclear matter and nucleus-nucleus collision, intermediate energy nuclear physics, radiochemistry, inter-disciplinary research, scientific information and communication; concerning theoretical physics: particles and fields (formal aspects and hadronic physics), chaotic systems and semi-classical methods, few body problems, nucleus-nucleus scattering, nucleus spectroscopy and clusters, statistical physics and condensed matter; concerning general activities and technological research: accelerators, detectors, applications in cryogenics, data processing, Isolde and Orion equipment

  16. IKF annual report 1983

    International Nuclear Information System (INIS)

    Schmidt-Boecking, H.

    1983-01-01

    This annual report contains extended abstracts about the scientific work performed at the named institute, descriptions of the operation of the Van de Graaff accelerator facilities of this institute and the work of the technical establishments, as well as a list of publications. The scientific work concerns nuclear structure and nuclear reactions, high energy heavy ion physics, atomic physics with fast ions, nuclear solid state physics, solid-state track detectors, applications of nuclear methods in solid state physics, ion source developments, apparative developments and data processing, as well as interdisciplinary collaborations. See hints under the relevant topics. (HSI) [de

  17. Annual report 1981

    International Nuclear Information System (INIS)

    1982-01-01

    This annual report contains extended abstracts about the research done at the named institute. These abstracts concern the development of accelerators and ion sources, the construction of the magnetic spectrograph and radiation detectors, the investigation of solids and microstructures by nuclear methods, the development of electronic circuits, the advances in data processing, the study of heavy ion reactions, nuclear structure, and reaction mechanisms, the research on atomic physics and the interaction of charged particles with matter, the studies in medium and high energy physics, the theoretical studies of nuclear structure and the research in cosmophysics. Furthermore a list of publications is added. (orig./HSI) [de

  18. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended
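
    For orientation, the bounce average of a quantity Q along the field line is the time average over one bounce period between mirror turning points; the expressions below are a hedged sketch of the standard definition (notation assumed, not taken from the code documentation):

        \[ \langle Q \rangle_b = \frac{1}{\tau_b}\oint \frac{Q\,ds}{|v_\parallel|}, \qquad \tau_b = \oint \frac{ds}{|v_\parallel|}. \]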

  19. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  20. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  1. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll, J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic

  2. Should the average tax rate be marginalized?

    Czech Academy of Sciences Publication Activity Database

    Feldman, N. E.; Katuščák, Peter

    -, č. 304 (2006), s. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords : tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  3. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
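
    As a concrete illustration of one of the families mentioned above, a minimal ordered weighted averaging (OWA) sketch; the weight vector and inputs are hypothetical:

        def owa(values, weights):
            """Ordered weighted average: sort the inputs in descending order,
            then take the weighted sum with a fixed weight vector summing to one."""
            assert len(values) == len(weights) and abs(sum(weights) - 1.0) < 1e-9
            return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

        # With weights (0.5, 0.3, 0.2) the largest input counts most:
        print(owa([0.2, 0.9, 0.4], [0.5, 0.3, 0.2]))  # 0.5*0.9 + 0.3*0.4 + 0.2*0.2 = 0.61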

  4. Average and local atomic-scale structure in BaZrxTi(1-x)O3 (x = 0.10, 0.20, 0.40) ceramics by high-energy x-ray diffraction and Raman spectroscopy.

    Science.gov (United States)

    Buscaglia, Vincenzo; Tripathi, Saurabh; Petkov, Valeri; Dapiaggi, Monica; Deluca, Marco; Gajović, Andreja; Ren, Yang

    2014-02-12

    High-resolution x-ray diffraction (XRD), Raman spectroscopy and total scattering XRD coupled to atomic pair distribution function (PDF) analysis studies of the atomic-scale structure of archetypal BaZrxTi(1-x)O3 (x = 0.10, 0.20, 0.40) ceramics are presented over a wide temperature range (100-450 K). For x = 0.1 and 0.2 the results reveal, well above the Curie temperature, the presence of Ti-rich polar clusters which are precursors of a long-range ferroelectric order observed below TC. Polar nanoregions (PNRs) and relaxor behaviour are observed over the whole temperature range for x = 0.4. Irrespective of ceramic composition, the polar clusters are due to locally correlated off-centre displacement of Zr/Ti cations compatible with local rhombohedral symmetry. Formation of Zr-rich clusters is indicated by Raman spectroscopy for all compositions. Considering the isovalent substitution of Ti with Zr in BaZrxTi1-xO3, the mechanism of formation and growth of the PNRs is not due to charge ordering and random fields, but rather to a reduction of the local strain promoted by the large difference in ion size between Zr(4+) and Ti(4+). As a result, non-polar or weakly polar Zr-rich clusters and polar Ti-rich clusters are randomly distributed in a paraelectric lattice and the long-range ferroelectric order is disrupted with increasing Zr concentration.

  5. IHEP 2001 annual report

    International Nuclear Information System (INIS)

    2002-01-01

    IHEP's focal points of research encompass high energy physics experiment and theory, cosmic ray and high energy astrophysics, theory of nuclear physics, nuclear detectors and nuclear electronics, accelerator physics and technology, synchrotron radiation technology and application, nuclear analytical techniques and application, free electron laser, computer and network application, radiation protection, etc. In 2001, IHEP further sharpened its scientific focus by defining three key fields: high energy physics, research and development of advanced accelerator technologies, and advanced synchrotron radiation technologies and applications, as well as 10 relevant major research orientations. The plentiful results on scientific research, operation and upgrading of BEPC/BES/BSRF and other branches of work in 2001 are given in this annual report

  6. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    We present a new iterative method for the calculation of the average bandwidth assignment to traffic flows using a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, the assigned weights, the arrival rate, and the average packet length or input rate of the traffic flows. We validate the model outcome with examples and simulation results obtained using the NS2 simulator.
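
    A hedged sketch of this kind of iterative average-bandwidth computation (not the authors' exact model): flows whose offered load is below their weighted share keep their load, and the residual capacity is redistributed over the remaining flows in proportion to their weights:

        def wfq_average_bandwidth(link_rate, weights, offered_loads):
            """Iteratively assign average bandwidth under WFQ-like weighted sharing:
            bounded flows get their offered load, the rest split the residual
            capacity in proportion to their weights."""
            flows = list(range(len(weights)))
            alloc = [0.0] * len(weights)
            capacity = link_rate
            while flows:
                total_w = sum(weights[i] for i in flows)
                share = {i: capacity * weights[i] / total_w for i in flows}
                bounded = [i for i in flows if offered_loads[i] <= share[i]]
                if not bounded:
                    for i in flows:
                        alloc[i] = share[i]
                    break
                for i in bounded:
                    alloc[i] = offered_loads[i]
                    capacity -= offered_loads[i]
                    flows.remove(i)
            return alloc

        # Example: 10 Mbit/s link, three flows with weights 1:2:2.
        print(wfq_average_bandwidth(10.0, [1, 2, 2], [1.0, 8.0, 8.0]))  # [1.0, 4.5, 4.5]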

  7. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    Marinaro, A.; Scarpetta, Q.

    1984-01-01

    An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion it is pointed out that the procedure proposed for the computation of time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles

  8. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  9. Average daily and annual courses of 222Rn concentration in some natural medium

    International Nuclear Information System (INIS)

    Holy, K.; Bohm, R.; Polaskova, A.; Stelina, J.; Sykora, I.; Hola, O.

    1996-01-01

    Simultaneous measurements of the ²²²Rn concentration in the outdoor atmosphere of Bratislava and in the soil air over a one-year period have been made. Daily and seasonal variations of the ²²²Rn concentration in both media were found. Some attributes of these variations as well as the methods of measurement are presented in this work. (author). 17 refs., 6 figs.

  10. Multi-Annual Kinematics of an Active Rock Glacier Quantified from Very High-Resolution DEMs: An Application-Case in the French Alps

    Directory of Open Access Journals (Sweden)

    Xavier Bodin

    2018-04-01

    Rock glaciers result from the long-term creeping of ice-rich permafrost along mountain slopes. Under warming conditions, deformation is expected to increase, and potential destabilization of those landforms may lead to hazardous phenomena. Monitoring the kinematics of rock glaciers at fine spatial resolution is required to better understand at which rate, where and how they deform. We present here the results of several years of in situ surveys carried out between 2005 and 2015 on the Laurichard rock glacier, an active rock glacier located in the French Alps. Repeated terrestrial laser-scanning (TLS), together with aerial laser-scanning (ALS) and structure-from-motion multi-view stereophotogrammetry (SFM-MVS), were used to accurately quantify surface displacement of the Laurichard rock glacier at interannual and pluri-annual scales. Six very high-resolution digital elevation models (DEMs, pixel size <50 cm) of the rock glacier surface were generated, and their respective quality was assessed. The relative horizontal position accuracy (XY) of the individual DEMs is in general less than 2 cm, with a co-registration error on stable areas ranging from 20–50 cm. The vertical accuracy is around 20 cm. The direction and amplitude of surface displacements computed between DEMs are very consistent with independent geodetic field measurements (e.g., DGPS). Using these datasets, local patterns of the Laurichard rock glacier kinematics were quantified, pointing out specific internal (rheological) and external (bed topography) controls. The evolution of the surface velocity shows few changes on the rock glacier's snout for the first years of the observed period, followed by a major acceleration between 2012 and 2015 affecting the upper part of the tongue and the snout.

  11. Annual Energy Review 1999

    Energy Technology Data Exchange (ETDEWEB)

    Seiferlein, Katherine E. [USDOE Energy Information Administration (EIA), Washington, DC (United States)

    2000-07-01

    A generation ago the Ford Foundation convened a group of experts to explore and assess the Nation’s energy future, and published their conclusions in A Time To Choose: America’s Energy Future (Cambridge, MA: Ballinger, 1974). The Energy Policy Project developed scenarios of U.S. potential energy use in 1985 and 2000. Now, with 1985 well behind us and 2000 nearly on the record books, it may be of interest to take a look back to see what actually happened and consider what it means for our future. The study group sketched three primary scenarios with differing assumptions about the growth of energy use. The Historical Growth scenario assumed that U.S. energy consumption would continue to expand by 3.4 percent per year, the average rate from 1950 to 1970. This scenario assumed no intentional efforts to change the pattern of consumption, only efforts to encourage development of our energy supply. The Technical Fix scenario anticipated a “conscious national effort to use energy more efficiently through engineering know-how." The Zero Energy Growth scenario, while not clamping down on the economy or calling for austerity, incorporated the Technical Fix efficiencies plus additional efficiencies. This third path anticipated that economic growth would depend less on energy-intensive industries and more on those that require less energy, i.e., the service sector. In 2000, total energy consumption was projected to be 187 quadrillion British thermal units (Btu) in the Historical Growth case, 124 quadrillion Btu in the Technical Fix case, and 100 quadrillion Btu in the Zero Energy Growth case. The Annual Energy Review 1999 reports a preliminary total consumption for 1999 of 97 quadrillion Btu (see Table 1.1), and the Energy Information Administration’s Short-Term Energy Outlook (April 2000) forecasts total energy consumption of 98 quadrillion Btu in 2000. What energy consumption path did the United States actually travel to get from 1974, when the scenarios were drawn

  12. AREVA annual results 2009

    International Nuclear Information System (INIS)

    2009-01-01

    AREVA expanded its backlog and increased its revenues compared with 2008, on strong installed base business and dynamic major projects, fostering growth in operating income of 240 million euros. As announced previously, Areva is implementing a financing plan suited to its objectives of profitable growth. The plan was implemented successfully in 2009, including the conclusion of an agreement, under very satisfactory terms, to sell its Transmission and Distribution business for 4 billion euros, asset sales for more than 1.5 billion euros, and successful bond issues of 3 billion euros. The plan will continue in 2010 with a capital increase, the completion of asset disposals and cost reduction and continued operational performance improvement programs. Areva bolstered its Renewable Energies business segment by supplementing its offshore wind power and biomass businesses with the acquisition of Ausra, a California-based leader in concentrated solar power technology. Despite the sale of T and D, Areva is maintaining its financial performance outlook for 2012: 12% average annual revenue growth to 12 billion euros in 2012, double digit operating margin and substantially positive free operating cash flow. Annual results 2009: - For the group as a whole, including Transmission and Distribution: Backlog: euros 49.4 bn (+2.5%), Revenues: euros 14 bn (+6.4%), Operating income: euros 501 m (+20.1%); - Nuclear and Renewable Energies perimeter: Backlog: euros 43.3 bn (+1.8%), Strong revenue growth: +5.4% to euros 8.5 bn, Operating income before provision for the Finnish project in the first half of 2009: euros 647 m, Operating income: euros 97 m, for a euros 240 m increase from 2008; - Net income attributable to equity holders of the parent: euros 552 m, i.e. euros 15.59 per share; - Net debt: euros 6,193 m; - Pro-forma net debt, including net cash to be received from the sale of T and D in 2010: euros 3,022 m; - Dividend of euros 7.06 per share to be proposed during the Annual

  13. Annual Scientific Report for DE-FG03-02NA00063 Coherent imaging of laser-plasma interactions using XUV high harmonic radiation

    International Nuclear Information System (INIS)

    Henry C. Kapteyn

    2005-01-01

    In this project, we use coherent short-wavelength light generated by high-order harmonic generation as a probe of laser-plasma dynamics and phase transitions on femtosecond time scales. The interaction of ultrashort laser pulses with materials and plasmas is relevant to stockpile stewardship, to understanding the equation of state of matter at high pressures and temperatures, and to plasma concepts such as the fast-ignitor ICF fusion concept and laser-based particle acceleration. Femtosecond laser technology makes it possible to use a small-scale setup to generate 20 fs pulses with average power >10 W at multi-kHz repetition rates, which can be focused to intensities in excess of 10¹⁷ W/cm². These lasers can be used either to rapidly heat materials to initiate phase transitions, or to create laser plasmas over a wide parameter space. These lasers can also be used to generate fully spatially coherent XUV beams with which to probe these materials and plasma systems. We are in the process of implementing imaging studies of plasma hydrodynamics and warm, dense matter. The data will be compared with simulation codes of laser-plasma interactions, making it possible to refine and validate these codes

  14. Average and local structure of α-CuI by configurational averaging

    International Nuclear Information System (INIS)

    Mohn, Chris E; Stoelen, Svein

    2007-01-01

    Configurational Boltzmann averaging together with density functional theory are used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out with peaks in the atom-density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond-angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion-pathways is expected due to the large variation in the local motifs
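
    For reference, a configurational Boltzmann average of an observable A over configurations i with total energies E_i takes the usual form; this is a hedged sketch, and the published work may impose additional symmetry or occupancy constraints:

        \[ \langle A \rangle = \frac{\sum_i A_i\, e^{-E_i/k_B T}}{\sum_i e^{-E_i/k_B T}}. \]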

  15. Annual report 1984

    International Nuclear Information System (INIS)

    1985-01-01

    In this annual report of NIKHEF Amsterdam (Netherlands) research programs of high-energy physics, nuclear physics and radiochemistry are described. Concerning the high-energy physics section (section H) it contains short accounts of experiments carried out at CERN (Geneva) with the Super Proton Synchrotron (WA25, ACCMOR, EHS, CHARM), the proton-antiproton collider (SppS) and LEAR, experiments performed with the DESY (Hamburg) accelerators (Crystal-ball, MARK-J, HERA), the SLAC and LEP experiments and an overview of the activities of the theory group and the technical and instrumentation groups. As for the nuclear physics section (K), short descriptions and (preliminary) results are presented of electron-excitation studies and experiments with pions, muons and antiprotons. Theoretical studies include Coulomb sum rule, quark-bag models, pion-nucleon interaction and the Delta-hole model. A radiochemical and technical part concludes the report. (Auth.)

  16. 1985 annual report

    International Nuclear Information System (INIS)

    1986-01-01

    In this annual report of NIKHEF Amsterdam (Netherlands) research programs of high-energy physics, nuclear physics and radiochemistry are described. Concerning the high-energy physics section (section H) it contains short accounts of experiments carried out at CERN (Geneva) with the Super Proton Synchrotron (WA25, ACCMOR, EHS, CHARM), the proton-antiproton collider (SppS) and LEAR, experiments performed with the DESY (Hamburg) accelerators (Crystal-ball, MARK-J, HERA), the SLAC and LEP experiments and an overview of the activities of the theory group and the technical and instrumentation groups. As for the nuclear physics section (K), short descriptions and (preliminary) results are presented of electron-excitation studies and experiments with pions, muons and antiprotons. Theoretical studies include Coulomb sum rule, sigma-omega model, pion photoproduction and the (e,e'p) reaction. A radiochemical and technical part concludes the report. (Auth.)

  17. Asynchronous Gossip for Averaging and Spectral Ranking

    Science.gov (United States)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
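
    As background for the first variant, a minimal sketch of classical pairwise gossip averaging on a graph; the node values, edge list and update rule here are illustrative, not the reinforcement-learning scheme proposed in the paper:

        import random

        def gossip_average(values, edges, rounds=10000, seed=0):
            """Classical pairwise gossip: at each step a random edge (i, j) is activated
            and both endpoints replace their values by the pair average. Under mild
            connectivity assumptions all values converge to the global mean."""
            rng = random.Random(seed)
            x = list(values)
            for _ in range(rounds):
                i, j = rng.choice(edges)
                x[i] = x[j] = 0.5 * (x[i] + x[j])
            return x

        # Ring of 4 nodes; the true average of [1, 2, 3, 10] is 4.
        print(gossip_average([1, 2, 3, 10], [(0, 1), (1, 2), (2, 3), (3, 0)]))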

  18. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).
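
    A hypothetical sketch of the radial-averaging idea described above, simplified to a single ray centre and equiangular rays (the published method uses two ray centres and overlapping arcs, which is omitted here for brevity):

        import numpy as np

        def radial_profile(outline_xy, centre, n_rays=90):
            """Resample a closed digitized outline as radial distances from a centre at
            equiangular ray directions, so outlines can be averaged ray by ray."""
            d = outline_xy - centre
            angles = np.arctan2(d[:, 1], d[:, 0])
            radii = np.hypot(d[:, 0], d[:, 1])
            order = np.argsort(angles)
            ray_angles = np.linspace(-np.pi, np.pi, n_rays, endpoint=False)
            return np.interp(ray_angles, angles[order], radii[order], period=2 * np.pi)

        # Averaging several outlines grouped by similar foot length: mean radius per ray.
        # profiles = np.vstack([radial_profile(o, c) for o, c in zip(outlines, centres)])
        # mean_profile = profiles.mean(axis=0)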

  19. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion

  20. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering  and analysis of bacterial  convergence by chemotaxis and to apply similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, vanishing stochastic perturbations, and prevent analysis over infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  1. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    Receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence, thus to improve the system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory which uses the small-scale and large-scale spatial filters, and our previously presented expression that shows the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Irradiance flux variance variations are examined versus the oceanic turbulence parameters and the receiver aperture diameter are examined in strong oceanic turbulence. Also, the effect of the receiver aperture diameter on the aperture averaging factor is presented in strong oceanic turbulence.
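
    For context, the aperture averaging factor is conventionally the ratio of the irradiance flux variance collected by a receiver lens of diameter D to the scintillation index of a point receiver; the definition below is a hedged sketch of that standard quantity, not the paper's specific strong-turbulence expression:

        \[ A(D) = \frac{\sigma_I^2(D)}{\sigma_I^2(0)}, \]

    so that A(D) approaches 1 for a point aperture and decreases as the aperture grows and spatially averages the intensity fluctuations.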

  2. Comparing inferences of solar geolocation data against high-precision GPS data : Annual movements of a double-tagged black-tailed godwit

    NARCIS (Netherlands)

    Rakhimberdiev, Eldar; Senner, Nathan R.; Verhoeven, Mo A.; Winkler, David W.; Bouten, Willem; Piersma, Theunis

    2016-01-01

    Annual routines of migratory birds inferred from archival solar geolocation devices have never before been confirmed using GPS technologies. A female black-tailed godwit Limosa limosa limosa captured on the breeding grounds in The Netherlands in 2013 and recaptured in 2014 was outfitted with both an

  3. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that the cosmological parameters evaluated on the smoothed spatial domain $\bar B$ obey $\bar\Omega^{\bar B}_m + \bar\Omega^{\bar B}_R + \bar\Omega^{\bar B}_\Lambda + \bar\Omega^{\bar B}_Q = 1$, where $\bar\Omega^{\bar B}_m$, $\bar\Omega^{\bar B}_R$ and $\bar\Omega^{\bar B}_\Lambda$ correspond to the standard Friedmannian parameters, while $\bar\Omega^{\bar B}_Q$ is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias

  4. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  5. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  6. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
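    The element-wise robust averaging mentioned above can be illustrated with a plain coordinate-wise trimmed mean; this is only the trimming idea, not the Grassmann Average algorithm itself, and the trim fraction is an arbitrary illustrative choice.

```python
import numpy as np

def trimmed_average(X, trim_fraction=0.1):
    """Coordinate-wise trimmed mean: for each coordinate (e.g. pixel), drop the
    lowest and highest trim_fraction of values across observations before
    averaging, so isolated outliers cannot dominate the result.

    X: array of shape (n_observations, n_coordinates).
    """
    X = np.sort(np.asarray(X, dtype=float), axis=0)
    k = int(np.floor(trim_fraction * X.shape[0]))
    return X[k:X.shape[0] - k].mean(axis=0)

# A single grossly corrupted row barely moves the trimmed average.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(99, 4)), [[100.0, 100.0, 100.0, 100.0]]])
print(trimmed_average(X, trim_fraction=0.05))
```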

  7. A high average power beam dump for an electron accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Xianghong, E-mail: xl66@cornell.edu [Cornell Laboratory of Accelerator-based Sciences and Education, Cornell University, Ithaca, NY 14853 (United States); Bazarov, Ivan; Dunham, Bruce M.; Kostroun, Vaclav O.; Li, Yulin; Smolenski, Karl W. [Cornell Laboratory of Accelerator-based Sciences and Education, Cornell University, Ithaca, NY 14853 (United States)

    2013-05-01

    The electron beam dump for Cornell University's Energy Recovery Linac (ERL) prototype injector was designed and manufactured to absorb 600 kW of electron beam power at beam energies between 5 and 15 MeV. It is constructed from an aluminum alloy using a cylindrical/conical geometry, with water cooling channels between an inner vacuum chamber and an outer jacket. The electron beam is defocused and its centroid is rastered around the axis of the dump to dilute the power density. A flexible joint connects the inner body and the outer jacket to minimize thermal stress. A quadrant detector at the entrance to the dump monitors the electron beam position and rastering. Electron scattering calculations, thermal and thermomechanical stress analysis, and radiation calculations are presented.

  8. Research Leading to High Throughput Processing of Thin-Film CdTe PV Module: Phase I Annual Report, October 2003 (Revised)

    Energy Technology Data Exchange (ETDEWEB)

    Powell, R. C.; Meyers, P. V.

    2004-02-01

    Work under this subcontract contributes to the overall manufacturing operation. During Phase I, average module efficiency on the line was improved from 7.1% to 7.9%, due primarily to increased photocurrent resulting from a decrease in CdS thickness. At the same time, production volume for commercial sale increased from 1.5 to 2.5 MW/yr. First Solar is committed to commercializing CdTe-based thin-film photovoltaics. This commercialization effort includes a major addition of floor space and equipment, as well as process improvements to achieve higher efficiency and greater durability. This report presents the results of Phase I of the subcontract entitled "Research Leading to High Throughput Processing of Thin-Film CdTe PV Modules." The subcontract supports several important aspects needed to begin high-volume manufacturing, including further development of the semiconductor deposition reactor, advancement of accelerated life testing methods and understanding, and improvements to the environmental, health, and safety programs. Progress in the development of the semiconductor deposition reactor was made in several areas. First, a new style of vapor transport deposition distributor with simpler operational behavior and the potential for improved cross-web uniformity was demonstrated. Second, an improved CdS feed system that will improve down-web uniformity was developed. Third, the core of a numerical model of fluid and heat flow within the distributor was developed, including flow in a 3-component gas system at high temperature and low pressure and particle sublimation.

  9. Annual report, 1982

    International Nuclear Information System (INIS)

    1983-01-01

    In 1982 Eldorado Nuclear Ltd. acquired important new sources of uranium in the Wollaston Lake area of northern Saskatchewan by purchasing the shares of Gulf Minerals Canada Ltd. and Uranerz Canada Ltd. Eldorado Nuclear Ltd. is now sole owner of the Rabbit Lake properties, consisting of more than 30 million kg of U3O8 and a mill with a capacity of 2.5 million kg annually. Production records were set at the Port Hope, Ontario, uranium processing plant, and processing capacity continued to expand there and at the new Blind River, Ontario, refinery. The uneconomic Beaverlodge mine in northern Saskatchewan was closed as scheduled. The company participated in the development of the Key Lake project in northern Saskatchewan. This high-grade, open pit mine has reserves containing more than 80 million kg of U3O8, and will have a production capacity of 5.4 million kg annually when production begins in 1983. Company assets were increased from $618.4 million in 1981 to $875.6 million in 1982; and corporate structure was re-organized to integrate newly-acquired operations. Earnings for 1982 were $4 million

  10. 1998 annual report

    International Nuclear Information System (INIS)

    1999-01-01

    Talisman Energy Inc. is the largest Canadian-based independent oil and gas producer. Its main business activities include exploration, development, production and marketing of natural gas, natural gas liquids and crude oil. Main operating centres are in Canada, the North Sea, Indonesia and Sudan. Talisman Energy is also pursuing a number of high potential exploration ventures in Algeria and Trinidad. The Company has experienced a 30 per cent annual production growth rate over the last five years and has consistently increased its reserve base, while maintaining competitive finding and development costs. This report contains an extensive review of activities and achievements in each of Talisman Energy Inc.'s exploration and operating areas during 1998, a consolidated financial statement, a management discussion and analysis of production and financial performance, and a review of corporate governance. Future prospects are also discussed. Continued growth of 10 to 15 per cent annually over the next two years is anticipated as a result of management's current investment program. tabs

  11. Annual report/1979

    International Nuclear Information System (INIS)

    1980-04-01

    Primary energy demand in Ontario in 1979 was up by 2.9 percent, compared to 2.7 percent in the previous year. Revised forecasts issued in January 1980 indicate Ontario's need for electricity is expected to grow by an average of 3.4 percent annually to the year 2000. Nuclear generation provided 29 percent of the total energy made available by Hydro, and Hydro's eight reactors at Pickering and Bruce continued to rank in the top 36 - four in the top 10 - when compared to the performance of 104 of the world's largest reactors. The provincial legislature's Select Committee on Hydro Affairs examined the safety of the CANDU system and concluded that it is 'acceptably safe'. Faced with reduced forecasts of electrical demands to the year 2000, the Board of Directors decided to stretch out the construction schedule of the Darlington station, to halt construction of the second half of the Bruce Heavy Water Plant D, and to complete but mothball the first half. Construction of Bruce Heavy Water Plant B was completed early in 1979. The A plant produced 599.8 megagrams of reactor-grade heavy water. A control room simulator for Bruce A nuclear generating station was ordered. Agreement was reached on rebuilding faulty boilers at Pickering B. A total of 757.6 megagrams of uranium was used to produce electrical energy and steam. Ontario Hydro continued involvement in uranium exploration. Studies on radioactive waste disposal are being carried out, with emphasis on interim storage and transportation. (LL)

  12. Electric power annual, 1991

    International Nuclear Information System (INIS)

    1993-01-01

    The Electric Power Annual is prepared by the Survey Management Division; Office of Coal, Nuclear, Electric and Alternate Fuels; Energy Information Administration (EIA); US Department of Energy. The 1991 edition has been enhanced to include statistics on electric utility demand-side management and nonutility supply. "The US Electric Power Industry at a Glance" section presents a profile of the electric power industry ownership and performance, and a review of key statistics for the year. Subsequent sections present data on generating capability, including proposed capability additions; net generation; fossil-fuel statistics; electricity sales, revenue, and average revenue per kilowatthour sold; financial statistics; environmental statistics; electric power transactions; demand-side management; and nonutility power producers. In addition, the appendices provide supplemental data on major disturbances and unusual occurrences in US electricity power systems. Each section contains related text and tables and refers the reader to the appropriate publication that contains more detailed data on the subject matter. Monetary values in this publication are expressed in nominal terms

  13. Syncrude annual report 1994

    International Nuclear Information System (INIS)

    1994-01-01

    Syncrude Canada Ltd. is the world's largest producer of custom made crude oil from the oil sands, and the largest single source of oil in Canada. This annual report claimed many outstanding achievements for 1994. A new production record resulted in higher revenue, lower unit operating costs, increased cash flow, improved productivity, and higher net income. With a focus on technology development and continuous improvement in operations, including environment, health and safety performance, operating results are projected to improve further. The formation of the Canadian Oil Sands Network for Research and Development (CONRAD) early in 1994 was a major step assuring maximum leverage from every dollar expended to realize this objective. Production of Syncrude Sweet Blend is expected to rise by over 10 million barrels to 81 million barrels a year at an average cash operating expenditure of Can$12.00/barrel by the year 2000. The Corporation's business plans include capital investments for major projects, such as the North Mine hydrotransport, product quality enhancements, and upgrading facilities expansion. 4 tabs., 1 fig

  14. Visualizing the uncertainty in the relationship between seasonal average climate and malaria risk.

    Science.gov (United States)

    MacLeod, D A; Morse, A P

    2014-12-02

    Around $1.6 billion per year is spent financing anti-malaria initiatives, and though malaria morbidity is falling, the impact of annual epidemics remains significant. Whilst malaria risk may increase with climate change, projections are highly uncertain; to sidestep this intractable uncertainty, adaptation efforts should improve societal ability to anticipate and mitigate individual events. Anticipation of climate-related events is made possible by seasonal climate forecasting, from which warnings of anomalous seasonal average temperature and rainfall are possible months in advance. Seasonal climate hindcasts have been used to drive climate-based models for malaria, showing significant skill for observed malaria incidence. However, the relationship between seasonal average climate and malaria risk remains unquantified. Here we explore this relationship, using a dynamic weather-driven malaria model. We also quantify key uncertainty in the malaria model by introducing variability in one of the first-order uncertainties in model formulation. Results are visualized as location-specific impact surfaces: easily integrated with ensemble seasonal climate forecasts, and intuitively communicating quantified uncertainty. Methods are demonstrated for two epidemic regions, and are not limited to malaria modeling; the visualization method could be applied to any climate impact.

  15. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent’s behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
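    The weighting rule described above is easy to state concretely. The sketch below assumes the per-model predictions and log evidences are already available and simply forms the evidence-weighted average under a uniform prior over models; it is an illustration of the principle rather than a model of its neuronal implementation.

```python
import numpy as np

def bayesian_model_average(predictions, log_evidences):
    """Combine model predictions weighted by posterior model probabilities.

    predictions   : array-like, shape (n_models, ...) of per-model predictions
    log_evidences : array-like, shape (n_models,) of log marginal likelihoods
    A uniform prior over models is assumed, so the weights are proportional
    to each model's evidence.
    """
    log_w = np.asarray(log_evidences, dtype=float)
    log_w -= log_w.max()                 # subtract the max for numerical stability
    w = np.exp(log_w)
    w /= w.sum()
    return np.tensordot(w, np.asarray(predictions, dtype=float), axes=1)

# Two models predicting a scalar: the better-supported model dominates.
print(bayesian_model_average(predictions=[1.0, 3.0], log_evidences=[-2.0, -5.0]))
```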

  16. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  17. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Hedin, E.R.

    1988-12-01

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_θ in Extrap T1 is described. The results of a series of measurements yielding β_θ as a function of externally applied toroidal field are presented. (author)

  18. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation is l...

  19. Gibbs equilibrium averages and Bogolyubov measure

    International Nuclear Information System (INIS)

    Sankovich, D.P.

    2011-01-01

    Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure

  20. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L²-norm are derived. A number of numerical examples are provided to show computational performance of the method, with the regularization parameters selected by different strategies

  1. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    Journal of Physics, July 2007, pp. 31–47. The paper presents a singularity theorem based on spatial averages.

  2. Multiphase averaging of periodic soliton equations

    International Nuclear Information System (INIS)

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  3. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type
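    For concreteness, the sketch below implements one common rule of this type, a simple long/short moving-average crossover; the window lengths and the ±1 signal convention are illustrative rather than taken from the paper.

```python
import numpy as np

def ma_crossover_signal(prices, short_window=10, long_window=50):
    """Return +1 (long) where the short moving average exceeds the long one
    and -1 (short) otherwise, aligned to the dates where both are defined."""
    prices = np.asarray(prices, dtype=float)

    def moving_average(x, w):
        return np.convolve(x, np.ones(w) / w, mode="valid")

    short_ma = moving_average(prices, short_window)[long_window - short_window:]
    long_ma = moving_average(prices, long_window)
    return np.where(short_ma > long_ma, 1, -1)

# Example on a noisy trending series.
rng = np.random.default_rng(2)
prices = 100 + np.cumsum(rng.normal(0.1, 1.0, size=500))
print(ma_crossover_signal(prices)[:20])
```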

  4. Essays on model averaging and political economics

    NARCIS (Netherlands)

    Wang, W.

    2013-01-01

    This thesis first investigates various issues related with model averaging, and then evaluates two policies, i.e. the West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple

  5. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    7 CFR 1209.12 - On average. Title 7, Agriculture (2010-01-01). Regulations of the Department of Agriculture (Continued), Agricultural Marketing Service (Marketing Agreements...), Mushroom Promotion, Research, and Consumer Information Order, Definitions § 1209...

  6. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives

  7. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  8. Reliability Estimates for Undergraduate Grade Point Average

    Science.gov (United States)

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  9. Forschungszentrum Juelich. Annual report 2016

    International Nuclear Information System (INIS)

    Frick, Frank; Lueers, Katja; Roegener, Wiebke; Stahl-Busse, Brigitte

    2017-07-01

    The annual report 2016 of the Forschungszentrum Juelich covers research activities, including highlights of structural biochemistry (Alzheimer research), materials research (skyrmions), computer simulation (e.g. of flexible blood cells), quantum physics (the 100-qubit era), photovoltaics, battery research, environmental research, climate research, biotechnology and community codes, as well as education and international cooperation.

  10. Annual accounts 1992-93

    International Nuclear Information System (INIS)

    1993-06-01

    AEA Technology is the trading name of the United Kingdom Atomic Energy Authority. The principal activity is the provision of high quality scientific and engineering services, consultancy and specialist products across a broad range. During 1992-93, AEA achieved a profit of £23.9M, representing a return of 12.2%. The detailed annual accounts are presented. (UK)

  11. Tendon surveillance requirements - average tendon force

    International Nuclear Information System (INIS)

    Fulton, J.F.

    1982-01-01

    Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, is stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)

  12. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
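    The recursion itself is compact. The sketch below applies exponential averaging to periodograms of successive data segments; the segment length and the forgetting factor alpha (which sets the time constant discussed in the abstract) are illustrative.

```python
import numpy as np

def exponentially_averaged_psd(x, segment_len=256, alpha=0.1):
    """PSD estimate by exponential averaging of successive periodograms:
    S_k = (1 - alpha) * S_{k-1} + alpha * P_k, with P_k the periodogram of
    the k-th non-overlapping segment."""
    x = np.asarray(x, dtype=float)
    n_segments = len(x) // segment_len
    psd = None
    for k in range(n_segments):
        seg = x[k * segment_len:(k + 1) * segment_len]
        periodogram = np.abs(np.fft.rfft(seg)) ** 2 / segment_len
        psd = periodogram if psd is None else (1.0 - alpha) * psd + alpha * periodogram
    return psd

# White-noise example: the estimate should be roughly flat across frequency.
print(exponentially_averaged_psd(np.random.default_rng(3).normal(size=4096))[:5])
```

    A smaller alpha corresponds to a longer time constant of the averaging process, which is the regime in which the abstract notes that the estimate's distribution approaches a Gaussian.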

  13. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)

  14. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The breakdown of the average work productivity by the factors affecting it is conducted by means of the u-substitution method.

  15. Annual report 1982

    International Nuclear Information System (INIS)

    1983-06-01

    This annual report gives a survey of the activities of ECN at The Hague and Petten, Netherlands, in 1982. These activities are concerned with energy generation and development and with scientific and technical applications of thermal neutrons, which are available from the High Flux Reactor and the Low Flux Reactor at Petten. The Energy Study Centre (ESC), a special department of ECN, is engaged in socio-economic studies on energy generation and utilization. ESC also investigates the consequences of energy scenarios. The Bureau Energy Research Projects (BEOP) coordinates and administers all national research projects, especially on flywheels, solar energy, wind power and coal combustion. After a survey of staffing and finances, the report ends with a list of ECN publications

  16. Annual report 2002

    International Nuclear Information System (INIS)

    2003-01-01

    This annual report provides information on energy and raw materials policy for 2002. The first part, devoted to security of supply, deals with the petroleum situation in 2002, new perspectives for exploitation of the continental shelf, the evolution of heavy metal prices, and the promotion of renewable energies. The second part, on the opening of markets, presents the new legislation on energy markets, the new legal framework for the natural gas transportation network, the situation of EdF, GdF and the National Company of the Rhone, and market liberalization. The third part deals with current topics such as sustainable development, the nuclear situation, high-voltage power lines and the environment, the end of mining operations in France, energy policy in the face of climate change, the National Debate on energies, and the directive on the energy efficiency of buildings. (A.L.B.)

  17. Annual Report 1980

    International Nuclear Information System (INIS)

    1981-01-01

    This Annual Report covers the activities of the Institute of the Nuclear Physics Accelerator (KVI) during the year 1980. The main lines of research are on experimental nuclear physics and on nuclear theory. The experimental work of the laboratory is centred around a large and modern, k=160 MeV AVF cyclotron that became operational at the end of 1972. The experimental work in 1980 concentrated on high-resolution nuclear structure studies via transfer reactions and inelastic scattering, on the decay properties of giant resonances, on elastic and inelastic breakup of light and heavy ions, on the investigation of continuum γ-rays, on in-beam γ-ray and conversion electron spectroscopy, and on weak interactions. Much of the theoretical work was directed towards the Interacting Boson Model (IBM). Another major effort was done on the theoretical description of relativistic heavy-ion reactions via a Boltzmann equation approach. (Auth.)

  18. Weighted estimates for the averaging integral operator

    Czech Academy of Sciences Publication Activity Database

    Opic, Bohumír; Rákosník, Jiří

    2010-01-01

    Vol. 61, No. 3 (2010), pp. 253-262. ISSN 0010-0757. R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383. Institutional research plan: CEZ:AV0Z10190503. Keywords: averaging integral operator * weighted Lebesgue spaces * weights. Subject RIV: BA - General Mathematics. Impact factor: 0.474, year: 2010. http://link.springer.com/article/10.1007%2FBF03191231

  19. Average Transverse Momentum Quantities Approaching the Lightfront

    OpenAIRE

    Boer, Daniel

    2015-01-01

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...

  20. Time-averaged MSD of Brownian motion

    OpenAIRE

    Andreanov, Alexei; Grebenkov, Denis

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...

  1. Average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1979-01-01

    Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (B̄_z = 3.γ) than near midnight (B̄_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed

  2. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  3. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

    Full Text Available Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An aggregate measure of period mortality seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.

  4. Editor's Choice - High Annual Hospital Volume is Associated with Decreased in Hospital Mortality and Complication Rates Following Treatment of Abdominal Aortic Aneurysms: Secondary Data Analysis of the Nationwide German DRG Statistics from 2005 to 2013.

    Science.gov (United States)

    Trenner, Matthias; Kuehnl, Andreas; Salvermoser, Michael; Reutersberg, Benedikt; Geisbuesch, Sarah; Schmid, Volker; Eckstein, Hans-Henning

    2018-02-01

    The aim of this study was to analyse the association between annual hospital procedural volume and post-operative outcomes following repair of abdominal aortic aneurysms (AAA) in Germany. Data were extracted from nationwide Diagnosis Related Group (DRG) statistics provided by the German Federal Statistical Office. Cases with a diagnosis of AAA (ICD-10 GM I71.3, I71.4) and procedure codes for endovascular aortic repair (EVAR; OPS 5-38a.1*) or open aortic repair (OAR; OPS 5-38.45, 5-38.47) treated between 2005 and 2013 were included. Hospitals were empirically grouped to quartiles depending on the overall annual volume of AAA procedures. A multilevel multivariable regression model was applied to adjust for sex, medical risk, type of procedure, and type of admission. Primary outcome was in hospital mortality. Secondary outcomes were complications, use of blood products, and length of stay (LOS). The association between AAA volume and in hospital mortality was also estimated as a function of continuous volume. A total of 96,426 cases, of which 11,795 (12.6%) presented as ruptured (r)AAA, were treated in >700 hospitals (annual median: 501). The crude in hospital mortality was 3.3% after intact (i)AAA repair (OAR 5.3%; EVAR 1.7%). Volume was inversely associated with mortality after OAR and EVAR. Complication rates, LOS, and use of blood products were lower in high volume hospitals. After rAAA repair, crude mortality was 40.4% (OAR 43.2%; EVAR 27.4%). An inverse association between mortality and volume was shown for rAAA repair; the same accounts for the use of blood products. When considering volume as a continuous variate, an annual caseload of 75-100 elective cases was associated with the lowest mortality risk. In hospital mortality and complication rates following AAA repair are inversely associated with annual hospital volume. The use of blood products and the LOS are lower in high volume hospitals. A minimum annual case threshold for AAA procedures might improve

  5. Average Distance Travelled To School by Primary and Secondary ...

    African Journals Online (AJOL)

    This study investigated average distance travelled to school by students in primary and secondary schools in Anambra, Enugu, and Ebonyi States and its effect on attendance. These are among the top ten densely populated and educationally advantaged States in Nigeria. Research evidence reports high dropout rates in ...

  6. The Effect of Honors Courses on Grade Point Averages

    Science.gov (United States)

    Spisak, Art L.; Squires, Suzanne Carter

    2016-01-01

    High-ability entering college students give three main reasons for not choosing to become part of honors programs and colleges; they and/or their parents believe that honors classes at the university level require more work than non-honors courses, are more stressful, and will adversely affect their self-image and grade point average (GPA) (Hill;…

  7. Fish Passage Center 2000 annual report.; ANNUAL

    International Nuclear Information System (INIS)

    Fish Passage Center

    2001-01-01

    The year 2000 hydrosystem operations illustrated two main points: (1) that the NMFS Biological Opinion on the operations of the Federal Columbia River Power System (FCRPS) fish migration measures could not be met in a slightly below average water year; and (2) the impacts and relationships of energy deregulation and volatile wholesale energy prices on the ability of the FCRPS to provide the Biological Opinion fish migration measures. In 2000, a slightly below average water year, the flow targets were not met and, when energy "emergencies" were declared, salmon protection measures were reduced. The 2000 migration year was a below average runoff volume year with an actual runoff volume of 61.1 MAF, or 96% of average. This year illustrated the ability of the hydro system to meet the migration protection measures established by the NMFS Biological Opinion. The winter operation of storage reservoirs was based upon inaccurate runoff volume forecasts which predicted a January-July runoff volume forecast at The Dalles of 102 to 105% of average, from January through June. Reservoir flood control drafts during the winter months occurred according to these forecasts. This caused an over-draft of reservoirs that resulted in less volume of water available for fish flow augmentation in the spring and the summer. The season Biological Opinion flow targets for spring and summer migrants at Lower Granite and McNary dams were not met. Several power emergencies were declared by BPA in the summer of 2000. The first, in June, was caused by loss of resources (WNP2 went off-line). The second and third emergencies were declared in August as a result of power emergencies in California and in the Northwest. The unanticipated effects of energy deregulation, power market volatility and rising wholesale electricity prices, and Californian energy deregulation reduced the ability of the FCRPS to implement fish protection measures. A Spill Plan Agreement was implemented in the FCRPS. Under this

  8. Investigations of natural groundwater hazards at the proposed Yucca Mountain high level nuclear waste repository. Part A: Geology at Yucca Mountain. Part B: Modeling of hydro-tectonic phenomena relevant to Yucca Mountain. Annual report - Nevada

    International Nuclear Information System (INIS)

    Szymanski, J.S.; Schluter, C.M.; Livingston, D.E.

    1993-05-01

    This document is an annual report describing investigations of natural groundwater hazards at the proposed Yucca Mountain, Nevada, High-Level Nuclear Waste Repository. This document describes research studies of the origin of near-surface calcite/silica deposits at Yucca Mountain. The origin of these deposits is controversial and the authors have extended and strengthened the basis of their arguments for epigenetic, metasomatic alteration of the tuffs at Yucca Mountain. This report includes stratigraphic, mineralogical, and geochronological information along with geochemical data to support the conclusions described by Livingston and Szymanski, and others. As part of their first annual report, they take this opportunity to clarify the technical basis of their concerns and summarize the critical geological field evidence and related information. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database

  9. Annual report 1973

    International Nuclear Information System (INIS)

    1973-01-01

    The GKSS scientific annual report summarizes the problems and results of the research and development projects of 1973. In contrast to earlier annual reports, a comprehensive description of the research facilities is not included. The annual report was extended by the paragraph 'Financial Report 1973' in the chapter 'Development of Geesthacht Research Centre'. The financial report gives a survey of the financial transactions and the major operations of the year under review. (orig./AK) [de

  10. NUKEM annual report 1981

    International Nuclear Information System (INIS)

    The annual report of this important undertaking in the German nuclear industry informs about its structure, holdings and activities in 1981. The report of the management is followed by remarks on the annual statement of accounts (annual balance, profit-loss accounting) and the report of the Supervisory Board. In the annex the annual balance of NUKEM GmbH/HOBEG mbH as per December 31, 1981, and the profit-loss accounting of NUKEM GmbH/HOBEG mbH for the business year 1981 are presented. (UA) [de

  11. Annual Statistical Supplement, 2002

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2002 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  12. Annual Statistical Supplement, 2010

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2010 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  13. Annual Statistical Supplement, 2007

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2007 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  14. Annual Statistical Supplement, 2001

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2001 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  15. Annual Statistical Supplement, 2016

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2016 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  16. Annual Statistical Supplement, 2011

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2011 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  17. Annual Statistical Supplement, 2005

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2005 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  18. Annual Statistical Supplement, 2015

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2015 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  19. Annual Statistical Supplement, 2003

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2003 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  20. Annual Statistical Supplement, 2017

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2017 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  1. Annual Statistical Supplement, 2008

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2008 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  2. Annual Statistical Supplement, 2014

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2014 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  3. Annual Statistical Supplement, 2004

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2004 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  4. Annual Statistical Supplement, 2000

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2000 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  5. Annual Statistical Supplement, 2009

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2009 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  6. Annual Statistical Supplement, 2006

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2006 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  7. Operator product expansion and its thermal average

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)

    1998-05-01

    QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators, which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case at finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules, when the temperature is not too low. (orig.) 7 refs.

  8. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  9. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  10. Baseline-dependent averaging in radio interferometry

    Science.gov (United States)

    Wijnholds, S. J.; Willis, A. G.; Salvini, S.

    2018-05-01

    This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.
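    A minimal way to picture BDA is that short baselines, whose visibilities vary slowly, can be averaged over longer intervals than long baselines before the decorrelation loss exceeds a chosen budget. The sketch below is not the scheme analysed in the paper or any SKA pipeline; the inverse-length scaling, reference baseline and time cap are placeholder choices for illustration only.

```python
import numpy as np

def bda_averaging_time(baseline_length_m, max_time_s=10.0, ref_baseline_m=1000.0):
    """Averaging interval per baseline: capped at max_time_s for baselines up
    to the reference length, then decreasing inversely with baseline length."""
    b = np.asarray(baseline_length_m, dtype=float)
    return max_time_s * ref_baseline_m / np.maximum(b, ref_baseline_m)

def average_visibilities(vis, times, t_avg):
    """Average the complex visibilities of one baseline in bins of width t_avg."""
    vis = np.asarray(vis)
    times = np.asarray(times, dtype=float)
    bins = np.floor((times - times[0]) / t_avg).astype(int)
    return np.array([vis[bins == b].mean() for b in np.unique(bins)])
```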

  11. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter could be considered as a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  12. Time-averaged MSD of Brownian motion

    International Nuclear Information System (INIS)

    Andreanov, Alexei; Grebenkov, Denis S

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution
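    The TAMSD itself is straightforward to compute from a recorded trajectory. The sketch below follows the standard definition (an average of squared increments over the sliding time origin); the lag grid and the simulated trajectory are illustrative, not taken from the paper.

```python
import numpy as np

def time_averaged_msd(trajectory, max_lag):
    """TAMSD(lag) = average over t of |x(t + lag) - x(t)|^2 for a 1D trajectory."""
    x = np.asarray(trajectory, dtype=float)
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in range(1, max_lag + 1)])

# Simulated Brownian trajectory with unit step variance: the TAMSD should
# grow roughly linearly with the lag.
rng = np.random.default_rng(4)
traj = np.cumsum(rng.normal(0.0, 1.0, size=10_000))
print(time_averaged_msd(traj, max_lag=5))
```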

  13. Time-dependent angularly averaged inverse transport

    International Nuclear Information System (INIS)

    Bal, Guillaume; Jollivet, Alexandre

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain.

  14. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

    We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7. ...
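
    As a small sanity check of the stated bound (not part of the paper), the sketch below verifies it on the Petersen graph, which is connected and triangle-free with n = 10 and m = 15: the bound gives (4·10 − 15 − 1)/7 ≈ 3.43, while the actual independence number is 4.

```python
# Check the bound alpha(G) >= (4n - m - 1)/7 on the Petersen graph (triangle-free, connected).
from itertools import combinations

# Petersen graph: outer 5-cycle 0..4, inner pentagram 5..9, spokes i -- i+5.
edges = {frozenset((i, (i + 1) % 5)) for i in range(5)}
edges |= {frozenset((i, i + 5)) for i in range(5)}
edges |= {frozenset((5 + i, 5 + (i + 2) % 5)) for i in range(5)}

n, m = 10, len(edges)
bound = (4 * n - m - 1) / 7

def is_independent(vertices):
    return all(frozenset(pair) not in edges for pair in combinations(vertices, 2))

# Brute force is fine for 10 vertices (2^10 subsets).
alpha = max(len(s) for k in range(n + 1)
            for s in combinations(range(n), k) if is_independent(s))

print(f"n = {n}, m = {m}, bound (4n - m - 1)/7 = {bound:.2f}, independence number = {alpha}")
```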

  15. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...... (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...

  16. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The gross properties of nuclei were investigated with a statistical model, in systems with equal and with different numbers of protons and neutrons treated separately, the Coulomb energy being taken into account in the latter case. Some average nuclear properties were calculated from the energy density of nuclear matter, using the Weizsäcker-Bethe semi-empirical mass formula generalized to compressible nuclei. In the study of the surface energy coefficient, the strong influence exerted by the Coulomb energy and the nuclear compressibility was verified. To obtain a good fit to the beta-stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.) [pt]
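
    For context, the sketch below evaluates the standard (incompressible) Weizsäcker-Bethe semi-empirical mass formula, which exhibits the competition between the surface, Coulomb and symmetry terms discussed in the abstract; the coefficients are typical textbook values, not those fitted in the paper.

```python
# Liquid-drop (Weizsäcker-Bethe) binding energy with typical textbook coefficients [MeV].
def binding_energy(Z, A):
    a_v, a_s, a_c, a_a, a_p = 15.8, 18.3, 0.714, 23.2, 12.0
    N = A - Z
    if Z % 2 == 0 and N % 2 == 0:
        pairing = a_p / A**0.5           # even-even
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -a_p / A**0.5          # odd-odd
    else:
        pairing = 0.0
    return (a_v * A                               # volume
            - a_s * A**(2 / 3)                    # surface
            - a_c * Z * (Z - 1) / A**(1 / 3)      # Coulomb
            - a_a * (A - 2 * Z)**2 / A            # symmetry
            + pairing)

for Z, A, name in [(26, 56, "Fe-56"), (82, 208, "Pb-208"), (92, 238, "U-238")]:
    print(f"{name}: B/A ≈ {binding_energy(Z, A) / A:.2f} MeV per nucleon")
```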

  17. Time-averaged MSD of Brownian motion

    Science.gov (United States)

    Andreanov, Alexei; Grebenkov, Denis S.

    2012-07-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.

  18. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.
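
    As an informal illustration of the model-averaging idea (not the Stata bma or wals commands themselves), the sketch below averages OLS estimates over all subsets of two auxiliary regressors, weighting each model by an approximate BIC-based posterior probability; the data-generating process and the weighting scheme are assumptions made for the example.

```python
# BIC-weighted model averaging over subsets of auxiliary regressors (illustrative only).
from itertools import chain, combinations
import numpy as np

rng = np.random.default_rng(2)
n = 200
beta_true = np.array([1.0, 0.5, 0.0, 0.0])         # last two regressors are irrelevant
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
y = X @ beta_true + rng.normal(size=n)

focus = [0, 1]          # always-included columns (constant and x1)
aux = [2, 3]            # columns whose inclusion is uncertain

def fit(cols):
    """OLS on the given columns; return coefficients and a BIC value."""
    b, rss, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    bic = n * np.log(rss[0] / n) + len(cols) * np.log(n)
    return b, bic

models = [focus + list(s)
          for s in chain.from_iterable(combinations(aux, k) for k in range(len(aux) + 1))]
fits = [fit(cols) for cols in models]

bics = np.array([bic for _, bic in fits])
weights = np.exp(-0.5 * (bics - bics.min()))
weights /= weights.sum()                            # approximate posterior model weights

# Model-averaged estimate of the coefficient on x1 (index 1 in every model).
beta1_avg = sum(w * coeffs[1] for w, (coeffs, _) in zip(weights, fits))
print("posterior model weights:", np.round(weights, 3))
print(f"model-averaged estimate of beta_1: {beta1_avg:.3f}")
```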

  19. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually administered intelligence test were contacted 19 months after testing. Parents of average-IQ children were less accurate in their recall of the test results. Children with above-average IQs experienced extremely low frequencies of sibling rivalry, conceit, or pressure. (Author/HLM)

  20. Laser Program annual report 1987

    Energy Technology Data Exchange (ETDEWEB)

    O'Neal, E.M.; Murphy, P.W.; Canada, J.A.; Kirvel, R.D.; Peck, T.; Price, M.E.; Prono, J.K.; Reid, S.G.; Wallerstein, L.; Wright, T.W. (eds.)

    1989-07-01

    This report discusses the following topics: target design and experiments; target materials development; laboratory x-ray lasers; laser science and technology; high-average-power solid state lasers; and ICF applications studies.