WorldWideScience

Sample records for highest average power

  1. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and on highly nonlinear photonic crystal fibres as the nonlinear medium.

  2. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  3. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  4. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam than a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department

  5. High average power linear induction accelerator development

    International Nuclear Information System (INIS)

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs

  6. High average power solid state laser power conditioning system

    International Nuclear Information System (INIS)

    Steinkraus, R.F.

    1987-01-01

    The power conditioning system for the High Average Power Laser program at Lawrence Livermore National Laboratory (LLNL) is described. The system has been operational for two years. It is high voltage, high power, fault protected, and solid state. The power conditioning system drives flashlamps that pump solid state lasers. Flashlamps are driven by silicon controlled rectifier (SCR) switched, resonantly charged (LC) discharge pulse forming networks (PFNs). The system uses fiber optics for control and diagnostics. Energy and thermal diagnostics are monitored by computers

  7. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts; such processes have been demonstrated but are not yet commercial. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulsewidth ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  8. Relation of average and highest solvent vapor concentrations in workplaces in small to medium enterprises and large enterprises.

    Science.gov (United States)

    Ukai, Hirohiko; Ohashi, Fumiko; Samoto, Hajime; Fukui, Yoshinari; Okamoto, Satoru; Moriguchi, Jiro; Ezaki, Takafumi; Takada, Shiro; Ikeda, Masayuki

    2006-04-01

    The present study was initiated to examine the relationship between workplace concentrations and the estimated highest concentrations in solvent workplaces (SWPs), with special reference to enterprise size and type of solvent work. Results of a survey conducted in 1010 SWPs in 156 enterprises were taken as a database. Workplace air was sampled at ≥5 crosses in each SWP following a grid sampling strategy. An additional air sample was grab-sampled at the site where the worker's exposure was estimated to be highest (estimated highest concentration, or EHC). The samples were analyzed for 47 solvents designated by regulation, and solvent concentrations in each sample were summed by use of an additiveness formula. From the workplace concentrations at ≥5 points, the geometric mean and geometric standard deviation were calculated as the representative workplace concentration (RWC) and the indicator of variation in workplace concentration (VWC), respectively. Comparison between RWC and EHC in the total of 1010 SWPs showed that EHC was 1.2 times RWC in large enterprises (>300 employees) and 1.7 times RWC in small to medium (SM) enterprises. Comparing SM enterprises and large enterprises, both RWC and EHC were significantly higher in SM enterprises. Further comparison by type of solvent work showed that the difference was more marked in printing, surface coating and degreasing/cleaning/wiping SWPs, whereas it was less remarkable in painting SWPs and essentially nil in testing/research laboratories. In conclusion, the present observation, discussed in reference to previous publications, suggests that RWC, EHC and the ratio EHC/RWC vary substantially among different types of solvent work as well as enterprise sizes, and are typically highest in printing SWPs in SM enterprises.

  9. High-average-power solid state lasers

    International Nuclear Information System (INIS)

    Summers, M.A.

    1989-01-01

    In 1987, a broad-based, aggressive R&D program was begun, aimed at developing the technologies necessary to make possible the use of solid state lasers capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and application of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high-quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wavefront aberrations in zig-zag slabs; understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs; and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs

  10. Minimal average consumption downlink base station power control strategy

    OpenAIRE

    Holtkamp H.; Auer G.; Haas H.

    2011-01-01

    We consider single cell multi-user OFDMA downlink resource allocation on a flat-fading channel such that average supply power is minimized while fulfilling a set of target rates. Available degrees of freedom are transmission power and duration. This paper extends our previous work on power optimal resource allocation in the mobile downlink by detailing the optimal power control strategy investigation and extracting fundamental characteristics of power optimal operation in cellular downlink. W...

  11. High Average Power Fiber Laser for Satellite Communications, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Very high average power lasers with high electrical-to-optical (E-O) efficiency, which also support pulse position modulation (PPM) formats in the MHz-data rate...

  12. Highest manageable level of radioactivity in the waste storage facilities of power plants

    International Nuclear Information System (INIS)

    Elkert, J.; Lennartsson, R.

    1991-01-01

    This project presents and discusses an investigation of the highest level of radioactivity that can be handled in the waste storage facilities. The amount of radioactivity, about 0.1% of the fuel inventory, is the same in both cases, but the amount of water is very different. The hypothetical accident was assumed to be damage to the reactor fuel caused by loss of coolant. (K.A.E.)

  13. Eighth CW and High Average Power RF Workshop

    CERN Document Server

    2014-01-01

    We are pleased to announce the next Continuous Wave and High Average RF Power Workshop, CWRF2014, to take place at Hotel NH Trieste, Trieste, Italy from 13 to 16 May, 2014. This is the eighth in the CWRF workshop series and will be hosted by Elettra - Sincrotrone Trieste S.C.p.A. (www.elettra.eu). CWRF2014 will provide an opportunity for designers and users of CW and high average power RF systems to meet and interact in a convivial environment to share experiences and ideas on applications which utilize high-power klystrons, gridded tubes, combined solid-state architectures, high-voltage power supplies, high-voltage modulators, high-power combiners, circulators, cavities, power couplers and tuners. New ideas for high-power RF system upgrades and novel ways of RF power generation and distribution will also be discussed. CWRF2014 sessions will start on Tuesday morning and will conclude on Friday lunchtime. A visit to Elettra and FERMI will be organized during the workshop. ORGANIZING COMMITTEE (OC): Al...

  14. High average power diode pumped solid state lasers for CALIOPE

    International Nuclear Information System (INIS)

    Comaskey, B.; Halpin, J.; Moran, B.

    1994-07-01

    Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers

  15. High Average Power UV Free Electron Laser Experiments At JLAB

    International Nuclear Information System (INIS)

    Douglas, David; Benson, Stephen; Evtushenko, Pavel; Gubeli, Joseph; Hernandez-Garcia, Carlos; Legg, Robert; Neil, George; Powers, Thomas; Shinn, Michelle; Tennant, Christopher; Williams, Gwyn

    2012-01-01

    Having produced 14 kW of average power at ∼2 microns, JLAB has shifted its focus to the ultraviolet portion of the spectrum. This presentation will describe the JLab UV Demo FEL, present specifics of its driver ERL, and discuss the latest experimental results from FEL experiments and machine operations.

  16. High-average-power diode-pumped Yb: YAG lasers

    International Nuclear Information System (INIS)

    Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B

    1999-01-01

    A scaleable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high average power Yb:YAG lasers that utilize a rod configured gain element. Previously, this rod configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High beam quality (M² = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual rod configuration consisting of two 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M² value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with M² < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 nsec pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes; (2) compound laser rods with flanged nonabsorbing endcaps fabricated by diffusion bonding; and (3) techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished barrel rods
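    As a rough consistency check, the per-pulse energy and an approximate peak power follow directly from the q-switched figures quoted above. This is a minimal sketch; the square-pulse peak-power estimate is an assumption of the sketch, not a number from the abstract:

```python
# Back-of-envelope check on the dual-rod q-switched numbers quoted above:
# 532 W average power at a 10 kHz repetition rate with 77 ns pulses.
avg_power = 532.0      # W, average output power
rep_rate = 10e3        # Hz, pulse repetition rate
pulse_width = 77e-9    # s, pulse duration

pulse_energy = avg_power / rep_rate      # energy per pulse, J
# Peak power assuming an idealized square temporal profile (real pulses differ).
peak_power = pulse_energy / pulse_width

print(f"{pulse_energy * 1e3:.1f} mJ per pulse, ~{peak_power / 1e6:.2f} MW peak")
# 53.2 mJ per pulse, ~0.69 MW peak
```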

  17. Thermal effects in high average power optical parametric amplifiers.

    Science.gov (United States)

    Rothhardt, Jan; Demmler, Stefan; Hädrich, Steffen; Peschel, Thomas; Limpert, Jens; Tünnermann, Andreas

    2013-03-01

    Optical parametric amplifiers (OPAs) have the reputation of being average power scalable due to the instantaneous nature of the parametric process (zero quantum defect). This Letter reveals serious challenges originating from thermal load in the nonlinear crystal caused by absorption. We investigate these thermal effects in high average power OPAs based on beta barium borate. Absorption of both pump and idler waves is identified to contribute significantly to heating of the nonlinear crystal. A temperature increase of up to 148 K with respect to the environment is observed and mechanical tensile stress up to 40 MPa is found, indicating a high risk of crystal fracture under such conditions. By restricting the idler to a wavelength range far from absorption bands and removing the crystal coating we reduce the peak temperature and the resulting temperature gradient significantly. Guidelines for further power scaling of OPAs and other nonlinear devices are given.

  18. High Average Power, High Energy Short Pulse Fiber Laser System

    Energy Technology Data Exchange (ETDEWEB)

    Messerly, M J

    2007-11-13

    Recently, continuous wave fiber laser systems with output powers in excess of 500 W with good beam quality have been demonstrated [1]. High energy, ultrafast, chirped pulse fiber laser systems have achieved record output energies of 1 mJ [2]. However, these high-energy systems have not been scaled beyond a few watts of average output power. Fiber laser systems are attractive for many applications because they offer the promise of efficient, compact, robust systems that are turnkey. Applications such as cutting, drilling and materials processing, front-end systems for high energy pulsed lasers (such as petawatts), and laser-based sources of high spatial coherence, high-flux x-rays all require high energy short pulses, and two of the three also require high average power. The challenge in creating a high energy chirped pulse fiber laser system is to find a way to scale the output energy while avoiding nonlinear effects and maintaining good beam quality in the amplifier fiber. To this end, our 3-year LDRD program sought to demonstrate a high energy, high average power fiber laser system. This work included exploring designs of large mode area optical fiber amplifiers for high energy systems as well as understanding the issues associated with chirped pulse amplification in optical fiber amplifier systems.

  19. Database of average-power damage thresholds at 1064 nm

    International Nuclear Information System (INIS)

    Rainer, F.; Hildum, E.A.; Milam, D.

    1987-01-01

    We have completed a database of average-power, laser-induced damage thresholds at 1064 nm on a variety of materials. Measurements were made with a newly constructed laser to provide design input for moderate and high average-power laser projects. The measurements were conducted with 16-ns pulses at pulse-repetition frequencies ranging from 6 to 120 Hz. Samples were typically irradiated for times ranging from a fraction of a second up to 5 minutes (36,000 shots). We tested seven categories of samples, which included antireflective coatings, high reflectors, polarizers, single and multiple layers of the same material, bare and overcoated metal surfaces, bare polished surfaces, and bulk materials. The measured damage thresholds ranged from 2 J/cm² for some metals to > 46 J/cm² for a bare polished glass substrate. 4 refs., 7 figs., 1 tab

  20. Power Efficiency Improvements through Peak-to-Average Power Ratio Reduction and Power Amplifier Linearization

    Directory of Open Access Journals (Sweden)

    Zhou G Tong

    2007-01-01

    Many modern communication signal formats, such as orthogonal frequency-division multiplexing (OFDM) and code-division multiple access (CDMA), have high peak-to-average power ratios (PARs). A signal with a high PAR is not only vulnerable in the presence of nonlinear components such as power amplifiers (PAs), but also leads to low transmission power efficiency. Selected mapping (SLM) and clipping are well-known PAR reduction techniques. We propose to combine SLM with threshold clipping and digital baseband predistortion to improve the overall efficiency of the transmission system. Testbed experiments demonstrate the effectiveness of the proposed approach.
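    To make the PAR and clipping ideas concrete, the sketch below generates one random OFDM symbol, measures its PAR, and applies threshold (envelope) clipping. The subcarrier count and clipping threshold are illustrative choices, not values from the paper, and the SLM and predistortion stages are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# One OFDM symbol: random QPSK on N subcarriers, IFFT to the time domain.
N = 256
qpsk = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)
x = np.fft.ifft(qpsk) * np.sqrt(N)  # scaled to unit average power

def par_db(sig):
    """Peak-to-average power ratio in dB."""
    p = np.abs(sig) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip_envelope(sig, threshold):
    """Threshold clipping: limit the envelope magnitude, keep the phase."""
    mag = np.abs(sig)
    scale = np.minimum(1.0, threshold / np.maximum(mag, 1e-12))
    return sig * scale

x_clipped = clip_envelope(x, threshold=1.5)  # clip ~3.5 dB above the average level
print(f"PAR before: {par_db(x):.1f} dB, after clipping: {par_db(x_clipped):.1f} dB")
```

    Clipping caps the peak at the threshold (at the cost of in-band distortion and spectral regrowth), which is why the paper pairs it with SLM and predistortion.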

  1. Recent developments in high average power driver technology

    International Nuclear Information System (INIS)

    Prestwich, K.R.; Buttram, M.T.; Rohwein, G.J.

    1979-01-01

    Inertial confinement fusion (ICF) reactors will require driver systems operating with tens to hundreds of megawatts of average power. The pulse power technology that will be required to build such drivers is in a primitive state of development. Recent developments in repetitive pulse power are discussed. A high-voltage transformer has been developed and operated at 3 MV in a single-pulse experiment and is being tested at 1.5 MV, 5 kJ and 10 pps. A low-loss, 1 MV, 10 kJ, 10 pps Marx generator is being tested. Test results from gas-dynamic spark gaps that operate in both the 100 kV and 700 kV ranges are reported. A 250 kV, 1.5 kA/cm², 30 ns electron beam diode has operated stably for 1.6 × 10⁵ pulses

  2. Using Bayes Model Averaging for Wind Power Forecasts

    Science.gov (United States)

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is, however, well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might, however, be the case that some sites tend to more accurately reflect the total output of the region, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2], the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power, it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper, the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input.
This solves the problem with longer consecutive periods where the input data
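    The BMA predictive PDF described above, a weighted average of member PDFs, can be sketched as a simple Gaussian mixture. The member forecasts, weights, and common spread below are hypothetical placeholders; in practice the weights and spread are fitted over a training period (e.g. by EM), as in Raftery et al.:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    """Gaussian density N(x; mu, sigma^2)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def bma_pdf(x, forecasts, weights, sigma):
    """BMA predictive density: sum_k w_k * N(x; f_k, sigma^2)."""
    return sum(w * normal_pdf(x, f, sigma) for f, w in zip(forecasts, weights))

forecasts = np.array([7.2, 8.1, 6.5])  # member wind-speed forecasts (m/s), hypothetical
weights = np.array([0.5, 0.3, 0.2])    # posterior model weights (sum to 1)

x = np.linspace(0.0, 15.0, 3001)
pdf = bma_pdf(x, forecasts, weights, sigma=1.0)

dx = x[1] - x[0]
mean = np.sum(x * pdf) * dx  # predictive mean = weighted mean of member forecasts
print(f"predictive mean ≈ {mean:.2f} m/s")
```

    For Gaussian members the predictive mean is exactly the weight-averaged member forecast (≈7.33 m/s here); the mixture additionally carries the spread needed for probabilistic evaluation.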

  3. Potential of high-average-power solid state lasers

    International Nuclear Information System (INIS)

    Emmett, J.L.; Krupke, W.F.; Sooy, W.R.

    1984-01-01

    We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels

  4. High-average-power laser medium based on silica glass

    Science.gov (United States)

    Fujimoto, Yasushi; Nakatsuka, Masahiro

    2000-01-01

    Silica glass is one of the most attractive materials for a high-average-power laser. We have developed a new laser material based on silica glass using the zeolite method, which is effective for uniform dispersion of rare earth ions in silica glass. A high-quality medium, free of bubbles and with very low refractive index distortion, is required to realize laser action. As the main cause of bubbling is hydroxy species remaining in the gel, we carefully chose the colloidal silica particles, the pH value of the hydrochloric acid used for hydrolysis of tetraethylorthosilicate in the sol-gel process, and the temperature and atmosphere during sintering, and thereby obtained a bubble-free, transparent rare-earth-doped silica glass. The refractive index distortion of the sample is also discussed.

  5. Strengthened glass for high average power laser applications

    International Nuclear Information System (INIS)

    Cerqua, K.A.; Lindquist, A.; Jacobs, S.D.; Lambropoulos, J.

    1987-01-01

    Recent advancements in high repetition rate and high average power laser systems have put increasing demands on the development of improved solid state laser materials with high thermal loading capabilities. The authors have developed a process for strengthening a commercially available Nd-doped phosphate glass utilizing an ion-exchange process. Results of thermal loading fracture tests on moderate size (160 x 15 x 8 mm) glass slabs have shown a 6-fold improvement in power loading capabilities for strengthened samples over unstrengthened slabs. Fractographic analysis of post-fracture samples has given insight into the mechanism of fracture in both unstrengthened and strengthened samples. Additional stress analysis calculations have supported these findings. In addition to processing the glass' surface during strengthening in a manner which preserves its post-treatment optical quality, the authors have developed an in-house optical fabrication technique utilizing acid polishing to minimize subsurface damage in samples prior to exchange treatment. Finally, extension of the strengthening process to alternate geometries of laser glass has produced encouraging results, which may expand the potential of strengthened glass in laser systems, making it an exciting prospect for many applications

  6. Investigations to the potential of the high temperature reactor for steam power processes with highest steam conditions and comparison with according conventional power plants

    International Nuclear Information System (INIS)

    Mondry, M.

    1988-04-01

    Conventional power plants with high live-steam parameters were built as early as the 1950s to improve overall efficiency. The power plant with the highest steam conditions in the Federal Republic of Germany operates at 300 bar pressure and 600 °C temperature. Because of high material costs and other problems, plants with such high conditions were not built thereafter. Standard conditions of today's power plants are in the order of 180-250 bar pressure and 535 °C temperature. As the high temperature reactor is constructed in part differently from a conventional power plant, these results regarding high steam parameters are not directly transferable. Possibilities for the technical realization of certain HTR-specific components are introduced and discussed. Different HTR power plants with steam conditions up to 350 bar pressure and 650 °C temperature are then designed. Economic considerations show that an HTR with higher steam parameters yields financial benefits. The further efficiency increase made possible by the high steam conditions is presented briefly. The work ends with a technical and economic comparison with corresponding conventional power plants. (orig./UA)

  7. A high average power beam dump for an electron accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Xianghong, E-mail: xl66@cornell.edu [Cornell Laboratory of Accelerator-based Sciences and Education, Cornell University, Ithaca, NY 14853 (United States); Bazarov, Ivan; Dunham, Bruce M.; Kostroun, Vaclav O.; Li, Yulin; Smolenski, Karl W. [Cornell Laboratory of Accelerator-based Sciences and Education, Cornell University, Ithaca, NY 14853 (United States)

    2013-05-01

    The electron beam dump for Cornell University's Energy Recovery Linac (ERL) prototype injector was designed and manufactured to absorb 600 kW of electron beam power at beam energies between 5 and 15 MeV. It is constructed from an aluminum alloy using a cylindrical/conical geometry, with water cooling channels between an inner vacuum chamber and an outer jacket. The electron beam is defocused and its centroid is rastered around the axis of the dump to dilute the power density. A flexible joint connects the inner body and the outer jacket to minimize thermal stress. A quadrant detector at the entrance to the dump monitors the electron beam position and rastering. Electron scattering calculations, thermal and thermomechanical stress analysis, and radiation calculations are presented.

  8. Energy stability in a high average power FEL

    International Nuclear Information System (INIS)

    Merminga, L.; Bisognano, J.; Delayen, J.

    1995-01-01

    Recirculating, energy-recovering linacs can be used as driver accelerators for high power FELs. Instabilities which arise from fluctuations of the cavity fields or beam current are investigated. Energy changes can cause beam loss on apertures, or, when coupled to M, phase oscillations. Both effects change the beam induced voltage in the cavities and can lead to unstable variations of the accelerating field. Stability analysis for small perturbations from equilibrium is performed and threshold currents are determined. Furthermore, the analytical model is extended to include feedback. Comparison with simulation results derived from direct integration of the equations of motion is presented. Design strategies to increase the instability threshold are discussed and the UV Demo FEL, proposed for construction at CEBAF, and the INP Recuperatron at Novosibirsk are used as examples

  9. 53 W average power few-cycle fiber laser system generating soft x rays up to the water window.

    Science.gov (United States)

    Rothhardt, Jan; Hädrich, Steffen; Klenke, Arno; Demmler, Stefan; Hoffmann, Armin; Gotschall, Thomas; Eidam, Tino; Krebs, Manuel; Limpert, Jens; Tünnermann, Andreas

    2014-09-01

    We report on a few-cycle laser system delivering sub-8-fs pulses with 353 μJ pulse energy and 25 GW of peak power at up to 150 kHz repetition rate. The corresponding average output power is as high as 53 W, which represents the highest average power obtained from any few-cycle laser architecture so far. The combination of both high average and high peak power provides unique opportunities for applications. We demonstrate high harmonic generation up to the water window and record-high photon flux in the soft x-ray spectral region. This tabletop source of high-photon flux soft x rays will, for example, enable coherent diffractive imaging with sub-10-nm resolution in the near future.

  10. Potential for efficient frequency conversion at high average power using solid state nonlinear optical materials

    International Nuclear Information System (INIS)

    Eimerl, D.

    1985-01-01

High-average-power frequency conversion using solid state nonlinear materials is discussed. Recent laboratory experience and new developments in design concepts show that current technology, a few tens of watts, may be extended by several orders of magnitude. For example, using KD*P, efficient doubling (>70%) of Nd:YAG at average powers approaching 100 kW is possible; and for doubling to the blue or ultraviolet regions, the average power may approach 1 MW. Configurations using segmented apertures permit essentially unlimited scaling of average power. High average power is achieved by configuring the nonlinear material as a set of thin plates with a large ratio of surface area to volume and by cooling the exposed surfaces with a flowing gas. The design and material fabrication of such a harmonic generator are well within current technology

  11. Improved performance of high average power semiconductor arrays for applications in diode pumped solid state lasers

    International Nuclear Information System (INIS)

    Beach, R.; Emanuel, M.; Benett, W.; Freitas, B.; Ciarlo, D.; Carlson, N.; Sutton, S.; Skidmore, J.; Solarz, R.

    1994-01-01

    The average power performance capability of semiconductor diode laser arrays has improved dramatically over the past several years. These performance improvements, combined with cost reductions pursued by LLNL and others in the fabrication and packaging of diode lasers, have continued to reduce the price per average watt of laser diode radiation. Presently, we are at the point where the manufacturers of commercial high average power solid state laser systems used in material processing applications can now seriously consider the replacement of their flashlamp pumps with laser diode pump sources. Additionally, a low cost technique developed and demonstrated at LLNL for optically conditioning the output radiation of diode laser arrays has enabled a new and scalable average power diode-end-pumping architecture that can be simply implemented in diode pumped solid state laser systems (DPSSL's). This development allows the high average power DPSSL designer to look beyond the Nd ion for the first time. Along with high average power DPSSL's which are appropriate for material processing applications, low and intermediate average power DPSSL's are now realizable at low enough costs to be attractive for use in many medical, electronic, and lithographic applications

  12. Estimation of average annual streamflows and power potentials for Alaska and Hawaii

    Energy Technology Data Exchange (ETDEWEB)

    Verdin, Kristine L. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL)

    2004-05-01

    This paper describes the work done to develop average annual streamflow estimates and power potential for the states of Alaska and Hawaii. The Elevation Derivatives for National Applications (EDNA) database was used, along with climatic datasets, to develop flow and power estimates for every stream reach in the EDNA database. Estimates of average annual streamflows were derived using state-specific regression equations, which were functions of average annual precipitation, precipitation intensity, drainage area, and other elevation-derived parameters. Power potential was calculated through the use of the average annual streamflow and the hydraulic head of each reach, which is calculated from the EDNA digital elevation model. In all, estimates of streamflow and power potential were calculated for over 170,000 stream segments in the Alaskan and Hawaiian datasets.
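The power potential described above follows directly from streamflow and hydraulic head via the standard hydropower relation P = ρgQH. A minimal sketch (the example flow, head, and efficiency figures are illustrative, not values from the EDNA datasets):

```python
# Hydropower potential of a stream reach: P = rho * g * Q * H.
# This is the standard hydrology formula; the numbers below are
# hypothetical examples, not the report's actual reach estimates.

RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def power_potential_kw(flow_m3s, head_m, efficiency=1.0):
    """Gross power potential of a reach in kilowatts."""
    return RHO * G * flow_m3s * head_m * efficiency / 1e3

# Example: 12 m^3/s of average annual flow through 25 m of head
p = power_potential_kw(12.0, 25.0)
print(round(p, 1))  # 2943.0 kW gross potential
```

In practice the streamflow Q would come from the state-specific regression equations and the head H from the digital elevation model, as the abstract describes.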

  13. National survey provides average power quality profiles for different customer groups

    International Nuclear Information System (INIS)

    Hughes, B.; Chan, J.

    1996-01-01

A three year survey, beginning in 1991, was conducted by the Canadian Electrical Association to study the levels of power quality that exist in Canada, and to determine ways to increase utility expertise in making power quality measurements. Twenty-two utilities across Canada were involved, with a total of 550 sites being monitored, including residential and commercial customers. Power disturbances, power outages and power quality were recorded for each site. To create a group average power quality plot, the transient disturbance activity for each site was normalized to a per channel, per month basis and then divided into a grid. Results showed that the average power quality provided by Canadian utilities was very good. Almost all the electrical disturbances within a customer's premises were created and stayed within those premises. Disturbances were generally beyond utility control. Utilities could, however, reduce the amount of time the steady-state voltage exceeds the CSA normal voltage upper limit. 5 figs

  14. High-Average-Power Diffraction Pulse-Compression Gratings Enabling Next-Generation Ultrafast Laser Systems

    Energy Technology Data Exchange (ETDEWEB)

    Alessi, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-11-01

    Pulse compressors for ultrafast lasers have been identified as a technology gap in the push towards high peak power systems with high average powers for industrial and scientific applications. Gratings for ultrashort (sub-150fs) pulse compressors are metallic and can absorb a significant percentage of laser energy resulting in up to 40% loss as well as thermal issues which degrade on-target performance. We have developed a next generation gold grating technology which we have scaled to the petawatt-size. This resulted in improvements in efficiency, uniformity and processing as compared to previous substrate etched gratings for high average power. This new design has a deposited dielectric material for the grating ridge rather than etching directly into the glass substrate. It has been observed that average powers as low as 1W in a compressor can cause distortions in the on-target beam. We have developed and tested a method of actively cooling diffraction gratings which, in the case of gold gratings, can support a petawatt peak power laser with up to 600W average power. We demonstrated thermo-mechanical modeling of a grating in its use environment and benchmarked with experimental measurement. Multilayer dielectric (MLD) gratings are not yet used for these high peak power, ultrashort pulse durations due to their design challenges. We have designed and fabricated broad bandwidth, low dispersion MLD gratings suitable for delivering 30 fs pulses at high average power. This new grating design requires the use of a novel Out Of Plane (OOP) compressor, which we have modeled, designed, built and tested. This prototype compressor yielded a transmission of 90% for a pulse with 45 nm bandwidth, and free of spatial and angular chirp. In order to evaluate gratings and compressors built in this project we have commissioned a joule-class ultrafast Ti:Sapphire laser system. Combining the grating cooling and MLD technologies developed here could enable petawatt laser systems to

  15. Comparison of power pulses from homogeneous and time-average-equivalent models

    International Nuclear Information System (INIS)

    De, T.K.; Rouben, B.

    1995-01-01

    The time-average-equivalent model is an 'instantaneous' core model designed to reproduce the same three dimensional power distribution as that generated by a time-average model. However it has been found that the time-average-equivalent model gives a full-core static void reactivity about 8% smaller than the time-average or homogeneous models. To investigate the consequences of this difference in static void reactivity in time dependent calculations, simulations of the power pulse following a hypothetical large-loss-of-coolant accident were performed with a homogeneous model and compared with the power pulse from the time-average-equivalent model. The results show that there is a much smaller difference in peak dynamic reactivity than in static void reactivity between the two models. This is attributed to the fact that voiding is not complete, but also to the retardation effect of the delayed-neutron precursors on the dynamic flux shape. The difference in peak reactivity between the models is 0.06 milli-k. The power pulses are essentially the same in the two models, because the delayed-neutron fraction in the time-average-equivalent model is lower than in the homogeneous model, which compensates for the lower void reactivity in the time-average-equivalent model. (author). 1 ref., 5 tabs., 9 figs
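The compensation the abstract describes, in which a lower delayed-neutron fraction offsets a lower void reactivity, can be illustrated with one-group point kinetics. The sketch below uses purely illustrative parameters (not CANDU data) to show that, for the same reactivity step, a smaller β yields a larger power rise:

```python
# One-group point-kinetics sketch: power rise after a reactivity step.
# Illustrates how a lower delayed-neutron fraction (beta) amplifies the
# response to a given reactivity, partially compensating a lower void
# reactivity as noted above.  All numbers are illustrative, not CANDU data.

def power_pulse(rho, beta, Lambda=1e-3, lam=0.08, t_end=1.0, dt=1e-5):
    """Euler-integrate dP/dt = (rho-beta)/Lambda*P + lam*C,
    dC/dt = beta/Lambda*P - lam*C, from equilibrium, returning P(t_end)."""
    P = 1.0
    C = beta / (Lambda * lam)  # precursor level in equilibrium with P = 1
    for _ in range(int(t_end / dt)):
        dP = ((rho - beta) / Lambda) * P + lam * C
        dC = (beta / Lambda) * P - lam * C
        P += dP * dt
        C += dC * dt
    return P

# Same sub-prompt-critical step, two delayed-neutron fractions:
p_low_beta = power_pulse(rho=0.003, beta=0.005)
p_high_beta = power_pulse(rho=0.003, beta=0.006)
print(p_low_beta > p_high_beta)  # smaller beta -> larger pulse for same rho
```

The prompt-jump estimate P ≈ β/(β − ρ) makes the same point analytically: reducing β from 0.006 to 0.005 raises the jump from 2.0 to 2.5 for ρ = 0.003.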

  16. Power electronic supply system with the wind turbine dedicated for average power receivers

    Science.gov (United States)

    Widerski, Tomasz; Skrzypek, Adam

    2018-05-01

This article presents the original project of the AC-DC-AC converter dedicated to low power wind turbines. Such a set can be a good solution for powering isolated objects that do not have access to the power grid, for example isolated houses, mountain lodges or forester's lodges, where they can replace expensive diesel engine generators. An additional source of energy in the form of a mini-wind farm is also a good alternative for yachts, marinas and tent sites, which are characterized by relatively low power consumption. This article presents a designed low power wind converter that is dedicated to these applications. The main design idea of the authors was to create a device that converts a very wide input voltage range directly to a stable 230VAC output voltage without a battery buffer. The authors focused on maximum safety of use and service. The converter contains thermal protection, short-circuit protection and overvoltage protection. The components have been selected in such a way as to ensure that the device functions as efficiently as possible.

  17. Generation and Applications of High Average Power Mid-IR Supercontinuum in Chalcogenide Fibers

    OpenAIRE

    Petersen, Christian Rosenberg

    2016-01-01

    Mid-infrared supercontinuum with up to 54.8 mW average power, and maximum bandwidth of 1.77-8.66 μm is demonstrated as a result of pumping tapered chalcogenide photonic crystal fibers with a MHz parametric source at 4 μm

  18. Recent advances in the development of high average power induction accelerators for industrial and environmental applications

    International Nuclear Information System (INIS)

    Neau, E.L.

    1994-01-01

Short-pulse accelerator technology developed during the early 1960's through the late 1980's is being extended to high average power systems capable of use in industrial and environmental applications. Processes requiring high dose levels and/or high volume throughput will require systems with beam power levels from several hundreds of kilowatts to megawatts. Beam accelerating potentials can range from less than 1 MeV to as much as 10 MeV depending on the type of beam, depth of penetration required, and the density of the product being treated. This paper addresses the present status of a family of high average power systems, with output beam power levels up to 200 kW, now in operation that use saturable core switches to achieve output pulse widths of 50 to 80 nanoseconds. Inductive adders and field emission cathodes are used to generate beams of electrons or x-rays at up to 2.5 MeV over areas of 1000 cm². Similar high average power technology is being used at ≤ 1 MeV to drive repetitive ion beam sources for treatment of material surfaces over 100's of cm²

  19. Application of Bayesian model averaging to measurements of the primordial power spectrum

    International Nuclear Information System (INIS)

    Parkinson, David; Liddle, Andrew R.

    2010-01-01

Cosmological parameter uncertainties are often stated assuming a particular model, neglecting the model uncertainty, even when Bayesian model selection is unable to identify a conclusive best model. Bayesian model averaging is a method for assessing parameter uncertainties in situations where there is also uncertainty in the underlying model. We apply model averaging to the estimation of the parameters associated with the primordial power spectra of curvature and tensor perturbations. We use CosmoNest and MultiNest to compute the model evidences and posteriors, using cosmic microwave data from WMAP, ACBAR, BOOMERanG, and CBI, plus large-scale structure data from the SDSS DR7. We find that the model-averaged 95% credible interval for the spectral index n_s using all of the data extends down to 0.940, where n_s is specified at a pivot scale of 0.015 Mpc⁻¹. For the tensors model averaging can tighten the credible upper limit, depending on prior assumptions.
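Mechanically, Bayesian model averaging weights each model's posterior by its posterior model probability, proportional to the model evidence. A toy sketch with NumPy (the posterior shapes and ln-evidence values are invented for illustration, not results from the paper):

```python
import numpy as np

# Toy Bayesian model averaging: combine posterior samples of a spectral
# index from two models, weighted by their evidences.  All numbers here
# are hypothetical placeholders, not the paper's results.

rng = np.random.default_rng(0)

# Hypothetical posterior samples for n_s under two competing models
post_m1 = rng.normal(0.96, 0.01, 50_000)
post_m2 = rng.normal(0.95, 0.02, 50_000)

ln_evidence = {"m1": -10.0, "m2": -11.5}   # hypothetical ln Z values
w = np.exp(np.array(list(ln_evidence.values())))
w /= w.sum()                               # posterior model probabilities

# Model-averaged posterior: resample each chain in proportion to its weight
n = 100_000
counts = rng.multinomial(n, w)
averaged = np.concatenate([
    rng.choice(post_m1, counts[0]),
    rng.choice(post_m2, counts[1]),
])
lo, hi = np.percentile(averaged, [2.5, 97.5])  # model-averaged 95% interval
print(round(lo, 3), round(hi, 3))
```

With equal-evidence models this reduces to an equal-weight mixture; a strongly favored model dominates the averaged interval.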

  20. A novel Generalized State-Space Averaging (GSSA) model for advanced aircraft electric power systems

    International Nuclear Information System (INIS)

    Ebrahimi, Hadi; El-Kishky, Hassan

    2015-01-01

Highlights: • A study model is developed for aircraft electric power systems. • A novel GSSA model is developed for the interconnected power grid. • The system’s dynamics are characterized under various conditions. • The averaged results are compared and verified with the actual model. • The obtained measured values are validated with available aircraft standards. - Abstract: The growing complexity of Advanced Aircraft Electric Power Systems (AAEPS) has made conventional state-space averaging models inadequate for systems analysis and characterization. This paper presents a novel Generalized State-Space Averaging (GSSA) model for the system analysis, control and characterization of AAEPS. The primary objective of this paper is to introduce a mathematically elegant and computationally simple model to capture the AAEPS behavior at the critical nodes of the electric grid, and to reduce some or all of the drawbacks (complexity, cost, simulation time, etc.) associated with sensor-based monitoring and computer-aided design software simulations popularly used for AAEPS characterization. It is shown in this paper that the GSSA approach overcomes the limitations of the conventional state-space averaging method, which fails to predict the behavior of AC signals in a circuit analysis. Unlike the conventional averaging method, the GSSA model presented in this paper includes both DC and AC components. This would capture the key dynamic and steady-state characteristics of the aircraft electric systems. The developed model is then examined for the aircraft system’s visualization and accuracy of computation under different loading scenarios. Through several case studies, the applicability and effectiveness of the GSSA method is verified by comparing to the actual real-time simulation model obtained from Powersim 9 (PSIM9) software environment. The simulations results represent voltage, current and load power at the major nodes of the AAEPS. It has been demonstrated that
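The generalized averaging idea can be sketched numerically: the k-th generalized state is the sliding-window Fourier coefficient ⟨x⟩ₖ(t) = (1/T)∫ₜ₋ₜ x(τ)e^(−jkωτ)dτ, where k = 0 recovers the conventional DC average and k = ±1 the AC fundamental. A minimal NumPy demonstration on a synthetic waveform (not an aircraft-bus measurement):

```python
import numpy as np

# Generalized averaging in a nutshell: the k-th sliding-window Fourier
# coefficient of a waveform.  k = 0 gives the conventional DC average
# (all a classical state-space average retains); 2*|<x>_1| recovers the
# AC fundamental's amplitude.  The signal is synthetic, for illustration.

f = 400.0                  # fundamental frequency, Hz (typical AC bus)
T = 1.0 / f
fs = 200_000.0             # sample rate, Hz
t = np.arange(0, 5 * T, 1 / fs)
x = 2.0 + 3.0 * np.cos(2 * np.pi * f * t)   # DC offset + AC component

def gssa_coeff(sig, times, k):
    """k-th generalized average over the most recent fundamental period."""
    n = int(round(T * fs))                   # samples per period
    tau, seg = times[-n:], sig[-n:]
    return np.mean(seg * np.exp(-1j * k * 2 * np.pi * f * tau))

x0 = gssa_coeff(x, t, 0).real        # DC part
x1 = 2 * abs(gssa_coeff(x, t, 1))    # AC fundamental amplitude
print(round(x0, 2), round(x1, 2))    # 2.0 3.0
```

A conventional state-space average would report only the 2.0; the k = 1 coefficient is what lets a GSSA model track the AC component as well.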

  1. A Hybrid Islanding Detection Technique Using Average Rate of Voltage Change and Real Power Shift

    DEFF Research Database (Denmark)

    Mahat, Pukar; Chen, Zhe; Bak-Jensen, Birgitte

    2009-01-01

The mainly used islanding detection techniques may be classified as active and passive techniques. Passive techniques don't perturb the system but they have larger nondetection zones, whereas active techniques have smaller nondetection zones but they perturb the system. In this paper, a new hybrid technique is proposed to solve this problem. An average rate of voltage change (passive technique) has been used to initiate a real power shift (active technique), which changes the real power of distributed generation (DG), when the passive technique cannot have a clear discrimination between islanding
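The decision logic of such a hybrid scheme can be sketched in a few lines: the passive average-rate-of-voltage-change stage decides clear cases on its own and arms the active real power shift only in the ambiguous band. The thresholds and window below are illustrative, not the paper's settings:

```python
# Sketch of the hybrid islanding-detection idea: a passive average
# rate-of-voltage-change check triggers the active real power shift only
# when the passive stage cannot discriminate.  Thresholds (per-unit volts
# per second) and the sampling interval are hypothetical.

def avg_rate_of_change(voltages, dt):
    """Mean |dV/dt| over a window of per-cycle voltage magnitude samples."""
    diffs = [abs(b - a) for a, b in zip(voltages, voltages[1:])]
    return sum(diffs) / (len(diffs) * dt)

def hybrid_islanding_decision(voltages, dt, v_thresh=0.05, hi_thresh=0.25):
    rate = avg_rate_of_change(voltages, dt)
    if rate < v_thresh:
        return "grid-connected"          # passive stage: clearly normal
    if rate > hi_thresh:
        return "islanded"                # passive stage: clear islanding
    return "apply real power shift"      # ambiguous: perturb the DG output

# Slowly drifting voltage falls in the ambiguous band:
v = [1.00, 0.998, 0.995, 0.993, 0.990]   # per-unit samples, one per cycle
print(hybrid_islanding_decision(v, dt=0.02))
```

Because the perturbation fires only in the ambiguous band, the scheme keeps the small nondetection zone of an active method while perturbing the system far less often.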

  2. Rf system modeling for the high average power FEL at CEBAF

    International Nuclear Information System (INIS)

    Merminga, L.; Fugitt, J.; Neil, G.; Simrock, S.

    1995-01-01

    High beam loading and energy recovery compounded by use of superconducting cavities, which requires tight control of microphonic noise, place stringent constraints on the linac rf system design of the proposed high average power FEL at CEBAF. Longitudinal dynamics imposes off-crest operation, which in turn implies a large tuning angle to minimize power requirements. Amplitude and phase stability requirements are consistent with demonstrated performance at CEBAF. A numerical model of the CEBAF rf control system is presented and the response of the system is examined under large parameter variations, microphonic noise, and beam current fluctuations. Studies of the transient behavior lead to a plausible startup and recovery scenario

  3. PEAK-TO-AVERAGE POWER RATIO REDUCTION USING CODING AND HYBRID TECHNIQUES FOR OFDM SYSTEM

    OpenAIRE

    Bahubali K. Shiragapur; Uday Wali

    2016-01-01

In this article, the research work investigates error correction coding techniques used to reduce the undesirable Peak-to-Average Power Ratio (PAPR) quantity. The Golay code (24, 12), Reed-Muller code (16, 11), Hamming code (7, 4) and a Hybrid technique (a combination of signal scrambling and signal distortion) proposed by us are used as the coding techniques; the simulation results show that the Hybrid technique reduces PAPR significantly as compared to Conve...
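The quantity being reduced is easy to compute: PAPR is the peak instantaneous power of the time-domain (IFFT) OFDM signal divided by its average power, usually in dB. A minimal NumPy example (QPSK mapping and 64 subcarriers are illustrative choices, not the article's exact configuration):

```python
import numpy as np

# PAPR of one OFDM symbol: peak instantaneous power over average power
# of the time-domain signal, in dB.  The constellation and subcarrier
# count are illustrative, not the article's simulation settings.

rng = np.random.default_rng(1)
N = 64                                   # subcarriers
bits = rng.integers(0, 2, 2 * N)
qpsk = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])  # QPSK symbols
x = np.fft.ifft(qpsk)                    # time-domain OFDM symbol

def papr_db(signal):
    p = np.abs(signal) ** 2
    return 10 * np.log10(p.max() / p.mean())

print(round(papr_db(x), 2))
```

For constant-modulus constellations the worst case is 10·log10(N) dB, when all subcarriers add coherently; coding and scrambling techniques work by excluding or reshaping the symbol combinations that approach that peak.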

  4. High average power Q-switched 1314 nm two-crystal Nd:YLF laser

    CSIR Research Space (South Africa)

    Botha, RC

    2015-02-01

Full Text Available. Optics Letters, Vol. 40, No. 4. R. C. Botha, W. Koen, M. J. D. Esser, C. Bollig, W. L. Combrinck, H. M. von Bergmann, and H. J. Strauss; HartRAO, P.O. Box 443...

  5. The use of induction linacs with nonlinear magnetic drive as high average power accelerators

    International Nuclear Information System (INIS)

    Birx, D.L.; Cook, E.G.; Hawkins, S.A.; Newton, M.A.; Poor, S.E.; Reginato, L.L.; Schmidt, J.A.; Smith, M.W.

    1985-01-01

The marriage of induction linac technology with Nonlinear Magnetic Modulators has produced some unique capabilities. It appears possible to produce electron beams with average currents measured in amperes, at gradients exceeding 1 MeV/m, and with power efficiencies approaching 50%. A 2 MeV, 5 kA electron accelerator is under construction at Lawrence Livermore National Laboratory (LLNL) to allow us to demonstrate some of these concepts. Progress on this project is reported here. (orig.)

  6. Average spectral power changes at the hippocampal electroencephalogram in schizophrenia model induced by ketamine.

    Science.gov (United States)

    Sampaio, Luis Rafael L; Borges, Lucas T N; Silva, Joyse M F; de Andrade, Francisca Roselin O; Barbosa, Talita M; Oliveira, Tatiana Q; Macedo, Danielle; Lima, Ricardo F; Dantas, Leonardo P; Patrocinio, Manoel Cláudio A; do Vale, Otoni C; Vasconcelos, Silvânia M M

    2018-02-01

The use of ketamine (Ket) as a pharmacological model of schizophrenia is an important tool for understanding the main mechanisms of glutamatergic regulated neural oscillations. Thus, the aim of the current study was to evaluate Ket-induced changes in the average spectral power using the hippocampal quantitative electroencephalography (QEEG). To this end, male Wistar rats were submitted to a stereotactic surgery for the implantation of an electrode in the right hippocampus. After three days, the animals were divided into four groups that were treated for 10 consecutive days with Ket (10, 50, or 100 mg/kg). Brainwaves were captured on the 1st or 10th day, respectively, for acute or repeated treatments. The administration of Ket (10, 50, or 100 mg/kg), compared with controls, induced changes in the hippocampal average spectral power of delta, theta, alpha, and low or high gamma waves, after acute or repeated treatments. Therefore, based on the alterations in the average spectral power of hippocampal waves induced by Ket, our findings might provide a basis for the use of hippocampal QEEG in animal models of schizophrenia. © 2017 Société Française de Pharmacologie et de Thérapeutique.
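Average spectral power per band is computed by integrating a power spectral density estimate over each band's frequency range. A NumPy-only sketch on a synthetic trace (the band edges are the usual EEG conventions; the signal and sampling rate are invented, not data from the study):

```python
import numpy as np

# Average spectral power in conventional EEG bands from a periodogram.
# The signal is synthetic (a strong 6 Hz "theta" tone plus noise); the
# band edges are standard conventions, not values from the study.

fs = 250.0                               # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
eeg = np.sin(2 * np.pi * 6 * t) + 0.2 * rng.standard_normal(t.size)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
psd = np.abs(np.fft.rfft(eeg)) ** 2 / (fs * t.size)   # periodogram

def band_power(lo, hi):
    """Average spectral power over the band [lo, hi) Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 100)}
powers = {name: band_power(*edges) for name, edges in bands.items()}
print(max(powers, key=powers.get))       # the 6 Hz tone dominates: theta
```

Comparing such band powers between treatment and control recordings is the kind of per-band contrast the QEEG analysis above reports.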

  7. High-throughput machining using high average power ultrashort pulse lasers and ultrafast polygon scanner

    Science.gov (United States)

    Schille, Joerg; Schneider, Lutz; Streek, André; Kloetzer, Sascha; Loeschner, Udo

    2016-03-01

In this paper, high-throughput ultrashort pulse laser machining is investigated on various industrial grade metals (Aluminium, Copper, Stainless steel) and Al2O3 ceramic at unprecedented processing speeds. This is achieved by using a high pulse repetition frequency picosecond laser with maximum average output power of 270 W in conjunction with a unique, in-house developed two-axis polygon scanner. Initially, different concepts of polygon scanners are engineered and tested to find out the optimal architecture for ultrafast and precision laser beam scanning. A remarkable scan speed of 1,000 m/s is achieved on the substrate, and thanks to the resulting low pulse overlap, thermal accumulation and plasma absorption effects are avoided at up to 20 MHz pulse repetition frequencies. In order to identify optimum processing conditions for efficient high-average power laser machining, the depths of cavities produced under varied parameter settings are analyzed and, from the results obtained, the characteristic removal values are specified. The maximum removal rate is achieved as high as 27.8 mm3/min for Aluminium, 21.4 mm3/min for Copper, 15.3 mm3/min for Stainless steel and 129.1 mm3/min for Al2O3 when full available laser power is irradiated at optimum pulse repetition frequency.

  8. Highest energy cosmic rays

    International Nuclear Information System (INIS)

    Nikolskij, S.

    1984-01-01

Primary particles of cosmic radiation with highest energies cannot, in view of their low intensity, be recorded directly, but for this purpose the phenomenon is used that these particles interact with nuclei in the atmosphere and give rise to what are known as extensive air showers. It was found that 40% of primary particles with an energy of 10¹⁵ to 10¹⁶ eV consist of protons, 12 to 15% of helium nuclei, 15% of iron nuclei, the rest of nuclei of other elements. Radiation intensity with an energy of 10¹⁸ to 10¹⁹ eV depends on the direction of incoming particles. Maximum intensity is in the direction of the centre of the nearest cluster of galaxies, minimal in the direction of the central area of our galaxy. (Ha)

  9. Specification of optical components for a high average-power laser environment

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, J.R.; Chow, R.; Rinmdahl, K.A.; Willis, J.B.; Wong, J.N.

    1997-06-25

    Optical component specifications for the high-average-power lasers and transport system used in the Atomic Vapor Laser Isotope Separation (AVLIS) plant must address demanding system performance requirements. The need for high performance optics has to be balanced against the practical desire to reduce the supply risks of cost and schedule. This is addressed in optical system design, careful planning with the optical industry, demonstration of plant quality parts, qualification of optical suppliers and processes, comprehensive procedures for evaluation and test, and a plan for corrective action.

  10. Laser properties of an improved average-power Nd-doped phosphate glass

    International Nuclear Information System (INIS)

    Payne, S.A.; Marshall, C.D.; Bayramian, A.J.

    1995-01-01

    The Nd-doped phosphate laser glass described herein can withstand 2.3 times greater thermal loading without fracture, compared to APG-1 (commercially-available average-power glass from Schott Glass Technologies). The enhanced thermal loading capability is established on the basis of the intrinsic thermomechanical properties (expansion, conduction, fracture toughness, and Young's modulus), and by direct thermally-induced fracture experiments using Ar-ion laser heating of the samples. This Nd-doped phosphate glass (referred to as APG-t) is found to be characterized by a 29% lower gain cross section and a 25% longer low-concentration emission lifetime

  11. Angle-averaged effective proton-carbon analyzing powers at intermediate energies

    International Nuclear Information System (INIS)

    Amir-Ahmadi, H.R.; Berg, A.M. van den; Hunyadi, M.; Kalantar-Nayestanaki, N.; Kis, M.; Mahjour-Shafiei, M.; Messchendorp, J.G.; Woertche, H.J.

    2006-01-01

The angle-averaged effective analyzing powers, Ā_c, for proton-carbon inclusive scattering were measured as a function of the kinetic energy of protons in a double scattering experiment. The measurements were performed in the kinetic energy range of 44.8–136.5 MeV at the center of 1–5 cm thick graphite analyzers using a polarized proton beam on a CH₂ film or liquid hydrogen serving as target for the primary scattering. These data can be used for measuring the polarization of protons emerging from other reactions such as H(d⃗,p⃗)d

  12. Development of high-average-power-laser medium based on silica glass

    International Nuclear Information System (INIS)

    Fujimoto, Yasushi; Nakatsuka, Masahiro

    2000-01-01

We have developed a high-average-power laser material based on silica glass. A new method using Zeolite X is effective for homogeneously dispersing rare earth ions in silica glass to get a high quantum yield. A high quality medium, which is bubbleless and has quite low refractive index distortion, is required for realization of laser action, and therefore we have to carefully treat the gelation and sintering processes, including the selection of colloidal silica, the pH value for hydrolysis of tetraethylorthosilicate, and the sintering history. The quality of the sintered sample and the applications are discussed. (author)

  13. Strips of hourly power options. Approximate hedging using average-based forward contracts

    International Nuclear Information System (INIS)

    Lindell, Andreas; Raab, Mikael

    2009-01-01

    We study approximate hedging strategies for a contingent claim consisting of a strip of independent hourly power options. The payoff of the contingent claim is a sum of the contributing hourly payoffs. As there is no forward market for specific hours, the fundamental problem is to find a reasonable hedge using exchange-traded forward contracts, e.g. average-based monthly contracts. The main result is a simple dynamic hedging strategy that reduces a significant part of the variance. The idea is to decompose the contingent claim into mathematically tractable components and to use empirical estimations to derive hedging deltas. Two benefits of the method are that the technique easily extends to more complex power derivatives and that only a few parameters need to be estimated. The hedging strategy based on the decomposition technique is compared with dynamic delta hedging strategies based on local minimum variance hedging, using a correlated traded asset. (author)
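At the heart of the local minimum variance approach the abstract compares against is a single estimate: hold δ = Cov(ΔP, ΔF)/Var(ΔF) units of the traded forward against the claim. A NumPy sketch on simulated price changes (the dynamics and correlation are invented, not market data):

```python
import numpy as np

# Core of a minimum-variance hedge with a correlated traded asset: regress
# the claim's payoff changes on the forward's price changes and hold
# delta = Cov(dP, dF) / Var(dF) contracts.  The paths are simulated with
# an assumed correlation, not estimated from market data.

rng = np.random.default_rng(3)
n = 5_000
dF = rng.normal(0, 1.0, n)               # monthly forward price changes
dP = 0.6 * dF + rng.normal(0, 0.5, n)    # correlated hourly-strip payoff changes

beta = np.cov(dP, dF)[0, 1] / np.var(dF, ddof=1)   # empirical hedge delta

resid = dP - beta * dF                   # hedged position's residual risk
reduction = 1 - np.var(resid) / np.var(dP)
print(round(beta, 2), round(reduction, 2))
```

The decomposition technique in the paper plays the same game component by component: each tractable piece of the claim gets its own empirically estimated delta against the average-based monthly contract.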

  14. Cooperative AF Relaying in Spectrum-Sharing Systems: Performance Analysis under Average Interference Power Constraints and Nakagami-m Fading

    KAUST Repository

    Xia, Minghua; Aissa, Sonia

    2012-01-01

    the optimal end-to-end performance, the transmit powers of the secondary source and the relays are optimized with respect to average interference power constraints at primary users and Nakagami-$m$ fading parameters of interference channels (for mathematical

  15. Highest priority in Pakistan.

    Science.gov (United States)

    Adil, E

    1968-01-01

Responding to the challenge posed by its population problem, Pakistan's national leadership gave the highest priority to family planning in its socioeconomic development plan. In Pakistan, as elsewhere in the world, the first family planning effort originated in the private sector. The Family Planning Association of Pakistan made a tentative beginning in popularizing family planning in the country. Some clinics were opened and some publicity and education were undertaken to emphasize the need for family limitation. It was recognized soon that the government needed to assume the primary responsibility if family planning efforts were to be successful. For the 1st plan period, 1955-60, about $10 million was allocated by the central government in the social welfare sector for voluntary family planning. The level of support continued on the same basis during the 2nd plan, 1960-65, but has been raised 4-fold in the 1965-70 scheme of family planning. Pakistan's Family Planning Association continues to play vital collaborative roles in designing and pretesting of prototype publicity material, involvement of voluntary social workers, and functional research in the clinical and public relations fields. The real breakthrough in the program came with the 3rd 5-year plan, 1965-70. The high priority assigned to family planning is reflected by the total initial budget of Rs.284 million (about $60,000,000) for the 5-year period. Current policy is postulated on 6 basic assumptions: family planning efforts need to be public relations-oriented; operations should be conducted through autonomous bodies with decentralized authority at all tiers down to the grassroots level, for expeditious decision making; monetary incentives play an important role; interpersonal motivation in terms of life experience of the clientele through various contacts, coupled with mass media for publicity, can produce a sociological breakthrough; supplies and services in all related disciplines should be

  16. Autoregressive moving average fitting for real standard deviation in Monte Carlo power distribution calculation

    International Nuclear Information System (INIS)

    Ueki, Taro

    2010-01-01

The noise propagation of tallies in the Monte Carlo power method can be represented by the autoregressive moving average process of orders p and p-1 [ARMA(p,p-1)], where p is an integer larger than or equal to two. The formula of the autocorrelation of ARMA(p,q), p≥q+1, indicates that ARMA(3,2) fitting is equivalent to lumping the eigenmodes of fluctuation propagation into three modes: the slow, intermediate and fast attenuation modes. Therefore, ARMA(3,2) fitting was applied to the real standard deviation estimation of fuel assemblies at particular heights. The numerical results show that straightforward ARMA(3,2) fitting is promising but a stability issue must be resolved before incorporation into the distributed version of production Monte Carlo codes. The same numerical results reveal that the average performance of ARMA(3,2) fitting is equivalent to that of the batch method in MCNP with a batch size larger than one hundred and smaller than two hundred cycles for a 1100 MWe pressurized water reactor. The bias correction of low lag autocovariances in MVP/GMVP is demonstrated to have the potential of improving the average performance of ARMA(3,2) fitting. (author)
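The batch method used as the benchmark above is simple to state: because successive power-method cycles are correlated, the naive standard deviation of the mean is too optimistic, so tallies are grouped into batches longer than the correlation length and the spread of batch means is used instead. A sketch on a synthetic AR(1) tally sequence (not actual Monte Carlo reactor output):

```python
import numpy as np

# The "batch method": group correlated cycle-by-cycle tallies into batches
# and estimate the real standard deviation of the mean from the spread of
# batch means.  The AR(1) sequence below is a synthetic stand-in for
# correlated Monte Carlo tallies.

rng = np.random.default_rng(4)
n_cycles, phi = 20_000, 0.8              # strong cycle-to-cycle correlation
x = np.empty(n_cycles)
x[0] = rng.standard_normal()
for i in range(1, n_cycles):
    x[i] = phi * x[i - 1] + rng.standard_normal()

def std_of_mean_batched(x, batch_size):
    """Standard deviation of the overall mean, from batch means."""
    m = len(x) // batch_size
    means = x[: m * batch_size].reshape(m, batch_size).mean(axis=1)
    return means.std(ddof=1) / np.sqrt(m)

naive = x.std(ddof=1) / np.sqrt(len(x))  # ignores autocorrelation
batched = std_of_mean_batched(x, 200)    # batches longer than correlation
print(batched > naive)                   # naive estimate is too optimistic
```

For AR(1) tallies the true inflation factor is sqrt((1+φ)/(1−φ)), about 3 here; an ARMA(3,2) fit pursues the same corrected standard deviation by modeling the correlation structure explicitly rather than batching it away.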

  17. Average stopping powers for electron and photon sources for radiobiological modeling and microdosimetric applications

    Science.gov (United States)

    Vassiliev, Oleg N.; Kry, Stephen F.; Grosshans, David R.; Mohan, Radhe

    2018-03-01

    This study concerns calculation of the average electronic stopping power for photon and electron sources. It addresses two problems that have not yet been fully resolved. The first is defining the electron spectrum used for averaging in a way that is most suitable for radiobiological modeling. We define it as the spectrum of electrons entering the radiation-sensitive volume (SV) within the cell nucleus, at the moment they enter the SV. For this spectrum we derive a formula that combines linearly the fluence spectrum and the source spectrum. The latter is the distribution of initial energies of electrons produced by a source. Previous studies used either the fluence or source spectra, but not both, thereby neglecting a part of the complete spectrum. Our derived formula reduces to these two prior methods in the case of high and low energy sources, respectively. The second problem is extending electron spectra to low energies. Previous studies used an energy cut-off on the order of 1 keV. However, as we show, even for high energy sources, such as 60Co, electrons with energies below 1 keV contribute about 30% to the dose. In this study all the spectra were calculated with the Geant4-DNA code and a cut-off energy of only 11 eV. We present formulas for calculating frequency- and dose-average stopping powers, numerical results for several important electron and photon sources, and tables with all the data needed to use our formulas for arbitrary electron and photon sources producing electrons with initial energies up to ∼1 MeV.
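
    The two averages named in the abstract are weighted means of the stopping power over the electron spectrum. A minimal sketch with hypothetical spectrum shapes (not the paper's Geant4-DNA data):

```python
import numpy as np

# Hypothetical electron spectrum phi(E) and stopping power S(E) on a uniform
# energy grid; shapes are illustrative only.
E = np.linspace(0.01, 1.0, 500)        # electron energy, MeV
phi = E ** -0.5                        # assumed fluence-like spectrum
S = 2.0 * E ** -0.3                    # assumed stopping power, MeV cm^2/g

# On a uniform grid the bin width cancels in the ratios below.
S_freq = np.sum(phi * S) / np.sum(phi)           # frequency-average
S_dose = np.sum(phi * S * S) / np.sum(phi * S)   # dose-average (weighted by deposited energy)
```

Because the dose-average weights the spectrum by the energy deposited, it is always at least as large as the frequency-average for a non-constant S(E).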

  18. Highest Resolution Gaspra Mosaic

    Science.gov (United States)

    1992-01-01

    This picture of asteroid 951 Gaspra is a mosaic of two images taken by the Galileo spacecraft from a range of 5,300 kilometers (3,300 miles), some 10 minutes before closest approach on October 29, 1991. The Sun is shining from the right; phase angle is 50 degrees. The resolution, about 54 meters/pixel, is the highest for the Gaspra encounter and is about three times better than that in the view released in November 1991. Additional images of Gaspra remain stored on Galileo's tape recorder, awaiting playback in November. Gaspra is an irregular body with dimensions about 19 x 12 x 11 kilometers (12 x 7.5 x 7 miles). The portion illuminated in this view is about 18 kilometers (11 miles) from lower left to upper right. The north pole is located at upper left; Gaspra rotates counterclockwise every 7 hours. The large concavity on the lower right limb is about 6 kilometers (3.7 miles) across, the prominent crater on the terminator, center left, about 1.5 kilometers (1 mile). A striking feature of Gaspra's surface is the abundance of small craters. More than 600 craters, 100-500 meters (330-1650 feet) in diameter are visible here. The number of such small craters compared to larger ones is much greater for Gaspra than for previously studied bodies of comparable size such as the satellites of Mars. Gaspra's very irregular shape suggests that the asteroid was derived from a larger body by nearly catastrophic collisions. Consistent with such a history is the prominence of groove-like linear features, believed to be related to fractures. These linear depressions, 100-300 meters wide and tens of meters deep, are in two crossing groups with slightly different morphology, one group wider and more pitted than the other. Grooves had previously been seen only on Mars's moon Phobos, but were predicted for asteroids as well. Gaspra also shows a variety of enigmatic curved depressions and ridges in the terminator region at left. The Galileo project, whose primary mission is the

  20. Development of linear proton accelerators with the high average beam power

    CERN Document Server

    Bomko, V A; Egorov, A M

    2001-01-01

    A review of the current situation in the development of powerful linear proton accelerators, carried out in many countries, is given. The purpose of their creation is to solve the problems of safe and efficient nuclear power on the basis of the accelerator-reactor complex. In this case a proton beam with an energy up to 1 GeV and an average current of 30 mA is required. At the same time there is a need for even more powerful beams, for example for the production of tritium and the transmutation of nuclear waste products. The creation of accelerators of such power will be followed by the construction of 1 GeV linear accelerators with a more moderate beam current. They are intended for the investigation of many aspects of neutron physics and neutron engineering. Problems in the creation of efficient designs for the basic and auxiliary equipment, the reliability of the systems, and the minimization of beam losses in the process of acceleration will be solved.

  1. Design and component specifications for high average power laser optical systems

    Energy Technology Data Exchange (ETDEWEB)

    O'Neil, R.W.; Sawicki, R.H.; Johnson, S.A.; Sweatt, W.C.

    1987-01-01

    Laser imaging and transport systems are considered in the regime where laser-induced damage and/or thermal distortion have significant design implications. System design and component specifications are discussed and quantified in terms of the net system transport efficiency and phase budget. Optical substrate materials, figure, surface roughness, coatings, and sizing are considered in the context of visible and near-ir optical systems that have been developed at Lawrence Livermore National Laboratory for laser isotope separation applications. In specific examples of general applicability, details of the bulk and/or surface absorption, peak and/or average power damage threshold, coating characteristics and function, substrate properties, or environmental factors will be shown to drive the component size, placement, and shape in high-power systems. To avoid overstressing commercial fabrication capabilities or component design specifications, procedures will be discussed for compensating for aberration buildup, using a few carefully placed adjustable mirrors. By coupling an aggressive measurements program on substrates and coatings to the design effort, an effective technique has been established to project high-power system performance realistically and, in the process, drive technology developments to improve performance or lower cost in large-scale laser optical systems. 13 refs.

  2. Cloud-based design of high average power traveling wave linacs

    Science.gov (United States)

    Kutsaev, S. V.; Eidelman, Y.; Bruhwiler, D. L.; Moeller, P.; Nagler, R.; Barbe Welzel, J.

    2017-12-01

    The design of industrial high average power traveling wave linacs must accurately consider some specific effects. For example, acceleration of high current beam reduces power flow in the accelerating waveguide. Space charge may influence the stability of longitudinal or transverse beam dynamics. Accurate treatment of beam loading is central to the design of high-power TW accelerators, and it is especially difficult to model in the meter-scale region where the electrons are nonrelativistic. Currently, there are two types of available codes: tracking codes (e.g. PARMELA or ASTRA) that cannot solve self-consistent problems, and particle-in-cell codes (e.g. Magic 3D or CST Particle Studio) that can model the physics correctly but are very time-consuming and resource-demanding. Hellweg is a special tool for quick and accurate electron dynamics simulation in traveling wave accelerating structures. The underlying theory of this software is based on the differential equations of motion. The effects considered in this code include beam loading, space charge forces, and external magnetic fields. We present the current capabilities of the code, provide benchmarking results, and discuss future plans. We also describe the browser-based GUI for executing Hellweg in the cloud.

  3. Design and component specifications for high average power laser optical systems

    International Nuclear Information System (INIS)

    O'Neil, R.W.; Sawicki, R.H.; Johnson, S.A.; Sweatt, W.C.

    1987-01-01

    Laser imaging and transport systems are considered in the regime where laser-induced damage and/or thermal distortion have significant design implications. System design and component specifications are discussed and quantified in terms of the net system transport efficiency and phase budget. Optical substrate materials, figure, surface roughness, coatings, and sizing are considered in the context of visible and near-ir optical systems that have been developed at Lawrence Livermore National Laboratory for laser isotope separation applications. In specific examples of general applicability, details of the bulk and/or surface absorption, peak and/or average power damage threshold, coating characteristics and function, substrate properties, or environmental factors will be shown to drive the component size, placement, and shape in high-power systems. To avoid overstressing commercial fabrication capabilities or component design specifications, procedures will be discussed for compensating for aberration buildup, using a few carefully placed adjustable mirrors. By coupling an aggressive measurements program on substrates and coatings to the design effort, an effective technique has been established to project high-power system performance realistically and, in the process, drive technology developments to improve performance or lower cost in large-scale laser optical systems. 13 refs

  4. Sub-100 fs high average power directly blue-diode-laser-pumped Ti:sapphire oscillator

    Science.gov (United States)

    Rohrbacher, Andreas; Markovic, Vesna; Pallmann, Wolfgang; Resan, Bojan

    2016-03-01

    Ti:sapphire oscillators are a proven technology to generate sub-100 fs (even sub-10 fs) pulses in the near infrared and are widely used in many high impact scientific fields. However, the need for a bulky, expensive and complex pump source, typically a frequency-doubled multi-watt neodymium or optically pumped semiconductor laser, represents the main obstacle to more widespread use. The recent development of blue diodes emitting over 1 W has opened up the possibility of directly diode-laser-pumped Ti:sapphire oscillators. Besides the lower cost and smaller footprint, direct diode pumping provides better reliability, higher efficiency and better pointing stability, to name a few advantages. The challenges that it poses are the lower absorption of Ti:sapphire at available diode wavelengths and the lower brightness compared to typical green pump lasers. For practical applications such as bio-medicine and nano-structuring, output powers in excess of 100 mW and sub-100 fs pulses are required. In this paper, we demonstrate a high average power directly blue-diode-laser-pumped Ti:sapphire oscillator without active cooling. The SESAM modelocking ensures reliable self-starting and robust operation. We will present two configurations emitting 460 mW in 82 fs pulses and 350 mW in 65 fs pulses, both operating at 92 MHz. The maximum obtained pulse energy reaches 5 nJ. A double-sided pumping scheme with two high power blue diode lasers was used for the output power scaling. The cavity design and the experimental results will be discussed in more detail.

  5. PEAK-TO-AVERAGE POWER RATIO REDUCTION USING CODING AND HYBRID TECHNIQUES FOR OFDM SYSTEM

    Directory of Open Access Journals (Sweden)

    Bahubali K. Shiragapur

    2016-03-01

    In this article, error correction coding techniques are investigated to reduce the undesirable Peak-to-Average Power Ratio (PAPR). The Golay code (24, 12), Reed-Muller code (16, 11), Hamming code (7, 4) and a Hybrid technique (a combination of signal scrambling and signal distortion) proposed by us are used as the coding techniques. The simulation results show that the Hybrid technique reduces PAPR significantly as compared to conventional and modified Selective Mapping techniques. The simulation results are validated through statistical properties: for the proposed technique the autocorrelation value is maximum, indicating a reduction in PAPR. Symbol preference based on Hamming distance is the key idea for reducing PAPR. The simulation results are discussed in detail in this article.
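
    The quantity all of these coding techniques target can be computed in a few lines. A minimal sketch (parameters illustrative, not taken from the article): the PAPR of a random QPSK-modulated OFDM symbol versus the worst case where all subcarriers carry identical symbols, which is exactly the kind of codeword that symbol preference rules out.

```python
import numpy as np

def papr_db(symbols):
    """Peak-to-average power ratio of the time-domain OFDM symbol, in dB."""
    x = np.fft.ifft(symbols)          # time-domain OFDM symbol
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(1)
N = 64                                # number of subcarriers
qpsk = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=N)

random_papr = papr_db(qpsk)           # typically several dB
worst_case = papr_db(np.ones(N))      # identical symbols align coherently: 10*log10(N)
```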

  6. Adaptive Control for Buck Power Converter Using Fixed Point Inducting Control and Zero Average Dynamics Strategies

    Science.gov (United States)

    Hoyos Velasco, Fredy Edimer; García, Nicolás Toro; Garcés Gómez, Yeison Alberto

    In this paper, the output voltage of a buck power converter is controlled by means of a quasi-sliding scheme. The Fixed Point Inducting Control (FPIC) technique is used for the control design, based on the Zero Average Dynamics (ZAD) strategy, including load estimation by means of the Least Mean Squares (LMS) method. The control scheme is tested in a Rapid Control Prototyping (RCP) system based on Digital Signal Processing (DSP) for dSPACE platform. The closed loop system shows adequate performance. The experimental and simulation results match. The main contribution of this paper is to introduce the load estimator by means of LMS, to make ZAD and FPIC control feasible in load variation conditions. In addition, comparison results for controlled buck converter with SMC, PID and ZAD-FPIC control techniques are shown.
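
    The LMS load estimation that makes ZAD-FPIC feasible under load variation can be sketched in a few lines. This is an illustrative toy, not the paper's dSPACE implementation: an unknown load conductance G in i = G·v is tracked from noisy voltage/current samples with a normalized LMS update.

```python
import numpy as np

# Hypothetical values throughout; G_true plays the role of the varying load.
rng = np.random.default_rng(2)
G_true, mu = 0.5, 0.05
G_hat = 0.0                                            # initial estimate

for _ in range(2000):
    v = rng.uniform(10.0, 14.0)                        # sampled output voltage
    i = G_true * v + 0.01 * rng.standard_normal()      # noisy load-current sample
    e = i - G_hat * v                                  # prediction error
    G_hat += mu * e / v                                # normalized LMS update
```

The estimate converges geometrically at rate (1 - mu) toward the true conductance, which the controller can then use when computing the duty cycle.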

  7. High average power CW FELs [Free Electron Laser] for application to plasma heating: Designs and experiments

    International Nuclear Information System (INIS)

    Booske, J.H.; Granatstein, V.L.; Radack, D.J.; Antonsen, T.M. Jr.; Bidwell, S.; Carmel, Y.; Destler, W.W.; Latham, P.E.; Levush, B.; Mayergoyz, I.D.; Zhang, Z.X.

    1989-01-01

    A short period wiggler (period ∼ 1 cm), sheet beam FEL has been proposed as a low-cost source of high average power (1 MW) millimeter-wave radiation for plasma heating and space-based radar applications. Recent calculations and experiments have confirmed the feasibility of this concept in such critical areas as rf wall heating, intercepted beam (''body'') current, and high voltage (0.5 - 1 MV) sheet beam generation and propagation. Results of preliminary low-gain sheet beam FEL oscillator experiments using a field emission diode and pulse line accelerator have verified that lasing occurs at the predicted FEL frequency. Measured start oscillation currents also appear consistent with theoretical estimates. Finally, we consider the possibility of using a short-period, superconducting planar wiggler for improved beam confinement, as well as access to the high gain, strong pump Compton regime with its potential for highly efficient FEL operation

  8. Research on DC-RF superconducting photocathode injector for high average power FELs

    International Nuclear Information System (INIS)

    Zhao Kui; Hao Jiankui; Hu Yanle; Zhang Baocheng; Quan Shengwen; Chen Jiaer; Zhuang Jiejia

    2001-01-01

    To obtain high average current electron beams for a high average power Free Electron Laser (FEL), a DC-RF superconducting injector is designed. It consists of a DC extraction gap, a 1+1/2 superconducting cavity and a coaxial input system. The DC gap, which takes the form of a Pierce configuration, is connected to the 1+1/2 superconducting cavity. The photocathode is attached to the negative electrode of the DC gap. The anode forms the bottom of the 1/2 cavity. Simulations are made to model the beam dynamics of the electron beams extracted by the DC gap and accelerated by the superconducting cavity. High quality electron beams with emittance lower than 3π mm·mrad can be obtained. The optimization of experiments with the DC gap, as well as the design of experiments with the coaxial coupler, have all been completed. An optimized 1+1/2 superconducting cavity is in the process of being studied and manufactured

  9. Peak-to-average power ratio reduction in interleaved OFDMA systems

    KAUST Repository

    Al-Shuhail, Shamael; Ali, Anum; Al-Naffouri, Tareq Y.

    2015-01-01

    Orthogonal frequency division multiple access (OFDMA) systems suffer from several impairments, and communication system engineers use powerful signal processing tools to combat these impairments and to keep up with capacity/rate demands. One of these impairments is a high peak-to-average power ratio (PAPR), and clipping is the simplest peak reduction scheme. However, in general, when multiple users are subjected to clipping, frequency-domain clipping distortions spread over the spectrum of all users. This results in compromised performance, and hence clipping distortions need to be mitigated at the receiver. Mitigating these distortions in the multiuser case is not simple and requires complex clipping mitigation procedures at the receiver. However, it was observed that interleaved OFDMA presents a special structure that results in only self-inflicted clipping distortions (i.e., the distortions of a particular user do not interfere with other users). In this work, we prove analytically that distortions do not spread over multiple users (while utilizing interleaved carrier assignment in OFDMA) and construct a compressed sensing system that utilizes the sparsity of the clipping distortions and recovers them for each user. We provide numerical results that validate our analysis and show promising performance for the proposed clipping recovery scheme.

  10. 7.5 MeV High Average Power Linear Accelerator System for Food Irradiation Applications

    International Nuclear Information System (INIS)

    Eichenberger, Carl; Palmer, Dennis; Wong, Sik-Lam; Robison, Greg; Miller, Bruce; Shimer, Daniel

    2005-09-01

    In December 2004 the US Food and Drug Administration (FDA) approved the use of 7.5 MeV X-rays for irradiation of food products. The increased efficiency for treatment at 7.5 MeV (versus the previous maximum allowable X-ray energy of 5 MeV) will have a significant impact on processing rates and, therefore, reduce the per-package cost of irradiation using X-rays. Titan Pulse Sciences Division is developing a new food irradiation system based on this ruling. The irradiation system incorporates a 7.5 MeV electron linear accelerator (linac) that is capable of 100 kW average power. A tantalum converter is positioned close to the exit window of the scan horn. The linac is a standing-wave RF structure based on a 5 MeV accelerator that is used for X-ray processing of food products. The linac is powered by a 1300 MHz (L-band) klystron tube. The electrical drive for the klystron is a solid-state modulator that uses inductive energy storage and solid-state opening switches. The system is designed to operate 7000 hours per year. Keywords: RF accelerator, solid-state modulator, X-ray processing

  11. Peak-to-average power ratio reduction in interleaved OFDMA systems

    KAUST Repository

    Al-Shuhail, Shamael

    2015-12-07

    Orthogonal frequency division multiple access (OFDMA) systems suffer from several impairments, and communication system engineers use powerful signal processing tools to combat these impairments and to keep up with capacity/rate demands. One of these impairments is a high peak-to-average power ratio (PAPR), and clipping is the simplest peak reduction scheme. However, in general, when multiple users are subjected to clipping, frequency-domain clipping distortions spread over the spectrum of all users. This results in compromised performance, and hence clipping distortions need to be mitigated at the receiver. Mitigating these distortions in the multiuser case is not simple and requires complex clipping mitigation procedures at the receiver. However, it was observed that interleaved OFDMA presents a special structure that results in only self-inflicted clipping distortions (i.e., the distortions of a particular user do not interfere with other users). In this work, we prove analytically that distortions do not spread over multiple users (while utilizing interleaved carrier assignment in OFDMA) and construct a compressed sensing system that utilizes the sparsity of the clipping distortions and recovers them for each user. We provide numerical results that validate our analysis and show promising performance for the proposed clipping recovery scheme.

  12. Development of high average power industrial Nd:YAG laser with peak power of 10 kW class

    International Nuclear Information System (INIS)

    Kim, Cheol Jung; Kim, Jeong Mook; Jung, Chin Mann; Kim, Soo Sung; Kim, Kwang Suk; Kim, Min Suk; Cho, Jae Wan; Kim, Duk Hyun

    1992-03-01

    We developed and commercialized an industrial pulsed Nd:YAG laser with peak power of 10 kW class for fine cutting and drilling applications. Several commercial models were investigated in design and performance. We improved its quality to the level of commercial Nd:YAG lasers through endurance tests of each part of the laser system. The maximum peak power and average power of our laser were 10 kW and 250 W, respectively. Moreover, the laser pulse width could be controlled continuously from 0.5 msec to 20 msec. Many optical parts were localized, greatly lowering the cost; only a few parts were imported, and almost 90% of the cost was localized. Also, to accelerate commercialization by the partner company, training and technology transfer were pursued through joint participation of company researchers in design and assembly from the early stage. Three Nd:YAG lasers have been assembled and will be tested in industrial manufacturing processes to prove the capability of the developed Nd:YAG laser with potential users. (Author)

  13. Micro-engineered first wall tungsten armor for high average power laser fusion energy systems

    Science.gov (United States)

    Sharafat, Shahram; Ghoniem, Nasr M.; Anderson, Michael; Williams, Brian; Blanchard, Jake; Snead, Lance; HAPL Team

    2005-12-01

    The high average power laser program is developing an inertial fusion energy demonstration power reactor with a solid first wall chamber. The first wall (FW) will be subject to high energy density radiation and high doses of high energy helium implantation. Tungsten has been identified as the candidate material for FW armor. The fundamental concern is the long-term thermo-mechanical survivability of the armor against the effects of high temperature pulsed operation and exfoliation due to the retention of implanted helium. Even if a solid tungsten armor coating were to survive the high temperature cyclic operation with minimal failure, the high helium implantation and retention would result in unacceptable material loss rates. Micro-engineered materials, such as castellated structures, plasma-sprayed nano-porous coatings and refractory foams, are suggested as first wall armor materials to address these fundamental concerns. A micro-engineered FW armor would have to be designed with specific geometric features that tolerate high cyclic heating loads and recycle most of the implanted helium without any significant failure. Micro-engineered materials are briefly reviewed. In particular, plasma-sprayed nano-porous tungsten and tungsten foams are assessed for their potential to accommodate inertial fusion specific loads. Tests show that nano-porous plasma spray coatings can be manufactured with high permeability to helium gas, while retaining relatively high thermal conductivities. Tungsten foams were shown to be able to overcome thermo-mechanical loads by cell rotation and deformation. Helium implantation tests have shown that pulsed implantation and heating release significant levels of implanted helium. Helium implantation and release from tungsten were modeled using an expanded kinetic rate theory to include the effects of pulsed implantation and thermal cycles. Although significant challenges remain, micro-engineered materials are shown to constitute potential

  14. Micro-engineered first wall tungsten armor for high average power laser fusion energy systems

    International Nuclear Information System (INIS)

    Sharafat, Shahram; Ghoniem, Nasr M.; Anderson, Michael; Williams, Brian; Blanchard, Jake; Snead, Lance

    2005-01-01

    The high average power laser program is developing an inertial fusion energy demonstration power reactor with a solid first wall chamber. The first wall (FW) will be subject to high energy density radiation and high doses of high energy helium implantation. Tungsten has been identified as the candidate material for FW armor. The fundamental concern is the long-term thermo-mechanical survivability of the armor against the effects of high temperature pulsed operation and exfoliation due to the retention of implanted helium. Even if a solid tungsten armor coating were to survive the high temperature cyclic operation with minimal failure, the high helium implantation and retention would result in unacceptable material loss rates. Micro-engineered materials, such as castellated structures, plasma-sprayed nano-porous coatings and refractory foams, are suggested as first wall armor materials to address these fundamental concerns. A micro-engineered FW armor would have to be designed with specific geometric features that tolerate high cyclic heating loads and recycle most of the implanted helium without any significant failure. Micro-engineered materials are briefly reviewed. In particular, plasma-sprayed nano-porous tungsten and tungsten foams are assessed for their potential to accommodate inertial fusion specific loads. Tests show that nano-porous plasma spray coatings can be manufactured with high permeability to helium gas, while retaining relatively high thermal conductivities. Tungsten foams were shown to be able to overcome thermo-mechanical loads by cell rotation and deformation. Helium implantation tests have shown that pulsed implantation and heating release significant levels of implanted helium. Helium implantation and release from tungsten were modeled using an expanded kinetic rate theory to include the effects of pulsed implantation and thermal cycles. Although significant challenges remain, micro-engineered materials are shown to constitute potential

  15. Industrial applications of high-average power high-peak power nanosecond pulse duration Nd:YAG lasers

    Science.gov (United States)

    Harrison, Paul M.; Ellwi, Samir

    2009-02-01

    Within the vast range of laser materials processing applications, every type of successful commercial laser has been driven by a major industrial process. For high average power, high peak power, nanosecond pulse duration Nd:YAG DPSS lasers, the enabling process is high speed surface engineering. This includes applications such as thin film patterning and selective coating removal in markets such as the flat panel displays (FPD), solar and automotive industries. Applications such as these tend to require working spots that have uniform intensity distribution using specific shapes and dimensions, so a range of innovative beam delivery systems have been developed that convert the gaussian beam shape produced by the laser into a range of rectangular and/or shaped spots, as required by demands of each project. In this paper the authors will discuss the key parameters of this type of laser and examine why they are important for high speed surface engineering projects, and how they affect the underlying laser-material interaction and the removal mechanism. Several case studies will be considered in the FPD and solar markets, exploring the close link between the application, the key laser characteristics and the beam delivery system that link these together.

  16. Performance study of highly efficient 520 W average power long pulse ceramic Nd:YAG rod laser

    Science.gov (United States)

    Choubey, Ambar; Vishwakarma, S. C.; Ali, Sabir; Jain, R. K.; Upadhyaya, B. N.; Oak, S. M.

    2013-10-01

    We report the performance study of a 2 at.% doped ceramic Nd:YAG rod for long pulse laser operation in the millisecond regime, with pulse durations in the range of 0.5-20 ms. A maximum average output power of 520 W with 180 J maximum pulse energy has been achieved with a slope efficiency of 5.4% using a dual-rod configuration, which is the highest for typical lamp-pumped ceramic Nd:YAG lasers. The laser output characteristics of the ceramic Nd:YAG rod were revealed to be nearly equivalent or superior to those of a high-quality single crystal Nd:YAG rod. The laser pump chamber and resonator were designed and optimized to achieve high efficiency and good beam quality, with a beam parameter product of 16 mm mrad (M² ≈ 47). The laser output beam was efficiently coupled through a 400 μm core diameter optical fiber with 90% overall transmission efficiency. This ceramic Nd:YAG laser will be useful for various material processing applications in industry.
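
    The two beam-quality figures quoted above are consistent with each other via the standard relation BPP = M²·λ/π at the Nd:YAG wavelength of 1064 nm, as this quick check shows:

```python
import math

# Beam parameter product from the quoted M^2, using BPP = M^2 * lambda / pi.
m_squared = 47
wavelength_m = 1.064e-6                                  # Nd:YAG, 1064 nm
bpp_mm_mrad = m_squared * wavelength_m / math.pi * 1e6   # m*rad -> mm*mrad
# bpp_mm_mrad comes out close to the quoted 16 mm mrad
```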

  17. Systematic approach to peak-to-average power ratio in OFDM

    Science.gov (United States)

    Schurgers, Curt

    2001-11-01

    OFDM multicarrier systems support high data rate wireless transmission using orthogonal frequency channels, and require no extensive equalization, yet offer excellent immunity against fading and inter-symbol interference. The major drawback of these systems is the large Peak-to-Average power Ratio (PAR) of the transmit signal, which renders a straightforward implementation very costly and inefficient. Existing approaches that attack this PAR issue are abundant, but no systematic framework or comparison between them exist to date. They sometimes even differ in the problem definition itself and consequently in the basic approach to follow. In this work, we provide a systematic approach that resolves this ambiguity and spans the existing PAR solutions. The basis of our framework is the observation that efficient system implementations require a reduced signal dynamic range. This range reduction can be modeled as a hard limiting, also referred to as clipping, where the extra distortion has to be considered as part of the total noise tradeoff. We illustrate that the different PAR solutions manipulate this tradeoff in alternative ways in order to improve the performance. Furthermore, we discuss and compare a broad range of such techniques and organize them into three classes: block coding, clip effect transformation and probabilistic.
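
    The hard-limiting model at the core of this framework is easy to make concrete. A minimal sketch (all parameters illustrative, not from the paper): clip a random multicarrier signal at a fixed amplitude, which lowers its peak-to-average ratio at the cost of an explicit distortion term that enters the total noise tradeoff.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 256
X = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=N)
x = np.fft.ifft(X)                          # time-domain multicarrier signal

def par(sig):
    """Peak-to-average power ratio (linear)."""
    p = np.abs(sig) ** 2
    return p.max() / p.mean()

A = 1.5 * np.sqrt(np.mean(np.abs(x) ** 2))  # hard limit at 1.5x the RMS amplitude
mag = np.abs(x)
y = np.where(mag > A, A * x / np.maximum(mag, 1e-30), x)   # clipped signal

clip_noise = y - x    # the extra distortion counted as part of the total noise
```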

  18. A ROBUST CLUSTER HEAD SELECTION BASED ON NEIGHBORHOOD CONTRIBUTION AND AVERAGE MINIMUM POWER FOR MANETs

    Directory of Open Access Journals (Sweden)

    S.Balaji

    2015-06-01

    Full Text Available A mobile ad hoc network is an instantaneous wireless network that is dynamic in nature. It supports single-hop and multihop communication. In this infrastructure-less network, clustering is a significant model for maintaining the topology of the network. The clustering process includes different phases such as cluster formation, cluster head selection, and cluster maintenance. Choosing the cluster head is important, as the stability of the network depends on a well-organized and resourceful cluster head. When a node has a large number of neighbors, it can act as a link between those neighbor nodes, which in turn reduces the number of hops in multihop communication. Ideally, a node with many neighbors should also have enough energy available to provide stability to the network; hence both aspects demand attention. In weight-based cluster head selection, closeness and the average minimum power required are considered for purging the ineligible nodes. The optimal set of nodes remaining after purging competes to become cluster head, and the node with the maximum weight is selected. A mathematical formulation is developed to show that the proposed method provides the optimum result. It is also suggested that the weight factor used in calculating the node weight should give precise importance to energy and node stability.
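
The two-stage selection described above (purge ineligible nodes, then elect the maximum-weight survivor) can be sketched as follows. The thresholds and weight coefficients here are hypothetical illustrations, not the paper's exact formulation:

```python
def elect_cluster_head(nodes, min_degree=2, min_energy=0.3,
                       w_degree=0.4, w_energy=0.6):
    """nodes: list of dicts with 'id', 'degree' (neighbor count) and
    'energy' (remaining battery, normalized to [0, 1])."""
    # Purging phase: drop nodes that cannot sustain the cluster-head role.
    eligible = [n for n in nodes
                if n["degree"] >= min_degree and n["energy"] >= min_energy]
    if not eligible:
        return None
    # Competition phase: the maximum-weight node becomes cluster head.
    def weight(n):
        return w_degree * n["degree"] + w_energy * n["energy"]
    return max(eligible, key=weight)["id"]

nodes = [
    {"id": "A", "degree": 5, "energy": 0.2},   # well connected, low battery
    {"id": "B", "degree": 4, "energy": 0.9},
    {"id": "C", "degree": 1, "energy": 1.0},   # too few neighbors
]
print(elect_cluster_head(nodes))  # prints B: A fails the energy check, C the degree check
```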

  19. Design of a high average-power FEL driven by an existing 20 MV electrostatic-accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Kimel, I.; Elias, L.R. [Univ. of Central Florida, Orlando, FL (United States)

    1995-12-31

    There are some important applications where high average-power radiation is required. Two examples are industrial machining and space power-beaming. Unfortunately, to date no FEL has been able to show more than 10 Watts of average power. To remedy this situation we started a program geared towards the development of high average-power FELs. As a first step we are building, in our CREOL laboratory, a compact FEL which will generate close to 1 kW in CW operation. As the next step we are also engaged in the design of a much higher average-power system based on a 20 MV electrostatic accelerator. This FEL will be capable of operating CW with a power output of 60 kW. The idea is to perform a high-power demonstration using the existing 20 MV electrostatic accelerator at the Tandar facility in Buenos Aires. This machine has been dedicated to accelerating heavy ions for experiments and applications in nuclear and atomic physics. The adaptations required to utilize the machine to accelerate electrons will be described. An important aspect of the design of the 20 MV system is the electron beam optics through almost 30 meters of accelerating and decelerating tubes as well as the undulator. Of equal importance is a careful design of the long resonator, with mirrors able to withstand high power loading and with proper heat-dissipation features.

  20. Observer design for DC/DC power converters with bilinear averaged model

    NARCIS (Netherlands)

    Spinu, V.; Dam, M.C.A.; Lazar, M.

    2012-01-01

    Increased demand for high bandwidth and high efficiency has made full state-feedback control solutions very attractive to the power-electronics community. However, full state measurement is economically prohibitive for a large range of applications. Moreover, state measurements in switching power converters
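
The bilinear averaged model mentioned in the title can be made concrete with the standard duty-ratio-averaged boost converter, whose state equations are linear in the state and in the input separately but contain a state-input product. The sketch below is an illustrative textbook model with arbitrary component values, not the paper's observer design; it integrates the model with forward Euler and recovers the ideal conversion ratio Vin/(1 - d):

```python
def boost_averaged_step(iL, vC, d, Vin=12.0, L=100e-6, C=100e-6, R=10.0, dt=1e-6):
    """One Euler step of the averaged boost-converter model.
    The (1 - d) * vC and (1 - d) * iL products are the bilinear terms."""
    diL = (Vin - (1 - d) * vC) / L        # inductor current dynamics
    dvC = ((1 - d) * iL - vC / R) / C     # capacitor voltage dynamics
    return iL + dt * diL, vC + dt * dvC

iL, vC = 0.0, 0.0
for _ in range(200000):                   # 0.2 s of simulated time at dt = 1 us
    iL, vC = boost_averaged_step(iL, vC, d=0.5)
print(round(vC, 2))  # prints 24.0, the ideal boost ratio Vin/(1-d) = 12/0.5
```

Because the duty ratio multiplies the state, a linear Luenberger observer is not directly applicable, which is what motivates observer designs tailored to the bilinear structure.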

  1. Spatial models for probabilistic prediction of wind power with application to annual-average and high temporal resolution data

    DEFF Research Database (Denmark)

    Lenzi, Amanda; Pinson, Pierre; Clemmensen, Line Katrine Harder

    2017-01-01

    average wind power generation, and for a high temporal resolution (typically wind power averages over 15-min time steps). In both cases, we use a spatial hierarchical statistical model in which spatial correlation is captured by a latent Gaussian field. We explore how such models can be handled...... with stochastic partial differential approximations of Matérn Gaussian fields together with Integrated Nested Laplace Approximations. We demonstrate the proposed methods on wind farm data from Western Denmark, and compare the results to those obtained with standard geostatistical methods. The results show...

  2. The Application of Cryogenic Laser Physics to the Development of High Average Power Ultra-Short Pulse Lasers

    Directory of Open Access Journals (Sweden)

    David C. Brown

    2016-01-01

    Full Text Available Ultrafast laser physics continues to advance at a rapid pace, driven primarily by the development of more powerful and sophisticated diode-pumping sources, the development of new laser materials, and new laser and amplification approaches such as optical parametric chirped-pulse amplification. The rapid development of high average power cryogenic laser sources seems likely to play a crucial role in realizing the long-sought goal of powerful ultrafast sources that offer concomitant high peak and average powers. In this paper, we review the optical, thermal, thermo-optic and laser parameters important to cryogenic laser technology; summarize recently achieved progress in lasers and laser materials; trace the progression of cryogenic laser technology; discuss the importance of cryogenic laser technology in ultrafast laser science; and consider what advances are likely to be achieved in the near future.

  3. Determination of the in-core power and the average core temperature of low power research reactors using gamma dose rate measurements

    International Nuclear Information System (INIS)

    Osei Poku, L.

    2012-01-01

    Most reactors incorporate out-of-core neutron detectors to monitor the reactor power. An accurate relationship between the powers indicated by these detectors and the actual core thermal power is required. This relationship is established by calibrating the thermal power. The most common method used in calibrating the thermal power of low-power reactors is the neutron activation technique. To enhance the principle of multiplicity and diversity in measuring the thermal neutron flux and/or power and the temperature difference and/or average core temperature of low-power research reactors, an alternative and complementary method has been developed in addition to the current method. Thermal neutron flux/power and temperature difference/average core temperature were correlated with the measured gamma dose rate. The thermal neutron flux and power predicted using gamma dose rate measurements were in good agreement with the calibrated/indicated thermal neutron fluxes and powers. The predicted data were also in good agreement with thermal neutron fluxes and powers obtained using the activation technique. At an indicated power of 30 kW, the measured gamma dose rate predicted thermal neutron fluxes of (1.00 ± 0.00255) × 10¹² n/cm²·s and (0.987 ± 0.00243) × 10¹² n/cm²·s, which corresponded to powers of (30.06 ± 0.075) kW and (29.6 ± 0.073) kW for the normal pool water level and 40 cm below the normal level, respectively. At an indicated power of 15 kW, the measured gamma dose rate predicted thermal neutron fluxes of (5.07 ± 0.025) × 10¹¹ n/cm²·s and (5.12 ± 0.024) × 10¹¹ n/cm²·s, which corresponded to powers of (15.21 ± 0.075) kW and (15.36 ± 0.073) kW for the normal pool water level and 40 cm below the normal level, respectively. The power predicted by this work also compared well with the power obtained from a three-dimensional neutronic analysis for the GHARR-1 core. The predicted power also compares well with the power calculated using a correlation equation obtained from
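
The correlation between gamma dose rate and thermal power described above amounts to fitting a calibration line through the origin and then using it for prediction. The sketch below does exactly that with hypothetical dose-rate/power pairs; the values and units are illustrative, not the thesis data:

```python
def fit_through_origin(x, y):
    """Least-squares slope k for the calibration line y = k * x."""
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

# Hypothetical calibration pairs: (gamma dose rate reading, indicated power in kW).
dose = [10.0, 20.0, 40.0]
power = [7.5, 15.1, 29.9]

k = fit_through_origin(dose, power)
predicted = k * 30.0   # power predicted from a new dose-rate reading of 30.0
print(round(predicted, 1))
```

Once the slope is fixed against the activation-technique calibration, any subsequent dose-rate reading yields a power estimate, which is the "multiplicity and diversity" benefit the abstract describes.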

  4. Efficient processing of CFRP with a picosecond laser with up to 1.4 kW average power

    Science.gov (United States)

    Onuseit, V.; Freitag, C.; Wiedenmann, M.; Weber, R.; Negel, J.-P.; Löscher, A.; Abdou Ahmed, M.; Graf, T.

    2015-03-01

    Laser processing of carbon fiber reinforced plastic (CFRP) is a very promising method for solving many of the challenges of large-volume production of lightweight constructions in the automotive and airplane industries. However, the laser process is currently limited by two main issues. First, the quality might be reduced due to thermal damage, and second, the high process energy needed for sublimation of the carbon fibers requires laser sources with high average power for productive processing. To keep the thermal damage of the CFRP below 10 μm, intensities above 10⁸ W/cm² are needed. To reach these high intensities in the processing area, ultra-short pulse laser systems are favored. Unfortunately, the average power of commercially available laser systems is up to now in the range of several tens to a few hundred watts. To sublimate the carbon fibers, a large volume-specific enthalpy of 85 J/mm³ is necessary. This means, for example, that cutting 2 mm thick material with a kerf width of 0.2 mm at an industry-typical 100 mm/s requires several kilowatts of average power. At the IFSW, a thin-disk multipass amplifier yielding a maximum average output power of 1100 W (300 kHz, 8 ps, 3.7 mJ) allowed for the first time the processing of CFRP at this average power and pulse energy level with picosecond pulse duration. With this unique laser system, cutting of CFRP with a thickness of 2 mm at an effective average cutting speed of 150 mm/s with thermal damage below 10 μm was demonstrated.
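
The power budget quoted above follows directly from the volume-specific sublimation enthalpy: the average power must supply 85 J for every cubic millimetre of material removed per second. A minimal check of that arithmetic:

```python
def required_average_power(enthalpy_j_per_mm3, thickness_mm, kerf_mm, speed_mm_s):
    """Average power (W) needed to sublimate the kerf volume removed per second.
    Ignores all losses, so this is a lower bound on the laser power."""
    volume_rate = thickness_mm * kerf_mm * speed_mm_s   # removed volume, mm^3/s
    return enthalpy_j_per_mm3 * volume_rate             # J/s = W

p = required_average_power(85, 2.0, 0.2, 100)
print(p)  # prints 3400.0 - "several kilowatts", as stated above
```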

  5. Synchronously pumped optical parametric oscillation in periodically poled lithium niobate with 1-W average output power

    NARCIS (Netherlands)

    Graf, T.; McConnell, G.; Ferguson, A.I.; Bente, E.A.J.M.; Burns, D.; Dawson, M.D.

    1999-01-01

    We report on a rugged all-solid-state laser source of near-IR radiation in the range of 1461–1601 nm based on a high-power Nd:YVO4 laser that is mode locked by a semiconductor saturable Bragg reflector as the pump source of a synchronously pumped optical parametric oscillator with a periodically

  6. High average power scaling of optical parametric amplification through cascaded difference-frequency generators

    Science.gov (United States)

    Jovanovic, Igor; Comaskey, Brian J.

    2004-09-14

    A first pump pulse and a signal pulse are injected into a first optical parametric amplifier. This produces a first amplified signal pulse. At least one additional pump pulse and the first amplified signal pulse are injected into at least one additional optical parametric amplifier producing an increased power coherent optical pulse.

  7. Program THEK energy production units of average power and using thermal conversion of solar radiation

    Science.gov (United States)

    1978-01-01

    General studies undertaken by the C.N.R.S. in the field of solar power plants have raised the problem of building energy production units in the medium range of electrical power, on the order of 100 kW. Among the possible solutions, the principle of distributed heliothermal converters has been selected as being, in the current state of the art, the most advantageous. This principle consists of converting concentrated radiation into heat using a series of heliothermal conversion modules scattered over the ground; the produced heat is collected by a heat-carrying fluid circulating inside a thermal loop leading to a device for both regulation and storage.

  8. Mixed-mode distribution systems for high average power electron cyclotron heating

    International Nuclear Information System (INIS)

    White, T.L.; Kimrey, H.D.; Bigelow, T.S.

    1984-01-01

    The ELMO Bumpy Torus-Scale (EBT-S) experiment consists of 24 simple magnetic mirrors joined end-to-end to form a torus of closed magnetic field lines. In this paper, we first describe an 80% efficient mixed-mode unpolarized heating system which couples 28-GHz microwave power to the midplane of the 24 EBT-S cavities. The system consists of two radiused bends feeding a quasi-optical mixed-mode toroidal distribution manifold. The balance of power to the 24 cavities is determined by detailed computer ray tracing. A second 28-GHz electron cyclotron heating (ECH) system using a polarized-grid high-field launcher is described. The launcher penetrates the fundamental ECH resonant surface without a vacuum window, with no observable breakdown up to 1 kW/cm² (source limited) and 24 kW delivered to the plasma. This system uses the same mixed-mode output as the first system but polarizes the launched power by using a grid of WR42 apertures. The efficiency of this system is 32%, but it can be improved by feeding multiple launchers from a separate distribution manifold.

  9. Green-diode-pumped femtosecond Ti:Sapphire laser with up to 450 mW average power.

    Science.gov (United States)

    Gürel, K; Wittwer, V J; Hoffmann, M; Saraceno, C J; Hakobyan, S; Resan, B; Rohrbacher, A; Weingarten, K; Schilt, S; Südmeyer, T

    2015-11-16

    We investigate power scaling of green-diode-pumped Ti:Sapphire lasers in continuous-wave (CW) and mode-locked operation. In a first configuration, with a total pump power of up to 2 W incident on the crystal, we achieved a CW power of up to 440 mW and self-starting mode locking with up to 200 mW average power in 68-fs pulses, using a semiconductor saturable absorber mirror (SESAM) as the saturable absorber. In a second configuration, with up to 3 W of pump power incident on the crystal, we achieved up to 650 mW in CW operation and up to 450 mW in 58-fs pulses using Kerr-lens mode locking (KLM). The shortest pulse duration was 39 fs, achieved at 350 mW average power using KLM. The mode-locked laser generates a pulse train at repetition rates around 400 MHz. No complex cooling system is required: neither the SESAM nor the Ti:Sapphire crystal is actively cooled; only air cooling is applied to the pump diodes using a small fan. Because of mass production for laser displays, we expect that prices for green laser diodes will become very favorable in the near future, opening the door to low-cost Ti:Sapphire lasers. This will be highly attractive for potential mass applications such as biomedical imaging and sensing.

  10. High energy, high average power solid state green or UV laser

    Science.gov (United States)

    Hackel, Lloyd A.; Norton, Mary; Dane, C. Brent

    2004-03-02

    A system for producing a green or UV output beam for illuminating a large area with relatively high beam fluence. A Nd:glass laser produces a near-infrared output by means of an oscillator that generates a high-quality but low-power beam, followed by multi-pass amplification in a zig-zag slab amplifier with wavefront correction in a phase conjugator at the midway point of the multi-pass amplification. The green or UV output is generated by means of conversion crystals that follow the final propagation through the zig-zag slab amplifier.

  11. Design of an L-band normally conducting RF gun cavity for high peak and average RF power

    Energy Technology Data Exchange (ETDEWEB)

    Paramonov, V., E-mail: paramono@inr.ru [Institute for Nuclear Research of Russian Academy of Sciences, 60-th October Anniversary prospect 7a, 117312 Moscow (Russian Federation); Philipp, S. [Deutsches Elektronen-Synchrotron DESY, Platanenallee 6, D-15738 Zeuthen (Germany); Rybakov, I.; Skassyrskaya, A. [Institute for Nuclear Research of Russian Academy of Sciences, 60-th October Anniversary prospect 7a, 117312 Moscow (Russian Federation); Stephan, F. [Deutsches Elektronen-Synchrotron DESY, Platanenallee 6, D-15738 Zeuthen (Germany)

    2017-05-11

    To provide high quality electron bunches for linear accelerators used in free electron lasers and particle colliders, RF gun cavities operate with extreme electric fields, resulting in a high pulsed RF power. The main L-band superconducting linacs of such facilities also require a long RF pulse length, resulting in a high average dissipated RF power in the gun cavity. The newly developed cavity, based on the proven advantages of the existing DESY RF gun cavities, underwent significant changes. The shape of the cells was optimized to reduce the maximal surface electric field and the RF loss power. Furthermore, the cavity is equipped with an RF probe to measure the field amplitude and phase. The elaborated cooling-circuit design results in a lower temperature rise on the cavity RF surface and permits a higher dissipated RF power. The paper presents the main solutions and results of the cavity design.

  12. Catching the Highest Energy Neutrinos

    Energy Technology Data Exchange (ETDEWEB)

    Stanev, Todor [Bartol Research Institute and Department of Physics and Astronomy, University of Delaware, Newark, DE 19716 (United States)

    2011-08-15

    We briefly discuss the possible sources of ultrahigh energy neutrinos and the methods for their detection. Then we present the results obtained by different experiments for detection of the highest energy neutrinos.

  13. Average stopping powers and the use of non-analyte spiking for the determination of phosphorus and sodium by PIPPS

    International Nuclear Information System (INIS)

    Olivier, C.; Morland, H.J.

    1991-01-01

    By using particle-induced prompt photon spectrometry (PIPPS), the ratios of the average stopping powers in samples and standards can be used to determine elemental compositions. Since the average stopping powers in the samples are in general unknown, this procedure poses a problem. It has been shown that by spiking the sample with a known amount of a compound of known stopping power containing a non-analyte element, appropriate stopping powers in the samples can be determined by measuring the prompt gamma-ray yields induced in the spike. Using 5-MeV protons and lithium compounds as non-analyte spikes, sodium and phosphorus were determined in ivory, while sodium was determined in geological samples. For the stopping-power determinations in the samples, the 429-keV ⁷Li n(1,0) and 478-keV ⁷Li(1,0) gamma rays were measured, while for the phosphorus and sodium determinations the high-yield 1266-keV ³¹P(1,0), 440-keV ²³Na(1,0), 1634-keV ²³Na α(1,0) and 1637-keV ²³Na(2,1) gamma rays were used. The method was tested by analyzing the standard reference materials SRM 91, 120c and 694.

  14. Electrical method for the measurements of volume averaged electron density and effective coupled power to the plasma bulk

    Science.gov (United States)

    Henault, M.; Wattieaux, G.; Lecas, T.; Renouard, J. P.; Boufendi, L.

    2016-02-01

    Nanoparticles growing in, or injected into, a low-pressure cold plasma generated by a radio-frequency capacitively coupled discharge induce strong modifications in the electrical parameters of both the plasma and the discharge. In this paper, a non-intrusive method based on the measurement of the plasma impedance is used to determine the volume-averaged electron density and the effective power coupled to the plasma bulk. Good agreement is found when the results are compared to those given by other well-established methods.

  15. High-throughput machining using a high-average power ultrashort pulse laser and high-speed polygon scanner

    Science.gov (United States)

    Schille, Joerg; Schneider, Lutz; Streek, André; Kloetzer, Sascha; Loeschner, Udo

    2016-09-01

    High-throughput ultrashort pulse laser machining is investigated on various industrial-grade metals (aluminum, copper, and stainless steel) and Al₂O₃ ceramic at unprecedented processing speeds. This is achieved by using a high-average-power picosecond laser in conjunction with a unique, in-house developed polygon-mirror-based biaxial scanning system. Different concepts of polygon scanners were engineered and tested to find the best architecture for high-speed, high-precision laser beam scanning. In order to identify the optimum conditions for efficient processing with high average laser powers, the depths of cavities machined into the samples under varying processing parameter settings are analyzed and, from the results obtained, the characteristic removal values are specified. For overlapping pulses of optimum fluence, the removal rate is as high as 27.8 mm³/min for aluminum, 21.4 mm³/min for copper, 15.3 mm³/min for stainless steel, and 129.1 mm³/min for Al₂O₃ at an average laser power of 187 W. On stainless steel, it is demonstrated that the removal rate increases to 23.3 mm³/min when the laser beam moves very fast across the surface. This is thanks to the low pulse overlap achieved with an 800 m/s beam deflection speed; laser beam shielding can thus be avoided even when irradiating high-repetition-rate 20-MHz pulses.
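
The shielding argument above rests on the spacing between successive pulses on the workpiece, which is simply the beam deflection speed divided by the repetition rate. A quick check with the figures quoted in the abstract:

```python
def pulse_spacing_um(scan_speed_m_s, rep_rate_hz):
    """Center-to-center distance between consecutive pulses on the workpiece,
    in micrometres."""
    return scan_speed_m_s / rep_rate_hz * 1e6

spacing = pulse_spacing_um(800, 20e6)
print(spacing)  # prints 40.0: 40 um between successive 20-MHz pulses at 800 m/s
```

At 40 µm spacing, consecutive pulses barely overlap for typical focused spot sizes, so the ablation plume of one pulse no longer shields the next.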

  16. The Mercury Laser System-A scaleable average-power laser for fusion and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Ebbers, C A; Moses, E I

    2008-03-26

    Nestled in a valley between the whitecaps of the Pacific and the snowcapped crests of the Sierra Nevada, Lawrence Livermore National Laboratory (LLNL) is home to the nearly complete National Ignition Facility (NIF). The purpose of NIF is to create a miniature star on demand. An enormous amount of laser light energy (1.8 MJ in a pulse that is 20 ns in duration) will be focused into a small gold cylinder approximately the size of a pencil eraser. Centered in the gold cylinder (or hohlraum) will be a nearly perfect sphere filled with a complex mixture of hydrogen gas isotopes that is similar to the atmosphere of our Sun. During experiments, the laser light will hit the inside of the gold cylinder, heating the metal until it emits X-rays (similar to how your electric stove coil emits visible red light when heated). The X-rays will be used to compress the hydrogen-like gas with such pressure that the gas atoms will combine or 'fuse' together, producing the next heavier element (helium) and releasing energy in the form of energetic particles. 2010 will mark the first credible attempt at this world-changing event: the achievement of fusion energy 'break-even' on Earth using NIF, the world's largest laser! NIF is anticipated to eventually perform this immense technological accomplishment once per week, with the capability of firing up to six shots per day - eliminating the need for continued underground testing of our nation's nuclear stockpile, in addition to opening up new realms of science. But what about the day after NIF achieves ignition? Although NIF will achieve fusion energy break-even and gain, the facility is not designed to harness the enormous potential of fusion for energy generation. A fusion power plant, as opposed to a world-class engineering research facility, would require that the laser deliver drive pulses nearly 100,000 times more frequently - a rate closer to 10 shots per second as opposed to several shots per day.

  17. The Mercury Laser System-A scaleable average-power laser for fusion and beyond

    International Nuclear Information System (INIS)

    Ebbers, C.A.; Moses, E.I.

    2009-01-01

    Nestled in a valley between the whitecaps of the Pacific and the snowcapped crests of the Sierra Nevada, Lawrence Livermore National Laboratory (LLNL) is home to the nearly complete National Ignition Facility (NIF). The purpose of NIF is to create a miniature star on demand. An enormous amount of laser light energy (1.8 MJ in a pulse that is 20 ns in duration) will be focused into a small gold cylinder approximately the size of a pencil eraser. Centered in the gold cylinder (or hohlraum) will be a nearly perfect sphere filled with a complex mixture of hydrogen gas isotopes that is similar to the atmosphere of our Sun. During experiments, the laser light will hit the inside of the gold cylinder, heating the metal until it emits X-rays (similar to how your electric stove coil emits visible red light when heated). The X-rays will be used to compress the hydrogen-like gas with such pressure that the gas atoms will combine or 'fuse' together, producing the next heavier element (helium) and releasing energy in the form of energetic particles. 2010 will mark the first credible attempt at this world-changing event: the achievement of fusion energy 'break-even' on Earth using NIF, the world's largest laser! NIF is anticipated to eventually perform this immense technological accomplishment once per week, with the capability of firing up to six shots per day - eliminating the need for continued underground testing of our nation's nuclear stockpile, in addition to opening up new realms of science. But what about the day after NIF achieves ignition? Although NIF will achieve fusion energy break-even and gain, the facility is not designed to harness the enormous potential of fusion for energy generation. A fusion power plant, as opposed to a world-class engineering research facility, would require that the laser deliver drive pulses nearly 100,000 times more frequently - a rate closer to 10 shots per second as opposed to several shots per day.

  18. Lowest cost due to highest productivity and highest quality

    Science.gov (United States)

    Wenk, Daniel

    2003-03-01

    Since global purchasing in the automotive industry has been adopted around the world, one main factor makes a Tailored Blank (TB) supplier successful today: producing the highest quality at the lowest cost. Given that Tailored Blanks, which today may account for up to one third of a car body's weight, are purchased on the free market from different steel suppliers, especially in Europe and NAFTA, the philosophy on the OEM side has been changing gradually towards tough evaluation criteria. "No risk at the stamping side" calls for top-quality Tailored- or Tubular-Blank products. Outsourcing of Tailored Blanks started in Japan, but up to now without any quality requirement from the OEM side such as ISO 13919-1B (the welding quality standard in Europe and the USA). Increased competition will automatically push the quality level, and the ongoing approach of combining high-strength steel with Tailored and Tubular Blanks will demand even more reliable system concepts that enable welding narrow seams at the highest speed. Besides producing quality, which is the key to reducing one of the most important cost drivers, material scrap, in-line quality systems with true and reliable evaluation are becoming a must on all weld systems. Traceability of all process-related data, submitted to interfaces according to customer requests, in combination with ghost-shift operation of TB systems, represents tomorrow's state-of-the-art solution for Tailored-Blank facilities.

  19. Optimization and Annual Average Power Predictions of a Backward Bent Duct Buoy Oscillating Water Column Device Using the Wells Turbine.

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Christopher S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bull, Diana L [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Willits, Steven M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Fontaine, Arnold A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-08-01

    This Technical Report presents work completed by The Applied Research Laboratory at The Pennsylvania State University, in conjunction with Sandia National Labs, on the optimization of the power conversion chain (PCC) design to maximize the Average Annual Electric Power (AAEP) output of an Oscillating Water Column (OWC) device. The design consists of two independent stages. First, the design of a floating OWC, a Backward Bent Duct Buoy (BBDB), and second the design of the PCC. The pneumatic power output of the BBDB in random waves is optimized through the use of a hydrodynamically coupled, linear, frequency-domain, performance model that links the oscillating structure to internal air-pressure fluctuations. The PCC optimization is centered on the selection and sizing of a Wells Turbine and electric power generation equipment. The optimization of the PCC involves the following variables: the type of Wells Turbine (fixed or variable pitched, with and without guide vanes), the radius of the turbine, the optimal vent pressure, the sizing of the power electronics, and number of turbines. Also included in this Technical Report are further details on how rotor thrust and torque are estimated, along with further details on the type of variable frequency drive selected.

  20. Cooperative AF Relaying in Spectrum-Sharing Systems: Performance Analysis under Average Interference Power Constraints and Nakagami-m Fading

    KAUST Repository

    Xia, Minghua

    2012-06-01

    Since the electromagnetic spectrum resource becomes more and more scarce, improving spectral efficiency is extremely important for the sustainable development of wireless communication systems and services. Integrating cooperative relaying techniques into spectrum-sharing cognitive radio systems sheds new light on higher spectral efficiency. In this paper, we analyze the end-to-end performance of cooperative amplify-and-forward (AF) relaying in spectrum-sharing systems. In order to achieve the optimal end-to-end performance, the transmit powers of the secondary source and the relays are optimized with respect to average interference power constraints at primary users and Nakagami-m fading parameters of interference channels (for mathematical tractability, the desired channels from secondary source to relay and from relay to secondary destination are assumed to be subject to Rayleigh fading). Also, both partial and opportunistic relay-selection strategies are exploited to further enhance system performance. Based on the exact distribution functions of the end-to-end signal-to-noise ratio (SNR) obtained herein, the outage probability, average symbol error probability, diversity order, and ergodic capacity of the system under study are analytically investigated. Our results show that system performance is dominated by the resource constraints and improves only slowly with increasing average SNR. Furthermore, a larger Nakagami-m fading parameter on the interference channels deteriorates system performance slightly. On the other hand, when interference power constraints are stringent, opportunistic relay selection can be exploited to improve system performance significantly. All analytical results are corroborated by simulation results and they are shown to be efficient tools for exact evaluation of system performance.

  1. An Electrochemical Capacitor with Applicable Energy Density of 7.4 Wh/kg at Average Power Density of 3000 W/kg.

    Science.gov (United States)

    Zhai, Teng; Lu, Xihong; Wang, Hanyu; Wang, Gongming; Mathis, Tyler; Liu, Tianyu; Li, Cheng; Tong, Yexiang; Li, Yat

    2015-05-13

    Electrochemical capacitors represent a new class of charge storage devices that can simultaneously achieve high energy density and high power density. Previous reports have been primarily focused on the development of high performance capacitor electrodes. Although these electrodes have achieved excellent specific capacitance based on per unit mass of active materials, the gravimetric energy densities calculated based on the weight of entire capacitor device were fairly small. This is mainly due to the large mass ratio between current collector and active material. We aimed to address this issue by a 2-fold approach of minimizing the mass of current collector and increasing the electrode performance. Here we report an electrochemical capacitor using 3D graphene hollow structure as current collector, vanadium sulfide and manganese oxide as anode and cathode materials, respectively. 3D graphene hollow structure provides a lightweight and highly conductive scaffold for deposition of pseudocapacitive materials. The device achieves an excellent active material ratio of 24%. Significantly, it delivers a remarkable energy density of 7.4 Wh/kg (based on the weight of entire device) at the average power density of 3000 W/kg. This is the highest gravimetric energy density reported for asymmetric electrochemical capacitors at such a high power density.

  2. Power Based Phase-Locked Loop Under Adverse Conditions with Moving Average Filter for Single-Phase System

    Directory of Open Access Journals (Sweden)

    Menxi Xie

    2017-06-01

    Full Text Available A high-performance synchronization method is critical for grid-connected power converters. For single-phase systems, the power-based phase-locked loop (pPLL) uses a multiplier as the phase detector (PD). When the single-phase grid voltage is distorted, the phase-error information contains AC disturbances oscillating at integer multiples of the fundamental frequency, which lead to detection errors. This paper presents a new scheme based on a moving average filter (MAF) applied in-loop of the pPLL. The signal characteristics of the phase error are discussed in detail. A predictive rule is adopted to compensate the delay introduced by the MAF, thus achieving fast dynamic response. When the frequency deviates from its nominal value, the estimated frequency is fed back to adjust the filter window length of the MAF and the buffer size of the predictive rule. Simulation and experimental results show that the proposed PLL achieves good performance under adverse grid conditions.
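
    The in-loop MAF idea can be illustrated with a rough sketch (not the authors' implementation): near lock, a multiplier phase detector produces a term proportional to the sine of the phase error plus a ripple at twice the grid frequency, and a moving average whose window spans half a fundamental period averages that double-frequency ripple to zero. The sampling rate, grid frequency, and phase error below are assumed values for illustration.

```python
import math

def moving_average(x, window):
    """FIR moving average: y[n] is the mean of the most recent `window` samples."""
    out, buf, acc = [], [], 0.0
    for v in x:
        buf.append(v)
        acc += v
        if len(buf) > window:
            acc -= buf.pop(0)
        out.append(acc / len(buf))
    return out

# Near lock, a multiplier PD yields 0.5*sin(phase_error) plus a ripple at
# twice the grid frequency; a MAF spanning half a fundamental period
# averages that 2f ripple to (numerically) zero.
fs = 10_000                     # sampling rate, Hz (assumed)
f0 = 50                         # grid fundamental, Hz (assumed)
window = fs // (2 * f0)         # half a fundamental period = 100 samples
phase_error = 0.05              # constant phase error, rad (assumed)
pd_out = [0.5 * math.sin(phase_error)
          + 0.5 * math.sin(2 * 2 * math.pi * f0 * k / fs)
          for k in range(fs)]   # one second of PD output
filtered = moving_average(pd_out, window)
```

    After the window fills, the filter output settles to the DC term 0.5*sin(phase_error), which is exactly the error signal the loop filter needs; the delay this introduces is what the paper's predictive rule compensates.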

  3. Relationship Between Selected Strength and Power Assessments to Peak and Average Velocity of the Drive Block in Offensive Line Play.

    Science.gov (United States)

    Jacobson, Bert H; Conchola, Eric C; Smith, Doug B; Akehi, Kazuma; Glass, Rob G

    2016-08-01

    Jacobson, BH, Conchola, EC, Smith, DB, Akehi, K, and Glass, RG. Relationship between selected strength and power assessments to peak and average velocity of the drive block in offensive line play. J Strength Cond Res 30(8): 2202-2205, 2016. Typical strength training for football includes the squat and power clean (PC), and routinely measured variables include the 1 repetition maximum (1RM) squat and 1RM PC along with the vertical jump (VJ) for power. However, little research exists regarding the association between these strength exercises and the velocity of an actual on-the-field performance. The purpose of this study was to investigate the relationship of peak velocity (PV) and average velocity (AV) of the offensive line drive block to the 1RM squat, 1RM PC, VJ, body mass (BM), and body composition. One repetition maximum assessments for the squat and PC were recorded along with VJ height, BM, and percent body fat. These data were correlated with PV and AV while performing the drive block. Peak velocity and AV were assessed using a Tendo Power and Speed Analyzer as the linemen fired from a 3-point stance into a stationary blocking dummy. Pearson product analysis yielded significant (p ≤ 0.05) correlations between PV and AV and the VJ, the squat, and the PC. A significant inverse association was found for both PV and AV and body fat. These data help to confirm that the typical exercises recommended for American football linemen are positively associated with both the PV and AV needed for drive block effectiveness. It is recommended that these exercises remain the focus of a weight room protocol and that ancillary exercises be built around them. Additionally, efforts to reduce body fat are recommended.

  4. Half-Watt average power femtosecond source spanning 3-8 µm based on subharmonic generation in GaAs

    Science.gov (United States)

    Smolski, Viktor; Vasilyev, Sergey; Moskalev, Igor; Mirov, Mike; Ru, Qitian; Muraviev, Andrey; Schunemann, Peter; Mirov, Sergey; Gapontsev, Valentin; Vodopyanov, Konstantin

    2018-06-01

    Frequency combs with a wide instantaneous spectral span covering the 3-20 µm molecular fingerprint region are highly desirable for broadband and high-resolution frequency comb spectroscopy, trace molecular detection, and remote sensing. We demonstrate a novel approach for generating high-average-power middle-infrared (MIR) output suitable for producing frequency combs with an instantaneous spectral coverage close to 1.5 octaves. Our method is based on utilizing a highly efficient and compact Kerr-lens mode-locked Cr2+:ZnS laser, operating at a 2.35-µm central wavelength with 6-W average power, 77-fs pulse duration, and a high 0.9-GHz repetition rate, to pump a degenerate (subharmonic) optical parametric oscillator (OPO) based on a quasi-phase-matched GaAs crystal. Such a subharmonic OPO is a nearly ideal frequency converter, capable of extending the benefits of frequency combs based on well-established mode-locked pump lasers to the MIR region through rigorous, phase- and frequency-locked down-conversion. We report 0.5 W of output in the form of an ultra-broadband spectrum spanning 3-8 µm, measured at the 50-dB level.

  5. Compact Source of Electron Beam with Energy of 200 keV and Average Power of 2 kW

    CERN Document Server

    Kazarezov, Ivan; Balakin, Vladimir E; Bryazgin, Alex; Bulatov, Alexandre; Glazkov, Ivan; Kokin, Evgeny; Krainov, Gennady; Kuznetsov, Gennady I; Molokoedov, Andrey; Tuvik, Alfred

    2005-01-01

    The paper describes a compact electron beam source with average electron energy of 200 keV. The source operates with pulse power up to 2 MW at an average power not higher than 2 kW, pulsed beam current up to 10 A, pulse duration up to 2 µs, and repetition rate up to 5 kHz. The electron beam is extracted through an aluminium-beryllium alloy foil. The pulse duration and repetition rate can be changed from the control desk. The high-voltage generator for the source, with output voltage up to 220 kV, is realized using a voltage-doubling circuit consisting of 30 sections. The insulation is SF6 gas at a pressure of 8 atm. The cooling of the foil supporting tubes is provided by a water-alcohol mixture from an independent source. The beam output window dimensions are 180×75 mm, the energy spread in the beam is +10/-30%, and the source weight is 80 kg.

  6. High average power, diode pumped petawatt laser systems: a new generation of lasers enabling precision science and commercial applications

    Science.gov (United States)

    Haefner, C. L.; Bayramian, A.; Betts, S.; Bopp, R.; Buck, S.; Cupal, J.; Drouin, M.; Erlandson, A.; Horáček, J.; Horner, J.; Jarboe, J.; Kasl, K.; Kim, D.; Koh, E.; Koubíková, L.; Maranville, W.; Marshall, C.; Mason, D.; Menapace, J.; Miller, P.; Mazurek, P.; Naylon, A.; Novák, J.; Peceli, D.; Rosso, P.; Schaffers, K.; Sistrunk, E.; Smith, D.; Spinka, T.; Stanley, J.; Steele, R.; Stolz, C.; Suratwala, T.; Telford, S.; Thoma, J.; VanBlarcom, D.; Weiss, J.; Wegner, P.

    2017-05-01

    Large laser systems that deliver optical pulses with peak powers exceeding one Petawatt (PW) have been constructed at dozens of research facilities worldwide and have fostered research in High-Energy-Density (HED) Science, High-Field and nonlinear physics [1]. Furthermore, the high intensities exceeding 10¹⁸ W/cm² allow for efficiently driving secondary sources that inherit some of the properties of the laser pulse, e.g. pulse duration, spatial and/or divergence characteristics. In the intervening decades since that first PW laser, single-shot proof-of-principle experiments have been successful in demonstrating new high-intensity laser-matter interactions and subsequent secondary particle and photon sources. These secondary sources include generation and acceleration of charged-particle (electron, proton, ion) and neutron beams, and x-ray and gamma-ray sources, generation of radioisotopes for positron emission tomography (PET), targeted cancer therapy, medical imaging, and the transmutation of radioactive waste [2, 3]. Each of these promising applications requires lasers with peak power of hundreds of terawatt (TW) to petawatt (PW) and with average power of tens to hundreds of kW to achieve the required secondary source flux.

  7. Up to the highest peak!

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    In the early hours of this morning, the beam energy was ramped up to 3.5 TeV, a new world record and the highest energy for this year’s run. Now operators will prepare the machine to make high-energy collisions later this month. CERN Operations Group leader Mike Lamont (foreground) and LHC engineer in charge Alick Macpherson in the CERN Control Centre early this morning. At 5:23 this morning, Friday 19 March, the energy of both beams in the LHC was ramped up to 3.5 TeV, a new world record. During the night, operators had tested the performance of the whole machine with two so-called ‘dry runs’, that is, without beams. Given the good overall response, beams were injected at around 3:00 a.m. and stabilized soon after. The ramp started at around 4:10 and lasted about one hour. Over the last couple of weeks, operation of the LHC at 450 GeV has become routinely reproducible. The operators were able to test and optimize the beam orbit, the beam collimation, the injection and ext...

  8. Design and development of a 6 MW peak, 24 kW average power S-band klystron

    Energy Technology Data Exchange (ETDEWEB)

    Joshi, L.M.; Meena, Rakesh; Nangru, Subhash; Kant, Deepender; Pal, Debashis; Lamba, O.S.; Jindal, Vishnu; Jangid, Sushil Kumar, E-mail: joslm@rediffmail.com [Central Electronics Engineering Research Institute, Council of Scientific and Industrial Research, Pilani (India); Chakravarthy, D.P.; Dixit, Kavita [Bhabha Atomic Research Centre, Mumbai (India)

    2011-07-01

    A 6 MW peak, 24 kW average power S-band klystron is under development at CEERI, Pilani, under an MoU between BARC and CEERI. The design of the klystron has been completed. The electron gun has been designed using the TRAK and MAGIC codes. RF cavities have been designed using HFSS and CST Microwave Studio, while the complete beam-wave interaction simulation has been done using the MAGIC code. The thermal design of the collector and RF window has been done using the ANSYS code. A Gun Collector Test Module (GCTM) was developed before making the actual klystron, to validate the gun perveance and the thermal design of the collector. A high voltage solid state pulsed modulator has been installed for performance evaluation of the tube. The paper will cover the design aspects of the tube and experimental test results of the GCTM and the klystron. (author)

  9. Overview of the HiLASE project: high average power pulsed DPSSL systems for research and industry

    Czech Academy of Sciences Publication Activity Database

    Divoký, Martin; Smrž, Martin; Chyla, Michal; Sikocinski, Pawel; Severová, Patricie; Novák, Ondřej; Huynh, Jaroslav; Nagisetty, Siva S.; Miura, Taisuke; Pilař, Jan; Slezák, Jiří; Sawicka, Magdalena; Jambunathan, Venkatesan; Vanda, Jan; Endo, Akira; Lucianetti, Antonio; Rostohar, Danijela; Mason, P.D.; Phillips, P.J.; Ertel, K.; Banerjee, S.; Hernandez-Gomez, C.; Collier, J.L.; Mocek, Tomáš

    2014-01-01

    Roč. 2, SI (2014), s. 1-10 ISSN 2095-4719 R&D Projects: GA MŠk ED2.1.00/01.0027; GA MŠk EE2.3.20.0143; GA MŠk EE2.3.30.0057 Grant - others:HILASE(XE) CZ.1.05/2.1.00/01.0027; OP VK 6(XE) CZ.1.07/2.3.00/20.0143; OP VK 4 POSTDOK(XE) CZ.1.07/2.3.00/30.0057 Institutional support: RVO:68378271 Keywords: DPSSL * Yb3+:YAG * thin-disk * multi-slab * pulsed high average power laser Subject RIV: BH - Optics, Masers, Lasers

  10. Design and development of a 6 MW peak, 24 kW average power S-band klystron

    International Nuclear Information System (INIS)

    Joshi, L.M.; Meena, Rakesh; Nangru, Subhash; Kant, Deepender; Pal, Debashis; Lamba, O.S.; Jindal, Vishnu; Jangid, Sushil Kumar; Chakravarthy, D.P.; Dixit, Kavita

    2011-01-01

    A 6 MW peak, 24 kW average power S-band klystron is under development at CEERI, Pilani, under an MoU between BARC and CEERI. The design of the klystron has been completed. The electron gun has been designed using the TRAK and MAGIC codes. RF cavities have been designed using HFSS and CST Microwave Studio, while the complete beam-wave interaction simulation has been done using the MAGIC code. The thermal design of the collector and RF window has been done using the ANSYS code. A Gun Collector Test Module (GCTM) was developed before making the actual klystron, to validate the gun perveance and the thermal design of the collector. A high voltage solid state pulsed modulator has been installed for performance evaluation of the tube. The paper will cover the design aspects of the tube and experimental test results of the GCTM and the klystron. (author)

  11. A high-average power tapered FEL amplifier at submillimeter frequencies using sheet electron beams and short-period wigglers

    International Nuclear Information System (INIS)

    Bidwell, S.W.; Radack, D.J.; Antonsen, T.M. Jr.; Booske, J.H.; Carmel, Y.; Destler, W.W.; Granatstein, V.L.; Levush, B.; Latham, P.E.; Zhang, Z.X.

    1990-01-01

    A high-average-power FEL amplifier operating at submillimeter frequencies is under development at the University of Maryland. Program goals are to produce a CW, ∼1 MW, FEL amplifier source at frequencies between 280 GHz and 560 GHz. To this end, a high-gain, high-efficiency, tapered FEL amplifier using a sheet electron beam and a short-period (superconducting) wiggler has been chosen. Development of this amplifier is progressing in three stages: (1) beam propagation through a long length (∼1 m) of short-period (λw = 1 cm) wiggler, (2) demonstration of a proof-of-principle amplifier experiment at 98 GHz, and (3) designs of a superconducting tapered FEL amplifier meeting the ultimate design goal specifications. 17 refs., 1 fig., 1 tab

  12. The measurement of power losses at high magnetic field densities or at small cross-section of test specimen using the averaging

    CERN Document Server

    Gorican, V; Hamler, A; Nakata, T

    2000-01-01

    It is difficult to achieve sufficient accuracy of power loss measurement at high magnetic field densities where the magnetic field strength gets more and more distorted, or in cases where the influence of noise increases (small specimen cross section). The influence of averaging on the accuracy of power loss measurement was studied on the cast amorphous magnetic material Metglas 2605-TCA. The results show that the accuracy of power loss measurements can be improved by using the averaging of data acquisition points.
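
    The reported gain from averaging follows from the statistics of uncorrelated noise: averaging N synchronized acquisitions reduces the noise standard deviation by a factor of sqrt(N). A minimal numerical sketch (the signal level and noise standard deviation below are assumed values, not the paper's measurement setup):

```python
import random
import statistics

def averaged_acquisition(true_value, noise_sd, n_avg, rng):
    """Mean of n_avg noisy acquisitions of the same point; uncorrelated
    Gaussian noise falls as 1/sqrt(n_avg)."""
    return sum(true_value + rng.gauss(0.0, noise_sd) for _ in range(n_avg)) / n_avg

rng = random.Random(42)
# 2000 independent measurements, without and with 64-fold averaging:
single = [averaged_acquisition(1.0, 0.1, 1, rng) for _ in range(2000)]
avg64 = [averaged_acquisition(1.0, 0.1, 64, rng) for _ in range(2000)]
sd_single = statistics.stdev(single)  # ~ the raw noise level
sd_avg64 = statistics.stdev(avg64)    # ~ raw level / sqrt(64) = raw / 8
```

    With 64-fold averaging the residual scatter is roughly an eighth of the single-shot value, which is why averaging data acquisition points helps most exactly where the paper says it does: small specimen cross sections, where the signal-to-noise ratio is poor.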

  13. Rationalization of the electric power utilization for the ferro-alloy production at the HEK 'Jugohrom' by means of the follow up and restriction of the highest level loading as well as automatic processing system for the involved electric power (Macedonia)

    International Nuclear Information System (INIS)

    Koevski, Doncho

    2001-01-01

    The relations between the electric power system and the energy sector in general, together with the chronic energy crisis and the price of electric power, motivated an analysis of the application of economical forms of electric power use. This paper presents the rationalization of electric power utilization for ferro-alloy production in Jugohrom. This is done by setting up a system for measuring, controlling, and limiting the peak load, together with an automatic processing system for the consumed electric power. (Original)

  14. Rationalization of the electric power utilization for the ferro-alloy production at the HEK 'Jugohrom' by means of the follow up and restriction of the highest level loading as well as automatic processing system for the involved electric power (Macedonia)

    International Nuclear Information System (INIS)

    Koevski, Doncho

    2002-01-01

    The relations between the electric power system and the energy sector in general, together with the chronic energy crisis and the price of electric power, motivated an analysis of the application of economical forms of electric power use. This paper presents the rationalization of electric power utilization for ferro-alloy production in Jugohrom. This is done by setting up a system for measuring, controlling, and limiting the peak load, together with an automatic processing system for the consumed electric power. (Original)

  15. High-average-power 2 μm few-cycle optical parametric chirped pulse amplifier at 100 kHz repetition rate.

    Science.gov (United States)

    Shamir, Yariv; Rothhardt, Jan; Hädrich, Steffen; Demmler, Stefan; Tschernajew, Maxim; Limpert, Jens; Tünnermann, Andreas

    2015-12-01

    Sources of long-wavelength, few-cycle, high-repetition-rate pulses are becoming increasingly important for a plethora of applications, e.g., in high-field physics. Here, we report on the realization of a tunable optical parametric chirped pulse amplifier at 100 kHz repetition rate. At a central wavelength of 2 μm, the system delivered 33 fs pulses and 6 W of average power, corresponding to 60 μJ pulse energy with gigawatt-level peak powers. Idler absorption and the resulting crystal heating are experimentally investigated for BBO. Strategies for further power scaling to several tens of watts of average power are discussed.

  16. Development of a 33 kV, 20 A long pulse converter modulator for high average power klystron

    Energy Technology Data Exchange (ETDEWEB)

    Reghu, T.; Mandloi, V.; Shrivastava, Purushottam [Pulsed High Power Microwave Section, Raja Ramanna Centre for Advanced Technology, Indore 452013, M.P. (India)

    2014-05-15

    Research, design, and development of high average power, long pulse modulators for the proposed Indian Spallation Neutron Source are underway at Raja Ramanna Centre for Advanced Technology. With this objective, a prototype of long pulse modulator capable of delivering 33 kV, 20 A at 5 Hz repetition rate has been designed and developed. Three Insulated Gate Bipolar Transistors (IGBT) based switching modules driving high frequency, high voltage transformers have been used to generate high voltage output. The IGBT based switching modules are shifted in phase by 120° with respect to each other. The switching frequency is 25 kHz. Pulses of 1.6 ms pulse width, 80 μs rise time, and 70 μs fall time have been achieved at the modulator output. A droop of ±0.6% is achieved using a simple segmented digital droop correction technique. The total fault energy transferred to the load during fault has been measured by conducting wire burn tests and is found to be within 3.5 J.

  17. Experimental assessment of blade tip immersion depth from free surface on average power and thrust coefficients of marine current turbine

    Science.gov (United States)

    Lust, Ethan; Flack, Karen; Luznik, Luksa

    2014-11-01

    Results from an experimental study on the effects of marine current turbine immersion depth from the free surface are presented. Measurements are performed with a 1/25 scale (diameter D = 0.8 m) two-bladed horizontal axis turbine towed in the large towing tank at the U.S. Naval Academy. Thrust and torque are measured using a dynamometer mounted in line with the turbine shaft. Shaft rotation speed and blade position are measured using a shaft position indexing system. The tip speed ratio (TSR) is adjusted using a hysteresis brake attached to the output shaft. Two optical wave height sensors are used to measure the free surface elevation. The turbine is towed at 1.68 m/s, resulting in a 70%-chord-based Re_c = 4 × 10⁵. An Acoustic Doppler Velocimeter (ADV) is installed one turbine diameter upstream of the turbine rotation plane to characterize the inflow turbulence. Measurements are obtained at four relative blade tip immersion depths of z/D = 0.5, 0.4, 0.3, and 0.2 at a TSR value of 7 to identify the depth where free surface effects impact overall turbine performance. The overall average power and thrust coefficients are presented and compared to previously conducted baseline tests. The influence of wake expansion blockage on the turbine performance due to the presence of the free surface at these immersion depths will also be discussed.
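
    The shaft speed implied by the operating point stated in the record (D = 0.8 m, tow speed 1.68 m/s, TSR = 7) follows directly from the tip speed ratio definition TSR = ωR/U; a quick arithmetic check:

```python
import math

# Operating point stated in the record:
D = 0.8      # rotor diameter, m
U = 1.68     # tow (inflow) speed, m/s
TSR = 7.0    # tip speed ratio

R = D / 2.0
omega = TSR * U / R                   # shaft speed from TSR = omega*R/U, rad/s
rpm = omega * 60.0 / (2.0 * math.pi)  # the same speed in rev/min
tip_speed = omega * R                 # blade tip speed, m/s (= TSR * U)
```

    The implied shaft speed is about 29.4 rad/s (roughly 281 rpm), with a blade tip speed of 11.76 m/s, which is the quantity the hysteresis brake regulates against the fixed tow speed.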

  18. Development of a 33 kV, 20 A long pulse converter modulator for high average power klystron

    International Nuclear Information System (INIS)

    Reghu, T.; Mandloi, V.; Shrivastava, Purushottam

    2014-01-01

    Research, design, and development of high average power, long pulse modulators for the proposed Indian Spallation Neutron Source are underway at Raja Ramanna Centre for Advanced Technology. With this objective, a prototype of long pulse modulator capable of delivering 33 kV, 20 A at 5 Hz repetition rate has been designed and developed. Three Insulated Gate Bipolar Transistors (IGBT) based switching modules driving high frequency, high voltage transformers have been used to generate high voltage output. The IGBT based switching modules are shifted in phase by 120° with respect to each other. The switching frequency is 25 kHz. Pulses of 1.6 ms pulse width, 80 μs rise time, and 70 μs fall time have been achieved at the modulator output. A droop of ±0.6% is achieved using a simple segmented digital droop correction technique. The total fault energy transferred to the load during fault has been measured by conducting wire burn tests and is found to be within 3.5 J

  19. Development of a 33 kV, 20 A long pulse converter modulator for high average power klystron

    Science.gov (United States)

    Reghu, T.; Mandloi, V.; Shrivastava, Purushottam

    2014-05-01

    Research, design, and development of high average power, long pulse modulators for the proposed Indian Spallation Neutron Source are underway at Raja Ramanna Centre for Advanced Technology. With this objective, a prototype of long pulse modulator capable of delivering 33 kV, 20 A at 5 Hz repetition rate has been designed and developed. Three Insulated Gate Bipolar Transistors (IGBT) based switching modules driving high frequency, high voltage transformers have been used to generate high voltage output. The IGBT based switching modules are shifted in phase by 120° with respect to each other. The switching frequency is 25 kHz. Pulses of 1.6 ms pulse width, 80 μs rise time, and 70 μs fall time have been achieved at the modulator output. A droop of ±0.6% is achieved using a simple segmented digital droop correction technique. The total fault energy transferred to the load during fault has been measured by conducting wire burn tests and is found to be within 3.5 J.

  20. A novel power spectrum calculation method using phase-compensation and weighted averaging for the estimation of ultrasound attenuation.

    Science.gov (United States)

    Heo, Seo Weon; Kim, Hyungsuk

    2010-05-01

    An estimation of ultrasound attenuation in soft tissues is critical in quantitative ultrasound analysis, since it is not only related to the estimation of other ultrasound parameters, such as speed of sound, integrated scatterers, or scatterer size, but also provides pathological information about the scanned tissue. However, the performance of ultrasound attenuation estimation is intimately tied to the accurate extraction of spectral information from the backscattered radiofrequency (RF) signals. In this paper, we propose two novel techniques for calculating a block power spectrum from the backscattered ultrasound signals. These are based on phase-compensation of each RF segment using the normalized cross-correlation to minimize estimation errors due to phase variations, and a weighted averaging technique to maximize the signal-to-noise ratio (SNR). The simulation results with uniform numerical phantoms demonstrate that the proposed method estimates local attenuation coefficients within 1.57% of the actual values, while the conventional methods estimate them within 2.96%. The proposed method is especially effective when we deal with signals reflected from deeper depths, where the SNR level is lower, or when the gated window contains a small number of signal samples. Experimental results at 5 MHz, obtained with a one-dimensional 128-element array and tissue-mimicking phantoms, also show that the proposed method provides better estimation results (within 3.04% of the actual value) with smaller estimation variances compared to the conventional methods (within 5.93%) for all cases considered. Copyright 2009 Elsevier B.V. All rights reserved.
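
    A toy sketch of the two ideas, aligning segments at the peak of the normalized cross-correlation before spectral estimation and then averaging periodograms with energy-based weights, is given below. This is a stdlib-only illustration: the naive O(N²) DFT, the circular-shift alignment, and the use of segment energy as an SNR proxy are simplifications, not the authors' estimator.

```python
import cmath
import math

def periodogram(x):
    """Naive O(N^2) DFT power spectrum: |X[k]|^2 / N."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n)]

def best_lag(ref, seg, max_lag):
    """Lag that maximizes the normalized cross-correlation of seg against ref."""
    def ncc(lag):
        pairs = [(ref[i], seg[i + lag]) for i in range(len(ref))
                 if 0 <= i + lag < len(seg)]
        num = sum(a * b for a, b in pairs)
        den = math.sqrt(sum(a * a for a, _ in pairs) *
                        sum(b * b for _, b in pairs))
        return num / den if den else 0.0
    return max(range(-max_lag, max_lag + 1), key=ncc)

def compensated_spectrum(segments, max_lag=4):
    """Phase-compensate every segment to the first one, then average their
    periodograms with energy weights (a simple SNR proxy)."""
    ref = segments[0]
    n = len(ref)
    specs, weights = [], []
    for seg in segments:
        lag = best_lag(ref, seg, max_lag)
        aligned = seg[lag:] + seg[:lag]          # circular shift by the found lag
        specs.append(periodogram(aligned))
        weights.append(sum(v * v for v in seg))  # segment energy as weight
    wsum = sum(weights)
    return [sum(w * s[k] for w, s in zip(weights, specs)) / wsum
            for k in range(n)]

# Demo: three copies of a bin-3 tone, two of them circularly delayed.
n = 32
tone = [math.sin(2.0 * math.pi * 3.0 * t / n) for t in range(n)]

def delayed(d):
    """Circular delay of the tone by d (> 0) samples."""
    return tone[-d:] + tone[:-d]

spec = compensated_spectrum([tone, delayed(2), delayed(3)])
peak_bin = max(range(n // 2), key=lambda k: spec[k])
```

    After compensation, the averaged spectrum keeps its peak sharply at bin 3; averaging the misaligned segments without phase compensation would instead smear power across neighboring bins, which is the error source the paper targets.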

  1. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  2. Origin of the highest energy cosmic rays

    Energy Technology Data Exchange (ETDEWEB)

    Biermann, Peter L.; Ahn, Eun-Joo; Medina-Tanco, Gustavo; Stanev, Todor

    2000-06-01

    Introducing a simple Galactic wind model patterned after the solar wind we show that back-tracing the orbits of the highest energy cosmic events suggests that they may all come from the Virgo cluster, and so probably from the active radio galaxy M87. This confirms a long standing expectation. Those powerful radio galaxies that have their relativistic jets stuck in the interstellar medium of the host galaxy, such as 3C147, will then enable us to derive limits on the production of any new kind of particle, expected in some extensions of the standard model in particle physics. New data from HIRES will be crucial in testing the model proposed here.

  3. A Front End for Multipetawatt Lasers Based on a High-Energy, High-Average-Power Optical Parametric Chirped-Pulse Amplifier

    International Nuclear Information System (INIS)

    Bagnoud, V.

    2004-01-01

    We report on a high-energy, high-average-power optical parametric chirped-pulse amplifier developed as the front end for the OMEGA EP laser. The amplifier provides a gain larger than 10⁹ in two stages, leading to a total energy of 400 mJ with a pump-to-signal conversion efficiency higher than 25%

  4. High average power 1314 nm Nd:YLF laser, passively Q-switched with V:YAG

    CSIR Research Space (South Africa)

    Botha, RC

    2013-03-01

    Full Text Available A 1314 nm Nd:YLF laser was designed and operated both CW and passively Q-switched. Maximum CW output of 10.4 W resulted from 45.2 W of incident pump power. Passive Q-switching was obtained by inserting a V:YAG saturable absorber in the cavity...

  5. High-average-power UV generation at 266 and 355 nm in β-BaB₂O₄

    International Nuclear Information System (INIS)

    Liu, K.C.; Rhoades, M.

    1987-01-01

    UV light has been generated previously by harmonic conversion from Nd:YAG lasers using the nonlinear crystals KD*P and ADP. Most of the previous studies have employed lasers with high peak power due to the low harmonic-conversion efficiency of these crystals and also low average power due to the phase mismatch caused by temperature detuning resulting from UV absorption. A new nonlinear crystal, β-BaB₂O₄, has recently been reported which provides for the possibility of overcoming the aforementioned problems. The authors utilized β-BaB₂O₄ to frequency-triple and frequency-quadruple a high-repetition-rate cw-pumped Nd:YAG laser and achieved up to 1 W average power with Gaussian spatial distribution at 266 and 355 nm. β-BaB₂O₄ has demonstrated its advantages for high-average-power UV generation. Its major drawback is a low angular-acceptance bandwidth which requires a high-quality fundamental pump beam

  6. Computerized system for building 'the rose' of the winds and defining the velocity and the average density of the wind power for a given place

    International Nuclear Information System (INIS)

    Valkov, I.; Dekova, I.; Arnaudov, A.; Kostadinov, A.

    2002-01-01

    This paper considers the structure and the working principle of a computerized system for building 'the rose' of the winds. The behaviour of the system has been experimentally investigated, and on the basis of the received data 'the rose' of the winds has been built, a diagram of the average wind velocity at a predefined step in the course of time has been made, and the average density of the wind power has been quantitatively determined. The proposed system provides possibilities for creating a database of wind parameters, processing them, and graphically visualizing the obtained results. The system allows improvement of the work of devices of the Wild wind gauge type. (authors)
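
    The "average density of the wind power" mentioned here is conventionally computed from the cube of the sampled velocities, p = ½ρ·mean(v³) in W/m², not from the cube of the mean velocity, since gusts carry a disproportionate share of the power. A small sketch (standard air density assumed; the sample values are illustrative, not from this system):

```python
RHO_AIR = 1.225  # standard air density, kg/m^3 (assumed)

def average_velocity(samples):
    """Arithmetic mean of the sampled wind speeds (m/s)."""
    return sum(samples) / len(samples)

def average_power_density(samples, rho=RHO_AIR):
    """Average wind power density, 0.5 * rho * mean(v^3), in W/m^2.
    Note: mean(v^3), not mean(v)^3 -- gusts carry disproportionate power."""
    return 0.5 * rho * sum(v ** 3 for v in samples) / len(samples)

gusty = [4.0, 6.0, 8.0]  # illustrative wind-speed samples, m/s
```

    For these samples the mean speed is 6 m/s, but the power density based on mean(v³) exceeds the one a naive mean(v)³ would give, which is why a logging system like the one described must store the full velocity series rather than only its average.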

  7. Development of laser diode-pumped high average power solid-state laser for the pumping of Ti:sapphire CPA system

    Energy Technology Data Exchange (ETDEWEB)

    Maruyama, Yoichiro; Tei, Kazuyoku; Kato, Masaaki; Niwa, Yoshito; Harayama, Sayaka; Oba, Masaki; Matoba, Tohru; Arisawa, Takashi; Takuma, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    A laser-diode-pumped, all-solid-state, high-pulse-repetition-frequency (PRF), high-energy Nd:YAG laser using zigzag slab crystals has been developed as the pumping source of a Ti:sapphire CPA system. The pumping laser contains two main amplifiers arranged in a ring-type amplifier configuration. The maximum amplification gain of the amplifier system is 140, and the condition of saturated amplification is achieved with this high gain. The average power of the fundamental laser radiation is 250 W at a PRF of 200 Hz, and the pulse duration is around 20 ns. The average power of the second harmonic is 105 W at a PRF of 170 Hz, and the pulse duration is about 16 ns. The beam profile of the second harmonic is nearly top-hat and will be suitable for the pumping of a Ti:sapphire laser crystal. The wall-plug efficiency of the laser is 2.0 %. (author)

  8. Investigation on repetition rate and pulse duration influences on ablation efficiency of metals using a high average power Yb-doped ultrafast laser

    Directory of Open Access Journals (Sweden)

    Lopez J.

    2013-11-01

    Full Text Available Ultrafast lasers provide outstanding processing quality, but their main drawback is the low removal rate per pulse compared to longer pulses. This limitation could be overcome by increasing both the average power and the repetition rate. In this paper, we report on the influence of high repetition rate and pulse duration on both ablation efficiency and processing quality on metals. All trials have been performed with a single tunable ultrafast laser (350 fs to 10 ps).

  9. The final power calibration of the IPEN/MB-01 nuclear reactor for various configurations obtained from the measurements of the absolute average neutron flux

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Alexandre Fonseca Povoa da, E-mail: alexandre.povoa@mar.mil.br [Centro Tecnologico da Marinha em Sao Paulo (CTMSP), Sao Paulo, SP (Brazil); Bitelli, Ulysses d' Utra; Mura, Luiz Ernesto Credidio; Lima, Ana Cecilia de Souza; Betti, Flavio; Santos, Diogo Feliciano dos, E-mail: ubitelli@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    The use of neutron activation foils is a widespread technique applied to obtain nuclear parameters, the results then being compared with those calculated using specific methodologies and available nuclear data. By irradiating activation foils and subsequently measuring their induced activity, it is possible to determine the neutron flux at the position of irradiation. The power level during operation of the reactor is a parameter which is directly proportional to the average neutron flux throughout the core. The objective of this work is to gather data from the irradiation of gold foils symmetrically placed along a cylindrically configured core which presents only a small excess reactivity, in order to derive the power generated from the spatial thermal and epithermal neutron flux distribution over the core of the IPEN/MB-01 Nuclear Reactor, eventually leading to a proper calibration of its nuclear channels. The foils are fixed in a Lucite plate and then irradiated with and without cadmium sheaths so as to obtain the absolute thermal and epithermal neutron flux. The correlation between the average neutron flux resulting from the gold foil irradiation and the average power digitally indicated by nuclear channel number 6 allows for the calibration of the nuclear channels of the reactor. The reactor power level obtained by thermal neutron flux mapping was (74.65 ± 2.45) watts, corresponding to a mean count rate of 37881 cps on nuclear channel number 10 (a pulse detector) and a current of 0.719×10⁻⁵ ampere on linear nuclear channel number 6 (a non-compensated ionization chamber). (author)
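
    The flux-from-activity step behind such foil measurements can be sketched as follows. The activation relation A = NσΦ(1 − e^(−λ·t_irr)) for the end-of-irradiation activity is standard, but the foil parameters and flux below are illustrative assumptions, not data from this experiment.

```python
import math

AU198_HALF_LIFE_S = 2.6947 * 86400     # Au-198 half-life (~2.69 d), s
LAM = math.log(2) / AU198_HALF_LIFE_S  # decay constant, 1/s

def end_of_irradiation_activity(n_atoms, sigma_cm2, flux, t_irr_s, lam=LAM):
    """A = N * sigma * phi * (1 - exp(-lambda * t_irr)), in Bq."""
    return n_atoms * sigma_cm2 * flux * (1.0 - math.exp(-lam * t_irr_s))

def flux_from_activity(activity_bq, n_atoms, sigma_cm2, t_irr_s, lam=LAM):
    """Invert the activation equation to recover the thermal flux, n/cm^2/s."""
    return activity_bq / (n_atoms * sigma_cm2 * (1.0 - math.exp(-lam * t_irr_s)))

# Illustrative round trip with assumed foil parameters:
N_ATOMS = 1.0e19   # gold atoms in the foil (assumed)
SIGMA = 98.65e-24  # Au-197 thermal capture cross section, cm^2
PHI = 1.0e8        # assumed thermal flux, n/cm^2/s
A = end_of_irradiation_activity(N_ATOMS, SIGMA, PHI, t_irr_s=3600)
```

    In practice the measured foil activities (with and without cadmium covers) are converted to fluxes this way, and the fluxes are then integrated over the core to yield the power figure quoted in the record.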

  10. Diode-side-pumped intracavity frequency-doubled Nd:YAG/BaWO4 Raman laser generating average output power of 3.14 W at 590 nm.

    Science.gov (United States)

    Li, Shutao; Zhang, Xingyu; Wang, Qingpu; Zhang, Xiaolei; Cong, Zhenhua; Zhang, Huaijin; Wang, Jiyang

    2007-10-15

    We report a linear-cavity high-power all-solid-state Q-switched yellow laser. The laser source comprises a diode-side-pumped Nd:YAG module that produces 1064 nm fundamental radiation, an intracavity BaWO4 Raman crystal that generates a first-Stokes laser at 1180 nm, and a KTP crystal that frequency-doubles the first-Stokes laser to 590 nm. A convex-plane cavity is employed in this configuration to counteract some of the thermal effects caused by the high pump power. An average output power of 3.14 W at 590 nm is obtained at a pulse repetition frequency of 10 kHz.
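
    The 1064 → 1180 → 590 nm chain in this record follows from simple wavenumber arithmetic; a minimal sketch, assuming a nominal BaWO4 Raman shift of about 926 cm⁻¹ (a literature value, not taken from the paper):

    ```python
    # Wavelength bookkeeping for the Nd:YAG -> BaWO4 -> KTP cascade described above.
    # The ~926 cm^-1 BaWO4 Stokes shift is a nominal literature value (assumption).
    PUMP_NM = 1064.0
    RAMAN_SHIFT_PER_CM = 926.0

    def first_stokes_nm(pump_nm, shift_per_cm):
        """Stokes wavelength from subtracting the Raman shift in wavenumber space."""
        pump_wavenumber = 1.0e7 / pump_nm          # vacuum wavenumber [cm^-1]
        return 1.0e7 / (pump_wavenumber - shift_per_cm)

    stokes_nm = first_stokes_nm(PUMP_NM, RAMAN_SHIFT_PER_CM)   # ~1180 nm first Stokes
    yellow_nm = stokes_nm / 2.0                                # second harmonic, ~590 nm
    ```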

  11. A diode-pumped continuous-wave Nd:YAG laser with an average output power of 1 kW

    International Nuclear Information System (INIS)

    Lee, Sung Man; Cha, Byung Heon; Kim, Cheol Jung

    2004-01-01

    A diode-pumped Nd:YAG laser with an average output power of 1 kW is developed for industrial applications such as metal cutting, precision welding, etc. To develop such a diode-pumped high-power solid-state laser, a series of laser modules is generally used, with or without thermal birefringence compensation. For example, Akiyama et al. used three laser modules to obtain an output power of 5.4 kW CW [1]. In the side-pumped Nd:YAG laser, a commonly used pump scheme for obtaining high output power, the crystal rod has a short thermal focal length at high input pump power, and the short thermal focal length in turn leads to beam distortion within the laser resonator. Therefore, to achieve a high output power with good stability, an isotropic beam profile, and high optical efficiency, a detailed analysis of the resonator stability condition, depending on both the mirror distances and the crystal separation, is essential
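
    The stability analysis the abstract calls essential can be sketched with round-trip ABCD matrices, modelling the thermally lensed rod as a thin lens between flat mirrors. The geometry and focal lengths below are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    # Round-trip ABCD stability check for a resonator whose rod acts as a thin
    # thermal lens between two flat mirrors. Distances/focal lengths in metres.
    def space(d):
        return np.array([[1.0, d], [0.0, 1.0]])

    def lens(f):
        return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

    def is_stable(f_thermal, d1, d2):
        """Resonator is stable when |(A + D)/2| <= 1 for the round-trip matrix."""
        forward = space(d2) @ lens(f_thermal) @ space(d1)   # mirror 1 -> mirror 2
        backward = space(d1) @ lens(f_thermal) @ space(d2)  # mirror 2 -> mirror 1
        round_trip = backward @ forward
        return abs(np.trace(round_trip)) / 2.0 <= 1.0
    ```

    For the symmetric case the criterion reduces to f ≥ (d1 + d2)/4, which is why the resonator falls out of stability as increasing pump power shortens the rod's thermal focal length.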

  12. Daily Average Wind Power Interval Forecasts Based on an Optimal Adaptive-Network-Based Fuzzy Inference System and Singular Spectrum Analysis

    Directory of Open Access Journals (Sweden)

    Zhongrong Zhang

    2016-01-01

    Wind energy has increasingly played a vital role in mitigating conventional resource shortages. Nevertheless, the stochastic nature of wind poses a great challenge when attempting to find an accurate forecasting model for wind power. Therefore, precise wind power forecasts are of primary importance for solving operational, planning and economic problems in the growing wind power scenario. Previous research has focused its efforts on the deterministic forecast of wind power values, but less attention has been paid to providing interval information about wind energy. Based on an optimal Adaptive-Network-Based Fuzzy Inference System (ANFIS) and Singular Spectrum Analysis (SSA), this paper develops a hybrid uncertainty forecasting model, IFASF (Interval Forecast-ANFIS-SSA-Firefly Algorithm), to obtain the upper and lower bounds of daily average wind power, which is beneficial for the practical operation of both the grid company and independent power producers. To strengthen the practical applicability of the developed model, this paper presents a comparison between IFASF and other benchmarks, which provides a general reference for statistical and artificially intelligent interval forecast methods. The comparison results show that the developed model outperforms eight benchmarks and has satisfactory forecasting effectiveness in three different wind farms with two time horizons.
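
    Singular Spectrum Analysis, one half of the hybrid model above, is concrete enough to sketch: embed the series in a trajectory matrix, take the SVD, and Hankel-average each rank-one term back into a time series. This is a generic SSA sketch, not the paper's implementation.

    ```python
    import numpy as np

    def ssa_decompose(series, window):
        """Generic SSA: trajectory matrix -> SVD -> Hankel-averaged components."""
        x = np.asarray(series, dtype=float)
        n, k = len(x), len(x) - window + 1
        traj = np.column_stack([x[i:i + window] for i in range(k)])  # trajectory matrix
        u, s, vt = np.linalg.svd(traj, full_matrices=False)
        components = []
        for r in range(len(s)):
            rank_one = s[r] * np.outer(u[:, r], vt[r])
            rec, cnt = np.zeros(n), np.zeros(n)
            for i in range(window):            # diagonal (Hankel) averaging
                for j in range(k):
                    rec[i + j] += rank_one[i, j]
                    cnt[i + j] += 1
            components.append(rec / cnt)
        return components
    ```

    By construction the components sum back to the original series; in a forecasting pipeline like IFASF, the smooth low-order components would feed the fuzzy inference model while noisy components are discarded.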

  13. The ETA-II linear induction accelerator and IMP wiggler: A high-average-power millimeter-wave free-electron laser for plasma heating

    International Nuclear Information System (INIS)

    Allen, S.L.; Scharlemann, E.T.

    1993-01-01

    The authors have constructed a 140-GHz free-electron laser to generate high-average-power microwaves for heating the MTX tokamak plasma. A 5.5-m steady-state wiggler (Intense Microwave, Prototype-IMP) has been installed at the end of the upgraded 60-cell ETA-II accelerator, and is configured as an FEL amplifier for the output of a 140-GHz long-pulse gyrotron. Improvements in the ETA-II accelerator include a multicable-feed power distribution network, better magnetic alignment using a stretched-wire alignment technique (SWAT), and a computerized tuning algorithm that directly minimizes the transverse sweep (corkscrew motion) of the electron beam. The upgrades were first tested on the 20-cell, 3-MeV front end of ETA-II and resulted in greatly improved energy flatness and reduced corkscrew motion. The upgrades were then incorporated into the full 60-cell configuration of ETA-II, along with modifications to allow operation in 50-pulse bursts at pulse repetition frequencies up to 5 kHz. The pulse power modifications were developed and tested on the High Average Power Test Stand (HAPTS), and have significantly reduced the voltage and timing jitter of the MAG 1D magnetic pulse compressors. The 2-3 kA, 6-7 MeV beam from ETA-II is transported to the IMP wiggler, which has been reconfigured as a laced wiggler, with both permanent magnets and electromagnets, for high magnetic field operation. Tapering of the wiggler magnetic field is completely computer controlled and can be optimized based on the output power. The microwaves from the FEL are transmitted to the MTX tokamak by a windowless quasi-optical microwave transmission system. Experiments at MTX are focused on studies of electron-cyclotron-resonance heating (ECRH) of the plasma. The authors summarize here the accelerator and pulse power modifications, and describe the status of ETA-II, IMP, and MTX operations

  14. The ETA-II linear induction accelerator and IMP wiggler: A high-average-power millimeter-wave free-electron-laser for plasma heating

    International Nuclear Information System (INIS)

    Allen, S.L.; Scharlemann, E.T.

    1992-05-01

    We have constructed a 140-GHz free-electron laser to generate high-average-power microwaves for heating the MTX tokamak plasma. A 5.5-m steady-state wiggler (Intense Microwave Prototype-IMP) has been installed at the end of the upgraded 60-cell ETA-II accelerator, and is configured as an FEL amplifier for the output of a 140-GHz long-pulse gyrotron. Improvements in the ETA-II accelerator include a multicable-feed power distribution network, better magnetic alignment using a stretched-wire alignment technique (SWAT), and a computerized tuning algorithm that directly minimizes the transverse sweep (corkscrew motion) of the electron beam. The upgrades were first tested on the 20-cell, 3-MeV front end of ETA-II and resulted in greatly improved energy flatness and reduced corkscrew motion. The upgrades were then incorporated into the full 60-cell configuration of ETA-II, along with modifications to allow operation in 50-pulse bursts at pulse repetition frequencies up to 5 kHz. The pulse power modifications were developed and tested on the High Average Power Test Stand (HAPTS), and have significantly reduced the voltage and timing jitter of the MAG 1D magnetic pulse compressors. The 2-3 kA, 6-7 MeV beam from ETA-II is transported to the IMP wiggler, which has been reconfigured as a laced wiggler, with both permanent magnets and electromagnets, for high magnetic field operation. Tapering of the wiggler magnetic field is completely computer controlled and can be optimized based on the output power. The microwaves from the FEL are transmitted to the MTX tokamak by a windowless quasi-optical microwave transmission system. Experiments at MTX are focused on studies of electron-cyclotron-resonance heating (ECRH) of the plasma. We summarize here the accelerator and pulse power modifications, and describe the status of ETA-II, IMP, and MTX operations

  15. Investigation of the thermal and optical performance of a spatial light modulator with high average power picosecond laser exposure for materials processing applications

    Science.gov (United States)

    Zhu, G.; Whitehead, D.; Perrie, W.; Allegre, O. J.; Olle, V.; Li, Q.; Tang, Y.; Dawson, K.; Jin, Y.; Edwardson, S. P.; Li, L.; Dearden, G.

    2018-03-01

    Spatial light modulators (SLMs) addressed with computer generated holograms (CGHs) can create structured light fields on demand when an incident laser beam is diffracted by a phase CGH. The power handling limitations of these devices, which are based on a liquid crystal layer, have always been of some concern. With careful engineering of chip thermal management, we report the detailed optical phase and temperature response of a liquid-cooled SLM exposed to picosecond laser powers up to 〈P〉 = 220 W at 1064 nm. This information is critical for determining device performance at high laser powers. The SLM chip temperature rose linearly with incident laser exposure, increasing by only 5 °C at 〈P〉 = 220 W incident power, measured with a thermal imaging camera. The thermal response time with continuous exposure was 1-2 s. The optical phase response with incident power approaches 2π radians at average powers up to 〈P〉 = 130 W, hence the operational limit, while above this power, liquid crystal thickness variations limit the phase response to just over π radians. Modelling of the thermal and phase response with exposure is also presented, supporting the experimental observations well. These remarkable performance characteristics show that liquid crystal based SLM technology is highly robust when efficiently cooled. High-speed, multi-beam plasmonic surface micro-structuring at a rate R = 8 cm2 s-1 is achieved on polished metal surfaces at 〈P〉 = 25 W exposure, while diffractive, multi-beam surface ablation with average power 〈P〉 = 100 W on stainless steel is demonstrated with an ablation rate of ~4 mm3 min-1. However, above 130 W, first-order diffraction efficiency drops significantly, in accord with the observed operational limit. Continuous exposure for a period of 45 min at a laser power of 〈P〉 = 160 W did not result in any detectable drop in diffraction efficiency, confirmed afterwards by the efficient

  16. The mercury laser system - An average power, gas-cooled, Yb:S-FAP based system with frequency conversion and wavefront correction

    Energy Technology Data Exchange (ETDEWEB)

    Bibeau, C.; Bayramian, A.; Armstrong, P.; Ault, E.; Beach, R.; Benapfl, M.; Campbell, R.; Dawson, J.; Ebbers, C.; Freitas, B.; Kent, R.; Liao, Z.; Ladran, T.; Menapace, J.; Molander, B.; Moses, E.; Oberhelman, S.; Payne, S.; Peterson, N.; Schaffers, K.; Stolz, C.; Sutton, S.; Tassano, J.; Telford, S.; Utterback, E. [Lawrence Livermore National Lab., Livermore, CA (United States); Randles, M. [Northrop Grumman Space Technologies, Charlotte, NC (United States); Chain, B.; Fei, Y. [Crystal Photonics, Sanford, Fl (United States)

    2006-06-15

    We report on the operation of the Mercury laser with fourteen 4*6 cm{sup 2} Yb:S-FAP amplifier slabs pumped by eight 100 kW peak power diode arrays. The system was continuously run at 55 J and 10 Hz for several hours (2*10{sup 5} cumulative shots), with over 80% of the energy in a 6 times diffraction limited spot at 1.047 {mu}m. Improved optical quality was achieved in the Yb:S-FAP amplifiers with magneto-rheological finishing, a deterministic polishing method. In addition, average power frequency conversion employing a YCOB crystal was demonstrated at 50% conversion efficiency, or 22.6 J at 10 Hz. (authors)

  17. Performance of MgO:PPLN, KTA, and KNbO₃ for mid-wave infrared broadband parametric amplification at high average power.

    Science.gov (United States)

    Baudisch, M; Hemmer, M; Pires, H; Biegert, J

    2014-10-15

    The performance of potassium niobate (KNbO₃), MgO-doped periodically poled lithium niobate (MgO:PPLN), and potassium titanyl arsenate (KTA) was experimentally compared for broadband mid-wave infrared parametric amplification at a high repetition rate. The seed pulses, with an energy of 6.5 μJ, were amplified using 410 μJ of pump energy at 1064 nm to a maximum pulse energy of 28.9 μJ at 3 μm wavelength and a 160 kHz repetition rate in MgO:PPLN, while supporting a transform-limited duration of 73 fs. The high average powers of the interacting beams used in this study revealed average-power-induced processes that limit the scaling of optical parametric amplification in MgO:PPLN; the pump peak intensity was limited to 3.8 GW/cm² due to nonpermanent beam reshaping, whereas in KNbO₃ an absorption-induced temperature gradient in the crystal led to permanent internal distortions in the crystal structure when operated above a pump peak intensity of 14.4 GW/cm².

  18. Amplified spontaneous emission and thermal management on a high average-power diode-pumped solid-state laser - the Lucia laser system

    International Nuclear Information System (INIS)

    Albach, D.

    2010-01-01

    The development of the laser triggered the birth of numerous fields in both scientific and industrial domains. High-intensity laser pulses are a unique tool for light-matter interaction studies and applications. However, current flash-pumped glass-based systems are inherently limited in repetition rate and efficiency. Developments in recent years in the field of semiconductor lasers and gain media have drawn special attention to a new class of lasers, the so-called Diode Pumped Solid State Laser (DPSSL). DPSSLs are highly efficient lasers and are candidates of choice for the compact, high-average-power systems required for industrial applications, but also as high-power pump sources for ultra-high-intensity lasers. The work described in this thesis takes place in the context of the 1 kilowatt average-power DPSSL program Lucia, currently under construction at the Laboratoire pour l'Utilisation des Lasers Intenses (LULI) at the Ecole Polytechnique, France. Generation of sub-10-nanosecond pulses with energies of up to 100 joules at repetition rates of 10 hertz is mainly limited by Amplified Spontaneous Emission (ASE) and thermal effects. These limitations are the central themes of this work. Their impact is discussed within the context of a first Lucia milestone, set around 10 joules. The developed laser system is described in detail from the oscillator level to the end of the amplification line. A comprehensive discussion of the impact of ASE and thermal effects is completed by related experimental benchmarks. The validated models are used to predict the performance of the laser system, finally resulting in a first activation of the laser system at an energy level of 7 joules in single-shot regime and 6.6 joules at repetition rates up to 2 hertz. Limitations and further scaling approaches are discussed, followed by an outlook on further development. (author)

  19. Directional clustering in highest energy cosmic rays

    International Nuclear Information System (INIS)

    Goldberg, Haim; Weiler, Thomas J.

    2001-01-01

    An unexpected degree of small-scale clustering is observed in highest-energy cosmic ray events. Some directional clustering can be expected due to purely statistical fluctuations for sources distributed randomly in the sky. This creates a background for events originating in clustered sources. We derive analytic formulas to estimate the probability of random cluster configurations, and use these formulas to study the strong potential of the HiRes, Auger, Telescope Array and EUSO-OWL-AirWatch facilities for deciding whether any observed clustering is most likely due to nonrandom sources. For a detailed comparison to data, our analytical approach cannot compete with Monte Carlo simulations, including experimental systematics. However, our derived formulas do offer two advantages: (i) easy assessment of the significance of any observed clustering, and most importantly, (ii) an explicit dependence of cluster probabilities on the chosen angular bin size
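
    The statistical-background idea the authors formalize can be illustrated with the classic occupancy (birthday-problem) estimate: the chance that isotropically distributed events yield at least one doublet in some angular bin. This toy version assumes equal-probability bins and ignores experimental exposure, unlike the paper's formulas.

    ```python
    def prob_chance_doublet(n_events, n_bins):
        """Occupancy (birthday-problem) estimate of chance clustering: probability
        that at least two of n_events fall in the same angular bin, for isotropic
        arrival directions and equal-solid-angle bins."""
        p_all_distinct = 1.0
        for k in range(n_events):
            p_all_distinct *= (n_bins - k) / n_bins
        return 1.0 - p_all_distinct
    ```

    The strong dependence on `n_bins` illustrates the paper's point (ii): the significance of any observed clustering hinges on the chosen angular bin size.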

  20. The highest energies in the Universe

    International Nuclear Information System (INIS)

    Rebel, H.

    2006-01-01

    There are not many issues of fundamental importance which have induced so many problems for astrophysicists as the question of the origin of cosmic rays. This radiation from outer space has an energy density comparable with that of the visible starlight or of the microwave background radiation. It is an important feature of our environment with many interesting aspects. A most conspicuous feature is that the energy spectrum of cosmic rays seems to have no natural end, though resonant photopion production with the cosmic microwave background predicts a suppression of extragalactic protons above the so-called Greisen-Zatsepin-Kuz’min cutoff at about E{sub GZK} = 5 × 10{sup 19} eV. In fact the highest particle energies ever observed on the Earth stem from observations of Ultrahigh Energy Cosmic Rays (E > 3 × 10{sup 19} eV). But the present observations by the AGASA and HiRes Collaborations, partly a matter of debate, are the origin of a number of puzzling questions: where these particles are coming from, by which gigantic acceleration mechanism they could gain such tremendous energies, and how they have been able to propagate to our Earth. These questions imply serious problems for the understanding of our Universe. There are several approaches to clarify the mysteries of the highest energies and to put the observations on a firmer statistical basis. The Pierre Auger Observatory, under installation in the Pampa Amarilla in the Province of Mendoza in Argentina, is a hybrid detector, combining a large array of water Cerenkov detectors (registering charged particles generated in giant extended air showers) with measurements of the fluorescence light produced during the air shower development. This contribution will illustrate the astrophysical motivation and the current status of the experimental efforts, and sketch the ideas about the origin of these particles.

  1. Ultra-short pulse delivery at high average power with low-loss hollow core fibers coupled to TRUMPF's TruMicro laser platforms for industrial applications

    Science.gov (United States)

    Baumbach, S.; Pricking, S.; Overbuschmann, J.; Nutsch, S.; Kleinbauer, J.; Gebs, R.; Tan, C.; Scelle, R.; Kahmann, M.; Budnicki, A.; Sutter, D. H.; Killi, A.

    2017-02-01

    Multi-megawatt ultrafast laser systems at micrometer wavelength are commonly used for material processing applications, including ablation, cutting and drilling of various materials or cleaving of display glass with excellent quality. There is a need for flexible and efficient beam guidance, avoiding free-space propagation of light between the laser head and the processing unit. Solid-core step index fibers are only feasible for delivering laser pulses with peak powers in the kW regime due to the optical damage threshold of bulk silica. In contrast, hollow core fibers are capable of guiding ultra-short laser pulses with orders of magnitude higher peak powers. This is possible since a micro-structured cladding confines the light within the hollow core and therefore minimizes the spatial overlap between silica and the electro-magnetic field. We report on recent results of single-mode ultra-short pulse delivery over several meters in a low-loss hollow core fiber packaged with industrial connectors. TRUMPF's ultrafast TruMicro laser platforms equipped with advanced temperature control and precisely engineered opto-mechanical components provide excellent position and pointing stability. They are thus perfectly suited for passive coupling of ultra-short laser pulses into hollow core fibers. Neither active beam launching components nor beam trackers are necessary for a reliable beam delivery in a space- and cost-saving package. Long-term tests with weeks of stable operation, excellent beam quality and an overall transmission efficiency above 85 percent even at high average power confirm the reliability for industrial applications.

  2. Challenges for highest energy circular colliders

    CERN Document Server

    Benedikt, M; Wenninger, J; Zimmermann, F

    2014-01-01

    A new tunnel of 80–100 km circumference could host a 100 TeV centre-of-mass energy-frontier proton collider (FCC-hh/VHE-LHC), with a circular lepton collider (FCC-ee/TLEP) as potential intermediate step, and a lepton-hadron collider (FCC-he) as additional option. FCC-ee, operating at four different energies for precision physics of the Z, W, and Higgs boson and the top quark, represents a significant push in terms of technology and design parameters. Pertinent R&D efforts include the RF system, top-up injection scheme, optics design for arcs and final focus, effects of beamstrahlung, beam polarization, energy calibration, and power consumption. FCC-hh faces other challenges, such as high-field magnet design, machine protection and effective handling of large synchrotron radiation power in a superconducting machine. All these issues are being addressed by a global FCC collaboration. A parallel design study in China prepares for a similar, but smaller collider, called CepC/SppC.
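
    The synchrotron-radiation challenge mentioned for circular lepton machines comes from the steep energy scaling of the loss per turn; a sketch using the standard electron formula, with illustrative LEP-like numbers (not FCC design values):

    ```python
    def sr_loss_per_turn_gev(beam_energy_gev, bending_radius_m):
        """Electron synchrotron energy loss per turn: U0 ~ 88.46e-6 * E^4 / rho [GeV]."""
        return 88.46e-6 * beam_energy_gev**4 / bending_radius_m

    # LEP-like illustration: ~100 GeV beams on a ~3 km bending radius
    u0 = sr_loss_per_turn_gev(100.0, 3000.0)   # roughly 3 GeV lost per turn
    ```

    The E⁴ scaling is why the RF system and power consumption dominate the FCC-ee R&D list: doubling the beam energy at fixed radius multiplies the loss per turn by sixteen.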

  3. Estimation of the hydraulic conductivity of a two-dimensional fracture network using effective medium theory and power-law averaging

    Science.gov (United States)

    Zimmerman, R. W.; Leung, C. T.

    2009-12-01

    Most oil and gas reservoirs, as well as most potential sites for nuclear waste disposal, are naturally fractured. In these sites, the network of fractures provides the main path for fluid to flow through the rock mass. In many cases, the fracture density is so high as to make it impractical to model it with a discrete fracture network (DFN) approach. For such rock masses, it would be useful to have recourse to analytical, or semi-analytical, methods to estimate the macroscopic hydraulic conductivity of the fracture network. We have investigated single-phase fluid flow through stochastically generated two-dimensional fracture networks. The centers and orientations of the fractures are uniformly distributed, whereas their lengths follow a lognormal distribution. The aperture of each fracture is correlated with its length, either through direct proportionality, or through a nonlinear relationship. The discrete fracture network flow and transport simulator NAPSAC, developed by Serco (Didcot, UK), is used to establish the “true” macroscopic hydraulic conductivity of the network. We then attempt to match this value by starting with the individual fracture conductances, and using various upscaling methods. Kirkpatrick’s effective medium approximation, which works well for pore networks on a core scale, generally underestimates the conductivity of the fracture networks. We attribute this to the fact that the conductances of individual fracture segments (between adjacent intersections with other fractures) are correlated with each other, whereas Kirkpatrick’s approximation assumes no correlation. The power-law averaging approach proposed by Desbarats for porous media is able to match the numerical value, using power-law exponents that generally lie between 0 (geometric mean) and 1 (arithmetic mean). The appropriate exponent can be correlated with statistical parameters that characterize the fracture density.
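
    Desbarats-style power-law averaging as used above is compact enough to state directly; this is a generic sketch with the standard limiting cases, not the paper's calibrated exponents:

    ```python
    import numpy as np

    def power_law_average(conductances, p):
        """Power-law (Desbarats) average of conductances:
        p = 1 gives the arithmetic mean, p -> 0 the geometric mean,
        and p = -1 the harmonic mean."""
        k = np.asarray(conductances, dtype=float)
        if abs(p) < 1e-12:
            return float(np.exp(np.mean(np.log(k))))   # geometric-mean limit
        return float(np.mean(k ** p) ** (1.0 / p))
    ```

    In an upscaling workflow, `p` is fitted so that the averaged segment conductances reproduce the DFN-simulated network conductivity, and is then correlated with fracture-density statistics.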

  4. Combined peak-to-average power ratio reduction and physical layer security enhancement in optical orthogonal frequency division multiplexing visible-light communication systems

    Science.gov (United States)

    Wang, Zhongpeng; Chen, Shoufa

    2016-07-01

    A physical encryption scheme for discrete Hartley transform (DHT) precoded orthogonal frequency division multiplexing (OFDM) visible-light communication (VLC) systems using frequency-domain chaos scrambling is proposed. In the scheme, chaos scrambling, generated by a modified logistic mapping, is utilized to enhance physical-layer security, and DHT precoding is employed to reduce the peak-to-average power ratio (PAPR) of the OFDM signal for OFDM-based VLC. The influence of chaos scrambling on the PAPR and bit error rate (BER) of the system is studied. The simulation results prove the efficiency of the proposed encryption method for DHT-precoded, OFDM-based VLC systems. Furthermore, the influence of the proposed encryption on the PAPR and BER of the system is evaluated. The results show that the proposed security scheme can protect the DHT-precoded, OFDM-based VLC system from eavesdroppers, while keeping the good BER performance of DHT-precoded systems. The BER performance of the encrypted, DHT-precoded system is almost the same as that of the conventional DHT-precoded system without encryption.
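
    The two ingredients named above, DHT precoding and logistic-map scrambling, can be sketched generically. The map parameters `(mu, x0)` play the role of the shared key; all values are illustrative, and this is a plain logistic map rather than the paper's modified mapping.

    ```python
    import numpy as np

    def dht_matrix(n):
        """Symmetric orthogonal discrete Hartley transform matrix; H @ H = I."""
        idx = np.outer(np.arange(n), np.arange(n))
        angle = 2.0 * np.pi * idx / n
        return (np.cos(angle) + np.sin(angle)) / np.sqrt(n)   # cas = cos + sin

    def logistic_permutation(n, mu=3.9999, x0=0.41):
        """Scrambling order from a logistic-map orbit; (mu, x0) act as the key."""
        x, orbit = x0, []
        for _ in range(n):
            x = mu * x * (1.0 - x)
            orbit.append(x)
        return np.argsort(orbit)

    n_sub = 16
    H = dht_matrix(n_sub)
    symbols = np.random.default_rng(0).standard_normal(n_sub)
    perm = logistic_permutation(n_sub)
    scrambled = (H @ symbols)[perm]   # precode, then scramble in the frequency domain
    ```

    Because the Hartley matrix is its own inverse, a receiver holding the key simply undoes the permutation and applies `H` again; an eavesdropper without `(mu, x0)` cannot reorder the subcarriers.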

  5. Peak-to-average power ratio reduction in orthogonal frequency division multiplexing-based visible light communication systems using a modified partial transmit sequence technique

    Science.gov (United States)

    Liu, Yan; Deng, Honggui; Ren, Shuang; Tang, Chengying; Qian, Xuewen

    2018-01-01

    We propose an efficient partial transmit sequence (PTS) technique based on a genetic algorithm and a peak-value optimization algorithm (GAPOA) to reduce the high peak-to-average power ratio (PAPR) in visible light communication systems based on orthogonal frequency division multiplexing (VLC-OFDM). After analyzing the pros and cons of the hill-climbing algorithm, we propose the POA, with its excellent local search ability, to further process the signals whose PAPR is still over the threshold after processing by the genetic algorithm (GA). To verify the effectiveness of the proposed technique and algorithm, we evaluate the PAPR performance and the bit error rate (BER) performance and compare them with the PTS technique based on GA (GA-PTS), the PTS technique based on genetic and hill-climbing algorithms (GH-PTS), and the PTS based on the shuffled frog leaping algorithm and hill-climbing algorithm (SFLAHC-PTS). The results show that our technique and algorithm have not only better PAPR performance but also lower computational complexity and BER than the GA-PTS, GH-PTS, and SFLAHC-PTS techniques.
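
    The PTS mechanics underlying all the compared variants can be sketched with an exhaustive ±1 phase search standing in for the GA/POA search (feasible only for a few sub-blocks); the block count and QPSK mapping are illustrative assumptions:

    ```python
    import itertools
    import numpy as np

    def papr_db(signal):
        """Peak-to-average power ratio of a sampled waveform, in dB."""
        power = np.abs(signal) ** 2
        return 10.0 * np.log10(power.max() / power.mean())

    def pts_min_papr(subcarriers, n_blocks=4, phases=(1.0, -1.0)):
        """Partial transmit sequences with an exhaustive phase search standing in
        for the paper's GA/POA search (only feasible for a few sub-blocks)."""
        n = len(subcarriers)
        step = n // n_blocks
        time_parts = []
        for b in range(n_blocks):                  # disjoint sub-blocks of subcarriers
            part = np.zeros(n, dtype=complex)
            part[b * step:(b + 1) * step] = subcarriers[b * step:(b + 1) * step]
            time_parts.append(np.fft.ifft(part))
        candidates = (sum(w * tp for w, tp in zip(weights, time_parts))
                      for weights in itertools.product(phases, repeat=n_blocks))
        return min(candidates, key=papr_db)

    rng = np.random.default_rng(1)
    qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=64)
    plain = np.fft.ifft(qpsk)
    optimized = pts_min_papr(qpsk)
    ```

    Since the all-ones phase vector is among the candidates, the optimized PAPR can never exceed that of the plain signal; GA and POA exist precisely to avoid this exhaustive search when the number of sub-blocks grows.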

  6. Plasma wakefields driven by an incoherent combination of laser pulses: a path towards high-average power laser-plasma accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Benedetti, C.; Schroeder, C.B.; Esarey, E.; Leemans, W.P.

    2014-05-01

    The wakefield generated in a plasma by incoherently combining a large number of low-energy laser pulses (i.e., without constraining the pulse phases) is studied analytically and by means of fully self-consistent particle-in-cell simulations. The structure of the wakefield has been characterized and its amplitude compared with the amplitude of the wake generated by a single (coherent) laser pulse. We show that, in spite of the incoherent nature of the wakefield within the volume occupied by the laser pulses, behind this region the structure of the wakefield can be regular, with an amplitude comparable to that obtained from a single pulse with the same energy. Wake generation requires that the incoherent structure in the laser energy density produced by the combined pulses exists on a time scale short compared to the plasma period. Incoherent combination of multiple laser pulses may enable a technologically simpler path to high-repetition-rate, high-average-power laser-plasma accelerators and associated applications.

  7. Americans' Average Radiation Exposure

    International Nuclear Information System (INIS)

    2000-01-01

    We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We are also exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body

  8. Production Planning with Respect to Uncertainties. Simulator Based Production Planning of Average Sized Combined Heat and Power Production Plants; Produktionsplanering under osaekerhet. Simulatorbaserad produktionsplanering av medelstora kraftvaermeanlaeggningar

    Energy Technology Data Exchange (ETDEWEB)

    Haeggstaahl, Daniel [Maelardalen Univ., Vaesteraas (Sweden); Dotzauer, Erik [AB Fortum, Stockholm (Sweden)

    2004-12-01

    Production planning in Combined Heat and Power (CHP) systems is considered. The focus is on the development and use of mathematical models and methods. Different aspects of production planning are discussed, including weather and load predictions. Questions relevant to the different planning horizons are illuminated. The main purpose of short-term (one week) planning is to decide when to start and stop the production units, and to decide how to use the heat storage. The main conclusion from the outline of pros and cons of commercial planning software is that several of them use Mixed Integer Programming (MIP); in that sense they are similar. Building a production planning model means that the planning problem is formulated as a mathematical optimization problem. The accuracy of the input data determines the practical detail level of the model. Two alternatives to the methods used in today's commercial programs are proposed: stochastic optimization and simulator-based optimization. The basic concepts of mathematical optimization are outlined. A simulator-based model for short-term planning is developed. The purpose is to minimize the production costs, depending on the heat demand in the district heating system, prices of electricity and fuels, emission taxes and fees, etc. The problem is simplified by not including any time-linking conditions. The process model is developed in IPSEpro, a heat- and mass-balance software from SimTech Simulation Technology. TOMLAB, an optimization toolbox in MATLAB, is used as the optimizer. Three different solvers are applied: glcFast, glcCluster and SNOPT. The link between TOMLAB and IPSEpro is accomplished using the Microsoft COM technology. MATLAB is the automation client and contains the control of IPSEpro and TOMLAB. The simulator-based model is applied to the CHP plant in Eskilstuna. Two days are chosen and analyzed. The optimized production is compared to the measured. A sensitivity analysis on how variations in outdoor

  9. On the use of the residue theorem for the efficient evaluation of band-averaged input power into linear second-order dynamic systems

    Science.gov (United States)

    D'Amico, R.; Koo, K.; Huybrechs, D.; Desmet, W.

    2013-12-01

    An alternative to numerical quadrature is proposed to compute the power injected into a vibrating structure over a certain frequency band. Instead of evaluating the system response at several sampling frequencies within the considered band, the integral is computed by estimating the residue at a few complex frequencies, corresponding to the poles of the weighting function. This technique provides considerable benefits in terms of computation time, since the integration is independent of the width of the frequency band. Two application examples show the effectiveness of the approach. Firstly, the use of a Butterworth filter instead of a rectangular weighting function is assessed. Secondly, the accuracy of the approximation in case of hysteretic damping is proven. Finally, the computational performance of the technique is compared with classical numerical quadrature schemes.
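
    The idea can be illustrated on a toy rational integrand: a unit-area Lorentzian weighting function times a simple rational "response", where closing the contour in the upper half-plane reduces the band average to two residue evaluations. The functions and parameter values below are illustrative, not taken from the paper.

    ```python
    import numpy as np

    # Toy band-averaged input power: Lorentzian weighting (unit area on the real
    # line) times a rational response. All parameter values are illustrative.
    omega0, gamma, a = 1.0, 0.5, 2.0

    def weight(z):
        return (gamma / np.pi) / ((z - omega0) ** 2 + gamma ** 2)

    def response(z):
        return 1.0 / (z ** 2 + a ** 2)

    # Residue evaluation: close the contour in the upper half-plane, where the
    # integrand has simple poles at z1 (from the weight) and z2 (from the response).
    z1, z2 = omega0 + 1j * gamma, 1j * a
    res_z1 = response(z1) * (gamma / np.pi) / (z1 - (omega0 - 1j * gamma))
    res_z2 = weight(z2) / (z2 - (-1j * a))
    band_avg_residue = (2j * np.pi * (res_z1 + res_z2)).real

    # Brute-force check: dense trapezoidal quadrature along the real axis.
    om = np.linspace(-200.0, 200.0, 400001)
    f = weight(om) * response(om)
    band_avg_quad = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(om)))
    ```

    Two pole evaluations replace hundreds of thousands of frequency samples, which is the computational advantage claimed above; in the paper the poles come from the chosen weighting function (e.g. a Butterworth filter) rather than this toy Lorentzian.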

  10. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  11. Multi-objective optimization of MOSFETs channel widths and supply voltage in the proposed dual edge-triggered static D flip-flop with minimum average power and delay by using fuzzy non-dominated sorting genetic algorithm-II.

    Science.gov (United States)

    Keivanian, Farshid; Mehrshad, Nasser; Bijari, Abolfazl

    2016-01-01

    D Flip-Flop as a digital circuit can be used as a timing element in many sophisticated circuits. Optimum performance, with the lowest power consumption and an acceptable delay time, is therefore a critical issue in electronic circuits. The newly proposed Dual-Edge Triggered Static D Flip-Flop circuit layout is defined as a multi-objective optimization problem. For this, an optimum fuzzy inference system with fuzzy rules is proposed to enhance the performance and convergence of non-dominated sorting Genetic Algorithm-II by adaptive control of the exploration and exploitation parameters. By using the proposed Fuzzy NSGA-II algorithm, more optimal values for the MOSFET channel widths and the power supply are discovered in the search space than with ordinary NSGA variants. Moreover, the design parameters, comprising the NMOS and PMOS channel widths and the power supply voltage, are linked to the performance parameters, comprising the average power consumption and the propagation delay time; the required mathematical background is presented in this study. The optimum values for the design parameters of the MOSFET channel widths and the power supply are found. Based on them, the power-delay product (PDP) is 6.32 pJ at a 125 MHz clock frequency, L = 0.18 µm, and T = 27 °C.
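
The non-dominated sorting step at the heart of NSGA-II can be sketched briefly. The candidate (average power, delay) pairs below are hypothetical, and the paper's fuzzy control layer is omitted:

```python
# Minimal non-dominated sort, the core ranking step of NSGA-II.
# Each candidate is a hypothetical (average_power, delay) pair, both minimized.
candidates = [(2.0, 5.0), (3.0, 3.0), (5.0, 1.0), (4.0, 4.0), (6.0, 6.0)]

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Group points into Pareto fronts; front 0 holds the non-dominated set."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

fronts = non_dominated_sort(candidates)
print(fronts[0])   # the Pareto-optimal power/delay trade-offs
```

A full NSGA-II adds crowding-distance selection, crossover and mutation on top of this ranking; the fuzzy variant in the paper additionally tunes those operators adaptively.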

  12. Cortex Matures Faster in Youths With Highest IQ

    Science.gov (United States)

    ... NIH Cortex Matures Faster in Youths With Highest IQ Past Issues / Summer 2006 Table of Contents For ... on. Photo: Getty image (StockDisc) Youths with superior IQ are distinguished by how fast the thinking part ...

  13. Which Kids Are at Highest Risk for Suicide?

    Science.gov (United States)

    ... Share Which Kids are at Highest Risk for Suicide? Page Content Article Body No child is immune, ... who have lost a friend or relative to suicide. Studies show that a considerable number of youth ...

  14. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-Euclidean manifold. It is shown that the common approaches can be regarded as approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
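
The barycentric averaging discussed above can be sketched for quaternions: average the components and renormalize onto the unit sphere. For a symmetric set of rotations about a single axis (the angles below are arbitrary) this approximation agrees with the geodesic mean, which makes it a convenient sanity check:

```python
import math

def qz(angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` about the z-axis."""
    return (math.cos(angle / 2.0), 0.0, 0.0, math.sin(angle / 2.0))

# Barycentric estimate: componentwise average of the unit quaternions,
# renormalized back onto the unit sphere. This ignores the curvature of the
# rotation manifold, which is exactly the criticism made in the abstract.
quats = [qz(a) for a in (0.1, 0.2, 0.3)]
bary = [sum(c) / len(quats) for c in zip(*quats)]
norm = math.sqrt(sum(c * c for c in bary))
mean_q = [c / norm for c in bary]

# Recover the rotation angle of the estimated mean; for this symmetric
# one-axis family it matches the geodesic mean of 0.2 rad.
mean_angle = 2.0 * math.atan2(mean_q[3], mean_q[0])
print(mean_angle)
```

For widely spread rotations the barycentric and Riemannian means diverge, which is where the corrections analysed in the article come in.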

  15. Lung Cancer Screening May Benefit Those at Highest Risk

    Science.gov (United States)

    People at the highest risk for lung cancer, based on a risk model, may be more likely to benefit from screening with low-dose CT, a new analysis suggests. The study authors believe the findings may better define who should undergo lung cancer screening, as this Cancer Currents blog post explains.

  16. Highest weight representations of the quantum algebra Uh(gl∞)

    International Nuclear Information System (INIS)

    Palev, T.D.; Stoilova, N.I.

    1997-04-01

    A class of highest weight irreducible representations of the quantum algebra U_h(gl∞) is constructed. Within each module a basis is introduced and the transformation relations of the basis under the action of the Chevalley generators are explicitly written. (author). 16 refs

  17. Exploring the cultural dimensions of the right to the highest ...

    African Journals Online (AJOL)

    The right to enjoying the highest attainable standard of health is incorporated in many international and regional human rights instruments. This right contains both freedoms and entitlements, including the freedom to control one's own health and body and the right to an accessible system of health care, goods and services.

  18. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  19. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  20. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  1. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Jul 3, 2014 ... Role of Positive Definite Matrices. • Diffusion Tensor Imaging: 3 × 3 pd matrices model water flow at each voxel of brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine Learning: n × n pd matrices occur as kernel matrices. Tanvi Jain. Averaging operations on matrices ...
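
The averaging operations alluded to in the outline above have a standard concrete instance: the geometric mean of two positive-definite matrices, A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}, the midpoint of the geodesic joining A and B in the natural Riemannian metric on pd matrices. A sketch (this particular construction is a well-known one, used here as an illustration of the lecture's theme):

```python
import numpy as np

def sqrtm_pd(M):
    """Principal square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

def geometric_mean(A, B):
    """Matrix geometric mean A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    As = sqrtm_pd(A)
    Ais = np.linalg.inv(As)
    return As @ sqrtm_pd(Ais @ B @ Ais) @ As

# For commuting (here: diagonal) matrices this reduces to the entrywise
# geometric mean sqrt(a_i * b_i), unlike the arithmetic average (A + B) / 2.
A = np.diag([1.0, 4.0])
B = np.diag([9.0, 16.0])
G = geometric_mean(A, B)
print(np.round(G, 6))
```

Unlike the arithmetic mean, A # B respects congruence transformations and inversion, which is why it appears in diffusion tensor imaging and kernel-matrix averaging.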

  2. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
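
The difference between the mean-payoff and average-energy objectives described above shows up already in a one-player toy example: two periodic plays with the same steps (hence the same long-run average gain) but different orderings accumulate energy differently. The weights below are made up:

```python
from itertools import accumulate
from statistics import mean

def average_energy(deltas, start=0):
    """Mean of the accumulated-energy levels over one period of a play.

    For a periodic play whose per-step gains sum to zero, the energy levels
    repeat, so the long-run average-energy is the mean over one period.
    """
    levels = list(accumulate(deltas, initial=start))[1:]
    return mean(levels)

# Same multiset of steps => identical mean-payoff (0 per step), yet the
# ordering changes which energy levels are visited, and hence the objective.
play_a = [+2, -1, -1]   # energy levels 2, 1, 0  -> average-energy 1
play_b = [-1, -1, +2]   # energy levels -1, -2, 0 -> average-energy -1
print(average_energy(play_a), average_energy(play_b))
```

This is why average-energy games are genuinely different from mean-payoff games, even though deciding them is, per the abstract, no easier.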

  3. JAERI femtosecond pulsed and tens-kilowatts average-powered free-electron lasers and their applications of large-scaled non-thermal manufacturing in nuclear energy industry

    International Nuclear Information System (INIS)

    Minehara, Eisuke J.

    2004-01-01

    We first reported a novel method in which femtosecond (fs) lasers, including the low-average-power Ti:Sapphire laser, the JAERI high-average-power free-electron laser, excimer lasers, fiber lasers and so on, could peel off and remove two of the origins of stress corrosion cracking (SCC), namely the cold-worked (CW) and very crack-susceptible material and the residual tensile stress in the hardened surface, in low-carbon stainless steel cubic samples for nuclear reactor internals, as a proof-of-principle experiment; the third origin, the corrosive environment, was not addressed. Because the fs lasers have been successfully demonstrated to remove these two SCC origins, cold-worked SCC could consequently be prevented in many field applications in the near future. SCC is a well-known phenomenon in modern materials science, technology and industry, defined as an insidious failure mechanism caused by the simultaneous presence of a corrosive environment, a crack-susceptible material and surface residual tensile stress. There are many well-known examples of SCC damaging stainless steels, aluminum alloys, brass and other alloy metals. In many boiling light-water reactor (BWR) nuclear power plants and a few pressurized light-water reactor (PWR) plants in Japan and around the world, a large number of deep and wide cracks have recently been found in the reactor-grade low-carbon stainless steel components of the core shroud, control-blade handles, recirculating pipes, sheaths and other internals in the reactor vessel, under very low or no applied stress. These cracks are thought to be initiated from crack-susceptible features such as very small cracks, pinholes, concentrated dislocation defects and so on in the hardened surface, which originated from cold-work machining processes in reactor manufacturing factories, and to penetrate insidiously and widely into the deep interior under the residual tensile stress and corrosive environment, and under no

  4. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-Euclidean manifold. It is shown that the common approaches can be regarded as approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.

  5. The highest energy cosmic rays, photons and neutrinos

    International Nuclear Information System (INIS)

    Zas, Enrique

    1998-01-01

    In these lectures I introduce and discuss aspects of currently active fields of interest related to the production, transport and detection of high energy particles from extraterrestrial sources. I have paid most attention to the highest energies, and I have divided the material according to the types of particles that will be searched for with different experimental facilities in planning: hadrons, gamma rays and neutrinos. Particular attention is given to shower development, stochastic acceleration and detection techniques

  6. Do optimally ripe blackberries contain the highest levels of metabolites?

    Science.gov (United States)

    Mikulic-Petkovsek, Maja; Koron, Darinka; Zorenc, Zala; Veberic, Robert

    2017-01-15

    Five blackberry cultivars were selected for the study ('Chester Thornless', 'Cacanska Bestrna', 'Loch Ness', 'Smoothstem' and 'Thornfree') and harvested at three different maturity stages (under-ripe, optimally ripe and over-ripe). Optimally ripe and over-ripe blackberries contained significantly higher levels of total sugars compared to under-ripe fruit. The 'Loch Ness' cultivar was characterized by 2.2-2.6-fold higher levels of total sugars than the other cultivars and, consequently, the highest sugar/acid ratio. 'Chester Thornless' stands out as the cultivar with the highest level of vitamin C in under-ripe (125.87 mg kg⁻¹) and optimally mature fruit (127.66 mg kg⁻¹). Maturity stage significantly affected the accumulation of phenolic compounds: the content of total anthocyanins increased by 43% at the optimal maturity stage and that of cinnamic acid derivatives by 57% compared to under-ripe fruit. Over-ripe blackberries were distinguished by the highest content of total phenolics (1251-2115 mg GAE kg⁻¹ FW) and the greatest FRAP values (25.9-43.2 mM TE kg⁻¹ FW). Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. The fifty highest cited papers in anterior cruciate ligament injury.

    Science.gov (United States)

    Vielgut, Ines; Dauwe, Jan; Leithner, Andreas; Holzer, Lukas A

    2017-07-01

    The anterior cruciate ligament (ACL) is one of the most commonly injured knee ligaments and, at the same time, one of the most frequent injuries seen in sports orthopaedic practice. Due to the clinical relevance of ACL injuries, numerous papers focusing on this topic, including biomechanical, basic science, clinical and animal studies, have been published. The purpose of this study was to determine the most frequently cited scientific articles addressing this subject, establish a ranking of the 50 highest cited papers and analyse their characteristics. The 50 highest cited articles related to anterior cruciate ligament injury were searched in Thomson ISI Web of Science® using defined search terms. All types of scientific papers relevant to the topic were ranked according to the absolute number of citations and analysed for the following characteristics: journal title, year of publication, number of citations, citation density, geographic origin, article type and level of evidence. The 50 highest cited articles had up to 1624 citations. The top ten papers on this topic were cited at least 600 times each. Most papers were published in the American Journal of Sports Medicine. The publication years spanned from 1941 to 2007, with the 1990s and 2000s accounting for half of the articles (n = 25). Seven countries contributed to the top 50 list, with the USA having by far the largest contribution (n = 40). The majority of articles could be attributed to the category "Clinical Science & Outcome", and most of them represent a high level of evidence. Scientific articles in the field of ACL injury are highly cited. The majority of these articles are clinical studies that have a high level of evidence. Although most of the articles were published between 1990 and 2007, the highest cited articles in absolute and relative numbers were published in the early 1980s. These articles contain well-established scoring or classification systems. The

  8. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    OpenAIRE

    Edmundo Guerra; Rodrigo Munguia; Yolanda Bolea; Antoni Grau

    2013-01-01

    Simultaneous Location and Mapping (SLAM) is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a unique camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on the Delayed Inverse-Depth (DI-D) Feature Initialization, with the contribution of a new data association batch validation technique, the Highest Order Hyp...

  9. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails, one of small left-wing values and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.

  10. April 25, 2003, FY2003 Progress Summary and FY2002 Program Plan, Statement of Work and Deliverables for Development of High Average Power Diode-Pumped Solid State Lasers,and Complementary Technologies, for Applications in Energy and Defense

    International Nuclear Information System (INIS)

    Meier, W; Bibeau, C

    2005-01-01

    The High Average Power Laser Program (HAPL) is a multi-institutional, synergistic effort to develop inertial fusion energy (IFE). This program is building a physics and technology base to complement the laser-fusion science being pursued by DOE Defense programs in support of Stockpile Stewardship. The primary institutions responsible for overseeing and coordinating the research activities are the Naval Research Laboratory (NRL) and Lawrence Livermore National Laboratory (LLNL). The current LLNL proposal is a companion document to the one submitted by NRL, for which the driver development element is focused on the krypton fluoride excimer laser option. The NRL and LLNL proposals also jointly pursue complementary activities with the associated rep-rated laser technologies relating to target fabrication, target injection, final optics, fusion chamber, target physics, materials and power plant economics. This proposal requests continued funding in FY03 to support LLNL in its program to build a 1 kW, 100 J, diode-pumped, crystalline laser, as well as research into high gain fusion target design, fusion chamber issues, and survivability of the final optic element. These technologies are crucial to the feasibility of inertial fusion energy power plants and also have relevance in rep-rated stewardship experiments. The HAPL Program pursues technologies needed for laser-driven IFE. System level considerations indicate that a rep-rated laser technology will be needed, operating at 5-10 Hz. Since a total energy of ∼2 MJ will ultimately be required to achieve suitable target gain with direct drive targets, the architecture must be scaleable. The Mercury Laser is intended to offer such an architecture. Mercury is a solid state laser that incorporates diodes, crystals and gas cooling technologies

  11. Average nuclear surface properties

    International Nuclear Information System (INIS)

    Groote, H. von.

    1979-01-01

    The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended to the case where there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction in this model were determined by a least-squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry beyond the neutron-drip line until the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)

  12. Robert Aymar receives one of the highest Finnish distinctions

    CERN Multimedia

    2008-01-01

    On 9 December 2008 Robert Aymar, CERN Director-General, was awarded the decoration of Commander, first class, of the Order of the Lion of Finland by the President of the Republic of Finland. This decoration, one of the highest of Finland, was presented in a ceremony by the Ambassador Hannu Himanen, Permanent Representative of Finland to the UN and other international organisations in Geneva. Robert Aymar was honoured for his service to CERN, the LHC, his role in the cooperation between Finland and CERN, as well as his contribution to science in general. In his speech the ambassador underlined CERN’s efforts in the field of education, mentioning the High school teachers programme.

  13. Z-burst scenario for the highest energy cosmic rays

    International Nuclear Information System (INIS)

    Fodor, Z.

    2002-10-01

    The origin of highest energy cosmic rays is yet unknown. An appealing possibility is the so-called Z-burst scenario, in which a large fraction of these cosmic rays are decay products of Z bosons produced in the scattering of ultrahigh energy neutrinos on cosmological relic neutrinos. The comparison between the observed and predicted spectra constrains the mass of the heaviest neutrino. The required neutrino mass is fairly robust against variations of the presently unknown quantities, such as the amount of relic neutrino clustering, the universal photon radio background and the extragalactic magnetic field. Considering different possibilities for the ordinary cosmic rays the required neutrino masses are determined. In the most plausible case that the ordinary cosmic rays are of extragalactic origin and the universal radio background is strong enough to suppress high energy photons, the required neutrino mass is 0.08 eV ≤ m_ν ≤ 0.40 eV. The required ultrahigh energy neutrino flux should be detected in the near future by experiments such as AMANDA, RICE or the Pierre Auger Observatory. (orig.)

  14. Compatibility of Firm Positioning Strategy and Website Content: Highest

    Directory of Open Access Journals (Sweden)

    Evla MUTLU KESİCİ

    2017-07-01

    Full Text Available Corporate websites are essential platforms through which firms introduce their goods and services at the B2B and B2C level, present financial information for stakeholders, and share corporate values, purposes and activities. Owing to these capabilities, websites play a part in firm positioning strategy. Accordingly, this study aims to understand innovation-oriented positioning through corporate websites. The method applied in this study has been adapted from the 2QCV2Q Model developed by Mich and Franch (2000) to evaluate websites, and the top 30 firms with the highest Research and Development expenditures listed in Turkishtime (2015) have been analyzed. Within this context, this study presents a revised and updated method for the assessment of websites within a positioning strategy framework. Findings indicate no direct relationship between website evaluation and R&D expenditure, though some common weaknesses have been identified, such as information about the management of the firms. Moreover, publicly traded firms are found to use their websites more effectively than non-publicly traded firms. The study contributes to both academia and practitioners by putting forward a new approach to the 2QCV2Q Model and indicating the similarities and differences among corporate websites from a positioning perspective.

  15. Estimation of the center frequency of the highest modulation filter.

    Science.gov (United States)

    Moore, Brian C J; Füllgrabe, Christian; Sek, Aleksander

    2009-02-01

    For high-frequency sinusoidal carriers, the threshold for detecting sinusoidal amplitude modulation increases when the signal modulation frequency increases above about 120 Hz. Using the concept of a modulation filter bank, this effect might be explained by (1) a decreasing sensitivity or greater internal noise for modulation filters with center frequencies above 120 Hz; and (2) a limited span of center frequencies of the modulation filters, the top filter being tuned to about 120 Hz. The second possibility was tested by measuring modulation masking in forward masking using an 8 kHz sinusoidal carrier. The signal modulation frequency was 80, 120, or 180 Hz and the masker modulation frequencies covered a range above and below each signal frequency. Four highly trained listeners were tested. For the 80-Hz signal, the signal threshold was usually maximal when the masker frequency equaled the signal frequency. For the 180-Hz signal, the signal threshold was maximal when the masker frequency was below the signal frequency. For the 120-Hz signal, two listeners showed the former pattern, and two showed the latter pattern. The results support the idea that the highest modulation filter has a center frequency in the range 100-120 Hz.

  16. Kyle Cranmer receives the highest recognition from the US government

    CERN Multimedia

    Allen Mincer

    Kyle Cranmer with Clay Sell, Deputy Secretary of Energy
    Kyle Cranmer, who has worked on ATLAS as a graduate student at the University of Wisconsin-Madison, a Goldhaber Fellow at Brookhaven National Laboratory, and, most recently, an Assistant Professor at New York University, has been awarded a Presidential Early Career Award for Scientists and Engineers (PECASE). As described at the United States Department of Energy web page: "The PECASE Awards are intended to recognize some of the finest scientists and engineers who, while early in their research careers, show exceptional potential for leadership at the frontiers of scientific knowledge during the twenty-first century...The PECASE Award is the highest honor bestowed by the U.S. government on outstanding scientists and engineers beginning their independent careers." Kyle's work on ATLAS focuses on tools and strategies for data analysis, triggering, and searches for the Higgs. At the awards ceremony, which took place on Thursday Nov. 1st in Washington, D.C.,...

  17. Recreational fishing selectively captures individuals with the highest fitness potential.

    Science.gov (United States)

    Sutter, David A H; Suski, Cory D; Philipp, David P; Klefoth, Thomas; Wahl, David H; Kersten, Petra; Cooke, Steven J; Arlinghaus, Robert

    2012-12-18

    Fisheries-induced evolution and its impact on the productivity of exploited fish stocks remains a highly contested research topic in applied fish evolution and fisheries science. Although many quantitative models assume that larger, more fecund fish are preferentially removed by fishing, there is no empirical evidence describing the relationship between vulnerability to capture and individual reproductive fitness in the wild. Using males from two lines of largemouth bass (Micropterus salmoides) selectively bred over three generations for either high (HV) or low (LV) vulnerability to angling as a model system, we show that the trait "vulnerability to angling" positively correlates with aggression, intensity of parental care, and reproductive fitness. The difference in reproductive fitness between HV and LV fish was particularly evident among larger males, which are also the preferred mating partners of females. Our study constitutes experimental evidence that recreational angling selectively captures individuals with the highest potential for reproductive fitness. Our study further suggests that selective removal of the fittest individuals likely occurs in many fisheries that target species engaged in parental care. As a result, depending on the ecological context, angling-induced selection may have negative consequences for recruitment within wild populations of largemouth bass and possibly other exploited species in which behavioral patterns that determine fitness, such as aggression or parental care, also affect their vulnerability to fishing gear.

  18. Academic Training - Tevatron: studying pp collisions at the highest energy

    CERN Multimedia

    2006-01-01

    ACADEMIC TRAINING LECTURE SERIES 15, 16, 17, 18 May Main Auditorium, bldg. 500 on 15, 16, 17 May - Council Chamber on 18 May Physics at the Tevatron B. HEINEMANN, Univ. of Liverpool, FERMILAB Physics Results from the Tevatron The Tevatron proton-antiproton collider at Fermilab in the US is currently the world's highest energy collider. At the experiments CDF and D0 a broad physics programme is being pursued, ranging from flavour physics via electroweak precision measurements to searches for the Higgs boson and new particles beyond the Standard Model. In my lecture I will describe some of the highlight measurements in the flavour, electroweak and searches sectors, and the experimental techniques that are used.

  19. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2013-08-01

    Full Text Available Simultaneous Location and Mapping (SLAM) is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a unique camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on the Delayed Inverse-Depth (DI-D) Feature Initialization, with the contribution of a new data association batch validation technique, the Highest Order Hypothesis Compatibility Test (HOHCT). The Delayed Inverse-Depth technique is used to initialize new features in the system and defines a single hypothesis for the initial depth of features using a stochastic triangulation technique. The introduced HOHCT method is based on the evaluation of statistically compatible hypotheses and a search algorithm designed to exploit the strengths of the Delayed Inverse-Depth technique to achieve good performance results. This work presents the HOHCT with a detailed formulation of the monocular DI-D SLAM problem. The performance of the proposed HOHCT is validated with experimental results, in both indoor and outdoor environments, while its costs are compared with other popular approaches.

  20. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER...

  1. Low-peak-to-average power ratio and low-complexity asymmetrically clipped optical orthogonal frequency-division multiplexing uplink transmission scheme for long-reach passive optical network.

    Science.gov (United States)

    Zhou, Ji; Qiao, Yaojun

    2015-09-01

    In this Letter, we propose a discrete Hartley transform (DHT)-spread asymmetrically clipped optical orthogonal frequency-division multiplexing (DHT-S-ACO-OFDM) uplink transmission scheme in which the multiplexing/demultiplexing process also uses the DHT algorithm. By designing a simple encoding structure, the computational complexity of the transmitter can be reduced from O(N log₂ N) to O(N). At a probability of 10⁻³, the peak-to-average power ratio (PAPR) of 2-ary pulse amplitude modulation (2-PAM)-modulated DHT-S-ACO-OFDM is approximately 9.7 dB lower than that of 2-PAM-modulated conventional ACO-OFDM. To verify the feasibility of the proposed scheme, a 4-Gbit/s DHT-S-ACO-OFDM uplink transmission scheme with a 1:64 way split has been experimentally implemented over 100 km of standard single-mode fiber (SSMF) for a long-reach passive optical network (LR-PON).
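
The peak-to-average power ratio that the scheme above reduces is easy to compute for any discrete-time signal. The sketch below is illustrative only (a naive inverse DFT of random BPSK symbols, not the DHT-spread transmitter): it contrasts the 0 dB PAPR of a constant-envelope single-carrier signal with the elevated PAPR of a multicarrier signal:

```python
import cmath
import math
import random

def papr_db(samples):
    """Peak instantaneous power over average power, in dB."""
    powers = [abs(s) ** 2 for s in samples]
    return 10.0 * math.log10(max(powers) / (sum(powers) / len(powers)))

def idft(symbols):
    """Naive inverse DFT: sums subcarriers into an OFDM-like time signal."""
    n = len(symbols)
    return [sum(X * cmath.exp(2j * math.pi * k * m / n)
                for k, X in enumerate(symbols)) / n
            for m in range(n)]

random.seed(1)
n = 64
bpsk = [random.choice([-1.0, 1.0]) for _ in range(n)]   # constant envelope
ofdm = idft(bpsk)                                       # subcarriers add up

print(papr_db(bpsk))   # single-carrier BPSK: 0 dB
print(papr_db(ofdm))   # multicarrier: several dB higher
```

High PAPR is what forces costly linear amplifiers in OFDM links, which is why PAPR-reduction schemes such as DHT spreading matter for low-cost PON transmitters.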

  2. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
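The identity stated in this abstract (difference of two weighted averages equals the covariance between the variable and the ratio of the weighting functions, divided by the average of that ratio, all taken under the first weighting) can be checked numerically. A minimal sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)           # the variable (e.g., a rate by age)
u = rng.uniform(0.5, 1.5, 1000)     # first weighting function
w = rng.uniform(0.5, 1.5, 1000)     # alternative weighting function

avg_w = np.average(x, weights=w)
avg_u = np.average(x, weights=u)

r = w / u                           # ratio of the weighting functions
# covariance of x and r under the u-weighting, divided by the u-weighted mean of r
cov = np.average(x * r, weights=u) - np.average(x, weights=u) * np.average(r, weights=u)
identity = cov / np.average(r, weights=u)

assert np.isclose(avg_w - avg_u, identity)
```

The algebra is exact, so the two sides agree to floating-point precision for any choice of weights.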

  3. Power

    DEFF Research Database (Denmark)

    Elmholdt, Claus Westergård; Fogsgaard, Morten

    2016-01-01

    …and creativity suggests that when managers give people the opportunity to gain power and explicate that there is reason to be more creative, people will show a boost in creative behaviour. Moreover, this process works best in unstable power hierarchies, which implies that power is treated as a negotiable… It is thus a central point that power is not necessarily something that breaks down and represses. On the contrary, an explicit focus on the dynamics of power in relation to creativity can be productive for the organisation. Our main focus is to elaborate the implications of this for practice and theory…

  4. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Full Text Available Calculations of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, are prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging is often superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals, and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because, in logarithmic retrievals, the weight of the prior information depends on the abundance of the gas itself. No simple rule was found for which kind of averaging is superior; instead of suggesting simple recipes, we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
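The basic bias mechanism (averaging in log space yields the geometric mean, which by Jensen's inequality sits below the arithmetic mean) can be demonstrated with synthetic lognormal abundances; the distribution parameters here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
abund = rng.lognormal(mean=0.0, sigma=0.8, size=10_000)  # synthetic trace gas abundances

lin_mean = abund.mean()                   # linear averaging of retrieved abundances
log_mean = np.exp(np.log(abund).mean())   # averaging the log-retrievals, then exponentiating

# Jensen's inequality: the log-space average biases the mean low
bias_percent = 100 * (log_mean - lin_mean) / lin_mean
```

With sigma = 0.8 the low bias is on the order of tens of percent, consistent with the "ten percent or more" figure quoted above for variable atmospheres.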

  5. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  6. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant dynamical effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis

  7. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  8. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross-section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, it is extremely difficult, if not totally impossible, to detect a complete sequence of levels without mixing in levels of other parities. Most methods derive the average level spacing by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distribution of level widths and that of level positions is discussed extensively, with ¹⁶⁸Er data as an example. 19 figures, 2 tables

  9. The highest velocity and the shortest duration permitting attainment of VO2max during running

    Directory of Open Access Journals (Sweden)

    Tiago Turnes

    2015-02-01

    Full Text Available DOI: http://dx.doi.org/10.5007/1980-0037.2015v17n2p226   The severe-intensity domain has important applications for the prescription of running training and the elaboration of experimental designs. The objectives of this study were: 1) to investigate the validity of a previously proposed model to estimate the shortest exercise duration (TLOW) and the highest velocity (VHIGH) at which VO2max is reached during running, and 2) to evaluate the effects of aerobic training status on these variables. Eight runners and eight physically active subjects performed several treadmill running exercise tests to fatigue in order to mathematically estimate and experimentally determine TLOW and VHIGH. The relationship between the time to achieve VO2max and the time to exhaustion (Tlim) was used to estimate TLOW. VHIGH was estimated using the critical velocity model and was assumed to be the highest velocity at which VO2 was equal to or higher than the average VO2max minus one standard deviation. TLOW was defined as the Tlim associated with VHIGH. Runners presented better aerobic fitness and higher VHIGH (22.2 ± 1.9 km.h-1) than active subjects (20.0 ± 2.1 km.h-1). However, TLOW did not differ between groups (runners: 101 ± 39 s; active subjects: 100 ± 35 s). TLOW and VHIGH were not well estimated by the proposed model, which showed high coefficients of variation (>6%) and a low correlation coefficient (r < 0.70), reducing its validity. It was concluded that aerobic training status positively affected only VHIGH. Furthermore, the proposed model presented low validity for estimating the upper boundary of the severe-intensity domain (i.e., VHIGH), irrespective of the subjects' training status.

  10. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain, and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.

  11. Cost curves for implantation of small scale hydroelectric power plant project in function of the average annual energy production; Curvas de custo de implantacao de pequenos projetos hidreletricos em funcao da producao media anual de energia

    Energy Technology Data Exchange (ETDEWEB)

    Veja, Fausto Alfredo Canales; Mendes, Carlos Andre Bulhoes; Beluco, Alexandre

    2008-10-15

    Because of its maturity, small hydropower generation is one of the main energy sources to be considered for the electrification of areas far from the national grid. Once a site with hydropower potential is identified, technical and economic studies must be carried out to assess its feasibility. Cost curves are helpful tools in appraising the economic feasibility of this type of project. This paper presents a method to determine initial cost curves as a function of the average energy production of the hydropower plant, using a set of parametric cost curves and the flow duration curve at the analyzed location. The method is illustrated using information from 18 pre-feasibility studies made in 2002 in the Central-Atlantic rural region of Nicaragua. (author)
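The core calculation behind such cost curves, integrating power over the flow duration curve to obtain average annual energy, can be sketched as follows. Every number here (the exponential flow-duration fit, head, efficiency, design flow) is a made-up illustration, not data from the Nicaraguan studies:

```python
import numpy as np

# Hypothetical flow-duration curve: exceedance probability p -> discharge Q [m^3/s]
p = np.linspace(0.0, 1.0, 101)
Q = 12.0 * np.exp(-2.5 * p)            # illustrative fit, not site data

rho, g = 1000.0, 9.81                  # water density [kg/m^3], gravity [m/s^2]
H, eta = 25.0, 0.85                    # assumed net head [m] and overall efficiency
Q_design = 6.0                         # turbine design flow caps the usable discharge

# Power [MW] available at each exceedance level, clipped at turbine capacity
P_MW = eta * rho * g * np.minimum(Q, Q_design) * H / 1e6
# Average annual energy [MWh]: trapezoidal integral over p, times hours per year
E_avg_MWh = np.sum((P_MW[1:] + P_MW[:-1]) / 2 * np.diff(p)) * 8760.0
```

Plotting installation cost (from the parametric curves) against E_avg_MWh over a set of candidate sites yields the cost curve the paper describes.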

  12. Extreme Markup: The Fifty US Hospitals With The Highest Charge-To-Cost Ratios.

    Science.gov (United States)

    Bai, Ge; Anderson, Gerard F

    2015-06-01

    Using Medicare cost reports, we examined the fifty US hospitals with the highest charge-to-cost ratios in 2012. These hospitals have markups (ratios of charges over Medicare-allowable costs) approximately ten times their Medicare-allowable costs compared to a national average of 3.4 and a mode of 2.4. Analysis of the fifty hospitals showed that forty-nine are for profit (98 percent), forty-six are owned by for-profit hospital systems (92 percent), and twenty (40 percent) operate in Florida. One for-profit hospital system owns half of these fifty hospitals. While most public and private health insurers do not use hospital charges to set their payment rates, uninsured patients are commonly asked to pay the full charges, and out-of-network patients and casualty and workers' compensation insurers are often expected to pay a large portion of the full charges. Because it is difficult for patients to compare prices, market forces fail to constrain hospital charges. Federal and state governments may want to consider limitations on the charge-to-cost ratio, some form of all-payer rate setting, or mandated price disclosure to regulate hospital markups. Project HOPE—The People-to-People Health Foundation, Inc.

  13. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although the restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).

  14. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although the restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)
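The estimator analyzed in these two records, an exponentially weighted average of successive raw periodograms, can be sketched in a few lines. Block length, smoothing constant and block count are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
N, alpha, n_blocks = 256, 0.1, 200       # block length, smoothing constant, blocks processed

psd = None
for _ in range(n_blocks):
    x = rng.normal(size=N)                       # unit-variance white noise block
    pgram = np.abs(np.fft.rfft(x)) ** 2 / N      # raw periodogram (chi^2-like, 2 dof per bin)
    # exponential averaging: new estimate = (1 - alpha) * old + alpha * current periodogram
    psd = pgram if psd is None else (1 - alpha) * psd + alpha * pgram
```

For unit-variance white noise the true (two-sided) PSD is flat at 1, so after enough blocks the interior bins of `psd` cluster around 1 with variance roughly alpha/(2 - alpha) per bin, much smaller than the variance of a single raw periodogram.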

  15. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  16. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs is graph filters: direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
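A first-order ARMA graph filter of the kind this abstract describes can be realized as the distributed recursion y ← ψ S y + φ x, whose steady state is the rational response φ(I − ψS)⁻¹x. A minimal sketch on a 4-node cycle graph (the particular shift operator and coefficients are illustrative, not from the paper):

```python
import numpy as np

# 4-node cycle graph; shift operator S = normalized Laplacian shifted into [-1, 1]
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
d = A.sum(axis=1)
L = np.eye(4) - np.diag(d ** -0.5) @ A @ np.diag(d ** -0.5)  # eigenvalues in [0, 2]
S = L - np.eye(4)                                            # spectrum in [-1, 1]

psi, phi = 0.4, 1.0               # |psi| * ||S|| < 1 guarantees convergence
x = np.array([1.0, -2.0, 0.5, 3.0])   # input graph signal

y = np.zeros(4)
for _ in range(200):              # distributed ARMA-1 recursion (only neighbor exchanges)
    y = psi * S @ y + phi * x

# Steady state equals the closed-form rational graph filter phi * (I - psi S)^{-1} x
y_exact = np.linalg.solve(np.eye(4) - psi * S, phi * x)
```

Each iteration only requires nodes to exchange values with neighbors (the S @ y product), which is what makes the recursion distributable.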

  17. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models

  18. Towards highest peak intensities for ultra-short MeV-range ion bunches

    Science.gov (United States)

    Busold, Simon; Schumacher, Dennis; Brabetz, Christian; Jahn, Diana; Kroll, Florian; Deppert, Oliver; Schramm, Ulrich; Cowan, Thomas E.; Blažević, Abel; Bagnoud, Vincent; Roth, Markus

    2015-01-01

    A laser-driven, multi-MeV-range ion beamline has been installed at the GSI Helmholtz center for heavy ion research. The high-power laser PHELIX drives the very short (picosecond) ion acceleration on the μm scale, with energies ranging up to 28.4 MeV for protons in a continuous spectrum. The necessary beam shaping behind the source is accomplished by applying magnetic ion lenses such as solenoids and quadrupoles and a radiofrequency cavity. Based on the unique beam properties of the laser-driven source, high-current single bunches could be produced and characterized in a recent experiment: at a central energy of 7.8 MeV, up to 5 × 10⁸ protons could be re-focused in time to a FWHM bunch length of τ = (462 ± 40) ps via phase focusing. The bunches show a moderate energy spread between 10% and 15% (ΔE/E₀ at FWHM) and are available at 6 m distance from the source, and thus separated from the harsh laser-matter interaction environment. These successful experiments represent the basis for developing novel laser-driven ion beamlines and accessing the highest peak intensities for ultra-short MeV-range ion bunches. PMID:26212024

  19. Towards highest peak intensities for ultra-short MeV-range ion bunches

    Science.gov (United States)

    Busold, Simon; Schumacher, Dennis; Brabetz, Christian; Jahn, Diana; Kroll, Florian; Deppert, Oliver; Schramm, Ulrich; Cowan, Thomas E.; Blažević, Abel; Bagnoud, Vincent; Roth, Markus

    2015-07-01

    A laser-driven, multi-MeV-range ion beamline has been installed at the GSI Helmholtz center for heavy ion research. The high-power laser PHELIX drives the very short (picosecond) ion acceleration on the μm scale, with energies ranging up to 28.4 MeV for protons in a continuous spectrum. The necessary beam shaping behind the source is accomplished by applying magnetic ion lenses such as solenoids and quadrupoles and a radiofrequency cavity. Based on the unique beam properties of the laser-driven source, high-current single bunches could be produced and characterized in a recent experiment: at a central energy of 7.8 MeV, up to 5 × 10⁸ protons could be re-focused in time to a FWHM bunch length of τ = (462 ± 40) ps via phase focusing. The bunches show a moderate energy spread between 10% and 15% (ΔE/E₀ at FWHM) and are available at 6 m distance from the source, and thus separated from the harsh laser-matter interaction environment. These successful experiments represent the basis for developing novel laser-driven ion beamlines and accessing the highest peak intensities for ultra-short MeV-range ion bunches.

  20. Topological quantization of ensemble averages

    International Nuclear Information System (INIS)

    Prodan, Emil

    2009-01-01

    We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states

  1. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by the FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
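The classical TDA baseline this record improves upon is simple to demonstrate: slice a vibration record into whole shaft periods and average them, so that synchronous components survive while broadband noise is attenuated by roughly 1/√N. A minimal sketch with a synthetic signal (the period length and noise level are arbitrary, and no PCE is modeled here):

```python
import numpy as np

rng = np.random.default_rng(4)
period, n_revs = 100, 400                    # samples per revolution, revolutions recorded
t = np.arange(period * n_revs)
clean = np.sin(2 * np.pi * t / period)       # periodic (gear-mesh-like) component
signal = clean + rng.normal(scale=1.0, size=t.size)   # buried in broadband noise

# Classical TDA: reshape into whole periods and average them (a comb filter in frequency)
tda = signal.reshape(n_revs, period).mean(axis=0)

# Leftover noise in the averaged period, expected to scale like 1/sqrt(n_revs)
residual_std = (tda - clean[:period]).std()
```

When the true period is not an integer number of samples, this reshape-and-average step is exactly where period cutting error enters, which is the problem the FTDA addresses.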

  2. Pareto Principle in Datamining: an Above-Average Fencing Algorithm

    Directory of Open Access Journals (Sweden)

    K. Macek

    2008-01-01

    Full Text Available This paper formulates a new datamining problem: which subset of the input space has the relatively highest output, where a minimal size for this subset is given. This can be useful where usual datamining methods fail because of error distribution asymmetry. The paper provides a novel algorithm for this datamining problem, and compares it with clustering of above-average individuals.
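One simple reading of the problem, not the paper's own algorithm, can be sketched as follows: given a minimal subset fraction q, take the q-fraction of individuals with the highest output and "fence" them with an axis-aligned box in the input space. All data and parameters here are synthetic illustrations:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, size=(500, 2))        # 2-D input space
# Output peaks near (0.5, 0.5); asymmetric noise is the motivating setting in the abstract
y = -((X[:, 0] - 0.5) ** 2 + (X[:, 1] - 0.5) ** 2) + rng.normal(scale=0.05, size=500)

q = 0.1                                      # required minimal subset fraction
k = int(q * len(y))
top = np.argsort(y)[-k:]                     # the k individuals with the highest output
lo, hi = X[top].min(axis=0), X[top].max(axis=0)   # axis-aligned "fence" around them

inside = np.all((X >= lo) & (X <= hi), axis=1)    # everyone enclosed by the fence
```

The fenced region then has a mean output above the population average, which is the property the paper's above-average fencing algorithm optimizes more carefully.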

  3. The average Indian female nose.

    Science.gov (United States)

    Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh

    2011-12-01

    This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.

  4. Short pulse mid-infrared amplifier for high average power

    CSIR Research Space (South Africa)

    Botha, LR

    2006-09-01

    Full Text Available High pressure CO2 lasers are good candidates for amplifying picosecond mid infrared pulses. High pressure CO2 lasers are notorious for being unreliable and difficult to operate. In this paper a high pressure CO2 laser is presented based on well...

  5. Average Power and Brightness Scaling of Diamond Raman Lasers

    Science.gov (United States)

    2012-01-07


  6. Picosecond mid-infrared amplifier for high average power.

    CSIR Research Space (South Africa)

    Botha, LR

    2007-04-01

    Full Text Available High pressure CO2 lasers are good candidates for amplifying picosecond mid infrared pulses. High pressure CO2 lasers are notorious for being unreliable and difficult to operate. In this paper a high pressure CO2 laser is presented based on well...

  7. Significance of power average of sinusoidal and non-sinusoidal ...

    Indian Academy of Sciences (India)

    2016-06-08


  8. A new derivation of the highest-weight polynomial of a unitary lie algebra

    International Nuclear Information System (INIS)

    P Chau, Huu-Tai; P Van, Isacker

    2000-01-01

    A new method is presented to derive the expression of the highest-weight polynomial used to build the basis of an irreducible representation (IR) of the unitary algebra U(2J+1). After a brief reminder of Moshinsky's method to arrive at the set of equations defining the highest-weight polynomial of U(2J+1), an alternative derivation of the polynomial from these equations is presented. The method is less general than the one proposed by Moshinsky but has the advantage that the determinantal expression of the highest-weight polynomial is arrived at in a direct way using matrix inversions. (authors)

  9. PV String to 3-Phase Inverter with Highest Voltage Capabilities, Highest Efficiency and 25 Year Lifetime: Final Technical Report, November 7, 2011 - November 6, 2012

    Energy Technology Data Exchange (ETDEWEB)

    West, R.

    2012-12-01

    Final report for Renewable Power Conversion. The overall objective of this project was to develop a prototype PV inverter which enables a new utility-scale PV system approach where the cost, performance, reliability and safety benefits of this new approach have the potential to make all others obsolete.

  10. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In this paper, the impact of two adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with on-off keying (OOK), polarization shift keying (POLSK), and coherent optical wireless communication (coherent OWC) systems under different turbulence regimes. To further enhance the ASE, we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that the ORA scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA excels the other modulation schemes and can achieve an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture averaging effect, we can achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with coherent OWC modulation a favorable candidate for improving the ASE of FSO communication systems.

  11. Development of an Interferometric Phased Array Trigger for Balloon-Borne Detection of the Highest Energy Cosmic Particles

    Science.gov (United States)

    Vieregg, Abigail

    interferometric phased array trigger for these impulsive radio detectors, a new type of trigger that will improve sensitivity substantially and expedite the discovery of the highest energy particles in our universe. We have developed an 8- channel interferometric trigger board for ground-based applications that will be deployed in December 2017 with the ground-based Askaryan Radio Array (ARA) experiment at the South Pole. Preliminary Monte Carlo simulations indicate that the cosmogenic neutrino event rate will go up by a factor of 3 with the new trigger. The true power of the interferometric trigger is in scaling to large numbers of channels, and the discovery space that is only available from a balloon platform at the highest energies is extremely appealing. We will build on and extend the NASA investment in the ANITA Long Duration Balloon (LDB) mission and the many other complementary particle astrophysics LDB missions by developing the electronics required to bring a large-scale radio interferometric trigger to a balloon platform, extending the scientific reach of any future LDB or Super Pressure Balloon (SPB) mission for radio detection of the highest energy cosmic particles. We will develop an interferometric trigger system that is scalable to O(100) channels and suitable for use on a balloon platform. Under this proposal, we will: 1) Design and fabricate interferometric trigger hardware for balloon-borne cosmic particle detectors that is scalable to large numbers of channels O(100) by reducing the power consumption per channel, increasing the number of channels per board, and developing high-speed communication capability between boards. 2) Perform a trade study and inform design decisions for future balloon missions by further developing our Monte Carlo simulation and adapting it to balloon geometries.
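The sensitivity argument behind the interferometric (phased array) trigger can be illustrated numerically: coherently summing N delay-aligned channels grows the signal power as N², while independent thermal noise power grows only as N, for an SNR gain of roughly N. A toy Monte Carlo sketch (all parameters are illustrative, not from ARA or ANITA):

```python
import numpy as np

rng = np.random.default_rng(6)
n_ant, n_samp, n_trials = 8, 256, 200
pulse = np.zeros(n_samp)
pulse[100] = 5.0                             # impulsive radio signal, already delay-aligned

gains = []
for _ in range(n_trials):
    noise = rng.normal(size=(n_ant, n_samp))          # independent noise per channel
    beam = (pulse + noise).sum(axis=0)                # delay-and-sum ("phased") output
    snr_single = pulse[100] ** 2 / noise[0].var()
    snr_beam = (n_ant * pulse[100]) ** 2 / beam[np.arange(n_samp) != 100].var()
    gains.append(snr_beam / snr_single)

# Expected gain ~ n_ant: signal voltage adds coherently, noise adds incoherently
mean_gain = np.mean(gains)
```

This N-fold SNR gain is why scaling the trigger to O(100) channels, as proposed above, lowers the energy threshold so substantially.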

  12. The Highest Good and the Practical Regulative Knowledge in Kant’s Critique of Practical Reason

    OpenAIRE

    Joel Thiago Klein

    2016-01-01

    In this paper I defend three different points: first, that the concept of the highest good is derived from an a priori but subjective argument, namely a maxim of pure practical reason; secondly, that the theory regarding the highest good has the validity of a practical regulative knowledge; and thirdly, that practical regulative knowledge can be understood as the same kind of “holding something to be true” that Kant attributes to hope and belief.

  13. INVESTIGATION OF NONAXISYMMETRIC STRESS STATE FOR QUASI-STATIC THERMAL POWER LOADING UNDER CONDITIONS OF HIGHEST-ENERGY PARTICLE IRRADIATION

    Directory of Open Access Journals (Sweden)

    A. V. Chigarev

    2013-01-01

    Full Text Available The paper presents virtual 2D (r, θ) modeling of the deformation of a singly-connected cylindrical solid under thermo-irradiation impact. The influence of a circumferential distortion on the nonaxisymmetric strain-stress state has been investigated for various values of temperature amplitude. A solid cylinder with internal heat sources has been considered as a model body. Properties of the model body correspond to cermet fuel (40 % UO2 + 60 % Cr).

  14. [The gender gap in highest quality medical research - A scientometric analysis of the representation of female authors in highest impact medical journals].

    Science.gov (United States)

    Bendels, Michael H K; Wanke, Eileen M; Benik, Steffen; Schehadat, Marc S; Schöffel, Norman; Bauer, Jan; Gerber, Alexander; Brüggmann, Dörthe; Oremek, Gerhard M; Groneberg, David A

    2018-05-01

     The study aims to elucidate the state of gender equality in high-impact medical research by analyzing the representation of female authorships from January 2008 to September 2017.  133 893 male and female authorships from seven high-impact medical journals were analyzed. The key methodology was the combined analysis of the relative frequency, odds ratio and citations of female authorships. The Prestige Index measures the distribution of prestigious authorships between the two genders.  35.0 % of all authorships and 34.3 % of the first, 36.1 % of the co- and 24.2 % of the last authorships were held by women. Female authors have an odds ratio of 0.97 (CI: 0.93 - 1.01) for first, 1.36 (CI: 1.32 - 1.40) for co- and 0.57 (CI: 0.54 - 0.60) for last authorships compared to male authors. The proportion of female authorships exhibits an annual growth of 1.3 % overall, with 0.5 % for first, 1.2 % for co-, and 0.8 % for last authorships. Women are underrepresented in prestigious authorships compared to men (Prestige Index = -0.38). The underrepresentation is accentuated in highly competitive articles attracting the highest citation rates, namely articles with many authors and articles published in the highest-impact journals. Multi-author articles with male key authors are more frequently cited than articles with female key authors. These gender-specific differences in citation rates increase the more authors contribute to an article. Women publish fewer articles than men (39.6 % female authors are responsible for 35.0 % of the authorships) and are underrepresented at productivity levels of more than one article per author. Distinct differences at the country level were revealed.  High-impact medical research is characterized by few female group leaders as last authors and many female researchers being first or co-authors early in their career. It is very likely that this gender-specific career dichotomy will persist.
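The odds-ratio comparison used in this record reduces to a 2x2 contingency table per authorship position. The counts below are invented placeholders, not the study's data; only the calculation (odds of a woman holding a last authorship relative to a man) follows the methodology the abstract describes.

```python
# Hypothetical 2x2 table: last authorships vs. all other authorships, by gender.
# These counts are illustrative only; the study's real data are not reproduced here.
female_last, female_other = 240, 760
male_last, male_other = 430, 570

odds_female = female_last / female_other   # odds of a female authorship being a last authorship
odds_male = male_last / male_other         # same odds for male authorships
odds_ratio = odds_female / odds_male       # < 1 means women are underrepresented as last authors
print(round(odds_ratio, 2))
```

With these placeholder counts the ratio is well below 1, the same direction as the 0.57 reported for last authorships.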

  15. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with nonlinearity management, which describes Bose-Einstein condensates under Feshbach resonance. Using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons.
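As a sketch of the model class this record (and record 12 below) refers to, the nonlinearity-managed NLS is commonly written as follows. The normalization is an assumed, standard form, not quoted from the paper:

```latex
% Nonlinearity-managed nonlinear Schroedinger equation (schematic form):
% n(t) is a periodically varying nonlinearity coefficient with period T.
i\,u_t + \tfrac{1}{2}\,u_{xx} + n(t)\,|u|^2 u = 0,
\qquad n(t+T) = n(t).
% To leading order, averaging replaces n(t) by its mean over one period:
\langle n \rangle = \frac{1}{T}\int_0^T n(t)\,dt .
```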

  16. Dipole model analysis of highest precision HERA data, including very low Q²'s

    International Nuclear Information System (INIS)

    Luszczak, A.; Kowalski, H.

    2016-12-01

    We analyse, within a dipole model, the final, inclusive HERA DIS cross section data in the low-x region, using fully correlated errors. We show that these highest precision data are very well described within the dipole model framework starting from Q² values of 3.5 GeV² up to the highest values of Q² = 250 GeV². To analyze the saturation effects we evaluated the data including also the very low 0.35 < Q² < 2 GeV² region. The fits including this region show a preference for the saturation ansatz.

  17. Correlation of the highest-energy cosmic rays with the positions of nearby active galactic nuclei

    NARCIS (Netherlands)

    Abraham, J.; Abreu, P.; Aglietta, M.; Aguirre, C.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Alvarez-Muniz, J.; Ambrosio, M.; Anchordoqui, L.; Andringa, S.; Anzalone, A.; Aramo, C.; Argiro, S.; Arisaka, K.; Armengaud, E.; Arneodo, F.; Arqueros, F.; Asch, T.; Asorey, H.; Assis, P.; Atulugama, B. S.; Aublin, J.; Ave, M.; Avila, G.; Baecker, T.; Badagnani, D.; Barbosa, A. F.; Barnhill, D.; Barroso, S. L. C.; Bauleo, P.; Beatty, J. J.; Beau, T.; Becker, B. R.; Becker, K. H.; Bellido, J. A.; BenZvi, S.; Berat, C.; Bergmann, T.; Bernardini, P.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blanch-Bigas, O.; Blanco, F.; Blasi, P.; Bleve, C.; Bluemer, H.; Bohacova, M.; Bonifazi, C.; Bonino, R.; Brack, J.; Brogueira, P.; Brown, W. C.; Buchholz, P.; Bueno, A.; Burton, R. E.; Busca, N. G.; Caballero-Mora, K. S.; Cai, B.; Camin, D. V.; Caramete, L.; Caruso, R.; Carvalho, W.; Castellina, A.; Catalano, O.; Cataldi, G.; Cazon, L.; Cester, R.; Chauvin, J.; Chiavassa, A.; Chinellato, J. A.; Chou, A.; Chye, J.; Clay, R. W.; Colombo, E.; Conceicao, R.; Connolly, B.; Contreras, F.; Coppens, J.; Cordier, A.; Cotti, U.; Coutu, S.; Covault, C. E.; Creusot, A.; Criss, A.; Cronin, J.; Curutiu, A.; Dagoret-Campagne, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; De Donato, C.; Bg, S. J. de Jong; De La Vega, G.; de Mello, W. J. M.; de Mello Neto, J. R. T.; De Mitri, I.; de Souza, V.; del Peral, L.; Deligny, O.; Della Selva, A.; Delle Fratte, C.; Dembinski, H.; Di Giulio, C.; Diaz, J. C.; Diep, P. N.; Dobrigkeit, C.; D'Olivo, J. C.; Dong, P. N.; Dornic, D.; Dorofeev, A.; dos Anjos, J. C.; Dova, M. T.; D'Urso, D.; Dutan, I.; DuVernois, M. A.; Engel, R.; Epele, L.; Escobar, C. O.; Etchegoyen, A.; Luis, P. Facal San; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferrer, F.; Ferry, S.; Fick, B.; Filevich, A.; Filipcic, A.; Fleck, I.; Fracchiolla, C. E.; Fulgione, W.; Garcia, B.; Gaimez, D. Garcia; Garcia-Pinto, D.; Garrido, X.; Geenen, H.; Gelmini, G.; Gemmeke, H.; Ghia, P. 
L.; Giller, M.; Glass, H.; Gold, M. S.; Golup, G.; Albarracin, F. Gomez; Berisso, M. Gomez; Herrero, R. Gomez; Goncalves, P.; do Amaral, M. Goncalves; Gonzalez, D.; Gonzalezc, J. G.; Gonzalez, M.; Gora, D.; Gorgi, A.; Gouffon, P.; Grassi, V.; Grillo, A. F.; Grunfeld, C.; Guardincerri, Y.; Guarino, F.; Guedes, G. P.; Gutierrez, J.; Hague, J. D.; Hamilton, J. C.; Hansen, P.; Harari, D.; Harmsma, S.; Harton, J. L.; Haungs, A.; Hauschildt, T.; Healy, M. D.; Hebbeker, T.; Hebrero, G.; Heck, D.; Hojvat, C.; Holmes, V. C.; Homola, P.; Hoerandel, J.; Horneffer, A.; Horvat, M.; Hrabovsky, M.; Huege, T.; Hussain, M.; Larlori, M.; Insolia, A.; Ionita, F.; Italiano, A.; Kaducak, M.; Kampert, K. H.; Karova, T.; Kegl, B.; Keilhauer, B.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Knapik, R.; Knapp, J.; Koanga, V. -H.; Krieger, A.; Kroemer, O.; Kuempel, D.; Kunka, N.; Kusenko, A.; La Rosa, G.; Lachaud, C.; Lago, B. L.; Lebrun, D.; LeBrun, P.; Lee, J.; de Oliveira, M. A. Leigui; Lopez, R.; Letessier-Selvon, A.; Leuthold, M.; Lhenry-Yvon, I.; Aguera, A. Lopez; Bahilo, J. Lozano; Garcia, R. Luna; Maccarone, M. C.; Macolino, C.; Maldera, S.; Mancarella, G.; Mancenido, M. E.; Mandatat, D.; Mantsch, P.; Mariazzi, A. G.; Maris, I. C.; Falcon, H. R. Marquez; Martello, D.; Martinez, J.; Bravo, O. Martinez; Mathes, H. J.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurizio, D.; Mazur, P. O.; McCauley, T.; McEwen, M.; McNeil, R. R.; Medina, M. C.; Medina-Tanco, G.; Meli, A.; Melo, D.; Menichetti, E.; Menschikov, A.; Meurer, Chr.; Meyhandan, R.; Micheletti, M. I.; Miele, G.; Miller, W.; Mollerach, S.; Monasor, M.; Ragaigne, D. Monnier; Montanet, F.; Morales, B.; Morello, C.; Moreno, J. C.; Morris, C.; Mostafa, M.; Muller, M. A.; Mussa, R.; Navarra, G.; Navarro, J. L.; Navas, S.; Necesal, P.; Nellen, L.; Newman-Holmes, C.; Newton, D.; Nhung, P. 
T.; Nierstenhoefer, N.; Nitz, D.; Nosek, D.; Nozka, L.; Oehlschlaeger, J.; Ohnuki, T.; Olinto, A.; Olmos-Gilbaja, V. M.; Ortiz, M.; Ortolani, F.; Ostapchenko, S.; Otero, L.; Pacheco, N.; Selmi-Dei, D. Pakk; Palatka, M.; Pallotta, J.; Parente, G.; Parizot, E.; Parlati, S.; Pastor, S.; Patel, M.; Paul, T.; Pavlidou, V.; Payet, K.; Pech, M.; Pekala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Petrera, S.; Petrinca, P.; Petrov, Y.; Pichel, A.; Piegaia, R.; Pierog, T.; Pimenta, M.; Pinto, T.; Pirronello, V.; Pisanti, O.; Platino, M.; Pochon, J.; Privitera, P.; Prouza, M.; Quel, E. J.; Rautenberg, J.; Redondo, A.; Reucroft, S.; Revenu, B.; Rezende, F. A. S.; Ridky, J.; Riggi, S.; Risse, M.; Riviere, C.; Rizi, V.; Roberts, M.; Robledo, C.; Rodriguez, G.; Martino, J. Rodriguez; Rojo, J. Rodriguez; Rodriguez-Cabo, I.; Rodriguez-Frias, M. D.; Ros, G.; Rosado, J.; Roth, M.; Rouille-d'Orfeuil, B.; Roulet, E.; Roverok, A. C.; Salamida, F.; Salazar, H.; Salina, G.; Sanchez, F.; Santander, M.; Santo, C. E.; Santos, E. M.; Sarazin, F.; Sarkar, S.; Sato, R.; Scherini, V.; Schieler, H.; Schmidt, A.; Schmidt, F.; Schmidt, T.; Scholten, O.; Schovanek, P.; Schuessler, F.; Sciutto, S. J.; Scuderi, M.; Segreto, A.; Semikoz, D.; Settimo, M.; Shellard, R. C.; Sidelnik, I.; Siffert, B. B.; Sigl, G.; De Grande, N. Smetniansky; Smialkowski, A.; Smida, R.; Smith, A. G. K.; Smith, B. E.; Snow, G. R.; Sokolsky, P.; Sommers, P.; Sorokin, J.; Spinka, H.; Squartini, R.; Strazzeri, E.; Stutz, A.; Suarez, F.; Suomijarvi, T.; Supanitsky, A. D.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Takahashi, J.; Tamashiro, A.; Tamburro, A.; Tascau, O.; Tcaciuc, R.; Thao, N. T.; Thomas, D.; Ticona, R.; Tiffenberg, J.; Timmermans, C.; Tkaczyk, W.; Peixoto, C. J. Todero; Tome, B.; Tonachini, A.; Torres, I.; Travnicek, P.; Tripathi, A.; Tristram, G.; Tscherniakhovski, D.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Galicia, J. F. Valdes; Valino, I.; Valore, L.; van den Berg, A. M.; van Elewyck, V.; Vazquez, R. 
A.; Veberic, D.; Veiga, A.; Velarde, A.; Venters, T.; Verzi, V.; Videla, M.; Villasenor, L.; Vorobiov, S.; Voyvodic, L.; Wahlberg, H.; Wainberg, O.; Warner, D.; Watson, A. A.; Westerhoff, S.; Wieczorek, G.; Wiencke, L.; Wilczynska, B.; Wilczynski, H.; Wileman, C.; Winnick, M. G.; Wu, H.; Wundheiler, B.; Yamamoto, T.; Younk, P.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zech, A.; Zepeda, A.; Ziolkowski, M.

    Data collected by the Pierre Auger Observatory provide evidence for anisotropy in the arrival directions of the cosmic rays with the highest energies, which are correlated with the positions of relatively nearby active galactic nuclei (AGN) [Pierre Auger Collaboration, Science 318 (2007) 938]. The

  18. Alpha-1-antitrypsin deficiency in Madeira (Portugal): the highest prevalence in the world.

    Science.gov (United States)

    Spínola, Carla; Bruges-Armas, Jácome; Pereira, Conceição; Brehm, António; Spínola, Hélder

    2009-10-01

    Alpha-1-antitrypsin (AAT) deficiency is a common genetic disease which affects both lung and liver. Early diagnosis can help asymptomatic patients to adjust their lifestyle choices in order to reduce the risk of Chronic Obstructive Pulmonary Disease (COPD). The determination of this genetic deficiency prevalence in Madeira Island (Portugal) population is important to clarify susceptibility and define the relevance of performing genetic tests for AAT on individuals at risk for COPD. Two hundred samples of unrelated individuals from Madeira Island were genotyped for the two most common AAT deficiency alleles, PI*S and PI*Z, using Polymerase Chain Reaction-Mediated Site-Directed Mutagenesis. Our results show one of the highest frequencies for both mutations when compared to any already studied population in the world. In fact, PI*S mutation has the highest prevalence (18%), and PI*Z mutation (2.5%) was the third highest worldwide. The frequency of AAT deficiency genotypes in Madeira (PI*ZZ, PI*SS, and PI*SZ) is estimated to be the highest in the world: 41 per 1000. This high prevalence of AAT deficiency on Madeira Island reveals an increased genetic susceptibility to COPD and suggests a routine genetic testing for individuals at risk.
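The "41 per 1000" combined deficiency-genotype frequency can be reproduced approximately from the reported allele frequencies under a Hardy-Weinberg assumption. The equilibrium assumption is mine for illustration; it is not stated as the authors' method, and rounding of the allele frequencies explains the small difference from the published figure.

```python
# Hardy-Weinberg sketch of the combined AAT-deficiency genotype frequency
# from the Madeira allele frequencies reported above (illustrative only).
p_s = 0.18    # PI*S allele frequency (18%)
p_z = 0.025   # PI*Z allele frequency (2.5%)

f_ss = p_s ** 2           # PI*SS homozygotes
f_zz = p_z ** 2           # PI*ZZ homozygotes
f_sz = 2 * p_s * p_z      # PI*SZ compound heterozygotes
deficient = f_ss + f_zz + f_sz

print(round(deficient * 1000))  # ~42 per 1000, close to the reported 41
```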

  19. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  20. Highest cited papers published in Neurology India: An analysis for the years 1993-2014.

    Science.gov (United States)

    Pandey, Paritosh; Subeikshanan, V; Madhugiri, Venkatesh S

    2016-01-01

    The highest cited papers published in a journal provide a snapshot of the clinical practice and research in that specialty and/or region. The aim of this study was to determine the highest cited papers published in Neurology India and analyze their attributes. This study was a citation analysis of all papers published in Neurology India since online archiving commenced in 1993. All papers published in Neurology India between the years 1993-2014 were listed. The number of times each paper had been cited up till the time of performing this study was determined by performing a Google Scholar search. Published papers were then ranked on the basis of total times cited since publication and the annual citation rate. Statistical Techniques: Simple counts and percentages were used to report most results. The mean citations received by papers in various categories were compared using the Student's t-test or a one-way analysis of variance, as appropriate. All analyses were carried out on SAS University Edition (SAS/STAT®, SAS Institute Inc, NC, USA) and graphs were generated on MS Excel 2016. The top papers on the total citations and annual citation rate rank lists pertained to basic neuroscience research. The highest cited paper overall had received 139 citations. About a quarter of the papers published had never been cited at all. The major themes represented were vascular diseases and infections. The highest cited papers reflect the diseases that are of major concern in India. Certain domains such as trauma, allied neurosciences, and basic neuroscience research were underrepresented.
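The two rank lists the study describes (total citations since publication, and annual citation rate) are straightforward to compute. The 139-citation figure for the top paper comes from the abstract; the other records, titles, and years below are invented placeholders, not actual Neurology India articles.

```python
# Rank hypothetical papers by total citations and by annual citation rate.
papers = [
    {"title": "A", "year": 1998, "citations": 139},  # 139 matches the abstract's top paper
    {"title": "B", "year": 2010, "citations": 90},   # placeholder
    {"title": "C", "year": 2014, "citations": 40},   # placeholder
]
STUDY_YEAR = 2016  # assumed endpoint of the citation window

for p in papers:
    # Annual citation rate normalizes for time since publication,
    # so recent well-cited papers can outrank older ones.
    p["rate"] = p["citations"] / (STUDY_YEAR - p["year"])

by_total = sorted(papers, key=lambda p: p["citations"], reverse=True)
by_rate = sorted(papers, key=lambda p: p["rate"], reverse=True)
print([p["title"] for p in by_total], [p["title"] for p in by_rate])
```

Note how the two orderings differ: the oldest paper leads on total citations while the newest leads on annual rate, which is why the study reports both lists.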

  1. ATLAS event at 13 TeV - Highest mass dijets resonance event in 2015 data

    CERN Multimedia

    ATLAS Collaboration

    2015-01-01

    The highest-mass, central dijet event passing the dijet resonance selection collected in 2015 (Event 1273922482, Run 280673): the two central high-pT jets have an invariant mass of 6.9 TeV, the two leading jets have a pT of 3.2 TeV. The missing transverse momentum in this event is 46 GeV.

  2. ATLAS event at 13 TeV - Highest mass dijets angular event in 2015 data

    CERN Multimedia

    ATLAS Collaboration

    2015-01-01

    The highest-mass dijet event passing the angular selection collected in 2015 (Event 478442529, Run 280464): the two central high-pT jets have an invariant mass of 7.9 TeV, the three leading jets have a pT of 1.99, 1.86 and 0.74 TeV respectively. The missing transverse momentum in this event is 46 GeV.

  3. Eyebrow hairs from actinic keratosis patients harbor the highest number of cutaneous human papillomaviruses.

    Science.gov (United States)

    Schneider, Ines; Lehmann, Mandy D; Kogosov, Vlada; Stockfleth, Eggert; Nindl, Ingo

    2013-04-24

    Cutaneous human papillomavirus (HPV) infections seem to be associated with the onset of actinic keratosis (AK). This study compares the presence of cutaneous HPV types in eyebrow hairs to those in tissues of normal skin and skin lesions of 75 immunocompetent AK patients. Biopsies from AK lesions, normal skin and plucked eyebrow hairs were collected from each patient. DNA from these specimens was tested for the presence of 28 cutaneous HPV (betaPV and gammaPV) by a PCR based method. The highest HPV prevalence was detected in 84% of the eyebrow hairs (63/75, median 6 types) compared to 47% of AK lesions (35/75, median 3 types) (p < 0.001) and 37% of normal skin (28/75, median 4 types) (p < 0.001), respectively. A total of 228 HPV infections were found in eyebrow hairs compared to only 92 HPV infections in AK and 69 in normal skin. In all three specimens HPV20, HPV23 and/or HPV37 were the most prevalent types. The highest number of multiple-type HPV positive specimens was found in 76% of the eyebrow hairs compared to 60% in AK and 57% in normal skin. The concordance of at least one HPV type in virus positive specimens was 81% (three specimens) and 88-93% for all three combinations of two specimens. Thus, eyebrow hairs revealed the highest number of cutaneous HPV infections, are easy to collect and are an appropriate screening tool in order to identify a possible association of HPV and AK.

  4. 18 CFR 301.5 - Changes in Average System Cost methodology.

    Science.gov (United States)

    2010-04-01

    ... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE... customers, or from three-quarters of Bonneville's direct-service industrial customers may initiate a...

  5. Operating experience from Swedish nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-06-01

    The total production of electricity from Swedish nuclear power plants was 70.5 TWh during 1998, which is the second highest yearly production ever. Production losses due to low demand totaled 5.1 TWh combined for all twelve units, and production losses due to coastdown operation totaled an additional 0.5 TWh. The reason for this low power demand was a very good supply of water to the hydropower system. Hydroelectric power production was 73.6 TWh, an increase of roughly 5 TWh since 1997. Hence, the hydroelectric power production substantially exceeded the 64 TWh expected during a normal year, i.e. a year with average rainfall. Remaining production sources, mainly fossil-fuel electricity production combined with district heating, contributed 10 TWh. The total electricity production was 154.2 TWh, the highest yearly production ever. The total electricity consumption including transmission losses was 143.5 TWh. This is also the highest consumption ever and an increase of one percent compared to 1997. The preliminary net result of the electric power trade shows a net export of 10.7 TWh. The figures above are calculated from the preliminary production results. A comprehensive report on electric power supply and consumption in Sweden is given in the 1998 Annual Report from the Swedish Power Association. Besides Oskarshamn 1, all plants have periodically been operated in load-following mode, mostly because of the abundant supply of hydropower. The energy availability for the three boiling water reactors at Forsmark averaged 93.3 % and for the three pressurized water reactors at Ringhals 91.0 %; both figures are the highest ever noted. In the section 'Special Reports', three events of importance to safety that occurred during 1998 are reported. The events were all rated as level 1 according to the International Nuclear Event Scale (INES). Figs, tabs. Also available in Swedish.
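As a quick consistency check on the figures in this record: the production numbers are taken from the abstract, while the balance identity (production minus consumption equals net export) is my framing, not the report's.

```python
# 1998 Swedish electricity balance, in TWh, figures from the record above.
nuclear = 70.5
hydro = 73.6
other = 10.0          # fossil fuel + district heating cogeneration (approximate, per the text)
total_stated = 154.2
consumption = 143.5   # including transmission losses

production = nuclear + hydro + other       # 154.1, matches the stated 154.2 up to rounding
net_export = total_stated - consumption    # 10.7 TWh, as stated in the record
print(round(production, 1), round(net_export, 1))
```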

  6. Operating experience from Swedish nuclear power plants

    International Nuclear Information System (INIS)

    1999-01-01

    The total production of electricity from Swedish nuclear power plants was 70.5 TWh during 1998, which is the second highest yearly production ever. Production losses due to low demand totaled 5.1 TWh combined for all twelve units, and production losses due to coastdown operation totaled an additional 0.5 TWh. The reason for this low power demand was a very good supply of water to the hydropower system. Hydroelectric power production was 73.6 TWh, an increase of roughly 5 TWh since 1997. Hence, the hydroelectric power production substantially exceeded the 64 TWh expected during a normal year, i.e. a year with average rainfall. Remaining production sources, mainly fossil-fuel electricity production combined with district heating, contributed 10 TWh. The total electricity production was 154.2 TWh, the highest yearly production ever. The total electricity consumption including transmission losses was 143.5 TWh. This is also the highest consumption ever and an increase of one percent compared to 1997. The preliminary net result of the electric power trade shows a net export of 10.7 TWh. The figures above are calculated from the preliminary production results. A comprehensive report on electric power supply and consumption in Sweden is given in the 1998 Annual Report from the Swedish Power Association. Besides Oskarshamn 1, all plants have periodically been operated in load-following mode, mostly because of the abundant supply of hydropower. The energy availability for the three boiling water reactors at Forsmark averaged 93.3 % and for the three pressurized water reactors at Ringhals 91.0 %; both figures are the highest ever noted. In the section 'Special Reports', three events of importance to safety that occurred during 1998 are reported. The events were all rated as level 1 according to the International Nuclear Event Scale (INES).

  7. Eyebrow hairs from actinic keratosis patients harbor the highest number of cutaneous human papillomaviruses

    Science.gov (United States)

    2013-01-01

    Background Cutaneous human papillomavirus (HPV) infections seem to be associated with the onset of actinic keratosis (AK). This study compares the presence of cutaneous HPV types in eyebrow hairs to those in tissues of normal skin and skin lesions of 75 immunocompetent AK patients. Methods Biopsies from AK lesions, normal skin and plucked eyebrow hairs were collected from each patient. DNA from these specimens was tested for the presence of 28 cutaneous HPV (betaPV and gammaPV) by a PCR based method. Results The highest number of HPV prevalence was detected in 84% of the eyebrow hairs (63/75, median 6 types) compared to 47% of AK lesions (35/75, median 3 types) (p< 0.001) and 37% of normal skin (28/75, median 4 types) (p< 0.001), respectively. A total of 228 HPV infections were found in eyebrow hairs compared to only 92 HPV infections in AK and 69 in normal skin. In all three specimens HPV20, HPV23 and/or HPV37 were the most prevalent types. The highest number of multiple types of HPV positive specimens was found in 76% of the eyebrow hairs compared to 60% in AK and 57% in normal skin. The concordance of at least one HPV type in virus positive specimens was 81% (three specimens) and 88-93% of all three combinations with two specimens. Conclusions Thus, eyebrow hairs revealed the highest number of cutaneous HPV infections, are easy to collect and are an appropriate screening tool in order to identify a possible association of HPV and AK. PMID:23618013

  8. Lateral dispersion coefficients as functions of averaging time

    International Nuclear Information System (INIS)

    Sheih, C.M.

    1980-01-01

    Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times < 250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations at other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are not for deriving a set of specific criteria but for demonstrating the need to discriminate various processes in studies of plume dispersion.
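The Turner-style averaging-time adjustment mentioned above has the power-law form sketched below. The functional form follows the abstract's description; the exponent value 0.2 is an assumed illustrative choice, not a number taken from the paper, and the abstract's point is precisely that this formula is only valid over a limited range of averaging and travel times.

```python
# Power-law adjustment of a measured concentration from a 15-minute average
# to another averaging time. Exponent p = 0.2 is an assumed example value.
def adjust_concentration(c_15min, t_minutes, p=0.2):
    """Return the estimated average concentration over t_minutes,
    given the 15-minute average c_15min."""
    return c_15min * (15.0 / t_minutes) ** p

print(adjust_concentration(100.0, 15.0))            # identical at the base averaging time
print(round(adjust_concentration(100.0, 60.0), 1))  # longer averaging smooths the peak down
```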

  9. 17-Year-Old Boy with Renal Failure and the Highest Reported Creatinine in Pediatric Literature

    Directory of Open Access Journals (Sweden)

    Vimal Master Sankar Raj

    2015-01-01

    Full Text Available The prevalence of chronic kidney disease (CKD is on the rise and constitutes a major health burden across the world. Clinical presentations in early CKD are usually subtle. Awareness of the risk factors for CKD is important for early diagnosis and treatment to slow the progression of disease. We present a case report of a 17-year-old African American male who presented in a life threatening hypertensive emergency with renal failure and the highest reported serum creatinine in a pediatric patient. A brief discussion on CKD criteria, complications, and potential red flags for screening strategies is provided.

  10. Observations of the highest energy gamma-rays from gamma-ray bursts

    International Nuclear Information System (INIS)

    Dingus, Brenda L.

    2001-01-01

    EGRET has extended the highest energy observations of gamma-ray bursts to GeV gamma rays. Such high energies imply the fireball that is radiating the gamma-rays has a bulk Lorentz factor of several hundred. However, EGRET only detected a few gamma-ray bursts. GLAST will likely detect several hundred bursts and may extend the maximum energy to a few 100 GeV. Meanwhile new ground based detectors with sensitivity to gamma-ray bursts are beginning operation, and one recently reported evidence for TeV emission from a burst

  11. Addressing the Highest Risk: Environmental Programs at Los Alamos National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Forbes, Elaine E [Los Alamos National Laboratory

    2012-06-08

    Report topics: Current status of cleanup; Shift in priorities to address highest risk; Removal of above-ground waste; and Continued focus on protecting water resources. Partnership between the National Nuclear Security Administration's Los Alamos Site Office, the DOE Carlsbad Field Office, the New Mexico Environment Department, and contractor staff has enabled unprecedented cleanup progress. Progress on the TRU campaign is well ahead of plan. To date, 130 shipments have been completed (vs. 104 planned); 483 cubic meters of above-ground waste have been shipped (vs. 277 planned); and 11,249 PE Ci of material at risk have been removed (vs. 9,411 planned).

  12. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  13. Nuclear power: 2004 world report - evaluation

    International Nuclear Information System (INIS)

    Anon.

    2005-01-01

    Last year, 2004, 441 nuclear power plants were available for power supply in 31 countries of the world. Nuclear generating capacity attained its highest level so far at an aggregate gross power of 385,854 MWe and an aggregate net power of 366,682 MWe, respectively. Nine different reactor lines are operated in commercial nuclear power plants. Light water reactors (PWR and BWR) again are in the lead with 362 plants. At year's end, 22 nuclear power plants with an aggregate gross power of 18,553 MWe and an aggregate net power, respectively, of 17,591 MWe were under construction in nine countries. Of these, twelve are light water reactors, nine are CANDU-type reactors, and one is a fast breeder reactor. So far, 104 commercial reactors with powers in excess of 5 MWe have been decommissioned in eighteen countries, most of them low-power prototype plants. 228 nuclear power plants of those in operation, i.e. slightly more than half, were commissioned in the 1980es. Nuclear power plant availabilities in terms of capacity and time again reached record levels. Capacity availability was 84.30%, availability in terms of time, 85.60%. The four nuclear power plants in Finland continue to be world champions in this respect with a cumulated average capacity availability of 90.30%. (orig.)

  14. DSCOVR Magnetometer Level 2 One Minute Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data

  15. DSCOVR Magnetometer Level 2 One Second Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data

  16. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.
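A schematic form of the spacetime average used above as the singularity-strength measure. This is my reconstruction of the standard definition of a 4-volume average; the paper's precise integration domain and measure may differ:

```latex
% Spacetime average of a scalar f over a 4-volume extending to the singularity time t_s:
\langle f \rangle \;=\;
\frac{\displaystyle\int_0^{t_s}\!\!\int_V f\,\sqrt{-g}\;d^3x\,dt}
     {\displaystyle\int_0^{t_s}\!\!\int_V \sqrt{-g}\;d^3x\,dt}
% Big-rip (type I): average diverges.  Big-bang: average is zero.
% Sudden future (type II) and w-singularities (type V): average is finite.
```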

  17. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  18. The Mass Elevation Effect of the Central Andes and Its Implications for the Southern Hemisphere's Highest Treeline

    Directory of Open Access Journals (Sweden)

    Wenhui He

    2016-05-01

    Full Text Available One of the highest treelines in the world is at 4810 m above sea level on the Sajama Volcano in the central Andes. The climatological cause of that exceptionally high treeline position is still unclear. Although it has been suggested that the mass elevation effect (MEE explains the upward shift of treelines in the Altiplano region, the magnitude of MEE has not yet been quantified for that region. This paper defines MEE as the air temperature difference in summer at the same elevation between the inner mountains/plateaus (Altiplano and the free atmosphere above the adjacent lowlands of the Andean Cordillera. The Altiplano air temperature was obtained from the Global Historical Climatology Network-Monthly temperature database, and the air temperature above the adjacent lowlands was interpolated based on the National Center for Environmental Prediction/National Center for Atmospheric Research Reanalysis 1 data set. We analyzed the mean air temperature differences for January, July, and the warm months from October to April. The air temperature was mostly higher on the Altiplano than over the neighboring lowlands at the same altitude. The air temperature difference increased from the outer Andean east-facing slope to the interior of the Altiplano in summer, and it increased from high latitudes to low latitudes in winter. The mean air temperature in the Altiplano in summer is approximately 5 K higher than it is above the adjacent lowlands at the same mean elevation, averaging about 3700 m above sea level. This upward shift of isotherms in the inner part of the Cordillera enables the treeline to climb to 4810 m, with shrub-size trees reaching even higher. Therefore, the MEE explains the occurrence of one of the world’s highest treelines in the central Andes.
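The MEE statistic described above (summer air-temperature difference at equal elevation between the plateau and the free atmosphere over the adjacent lowlands) reduces to a mean of paired differences. The station values below are invented for illustration; they are not GHCN-Monthly or NCEP/NCAR reanalysis data.

```python
# Paired summer-mean air temperatures (deg C) at the same elevation (~3700 m):
# plateau stations vs. the free atmosphere above the adjacent lowlands.
# Values are illustrative placeholders chosen near the ~5 K effect reported.
plateau    = [9.1, 8.7, 9.4, 8.9]
free_atmos = [4.0, 3.8, 4.3, 4.1]

# Mass elevation effect = mean of the pairwise temperature differences.
mee = sum(p - f for p, f in zip(plateau, free_atmos)) / len(plateau)
print(round(mee, 1))  # about 5 K, the magnitude reported for the Altiplano
```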

  19. Automatic orbital GTAW welding: Highest quality welds for tomorrow's high-performance systems

    Science.gov (United States)

    Henon, B. K.

    1985-01-01

Automatic orbital gas tungsten arc welding (GTAW), or TIG welding, is certain to play an increasingly prominent role in tomorrow's technology. The welds are of the highest quality, and the repeatability of automatic welding is vastly superior to that of manual welding. Since less heat is applied to the weld during automatic welding than during manual welding, there is less change in the metallurgical properties of the parent material. The accurate control and the cleanliness of the automatic GTAW process make it highly suitable for welding the more exotic and expensive materials now widely used in the aerospace and hydrospace industries. Titanium, stainless steel, Inconel, and Incoloy, as well as aluminum, can all be welded to the highest quality specifications automatically. Automatic orbital GTAW equipment is available for the fusion butt welding of tube-to-tube joints as well as tube-to-autobuttweld fittings. The same equipment can also be used for the fusion butt welding of pipe up to 6 inches in diameter with a wall thickness of up to 0.154 inches.

  20. Optimasi Penggunaan Lahan Kosong di Kecamatan Baturiti Untuk Properti Komersial Dengan Prinsip Highest and Best Use

    Directory of Open Access Journals (Sweden)

    Made Darmawan Saputra Mahardika

    2013-09-01

Full Text Available Kecamatan Baturiti is the only district in Tabanan Regency developing an agrotourism economy, thanks to its strategic location close to several well-known tourist attractions. Given this location, commercial property development offers high potential returns for investors who own vacant land in the district. These conditions create high demand for land even as the supply of land keeps shrinking, so commercial property development in Kecamatan Baturiti needs to be optimized to give investors maximum returns. Investors who want to build there therefore need an analysis of the alternatives for using vacant land. The plot analysed is an undeveloped parcel of 22,175 m2 in Kecamatan Baturiti, Tabanan Regency. The method used to find the commercial building alternative with the highest market value is Highest and Best Use (HBU). With this method, the landowner can identify the best alternative that is legally permitted, physically possible, financially feasible, and maximally productive. The result of the Highest and Best Use analysis is a mixed-use alternative, a hotel with a souvenir shop, which yields the highest land value among the alternatives considered: Rp 7,950,714.60 per m2.

  1. A novel method to predict the highest hardness of plasma sprayed coating without micro-defects

    Science.gov (United States)

    Zhuo, Yukun; Ye, Fuxing; Wang, Feng

    2018-04-01

The plasma sprayed coatings are stacked by splats, which are generally regarded as the elementary units of a coating. Many researchers have focused on the morphology and formation mechanism of splats. In this paper, however, a novel method is proposed to predict the highest hardness of a plasma sprayed coating without micro-defects, based on the nanohardness of individual splats. The effectiveness of this method was examined by experiments. Firstly, the microstructure of splats and coating, together with the 3D topography of the splats, was observed by SEM (SU1510) and video microscope (VHX-2000). Secondly, the nanohardness of splats was evaluated by nanoindentation (NHT) and compared with the microhardness of the coating measured by a microhardness tester (HV-1000A). The results show that the nanohardness of splats with diameters of 70 μm, 100 μm and 140 μm was in the range of 11~12 GPa, while the microhardness of the coating was in the range of 8~9 GPa. Because the splats had no micro-defects such as pores and cracks in the nano-zone probed by nanoindentation, the nanohardness of the splats can be used to predict the highest hardness of a coating without micro-defects. This method indicates the maximum achievable hardness of a sprayed coating and will reduce the number of tests needed to obtain a high-hardness coating with better wear resistance.

  2. MAIN STAGES SCIENTIFIC AND PRODUCTION MASTERING THE TERRITORY AVERAGE URAL

    Directory of Open Access Journals (Sweden)

    V.S. Bochko

    2006-09-01

Full Text Available The article considers the formation of the Average Ural (the Middle Urals) as an industrial territory, on the basis of its scientific study and productive development. It shows that Russian and foreign scientists studied the resources of the Urals and the particularities of the vital activity of its population in the XVIII-XIX centuries. It is noted that in the XX century there was a transition to a systematic organizational-economic study of the productive forces, society and nature of the Average Ural. More attention is now directed to the new problems of the region and to the need for their scientific solution.

  3. High-Average, High-Peak Current Injector Design

    CERN Document Server

    Biedron, S G; Virgo, M

    2005-01-01

There is increasing interest in high-average-power (>100 kW), μm-range FELs. These machines require high peak current (~1 kA), modest transverse emittance, and beam energies of ~100 MeV. High average currents (~1 A) place additional constraints on the design of the injector. We present a design for an injector intended to produce the required peak currents at the injector, eliminating the need for magnetic compression within the linac. This reduces the potential for beam quality degradation due to CSR and space charge effects within magnetic chicanes.

  4. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

The class of moving average models offers a flexible modeling framework for Gaussian random fields, with many well known models, such as the Matérn covariance family and the Gaussian covariance, falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...
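The moving-average (kernel smoothing) construction described above can be illustrated with a short simulation. The power-type kernel below is a plausible stand-in for the paper's one-parameter kernel, not its exact form, and all parameter names are illustrative:

```python
import numpy as np

def moving_average_field(n=128, alpha=1.5, ell=10.0, seed=0):
    """Simulate a stationary Gaussian random field on an n x n grid as a
    moving average (kernel smoothing) of white noise, Z = k * W, via FFT
    convolution.  The power-type kernel k(r) = (1 + (r/ell)^2)^(-alpha)
    is an illustrative choice, not the paper's exact one-parameter kernel."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((n, n))
    # Build the kernel on the same grid, centred, then move it to the origin
    x = np.arange(n) - n // 2
    r = np.hypot(*np.meshgrid(x, x))
    kernel = (1.0 + (r / ell) ** 2) ** (-alpha)
    kernel /= np.sqrt(np.sum(kernel ** 2))  # normalise so Var(Z) = 1
    # Circular convolution of the noise with the kernel
    field = np.fft.ifft2(np.fft.fft2(white) * np.fft.fft2(np.fft.ifftshift(kernel)))
    return field.real

field = moving_average_field()
```

Any positive kernel can be dropped in here; the choice of kernel fixes the covariance function of the resulting field.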

  5. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with much fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which
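The Monte Carlo idea described above can be sketched in a toy form: a harmonic pseudo-energy pulls a C-alpha trace toward the averaged coordinates while a second harmonic term keeps consecutive C-alpha distances physical. All names, force constants and the temperature are illustrative choices, not the paper's values:

```python
import numpy as np

def refine_toward_average(start, target, bond_len=3.8, k_avg=1.0, k_bond=10.0,
                          temp=0.05, steps=10000, step_size=0.05, seed=0):
    """Metropolis Monte Carlo that drives a C-alpha trace toward the
    'averaged structure' via a harmonic pseudo-energy, while a second
    harmonic term keeps consecutive C-alpha distances near 3.8 A, avoiding
    the unphysical bond lengths that plain coordinate averaging produces."""
    rng = np.random.default_rng(seed)
    coords = start.copy()

    def energy(c):
        e_avg = k_avg * np.sum((c - target) ** 2)        # pull toward average
        d = np.linalg.norm(np.diff(c, axis=0), axis=1)   # consecutive distances
        return e_avg + k_bond * np.sum((d - bond_len) ** 2)

    e = energy(coords)
    for _ in range(steps):
        trial = coords.copy()
        trial[rng.integers(len(coords))] += rng.normal(scale=step_size, size=3)
        e_trial = energy(trial)
        if e_trial < e or rng.random() < np.exp((e - e_trial) / temp):
            coords, e = trial, e_trial                   # Metropolis acceptance
    return coords

# Straight 10-residue target chain and a noisy starting structure
target = np.arange(10)[:, None] * np.array([3.8, 0.0, 0.0])
start = target + np.random.default_rng(1).normal(scale=0.8, size=target.shape)
refined = refine_toward_average(start, target)
```

The refined chain ends up close to the target coordinates but with near-ideal bond lengths, which is the point of refining rather than averaging directly.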

  6. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...

  7. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  8. Analisa Highest and Best Use Pada Lahan Kosong Di Jemur Gayungan II Surabaya

    Directory of Open Access Journals (Sweden)

    Finda Virgitta Faradiany

    2014-09-01

Full Text Available The rapid growth of the property business in Surabaya has pushed demand for land ever higher. Facts on the ground, however, show the opposite: plots are still left vacant and unused by their owners. Such conditions call for efficient, optimized land use, developing a commercial property that benefits both the owner and the surrounding area. Plot "X", covering 1786 m2 on Jl. Jemur Gayungan II, is a vacant plot near an office district with the potential to be developed into commercial property. The value of plot "X" depends on its use. The valuation method applied is Highest and Best Use (HBU) analysis, which seeks the use that is legally permitted, physically possible, financially feasible, and maximally productive. The study found that the alternative producing the highest land value and maximum productivity is a hotel, giving a land value of Rp 9,722,718/m2 with productivity increased by 486%.

  9. Failure of ETV in patients with the highest ETV success scores.

    Science.gov (United States)

    Gianaris, Thomas J; Nazar, Ryan; Middlebrook, Emily; Gonda, David D; Jea, Andrew; Fulkerson, Daniel H

    2017-09-01

    OBJECTIVE Endoscopic third ventriculostomy (ETV) is a surgical alternative to placing a CSF shunt in certain patients with hydrocephalus. The ETV Success Score (ETVSS) is a reliable, simple method to estimate the success of the procedure by 6 months of postoperative follow-up. The highest score is 90, estimating a 90% chance of the ETV effectively treating hydrocephalus without requiring a shunt. Treatment with ETV fails in certain patients, despite their being the theoretically best candidates for the procedure. In this study the authors attempted to identify factors that further predicted success in patients with the highest ETVSSs. METHODS A retrospective review was performed of all patients treated with ETV at 3 institutions. Demographic, radiological, and clinical data were recorded. All patients by definition were older than 1 year, had obstructive hydrocephalus, and did not have a prior shunt. Failure of ETV was defined as the need for a shunt by 1 year. The ETV was considered a success if the patient did not require another surgery (either shunt placement or a repeat endoscopic procedure) by 1 year. A statistical analysis was performed to identify factors associated with success or failure. RESULTS Fifty-nine patients met the entry criteria for the study. Eleven patients (18.6%) required further surgery by 1 year. All of these patients received a shunt. The presenting symptom of lethargy statistically correlated with success (p = 0.0126, odds ratio [OR] = 0.072). The preoperative radiological finding of transependymal flow (p = 0.0375, OR 0.158) correlated with success. A postoperative larger maximum width of the third ventricle correlated with failure (p = 0.0265). CONCLUSIONS The preoperative findings of lethargy and transependymal flow statistically correlated with success. This suggests that the best candidates for ETV are those with a relatively acute elevation of intracranial pressure. Cases without these findings may represent the failures in this

  10. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended
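The bounce-averaging operation such a code performs can be sketched numerically (a minimal illustration, not the LLNL code itself): weight a quantity along the field line by the dwell time 1/v_parallel between the turning points. For the parabolic well and parameters below, the analytic answer is ⟨B⟩ = 9/8:

```python
import numpy as np

def bounce_average(Q, B, s, E=1.0, mu=0.8):
    """Bounce-average a quantity Q(s) along a field line with mirror field
    B(s), for a particle of energy E and magnetic moment mu, in units where
    v_parallel = sqrt(2*(E - mu*B)).  Each point is weighted by the dwell
    time 1/v_parallel between the turning points:
        <Q> = int(Q/v_par ds) / int(1/v_par ds)."""
    v_par2 = 2.0 * (E - mu * B)
    trapped = v_par2 > 0                  # region between the turning points
    w = 1.0 / np.sqrt(v_par2[trapped])    # dwell time per unit length
    return float(np.sum(Q[trapped] * w) / np.sum(w))  # uniform grid: ds cancels

# Parabolic mirror well; bounce-average the field strength itself
s = np.linspace(-2.0, 2.0, 4001)
B = 1.0 + s ** 2
avg_B = bounce_average(B, B, s)  # analytic value for these parameters: 9/8
```

The same weighting applied to a collision operator or source term, point by point along the field line, is what "bounce-averaging the collision operator and sources" means in the abstract.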

  11. The part of the solar spectrum with the highest influence on the formation of SOA in the continental boundary layer

    Directory of Open Access Journals (Sweden)

    M. Boy

    2002-01-01

Full Text Available The relationship between nucleation events and spectral solar irradiance was analysed using two years of data collected at the Station for Measuring Forest Ecosystem-Atmosphere Relations (SMEAR II) in Hyytiälä, Finland. We analysed the data in two different ways. In the first step we calculated ten-nanometer average values from the irradiance measurements between 280 and 580 nm and explored whether any particular wavelength groups showed higher values on event days compared to a spectral reference curve for all the days over two years, or to reference curves for each month. The results indicated that short-wavelength irradiance between 300 and 340 nm is higher on event days in winter (February and March) compared to the monthly reference graph, but quantitatively much smaller than in spring or summer. By taking the ratio of the average values of the different event classes to the yearly reference graph, we obtained peaks between 1.17 and 1.6 in the short-wavelength range (300--340 nm). In the next step we included number concentrations of particles between 3 and 10 nm and calculated correlation coefficients between the different wavelength groups and the particles. The results were quite similar to those obtained previously; the highest correlation coefficients were reached for the spectral irradiance groups 3--5 (300--330 nm), with average values for the single event classes around 0.6 and a nearly linear decrease of about 30% towards higher wavelength groups. Both analyses indicate quite clearly that short-wavelength irradiance between 300 and 330 or 340 nm is the most important solar spectral radiation for the formation of newly formed aerosols. Finally, we introduce a photochemical mechanism as one possible pathway by which short-wavelength irradiance can influence the formation of SOA, by calculating the production rate of excited oxygen. This mechanism shows in which way short-wavelength irradiance can influence the formation of new particles even though the

  12. Nuclear fuel management via fuel quality factor averaging

    International Nuclear Information System (INIS)

    Mingle, J.O.

    1978-01-01

The numerical procedure of prime number averaging is applied to the fuel quality factor distribution of once- and twice-burned fuel in order to evolve a fuel management scheme. The resulting fuel shuffling arrangement produces a near-optimal flat power profile under both beginning-of-life and end-of-life conditions. The procedure is easily applied, requiring only the solution of linear algebraic equations. (author)

  13. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  14. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  15. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic

  16. Should the average tax rate be marginalized?

    Czech Academy of Sciences Publication Activity Database

    Feldman, N. E.; Katuščák, Peter

    -, č. 304 (2006), s. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords : tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  17. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
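Of the families mentioned, ordered weighted averaging (OWA) is compact enough to sketch directly; this is a generic illustration of the standard definition, not code from the book:

```python
def owa(values, weights):
    """Ordered weighted averaging (OWA): sort the inputs in descending
    order, then form the weighted sum with a fixed weight vector.
    Weights (1,0,...,0) give max, (0,...,0,1) give min, and uniform
    weights give the plain arithmetic mean."""
    if len(weights) != len(values) or abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("need one weight per value, summing to 1")
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

# Weight vector biased toward the largest inputs:
result = owa([3, 1, 2], [0.5, 0.3, 0.2])  # 0.5*3 + 0.3*2 + 0.2*1
```

Because the weights attach to ranks rather than to particular inputs, OWA interpolates continuously between min, mean and max, which is what makes it useful as an aggregation function.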

  18. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  19. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  20. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We prove the model outcome with examples and simulation results using NS2 simulator.
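An iterative calculation in this spirit can be sketched as follows: flows share the link in proportion to their WFQ weights, a flow never receives more than its input rate, and surplus from underloaded flows is redistributed until the allocation is stable. This is a generic sketch of such a model, with illustrative names, not the paper's exact method:

```python
def wfq_bandwidth(link_rate, weights, input_rates, tol=1e-9):
    """Iteratively estimate the average bandwidth each flow receives from a
    WFQ scheduler.  Flows split the link in proportion to their weights;
    a flow whose input rate is below its weighted share keeps only its
    input rate, and the freed capacity is re-shared among the rest."""
    alloc = [0.0] * len(weights)
    remaining = set(range(len(weights)))
    capacity = link_rate
    while remaining:
        total_w = sum(weights[i] for i in remaining)
        share = {i: capacity * weights[i] / total_w for i in remaining}
        capped = {i for i in remaining if input_rates[i] <= share[i] + tol}
        if not capped:                      # everyone is backlogged: done
            for i in remaining:
                alloc[i] = share[i]
            break
        for i in capped:                    # underloaded flows keep input rate
            alloc[i] = input_rates[i]
            capacity -= input_rates[i]
        remaining -= capped                 # re-share the surplus
    return alloc

# 10 Mb/s link, three flows with weights 1:2:2; flow 0 offers only 1 Mb/s
alloc = wfq_bandwidth(10.0, [1, 2, 2], [1.0, 8.0, 8.0])  # [1.0, 4.5, 4.5]
```

The loop terminates because each pass either finishes or removes at least one flow, so the cost is at most quadratic in the number of flows.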

  1. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    Marinaro, A.; Scarpetta, Q.

    1984-01-01

An extension of thermo field dynamics is proposed which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion, it is pointed out that the proposed procedure for computing time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles.

  2. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy....
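The brute-force Monte-Carlo baseline that such analytical approximations aim to replace can be sketched as a plain bootstrap average (illustrative names; in the paper's setting the statistic would involve retraining a model on every resampled dataset, which is exactly the cost the analytical method avoids):

```python
import numpy as np

def bootstrap_average(data, statistic, n_resamples=2000, seed=0):
    """Monte-Carlo resampling average: draw bootstrap samples (sampling
    with replacement) and average a statistic over them.  Returns the mean
    and spread of the statistic across the resampled datasets."""
    rng = np.random.default_rng(seed)
    n = len(data)
    vals = [statistic(data[rng.integers(0, n, size=n)])
            for _ in range(n_resamples)]
    return float(np.mean(vals)), float(np.std(vals))

data = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
boot_mean, boot_spread = bootstrap_average(data, np.mean)
```

Each call to `statistic` here is cheap, but with a Gaussian-process fit in its place the 2000 retrainings dominate, which motivates computing the same average analytically.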

  3. Towards highest peak intensities for ultra-short MeV-range ion bunches

    OpenAIRE

    Simon Busold; Dennis Schumacher; Christian Brabetz; Diana Jahn; Florian Kroll; Oliver Deppert; Ulrich Schramm; Thomas E. Cowan; Abel Blažević; Vincent Bagnoud; Markus Roth

    2015-01-01

A laser-driven, multi-MeV-range ion beamline has been installed at the GSI Helmholtz center for heavy ion research. The high-power laser PHELIX drives the very short (picosecond) ion acceleration on the μm scale, with energies ranging up to 28.4 MeV for protons in a continuous spectrum. The necessary beam shaping behind the source is accomplished by applying magnetic ion lenses like solenoids and quadrupoles and a radiofrequency cavity. Based on the unique beam properties from the laser-driven so...

  4. African American Women: Surviving Breast Cancer Mortality against the Highest Odds

    Directory of Open Access Journals (Sweden)

    Shelley White-Means

    2015-12-01

    Full Text Available Among the country’s 25 largest cities, the breast cancer mortality disparity is highest in Memphis, Tennessee, where African American women are twice as likely to die from breast cancer as White women. This qualitative study of African-American breast cancer survivors explores experiences during and post treatment that contributed to their beating the high odds of mortality. Using a semi-structured interview guide, a focus group session was held in 2012 with 10 breast cancer survivors. Thematic analysis and a deductive a priori template of codes were used to analyze the data. Five main themes were identified: family history, breast/body awareness and preparedness to manage a breast cancer event, diagnosis experience and reaction to the diagnosis, family reactions, and impact on life. Prayer and family support were central to coping, and survivors voiced a cultural acceptance of racial disparities in health outcomes. They reported lack of provider sensitivity regarding pain, financial difficulties, negative responses from family/friends, and resiliency strategies for coping with physical and mental limitations. Our research suggested that a patient-centered approach of demystifying breast cancer (both in patient-provider communication and in community settings would impact how women cope with breast cancer and respond to information about its diagnosis.

  5. THE IMPACT OF FREQUENCY STANDARDS ON COHERENCE IN VLBI AT THE HIGHEST FREQUENCIES

    Energy Technology Data Exchange (ETDEWEB)

    Rioja, M.; Dodson, R. [ICRAR, University of Western Australia, Perth (Australia); Asaki, Y. [Institute of Space and Astronautical Science, 3-1-1 Yoshinodai, Chuou, Sagamihara, Kanagawa 252-5210 (Japan); Hartnett, J. [School of Physics, University of Western Australia, Perth (Australia); Tingay, S., E-mail: maria.rioja@icrar.org [ICRAR, Curtin University, Perth (Australia)

    2012-10-01

    We have carried out full imaging simulation studies to explore the impact of frequency standards in millimeter and submillimeter very long baseline interferometry (VLBI), focusing on the coherence time and sensitivity. In particular, we compare the performance of the H-maser, traditionally used in VLBI, to that of ultra-stable cryocooled sapphire oscillators over a range of observing frequencies, weather conditions, and analysis strategies. Our simulations show that at the highest frequencies, the losses induced by H-maser instabilities are comparable to those from high-quality tropospheric conditions. We find significant benefits in replacing H-masers with cryocooled sapphire oscillator based frequency references in VLBI observations at frequencies above 175 GHz in sites which have the best weather conditions; at 350 GHz we estimate a 20%-40% increase in sensitivity over that obtained when the sites have H-masers, for coherence losses of 20%-10%, respectively. Maximum benefits are to be expected by using co-located Water Vapor Radiometers for atmospheric correction. In this case, we estimate a 60%-120% increase in sensitivity over the H-maser at 350 GHz.

  6. Exchange Interactions on the Highest-Spin Reported Molecule: the Mixed-Valence Fe42 Complex

    Science.gov (United States)

    Aravena, Daniel; Venegas-Yazigi, Diego; Ruiz, Eliseo

    2016-04-01

    The finding of high-spin molecules that could behave as conventional magnets has been one of the main challenges in Molecular Magnetism. Here, the exchange interactions, present in the highest-spin molecule published in the literature, Fe42, have been analysed using theoretical methods based on Density Functional Theory. The system with a total spin value S = 45 is formed by 42 iron centres containing 18 high-spin FeIII ferromagnetically coupled and 24 diamagnetic low-spin FeII ions. The bridging ligands between the two paramagnetic centres are two cyanide ligands coordinated to the diamagnetic FeII cations. Calculations were performed using either small Fe4 or Fe3 models or the whole Fe42 complex, showing the presence of two different ferromagnetic couplings between the paramagnetic FeIII centres. Finally, Quantum Monte Carlo simulations for the whole system were carried out in order to compare the experimental and simulated magnetic susceptibility curves from the calculated exchange coupling constants with the experimental one. This comparison allows for the evaluation of the accuracy of different exchange-correlation functionals to reproduce such magnetic properties.

  7. Analisis Highest and Best Use (HBU pada Lahan Jl. Gubeng Raya No. 54 Surabaya

    Directory of Open Access Journals (Sweden)

    Akmaluddin Akmaluddin

    2013-03-01

Full Text Available Population growth and rising economic activity in large cities such as Surabaya run up against a limited supply of land. A property built on a plot should provide the maximum, most efficient benefit, so that its results support the development of the area. It is therefore necessary to determine the most probable and permitted use of a vacant or already-built plot: the use that is physically possible, supported or justified by regulation, financially feasible, and yields the highest value. This study applies Highest and Best Use (HBU) analysis to a 1,150 m2 plot at Jl. Gubeng Raya No. 54 Surabaya, on which a hotel is planned. The plot has the potential to be developed into commercial property such as a hotel, apartments, offices, or shops. The analysis reviews the physical, legal, and financial aspects and the maximum productivity of each alternative. The result is that a hotel is the commercial alternative representing the highest and best use of the plot, with a land value of Rp 67,069,980.31/m2.

  8. Analisa Alternatif Revitalisasi Pasar Gubeng Masjid Surabaya dengan Metode Highest And Best Use

    Directory of Open Access Journals (Sweden)

    Marsha Swalia Mustika

    2017-01-01

Full Text Available In this era of globalization, many modern markets have been built with every advantage and facility. Their rise has pushed traditional markets into a corner, and Pasar Gubeng Masjid Surabaya is no exception. The market's strategic location, close to offices, hotels, shopping centres, and the railway station, nonetheless gives it the potential to be developed into a property that yields the highest and best land value. A Highest and Best Use (HBU) analysis is therefore needed to inform the best investment. The HBU analysis applies four criteria: physically possible, legally permitted, financially feasible, and maximally productive; the alternative with maximum productivity defines the highest and best land value for the Pasar Gubeng Masjid site. The study found that the alternative producing the highest land value and maximum productivity is a mixed-use development combining the market with a shopping centre, giving a land value of Rp 46,946,524/m2 with productivity increased by 312%.

  9. Transitional care for the highest risk patients: findings of a randomised control study

    Directory of Open Access Journals (Sweden)

    Kheng Hock Lee

    2015-10-01

Full Text Available Background: Interventions to prevent readmissions of patients at highest risk have not been rigorously evaluated. We conducted a randomised controlled trial to determine if a post-discharge transitional care programme can reduce readmissions of such patients in Singapore. Methods: We randomised 840 patients with two or more unscheduled readmissions in the prior 90 days and a Length of stay, Acuity of admission, Comorbidity of patient, Emergency department utilisation (LACE) score ≥10 to the intervention programme (n = 419) or control (n = 421). Patients allocated to the intervention group received post-discharge surveillance by a multidisciplinary integrated care team and early review in the clinic. The primary outcome was the proportion of patients with at least one unscheduled readmission within 30 days after discharge. Results: We found no statistically significant reduction in readmissions or emergency department visits in patients in the intervention group compared to usual care. However, patients in the intervention group reported greater patient satisfaction (p < 0.001). Conclusion: Any beneficial effect of interventions initiated after discharge is small for high-risk patients with multiple comorbidity and complex care needs. Future transitional care interventions should focus on providing the entire cycle of care for such patients, starting from the time of admission to final transition to the primary care setting. Trial Registration: Clinicaltrials.gov, no NCT02325752

  11. Measurement of radon concentration in dwellings in the region of highest lung cancer incidence in India

    International Nuclear Information System (INIS)

    Zoliana, B.; Rohmingliana, P.C.; Sahoo, B.K.; Mayya, Y.S.

    2015-01-01

    Monitoring radon exhalation from soil and its indoor concentration is helpful in many investigations, such as health risk assessment, since radiation damage to bronchial cells makes radon the second leading cause of lung cancer after smoking. The fact that Aizawl district, Mizoram, India has the highest age-adjusted lung cancer incidence rate (AAR) among both males and females in India, as reported by the Population Based Cancer Registry Report 2008, indicates the need to quantify radon and the anomalies attached to it. Radon concentration was measured inside dwellings in Aizawl district, Mizoram. A time-integrated method of measurement was employed, using solid state nuclear track detectors (SSNTDs, LR-115 films) kept in twin-cup dosimeters to measure the concentrations of radon and thoron. The dosimeters were suspended in bedrooms or living rooms of selected dwellings. They were deployed for periods of about 120 days at a time in 63 houses, which were selected according to location: fault regions, places where fossil remains were found, and geologically unidentified regions. After the desired period of exposure, the detectors were retrieved and chemically etched, then counted using a spark counter. The recorded nuclear track densities were then converted into air concentrations of radon and thoron
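The track-density-to-concentration conversion described in this record can be sketched as below. The sensitivity (calibration) factor and the example numbers are hypothetical placeholders, not values from the study:

```python
def radon_concentration(track_density, exposure_days, sensitivity):
    """Convert an etched-film track density (tracks/cm^2) into a
    time-averaged radon concentration (Bq/m^3), given the exposure
    time in days and a detector sensitivity factor in
    tracks/cm^2 per (Bq/m^3)*day. Sensitivity value is hypothetical."""
    return track_density / (sensitivity * exposure_days)

# e.g. 120 tracks/cm^2 over a 120-day exposure with a hypothetical
# sensitivity of 0.02 tracks/cm^2 per (Bq/m^3)*day  ->  about 50 Bq/m^3
concentration = radon_concentration(120.0, 120.0, 0.02)
```

In practice the sensitivity factor is obtained by exposing the same film/dosimeter geometry in a calibration chamber of known radon concentration.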

  12. A System with a Choice of Highest-Bidder-First and FIFO Services

    Directory of Open Access Journals (Sweden)

    Tejas Bodas

    2015-02-01

    Full Text Available Service systems using a highest-bidder-first (HBF) policy have been studied in the queueing literature for various applications and in the economics literature to model corruption. Such systems have applications in modern problems like scheduling jobs in cloud computing scenarios or placing ads on web pages. However, using an HBF service is like using a spot market and may not be preferred by many users. For such users, it may be good to provide a simple scheduler, e.g., a FIFO service. Further, in some situations it may even be necessary that a free service queue operates alongside an HBF queue. Motivated by such a scenario, we propose and analyze a service system with a FIFO server and an HBF server in parallel. Arriving customers are from a heterogeneous population with different valuations of their delay costs. They strategically choose between FIFO and HBF service; if HBF is chosen, they also choose the bid value to optimize an individual cost. We characterize the Wardrop equilibrium in such a system and analyze the revenue to the server. We see that when the total capacity is fixed and shared between the FIFO and HBF servers, revenue is maximized when the FIFO capacity is non-zero. However, if the FIFO server is added to an HBF server, then the revenue decreases with increasing FIFO capacity. We also discuss the case when customers are allowed to balk.

  13. African American Women: Surviving Breast Cancer Mortality against the Highest Odds

    Science.gov (United States)

    White-Means, Shelley; Rice, Muriel; Dapremont, Jill; Davis, Barbara; Martin, Judy

    2015-01-01

    Among the country’s 25 largest cities, the breast cancer mortality disparity is highest in Memphis, Tennessee, where African American women are twice as likely to die from breast cancer as White women. This qualitative study of African-American breast cancer survivors explores experiences during and post treatment that contributed to their beating the high odds of mortality. Using a semi-structured interview guide, a focus group session was held in 2012 with 10 breast cancer survivors. Thematic analysis and a deductive a priori template of codes were used to analyze the data. Five main themes were identified: family history, breast/body awareness and preparedness to manage a breast cancer event, diagnosis experience and reaction to the diagnosis, family reactions, and impact on life. Prayer and family support were central to coping, and survivors voiced a cultural acceptance of racial disparities in health outcomes. They reported lack of provider sensitivity regarding pain, financial difficulties, negative responses from family/friends, and resiliency strategies for coping with physical and mental limitations. Our research suggested that a patient-centered approach of demystifying breast cancer (both in patient-provider communication and in community settings) would impact how women cope with breast cancer and respond to information about its diagnosis. PMID:26703655

  14. A cosmopolitan design of teacher education and a progressive orientation towards the highest good

    Directory of Open Access Journals (Sweden)

    Klas Roth

    2013-01-01

    Full Text Available In this paper I discuss a Kantian conception of cosmopolitan education. It suggests that we pursue the highest good – an object of morality – in the world together, and requires that we acknowledge the value of freedom, render ourselves both efficacious and autonomous in practice, cultivate our judgment, and unselfishly co-operate in the co-ordination and fulfilment of our morally permissible ends. Now, such an accomplishment is one of the most difficult challenges, and may not be achieved in our time, if ever. In the first part of the paper I show that we, according to Kant, have to interact with each other, and comply with the moral law in the quest of general happiness, not merely personal happiness. In the second part, I argue that a cosmopolitan design of teacher education in Kantian terms can establish moral character, even though good moral character is ultimately the outcome of free choice. Such a design can do so by optimizing the freedom of those concerned to set and pursue their morally permissible ends, and to cultivate their judgment through the use of examples. This requires, inter alia, that they be enabled, and take responsibility, to think for themselves, in the position of everyone else, and consistently; and to strengthen their virtue or self-mastery to comply, in practice, with the moral law.

  15. Oxygen pathway modeling estimates high reactive oxygen species production above the highest permanent human habitation.

    Directory of Open Access Journals (Sweden)

    Isaac Cano

    Full Text Available The production of reactive oxygen species (ROS) from the inner mitochondrial membrane is one of many fundamental processes governing the balance between health and disease. It is well known that ROS are necessary signaling molecules in gene expression, yet when expressed at high levels, ROS may cause oxidative stress and cell damage. Both hypoxia and hyperoxia may alter ROS production by changing mitochondrial Po2 (PmO2). Because PmO2 depends on the balance between O2 transport and utilization, we formulated an integrative mathematical model of O2 transport and utilization in skeletal muscle to predict conditions that cause abnormally high ROS generation. Simulations using data from healthy subjects during maximal exercise at sea level reveal little mitochondrial ROS production. However, altitude triggers high mitochondrial ROS production in muscle regions with high metabolic capacity but limited O2 delivery. This altitude roughly coincides with the highest location of permanent human habitation. Above 25,000 ft., more than 90% of exercising muscle is predicted to produce abnormally high levels of ROS, corresponding to the "death zone" in mountaineering.

  16. Medical school dropout--testing at admission versus selection by highest grades as predictors.

    Science.gov (United States)

    O'Neill, Lotte; Hartvigsen, Jan; Wallstedt, Birgitta; Korsholm, Lars; Eika, Berit

    2011-11-01

    Very few studies have reported on the effect of admission tests on medical school dropout. The main aim of this study was to evaluate the predictive validity of non-grade-based admission testing versus grade-based admission relative to subsequent dropout. This prospective cohort study followed six cohorts of medical students admitted to the medical school at the University of Southern Denmark during 2002-2007 (n=1544). Half of the students were admitted based on their prior achievement of highest grades (Strategy 1) and the other half took a composite non-grade-based admission test (Strategy 2). Educational as well as social predictor variables (doctor-parent, origin, parenthood, parents living together, parent on benefit, university-educated parents) were also examined. The outcome of interest was students' dropout status at 2 years after admission. Multivariate logistic regression analysis was used to model dropout. Strategy 2 (admission test) students had a lower relative risk for dropping out of medical school within 2 years of admission (odds ratio 0.56, 95% confidence interval 0.39-0.80). Only the admission strategy, the type of qualifying examination and the priority given to the programme on the national application forms contributed significantly to the dropout model. Social variables did not predict dropout and neither did Strategy 2 admission test scores. Selection by admission testing appeared to have an independent, protective effect on dropout in this setting. © Blackwell Publishing Ltd 2011.

  17. Highest-order optical phonon-mediated relaxation in CdTe/ZnTe quantum dots

    International Nuclear Information System (INIS)

    Masumoto, Yasuaki; Nomura, Mitsuhiro; Okuno, Tsuyoshi; Terai, Yoshikazu; Kuroda, Shinji; Takita, K.

    2003-01-01

    The highest 19th-order longitudinal optical (LO) phonon-mediated relaxation was observed in photoluminescence excitation spectra of CdTe self-assembled quantum dots grown in ZnTe. Hot excitons photoexcited highly in the ZnTe barrier layer are relaxed into the wetting-layer state by emitting multiple LO phonons of the barrier layer successively. Below the wetting-layer state, the LO phonons involved in the relaxation are transformed to those of interfacial Zn(x)Cd(1-x)Te surrounding the CdTe quantum dots. The ZnTe-like and CdTe-like LO phonons of Zn(x)Cd(1-x)Te and lastly acoustic phonons are emitted in the relaxation into the CdTe dots. The observed main relaxation is the fast relaxation directly into CdTe quantum dots and is not the relaxation through either the wetting-layer quantum well or the band bottom of the ZnTe barrier layer. This observation shows very efficient optical phonon-mediated relaxation of hot excitons excited highly in the ZnTe conduction band through not only the ZnTe extended state but also the localized state in the CdTe quantum dots, reflecting the strong exciton-LO phonon interaction of telluride compounds

  18. Which Environmental Factors Have the Highest Impact on the Performance of People Experiencing Difficulties in Capacity?

    Directory of Open Access Journals (Sweden)

    Verena Loidl

    2016-04-01

    Full Text Available Disability is understood by the World Health Organization (WHO) as the outcome of the interaction between a health condition and personal and environmental factors. Comprehensive data about environmental factors is therefore essential to understand and influence disability. We aimed to identify which environmental factors have the highest impact on the performance of people with mild, moderate and severe difficulties in capacity, who are at risk of experiencing disability to different extents, using data from a pilot study of the WHO Model Disability Survey in Cambodia and random forest regression. Hindering or facilitating aspects of places to socialize in community activities, transportation and the natural environment, as well as the use of and need for personal assistance and the use of medication on a regular basis, were the most important environmental factors across groups. Hindering or facilitating aspects of the general environment were the most relevant for persons experiencing mild levels of difficulties in capacity, while social support, attitudes of others and use of medication on a regular basis were highly relevant for the performance of persons experiencing moderate to higher levels of difficulties in capacity. Additionally, we corroborate the high importance of the use of and need for assistive devices for people with severe difficulties in capacity.

  19. How to reliably deliver narrow individual-patient error bars for optimization of pacemaker AV or VV delay using a "pick-the-highest" strategy with haemodynamic measurements.

    Science.gov (United States)

    Francis, Darrel P

    2013-03-10

    Intuitive and easily described, "pick-the-highest" is often recommended for quantitative optimization of AV and especially VV delay settings of biventricular pacemakers (BVP; cardiac resynchronization therapy, CRT). But reliable selection of the optimum setting is challenged by beat-to-beat physiological variation, which "pick-the-highest" combats by averaging multiple heartbeats. Optimization is not optimization unless the optimum is identified confidently. This document shows how to calculate how many heartbeats must be averaged to optimize reliably by pick-the-highest. Any reader, by conducting a few measurements, can calculate for locally-available methods (i) the biological scatter between replicate measurements, and (ii) the curvature of the biological response. With these, for any clinically desired precision of optimization, the necessary number of heartbeats can be calculated. To achieve 95% confidence of getting within ±Δx of the true optimum, the number of heartbeats needed is 2(scatter/curvature)²/Δx⁴ per setting. Applying published scatter/curvature values (which readers should re-evaluate locally) indicates that optimizing AV, even coarsely with a 40 ms-wide band of precision, requires many thousand beats. For VV delay, the number approaches a million. Moreover, identifying the optimum twice as precisely requires 30-fold more beats. "Pick the highest" is quick to say but slow to do. We must not expect staff to do the impossible, nor criticise them for not doing so. Nor should we assume recommendations and published protocols are well-designed. Reliable AV or VV optimization, using "pick-the-highest" on commonly recommended manual measurements, is unrealistic. Improving the time-efficiency of the optimization process to become clinically realistic may need a curve-fitting strategy instead, with all acquired data marshalled conjointly. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
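The beat-count formula quoted in this abstract reduces to a one-line calculator. The sketch below simply evaluates that formula; the scatter and curvature inputs are placeholders, since the author asks readers to measure these locally, and the units of the three arguments must be mutually consistent:

```python
def beats_per_setting(scatter, curvature, delta_x):
    """Heartbeats to average per tested setting for 95% confidence of
    landing within +/-delta_x of the true optimum, per the abstract's
    formula: 2 * (scatter / curvature)**2 / delta_x**4.
    scatter: SD between replicate measurements (placeholder value);
    curvature: second-order coefficient of the response curve;
    delta_x: half-width of the desired precision band."""
    return 2.0 * (scatter / curvature) ** 2 / delta_x ** 4
```

The fourth-power dependence on delta_x is what makes tight precision bands so expensive in heartbeats, and hence why the abstract concludes manual pick-the-highest optimization is clinically unrealistic.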

  20. Compressive sensing-based wideband capacitance measurement with a fixed sampling rate lower than the highest exciting frequency

    International Nuclear Information System (INIS)

    Xu, Lijun; Ren, Ying; Sun, Shijie; Cao, Zhang

    2016-01-01

    In this paper, an under-sampling method for wideband capacitance measurement was proposed by using the compressive sensing strategy. As the excitation signal is sparse in the frequency domain, the compressed sampling method that uses a random demodulator was adopted, which could greatly decrease the sampling rate. Besides, four switches were used to replace the multiplier in the random demodulator. As a result, not only the sampling rate can be much smaller than the signal excitation frequency, but also the circuit’s structure is simpler and its power consumption is lower. A hardware prototype was constructed to validate the method. In the prototype, an excitation voltage with a frequency up to 200 kHz was applied to a capacitance-to-voltage converter. The output signal of the converter was randomly modulated by a pseudo-random sequence through four switches. After a low-pass filter, the signal was sampled by an analog-to-digital converter at a sampling rate of 50 kHz, which was three times lower than the highest exciting frequency. The frequency and amplitude of the signal were then reconstructed to obtain the measured capacitance. Both theoretical analysis and experiments were carried out to show the feasibility of the proposed method and to evaluate the performance of the prototype, including its linearity, sensitivity, repeatability, accuracy and stability within a given measurement range. (paper)
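The mix-then-integrate front end described above can be sketched as follows. The ±1 chip sequence stands in for the four-switch modulator, and an integrate-and-dump over blocks of samples stands in for the low-pass filter plus low-rate ADC; the l1-based reconstruction stage and all rates are assumptions, not the paper's circuit:

```python
import random

def random_demodulator_frontend(signal, decim, seed=0):
    """Multiply input samples by a +/-1 pseudo-random chip sequence
    (modelling the four-switch mixer), then integrate-and-dump every
    `decim` samples (a crude low-pass filter followed by a low-rate
    ADC). Returns the low-rate samples and the chips used; the sparse
    reconstruction step is deliberately omitted."""
    rng = random.Random(seed)
    chips = [rng.choice((-1, 1)) for _ in signal]
    mixed = [s * c for s, c in zip(signal, chips)]
    lowrate = [sum(mixed[i:i + decim]) / decim
               for i in range(0, len(mixed), decim)]
    return lowrate, chips

# decim = 4 models sampling at one quarter of the input sample rate
samples, chips = random_demodulator_frontend([1.0] * 8, 4)
```

Because the chip sequence is known to the receiver, the spectrally sparse excitation can later be recovered from these low-rate samples by a compressive-sensing solver.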

  1. Solving The Longstanding Problem Of Low-Energy Nuclear Reactions At the Highest Microscopic Level - Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Quaglioni, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-09-22

    A 2011 DOE-NP Early Career Award (ECA) under Field Work Proposal (FWP) SCW1158 supported the project “Solving the Long-Standing Problem of Low-Energy Nuclear Reactions at the Highest Microscopic Level” in the five-year period from June 15, 2011 to June 14, 2016. This project, led by PI S. Quaglioni, aimed at developing a comprehensive and computationally efficient framework to arrive at a unified description of the structural properties and reactions of light nuclei in terms of constituent protons and neutrons interacting through nucleon-nucleon (NN) and three-nucleon (3N) forces. Specifically, the project had three main goals: 1) arriving at accurate predictions for fusion reactions that power stars and Earth-based fusion facilities; 2) realizing a comprehensive description of clustering and continuum effects in exotic nuclei, including light Borromean systems; and 3) achieving a fundamental understanding of the role of the 3N force in nuclear reactions and nuclei at the drip line.

  2. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
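A minimal sketch of this kind of defect-tolerant averaging is given below. The void-fraction rejection rule and per-map piston removal are simple stand-ins for the paper's richer defect detection and alignment-drift handling, not the authors' algorithm:

```python
from statistics import mean, pstdev

def robust_phase_average(maps, max_void_frac=0.2):
    """maps: list of 2-D lists of phase values; None marks a void pixel.
    Rejects any map whose void fraction exceeds max_void_frac (a crude
    large-area-defect test), subtracts each surviving map's mean
    (piston) so alignment drift does not inflate the variance estimate,
    then returns per-pixel mean and standard-deviation maps."""
    kept = []
    for m in maps:
        pix = [v for row in m for v in row]
        if pix.count(None) / len(pix) <= max_void_frac:
            piston = mean(v for v in pix if v is not None)
            kept.append([[None if v is None else v - piston
                          for v in row] for row in m])
    rows, cols = len(kept[0]), len(kept[0][0])
    avg = [[0.0] * cols for _ in range(rows)]
    std = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [m[r][c] for m in kept if m[r][c] is not None]
            avg[r][c] = mean(vals)   # per-pixel robust average
            std[r][c] = pstdev(vals) # per-pixel variability estimate
    return avg, std
```

A production version would additionally detect unwrapping artifacts (e.g. via residue counts) and fit and remove tilt as well as piston, as the paper describes.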

  3. Information ranks highest: Expectations of female adolescents with a rare genital malformation towards health care services.

    Directory of Open Access Journals (Sweden)

    Elisabeth Simoes

    Full Text Available Access to highly specialized health care services and support to meet the patient's specific needs is critical for health outcomes, especially during age-related transitions within the health care system, such as when adolescents enter adult medicine. Being affected by an orphan disease complicates the situation in several important respects. Long distances to dedicated institutions and scarcity of knowledge, even among medical doctors, may present major obstacles to proper access to health care services and health chances. This study is part of the BMBF-funded TransCareO project, which examines in a mixed-method design the health care provision deficits, preferences, and barriers in health care access perceived by female adolescents affected by the Mayer-Rokitansky-Küster-Hauser syndrome (MRKHS), a rare (orphan) genital malformation. Prior to a communicative validation workshop, critical elements of MRKHS-related care and support (items) were identified in interviews with MRKHS patients. During the subsequent workshop, 87 persons involved in health care and support for MRKHS were asked to rate the items using a 7-point Likert scale (7, strongly agree; 1, strongly disagree) as to (1) the elements' potential importance (i.e., health care expected to be "best practice", or priority) and (2) the presently experienced care. A gap score between the two was computed, highlighting fields of action. Items were arranged into ten separate questionnaires representing domains of care and support (e.g., online portal, patient participation). Within each domain, several items addressed various aspects of "information" and "access". Here, we present the outcome of the items' evaluation by patients (attended, NPAT = 35; respondents, NRESP = 19). The highest priority scores occurred for the domains "Online portal", "Patient participation", and "Tailored informational offers", characterizing them as extremely important for the perception as best practice. Highest gap scores yielded domains

  4. Nonlinear Analysis to Detect if Excellent Nursing Work Environments Have Highest Well-Being.

    Science.gov (United States)

    Casalicchio, Giuseppe; Lesaffre, Emmanuel; Küchenhoff, Helmut; Bruyneel, Luk

    2017-09-01

    To detect potentially nonlinear associations between nurses' work environment and nurse staffing on the one hand and nurse burnout on the other hand. A cross-sectional multicountry study for which data collection using a survey of 33,731 registered nurses in 12 European countries took place during 2009 to 2010. A semiparametric latent variable model that describes both linear and potentially nonlinear associations between burnout (Maslach Burnout Inventory: emotional exhaustion, depersonalization, personal accomplishment) and work environment (Practice Environment Scale of the Nursing Work Index: managerial support for nursing, doctor-nurse collegial relations, promotion of care quality) and staffing (patient-to-nurse ratio). Similar conclusions are reached from linear and nonlinear models estimating the association between work environment and burnout. For staffing, an increase in the patient-to-nurse ratio is associated with an increase in emotional exhaustion. At about 15 patients per nurse, no further increase in emotional exhaustion is seen. Absence of evidence for diminishing returns of improving work environments suggests that continuous improvement and achieving excellence in nurse work environments pays off strongly in terms of lower nurse-reported burnout rates. Nurse staffing policy would benefit from a larger number of studies that identify specific minimum as well as maximum thresholds at which inputs affect nurse and patient outcomes. Nurse burnout is omnipresent and has previously been shown to be related to worse patient outcomes. Additional increments in characteristics of excellent work environments, up to the highest possible standard, correspond to lower nurse burnout. © 2017 Sigma Theta Tau International.

  5. Lost opportunities in HIV prevention: programmes miss places where exposures are highest

    Science.gov (United States)

    Sandøy, Ingvild F; Siziya, Seter; Fylkesnes, Knut

    2008-01-01

    Background Efforts at HIV prevention that focus on high risk places might be more effective and less stigmatizing than those targeting high risk groups. The objective of the present study was to assess risk behaviour patterns, signs of current preventive interventions and apparent gaps in places where the risk of HIV transmission is high and in communities with high HIV prevalence. Methods The PLACE method was used to collect data. Inhabitants of selected communities in Lusaka and Livingstone were interviewed about where people met new sexual partners. Signs of HIV preventive activities in these places were recorded. At selected venues, people were interviewed about their sexual behaviour. Peer educators and staff of NGOs were also interviewed. Results The places identified were mostly bars, restaurants or shebeens, and fewer than 20% reported any HIV preventive activity such as meetings, pamphlets or posters. In 43% of places in Livingstone and 26% in Lusaka, condoms were never available. There were few active peer educators. Among the 432 persons in Lusaka and 676 in Livingstone who were invited for interview about sexual behaviour, consistent condom use was relatively high in Lusaka (77%) but low in Livingstone (44% of men and 34% of women). Having no condom available was the most common reason for not using one. Condom use in Livingstone was higher among individuals socializing in places where condoms were always available. Conclusion In the places studied we found a high prevalence of behaviours with a high potential for HIV transmission but few signs of HIV preventive interventions. Covering the gaps in prevention in these high exposure places should be given the highest priority. PMID:18218124

  6. How to identify the person holding the highest position in the criminal hierarchy?

    Directory of Open Access Journals (Sweden)

    Grigoryev D.A.

    2014-12-01

    Full Text Available The current version of resolution No. 12 of the RF Supreme Court Plenum of June 10, 2010, clarifying the provisions of the law on liability for crimes committed by a person holding the highest position in the criminal hierarchy (Part 4 of Article 210 of the RF Criminal Code), is criticized. The evaluative character of this aggravating circumstance does not allow clear criteria to be developed for identifying the leaders of the criminal environment. Drawing on theoretical provisions and court practice, the authors suggest three criteria. The first criterion is specific actions, including: establishment and leadership of the criminal association (criminal organization); coordinating criminal acts; creating sustainable links between different organized groups acting independently; dividing spheres of criminal influence, sharing criminal income, and other criminal activities indicating the person's authority and leadership in a particular area or sphere of activity. The second is having money, valuables and other property obtained by criminal means without the person's direct participation in their acquisition; systematically transferring money, valuables and other property to that person without legal grounds (unjust enrichment); and spending that money, valuables and other property to carry out criminal activities (both the crimes themselves and the conditions of their commission). The third is international criminal ties, manifested in committing one of the crimes under Part 1 of Article 210 of the RF Criminal Code if this crime is transnational in nature; ties with extremist and (or) terrorist organizations; as well as corruption ties. The court may use one or several of these criteria.

  7. Analisa Penggunaan Tertinggi dan Terbaik (Highest and Best Use Analysis pada Lahan Pasar Turi Lama Surabaya

    Directory of Open Access Journals (Sweden)

    Maulida Herradiyanti

    2017-01-01

    Full Text Available Pasar Turi is a market that has long been a trade icon not only in Surabaya but throughout eastern Indonesia. A major fire in July 2007 destroyed the Pasar Turi building, and trading activity there came to a halt. To this day, the Pasar Turi Phase III site, commonly known as Pasar Turi Lama, remains abandoned. Yet the 16,281 m2 site lies in a commercial trade district and is suitable for development as commercial property such as offices, shops, shop-houses (ruko), or a traditional market. One way to determine the use of the Pasar Turi Lama site is the Highest and Best Use (HBU) method. HBU is a method for determining the use of an asset that gives the most optimal allocation and can therefore deliver the highest land value. The HBU criteria are: legally permissible, physically possible, financially feasible, and maximally productive. The results of this study identify shops as the best land-use alternative, with the highest land value of Rp27,994,695.78/m2 and a maximum productivity of 124%.

  8. Risk of influenza transmission in a hospital emergency department during the week of highest incidence.

    Science.gov (United States)

    Esteve-Esteve, Miguel; Bautista-Rentero, Daniel; Zanón-Viguer, Vicente

    2018-02-01

    To estimate the risk of influenza transmission in patients coming to a hospital emergency department during the week of highest incidence and to analyze factors associated with transmission. Retrospective observational analysis of a cohort of patients treated in the emergency room during the 2014-2015 flu season. The following variables were collected from records: recorded influenza diagnosis, results of a rapid influenza confirmation test, point of exposure (emergency department, outpatient clinic, or the community), age, sex, flu vaccination or not, number of emergency visits, time spent in the waiting room, and total time in the hospital. We compiled descriptive statistics and performed bivariate and multivariate analyses by means of a Poisson regression to estimate relative risk (RR) and 95% CIs. The emergency department patients had an RR of contracting influenza 3.29 times that of the community-exposed population (95% CI, 1.53-7.08; P=.002); their risk was 2.05 times greater than that of outpatient clinic visitors (95% CI, 1.04-4.02; P=.036). Emergency patients under the age of 15 years had a 5.27 times greater risk than older patients (95% CI, 1.59-17.51; P=.007). The RR of patients visiting more than once was 11.43 times greater (95% CI, 3.58-36.44; P<.001). The risk attributable to visiting the emergency department was 70.5%, whereas the risk attributable to community exposure was 2%. The risk of contracting influenza is greater for emergency department patients than for the general population or for patients coming to the hospital for outpatient clinic visits. Patients under the age of 15 years incur greater risk.
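Relative risks with 95% CIs of the kind reported above are produced here by a Poisson regression; for a single exposure with no covariates they reduce to the simple 2x2 estimator sketched below (the counts in the example are illustrative, not the study's data):

```python
import math

def relative_risk(cases_exp, n_exp, cases_ref, n_ref, z=1.96):
    """Unadjusted relative risk with a Wald-type 95% CI computed on
    the log scale (Katz method). A covariate-free stand-in for the
    study's Poisson regression; input counts are illustrative only."""
    rr = (cases_exp / n_exp) / (cases_ref / n_ref)
    # standard error of log(RR) for a 2x2 table
    se = math.sqrt(1 / cases_exp - 1 / n_exp + 1 / cases_ref - 1 / n_ref)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, (lo, hi)

# e.g. 20/100 exposed cases vs 10/100 reference cases -> RR = 2.0
rr, (lo, hi) = relative_risk(20, 100, 10, 100)
```

The Poisson regression used in the study additionally adjusts each RR for the other recorded covariates, which this sketch does not attempt.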

  9. Achieving world's highest level of nuclear safety learning from overseas nuclear trouble events

    International Nuclear Information System (INIS)

    Okumoto, Masaru

    2014-01-01

    The Nuclear Information Research Project of the Institute of Nuclear Safety System, Incorporated (INSS) has, for more than 20 years since INSS was established, collected trouble information on nuclear power plants (NPPs) issued by overseas regulatory agencies, up to several thousand events annually, and analyzed each in detail after screening. Lessons extracted from the analyses were offered as suggestions to the Japanese electric utilities operating PWRs. These activities no doubt contributed to maintaining and improving nuclear safety, but they could not prevent the accident at the Fukushima Daiichi NPPs. The project therefore reviewed the usefulness of its past activities and how they could be improved, listening sincerely to outside opinions. This report introduces an outline of recent activities. More competent suggestions to electric utilities could be made by better reflecting lessons in the rules that need them, deepening information sharing within the project, and raising awareness of the problem. (T. Tanaka)

  10. Assessment of natural background radiation in one of the highest regions of Ecuador

    Science.gov (United States)

    Pérez, Mario; Chávez, Estefanía; Echeverría, Magdy; Córdova, Rafael; Recalde, Celso

    2018-05-01

    Natural background radiation was measured in the province of Chimborazo (Ecuador), reference coordinates 1°40'00''S 78°39'00''W, where the point on Earth's surface furthest from the centre of the planet is located. Natural background radiation measurements were performed at 130 randomly selected sites using a Geiger-Müller GCA-07W portable detector; measurements were taken 6 m away from buildings or walls and 1 m above the ground. The global average natural background radiation established by UNSCEAR is 2.4 mSv y-1. In the study area measurements ranged from 0.57 mSv y-1 to 3.09 mSv y-1 with a mean value of 1.57 mSv y-1; the maximum value was recorded in the north of the study area at 5073 metres above sea level (m.a.s.l.), and the minimum value was recorded in the southwestern area at 297 m.a.s.l. An isodose map was plotted to represent the equivalent dose rate due to natural background radiation. An analysis of variance (ANOVA) between the data of the high and low regions of the study area showed a significant difference (p < α); in addition, a linear correlation coefficient of 0.92 was obtained, supporting the hypothesis that in high-altitude zones extraterrestrial radiation contributes significantly to natural background radiation.

  11. Asynchronous Gossip for Averaging and Spectral Ranking

    Science.gov (United States)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
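The classical gossip scheme the first variant builds on is simple to sketch: repeatedly pick a random pair of nodes and replace both of their values with the pair's mean. A minimal synchronous toy version (illustrative values; real networks are asynchronous and topology-constrained, which is exactly where the convergence difficulty discussed above arises):

```python
import random

def gossip_average(values, n_rounds=2000, seed=0):
    """Pairwise gossip: each round, two random nodes replace their
    values with their mutual average. The sum (hence the mean) is
    conserved, and all values converge toward the global average."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(n_rounds):
        i, j = rng.sample(range(len(x)), 2)
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

x = gossip_average([10.0, 0.0, 4.0, 6.0])
print(x)  # every entry close to the true mean 5.0
```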

  12. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses statistically averaged descriptions of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high-resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  13. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).

  14. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
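The "moving average of the previous decade" is a plain trailing mean over an 11-year window. A minimal sketch with invented annual misery-index values (inflation plus unemployment):

```python
def trailing_average(series, window):
    """Trailing moving average: entry t is the mean of the `window`
    values ending at t (None until enough history has accumulated)."""
    out = []
    for t in range(len(series)):
        if t + 1 < window:
            out.append(None)
        else:
            out.append(sum(series[t + 1 - window : t + 1]) / window)
    return out

misery = [8, 9, 12, 15, 13, 11, 10, 9, 8, 7, 6, 6]  # made-up annual values
print(trailing_average(misery, 11))  # leading Nones, then 11-year means
```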

  15. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  16. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion

  17. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering  and analysis of bacterial  convergence by chemotaxis and to apply similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, vanishing stochastic perturbations, and prevent analysis over infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  18. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    Receiver aperture averaging is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression that gives the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. The effect of the receiver aperture diameter on the aperture averaging factor in strong oceanic turbulence is also presented.

  19. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that cosmological parameters evaluated on the smoothed spatial domain B̄ obey Ω̄_m^B̄ + Ω̄_R^B̄ + Ω̄_A^B̄ + Ω̄_Q^B̄ = 1, where Ω̄_m^B̄, Ω̄_R^B̄ and Ω̄_A^B̄ correspond to the standard Friedmannian parameters, while Ω̄_Q^B̄ is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias

  20. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  1. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  2. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
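The element-wise trimmed average that TGA builds on can be sketched in a few lines (a generic trimmed mean; the paper's actual algorithm applies this idea within a Grassmannian subspace-averaging framework):

```python
def trimmed_mean(values, trim_frac=0.2):
    """Mean after discarding the lowest and highest trim_frac of the
    sorted values; robust to outliers, unlike the plain mean."""
    v = sorted(values)
    k = int(len(v) * trim_frac)
    core = v[k : len(v) - k] if len(v) > 2 * k else v
    return sum(core) / len(core)

print(trimmed_mean([1.0, 1.1, 0.9, 1.0, 100.0]))  # ~1.03: the outlier 100 is dropped
```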

  3. Extending a Consensus-based Fuzzy Ordered Weighting Average (FOWA) Model in New Water Quality Indices

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Baghapour

    2017-07-01

    Full Text Available In developing a specific WQI (Water Quality Index), many water quality parameters are involved with different levels of importance. The impact of experts’ different opinions and viewpoints, current risks affecting their opinions, and the plurality of the involved parameters double the significance of the issue. Hence, the current study tries to apply a consensus-based FOWA (Fuzzy Ordered Weighting Average) model as one of the most powerful and well-known Multi Criteria Decision Making (MCDM) techniques to determine the importance of the parameters used in the development of such WQIs, which is shown with an example. This operator provides the capability of modeling the risks in decision-making by applying the optimistic degree of stakeholders and their power, coupled with the use of fuzzy numbers. In total, 22 water quality parameters for drinking purposes are considered in this study. To determine the weight of each parameter, the viewpoints of 4 decision-making groups of experts are taken into account. After determining the final weights, to validate the use of each parameter in a potential WQI, consensus degrees of both the decision makers and the parameters are calculated. All calculations are carried out using the expert software called Group Fuzzy Decision Making (GFDM). The highest and the lowest weight values, 0.999 and 0.073 respectively, are related to Hg and temperature. Given that the intended use is drinking, the parameters’ weights and ranks are consistent with their health impacts. Moreover, the decision makers’ highest and lowest consensus degrees were 0.9905 and 0.9669, respectively. Among the water quality parameters, temperature (with a consensus degree of 0.9972) and Pb (with a consensus degree of 0.9665) received the highest and lowest agreement from the decision-making group. This study indicates that the weight of parameters in determining water quality largely depends on the experts’ opinions and
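Stripped of the fuzzy-number machinery, the core of the model is the crisp OWA operator: sort the scores, then take a weighted sum, with the weight profile encoding the decision maker's optimism. A minimal sketch (scores and weights invented for illustration; the paper's FOWA additionally works with fuzzy numbers and consensus degrees):

```python
def owa(values, weights):
    """Ordered Weighted Average: sort values in descending order and
    dot them with a weight vector summing to 1. Weight concentrated on
    the largest values models optimism; on the smallest, pessimism."""
    assert len(values) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

scores = [0.6, 0.9, 0.3]  # three experts' scores for one parameter
print(round(owa(scores, [0.5, 0.3, 0.2]), 2))  # optimistic weighting -> 0.69
print(round(owa(scores, [0.2, 0.3, 0.5]), 2))  # pessimistic weighting -> 0.51
```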

  4. Extending a Consensus-based Fuzzy Ordered Weighting Average (FOWA) Model in New Water Quality Indices

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Baghapour

    2017-07-01

    Full Text Available In developing a specific WQI (Water Quality Index), many quality parameters are involved with different levels of importance. The impact of experts’ different opinions and viewpoints, current risks affecting their opinions, and the plurality of the involved parameters double the significance of the issue. Hence, the current study tries to apply a consensus-based FOWA (Fuzzy Ordered Weighting Average) model as one of the most powerful and well-known Multi-Criteria Decision-Making (MCDM) techniques to determine the importance of the parameters used in the development of such WQIs, which is shown with an example. This operator provides the capability of modeling the risks in decision-making by applying the optimistic degree of stakeholders and their power, coupled with the use of fuzzy numbers. In total, 22 water quality parameters for drinking purposes were considered in this study. To determine the weight of each parameter, the viewpoints of 4 decision-making groups of experts were taken into account. After determining the final weights, to validate the use of each parameter in a potential WQI, consensus degrees of both the decision makers and the parameters were calculated. The highest and the lowest weight values, 0.999 and 0.073 respectively, were related to Hg and temperature. Given that the intended use was drinking, the parameters’ weights and ranks were consistent with their health impacts. Moreover, the decision makers’ highest and lowest consensus degrees were 0.9905 and 0.9669, respectively. Among the water quality parameters, temperature (with a consensus degree of 0.9972) and Pb (with a consensus degree of 0.9665) received the highest and lowest agreement from the decision-making group. This study indicated that the weight of parameters in determining water quality largely depends on the experts’ opinions and approaches. Moreover, using the FOWA model provides accurate and closer-to-reality results on the significance of

  5. Investigation of Behçet's Disease and Recurrent Aphthous Stomatitis Frequency: The Highest Prevalence in Turkey

    Directory of Open Access Journals (Sweden)

    Yalçın Baş

    2016-08-01

    Full Text Available Background: Recurrent Aphthous Stomatitis (RAS) is the most frequently observed painful pathology of the oral mucosa in the general population. It appears mostly in idiopathic form; however, it may also be related to systemic diseases like Behçet’s Disease (BD). Aims: Determining the prevalence of RAS and BD in the Northern Anatolian Region, which is one of the important routes on the antique Silk Road. Study Design: Cross-sectional study. Methods: Overall, 85 separate sampling groups were formed to reflect the population density and the demographic data of the region they represent. In the first stage, individuals selected in random order were invited to a Family Physician Unit at a certain date and time. The dermatological examinations of the volunteering individuals were performed by only 3 dermatology specialists. In the second stage, those individuals who had symptoms of BD were invited to our hospital, and the Pathergy Test and eye examinations were performed. Results: The annual prevalence of RAS was determined as 10.84%. The annual prevalence was higher in women than in men (p=0.000). The prevalence peaked in the 3rd decade of life and then decreased proportionally in the following decades (p=0.000). Aphtha recurrence also decreased in the following decades (p=0.048). The Behçet’s prevalence was found to be 0.60%. The prevalence in women was higher than in men (0.86% female, 0.14% male; p=0.022). Conclusion: While the RAS prevalence was at an average value compared with other populations, the BD prevalence was the highest reported in the world according to the literature.

  6. European and US publications in the 50 highest ranking pathology journals from 2000 to 2006.

    Science.gov (United States)

    Fritzsche, F R; Oelrich, B; Dietel, M; Jung, K; Kristiansen, G

    2008-04-01

    To analyse the contributions of the 15 primary member states of the European Union and selected non-European countries to pathology research between 2000 and 2006. Pathology journals were screened using the ISI Web of Knowledge database. The number of publications and related impact factors were determined for each country. Relevant socioeconomic indicators were related to the scientific output. Subsequently, results were compared to publications in 10 of the leading biomedical journals. The research output remained generally stable. In Europe, the UK, Germany, France, Italy and Spain ranked top concerning contributions to publications and impact factors in the pathology and leading general biomedical journals. With regard to socioeconomic data, smaller, mainly northern European countries showed a relatively higher efficiency. Of the larger countries, the UK is the most efficient in that respect. The rising economic powers of China and India were consistently in the rear. The results mirror the leading role of the USA in pathology research but also show the relevance of European scientists. The scientometric approach in this study provides a new fundamental and comparative overview of pathology research in the European Union and the USA which could help to benchmark scientific output among countries.

  7. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent’s behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
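The averaging rule itself is compact: each model's prediction is weighted by its posterior probability, which is proportional to evidence times prior. A minimal numeric sketch (all numbers illustrative):

```python
def bma_prediction(predictions, evidences, priors=None):
    """Bayesian model averaging: posterior model weights are
    proportional to evidence * prior; the averaged prediction is the
    weighted sum of the per-model predictions."""
    n = len(predictions)
    if priors is None:
        priors = [1.0 / n] * n  # uniform prior over models
    unnorm = [e * p for e, p in zip(evidences, priors)]
    z = sum(unnorm)
    weights = [u / z for u in unnorm]
    averaged = sum(w * pred for w, pred in zip(weights, predictions))
    return averaged, weights

pred, w = bma_prediction(predictions=[1.0, 3.0], evidences=[0.3, 0.1])
print(pred, w)  # ~1.5 with weights ~[0.75, 0.25]
```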

  8. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  9. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Hedin, E.R.

    1988-12-01

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_Θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_Θ in Extrap T1 is described. The results of a series of measurements yielding β_Θ as a function of externally applied toroidal field are presented. (author)

  10. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation is l...

  11. Gibbs equilibrium averages and Bogolyubov measure

    International Nuclear Information System (INIS)

    Sankovich, D.P.

    2011-01-01

    Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure

  12. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in L 2 -norm are derived. A number of numerical examples are provided to show computational performance of the method, with the regularization parameters selected by different strategies

  13. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    Journal of physics, July 2007, pp. 31–47. A singularity theorem based on spatial ... In this paper I would like to present a result which confirms – at least partially – ... A detailed analysis of how the model fits in with the .... Further, the statement that the spatial average ...... Financial support under grants FIS2004-01626 and no.

  14. Multiphase averaging of periodic soliton equations

    International Nuclear Information System (INIS)

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  15. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type
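A minimal sketch of one such technical rule, the moving-average crossover (window lengths and prices are illustrative):

```python
def ma_crossover_signals(prices, short=3, long=5):
    """Classic MA trading rule: signal 1 (be long) when the short
    moving average is above the long one, else 0 (stay out)."""
    def ma(t, n):
        # mean of the n prices ending at index t
        return sum(prices[t - n + 1 : t + 1]) / n
    return [1 if ma(t, short) > ma(t, long) else 0
            for t in range(long - 1, len(prices))]

prices = [10, 10, 11, 12, 13, 12, 11, 10, 9]
print(ma_crossover_signals(prices))  # -> [1, 1, 1, 0, 0]
```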

  16. Essays on model averaging and political economics

    NARCIS (Netherlands)

    Wang, W.

    2013-01-01

    This thesis first investigates various issues related with model averaging, and then evaluates two policies, i.e. West Development Drive in China and fiscal decentralization in U.S, using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple

  17. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    7 CFR 1209.12 - On average (Title 7, Agriculture, 2010 edition). Section 1209.12, Agriculture Regulations of the Department of Agriculture (Continued), AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... CONSUMER INFORMATION ORDER, Mushroom Promotion, Research, and Consumer Information Order Definitions § 1209...

  18. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives
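For the EOQ model mentioned above, the average-cost approach has the textbook closed form Q* = sqrt(2DK/h), where D is the demand rate, K the fixed ordering cost, and h the holding-cost rate. A minimal sketch (numbers illustrative):

```python
import math

def eoq(d, k, h):
    """Economic Order Quantity minimizing the average cost rate."""
    return math.sqrt(2 * d * k / h)

def average_cost(q, d, k, h):
    """Average cost rate: ordering cost k*d/q plus holding cost h*q/2."""
    return k * d / q + h * q / 2

q = eoq(1000, 50, 2)  # D=1000 units/yr, K=50 per order, h=2 per unit-yr
print(round(q, 2), round(average_cost(q, 1000, 50, 2), 2))  # -> 223.61 447.21
```

At the optimum the two cost terms are equal, so the minimum average cost rate is sqrt(2DKh).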

  19. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  20. Reliability Estimates for Undergraduate Grade Point Average

    Science.gov (United States)

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  1. Ultra-low noise miniaturized neural amplifier with hardware averaging.

    Science.gov (United States)

    Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M

    2015-08-01

    Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small. This work applies the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of using hardware averaging to improve noise in neural recording with cuff electrodes, and can accommodate the high source impedances that are associated with miniaturized contacts and high channel counts in electrode arrays. This technique can be adopted for other applications where miniaturized and implantable multichannel acquisition systems with ultra-low noise and low power are required.
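The 1/√N scaling described above is a general statistical property of averaging independent noise sources. A minimal simulation sketch (generic Gaussian noise, not the paper's amplifier model):

```python
import random
import statistics

def noise_std_after_averaging(n_amps, n_samples=20000, sigma=1.0, seed=1):
    """Average n_amps independent noisy copies of a zero signal and
    estimate the residual noise standard deviation."""
    rng = random.Random(seed)
    averaged = [
        sum(rng.gauss(0.0, sigma) for _ in range(n_amps)) / n_amps
        for _ in range(n_samples)
    ]
    return statistics.pstdev(averaged)

# When amplifier noise dominates (large source resistance is ignored here),
# the residual noise falls roughly as 1/sqrt(N)
for n in (1, 4, 8):
    print(n, noise_std_after_averaging(n))
```

With correlated noise (e.g. a shared source resistance), the reduction saturates below 1/√N, which is the "or less" caveat in the abstract.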

  2. Tendon surveillance requirements - average tendon force

    International Nuclear Information System (INIS)

    Fulton, J.F.

    1982-01-01

    Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, is stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)

  3. Mercury contamination from artisanal gold mining in Antioquia, Colombia: The world's highest per capita mercury pollution.

    Science.gov (United States)

    Cordy, Paul; Veiga, Marcello M; Salih, Ibrahim; Al-Saadi, Sari; Console, Stephanie; Garcia, Oseas; Mesa, Luis Alberto; Velásquez-López, Patricio C; Roeser, Monika

    2011-12-01

    The artisanal gold mining sector in Colombia has 200,000 miners officially producing 30 tonnes Au/a. In the Northeast of the Department of Antioquia, there are 17 mining towns and between 15,000 and 30,000 artisanal gold miners. Guerrillas and paramilitary activities in the rural areas of Antioquia pushed miners to bring their gold ores to the towns to be processed in Processing Centers or entables. These Centers operate in the urban areas amalgamating the whole ore, i.e. without previous concentration, and later burn gold amalgam without any filtering/condensing system. Based on mercury mass balance in 15 entables, 50% of the mercury added to small ball mills (cocos) is lost: 46% with tailings and 4% when amalgam is burned. In just 5 cities of Antioquia, with a total of 150,000 inhabitants: Segovia, Remedios, Zaragoza, El Bagre, and Nechí, there are 323 entables producing 10-20 tonnes Au/a. Considering the average levels of mercury consumption estimated by mass balance and interviews of entables owners, the mercury consumed (and lost) in these 5 municipalities must be around 93 tonnes/a. Urban air mercury levels range from 300 ng Hg/m³ (background) to 1 million ng Hg/m³ (inside gold shops), with 10,000 ng Hg/m³ being common in residential areas. The WHO limit for public exposure is 1000 ng/m³. The total mercury release/emissions to the Colombian environment can be as high as 150 tonnes/a, giving this country the shameful first position as the world's largest mercury polluter per capita from artisanal gold mining. One necessary government intervention is to cut the supply of mercury to the entables. In 2009, eleven companies in Colombia legally imported 130 tonnes of metallic mercury, much of it flowing to artisanal gold mines. Entables must be removed from urban centers and technical assistance is badly needed to improve their technology and reduce emissions.

  4. Simplifying consent for HIV testing is associated with an increase in HIV testing and case detection in highest risk groups, San Francisco January 2003-June 2007.

    Directory of Open Access Journals (Sweden)

    Nicola M Zetola

    2008-07-01

    Populations at highest risk for HIV infection face multiple barriers to HIV testing. To facilitate HIV testing procedures, the San Francisco General Hospital Medical Center eliminated required written patient consent for HIV testing in its medical settings in May 2006. To describe the change in HIV testing rates in different hospital settings and populations after the change in HIV testing policy in the SFDPH medical center, we performed an observational study using interrupted time series analysis. Data from all patients aged 18 years and older seen from January 2003 through June 2007 at the San Francisco Department of Public Health (SFDPH) medical care system were included in the analysis. The monthly HIV testing rate per 1000 patient-visits was calculated for the overall population and stratified by hospital setting, age, sex, race/ethnicity, homelessness status, insurance status and primary language. By June 2007, the average monthly rate of HIV tests per 1000 patient-visits had increased by 4.38 (CI, 2.17-6.60; p<0.001) over the number predicted if the policy change had not occurred (representing a 44% increase). The monthly average number of new positive HIV tests increased from 8.9 (CI, 6.3-11.5) to 14.9 (CI, 10.6-19.2; p<0.001), representing a 67% increase. Although increases in HIV testing were seen in all populations, populations at highest risk for HIV infection, particularly men, the homeless, and the uninsured, experienced the highest increases in monthly HIV testing rates after the policy change. The elimination of the requirement for written consent in May 2006 was associated with a significant and sustained increase in HIV testing rates and HIV case detection in the SFDPH medical center. Populations facing the highest barriers to HIV testing had the highest increases in HIV testing rates and case detection in response to the policy change.

  5. Consumer Airfare Report: Table 5 - Detailed Fare Information For Highest and Lowest Fare Markets Under 750 Miles

    Data.gov (United States)

    Department of Transportation — Provides detailed fare information for highest and lowest fare markets under 750 miles. For a more complete explanation, please read the introductory information at...

  6. Efficiency of Finnish power transmission network companies

    International Nuclear Information System (INIS)

    Anon.

    2001-01-01

    The Finnish Energy Market Authority has investigated the efficiency of power transmission network companies. The results show that the intensification potential of the branch is 402 million FIM, corresponding to about 15% of the total costs of the branch and 7.3% of the turnover. The Energy Market Authority supervises the reasonableness of power transmission prices, and it will use the results of the research in this supervision. The research was carried out by the Quantitative Methods Research Group of the Helsinki School of Economics. The main objective of the research was to create an efficiency estimation method for the electric power distribution network business suited to Finnish conditions. Data from 1998 were used as the basic material in the research. Twenty-one of the 102 power distribution network operators were estimated to be fully efficient. The highest possible efficiency score was 100, and the average score of all the operators was 76.9, the minimum being 42.6.

  7. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    Science.gov (United States)

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

    In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
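A minimal sketch of the adaptive-window idea described above (the window-size rule and test signal here are hypothetical illustrations, not the authors' calibrated algorithm):

```python
def adaptive_moving_average(signal, window_for_index):
    """Smooth `signal` with a centred moving average whose half-width is
    chosen per data point by `window_for_index`, e.g. proportional to the
    expected migration time of the analyte reaching the detector then."""
    smoothed = []
    n = len(signal)
    for i in range(n):
        w = max(1, window_for_index(i))          # half-width in samples
        lo, hi = max(0, i - w), min(n, i + w + 1)
        smoothed.append(sum(signal[lo:hi]) / (hi - lo))
    return smoothed

# Hypothetical rule: later-migrating (slower) analytes give broader,
# lower-frequency peaks, so the averaging window grows with index.
sig = [0.0] * 50
sig[10], sig[40] = 1.0, 1.0                      # two unit impulses
out = adaptive_moving_average(sig, lambda i: 1 + i // 20)
```

The early impulse is spread over a 3-point window and the late one over a 7-point window, mimicking how the method matches the window to each peak's frequency content.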

  8. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the system of relationships and interdependences between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of the average work productivity across the factors affecting it is conducted by means of the u-substitution method.

  9. Weighted estimates for the averaging integral operator

    Czech Academy of Sciences Publication Activity Database

    Opic, Bohumír; Rákosník, Jiří

    2010-01-01

    Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231

  10. Average Transverse Momentum Quantities Approaching the Lightfront

    OpenAIRE

    Boer, Daniel

    2015-01-01

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...

  11. Time-averaged MSD of Brownian motion

    OpenAIRE

    Andreanov, Alexei; Grebenkov, Denis

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...

  12. Average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1979-01-01

    Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (average B_z = 3.γ) than near midnight (average B_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed

  13. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  14. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

    Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.
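For context, the conventional period life expectancy that CAL and ACLE are compared against can be computed from a life table in a few lines (the flat mortality schedule below is purely hypothetical):

```python
def period_life_expectancy(qx):
    """Period life expectancy at birth from age-specific death
    probabilities q_x (one entry per single year of age), assuming
    deaths occur on average at mid-year."""
    survivors, e0 = 1.0, 0.0
    for q in qx:
        e0 += survivors * (1.0 - 0.5 * q)   # person-years lived in this age interval
        survivors *= 1.0 - q
    return e0

# Hypothetical flat mortality schedule: q = 0.02 at every age up to 110
flat = [0.02] * 110
print(period_life_expectancy(flat))
```

CAL and ACLE differ in which death rates enter the sum: a period measure uses the rates of one calendar year, whereas cohort-based measures follow each cohort's own mortality history across years.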

  15. Exposure to power frequency electromagnetic fields

    International Nuclear Information System (INIS)

    Skotte, J.

    1993-01-01

    The purpose was to assess personal exposure to power frequency electromagnetic fields in Denmark. Exposure to electric and magnetic 50 Hz fields was measured with personal dosimeters in periods of 24 hours covering both occupational and residential environments. The study included both highly exposed and 'normal' exposed jobs. Measurements were carried out with dosimeters, which sample electric and magnetic fields every 5 sec. Participants also wore the dosimeter during transportation. The dynamic range of the dosimeters was 0.01-200 μT and 0.6-10000 V/m. The highest average exposure in homes near high power lines was 2.24 μT. In most homes without nearby high power lines the average exposure was below 0.05 μT. Average values of '24-hour-dose' (μT times hours) for the generator facility, transmission line and substation workers were approximately the same as for the people living near high power lines (5 μT x hours). Electric field measurements with personal dosimeters involve several factors of uncertainty, as the body, posture, position of dosimeter etc. influence the results. The highest exposed groups were transmission line workers (GM: 44 V/m) and substation workers (GM: 23 V/m), but there were large variations (GSD: 4.7-4.8). In the work time the exposure level was the same for office workers and workers in the industry groups (GM: 12-13 V/m). In homes near high power lines (GM: 23 V/m) there was a non-significant tendency to higher exposure compared to homes without nearby high power lines. (AB) (11 refs.)

  16. Effect of random edge failure on the average path length

    Energy Technology Data Exchange (ETDEWEB)

    Guo Dongchao; Liang Mangui; Li Dandan; Jiang Zhongyuan, E-mail: mgliang58@gmail.com, E-mail: 08112070@bjtu.edu.cn [Institute of Information Science, Beijing Jiaotong University, 100044, Beijing (China)

    2011-10-14

    We study the effect of random removal of edges on the average path length (APL) in a large class of uncorrelated random networks in which vertices are characterized by hidden variables controlling the attachment of edges between pairs of vertices. A formula for approximating the APL of networks suffering random edge removal is derived first. Then, the formula is confirmed by simulations for classical ER (Erdős and Rényi) random graphs, BA (Barabási and Albert) networks, networks with exponential degree distributions, as well as random networks with asymptotic power-law degree distributions with exponent α > 2. (paper)
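The effect can be reproduced in miniature: build an ER random graph, remove a random fraction of edges, and recompute the APL by breadth-first search (a simulation sketch, not the paper's analytical formula):

```python
import random
from collections import deque

def average_path_length(adj):
    """Mean shortest-path length over connected ordered pairs (BFS from each node)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def er_graph(n, p, rng):
    """Classical Erdős-Rényi G(n, p) graph as an adjacency dict."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def remove_random_edges(adj, fraction, rng):
    """Delete a random fraction of the edges in place."""
    edges = [(i, j) for i in adj for j in adj[i] if i < j]
    for i, j in rng.sample(edges, int(fraction * len(edges))):
        adj[i].discard(j)
        adj[j].discard(i)

rng = random.Random(7)
g = er_graph(200, 0.05, rng)
before = average_path_length(g)
remove_random_edges(g, 0.2, rng)
after = average_path_length(g)   # APL typically grows as edges are removed
print(before, after)
```

As long as the thinned graph stays connected, every pairwise distance can only stay the same or grow, so the APL increases with the removed fraction.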

  17. Average level of satisfaction in 10 European countries: explanation of differences

    OpenAIRE

    Veenhoven, Ruut

    1996-01-01

    Surveys in 10 European nations assessed satisfaction with life-as-a-whole and satisfaction with three life-domains (finances, housing, social contacts). Average satisfaction differs markedly across countries. Both satisfaction with life-as-a-whole and satisfaction with life-domains are highest in North-Western Europe, medium in Southern Europe and lowest in the East-European nations. Cultural measurement bias is unlikely to be involved. The country differences in average ...

  18. Operator product expansion and its thermal average

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)

    1998-05-01

    QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators, which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case at finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules, when the temperature is not too low. (orig.) 7 refs.

  19. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  20. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schrödinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  1. Baseline-dependent averaging in radio interferometry

    Science.gov (United States)

    Wijnholds, S. J.; Willis, A. G.; Salvini, S.

    2018-05-01

    This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.
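The data-volume argument can be sketched numerically: if the averaging time is allowed to grow inversely with baseline length (to hold the decorrelation loss roughly constant), short baselines contribute far fewer visibilities. All parameters below are hypothetical illustrations, not SKA design values:

```python
import random

def averaging_time(baseline_m, t_min=0.5, t_max=32.0, b_ref=1000.0):
    """Hypothetical BDA rule: short baselines fringe-rotate slowly, so they
    tolerate longer averaging for the same decorrelation loss. The averaging
    time scales as 1/baseline, clamped to [t_min, t_max] seconds."""
    return max(t_min, min(t_max, t_max * b_ref / baseline_m))

def volume_reduction(baselines):
    """Fraction of visibility volume saved vs averaging every baseline at t_min."""
    uniform = sum(1 / 0.5 for _ in baselines)                  # all at t_min
    bda = sum(1 / averaging_time(b) for b in baselines)        # per-baseline times
    return 1 - bda / uniform

rng = random.Random(0)
baselines = [10 ** rng.uniform(2, 5) for _ in range(500)]      # 100 m .. 100 km
print(volume_reduction(baselines))
```

Because arrays typically have many more short baselines than long ones, the saved fraction can be large, which is consistent with the more-than-80-per-cent reduction the paper reports for realistic configurations.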

  2. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time averaging circuit design, the 'parallel filter' is presented, which can reduce the time jitter, introduced in time measurements using counters of large dimensions. This parallel filter could be considered as a single stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  3. Time-averaged MSD of Brownian motion

    International Nuclear Information System (INIS)

    Andreanov, Alexei; Grebenkov, Denis S

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution

  4. Time-dependent angularly averaged inverse transport

    International Nuclear Information System (INIS)

    Bal, Guillaume; Jollivet, Alexandre

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain

  5. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

    We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.
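The bound (4n−m−1)/7 can be spot-checked by brute force on a small connected triangle-free graph such as the 7-cycle:

```python
from itertools import combinations

def independence_number(n, edges):
    """Largest independent set size by brute force (small graphs only)."""
    edge_set = {frozenset(e) for e in edges}
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            if all(frozenset(p) not in edge_set for p in combinations(subset, 2)):
                return size
    return 0

# C7: connected, triangle-free, n = 7, m = 7
edges = [(i, (i + 1) % 7) for i in range(7)]
n, m = 7, len(edges)
alpha = independence_number(n, edges)
bound = (4 * n - m - 1) / 7
assert alpha >= bound   # 3 >= 20/7
```

For C7 the independence number is 3, just above the guaranteed 20/7 ≈ 2.86, which illustrates how tight the bound is on sparse graphs.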

  6. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...... (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...

  7. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The rough properties of nuclei were investigated by the statistical model, in systems with the same and with different numbers of protons and neutrons, separately, considering the Coulomb energy in the latter case. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. To obtain a good fit to the beta-stability line and mass excesses, the surface symmetry energy was established. (M.C.K.) [pt

  8. Time-averaged MSD of Brownian motion

    Science.gov (United States)

    Andreanov, Alexei; Grebenkov, Denis S.

    2012-07-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.
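The TAMSD itself is straightforward to compute from a trajectory; for simulated Brownian motion it should grow linearly with the lag, about 2·D·lag·dt (a generic sketch, not the authors' analytical derivation):

```python
import random

def tamsd(traj, lag):
    """Time-averaged mean-square displacement at a given lag:
    (1/(N - lag)) * sum_t (x[t + lag] - x[t])**2."""
    n = len(traj)
    return sum((traj[t + lag] - traj[t]) ** 2 for t in range(n - lag)) / (n - lag)

def brownian(n, D=1.0, dt=1.0, seed=3):
    """1D Brownian trajectory with Gaussian increments of variance 2*D*dt."""
    rng = random.Random(seed)
    x, traj = 0.0, [0.0]
    for _ in range(n - 1):
        x += rng.gauss(0.0, (2 * D * dt) ** 0.5)
        traj.append(x)
    return traj

traj = brownian(100000)
# For Brownian motion the TAMSD grows linearly with lag: ~ 2 * D * lag * dt
for lag in (1, 2, 4):
    print(lag, tamsd(traj, lag))
```

The fluctuations of this estimator around 2·D·lag·dt for a single finite trajectory are exactly what the abstract's Laplace-transform analysis characterizes.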

  9. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.

  10. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  11. Cutting-Edge High-Power Ultrafast Thin Disk Oscillators

    Directory of Open Access Journals (Sweden)

    Thomas Südmeyer

    2013-04-01

    A growing number of applications in science and industry are currently pushing the development of ultrafast laser technologies that enable high average powers. SESAM modelocked thin disk lasers (TDLs) currently achieve higher pulse energies and average powers than any other ultrafast oscillator technology, making them excellent candidates for this goal. Recently, 275 W of average power with a pulse duration of 583 fs were demonstrated, which represents the highest average power so far demonstrated from an ultrafast oscillator. In terms of pulse energy, TDLs reach more than 40 μJ pulses directly from the oscillator. In addition, another major milestone was recently achieved, with the demonstration of a TDL with nearly bandwidth-limited 96-fs long pulses. The progress achieved in terms of pulse duration of such sources enabled the first measurement of the carrier-envelope offset frequency of a modelocked TDL, which is the first key step towards full stabilization of such a source. We will present the key elements that enabled these latest results, as well as an outlook towards the next scaling steps in average power, pulse energy and pulse duration of such sources. These cutting-edge sources will enable exciting new applications, and open the door to further extending the current performance milestones.

  12. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll, J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
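
    The trajectory (Polyak-Ruppert) averaging idea behind this result can be illustrated on a toy Robbins-Monro recursion. The sketch below is a minimal illustration, not the paper's SAMCMC algorithm; the function name and the toy root-finding problem are invented for the example.

```python
import numpy as np

def robbins_monro_with_averaging(grad_noisy, theta0, n_iter=5000, seed=0):
    """Robbins-Monro recursion theta_{k+1} = theta_k - a_k * H(theta_k, X_k),
    returning both the last iterate and the trajectory (Polyak-Ruppert) average."""
    rng = np.random.default_rng(seed)
    theta = theta0
    running_sum = 0.0
    for k in range(1, n_iter + 1):
        a_k = 1.0 / k**0.7          # step sizes with sum a_k = inf, sum a_k^2 < inf
        theta = theta - a_k * grad_noisy(theta, rng)
        running_sum += theta
    return theta, running_sum / n_iter

# Toy mean-estimation problem: H(theta, X) = theta - X with X ~ N(2, 1),
# whose root is theta* = E[X] = 2.
noisy = lambda th, rng: th - (2.0 + rng.standard_normal())
last, averaged = robbins_monro_with_averaging(noisy, theta0=0.0)
```

    The averaged iterate typically has a smaller variance than the last iterate, which is the efficiency gain the paper establishes rigorously for SAMCMC.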

  13. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_{uu}, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_{uuu···u} ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  14. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by ''exact'' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality.

  15. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Full Text Available Consider an arbitrary nonnegative deterministic process (in a stochastic setting, {X(t), t ≥ 0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S = (-∞, ∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.
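
    The equivalence the paper studies, time averages versus expectations under the long-run frequency distribution, can be demonstrated numerically on a discrete-time sample path. This is a sketch with invented toy data; in discrete time the identity holds exactly by construction.

```python
import numpy as np

rng = np.random.default_rng(1)
# A fixed realization (sample path) of a discrete-time process.
path = rng.exponential(scale=3.0, size=10_000)

f = lambda x: x**2   # any measurable function of the process

# Long-run time average of f along the path.
time_average = f(path).mean()

# Expectation of f under the path's empirical (long-run frequency) distribution.
values, counts = np.unique(path, return_counts=True)
freq_expectation = np.sum(f(values) * counts / counts.sum())
```

    The two quantities agree to floating-point precision; the paper's contribution is characterizing when this identity survives the passage to continuous time and infinite horizons.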

  16. Chaotic Universe, Friedmannian on the average 2

    Energy Technology Data Exchange (ETDEWEB)

    Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij

    1980-11-01

    The cosmological solutions are found for the equations for correlators, describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solutions depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor is situated higher than the Friedmannian one, and lower at n > 0.26. The influence of long-wave fluctuation modes finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. Restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.

  17. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1991-08-01

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms
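
    One simple version of the proposed remedy, re-evaluating the error model at the current average rather than at each reported value, can be sketched as follows. This is an illustrative toy with a relative-error model, not the paper's exact procedure; `average_with_sliding_errors` and the data are invented. For this error model, naive inverse-variance weighting with errors evaluated at the measured values is provably biased low (a Chebyshev sum-inequality argument).

```python
import numpy as np

def average_with_sliding_errors(x, sigma_of, n_iter=20):
    """Weighted average when the measurement error depends on the (unknown)
    true value: start from reported errors sigma_of(x_i), then iteratively
    re-evaluate the error model at the current average and re-weight."""
    mean = np.average(x, weights=1.0 / sigma_of(x)**2)   # naive, biased start
    for _ in range(n_iter):
        s = sigma_of(mean)                 # common error estimate at the average
        mean = np.average(x, weights=np.full_like(x, 1.0 / s**2))
    return mean

# Toy error model: relative error, sigma(v) = 0.1 * v. Weighting by the
# reported errors over-weights fluctuations that happen to come out low.
rng = np.random.default_rng(2)
true = 10.0
x = true * (1.0 + 0.1 * rng.standard_normal(200))
sigma = lambda v: 0.1 * v
improved = average_with_sliding_errors(x, sigma)
naive = np.average(x, weights=1.0 / sigma(x)**2)
```

    With a common error model evaluated at the average, the weights become equal and the bias of the naive estimate disappears.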

  18. 2002 Nuclear Power World Report - Evaluation

    International Nuclear Information System (INIS)

    Anon.

    2003-01-01

    Last year, in 2002, 441 nuclear power plants were available for power supply in 31 countries in the world. With an aggregate gross power of 377,359 MWe, and an aggregate net power of 359,429 MWe, respectively, the nuclear generating capacity reached its highest level so far. Nine different reactor lines are used in commercial facilities. Light water reactors (PWR and BWR) contribute 355 plants, which makes them the most common reactor line. In twelve countries, 32 nuclear power plants with an aggregate gross power of 26,842 MWe and an aggregate net power of 25,546 MWe, respectively, are under construction. Of these, 25 units are light water reactors while eight are CANDU-type plants. In eighteen countries, 94 commercial reactors with more than 5 MWe power have been decommissioned so far. Most of these plants are prototypes with low powers. 228 of the nuclear power plants currently in operation, i.e. slightly more than half of them, were commissioned in the eighties. The oldest commercial nuclear power plant, Calder Hall unit 1, supplied power into the public grid in its 47th year of operation in 2002. The availability in terms of time and capacity of nuclear power plants rose from 74.23% in 1991 to 83.40% in 2001. A continued rise to approx. 85% is expected for 2002. In the same way, the non-availability in terms of time (unscheduled) dropped from 6.90% to 3.48%. The four nuclear power plants in Finland are the world's leaders with a cumulated average capacity availability of 90.00%. (orig.)

  19. Highest weight generating functions for hyperKähler T*(G/H) spaces

    Energy Technology Data Exchange (ETDEWEB)

    Hanany, Amihay [Theoretical Physics Group, Imperial College London,Prince Consort Road, London, SW7 2AZ (United Kingdom); Ramgoolam, Sanjaye [Centre for Research in String Theory,School of Physics and Astronomy, Queen Mary University of London,Mile End Road, London E1 4NS (United Kingdom); Rodriguez-Gomez, Diego [Department of Physics, Universidad de Oviedo,Avda. Calvo Sotelo 18, 33007, Oviedo (Spain)

    2016-10-05

    We develop an efficient procedure for counting holomorphic functions on a hyperKähler cone that has a resolution as a cotangent bundle of a homogeneous space, by providing a formula for computing the corresponding highest weight generating function.

  20. FEL system with homogeneous average output

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph

    2018-01-16

    A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest and accelerating the particles to full energy, resulting in distinct phase-energy correlations, or chirps, on each bunch train that are independently controlled by the choice of phase offset. The earlier trains are more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M56 selected to compress all three bunch trains at the FEL, with higher order terms managed.

  1. Quetelet, the average man and medical knowledge.

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  3. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the Netflix Prize, and find a significant improvement using our method over a baseline.
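
    The harmonic average at the heart of the GEN measure can be illustrated directly. The helper below is a generic weighted harmonic mean, not the paper's full recursive network definition; the example values are invented.

```python
import numpy as np

def weighted_harmonic_average(values, weights):
    """Weighted harmonic mean: sum(w) / sum(w / v)."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return weights.sum() / (weights / values).sum()

# Harmonic averaging emphasizes the *smallest* (closest) contributions,
# which is why it suits "closeness" in a network: one strong tie dominates
# many weak ones, unlike the arithmetic mean.
strong_and_weak = weighted_harmonic_average([1.0, 100.0, 100.0], [1, 1, 1])
arithmetic = np.mean([1.0, 100.0, 100.0])
```

    Here the harmonic mean stays close to the single small distance (about 2.9), while the arithmetic mean is pulled up to 67 by the two weak ties.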

  4. Angle-averaged Compton cross sections

    International Nuclear Information System (INIS)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
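
    As a concrete companion, the angular average can be carried out numerically for the simplest case of an electron at rest (β = 0), where the angular distribution is the standard Klein-Nishina formula. The code below is an illustrative sketch, not the paper's analytic expression; the numerical check is against the well-known Thomson limit at low photon energy.

```python
import numpy as np

RE = 2.8179403262e-13  # classical electron radius [cm]

def klein_nishina_total(alpha, n=200_000):
    """Total Compton cross section [cm^2] for a photon of energy
    alpha = E / (m0 c^2) on an electron at rest, obtained by numerically
    averaging the Klein-Nishina angular distribution over the polar angle."""
    theta = np.linspace(0.0, np.pi, n)
    ratio = 1.0 / (1.0 + alpha * (1.0 - np.cos(theta)))  # alpha_s / alpha
    dsigma = 0.5 * RE**2 * ratio**2 * (ratio + 1.0 / ratio - np.sin(theta)**2)
    integrand = dsigma * 2.0 * np.pi * np.sin(theta)     # solid-angle measure
    # Trapezoid rule over theta in [0, pi].
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1])) * (np.pi / (n - 1)))

thomson = 8.0 * np.pi * RE**2 / 3.0   # low-energy (Thomson) limit
sigma_low = klein_nishina_total(1e-6)
```

    At α → 0 the numerical average reproduces the Thomson cross section, and the total cross section falls with increasing photon energy, as expected.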

  5. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    Full Text Available The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI better represents the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
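
    The AGDI construction, accumulating silhouette differences between adjacent frames, can be sketched in a few lines. This is an illustrative toy; the real method operates on segmented binary silhouettes from gait video, and the synthetic "walk" below is invented.

```python
import numpy as np

def average_gait_differential_image(silhouettes):
    """AGDI-style feature: accumulate the absolute difference between
    adjacent silhouette frames and average over the sequence."""
    frames = np.asarray(silhouettes, dtype=float)     # shape (T, H, W)
    diffs = np.abs(np.diff(frames, axis=0))           # (T-1, H, W)
    return diffs.mean(axis=0)

# Tiny synthetic "walk": a bright column sweeping across a 4x4 frame.
frames = np.zeros((5, 4, 4))
for t in range(5):
    frames[t, :, t % 4] = 1.0
agdi = average_gait_differential_image(frames)
```

    The resulting feature image records where motion occurred across the sequence; 2DPCA (or PCA) would then be applied to such images for recognition.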

  6. Reynolds averaged simulation of unsteady separated flow

    International Nuclear Information System (INIS)

    Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.

    2003-01-01

    The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flows around a square cylinder and over a wall-mounted cube are simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better agreement with available experimental data than has been achieved with steady computation.

  7. Angle-averaged Compton cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.

  8. The balanced survivor average causal effect.

    Science.gov (United States)

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-05-07

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
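
    The core of the balanced-SACE estimator, comparing outcomes between equivalent fractions of the longest-surviving patients in each arm, can be sketched as follows. This is a simplified illustration on synthetic data; `balanced_sace_estimate`, the fraction 0.5, and the toy cohort are all invented for the example, and the paper's bias corrections and bootstrap inference are omitted.

```python
import numpy as np

def balanced_sace_estimate(y_trt, surv_trt, y_ctl, surv_ctl, frac=0.5):
    """Compare the mean longitudinal outcome between equivalent fractions of
    the longest-surviving patients in the treatment and control groups
    (no monotonicity assumption on survival is required)."""
    k_t = max(1, int(frac * len(surv_trt)))
    k_c = max(1, int(frac * len(surv_ctl)))
    top_t = np.argsort(surv_trt)[-k_t:]     # indices of the longest survivors
    top_c = np.argsort(surv_ctl)[-k_c:]
    return np.mean(np.asarray(y_trt)[top_t]) - np.mean(np.asarray(y_ctl)[top_c])

# Synthetic trial: treatment shifts the outcome by +2; survival times are
# drawn from the same distribution in both arms.
rng = np.random.default_rng(3)
surv_c = rng.exponential(5.0, 400)
surv_t = rng.exponential(5.0, 400)
y_c = 10.0 + rng.standard_normal(400)
y_t = 12.0 + rng.standard_normal(400)
effect = balanced_sace_estimate(y_t, surv_t, y_c, surv_c)
```

    Because the compared subsets are balanced with respect to survival rank rather than selected by a monotonicity assumption, the estimator remains meaningful when treatment can shorten survival for some patients.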

  9. Relationship between 18-month mating mass and average lifetime reproduction

    African Journals Online (AJOL)

    1976; Elliot, Rae & Wickham, 1979; Napier, et al., 1980). Although in general agreement with results in the literature, it is evident that the present phenotypic correlations between 18-month mating mass and average lifetime lambing and weaning rate tended to be equal to the highest comparable estimates in the ...

  10. Influence of sports games classes in specialized sections on the formation of a healthy lifestyle among students of higher educational institutions

    OpenAIRE

    Kudryavtsev, M.; Galimova, A.; Alshuvayli, Kh.; Altuvayni, A.

    2018-01-01

    In modern society, the problem of forming a healthy lifestyle among young people, in particular students of higher educational institutions, is very relevant. Sport is a good means of motivation, in this case sports games. Purpose: to reveal the consequences of participation in sports games and the influence of these activities on the healthy lifestyle of students of higher educational institutions, and to identify the role of classes in sections specializing in preparation for sports games in this ...

  11. Evaluation of soft x-ray average recombination coefficient and average charge for metallic impurities in beam-heated plasmas

    International Nuclear Information System (INIS)

    Sesnic, S.S.; Bitter, M.; Hill, K.W.; Hiroe, S.; Hulse, R.; Shimada, M.; Stratton, B.; von Goeler, S.

    1986-05-01

    The soft x-ray continuum radiation in TFTR low density neutral beam discharges can be much lower than its theoretical value obtained by assuming a corona equilibrium. This reduced continuum radiation is caused by an ionization equilibrium shift toward lower states, which strongly changes the value of the average recombination coefficient of metallic impurities, γ̄, even for only slight changes in the average charge, Z̄. The primary agent for this shift is the charge exchange between the highly ionized impurity ions and the neutral hydrogen, rather than impurity transport, because the central density of the neutral hydrogen is strongly enhanced at lower plasma densities with intense beam injection. In the extreme case of low density, high neutral beam power TFTR operation (energetic ion mode), the reduction in γ̄ can be as much as one-half to two-thirds. We calculate the parametric dependence of γ̄ and Z̄ for Ti, Cr, Fe, and Ni impurities on neutral density (equivalent to beam power), electron temperature, and electron density. These values are obtained by using either a one-dimensional impurity transport code (MIST) or a zero-dimensional code with a finite particle confinement time. As an example, we show the variation of γ̄ and Z̄ in different TFTR discharges.

  12. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
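
    The constrained-simulation route the abstract mentions, recovering a free energy profile by integrating the average force along the coordinate, can be sketched numerically. This is a toy harmonic example with invented names and data, not the paper's molecular systems; for U(ξ) = ½kξ² the exact mean force at a constrained value is -kξ, so the recovered profile should be U itself up to an additive constant.

```python
import numpy as np

def free_energy_from_mean_force(xi_grid, mean_force):
    """Integrate the (negative) average force along the coordinate to get
    the free energy profile: dA/dxi = -<F_xi> (trapezoid rule, A(xi_0)=0)."""
    dxi = xi_grid[1] - xi_grid[0]
    dA = -0.5 * (mean_force[1:] + mean_force[:-1]) * dxi
    return np.concatenate([[0.0], np.cumsum(dA)])

k = 3.0
xi = np.linspace(0.0, 2.0, 201)
# "Measured" mean force: exact value -k*xi plus a little sampling noise.
noisy_force = -k * xi + 0.001 * np.random.default_rng(4).standard_normal(xi.size)
profile = free_energy_from_mean_force(xi, noisy_force)
```

    The end-point value should be close to ½k·ξ² = 6.0 at ξ = 2; in a real simulation the mean force at each ξ would come from constrained or unconstrained sampling as described in the abstract.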

  13. Geographic Gossip: Efficient Averaging for Sensor Networks

    Science.gov (United States)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\\epsilon$ using $O(\\frac{n^{1.5}}{\\sqrt{\\log n}} \\log \\epsilon^{-1})$ radio transmissions, which yields a $\\sqrt{\\frac{n}{\\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
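
    For contrast with the geographic scheme, standard pairwise gossip averaging, the baseline the paper improves on, is easy to sketch. This is an illustrative toy on a ring (the topology where plain gossip mixes slowly); the function name and sensor readings are invented.

```python
import numpy as np

def randomized_gossip(values, edges, n_rounds=20_000, seed=5):
    """Standard pairwise gossip: repeatedly pick a random edge and replace
    both endpoint values by their average. Each exchange preserves the sum,
    so every node converges to the global average."""
    x = np.asarray(values, dtype=float).copy()
    rng = np.random.default_rng(seed)
    for _ in range(n_rounds):
        i, j = edges[rng.integers(len(edges))]
        x[i] = x[j] = 0.5 * (x[i] + x[j])
    return x

# Ring of n sensors.
n = 20
ring = [(i, (i + 1) % n) for i in range(n)]
readings = np.arange(n, dtype=float)          # sensor measurements 0..19
x = randomized_gossip(readings, ring)
```

    The number of exchanges needed for a given accuracy grows quickly with n on such graphs, which is exactly the inefficiency that geographic gossip attacks by routing values over longer distances.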

  14. The concept of average LET values determination

    International Nuclear Information System (INIS)

    Makarewicz, M.

    1981-01-01

    The concept of determining average LET (linear energy transfer) values, i.e. ordinary moments of LET in the absorbed dose distribution vs. LET, for ionizing radiation of any kind and any spectrum (even unknown ones) is presented. The method is based on measuring the ionization current at several values of the voltage supplying an ionization chamber operating under conditions of columnar recombination of ions, or ion recombination in clusters, while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values one can obtain coefficients of the expression which can be interpreted as values of the LET moments. One of the advantages of the method is its experimental and computational simplicity. It has been shown that for numerical estimation of certain effects dependent on the LET of radiation it is not necessary to know the dose distribution but only a number of parameters of the distribution, i.e. the LET moments. (author)

  15. On spectral averages in nuclear spectroscopy

    International Nuclear Information System (INIS)

    Verbaarschot, J.J.M.

    1982-01-01

    In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions, which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, which is defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method for transforming fixed angular momentum projection traces into fixed angular momentum configuration space traces is developed. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)

  16. Characterizing individual painDETECT symptoms by average pain severity

    Directory of Open Access Journals (Sweden)

    Sadosky A

    2016-07-01

    Full Text Available Background: painDETECT is a screening measure for neuropathic pain. The nine-item version consists of seven sensory items (burning, tingling/prickling, light touching, sudden pain attacks/electric shock-type pain, cold/heat, numbness, and slight pressure), a pain course pattern item, and a pain radiation item. The seven-item version consists only of the sensory items. Total scores of both versions discriminate average pain-severity levels (mild, moderate, and severe), but their ability to discriminate individual item severity has not been evaluated. Methods: Data were from a cross-sectional, observational study of six neuropathic pain conditions (N=624). Average pain severity was evaluated using the Brief Pain Inventory-Short Form, with severity levels defined using established cut points for distinguishing mild, moderate, and severe pain. The Wilcoxon rank sum test was followed by ridit analysis to represent the probability that a randomly selected subject from one average pain-severity level had a more favorable outcome on the specific painDETECT item relative to a randomly selected subject from a comparator severity level. Results: A probability >50% for a better outcome (less severe pain) was significantly observed for each pain symptom item. The lowest probability was 56.3% (on numbness) for mild vs moderate pain and the highest probability was 76.4% (on cold/heat) for mild vs severe pain. The pain radiation item was significant (P<0.05) and consistent with the pain symptoms, as well as with total scores for both painDETECT versions; only the pain course item did not differ. Conclusion: painDETECT differentiates severity such that the ability to discriminate average pain also distinguishes individual pain item severity in an interpretable manner.
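
    The ridit-style probability reported in the Results, the chance that a random subject from one severity level has a more favorable item score than one from another, with ties split evenly, is the Wilcoxon-Mann-Whitney probability and can be computed directly. The toy scores below are invented, not study data.

```python
import numpy as np

def ridit_probability(less_severe_group, more_severe_group):
    """Probability that a randomly chosen subject from the first group has a
    lower (more favorable) score than one from the second group, counting
    ties as 1/2 -- the quantity estimated by ridit analysis."""
    a = np.asarray(less_severe_group)[:, None]
    b = np.asarray(more_severe_group)[None, :]
    return float(np.mean(a < b) + 0.5 * np.mean(a == b))

# Toy 0-5 symptom scores for mild- vs severe-pain subjects.
mild = np.array([0, 1, 1, 2, 2, 3])
severe = np.array([2, 3, 3, 4, 5, 5])
p = ridit_probability(mild, severe)
```

    A value of 0.5 would mean the item does not separate the two severity levels at all; the study's reported probabilities of 56-76% correspond to modest-to-strong separation.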

  17. Hydroelectric power in Switzerland: large growth potential by increasing the installed power

    International Nuclear Information System (INIS)

    Schleiss, A.

    2007-01-01

    Due to its important hydroelectric power generation facilities (about 525 plants with a total power of 13,314 MW producing about 35.3 TWh annually) Switzerland plays an important role in the interconnected European power system. Large artificial storage lakes in the Swiss Alps can generate peak power during hours of highest demand: 9700 MW are available from accumulated energy and the total power of pumped-storage facilities amounts to 1700 MW. The latter allow refilling the reservoirs at periods of low power demand and generating power at periods of peak demand. In the case of favorable conditions, the yearly average power production could be increased by 6% and the production during the winter period (October to March) by 20% by the year 2020. However, looking forward to the year 2050, the annual production is expected to decrease by 3% despite a possible extension of hydropower. This decrease is due to the enforcement of the minimum residual water flow rates required by a new legislation to protect the rivers. The enforcement is due at latest when the present licenses for water utilization expire. On the other hand, the installed (peak) power might be further increased by 50% by retrofitting the existing installations and constructing the pumped-storage plants currently at the planning stage

  18. Highest PBDE levels (max 63 ppm) yet found in biota measured in seabird eggs from San Francisco Bay

    Energy Technology Data Exchange (ETDEWEB)

    She, J.; Holden, A.; Tanner, M.; Sharp, M.; Hooper, K. [Department of Toxic Substances Control, Berkeley, CA (United States). Hazardous Materials Lab.; Adelsbach, T. [Environmental Contaminants Division, Sacramento Fish and Wildlife Office, US Fish and Wildlife Service, Sacramento, CA (United States)

    2004-09-15

    High levels of polybrominated diphenylethers (PBDEs) have been found in humans and wildlife from the San Francisco Bay Area, with levels in women among the highest in the world, and levels in piscivorous seabird eggs at the ppm level. Seabirds are useful for monitoring and assessing ecosystem health at various times and places because they occupy a high trophic level in the marine food web, are long-lived, and are generally localized near their breeding and non-breeding sites. In collaboration with the US Fish and Wildlife Service (USFWS), we are carrying out a three-year investigation of dioxin, PCB and PBDE levels in eggs from fish-eating seabirds. Year 1 (2002) PBDE measurements from 73 bird eggs were reported at Dioxin2003. Year 2 (2003) PBDE measurements from 45 samples are presented in this report. The highest PBDE level measured in eggs was 63 ppm, lipid, which is the highest PBDE level yet reported in biota.

  19. Inclusion of Highest Glasgow Coma Scale Motor Component Score in Mortality Risk Adjustment for Benchmarking of Trauma Center Performance.

    Science.gov (United States)

    Gomez, David; Byrne, James P; Alali, Aziz S; Xiong, Wei; Hoeft, Chris; Neal, Melanie; Subacius, Harris; Nathens, Avery B

    2017-12-01

    The Glasgow Coma Scale (GCS) is the most widely used measure of traumatic brain injury (TBI) severity. Currently, the arrival GCS motor component (mGCS) score is used in risk-adjustment models for external benchmarking of mortality. However, there is evidence that the highest mGCS score in the first 24 hours after injury might be a better predictor of death. Our objective was to evaluate the impact of including the highest mGCS score on the performance of risk-adjustment models and subsequent external benchmarking results. Data were derived from the Trauma Quality Improvement Program analytic dataset (January 2014 through March 2015) and were limited to the severe TBI cohort (16 years or older, isolated head injury, GCS ≤8). Risk-adjustment models were created that varied in the mGCS covariates only (initial score, highest score, or both initial and highest mGCS scores). Model performance and fit, as well as external benchmarking results, were compared. There were 6,553 patients with severe TBI across 231 trauma centers included. Initial and highest mGCS scores were different in 47% of patients (n = 3,097). Model performance and fit improved when both initial and highest mGCS scores were included, as evidenced by improved C-statistic, Akaike Information Criterion, and adjusted R-squared values. Three-quarters of centers changed their adjusted odds ratio decile, 2.6% of centers changed outlier status, and 45% of centers exhibited a ≥0.5-SD change in the odds ratio of death after including highest mGCS score in the model. This study supports the concept that additional clinical information has the potential to not only improve the performance of current risk-adjustment models, but can also have a meaningful impact on external benchmarking strategies. Highest mGCS score is a good potential candidate for inclusion in additional models. Copyright © 2017 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
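
    The model comparison described above rests on discrimination measures such as the C-statistic. A minimal sketch (hypothetical predicted risks and outcomes, not TQIP data) of the C-statistic computed as a concordance probability:

```python
# Illustrative sketch: the C-statistic (area under the ROC curve) as the
# probability that a randomly chosen death has a higher predicted risk
# than a randomly chosen survivor, with ties counted as one half.

def c_statistic(risks, died):
    """risks: predicted probabilities; died: 0/1 outcomes."""
    pos = [r for r, d in zip(risks, died) if d == 1]
    neg = [r for r, d in zip(risks, died) if d == 0]
    conc = sum(1.0 for p in pos for n in neg if p > n)
    ties = sum(1.0 for p in pos for n in neg if p == n)
    return (conc + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical risks from two candidate risk-adjustment models.
outcome = [1, 1, 1, 0, 0, 0]
model_initial = [0.8, 0.6, 0.4, 0.5, 0.3, 0.2]   # initial mGCS only
model_both = [0.9, 0.7, 0.6, 0.5, 0.3, 0.2]      # initial + highest mGCS
print(c_statistic(model_initial, outcome), c_statistic(model_both, outcome))
```

A higher C-statistic for the second model mirrors the abstract's finding that adding the highest mGCS score improves discrimination.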

  20. To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2007-01-01

    We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'
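
    A schematic version of the coupling described above, under simplifying assumptions (mean-zero Gaussian measure with small covariance, analytic functional with f(0)=0), is the standard second-order expansion:

```latex
% Schematic form of the asymptotic equality (assumptions: mean-zero
% Gaussian measure \mu_\varepsilon on H with covariance \varepsilon B,
% analytic functional f with f(0)=0):
\int_H f(\psi)\, d\mu_\varepsilon(\psi)
  \;=\; \frac{\varepsilon}{2}\,\mathrm{Tr}\!\left[B\, f''(0)\right]
  \;+\; o(\varepsilon), \qquad \varepsilon \to 0,
% so that, after normalization, the classical (integral) average matches
% the quantum (von Neumann) average \langle A \rangle_\rho
%   = \mathrm{Tr}(\rho A),
% with \rho \propto B and A = \tfrac{1}{2} f''(0).
```

This is only a sketch of the general mechanism; the paper works with the precise infinite-dimensional setting (Hilbert and nuclear Fréchet spaces).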

  1. Determining average path length and average trapping time on generalized dual dendrimer

    Science.gov (United States)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some cases, during a transport or diffusion process, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., the trap placed on a central node and the trap uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
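
    As a concrete illustration of the APL quantity studied here, a minimal breadth-first-search computation on a small hypothetical graph (not the generalized dual dendrimer itself):

```python
# Sketch: average path length (APL) of a small unweighted connected graph
# by breadth-first search from every node.
from collections import deque

def average_path_length(adj):
    """adj: {node: [neighbors]}; mean shortest-path length over all
    unordered node pairs."""
    nodes = list(adj)
    total, pairs = 0, 0
    for i, src in enumerate(nodes):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for dst in nodes[i + 1:]:
            total += dist[dst]
            pairs += 1
    return total / pairs

# A triangle with one pendant node: pair distances 1,1,2,1,2,1 -> APL = 8/6.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(average_path_length(adj))
```

On the dendrimer family studied in the paper, the analogous quantity grows logarithmically with the number of nodes.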

  2. Public power costs less

    International Nuclear Information System (INIS)

    Moody, D.

    1993-01-01

    The reasons why residential customers of public power utilities paid less for power than private-sector customers are discussed. Residential customers of investor-owned utilities (IOUs) paid average rates that were 28% above those paid by customers of publicly owned systems during 1990. The reasons for this disparity are that management costs faced by public power systems are below those of private power companies, indicating greater management efficiency among public power systems, and that customer accounts expenses averaged $33.00 per customer for publicly owned electric utilities compared to $39.00 per customer for private utilities

  3. Ground-state energies and highest occupied eigenvalues of atoms in exchange-only density-functional theory

    Science.gov (United States)

    Li, Yan; Harbola, Manoj K.; Krieger, J. B.; Sahni, Viraht

    1989-11-01

    The exchange-correlation potential of the Kohn-Sham density-functional theory has recently been interpreted as the work required to move an electron against the electric field of its Fermi-Coulomb hole charge distribution. In this paper we present self-consistent results for ground-state total energies and highest occupied eigenvalues of closed subshell atoms as obtained by this formalism in the exchange-only approximation. The total energies, which are an upper bound, lie within 50 ppm of Hartree-Fock theory for atoms heavier than Be. The highest occupied eigenvalues, as a consequence of this interpretation, approximate well the experimental ionization potentials. In addition, the self-consistently calculated exchange potentials are very close to those of Talman and co-workers [J. D. Talman and W. F. Shadwick, Phys. Rev. A 14, 36 (1976); K. Aashamar, T. M. Luke, and J. D. Talman, At. Data Nucl. Data Tables 22, 443 (1978)].

  4. Stochastic Growth Theory of Spatially-Averaged Distributions of Langmuir Fields in Earth's Foreshock

    Science.gov (United States)

    Boshuizen, Christopher R.; Cairns, Iver H.; Robinson, P. A.

    2001-01-01

    Langmuir-like waves in the foreshock of Earth are characteristically bursty and irregular, and are the subject of a number of recent studies. Averaged over the foreshock, it is observed that the probability distribution P(bar)(log E) in the wave field E is a power law, with the bar denoting this averaging over position. In this paper it is shown that stochastic growth theory (SGT) can explain such power-law spatially-averaged distributions P(bar)(log E) when the observed power-law variations of the mean and standard deviation of log E with position are combined with the lognormal statistics predicted by SGT at each location.

  5. An Experimental Observation of Axial Variation of Average Size of Methane Clusters in a Gas Jet

    International Nuclear Information System (INIS)

    Ji-Feng, Han; Chao-Wen, Yang; Jing-Wei, Miao; Jian-Feng, Lu; Meng, Liu; Xiao-Bing, Luo; Mian-Gong, Shi

    2010-01-01

    Axial variation of the average size of methane clusters in a gas jet produced by supersonic expansion of methane through a cylindrical nozzle 0.8 mm in diameter is observed using a Rayleigh scattering method. The scattered light intensity exhibits a power-law scaling with the backing pressure ranging from 16 to 50 bar, and the exponent is strongly Z dependent, varying from 8.4 (Z = 3 mm) to 5.4 (Z = 11 mm), which is much larger than that of the argon cluster. The scattered light intensity versus axial position shows that the position of 5 mm has the maximum signal intensity. The estimation of the average cluster size as a function of axial position Z indicates that the cluster growth process goes forward until the maximum average cluster size is reached at Z = 9 mm, and the average cluster size decreases gradually for Z > 9 mm
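
    The pressure scaling reported here is a power law, I ∝ P^k. A minimal sketch of how such an exponent is typically extracted, via least squares on log-log data (the points below are synthetic, generated with k = 8.4; they are not the measured data):

```python
# Sketch: recover the power-law exponent k in I = c * P**k from
# (pressure, intensity) pairs by a least-squares line fit in log-log
# coordinates (the slope of log I vs log P is k).
import math

def fit_power_law(pressures, intensities):
    """Return the exponent k of I = c * P**k via least squares on logs."""
    xs = [math.log(p) for p in pressures]
    ys = [math.log(i) for i in intensities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

pressures = [16, 24, 32, 40, 50]                    # backing pressure, bar
intensities = [2.0 * p ** 8.4 for p in pressures]   # exact synthetic data
print(round(fit_power_law(pressures, intensities), 3))  # -> 8.4
```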

  6. High-Performance Control in Radio Frequency Power Amplification Systems

    DEFF Research Database (Denmark)

    Høyerby, Mikkel Christian Kofod

    ... It is clearly shown that single-phase switch-mode control systems based on oscillation (controlled unstable operation) of the whole power train provide the highest possible control bandwidth. A study of the limitations of cartesian feedback is also included. It is shown that bandwidths in excess of 4 MHz can ... frequency power amplifiers (RFPAs) in conjunction with cartesian feedback (CFB) used to linearize the overall transmitter system. On a system level, it is demonstrated how envelope tracking is particularly useful for RF carriers with high peak-to-average power ratios, such as TEDS with 10 dB. It is also ... demonstrated how the envelope tracking technique introduces a number of potential pitfalls to the system, namely in the form of power supply ripple intermodulation (PSIM), reduced RFPA linearity and a higher-impedance supply rail for the RFPA. Design and analysis techniques for these three issues are introduced ...
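
    The peak-to-average power ratio quoted for TEDS carriers is a simple statistic of the envelope samples. A minimal sketch (the sample values are illustrative, not TEDS waveforms):

```python
# Sketch: peak-to-average power ratio (PAPR) in dB of a sampled envelope,
# the quantity quoted as ~10 dB for TEDS carriers.
import math

def papr_db(samples):
    """PAPR in dB: peak instantaneous power over mean power."""
    powers = [s * s for s in samples]
    return 10.0 * math.log10(max(powers) / (sum(powers) / len(powers)))

flat = [1.0, 1.0, 1.0, 1.0]    # constant envelope -> 0 dB PAPR
peaky = [0.1, 0.1, 0.1, 3.0]   # occasional peaks -> high PAPR
print(round(papr_db(flat), 2), round(papr_db(peaky), 2))
```

High-PAPR carriers are exactly the case where envelope tracking pays off, since the supply rail can follow the envelope instead of sitting at the peak.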

  7. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the average...

  8. Average and local structure of α-CuI by configurational averaging

    International Nuclear Information System (INIS)

    Mohn, Chris E; Stoelen, Svein

    2007-01-01

    Configurational Boltzmann averaging together with density functional theory are used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out with peaks in the atom-density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond-angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion-pathways is expected due to the large variation in the local motifs

  9. The Experience Elicited by Hallucinogens Presents the Highest Similarity to Dreaming within a Large Database of Psychoactive Substance Reports

    Science.gov (United States)

    Sanz, Camila; Zamberlan, Federico; Erowid, Earth; Erowid, Fire; Tagliazucchi, Enzo

    2018-01-01

    Ever since the modern rediscovery of psychedelic substances by Western society, several authors have independently proposed that their effects bear a high resemblance to the dreams and dreamlike experiences occurring naturally during the sleep-wake cycle. Recent studies in humans have provided neurophysiological evidence supporting this hypothesis. However, a rigorous comparative analysis of the phenomenology (“what it feels like” to experience these states) is currently lacking. We investigated the semantic similarity between a large number of subjective reports of psychoactive substances and reports of high/low lucidity dreams, and found that the highest-ranking substance in terms of the similarity to high lucidity dreams was the serotonergic psychedelic lysergic acid diethylamide (LSD), whereas the highest-ranking in terms of the similarity to dreams of low lucidity were plants of the Datura genus, rich in deliriant tropane alkaloids. Conversely, sedatives, stimulants, antipsychotics, and antidepressants comprised most of the lowest-ranking substances. An analysis of the most frequent words in the subjective reports of dreams and hallucinogens revealed that terms associated with perception (“see,” “visual,” “face,” “reality,” “color”), emotion (“fear”), setting (“outside,” “inside,” “street,” “front,” “behind”) and relatives (“mom,” “dad,” “brother,” “parent,” “family”) were the most prevalent across both experiences. In summary, we applied novel quantitative analyses to a large volume of empirical data to confirm the hypothesis that, among all psychoactive substances, hallucinogen drugs elicit experiences with the highest semantic similarity to those of dreams. Our results and the associated methodological developments open the way to study the comparative phenomenology of different altered states of consciousness and its relationship with non-invasive measurements of brain physiology.

  10. The Experience Elicited by Hallucinogens Presents the Highest Similarity to Dreaming within a Large Database of Psychoactive Substance Reports

    Directory of Open Access Journals (Sweden)

    Camila Sanz

    2018-01-01

    Full Text Available Ever since the modern rediscovery of psychedelic substances by Western society, several authors have independently proposed that their effects bear a high resemblance to the dreams and dreamlike experiences occurring naturally during the sleep-wake cycle. Recent studies in humans have provided neurophysiological evidence supporting this hypothesis. However, a rigorous comparative analysis of the phenomenology (“what it feels like” to experience these states) is currently lacking. We investigated the semantic similarity between a large number of subjective reports of psychoactive substances and reports of high/low lucidity dreams, and found that the highest-ranking substance in terms of the similarity to high lucidity dreams was the serotonergic psychedelic lysergic acid diethylamide (LSD), whereas the highest-ranking in terms of the similarity to dreams of low lucidity were plants of the Datura genus, rich in deliriant tropane alkaloids. Conversely, sedatives, stimulants, antipsychotics, and antidepressants comprised most of the lowest-ranking substances. An analysis of the most frequent words in the subjective reports of dreams and hallucinogens revealed that terms associated with perception (“see,” “visual,” “face,” “reality,” “color”), emotion (“fear”), setting (“outside,” “inside,” “street,” “front,” “behind”) and relatives (“mom,” “dad,” “brother,” “parent,” “family”) were the most prevalent across both experiences. In summary, we applied novel quantitative analyses to a large volume of empirical data to confirm the hypothesis that, among all psychoactive substances, hallucinogen drugs elicit experiences with the highest semantic similarity to those of dreams. Our results and the associated methodological developments open the way to study the comparative phenomenology of different altered states of consciousness and its relationship with non-invasive measurements of brain

  11. Communication: The highest frequency hydrogen bond vibration and an experimental value for the dissociation energy of formic acid dimer

    DEFF Research Database (Denmark)

    Kollipost, F.; Larsen, René Wugt; Domanskaya, A.V.

    2012-01-01

    The highest frequency hydrogen bond fundamental of formic acid dimer, ν24 (Bu), is experimentally located at 264 cm−1. FTIR spectra of this in-plane bending mode of (HCOOH)2 and band centers of its symmetric D isotopologues (isotopomers) recorded in a supersonic slit jet expansion are presented ... thermodynamics treatment of the dimerization process up to room temperature. We obtain D0 = 59.5(5) kJ/mol as the best experimental estimate for the dimer dissociation energy at 0 K. Further improvements have to wait for a more consistent determination of the room temperature equilibrium constant ...

  12. The Physikalisch-Technische Bundesanstalt PTB (physical-technical Federal institution) - research institute and highest technical authority

    International Nuclear Information System (INIS)

    Klages, H.

    1976-01-01

    The PTB Braunschweig and Berlin is a Federal institution for the natural sciences and engineering and the highest technical authority for measurements. It is subject to the directions of the Federal Ministry for Economic Affairs. Its main tasks are the representation, maintenance and development of the physical units and, in connection with this, research, testing, and the granting of approvals for the calibration of measuring equipment, as well as type examinations and type approvals. The types of measuring equipment concerned are described. Many examinations are carried out on a voluntary basis. The advisory activities and the PTB's publications are also reported on. An organizational chart shows the structure of the PTB. (orig.) [de

  13. The Licancabur Project: Exploring the Limits of Life in the Highest Lake on Earth as an Analog to Martian Paleolakes

    Science.gov (United States)

    Cabrol, N. A.; Grin, E. A.; McKay, C. P.; Friedmann, I.; Diaz, G. Chong; Demergasso, C.; Kisse, K.; Grigorszky, I.; Friedmann, R. Ocampo; Hock, A.

    2003-01-01

    The Licancabur volcano (6017 m) hosts in its summit crater the highest and one of the least explored lakes in the world. It is located at 22°50′S / 67°53′W on the boundary of Chile and Bolivia in the High Andes. In a freezing environment, the lake, situated in a volcano-tectonic setting, combines low oxygen and low atmospheric pressure due to altitude with high UV radiation (see table). However, its bottom water temperature remains above 0°C year-round. These conditions make Licancabur a unique analog to Martian paleolakes, considered high-priority sites for the search for life on Mars.

  14. Potential need for re-definition of the highest priority recovery action in the Krsko SAG-1

    International Nuclear Information System (INIS)

    Bilic Zabric, T.; Basic, I.

    2005-01-01

    Replacement of the old steam generators (SGs) [7] and the characteristics of the new ones raise the question of the proper accident management strategy, which rests on the philosophy that repair and recovery actions have first priority. In the current NPP Krsko SAMGs (Severe Accident Management Guidelines), water supply to the SGs has priority over re-injection of water into the core. NPP Krsko reconsidered the highest priority of SAG-1 (inject water into the SG) against the WOG (Westinghouse Owners Group) generic approach (inject water into the core) and a potential revision of the Severe Accident Phenomenology Evaluations using the MAAP (Modular Accident Analysis Program) 4.0.5 code. (author)

  15. Justification of the averaging method for parabolic equations containing rapidly oscillating terms with large amplitudes

    International Nuclear Information System (INIS)

    Levenshtam, V B

    2006-01-01

    We justify the averaging method for abstract parabolic equations with stationary principal part that contain non-linearities (subordinate to the principal part) some of whose terms are rapidly oscillating in time with zero mean and are proportional to the square root of the frequency of oscillation. Our interest in the exponent 1/2 is motivated by the fact that terms proportional to lower powers of the frequency have no influence on the average. For linear equations of the same type, we justify an algorithm for the study of the stability of solutions in the case when the stationary averaged problem has eigenvalues on the imaginary axis (the critical case)

  16. High power infrared QCLs: advances and applications

    Science.gov (United States)

    Patel, C. Kumar N.

    2012-01-01

    QCLs are becoming the most important sources of laser radiation in the midwave infrared (MWIR) and longwave infrared (LWIR) regions because of their size, weight, power and reliability advantages over other laser sources in the same spectral regions. The availability of multiwatt RT operation QCLs from 3.5 μm to >16 μm with wall plug efficiency of 10% or higher is hastening the replacement of traditional sources such as OPOs and OPSELs in many applications. QCLs can replace CO2 lasers in many low power applications. Of the two leading groups in improvements in QCL performance, Pranalytica is the commercial organization that has been supplying the highest performance QCLs to various customers for over four years. Using a new QCL design concept, the non-resonant extraction [1], we have achieved CW/RT power of >4.7 W and WPE of >17% in the 4.4 μm - 5.0 μm region. In the LWIR region, we have recently demonstrated QCLs with CW/RT power exceeding 1 W with WPE of nearly 10% in the 7.0 μm - 10.0 μm region. In general, the high power CW/RT operation requires use of TECs to maintain QCLs at appropriate operating temperatures. However, TECs consume additional electrical power, which is not desirable for handheld, battery-operated applications, where system power conversion efficiency is more important than just the QCL chip level power conversion efficiency. In high duty cycle pulsed (quasi-CW) mode, the QCLs can be operated without TECs and have produced nearly the same average power as that available in CW mode with TECs. Multiwatt average powers are obtained even in ambient T>70°C, with true efficiency of electrical power-to-optical power conversion being above 10%. Because of the availability of QCLs with multiwatt power outputs and wavelength range covering a spectral region from ~3.5 μm to >16 μm, the QCLs have found instantaneous acceptance for insertion into a multitude of defense and homeland security applications, including laser sources for infrared

  17. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

    The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed
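
    The average beam power underlying such a safety envelope is just bunch charge times repetition rate times beam energy. A minimal sketch with illustrative numbers (not the official LCLS safety-envelope values):

```python
# Sketch: average electron beam power. Since charge [C] times accelerating
# potential [V] gives energy in joules, 1 nC at 1 GeV carries 1 J, so
# P_avg = Q * f * E.

def average_beam_power_w(charge_nc, rep_rate_hz, energy_gev):
    """Average power in watts from bunch charge [nC], rep rate [Hz],
    and beam energy [GeV]."""
    return charge_nc * 1e-9 * rep_rate_hz * energy_gev * 1e9

# Illustrative numbers: 1 nC per pulse at 120 Hz and 14 GeV -> 1680 W.
print(average_beam_power_w(1.0, 120, 14.0))
```

A maximum-credible-power argument then asks for the largest Q, f, and E the machine could physically deliver with all protection devices failed.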

  18. Large-signal analysis of DC motor drive system using state-space averaging technique

    International Nuclear Information System (INIS)

    Bekir Yildiz, Ali

    2008-01-01

    The analysis of a separately excited DC motor driven by a DC-DC converter is realized by using the state-space averaging technique. Firstly, a general and unified large-signal averaged circuit model for DC-DC converters is given. The method converts power electronic systems, which are periodically time-variant because of their switching operation, into unified, time-independent systems. Using the averaged circuit model enables us to combine the different topologies of converters. Thus, all analysis and design processes for the DC motor can easily be carried out by using the unified averaged model, which is valid over the whole period. Some large-signal variations such as speed and current relating to the DC motor, steady-state analysis, and large-signal and small-signal transfer functions are easily obtained by using the averaged circuit model
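
    The averaging step itself can be sketched in a few lines: the two switched state-space models are combined with the duty ratio as the weight. The matrices below are hypothetical 2x2 examples, not a specific converter-motor system:

```python
# Sketch of state-space averaging: a converter switching between
# (A1, B1) and (A2, B2) with duty ratio d is replaced by the
# time-invariant averaged model
#   dx/dt = (d*A1 + (1-d)*A2) x + (d*B1 + (1-d)*B2) u.

def averaged_model(A1, B1, A2, B2, d):
    """Element-wise convex combination of the two switched-state models."""
    def avg(M1, M2):
        return [[d * a + (1 - d) * b for a, b in zip(r1, r2)]
                for r1, r2 in zip(M1, M2)]
    return avg(A1, A2), avg(B1, B2)

# Hypothetical switch-on / switch-off models: same dynamics, the input
# is simply disconnected during the off interval.
A_on = [[0.0, -1.0], [1.0, -2.0]]
B_on = [[1.0], [0.0]]
A_off = [[0.0, -1.0], [1.0, -2.0]]
B_off = [[0.0], [0.0]]
A_avg, B_avg = averaged_model(A_on, B_on, A_off, B_off, 0.4)
print(B_avg)
```

With identical A matrices, only the input matrix is scaled by the duty ratio, which is the familiar averaged behavior of an ideal chopper feeding the motor.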

  19. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    Science.gov (United States)

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, not enough research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for the real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in the three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; meanwhile, the real-time model was also used at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Highest recorded electrical conductivity and microstructure in polypropylene-carbon nanotubes composites and the effect of carbon nanofibers addition

    Science.gov (United States)

    Ramírez-Herrera, C. A.; Pérez-González, J.; Solorza-Feria, O.; Romero-Partida, N.; Flores-Vela, A.; Cabañas-Moreno, J. G.

    2018-04-01

    In the last decade, numerous investigations have been devoted to the preparation of polypropylene-multiwalled carbon nanotubes (PP/MWCNT) nanocomposites having enhanced properties, and in particular, high electrical conductivities (> 1 S cm-1). The present work establishes that the highest electrical conductivity in PP/MWCNT nanocomposites is limited by the amount of nanofiller content which can be incorporated in the polymer matrix, namely, about 20 wt%. This concentration of MWCNT in PP leads to a maximum electrical conductivity slightly lower than 8 S cm-1, but only by assuring an adequate combination of dispersion and spatial distribution of the carbon nanotubes. The realization of such an optimal microstructure depends on the characteristics of the production process of the PP/MWCNT nanocomposites; in our experiments, involving composite fabrication by melt mixing and hot pressing, a second re-processing cycle is shown to increase the electrical conductivity values by up to two orders of magnitude, depending on the MWCNT content of the nanocomposite. A modest increase of the highest electrical conductivity obtained in nanocomposites with 21.5 wt% MWCNT content has been produced by the combined use of carbon nanofibers (CNF) and MWCNT, so that the total nanofiller content was increased to 30 wt% in the nanocomposite with PP—15 wt% MWCNT—15 wt% CNF.

  1. Presentation and verification of a simple mathematical model foridentification of the areas behind noise barrierwith the highest performance

    Directory of Open Access Journals (Sweden)

    M. Monazzam

    2009-07-01

    Full Text Available Background and aims: Traffic noise barriers are the most important measure for controlling environmental noise pollution. Diffraction from the top edge of a noise barrier is the most important path by which indirect sound waves travel toward the receiver; therefore, most studies focus on improving this edge. Methods: T-shape profile barriers are among the most successful barrier profiles. This investigation uses the theory of the destructive interference between the wave diffracted from the real edge of the barrier and the wave diffracted from the barrier's image, with a phase difference of π radians. First, a simple mathematical representation of the zones behind rigid and absorbent T-shape barriers with the highest insertion loss is introduced, using the destructive effect of the indirect path via the barrier image; two different profile barriers, reflective and absorptive, are then used to verify the introduced model. Results: The results are compared with those of a verified two-dimensional boundary element method at 1/3-octave band frequencies over a wide field behind the barriers. Very good agreement between the results has been achieved. In this method an effective height is used for barriers of any profile. Conclusion: The introduced model is very simple, flexible and fast, and could be used for choosing the best location of rigid and absorptive profile barriers to achieve the highest performance.

  2. The Idea of a Highest Divine Principle — Founding Reason and Spirituality. A Necessary Concept of a Comparative Philosophy?

    Directory of Open Access Journals (Sweden)

    Claudia Bickmann

    2012-10-01

    Full Text Available By reference to the Platonic, Aristotelian, and Neo-Platonic philosophical traditions (and then to German Idealism, including Husserl and Heidegger), I will indicate the way in which the concept of reason—on the one side—depends on the horizon of spirituality (by searching for the ultimate ground within us and the striving for the highest good); and inversely—how far the idea of the divine or our spiritual self may be deepened, understood and transmitted by reference to reason and rationality. But whereas philosophical analysis aims at the universal dimensions of spirituality or the divine (as in Plato's idea of the 'highest good', the Aristotelian 'Absolute substance', the 'Oneness of the One' (Plotinus and the Neo-Platonists), or the Hegelian 'Absolute spirit'), Comparative Theology may preserve the dimension of spirituality or divinity in its individuality and specificity. Comparative Theology mediates between the universality of the philosophical discourse and the uniqueness of our individual experience (symbolized by a sacred person—such as Jesus, Brahman, Buddha or Mohammed) by reflecting and analyzing our religious experiences and practices. Religion may lose its specificity by comparative conceptual analysis within the field of philosophy, but Comparative Theology may enhance the vital dimensions of the very same spiritual experience by placing them in a comparative perspective.

  3. Stable, high power, high efficiency picosecond ultraviolet generation at 355 nm in K3B6O10 Br crystal

    Science.gov (United States)

    Hou, Z. Y.; Wang, L. R.; Xia, M. J.; Yan, D. X.; Zhang, Q. L.; Zhang, L.; Liu, L. J.; Xu, D. G.; Zhang, D. X.; Wang, X. Y.; Li, R. K.; Chen, C. T.

    2018-06-01

    We demonstrate a high efficiency and high power picosecond ultraviolet source at 355 nm with stable output, generated by sum frequency generation from a Nd:YAG laser using a type-I critically phase-matched K3B6O10Br crystal as the nonlinear optical material. A conversion efficiency as high as 30.8% was achieved using a 25 ps laser at 1064 nm operated at 10 Hz. Similar work was done using a 35 W, 10 ps laser at 1064 nm with a repetition rate of 80 MHz as the pump source; the highest average output power obtained was 5.3 W. In addition, measurement of the stability of the 355 nm output shows that the standard deviation fluctuations of the average power are ±0.69% and ±0.91% at 3.0 W and 3.5 W, respectively.
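As a quick sanity check on the quoted figures (35 W pump, 5.3 W at 355 nm), the conversion efficiency and a relative power stability of the kind reported as a percentage standard deviation can be recomputed; this is illustrative arithmetic only, not the authors' measurement procedure:

```python
def conversion_efficiency(p_out_w, p_pump_w):
    """Fraction of pump power converted, e.g. 1064 nm -> 355 nm."""
    return p_out_w / p_pump_w

# Figures quoted in the abstract: 35 W pump at 1064 nm, 5.3 W at 355 nm.
eta = conversion_efficiency(5.3, 35.0)
print(f"355 nm conversion efficiency: {eta:.1%}")  # ~15.1%

def relative_std(samples):
    """Relative power stability as (standard deviation / mean) of power samples."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return (var ** 0.5) / mean
```

Applied to a logged power trace, `relative_std` yields the kind of ±0.69% figure the abstract cites.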

  4. A Single Image Dehazing Method Using Average Saturation Prior

    Directory of Open Access Journals (Sweden)

    Zhenfei Gu

    2017-01-01

    Full Text Available Outdoor images captured in bad weather are prone to yield poor visibility, which is a fatal problem for most computer vision applications. The majority of existing dehazing methods rely on an atmospheric scattering model and therefore share a common limitation; that is, the model is only valid when the atmosphere is homogeneous. In this paper, we propose an improved atmospheric scattering model to overcome this inherent limitation. By adopting the proposed model, a corresponding dehazing method is also presented. In this method, we first create a haze density distribution map of a hazy image, which enables us to segment the hazy image into scenes according to the haze density similarity. Then, in order to improve the atmospheric light estimation accuracy, we define an effective weight assignment function to locate a candidate scene based on the scene segmentation results and therefore avoid most potential errors. Next, we propose a simple but powerful prior named the average saturation prior (ASP, which is a statistic of extensive high-definition outdoor images. Using this prior combined with the improved atmospheric scattering model, we can directly estimate the scene atmospheric scattering coefficient and restore the scene albedo. The experimental results verify that our model is physically valid, and the proposed method outperforms several state-of-the-art single image dehazing methods in terms of both robustness and effectiveness.
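The abstract does not define how the average saturation prior is computed. As a hypothetical sketch of the statistic such a prior plausibly builds on, per-pixel saturation can be taken as S = 1 - min(R,G,B)/max(R,G,B) and averaged over a scene segment; the function names and toy pixels below are our own, not from the paper:

```python
import numpy as np

def saturation(img):
    """Per-pixel saturation S = 1 - min(R,G,B)/max(R,G,B) for an HxWx3 array in [0,1]."""
    mx = img.max(axis=2)
    mn = img.min(axis=2)
    return np.where(mx > 0, 1.0 - mn / np.maximum(mx, 1e-12), 0.0)

def average_saturation(img, mask=None):
    """Scene-averaged saturation; `mask` selects one segmented scene (True = member)."""
    s = saturation(img)
    return float(s[mask].mean()) if mask is not None else float(s.mean())

# Haze blends pixels toward the (grey) airlight, which lowers saturation:
clear = np.array([[[0.8, 0.2, 0.1]]])                  # one vivid pixel
hazy = 0.5 * clear + 0.5 * np.array([1.0, 1.0, 1.0])   # blended with white airlight
print(average_saturation(clear), average_saturation(hazy))  # clear pixel is far more saturated
```

The drop in average saturation with haze is what makes the statistic usable as a prior for estimating the scattering coefficient.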

  5. Who jumps the highest? Anthropometric and physiological correlations of vertical jump in youth elite female volleyball players.

    Science.gov (United States)

    Nikolaidis, Pantelis T; Gkoudas, Konstantinos; Afonso, José; Clemente-Suarez, Vicente J; Knechtle, Beat; Kasabalis, Stavros; Kasabalis, Athanasios; Douda, Helen; Tokmakidis, Savvas; Torres-Luque, Gema

    2017-06-01

    The aim of the present study was to examine the relationship of vertical jump (Abalakov jump [AJ]) with anthropometric and physiological parameters in youth elite female volleyball players. Seventy-two selected volleyball players from the region of Athens (age 13.3±0.7 years, body mass 62.0±7.2 kg, height 171.5±5.7 cm, body fat 21.2±4.5%), classified into quartiles according to AJ performance (group A, 21.4-26.5 cm; group B, 26.8-29.9 cm; group C, 30.5-33.7 cm; group D, 33.8-45.9 cm), performed a series of physical fitness tests. AJ was correlated with anthropometric parameters (age at peak height velocity [APHV]: r=0.38, P<0.05); the volleyball players that jumped the highest were those who matured later than others.

  6. Repeated Radionuclide therapy in metastatic paraganglioma leading to the highest reported cumulative activity of 131I-MIBG

    International Nuclear Information System (INIS)

    Ezziddin, Samer; Sabet, Amir; Ko, Yon-Dschun; Xun, Sunny; Matthies, Alexander; Biersack, Hans-Jürgen

    2012-01-01

    131I-MIBG therapy for neuroendocrine tumours may be dose limited. The common range of applied cumulative activities is 10-40 GBq. We report the uneventful cumulative administration of 111 GBq (= 3 Ci) of 131I-MIBG in a patient with metastatic paraganglioma. Ten courses of 131I-MIBG therapy were given within six years, accomplishing symptomatic, hormonal and tumour responses with no serious adverse effects. Chemotherapy with cisplatin/vinblastine/dacarbazine was the final treatment modality with temporary control of disease, but eventually the patient died of progression. The observed cumulative activity of 131I-MIBG represents the highest value reported to our knowledge, and even though 12.6 GBq of 90Y-DOTATOC were added intermediately, no relevant associated bone marrow, hepatic or other toxicity was observed. In an individual attempt to palliate metastatic disease, high cumulative activity alone should not preclude the patient from repeat treatment

  7. Impact of thermal frequency drift on highest precision force microscopy using quartz-based force sensors at low temperatures

    Directory of Open Access Journals (Sweden)

    Florian Pielmeier

    2014-04-01

    Full Text Available In frequency modulation atomic force microscopy (FM-AFM) the stability of the eigenfrequency of the force sensor is of key importance for highest precision force measurements. Here, we study the influence of temperature changes on the resonance frequency of force sensors made of quartz, in a temperature range from 4.8 to 48 K. The sensors are based on the qPlus and length-extensional principles. The frequency variation with temperature T is negative for all sensors up to 30 K and on the order of 1 ppm/K; up to 13 K, where a distinct kink appears, it is linear. Furthermore, we characterize a new type of miniaturized qPlus sensor and confirm the theoretically predicted reduction in detector noise.
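A drift coefficient quoted in ppm/K translates into an absolute frequency shift as Δf = f0 · c · ΔT. A small sketch, assuming a nominal f0 of 32768 Hz (the standard quartz tuning-fork frequency, which the abstract does not state):

```python
def freq_shift_hz(f0_hz, coeff_ppm_per_k, delta_t_k):
    """Absolute frequency shift for a fractional drift coefficient given in ppm/K."""
    return f0_hz * coeff_ppm_per_k * 1e-6 * delta_t_k

# Assumed f0 = 32768 Hz (standard tuning-fork value, not from the abstract);
# -1 ppm/K drift over a 5 K temperature excursion:
print(freq_shift_hz(32768.0, -1.0, 5.0))  # ≈ -0.164 Hz
```

A shift of this size matters because FM-AFM force measurements interpret mHz-level frequency changes as forces.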

  8. Analysis of the highest transverse energy events seen in the UA1 detector at the Spp-barS collider

    International Nuclear Information System (INIS)

    1987-06-01

    The first full solid angle analysis is presented of large transverse energy events in pp-bar collisions at the CERN collider. Events with transverse energies in excess of 200 GeV at √s = 630 GeV are studied for any non-standard physics and quantitatively compared with expectations from perturbative QCD Monte Carlo models. A corrected differential cross section is presented. A detailed examination is made of jet profiles, event jet multiplicities and the fraction of the transverse energy carried by the two jets with the highest transverse jet energies. There is good agreement with standard theory for events with transverse energies up to the largest observed values (approx. √s/2) and the analysis shows no evidence for any non-QCD mechanism to account for the event characteristics. (author)

  9. A common founder mutation in FANCA underlies the world's highest prevalence of Fanconi anemia in Gypsy families from Spain.

    Science.gov (United States)

    Callén, Elsa; Casado, José A; Tischkowitz, Marc D; Bueren, Juan A; Creus, Amadeu; Marcos, Ricard; Dasí, Angeles; Estella, Jesús M; Muñoz, Arturo; Ortega, Juan J; de Winter, Johan; Joenje, Hans; Schindler, Detlev; Hanenberg, Helmut; Hodgson, Shirley V; Mathew, Christopher G; Surrallés, Jordi

    2005-03-01

    Fanconi anemia (FA) is a genetic disease characterized by bone marrow failure and cancer predisposition. Here we have identified Spanish Gypsies as the ethnic group with the world's highest prevalence of FA (carrier frequency of 1/64-1/70). DNA sequencing of the FANCA gene in 8 unrelated Spanish Gypsy FA families after retroviral subtyping revealed a homozygous FANCA mutation (295C>T) leading to FANCA truncation and FA pathway disruption. This mutation appeared specific for Spanish Gypsies as it is not found in other Gypsy patients with FA from Hungary, Germany, Slovakia, and Ireland. Haplotype analysis showed that Spanish Gypsy patients all share the same haplotype. Our data thus suggest that the high incidence of FA among Spanish Gypsies is due to an ancestral founder mutation in FANCA that originated in Spain less than 600 years ago. The high carrier frequency makes the Spanish Gypsies a population model to study FA heterozygote mutations in cancer.

  10. Analysis of the highest transverse energy events seen in the UA1 detector at the Spp-barS collider

    International Nuclear Information System (INIS)

    Albajar, C.; Bezaguet, A.; Cennini, P.

    1987-01-01

    This is the first full solid angle analysis of large transverse energy events in pp-bar collisions at the CERN collider. Events with transverse energies in excess of 200 GeV at √s=630 GeV are studied for any non-standard physics and quantitatively compared with expectations from perturbative QCD Monte Carlo models. A corrected differential cross section is presented. A detailed examination is made of jet profiles, event jet multiplicities and the fraction of the transverse energy carried by the two jets with the highest transverse jet energies. There is good agreement with standard theory for events with transverse energies up to the largest observed values (≅ √s/2) and the analysis shows no evidence for any non-QCD mechanism to account for the event characteristics. (orig.)

  11. Contact with HIV prevention services highest in gay and bisexual men at greatest risk: cross-sectional survey in Scotland

    Directory of Open Access Journals (Sweden)

    Hart Graham J

    2010-12-01

    Full Text Available Abstract Background Men who have sex with men (MSM) remain the group most at risk of acquiring HIV in the UK and new HIV prevention strategies are needed. In this paper, we examine what contact MSM currently have with HIV prevention activities and assess the extent to which these could be utilised further. Methods Anonymous, self-complete questionnaires and Orasure™ oral fluid collection kits were distributed to men visiting the commercial gay scenes in Glasgow and Edinburgh in April/May 2008. 1508 men completed questionnaires (70.5% response rate) and 1277 provided oral fluid samples (59.7% response rate); 1318 men were eligible for inclusion in the analyses. Results 82.5% reported some contact with HIV prevention activities in the past 12 months, 73.1% obtained free condoms from a gay venue or the Internet, 51.1% reported accessing sexual health information (from either leaflets in gay venues or via the Internet), 13.5% reported talking to an outreach worker and 8.0% reported participating in counselling on sexual health or HIV prevention. Contact with HIV prevention activities was associated with frequency of gay scene use and either HIV or other STI testing in the past 12 months, but not with sexual risk behaviours. Utilising counselling was also more likely among men who reported having had an STI in the past 12 months and HIV-positive men. Conclusions Men at highest risk, and those likely to be in contact with sexual health services, are those who report most contact with a range of current HIV prevention activities. Offering combination prevention, including outreach by peer health workers, increased uptake of sexual health services delivering behavioural and biomedical interventions, and supported by social marketing to ensure continued community engagement and support, could be the way forward. Focused investment in the needs of those at highest risk, including those diagnosed HIV-positive, may generate a prevention dividend in the long term.

  12. ASSESSMENT OF PAHS AND SELECTED PESTICIDES IN SHALLOW GROUNDWATER IN THE HIGHEST PROTECTED AREAS IN THE OPOLE REGION, POLAND

    Directory of Open Access Journals (Sweden)

    Mariusz Głowacki

    2014-04-01

    Full Text Available The groundwater quality was determined by analysing water samples from 18 wells. The wells were in the Groundwater Area with the Highest Protection (Triassic water), Opole region, Poland, in rural built-up areas. The water table level was low: 0.5-18.0 m below the ground surface level (except for one artesian well). The following parameters were determined: pH, EC, colour, ammonium, nitrite, nitrate, dissolved orthophosphate, total phosphorus, dissolved oxygen, BOD, COD-Mn, COD-Cr, humic substances, chloride, sulphate, total hardness, alkalinity, dry residue, PAHs (16 compounds) and pesticides (6 compounds); however, only selected data are presented in this paper. In all the analysed water samples chloro-organic pesticides were observed. The analysed water contained heptachlor in the highest concentration, 15.97 mg/dm3. Good quality water must not contain more than 0.5 mg/dm3 of heptachlor; the measured concentration was circa 32 times higher than this limit. The second pesticide determining poor water quality is dieldrin: its concentration in the investigated groundwater was 1.94 mg/dm3, 4 times higher than the limit for ground water of acceptable quality. The concentration of pesticides also changed over the course of the research; the concentration in the same well changed quite dramatically over a period of 1 year. Although PAHs and pesticides are potentially toxic for biological organisms, they do exist in the environment as products of the natural biological transformation of organic matter. The noted concentrations and compositions of PAH compounds were different from natural PAHs, which confirms that agricultural activity influences groundwater quality.

  13. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...
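As an illustration of the experimental procedure described (sampling the fluctuating signal only when a condition is fulfilled at the reference position), a synthetic sketch follows; the threshold and window length are arbitrary assumptions:

```python
import numpy as np

def conditional_average(signal, condition, half_window):
    """Average windows of `signal` centred on the samples where `condition` holds."""
    idx = np.flatnonzero(condition)
    # Keep only trigger events whose full window fits inside the record.
    idx = idx[(idx >= half_window) & (idx < len(signal) - half_window)]
    if idx.size == 0:
        return None
    segments = np.stack([signal[i - half_window:i + half_window + 1] for i in idx])
    return segments.mean(axis=0)

rng = np.random.default_rng(0)
t = np.arange(4000)
# Coherent wave buried in noise:
signal = np.cos(2 * np.pi * t / 50) + 0.5 * rng.standard_normal(t.size)
# Condition: the fluctuation exceeds two standard deviations at the reference position.
cond = signal > 2.0 * signal.std()
avg = conditional_average(signal, cond, half_window=25)
print(avg.max())  # the coherent structure survives; incoherent noise averages out
```

The conditionally averaged waveform retains the coherent structure phase-locked to the trigger condition, which is exactly why the technique is useful for turbulent plasma fluctuations.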

  14. Experimental demonstration of squeezed-state quantum averaging

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...
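The statistic the scheme implements, the harmonic mean of quadrature variances, is easy to illustrate numerically (toy variances, not the experiment's optical values):

```python
def harmonic_mean(values):
    """Harmonic mean n / sum(1/x); dominated by the smallest (best-squeezed) variance."""
    return len(values) / sum(1.0 / v for v in values)

v1, v2 = 0.5, 2.0            # two quadrature variances (vacuum = 1, squeezed < 1)
hm = harmonic_mean([v1, v2])
am = (v1 + v2) / 2
print(hm, am)  # 0.8 vs 1.25: the harmonic mean stays below the arithmetic mean
```

Because the harmonic mean is pulled toward the smaller variance, averaging a squeezed variance with a noisier one in this sense preserves more of the squeezing than an arithmetic average would.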

  15. 18 CFR 301.4 - Exchange Period Average System Cost determination.

    Science.gov (United States)

    2010-04-01

    ... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE... Period and extend through four (4) years after the Exchange Period. The load forecast for Contract System... Utility's ASC until the change in service territory takes place. (g) ASC determination for Consumer-owned...

  16. On critical cases in limit theory for stationary increments Lévy driven moving averages

    DEFF Research Database (Denmark)

    Basse-O'Connor, Andreas; Podolskij, Mark

    The limit theory heavily depends on the interplay between the given order of the increments, the considered power, the Blumenthal-Getoor index of the driving pure jump Lévy process L and the behavior of the kernel function g at 0. In this work we will study the critical cases, which were...

  17. Averaging hydraulic head, pressure head, and gravitational head in subsurface hydrology, and implications for averaged fluxes, and hydraulic conductivity

    Directory of Open Access Journals (Sweden)

    G. H. de Rooij

    2009-07-01

    Full Text Available Current theories for water flow in porous media are valid for scales much smaller than those at which problems of public interest manifest themselves. This provides a drive for upscaled flow equations with their associated upscaled parameters. Upscaling is often achieved through volume averaging, but the solution to the resulting closure problem imposes severe restrictions on the flow conditions that limit the practical applicability. Here, the derivation of a closed expression of the effective hydraulic conductivity is forfeited to circumvent the closure problem. Thus, more limited but practical results can be derived. At the Representative Elementary Volume scale and larger scales, the gravitational potential and fluid pressure are treated as additive potentials. The necessary requirement that the superposition be maintained across scales is combined with conservation of energy during volume integration to establish consistent upscaling equations for the various heads. The power of these upscaling equations is demonstrated by the derivation of upscaled water content-matric head relationships and the resolution of an apparent paradox reported in the literature that is shown to have arisen from a violation of the superposition principle. Applying the upscaling procedure to Darcy's Law leads to the general definition of an upscaled hydraulic conductivity. By examining this definition in detail for porous media with different degrees of heterogeneity, a series of criteria is derived that must be satisfied for Darcy's Law to remain valid at a larger scale.
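The paper's general definition of the upscaled conductivity is not reproduced in the abstract, but a standard textbook example shows why the effective value depends on the flow configuration: for a layered medium, flow parallel to the layers averages conductivities arithmetically, while flow across them averages harmonically (illustrative values below, not the paper's data):

```python
def arithmetic_mean_K(k, d):
    """Effective K for flow parallel to layers with conductivities k, thicknesses d."""
    return sum(ki * di for ki, di in zip(k, d)) / sum(d)

def harmonic_mean_K(k, d):
    """Effective K for flow perpendicular to the same layers."""
    return sum(d) / sum(di / ki for ki, di in zip(k, d))

k = [10.0, 0.1]   # m/day: a sand layer over a clay layer (illustrative values)
d = [1.0, 1.0]    # equal thicknesses
print(arithmetic_mean_K(k, d), harmonic_mean_K(k, d))  # 5.05 vs ~0.198
```

The two effective values differ by a factor of ~25 for the same medium, which is why a single upscaled conductivity only exists under the kinds of criteria the paper derives.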

  18. The flattening of the average potential in models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.

    1993-01-01

    The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)

  19. 20 CFR 404.220 - Average-monthly-wage method.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...

  20. A time-averaged cosmic ray propagation theory

    International Nuclear Information System (INIS)

    Klimas, A.J.

    1975-01-01

    An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.) [de
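The distinction the abstract draws can be illustrated with a toy stochastic process: the ensemble average is taken across realizations at a fixed time, the time average along a single realization; for an ergodic process the two agree (synthetic data, our own illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
# 500 ensemble members (e.g. magnetic-field realizations) x 2000 time steps.
realizations = rng.standard_normal((500, 2000))

time_avg_of_one = realizations[0].mean()        # average over time, one realization
ensemble_avg_at_t = realizations[:, 0].mean()   # average over the ensemble, one instant

print(time_avg_of_one, ensemble_avg_at_t)       # both near 0 for this ergodic process
```

The paper's point is that for cosmic-ray transport the appropriate magnetic-field ensemble is hard to justify, whereas the time average is what a single heliosphere actually presents to the particles.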

  1. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  2. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    Ampadu Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT

  3. Averaging in SU(2) open quantum random walk

    Science.gov (United States)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  4. Bioinformatics programs are 31-fold over-represented among the highest impact scientific papers of the past two decades.

    Science.gov (United States)

    Wren, Jonathan D

    2016-09-01

    To analyze the relative proportion of bioinformatics papers and their non-bioinformatics counterparts in the top 20 most cited papers annually for the past two decades. When defining bioinformatics papers as encompassing both those that provide software for data analysis or methods underlying data analysis software, we find that over the past two decades, more than a third (34%) of the most cited papers in science were bioinformatics papers, which is approximately a 31-fold enrichment relative to the total number of bioinformatics papers published. More than half of the most cited papers during this span were bioinformatics papers. Yet, the average 5-year JIF of the top 20 bioinformatics papers was 7.7, whereas the average JIF for the top 20 non-bioinformatics papers was 25.8, significantly higher. Bioinformatics journals also tended to have higher Gini coefficients, suggesting that development of novel bioinformatics resources may be somewhat 'hit or miss'. That is, relative to other fields, bioinformatics produces some programs that are extremely widely adopted and cited, yet there are fewer of intermediate success. jdwren@gmail.com Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
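The Gini coefficient mentioned here is the standard inequality statistic; a minimal sketch over invented citation counts (not data from the paper) shows how a 'hit or miss' distribution drives it toward 1:

```python
def gini(values):
    """Gini coefficient in [0, 1): 0 = perfectly equal, near 1 = concentrated in few items."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Standard formula via sorted-rank weighting.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2.0 * weighted) / (n * total) - (n + 1.0) / n

equal = [100] * 10
hit_or_miss = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1000]  # one blockbuster tool, many unused
print(gini(equal), gini(hit_or_miss))  # ≈ 0.0 and ≈ 0.9
```

Both lists have the same total citation count; only the concentration differs, which is exactly what the Gini coefficient isolates.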

  5. Internal exposure profile of occupational workers of a BWR type atomic power station

    International Nuclear Information System (INIS)

    Hegde, A.G.; Bhat, I.S.

    1979-01-01

    The internal exposure profile of major radionuclides for Tarapur Atomic Power Station (India) occupational staff over the last 9 years (1970-1978) of station operation is presented. This power station has two boiling water reactor units. The occupational staff were monitored for internal exposure with a whole body counter. It has been observed that 60Co, 134Cs and 137Cs are the major contaminants. The highest yearly average internal exposure was less than 1% of the maximum permissible body burden recommended by the ICRP. Depending on the nature of their exposures, the power station employees were classified under four different groups: (i) maintenance, (ii) operations, (iii) technical and (iv) non-technical. This study revealed that the maintenance group had the highest incidence of internal exposure, and that the contribution of 60Co is largest in this group's exposure. (B.G.W.)

  6. Impact of integrating wind power in the Norwegian power system

    International Nuclear Information System (INIS)

    Tande, John Olav

    2006-04-01

    Wind power may in the future constitute a significant part of the Norwegian electricity supply. 20 TWh annual wind generation is a realistic goal for 2020, assuming wind farms on-land and offshore. The development of grid codes for wind farms is sound. It is recognised that large wind farms are basically power plants and may participate in securing efficient and stable power system operation. Modern wind farms may control reactive power or voltage as any other power plant, and may also control active power or frequency as long as wind conditions permit. Grid code requirements must however be carefully assessed, and possibly adjusted over time, aiming for overall least-cost solutions. Development of wind farms is today to some degree hindered by conservative assumptions being made about the operation of wind farms in areas with limited power transfer capacity. By accepting temporary grid congestion, however, a large increase in installed wind power is viable. For grid congestion that appears only a few hours per year, the cost of lost generation will be modest and may be economic compared with the alternatives of limiting wind farm capacities or increasing the grid transfer capacity. Wind generation's impact on power system operation and adequacy will be positive overall. Combining wind and hydro provides a more stable annual energy supply than hydro alone, and wind generation will generally be higher in the winter period than in the summer. Wind will replace the generation with the highest operating cost and reduce the average Nord Pool spot market price. 20 TWh of wind will reduce the price by about 3 oere/kWh and CO2 emissions by 12-14 million tons in the case of replacing coal, and by about 6 million tons when replacing natural gas. Wind's impact on the need for balancing power is small: the extra balancing cost is about 0.8 oere per kWh of wind, and about half that if investment in new reserve capacity is not needed.
In summary, this report demonstrates options for large-scale integration of wind power in the Norwegian power system.
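The quoted balancing-cost figure can be turned into an annual total with simple arithmetic; the conversion 100 oere = 1 NOK is standard, but the resulting total is our own illustration, not a number from the report:

```python
# Figures quoted in the record: 20 TWh annual wind generation,
# extra balancing cost of about 0.8 oere per kWh of wind.
wind_twh = 20.0
balancing_oere_per_kwh = 0.8

kwh_per_twh = 1e9                     # 1 TWh = 1e9 kWh
total_oere = wind_twh * kwh_per_twh * balancing_oere_per_kwh
total_mnok = total_oere / 100 / 1e6   # 100 oere = 1 NOK (standard conversion)
print(f"Extra balancing cost: {total_mnok:.0f} million NOK per year")  # 160
```

Halving this, per the report's note on reserve capacity, would put the cost near 80 million NOK per year.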

  7. Wind power

    International Nuclear Information System (INIS)

    Gipe, P.

    2007-01-01

    This book is a translation of the edition published in the USA under the title ''Wind Power: Renewable Energy for Home, Farm and Business''. In the wake of mass blackouts and energy crises, wind power remains a largely untapped resource of renewable energy. It is a booming worldwide industry whose technology, under the collective wing of aficionados like author Paul Gipe, is coming of age. Wind Power guides us through the emergent, sometimes daunting discourse on wind technology, giving frank explanations of how to use wind technology wisely and sound advice on how to avoid common mistakes. Since the mid-1970s, Paul Gipe has played a part in nearly every aspect of wind energy development, from installing small turbines to promoting wind energy worldwide. As an American proponent of renewable energy, Gipe has earned the acclaim and respect of European energy specialists for years, but his arguments have often fallen on deaf ears at home. Today, the topic of wind power is cropping up everywhere from the beaches of Cape Cod to the Oregon-Washington border, and one wind turbine is capable of producing enough electricity per year to run 200 average American households. Now, Paul Gipe is back to shed light on this increasingly important energy source with a revised edition of Wind Power. Over the course of his career, Paul Gipe has been a proponent, participant, observer, and critic of the wind industry. His experience with wind has given rise to two previous books on the subject, Wind Energy Basics and Wind Power for Home and Business, which have sold over 50,000 copies. Wind Power for Home and Business has become a staple for both homeowners and professionals interested in the subject, and now, with energy prices soaring, interest in wind power is hitting an all-time high. With chapters on output and economics, Wind Power discloses how much you can expect from each method of wind technology, both in terms of energy and financial savings. The book updated models...

  8. ACHIEVEMENT OF THE HIGHEST LEVEL OF SAFETY AND HEALTH AT WORK AND THE SATISFACTION OF EMPLOYEES IN THE TEXTILE INDUSTRY

    Directory of Open Access Journals (Sweden)

    Snezana Urosevic

    2016-12-01

    Full Text Available Safety and health at work involves creating working conditions in which certain measures and activities are undertaken in order to protect the life and health of employees. It is in the interest of society, of all stakeholders and of every individual to achieve the highest level of safety and health at work, so that unwanted consequences such as injuries, occupational diseases and work-related diseases are reduced to a minimum, and to create working conditions in which employees feel satisfaction in the performance of their professional duties. The textile industry is a sector with elevated risk, because unfavourable microclimate conditions prevail in textile plants: high air temperature and high humidity, often combined with insufficient illumination of rooms and increased noise. Along the whole production line in the textile industry there is a risk of injury, most commonly from mechanical force, or of burns from heat or chemicals. All of these factors are present in the production and processing of textiles, and they may affect the incidence of occupational diseases among workers, absenteeism, and reductions in working capacity and productivity. As textile production advances, the number of hazardous and harmful substances that may pose a potential danger to employees in this branch of the economy, as well as a harmful impact on the environment, increases. Therefore, it is important to give special attention to these problems.

  9. Density functional theory, comparative vibrational spectroscopic studies, highest occupied molecular orbital and lowest unoccupied molecular orbital analysis of Linezolid

    Science.gov (United States)

    Rajalakshmi, K.; Gunasekaran, S.; Kumaresan, S.

    2015-06-01

    The Fourier transform infrared and Fourier transform Raman spectra of Linezolid have been recorded in the regions 4,000-400 and 4,000-100 cm-1, respectively. Utilizing the observed Fourier transform infrared and Raman data, a complete vibrational assignment and analysis of the fundamental modes of the compound have been carried out. The optimum molecular geometry, harmonic vibrational frequencies, infrared intensities and Raman scattering activities have been calculated by density functional theory at the 6-31G(d,p), 6-311G(d,p) and M06-2X/6-31G(d,p) levels. The difference between the observed and scaled wavenumber values of most of the fundamentals is very small. A detailed interpretation of the infrared and Raman spectra of Linezolid is reported. Mulliken net charges have also been calculated. The ultraviolet-visible spectrum of the title molecule has also been calculated using the time-dependent density functional method. In addition, molecular electrostatic potential, highest occupied molecular orbital and lowest unoccupied molecular orbital analyses have been performed and several thermodynamic properties computed by the density functional theoretical method.

  10. New hybrid magnet system for structure research at highest magnetic fields and temperatures in the millikelvin region

    International Nuclear Information System (INIS)

    Smeibidl, Peter; Ehmler, Hartmut; Tennant, Alan; Bird, Mark

    2012-01-01

    The Helmholtz Centre Berlin (HZB) is a user facility for the study of structure and dynamics with neutrons and synchrotron radiation, with special emphasis on experiments under extreme conditions. Neutron scattering is uniquely suited to studying magnetic properties on a microscopic length scale, because neutrons have comparable wavelengths and, due to their magnetic moment, they interact with the atomic magnetic moments. At HZB a dedicated instrument for neutron scattering at extreme magnetic fields and low temperatures is under construction, the Extreme Environment Diffractometer ExED. It is designed according to the time-of-flight principle for elastic and inelastic neutron scattering and for the special geometric constraints of analysing samples in a high-field magnet. The new hybrid magnet will not only allow for novel experiments, it will be at the forefront of development in magnet technology itself. With a set of superconducting and resistive coils a maximum field above 30 T will be possible. To compromise between the needs of the magnet design for highest fields and the concept of the neutron instrument, the magnetic field will be generated by means of a coned, resistive inner solenoid and a superconducting outer solenoid with horizontal field orientation. To allow for experiments down to millikelvin temperatures, the installation of a 3He or a dilution cryostat with a closed-cycle precooling stage is foreseen.

  11. Pleistocene climatic oscillations rather than recent human disturbance influence genetic diversity in one of the world's highest treeline species.

    Science.gov (United States)

    Peng, Yanling; Lachmuth, Susanne; Gallegos, Silvia C; Kessler, Michael; Ramsay, Paul M; Renison, Daniel; Suarez, Ricardo; Hensen, Isabell

    2015-10-01

Biological responses to climatic change usually leave imprints on the genetic diversity and structure of plants. Information on the current genetic diversity and structure of dominant tree species has facilitated our general understanding of phylogeographical patterns. Using amplified fragment length polymorphisms (AFLPs), we compared the genetic diversity and structure of 384 adults and 384 seedlings of Polylepis tarapacana (Rosaceae), one of the world's highest treeline species, endemic to the central Andes, across 32 forest sites spanning a 600 km latitudinal gradient between 4100 m and 5000 m a.s.l. Moderate to high levels of genetic diversity and low genetic differentiation were detected in both adults and seedlings, with levels of genetic diversity and differentiation being almost identical. Four slightly genetically divergent clusters were identified that accorded with differing geographical regions. Genetic diversity decreased from south to north and with increasing precipitation for both adults and seedlings, but there was no relationship to elevation. Our study shows that, unlike the case for other Andean treeline species, recent human activities have not affected the genetic structure of P. tarapacana, possibly because its inhospitable habitat is unsuitable for agriculture. The current genetic pattern of P. tarapacana points to a historically more widespread distribution at lower altitudes, which allowed considerable gene flow, possibly during the glacial periods of the Pleistocene epoch, and also suggests that the northern Argentinean Andes may have served as a refugium for historical populations. © 2015 Botanical Society of America.

  12. A Systematic Study to Optimize SiPM Photo-Detectors for Highest Time Resolution in PET

    CERN Document Server

    Gundacker, S.; Frisch, B.; Hillemanns, H.; Jarron, P.; Meyer, T.; Pauwels, K.; Lecoq, P.

    2012-01-01

We report on a systematic study of time resolution made with three different commercial silicon photomultipliers (SiPMs) (Hamamatsu MPPC S10931-025P, S10931-050P, and S10931-100P) and two LSO scintillating crystals. This study aimed to determine the optimum detector conditions for highest time resolution in a prospective time-of-flight positron emission tomography (TOF-PET) system. Measurements were based on the time-over-threshold method in a coincidence setup using the ultrafast amplifier-discriminator NINO and a fast oscilloscope. Our tests with the three SiPMs of the same area but of different SPAD sizes and fill factors gave the best results with the Hamamatsu type with 50 × 50 μm² single-pixel size. For this type of SiPM and under realistic geometrical PET scanner conditions, i.e., with 2 × 2 × 10 mm³ LSO crystals, a coincidence time resolution of 220 ± 4 ps FWHM could be achieved. The results are interpreted in terms of SiPM photon detection efficiency (PDE), dark noise, and photon yield.

  13. Site characterization of the highest-priority geologic formations for CO2 storage in Wyoming

    Energy Technology Data Exchange (ETDEWEB)

    Surdam, Ronald C. [Univ. of Wyoming, Laramie, WY (United States); Bentley, Ramsey [Univ. of Wyoming, Laramie, WY (United States); Campbell-Stone, Erin [Univ. of Wyoming, Laramie, WY (United States); Dahl, Shanna [Univ. of Wyoming, Laramie, WY (United States); Deiss, Allory [Univ. of Wyoming, Laramie, WY (United States); Ganshin, Yuri [Univ. of Wyoming, Laramie, WY (United States); Jiao, Zunsheng [Univ. of Wyoming, Laramie, WY (United States); Kaszuba, John [Univ. of Wyoming, Laramie, WY (United States); Mallick, Subhashis [Univ. of Wyoming, Laramie, WY (United States); McLaughlin, Fred [Univ. of Wyoming, Laramie, WY (United States); Myers, James [Univ. of Wyoming, Laramie, WY (United States); Quillinan, Scott [Univ. of Wyoming, Laramie, WY (United States)

    2013-12-07

This study, funded by U.S. Department of Energy National Energy Technology Laboratory award DE-FE0002142 along with the state of Wyoming, uses outcrop and core observations, a diverse electric log suite, a VSP survey, in-bore testing (DST, injection tests, and fluid sampling), a variety of rock/fluid analyses, and a wide range of seismic attributes derived from a 3-D seismic survey to thoroughly characterize the highest-potential storage reservoirs and confining layers at the premier CO2 geological storage site in Wyoming. An accurate site characterization was essential to assessing the following critical aspects of the storage site: (1) more accurately estimating the CO2 reservoir storage capacity (Madison Limestone and Weber Sandstone at the Rock Springs Uplift (RSU)), (2) evaluating the distribution, long-term integrity, and permanence of the confining layers, (3) managing CO2 injection pressures by removing formation fluids (brine production/treatment), and (4) evaluating potential utilization of the stored CO2.

  14. [Archivos de Bronconeumología: among the 3 Spanish medical journals with the highest national impact factors].

    Science.gov (United States)

    Aleixandre Benavent, R; Valderrama Zurián, J C; Castellano Gómez, M; Simó Meléndez, R; Navarro Molina, C

    2004-12-01

    Citation analysis elucidates patterns of information consumption within professional communities. The aim of this study was to analyze the citations of 87 Spanish medical journals by calculating their impact factors and immediacy indices for 2001, and to estimate the importance of Archivos de Bronconeumología within the framework of Spanish medicine. Eighty-seven Spanish medical journals were included. All were listed in the Spanish Medical Index (Indice Medico Español) and in at least one of the following databases: MEDLINE, BIOSIS, EMBASE, or Science Citation Index. References to articles from 1999 through 2001 in citable articles from 2001 were analyzed. Using the method of the Institute for Scientific Information, we calculated the national impact factor and immediacy index for each journal. The journals with the highest national impact factors were Revista Española de Quimioterapia (0.894), Medicina Clínica (0.89), and Archivos de Bronconeumología (0.732). The self-citation percentage of Archivos de Bronconeumología was 18.3% and the immediacy index was 0.033. The impact factor obtained by Archivos de Bronconeumología confirms its importance in Spanish medicine and validates its inclusion as a source journal in Science Citation Index and Journal Citation Report.
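A national impact factor of the ISI type is simply a citations-per-citable-item ratio over a recent-years window, and the immediacy index is the same ratio restricted to the target year itself. A minimal sketch, with invented counts chosen only so the rounded results echo the values reported above (these are not the study's actual tallies):

```python
# ISI-style indicators for a target year Y; all counts are hypothetical.
def impact_factor(cites_to_window: int, citable_items_in_window: int) -> float:
    """Citations in year Y to articles from the preceding window,
    divided by the number of citable articles in that window."""
    return cites_to_window / citable_items_in_window

def immediacy_index(cites_same_year: int, items_same_year: int) -> float:
    """Citations in year Y to articles published in Y, per such article."""
    return cites_same_year / items_same_year

print(round(impact_factor(123, 168), 3))   # → 0.732
print(round(immediacy_index(5, 150), 3))   # → 0.033
```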

  15. The Risk of Reported Cryptosporidiosis in Children Aged <5 Years in Australia is Highest in Very Remote Regions.

    Science.gov (United States)

    Lal, Aparna; Fearnley, Emily; Kirk, Martyn

    2015-09-18

The incidence of cryptosporidiosis is highest in children <5 years, yet little is known about disease patterns across urban and rural areas of Australia. In this study, we examine whether the risk of reported cryptosporidiosis in children <5 years varies across an urban-rural gradient, after controlling for season and gender. Using Australian data on reported cryptosporidiosis from 2001 to 2012, we spatially linked disease data to an index of geographic remoteness to examine the geographic variation in cryptosporidiosis risk using negative binomial regression. The incidence risk ratio (IRR) of reported cryptosporidiosis was higher in inner regional (IRR 1.4, 95% CI 1.2-1.7, p < 0.001) and outer regional areas (IRR 2.4, 95% CI 2.2-2.9, p < 0.001), and in remote (IRR 5.2, 95% CI 4.3-6.2, p < 0.001) and very remote (IRR 8.2, 95% CI 6.9-9.8, p < 0.001) areas, compared to major cities. A linear test for trend showed a statistically significant increase in risk with increasing remoteness. Remote communities need to be a priority for future targeted health promotion and disease prevention interventions to reduce cryptosporidiosis in children <5 years.
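The incidence risk ratios above come from a regression model, but the core quantity can be sketched directly: an IRR is a ratio of two incidence rates, with a 95% confidence interval formed on the log scale. A minimal illustration with invented case and person-year counts, not the study's data:

```python
import math

# Hypothetical counts: reported cases and person-years of exposure in
# major cities vs very remote areas (illustrative values only).
cases_city, py_city = 1200, 2_000_000
cases_remote, py_remote = 164, 33_000

rate_city = cases_city / py_city
rate_remote = cases_remote / py_remote
irr = rate_remote / rate_city

# Standard error of log(IRR) for two Poisson counts a, b: sqrt(1/a + 1/b)
se = math.sqrt(1 / cases_remote + 1 / cases_city)
lo = math.exp(math.log(irr) - 1.96 * se)
hi = math.exp(math.log(irr) + 1.96 * se)
print(f"IRR = {irr:.1f}, 95% CI ({lo:.1f}, {hi:.1f})")
```

A regression-based IRR generalizes this two-group ratio by exponentiating the model coefficient for remoteness while adjusting for season and gender.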

  16. The Risk of Reported Cryptosporidiosis in Children Aged <5 Years in Australia is Highest in Very Remote Regions

    Directory of Open Access Journals (Sweden)

    Aparna Lal

    2015-09-01

The incidence of cryptosporidiosis is highest in children <5 years, yet little is known about disease patterns across urban and rural areas of Australia. In this study, we examine whether the risk of reported cryptosporidiosis in children <5 years varies across an urban-rural gradient, after controlling for season and gender. Using Australian data on reported cryptosporidiosis from 2001 to 2012, we spatially linked disease data to an index of geographic remoteness to examine the geographic variation in cryptosporidiosis risk using negative binomial regression. The incidence risk ratio (IRR) of reported cryptosporidiosis was higher in inner regional (IRR 1.4, 95% CI 1.2–1.7, p < 0.001) and outer regional areas (IRR 2.4, 95% CI 2.2–2.9, p < 0.001), and in remote (IRR 5.2, 95% CI 4.3–6.2, p < 0.001) and very remote (IRR 8.2, 95% CI 6.9–9.8, p < 0.001) areas, compared to major cities. A linear test for trend showed a statistically significant increase in risk with increasing remoteness. Remote communities need to be a priority for future targeted health promotion and disease prevention interventions to reduce cryptosporidiosis in children <5 years.

  17. Two Clock Transitions in Neutral Yb for the Highest Sensitivity to Variations of the Fine-Structure Constant

    Science.gov (United States)

    Safronova, Marianna S.; Porsev, Sergey G.; Sanner, Christian; Ye, Jun

    2018-04-01

We propose a new frequency standard based on the 4f¹⁴6s6p ³P₀ → 4f¹³6s²5d (J = 2) transition in neutral Yb. This transition has a potential for high stability and accuracy and the advantage of the highest sensitivity among atomic clocks to variation of the fine-structure constant α. We find its dimensionless α-variation enhancement factor to be K = −15, in comparison to K = −6 for the most sensitive current clock (the Yb⁺ E3 transition), and 18 times larger than in any neutral-atom clock (Hg, K = 0.8). Combined with the unprecedented stability of an optical lattice clock for neutral atoms, this high sensitivity opens new perspectives for searches for ultralight dark matter and for tests of theories beyond the standard model of elementary particles. Moreover, together with the well-established ¹S₀-³P₀ transition, one will have two clock transitions operating in neutral Yb, whose interleaved interrogations may further reduce systematic uncertainties of such clock-comparison experiments.

  18. Impacts of informal trails on vegetation and soils in the highest protected area in the Southern Hemisphere.

    Science.gov (United States)

    Barros, Agustina; Gonnet, Jorge; Pickering, Catherine

    2013-09-30

    There is limited recreation ecology research in South America, especially studies looking at informal trails. Impacts of informal trails formed by hikers and pack animals on vegetation and soils were assessed for the highest protected area in the Southern Hemisphere, Aconcagua Provincial Park. The number of braided trails, their width and depth were surveyed at 30 sites along the main access route to Mt Aconcagua (6962 m a.s.l.). Species composition, richness and cover were also measured on control and trail transects. A total of 3.3 ha of alpine meadows and 13.4 ha of alpine steppe were disturbed by trails. Trails through meadows resulted in greater soil loss, more exposed soil and rock and less vegetation than trails through steppe vegetation. Trampling also affected the composition of meadow and steppe vegetation with declines in sedges, herbs, grasses and shrubs on trails. These results highlight how visitor use can result in substantial cumulative damage to areas of high conservation value in the Andes. With unregulated use of trails and increasing visitation, park agencies need to limit the further spread of informal trails and improve the conservation of plant communities in Aconcagua Provincial Park and other popular parks in the region. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Operating experience from Swedish nuclear power plants 2001

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-07-01

The total production of electricity from Swedish nuclear power plants was 69.2 TWh during 2001, which is an increase of more than 25% compared to 2000. The hydroelectric power production increased to 78.3 TWh, 22% more than during a normal year, i.e. a year with average rainfall. Wind power contributed 0.5 TWh, and remaining production sources, mainly solid fuel plants combined with district heating, contributed 9.6 TWh. The electricity generation totalled 157.6 TWh, the highest annual production to date. The preliminary figures were 18.5 TWh for export and 11.1 TWh for import. Operational statistics are presented for each Swedish reactor. Two events, given INES level 1 rating, are reported from Barsebaeck 2 and Ringhals 2.
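As a quick arithmetic check, the source figures quoted in the report are internally consistent with the stated total, and the preliminary trade figures imply a net export of 7.4 TWh:

```python
# 2001 Swedish electricity production by source, in TWh (figures from the text)
nuclear, hydro, wind, other = 69.2, 78.3, 0.5, 9.6
total = nuclear + hydro + wind + other
print(round(total, 1))  # → 157.6, the record annual production quoted above

# preliminary export minus import
net_export = 18.5 - 11.1
print(round(net_export, 1))  # → 7.4
```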

  20. Operating experience from Swedish nuclear power plants 2001

    International Nuclear Information System (INIS)

    2002-01-01

The total production of electricity from Swedish nuclear power plants was 69.2 TWh during 2001, which is an increase of more than 25% compared to 2000. The hydroelectric power production increased to 78.3 TWh, 22% more than during a normal year, i.e. a year with average rainfall. Wind power contributed 0.5 TWh, and remaining production sources, mainly solid fuel plants combined with district heating, contributed 9.6 TWh. The electricity generation totalled 157.6 TWh, the highest annual production to date. The preliminary figures were 18.5 TWh for export and 11.1 TWh for import. Operational statistics are presented for each Swedish reactor. Two events, given INES level 1 rating, are reported from Barsebaeck 2 and Ringhals 2.

  1. 2004 world nuclear power report - evaluation

    International Nuclear Information System (INIS)

    Anon.

    2004-01-01

Last year, 2003, 439 nuclear power plants were available for electricity generation in 31 countries of the world. With an aggregate gross capacity of 380,489 MWe and an aggregate net capacity of 361,476 MWe, nuclear generating capacity reached its highest level so far. Nine different reactor lines are operated in the commercial nuclear power plants. Light water reactors (PWR and BWR) continue to be in the lead with 355 plants. Twenty-nine nuclear power plants with an aggregate gross capacity of 24,222 MWe and an aggregate net capacity of 23,066 MWe were under construction in eleven countries. Of these, twenty are light water reactors, and seven are CANDU-type reactors. Ninety-nine commercial reactors with a capacity in excess of 5 MWe have so far been decommissioned in eighteen countries, most of them prototype plants of low power. 228 plants, i.e. slightly more than half of the number of plants currently in operation, were commissioned in the 1980s. The oldest commercial nuclear power plant in the world, Calder Hall unit 1, was disconnected from the power grid for good in its 48th year of operation in 2003. For the first time in ten years, the availability of nuclear power plants in terms of time and capacity decreased, from 83.80% in 2002 to 80.50%, and from 84.60% to 81.50%, respectively, in 2003. The main causes are prolonged outages of high-capacity plants in Japan as a consequence of administrative restrictions. The four nuclear power plants in Finland continue to be at the top of the list worldwide with a cumulated average availability of capacity of 90.30%. (orig.)

  2. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
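The difference between the two standard hourly-value types is easy to demonstrate numerically. A minimal sketch with synthetic 1-min data (not observatory records): the boxcar average suppresses short-period variance by roughly a factor of 60, while the spot sample keeps it in full, which is what makes spot values prone to aliasing.

```python
import numpy as np

rng = np.random.default_rng(1)
minutes = np.arange(24 * 60)                          # one day of 1-min data
slow = 20 * np.sin(2 * np.pi * minutes / (24 * 60))   # slow daily variation
noise = rng.normal(0, 5, minutes.size)                # short-period activity
field = slow + noise

by_hour = field.reshape(24, 60)
spot = by_hour[:, 0]            # instantaneous value at the top of each hour
boxcar = by_hour.mean(axis=1)   # simple 1-h "boxcar" average

# Error relative to the underlying slow variation: the boxcar average
# reduces short-period power by ~1/60 in variance; the spot sample keeps it.
err_spot = spot - slow.reshape(24, 60)[:, 0]
err_boxcar = boxcar - slow.reshape(24, 60).mean(axis=1)
print(err_spot.std(), err_boxcar.std())
```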

  3. A story about the discovery of the largest glacier and the highest peak in heart of the Pamirs

    Directory of Open Access Journals (Sweden)

    V. M. Kotlyakov

    2014-01-01

The paper tells the story of how the "blank spot" at the centre of the Pamirs was puzzled out. In 1878, a small party of explorers headed by V.D. Oshanin found here a big glacier about 30-40 km long and named it for Fedchenko. In 1884-85, the known investigator G.E. Grumm-Grzhimailo made his important proposal about the orographic structure of the central Pamirs. In 1890, an expedition headed by topographer N.I. Kosinenko investigated the lower part of the Fedchenko Glacier and, for the first time, saw a separate high peak. In 1916, astronomer Ya.I. Belyaev put on a map a great pyramidal summit, but he mistook it for the Garmo Peak well known to local Tadzhiks (Fig. 2). In 1927, N.L. Korzhenevsky published a chart of the arrangement of ridges near the sources of the river Muksu (Fig. 3) that became a basis for the work of the Tadzhik-Pamir expedition of 1928-1932. In 1928, Ya.I. Belyaev determined the true length of the Fedchenko Glacier, 70 km, and geodesist I.G. Dorofeev mapped the whole basin of this glacier (Fig. 4), including a high irregular truncated pyramid of 7495 m in height (as he believed). But earlier this summit had been identified as the known Garmo Peak. Only in 1932 was it established that the determinations made by Dorofeev in 1928 related to this highest peak of the Pamirs, and of the whole Soviet Union. The chart of the real Central Pamir orography constructed by I.G. Dorofeev is presented in the paper together with his letter addressed to the author (Fig. 5). Thus, the "Garmo peaks" observed by the above-mentioned explorers were actually three different summits. One of them does tower on the north of the "knot being puzzled out" and reaches 7495 m, and namely this "one-tooth" peak was repeatedly seen by N.V. Krylenko from the valleys Gando and Garmo. It was named the Stalin Peak, and later the Peak of Communism. Another one is located 18 km southward, and this peak is the true Garmo Peak (6595 m).

  4. Binding of higher alcohols onto Mn(12) single-molecule magnets (SMMs): access to the highest barrier Mn(12) SMM.

    Science.gov (United States)

    Lampropoulos, Christos; Redler, Gage; Data, Saiti; Abboud, Khalil A; Hill, Stephen; Christou, George

    2010-02-15

    Two new members of the Mn(12) family of single-molecule magnets (SMMs), [Mn(12)O(12)(O(2)CCH(2)Bu(t))(16)(Bu(t)OH)(H(2)O)(3)].2Bu(t)OH (3.2Bu(t)OH) and [Mn(12)O(12)(O(2)CCH(2)Bu(t))(16)(C(5)H(11)OH)(4)] (4) (C(5)H(11)OH is 1-pentanol), are reported. They were synthesized from [Mn(12)O(12)(O(2)CMe)(16)(H(2)O)(4)].2MeCO(2)H.4H(2)O (1) by carboxylate substitution and crystallization from the appropriate alcohol-containing solvent. Complexes 3 and 4 are new members of the recently established [Mn(12)O(12)(O(2)CCH(2)Bu(t))(16)(solv)(4)] (solv = H(2)O, alcohols) family of SMMs. Only one bulky Bu(t)OH can be accommodated into 3, and even this causes significant distortion of the [Mn(12)O(12)] core. Variable-temperature, solid-state alternating current (AC) magnetization studies were carried out on complexes 3 and 4, and they established that both possess an S = 10 ground state spin and are SMMs. However, the magnetic behavior of the two compounds was found to be significantly different, with 4 showing out-of-phase AC peaks at higher temperatures than 3. High-frequency electron paramagnetic resonance (HFEPR) studies were carried out on single crystals of 3.2Bu(t)OH and 4, and these revealed that the axial zero-field splitting constant, D, is very different for the two compounds. Furthermore, it was established that 4 is the Mn(12) SMM with the highest kinetic barrier (U(eff)) to date. The results reveal alcohol substitution as an additional and convenient means to affect the magnetization relaxation barrier of the Mn(12) SMMs without major change to the ligation or oxidation state.

  5. Where do the highest energy CR's come from? and How does the Milky Way affect their arrival directions?

    Directory of Open Access Journals (Sweden)

    Kronberg Philipp

    2013-06-01

A “grand magnetic design” for the Milky Way disk clearly emerges within ~1.5 kpc of the Galactic mid-plane near the Sun [1], and reveals a pitch angle of −5.5°, directed inward from the Solar tangential. This pitch angle can be expected to differ for Galactic disc locations other than ours. Above ~1.5 kpc, the field geometry is completely different, and its 3-D structure is not yet completely specified. However it appears that the UHECR (>5 · 10¹⁹ eV) propagation to us is not much affected by the halo field, at least for protons. I discuss new multi-parameter analyses of UHECR deflections which provide a conceptual “template” for future interpretations of energy-species-direction data from AUGER, HiRes etc., and their successors [2]. I show how the strength and structure of the cosmologically nearby intergalactic magnetic field, B_IGM, is now well estimated out to D ~5 Mpc from the Milky Way: − 20 nG. These are the first VHECR data-based estimates of B_IGM on nearby supragalactic scales, and are also important for understanding and modeling CR propagation in the more distant Universe. CR acceleration to the highest energies is probably a natural accompanying phenomenon of Supermassive Black Hole (SMBH) associated jets and lobes [3]. I briefly describe what we know about magnetic configurations in these candidate sites for UHECR acceleration, and the first direct estimate of an extragalactic Poynting flux current, ~3 · 10¹⁸ amperes [4, 5]. This number connects directly to SMBH accretion disk physics, and leads to ideas of how VHECR acceleration in jets and lobes, possibly involving magnetic reconnection, is likely to be common in the Universe. It remains to be fully understood.

  6. Identification of multiple sclerosis patients at highest risk of cognitive impairment using an integrated brain magnetic resonance imaging assessment approach.

    Science.gov (United States)

    Uher, T; Vaneckova, M; Sormani, M P; Krasensky, J; Sobisek, L; Dusankova, J Blahova; Seidl, Z; Havrdova, E; Kalincik, T; Benedict, R H B; Horakova, D

    2017-02-01

While impaired cognitive performance is common in multiple sclerosis (MS), it has been largely underdiagnosed. Here a magnetic resonance imaging (MRI) screening algorithm is proposed to identify patients at highest risk of cognitive impairment. The objective was to examine whether assessment of lesion burden together with whole-brain atrophy on MRI improves our ability to identify cognitively impaired MS patients. Of the 1253 patients enrolled in the study, 1052 patients with all cognitive, volumetric MRI and clinical data available were included in the analysis. Brain MRI and neuropsychological assessment with the Brief International Cognitive Assessment for Multiple Sclerosis were performed. Multivariable logistic regression and individual prediction analysis were used to investigate the associations between MRI markers and cognitive impairment. The results of the primary analysis were validated at two subsequent time points (months 12 and 24). The prevalence of cognitive impairment was greater in patients with low brain parenchymal fraction (BPF <0.85) and high T2 lesion volume (T2-LV >3.5 ml) than in patients with high BPF (>0.85) and low T2-LV (<3.5 ml). This combined classification predicted cognitive impairment with 83% specificity, 82% negative predictive value, 51% sensitivity and 75% overall accuracy. The risk of confirmed cognitive decline over the follow-up was greater in patients with high T2-LV (OR 2.1; 95% CI 1.1-3.8) and low BPF (OR 2.6; 95% CI 1.4-4.7). The integrated MRI assessment of lesion burden and brain atrophy may improve the stratification of MS patients who may benefit from cognitive assessment. © 2016 EAN.
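The screening statistics quoted above (specificity, negative predictive value, sensitivity, accuracy) all derive from a 2×2 confusion matrix of predicted versus actual impairment. A minimal sketch with hypothetical counts, not the study's data, chosen so the rounded values resemble those reported:

```python
# Hypothetical 2x2 confusion matrix for an MRI-based screening rule.
tp, fn = 120, 115   # cognitively impaired patients flagged / missed
tn, fp = 600, 124   # unimpaired patients correctly cleared / falsely flagged

sensitivity = tp / (tp + fn)               # impaired correctly flagged
specificity = tn / (tn + fp)               # unimpaired correctly cleared
npv = tn / (tn + fn)                       # cleared patients truly unimpaired
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"sens {sensitivity:.2f}, spec {specificity:.2f}, "
      f"NPV {npv:.2f}, acc {accuracy:.2f}")
```

A screening rule with high specificity and NPV but modest sensitivity, as here, is better at ruling patients out of further testing than at catching every impaired patient.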

  7. Lyman alpha emission in nearby star-forming galaxies with the lowest metallicities and the highest [OIII]/[OII] ratios

    Science.gov (United States)

    Izotov, Yuri

    2017-08-01

    The Lyman alpha line of hydrogen is the strongest emission line in galaxies and the tool of predilection for identifying and studying star-forming galaxies over a wide range of redshifts, especially in the early universe. However, it has become clear over the years that not all of the Lyman alpha radiation escapes, due to its resonant scattering on the interstellar and intergalactic medium, and absorption by dust. Although our knowledge of the high-z universe depends crucially on that line, we still do not have a complete understanding of the mechanisms behind the production, radiative transfer and escape of Lyman alpha in galaxies. We wish here to investigate these mechanisms by studying the properties of the ISM in a unique sample of 8 extreme star-forming galaxies (SFGs) that have the highest excitation in the SDSS spectral data base. These dwarf SFGs have considerably lower stellar masses and metallicities, and higher equivalent widths and [OIII]5007/[OII]3727 ratios compared to all nearby SFGs with Lyman alpha emission studied so far with COS. They are, however, very similar to the dwarf Lyman alpha emitters at redshifts 3-6, which are thought to be the main sources of reionization in the early Universe. By combining the HST/COS UV data with data in the optical range, and using photoionization and radiative transfer codes, we will be able to study the properties of the Lyman alpha in these unique objects, derive column densities of the neutral hydrogen N(HI) and compare them with N(HI) obtained from the HeI emission-line ratios in the optical spectra. We will derive Lyman alpha escape fractions and indirectly Lyman continuum escape fractions.

  8. Can All Doctors Be Like This? Seven Stories of Communication Transformation Told by Physicians Rated Highest by Patients.

    Science.gov (United States)

    Janisse, Tom; Tallman, Karen

    2017-01-01

The top predictors of patient satisfaction with clinical visits are the quality of the physician-patient relationship and the communications contributing to that relationship. How do physicians improve their communication, and what effect does it have on them? This article presents the verbatim stories of seven high-performing physicians describing their transformative change in the areas of communication, connection, and well-being. Data for this study are based on interviews from a previous study in which a 6-question set was posed, in semistructured 60-minute interviews, to 77 of the highest-performing Permanente Medical Group physicians in 4 Regions on the "Art of Medicine" patient survey. Transformation stories emerged spontaneously during the interviews, so it was an incidental finding that some physicians identified they had not always been high performing in their communication with patients. Seven different modes of transformation in communication were described by these physicians: a listening tool, an awareness course, finding new meaning in clinical practice, a technologic tool, a sudden insight, a mentor observation, and a physician-as-patient experience. These stories illustrate how communication skills can be learned through various activities and experiences that transform physicians into highly successful communicators. All modes result in a change of state, a new way of seeing and of being; they are not just a new tool or a new practice, but a change in state of mind. This state resulted in a marked change of behavior and a substantial improvement in communication and relationship.

  9. Comparison on the Analysis on PM10 Data based on Average and Extreme Series

    Directory of Open Access Journals (Sweden)

    Mohd Amin Nor Azrita

    2018-01-01

The main concern in environmental studies is extreme (catastrophic) phenomena rather than common events. Most statistical approaches, however, are concerned primarily with the centre of a distribution, or the average value, rather than the tail of the distribution, which contains the extreme observations. Extreme value theory directs attention to the tails of the distribution, where standard models prove unreliable for analysing extreme series. High levels of particulate matter (PM10) are a common environmental problem causing various impacts on human health and material damage. If the main concern is extreme events, extreme value analysis provides the best-supported results. The monthly average and monthly maximum PM10 data for Perlis from 2003 to 2014 were analysed. Forecasts for the average data were made by the Holt-Winters method, while the return level gives the value of an extreme event that occurs on average once in a given period. For the forecast period January 2015 to December 2016, the highest forecasted average value was 58.18 (standard deviation 18.45) in February 2016, while the return level reached 253.76 units for a 24-month (2015-2016) return period.
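The return-level idea can be sketched under a Gumbel model, the zero-shape special case of the GEV commonly used for block maxima. The monthly maxima below are synthetic, not the Perlis PM10 series, and the fit is a simple method-of-moments sketch rather than the paper's procedure:

```python
import numpy as np

# Synthetic monthly maxima for 12 years (144 months), Gumbel-distributed.
rng = np.random.default_rng(2)
monthly_max = rng.gumbel(loc=90.0, scale=25.0, size=144)

# Method-of-moments Gumbel fit: scale from the std, location from the mean.
beta = monthly_max.std(ddof=1) * np.sqrt(6) / np.pi
mu = monthly_max.mean() - 0.5772 * beta

def return_level(T: float) -> float:
    """Level exceeded on average once every T months under the fitted Gumbel."""
    return mu - beta * np.log(-np.log(1 - 1 / T))

print(f"24-month return level: {return_level(24):.1f} units")
```

Longer return periods give higher levels, which is why the 24-month return level quoted above sits far beyond the forecasted monthly averages.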

  10. SPATIAL DISTRIBUTION OF THE AVERAGE RUNOFF IN THE IZA AND VIȘEU WATERSHEDS

    Directory of Open Access Journals (Sweden)

    HORVÁTH CS.

    2015-03-01

The average runoff is the main parameter for evaluating an area's water resources and an important characteristic in all river-runoff research. In this paper we apply a GIS methodology to assess the spatial evolution of the average runoff; using validity curves, we identified three validity areas in which runoff changes differently with altitude. The three curves were charted using the average runoff values of 16 hydrometric stations in the area, eight in the Vișeu and eight in the Iza river catchment. Identifying the appropriate areas of the obtained correlation curves (between specific average runoff and mean catchment altitude) allowed the assessment of potential runoff at catchment level and over altitudinal intervals. By integrating the curve functions into GIS we created an average-runoff map for the area, from which runoff data can be extracted relatively quickly using GIS spatial-analyst functions. The study shows that of the three areas, the highest runoff corresponds to the third zone, but because of its small area the associated water volume is also minor. It is also shown that with the resulting runoff map, reasonably accurate runoff values can be computed for areas without hydrologic control.
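The core of the workflow above, fitting a specific-runoff versus mean-catchment-altitude relation from gauging-station data and then evaluating it over altitude bands, can be sketched as follows. The station values are illustrative, not the Iza/Vișeu measurements, and a single linear fit stands in for the paper's three per-zone validity curves:

```python
import numpy as np

# Illustrative station data: mean catchment altitude (m) vs specific
# average runoff (l/s/km^2) for eight hypothetical gauging stations.
alt = np.array([600, 800, 1000, 1200, 1400, 1600, 1800, 2000])
runoff = np.array([6.0, 8.5, 11.2, 13.8, 16.1, 19.0, 21.4, 24.2])

# Least-squares runoff-altitude relation for one validity area.
slope, intercept = np.polyfit(alt, runoff, 1)

# Evaluate the fitted curve on 200 m altitude bands, as a raster
# calculator would do cell-by-cell over an elevation model.
bands = np.arange(600, 2001, 200)
predicted = slope * bands + intercept
print(predicted.round(1))
```

In the GIS the same relation is applied to every elevation cell, and separate curves are used where the runoff-altitude behaviour differs.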

  11. Data base of system-average dose rates at nuclear power plants: Final report

    International Nuclear Information System (INIS)

    Beal, S.K.; Britz, W.L.; Cohen, S.C.; Goldin, A.S.; Goldin, D.J.

    1987-10-01

    In this work, a data base is derived of area dose rates for systems and components listed in the Energy Economic Data Base (EEDB). The data base is derived from area surveys obtained during outages at four boiling water reactors (BWRs) at three stations and eight pressurized water reactors (PWRs) at four stations. Separate tables are given for BWRs and PWRs. These tables may be combined with estimates of labor hours to provide order-of-magnitude estimates of exposure for purposes of regulatory analysis. They are only valid for work involving entire systems or components. The estimates of labor hours used in conjunction with the dose rates to estimate exposure must be adjusted to account for in-field time. Finally, the dose rates given in the data base do not reflect ALARA considerations. 11 refs., 2 figs., 3 tabs

  12. Field control in a standing wave structure at high average beam power

    International Nuclear Information System (INIS)

    McKeown, J.; Fraser, J.S.; McMichael, G.E.

    1976-01-01

    A 100% duty factor electron beam has been accelerated through a graded-β side-coupled standing wave structure operating in π/2 mode. Three non-interacting control loops are necessary to provide the accelerating field amplitude and phase and to control structure resonance. The principal disturbances have been identified and measured over the beam current range of 0 to 20 mA. Design details are presented of control loops which regulate the accelerating field amplitude to ±0.3% and its phase to ±0.5° for 50% beam loading. (author)

  13. Radiation chemical research around a 15 MeV high average power linac

    International Nuclear Information System (INIS)

    Lahorte, P.; Mondelaers, W.; Masschaele, B.; Cauwels, P.

    1998-01-01

    Complete text of publication follows. The Laboratory of Subatomic and Radiation Physics of the University of Gent is equipped with a 15 MeV 20 kW linear electron accelerator (linac) facility. This accelerator was initially designed for fundamental nuclear physics research but was modified to generate beams for new experimental interdisciplinary projects. In its present configuration the accelerator is used as a multipurpose apparatus for research in the fields of polymer chemistry (crosslinking), biomaterials (hydrogels, drug delivery systems, implants), medicine (extracorporeal bone irradiation, human grafts), biomedical materials, food technology (package materials, food preservation), dosimetry (EPR of alanine systems, geldosimetry), solid-state physics, agriculture and nuclear and radiation physics. In this paper an overview will be presented of both the various research projects around our linac facility involving radiation chemistry and the specialised technologies facilitating this research

  14. High Average Power Raman Conversion in Diamond: ’Eyesafe’ Output and Fiber Laser Conversion

    Science.gov (United States)

    2015-06-19

    O. Kitzler and R.P. Mildren, Laser & Photon. Reviews, vol. 8, L37-L41 (2014). Distribution Code A: Approved for public release, distribution is... O. Kitzler, A. McKay, D.J. Spence and R.P. Mildren, "Modelling and Optimization of Continuous-Wave External Cavity Raman Lasers

  15. High average power, highly brilliant laser-produced plasma source for soft X-ray spectroscopy.

    Science.gov (United States)

    Mantouvalou, Ioanna; Witte, Katharina; Grötzsch, Daniel; Neitzel, Michael; Günther, Sabrina; Baumann, Jonas; Jung, Robert; Stiel, Holger; Kanngiesser, Birgit; Sandner, Wolfgang

    2015-03-01

    In this work, a novel laser-produced plasma source is presented which delivers pulsed broadband soft X-radiation in the range between 100 and 1200 eV. The source was designed in view of long operating hours, high stability, and cost effectiveness. It relies on a rotating and translating metal target and achieves high stability through an on-line monitoring device using a four quadrant extreme ultraviolet diode in a pinhole camera arrangement. The source can be operated with three different laser pulse durations and various target materials and is equipped with two beamlines for simultaneous experiments. Characterization measurements are presented with special emphasis on the source position and emission stability of the source. As a first application, a near edge X-ray absorption fine structure measurement on a thin polyimide foil shows the potential of the source for soft X-ray spectroscopy.

  16. Kilowatt average power 100 J-level diode pumped solid state laser

    Czech Academy of Sciences Publication Activity Database

    Mason, P.; Divoký, Martin; Ertel, K.; Pilař, Jan; Butcher, T.; Hanuš, Martin; Banerjee, S.; Phillips, J.; Smith, J.; De Vido, M.; Lucianetti, Antonio; Hernandez-Gomez, C.; Edwards, C.; Mocek, Tomáš; Collier, J.

    2017-01-01

    Roč. 4, č. 4 (2017), s. 438-439 ISSN 2334-2536 R&D Projects: GA MŠk LO1602; GA MŠk LM2015086 Institutional support: RVO:68378271 Keywords : diode-pumped * solid state * laser Subject RIV: BH - Optics, Masers, Lasers OBOR OECD: Optics (including laser optics and quantum optics) Impact factor: 7.727, year: 2016

  17. Characterization of a klystrode as a RF source for high-average-power accelerators

    International Nuclear Information System (INIS)

    Rees, D.; Keffeler, D.; Roybal, W.; Tallerico, P.J.

    1995-01-01

    The klystrode is a relatively new type of RF source that has demonstrated dc-to-RF conversion efficiencies in excess of 70% and a control characteristic distinctly different from that of klystron amplifiers. This different control characteristic allows the klystrode to achieve its high conversion efficiency while still providing a control margin for regulation of the accelerator cavity fields. The authors present test data from a 267-MHz, 250-kW, continuous-wave (CW) klystrode amplifier and contrast these data with conventional klystron performance, emphasizing the strengths and weaknesses of klystrode technology for accelerator applications. They present test results describing the limitations of the 250-kW, CW klystrode and extrapolate the data to other frequencies. A summary of the operating regime explains the clear advantages of the klystrode technology over the klystron technology

  18. Pulse repetition frequency effects in a high average power x-ray preionized excimer laser

    International Nuclear Information System (INIS)

    Fontaine, B.; Forestier, B.; Delaporte, P.; Canarelli, P.

    1989-01-01

    An experimental study of wave damping in a high-repetition-rate excimer laser is undertaken. Excitation of the laser active medium in a subsonic loop is achieved by means of a classical discharge through transfer capacitors. The discharge stability is controlled by a wire-ion-plasma (w.i.p.) X-ray gun. The strong acoustic waves induced by the excitation of the active medium may lead, at high PRF, to a decrease of the energy per pulse. First results on the influence of damping the induced density perturbations between two successive pulses are presented

  19. National and Subnational Population-Based Incidence of Cancer in Thailand: Assessing Cancers with the Highest Burdens

    Directory of Open Access Journals (Sweden)

    Shama Virani

    2017-08-01

    Full Text Available In Thailand, five cancer types—breast, cervical, colorectal, liver and lung cancer—contribute to over half of the cancer burden. The magnitude of these cancers must be quantified over time to assess previous health policies and highlight future trajectories for targeted prevention efforts. We provide a comprehensive assessment of these five cancers nationally and subnationally, with trend analysis, projections, and the number of cases expected for the year 2025, using cancer registry data. We found that breast cancer (average annual percent change (AAPC): 3.1%) and colorectal cancer (female AAPC: 3.3%; male AAPC: 4.1%) are increasing, while cervical cancer (AAPC: −4.4%) is decreasing nationwide. However, liver and lung cancers exhibit disproportionately higher burdens in the northeast and north regions, respectively. Lung cancer increased significantly in northeastern and southern women, despite low smoking rates. Liver cancer is expected to increase in northern males and females. Liver cancer increased in the south, despite the absence in this region of the liver fluke, a known risk factor. Our findings are presented in the context of health policy and population dynamics, and serve to provide evidence for future prevention strategies. Our subnational estimates provide a basis for understanding variations in region-specific risk factor profiles that contribute to incidence trends over time.

  20. Te and ne profiles on JFT-2M plasma with the highest spatial resolution TV Thomson scattering system

    International Nuclear Information System (INIS)

    Yamauchi, T.

    1993-01-01

    A high-spatial-resolution TV Thomson scattering system was constructed on the JFT-2M tokamak. This system is similar to those used at PBX-M and TFTR; such systems provide complete profiles of Te and ne at a single time during a plasma discharge. The characteristics of the JFT-2M TVTS are as follows: 1. The measured points comprise not only 81 points for scattered light and plasma light, with a time difference of 2 ms, but also 10 points for plasma light measured simultaneously with the scattered light. 2. The spatial resolution is 0.86 cm, higher than that of any other Thomson scattering system. 3. The sensitivity of the detector, composed of image-intensifier tubes and a CCD, is as high as that of a photomultiplier tube. Te and ne profiles have been measured for over one year on JFT-2M. The line-averaged electron density measured was in the range 5×10^12 cm^-3 to 7×10^13 cm^-3 and the measured electron temperature was in the range 50 eV to 1.2 keV. (author) 7 refs., 7 figs., 1 tab

  1. Safety Impact of Average Speed Control in the UK

    DEFF Research Database (Denmark)

    Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert

    2016-01-01

    of automatic speed control was point-based, but in recent years a potentially more effective alternative automatic speed control method has been introduced. This method is based upon records of drivers’ average travel speed over selected sections of the road and is normally called average speed control...... in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control....
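The principle of section control described above reduces to timing a vehicle between two cameras a known distance apart and comparing the resulting average speed, not a single-point speed, against the limit. A minimal sketch, with illustrative section length, limit, and enforcement tolerance:

```python
def average_speed_kmh(section_km, t_entry_s, t_exit_s):
    """Average travel speed over the enforced section, in km/h."""
    hours = (t_exit_s - t_entry_s) / 3600.0
    return section_km / hours

def is_violation(section_km, t_entry_s, t_exit_s, limit_kmh, tolerance_kmh=2.0):
    """Flag the trip if the section-average speed exceeds limit + tolerance."""
    return average_speed_kmh(section_km, t_entry_s, t_exit_s) > limit_kmh + tolerance_kmh

# A 5 km section covered in 200 s is about 90 km/h on average
print(average_speed_kmh(5.0, 0.0, 200.0))
print(is_violation(5.0, 0.0, 200.0, 80.0))   # True
```

Because only the section average matters, briefly exceeding the limit inside the section does not by itself trigger a violation, which is the behavioural difference from point-based enforcement that the study evaluates.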

  2. on the performance of Autoregressive Moving Average Polynomial

    African Journals Online (AJOL)

    Timothy Ademakinwa

    Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag ... Moving Average Polynomial Distributed Lag (ARMAPDL) model. ..... Global Journal of Mathematics and Statistics. Vol. 1. ... Business and Economic Research Center.

  3. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We also show that each decision tree for sorting 8 elements that has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365) also has minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to perform sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.

  4. Comparison of Interpolation Methods as Applied to Time Synchronous Averaging

    National Research Council Canada - National Science Library

    Decker, Harry

    1999-01-01

    Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...
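Time synchronous averaging with interpolation, as studied in the record above, can be sketched as follows. Each shaft revolution is resampled onto a fixed angular grid and the resampled revolutions are averaged, reinforcing shaft-synchronous vibration and suppressing asynchronous noise. The tachometer times and signal here are synthetic, and linear interpolation stands in for the several techniques the report compares:

```python
import math

def linear_interp(t, times, values):
    """Linearly interpolate the sampled signal at time t (times sorted)."""
    for i in range(len(times) - 1):
        if times[i] <= t <= times[i + 1]:
            w = (t - times[i]) / (times[i + 1] - times[i])
            return (1 - w) * values[i] + w * values[i + 1]
    raise ValueError("t outside sampled range")

def tsa(times, signal, rev_starts, samples_per_rev=64):
    """Resample each revolution onto a fixed angular grid and average them."""
    n_rev = len(rev_starts) - 1
    avg = [0.0] * samples_per_rev
    for r in range(n_rev):
        t0, t1 = rev_starts[r], rev_starts[r + 1]
        for k in range(samples_per_rev):
            t = t0 + (t1 - t0) * k / samples_per_rev
            avg[k] += linear_interp(t, times, signal) / n_rev
    return avg

# Synthetic check: 1 Hz shaft, noise-free once-per-revolution sine
times = [i * 0.01 for i in range(1001)]
signal = [math.sin(2 * math.pi * t) for t in times]
avg = tsa(times, signal, rev_starts=[0.0, 1.0, 2.0, 3.0, 4.0])
```

The choice of interpolator matters because resampling error does not average away; it distorts exactly the synchronous components that the diagnostic parameters are computed from, which is the effect the report quantifies.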

  5. Light-cone averaging in cosmology: formalism and applications

    International Nuclear Information System (INIS)

    Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.

    2011-01-01

    We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted ''geodesic light-cone'' coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called ''redshift drift'' in a generic inhomogeneous Universe

  6. Extension of the time-average model to Candu refueling schemes involving reshuffling

    International Nuclear Information System (INIS)

    Rouben, Benjamin; Nichita, Eleodor

    2008-01-01

    Candu reactors consist of a horizontal non-pressurized heavy-water-filled vessel penetrated axially by fuel channels, each containing twelve 50-cm-long fuel bundles cooled by pressurized heavy water. Candu reactors are refueled on-line and, as a consequence, the core flux and power distributions change continuously. For design purposes, a 'time-average' model was developed in the 1970s to calculate the average over time of the flux and power distribution and to study the effects of different refueling schemes. The original time-average model only allows treatment of simple push-through refueling schemes whereby fresh fuel is inserted at one end of the channel and irradiated fuel is removed from the other end. With the advent of advanced fuel cycles and new Candu designs, novel refueling schemes may be considered, such as reshuffling discharged fuel from some channels into other channels, to achieve better overall discharge burnup. Such reshuffling schemes cannot be handled by the original time-average model. This paper presents an extension of the time-average model to allow for the treatment of refueling schemes with reshuffling. Equations for the extended model are presented, together with sample results for a simple demonstration case. (authors)

  7. Efavirenz Has the Highest Anti-Proliferative Effect of Non-Nucleoside Reverse Transcriptase Inhibitors against Pancreatic Cancer Cells.

    Directory of Open Access Journals (Sweden)

    Markus Hecht

    Full Text Available Cancer prevention and therapy in HIV-1-infected patients will play an important role in the future. The non-nucleoside reverse transcriptase inhibitors (NNRTIs) Efavirenz and Nevirapine are cytotoxic against cancer cells in vitro. As other NNRTIs have not been studied so far, all clinically used NNRTIs were tested and the in vitro toxic concentrations were compared to drug levels in patients to predict possible anti-cancer effects in vivo. Cytotoxicity was studied by Annexin-V-APC/7AAD staining and flow cytometry in the pancreatic cancer cell lines BxPC-3 and Panc-1 and confirmed by colony formation assays. The 50% effective cytotoxic concentrations (EC50) were calculated and compared to the blood levels in our patients and published data. The in vitro EC50 values of the different drugs in the BxPC-3 pancreatic cancer cells were: Efavirenz 31.5 μmol/l (= 9944 ng/ml), Nevirapine 239 μmol/l (= 63,786 ng/ml), Etravirine 89.0 μmol/l (= 38,740 ng/ml), Lersivirine 543 μmol/l (= 168,523 ng/ml), Delavirdine 171 μmol/l (= 78,072 ng/ml), Rilpivirine 24.4 μmol/l (= 8941 ng/ml). As Efavirenz and Rilpivirine had the highest cytotoxic potential and Nevirapine is frequently used in HIV-1-positive patients, the results for these three drugs were further studied in Panc-1 pancreatic cancer cells and confirmed with colony formation assays. 205 patient blood levels of Efavirenz, 127 of Rilpivirine and 31 of Nevirapine were analyzed. The mean blood level of Efavirenz was 3587 ng/ml (range 162-15,363 ng/ml), of Rilpivirine 144 ng/ml (range 0-572 ng/ml), and of Nevirapine 4955 ng/ml (range 1856-8697 ng/ml). Blood levels from our patients and from published data reached the in vitro toxic EC50 of Efavirenz in about 1 to 5% of all patients. All studied NNRTIs were toxic against cancer cells. A low percentage of patients taking Efavirenz reached in vitro cytotoxic blood levels. It can be speculated that in HIV-1 positive patients having high Efavirenz blood levels pancreatic

  8. A New Orally Active, Aminothiol Radioprotector-Free of Nausea and Hypotension Side Effects at Its Highest Radioprotective Doses

    Energy Technology Data Exchange (ETDEWEB)

    Soref, Cheryl M. [ProCertus BioPharm, Inc., Madison, WI (United States); Hacker, Timothy A. [Department of Medicine, Cardiovascular Physiology Core, University of Wisconsin-Madison, Madison, WI (United States); Fahl, William E., E-mail: fahl@oncology.wisc.edu [ProCertus BioPharm, Inc., Madison, WI (United States); McArdle Laboratory for Cancer Research, University of Wisconsin Carbone Cancer Center, Madison, WI (United States)

    2012-04-01

    Purpose: A new aminothiol, PrC-210, was tested for orally conferred radioprotection (rats, mice; 9.0 Gy whole-body, which was otherwise lethal to 100% of the animals) and presence of the debilitating side effects (nausea/vomiting, hypotension/fainting) that restrict use of the current aminothiol, amifostine (Ethyol, WR-2721). Methods and Materials: PrC-210 in water was administered to rats and mice at times before irradiation, and percent-survival was recorded for 60 days. Subcutaneous (SC) amifostine (positive control) or SC PrC-210 was administered to ferrets (Mustela putorius furo) and retching/emesis responses were recorded. Intraperitoneal amifostine (positive control) or PrC-210 was administered to arterial cannulated rats to score drug-induced hypotension. Results: Oral PrC-210 conferred 100% survival in rat and mouse models against an otherwise 100% lethal whole-body radiation dose (9.0 Gy). Oral PrC-210, administered by gavage 30-90 min before irradiation, conferred a broad window of radioprotection. The comparison of PrC-210 and amifostine side effects was striking because there was no retching or emesis in 10 ferrets treated with PrC-210 and no induced hypotension in arterial cannulated rats treated with PrC-210. The tested PrC-210 doses were the ferret and rat equivalent doses of the 0.5 maximum tolerated dose (MTD) PrC-210 dose in mice. The human equivalent of this mouse 0.5 MTD PrC-210 dose would likely be the highest PrC-210 dose used in humans. By comparison, the mouse 0.5 MTD amifostine dose, 400 μg/g body weight (equivalent to the human amifostine dose of 910 mg/m²), when tested at equivalent ferret and rat doses in the above models produced 100% retching/vomiting in ferrets and 100% incidence of significant, progressive hypotension in rats. Conclusions: The PrC-210 aminothiol, with no detectable nausea/vomiting or hypotension side effects in these preclinical models, is a logical candidate for human drug development to use in healthy

  9. Time of highest tuberculosis death risk and associated factors: an observation of 12 years in Northern Thailand

    Directory of Open Access Journals (Sweden)

    Saiyud Moolphate

    2011-02-01

    Full Text Available Saiyud Moolphate [1,2], Myo Nyein Aung [1,3], Oranuch Nampaisan [1], Supalert Nedsuwan [4], Pacharee Kantipong [5], Narin Suriyon [6], Chamnarn Hansudewechakul [6], Hideki Yanai [7], Norio Yamada [2], Nobukatsu Ishikawa [2]. Affiliations: [1] TB/HIV Research Foundation, Chiang Rai, Thailand; [2] Research Institute of Tuberculosis, Japan Anti-Tuberculosis Association (RIT-JATA), Tokyo, Japan; [3] Department of Pharmacology, University of Medicine, Mandalay, Myanmar; [4] Department of Preventive and Social Medicine, Chiang Rai Regional Hospital, Chiang Rai, Thailand; [5] Department of Health Service System Development, Chiang Rai Regional Hospital, Chiang Rai, Thailand; [6] Provincial Health Office, Chiang Rai, Thailand; [7] Department of Clinical Laboratory, Fukujuji Hospital, Tokyo, Japan. Purpose: Northern Thailand is a tuberculosis (TB) endemic area with a high TB death rate. We aimed to establish the time of highest death risk during TB treatment, and to identify the risk factors at play during that high-risk period. Patients and methods: We retrospectively explored 12 years of TB surveillance data from Chiang Rai province, Northern Thailand. A total of 19,174 TB patients (including 5,009 deaths) were investigated from 1997 to 2008, and the proportion of deaths in each month of TB treatment was compared. Furthermore, multiple logistic regression analysis was performed to identify the characteristics of patients who died in the first month of TB treatment; a total of 5,626 TB patients from 2005 to 2008 were included in this regression analysis. Results: The proportions of deaths occurring in the first month of TB treatment were 38%, 39%, and 46% in the years 1997–2000, 2001–2004, and 2005–2008, respectively. The first month of TB treatment is the time of the maximum number of deaths. Moreover, advancing age, HIV infection, and being a Thai citizen were significant factors contributing to these earlier deaths in the course of TB treatment. Conclusion: Our findings have pointed to the specific time period and

  10. Applications of resonance-averaged gamma-ray spectroscopy with tailored beams

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1982-01-01

    The use of techniques based on the direct experimental averaging over compound nuclear capturing states has proved valuable for investigations of nuclear structure. The various methods that have been employed are described, with particular emphasis on the transmission filter, or tailored beam technique. The mathematical limitations on averaging imposed by the filter band pass are discussed. It can readily be demonstrated that a combination of filters at different energies can form a powerful method for spin and parity predictions. Several recent examples from the HFBR program are presented

  11. An averaging battery model for a lead-acid battery operating in an electric car

    Science.gov (United States)

    Bozek, J. M.

    1979-01-01

    A battery model is developed based on time averaging the current or power, and is shown to be an effective means of predicting the performance of a lead acid battery. The effectiveness of this battery model was tested on battery discharge profiles expected during the operation of an electric vehicle following the various SAE J227a driving schedules. The averaging model predicts the performance of a battery that is periodically charged (regenerated) if the regeneration energy is assumed to be converted to retrievable electrochemical energy on a one-to-one basis.
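The time-averaging model described above, with regeneration credited one-to-one, can be sketched as follows. The driving-cycle segments below are illustrative placeholders, not the actual SAE J227a profiles:

```python
def average_current(segments):
    """Time-averaged battery current over one repeated driving cycle.

    segments: list of (duration_s, current_A) pairs; regeneration is a
    negative current and is credited one-to-one, as the model assumes.
    """
    total_t = sum(d for d, _ in segments)
    charge = sum(d * i for d, i in segments)  # net ampere-seconds
    return charge / total_t

# Hypothetical 100 s cycle loosely shaped like a stop-and-go schedule
cycle = [(20.0, 120.0),   # acceleration
         (40.0, 60.0),    # cruise
         (10.0, -30.0),   # regenerative braking (returned 1:1)
         (30.0, 0.0)]     # stop

i_avg = average_current(cycle)
print(i_avg)  # 45.0 A net average over the cycle
```

Discharge performance is then predicted from this single constant-current equivalent, which is what makes the averaging model cheap to evaluate against measured discharge profiles.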

  12. Applications of resonance-averaged gamma-ray spectroscopy with tailored beams

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1982-01-01

    The use of techniques based on the direct experimental averaging over compound nuclear capturing states has proved valuable for investigations of nuclear structure. The various methods that have been employed are described, with particular emphasis on the transmission filter, or tailored beam technique. The mathematical limitations on averaging imposed by the filter band pass are discussed. It can readily be demonstrated that a combination of filters at different energies can form a powerful method for spin and parity predictions. Several recent examples from the HFBR program are presented. (author)

  13. Delineation of facial archetypes by 3d averaging.

    Science.gov (United States)

    Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G

    2004-10-01

    The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D scans of the faces, while 3D Rugle3 and locally developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups, European and Japanese, and from children with three genetic disorders, Williams syndrome, achondroplasia and Sotos syndrome, as well as a normal control group. The method involved averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face-averaging techniques, there was no warping or filling-in of spaces by interpolation; however, this facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
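The depth-coordinate averaging step described above (no warping, no interpolation) amounts to an element-wise mean of registered depth maps. A minimal sketch, assuming the scans are already registered on a common (x, y) grid; the tiny grids here are synthetic, not scanner data:

```python
def average_depth_maps(depth_maps):
    """Element-wise mean of equally sized 2-D depth (z) grids."""
    n = len(depth_maps)
    rows, cols = len(depth_maps[0]), len(depth_maps[0][0])
    return [[sum(m[r][c] for m in depth_maps) / n for c in range(cols)]
            for r in range(rows)]

# Toy 2x2 depth grids (e.g. mm from the scanner reference plane)
face_a = [[10.0, 12.0], [11.0, 13.0]]
face_b = [[14.0, 10.0], [13.0, 11.0]]
print(average_depth_maps([face_a, face_b]))  # [[12.0, 11.0], [12.0, 12.0]]
```

Because only z values are combined, the result carries shape but no colour, matching the limitation the abstract notes.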

  14. A Wide Band Gap Polymer with a Deep Highest Occupied Molecular Orbital Level Enables 14.2% Efficiency in Polymer Solar Cells.

    Science.gov (United States)

    Li, Sunsun; Ye, Long; Zhao, Wenchao; Yan, Hongping; Yang, Bei; Liu, Delong; Li, Wanning; Ade, Harald; Hou, Jianhui

    2018-05-21

    To simultaneously achieve low photon energy loss (E_loss) and broad spectral response, the molecular design of the wide band gap (WBG) donor polymer with a deep HOMO level is of critical importance in fullerene-free polymer solar cells (PSCs). Herein, we developed a new benzodithiophene unit, i.e., DTBDT-EF, and conducted systematic investigations on a WBG DTBDT-EF-based donor polymer, namely, PDTB-EF-T. Due to the synergistic electron-withdrawing effect of the fluorine atom and ester group, PDTB-EF-T exhibits a higher oxidation potential, i.e., a deeper HOMO level (ca. -5.5 eV), than most well-known donor polymers. Hence, a high open-circuit voltage of 0.90 V was obtained when paired with a fluorinated small molecule acceptor (IT-4F), corresponding to a low E_loss of 0.62 eV. Furthermore, side-chain engineering demonstrated that subtle side-chain modulation of the ester greatly influences the aggregation effects and molecular packing of polymer PDTB-EF-T. With the benefits of the stronger interchain π-π interaction, the improved ordering structure, and thus the highest hole mobility, the most symmetric charge transport and reduced recombination are achieved for the linear decyl-substituted PDTB-EF-T (P2)-based PSCs, leading to the highest short-circuit current density and fill factor (FF). Due to the high Flory-Huggins interaction parameter (χ), surface-directed phase separation occurs in the P2:IT-4F blend, which is supported by X-ray photoemission spectroscopy results and cross-sectional transmission electron microscope images. By taking advantage of the vertical phase distribution of the P2:IT-4F blend, a high power conversion efficiency (PCE) of 14.2% with an outstanding FF of 0.76 was recorded for inverted devices. These results demonstrate the great potential of the DTBDT-EF unit for future organic photovoltaic applications.

  15. Potential Remedies for the High Synchrotron-Radiation-Induced Heat Load for Future Highest-Energy-Proton Circular Colliders

    CERN Document Server

    AUTHOR|(CDS)2084568; Baglin, Vincent; Schaefers, Franz

    2015-01-01

    We propose a new method for handling the high synchrotron radiation (SR) induced heat load of future circular hadron colliders (like the FCC-hh). Such colliders are dominated by the production of SR, which causes a significant heat load on the accelerator walls. Removing this heat load in the cold part of the machine, as done in the Large Hadron Collider, would require more than 100 MW of electrical power and a major cooling system. We studied a totally different approach, identifying an accelerator beam screen whose illuminated surface forward-reflects most of the photons impinging on it. Such a reflecting beam screen would transport a significant part of this heat load outside the cold dipoles. Then, in room-temperature sections, it could be dissipated more efficiently. Here we analyze the proposed solution and address its full compatibility with all other requirements an accelerator beam screen must fulfill to keep under control beam instabilities as caused by electron cloud formation, impedance, dynamic...

  16. The Average Temporal and Spectral Evolution of Gamma-Ray Bursts

    International Nuclear Information System (INIS)

    Fenimore, E.E.

    1999-01-01

    We have averaged bright BATSE bursts to uncover the average overall temporal and spectral evolution of gamma-ray bursts (GRBs). We align the temporal structure of each burst by setting its duration to a standard duration, which we call T_⟨Dur⟩. The observed average "aligned T_⟨Dur⟩" profile for 32 bright bursts with intermediate durations (16-40 s) has a sharp rise (within the first 20% of T_⟨Dur⟩) and then a linear decay. Exponentials and power laws do not fit this decay. In particular, the power law seen in the X-ray afterglow (∝ T^-1.4) is not observed during the bursts, implying that the X-ray afterglow is not just an extension of the average temporal evolution seen during the gamma-ray phase. The average burst spectrum has a low-energy slope of -1.03, a high-energy slope of -3.31, and a peak in the νF_ν distribution at 390 keV. We determine the average spectral evolution. Remarkably, it is also a linear function, with the peak of the νF_ν distribution given by ∼680 - 600(T/T_⟨Dur⟩) keV. Since both the temporal profile and the peak energy are linear functions, on average the peak energy is linearly proportional to the intensity. This behavior is inconsistent with the external shock model. The observed temporal and spectral evolution is also inconsistent with that expected from variations in just a Lorentz factor. Previously, trends have been reported for GRB evolution, but our results are quantitative relationships that models should attempt to explain. © 1999 The American Astronomical Society

  17. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
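The note's point, that each classical average falls out of an intercept-only least-squares regression after a suitable transform of y, can be sketched as follows. Minimizing sum((f(y_i) - b)^2) over b gives b = mean(f(y)); undoing the transform yields the arithmetic, geometric, or harmonic mean:

```python
import math

def intercept_only_ols(ys):
    """Least-squares intercept for y = b + e; the minimizer is the sample mean."""
    return sum(ys) / len(ys)

def arithmetic_mean(ys):
    return intercept_only_ols(ys)              # identity transform

def geometric_mean(ys):
    return math.exp(intercept_only_ols([math.log(y) for y in ys]))  # log transform

def harmonic_mean(ys):
    return 1.0 / intercept_only_ols([1.0 / y for y in ys])          # reciprocal

data = [1.0, 2.0, 4.0]
print(arithmetic_mean(data))  # 7/3
print(geometric_mean(data))   # ~2.0, the cube root of 1*2*4
print(harmonic_mean(data))    # ~12/7
```

Weighted averages follow the same way by switching from OLS to weighted least squares, which is the generalization the note walks through.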

  18. Average stress in a Stokes suspension of disks

    NARCIS (Netherlands)

    Prosperetti, Andrea

    2004-01-01

    The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is

  19. 47 CFR 1.959 - Computation of average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication, Vol. 1 (2010-10-01): Computation of average terrain elevation. § 1.959, Federal Communications Commission, General Practice and Procedure, Wireless Radio Services Applications and Proceedings, Application Requirements and Procedures. Except a...

  20. 47 CFR 80.759 - Average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication, Vol. 5 (2010-10-01): Average terrain elevation. § 80.759, Federal Communications Commission (continued), Safety and Special Radio Services, Stations in the Maritime Services, Standards for Computing Public Coast Station VHF Coverage. (a)(1) Draw radials...

  1. The average covering tree value for directed graph games

    NARCIS (Netherlands)

    Khmelnitskaya, Anna Borisovna; Selcuk, Özer; Talman, Dolf

    We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all covering

  2. The Average Covering Tree Value for Directed Graph Games

    NARCIS (Netherlands)

    Khmelnitskaya, A.; Selcuk, O.; Talman, A.J.J.

    2012-01-01

    Abstract: We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all

  3. Analytic computation of average energy of neutrons inducing fission

    International Nuclear Information System (INIS)

    Clark, Alexander Rich

    2016-01-01

    The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.

  4. An alternative scheme of the Bogolyubov's average method

    International Nuclear Information System (INIS)

    Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.

    1990-01-01

    In this paper the average energy and the magnetic moment conservation laws in the Drift Theory of charged particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws and then performs the average. This scheme is more economical, in terms of time and algebraic calculation, than the usual procedure of Bogolyubov's method. (Author)

  5. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.; Chikalov, Igor; Moshkov, Mikhail

    2015-01-01

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees
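    As a sanity check, the quoted constant can be compared with the information-theoretic lower bound log2(8!) on the average depth of any binary decision tree that distinguishes all 8! orderings. This sketch uses only the numbers stated in the abstract:

    ```python
    import math

    # The abstract states the minimum average depth for sorting 8 elements is 620160/8!.
    n_orderings = math.factorial(8)          # 8! = 40320 possible orderings
    avg_depth = 620160 / n_orderings         # ≈ 15.3810 comparisons on average
    entropy_bound = math.log2(n_orderings)   # information lower bound ≈ 15.2990
    print(avg_depth, entropy_bound)
    ```

    The optimal average depth exceeds the entropy bound by less than 0.1 comparisons, which is why sorting 8 elements is a well-studied boundary case.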

  6. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any

  7. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  8. Self-similarity of higher-order moving averages

    Science.gov (United States)

    Arianos, Sergio; Carbone, Anna; Türk, Christian

    2011-10-01

    In this work, higher-order moving average polynomials are defined by a straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. These asymptotic values are compared with the results obtained by simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window, so trends at different time scales can be obtained on data sets of the same size. These polynomials could be interesting for applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
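    A minimal sketch of the first-order (standard moving average) case, assuming a simple discrete implementation rather than the paper's analytic treatment: for ordinary Brownian motion (H = 0.5), the detrending moving average variance grows roughly like n^(2H) = n with the window size n.

    ```python
    import random

    def moving_average(x, n):
        """Standard (first-order) moving average over a trailing window of n points."""
        return [sum(x[i - n + 1:i + 1]) / n for i in range(n - 1, len(x))]

    def dma_variance(x, n):
        """Detrending moving average variance: mean squared deviation of the
        series from its moving average (first-order polynomial case)."""
        ma = moving_average(x, n)
        devs = [(x[i + n - 1] - m) ** 2 for i, m in enumerate(ma)]
        return sum(devs) / len(devs)

    # Ordinary Brownian motion (H = 0.5): the DMA variance should grow with n.
    random.seed(0)
    walk = [0.0]
    for _ in range(20000):
        walk.append(walk[-1] + random.gauss(0, 1))
    print(dma_variance(walk, 10), dma_variance(walk, 40))
    ```

    Fitting log(variance) against log(n) over several windows recovers an estimate of 2H, which is how the Hurst exponent is extracted in practice.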

  9. Anomalous behavior of q-averages in nonextensive statistical mechanics

    International Nuclear Information System (INIS)

    Abe, Sumiyoshi

    2009-01-01

    A generalized definition of average, termed the q-average, is widely employed in the field of nonextensive statistical mechanics. Recently, however, it has been pointed out that such an average value may behave unphysically under specific deformations of probability distributions. Here, the following three issues are discussed and clarified. Firstly, the deformations considered are physical and may be realized experimentally. Secondly, in view of the thermostatistics, the q-average is unstable in both finite and infinite discrete systems. Thirdly, a naive generalization of the discussion to continuous systems misses a point, and a norm better than the L^1-norm should be employed for measuring the distance between two probability distributions. Consequently, stability of the q-average is shown not to be established in any of these cases.
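    The q-average in question is the normalized (escort-distribution) expectation value; a minimal sketch, with illustrative values:

    ```python
    def q_average(values, probs, q):
        """Normalized q-average (escort average) used in nonextensive statistics:
        <x>_q = sum(p_i**q * x_i) / sum(p_i**q).
        For q = 1 it reduces to the ordinary expectation value."""
        weights = [p ** q for p in probs]
        norm = sum(weights)
        return sum(w * x for w, x in zip(weights, values)) / norm

    values = [1.0, 2.0, 3.0]
    probs = [0.5, 0.3, 0.2]
    print(q_average(values, probs, 1.0))  # ordinary mean: 1.7
    print(q_average(values, probs, 2.0))  # q > 1 weights the dominant outcome more strongly
    ```

    The instability the abstract discusses concerns how this quantity responds to small (in L^1) deformations of `probs`, which can change the escort weights disproportionately.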

  10. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure...

  11. Forecasting of Average Monthly River Flows in Colombia

    Science.gov (United States)

    Mesa, O. J.; Poveda, G.

    2006-05-01

    The last two decades have witnessed a marked increase in our knowledge of the causes of interannual hydroclimatic variability and our ability to make predictions. Colombia, located near the seat of the ENSO phenomenon, has been shown to experience negative (positive) anomalies in precipitation in concert with El Niño (La Niña). Besides the Pacific Ocean, Colombia has climatic influences from the Atlantic Ocean and the Caribbean Sea, through the tropical forest of the Amazon basin and the savannas of the Orinoco River, on top of the orographic and hydro-climatic effects introduced by the Andes. As in various other countries of the region, hydroelectric power contributes a large proportion (75%) of the total electricity generation in Colombia. Also, most agriculture is rain-fed, and domestic water supply relies mainly on surface waters from creeks and rivers. In addition, various vector-borne tropical diseases intensify in response to rain and temperature changes. Therefore, there is a direct connection between climatic fluctuations and national and regional economies. This talk presents different forecasts of average monthly streamflows for the inflow into the largest reservoir used for hydropower generation in Colombia, and illustrates the potential economic savings of such forecasts. For planning the reservoir operation, the most appropriate time scale for this application is annual to interannual. Fortunately, this corresponds to the scale at which our understanding of hydroclimate variability has improved significantly. Among the different possibilities we have explored: traditional statistical ARIMA models, multiple linear regression, natural and constructed analogue models, the linear inverse model, neural network models, the non-parametric regression splines (MARS) model, regime-dependent Markovian models, and one we termed PREBEO, which is based on spectral band decomposition using wavelets. Most of the methods make

  12. The Highest Resolution Chandra View of Photoionization and Jet-Cloud Interaction in the Nuclear Region of NGC 4151

    Science.gov (United States)

    Wang, Junfeng; Fabbiano, G.; Karovska, M.; Elvis, M.; Risaliti, G.; Zezas, A.; Mundell, C. G.

    2009-10-01

    We report high resolution imaging of the nucleus of the Seyfert 1 galaxy NGC 4151 obtained with a 50 ks Chandra High Resolution Camera (HRC) observation. The HRC image resolves the emission on spatial scales of 0.″5, ~30 pc, showing an extended X-ray morphology overall consistent with the narrow-line region (NLR) seen in optical line emission. Removal of the bright point-like nuclear source and image deconvolution techniques both reveal X-ray enhancements that closely match the substructures seen in the Hubble Space Telescope [O III] image and prominent knots in the radio jet. We find that most of the NLR clouds in NGC 4151 have [O III]/soft X-ray ratio ~10, despite the distance of the clouds from the nucleus. This ratio is consistent with the values observed in NLRs of some Seyfert 2 galaxies, which indicates a uniform ionization parameter even at large radii and a density decreasing as r^(-2) as expected for a nuclear wind scenario. The [O III]/X-ray ratios at the location of radio knots show an excess of X-ray emission, suggesting shock heating in addition to photoionization. We examine various mechanisms for the X-ray emission and find that, in contrast to jet-related X-ray emission in more powerful active galactic nuclei, the observed jet parameters in NGC 4151 are inconsistent with synchrotron emission, synchrotron self-Compton, or inverse Compton scattering of cosmic microwave background photons or galaxy optical light. Instead, our results favor thermal emission from the interaction between the radio outflow and NLR gas clouds as the origin of the X-ray emission associated with the jet. This supports previous claims that frequent jet-interstellar medium interaction may explain why jets in Seyfert galaxies appear small, slow, and thermally dominated, distinct from the kpc-scale jets in radio galaxies.

  13. THE HIGHEST RESOLUTION CHANDRA VIEW OF PHOTOIONIZATION AND JET-CLOUD INTERACTION IN THE NUCLEAR REGION OF NGC 4151

    International Nuclear Information System (INIS)

    Wang Junfeng; Fabbiano, G.; Karovska, M.; Elvis, M.; Risaliti, G.; Zezas, A.; Mundell, C. G.

    2009-01-01

    We report high resolution imaging of the nucleus of the Seyfert 1 galaxy NGC 4151 obtained with a 50 ks Chandra High Resolution Camera (HRC) observation. The HRC image resolves the emission on spatial scales of 0.″5, ∼30 pc, showing an extended X-ray morphology overall consistent with the narrow-line region (NLR) seen in optical line emission. Removal of the bright point-like nuclear source and image deconvolution techniques both reveal X-ray enhancements that closely match the substructures seen in the Hubble Space Telescope [O III] image and prominent knots in the radio jet. We find that most of the NLR clouds in NGC 4151 have [O III]/soft X-ray ratio ∼10, despite the distance of the clouds from the nucleus. This ratio is consistent with the values observed in NLRs of some Seyfert 2 galaxies, which indicates a uniform ionization parameter even at large radii and a density decreasing as r^(-2) as expected for a nuclear wind scenario. The [O III]/X-ray ratios at the location of radio knots show an excess of X-ray emission, suggesting shock heating in addition to photoionization. We examine various mechanisms for the X-ray emission and find that, in contrast to jet-related X-ray emission in more powerful active galactic nuclei, the observed jet parameters in NGC 4151 are inconsistent with synchrotron emission, synchrotron self-Compton, or inverse Compton scattering of cosmic microwave background photons or galaxy optical light. Instead, our results favor thermal emission from the interaction between the radio outflow and NLR gas clouds as the origin of the X-ray emission associated with the jet. This supports previous claims that frequent jet-interstellar medium interaction may explain why jets in Seyfert galaxies appear small, slow, and thermally dominated, distinct from the kpc-scale jets in radio galaxies.

  14. Record high-average current from a high-brightness photoinjector

    Energy Technology Data Exchange (ETDEWEB)

    Dunham, Bruce; Barley, John; Bartnik, Adam; Bazarov, Ivan; Cultrera, Luca; Dobbins, John; Hoffstaetter, Georg; Johnson, Brent; Kaplan, Roger; Karkare, Siddharth; Kostroun, Vaclav; Li Yulin; Liepe, Matthias; Liu Xianghong; Loehl, Florian; Maxson, Jared; Quigley, Peter; Reilly, John; Rice, David; Sabol, Daniel [Cornell Laboratory for Accelerator-Based Sciences and Education, Cornell University, Ithaca, New York 14853 (United States); and others

    2013-01-21

    High-power, high-brightness electron beams are of interest for many applications, especially as drivers for free electron lasers and energy recovery linac light sources. For these particular applications, photoemission injectors are used in most cases, and the initial beam brightness from the injector sets a limit on the quality of the light generated at the end of the accelerator. At Cornell University, we have built such a high-power injector using a DC photoemission gun followed by a superconducting accelerating module. Recent results will be presented demonstrating record setting performance up to 65 mA average current with beam energies of 4-5 MeV.

  15. Short-term load forecasting of power system

    Science.gov (United States)

    Xu, Xiaobin

    2017-05-01

    To ensure that power system optimization is scientifically sound, it is necessary to improve load forecasting accuracy. Power system load forecasting is based on accurate statistical and survey data; starting from the history and current state of electricity consumption, it uses scientific methods to predict the future development trend and variation pattern of the power load. Short-term load forecasting is the basis of power system operation and analysis, and is of great significance to unit commitment, economic dispatch, and security checking. Therefore, power system load forecasting is explained in detail in this paper. First, we use data from 2012 to 2014 to establish a partial least squares regression model of the relationship between daily maximum load, daily minimum load, daily average load, and each meteorological factor, and, by inspecting the histogram of regression coefficients, select daily maximum temperature, daily minimum temperature, and daily average temperature as the meteorological factors used to improve forecasting accuracy. Second, for the case where the climate impact is uncertain, we use a time series model to predict the load data for 2015: the 2009-2014 load data are organized, and the previous six years of data are used to forecast the corresponding period of 2015. The criterion for prediction accuracy is based on the standard deviation of the prediction results and the average load for the previous six years. Finally, taking the climate effect into account, we use a BP neural network model to predict the 2015 data and optimize the forecast results on the basis of the time series model.
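    One plausible reading of the accuracy criterion described above (the spread of the forecast errors measured against the mean historical load) can be sketched as follows; the paper's exact normalization may differ, and all names and numbers here are illustrative:

    ```python
    def forecast_accuracy(predicted, actual):
        """Standard deviation of the forecast errors, normalized by the mean
        observed load (one plausible reading of the criterion in the abstract)."""
        n = len(predicted)
        errors = [p - a for p, a in zip(predicted, actual)]
        mean_err = sum(errors) / n
        std_err = (sum((e - mean_err) ** 2 for e in errors) / n) ** 0.5
        mean_load = sum(actual) / len(actual)
        return std_err / mean_load

    # Hypothetical monthly loads (MW): forecasts vs. observed values.
    print(forecast_accuracy([100.0, 110.0], [105.0, 105.0]))  # ≈ 0.0476
    ```

    A smaller value means the forecasts track the observed loads more tightly relative to the typical load level.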

  16. Evaluation of Dynamic Channel and Power Assignment for Cognitive Networks

    Energy Technology Data Exchange (ETDEWEB)

    Syed A. Ahmad; Umesh Shukla; Ryan E. Irwin; Luiz A. DaSilva; Allen B. MacKenzie

    2011-03-01

    In this paper, we develop a unifying optimization formulation to describe the Dynamic Channel and Power Assignment (DCPA) problem and an evaluation method for comparing DCPA algorithms. DCPA refers to the allocation of transmit power and frequency channels to links in a cognitive network so as to maximize the total number of feasible links while minimizing the aggregate transmit power. We apply our evaluation method to five algorithms representative of DCPA approaches in the literature. This comparison illustrates the tradeoffs between control modes (centralized versus distributed) and channel/power assignment techniques. We estimate the complexity of each algorithm. Through simulations, we evaluate the effectiveness of the algorithms in achieving feasible link allocations in the network, as well as their power efficiency. Our results indicate that, when few channels are available, the effectiveness of all algorithms is comparable and thus the one with the smallest complexity should be selected. The Least Interfering Channel and Iterative Power Assignment (LICIPA) algorithm does not require cross-link gain information, and has the lowest overall run time and the highest feasibility ratio of all the distributed algorithms; however, this comes at the cost of higher average power per link.

  17. Aperture averaging and BER for Gaussian beam in underwater oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-03-01

    In an underwater wireless optical communication (UWOC) link, power fluctuations over a finite-sized collecting lens are investigated for a horizontally propagating Gaussian beam wave. The power scintillation index, also known as the irradiance flux variance, for the received irradiance is evaluated in weak oceanic turbulence by using the Rytov method. This lets us further quantify the associated performance indicators, namely, the aperture averaging factor and the average bit-error rate (BER). The effects on the UWOC link performance of the oceanic turbulence parameters, i.e., the rate of dissipation of kinetic energy per unit mass of fluid, the rate of dissipation of mean-squared temperature, the Kolmogorov microscale, and the ratio of temperature to salinity contributions to the refractive index spectrum, as well as of the system parameters, i.e., the receiver aperture diameter, Gaussian source size, laser wavelength, and link distance, are investigated.
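    The aperture averaging factor is the ratio of the power scintillation index seen by a finite lens to that seen by a point receiver. A toy Monte Carlo illustration, not the paper's Rytov-theory evaluation: modeling the lens as averaging several independent "speckle cells" of log-normally fluctuating intensity (the weak-turbulence statistics) shows the suppression of fluctuations.

    ```python
    import random

    def scintillation_index(samples):
        """Power scintillation index: sigma^2 = <P^2>/<P>^2 - 1."""
        m = sum(samples) / len(samples)
        m2 = sum(s * s for s in samples) / len(samples)
        return m2 / (m * m) - 1.0

    def aperture_averaging_factor(point_samples, n_cells, n_trials=20000):
        """Toy aperture averaging factor A = sigma^2(aperture) / sigma^2(point):
        the finite lens is modeled as averaging n_cells independent intensity
        samples. (Illustrative model only; the paper evaluates A analytically.)"""
        rng = random.Random(1)
        averaged = [
            sum(rng.choice(point_samples) for _ in range(n_cells)) / n_cells
            for _ in range(n_trials)
        ]
        return scintillation_index(averaged) / scintillation_index(point_samples)

    # Log-normal intensity fluctuations, as expected in weak turbulence.
    rng = random.Random(0)
    point = [rng.lognormvariate(0.0, 0.3) for _ in range(50000)]
    print(aperture_averaging_factor(point, 8))  # close to 1/8: averaging smooths fluctuations
    ```

    For independent cells the factor scales as 1/n_cells; real apertures average partially correlated patches, so the true factor decays more slowly with aperture diameter.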

  18. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and kn pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average decision tree depth equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
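    The prefix-code example can be made concrete: for an optimal binary prefix code, the average leaf depth lies between the entropy and the entropy plus one, matching the "exceeds the lower bound by at most one" statement. A minimal sketch using Huffman's algorithm (the weighted path length equals the sum of the merged weights produced during the algorithm):

    ```python
    import heapq
    import math

    def entropy(probs):
        """Shannon entropy in bits: the lower bound on average depth (k = 2)."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    def huffman_avg_depth(probs):
        """Average leaf depth of an optimal binary prefix-code tree.
        Each Huffman merge contributes its combined weight to the
        weighted external path length, i.e. the average depth."""
        heap = list(probs)
        heapq.heapify(heap)
        avg = 0.0
        while len(heap) > 1:
            a = heapq.heappop(heap)
            b = heapq.heappop(heap)
            avg += a + b
            heapq.heappush(heap, a + b)
        return avg

    p = [0.4, 0.3, 0.2, 0.1]
    print(entropy(p), huffman_avg_depth(p))  # entropy <= average depth < entropy + 1
    ```

    For `p = [0.4, 0.3, 0.2, 0.1]` the entropy is about 1.846 bits while the optimal average depth is 1.9, inside the stated one-query gap.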

  19. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Science.gov (United States)

    2010-07-01

    ... Vi = the volume of gasoline produced or imported in batch i. Si = the sulfur content of batch i determined under § 80.330. n = the number of batches of gasoline produced or imported during the averaging period. i = individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...
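    The variables above describe a volume-weighted average over the batches of the averaging period; a minimal sketch of that computation (data layout and function name are illustrative, not from the rule text):

    ```python
    def average_sulfur_ppm(batches):
        """Volume-weighted annual average sulfur level over n batches:
            S_avg = sum(V_i * S_i) / sum(V_i)
        where V_i is the batch volume and S_i its sulfur content."""
        total_vs = sum(v * s for v, s in batches)
        total_v = sum(v for v, _ in batches)
        return total_vs / total_v

    # (volume, sulfur in ppm) for each batch i of the averaging period
    batches = [(1000.0, 30.0), (3000.0, 10.0)]
    print(average_sulfur_ppm(batches))  # 15.0
    ```

    Larger batches pull the average toward their own sulfur level, which is why the result here is 15 ppm rather than the unweighted mean of 20 ppm.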

  20. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... and average carbon-related exhaust emissions. § 600.510-12, Protection of Environment, Environmental Protection Agency (continued), Energy Policy, Fuel Economy and Carbon-Related Exhaust Emissions of... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...