WorldWideScience

Sample records for higher average power

  1. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  2. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  3. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  4. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  5. High average power linear induction accelerator development

    International Nuclear Information System (INIS)

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs

  6. High average power solid state laser power conditioning system

    International Nuclear Information System (INIS)

    Steinkraus, R.F.

    1987-01-01

    The power conditioning system for the High Average Power Laser program at Lawrence Livermore National Laboratory (LLNL) is described. The system has been operational for two years. It is high voltage, high power, fault protected, and solid state. The power conditioning system drives flashlamps that pump solid state lasers. Flashlamps are driven by silicon controlled rectifier (SCR) switched, resonantly charged, LC-discharge pulse-forming networks (PFNs). The system uses fiber optics for control and diagnostics. Energy and thermal diagnostics are monitored by computers

  7. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1 B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts, and such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scalable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (pulse repetition frequencies in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  8. Self-similarity of higher-order moving averages

    Science.gov (United States)

    Arianos, Sergio; Carbone, Anna; Türk, Christian

    2011-10-01

    In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by the simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window. Thus trends at different time scales can be obtained on data sets with the same size. These polynomials could be interesting for those applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
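
    The scaling relation used here can be illustrated with a minimal Python sketch of the detrending moving average (DMA) idea: measure the variance of the series about a local polynomial trend fitted over a window of n points and read the Hurst exponent H off the power law sigma^2(n) ~ n^(2H). This is our own simplified illustration, not the authors' exact higher-order construction, and the function names are ours.

    import numpy as np

    def dma_variance(y, n, order=1):
        """Variance of y about a local polynomial trend over a backward window of n points.
        order=0 reproduces the simple moving average; higher orders mimic the higher-order
        generalization discussed above (illustrative implementation only)."""
        t = np.arange(n)
        residuals = []
        for i in range(len(y) - n):
            window = y[i:i + n]
            coeffs = np.polyfit(t, window, order)            # local polynomial trend
            residuals.append(window[-1] - np.polyval(coeffs, t[-1]))
        return np.mean(np.square(residuals))

    def hurst_dma(y, windows, order=1):
        """Estimate H from the power-law scaling sigma^2(n) ~ n^(2H)."""
        v = [dma_variance(y, n, order) for n in windows]
        slope, _ = np.polyfit(np.log(windows), np.log(v), 1)
        return slope / 2.0

    # usage: a random walk (cumulative sum of white noise) should give H close to 0.5
    y = np.cumsum(np.random.default_rng(0).standard_normal(4096))
    print(hurst_dma(y, windows=[8, 16, 32, 64, 128, 256]))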

  9. High-average-power solid state lasers

    International Nuclear Information System (INIS)

    Summers, M.A.

    1989-01-01

    In 1987, a broad-based, aggressive R&D program was initiated, aimed at developing the technologies necessary to make possible the use of solid state lasers capable of delivering medium to high average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and applications of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high-quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wave-front aberrations in zig-zag slabs, understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs, and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs

  10. Minimal average consumption downlink base station power control strategy

    OpenAIRE

    Holtkamp H.; Auer G.; Haas H.

    2011-01-01

    We consider single-cell multi-user OFDMA downlink resource allocation on a flat-fading channel such that average supply power is minimized while fulfilling a set of target rates. Available degrees of freedom are transmission power and duration. This paper extends our previous work on power-optimal resource allocation in the mobile downlink by detailing the investigation of the optimal power control strategy and extracting fundamental characteristics of power-optimal operation in the cellular downlink. W...

  11. Average gluon and quark jet multiplicities at higher orders

    Energy Technology Data Exchange (ETDEWEB)

    Bolzoni, Paolo; Kniehl, Bernd A. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Kotikov, Anatoly V. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Joint Institute of Nuclear Research, Moscow (Russian Federation). Bogoliubov Lab. of Theoretical Physics

    2013-05-15

    We develop a new formalism for computing and including both the perturbative and nonperturbative QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new method is motivated by recent progress in timelike small-x resummation obtained in the MS-bar factorization scheme. We obtain next-to-next-to-leading-logarithmic (NNLL) resummed expressions, which represent generalizations of previous analytic results. Our expressions depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets that are compatible with regard to the jet algorithms demonstrates by its goodness how our results solve a longstanding problem of QCD. We show that the statistical and theoretical uncertainties both do not exceed 5% for scales above 10 GeV. We finally propose to use the jet multiplicity data as a new way to extract the strong-coupling constant. Including all the available theoretical input within our approach, we obtain α_s^(5)(M_Z) = 0.1199 ± 0.0026 in the MS-bar scheme in an approximation equivalent to next-to-next-to-leading order enhanced by the resummations of ln(x) terms through the NNLL level and of ln Q² terms by the renormalization group, in excellent agreement with the present world average.

  12. High Average Power Fiber Laser for Satellite Communications, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Very high average power lasers with high electrical-to-optical (E-O) efficiency, which also support pulse position modulation (PPM) formats in the MHz-data rate...

  13. Eighth CW and High Average Power RF Workshop

    CERN Document Server

    2014-01-01

    We are pleased to announce the next Continuous Wave and High Average RF Power Workshop, CWRF2014, to take place at Hotel NH Trieste, Trieste, Italy from 13 to 16 May, 2014. This is the eighth in the CWRF workshop series and will be hosted by Elettra - Sincrotrone Trieste S.C.p.A. (www.elettra.eu). CWRF2014 will provide an opportunity for designers and users of CW and high average power RF systems to meet and interact in a convivial environment to share experiences and ideas on applications which utilize high-power klystrons, gridded tubes, combined solid-state architectures, high-voltage power supplies, high-voltage modulators, high-power combiners, circulators, cavities, power couplers and tuners. New ideas for high-power RF system upgrades and novel ways of RF power generation and distribution will also be discussed. CWRF2014 sessions will start on Tuesday morning and will conclude on Friday lunchtime. A visit to Elettra and FERMI will be organized during the workshop. ORGANIZING COMMITTEE (OC): Al...

  14. High average power diode pumped solid state lasers for CALIOPE

    International Nuclear Information System (INIS)

    Comaskey, B.; Halpin, J.; Moran, B.

    1994-07-01

    Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers

  15. High Average Power UV Free Electron Laser Experiments At JLAB

    International Nuclear Information System (INIS)

    Douglas, David; Benson, Stephen; Evtushenko, Pavel; Gubeli, Joseph; Hernandez-Garcia, Carlos; Legg, Robert; Neil, George; Powers, Thomas; Shinn, Michelle; Tennant, Christopher; Williams, Gwyn

    2012-01-01

    Having produced 14 kW of average power at ∼2 microns, JLAB has shifted its focus to the ultraviolet portion of the spectrum. This presentation will describe the JLab UV Demo FEL, present specifics of its driver ERL, and discuss the latest experimental results from FEL experiments and machine operations.

  16. High-average-power diode-pumped Yb: YAG lasers

    International Nuclear Information System (INIS)

    Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B

    1999-01-01

    A scalable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high average power Yb:YAG lasers that utilize a rod configured gain element. Previously, this rod configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High beam quality (M² = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual rod configuration consisting of two, 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M² value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with an M² < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 ns pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) Hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes. (2) Compound laser rods with flanged nonabsorbing endcaps fabricated by diffusion bonding. (3) Techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished barrel rods

  17. Thermal effects in high average power optical parametric amplifiers.

    Science.gov (United States)

    Rothhardt, Jan; Demmler, Stefan; Hädrich, Steffen; Peschel, Thomas; Limpert, Jens; Tünnermann, Andreas

    2013-03-01

    Optical parametric amplifiers (OPAs) have the reputation of being average power scalable due to the instantaneous nature of the parametric process (zero quantum defect). This Letter reveals serious challenges originating from thermal load in the nonlinear crystal caused by absorption. We investigate these thermal effects in high average power OPAs based on beta barium borate. Absorption of both pump and idler waves is identified to contribute significantly to heating of the nonlinear crystal. A temperature increase of up to 148 K with respect to the environment is observed and mechanical tensile stress up to 40 MPa is found, indicating a high risk of crystal fracture under such conditions. By restricting the idler to a wavelength range far from absorption bands and removing the crystal coating we reduce the peak temperature and the resulting temperature gradient significantly. Guidelines for further power scaling of OPAs and other nonlinear devices are given.

  18. High Average Power, High Energy Short Pulse Fiber Laser System

    Energy Technology Data Exchange (ETDEWEB)

    Messerly, M J

    2007-11-13

    Recently, continuous wave fiber laser systems with output powers in excess of 500 W with good beam quality have been demonstrated [1]. High energy, ultrafast, chirped pulse fiber laser systems have achieved record output energies of 1 mJ [2]. However, these high-energy systems have not been scaled beyond a few watts of average output power. Fiber laser systems are attractive for many applications because they offer the promise of high-efficiency, compact, robust systems that are turnkey. Applications such as cutting, drilling and materials processing, front-end systems for high energy pulsed lasers (such as petawatts), and laser-based sources of high spatial coherence, high-flux x-rays all require high energy short pulses, and two of the three applications also require high average power. The challenge in creating a high energy chirped pulse fiber laser system is to find a way to scale the output energy while avoiding nonlinear effects and maintaining good beam quality in the amplifier fiber. To this end, our 3-year LDRD program sought to demonstrate a high energy, high average power fiber laser system. This work included exploring designs of large mode area optical fiber amplifiers for high energy systems as well as understanding the issues associated with chirped pulse amplification in optical fiber amplifier systems.

  19. Database of average-power damage thresholds at 1064 nm

    International Nuclear Information System (INIS)

    Rainer, F.; Hildum, E.A.; Milam, D.

    1987-01-01

    We have completed a database of average-power, laser-induced damage thresholds at 1064 nm on a variety of materials. Measurements were made with a newly constructed laser to provide design input for moderate and high average-power laser projects. The measurements were conducted with 16-ns pulses at pulse-repetition frequencies ranging from 6 to 120 Hz. Samples were typically irradiated for times ranging from a fraction of a second up to 5 minutes (36,000 shots). We tested seven categories of samples which included antireflective coatings, high reflectors, polarizers, single and multiple layers of the same material, bare and overcoated metal surfaces, bare polished surfaces, and bulk materials. The measured damage thresholds ranged from 2 J/cm² for some metals to > 46 J/cm² for a bare polished glass substrate. 4 refs., 7 figs., 1 tab

  20. Power Efficiency Improvements through Peak-to-Average Power Ratio Reduction and Power Amplifier Linearization

    Directory of Open Access Journals (Sweden)

    G. Tong Zhou

    2007-01-01

    Many modern communication signal formats, such as orthogonal frequency-division multiplexing (OFDM) and code-division multiple access (CDMA), have high peak-to-average power ratios (PARs). A signal with a high PAR not only is vulnerable in the presence of nonlinear components such as power amplifiers (PAs), but also leads to low transmission power efficiency. Selected mapping (SLM) and clipping are well-known PAR reduction techniques. We propose to combine SLM with threshold clipping and digital baseband predistortion to improve the overall efficiency of the transmission system. Testbed experiments demonstrate the effectiveness of the proposed approach.
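
    As a rough illustration of the quantities involved, the Python sketch below (our own toy example, not the authors' testbed code) generates one oversampled OFDM symbol, measures its PAR, and applies threshold clipping; SLM and predistortion are not shown, and all names are illustrative.

    import numpy as np

    def papr_db(x):
        """Peak-to-average power ratio of a complex baseband signal, in dB."""
        p = np.abs(x) ** 2
        return 10 * np.log10(p.max() / p.mean())

    def ofdm_symbol(n_subcarriers=256, oversample=4, rng=None):
        """One random QPSK-modulated OFDM symbol (oversampled IFFT)."""
        if rng is None:
            rng = np.random.default_rng()
        bits = rng.integers(0, 4, n_subcarriers)
        qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))   # unit-magnitude QPSK symbols
        spectrum = np.zeros(n_subcarriers * oversample, dtype=complex)
        spectrum[:n_subcarriers] = qpsk
        return np.fft.ifft(spectrum)

    def clip(x, threshold_db):
        """Threshold clipping: limit the envelope above the threshold, preserve the phase."""
        limit = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (threshold_db / 20)
        mag = np.abs(x)
        scale = np.minimum(1.0, limit / np.maximum(mag, 1e-12))
        return x * scale

    x = ofdm_symbol(rng=np.random.default_rng(1))
    print(f"PAR before clipping: {papr_db(x):.1f} dB")
    print(f"PAR after  clipping: {papr_db(clip(x, 4.0)):.1f} dB")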

  1. Recent developments in high average power driver technology

    International Nuclear Information System (INIS)

    Prestwich, K.R.; Buttram, M.T.; Rohwein, G.J.

    1979-01-01

    Inertial confinement fusion (ICF) reactors will require driver systems operating with tens to hundreds of megawatts of average power. The pulse power technology that will be required to build such drivers is in a primitive state of development. Recent developments in repetitive pulse power are discussed. A high-voltage transformer has been developed and operated at 3 MV in a single pulse experiment and is being tested at 1.5 MV, 5 kJ and 10 pps. A low-loss, 1 MV, 10 kJ, 10 pps Marx generator is being tested. Test results from gas-dynamic spark gaps that operate both in the 100 kV and 700 kV range are reported. A 250 kV, 1.5 kA/cm², 30 ns electron beam diode has operated stably for 1.6 × 10⁵ pulses

  2. TRIGA research reactors with higher power density

    International Nuclear Information System (INIS)

    Whittemore, W.L.

    1994-01-01

    The recent trend in new or upgraded research reactors is to higher power densities (hence higher neutron flux levels) but not necessarily to higher power levels. The TRIGA LEU fuel with burnable poison is available in small diameter fuel rods capable of high power per rod (≅48 kW/rod) with acceptable peak fuel temperatures. The performance of a 10-MW research reactor with a compact core of hexagonal TRIGA fuel clusters has been calculated in detail. With its light water coolant, beryllium and D₂O reflector regions, this reactor can provide in-core experiments with thermal fluxes in excess of 3 × 10¹⁴ n/cm²·s and fast fluxes (>0.1 MeV) of 2 × 10¹⁴ n/cm²·s. The core centerline thermal neutron flux in the D₂O reflector is about 2 × 10¹⁴ n/cm²·s and the average core power density is about 230 kW/liter. Using other TRIGA fuel developed for 25-MW test reactors but arranged in hexagonal arrays, power densities in excess of 300 kW/liter are readily available. A core with TRIGA fuel operating at 15 MW and generating such a power density is capable of producing thermal neutron fluxes in a D₂O reflector of 3 × 10¹⁴ n/cm²·s. A beryllium-filled central region of the core can further enhance the core leakage and hence the neutron flux in the reflector. (author)

  3. Using Bayes Model Averaging for Wind Power Forecasts

    Science.gov (United States)

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is however well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might however be the case that some sites tend to more accurately reflect the total output of the region, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2] the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. This solves the problem with longer consecutive periods where the input data
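
    To make the weighted-average idea concrete, here is a small Python sketch under simplifying assumptions (Gaussian member PDFs and likelihood-proportional weights instead of the gamma distributions and EM estimation used in the BMA literature); it is not the authors' procedure and all names and numbers are illustrative.

    import numpy as np
    from scipy import stats

    def bma_predictive_pdf(y_grid, member_forecasts, weights, sigma):
        """BMA predictive density: weighted average of Gaussian PDFs centred on each member forecast."""
        pdf = np.zeros_like(y_grid, dtype=float)
        for f, w in zip(member_forecasts, weights):
            pdf += w * stats.norm.pdf(y_grid, loc=f, scale=sigma)
        return pdf

    def estimate_weights(train_forecasts, train_obs, sigma):
        """Crude weight estimate: normalised training-period likelihood of each member."""
        like = np.array([
            np.prod(stats.norm.pdf(train_obs, loc=f, scale=sigma))
            for f in train_forecasts            # f: one member's forecasts over the training days
        ])
        return like / like.sum()

    # usage with toy numbers (3 ensemble members, 5 training days)
    train_forecasts = np.array([[5.1, 6.0, 4.8, 7.2, 5.5],
                                [4.0, 5.2, 4.1, 6.8, 5.0],
                                [6.5, 7.1, 5.9, 8.0, 6.4]])
    train_obs = np.array([5.0, 6.1, 4.9, 7.0, 5.6])
    w = estimate_weights(train_forecasts, train_obs, sigma=0.5)
    y = np.linspace(0, 12, 241)
    pdf = bma_predictive_pdf(y, member_forecasts=[5.4, 4.9, 6.6], weights=w, sigma=0.5)
    print(w, y[np.argmax(pdf)])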

  4. Potential of high-average-power solid state lasers

    International Nuclear Information System (INIS)

    Emmett, J.L.; Krupke, W.F.; Sooy, W.R.

    1984-01-01

    We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels

  5. High-average-power laser medium based on silica glass

    Science.gov (United States)

    Fujimoto, Yasushi; Nakatsuka, Masahiro

    2000-01-01

    Silica glass is one of the most attractive materials for a high-average-power laser. We have developed a new laser material based on silica glass using the zeolite method, which is effective for uniform dispersion of rare earth ions in silica glass. A high-quality medium, which is bubble-free and has quite low refractive index distortion, is required for realization of laser action. As the main cause of bubbling is hydroxyl species remaining in the gelled sample, we carefully chose the colloidal silica particles, the pH value of the hydrochloric acid for hydrolysis of tetraethylorthosilicate in the sol-gel process, and the temperature and atmosphere control during the sintering process, and thereby obtained a bubble-free, transparent, rare-earth-doped silica glass. The refractive index distortion of the sample is also discussed.

  6. Strengthened glass for high average power laser applications

    International Nuclear Information System (INIS)

    Cerqua, K.A.; Lindquist, A.; Jacobs, S.D.; Lambropoulos, J.

    1987-01-01

    Recent advancements in high repetition rate and high average power laser systems have put increasing demands on the development of improved solid state laser materials with high thermal loading capabilities. The authors have developed a process for strengthening a commercially available Nd-doped phosphate glass utilizing an ion-exchange process. Results of thermal loading fracture tests on moderate-size (160 x 15 x 8 mm) glass slabs have shown a 6-fold improvement in power loading capabilities for strengthened samples over unstrengthened slabs. Fractographic analysis of post-fracture samples has given insight into the mechanism of fracture in both unstrengthened and strengthened samples. Additional stress analysis calculations have supported these findings. In addition to processing the glass's surface during strengthening in a manner which preserves its post-treatment optical quality, the authors have developed an in-house optical fabrication technique utilizing acid polishing to minimize subsurface damage in samples prior to exchange treatment. Finally, extension of the strengthening process to alternate geometries of laser glass has produced encouraging results, which may expand the potential of strengthened glass in laser systems, making it an exciting prospect for many applications

  7. Drug addiction, love, and the higher power.

    Science.gov (United States)

    Sussman, Steve; Reynaud, Michel; Aubin, Henri-Jean; Leventhal, Adam M

    2011-09-01

    This discussion piece suggests that reliance on a Higher Power in drug abuse recovery programs is entertained among some addicts for its psychobiological effects. Prayer, meditation, early romantic love, and drug abuse may have in common activation of mesolimbic dopaminergic pathways of the brain and the generation of intense emotional states. In this sense, reliance on a Higher Power may operate as a substitute addiction, which replaces the psychobiological functions formerly served by drug use. Implications of this perspective are discussed.

  8. A high average power beam dump for an electron accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Xianghong, E-mail: xl66@cornell.edu [Cornell Laboratory of Accelerator-based Sciences and Education, Cornell University, Ithaca, NY 14853 (United States); Bazarov, Ivan; Dunham, Bruce M.; Kostroun, Vaclav O.; Li, Yulin; Smolenski, Karl W. [Cornell Laboratory of Accelerator-based Sciences and Education, Cornell University, Ithaca, NY 14853 (United States)

    2013-05-01

    The electron beam dump for Cornell University's Energy Recovery Linac (ERL) prototype injector was designed and manufactured to absorb 600 kW of electron beam power at beam energies between 5 and 15 MeV. It is constructed from an aluminum alloy using a cylindrical/conical geometry, with water cooling channels between an inner vacuum chamber and an outer jacket. The electron beam is defocused and its centroid is rastered around the axis of the dump to dilute the power density. A flexible joint connects the inner body and the outer jacket to minimize thermal stress. A quadrant detector at the entrance to the dump monitors the electron beam position and rastering. Electron scattering calculations, thermal and thermomechanical stress analysis, and radiation calculations are presented.

  9. Averaging Principle for the Higher Order Nonlinear Schrödinger Equation with a Random Fast Oscillation

    Science.gov (United States)

    Gao, Peng

    2018-04-01

    This work concerns the problem associated with the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into a multiscale stochastic partial differential equation. The stochastic averaging principle is a powerful tool for studying qualitative analysis of stochastic dynamical systems with different time scales. To be more precise, under suitable conditions, we prove that there is a limit process in which the fast varying process is averaged out, and the limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, by using the Khasminskii technique we can obtain the rate of strong convergence of the slow component towards the solution of the averaged equation, and as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.
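
    Schematically (our own generic LaTeX sketch of a slow-fast system of this type, not the paper's exact equations), the setup and the averaged limit can be written as:

    du^{\epsilon} = \bigl[\, \mathcal{A}\,u^{\epsilon} + f(u^{\epsilon}, v^{\epsilon}) \,\bigr]\,dt , \qquad
    dv^{\epsilon} = \frac{1}{\epsilon}\,\bigl[\, \Delta v^{\epsilon} + g(u^{\epsilon}, v^{\epsilon}) \,\bigr]\,dt
                  + \frac{1}{\sqrt{\epsilon}}\,dW_t ,

    d\bar{u} = \bigl[\, \mathcal{A}\,\bar{u} + \bar{f}(\bar{u}) \,\bigr]\,dt , \qquad
    \bar{f}(u) = \int f(u, v)\,\mu^{u}(dv) ,

    where $u^{\epsilon}$ is the slow (higher order Schrödinger) component, $v^{\epsilon}$ the fast reaction-diffusion component, $\mu^{u}$ the stationary measure of the fast equation with the slow variable frozen, and the Khasminskii-type argument bounds $\|u^{\epsilon}-\bar{u}\|$ as $\epsilon \to 0$.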

  11. Energy stability in a high average power FEL

    International Nuclear Information System (INIS)

    Merminga, L.; Bisognano, J.; Delayen, J.

    1995-01-01

    Recirculating, energy-recovering linacs can be used as driver accelerators for high power FELs. Instabilities which arise from fluctuations of the cavity fields or beam current are investigated. Energy changes can cause beam loss on apertures, or, when coupled to M56, phase oscillations. Both effects change the beam-induced voltage in the cavities and can lead to unstable variations of the accelerating field. Stability analysis for small perturbations from equilibrium is performed and threshold currents are determined. Furthermore, the analytical model is extended to include feedback. Comparison with simulation results derived from direct integration of the equations of motion is presented. Design strategies to increase the instability threshold are discussed, and the UV Demo FEL, proposed for construction at CEBAF, and the INP Recuperatron at Novosibirsk are used as examples

  12. The true bladder dose: on average thrice higher than the ICRU reference

    International Nuclear Information System (INIS)

    Barillot, I.; Horiot, J.C.; Maingon, P.; Bone-Lepinoy, M.C.; D'Hombres, A.; Comte, J.; Delignette, A.; Feutray, S.; Vaillant, D.

    1996-01-01

    The aim of this study is to compare the ICRU dose to doses at the bladder base located from ultrasonography measurements. Since 1990, the dose delivered to the bladder during utero-vaginal brachytherapy was systematically calculated at 3 or 4 points representative of the bladder base determined with ultrasonography. The ICRU Reference Dose (IRD) from films, the Maximum Dose (Dmax), the Mean Dose (Dmean) representative of the dose received by a large area of bladder mucosa, the Reference Dose Rate (RDR) and the Mean Dose Rate (MDR) were recorded. Material: from 1990 to 1994, 198 measurements were performed in 152 patients. 98 patients were treated for cervix carcinomas, 54 for endometrial carcinomas. Methods: Bladder complications were classified using the French-Italian Syllabus. The influence of doses and dose rates on complications was tested using the non-parametric t-test. Results: On average IRD is 21 Gy +/- 12 Gy, Dmax is 51 Gy +/- 21 Gy, Dmean is 40 Gy +/- 16 Gy. On average Dmax is thrice higher than IRD and Dmean twice higher than IRD. The same results are obtained for cervix and endometrium. Comparisons on dose rates were also performed: MDR is on average twice higher than RDR (RDR 48 cGy/h vs MDR 88 cGy/h). The five observed complications consist of incontinence only (3 G1, 1 G2, 1 G3). They are only statistically correlated with RDR, p=0.01 (46 cGy/h in patients without complications vs 74 cGy/h in patients with complications). However, the full responsibility of RT remains doubtful and should be shared with surgery in all cases. In summary: Bladder mucosa seems to tolerate well much higher doses than previously recorded without increased risk of severe sequelae. However, this finding is probably explained by our efforts to spare most of the bladder mucosa by (1) customised external irradiation therapy (4 fields, full bladder) and (2) reproduction of physiologic bladder filling during brachytherapy by intermittent clamping of the Foley catheter

  13. Control of underactuated driftless systems using higher-order averaging theory

    OpenAIRE

    Vela, Patricio A.; Burdick, Joel W.

    2003-01-01

    This paper applies a recently developed "generalized averaging theory" to construct stabilizing feedback control laws for underactuated driftless systems. These controls exponentially stabilize in the average; the actual system may orbit around the average. Conditions for which the orbit collapses to the averaged trajectory are given. An example validates the theory, demonstrating its utility.

  14. Potential for efficient frequency conversion at high average power using solid state nonlinear optical materials

    International Nuclear Information System (INIS)

    Eimerl, D.

    1985-01-01

    High-average-power frequency conversion using solid state nonlinear materials is discussed. Recent laboratory experience and new developments in design concepts show that current technology, a few tens of watts, may be extended by several orders of magnitude. For example, using KD*P, efficient doubling (>70%) of Nd:YAG at average powers approaching 100 kW is possible; and for doubling to the blue or ultraviolet regions, the average power may approach 1 MW. Configurations using segmented apertures permit essentially unlimited scaling of average power. High average power is achieved by configuring the nonlinear material as a set of thin plates with a large ratio of surface area to volume and by cooling the exposed surfaces with a flowing gas. The design and material fabrication of such a harmonic generator are well within current technology

  15. Money, Power, Equity and Higher Education

    Directory of Open Access Journals (Sweden)

    Seyed Ali Enjoo

    2018-03-01

    In the current issue of the Journal of Medical Education, Afshar in the Editorial “The Role of Private Sector in Higher Education; From Quantity and Quality to Access and Social Justice” proposed the importance of justice and quality (1). It seems that there are some differences between two types of private sector in higher education. One type of private financial support in higher education comes purely from the private sector without any contribution of the public sector. The second type of private finance in higher education, especially the type which has grown recently in Iranian higher education, is a combination of public higher education and the private sector: what was called the international branch of the university until recent years, and is nowadays called the self-governing campus of the university (2). In this type of private contribution to public higher education, those who have no or little money must pass a very hard national examination to be accepted in the university, while those who can pay the tuition fee could enter the best schools of that university without the exam (in the first year of the project or by loose standards or lower cut-off scores). Actually, this is an instance of double standards. One of the elements of being equitable and avoiding discrimination is to prevent undue achievement by the owners of power, such as owners of political, religious, economic, or military power, and to avoid any distinction according to race, colour, sex, language, etc. (3). In this type of private money absorption in higher education, while others have no extra way to enter the university that would lead to achievement of scientific power, the daughters and sons of the owners of economic power could have a special chance to achieve scientific power through the power of their parents, and there is a different criterion to enter the university based on non-scientific differences. In such a situation growing student movements against

  16. Design of a high average-power FEL driven by an existing 20 MV electrostatic-accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Kimel, I.; Elias, L.R. [Univ. of Central Florida, Orlando, FL (United States)

    1995-12-31

    There are some important applications where high average-power radiation is required. Two examples are industrial machining and space power-beaming. Unfortunately, to date no FEL has been able to show more than 10 watts of average power. To remedy this situation we started a program geared towards the development of high average-power FELs. As a first step we are building in our CREOL laboratory a compact FEL which will generate close to 1 kW in CW operation. As the next step we are also engaged in the design of a much higher average-power system based on a 20 MV electrostatic accelerator. This FEL will be capable of operating CW with a power output of 60 kW. The idea is to perform a high power demonstration using the existing 20 MV electrostatic accelerator at the Tandar facility in Buenos Aires. This machine has been dedicated to accelerating heavy ions for experiments and applications in nuclear and atomic physics. The necessary adaptations required to utilize the machine to accelerate electrons will be described. An important aspect of the design of the 20 MV system is the electron beam optics through almost 30 meters of accelerating and decelerating tubes as well as the undulator. Of equal importance is a careful design of the long resonator with mirrors able to withstand high power loading with proper heat dissipation features.

  17. Improved performance of high average power semiconductor arrays for applications in diode pumped solid state lasers

    International Nuclear Information System (INIS)

    Beach, R.; Emanuel, M.; Benett, W.; Freitas, B.; Ciarlo, D.; Carlson, N.; Sutton, S.; Skidmore, J.; Solarz, R.

    1994-01-01

    The average power performance capability of semiconductor diode laser arrays has improved dramatically over the past several years. These performance improvements, combined with cost reductions pursued by LLNL and others in the fabrication and packaging of diode lasers, have continued to reduce the price per average watt of laser diode radiation. Presently, we are at the point where the manufacturers of commercial high average power solid state laser systems used in material processing applications can now seriously consider the replacement of their flashlamp pumps with laser diode pump sources. Additionally, a low cost technique developed and demonstrated at LLNL for optically conditioning the output radiation of diode laser arrays has enabled a new and scalable average power diode-end-pumping architecture that can be simply implemented in diode pumped solid state laser systems (DPSSLs). This development allows the high average power DPSSL designer to look beyond the Nd ion for the first time. Along with high average power DPSSLs, which are appropriate for material processing applications, low and intermediate average power DPSSLs are now realizable at low enough costs to be attractive for use in many medical, electronic, and lithographic applications

  18. Estimation of average annual streamflows and power potentials for Alaska and Hawaii

    Energy Technology Data Exchange (ETDEWEB)

    Verdin, Kristine L. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL)

    2004-05-01

    This paper describes the work done to develop average annual streamflow estimates and power potential for the states of Alaska and Hawaii. The Elevation Derivatives for National Applications (EDNA) database was used, along with climatic datasets, to develop flow and power estimates for every stream reach in the EDNA database. Estimates of average annual streamflows were derived using state-specific regression equations, which were functions of average annual precipitation, precipitation intensity, drainage area, and other elevation-derived parameters. Power potential was calculated through the use of the average annual streamflow and the hydraulic head of each reach, which is calculated from the EDNA digital elevation model. In all, estimates of streamflow and power potential were calculated for over 170,000 stream segments in the Alaskan and Hawaiian datasets.
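
    The head-and-flow calculation can be illustrated with a short Python sketch of the underlying physics (P = ρ·g·Q·H); this is our own illustration and does not reproduce the report's state-specific regression equations for estimating streamflow.

    RHO_WATER = 1000.0   # density of water, kg/m^3
    G = 9.81             # gravitational acceleration, m/s^2

    def power_potential_kw(flow_m3s, head_m, efficiency=1.0):
        """Power potential of a stream reach in kW (gross potential if efficiency=1)."""
        return RHO_WATER * G * flow_m3s * head_m * efficiency / 1000.0

    # usage: a reach with 12 m^3/s average annual flow dropping 25 m
    print(power_potential_kw(12.0, 25.0))         # ~2943 kW gross potential
    print(power_potential_kw(12.0, 25.0, 0.85))   # ~2502 kW assuming 85% conversion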

  19. National survey provides average power quality profiles for different customer groups

    International Nuclear Information System (INIS)

    Hughes, B.; Chan, J.

    1996-01-01

    A three-year survey, beginning in 1991, was conducted by the Canadian Electrical Association to study the levels of power quality that exist in Canada, and to determine ways to increase utility expertise in making power quality measurements. Twenty-two utilities across Canada were involved, with a total of 550 sites being monitored, including residential and commercial customers. Power disturbances, power outages and power quality were recorded for each site. To create a group average power quality plot, the transient disturbance activity for each site was normalized to a per-channel, per-month basis and then divided into a grid. Results showed that the average power quality provided by Canadian utilities was very good. Almost all electrical disturbances within a customer's premises were created and stayed within those premises. Disturbances were generally beyond utility control. Utilities could, however, reduce the amount of time the steady-state voltage exceeds the CSA normal voltage upper limit. 5 figs

  20. Sub-100 fs high average power directly blue-diode-laser-pumped Ti:sapphire oscillator

    Science.gov (United States)

    Rohrbacher, Andreas; Markovic, Vesna; Pallmann, Wolfgang; Resan, Bojan

    2016-03-01

    Ti:sapphire oscillators are a proven technology to generate sub-100 fs (even sub-10 fs) pulses in the near infrared and are widely used in many high-impact scientific fields. However, the need for a bulky, expensive and complex pump source, typically a frequency-doubled multi-watt neodymium or optically pumped semiconductor laser, represents the main obstacle to more widespread use. The recent development of blue diodes emitting over 1 W has opened up the possibility of directly diode-laser-pumped Ti:sapphire oscillators. Besides the lower cost and smaller footprint, direct diode pumping provides better reliability, higher efficiency and better pointing stability, to name a few. The challenges that it poses are the lower absorption of Ti:sapphire at available diode wavelengths and the lower brightness compared to typical green pump lasers. For practical applications such as bio-medicine and nano-structuring, output powers in excess of 100 mW and sub-100 fs pulses are required. In this paper, we demonstrate a high average power directly blue-diode-laser-pumped Ti:sapphire oscillator without active cooling. The SESAM modelocking ensures reliable self-starting and robust operation. We will present two configurations emitting 460 mW in 82 fs pulses and 350 mW in 65 fs pulses, both operating at 92 MHz. The maximum obtained pulse energy reaches 5 nJ. A double-sided pumping scheme with two high power blue diode lasers was used for the output power scaling. The cavity design and the experimental results will be discussed in more detail.
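
    As a quick consistency check of the quoted figures (our own back-of-envelope arithmetic in Python, not from the paper), the pulse energy of a mode-locked oscillator is simply the average power divided by the repetition rate:

    average_power_w = 0.460                       # 460 mW configuration quoted above
    rep_rate_hz = 92e6                            # 92 MHz repetition rate
    pulse_energy_nj = average_power_w / rep_rate_hz * 1e9
    print(f"{pulse_energy_nj:.1f} nJ per pulse")  # ~5 nJ, matching the stated maximum pulse energy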

  1. High-Average-Power Diffraction Pulse-Compression Gratings Enabling Next-Generation Ultrafast Laser Systems

    Energy Technology Data Exchange (ETDEWEB)

    Alessi, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-11-01

    Pulse compressors for ultrafast lasers have been identified as a technology gap in the push towards high peak power systems with high average powers for industrial and scientific applications. Gratings for ultrashort (sub-150 fs) pulse compressors are metallic and can absorb a significant percentage of laser energy, resulting in up to 40% loss as well as thermal issues which degrade on-target performance. We have developed a next generation gold grating technology which we have scaled to the petawatt size. This resulted in improvements in efficiency, uniformity and processing as compared to previous substrate-etched gratings for high average power. This new design has a deposited dielectric material for the grating ridge rather than etching directly into the glass substrate. It has been observed that average powers as low as 1 W in a compressor can cause distortions in the on-target beam. We have developed and tested a method of actively cooling diffraction gratings which, in the case of gold gratings, can support a petawatt peak power laser with up to 600 W average power. We demonstrated thermo-mechanical modeling of a grating in its use environment and benchmarked it with experimental measurement. Multilayer dielectric (MLD) gratings are not yet used for these high peak power, ultrashort pulse durations due to their design challenges. We have designed and fabricated broad-bandwidth, low-dispersion MLD gratings suitable for delivering 30 fs pulses at high average power. This new grating design requires the use of a novel out-of-plane (OOP) compressor, which we have modeled, designed, built and tested. This prototype compressor yielded a transmission of 90% for a pulse with 45 nm bandwidth, free of spatial and angular chirp. In order to evaluate gratings and compressors built in this project we have commissioned a joule-class ultrafast Ti:sapphire laser system. Combining the grating cooling and MLD technologies developed here could enable petawatt laser systems to
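
    For orientation, the link between the quoted 45 nm bandwidth and the deliverable pulse duration can be sketched in Python with the transform-limit relation below; the 800 nm centre wavelength and the Gaussian time-bandwidth product of 0.441 are our assumptions, not values from the report.

    C_LIGHT = 299_792_458.0   # speed of light, m/s

    def transform_limited_fs(bandwidth_nm, center_nm=800.0, tbp=0.441):
        """Transform-limited Gaussian pulse duration (fs) for a given optical bandwidth."""
        delta_nu = C_LIGHT * bandwidth_nm * 1e-9 / (center_nm * 1e-9) ** 2   # bandwidth in Hz
        return tbp / delta_nu * 1e15

    # usage: 45 nm of bandwidth around 800 nm supports ~21 fs pulses,
    # comfortably shorter than the 30 fs the MLD compressor is designed to deliver
    print(transform_limited_fs(45.0))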

  2. Comparison of power pulses from homogeneous and time-average-equivalent models

    International Nuclear Information System (INIS)

    De, T.K.; Rouben, B.

    1995-01-01

    The time-average-equivalent model is an 'instantaneous' core model designed to reproduce the same three-dimensional power distribution as that generated by a time-average model. However, it has been found that the time-average-equivalent model gives a full-core static void reactivity about 8% smaller than the time-average or homogeneous models. To investigate the consequences of this difference in static void reactivity in time-dependent calculations, simulations of the power pulse following a hypothetical large-loss-of-coolant accident were performed with a homogeneous model and compared with the power pulse from the time-average-equivalent model. The results show that there is a much smaller difference in peak dynamic reactivity than in static void reactivity between the two models. This is attributed not only to the fact that voiding is not complete, but also to the retardation effect of the delayed-neutron precursors on the dynamic flux shape. The difference in peak reactivity between the models is 0.06 milli-k. The power pulses are essentially the same in the two models, because the delayed-neutron fraction in the time-average-equivalent model is lower than in the homogeneous model, which compensates for the lower void reactivity in the time-average-equivalent model. (author). 1 ref., 5 tabs., 9 figs

  3. Can the bivariate Hurst exponent be higher than an average of the separate Hurst exponents?

    Czech Academy of Sciences Publication Activity Database

    Krištoufek, Ladislav

    2015-01-01

    Roč. 431, č. 1 (2015), s. 124-127 ISSN 0378-4371 R&D Projects: GA ČR(CZ) GP14-11402P Institutional support: RVO:67985556 Keywords: Correlations * Power-law cross-correlations * Bivariate Hurst exponent * Spectrum coherence Subject RIV: AH - Economics Impact factor: 1.785, year: 2015 http://library.utia.cas.cz/separaty/2015/E/kristoufek-0452314.pdf

  4. Power electronic supply system with the wind turbine dedicated for average power receivers

    Science.gov (United States)

    Widerski, Tomasz; Skrzypek, Adam

    2018-05-01

    This article presents an original design of an AC-DC-AC converter dedicated to low power wind turbines. Such a set can be a good solution for powering isolated objects that do not have access to the power grid, for example isolated houses, mountain lodges or forester's lodges, where it can replace expensive diesel engine generators. An additional source of energy in the form of a mini wind farm is also a good alternative for yachts, marinas and tent sites, which are characterized by relatively low power consumption. This article presents a designed low power wind converter that is dedicated to these applications. The main design idea of the authors was to create a device that converts a very wide-range input voltage directly to a stable 230 V AC output voltage without a battery buffer. The authors focused on maximum safety of use and service. The converter includes thermal protection, short-circuit protection and overvoltage protection. The components have been selected in such a way as to ensure that the device functions as efficiently as possible.

  5. Generation and Applications of High Average Power Mid-IR Supercontinuum in Chalcogenide Fibers

    OpenAIRE

    Petersen, Christian Rosenberg

    2016-01-01

    Mid-infrared supercontinuum with up to 54.8 mW of average power and a maximum bandwidth of 1.77-8.66 μm is demonstrated by pumping tapered chalcogenide photonic crystal fibers with a MHz parametric source at 4 μm.

  6. Contemporary and prospective fuel cycles for WWER-440 based on new assemblies with higher uranium capacity and higher average fuel enrichment

    International Nuclear Information System (INIS)

    Gagarinskiy, A.A.; Saprykin, V.V.

    2009-01-01

    RRC 'Kurchatov Institute' has performed an extensive cycle of calculations intended to validate the opportunities for improving different fuel cycles for WWER-440 reactors. Work was performed to upgrade and improve WWER-440 fuel cycles on the basis of second-generation fuel assemblies, allowing core thermal power to be uprated to 107-108% of its nominal value (1375 MW) while maintaining the same fuel operation lifetime. Currently, intensive work is underway to develop fuel cycles based on second-generation assemblies with higher fuel capacity and average fuel enrichment per assembly increased up to 4.87% U-235. The fuel capacity of second-generation assemblies was increased by eliminating the central apertures of the fuel pellets and extending the pellet diameter, made possible by reduced fuel cladding thickness. This paper summarizes the results of work performed in the field of WWER-440 fuel cycle modernization, and presents as yet unexploited opportunities and prospects for further improvement of WWER-440 neutronic and operating parameters by means of additional optimization of fuel assembly designs and fuel element arrangements. (Authors)

  7. Recent advances in the development of high average power induction accelerators for industrial and environmental applications

    International Nuclear Information System (INIS)

    Neau, E.L.

    1994-01-01

    Short-pulse accelerator technology developed during the early 1960's through the late 1980's is being extended to high average power systems capable of use in industrial and environmental applications. Processes requiring high dose levels and/or high volume throughput will require systems with beam power levels from several hundreds of kilowatts to megawatts. Beam accelerating potentials can range from less than 1 MeV to as much as 10 MeV depending on the type of beam, depth of penetration required, and the density of the product being treated. This paper addresses the present status of a family of high average power systems, with output beam power levels up to 200 kW, now in operation that use saturable core switches to achieve output pulse widths of 50 to 80 nanoseconds. Inductive adders and field emission cathodes are used to generate beams of electrons or x-rays at up to 2.5 MeV over areas of 1000 cm². Similar high average power technology is being used at ≤ 1 MeV to drive repetitive ion beam sources for treatment of material surfaces over 100's of cm².

  8. Application of Bayesian model averaging to measurements of the primordial power spectrum

    International Nuclear Information System (INIS)

    Parkinson, David; Liddle, Andrew R.

    2010-01-01

    Cosmological parameter uncertainties are often stated assuming a particular model, neglecting the model uncertainty, even when Bayesian model selection is unable to identify a conclusive best model. Bayesian model averaging is a method for assessing parameter uncertainties in situations where there is also uncertainty in the underlying model. We apply model averaging to the estimation of the parameters associated with the primordial power spectra of curvature and tensor perturbations. We use CosmoNest and MultiNest to compute the model evidences and posteriors, using cosmic microwave data from WMAP, ACBAR, BOOMERanG, and CBI, plus large-scale structure data from the SDSS DR7. We find that the model-averaged 95% credible interval for the spectral index n_s using all of the data extends down to 0.940, where n_s is specified at a pivot scale of 0.015 Mpc⁻¹. For the tensors, model averaging can tighten the credible upper limit, depending on prior assumptions.
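
    For context, the model-averaged posterior quoted in this record takes the standard Bayesian model averaging form. In the notation below (ours, not the paper's), Z_i is the evidence of model M_i as returned by samplers such as CosmoNest or MultiNest, and π_i is its prior probability:

    \[
    P(M_i \mid D) = \frac{Z_i\,\pi_i}{\sum_j Z_j\,\pi_j},
    \qquad
    p(\theta \mid D) = \sum_i P(M_i \mid D)\, p(\theta \mid D, M_i).
    \]

    The model-averaged credible interval for a parameter such as the spectral index is then read off from the mixture posterior p(θ|D).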

  9. Design of an L-band normally conducting RF gun cavity for high peak and average RF power

    Energy Technology Data Exchange (ETDEWEB)

    Paramonov, V., E-mail: paramono@inr.ru [Institute for Nuclear Research of Russian Academy of Sciences, 60-th October Anniversary prospect 7a, 117312 Moscow (Russian Federation); Philipp, S. [Deutsches Elektronen-Synchrotron DESY, Platanenallee 6, D-15738 Zeuthen (Germany); Rybakov, I.; Skassyrskaya, A. [Institute for Nuclear Research of Russian Academy of Sciences, 60-th October Anniversary prospect 7a, 117312 Moscow (Russian Federation); Stephan, F. [Deutsches Elektronen-Synchrotron DESY, Platanenallee 6, D-15738 Zeuthen (Germany)

    2017-05-11

    To provide high quality electron bunches for linear accelerators used in free electron lasers and particle colliders, RF gun cavities operate with extreme electric fields, resulting in a high pulsed RF power. The main L-band superconducting linacs of such facilities also require a long RF pulse length, resulting in a high average dissipated RF power in the gun cavity. The newly developed cavity, based on the proven advantages of the existing DESY RF gun cavities, underwent significant changes. The shape of the cells is optimized to reduce the maximum surface electric field and the RF loss power. Furthermore, the cavity is equipped with an RF probe to measure the field amplitude and phase. The elaborated cooling circuit design results in a lower temperature rise on the cavity RF surface and permits a higher dissipated RF power. The paper presents the main solutions and results of the cavity design.

  10. A novel Generalized State-Space Averaging (GSSA) model for advanced aircraft electric power systems

    International Nuclear Information System (INIS)

    Ebrahimi, Hadi; El-Kishky, Hassan

    2015-01-01

    Highlights: • A study model is developed for aircraft electric power systems. • A novel GSSA model is developed for the interconnected power grid. • The system’s dynamics are characterized under various conditions. • The averaged results are compared and verified with the actual model. • The obtained measured values are validated with available aircraft standards. - Abstract: The growing complexity of Advanced Aircraft Electric Power Systems (AAEPS) has made conventional state-space averaging models inadequate for systems analysis and characterization. This paper presents a novel Generalized State-Space Averaging (GSSA) model for the system analysis, control and characterization of AAEPS. The primary objective of this paper is to introduce a mathematically elegant and computationally simple model to reproduce the AAEPS behavior at the critical nodes of the electric grid, and also to reduce some or all of the drawbacks (complexity, cost, simulation time, etc.) associated with sensor-based monitoring and computer-aided design software simulations popularly used for AAEPS characterization. It is shown in this paper that the GSSA approach overcomes the limitations of the conventional state-space averaging method, which fails to predict the behavior of AC signals in a circuit analysis. Unlike the conventional averaging method, the GSSA model presented in this paper includes both DC and AC components. This captures the key dynamic and steady-state characteristics of the aircraft electric systems. The developed model is then examined for visualization of the aircraft system and accuracy of computation under different loading scenarios. Through several case studies, the applicability and effectiveness of the GSSA method is verified by comparing to the actual real-time simulation model obtained from the Powersim 9 (PSIM9) software environment. The simulation results represent voltage, current and load power at the major nodes of the AAEPS. It has been demonstrated that
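
    For reference (notation ours, not the paper's), the generalized state-space averaging idea is to track sliding-window Fourier coefficients of each state variable instead of only its cycle average:

    \[
    \langle x \rangle_k(t) = \frac{1}{T}\int_{t-T}^{t} x(\tau)\, e^{-jk\omega\tau}\, d\tau,
    \qquad \omega = \frac{2\pi}{T},
    \qquad
    \frac{d}{dt}\langle x \rangle_k = \Big\langle \frac{dx}{dt} \Big\rangle_k - jk\omega\, \langle x \rangle_k .
    \]

    Conventional state-space averaging keeps only the k = 0 (DC) coefficient, which is why it cannot predict AC behavior; retaining the k = ±1 terms is what allows a GSSA model to capture both DC and AC components.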

  11. A Hybrid Islanding Detection Technique Using Average Rate of Voltage Change and Real Power Shift

    DEFF Research Database (Denmark)

    Mahat, Pukar; Chen, Zhe; Bak-Jensen, Birgitte

    2009-01-01

    The mainly used islanding detection techniques may be classified as active and passive techniques. Passive techniques don't perturb the system but they have larger nondetection zones, whereas active techniques have smaller nondetection zones but they perturb the system. In this paper, a new hybrid technique is proposed to solve this problem. An average rate of voltage change (passive technique) has been used to initiate a real power shift (active technique), which changes the real power of distributed generation (DG) when the passive technique cannot make a clear discrimination between islanding
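
    A minimal sketch of how such a hybrid scheme can be wired together is given below (Python). The window length, voltage-rate threshold and size of the requested power shift are illustrative assumptions, not values taken from the paper:

    import numpy as np

    def average_rate_of_voltage_change(v_rms, dt, window):
        """Average |dV/dt| (V/s) over the last `window` RMS-voltage samples."""
        dv = np.abs(np.diff(np.asarray(v_rms[-window:], dtype=float)))
        return dv.mean() / dt

    def hybrid_islanding_step(v_rms, dt, window=10, dv_threshold=5.0, shift_kw=50.0):
        """One evaluation step of the hybrid scheme: the passive stage watches the
        average rate of voltage change; when it exceeds the threshold but is not
        conclusive on its own, the active stage requests a real-power shift at the
        DG unit so that a genuine island drifts outside its protection limits."""
        rate = average_rate_of_voltage_change(v_rms, dt, window)
        if rate > dv_threshold:
            return {"suspect_islanding": True, "requested_power_shift_kw": shift_kw}
        return {"suspect_islanding": False, "requested_power_shift_kw": 0.0}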

  12. Rf system modeling for the high average power FEL at CEBAF

    International Nuclear Information System (INIS)

    Merminga, L.; Fugitt, J.; Neil, G.; Simrock, S.

    1995-01-01

    High beam loading and energy recovery compounded by use of superconducting cavities, which requires tight control of microphonic noise, place stringent constraints on the linac rf system design of the proposed high average power FEL at CEBAF. Longitudinal dynamics imposes off-crest operation, which in turn implies a large tuning angle to minimize power requirements. Amplitude and phase stability requirements are consistent with demonstrated performance at CEBAF. A numerical model of the CEBAF rf control system is presented and the response of the system is examined under large parameter variations, microphonic noise, and beam current fluctuations. Studies of the transient behavior lead to a plausible startup and recovery scenario

  13. PEAK-TO-AVERAGE POWER RATIO REDUCTION USING CODING AND HYBRID TECHNIQUES FOR OFDM SYSTEM

    OpenAIRE

    Bahubali K. Shiragapur; Uday Wali

    2016-01-01

    In this article, the investigated research work is based on error correction coding techniques used to reduce the undesirable Peak-to-Average Power Ratio (PAPR). The Golay code (24, 12), Reed-Muller code (16, 11), Hamming code (7, 4) and the Hybrid technique (a combination of signal scrambling and signal distortion) proposed by us are used as the coding techniques; the simulation results show that the Hybrid technique reduces PAPR significantly as compared to Conventional and Modified Selective Mapping techniques.

  14. High average power Q-switched 1314 nm two-crystal Nd:YLF laser

    CSIR Research Space (South Africa)

    Botha, RC

    2015-02-01

    Full Text Available. Optics Letters, Vol. 40, No. 4: "High average power Q-switched 1314 nm two-crystal Nd:YLF laser," R. C. Botha, W. Koen, M. J. D. Esser, C. Bollig, W. L. Combrinck, H. M. von Bergmann, and H. J. Strauss; affiliations include HartRAO (P.O. Box 443...)

  15. The use of induction linacs with nonlinear magnetic drive as high average power accelerators

    International Nuclear Information System (INIS)

    Birx, D.L.; Cook, E.G.; Hawkins, S.A.; Newton, M.A.; Poor, S.E.; Reginato, L.L.; Schmidt, J.A.; Smith, M.W.

    1985-01-01

    The marriage of induction linac technology with Nonlinear Magnetic Modulators has produced some unique capabilities. It appears possible to produce electron beams with average currents measured in amperes, at gradients exceeding 1 MeV/m, and with power efficiencies approaching 50%. A 2 MeV, 5 kA electron accelerator is under construction at Lawrence Livermore National Laboratory (LLNL) to allow us to demonstrate some of these concepts. Progress on this project is reported here. (orig.)

  16. Average spectral power changes at the hippocampal electroencephalogram in schizophrenia model induced by ketamine.

    Science.gov (United States)

    Sampaio, Luis Rafael L; Borges, Lucas T N; Silva, Joyse M F; de Andrade, Francisca Roselin O; Barbosa, Talita M; Oliveira, Tatiana Q; Macedo, Danielle; Lima, Ricardo F; Dantas, Leonardo P; Patrocinio, Manoel Cláudio A; do Vale, Otoni C; Vasconcelos, Silvânia M M

    2018-02-01

    The use of ketamine (Ket) as a pharmacological model of schizophrenia is an important tool for understanding the main mechanisms of glutamatergically regulated neural oscillations. Thus, the aim of the current study was to evaluate Ket-induced changes in the average spectral power using hippocampal quantitative electroencephalography (QEEG). To this end, male Wistar rats were submitted to stereotactic surgery for the implantation of an electrode in the right hippocampus. After three days, the animals were divided into four groups that were treated for 10 consecutive days with Ket (10, 50, or 100 mg/kg). Brainwaves were captured on the 1st or 10th day for the acute or repeated treatments, respectively. The administration of Ket (10, 50, or 100 mg/kg), compared with controls, induced changes in the hippocampal average spectral power of delta, theta, alpha, and low or high gamma waves after acute or repeated treatments. Therefore, based on the alterations in the average spectral power of hippocampal waves induced by Ket, our findings might provide a basis for the use of hippocampal QEEG in animal models of schizophrenia. © 2017 Société Française de Pharmacologie et de Thérapeutique.

  17. High-throughput machining using high average power ultrashort pulse lasers and ultrafast polygon scanner

    Science.gov (United States)

    Schille, Joerg; Schneider, Lutz; Streek, André; Kloetzer, Sascha; Loeschner, Udo

    2016-03-01

    In this paper, high-throughput ultrashort pulse laser machining is investigated on various industrial grade metals (Aluminium, Copper, Stainless steel) and Al2O3 ceramic at unprecedented processing speeds. This is achieved by using a high pulse repetition frequency picosecond laser with a maximum average output power of 270 W in conjunction with a unique, in-house developed two-axis polygon scanner. Initially, different concepts of polygon scanners are engineered and tested to find the optimal architecture for ultrafast and precise laser beam scanning. A remarkable scan speed of 1,000 m/s is achieved on the substrate and, thanks to the resulting low pulse overlap, thermal accumulation and plasma absorption effects are avoided at pulse repetition frequencies of up to 20 MHz. In order to identify optimum processing conditions for efficient high-average-power laser machining, the depths of cavities produced under varied parameter settings are analyzed and, from the results obtained, the characteristic removal values are specified. The maximum removal rates achieved are 27.8 mm³/min for Aluminium, 21.4 mm³/min for Copper, 15.3 mm³/min for Stainless steel and 129.1 mm³/min for Al2O3 when the full available laser power is irradiated at the optimum pulse repetition frequency.

  18. Specification of optical components for a high average-power laser environment

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, J.R.; Chow, R.; Rinmdahl, K.A.; Willis, J.B.; Wong, J.N.

    1997-06-25

    Optical component specifications for the high-average-power lasers and transport system used in the Atomic Vapor Laser Isotope Separation (AVLIS) plant must address demanding system performance requirements. The need for high performance optics has to be balanced against the practical desire to reduce the supply risks of cost and schedule. This is addressed in optical system design, careful planning with the optical industry, demonstration of plant quality parts, qualification of optical suppliers and processes, comprehensive procedures for evaluation and test, and a plan for corrective action.

  19. Laser properties of an improved average-power Nd-doped phosphate glass

    International Nuclear Information System (INIS)

    Payne, S.A.; Marshall, C.D.; Bayramian, A.J.

    1995-01-01

    The Nd-doped phosphate laser glass described herein can withstand 2.3 times greater thermal loading without fracture, compared to APG-1 (commercially-available average-power glass from Schott Glass Technologies). The enhanced thermal loading capability is established on the basis of the intrinsic thermomechanical properties (expansion, conduction, fracture toughness, and Young's modulus), and by direct thermally-induced fracture experiments using Ar-ion laser heating of the samples. This Nd-doped phosphate glass (referred to as APG-t) is found to be characterized by a 29% lower gain cross section and a 25% longer low-concentration emission lifetime

  20. Angle-averaged effective proton-carbon analyzing powers at intermediate energies

    International Nuclear Information System (INIS)

    Amir-Ahmadi, H.R.; Berg, A.M. van den; Hunyadi, M.; Kalantar-Nayestanaki, N.; Kis, M.; Mahjour-Shafiei, M.; Messchendorp, J.G.; Woertche, H.J.

    2006-01-01

    The angle-averaged effective analyzing powers, Ā_c, for proton-carbon inclusive scattering were measured as a function of the kinetic energy of protons in a double-scattering experiment. The measurements were performed in the kinetic energy range of 44.8-136.5 MeV at the center of 1-5 cm thick graphite analyzers, using a polarized proton beam on a CH₂ film or liquid hydrogen serving as the target for the primary scattering. These data can be used for measuring the polarization of protons emerging from other reactions such as H(d⃗,p⃗)d.

  1. Development of high-average-power-laser medium based on silica glass

    International Nuclear Information System (INIS)

    Fujimoto, Yasushi; Nakatsuka, Masahiro

    2000-01-01

    We have developed a high-average-power laser material based on silica glass. A new method using Zeolite X is effective for homogeneously dispersing rare-earth ions in silica glass to obtain a high quantum yield. A high-quality medium, which is bubble-free and has very low refractive-index distortion, is required for the realization of laser action; therefore, the gelation and sintering processes must be treated carefully, including the selection of the colloidal silica, the pH value for the hydrolysis of tetraethylorthosilicate, and the sintering history. The quality of the sintered sample and its applications are discussed. (author)

  2. Strips of hourly power options. Approximate hedging using average-based forward contracts

    International Nuclear Information System (INIS)

    Lindell, Andreas; Raab, Mikael

    2009-01-01

    We study approximate hedging strategies for a contingent claim consisting of a strip of independent hourly power options. The payoff of the contingent claim is a sum of the contributing hourly payoffs. As there is no forward market for specific hours, the fundamental problem is to find a reasonable hedge using exchange-traded forward contracts, e.g. average-based monthly contracts. The main result is a simple dynamic hedging strategy that reduces a significant part of the variance. The idea is to decompose the contingent claim into mathematically tractable components and to use empirical estimations to derive hedging deltas. Two benefits of the method are that the technique easily extends to more complex power derivatives and that only a few parameters need to be estimated. The hedging strategy based on the decomposition technique is compared with dynamic delta hedging strategies based on local minimum variance hedging, using a correlated traded asset. (author)
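
    As a rough illustration of the empirical hedging-delta idea (Python), the sketch below computes the generic minimum-variance hedge ratio of the claim against an average-based forward from P&L increments; it is not the paper's decomposition, and all data are synthetic:

    import numpy as np

    def minimum_variance_hedge_ratio(claim_pnl, forward_pnl):
        """Empirical minimum-variance delta: Cov(claim, forward) / Var(forward)."""
        cov = np.cov(claim_pnl, forward_pnl, ddof=1)
        return cov[0, 1] / cov[1, 1]

    # Synthetic increments standing in for the strip of hourly option payoffs
    # and the monthly average-based forward used as the hedge instrument.
    rng = np.random.default_rng(0)
    forward_pnl = rng.normal(0.0, 1.0, 500)
    claim_pnl = 0.6 * forward_pnl + rng.normal(0.0, 0.5, 500)

    delta = minimum_variance_hedge_ratio(claim_pnl, forward_pnl)
    residual_var = np.var(claim_pnl - delta * forward_pnl, ddof=1)
    print(f"hedge delta = {delta:.2f}, residual variance = {residual_var:.2f}")

    Re-estimating such deltas as new market data arrive gives a simple dynamic hedge; the paper's contribution is a decomposition of the claim into tractable components from which the deltas are estimated.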

  3. Cooperative AF Relaying in Spectrum-Sharing Systems: Performance Analysis under Average Interference Power Constraints and Nakagami-m Fading

    KAUST Repository

    Xia, Minghua; Aissa, Sonia

    2012-01-01

    In order to achieve the optimal end-to-end performance, the transmit powers of the secondary source and the relays are optimized with respect to average interference power constraints at primary users and Nakagami-m fading parameters of interference channels (for mathematical tractability, the desired channels from secondary source to relay and from relay to secondary destination are assumed to be subject to Rayleigh fading).

  4. Autoregressive moving average fitting for real standard deviation in Monte Carlo power distribution calculation

    International Nuclear Information System (INIS)

    Ueki, Taro

    2010-01-01

    The noise propagation of tallies in the Monte Carlo power method can be represented by an autoregressive moving average process of orders p and p-1 (ARMA(p,p-1)), where p is an integer larger than or equal to two. The formula for the autocorrelation of ARMA(p,q), p ≥ q+1, indicates that ARMA(3,2) fitting is equivalent to lumping the eigenmodes of fluctuation propagation into three modes, namely the slow, intermediate and fast attenuation modes. Therefore, ARMA(3,2) fitting was applied to the real standard deviation estimation of fuel assemblies at particular heights. The numerical results show that straightforward ARMA(3,2) fitting is promising, but a stability issue must be resolved before incorporation into the distributed version of production Monte Carlo codes. The same numerical results reveal that the average performance of ARMA(3,2) fitting is equivalent to that of the batch method in MCNP with a batch size larger than one hundred and smaller than two hundred cycles for a 1100 MWe pressurized water reactor. The bias correction of low-lag autocovariances in MVP/GMVP is demonstrated to have the potential of improving the average performance of ARMA(3,2) fitting. (author)
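
    A minimal sketch of the fitting step (Python, assuming a recent statsmodels release) is shown below: it fits ARMA(3,2) to cycle-wise tallies and inflates the naive standard deviation of the cycle mean using the autocorrelation of the fitted model. The stability and bias-correction issues discussed in the abstract are not addressed here:

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.tsa.arima_process import ArmaProcess

    def real_std_of_mean_arma32(tally_by_cycle):
        """Estimate the 'real' standard deviation of the mean of cycle-wise
        tallies by correcting for inter-cycle autocorrelation with an
        ARMA(3,2) fit."""
        y = np.asarray(tally_by_cycle, dtype=float)
        n = len(y)
        res = ARIMA(y, order=(3, 0, 2), trend="c").fit()
        # Theoretical autocorrelation of the fitted ARMA(3,2) model
        # (attribute names assume a recent statsmodels version).
        proc = ArmaProcess(np.r_[1.0, -res.arparams], np.r_[1.0, res.maparams])
        rho = proc.acf(n)
        # Var(mean) = (gamma0/n) * [1 + 2 * sum_{k=1}^{n-1} (1 - k/n) * rho_k]
        k = np.arange(1, n)
        inflation = 1.0 + 2.0 * np.sum((1.0 - k / n) * rho[1:])
        return float(np.sqrt(np.var(y, ddof=1) / n * inflation))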

  5. Average stopping powers for electron and photon sources for radiobiological modeling and microdosimetric applications

    Science.gov (United States)

    Vassiliev, Oleg N.; Kry, Stephen F.; Grosshans, David R.; Mohan, Radhe

    2018-03-01

    This study concerns calculation of the average electronic stopping power for photon and electron sources. It addresses two problems that have not yet been fully resolved. The first is defining the electron spectrum used for averaging in a way that is most suitable for radiobiological modeling. We define it as the spectrum of electrons entering the radiation-sensitive volume (SV) within the cell nucleus, at the moment they enter the SV. For this spectrum we derive a formula that combines linearly the fluence spectrum and the source spectrum. The latter is the distribution of initial energies of electrons produced by a source. Previous studies used either the fluence or source spectra, but not both, thereby neglecting a part of the complete spectrum. Our derived formula reduces to these two prior methods in the case of high and low energy sources, respectively. The second problem is extending electron spectra to low energies. Previous studies used an energy cut-off on the order of 1 keV. However, as we show, even for high energy sources, such as ⁶⁰Co, electrons with energies below 1 keV contribute about 30% to the dose. In this study all the spectra were calculated with the Geant4-DNA code and a cut-off energy of only 11 eV. We present formulas for calculating frequency- and dose-average stopping powers, numerical results for several important electron and photon sources, and tables with all the data needed to use our formulas for arbitrary electron and photon sources producing electrons with initial energies up to ∼1 MeV.
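
    For reference, the frequency- (fluence-weighted) and dose-average stopping powers referred to in this record are conventionally defined as below, with φ(E) standing for the relevant electron spectrum (here, the spectrum of electrons entering the SV) and S(E) the electronic stopping power; the notation is ours:

    \[
    \bar{S}_F = \frac{\int \varphi(E)\, S(E)\, dE}{\int \varphi(E)\, dE},
    \qquad
    \bar{S}_D = \frac{\int \varphi(E)\, S(E)^2\, dE}{\int \varphi(E)\, S(E)\, dE}.
    \]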

  6. Development of linear proton accelerators with the high average beam power

    CERN Document Server

    Bomko, V A; Egorov, A M

    2001-01-01

    A review of the current situation in the development of powerful linear proton accelerators carried out in many countries is given. The purpose of their creation is to solve the problems of safe and efficient nuclear energetics on the basis of an accelerator-reactor complex. In this case a proton beam with an energy of up to 1 GeV and an average current of 30 mA is required. At the same time there is a need for more powerful beams, for example for the production of tritium and the transmutation of nuclear waste products. The creation of accelerators of such power will be followed by the construction of linear accelerators of 1 GeV but with a more moderate beam current. They are intended for the investigation of many aspects of neutron physics and neutron engineering. Problems in the creation of efficient constructions for the basic and auxiliary equipment, the reliability of the systems, and the minimization of beam losses in the process of acceleration will be solved.

  7. Design and component specifications for high average power laser optical systems

    Energy Technology Data Exchange (ETDEWEB)

    O' Neil, R.W.; Sawicki, R.H.; Johnson, S.A.; Sweatt, W.C.

    1987-01-01

    Laser imaging and transport systems are considered in the regime where laser-induced damage and/or thermal distortion have significant design implications. System design and component specifications are discussed and quantified in terms of the net system transport efficiency and phase budget. Optical substrate materials, figure, surface roughness, coatings, and sizing are considered in the context of visible and near-ir optical systems that have been developed at Lawrence Livermore National Laboratory for laser isotope separation applications. In specific examples of general applicability, details of the bulk and/or surface absorption, peak and/or average power damage threshold, coating characteristics and function, substrate properties, or environmental factors will be shown to drive the component size, placement, and shape in high-power systems. To avoid overstressing commercial fabrication capabilities or component design specifications, procedures will be discussed for compensating for aberration buildup, using a few carefully placed adjustable mirrors. By coupling an aggressive measurements program on substrates and coatings to the design effort, an effective technique has been established to project high-power system performance realistically and, in the process, drive technology developments to improve performance or lower cost in large-scale laser optical systems. 13 refs.

  8. Cloud-based design of high average power traveling wave linacs

    Science.gov (United States)

    Kutsaev, S. V.; Eidelman, Y.; Bruhwiler, D. L.; Moeller, P.; Nagler, R.; Barbe Welzel, J.

    2017-12-01

    The design of industrial high average power traveling wave linacs must accurately consider some specific effects. For example, acceleration of high current beam reduces power flow in the accelerating waveguide. Space charge may influence the stability of longitudinal or transverse beam dynamics. Accurate treatment of beam loading is central to the design of high-power TW accelerators, and it is especially difficult to model in the meter-scale region where the electrons are nonrelativistic. Currently, there are two types of available codes: tracking codes (e.g. PARMELA or ASTRA) that cannot solve self-consistent problems, and particle-in-cell codes (e.g. Magic 3D or CST Particle Studio) that can model the physics correctly but are very time-consuming and resource-demanding. Hellweg is a special tool for quick and accurate electron dynamics simulation in traveling wave accelerating structures. The underlying theory of this software is based on the differential equations of motion. The effects considered in this code include beam loading, space charge forces, and external magnetic fields. We present the current capabilities of the code, provide benchmarking results, and discuss future plans. We also describe the browser-based GUI for executing Hellweg in the cloud.

  9. Design and component specifications for high average power laser optical systems

    International Nuclear Information System (INIS)

    O'Neil, R.W.; Sawicki, R.H.; Johnson, S.A.; Sweatt, W.C.

    1987-01-01

    Laser imaging and transport systems are considered in the regime where laser-induced damage and/or thermal distortion have significant design implications. System design and component specifications are discussed and quantified in terms of the net system transport efficiency and phase budget. Optical substrate materials, figure, surface roughness, coatings, and sizing are considered in the context of visible and near-ir optical systems that have been developed at Lawrence Livermore National Laboratory for laser isotope separation applications. In specific examples of general applicability, details of the bulk and/or surface absorption, peak and/or average power damage threshold, coating characteristics and function, substrate properties, or environmental factors will be shown to drive the component size, placement, and shape in high-power systems. To avoid overstressing commercial fabrication capabilities or component design specifications, procedures will be discussed for compensating for aberration buildup, using a few carefully placed adjustable mirrors. By coupling an aggressive measurements program on substrates and coatings to the design effort, an effective technique has been established to project high-power system performance realistically and, in the process, drive technology developments to improve performance or lower cost in large-scale laser optical systems. 13 refs

  10. PEAK-TO-AVERAGE POWER RATIO REDUCTION USING CODING AND HYBRID TECHNIQUES FOR OFDM SYSTEM

    Directory of Open Access Journals (Sweden)

    Bahubali K. Shiragapur

    2016-03-01

    Full Text Available In this article, the investigated research work is based on error correction coding techniques used to reduce the undesirable Peak-to-Average Power Ratio (PAPR). The Golay code (24, 12), Reed-Muller code (16, 11), Hamming code (7, 4) and the Hybrid technique (a combination of signal scrambling and signal distortion) proposed by us are used as the coding techniques; the simulation results show that the Hybrid technique reduces PAPR significantly as compared to Conventional and Modified Selective Mapping techniques. The simulation results are validated through statistical properties: for the proposed technique the autocorrelation value is maximum, which shows the reduction in PAPR. Symbol preference based on Hamming distance is the key idea used to reduce PAPR. The simulation results are discussed in detail in this article.

  11. Adaptive Control for Buck Power Converter Using Fixed Point Inducting Control and Zero Average Dynamics Strategies

    Science.gov (United States)

    Hoyos Velasco, Fredy Edimer; García, Nicolás Toro; Garcés Gómez, Yeison Alberto

    In this paper, the output voltage of a buck power converter is controlled by means of a quasi-sliding scheme. The Fixed Point Inducting Control (FPIC) technique is used for the control design, based on the Zero Average Dynamics (ZAD) strategy, including load estimation by means of the Least Mean Squares (LMS) method. The control scheme is tested in a Rapid Control Prototyping (RCP) system based on Digital Signal Processing (DSP) for dSPACE platform. The closed loop system shows adequate performance. The experimental and simulation results match. The main contribution of this paper is to introduce the load estimator by means of LMS, to make ZAD and FPIC control feasible in load variation conditions. In addition, comparison results for controlled buck converter with SMC, PID and ZAD-FPIC control techniques are shown.

  12. High average power CW FELs [Free Electron Laser] for application to plasma heating: Designs and experiments

    International Nuclear Information System (INIS)

    Booske, J.H.; Granatstein, V.L.; Radack, D.J.; Antonsen, T.M. Jr.; Bidwell, S.; Carmel, Y.; Destler, W.W.; Latham, P.E.; Levush, B.; Mayergoyz, I.D.; Zhang, Z.X.

    1989-01-01

    A short period wiggler (period ∼ 1 cm), sheet beam FEL has been proposed as a low-cost source of high average power (1 MW) millimeter-wave radiation for plasma heating and space-based radar applications. Recent calculations and experiments have confirmed the feasibility of this concept in such critical areas as rf wall heating, intercepted beam ("body") current, and high voltage (0.5 - 1 MV) sheet beam generation and propagation. Results of preliminary low-gain sheet beam FEL oscillator experiments using a field emission diode and pulse line accelerator have verified that lasing occurs at the predicted FEL frequency. Measured start oscillation currents also appear consistent with theoretical estimates. Finally, we consider the possibilities of using a short-period, superconducting planar wiggler for improved beam confinement, as well as access to the high gain, strong pump Compton regime with its potential for highly efficient FEL operation.

  13. Research on DC-RF superconducting photocathode injector for high average power FELs

    International Nuclear Information System (INIS)

    Zhao Kui; Hao Jiankui; Hu Yanle; Zhang Baocheng; Quan Shengwen; Chen Jiaer; Zhuang Jiejia

    2001-01-01

    To obtain high average current electron beams for a high average power Free Electron Laser (FEL), a DC-RF superconducting injector is designed. It consists of a DC extraction gap, a 1+1/2-cell superconducting cavity and a coaxial input system. The DC gap, which takes the form of a Pierce configuration, is connected to the 1+1/2-cell superconducting cavity. The photocathode is attached to the negative electrode of the DC gap. The anode forms the bottom of the 1/2 cell. Simulations are made to model the beam dynamics of the electron beams extracted by the DC gap and accelerated by the superconducting cavity. High quality electron beams with emittance lower than 3 π mm·mrad can be obtained. The optimization of experiments with the DC gap, as well as the design of experiments with the coaxial coupler, have all been completed. An optimized 1+1/2-cell superconducting cavity is in the process of being studied and manufactured.

  14. Cooperative AF Relaying in Spectrum-Sharing Systems: Performance Analysis under Average Interference Power Constraints and Nakagami-m Fading

    KAUST Repository

    Xia, Minghua

    2012-06-01

    As the electromagnetic spectrum resource becomes more and more scarce, improving spectral efficiency is extremely important for the sustainable development of wireless communication systems and services. Integrating cooperative relaying techniques into spectrum-sharing cognitive radio systems sheds new light on higher spectral efficiency. In this paper, we analyze the end-to-end performance of cooperative amplify-and-forward (AF) relaying in spectrum-sharing systems. In order to achieve the optimal end-to-end performance, the transmit powers of the secondary source and the relays are optimized with respect to average interference power constraints at primary users and Nakagami-m fading parameters of interference channels (for mathematical tractability, the desired channels from secondary source to relay and from relay to secondary destination are assumed to be subject to Rayleigh fading). Also, both partial and opportunistic relay-selection strategies are exploited to further enhance system performance. Based on the exact distribution functions of the end-to-end signal-to-noise ratio (SNR) obtained herein, the outage probability, average symbol error probability, diversity order, and ergodic capacity of the system under study are analytically investigated. Our results show that system performance is dominated by the resource constraints and it improves slowly with increasing average SNR. Furthermore, a larger Nakagami-m fading parameter on the interference channels deteriorates system performance slightly. On the other hand, when interference power constraints are stringent, opportunistic relay selection can be exploited to improve system performance significantly. All analytical results are corroborated by simulation results and they are shown to be efficient tools for exact evaluation of system performance.

  15. Peak-to-average power ratio reduction in interleaved OFDMA systems

    KAUST Repository

    Al-Shuhail, Shamael; Ali, Anum; Al-Naffouri, Tareq Y.

    2015-01-01

    Orthogonal frequency division multiple access (OFDMA) systems suffer from several impairments, and communication system engineers use powerful signal processing tools to combat these impairments and to keep up with the capacity/rate demands. One of these impairments is high peak-to-average power ratio (PAPR) and clipping is the simplest peak reduction scheme. However, in general, when multiple users are subjected to clipping, frequency domain clipping distortions spread over the spectrum of all users. This results in compromised performance and hence clipping distortions need to be mitigated at the receiver. Mitigating these distortions in multiuser case is not simple and requires complex clipping mitigation procedures at the receiver. However, it was observed that interleaved OFDMA presents a special structure that results in only self-inflicted clipping distortions (i.e., the distortions of a particular user do not interfere with other users). In this work, we prove analytically that distortions do not spread over multiple users (while utilizing interleaved carrier assignment in OFDMA) and construct a compressed sensing system that utilizes the sparsity of the clipping distortions and recovers it on each user. We provide numerical results that validate our analysis and show promising performance for the proposed clipping recovery scheme.

  16. 7.5 MeV High Average Power Linear Accelerator System for Food Irradiation Applications

    International Nuclear Information System (INIS)

    Eichenberger, Carl; Palmer, Dennis; Wong, Sik-Lam; Robison, Greg; Miller, Bruce; Shimer, Daniel

    2005-09-01

    In December 2004 the US Food and Drug Administration (FDA) approved the use of 7.5 MeV X-rays for irradiation of food products. The increased efficiency for treatment at 7.5 MeV (versus the previous maximum allowable X-ray energy of 5 MeV) will have a significant impact on processing rates and, therefore, reduce the per-package cost of irradiation using X-rays. Titan Pulse Sciences Division is developing a new food irradiation system based on this ruling. The irradiation system incorporates a 7.5 MeV electron linear accelerator (linac) that is capable of 100 kW average power. A tantalum converter is positioned close to the exit window of the scan horn. The linac is an RF standing waveguide structure based on a 5 MeV accelerator that is used for X-ray processing of food products. The linac is powered by a 1300 MHz (L-Band) klystron tube. The electrical drive for the klystron is a solid state modulator that uses inductive energy store and solid-state opening switches. The system is designed to operate 7000 hours per year. Keywords: Rf Accelerator, Solid state modulator, X-ray processing

  17. Peak-to-average power ratio reduction in interleaved OFDMA systems

    KAUST Repository

    Al-Shuhail, Shamael

    2015-12-07

    Orthogonal frequency division multiple access (OFDMA) systems suffer from several impairments, and communication system engineers use powerful signal processing tools to combat these impairments and to keep up with the capacity/rate demands. One of these impairments is high peak-to-average power ratio (PAPR) and clipping is the simplest peak reduction scheme. However, in general, when multiple users are subjected to clipping, frequency domain clipping distortions spread over the spectrum of all users. This results in compromised performance and hence clipping distortions need to be mitigated at the receiver. Mitigating these distortions in multiuser case is not simple and requires complex clipping mitigation procedures at the receiver. However, it was observed that interleaved OFDMA presents a special structure that results in only self-inflicted clipping distortions (i.e., the distortions of a particular user do not interfere with other users). In this work, we prove analytically that distortions do not spread over multiple users (while utilizing interleaved carrier assignment in OFDMA) and construct a compressed sensing system that utilizes the sparsity of the clipping distortions and recovers it on each user. We provide numerical results that validate our analysis and show promising performance for the proposed clipping recovery scheme.

  18. Development of high average power industrial Nd:YAG laser with peak power of 10 kW class

    International Nuclear Information System (INIS)

    Kim, Cheol Jung; Kim, Jeong Mook; Jung, Chin Mann; Kim, Soo Sung; Kim, Kwang Suk; Kim, Min Suk; Cho, Jae Wan; Kim, Duk Hyun

    1992-03-01

    We developed and commercialized an industrial pulsed Nd:YAG laser with a peak power of the 10 kW class for fine cutting and drilling applications. Several commercial models were investigated with respect to design and performance. We improved its quality to the level of commercial Nd:YAG lasers through endurance tests of each part of the laser system. The maximum peak power and average power of our laser were 10 kW and 250 W, respectively. Moreover, the laser pulse width could be controlled continuously from 0.5 msec to 20 msec. Many optical parts were localized, which lowered the cost considerably. Only a few parts were imported, and almost 90% of the cost was localized. Also, to accelerate commercialization by the joint company, training and technology transfer were pursued through the joint participation of company researchers in design and assembly from the early stage. Three Nd:YAG lasers have been assembled and will be tested in industrial manufacturing processes to prove the capability of the developed Nd:YAG laser with potential users. (Author)

  19. Compact Source of Electron Beam with Energy of 200 keV and Average Power of 2 kW

    CERN Document Server

    Kazarezov, Ivan; Balakin, Vladimir E; Bryazgin, Alex; Bulatov, Alexandre; Glazkov, Ivan; Kokin, Evgeny; Krainov, Gennady; Kuznetsov, Gennady I; Molokoedov, Andrey; Tuvik, Alfred

    2005-01-01

    The paper describes a compact electron beam source with an average electron energy of 200 keV. The source operates with a pulse power of up to 2 MW at an average power not higher than 2 kW, a pulsed beam current of up to 10 A, a pulse duration of up to 2 μs, and a repetition rate of up to 5 kHz. The electron beam is extracted through an aluminium-beryllium alloy foil. The pulse duration and repetition rate can be changed from the control desk. The high-voltage generator for the source, with an output voltage of up to 220 kV, is realized using a voltage-doubling circuit consisting of 30 sections. The insulation is gaseous SF6 under a pressure of 8 atm. The cooling of the foil-supporting tubes is provided by a water-alcohol mixture from an independent source. The beam output window dimensions are 180×75 mm, the energy spread in the beam is +10/-30%, and the source weight is 80 kg.

  20. Micro-engineered first wall tungsten armor for high average power laser fusion energy systems

    Science.gov (United States)

    Sharafat, Shahram; Ghoniem, Nasr M.; Anderson, Michael; Williams, Brian; Blanchard, Jake; Snead, Lance; HAPL Team

    2005-12-01

    The high average power laser program is developing an inertial fusion energy demonstration power reactor with a solid first wall chamber. The first wall (FW) will be subject to high energy density radiation and high doses of high energy helium implantation. Tungsten has been identified as the candidate material for a FW armor. The fundamental concern is the long term thermo-mechanical survivability of the armor against the effects of high temperature pulsed operation and exfoliation due to the retention of implanted helium. Even if a solid tungsten armor coating would survive the high temperature cyclic operation with minimal failure, the high helium implantation and retention would result in unacceptable material loss rates. Micro-engineered materials, such as castellated structures, plasma sprayed nano-porous coatings and refractory foams, are suggested as first wall armor materials to address these fundamental concerns. A micro-engineered FW armor would have to be designed with specific geometric features that tolerate high cyclic heating loads and recycle most of the implanted helium without any significant failure. Micro-engineered materials are briefly reviewed. In particular, plasma-sprayed nano-porous tungsten and tungsten foams are assessed for their potential to accommodate inertial fusion specific loads. Tests show that nano-porous plasma spray coatings can be manufactured with high permeability to helium gas, while retaining relatively high thermal conductivities. Tungsten foams were shown to be able to overcome thermo-mechanical loads by cell rotation and deformation. Helium implantation tests have shown that pulsed implantation and heating releases significant levels of implanted helium. Helium implantation and release from tungsten was modeled using an expanded kinetic rate theory to include the effects of pulsed implantations and thermal cycles. Although significant challenges remain, micro-engineered materials are shown to constitute potential

  1. Micro-engineered first wall tungsten armor for high average power laser fusion energy systems

    International Nuclear Information System (INIS)

    Sharafat, Shahram; Ghoniem, Nasr M.; Anderson, Michael; Williams, Brian; Blanchard, Jake; Snead, Lance

    2005-01-01

    The high average power laser program is developing an inertial fusion energy demonstration power reactor with a solid first wall chamber. The first wall (FW) will be subject to high energy density radiation and high doses of high energy helium implantation. Tungsten has been identified as the candidate material for a FW armor. The fundamental concern is the long term thermo-mechanical survivability of the armor against the effects of high temperature pulsed operation and exfoliation due to the retention of implanted helium. Even if a solid tungsten armor coating would survive the high temperature cyclic operation with minimal failure, the high helium implantation and retention would result in unacceptable material loss rates. Micro-engineered materials, such as castellated structures, plasma sprayed nano-porous coatings and refractory foams, are suggested as first wall armor materials to address these fundamental concerns. A micro-engineered FW armor would have to be designed with specific geometric features that tolerate high cyclic heating loads and recycle most of the implanted helium without any significant failure. Micro-engineered materials are briefly reviewed. In particular, plasma-sprayed nano-porous tungsten and tungsten foams are assessed for their potential to accommodate inertial fusion specific loads. Tests show that nano-porous plasma spray coatings can be manufactured with high permeability to helium gas, while retaining relatively high thermal conductivities. Tungsten foams were shown to be able to overcome thermo-mechanical loads by cell rotation and deformation. Helium implantation tests have shown that pulsed implantation and heating releases significant levels of implanted helium. Helium implantation and release from tungsten was modeled using an expanded kinetic rate theory to include the effects of pulsed implantations and thermal cycles. Although significant challenges remain, micro-engineered materials are shown to constitute potential

  2. Industrial applications of high-average power high-peak power nanosecond pulse duration Nd:YAG lasers

    Science.gov (United States)

    Harrison, Paul M.; Ellwi, Samir

    2009-02-01

    Within the vast range of laser materials processing applications, every type of successful commercial laser has been driven by a major industrial process. For high average power, high peak power, nanosecond pulse duration Nd:YAG DPSS lasers, the enabling process is high speed surface engineering. This includes applications such as thin film patterning and selective coating removal in markets such as the flat panel displays (FPD), solar and automotive industries. Applications such as these tend to require working spots that have uniform intensity distribution using specific shapes and dimensions, so a range of innovative beam delivery systems have been developed that convert the gaussian beam shape produced by the laser into a range of rectangular and/or shaped spots, as required by demands of each project. In this paper the authors will discuss the key parameters of this type of laser and examine why they are important for high speed surface engineering projects, and how they affect the underlying laser-material interaction and the removal mechanism. Several case studies will be considered in the FPD and solar markets, exploring the close link between the application, the key laser characteristics and the beam delivery system that link these together.

  3. A Front End for Multipetawatt Lasers Based on a High-Energy, High-Average-Power Optical Parametric Chirped-Pulse Amplifier

    International Nuclear Information System (INIS)

    Bagnoud, V.

    2004-01-01

    We report on a high-energy, high-average-power optical parametric chirped-pulse amplifier developed as the front end for the OMEGA EP laser. The amplifier provides a gain larger than 10⁹ in two stages, leading to a total energy of 400 mJ with a pump-to-signal conversion efficiency higher than 25%.

  4. Systematic approach to peak-to-average power ratio in OFDM

    Science.gov (United States)

    Schurgers, Curt

    2001-11-01

    OFDM multicarrier systems support high data rate wireless transmission using orthogonal frequency channels, and require no extensive equalization, yet offer excellent immunity against fading and inter-symbol interference. The major drawback of these systems is the large Peak-to-Average power Ratio (PAR) of the transmit signal, which renders a straightforward implementation very costly and inefficient. Existing approaches that attack this PAR issue are abundant, but no systematic framework or comparison between them exist to date. They sometimes even differ in the problem definition itself and consequently in the basic approach to follow. In this work, we provide a systematic approach that resolves this ambiguity and spans the existing PAR solutions. The basis of our framework is the observation that efficient system implementations require a reduced signal dynamic range. This range reduction can be modeled as a hard limiting, also referred to as clipping, where the extra distortion has to be considered as part of the total noise tradeoff. We illustrate that the different PAR solutions manipulate this tradeoff in alternative ways in order to improve the performance. Furthermore, we discuss and compare a broad range of such techniques and organize them into three classes: block coding, clip effect transformation and probabilistic.
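
    The PAR (PAPR) definition and the clipping baseline discussed here are easy to reproduce numerically. The sketch below (Python) builds an arbitrary 256-subcarrier QPSK OFDM symbol and hard-limits its envelope 4 dB above the RMS level; both the subcarrier mapping and the clipping level are illustrative choices:

    import numpy as np

    def papr_db(x):
        """Peak-to-average power ratio of a complex baseband signal, in dB."""
        p = np.abs(x) ** 2
        return 10.0 * np.log10(p.max() / p.mean())

    def clip(x, clip_ratio_db):
        """Hard-limit the envelope at clip_ratio_db above the RMS level, keeping the phase."""
        a_max = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (clip_ratio_db / 20.0)
        scale = np.minimum(1.0, a_max / np.maximum(np.abs(x), 1e-12))
        return x * scale

    rng = np.random.default_rng(1)
    n_sc, oversample = 256, 4
    symbols = (rng.choice([-1, 1], n_sc) + 1j * rng.choice([-1, 1], n_sc)) / np.sqrt(2)
    spectrum = np.zeros(n_sc * oversample, dtype=complex)   # 4x oversampled IFFT grid
    spectrum[:n_sc // 2] = symbols[:n_sc // 2]
    spectrum[-(n_sc // 2):] = symbols[n_sc // 2:]
    x = np.fft.ifft(spectrum)

    print(f"PAPR before clipping: {papr_db(x):.1f} dB")
    print(f"PAPR after 4 dB clipping: {papr_db(clip(x, 4.0)):.1f} dB")

    The clipped waveform trades the lower PAR against in-band distortion and spectral regrowth, which is exactly the noise tradeoff that the techniques surveyed in this record manipulate in different ways.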

  5. A ROBUST CLUSTER HEAD SELECTION BASED ON NEIGHBORHOOD CONTRIBUTION AND AVERAGE MINIMUM POWER FOR MANETs

    Directory of Open Access Journals (Sweden)

    S.Balaji

    2015-06-01

    Full Text Available Mobile ad hoc network is an instantaneous wireless network that is dynamic in nature. It supports single-hop and multihop communication. In this infrastructure-less network, clustering is a significant model to maintain the topology of the network. The clustering process includes different phases such as cluster formation, cluster head selection and cluster maintenance. Choosing the cluster head is important, as the stability of the network depends on a well-organized and resourceful cluster head. When a node has an increased number of neighbors it can act as a link between the neighbor nodes, which in turn reduces the number of hops in multihop communication. Ideally, the node with more neighbors should also have enough energy available to provide stability in the network. Hence these aspects demand attention. In weight-based cluster head selection, closeness and the average minimum power required are considered for purging the ineligible nodes. The optimal set of nodes selected after purging will compete to become the cluster head, and the node with the maximum weight is selected as the cluster head. A mathematical formulation is developed to show that the proposed method provides the optimum result. It is also suggested that the weight factor in calculating the node weight should give precise importance to energy and node stability.
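
    A toy sketch of the weight-based selection described above (Python); the field names, the purge rule and the equal weighting of node degree and residual energy are our own illustrative assumptions:

    def select_cluster_head(nodes, w_degree=0.5, w_energy=0.5):
        """Purge nodes whose residual energy is below the average minimum power
        they need, then pick the heaviest of the remaining candidates."""
        eligible = [n for n in nodes if n["energy"] >= n["min_power_required"]]
        if not eligible:
            return None
        return max(eligible, key=lambda n: w_degree * n["neighbors"] + w_energy * n["energy"])

    nodes = [
        {"id": "A", "neighbors": 4, "energy": 0.9, "min_power_required": 0.2},
        {"id": "B", "neighbors": 6, "energy": 0.3, "min_power_required": 0.4},  # purged
        {"id": "C", "neighbors": 5, "energy": 0.7, "min_power_required": 0.2},
    ]
    print(select_cluster_head(nodes)["id"])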

  6. The Higher Power of Patron: Profile of Newbery Winner

    Science.gov (United States)

    Oleck, Joan

    2007-01-01

    One lousy starred review. That was all, initially, that Susan Patron had to show for the 10 years she spent writing "The Higher Power of Lucky," her funny, tender story of a little girl struggling to gain control over her life. One star, from "Kirkus Reviews," for the heart and soul Patron poured into her second novel. Positive notices had…

  7. Peer Mentoring in Higher Education: Issues of Power and Control

    Science.gov (United States)

    Christie, Hazel

    2014-01-01

    In response to widespread support for mentoring schemes in higher education this article calls for a more critical investigation of the dynamics of power and control, which are intrinsic to the mentoring process, and questions presumptions that mentoring brings only positive benefits to its participants. It provides this more critical appraisal by…

  8. Radio frequency plasma nitriding of aluminium at higher power levels

    International Nuclear Information System (INIS)

    Gredelj, Sabina; Kumar, Sunil; Gerson, Andrea R.; Cavallaro, Giuseppe P.

    2006-01-01

    Nitriding of aluminium 2011 using a radio-frequency plasma at higher power levels (500 and 700 W) and a lower substrate temperature (500 °C) resulted in higher AlN/Al2O3 ratios than obtained at 100 W and 575 °C. AlN/Al2O3 ratios derived from X-ray photoelectron spectroscopic analysis (and corroborated by heavy-ion elastic recoil time-of-flight spectrometry) for treatments performed at 100 W (575 °C), 500 W (500 °C) and 700 W (500 °C) were 1.0, 1.5 and 3.3, respectively. Scanning electron microscopy revealed that the plasma-nitrided surfaces obtained at higher power levels exhibited a much finer nodular morphology than that obtained at 100 W.

  9. 53 W average power few-cycle fiber laser system generating soft x rays up to the water window.

    Science.gov (United States)

    Rothhardt, Jan; Hädrich, Steffen; Klenke, Arno; Demmler, Stefan; Hoffmann, Armin; Gotschall, Thomas; Eidam, Tino; Krebs, Manuel; Limpert, Jens; Tünnermann, Andreas

    2014-09-01

    We report on a few-cycle laser system delivering sub-8-fs pulses with 353 μJ pulse energy and 25 GW of peak power at up to 150 kHz repetition rate. The corresponding average output power is as high as 53 W, which represents the highest average power obtained from any few-cycle laser architecture so far. The combination of both high average and high peak power provides unique opportunities for applications. We demonstrate high harmonic generation up to the water window and record-high photon flux in the soft x-ray spectral region. This tabletop source of high-photon flux soft x rays will, for example, enable coherent diffractive imaging with sub-10-nm resolution in the near future.

  10. Observer design for DC/DC power converters with bilinear averaged model

    NARCIS (Netherlands)

    Spinu, V.; Dam, M.C.A.; Lazar, M.

    2012-01-01

    Increased demand for high bandwidth and high efficiency has made full state-feedback control solutions very attractive to the power-electronics community. However, full state measurement is economically prohibitive for a large range of applications. Moreover, state measurements in switching power converters

  11. Spatial models for probabilistic prediction of wind power with application to annual-average and high temporal resolution data

    DEFF Research Database (Denmark)

    Lenzi, Amanda; Pinson, Pierre; Clemmensen, Line Katrine Harder

    2017-01-01

    average wind power generation, and for a high temporal resolution (typically wind power averages over 15-min time steps). In both cases, we use a spatial hierarchical statistical model in which spatial correlation is captured by a latent Gaussian field. We explore how such models can be handled with stochastic partial differential equation approximations of Matérn Gaussian fields together with Integrated Nested Laplace Approximations. We demonstrate the proposed methods on wind farm data from Western Denmark, and compare the results to those obtained with standard geostatistical methods. The results show

  12. The Application of Cryogenic Laser Physics to the Development of High Average Power Ultra-Short Pulse Lasers

    Directory of Open Access Journals (Sweden)

    David C. Brown

    2016-01-01

    Full Text Available Ultrafast laser physics continues to advance at a rapid pace, driven primarily by the development of more powerful and sophisticated diode-pumping sources, the development of new laser materials, and new laser and amplification approaches such as optical parametric chirped-pulse amplification. The rapid development of high average power cryogenic laser sources seems likely to play a crucial role in realizing the long-sought goal of powerful ultrafast sources that offer concomitant high peak and average powers. In this paper, we review the optical, thermal, thermo-optic and laser parameters important to cryogenic laser technology, recently achieved laser and laser materials progress, the progression of cryogenic laser technology, discuss the importance of cryogenic laser technology in ultrafast laser science, and what advances are likely to be achieved in the near-future.

  13. Determination of the in-core power and the average core temperature of low power research reactors using gamma dose rate measurements

    International Nuclear Information System (INIS)

    Osei Poku, L.

    2012-01-01

    Most reactors incorporate out-of-core neutron detectors to monitor the reactor power. An accurate relationship between the powers indicated by these detectors and the actual core thermal power is required. This relationship is established by calibrating the thermal power. The most common method used in calibrating the thermal power of low-power reactors is the neutron activation technique. To enhance the multiplicity and diversity of measurements of the thermal neutron flux and/or power and of the temperature difference and/or average core temperature of low-power research reactors, an alternative and complementary method has been developed in addition to the current method. The thermal neutron flux/power and the temperature difference/average core temperature were correlated with the measured gamma dose rate. The thermal neutron flux and power predicted using the gamma dose rate measurements were in good agreement with the calibrated/indicated thermal neutron fluxes and powers. The predicted data were also in good agreement with the thermal neutron fluxes and powers obtained using the activation technique. At an indicated power of 30 kW, the gamma dose rate measurements predicted thermal neutron fluxes of (1 × 10¹² ± 0.00255 × 10¹²) n/cm²·s and (0.987 × 10¹² ± 0.00243 × 10¹²) n/cm²·s, which corresponded to powers of (30.06 ± 0.075) kW and (29.6 ± 0.073) kW for the normal pool water level and 40 cm below the normal level, respectively. At an indicated power of 15 kW, the gamma dose rate measurements predicted thermal neutron fluxes of (5.07 × 10¹¹ ± 0.025 × 10¹¹) n/cm²·s and (5.12 × 10¹¹ ± 0.024 × 10¹¹) n/cm²·s, which corresponded to powers of (15.21 ± 0.075) kW and (15.36 ± 0.073) kW for the normal pool water level and 40 cm below the normal level, respectively. The power predicted by this work also compared well with the power obtained from a three-dimensional neutronic analysis of the GHARR-1 core. The predicted power also compares well with the power calculated using a correlation equation obtained from
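
    The core of the method is a calibration curve relating the measured gamma dose rate to the indicated thermal power. A minimal sketch (Python) with entirely hypothetical calibration data, assuming an approximately linear relationship over the low-power range, is:

    import numpy as np

    # Hypothetical calibration pairs: gamma dose-rate readings (arbitrary units)
    # recorded while the reactor was held at known indicated powers (kW).
    dose_rate = np.array([2.1, 4.0, 8.3, 16.5, 33.0])
    power_kw = np.array([1.9, 3.8, 7.6, 15.2, 30.1])

    slope, intercept = np.polyfit(dose_rate, power_kw, 1)

    def predict_power_kw(measured_dose_rate):
        """Predict thermal power (kW) from a gamma dose-rate reading via the fitted line."""
        return slope * measured_dose_rate + intercept

    print(f"predicted power at a reading of 20.0: {predict_power_kw(20.0):.1f} kW")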

  14. Efficient processing of CFRP with a picosecond laser with up to 1.4 kW average power

    Science.gov (United States)

    Onuseit, V.; Freitag, C.; Wiedenmann, M.; Weber, R.; Negel, J.-P.; Löscher, A.; Abdou Ahmed, M.; Graf, T.

    2015-03-01

    Laser processing of carbon fiber reinforced plastic (CFRP) is a very promising method for solving many of the challenges of large-volume production of lightweight constructions in the automotive and airplane industries. However, the laser process is currently limited by two main issues. First, the quality might be reduced due to thermal damage, and second, the high process energy needed for sublimation of the carbon fibers requires laser sources with high average power for productive processing. To keep the thermal damage of the CFRP below 10 μm, intensities above 10⁸ W/cm² are needed. To reach these high intensities in the processing area, ultra-short pulse laser systems are favored. Unfortunately, the average power of commercially available laser systems is up to now in the range of several tens to a few hundred watts. To sublimate the carbon fibers, a large volume-specific enthalpy of 85 J/mm³ is necessary. This means, for example, that cutting 2 mm thick material with a kerf width of 0.2 mm at an industry-typical 100 mm/s requires several kilowatts of average power. At the IFSW, a thin-disk multipass amplifier yielding a maximum average output power of 1100 W (300 kHz, 8 ps, 3.7 mJ) allowed for the first time the processing of CFRP at this average power and pulse energy level with picosecond pulse duration. With this unique laser system, cutting of 2 mm thick CFRP at an effective average cutting speed of 150 mm/s with thermal damage below 10 μm was demonstrated.
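
    To make the quoted power requirement explicit, here is a back-of-the-envelope check based only on the numbers given above, assuming all removed material must be sublimated:

    ```python
    # Minimum average power implied by the volume-specific sublimation enthalpy of CFRP.
    enthalpy = 85.0        # J/mm^3, volume-specific enthalpy quoted above
    thickness = 2.0        # mm, material thickness
    kerf_width = 0.2       # mm, cutting kerf width
    cutting_speed = 100.0  # mm/s, industry-typical effective cutting speed

    volume_rate = thickness * kerf_width * cutting_speed  # mm^3/s of material removed
    power_needed = enthalpy * volume_rate                 # J/s = W
    print(f"removal rate: {volume_rate:.0f} mm^3/s -> required power: {power_needed/1000:.1f} kW")
    # -> about 3.4 kW, i.e. "several kilowatts" of average power before any process losses.
    ```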

  15. Synchronously pumped optical parametric oscillation in periodically poled lithium niobate with 1-W average output power

    NARCIS (Netherlands)

    Graf, T.; McConnell, G.; Ferguson, A.I.; Bente, E.A.J.M.; Burns, D.; Dawson, M.D.

    1999-01-01

    We report on a rugged all-solid-state laser source of near-IR radiation in the range of 1461–1601 nm based on a high-power Nd:YVO4 laser that is mode locked by a semiconductor saturable Bragg reflector as the pump source of a synchronously pumped optical parametric oscillator with a periodically

  16. High average power scaling of optical parametric amplification through cascaded difference-frequency generators

    Science.gov (United States)

    Jovanovic, Igor; Comaskey, Brian J.

    2004-09-14

    A first pump pulse and a signal pulse are injected into a first optical parametric amplifier. This produces a first amplified signal pulse. At least one additional pump pulse and the first amplified signal pulse are injected into at least one additional optical parametric amplifier producing an increased power coherent optical pulse.

  17. Program THEK energy production units of average power and using thermal conversion of solar radiation

    Science.gov (United States)

    1978-01-01

    General studies undertaken by the C.N.R.S. in the field of solar power plants have raised the problem of building energy production units in the medium range of electrical power, on the order of 100 kW. Among the possible solutions, the principle of using distributed heliothermal converters has been selected as being, in the current state of the art, the most advantageous. This principle consists of converting concentrated radiation into heat using a series of heliothermal conversion modules scattered over the ground; the produced heat is collected by a heat-carrying fluid circulating inside a thermal loop leading to a device for both regulation and storage.

  18. Mixed-mode distribution systems for high average power electron cyclotron heating

    International Nuclear Information System (INIS)

    White, T.L.; Kimrey, H.D.; Bigelow, T.S.

    1984-01-01

    The ELMO Bumpy Torus-Scale (EBT-S) experiment consists of 24 simple magnetic mirrors joined end-to-end to form a torus of closed magnetic field lines. In this paper, we first describe an 80% efficient mixed-mode unpolarized heating system which couples 28-GHz microwave power to the midplane of the 24 EBT-S cavities. The system consists of two radiused bends feeding a quasi-optical mixed-mode toroidal distribution manifold. Balancing power to the 24 cavities is determined by detailed computer ray tracing. A second 28-GHz electron cyclotron heating (ECH) system using a polarized grid high field launcher is described. The launcher penetrates the fundamental ECH resonant surface without a vacuum window with no observable breakdown up to 1 kW/cm² (source limited) with 24 kW delivered to the plasma. This system uses the same mixed-mode output as the first system but polarizes the launched power by using a grid of WR42 apertures. The efficiency of this system is 32%, but can be improved by feeding multiple launchers from a separate distribution manifold.

  19. Green-diode-pumped femtosecond Ti:Sapphire laser with up to 450 mW average power.

    Science.gov (United States)

    Gürel, K; Wittwer, V J; Hoffmann, M; Saraceno, C J; Hakobyan, S; Resan, B; Rohrbacher, A; Weingarten, K; Schilt, S; Südmeyer, T

    2015-11-16

    We investigate power-scaling of green-diode-pumped Ti:Sapphire lasers in continuous-wave (CW) and mode-locked operation. In a first configuration with a total pump power of up to 2 W incident onto the crystal, we achieved a CW power of up to 440 mW and self-starting mode-locking with up to 200 mW average power in 68-fs pulses using a semiconductor saturable absorber mirror (SESAM) as the saturable absorber. In a second configuration with up to 3 W of pump power incident onto the crystal, we achieved up to 650 mW in CW operation and up to 450 mW in 58-fs pulses using Kerr-lens mode-locking (KLM). The shortest pulse duration was 39 fs, which was achieved at 350 mW average power using KLM. The mode-locked laser generates a pulse train at repetition rates around 400 MHz. No complex cooling system is required: neither the SESAM nor the Ti:Sapphire crystal is actively cooled; only air cooling is applied to the pump diodes using a small fan. Because of mass production for laser displays, we expect that prices for green laser diodes will become very favorable in the near future, opening the door for low-cost Ti:Sapphire lasers. This will be highly attractive for potential mass applications such as biomedical imaging and sensing.

  20. High energy, high average power solid state green or UV laser

    Science.gov (United States)

    Hackel, Lloyd A.; Norton, Mary; Dane, C. Brent

    2004-03-02

    A system for producing a green or UV output beam for illuminating a large area with relatively high beam fluence. A Nd:glass laser produces a near-infrared output by means of an oscillator that generates a high-quality but low-power beam, followed by multi-pass amplification in a zig-zag slab amplifier with wavefront correction in a phase conjugator at the midway point of the multi-pass amplification. The green or UV output is generated by means of conversion crystals that follow the final propagation through the zig-zag slab amplifier.

  1. Average stopping powers and the use of non-analyte spiking for the determination of phosphorus and sodium by PIPPS

    International Nuclear Information System (INIS)

    Olivier, C.; Morland, H.J.

    1991-01-01

    By using particle-induced prompt photon spectrometry (PIPPS), the ratios of the average stopping powers in samples and standards can be used to determine elemental compositions. Since the average stopping powers in the samples are in general unknown, this procedure poses a problem. It has been shown that by spiking the sample with a known amount of a compound with known stopping power and containing a non-analyte element, appropriate stopping powers in the samples can be determined by measuring the prompt gamma-ray yields induced in the spike. Using 5-MeV protons and lithium compounds as non-analyte spikes, sodium and phosphorus were determined in ivory, while sodium was determined in geological samples. For the stopping power determinations in the samples the 429-keV ⁷Li n(1,0) and 478-keV ⁷Li(1,0) gamma rays were measured, while for the phosphorus and sodium determinations the high-yield 1266-keV ³¹P(1,0), 440-keV ²³Na(1,0), 1634-keV ²³Na α(1,0) and 1637-keV ²³Na(2,1) gamma rays were used. The method was tested by analyzing the standard reference materials SRM 91, 120c and 694.

  2. Electrical method for the measurements of volume averaged electron density and effective coupled power to the plasma bulk

    Science.gov (United States)

    Henault, M.; Wattieaux, G.; Lecas, T.; Renouard, J. P.; Boufendi, L.

    2016-02-01

    Nanoparticles growing in or injected into a low-pressure cold plasma generated by a radio-frequency capacitively coupled discharge induce strong modifications in the electrical parameters of both the plasma and the discharge. In this paper, a non-intrusive method, based on the measurement of the plasma impedance, is used to determine the volume-averaged electron density and the effective power coupled to the plasma bulk. Good agreement is found when the results are compared to those given by other well-known and established methods.

  3. High-throughput machining using a high-average power ultrashort pulse laser and high-speed polygon scanner

    Science.gov (United States)

    Schille, Joerg; Schneider, Lutz; Streek, André; Kloetzer, Sascha; Loeschner, Udo

    2016-09-01

    High-throughput ultrashort pulse laser machining is investigated on various industrial-grade metals (aluminum, copper, and stainless steel) and Al2O3 ceramic at unprecedented processing speeds. This is achieved by using a high-average-power picosecond laser in conjunction with a unique, in-house developed polygon-mirror-based biaxial scanning system. Different concepts of polygon scanners were engineered and tested to find the best architecture for high-speed and precision laser beam scanning. In order to identify the optimum conditions for efficient processing with high average laser powers, the depths of cavities made in the samples by varying the processing parameter settings are analyzed and, from the results obtained, the characteristic removal values are specified. For overlapping pulses of optimum fluence irradiated at 187 W average laser power, the removal rate is as high as 27.8 mm³/min for aluminum, 21.4 mm³/min for copper, 15.3 mm³/min for stainless steel, and 129.1 mm³/min for Al2O3. On stainless steel, it is demonstrated that the removal rate increases to 23.3 mm³/min when the laser beam is moved very fast. This is due to the low pulse overlap achieved at a beam deflection speed of 800 m/s; thus, laser beam shielding can be avoided even when irradiating high-repetition-rate 20-MHz pulses.

  4. The Mercury Laser System-A scaleable average-power laser for fusion and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Ebbers, C A; Moses, E I

    2008-03-26

    Nestled in a valley between the whitecaps of the Pacific and the snowcapped crests of the Sierra Nevada, Lawrence Livermore National Laboratory (LLNL) is home to the nearly complete National Ignition Facility (NIF). The purpose of NIF is to create a miniature star on demand. An enormous amount of laser light energy (1.8 MJ in a pulse that is 20 ns in duration) will be focused into a small gold cylinder approximately the size of a pencil eraser. Centered in the gold cylinder (or hohlraum) will be a nearly perfect sphere filled with a complex mixture of hydrogen gas isotopes that is similar to the atmosphere of our Sun. During experiments, the laser light will hit the inside of the gold cylinder, heating the metal until it emits X-rays (similar to how your electric stove coil emits visible red light when heated). The X-rays will be used to compress the hydrogen-like gas with such pressure that the gas atoms will combine or 'fuse' together, producing the next heavier element (helium) and releasing energy in the form of energetic particles. 2010 will mark the first credible attempt at this world-changing event: the achievement of fusion energy 'break-even' on Earth using NIF, the world's largest laser! NIF is anticipated to eventually perform this immense technological accomplishment once per week, with the capability of firing up to six shots per day - eliminating the need for continued underground testing of our nation's nuclear stockpile, in addition to opening up new realms of science. But what about the day after NIF achieves ignition? Although NIF will achieve fusion energy break-even and gain, the facility is not designed to harness the enormous potential of fusion for energy generation. A fusion power plant, as opposed to a world-class engineering research facility, would require that the laser deliver drive pulses nearly 100,000 times more frequently - a rate closer to 10 shots per second as opposed to several shots per day.

  5. The Mercury Laser System-A scaleable average-power laser for fusion and beyond

    International Nuclear Information System (INIS)

    Ebbers, C.A.; Moses, E.I.

    2009-01-01

    Nestled in a valley between the whitecaps of the Pacific and the snowcapped crests of the Sierra Nevada, Lawrence Livermore National Laboratory (LLNL) is home to the nearly complete National Ignition Facility (NIF). The purpose of NIF is to create a miniature star on demand. An enormous amount of laser light energy (1.8 MJ in a pulse that is 20 ns in duration) will be focused into a small gold cylinder approximately the size of a pencil eraser. Centered in the gold cylinder (or hohlraum) will be a nearly perfect sphere filled with a complex mixture of hydrogen gas isotopes that is similar to the atmosphere of our Sun. During experiments, the laser light will hit the inside of the gold cylinder, heating the metal until it emits X-rays (similar to how your electric stove coil emits visible red light when heated). The X-rays will be used to compress the hydrogen-like gas with such pressure that the gas atoms will combine or 'fuse' together, producing the next heavier element (helium) and releasing energy in the form of energetic particles. 2010 will mark the first credible attempt at this world-changing event: the achievement of fusion energy 'break-even' on Earth using NIF, the world's largest laser! NIF is anticipated to eventually perform this immense technological accomplishment once per week, with the capability of firing up to six shots per day - eliminating the need for continued underground testing of our nation's nuclear stockpile, in addition to opening up new realms of science. But what about the day after NIF achieves ignition? Although NIF will achieve fusion energy break-even and gain, the facility is not designed to harness the enormous potential of fusion for energy generation. A fusion power plant, as opposed to a world-class engineering research facility, would require that the laser deliver drive pulses nearly 100,000 times more frequently - a rate closer to 10 shots per second as opposed to several shots per day.

  6. Capturing power at higher voltages from arrays of microbial fuel cells without voltage reversal

    KAUST Repository

    Kim, Younggy

    2011-01-01

    Voltages produced by microbial fuel cells (MFCs) cannot be sustainably increased by linking them in series due to voltage reversal, which substantially reduces stack voltages. It was shown here that MFC voltages can be increased with continuous power production using an electronic circuit containing two sets of multiple capacitors that were alternately charged and discharged every second. Capacitors were charged in parallel by the MFCs, but linked in series while discharging to the circuit load (resistor). The parallel charging of the capacitors avoided voltage reversal, while discharging the capacitors in series produced up to 2.5 V with four capacitors. There were negligible energy losses in the circuit compared to the 20-40% losses typically obtained with MFCs using DC-DC converters to increase voltage. Coulombic efficiencies were 67% when power was generated via four capacitors, compared to only 38% when individual MFCs were operated with a fixed resistance of 250 Ω. The maximum power produced using the capacitors was not adversely affected by variable performance of the MFCs, showing that power generation can be maintained even if individual MFCs perform differently. Longer capacitor charging and discharging cycles of up to 4 min maintained the average power but increased peak power by up to 2.6 times. These results show that capacitors can be used to easily obtain higher voltages from MFCs, allowing for more useful capture of energy from arrays of MFCs. © 2011 The Royal Society of Chemistry.
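
    A minimal numeric sketch of the parallel-charge/series-discharge idea described above; the individual per-cell voltages are illustrative assumptions, not values from the paper:

    ```python
    # Parallel charging keeps each capacitor at its own MFC's voltage (avoiding reversal);
    # series discharge sums those voltages across the stack.
    mfc_voltages = [0.60, 0.65, 0.62, 0.63]  # assumed per-cell charging voltages, volts

    stack_voltage = sum(mfc_voltages)  # series discharge: the capacitor voltages add
    print(f"series discharge voltage with {len(mfc_voltages)} capacitors: {stack_voltage:.2f} V")
    # Mismatched cells simply contribute less; no cell is driven into voltage reversal,
    # which is the failure mode of wiring MFCs directly in series.
    ```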

  7. Optimization and Annual Average Power Predictions of a Backward Bent Duct Buoy Oscillating Water Column Device Using the Wells Turbine.

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Christopher S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bull, Diana L [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Willits, Steven M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Fontaine, Arnold A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-08-01

    This Technical Report presents work completed by The Applied Research Laboratory at The Pennsylvania State University, in conjunction with Sandia National Labs, on the optimization of the power conversion chain (PCC) design to maximize the Average Annual Electric Power (AAEP) output of an Oscillating Water Column (OWC) device. The design consists of two independent stages. First, the design of a floating OWC, a Backward Bent Duct Buoy (BBDB), and second the design of the PCC. The pneumatic power output of the BBDB in random waves is optimized through the use of a hydrodynamically coupled, linear, frequency-domain, performance model that links the oscillating structure to internal air-pressure fluctuations. The PCC optimization is centered on the selection and sizing of a Wells Turbine and electric power generation equipment. The optimization of the PCC involves the following variables: the type of Wells Turbine (fixed or variable pitched, with and without guide vanes), the radius of the turbine, the optimal vent pressure, the sizing of the power electronics, and number of turbines. Also included in this Technical Report are further details on how rotor thrust and torque are estimated, along with further details on the type of variable frequency drive selected.

  8. Power Based Phase-Locked Loop Under Adverse Conditions with Moving Average Filter for Single-Phase System

    Directory of Open Access Journals (Sweden)

    Menxi Xie

    2017-06-01

    Full Text Available A high-performance synchronization method is critical for grid-connected power converters. For single-phase systems, the power-based phase-locked loop (pPLL) uses a multiplier as the phase detector (PD). When the single-phase grid voltage is distorted, the phase error information contains AC disturbances oscillating at integer multiples of the fundamental frequency, which lead to detection errors. This paper presents a new scheme based on a moving average filter (MAF) applied in-loop in the pPLL. The signal characteristics of the phase error are discussed in detail. A predictive rule is adopted to compensate the delay introduced by the MAF, thus achieving a fast dynamic response. When the frequency deviates from its nominal value, the estimated frequency is fed back to adjust the filter window length of the MAF and the buffer size of the predictive rule. Simulation and experimental results show that the proposed PLL achieves good performance under adverse grid conditions.
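
    A minimal sketch of the in-loop moving average filter idea, written as generic code rather than the authors' implementation; the sample rate and nominal grid frequency below are assumed values:

    ```python
    from collections import deque

    # Moving average filter (MAF) sized to one fundamental period, so that disturbances
    # at integer multiples of the fundamental frequency average to (approximately) zero.
    class MovingAverageFilter:
        def __init__(self, window_len):
            self.buf = deque(maxlen=window_len)

        def update(self, x):
            self.buf.append(x)
            return sum(self.buf) / len(self.buf)

    fs, f_grid = 10_000, 50.0                    # assumed sample rate (Hz) and nominal grid frequency
    maf = MovingAverageFilter(int(fs / f_grid))  # window = one fundamental period (200 samples)

    # In a pPLL, each phase-detector output sample would be passed through maf.update()
    # before the loop filter; if the estimated frequency drifts, the window length
    # (and, in the paper's scheme, the predictive-rule buffer size) is recomputed.
    ```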

  9. Relationship Between Selected Strength and Power Assessments to Peak and Average Velocity of the Drive Block in Offensive Line Play.

    Science.gov (United States)

    Jacobson, Bert H; Conchola, Eric C; Smith, Doug B; Akehi, Kazuma; Glass, Rob G

    2016-08-01

    Jacobson, BH, Conchola, EC, Smith, DB, Akehi, K, and Glass, RG. Relationship between selected strength and power assessments to peak and average velocity of the drive block in offensive line play. J Strength Cond Res 30(8): 2202-2205, 2016-Typical strength training for football includes the squat and power clean (PC), and routinely measured variables include the 1 repetition maximum (1RM) squat and 1RM PC along with the vertical jump (VJ) for power. However, little research exists regarding the association between these strength exercises and the velocity of an actual on-the-field performance. The purpose of this study was to investigate the relationship of peak velocity (PV) and average velocity (AV) of the offensive line drive block to the 1RM squat, 1RM PC, the VJ, body mass (BM), and body composition. One repetition maximum assessments for the squat and PC were recorded along with VJ height, BM, and percent body fat. These data were correlated with PV and AV while performing the drive block. Peak velocity and AV were assessed using a Tendo Power and Speed Analyzer as the linemen fired from a 3-point stance into a stationary blocking dummy. Pearson product analysis yielded significant (p ≤ 0.05) correlations between PV and AV and the VJ, the squat, and the PC. A significant inverse association was found for both PV and AV and body fat. These data help to confirm that the typical exercises recommended for American football linemen are positively associated with both the PV and AV needed for drive block effectiveness. It is recommended that these exercises remain the focus of a weight room protocol and that ancillary exercises be built around them. Additionally, efforts to reduce body fat are recommended.

  10. Half-Watt average power femtosecond source spanning 3-8 µm based on subharmonic generation in GaAs

    Science.gov (United States)

    Smolski, Viktor; Vasilyev, Sergey; Moskalev, Igor; Mirov, Mike; Ru, Qitian; Muraviev, Andrey; Schunemann, Peter; Mirov, Sergey; Gapontsev, Valentin; Vodopyanov, Konstantin

    2018-06-01

    Frequency combs with a wide instantaneous spectral span covering the 3-20 µm molecular fingerprint region are highly desirable for broadband and high-resolution frequency comb spectroscopy, trace molecular detection, and remote sensing. We demonstrate a novel approach for generating high-average-power middle-infrared (MIR) output suitable for producing frequency combs with an instantaneous spectral coverage close to 1.5 octaves. Our method is based on utilizing a highly efficient and compact Kerr-lens mode-locked Cr2+:ZnS laser operating at 2.35-µm central wavelength with 6-W average power, 77-fs pulse duration, and a high 0.9-GHz repetition rate to pump a degenerate (subharmonic) optical parametric oscillator (OPO) based on a quasi-phase-matched GaAs crystal. Such a subharmonic OPO is a nearly ideal frequency converter capable of extending the benefits of frequency combs based on well-established mode-locked pump lasers to the MIR region through rigorous, phase- and frequency-locked down conversion. We report a 0.5-W output in the form of an ultra-broadband spectrum spanning 3-8 µm measured at the 50-dB level.

  11. High average power, diode pumped petawatt laser systems: a new generation of lasers enabling precision science and commercial applications

    Science.gov (United States)

    Haefner, C. L.; Bayramian, A.; Betts, S.; Bopp, R.; Buck, S.; Cupal, J.; Drouin, M.; Erlandson, A.; Horáček, J.; Horner, J.; Jarboe, J.; Kasl, K.; Kim, D.; Koh, E.; Koubíková, L.; Maranville, W.; Marshall, C.; Mason, D.; Menapace, J.; Miller, P.; Mazurek, P.; Naylon, A.; Novák, J.; Peceli, D.; Rosso, P.; Schaffers, K.; Sistrunk, E.; Smith, D.; Spinka, T.; Stanley, J.; Steele, R.; Stolz, C.; Suratwala, T.; Telford, S.; Thoma, J.; VanBlarcom, D.; Weiss, J.; Wegner, P.

    2017-05-01

    Large laser systems that deliver optical pulses with peak powers exceeding one petawatt (PW) have been constructed at dozens of research facilities worldwide and have fostered research in High-Energy-Density (HED) science, high-field and nonlinear physics [1]. Furthermore, the high intensities exceeding 10¹⁸ W/cm² allow for efficiently driving secondary sources that inherit some of the properties of the laser pulse, e.g. pulse duration, spatial and/or divergence characteristics. In the intervening decades since that first PW laser, single-shot proof-of-principle experiments have been successful in demonstrating new high-intensity laser-matter interactions and subsequent secondary particle and photon sources. These secondary sources include generation and acceleration of charged-particle (electron, proton, ion) and neutron beams, x-ray and gamma-ray sources, generation of radioisotopes for positron emission tomography (PET), targeted cancer therapy, medical imaging, and the transmutation of radioactive waste [2, 3]. Each of these promising applications requires lasers with peak powers of hundreds of terawatts (TW) to petawatts (PW) and with average powers of tens to hundreds of kW to achieve the required secondary source flux.

  12. Development of a higher power cooling system for lithium targets.

    Science.gov (United States)

    Phoenix, B; Green, S; Scott, M C; Bennett, J R J; Edgecock, T R

    2015-12-01

    The accelerator-based Boron Neutron Capture Therapy beam at the University of Birmingham is based around a thick solid lithium target cooled by heavy water. Significant upgrades to Birmingham's Dynamitron accelerator are planned prior to commencing a clinical trial. These upgrades will result in an increase in the maximum achievable beam current to at least 3 mA. Various upgrades to the target cooling system to cope with this increased power have been investigated. Tests of a phase-change coolant known as "binary ice" have been carried out using an induction heater to provide a power input comparable to the Dynamitron beam. The experimental data show no improvement over chilled water in the submerged jet system, with both systems exhibiting the same heat-input-to-target-temperature relation for a given flow rate. The relationship between the cooling circuit pumping rate and the target temperature in the submerged jet system has also been tested. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Design and development of a 6 MW peak, 24 kW average power S-band klystron

    Energy Technology Data Exchange (ETDEWEB)

    Joshi, L.M.; Meena, Rakesh; Nangru, Subhash; Kant, Deepender; Pal, Debashis; Lamba, O.S.; Jindal, Vishnu; Jangid, Sushil Kumar, E-mail: joslm@rediffmail.com [Central Electronics Engineering Research Institute, Council of Scientific and Industrial Research, Pilani (India); Chakravarthy, D.P.; Dixit, Kavita [Bhabha Atomic Research Centre, Mumbai (India)

    2011-07-01

    A 6 MW peak, 24 kW average power S-band klystron is under development at CEERI, Pilani, under an MoU between BARC and CEERI. The design of the klystron has been completed. The electron gun has been designed using the TRAK and MAGIC codes. The RF cavities have been designed using HFSS and CST Microwave Studio, while the complete beam-wave interaction simulation has been done using the MAGIC code. The thermal design of the collector and RF window has been done using the ANSYS code. A Gun Collector Test Module (GCTM) was developed before fabricating the actual klystron in order to validate the gun perveance and the thermal design of the collector. A high-voltage solid-state pulsed modulator has been installed for performance evaluation of the tube. The paper covers the design aspects of the tube and the experimental test results of the GCTM and klystron. (author)

  14. Overview of the HiLASE project: high average power pulsed DPSSL systems for research and industry

    Czech Academy of Sciences Publication Activity Database

    Divoký, Martin; Smrž, Martin; Chyla, Michal; Sikocinski, Pawel; Severová, Patricie; Novák, Ondřej; Huynh, Jaroslav; Nagisetty, Siva S.; Miura, Taisuke; Pilař, Jan; Slezák, Jiří; Sawicka, Magdalena; Jambunathan, Venkatesan; Vanda, Jan; Endo, Akira; Lucianetti, Antonio; Rostohar, Danijela; Mason, P.D.; Phillips, P.J.; Ertel, K.; Banerjee, S.; Hernandez-Gomez, C.; Collier, J.L.; Mocek, Tomáš

    2014-01-01

    Vol. 2, SI (2014), p. 1-10 ISSN 2095-4719 R&D Projects: GA MŠk ED2.1.00/01.0027; GA MŠk EE2.3.20.0143; GA MŠk EE2.3.30.0057 Grant - others: HILASE(XE) CZ.1.05/2.1.00/01.0027; OP VK 6(XE) CZ.1.07/2.3.00/20.0143; OP VK 4 POSTDOK(XE) CZ.1.07/2.3.00/30.0057 Institutional support: RVO:68378271 Keywords: DPSSL * Yb3+:YAG * thin-disk * multi-slab * pulsed high average power laser Subject RIV: BH - Optics, Masers, Lasers

  15. Design and development of a 6 MW peak, 24 kW average power S-band klystron

    International Nuclear Information System (INIS)

    Joshi, L.M.; Meena, Rakesh; Nangru, Subhash; Kant, Deepender; Pal, Debashis; Lamba, O.S.; Jindal, Vishnu; Jangid, Sushil Kumar; Chakravarthy, D.P.; Dixit, Kavita

    2011-01-01

    A 6 MW peak, 24 kW average power S-band klystron is under development at CEERI, Pilani, under an MoU between BARC and CEERI. The design of the klystron has been completed. The electron gun has been designed using the TRAK and MAGIC codes. The RF cavities have been designed using HFSS and CST Microwave Studio, while the complete beam-wave interaction simulation has been done using the MAGIC code. The thermal design of the collector and RF window has been done using the ANSYS code. A Gun Collector Test Module (GCTM) was developed before fabricating the actual klystron in order to validate the gun perveance and the thermal design of the collector. A high-voltage solid-state pulsed modulator has been installed for performance evaluation of the tube. The paper covers the design aspects of the tube and the experimental test results of the GCTM and klystron. (author)

  16. A high-average power tapered FEL amplifier at submillimeter frequencies using sheet electron beams and short-period wigglers

    International Nuclear Information System (INIS)

    Bidwell, S.W.; Radack, D.J.; Antonsen, T.M. Jr.; Booske, J.H.; Carmel, Y.; Destler, W.W.; Granatstein, V.L.; Levush, B.; Latham, P.E.; Zhang, Z.X.

    1990-01-01

    A high-average-power FEL amplifier operating at submillimeter frequencies is under development at the University of Maryland. Program goals are to produce a CW, ∼1 MW, FEL amplifier source at frequencies between 280 GHz and 560 GHz. To this end, a high-gain, high-efficiency, tapered FEL amplifier using a sheet electron beam and a short-period (superconducting) wiggler has been chosen. Development of this amplifier is progressing in three stages: (1) beam propagation through a long length (∼1 m) of short-period (λw = 1 cm) wiggler, (2) demonstration of a proof-of-principle amplifier experiment at 98 GHz, and (3) designs of a superconducting tapered FEL amplifier meeting the ultimate design goal specifications. 17 refs., 1 fig., 1 tab

  17. The measurement of power losses at high magnetic field densities or at small cross-section of test specimen using the averaging

    CERN Document Server

    Gorican, V; Hamler, A; Nakata, T

    2000-01-01

    It is difficult to achieve sufficient accuracy in power loss measurements at high magnetic field densities, where the magnetic field strength becomes more and more distorted, or in cases where the influence of noise increases (small specimen cross-section). The influence of averaging on the accuracy of power loss measurement was studied on the cast amorphous magnetic material Metglas 2605-TCA. The results show that the accuracy of power loss measurements can be improved by averaging the data acquisition points.

  18. Warfarin maintenance dose in older patients: higher average dose and wider dose frequency distribution in patients of African ancestry than those of European ancestry.

    Science.gov (United States)

    Garwood, Candice L; Clemente, Jennifer L; Ibe, George N; Kandula, Vijay A; Curtis, Kristy D; Whittaker, Peter

    2010-06-15

    Studies report that warfarin doses required to maintain therapeutic anticoagulation decrease with age; however, these studies almost exclusively enrolled patients of European ancestry. Consequently, universal application of dosing paradigms based on such evidence may be confounded because ethnicity also influences dose. Therefore, we determined if warfarin dose decreased with age in Americans of African ancestry, if older African and European ancestry patients required different doses, and if their daily dose frequency distributions differed. Our chart review examined 170 patients of African ancestry and 49 patients of European ancestry cared for in our anticoagulation clinic. We calculated the average weekly dose required for each stable, anticoagulated patient to maintain an international normalized ratio of 2.0 to 3.0, determined dose averages by age group, including patients >80 years of age, and plotted dose as a function of age. The maintenance dose in patients of African ancestry decreased with age, and older patients of African ancestry required higher average weekly doses than patients of European ancestry: 33% higher in the 70- to 79-year-old group (38.2±1.9 vs. 28.8±1.7 mg; P=0.006) and 52% in the >80-year-old group (33.2±1.7 vs. 21.8±3.8 mg; P=0.011). Therefore, 43% of older patients of African ancestry required daily doses >5 mg and hence would have been under-dosed using current starting-dose guidelines. The dose frequency distribution was wider for older patients of African ancestry compared to those of European ancestry. These findings indicate that strategies for initiating warfarin therapy based on studies of patients of European ancestry could result in insufficient anticoagulation in patients of African ancestry and thereby potentially increase their thromboembolism risk. Copyright 2010 Elsevier Inc. All rights reserved.

  19. High-average-power 2 μm few-cycle optical parametric chirped pulse amplifier at 100 kHz repetition rate.

    Science.gov (United States)

    Shamir, Yariv; Rothhardt, Jan; Hädrich, Steffen; Demmler, Stefan; Tschernajew, Maxim; Limpert, Jens; Tünnermann, Andreas

    2015-12-01

    Sources of long-wavelength, few-cycle, high-repetition-rate pulses are becoming increasingly important for a plethora of applications, e.g., in high-field physics. Here, we report on the realization of a tunable optical parametric chirped pulse amplifier at 100 kHz repetition rate. At a central wavelength of 2 μm, the system delivered 33 fs pulses and 6 W of average power, corresponding to 60 μJ pulse energy with gigawatt-level peak powers. Idler absorption and the resulting crystal heating are experimentally investigated for BBO. Strategies for further power scaling to several tens of watts of average power are discussed.

  20. Development of a 33 kV, 20 A long pulse converter modulator for high average power klystron

    Energy Technology Data Exchange (ETDEWEB)

    Reghu, T.; Mandloi, V.; Shrivastava, Purushottam [Pulsed High Power Microwave Section, Raja Ramanna Centre for Advanced Technology, Indore 452013, M.P. (India)

    2014-05-15

    Research, design, and development of high average power, long pulse modulators for the proposed Indian Spallation Neutron Source are underway at Raja Ramanna Centre for Advanced Technology. With this objective, a prototype of long pulse modulator capable of delivering 33 kV, 20 A at 5 Hz repetition rate has been designed and developed. Three Insulated Gate Bipolar Transistors (IGBT) based switching modules driving high frequency, high voltage transformers have been used to generate high voltage output. The IGBT based switching modules are shifted in phase by 120° with respect to each other. The switching frequency is 25 kHz. Pulses of 1.6 ms pulse width, 80 μs rise time, and 70 μs fall time have been achieved at the modulator output. A droop of ±0.6% is achieved using a simple segmented digital droop correction technique. The total fault energy transferred to the load during fault has been measured by conducting wire burn tests and is found to be within 3.5 J.

  1. Performance study of highly efficient 520 W average power long pulse ceramic Nd:YAG rod laser

    Science.gov (United States)

    Choubey, Ambar; Vishwakarma, S. C.; Ali, Sabir; Jain, R. K.; Upadhyaya, B. N.; Oak, S. M.

    2013-10-01

    We report the performance study of a 2 at.% doped ceramic Nd:YAG rod for long-pulse laser operation in the millisecond regime with pulse durations in the range of 0.5-20 ms. A maximum average output power of 520 W with 180 J maximum pulse energy has been achieved with a slope efficiency of 5.4% using a dual-rod configuration, which is the highest for typical lamp-pumped ceramic Nd:YAG lasers. The laser output characteristics of the ceramic Nd:YAG rod were found to be nearly equivalent or superior to those of a high-quality single-crystal Nd:YAG rod. The laser pump chamber and resonator were designed and optimized to achieve high efficiency and good beam quality, with a beam parameter product of 16 mm·mrad (M² ≈ 47). The laser output beam was efficiently coupled through a 400 μm core diameter optical fiber with 90% overall transmission efficiency. This ceramic Nd:YAG laser will be useful for various material processing applications in industry.

  2. Experimental assessment of blade tip immersion depth from free surface on average power and thrust coefficients of marine current turbine

    Science.gov (United States)

    Lust, Ethan; Flack, Karen; Luznik, Luksa

    2014-11-01

    Results from an experimental study on the effects of marine current turbine immersion depth from the free surface are presented. Measurements are performed with a 1/25 scale (diameter D = 0.8 m) two-bladed horizontal axis turbine towed in the large towing tank at the U.S. Naval Academy. Thrust and torque are measured using a dynamometer mounted in line with the turbine shaft. Shaft rotation speed and blade position are measured using a shaft position indexing system. The tip speed ratio (TSR) is adjusted using a hysteresis brake which is attached to the output shaft. Two optical wave height sensors are used to measure the free surface elevation. The turbine is towed at 1.68 m/s, resulting in a 70% chord-based Reynolds number of Rec = 4 × 10⁵. An Acoustic Doppler Velocimeter (ADV) is installed one turbine diameter upstream of the turbine rotation plane to characterize the inflow turbulence. Measurements are obtained at four relative blade tip immersion depths of z/D = 0.5, 0.4, 0.3, and 0.2 at a TSR value of 7 to identify the depth where free surface effects impact overall turbine performance. The overall average power and thrust coefficients are presented and compared to previously conducted baseline tests. The influence of wake expansion blockage on the turbine performance due to the presence of the free surface at these immersion depths will also be discussed.
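
    For reference, the non-dimensional quantities mentioned above are conventionally defined as follows; these are the standard marine/wind turbine definitions, not equations quoted from the paper:

    ```latex
    % For a horizontal-axis turbine of diameter D (swept area A = \pi D^2/4), inflow/tow speed U,
    % fluid density \rho, rotor speed \omega, shaft torque Q and thrust T:
    \begin{align}
      \mathrm{TSR} &= \frac{\omega D/2}{U}, &
      C_P &= \frac{Q\,\omega}{\tfrac{1}{2}\rho A U^{3}}, &
      C_T &= \frac{T}{\tfrac{1}{2}\rho A U^{2}}.
    \end{align}
    ```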

  3. Development of a 33 kV, 20 A long pulse converter modulator for high average power klystron

    International Nuclear Information System (INIS)

    Reghu, T.; Mandloi, V.; Shrivastava, Purushottam

    2014-01-01

    Research, design, and development of high average power, long pulse modulators for the proposed Indian Spallation Neutron Source are underway at Raja Ramanna Centre for Advanced Technology. With this objective, a prototype of long pulse modulator capable of delivering 33 kV, 20 A at 5 Hz repetition rate has been designed and developed. Three Insulated Gate Bipolar Transistors (IGBT) based switching modules driving high frequency, high voltage transformers have been used to generate high voltage output. The IGBT based switching modules are shifted in phase by 120° with respect to each other. The switching frequency is 25 kHz. Pulses of 1.6 ms pulse width, 80 μs rise time, and 70 μs fall time have been achieved at the modulator output. A droop of ±0.6% is achieved using a simple segmented digital droop correction technique. The total fault energy transferred to the load during fault has been measured by conducting wire burn tests and is found to be within 3.5 J

  4. Development of a 33 kV, 20 A long pulse converter modulator for high average power klystron

    Science.gov (United States)

    Reghu, T.; Mandloi, V.; Shrivastava, Purushottam

    2014-05-01

    Research, design, and development of high average power, long pulse modulators for the proposed Indian Spallation Neutron Source are underway at Raja Ramanna Centre for Advanced Technology. With this objective, a prototype of long pulse modulator capable of delivering 33 kV, 20 A at 5 Hz repetition rate has been designed and developed. Three Insulated Gate Bipolar Transistors (IGBT) based switching modules driving high frequency, high voltage transformers have been used to generate high voltage output. The IGBT based switching modules are shifted in phase by 120° with respect to each other. The switching frequency is 25 kHz. Pulses of 1.6 ms pulse width, 80 μs rise time, and 70 μs fall time have been achieved at the modulator output. A droop of ±0.6% is achieved using a simple segmented digital droop correction technique. The total fault energy transferred to the load during fault has been measured by conducting wire burn tests and is found to be within 3.5 J.

  5. The weighted average cost of capital over the lifecycle of the firm: Is the overinvestment problem of mature firms intensified by a higher WACC?

    Directory of Open Access Journals (Sweden)

    Carlos S. Garcia

    2016-08-01

    Full Text Available Firm lifecycle theory predicts that the Weighted Average Cost of Capital (WACC) will tend to fall over the lifecycle of the firm (Mueller, 2003, pp. 80-81). However, given that previous research finds that corporate governance deteriorates as firms get older (Mueller and Yun, 1998; Saravia, 2014), there is good reason to suspect that the opposite could be the case, that is, that the WACC is higher for older firms. Since our literature review indicates that no direct tests to clarify this question have been carried out until now, this paper aims to fill the gap by testing this prediction empirically. Our findings support the proposition that the WACC of younger firms is higher than that of mature firms. Thus, we find that the mature-firm overinvestment problem is not intensified by a higher cost of capital; on the contrary, our results suggest that mature firms manage to invest in negative net present value projects even though they have access to cheaper capital. This finding sheds new light on the magnitude of the corporate governance problems found in mature firms.
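
    For readers unfamiliar with the metric, the standard textbook definition of the WACC (a generic formula given as background, not taken from the article) is:

    ```latex
    % Weighted average cost of capital for a firm financed with equity E and debt D (V = E + D),
    % cost of equity r_E, cost of debt r_D, and corporate tax rate t_c:
    \begin{equation}
      \mathrm{WACC} = \frac{E}{V}\, r_E + \frac{D}{V}\, r_D\,(1 - t_c)
    \end{equation}
    ```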

  6. A novel power spectrum calculation method using phase-compensation and weighted averaging for the estimation of ultrasound attenuation.

    Science.gov (United States)

    Heo, Seo Weon; Kim, Hyungsuk

    2010-05-01

    An estimation of ultrasound attenuation in soft tissues is critical in quantitative ultrasound analysis, since it is not only related to the estimation of other ultrasound parameters, such as the speed of sound, integrated scatterers, or scatterer size, but also provides pathological information about the scanned tissue. However, the estimation performance of ultrasound attenuation is intimately tied to the accurate extraction of spectral information from the backscattered radiofrequency (RF) signals. In this paper, we propose two novel techniques for calculating a block power spectrum from the backscattered ultrasound signals. These are based on phase compensation of each RF segment using the normalized cross-correlation, to minimize estimation errors due to phase variations, and a weighted averaging technique to maximize the signal-to-noise ratio (SNR). The simulation results with uniform numerical phantoms demonstrate that the proposed method estimates local attenuation coefficients to within 1.57% of the actual values, while the conventional methods estimate them to within 2.96%. The proposed method is especially effective when dealing with signals reflected from deeper depths, where the SNR level is lower, or when the gated window contains a small number of signal samples. Experimental results, performed at 5 MHz with a one-dimensional 128-element array using tissue-mimicking phantoms, also show that the proposed method provides better estimation results (within 3.04% of the actual value) with smaller estimation variances compared to the conventional methods (within 5.93%) for all cases considered. Copyright 2009 Elsevier B.V. All rights reserved.
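
    A rough sketch of the two ideas described above, aligning gated RF segments by cross-correlation before averaging and weighting each segment's spectrum, written as generic NumPy code; the alignment and weighting choices are illustrative assumptions, not the authors' exact algorithm:

    ```python
    import numpy as np

    def block_power_spectrum(segments, nfft=256):
        """Average power spectrum of gated RF segments (list of 1-D numpy arrays)
        with phase compensation and energy-based weighting (illustrative only)."""
        ref = segments[0]
        aligned = []
        for seg in segments:
            # phase compensation: shift each segment to the lag of maximum
            # cross-correlation with the reference segment
            xc = np.correlate(seg - seg.mean(), ref - ref.mean(), mode="full")
            lag = xc.argmax() - (len(ref) - 1)
            aligned.append(np.roll(seg, -lag))
        spectra = np.array([np.abs(np.fft.rfft(s, nfft)) ** 2 for s in aligned])
        weights = np.array([np.sum(s ** 2) for s in aligned])  # simple SNR proxy
        return np.average(spectra, axis=0, weights=weights)
    ```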

  7. Soft Power and Higher Education: An Examination of China's Confucius Institutes

    Science.gov (United States)

    Yang, Rui

    2010-01-01

    China's global presence has become a significant subject. However, little attention has been directed to the role of higher education in projecting China's soft power, and little academic work has been done directly on it, despite the fact that there has been some work on related topics. Borrowing the theories of soft power and higher education…

  8. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  9. Ultra-short pulse delivery at high average power with low-loss hollow core fibers coupled to TRUMPF's TruMicro laser platforms for industrial applications

    Science.gov (United States)

    Baumbach, S.; Pricking, S.; Overbuschmann, J.; Nutsch, S.; Kleinbauer, J.; Gebs, R.; Tan, C.; Scelle, R.; Kahmann, M.; Budnicki, A.; Sutter, D. H.; Killi, A.

    2017-02-01

    Multi-megawatt ultrafast laser systems at micrometer wavelengths are commonly used for material processing applications, including ablation, cutting and drilling of various materials or cleaving of display glass with excellent quality. There is a need for flexible and efficient beam guidance, avoiding free-space propagation of light between the laser head and the processing unit. Solid-core step-index fibers are only feasible for delivering laser pulses with peak powers in the kW regime due to the optical damage threshold of bulk silica. In contrast, hollow-core fibers are capable of guiding ultra-short laser pulses with orders of magnitude higher peak powers. This is possible since a micro-structured cladding confines the light within the hollow core and therefore minimizes the spatial overlap between the silica and the electro-magnetic field. We report on recent results of single-mode ultra-short pulse delivery over several meters in a low-loss hollow-core fiber packaged with industrial connectors. TRUMPF's ultrafast TruMicro laser platforms equipped with advanced temperature control and precisely engineered opto-mechanical components provide excellent position and pointing stability. They are thus perfectly suited for passive coupling of ultra-short laser pulses into hollow-core fibers. Neither active beam-launching components nor beam trackers are necessary for reliable beam delivery in a space- and cost-saving package. Long-term tests with weeks of stable operation, excellent beam quality and an overall transmission efficiency above 85 percent even at high average power confirm the reliability for industrial applications.

  10. Dose calculation for photon-emitting brachytherapy sources with average energy higher than 50 keV: report of the AAPM and ESTRO.

    Science.gov (United States)

    Perez-Calatayud, Jose; Ballester, Facundo; Das, Rupak K; Dewerd, Larry A; Ibbott, Geoffrey S; Meigooni, Ali S; Ouhib, Zoubir; Rivard, Mark J; Sloboda, Ron S; Williamson, Jeffrey F

    2012-05-01

    Recommendations of the American Association of Physicists in Medicine (AAPM) and the European Society for Radiotherapy and Oncology (ESTRO) on dose calculations for high-energy (average energy higher than 50 keV) photon-emitting brachytherapy sources are presented, including the physical characteristics of specific ¹⁹²Ir, ¹³⁷Cs, and ⁶⁰Co source models. This report has been prepared by the High Energy Brachytherapy Source Dosimetry (HEBD) Working Group. This report includes considerations in the application of the TG-43U1 formalism to high-energy photon-emitting sources with particular attention to phantom size effects, interpolation accuracy dependence on dose calculation grid size, and dosimetry parameter dependence on source active length. Consensus datasets for commercially available high-energy photon sources are provided, along with recommended methods for evaluating these datasets. Recommendations on dosimetry characterization methods, mainly using experimental procedures and Monte Carlo, are established and discussed. Also included are methodological recommendations on detector choice, detector energy response characterization and phantom materials, and measurement specification methodology. Uncertainty analyses are discussed and recommendations for high-energy sources without consensus datasets are given. Recommended consensus datasets for high-energy sources have been derived for sources that were commercially available as of January 2010. Data are presented according to the AAPM TG-43U1 formalism, with modified interpolation and extrapolation techniques of the AAPM TG-43U1S1 report for the 2D anisotropy function and radial dose function.
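
    For context, the TG-43U1 formalism referenced above expresses the two-dimensional dose rate around a source in the familiar form below (standard AAPM notation, reproduced here as background rather than quoted from the report):

    ```latex
    % TG-43U1 2D dose-rate equation (line-source approximation of active length L):
    %   S_K : air-kerma strength,  Lambda : dose-rate constant,
    %   G_L : geometry function,   g_L    : radial dose function,  F : 2D anisotropy function.
    \begin{equation}
      \dot{D}(r,\theta) = S_K \,\Lambda\, \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),
      \qquad r_0 = 1~\mathrm{cm},\ \theta_0 = 90^\circ
    \end{equation}
    ```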

  11. Eigenstates of the higher power of the annihilation operator of two-parameter deformed harmonic oscillator

    International Nuclear Information System (INIS)

    Wang Jisuo; Sun Changyong; He Jinyu

    1996-01-01

    The eigenstates of the higher power of the annihilation operator a_qs^k (k ≥ 3) of the two-parameter deformed harmonic oscillator are constructed. Their completeness is demonstrated in terms of the qs-integration.
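
    The defining property of such states, written generically, is the eigenvalue relation below; this is the standard relation implied by the title, not a formula copied from the paper:

    ```latex
    % Eigenstates of the k-th power (k >= 3) of the deformed annihilation operator a_qs:
    \begin{equation}
      a_{qs}^{\,k}\,\lvert \psi_\lambda \rangle = \lambda\, \lvert \psi_\lambda \rangle ,
      \qquad k \ge 3,\ \lambda \in \mathbb{C}
    \end{equation}
    ```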

  12. Sex, Grades and Power in Higher Education in Ghana and Tanzania

    Science.gov (United States)

    Morley, Louise

    2011-01-01

    Quantitative increases tell a partial story about the quality of women's participation in higher education. Women students' reporting of sexual harassment has been noteworthy in a recent study that I directed on widening participation in higher education in Ghana and Tanzania. The hierarchical and gendered power relations within universities have…

  13. High average power 1314 nm Nd:YLF laser, passively Q-switched with V:YAG

    CSIR Research Space (South Africa)

    Botha, RC

    2013-03-01

    Full Text Available A 1314 nm Nd:YLF laser was designed and operated both CW and passively Q-switched. A maximum CW output of 10.4 W resulted from 45.2 W of incident pump power. Passive Q-switching was obtained by inserting a V:YAG saturable absorber in the cavity...

  14. High-average-power UV generation at 266 and 355 nm in β-BaB₂O₄

    International Nuclear Information System (INIS)

    Liu, K.C.; Rhoades, M.

    1987-01-01

    UV light has been generated previously by harmonic conversion from Nd:YAG lasers using the nonlinear crystals KD*P and ADP. Most of the previous studies have employed lasers with high peak power, due to the low harmonic-conversion efficiency of these crystals, and low average power, due to the phase mismatch caused by temperature detuning resulting from UV absorption. A new nonlinear crystal, β-BaB₂O₄, has recently been reported which provides the possibility of overcoming the aforementioned problems. The authors utilized β-BaB₂O₄ to frequency-triple and frequency-quadruple a high-repetition-rate cw-pumped Nd:YAG laser and achieved up to 1 W average power with a Gaussian spatial distribution at 266 and 355 nm. β-BaB₂O₄ has demonstrated its advantages for high-average-power UV generation. Its major drawback is a low angular acceptance bandwidth, which requires a high-quality fundamental pump beam

  15. Extension of Tom Booth's Modified Power Method for Higher Eigen Modes

    International Nuclear Information System (INIS)

    Zhang, Peng; Lee, Hyunsuk; Lee, Deokjung

    2015-01-01

    A possible technique to obtain even higher modes has been suggested, but it is difficult to apply in practice. In this paper, a general solution strategy is proposed which extends Tom Booth's modified power method to obtain higher eigenmodes, with no limitation on the number of eigenmodes that can be obtained. It is more practical than the original solution strategy that Tom Booth proposed. The implementation of the method in a Monte Carlo code shows significant advantages compared to the original power method.

  16. Computerized system for building 'the rose' of the winds and defining the velocity and the average density of the wind power for a given place

    International Nuclear Information System (INIS)

    Valkov, I.; Dekova, I.; Arnaudov, A.; Kostadinov, A.

    2002-01-01

    This paper considers the structure and the working principle of a computerized system for building 'the rose' of the winds. The behaviour of the system has been experimentally investigated and, on the basis of the received data, 'the rose' of the winds has been built, a diagram of the average wind velocity at a predefined time step has been made, and the average density of the wind power has been quantitatively determined. The proposed system enables the creation of a database of wind parameters, their processing, and graphical visualization of the received results. The system makes it possible to improve the work of devices of the Wild's wind gauge type. (authors)
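
    The quantity referred to here as the average wind power density is conventionally computed from the measured wind-speed record as below (standard definition; the averaging over samples and the air density value are illustrative assumptions, not taken from the paper):

    ```python
    # Average wind power density P/A = 0.5 * rho * <v^3>, using the mean of the cubed
    # wind speeds (not the cube of the mean, which would underestimate the resource).
    rho = 1.225                          # kg/m^3, assumed air density near sea level
    speeds = [3.2, 5.1, 4.4, 6.0, 2.8]   # m/s, illustrative wind-speed samples

    mean_cubed = sum(v ** 3 for v in speeds) / len(speeds)
    power_density = 0.5 * rho * mean_cubed   # W/m^2
    print(f"average wind power density: {power_density:.1f} W/m^2")
    ```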

  17. Development of laser diode-pumped high average power solid-state laser for the pumping of Ti:sapphire CPA system

    Energy Technology Data Exchange (ETDEWEB)

    Maruyama, Yoichiro; Tei, Kazuyoku; Kato, Masaaki; Niwa, Yoshito; Harayama, Sayaka; Oba, Masaki; Matoba, Tohru; Arisawa, Takashi; Takuma, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    A laser-diode-pumped, all-solid-state, high-pulse-repetition-frequency (PRF) and high-energy Nd:YAG laser using zigzag slab crystals has been developed as the pumping source of a Ti:sapphire CPA system. The pump laser contains two main amplifiers arranged in a ring-type amplifier configuration. The maximum amplification gain of the amplifier system is 140, and the condition of saturated amplification is achieved with this high gain. The average power of the fundamental laser radiation is 250 W at a PRF of 200 Hz with a pulse duration of around 20 ns. The average power of the second harmonic is 105 W at a PRF of 170 Hz with a pulse duration of about 16 ns. The beam profile of the second harmonic is nearly top-hat and will be suitable for pumping a Ti:sapphire laser crystal. The wall-plug efficiency of the laser is 2.0%. (author)

  18. Research and higher education background of the Paks Nuclear Power Plant, Hungary. Past and present

    International Nuclear Information System (INIS)

    Csom, Gy.

    2002-01-01

    The connections of the Paks Nuclear Power Plant, Hungary, with research and development as well as with higher education are discussed. The main research areas include reactor physics, thermohydraulics, radiochemistry and radiochemical analysis, electronics and nuclear instruments, computers, and materials science. The evolution of the relations between Hungarian higher education and the PNPP, both before and after the installation of the various units, is presented. (R.P.)

  19. Investigation on repetition rate and pulse duration influences on ablation efficiency of metals using a high average power Yb-doped ultrafast laser

    Directory of Open Access Journals (Sweden)

    Lopez J.

    2013-11-01

    Full Text Available Ultrafast lasers provide outstanding processing quality, but their main drawback is the low removal rate per pulse compared to longer pulses. This limitation could be overcome by increasing both the average power and the repetition rate. In this paper, we report on the influence of high repetition rate and pulse duration on both the ablation efficiency and the processing quality on metals. All trials have been performed with a single tunable ultrafast laser (350 fs to 10 ps).

  20. The final power calibration of the IPEN/MB-01 nuclear reactor for various configurations obtained from the measurements of the absolute average neutron flux

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Alexandre Fonseca Povoa da, E-mail: alexandre.povoa@mar.mil.br [Centro Tecnologico da Marinha em Sao Paulo (CTMSP), Sao Paulo, SP (Brazil); Bitelli, Ulysses d' Utra; Mura, Luiz Ernesto Credidio; Lima, Ana Cecilia de Souza; Betti, Flavio; Santos, Diogo Feliciano dos, E-mail: ubitelli@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    The use of neutron activation foils is a widespread technique for obtaining nuclear parameters, the results then being compared with those calculated using specific methodologies and available nuclear data. By irradiating activation foils and subsequently measuring their induced activity, it is possible to determine the neutron flux at the irradiation position. The power level during reactor operation is directly proportional to the average neutron flux throughout the core. The objective of this work is to gather data from the irradiation of gold foils symmetrically placed along a cylindrically configured core with only a small excess reactivity, in order to derive the power generated from the spatial thermal and epithermal neutron flux distribution over the core of the IPEN/MB-01 Nuclear Reactor, eventually leading to a proper calibration of its nuclear channels. The foils are fixed in a Lucite plate and then irradiated with and without cadmium sheaths so as to obtain the absolute thermal and epithermal neutron fluxes. The correlation between the average neutron flux resulting from the gold foil irradiation and the average power digitally indicated by nuclear channel number 6 allows the calibration of the nuclear channels of the reactor. The reactor power level obtained by thermal neutron flux mapping was (74.65 ± 2.45) watts, at a mean count rate of 37881 cps on nuclear channel number 10 (a pulse detector) and 0.719×10⁻⁵ ampere on linear nuclear channel number 6 (a non-compensated ionization chamber). (author)
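
    The calibration rests on the proportionality between core power and the average fission (thermal) flux. A minimal sketch of that relation is given below; the numbers are made-up placeholders of roughly the right magnitude, not IPEN/MB-01 data, and the real procedure involves foil activities, cadmium ratios and spectral corrections that are not reproduced here.

```python
# Minimal sketch of the flux-to-power proportionality used in this kind of
# calibration: P ~ phi_avg * Sigma_f * V * E_fission.
E_FISSION_J = 3.204e-11      # ~200 MeV released per fission, in joules

def core_power_watts(phi_avg, sigma_f_macroscopic, core_volume):
    """phi_avg in n/cm^2/s, macroscopic Sigma_f in 1/cm, volume in cm^3."""
    fission_rate = phi_avg * sigma_f_macroscopic * core_volume  # fissions/s
    return fission_rate * E_FISSION_J

# Example with illustrative values only (not measured reactor data)
print(core_power_watts(phi_avg=2.0e8, sigma_f_macroscopic=0.002, core_volume=6.0e5))
```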

  1. A Dialogue between Partnership and Feminism: Deconstructing Power and Exclusion in Higher Education

    Science.gov (United States)

    Mercer-Mapstone, Lucy; Mercer, Gina

    2018-01-01

    Students as partners (SaP) has seen an increase in focus as an area of active student engagement in higher education. Many complexities and challenges have been shared in this evolving field regarding inclusivity and power. We discuss, in this dialogue, insights that can be uncovered by exploring SaP through a feminist lens--illuminating the fact…

  2. Attachment to God/Higher Power and Bulimic Symptoms among College Women

    Science.gov (United States)

    Buser, Juleen K.; Gibson, Sandy

    2016-01-01

    The authors examined the relationship between avoidant and anxious attachment to God/Higher Power and bulimia symptoms among 599 female college student participants. After controlling for body mass index, the authors found a positive association between both attachment variables and bulimia. When entered together in a regression, anxious…

  3. Power Distance in Online Learning: Experience of Chinese Learners in U.S. Higher Education

    Science.gov (United States)

    Zhang, Yi (Leaf)

    2013-01-01

    The purpose of this research study was to explore the influence of Confucian-heritage culture on Chinese learners' online learning and engagement in online discussion in U.S. higher education. More specifically, this research studied Chinese learners' perceptions of power distance and its impact on their interactions with instructors and peers in…

  4. A general solution strategy of modified power method for higher mode solutions

    International Nuclear Information System (INIS)

    Zhang, Peng; Lee, Hyunsuk; Lee, Deokjung

    2016-01-01

    A general solution strategy of the modified power iteration method for calculating higher eigenmodes has been developed and applied in continuous energy Monte Carlo simulation. The new approach adopts four features: 1) the eigen decomposition of transfer matrix, 2) weight cancellation for higher modes, 3) population control with higher mode weights, and 4) stabilization technique of statistical fluctuations using multi-cycle accumulations. The numerical tests of neutron transport eigenvalue problems successfully demonstrate that the new strategy can significantly accelerate the fission source convergence with stable convergence behavior while obtaining multiple higher eigenmodes at the same time. The advantages of the new strategy can be summarized as 1) the replacement of the cumbersome solution step of high order polynomial equations required by Booth's original method with the simple matrix eigen decomposition, 2) faster fission source convergence in inactive cycles, 3) more stable behaviors in both inactive and active cycles, and 4) smaller variances in active cycles. Advantages 3 and 4 can be attributed to the lower sensitivity of the new strategy to statistical fluctuations due to the multi-cycle accumulations. The application of the modified power method to continuous energy Monte Carlo simulation and the higher eigenmodes up to 4th order are reported for the first time in this paper. Highlights: •Modified power method is applied to continuous energy Monte Carlo simulation. •Transfer matrix is introduced to generalize the modified power method. •All mode based population control is applied to get the higher eigenmodes. •Statistic fluctuation can be greatly reduced using accumulated tally results. •Fission source convergence is accelerated with higher mode solutions.
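
    For orientation only, the sketch below shows a deterministic power iteration with Gram-Schmidt deflation extracting several leading eigenpairs of a small transfer-like matrix. It is a simplified stand-in for the Monte Carlo modified power method of the record; the weight-cancellation and population-control machinery is not reproduced, and the matrix is illustrative.

```python
import numpy as np

def power_iteration_modes(A, n_modes=4, iters=2000, seed=0):
    """Power iteration with Gram-Schmidt deflation against previously found
    modes. Returns approximate leading eigenvalues and eigenvectors of a
    symmetric matrix A."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    modes, values = [], []
    for _ in range(n_modes):
        v = rng.standard_normal(n)
        for _ in range(iters):
            v = A @ v
            for u in modes:                 # deflate the modes already found
                v -= (u @ v) * u
            v /= np.linalg.norm(v)
        values.append(v @ A @ v)            # Rayleigh quotient (A symmetric here)
        modes.append(v)
    return np.array(values), np.array(modes)

# Example on a symmetric "fission-like" matrix
A = np.diag([1.0, 0.95, 0.9, 0.85, 0.5])
vals, vecs = power_iteration_modes(A, n_modes=4)
print(vals)                                  # ~ [1.0, 0.95, 0.9, 0.85]
print(np.linalg.eigvalsh(A)[::-1][:4])       # reference eigenvalues
```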

  5. Diode-side-pumped intracavity frequency-doubled Nd:YAG/BaWO4 Raman laser generating average output power of 3.14 W at 590 nm.

    Science.gov (United States)

    Li, Shutao; Zhang, Xingyu; Wang, Qingpu; Zhang, Xiaolei; Cong, Zhenhua; Zhang, Huaijin; Wang, Jiyang

    2007-10-15

    We report a linear-cavity high-power all-solid-state Q-switched yellow laser. The laser source comprises a diode-side-pumped Nd:YAG module that produces 1064 nm fundamental radiation, an intracavity BaWO4 Raman crystal that generates a first-Stokes laser at 1180 nm, and a KTP crystal that frequency doubles the first-Stokes laser to 590 nm. A convex-plane cavity is employed in this configuration to counteract some of the thermal effect caused by high pump power. An average output power of 3.14 W at 590 nm is obtained at a pulse repetition frequency of 10 kHz.

  6. A diode-pumped continuous-wave Nd:YAG laser with an average output power of 1 kW

    International Nuclear Information System (INIS)

    Lee, Sung Man; Cha, Byung Heon; Kim, Cheol Jung

    2004-01-01

    A diode-pumped Nd:YAG laser with an average output power of 1 kW is developed for industrial applications such as metal cutting, precision welding, etc. To develop such a diode-pumped high-power solid-state laser, a series of laser modules is generally used, with or without thermal birefringence compensation. For example, Akiyama et al. used three laser modules to obtain an output power of 5.4 kW CW [1]. In the side-pumped Nd:YAG laser, which is a commonly used pump scheme for obtaining high output power, the crystal rod has a short thermal focal length at high input pump power, and the short thermal focal length in turn leads to beam distortion within the laser resonator. Therefore, to achieve high output power with good stability, an isotropic beam profile, and high optical efficiency, a detailed analysis of the resonator stability condition, depending on both the mirror distances and the crystal separation, is essential

  7. Daily Average Wind Power Interval Forecasts Based on an Optimal Adaptive-Network-Based Fuzzy Inference System and Singular Spectrum Analysis

    Directory of Open Access Journals (Sweden)

    Zhongrong Zhang

    2016-01-01

    Wind energy has increasingly played a vital role in mitigating conventional resource shortages. Nevertheless, the stochastic nature of wind poses a great challenge when attempting to find an accurate forecasting model for wind power. Therefore, precise wind power forecasts are of primary importance to solve operational, planning and economic problems in the growing wind power scenario. Previous research has focused on the deterministic forecast of wind power values, but less attention has been paid to providing information about the uncertainty of wind energy. Based on an optimal Adaptive-Network-Based Fuzzy Inference System (ANFIS) and Singular Spectrum Analysis (SSA), this paper develops a hybrid uncertainty forecasting model, IFASF (Interval Forecast-ANFIS-SSA-Firefly Algorithm), to obtain the upper and lower bounds of daily average wind power, which is beneficial for the practical operation of both the grid company and independent power producers. To strengthen the practical applicability of the developed model, this paper presents a comparison between IFASF and other benchmarks, which provides a general reference for statistical or artificially intelligent interval forecast methods. The comparison results show that the developed model outperforms eight benchmarks and has satisfactory forecasting effectiveness in three different wind farms over two time horizons.
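
    Of the building blocks named in this record, only the SSA preprocessing step lends itself to a compact sketch; the ANFIS and firefly-algorithm parts are not reproduced. The code below is a basic SSA reconstruction (embedding, SVD, diagonal averaging) applied to a synthetic series; the window length and component count are illustrative choices.

```python
import numpy as np

def ssa_reconstruct(series, window, n_components):
    """Basic singular spectrum analysis: embed the series in a trajectory
    matrix, take the SVD, keep the leading components and reconstruct by
    diagonal (Hankel) averaging."""
    x = np.asarray(series, dtype=float)
    N, L = len(x), window
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # trajectory matrix (L x K)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    rec = np.zeros(N)
    counts = np.zeros(N)
    for j in range(K):                                    # diagonal averaging
        rec[j:j + L] += Xr[:, j]
        counts[j:j + L] += 1.0
    return rec / counts

# Example: smooth a noisy synthetic daily-average wind power series
t = np.arange(200)
noisy = np.sin(2 * np.pi * t / 30) + 0.3 * np.random.default_rng(1).standard_normal(200)
smooth = ssa_reconstruct(noisy, window=40, n_components=2)
print(smooth[:5])
```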

  8. The ETA-II linear induction accelerator and IMP wiggler: A high-average-power millimeter-wave free-electron laser for plasma heating

    International Nuclear Information System (INIS)

    Allen, S.L.; Scharlemann, E.T.

    1993-01-01

    The authors have constructed a 140-GHz free-electron laser to generate high-average-power microwaves for heating the MTX tokamak plasma. A 5.5-m steady-state wiggler (Intense Microwave, Prototype-IMP) has been installed at the end of the upgraded 60-cell ETA-II accelerator, and is configured as an FEL amplifier for the output of a 140-GHz long-pulse gyrotron. Improvements in the ETA-II accelerator include a multicable-feed power distribution network, better magnetic alignment using a stretched-wire alignment technique (SWAT), and a computerized tuning algorithm that directly minimizes the transverse sweep (corkscrew motion) of the electron beam. The upgrades were first tested on the 20-cell, 3-MeV front end of ETA-II and resulted in greatly improved energy flatness and reduced corkscrew motion. The upgrades were then incorporated into the full 60-cell configuration of ETA-II, along with modifications to allow operation in 50-pulse bursts at pulse repetition frequencies up to 5 kHz. The pulse power modifications were developed and tested on the High Average Power Test Stand (HAPTS), and have significantly reduced the voltage and timing jitter of the MAG 1D magnetic pulse compressors. The 2-3 kA, 6-7 MeV beam from ETA-II is transported to the IMP wiggler, which has been reconfigured as a laced wiggler, with both permanent magnets and electromagnets, for high magnetic field operation. Tapering of the wiggler magnetic field is completely computer controlled and can be optimized based on the output power. The microwaves from the FEL are transmitted to the MTX tokamak by a windowless quasi-optical microwave transmission system. Experiments at MTX are focused on studies of electron-cyclotron-resonance heating (ECRH) of the plasma. The authors summarize here the accelerator and pulse power modifications, and describe the status of ETA-II, IMP, and MTX operations

  9. The ETA-II linear induction accelerator and IMP wiggler: A high-average-power millimeter-wave free-electron-laser for plasma heating

    International Nuclear Information System (INIS)

    Allen, S.L.; Scharlemann, E.T.

    1992-05-01

    We have constructed a 140-GHz free-electron laser to generate high-average-power microwaves for heating the MTX tokamak plasma. A 5.5-m steady-state wiggler (Intense Microwave Prototype-IMP) has been installed at the end of the upgraded 60-cell ETA-II accelerator, and is configured as an FEL amplifier for the output of a 140-GHz long-pulse gyrotron. Improvements in the ETA-II accelerator include a multicable-feed power distribution network, better magnetic alignment using a stretched-wire alignment technique (SWAT), and a computerized tuning algorithm that directly minimizes the transverse sweep (corkscrew motion) of the electron beam. The upgrades were first tested on the 20-cell, 3-MeV front end of ETA-II and resulted in greatly improved energy flatness and reduced corkscrew motion. The upgrades were then incorporated into the full 60-cell configuration of ETA-II, along with modifications to allow operation in 50-pulse bursts at pulse repetition frequencies up to 5 kHz. The pulse power modifications were developed and tested on the High Average Power Test Stand (HAPTS), and have significantly reduced the voltage and timing jitter of the MAG 1D magnetic pulse compressors. The 2-3 kA, 6-7 MeV beam from ETA-II is transported to the IMP wiggler, which has been reconfigured as a laced wiggler, with both permanent magnets and electromagnets, for high magnetic field operation. Tapering of the wiggler magnetic field is completely computer controlled and can be optimized based on the output power. The microwaves from the FEL are transmitted to the MTX tokamak by a windowless quasi-optical microwave transmission system. Experiments at MTX are focused on studies of electron-cyclotron-resonance heating (ECRH) of the plasma. We summarize here the accelerator and pulse power modifications, and describe the status of ETA-II, IMP, and MTX operations

  10. Predictive Power of Primary and Secondary School Success Criterion on Transition to Higher Education Examination Scores

    OpenAIRE

    Atilla ÖZDEMİR; Selahattin GELBAL

    2016-01-01

    Education has a significant, life-changing effect in our country, where it is a way of moving up the social ladder. In order to continue to a higher education program after graduating from high school, students have to succeed in the transition to higher education examination. Thus, the entrance exam is an important factor in determining the students' future. In our country, middle school grades and the high school grade point average that is added to u...

  11. Investigation of the thermal and optical performance of a spatial light modulator with high average power picosecond laser exposure for materials processing applications

    Science.gov (United States)

    Zhu, G.; Whitehead, D.; Perrie, W.; Allegre, O. J.; Olle, V.; Li, Q.; Tang, Y.; Dawson, K.; Jin, Y.; Edwardson, S. P.; Li, L.; Dearden, G.

    2018-03-01

    Spatial light modulators (SLMs) addressed with computer generated holograms (CGHs) can create structured light fields on demand when an incident laser beam is diffracted by a phase CGH. The power handling limitations of these devices, based on a liquid crystal layer, have always been of some concern. With careful engineering of chip thermal management, we report the detailed optical phase and temperature response of a liquid cooled SLM exposed to picosecond laser powers up to 〈P〉 = 220 W at 1064 nm. This information is critical for determining device performance at high laser powers. The SLM chip temperature rose linearly with incident laser exposure, increasing by only 5 °C at 〈P〉 = 220 W incident power, measured with a thermal imaging camera. The thermal response time with continuous exposure was 1-2 s. The optical phase response with incident power approaches 2π radians with average power up to 〈P〉 = 130 W, hence the operational limit, while above this power, liquid crystal thickness variations limit the phase response to just over π radians. Modelling of the thermal and phase response with exposure is also presented, supporting the experimental observations well. These remarkable performance characteristics show that liquid crystal based SLM technology is highly robust when efficiently cooled. High speed, multi-beam plasmonic surface micro-structuring at a rate R = 8 cm² s⁻¹ is achieved on polished metal surfaces at 〈P〉 = 25 W exposure, while diffractive, multi-beam surface ablation with average power 〈P〉 = 100 W on stainless steel is demonstrated with an ablation rate of ~4 mm³ min⁻¹. However, above 130 W, first order diffraction efficiency drops significantly in accord with the observed operational limit. Continuous exposure for a period of 45 min at a laser power of 〈P〉 = 160 W did not result in any detectable drop in diffraction efficiency, confirmed afterwards by the efficient

  12. Power-law scaling of extreme dynamics near higher-order exceptional points

    Science.gov (United States)

    Zhong, Q.; Christodoulides, D. N.; Khajavikhan, M.; Makris, K. G.; El-Ganainy, R.

    2018-02-01

    We investigate the extreme dynamics of non-Hermitian systems near higher-order exceptional points in photonic networks constructed using the bosonic algebra method. We show that strong power oscillations for certain initial conditions can occur as a result of the peculiar eigenspace geometry and its dimensionality collapse near these singularities. By using complementary numerical and analytical approaches, we show that, in the parity-time (PT) phase near exceptional points, the logarithm of the maximum optical power amplification scales linearly with the order of the exceptional point. We focus in our discussion on photonic systems, but we note that our results apply to other physical systems as well.

  13. Higher-order-mode (HOM) power in elliptical superconducting cavities for intense pulsed proton accelerators

    CERN Document Server

    Sang Ho Kim; Dong O Jeon; Sundeli, R

    2002-01-01

    In linacs for intense pulsed proton accelerators, the beam has a multiple time-structure, and each beam time-structure generates resonances. When a higher-order mode (HOM) lies near these resonance frequencies, the induced voltage can be large and, accordingly, so can the resulting HOM power. In order to understand the effects of a complex beam time-structure on the mode excitations and the resulting HOM powers in elliptical superconducting cavities, analytic expressions are developed, with which the beam-induced voltage and corresponding power are explored, taking into account the properties of HOM frequency behavior in elliptical superconducting cavities. The results and insights from this analysis are presented for the beam parameters of the Spallation Neutron Source (SNS) superconducting linac.

  14. A new hybrid nonlinear congruential number generator based on higher functional power of logistic maps

    International Nuclear Information System (INIS)

    Cecen, Songul; Demirer, R. Murat; Bayrak, Coskun

    2009-01-01

    We propose a nonlinear congruential pseudorandom number generator consisting of a summation of higher-order compositions of random logistic maps under certain congruential mappings. We vary both the bifurcation parameters of the logistic maps in the interval U=[3.5599,4) and the coefficients of the polynomials in each higher-order composition of terms up to degree d. This allowed us to obtain a perfectly decorrelated random generator which is infinite and aperiodic. It is observed from the simulation results that our new PRNG has good uniformity and power spectrum properties with very flat white-noise characteristics. The results are interesting and new, and may have applications in cryptography and in Monte Carlo simulations.
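
    A toy sketch of the general construction, composing several logistic maps with bifurcation parameters in [3.5599, 4) and combining the results under a modular reduction, is given below. It is not the authors' exact generator (their polynomial coefficients and congruential mapping are not specified here) and is not suitable for cryptographic use.

```python
def logistic_prng_stream(seed=0.37, r_values=(3.59, 3.7, 3.99), depth=3, m=2**32):
    """Toy logistic-map-based congruential generator: iterate several logistic
    maps, compose each map to 'depth' (a higher-order composition f^depth),
    sum the results and reduce modulo m. Illustrative only."""
    states = [seed + 0.01 * i for i in range(len(r_values))]
    while True:
        total = 0.0
        for i, r in enumerate(r_values):
            x = states[i]
            for _ in range(depth):          # higher-order composition
                x = r * x * (1.0 - x)
            states[i] = x
            total += x
        yield int(total * m) % m

gen = logistic_prng_stream()
print([next(gen) for _ in range(5)])
```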

  15. Planning for Higher Oil Prices : Power Sector Impact in Latin America and the Caribbean

    OpenAIRE

    Yépez-García, Rigoberto Ariel; San Vicente Portes, Luis; García, Luis Enrique

    2013-01-01

    A scenario with higher oil prices has important implications for shifting from oil-based technologies to renewables, as well as to gas, coal, and nuclear alternatives. By 2030, energy demand in Latin America and the Caribbean (LAC) is expected to double from 2008 levels. A key issue is deciding on the most appropriate mix of fuels for power generation, given the various prices of energy sour...

  16. The mercury laser system - An average power, gas-cooled, Yb:S-FAP based system with frequency conversion and wavefront correction

    Energy Technology Data Exchange (ETDEWEB)

    Bibeau, C.; Bayramian, A.; Armstrong, P.; Ault, E.; Beach, R.; Benapfl, M.; Campbell, R.; Dawson, J.; Ebbers, C.; Freitas, B.; Kent, R.; Liao, Z.; Ladran, T.; Menapace, J.; Molander, B.; Moses, E.; Oberhelman, S.; Payne, S.; Peterson, N.; Schaffers, K.; Stolz, C.; Sutton, S.; Tassano, J.; Telford, S.; Utterback, E. [Lawrence Livermore National Lab., Livermore, CA (United States); Randles, M. [Northrop Grumman Space Technologies, Charlotte, NC (United States); Chain, B.; Fei, Y. [Crystal Photonics, Sanford, Fl (United States)

    2006-06-15

    We report on the operation of the Mercury laser with fourteen 4×6 cm² Yb:S-FAP amplifier slabs pumped by eight 100 kW peak-power diode arrays. The system was continuously run at 55 J and 10 Hz for several hours (2×10⁵ cumulative shots), with over 80% of the energy in a 6-times-diffraction-limited spot at 1.047 µm. Improved optical quality was achieved in Yb:S-FAP amplifiers with magneto-rheological finishing, a deterministic polishing method. In addition, average-power frequency conversion employing a YCOB crystal was demonstrated at 50% conversion efficiency, or 22.6 J at 10 Hz. (authors)

  17. Performance of MgO:PPLN, KTA, and KNbO₃ for mid-wave infrared broadband parametric amplification at high average power.

    Science.gov (United States)

    Baudisch, M; Hemmer, M; Pires, H; Biegert, J

    2014-10-15

    The performance of potassium niobate (KNbO₃), MgO-doped periodically poled lithium niobate (MgO:PPLN), and potassium titanyl arsenate (KTA) were experimentally compared for broadband mid-wave infrared parametric amplification at a high repetition rate. The seed pulses, with an energy of 6.5 μJ, were amplified using 410 μJ pump energy at 1064 nm to a maximum pulse energy of 28.9 μJ at 3 μm wavelength and at a 160 kHz repetition rate in MgO:PPLN while supporting a transform limited duration of 73 fs. The high average powers of the interacting beams used in this study revealed average power-induced processes that limit the scaling of optical parametric amplification in MgO:PPLN; the pump peak intensity was limited to 3.8  GW/cm² due to nonpermanent beam reshaping, whereas in KNbO₃ an absorption-induced temperature gradient in the crystal led to permanent internal distortions in the crystal structure when operated above a pump peak intensity of 14.4  GW/cm².

  18. Amplified spontaneous emission and thermal management on a high average-power diode-pumped solid-state laser - the Lucia laser system

    International Nuclear Information System (INIS)

    Albach, D.

    2010-01-01

    The development of the laser triggered the birth of numerous fields in both scientific and industrial domains. High intensity laser pulses are a unique tool for light/matter interaction studies and applications. However, current flash-pumped glass-based systems are inherently limited in repetition rate and efficiency. Developments in recent years in the field of semiconductor lasers and gain media drew special attention to a new class of lasers, the so-called Diode Pumped Solid State Laser (DPSSL). DPSSLs are highly efficient lasers and are candidates of choice for the compact, high-average-power systems required for industrial applications, but also as high-power pump sources for ultra-intense lasers. The work described in this thesis takes place in the context of the 1 kilowatt average-power DPSSL program Lucia, currently under construction at the 'Laboratoire d'Utilisation des Laser Intenses' (LULI) at the Ecole Polytechnique, France. Generation of sub-10-nanosecond pulses with energies of up to 100 joules at repetition rates of 10 hertz is mainly limited by Amplified Spontaneous Emission (ASE) and thermal effects. These limitations are the central themes of this work. Their impact is discussed within the context of a first Lucia milestone, set around 10 joules. The developed laser system is shown in detail from the oscillator level to the end of the amplification line. A comprehensive discussion of the impact of ASE and thermal effects is completed by related experimental benchmarks. The validated models are used to predict the performance of the laser system, finally resulting in a first activation of the laser system at an energy level of 7 joules in a single-shot regime and 6.6 joules at repetition rates up to 2 hertz. Limitations and further scaling approaches are discussed, followed by an outlook for further development. (author) [fr]

  19. Experimental Demonstration of Higher Precision Weak-Value-Based Metrology Using Power Recycling

    Science.gov (United States)

    Wang, Yi-Tao; Tang, Jian-Shun; Hu, Gang; Wang, Jian; Yu, Shang; Zhou, Zong-Quan; Cheng, Ze-Di; Xu, Jin-Shi; Fang, Sen-Zhi; Wu, Qing-Lin; Li, Chuan-Feng; Guo, Guang-Can

    2016-12-01

    Weak-value-based metrology is very promising and has attracted a lot of attention in recent years because of its remarkable ability in signal amplification. However, it has been suggested that the upper limit of the precision of this metrology cannot exceed that of classical metrology because of the low sample size caused by the probe loss during postselection. Nevertheless, a recent proposal shows that this probe loss can be reduced by the power-recycling technique, thus enhancing the precision of weak-value-based metrology. Here we experimentally realize the power-recycled interferometric weak-value-based beam-deflection measurement and obtain the amplitude of the detected signal and white noise by discrete Fourier transform. Our results show that the detected signal can be strengthened by power recycling, and the power-recycled weak-value-based signal-to-noise ratio can surpass the upper limit of the classical scheme, corresponding to the shot-noise limit. This work sheds light on higher-precision metrology and explores the real advantage of weak-value-based metrology over classical metrology.

  20. An Electrochemical Capacitor with Applicable Energy Density of 7.4 Wh/kg at Average Power Density of 3000 W/kg.

    Science.gov (United States)

    Zhai, Teng; Lu, Xihong; Wang, Hanyu; Wang, Gongming; Mathis, Tyler; Liu, Tianyu; Li, Cheng; Tong, Yexiang; Li, Yat

    2015-05-13

    Electrochemical capacitors represent a new class of charge storage devices that can simultaneously achieve high energy density and high power density. Previous reports have been primarily focused on the development of high performance capacitor electrodes. Although these electrodes have achieved excellent specific capacitance based on per unit mass of active materials, the gravimetric energy densities calculated based on the weight of entire capacitor device were fairly small. This is mainly due to the large mass ratio between current collector and active material. We aimed to address this issue by a 2-fold approach of minimizing the mass of current collector and increasing the electrode performance. Here we report an electrochemical capacitor using 3D graphene hollow structure as current collector, vanadium sulfide and manganese oxide as anode and cathode materials, respectively. 3D graphene hollow structure provides a lightweight and highly conductive scaffold for deposition of pseudocapacitive materials. The device achieves an excellent active material ratio of 24%. Significantly, it delivers a remarkable energy density of 7.4 Wh/kg (based on the weight of entire device) at the average power density of 3000 W/kg. This is the highest gravimetric energy density reported for asymmetric electrochemical capacitors at such a high power density.

  1. The power of non-determinism in higher-order implicit complexity

    DEFF Research Database (Denmark)

    Kop, Cynthia Louisa Martina; Simonsen, Jakob Grue

    2017-01-01

    We investigate the power of non-determinism in purely functional programming languages with higher-order types. Specifically, we consider cons-free programs of varying data orders, equipped with explicit non-deterministic choice. Cons-freeness roughly means that data constructors cannot occur in function bodies and all manipulation of storage space thus has to happen indirectly using the call stack. While cons-free programs have previously been used by several authors to characterise complexity classes, the work on non-deterministic programs has almost exclusively considered programs of data order 0. Previous work has shown that adding explicit non-determinism to cons-free programs taking data of order 0 does not increase expressivity; we prove that this—dramatically—is not the case for higher data orders: adding non-determinism to programs with data order at least 1 allows

  2. Chronic pain and praying to a higher power: useful or useless?

    Science.gov (United States)

    Andersson, Gerhard

    2008-06-01

    In the present study a Swedish sample of 118 persons with chronic pain completed online tests on two occasions in association with treatment trials. A three item subscale measuring praying as a coping strategy was derived from the Coping Strategies Questionnaire (CSQ), but adapted to refer to "a higher power" instead of "God". Measures of pain and anxiety/depression were also included. Results revealed significant associations between praying and pain interference and impairment. Praying was also associated with anxiety and depression scores. Results also showed that prayer predicted depression scores at follow-up, and that follow-up prayer was predicted by pain interference at first measurement occasion. Overall, if prayer had any relation with the other variables it was in the negative direction of more distress being associated with more praying both concurrently and prospectively.

  3. Developing Student Worksheet Based On Higher Order Thinking Skills on the Topic of Transistor Power Amplifier

    Science.gov (United States)

    Sardia Ratna Kusuma, Luckey; Rakhmawati, Lusia; Wiryanto

    2018-04-01

    The purpose of this study is to develop a student worksheet on the transistor power amplifier based on higher order thinking skills, including critical, logical, reflective, metacognitive, and creative thinking, which could be useful for teachers in improving student learning outcomes. A Research and Development (R & D) methodology was used in this study. The pilot study of the worksheet was carried out with class X AV 2 at SMK Negeri 5 Surabaya. The results showed that the worksheet satisfies the validity aspect with a score of 81.76% and is effective (the students' learning outcomes classically passed with a percentage of 82.4%, and the students gave positive responses to each statement about the worksheet). It can be concluded that this worksheet is categorized as good and worthy to be used as a learning resource in learning activities.

  4. Estimation of the hydraulic conductivity of a two-dimensional fracture network using effective medium theory and power-law averaging

    Science.gov (United States)

    Zimmerman, R. W.; Leung, C. T.

    2009-12-01

    Most oil and gas reservoirs, as well as most potential sites for nuclear waste disposal, are naturally fractured. In these sites, the network of fractures provides the main path for fluid to flow through the rock mass. In many cases, the fracture density is so high as to make it impractical to model it with a discrete fracture network (DFN) approach. For such rock masses, it would be useful to have recourse to analytical, or semi-analytical, methods to estimate the macroscopic hydraulic conductivity of the fracture network. We have investigated single-phase fluid flow through stochastically generated two-dimensional fracture networks. The centers and orientations of the fractures are uniformly distributed, whereas their lengths follow a lognormal distribution. The aperture of each fracture is correlated with its length, either through direct proportionality or through a nonlinear relationship. The discrete fracture network flow and transport simulator NAPSAC, developed by Serco (Didcot, UK), is used to establish the “true” macroscopic hydraulic conductivity of the network. We then attempt to match this value by starting with the individual fracture conductances and using various upscaling methods. Kirkpatrick’s effective medium approximation, which works well for pore networks on the core scale, generally underestimates the conductivity of the fracture networks. We attribute this to the fact that the conductances of individual fracture segments (between adjacent intersections with other fractures) are correlated with each other, whereas Kirkpatrick’s approximation assumes no correlation. The power-law averaging approach proposed by Desbarats for porous media is able to match the numerical value, using power-law exponents that generally lie between 0 (geometric mean) and 1 (harmonic mean). The appropriate exponent can be correlated with statistical parameters that characterize the fracture density.
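
    The power-law (generalized) mean referred to here has a compact form, sketched below under the usual generalized-mean convention (which may differ from the exponent convention used in the record); the lognormal conductances are illustrative, not NAPSAC output.

```python
import numpy as np

def power_law_average(conductances, p):
    """Generalised (power-law) mean K_eff = ( mean(k^p) )^(1/p).
    In the standard convention p = 1 gives the arithmetic mean, p = -1 the
    harmonic mean, and p -> 0 the geometric mean (handled as a special case)."""
    k = np.asarray(conductances, dtype=float)
    if abs(p) < 1e-12:
        return np.exp(np.mean(np.log(k)))      # geometric mean
    return np.mean(k ** p) ** (1.0 / p)

# Illustrative lognormal segment conductances
k = np.random.default_rng(2).lognormal(mean=0.0, sigma=1.0, size=10000)
for p in (-1.0, 0.0, 0.5, 1.0):
    print(f"p = {p:+.1f}:  K_eff = {power_law_average(k, p):.3f}")
```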

  5. Combined peak-to-average power ratio reduction and physical layer security enhancement in optical orthogonal frequency division multiplexing visible-light communication systems

    Science.gov (United States)

    Wang, Zhongpeng; Chen, Shoufa

    2016-07-01

    A physical encryption scheme for discrete Hartley transform (DHT) precoded orthogonal frequency division multiplexing (OFDM) visible-light communication (VLC) systems using frequency-domain chaos scrambling is proposed. In the scheme, the chaos scrambling, which is generated by a modified logistic mapping, is utilized to enhance physical-layer security, and the DHT precoding is employed to reduce the peak-to-average power ratio (PAPR) of the OFDM signal for OFDM-based VLC. The influence of the chaos scrambling on the PAPR and bit error rate (BER) of the system is studied. The simulation results prove the efficiency of the proposed encryption method for DHT-precoded, OFDM-based VLC systems. Furthermore, the influence of the proposed encryption on the PAPR and BER of the system is evaluated. The results show that the proposed security scheme can protect the DHT-precoded, OFDM-based VLC system from eavesdroppers while keeping the good BER performance of DHT-precoded systems. The BER performance of the encrypted, DHT-precoded system is almost the same as that of the conventional DHT-precoded system without encryption.
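
    A minimal sketch of the frequency-domain scrambling idea, assuming a plain logistic map in place of the authors' modified mapping, is shown below: the chaotic sequence is turned into a permutation of the subcarrier symbols, and the same key regenerates the permutation at the receiver for descrambling. The DHT precoding step is not reproduced.

```python
import numpy as np

def logistic_permutation(n, x0=0.41, r=3.9999, burn_in=100):
    """Derive a length-n scrambling permutation from a logistic-map chaotic
    sequence; (x0, r) acts as the shared key."""
    x, seq = x0, np.empty(n)
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return np.argsort(seq)                  # chaotic values -> permutation

def scramble(freq_symbols, perm):
    return freq_symbols[perm]

def descramble(scrambled, perm):
    out = np.empty_like(scrambled)
    out[perm] = scrambled
    return out

perm = logistic_permutation(8)
sym = np.arange(8) + 0j
assert np.allclose(descramble(scramble(sym, perm), perm), sym)
```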

  6. Peak-to-average power ratio reduction in orthogonal frequency division multiplexing-based visible light communication systems using a modified partial transmit sequence technique

    Science.gov (United States)

    Liu, Yan; Deng, Honggui; Ren, Shuang; Tang, Chengying; Qian, Xuewen

    2018-01-01

    We propose an efficient partial transmit sequence (PTS) technique based on a genetic algorithm and a peak-value optimization algorithm (GAPOA) to reduce the high peak-to-average power ratio (PAPR) in visible light communication systems based on orthogonal frequency division multiplexing (VLC-OFDM). After analysing the pros and cons of the hill-climbing algorithm, we propose the POA, which has excellent local search ability, to further process the signals whose PAPR is still over the threshold after processing by the genetic algorithm (GA). To verify the effectiveness of the proposed technique and algorithm, we evaluate the PAPR performance and the bit error rate (BER) performance and compare them with the PTS technique based on GA (GA-PTS), the PTS technique based on genetic and hill-climbing algorithms (GH-PTS), and the PTS based on the shuffled frog leaping algorithm and hill-climbing algorithm (SFLAHC-PTS). The results show that our technique and algorithm have not only better PAPR performance but also lower computational complexity and BER than the GA-PTS, GH-PTS, and SFLAHC-PTS techniques.
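
    For context, the sketch below computes the PAPR of an OFDM symbol and runs a toy PTS search with random ±1 phase factors per sub-block; it is a stand-in for the GA/POA search of the record, and the QPSK data and block sizes are illustrative.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

def pts_random_search(symbols, n_blocks=4, n_candidates=64, seed=3):
    """Toy partial-transmit-sequence search: split the subcarriers into
    blocks, try random +/-1 phase factors per block and keep the candidate
    with the lowest PAPR."""
    rng = np.random.default_rng(seed)
    blocks = np.array_split(np.arange(len(symbols)), n_blocks)
    best_x, best_papr = None, np.inf
    for _ in range(n_candidates):
        phases = rng.choice([1.0, -1.0], size=n_blocks)
        weighted = symbols.copy()
        for b, ph in zip(blocks, phases):
            weighted[b] *= ph
        x = np.fft.ifft(weighted)            # OFDM time-domain signal
        p = papr_db(x)
        if p < best_papr:
            best_x, best_papr = x, p
    return best_x, best_papr

rng = np.random.default_rng(0)
qpsk = (rng.choice([1, -1], 256) + 1j * rng.choice([1, -1], 256)) / np.sqrt(2)
print("original PAPR :", round(papr_db(np.fft.ifft(qpsk)), 2), "dB")
print("PTS PAPR      :", round(pts_random_search(qpsk)[1], 2), "dB")
```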

  7. Plasma wakefields driven by an incoherent combination of laser pulses: a path towards high-average power laser-plasma accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Benedetti, C.; Schroeder, C.B.; Esarey, E.; Leemans, W.P.

    2014-05-01

    The wakefield generated in a plasma by incoherently combining a large number of low-energy laser pulses (i.e., without constraining the pulse phases) is studied analytically and by means of fully self-consistent particle-in-cell simulations. The structure of the wakefield has been characterized and its amplitude compared with the amplitude of the wake generated by a single (coherent) laser pulse. We show that, in spite of the incoherent nature of the wakefield within the volume occupied by the laser pulses, behind this region the structure of the wakefield can be regular, with an amplitude comparable or equal to that obtained from a single pulse with the same energy. Wake generation requires that the incoherent structure in the laser energy density produced by the combined pulses exists on a time scale short compared to the plasma period. Incoherent combination of multiple laser pulses may enable a technologically simpler path to high-repetition-rate, high-average-power laser-plasma accelerators and associated applications.

  8. Enhancing the well-being of support services staff in higher education: The power of appreciation

    Directory of Open Access Journals (Sweden)

    Laurika van Straaten

    2016-07-01

    Orientation: A literature search for studies on the well-being of support staff of higher education institutions (HEIs) produced very few results. Appreciation was then used to identify elements that might enhance the well-being of a selected HEI's support staff. Research purpose: The aim was to explore the strengths of a selected HEI that might serve as driving forces for enhancing its support staff's well-being. Motivation for the study: The lack of research on the well-being of support staff motivated the study. A need was identified to explore driving forces that might enhance their well-being. Research design, approach and method: A literature review guided by theoretical perspectives and theories on staff well-being was conducted. Subsequently, a qualitative action research design involving an Appreciative Inquiry (AI) workshop with support staff of an institution was followed. Main findings: The following strengths that might serve as driving forces for enhancing the well-being of the institution's support services staff were identified: hard-working and dedicated support staff, positive relations among colleagues, a willingness to adapt to change, good remuneration and benefits, job security and a supportive work environment. Appreciative Inquiry was found to be well suited for identifying such strengths, as opposed to methods that focus on identifying problems or weaknesses of an organisation. As a result of this study, the relevant institution might react and build on these identified strengths towards promoting the well-being of its support staff. Practical/managerial implications: Institutions should make an effort to enhance staff well-being. The results of the study could also be used to encourage HEIs to use AI to establish optimal staff well-being. Contribution/value add: The study confirmed the power of appreciation to identify the strengths that might serve as driving forces for enhancing the well-being of support staff

  9. Investigation of Turkish Higher Education as a Means of Influence in Relation to ‘Soft Power'

    Directory of Open Access Journals (Sweden)

    Hilal BÜYÜKGÖZE

    2016-05-01

    As the potential of soft power came to be realized over time, with international diplomacy being channelled towards new quests after the industrial revolution and the cold war, states have been led to apply various ways and methods in their foreign policies. Turkey also utilizes soft power, generally defined as the ability to shape the preferences of others through appeal rather than coercion, both through financial co-operation and constructive initiatives in international policy and through national institutions, foundations and Non-Governmental Organizations, for its appearance and perception in the international system. Higher education, which has an international character, may also be assessed as a soft power element through international students and academics. Accordingly, in the current study, the terms power and soft power were investigated, and the role of soft power in Turkey's foreign policy and in Turkish higher education was examined. The findings of the study showed that the strategies toward internationalism in higher education are not sufficient, although academics and university students have a certain level of awareness of the issue. The results were interpreted within the scope of the literature as well as the managerial and functional characteristics of Turkish higher education.

  10. CEZ utility's coal-fired power plants: towards a higher environmental friendliness

    International Nuclear Information System (INIS)

    Kindl, V.; Spilkova, T.; Vanousek, I.; Stehlik, J.

    1996-01-01

    Environmental efforts of the major Czech utility, CEZ a.s., are aimed at reducing air pollution arising from electricity and heat generating facilities. There are 3 main kinds of activity in this respect: phasing out of coal fired power plants; technological provisions to reduce emissions of particulate matter, sulfur dioxide, and nitrogen oxides from those coal fired units that are to remain in operation after 1998; and completion of the Temelin nuclear power plant. In 1995, emissions of particulate matter, sulfur dioxide, nitrogen oxides, and carbon monoxide from CEZ's coal fired power plants were 19%, 79%, 59%, and 60%, respectively, with respect to the situation in 1992. The break-down of electricity generation by CEZ facilities (in GWh) was as follows in 1995: hydroelectric power plants 1673, nuclear power plants 12230, coal fired power plants without desulfurization equipment 30181, and coal fired power plants with desulfurization equipment 2277. Provisions implemented to improve the environmental friendliness of the individual CEZ's coal fired power plants are described in detail. (P.A.). 5 tabs., 1 fig

  11. Higher operational safety of nuclear power plants by evaluating the behaviour of operating personnel

    International Nuclear Information System (INIS)

    Mertins, M.; Glasner, P.

    1990-01-01

    In the GDR power reactors have been operated since 1966. Since that time operational experiences of 73 cumulative reactor years have been collected. The behaviour of operating personnel is an essential factor to guarantee the safety of operation of the nuclear power plant. Therefore a continuous analysis of the behaviour of operating personnel has been introduced at the GDR nuclear power plants. In the paper the overall system of the selection, preparation and control of the behaviour of nuclear power plant operating personnel is presented. The methods concerned are based on recording all errors of operating personnel and on analyzing them in order to find out the reasons. The aim of the analysis of reasons is to reduce the number of errors. By a feedback of experiences the nuclear safety of the nuclear power plant can be increased. All data necessary for the evaluation of errors are recorded and evaluated by a computer program. This method is explained thoroughly in the paper. Selected results of error analysis are presented. It is explained how the activities of the personnel are made safer by means of this analysis. Comparisons with other methods are made. (author). 3 refs, 4 figs

  12. Higher-order power harmonics of pulsed electrical stimulation modulates corticospinal contribution of peripheral nerve stimulation.

    Science.gov (United States)

    Chen, Chiun-Fan; Bikson, Marom; Chou, Li-Wei; Shan, Chunlei; Khadka, Niranjan; Chen, Wen-Shiang; Fregni, Felipe

    2017-03-03

    It is well established that electrical-stimulation frequency is crucial to determining the scale of induced neuromodulation, particularly when attempting to modulate corticospinal excitability. However, the modulatory effects of stimulation frequency are not only determined by its absolute value but also by other parameters such as power at harmonics. The stimulus pulse shape further influences parameters such as excitation threshold and fiber selectivity. The explicit role of the power in these harmonics in determining the outcome of stimulation has not previously been analyzed. In this study, we adopted an animal model of peripheral electrical stimulation that includes an amplitude-adapted pulse train which induces force enhancements with a corticospinal contribution. We report that the electrical-stimulation-induced force enhancements were correlated with the amplitude of stimulation power harmonics during the amplitude-adapted pulse train. In an exploratory analysis, different levels of correlation were observed between force enhancement and power harmonics of 20-80 Hz (r = 0.4247, p = 0.0243), 100-180 Hz (r = 0.5894, p = 0.0001), 200-280 Hz (r = 0.7002, p harmonics. This is a pilot, but important first demonstration that power at high order harmonics in the frequency spectrum of electrical stimulation pulses may contribute to neuromodulation, thus warrant explicit attention in therapy design and analysis.

  13. Internationalization, Regionalization, and Soft Power: China's Relations with ASEAN Member Countries in Higher Education

    Science.gov (United States)

    Yang, Rui

    2012-01-01

    Since the late 1980s, there has been a resurgence of regionalism in world politics. Prospects for new alliances are opened up often on a regional basis. In East and Southeast Asia, regionalization is becoming evident in higher education, with both awareness and signs of a rising ASEAN+3 higher education community. The quest for regional influence…

  14. Replacement power costs due to nuclear-plant outages: a higher standard of care

    International Nuclear Information System (INIS)

    Gransee, M.F.

    1982-01-01

    This article examines recent state public utility commission cases that deal with the high costs of replacement power that utilities must purchase after a nuclear power plant outage. Although most commissions have approved such expenses, it may be that there is a trend toward splitting the costs of such expenses between ratepayer and stockholder. Commissions are demanding a management prudence test to determine the cause of the outage and whether it meets the reasonable man standard before allowing these costs to be passed along to ratepayers. Unless the standard is applied with flexibility, however, utility companies could invoke the defenses covering traditional common law negligence

  15. Americans' Average Radiation Exposure

    International Nuclear Information System (INIS)

    2000-01-01

    We live with radiation every day. We receive radiation exposures from cosmic rays, from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emission from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body

  16. Global Discourses and Power/Knowledge: Theoretical Reflections on Futures of Higher Education during the Rise of Asia

    Science.gov (United States)

    Geerlings, L. R. C.; Lundberg, A.

    2018-01-01

    This paper re-reads a selection of critical interdisciplinary theories in an attempt to open a space in higher education for cross-cultural dialogue during the rise of Asia. Theories of globalization, deterritorialization, power/knowledge and postcolonialism indicate that students and academics have the ability to re-imagine and influence…

  17. Knowledge, Power and Meanings Shaping Quality Assurance in Higher Education: A Systemic Critique

    Science.gov (United States)

    Houston, Don; Paewai, Shelley

    2013-01-01

    Internationally, quality assurance schemes persist despite long-standing dissatisfaction and critique of their impact and outcomes. Adopting a critical systems perspective, the article explores the relationships between the knowledge, power and meanings that stakeholder groups bring to the design and implementation of quality assurance systems.…

  18. CONSIDERATIONS FOR FAILURE PREVENTION IN AEROSPACE ELECTRICAL POWER SYSTEMS UTILIZING HIGHER VOLTAGES

    Science.gov (United States)

    2017-07-01

    exist to develop tests that mimic the aerospace environment. Testing and qualification requirements for hermetic components need to consider leakage ... containing significant amounts of water. Different parts of the flight cycle may also be critical for specific components. Open-box power electronics...60270 [A-3]. Specialized test systems have been developed to detect and characterize PD in electrical equipment at atmospheric pressures. Although

  19. Catalytic Reforming of Higher Hydrocarbon Fuels to Hydrogen: Process Investigations with Regard to Auxiliary Power Units

    OpenAIRE

    Kaltschmitt, Torsten

    2012-01-01

    This thesis discusses the investigation of catalytic partial oxidation on rhodium-coated honeycomb catalysts with respect to the conversion of a model surrogate fuel and commercial diesel fuel into hydrogen for use in auxiliary power units. Furthermore, the influence of simulated tail-gas recycling was investigated.

  20. The Barkas effect and other higher-order Z1-contributions to the stopping power

    International Nuclear Information System (INIS)

    Andersen, H.H.

    1985-01-01

    The experimental evidence for contributions to the stopping power proportional to Z₁³ (Barkas effect) and Z₁⁴ (Bloch correction) at velocities around 10v₀ is reviewed. Quantitative evidence is found for both terms, but it is not possible experimentally to discern whether hard collisions contribute to the Barkas term. Evidence from single-collision events is drawn into the discussion, and some experiments which may turn out to be decisive are discussed. (orig.)

  1. More Stake, Less Gravy? Issues of Knowledge and Power in Higher Education.

    Science.gov (United States)

    Bak, Nelleke; Paterson, Andrew

    1997-01-01

    Two perspectives on higher education's stakeholders and their involvement in the development of knowledge in universities are examined and contrasted: (1) that the "stakeholder" notion of knowledge doesn't allow for critical engagement with knowledge, and (2) that the "stakeholder" view of knowledge acknowledges clear links between knowledge and…

  2. Policy Internationalization, National Variety and Governance: Global Models and Network Power in Higher Education States

    Science.gov (United States)

    King, Roger

    2010-01-01

    This article analyzes policy convergence and the adoption of globalizing models by higher education states, a process we describe, following Thatcher (2007), as policy internationalization. This refers to processes found in many policy domains and which increasingly are exemplified in tertiary education systems too. The focus is on governmental…

  3. Space and Power in the Ivory Tower: Decision Making in Public Higher Education

    Science.gov (United States)

    Blanchette, Sandra McCoskrie

    2011-01-01

    The challenges of managing physical space in higher education are often left unspoken and under researched. In this multiple case study of three urban universities, decision-making processes are examined with particular attention to who has institutional decision-making authority. Effective and efficient space management is important because the…

  4. Harnessing the Power of Information Technology: Open Business Models in Higher Education

    Science.gov (United States)

    Sheets, Robert G.; Crawford, Stephen

    2012-01-01

    Higher education is under enormous pressure to improve outcomes and reduce costs. Information technology can help achieve these goals, but only if it is properly harnessed. This article argues that one key to harnessing information technology is business model innovation that results in more "open" and "unbundled" operations in learning and…

  5. Gender, Power and Management: A Cross-Cultural Analysis of Higher Education

    Science.gov (United States)

    Bagilhole, Barbara, Ed.; White, Kate, Ed.

    2011-01-01

    Women are now part of senior management in higher education (HE) to varying degrees in most countries and actively contribute to the vision and strategic direction of universities. This book attempts to analyse their impact and potential impact on both organisational growth and culture. Contents of this book include: (1) Building a Feminist…

  6. Managing the higher risks of low-cost high-efficiency advanced power generation technologies

    International Nuclear Information System (INIS)

    Pearson, M.

    1997-01-01

    Independent power producers operate large coal-fired installations and gas-turbine combined-cycle (GTCC) facilities. Combined-cycle units are complex, and their reliability and availability are greatly influenced by mechanical, instrumentation and control weaknesses. It was suggested that these weaknesses could be avoided by tighter specifications and more rigorous functional testing before acceptance by the owner. For the present, the difficulties of developing reliable, more efficient GTCC designs with lower installed cost per kW, together with the pressure for lower NOₓ emissions from 'dry' combustors, continue to be the most difficult challenges for all GT manufacturers.

  7. Propagation of high power electromagnetic beam in relativistic magnetoplasma: Higher order paraxial ray theory

    Science.gov (United States)

    Gill, Tarsem Singh; Kaur, Ravinder; Mahajan, Ranju

    2010-09-01

    This paper presents an analysis of a self-consistent, steady-state theoretical model which explains the ring formation in a Gaussian electromagnetic beam propagating in a magnetoplasma characterized by relativistic nonlinearity. Higher-order terms (up to r⁴) in the expansion of the dielectric function and the eikonal have been taken into account. The condition for the formation of a dark and bright ring derived earlier by Misra and Mishra [J. Plasma Phys. 75, 769 (2009)] has been used to study focusing/defocusing of the beam. It is seen that inclusion of the higher-order terms does significantly affect the dependence of the beam width on the distance of propagation. Further, the effect of the magnetic field and the nature of the nonlinearity on the ring formation and self-focusing of the beam have been explored.

  8. Propagation of high power electromagnetic beam in relativistic magnetoplasma: Higher order paraxial ray theory

    International Nuclear Information System (INIS)

    Gill, Tarsem Singh; Kaur, Ravinder; Mahajan, Ranju

    2010-01-01

    This paper presents an analysis of a self-consistent, steady-state theoretical model which explains the ring formation in a Gaussian electromagnetic beam propagating in a magnetoplasma characterized by relativistic nonlinearity. Higher-order terms (up to r⁴) in the expansion of the dielectric function and the eikonal have been taken into account. The condition for the formation of a dark and bright ring derived earlier by Misra and Mishra [J. Plasma Phys. 75, 769 (2009)] has been used to study focusing/defocusing of the beam. It is seen that inclusion of the higher-order terms does significantly affect the dependence of the beam width on the distance of propagation. Further, the effect of the magnetic field and the nature of the nonlinearity on the ring formation and self-focusing of the beam have been explored.

  9. Maximal locality and predictive power in higher-dimensional, compactified field theories

    International Nuclear Information System (INIS)

    Kubo, Jisuke; Nunami, Masanori

    2004-01-01

    To realize maximal locality in a trivial field theory, we maximize the ultraviolet cutoff of the theory by fine-tuning the infrared values of the parameters. This optimization procedure is applied to the scalar theory in D + 1 dimensions (D ≥ 4) with one extra dimension compactified on a circle of radius R. The optimized infrared values of the parameters are then compared with the corresponding ones of the uncompactified theory in D dimensions, which is assumed to be the low-energy effective theory. We find that these values approximately agree with each other as long as R⁻¹ ≳ sM is satisfied, where s ≃ 10, 50, 50, 100 for D = 4, 5, 6, 7, and M is a typical scale of the D-dimensional theory. This result supports the previously made claim that the maximization of the ultraviolet cutoff in a nonrenormalizable field theory can give the theory more predictive power. (author)

  10. Higher capacity, lower carbon dioxide emissions. Reactive power compensation in high-voltage lines; Mehr Kapazitaet, weniger Kohlendioxid. Blindleistungskompensation bei Hochspannungsleitungen

    Energy Technology Data Exchange (ETDEWEB)

    Auer, Jan-Hendrik von [Alstom Grid GmbH, Berlin (Germany). Team Leistungselektronik und Kompensationsanlagen

    2012-07-01

    Even today, many HV lines have reached their limits. It is therefore highly urgent to find measures for optimum utilization of the available overhead transmission capacities, e.g. by reactive power compensation. Together with a filter for harmonics reduction, this ensures higher grid stability and enhances transport capacity while reducing transport losses, thus saving money and reducing CO₂ emissions. (orig./AKB)

  11. Production Planning with Respect to Uncertainties. Simulator Based Production Planning of Average Sized Combined Heat and Power Production Plants; Produktionsplanering under osaekerhet. Simulatorbaserad produktionsplanering av medelstora kraftvaermeanlaeggningar

    Energy Technology Data Exchange (ETDEWEB)

    Haeggstaahl, Daniel [Maelardalen Univ., Vaesteraas (Sweden); Dotzauer, Erik [AB Fortum, Stockholm (Sweden)

    2004-12-01

    Production planning in Combined Heat and Power (CHP) systems is considered. The focus is on the development and use of mathematical models and methods. Different aspects of production planning are discussed, including weather and load predictions. Questions relevant to the different planning horizons are illuminated. The main purpose of short-term (one-week) planning is to decide when to start and stop the production units, and how to use the heat storage. The main conclusion from the outline of pros and cons of commercial planning software is that several of them use Mixed Integer Programming (MIP); in that sense they are similar. Building a production planning model means that the planning problem is formulated as a mathematical optimization problem. The accuracy of the input data determines the practical detail level of the model. Two alternatives to the methods used in today's commercial programs are proposed: stochastic optimization and simulator-based optimization. The basic concepts of mathematical optimization are outlined. A simulator-based model for short-term planning is developed. The purpose is to minimize the production costs, which depend on the heat demand in the district heating system, prices of electricity and fuels, emission taxes and fees, etc. The problem is simplified by not including any time-linking conditions. The process model is developed in IPSEpro, a heat and mass-balance software from SimTech Simulation Technology. TOMLAB, an optimization toolbox in MATLAB, is used as the optimizer. Three different solvers are applied: glcFast, glcCluster and SNOPT. The link between TOMLAB and IPSEpro is accomplished using the Microsoft COM technology. MATLAB is the automation client and contains the control of IPSEpro and TOMLAB. The simulator-based model is applied to the CHP plant in Eskilstuna. Two days are chosen and analyzed, and the optimized production is compared to the measured production. A sensitivity analysis on how variations in outdoor

  12. On the use of the residue theorem for the efficient evaluation of band-averaged input power into linear second-order dynamic systems

    Science.gov (United States)

    D'Amico, R.; Koo, K.; Huybrechs, D.; Desmet, W.

    2013-12-01

    An alternative to numerical quadrature is proposed to compute the power injected into a vibrating structure over a certain frequency band. Instead of evaluating the system response at several sampling frequencies within the considered band, the integral is computed by estimating the residue at a few complex frequencies, corresponding to the poles of the weighting function. This technique provides considerable benefits in terms of computation time, since the integration is independent of the width of the frequency band. Two application examples show the effectiveness of the approach. Firstly, the use of a Butterworth filter instead of a rectangular weighting function is assessed. Secondly, the accuracy of the approximation in case of hysteretic damping is proven. Finally, the computational performance of the technique is compared with classical numerical quadrature schemes.
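    As a toy illustration of the residue idea (not the paper's Butterworth formulation), a minimal Python sketch: a frequency average weighted by an assumed Lorentzian window collapses to a single evaluation of the response at the window's complex pole, provided the response is analytic in the upper half-plane.

```python
# Toy illustration (assumptions: Lorentzian weighting, a stand-in response function
# analytic in the upper half-plane). The band-weighted integral reduces to one
# residue evaluation at the pole of the weighting function, omega0 + i*delta.
import numpy as np
from scipy.integrate import quad

omega0, delta = 2.0, 0.5            # centre and half-width of the weighting window
f = lambda w: 1.0 / (w + 3.0j)      # stand-in "response", analytic for Im(w) >= 0
lorentz = lambda w: (delta / np.pi) / ((w - omega0) ** 2 + delta ** 2)

# Brute-force numerical quadrature (real and imaginary parts separately)
re, _ = quad(lambda w: (f(w) * lorentz(w)).real, -np.inf, np.inf)
im, _ = quad(lambda w: (f(w) * lorentz(w)).imag, -np.inf, np.inf)

# Residue evaluation: the whole integral collapses to f at the complex pole
residue_value = f(omega0 + 1j * delta)

print(re + 1j * im)      # ~ (0.1231 - 0.2154j)
print(residue_value)     # the same value, at the cost of a single evaluation
```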

  13. Body lift, drag and power are relatively higher in large-eared than in small-eared bat species.

    Science.gov (United States)

    Håkansson, Jonas; Jakobsen, Lasse; Hedenström, Anders; Johansson, L Christoffer

    2017-10-01

    Bats navigate the dark using echolocation. Echolocation is enhanced by external ears, but external ears increase the projected frontal area and reduce the streamlining of the animal. External ears are thus expected to compromise flight efficiency, but research suggests that very large ears may mitigate the cost by producing aerodynamic lift. Here we compare quantitative aerodynamic measures of flight efficiency of two bat species, one large-eared (Plecotus auritus) and one small-eared (Glossophaga soricina), flying freely in a wind tunnel. We find that the body drag of both species is higher than previously assumed and that the large-eared species has a higher body drag coefficient, but also produces relatively more ear/body lift than the small-eared species, in line with prior studies on model bats. The measured aerodynamic power of P. auritus was higher than predicted from the aerodynamic model, while the small-eared species aligned with predictions. The relatively higher power of the large-eared species results in lower optimal flight speeds and our findings support the notion of a trade-off between the acoustic benefits of large external ears and aerodynamic performance. The result of this trade-off would be the eco-morphological correlation in bat flight, with large-eared bats generally adopting slow-flight feeding strategies. © 2017 The Author(s).

  14. Application of power spectrum, cepstrum, higher order spectrum and neural network analyses for induction motor fault diagnosis

    Science.gov (United States)

    Liang, B.; Iwnicki, S. D.; Zhao, Y.

    2013-08-01

    The power spectrum is defined as the square of the magnitude of the Fourier transform (FT) of a signal. The advantage of FT analysis is that it allows the decomposition of a signal into individual periodic frequency components and establishes the relative intensity of each component. It is the most commonly used signal processing technique today. If the same principle is applied to detect periodic components within a Fourier spectrum, the process is called cepstrum analysis. Cepstrum analysis is a very useful tool for detecting families of harmonics with uniform spacing, or the families of sidebands commonly found in gearbox, bearing and engine vibration fault spectra. Higher order spectra (HOS), also known as polyspectra, consist of higher order moments of spectra and are able to detect non-linear interactions between frequency components. The most commonly used HOS is the bispectrum, a third-order frequency domain measure that contains information standard power spectral analysis techniques cannot provide. It is well known that neural networks can represent complex non-linear relationships, and therefore they are extremely useful for fault identification and classification. This paper presents an application of the power spectrum, cepstrum, bispectrum and neural networks for fault pattern extraction of induction motors. The potential for using power spectrum, cepstrum, bispectrum and neural network analyses as a means of differentiating between healthy and faulty induction motor operation is examined. A series of experiments is carried out and the advantages and disadvantages of the methods are discussed. It has been found that a combination of power spectrum, cepstrum and bispectrum plus neural network analyses could be a very useful tool for condition monitoring and fault diagnosis of induction motors.
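    A minimal sketch of the first two tools mentioned (power spectrum and real cepstrum) on a synthetic, hypothetical vibration signal; the bispectrum and neural network stages are not reproduced:

```python
# Minimal sketch: power spectrum and real cepstrum of a synthetic "motor vibration"
# signal (hypothetical data). A family of harmonics with 10 Hz spacing shows up as a
# single cepstral peak near the corresponding quefrency of 0.1 s.
import numpy as np

fs = 2000.0                                    # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
sig = (np.sin(2 * np.pi * 50 * t)
       + 0.4 * sum(np.sin(2 * np.pi * (50 + 10 * k) * t) for k in range(1, 6))
       + 0.1 * np.random.randn(t.size))

spectrum = np.fft.rfft(sig * np.hanning(sig.size))
power_spectrum = np.abs(spectrum) ** 2         # squared magnitude of the FT

# Real cepstrum: inverse FT of the log power spectrum; uniformly spaced harmonic
# families appear as a peak at the reciprocal of their spacing ("quefrency").
cepstrum = np.fft.irfft(np.log(power_spectrum + 1e-12))
quefrency = np.arange(cepstrum.size) / fs

peak = quefrency[np.argmax(cepstrum[20:cepstrum.size // 2]) + 20]
print(f"dominant quefrency ~ {peak:.3f} s (expected ~ 0.1 s for 10 Hz spacing)")
```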

  15. Higher Fock states and power counting in exclusive P-wave quarkonium decays

    CERN Document Server

    Bolz, J; Schuler, G A; Bolz, Jan; Kroll, Peter; Schuler, Gerhard A.

    1998-01-01

    Exclusive processes at large momentum transfer Q factor into perturbatively calculable short-distance parts and long-distance hadronic wave functions. Usually, only contributions from the leading Fock states have to be included to leading order in 1/Q. We show that for exclusive decays of P-wave quarkonia the next-higher Fock state |Q Qbar g> contributes at the same order in 1/Q. We investigate how the constituent gluon attaches to the hard process in order to form colour-singlet final-state hadrons and argue that a single additional long-distance factor is sufficient to parametrize the size of its contribution. Incorporating transverse degrees of freedom and Sudakov factors, our results are perturbatively stable in the sense that soft phase-space contributions are largely suppressed. Explicit calculations yield good agreement with data on chi_{c J} decays into pairs of pions, kaons, and etas. We also comment on J/psi decays into two pions.

  16. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  17. Multi-objective optimization of MOSFETs channel widths and supply voltage in the proposed dual edge-triggered static D flip-flop with minimum average power and delay by using fuzzy non-dominated sorting genetic algorithm-II.

    Science.gov (United States)

    Keivanian, Farshid; Mehrshad, Nasser; Bijari, Abolfazl

    2016-01-01

    A D flip-flop is a digital circuit that can be used as a timing element in many sophisticated circuits; achieving optimum performance with the lowest power consumption and an acceptable delay time is therefore a critical issue in electronic circuits. The layout of the newly proposed dual-edge-triggered static D flip-flop circuit is defined as a multi-objective optimization problem. For this, an optimum fuzzy inference system with fuzzy rules is proposed to enhance the performance and convergence of the non-dominated sorting Genetic Algorithm-II (NSGA-II) by adaptive control of the exploration and exploitation parameters. Using the proposed fuzzy NSGA-II algorithm, more optimal values for the MOSFET channel widths and the power supply are discovered in the search space than with ordinary NSGA variants. Furthermore, the design parameters (NMOS and PMOS channel widths and power supply voltage) and the performance parameters (average power consumption and propagation delay time) are linked; the required mathematical background is presented in this study. The optimum values for the design parameters of the MOSFET channel widths and the power supply are discovered. Based on them, the power-delay product (PDP) is 6.32 pJ at a 125 MHz clock frequency, L = 0.18 µm, and T = 27 °C.
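    As a rough sketch of the multi-objective core of such a search (plain Pareto dominance between average power and delay; the fuzzy-controlled NSGA-II itself is not reproduced, and the candidate values are hypothetical):

```python
# Minimal sketch of the Pareto-dominance step at the heart of NSGA-II-type searches:
# given hypothetical (average power, delay) pairs for candidate flip-flop sizings,
# keep only the non-dominated designs. The fuzzy adaptation of the search parameters
# described above is not reproduced here.
from typing import List, Tuple

def dominates(a: Tuple[float, float], b: Tuple[float, float]) -> bool:
    """a dominates b if it is no worse in both objectives and not identical."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(points: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical candidates: (average power in uW, propagation delay in ps)
candidates = [(12.0, 95.0), (9.5, 120.0), (15.0, 80.0), (14.0, 100.0), (13.0, 90.0)]
print(pareto_front(candidates))
# -> [(12.0, 95.0), (9.5, 120.0), (15.0, 80.0), (13.0, 90.0)]; (14.0, 100.0) is dominated
```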

  18. Anodic biofilms in microbial fuel cells harbor low numbers of higher-power-producing bacteria than abundant genera

    Energy Technology Data Exchange (ETDEWEB)

    Kiely, Patrick D.; Call, Douglas F.; Yates, Matthew D.; Regan, John M.; Logan, Bruce E. [Pennsylvania State Univ., University Park, PA (United States). Dept. of Civil and Environmental Engineering

    2010-09-15

    Microbial fuel cell (MFC) anode communities often reveal just a few genera, but it is not known to what extent less abundant bacteria could be important for improving performance. We examined the microbial community in an MFC fed with formic acid for more than 1 year and determined using 16S rRNA gene cloning and fluorescent in situ hybridization that members of the Paracoccus genus comprised most (~30%) of the anode community. A Paracoccus isolate obtained from this biofilm (Paracoccus denitrificans strain PS-1) produced only 5.6 mW/m², whereas the original mixed culture produced up to 10 mW/m². Despite the absence of any Shewanella species in the clone library, we isolated a strain of Shewanella putrefaciens (strain PS-2) from the same biofilm capable of producing a higher power density (17.4 mW/m²) than the mixed culture, although voltage generation was variable. Our results suggest that the numerical abundance of microorganisms in biofilms cannot be assumed a priori to correlate to capacities of these predominant species for high-power production. Detailed screening of bacterial biofilms may therefore be needed to identify important strains capable of high-power generation for specific substrates. (orig.)

  19. Anodic biofilms in microbial fuel cells harbor low numbers of higher-power-producing bacteria than abundant genera

    KAUST Repository

    Kiely, Patrick D.; Call, Douglas F.; Yates, Matthew D.; Regan, John M.; Logan, Bruce E.

    2010-01-01

    Microbial fuel cell (MFC) anode communities often reveal just a few genera, but it is not known to what extent less abundant bacteria could be important for improving performance. We examined the microbial community in an MFC fed with formic acid for more than 1 year and determined using 16S rRNA gene cloning and fluorescent in situ hybridization that members of the Paracoccus genus comprised most (~30%) of the anode community. A Paracoccus isolate obtained from this biofilm (Paracoccus denitrificans strain PS-1) produced only 5.6 mW/m², whereas the original mixed culture produced up to 10 mW/m². Despite the absence of any Shewanella species in the clone library, we isolated a strain of Shewanella putrefaciens (strain PS-2) from the same biofilm capable of producing a higher power density (17.4 mW/m²) than the mixed culture, although voltage generation was variable. Our results suggest that the numerical abundance of microorganisms in biofilms cannot be assumed a priori to correlate to capacities of these predominant species for high-power production. Detailed screening of bacterial biofilms may therefore be needed to identify important strains capable of high-power generation for specific substrates. © 2010 Springer-Verlag.

  20. Anodic biofilms in microbial fuel cells harbor low numbers of higher-power-producing bacteria than abundant genera

    KAUST Repository

    Kiely, Patrick D.

    2010-07-15

    Microbial fuel cell (MFC) anode communities often reveal just a few genera, but it is not known to what extent less abundant bacteria could be important for improving performance. We examined the microbial community in an MFC fed with formic acid for more than 1 year and determined using 16S rRNA gene cloning and fluorescent in situ hybridization that members of the Paracoccus genus comprised most (~30%) of the anode community. A Paracoccus isolate obtained from this biofilm (Paracoccus denitrificans strain PS-1) produced only 5.6 mW/m², whereas the original mixed culture produced up to 10 mW/m². Despite the absence of any Shewanella species in the clone library, we isolated a strain of Shewanella putrefaciens (strain PS-2) from the same biofilm capable of producing a higher power density (17.4 mW/m²) than the mixed culture, although voltage generation was variable. Our results suggest that the numerical abundance of microorganisms in biofilms cannot be assumed a priori to correlate to capacities of these predominant species for high-power production. Detailed screening of bacterial biofilms may therefore be needed to identify important strains capable of high-power generation for specific substrates. © 2010 Springer-Verlag.

  1. Analysis of Expediency to Apply LCL Model with Source of Higher Harmonics of Current While Investigating Resonance Condition of Power Supply Network

    OpenAIRE

    M. Pavlovsky; A. Shimansky; Z. Fialkovsky

    2004-01-01

    The paper considers a power system model of a plant with one capacitor bank, installed to improve the power factor, and one current source of higher harmonics. The laboratory research results for this system and the practical application of the proposed model are given in the paper.

  2. Analysis of Expediency to Apply LCL Model with Source of Higher Harmonics of Current While Investigating Resonance Condition of Power Supply Network

    Directory of Open Access Journals (Sweden)

    M. Pavlovsky

    2004-01-01

    Full Text Available The paper considers a power system model of a plant with one capacitor bank, installed to improve the power factor, and one current source of higher harmonics. The laboratory research results for this system and the practical application of the proposed model are given in the paper.

  3. Measurements of higher-order mode damping in the PEP-II low-power test cavity

    International Nuclear Information System (INIS)

    Rimmer, R.A.; Goldberg, D.A.

    1993-05-01

    The paper describes the results of measurements of the Higher-Order Mode (HOM) spectrum of the low-power test model of the PEP-II RF cavity and the reduction in the Q's of the modes achieved by the addition of dedicated damping waveguides. All the longitudinal (monopole) and deflecting (dipole) modes below the beam pipe cut-off are identified by comparing their measured frequencies and field distributions with calculations using the URMEL code. Field configurations were determined using a perturbation method with an automated bead positioning system. The loaded Q's agree well with the calculated values reported previously, and the strongest HOMs are damped by more than three orders of magnitude. This is sufficient to reduce the coupled-bunch growth rates to within the capability of a reasonable feedback system. A high power test cavity will now be built to validate the thermal design at the 150 kW nominal operating level, as described elsewhere at this conference

  4. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong … approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion

  5. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  6. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  7. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  8. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Jul 3, 2014 ... Role of positive definite matrices: in diffusion tensor imaging, 3 × 3 pd matrices model water flow at each voxel of a brain scan; in elasticity, 6 × 6 pd matrices model stress tensors; in machine learning, n × n pd matrices occur as kernel matrices.

  9. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.

  10. JAERI femtosecond pulsed and tens-kilowatts average-powered free-electron lasers and their applications of large-scaled non-thermal manufacturing in nuclear energy industry

    International Nuclear Information System (INIS)

    Minehara, Eisuke J.

    2004-01-01

    We first reported a novel method in which femtosecond (fs) lasers (a low-average-power Ti:Sapphire laser, the JAERI high-average-power free-electron laser, excimer lasers, fiber lasers and so on) could peel off and remove two of the origins of stress corrosion cracking (SCC), namely the cold-worked (CW), highly crack-susceptible material and the residual tensile stress in the hardened surface, in low-carbon stainless steel cubic samples for nuclear reactor internals, as a proof-of-principle experiment; the third origin, the corrosive environment, was not addressed. Because it has been successfully demonstrated that fs lasers can clearly remove these two SCC origins, cold-worked SCC could be prevented in many fields in the near future. SCC is a well-known phenomenon in modern materials science, technology and industry, defined as an insidious failure mechanism caused simultaneously by a corrosive environment, a crack-susceptible material and surface residual tensile stress. There are many well-known examples of SCC damaging stainless steels, aluminum alloys, brass and other alloys. In many boiling light-water reactor (BWR) nuclear power plants and a few pressurized light-water reactor (PWR) plants in Japan and the world up to now, a large number of deep and wide cracks have recently been found in reactor-grade low-carbon stainless steel components (core shroud, control-blade handles, recirculating pipes, sheaths and other internals in the reactor vessel) under very low or no applied stress. These cracks are thought to be initiated from crack-susceptible features such as very small cracks, pinholes and concentrated dislocation defects in the hardened surface, originating from cold-work machining processes in reactor manufacturing factories, and to penetrate insidiously and widely into the bulk under the residual tensile stress and corrosive environment, and under no

  11. Development of a low-cost biogas filtration system to achieve higher-power efficient AC generator

    Science.gov (United States)

    Mojica, Edison E.; Ardaniel, Ar-Ar S.; Leguid, Jeanlou G.; Loyola, Andrea T.

    2018-02-01

    The paper focuses on the development of a low-cost biogas filtration system for an alternating-current generator, aimed at higher efficiency in terms of power production. Raw biogas comprises 57% combustible element and 43% non-combustible elements, the latter containing carbon dioxide (36%), water vapor (5%), hydrogen sulfide (0.5%), nitrogen (1%), oxygen (0 - 2%), and ammonia (0 - 1%). The filtration system consists of six stages: stage 1 is a water scrubber filter intended to remove carbon dioxide and traces of hydrogen sulfide; stage 2 is a silica gel filter intended to reduce water vapor; stage 3 is an iron sponge filter intended to remove the remaining hydrogen sulfide; stage 4 is a sodium hydroxide solution filter intended to remove the elemental sulfur formed during the interaction of the hydrogen sulfide and the iron sponge and to further remove carbon dioxide; stage 5 is a silica gel filter intended to further eliminate the water vapor gained in stage 4; and stage 6 is an activated carbon filter intended to remove carbon dioxide. The filtration system lowered the non-combustible elements by 72% and thus increased the combustible element by 54.38%. The unfiltered biogas is capable of generating 16.3 kW while the filtered biogas is capable of generating 18.6 kW. The increase in methane concentration resulted in a 14.11% increase in the power output. The outcome was better engine performance in the generation of electricity.

  12. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong … approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.

  13. Power in Practice: Adult Education and the Struggle for Knowledge and Power in Society. The Jossey-Bass Higher and Adult Education Series.

    Science.gov (United States)

    Cervero, Ronald M.; Wilson, Arthur L.

    This book contains 14 papers on adult education and the struggle for knowledge and power in society. The following papers are included: "At the Heart of Practice: The Struggle for Knowledge and Power" (Ronald M. Cervero, Arthur L. Wilson); "The Power of Economic Globalization: Deskilling Immigrant Women through Training"…

  14. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve which comprises a massive bulk of 'middle-class' values, and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super large values; and (iv) "Average is Over" indeed.

  15. April 25, 2003, FY2003 Progress Summary and FY2002 Program Plan, Statement of Work and Deliverables for Development of High Average Power Diode-Pumped Solid State Lasers,and Complementary Technologies, for Applications in Energy and Defense

    International Nuclear Information System (INIS)

    Meier, W; Bibeau, C

    2005-01-01

    The High Average Power Laser Program (HAPL) is a multi-institutional, synergistic effort to develop inertial fusion energy (IFE). This program is building a physics and technology base to complement the laser-fusion science being pursued by DOE Defense programs in support of Stockpile Stewardship. The primary institutions responsible for overseeing and coordinating the research activities are the Naval Research Laboratory (NRL) and Lawrence Livermore National Laboratory (LLNL). The current LLNL proposal is a companion document to the one submitted by NRL, for which the driver development element is focused on the krypton fluoride excimer laser option. The NRL and LLNL proposals also jointly pursue complementary activities with the associated rep-rated laser technologies relating to target fabrication, target injection, final optics, fusion chamber, target physics, materials and power plant economics. This proposal requests continued funding in FY03 to support LLNL in its program to build a 1 kW, 100 J, diode-pumped, crystalline laser, as well as research into high gain fusion target design, fusion chamber issues, and survivability of the final optic element. These technologies are crucial to the feasibility of inertial fusion energy power plants and also have relevance in rep-rated stewardship experiments. The HAPL Program pursues technologies needed for laser-driven IFE. System level considerations indicate that a rep-rated laser technology will be needed, operating at 5-10 Hz. Since a total energy of ∼2 MJ will ultimately be required to achieve suitable target gain with direct drive targets, the architecture must be scaleable. The Mercury Laser is intended to offer such an architecture. Mercury is a solid state laser that incorporates diodes, crystals and gas cooling technologies

  16. The intermediates take it all: asymptotics of higher criticism statistics and a powerful alternative based on equal local levels.

    Science.gov (United States)

    Gontscharuk, Veronika; Landwehr, Sandra; Finner, Helmut

    2015-01-01

    The higher criticism (HC) statistic, which can be seen as a normalized version of the famous Kolmogorov-Smirnov statistic, has a long history, dating back to the mid seventies. Originally, HC statistics were used in connection with goodness of fit (GOF) tests but they recently gained some attention in the context of testing the global null hypothesis in high dimensional data. The continuing interest for HC seems to be inspired by a series of nice asymptotic properties related to this statistic. For example, unlike Kolmogorov-Smirnov tests, GOF tests based on the HC statistic are known to be asymptotically sensitive in the moderate tails, hence it is favorably applied for detecting the presence of signals in sparse mixture models. However, some questions around the asymptotic behavior of the HC statistic are still open. We focus on two of them, namely, why a specific intermediate range is crucial for GOF tests based on the HC statistic and why the convergence of the HC distribution to the limiting one is extremely slow. Moreover, the inconsistency in the asymptotic and finite behavior of the HC statistic prompts us to provide a new HC test that has better finite properties than the original HC test while showing the same asymptotics. This test is motivated by the asymptotic behavior of the so-called local levels related to the original HC test. By means of numerical calculations and simulations we show that the new HC test is typically more powerful than the original HC test in normal mixture models. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
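    A minimal sketch of the classical HC statistic computed from p-values (Donoho-Jin style normalization, which is an assumption here; the equal-local-levels test proposed in the paper is not reproduced):

```python
# Sketch of the classical higher criticism statistic computed from n p-values.
# Large HC values indicate a sparse set of unusually small p-values.
import numpy as np

def higher_criticism(pvals: np.ndarray, alpha0: float = 0.5) -> float:
    """HC = max over the first alpha0*n order statistics of
       sqrt(n) * (i/n - p_(i)) / sqrt(p_(i) * (1 - p_(i)))."""
    n = pvals.size
    p_sorted = np.clip(np.sort(pvals), 1e-12, 1 - 1e-12)   # guard against 0 and 1
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p_sorted) / np.sqrt(p_sorted * (1 - p_sorted))
    k = max(1, int(alpha0 * n))
    return hc[:k].max()

rng = np.random.default_rng(0)
null_p = rng.uniform(size=10_000)                       # global null: uniform p-values
sparse_p = null_p.copy()
sparse_p[:50] = rng.uniform(0, 1e-4, size=50)           # a few very small p-values
print(higher_criticism(null_p), higher_criticism(sparse_p))  # the latter is much larger
```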

  17. Average nuclear surface properties

    International Nuclear Information System (INIS)

    Groote, H. von.

    1979-01-01

    The definition of the nuclear surface energy is discussed for semi-infinite matter. The definition is also extended to the case where there is a neutron gas, instead of vacuum, on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The interaction parameters of this model were determined by a least-squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry, past the neutron-drip line, up to the point where the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)

  18. Power Play: The Dynamics of Power and Interpersonal Communication in Higher Education as Reflected in David Mamet's "Oleanna"

    Science.gov (United States)

    Chiaramonte, Peter

    2014-01-01

    David Mamet's play "Oleanna" may be infamous for reasons that do not do justice to the play's real accomplishments. One reason for the controversy is the author's apparent focus on sexual harassment. The play is not about sexual harassment. It is about power. And in particular the power of language to shape relationships…

  19. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Science.gov (United States)

    2010-04-01

    ... Title 18, Conservation of Power and Water Resources (2010-04-01): Average System Cost... Regulatory Commission, Department of Energy, Regulations for Federal Power Marketing Administrations: Average System Cost Methodology for Sales from Utilities to Bonneville Power Administration under Northwest Power...

  20. Fock representation of the renormalized higher powers of White noise and the centreless Virasoro (or Witt)-Zamolodchikov-w∞*-Lie algebra

    International Nuclear Information System (INIS)

    Accardi, Luigi; Boukas, Andreas

    2008-01-01

    The identification of the *-Lie algebra of the renormalized higher powers of White noise (RHPWN) with the analytic continuation of the second quantized centreless Virasoro (or Witt)-Zamolodchikov-w∞ *-Lie algebra of conformal field theory and high-energy physics was recently established on the basis of previously obtained results. In the present paper, we show how the RHPWN Fock kernels must be truncated in order to be positive semi-definite and we obtain a Fock representation of the two algebras. We show that the truncated renormalized higher powers of White noise (TRHPWN) Fock spaces of order ≥2 host the continuous binomial and beta processes.

  1. Low-peak-to-average power ratio and low-complexity asymmetrically clipped optical orthogonal frequency-division multiplexing uplink transmission scheme for long-reach passive optical network.

    Science.gov (United States)

    Zhou, Ji; Qiao, Yaojun

    2015-09-01

    In this Letter, we propose a discrete Hartley transform (DHT)-spread asymmetrically clipped optical orthogonal frequency-division multiplexing (DHT-S-ACO-OFDM) uplink transmission scheme in which the multiplexing/demultiplexing process also uses the DHT algorithm. By designing a simple encoding structure, the computational complexity of the transmitter can be reduced from O(N log₂ N) to O(N). At the probability of 10⁻³, the peak-to-average power ratio (PAPR) of 2-ary pulse amplitude modulation (2-PAM)-modulated DHT-S-ACO-OFDM is approximately 9.7 dB lower than that of 2-PAM-modulated conventional ACO-OFDM. To verify the feasibility of the proposed scheme, a 4-Gbit/s DHT-S-ACO-OFDM uplink transmission scheme with a 1:64-way split has been experimentally implemented using 100-km standard single-mode fiber (SSMF) for a long-reach passive optical network (LR-PON).
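    As a rough illustration of how a PAPR figure quoted "at the probability of 10⁻³" is read off a CCDF, a sketch for a generic real-valued multicarrier signal; the ACO-OFDM clipping and DHT spreading of the paper are not reproduced:

```python
# Sketch: per-symbol PAPR of a generic real-valued multicarrier (Hermitian-symmetric
# OFDM) signal, and the PAPR threshold exceeded with probability 1e-3 (a point on the
# complementary CDF). Parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n_sym, n_sc = 10_000, 256                    # symbols and data subcarriers

papr_db = np.empty(n_sym)
for s in range(n_sym):
    bits = rng.integers(0, 2, n_sc) * 2 - 1                 # 2-PAM data on each subcarrier
    X = np.concatenate(([0], bits, [0], bits[::-1]))        # Hermitian symmetry -> real x
    x = np.fft.ifft(X).real
    papr_db[s] = 10 * np.log10(np.max(x ** 2) / np.mean(x ** 2))

# PAPR value whose complementary CDF equals 1e-3, i.e. the (1 - 1e-3) quantile
papr_at_1e3 = np.quantile(papr_db, 1 - 1e-3)
print(f"PAPR exceeded with probability 1e-3: ~{papr_at_1e3:.1f} dB")
```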

  2. Re/Thinking Practices of Power: The Discursive Framing of Leadership in "The Chronicle of Higher Education"

    Science.gov (United States)

    Allan, Elizabeth J.; Gordon, Suzanne P.; Iverson, Susan V.

    2006-01-01

    This article examines how discourses of leadership reflect and produce particular perceptions about leaders and leadership in higher education. An analysis of 103 articles published by "The Chronicle of Higher Education" between 2002 and 2003 reveals four predominant discourses shaping images of leaders: autonomy, relatedness, masculinity, and…

  3. Soft Power, University Rankings and Knowledge Production: Distinctions between Hegemony and Self-Determination in Higher Education

    Science.gov (United States)

    Lo, William Yat Wai

    2011-01-01

    The purpose of this article is to analyse the nature of the global hegemonies in higher education. While anti-colonial thinkers describe the dominance of the Western paradigm as an oppression of indigenous culture and knowledge and as neo-colonialism in higher education, their arguments lead to such questions as how much self-determination do…

  4. Identity Matters: The Centrality of "Conferred Identity" as Symbolic Power and Social Capital in Higher Education Mobility

    Science.gov (United States)

    Holt, Brenda

    2012-01-01

    Although any "choice" young people make about higher education incorporates a subtle interplay of individual agency, circumstance and social structure, the centrality of identity in such life choices for rural young people cannot be underestimated. Since mobility is an ontological absolute for most rural young people accessing…

  5. New Higher Education President Integration: Change and Resistance Viewed through Social Power Bases and a Change Model Lens

    Science.gov (United States)

    Gearin, Christopher A.

    2017-01-01

    This study investigates how new presidents of higher education institutions struggle to understand their organisations, paying special attention to campus resistance, and how new presidents manage institutional dynamics and expectations. A qualitative study using a phenomenological approach is conducted with 11 single-campus presidents of…

  6. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    Science.gov (United States)

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

    In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
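    A minimal sketch of a migration-time-adaptive moving average; the linear window-growth schedule and all numbers are assumptions, not the authors' exact rule:

```python
# Sketch of a migration-time-adaptive moving average: the smoothing window grows with
# migration time, so slowly migrating (low-frequency) peaks are averaged over more
# points than early, sharp peaks. The linear window schedule is an assumption.
import numpy as np

def adaptive_moving_average(signal: np.ndarray, fs: float,
                            w0_s: float = 0.5, growth: float = 0.02) -> np.ndarray:
    """w0_s: window width (s) at t = 0; growth: extra seconds of window per second."""
    out = np.empty_like(signal, dtype=float)
    for i in range(signal.size):
        t = i / fs
        half = max(1, int(0.5 * (w0_s + growth * t) * fs))   # half-window in samples
        lo, hi = max(0, i - half), min(signal.size, i + half + 1)
        out[i] = signal[lo:hi].mean()
    return out

# synthetic electropherogram: an early sharp peak, a late broad peak, plus noise
fs = 25.0                                     # Hz, matching the faster acquisition rate
t = np.arange(0, 600, 1 / fs)                 # 10-minute run
trace = (np.exp(-0.5 * ((t - 60) / 1.5) ** 2)
         + 0.6 * np.exp(-0.5 * ((t - 480) / 8.0) ** 2)
         + 0.05 * np.random.randn(t.size))
smoothed = adaptive_moving_average(trace, fs)
```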

  7. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
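    A small numerical check of the stated identity, with arbitrary illustrative weights: the difference between the two weighted averages equals the covariance of the variable with the ratio of the weighting functions (taken under the first weighting) divided by the mean of that ratio:

```python
# Numerical check of the identity quoted above: for weights w1, w2 and ratio r = w2/w1,
# A2 - A1 = Cov_w1(x, r) / E_w1(r), where A_k is the w_k-weighted average of x.
# The data below are arbitrary illustrative numbers.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(70, 10, size=1000)          # e.g. age-specific values (hypothetical)
w1 = rng.uniform(0.5, 1.5, size=1000)      # one weighting function (age structure)
w2 = rng.uniform(0.5, 1.5, size=1000)      # an alternative weighting function

A1 = np.average(x, weights=w1)
A2 = np.average(x, weights=w2)

r = w2 / w1
E1_r = np.average(r, weights=w1)
Cov1_xr = np.average((x - A1) * (r - E1_r), weights=w1)

print(A2 - A1, Cov1_xr / E1_r)             # the two numbers agree
```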

  8. Power

    DEFF Research Database (Denmark)

    Elmholdt, Claus Westergård; Fogsgaard, Morten

    2016-01-01

    and creativity suggests that when managers give people the opportunity to gain power and explicate that there is reason to be more creative, people will show a boost in creative behaviour. Moreover, this process works best in unstable power hierarchies, which implies that power is treated as a negotiable… It is thus a central point that power is not necessarily something that breaks down and represses. On the contrary, an explicit focus on the dynamics of power in relation to creativity can be productive for the organisation. Our main focus is to elaborate the implications of this for practice and theory…

  9. The power of management in medical services. Can we manage better for higher quality and more productive medical services?

    Directory of Open Access Journals (Sweden)

    Magdalena BARBU

    2010-06-01

    Full Text Available Medical services are the most important services of all, since we all depend on them. Their quality and productivity can assure a healthy nation and therefore good economic results. The offer of medical services depends on medical personnel and, more than this, on management in the medical field, since any resource not managed well, or not managed at all, is only a lost one, regardless of its value. Management is therefore the key, the "how to" method of obtaining the desired result. The same approach is applied in our study in order to reach more productive medical services that prove high quality to all patients. We need to use the full force of management tools in order to reach our goal: accessible medical services full of quality. The current worldwide crisis makes us think that after jobs and food, even medical services (also a basic need, after all) can become a "luxury", although this should never happen. Therefore we must do whatever is needed to improve the way medical organizations are run, so that the quality of their medical services becomes better and better and productivity reaches a higher level. Medical management should have as a goal making it possible for patients to solve their health problems as soon as possible and as well as possible.

  10. Deep tissue optical imaging of upconverting nanoparticles enabled by exploiting higher intrinsic quantum yield through use of millisecond single pulse excitation with high peak power

    DEFF Research Database (Denmark)

    Liu, Haichun; Xu, Can T.; Dumlupinar, Gökhan

    2013-01-01

    We have accomplished deep tissue optical imaging of upconverting nanoparticles at 800 nm, using millisecond single pulse excitation with high peak power. This is achieved by carefully choosing the pulse parameters, derived from time-resolved rate-equation analysis, which result in the higher intrinsic quantum yield that is utilized by upconverting nanoparticles for generating this near infrared upconversion emission. The pulsed excitation approach thus promises previously unreachable imaging depths and shorter data acquisition times compared with continuous wave excitation, while simultaneously keeping … therapy and remote activation of biomolecules in deep tissues.

  11. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  12. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Full Text Available Calculations of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, are prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal to noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal to noise ratios, while for small local natural variability logarithmic averaging is often superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found on which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
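    A toy Monte Carlo illustrating the bias mechanism in the noise-free limit (an assumption; the retrieval machinery itself is not modelled): exponentiating the mean of the log-abundances yields a geometric mean that under-estimates the arithmetic mean when natural variability is large:

```python
# Toy Monte Carlo of the averaging bias discussed above: for a highly variable
# (lognormal) trace gas, averaging log-abundances and exponentiating (geometric mean)
# falls short of the arithmetic mean of the abundances themselves.
import numpy as np

rng = np.random.default_rng(3)
true_vmr = rng.lognormal(mean=np.log(5e-6), sigma=0.8, size=100_000)  # hypothetical gas

linear_mean = true_vmr.mean()                       # average the abundances directly
log_mean = np.exp(np.log(true_vmr).mean())          # average the log-retrievals, then exp

bias_percent = 100 * (log_mean - linear_mean) / linear_mean
print(f"logarithmic averaging biased by {bias_percent:.1f}% relative to the linear mean")
# for sigma = 0.8 the expected bias is roughly exp(-sigma**2 / 2) - 1, i.e. about -27%
```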

  13. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  14. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant dynamical effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis

  15. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  16. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example using ¹⁶⁸Er data. 19 figures, 2 tables

  17. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.

  18. Cost curves for implantation of small scale hydroelectric power plant project in function of the average annual energy production; Curvas de custo de implantacao de pequenos projetos hidreletricos em funcao da producao media anual de energia

    Energy Technology Data Exchange (ETDEWEB)

    Veja, Fausto Alfredo Canales; Mendes, Carlos Andre Bulhoes; Beluco, Alexandre

    2008-10-15

    Because of its maturity, small hydropower generation is one of the main energy sources to be considered for the electrification of areas far from the national grid. Once a site with hydropower potential is identified, technical and economic studies to assess its feasibility must be carried out. Cost curves are helpful tools in the appraisal of the economic feasibility of this type of project. This paper presents a method to determine initial cost curves as a function of the average energy production of the hydropower plant, using a set of parametric cost curves and the flow duration curve at the analyzed location. The method is illustrated using information from 18 pre-feasibility studies made in 2002 in the Central-Atlantic rural region of Nicaragua. (author)
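    A minimal sketch of the quantity the cost curves are parameterised by, average annual energy estimated from a flow duration curve; the head, efficiency, design flow and the synthetic duration curve are all hypothetical:

```python
# Sketch: average annual energy of a small hydro plant from a flow duration curve.
# All numbers (head, efficiency, design flow, the synthetic duration curve itself)
# are hypothetical and only illustrate the calculation.
import numpy as np

rho, g = 1000.0, 9.81            # water density (kg/m3), gravity (m/s2)
head = 45.0                      # gross head, m
efficiency = 0.80                # overall turbine/generator efficiency
q_design = 1.2                   # design (maximum usable) flow, m3/s

# flow duration curve: flow equalled or exceeded p percent of the time (synthetic)
p = np.linspace(0, 100, 101)
q = 3.0 * np.exp(-0.03 * p)      # m3/s, decaying from ~3.0 at p=0 to ~0.15 at p=100

q_usable = np.minimum(q, q_design)                        # turbine cannot swallow more
power_kw = efficiency * rho * g * head * q_usable / 1000  # instantaneous power, kW

# integrate over the duration curve (fraction of the year) and convert to MWh/year
avg_energy_mwh = np.trapz(power_kw, p / 100) * 8760 / 1000
print(f"average annual energy ~ {avg_energy_mwh:.0f} MWh/year")
```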

  19. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion

  20. Increase of EEG spectral theta power indicates higher risk of the development of severe cognitive decline in Parkinson’s disease after 3 years

    Directory of Open Access Journals (Sweden)

    Vitalii V Cozac

    2016-11-01

    Full Text Available Objective: We investigated quantitative electroencephalography (qEEG) and clinical parameters as potential risk factors of severe cognitive decline in Parkinson’s disease. Methods: We prospectively investigated 37 patients with Parkinson’s disease at baseline and follow-up (after 3 years). Patients had no severe cognitive impairment at baseline. We used a summary score of cognitive tests as the outcome at follow-up. At baseline we assessed motor, cognitive, and psychiatric factors; qEEG variables (global relative median power spectra) were obtained by fully automated processing of high-resolution EEG (256 channels). We used linear regression models with calculation of the explained variance to evaluate the relation of baseline parameters with cognitive deterioration. Results: The following baseline parameters significantly predicted severe cognitive decline: global relative median power in the theta band (4-8 Hz), and cognitive task performance in executive functions and working memory. Conclusions: The combination of neurocognitive tests and qEEG improves identification of patients with a higher risk of cognitive decline in PD.
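    One plausible, simplified reading of "global relative median power" in the theta band (Welch spectra per channel, band power relative to an assumed 1-30 Hz normalisation band, median across channels); the authors' fully automated 256-channel pipeline is certainly more involved, and the data here are synthetic:

```python
# Simplified sketch (not the authors' pipeline): relative theta power per channel from
# Welch spectra, summarised by the median across channels. Synthetic EEG is used.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(4)
fs, n_ch = 250.0, 32
n_s = int(60 * fs)                                           # 60 s of synthetic EEG
eeg = rng.standard_normal((n_ch, n_s))
eeg += 0.5 * np.sin(2 * np.pi * 6.0 * np.arange(n_s) / fs)   # add a 6 Hz (theta) rhythm

freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))          # psd shape: (n_ch, n_freqs)

theta = (freqs >= 4) & (freqs < 8)
broad = (freqs >= 1) & (freqs < 30)                          # assumed normalisation band
rel_theta = psd[:, theta].sum(axis=1) / psd[:, broad].sum(axis=1)

print(f"global relative theta power (median over channels): {np.median(rel_theta):.2f}")
```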

  1. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
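
    A minimal sketch of the estimator analysed above: successive periodograms of a signal are exponentially averaged into a PSD estimate. The block length and averaging constant are illustrative choices, not values from the paper.

      # Sketch: exponentially averaged periodogram PSD estimate (illustrative parameters).
      import numpy as np

      def exp_avg_psd(x, block_len, alpha):
          """Split x into blocks, form periodograms, and exponentially average them;
          alpha in (0, 1) plays the role of the inverse time constant."""
          n_blocks = len(x) // block_len
          psd = None
          for k in range(n_blocks):
              block = x[k * block_len:(k + 1) * block_len]
              periodogram = np.abs(np.fft.rfft(block)) ** 2 / block_len
              psd = periodogram if psd is None else (1 - alpha) * psd + alpha * periodogram
          return psd

      rng = np.random.default_rng(1)
      x = rng.standard_normal(64 * 256)          # white noise, so a flat PSD is expected
      print(exp_avg_psd(x, block_len=256, alpha=0.1)[:5])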

  2. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)

  3. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  4. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs are graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
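
    As a hedged illustration of the recursion family discussed above, the sketch below runs a first-order ARMA graph filter to steady state on a toy graph; the shifted-Laplacian operator and the coefficients psi and phi are illustrative assumptions, not the paper's design procedure.

      # Sketch: first-order ARMA graph filter iterated to steady state (toy graph).
      import numpy as np

      def arma1_graph_filter(M, x, psi, phi, n_iter=200):
          """Iterate y <- psi * M @ y + phi * x; for |psi| * ||M|| < 1 this converges
          to the rational graph frequency response phi / (1 - psi * lambda)."""
          y = np.zeros_like(x)
          for _ in range(n_iter):
              y = psi * (M @ y) + phi * x
          return y

      # 4-node cycle; the shifted normalized Laplacian has spectrum in [-1, 1].
      A = np.array([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]], dtype=float)
      D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
      M = (np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt) - np.eye(4)
      x = np.array([1.0, 0.0, 0.0, 0.0])
      print(arma1_graph_filter(M, x, psi=0.4, phi=0.6))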

  5. Power generation statistics

    International Nuclear Information System (INIS)

    Kangas, H.

    2001-01-01

    The frost in February increased the power demand in Finland significantly. The total power consumption in Finland during January-February 2001 was about 4% higher than a year before. In January 2001 the average temperature in Finland was only about -4 deg C, which is nearly 2 degrees higher than in 2000 and about 6 degrees higher than the long-term average. Power demand in January was slightly less than 7.9 TWh, being about 0.5% less than in 2000. The power consumption in Finland during the past 12 months exceeded 79.3 TWh, which is less than 2% higher than during the previous 12 months. In February 2001 the average temperature was -10 deg C, which was about 5 degrees lower than in February 2000. Because of this the power consumption in February 2001 increased by 5%. Power consumption in February was 7.5 TWh. The maximum hourly output of power plants in Finland was 13310 MW. Power consumption of Finnish households in February 2001 was about 10% higher than in February 2000, and in industry the increase was nearly zero. The utilization rate in the forest industry in February 2001 decreased by 5% from the value of February 2000, to only about 89%. The power consumption of the past 12 months (Feb. 2000 - Feb. 2001) was 79.6 TWh. Generation of hydroelectric power in Finland during January - February 2001 was 10% higher than a year before. The generation of hydroelectric power in Jan. - Feb. 2001 was nearly 2.7 TWh, corresponding to 17% of the power demand in Finland. The output of hydroelectric power in Finland during the past 12 months was 14.7 TWh. The increase from the previous 12 months was 17%, corresponding to over 18% of the power demand in Finland. Wind power generation in Jan. - Feb. 2001 slightly exceeded 10 GWh, while in 2000 the corresponding output was 20 GWh. The degree of utilization of Finnish nuclear power plants in Jan. - Feb. 2001 was high. The output of these plants was 3.8 TWh, being about 1% less than in Jan. - Feb. 2000. The main cause for the

  6. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff⁰ ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models.

  7. Zooming into local active galactic nuclei: the power of combining SDSS-IV MaNGA with higher resolution integral field unit observations

    Science.gov (United States)

    Wylezalek, Dominika; Schnorr Müller, Allan; Zakamska, Nadia L.; Storchi-Bergmann, Thaisa; Greene, Jenny E.; Müller-Sánchez, Francisco; Kelly, Michael; Liu, Guilin; Law, David R.; Barrera-Ballesteros, Jorge K.; Riffel, Rogemar A.; Thomas, Daniel

    2017-05-01

    Ionized gas outflows driven by active galactic nuclei (AGN) are ubiquitous in high-luminosity AGN with outflow speeds apparently correlated with the total bolometric luminosity of the AGN. This empirical relation and theoretical work suggest that in the range Lbol ~ 10⁴³-10⁴⁵ erg s⁻¹ there must exist a threshold luminosity above which the AGN becomes powerful enough to launch winds that will be able to escape the galaxy potential. In this paper, we present pilot observations of two AGN in this transitional range that were taken with the Gemini North Multi-Object Spectrograph integral field unit (IFU). Both sources have also previously been observed within the Sloan Digital Sky Survey-IV (SDSS) Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey. While the MaNGA IFU maps probe the gas fields on galaxy-wide scales and show that some regions are dominated by AGN ionization, the new Gemini IFU data zoom into the centre with four times better spatial resolution. In the object with the lower Lbol we find evidence of a young or stalled biconical AGN-driven outflow where none was obvious at the MaNGA resolution. In the object with the higher Lbol we trace the large-scale biconical outflow into the nuclear region and connect the outflow from small to large scales. These observations suggest that AGN luminosity and galaxy potential are crucial in shaping wind launching and propagation in low-luminosity AGN. The transition from small and young outflows to galaxy-wide feedback can only be understood by combining large-scale IFU data that trace the galaxy velocity field with higher resolution, small-scale IFU maps.

  8. Topological quantization of ensemble averages

    International Nuclear Information System (INIS)

    Prodan, Emil

    2009-01-01

    We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states

  9. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specific harmonics, which may be caused by faults such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to different extents. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
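
    For contrast with the flexible variant described above, conventional time domain averaging simply segments the signal at a known period and averages the segments. The sketch below illustrates that baseline (not the FTDA algorithm itself), using synthetic data.

      # Sketch: conventional time domain (synchronous) averaging as the TDA baseline.
      import numpy as np

      def time_domain_average(x, period_samples):
          """Average consecutive segments of length period_samples; periodic
          components reinforce while asynchronous noise averages toward zero."""
          n_periods = len(x) // period_samples
          segments = x[:n_periods * period_samples].reshape(n_periods, period_samples)
          return segments.mean(axis=0)

      rng = np.random.default_rng(2)
      period = 128
      t = np.arange(period * 200)
      signal = np.sin(2 * np.pi * t / period) + rng.standard_normal(t.size)
      print(np.round(time_domain_average(signal, period)[:4], 3))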

  10. The average Indian female nose.

    Science.gov (United States)

    Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh

    2011-12-01

    This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.

  11. Short pulse mid-infrared amplifier for high average power

    CSIR Research Space (South Africa)

    Botha, LR

    2006-09-01

    High pressure CO2 lasers are good candidates for amplifying picosecond mid infrared pulses. High pressure CO2 lasers are notorious for being unreliable and difficult to operate. In this paper a high pressure CO2 laser is presented based on well...

  12. Average Power and Brightness Scaling of Diamond Raman Lasers

    Science.gov (United States)

    2012-01-07

  13. Picosecond mid-infrared amplifier for high average power.

    CSIR Research Space (South Africa)

    Botha, LR

    2007-04-01

    High pressure CO2 lasers are good candidates for amplifying picosecond mid infrared pulses. High pressure CO2 lasers are notorious for being unreliable and difficult to operate. In this paper a high pressure CO2 laser is presented based on well...

  14. Significance of power average of sinusoidal and non-sinusoidal ...

    Indian Academy of Sciences (India)

    2016-06-08

  15. In Defence of International Comparative Studies. on the Analytical and Explanatory Power of the Nation State in International Comparative Higher Education Research

    Science.gov (United States)

    Kosmützky, Anna

    2015-01-01

    Higher education is undergoing a process of globalization and new realities of a globalized higher education world are emerging. Globalization also has a profound impact on higher education research. Global and transnational topics are theoretically and empirically elaborated and seem on the rise, whereas the international comparative outlook…

  16. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of two adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems under different turbulence regimes. To further enhance the ASE, we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that the ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The Coherent OWC system with ORA outperforms the other modulation schemes and can achieve an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture averaging effect we can achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with Coherent OWC modulation a favorable candidate for improving the ASE of FSO communication systems.

  17. Planning for Higher Education.

    Science.gov (United States)

    Lindstrom, Caj-Gunnar

    1984-01-01

    Decision processes for strategic planning for higher education institutions are outlined using these parameters: institutional goals and power structure, organizational climate, leadership attitudes, specific problem type, and problem-solving conditions and alternatives. (MSE)

  18. Higher efficiency, lower bonuses. Financial incentives for power from biomass according to EEG 2012; Mehr Effizienz, weniger Boni. Die Foerderung von Strom aus Biomasse nach dem EEG 2012

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, Dominik [Ecologic Institute, Berlin (Germany)

    2012-07-01

    The German parliament passed a total of eight new laws for the intended energy turnaround. Apart from changes in atomic law, the focus was on a complete amendment of the Renewables Act (EEG). The contribution outlines the new regulations for power generation from biomass from 2012. It indicates the changes from former regulations and describes the structural changes required for sustainable power supply from biomass, among others.

  19. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_uu, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_uuu···u ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  20. Chaotic Universe, Friedmannian on the average 2

    Energy Technology Data Exchange (ETDEWEB)

    Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij

    1980-11-01

    The cosmological solutions are found for the equations for correlators describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = n epsilon, the kind of solution depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor lies higher than the Friedmannian one, and lower at n > 0.26. The influence of long-wave fluctuation modes that remain finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of the short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. Restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.

  1. FEL system with homogeneous average output

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph

    2018-01-16

    A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest and accelerating the particles to full energy, resulting in distinct phase-energy correlations or chirps on each bunch train that are independently controlled by the choice of phase offset. The earlier trains will be more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M_56, which are selected to compress all three bunch trains at the FEL with higher order terms managed.

  2. Higher-order superclustering in the Ostriker explosion scenario I. Three-point correlation functions of clusters in the constant and power-law models

    International Nuclear Information System (INIS)

    Jing Yipeng.

    1989-08-01

    We study the three-point correlation functions ρ(r, u, v) of clusters in the two types of explosion models by numerical simulations. The clusters are identified as the ''knots'' where three shells intersect. The shells are assumed to have constant radii (the constant models) or power-law radius distributions (the power-law models). In both kinds of models, we find that ρ can be approximately expressed by the scaling form ρ = Q(ξ₁ξ₂ + ξ₂ξ₃ + ξ₃ξ₁), with Q about 1, which is consistent with the observations. More detailed studies of the r-, u- and v-dependences of Q show that Q remains constant in the constant models. In the power-law models, Q is independent of the shape parameters u and v, while it has some moderate r-dependence (variations with r within a factor of about 2). (author). 27 refs, 9 figs

  3. Safety Impact of Average Speed Control in the UK

    DEFF Research Database (Denmark)

    Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert

    2016-01-01

    of automatic speed control was point-based, but in recent years a potentially more effective alternative automatic speed control method has been introduced. This method is based upon records of drivers’ average travel speed over selected sections of the road and is normally called average speed control… in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in the number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control…

  4. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  5. Skeletal muscle ATP turnover and muscle fiber conduction velocity are elevated at higher muscle temperatures during maximal power output development in humans.

    Science.gov (United States)

    Gray, Stuart R; De Vito, Giuseppe; Nimmo, Myra A; Farina, Dario; Ferguson, Richard A

    2006-02-01

    The effect of temperature on skeletal muscle ATP turnover and muscle fiber conduction velocity (MFCV) was studied during maximal power output development in humans. Eight male subjects performed a 6-s maximal sprint on a mechanically braked cycle ergometer under conditions of normal (N) and elevated muscle temperature (ET). Muscle temperature was passively elevated through the combination of hot water immersion and electric blankets. Anaerobic ATP turnover was calculated from analysis of muscle biopsies obtained before and immediately after exercise. MFCV was measured during exercise using surface electromyography. Preexercise muscle temperature was 34.2 degrees C (SD 0.6) in N and 37.5 degrees C (SD 0.6) in ET. During ET, the rate of ATP turnover for phosphocreatine utilization [temperature coefficient (Q10) = 3.8], glycolysis (Q10 = 1.7), and total anaerobic ATP turnover [Q10 = 2.7; 10.8 (SD 1.9) vs. 14.6 mmol·kg⁻¹ (dry mass)·s⁻¹ (SD 2.3)] were greater than during N (P < 0.05). MFCV was also greater in ET than in N [3.79 (SD 0.47) to 5.55 m/s (SD 0.72)]. Maximal power output (Q10 = 2.2) and pedal rate (Q10 = 1.6) were greater in ET compared with N (P < 0.05). The Q10 of maximal and mean power were correlated (P < 0.05; R = 0.82 and 0.85, respectively) with the percentage of myosin heavy chain type IIA. The greater power output obtained with passive heating was achieved through an elevated rate of anaerobic ATP turnover and MFCV, possibly due to a greater effect of temperature on power production of fibers, with a predominance of myosin heavy chain IIA at the contraction frequencies reached.
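
    The temperature coefficients quoted above follow the usual Q10 definition, Q10 = (R2/R1)^(10/(T2-T1)). A minimal sketch of that calculation, using the reported group means purely as illustrative inputs:

      # Sketch: Q10 temperature coefficient from rates at two muscle temperatures.
      def q10(rate_cold, rate_warm, t_cold_c, t_warm_c):
          """Q10 = (rate_warm / rate_cold) ** (10 / (t_warm_c - t_cold_c))."""
          return (rate_warm / rate_cold) ** (10.0 / (t_warm_c - t_cold_c))

      # Using group means only; the result (~2.5) is of the same order as, but need
      # not equal, the reported Q10 of 2.7 (which may be computed per subject).
      print(round(q10(10.8, 14.6, 34.2, 37.5), 1))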

  6. Nuclear Power Prospects

    International Nuclear Information System (INIS)

    Cintra do Prado, L.

    1966-01-01

    The present trend is to construct larger plants: the average power of the plants under construction at present, including prototypes, is 300 MW(e), i.e. three times higher than in the case of plants already in operation. Examples of new large-scale plants are: (a) Wylfa, Anglesey, United Kingdom - scheduled power of 1180 MW(e) (800 MW to be installed by 1967), to be completed in 1968; (b) 'Dungeness B', United Kingdom - scheduled power of 1200 MW(e); (c) second unit for the United States Dresden power plant - scheduled power of 715 MW(e) minimum to almost 800 MW(e). Nuclear plants on the whole serve the same purpose as conventional thermal plants

  7. Higher Education

    African Journals Online (AJOL)

    Kunle Amuwo: Higher Education Transformation: A Paradigm Shift in South Africa?

  8. Higher Education

    Science.gov (United States)

  9. The KhoeSan Early Learning Center Pilot Project: Negotiating Power and Possibility in a South African Institute of Higher Learning

    Science.gov (United States)

    De Wet, Priscilla

    2011-01-01

    As we search for a new paradigm in post-apartheid South Africa, the knowledge base and worldview of the KhoeSan first Indigenous peoples is largely missing. The South African government has established various mechanisms as agents for social change. Institutions of higher learning have implemented transformation programs. KhoeSan peoples, however,…

  10. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  11. 18 CFR 301.5 - Changes in Average System Cost methodology.

    Science.gov (United States)

    2010-04-01

    ... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE... customers, or from three-quarters of Bonneville's direct-service industrial customers may initiate a...

  12. Higher Education.

    Science.gov (United States)

    Hendrickson, Robert M.

    This chapter reports 1982 cases involving aspects of higher education. Interesting cases noted dealt with the federal government's authority to regulate state employees' retirement and raised the questions of whether Title IX covers employment, whether financial aid makes a college a program under Title IX, and whether sex segregated mortality…

  13. Lateral dispersion coefficients as functions of averaging time

    International Nuclear Information System (INIS)

    Sheih, C.M.

    1980-01-01

    Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations at other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are not intended to derive a set of specific criteria but to demonstrate the need to discriminate between the various processes in studies of plume dispersion
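
    The power-law formula referred to above adjusts concentrations between averaging times. A hedged sketch of that adjustment is given below; the exponent p = 0.2 is an assumed, commonly quoted order-of-magnitude value, not one taken from this record.

      # Sketch: power-law scaling of concentration with averaging time (assumed exponent).
      def scale_concentration(c_ref, t_ref_min, t_new_min, p=0.2):
          """Estimate the concentration for averaging time t_new_min from a reference
          value c_ref over t_ref_min, using C_new = C_ref * (t_ref / t_new) ** p."""
          return c_ref * (t_ref_min / t_new_min) ** p

      # Example: convert a 15 min average concentration to an estimated 60 min average.
      print(round(scale_concentration(100.0, 15.0, 60.0), 1))   # longer averaging lowers the peak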

  14. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  15. DSCOVR Magnetometer Level 2 One Minute Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data

  16. DSCOVR Magnetometer Level 2 One Second Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data

  17. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  18. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  19. MAIN STAGES SCIENTIFIC AND PRODUCTION MASTERING THE TERRITORY AVERAGE URAL

    Directory of Open Access Journals (Sweden)

    V.S. Bochko

    2006-09-01

    Questions of the shaping of the Average Ural as an industrial territory, on the basis of its scientific study and productive development, are considered in the article. It is shown that Russian and foreign scientists were concerned with the study of Ural resources and the particular living conditions of its population in the XVIII-XIX centuries. It is noted that in the XX century there was a transition to a systematic organizational-economic study of the productive capacity, society and nature of the Average Ural. More attention is now addressed to new problems of the region and to the need for their scientific solution.

  20. High-Average, High-Peak Current Injector Design

    CERN Document Server

    Biedron, S G; Virgo, M

    2005-01-01

    There is increasing interest in high-average-power (>100 kW), μm-range FELs. These machines require high peak current (~1 kA), modest transverse emittance, and beam energies of ~100 MeV. High average currents (~1 A) place additional constraints on the design of the injector. We present a design for an injector intended to produce the required peak currents at the injector, eliminating the need for magnetic compression within the linac. This reduces the potential for beam quality degradation due to CSR and space charge effects within magnetic chicanes.

  1. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy...... basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...
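
    As an illustration of the moving-average construction, the sketch below generates a Gaussian random field by convolving white noise with a kernel on a 2-D grid. A Gaussian kernel is used here as a stand-in; it is not the power kernel proposed in the record above.

      # Sketch: a moving-average (kernel-smoothed noise) Gaussian random field on a grid.
      import numpy as np

      def moving_average_field(n, kernel_width, rng):
          """Convolve white noise with a kernel; the kernel choice fixes the covariance
          (a Gaussian kernel here, standing in for the paper's power kernel)."""
          noise = rng.standard_normal((n, n))
          ax = np.arange(-3 * kernel_width, 3 * kernel_width + 1)
          k1d = np.exp(-0.5 * (ax / kernel_width) ** 2)
          kernel = np.outer(k1d, k1d)
          kernel /= np.sqrt((kernel ** 2).sum())        # unit-variance output field
          pad = np.zeros((n, n))
          pad[:kernel.shape[0], :kernel.shape[1]] = kernel
          # FFT-based circular convolution keeps the example short.
          return np.real(np.fft.ifft2(np.fft.fft2(noise) * np.fft.fft2(pad)))

      field = moving_average_field(128, kernel_width=5, rng=np.random.default_rng(3))
      print(field.shape, round(float(field.std()), 2))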

  2. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Background: Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results: Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in a lower RMSD to the native protein than the averaged structure. Finally, compared to PULCHRA 1, our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion: The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction 2, which
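
    A toy sketch of the harmonic pseudo-energy idea described above: Monte Carlo moves drive a chain towards averaged target coordinates while a bond-length term keeps the local geometry physical. All parameters (bond length, force constants, move size) are illustrative assumptions, not the authors' implementation.

      # Toy sketch: Metropolis Monte Carlo with a harmonic pull toward the averaged
      # coordinates plus a bond-length term that repairs unphysical local geometry.
      import numpy as np

      def refine(target, bond_len=3.8, k_target=1.0, k_bond=10.0,
                 n_steps=20000, step=0.3, seed=0):
          rng = np.random.default_rng(seed)
          coords = target + rng.standard_normal(target.shape)   # a 'close-by' start

          def energy(c):
              e_target = k_target * np.sum((c - target) ** 2)
              d = np.linalg.norm(np.diff(c, axis=0), axis=1)
              e_bond = k_bond * np.sum((d - bond_len) ** 2)     # ideal Calpha spacing
              return e_target + e_bond

          e = energy(coords)
          for _ in range(n_steps):
              i = rng.integers(len(coords))
              trial = coords.copy()
              trial[i] += step * rng.standard_normal(3)
              e_trial = energy(trial)
              if e_trial < e or rng.random() < np.exp(e - e_trial):   # Metropolis, kT = 1
                  coords, e = trial, e_trial
          return coords

      # An 'averaged' 20-residue chain with unphysically short bonds of ~1.7 A.
      target = np.cumsum(np.full((20, 3), 1.0), axis=0)
      refined = refine(target)
      print(round(float(np.linalg.norm(np.diff(refined, axis=0), axis=1).mean()), 2))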

  3. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...

  4. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  5. LNG plant combined with power plant

    Energy Technology Data Exchange (ETDEWEB)

    Aoki, I; Kikkawa, Y [Chiyoda Chemical Engineering and Construction Co. Ltd., Tokyo (Japan)

    1997-06-01

    The LNG plant consumes a lot of power for natural gas cooling and liquefaction. In some LNG plant locations, a rapid growth of electric power demand is expected due to the modernization of the area and/or the country. The electric power demand will have a peak in the daytime and low consumption at night, while the power demand of the LNG plant is almost constant due to its nature. Combining the LNG plant with a power plant will contribute to an improvement in the thermal efficiency of the power plant by keeping a higher average load on the power plant, which will lead to a reduction of the electrical power generation cost. The sweet fuel gas for the power plant can be extracted from the LNG plant, which is favorable from the viewpoint of clean air in the area. (Author). 5 figs.

  6. LNG plant combined with power plant

    International Nuclear Information System (INIS)

    Aoki, I.; Kikkawa, Y.

    1997-01-01

    The LNG plant consumes a lot of power for natural gas cooling and liquefaction. In some LNG plant locations, a rapid growth of electric power demand is expected due to the modernization of the area and/or the country. The electric power demand will have a peak in the daytime and low consumption at night, while the power demand of the LNG plant is almost constant due to its nature. Combining the LNG plant with a power plant will contribute to an improvement in the thermal efficiency of the power plant by keeping a higher average load on the power plant, which will lead to a reduction of the electrical power generation cost. The sweet fuel gas for the power plant can be extracted from the LNG plant, which is favorable from the viewpoint of clean air in the area. (Author). 5 figs

  7. Small Bandwidth Asymptotics for Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    This paper proposes (apparently) novel standard error formulas for the density-weighted average derivative estimator of Powell, Stock, and Stoker (1989). Asymptotic validity of the standard errors developed in this paper does not require the use of higher-order kernels and the standard errors...

  8. Trend of Average Wages as Indicator of Hypothetical Money Illusion

    Directory of Open Access Journals (Sweden)

    Julian Daszkowski

    2010-06-01

    Not until 1998 did the definition of wages in Poland include the value of social security contributions. The changed definition creates a higher level of reported wages, but was expected not to influence take-home pay. Nevertheless, the trend of average wages, after a short period, returned to its previous line. Such an effect is explained in terms of money illusion.

  9. 40 CFR 63.652 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... emissions more than the reference control technology, but the combination of the pollution prevention... emissions average. This must include any Group 1 emission points to which the reference control technology... agrees has a higher nominal efficiency than the reference control technology. Information on the nominal...

  10. A urine midmolecule osteocalcin assay shows higher discriminatory power than a serum midmolecule osteocalcin assay during short-term alendronate treatment of osteoporotic patients.

    Science.gov (United States)

    Srivastava, A K; Mohan, S; Singer, F R; Baylink, D J

    2002-07-01

    We isolated and characterized a peptide fragment corresponding to amino acid sequence 14-28 of human osteocalcin in urine from Paget's disease, and developed a polyclonal antibody reactive to this peptide in urine. We used this antibody to measure urinary fragments of osteocalcin and compared the efficacy of the urinary osteocalcin assay with a serum osteocalcin (sOC) assay (ELISA-Osteo, Cis-Bio International) to monitor the short-term changes in bone turnover in response to alendronate treatment. The synthetic peptide-based urinary osteocalcin (uOC) radioimmunoassay (RIA) showed an analytical sensitivity of 6.25 ng/mL, standard curve range of 3.12-400 ng/mL, and mean intra- (n = 20) and interassay (n = 30) coefficient of variation (CV) of sALP) (Alkphose-B, Metra Biosystems) in serum samples. The percent change data obtained between baseline and 30 days (n = 18) posttreatment suggested a rapid decline in uOC concentration (-27%, p < 0.05) compared with sALP (-3.4%, p = 0.689), two specific markers of bone formation. As expected, due to the coupling of bone formation and bone resorption, the concentration of all markers showed a 30%-45% decline compared with baseline values after 90 days (n = 16) of treatment. Correlation of markers after a 30 day treatment with alendronate revealed a higher correlation of uOC (r = 0.61, p < 0.05) than sALP (r = -0.14, p = 0.295) with uNTx. Similarly, correlation coefficients with r values between 0.48 and 0.55 (p < 0.05) were observed between uOC, sNTx, and sCTx, whereas no significant correlation was observed between sOC and sNTx or sCTx. These results provide indirect evidence that fragments measured by the urine assay probably originated from bone resorption, and suggest that the uOC assay could be used to assess short-term changes in bone metabolism with regard to osteocalcin.

  11. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  12. Nuclear fuel management via fuel quality factor averaging

    International Nuclear Information System (INIS)

    Mingle, J.O.

    1978-01-01

    The numerical procedure of prime number averaging is applied to the fuel quality factor distribution of once and twice-burned fuel in order to evolve a fuel management scheme. The resulting fuel shuffling arrangement produces a near optimal flat power profile both under beginning-of-life and end-of-life conditions. The procedure is easily applied requiring only the solution of linear algebraic equations. (author)

  13. Impact of connected vehicle guidance information on network-wide average travel time

    Directory of Open Access Journals (Sweden)

    Jiangfeng Wang

    2016-12-01

    With the emergence of connected vehicle technologies, the potential positive impact of connected vehicle guidance on mobility, through data exchange among vehicles, infrastructure, and mobile devices, has become a research hotspot. This study focuses on micro-modeling and quantitatively evaluating the impact of connected vehicle guidance on network-wide travel time by introducing various affecting factors. To evaluate the benefits of connected vehicle guidance, a simulation architecture based on one engine is proposed, representing the connected vehicle-enabled virtual world, and a connected vehicle route guidance scenario is established through the development of a communication agent and intelligent transportation systems agents using a connected vehicle application programming interface, considering communication properties such as path loss and transmission power. The impact of connected vehicle guidance on network-wide travel time is analyzed by comparison with non-connected vehicle guidance in response to different market penetration rates, following rates, and congestion levels. The simulation results show that average network-wide travel time under connected vehicle guidance is significantly reduced relative to non-connected vehicle guidance, a difference of 42.23%, and average travel time variability (represented by the coefficient of variation) increases as the travel time increases. Other vital findings include that a higher penetration rate and following rate generate bigger savings in average network-wide travel time. The savings in average network-wide travel time increase from 17% to 38% for different congestion levels, and the savings in average travel time under more serious congestion show a more obvious improvement for the same penetration rate or following rate.

  14. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  15. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic

  16. Should the average tax rate be marginalized?

    Czech Academy of Sciences Publication Activity Database

    Feldman, N. E.; Katuščák, Peter

    -, č. 304 (2006), s. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords : tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  17. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...

  18. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  19. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  20. Wind power and the conditions at a liberalized power market

    International Nuclear Information System (INIS)

    Morthorst, P.E.

    2003-01-01

    Wind power is undergoing a rapid development nationally as well as globally and in a number of countries covers an increasing part of the power supply. At the same time an ongoing liberalization of power markets is taking place and to an increasing extent the owners of wind power plants will themselves have to be responsible for trading the power at the spot market and financially handling the balancing. In the western part of Denmark (Jutland/Funen area), wind-generated power from time to time covers almost 100% of total power consumption. Therefore some examples are chosen from this area to analyse in more detail how well large amounts of wind power in the short-term are handled at the power spot market. It turns out that there is a tendency that more wind power in the system in the short run leads to relatively lower spot prices, while less wind power implies relatively higher spot prices, although, with the exception of December 2002, in general no strong relationship is found. A stronger relationship is found at the regulating market, where there is a fairly clear tendency that the more wind power produced, the higher is the need for down-regulation, and, correspondingly, the less wind power produced, the higher is the need for up-regulation. In general for the Jutland/Funen area the average cost of down-regulation is calculated as 1.2 c euros/kWh regulated for 2002, while the cost of up-regulation amounts to 0.7 c euros/kWh regulated. (author)

  1. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    We present a new iterative method for the calculation of the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, the assigned weights, the arrival rate, and the average packet length or input rate of the traffic flows. We verify the model outcome with examples and simulation results using the NS2 simulator.
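
    A minimal sketch of the kind of iterative calculation described above: weights split the link among backlogged flows, any flow whose input rate is below its fair share is capped at that rate, and the leftover capacity is redistributed. The link rate, weights and input rates below are illustrative, not the model's worked example.

      # Sketch: iterative weighted-fair-share (WFQ-style) average bandwidth assignment.
      def wfq_shares(link_rate, weights, input_rates):
          """Give each flow min(its input rate, its weighted share of the remaining
          capacity), redistributing leftover capacity until the assignment stabilizes."""
          n = len(weights)
          share = [0.0] * n
          capped = [False] * n
          remaining = link_rate
          while True:
              active = [i for i in range(n) if not capped[i]]
              if not active:
                  break
              w_sum = sum(weights[i] for i in active)
              newly_capped = False
              for i in active:
                  fair = remaining * weights[i] / w_sum
                  if input_rates[i] <= fair:            # flow cannot use its full share
                      share[i] = input_rates[i]
                      capped[i] = True
                      newly_capped = True
              if not newly_capped:
                  for i in active:
                      share[i] = remaining * weights[i] / w_sum
                  break
              remaining = link_rate - sum(share[i] for i in range(n) if capped[i])
          return share

      # Example: a 100 Mbit/s link and three flows with illustrative weights and loads.
      print(wfq_shares(100.0, [1, 2, 3], [10.0, 60.0, 80.0]))   # -> [10.0, 36.0, 54.0]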

  2. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    Marinaro, A.; Scarpetta, Q.

    1984-01-01

    An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion it is pointed out that the procedure proposed for the computation of time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles

  3. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  4. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
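
    A simplified sketch of the per-pixel robust averaging idea: maps with too many invalid pixels are dropped, then a sigma-clipped mean and spread are computed per pixel. The thresholds are illustrative, and the algorithm described above additionally handles unwrapping artifacts and alignment drift, which this sketch does not.

      # Simplified sketch: robust per-pixel averaging of phase maps with NaN voids.
      import numpy as np

      def robust_phase_average(maps, max_invalid_frac=0.2, n_sigma=3.0):
          """maps: array (n_maps, ny, nx) with NaN marking voids. Drop maps with too
          many invalid pixels, then sigma-clip per pixel before averaging."""
          valid_frac = 1.0 - np.isnan(maps).mean(axis=(1, 2))
          kept = maps[valid_frac >= (1.0 - max_invalid_frac)]
          mean = np.nanmean(kept, axis=0)
          std = np.nanstd(kept, axis=0)
          outlier = np.abs(kept - mean) > n_sigma * (std + 1e-12)
          clipped = np.where(outlier, np.nan, kept)
          return np.nanmean(clipped, axis=0), np.nanstd(clipped, axis=0)

      rng = np.random.default_rng(4)
      stack = 0.01 * rng.standard_normal((10, 64, 64))   # ten noisy phase maps
      stack[3, :8, :8] = np.nan                          # a small void in one map
      avg, spread = robust_phase_average(stack)
      print(avg.shape, round(float(spread.mean()), 4))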

  5. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    Science.gov (United States)

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, not enough research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for the real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in the three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; meanwhile, the real-time model was also used at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency.

  6. Asynchronous Gossip for Averaging and Spectral Ranking

    Science.gov (United States)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
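
    For orientation, the toy sketch below implements the classical pairwise gossip averaging that the abstract takes as its starting point (assuming, for simplicity, a complete communication graph); it does not implement the authors' reinforcement-learning variant.

```python
import random

def gossip_average(values, n_rounds=10_000, seed=0):
    """Classical pairwise gossip: at each step two randomly chosen nodes
    replace their values by their pair average. On a connected graph this
    converges to the global mean; here a complete graph is assumed."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(n_rounds):
        i, j = rng.sample(range(n), 2)        # pick a random communicating pair
        pair_avg = 0.5 * (x[i] + x[j])
        x[i] = x[j] = pair_avg
    return x

print(gossip_average([1.0, 5.0, 9.0, 13.0]))  # all entries approach the mean, 7.0
```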

  7. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  8. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).
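
    A rough sketch of the ray-based averaging idea is given below, assuming each digitized outline is a list of (x, y) points and a single ray centre per foot; the paper's alignment axis, two ray centres, overlapping arcs, and foot-length grouping are not reproduced.

```python
import numpy as np

def radial_profile(outline_xy, centre, n_rays=360):
    """Resample a closed outline onto equiangular rays from a chosen centre and
    return the radial distance along each ray (nearest-point approximation)."""
    pts = np.asarray(outline_xy, dtype=float) - np.asarray(centre, dtype=float)
    ang = np.arctan2(pts[:, 1], pts[:, 0])
    rad = np.hypot(pts[:, 0], pts[:, 1])
    ray_angles = np.linspace(-np.pi, np.pi, n_rays, endpoint=False)
    # angular distance with wrap-around, then pick the closest digitized point per ray
    diff = np.abs((ang[None, :] - ray_angles[:, None] + np.pi) % (2 * np.pi) - np.pi)
    return rad[diff.argmin(axis=1)]

def average_outline(outlines, centres, n_rays=360):
    """Average the radial profiles of several digitized outlines
    (e.g. feet within one length group) ray by ray."""
    profiles = [radial_profile(o, c, n_rays) for o, c in zip(outlines, centres)]
    return np.mean(profiles, axis=0)
```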

  9. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
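
    The core computation, a trailing moving average of the previous decade correlated against a yearly index, can be sketched as follows; the series here are random placeholders, not the study's data.

```python
import numpy as np

def trailing_moving_average(series, window=11):
    """Trailing moving average over the previous `window` years (inclusive)."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

# hypothetical yearly series standing in for the economic and literary misery indices
years = np.arange(1930, 2001)
economic_misery = np.random.default_rng(1).normal(10, 3, years.size)   # placeholder data
literary_misery = np.random.default_rng(2).normal(0, 1, years.size)    # placeholder data

ma = trailing_moving_average(economic_misery, window=11)
# the moving average ending in year t is compared with books published in year t
r = np.corrcoef(literary_misery[10:], ma)[0, 1]
print(f"correlation with 11-year trailing average: {r:.2f}")
```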

  10. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  11. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models and vanishing stochastic perturbations, and preclude analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  12. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    The receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus to improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which employs small-scale and large-scale spatial filters, and our previously presented expression that gives the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter in strong oceanic turbulence. The effect of the receiver aperture diameter on the aperture averaging factor is also presented for strong oceanic turbulence.

  13. Wind power and market power in competitive markets

    International Nuclear Information System (INIS)

    Twomey, Paul; Neuhoff, Karsten

    2010-01-01

    Average market prices for intermittent generation technologies are lower than for conventional generation. This has a technical reason but can be exaggerated in the presence of market power. When there is much wind, smaller amounts of conventional generation are required and prices are lower, while at times of little wind prices are higher. This effect reflects the value of different generation technologies to the system. Under conditions of market power, however, conventional generators can further depress prices if they have to buy back energy at times of large wind output, and can increase prices if they have to sell additional power at times of little wind output. This greatly exaggerates the effect. Forward contracting does not reduce the effect. An important consequence is that allowing market power profit margins as a support mechanism for generation capacity investment is not a technologically neutral policy.

  14. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that the cosmological parameters evaluated on the smoothed spatial domain B̄ obey Ω̄_B̄^m + Ω̄_B̄^R + Ω̄_B̄^Λ + Ω̄_B̄^Q = 1, where Ω̄_B̄^m, Ω̄_B̄^R and Ω̄_B̄^Λ correspond to the standard Friedmannian parameters, while Ω̄_B̄^Q is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias'

  15. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  16. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  17. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
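
    As a minimal illustration of the element-wise trimming idea (not the Grassmann-manifold algorithm itself), a per-pixel trimmed mean over a stack of vectorized images can be computed as follows; the trim fraction and the toy data are arbitrary choices.

```python
import numpy as np
from scipy import stats

def trimmed_average_images(images, trim_fraction=0.1):
    """Element-wise trimmed mean over a stack of vectorized images (shape N x D).
    Captures only the per-element trimming idea behind TGA's robustness to
    pixel outliers, not the Grassmann subspace-averaging algorithm."""
    stack = np.asarray(images, dtype=float)
    return stats.trim_mean(stack, trim_fraction, axis=0)

# pixels corrupted in a few images barely affect the trimmed average
rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=(200, 16))
data[:5, 3] = 1e3                                  # gross outliers in one pixel
print(data.mean(axis=0)[3], trimmed_average_images(data)[3])
```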

  18. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
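
    A minimal numerical sketch of evidence-weighted prediction averaging is shown below; the log evidences, prior, and predictions are hypothetical placeholders, and the computation is generic Bayesian model averaging rather than the authors' proposed neuronal implementation.

```python
import numpy as np

def bayesian_model_average(log_evidences, predictions, prior=None):
    """Combine per-model predictions with weights proportional to
    prior * evidence, computed from log evidences for numerical stability."""
    log_ev = np.asarray(log_evidences, dtype=float)
    preds = np.asarray(predictions, dtype=float)       # shape (n_models, ...)
    if prior is None:
        prior = np.ones_like(log_ev) / log_ev.size     # uniform model prior
    logw = log_ev + np.log(prior)
    logw -= logw.max()                                 # avoid overflow in exp
    w = np.exp(logw)
    w /= w.sum()
    return np.tensordot(w, preds, axes=1), w

# two hypothetical models predicting the same quantity
pred, weights = bayesian_model_average([-10.2, -12.7], [0.8, 0.3])
print(pred, weights)
```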

  19. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  20. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Hedin, E.R.

    1988-12-01

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_θ in Extrap T1 is described. The results of a series of measurements yielding β_θ as a function of externally applied toroidal field are presented. (author)

  1. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation is l...

  2. Gibbs equilibrium averages and Bogolyubov measure

    International Nuclear Information System (INIS)

    Sankovich, D.P.

    2011-01-01

    Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure

  3. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L²-norm are derived. A number of numerical examples are provided to show the computational performance of the method, with the regularization parameters selected by different strategies

  4. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    Journal of physics, July 2007, pp. 31–47. A singularity theorem based on spatial ... In this paper I would like to present a result which confirms – at least partially – ... A detailed analysis of how the model fits in with the ... Further, the statement that the spatial average ... Financial support under grants FIS2004-01626 and no. ...

  5. Multiphase averaging of periodic soliton equations

    International Nuclear Information System (INIS)

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  6. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type
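
    For reference, a generic moving-average crossover rule of the kind such studies examine might be coded as follows; the window lengths are illustrative and nothing here reflects the paper's specific model.

```python
import numpy as np

def ma_crossover_positions(prices, short=10, long=50):
    """Return +1 (long) when the short moving average is above the long one,
    else -1. A generic technical trading rule of the type discussed above."""
    prices = np.asarray(prices, dtype=float)

    def sma(x, w):
        return np.convolve(x, np.ones(w) / w, mode="valid")

    s, l = sma(prices, short), sma(prices, long)
    s = s[-l.size:]                      # align the two series on their common tail
    return np.where(s > l, 1, -1)

# example on a random-walk price series
prices = 100 + np.cumsum(np.random.default_rng(0).normal(0, 1, 500))
print(ma_crossover_positions(prices)[:10])
```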

  7. Essays on model averaging and political economics

    NARCIS (Netherlands)

    Wang, W.

    2013-01-01

    This thesis first investigates various issues related to model averaging, and then evaluates two policies, i.e. the West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple

  8. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false On average. 1209.12 Section 1209.12 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... CONSUMER INFORMATION ORDER Mushroom Promotion, Research, and Consumer Information Order Definitions § 1209...

  9. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives

  10. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  11. Reliability Estimates for Undergraduate Grade Point Average

    Science.gov (United States)

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  12. Chromospheric oscillations observed with OSO 8. III. Average phase spectra for Si II

    International Nuclear Information System (INIS)

    White, O.R.; Athay, R.G.

    1979-01-01

    Time series of intensity and Doppler-shift fluctuations in the Si II emission lines λ816.93 and λ817.45 are Fourier analyzed to determine the frequency variation of phase differences between intensity and velocity and between these two lines formed 300 km apart in the middle chromosphere. Average phase spectra show that oscillations between 2 and 9 mHz in the two lines have time delays from 35 to 40 s, which is consistent with the upward propagation of sound waves at 8.6-7.5 km s⁻¹. In this same frequency band near 3 mHz, maximum brightness leads maximum blueshift by 60°. At frequencies above 11 mHz, where the power spectrum is flat, the phase differences are uncertain, but approximately 65% of the cases indicate upward propagation. At these higher frequencies, the phase lead between intensity and blue Doppler shift ranges from 0° to 180° with an average value of 90°. However, the phase estimates in this upper band are corrupted by both aliasing and randomness inherent to the measured signals. Phase differences in the two narrow spectral features seen at 10.5 and 27 mHz in the power spectra are shown to be consistent with the properties expected for aliases of the wheel rotation rate of the spacecraft wheel section

  13. Ultra-low noise miniaturized neural amplifier with hardware averaging.

    Science.gov (United States)

    Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M

    2015-08-01

    Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small (…). This work applies the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of realizing the noise improvement of hardware averaging for neural recording with cuff electrodes, and can accommodate the presence of the high source impedances that are associated with miniaturized contacts and high channel counts in electrode arrays. This technique can be adopted for other applications where miniaturized and implantable multichannel acquisition systems with ultra-low noise and low power are required.
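
    The 1/√N behaviour of averaging independent channels can be checked with a small simulation like the one below; the signal, noise level, and channel counts are placeholders, and the source-resistance dependence discussed in the paper is ignored.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * np.linspace(0, 1, 1000))        # placeholder "neural" signal

for n_amps in (1, 2, 4, 8):
    # each parallel amplifier sees the same input plus independent noise
    channels = signal + rng.normal(0.0, 1.0, size=(n_amps, signal.size))
    averaged = channels.mean(axis=0)
    noise_rms = np.std(averaged - signal)
    print(f"N={n_amps}: residual noise RMS ≈ {noise_rms:.3f} "
          f"(expect ~{1.0 / np.sqrt(n_amps):.3f})")
```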

  14. Tendon surveillance requirements - average tendon force

    International Nuclear Information System (INIS)

    Fulton, J.F.

    1982-01-01

    Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)

  15. High content of MYHC II in vastus lateralis is accompanied by higher VO2/power output ratio during moderate intensity cycling performed both at low and at high pedalling rates.

    Science.gov (United States)

    Majerczak, J; Szkutnik, Z; Karasinski, J; Duda, K; Kolodziejski, L; Zoladz, J A

    2006-06-01

    The aim of this study was to examine the relationship between the content of various types of myosin heavy chain isoforms (MyHC) in the vastus lateralis muscle and pulmonary oxygen uptake during moderate power output incremental exercise, performed at low and at high pedalling rates. Twenty-one male subjects (mean ± SD: age 24.1 ± 2.8 years; body mass 72.9 ± 7.2 kg; height 179.1 ± 4.8 cm; BMI 22.69 ± 1.89 kg·m⁻²; VO2max 50.6 ± 5.3 ml·kg⁻¹·min⁻¹) participated in this study. On separate days, they performed two incremental exercise tests, at 60 rev·min⁻¹ and at 120 rev·min⁻¹, until exhaustion. Gas exchange variables were measured continuously breath by breath. Blood samples were taken for measurements of plasma lactate concentration prior to the exercise test and at the end of each step of the incremental exercise. Muscle biopsies were taken from the vastus lateralis muscle using a Bergström needle and analysed for the content of MyHC I and MyHC II using SDS-PAGE, and two groups (n = 7 each) were selected: group H with the highest content of MyHC II (60.7 % ± 10.5 %) and group L with the lowest content of MyHC II (27.6 % ± 6.1 %). We found that during incremental exercise at power outputs between 30-120 W, performed at 60 rev·min⁻¹, oxygen uptake in group H was significantly greater than in group L (ANCOVA, p = 0.003, upward shift of the intercept in the VO2/power output relationship). During cycling at the same power output but at 120 rev·min⁻¹, the oxygen uptake was also higher in group H when compared to group L (i.e. upward shift of the intercept in the VO2/power output relationship, ANCOVA, p = 0.002). Moreover, the increase in pedalling rate from 60 to 120 rev·min⁻¹ was accompanied by a significantly higher increase in the oxygen cost of cycling and by a significantly higher plasma lactate concentration in subjects from group H. We concluded that the muscle mechanical efficiency, expressed by the VO2/PO ratio

  16. Higher Efficiency HVAC Motors

    Energy Technology Data Exchange (ETDEWEB)

    Flynn, Charles Joseph [QM Power, Inc., Kansas City, MO (United States)

    2018-02-13

    failure-prone capacitors from the power stage. Q-Sync's simpler electronics also result in higher efficiency because they eliminate the power required by the PCB to perform the obviated power conversions and PWM processes: after line-synchronous operating speed is reached in the first 5 seconds of operation, the PWM circuits drop out and a much less energy-intensive "pass through" circuit takes over, allowing the grid-supplied AC power to sustain the motor's ongoing operation.

  17. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder-an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.

  18. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of the average work productivity across the factors affecting it is conducted by means of the u-substitution method.

  19. Weighted estimates for the averaging integral operator

    Czech Academy of Sciences Publication Activity Database

    Opic, Bohumír; Rákosník, Jiří

    2010-01-01

    Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231

  20. Average Transverse Momentum Quantities Approaching the Lightfront

    OpenAIRE

    Boer, Daniel

    2015-01-01

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...

  1. Time-averaged MSD of Brownian motion

    OpenAIRE

    Andreanov, Alexei; Grebenkov, Denis

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...

  2. Average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1979-01-01

    Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (B̄_z = 3.γ) than near midnight (B̄_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed

  3. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  4. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

    Full Text Available Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE, to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.

  5. New Nordic diet versus average Danish diet

    DEFF Research Database (Denmark)

    Khakimov, Bekzod; Poulsen, Sanne Kellebjerg; Savorani, Francesco

    2016-01-01

    and 3-hydroxybutanoic acid were related to a higher weight loss, while higher concentrations of salicylic, lactic and N-aspartic acids, and 1,5-anhydro-D-sorbitol were related to a lower weight loss. Specific gender- and seasonal differences were also observed. The study strongly indicates that healthy...... metabolites reflecting specific differences in the diets, especially intake of plant foods and seafood, and in energy metabolism related to ketone bodies and gluconeogenesis, formed the predominant metabolite pattern discriminating the intervention groups. Among NND subjects higher levels of vaccenic acid...

  6. Effect of random edge failure on the average path length

    Energy Technology Data Exchange (ETDEWEB)

    Guo Dongchao; Liang Mangui; Li Dandan; Jiang Zhongyuan, E-mail: mgliang58@gmail.com, E-mail: 08112070@bjtu.edu.cn [Institute of Information Science, Beijing Jiaotong University, 100044, Beijing (China)

    2011-10-14

    We study the effect of random removal of edges on the average path length (APL) in a large class of uncorrelated random networks in which vertices are characterized by hidden variables controlling the attachment of edges between pairs of vertices. A formula for approximating the APL of networks suffering random edge removal is derived first. Then, the formula is confirmed by simulations for classical ER (Erdős and Rényi) random graphs, BA (Barabási and Albert) networks, networks with exponential degree distributions, as well as random networks with asymptotic power-law degree distributions with exponent α > 2. (paper)
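
    An empirical counterpart of this calculation can be run with networkx as sketched below (graph size, edge probability, and removal fractions are arbitrary); the paper's analytical approximation formula is not reproduced.

```python
import random
import networkx as nx

def apl_after_edge_removal(G, removal_fraction, seed=0):
    """Average shortest path length of the largest connected component
    after removing a random fraction of the edges."""
    rng = random.Random(seed)
    H = G.copy()
    to_remove = rng.sample(list(H.edges()), int(removal_fraction * H.number_of_edges()))
    H.remove_edges_from(to_remove)
    giant = H.subgraph(max(nx.connected_components(H), key=len))
    return nx.average_shortest_path_length(giant)

G = nx.erdos_renyi_graph(500, 0.02, seed=1)
for f in (0.0, 0.2, 0.4):
    print(f, apl_after_edge_removal(G, f))
```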

  7. Operator product expansion and its thermal average

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)

    1998-05-01

    QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators, which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case at finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules, when the temperature is not too low. (orig.) 7 refs.

  8. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  9. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schrödinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  10. Baseline-dependent averaging in radio interferometry

    Science.gov (United States)

    Wijnholds, S. J.; Willis, A. G.; Salvini, S.

    2018-05-01

    This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.

  11. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time averaging circuit design, the 'parallel filter' is presented, which can reduce the time jitter, introduced in time measurements using counters of large dimensions. This parallel filter could be considered as a single stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  12. Time-averaged MSD of Brownian motion

    International Nuclear Information System (INIS)

    Andreanov, Alexei; Grebenkov, Denis S

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution
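
    The TAMSD estimator itself is straightforward to compute from a single trajectory, as in the sketch below for a simulated 1-D Brownian path; the Laplace-transform analysis of its distribution is not reproduced here, and the simulation parameters are arbitrary.

```python
import numpy as np

def tamsd(trajectory, lag):
    """Time-averaged mean-square displacement at a given lag:
    TAMSD(lag) = mean over t of |x(t + lag) - x(t)|^2."""
    x = np.asarray(trajectory, dtype=float)
    disp = x[lag:] - x[:-lag]
    return np.mean(disp ** 2)

# simulate a 1-D Brownian trajectory with diffusion coefficient D
rng = np.random.default_rng(0)
D, dt, n = 1.0, 0.01, 10_000
x = np.cumsum(rng.normal(0.0, np.sqrt(2 * D * dt), n))

for lag in (1, 10, 100):
    print(lag, tamsd(x, lag), "expected ~", 2 * D * lag * dt)
```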

  13. Time-dependent angularly averaged inverse transport

    International Nuclear Information System (INIS)

    Bal, Guillaume; Jollivet, Alexandre

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain

  14. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

      We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum...... degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1) / 7.  ...

  15. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...... (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...

  16. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The rough properties of nuclei were investigated with the statistical model, in systems with the same and with different numbers of protons and neutrons, separately, considering the Coulomb energy in the latter case. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient a_s, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. For a good fit of the beta-stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.) [pt

  17. Time-averaged MSD of Brownian motion

    Science.gov (United States)

    Andreanov, Alexei; Grebenkov, Denis S.

    2012-07-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.

  18. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares

  19. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  20. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.

  1. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by ''exact'' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality

  2. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Full Text Available Consider an arbitrary nonnegative deterministic process (in a stochastic setting {X(t), t ≥ 0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S = (-∞, ∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, will also be discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results will give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.

  3. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1991-08-01

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms
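
    One simple way to see the proposed correction is to re-evaluate each experiment's error at the combined estimate rather than at its own measured value, as in the sketch below, which assumes errors proportional to the true value; the proportional-error model and the iteration are illustrative, not the paper's exact prescription.

```python
import numpy as np

def sliding_error_average(values, rel_errors, n_iter=20):
    """Weighted average where each measurement's uncertainty is assumed
    proportional to the (unknown) true value rather than to its own measured
    value. Errors are re-evaluated at the current average to reduce the bias
    described above. The proportional-error model is an assumption."""
    x = np.asarray(values, dtype=float)
    r = np.asarray(rel_errors, dtype=float)
    mean = np.average(x, weights=1.0 / (r * x) ** 2)      # naive first pass (biased low)
    for _ in range(n_iter):
        sigma = r * mean                                   # errors evaluated at the average
        mean = np.average(x, weights=1.0 / sigma ** 2)
    return mean

# three hypothetical measurements with 10%, 10% and 20% relative errors
print(sliding_error_average([9.5, 10.4, 11.0], [0.10, 0.10, 0.20]))
```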

  4. Average subentropy, coherence and entanglement of random mixed quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy which is attained for the maximally mixed state as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because of the fact that average coherence of random mixed states is bounded uniformly, however, the average coherence of random pure states increases with the increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrary small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.

  5. Quetelet, the average man and medical knowledge.

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  6. [Quetelet, the average man and medical knowledge].

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  7. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and we use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorship. We also show the utility of our approach by devising a ratings scheme that we apply to data from the NetFlix prize, finding a significant improvement of our method over a baseline.
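
    As a rough illustration of a harmonic-average closeness, the sketch below assigns each node a number relative to a source as a weighted harmonic mean over its neighbours. This is our own simplified fixed-point scheme in the spirit of the record, not the paper's exact GEN recursion, and the toy graph and weights are hypothetical.

    ```python
    # Toy weighted, undirected collaboration graph: weights = collaboration strength.
    edges = {("A", "B"): 3.0, ("B", "C"): 1.0, ("A", "C"): 0.5, ("C", "D"): 2.0}
    graph = {}
    for (u, v), w in edges.items():
        graph.setdefault(u, {})[v] = w
        graph.setdefault(v, {})[u] = w

    def generalized_numbers(source, graph, n_iter=200):
        """GEN-like 'closeness to source' by fixed-point iteration.

        Each node's number is the weighted harmonic mean of (neighbour's number
        + 1/edge_weight), weighted by the edge weights.
        """
        E = {v: 0.0 if v == source else float(len(graph)) for v in graph}
        for _ in range(n_iter):
            new = {}
            for v in graph:
                if v == source:
                    new[v] = 0.0
                    continue
                num = sum(graph[v].values())
                den = sum(w / (E[u] + 1.0 / w) for u, w in graph[v].items())
                new[v] = num / den
            E = new
        return E

    print(generalized_numbers("A", graph))   # small number = "close" to A
    ```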

  8. Angle-averaged Compton cross sections

    International Nuclear Information System (INIS)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; αₛ = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV

  9. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    Full Text Available The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
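
    A minimal sketch of the accumulation step (our own, assuming NumPy and pre-aligned binary silhouettes; normalizing the sum by the number of frame pairs is our assumption, not necessarily the paper's convention):

    ```python
    import numpy as np

    def average_gait_differential_image(silhouettes):
        """AGDI: accumulate absolute differences between adjacent binary silhouettes.

        silhouettes: array of shape (T, H, W) with values in {0, 1}, assumed
        pre-aligned and size-normalized, as is standard for gait features.
        """
        frames = np.asarray(silhouettes, dtype=float)
        diffs = np.abs(np.diff(frames, axis=0))   # (T-1, H, W) frame-to-frame change
        return diffs.mean(axis=0)                 # average over the gait sequence

    # Toy usage with random "silhouettes" (real input would come from segmentation).
    rng = np.random.default_rng(0)
    seq = (rng.random((30, 64, 44)) > 0.5).astype(float)
    agdi = average_gait_differential_image(seq)
    print(agdi.shape, agdi.min(), agdi.max())
    ```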

  10. Reynolds averaged simulation of unsteady separated flow

    International Nuclear Information System (INIS)

    Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.

    2003-01-01

    The accuracy of Reynolds-averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flow around a square cylinder and over a wall-mounted cube is simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better agreement with available experimental data than has been achieved with steady computation

  11. Angle-averaged Compton cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; αₛ = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.

  12. The balanced survivor average causal effect.

    Science.gov (United States)

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-05-07

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
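
    The core idea of comparing equal fractions of the longest survivors in each arm can be sketched as follows. This is a simplified illustration on synthetic data, not the paper's estimator, which also involves bias expressions and sensitivity analyses; all names and parameters are ours.

    ```python
    import numpy as np

    def balanced_sace_estimate(surv_time, outcome, treat, q=0.5):
        """Compare mean outcomes between the longest-surviving fraction q of each arm.

        surv_time : survival (or follow-up) time per patient
        outcome   : longitudinal outcome at the time of interest (np.nan if undefined)
        treat     : 1 = treatment, 0 = control
        """
        means = {}
        for g in (0, 1):
            idx = np.where(treat == g)[0]
            k = max(1, int(np.floor(q * len(idx))))
            top = idx[np.argsort(surv_time[idx])[::-1][:k]]   # longest survivors in arm g
            means[g] = np.nanmean(outcome[top])
        return means[1] - means[0]

    # Toy data (hypothetical): outcome observed only for patients surviving past t = 3.
    rng = np.random.default_rng(2)
    n = 200
    treat = rng.integers(0, 2, n)
    surv = rng.exponential(5 + 2 * treat)
    outcome = np.where(surv > 3, 50 + 5 * treat + rng.normal(0, 10, n), np.nan)
    print(balanced_sace_estimate(surv, outcome, treat, q=0.4))
    ```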

  13. Evaluation of soft x-ray average recombination coefficient and average charge for metallic impurities in beam-heated plasmas

    International Nuclear Information System (INIS)

    Sesnic, S.S.; Bitter, M.; Hill, K.W.; Hiroe, S.; Hulse, R.; Shimada, M.; Stratton, B.; von Goeler, S.

    1986-05-01

    The soft x-ray continuum radiation in TFTR low density neutral beam discharges can be much lower than its theoretical value obtained by assuming a corona equilibrium. This reduced continuum radiation is caused by an ionization equilibrium shift toward lower states, which strongly changes the value of the average recombination coefficient of metallic impurities, γ̄, even for only slight changes in the average charge, Z̄. The primary agent for this shift is the charge exchange between the highly ionized impurity ions and the neutral hydrogen, rather than impurity transport, because the central density of the neutral hydrogen is strongly enhanced at lower plasma densities with intense beam injection. In the extreme case of low density, high neutral beam power TFTR operation (energetic ion mode) the reduction in γ̄ can be as much as one-half to two-thirds. We calculate the parametric dependence of γ̄ and Z̄ for Ti, Cr, Fe, and Ni impurities on neutral density (equivalent to beam power), electron temperature, and electron density. These values are obtained by using either a one-dimensional impurity transport code (MIST) or a zero-dimensional code with a finite particle confinement time. As an example, we show the variation of γ̄ and Z̄ in different TFTR discharges

  14. Stochastic Optimal Prediction with Application to Averaged Euler Equations

    Energy Technology Data Exchange (ETDEWEB)

    Bell, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chorin, Alexandre J. [Univ. of California, Berkeley, CA (United States); Crutchfield, William [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-04-24

    Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the solution of Averaged Euler equations in two space dimensions.

  15. Glycogen with short average chain length enhances bacterial durability

    Science.gov (United States)

    Wang, Liang; Wise, Michael J.

    2011-09-01

    Glycogen is conventionally viewed as an energy reserve that can be rapidly mobilized for ATP production in higher organisms. However, several studies have noted that glycogen with short average chain length in some bacteria is degraded very slowly. In addition, slow utilization of glycogen is correlated with bacterial viability, that is, the slower the glycogen breakdown rate, the longer the bacterial survival time in the external environment under starvation conditions. We call that a durable energy storage mechanism (DESM). In this review, evidence from microbiology, biochemistry, and molecular biology will be assembled to support the hypothesis of glycogen as a durable energy storage compound. One method for testing the DESM hypothesis is proposed.

  16. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
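
    For the constrained-simulation route, the free-energy profile follows from integrating the negative of the average force along the selected coordinate. The sketch below uses a synthetic mean-force curve purely for illustration; the sign convention dA/dξ = −⟨F_ξ⟩ and all numbers are our assumptions.

    ```python
    import numpy as np

    # Thermodynamic-integration sketch: given the average force <F>(xi) at discrete
    # values of a coordinate xi (e.g. from constrained simulations), integrate
    # dA/dxi = -<F>(xi) numerically (cumulative trapezoid rule).
    xi = np.linspace(0.0, np.pi, 50)                 # e.g. a torsion angle
    mean_force = -2.0 * np.sin(2 * xi)               # synthetic <F>(xi) for illustration
    dA_dxi = -mean_force
    free_energy = np.concatenate(
        ([0.0], np.cumsum(0.5 * (dA_dxi[1:] + dA_dxi[:-1]) * np.diff(xi)))
    )
    print(free_energy.min(), free_energy.max())      # barrier height of the toy profile
    ```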

  17. Geographic Gossip: Efficient Averaging for Sensor Networks

    Science.gov (United States)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\\epsilon$ using $O(\\frac{n^{1.5}}{\\sqrt{\\log n}} \\log \\epsilon^{-1})$ radio transmissions, which yields a $\\sqrt{\\frac{n}{\\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
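
    For orientation, here is a sketch of *standard* pairwise randomized gossip, the baseline the paper improves on; the geographic routing and resampling layer of the proposed scheme is omitted, and the ring topology and values are illustrative.

    ```python
    import random

    def randomized_gossip(values, adjacency, n_rounds=2000, seed=0):
        """Standard pairwise gossip: at each step a random node averages its value
        with a random neighbour; all values converge to the global average."""
        rng = random.Random(seed)
        x = list(values)
        nodes = list(range(len(x)))
        for _ in range(n_rounds):
            i = rng.choice(nodes)
            j = rng.choice(adjacency[i])
            avg = 0.5 * (x[i] + x[j])
            x[i] = x[j] = avg
        return x

    # Ring of 20 nodes holding the values 0..19; the true average is 9.5.
    n = 20
    adjacency = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    result = randomized_gossip(list(range(n)), adjacency)
    print(min(result), max(result))   # values contract toward 9.5
    ```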

  18. The concept of average LET values determination

    International Nuclear Information System (INIS)

    Makarewicz, M.

    1981-01-01

    The concept of determining average LET (linear energy transfer) values, i.e. ordinary moments of LET in the absorbed-dose distribution versus LET, for ionizing radiation of any kind and any spectrum (even unknown ones), is presented. The method is based on measuring the ionization current at several values of the voltage supplying an ionization chamber operating under conditions of columnar recombination of ions, or ion recombination in clusters, while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values one can obtain coefficients of the expression which can be interpreted as values of the LET moments. One of the advantages of the method is its experimental and computational simplicity. It has been shown that for numerical estimation of certain effects dependent on the LET of radiation it is not necessary to know the dose distribution but only a number of parameters of the distribution, i.e. the LET moments. (author)

  19. On spectral averages in nuclear spectroscopy

    International Nuclear Information System (INIS)

    Verbaarschot, J.J.M.

    1982-01-01

    In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, which is defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method for transforming fixed angular momentum projection traces into fixed angular momentum traces for the configuration space is developed. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)

  20. Applications of ordered weighted averaging (OWA) operators in environmental problems

    Directory of Open Access Journals (Sweden)

    Carlos Llopis-Albert

    2017-04-01

    Full Text Available This paper presents an application of a prioritized weighted aggregation operator based on ordered weighted averaging (OWA) to deal with stakeholders' constructive participation in water resources projects. Stakeholders have different degrees of acceptance or preference regarding the measures and policies to be carried out, which lead to different environmental and socio-economic outcomes and, hence, to different levels of stakeholder satisfaction. The methodology establishes a prioritization relationship among the stakeholders, whose preferences are aggregated by means of weights that depend on the satisfaction of the higher-priority policy maker. The methodology has been successfully applied to a Public Participation Project (PPP) in watershed management, thus obtaining efficient environmental measures in conflict resolution problems under actors' preference uncertainties.
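
    A plain (non-prioritized) OWA aggregation can be sketched in a few lines; the satisfaction scores and weight vectors below are hypothetical, and the prioritized weighting scheme of the paper is not reproduced here.

    ```python
    import numpy as np

    def owa(values, weights):
        """Ordered weighted averaging: weights apply to the *sorted* values
        (largest first), not to particular stakeholders."""
        v = np.sort(np.asarray(values, dtype=float))[::-1]
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        return float(np.dot(w, v))

    # Satisfaction scores of four stakeholders for one candidate measure (hypothetical).
    satisfaction = [0.9, 0.4, 0.7, 0.6]
    print(owa(satisfaction, [0.4, 0.3, 0.2, 0.1]))   # optimistic ("or-like") weighting
    print(owa(satisfaction, [0.1, 0.2, 0.3, 0.4]))   # pessimistic ("and-like") weighting
    ```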

  1. Statement on Wind Power

    Energy Technology Data Exchange (ETDEWEB)

    2010-01-15

    Wind power will grow in importance in future electricity supply. In the next few decades it will to some degree replace fossil power, but it will at the same time also depend on fossil-based power. Beyond that, when wind power is expected to have a substantial share of the electricity market, CO{sub 2} emission-free electricity plants that are well suited for balancing the wind intermittency will be required. Predictions of the future penetration of wind power into the electricity market are critically dependent on a number of policy measures and will be especially influenced by climate-driven energy policies. Very large investments will also be necessary, as is shown by the IEA's Blue Map Scenario, which includes 5,000 TWh wind electricity by 2050 at a cost of USD 700 billion. This implies an average 8% increase of wind electricity per year in an energy system so large that it affects the entire world. The Energy Committee's scenario for electricity production in the year 2050 includes 5,000 TWh wind electricity out of a total of 45,000 TWh. Wind electricity thus has a share that lies within the presently reached penetration of wind energy in a single country and within the calculated future projections of its penetration. Future large continental and intercontinental power grids may enable higher penetrations of wind energy, since contributions of wind power from a larger area will tend to reduce its intermittency. Also, large-scale storage systems (such as thermal storage) may help to balance intermittent power systems. These alternatives have been discussed from a technical point of view [3], but for the required large-scale systems, further studies on the social, environmental and economical implications are needed

  2. Challenges in higher order mode Raman amplifiers

    DEFF Research Database (Denmark)

    Rottwitt, Karsten; Nielsen, Kristian; Friis, Søren Michael Mørk

    2015-01-01

    A higher order Raman amplifier model that takes random mode coupling into account is presented. Mode dependent gain and signal power fluctuations at the output of the higher order mode Raman amplifier are discussed.

  3. Nuclear Power in Korea

    International Nuclear Information System (INIS)

    Ha, Duk-Sang

    2009-01-01

    Full text: Korea's nuclear power program has been promoted through a step-by-step approach: the first stage was the 1970s, when it depended on foreign contractors' technology, and the second was the 1980s, when it accumulated technology and experience by jointly implementing projects. Finally, in the third stage in the 1990s, Korea achieved nuclear power technological self-reliance and developed its standard nuclear power plant, the so-called Optimized Power Reactor 1000 (OPR 1000). Following the development of the OPR 1000, Korea has continued to upgrade the design, known as the Advanced Power Reactor 1400 (APR 1400) and APR+. Korea is one of the few countries that continuously developed nuclear power plant projects during the last 30 years while other advanced countries ceased theirs; as a result, significant reductions in project cost and construction schedule were possible, benefiting from the repetition of construction projects. Its nuclear industry infrastructure now possesses strong competitiveness in this field. The electricity produced from nuclear power was 150,958 GWh in 2008, which covers approximately 36% of the total electricity demand in Korea, while the installed capacity of nuclear power is 17,716 MW, or 24% of the total installed capacity. We are currently operating 20 nuclear power plant units in Korea and are constructing 8 additional units (9,600 MW). Korea's nuclear power plants have displayed excellent operating performance; the average plant capacity factor was 93.4% in 2008, about 15% higher than the world average of 77.8%. Moreover, the number of unplanned trips per unit was only 0.35 in 2008, which is world top class performance. We also currently operate four CANDU nuclear units in Korea, of the same reactor type and capacity as the Cernavoda units. They have been showing excellent operating performance, with a capacity factor of 92.8% in 2008. All the Korean

  4. To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2007-01-01

    We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'

  5. Determining average path length and average trapping time on generalized dual dendrimer

    Science.gov (United States)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some cases, during a transport or diffusion process, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., the trap placed on a central node and the trap uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
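
    The APL itself is straightforward to compute for any small unweighted graph by breadth-first search; a minimal sketch follows (the toy adjacency is illustrative, not an actual dual dendrimer or Husimi cactus).

    ```python
    from collections import deque

    def average_path_length(adjacency):
        """Average shortest-path length of an unweighted, connected graph (BFS per node)."""
        total, pairs = 0, 0
        for s in adjacency:
            dist = {s: 0}
            queue = deque([s])
            while queue:
                u = queue.popleft()
                for v in adjacency[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            total += sum(dist.values())
            pairs += len(dist) - 1
        return total / pairs

    # Small toy tree, purely illustrative.
    adjacency = {0: [1, 2, 3], 1: [0, 4, 5], 2: [0], 3: [0], 4: [1], 5: [1]}
    print(average_path_length(adjacency))
    ```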

  6. Public power costs less

    International Nuclear Information System (INIS)

    Moody, D.

    1993-01-01

    The reasons why residential customers of public power utilities paid less for power than private sector customers are discussed. Residential customers of investor-owned utilities (IOUs) paid average rates that were 28% above those paid by customers of publicly owned systems during 1990. The reasons for this disparity are that management costs faced by public power systems are below those of private power companies, indicating a greater efficiency of management among public power systems, and that customer accounts expenses averaged $33.00 per customer for publicly owned electric utilities compared to $39.00 per customer for private utilities

  7. Higher Education and European Regionalism.

    Science.gov (United States)

    Paterson, Lindsay

    2001-01-01

    Speculates about the relationship between two fundamental social changes occurring in Europe: the development of a mass higher education system and the slow decay of the old states that were inherited from the 19th century, eroded from below by various movements for national and regional autonomy, and eroded from above by the growing power and…

  8. Three-dimensional Core Design of a Super Fast Reactor with a High Power Density

    International Nuclear Information System (INIS)

    Cao, Liangzhi; Oka, Yoshiaki; Ishiwatari, Yuki; Ikejiri, Satoshi; Ju, Haitao

    2010-01-01

    The SuperCritical Water-cooled Reactor (SCWR) pursues a high power density to reduce its capital cost. The fast spectrum SCWR, called a super fast reactor, can be designed with a higher power density than the thermal spectrum SCWR. The mechanism of increasing the average power density of the super fast reactor is studied theoretically and numerically. Some key parameters affecting the average power density, including fuel pin outer diameter, fuel pitch, power peaking factor, and the fraction of seed assemblies, are analyzed and optimized to achieve a more compact core. Based on those sensitivity analyses, a compact super fast reactor is successfully designed with an average power density of 294.8 W/cm³. The core characteristics are analyzed by using a three-dimensional neutronics/thermal-hydraulics coupling method. Numerical results show that all of the design criteria and goals are satisfied

  9. Nonlinearity management in higher dimensions

    International Nuclear Information System (INIS)

    Kevrekidis, P G; Pelinovsky, D E; Stefanov, A

    2006-01-01

    In the present paper, we revisit nonlinearity management of the time-periodic nonlinear Schroedinger equation and the related averaging procedure. By means of rigorous estimates, we show that the averaged nonlinear Schroedinger equation does not blow up in the higher dimensional case so long as the corresponding solution remains smooth. In particular, we show that the H¹ norm remains bounded, in contrast with the usual blow-up mechanism for the focusing Schroedinger equation. This conclusion agrees with earlier works in the case of strong nonlinearity management but contradicts those in the case of weak nonlinearity management. The apparent discrepancy is explained by the divergence of the averaging procedure in the limit of weak nonlinearity management

  10. Stochastic Growth Theory of Spatially-Averaged Distributions of Langmuir Fields in Earth's Foreshock

    Science.gov (United States)

    Boshuizen, Christopher R.; Cairns, Iver H.; Robinson, P. A.

    2001-01-01

    Langmuir-like waves in the foreshock of Earth are characteristically bursty and irregular, and are the subject of a number of recent studies. Averaged over the foreshock, it is observed that the probability distribution P̄(log E) of the wave field E is power-law, with the bar denoting this averaging over position. In this paper it is shown that stochastic growth theory (SGT) can explain the power-law spatially-averaged distribution P̄(log E) when the observed power-law variations of the mean and standard deviation of log E with position are combined with the lognormal statistics predicted by SGT at each location.

  11. An Experimental Observation of Axial Variation of Average Size of Methane Clusters in a Gas Jet

    International Nuclear Information System (INIS)

    Ji-Feng, Han; Chao-Wen, Yang; Jing-Wei, Miao; Jian-Feng, Lu; Meng, Liu; Xiao-Bing, Luo; Mian-Gong, Shi

    2010-01-01

    Axial variation of the average size of methane clusters in a gas jet produced by supersonic expansion of methane through a cylindrical nozzle of 0.8 mm in diameter is observed using a Rayleigh scattering method. The scattered light intensity exhibits a power-law scaling on the backing pressure ranging from 16 to 50 bar, and the power is strongly Z dependent, varying from 8.4 (Z = 3 mm) to 5.4 (Z = 11 mm), which is much larger than that of the argon cluster. The scattered light intensity versus axial position shows that the position of 5 mm has the maximum signal intensity. The estimation of the average cluster size as a function of axial position Z indicates that the cluster growth process continues until the maximum average cluster size is reached at Z = 9 mm, and the average cluster size decreases gradually for Z > 9 mm
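
    The reported exponents come from power-law fits of scattered intensity versus backing pressure; a minimal sketch of such a fit on synthetic data (all numbers illustrative, not measured values):

    ```python
    import numpy as np

    # A power law I ~ P**k appears as a straight line of slope k in log-log space.
    pressure = np.array([16., 20., 25., 30., 35., 40., 45., 50.])      # bar
    k_true = 8.4                                    # e.g. the value reported at Z = 3 mm
    rng = np.random.default_rng(3)
    intensity = pressure**k_true * (1 + 0.05 * rng.standard_normal(pressure.size))

    slope, intercept = np.polyfit(np.log(pressure), np.log(intensity), 1)
    print(f"fitted exponent: {slope:.2f}")          # recovers ~8.4
    ```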

  12. The average crossing number of equilateral random polygons

    International Nuclear Information System (INIS)

    Diao, Y; Dobay, A; Kusner, R B; Millett, K; Stasiak, A

    2003-01-01

    In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number ⟨ACN⟩ of all equilateral random walks of length n is of the form (3/16)n ln n + O(n). A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length n are divided into individual knot types, the ⟨ACN⟩ for each knot type K can be described by a function of the form ⟨ACN⟩ = a(n-n₀)ln(n-n₀) + b(n-n₀) + c, where a, b and c are constants depending on K and n₀ is the minimal number of segments required to form K. The ⟨ACN⟩ profiles diverge from each other, with more complex knots showing higher ⟨ACN⟩ than less complex knots. Moreover, the profiles intersect with the ⟨ACN⟩ profile of all closed walks. These points of intersection define the equilibrium length of K, i.e., the chain length n_e(K) at which a statistical ensemble of configurations with given knot type K-upon cutting, equilibration and reclosure to a new knot type K'-does not show a tendency to increase or decrease ⟨ACN⟩. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration ⟨Rg⟩
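
    For orientation, the leading-order growth law is easy to tabulate; the sketch below evaluates (3/16) n ln n directly, ignoring the O(n) correction.

    ```python
    import math

    def mean_acn(n):
        """Leading-order mean average crossing number of an equilateral random
        walk/polygon of length n: (3/16) * n * ln(n)."""
        return (3.0 / 16.0) * n * math.log(n)

    for n in (50, 100, 500, 1000):
        print(n, round(mean_acn(n), 1))
    ```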

  13. The effects of average revenue regulation on electricity transmission investment and pricing

    International Nuclear Information System (INIS)

    Matsukawa, Isamu

    2008-01-01

    This paper investigates the long-run effects of average revenue regulation on an electricity transmission monopolist who applies a two-part tariff comprising a variable congestion price and a non-negative fixed access fee. A binding constraint on the monopolist's expected average revenue lowers the access fee, promotes transmission investment, and improves consumer surplus. In a case of any linear or log-linear electricity demand function with a positive probability that no congestion occurs, average revenue regulation is allocatively more efficient than a Coasian two-part tariff if the level of capacity under average revenue regulation is higher than that under a Coasian two-part tariff. (author)

  14. “Simpson’s paradox” as a manifestation of the properties of weighted average (part 2)

    OpenAIRE

    Zhekov, Encho

    2012-01-01

    The article proves that the so-called "Simpson's paradox" is a special case of the manifestation of the properties of the weighted average. In this case it always comes down to comparing two weighted averages, where the average of the larger variables is less than that of the smaller. The article demonstrates one method for analyzing the relative change of magnitudes of the type S = Σᵢ₌₁ᵏ xᵢyᵢ, which answers the question: what is the reason for the weighted average of a few variables with higher values to ...

  15. “Simpson’s paradox” as a manifestation of the properties of weighted average (part 1)

    OpenAIRE

    Zhekov, Encho

    2012-01-01

    The article proves that the so-called "Simpson's paradox" is a special case of the manifestation of the properties of the weighted average. In this case it always comes down to comparing two weighted averages, where the average of the larger variables is less than that of the smaller. The article demonstrates one method for analyzing the relative change of magnitudes of the type S = Σᵢ₌₁ᵏ xᵢyᵢ, which answers the question: what is the reason for the weighted average of a few variables with higher values to be...
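
    The weighted-average mechanism behind the paradox is easy to reproduce numerically; the group sizes and success counts below are a classic textbook-style illustration, not data from the article.

    ```python
    # Two groups; in each group method A has the higher success rate, yet the
    # pooled (weighted-average) rate is higher for method B because the weights
    # (group sizes) differ.
    a = {"g1": (81, 87), "g2": (192, 263)}   # (successes, trials) for method A
    b = {"g1": (234, 270), "g2": (55, 80)}   # (successes, trials) for method B

    for g in ("g1", "g2"):
        ra = a[g][0] / a[g][1]
        rb = b[g][0] / b[g][1]
        print(g, round(ra, 3), round(rb, 3))          # A wins within each group

    pooled_a = sum(s for s, _ in a.values()) / sum(t for _, t in a.values())
    pooled_b = sum(s for s, _ in b.values()) / sum(t for _, t in b.values())
    print("pooled", round(pooled_a, 3), round(pooled_b, 3))  # B wins overall
    ```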

  16. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the average...

  17. Average and local structure of α-CuI by configurational averaging

    International Nuclear Information System (INIS)

    Mohn, Chris E; Stoelen, Svein

    2007-01-01

    Configurational Boltzmann averaging together with density functional theory is used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out with peaks in the atom-density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond-angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion-pathways is expected due to the large variation in the local motifs

  18. Deregulated power prices: comparison of diurnal patterns

    International Nuclear Information System (INIS)

    Ying Li; Flynn, P.C.

    2004-01-01

    We examine electrical power price, and in particular its daily and average weekday vs. weekend pattern of change, for 14 deregulated markets. Power price in deregulated markets shows fundamentally different patterns. North American markets show a monotonic diurnal weekday price pattern, while all other markets studied show more than one price peak. Deregulated power markets differ in maximum vs. minimum daily average price and in average weekday to weekend price, in turn creating a different incentive for a consumer to time shift power consuming activities. Markets differ in the extent to which a small fraction of the days shapes the average diurnal pattern and value of price. Deregulated markets show a wide variation in the correlation between load and price. Some deregulated markets, most notably Britain and Spain, show patterns that are predictable and consistent, and hence that can encourage a customer to shape consumption behaviors. Other markets, for example South Australia, have patterns that are inconsistent and irregular, and hence are hard for a customer to interpret; a customer in such a market will have a higher incentive to escape risk through hedging mechanisms. (Author)

  19. Deregulated power prices: comparison of diurnal patterns

    International Nuclear Information System (INIS)

    Li Ying; Flynn, Peter C.

    2004-01-01

    We examine electrical power price, and in particular its daily and average weekday vs. weekend pattern of change, for 14 deregulated markets. Power price in deregulated markets shows fundamentally different patterns. North American markets show a monotonic diurnal weekday price pattern, while all other markets studied show more than one price peak. Deregulated power markets differ in maximum vs. minimum daily average price and in average weekday to weekend price, in turn creating a different incentive for a consumer to time shift power consuming activities. Markets differ in the extent to which a small fraction of the days shapes the average diurnal pattern and value of price. Deregulated markets show a wide variation in the correlation between load and price. Some deregulated markets, most notably Britain and Spain, show patterns that are predictable and consistent, and hence that can encourage a customer to shape consumption behaviors. Other markets, for example South Australia, have patterns that are inconsistent and irregular, and hence are hard for a customer to interpret; a customer in such a market will have a higher incentive to escape risk through hedging mechanisms

  20. Justification of the averaging method for parabolic equations containing rapidly oscillating terms with large amplitudes

    International Nuclear Information System (INIS)

    Levenshtam, V B

    2006-01-01

    We justify the averaging method for abstract parabolic equations with stationary principal part that contain non-linearities (subordinate to the principal part) some of whose terms are rapidly oscillating in time with zero mean and are proportional to the square root of the frequency of oscillation. Our interest in the exponent 1/2 is motivated by the fact that terms proportional to lower powers of the frequency have no influence on the average. For linear equations of the same type, we justify an algorithm for the study of the stability of solutions in the case when the stationary averaged problem has eigenvalues on the imaginary axis (the critical case)

  1. Large-signal analysis of DC motor drive system using state-space averaging technique

    International Nuclear Information System (INIS)

    Bekir Yildiz, Ali

    2008-01-01

    The analysis of a separately excited DC motor driven by a DC-DC converter is realized by using the state-space averaging technique. Firstly, a general and unified large-signal averaged circuit model for DC-DC converters is given. The method converts power electronic systems, which are periodically time-variant because of their switching operation, into unified, time-independent systems. Using the averaged circuit model enables us to combine the different topologies of converters. Thus, all analysis and design processes for the DC motor can easily be carried out with the unified averaged model, which is valid over the whole switching period. Some large-signal variations such as the speed and current of the DC motor, the steady-state analysis, and the large-signal and small-signal transfer functions are easily obtained by using the averaged circuit model
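
    A minimal large-signal sketch in this spirit follows. It is our own simplification, not the paper's model: an ideal buck converter in continuous conduction whose averaged output d·V_in drives the armature directly, integrated by forward Euler with hypothetical parameters.

    ```python
    # Averaged large-signal model of a buck-converter-fed, separately excited DC motor.
    V_in, Ra, La = 200.0, 1.0, 5e-3          # supply, armature resistance, armature inductance
    Ke, Kt, J, B = 0.5, 0.5, 0.01, 1e-3      # back-EMF const., torque const., inertia, friction
    T_load, d = 2.0, 0.6                     # load torque, duty cycle

    dt, t_end = 1e-4, 1.0
    i, w = 0.0, 0.0                          # armature current (A), speed (rad/s)
    for _ in range(int(t_end / dt)):         # forward-Euler integration of the averaged model
        di = (d * V_in - Ra * i - Ke * w) / La
        dw = (Kt * i - B * w - T_load) / J
        i += dt * di
        w += dt * dw

    print(f"steady-state current ~ {i:.2f} A, speed ~ {w:.1f} rad/s")
    ```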

  2. Correlation between Grade Point Averages and Student Evaluation of Teaching Scores: Taking a Closer Look

    Science.gov (United States)

    Griffin, Tyler J.; Hilton, John, III.; Plummer, Kenneth; Barret, Devynne

    2014-01-01

    One of the most contentious potential sources of bias is whether instructors who give higher grades receive higher ratings from students. We examined the grade point averages (GPAs) and student ratings across 2073 general education religion courses at a large private university. A moderate correlation was found between GPAs and student evaluations…

  3. A Single Image Dehazing Method Using Average Saturation Prior

    Directory of Open Access Journals (Sweden)

    Zhenfei Gu

    2017-01-01

    Full Text Available Outdoor images captured in bad weather are prone to yield poor visibility, which is a fatal problem for most computer vision applications. The majority of existing dehazing methods rely on an atmospheric scattering model and therefore share a common limitation; that is, the model is only valid when the atmosphere is homogeneous. In this paper, we propose an improved atmospheric scattering model to overcome this inherent limitation. By adopting the proposed model, a corresponding dehazing method is also presented. In this method, we first create a haze density distribution map of a hazy image, which enables us to segment the hazy image into scenes according to the haze density similarity. Then, in order to improve the atmospheric light estimation accuracy, we define an effective weight assignment function to locate a candidate scene based on the scene segmentation results and therefore avoid most potential errors. Next, we propose a simple but powerful prior named the average saturation prior (ASP), which is a statistic of extensive high-definition outdoor images. Using this prior combined with the improved atmospheric scattering model, we can directly estimate the scene atmospheric scattering coefficient and restore the scene albedo. The experimental results verify that our model is physically valid, and the proposed method outperforms several state-of-the-art single image dehazing methods in terms of both robustness and effectiveness.

  4. Power engineers in Paernu

    International Nuclear Information System (INIS)

    Veski, Rein

    1999-01-01

    There was a meeting of the Estonian Power and Heat Association in Paernu summarizing the Association's activities in 1998. Only local fuels such as peat and wood chips (70%) and oil shale (30%) are used for district heating in Paernu. There is an interest in the combined production of heat and power, and the Association plans to set up a corresponding engineering committee. The Energy Market Inspectorate was formed in Estonia on January 22, 1998, and on April 1, 1999, the Estonian Center for Engineering Inspectorate was opened; the newly formed body will deal with accidents likely to happen. The banks are interested in financing Estonian energy projects, as power engineering is a field of vital importance with stable money flows and low risk. Capital can be obtained from within Estonia more easily and quickly, but so far only in limited amounts and at higher interest (small projects at 15 to 20 per cent) than from outside (in the case of 2 thousand million EEK, at 5%). The weighted average price of heat sold in Estonia, without turnover tax, was 302 EEK/MWh, varying from 150 to 490 EEK/MWh. (author)

  5. Effects of Transverse Power Distribution on Fuel Temperature

    International Nuclear Information System (INIS)

    Jo, Daeseong; Park, Jonghark; Seo, Chul Gyo; Chae, Heetaek

    2014-01-01

    In the present study, transverse power distributions with 4 and 18 segments are evaluated. Based on the power distribution, the fuel temperatures are evaluated with consideration of lateral heat conduction; that is, the effect of the transverse power distribution on the fuel temperature is investigated. The transverse power distributions are evaluated for varying numbers of fuel segments. The maximum power peaking with 12 segments is higher than that with 4 segments. Based on the calculation, a 6th-order polynomial is generated to express the transverse power distributions. The maximum power peaking factor increases with the number of segments. The averaged power peaking is 2.10, and the maximum power peaking with 18 segments is 2.80. With a uniform power distribution, the maximum fuel temperature is found in the middle of the fuel. As the power near the side ends of the fuel increases, the maximum fuel temperature is found near the side ends. However, the maximum fuel temperature is not found where the transverse power is maximum. This is because the high power locally released at the edge of the fuel is laterally conducted to the cladding. As a result of the present study, it can be concluded that the effect of the high power peaking at the edge of the fuel on the fuel outer wall temperature is not significant

  6. Wind speed power spectrum analysis for Bushland, Texas

    Energy Technology Data Exchange (ETDEWEB)

    Eggleston, E.D. [USDA-Agricultural Research Service, Bushland, TX (United States)

    1996-12-31

    Numerous papers and publications on wind turbulence have referenced the wind speed spectrum presented by Isaac Van der Hoven in his article entitled Power Spectrum of Horizontal Wind Speed Spectrum in the Frequency Range from 0.0007 to 900 Cycles per Hour. Van der Hoven used data measured at different heights between 91 and 125 meters above the ground, and represented the high frequency end of the spectrum with data from the peak hour of hurricane Connie. These facts suggest we should question the use of his power spectrum in the wind industry. During the USDA - Agricultural Research Service's investigation of wind/diesel system power storage, using the appropriate wind speed power spectrum became a significant issue. We developed a power spectrum from 13 years of hourly average data, 1 year of 5-minute average data, and 1-second average data from two particularly gusty days, all collected at a height of 10 meters. While the general shape is similar to the Van der Hoven spectrum, few of his peaks were found in the Bushland spectrum. While higher average wind speeds tend to suggest higher amplitudes in the high frequency end of the spectrum, this is not always true. Also, the high frequency end of the spectrum is not accurately described by simple wind statistics such as standard deviation and turbulence intensity. 2 refs., 5 figs., 1 tab.
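
    A spectrum of this kind can be estimated from sampled wind-speed records with Welch's method; the sketch below uses synthetic data in place of the Bushland measurements and assumes SciPy is available.

    ```python
    import numpy as np
    from scipy.signal import welch

    # Synthetic stand-in for 1-second average wind speed at 10 m height.
    fs = 1.0                                   # samples per second
    t = np.arange(0, 6 * 3600, 1.0 / fs)       # six hours of data
    rng = np.random.default_rng(4)
    wind = 8.0 + 1.5 * np.sin(2 * np.pi * t / 600) + rng.normal(0, 0.8, t.size)

    freq, psd = welch(wind, fs=fs, nperseg=4096)
    peak = freq[np.argmax(psd[1:]) + 1]        # skip the zero-frequency bin
    print(f"dominant fluctuation period ~ {1.0 / peak:.0f} s")
    ```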

  7. Determination of an optimum reactor coolant system average temperature within the licensed operating window

    International Nuclear Information System (INIS)

    Thaulez, F.; Basic, I.; Vrbanic, I.

    2003-01-01

    The Krsko modernization power uprate analyses have been performed in such a way as to cover plant operation in a range of average reactor coolant temperatures (Tavg) of 301.7 deg C to 307.4 deg C, with steam generator tube plugging levels of up to 5%. The upper bound is temporarily restricted to 305.7 deg C as long as Zirc-4 fuel is present in the core. (It is, however, acceptable to operate at 307.4 deg C with a few Zirc-4 assemblies, if certain conditions are met and subject to a corrosion and rod internal pressure evaluation in the frame of the cycle-specific nuclear core design.) The Tavg optimization method takes into account two effects that are opposed to each other: the impact of steam pressure on the electrical power output versus the impact of Tavg on the cost of reactor fuel. The positive economic impact of a Tavg increase through the increase in MWe output is around 6 to 8 times higher than the corresponding negative impact on the fuel cost. From this perspective, it is desirable to have Tavg as high as possible. This statement is not affected by a change in the relationship between steam pressure and Tavg level. However, there are also other considerations intervening in the definition of the optimum. This paper discusses the procedure for selecting the optimal Tavg for the forthcoming cycle in relation to the impacts of a change in Tavg level and/or variations of the steam pressure versus Tavg relationship. (author)

  8. Categorification and higher representation theory

    CERN Document Server

    Beliakova, Anna

    2017-01-01

    The emergent mathematical philosophy of categorification is reshaping our view of modern mathematics by uncovering a hidden layer of structure in mathematics, revealing richer and more robust structures capable of describing more complex phenomena. Categorified representation theory, or higher representation theory, aims to understand a new level of structure present in representation theory. Rather than studying actions of algebras on vector spaces where algebra elements act by linear endomorphisms of the vector space, higher representation theory describes the structure present when algebras act on categories, with algebra elements acting by functors. The new level of structure in higher representation theory arises by studying the natural transformations between functors. This enhanced perspective brings into play a powerful new set of tools that deepens our understanding of traditional representation theory. This volume exhibits some of the current trends in higher representation theory and the diverse te...

  9. Power in Households: Disentangling Bargaining Power

    OpenAIRE

    Mabsout, Ramzi; Staveren, Irene

    2009-01-01

    Introduction: Within the household bargaining literature, bargaining power is generally understood in terms of economic resources, such as income or assets. Empirical analyses of women’s bargaining power in households in developed and developing countries find that, in general, higher female incomes lead to higher bargaining power, which in turn tends to increase women’s relative wellbeing (Quisumbing, 2003). For assets, the empirical literature comes up with similar results, indic...

  10. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...

  11. Experimental demonstration of squeezed-state quantum averaging

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...

  12. A 900 MHz RF energy harvesting system in 40 nm CMOS technology with efficiency peaking at 47% and higher than 30% over a 22dB wide input power range

    NARCIS (Netherlands)

    Wang, J.; Jiang, Y.; Dijkhuis, J.; Dolmans, G.; Gao, H.; Baltus, P.G.M.

    2017-01-01

    A 900 MHz RF energy harvesting system is proposed for a far-field wireless power transfer application. The topology of a single-stage CMOS rectifier loaded with an integrated boost DC-DC converter is implemented in a 40 nm CMOS technology. The co-design of a cross-coupled CMOS rectifier and an

  13. 18 CFR 301.4 - Exchange Period Average System Cost determination.

    Science.gov (United States)

    2010-04-01

    ... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE... Period and extend through four (4) years after the Exchange Period. The load forecast for Contract System... Utility's ASC until the change in service territory takes place. (g) ASC determination for Consumer-owned...

  14. On critical cases in limit theory for stationary increments Lévy driven moving averages

    DEFF Research Database (Denmark)

    Basse-O'Connor, Andreas; Podolskij, Mark

    averages. The limit theory heavily depends on the interplay between the given order of the increments, the considered power, the Blumenthal-Getoor index of the driving pure jump Lévy process L and the behavior of the kernel function g at 0. In this work we will study the critical cases, which were...

  15. Average fast neutron flux in three energy ranges in the Quinta assembly irradiated by two types of beams

    Directory of Open Access Journals (Sweden)

    Strugalska-Gola Elzbieta

    2017-01-01

    Full Text Available This work was performed within the international project “Energy plus Transmutation of Radioactive Wastes” (E&T - RAW) for investigations of energy production and transmutation of radioactive waste from the nuclear power industry. 89Y (Yttrium-89) samples were located in the Quinta assembly in order to measure the average high-energy neutron flux density in three different energy ranges, using deuteron and proton beams from the Dubna accelerators. Our analysis showed that the neutron flux density for the neutron energy range 20.8 - 32.7 MeV is higher than for the neutron energy range 11.5 - 20.8 MeV, both for protons with an energy of 0.66 GeV and for deuterons with an energy of 2 GeV, while for deuteron beams of 4 and 6 GeV we did not observe this.

  16. Averaging hydraulic head, pressure head, and gravitational head in subsurface hydrology, and implications for averaged fluxes, and hydraulic conductivity

    Directory of Open Access Journals (Sweden)

    G. H. de Rooij

    2009-07-01

    Full Text Available Current theories for water flow in porous media are valid for scales much smaller than those at which problems of public interest manifest themselves. This provides a drive for upscaled flow equations with their associated upscaled parameters. Upscaling is often achieved through volume averaging, but the solution to the resulting closure problem imposes severe restrictions on the flow conditions that limit the practical applicability. Here, the derivation of a closed expression for the effective hydraulic conductivity is forfeited to circumvent the closure problem. Thus, more limited but practical results can be derived. At the Representative Elementary Volume scale and larger scales, the gravitational potential and fluid pressure are treated as additive potentials. The necessary requirement that the superposition be maintained across scales is combined with conservation of energy during volume integration to establish consistent upscaling equations for the various heads. The power of these upscaling equations is demonstrated by the derivation of upscaled water content-matric head relationships and the resolution of an apparent paradox reported in the literature that is shown to have arisen from a violation of the superposition principle. Applying the upscaling procedure to Darcy's Law leads to the general definition of an upscaled hydraulic conductivity. By examining this definition in detail for porous media with different degrees of heterogeneity, a series of criteria is derived that must be satisfied for Darcy's Law to remain valid at a larger scale.

  17. Cutting-Edge High-Power Ultrafast Thin Disk Oscillators

    Directory of Open Access Journals (Sweden)

    Thomas Südmeyer

    2013-04-01

    Full Text Available A growing number of applications in science and industry are currently pushing the development of ultrafast laser technologies that enable high average powers. SESAM modelocked thin disk lasers (TDLs) currently achieve higher pulse energies and average powers than any other ultrafast oscillator technology, making them excellent candidates toward this goal. Recently, 275 W of average power with a pulse duration of 583 fs was demonstrated, which represents the highest average power so far demonstrated from an ultrafast oscillator. In terms of pulse energy, TDLs reach more than 40 μJ pulses directly from the oscillator. In addition, another major milestone was recently achieved with the demonstration of a TDL with nearly bandwidth-limited 96-fs-long pulses. The progress achieved in terms of pulse duration of such sources enabled the first measurement of the carrier-envelope offset frequency of a modelocked TDL, which is the first key step towards full stabilization of such a source. We present the key elements that enabled these latest results, as well as an outlook towards the next scaling steps in average power, pulse energy and pulse duration of such sources. These cutting-edge sources will enable exciting new applications and open the door to further extending the current performance milestones.
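
    The headline numbers are linked by simple relations: pulse energy = average power / repetition rate, and peak power ≈ 0.88 × energy / duration for sech²-shaped pulses. The repetition rate in the sketch below is an assumed illustrative value, not taken from the cited result.

    ```python
    # Back-of-the-envelope relations for a modelocked oscillator.
    avg_power = 275.0          # W   (the 275 W average-power result cited above)
    rep_rate = 16.3e6          # Hz  (assumed repetition rate, for illustration only)
    pulse_duration = 583e-15   # s

    pulse_energy = avg_power / rep_rate
    peak_power = 0.88 * pulse_energy / pulse_duration   # sech^2 shape factor ~0.88
    print(f"pulse energy ~ {pulse_energy * 1e6:.1f} uJ, peak power ~ {peak_power / 1e6:.1f} MW")
    ```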

  18. The flattening of the average potential in models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.

    1993-01-01

    The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)

  19. 20 CFR 404.220 - Average-monthly-wage method.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...

  20. A time-averaged cosmic ray propagation theory

    International Nuclear Information System (INIS)

    Klimas, A.J.

    1975-01-01

    An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.) [de

  1. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  2. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    Ampadu Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT

  3. Averaging in SU(2) open quantum random walk

    Science.gov (United States)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  4. New developments in RF power sources

    International Nuclear Information System (INIS)

    Miller, R.H.

    1994-06-01

    The most challenging rf source requirements for high-energy accelerators presently being studied or designed come from the various electron-positron linear collider studies. All of these studies except TESLA (the superconducting entry in the field) have specified rf sources with much higher peak powers than any existing tubes at comparable high frequencies. While circular machines do not, in general, require high peak power, the very high luminosity electron-positron rings presently being designed as B factories require prodigious total average rf power. In this age of energy conservation, this puts a high priority on high efficiency for the rf sources. Both modulating anodes and depressed collectors are being investigated in the quest for high efficiency at varying output powers

  5. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    Science.gov (United States)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.
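
    The trade-off described above can be reproduced in a stripped-down setting (a schematic stand-in for the MRF analysis, not the authors' model): averaging M noisy copies of an image lowers the pixel noise standard deviation by roughly 1/sqrt(M), but it also collapses M data sets into a single one, which is what hurts hyper-parameter estimation.

      import numpy as np

      rng = np.random.default_rng(0)
      truth = rng.normal(size=(64, 64))          # stand-in "clean image"
      sigma, M = 1.0, 8                          # noise level and number of noisy copies

      noisy = truth + sigma * rng.normal(size=(M, 64, 64))
      averaged = noisy.mean(axis=0)              # averaging preprocessing

      print("per-image noise std:", np.std(noisy[0] - truth))
      print("averaged noise std :", np.std(averaged - truth))   # roughly sigma / sqrt(M)
      print("data sets left for inference:", M, "->", 1)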

  6. Wind power

    International Nuclear Information System (INIS)

    Gipe, P.

    2007-01-01

    This book is a translation of the edition published in the USA under the title of ''wind power: renewable energy for home, farm and business''. In the wake of mass blackouts and energy crises, wind power remains a largely untapped resource of renewable energy. It is a booming worldwide industry whose technology, under the collective wing of aficionados like author Paul Gipe, is coming of age. Wind Power guides us through the emergent, sometimes daunting discourse on wind technology, giving frank explanations of how to use wind technology wisely and sound advice on how to avoid common mistakes. Since the mid-1970's, Paul Gipe has played a part in nearly every aspect of wind energy development from installing small turbines to promoting wind energy worldwide. As an American proponent of renewable energy, Gipe has earned the acclaim and respect of European energy specialists for years, but his arguments have often fallen on deaf ears at home. Today, the topic of wind power is cropping up everywhere from the beaches of Cape Cod to the Oregon-Washington border, and one wind turbine is capable of producing enough electricity per year to run 200 average American households. Now, Paul Gipe is back to shed light on this increasingly important energy source with a revised edition of Wind Power. Over the course of his career, Paul Gipe has been a proponent, participant, observer, and critic of the wind industry. His experience with wind has given rise to two previous books on the subject, Wind Energy Basics and Wind Power for Home and Business, which have sold over 50,000 copies. Wind Power for Home and Business has become a staple for both homeowners and professionals interested in the subject, and now, with energy prices soaring, interest in wind power is hitting an all-time high. With chapters on output and economics, Wind Power discloses how much you can expect from each method of wind technology, both in terms of energy and financial savings. The book updated models

  7. Less Physician Practice Competition Is Associated With Higher Prices Paid For Common Procedures.

    Science.gov (United States)

    Austin, Daniel R; Baker, Laurence C

    2015-10-01

    Concentration among physician groups has been steadily increasing, which may affect prices for physician services. We assessed the relationship in 2010 between physician competition and prices paid by private preferred provider organizations for fifteen common, high-cost procedures to understand whether higher concentration of physician practices and accompanying increased market power were associated with higher prices for services. Using county-level measures of the concentration of physician practices and county average prices, and statistically controlling for a range of other regional characteristics, we found that physician practice concentration and prices were significantly associated for twelve of the fifteen procedures we studied. For these procedures, counties with the highest average physician concentrations had prices 8-26 percent higher than prices in the lowest counties. We concluded that physician competition is frequently associated with prices. Policies that would influence physician practice organization should take this into consideration. Project HOPE—The People-to-People Health Foundation, Inc.

  8. Sedimentological regimes for turbidity currents: Depth-averaged theory

    Science.gov (United States)

    Halsey, Thomas C.; Kumar, Amit; Perillo, Mauricio M.

    2017-07-01

    Turbidity currents are one of the most significant means by which sediment is moved from the continents into the deep ocean; their properties are interesting both as elements of the global sediment cycle and due to their role in contributing to the formation of deep water oil and gas reservoirs. One of the simplest models of the dynamics of turbidity current flow was introduced three decades ago, and is based on depth-averaging of the fluid mechanical equations governing the turbulent gravity-driven flow of relatively dilute turbidity currents. We examine the sedimentological regimes of a simplified version of this model, focusing on the role of the Richardson number Ri [dimensionless inertia] and Rouse number Ro [dimensionless sedimentation velocity] in determining whether a current is net depositional or net erosional. We find that for large Rouse numbers, the currents are strongly net depositional due to the disappearance of local equilibria between erosion and deposition. At lower Rouse numbers, the Richardson number also plays a role in determining the degree of erosion versus deposition. The currents become more erosive at lower values of the product Ro × Ri, due to the effect of clear water entrainment. At higher values of this product, the turbulence becomes insufficient to maintain the sediment in suspension, as first pointed out by Knapp and Bagnold. We speculate on the potential for two-layer solutions in this insufficiently turbulent regime, which would comprise substantial bedload flow with an overlying turbidity current.
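
    As a rough numerical companion to the regime discussion, the sketch below evaluates a bulk Richardson number Ri = g'h/U^2 and a Rouse number Ro = w_s/(kappa u_*) for made-up current parameters. The depositional/erosional label is only the qualitative tendency described above with an illustrative cutoff, not the model's actual regime boundary.

      def richardson(g_prime, h, U):
          """Bulk Richardson number of a depth-averaged gravity current."""
          return g_prime * h / U**2

      def rouse(w_s, u_star, kappa=0.41):
          """Rouse number: settling velocity relative to the shear-velocity scale."""
          return w_s / (kappa * u_star)

      # Hypothetical dilute turbidity current
      Ri = richardson(g_prime=0.02, h=5.0, U=1.5)   # reduced gravity (m/s^2), depth (m), speed (m/s)
      Ro = rouse(w_s=0.01, u_star=0.08)             # settling and shear velocities (m/s)

      tendency = "more erosive" if Ro * Ri < 0.1 else "more depositional"  # illustrative cutoff only
      print(f"Ri = {Ri:.3f}, Ro = {Ro:.3f}, Ro*Ri = {Ro*Ri:.4f} -> {tendency}")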

  9. A 35-year comparison of children labelled as gifted, unlabelled as gifted and average-ability

    Directory of Open Access Journals (Sweden)

    Joan Freeman

    2014-09-01

    Full Text Available http://dx.doi.org/10.5902/1984686X14273 Why are some children seen as gifted while others of identical ability are not? To find out why and what the consequences might be, in 1974 I began in England with 70 children labelled as gifted. Each one was matched for age, sex and socio-economic level with two comparison children in the same school class. The first comparison child had an identical gift, and the second was taken at random. Investigation was by a battery of tests and deep questioning of pupils, teachers and parents in their schools and homes, which went on for 35 years. A major significant difference was that those labelled gifted had significantly more emotional problems than either the unlabelled but identically gifted or the random controls. The vital aspects of success for the entire sample, whether gifted or not, have been hard work, emotional support and a positive personal outlook. But in general, the higher the individual's intelligence, the better their chances in life.

  10. Thermoelectric power generator for variable thermal power source

    Science.gov (United States)

    Bell, Lon E; Crane, Douglas Todd

    2015-04-14

    Traditional power generation systems using thermoelectric power generators are designed to operate most efficiently for a single operating condition. The present invention provides a power generation system in which the characteristics of the thermoelectrics, the flow of the thermal power, and the operational characteristics of the power generator are monitored and controlled such that higher operating efficiencies and/or higher output powers can be maintained with variable thermal power input. Such a system is particularly beneficial in variable thermal power source systems, such as recovering power from the waste heat generated in the exhaust of combustion engines.

  11. Four-quadrant flyback converter for direct audio power amplification

    Energy Technology Data Exchange (ETDEWEB)

    Ljusev, P.; Andersen, Michael A.E.

    2005-07-01

    This paper presents a bidirectional, four-quadrant flyback converter for use in direct audio power amplification. When compared to the standard Class-D switching-mode audio power amplifier with a separate power supply, the proposed four-quadrant flyback converter provides a simple and compact solution with high efficiency, a higher level of integration, lower component count, less board space and eventually lower cost. Both peak and average current-mode control for use with 4Q flyback power converters are described and compared. Integrated magnetics is presented which simplifies the construction of the auxiliary power supplies for control biasing and isolated gate drives. The feasibility of the approach is proven on an audio power amplifier prototype for subwoofer applications. (au)

  12. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
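
    The difference between the two standard hourly products is easy to reproduce with synthetic 1-min data (an arbitrary test signal, not observatory data): hourly "spot" samples keep the full amplitude of sub-hourly variation but alias it, whereas 1-h boxcar averages attenuate it.

      import numpy as np

      minutes = np.arange(24 * 60)                        # one synthetic day of 1-min values
      rng = np.random.default_rng(1)
      field = (30 * np.sin(2 * np.pi * minutes / (24 * 60))
               + 5 * np.sin(2 * np.pi * minutes / 40)
               + rng.normal(0, 1, minutes.size))          # toy geomagnetic variation, nT

      spot = field[::60]                                  # instantaneous sample on the hour
      boxcar = field.reshape(24, 60).mean(axis=1)         # simple 1-h average

      print("std of spot samples  :", round(float(spot.std()), 2))
      print("std of boxcar values :", round(float(boxcar.std()), 2))  # sub-hourly wiggle averaged out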

  13. Average glandular dose in digital mammography and breast tomosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Olgar, T. [Ankara Univ. (Turkey). Dept. of Engineering Physics; Universitaetsklinikum Leipzig AoeR (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie; Kahn, T.; Gosch, D. [Universitaetsklinikum Leipzig AoeR (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie

    2012-10-15

    Purpose: To determine the average glandular dose (AGD) in digital full-field mammography (2 D imaging mode) and in breast tomosynthesis (3 D imaging mode). Materials and Methods: Using the method described by Boone, the AGD was calculated from the exposure parameters of 2247 conventional 2 D mammograms and 984 mammograms in 3 D imaging mode of 641 patients examined with the digital mammographic system Hologic Selenia Dimensions. The breast glandular tissue content was estimated by the Hologic R2 Quantra automated volumetric breast density measurement tool for each patient from right craniocaudal (RCC) and left craniocaudal (LCC) images in 2 D imaging mode. Results: The mean compressed breast thickness (CBT) was 52.7 mm for craniocaudal (CC) and 56.0 mm for mediolateral oblique (MLO) views. The mean percentage of breast glandular tissue content was 18.0 % and 17.4 % for RCC and LCC projections, respectively. The mean AGD values in 2 D imaging mode per exposure for the standard breast were 1.57 mGy and 1.66 mGy, while the mean AGD values after correction for real breast composition were 1.82 mGy and 1.94 mGy for CC and MLO views, respectively. The mean AGD values in 3 D imaging mode per exposure for the standard breast were 2.19 mGy and 2.29 mGy, while the mean AGD values after correction for the real breast composition were 2.53 mGy and 2.63 mGy for CC and MLO views, respectively. No significant relationship was found between the AGD and CBT in 2 D imaging mode, whereas a good correlation (coefficient of 0.98) was found in 3 D imaging mode. Conclusion: In this study the mean calculated AGD per exposure in 3 D imaging mode was on average 34 % higher than for 2 D imaging mode for patients examined with the same CBT.

  14. Average glandular dose in digital mammography and breast tomosynthesis

    International Nuclear Information System (INIS)

    Olgar, T.; Universitaetsklinikum Leipzig AoeR; Kahn, T.; Gosch, D.

    2012-01-01

    Purpose: To determine the average glandular dose (AGD) in digital full-field mammography (2 D imaging mode) and in breast tomosynthesis (3 D imaging mode). Materials and Methods: Using the method described by Boone, the AGD was calculated from the exposure parameters of 2247 conventional 2 D mammograms and 984 mammograms in 3 D imaging mode of 641 patients examined with the digital mammographic system Hologic Selenia Dimensions. The breast glandular tissue content was estimated by the Hologic R2 Quantra automated volumetric breast density measurement tool for each patient from right craniocaudal (RCC) and left craniocaudal (LCC) images in 2 D imaging mode. Results: The mean compressed breast thickness (CBT) was 52.7 mm for craniocaudal (CC) and 56.0 mm for mediolateral oblique (MLO) views. The mean percentage of breast glandular tissue content was 18.0 % and 17.4 % for RCC and LCC projections, respectively. The mean AGD values in 2 D imaging mode per exposure for the standard breast were 1.57 mGy and 1.66 mGy, while the mean AGD values after correction for real breast composition were 1.82 mGy and 1.94 mGy for CC and MLO views, respectively. The mean AGD values in 3 D imaging mode per exposure for the standard breast were 2.19 mGy and 2.29 mGy, while the mean AGD values after correction for the real breast composition were 2.53 mGy and 2.63 mGy for CC and MLO views, respectively. No significant relationship was found between the AGD and CBT in 2 D imaging mode, whereas a good correlation (coefficient of 0.98) was found in 3 D imaging mode. Conclusion: In this study the mean calculated AGD per exposure in 3 D imaging mode was on average 34 % higher than for 2 D imaging mode for patients examined with the same CBT.

  15. Are average and symmetric faces attractive to infants? Discrimination and looking preferences.

    Science.gov (United States)

    Rhodes, Gillian; Geddes, Keren; Jeffery, Linda; Dziurawiec, Suzanne; Clark, Alison

    2002-01-01

    Young infants prefer to look at faces that adults find attractive, suggesting a biological basis for some face preferences. However, the basis for infant preferences is not known. Adults find average and symmetric faces attractive. We examined whether 5-8-month-old infants discriminate between different levels of averageness and symmetry in faces, and whether they prefer to look at faces with higher levels of these traits. Each infant saw 24 pairs of female faces. Each pair consisted of two versions of the same face differing either in averageness (12 pairs) or symmetry (12 pairs). Data from the mothers confirmed that adults preferred the more average and more symmetric versions in each pair. The infants were sensitive to differences in both averageness and symmetry, but showed no looking preference for the more average or more symmetric versions. On the contrary, longest looks were significantly longer for the less average versions, and both longest looks and first looks were marginally longer for the less symmetric versions. Mean looking times were also longer for the less average and less symmetric versions, but those differences were not significant. We suggest that the infant looking behaviour may reflect a novelty preference rather than an aesthetic preference.

  16. Data base of system-average dose rates at nuclear power plants: Final report

    International Nuclear Information System (INIS)

    Beal, S.K.; Britz, W.L.; Cohen, S.C.; Goldin, A.S.; Goldin, D.J.

    1987-10-01

    In this work, a data base is derived of area dose rates for systems and components listed in the Energy Economic Data Base (EEDB). The data base is derived from area surveys obtained during outages at four boiling water reactors (BWRs) at three stations and eight pressurized water reactors (PWRs) at four stations. Separate tables are given for BWRs and PWRs. These tables may be combined with estimates of labor hours to provide order-of-magnitude estimates of exposure for purposes of regulatory analysis. They are only valid for work involving entire systems or components. The estimates of labor hours used in conjunction with the dose rates to estimate exposure must be adjusted to account for in-field time. Finally, the dose rates given in the data base do not reflect ALARA considerations. 11 refs., 2 figs., 3 tabs

  17. Field control in a standing wave structure at high average beam power

    International Nuclear Information System (INIS)

    McKeown, J.; Fraser, J.S.; McMichael, G.E.

    1976-01-01

    A 100% duty factor electron beam has been accelerated through a graded-β side-coupled standing wave structure operating in π/2 mode. Three non-interacting control loops are necessary to provide the accelerating field amplitude and phase and to control structure resonance. The principal disturbances have been identified and measured over the beam current range of 0 to 20 mA. Design details are presented of control loops which regulate the accelerating field amplitude to ±0.3% and its phase to ±0.5° for 50% beam loading. (author)

  18. Radiation chemical research around a 15 MeV high average power linac

    International Nuclear Information System (INIS)

    Lahorte, P.; Mondelaers, W.; Masschaele, B.; Cauwels, P.

    1998-01-01

    Complete text of publication follows. The Laboratory of Subatomic and Radiation Physics of the University of Gent is equipped with a 15 MeV 20 kW linear electron accelerator (linac) facility. This accelerator was initially designed for fundamental nuclear physics research but was modified to generate beams for new experimental interdisciplinary projects. In its present configuration the accelerator is used as a multipurpose apparatus for research in the fields of polymer chemistry (crosslinking), biomaterials (hydrogels, drug delivery systems, implants), medicine (extracorporeal bone irradiation, human grafts), biomedical materials, food technology (package materials, food preservation), dosimetry (EPR of alanine systems, geldosimetry), solid-state physics, agriculture and nuclear and radiation physics. In this paper an overview will be presented of both the various research projects around our linac facility involving radiation chemistry and the specialised technologies facilitating this research

  19. High Average Power Raman Conversion in Diamond: ’Eyesafe’ Output and Fiber Laser Conversion

    Science.gov (United States)

    2015-06-19

    O. Kitzler, A. McKay, D.J. Spence and R.P. Mildren, "Modelling and Optimization of Continuous-Wave External Cavity Raman Lasers", Laser & Photon. Reviews, vol. 8, L37-L41 (2014).

  20. High average power, highly brilliant laser-produced plasma source for soft X-ray spectroscopy.

    Science.gov (United States)

    Mantouvalou, Ioanna; Witte, Katharina; Grötzsch, Daniel; Neitzel, Michael; Günther, Sabrina; Baumann, Jonas; Jung, Robert; Stiel, Holger; Kanngiesser, Birgit; Sandner, Wolfgang

    2015-03-01

    In this work, a novel laser-produced plasma source is presented which delivers pulsed broadband soft X-radiation in the range between 100 and 1200 eV. The source was designed in view of long operating hours, high stability, and cost effectiveness. It relies on a rotating and translating metal target and achieves high stability through an on-line monitoring device using a four quadrant extreme ultraviolet diode in a pinhole camera arrangement. The source can be operated with three different laser pulse durations and various target materials and is equipped with two beamlines for simultaneous experiments. Characterization measurements are presented with special emphasis on the source position and emission stability of the source. As a first application, a near edge X-ray absorption fine structure measurement on a thin polyimide foil shows the potential of the source for soft X-ray spectroscopy.

  1. Kilowatt average power 100 J-level diode pumped solid state laser

    Czech Academy of Sciences Publication Activity Database

    Mason, P.; Divoký, Martin; Ertel, K.; Pilař, Jan; Butcher, T.; Hanuš, Martin; Banerjee, S.; Phillips, J.; Smith, J.; De Vido, M.; Lucianetti, Antonio; Hernandez-Gomez, C.; Edwards, C.; Mocek, Tomáš; Collier, J.

    2017-01-01

    Vol. 4, No. 4 (2017), pp. 438-439 ISSN 2334-2536 R&D Projects: GA MŠk LO1602; GA MŠk LM2015086 Institutional support: RVO:68378271 Keywords: diode-pumped * solid state * laser Subject RIV: BH - Optics, Masers, Lasers OECD field: Optics (including laser optics and quantum optics) Impact factor: 7.727, year: 2016

  2. Characterization of a klystrode as a RF source for high-average-power accelerators

    International Nuclear Information System (INIS)

    Rees, D.; Keffeler, D.; Roybal, W.; Tallerico, P.J.

    1995-01-01

    The klystrode is a relatively new type of RF source that has demonstrated dc-to-RF conversion efficiencies in excess of 70% and a control characteristic uniquely different from those for klystron amplifiers. The different control characteristic allows the klystrode to achieve this high conversion efficiency while still providing a control margin for regulation of the accelerator cavity fields. The authors present test data from a 267-MHz, 250-kW, continuous-wave (CW) klystrode amplifier and contrast this data with conventional klystron performance, emphasizing the strengths and weaknesses of the klystrode technology for accelerator applications. They present test results describing that limitation for the 250-kW, CW klystrode and extrapolate the data to other frequencies. A summary of the operating regime explains the clear advantages of the klystrode technology over the klystron technology

  3. Pulse repetition frequency effects in a high average power x-ray preionized excimer laser

    International Nuclear Information System (INIS)

    Fontaine, B.; Forestier, B.; Delaporte, P.; Canarelli, P.

    1989-01-01

    An experimental study of wave damping in a high repetition rate excimer laser is undertaken. Excitation of the laser active medium in a subsonic loop is achieved by means of a classical discharge through transfer capacitors. The discharge stability is controlled by a wire ion plasma (w.i.p.) X-ray gun. The strong acoustic waves induced by the excitation of the active medium may lead to a decrease of the energy per pulse at high PRF. First results on the influence of damping of the induced density perturbations between two successive pulses are presented.

  4. on the performance of Autoregressive Moving Average Polynomial

    African Journals Online (AJOL)

    Timothy Ademakinwa

    Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag ... Moving Average Polynomial Distributed Lag (ARMAPDL) model. ..... Global Journal of Mathematics and Statistics. Vol. 1. ... Business and Economic Research Center.

  5. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
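
    For orientation, the quoted optimum can be compared with the information-theoretic lower bound: any comparison-based decision tree sorting 8 distinct elements has average depth at least log2(8!). The snippet below is plain arithmetic, not the authors' dynamic-programming computation; it only shows how close 620160/8! comes to that bound.

      import math

      n_perms = math.factorial(8)          # 40320 leaves, one per permutation
      optimum = 620160 / n_perms           # minimum average depth reported above
      lower_bound = math.log2(n_perms)     # entropy lower bound

      print(f"minimum average depth: {optimum:.5f}")    # 15.38095...
      print(f"log2(8!) lower bound : {lower_bound:.5f}")  # 15.29920...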

  6. Comparison of Interpolation Methods as Applied to Time Synchronous Averaging

    National Research Council Canada - National Science Library

    Decker, Harry

    1999-01-01

    Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...

  7. Light-cone averaging in cosmology: formalism and applications

    International Nuclear Information System (INIS)

    Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.

    2011-01-01

    We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted ''geodesic light-cone'' coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called ''redshift drift'' in a generic inhomogeneous Universe

  8. Globalisation and Higher Education

    NARCIS (Netherlands)

    Marginson, Simon; van der Wende, Marijk

    2007-01-01

    Economic and cultural globalisation has ushered in a new era in higher education. Higher education was always more internationally open than most sectors because of its immersion in knowledge, which never showed much respect for juridical boundaries. In global knowledge economies, higher education

  9. Are powerful females powerful enough? Acceleration in gravid green iguanas (Iguana iguana).

    Science.gov (United States)

    Scales, Jeffrey; Butler, Marguerite

    2007-08-01

    One demand placed exclusively on the musculoskeletal system of females is maintaining locomotor performance with an increasing load over the reproductive cycle. Here, we examine whether gravid (i.e., "pregnant") iguanas can increase their force and power production to support, stabilize, and accelerate the additional mass of a clutch of eggs. At any acceleration, gravid iguanas produced very high mechanical power (average total power = 673 W/kg; total peak power = 1175 W/kg). While the increase in total power was partly a result of greater propulsive power (average propulsive power = 25% higher, peak propulsive power = 38% higher), increased vertical power (roughly 200% increase) was the main contributor. Gravid iguanas were also able to increase peak forces (propulsive = 23%, mediolateral = 44%, vertical = 42%) and step duration (44%), resulting in greater impulses (i.e., the sum of force produced during a step) to accelerate, balance, and support their increased mass. The increase in step duration and smaller increase in peak propulsive force suggests that gravid iguanas may be force-limited in the direction of motion. We discuss how biomechanical constraints due to females' reproductive role may influence the evolution of the female musculoskeletal systems and contribute to the evolution and maintenance of ecological dimorphism in lizards.

  10. Extension of the time-average model to Candu refueling schemes involving reshuffling

    International Nuclear Information System (INIS)

    Rouben, Benjamin; Nichita, Eleodor

    2008-01-01

    Candu reactors consist of a horizontal non-pressurized heavy-water-filled vessel penetrated axially by fuel channels, each containing twelve 50-cm-long fuel bundles cooled by pressurized heavy water. Candu reactors are refueled on-line and, as a consequence, the core flux and power distributions change continuously. For design purposes, a 'time-average' model was developed in the 1970's to calculate the average over time of the flux and power distribution and to study the effects of different refueling schemes. The original time-average model only allows treatment of simple push-through refueling schemes whereby fresh fuel is inserted at one end of the channel and irradiated fuel is removed from the other end. With the advent of advanced fuel cycles and new Candu designs, novel refueling schemes may be considered, such as reshuffling discharged fuel from some channels into other channels, to achieve better overall discharge burnup. Such reshuffling schemes cannot be handled by the original time-average model. This paper presents an extension of the time-average model to allow for the treatment of refueling schemes with reshuffling. Equations for the extended model are presented, together with sample results for a simple demonstration case. (authors)

  11. Applications of resonance-averaged gamma-ray spectroscopy with tailored beams

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1982-01-01

    The use of techniques based on the direct experimental averaging over compound nuclear capturing states has proved valuable for investigations of nuclear structure. The various methods that have been employed are described, with particular emphasis on the transmission filter, or tailored beam technique. The mathematical limitations on averaging imposed by the filter band pass are discussed. It can readily be demonstrated that a combination of filters at different energies can form a powerful method for spin and parity predictions. Several recent examples from the HFBR program are presented

  12. An averaging battery model for a lead-acid battery operating in an electric car

    Science.gov (United States)

    Bozek, J. M.

    1979-01-01

    A battery model is developed based on time averaging the current or power, and is shown to be an effective means of predicting the performance of a lead acid battery. The effectiveness of this battery model was tested on battery discharge profiles expected during the operation of an electric vehicle following the various SAE J227a driving schedules. The averaging model predicts the performance of a battery that is periodically charged (regenerated) if the regeneration energy is assumed to be converted to retrievable electrochemical energy on a one-to-one basis.

  13. Applications of resonance-averaged gamma-ray spectroscopy with tailored beams

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1982-01-01

    The use of techniques based on the direct experimental averaging over compound nuclear capturing states has proved valuable for investigations of nuclear structure. The various methods that have been employed are described, with particular emphasis on the transmission filter, or tailored beam technique. The mathematical limitations on averaging imposed by the filter band pass are discussed. It can readily be demonstrated that a combination of filters at different energies can form a powerful method for spin and parity predictions. Several recent examples from the HFBR program are presented. (author)

  14. Delineation of facial archetypes by 3d averaging.

    Science.gov (United States)

    Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G

    2004-10-01

    The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A 3D surface scanner Fiore and its software were used to acquire the 3D scans of the faces while 3D Rugle3 and locally-developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups; European and Japanese and from children with three previous genetic disorders; Williams syndrome, achondroplasia and Sotos syndrome as well as the normal control group. The method included averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face averaging techniques there was not any warping or filling in the spaces by interpolation; however, this facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have a great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
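
    The core operation described above, averaging the corresponding depth (z) coordinates of registered scans, reduces to an element-wise mean once every face has been resampled onto a common (x, y) grid. A minimal sketch, assuming such pre-registered depth maps are already available as arrays (the random data here are stand-ins, not scanner output):

      import numpy as np

      def average_face(depth_maps):
          """Element-wise mean of registered depth maps (one 2-D z-array per face)."""
          return np.stack(depth_maps, axis=0).mean(axis=0)

      # Hypothetical data: 14 registered 128x128 depth maps
      rng = np.random.default_rng(2)
      faces = [rng.normal(loc=50.0, scale=5.0, size=(128, 128)) for _ in range(14)]

      archetype = average_face(faces)
      print(archetype.shape, round(float(archetype.mean()), 2))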

  15. The Average Temporal and Spectral Evolution of Gamma-Ray Bursts

    International Nuclear Information System (INIS)

    Fenimore, E.E.

    1999-01-01

    We have averaged bright BATSE bursts to uncover the average overall temporal and spectral evolution of gamma-ray bursts (GRBs). We align the temporal structure of each burst by setting its duration to a standard duration, which we call T⟨Dur⟩. The observed average "aligned T⟨Dur⟩" profile for 32 bright bursts with intermediate durations (16 - 40 s) has a sharp rise (within the first 20% of T⟨Dur⟩) and then a linear decay. Exponentials and power laws do not fit this decay. In particular, the power law seen in the X-ray afterglow (∝ T^(-1.4)) is not observed during the bursts, implying that the X-ray afterglow is not just an extension of the average temporal evolution seen during the gamma-ray phase. The average burst spectrum has a low-energy slope of -1.03, a high-energy slope of -3.31, and a peak in the νF_ν distribution at 390 keV. We determine the average spectral evolution. Remarkably, it is also a linear function, with the peak of the νF_ν distribution given by ∼680 - 600(T/T⟨Dur⟩) keV. Since both the temporal profile and the peak energy are linear functions, on average, the peak energy is linearly proportional to the intensity. This behavior is inconsistent with the external shock model. The observed temporal and spectral evolution is also inconsistent with that expected from variations in just a Lorentz factor. Previously, trends have been reported for GRB evolution, but our results are quantitative relationships that models should attempt to explain. © 1999 The American Astronomical Society

  16. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
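
    The note's central idea can be checked directly: an ordinary least-squares fit of y on an intercept alone returns the arithmetic mean, fitting log(y) returns the log of the geometric mean, and fitting 1/y returns the reciprocal of the harmonic mean. A sketch of that check (plain NumPy least squares; the variable names are my own):

      import numpy as np

      y = np.array([2.0, 4.0, 8.0, 16.0])
      X = np.ones((y.size, 1))                 # intercept-only design matrix

      def ols_intercept(target):
          beta, *_ = np.linalg.lstsq(X, target, rcond=None)
          return beta[0]

      arithmetic = ols_intercept(y)
      geometric = np.exp(ols_intercept(np.log(y)))
      harmonic = 1.0 / ols_intercept(1.0 / y)

      print(arithmetic, geometric, harmonic)   # 7.5, ~5.657, ~4.267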

  17. Average stress in a Stokes suspension of disks

    NARCIS (Netherlands)

    Prosperetti, Andrea

    2004-01-01

    The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is

  18. 47 CFR 1.959 - Computation of average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Computation of average terrain elevation. 1.959 Section 1.959 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.959 Computation of average terrain elevation. Except a...

  19. 47 CFR 80.759 - Average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.759 Average terrain elevation. (a)(1) Draw radials...

  20. The average covering tree value for directed graph games

    NARCIS (Netherlands)

    Khmelnitskaya, Anna Borisovna; Selcuk, Özer; Talman, Dolf

    We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all covering

  1. The Average Covering Tree Value for Directed Graph Games

    NARCIS (Netherlands)

    Khmelnitskaya, A.; Selcuk, O.; Talman, A.J.J.

    2012-01-01

    Abstract: We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all

  2. Analytic computation of average energy of neutrons inducing fission

    International Nuclear Information System (INIS)

    Clark, Alexander Rich

    2016-01-01

    The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.

  3. An alternative scheme of the Bogolyubov's average method

    International Nuclear Information System (INIS)

    Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.

    1990-01-01

    In this paper the average energy and the magnetic moment conservation laws in the Drift Theory of charged particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws and afterwards the average is performed. This scheme is more economical, in terms of time and algebraic calculation, than the usual procedure of Bogolyubov's method. (Author)

  4. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.; Chikalov, Igor; Moshkov, Mikhail

    2015-01-01

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees

  5. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any

  6. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  7. Anomalous behavior of q-averages in nonextensive statistical mechanics

    International Nuclear Information System (INIS)

    Abe, Sumiyoshi

    2009-01-01

    A generalized definition of average, termed the q-average, is widely employed in the field of nonextensive statistical mechanics. Recently, however, it has been pointed out that such an average value may behave unphysically under specific deformations of probability distributions. Here, the following three issues are discussed and clarified. Firstly, the deformations considered are physical and may be realized experimentally. Secondly, in view of the thermostatistics, the q-average is unstable in both finite and infinite discrete systems. Thirdly, a naive generalization of the discussion to continuous systems misses a point, and a norm better than the L1-norm should be employed for measuring the distance between two probability distributions. Consequently, stability of the q-average is shown not to be established in all of the cases.
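
    For context, the q-average referred to above is commonly taken as the escort-distribution expectation ⟨A⟩_q = Σ_i p_i^q A_i / Σ_j p_j^q, which reduces to the ordinary mean at q = 1. The sketch below is a generic implementation of that definition for a toy distribution, not the paper's specific deformation example.

      import numpy as np

      def q_average(p, A, q):
          """Escort (q-)expectation value of observable A under distribution p."""
          w = p**q
          return np.sum(w * A) / np.sum(w)

      p = np.array([0.6, 0.3, 0.1])      # toy probability distribution
      A = np.array([1.0, 2.0, 3.0])      # toy observable

      for q in (0.5, 1.0, 2.0):
          print(q, q_average(p, A, q))   # q = 1 recovers the ordinary mean 1.5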

  8. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are k_n-dependent with k_n growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure...

  9. Forecasting of Average Monthly River Flows in Colombia

    Science.gov (United States)

    Mesa, O. J.; Poveda, G.

    2006-05-01

    The last two decades have witnessed a marked increase in our knowledge of the causes of interannual hydroclimatic variability and our ability to make predictions. Colombia, located near the seat of the ENSO phenomenon, has been shown to experience negative (positive) anomalies in precipitation in concert with El Niño (La Niña). In general, besides the Pacific Ocean, Colombia has climatic influences from the Atlantic Ocean and the Caribbean Sea through the tropical forest of the Amazon basin and the savannas of the Orinoco River, on top of the orographic and hydro-climatic effects introduced by the Andes. As in various other countries of the region, hydro-electric power contributes a large proportion (75 %) of the total electricity generation in Colombia. Also, most agriculture is rain-fed, and domestic water supply relies mainly on surface waters from creeks and rivers. Besides, various vector-borne tropical diseases intensify in response to rain and temperature changes. Therefore, there is a direct connection between climatic fluctuations and national and regional economies. This talk specifically presents different forecasts of average monthly stream flows for the inflow into the largest reservoir used for hydropower generation in Colombia, and illustrates the potential economic savings of such forecasts. For planning of the reservoir operation, the most appropriate time scale for this application is the annual to interannual. Fortunately, this corresponds to the scale at which our understanding of hydroclimate variability has improved significantly. Among the different possibilities, we have explored: traditional statistical ARIMA models, multiple linear regression, natural and constructed analogue models, the linear inverse model, neural network models, the non-parametric regression splines (MARS) model, regime-dependent Markovian models and one we termed PREBEO, which is based on spectral bands decomposition using wavelets. Most of the methods make
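
    Among the simpler statistical options listed above, a monthly-climatology-plus-AR(1)-anomaly model is about the most compact that still captures both seasonality and ENSO-like persistence. The sketch below is pure NumPy on synthetic flows and is not the PREBEO method; it only illustrates that forecast structure.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 360                                                  # 30 years of monthly flows
      clim_true = 500 + 200 * np.sin(2 * np.pi * np.arange(12) / 12)

      anom = np.zeros(n)
      for t in range(1, n):                                    # synthetic AR(1) anomalies
          anom[t] = 0.6 * anom[t - 1] + 40 * rng.standard_normal()
      flows = np.tile(clim_true, 30) + anom                    # synthetic record, m^3/s

      climatology = flows.reshape(30, 12).mean(axis=0)         # mean flow per calendar month
      residual = flows - np.tile(climatology, 30)

      phi = np.corrcoef(residual[:-1], residual[1:])[0, 1]     # estimated lag-1 persistence
      forecast = climatology[n % 12] + phi * residual[-1]      # forecast for the next calendar month
      print(f"AR(1) coefficient {phi:.2f}, next-month forecast {forecast:.1f} m^3/s")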

  10. Record high-average current from a high-brightness photoinjector

    Energy Technology Data Exchange (ETDEWEB)

    Dunham, Bruce; Barley, John; Bartnik, Adam; Bazarov, Ivan; Cultrera, Luca; Dobbins, John; Hoffstaetter, Georg; Johnson, Brent; Kaplan, Roger; Karkare, Siddharth; Kostroun, Vaclav; Li Yulin; Liepe, Matthias; Liu Xianghong; Loehl, Florian; Maxson, Jared; Quigley, Peter; Reilly, John; Rice, David; Sabol, Daniel [Cornell Laboratory for Accelerator-Based Sciences and Education, Cornell University, Ithaca, New York 14853 (United States); and others

    2013-01-21

    High-power, high-brightness electron beams are of interest for many applications, especially as drivers for free electron lasers and energy recovery linac light sources. For these particular applications, photoemission injectors are used in most cases, and the initial beam brightness from the injector sets a limit on the quality of the light generated at the end of the accelerator. At Cornell University, we have built such a high-power injector using a DC photoemission gun followed by a superconducting accelerating module. Recent results will be presented demonstrating record setting performance up to 65 mA average current with beam energies of 4-5 MeV.

  11. Higher prices at Canadian gas pumps: international crude oil prices or local market concentration? An empirical investigation

    International Nuclear Information System (INIS)

    Anindya Sen

    2003-01-01

    There is little consensus on whether higher retail gasoline prices in Canada are the result of international crude oil price fluctuations or local market power exercised by large vertically-integrated firms. I find that although both increasing local market concentration and higher average monthly wholesale prices are positively and significantly associated with higher retail prices, wholesale prices are more important than local market concentration. Similarly, crude oil prices are more important than the number of local wholesalers in determining wholesale prices. These results suggest that movements in gasoline prices are largely the result of input price fluctuations rather than local market structure. (author)

  12. Increasing average period lengths by switching of robust chaos maps in finite precision

    Science.gov (United States)

    Nagaraj, N.; Shastry, M. C.; Vaidya, P. G.

    2008-12-01

    Grebogi, Ott and Yorke (Phys. Rev. A 38, 1988) have investigated the effect of finite precision on the average period length of chaotic maps. They showed that the average length of periodic orbits (T) of a dynamical system scales as a function of computer precision (ε) and the correlation dimension (d) of the chaotic attractor: T ~ ε^(-d/2). In this work, we are concerned with increasing the average period length, which is desirable for chaotic cryptography applications. Our experiments reveal that random and chaotic switching of deterministic chaotic dynamical systems yields a higher average length of periodic orbits as compared to simple sequential switching or the absence of switching. To illustrate the application of switching, a novel generalization of the Logistic map that exhibits Robust Chaos (absence of attracting periodic orbits) is first introduced. We then propose a pseudo-random number generator based on chaotic switching between Robust Chaos maps, which is found to successfully pass stringent statistical tests of randomness.
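
    The reported effect is easy to reproduce in a toy setting: iterate a map at deliberately coarse precision and record how long it takes before a state value recurs, with and without random switching between two maps. The sketch below uses the fully chaotic logistic map and the tent map as the two systems; the choice of maps, precision and seed is mine and purely illustrative, not the paper's Robust Chaos construction.

      import random

      def logistic(x):
          return 4.0 * x * (1.0 - x)

      def tent(x):
          return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

      def first_repeat_time(maps, digits=4, x0=0.123, seed=0):
          """Iterate with rounding to `digits` decimals until a state value recurs.
          For a single map this equals the cycle length (transient excluded); for
          switched iteration it is a simple proxy for the effective period."""
          rng = random.Random(seed)
          x, seen, step = round(x0, digits), {}, 0
          while x not in seen:
              seen[x] = step
              x = round(rng.choice(maps)(x), digits)   # random switching (trivial for one map)
              step += 1
          return step - seen[x]                        # distance back to the repeated state

      print("logistic only       :", first_repeat_time([logistic]))
      print("switched (log/tent) :", first_repeat_time([logistic, tent]))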

  13. Aperture averaging and BER for Gaussian beam in underwater oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-03-01

    In an underwater wireless optical communication (UWOC) link, power fluctuations over a finite-sized collecting lens are investigated for a horizontally propagating Gaussian beam wave. The power scintillation index, also known as the irradiance flux variance, of the received irradiance is evaluated in weak oceanic turbulence by using the Rytov method. This lets us further quantify the associated performance indicators, namely the aperture averaging factor and the average bit-error rate (BER). The effects on the UWOC link performance of the oceanic turbulence parameters, i.e., the rate of dissipation of kinetic energy per unit mass of fluid, the rate of dissipation of mean-squared temperature, the Kolmogorov microscale, and the ratio of temperature to salinity contributions to the refractive index spectrum, as well as of the system parameters, i.e., the receiver aperture diameter, Gaussian source size, laser wavelength and the link distance, are investigated.

  14. The characteristic analysis of the solar energy photovoltaic power generation system

    Science.gov (United States)

    Liu, B.; Li, K.; Niu, D. D.; Jin, Y. A.; Liu, Y.

    2017-01-01

    Solar energy is an inexhaustible, clean, renewable energy source. Photovoltaic cells are a key component in solar power generation, so thorough research on output characteristics is of far-reaching importance. In this paper, an illumination model and a photovoltaic power station output power model were established, and simulation analysis was conducted using Matlab and other software. The analysis evaluated the condition of solar energy resources in the Baicheng region in the western part of Jilin province, China. The characteristic curve of the power output from a photovoltaic power station was obtained by simulation calculation. It was shown that the monthly average output power of the photovoltaic power station is affected by seasonal changes; the output power is higher in summer and autumn, and lower in spring and winter.
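
    Output-power models of this kind commonly take the generic form P = P_rated · (G/G_STC) · [1 + γ(T_cell − 25 °C)], with the cell temperature estimated from ambient temperature and irradiance. The sketch below implements that generic form with typical coefficient values; it is an assumed textbook-style model, not the specific model fitted for the Baicheng station.

      def pv_output(p_rated_kw, irradiance, t_ambient,
                    gamma=-0.004, noct=45.0, g_stc=1000.0):
          """Generic PV array output (kW) from plane-of-array irradiance (W/m^2)
          and ambient temperature (deg C), using an NOCT cell-temperature estimate."""
          t_cell = t_ambient + (noct - 20.0) * irradiance / 800.0
          return p_rated_kw * (irradiance / g_stc) * (1.0 + gamma * (t_cell - 25.0))

      # Illustrative comparison: bright summer noon vs. weak winter noon for a 1 MW array
      print("summer:", round(pv_output(1000.0, irradiance=850.0, t_ambient=28.0), 1), "kW")
      print("winter:", round(pv_output(1000.0, irradiance=350.0, t_ambient=-8.0), 1), "kW")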

  15. Nonlocal higher order evolution equations

    KAUST Repository

    Rossi, Julio D.

    2010-06-01

    In this article, we study the asymptotic behaviour of solutions to the nonlocal evolution equation u_t(x, t) = (-1)^(n-1) (J*Id - 1)^n (u(x, t)), x ∈ ℝ^N, which is the nonlocal analogue of the higher order local evolution equation v_t = (-1)^(n-1) Δ^n v. We prove that the solutions of the nonlocal problem converge to the solution of the higher order problem with the right-hand side given by powers of the Laplacian when the kernel J is rescaled in an appropriate way. Moreover, we prove that solutions to both equations have the same asymptotic decay rate as t goes to infinity. © 2010 Taylor & Francis.

  16. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of decision tree is bounded from below by the entropy of probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of decision tree exceeds the lower bound by at most one. The minimum average depth reaches the maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and kn pairwise different rows in the decision table and the problem of implementing the modulo 2 summation function). These problems have the minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.

  17. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Science.gov (United States)

    2010-07-01

    ... volume of gasoline produced or imported in batch i. Si=The sulfur content of batch i determined under § 80.330. n=The number of batches of gasoline produced or imported during the averaging period. i=Individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...
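
    The batch quantities defined above enter a volume-weighted average; a minimal sketch (Python, with made-up batch data) of how an annual average sulfur level of that form would be computed. The exact regulatory formula and rounding conventions are those given in the CFR text, not reproduced here.

      # Hypothetical batches: (volume of batch i in gallons, sulfur content S_i in ppm)
      batches = [(1_200_000, 28.0), (950_000, 31.5), (1_500_000, 25.0), (800_000, 33.0)]

      total_volume = sum(v for v, _ in batches)
      avg_sulfur = sum(v * s for v, s in batches) / total_volume   # volume-weighted average

      print(f"n = {len(batches)} batches, annual average sulfur = {avg_sulfur:.1f} ppm")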

  18. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...

  19. Power variables and bilateral force differences during unloaded and loaded squat jumps in high performance alpine ski racers.

    Science.gov (United States)

    Patterson, Carson; Raschner, Christian; Platzer, Hans-Peter

    2009-05-01

    The purpose of this paper was to investigate the power-load relationship and to compare power variables and bilateral force imbalances between sexes with squat jumps. Twenty men and 17 women, all members of the Austrian alpine ski team (junior and European Cup), performed unloaded and loaded (barbell loads equal to 25, 50, 75, and 100% body weight [BW]) squat jumps with free weights using a specially designed spotting system. Ground reaction force records from 2 force platforms were used to calculate relative average power (P), relative average power in the first 100 ms of the jump (P01), relative average power in the first 200 ms of the jump (P02), jump height, percentage of best jump height (%Jump), and maximal force difference between the dominant and nondominant leg (Fmaxdiff). The men displayed significantly higher values than the women at all loads for P and jump height.
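
    As an illustration of how a relative average power value can be derived from force platform records, the Python sketch below integrates the net vertical force of a synthetic push-off to obtain velocity, averages force times velocity, and normalises to an assumed body mass; the authors' exact processing of P, P01 and P02 may differ.

      import numpy as np

      g = 9.81
      body_mass = 80.0                 # kg, assumed
      fs = 1000.0                      # Hz, force platform sampling rate
      t = np.arange(0, 0.35, 1.0 / fs) # 350 ms push-off phase (synthetic)

      # Synthetic total vertical ground reaction force (N): body weight plus a half-sine push
      force = body_mass * g + 1200.0 * np.sin(np.pi * t / t[-1])

      # Velocity of the centre of mass from impulse-momentum (starts at rest)
      accel = (force - body_mass * g) / body_mass
      velocity = np.cumsum(accel) / fs

      power = force * velocity         # instantaneous power, W

      def rel_avg_power(p, t_window=None):
          """Relative average power (W/kg), optionally over the first t_window seconds."""
          n = len(p) if t_window is None else int(t_window * fs)
          return np.mean(p[:n]) / body_mass

      print(f"P   = {rel_avg_power(power):.1f} W/kg")
      print(f"P01 = {rel_avg_power(power, 0.100):.1f} W/kg   (first 100 ms)")
      print(f"P02 = {rel_avg_power(power, 0.200):.1f} W/kg   (first 200 ms)")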

  20. Exposure to power frequency electromagnetic fields

    International Nuclear Information System (INIS)

    Skotte, J.

    1993-01-01

    The purpose was to assess personal exposure to power frequency electromagnetic fields in Denmark. Exposure to electric and magnetic 50 Hz fields was measured with personal dosimeters over periods of 24 hours covering both occupational and residential environments. The study included both highly exposed and 'normally' exposed jobs. Measurements were carried out with dosimeters that sample the electric and magnetic fields every 5 s. Participants also wore the dosimeter during transportation. The dynamic range of the dosimeters was 0.01-200 μT and 0.6-10000 V/m. The highest average exposure in homes near high power lines was 2.24 μT. In most homes without nearby high power lines the average exposure was below 0.05 μT. Average values of the '24-hour dose' (μT times hours) for the generator facility, transmission line and substation workers were approximately the same as for people living near high power lines (5 μT x hours). Electric field measurements with personal dosimeters involve several factors of uncertainty, as the body, posture, position of the dosimeter etc. influence the results. The highest exposed groups were transmission line workers (GM: 44 V/m) and substation workers (GM: 23 V/m), but there were large variations (GSD: 4.7-4.8). During work time the exposure level was the same for office workers and workers in the industry groups (GM: 12-13 V/m). In homes near high power lines (GM: 23 V/m) there was a non-significant tendency to higher exposure compared to homes without nearby high power lines. (AB) (11 refs.)
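
    A minimal sketch (Python, synthetic data) of how a '24-hour dose' in μT x hours and the geometric mean (GM) and geometric standard deviation (GSD) reported above can be computed from 5-second dosimeter samples.

      import numpy as np

      dt_hours = 5.0 / 3600.0                       # 5-second sampling interval in hours
      rng = np.random.default_rng(0)

      # Synthetic 24-hour magnetic-field record (uT), lognormally distributed samples
      samples_ut = rng.lognormal(mean=np.log(0.08), sigma=1.0, size=int(24 * 3600 / 5))

      dose_ut_hours = np.sum(samples_ut) * dt_hours        # '24-hour dose', uT x hours
      gm = np.exp(np.mean(np.log(samples_ut)))             # geometric mean (GM)
      gsd = np.exp(np.std(np.log(samples_ut)))             # geometric standard deviation (GSD)

      print(f"24-hour dose: {dose_ut_hours:.2f} uT*h,  GM = {gm:.3f} uT,  GSD = {gsd:.2f}")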

  1. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable for handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifetime unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses certain aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.
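
    For orientation, the (mean) inactivity time of a lifetime variable X at time t is E[t - X | X <= t] = (integral of F(u) du from 0 to t) / F(t), where F is the distribution function; the average inactivity time model builds on this notion. A small numerical sketch (Python, with an exponential lifetime chosen purely as an example):

      import numpy as np
      from scipy.integrate import quad

      def mean_inactivity_time(cdf, t):
          """Mean inactivity time E[t - X | X <= t] = int_0^t F(u) du / F(t)."""
          integral, _ = quad(cdf, 0.0, t)
          return integral / cdf(t)

      # Assumed example: exponential lifetime with rate 0.5
      lam = 0.5
      F = lambda x: 1.0 - np.exp(-lam * x)

      for t in (1.0, 2.0, 5.0):
          print(f"t = {t:.1f}:  MIT = {mean_inactivity_time(F, t):.3f}")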

  2. Average L-shell fluorescence, Auger, and electron yields

    International Nuclear Information System (INIS)

    Krause, M.O.

    1980-01-01

    The dependence of the average L-shell fluorescence and Auger yields on the initial vacancy distribution is shown to be small. By contrast, the average electron yield pertaining to both Auger and Coster-Kronig transitions is shown to display a strong dependence. Numerical examples are given on the basis of Krause's evaluation of subshell radiative and radiationless yields. Average yields are calculated for widely differing vacancy distributions and are intercompared graphically; in most cases of inner-shell ionization they are well approximated by the L3 subshell yields.
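
    The dependence on the initial vacancy distribution amounts to weighting the subshell yields by the relative L1, L2 and L3 vacancy populations; a brief sketch (Python) with made-up subshell yields, not Krause's tabulated values:

      import numpy as np

      # Hypothetical L1, L2, L3 subshell fluorescence yields for some element (not tabulated values)
      omega = np.array([0.10, 0.30, 0.32])

      def average_yield(vacancy_distribution, subshell_yields=omega):
          """Average L-shell yield for a given initial vacancy distribution N = (N1, N2, N3)."""
          n = np.asarray(vacancy_distribution, dtype=float)
          n = n / n.sum()
          return float(np.dot(n, subshell_yields))

      # Average yields for two widely differing initial vacancy distributions
      print(average_yield([0.6, 0.2, 0.2]))
      print(average_yield([0.1, 0.4, 0.5]))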

  3. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family-wise error rate.
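
    As a simplified illustration of model averaging for a derived parameter (using Akaike weights, not the authors' simultaneous-inference procedure), the Python sketch below averages a predicted value across candidate polynomial models fitted to synthetic data:

      import numpy as np

      # Synthetic dose-response style data
      x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
      y = np.array([1.9, 2.6, 3.2, 3.4, 4.2, 4.4, 5.1])

      def fit_poly(deg):
          """Least-squares polynomial fit; returns (AIC, derived parameter = predicted y at x = 2)."""
          coef = np.polyfit(x, y, deg)
          resid = y - np.polyval(coef, x)
          n, k = len(y), deg + 1
          sigma2 = np.sum(resid**2) / n
          aic = n * np.log(sigma2) + 2 * k       # Gaussian AIC up to an additive constant
          return aic, np.polyval(coef, 2.0)

      models = [fit_poly(d) for d in (1, 2, 3)]
      aics = np.array([m[0] for m in models])
      estimates = np.array([m[1] for m in models])

      weights = np.exp(-0.5 * (aics - aics.min()))   # Akaike weights
      weights /= weights.sum()
      print("model-averaged estimate at x = 2:", float(np.dot(weights, estimates)))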

  4. Salecker-Wigner-Peres clock and average tunneling times

    International Nuclear Information System (INIS)

    Lunardi, Jose T.; Manzoni, Luiz A.; Nystrom, Andrew T.

    2011-01-01

    The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated with the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta-barrier potential. The regime of opaque barriers is investigated, and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).

  5. Time average vibration fringe analysis using Hilbert transformation

    International Nuclear Information System (INIS)

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-01-01

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
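
    A minimal sketch of Hilbert-transform phase retrieval from a single fringe line (Python). The cosine fringe profile is synthetic; real time-average Bessel fringes additionally require mapping the recovered phase to vibration amplitude.

      import numpy as np
      from scipy.signal import hilbert

      x = np.linspace(0, 1, 2000)
      true_phase = 18.0 * np.pi * x**2                 # synthetic, monotonically increasing phase
      fringe = 0.5 + 0.5 * np.cos(true_phase)          # single fringe intensity profile

      # Remove the background (DC) term, then build the analytic signal
      ac = fringe - np.mean(fringe)
      analytic = hilbert(ac)

      wrapped = np.angle(analytic)                     # wrapped phase in (-pi, pi]
      phase = np.unwrap(wrapped)                       # continuous phase along the line

      print("recovered phase range (rad):", float(phase[-1] - phase[0]))
      print("true phase range (rad):     ", float(true_phase[-1] - true_phase[0]))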

  6. Average multiplications in deep inelastic processes and their interpretation

    International Nuclear Information System (INIS)

    Kiselev, A.V.; Petrov, V.A.

    1983-01-01

    Inclusive production of hadrons in deep inelastic processes is considered. It is shown that at high energies the jet evolution in deep inelastic processes is mainly of nonperturbative character. With increasing energy of the final hadron state, the leading contribution to the average multiplicity comes from a parton subprocess due to the production of massive quark and gluon jets and their further fragmentation, as the diquark contribution becomes less and less essential. The ratio of the total average multiplicity in deep inelastic processes to the average multiplicity in e+e- annihilation tends to unity at high energies.

  7. Fitting a function to time-dependent ensemble averaged data

    DEFF Research Database (Denmark)

    Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders

    2018-01-01

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). We apply a fit method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software.
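
    A hedged sketch of the underlying idea, weighted least squares with a full error covariance matrix (generalized least squares), applied to fitting a power law to synthetic mean-squared-displacement data (Python); this illustrates the principle and is not the published WLS-ICE code.

      import numpy as np

      # Synthetic time-averaged MSD data with correlated errors
      t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
      msd = np.array([2.1, 4.3, 8.0, 16.9, 31.5])

      # Fit log(MSD) = const + alpha * log(t) by generalized least squares
      X = np.column_stack([np.ones_like(t), np.log(t)])
      y = np.log(msd)

      # Assumed error covariance: nearby lag times are positively correlated
      C = 0.01 * np.fromfunction(lambda i, j: 0.6 ** np.abs(i - j), (5, 5))

      Cinv = np.linalg.inv(C)
      beta = np.linalg.solve(X.T @ Cinv @ X, X.T @ Cinv @ y)      # GLS estimate
      cov_beta = np.linalg.inv(X.T @ Cinv @ X)                    # its covariance

      print(f"exponent alpha = {beta[1]:.3f} +/- {np.sqrt(cov_beta[1, 1]):.3f}")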

  8. Average wind statistics for SRP area meteorological towers

    International Nuclear Information System (INIS)

    Laurinat, J.E.

    1987-01-01

    A quality-assured set of average wind statistics for the seven SRP area meteorological towers has been calculated for the five-year period 1982--1986 at the request of DOE/SR. A similar set of statistics was previously compiled for the years 1975--1979. The updated wind statistics will replace the old statistics as the meteorological input for calculating atmospheric radionuclide doses from stack releases, and will be used in the annual environmental report. This report details the methods used to average the wind statistics and to screen out bad measurements, and presents wind roses generated from the averaged statistics.
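
    A minimal sketch (Python, synthetic hourly observations) of the kind of processing involved: screening out bad measurements and binning wind observations into direction sectors for a wind rose; the quality-assurance criteria here are simple placeholders, not the report's screening rules.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 24 * 365 * 5                                   # five years of hourly observations
      speed = rng.gamma(shape=2.0, scale=2.0, size=n)    # m/s, synthetic
      direction = rng.uniform(0.0, 360.0, size=n)        # degrees, synthetic

      # Screen out bad measurements (placeholder criteria: negative or implausibly high speeds)
      good = (speed >= 0.0) & (speed < 40.0)
      speed, direction = speed[good], direction[good]

      # 16-sector wind rose: frequency and mean speed per sector
      sectors = ((direction + 11.25) % 360.0 // 22.5).astype(int)
      for s in range(16):
          mask = sectors == s
          freq = mask.mean() * 100.0
          if s < 3:                                      # print only the first few sectors
              print(f"sector {s:2d}: {freq:4.1f} % of hours, mean speed {speed[mask].mean():.2f} m/s")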

  9. Power in Households: Disentangling Bargaining Power

    NARCIS (Netherlands)

    R. Mabsout (Ramzi); I.P. van Staveren (Irene)

    2009-01-01

    Introduction: Within the household bargaining literature, bargaining power is generally understood in terms of economic resources, such as income or assets. Empirical analyses of women's bargaining power in households in developed and developing countries find that, in general, higher…

  10. Output power distributions of terminals in a 3G mobile communication network.

    Science.gov (United States)

    Persson, Tomas; Törnevik, Christer; Larsson, Lars-Eric; Lovén, Jan

    2012-05-01

    The objective of this study was to examine the distribution of the output power of mobile phones and other terminals connected to a 3G network in Sweden. It is well known that 3G terminals can operate with very low output power, particularly for voice calls. Measurements of terminal output power were conducted in the Swedish TeliaSonera 3G network in November 2008 by recording network statistics. In the analysis, discrimination was made between rural, suburban, urban, and dedicated indoor networks. In addition, information about terminal output power was possible to collect separately for voice and data traffic. Information from six different Radio Network Controllers (RNCs) was collected during at least 1 week. In total, more than 800000 h of voice calls were collected and in addition to that a substantial amount of data traffic. The average terminal output power for 3G voice calls was below 1 mW for any environment including rural, urban, and dedicated indoor networks. This is <1% of the maximum available output power. For data applications the average output power was about 6-8 dB higher than for voice calls. For rural areas the output power was about 2 dB higher, on average, than in urban areas. Copyright © 2011 Wiley Periodicals, Inc.
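
    To put the quoted dB differences in absolute terms, a short Python sketch converting an assumed ~1 mW voice-call average into the corresponding data-traffic and rural levels:

      def db_to_ratio(db):
          """Convert a power difference in dB to a linear ratio."""
          return 10.0 ** (db / 10.0)

      voice_avg_mw = 1.0                      # assumed average voice-call output power, mW
      for db in (6.0, 8.0):
          print(f"{db:.0f} dB above {voice_avg_mw} mW -> {voice_avg_mw * db_to_ratio(db):.1f} mW")

      # The ~2 dB rural vs urban difference corresponds to roughly a factor of
      print(f"2 dB -> factor {db_to_ratio(2.0):.2f}")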

  11. Higher Education and Inequality

    Science.gov (United States)

    Brown, Roger

    2018-01-01

    After climate change, rising economic inequality is the greatest challenge facing the advanced Western societies. Higher education has traditionally been seen as a means to greater equality through its role in promoting social mobility. But with increased marketisation higher education now not only reflects the forces making for greater inequality…

  12. Higher Education in California

    Science.gov (United States)

    Public Policy Institute of California, 2016

    2016-01-01

    Higher education enhances Californians' lives and contributes to the state's economic growth. But population and education trends suggest that California is facing a large shortfall of college graduates. Addressing this short­fall will require strong gains for groups that have been historically under­represented in higher education. Substantial…

  13. Reimagining Christian Higher Education

    Science.gov (United States)

    Hulme, E. Eileen; Groom, David E., Jr.; Heltzel, Joseph M.

    2016-01-01

    The challenges facing higher education continue to mount. The shifting of the U.S. ethnic and racial demographics, the proliferation of advanced digital technologies and data, and the move from traditional degrees to continuous learning platforms have created an unstable environment to which Christian higher education must adapt in order to remain…

  14. Happiness in Higher Education

    Science.gov (United States)

    Elwick, Alex; Cannizzaro, Sara

    2017-01-01

    This paper investigates the higher education literature surrounding happiness and related notions: satisfaction, despair, flourishing and well-being. It finds that there is a real dearth of literature relating to profound happiness in higher education: much of the literature using the terms happiness and satisfaction interchangeably as if one were…

  15. Gender and Higher Education

    Science.gov (United States)

    Bank, Barbara J., Ed.

    2011-01-01

    This comprehensive, encyclopedic review explores gender and its impact on American higher education across historical and cultural contexts. Challenging recent claims that gender inequities in U.S. higher education no longer exist, the contributors--leading experts in the field--reveal the many ways in which gender is embedded in the educational…

  16. Energy-Efficient Power Allocation for Underlay Cognitive Radio Systems

    KAUST Repository

    Sboui, Lokman

    2015-09-01

    We present a power allocation framework for spectrum sharing Cognitive Radio (CR) systems based on maximizing the energy efficiency (EE). First, we show that the relation between the EE and the spectral efficiency (SE) is strictly increasing, in contrast with the SE-EE trade-off discussed in the literature. We also solve a non-convex problem and explicitly derive the optimal power for the proposed average EE under either a peak or an average power constraint. We apply our results to underlay CR systems where the power is limited by an additional interference constraint. When the instantaneous channel is not available, we provide a necessary and sufficient condition for the optimal power and present a simple sub-optimal power allocation. In the numerical results, we show that the proposed EE corresponds to a higher SE in the mid-range and high power regimes compared to the classical EE. We also show that the sub-optimal solution is very close to the optimal solution. In addition, we deduce that the absence of instantaneous CSI affects the EE and the SE in the high power regime compared to full CSI. In the CR context, we show that the interference threshold has a minimal effect on the EE compared to the SE.
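
    A hedged sketch in the spirit of the framework described above: maximizing EE = rate / (transmit power + circuit power) under a peak transmit power and an interference constraint by a simple one-dimensional search (Python). The channel gains, circuit power and thresholds are illustrative, and this grid search stands in for the paper's explicit optimal-power derivation.

      import numpy as np

      h = 0.8          # channel gain to the secondary receiver (assumed)
      g = 0.3          # channel gain to the primary receiver (assumed)
      n0 = 1.0         # noise power
      p_circuit = 0.5  # static circuit power, W
      p_peak = 5.0     # peak transmit power constraint, W
      i_th = 1.0       # interference threshold at the primary receiver, W

      p_max = min(p_peak, i_th / g)          # both constraints cap the transmit power
      p_grid = np.linspace(1e-3, p_max, 10_000)

      rate = np.log2(1.0 + h * p_grid / n0)  # spectral efficiency, bits/s/Hz
      ee = rate / (p_grid + p_circuit)       # energy efficiency, bits/Joule/Hz

      best = np.argmax(ee)
      print(f"optimal power ~ {p_grid[best]:.2f} W, EE ~ {ee[best]:.3f} bits/J/Hz, "
            f"SE ~ {rate[best]:.3f} bits/s/Hz")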

  17. Energy-Efficient Power Allocation for Underlay Cognitive Radio Systems

    KAUST Repository

    Sboui, Lokman; Rezki, Zouheir; Alouini, Mohamed-Slim

    2015-01-01

    We present a power allocation framework for spectrum sharing Cognitive Radio (CR) systems based on maximizing the energy efficiency (EE). First, we show that the relation between the EE and the spectral efficiency (SE) is strictly increasing, in contrast with the SE-EE trade-off discussed in the literature. We also solve a non-convex problem and explicitly derive the optimal power for the proposed average EE under either a peak or an average power constraint. We apply our results to underlay CR systems where the power is limited by an additional interference constraint. When the instantaneous channel is not available, we provide a necessary and sufficient condition for the optimal power and present a simple sub-optimal power allocation. In the numerical results, we show that the proposed EE corresponds to a higher SE in the mid-range and high power regimes compared to the classical EE. We also show that the sub-optimal solution is very close to the optimal solution. In addition, we deduce that the absence of instantaneous CSI affects the EE and the SE in the high power regime compared to full CSI. In the CR context, we show that the interference threshold has a minimal effect on the EE compared to the SE.

  18. Quality of Higher Education

    DEFF Research Database (Denmark)

    Zou, Yihuan

    Quality in higher education was not invented in recent decades – universities have always possessed mechanisms for assuring the quality of their work. The rising concern over quality is closely related to the changes in higher education and its social context. Among others, the most conspicuous changes are the massive expansion, diversification and increased cost in higher education, and new mechanisms of accountability initiated by the state. With these changes the traditional internally enacted academic quality-keeping has been given an important external dimension – quality assurance. This work is about constructing a more inclusive understanding of quality in higher education through combining the macro, meso and micro levels, i.e. from the perspectives of national policy, higher education institutions as organizations in society, and individual teaching staff and students.

  19. The JLab high power ERL light source

    International Nuclear Information System (INIS)

    Neil, G.R.; Behre, C.; Benson, S.V.

    2006-01-01

    A new THz/IR/UV photon source at Jefferson Lab is the first of a new generation of light sources based on an Energy-Recovered (superconducting) Linac (ERL). The machine has a 160 MeV electron beam and an average current of 10 mA in 75 MHz repetition rate, hundred-femtosecond bunches. These electron bunches pass through a magnetic chicane and therefore emit synchrotron radiation. For wavelengths longer than the electron bunch the electrons radiate coherently a broadband THz ∼ half cycle pulse whose average brightness is >5 orders of magnitude higher than synchrotron IR sources. Previous measurements showed 20 W of average power extracted [Carr, et al., Nature 420 (2002) 153]. The new facility offers simultaneous synchrotron light from the visible through the FIR along with broadband THz production of 100 fs pulses with >200 W of average power. The FELs also provide record-breaking laser power [Neil, et al., Phys. Rev. Lett. 84 (2000) 662]: up to 10 kW of average power in the IR from 1 to 14 μm in 400 fs pulses at up to 74.85 MHz repetition rates, and soon will produce similar pulses of 300-1000 nm light at up to 3 kW of average power from the UV FEL. These ultrashort pulses are ideal for maximizing the interaction with material surfaces. The optical beams are Gaussian with nearly perfect beam quality. See www.jlab.org/FEL for details of the operating characteristics; a wide variety of pulse train configurations are feasible, from 10 ms long at high repetition rates to continuous operation. The THz and IR system has been commissioned. The UV system is to follow in 2005. The light is transported to user laboratories for basic and applied research. Additional lasers synchronized to the FEL are also available. Past activities have included production of carbon nanotubes, studies of vibrational relaxation of interstitial hydrogen in silicon, pulsed laser deposition and ablation, nitriding of metals, and energy flow in proteins. This paper presents the status of the system and discusses some of the discoveries that have been made.

  20. The JLab high power ERL light source

    Energy Technology Data Exchange (ETDEWEB)

    G.R. Neil; C. Behre; S.V. Benson; M. Bevins; G. Biallas; J. Boyce; J. Coleman; L.A. Dillon-Townes; D. Douglas; H.F. Dylla; R. Evans; A. Grippo; D. Gruber; J. Gubeli; D. Hardy; C. Hernandez-Garcia; K. Jordan; M.J. Kelley; L. Merminga; J. Mammosser; W. Moore; N. Nishimori; E. Pozdeyev; J. Preble; R. Rimmer; Michelle D. Shinn; T. Siggins; C. Tennant; R. Walker; G.P. Williams and S. Zhang

    2005-03-19

    A new THz/IR/UV photon source at Jefferson Lab is the first of a new generation of light sources based on an Energy-Recovered (superconducting) Linac (ERL). The machine has a 160 MeV electron beam and an average current of 10 mA in 75 MHz repetition rate hundred femtosecond bunches. These electron bunches pass through a magnetic chicane and therefore emit synchrotron radiation. For wavelengths longer than the electron bunch the electrons radiate coherently a broadband THz ∼ half cycle pulse whose average brightness is > 5 orders of magnitude higher than synchrotron IR sources. Previous measurements showed 20 W of average power extracted [1]. The new facility offers simultaneous synchrotron light from the visible through the FIR along with broadband THz production of 100 fs pulses with >200 W of average power. The FELs also provide record-breaking laser power [2]: up to 10 kW of average power in the IR from 1 to 14 microns in 400 fs pulses at up to 74.85 MHz repetition rates and soon will produce similar pulses of 300-1000 nm light at up to 3 kW of average power from the UV FEL. These ultrashort pulses are ideal for maximizing the interaction with material surfaces. The optical beams are Gaussian with nearly perfect beam quality. See www.jlab.org/FEL for details of the operating characteristics; a wide variety of pulse train configurations are feasible from 10 microseconds long at high repetition rates to continuous operation. The THz and IR system has been commissioned. The UV system is to follow in 2005. The light is transported to user laboratories for basic and applied research. Additional lasers synchronized to the FEL are also available. Past activities have included production of carbon nanotubes, studies of vibrational relaxation of interstitial hydrogen in silicon, pulsed laser deposition and ablation, nitriding of metals, and energy flow in proteins. This paper will present the status of the system and discuss some of the discoveries we have made.
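
    For scale, the pulse energies and peak powers implied by the quoted average powers and repetition rates can be back-calculated directly; a short Python sketch (rectangular pulse shape assumed for the peak-power estimate):

      def pulse_energy_uj(avg_power_w, rep_rate_hz):
          """Energy per pulse (microjoules) from average power and repetition rate."""
          return avg_power_w / rep_rate_hz * 1e6

      def peak_power_mw(avg_power_w, rep_rate_hz, pulse_duration_s):
          """Rough peak power (MW), assuming a rectangular pulse of the quoted duration."""
          return avg_power_w / (rep_rate_hz * pulse_duration_s) / 1e6

      # IR FEL: 10 kW average at 74.85 MHz in ~400 fs pulses
      print(f"IR pulse energy ~ {pulse_energy_uj(10e3, 74.85e6):.0f} uJ")
      print(f"IR peak power   ~ {peak_power_mw(10e3, 74.85e6, 400e-15):.0f} MW")

      # THz: >200 W average in ~100 fs pulses at the quoted ~75 MHz bunch rate
      print(f"THz pulse energy ~ {pulse_energy_uj(200.0, 75e6):.1f} uJ")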