WorldWideScience

Sample records for megawatt average polyphase

  1. A 1 MEGAWATT POLYPHASE BOOST CONVERTER-MODULATOR FOR KLYSTRON PULSE APPLICATION

    International Nuclear Information System (INIS)

    Reass, W.A.; Doss, J.D.; Gribble, R.F.

    2001-01-01

    This paper describes electrical design criteria and first operational results of a 140 kV, 1 MW average, 11 MW peak, zero-voltage-switching 20 kHz polyphase bridge, boost converter/modulator for klystron pulse application. The DC-DC converter derives its bus voltages from a standard 13.8 kV to 2300 Y substation cast-core transformer. Energy storage and filtering are provided by self-clearing metallized hazy polypropylene traction capacitors. Three ''H-Bridge'' Insulated Gate Bipolar Transistor (IGBT) switching networks are used to generate the polyphase 20 kHz transformer primary drive waveforms. The 20 kHz drive waveforms are chirped for the appropriate duration to generate the desired klystron pulse width. Pulse width modulation (PWM) of the individual 20 kHz pulses is utilized to provide regulated output waveforms with adaptive feedforward and feedback techniques. The boost transformer design utilizes amorphous nanocrystalline material that provides the required low core loss at design flux levels and switching frequencies. Resonant shunt-peaking is used on the transformer secondary to boost output voltage and resonate the transformer leakage inductance. With the appropriate transformer leakage inductance and peaking capacitance, zero-voltage switching of the IGBTs is attained, minimizing switching losses. A review of these design parameters and the first results of the performance characteristics are presented.
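
    The zero-voltage-switching condition above comes from resonating the transformer leakage inductance with the secondary peaking capacitance; a minimal sizing sketch (the component values are hypothetical illustrations, not figures from the paper):

```python
import math

def peaking_capacitance(f_res_hz: float, l_leak_h: float) -> float:
    """Capacitance that resonates a given leakage inductance at f_res.

    From f_res = 1 / (2*pi*sqrt(L*C))  =>  C = 1 / ((2*pi*f_res)**2 * L).
    """
    return 1.0 / ((2 * math.pi * f_res_hz) ** 2 * l_leak_h)

# Hypothetical numbers: 20 kHz switching, 10 uH referred leakage inductance.
c = peaking_capacitance(20e3, 10e-6)
print(f"peaking capacitance ~ {c * 1e6:.1f} uF")
```

    Round-tripping through f = 1/(2π√(LC)) recovers the design frequency, which is a quick sanity check on any chosen L/C pair.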

  2. Capabilities, performance, and future possibilities of high frequency polyphase resonant converters

    International Nuclear Information System (INIS)

    Reass, W.A.; Baca, D.M.; Bradley, J.T. III; Hardek, T.W.; Kwon, S.I.; Lynch, M.T.; Rees, D.E.

    2004-01-01

    High Frequency Polyphase Resonant Power Conditioning (PRPC) techniques developed at Los Alamos National Laboratory (LANL) are now being utilized for the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source (SNS) accelerator klystron RF amplifier power systems. Three different styles of polyphase resonant converter modulators were developed for the SNS application. The various systems operate up to 140 kV, or 11 MW pulses, or up to 1.1 MW average power, all from a DC input of +/- 1.2 kV. Component improvements realized with the SNS effort, coupled with new applied engineering techniques, have resulted in dramatic changes in RF power conditioning topology. As an example, the high-voltage transformers are over 100 times smaller and lighter than equivalent 60 Hz versions. With resonant conversion techniques, load protective networks are not required: a shorted load de-tunes the resonance and little power transfer can occur. This provides for power conditioning systems that are inherently self-protective, with automatic fault 'ride-through' capabilities. By altering the Los Alamos design, higher power and CW power conditioning systems can be realized without further demands on individual component voltage or current capabilities. This has led to designs that can accommodate 30 MW long-pulse applications and megawatt-class CW systems with high efficiencies. The same PRPC techniques can also be utilized for lower average power systems (∼250 kW). This permits the use of significantly higher frequency conversion techniques that result in extremely compact systems with short-pulse (10 to 100 µs) capabilities. These lower power PRPC systems may be suitable for medical linacs and mobile RF systems. This paper will briefly review the performance achieved for the SNS accelerator and examine designs for high efficiency megawatt-class CW systems and 30 MW peak power applications, as well as the devices and designs for compact higher-frequency converters utilized for short-pulse applications.

  3. Distortion Cancellation via Polyphase Multipath Circuits

    NARCIS (Netherlands)

    Mensink, E.; Klumperink, Eric A.M.; Nauta, Bram

    The central question of this paper is: can we enhance the spectral purity of nonlinear circuits with the help of polyphase multipath circuits? Polyphase multipath circuits are circuits with two or more paths that exploit phase differences between the paths to cancel unwanted signals. It turns out ...

  4. Spectral Purity Enhancement via Polyphase Multipath Circuits

    NARCIS (Netherlands)

    Mensink, E.; Klumperink, Eric A.M.; Nauta, Bram

    2004-01-01

    The central question of this paper is: can we enhance the spectral purity of nonlinear circuits by using polyphase multipath circuits? The basic idea behind polyphase multipath circuits is to split the nonlinear circuits into two or more paths and exploit phase differences between these paths to ...
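
    The cancellation principle behind these two records is easy to check numerically: a phase shift θ applied before a memoryless nonlinearity appears as nθ on the nth harmonic, so shifting path k by 2πk/N before and by -2πk/N after leaves the nth harmonic with residual phase (n-1)·2πk/N, and summing N paths cancels every harmonic except n ≡ 1 (mod N). A toy sketch (the polynomial nonlinearity and analytic-signal model are illustrative assumptions, not the authors' circuit):

```python
import numpy as np

def nonlinear(z):
    # Simple memoryless polynomial nonlinearity (illustrative coefficients).
    return z + 0.2 * z**2 + 0.1 * z**3

N = 3                                   # number of paths
t = np.arange(1024)
w = 2 * np.pi * 8 / 1024                # tone at FFT bin 8
out = np.zeros(1024, dtype=complex)
for k in range(N):
    theta = 2 * np.pi * k / N
    z = np.exp(1j * (w * t + theta))            # phase shift before the nonlinearity
    out += nonlinear(z) * np.exp(-1j * theta)   # opposite shift after it

spec = np.abs(np.fft.fft(out)) / 1024
print(spec[8], spec[16], spec[24])      # fundamental, 2nd, 3rd harmonic bins
```

    With N = 3 paths the 2nd and 3rd harmonic bins vanish to numerical precision while the fundamental adds coherently; a 4th harmonic (n ≡ 1 mod 3) would survive, which matches the "towards suppression of all harmonics" framing of record 20.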

  5. A polyphase filter for many-core architectures

    Science.gov (United States)

    Adámek, K.; Novotný, J.; Armour, W.

    2016-07-01

    In this article we discuss our implementation of a polyphase filter for real-time data processing in radio astronomy. The polyphase filter is a standard tool in digital signal processing and as such a well established algorithm. We describe in detail our implementation of the polyphase filter algorithm and its behaviour on three generations of NVIDIA GPU cards (Fermi, Kepler, Maxwell), on the Intel Xeon CPU and Xeon Phi (Knights Corner) platforms. All of our implementations aim to exploit the potential for data reuse that the algorithm offers. Our GPU implementations explore two different methods for achieving this: the first makes use of L1/Texture cache, the second uses shared memory. We discuss the usability of each of our implementations along with their behaviours. We measure performance in execution time, which is a critical factor for real-time systems; we also present results in terms of bandwidth (GB/s), compute (GFLOP/s) and type conversions (GTc/s). We include a presentation of our results in terms of the sample rate which can be processed in real-time by a chosen platform, which more intuitively describes the expected performance in a signal processing setting. Our findings show that, for the GPUs considered, the performance of our polyphase filter when using lower precision input data is limited by type conversions rather than device bandwidth. We compare these results to an implementation on the Xeon Phi. We show that our Xeon Phi implementation has a performance that is 1.5× to 1.92× greater than our CPU implementation, but is still not sufficient to compete with the performance of the GPUs. We conclude with a comparison of our best performing code to two other implementations of the polyphase filter, showing that our implementation is faster in nearly all cases. This work forms part of the Astro-Accelerate project, a many-core accelerated real-time data processing library for digital signal processing of time-domain radio astronomy data.
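
    As a reference for the algorithm being accelerated here: a decimating polyphase filter splits the FIR taps into M branches that each run at the low output rate, and must agree exactly with filtering followed by downsampling. A plain NumPy sketch of that decomposition (not the authors' GPU code):

```python
import numpy as np

def polyphase_decimate(x, h, M):
    """Decimate x by M via an M-branch polyphase decomposition of FIR h.

    Equivalent to np.convolve(x, h)[::M], but each branch operates at the
    low output rate, which is what makes the structure attractive for
    parallel hardware.
    """
    x = np.asarray(x, dtype=float)
    h = np.asarray(h, dtype=float)
    convs = []
    for p in range(M):
        hp = h[p::M]                               # branch p: taps p, p+M, ...
        if p == 0:
            xp = x[0::M]                           # x_p[m] = x[m*M - p]
        else:
            xp = np.concatenate(([0.0], x[M - p::M]))
        convs.append(np.convolve(xp, hp))
    y = np.zeros(max(len(c) for c in convs))
    for c in convs:                                # sum the branch outputs
        y[:len(c)] += c
    return y

np.random.seed(0)
x, h = np.random.randn(64), np.random.randn(12)
print(np.allclose(polyphase_decimate(x, h, 4), np.convolve(x, h)[::4]))  # -> True
```

    The branch inputs are the delayed polyphase components of x, so the sum over branches reproduces every term h[k]·x[nM-k] of the decimated convolution exactly.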

  6. Pipeline Implementation of Polyphase PSO for Adaptive Beamforming Algorithm

    Directory of Open Access Journals (Sweden)

    Shaobing Huang

    2017-01-01

    Adaptive beamforming is a powerful technique for anti-interference, where searching for and tracking optimal solutions is a great challenge. In this paper, a partial Particle Swarm Optimization (PSO) algorithm is proposed to track the optimal solution of an adaptive beamformer due to its strong global searching character. Also, due to its naturally parallel searching capabilities, a novel Field Programmable Gate Array (FPGA) pipeline architecture using a polyphase filter bank structure is designed. In order to perform computations with large dynamic range and high precision, the proposed implementation uses an efficient user-defined floating-point arithmetic. In addition, a polyphase architecture is proposed to achieve full pipeline implementation. In the case of PSO with a large population, the polyphase architecture can significantly save hardware resources while achieving high performance. Finally, the simulation results are presented by cosimulation with ModelSim and SIMULINK.
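
    As background for the record above, the core PSO update (each particle's velocity is pulled toward its personal best and the swarm's global best) can be sketched in a few lines. This is a generic software sketch, not the paper's polyphase FPGA pipeline, and all parameter values are illustrative:

```python
import numpy as np

def pso(f, dim=2, n_particles=20, iters=200, seed=0):
    """Minimal particle swarm optimizer for a cost function f."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))    # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()                              # per-particle best positions
    pcost = np.array([f(p) for p in x])
    g = pbest[pcost.argmin()].copy()              # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia / pull strengths
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        cost = np.array([f(p) for p in x])
        improved = cost < pcost
        pbest[improved], pcost[improved] = x[improved], cost[improved]
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()

best, val = pso(lambda p: np.sum((p - 1.0) ** 2))   # toy quadratic cost
print(best, val)
```

    In the beamforming setting the cost function would be the beamformer's output power under interference, evaluated per candidate weight vector; here a quadratic stands in for it.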

  7. Lifted linear phase filter banks and the polyphase-with-advance representation

    Energy Technology Data Exchange (ETDEWEB)

    Brislawn, C. M. (Christopher M.); Wohlberg, B. E. (Brendt E.)

    2004-01-01

    A matrix theory is developed for the noncausal polyphase-with-advance representation that underlies the theory of lifted perfect reconstruction filter banks and wavelet transforms as developed by Sweldens and Daubechies. This theory provides the fundamental lifting methodology employed in the ISO/IEC JPEG-2000 still image coding standard, which the authors helped to develop. Lifting structures for polyphase-with-advance filter banks are depicted in Figure 1. In the analysis bank of Figure 1(a), the first lifting step updates x_0 with a filtered version of x_1 and the second step updates x_1 with a filtered version of x_0; gain factors 1/K and K normalize the lowpass- and highpass-filtered output subbands. Each of these steps is inverted by the corresponding operations in the synthesis bank shown in Figure 1(b). Lifting steps correspond to upper- or lower-triangular matrices, S_i(z), in a cascade-form decomposition of the polyphase analysis matrix, H_a(z). Lifting structures can also be implemented reversibly (i.e., losslessly in fixed-precision arithmetic) by rounding the lifting updates to integer values. Our treatment of the polyphase-with-advance representation develops an extensive matrix algebra framework that goes far beyond the results of. Specifically, we focus on analyzing and implementing linear phase two-channel filter banks via linear phase lifting cascade schemes. Whole-sample symmetric (WS) and half-sample symmetric (HS) linear phase filter banks are characterized completely in terms of the polyphase-with-advance representation. The theory benefits significantly from a number of new group-theoretic structures arising in the polyphase-with-advance matrix algebra from the lifting factorization of linear phase filter banks.
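
    The reversible lifting idea described here (round each predict/update step to integers so the transform inverts exactly in fixed precision) can be sketched with the LeGall 5/3 filter bank used in JPEG-2000. This sketch simplifies boundary handling to sample replication rather than the standard's whole-sample symmetric extension; perfect reconstruction still holds because the inverse undoes each lifting step with the identical rounded prediction:

```python
import numpy as np

def lift_53_forward(x):
    """LeGall 5/3 analysis via two integer lifting steps (even-length input)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    ext = lambda a, i: a[min(max(i, 0), len(a) - 1)]   # replicate at edges
    # Predict step: detail = odd minus rounded average of neighbouring evens.
    d = np.array([odd[n] - (ext(even, n) + ext(even, n + 1)) // 2
                  for n in range(len(odd))])
    # Update step: smooth = even plus rounded average of neighbouring details.
    s = np.array([even[n] + (ext(d, n - 1) + ext(d, n) + 2) // 4
                  for n in range(len(even))])
    return s, d

def lift_53_inverse(s, d):
    """Exactly invert the two lifting steps, in reverse order."""
    ext = lambda a, i: a[min(max(i, 0), len(a) - 1)]
    even = np.array([s[n] - (ext(d, n - 1) + ext(d, n) + 2) // 4
                     for n in range(len(s))])
    odd = np.array([d[n] + (ext(even, n) + ext(even, n + 1)) // 2
                    for n in range(len(d))])
    x = np.empty(len(even) + len(odd), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

rng = np.random.default_rng(0)
x = rng.integers(0, 256, 16)
s, d = lift_53_forward(x)
print((lift_53_inverse(s, d) == x).all())   # -> True
```

    Each lifting step is invertible no matter how the prediction is rounded, which is exactly why the cascade-form factorization into triangular steps supports lossless fixed-precision coding.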

  8. Polyphase-discrete Fourier transform spectrum analysis for the Search for Extraterrestrial Intelligence sky survey

    Science.gov (United States)

    Zimmerman, G. A.; Gulkis, S.

    1991-01-01

    The sensitivity of a matched-filter detection system to a finite-duration continuous wave (CW) tone is compared with the sensitivities of a windowed discrete Fourier transform (DFT) system and an ideal bandpass filter-bank system. These comparisons are made in the context of the NASA Search for Extraterrestrial Intelligence (SETI) microwave observing project (MOP) sky survey. A review of the theory of polyphase-DFT filter banks and its relationship to the well-known windowed-DFT process is presented. The polyphase-DFT system approximates the ideal bandpass filter bank by using as few as eight filter taps per polyphase branch. An improvement in sensitivity of approximately 3 dB over a windowed-DFT system can be obtained by using the polyphase-DFT approach. Sidelobe rejection of the polyphase-DFT system is vastly superior to that of the windowed-DFT system, thereby improving its performance in the presence of radio frequency interference (RFI).
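
    As a sketch of the polyphase-DFT structure being compared here: a weighted overlap-add channelizer windows a block of K·T samples with the prototype filter (T taps per polyphase branch), folds the product modulo K, and applies a single K-point DFT. The prototype window below is an arbitrary stand-in, not the SETI MOP design:

```python
import numpy as np

def wola_channelizer(x, h, K):
    """Polyphase-DFT (weighted overlap-add) filter bank, one output frame.

    h is the prototype lowpass filter with T = len(h)//K taps per branch;
    with T = 1 and a rectangular prototype this reduces to a plain DFT.
    """
    T = len(h) // K
    y = x[:K * T] * h[:K * T]              # window the block with the prototype
    folded = y.reshape(T, K).sum(axis=0)   # alias (fold) down to K points
    return np.fft.fft(folded)              # K-point DFT across the channels

# Tone centred on channel 3 of an 8-channel bank, 8 taps per branch.
n = np.arange(64)
x = np.exp(2j * np.pi * 3 * n / 8)
h = np.hanning(64)                         # illustrative prototype window
bins = np.abs(wola_channelizer(x, h, 8))
print(bins.argmax())                       # -> 3
```

    Folding before the DFT is what lets a long prototype filter (many taps per branch) sharpen each channel's response without increasing the DFT size, which is the source of the sidelobe-rejection advantage described in the abstract.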

  9. Experimental deformation of polyphase rock analogues

    NARCIS (Netherlands)

    Bons, P.D.

    1993-01-01

    This thesis presents an investigation into the mechanical properties of ductile polyphase materials, which were studied by a number of different techniques. The first approach was to do creep tests and transparent deformation cell experiments with two-phase composites of organic crystalline ...

  10. Polyphasic taxonomy of Aspergillus section Cervini

    DEFF Research Database (Denmark)

    Chen, A.J.; Varga, J.; Frisvad, Jens Christian

    2016-01-01

    Species belonging to Aspergillus section Cervini are characterised by radiate or short columnar, fawn coloured, uniseriate conidial heads. The morphology of the taxa in this section is very similar and isolates assigned to these species are frequently misidentified. In this study, a polyphasic...

  11. Time and Power Optimizations in FPGA-Based Architectures for Polyphase Channelizers

    DEFF Research Database (Denmark)

    Awan, Mehmood-Ur-Rehman; Harris, Fred; Koch, Peter

    2012-01-01

    This paper presents the time and power optimization considerations for Field Programmable Gate Array (FPGA) based architectures for a polyphase filter bank channelizer with an embedded square root shaping filter in its polyphase engine. This configuration performs two different re-sampling tasks ... % slice register resources of a Xilinx Virtex-5 FPGA, operating at 400 and 480 MHz, and consuming 1.9 and 2.6 Watts of dynamic power, respectively.

  12. Multi-megawatt inverter/converter technology for space power applications

    Science.gov (United States)

    Myers, Ira T.; Baumann, Eric D.; Kraus, Robert; Hammoud, Ahmad N.

    1992-01-01

    Large power conditioning mass reductions will be required to enable megawatt power systems envisioned by the Strategic Defense Initiative, the Air Force, and NASA. Phase 1 of a proposed two-phase interagency program has been completed to develop a 0.1 kg/kW DC/DC converter technology base for these future space applications. Three contractors, Hughes, General Electric (GE), and Maxwell, were the Phase 1 contractors in a competitive program to develop a megawatt lightweight DC/DC converter. Researchers at NASA Lewis Research Center and the University of Wisconsin also investigated technology in topology and control. All three contractors, as well as the University of Wisconsin, concluded at the end of the Phase 1 study, which included some critical laboratory work, that 0.1-kg/kW megawatt DC/DC converters can be built. This is an order of magnitude lower specific weight than is presently available. A brief description of each of the concepts used to meet the ambitious goals of this program is presented.

  13. Polyphase Filter Banks for Embedded Sample Rate Changes in Digital Radio Front-Ends

    DEFF Research Database (Denmark)

    Awan, Mehmood-Ur-Rehman; Le Moullec, Yannick; Koch, Peter

    2011-01-01

    A non-maximally-decimated polyphase filter bank (where the number of data loads is not equal to the number of M subfilters) processes M subfilters in a time period that is less than or greater than the M data loads. A polyphase filter bank with five different resampling modes is used as a case study ...

  14. Multi-megawatt neutral beams for MFTF-B

    International Nuclear Information System (INIS)

    Kerr, R.G.

    1982-01-01

    Multi-megawatt neutral-beam sources have successfully made the transition from prototype to commercial production, with some operational improvements due to the commercialization. Long-pulse source operation results will be available soon.

  15. Low-power implementation of polyphase filters in Quadratic Residue Number System

    DEFF Research Database (Denmark)

    Cardarilli, Gian Carlo; Re, Andrea Del; Nannarelli, Alberto

    2004-01-01

    The aim of this work is the reduction of the power dissipated in digital filters, while maintaining the timing unchanged. A polyphase filter bank in the Quadratic Residue Number System (QRNS) has been implemented and then compared, in terms of performance, area, and power dissipation, to the implementation of a polyphase filter bank in the traditional two's complement system (TCS). The resulting implementations, designed to have the same clock rates, show that the QRNS filter is smaller and consumes less power than the TCS one.
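
    For context on why QRNS cheapens complex filters: for a prime modulus p ≡ 1 (mod 4), -1 has a square root j mod p, so a complex multiply maps to two independent real modular multiplies with no cross terms. A toy sketch with an illustrative modulus (not the paper's wordlengths):

```python
# Quadratic Residue Number System demo: map a + ib to the pair
# (a + j*b, a - j*b) mod p, where j*j ≡ -1 (mod p).  Complex products
# then become two independent component-wise modular products.
p = 13                                                      # 13 ≡ 1 (mod 4)
j = next(k for k in range(2, p) if (k * k) % p == p - 1)    # j^2 ≡ -1 (mod p)

def to_qrns(a, b):
    return ((a + j * b) % p, (a - j * b) % p)

def from_qrns(zp, zm):
    inv2 = pow(2, -1, p)          # modular inverse of 2
    inv2j = pow(2 * j, -1, p)     # modular inverse of 2j
    return ((zp + zm) * inv2 % p, (zp - zm) * inv2j % p)

# Complex multiply (3+2i)(1+5i) mod 13, done component-wise in QRNS.
zp1, zm1 = to_qrns(3, 2)
zp2, zm2 = to_qrns(1, 5)
a, b = from_qrns(zp1 * zp2 % p, zm1 * zm2 % p)
print(a, b)   # (3+2i)(1+5i) = -7+17i ≡ (6, 4) mod 13
```

    A complex multiply normally needs four real multiplies and two additions; in QRNS it needs two, which is where the area and power savings in the abstract come from (at the cost of the forward/inverse mappings at the filter boundaries).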

  16. Hardware Architecture of Polyphase Filter Banks Performing Embedded Resampling for Software-Defined Radio Front-Ends

    DEFF Research Database (Denmark)

    Awan, Mehmood-Ur-Rehman; Le Moullec, Yannick; Koch, Peter

    2012-01-01

    In this paper, we describe resource-efficient hardware architectures for software-defined radio (SDR) front-ends. These architectures are made efficient by using a polyphase channelizer that performs arbitrary sample rate changes, frequency selection, and bandwidth control. We discuss area, time, and power optimization for field programmable gate array (FPGA) based architectures in an M-path polyphase filter bank with a modified N-path polyphase filter. Such systems allow resampling by arbitrary ratios while simultaneously performing baseband aliasing from center frequencies at Nyquist zones that are not multiples of the output sample rate. A non-maximally decimated polyphase filter bank, where the number of data loads is not equal to the number of M subfilters, processes M subfilters in a time period that is either less than or greater than the M data loads' time period. We present a load...

  17. Polyphasic taxonomic characterization of lactic acid bacteria ...

    African Journals Online (AJOL)

    The results of these analyses showed that ting fermentation involved at least three different species of LAB, i.e. Lactobacillus fermentum, L. plantarum and L. rhamnosus. To our knowledge, this is the first report of polyphasic taxonomic characterization of LAB from this food. This research forms an essential first step towards ...

  18. PEGASUS: a multi-megawatt nuclear electric propulsion system

    International Nuclear Information System (INIS)

    Coomes, E.P.; Cuta, J.M.; Webb, B.J.; King, D.Q.

    1985-06-01

    With the Space Transportation System (STS), the advent of space station Columbus and the development of expertise at working in space that this will entail, the gateway is open to the final frontier. The exploration of this frontier is possible with state-of-the-art hydrogen/oxygen propulsion but would be greatly enhanced by the higher specific impulse of electric propulsion. This paper presents a concept that uses a multi-megawatt nuclear power plant to drive an electric propulsion system. The concept has been named PEGASUS, PowEr GenerAting System for Use in Space, and is intended as a ''work horse'' for general space transportation needs, both long- and short-haul missions. The recent efforts of the SP-100 program indicate that a power system capable of producing upwards of 1 megawatt of electric power should be available in the next decade. Additionally, efforts in other areas indicate that a power system with a constant power capability an order of magnitude greater could be available near the turn of the century. With the advances expected in megawatt-class space power systems, high-specific-impulse electric propulsion must be reconsidered as a potential propulsion option. The power system is capable of meeting both the propulsion system and spacecraft power requirements.

  19. Symbol Synchronization for SDR Using a Polyphase Filterbank Based on an FPGA

    Directory of Open Access Journals (Sweden)

    P. Fiala

    2015-09-01

    This paper is devoted to the proposal of a highly efficient symbol synchronization subsystem for Software Defined Radio. The proposed feedback phase-locked loop timing synchronizer is suitable for parallel implementation on an FPGA. The polyphase FIR filter simultaneously performs matched filtering and arbitrary interpolation between acquired samples. Determination of the proper sampling instant is achieved by selecting a suitable polyphase filterbank using a derived index. This index is determined based on the output of either the Zero-Crossing or Gardner Timing Error Detector. The paper extensively focuses on simulation of the proposed synchronization system. On the basis of this simulation, a complete, fully pipelined VHDL description model is created. This model is composed of a fully parallel polyphase filterbank based on distributed arithmetic, a timing error detector, and an interpolation control block. Finally, RTL synthesis on an Altera Cyclone IV FPGA is presented and resource utilization in comparison with a conventional model is analyzed.
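
    The Gardner detector mentioned here computes a timing error from two samples per symbol without needing carrier phase: e[k] = (x[2k+2] - x[2k]) · x[2k+1], i.e. the symbol-to-symbol transition weighted by the mid-symbol sample. A minimal sketch under simplifying assumptions (linear-interpolation pulse shaping instead of the matched filtering used in the paper):

```python
import numpy as np

def gardner_ted(x):
    """Block-averaged Gardner timing error at 2 samples/symbol."""
    strobes, mids = x[0::2], x[1::2]
    k = len(strobes) - 1
    return float(np.mean((strobes[1:] - strobes[:-1]) * mids[:k]))

rng = np.random.default_rng(1)
sym = rng.choice([-1.0, 1.0], 2000)          # random BPSK symbols

def waveform(offset):
    # Linearly interpolated BPSK at 2 samples/symbol (toy pulse shape;
    # a real receiver would use root-raised-cosine matched filtering).
    t = np.arange(0, len(sym) - 1, 0.5) + offset   # fractional sample times
    i = t.astype(int)
    frac = t - i
    return (1 - frac) * sym[i] + frac * sym[i + 1]

print(gardner_ted(waveform(0.0)))   # 0.0: correctly timed
print(gardner_ted(waveform(0.1)))   # nonzero: signals a timing offset
```

    With correct timing the mid-symbol sample sits exactly between adjacent symbols, so every term is (s₁-s₀)(s₀+s₁)/2 = 0 for ±1 symbols; with an offset the average error grows with the offset, which is the control signal the synchronizer's loop filter drives to zero.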

  20. Towards suppression of all harmonics in a polyphase multipath transmitter

    NARCIS (Netherlands)

    Subhan, S.; Klumperink, Eric A.M.; Nauta, Bram

    2011-01-01

    This work proposes a direct conversion transmitter architecture intended for cognitive radio applications. The architecture is based on the poly-phase multipath technique, which has been shown to cancel out many of the harmonics, sidebands and nonlinearity contributions of a power up-converter using ...

  1. D/A Resolution Impact on a Poly-phase Multipath Transmitter

    NARCIS (Netherlands)

    Subhan, S.; Klumperink, Eric A.M.; Nauta, Bram

    2008-01-01

    In recent publications the Poly-phase multipath technique has been shown to produce a clean output spectrum for a power upconverter (PU) architecture. The technique utilizes frequency independent phase shifts before and after a nonlinear element to cancel out the harmonics and sidebands due to the ...

  2. Fault-tolerant design approach for reliable offshore multi-megawatt variable frequency converters

    Directory of Open Access Journals (Sweden)

    N. Vedachalam

    2016-09-01

    Inverters play a key role in realizing reliable multi-megawatt power electronic converters used in offshore applications, as their failure leads to production losses and impairs safety. The performance of high power handling semiconductor devices with high speed control capabilities and redundant configurations helps in realizing a fault-tolerant design. This paper describes the reliability modeling done for an industry standard, 3-level neutral point clamped multi-megawatt inverter, the significance of semiconductor redundancy in reducing inverter failure rates, and proposes methods for achieving static and dynamic redundancy in series-connected press pack type insulated gate bipolar transistors (IGBTs). It is identified that, with the multi-megawatt inverter having 3+2 IGBTs in each half leg with dynamic redundancy incorporated, it is possible to reduce the failure rate of the inverter from 53.8% to 15% in 5 years of continuous operation. The simulation results indicate that with dynamic redundancy, it is possible to force an untriggered press pack IGBT to short circuit in <1 s, when operated with a pulse width modulation frequency of 1 kHz.
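
    The 3+2 redundancy argument can be sketched as a k-of-n reliability calculation: press-pack IGBTs fail to a short, so a series string of 5 devices still conducts as long as at least 3 work. The per-device failure rate below is a hypothetical illustration, not the paper's reliability data:

```python
from math import comb, exp

def string_reliability(n, k, r):
    """Probability that at least k of n series press-pack devices still work,
    assuming independent failures and fail-to-short behaviour (a shorted
    device leaves the rest of the string conducting)."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

# Hypothetical per-IGBT failure rate, chosen for illustration only.
lam = 0.05            # failures per device-year
t = 5.0               # years of continuous operation
r = exp(-lam * t)     # single-device 5-year reliability

print(f"3-of-3 string (no redundancy): {string_reliability(3, 3, r):.3f}")
print(f"3-of-5 string (3+2 redundant): {string_reliability(5, 3, r):.3f}")
```

    Even with this toy failure rate, adding two redundant devices turns a roughly even-odds 5-year survival into better than 90%, which mirrors the direction (though not the exact figures) of the abstract's 53.8% to 15% failure-rate reduction.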

  3. Design, status and first operations of the spallation neutron source polyphase resonant converter modulator system

    Energy Technology Data Exchange (ETDEWEB)

    Reass, W. A. (William A.); Apgar, S. E. (Sean E.); Baca, D. M. (David M.); Doss, James D.; Gonzales, J. (Jacqueline); Gribble, R. F. (Robert F.); Hardek, T. W. (Thomas W.); Lynch, M. T. (Michael T.); Rees, D. E. (Daniel E.); Tallerico, P. J. (Paul J.); Trujillo, P. B. (Pete B.); Anderson, D. E. (David E.); Heidenreich, D. A. (Dale A.); Hicks, J. D. (Jim D.); Leontiev, V. N.

    2003-01-01

    The Spallation Neutron Source (SNS) is a new 1.4 MW average power beam, 1 GeV accelerator being built at Oak Ridge National Laboratory. The accelerator requires 15 converter-modulator stations, each providing between 9 and 11 MW pulses with up to a 1.1 MW average power. The converter-modulator can be described as a resonant 20 kHz polyphase boost inverter. Each converter-modulator derives its bus voltage from a standard substation cast-core transformer. Each substation is followed by an SCR pre-regulator to accommodate voltage changes from no load to full load, in addition to providing a soft-start function. Energy storage is provided by self-clearing metallized hazy polypropylene traction capacitors. These capacitors do not fail short, but clear any internal anomaly. Three 'H-Bridge' IGBT transistor networks are used to generate the polyphase 20 kHz transformer primary drive waveforms. The 20 kHz drive waveforms are time-gated to generate the desired klystron pulse width. Pulse width modulation of the individual 20 kHz pulses is utilized to provide regulated output waveforms with DSP-based adaptive feedforward and feedback techniques. The boost transformer design utilizes nanocrystalline alloy that provides low core loss at design flux levels and switching frequencies. Capacitors are used on the transformer secondary networks to resonate the leakage inductance. The transformers are wound for a specific leakage inductance, not turns ratio. This design technique generates multiple secondary volts per turn as compared to the primary. With the appropriate tuning conditions, switching losses are minimized. The resonant topology has the added benefit of being de-Qed in a klystron fault condition, with little energy deposited in the arc. This obviates the need for crowbars or other related networks. A review of these design parameters, operational performance, production status, and ORNL installation and performance to date will be presented.

  4. Design, status and first operations of the spallation neutron source polyphase resonant converter modulator system

    International Nuclear Information System (INIS)

    Reass, W.A.; Apgar, S.E.; Baca, D.M.; Doss, James D.; Gonzales, J.; Gribble, R.F.; Hardek, T.W.; Lynch, M.T.; Rees, D.E.; Tallerico, P.J.; Trujillo, P.B.; Anderson, D.E.; Heidenreich, D.A.; Hicks, J.D.; Leontiev, V.N.

    2003-01-01

    The Spallation Neutron Source (SNS) is a new 1.4 MW average power beam, 1 GeV accelerator being built at Oak Ridge National Laboratory. The accelerator requires 15 converter-modulator stations, each providing between 9 and 11 MW pulses with up to a 1.1 MW average power. The converter-modulator can be described as a resonant 20 kHz polyphase boost inverter. Each converter-modulator derives its bus voltage from a standard substation cast-core transformer. Each substation is followed by an SCR pre-regulator to accommodate voltage changes from no load to full load, in addition to providing a soft-start function. Energy storage is provided by self-clearing metallized hazy polypropylene traction capacitors. These capacitors do not fail short, but clear any internal anomaly. Three 'H-Bridge' IGBT transistor networks are used to generate the polyphase 20 kHz transformer primary drive waveforms. The 20 kHz drive waveforms are time-gated to generate the desired klystron pulse width. Pulse width modulation of the individual 20 kHz pulses is utilized to provide regulated output waveforms with DSP-based adaptive feedforward and feedback techniques. The boost transformer design utilizes nanocrystalline alloy that provides low core loss at design flux levels and switching frequencies. Capacitors are used on the transformer secondary networks to resonate the leakage inductance. The transformers are wound for a specific leakage inductance, not turns ratio. This design technique generates multiple secondary volts per turn as compared to the primary. With the appropriate tuning conditions, switching losses are minimized. The resonant topology has the added benefit of being de-Qed in a klystron fault condition, with little energy deposited in the arc. This obviates the need for crowbars or other related networks. A review of these design parameters, operational performance, production status, and ORNL installation and performance to date will be presented.

  5. A polyphasic approach for the taxonomy of cyanobacteria: principles and applications.

    Czech Academy of Sciences Publication Activity Database

    Komárek, Jiří

    2016-01-01

    Roč. 51, č. 3 (2016), s. 346-353 ISSN 0967-0262 R&D Projects: GA ČR GA15-00113S; GA ČR GAP506/12/1818 Institutional support: RVO:67985939 Keywords : cyanobacteria * taxonomy * polyphasic approach Subject RIV: EF - Botanics Impact factor: 2.412, year: 2016

  6. The Implementation of a Real-Time Polyphase Filter

    OpenAIRE

    Adámek, Karel; Novotný, Jan; Armour, Wes

    2014-01-01

    In this article we study the suitability of different computational accelerators for the task of real-time data processing. The algorithm used for comparison is the polyphase filter, a standard tool in signal processing and a well established algorithm. We measure performance in FLOPs and execution time, which is a critical factor for real-time systems. For our real-time studies we have chosen a data rate of 6.5 GB/s, which is the estimated data rate for a single channel on the SKA's Low Frequenc...

  7. A polyphasic approach to the taxonomy of the Alternaria infectoria species-group

    DEFF Research Database (Denmark)

    Andersen, Birgitte; Sørensen, Jens Laurids; Nielsen, Kristian Fog

    2009-01-01

    Different taxa in the species-group of Alternaria infectoria (teleomorph Lewia spp.) are often isolated from various cereals including barley, maize and wheat grain, ornamental plants and skin lesions from animals and humans. In the present study we made a polyphasic characterization of 39 strains...

  8. The glacial sequence at Killiney, SE Ireland: terrestrial deglaciation and polyphase glacitectonic deformation

    NARCIS (Netherlands)

    Rijsdijk, K.F.; Warren, W.P.; van der Meer, J.J.M.

    2010-01-01

    Depositional conditions of a complexly deformed glacigenic sequence at Killiney Bay, on the west-central margin of the Irish Sea are reconstructed. Deformation geometries provide conclusive evidence for polyphase glacitectonic deformation generated by terrestrial ice sheets. They are dominated by

  9. A polyphasic approach for the characterization of endophytic Alternaria strains isolated from grapevines

    DEFF Research Database (Denmark)

    Polizzotto, Rachele; Andersen, Birgitte; Martini, Marta

    2012-01-01

    A polyphasic approach was set up and applied to characterize 20 fungal endophytes belonging to the genus Alternaria, recovered from grapevine in different Italian regions.Morphological, microscopical, molecular and chemical investigations were performed and the obtained results were combined in a...

  10. Short-Circuit Robustness Assessment in Power Electronic Modules for Megawatt Applications

    DEFF Research Database (Denmark)

    Iannuzzo, Francesco

    2016-01-01

    In this paper, threats and opportunities in testing of megawatt power electronic modules under short circuit are presented and discussed, together with the introduction of some basic principles of non-destructive testing, a key technique to allow post-failure analysis. The non-destructive testing...

  11. .i.Aspergillus viridinutans./i. complex: polyphasic taxonomy, mating behaviour and antifungal susceptibility testing

    Czech Academy of Sciences Publication Activity Database

    Dudová, Z.; Hubka, V.; Svobodová, L.; Hamal, P.; Nováková, Alena; Matsuzawa, T.; Yaguchi, T.; Kubátová, A.; Kolařík, Miroslav

    2013-01-01

    Roč. 56, Suppl. 3 (2013), s. 162-163 ISSN 0933-7407. [Trends in Medical Mycology /6./. 11.10.2013-14.10.2013, Copenhagen] Institutional support: RVO:60077344 ; RVO:61388971 Keywords : Aspergillus viridinutans * polyphasic taxonomy * mating behaviour * antifungal susceptibility testing Subject RIV: EE - Microbiology, Virology

  12. A new RF tagging pulse based on the Frank poly-phase perfect sequence

    DEFF Research Database (Denmark)

    Laustsen, Christoffer; Greferath, Marcus; Ringgaard, Steffen

    2014-01-01

    Radio frequency (RF) spectrally selective multiband pulses or tagging pulses, are applicable in a broad range of magnetic resonance methods. We demonstrate through simulations and experiments a new phase-modulation-only RF pulse for RF tagging based on the Frank poly-phase perfect sequence...

  13. Cavitation and polyphase flow forum, 1975. Joint meeting of Fluids Engineering and Lubrication Division, Minneapolis, Minnesota, May 5--7, 1975

    International Nuclear Information System (INIS)

    Waid, R.L.

    1975-01-01

    The nine papers which comprise the 1975 Forum present a wide range of primarily experimental studies of cavitation and polyphase flows. These papers include the polyphase mechanism in froths, the cavitation collapse pressures in venturi flow, the effects of test conditions on developed cavity flow, a cavitation hypothesis for geophysical phenomena, the character and design of centrifugal pumps for cavitation performance, and the effect of fluid and magnetic and electrical fields on cavitation erosion.

  14. Automatic Modulation Classification of LFM and Polyphase-coded Radar Signals

    Directory of Open Access Journals (Sweden)

    S. B. S. Hanbali

    2017-12-01

    Several techniques exist for detecting and classifying low-probability-of-intercept radar signals, such as the Wigner distribution, the Choi-Williams distribution, and the time-frequency rate distribution, but these distributions require high SNR. To overcome this problem, we propose a new technique for detecting and classifying linear frequency modulation (LFM) signals and polyphase-coded signals using the optimum fractional Fourier transform at low SNR. Theoretical analysis and simulation experiments demonstrate the validity and efficiency of the proposed method.
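    The fractional Fourier transform concentrates an LFM chirp at the transform order matching its chirp rate. A crude stdlib-only stand-in for that order search (not the authors' method, and all names here are hypothetical) is to dechirp with candidate rates and pick the one that collapses the signal to a single spectral tone:

```python
import cmath

def lfm(n_samples, rate):
    """Unit-amplitude LFM chirp with discrete-time phase pi * rate * t**2."""
    return [cmath.exp(1j * cmath.pi * rate * t * t) for t in range(n_samples)]

def peak_spectrum(x):
    """Maximum DFT magnitude (naive O(N^2) DFT, stdlib only)."""
    N = len(x)
    return max(abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / N)
                       for t in range(N)))
               for k in range(N))

def estimate_chirp_rate(x, candidates):
    """Dechirp with each candidate rate; the matching rate turns the chirp
    into a pure tone, maximizing the spectral peak (a coarse analogue of
    scanning the fractional-Fourier order)."""
    return max(candidates,
               key=lambda r: peak_spectrum(
                   [x[t] * cmath.exp(-1j * cmath.pi * r * t * t)
                    for t in range(len(x))]))

true_rate = 0.01
sig = lfm(64, true_rate)
est = estimate_chirp_rate(sig, [0.0, 0.005, 0.01, 0.015, 0.02])
```

A real implementation would use an FFT and a fractional Fourier transform rather than this brute-force scan.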

  15. High power operation of the polyphase resonant converter modulator system for the spallation neutron source linear accelerator

    CERN Document Server

    Reass, W A; Baca, D M; Doss, J D; Gonzáles, J M; Gribble, R F; Trujillo, P G

    2003-01-01

    The spallation neutron source (SNS) is a new 1.4 MW average-power beam, 1 GeV accelerator being built at Oak Ridge National Laboratory. The accelerator requires 15 "long-pulse" converter-modulator stations, each providing a maximum of 11 MW pulses at 1.1 MW average power. Two variants of the converter-modulator are utilized, an 80 kV and a 140 kV design, the voltage dependent on the type of klystron load. The converter-modulator can be described as a resonant zero-voltage-switching polyphase boost inverter. As noted in Figure 1, each converter-modulator derives its bus voltage from a standard 13.8 kV to 2100 Y (1.5 MVA) substation cast-core transformer. The substation also contains harmonic traps and filters to meet IEEE 519 and 141 regulations. Each substation is followed by an SCR preregulator to accommodate system voltage changes from no load to full load, in addition to providing a soft-start function. Energy storage and filtering is provided by special low-inductance self-clearing metallized ...
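    As a quick sanity check on the quoted ratings (an illustration, not from the paper): 11 MW peak with 1.1 MW average implies a 10% duty factor, and at the SNS nominal 60 Hz repetition rate (assumed here) that corresponds to a pulse width of roughly 1.7 ms, consistent with the "long-pulse" description:

```python
peak_power_w = 11e6    # peak pulse power quoted in the abstract
avg_power_w = 1.1e6    # average power quoted in the abstract
rep_rate_hz = 60.0     # SNS nominal repetition rate (assumption)

duty = avg_power_w / peak_power_w      # duty factor, dimensionless (~0.10)
pulse_width_s = duty / rep_rate_hz     # implied pulse width (~1.67e-3 s)
```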

  16. Handling technology of Mega-Watt millimeter-waves for optimized heating of fusion plasmas

    NARCIS (Netherlands)

    Shimozuma, T.; Kubo, S.; Yoshimura, Y.; Igami, H.; Takahashi, H.; Takita, Y.; Kobayashi, S.; Ito, S.; Mizuno, Y.; Idei, H.; Notake, T.; Shapiro, M.A.; Temkin, R.J.; Felici, F.; Goodman, T.P.; Sauter, O.; Minami, R.; Kariya, T.; Imai, T.; Mutoh, T.

    2009-01-01

    Millimeter-wave components were re-examined for high power (Mega-Watt) and steady-state (greater than one hour) operation. Some millimeter-wave components, including waveguide joints, vacuum pumping sections, power monitors, sliding waveguides, and injection windows, have been improved for high...

  17. Polyphasic Temporal Behavior of Finger-Tapping Performance: A Measure of Motor Skills and Fatigue.

    Science.gov (United States)

    Aydin, Leyla; Kiziltan, Erhan; Gundogan, Nimet Unay

    2016-01-01

    Successive voluntary motor movement involves a number of physiological mechanisms and may reflect motor skill development and neuromuscular fatigue. In this study, the temporal behavior of finger tapping was investigated in relation to motor skills and fatigue by using a long-term computer-based test. The finger-tapping performances of 29 healthy male volunteers were analyzed using linear and nonlinear regression models established for the inter-tapping interval. The results suggest that finger-tapping performance exhibits a polyphasic nature and has several characteristic time points, which may be directly related to muscle dynamics and energy consumption. In conclusion, we believe that future studies evaluating the polyphasic nature of maximal voluntary movement will lead to the definition of objective scales that can be used in the follow-up of some neuromuscular diseases, as well as the determination of motor skills, individual ability, and peripheral fatigue, through the use of a low-cost, easy-to-use computer-based finger-tapping test.
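    A minimal sketch (not the authors' code) of the kind of linear model fitted to inter-tapping intervals: ordinary least squares on synthetic data, where a positive slope would indicate progressive slowing consistent with fatigue. The data and numbers are illustrative only:

```python
def fit_line(xs, ys):
    """Closed-form ordinary least squares fit: y = intercept + slope * x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

# Synthetic inter-tapping intervals (ms): a 160 ms baseline drifting upward
# by 0.5 ms per tap, as might be seen with peripheral fatigue.
taps = list(range(100))
intervals = [160.0 + 0.5 * t for t in taps]
intercept, slope = fit_line(taps, intervals)   # slope > 0 suggests slowing
```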

  18. REM sleep complicates period adding bifurcations from monophasic to polyphasic sleep behavior in a sleep-wake regulatory network model for human sleep

    OpenAIRE

    Kalmbach, K.; Booth, V.; Behn, C. G. Diniz

    2017-01-01

    The structure of human sleep changes across development as it consolidates from the polyphasic sleep of infants to the single nighttime sleep period typical in adults. Across this same developmental period, time scales of the homeostatic sleep drive, the physiological drive to sleep that increases with time spent awake, also change and presumably govern the transition from polyphasic to monophasic sleep behavior. Using a physiologically-based, sleep-wake regulatory network model for human sle...

  19. Polyphasic characterization of Westiellopsis prolifica (Hapalosiphonaceae, Cyanobacteria) from the El-Farafra Oasis (Western Desert, Egypt)

    Czech Academy of Sciences Publication Activity Database

    Saber, A. A.; Cantonati, M.; Mareš, Jan; Anesi, A.; Guella, G.

    2017-01-01

    Roč. 56, č. 6 (2017), s. 697-709 ISSN 0031-8884 Institutional support: RVO:60077344 Keywords : 16S rRNA * autecology * bio-organic screening * Egypt * polyphasic study Subject RIV: EF - Botanics OBOR OECD: Plant sciences, botany Impact factor: 1.826, year: 2016

  20. Carbonate fuel cells: Milliwatts to megawatts

    Science.gov (United States)

    Farooque, M.; Maru, H. C.

    The carbonate fuel cell power plant is an emerging high-efficiency, ultra-clean power generator utilizing a variety of gaseous, liquid, and solid carbonaceous fuels for commercial and industrial applications. The prime mover of this generator is a carbonate fuel cell. The fuel cell uses alkali metal carbonate mixtures as electrolyte and operates at ∼650 °C. Corrosion of the cell hardware and stability of the ceramic components were important design considerations in the early stages of development. The material and electrolyte choices are founded on extensive fundamental research carried out around the world in the 1960s and early 1970s. The cell components were developed in the late 1970s and early 1980s. Present-day carbonate fuel cell construction employs commonly available stainless steels. The electrodes are based on nickel and well-established manufacturing processes. Manufacturing process development, scale-up, stack tests, and pilot system tests dominated the 1990s. Commercial product development began in the late 1990s, leading to prototype field tests in the current decade and on to commercial customer applications. Cost reduction has been an integral part of the product effort, and cost-competitive product designs have evolved as a result. Approximately half a dozen teams around the world are pursuing carbonate fuel cell product development. Power plant development efforts to date have mainly focused on several-hundred-kW (submegawatt) to megawatt-class plants. Almost 40 submegawatt units have been operating at customer sites in the US, Europe, and Asia. Several of these units are operating on renewable bio-fuels. A 1 MW unit is operating on digester gas from a municipal wastewater treatment plant in Seattle, Washington (US). Presently, approximately 10 MW of carbonate fuel cell capacity is installed around the world. Carbonate fuel cell products are also being developed to operate on...

  1. Introduction to polyphasic dispersed systems theory application to open systems of microorganisms’ culture

    CERN Document Server

    Thierie, Jacques

    2016-01-01

    This book introduces a new paradigm in system description and modelling. The author shows the theoretical and practical successes of his approach, which involves replacing a traditional uniform description with a polyphasic description. This change of perspective reveals new fluxes that are cryptic in the classical description. Several case studies are given in this book, which is of interest to those working with biotechnology and green chemistry.

  2. Advanced multi-megawatt wind turbine design for utility application

    Science.gov (United States)

    Pijawka, W. C.

    1984-08-01

    A NASA/DOE program to develop a utility-class multimegawatt wind turbine, the MOD-5A, is described. The MOD-5A features a 400-foot diameter rotor which is teetered and positioned upwind of the tower; a 7.3-megawatt power rating with a variable-speed electric generating system; and a redundant rotor support and torque transmission structure. The rotor blades were fabricated from an epoxy-bonded wood laminate material which was a successful outgrowth of the MOD-0A airfoil design. Preliminary data from operational tests carried out at the NASA Plum Brook test facility are presented.

  3. Advanced multi-megawatt wind turbine design for utility application

    Science.gov (United States)

    Pijawka, W. C.

    1984-01-01

    A NASA/DOE program to develop a utility-class multimegawatt wind turbine, the MOD-5A, is described. The MOD-5A features a 400-foot diameter rotor which is teetered and positioned upwind of the tower; a 7.3-megawatt power rating with a variable-speed electric generating system; and a redundant rotor support and torque transmission structure. The rotor blades were fabricated from an epoxy-bonded wood laminate material which was a successful outgrowth of the MOD-0A airfoil design. Preliminary data from operational tests carried out at the NASA Plum Brook test facility are presented.

  4. Megawatt low-temperature DC plasma generator with divergent channels of gas-discharge tract

    Science.gov (United States)

    Gadzhiev, M. Kh.; Isakaev, E. Kh.; Tyuftyaev, A. S.; Yusupov, D. I.; Sargsyan, M. A.

    2017-04-01

    We have developed and studied a new effective megawatt double-unit generator of low-temperature argon plasma, which belongs to the class of dc plasmatrons and comprises the cathode and anode units with divergent gas-discharge channels. The generator has an efficiency of about 80-85% and ensures a long working life at operating currents up to 4000 A.

  5. From medium-sized to megawatt turbines...

    Energy Technology Data Exchange (ETDEWEB)

    Dongen, W. van [NedWind bv, Rhenen (Netherlands)

    1996-12-31

    One of the world's first 500 kW turbines was installed in 1989 in the Netherlands. This forerunner of the current NedWind 500 kW range also represents the earliest predesign of the NedWind megawatt turbine. After the first 500 kW turbines with steel rotor blades and a rotor diameter of 34 m, several design modifications followed; e.g., the rotor diameter was increased to 35 m and a tip brake was added. Later, polyester blades were introduced and the rotor diameter was increased by 5 m. The drive train was also redesigned. Improvements on the 500 kW turbine concept have resulted in decreased cost, whereas annual energy output has increased to approx. 1.3 million kWh. Wind energy can contribute substantially to electricity supply; maximum output in kilowatt-hours is the target. Further improvement of the existing technology and implementation of flexible components may well prove to be a way to increase energy output, not only in medium- or large-sized wind turbines. 7 figs.

  6. Cyanobacterial composition of microbial mats from an Australian thermal spring: a polyphasic evaluation.

    Science.gov (United States)

    McGregor, Glenn B; Rasmussen, J Paul

    2008-01-01

    The cyanobacterial composition of microbial mats from an alkaline thermal spring issuing at 43-71 degrees C in tropical north-eastern Australia is described using a polyphasic approach. Eight genera and 10 species from three cyanobacterial orders were identified based on morphological characters. These represented taxa previously known as thermophilic from other continents. Ultrastructural analysis of the tower mats revealed that two filamentous morphotypes contributed the majority of the biomass. Both types had ultrastructural characteristics of the family Pseudanabaenaceae. DNA extracts were made from sections of the tentaculiform towers and the microbial community was analysed by 16S cyanobacteria-specific PCR and denaturing-gradient gel electrophoresis. Five significant bands were identified and sequenced. Two bands clustered closely with Oscillatoria amphigranulata isolated from New Zealand hot springs; one unique phylotype had only moderate similarity to a range of Leptolyngbya species; and one phylotype was closely related to a number of Geitlerinema species. Generally, the approaches yielded complementary information; however, the results suggest that species designation based on morphological and ultrastructural criteria alone often fails to recognize true phylogenetic position. Conversely, some molecular techniques may fail to detect rare taxa, suggesting that the widest possible suite of techniques should be applied when analysing the cyanobacterial diversity of natural populations. This is the first polyphasic evaluation of thermophilic cyanobacterial communities from the Australian continent.

  7. Polyphasic identification of Lechevaliera fradia subsp. Iranica, A rare actinomycete isolated from Loshan region of Iran

    Directory of Open Access Journals (Sweden)

    Mahdi Moshtaghi Nikou

    2015-02-01

    Introduction: Actinomycetes are widely distributed in natural and man-made environments and are capable of degrading organic matter. They are also well known as a rich source of antibiotics and bioactive molecules and are of considerable importance in industry. Materials and methods: In this study, a rare actinomycete was isolated and subjected to polyphasic identification, carried out according to the polyphasic taxonomic approach outlined by the International Committee on Systematics of Prokaryotes (ICSP). Results: The cell wall of strain LO5 contained meso-diaminopimelic acid as the diamino acid and galactose, mannose and rhamnose as diagnostic sugars. The phospholipids consisted of diphosphatidylglycerol, phosphatidylglycerol and phosphatidylethanolamine. Phylogenetic analysis based on a nearly complete 16S rRNA gene sequence comparison revealed affiliation to the family Pseudonocardiaceae; similarity to the most closely related neighbour, Lechevalieria fradiae CGMCC 4.3506T, was 98.6%. The level of DNA-DNA relatedness between the novel strain and Lechevalieria fradiae CGMCC 4.3506T was 75%. According to the recommended threshold value of 70% DNA-DNA similarity, LO5 therefore belongs to the species Lechevalieria fradiae CGMCC 4.3506T. On the basis of genomic and phenotypic properties, the subspecies Lechevalieria fradiae subsp. Iranica IBRC-M 10378 is proposed. Discussion and conclusion: A polyphasic approach based on phenotypic, chemotaxonomic and phylogenetic investigations has been applied to categorize a rare actinomycete at the subspecies level. The techniques used to obtain the data required to determine the taxonomic status of this isolate are based on minimal standards established by taxonomic subcommittees of the ICSP for specific groups of organisms.

  8. Narrow linewidth picosecond UV pulsed laser with mega-watt peak power.

    Science.gov (United States)

    Huang, Chunning; Deibele, Craig; Liu, Yun

    2013-04-08

    We demonstrate a master oscillator power amplifier (MOPA) burst-mode laser system that generates 66 ps/402.5 MHz pulses with megawatt peak power at 355 nm. The seed laser is a narrow-linewidth, single-frequency fiber laser, and the system operates in a 5-μs/10-Hz macropulse mode. The laser output has a transform-limited spectrum with a very narrow linewidth of the individual longitudinal modes. The immediate application of the laser system is laser-assisted hydrogen ion beam stripping for the Spallation Neutron Source (SNS).

  9. Recent developments in high average power driver technology

    International Nuclear Information System (INIS)

    Prestwich, K.R.; Buttram, M.T.; Rohwein, G.J.

    1979-01-01

    Inertial confinement fusion (ICF) reactors will require driver systems operating at tens to hundreds of megawatts of average power. The pulsed power technology required to build such drivers is in a primitive state of development. Recent developments in repetitive pulsed power are discussed. A high-voltage transformer has been developed and operated at 3 MV in a single-pulse experiment and is being tested at 1.5 MV, 5 kJ, and 10 pps. A low-loss 1 MV, 10 kJ, 10 pps Marx generator is being tested. Test results from gas-dynamic spark gaps operating in both the 100 kV and 700 kV ranges are reported. A 250 kV, 1.5 kA/cm², 30 ns electron beam diode has operated stably for 1.6 × 10⁵ pulses.
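    The average power of a repetitive pulser is simply energy per pulse times repetition rate. Applying that to the Marx generator figures quoted above (an illustrative check, not from the paper):

```python
energy_per_pulse_j = 10e3   # 10 kJ per pulse (Marx generator, from abstract)
rep_rate_hz = 10.0          # 10 pulses per second

avg_power_w = energy_per_pulse_j * rep_rate_hz   # 100 kW average power
```

This is why ICF drivers at tens of megawatts average power demand far more energetic and/or faster-repeating pulsers than the hardware described here.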

  10. An OFDM System Using Polyphase Filter and DFT Architecture for Very High Data Rate Applications

    Science.gov (United States)

    Kifle, Muli; Andro, Monty; Vanderaar, Mark J.

    2001-01-01

    This paper presents a conceptual architectural design of a four-channel Orthogonal Frequency Division Multiplexing (OFDM) system with an aggregate information throughput of 622 megabits per second (Mbps). Primary emphasis is placed on the generation and detection of the composite waveform using polyphase filter and Discrete Fourier Transform (DFT) approaches to digitally stack and bandlimit the individual carriers. The four-channel approach enables the implementation of a system that is both power and bandwidth efficient, yet retains enough parallelism to meet higher data rate goals. It also enables a DC-power-efficient transmitter suitable for on-board satellite systems, and a moderately complex receiver suitable for low-cost ground terminals. The major advantage of the system compared to a single-channel system is lower complexity and DC power consumption, because the highest sample rate is half that of the single-channel system and synchronization can occur at, at most, a quarter of the rate of a single-channel system, depending on the synchronization technique. The major disadvantage is the increased peak-to-average power ratio relative to the single-channel system. Simulation results in the form of bit-error-rate (BER) curves are presented in this paper.
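    The polyphase-filter-plus-DFT channelizer mentioned here is a standard structure: the prototype filter is split into M polyphase branches and an M-point DFT recombines the branch outputs, producing results identical to direct per-channel down-conversion and filtering at a fraction of the cost. A stdlib-only sketch under assumed toy parameters (illustrative names, random prototype filter) comparing the two:

```python
import cmath
import random

def channelize_direct(x, h, M, blocks):
    """Reference M-channel analysis bank: per channel k, down-convert by
    exp(-2j*pi*k*n/M), FIR-filter with h, and decimate by M."""
    out = []
    for k in range(M):
        ch = []
        for m in range(blocks):
            acc = 0j
            for t in range(len(h)):
                idx = m * M - t
                if 0 <= idx < len(x):
                    acc += h[t] * x[idx] * cmath.exp(-2j * cmath.pi * k * idx / M)
            ch.append(acc)
        out.append(ch)
    return out

def channelize_polyphase(x, h, M, blocks):
    """Same bank via polyphase decomposition of h plus an M-point DFT
    (computed naively here; an FFT would be used in practice)."""
    P = len(h) // M                       # taps per polyphase branch
    out = [[0j] * blocks for _ in range(M)]
    for m in range(blocks):
        # One filtered sample from each of the M polyphase branches.
        v = []
        for rho in range(M):
            acc = 0j
            for n in range(P):
                idx = (m - n) * M - rho
                if 0 <= idx < len(x):
                    acc += h[n * M + rho] * x[idx]
            v.append(acc)
        # M-point (inverse-)DFT recombines the branches into channel outputs.
        for k in range(M):
            out[k][m] = sum(v[r] * cmath.exp(2j * cmath.pi * k * r / M)
                            for r in range(M))
    return out

random.seed(0)
M, P, blocks = 4, 3, 6
h = [random.uniform(-1, 1) for _ in range(M * P)]        # toy prototype filter
x = [complex(random.uniform(-1, 1), random.uniform(-1, 1))
     for _ in range(M * blocks)]
direct = channelize_direct(x, h, M, blocks)
poly = channelize_polyphase(x, h, M, blocks)             # matches `direct`
```

The polyphase version runs the filter arithmetic at 1/M of the input rate, which is the efficiency argument behind the architecture in this record.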

  11. Contribution to the study of diffusion in poly-phase system; Contribution a l'etude de la diffusion en systeme polyphase

    Energy Technology Data Exchange (ETDEWEB)

    Adda, Y; Philibert, J [Commissariat a l' Energie Atomique, Saclay (France).Centre d' Etudes Nucleaires; Institut de Recherches de la Siderurgie Francaise (IRSID), 78 - Saint-Germain-en-Laye (France)

    1959-07-01

    After chemical diffusion between two metals at temperatures where, according to the equilibrium diagram, several phases exist, parallel bands corresponding to these phases can be seen in a section perpendicular to the diffusion front. It is known that in this case there are discontinuities in the concentration-penetration curve corresponding to the interfaces. The concentrations at which the discontinuities occur give the solubility limits in each of the phases present. During our experiments on the uranium-zirconium system, we verified that these concentrations do not vary with diffusion time and therefore that the conditions of thermodynamic equilibrium are obeyed. It follows that an interesting method is available for determining the equilibrium diagram in the solid state; we have applied this method to the U-Zr system. Kinetic studies of poly-phase diffusion are as yet relatively scarce owing to experimental difficulty. Various methods based on purely micrographic studies (measurement of the thickness of intermediate phases) have also been proposed for evaluating the diffusion coefficient. Our experimental results show that the hypotheses on which these methods are based are rarely valid. We have established concentration-penetration curves for the systems U-Zr (between 590 deg. C and 950 deg. C) and U-Mo (between 800 deg. C and 1050 deg. C). These curves very often show pronounced curvature, indicating variations in the diffusion coefficient that cannot be expressed by simple relationships. Finally, we have observed certain anomalies in the neighbourhood of the interfaces between adjacent phases. Furthermore, we have studied the Kirkendall effect in a poly-phase system by marking the plane of welding with tungsten wires, and compared these results to those from a previous study in the homogeneous phase. We have found that the presence of phase boundaries accentuates this effect. The interpretation of...

  12. Multi-megawatt wind-power installations call for new, high-performance solutions

    International Nuclear Information System (INIS)

    2004-01-01

    This article discusses the development of increasingly powerful and profitable wind-energy installations for off-shore, on-shore and refurbishment sites. In particular, the rapid development of megawatt-class units is discussed. The latest products of various companies with rotor diameters of up to 120 metres and with power ratings of up to 5 MW are looked at and commented on. The innovations needed for the reduction of weight and the extreme demands placed on gearing systems are discussed. Also, the growing markets for wind energy installations in Europe and the United States are discussed and plans for new off-shore wind parks are looked at

  13. Possible Gems and Ultra-Fine Grained Polyphase Units in Comet Wild 2.

    Science.gov (United States)

    Gainsforth, Z.; Butterworth, A. L.; Jilly-Rehak, C. E.; Westphal, A. J.; Brownlee, D. E.; Joswiak, D.; Ogliore, R. C.; Zolensky, M. E.; Bechtel, H. A.; Ebel, D. S.

    2016-01-01

    GEMS and ultrafine-grained polyphase units (UFG-PUs) in anhydrous IDPs are probably some of the most primitive materials in the solar system. UFG-PUs contain nanocrystalline silicates, oxides, metals and sulfides. GEMS are rounded amorphous silicates, approximately 100 nm across, containing embedded iron-nickel metal grains and sulfides. GEMS are one of the most abundant constituents of some anhydrous CP IDPs, often accounting for half the material or more. When NASA's Stardust mission returned with samples from comet Wild 2 in 2006, it was thought that UFG-PUs and GEMS would be among the most abundant materials found. However, possibly because of heating during the capture process in aerogel, neither GEMS nor UFG-PUs have been clearly found.

  14. Multi-megawatt power system trade study

    Science.gov (United States)

    Longhurst, Glen R.; Schnitzler, Bruce G.; Parks, Benjamin T.

    2002-01-01

    A concept study was undertaken to evaluate potential multi-megawatt power sources for nuclear electric propulsion. The nominal electric power requirement was set at 15 MWe with an assumed mission profile of 120 days at full power, 60 days in hot standby, and another 120 days of full power, repeated several times over 7 years of service. The two configurations examined were (1) a gas-cooled reactor based on the NERVA Derivative design, operating a closed-cycle Brayton power conversion system; and (2) a molten-metal-cooled reactor based on SP-100 technology, driving a boiling potassium Rankine power conversion system. This study considered the relative merits of these two systems, seeking to optimize the specific mass. It concluded that either concept appeared capable of reaching the specific mass goal of 3-5 kg/kWe estimated to be needed for this class of mission, though neither could be realized without substantial development in reactor fuels technology, thermal radiator mass and volume efficiency, and power conversion and distribution electronics and systems capable of operating at high temperatures. The gas-Brayton system showed a specific mass advantage (3.17 vs 6.43 kg/kWe for the baseline cases) under the set of assumptions used and eliminated the need to deal with two-phase working-fluid flows in the microgravity environment of space.
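    Specific mass translates directly into power-system mass at the 15 MWe design point. Using the baseline figures quoted above (an illustrative calculation, not from the study):

```python
electric_power_kwe = 15e3            # 15 MWe requirement, in kWe

gas_brayton_kg_per_kwe = 3.17        # baseline specific masses from the abstract
k_rankine_kg_per_kwe = 6.43

brayton_mass_kg = electric_power_kwe * gas_brayton_kg_per_kwe   # ~47,550 kg
rankine_mass_kg = electric_power_kwe * k_rankine_kg_per_kwe     # ~96,450 kg
```

The roughly factor-of-two mass difference is the quantitative content of the Brayton system's "specific mass advantage."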

  15. Contribution to the study of diffusion in poly-phase system; Contribution a l'etude de la diffusion en systeme polyphase

    Energy Technology Data Exchange (ETDEWEB)

    Adda, Y.; Philibert, J. [Commissariat a l' Energie Atomique, Saclay (France).Centre d' Etudes Nucleaires; Institut de Recherches de la Siderurgie Francaise (IRSID), 78 - Saint-Germain-en-Laye (France)

    1959-07-01

    After chemical diffusion between two metals at temperatures where, according to the equilibrium diagram, several phases exist, parallel bands corresponding to these phases can be seen in a section perpendicular to the diffusion front. It is known that in this case there are discontinuities in the concentration-penetration curve corresponding to the interfaces. The concentrations at which the discontinuities occur give the solubility limits in each of the phases present. During our experiments on the uranium-zirconium system, we verified that these concentrations do not vary with diffusion time and therefore that the conditions of thermodynamic equilibrium are obeyed. It follows that an interesting method is available for determining the equilibrium diagram in the solid state; we have applied this method to the U-Zr system. Kinetic studies of poly-phase diffusion are as yet relatively scarce owing to experimental difficulty. Various methods based on purely micrographic studies (measurement of the thickness of intermediate phases) have also been proposed for evaluating the diffusion coefficient. Our experimental results show that the hypotheses on which these methods are based are rarely valid. We have established concentration-penetration curves for the systems U-Zr (between 590 deg. C and 950 deg. C) and U-Mo (between 800 deg. C and 1050 deg. C). These curves very often show pronounced curvature, indicating variations in the diffusion coefficient that cannot be expressed by simple relationships. Finally, we have observed certain anomalies in the neighbourhood of the interfaces between adjacent phases. Furthermore, we have studied the Kirkendall effect in a poly-phase system by marking the plane of welding with tungsten wires, and compared these results to those from a previous study in the homogeneous phase. We have found that the presence of phase boundaries accentuates this effect. The interpretation of...

  16. Development of Megawatt Demand Setter for Plant Operating Flexibility

    International Nuclear Information System (INIS)

    Kim, Se Chang; Hah, Yeong Joon; Song, In Ho; Lee, Myeong Hun; Chang, Do Ik; Choi, Jung In

    1993-05-01

    The conceptual design of the Megawatt Demand Setter (MDS) is presented for the Korean Standardized Nuclear Power Plant. The MDS is a digital supervisory limitation system. It ensures that the plant does not exceed its operating limits by regulating plant operations through monitoring the operating margins of critical parameters. The MDS is aimed at increasing the operating flexibility that allows the nuclear plant to meet grid demand in a very efficient manner. It responds to grid demand without penalizing plant availability by limiting the load demand when operating limits are approached or violated. The MDS design concepts were tested using simulation responses of Yonggwang Units 3 and 4, whose design would be used as the reference upon which designs of Korean Standardized Nuclear Power Plants would be based. The simulation results illustrate that the MDS can be used to improve operating flexibility. (Author)
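    A hedged sketch of the supervisory-limitation idea described here (not the actual MDS logic): the load demand passed to the plant is clamped whenever any monitored operating margin falls inside a reserve band. All names and numbers are illustrative:

```python
def limited_demand(grid_demand_mw, current_output_mw, margins, reserve=0.05):
    """Pass the grid demand through unless some monitored operating margin
    (fractional distance from its limit) is inside the reserve band; in
    that case, do not raise load and back off slightly from the current
    output."""
    if min(margins.values()) < reserve:
        return min(grid_demand_mw, current_output_mw * 0.98)
    return grid_demand_mw

# Margins well clear of limits: the demand passes through unchanged.
ok = limited_demand(1000.0, 950.0, {"DNBR": 0.20, "LPD": 0.15})

# One margin inside the 5% reserve band: demand is clamped below current output.
clamped = limited_demand(1000.0, 950.0, {"DNBR": 0.03, "LPD": 0.15})
```

The real MDS would act on validated plant-computer signals with rate limits and operator interfaces; this sketch only shows the clamping behavior.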

  17. Phase 1 Integrated Systems Test and Characterization Report for the 5-Megawatt Dynamometer and Controllable Grid Interface

    Energy Technology Data Exchange (ETDEWEB)

    Wallen, Robert B [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Lambert, Scott R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gevorgian, Vahan [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dana, Scott [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-03-13

    This report details the commissioning of the 5-megawatt dynamometer at the National Wind Technology Center at the National Renewable Energy Laboratory. The purpose of these characterization tests was to verify the dynamometer's performance over the widest possible range of operating conditions, gain insight into system-level behavior, and establish confidence in measurement data.

  18. LCA sensitivity analysis of a multi-megawatt wind turbine

    International Nuclear Information System (INIS)

    Martinez, E.; Jimenez, E.; Blanco, J.; Sanz, F.

    2010-01-01

    In recent years, renewables have gradually acquired a significant importance in the world market (especially in the Spanish energy market) and in society; this makes clear the need to increase and improve knowledge of these power sources. Starting from the results of a Life Cycle Assessment (LCA) of a multi-megawatt wind turbine, this work assesses the relevance of different choices made during its development. To cover the largest possible spectrum of options, four scenarios have been analysed, focused on four main phases of the life cycle: maintenance, manufacturing, dismantling, and recycling. These scenarios make it possible to assess the degree of uncertainty in the developed LCA due to the choices made, excluding from the assessment the uncertainty due to the inaccuracy and simplification of the environmental models used, or to spatial and temporal variability in different parameters. The work was carried out throughout using the Eco-indicator 99 LCA method. (author)

  19. LCA sensitivity analysis of a multi-megawatt wind turbine

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, E. [Grupo Eolicas Riojanas, Carretera de Laguardia, 91-93, 26006 Logrono, La Rioja (Spain); Department of Mechanical Engineering, University of La Rioja, Logrono, La Rioja (Spain); Jimenez, E. [Department of Electrical Engineering, University of La Rioja, Logrono, La Rioja (Spain); Blanco, J.; Sanz, F. [Department of Mechanical Engineering, University of La Rioja, Logrono, La Rioja (Spain)

    2010-07-15

    In recent years, renewables have gradually acquired a significant importance in the world market (especially in the Spanish energy market) and in society; this makes clear the need to increase and improve knowledge of these power sources. Starting from the results of a Life Cycle Assessment (LCA) of a multi-megawatt wind turbine, this work assesses the relevance of different choices made during its development. To cover the largest possible spectrum of options, four scenarios have been analysed, focused on four main phases of the life cycle: maintenance, manufacturing, dismantling, and recycling. These scenarios make it possible to assess the degree of uncertainty in the developed LCA due to the choices made, excluding from the assessment the uncertainty due to the inaccuracy and simplification of the environmental models used, or to spatial and temporal variability in different parameters. The work was carried out throughout using the Eco-indicator 99 LCA method. (author)

  20. Test facility for the development of 150-keV, multi-megawatt neutral beam systems

    International Nuclear Information System (INIS)

    Haughian, W.; Baker, W.R.; Biagi, L.A.; Hopkins, D.B.

    1975-11-01

    The next generation of CTR experiments, such as the Tokamak Fusion Test Reactor (TFTR), will require neutral-beam injection systems that produce multi-megawatt, 120-keV deuterium-beam pulses of 0.5-second duration. Since present injection systems are operating in the 10- to 40-keV range, an intensive development effort is in progress to meet a 150-keV requirement. The vacuum system and power supplies that make up a test facility to be used in the development of these injectors are described

  1. Megawatt Class Nuclear Space Power Systems (MCNSPS) conceptual design and evaluation report. Volume 2, technologies 1: Reactors, heat transport, integration issues

    Science.gov (United States)

    Wetch, J. R.

    1988-01-01

    The objectives of the Megawatt Class Nuclear Space Power System (MCNSPS) study are summarized and candidate systems and subsystems are described. Particular emphasis is given to the heat rejection system and the space reactor subsystem.

  2. Design of megawatt power level heat pipe reactors

    Energy Technology Data Exchange (ETDEWEB)

    Mcclure, Patrick Ray [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Poston, David Irvin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dasari, Venkateswara Rao [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Reid, Robert Stowers [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-11-12

An important niche for nuclear energy is the need for power at remote locations removed from a reliable electrical grid. Nuclear energy has potential applications at strategic defense locations, theaters of battle, remote communities, and emergency locations. With proper safeguards, a 1 to 10-MWe (megawatt electric) mobile reactor system could provide robust, self-contained, and long-term power in any environment. Heat pipe-cooled fast-spectrum nuclear reactors have been identified as a candidate for these applications. Heat pipe reactors, using alkali metal heat pipes, are perfectly suited for mobile applications because they are inherently simpler, smaller, and more reliable than “traditional” reactors. The goal of this project was to develop a scalable conceptual design for a compact reactor and to identify scaling issues for compact heat pipe cooled reactors in general. Toward this goal two detailed concepts were developed: the first with more conventional materials and a power of about 2 MWe, and the second with less conventional materials and a power level of about 5 MWe. A series of more qualitative advanced designs were also developed (with less detail) showing that power levels can be pushed to approximately 30 MWe.

  3. Performance of a 2-megawatt high voltage test load

    International Nuclear Information System (INIS)

    Horan, D.; Kustom, R.; Ferguson, M.

    1995-01-01

A high-power, water-cooled resistive load which simulates the electrical load characteristics of a high-power klystron, capable of 2 megawatts dissipation at 95 kV DC, was built and installed at the Advanced Photon Source for use in load-testing high voltage power supplies. During this testing, the test load has logged approximately 35 hours of operation at power levels in excess of one megawatt. Slight variations in the resistance of the load during operation indicate that leakage currents in the cooling water may be a significant factor affecting the performance of the load. Sufficient performance data have been collected to indicate that leakage current through the deionized (DI) water coolant shunts roughly 15 percent of the full-load current around the load resistor elements. The leakage current could cause deterioration of internal components of the load. The load pressure vessel was disassembled and inspected internally for any signs of significant wear and distress. Results of this inspection and possible modifications for improved performance will be discussed.

  4. Analogue modelling of a reactivated, basement controlled strike-slip zone, Sierra de Albarracín, Spain: application of sandbox modelling to polyphase deformation

    NARCIS (Netherlands)

    Merten, S.; Smit, W.G.; Nieuwland, D.A.; Rondeel, H.E.

    2006-01-01

    This paper presents the results of an analogue modelling study on the reactivation of Riedel shears generated by basement-induced sinistral strike-slip faulting. It is based on a natural example in the Sierra de Albarracín, Iberian Range (Spain). The area has a polyphase deformation history, defined

  5. 1 megawatt, 100 GHz gyrotron study. Final report, March 21-September 1, 1983

    International Nuclear Information System (INIS)

    Dionne, N.J.; Mallavarpu, R.; Palevsky, A.

    1983-01-01

    This report provides the results of a design study on a gyrotron device employing a new type of hollow gyrobeam formation system and having a capability for delivering megawatt CW power at 100 GHz to an ECRH-heated, magnetically-confined plasma. The conceptual basis for the beam formation system is the tilt-angle gun (TAG) in which a conically-shaped electron beam is formed in a magnetically-shielded region and is then injected into the stray-field region of the main magnetic focusing system. Because fluid coolants can be accessed through the central pole of the TAG-type gun, rf interaction can be contemplated with cavity configurations not practical with the conventional MIG-type gyrobeam formation systems

  6. Cyanobacterial diversity in extreme environments in Baja California, Mexico: a polyphasic study.

    Science.gov (United States)

    López-Cortés, A; García-Pichel, F; Nübel, U; Vázquez-Juárez, R

    2001-12-01

Cyanobacterial diversity in two geographical areas of Baja California Sur, Mexico, was studied: Bahia Concepcion and Ensenada de Aripez. The sites included hypersaline ecosystems, sea bottom, hydrothermal springs, and a shrimp farm. In this report we describe four new morphotypes; two are marine epilithic forms from Bahia Concepcion, Dermocarpa sp. and Hyella sp. The third, Geitlerinema sp., occurs in thermal springs and in shrimp ponds, and the fourth, Tychonema sp., is from a shrimp pond. The partial sequences of the 16S rRNA genes and the phylogenetic relationships of four cyanobacterial strains (Synechococcus cf. elongatus, Leptolyngbya cf. thermalis, Leptolyngbya sp., and Geitlerinema sp.) are also presented. Polyphasic studies that combine light microscopy, cultures and the comparative analysis of 16S rRNA gene sequences provide the most powerful approach currently available to establish the diversity of these oxygenic photosynthetic microorganisms in culture and in nature.

  7. Grid Filter Design for a Multi-Megawatt Medium-Voltage Voltage Source Inverter

    DEFF Research Database (Denmark)

    Rockhill, A.A.; Liserre, Marco; Teodorescu, Remus

    2011-01-01

This paper describes the design procedure and performance of an LCL grid filter for a medium-voltage neutral point clamped (NPC) converter to be adopted for a multi-megawatt wind turbine. The unique filter design challenges in this application are driven by a combination of the medium-voltage converter, a limited allowable switching frequency, component physical size and weight concerns, and the stringent limits for allowable injected current harmonics. Traditional design procedures of grid filters for lower-power and higher-switching-frequency converters are not valid for a multi-megawatt filter connecting a medium-voltage converter switching at low frequency to the electric grid. This paper demonstrates a frequency-domain, model-based approach to determine the optimum filter parameters that provide the necessary performance under all operating conditions given the necessary design…
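The central quantity in such a design is the LCL resonance frequency, which must be kept away from both the grid harmonics and the switching frequency. The sketch below uses the standard LCL resonance formula with illustrative component values chosen for this example, not figures from the paper:

```python
import math

def lcl_resonance_hz(L_inv, L_grid, C_f):
    """Resonance frequency of an LCL filter: f_res = sqrt((Li+Lg)/(Li*Lg*Cf)) / (2*pi)."""
    return math.sqrt((L_inv + L_grid) / (L_inv * L_grid * C_f)) / (2 * math.pi)

# Hypothetical medium-voltage, low-switching-frequency values (assumptions):
L_inv, L_grid, C_f = 1.2e-3, 0.4e-3, 150e-6   # converter-side L, grid-side L (H), shunt C (F)
f_grid, f_sw = 50.0, 1650.0                   # grid and switching frequency, Hz

f_res = lcl_resonance_hz(L_inv, L_grid, C_f)

# A common rule of thumb places the resonance well above the low-order grid
# harmonics and below half the switching frequency, so neither excites it.
assert 10 * f_grid < f_res < f_sw / 2
print(f"f_res ≈ {f_res:.0f} Hz")
```

With these values the resonance lands around 750 Hz, inside the usual `10·f_grid < f_res < f_sw/2` window; a full design would then verify damping and harmonic attenuation across all operating points.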

  8. The Energetik V microcomputer system for a 210 megawatt power unit

    Energy Technology Data Exchange (ETDEWEB)

    Mumdzhiyan, G.; Astradzhiyan, G.; Ganchev, T.; Stanchev, V.

    1982-01-01

Contemporary concepts of management and testing of a thermoelectric power plant (TES) using microcomputer systems (MKS) are characterized by four principles: the division of the whole system into functional groups; clear formulation and limitation of the functions of the information system, shielding, the automatic regulation system (SAR) and distinct control; strict standardization of the hierarchical levels and decentralization, including the preparation of signals for regulation and control; and maximal centralization at the upper level of testing. In developing a microcomputer system for a 210 megawatt unit, provision was made for the constant expansion of the functions of the microcomputer system and its software, realized in the form of program modules, along with backup of measurement systems, the wide use of dialog terminals and so on. SM 601 or MS 6800 microprocessors are used in the system. The basic storage comprises 34 kilobytes of passive storage and 16 kilobytes of operational storage.

  9. Polyphasic taxonomy of Aspergillus section Aspergillus (formerly Eurotium), and its occurrence in indoor environments and food

    Directory of Open Access Journals (Sweden)

    A.J. Chen

    2017-09-01

Aspergillus section Aspergillus (formerly the genus Eurotium) includes xerophilic species with uniseriate conidiophores, globose to subglobose vesicles, green conidia and yellow, thin walled eurotium-like ascomata with hyaline, lenticular ascospores. In the present study, a polyphasic approach using morphological characters, extrolites, physiological characters and phylogeny was applied to investigate the taxonomy of this section. Over 500 strains from various culture collections and new isolates obtained from indoor environments and a wide range of substrates all over the world were identified using calmodulin gene sequencing. Of these, 163 isolates were subjected to molecular phylogenetic analyses using sequences of ITS rDNA, partial β-tubulin (BenA), calmodulin (CaM) and RNA polymerase II second largest subunit (RPB2) genes. Colony characteristics were documented on eight cultivation media, growth parameters at three incubation temperatures were recorded, and micromorphology was examined using light microscopy as well as scanning electron microscopy to illustrate and characterize each species. Many specific extrolites were extracted and identified from cultures, including echinulins, epiheveadrides, auroglaucins and anthraquinone bisanthrons, and were found to be consistent across strains of nearly all species. Other extrolites are species-specific, and thus valuable for identification. Several extrolites show antioxidant effects, which may be nutritionally beneficial in food and beverages. Important mycotoxins in the strict sense, such as sterigmatocystin, aflatoxins, ochratoxins and citrinin, were not detected despite previous reports on their production in this section. Adopting a polyphasic approach, 31 species are recognized, including nine new species. ITS is highly conserved in this section and does not distinguish species. All species can be differentiated using CaM or RPB2 sequences. For BenA, Aspergillus brunneus and A. niveoglaucus share identical

  10. Recent advances in the development of high average power induction accelerators for industrial and environmental applications

    International Nuclear Information System (INIS)

    Neau, E.L.

    1994-01-01

Short-pulse accelerator technology developed during the early 1960's through the late 1980's is being extended to high average power systems capable of use in industrial and environmental applications. Processes requiring high dose levels and/or high volume throughput will require systems with beam power levels from several hundreds of kilowatts to megawatts. Beam accelerating potentials can range from less than 1 MeV to as much as 10 MeV depending on the type of beam, depth of penetration required, and the density of the product being treated. This paper addresses the present status of a family of high average power systems, with output beam power levels up to 200 kW, now in operation that use saturable core switches to achieve output pulse widths of 50 to 80 nanoseconds. Inductive adders and field emission cathodes are used to generate beams of electrons or x-rays at up to 2.5 MeV over areas of 1000 cm². Similar high average power technology is being used at ≤ 1 MeV to drive repetitive ion beam sources for treatment of material surfaces over hundreds of cm².

  11. Design of a sodium-air heat dissipator capable of transmitting powers till a megawatt

    International Nuclear Information System (INIS)

    Castellanos C, G.

    1977-01-01

This is a theoretical study of transport phenomena with emphasis on heat transfer. From the chemical and nuclear points of view, the behavior of sodium as a heat-transfer agent and as a fluid is reviewed. Heat transfer over extended surfaces is analyzed, and the design of a sodium-air heat dissipator capable of transferring power on the order of a megawatt is presented together with a computer simulation. The results show that the heat transfer coefficients do not vary greatly with temperature, so the caloric temperature can be used to determine the sodium properties and the mean temperature to determine the air properties. (author)
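A first-cut sizing of such a dissipator typically uses Q = U·A·LMTD for a counterflow exchanger. The sketch below uses assumed temperatures and an assumed overall heat-transfer coefficient, for illustration only, not values from the thesis:

```python
import math

# Counterflow sodium-to-air dissipator sized for ~1 MW via Q = U * A * LMTD.
# All temperatures and the coefficient U are assumptions for this sketch.
T_na_in, T_na_out = 550.0, 400.0    # sodium inlet/outlet, °C
T_air_in, T_air_out = 40.0, 300.0   # air inlet/outlet, °C

dT1 = T_na_in - T_air_out           # hot-end temperature approach
dT2 = T_na_out - T_air_in           # cold-end temperature approach
lmtd = (dT1 - dT2) / math.log(dT1 / dT2)   # log-mean temperature difference

Q = 1.0e6                           # heat duty, W
U = 50.0                            # overall coefficient, W/(m^2·K), air-side limited

A = Q / (U * lmtd)                  # required heat-transfer surface area
print(f"LMTD ≈ {lmtd:.0f} K, required area ≈ {A:.0f} m^2")
```

With these assumptions the LMTD is about 300 K and the required finned surface is on the order of tens of square metres; the air-side coefficient dominates, which is why the thesis's finding that the sodium-side coefficients vary little with temperature simplifies the design.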

  12. A 2-megawatt load for testing high voltage DC power supplies

    International Nuclear Information System (INIS)

    Horan, D.; Kustom, R.; Ferguson, M.; Primdahl, K.

    1993-01-01

A high power water-cooled resistive load, capable of dissipating 2 megawatts at 95 kilovolts, is being designed and built. The load utilizes wirewound resistor elements suspended inside insulating tubing contained within a pressure vessel, which is supplied with a continuous flow of deionized water as coolant. A sub-system of the load is composed of non-inductive resistor elements in an oil tank. Power tests conducted on various resistor types indicate that dissipation levels as high as 22 times the rated dissipation in air can be achieved when the resistors are placed in a turbulent water flow of at least 15 gallons per minute. Using these data, the load was designed using 100 resistor elements in a series arrangement. A single-wall 316 stainless steel pressure vessel with flanged torispherical heads was built to contain the resistor assembly and deionized water. The resistors are suspended within G-11 tubes which span the cylindrical length of the vessel. These tubes are supported by G-10 baffles which also increase convection from the tubes by promoting turbulence within the surrounding water.
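The quoted figures (2 MW at 95 kV, 100 series elements, 22× the in-air rating under turbulent water flow) can be sanity-checked with simple circuit arithmetic; the per-element air rating below is inferred from those numbers, not stated in the abstract:

```python
# Back-of-envelope check of the load design figures quoted in the abstract:
# 2 MW dissipated at 95 kV using 100 series wirewound elements, with water
# cooling allowing up to ~22x the rated in-air dissipation per element.
P, V, n = 2.0e6, 95.0e3, 100

R_total = V**2 / P            # total series resistance, ohms
I = P / V                     # load current, A (same through every series element)
P_elem = P / n                # dissipation per element, W
P_air_rating = P_elem / 22    # implied minimum in-air rating per resistor, W

print(f"R_total ≈ {R_total:.0f} ohm, I ≈ {I:.1f} A")
print(f"per element: {P_elem/1e3:.0f} kW dissipated, ~{P_air_rating/1e3:.2f} kW air rating")
```

The numbers work out to roughly 4.5 kΩ total (about 45 Ω per element), 21 A of load current, and 20 kW dissipated per element, implying resistors rated near 1 kW in air once the 22× water-cooling factor is applied.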

  13. Polyphasic taxonomic analysis establishes Mycobacterium indicus pranii as a distinct species.

    Directory of Open Access Journals (Sweden)

    Vikram Saini

BACKGROUND: Mycobacterium indicus pranii (MIP), popularly known as Mw, is a cultivable, non-pathogenic organism which, based on its growth and metabolic properties, is classified in Runyon Group IV along with M. fortuitum, M. smegmatis and M. vaccae. The novelty of this bacterium was attributed to its immunological ability to undergo antigen driven blast transformation of leukocytes and delayed hypersensitivity skin test in leprosy patients, a disease endemic in the Indian sub-continent. Consequently, MIP has been extensively evaluated for its biochemical and immunological properties, leading to its usage as an immunomodulator in leprosy and tuberculosis patients. However, owing to advances in sequencing and culture techniques, the reporting of new strains with almost 100% similarity in the sequences of marker genes like 16S rRNA has compromised the identity of MIP as a novel species. Hence, to define its precise taxonomic position, we have carried out polyphasic taxonomic studies on MIP that integrate its phenotypic, chemotaxonomic and molecular phylogenetic attributes. METHODOLOGY/PRINCIPAL FINDINGS: The comparative analysis of the 16S rRNA sequence of MIP using the BLAST algorithm at NCBI (nr database) revealed a similarity of ≥99% with M. intracellulare, M. arosiense, M. chimaera, M. seoulense, M. avium subsp. hominissuis, M. avium subsp. paratuberculosis and M. bohemicum. Further analysis with other widely used markers like rpoB and hsp65 could resolve the phylogenetic relationship between MIP and other closely related mycobacteria apart from M. intracellulare and M. chimaera, which share ≥99% similarity with the corresponding MIP orthologues. Molecular phylogenetic analysis, based on the concatenation of candidate orthologues of 16S rRNA, hsp65 and rpoB, also substantiated its distinctiveness from all the related organisms used in the analysis, excluding M. intracellulare and M. chimaera, with which it exhibited a close proximity. This

  14. Comparison of methods for the identification and sub-typing of O157 and non-O157 Escherichia coli serotypes and their integration into a polyphasic taxonomy approach

    Directory of Open Access Journals (Sweden)

    Prieto-Calvo M.A.

    2016-12-01

Phenotypic, chemotaxonomic and genotypic data from 12 strains of Escherichia coli were collected, including carbon source utilisation profiles, ribotypes, sequencing data of the 16S–23S rRNA internal transcribed region (ITS) and Fourier transform-infrared (FT-IR) spectroscopic profiles. The objectives were to compare several identification systems for E. coli and to develop and test a polyphasic taxonomic approach using the four methodologies combined for the sub-typing of O157 and non-O157 E. coli. The nucleotide sequences of the 16S–23S rRNA ITS regions were amplified by polymerase chain reaction (PCR), sequenced and compared with reference data available at the GenBank database using the Basic Local Alignment Search Tool (BLAST). Additional information comprising the utilisation of carbon sources, riboprint profiles and FT-IR spectra was also collected. The capacity of the methods for the identification and typing of E. coli to species and subspecies levels was evaluated. Data were transformed and integrated to present polyphasic hierarchical clusters and relationships. The study reports the use of an integrated scheme comprising phenotypic, chemotaxonomic and genotypic information (carbon source profile, sequencing of the 16S–23S rRNA ITS, ribotyping and FT-IR spectroscopy) for a more precise characterisation and identification of E. coli. The results showed that identification of E. coli strains by each individual method was limited mainly by the extension and quality of reference databases. On the contrary, the polyphasic approach, whereby heterogeneous taxonomic data were combined and weighted, improved the identification results, gave more consistency to the final clustering and provided additional information on the taxonomic structure and phenotypic behaviour of strains, as shown by the close clustering of strains with similar stress resistance patterns.

  15. Regional polyphase deformation of the Eastern Sierras Pampeanas (Argentina Andean foreland): strengths and weaknesses of paleostress inversion

    Science.gov (United States)

    Traforti, Anna; Zampieri, Dario; Massironi, Matteo; Viola, Giulio; Alvarado, Patricia; Di Toro, Giulio

    2016-04-01

The Eastern Sierras Pampeanas of central Argentina are composed of a series of basement-cored ranges, located in the Andean foreland c. 600 km east of the Andean Cordillera. Although uplift of the ranges is partly attributed to the regional Neogene evolution (Ramos et al. 2002), many questions remain as to the timing and style of deformation. In fact, the Eastern Sierras Pampeanas show compelling evidence of a long lasting brittle history (spanning the Early Carboniferous to Present time), characterised by several deformation events reflecting different tectonic regimes. Each deformation phase resulted in further strain increments accommodated by reactivation of inherited structures and rheological anisotropies (Martino 2003). In the framework of such a polyphase brittle tectonic evolution affecting highly anisotropic basement rocks, the application of paleostress inversion methods, though powerful, suffers from some shortcomings, such as the likely heterogeneous character of fault slip datasets and the possible reactivation of even highly misoriented structures, and thus requires careful analysis. The challenge is to gather sufficient fault-slip data to develop a proper understanding of the regional evolution. This is done by the identification of internally consistent fault and fracture subsets (associated with distinct stress states on the basis of their geometric and kinematic compatibility) in order to generate a chronologically-constrained evolutionary conceptual model. Based on large fault-slip datasets collected in the Sierras de Cordoba (Eastern Sierras Pampeanas), reduced stress tensors have been generated and interpreted as part of an evolutionary model by considering the obtained results against: (i) existing K-Ar illite ages of fault gouges in the study area (Bense et al. 2013), (ii) the nature and orientation of pre-existing anisotropies and (iii) the present-day stress field due to the convergence of the Nazca and South America plates (main shortening

  16. Deformable trailing edge flaps for modern megawatt wind turbine controllers using strain gauge sensors

    DEFF Research Database (Denmark)

    Andersen, Peter Bjørn; Henriksen, Lars Christian; Gaunaa, Mac

    2010-01-01

The present work contains a deformable trailing edge flap controller integrated in a numerically simulated modern, variable-speed, pitch-regulated megawatt (MW)-size wind turbine. The aeroservoelastic multi-body code HAWC2 acts as a component in the control loop design. At the core of the proposed … By enabling the trailing edge to move independently and quickly along the spanwise position of the blade, local small fluctuations in the aerodynamic forces can be alleviated by deformation of the airfoil flap. Strain gauges are used as input for the flap controller, and the effect of placing strain gauges … edge flaps on a wind turbine blade rather than a conclusive control design with traditional issues like stability and robustness fully investigated. Recent works have shown that the fatigue load reduction by use of trailing edge flaps may be greater than for traditional pitch control methods…

  17. Analysis of a 10 megawatt space-based solar-pumped neodymium laser system

    Science.gov (United States)

    Kurweg, U. H.

    1984-01-01

A ten megawatt solar-pumped continuous liquid laser system for space applications is examined. It is found that a single inflatable mirror of 434 m diameter used in conjunction with a conical secondary concentrator is sufficient to side-pump a liquid neodymium lasant in an annular tube of 6 m length and 1 m outer and 0.8 m inner diameter. About one fourth of the intercepted radiation converging on the laser tube is absorbed, and one fifth of this radiation is effective in populating the upper levels. The liquid lasant is flowed through the annular laser cavity at 1.9 m/s and is cooled via a heat exchanger and a large radiator surface comparable in size to the concentrating mirror. The power density of incident light within the lasant of approximately 68 W/cm³ required for cw operation is exceeded in the present annular configuration. The total system weight is 20,500 kg, so the system is capable of being transported to near Earth orbit by a single shuttle flight.
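The quoted mirror diameter and absorption fractions can be checked for mutual consistency with a 10 MW-class device using a simple power budget; the near-Earth solar constant used below is an assumed standard value, not a figure from the abstract:

```python
import math

# Energy-budget check of the abstract's figures: a 434 m diameter collector
# in space, ~1/4 of intercepted sunlight absorbed by the lasant, and ~1/5 of
# the absorbed power effective in populating the upper laser levels.
SOLAR_CONSTANT = 1361.0          # W/m^2, assumed near-Earth value
d = 434.0                        # collector mirror diameter, m

P_intercepted = SOLAR_CONSTANT * math.pi * (d / 2) ** 2
P_absorbed = P_intercepted / 4       # fraction absorbed in the lasant
P_upper_level = P_absorbed / 5       # fraction pumping the upper levels

print(f"intercepted ≈ {P_intercepted/1e6:.0f} MW, upper-level pumping ≈ {P_upper_level/1e6:.1f} MW")
```

The collector intercepts roughly 200 MW, and applying the two quoted fractions leaves about 10 MW delivered to the upper laser levels, consistent with the 10 MW system class described.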

  18. Silicon controlled rectifier polyphase bridge inverter commutated with gate-turn-off thyristor

    Science.gov (United States)

    Edwards, Dean B. (Inventor); Rippel, Wally E. (Inventor)

    1986-01-01

A polyphase SCR inverter (10) having N switching poles, each comprised of two SCR switches (1A, 1B; 2A, 2B . . . NA, NB) and two diodes (D1A, D1B; D2A, D2B . . . DNA, DNB) in series opposition, with saturable reactors (L1A, L1B; L2A, L2B . . . LNA, LNB) connecting the junctions between the SCR switches and diodes to an output terminal (1, 2 . . . 3), is commutated with only one GTO thyristor (16) connected between the common negative terminal of a dc source and a tap of a series inductor (14) connected to the positive terminal of the dc source. A clamp winding (22) and diode (24) are provided, as is a snubber (18) which may have its capacitance (C) sized for maximum load current divided into a plurality of capacitors (C1, C2 . . . CN), each in series with an SCR switch (S1, S2 . . . SN). The total capacitance may be selected by activating selected switches as a function of load current. A resistor (28) and SCR switch (26) shunt reverse current when the load acts as a generator, such as a motor while braking.

  19. Phanerozoic polyphase orogenies recorded in the northeastern Okcheon Belt, Korea from SHRIMP U-Pb detrital zircon and K-Ar illite geochronologies

    Science.gov (United States)

    Jang, Yirang; Kwon, Sanghoon; Song, Yungoo; Kim, Sung Won; Kwon, Yi Kyun; Yi, Keewook

    2018-05-01

We present SHRIMP U-Pb detrital zircon and K-Ar illite 1Md/1M and 2M1 ages that provide new insight into the Phanerozoic polyphase orogenies preserved in the northeastern Okcheon Belt, Korea, from the initial basin formation during Neoproterozoic rifting through several successive contractional orogenies. The U-Pb detrital zircon ages from the Early Paleozoic strata of the Taebaeksan Zone suggest a Cambrian maximum deposition age, and are supported by trilobite and conodont biostratigraphy. Although the age spectra from two sedimentary groups, the Yeongwol and Taebaek Groups, show similar continuous distributions from the Late Paleoproterozoic to Early Paleozoic ages, a Grenville-age hiatus (1.3-0.9 Ga) in the continuous stratigraphic sequence from the Taebaek Group suggests the existence of different peripheral clastic sources along rifted continental margin(s). In addition, we present the K-Ar illite 1Md/1M ages of the fault gouges, which confirm fault formation/reactivation during the Late Cretaceous to Early Paleogene (ca. 82-62 Ma) and the Early Miocene (ca. 20-18 Ma). The 2M1 illite ages, at least those younger than the host rock ages, provide episodes of deformation, metamorphism and hydrothermal effects related to the tectonic events during the Devonian (ca. 410 Ma) and Permo-Triassic (ca. 285-240 Ma). These results indicate that the northeastern Okcheon Belt experienced polyphase orogenic events, namely the Okcheon (Middle Paleozoic), Songrim (Late Paleozoic to Early Mesozoic), Daebo (Middle Mesozoic) and Bulguksa (Late Mesozoic to Early Cenozoic) Orogenies, reflecting the Phanerozoic tectonic evolution of the Korean Peninsula along the East Asian continental margin.

  20. Study on Abrasive Wear of Brake Pad in the Large-megawatt Wind Turbine Brake Based on Deform Software

    Science.gov (United States)

    Zhang, Shengfang; Hao, Qiang; Sha, Zhihua; Yin, Jian; Ma, Fujian; Liu, Yu

    2017-12-01

To address the friction and wear of brake pads in large-megawatt wind turbine brakes during braking, this paper establishes a micro-scale finite element model of abrasive wear using the Deform-2D software. Based on abrasive wear theory, and considering the variation of velocity and load during the micro friction and wear process, an Archard wear calculation model is developed. The influence of the relative sliding velocity and the friction coefficient on the brake pad and disc is analysed. The simulation results show that wear becomes more severe as the relative sliding velocity increases, while a larger friction coefficient lowers the contact pressure, which reduces the wear of the brake pad.
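The Archard model referenced in the abstract relates worn volume to normal load, sliding distance and material hardness. A minimal sketch of that relation follows, with hypothetical brake-pad values chosen for illustration only, not taken from the paper:

```python
def archard_wear_volume(k, load_n, sliding_dist_m, hardness_pa):
    """Archard's law: worn volume V = k * F * s / H,
    with dimensionless wear coefficient k, normal load F (N),
    sliding distance s (m) and hardness H (Pa)."""
    return k * load_n * sliding_dist_m / hardness_pa

# Hypothetical values for a single braking event (assumptions, not from the paper):
k = 1e-5                 # wear coefficient for an organic pad material
F = 40e3                 # brake clamping force, N
H = 300e6                # pad hardness, Pa
v, t = 30.0, 5.0         # mean sliding speed (m/s) and braking duration (s)

V = archard_wear_volume(k, F, v * t, H)   # sliding distance s = v * t
print(f"worn volume per stop ≈ {V*1e9:.0f} mm^3")
```

Note the same structure the paper's simulations explore: wear grows linearly with sliding distance (hence with sliding velocity at fixed braking time) and with contact load, so a friction coefficient that lowers the contact pressure reduces the predicted wear.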

  1. Acoustic Noise Test Report for the U.S. Department of Energy 1.5-Megawatt Wind Turbine

    Energy Technology Data Exchange (ETDEWEB)

    Roadman, Jason [National Renewable Energy Lab. (NREL), Golden, CO (United States); Huskey, Arlinda [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-07-01

    A series of tests were conducted to characterize the baseline properties and performance of the U.S. Department of Energy (DOE) 1.5-megawatt wind turbine (DOE 1.5) to enable research model development and quantify the effects of future turbine research modifications. The DOE 1.5 is built on the platform of GE's 1.5-MW SLE commercial wind turbine model. It was installed in a nonstandard configuration at the NWTC with the objective of supporting DOE Wind Program research initiatives such as A2e. Therefore, the test results may not represent the performance capabilities of other GE 1.5-MW SLE turbines. The acoustic noise test documented in this report is one of a series of tests carried out to establish a performance baseline for the DOE 1.5 in the NWTC inflow environment.

  2. Polyphase ceramic and glass-ceramic forms for immobilizing ICPP high-level nuclear waste

    International Nuclear Information System (INIS)

    Harker, A.B.; Flintoff, J.F.

    1984-01-01

Polyphase ceramic and glass-ceramic forms have been consolidated from simulated Idaho Chemical Processing Plant wastes by hot isostatic pressing calcined waste and chemical additives at 1000 °C or less. The ceramic forms can contain over 70 wt% waste with densities ranging from 3.5 to 3.85 g/cm³, depending upon the formulation. Major phases are CaF₂, CaZrTi₂O₇, CaTiO₃, monoclinic ZrO₂, and amorphous intergranular material. The relative fraction of the phases is a function of the chemical additives (TiO₂, CaO, and SiO₂) and consolidation temperature. Zirconolite, the major actinide host, makes the ceramic forms extremely leach resistant for the actinide simulant ²³⁸U. The amorphous phase controls the leach performance for Sr and Cs, which is improved by the addition of SiO₂. Glass-ceramic forms were also consolidated by HIP at waste loadings of 30 to 70 wt% with densities of 2.73 to 3.1 g/cm³ using Exxon 127 borosilicate glass frit. The glass-ceramic forms contain crystalline CaF₂, Al₂O₃, and ZrSiO₄ (zircon) in a glass matrix. Natural mineral zircon is a stable host for 4+ valent actinides. 17 references, 3 figures, 5 tables

  3. A megawatt-level 28 GHz heating system for the National Spherical Torus Experiment Upgrade

    Directory of Open Access Journals (Sweden)

    Taylor G.

    2015-01-01

The National Spherical Torus Experiment Upgrade (NSTX-U) will operate at axial toroidal fields of ≤ 1 T and plasma currents Ip ≤ 2 MA. The development of non-inductive (NI) plasmas is a major long-term research goal for NSTX-U. Time dependent numerical simulations of 28 GHz electron cyclotron (EC) heating of low density NI start-up plasmas generated by Coaxial Helicity Injection (CHI) in NSTX-U predict a significant and rapid increase of the central electron temperature (Te(0)) before the plasma becomes overdense. The increased Te(0) will significantly reduce the Ip decay rate of CHI plasmas, allowing the coupling of fast wave heating and neutral beam injection. A megawatt-level, 28 GHz electron heating system is planned for heating NI start-up plasmas in NSTX-U. In addition to EC heating of CHI start-up discharges, this system will be used for electron Bernstein wave (EBW) plasma start-up, and eventually for EBW heating and current drive during the Ip flattop.
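The "overdense" limit mentioned in the abstract corresponds to the O-mode cutoff, where the electron plasma frequency reaches the 28 GHz wave frequency. The cutoff density follows from the standard plasma-frequency formula (a textbook relation, not a figure from the paper):

```python
import math

# O-mode cutoff density for 28 GHz EC heating:
#   n_c = eps0 * m_e * (2*pi*f)^2 / e^2   (plasma frequency = wave frequency)
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
m_e = 9.1093837015e-31    # electron mass, kg
e = 1.602176634e-19       # elementary charge, C
f = 28e9                  # heating frequency, Hz

n_c = eps0 * m_e * (2 * math.pi * f) ** 2 / e ** 2
print(f"28 GHz O-mode cutoff density ≈ {n_c:.2e} m^-3")
```

This comes out near 1e19 m^-3, which is why conventional EC heating only works on the low-density start-up phase and why EBW techniques, which propagate in overdense plasma, are planned for the higher-density flattop.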

  4. Comparison of methods for the identification and sub-typing of O157 and non-O157 Escherichia coli serotypes and their integration into a polyphasic taxonomy approach

    OpenAIRE

    Prieto-Calvo M.A.; Omer M.K.; Alvseike O.; López M.; Alvarez-Ordóñez A.; Prieto M.

    2016-01-01

    Phenotypic, chemotaxonomic and genotypic data from 12 strains of Escherichia coli were collected, including carbon source utilisation profiles, ribotypes, sequencing data of the 16S–23S rRNA internal transcribed region (ITS) and Fourier transform-infrared (FT-IR) spectroscopic profiles. The objectives were to compare several identification systems for E. coli and to develop and test a polyphasic taxonomic approach using the four methodologies combined for the sub-typing of O157 and non-O157 E...

  5. A micro-kinematic framework for vorticity analysis in polyphase shear zones using integrated field, microstructural and crystallographic orientation-dispersion methods

    Science.gov (United States)

    Kruckenberg, S. C.; Michels, Z. D.; Parsons, M. M.

    2017-12-01

    We present results from integrated field, microstructural and textural analysis in the Burlington mylonite zone (BMZ) of eastern Massachusetts to establish a unified micro-kinematic framework for vorticity analysis in polyphase shear zones. Specifically, we define the vorticity-normal surface based on lattice-scale rotation axes calculated from electron backscatter diffraction data using orientation statistics. In doing so, we objectively identify a suitable reference frame for rigid grain methods of vorticity analysis that can be used in concert with textural studies to constrain field- to plate-scale deformation geometries without assumptions that may bias tectonic interpretations, such as relationships between kinematic axes and fabric forming elements or the nature of the deforming zone (e.g., monoclinic vs. triclinic shear zones). Rocks within the BMZ comprise a heterogeneous mix of quartzofeldspathic ± hornblende-bearing mylonitic gneisses and quartzites. Vorticity axes inferred from lattice rotations lie within the plane of mylonitic foliation perpendicular to lineation - a pattern consistent with monoclinic deformation geometries involving simple shear and/or wrench-dominated transpression. The kinematic vorticity number (Wk) is calculated using Rigid Grain Net analysis and ranges from 0.25-0.55, indicating dominant general shear. Using the calculated Wk values and the dominant geographic fabric orientation, we constrain the angle of paleotectonic convergence between the Nashoba and Avalon terranes to 56-75° with the convergence vector trending 142-160° and plunging 3-10°. Application of the quartz recrystallized grain size piezometer suggests differential stresses in the BMZ mylonites ranging from 44 to 92 MPa; quartz CPO patterns are consistent with deformation at greenschist- to amphibolite-facies conditions. We conclude that crustal strain localization in the BMZ involved a combination of pure and simple shear in a sinistral reverse transpressional

  6. On Start to End Simulation and Modeling Issues of the Megawatt Proton Beam Facility at PSI

    CERN Document Server

    Adelmann, Andreas; Fitze, Hansruedi; Geus, Roman; Humbel, Martin; Stingelin, Lukas

    2005-01-01

    At the Paul Scherrer Institut (PSI) we routinely extract a one-megawatt (CW) proton beam from our 590 MeV Ring Cyclotron. In the frame of the ongoing upgrade program, large-scale simulations have been undertaken in order to provide a sound basis for assessing the behaviour of very intense beams in cyclotrons. The challenges and attempts towards massively parallel three-dimensional start-to-end simulations will be discussed. The state-of-the-art numerical tools used (mapping techniques, time integration, parallel FFT, and a finite-element-based multigrid Poisson solver) and their parallel implementation will be discussed. Results will be presented in the areas of: space-charge-dominated beam transport including neighbouring turns; eigenmode analysis to obtain accurate electromagnetic fields in the large rf cavities; and higher-order mode interaction between the electromagnetic fields and the particle beam. For the problems investigated so far, good agreement between theory, i.e. calculations, and measurements is obtain...

  7. Torsional Vibration in the National Wind Technology Center’s 2.5-Megawatt Dynamometer

    Energy Technology Data Exchange (ETDEWEB)

    Sethuraman, Latha [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keller, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Wallen, Robb [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-08-31

    This report documents the torsional drivetrain dynamics of the NWTC's 2.5-megawatt dynamometer, as identified experimentally and as calculated with lumped-parameter models built from known inertia and stiffness parameters. The report is presented in two parts, beginning with the identification of the primary torsional modes, followed by an investigation of approaches to damping the torsional vibrations. The key mechanical parameters for the lumped-parameter models, and the justification for the element grouping used in the derivation of the torsional modes, are presented. The sensitivities of the torsional modes to different test-article properties are discussed. The oscillations observed in the low-speed and generator torque measurements were used to identify the extent of damping inherently achieved through active and passive compensation techniques. A simplified Simulink model of the dynamometer test article, integrating the electro-mechanical power conversion and control features, was established to emulate the torque behavior observed during testing. The torque responses in the high-speed, low-speed, and generator shafts were tested and validated against experimental measurements involving step changes in load with the dynamometer operating in speed-regulation mode. The Simulink model serves as a ready reference for identifying the torque sensitivities to various system parameters and for exploring opportunities to improve torsional damping under different conditions.
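As a minimal illustration of the lumped-parameter approach described above, the first torsional natural frequency of a two-inertia, single-shaft model follows from the classic closed-form expression. The inertia and stiffness values below are illustrative placeholders, not the dynamometer's actual parameters:

```python
import math

# Free-free two-inertia torsional model: two rotors (inertias j1, j2)
# joined by a shaft of torsional stiffness k. The single elastic mode is
#   f = (1 / 2*pi) * sqrt(k * (j1 + j2) / (j1 * j2))
def torsional_mode_hz(j1: float, j2: float, k: float) -> float:
    """First torsional natural frequency (Hz).
    j1, j2: inertias (kg*m^2); k: shaft torsional stiffness (N*m/rad)."""
    return math.sqrt(k * (j1 + j2) / (j1 * j2)) / (2.0 * math.pi)

# Assumed example values: drive-side inertia 4000 kg*m^2,
# test-article inertia 1000 kg*m^2, coupling stiffness 5e6 N*m/rad
f1 = torsional_mode_hz(4000.0, 1000.0, 5e6)
print(f"first torsional mode: {f1:.1f} Hz")  # ~12.6 Hz
```

Real dynamometer models group many elements into a handful of such inertias and springs, which is the element grouping the report justifies.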

  8. OSIRIS reactor radioprotection, radioprotection measurements performed during the power rise and the first 50 megawatt operation; Radioprotection de la pile OSIRIS, mesures de radioprotection effectuees au cours de la montee en puissance et des premiers fonctionnements a 50 megawatts

    Energy Technology Data Exchange (ETDEWEB)

    Fanton, B; Lebouleux, P

    1967-12-01

    The authors supply the results of the measurements that have been made near the Osiris reactor during the power increase and during the first functioning at 50 megawatts. The measurements relate to the absorbed dose rates in the premises, the water activation and the atmospheric contamination. The influence of the heat layer of water movements and the water rate in the core chimney on the absorbed dose rate at the footbridge level overhanging the pile core has been studied. The modifications to the protection devices that have been proposed after the measurements and the effect of these modifications on the results of the measures are given then. The regeneration process of a water purification chain has been examined from the radiation protection point of view. It has been possible to make some twenty radionuclides obvious in the produced effluents and to determine the volume activity of these effluents for each radionuclide. The whole of results show that in a general way, the irradiation levels are low during the usual reactor functioning. [French] Les auteurs fournissent les resultats des mesures de radioprotection oui ont ete effectuees aupres de la pile Osiris pendant la montee en puissance et au cours des premiers fonctionnements a 50 megawatts. Les mesures portent sur les debits de dose absorbee dans les locaux, l'activation de l'eau et la contamination atmospherique. L'influence de la couche chaude des mouvements d'eau et du debit d'eau dans la cheminee du coeur sur le debit de dose absorbee au niveau de la passerelle surplombant le coeur de la pile, a ete etudiee. Les modifications aux dispositifs de protection, qui ont ete proposees a la suite des mesures, et l'effet de ces modifications sur les resultats des mesures sont indiques ensuite. Le processus de regeneration d'une chaine d'epuration de l'eau a ete examine sous l'angle de la radioprotection. 
Il a ete possible de mettre en evidence une vingtaine de radionucleides dans les effluents produits et de

  9. A study on electric power management for power producer-suppliers utilizing output of megawatt-solar power plants

    Directory of Open Access Journals (Sweden)

    Hirotaka Takano

    2016-01-01

    Full Text Available The growth in penetration of photovoltaic generation units (PVs) has brought new power management ideas, which achieve more profitable operation, to Power Producer-Suppliers (PPSs). The expected profit for the PPSs will improve if they appropriately operate their controllable generators and sell the generated electricity to contracted customers and Power Exchanges together with the output of Megawatt-Solar Power Plants (MSPPs). Moreover, profitable cooperation between the PPSs and the MSPPs can be expected to reduce the difficulty of supply-demand balancing for the main power grids. However, the PPSs must treat the uncertainty in PV output prediction carefully, because they risk paying a heavy imbalance penalty. This paper presents a problem framework and its solution for making the optimal power management plan for the PPSs in consideration of the electricity procurement from the MSPPs. The validity of the authors' proposal is verified through numerical simulations and discussions of their results.

  10. An Assessment of Hydrogen as a Means to Implement the United States Navy’s Renewable Energy Initiative

    Science.gov (United States)

    2014-09-01

    China Lake. The 270 megawatt geothermal power plant "provides on average 1.4 million megawatt-hours of electricity to the California power grid..." Table 1 (Alternative Energy Sources): Wind Power 5-22; Ocean Power (Tidal) 14; Solar (PV) 30; Geothermal 20-50; Fossil-Based 150; Nuclear 4000.

  11. Multi-Megawatt-Scale Power-Hardware-in-the-Loop Interface for Testing Ancillary Grid Services by Converter-Coupled Generation: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Koralewicz, Przemyslaw J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gevorgian, Vahan [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Wallen, Robert B [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-07-26

    Power-hardware-in-the-loop (PHIL) is a simulation tool that can support electrical systems engineers in the development and experimental validation of novel, advanced control schemes that ensure the robustness and resiliency of electrical grids that have high penetrations of low-inertia variable renewable resources. With PHIL, the impact of the device under test on a generation or distribution system can be analyzed using a real-time simulator (RTS). PHIL allows for the interconnection of the RTS with a 7 megavolt ampere (MVA) power amplifier to test multi-megawatt renewable assets available at the National Wind Technology Center (NWTC). This paper addresses issues related to the development of a PHIL interface that allows testing hardware devices at actual scale. In particular, the novel PHIL interface algorithm and high-speed digital interface, which minimize the critical loop delay, are discussed.
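A common PHIL interface algorithm is the voltage-type ideal transformer method (ITM); the abstract does not state which scheme NREL's novel interface builds on, so the sketch below is only a generic illustration of why minimizing the critical loop delay matters. With a one-step feedback delay and purely resistive impedances (assumed values), the coupled loop is stable only when the simulated source impedance is smaller than the hardware load impedance:

```python
# Voltage-type ITM loop with a one-step delay: the real-time simulator (RTS)
# sends an amplifier voltage command, the hardware current comes back one
# step late. The loop gain is -z_source/z_load, so |z_source/z_load| < 1
# is required for stability in this simplified resistive case.
def itm_diverges(z_source: float, z_load: float, v_source: float = 1.0,
                 steps: int = 200) -> bool:
    """Iterate the delayed ITM loop; True if the interface voltage diverges."""
    i_fb = 0.0  # hardware current fed back to the RTS (one step late)
    v = 0.0
    for _ in range(steps):
        v = v_source - z_source * i_fb   # voltage command sent to amplifier
        i_fb = v / z_load                # hardware responds; fed back next step
    return abs(v) > 10.0 * abs(v_source)

print(itm_diverges(z_source=0.5, z_load=1.0))  # impedance ratio 0.5 -> False
print(itm_diverges(z_source=2.0, z_load=1.0))  # impedance ratio 2.0 -> True
```

Additional delay in the loop only tightens this margin, which is why the high-speed digital interface described in the paper is critical at the multi-megawatt scale.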

  13. Three-dimensional eddy current solution of a polyphase machine test model (abstract)

    Science.gov (United States)

    Pahner, Uwe; Belmans, Ronnie; Ostovic, Vlado

    1994-05-01

    This abstract describes a three-dimensional (3D) finite element solution of a test model that has been reported in the literature. The model is a basis for calculating the current redistribution effects in the end windings of turbogenerators. The aim of the study is to see whether the analytical results of the test model can be found using a general purpose finite element package, thus indicating that the finite element model is accurate enough to treat real end winding problems. The real end winding problems cannot be solved analytically, as the geometry is far too complicated. The model consists of a polyphase coil set, containing 44 individual coils. This set generates a two pole mmf distribution on a cylindrical surface. The rotating field causes eddy currents to flow in the inner massive and conducting rotor. In the analytical solution a perfect sinusoidal mmf distribution is put forward. The finite element model contains 85824 tetrahedra and 16451 nodes. A complex single scalar potential representation is used in the nonconducting parts. The computation time required was 3 h and 42 min. The flux plots show that the field distribution is acceptable. Furthermore, the induced currents are calculated and compared with the values found from the analytical solution. The distribution of the eddy currents is very close to the distribution of the analytical solution. The most important results are the losses, both local and global. The value of the overall losses is less than 2% away from those of the analytical solution. Also the local distribution of the losses is at any given point less than 7% away from the analytical solution. The deviations of the results are acceptable and are partially due to the fact that the sinusoidal mmf distribution was not modeled perfectly in the finite element method.

  14. Biodiversity analysis by polyphasic study of marine bacteria associated with biocorrosion phenomena.

    Science.gov (United States)

    Boudaud, N; Coton, M; Coton, E; Pineau, S; Travert, J; Amiel, C

    2010-07-01

    A polyphasic approach was used to study the biodiversity of bacteria associated with biocorrosion processes, in particular sulfate-reducing bacteria (SRB) and thiosulfate-reducing bacteria (TRB), which are described as particularly aggressive towards metallic materials, notably via hydrogen sulfide release. To study this particular flora, an infrared spectra library of 22 SRB and TRB collection strains was created using a Common Minimum Medium (CMM) developed during this study and standardized culture conditions. The CMM proved its ability to support growth of both SRB and TRB strains. These sulfurogen collection strains were clearly discriminated and differentiated at the genus level by Fourier transform infrared (FT-IR) spectroscopy. In a second step, infrared spectra of isolates, recovered from biofilms formed on carbon steel coupons immersed for 1 year in three different French harbour areas, were compared to the infrared reference spectra library. In parallel, molecular methods (M13-PCR and 16S rRNA gene sequencing) were used to qualitatively evaluate the intra- and inter-species genetic diversity of the biofilm isolates. The biodiversity study indicated that strains belonging to the Vibrio genus were the dominant population; strains belonging to the Desulfovibrio genus (SRB) and to the Peptostreptococcaceae were also identified. Overall, the combination of FT-IR spectroscopy and molecular approaches allowed for the taxonomic and ecological study of a bacterial flora, cultivated on CMM, associated with microbiologically influenced corrosion (MIC) processes. The use of the CMM medium allowed the culture of marine bacteria (including both SRB and TRB), and the implication of non-sulfurogen bacteria in MIC was observed. Their involvement in biocorrosion phenomena will have to be studied and taken into account in the future. © 2009 The Authors. Journal compilation © 2009 The Society for Applied Microbiology.

  15. Power Performance Test Report for the U.S. Department of Energy 1.5-Megawatt Wind Turbine

    Energy Technology Data Exchange (ETDEWEB)

    Mendoza, Ismael [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hur, Jerry [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Thao, Syhoune [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Curtis, Amy [Windward Engineering, Santa Barbara, CA (United States)

    2015-08-11

    The U.S. Department of Energy (DOE) acquired and installed a 1.5-megawatt (MW) wind turbine at the National Wind Technology Center (NWTC) at the National Renewable Energy Laboratory (NREL). This turbine (hereafter referred to as the DOE 1.5) is envisioned to become an integral part of the research initiatives for the DOE Wind Program, such as Atmosphere to Electrons (A2e). A2e is a multiyear DOE research initiative targeting significant reductions in the cost of wind energy through an improved understanding of the complex physics governing wind flow into and through wind farms. For more information, visit http://energy.gov/eere/wind/atmosphere-electrons. To validate new and existing high-fidelity simulations, A2e must deploy several experimental measurement campaigns across different scales. Proposed experiments include wind tunnel tests, scaled field tests, and large field measurement campaigns at operating wind plants. Data of interest includes long-term atmospheric data sets, wind plant inflow, intra-wind plant flows (e.g., wakes), and rotor loads measurements. It is expected that new, high-fidelity instrumentation will be required to successfully collect data at the resolutions required to validate the high-fidelity simulations.

  16. Power Quality Test Report for the U.S. Department of Energy 1.5-Megawatt Wind Turbine

    Energy Technology Data Exchange (ETDEWEB)

    Mendoza, Ismael [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hur, Jerry [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Thao, Syhoune [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2015-08-20

    The U.S. Department of Energy (DOE) acquired and installed a 1.5-megawatt (MW) wind turbine at the National Wind Technology Center (NWTC) at the National Renewable Energy Laboratory. This turbine (hereafter referred to as the DOE 1.5) is envisioned to become an integral part of the research initiatives for the DOE Wind Program, such as Atmosphere to Electrons (A2e). A2e is a multiyear DOE research initiative targeting significant reductions in the cost of wind energy through an improved understanding of the complex physics governing wind flow into and through wind farms. For more information, visit http://energy.gov/eere/wind/atmosphere-electrons. To validate new and existing high-fidelity simulations, A2e must deploy several experimental measurement campaigns across different scales. Proposed experiments include wind tunnel tests, scaled field tests, and large field measurement campaigns at operating wind plants. Data of interest includes long-term atmospheric data sets, wind plant inflow, intra-wind plant flows (e.g., wakes), and rotor loads measurements. It is expected that new, high-fidelity instrumentation will be required to successfully collect data at the resolutions required to validate the high-fidelity simulations.

  17. Polyphase Rifting and Breakup of the Central Mozambique Margin

    Science.gov (United States)

    Senkans, Andrew; Leroy, Sylvie; d'Acremont, Elia; Castilla, Raymi

    2017-04-01

    The breakup of the Gondwana supercontinent resulted in the formation of the Central Mozambique passive margin as Africa and Antarctica were separated during the mid-Jurassic period. The identification of magnetic anomalies in the Mozambique Basin and Riiser Larsen Sea means that post-oceanisation plate kinematics are well constrained. Unresolved questions remain, however, regarding the initial fit, the continental breakup process, and the first relative movements of Africa and Antarctica. This study uses high-quality multi-channel seismic reflection profiles in an effort to identify the major crustal domains in the Angoche and Beira regions of the Central Mozambique margin. This work is part of the integrated pluri-disciplinary PAMELA project*. Our results show that the Central Mozambique passive margin is characterised by intense but localised magmatic activity, evidenced by the existence of seaward-dipping reflectors (SDR) in the Angoche region, as well as magmatic sills and volcanoclastic material which mark the Beira High. The Angoche region is defined by a faulted upper continental crust, with the possible exhumation of lower crustal material forming an extended ocean-continent transition (OCT). The profiles studied across the Beira High reveal an offshore continental fragment, which is overlain by a pre-rift sedimentary unit likely to belong to the Karoo Group. Faulting of the crust and overlying sedimentary unit reveals that the Beira High has recorded several phases of deformation. The combination of our seismic interpretation with existing geophysical and geological results has allowed us to propose a breakup model which supports the idea that the Central Mozambique margin was affected by polyphase rifting. The analysis of both along-dip and along-strike profiles shows that the Beira High initially experienced extension in a direction approximately parallel to the Mozambique coastline onshore of the Beira High.
Our results suggest that the Beira High results

  18. Packaging and transportation of derived enriched uranium for the ''megatons to megawatts'' USA/Russia agreement

    International Nuclear Information System (INIS)

    Darrough, E.; Ewing, L.; Ravenscroft, N.

    1998-01-01

    In January 1998 the United States Enrichment Corporation (USEC) and Techsnabexport Co., Ltd (TENEX) of Russia celebrated the fourth anniversary of the signing of the 20-year contract between these two executive agents. USEC and TENEX are responsible for implementing the Government-to-Government agreement between the United States and the Russian Federation for the purchase of uranium derived from dismantled nuclear weapons from the former Soviet Union. This program, entitled 'Megatons to Megawatts', is the first time nuclear warheads have been turned into fuel, as well as the first time a commercial contract has been used to implement such a program. As of the fourth anniversary, the equivalent of almost 1,200 nuclear warheads had been converted to fuel. USEC is responsible for making all of the arrangements to transport the Russian LEU derived from HEU--hence the term, derived enriched uranium (DEU)--from St Petersburg, Russia to the USEC plant near Portsmouth, Ohio. Edlow International Company is working with USEC to implement the shipping campaign and is responsible for coordination of the port delivery within Russia as well. The organization responsible for these shipments within Russia is IZOTOP. While the program has been a major new responsibility for USEC, the early years of the program prepared all parties for future challenges such as increased numbers of shipments, additional originating sites in Russia, and witnessing requirements in Russia. (authors)

  19. Polyphasic bacterial community analysis of an aerobic activated sludge removing phenols and thiocyanate from coke plant effluent

    Energy Technology Data Exchange (ETDEWEB)

    Felfoldi, T.; Szekely, A.J.; Goral, R.; Barkacs, K.; Scheirich, G.; Andras, J.; Racz, A.; Marialigeti, K. [Eotvos Lorand University, Budapest (Hungary). Dept. of Microbiology

    2010-05-15

    Biological purification processes are effective tools in the treatment of hazardous wastes such as toxic compounds produced in coal coking. In this study, the microbial community of a lab-scale activated sludge system treating coking effluent was assessed by cultivation-based (strain isolation and identification, biodegradation tests) and culture-independent techniques (sequence-aided T-RFLP, taxon-specific PCR). The results of the applied polyphasic approach showed a simple microbial community dominated by easily culturable heterotrophic bacteria. Comamonas badia was identified as the key microbe of the system, since it was the predominant member of the bacterial community, and its phenol degradation capacity was also proved. Metabolism of phenol, even at elevated concentrations (up to 1500 mg/L), was also presented for many other dominant (Pseudomonas, Rhodanobacter, Oligella) and minor (Alcaligenes, Castellaniella, Microbacterium) groups, while some activated sludge bacteria (Sphingomonas, Rhodopseudomonas) did not tolerate it even in lower concentrations (250 mg/L). In some cases, closely related strains showed different tolerance and degradation properties. Members of the genus Thiobacillus were detected in the activated sludge, and were supposedly responsible for the intensive thiocyanate biodegradation observed in the system. Additionally, some identified bacteria (e.g. C. badia and the Ottowia-related strains) might also have had a significant impact on the structure of the activated sludge due to their floc-forming abilities.

  20. Applications of wind turbines in Canada

    Energy Technology Data Exchange (ETDEWEB)

    South, P; Rangi, R S; Templin, R J

    1977-01-01

    There are differing views as to the role of wind energy in meeting overall energy requirements. While some people tend to ignore it, others think that wind could be a major source of energy. In this paper an effort has been made to determine the wind power potential and also the amount that is economically usable. From the existing wind data, a map showing the distribution of wind power density has been prepared. This map shows that the Maritime provinces and the west coast of Hudson Bay have high wind power potential. These figures show that the wind power potential is of the same order as the installed electrical generating capacity in Canada (58 x 10^6 kW in 1974). However, in order to determine how much of this power is usable, the economics of adding wind energy to an existing system must be considered. A computer program has been developed at NRC to analyze the coupling of wind turbines with mixed power systems. Using this program, and making certain assumptions about the cost of WECS and fuel, the maximum amount of usable wind energy has been calculated. It is shown that if an installed capacity of 420 megawatts of wind power were added to the existing diesel capacity, it would result in a savings of 60,000,000 gallons of fuel oil per year. On the other hand, it is shown that if the existing installed hydroelectric capacity of 37,000 megawatts (1976) were increased to 60,000 megawatts without increasing the average water flow rate, an installed capacity of 60,000 megawatts of wind power could be added to the system. This would result in an average of 14,000 megawatts from the wind. Using projected manufacturing costs for vertical-axis wind turbines, the average cost of wind energy could be in the range of 1.4 cents/kWh to 3.6 cents/kWh.
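The installed-versus-average figures quoted above imply a fleet capacity factor of roughly 23%, which can be checked directly:

```python
# Sanity check of the record's figures: 60,000 MW of installed wind
# capacity yielding 14,000 MW on average.
HOURS_PER_YEAR = 8760

installed_mw = 60_000
average_mw = 14_000

capacity_factor = average_mw / installed_mw            # dimensionless
annual_twh = average_mw * HOURS_PER_YEAR / 1e6         # MWh -> TWh

print(f"capacity factor: {capacity_factor:.1%}")       # 23.3%
print(f"annual energy:   {annual_twh:.0f} TWh")        # 123 TWh
```

A fleet capacity factor in the low twenties is plausible for 1970s-era wind technology, which lends the quoted figures internal consistency.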

  1. Operational Performance of the Two-Channel 10 Megawatt Feedback Amplifier System for MHD Control on the Columbia University HBT-EP Tokamak

    International Nuclear Information System (INIS)

    Reass, W.A.; Wurden, G.A.

    1997-01-01

    The operational characteristics and performance of the two-channel 10-megawatt MHD feedback control system installed by Los Alamos National Laboratory on the Columbia University HBT-EP tokamak are described. In the present configuration, driving independent 300 microH saddle coil sets, each channel can deliver 1100 Amperes and 16 kV peak to peak. Full-power bandwidth is about 12 kHz, with capabilities at reduced power to 30 kHz. The present system topology is designed to suppress magnetohydrodynamic activity with m=2, n=1 symmetry. Application of either static (single-phase) or rotating (twin-phased) magnetic perturbations shows the ability to spin up or slow down the plasma, and also to prevent (or cause) so-called ''mode-locking''. Open-loop and active feedback experiments using a digital signal processor (DSP) have been performed on the HBT-EP tokamak, and initial results show the ability to manipulate the plasma MHD mode frequency.

  2. The polyphasic description of a Desmodesmus spp. isolate with the potential of bioactive compounds production

    Directory of Open Access Journals (Sweden)

    El Semary, NA.

    2011-01-01

    Full Text Available A polyphasic approach was applied to describe a colony-forming Desmodesmus species collected from the Nile River, Maadi area, Helwan district, Egypt. The isolate grows best at moderate temperature and relatively high light intensity. The phenotypic features revealed the presence of both unicellular and colonial forms of the isolate, the latter being either 2- or 4-celled. Cells were 4-6 µm (± 0.5) at their widest point and 11-15 µm (± 0.48) in length, with spiny projections encircling the cells. Cells were heavily granulated and enclosed within a common mucilaginous sheath. Colonial forms developed through the production of daughter cells within the mother cell. Molecular analysis using the 18S rRNA gene showed some similarity to the nearest relative (Desmodesmus communis), whereas the phylogenetic analyses clustered the isolate together with other Desmodesmus spp. and away from Scenedesmus spp. from the database. However, the use of ITS-2 as a phylotaxonomic marker proved to be more resolving and confirmed the generic identity of the isolate as Desmodesmus spp. The fatty acid composition revealed the presence of saturated palmitic acid as the most abundant component, followed by monounsaturated palmitoleic acid, whereas the polyunsaturated fatty acids were in relatively low abundance. The palmitoleic acid in particular is suggested to be involved in an active defense mechanism. The phytochemical screening revealed the presence of alkaloids and saponins and the absence of tannins. Fractions of methanolic extracts showed antimicrobial activities against pathogenic bacterial strains, including multi-drug-resistant ones. This study documents the presence of this strain in the River Nile and highlights its biotechnological potential as a source of bioactive compounds.

  3. Polyphasic Approach Including MALDI-TOF MS/MS Analysis for Identification and Characterisation of Fusarium verticillioides in Brazilian Corn Kernels

    Directory of Open Access Journals (Sweden)

    Susane Chang

    2016-02-01

Full Text Available Fusarium verticillioides is considered one of the most important global sources of fumonisin contamination in food and feed. Corn is one of the main commodities produced in the Northeastern Region of Brazil. The present study investigated potential mycotoxigenic fungal strains belonging to the F. verticillioides species isolated from corn kernels in three different regions of the Brazilian state of Pernambuco. A polyphasic approach including classical taxonomy, molecular biology, MALDI-TOF MS, and MALDI-TOF MS/MS was used for the identification and characterisation of the F. verticillioides strains. Sixty F. verticillioides strains were isolated and successfully identified by classical morphology, by proteomic profiles from MALDI-TOF MS, and by molecular biology using the species-specific primers VERT-1 and VERT-2. The FUM1 gene was further detected in all 60 F. verticillioides strains by using the primers VERTF-1 and VERTF-2 and through the amplification profiles of the ISSR regions using the primers (GTG)5 and (GACA)4. Results obtained from molecular analysis showed low genetic variability among these isolates from the different geographical regions. All 60 F. verticillioides isolates assessed by MALDI-TOF MS/MS presented ion peaks with the molecular masses of fumonisins B1 (721.83 g/mol) and B2 (705.83 g/mol).

  4. Fibre amplifier based on an ytterbium-doped active tapered fibre for the generation of megawatt peak power ultrashort optical pulses

    Energy Technology Data Exchange (ETDEWEB)

    Koptev, M Yu; Anashkina, E A; Lipatov, D S; Andrianov, A V; Muravyev, S V; Kim, A V [Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod (Russian Federation); Bobkov, K K; Likhachev, M E; Levchenko, A E; Aleshkina, S S; Semjonov, S L; Denisov, A N; Bubnov, M M [Fiber Optics Research Center, Russian Academy of Sciences, Moscow (Russian Federation); Laptev, A Yu; Gur' yanov, A N [G.G.Devyatykh Institute of Chemistry of High-Purity Substances, Russian Academy of Sciences, Nizhnii Novgorod (Russian Federation)

    2015-05-31

We report a new ytterbium-doped active tapered fibre used in the output amplifier stage of a fibre laser system for the generation of megawatt peak power ultrashort pulses in the microjoule energy range. The tapered fibre is single-mode at its input end (core and cladding diameters of 10 and 80 μm) and multimode at its output end (diameters of 45 and 430 μm), but ultrashort pulses are amplified in a quasi-single-mode regime. Using a hybrid Er/Yb fibre system comprising an erbium master oscillator and amplifier at a wavelength near 1.5 μm, a nonlinear wavelength converter to the 1 μm range and a three-stage ytterbium-doped fibre amplifier, we obtained pulses of 1 μJ energy and 7 ps duration, which were then compressed by a grating-pair dispersion compressor with 60% efficiency to a 130 fs duration, approaching the transform-limited pulse duration. The present experimental data agree well with numerical simulation results for pulse amplification in the three-stage amplifier. (extreme light fields and their applications)
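The "megawatt peak power" claim in the title follows directly from the figures in the abstract; as a rough consistency check (our arithmetic, assuming a rectangular pulse shape, so the true peak differs somewhat):

```python
# Back-of-envelope peak power for the compressed pulse described above.
# Figures from the abstract; the rectangular-pulse assumption is ours.
pulse_energy_j = 1e-6          # 1 uJ before the compressor
compressor_efficiency = 0.60   # 60% grating-pair throughput
duration_s = 130e-15           # 130 fs compressed duration

energy_out_j = pulse_energy_j * compressor_efficiency
peak_power_w = energy_out_j / duration_s
print(f"{peak_power_w / 1e6:.1f} MW")  # ≈ 4.6 MW, i.e. megawatt-level peak power
```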

  5. The Spallation Neutron Source Beam Commissioning and Initial Operations

    Energy Technology Data Exchange (ETDEWEB)

    Henderson, Stuart [Argonne National Lab. (ANL), Argonne, IL (United States); Aleksandrov, Alexander V. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Allen, Christopher K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Assadi, Saeed [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bartoski, Dirk [University of Texas, Houston, TX (United States). Anderson Cancer Center; Blokland, Willem [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Casagrande, F. [Michigan State Univ., East Lansing, MI (United States); Campisi, I. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Chu, C. [Michigan State Univ., East Lansing, MI (United States); Cousineau, Sarah M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Crofford, Mark T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Danilov, Viatcheslav [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Deibele, Craig E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Dodson, George W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Feshenko, A. [Inst. for Nuclear Research (INR), Moscow (Russian Federation); Galambos, John D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Han, Baoxi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hardek, T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Holmes, Jeffrey A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Holtkamp, N. [SLAC National Accelerator Lab., Menlo Park, CA (United States); Howell, Matthew P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jeon, D. [Inst. for Basic Science, Daejeon (Korea); Kang, Yoon W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kasemir, Kay [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kim, Sang-Ho [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kravchuk, L. 
[Institute for Nuclear Research (INR), Moscow (Russian Federation); Long, Cary D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); McManamy, T. [McManamy Consulting, Inc., Middlesex, MA (United States); Pelaia, II, Tom [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Piller, Chip [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Plum, Michael A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Pogge, James R. [Tennessee Technological Univ., Cookeville, TN (United States); Purcell, John David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shea, T. [European Spallation Source, Lund (Sweden); Shishlo, Andrei P [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Sibley, C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Stockli, Martin P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Stout, D. [Michigan State Univ., East Lansing, MI (United States); Tanke, E. [European Spallation Source, Lund (Sweden); Welton, Robert F [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhang, Y. [Michigan State Univ., East Lansing, MI (United States); Zhukov, Alexander P [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-09-01

The Spallation Neutron Source (SNS) accelerator delivers a one-megawatt proton beam to a mercury target to produce neutrons used for neutron-scattering materials research. It delivers ~1 GeV protons in short (<1 µs) pulses at 60 Hz. At an average power of ~1 MW, it is the highest-power pulsed proton accelerator. The accelerator includes the first use of superconducting RF acceleration for pulsed protons at this energy. The storage ring used to create the short time structure has a record peak per-pulse particle intensity. Beam commissioning took place in a staged manner during the construction phase of SNS. After construction, neutron production operations began within a few months, and one-megawatt operation was achieved within three years. The methods used to commission the beam and the experiences during initial operation are discussed.
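The "record per-pulse intensity" can be estimated from the quoted beam parameters alone (a sketch using only the abstract's figures; kinetic energy is taken as the full energy per proton):

```python
# Implied protons per pulse for the SNS figures quoted above (our arithmetic):
# 1 MW average power, 60 Hz repetition rate, 1 GeV protons.
avg_power_w = 1e6
rep_rate_hz = 60.0
proton_energy_j = 1e9 * 1.602e-19  # 1 GeV in joules

energy_per_pulse_j = avg_power_w / rep_rate_hz           # ~16.7 kJ per pulse
protons_per_pulse = energy_per_pulse_j / proton_energy_j
print(f"{protons_per_pulse:.1e} protons/pulse")  # ~1.0e14
```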

  6. Initial Design of the 60 Megawatt Rotating Magnetic Field (RMF) Oscillator System for the University of Washington ''TCS'' Field Reversed Configuration Experiment

    International Nuclear Information System (INIS)

    Reass, W.A.; Miera, D.A.; Wurden, G.A.

    1997-01-01

This paper presents the initial electrical and mechanical design of two phase-locked 30 Megawatt RMS, 150 kHz oscillator systems used for current drive and plasma sustainment of the ''Translation, Confinement, and Sustainment'' (TCS) field reversed configuration (FRC) plasma. By the application of orthogonally-placed saddle coils on the surface of the glass vacuum vessel, the phase-controlled rotating magnetic field perturbation will induce an electric field in the plasma which should counter the intrinsic ohmic decay of the plasma and maintain the FRC. Each system utilizes a bank of 6 parallel magnetically beamed ML8618 triodes. These devices are rated at 250 Amperes cathode current and 45 kV plate voltage. An advantage of the magnetically beamed triode is its extreme efficiency, requiring only 2.5 kW of filament power and a few amps and a few kV of grid drive. Each 3.5 uH saddle coil is configured with an adjustable tank circuit (for tuning). Assuming no losses and a nominal 18 kV plate voltage, the tubes can circulate about 30 kV and 9 kA (pk to pk) in the saddle coil antenna, a circulating power of over 33 megawatts RMS. On each cycle the tubes can kick in up to 1500 Amperes, providing robust phase control. DC high voltage from the tubes is isolated from the saddle coil antennas and tank circuits by a 1:1 coaxial air-core balun transformer. To control the ML8618's phase and amplitude, fast 150 Ampere ''totem-pole'' grid drivers, an ''on'' hot-deck and an ''off'' hot-deck, are utilized. The hot-decks each use up to 6 3CPX1500A7 slotted radial beam triodes. By adjusting the conduction angle, amplitude may be regulated; with inter-pulse timing, phase angle can be controlled. A central feedback timing chassis monitors each system's saddle coil antenna and appropriately derives each system's timing signals. Fiber-optic cables are used to isolate between the control room timing chassis and the remote power oscillator system. Complete system design detail will be
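The "over 33 megawatts RMS" circulating power is consistent with the quoted antenna voltage and current; a quick check (our arithmetic, assuming sinusoidal waveforms so RMS = peak-to-peak divided by 2√2):

```python
import math

# Check of the circulating power quoted above, from the peak-to-peak
# saddle-coil figures. Sinusoidal-waveform assumption is ours.
v_pp = 30e3   # 30 kV peak-to-peak across the saddle coil
i_pp = 9e3    # 9 kA peak-to-peak circulating current

v_rms = v_pp / (2 * math.sqrt(2))
i_rms = i_pp / (2 * math.sqrt(2))
circulating_power_w = v_rms * i_rms  # = v_pp * i_pp / 8
print(f"{circulating_power_w / 1e6:.2f} MW")  # ≈ 33.75 MW, "over 33 megawatts"
```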

  7. 76 FR 69721 - Goat Lake Hydro, Inc.; Notice of Preliminary Permit Application Accepted for Filing and...

    Science.gov (United States)

    2011-11-09

    ..., 60-foot-wide powerhouse to contain two turbine/ generating units with a total installed capacity of 12 megawatts, with a hydraulic capacity of 90 cubic feet per second, and an average hydraulic head of...

  8. 76 FR 51025 - Goat Lake Hydro, Inc.; Notice of Preliminary Permit Application Accepted for Filing and...

    Science.gov (United States)

    2011-08-17

    ..., 60-foot-wide powerhouse to contain two turbine/ generating units with a total installed capacity of 12 megawatts, with a hydraulic capacity of 90 cubic feet per second, and an average hydraulic head of...

  9. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  10. Cyanobacterial Diversity in Microbial Mats from the Hypersaline Lagoon System of Araruama, Brazil: An In-depth Polyphasic Study

    Directory of Open Access Journals (Sweden)

    Vitor M. C. Ramos

    2017-06-01

Full Text Available Microbial mats are complex, micro-scale ecosystems that can be found in a wide range of environments. In the top layer of photosynthetic mats from hypersaline environments, a large diversity of cyanobacteria typically predominates. With the aim of strengthening the knowledge on the cyanobacterial diversity present in the coastal lagoon system of Araruama (state of Rio de Janeiro, Brazil), we have characterized three mat samples by means of a polyphasic approach. We have used morphological and molecular data obtained by culture-dependent and -independent methods. Moreover, we have compared different classification methodologies and discussed the outcomes, challenges, and pitfalls of these methods. Overall, we show that Araruama's lagoons harbor a high cyanobacterial diversity. Thirty-six unique morphospecies could be differentiated, which increases by more than 15% the number of morphospecies and genera already reported for the entire Araruama system. Morphology-based data were compared with the 16S rRNA gene phylogeny derived from isolate sequences and environmental sequences obtained by PCR-DGGE and pyrosequencing. Most of the 48 phylotypes could be associated with the observed morphospecies at the order level. More than one third of the sequences demonstrated to be closely affiliated (best BLAST hit results of ≥99%) with cyanobacteria from ecologically similar habitats. Some sequences had no close relatives in the public databases, including one from an isolate, being placed as “loner” sequences within different orders. This hints at hidden cyanobacterial diversity in the mats of the Araruama system, while reinforcing the relevance of using complementary approaches to study cyanobacterial diversity.

  11. Cyanobacterial Diversity in Microbial Mats from the Hypersaline Lagoon System of Araruama, Brazil: An In-depth Polyphasic Study.

    Science.gov (United States)

    Ramos, Vitor M C; Castelo-Branco, Raquel; Leão, Pedro N; Martins, Joana; Carvalhal-Gomes, Sinda; Sobrinho da Silva, Frederico; Mendonça Filho, João G; Vasconcelos, Vitor M

    2017-01-01

    Microbial mats are complex, micro-scale ecosystems that can be found in a wide range of environments. In the top layer of photosynthetic mats from hypersaline environments, a large diversity of cyanobacteria typically predominates. With the aim of strengthening the knowledge on the cyanobacterial diversity present in the coastal lagoon system of Araruama (state of Rio de Janeiro, Brazil), we have characterized three mat samples by means of a polyphasic approach. We have used morphological and molecular data obtained by culture-dependent and -independent methods. Moreover, we have compared different classification methodologies and discussed the outcomes, challenges, and pitfalls of these methods. Overall, we show that Araruama's lagoons harbor a high cyanobacterial diversity. Thirty-six unique morphospecies could be differentiated, which increases by more than 15% the number of morphospecies and genera already reported for the entire Araruama system. Morphology-based data were compared with the 16S rRNA gene phylogeny derived from isolate sequences and environmental sequences obtained by PCR-DGGE and pyrosequencing. Most of the 48 phylotypes could be associated with the observed morphospecies at the order level. More than one third of the sequences demonstrated to be closely affiliated (best BLAST hit results of ≥99%) with cyanobacteria from ecologically similar habitats. Some sequences had no close relatives in the public databases, including one from an isolate, being placed as "loner" sequences within different orders. This hints at hidden cyanobacterial diversity in the mats of the Araruama system, while reinforcing the relevance of using complementary approaches to study cyanobacterial diversity.

  12. steady – state performance of induction and transfer state

    African Journals Online (AJOL)

    eobe

This paper presents the steady-state performance comparison between a polyphase induction motor and a polyphase TF motor operating in ...

  13. Thermochronological evidence for polyphase post-rift reactivation in SE Brazil

    Science.gov (United States)

    Cogné, N.; Gallagher, K.; Cobbold, P. R.; Riccomini, C.

    2012-04-01

area cooled and uplifted during the Neogene. The synchronicity of the cooling phases with tectonic pulses in the Andes and in NE Brazil, as well as the tectonic setting of the Tertiary basins (Cogné et al., submitted), leads us to attribute these phases to a plate-wide compressive stress, which reactivated inherited structures during the Late Cretaceous and Tertiary. The relief of the margin is therefore due more to polyphase post-rift reactivation and uplift than to rifting itself. - Cobbold, P.R., Meisling, K.E., Mount, V.S., 2001. Reactivation of an obliquely rifted margin, Campos and Santos Basins, Southeastern Brazil. AAPG Bulletin 85, 1925-1944. - Cogné, N., Gallagher, K., Cobbold, P.R., 2011. Post-rift reactivation of the onshore margin of southeast Brazil: Evidence from apatite (U-Th)/He and fission-track data. Earth and Planetary Science Letters 309, 118-130. - Cogné, N., Cobbold, P.R., Riccomini, C., Gallagher, K. Tectonic setting of the Taubaté basin (southeastern Brazil): insights from regional seismic profiles and outcrop data. Submitted to Journal of South American Earth Sciences.

  14. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
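The averaging model referred to above has the familiar weighted-average form R = (w₀s₀ + Σᵢ wᵢsᵢ)/(w₀ + Σᵢ wᵢ), with an initial-state term (w₀, s₀). A minimal sketch (toy weights and scale values of our own, not parameters estimated by R-Average):

```python
# Toy averaging-model prediction in the Information Integration sense:
# the response is a weighted average of scale values plus an initial state.
def averaging_model(weights, scale_values, w0=1.0, s0=5.0):
    """Weighted average of attribute scale values with initial state (w0, s0)."""
    num = w0 * s0 + sum(w * s for w, s in zip(weights, scale_values))
    den = w0 + sum(weights)
    return num / den

# Adding a mildly positive second attribute LOWERS the response: the
# signature of averaging (rather than adding) integration.
print(averaging_model([2.0], [8.0]))             # (5 + 16) / 3  = 7.0
print(averaging_model([2.0, 1.0], [8.0, 6.0]))   # (5 + 16 + 6) / 4 = 6.75
```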

  15. Polyphasic analysis of an Azoarcus-Leptothrix-dominated bacterial biofilm developed on stainless steel surface in a gasoline-contaminated hypoxic groundwater.

    Science.gov (United States)

    Benedek, Tibor; Táncsics, András; Szabó, István; Farkas, Milán; Szoboszlay, Sándor; Fábián, Krisztina; Maróti, Gergely; Kriszt, Balázs

    2016-05-01

Pump and treat systems are widely used for hydrocarbon-contaminated groundwater remediation. Although biofouling (formation of clogging biofilms on pump surfaces) is a common problem in these systems, scarce information is available regarding the phylogenetic and functional complexity of such biofilms. Extensive information about the taxa and species, as well as the metabolic potential, of a bacterial biofilm developed on the stainless steel surface of a pump submerged in a gasoline-contaminated hypoxic groundwater is presented. Results shed light on a complex network of interconnected hydrocarbon-degrading chemoorganotrophic and chemolithotrophic bacteria. It was found that besides the well-known hydrocarbon-degrading aerobic/facultative anaerobic biofilm-forming organisms (e.g., Azoarcus, Leptothrix, Acidovorax, Thauera, Pseudomonas, etc.), representatives of Fe(2+)- and Mn(2+)-oxidizing (Thiobacillus, Sideroxydans, Gallionella, Rhodopseudomonas, etc.) as well as of Fe(3+)- and Mn(4+)-respiring (Rhodoferax, Geobacter, Magnetospirillum, Sulfurimonas, etc.) bacteria were present in the biofilm. The predominance of β-Proteobacteria within the biofilm bacterial community, from both phylogenetic and functional points of view, was revealed. Investigation of meta-cleavage dioxygenase and benzylsuccinate synthase (bssA) genes indicated that within the biofilm, Azoarcus, Leptothrix, Zoogloea, and Thauera species are most probably involved in intrinsic biodegradation of aromatic hydrocarbons. Polyphasic analysis of the biofilm shed light on the fact that subsurface microbial accretions might be reservoirs of novel, putatively hydrocarbon-degrading bacterial species. Moreover, clogging biofilms, besides their detrimental effects, might supplement the efficiency of pump and treat systems.

  16. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  17. 10 CFR Appendix C to Subpart D of... - Classes of Actions that Normally Require EAs But Not Necessarily EISs

    Science.gov (United States)

    2010-01-01

    ... management program C6. Implementation of Power Marketing Administration systemwide erosion control program C7... Marketing Administration system-wide vegetation management program. C6Implementation of a Power Marketing... 10 average megawatts or more over a 12 month period. This applies to power marketing operations and...

  18. Lessons from Iowa : development of a 270 megawatt compressed air energy storage project in midwest Independent System Operator : a study for the DOE Energy Storage Systems Program.

    Energy Technology Data Exchange (ETDEWEB)

    Holst, Kent (Iowa Stored Energy Plant Agency, Traer, IA); Huff, Georgianne; Schulte, Robert H. (Schulte Associates LLC, Northfield, MN); Critelli, Nicholas (Critelli Law Office PC, Des Moines, IA)

    2012-01-01

The Iowa Stored Energy Park was an innovative, 270 Megawatt, $400 million compressed air energy storage (CAES) project proposed for service near Des Moines, Iowa, in 2015. After eight years in development, the project was terminated because of site geological limitations. However, much was learned in the development process regarding what it takes to build a utility-scale, bulk energy storage facility and coordinate it with regional renewable wind energy resources in an Independent System Operator (ISO) marketplace. Lessons include the costs and long-term economics of a CAES facility compared to conventional natural gas-fired generation alternatives; market, legislative, and contract issues related to enabling energy storage in an ISO market; the importance of due diligence in project management; and community relations and marketing for siting of large energy projects. Although many of the lessons relate to CAES applications in particular, most of the lessons learned are independent of site location or geology, or even of the particular energy storage technology involved.

  19. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a nonlinear manifold. It is shown that the two common approaches are approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
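The barycenter-of-quaternions estimate discussed in this record can be sketched in a few lines (a minimal illustration of the naive method, not the paper's Riemannian construction; note the sign alignment needed because q and −q encode the same rotation):

```python
import math

def quaternion_mean(quats):
    """Naive rotation average: align signs (q and -q are the same rotation),
    take the component-wise barycenter, then renormalize to the unit sphere."""
    ref = quats[0]
    acc = [0.0, 0.0, 0.0, 0.0]
    for q in quats:
        # Flip sign if q points away from the reference hemisphere.
        if sum(a * b for a, b in zip(q, ref)) < 0:
            q = [-c for c in q]
        acc = [a + c for a, c in zip(acc, q)]
    norm = math.sqrt(sum(c * c for c in acc))
    return [c / norm for c in acc]

# Two equal-and-opposite small rotations about z (half-angle enters the quaternion):
q1 = [math.cos(0.1), 0.0, 0.0, math.sin(0.1)]
q2 = [math.cos(-0.1), 0.0, 0.0, math.sin(-0.1)]
print(quaternion_mean([q1, q2]))  # ≈ identity rotation [1, 0, 0, 0]
```

As the abstract notes, this barycenter ignores the curvature of the rotation manifold; for widely spread rotations it only approximates the Riemannian mean.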

  20. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
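For intuition, the average-energy objective on an ultimately periodic play can be computed by accumulating the energy level step by step and averaging over one cycle; a toy sketch (the example weights are our own, not from the paper):

```python
# Long-run average of the accumulated energy along a lasso-shaped play:
# a finite prefix followed by a cycle repeated forever. When the cycle's
# total weight is zero, every repetition revisits the same energy levels,
# so the long-run average equals the mean level over one cycle.
def average_energy(prefix, cycle):
    assert sum(cycle) == 0, "cycle must be energy-neutral for a finite limit"
    level = 0
    for w in prefix:
        level += w
    levels = []
    for w in cycle:
        level += w
        levels.append(level)
    return sum(levels) / len(levels)

# The prefix charges 2 units; the cycle +3/-3 oscillates around that level.
print(average_energy([2], [3, -3]))  # (5 + 2) / 2 = 3.5
```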

  1. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons
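The setting can be summarized by a nonlinearity-managed NLS model (our paraphrase of a standard scaling for Feshbach-resonance management, not an equation quoted from the paper):

```latex
i\,u_t + u_{xx} + \gamma(t)\,|u|^2 u = 0,
\qquad
\gamma(t) = \gamma_0 + \gamma_1\!\left(\tfrac{t}{\varepsilon}\right),
```

where \(\gamma_1\) is periodic with zero mean and varies on the fast time scale \(t/\varepsilon\); averaging over the fast variable then yields the Hamiltonian averaged equation whose solitons approximate the nonlinearity-managed pulses.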

  2. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
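The stated relationship can be checked numerically. In our notation (symbols are not from the abstract): with weights w₁ and w₂, ratio r = w₂/w₁, and Aₖ the wₖ-weighted average of v, the claim is A₂ − A₁ = Cov_{w₁}(v, r) / E_{w₁}[r], which holds exactly:

```python
import random

# Numerical check of: A2 - A1 = Cov_w1(v, r) / E_w1[r], where r = w2/w1.
random.seed(0)
n = 1000
v = [random.random() for _ in range(n)]
w1 = [random.uniform(0.5, 2.0) for _ in range(n)]
w2 = [random.uniform(0.5, 2.0) for _ in range(n)]

def wavg(x, w):
    """w-weighted average of x."""
    return sum(a * b for a, b in zip(x, w)) / sum(w)

r = [b / a for a, b in zip(w1, w2)]
lhs = wavg(v, w2) - wavg(v, w1)
cov = wavg([(vi - wavg(v, w1)) * (ri - wavg(r, w1)) for vi, ri in zip(v, r)], w1)
rhs = cov / wavg(r, w1)
print(abs(lhs - rhs) < 1e-12)  # True: the identity holds up to rounding
```

The identity follows from A₂ = E_{w₁}[rv]/E_{w₁}[r] and the definition of the weighted covariance.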

  3. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.

  4. Occupational radiation exposure at Commercial Nuclear Power reactors 1983. Volume 5. Annual report

    International Nuclear Information System (INIS)

    Brooks, B.G.

    1985-03-01

This report presents an updated compilation of occupational radiation exposure at commercial nuclear power reactors for the years 1969 through 1983. The summary is based on information received from the 75 light-water-cooled reactors (LWRs) and one high-temperature gas-cooled reactor (HTGR). The total number of personnel monitored at LWRs in 1983 was 136,700. The number of workers that received measurable doses during 1983 was 85,600, which is about 1,000 more than in 1982. The total collective dose at LWRs for 1983 is estimated to be 56,500 man-rems (man-cSv), about 4,000 man-rems (man-cSv) more than reported in 1982. As a result, the average annual dose for each worker who received a measurable dose increased slightly to 0.66 rems (cSv), and the average collective dose per reactor increased by about 50 man-rems (man-cSv) to a value of 753 man-rems (man-cSv). The collective dose per megawatt of electricity generated by each reactor also increased slightly, to an average of 1.7 man-rems (man-cSv) per megawatt-year. Health implications of these annual occupational doses are discussed
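The per-worker and per-reactor averages quoted in this record follow directly from the collective dose; a quick cross-check of the arithmetic:

```python
# Cross-check of the 1983 averages reported above.
collective_dose_man_rem = 56_500
workers_with_measurable_dose = 85_600
lwr_count = 75

avg_per_worker = collective_dose_man_rem / workers_with_measurable_dose
avg_per_reactor = collective_dose_man_rem / lwr_count
print(f"{avg_per_worker:.2f} rem/worker")        # ≈ 0.66, as reported
print(f"{avg_per_reactor:.0f} man-rem/reactor")  # ≈ 753, as reported
```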

  5. Inventory of power plants in the United States, 1991

    International Nuclear Information System (INIS)

    1992-01-01

    Operable capacity at US electric power plants totaled 693,016 megawatts, as of year-end 1991. Coal-fired capacity accounted for 43 percent (299,849 megawatts) of the total US generating capacity, the share it has essentially maintained for the past decade. Gas-fired capacity accounted for 18 percent (125,683 megawatts); nuclear, 14 percent (99,589 megawatts); water, 13 percent (92,031 megawatts); petroleum, 10 percent (72,357 megawatts); other, one percent (3,507 megawatts). The 693,016 megawatts of operable capacity includes 3,627 megawatts of new capacity that came on line during 1991 (Table 2). This new capacity is 42 percent less than capacity in new units reported for 1990. Gas-fired capacity accounted for the greatest share of this new capacity. It represents 38 percent of the new capacity that started operation in 1991. The surge in new gas-fired capacity is the beginning of a trend that is expected to exist over the next 10 years. That is, gas-fired capacity will dominate new capacity additions. Gas-fired capacity additions during the next 10 years will primarily be in simple cycle gas turbines and gas turbines operating as combined cycle units. These planned gas turbine and combined cycle units, whose capacity totals over 21,000 megawatts, are expected to serve peak and intermediate loads of electric utilities
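The fuel-share percentages quoted above can be recomputed from the megawatt figures, which sum exactly to the 693,016 MW total:

```python
# Recomputing the 1991 capacity shares from the megawatt figures above.
total_mw = 693_016
capacity_mw = {
    "coal": 299_849, "gas": 125_683, "nuclear": 99_589,
    "water": 92_031, "petroleum": 72_357, "other": 3_507,
}
shares = {fuel: round(100 * mw / total_mw) for fuel, mw in capacity_mw.items()}
print(shares)
# {'coal': 43, 'gas': 18, 'nuclear': 14, 'water': 13, 'petroleum': 10, 'other': 1}
```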

  6. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant dynamical effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis

  7. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

Full Text Available The calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found on which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
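The linear-versus-logarithmic bias is easy to reproduce: for positive quantities with multiplicative scatter, exp(mean(log x)) (the geometric mean) sits systematically below mean(x). A toy demonstration with lognormal scatter (the distribution parameters are our own, not from the paper):

```python
import math
import random

# Geometric vs arithmetic mean of lognormally scattered abundances.
# For lognormal data, E[x] = exp(mu + sigma^2/2) but exp(E[log x]) = exp(mu),
# so "logarithmic averaging" is biased low by the factor exp(sigma^2/2).
random.seed(1)
mu, sigma = 0.0, 0.5
x = [math.exp(random.gauss(mu, sigma)) for _ in range(100_000)]

linear_mean = sum(x) / len(x)
log_mean = math.exp(sum(math.log(v) for v in x) / len(x))
print(f"linear {linear_mean:.3f}  log {log_mean:.3f}  "
      f"ratio {linear_mean / log_mean:.3f}")  # ratio ≈ exp(sigma^2/2) ≈ 1.13
```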

  8. On robust signal reconstruction in noisy filter banks

    CERN Document Server

    Vikalo, H; Hassibi, B; Kailath, T; 10.1016/j.sigpro.2004.08.011

    2005-01-01

    We study the design of synthesis filters in noisy filter bank systems using an H∞ estimation point of view. The H∞ approach is most promising in situations where the statistical properties of the disturbances (arising from quantization, compression, etc.) in each subband of the filter bank are unknown, or are too difficult to model and analyze. For the important special case of unitary analysis polyphase matrices we obtain an explicit expression for the minimum achievable disturbance attenuation. For arbitrary analysis polyphase matrices, standard state-space H∞ techniques can be employed to obtain numerical solutions. When the synthesis filters are restricted to being FIR, as is often the case in practice, the design can be cast as a finite-dimensional semi-definite program. In this case, we can effectively exploit the inherent non-uniqueness of the H∞ solution to optimize for an additional criterion. By optimizing for average performance in addition to th...

  9. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold. It is shown that the two barycentric approaches can be derived from natural approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
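
The barycentric shortcut that the paper examines can be sketched directly: sign-align the quaternions (q and -q encode the same rotation), sum, and renormalize back onto the unit sphere. This projected arithmetic mean is only an approximation to the Riemannian mean (a hypothetical helper, not the author's code):

```python
import math

def average_quaternions(quats):
    """Barycenter-style rotation average: sign-align, sum, renormalize.

    Quaternions are (w, x, y, z) unit 4-vectors; q and -q encode the
    same rotation, so each is flipped into the hemisphere of the first.
    """
    ref = quats[0]
    acc = [0.0, 0.0, 0.0, 0.0]
    for q in quats:
        # Sign alignment: flip q if it points away from the reference.
        if sum(a * b for a, b in zip(q, ref)) < 0.0:
            q = tuple(-c for c in q)
        acc = [a + c for a, c in zip(acc, q)]
    norm = math.sqrt(sum(c * c for c in acc))
    return tuple(c / norm for c in acc)  # project back onto the unit sphere

# Rotations of 0 and 90 degrees about z; the mean should be ~45 degrees.
q0 = (1.0, 0.0, 0.0, 0.0)
q90 = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
q_mean = average_quaternions([q0, q90])
angle = 2.0 * math.acos(q_mean[0])
print(f"mean rotation angle = {math.degrees(angle):.1f} deg")
```

For two rotations the renormalized barycenter coincides with the geodesic midpoint, which is why this example recovers exactly 45 degrees; the barycentric and Riemannian estimates separate as the rotations spread further apart.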

  10. Challenges in Human Resources Management for Sustainable Nuclear Power Generation: U.S. Perspectives

    International Nuclear Information System (INIS)

    Goodnight, Charles T.

    2017-01-01

    In the US, average 2-unit staffing is ~1,200 personnel; average 1-unit staffing is ~860 personnel. Staffing per megawatt-electric (MWe) is much lower for 2-unit plants due to economies of scale achieved in most work functions (maintenance, engineering, licensing/regulatory affairs, quality assurance, etc.) when a second reactor unit is present. Staffing models show that GEN III/III+ and GEN IV reactors will require fewer personnel than GEN II plants. Staffing requirements have multiple drivers that must be taken into consideration.

  11. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.
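
For reference, the kinetic-energy Lagrangian that this averaging yields is the standard LAE-α (Euler-α) Lagrangian of the literature (standard form, not transcribed from this paper; α is the fluctuation length scale):

```latex
% LAE-alpha (Euler-alpha) kinetic-energy Lagrangian, incompressible flow:
\ell(u) = \frac{1}{2} \int_\Omega \left( |u|^2 + \alpha^2 |\nabla u|^2 \right) \mathrm{d}x,
\qquad \nabla \cdot u = 0.
% Corresponding Euler-alpha momentum equation, with v the circulation velocity:
\partial_t v + (u \cdot \nabla) v + (\nabla u)^{\mathsf{T}} v = -\nabla p,
\qquad v = (1 - \alpha^2 \Delta) u.
```

Setting α = 0 recovers the kinetic energy of the ideal Euler equations, consistent with the interpretation of α as the amplitude of the first-order fluctuations.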

  12. Analysis of bacterial community during the fermentation of pulque, a traditional Mexican alcoholic beverage, using a polyphasic approach.

    Science.gov (United States)

    Escalante, Adelfo; Giles-Gómez, Martha; Hernández, Georgina; Córdova-Aguilar, María Soledad; López-Munguía, Agustín; Gosset, Guillermo; Bolívar, Francisco

    2008-05-31

    In this study, the bacterial community present during the fermentation of pulque, a traditional Mexican alcoholic beverage from maguey (Agave), was characterized for the first time by a polyphasic approach in which both culture- and non-culture-dependent methods were utilized. The work included the isolation of lactic acid bacteria (LAB) and aerobic mesophiles, and 16S rDNA clone libraries from total DNA extracted from the maguey sap (aguamiel) used as substrate, after inoculation with a sample of previously produced pulque and followed by 6-h fermentation. Microbiological diversity results were correlated with fermentation process parameters such as sucrose, glucose, fructose and fermentation product concentrations. In addition, medium rheological behavior analysis and scanning electron microscopy in aguamiel and during pulque fermentation were also performed. Our results showed that both culture- and non-culture-dependent approaches allowed the detection of several new and previously reported species within the alpha-, gamma-Proteobacteria and Firmicutes. Bacterial diversity in aguamiel comprised the heterofermentative Leuconostoc citreum, L. mesenteroides, L. kimchi, the gamma-Proteobacteria Erwinia rhapontici, Enterobacter spp. and Acinetobacter radioresistens. Inoculation with previously fermented pulque incorporated into the system microbiota homofermentative lactobacilli related to Lactobacillus acidophilus, several alpha-Proteobacteria such as Zymomonas mobilis and Acetobacter malorum, other gamma-Proteobacteria and an important amount of yeasts, creating a starting metabolic diversity composed of homofermentative and heterofermentative LAB and acetic acid- and ethanol-producing microorganisms. At the end of the fermentation process, the bacterial diversity was mainly composed of the homofermentative Lactobacillus acidophilus, the heterofermentative L. mesenteroides, Lactococcus lactis subsp. lactis and the alpha-Proteobacterium A. malorum.

  13. A Polyphasic Approach for Phenotypic and Genetic Characterization of the Fastidious Aquatic Pathogen Francisella noatunensis subsp. orientalis

    Directory of Open Access Journals (Sweden)

    José G. Ramírez-Paredes

    2017-12-01

    B, 16SrRNA-ITS-23SrRNA, and concatenated sequences, the two Francisella noatunensis subspecies diverged more from each other than from the closely related Francisella philomiragia (Fp). The phenotypic and genetic characterization confirmed that the Fno isolates represent a solid phylo-phenetic taxon that, in the current context of the genus, seems to be misplaced within the species Fn. We propose the use of the present polyphasic approach in future studies to characterize strains of Fnn and Fp and to verify the current taxonomic rank of Fno and other aquatic Francisella spp.

  14. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  15. Polyphasic approach to the identification and characterization of aflatoxigenic strains of Aspergillus section Flavi isolated from peanuts and peanut-based products marketed in Malaysia.

    Science.gov (United States)

    Norlia, M; Jinap, S; Nor-Khaizura, M A R; Son, R; Chin, C K; Sardjono

    2018-05-31

    Peanuts are widely consumed as the main ingredient in many local dishes in Malaysia. However, the tropical climate in Malaysia (high temperature and humidity) favours the growth of fungi from Aspergillus section Flavi, especially during storage. Most of the species from this section, such as A. flavus, A. parasiticus and A. nomius, are natural producers of aflatoxins. Precise identification of local isolates and information regarding their ability to produce aflatoxins are very important to evaluate the safety of food marketed in Malaysia. Therefore, this study aimed to identify and characterize the aflatoxigenic and non-aflatoxigenic strains of Aspergillus section Flavi in peanuts and peanut-based products. A polyphasic approach, consisting of morphological and chemical characterizations, was applied to 128 isolates originating from raw peanuts and peanut-based products. On the basis of morphological characters, 127 isolates were positively identified as Aspergillus flavus, and the other as A. nomius. Chemical characterization revealed six chemotype profiles, which indicates a diversity of toxigenic potential. About 58.6%, 68.5%, and 100% of the isolates were positive for aflatoxin, cyclopiazonic acid and aspergillic acid production, respectively. The majority of the isolates originating from raw peanut samples (64.8%) were aflatoxigenic, while those from peanut-based products were less toxigenic (39.1%). The precise identification of these species may help in developing control strategies for aflatoxigenic fungi and aflatoxin contamination in peanuts, especially during storage. These findings also highlight the possibility of the co-occurrence of other toxins, which could increase the potential toxic effects of peanuts. Copyright © 2018. Published by Elsevier B.V.

  16. Ultra-short pulse delivery at high average power with low-loss hollow core fibers coupled to TRUMPF's TruMicro laser platforms for industrial applications

    Science.gov (United States)

    Baumbach, S.; Pricking, S.; Overbuschmann, J.; Nutsch, S.; Kleinbauer, J.; Gebs, R.; Tan, C.; Scelle, R.; Kahmann, M.; Budnicki, A.; Sutter, D. H.; Killi, A.

    2017-02-01

    Multi-megawatt ultrafast laser systems at micrometer wavelength are commonly used for material processing applications, including ablation, cutting and drilling of various materials or cleaving of display glass with excellent quality. There is a need for flexible and efficient beam guidance, avoiding free space propagation of light between the laser head and the processing unit. Solid core step index fibers are only feasible for delivering laser pulses with peak powers in the kW regime due to the optical damage threshold of bulk silica. In contrast, hollow core fibers are capable of guiding ultra-short laser pulses with orders of magnitude higher peak powers. This is possible since a micro-structured cladding confines the light within the hollow core and therefore minimizes the spatial overlap between silica and the electro-magnetic field. We report on recent results of single-mode ultra-short pulse delivery over several meters in a low-loss hollow core fiber packaged with industrial connectors. TRUMPF's ultrafast TruMicro laser platforms equipped with advanced temperature control and precisely engineered opto-mechanical components provide excellent position and pointing stability. They are thus perfectly suited for passive coupling of ultra-short laser pulses into hollow core fibers. Neither active beam launching components nor beam trackers are necessary for a reliable beam delivery in a space- and cost-saving package. Long term tests with weeks of stable operation, excellent beam quality and an overall transmission efficiency of above 85 percent even at high average power confirm the reliability for industrial applications.

  17. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  18. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables

  19. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  20. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    Multi-model ensemble (MME) averaging is considered the most reliable approach for simulating both present-day and future climates, and it has been a primary reference for drawing conclusions in major coordinated studies, e.g., the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes with tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost. The ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions under the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling

  1. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  2. Megawatts with safety

    International Nuclear Information System (INIS)

    Carpenter, E.W.; Dent, K.H.

    1983-01-01

    The comprehensive programme of research and development which backs the AGR stations being built and operated in the UK is described. The programme, operated by the UKAEA, the National Nuclear Corporation, British Nuclear Fuels Limited, the Central Electricity Generating Board and the Scottish Electricity Board, provides comprehensive technical support to the new stations being commissioned and endeavours to reduce costs by maximising plant performance and life without prejudice to the good safety characteristics of the AGR system. The programme is examined under the headings: fuel, core and coolant chemistry, circuit activity, shielding and nuclear heating, performance, corrosion and structural integrity, component development, irradiated fuel storage and transport. (U.K.)

  3. Optimizing Microgrid Architecture on Department of Defense Installations

    Science.gov (United States)

    2014-09-01

    this information was published. At China Lake Naval Air Weapons Station, the 270-megawatt geothermal power plant annually provides an average of 1.4...wind, geothermal and biomass resources on or in the vicinity of DOD installations” [10]. More declaratively, “electrical power produced from these...shift to more renewable sources of energy of all types. Projects have ranged from geothermal plants to photovoltaic (PV) cells on rooftops. Table

  4. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...

  5. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
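
The rejection logic described above can be illustrated with a simplified pixelwise scheme: use a per-pixel median as a defect-resistant reference, discard whole maps that deviate grossly from it (large-area unwrapping artifacts), and mask individual outlier pixels (small-area defects) before averaging. A sketch under assumed thresholds, not the paper's algorithm:

```python
def robust_phase_average(maps, map_tol=1.0, pixel_tol=0.5):
    """Average phase maps, rejecting defective maps and unreliable pixels.

    maps: list of equal-length lists of phase values (one "map" per frame).
    map_tol: reject a whole map if its mean |deviation| from the per-pixel
             median exceeds this (large-area defect).
    pixel_tol: mask a pixel in one map if it deviates this much from the
               median (small-area defect); masked pixels are excluded.
    Returns (average_map, counts), where counts[i] is the number of maps
    contributing to pixel i.
    """
    n_pix = len(maps[0])

    def median(vals):
        s = sorted(vals)
        m = len(s) // 2
        return s[m] if len(s) % 2 else 0.5 * (s[m - 1] + s[m])

    # Per-pixel median: a defect-resistant reference map.
    ref = [median([m[i] for m in maps]) for i in range(n_pix)]

    # Reject maps with a large mean deviation (e.g. an unwrapping artifact).
    kept = [m for m in maps
            if sum(abs(m[i] - ref[i]) for i in range(n_pix)) / n_pix <= map_tol]

    avg, counts = [], []
    for i in range(n_pix):
        good = [m[i] for m in kept if abs(m[i] - ref[i]) <= pixel_tol]
        counts.append(len(good))
        avg.append(sum(good) / len(good) if good else ref[i])
    return avg, counts

# Three clean maps plus one with a ~2*pi unwrapping error everywhere.
clean = [[0.10, 0.20, 0.30], [0.12, 0.18, 0.31], [0.08, 0.22, 0.29]]
bad = [[0.10 + 6.28, 0.20 + 6.28, 0.30 + 6.28]]
avg, counts = robust_phase_average(clean + bad)
print(avg, counts)
```

The map carrying the 2π offset is rejected outright, so the remaining maps average cleanly with full per-pixel counts; in a real implementation the thresholds would be the run-time adjustable parameters the abstract mentions.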

  6. A time-averaged cosmic ray propagation theory

    International Nuclear Information System (INIS)

    Klimas, A.J.

    1975-01-01

    An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.) [de

  7. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures. However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with much fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which
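
The geometric origin of these averaging artifacts is easy to reproduce: averaging the coordinates of two orientations of the same rigid bond contracts it. A toy 2D illustration, unrelated to the TASSER data set:

```python
import math

# One bond of length 1.0 observed in two orientations 90 degrees apart
# (standing in for two members of a structural ensemble).
conf_a = [(0.0, 0.0), (1.0, 0.0)]
conf_b = [(0.0, 0.0), (0.0, 1.0)]

# Naive per-atom coordinate averaging of the ensemble.
avg = [((xa + xb) / 2, (ya + yb) / 2)
       for (xa, ya), (xb, yb) in zip(conf_a, conf_b)]

# The averaged "bond" is unphysically short: ~0.707 instead of 1.0.
bond = math.dist(avg[0], avg[1])
print(f"averaged bond length = {bond:.4f}")
```

The averaged bond comes out at about 71 percent of its true length, exactly the kind of unrealistic local geometry that the Monte Carlo refinement described above is designed to remove.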

  8. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...

  9. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of two adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems under different turbulence regimes. Further, to enhance the ASE, we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that the ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The Coherent OWC system with ORA outperforms the other modulation schemes and can achieve an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture averaging effect we can achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with Coherent OWC modulation a favorable candidate for improving the ASE of the FSO communication system.

  10. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...

  11. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  12. Environmental and industrial applications of pulsed power systems

    International Nuclear Information System (INIS)

    Neau, E.L.

    1993-01-01

    The technology base formed by the development of high peak power simulators, laser drivers, free electron lasers (FEL's), and Inertial Confinement Fusion (ICF) drivers from the early 60's through the late 80's is being extended to high average power short-pulse machines with the capabilities of performing new roles in environmental cleanup applications and in supporting new types of industrial manufacturing processes. Some of these processes will require very high average beam power levels of hundreds of kilowatts to perhaps megawatts. In this paper we briefly discuss new technology capabilities and then concentrate on specific application areas that may benefit from the high specific energies and high average powers attainable with short-pulse machines

  13. Determining average path length and average trapping time on generalized dual dendrimer

    Science.gov (United States)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some cases, during a transport or diffusion process, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., the trap placed on a central node and the trap uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
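
For any finite network, the APL studied above is simply the mean of shortest-path distances over all node pairs; a generic breadth-first sketch (illustrative only, the paper derives closed-form expressions instead):

```python
from collections import deque

def average_path_length(adj):
    """Mean shortest-path distance over all node pairs.

    adj: dict mapping node -> iterable of neighbours (undirected, connected).
    """
    nodes = list(adj)
    total, pairs = 0, 0
    for src in nodes:
        dist = {src: 0}
        queue = deque([src])
        while queue:                 # breadth-first search from src
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())  # each ordered pair counted once
        pairs += len(nodes) - 1
    return total / pairs             # ordered-pair mean == unordered mean

# A 4-node star: centre 0 joined to leaves 1, 2, 3.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(average_path_length(star))
```

For the 4-node star the routine returns 1.5, matching the hand count of three pairs at distance 1 and three at distance 2.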

  14. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  15. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve which comprises a massive bulk of 'middle-class' values, and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super large values; and (iv) "Average is Over" indeed.

  16. Spectrometric determination of the species distribution of hydrogen and deuterium in the multi-megawatt ion sources (PINI) of the neutral beam injectors NI-1 and NI-2 of TEXTOR

    International Nuclear Information System (INIS)

    Rotter, H.; Uhlemann, R.

    1990-11-01

    The ion species fractions of hydrogen (H⁺, H₂⁺, H₃⁺) and deuterium (D⁺, D₂⁺, D₃⁺) in the extracted beam of the multi-megawatt ion sources (PINI) of the neutral beam injectors of TEXTOR are determined. The measurements are obtained from two grating spectrometers of 0.5 m focal length with a light-guiding system of 50 mm aperture, using the Doppler-shifted Hα/Dα light of the accelerated beam particles. The spectral resolution obtained is 0.76 Å with a 50 μm entrance slit. The ion source is a bucket source (modified JET PINI) with a multipole magnetic field in checkerboard arrangement. The species fraction measurements are performed as a function of beam current, ion source pressure and beam pulse length. The results for hydrogen and deuterium at particle energies of 20-55 keV and beam currents of 13-87 A show no significant difference between neutral injector I and II. For 55 keV and a beam current of 87 A in hydrogen and 63 A in deuterium, a species mix of 67.2:24.5:8.4% (H⁺:H₂⁺:H₃⁺) and of 69.1:23.8:7.1% (D⁺:D₂⁺:D₃⁺) is obtained. (orig.) [de

  17. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    Full Text Available A time- and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
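
The two standard hourly-value types compared above are easy to state operationally: a "spot" value takes a single 1-min sample from the hour, while a "boxcar" value averages all sixty. A synthetic sketch (hypothetical numbers, not observatory records):

```python
import math

# One hour of synthetic 1-min values: a 10 nT oscillation with a
# 30-min period superposed on a 20 nT baseline.
minute_values = [20.0 + 10.0 * math.sin(2 * math.pi * t / 30.0)
                 for t in range(60)]

spot = minute_values[7]             # instantaneous "spot" value, near a peak
boxcar = sum(minute_values) / 60.0  # simple 1-h "boxcar" average

# The boxcar average suppresses the oscillation (two full periods cancel),
# while a spot value can land anywhere within the oscillation's range.
print(f"spot = {spot:.2f} nT, boxcar = {boxcar:.2f} nT")
```

Here the boxcar average recovers the 20 nT baseline exactly, while the spot value reports nearly 30 nT: a simple instance of the aliasing hazard of spot sampling discussed above.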

  18. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  19. Linear induction accelerators for fusion and neutron production

    International Nuclear Information System (INIS)

    Barletta, W.A.; California Univ., Los Angeles, CA

    1993-08-01

Linear induction accelerators (LIA) with pulsed power drives can produce high-energy, intense beams of electrons, protons, or heavy ions with megawatts of average power. The continuing development of highly reliable LIA components permits the use of such accelerators as cost-effective beam sources to drive fusion pellets with heavy ions, to produce intense neutron fluxes using proton beams, and to generate, with electrons, microwave power to drive magnetic fusion reactors and high-gradient rf linacs

  20. Ultramafic clasts from the South Chamorro serpentine mud volcano reveal a polyphase serpentinization history of the Mariana forearc mantle

    Science.gov (United States)

    Kahl, Wolf-Achim; Jöns, Niels; Bach, Wolfgang; Klein, Frieder; Alt, Jeffrey C.

    2015-06-01

    Serpentine seamounts located on the outer half of the pervasively fractured Mariana forearc provide an excellent window into the forearc devolatilization processes, which can strongly influence the cycling of volatiles and trace elements in subduction zones. Serpentinized ultramafic clasts recovered from an active mud volcano in the Mariana forearc reveal microstructures, mineral assemblages and compositions that are indicative of a complex polyphase alteration history. Petrologic phase relations and oxygen isotopes suggest that ultramafic clasts were serpentinized at temperatures below 200 °C. Several successive serpentinization events represented by different vein generations with distinct trace element contents can be recognized. Measured in situ Rb/Cs ratios are fairly uniform ranging between 1 and 10, which is consistent with Cs mobilization from sediments at lower temperatures and lends further credence to the low-temperature conditions proposed in models of the thermal structure in forearc settings. Late veins show lower fluid mobile element (FME) concentrations than early veins, suggesting a decreasing influence of fluid discharge from the subducting slab on the composition of the serpentinizing fluids. The continuous microfabric and mineral chemical evolution observed in the ultramafic clasts may have implications as to the origin and nature of the serpentinizing fluids. We hypothesize that opal and smectite dehydration produce quartz-saturated fluids with high FME contents and Rb/Cs between 1 and 4 that cause the early pervasive serpentinization. The partially serpentinized material may then be eroded from the basal plane of the suprasubduction mantle wedge. Serpentinization continued but the interacting fluids did not carry a pronounced sedimentary signature, either because FMEs were no longer released from the slab, or due to an en route loss of FMEs. 
Late chrysotile veins that document the increased access of fluids in a now fluid-dominated regime are

  1. Delineation of facial archetypes by 3d averaging.

    Science.gov (United States)

    Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G

    2004-10-01

The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D scans of the faces, while 3D-Rugle3 and locally developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups, European and Japanese, and from children with three genetic disorders (Williams syndrome, achondroplasia and Sotos syndrome) as well as a normal control group. The method involved averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face-averaging techniques there was no warping or filling-in of spaces by interpolation; however, the facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
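
The averaging step itself (a pointwise mean of registered depth maps, with no warping or interpolation) can be sketched as follows; the grid size, noise model, and the 14-scan sample are illustrative assumptions:

```python
import numpy as np

# Each registered 3D scan is treated as a depth map z(x, y) on a common
# grid; the archetype is the pointwise mean of the z coordinates.
# Colour information is not carried through, as noted in the abstract.
rng = np.random.default_rng(2)
base = rng.normal(size=(64, 64))                 # shared underlying face shape
scans = [base + rng.normal(scale=0.1, size=(64, 64)) for _ in range(14)]

archetype = np.mean(scans, axis=0)               # average depth coordinates
print(archetype.shape)  # (64, 64)
```

With 14 scans the per-pixel noise in the archetype shrinks by roughly a factor of sqrt(14), which is consistent with the finding that a modest sample suffices.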

  2. To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2007-01-01

    We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'

  3. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models

  4. Self-similarity of higher-order moving averages

    Science.gov (United States)

    Arianos, Sergio; Carbone, Anna; Türk, Christian

    2011-10-01

In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by the simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, the increase of polynomial degree does not require changing the moving average window. Thus trends at different time scales can be obtained on data sets with the same size. These polynomials could be interesting for those applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
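
The first-order (standard moving average) case of the detrending moving average method can be sketched as follows; the function name, window choices, and the Brownian test signal are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def dma_variance(x, n):
    """Detrending-moving-average variance for window n, first-order
    polynomial: subtract the backward moving average from the series
    and take the mean squared residual."""
    kernel = np.ones(n) / n
    ma = np.convolve(x, kernel, mode="valid")     # moving average, window n
    detrended = x[n - 1:] - ma                    # align point i with MA over [i-n+1, i]
    return np.mean(detrended ** 2)

# Scaling sigma^2(n) ~ n^(2H) gives H as half the log-log slope.
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=100_000))          # Brownian motion, H = 0.5
windows = np.array([10, 20, 40, 80, 160])
sig2 = np.array([dma_variance(x, n) for n in windows])
H = np.polyfit(np.log(windows), np.log(sig2), 1)[0] / 2
print(round(H, 2))  # close to 0.5 for Brownian motion
```

The higher-order variants in the paper replace the plain moving average with a moving polynomial fit; the variance-scaling estimate of H proceeds the same way.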

  5. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  6. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  7. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary...... Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain....

  8. Operations buttressed by research - and everything works smoothly

    Energy Technology Data Exchange (ETDEWEB)

    Aeijaelae, M.; Ahtikari, J.; Repo, A. [ed.

    1997-11-01

    The central Finnish town of Jyvaeskylae is home to an IVO energy generation facility which is regarded as one of the most successful power-plant configurations in the world. During its ten years of operation, the Rauhalahti Power Plant, which produces 140 megawatts of district heat, 65 megawatts of industrial steam and 87 megawatts of electricity, has proved successful in terms of both profitability and technology

  9. Multiphase averaging of periodic soliton equations

    International Nuclear Information System (INIS)

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  10. Tendon surveillance requirements - average tendon force

    International Nuclear Information System (INIS)

    Fulton, J.F.

    1982-01-01

Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, is stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)
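
A hedged sketch of the group-average check described above. The correction factors and the required minimum force are plain inputs here (the paper derives the actual correction factor); the numbers and function name are hypothetical:

```python
def group_average_force(lift_off_forces, correction_factors):
    """Average lift-off force for a tendon group: each measured force is
    first normalized by its correction factor, then the sample of
    corrected forces is averaged."""
    corrected = [f * c for f, c in zip(lift_off_forces, correction_factors)]
    return sum(corrected) / len(corrected)

# Hypothetical sample of three hoop tendons (forces in kips) with
# illustrative correction factors near unity:
avg = group_average_force([1480.0, 1510.0, 1495.0], [1.00, 0.99, 1.01])
print(avg >= 1450.0)  # True: compare against a required minimum average force
```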

  11. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

Full Text Available We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We verify the model outcome with examples and simulation results using the NS2 simulator.
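
The weighted fair-share idea behind WFQ average-bandwidth assignment can be sketched as an iterative max-min computation; the flow names, rates, and convergence scheme below are illustrative assumptions (the paper's model additionally accounts for packet lengths):

```python
def wfq_average_bandwidth(link_speed, weights, input_rates):
    """Weighted fair-share allocation: a flow gets at most its input rate;
    flows demanding less than their weighted share are satisfied first,
    and the unused capacity is redistributed by weight iteratively."""
    alloc = {}
    active = set(weights)
    capacity = link_speed
    while active:
        total_w = sum(weights[f] for f in active)
        bottlenecked = [f for f in active
                        if input_rates[f] <= capacity * weights[f] / total_w]
        if not bottlenecked:
            # Remaining flows are limited by the link, not their own rate.
            for f in active:
                alloc[f] = capacity * weights[f] / total_w
            break
        for f in bottlenecked:
            alloc[f] = input_rates[f]
            capacity -= input_rates[f]
            active.remove(f)
    return alloc

# Example: a 10 Mb/s link shared by three flows.
alloc = wfq_average_bandwidth(
    10.0,
    {"a": 1, "b": 1, "c": 2},
    {"a": 1.0, "b": 6.0, "c": 6.0})
print(alloc)  # a gets 1.0, b gets 3.0, c gets 6.0
```

Flow "a" is rate-limited, so its leftover share flows to "b" and "c" in proportion to their weights.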

  12. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that cosmological parameters evaluated on the smoothed spatial domain B-bar obey Ω-bar_m + Ω-bar_R + Ω-bar_A + Ω-bar_Q = 1, where Ω-bar_m, Ω-bar_R and Ω-bar_A correspond to the standard Friedmannian parameters, while Ω-bar_Q is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias

  13. Average and local structure of α-CuI by configurational averaging

    International Nuclear Information System (INIS)

    Mohn, Chris E; Stoelen, Svein

    2007-01-01

Configurational Boltzmann averaging together with density functional theory are used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out with peaks in the atom-density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond-angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion-pathways is expected due to the large variation in the local motifs

  14. Experimental demonstration of squeezed-state quantum averaging

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...

  15. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

The receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression that gives the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance versus the oceanic turbulence parameters and the receiver aperture diameter are examined in strong oceanic turbulence. Also, the effect of the receiver aperture diameter on the aperture averaging factor is presented in strong oceanic turbulence.
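
For orientation, the aperture averaging factor is the ratio of the irradiance flux variance on an aperture of diameter D to the point-receiver scintillation. The sketch below uses the commonly quoted weak-turbulence, plane-wave approximation from the atmospheric literature (Andrews & Phillips), not the paper's strong-oceanic-turbulence result; all parameter values are illustrative:

```python
import math

def aperture_averaging_factor(D, wavelength, L):
    """Textbook weak-turbulence, plane-wave approximation:
    A = [1 + 1.062 * k * D^2 / (4 * L)]^(-7/6), with k = 2*pi/wavelength.
    Shown only to illustrate how A behaves; the paper derives an
    oceanic strong-turbulence expression instead."""
    k = 2 * math.pi / wavelength
    return (1 + 1.062 * k * D**2 / (4 * L)) ** (-7 / 6)

# Larger apertures average out scintillation: the factor falls toward 0.
for D in (0.01, 0.05, 0.20):  # aperture diameters in metres
    print(D, round(aperture_averaging_factor(D, 532e-9, 100.0), 4))
```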

  16. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and by the aim of applying similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and preclude analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  17. 20 CFR 404.220 - Average-monthly-wage method.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...

  18. Light-cone averaging in cosmology: formalism and applications

    International Nuclear Information System (INIS)

    Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.

    2011-01-01

    We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted ''geodesic light-cone'' coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called ''redshift drift'' in a generic inhomogeneous Universe

  19. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log_2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
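
The entropy lower bound mentioned above can be illustrated in a few lines; the function name is mine and the uniform example is purely illustrative:

```python
import math

def entropy_lower_bound(probs, k=2):
    """Lower bound on the minimum average depth of a decision tree for a
    diagnostic problem over a k-valued information system:
    H(p) / log2(k), where H is the Shannon entropy in bits."""
    H = -sum(p * math.log2(p) for p in probs if p > 0)
    return H / math.log2(k)

# Uniform distribution over 8 outcomes, binary attributes (k = 2):
print(entropy_lower_bound([1 / 8] * 8))  # 3.0 — at least 3 tests on average
```

For the prefix-code problem the bound is met to within one, matching the chapter's statement about problems with a complete set of attributes.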

  20. Lateral dispersion coefficients as functions of averaging time

    International Nuclear Information System (INIS)

    Sheih, C.M.

    1980-01-01

Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations for other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are not for deriving a set of specific criteria but for demonstrating the need to discriminate among various processes in studies of plume dispersion
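
The Turner power-law formula referred to above can be sketched as follows. The exponent is an assumption here (workbook values near 0.17-0.2 are commonly cited), and the numbers are purely illustrative:

```python
def scale_concentration(c_ref, t_ref_min, t_min, p=0.17):
    """Turner-style power law for converting a pollutant concentration
    averaged over t_ref to another averaging time t:
    c(t) = c_ref * (t_ref / t)**p.
    The exponent p is empirical; p = 0.17 is an assumed value."""
    return c_ref * (t_ref_min / t_min) ** p

# A 15-min average of 100 ug/m^3 corresponds to a lower 1-h average:
print(round(scale_concentration(100.0, 15.0, 60.0), 1))  # about 79.0
```

Longer averaging times smooth out concentration peaks, hence the lower value.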

  1. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)

    2011-04-07

The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak (Ū_P), the average (Ū), the effective (U_eff) or the maximum peak (U_P) tube voltage. This work proposed a method for determination of the PPV from measurements with a kV-meter that measures the average Ū or the average peak Ū_P voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average-peak (k_PPV,kVp) and the average (k_PPV,Uav) conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference and the calculated (according to the proposed method) PPV values were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous base to determine the PPV with kV-meters from Ū_P and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
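
The two-step conversion described above (calibration coefficient, then conversion factor) can be sketched as follows. The function name and all numeric values are hypothetical; the paper supplies the actual regression coefficients that yield k_PPV for a given tube voltage and ripple:

```python
def ppv_from_reading(reading_kv, calibration_coeff, k_ppv):
    """Convert a kV-meter reading (average or average-peak voltage) to the
    practical peak voltage: apply the meter's calibration coefficient,
    then the conversion factor k_PPV appropriate to the tube voltage and
    ripple. Illustrative only; the regression for k_PPV is in the paper."""
    return reading_kv * calibration_coeff * k_ppv

# Hypothetical example: an average-peak reading of 80 kV, a calibration
# coefficient of 1.01 and a conversion factor of 0.98.
print(round(ppv_from_reading(80.0, 1.01, 0.98), 2))  # 79.18
```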

  2. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

Role of positive definite matrices. • Diffusion tensor imaging: 3 × 3 pd matrices model water flow at each voxel of a brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine learning: n × n pd matrices occur as kernel matrices. — Tanvi Jain, Averaging operations on matrices

  3. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the average...

  4. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion

  5. Just the facts: 1999 annual report

    International Nuclear Information System (INIS)

    1999-01-01

TransAlta is an international electric energy company with about $6 billion in assets and a generation capacity of more than 8,000 megawatts. In addition to about 4,500 megawatts of coal-fired and hydroelectric generation in Alberta, the company is closing a 1,340 megawatt acquisition in the United States and has almost 2,200 megawatts of gas-fired power projects operating or in development in North America, Mexico and Australia. This annual report reviews progress made by the corporation during 1999. Notable among these achievements were the acquisition of the 1,340 megawatt power generating plant and coal mine in Washington State; disposing of the less profitable Alberta-based distribution and retail businesses, and businesses with an unacceptable risk profile in New Zealand; new profitable power purchase arrangements which will preserve the value of Alberta-based generation assets in a more competitive market; and the transition into Y2K with no interruptions in service to customers. The Corporation intends to continue building on its core strengths as a low-cost operator of generation and transmission assets. The short-term target (2002) is 10,000 megawatts, while the longer term goal (2005 to 2007) is to reach 15,000 megawatts of generating capacity, focusing on growth in Canada, the United States and Mexico, as well as Australia. In 1999 the Corporation provided a 10.1 per cent return on investment to shareholders; it expects to do as well in the year 2000 and beyond. A complete audited financial statement is incorporated into the annual report

  6. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    Ampadu Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT

  7. Averaging in SU(2) open quantum random walk

    Science.gov (United States)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  8. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  9. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  10. Polyphase Pulse Compression Waveforms

    Science.gov (United States)

    1982-01-05

    ...errors were due only to the A/D converters and that the matched-filter phases and amplitude were perfect. The results are shown in Fig. 16 where each... 5. "Multiple-Target Electronic System," May 1981, AES-17, pp. 364-372. 6. C. Cook and M. Bernfeld, "Radar Signals: An Introduction to Theory and Applications," New York

  11. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  12. Design, construction, and first operational results of a 5 megawatt feedback controlled amplifier system for disruption control on the Columbia University HBT-EP tokamak

    International Nuclear Information System (INIS)

    Reass, W.A.; Alvestad, H.A.; Bartsch, R.R.; Wurden, G.A.; Ivers, T.H.; Nadle, D.L.

    1995-01-01

    This paper presents the electrical design and first operational results of a 5 Megawatt feedback controlled amplifier system designed to drive a 300 uH saddle coil set on the ''HBT-EP'' tokamak. It will be used to develop various plasma feedback techniques to control and inhibit the onset of plasma disruptions that are observed in high-beta plasmas. To provide a well characterized system, a high fidelity, high power closed loop amplifier system has been refurbished from the Los Alamos ''ZT-P'' equilibrium feedback system. In its configuration developed for the Columbia HBT-EP tokamak, any desired waveform may be generated within a ±100 ampere and 16 kV peak-to-peak dynamic range. An energy storage capacitor bank presently limits the effective full power pulse width to 10 ms. The full power bandwidth driving the saddle coil set is ∼12 kHz, with bandwidth at reduced powers exceeding 30 kHz. The system is designed similar to a grounded cathode, push-pull, transformer coupled, tube type amplifier system. The push-pull amplifier consists of 6 each Machlett ML8618 magnetically beamed triodes, 3 on each end of the (center tapped) coupling transformer. The transformer has 0.1 volt-seconds of core and a 1:1 turns ratio. The transformer is specially designed for high power, low leakage inductance, and high bandwidth. Each array of ML8618's is (grid) driven with a fiber optic controlled hotdeck with a 3CX10,000A7 (triode) output. To linearize the ML8618 grid drive, a minor feedback loop in the hotdeck is utilized. Overall system response is controlled by active feedback of the saddle coil current, derived from a coaxial current viewing resistor. The detailed electrical design of the power amplifier, transformer, and feedback system will be provided in addition to recent HBT-EP operational results

  13. Energy wood harvesting from nurse crop of spruce seeding stand; Kuusen taimikon verhopuuston korjuu energiapuuksi

    Energy Technology Data Exchange (ETDEWEB)

    Peltola, M.; Tanttu, V.

    2008-07-01

    The study focused on establishing the productivity and costs of mechanical energy wood cutting and the profitability of forest management alternatives in the harvesting of hold-overs from spruce seeding stands. The productivity in whole-tree harvesting performed using a multi-tree whole tree processing method reached 3.5 m3/E{sub 0}h with a felling cost of 26 euros/m3. The calculated cost of chainsaw harvesting using a felling-piling technique was 16 euros/m3. The average size of trees harvested from the research stand was 15 dm3. At a rate of 17.8 euros per megawatt-hour paid for forest chips delivered to the plant, the net profit using the mechanical harvesting method was 272 euros per hectare. The net profit using chainsaw harvesting was 464 euros per hectare. 'Net profit' is defined here as the total amount earned, taking into account forest management costs, the production cost of forest chips, the Kemera subsidies and the price paid for the chips at the place of usage. The net profit of felling the removed trees to the ground (not processing them into fuel) was minus 124 euros. A theoretical stumpage price was calculated for the energy harvesting alternatives by dividing the net result by the volume of trees harvested. The theoretical stumpage price was positive when the price paid for chips delivered to the place of usage was 13 euros per megawatt-hour for mechanically harvested chips or 10 euros per megawatt-hour for chainsaw-harvested chips. In mechanical harvesting, 17 percent of the trees harvested were damaged in the harvesting process. While it is often essential for the forest owner to ensure that any forest management measures contribute to quick profitability, the forest management benefits that will become realisable assets in the future must nevertheless also be taken into account. (orig.)
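
    The stumpage-price arithmetic described above is simple enough to sketch in code. This is an illustrative calculation only: the net profits (272 and 464 euros per hectare) come from the abstract, but the per-hectare harvest volume is a hypothetical figure, since the abstract does not report it.

```python
def theoretical_stumpage_price(net_result_eur: float, volume_m3: float) -> float:
    """Theoretical stumpage price as defined above: the net result of a
    harvesting alternative divided by the volume of trees harvested."""
    return net_result_eur / volume_m3

# Net profits per hectare from the study; 40 m3/ha is a hypothetical yield.
mechanical = theoretical_stumpage_price(272.0, 40.0)  # 6.8 eur/m3
chainsaw = theoretical_stumpage_price(464.0, 40.0)    # 11.6 eur/m3
```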

  14. Comparative analysis of methods for modelling the short-term probability distribution of extreme wind turbine loads

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimirov

    2016-01-01

    We have tested the performance of statistical extrapolation methods in predicting the extreme response of a multi-megawatt wind turbine generator. We have applied the peaks-over-threshold, block maxima and average conditional exceedance rates (ACER) methods for peaks extraction, combined with four...... levels, based on the assumption that the response tail is asymptotically Gumbel distributed. Example analyses were carried out, aimed at comparing the different methods, analysing the statistical uncertainties and identifying the factors, which are critical to the accuracy and reliability...

  15. 47 CFR 80.759 - Average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.759 Average terrain elevation. (a)(1) Draw radials...

  16. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department

  17. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...

  18. Scale-invariant Green-Kubo relation for time-averaged diffusivity

    Science.gov (United States)

    Meyer, Philipp; Barkai, Eli; Kantz, Holger

    2017-12-01

    In recent years it was shown both theoretically and experimentally that in certain systems exhibiting anomalous diffusion the time- and ensemble-averaged mean-squared displacement are remarkably different. The ensemble-averaged diffusivity is obtained from a scaling Green-Kubo relation, which connects the scale-invariant nonstationary velocity correlation function with the transport coefficient. Here we obtain the relation between time-averaged diffusivity, usually recorded in single-particle tracking experiments, and the underlying scale-invariant velocity correlation function. The time-averaged mean-squared displacement is given by ⟨δ²⟩ ∼ 2 D_ν t^β Δ^(ν−β), where t is the total measurement time and Δ is the lag time. Here ν is the anomalous diffusion exponent obtained from ensemble-averaged measurements, ⟨x²⟩ ∼ t^ν, while β ≥ −1 marks the growth or decline of the kinetic energy, ⟨v²⟩ ∼ t^β. Thus, we establish a connection between exponents that can be read off the asymptotic properties of the velocity correlation function and similarly for the transport constant D_ν. We demonstrate our results with nonstationary scale-invariant stochastic and deterministic models, thereby highlighting that systems with equivalent behavior in the ensemble average can differ strongly in their time average. If the averaged kinetic energy is finite, β = 0, the time scalings of ⟨δ²⟩ and ⟨x²⟩ are identical; however, the time-averaged transport coefficient D_ν is not identical to the corresponding ensemble-averaged diffusion constant.
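
    The time-averaged mean-squared displacement referred to above has a standard single-trajectory estimator. A minimal numpy sketch, using ordinary Brownian motion as the assumed test case (ν = 1, β = 0, where time and ensemble averages agree and the displacement grows linearly in the lag):

```python
import numpy as np

def tamsd(x: np.ndarray, lag: int) -> float:
    """Time-averaged mean-squared displacement of a single trajectory x
    (unit time steps) at integer lag: mean over t of [x(t+lag) - x(t)]^2."""
    disp = x[lag:] - x[:-lag]
    return float(np.mean(disp ** 2))

# Ordinary Brownian motion with diffusion constant D: increments of
# variance 2D per step, so tamsd(x, lag) ~ 2*D*lag.
rng = np.random.default_rng(0)
D = 0.5
x = np.cumsum(rng.normal(0.0, np.sqrt(2 * D), size=200_000))
ratio = tamsd(x, 20) / tamsd(x, 10)  # ~2 for linear-in-lag growth
```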

  19. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Science.gov (United States)

    2010-07-01

    ... volume of gasoline produced or imported in batch i. Si=The sulfur content of batch i determined under § 80.330. n=The number of batches of gasoline produced or imported during the averaging period. i=Individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...
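
    The annual average described here is a volume-weighted mean over the batches of the averaging period, sum(V_i * S_i) / sum(V_i). A minimal sketch with hypothetical batch data (not figures from the regulation):

```python
def average_sulfur(batches):
    """Volume-weighted average sulfur level over the averaging period:
    sum(V_i * S_i) / sum(V_i) for the n batches (V_i = volume of batch i,
    S_i = sulfur content of batch i)."""
    total_volume = sum(v for v, _ in batches)
    return sum(v * s for v, s in batches) / total_volume

# Hypothetical batches as (volume in gallons, sulfur in ppm):
avg = average_sulfur([(1000.0, 30.0), (3000.0, 10.0)])  # 15.0 ppm
```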

  20. MARGATS cruise: investigation of the deep internal structure and the heterogeneous margins of the Demerara plateau reveals a polyphased volcanic history

    Science.gov (United States)

    Graindorge, D.; Museur, T.; Roest, W. R.; Klingelhoefer, F.; Loncke, L.; Basile, C.; Poetisi, E.; Deverchere, J.; Heuret, A.; Jean-Frederic, L.; Perrot, J.

    2017-12-01

    opportunity to present the exceptional quality of the seismic data after the initial processing steps, and how these data condition a new understanding of the Demerara plateau and its margins, which suggests the role of a hypothetical new hot spot in shaping the complex polyphased history of the structure.

  1. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frames to analyse two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However there are important statistical differences in dealing with the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
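
    The distinction drawn above between the phasic average and the mass-weighted average can be illustrated numerically. The toy numbers below are hypothetical, chosen only to show that the two averages coincide when the particle number n is constant across realizations and differ when it is not:

```python
import numpy as np

# Per-realization particle counts n and mean velocities u in a control volume.
n = np.array([3.0, 5.0, 2.0, 6.0])
u = np.array([1.0, 2.0, 1.5, 2.5])

phasic = float(u.mean())                        # plain (phasic) ensemble average
mass_weighted = float((n * u).sum() / n.sum())  # concentration-weighted average

# With constant n the weights cancel and the two definitions agree.
n_const = np.full(4, 4.0)
phasic_c = float(u.mean())
mass_weighted_c = float((n_const * u).sum() / n_const.sum())
```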

  2. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  3. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
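
    As a concrete (if simplified) instance of the smooth model averaging that the abstract contrasts with 0-1 selection weights, one common scheme assigns Akaike weights w_k ∝ exp(−(AIC_k − AIC_min)/2) and averages the per-model estimates. The AIC values and estimates below are hypothetical:

```python
import math

def akaike_weights(aics):
    """Normalized Akaike model weights: w_k ∝ exp(-(AIC_k - AIC_min)/2)."""
    a_min = min(aics)
    raw = [math.exp(-(a - a_min) / 2.0) for a in aics]
    total = sum(raw)
    return [r / total for r in raw]

def model_average(estimates, aics):
    """Weighted combination of per-model point estimates."""
    return sum(w * e for w, e in zip(akaike_weights(aics), estimates))

# Hypothetical: three candidate models with AICs 100, 102 and 110.
est = model_average([1.0, 1.4, 3.0], [100.0, 102.0, 110.0])
```

    A hard 0-1 selection would return 1.0 (the estimate of the lowest-AIC model); the averaged estimate is pulled slightly toward the competitive second model.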

  4. Average L-shell fluorescence, Auger, and electron yields

    International Nuclear Information System (INIS)

    Krause, M.O.

    1980-01-01

    The dependence of the average L-shell fluorescence and Auger yields on the initial vacancy distribution is shown to be small. By contrast, the average electron yield pertaining to both Auger and Coster-Kronig transitions is shown to display a strong dependence. Numerical examples are given on the basis of Krause's evaluation of subshell radiative and radiationless yields. Average yields are calculated for widely differing vacancy distributions and are intercompared graphically for 40 3 subshell yields in most cases of inner-shell ionization

  5. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation is l...

  6. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family

  7. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...

  8. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    Marinaro, A.; Scarpetta, Q.

    1984-01-01

    An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion it is pointed out that the proposed procedure for computing time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles

  9. Polyphase tertiary fold-and-thrust tectonics in the Belluno Dolomites: new mapping, kinematic analysis, and 3D modelling

    Science.gov (United States)

    Chistolini, Filippo; Bistacchi, Andrea; Massironi, Matteo; Consonni, Davide; Cortinovis, Silvia

    2014-05-01

    The Belluno Dolomites lie in the eastern sector of the Southern Alps, which corresponds to the fold-and-thrust belt at the retro-wedge of the Alpine collisional orogen. They are characterized by a complex and polyphase fold-and-thrust tectonics, highlighted by multiple thrust sheets and thrust-related folding. We have studied this tectonics in the Vajont area where a sequence of Jurassic, Cretaceous and Tertiary units have been involved in multiple deformations. The onset of contractional tectonics in this part of the Alps is constrained to be Tertiary (likely Post-Eocene) by structural relationships with the Erto Flysch, whilst in the Mesozoic tectonics was extensional. We have recognized two contractional deformation phases (D1 and D2 in the following), of which only the second was mentioned in previous studies of the area and attributed to the Miocene Neoalpine event. D1 and D2 are characterized by roughly top-to-WSW (possibly Dinaric) and top-to-S (Alpine) transport directions respectively, implying a 90° rotation of the regional-scale shortening axis, and resulting in complex thrust and fold interference and reactivation patterns. Geological mapping and detailed outcrop-scale kinematic analysis allowed us to characterize the kinematics and chronology of deformations. Particularly, relative chronology was unravelled thanks to (1) diagnostic fold interference patterns and (2) crosscutting relationships between thrust faults and thrust-related folds. A km-scale D1 syncline, filled with the Eocene Erto Flysch and "decapitated" by a D2 thrust fault, provides the best map-scale example of crosscutting relationships allowing us to reconstruct the faulting history. Due to the strong competence contrast between Jurassic carbonates and Tertiary flysch, in this syncline spectacular duplexes were also developed during D2. In order to quantitatively characterize the complex interference pattern resulting from two orthogonal thrusting and folding events, we

  10. Time average vibration fringe analysis using Hilbert transformation

    International Nuclear Information System (INIS)

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-01-01

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
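
    The HT step the abstract relies on can be sketched for a 1-D fringe profile. This is a generic illustration of phase retrieval from a single fringe pattern (with a hypothetical linear carrier phase), not the authors' processing chain:

```python
import numpy as np

def analytic_signal(s: np.ndarray) -> np.ndarray:
    """Analytic signal via the FFT: zero the negative frequencies and
    double the positive ones (numpy-only equivalent of a Hilbert-transform
    based analytic signal)."""
    m = s.size
    spec = np.fft.fft(s)
    h = np.zeros(m)
    h[0] = 1.0
    if m % 2 == 0:
        h[m // 2] = 1.0
        h[1:m // 2] = 2.0
    else:
        h[1:(m + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

x = np.linspace(0.0, 1.0, 1000)
true_phase = 12 * np.pi * x                 # hypothetical carrier phase
intensity = 0.5 + 0.4 * np.cos(true_phase)  # single recorded fringe pattern

wrapped = np.angle(analytic_signal(intensity - intensity.mean()))
unwrapped = np.unwrap(wrapped)              # recovered phase, up to a constant
```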

  11. Mycobacterium alsense sp. nov., a scotochromogenic slow grower isolated from clinical respiratory specimens

    DEFF Research Database (Denmark)

    Tortoli, Enrico; Richter, Elvira; Borroni, Emanuele

    2016-01-01

    "Mycobacterium alsiense", although reported in 2007, has not been validly published so far. The polyphasic characterization of the three strains available so far led us to the conclusion that they represent a distinct species within the genus Mycobacterium. The proposed new species grows slowly...... Mycobacterium asiaticum is the most closely related species on the basis of the 16S rRNA sequence (similarity 99.3%); the average nucleotide identity between the genomes of the two species is 80.72%, clearly below the suggested cutoff (95-96%). The name M. alsense is proposed here for the new species

  12. Safety Impact of Average Speed Control in the UK

    DEFF Research Database (Denmark)

    Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert

    2016-01-01

    of automatic speed control was point-based, but in recent years a potentially more effective alternative automatic speed control method has been introduced. This method is based upon records of drivers’ average travel speed over selected sections of the road and is normally called average speed control...... in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control....

  13. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any
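
    The weighted average depth bounded in work like the above can be computed directly for a concrete tree: it is the sum over leaves of P(reaching the leaf) times the leaf's depth. The small example tree below is hypothetical:

```python
def average_depth(node, depth=0, prob=1.0):
    """Average (probability-weighted) depth of a decision tree.
    A node is either None (a leaf) or a pair (children, probs), where
    probs[i] is the probability of branching to children[i]."""
    if node is None:
        return prob * depth
    children, probs = node
    return sum(average_depth(c, depth + 1, prob * p)
               for c, p in zip(children, probs))

# A 3-leaf tree: the root splits 0.5/0.5; the right child splits again
# 0.5/0.5. Average depth = 0.5*1 + 0.25*2 + 0.25*2 = 1.5.
tree = ([None, ([None, None], [0.5, 0.5])], [0.5, 0.5])
avg = average_depth(tree)
```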

  14. An average salary: approaches to the index determination

    Directory of Open Access Journals (Sweden)

    T. M. Pozdnyakova

    2017-01-01

    Full Text Available The article "An average salary: approaches to the index determination" is devoted to studying various methods of calculating this index, both those used by official state statistics of the Russian Federation and those offered by modern researchers. The purpose of this research is to analyze the existing approaches to calculating the average salary of employees of enterprises and organizations, as well as to make certain additions that would help to clarify this index. The information base of the research is laws and regulations of the Russian Federation Government, statistical and analytical materials of the Federal State Statistics Service of Russia for the section "Socio-economic indexes: living standards of the population", as well as materials of scientific papers describing different approaches to the average salary calculation. The data on the average salary of employees of educational institutions of the Khabarovsk region served as the experimental base of the research. In the process of conducting the research, the following methods were used: analytical, statistical, computational and graphical. The main result of the research is an option for supplementing the method of calculating the average salary index within enterprises or organizations, used by Goskomstat of Russia, by introducing a correction factor. Its essence consists in the specific formation of material indexes for different categories of employees in enterprises or organizations, mainly engaged in internal secondary jobs. The need for introducing this correction factor comes from the current reality of working conditions in a wide range of organizations, where an employee is often forced, in addition to the main position, to fulfill additional job duties. As a result, it frequently happens that the average salary at the enterprise is difficult to assess objectively, because it is built up from multiple rates per staff member. In other words, the average salary of

  15. 7 CFR 1437.11 - Average market price and payment factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Average market price and payment factors. 1437.11... ASSISTANCE PROGRAM General Provisions § 1437.11 Average market price and payment factors. (a) An average... average market price by the applicable payment factor (i.e., harvested, unharvested, or prevented planting...

  16. Anomalous behavior of q-averages in nonextensive statistical mechanics

    International Nuclear Information System (INIS)

    Abe, Sumiyoshi

    2009-01-01

    A generalized definition of average, termed the q-average, is widely employed in the field of nonextensive statistical mechanics. Recently, it has however been pointed out that such an average value may behave unphysically under specific deformations of probability distributions. Here, the following three issues are discussed and clarified. Firstly, the deformations considered are physical and may be realized experimentally. Secondly, in view of the thermostatistics, the q-average is unstable in both finite and infinite discrete systems. Thirdly, a naive generalization of the discussion to continuous systems misses a point, and a norm better than the L1-norm should be employed for measuring the distance between two probability distributions. Consequently, stability of the q-average is shown not to be established in all of the cases

  17. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
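
    The "previous decade" statistic above is a trailing moving average. A toy numpy sketch (with a synthetic series standing in for the misery index, not the paper's data) shows why a trailing-average predictor correlates better with a slowly integrating response than the raw contemporaneous values do:

```python
import numpy as np

def trailing_average(series: np.ndarray, window: int) -> np.ndarray:
    """Trailing moving average: entry t covers series[t-window+1 .. t]."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

rng = np.random.default_rng(1)
misery = rng.normal(size=120)  # synthetic annual 'misery' index

# A response that integrates the past 11 years, plus measurement noise:
literary = trailing_average(misery, 11) + 0.1 * rng.normal(size=110)

r_avg = float(np.corrcoef(literary, trailing_average(misery, 11))[0, 1])
r_raw = float(np.corrcoef(literary, misery[10:])[0, 1])
# r_avg exceeds r_raw: the moving average is the better-matched predictor.
```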

  18. Asynchronous Gossip for Averaging and Spectral Ranking

    Science.gov (United States)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
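
    For background, the classical pairwise gossip scheme that both variants depart from can be sketched as follows. The abstract's point is precisely that naive asynchronous versions may fail to reach the desired average; the idealized pairwise form below does preserve it, since each exchange keeps the sum of the two values unchanged:

```python
import random

def gossip_average(values, iterations=10_000, seed=0):
    """Classical pairwise gossip: two randomly chosen nodes repeatedly
    replace their values with the pairwise mean. The global sum is
    invariant, so all nodes converge to the global average."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(iterations):
        i, j = rng.sample(range(n), 2)
        m = (x[i] + x[j]) / 2.0
        x[i] = x[j] = m
    return x

x = gossip_average([0.0, 4.0, 8.0, 12.0])
# every node ends near the true average of the initial values, 6.0
```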

  19. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of the radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high-resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectrosc. Radiat. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  20. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.
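The brute-force baseline that this analytical method avoids is straightforward to sketch (an illustrative toy, not the paper's replica/TAP machinery): retrain or re-evaluate the statistic on many bootstrap resamples and average the results.

```python
import random

# Monte-Carlo resampling average: the expensive baseline the paper's
# analytical approximation replaces. `statistic` stands in for any
# retrained-model evaluation (function name is hypothetical).

def bootstrap_average(data, statistic, n_resamples=2000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_resamples):
        sample = [rng.choice(data) for _ in data]  # resample with replacement
        total += statistic(sample)
    return total / n_resamples
```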

  1. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  2. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
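The evaporative-demand step can be sketched with the Hargreaves-Samani (1985) formulation, which is the usual form of the Hargreaves model named in the abstract (treat this as a hedged sketch; the paper's exact calibration may differ). `Ra` is the exoatmospheric radiation expressed in mm/day of evaporation equivalent, and the water balance is precipitation minus demand.

```python
import math

# Hargreaves-Samani reference evapotranspiration (mm/day), a common
# form of the "Hargreaves model": ET0 = 0.0023 * Ra * (Tmean + 17.8)
# * sqrt(Tmax - Tmin), temperatures in degrees C.

def hargreaves_et0(t_max, t_min, ra_mm_day):
    t_mean = (t_max + t_min) / 2.0
    return 0.0023 * ra_mm_day * (t_mean + 17.8) * math.sqrt(t_max - t_min)

def monthly_water_balance(precip_mm, t_max, t_min, ra_mm_day, days=30):
    """Climatic water balance: precipitation minus atmospheric demand."""
    return precip_mm - days * hargreaves_et0(t_max, t_min, ra_mm_day)
```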

  3. Average stress in a Stokes suspension of disks

    NARCIS (Netherlands)

    Prosperetti, Andrea

    2004-01-01

    The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is

  4. Salecker-Wigner-Peres clock and average tunneling times

    International Nuclear Information System (INIS)

    Lunardi, Jose T.; Manzoni, Luiz A.; Nystrom, Andrew T.

    2011-01-01

    The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated to the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).

  5. Average wind statistics for SRP area meteorological towers

    International Nuclear Information System (INIS)

    Laurinat, J.E.

    1987-01-01

    A quality-assured set of average wind statistics for the seven SRP area meteorological towers has been calculated for the five-year period 1982--1986 at the request of DOE/SR. A similar set of statistics was previously compiled for the years 1975--1979. The updated wind statistics will replace the old statistics as the meteorological input for calculating atmospheric radionuclide doses from stack releases, and will be used in the annual environmental report. This report details the methods used to average the wind statistics and to screen out bad measurements, and presents wind roses generated from the averaged statistics.

  6. Control of underactuated driftless systems using higher-order averaging theory

    OpenAIRE

    Vela, Patricio A.; Burdick, Joel W.

    2003-01-01

    This paper applies a recently developed "generalized averaging theory" to construct stabilizing feedback control laws for underactuated driftless systems. These controls exponentially stabilize in the average; the actual system may orbit around the average. Conditions for which the orbit collapses to the averaged trajectory are given. An example validates the theory, demonstrating its utility.

  7. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  8. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).

  9. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  10. Low Wind Speed Turbine Project Phase II: The Application of Medium-Voltage Electrical Apparatus to the Class of Variable Speed Multi-Megawatt Low Wind Speed Turbines; 15 June 2004--30 April 2005

    Energy Technology Data Exchange (ETDEWEB)

    Erdman, W.; Behnke, M.

    2005-11-01

    Kilowatt ratings of modern wind turbines have progressed rapidly from 50 kW to 1,800 kW over the past 25 years, with 3.0- to 7.5-MW turbines expected in the next 5 years. The premise of this study is simple: The rapid growth of wind turbine power ratings and the corresponding growth in turbine electrical generation systems and associated controls are quickly making low-voltage (LV) electrical design approaches cost-ineffective. This report provides design detail and compares the cost of energy (COE) between commercial LV-class wind power machines and emerging medium-voltage (MV)-class multi-megawatt wind technology. The key finding is that a 2.5% reduction in the COE can be achieved by moving from LV to MV systems. This is a conservative estimate, with a 3% to 3.5% reduction believed to be attainable once purchase orders to support a 250-turbine/year production level are placed. This evaluation considers capital costs as well as installation, maintenance, and training requirements for wind turbine maintenance personnel. Subsystems investigated include the generator, pendant cables, variable-speed converter, and padmount transformer with switchgear. Both current-source and voltage-source converter/inverter MV topologies are compared against their low-voltage, voltage-source counterparts at the 3.0-, 5.0-, and 7.5-MW levels.

  11. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
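The averaging rule itself is compact: each model's prediction is weighted by its posterior probability, proportional to model evidence times prior. A minimal numerical sketch (illustrative only, not the authors' implementation):

```python
# Bayesian model averaging: weight each model's prediction by its
# normalised posterior weight, w_k ∝ evidence_k * prior_k.

def model_average(predictions, evidences, priors=None):
    if priors is None:
        priors = [1.0] * len(evidences)  # uniform prior over models
    weights = [e * p for e, p in zip(evidences, priors)]
    z = sum(weights)
    weights = [w / z for w in weights]
    return sum(w * pred for w, pred in zip(weights, predictions))
```

Because evidence already penalises complexity, the weights embody the accuracy/complexity trade-off the abstract describes.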

  12. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure...

  13. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  14. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  15. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ~50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  16. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
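The note's basic observation can be sketched numerically (a hypothetical illustration, not the authors' code): the OLS coefficient from regressing y on a constant is the arithmetic mean, and applying the same regression to log-transformed or reciprocal data, then back-transforming, recovers the geometric and harmonic means.

```python
import math

# Means via regression on a constant. The OLS estimate of b in
# y = b * 1 + e is the sample mean; transforming y first and inverting
# the transform afterwards yields the other classical averages.

def ols_constant(y):
    """OLS coefficient of y on a constant regressor (= sample mean)."""
    return sum(y) / len(y)

def arithmetic_mean(y):
    return ols_constant(y)

def geometric_mean(y):
    return math.exp(ols_constant([math.log(v) for v in y]))

def harmonic_mean(y):
    return 1.0 / ols_constant([1.0 / v for v in y])
```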

  17. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  18. GI Joe or Average Joe? The impact of average-size and muscular male fashion models on men's and women's body image and advertisement effectiveness.

    Science.gov (United States)

    Diedrichs, Phillippa C; Lee, Christina

    2010-06-01

    Increasing body size and shape diversity in media imagery may promote positive body image. While research has largely focused on female models and women's body image, men may also be affected by unrealistic images. We examined the impact of average-size and muscular male fashion models on men's and women's body image and perceived advertisement effectiveness. A sample of 330 men and 289 women viewed one of four advertisement conditions: no models, muscular, average-slim or average-large models. Men and women rated average-size models as equally effective in advertisements as muscular models. For men, exposure to average-size models was associated with more positive body image in comparison to viewing no models, but no difference was found in comparison to muscular models. Similar results were found for women. Internalisation of beauty ideals did not moderate these effects. These findings suggest that average-size male models can promote positive body image and appeal to consumers. 2010 Elsevier Ltd. All rights reserved.

  19. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  20. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  1. Serpent-COREDAX analysis of CANDU-6 time-average model

    Energy Technology Data Exchange (ETDEWEB)

    Motalab, M.A.; Cho, B.; Kim, W.; Cho, N.Z.; Kim, Y., E-mail: yongheekim@kaist.ac.kr [Korea Advanced Inst. of Science and Technology (KAIST), Dept. of Nuclear and Quantum Engineering Daejeon (Korea, Republic of)

    2015-07-01

    COREDAX-2 is a nuclear core analysis nodal code that has adopted the Analytic Function Expansion Nodal (AFEN) methodology developed in Korea. The AFEN method outperforms other conventional nodal methods in terms of accuracy. To evaluate the possibility of CANDU-type core analysis using COREDAX-2, a time-average analysis code system was developed. The two-group homogenized cross-sections were calculated using the Monte Carlo code Serpent2. A stand-alone time-average module was developed to determine the time-average burnup distribution in the core for a given fuel management strategy. The coupled Serpent-COREDAX-2 calculation converges to an equilibrium time-average model for the CANDU-6 core. (author)

  2. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

    Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

    A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of 2.5 μs and a memory capacity of 256 x 12-bit words. The number of sweeps is selectable through a front panel control in binary steps from 2^3 to 2^12. The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
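A sketch of what such an instrument computes (illustrative, not the instrument's firmware): "stable averaging" keeps a running mean after every sweep, so the display is calibrated throughout acquisition, and coherent averaging of N sweeps improves amplitude S/N by sqrt(N), which for the maximum 2^12 = 4096 sweeps gives roughly the quoted 36 dB.

```python
import math

# Stable (running-mean) averaging across sweeps and the resulting
# S/N gain in dB. Function names are hypothetical.

def stable_average(sweeps):
    avg = None
    for k, sweep in enumerate(sweeps, start=1):
        if avg is None:
            avg = list(sweep)
        else:
            # Running mean: A_k = A_{k-1} + (x_k - A_{k-1}) / k,
            # so the stored array is always the calibrated average.
            avg = [a + (x - a) / k for a, x in zip(avg, sweep)]
    return avg

def snr_gain_db(n_sweeps):
    """Coherent averaging of n sweeps improves S/N by sqrt(n)."""
    return 20.0 * math.log10(math.sqrt(n_sweeps))

# snr_gain_db(2**12) is about 36 dB, the instrument's quoted maximum.
```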

  3. Average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1979-01-01

    Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (B̄_z = 3.γ) than near midnight (B̄_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed.

  4. Coping Strategies Applied to Comprehend Multistep Arithmetic Word Problems by Students with Above-Average Numeracy Skills and Below-Average Reading Skills

    Science.gov (United States)

    Nortvedt, Guri A.

    2011-01-01

    This article discusses how 13-year-old students with above-average numeracy skills and below-average reading skills cope with comprehending word problems. Compared to other students who are proficient in numeracy and are skilled readers, these students are more disadvantaged when solving single-step and multistep arithmetic word problems. The…

  5. Polyphase tectono-magmatic and fluid history related to mantle exhumation in an ultra-distal rift domain: example of the fossil Platta domain, SE Switzerland

    Science.gov (United States)

    Epin, Marie-Eva; Manatschal, Gianreto; Amann, Méderic; Lescanne, Marc

    2017-04-01

    Despite the fact that many studies have investigated mantle exhumation at magma-poor rifted margins, there are still numerous questions concerning the 3D architecture, magmatic, fluid and thermal evolution of these ultra-distal domains that remain unexplained. Indeed, it has been observed in seismic data from ultra-distal magma-poor rifted margins that top basement is heavily structured and complex, however, the processes controlling the morpho-tectonic and magmatic evolution of these domains remain unknown. The aim of this study is to describe the 3D top basement morphology of an exhumed mantle domain, exposed over 200 km² in the fossil Platta domain in SE Switzerland, and to define the timing and processes controlling its evolution. The examined Platta nappe corresponds to a remnant of the former ultra-distal Adriatic margin of the Alpine Tethys. The rift-structures are relatively well preserved due to the weak Alpine tectonic and metamorphic overprint during the emplacement in the Alpine nappe stack. Detailed mapping of parts of the Platta nappe enabled us to document the top basement architecture of an exhumed mantle domain and to investigate its link to later, rift/oceanic structures, magmatic additions and fluids. Our observations show a polyphase and/or complex: 1) deformation history associated with mantle exhumation along low-angle exhumation faults overprinted by later high-angle normal faults, 2) top basement morphology capped by magmato-sedimentary rocks, 3) tectono-magmatic evolution that includes gabbros, emplaced at deeper levels and subsequently exhumed and overlain by younger extrusive magmatic additions, and 4) fluid history including serpentinization, calcification, hydrothermal vent, rodingitization and spilitization affecting exhumed mantle and associated magmatic rocks. The overall observations provide important information on the temporal and spatial evolution of the tectonic, magmatic and fluid systems controlling the formation of ultra

  6. Preliminary reactor physics calculations for Exxon LWR fuel testing in the power burst facility

    International Nuclear Information System (INIS)

    Olson, W.O.; Nigg, D.W.

    1981-05-01

    The PBF reactor is being considered as an irradiation facility to test LWR fuel rods for Exxon Nuclear Company. Requested test conditions are 18 kW/ft axial peak steady state power in 2.5% initial enrichment, 20,000 MWd/Tu exposed rods. Multigroup transport theory calculations (S_n and Monte Carlo) showed that this was unattainable in the standard PBF test loop. Thus, a flux multiplier was developed in the form of a Zr-2-clad 0.15-inch thick cylindrical shell of 35% enriched, 88% T.D. UO2 replacing the flow divider, surrounding the rod within the in-pile tube in PBF. With this flux multiplier installed and assuming an average water density of 0.86 g/cm³ within the test loop, a Figure of Merit (FOM) for a single-rod test assembly of 0.86 kW/ft-MW ± 5% (at the 95% confidence level) was calculated. This FOM is the axial peak linear test rod power per megawatt of reactor power. A reactor power of about 21 megawatts will therefore be required to supply the requested linear test rod axial peak heating rate of 18 kW/ft.
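The final power requirement follows directly from the figure of merit, since the FOM is linear power per megawatt of reactor power; a one-line check of the abstract's arithmetic:

```python
# Reactor power needed = target linear rod power / FOM.
# 18 kW/ft at FOM = 0.86 kW/ft-MW gives roughly 21 MW.

def required_reactor_power(target_kw_per_ft, fom_kw_per_ft_per_mw):
    return target_kw_per_ft / fom_kw_per_ft_per_mw
```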

  7. Average of delta: a new quality control tool for clinical laboratories.

    Science.gov (United States)

    Jones, Graham R D

    2016-01-01

    Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternate approach, average of delta, which combines these concepts to use the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of average of delta and average of normals functions and the effect of assay bias for different values of analytical imprecision and within- and between-subject biological variation and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings and that average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.
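The statistic itself is simple to sketch (a hypothetical illustration of the concept, not the paper's modelling spreadsheet): each delta is a patient's change from their previous result, and the average of the last n deltas across the tested population should hover near zero for a stable assay and drift when a bias is introduced.

```python
# Average of delta: combine delta checking (per-patient change from the
# previous result) with averaging across sequential patients to flag a
# change in assay performance.

def deltas(current, previous):
    """Per-patient delta values (current minus previous result)."""
    return [c - p for c, p in zip(current, previous)]

def average_of_delta(delta_values, n):
    """Mean of the most recent n delta values."""
    window = delta_values[-n:]
    return sum(window) / len(window)
```

The paper's modelling suggests n between 5 and 20 is typically optimal for bias detection.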

  8. High-Average, High-Peak Current Injector Design

    CERN Document Server

    Biedron, S G; Virgo, M

    2005-01-01

    There is increasing interest in high-average-power (>100 kW), um-range FELs. These machines require high peak current (~1 kA), modest transverse emittance, and beam energies of ~100 MeV. High average currents (~1 A) place additional constraints on the design of the injector. We present a design for an injector intended to produce the required peak currents at the injector, eliminating the need for magnetic compression within the linac. This reduces the potential for beam quality degradation due to CSR and space charge effects within magnetic chicanes.

  9. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
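The estimator analysed here is the exponentially weighted average over successive periodograms, S_k = (1-a)·S_{k-1} + a·P_k, with the smoothing constant a set by the averaging time constant. A minimal sketch (illustrative; bin-wise smoothing only, not the paper's statistical derivation):

```python
# Exponential averaging of periodograms: each frequency bin of the PSD
# estimate is an exponentially weighted moving average of the
# corresponding bins of successive periodograms.

def exponential_average(periodograms, a):
    s = None
    for p in periodograms:
        if s is None:
            s = list(p)  # initialise with the first periodogram
        else:
            s = [(1.0 - a) * sv + a * pv for sv, pv in zip(s, p)]
    return s
```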

  10. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)

  11. Manitoba Hydro 1998 progress report

    International Nuclear Information System (INIS)

    1998-11-01

    Manitoba Hydro has four commitments: 1) to integrate climate change management into its plans and operations; 2) to reduce its greenhouse gas emissions between 1991 and 2012 to more than 6% below 1990 levels; 3) it anticipates that its greenhouse gas production will be more than 40% below 1990 levels by 2010-11; and 4) it has the potential to make a greater contribution to national and international environmental and economic efforts by developing additional hydroelectric energy. Performance to date in implementing this strategy since 1990 includes: the 1330-megawatt Limestone Generating Station came into full production in 1992, increasing electricity output without additional greenhouse gas emissions; four coal-fueled generating units were removed from service at the Brandon Generating Station in 1996, reducing coal generating capacity; seven communities previously served by diesel generators have been connected to the provincial grid, reducing emissions from diesel generation; demand-side energy initiatives have saved 45 megawatts since 1990, and supply-side initiatives 152 megawatts; and net exports have increased significantly, from 2,296 megawatt hours in 1990 to 13,888 megawatt hours in 1997-98, displacing energy that would otherwise have been produced at fossil-fueled generating stations. tabs

  12. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have a substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder – an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.

  13. Fitting a function to time-dependent ensemble averaged data

    DEFF Research Database (Denmark)

    Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders

    2018-01-01

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion...... method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software....
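The weighted least squares fit with correlated errors that the record describes can be sketched as a generalized least squares estimate, here for the simplest fit function MSD(t) = 2Dt. This is an illustrative reconstruction under simplifying assumptions (1D Brownian motion, covariance estimated empirically), not the authors' WLS-ICE software:

```python
import numpy as np

rng = np.random.default_rng(0)
n_traj, n_steps, dt, D_true = 200, 50, 1.0, 0.5

# Simulate 1D Brownian trajectories; squared displacement per trajectory.
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n_traj, n_steps))
x = np.cumsum(steps, axis=1)
sq_disp = x ** 2                        # shape (n_traj, n_steps)
t = dt * np.arange(1, n_steps + 1)

msd = sq_disp.mean(axis=0)              # ensemble-averaged MSD
# Covariance of the *mean* across time points, estimated from trajectories;
# the time points of an MSD curve are strongly correlated.
cov = np.cov(sq_disp, rowvar=False) / n_traj

# Generalized least squares for MSD(t) = 2*D*t (linear in the parameter D).
X = (2.0 * t)[:, None]
cinv_X = np.linalg.solve(cov, X)
D_gls = float(cinv_X[:, 0] @ msd) / float(cinv_X[:, 0] @ X[:, 0])
print(D_gls)   # close to D_true = 0.5
```

Ignoring the off-diagonal covariance terms (ordinary weighted least squares) would misstate the parameter uncertainty, which is the correlation effect the WLS-ICE method addresses.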

  14. Average Soil Water Retention Curves Measured by Neutron Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel-by-pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents, the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
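The pixel-wise Beer-Lambert inversion described above can be sketched as follows; the attenuation coefficient, thickness, and image values are illustrative placeholders, and the beam-hardening and geometric corrections mentioned in the record are omitted:

```python
import numpy as np

def water_content(i_wet, i_dry, mu_w=3.5, thickness=1.0):
    """Pixel-wise volumetric water content from neutron transmission
    images via the Beer-Lambert law:
        I_wet = I_dry * exp(-mu_w * theta * d)
    =>  theta = -ln(I_wet / I_dry) / (mu_w * d)
    mu_w (cm^-1) and thickness d (cm) are illustrative values."""
    return -np.log(i_wet / i_dry) / (mu_w * thickness)

# Synthetic 4x4 "images": a dry-column reference and a wetted state.
i_dry = np.full((4, 4), 1000.0)
theta_true = np.linspace(0.05, 0.30, 16).reshape(4, 4)
i_wet = i_dry * np.exp(-3.5 * theta_true * 1.0)

theta = water_content(i_wet, i_dry)
rel_sat = theta / theta.max()   # normalize by the wettest state
print(theta.mean())             # recovers the imposed mean (~0.175)
```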

  15. Measurement of average radon gas concentration at workplaces

    International Nuclear Information System (INIS)

    Kavasi, N.; Somlai, J.; Kovacs, T.; Gorjanacz, Z.; Nemeth, Cs.; Szabo, T.; Varhegyi, A.; Hakl, J.

    2003-01-01

    In this paper, results of measurements of average radon gas concentration at workplaces (schools, kindergartens, and ventilated workplaces) are presented. It can be stated that one-month-long measurements show very high variation (as is obvious in the cases of the hospital cave and the uranium tailing pond). Consequently, at workplaces where considerable seasonal changes of radon concentration are expected, measurements should last 12 months. If that is not possible, the chosen six-month period should contain summer as well as winter months. The average radon concentration during working hours can differ considerably from the average over the whole time in cases of frequent opening of doors and windows or use of artificial ventilation. (authors)

  16. The flattening of the average potential in models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.

    1993-01-01

    The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)

  17. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  18. Economic potential of smaller-sized nuclear plants in today's economy

    International Nuclear Information System (INIS)

    Behrens, C.E.

    1984-01-01

    In this study, the cost of producing power was modelled for a utility with specified financial and production parameters. Two reference cases were considered: in one, it was assumed that the utility would build 400-megawatt nuclear units as necessary to meet its growth in load; in the second, that it would meet its load growth by building 1200-MW units. The smaller plants were assumed to cost 12 percent more per kilowatt than the larger units. The object was to see if the lower financing costs of the 400-megawatt units were enough to overcome the larger plants' economies of scale. In addition to the reference cases, the sensitivity of the cost measurement to changes in various parameters was modelled. The parameters tested included interest rates, fuel mix, cost differential between the 400-megawatt and 1200-megawatt plants, and the rate of growth in load. The results of these cases indicate strongly that small nuclear power plants could have a market

  19. Effect of tank geometry on its average performance

    Science.gov (United States)

    Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.

    2018-03-01

    The mathematical model of non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride is presented in the paper. Calculations are given of the average productivity, heat exchange area, and filling time of tanks of various volumes with smooth inner walls, depending on their height-to-radius (H/R) ratio, as well as of the average productivity, filling degree, and filling time of a horizontally ribbed tank of volume 6×10^-2 m^3 as the central hole diameter of the ribs is varied. It has been shown that growth of the H/R ratio in tanks with smooth inner walls up to the limiting values significantly increases average tank productivity and reduces filling time. Growth of the H/R ratio of a tank of volume 1.0 m^3 to the limiting values (in comparison with the standard tank having H/R equal to 3.49) augments tank productivity by 23.5% and the heat exchange area by 20%. Besides, we have demonstrated that maximum average productivity and minimum filling time are reached for the tank of volume 6×10^-2 m^3 with a central hole diameter of the horizontal ribs of 6.4×10^-2 m.

  20. An alternative scheme of the Bogolyubov's average method

    International Nuclear Information System (INIS)

    Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.

    1990-01-01

    In this paper the average energy and the magnetic moment conservation laws in the Drift Theory of charged particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws, and afterwards the average is performed. This scheme is more economical, from the standpoint of time and algebraic calculations, than the usual procedure of Bogolyubov's method. (Author)

  1. Influence of coma aberration on aperture averaged scintillations in oceanic turbulence

    Science.gov (United States)

    Luo, Yujuan; Ji, Xiaoling; Yu, Hong

    2018-01-01

    The influence of coma aberration on aperture averaged scintillations in oceanic turbulence is studied in detail by using the numerical simulation method. In general, in weak oceanic turbulence, the aperture averaged scintillation can be effectively suppressed by means of the coma aberration, and the aperture averaged scintillation decreases as the coma aberration coefficient increases. However, in moderate and strong oceanic turbulence the influence of coma aberration on aperture averaged scintillations can be ignored. In addition, the aperture averaged scintillation dominated by salinity-induced turbulence is larger than that dominated by temperature-induced turbulence. In particular, it is shown that for coma-aberrated Gaussian beams, the behavior of aperture averaged scintillation index is quite different from the behavior of point scintillation index, and the aperture averaged scintillation index is more suitable for characterizing scintillations in practice.

  2. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  3. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false On average. 1209.12 Section 1209.12 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... CONSUMER INFORMATION ORDER Mushroom Promotion, Research, and Consumer Information Order Definitions § 1209...

  4. A depth semi-averaged model for coastal dynamics

    Science.gov (United States)

    Antuono, M.; Colicchio, G.; Lugni, C.; Greco, M.; Brocchini, M.

    2017-05-01

    The present work extends the semi-integrated method proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)], which comprises a subset of depth-averaged equations (similar to Boussinesq-like models) and a Poisson equation that accounts for vertical dynamics. Here, the subset of depth-averaged equations has been reshaped in a conservative-like form and both the Poisson equation formulations proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)] are investigated: the former uses the vertical velocity component (formulation A) and the latter a specific depth semi-averaged variable, ϒ (formulation B). Our analyses reveal that formulation A is prone to instabilities as wave nonlinearity increases. On the contrary, formulation B allows an accurate, robust numerical implementation. Test cases derived from the scientific literature on Boussinesq-type models—i.e., solitary and Stokes wave analytical solutions for linear dispersion and nonlinear evolution and experimental data for shoaling properties—are used to assess the proposed solution strategy. It is found that the present method gives reliable predictions of wave propagation in shallow to intermediate waters, in terms of both semi-averaged variables and conservation properties.

  5. Digital Spectrometers for Interplanetary Science Missions

    Science.gov (United States)

    Jarnot, Robert F.; Padmanabhan, Sharmila; Raffanti, Richard; Richards, Brian; Stek, Paul; Werthimer, Dan; Nikolic, Borivoje

    2010-01-01

    A fully digital polyphase spectrometer recently developed by the University of California Berkeley Wireless Research Center in conjunction with the Jet Propulsion Laboratory provides a low mass, power, and cost implementation of a spectrum channelizer for submillimeter spectrometers for future missions to the Inner and Outer Solar System. The digital polyphase filter bank spectrometer (PFB) offers broad bandwidth with high spectral resolution, minimal channel-to-channel overlap, and high out-of-band rejection.

  6. The average covering tree value for directed graph games

    NARCIS (Netherlands)

    Khmelnitskaya, Anna Borisovna; Selcuk, Özer; Talman, Dolf

    We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all covering

  7. The Average Covering Tree Value for Directed Graph Games

    NARCIS (Netherlands)

    Khmelnitskaya, A.; Selcuk, O.; Talman, A.J.J.

    2012-01-01

    Abstract: We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all

  8. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER...

  9. Free Energy Self-Averaging in Protein-Sized Random Heteropolymers

    International Nuclear Information System (INIS)

    Chuang, Jeffrey; Grosberg, Alexander Yu.; Kardar, Mehran

    2001-01-01

    Current theories of heteropolymers are inherently macroscopic, but are applied to mesoscopic proteins. To compute the free energy over sequences, one assumes self-averaging -- a property established only in the macroscopic limit. By enumerating the states and energies of compact 18, 27, and 36mers on a lattice with an ensemble of random sequences, we test the self-averaging approximation. We find that fluctuations in the free energy between sequences are weak, and that self-averaging is valid at the scale of real proteins. The results validate sequence design methods which exponentially speed up computational design and simplify experimental realizations

  10. Average multiplications in deep inelastic processes and their interpretation

    International Nuclear Information System (INIS)

    Kiselev, A.V.; Petrov, V.A.

    1983-01-01

    Inclusive production of hadrons in deep inelastic processes is considered. It is shown that at high energies the jet evolution in deep inelastic processes is mainly of nonperturbative character. As the energy of the final hadron state increases, the leading contribution to the average multiplicity comes from a parton subprocess, due to the production of massive quark and gluon jets and their further fragmentation, while the diquark contribution becomes less and less essential. The ratio of the total average multiplicity in deep inelastic processes to the average multiplicity in e+e- annihilation tends to unity at high energies.

  11. Several hundred megawatt MHD units

    International Nuclear Information System (INIS)

    Pishchikov, S.; Pinkhasik, D.; Sidorov, V.

    1978-01-01

    The features are described of the future MHD unit U-25 tested at the Institute of High Temperatures of the Academy of Sciences of the USSR. The attainable thermal load of the combustion chamber is 290×10^6 kJ/m^3·h. Three types of channel were tested, i.e., the Faraday channel divided into sections with modular insulating walls, the diagonal channel without metal body, and an improved Faraday channel with an output of 20 MW. The described MHD generator is equipped with an inverter which transforms direct current into alternating current, continuously adjusts the load from no-load operation to short-circuit connection and maintains the desired electrical voltage independently of the changes in loading. A new technique of connecting and disconnecting the oxygen equipment was developed which considerably reduces the time of start-up and shut-down. Natural gas is used for heating the air heaters. All equipment used in the operation of the MHD generator is remote controlled by computer or manually. (J.B.)

  12. Several hundred megawatt MHD units

    Energy Technology Data Exchange (ETDEWEB)

    Pishchikov, S; Pinkhasik, D; Sidorov, V

    1978-07-01

    The features are described of the future MHD unit U-25 tested at the Institute of High Temperatures of the Academy of Sciences of the USSR. The attainable thermal load of the combustion chamber is 290×10^6 kJ/m^3·h. Three types of channel were tested, i.e., the Faraday channel divided into sections with modular insulating walls, the diagonal channel without metal body, and an improved Faraday channel with an output of 20 MW. The described MHD generator is equipped with an inverter which transforms direct current into alternating current, continuously adjusts the load from no-load operation to short-circuit connection and maintains the desired electrical voltage independently of the changes in loading. A new technique of connecting and disconnecting the oxygen equipment was developed which considerably reduces the time of start-up and shut-down. Natural gas is used for heating the air heaters. All equipment used in the operation of the MHD generator is remote controlled by computer or manually.

  13. High reliability megawatt transformer/rectifier

    Science.gov (United States)

    Zwass, Samuel; Ashe, Harry; Peters, John W.

    1991-01-01

    The goal of the two-phase program is to develop the technology and design and fabricate ultralightweight, high-reliability DC-to-DC converters for space power applications. The converters will operate from a 5000 V dc source and deliver 1 MW of power at 100 kV dc. The power weight density goal is 0.1 kg/kW. The cycle-to-cycle voltage stability goal was ±1 percent RMS. The converter is to operate at an ambient temperature of -40 C with 16 minute power pulses and one hour off time. The uniqueness of the design in Phase 1 resided in the dc switching array, which operates the converter at 20 kHz using Hollotron plasma switches, along with a specially designed low-loss, low-leakage-inductance, lightweight high-voltage transformer. This approach considerably reduced the number of components in the converter, thereby increasing the system reliability. To achieve an optimum transformer for this application, the design uses four 25 kV secondary windings to produce the 100 kV dc output, thus reducing the transformer leakage inductance and the ac voltage stresses. A specially designed insulation system improves the high-voltage dielectric withstand capability and reduces the insulation path thickness, thereby reducing the component weight. Tradeoff studies and tests conducted on scaled-down model circuits and on representative coil insulation paths have verified the calculated transformer wave shape parameters and the insulation system safety. In Phase 1 of the program, a converter design approach was developed and a preliminary transformer design was completed. A fault control circuit was designed, and a thermal profile of the converter was also developed.

  14. Megawatt wind turbines gaining momentum

    International Nuclear Information System (INIS)

    Oehlenschlaeger, K.; Madsen, B.T.

    1996-01-01

    Through the short history of the modern wind turbine, electric utilities have made it amply clear that they hold a preference for large-scale wind turbines over smaller ones, which is why wind turbine builders have through the years made numerous attempts to develop such machines: machines that would meet the technical, aesthetic and economic demands a customer would require. Considerable effort was put into developing such wind turbines in the early 1980s. There was the U.S. Department of Energy's MOD 1-5 program, which ranged up to 3.2 MW; Denmark's Nibe A and B 630 kW turbines and the 2 MW Tjaereborg machine; Sweden's Naesudden, 3 MW; and Germany's Growian, 3 MW. Most of these were dismal failures, though some did show the potential of MW technology. (au)

  15. 12 CFR 702.105 - Weighted-average life of investments.

    Science.gov (United States)

    2010-01-01

    ... investment funds. (1) For investments in registered investment companies (e.g., mutual funds) and collective investment funds, the weighted-average life is defined as the maximum weighted-average life disclosed, directly or indirectly, in the prospectus or trust instrument; (2) For investments in money market funds...

  16. System for evaluation of the true average input-pulse rate

    International Nuclear Information System (INIS)

    Eichenlaub, D.P.; Garrett, P.

    1977-01-01

    The description is given of a digital radiation monitoring system making use of current digital circuits and a microprocessor for rapidly processing the pulse data coming from remote radiation controllers. This system analyses the pulse rates in order to determine whether a new datum is statistically the same as those previously received, and hence determines the best possible averaging time for itself. As long as the true average pulse rate stays constant, the time used to establish an average can increase until the statistical error is under the desired level, i.e. 1%. When the digital processing of the pulse data indicates a change in the true average pulse rate, the averaging time is reduced so as to improve the response time of the system at the desired statistical error. This concept includes a fixed compromise between the statistical error and the response time [fr]
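The adaptive scheme this record describes can be sketched as follows. This is a hypothetical reconstruction, not the original monitor's firmware: the averaging window grows while readings are statistically consistent (Poisson relative error 1/sqrt(N) below 1%) and restarts when a reading deviates by more than a few standard deviations.

```python
import math

class AdaptiveRateAverager:
    """Running average of a pulse rate that lengthens its averaging
    window while the rate is steady and resets when a statistically
    significant change is detected (thresholds are illustrative)."""

    def __init__(self, err_target=0.01, n_sigma=3.0):
        self.err_target = err_target
        self.n_sigma = n_sigma
        self.counts = 0
        self.time = 0.0

    def update(self, counts, dt):
        if self.time > 0:
            # Compare the new interval's count with the running average;
            # the Poisson std of the expected count is sqrt(expected).
            expected = self.rate() * dt
            if abs(counts - expected) > self.n_sigma * math.sqrt(max(expected, 1.0)):
                self.counts, self.time = 0, 0.0   # rate changed: restart
        self.counts += counts
        self.time += dt
        return self.rate(), self.converged()

    def rate(self):
        return self.counts / self.time if self.time else 0.0

    def converged(self):
        # Relative statistical error of a Poisson count N is 1/sqrt(N).
        return self.counts > 0 and 1.0 / math.sqrt(self.counts) <= self.err_target

avg = AdaptiveRateAverager()
for _ in range(120):
    r, ok = avg.update(counts=100, dt=1.0)   # steady 100 counts/s
print(r, ok)   # 100.0 True (converged once >= 10^4 counts accumulated)
```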

  17. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. It should therefore be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by ''exact'' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality.
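For orientation, a spectrum-averaged beta energy can be computed numerically from an allowed-shape spectrum with the Fermi function set to 1. This is a generic textbook illustration of what "spectrum-averaged" means, not the paper's approximation method:

```python
import numpy as np

# Allowed-shape beta spectrum, Fermi function approximated by 1:
#   N(T) dT ∝ p * E * (Q - T)^2,  E = T + m_e c^2,  p = sqrt(E^2 - m^2).
m = 0.511                        # electron rest energy, MeV
Q = 1.0                          # endpoint (maximum) kinetic energy, MeV
T = np.linspace(1e-6, Q, 20001)  # kinetic energy grid
E = T + m
p = np.sqrt(E ** 2 - m ** 2)
N = p * E * (Q - T) ** 2         # unnormalized spectrum

T_avg = (T * N).sum() / N.sum()  # spectrum-averaged kinetic energy
print(T_avg)                     # roughly Q/3 for MeV-scale endpoints
```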

  18. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
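The quoted minimum can be checked as an exact rational number and compared against the information-theoretic lower bound log2(8!), below which no comparison tree can average:

```python
from fractions import Fraction
from math import factorial, log2

# Minimum average depth for sorting 8 elements, from the record above.
avg_depth = Fraction(620160, factorial(8))
print(float(avg_depth))              # ~15.3810 comparisons on average

# Information-theoretic lower bound: log2(8!) bits are needed to
# distinguish all 8! orderings.
lower = log2(factorial(8))
print(lower)                         # ~15.299
```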

  19. Plan averaging for multicriteria navigation of sliding window IMRT and VMAT

    International Nuclear Information System (INIS)

    Craft, David; Papp, Dávid; Unkelbach, Jan

    2014-01-01

    Purpose: To describe a method for combining sliding window plans [intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT)] for use in treatment plan averaging, which is needed for Pareto surface navigation based multicriteria treatment planning. Methods: The authors show that by taking an appropriately defined average of leaf trajectories of sliding window plans, the authors obtain a sliding window plan whose fluence map is the exact average of the fluence maps corresponding to the initial plans. In the case of static-beam IMRT, this also implies that the dose distribution of the averaged plan is the exact dosimetric average of the initial plans. In VMAT delivery, the dose distribution of the averaged plan is a close approximation of the dosimetric average of the initial plans. Results: The authors demonstrate the method on three Pareto optimal VMAT plans created for a demanding paraspinal case, where the tumor surrounds the spinal cord. The results show that the leaf averaged plans yield dose distributions that approximate the dosimetric averages of the precomputed Pareto optimal plans well. Conclusions: The proposed method enables the navigation of deliverable Pareto optimal plans directly, i.e., interactive multicriteria exploration of deliverable sliding window IMRT and VMAT plans, eliminating the need for a sequencing step after navigation and hence the dose degradation that is caused by such a sequencing step
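The exactness claim for static-beam sliding window delivery follows from linearity: with unidirectional leaves, the fluence at a point is the trailing leaf's arrival time minus the leading leaf's arrival time, so a weighted average of leaf trajectories produces exactly the weighted average of fluence maps. A toy numerical check with hypothetical trajectories:

```python
import numpy as np

x = np.linspace(0.0, 10.0, 101)        # positions across the field (cm)

def fluence(t_lead, t_trail):
    # Unidirectional sliding window: a point is exposed from the moment
    # the leading leaf passes it until the trailing leaf passes it.
    return t_trail - t_lead

# Arrival-time profiles (monotone in x) for two hypothetical plans.
plan1 = (0.5 * x, 0.5 * x + 2.0 + np.sin(x) ** 2)
plan2 = (0.6 * x, 0.6 * x + 1.0 + 0.3 * x)

w = 0.25                                # navigation weight between plans
t_lead_avg = w * plan1[0] + (1 - w) * plan2[0]
t_trail_avg = w * plan1[1] + (1 - w) * plan2[1]

avg_of_fluences = w * fluence(*plan1) + (1 - w) * fluence(*plan2)
fluence_of_avg = fluence(t_lead_avg, t_trail_avg)
print(np.max(np.abs(fluence_of_avg - avg_of_fluences)))  # ~0: linearity
```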

  20. Plan averaging for multicriteria navigation of sliding window IMRT and VMAT.

    Science.gov (United States)

    Craft, David; Papp, Dávid; Unkelbach, Jan

    2014-02-01

    To describe a method for combining sliding window plans [intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT)] for use in treatment plan averaging, which is needed for Pareto surface navigation based multicriteria treatment planning. The authors show that by taking an appropriately defined average of leaf trajectories of sliding window plans, the authors obtain a sliding window plan whose fluence map is the exact average of the fluence maps corresponding to the initial plans. In the case of static-beam IMRT, this also implies that the dose distribution of the averaged plan is the exact dosimetric average of the initial plans. In VMAT delivery, the dose distribution of the averaged plan is a close approximation of the dosimetric average of the initial plans. The authors demonstrate the method on three Pareto optimal VMAT plans created for a demanding paraspinal case, where the tumor surrounds the spinal cord. The results show that the leaf averaged plans yield dose distributions that approximate the dosimetric averages of the precomputed Pareto optimal plans well. The proposed method enables the navigation of deliverable Pareto optimal plans directly, i.e., interactive multicriteria exploration of deliverable sliding window IMRT and VMAT plans, eliminating the need for a sequencing step after navigation and hence the dose degradation that is caused by such a sequencing step.

  1. Plan averaging for multicriteria navigation of sliding window IMRT and VMAT

    Energy Technology Data Exchange (ETDEWEB)

    Craft, David, E-mail: dcraft@partners.org; Papp, Dávid; Unkelbach, Jan [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States)

    2014-02-15

    Purpose: To describe a method for combining sliding window plans [intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT)] for use in treatment plan averaging, which is needed for Pareto surface navigation based multicriteria treatment planning. Methods: The authors show that by taking an appropriately defined average of leaf trajectories of sliding window plans, the authors obtain a sliding window plan whose fluence map is the exact average of the fluence maps corresponding to the initial plans. In the case of static-beam IMRT, this also implies that the dose distribution of the averaged plan is the exact dosimetric average of the initial plans. In VMAT delivery, the dose distribution of the averaged plan is a close approximation of the dosimetric average of the initial plans. Results: The authors demonstrate the method on three Pareto optimal VMAT plans created for a demanding paraspinal case, where the tumor surrounds the spinal cord. The results show that the leaf averaged plans yield dose distributions that approximate the dosimetric averages of the precomputed Pareto optimal plans well. Conclusions: The proposed method enables the navigation of deliverable Pareto optimal plans directly, i.e., interactive multicriteria exploration of deliverable sliding window IMRT and VMAT plans, eliminating the need for a sequencing step after navigation and hence the dose degradation that is caused by such a sequencing step.

  2. Average contraction and synchronization of complex switched networks

    International Nuclear Information System (INIS)

    Wang Lei; Wang Qingguo

    2012-01-01

    This paper introduces an average contraction analysis for nonlinear switched systems and applies it to investigating the synchronization of complex networks of coupled systems with switching topology. For a general nonlinear system with a time-dependent switching law, a basic convergence result is presented according to average contraction analysis, and a special case where trajectories of a distributed switched system converge to a linear subspace is then investigated. Synchronization is viewed as the special case with all trajectories approaching the synchronization manifold, and is thus studied for complex networks of coupled oscillators with switching topology. It is shown that the synchronization of a complex switched network can be evaluated by the dynamics of an isolated node, the coupling strength and the time average of the smallest eigenvalue associated with the Laplacians of switching topology and the coupling fashion. Finally, numerical simulations illustrate the effectiveness of the proposed methods. (paper)
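The qualitative claim (synchronization under switching topology, with the average state conserved) can be illustrated with diffusively coupled integrators whose Laplacian alternates between two graphs; the topologies, coupling strength, and step size below are illustrative choices, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

def laplacian(edges, n=5):
    """Combinatorial graph Laplacian from an edge list."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

L_ring = laplacian([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
L_star = laplacian([(0, 1), (0, 2), (0, 3), (0, 4)])

# Diffusively coupled integrators, x' = -c * L(t) x, with a topology
# that switches every 100 steps; synchronization means the node states
# contract to a common value (here, the conserved average).
c, dt = 1.0, 0.01
x = rng.normal(size=5)
mean0 = x.mean()
for step in range(5000):
    L = L_ring if (step // 100) % 2 == 0 else L_star
    x = x - dt * c * (L @ x)
print(np.ptp(x))   # spread shrinks toward 0; the mean is conserved
```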

  3. DSCOVR Magnetometer Level 2 One Minute Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data

  4. DSCOVR Magnetometer Level 2 One Second Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data

  5. High-average-power diode-pumped Yb:YAG lasers

    International Nuclear Information System (INIS)

    Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B

    1999-01-01

    A scalable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high-average-power Yb:YAG lasers that utilize a rod-configured gain element. Previously, this rod-configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High-beam-quality (M^2 = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual-rod configuration consisting of two 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M^2 value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual-rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with an M^2 < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 nsec pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) Hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes. (2) Compound laser rods with flanged-nonabsorbing-endcaps fabricated by diffusion bonding. (3) Techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished barrel rods.

  6. 47 CFR 1.959 - Computation of average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Computation of average terrain elevation. 1.959 Section 1.959 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.959 Computation of average terrain elevation. Except a...

  7. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives

  8. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs are graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
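
As a sketch of the recursions described above, the first-order ARMA graph filter iterates y ← ψ·S·y + φ·x on a graph-shift operator S; for |ψ|·ρ(S) < 1 it converges to the rational response φ/(1 − ψμ) applied per graph eigenvalue μ. The graph, coefficients, and iteration count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative 5-node path graph (an assumption; any undirected graph works).
N = 5
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
L = np.diag(A.sum(axis=1)) - A

# Shift operator centred so its spectrum is roughly symmetric about zero.
lam_max = np.linalg.eigvalsh(L).max()
M = 0.5 * lam_max * np.eye(N) - L

# Stability requires |psi| * spectral_radius(M) < 1.
psi = 0.3 / np.abs(np.linalg.eigvalsh(M)).max()
phi = 1.0

x = np.random.default_rng(0).standard_normal(N)  # input graph signal
y = np.zeros(N)
for _ in range(200):                 # ARMA_1 recursion: y <- psi*M*y + phi*x
    y = psi * (M @ y) + phi * x

# The steady state applies the rational response phi/(1 - psi*mu) per mode.
mu, U = np.linalg.eigh(M)
y_exact = U @ ((phi / (1 - psi * mu)) * (U.T @ x))
print(np.allclose(y, y_exact))       # the recursion matches the ARMA response
```

The same recursion runs distributedly because each step only needs one application of the graph-shift operator, i.e., one exchange with neighbours.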

  9. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable to handling the heterogeneity of the time of failure of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.

  10. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.
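
As a rough illustration of how a CAL-type measure aggregates cohort survivorship, the sketch below sums, over all ages x, the survival to age x of the cohort born t − x years earlier. The constant-hazard mortality model with a 1%-per-birth-year improvement is a toy assumption, not the demographic schedules used in the paper.

```python
import numpy as np

ages = np.arange(0, 101)                 # ages 0..100 present at time t

def cohort_hazard(birth_year):
    """Toy constant hazard per cohort, improving 1% per birth year (assumed)."""
    return 0.02 * 0.99 ** (birth_year - 1900)

t = 2000
# CAL(t): sum over ages x of the survivorship to age x of the cohort born t - x.
cal = sum(np.exp(-cohort_hazard(t - x) * x) for x in ages)
print(round(float(cal), 2))              # a length-of-life measure in years
```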

  11. Wave function collapse implies divergence of average displacement

    OpenAIRE

    Marchewka, A.; Schuss, Z.

    2005-01-01

We show that propagating a truncated discontinuous wave function by Schrödinger's equation, as asserted by the collapse axiom, gives rise to non-existence of the average displacement of the particle on the line. It also implies that there is no Zeno effect. On the other hand, if the truncation is done so that the reduced wave function is continuous, the average coordinate is finite and there is a Zeno effect. Therefore the collapse axiom of measurement needs to be revised.

  12. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L²-norm are derived. A number of numerical examples are provided to show computational performance of the method, with the regularization parameters selected by different strategies.
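
One standard way to realize such a regularized reconstruction is Tikhonov regularization: minimize ‖Af − b‖² + α‖Df‖², where A maps grid values of f to local window averages and D is a first-difference operator. The sketch below uses an assumed grid size, window width, noise level, and α; it does not reproduce the paper's exact scheme or error bounds.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, w = 200, 40, 5                     # grid size, number of averages, half-window
x = np.linspace(0, 1, n)
f_true = np.sin(2 * np.pi * x)

A = np.zeros((m, n))                     # local-averaging operator
for i, c in enumerate(rng.integers(w, n - w, size=m)):
    A[i, c - w:c + w + 1] = 1.0 / (2 * w + 1)
b = A @ f_true + 0.01 * rng.standard_normal(m)   # noisy local averages

D = np.diff(np.eye(n), axis=0)           # first-difference (smoothness) operator
alpha = 1e-3                             # regularization parameter (assumed)
f_rec = np.linalg.solve(A.T @ A + alpha * D.T @ D, A.T @ b)

rel_err = np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true)
print(rel_err)
```

The normal-equations matrix is positive definite here because the only vector annihilated by D (a constant) is not annihilated by A, so the solve is well posed.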

  13. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  14. Very high power THz radiation at Jefferson Lab

    International Nuclear Information System (INIS)

    Carr, G.L.; Martin, Michael C.; McKinney, Wayne R.; Jordan, K.; Neil, George R.; Williams, G.P.

    2002-01-01

We report the production of high power (20 watts average, ∼1 Megawatt peak) broadband THz light based on coherent emission from relativistic electrons. We describe the source, presenting theoretical calculations and their experimental verification. For clarity we compare this source with one based on ultrafast laser techniques, and in fact the radiation has qualities closely analogous to that produced by such sources, namely that it is spatially coherent, and comprises short duration pulses with transform-limited spectral content. In contrast to conventional THz radiation, however, the intensity is many orders of magnitude greater due to the relativistic enhancement

  15. Safety-technical lay-out of the operational environment of a high-power spallation target system of the megawatt class with mercury as target material

    International Nuclear Information System (INIS)

    Butzek, M.

    2005-06-01

This thesis concerns the safety-relevant layout of the environment of a mercury-based 5-megawatt spallation target. All safety-relevant aspects of construction, operation and dismantling, as well as economic issues, were taken into account. Safety concerns are driven mainly by the toxic and radioactive inventory and by the kind and intensity of radiation produced by the spallation process. Because inventory and radiation differ significantly between a spallation source and a fission reactor, the safety philosophy of a fission reactor must not be adopted unchanged for the design of the spallation source mentioned above; rather, a systematic study of all safety-related boundary conditions is necessary. Within this thesis all safety-relevant boundary conditions for this specific type of machine are given. Besides the spatial distribution of different areas inside the target station, the influence of the media to be used as well as the arising radiation and handling requirements are discussed in detail. A general layout of the target station is presented, serving as a basis for all further component and system development. An enclosure concept for the target station was developed, taking into account the safety-relevant issues concerning the mercury used as target material, the water cooling loops containing substantial amounts of tritium, and the moderator materials, which can potentially form explosive mixtures. The concept and detailed technical layout of the enclosure system were chosen to guarantee safe operation of the source and to satisfy the requirements arising from handling needs. For the design of the shielding, different suitable materials are discussed, and a design for assembling the shielding is presented that takes into account the safety-relevant requirements during operation as well as during dismantling. The neutron beam shutters, buried inside the shielding, were designed to optimize handling and positioning of the inner parts.

  16. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...... (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...

  17. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Science.gov (United States)

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-06-04

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.

  18. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of the average work productivity per factors affecting it is conducted by means of the u-substitution method.

  19. Category structure determines the relative attractiveness of global versus local averages.

    Science.gov (United States)

    Vogel, Tobias; Carr, Evan W; Davis, Tyler; Winkielman, Piotr

    2018-02-01

Stimuli that capture the central tendency of presented exemplars are often preferred, a phenomenon also known as the classic beauty-in-averageness effect. However, recent studies have shown that this effect can reverse under certain conditions. We propose that a key variable for such ugliness-in-averageness effects is the category structure of the presented exemplars. When exemplars cluster into multiple subcategories, the global average should no longer reflect the underlying stimulus distributions, and will thereby become unattractive. In contrast, the subcategory averages (i.e., local averages) should better reflect the stimulus distributions, and become more attractive. In 3 studies, we presented participants with dot patterns belonging to 2 different subcategories. Importantly, across studies, we also manipulated the distinctiveness of the subcategories. We found that participants preferred the local averages over the global average when they first learned to classify the patterns into 2 different subcategories in a contrastive categorization paradigm (Experiment 1). Moreover, participants still preferred local averages when first classifying patterns into a single category (Experiment 2) or when not classifying patterns at all during incidental learning (Experiment 3), as long as the subcategories were sufficiently distinct. Finally, as a proof-of-concept, we mapped our empirical results onto predictions generated by a well-known computational model of category learning (the Generalized Context Model [GCM]). Overall, our findings emphasize the key role of categorization for understanding the nature of preferences, including any effects that emerge from stimulus averaging.

  20. Oppugning the assumptions of spatial averaging of segment and joint orientations.

    Science.gov (United States)

    Pierrynowski, Michael Raymond; Ball, Kevin Arthur

    2009-02-09

Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether use of the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevancy, examinations of simulated data suggest that for sets of orientation data possessing characteristics of low dispersion, an isotropic distribution, and second and third angle parameters below 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the use of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
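
A matrix-based (Euclidean, or "chordal") average of the kind advocated above can be sketched as the arithmetic mean of the rotation matrices projected back onto SO(3) with an SVD. The sample rotations below, all about a common axis, are a toy case chosen so that the chordal mean coincides with the plain angle mean; the paper's biomechanical data are not reproduced.

```python
import numpy as np

def rot_x(a):
    """Rotation matrix about the x-axis, angle in radians."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

# Toy samples: rotations of 10, 20, 30 degrees about a common axis.
samples = [rot_x(np.deg2rad(a)) for a in (10.0, 20.0, 30.0)]

# Chordal (Euclidean) mean: arithmetic matrix average projected onto SO(3).
M = np.mean(samples, axis=0)
U, _, Vt = np.linalg.svd(M)
S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # enforce det = +1
R_avg = U @ S @ Vt

angle = np.degrees(np.arctan2(R_avg[2, 1], R_avg[1, 1]))
print(round(angle, 6))   # 20.0 -- the chordal mean recovers the angle mean here
```

For widely dispersed orientations the projected mean and naive per-angle Euler averaging diverge, which is exactly the discrepancy the abstract warns about.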

  1. Analytic computation of average energy of neutrons inducing fission

    International Nuclear Information System (INIS)

    Clark, Alexander Rich

    2016-01-01

    The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.

  2. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    Science.gov (United States)

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, not enough research has compared the performance of these three types of safety studies, and few previous studies have attempted to determine whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for the real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; meanwhile, the real-time model was also used at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Asymptotic behaviour of time averages for non-ergodic Gaussian processes

    Science.gov (United States)

    Ślęzak, Jakub

    2017-08-01

    In this work, we study the behaviour of time-averages for stationary (non-ageing), but ergodicity-breaking Gaussian processes using their representation in Fourier space. We provide explicit formulae for various time-averaged quantities, such as mean square displacement, density, and analyse the behaviour of time-averaged characteristic function, which gives insight into rich memory structure of the studied processes. Moreover, we show applications of the ergodic criteria in Fourier space, determining the ergodicity of the generalised Langevin equation's solutions.

  4. Gibbs equilibrium averages and Bogolyubov measure

    International Nuclear Information System (INIS)

    Sankovich, D.P.

    2011-01-01

    Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure

  5. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  6. Should the average tax rate be marginalized?

    Czech Academy of Sciences Publication Activity Database

    Feldman, N. E.; Katuščák, Peter

    -, č. 304 (2006), s. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords : tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  7. Comparison of power pulses from homogeneous and time-average-equivalent models

    International Nuclear Information System (INIS)

    De, T.K.; Rouben, B.

    1995-01-01

    The time-average-equivalent model is an 'instantaneous' core model designed to reproduce the same three dimensional power distribution as that generated by a time-average model. However it has been found that the time-average-equivalent model gives a full-core static void reactivity about 8% smaller than the time-average or homogeneous models. To investigate the consequences of this difference in static void reactivity in time dependent calculations, simulations of the power pulse following a hypothetical large-loss-of-coolant accident were performed with a homogeneous model and compared with the power pulse from the time-average-equivalent model. The results show that there is a much smaller difference in peak dynamic reactivity than in static void reactivity between the two models. This is attributed to the fact that voiding is not complete, but also to the retardation effect of the delayed-neutron precursors on the dynamic flux shape. The difference in peak reactivity between the models is 0.06 milli-k. The power pulses are essentially the same in the two models, because the delayed-neutron fraction in the time-average-equivalent model is lower than in the homogeneous model, which compensates for the lower void reactivity in the time-average-equivalent model. (author). 1 ref., 5 tabs., 9 figs

  8. Recovery of tritium from CANDU reactors, its storage and monitoring of its migration in the environment

    International Nuclear Information System (INIS)

    Holtslander, W.J.; Osborne, R.V.

    1979-07-01

Tritium is produced in CANDU heavy water reactors mainly by neutron activation of deuterium. The typical production rate is 2.4 kCi per megawatt-year (89 TBq per megawatt-year). In Pickering Generating Station the average concentration of tritium in the moderators has reached 16 Ci·kg⁻¹ (0.6 TBq·kg⁻¹) and in coolants, 0.5 Ci·kg⁻¹ (0.02 TBq·kg⁻¹). Concentrations will continue to increase towards an equilibrium determined by the production rate, the tritium decay rate and heavy water replacement. Tritium removal methods that are being considered for a pilot plant design are catalytic exchange of DTO with D₂ and electrolysis of D₂O/DTO to provide feed for cryogenic distillation of D₂/DT/T₂. Storage methods for the removed tritium - as elemental gas, as metal hydrides and in cements - are also being investigated. Transport of tritiated wastes should not be a particularly difficult problem in light of extensive experience in transporting tritiated heavy water. Methods for determining the presence of tritium in the environment of any tritium handling facility are well established and have the capability of measuring concentrations of tritium down to current ambient values. (author)
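
A quick unit check of the quoted production rate, using only the exact definition 1 Ci = 3.7 × 10¹⁰ Bq:

```python
# Convert the quoted 2.4 kCi per megawatt-year to TBq per megawatt-year.
kCi_per_MW_yr = 2.4
TBq_per_MW_yr = kCi_per_MW_yr * 1000 * 3.7e10 / 1e12
print(round(TBq_per_MW_yr, 1))   # 88.8, consistent with the quoted ~89 TBq
```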

  9. The Average Temporal and Spectral Evolution of Gamma-Ray Bursts

    International Nuclear Information System (INIS)

    Fenimore, E.E.

    1999-01-01

We have averaged bright BATSE bursts to uncover the average overall temporal and spectral evolution of gamma-ray bursts (GRBs). We align the temporal structure of each burst by setting its duration to a standard duration, which we call T⟨Dur⟩. The observed average "aligned T⟨Dur⟩" profile for 32 bright bursts with intermediate durations (16-40 s) has a sharp rise (within the first 20% of T⟨Dur⟩) and then a linear decay. Exponentials and power laws do not fit this decay. In particular, the power law seen in the X-ray afterglow (∝ T^(-1.4)) is not observed during the bursts, implying that the X-ray afterglow is not just an extension of the average temporal evolution seen during the gamma-ray phase. The average burst spectrum has a low-energy slope of -1.03, a high-energy slope of -3.31, and a peak in the νF_ν distribution at 390 keV. We determine the average spectral evolution. Remarkably, it is also a linear function, with the peak of the νF_ν distribution given by ∼680-600(T/T⟨Dur⟩) keV. Since both the temporal profile and the peak energy are linear functions, on average, the peak energy is linearly proportional to the intensity. This behavior is inconsistent with the external shock model. The observed temporal and spectral evolution is also inconsistent with that expected from variations in just a Lorentz factor. Previously, trends have been reported for GRB evolution, but our results are quantitative relationships that models should attempt to explain. © 1999 The American Astronomical Society

  10. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic

  11. Is average daily travel time expenditure constant? In search of explanations for an increase in average travel time.

    NARCIS (Netherlands)

    van Wee, B.; Rietveld, P.; Meurs, H.

    2006-01-01

    Recent research suggests that the average time spent travelling by the Dutch population has increased over the past decades. However, different data sources show different levels of increase. This paper explores possible causes for this increase. They include a rise in incomes, which has probably

  12. Yearly, seasonal and monthly daily average diffuse sky radiation models

    International Nuclear Information System (INIS)

    Kassem, A.S.; Mujahid, A.M.; Turner, D.W.

    1993-01-01

A daily average diffuse sky radiation regression model based on daily global radiation was developed utilizing two years of data taken near Blytheville, Arkansas (Lat. = 35.9°N, Long. = 89.9°W), U.S.A. The model has a determination coefficient of 0.91 and 0.092 standard error of estimate. The data were also analyzed for a seasonal dependence and four seasonal average daily models were developed for the spring, summer, fall and winter seasons. The coefficient of determination is 0.93, 0.81, 0.94 and 0.93, whereas the standard error of estimate is 0.08, 0.102, 0.042 and 0.075 for spring, summer, fall and winter, respectively. A monthly average daily diffuse sky radiation model was also developed. The coefficient of determination is 0.92 and the standard error of estimate is 0.083. A seasonal monthly average model was also developed which has 0.91 coefficient of determination and 0.085 standard error of estimate. The developed monthly daily average and daily models compare well with a selected number of previously developed models. (author). 11 ref., figs., tabs
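
The kind of regression behind such models can be sketched with ordinary least squares: fit daily diffuse radiation against daily global radiation and report the coefficient of determination. The data points below are synthetic placeholders, not the Blytheville measurements.

```python
import numpy as np

# Synthetic placeholder data: daily global and diffuse radiation, MJ/m^2/day.
H_global = np.array([8.0, 12.0, 16.0, 20.0, 24.0])
H_diffuse = np.array([5.1, 6.0, 6.8, 7.4, 8.3])

# Ordinary least squares fit: H_diffuse ~ slope * H_global + intercept.
A = np.vstack([H_global, np.ones_like(H_global)]).T
slope, intercept = np.linalg.lstsq(A, H_diffuse, rcond=None)[0]

pred = slope * H_global + intercept
ss_res = np.sum((H_diffuse - pred) ** 2)
ss_tot = np.sum((H_diffuse - H_diffuse.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot               # coefficient of determination
print(slope, r2)
```

Splitting the data by season or month before fitting, as the paper does, simply repeats this fit on each subset.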

  13. Parameterization of Time-Averaged Suspended Sediment Concentration in the Nearshore

    Directory of Open Access Journals (Sweden)

    Hyun-Doug Yoon

    2015-11-01

To quantify the effect of wave breaking turbulence on sediment transport in the nearshore, the vertical distribution of time-averaged suspended sediment concentration (SSC) in the surf zone was parameterized in terms of the turbulent kinetic energy (TKE) at different cross-shore locations, including the bar crest, bar trough, and inner surf zone. Using data from a large-scale laboratory experiment, a simple relationship was developed between the time-averaged SSC and the time-averaged TKE. The vertical variation of the time-averaged SSC was fitted to an equation analogous to the turbulent dissipation rate term. At the bar crest, the proposed equation was slightly modified to incorporate the effect of near-bed sediment processes and yielded reasonable agreement. This parameterization yielded the best agreement at the bar trough, with a coefficient of determination R² ≥ 0.72 above the bottom boundary layer. The time-averaged SSC in the inner surf zone showed good agreement near the bed but poor agreement near the water surface, suggesting that there is a different sedimentation mechanism that controls the SSC in the inner surf zone.

  14. 42 CFR 100.2 - Average cost of a health insurance policy.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Average cost of a health insurance policy. 100.2... VACCINE INJURY COMPENSATION § 100.2 Average cost of a health insurance policy. For purposes of determining..., less certain deductions. One of the deductions is the average cost of a health insurance policy, as...

  15. Annual average equivalent dose of workers from health area

    International Nuclear Information System (INIS)

    Daltro, T.F.L.; Campos, L.L.

    1992-01-01

The personnel monitoring data from 1985 to 1991 for workers in the health area were studied, giving a general overview of the change in the annual average equivalent dose. Two different aspects are presented: the analysis of the annual average equivalent dose in the different sectors of a hospital, and the comparison of these doses for the same sectors in different hospitals. (C.G.C.)

  16. Time averaging, ageing and delay analysis of financial time series

    Science.gov (United States)

    Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf

    2017-06-01

    We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
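
The (delay) time-averaged MSD underlying these strategies, δ²(Δ) = (1/(T−Δ)) Σ (X(t+Δ) − X(t))², can be sketched for a simulated geometric Brownian motion, whose log-price is Brownian motion with drift; its TAMSD then grows linearly with the lag. The drift, volatility, and step size below are illustrative assumptions, not fitted market parameters.

```python
import numpy as np

def tamsd(x, lag):
    """Time-averaged mean squared displacement at a given lag (in steps)."""
    d = x[lag:] - x[:-lag]
    return np.mean(d * d)

# Simulated geometric Brownian motion: the log-price is Brownian motion
# with drift.  mu, sigma, dt and the path length are assumptions.
rng = np.random.default_rng(42)
mu, sigma, dt, n = 0.05, 0.2, 1e-3, 100_000
steps = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
log_price = np.cumsum(steps)

lags = [1, 10, 100]
msd = [tamsd(log_price, lag) for lag in lags]
# For Brownian log-prices the TAMSD grows linearly in the lag:
print([m / msd[0] for m in msd])     # approximately [1, 10, 100]
```

The ageing and delay variants in the paper restrict the same sum to different fractions or offsets of the time series.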

  17. A virtual pebble game to ensemble average graph rigidity.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2015-01-01

    The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies are sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most
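
Maxwell constraint counting as described above is a one-line bound: in a body-bar network each body carries 6 degrees of freedom, each bar removes at most one, and the 6 global rigid-body motions are discounted. The chain example below is illustrative, not from the paper's test set.

```python
# Maxwell constraint counting (MCC) for a body-bar network.
def maxwell_count(n_bodies, n_bars):
    """Mean-field lower bound on internal DOF: 6 per body, minus one per
    bar, minus the 6 global rigid-body motions."""
    return 6 * n_bodies - n_bars - 6

# A chain of 10 bodies joined by 9 connections of 5 bars each: every joint
# leaves one hinge-like rotation, so 9 internal DOF are expected.
print(maxwell_count(10, 9 * 5))   # 9
```

Because MCC distributes constraints with perfect uniform density, it can misjudge networks whose constraints cluster spatially; that is the gap the VPG closes.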

  18. 47 CFR 64.1801 - Geographic rate averaging and rate integration.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Geographic rate averaging and rate integration. 64.1801 Section 64.1801 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) MISCELLANEOUS RULES RELATING TO COMMON CARRIERS Geographic Rate Averaging and...

  19. The B-dot Earth Average Magnetic Field

    Science.gov (United States)

    Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon

    2013-01-01

    The average Earth's magnetic field is usually solved for with complex mathematical models based on a mean-square integral, and depending on the Earth magnetic model selected, the average field can take different values. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and does not depend on the Earth magnetic model; it does, however, depend on the magnetic torquers of the satellite, which are not taken into consideration in the known mathematical models. The solution obtained with this new technique can be implemented so easily that the flight software can be updated during flight, giving the control system up-to-date gains for the magnetic torquers. Finally, the technique is verified and validated using flight data from a satellite that has been in orbit for three years.

  20. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Consider an arbitrary nonnegative deterministic process {X(t), t≥0} (in a stochastic setting, a fixed realization, i.e., sample path, of the underlying stochastic process) with state space S=(-∞,∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to equal the expectation of the same measurable function taken with respect to its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes, and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results offer the choice of working with either the time average of a process or its frequency distribution function, moving back and forth between the two under a mild condition.

  1. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_{uu}, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_{uuu···u} ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  2. A precise measurement of the average b hadron lifetime

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; 
Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; 
Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1996-01-01

    An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.

  3. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.; Chikalov, Igor; Moshkov, Mikhail

    2015-01-01

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We also show that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees

  4. Deblurring of class-averaged images in single-particle electron microscopy

    International Nuclear Information System (INIS)

    Park, Wooram; Chirikjian, Gregory S; Madden, Dean R; Rockmore, Daniel N

    2010-01-01

    This paper proposes a method for the deblurring of class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, images that are nominally identical projection images are often grouped, aligned and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. An inaccurate class average due to alignment errors can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that estimates the underlying clear image from a blurred class-averaged image using precomputed statistics of misalignment. Since this convolution is over the group of rigid-body motions of the plane, SE(2), we use the Fourier transform for SE(2) in order to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion via the Laguerre–Fourier expansions, and both the Hermite and Laguerre–Fourier expansions retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm the performance of our method.

  5. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  6. Estimating average glandular dose by measuring glandular rate in mammograms

    International Nuclear Information System (INIS)

    Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru

    2003-01-01

    The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)

  7. An application of commercial data averaging techniques in pulsed photothermal experiments

    International Nuclear Information System (INIS)

    Grozescu, I.V.; Moksin, M.M.; Wahab, Z.A.; Yunus, W.M.M.

    1997-01-01

    We present an application of a data averaging technique commonly implemented in many commercial digital oscilloscopes and waveform digitizers. The technique was used for transient data averaging in pulsed photothermal radiometry experiments. Photothermal signals are accompanied by a significant amount of noise, which affects the precision of the measurements. The effect of the noise level on the photothermal signal parameter of interest in our particular case, the fitted decay time, is shown. The results of the analysis can be used in choosing the most effective averaging technique and in estimating the averaging parameter values. This helps to reduce the data acquisition time while improving the signal-to-noise ratio.
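
    The benefit of transient averaging can be illustrated with a toy simulation (a generic sketch, not the instrument software): averaging N independent noisy records leaves the signal unchanged while the noise standard deviation falls roughly as 1/sqrt(N).

```python
import random
import statistics

def average_transients(n_records, n_samples, signal=1.0, noise_sd=0.5, rng=None):
    """Average n_records noisy copies of a constant 'transient' of
    n_samples points. The residual noise std of the averaged record
    should fall roughly as noise_sd / sqrt(n_records)."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    avg = [0.0] * n_samples
    for _ in range(n_records):
        for i in range(n_samples):
            avg[i] += (signal + rng.gauss(0.0, noise_sd)) / n_records
    return avg

single = average_transients(1, 2000)    # one raw record, noise sd ~0.5
many = average_transients(100, 2000)    # 100-record average, sd ~0.05
print(statistics.stdev(single), statistics.stdev(many))
```

    With 100 averaged records the residual noise drops by roughly a factor of ten, which is the trade-off against acquisition time the abstract discusses.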

  8. General and Local: Averaged k-Dependence Bayesian Classifiers

    Directory of Open Access Journals (Sweden)

    Limin Wang

    2015-06-01

    The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although a k-dependence Bayesian (KDB) classifier can be constructed at arbitrary points (values of k) along the attribute dependence spectrum, it cannot identify changes in interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalizations to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, the averaged k-dependence Bayesian (AKDB) classifier, averages the output of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California Irvine (UCI) show that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree-augmented naive Bayes (TAN), averaged one-dependence estimators (AODE), and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.
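
    The final averaging step, in which AKDB combines the KDB and local KDB outputs, amounts to mixing two predictive class-probability distributions. A minimal sketch with hypothetical probability tables (not the authors' implementation, which averages full KDB posteriors):

```python
def average_posteriors(p_global, p_local, weight=0.5):
    """Average two class-probability distributions (e.g. a global KDB
    model and a local KDB model) and return the combined distribution
    together with its argmax class label."""
    combined = {c: weight * p_global[c] + (1 - weight) * p_local[c]
                for c in p_global}
    return combined, max(combined, key=combined.get)

# The global model slightly favours 'spam'; the local model, fitted to
# this test instance's dependencies, strongly favours 'ham'.
dist, label = average_posteriors({"spam": 0.55, "ham": 0.45},
                                 {"spam": 0.20, "ham": 0.80})
print(dist, label)
```

    Averaging the two posteriors is what gives the variance reduction noted at the end of the abstract: errors of the global and local models partially cancel.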

  9. Runoff and leaching of metolachlor from Mississippi River alluvial soil during seasons of average and below-average rainfall.

    Science.gov (United States)

    Southwick, Lloyd M; Appelboom, Timothy W; Fouss, James L

    2009-02-25

    The movement of the herbicide metolachlor [2-chloro-N-(2-ethyl-6-methylphenyl)-N-(2-methoxy-1-methylethyl)acetamide] via runoff and leaching from 0.21 ha plots planted to corn on Mississippi River alluvial soil (Commerce silt loam) was measured for a 6-year period, 1995-2000. The first three years received normal rainfall (30 year average); the second three years experienced reduced rainfall. The 4-month periods prior to application plus the following 4 months after application were characterized by 1039 ± 148 mm of rainfall for 1995-1997 and by 674 ± 108 mm for 1998-2000. During the normal rainfall years 216 ± 150 mm of runoff occurred during the study seasons (4 months following herbicide application), accompanied by 76.9 ± 38.9 mm of leachate. For the low-rainfall years these amounts were 16.2 ± 18.2 mm of runoff (92% less than the normal years) and 45.1 ± 25.5 mm of leachate (41% less than the normal seasons). Runoff of metolachlor during the normal-rainfall seasons was 4.5-6.1% of application, whereas leaching was 0.10-0.18%. For the below-normal periods, these losses were 0.07-0.37% of application in runoff and 0.22-0.27% in leachate. When averages over the three normal and the three less-than-normal seasons were taken, a 35% reduction in rainfall was characterized by a 97% reduction in runoff loss and a 71% increase in leachate loss of metolachlor on a percent of application basis. The data indicate an increase in preferential flow in the leaching movement of metolachlor from the surface soil layer during the reduced rainfall periods. Even with increased preferential flow through the soil during the below-average rainfall seasons, leachate loss (percent of application) of the herbicide remained below 0.3%. Compared to the average rainfall seasons of 1995-1997, the below-normal seasons of 1998-2000 were characterized by a 79% reduction in total runoff and leachate flow and by a 93% reduction in corresponding metolachlor movement via these routes.

  10. Average nuclear surface properties

    International Nuclear Information System (INIS)

    Groote, H. von.

    1979-01-01

    The definition of the nuclear surface energy is discussed for semi-infinite matter. The definition is also extended to the case in which there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The interaction parameters of this model were determined by a least-squares fit to experimental masses, and the quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry, past the neutron-drip line, up to the point where the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)

  11. Americans' Average Radiation Exposure

    International Nuclear Information System (INIS)

    2000-01-01

    We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We are also exposed to man-made sources of radiation, including medical and dental treatments, television sets, and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  12. The average action for scalar fields near phase transitions

    International Nuclear Information System (INIS)

    Wetterich, C.

    1991-08-01

    We compute the average action for fields in two, three and four dimensions, including the effects of wave function renormalization. A study of the one loop evolution equations for the scale dependence of the average action gives a unified picture of the qualitatively different behaviour in various dimensions for discrete as well as abelian and nonabelian continuous symmetry. The different phases and the phase transitions can be inferred from the evolution equation. (orig.)

  13. Average geodesic distance of skeleton networks of Sierpinski tetrahedron

    Science.gov (United States)

    Yang, Jinjin; Wang, Songjing; Xi, Lifeng; Ye, Yongchao

    2018-04-01

    The average distance is of interest in the study of complex networks and is related to the Wiener sum, a topological invariant in chemical graph theory. In this paper, we study the skeleton networks of the Sierpinski tetrahedron, an important self-similar fractal, and obtain an asymptotic formula for their average distances. To derive the formula, we develop a technique based on finite patterns of the integral of geodesic distance with respect to the self-similar measure of the Sierpinski tetrahedron.

  14. Bounding quantum gate error rate based on reported average fidelity

    International Nuclear Information System (INIS)

    Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)

  15. Are average and symmetric faces attractive to infants? Discrimination and looking preferences.

    Science.gov (United States)

    Rhodes, Gillian; Geddes, Keren; Jeffery, Linda; Dziurawiec, Suzanne; Clark, Alison

    2002-01-01

    Young infants prefer to look at faces that adults find attractive, suggesting a biological basis for some face preferences. However, the basis for infant preferences is not known. Adults find average and symmetric faces attractive. We examined whether 5-8-month-old infants discriminate between different levels of averageness and symmetry in faces, and whether they prefer to look at faces with higher levels of these traits. Each infant saw 24 pairs of female faces. Each pair consisted of two versions of the same face differing either in averageness (12 pairs) or symmetry (12 pairs). Data from the mothers confirmed that adults preferred the more average and more symmetric versions in each pair. The infants were sensitive to differences in both averageness and symmetry, but showed no looking preference for the more average or more symmetric versions. On the contrary, longest looks were significantly longer for the less average versions, and both longest looks and first looks were marginally longer for the less symmetric versions. Mean looking times were also longer for the less average and less symmetric versions, but those differences were not significant. We suggest that the infant looking behaviour may reflect a novelty preference rather than an aesthetic preference.

  16. Fission neutron spectrum averaged cross sections for threshold reactions on arsenic

    International Nuclear Information System (INIS)

    Dorval, E.L.; Arribere, M.A.; Kestelman, A.J.; Comision Nacional de Energia Atomica, Cuyo Nacional Univ., Bariloche; Ribeiro Guevara, S.; Cohen, I.M.; Ohaco, R.A.; Segovia, M.S.; Yunes, A.N.; Arrondo, M.; Comision Nacional de Energia Atomica, Buenos Aires

    2006-01-01

    We have measured the cross sections, averaged over a 235U fission neutron spectrum, for the two high-threshold reactions 75As(n,p)75mGe and 75As(n,2n)74As. The measured averaged cross sections are 0.292±0.022 mb, referred to the 3.95±0.20 mb standard for the 27Al(n,p)27Mg averaged cross section, and 0.371±0.032 mb, referred to the 111±3 mb standard for the 58Ni(n,p)58m+gCo averaged cross section, respectively. The measured averaged cross sections were also evaluated semi-empirically by numerically integrating experimental differential cross section data extracted for both reactions from the current literature. The calculations were performed for four different representations of the thermal-neutron-induced 235U fission neutron spectrum. The calculated cross sections, though depending on the analytical representation of the flux, agree with the measured values within the estimated uncertainties. (author)
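
    A spectrum-averaged cross section of this kind is the flux-weighted mean, σ̄ = ∫σ(E)χ(E)dE / ∫χ(E)dE, over the fission spectrum χ(E). The sketch below performs this integration for an arbitrary σ(E) using a Watt representation of the thermal-fission 235U spectrum; the parameters a ≈ 0.988 MeV and b ≈ 2.249 MeV⁻¹ are one common parameterization, assumed here purely for illustration (the paper compares four such representations).

```python
import math

def watt(e, a=0.988, b=2.249):
    """Un-normalized Watt fission spectrum chi(E) ~ exp(-E/a)*sinh(sqrt(b*E)),
    E in MeV; a, b are an assumed parameterization for thermal-fission 235U."""
    return math.exp(-e / a) * math.sinh(math.sqrt(b * e))

def spectrum_average(sigma, e_max=20.0, n=20000):
    """Trapezoidal estimate of <sigma> = int sigma*chi dE / int chi dE."""
    de = e_max / n
    num = den = 0.0
    for i in range(n + 1):
        e = i * de
        w = 0.5 if i in (0, n) else 1.0  # trapezoid end-point weights
        chi = watt(e)
        num += w * sigma(e) * chi
        den += w * chi
    return num / den

# Sanity check: with sigma(E) = E this returns the mean energy of the
# Watt spectrum, which should come out near 2 MeV.
print(spectrum_average(lambda e: e))
```

    Replacing the lambda with an evaluated excitation function σ(E) for, say, a threshold reaction reproduces the semi-empirical averaging procedure described in the abstract.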

  17. Essays on model averaging and political economics

    NARCIS (Netherlands)

    Wang, W.

    2013-01-01

    This thesis first investigates various issues related to model averaging, and then evaluates two policies, i.e. the West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple

  18. 40 CFR 63.652 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... emissions more than the reference control technology, but the combination of the pollution prevention... emissions average. This must include any Group 1 emission points to which the reference control technology... agrees has a higher nominal efficiency than the reference control technology. Information on the nominal...

  19. Application of Bayesian approach to estimate average level spacing

    International Nuclear Information System (INIS)

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

    A method is given to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach. By using the information contained in the distributions of both the level spacings and the neutron widths, the levels missing from the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained with this method. The calculation for s-wave resonances has been carried out and a comparison with other work is presented.
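
    The level-spacing distribution that enters such missing-level corrections is commonly taken to be the Wigner surmise, P(s) = (π/2) s exp(-πs²/4), with the spacing s measured in units of the average spacing. A quick numerical check (illustrative only, not the paper's Bayesian machinery) that this density is normalized and has unit mean:

```python
import math

def wigner(s):
    """Wigner surmise for nearest-neighbour level spacings (GOE),
    with s in units of the local average spacing."""
    return 0.5 * math.pi * s * math.exp(-math.pi * s * s / 4.0)

def trapz(f, a, b, n=100000):
    """Simple trapezoidal quadrature of f over [a, b]."""
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b)
                + sum(f(a + i * h) for i in range(1, n)))

norm = trapz(wigner, 0.0, 10.0)                    # should be ~1
mean = trapz(lambda s: s * wigner(s), 0.0, 10.0)   # should be ~1
print(norm, mean)
```

    The strong suppression of small spacings (level repulsion) is what makes closely spaced, easily missed levels statistically detectable in the measured sample.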

  20. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Science.gov (United States)

    Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien

    2018-04-01

    The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to reach a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The biases induced by the non-linear IPDA lidar equation are not compliant with the accuracy requirements. This paper analyzes these averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.
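
    The averaging bias at issue comes from the nonlinearity of the IPDA lidar equation: the retrieval takes a logarithm of averaged signal ratios, and ln E[X] ≠ E[ln X] for noisy X (Jensen's inequality). A toy demonstration with simulated shot-to-shot noise (a generic sketch with made-up numbers, not the MERLIN processing chain):

```python
import math
import random

rng = random.Random(42)
true_ratio = 0.8  # noiseless on/off signal ratio (arbitrary value)
# 50 000 simulated shots with additive Gaussian noise on the ratio:
shots = [true_ratio + rng.gauss(0.0, 0.1) for _ in range(50000)]

# Two orders of operations that the nonlinearity makes inequivalent:
log_of_mean = math.log(sum(shots) / len(shots))             # average, then log
mean_of_log = sum(math.log(x) for x in shots) / len(shots)  # log, then average

# Jensen's inequality: log(E[X]) >= E[log X]; the gap (~sigma^2 / 2 mu^2
# here) is the averaging bias that must be corrected.
print(log_of_mean, mean_of_log, log_of_mean - mean_of_log)
```

    The gap scales with the shot-noise variance, which is why a bias correction is needed once signals are averaged over 50 km before applying the lidar equation.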

  1. Average cross sections for the 252Cf neutron spectrum

    International Nuclear Information System (INIS)

    Dezso, Z.; Csikai, J.

    1977-01-01

    A number of average cross sections have been measured for 252Cf neutrons in (n,γ), (n,p), (n,2n) and (n,α) reactions by the activation method, and for fission by a fission chamber. Cross sections have been determined for 19 elements and 45 reactions. The (n,γ) cross section values lie in the interval from 0.3 to 200 mb; as a function of target neutron number the data increase up to about N=60, with minima near closed shells. The (n,p) values lie between 0.3 mb and 113 mb; these cross sections decrease significantly with increasing threshold energy. The (n,2n) values are below 20 mb, and the (n,α) data do not exceed 10 mb. Average (n,p) cross sections as a function of the threshold energy and average fission cross sections as a function of Z^(4/3)/A are shown. The results obtained are summarized in tables.

  2. Perceived Average Orientation Reflects Effective Gist of the Surface.

    Science.gov (United States)

    Cha, Oakyoon; Chong, Sang Chul

    2018-03-01

    The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.

  3. Closed cycle electric discharge laser design investigation

    Science.gov (United States)

    Baily, P. K.; Smith, R. C.

    1978-01-01

    Closed cycle CO2 and CO electric discharge lasers were studied. An analytical investigation assessed scale-up parameters and design features for CO2, closed cycle, continuous wave, unstable resonator, electric discharge lasing systems operating in space and airborne environments. A space based CO system was also examined. The program objectives were the conceptual designs of six CO2 systems and one CO system. Three airborne CO2 designs, with one, five, and ten megawatt outputs, were produced. These designs were based upon five minute run times. Three space based CO2 designs, with the same output levels, were also produced, but based upon one year run times. In addition, a conceptual design for a one megawatt space based CO laser system was also produced. These designs include the flow loop, compressor, and heat exchanger, as well as the laser cavity itself. The designs resulted in a laser loop weight for the space based five megawatt system that is within the space shuttle capacity. For the one megawatt systems, the estimated weight of the entire system including laser loop, solar power generator, and heat radiator is less than the shuttle capacity.

  4. The Value and Feasibility of Farming Differently Than the Local Average

    OpenAIRE

    Morris, Cooper; Dhuyvetter, Kevin; Yeager, Elizabeth A; Regier, Greg

    2018-01-01

    The purpose of this research is to quantify the value of being different than the local average and feasibility of distinguishing particular parts of an operation from the local average. Kansas crop farms are broken down by their farm characteristics, production practices, and management performances. An ordinary least squares regression model is used to quantify the value of having different than average characteristics, practices, and management performances. The degree farms have distingui...

  5. Resident characterization of better-than- and worse-than-average clinical teaching.

    Science.gov (United States)

    Haydar, Bishr; Charnin, Jonathan; Voepel-Lewis, Terri; Baker, Keith

    2014-01-01

    Clinical teachers and trainees share a common view of what constitutes excellent clinical teaching, but associations between these behaviors and high teaching scores have not been established. This study used residents' written feedback to their clinical teachers to identify themes associated with above- or below-average teaching scores. All resident evaluations of their clinical supervisors in a single department were collected from January 1, 2007 until December 31, 2008. A mean teaching score assigned by each resident was calculated, and evaluations that were 20% higher or 15% lower than the resident's mean score were used. A subset of these evaluations was reviewed, generating a list of 28 themes for further study. Two researchers then independently coded the presence or absence of these themes in each evaluation. Interrater reliability of the themes and logistic regression were used to evaluate the predictive associations of the themes with above- or below-average evaluations. Five hundred twenty-seven above-average and 285 below-average evaluations were examined for the presence or absence of 15 positive themes and 13 negative themes, divided into four categories: teaching, supervision, interpersonal, and feedback. Thirteen of 15 positive themes correlated with above-average evaluations, and nine had high interrater reliability (intraclass correlation coefficient >0.6). Twelve of 13 negative themes correlated with below-average evaluations, and all had high interrater reliability. On the basis of the themes identified from these evaluations, the authors developed 13 recommendations for clinical teachers.

  6. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models in which some agents employ technical trading rules of this type.
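
    As a concrete illustration of the kind of technical trading rule analyzed in such models, a minimal moving-average crossover rule can be sketched as follows (a hedged sketch: the function name, window lengths, and trend series are illustrative choices, not the chartist rule of the paper's model):

```python
import numpy as np

def ma_signal(prices, short=5, long=20):
    """Return +1 (long) / -1 (short) positions from a moving-average
    crossover rule: go long when the short MA is above the long MA."""
    prices = np.asarray(prices, dtype=float)

    def sma(x, w):
        # trailing simple moving average (expanding window at the start),
        # same length as x
        c = np.cumsum(np.insert(x, 0, 0.0))
        out = np.empty_like(x)
        for i in range(len(x)):
            lo = max(0, i - w + 1)
            out[i] = (c[i + 1] - c[lo]) / (i + 1 - lo)
        return out

    return np.where(sma(prices, short) >= sma(prices, long), 1, -1)

# on a steadily rising price series the short MA stays above the long MA,
# so the rule ends up holding a long position
trend = np.arange(1.0, 41.0)
print(ma_signal(trend)[-1])  # 1
```

A trailing (rather than centered) average is used so that each position depends only on past prices, as a trading rule must.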

  7. Domain-averaged Fermi-hole Analysis for Solids

    Czech Academy of Sciences Publication Activity Database

    Baranov, A.; Ponec, Robert; Kohout, M.

    2012-01-01

    Roč. 137, č. 21 (2012), s. 214109 ISSN 0021-9606 R&D Projects: GA ČR GA203/09/0118 Institutional support: RVO:67985858 Keywords : bonding in solids * domain averaged fermi hole * natural orbitals Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.164, year: 2012

  8. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter, so it cannot extract specific harmonics, such as those caused by faults like gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed, but they cannot completely eliminate the waveform reconstruction error caused by PCE. To overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency-domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and it further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
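
    The conventional TDA that FTDA improves upon can be sketched in a few lines (a hedged illustration assuming the period is an exact integer number of samples; the period cutting error arises precisely when this assumption fails):

```python
import numpy as np

def time_domain_average(signal, period):
    """Classical TDA: slice the signal into whole periods (in samples)
    and average them, attenuating components not synchronous with the period."""
    n = (len(signal) // period) * period
    return signal[:n].reshape(-1, period).mean(axis=0)

rng = np.random.default_rng(0)
period = 64
t = np.arange(period * 200)
periodic = np.sin(2 * np.pi * t / period)          # synchronous component
noisy = periodic + rng.normal(0.0, 1.0, t.size)    # buried in unit-variance noise

avg = time_domain_average(noisy, period)

# after averaging 200 periods the residual noise drops by ~sqrt(200),
# so the averaged cycle is close to one clean cycle of the periodic part
err = np.abs(avg - np.sin(2 * np.pi * np.arange(period) / period)).max()
print(err < 0.5)  # True
```
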

  9. 19 CFR 10.310 - Election to average for motor vehicles.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 1 2010-04-01 2010-04-01 false Election to average for motor vehicles. 10.310... Free Trade Agreement § 10.310 Election to average for motor vehicles. (a) Election. In determining whether a motor vehicle is originating for purposes of the preferences under the Agreement or a Canadian...

  10. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
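
    A toy sketch of trajectory (Polyak-Ruppert) averaging in a stochastic-approximation recursion follows (illustrative only: the gain schedule, the mean-field term, and all names are assumptions, and the SAMCMC setting of the paper replaces the Gaussian noise below with MCMC samples):

```python
import numpy as np

def sa_with_trajectory_averaging(H, theta0, n_iter=5000, seed=1):
    """Run the recursion theta_{k+1} = theta_k + a_k * H(theta_k, rng)
    and return both the last iterate and the trajectory average."""
    rng = np.random.default_rng(seed)
    theta = theta0
    total = 0.0
    for k in range(1, n_iter + 1):
        a_k = 1.0 / k**0.7                  # slowly decaying gain sequence
        theta = theta + a_k * H(theta, rng)
        total += theta
    return theta, total / n_iter

# noisy mean field H(theta) = (mu - theta) + noise drives theta toward mu = 2.0
mu = 2.0
H = lambda th, rng: (mu - th) + rng.normal(0.0, 1.0)

last, avg = sa_with_trajectory_averaging(H, theta0=0.0)
# the trajectory average settles near mu with lower asymptotic variance
# than the last iterate, which is the efficiency result the paper proves
```
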

  11. Positivity of the spherically averaged atomic one-electron density

    DEFF Research Database (Denmark)

    Fournais, Søren; Hoffmann-Ostenhof, Maria; Hoffmann-Ostenhof, Thomas

    2008-01-01

    We investigate the positivity of the spherically averaged atomic one-electron density. For a density which stems from a physical ground state, we prove positivity for r ≥ 0. This article may be reproduced in its entirety for non-commercial purposes.

  12. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Directory of Open Access Journals (Sweden)

    Tellier Yoann

    2018-01-01

    The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to reach a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with the accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.
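
    The origin of such averaging biases can be seen in a toy version of the non-linear retrieval (a hedged sketch: the log-ratio retrieval, the noise level, and the shot count are illustrative stand-ins for the actual IPDA processing chain):

```python
import numpy as np

rng = np.random.default_rng(42)
true_ratio = 0.8                      # on/off power ratio of a fixed scene
n = 50                                # shots averaged along track
noise = rng.normal(0.0, 0.1, n)       # relative detection noise
p_on = true_ratio * (1.0 + noise)     # noisy on-line returns (off-line set to 1)

# optical depth retrieved through the non-linear log, two processing orders:
tau_avg_then_log = -np.log(p_on.mean())   # average the signals, then take the log
tau_log_then_avg = -np.log(p_on).mean()   # take the log shot by shot, then average

# Jensen's inequality: mean(log x) <= log(mean x) for any sample, so the
# shot-by-shot retrieval is biased high relative to averaging the signals first
print(tau_log_then_avg > tau_avg_then_log)  # True
```

The gap between the two estimates scales with the noise variance, which is why the bias matters at the 1% precision level targeted by MERLIN.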

  13. Non-self-averaging nucleation rate due to quenched disorder

    International Nuclear Information System (INIS)

    Sear, Richard P

    2012-01-01

    We study the nucleation of a new thermodynamic phase in the presence of quenched disorder. The quenched disorder is a generic model of both impurities and disordered porous media; both are known to have large effects on nucleation. We find that the nucleation rate is non-self-averaging. This is in a simple Ising model with clusters of quenched spins. We also show that non-self-averaging behaviour is straightforward to detect in experiments, and may be rather common. (fast track communication)

  14. The economic and energy-economic development of Armenia - new strategies in matters of energy policy

    International Nuclear Information System (INIS)

    Chitechian, V.I.

    1996-01-01

    For geopolitical, economic, technical and structural reasons, Armenia's power generating capacity, which formerly was 3500 megawatts, was, at the beginning of the nineties, a mere 650 megawatts. Consequently, the Armenian government decided in 1993 to rebuild unit 2 of the Metsamor nuclear power station in order for it to become operational in 1995. Armenia is a member of the IAEA and WANO. (DG) [de]

  15. Time-averaged MSD of Brownian motion

    OpenAIRE

    Andreanov, Alexei; Grebenkov, Denis

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...
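
    The TAMSD itself is straightforward to compute from a single trajectory (a hedged sketch of the standard estimator with illustrative parameter choices; the paper's contribution concerns the full probability density of this quantity, not the estimator below):

```python
import numpy as np

def tamsd(traj, lag):
    """Time-averaged mean-square displacement of one trajectory at a given lag,
    using all (overlapping) displacement windows."""
    d = traj[lag:] - traj[:-lag]
    return np.mean(d * d)

# 1-D Brownian motion: the expected TAMSD grows linearly, E[TAMSD(t)] = 2*D*t
rng = np.random.default_rng(7)
dt, D, n = 1.0, 0.5, 200_000
steps = rng.normal(0.0, np.sqrt(2 * D * dt), n)
traj = np.cumsum(steps)

est = tamsd(traj, lag=10) / (2 * D * dt * 10)
print(round(est, 1))  # ~1.0: the estimator recovers the diffusion coefficient
```
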

  16. Role of spatial averaging in multicellular gradient sensing.

    Science.gov (United States)

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-05-20

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.

  17. An Experimental Study Related to Planning Abilities of Gifted and Average Students

    Directory of Open Access Journals (Sweden)

    Marilena Z. Leana-Taşcılar

    2016-02-01

    Gifted students differ from their average peers in psychological, social, emotional and cognitive development. One of these differences in the cognitive domain is related to executive functions, among the most important of which is planning and organization ability. The aim of this study was to compare the planning abilities of gifted students with those of their average peers and to test the effectiveness of a training program on the planning abilities of both groups. First, students' intelligence and planning abilities were measured, and students were then assigned to either the experimental or the control group. The groups were matched by intelligence and planning ability (experimental: 13 gifted and 8 average; control: 14 gifted and 8 average). In total, 182 students (79 gifted and 103 average) participated in the study. A training program was then implemented in the experimental group to find out whether it improved students' planning ability. Results showed that boys had better planning abilities than girls did, and gifted students had better planning abilities than their average peers did. Significant results were obtained in favor of the experimental group in the posttest scores.

  18. Exactly averaged equations for flow and transport in random media

    International Nuclear Information System (INIS)

    Shvidler, Mark; Karasaki, Kenzi

    2001-01-01

    It is well known that exact averaging of the equations of flow and transport in random porous media can be realized only for a small number of special, occasionally exotic, fields. On the other hand, the properties of approximate averaging methods are not yet fully understood: for example, the convergence behavior and the accuracy of truncated perturbation series, where the calculation of high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do there exist exact, general and sufficiently universal forms of averaged equations? If the answer is positive, there arises the problem of constructing these equations and analyzing them. Many publications are related to these problems, oriented toward different applications: hydrodynamics, flow and transport in porous media, theory of elasticity, acoustic and electromagnetic waves in random fields, etc. We present a method of finding the general form of exactly averaged equations for flow and transport in random fields by using (1) an assumption of the existence of Green's functions for appropriate stochastic problems, (2) some general properties of the Green's functions, and (3) some basic information about the random fields of conductivity, porosity and flow velocity. We present a general form of the exactly averaged non-local equations for the following cases: 1. steady-state flow with sources in porous media with random conductivity; 2. transient flow with sources in compressible media with random conductivity and porosity; 3. non-reactive solute transport in random porous media. We discuss the problem of uniqueness and the properties of the non-local averaged equations for cases with some types of symmetry (isotropic, transversely isotropic, orthotropic), and we analyze the hypothesized structure of the non-local equations in the general case of stochastically homogeneous fields. (author)
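
    The non-local character referred to above can be written schematically for the steady-flow case (an illustrative generic form consistent with the abstract, not the authors' exact result): the exactly averaged flux at a point depends on the mean head gradient over the whole field through a kernel,

```latex
\langle \mathbf{q}(\mathbf{x})\rangle
  \;=\; -\int_{\Omega} \mathbf{K}(\mathbf{x},\mathbf{x}')\,
        \nabla'\langle h(\mathbf{x}')\rangle\,\mathrm{d}\mathbf{x}',
```

and the familiar local form ⟨q⟩ = −K_eff ∇⟨h⟩ is recovered only when the kernel K(x, x') collapses to a delta function in x − x'.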

  19. Increase in average foveal thickness after internal limiting membrane peeling

    Directory of Open Access Journals (Sweden)

    Kumagai K

    2017-04-01

    Kazuyuki Kumagai,1 Mariko Furukawa,1 Tetsuyuki Suetsugu,1 Nobuchika Ogino2 1Department of Ophthalmology, Kami-iida Daiichi General Hospital, 2Department of Ophthalmology, Nishigaki Eye Clinic, Aichi, Japan. Purpose: To report the findings in three cases in which the average foveal thickness was increased after a thin epiretinal membrane (ERM) was removed by vitrectomy with internal limiting membrane (ILM) peeling. Methods: The foveal contour was normal preoperatively in all eyes. All cases underwent successful phacovitrectomy with ILM peeling for a thin ERM. The optical coherence tomography (OCT) images were examined before and after the surgery. The changes in the average foveal (1 mm) thickness and the foveal areas within 500 µm of the foveal center were measured. The postoperative changes in the inner and outer retinal areas determined from the cross-sectional OCT images were analyzed. Results: The average foveal thickness and the inner and outer foveal areas increased significantly after the surgery in each of the three cases. The percentage increase in the average foveal thickness relative to the baseline thickness was 26% in Case 1, 29% in Case 2, and 31% in Case 3. The percentage increase in the foveal inner retinal area was 71% in Case 1, 113% in Case 2, and 110% in Case 3, and the percentage increase in the foveal outer retinal area was 8% in Case 1, 13% in Case 2, and 18% in Case 3. Conclusion: The increase in the average foveal thickness and the inner and outer foveal areas suggests that a centripetal movement of the inner and outer retinal layers toward the foveal center probably occurred due to the ILM peeling. Keywords: internal limiting membrane, optical coherence tomography, average foveal thickness, epiretinal membrane, vitrectomy

  20. The Health Effects of Income Inequality: Averages and Disparities.

    Science.gov (United States)

    Truesdale, Beth C; Jencks, Christopher

    2016-01-01

    Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health.

  1. A Martian PFS average spectrum: Comparison with ISO SWS

    Science.gov (United States)

    Formisano, V.; Encrenaz, T.; Fonti, S.; Giuranna, M.; Grassi, D.; Hirsh, H.; Khatuntsev, I.; Ignatiev, N.; Lellouch, E.; Maturilli, A.; Moroz, V.; Orleanski, P.; Piccioni, G.; Rataj, M.; Saggin, B.; Zasova, L.

    2005-08-01

    The evaluation of the Planetary Fourier Spectrometer performance at Mars is presented by comparing an average spectrum with the ISO spectrum published by Lellouch et al. [2000. Planet. Space Sci. 48, 1393]. First, the average conditions of the Mars atmosphere are compared, then the mixing ratios of the major gases are evaluated. Major and minor bands of CO2 are compared from the point of view of feature characteristics and band depths. The spectral resolution is also compared using several solar lines. The result indicates that PFS radiance is valid to better than 1% in the wavenumber range 1800-4200 cm-1 for the average spectrum considered (1680 measurements). The PFS monochromatic transfer function generates an overshoot on the left-hand side of strong narrow lines (solar or atmospheric). The spectral resolution of PFS is of the order of 1.3 cm-1 or better. A large number of narrow features remaining to be identified are discovered.

  2. Improving sensitivity in micro-free flow electrophoresis using signal averaging

    Science.gov (United States)

    Turgeon, Ryan T.; Bowser, Michael T.

    2009-01-01

    Microfluidic free-flow electrophoresis (μFFE) is a separation technique that separates continuous streams of analytes as they travel through an electric field in a planar flow channel. The continuous nature of the μFFE separation suggests that approaches more commonly applied in spectroscopy and imaging may be effective in improving sensitivity. The current paper describes the S/N improvements that can be achieved by simply averaging multiple images of a μFFE separation; 20–24-fold improvements in S/N were observed by averaging the signal from 500 images recorded for over 2 min. Up to an 80-fold improvement in S/N was observed by averaging 6500 images. Detection limits as low as 14 pM were achieved for fluorescein, which is impressive considering the non-ideal optical set-up used in these experiments. The limitation to this signal averaging approach was the stability of the μFFE separation. At separation times longer than 20 min bubbles began to form at the electrodes, which disrupted the flow profile through the device, giving rise to erratic peak positions. PMID:19319908
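
    The reported gains track the expected sqrt(N) scaling of signal averaging (sqrt(500) ≈ 22 and sqrt(6500) ≈ 81, consistent with the 20-24-fold and 80-fold improvements). A toy simulation illustrates the scaling (hedged: a uniform signal with Gaussian noise stands in for the real fluorescence images):

```python
import numpy as np

rng = np.random.default_rng(3)
n_images, n_pixels = 500, 1000
signal = 1.0                                          # constant analyte intensity
frames = signal + rng.normal(0.0, 1.0, (n_images, n_pixels))

def snr(img):
    """Crude S/N estimate: mean signal over pixel-to-pixel standard deviation."""
    return img.mean() / img.std()

improvement = snr(frames.mean(axis=0)) / snr(frames[0])
# 'improvement' comes out close to sqrt(500) ~ 22, matching the ~20-24-fold
# gain reported for 500 averaged images
```
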

  3. The classical correlation limits the ability of the measurement-induced average coherence

    Science.gov (United States)

    Zhang, Jun; Yang, Si-Ren; Zhang, Yang; Yu, Chang-Shui

    2017-04-01

    Coherence is the most fundamental quantum feature in quantum mechanics. For a bipartite quantum state, if a measurement is performed on one party, the other party, based on the measurement outcomes, will collapse to a corresponding state with some probability and hence gain the average coherence. It is shown that the average coherence is not less than the coherence of its reduced density matrix. In particular, it is very surprising that the extra average coherence (and the maximal extra average coherence with all the possible measurements taken into account) is upper bounded by the classical correlation of the bipartite state instead of the quantum correlation. We also find the necessary and sufficient condition for vanishing maximal extra average coherence. Some examples demonstrate the relation and, moreover, show that quantum correlation is neither sufficient nor necessary for nonzero extra average coherence within a given measurement. In addition, similar conclusions are drawn for both the basis-dependent and the basis-free coherence measures.

  4. On the performance of Autoregressive Moving Average Polynomial

    African Journals Online (AJOL)

    Timothy Ademakinwa

    Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag ... Moving Average Polynomial Distributed Lag (ARMAPDL) model. ..... Global Journal of Mathematics and Statistics. Vol. 1. ... Business and Economic Research Center.

  5. Testing averaged cosmology with type Ia supernovae and BAO data

    Energy Technology Data Exchange (ETDEWEB)

    Santos, B.; Alcaniz, J.S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro – RJ (Brazil); Coley, A.A. [Department of Mathematics and Statistics, Dalhousie University, Halifax, B3H 3J5 Canada (Canada); Devi, N. Chandrachani, E-mail: thoven@on.br, E-mail: aac@mathstat.dal.ca, E-mail: chandrachaniningombam@astro.unam.mx, E-mail: alcaniz@on.br [Instituto de Astronomía, Universidad Nacional Autónoma de México, Box 70-264, México City, México (Mexico)

    2017-02-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  6. Testing averaged cosmology with type Ia supernovae and BAO data

    International Nuclear Information System (INIS)

    Santos, B.; Alcaniz, J.S.; Coley, A.A.; Devi, N. Chandrachani

    2017-01-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  7. Status of the Northrop Grumman Compact Infrared Free-Electron Laser

    Energy Technology Data Exchange (ETDEWEB)

    Lehrman, I.S.; Krishnaswamy, J.; Hartley, R.A. [Northrop Grumman Advanced Technology & Development Center, Princeton, NJ (United States)] [and others]

    1995-12-31

    The Compact Infrared Free Electron Laser (CIRFEL) was built as part of a joint collaboration between the Northrop Grumman Corporation and Princeton University to develop FELs for use by researchers in the materials, medical and physical sciences. The CIRFEL was designed to lase in the mid-IR and far-IR regimes with picosecond pulses, megawatt-level peak powers and an average power of a few watts. The micropulse separation is 7 ns, which allows a number of relaxation phenomena to be observed. The CIRFEL utilizes an RF photocathode gun to produce high-brightness, time-synchronized electron bunches. The operational status and experimental results of the CIRFEL will be presented.

  8. Engineering design of the interaction waveguide for high-power accelerator-driven microwave free-electron lasers

    International Nuclear Information System (INIS)

    Hopkins, D.B.; Clay, H.W.; Stallard, B.W.; Throop, A.L.; Listvinsky, G.; Makowski, M.A.

    1989-01-01

    Linear induction accelerators (LIAs) operating at beam energies of a few million electron volts and currents of a few thousand amperes are suitable drivers for free-electron lasers (FELs). Such lasers are capable of producing gigawatts of peak power and megawatts of average power at microwave frequencies. Such devices are being studied as possible power sources for future high-gradient accelerators and are being constructed for plasma heating applications. At high power levels, the engineering design of the interaction waveguide presents a challenge. This paper discusses several concerns, including electrical breakdown and metal fatigue limits, choice of material, and choice of operating propagation mode. 13 refs., 3 figs

  9. Status of the Northrop Grumman Compact Infrared Free-Electron Laser

    International Nuclear Information System (INIS)

    Lehrman, I.S.; Krishnaswamy, J.; Hartley, R.A.

    1995-01-01

    The Compact Infrared Free Electron Laser (CIRFEL) was built as part of a joint collaboration between the Northrop Grumman Corporation and Princeton University to develop FELs for use by researchers in the materials, medical and physical sciences. The CIRFEL was designed to lase in the mid-IR and far-IR regimes with picosecond pulses, megawatt-level peak powers and an average power of a few watts. The micropulse separation is 7 ns, which allows a number of relaxation phenomena to be observed. The CIRFEL utilizes an RF photocathode gun to produce high-brightness, time-synchronized electron bunches. The operational status and experimental results of the CIRFEL will be presented.

  10. Research & development and growth: A Bayesian model averaging analysis

    Czech Academy of Sciences Publication Activity Database

    Horváth, Roman

    2011-01-01

    Roč. 28, č. 6 (2011), s. 2669-2673 ISSN 0264-9993. [Society for Non-linear Dynamics and Econometrics Annual Conference. Washington DC, 16.03.2011-18.03.2011] R&D Projects: GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Keywords: Research and development * Growth * Bayesian model averaging Subject RIV: AH - Economics Impact factor: 0.701, year: 2011 http://library.utia.cas.cz/separaty/2011/E/horvath-research & development and growth a bayesian model averaging analysis.pdf

  11. Stereotypes Of College Students Toward The Average Man's And Woman's Attitudes Toward Women

    Science.gov (United States)

    Kaplan, Robert M.; Goldman, Roy D.

    1973-01-01

    College students perceive a great difference between the (stereotyped) attitudes of the "average man" and the "average woman" toward the role of women in society. The average man was seen as viewing women in a more traditional manner than the average woman. The interaction between sex of respondent and stereotype sex indicated that female respondents…

  12. Reliability Estimates for Undergraduate Grade Point Average

    Science.gov (United States)

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  13. Size and emotion averaging: costs of dividing attention after all.

    Science.gov (United States)

    Brand, John; Oriet, Chris; Tottenham, Laurie Sykes

    2012-03-01

    Perceptual averaging is a process by which sets of similar items are represented by summary statistics such as their average size, luminance, or orientation. Researchers have argued that this process is automatic, able to be carried out without interference from concurrent processing. Here, we challenge this conclusion and demonstrate a reliable cost of computing the mean size of circles distinguished by colour (Experiments 1 and 2) and the mean emotionality of faces distinguished by sex (Experiment 3). We also test the viability of two strategies that could have allowed observers to guess the correct response without computing the average size or emotionality of both sets concurrently. We conclude that although two means can be computed concurrently, doing so incurs a cost of dividing attention.

  14. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    Science.gov (United States)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.
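
    The trade-off the authors formalize can be seen in a simplified setting (a hedged sketch: Gaussian noise on a random 1-D "image" stands in for the paper's MRF model, and all names and parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(11)
truth = rng.normal(0.0, 1.0, 4096)                 # latent image (flattened)
sigma = 0.5                                        # noise level (hyper-parameter)
copies = truth + rng.normal(0.0, sigma, (8, 4096)) # 8 noisy observations

# averaging the 8 copies reduces the per-pixel noise by sqrt(8) ...
avg = copies.mean(axis=0)
noise_reduction = (avg - truth).std() * np.sqrt(8) / sigma   # ~1.0

# ... but the spread across copies, which is what identifies sigma itself,
# is no longer available once only the averaged image is kept
sigma_hat = copies.std(axis=0, ddof=1).mean()      # close to sigma (slightly biased low)
```

This mirrors the paper's finding: averaging helps restoration when the hyper-parameters are known, but discards the very replication that hyper-parameter estimation relies on.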

  15. Medicare Part B Drug Average Sales Pricing Files

    Data.gov (United States)

U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturer's ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...

  16. Topological quantization of ensemble averages

    International Nuclear Information System (INIS)

    Prodan, Emil

    2009-01-01

We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states.

  17. SACALCCYL, Calculates the average solid angle subtended by a volume; SACALC2B, Calculates the average solid angle for source-detector geometries

    International Nuclear Information System (INIS)

    Whitcher, Ralph

    2007-01-01

1 - Description of program or function: SACALC2B calculates the average solid angle subtended by a rectangular or circular detector window to a coaxial or non-coaxial rectangular, circular or point source, including where the source and detector planes are not parallel. SACALCCYL calculates the average solid angle subtended by a cylinder to a rectangular or circular source, plane or thick, at any location and orientation. This is needed, for example, in calculating the intrinsic gamma efficiency of a detector such as a GM tube. The program also calculates the number of hits on the cylinder side and on each end, and the average path length through the detector volume (assuming no scattering or absorption). Point sources can be modelled by using a circular source of zero radius. NEA-1688/03: Documentation has been updated (January 2006). 2 - Methods: The program uses a Monte Carlo method to calculate average solid angle for source-detector geometries that are difficult to analyse by analytical methods. The values of solid angle are calculated to accuracies of typically better than 0.1%. The calculated values from the Monte Carlo method agree closely with those produced by polygon approximation and numerical integration by Gardner and Verghese, and others. 3 - Restrictions on the complexity of the problem: The program models a circular or rectangular detector in planes that are not necessarily coaxial, nor parallel. Point sources can be modelled by using a circular source of zero radius. The sources are assumed to be uniformly distributed. NEA-1688/04: In SACALCCYL, to avoid rounding errors, differences less than 1E-12 are assumed to be zero.
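The Monte Carlo idea behind such codes can be sketched for the simplest geometry they handle, a point source on the axis of a circular detector window, where the analytic result Ω = 2π(1 − cos θ_max) is available as a check. The radius, distance, and sample count below are hypothetical, not taken from the program:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical geometry: point source on the axis of a circular
# detector window of radius R at perpendicular distance D.
R, D = 1.0, 2.0
n = 500_000

# Isotropic directions: cos(theta) uniform on [-1, 1], phi uniform on [0, 2*pi).
cos_t = rng.uniform(-1.0, 1.0, n)
phi = rng.uniform(0.0, 2.0 * np.pi, n)
sin_t = np.sqrt(1.0 - cos_t**2)
dx, dy, dz = sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t

# A ray hits the window if it travels forward (dz > 0) and crosses the
# plane z = D within radius R of the axis.
forward = dz > 0
r2 = (dx**2 + dy**2) * (D / np.where(forward, dz, 1.0))**2
hits = forward & (r2 <= R**2)

omega_mc = 4.0 * np.pi * hits.mean()                        # MC estimate, sr
omega_exact = 2.0 * np.pi * (1.0 - D / np.sqrt(D**2 + R**2))  # analytic, sr
print(omega_mc, omega_exact)
```

The same hit-counting machinery extends directly to tilted, offset, or extended sources, which is where analytic formulas become intractable and the Monte Carlo approach pays off.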

  18. Transferability of hydrological models and ensemble averaging methods between contrasting climatic periods

    Science.gov (United States)

    Broderick, Ciaran; Matthews, Tom; Wilby, Robert L.; Bastola, Satish; Murphy, Conor

    2016-10-01

    Understanding hydrological model predictive capabilities under contrasting climate conditions enables more robust decision making. Using Differential Split Sample Testing (DSST), we analyze the performance of six hydrological models for 37 Irish catchments under climate conditions unlike those used for model training. Additionally, we consider four ensemble averaging techniques when examining interperiod transferability. DSST is conducted using 2/3 year noncontinuous blocks of (i) the wettest/driest years on record based on precipitation totals and (ii) years with a more/less pronounced seasonal precipitation regime. Model transferability between contrasting regimes was found to vary depending on the testing scenario, catchment, and evaluation criteria considered. As expected, the ensemble average outperformed most individual ensemble members. However, averaging techniques differed considerably in the number of times they surpassed the best individual model member. Bayesian Model Averaging (BMA) and the Granger-Ramanathan Averaging (GRA) method were found to outperform the simple arithmetic mean (SAM) and Akaike Information Criteria Averaging (AICA). Here GRA performed better than the best individual model in 51%-86% of cases (according to the Nash-Sutcliffe criterion). When assessing model predictive skill under climate change conditions we recommend (i) setting up DSST to select the best available analogues of expected annual mean and seasonal climate conditions; (ii) applying multiple performance criteria; (iii) testing transferability using a diverse set of catchments; and (iv) using a multimodel ensemble in conjunction with an appropriate averaging technique. Given the computational efficiency and performance of GRA relative to BMA, the former is recommended as the preferred ensemble averaging technique for climate assessment.
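The contrast between GRA and the simple arithmetic mean can be sketched on synthetic data. GRA obtains member weights by an unconstrained least-squares regression of the observations on the ensemble member simulations; the data and member biases below are assumptions for illustration, not the study's catchments:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "observed" series and three biased/noisy ensemble members.
obs = rng.normal(10.0, 2.0, 200)
members = np.column_stack([
    1.2 * obs + rng.normal(0.0, 0.5, 200),   # member biased high
    0.7 * obs + rng.normal(0.0, 0.5, 200),   # member biased low
    obs + rng.normal(0.0, 2.0, 200),         # unbiased but noisy member
])

# Granger-Ramanathan averaging: unconstrained least-squares weights.
w, *_ = np.linalg.lstsq(members, obs, rcond=None)
gra = members @ w

# Simple arithmetic mean (SAM) for comparison.
sam = members.mean(axis=1)

rmse = lambda x: np.sqrt(np.mean((x - obs) ** 2))
print(rmse(gra), rmse(sam))  # GRA fits at least as well as SAM in-sample
```

Because SAM's equal weights are one point in the space GRA searches over, GRA's in-sample error can never exceed SAM's; the computational cost is a single least-squares solve, which is why the abstract favors it over BMA.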

  19. Preference for Averageness in Faces Does Not Generalize to Non-Human Primates

    Directory of Open Access Journals (Sweden)

    Olivia B. Tomeo

    2017-07-01

Full Text Available Facial attractiveness is a long-standing topic of active study in both neuroscience and social science, motivated by its positive social consequences. Over the past few decades, it has been established that averageness is a major factor influencing judgments of facial attractiveness in humans. Non-human primates share similar social behaviors as well as neural mechanisms related to face processing with humans. However, it is unknown whether monkeys, like humans, also find particular faces attractive and, if so, which kind of facial traits they prefer. To address these questions, we investigated the effect of averageness on preferences for faces in monkeys. We tested three adult male rhesus macaques using a visual paired comparison (VPC) task, in which they viewed pairs of faces (both individual faces, or one individual face and one average face); viewing time was used as a measure of preference. We did find that monkeys looked longer at certain individual faces than others. However, unlike humans, monkeys did not prefer the average face over individual faces. In fact, the more the individual face differed from the average face, the longer the monkeys looked at it, indicating that the average face likely plays a role in face recognition rather than in judgments of facial attractiveness: in models of face recognition, the average face operates as the norm against which individual faces are compared and recognized. Taken together, our study suggests that the preference for averageness in faces does not generalize to non-human primates.

  20. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1991-08-01

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms
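A toy numerical illustration of the bias, under the assumption of a 10% "sliding" error proportional to each group's own central value: weighting by the reported variances pulls the average low, while iterating with errors re-evaluated at the current average removes the bias. The data are hypothetical.

```python
import numpy as np

# Toy measurements of the same quantity; each group reports an error
# proportional to its own central value (a "sliding" 10% error).
x = np.array([9.0, 10.0, 11.0])
sigma_reported = 0.10 * x

# Naive average: weight by the reported variances (overweights low values).
w = 1.0 / sigma_reported**2
naive = np.sum(w * x) / np.sum(w)

# Improved estimate: iterate, re-deriving each error from the average itself.
avg = naive
for _ in range(20):
    sigma = 0.10 * avg * np.ones_like(x)  # same fractional error for all
    w = 1.0 / sigma**2
    avg = np.sum(w * x) / np.sum(w)

print(naive, avg)  # the naive average sits below the unbiased value of 10.0
```

With a common fractional error the iteration reduces to the arithmetic mean, which is the unbiased answer here; with more general error functions the iteration converges to a consistently reweighted average.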

  1. Quantized Average Consensus on Gossip Digraphs with Reduced Computation

    Science.gov (United States)

    Cai, Kai; Ishii, Hideaki

The authors have recently proposed a class of randomized gossip algorithms which solve the distributed averaging problem on directed graphs, with the constraint that each node has an integer-valued state. The essence of this algorithm is to maintain local records, called “surplus”, of individual state updates, thereby achieving quantized average consensus even though the state sum of all nodes is not preserved. In this paper we study a modified version of this algorithm, whose primary feature is reduced computation and communication effort. Concretely, each node needs to update fewer local variables, and can transmit its surplus using only one bit. Under this modified algorithm we prove that reaching the average is ensured for arbitrary strongly connected graphs. The condition of arbitrary strong connectivity is less restrictive than those known in the literature for either real-valued or quantized states; in particular, it does not require the special network structure known as balanced. Finally, we provide numerical examples to illustrate the convergence result, with emphasis on convergence time analysis.

  2. Beef steers with average dry matter intake and divergent average daily gain have altered gene expression in the jejunum

    Science.gov (United States)

    The objective of this study was to determine the association of differentially expressed genes (DEG) in the jejunum of steers with average DMI and high or low ADG. Feed intake and growth were measured in a cohort of 144 commercial Angus steers consuming a finishing diet containing (on a DM basis) 67...

  3. Polyphasic taxonomy of the genus Talaromyces

    DEFF Research Database (Denmark)

    Yilmaz, N.; Visagie, C.M.; Houbraken, J.

    2014-01-01

    The genus Talaromyces was described by Benjamin in 1955 as a sexual state of Penicillium that produces soft walled ascomata covered with interwoven hyphae. Phylogenetic information revealed that Penicillium subgenus Biverticillium and Talaromyces form a monophyletic clade distinct from the other...

  4. Microscopic description of average level spacing in even-even nuclei

    International Nuclear Information System (INIS)

    Huong, Le Thi Quynh; Hung, Nguyen Quang; Phuc, Le Tan

    2017-01-01

A microscopic theoretical approach to the average level spacing at the neutron binding energy in even-even nuclei is proposed. The approach is derived based on the Bardeen-Cooper-Schrieffer (BCS) theory at finite temperature and projection M of the total angular momentum J, which is often used to describe the superfluid properties of hot rotating nuclei. The exact relation of the J-dependent total level density to the M-dependent state densities, based on which the average level spacing is calculated, was employed. The numerical calculations carried out for several even-even nuclei have shown that in order to reproduce the experimental average level spacing, the M-dependent pairing gaps as well as the exact relation of the J-dependent total level density formula should be simultaneously used. (paper)

  5. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Hedin, E.R.

    1988-12-01

Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_Θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_Θ in Extrap T1 is described. The results of a series of measurements yielding β_Θ as a function of externally applied toroidal field are presented. (author)

  6. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

Journal of Physics, July 2007, pp. 31–47. In this paper I would like to present a result which confirms, at least partially, a singularity theorem based on spatial averages.

  7. Signal-averaged P wave duration and the dimensions of the atria

    DEFF Research Database (Denmark)

    Dixen, Ulrik; Joens, Christian; Rasmussen, Bo V

    2004-01-01

    Delay of atrial electrical conduction measured as prolonged signal-averaged P wave duration (SAPWD) could be due to atrial enlargement. Here, we aimed to compare different atrial size parameters obtained from echocardiography with the SAPWD measured with a signal-averaged electrocardiogram (SAECG)....

  8. Thermal motion in proteins: Large effects on the time-averaged interaction energies

    International Nuclear Information System (INIS)

    Goethe, Martin; Rubi, J. Miguel; Fita, Ignacio

    2016-01-01

As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence interaction energies (i.e. the pair-potentials evaluated at the fluctuating distances) are not constant in time but exhibit pronounced fluctuations. Because of these fluctuations, time-averaged interaction energies generally do not coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies typically behave more smoothly as functions of the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data of a representative set of proteins. For specific atom species, we find a significant smoothing effect where the time-averaged interaction energy of a single atom pair can differ by various tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependency of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structure on the surface of the protein. The results of this work have important practical implications for protein software relying on free energy expressions. We show that the accuracy of free energy expressions can be substantially increased by introducing environment-specific Lennard-Jones parameters accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.

  9. Thermal motion in proteins: Large effects on the time-averaged interaction energies

    Energy Technology Data Exchange (ETDEWEB)

    Goethe, Martin, E-mail: martingoethe@ub.edu; Rubi, J. Miguel [Departament de Física Fonamental, Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona (Spain); Fita, Ignacio [Institut de Biologia Molecular de Barcelona, Baldiri Reixac 10, 08028 Barcelona (Spain)

    2016-03-15

As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence interaction energies (i.e. the pair-potentials evaluated at the fluctuating distances) are not constant in time but exhibit pronounced fluctuations. Because of these fluctuations, time-averaged interaction energies generally do not coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies typically behave more smoothly as functions of the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data of a representative set of proteins. For specific atom species, we find a significant smoothing effect where the time-averaged interaction energy of a single atom pair can differ by various tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependency of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structure on the surface of the protein. The results of this work have important practical implications for protein software relying on free energy expressions. We show that the accuracy of free energy expressions can be substantially increased by introducing environment-specific Lennard-Jones parameters accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.

  10. Optimizing Prediction Using Bayesian Model Averaging: Examples Using Large-Scale Educational Assessments.

    Science.gov (United States)

    Kaplan, David; Lee, Chansoon

    2018-01-01

    This article provides a review of Bayesian model averaging as a means of optimizing the predictive performance of common statistical models applied to large-scale educational assessments. The Bayesian framework recognizes that in addition to parameter uncertainty, there is uncertainty in the choice of models themselves. A Bayesian approach to addressing the problem of model uncertainty is the method of Bayesian model averaging. Bayesian model averaging searches the space of possible models for a set of submodels that satisfy certain scientific principles and then averages the coefficients across these submodels weighted by each model's posterior model probability (PMP). Using the weighted coefficients for prediction has been shown to yield optimal predictive performance according to certain scoring rules. We demonstrate the utility of Bayesian model averaging for prediction in education research with three examples: Bayesian regression analysis, Bayesian logistic regression, and a recently developed approach for Bayesian structural equation modeling. In each case, the model-averaged estimates are shown to yield better prediction of the outcome of interest than any submodel based on predictive coverage and the log-score rule. Implications for the design of large-scale assessments when the goal is optimal prediction in a policy context are discussed.
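The core mechanics of Bayesian model averaging can be sketched with a BIC approximation to the posterior model probabilities for two linear regression submodels; the synthetic data and submodels below are assumptions for illustration, not the article's assessment examples:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data (hypothetical): y depends on x1 and x2; the candidate
# submodels are "x1 only" and "x1 and x2".
n = 200
x1, x2 = rng.normal(size=(2, n))
y = 2.0 * x1 + 3.0 * x2 + rng.normal(0.0, 1.0, n)

def fit_bic(X, y):
    """OLS fit; return in-sample predictions and the Gaussian BIC."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.mean((y - X @ beta) ** 2)
    loglik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    return X @ beta, X.shape[1] * np.log(n) - 2.0 * loglik

ones = np.ones(n)
models = [np.column_stack([ones, x1]),          # submodel 1: x1 only
          np.column_stack([ones, x1, x2])]      # submodel 2: x1 and x2
results = [fit_bic(X, y) for X in models]
bics = np.array([b for _, b in results])

# BIC approximation to posterior model probabilities: PMP ∝ exp(-BIC/2).
pmp = np.exp(-0.5 * (bics - bics.min()))
pmp /= pmp.sum()

# Model-averaged prediction: PMP-weighted combination of submodel fits.
y_bma = sum(w * pred for w, (pred, _) in zip(pmp, results))
print(pmp)  # weight concentrates on the model the data support
```

Real BMA implementations search a much larger model space and may use exact marginal likelihoods rather than the BIC shortcut, but the weighting-and-averaging step is the same.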

  11. The effects of average revenue regulation on electricity transmission investment and pricing

    International Nuclear Information System (INIS)

    Matsukawa, Isamu

    2008-01-01

    This paper investigates the long-run effects of average revenue regulation on an electricity transmission monopolist who applies a two-part tariff comprising a variable congestion price and a non-negative fixed access fee. A binding constraint on the monopolist's expected average revenue lowers the access fee, promotes transmission investment, and improves consumer surplus. In a case of any linear or log-linear electricity demand function with a positive probability that no congestion occurs, average revenue regulation is allocatively more efficient than a Coasian two-part tariff if the level of capacity under average revenue regulation is higher than that under a Coasian two-part tariff. (author)

  12. On the average capacity and bit error probability of wireless communication systems

    KAUST Repository

    Yilmaz, Ferkan

    2011-12-01

Analysis of the average binary error probabilities and average capacity of wireless communication systems over generalized fading channels has been considered separately in the past. This paper introduces a novel moment generating function-based unified expression for both the average binary error probabilities and the average capacity of single and multiple link communication with maximal ratio combining. It is worth noting that the generic unified expression offered in this paper is easy to evaluate and applicable to a wide variety of fading scenarios; the mathematical formalism is illustrated with the generalized Gamma fading distribution in order to validate the correctness of our newly derived results. © 2011 IEEE.
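The MGF approach can be illustrated for a single-branch special case with a known closed form: BPSK over Rayleigh fading, where the average BER is (1/π)∫₀^{π/2} M_γ(−1/sin²θ) dθ with M_γ(s) = 1/(1 − s·γ̄). The SNR value below is hypothetical:

```python
import numpy as np

gamma_bar = 10.0  # average branch SNR (hypothetical value)

def mgf(s):
    """Moment generating function of the SNR for Rayleigh fading."""
    return 1.0 / (1.0 - s * gamma_bar)

# MGF approach: average BER = (1/pi) * int_0^{pi/2} M(-1/sin^2 t) dt,
# evaluated here with a simple trapezoidal rule.
t = np.linspace(1e-9, np.pi / 2.0, 100_001)
f = mgf(-1.0 / np.sin(t) ** 2) / np.pi
ber_mgf = np.sum((f[:-1] + f[1:]) * np.diff(t)) / 2.0

# Classical closed form for the same channel, as a cross-check.
ber_exact = 0.5 * (1.0 - np.sqrt(gamma_bar / (1.0 + gamma_bar)))
print(ber_mgf, ber_exact)
```

The appeal of the MGF formulation is that swapping in a different fading model only changes `mgf`, while the finite-range integral stays the same; closed forms like `ber_exact` exist only for a few special channels.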

  13. 40 CFR 86.1866-12 - CO2 fleet average credit programs.

    Science.gov (United States)

    2010-07-01

    ... systems using electric compressors); The constant 16.6 is the average passenger car impact of air... using electric compressors); The constant 20.7 is the average passenger car impact of air conditioning.... (a) Incentive for certification of advanced technology vehicles. Electric vehicles, plug-in hybrid...

  14. Passive quantum error correction of linear optics networks through error averaging

    Science.gov (United States)

    Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.

    2018-02-01

    We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof of principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally we discuss some of the potential uses of this scheme.

  15. Analysis of the average daily radon variations in the soil air

    International Nuclear Information System (INIS)

    Holy, K.; Matos, M.; Boehm, R.; Stanys, T.; Polaskova, A.; Hola, O.

    1998-01-01

In this contribution we present a search for a relation between the daily variations of the radon concentration and the regular daily oscillations of the atmospheric pressure. The deviation of the radon activity concentration in the soil air from the average daily value reaches only a few percent. For the dry summer months the average daily course of the radon activity concentration can be described by the obtained equation. The analysis of the average daily courses could give information concerning the depth of the gas-permeable soil layer, a soil parameter that is difficult to determine by other methods.

  16. Angle-averaged Compton cross sections

    International Nuclear Information System (INIS)

    Nickel, G.H.

    1983-01-01

The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m_0c^2; α_s = scattered photon energy in units of m_0c^2; β = initial electron velocity in units of c; phi = angle between photon direction and electron direction in the laboratory frame (LF); theta = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and tau = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over phi, theta, and tau. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.

  17. Angle-averaged Compton cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Nickel, G.H.

    1983-01-01

The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m_0c^2; α_s = scattered photon energy in units of m_0c^2; β = initial electron velocity in units of c; phi = angle between photon direction and electron direction in the laboratory frame (LF); theta = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and tau = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over phi, theta, and tau. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
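As a numerical companion to these records, a short sketch (assuming the special case of an electron at rest, β = 0) integrates the standard Klein-Nishina differential cross section over the polar angle; in the soft-photon limit (α → 0) the angle-averaged result should approach the Thomson cross section:

```python
import numpy as np

r_e = 2.8179403262e-15                     # classical electron radius, m
sigma_thomson = 8.0 * np.pi / 3.0 * r_e**2  # Thomson cross section, m^2

def kn_total(alpha, n=200_000):
    """Total Klein-Nishina cross section for photon energy alpha*m_0c^2,
    electron at rest, by trapezoidal integration over the polar angle."""
    theta = np.linspace(0.0, np.pi, n)
    ratio = 1.0 / (1.0 + alpha * (1.0 - np.cos(theta)))   # alpha_s / alpha
    dsdo = 0.5 * r_e**2 * ratio**2 * (ratio + 1.0 / ratio - np.sin(theta)**2)
    f = dsdo * 2.0 * np.pi * np.sin(theta)                # azimuthal average done
    return np.sum((f[:-1] + f[1:]) * np.diff(theta)) / 2.0

print(kn_total(1e-4) / sigma_thomson)  # -> close to 1 in the soft-photon limit
```

For moving electrons (β > 0) the records' remaining averages over phi and tau come into play, which is exactly the regime the analytic expression in the abstract addresses.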

  18. A NOx control regulation for electric utility boilers in California

    International Nuclear Information System (INIS)

    Price, D.R.

    1992-01-01

The reduction of oxides of nitrogen emissions is becoming an increasingly important part of ozone attainment plans. As a part of its ozone attainment plan, the Ventura County (California) Air Pollution Control Board adopted in June, 1991, a regulation (Rule 59) to limit oxides of nitrogen emissions from four electrical utility boilers in the county. Rule development took two years and involved considerable public input. The emission limit for each of two 750 megawatt units is set at 0.10 pounds of NO x per megawatt-hour net after June, 1994. The emission limit for each of two 215 megawatt units is 0.20 pounds of NO x per megawatt-hour after June, 1996. Additional limitations are included for fuel oil operation. The rule does not specify an emission control technology. Conventional selective catalytic reduction, urea injection and combustion modifications are considered the technologies most likely to be used to comply. At $17,613 per ton of NO x reduced for the two large boilers and $8,992 per ton of NO x reduced for the small boilers, the rule is considered cost-effective. The capital cost for conventional selective catalytic reduction systems on all four boilers is expected to be in excess of $210,000,000

  19. Manpower equals megawatts - alternative employment option

    International Nuclear Information System (INIS)

    McKeone, J.P.

    1993-01-01

    Virtually all nuclear utilities are undergoing serious destaffing in order to reach the lowest possible operating and maintenance costs. This effort is driven by public utility commission (PUC) demands for least-cost generation. Organizational streamlining requires versatile new approaches to project staffing, outsourcing of noncore activities, and responsible care for displaced employees

  20. Proton transport properties of poly(aspartic acid) with different average molecular weights

    Energy Technology Data Exchange (ETDEWEB)

    Nagao, Yuki, E-mail: ynagao@kuchem.kyoto-u.ac.j [Department of Mechanical Systems and Design, Graduate School of Engineering, Tohoku University, 6-6-01 Aoba Aramaki, Aoba-ku, Sendai 980-8579 (Japan); Imai, Yuzuru [Institute of Development, Aging and Cancer (IDAC), Tohoku University, 4-1 Seiryo-cho, Aoba-ku, Sendai 980-8575 (Japan); Matsui, Jun [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Sendai 980-8577 (Japan); Ogawa, Tomoyuki [Department of Electronic Engineering, Graduate School of Engineering, Tohoku University, 6-6-05 Aoba Aramaki, Aoba-ku, Sendai 980-8579 (Japan); Miyashita, Tokuji [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Sendai 980-8577 (Japan)

    2011-04-15

Research highlights: Seven polymers with different average molecular weights were synthesized. The proton conductivity depended on the number-average degree of polymerization. The difference of the proton conductivities was more than one order of magnitude. The number-average molecular weight contributed to the stability of the polymer. - Abstract: We synthesized seven partially protonated poly(aspartic acids)/sodium polyaspartates (P-Asp) with different average molecular weights to study their proton transport properties. The number-average degree of polymerization (DP) for each P-Asp was 30 (P-Asp30), 115 (P-Asp115), 140 (P-Asp140), 160 (P-Asp160), 185 (P-Asp185), 205 (P-Asp205), and 250 (P-Asp250). The proton conductivity depended on the number-average DP. The maximum and minimum proton conductivities under a relative humidity of 70% and 298 K were 1.7 × 10^-3 S cm^-1 (P-Asp140) and 4.6 × 10^-4 S cm^-1 (P-Asp250), respectively. Differential thermogravimetric analysis (TG-DTA) was carried out for each P-Asp. The results were classified into two categories. One exhibited two endothermic peaks between t = 270 °C and 300 °C, the other exhibited only one peak. The P-Asp group with two endothermic peaks exhibited high proton conductivity. The high proton conductivity is related to the stability of the polymer. The number-average molecular weight also contributed to the stability of the polymer.

  1. STUDY OF WITHERS HEIGHT AVERAGE PERFORMANCES IN HUCUL HORSE BREED – HROBY BLOODLINE

    Directory of Open Access Journals (Sweden)

    M. MAFTEI

    2008-10-01

Full Text Available The study of average performances in a population is of great importance because, within a population, the average phenotypic value is equal to the average genotypic value. Thus, the average values of characters give an idea of the genetic level of the population. The biological material is represented by 177 hucul horses from the Hroby bloodline divided into 6 stallion families (tab. 1), analyzed at 18, 30 and 42 months of age, owned by the Lucina hucul stud farm. The average performances for withers height are presented in tab. 2. We can observe that the average performances of the character are within the characteristic limits of the breed. Both sexes show a small degree of variability, with a decreasing tendency with ageing. The growth process shows a normal evolution in time, with significant differences only at the age of 42 months. We can therefore say that the average performances for withers height differ with age, with a decreasing tendency.

  2. STUDY OF WITHERS HEIGHT AVERAGE PERFORMANCES IN HUCUL HORSE BREED –GORAL BLOODLINE

    Directory of Open Access Journals (Sweden)

    M. MAFTEI

    2008-10-01

Full Text Available The study of average performances in a population is of great importance because, within a population, the average phenotypic value is equal to the average genotypic value. Thus, the average values of characters give an idea of the genetic level of the population. The biological material is represented by 87 hucul horses from the Goral bloodline divided into 5 stallion families (tab. 1), analyzed at 18, 30 and 42 months of age, owned by the Lucina hucul stud farm. The average performances for withers height are presented in tab. 2. We can observe that the average performances of the character are within the characteristic limits of the breed. Both sexes show a small degree of variability, with a decreasing tendency with ageing. The growth process shows a normal evolution in time, with significant differences only at the age of 42 months. We can therefore say that the average performances for withers height differ with age, with a decreasing tendency.

  3. Opening up the future in space with nuclear power

    International Nuclear Information System (INIS)

    Buden, D.; Angelo, J. Jr.

    1985-01-01

Man's extraterrestrial development is dependent on abundant power. For example, space-based manufacturing facilities are projected to have a power demand of 300 kWe by the end of this century, and several megawatts in the early part of the next millennium. The development of the lunar resource base will result in power needs ranging from an initial 100 kW(e) to many megawatts. Human visits to Mars could be achieved using a multimegawatt nuclear electric propulsion system or high thrust nuclear rockets. Detailed exploration of the solar system will also be greatly enhanced by the availability of large nuclear electric propulsion systems. All of these activities will require substantial increases in space power - hundreds of kilowatts to many megawatts. The challenge is clear: how to effectively use nuclear energy to support humanity's expansion into space.

  4. Heating entrepreneur activity in 2003

    International Nuclear Information System (INIS)

    Nikkola, A.; Solmio, H.

    2004-01-01

    According to TTS Institute information, at the end of 2003 there were heating entrepreneurs responsible for fuel management and heat production in at least 212 heating plants in Finland. The number of operative plants increased by 36 from the previous year. At the end of 2003, the total boiler capacity for solid fuel in the plants managed by the heating entrepreneurs exceeded 100 megawatts. The average boiler capacity of the plants was 0.5 megawatts. Heating entrepreneurship was most common in west Finland, where 40 percent of the plants are located. Some 94 heating plants were managed by cooperatives or limited companies. Single entrepreneurs or entrepreneur networks consisting of several entrepreneurs were responsible for heat production in 117 plants. Heating entrepreneurs used approximately 290,000 loose cubic metres of forest chips, which is about seven percent of the volume used for heating and power plant energy production in 2003. In addition, the heating entrepreneurs used a total of 40,000 loose cubic metres of other wood fuel and an estimated 20,000 loose cubic metres of sod and milled peat. Municipalities are still the most important customer group for heating entrepreneurs. However, the number of private customers is growing. An industrial company, other private company or property was already the main customer for every fourth plant established during 2003. (orig.)

  5. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter can be considered a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  6. Evaluation of soft x-ray average recombination coefficient and average charge for metallic impurities in beam-heated plasmas

    International Nuclear Information System (INIS)

    Sesnic, S.S.; Bitter, M.; Hill, K.W.; Hiroe, S.; Hulse, R.; Shimada, M.; Stratton, B.; von Goeler, S.

    1986-05-01

    The soft x-ray continuum radiation in TFTR low density neutral beam discharges can be much lower than its theoretical value obtained by assuming a corona equilibrium. This reduced continuum radiation is caused by an ionization equilibrium shift toward lower states, which strongly changes the value of the average recombination coefficient of metallic impurities, γ̄, even for only slight changes in the average charge, Z̄. The primary agent for this shift is the charge exchange between the highly ionized impurity ions and the neutral hydrogen, rather than impurity transport, because the central density of the neutral hydrogen is strongly enhanced at lower plasma densities with intense beam injection. In the extreme case of low density, high neutral beam power TFTR operation (energetic ion mode) the reduction in γ̄ can be as much as one-half to two-thirds. We calculate the parametric dependence of γ̄ and Z̄ for Ti, Cr, Fe, and Ni impurities on neutral density (equivalent to beam power), electron temperature, and electron density. These values are obtained by using either a one-dimensional impurity transport code (MIST) or a zero-dimensional code with a finite particle confinement time. As an example, we show the variation of γ̄ and Z̄ in different TFTR discharges

  7. Time-dependence and averaging techniques in atomic photoionization calculations

    International Nuclear Information System (INIS)

    Scheibner, K.F.

    1984-01-01

    Two distinct problems in the development and application of averaging techniques to photoionization calculations are considered. The first part of the thesis is concerned with the specific problem of near-resonant three-photon ionization in hydrogen, a process for which no cross section exists. Effects of the inclusion of the laser pulse characteristics (both temporal and spatial) on the dynamics of the ionization probability and of the metastable state probability are examined. It is found, for example, that the ionization probability can decrease with increasing field intensity. The temporal profile of the laser pulse is found to affect the dynamics very little, whereas the spatial character of the pulse can affect the results drastically. In the second part of the thesis techniques are developed for calculating averaged cross sections directly without first calculating a detailed cross section. Techniques are developed whereby the detailed cross section never has to be calculated as an intermediate step, but rather, the averaged cross section is calculated directly. A variation of the moment technique and a new method based on the stabilization technique are applied successfully to atomic hydrogen and helium

  8. Space reactors - past, present, and future

    International Nuclear Information System (INIS)

    Buden, D.; Angelo, J.

    1983-01-01

    In the 1990s and beyond, advanced-design nuclear reactors could represent the prime source of both space power and propulsion. Many sophisticated military and civilian space missions of the future will require first kilowatt and then megawatt levels of power. This paper reviews key technology developments that accompanied past US space nuclear power development efforts, describes on-going programs, and then explores reactor technologies that will satisfy megawatt power level needs and beyond

  9. Establishment of Average Body Measurement and the Development ...

    African Journals Online (AJOL)

    body measurement for height and backneck to waist for ages 2,3,4 and 5 years. The ... average measurements of the different parts of the body must be established. ..... and OAU Charter on Rights of the child: Lagos: Nigeria Country office.

  10. GIS Tools to Estimate Average Annual Daily Traffic

    Science.gov (United States)

    2012-06-01

    This project presents five tools that were created for a geographical information system to estimate Annual Average Daily Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...

  11. Maxwellian-averaged cross sections calculated from JENDL-3.2

    Energy Technology Data Exchange (ETDEWEB)

    Nakagawa, Tsuneo; Chiba, Satoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ohsaka, Toshiro; Igashira, Masayuki [Research Laboratory for Nuclear Reactors, Tokyo Institute of Technology, Tokyo (Japan)

    2000-02-01

    Maxwellian-averaged cross sections of neutron capture, fission, (n,p) and (n,α) reactions are calculated from the Japanese Evaluated Nuclear Data Library, JENDL-3.2, for applications in astrophysics. The calculation was made in the temperature (kT) range from 1 keV to 1 MeV. Results are listed in tables. The Maxwellian-averaged capture cross sections were compared with recommendations of other authors and recent experimental data. Large discrepancies were found among them, especially for the light-mass nuclides. Since JENDL-3.2 reproduces the recent experimental data relatively well, we conclude that JENDL-3.2 is superior to the others in this mass region. (author)
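
    The Maxwellian average used in such tables can be reproduced numerically. As an illustrative sketch (not the JENDL processing code), the snippet below evaluates ⟨σ⟩ = (2/√π)(kT)⁻² ∫₀^∞ σ(E) E e^(−E/kT) dE with a trapezoidal rule for a hypothetical 1/v capture cross section; for a 1/v shape the average reduces analytically to σ evaluated at E = kT, which gives a built-in check. All numerical values here are assumptions for illustration only.

    ```python
    import math

    def macs(sigma, kT, emax_factor=30.0, n=20000):
        """Maxwellian-averaged cross section:
        <sigma> = (2/sqrt(pi)) * (kT)**-2 * int_0^inf sigma(E) * E * exp(-E/kT) dE,
        evaluated with a simple trapezoidal rule (toy accuracy, not library processing)."""
        emax = emax_factor * kT          # truncate the tail where exp(-E/kT) is negligible
        h = emax / n
        total = 0.0
        for i in range(n + 1):
            E = i * h
            f = sigma(E) * E * math.exp(-E / kT) if E > 0 else 0.0
            total += (0.5 if i in (0, n) else 1.0) * f
        total *= h
        return (2.0 / math.sqrt(math.pi)) * total / kT**2

    # hypothetical 1/v cross section: sigma(E) = sigma0 * sqrt(E0/E) (illustrative values)
    sigma0, E0 = 1.0, 0.025              # barns, MeV (arbitrary reference point)
    kT = 0.030                           # 30 keV, a typical s-process temperature
    s = lambda E: sigma0 * math.sqrt(E0 / E) if E > 0 else 0.0

    approx = macs(s, kT)
    exact = sigma0 * math.sqrt(E0 / kT)  # for 1/v, <sigma> = sigma(E = kT)
    ```

    The analytic identity holds only for the 1/v shape; for tabulated evaluated data the integral must be done numerically, which is what such compilations do.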

  12. Nuclear-power capacity outside US soars 25% in 18 months

    International Nuclear Information System (INIS)

    Anon.

    1980-01-01

    Nuclear power plant operating capacity in countries outside the US increased nearly 25 percent from mid-1978 through 1980 in spite of curtailments in some countries. The operable capacity stood at 70,200 megawatts at the end of 1979, according to an Atomic Industrial Forum survey of 42 nations. The world average share of nuclear power rose from five percent in 1977 to six percent and is expected to rise dramatically in France, Japan, and elsewhere; France alone plans to bring one reactor into operation every two months, on average, between 1980 and 1985. Statistics from the survey track the expected growth in capacity by country from 1978 to 2000. The status of individual plants and their date or anticipated date of commercial operation are listed by country. The United Kingdom, France, Federal Republic of Germany, Japan, India, and Canada are developing reprocessing and waste-management programs

  13. MCBS Highlights: Ownership and Average Premiums for Medicare Supplementary Insurance Policies

    Science.gov (United States)

    Chulis, George S.; Eppig, Franklin J.; Poisal, John A.

    1995-01-01

    This article describes private supplementary health insurance holdings and average premiums paid by Medicare enrollees. Data were collected as part of the 1992 Medicare Current Beneficiary Survey (MCBS). Data show the number of persons with insurance and average premiums paid by type of insurance held—individually purchased policies, employer-sponsored policies, or both. Distributions are shown for a variety of demographic, socioeconomic, and health status variables. Primary findings include: Seventy-eight percent of Medicare beneficiaries have private supplementary insurance; 25 percent of those with private insurance hold more than one policy. The average premium paid for private insurance in 1992 was $914. PMID:10153473

  14. 40 CFR 63.2500 - How do I comply with emissions averaging?

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 12 2010-07-01 2010-07-01 true How do I comply with emissions averaging? 63.2500 Section 63.2500 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Alternative Means of Compliance § 63.2500 How do I comply with emissions averaging? (a) For an existing source...

  15. Familiarity and Voice Representation: From Acoustic-Based Representation to Voice Averages

    Directory of Open Access Journals (Sweden)

    Maureen Fontaine

    2017-07-01

    Full Text Available The ability to recognize an individual from their voice is a widespread ability with a long evolutionary history. Yet, the perceptual representation of familiar voices is ill-defined. In two experiments, we explored the neuropsychological processes involved in the perception of voice identity. We specifically explored the hypothesis that familiar voices (trained-to-familiar, Experiment 1; famous, Experiment 2) are represented as a whole complex pattern, well approximated by the average of multiple utterances produced by a single speaker. In Experiment 1, participants learned three voices over several sessions, and performed a three-alternative forced-choice identification task on original voice samples and several “speaker averages,” created by morphing across varying numbers of different vowels (e.g., [a] and [i]) produced by the same speaker. In Experiment 2, the same participants performed the same task on voice samples produced by famous speakers. The two experiments showed that for famous voices, but not for trained-to-familiar voices, identification performance increased and response times decreased as a function of the number of utterances in the averages. This study sheds light on the perceptual representation of familiar voices, and demonstrates the power of averaging in recognizing familiar voices. The speaker average captures the unique characteristics of a speaker, and thus retains the information essential for recognition; it acts as a prototype of the speaker.

  16. High average power diode pumped solid state lasers for CALIOPE

    International Nuclear Information System (INIS)

    Comaskey, B.; Halpin, J.; Moran, B.

    1994-07-01

    Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers

  17. Construction of average adult Japanese voxel phantoms for dose assessment

    International Nuclear Information System (INIS)

    Sato, Kaoru; Takahashi, Fumiaki; Satoh, Daiki; Endo, Akira

    2011-12-01

    The International Commission on Radiological Protection (ICRP) adopted the adult reference voxel phantoms, based on the physiological and anatomical reference data of Caucasians, in October 2007. The organs and tissues of these phantoms were segmented on the basis of ICRP Publication 103. In the future, the dose coefficients for internal dose and the dose conversion coefficients for external dose calculated using the adult reference voxel phantoms will be widely used in the radiation protection field. On the other hand, the body sizes and organ masses of adult Japanese are generally smaller than those of adult Caucasians. In addition, there are cases in which the anatomical characteristics of subjects, such as body size, organ mass and posture, influence the organ doses in dose assessment for medical treatments and radiation accidents. Therefore, human phantoms with the average anatomical characteristics of Japanese were needed. The authors constructed the average adult Japanese male and female voxel phantoms by modifying the previously developed high-resolution adult male (JM) and female (JF) voxel phantoms. The phantoms were modified in the following three aspects: (1) the heights and weights were made to agree with the Japanese averages; (2) the masses of organs and tissues were adjusted to the Japanese averages within 10%; (3) the organs and tissues newly added for evaluation of the effective dose in ICRP Publication 103 were modeled. In this study, the organ masses, distances between organs, specific absorbed fractions (SAFs) and dose conversion coefficients of these phantoms were compared with those evaluated using the ICRP adult reference voxel phantoms. This report provides valuable information on the anatomical and dosimetric characteristics of the average adult Japanese male and female voxel phantoms developed as reference phantoms of adult Japanese. (author)

  18. Weighted estimates for the averaging integral operator

    Czech Academy of Sciences Publication Activity Database

    Opic, Bohumír; Rákosník, Jiří

    2010-01-01

    Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231

  19. Determination of the average lifetime of bottom hadrons

    Energy Technology Data Exchange (ETDEWEB)

    Althoff, M; Braunschweig, W; Kirschfink, F J; Martyn, H U; Rosskamp, P; Schmitz, D; Siebke, H; Wallraff, W [Technische Hochschule Aachen (Germany, F.R.). Lehrstuhl fuer Experimentalphysik 1A und 1. Physikalisches Inst.; Eisenmann, J; Fischer, H M

    1984-12-27

    We have determined the average lifetime of hadrons containing b quarks produced in e⁺e⁻ annihilation to be τ_B = 1.83 × 10⁻¹² s. Our method uses charged decay products from both non-leptonic and semileptonic decay modes.

  20. Determination of the average lifetime of bottom hadrons

    Energy Technology Data Exchange (ETDEWEB)

    Althoff, M; Braunschweig, W; Kirschfink, F J; Martyn, H U; Rosskamp, P; Schmitz, D; Siebke, H; Wallraff, W; Eisenmann, J; Fischer, H M

    1984-12-27

    We have determined the average lifetime of hadrons containing b quarks produced in e⁺e⁻ annihilation to be τ_B = 1.83 × 10⁻¹² s. Our method uses charged decay products from both non-leptonic and semileptonic decay modes. (orig./HSI).

  1. Partial Averaged Navier-Stokes approach for cavitating flow

    International Nuclear Information System (INIS)

    Zhang, L; Zhang, Y N

    2015-01-01

    Partial Averaged Navier-Stokes (PANS) is a numerical approach developed for studying practical engineering problems (e.g. cavitating flow inside hydroturbines) at a reasonable cost and accuracy. One of the advantages of PANS is that it is suitable for any filter width, providing a bridging method from traditional Reynolds-Averaged Navier-Stokes (RANS) to direct numerical simulation through the choice of appropriate parameters. Compared with RANS, the PANS model inherits much of the physical nature of the parent RANS model but resolves more scales of motion in greater detail, making it superior to RANS. An important step in the PANS approach is to identify appropriate physical filter-width control parameters, e.g. the ratios of unresolved-to-total kinetic energy and dissipation. In the present paper, recent studies of cavitating flow based on the PANS approach are introduced, with a focus on the influences of the filter-width control parameters on the simulation results

  2. Average subentropy, coherence and entanglement of random mixed quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy which is attained for the maximally mixed state as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because of the fact that average coherence of random mixed states is bounded uniformly, however, the average coherence of random pure states increases with the increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrary small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.

  3. Environmental stresses can alleviate the average deleterious effect of mutations

    Directory of Open Access Journals (Sweden)

    Leibler Stanislas

    2003-05-01

    Full Text Available Abstract Background Fundamental questions in evolutionary genetics, including the possible advantage of sexual reproduction, depend critically on the effects of deleterious mutations on fitness. Limited existing experimental evidence suggests that, on average, such effects tend to be aggravated under environmental stresses, consistent with the perception that stress diminishes the organism's ability to tolerate deleterious mutations. Here, we ask whether there are also stresses with the opposite influence, under which the organism becomes more tolerant to mutations. Results We developed a technique, based on bioluminescence, which allows accurate automated measurements of bacterial growth rates at very low cell densities. Using this system, we measured growth rates of Escherichia coli mutants under a diverse set of environmental stresses. In contrast to the perception that stress always reduces the organism's ability to tolerate mutations, our measurements identified stresses that do the opposite – that is, despite decreasing wild-type growth, they alleviate, on average, the effect of deleterious mutations. Conclusions Our results show a qualitative difference between various environmental stresses ranging from alleviation to aggravation of the average effect of mutations. We further show how the existence of stresses that are biased towards alleviation of the effects of mutations may imply the existence of average epistatic interactions between mutations. The results thus offer a connection between the two main factors controlling the effects of deleterious mutations: environmental conditions and epistatic interactions.

  4. Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data

    Science.gov (United States)

    Gebert, Warren A.; Walker, John F.; Kennedy, James L.

    2011-01-01

    Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.
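
    The core arithmetic described above — average annual base flow divided by drainage area — reduces to a unit conversion from discharge per unit area to a depth of water per year. A minimal sketch with hypothetical basin values (not the study's data):

    ```python
    # Convert base flow (cubic feet per second) over a drainage area (square miles)
    # to an equivalent recharge depth in inches per year.
    SECONDS_PER_YEAR = 365.25 * 86400      # s
    FT2_PER_SQMI = 5280.0 ** 2             # ft^2 in a square mile

    def recharge_inches_per_year(base_flow_cfs, drainage_area_sqmi):
        """Average annual recharge (in/yr) approximated as base flow / drainage area."""
        volume_ft3 = base_flow_cfs * SECONDS_PER_YEAR            # ft^3 per year
        depth_ft = volume_ft3 / (drainage_area_sqmi * FT2_PER_SQMI)
        return depth_ft * 12.0

    # hypothetical basin: 25 cfs of average annual base flow over 50 mi^2
    r = recharge_inches_per_year(25.0, 50.0)   # ≈ 6.8 in/yr
    ```

    A useful rule of thumb falls out of the constants: 1 cfs/mi² of base flow corresponds to roughly 13.6 inches of recharge per year.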

  5. Solar Energy Technology Office Portfolio Review: Promotion of PV Soft Cost Reductions in the Southeastern US

    Energy Technology Data Exchange (ETDEWEB)

    Fox, E. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-12-20

    From 2016 to 2021, the installed solar capacity in South Carolina will mushroom from less than 20 megawatts to more than 300 megawatts. Concurrently, the number of customer-sited, load-centered solar generation systems is expected to grow from less than 500 statewide to as many as 10,000 by 2021. This growth is anticipated to be the direct result of a landmark state policy initiative, Act 236, passed by the South Carolina General Assembly and signed into law by the Governor in June of 2014. Local policy makers in South Carolina are ill-equipped to handle the onslaught of solar permitting and zoning requests expected over the next five years. Similarly, the state’s building inspectors, first responders, and tax assessors know little about photovoltaic (PV) technology and best practices. Finally, South Carolina’s workforce and workforce trainers are underprepared to benefit from the tremendous opportunity created by the passage of Act 236. Each of these deficits in knowledge of and preparedness for solar PV translates into higher “soft costs” of installed solar PV in South Carolina. Currently, we estimate that the installed costs of residential rooftop solar are as much as 25 percent higher than the national average. The Savannah River National Laboratory (SRNL), together with almost a dozen electricity stakeholders in the Southeast, proposes to create a replicable model for solar PV soft cost reduction in South Carolina through human capacity-building at the local level and direct efforts to harmonize policy at the inter-county or regional level. The primary goal of this effort is to close the gap between South Carolina installed costs of residential rooftop solar and national averages. The secondary goal is to develop a portable and replicable model that can be applied to other jurisdictions in the Southeastern US.

  6. Measured emotional intelligence ability and grade point average in nursing students.

    Science.gov (United States)

    Codier, Estelle; Odell, Ellen

    2014-04-01

    For most schools of nursing, grade point average is the most important criterion for admission to nursing school and constitutes the main indicator of success throughout the nursing program. In the general research literature, the relationship between traditional measures of academic success, such as grade point average, and postgraduation job performance is not well established. In both the general population and among practicing nurses, measured emotional intelligence ability correlates with performance and other important professional indicators postgraduation. Little research exists comparing traditional measures of intelligence with measured emotional intelligence prior to graduation, and none in the student nurse population. This exploratory, descriptive, quantitative study was undertaken to explore the relationship between measured emotional intelligence ability and grade point average of first-year nursing students. The study took place at a school of nursing at a university in the south central region of the United States. Participants included 72 undergraduate student nurse volunteers. Emotional intelligence was measured using the Mayer-Salovey-Caruso Emotional Intelligence Test, version 2, an instrument for quantifying emotional intelligence ability. Pre-admission grade point average was reported by the school records department. Total emotional intelligence scores (r = .24) and one subscore, experiential emotional intelligence (r = .25), correlated significantly (p < .05) with grade point average. This exploratory, descriptive study provided evidence for some relationship between GPA and measured emotional intelligence ability, but also demonstrated lower than average range scores in several emotional intelligence scores. The relationship between pre-graduation measures of success and level of performance postgraduation deserves further exploration. The findings of this study suggest that research on the relationship between traditional and nontraditional

  7. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    Full Text Available This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA control chart when observations are from an exponential distribution using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL which is compared for each copula. Copula functions for specifying dependence between random variables are used and measured by Kendall’s tau. The results show that the Normal copula can be used for almost all shifts.
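
    As a hedged sketch of the Monte Carlo approach described (estimating the Average Run Length of an EWMA chart for exponential observations), the snippet below handles the independent-sampling case, which corresponds to the product copula; swapping in a dependent sampler is where the other copulas would enter. The chart parameters (λ, L) and the size of the mean shift are illustrative assumptions, not the paper's settings.

    ```python
    import random

    def ewma_arl(lam=0.1, L=2.7, theta0=1.0, theta1=1.0,
                 n_runs=500, max_n=100000, seed=1):
        """Monte Carlo Average Run Length of an EWMA chart whose limits are
        designed for exponential(mean=theta0) data while it monitors
        exponential(mean=theta1) observations."""
        rng = random.Random(seed)
        mu, sd = theta0, theta0                          # exponential: mean = std dev
        half = L * sd * (lam / (2.0 - lam)) ** 0.5       # asymptotic EWMA half-width
        ucl, lcl = mu + half, max(0.0, mu - half)
        total = 0
        for _ in range(n_runs):
            z, n = mu, 0                                 # restart statistic at target
            while n < max_n:
                n += 1
                z = lam * rng.expovariate(1.0 / theta1) + (1.0 - lam) * z
                if z > ucl or z < lcl:                   # out-of-control signal
                    break
            total += n
        return total / n_runs

    arl_in = ewma_arl()                   # in-control ARL (no shift)
    arl_out = ewma_arl(theta1=1.5)        # ARL after a 50% upward mean shift
    ```

    The comparison the paper makes between copulas amounts to repeating this simulation with correlated draws and comparing the resulting ARLs; here the shifted process signals much faster than the in-control one, as expected.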

  8. Small Bandwidth Asymptotics for Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    This paper proposes (apparently) novel standard error formulas for the density-weighted average derivative estimator of Powell, Stock, and Stoker (1989). Asymptotic validity of the standard errors developed in this paper does not require the use of higher-order kernels and the standard errors...

  9. The effects of disjunct sampling and averaging time on maximum mean wind speeds

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Mann, J.

    2006-01-01

    Conventionally, the 50-year wind is calculated on basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours. We call it disjunct sampling. It may also happen that the wind speeds are averaged over a longer time...

  10. On the average capacity and bit error probability of wireless communication systems

    KAUST Repository

    Yilmaz, Ferkan; Alouini, Mohamed-Slim

    2011-01-01

    Analysis of the average binary error probabilities and average capacity of wireless communications systems over generalized fading channels have been considered separately in the past. This paper introduces a novel moment generating function

  11. Multiple-level defect species evaluation from average carrier decay

    Science.gov (United States)

    Debuf, Didier

    2003-10-01

    An expression for the average decay is determined by solving the carrier continuity equations, which include terms for multiple-defect recombination. This expression is the decay measured by techniques such as the contactless photoconductance decay method, which determines the average or volume-integrated decay. Implicit in the above is the requirement for good surface passivation such that only bulk properties are observed. A proposed experimental configuration is given to achieve the intended goal of an assessment of the type of defect in an n-type Czochralski-grown silicon semiconductor with an unusually high relative lifetime. The high lifetime is explained in terms of a ground-excited-state multiple-level defect system. Also, minority carrier trapping is investigated.

  12. Baseline-dependent averaging in radio interferometry

    Science.gov (United States)

    Wijnholds, S. J.; Willis, A. G.; Salvini, S.

    2018-05-01

    This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.
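
    The closed-form decorrelation loss mentioned above implies that the tolerable averaging time scales inversely with baseline length, which is exactly why baseline-dependent averaging saves data volume. A simplified sketch under assumed conditions — boxcar time averaging with loss 1 − sinc(fT) and a worst-case fringe rate f ≈ ω_E·B/λ — rather than the paper's exact expressions:

    ```python
    import math

    OMEGA_E = 7.2921e-5   # Earth rotation rate, rad/s (assumed fringe-rate driver)

    def max_avg_time(baseline_m, wavelength_m, max_loss=0.02):
        """Longest boxcar averaging time keeping the decorrelation loss
        1 - sin(pi*f*T)/(pi*f*T) below max_loss, with worst-case fringe
        rate f ~ OMEGA_E * B / lambda (a simplified model)."""
        f = OMEGA_E * baseline_m / wavelength_m      # fringes per second
        lo, hi = 0.0, 1.0 / (2.0 * f)                # stay within the first sinc lobe
        for _ in range(60):                          # bisection on monotone loss(T)
            mid = 0.5 * (lo + hi)
            x = math.pi * f * mid
            loss = 1.0 - (math.sin(x) / x if x > 0 else 1.0)
            if loss < max_loss:
                lo = mid
            else:
                hi = mid
        return lo

    # short baselines tolerate much longer averaging than long ones
    t_short = max_avg_time(100.0, 0.21)       # 100 m baseline at 21 cm
    t_long = max_avg_time(100000.0, 0.21)     # 100 km baseline
    ```

    Because the same loss threshold fixes the same sinc argument, the allowed averaging time here is exactly proportional to 1/B; averaging the many short baselines of a centrally condensed array accordingly yields the large (>80 per cent) visibility-volume reductions the paper reports.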

  13. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  14. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

    We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.
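    The closing bound can be checked on a small instance. The brute-force helper below is illustrative only (it is not from the paper and is feasible only for tiny graphs); it verifies the bound (4n−m−1)/7 on the 5-cycle, which is triangle-free with n = 5 and m = 5:

```python
from itertools import combinations

def independence_number(n, edges):
    """Brute-force maximum independent set size; feasible only for tiny graphs."""
    for k in range(n, 0, -1):
        for sub in combinations(range(n), k):
            s = set(sub)
            # independent: no edge has both endpoints in the candidate set
            if all(u not in s or v not in s for u, v in edges):
                return k
    return 0

# C5, the 5-cycle: triangle-free, n = 5 vertices, m = 5 edges
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
alpha = independence_number(5, edges)
bound = (4 * 5 - 5 - 1) / 7   # (4n - m - 1)/7 = 2.0
print(alpha, bound)           # 2 2.0
```

Here the bound is tight: the independence number of C5 is exactly 2.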

  15. Time-averaged MSD of Brownian motion

    International Nuclear Information System (INIS)

    Andreanov, Alexei; Grebenkov, Denis S

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution
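    As a concrete illustration (a simulation sketch, not the authors' analytical derivation), the TAMSD of a discretely sampled Brownian trajectory can be computed directly and compared with its expected value 2DΔ at lag time Δ:

```python
import numpy as np

rng = np.random.default_rng(0)

def tamsd(x, lag):
    """Time-averaged mean-square displacement of a 1-D trajectory at integer lag."""
    disp = x[lag:] - x[:-lag]
    return np.mean(disp ** 2)

# Brownian trajectory: independent increments with variance 2*D*dt
D, dt, n = 0.5, 0.01, 100_000
x = np.cumsum(np.sqrt(2 * D * dt) * rng.standard_normal(n))

# For Brownian motion the mean of the TAMSD at lag time lag*dt is 2*D*lag*dt
for lag in (1, 10, 100):
    print(lag, tamsd(x, lag), 2 * D * lag * dt)
```

The scatter of the simulated TAMSD around its mean is exactly the fluctuation statistics the paper characterizes via the Laplace transform.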

  16. Time-averaged MSD of Brownian motion

    Science.gov (United States)

    Andreanov, Alexei; Grebenkov, Denis S.

    2012-07-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.

  17. Object detection by correlation coefficients using azimuthally averaged reference projections.

    Science.gov (United States)

    Nicholson, William V

    2004-11-01

    A method of computing correlation coefficients for object detection that takes advantage of using azimuthally averaged reference projections is described and compared with two alternative methods: computing a cross-correlation function or a local correlation coefficient versus the azimuthally averaged reference projections. Two examples of an application from structural biology involving the detection of projection views of biological macromolecules in electron micrographs are discussed. It is found that a novel approach to computing a local correlation coefficient versus azimuthally averaged reference projections, using a rotational correlation coefficient, outperforms using a cross-correlation function and a local correlation coefficient in object detection from simulated images with a range of levels of simulated additive noise. The three approaches perform similarly in detecting macromolecular views in electron microscope images of a globular macromolecular complex (the ribosome). The rotational correlation coefficient outperforms the other methods in detection of keyhole limpet hemocyanin macromolecular views in electron micrographs.
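    The azimuthal-averaging step on which such references are built can be sketched in a few lines (a generic integer-radius binning, assumed details; the paper's rotational correlation coefficient then compares images against profiles like this):

```python
import numpy as np

def azimuthal_average(img):
    """Average an image over angle about its centre, returning a 1-D radial profile."""
    h, w = img.shape
    y, x = np.indices((h, w))
    # distance of each pixel from the image centre, binned to integer radii
    r = np.hypot(x - (w - 1) / 2.0, y - (h - 1) / 2.0).astype(int)
    counts = np.bincount(r.ravel())                      # pixels per radius bin
    sums = np.bincount(r.ravel(), weights=img.ravel())   # intensity per radius bin
    return sums / counts

# Sanity check: a constant image has a flat radial profile
profile = azimuthal_average(np.full((64, 64), 3.0))
print(profile[:5])
```

Azimuthal averaging collapses a 2-D reference to 1-D, which is what makes the subsequent correlation searches cheap.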

  18. Non-sky-averaged sensitivity curves for space-based gravitational-wave observatories

    International Nuclear Information System (INIS)

    Vallisneri, Michele; Galley, Chad R

    2012-01-01

    The signal-to-noise ratio (SNR) is used in gravitational-wave observations as the basic figure of merit for detection confidence and, together with the Fisher matrix, for the amount of physical information that can be extracted from a detected signal. SNRs are usually computed from a sensitivity curve, which describes the gravitational-wave amplitude needed by a monochromatic source of given frequency to achieve a threshold SNR. Although the term 'sensitivity' is used loosely to refer to the detector's noise spectral density, the two quantities are not the same: the sensitivity includes also the frequency- and orientation-dependent response of the detector to gravitational waves and takes into account the duration of observation. For interferometric space-based detectors similar to LISA, which are sensitive to long-lived signals and have constantly changing position and orientation, exact SNRs need to be computed on a source-by-source basis. For convenience, most authors prefer to work with sky-averaged sensitivities, accepting inaccurate SNRs for individual sources and giving up control over the statistical distribution of SNRs for source populations. In this paper, we describe a straightforward end-to-end recipe to compute the non-sky-averaged sensitivity of interferometric space-based detectors of any geometry. This recipe includes the effects of spacecraft motion and of seasonal variations in the partially subtracted confusion foreground from Galactic binaries, and it can be used to generate a sampling distribution of sensitivities for a given source population. In effect, we derive error bars for the sky-averaged sensitivity curve, which provide a stringent statistical interpretation for previously unqualified statements about sky-averaged SNRs. As a worked-out example, we consider isotropic and Galactic-disk populations of monochromatic sources, as observed with the 'classic LISA' configuration. 
We confirm that the (standard) inverse-rms average sensitivity

  19. Average Transverse Momentum Quantities Approaching the Lightfront

    OpenAIRE

    Boer, Daniel

    2015-01-01

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...

  20. Estimation of average annual streamflows and power potentials for Alaska and Hawaii

    Energy Technology Data Exchange (ETDEWEB)

    Verdin, Kristine L. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL)]

    2004-05-01

    This paper describes the work done to develop average annual streamflow estimates and power potential for the states of Alaska and Hawaii. The Elevation Derivatives for National Applications (EDNA) database was used, along with climatic datasets, to develop flow and power estimates for every stream reach in the EDNA database. Estimates of average annual streamflows were derived using state-specific regression equations, which were functions of average annual precipitation, precipitation intensity, drainage area, and other elevation-derived parameters. Power potential was calculated through the use of the average annual streamflow and the hydraulic head of each reach, which is calculated from the EDNA digital elevation model. In all, estimates of streamflow and power potential were calculated for over 170,000 stream segments in the Alaskan and Hawaiian datasets.
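    The power-potential computation described follows the standard hydraulic relation P = ρ g Q H; a minimal sketch with illustrative numbers (not EDNA values):

```python
# Hydraulic power potential: P = rho * g * Q * H
#   rho = water density (kg/m^3), g = gravitational acceleration (m/s^2),
#   Q   = average annual streamflow (m^3/s), H = hydraulic head (m)
def hydraulic_power_kw(flow_m3s, head_m):
    rho, g = 1000.0, 9.81
    return rho * g * flow_m3s * head_m / 1000.0   # convert W to kW

# e.g. a reach with 10 m^3/s average flow and 5 m of head
print(hydraulic_power_kw(10.0, 5.0))   # 490.5 kW (theoretical, before efficiency losses)
```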

  1. Fitting a function to time-dependent ensemble averaged data.

    Science.gov (United States)

    Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias

    2018-05-03

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the two generally available function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion, and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
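    The core idea, estimating parameters by weighted least squares but propagating the full noise covariance into the parameter error, can be sketched generically (this sandwich-style estimate under an assumed exponentially decaying correlation is an illustration, not the authors' WLS-ICE code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Fit y = a*t by weighted least squares, but estimate var(a) using the FULL
# noise covariance Sigma (sandwich formula), not just its diagonal.
t = np.arange(1.0, 21.0)
X = t[:, None]                                    # design matrix for y = a*t
Sigma = 0.5 ** np.abs(np.subtract.outer(t, t))    # assumed correlated noise covariance
W = np.diag(1.0 / np.diag(Sigma))                 # WLS weights ignore correlations

A = np.linalg.inv(X.T @ W @ X) @ (X.T @ W)        # WLS estimator: a_hat = A @ y
var_a = (A @ Sigma @ A.T)[0, 0]                   # correlation-aware error estimate

y = 2.0 * t + rng.multivariate_normal(np.zeros_like(t), Sigma)
a_hat = (A @ y)[0]
print(a_hat, np.sqrt(var_a))
```

Neglecting the off-diagonal terms of Sigma in the error step (the naive WLS error) would understate the uncertainty when fluctuations are positively correlated.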

  2. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2012-01-01

    Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean-field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, the number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by its average over the ensemble. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.

  3. Large-signal analysis of DC motor drive system using state-space averaging technique

    International Nuclear Information System (INIS)

    Bekir Yildiz, Ali

    2008-01-01

    The analysis of a separately excited DC motor driven by DC-DC converter is realized by using state-space averaging technique. Firstly, a general and unified large-signal averaged circuit model for DC-DC converters is given. The method converts power electronic systems, which are periodic time-variant because of their switching operation, to unified and time independent systems. Using the averaged circuit model enables us to combine the different topologies of converters. Thus, all analysis and design processes about DC motor can be easily realized by using the unified averaged model which is valid during whole period. Some large-signal variations such as speed and current relating to DC motor, steady-state analysis, large-signal and small-signal transfer functions are easily obtained by using the averaged circuit model
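    The averaging step itself is easy to demonstrate numerically. The sketch below applies state-space averaging to an ideal buck converter rather than the paper's DC-motor drive (illustrative component values, same technique): the two switch-state models are combined with duty-cycle weights, and the averaged model's steady state follows directly:

```python
import numpy as np

# State-space averaged model of an ideal buck converter.
# States: x = [inductor current i_L, capacitor voltage v_C].
# Both switch positions share the same A matrix here; only B differs.
L, C, R, Vin, d = 1e-3, 100e-6, 10.0, 24.0, 0.5

A = np.array([[0.0, -1.0 / L],
              [1.0 / C, -1.0 / (R * C)]])
B_on = np.array([1.0 / L, 0.0])     # switch closed: Vin applied across the inductor
B_off = np.array([0.0, 0.0])        # switch open (ideal diode): source disconnected

B_avg = d * B_on + (1 - d) * B_off  # duty-cycle-weighted average: B = d*B_on + (1-d)*B_off

x_ss = -np.linalg.solve(A, B_avg * Vin)   # steady state of the averaged model: A x + B Vin = 0
print(x_ss)   # i_L = d*Vin/R = 1.2 A, v_C = d*Vin = 12 V
```

Because the averaged model is time invariant, the usual linear-systems toolbox (steady state, transfer functions) applies over the whole switching period, which is the point made in the abstract.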

  4. Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies.

    Directory of Open Access Journals (Sweden)

    Jacinta Chan Phooi M'ng

    Full Text Available The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA') in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA'. The Efficacy Ratio adjusts the AMA' to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA' is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA' are superior to the passive buy-and-hold strategy. Specifically, AMA' outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets.

  5. Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies.

    Science.gov (United States)

    Chan Phooi M'ng, Jacinta; Zainudin, Rozaimah

    2016-01-01

    The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA') in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA'. The Efficacy Ratio adjusts the AMA' to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA' is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA' are superior to the passive buy-and-hold strategy. Specifically, AMA' outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets.
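    The AMA' formula itself is not reproduced in the abstract; as a baseline illustration of the fixed-length moving-average rules it is benchmarked against, here is a toy two-average crossover signal (synthetic prices, hypothetical window lengths):

```python
import numpy as np

def sma(x, n):
    """Simple moving average with window n (valid part only)."""
    return np.convolve(x, np.ones(n) / n, mode="valid")

# Toy crossover rule: be long (1) when the fast average is above the slow one
prices = np.array([1, 1, 1, 2, 3, 4, 5, 5, 5, 4, 3, 2], dtype=float)
fast, slow = sma(prices, 2), sma(prices, 4)
signal = (fast[-len(slow):] > slow).astype(int)   # align series at their final sample
print(signal)   # [1 1 1 1 1 1 0 0 0]
```

The fixed windows are exactly what AMA' replaces: its Efficacy Ratio adapts the effective window to prevailing volatility instead of keeping n constant.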

  6. Quasi-analytical treatment of spatially averaged radiation transfer in complex terrain

    Science.gov (United States)

    Löwe, H.; Helbig, N.

    2012-10-01

    We provide a new quasi-analytical method to compute the subgrid topographic influences on the shortwave radiation fluxes and the effective albedo in complex terrain as required for large-scale meteorological, land surface, or climate models. We investigate radiative transfer in complex terrain via the radiosity equation on isotropic Gaussian random fields. Under controlled approximations we derive expressions for domain-averaged fluxes of direct, diffuse, and terrain radiation and the sky view factor. Domain-averaged quantities can be related to a type of level-crossing probability of the random field, which is approximated by long-standing results developed for acoustic scattering at ocean boundaries. This allows us to express all nonlocal horizon effects in terms of a local terrain parameter, namely, the mean-square slope. Emerging integrals are computed numerically, and fit formulas are given for practical purposes. As an implication of our approach, we provide an expression for the effective albedo of complex terrain in terms of the Sun elevation angle, mean-square slope, the area-averaged surface albedo, and the ratio of atmospheric direct beam to diffuse radiation. For demonstration we compute the decrease of the effective albedo relative to the area-averaged albedo in Switzerland for idealized snow-covered and clear-sky conditions at noon in winter. We find an average decrease of 5.8% and spatial patterns which originate from characteristics of the underlying relief. Limitations and possible generalizations of the method are discussed.

  7. TFE design package final report, TFE Verification Program

    International Nuclear Information System (INIS)

    1994-06-01

    The program objective is to demonstrate the technology readiness of a TFE suitable for use as the basic element in a thermionic reactor with electric power output in the 0.5 to 5.0 MW(e) range and a full-power life of 7 years. A TFE for a megawatt-class system is described. Only six cells are considered for simplicity; a megawatt-class TFE would have many more cells, with the exact number depending on optimization trade studies.

  8. Eighth CW and High Average Power RF Workshop

    CERN Document Server

    2014-01-01

    We are pleased to announce the next Continuous Wave and High Average RF Power Workshop, CWRF2014, to take place at Hotel NH Trieste, Trieste, Italy from 13 to 16 May, 2014. This is the eighth in the CWRF workshop series and will be hosted by Elettra - Sincrotrone Trieste S.C.p.A. (www.elettra.eu). CWRF2014 will provide an opportunity for designers and users of CW and high average power RF systems to meet and interact in a convivial environment to share experiences and ideas on applications which utilize high-power klystrons, gridded tubes, combined solid-state architectures, high-voltage power supplies, high-voltage modulators, high-power combiners, circulators, cavities, power couplers and tuners. New ideas for high-power RF system upgrades and novel ways of RF power generation and distribution will also be discussed. CWRF2014 sessions will start on Tuesday morning and will conclude on Friday lunchtime. A visit to Elettra and FERMI will be organized during the workshop. ORGANIZING COMMITTEE (OC): Al...

  9. Using Bayes Model Averaging for Wind Power Forecasts

    Science.gov (United States)

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is however well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might however be the case that some sites tend to more accurately reflect the total output of the region, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2] the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single sites' wind speeds. However, when the procedure was applied to wind power it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. 
This solves the problem with longer consecutive periods where the input data
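    The BMA predictive density described above is simply a weight-normalized mixture of the members' PDFs; a schematic sketch with Gaussian members (illustrative weights and moments, not the paper's fitted values):

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# (posterior weight, member mean, member std); weights sum to 1
members = [(0.5, 8.0, 1.0), (0.3, 9.5, 1.5), (0.2, 7.0, 0.8)]

def bma_pdf(x):
    """BMA predictive density: weighted average of the members' PDFs."""
    return sum(w * normal_pdf(x, mu, s) for w, mu, s in members)

# The mixture mean is the weight-averaged member mean
mean = sum(w * mu for w, mu, _ in members)
print(mean)   # 8.25
```

In the forecasting setting the weights are the posterior model probabilities estimated over the training period, and the member PDFs are the bias-corrected ensemble-member forecast distributions.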

  10. The Puerto Rico nuclear center reactor conversion project

    Energy Technology Data Exchange (ETDEWEB)

    Brown-Campos, R [Puerto Rico Nuclear Center (Puerto Rico)]

    1974-07-01

    For the purpose of upgrading the control and instrumentation system to meet new AEC requirements, to increase the available neutron flux for experimenters, and to replace burned-out fuel, the Puerto Rico Nuclear Center started a modification program on its old MTR-type, one-megawatt reactor in March 1971. A TRIGA core utilizing the newly developed FLIP fuel, capable of operating at two megawatts with natural convection cooling and with pulsing capabilities, was chosen. The major conversion tasks included: 1. Modification of the bridge, tower and grid plate structures, 2. Modification of the water cooling system (inside the reactor pool), 3. Installation of a larger heat exchanger and cooling tower, 4. Installation of a new instrumentation and control console (including neutron detectors and rod drive mechanisms), 5. Installation of a TRIGA FLIP core. Initial criticality was achieved in January 1972. For the chosen operating configuration the critical mass was 11,522 grams of uranium-235. Core excess reactivity was $7.12 and the total (5) rod worth was $12.06. During the early stages of the startup program to determine the basic core parameters, and while conducting a stepwise increase in power to the design power level of two megawatts, a power fluctuation on all neutron detectors was noticed. It was determined that the power fluctuations started at about 1.4 megawatts and sharply increased as power approached 2 megawatts. Experiments to determine the cause of the problem and to correct the condition were conducted in July and December 1972 and in June 1973. Modifications to the core included changing the fuel pin pitch and the addition of dummy elements in the central region of the core. Final acceptance by AEC Headquarters was requested in October 1973. (author)

  11. Solid state laser driver for an ICF reactor

    International Nuclear Information System (INIS)

    Krupke, W.F.

    1988-01-01

    A conceptual design is presented of the main power amplifier of a multi-beamline, multi-megawatt solid state ICF reactor driver. Useful beam quality and high average power are achieved simultaneously by a proper choice of amplifier geometry. An amplifier beamline consists of a sequence of face-pumped rectangular slab gain elements, oriented at the Brewster angle relative to the beamline axis, and cooled on their large faces by helium gas that is flowing subsonically. The infrared amplifier output radiation is shifted to an appropriately short wavelength; an overall efficiency above 10% (including all flow-cooling input power) is achieved when the amplifiers are pumped by efficient high-power AlGaAs semiconductor laser diode arrays. 11 refs., 3 figs., 7 tabs

  12. Chaotic Universe, Friedmannian on the average 2

    Energy Technology Data Exchange (ETDEWEB)

    Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij]

    1980-11-01

    The cosmological solutions are found for the equations for correlators, describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = n epsilon, the kind of solution depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one as t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor is situated higher than the Friedmannian one, and lower at n > 0.26. The influence of long-wave fluctuation modes that remain finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. Restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.

  13. Averages, Areas and Volumes; Cambridge Conference on School Mathematics Feasibility Study No. 45.

    Science.gov (United States)

    Cambridge Conference on School Mathematics, Newton, MA.

    Presented is an elementary approach to areas, volumes, and other mathematical concepts usually treated in calculus. The approach is based on the idea of average, and this concept is utilized throughout the report. In the beginning the average (arithmetic mean) of a set of numbers is considered and two properties of the average which often simplify…

  14. A time averaged background compensator for Geiger-Mueller counters

    International Nuclear Information System (INIS)

    Bhattacharya, R.C.; Ghosh, P.K.

    1983-01-01

    The GM tube compensator described stores background counts to cancel an equal number of pulses from the measuring channel providing time averaged compensation. The method suits portable instruments. (orig.)

  15. MARD—A moving average rose diagram application for the geosciences

    Science.gov (United States)

    Munro, Mark A.; Blenkinsop, Thomas G.

    2012-12-01

    MARD 1.0 is a computer program for generating smoothed rose diagrams by using a moving average, which is designed for use across the wide range of disciplines encompassed within the Earth Sciences. Available in MATLAB®, Microsoft® Excel and GNU Octave formats, the program is fully compatible with both Microsoft® Windows and Macintosh operating systems. Each version has been implemented in a user-friendly way that requires no prior experience in programming with the software. MARD conducts a moving average smoothing, a form of signal processing low-pass filter, upon the raw circular data according to a set of pre-defined conditions selected by the user. This form of signal processing filter smoothes the angular dataset, emphasising significant circular trends whilst reducing background noise. Customisable parameters include whether the data is uni- or bi-directional, the angular range (or aperture) over which the data is averaged, and whether an unweighted or weighted moving average is to be applied. In addition to the uni- and bi-directional options, the MATLAB® and Octave versions also possess a function for plotting 2-dimensional dips/pitches in a single, lower, hemisphere. The rose diagrams from each version are exportable as one of a selection of common graphical formats. Frequently employed statistical measures that determine the vector mean, mean resultant (or length), circular standard deviation and circular variance are also included. MARD's scope is demonstrated via its application to a variety of datasets within the Earth Sciences.
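    The unweighted smoothing option described above amounts to a wrap-around (circular) moving average over the angular bins; a minimal sketch under assumed conventions (odd aperture in bins, MARD's weighting and bidirectional options omitted):

```python
import numpy as np

def smooth_rose(counts, aperture):
    """Circular unweighted moving average of rose-diagram bin counts (odd aperture)."""
    k = aperture // 2
    # wrap the histogram around 360 degrees before convolving
    padded = np.concatenate([counts[-k:], counts, counts[:k]])
    return np.convolve(padded, np.ones(aperture) / aperture, mode="valid")

# A spike in one 10-degree bin is spread over its angular neighbours,
# and the wrap-around keeps bins 0 and 350 degrees adjacent
counts = np.zeros(36)
counts[0] = 9.0
print(smooth_rose(counts, 3))
```

The circular padding is the essential difference from an ordinary moving average: directions near 0 and 360 degrees must be treated as neighbours.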

  16. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    Science.gov (United States)

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

    In the present work, we demonstrate a novel approach to improve the sensitivity of "out of lab" portable capillary electrophoresis measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorting the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. Contactless conductivity detection was used as a model for the development of the signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on the migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for a sampling frequency of 4.6 Hz and up to 22 times for a sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
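    A schematic version of such an adaptive moving average (the window-growth law here is assumed for illustration; the paper ties the window to analyte migration velocity and detector sampling rate):

```python
import numpy as np

def adaptive_moving_average(signal, windows):
    """Moving average whose (odd) window size varies per sample, e.g. growing with
    migration time so later, lower-frequency peaks receive wider smoothing."""
    out = np.empty(len(signal))
    for i, w in enumerate(windows):
        lo, hi = max(0, i - w // 2), min(len(signal), i + w // 2 + 1)
        out[i] = signal[lo:hi].mean()
    return out

sig = np.ones(100)
windows = 3 + 2 * (np.arange(100) // 25)          # hypothetical window law: grows along the record
print(adaptive_moving_average(sig, windows)[:3])  # a constant input stays constant
```

Matching the window to the local peak width is what avoids the under-/over-smoothing trade-off of a single fixed window.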

  17. Reducing Noise by Repetition: Introduction to Signal Averaging

    Science.gov (United States)

    Hassan, Umer; Anwar, Muhammad Sabieh

    2010-01-01

    This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
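    The √N noise reduction at the heart of signal averaging can be shown in a few lines (synthetic data, not the authors' physiological recordings):

```python
import numpy as np

rng = np.random.default_rng(42)

# Averaging N repetitions of the same deterministic signal with independent
# unit-variance noise reduces the residual noise std to 1/sqrt(N).
t = np.linspace(0.0, 1.0, 500)
clean = np.sin(2 * np.pi * 5 * t)
trials = clean + rng.standard_normal((100, t.size))  # N = 100 noisy repetitions

avg = trials.mean(axis=0)
residual_std = float((avg - clean).std())
print(residual_std)   # close to 1/sqrt(100) = 0.1
```

The requirement is that the signal be repeatable and time-locked across trials; otherwise the average attenuates the signal along with the noise.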

  18. Effect of temporal averaging of meteorological data on predictions of groundwater recharge

    Directory of Open Access Journals (Sweden)

    Batalha Marcia S.

    2018-06-01

    Full Text Available Accurate estimates of infiltration and groundwater recharge are critical for many hydrologic, agricultural and environmental applications. Anticipated climate change in many regions of the world, especially in tropical areas, is expected to increase the frequency of high-intensity, short-duration precipitation events, which in turn will affect the groundwater recharge rate. Estimates of recharge are often obtained using monthly or even annually averaged meteorological time series data. In this study we employed the HYDRUS-1D software package to assess the sensitivity of groundwater recharge calculations to using meteorological time series of different temporal resolutions (i.e., hourly, daily, weekly, monthly and yearly averaged precipitation and potential evaporation rates). Calculations were applied to three sites in Brazil having different climatological conditions: a tropical savanna (the Cerrado), a humid subtropical area (the temperate southern part of Brazil), and a very wet tropical area (Amazonia). To simplify our current analysis, we did not consider any land use effects by ignoring root water uptake. Temporal averaging of meteorological data was found to lead to significant bias in predictions of groundwater recharge, with much greater estimated recharge rates in the case of very uneven temporal rainfall distributions during the year involving distinct wet and dry seasons. For example, at the Cerrado site, using daily averaged data produced recharge rates of up to 9 times greater than using yearly averaged data. In all cases, an increase in the averaging time of meteorological data led to lower estimates of groundwater recharge, especially at sites having coarse-textured soils. Our results show that temporal averaging limits the ability of simulations to predict deep penetration of moisture in response to precipitation, so that water remains in the upper part of the vadose zone subject to upward flow and evaporation.

  19. Extension of the time-average model to Candu refueling schemes involving reshuffling

    International Nuclear Information System (INIS)

    Rouben, Benjamin; Nichita, Eleodor

    2008-01-01

    Candu reactors consist of a horizontal non-pressurized heavy-water-filled vessel penetrated axially by fuel channels, each containing twelve 50-cm-long fuel bundles cooled by pressurized heavy water. Candu reactors are refueled on-line and, as a consequence, the core flux and power distributions change continuously. For design purposes, a 'time-average' model was developed in the 1970's to calculate the average over time of the flux and power distribution and to study the effects of different refueling schemes. The original time-average model only allows treatment of simple push-through refueling schemes whereby fresh fuel is inserted at one end of the channel and irradiated fuel is removed from the other end. With the advent of advanced fuel cycles and new Candu designs, novel refueling schemes may be considered, such as reshuffling discharged fuel from some channels into other channels, to achieve better overall discharge burnup. Such reshuffling schemes cannot be handled by the original time-average model. This paper presents an extension of the time-average model to allow for the treatment of refueling schemes with reshuffling. Equations for the extended model are presented, together with sample results for a simple demonstration case. (authors)

  20. On the average luminosity of electron positron collider and positron-producing energy

    International Nuclear Information System (INIS)

    Xie Jialin

    1985-01-01

    In this paper, the average luminosity of linac injected electron positron collider is investigated from the positron-producing energy point of view. When the energy of the linac injector is fixed to be less than the operating energy of the storage ring, it has been found that there exists a positron-producing energy to give optimum average luminosity. Two cases have been studied, one for an ideal storage ring with no single-beam instability and the other for practical storage ring with fast head-tail instability. The result indicates that there is a positron-producing energy corresponding to the minimum injection time, but this does not correspond to the optimum average luminosity for the practical storage rings. For Beijing Electron Positron Collider (BEPC), the positron-producing energy corresponding to the optimum average luminosity is about one tenth of the total injector energy

  1. The consequences of time averaging for measuring temporal species turnover in the fossil record

    Science.gov (United States)

    Tomašových, Adam; Kidwell, Susan

    2010-05-01

    Modeling time averaging effects with simple simulations allows us to evaluate the magnitude of change in temporal species turnover that is expected to occur in long (paleoecological) time series with fossil assemblages. Distinguishing different modes of metacommunity dynamics (such as neutral, density-dependent, or trade-off dynamics) with time-averaged fossil assemblages requires scaling-up time-averaging effects because the decrease in temporal resolution and the decrease in temporal inter-sample separation (i.e., the two main effects of time averaging) substantially increase community stability relative to assemblages without or with weak time averaging. Large changes in temporal scale that cover centuries to millennia can lead to unprecedented effects on the temporal rate of change in species composition. Temporal variation in species composition monotonically decreases with increasing duration of time averaging in simulated fossil assemblages. Time averaging is also associated with a reduction of species dominance owing to temporal switching in the identity of dominant species. High degrees of time averaging can cause community parameters of local fossil assemblages to converge to those of the metacommunity rather than to those of individual local non-averaged communities. We find that the low variation in species composition observed among mollusk and ostracod subfossil assemblages can be explained by time averaging alone; low temporal resolution and reduced temporal separation among assemblages in time series can thus explain a substantial part of the reduced variation in species composition relative to unscaled predictions of the neutral model (i.e., that species do not differ in birth, death, and immigration rates on a per capita basis). The structure of time-averaged assemblages can thus provide important insights into processes that act over larger temporal scales, such as evolution of niches and dispersal, range-limit dynamics, taxon cycles, and

  2. Applications of resonance-averaged gamma-ray spectroscopy with tailored beams

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1982-01-01

    The use of techniques based on the direct experimental averaging over compound nuclear capturing states has proved valuable for investigations of nuclear structure. The various methods that have been employed are described, with particular emphasis on the transmission filter, or tailored beam technique. The mathematical limitations on averaging imposed by the filter band pass are discussed. It can readily be demonstrated that a combination of filters at different energies can form a powerful method for spin and parity predictions. Several recent examples from the HFBR program are presented

  3. MAIN STAGES SCIENTIFIC AND PRODUCTION MASTERING THE TERRITORY AVERAGE URAL

    Directory of Open Access Journals (Sweden)

    V.S. Bochko

    2006-09-01

    Full Text Available The article considers the formation of the Average Ural as an industrial territory through its scientific study and industrial development. It is shown that the resources of the Urals and the particular living conditions of its population were studied by Russian and foreign scientists in the XVIII-XIX centuries. It is noted that in the XX century there was a transition to a systematic organizational-economic study of the productive forces, society and nature of the Average Ural. More attention is now directed to new problems of the region and to the need for their scientific solution.

  4. Applications of resonance-averaged gamma-ray spectroscopy with tailored beams

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1982-01-01

    The use of techniques based on the direct experimental averaging over compound nuclear capturing states has proved valuable for investigations of nuclear structure. The various methods that have been employed are described, with particular emphasis on the transmission filter, or tailored beam technique. The mathematical limitations on averaging imposed by the filter band pass are discussed. It can readily be demonstrated that a combination of filters at different energies can form a powerful method for spin and parity predictions. Several recent examples from the HFBR program are presented. (author)

  5. The balanced survivor average causal effect.

    Science.gov (United States)

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-05-07

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
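A minimal sketch of the comparison the balanced-SACE estimator performs, with hypothetical data and a user-chosen fraction (this illustrates the core idea of comparing equal fractions of the longest survivors, not the authors' full estimator with its bias expressions and sensitivity analyses):

```python
import numpy as np

def balanced_sace(surv_t, y_t, surv_c, y_c, frac=0.5):
    """Compare mean longitudinal outcomes between equal fractions of the
    longest-surviving patients in the treatment and control arms; no
    monotonicity assumption is required (illustration only)."""
    k_t = max(1, int(round(frac * len(surv_t))))
    k_c = max(1, int(round(frac * len(surv_c))))
    top_t = np.argsort(surv_t)[-k_t:]   # longest survivors, treatment arm
    top_c = np.argsort(surv_c)[-k_c:]   # longest survivors, control arm
    return float(np.mean(np.asarray(y_t)[top_t])
                 - np.mean(np.asarray(y_c)[top_c]))

# Hypothetical survival times and longitudinal outcomes:
effect = balanced_sace([1, 2, 3, 4], [10, 20, 30, 40],
                       [1, 2, 3, 4], [5, 10, 15, 20], frac=0.5)
print(effect)   # 17.5: mean(30, 40) - mean(15, 20)
```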

  6. Historical Data for Average Processing Time Until Hearing Held

    Data.gov (United States)

    Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...

  7. Time-dependent angularly averaged inverse transport

    International Nuclear Information System (INIS)

    Bal, Guillaume; Jollivet, Alexandre

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain

  8. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    Directory of Open Access Journals (Sweden)

    Luis C González

    Full Text Available Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, the number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.

  9. Ultra-low noise miniaturized neural amplifier with hardware averaging.

    Science.gov (United States)

    Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M

    2015-08-01

    Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of delivering the noise improvement of hardware averaging for neural recording with cuff electrodes, and can accommodate the presence of high source impedances that are associated with miniaturized contacts and high channel counts in electrode arrays. This technique can be adopted for other applications where miniaturized and implantable multichannel acquisition systems with ultra-low noise and low power are required.
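The 1/√N noise reduction cited for hardware averaging is easy to check numerically in the idealized case where the N parallel amplifiers contribute independent noise (with a common source resistance part of the noise is correlated across amplifiers, which is why the abstract says "1/√N or less"); the signal and noise levels below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8                     # parallel amplifier copies, as in the abstract
samples = 200_000
signal = 2.0              # arbitrary signal level (hypothetical units)
noise_sd = 1.0            # per-amplifier input-referred noise (hypothetical)

# Each amplifier sees the same signal plus its own independent noise.
channels = signal + rng.normal(0.0, noise_sd, size=(N, samples))
averaged = channels.mean(axis=0)

single_noise = np.std(channels[0])    # ≈ 1.0
avg_noise = np.std(averaged)          # ≈ 1/sqrt(8) ≈ 0.35: the 1/√N gain
print(single_noise, avg_noise)
```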

  10. 75 FR 78157 - Farmer and Fisherman Income Averaging

    Science.gov (United States)

    2010-12-15

    ... to the averaging of farm and fishing income in computing income tax liability. The regulations...: PART 1--INCOME TAXES 0 Paragraph 1. The authority citation for part 1 continues to read in part as... section 1 tax would be increased if one-third of elected farm income were allocated to each year. The...

  11. Averages of $b$-hadron, $c$-hadron, and $\\tau$-lepton properties as of summer 2014

    Energy Technology Data Exchange (ETDEWEB)

    Amhis, Y.; et al.

    2014-12-23

    This article reports world averages of measurements of $b$-hadron, $c$-hadron, and $\\tau$-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through summer 2014. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, $CP$ violation parameters, parameters of semileptonic decays and CKM matrix elements.
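In the zero-correlation limit, the kind of combination performed by the Heavy Flavor Averaging Group reduces to an inverse-variance weighted average (HFAG additionally rescales inputs to common parameters and propagates correlations, which this sketch omits); the two measurements below are hypothetical:

```python
def weighted_average(values, errors):
    """Inverse-variance weighted average of independent measurements,
    the uncorrelated special case of the combinations described above."""
    weights = [1.0 / e ** 2 for e in errors]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, wsum ** -0.5   # combined central value and uncertainty

# Two hypothetical lifetime measurements (ps): 1.52 +- 0.02 and 1.50 +- 0.01.
mean, err = weighted_average([1.52, 1.50], [0.02, 0.01])
print(mean, err)   # ≈ 1.504 +- 0.0089: the more precise input dominates
```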

  12. Self-averaging correlation functions in the mean field theory of spin glasses

    International Nuclear Information System (INIS)

    Mezard, M.; Parisi, G.

    1984-01-01

    In the infinite range spin glass model, we consider the staggered spin σ_λ associated with a given eigenvector of the interaction matrix. We show that the thermal average of σ_λ² is a self-averaging quantity and we compute it

  13. METHODS OF CONTROLLING THE AVERAGE DIAMETER OF THE THREAD WITH ASYMMETRICAL PROFILE

    Directory of Open Access Journals (Sweden)

    L. M. Aliomarov

    2015-01-01

    Full Text Available To machine threaded holes in hard materials used in marine machinery operating at high temperatures, under heavy loads and in aggressive environments, the authors have developed a combined core drill-tap tool with a special cutting scheme and an asymmetric thread profile on the tap section. To control the average diameter of the thread on the tap section of the combined tool, the three-wire method was used, which allows continuous measurement of the average diameter along the entire profile. Deviation of the average diameter from the sample is registered by an inductive sensor and recorded by a recorder. Control schemes for the average diameter of threads with symmetrical and asymmetrical profiles are developed and presented. On the basis of these schemes, formulas are derived for calculating the theoretical position of the wires in the thread profile when measuring the average diameter. Comprehensive research and the introduction of the combined core drill-tap tool into the production of marine engineering, shipbuilding and ship-repair power plant products made of hard materials showed the high efficiency of the proposed technology for machining high-quality small-diameter threaded holes that meet modern requirements.
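For reference, the standard three-wire relation for a symmetric 60° thread is sketched below (the asymmetric profile handled by the combined tool requires modified constants derived from the schemes in the paper; all dimensions here are hypothetical):

```python
import math

def pitch_diameter_60deg(m_over_wires, wire_d, pitch):
    """Average (pitch) diameter from a three-wire measurement of a
    symmetric 60-degree thread: d2 = M - 3*dw + (sqrt(3)/2)*P,
    where M is the measurement over the wires."""
    return m_over_wires - 3 * wire_d + (math.sqrt(3) / 2) * pitch

pitch = 1.5                                            # mm, hypothetical
best_wire = pitch / (2 * math.cos(math.radians(30)))   # ≈ 0.577 * P
d2 = pitch_diameter_60deg(m_over_wires=10.15, wire_d=best_wire, pitch=pitch)
print(best_wire, d2)
```

The "best wire" size touches the flanks exactly at the pitch line, which is what makes continuous measurement along the profile meaningful.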

  14. AN OVERVIEW ON AVERAGE SPEED ENFORCEMENT SYSTEM AND ROAD SAFETY EFFECTS

    OpenAIRE

    ILGAZ, ARZU; SALTAN, MEHMET

    2017-01-01

    Average speed enforcement system is a new intelligent transportation system application that has gained popularity all over the world following Europe and Australia, and is recently being applied in Turkey as well. The main task of the system is measuring the average speeds of motorized vehicles for the purpose of traffic sanctions. A literature survey related to average speed enforcement systems was carried out in this study at an international scale. In addition to providing a comprehensive summ...

  15. SEASONAL AVERAGE FLOW IN RÂUL NEGRU HYDROGRAPHIC BASIN

    Directory of Open Access Journals (Sweden)

    VIGH MELINDA

    2015-03-01

    Full Text Available The Râul Negru hydrographic basin is a well individualised physical-geographical unit inside the Braşov Depression. The flow is controlled by six hydrometric stations placed on the main river and on two important tributaries. The data base for seasonal flow analysis contains the discharges from 1950-2012. The results of the data analysis show that there are significant space-time differences between the multiannual seasonal averages. Some interesting conclusions can be obtained by comparing abundant and scarce periods. Flow analysis was made using seasonal charts Q = f(T). The similarities come from the basin’s relative homogeneity, and the differences from the flow’s evolution and trend. Flow variation is analysed using the variation coefficient. In some cases significant differences in Cv values appear. Also, Cv value trends are analysed according to the basins’ average altitude.
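The variation coefficient used in such flow analyses is simply Cv = standard deviation / mean of a season's discharge series; a minimal sketch with hypothetical discharges (not data from the Râul Negru stations):

```python
def coefficient_of_variation(discharges):
    """Cv = standard deviation / mean of a discharge series
    (population form; the values passed in are hypothetical)."""
    n = len(discharges)
    mean = sum(discharges) / n
    var = sum((q - mean) ** 2 for q in discharges) / n
    return (var ** 0.5) / mean

spring = [12.0, 15.0, 9.0]   # hypothetical seasonal discharges (m3/s)
summer = [4.0, 4.5, 3.5]
print(coefficient_of_variation(spring))  # ≈ 0.204: the more variable season
print(coefficient_of_variation(summer))  # ≈ 0.102
```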

  16. Implications of Methodist clergies' average lifespan and missional ...

    African Journals Online (AJOL)

    2015-06-09

    Jun 9, 2015 ... The author of Genesis 5 paid meticulous attention to the lifespan of several people ... of Southern Africa (MCSA), and to argue that memories of the ... average ages at death were added up and the sum was divided by 12 (which represents the 12 ..... not explicit in how the departed Methodist ministers were.

  17. Computation of the average energy for LXY electrons

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau, A.

    1996-01-01

    The application of an atomic rearrangement model in which we only consider the three shells K, L and M, to compute the counting efficiency for electron capture nuclides, requires an accurate average energy value for LMN electrons. In this report, we illustrate the procedure with two examples, ¹²⁵I and ¹⁰⁹Cd. (Author) 4 refs

  18. Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Coaxial Supersonic Free-Jet Experiment

    Science.gov (United States)

    Baurle, Robert A.; Edwards, Jack R.

    2010-01-01

    Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment was designed to study compressible mixing flow phenomenon under conditions that are representative of those encountered in scramjet combustors. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state-of-the-art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to choice of turbulent Schmidt number. The initial value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was observed when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid Reynolds-averaged/large-eddy simulations also over-predicted the mixing layer spreading rate for the helium case, while under-predicting the rate of mixing when argon was used as the injectant. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered around issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions were suggested as a remedy to this dilemma. 
Second-order turbulence statistics were also compared to their modeled Reynolds-averaged counterparts to evaluate the effectiveness of common turbulence closure models.

  19. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...

  20. Averages of b-hadron, c-hadron, and τ-lepton properties as of summer 2016

    Energy Technology Data Exchange (ETDEWEB)

    Amhis, Y. [LAL, Universite Paris-Sud, CNRS/IN2P3, Orsay (France); Banerjee, S. [University of Louisville, Louisville, KY (United States); Ben-Haim, E. [Universite Paris Diderot, CNRS/IN2P3, LPNHE, Universite Pierre et Marie Curie, Paris (France); Bernlochner, F.; Dingfelder, J.; Duell, S. [University of Bonn, Bonn (Germany); Bozek, A. [H. Niewodniczanski Institute of Nuclear Physics, Krakow (Poland); Bozzi, C. [INFN, Sezione di Ferrara, Ferrara (Italy); Chrzaszcz, M. [H. Niewodniczanski Institute of Nuclear Physics, Krakow (Poland); Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Gersabeck, M. [University of Manchester, School of Physics and Astronomy, Manchester (United Kingdom); Gershon, T. [University of Warwick, Department of Physics, Coventry (United Kingdom); Gerstel, D.; Serrano, J. [Aix Marseille Univ., CNRS/IN2P3, CPPM, Marseille (France); Goldenzweig, P. [Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Harr, R. [Wayne State University, Detroit, MI (United States); Hayasaka, K. [Niigata University, Niigata (Japan); Hayashii, H. [Nara Women' s University, Nara (Japan); Kenzie, M. [Cavendish Laboratory, University of Cambridge, Cambridge (United Kingdom); Kuhr, T. [Ludwig-Maximilians-University, Munich (Germany); Leroy, O. [Aix Marseille Univ., CNRS/IN2P3, CPPM, Marseille (France); Lusiani, A. [Scuola Normale Superiore, Pisa (Italy); INFN, Sezione di Pisa, Pisa (Italy); Lyu, X.R. [University of Chinese Academy of Sciences, Beijing (China); Miyabayashi, K. [Niigata University, Niigata (Japan); Naik, P. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Nanut, T. [J. Stefan Institute, Ljubljana (Slovenia); Oyanguren Campos, A. [Centro Mixto Universidad de Valencia-CSIC, Instituto de Fisica Corpuscular, Valencia (Spain); Patel, M. [Imperial College London, London (United Kingdom); Pedrini, D. [INFN, Sezione di Milano-Bicocca, Milan (Italy); Petric, M. 
[European Organization for Nuclear Research (CERN), Geneva (Switzerland); Rama, M. [INFN, Sezione di Pisa, Pisa (Italy); Roney, M. [University of Victoria, Victoria, BC (Canada); Rotondo, M. [INFN, Laboratori Nazionali di Frascati, Frascati (Italy); Schneider, O. [Institute of Physics, Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne (Switzerland); Schwanda, C. [Institute of High Energy Physics, Vienna (Austria); Schwartz, A.J. [University of Cincinnati, Cincinnati, OH (United States); Shwartz, B. [Budker Institute of Nuclear Physics (SB RAS), Novosibirsk (Russian Federation); Novosibirsk State University, Novosibirsk (Russian Federation); Tesarek, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); Tonelli, D. [INFN, Sezione di Pisa, Pisa (Italy); Trabelsi, K. [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan); SOKENDAI (The Graduate University for Advanced Studies), Hayama (Japan); Urquijo, P. [School of Physics, University of Melbourne, Melbourne, VIC (Australia); Van Kooten, R. [Indiana University, Bloomington, IN (United States); Yelton, J. [University of Florida, Gainesville, FL (US); Zupanc, A. [J. Stefan Institute, Ljubljana (SI); University of Ljubljana, Faculty of Mathematics and Physics, Ljubljana (SI); Collaboration: Heavy Flavor Averaging Group (HFLAV)

    2017-12-15

    This article reports world averages of measurements of b-hadron, c-hadron, and τ-lepton properties obtained by the Heavy Flavor Averaging Group using results available through summer 2016. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, CP violation parameters, parameters of semileptonic decays, and Cabibbo-Kobayashi-Maskawa matrix elements. (orig.)

  1. On the average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1978-03-01

    Over 3000 hours of IMP-6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5 minute averages of B_Z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks than near midnight. The tail field projected in the solar magnetospheric equatorial plane deviates from the X axis due to flaring and solar wind aberration by an angle alpha = -0.9 y_SM - 1.7, where y_SM is in earth radii and alpha is in degrees. After removing these effects the Y component of the tail field is found to depend on interplanetary sector structure. During an away sector the B_Y component of the tail field is on average 0.5 gamma greater than that during a toward sector, a result that is true in both tail lobes and is independent of location across the tail
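The quoted linear fit can be evaluated directly; it is an empirical regression from the IMP-6 statistics, with y_SM in earth radii and the constant term attributable to solar wind aberration:

```python
def tail_deviation_deg(y_sm):
    """Angle (degrees) between the projected tail field and the X axis,
    from the empirical fit quoted above: alpha = -0.9 * y_SM - 1.7."""
    return -0.9 * y_sm - 1.7

print(tail_deviation_deg(0.0))    # -1.7 deg at midnight (aberration term only)
print(tail_deviation_deg(10.0))   # -10.7 deg toward one flank (flaring term)
```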

  2. Application of Bayesian model averaging to measurements of the primordial power spectrum

    International Nuclear Information System (INIS)

    Parkinson, David; Liddle, Andrew R.

    2010-01-01

    Cosmological parameter uncertainties are often stated assuming a particular model, neglecting the model uncertainty, even when Bayesian model selection is unable to identify a conclusive best model. Bayesian model averaging is a method for assessing parameter uncertainties in situations where there is also uncertainty in the underlying model. We apply model averaging to the estimation of the parameters associated with the primordial power spectra of curvature and tensor perturbations. We use CosmoNest and MultiNest to compute the model evidences and posteriors, using cosmic microwave data from WMAP, ACBAR, BOOMERanG, and CBI, plus large-scale structure data from the SDSS DR7. We find that the model-averaged 95% credible interval for the spectral index n_s using all of the data extends down to 0.940, where n_s is specified at a pivot scale of 0.015 Mpc⁻¹. For the tensors, model averaging can tighten the credible upper limit, depending on prior assumptions.
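Mechanically, Bayesian model averaging weights each model's posterior by its evidence, assuming equal prior model probabilities; a toy sketch with hypothetical log-evidences and per-model posterior means (not the values from the paper):

```python
import math

def model_weights(log_evidences):
    """Posterior model probabilities from log-evidences under equal prior
    odds; subtracting the maximum avoids overflow in exp()."""
    m = max(log_evidences)
    w = [math.exp(le - m) for le in log_evidences]
    s = sum(w)
    return [x / s for x in w]

# Hypothetical two-model comparison for a spectral-index estimate:
weights = model_weights([-10.0, -11.0])
means = [0.96, 1.00]                      # hypothetical per-model estimates
averaged = sum(w * mu for w, mu in zip(weights, means))
print(weights, averaged)   # weights ≈ [0.731, 0.269], averaged ≈ 0.971
```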

  3. Application of Depth-Averaged Velocity Profile for Estimation of Longitudinal Dispersion in Rivers

    Directory of Open Access Journals (Sweden)

    Mohammad Givehchi

    2010-01-01

    Full Text Available River bed profiles and depth-averaged velocities are used as basic data in empirical and analytical equations for estimating the longitudinal dispersion coefficient, which has always been a topic of great interest for researchers. The simple model proposed by Maghrebi is capable of predicting the normalized isovel contours in the cross section of rivers and channels as well as the depth-averaged velocity profiles. The required data in Maghrebi’s model are the bed profile, shear stress, and roughness distributions. Comparison of the depth-averaged velocities and longitudinal dispersion coefficients observed in field data with those predicted by Maghrebi’s model revealed that the model has acceptable accuracy in predicting depth-averaged velocity.
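Depth-averaged velocity, the basic input referred to above, is the depth integral of the vertical velocity profile divided by the flow depth; a sketch using trapezoidal integration over a hypothetical measured profile:

```python
def depth_averaged_velocity(z, u):
    """Trapezoidal depth average of a vertical velocity profile:
    U = (1/H) * integral of u(z) dz from bed to surface."""
    area = 0.0
    for i in range(1, len(z)):
        area += 0.5 * (u[i] + u[i - 1]) * (z[i] - z[i - 1])
    return area / (z[-1] - z[0])

# Hypothetical profile: z in metres above the bed, u in m/s.
u_avg = depth_averaged_velocity([0.0, 0.5, 1.0, 2.0], [0.0, 0.6, 0.8, 1.0])
print(u_avg)   # ≈ 0.7 m/s
```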

  4. The average inter-crossing number of equilateral random walks and polygons

    International Nuclear Information System (INIS)

    Diao, Y; Dobay, A; Stasiak, A

    2005-01-01

    In this paper, we study the average inter-crossing number between two random walks and two random polygons in the three-dimensional space. The random walks and polygons in this paper are the so-called equilateral random walks and polygons in which each segment of the walk or polygon is of unit length. We show that the mean average inter-crossing number ICN between two equilateral random walks of the same length n is approximately linear in terms of n and we were able to determine the prefactor of the linear term, which is a = 3ln2/8 ∼ 0.2599. In the case of two random polygons of length n, the mean average inter-crossing number ICN is also linear, but the prefactor of the linear term is different from that of the random walks. These approximations apply when the starting points of the random walks and polygons are of a distance ρ apart and ρ is small compared to n. We propose a fitting model that would capture the theoretical asymptotic behaviour of the mean average ICN for large values of ρ. Our simulation result shows that the model in fact works very well for the entire range of ρ. We also study the mean ICN between two equilateral random walks and polygons of different lengths. An interesting result is that even if one random walk (polygon) has a fixed length, the mean average ICN between the two random walks (polygons) would still approach infinity if the length of the other random walk (polygon) approached infinity. The data provided by our simulations match our theoretical predictions very well
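The prefactor quoted for the linear growth of the mean average inter-crossing number can be checked numerically:

```python
import math

# The mean average inter-crossing number between two equilateral random
# walks of length n grows linearly, ICN ≈ a * n, with prefactor:
a = 3 * math.log(2) / 8
print(a)   # 0.2599..., matching the value quoted in the abstract

def mean_icn(n):
    """Leading-order estimate, valid when the starting-point separation
    rho is small compared to n (per the abstract)."""
    return a * n

print(mean_icn(100))   # ≈ 26 crossings for two length-100 walks
```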

  5. Facial averageness and genetic quality: Testing heritability, genetic correlation with attractiveness, and the paternal age effect.

    Science.gov (United States)

    Lee, Anthony J; Mitchem, Dorian G; Wright, Margaret J; Martin, Nicholas G; Keller, Matthew C; Zietsch, Brendan P

    2016-01-01

    Popular theory suggests that facial averageness is preferred in a partner for genetic benefits to offspring. However, whether facial averageness is associated with genetic quality is yet to be established. Here, we computed an objective measure of facial averageness for a large sample ( N = 1,823) of identical and nonidentical twins and their siblings to test two predictions from the theory that facial averageness reflects genetic quality. First, we use biometrical modelling to estimate the heritability of facial averageness, which is necessary if it reflects genetic quality. We also test for a genetic association between facial averageness and facial attractiveness. Second, we assess whether paternal age at conception (a proxy of mutation load) is associated with facial averageness and facial attractiveness. Our findings are mixed with respect to our hypotheses. While we found that facial averageness does have a genetic component, and a significant phenotypic correlation exists between facial averageness and attractiveness, we did not find a genetic correlation between facial averageness and attractiveness (therefore, we cannot say that the genes that affect facial averageness also affect facial attractiveness) and paternal age at conception was not negatively associated with facial averageness. These findings support some of the previously untested assumptions of the 'genetic benefits' account of facial averageness, but cast doubt on others.
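As background on the biometrical modelling step: the classical first-pass twin estimate of heritability (not the full structural model the authors fit) is Falconer's formula, which doubles the excess trait correlation of identical over nonidentical twins; the correlations below are hypothetical:

```python
def falconer_h2(r_mz, r_dz):
    """Falconer's approximation: h^2 = 2 * (r_MZ - r_DZ), where r_MZ and
    r_DZ are trait correlations in identical and nonidentical twin pairs."""
    return 2.0 * (r_mz - r_dz)

h2 = falconer_h2(0.55, 0.30)   # hypothetical averageness correlations
print(h2)   # ≈ 0.5: about half the variance attributed to additive genes
```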

  6. A Group Neighborhood Average Clock Synchronization Protocol for Wireless Sensor Networks

    Science.gov (United States)

    Lin, Lin; Ma, Shiwei; Ma, Maode

    2014-01-01

    Clock synchronization is a very important issue for the applications of wireless sensor networks. The sensors need to keep a strict clock so that users can know exactly what happens in the monitoring area at the same time. This paper proposes a novel internal distributed clock synchronization solution using group neighborhood averaging. Each sensor node collects the offset and skew rate of its neighbors. Group averages of the offset and skew rate values are calculated instead of using the conventional point-to-point averaging method. The sensor node then returns the compensated values back to the neighbors. The propagation delay is considered and compensated. An analytical treatment of offset and skew compensation is presented. Simulation results validate the effectiveness of the protocol and reveal that it allows sensor networks to quickly establish a consensus clock and maintain a small deviation from the consensus clock. PMID:25120163
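    As a rough illustration of the group-averaging idea (a toy model with simplified topology, uniform weights, and a fixed propagation delay; not the authors' exact protocol), each node can replace its offset and skew estimates with the delay-compensated mean over its neighborhood group every round:

    ```python
    from statistics import mean, pstdev

    def sync_round(offsets, skews, neighbors, delay=0.0):
        """One round of group-neighborhood averaging: every node replaces its
        offset and skew with the mean over its group (itself plus neighbors),
        compensating the propagation delay on received readings."""
        new_offsets, new_skews = [], []
        for i, nbrs in enumerate(neighbors):
            group = [i] + nbrs
            new_offsets.append(mean(offsets[j] - (delay if j != i else 0.0)
                                    for j in group))
            new_skews.append(mean(skews[j] for j in group))
        return new_offsets, new_skews

    # 4-node ring: each node hears its two ring neighbors.
    neighbors = [[1, 3], [0, 2], [1, 3], [0, 2]]
    offsets = [0.0, 4.0, 8.0, 2.0]          # seconds off the ideal clock
    skews = [10e-6, -5e-6, 20e-6, 0.0]      # fractional skew rates
    for _ in range(20):
        offsets, skews = sync_round(offsets, skews, neighbors)
    ```

    With symmetric neighborhoods of equal size, the per-round averaging matrix is doubly stochastic, so the nodes converge to a common consensus clock while the network-wide mean offset is preserved.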

  7. Waif goodbye! Average-size female models promote positive body image and appeal to consumers.

    Science.gov (United States)

    Diedrichs, Phillippa C; Lee, Christina

    2011-10-01

    Despite consensus that exposure to media images of thin fashion models is associated with poor body image and disordered eating behaviours, few attempts have been made to enact change in the media. This study investigated an effective alternative to current media imagery by exploring the advertising effectiveness of average-size female fashion models and their impact on the body image of both women and men. A sample of 171 women and 120 men was assigned to one of three advertisement conditions: no models, thin models, and average-size models. Women and men rated average-size models as equally effective in advertisements as thin models and no models. For women with average and high levels of internalisation of cultural beauty ideals, exposure to average-size female models was associated with a significantly more positive body image state in comparison to exposure to thin models and no models. For men reporting high levels of internalisation, exposure to average-size models was also associated with a more positive body image state in comparison to viewing thin models. These findings suggest that average-size female models can promote positive body image and appeal to consumers.

  8. arXiv Averaged Energy Conditions and Bouncing Universes

    CERN Document Server

    Giovannini, Massimo

    2017-11-16

    The dynamics of bouncing universes is characterized by violating certain coordinate-invariant restrictions on the total energy-momentum tensor, customarily referred to as energy conditions. Although there could be epochs in which the null energy condition is locally violated, it may perhaps be enforced in an averaged sense. Explicit examples of this possibility are investigated in different frameworks.

  9. Impact of connected vehicle guidance information on network-wide average travel time

    Directory of Open Access Journals (Sweden)

    Jiangfeng Wang

    2016-12-01

    Full Text Available With the emergence of connected vehicle technologies, the potential positive impact of connected vehicle guidance on mobility, through data exchange among vehicles, infrastructure, and mobile devices, has become a research hotspot. This study focuses on micro-modelling and quantitatively evaluating the impact of connected vehicle guidance on network-wide travel time by introducing various affecting factors. To evaluate the benefits of connected vehicle guidance, a simulation architecture based on one engine is proposed to represent the connected vehicle-enabled virtual world, and a connected vehicle route guidance scenario is established through the development of a communication agent and intelligent transportation systems agents using the connected vehicle application programming interface, considering communication properties such as path loss and transmission power. The impact of connected vehicle guidance on network-wide travel time is analyzed by comparison with non-connected vehicle guidance under different market penetration rates, following rates, and congestion levels. The simulation results show that average network-wide travel time with connected vehicle guidance is significantly lower than without it: average network-wide travel time under non-connected vehicle guidance is 42.23% higher than under connected vehicle guidance, and average travel time variability (represented by the coefficient of variation) increases as the travel time increases. Other vital findings include that higher penetration rates and following rates generate bigger savings in average network-wide travel time. The savings in average network-wide travel time increase from 17% to 38% across the different congestion levels, and the savings in average travel time under more serious congestion show a more obvious improvement for the same penetration rate or following rate.
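    As a back-of-envelope illustration of how penetration rate drives network-wide savings (the trip times and the linear mixing model below are hypothetical, not taken from the study's microsimulation):

    ```python
    def fleet_average_time(unguided_t, guided_t, penetration):
        """Fleet-average travel time when a fraction `penetration` of
        vehicles receives route guidance (toy linear mixing model)."""
        return penetration * guided_t + (1.0 - penetration) * unguided_t

    def savings(unguided_t, guided_t, penetration):
        """Relative reduction in average travel time versus zero penetration."""
        return 1.0 - fleet_average_time(unguided_t, guided_t, penetration) / unguided_t

    # Hypothetical trip times (s): guidance reroutes vehicles around congestion.
    for p in (0.2, 0.5, 0.8):
        print(f"penetration {p:.0%}: savings {savings(600.0, 420.0, p):.1%}")
    ```

    In this toy model the savings scale linearly with penetration; the study's agent-based results are richer, since guided vehicles also relieve congestion for the unguided ones.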

  10. Average level of satisfaction in 10 European countries: explanation of differences

    OpenAIRE

    Veenhoven, Ruut

    1996-01-01

    Surveys in 10 European nations assessed satisfaction with life-as-a-whole and satisfaction with three life-domains (finances, housing, social contacts). Average satisfaction differs markedly across countries. Both satisfaction with life-as-a-whole and satisfaction with life-domains are highest in North-Western Europe, medium in Southern Europe and lowest in the East-European nations. Cultural measurement bias is unlikely to be involved. The country differences in average ...

  11. A Capital Mistake? The Neglected Effect of Immigration on Average Wages

    OpenAIRE

    Declan Trott

    2011-01-01

    Much recent literature on the wage effects of immigration assumes that the return to capital, and therefore the average wage, is unaffected in the long run. If immigration is modelled as a continuous flow rather than a one-off shock, this result does not necessarily hold. A simple calibration with pre-crisis US immigration rates gives a reduction in average wages of 5%, larger than most estimates of its effect on relative wages.

  12. 49 CFR 525.11 - Termination of exemption; amendment of alternative average fuel economy standard.

    Science.gov (United States)

    2010-10-01

    ... average fuel economy standard. 525.11 Section 525.11 Transportation Other Regulations Relating to... EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.11 Termination of exemption; amendment of alternative average fuel economy standard. (a) Any exemption granted under this part for an affected model year does...

  13. Research On Stabilization Of Radioactive Waste By Method Of SYNROCK Ceramic

    International Nuclear Information System (INIS)

    Nguyen Hoang Lan; Nguyen Ba Tien; Vuong Huu Anh; Nguyen An Thai

    2014-01-01

    Separate phases from the SYNROC polyphase ceramic were investigated in order to fabricate complete SYNROC, and the distribution of a stable isotope (Sr) in the SYNROC matrix was surveyed together with leaching tests. The experimental conditions were: 13.5 × 11 mm pressed SYNROC pellets at a pressure of 2.5-3 tons/cm², sintering temperature t_tk = 1250 °C, heating rate v_t = 20 °C/min with a 2-hour hold at 1250 °C, and an Sr loading of 7 mol%. The results showed that the pellets contain three phases, perovskite CaTiO₃, zirconolite CaZrTi₂O₇ and hollandite BaAl₂Ti₆O₁₆, with an average density of 4.1 g/cm³ and leaching rates R (g/m²·d) of 10⁻⁶ and 10⁻⁵ for Ti and Sr, respectively. (author)

  14. A collisional-radiative average atom model for hot plasmas

    International Nuclear Information System (INIS)

    Rozsnyai, B.F.

    1996-01-01

    A collisional-radiative 'average atom' (AA) model is presented for the calculation of opacities of hot plasmas not in local thermodynamic equilibrium (LTE). The electron impact and radiative rate constants are calculated using the dipole oscillator strengths of the average atom. A key element of the model is the photon escape probability, which at present is calculated for a semi-infinite slab. The Fermi statistics render the rate equation for the AA level occupancies nonlinear, which requires iterating until the steady-state AA level occupancies are found. Detailed electronic configurations are built into the model after the self-consistent non-LTE AA state is found. The model shows a continuous transition from the non-LTE to the LTE state depending on the optical thickness of the plasma. 22 refs., 13 figs., 1 tab
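    The nonlinearity introduced by Fermi statistics can be seen in a toy two-level analogue (our own simplified illustration, not the paper's model): the Pauli blocking factors (1 − n) make the balance condition quadratic in the occupancy, so the steady state is found by iterating the rate equation:

    ```python
    def steady_state_upper_occupancy(r_up, r_down, dt=0.01, tol=1e-12):
        """Relax dn/dt = (rate up) - (rate down) to its steady state.
        One electron shared by two levels, so n_lower = 1 - n_upper;
        the Pauli blocking factors (1 - n) make the equation nonlinear."""
        n = 0.0
        for _ in range(1_000_000):
            up = r_up * (1.0 - n) * (1.0 - n)    # lower occupancy × upper blocking
            down = r_down * n * n                # upper occupancy × lower blocking
            dn = dt * (up - down)
            n += dn
            if abs(dn) < tol:
                break
        return n

    # Analytic steady state for comparison:
    # r_up (1-n)^2 = r_down n^2  =>  n = 1 / (1 + sqrt(r_down / r_up))
    ```

    For r_up = 1 and r_down = 4 the analytic fixed point is n = 1/3; the iteration converges there from an empty upper level.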

  15. Database of average-power damage thresholds at 1064 nm

    International Nuclear Information System (INIS)

    Rainer, F.; Hildum, E.A.; Milam, D.

    1987-01-01

    We have completed a database of average-power, laser-induced damage thresholds at 1064 nm on a variety of materials. Measurements were made with a newly constructed laser to provide design input for moderate and high average-power laser projects. The measurements were conducted with 16-ns pulses at pulse-repetition frequencies ranging from 6 to 120 Hz. Samples were typically irradiated for times ranging from a fraction of a second up to 5 minutes (36,000 shots). We tested seven categories of samples, which included antireflective coatings, high reflectors, polarizers, single and multiple layers of the same material, bare and overcoated metal surfaces, bare polished surfaces, and bulk materials. The measured damage thresholds ranged from 2 J/cm² for some metals to >46 J/cm² for a bare polished glass substrate. 4 refs., 7 figs., 1 tab

  16. Edgeworth expansion for the pre-averaging estimator

    DEFF Research Database (Denmark)

    Podolskij, Mark; Veliyev, Bezirgen; Yoshida, Nakahiro

    In this paper, we study the Edgeworth expansion for a pre-averaging estimator of quadratic variation in the framework of continuous diffusion models observed with noise. More specifically, we obtain a second-order expansion for the joint density of the estimators of quadratic variation and its asymptotic variance. Our approach is based on martingale embedding, Malliavin calculus and stable central limit theorems for continuous diffusions. Moreover, we derive the density expansion for the studentized statistic, which might be applied to construct asymptotic confidence regions.
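    A stripped-down version of the pre-averaging idea (our own sketch with the common weight g(x) = min(x, 1 − x); the explicit noise-bias correction of the full estimator is omitted) smooths the noisy observations over windows of length k before squaring:

    ```python
    import random

    def preaveraged_qv(y, k):
        """Pre-averaging sketch: weight the increments of the noisy
        log-price y with g(j/k) = min(j/k, 1 - j/k) over windows of
        length k, then sum the squared pre-averages and undo the weight
        normalisation.  The noise-bias correction term is omitted."""
        g = [min(j / k, 1.0 - j / k) for j in range(1, k)]
        psi2 = sum(w * w for w in g) / k
        total = 0.0
        for i in range(len(y) - k):
            bar = sum(g[j] * (y[i + j + 1] - y[i + j]) for j in range(k - 1))
            total += bar * bar
        return total / (psi2 * k)

    # Noise-free Brownian path with quadratic variation 1 over [0, 1]:
    rng = random.Random(1)
    n = 20000
    y = [0.0]
    for _ in range(n):
        y.append(y[-1] + rng.gauss(0.0, (1.0 / n) ** 0.5))
    estimate = preaveraged_qv(y, k=50)
    ```

    On this noise-free path the estimate should be close to the true quadratic variation of 1; with microstructure noise the squared pre-averages pick up an additional noise-variance term, which the full estimator subtracts explicitly.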

  17. The average-shadowing property and topological ergodicity for flows

    International Nuclear Information System (INIS)

    Gu Rongbao; Guo Wenjing

    2005-01-01

    In this paper, the transitive property of a flow without sensitive dependence on initial conditions is studied, and it is shown that a Lyapunov stable flow with the average-shadowing property on a compact metric space is topologically ergodic.

  18. Fractional averaging of repetitive waveforms induced by self-imaging effects

    Science.gov (United States)

    Romero Cortés, Luis; Maram, Reza; Azaña, José

    2015-10-01

    We report the theoretical prediction and experimental observation of averaging of stochastic events with a result equivalent to calculating the arithmetic mean (or sum) of a rational number of realizations of the process under test, not necessarily limited to an integer number of realizations as discrete statistical theory dictates. This concept is enabled by a passive amplification process induced by self-imaging (Talbot) effects. In the specific implementation reported here, a combined spectral-temporal Talbot operation is shown to achieve undistorted, lossless repetition-rate division of a periodic train of noisy waveforms by a rational factor, leading to local amplification and the associated averaging process by the fractional rate-division factor.

  19. Average expansion rate and light propagation in a cosmological Tardis spacetime

    Energy Technology Data Exchange (ETDEWEB)

    Lavinto, Mikko; Räsänen, Syksy [Department of Physics, University of Helsinki, and Helsinki Institute of Physics, P.O. Box 64, FIN-00014 University of Helsinki (Finland); Szybka, Sebastian J., E-mail: mikko.lavinto@helsinki.fi, E-mail: syksy.rasanen@iki.fi, E-mail: sebastian.szybka@uj.edu.pl [Astronomical Observatory, Jagellonian University, Orla 171, 30-244 Kraków (Poland)

    2013-12-01

    We construct the first exact statistically homogeneous and isotropic cosmological solution in which inhomogeneity has a significant effect on the expansion rate. The universe is modelled as a Swiss Cheese, with a dust FRW background and inhomogeneous holes. We show that if the holes are described by the quasispherical Szekeres solution, their average expansion rate is close to the background under certain rather general conditions. We specialise to spherically symmetric holes and violate one of these conditions. As a result, the average expansion rate at late times grows relative to the background, i.e. backreaction is significant. The holes fit smoothly into the background, but are larger on the inside than a corresponding background domain: we call them Tardis regions. We study light propagation, find the effective equations of state and consider the relation of the spatially averaged expansion rate to the redshift and the angular diameter distance.

  20. A Group Neighborhood Average Clock Synchronization Protocol for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Lin Lin

    2014-08-01

    Full Text Available Clock synchronization is a very important issue for the applications of wireless sensor networks. The sensors need to keep a strict clock so that users can know exactly what happens in the monitoring area at the same time. This paper proposes a novel internal distributed clock synchronization solution using group neighborhood averaging. Each sensor node collects the offset and skew rate of its neighbors. Group averages of the offset and skew rate values are calculated instead of using the conventional point-to-point averaging method. The sensor node then returns the compensated values back to the neighbors. The propagation delay is considered and compensated. An analytical treatment of offset and skew compensation is presented. Simulation results validate the effectiveness of the protocol and reveal that it allows sensor networks to quickly establish a consensus clock and maintain a small deviation from the consensus clock.