WorldWideScience

Sample records for maximum average current

  1. Maximum time-dependent space-charge limited diode currents

    Energy Technology Data Exchange (ETDEWEB)

    Griswold, M. E. [Tri Alpha Energy, Inc., Rancho Santa Margarita, California 92688 (United States); Fisch, N. J. [Princeton Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 (United States)

    2016-01-15

Recent papers claim that a one-dimensional (1D) diode with a time-varying voltage drop can transmit current densities that exceed the Child-Langmuir (CL) limit on average, apparently contradicting a previous conjecture that there is a hard limit on the average current density across any 1D diode, as t → ∞, that is equal to the CL limit. However, these claims rest on a different definition of the CL limit, namely, a comparison between the time-averaged diode current and the adiabatic average of the expression for the stationary CL limit. If the current were considered as a function of the maximum applied voltage, rather than the average applied voltage, then the original conjecture would not have been refuted.
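As a numerical aside (not from the paper), the distinction between the two definitions can be made concrete: for a sinusoidally modulated gap voltage, the adiabatic average of the stationary Child-Langmuir expression stays strictly below the CL limit evaluated at the maximum applied voltage. A minimal sketch, with an assumed 1 mm gap and 1 kV peak voltage:

```python
import math

EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19  # elementary charge, C
M_E = 9.1093837015e-31      # electron mass, kg

def child_langmuir(voltage_v, gap_m):
    """Stationary Child-Langmuir current-density limit (A/m^2)."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) \
        * voltage_v ** 1.5 / gap_m ** 2

# Illustrative time-varying drive: V(t) = Vmax * sin^2(pi t/T), gap d = 1 mm.
d = 1e-3
vmax = 1e3
n = 10_000
samples = [vmax * math.sin(math.pi * k / n) ** 2 for k in range(n)]

# Adiabatic average of the stationary CL expression over one period...
j_avg_of_cl = sum(child_langmuir(v, d) for v in samples) / n
# ...versus the CL limit evaluated at the *maximum* applied voltage.
j_cl_at_max = child_langmuir(vmax, d)

print(j_avg_of_cl < j_cl_at_max)  # True: referencing Vmax keeps the bound intact
```

For this drive the ratio of the two quantities is the mean of sin³, i.e. 4/(3π) ≈ 0.42, so a diode that beats the *adiabatic average* can still sit below the limit defined at the maximum voltage.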

  2. An implementation of the maximum-caliber principle by replica-averaged time-resolved restrained simulations.

    Science.gov (United States)

    Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo

    2018-05-14

Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data in molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.

  3. Stochastic modelling of the monthly average maximum and minimum temperature patterns in India 1981-2015

    Science.gov (United States)

    Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.

    2018-04-01

The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all the years, while this is not true for the minimum temperature series, so the two series are modelled separately. The possible SARIMA model has been chosen by observing the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)12 model is selected for the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are obtained using the maximum-likelihood method with the help of the standard errors of the residuals. The adequacy of the selected model is determined using correlation diagnostic checking through the ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals, and using normality diagnostic checking through the kernel and normal density curves of the histogram and the Q-Q plot. Finally, monthly maximum and minimum temperature patterns of India are forecast for the next 3 years with the help of the selected model.
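To make the selected model concrete, the following sketch (not the authors' code) simulates a SARIMA(1,0,0)×(0,1,1)₁₂ process from scratch: with the seasonal difference w_t = x_t − x_{t−12} (D = 1, s = 12), the model reads w_t = φ·w_{t−1} + ε_t + Θ·ε_{t−12}. The parameter values and series length are illustrative assumptions.

```python
import random

# Assumed parameters for illustration only.
phi, Theta, s = 0.6, -0.4, 12
random.seed(42)

n = 600  # 50 years of monthly data
eps = [random.gauss(0.0, 1.0) for _ in range(n)]

# Seasonally differenced series: w_t = phi*w_{t-1} + eps_t + Theta*eps_{t-12}
w = [0.0] * n
for t in range(1, n):
    w[t] = phi * w[t - 1] + eps[t] + (Theta * eps[t - s] if t >= s else 0.0)

# Integrate the seasonal difference back to the level series x_t.
x = [0.0] * n
for t in range(s, n):
    x[t] = x[t - s] + w[t]

# The lag-1 autocorrelation of the differenced series should be close to phi.
mean_w = sum(w[s:]) / (n - s)
num = sum((w[t] - mean_w) * (w[t - 1] - mean_w) for t in range(s + 1, n))
den = sum((w[t] - mean_w) ** 2 for t in range(s, n))
rho1 = num / den
print(round(rho1, 2))
```

In practice one would fit such a model with a statistics package (e.g. statsmodels' `SARIMAX` with `order=(1, 0, 0)` and `seasonal_order=(0, 1, 1, 12)`) rather than hand-rolling the estimation.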

  4. Influence of epoxy resin as encapsulation material of silicon photovoltaic cells on maximum current

    Directory of Open Access Journals (Sweden)

    Acevedo-Gómez David

    2017-01-01

Full Text Available This work presents an analysis of how the performance of silicon photovoltaic cells is influenced by the use of epoxy resin as an encapsulation material with flat roughness. The effect of encapsulation on the current at maximum power of a mono-crystalline cell was tested indoors in a solar simulator bench at 1000 W/m² and AM1.5G. The results show that implementation of a flat-roughness layer onto the cell surface reduces the maximum current, inducing on average 2.7% less power with respect to the cell before any encapsulation. The power losses and the consequent lower energy production are explained by light absorption and reflection in the resin and by partial neutralization of the anti-reflective coating.

  5. A maximum power point tracking for photovoltaic-SPE system using a maximum current controller

    Energy Technology Data Exchange (ETDEWEB)

    Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)

    2003-02-01

Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative control of maximum power point tracking (MPPT) in the PV-SPE system based on maximum current searching methods has been designed and implemented. Based on the voltage-current characteristics and theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side will simultaneously track the maximum power point of the photovoltaic panel. The method uses a proportional-integral controller to control the duty factor of the DC-DC converter with a pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)
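The key equivalence (maximum converter output current implies the PV maximum power point when the SPE voltage is nearly constant) can be illustrated with a toy model. The PV curve, the duty-to-voltage mapping, and all numbers below are assumptions for illustration, not the authors' system:

```python
# Toy illustration: with the SPE cell clamping the converter output voltage
# V_OUT to a near-constant value, maximizing converter output current is the
# same as maximizing PV power, because P_out = V_OUT * I_out and, for a
# lossless converter, P_out = P_pv.
V_OUT = 2.0            # assumed near-constant SPE electrolysis voltage, V
ISC, VOC = 5.0, 20.0   # assumed PV short-circuit current / open-circuit voltage

def pv_current(v):
    """Toy PV I-V curve with a sharp knee near VOC."""
    return max(0.0, ISC * (1.0 - (v / VOC) ** 8))

def pv_power(duty):
    v_pv = VOC * (1.0 - duty)  # assumed mapping from duty factor to PV voltage
    return v_pv * pv_current(v_pv)

def output_current(duty):
    return pv_power(duty) / V_OUT  # lossless converter: P_out = P_pv

duties = [k / 1000.0 for k in range(1, 1000)]
best_by_current = max(duties, key=output_current)
best_by_power = max(duties, key=pv_power)
print(best_by_current == best_by_power)  # True
```

Because output current is output power scaled by a positive constant, the duty factor that maximizes one maximizes the other, which is why a simple PI loop on the converter current suffices for MPPT in this configuration.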

  6. High-Average, High-Peak Current Injector Design

    CERN Document Server

    Biedron, S G; Virgo, M

    2005-01-01

There is increasing interest in high-average-power (>100 kW), μm-range FELs. These machines require high peak current (~1 kA), modest transverse emittance, and beam energies of ~100 MeV. High average currents (~1 A) place additional constraints on the design of the injector. We present a design for an injector intended to produce the required peak currents at the injector, eliminating the need for magnetic compression within the linac. This reduces the potential for beam quality degradation due to coherent synchrotron radiation (CSR) and space-charge effects within magnetic chicanes.

  7. Maximum stress estimation model for multi-span waler beams with deflections at the supports using average strains.

    Science.gov (United States)

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-03-30

The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  8. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    Directory of Open Access Journals (Sweden)

    Sung Woo Park

    2015-03-01

Full Text Available The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  9. Effect of current on the maximum possible reward.

    Science.gov (United States)

    Gallistel, C R; Leon, M; Waraczynski, M; Hanau, M S

    1991-12-01

    Using a 2-lever choice paradigm with concurrent variable interval schedules of reward, it was found that when pulse frequency is increased, the preference-determining rewarding effect of 0.5-s trains of brief cathodal pulses delivered to the medial forebrain bundle of the rat saturates (stops increasing) at values ranging from 200 to 631 pulses/s (pps). Raising the current lowered the saturation frequency, which confirms earlier, more extensive findings showing that the rewarding effect of short trains saturates at pulse frequencies that vary from less than 100 pps to more than 800 pps, depending on the current. It was also found that the maximum possible reward--the magnitude of the reward at or beyond the saturation pulse frequency--increases with increasing current. Thus, increasing the current reduces the saturation frequency but increases the subjective magnitude of the maximum possible reward.

  10. Accurate computations of monthly average daily extraterrestrial irradiation and the maximum possible sunshine duration

    International Nuclear Information System (INIS)

    Jain, P.C.

    1985-12-01

The monthly average daily values of the extraterrestrial irradiation on a horizontal plane and the maximum possible sunshine duration are two important parameters that are frequently needed in various solar energy applications. These are generally calculated by solar scientists and engineers each time they are needed, often by using approximate short-cut methods. Using the accurate analytical expressions developed by Spencer for the declination and the eccentricity correction factor, computations for these parameters have been made for all latitude values from 90 deg. N to 90 deg. S at intervals of 1 deg. and are presented in a convenient tabular form. Monthly average daily values of the maximum possible sunshine duration as recorded on a Campbell-Stokes sunshine recorder are also computed and presented. These tables avoid the need for repetitive and approximate calculations and serve as a useful ready reference for providing accurate values to solar energy scientists and engineers.
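A minimal sketch of the two tabulated quantities, using Cooper's simpler declination formula in place of Spencer's series (so values will differ slightly from the paper's tables); the solar-constant and eccentricity expressions below are the standard textbook forms, assumed rather than taken from the paper:

```python
import math

GSC = 1367.0  # solar constant, W/m^2

def declination_deg(day_of_year):
    # Cooper's approximation; the paper uses Spencer's more accurate series.
    return 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))

def _sunset_hour_angle(lat_deg, day_of_year):
    d = math.radians(declination_deg(day_of_year))
    phi = math.radians(lat_deg)
    # Clamp for polar day/night, where -tan(phi)*tan(d) leaves [-1, 1].
    return math.acos(max(-1.0, min(1.0, -math.tan(phi) * math.tan(d))))

def max_sunshine_hours(lat_deg, day_of_year):
    """Maximum possible sunshine duration N = (2/15) * omega_s (in degrees)."""
    return 2.0 * math.degrees(_sunset_hour_angle(lat_deg, day_of_year)) / 15.0

def h0_daily(lat_deg, day_of_year):
    """Daily extraterrestrial irradiation on a horizontal plane, J/m^2."""
    d = math.radians(declination_deg(day_of_year))
    phi = math.radians(lat_deg)
    ws = _sunset_hour_angle(lat_deg, day_of_year)
    e0 = 1.0 + 0.033 * math.cos(math.radians(360.0 * day_of_year / 365.0))
    return (24 * 3600 / math.pi) * GSC * e0 * (
        math.cos(phi) * math.cos(d) * math.sin(ws)
        + ws * math.sin(phi) * math.sin(d))

print(round(max_sunshine_hours(0.0, 80), 1))  # 12.0 h at the equator
print(h0_daily(0.0, 80))                      # ~3.8e7 J/m^2 near the equinox
```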

  11. An Invariance Property for the Maximum Likelihood Estimator of the Parameters of a Gaussian Moving Average Process

    OpenAIRE

    Godolphin, E. J.

    1980-01-01

    It is shown that the estimation procedure of Walker leads to estimates of the parameters of a Gaussian moving average process which are asymptotically equivalent to the maximum likelihood estimates proposed by Whittle and represented by Godolphin.

  12. Current opinion about maximum entropy methods in Moessbauer spectroscopy

    International Nuclear Information System (INIS)

    Szymanski, K

    2009-01-01

A current opinion about maximum entropy methods in Moessbauer spectroscopy is presented. The most important advantage offered by the method is correct data processing under circumstances of incomplete information. Its disadvantages are the sophisticated algorithm and the need to adapt it to specific problems.

  13. Maximum power point tracking for PV systems under partial shading conditions using current sweeping

    International Nuclear Information System (INIS)

    Tsang, K.M.; Chan, W.L.

    2015-01-01

Highlights: • A novel approach for tracking the maximum power point of photovoltaic systems. • Able to handle both uniform insolation and partial shading conditions. • Maximum power point tracking based on current sweeping. - Abstract: Partial shading on photovoltaic (PV) arrays causes multiple peaks on the output power–voltage characteristic curve, and local searching techniques such as the perturb and observe (P&O) method can easily fail in searching for the global maximum. Moreover, existing global searching techniques are still not very satisfactory in terms of speed and implementation complexity. In this paper, a fast global maximum power point tracking (MPPT) method that uses current sweeping for photovoltaic arrays under partial shading conditions is proposed. Unlike conventional approaches, the proposed method is current based rather than voltage based. The initial maximum power point is derived from a current sweeping test, and the maximum power point can then be refined by a finer local search. The speed of the global search is mainly governed by the apparent time constant of the PV array and the generation of a fast current sweeping test. The fast current sweeping test can easily be realized by a DC/DC boost converter with a very fast current control loop. Experimental results are included to demonstrate the effectiveness of the proposed global searching scheme.

  14. RTS noise and dark current white defects reduction using selective averaging based on a multi-aperture system.

    Science.gov (United States)

    Zhang, Bo; Kagawa, Keiichiro; Takasawa, Taishi; Seo, Min Woong; Yasutomi, Keita; Kawahito, Shoji

    2014-01-16

In extremely low-light conditions, random telegraph signal (RTS) noise and dark current white defects become visible. In this paper, a multi-aperture imaging system and a selective averaging method, which removes the RTS noise and the dark current white defects by minimizing the synthetic sensor noise at every pixel, are proposed. In the multi-aperture imaging system, a very small synthetic F-number, much smaller than 1.0, is achieved by increasing the optical gain with multiple lenses. It is verified by simulation that the effective noise normalized by optical gain at the peak of the noise histogram is reduced from 1.38e⁻ to 0.48e⁻ in a 3 × 3-aperture system using low-noise CMOS image sensors based on folding-integration and cyclic column ADCs. In the experiment, a prototype 3 × 3-aperture camera, where each aperture has 200 × 200 pixels and an imaging lens with a focal length of 3.0 mm and an F-number of 3.0, was developed. Under a low-light condition in which the maximum average signal is 11e⁻ per aperture, the RTS noise and dark current white defects are removed and the peak signal-to-noise ratio (PSNR) of the image is increased by 6.3 dB.
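A minimal reconstruction of the selective-averaging idea for a single pixel: each of the 9 apertures contributes one reading with a known per-pixel noise variance (RTS-affected pixels have large variance), and averaging the k lowest-variance readings gives a combined variance of sum(var)/k², so choosing the k that minimizes it naturally drops RTS/white-defect outliers. The readings, variances, and selection rule below are illustrative assumptions, not the authors' algorithm:

```python
def selective_average(readings, variances):
    """Average the k lowest-variance readings, with k chosen to minimize
    the variance of the mean, sum(var_i)/k^2."""
    order = sorted(range(len(readings)), key=lambda i: variances[i])
    best_k, best_var = 1, variances[order[0]]
    running = 0.0
    for k in range(1, len(order) + 1):
        running += variances[order[k - 1]]
        v = running / k ** 2
        if v < best_var:
            best_k, best_var = k, v
    chosen = order[:best_k]
    return sum(readings[i] for i in chosen) / best_k, best_var

# One pixel seen through a 3x3 aperture array; aperture 4 has an RTS defect.
readings = [10.2, 9.8, 10.1, 10.0, 25.0, 9.9, 10.3, 10.0, 9.7]
variances = [1.0, 1.0, 1.0, 1.0, 50.0, 1.0, 1.0, 1.0, 1.0]
value, var = selective_average(readings, variances)
print(value, var)  # 10.0 0.125: the defect reading is excluded automatically
```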

  15. Current control of PMSM based on maximum torque control reference frame

    Science.gov (United States)

    Ohnuma, Takumi

    2017-07-01

This study presents a new method of current control of PMSMs (Permanent Magnet Synchronous Motors) based on a maximum torque control reference frame, which is suitable for high-performance control of PMSMs. As environmental and energy concerns grow, PMSMs, a class of AC motors, are becoming popular because of their high efficiency and high torque density in various applications, such as electric vehicles, trains, industrial machines, and home appliances. To use PMSMs efficiently, proper current control is necessary. In general, a rotational coordinate system synchronized with the rotor is used for the current control of PMSMs. In the rotating reference frame, current control is easier because the currents can be expressed as direct currents in the controller. On the other hand, although PMSMs are efficient and high-density, their torque characteristics are non-linear and complex. Therefore, a complicated control system is required to capture the relation between torque and current, even when the rotating reference frame is adopted. The maximum torque control reference frame provides a simpler way to control the currents efficiently while taking the torque characteristics of the PMSMs into consideration.

  16. Table for monthly average daily extraterrestrial irradiation on horizontal surface and the maximum possible sunshine duration

    International Nuclear Information System (INIS)

    Jain, P.C.

    1984-01-01

The monthly average daily values of the extraterrestrial irradiation on a horizontal surface (H₀) and the maximum possible sunshine duration are two important parameters that are frequently needed in various solar energy applications. These are generally calculated by scientists each time they are needed, often by using approximate short-cut methods. Computations for these values have been made once and for all for latitude values of 60 deg. N to 60 deg. S at intervals of 1 deg. and are presented in a convenient tabular form. Values of the maximum possible sunshine duration as recorded on a Campbell-Stokes sunshine recorder are also computed and presented. These tables should avoid the need for repetitive and approximate calculations and serve as a useful ready reference for solar energy scientists and engineers. (author)

  17. Weakest solar wind of the space age and the current 'MINI' solar maximum

    International Nuclear Information System (INIS)

    McComas, D. J.; Angold, N.; Elliott, H. A.; Livadiotis, G.; Schwadron, N. A.; Smith, C. W.; Skoug, R. M.

    2013-01-01

    The last solar minimum, which extended into 2009, was especially deep and prolonged. Since then, sunspot activity has gone through a very small peak while the heliospheric current sheet achieved large tilt angles similar to prior solar maxima. The solar wind fluid properties and interplanetary magnetic field (IMF) have declined through the prolonged solar minimum and continued to be low through the current mini solar maximum. Compared to values typically observed from the mid-1970s through the mid-1990s, the following proton parameters are lower on average from 2009 through day 79 of 2013: solar wind speed and beta (∼11%), temperature (∼40%), thermal pressure (∼55%), mass flux (∼34%), momentum flux or dynamic pressure (∼41%), energy flux (∼48%), IMF magnitude (∼31%), and radial component of the IMF (∼38%). These results have important implications for the solar wind's interaction with planetary magnetospheres and the heliosphere's interaction with the local interstellar medium, with the proton dynamic pressure remaining near the lowest values observed in the space age: ∼1.4 nPa, compared to ∼2.4 nPa typically observed from the mid-1970s through the mid-1990s. The combination of lower magnetic flux emergence from the Sun (carried out in the solar wind as the IMF) and associated low power in the solar wind points to the causal relationship between them. Our results indicate that the low solar wind output is driven by an internal trend in the Sun that is longer than the ∼11 yr solar cycle, and they suggest that this current weak solar maximum is driven by the same trend.

  18. Depth-averaged instantaneous currents in a tidally dominated shelf sea from glider observations

    Science.gov (United States)

    Merckelbach, Lucas

    2016-12-01

Ocean gliders have become ubiquitous observation platforms in the ocean in recent years. They are also increasingly used in coastal environments. The coastal observatory system COSYNA has pioneered the use of gliders in the North Sea, a shallow, tidally energetic shelf sea. For operational reasons, the gliders operated in the North Sea are programmed to resurface every 3-5 h. The glider's dead-reckoning algorithm yields depth-averaged currents, averaged in time over each subsurface interval. Under operational conditions these averaged currents are a poor approximation of the instantaneous tidal current. In this work an algorithm is developed that estimates the instantaneous current (tidal and residual) from glider observations only. The algorithm uses a first-order Butterworth low-pass filter to estimate the residual current component, and a Kalman filter based on the linear shallow water equations for the tidal component. A comparison of data from a glider experiment with current data from acoustic Doppler current profilers deployed nearby shows that the standard deviations for the east and north current components are better than 7 cm s⁻¹ in near-real-time mode and improve to better than 6 cm s⁻¹ in delayed mode, where the filters can be run forward and backward. In the near-real-time mode the algorithm provides estimates of the currents that the glider is expected to encounter during its next few dives. Combined with a behavioural and dynamic model of the glider, this yields predicted trajectories, the information of which is incorporated in warning messages issued to ships by the (German) authorities. In delayed mode the algorithm produces useful estimates of the depth-averaged currents, which can be used in (process-based) analyses in case no other source of measured current information is available.
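The residual-current step can be sketched as follows. This is a minimal stand-in: a hand-rolled first-order Butterworth low-pass (bilinear transform of H(s) = ωc/(s + ωc)) run forward and backward to cancel phase lag, applied to a synthetic M2 tide plus residual. The cutoff period, time step, and current amplitudes are assumptions, and the shallow-water Kalman filter for the tidal component is omitted entirely:

```python
import math

def butter1_lowpass(samples, dt_h, cutoff_h):
    """First-order Butterworth low-pass via the bilinear transform of
    H(s) = wc/(s + wc); cutoff_h is the cutoff period in hours."""
    wc = 2.0 * math.pi / cutoff_h
    a = 2.0 / dt_h
    b0 = wc / (a + wc)
    a1 = (wc - a) / (a + wc)
    out, prev_x, prev_y = [], samples[0], samples[0]
    for x in samples:
        y = b0 * (x + prev_x) - a1 * prev_y
        out.append(y)
        prev_x, prev_y = x, y
    return out

def forward_backward(samples, dt_h, cutoff_h):
    """Zero-phase ('delayed mode') filtering: forward pass, then backward."""
    fwd = butter1_lowpass(samples, dt_h, cutoff_h)
    bwd = butter1_lowpass(fwd[::-1], dt_h, cutoff_h)
    return bwd[::-1]

# Synthetic depth-averaged east current: 0.1 m/s residual + M2 tide (12.42 h).
dt = 0.5  # hours between surfacing-derived current estimates
t = [k * dt for k in range(400)]
u = [0.1 + 0.5 * math.sin(2.0 * math.pi * tk / 12.42) for tk in t]

residual = forward_backward(u, dt, cutoff_h=48.0)
mid = residual[len(residual) // 2]
print(mid)  # near the 0.1 m/s residual; the tidal signal is strongly attenuated
```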

  19. Sedimentological regimes for turbidity currents: Depth-averaged theory

    Science.gov (United States)

    Halsey, Thomas C.; Kumar, Amit; Perillo, Mauricio M.

    2017-07-01

    Turbidity currents are one of the most significant means by which sediment is moved from the continents into the deep ocean; their properties are interesting both as elements of the global sediment cycle and due to their role in contributing to the formation of deep water oil and gas reservoirs. One of the simplest models of the dynamics of turbidity current flow was introduced three decades ago, and is based on depth-averaging of the fluid mechanical equations governing the turbulent gravity-driven flow of relatively dilute turbidity currents. We examine the sedimentological regimes of a simplified version of this model, focusing on the role of the Richardson number Ri [dimensionless inertia] and Rouse number Ro [dimensionless sedimentation velocity] in determining whether a current is net depositional or net erosional. We find that for large Rouse numbers, the currents are strongly net depositional due to the disappearance of local equilibria between erosion and deposition. At lower Rouse numbers, the Richardson number also plays a role in determining the degree of erosion versus deposition. The currents become more erosive at lower values of the product Ro × Ri, due to the effect of clear water entrainment. At higher values of this product, the turbulence becomes insufficient to maintain the sediment in suspension, as first pointed out by Knapp and Bagnold. We speculate on the potential for two-layer solutions in this insufficiently turbulent regime, which would comprise substantial bedload flow with an overlying turbidity current.
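A hedged sketch of the two dimensionless groups named above, with a toy regime classifier following the qualitative rules of the abstract; the threshold values and the example flow parameters are illustrative placeholders, not values from the paper:

```python
def richardson(g_prime, h, u):
    """Bulk Richardson number Ri = g'h/u^2 (dimensionless inertia) for a
    current of reduced gravity g', thickness h, and speed u."""
    return g_prime * h / u ** 2

def rouse(w_s, u_star, kappa=0.41):
    """Rouse number Ro = w_s/(kappa*u_*) (dimensionless settling velocity)."""
    return w_s / (kappa * u_star)

def regime(ro, ri, lo=0.1, hi=1.0):
    """Toy classifier with placeholder thresholds lo/hi."""
    if ro > 2.5:  # very large Rouse number: sediment settles out
        return "net depositional"
    prod = ro * ri
    if prod < lo:
        return "net erosional"     # clear-water entrainment dominates
    if prod > hi:
        return "net depositional"  # turbulence too weak to hold suspension
    return "near equilibrium"

# Example: g' = 0.1 m/s^2, 10 m thick current at 2 m/s,
# settling velocity 1 cm/s, shear velocity 0.1 m/s.
ri = richardson(0.1, 10.0, 2.0)  # 0.25
ro = rouse(0.01, 0.1)            # ~0.24
print(regime(ro, ri))
```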

  20. Predictive Trailing-Edge Modulation Average Current Control in DC-DC Converters

    Directory of Open Access Journals (Sweden)

    LASCU, D.

    2013-11-01

Full Text Available The paper investigates predictive digital average current control (PDACC) in dc/dc converters using trailing-edge modulation (TEM). The study is focused on the recurrence equation for the duty cycle, and a stability analysis is then performed. It is demonstrated that average current control using trailing-edge modulation is stable over the whole range of the duty cycle, so that design problems are greatly reduced. The analysis is carried out in a general manner, independent of converter topology, and therefore the results can easily be applied to a particular converter (buck, boost, buck-boost, etc.). The theoretical considerations are confirmed for a boost converter, first using a MATLAB program based on state-space equations and finally with the CASPOC circuit simulation package.

  1. Average current is better than peak current as therapeutic dosage for biphasic waveforms in a ventricular fibrillation pig model of cardiac arrest.

    Science.gov (United States)

    Chen, Bihua; Yu, Tao; Ristagno, Giuseppe; Quan, Weilun; Li, Yongqin

    2014-10-01

Defibrillation current has been shown to be a clinically more relevant dosing unit than energy. However, the effects of average and peak current in determining shock outcome are still undetermined. The aim of this study was to investigate the relationship between average current, peak current and defibrillation success when different biphasic waveforms were employed. Ventricular fibrillation (VF) was electrically induced in 22 domestic male pigs. Animals were then randomized to receive defibrillation using one of two different biphasic waveforms. A grouped up-and-down defibrillation threshold-testing protocol was used to maintain the success rate in the neighborhood of 50%. In 14 animals (Study A), defibrillations were accomplished with either biphasic truncated exponential (BTE) or rectilinear biphasic waveforms. In eight animals (Study B), shocks were delivered using two BTE waveforms that had identical peak current but different waveform durations. Both average and peak currents were associated with defibrillation success when BTE and rectilinear waveforms were investigated. However, when pathway impedance was less than 90 Ω for the BTE waveform, the bivariate correlation coefficient was 0.36 (p=0.001) for the average current, but only 0.21 (p=0.06) for the peak current in Study A. In Study B, a higher defibrillation success rate (67.9% vs. 38.8%) was observed for the waveform that delivered a higher average current (14.9±2.1 A vs. 13.5±1.7 A) with the peak current unchanged. In this porcine model of VF, average current was a more adequate parameter than peak current for describing the therapeutic dosage when biphasic defibrillation waveforms were used. The institutional protocol number: P0805.

  2. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

Full Text Available Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging is often superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals, and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found on which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
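The core bias mechanism is Jensen's inequality: exponentiating the mean of log-abundances recovers the geometric mean, which sits below the arithmetic mean whenever there is variability. This can be demonstrated with synthetic data; the lognormal ensemble below is an assumption for illustration, not the paper's simulator:

```python
import math, random

# Synthetic "true" abundances: lognormal with median 100 and large
# log-space spread sigma (i.e. large local natural variability).
random.seed(1)
sigma = 1.0
true_vmr = [math.exp(random.gauss(math.log(100.0), sigma))
            for _ in range(50_000)]

# Linear averaging: mean of the abundances themselves.
linear_mean = sum(true_vmr) / len(true_vmr)
# "Logarithmic" averaging: exponentiate the mean of the logarithms.
log_mean = math.exp(sum(math.log(v) for v in true_vmr) / len(true_vmr))

# For a lognormal, E[x] = exp(mu + sigma^2/2) but exp(E[ln x]) = exp(mu),
# so the log-averaged value underestimates the true mean abundance.
print(log_mean < linear_mean)  # True
```

With σ = 1 the gap is large (a factor of about e^0.5 ≈ 1.65), which is consistent with the abstract's warning that such biases can easily reach ten percent or more.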

  3. Development of a high average current polarized electron source with long cathode operational lifetime

    Energy Technology Data Exchange (ETDEWEB)

    C. K. Sinclair; P. A. Adderley; B. M. Dunham; J. C. Hansknecht; P. Hartmann; M. Poelker; J. S. Price; P. M. Rutt; W. J. Schneider; M. Steigerwald

    2007-02-01

Substantially more than half of the electromagnetic nuclear physics experiments conducted at the Continuous Electron Beam Accelerator Facility of the Thomas Jefferson National Accelerator Facility (Jefferson Laboratory) require highly polarized electron beams, often at high average current. Spin-polarized electrons are produced by photoemission from various GaAs-based semiconductor photocathodes, using circularly polarized laser light with photon energy slightly larger than the semiconductor band gap. The photocathodes are prepared by activation of the clean semiconductor surface to negative electron affinity using cesium and oxidation. Historically, in many laboratories worldwide, these photocathodes have had short operational lifetimes at high average current, and have often deteriorated fairly quickly in ultrahigh vacuum even without electron beam delivery. At Jefferson Lab, we have developed a polarized electron source in which the photocathodes degrade exceptionally slowly without electron emission, and in which ion back bombardment is the predominant mechanism limiting the operational lifetime of the cathodes during electron emission. We have reproducibly obtained cathode 1/e dark lifetimes over two years, and 1/e charge density and charge lifetimes during electron beam delivery of over 2×10⁵ C/cm² and 200 C, respectively. This source is able to support uninterrupted high average current polarized beam delivery to three experimental halls simultaneously for many months at a time. Many of the techniques we report here are directly applicable to the development of GaAs photoemission electron guns to deliver high average current, high brightness unpolarized beams.

  4. Development of a high average current polarized electron source with long cathode operational lifetime

    Directory of Open Access Journals (Sweden)

    C. K. Sinclair

    2007-02-01

Full Text Available Substantially more than half of the electromagnetic nuclear physics experiments conducted at the Continuous Electron Beam Accelerator Facility of the Thomas Jefferson National Accelerator Facility (Jefferson Laboratory) require highly polarized electron beams, often at high average current. Spin-polarized electrons are produced by photoemission from various GaAs-based semiconductor photocathodes, using circularly polarized laser light with photon energy slightly larger than the semiconductor band gap. The photocathodes are prepared by activation of the clean semiconductor surface to negative electron affinity using cesium and oxidation. Historically, in many laboratories worldwide, these photocathodes have had short operational lifetimes at high average current, and have often deteriorated fairly quickly in ultrahigh vacuum even without electron beam delivery. At Jefferson Lab, we have developed a polarized electron source in which the photocathodes degrade exceptionally slowly without electron emission, and in which ion back bombardment is the predominant mechanism limiting the operational lifetime of the cathodes during electron emission. We have reproducibly obtained cathode 1/e dark lifetimes over two years, and 1/e charge density and charge lifetimes during electron beam delivery of over 2×10⁵ C/cm² and 200 C, respectively. This source is able to support uninterrupted high average current polarized beam delivery to three experimental halls simultaneously for many months at a time. Many of the techniques we report here are directly applicable to the development of GaAs photoemission electron guns to deliver high average current, high brightness unpolarized beams.

  5. Record high-average current from a high-brightness photoinjector

    Energy Technology Data Exchange (ETDEWEB)

    Dunham, Bruce; Barley, John; Bartnik, Adam; Bazarov, Ivan; Cultrera, Luca; Dobbins, John; Hoffstaetter, Georg; Johnson, Brent; Kaplan, Roger; Karkare, Siddharth; Kostroun, Vaclav; Li Yulin; Liepe, Matthias; Liu Xianghong; Loehl, Florian; Maxson, Jared; Quigley, Peter; Reilly, John; Rice, David; Sabol, Daniel [Cornell Laboratory for Accelerator-Based Sciences and Education, Cornell University, Ithaca, New York 14853 (United States); and others

    2013-01-21

    High-power, high-brightness electron beams are of interest for many applications, especially as drivers for free electron lasers and energy recovery linac light sources. For these particular applications, photoemission injectors are used in most cases, and the initial beam brightness from the injector sets a limit on the quality of the light generated at the end of the accelerator. At Cornell University, we have built such a high-power injector using a DC photoemission gun followed by a superconducting accelerating module. Recent results will be presented demonstrating record setting performance up to 65 mA average current with beam energies of 4-5 MeV.

  6. SU-E-T-174: Evaluation of the Optimal Intensity Modulated Radiation Therapy Plans Done On the Maximum and Average Intensity Projection CTs

    Energy Technology Data Exchange (ETDEWEB)

    Jurkovic, I [University of Texas Health Science Center at San Antonio, San Antonio, TX (United States); Stathakis, S; Li, Y; Patel, A; Vincent, J; Papanikolaou, N; Mavroidis, P [Cancer Therapy and Research Center University of Texas Health Sciences Center at San Antonio, San Antonio, TX (United States)

    2014-06-01

Purpose: To determine the difference in coverage between plans done on average intensity projection and maximum intensity projection CT data sets for lung patients, and to establish correlations between the different factors influencing the coverage. Methods: For six lung cancer patients, 10 phases of equal duration through the respiratory cycle and the maximum and average intensity projections (MIP and AIP) were obtained from their 4DCT datasets. The MIP and AIP datasets had three GTVs delineated (GTVaip, delineated on AIP; GTVmip, delineated on MIP; and GTVfus, delineated on each of the 10 phases and summed). From each GTV, planning target volumes (PTV) were then created by adding additional margins. For each of the PTVs an IMRT plan was developed on the AIP dataset. The plans were then copied to the MIP data set and recalculated. Results: The effective depths in AIP cases were significantly smaller than in MIP (p < 0.001). The Pearson correlation coefficient of r = 0.839 indicates a strong positive linear relationship between the average percentage difference in effective depths and the average PTV coverage on the MIP data set. The V20Gy of the involved lung depends on the PTV coverage. The relationship between the PTVaip mean CT number difference and the PTVaip coverage on the MIP data set gives r = 0.830; when the plans are produced on MIP and copied to AIP, r equals −0.756. Conclusion: The correlation between the AIP and MIP data sets indicates that the selection of the data set for developing the treatment plan affects the final outcome (cases with a high average percentage difference in effective depths between AIP and MIP should be calculated on AIP). The percentage of the lung volume receiving a higher dose depends on how well the PTV is covered, regardless of which data set the plan is done on.

  7. 30 CFR 75.601-3 - Short circuit protection; dual element fuses; current ratings; maximum values.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Short circuit protection; dual element fuses... Trailing Cables § 75.601-3 Short circuit protection; dual element fuses; current ratings; maximum values. Dual element fuses having adequate current-interrupting capacity shall meet the requirements for short...

  8. Analysis and Design of Improved Weighted Average Current Control Strategy for LCL-Type Grid-Connected Inverters

    DEFF Research Database (Denmark)

    Han, Yang; Li, Zipeng; Yang, Ping

    2017-01-01

The LCL grid-connected inverter has the ability to attenuate the high-frequency current harmonics. However, the inherent resonance of the LCL filter affects the system stability significantly. To damp the resonance effect, dual-loop current control can be used to stabilize the system. The grid current plus capacitor current feedback scheme is widely used for its better transient response and high robustness against grid impedance variations, while the weighted average current (WAC) feedback scheme can provide a wider bandwidth at higher frequencies but shows poor stability...

  9. Maximum Bandwidth Enhancement of Current Mirror using Series-Resistor and Dynamic Body Bias Technique

    Directory of Open Access Journals (Sweden)

    V. Niranjan

    2014-09-01

Full Text Available This paper introduces a new approach for enhancing the bandwidth of a low voltage CMOS current mirror. The proposed approach is based on utilizing the body effect in a MOS transistor by connecting its gate and bulk terminals together for signal input. This boosts the effective transconductance of the MOS transistor while reducing the threshold voltage. The proposed approach does not affect the DC gain of the current mirror. We demonstrate that the proposed approach is compatible with the widely used series-resistor technique for enhancing the current mirror bandwidth, and both techniques have been employed simultaneously for maximum bandwidth enhancement. An important consequence of using both techniques simultaneously is a reduction of the series-resistor value needed to achieve the same bandwidth. This reduction is attractive because a smaller resistor occupies less chip area and contributes less noise. PSpice simulation results using 180 nm CMOS technology from TSMC are included to validate the analysis. The proposed current mirror operates at 1 V, consuming only 102 µW, and a maximum bandwidth extension ratio of 1.85 has been obtained with the proposed approach. Simulation results are in good agreement with analytical predictions.

  10. Maximum likelihood estimation of biophysical parameters of synaptic receptors from macroscopic currents

    Directory of Open Access Journals (Sweden)

Andrey Stepanyuk

    2014-10-01

Full Text Available Dendritic integration and neuronal firing patterns strongly depend on the biophysical properties of synaptic ligand-gated channels. However, precise estimation of the biophysical parameters of these channels in their intrinsic environment is a complicated and still unresolved problem. Here we describe a novel method, based on a maximum likelihood approach, that estimates not only the unitary current of synaptic receptor channels but also their multiple conductance levels, kinetic constants, the number of receptors bound with a neurotransmitter, and the peak open probability from an experimentally feasible number of postsynaptic currents. The new method also improves the accuracy of the evaluation of the unitary current as compared to peak-scaled non-stationary fluctuation analysis, making it possible to estimate this important parameter precisely from a few postsynaptic currents recorded in steady-state conditions. Estimation of the unitary current with this method is robust even if the postsynaptic currents are generated by receptors having different kinetic parameters, the case when peak-scaled non-stationary fluctuation analysis is not applicable. Thus, with the new method, routinely recorded postsynaptic currents can be used to study the properties of synaptic receptors in their native biochemical environment.

  11. Unified Subharmonic Oscillation Conditions for Peak or Average Current Mode Control

    OpenAIRE

    Fang, Chung-Chieh

    2013-01-01

    This paper is an extension of the author's recent research in which only buck converters were analyzed. Similar analysis can be equally applied to other types of converters. In this paper, a unified model is proposed for buck, boost, and buck-boost converters under peak or average current mode control to predict the occurrence of subharmonic oscillation. Based on the unified model, the associated stability conditions are derived in closed forms. The same stability condition can be applied to ...

  12. The effects of disjunct sampling and averaging time on maximum mean wind speeds

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Mann, J.

    2006-01-01

    Conventionally, the 50-year wind is calculated on basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours. We call it disjunct sampling. It may also happen that the wind speeds are averaged over a longer time...
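
The extreme-value calculation behind such 50-year estimates can be sketched in a few lines; the annual-maxima data and the method-of-moments Gumbel fit below are illustrative assumptions, not the paper's actual dataset or estimator:

```python
import numpy as np

# Hypothetical annual maxima of consecutive 10-min mean wind speeds (m/s).
annual_maxima = np.array([24.1, 21.8, 26.3, 22.9, 25.0, 23.4, 27.2, 22.1,
                          24.8, 23.9, 26.0, 21.5, 25.6, 24.4, 23.1, 26.8])

# Method-of-moments Gumbel fit: scale from the sample standard deviation,
# location from the mean (0.5772 is the Euler-Mascheroni constant).
scale = annual_maxima.std(ddof=1) * np.sqrt(6) / np.pi
loc = annual_maxima.mean() - 0.5772 * scale

# 50-year return wind: the speed with annual non-exceedance
# probability 1 - 1/50 under the fitted Gumbel distribution.
u50 = loc - scale * np.log(-np.log(1 - 1 / 50))
```

Disjunct sampling and longer averaging times, as the record discusses, would bias `annual_maxima` itself; the return-level formula is unchanged.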

  13. A Method of Maximum Power Control in Single-phase Utility Interactive Photovoltaic Generation System by using PWM Current Source Inverter

    Science.gov (United States)

    Neba, Yasuhiko

This paper deals with maximum power point tracking (MPPT) control of photovoltaic generation with a single-phase utility interactive inverter. The photovoltaic arrays are connected to the utility by employing a PWM current source inverter. The use of pulsating dc current and voltage allows the maximum power point to be searched. The inverter can regulate the array voltage and keep the arrays at the maximum power point. This paper gives the control method and the experimental results.

  14. Maximum-likelihood model averaging to profile clustering of site types across discrete linear sequences.

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2009-06-01

    Full Text Available A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
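
The model-averaging step described above can be illustrated with Akaike weights; the log-likelihoods, parameter counts, and per-model estimates below are hypothetical placeholders, not values from the paper:

```python
import numpy as np

# Hypothetical fits of three clustering models of increasing complexity.
log_lik = np.array([-120.4, -112.7, -111.9])   # maximized log-likelihoods
n_params = np.array([2, 4, 7])                 # free parameters per model

aic = 2 * n_params - 2 * log_lik               # Akaike Information Criterion
delta = aic - aic.min()                        # differences from the best model
weights = np.exp(-delta / 2)
weights /= weights.sum()                       # Akaike model weights, sum to 1

# Model-averaged estimate of some per-model quantity (e.g., cluster count),
# incorporating model uncertainty as the abstract describes.
estimates = np.array([1.0, 3.0, 4.0])
averaged = float(weights @ estimates)
```

The corrected AIC or BIC would be used the same way, with their own penalty terms replacing `2 * n_params`.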

  15. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...

  16. Speed Control Analysis of Brushless DC Motor Based on Maximum Amplitude DC Current Feedback

    Directory of Open Access Journals (Sweden)

    Hassan M.A.A.

    2014-07-01

Full Text Available This paper describes an approach to developing an accurate and simple current-controlled modulation technique for a brushless DC (BLDC) motor drive. The approach controls the phase current by generating a quasi-square wave current using only one current controller for the three phases. Unlike the vector control method, which is complicated to implement, this simple current modulation technique offers advantages: the phase currents are kept in balance, and the current is controlled through a single dc signal representing the maximum amplitude of the trapezoidal current (Imax). The technique is performed with a Proportional Integral (PI) control algorithm and the triangular carrier comparison method to generate the Pulse Width Modulation (PWM) signal. In addition, a PI speed controller is incorporated with the current controller to achieve a non-overshoot speed response. The performance and functionality of the BLDC motor driver are verified via simulation using MATLAB/SIMULINK. The simulation results show that the developed control system achieves the desired non-overshoot speed response and good current waveforms.
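
The core loop of such a scheme, a single PI current controller whose output duty cycle is compared against a triangular carrier, can be sketched as follows; the gains, sample time, and carrier frequency are assumed values, not those of the paper:

```python
# Minimal sketch of a single-controller PI current loop with
# triangular-carrier PWM. KP, KI, DT, and the carrier frequency
# are illustrative assumptions.
KP, KI, DT = 0.8, 120.0, 1e-4        # PI gains and 10 kHz sample time

def pi_controller():
    integral = 0.0
    def step(i_ref, i_meas):
        nonlocal integral
        err = i_ref - i_meas          # error against Imax reference
        integral += err * DT
        duty = KP * err + KI * integral
        return min(max(duty, 0.0), 1.0)   # clamp to a valid duty cycle
    return step

def triangular_carrier(t, f_carrier=10e3):
    frac = (t * f_carrier) % 1.0
    return 2 * frac if frac < 0.5 else 2 * (1 - frac)   # 0..1 triangle

def pwm_gate(duty, t):
    # Carrier comparison: switch on while the duty command
    # exceeds the instantaneous carrier value.
    return duty > triangular_carrier(t)
```

A speed PI loop would wrap this, producing `i_ref` from the speed error, which is the cascade structure the abstract describes.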

  17. Occupational exposure to electric fields and induced currents associated with 400 kV substation tasks from different service platforms.

    Science.gov (United States)

    Korpinen, Leena H; Elovaara, Jarmo A; Kuisti, Harri A

    2011-01-01

    The aim of the study was to investigate the occupational exposure to electric fields, average current densities, and average total contact currents at 400 kV substation tasks from different service platforms (main transformer inspection, maintenance of operating device of disconnector, maintenance of operating device of circuit breaker). The average values are calculated over measured periods (about 2.5 min). In many work tasks, the maximum electric field strengths exceeded the action values proposed in the EU Directive 2004/40/EC, but the average electric fields (0.2-24.5 kV/m) were at least 40% lower than the maximum values. The average current densities were 0.1-2.3 mA/m² and the average total contact currents 2.0-143.2 µA, that is, clearly less than the limit values of the EU Directive. The average values of the currents in head and contact currents were 16-68% lower than the maximum values when we compared the average value from all cases in the same substation. In the future it is important to pay attention to the fact that the action and limit values of the EU Directive differ significantly. It is also important to take into account that generally, the workers' exposure to the electric fields, current densities, and total contact currents are obviously lower if we use the average values from a certain measured time period (e.g., 2.5 min) than in the case where exposure is defined with only the help of the maximum values. © 2010 Wiley-Liss, Inc.

  18. Scale dependence of the average potential around the maximum in Φ⁴ theories

    International Nuclear Information System (INIS)

    Tetradis, N.; Wetterich, C.

    1992-04-01

The average potential describes the physics at a length scale k⁻¹ by averaging out the degrees of freedom with characteristic momenta larger than k. The dependence on k can be described by differential evolution equations. We solve these equations for the nonconvex part of the potential around the origin in φ⁴ theories, in the phase with spontaneous symmetry breaking. The average potential is real and approaches the convex effective potential in the limit k → 0. Our calculation is relevant for processes for which the shape of the potential at a given scale is important, such as tunneling phenomena or inflation. (orig.)

  19. Econometric modelling of Serbian current account determinants: Jackknife Model Averaging approach

    Directory of Open Access Journals (Sweden)

    Petrović Predrag

    2014-01-01

Full Text Available This research aims to model Serbian current account determinants for the period Q1 2002 - Q4 2012. Taking into account the majority of relevant determinants, using the Jackknife Model Averaging approach, 48 different models have been estimated, where 1254 equations needed to be estimated and averaged for each of the models. The results of selected representative models indicate moderate persistence of the CA and positive influence of: fiscal balance, oil trade balance, terms of trade, relative income and real effective exchange rates, where we should emphasise: (i) a rather strong influence of relative income, (ii) the fact that the worsening of the oil trade balance results in the worsening of other components (probably the non-oil trade balance) of the CA and (iii) that the positive influence of terms of trade reveals the functionality of the Harberger-Laursen-Metzler effect in Serbia. On the other hand, negative influence is evident in the case of: relative economic growth, gross fixed capital formation, net foreign assets and trade openness. What particularly stands out is the strong effect of relative economic growth that, most likely, reveals high citizens' future income growth expectations, which has a negative impact on the CA.

  20. Averaged currents induced by alpha particles in an InSb compound semiconductor detector

    International Nuclear Information System (INIS)

    Kanno, Ikuo; Hishiki, Shigeomi; Kogetsu, Yoshitaka; Nakamura, Tatsuya; Katagiri, Masaki

    2008-01-01

    Very fast pulses due to alpha particle incidence were observed by an undoped-type InSb Schottky detector. This InSb detector was operated without applying bias voltage and its depletion layer thickness was less than the range of alpha particles. The averaged current induced by alpha particles was analyzed as a function of operating temperature and was shown to be proportional to the Hall mobility of InSb. (author)

  1. Dst and a map of average equivalent ring current: 1958-2007

    Science.gov (United States)

    Love, J. J.

    2008-12-01

    A new Dst index construction is made using the original hourly magnetic-observatory data collected over the years 1958-2007; stations: Hermanus South Africa, Kakioka Japan, Honolulu Hawaii, and San Juan Puerto Rico. The construction method we use is generally consistent with the algorithm defined by Sugiura (1964), and which forms the basis for the standard Kyoto Dst index. This involves corrections for observatory baseline shifts, subtraction of the main-field secular variation, and subtraction of specific harmonics that approximate the solar-quiet (Sq) variation. Fourier analysis of the observatory data reveals the nature of Sq: it consists primarily of periodic variation driven by the Earth's rotation, the Moon's orbit, the Earth's orbit, and, to some extent, the solar cycle. Cross coupling of the harmonics associated with each of the external periodic driving forces results in a seemingly complicated Sq time series that is sometimes considered to be relatively random and unpredictable, but which is, in fact, well described in terms of Fourier series. Working in the frequency domain, Sq can be filtered out, and, upon return to the time domain, the local disturbance time series (Dist) for each observatory can be recovered. After averaging the local disturbance time series from each observatory, the global magnetic disturbance time series Dst is obtained. Analysis of this new Dst index is compared with that produced by Kyoto, and various biases and differences are discussed. The combination of the Dist and Dst time series can be used to explore the local-time/universal-time symmetry of an equivalent ring current. Individual magnetic storms can have a complicated disturbance field that is asymmetrical in longitude, presumably due to partial ring currents. Using 50 years of data we map the average local-time magnetic disturbance, finding that it is very nearly proportional to Dst. To our surprise, the primary asymmetry in mean magnetic disturbance is not between
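
The frequency-domain Sq removal described above can be sketched on synthetic data; the series below (one solar-daily harmonic plus a single storm-like depression) and the single-bin notch are illustrative assumptions, far simpler than the full harmonic set the construction actually filters:

```python
import numpy as np

# One year of synthetic hourly observatory data: a 24-h solar-quiet (Sq)
# harmonic plus one storm-time depression (the Dist signal of interest).
n_hours = 24 * 365
t = np.arange(n_hours)
sq = 15.0 * np.sin(2 * np.pi * t / 24.0)            # Sq harmonic, nT
dist = -40.0 * np.exp(-((t - 4000) / 36.0) ** 2)    # storm depression, nT
series = sq + dist

# Work in the frequency domain, zero out the daily spectral line,
# and return to the time domain to recover the disturbance series.
spec = np.fft.rfft(series)
freqs = np.fft.rfftfreq(n_hours, d=1.0)             # cycles per hour
daily = np.isclose(freqs, 1 / 24.0, atol=1e-4)
spec[daily] = 0.0
recovered = np.fft.irfft(spec, n=n_hours)
```

The real construction filters harmonics of the solar day, lunar orbit, and annual cycle together, then averages the per-station Dist series to form Dst.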

  2. LANSCE beam current limiter

    International Nuclear Information System (INIS)

    Gallegos, F.R.

    1996-01-01

    The Radiation Security System (RSS) at the Los Alamos Neutron Science Center (LANSCE) provides personnel protection from prompt radiation due to accelerated beam. Active instrumentation, such as the Beam Current Limiter, is a component of the RSS. The current limiter is designed to limit the average current in a beam line below a specific level, thus minimizing the maximum current available for a beam spill accident. The beam current limiter is a self-contained, electrically isolated toroidal beam transformer which continuously monitors beam current. It is designed as fail-safe instrumentation. The design philosophy, hardware design, operation, and limitations of the device are described

  3. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of 2.5 μs and a memory capacity of 256 × 12-bit words. The number of sweeps is selectable through a front panel control in binary steps from 2³ to 2¹². The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
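
The 36 dB figure follows from coherent averaging: over N sweeps the signal adds linearly while uncorrelated noise grows only as √N, so S/N improves by √N, and with N = 2¹² = 4096 that is 20·log₁₀(64) ≈ 36 dB. A quick numerical check on a synthetic FID-like decay (the waveform and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sweeps = 2 ** 12                  # maximum sweep count of the averager
t = np.linspace(0, 1, 256)          # 256 channels, as in the instrument
signal = np.exp(-t / 0.3)           # illustrative decaying NMR-like signal

# Coherent averaging: signal adds linearly, noise adds as sqrt(N),
# so the residual noise standard deviation falls by a factor sqrt(N).
noisy = signal + rng.normal(0.0, 1.0, size=(n_sweeps, t.size))
averaged = noisy.mean(axis=0)

snr_gain_db = 20 * np.log10(np.sqrt(n_sweeps))   # ~36.1 dB
```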

  4. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, each quantity (voltage at maximum power, current at maximum power, and the maximum power itself) is plotted as a function of the time of day.
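
The maximization can be sketched numerically with a hypothetical single-diode panel model; the short-circuit current, saturation current, and thermal voltage below are assumed values, and a dense grid search over P(V) = V·I(V) stands in for the analytic differentiation used in the project:

```python
import numpy as np

# Assumed single-diode model: I(V) = ISC - I0*(exp(V/VT) - 1).
ISC, I0, VT = 5.0, 1e-9, 1.5    # short-circuit current (A), saturation
                                # current (A), lumped thermal voltage (V)

def current(v):
    return ISC - I0 * (np.exp(v / VT) - 1.0)

# P(V) = V * I(V); the maximum power point is where dP/dV = 0.
v = np.linspace(0.0, 33.0, 100001)
p = v * current(v)

k = np.argmax(p)
v_mp, i_mp, p_max = v[k], current(v[k]), p[k]   # voltage, current, and power
                                                # at maximum power
```

Repeating this for the irradiance-dependent ISC at each time of day gives the three curves the abstract describes.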

  5. THE RISE AND FALL OF OPEN SOLAR FLUX DURING THE CURRENT GRAND SOLAR MAXIMUM

    International Nuclear Information System (INIS)

    Lockwood, M.; Rouillard, A. P.; Finch, I. D.

    2009-01-01

We use geomagnetic activity data to study the rise and fall over the past century of the solar wind flow speed V_SW, the interplanetary magnetic field strength B, and the open solar flux F_S. Our estimates include allowance for the kinematic effect of longitudinal structure in the solar wind flow speed. As well as solar cycle variations, all three parameters show a long-term rise during the first half of the 20th century followed by peaks around 1955 and 1986 and then a recent decline. Cosmogenic isotope data reveal that this constitutes a grand maximum of solar activity which began in 1920, using the definition that such grand maxima are when 25-year averages of the heliospheric modulation potential exceed 600 MV. Extrapolating the linear declines seen in all three parameters since 1985 yields predictions that the grand maximum will end in the years 2013, 2014, or 2027 using V_SW, F_S, or B, respectively. These estimates are consistent with predictions based on the probability distribution of the durations of past grand solar maxima seen in cosmogenic isotope data. The data contradict any suggestions of a floor to the open solar flux: we show that the solar minimum open solar flux, kinematically corrected to allow for the excess flux effect, has halved over the past two solar cycles.

  6. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  7. Use of the Maximum Torque Sensor to Reduce the Starting Current in the Induction Motor

    Directory of Open Access Journals (Sweden)

    Muchlas

    2010-03-01

Full Text Available Use of the maximum torque sensor has been demonstrated to improve the standard ramp-up technique in the induction motor circuit system. The induction motor used was a three-phase squirrel-cage motor controlled using a 68HC11 microcontroller. The simulations performed show that this technique can optimize the performance of the motor, yielding lower stator current and lower power consumption than the standard ramp-up technique.

  8. Silicon tunnel FET with average subthreshold slope of 55 mV/dec at low drain currents

    Science.gov (United States)

    Narimani, K.; Glass, S.; Bernardy, P.; von den Driesch, N.; Zhao, Q. T.; Mantl, S.

    2018-05-01

In this paper we present a silicon tunnel FET based on line tunneling to achieve better subthreshold performance. The fabricated device shows an on-current of I_on = 2.55 × 10⁻⁷ A/μm at V_ds = V_on = V_gs − V_off = −0.5 V for I_off = 1 nA/μm, and an average SS of 55 mV/dec over two orders of magnitude of I_d. Furthermore, the analog figures of merit have been calculated and show that the transconductance efficiency g_m/I_d beats the MOSFET performance at low currents.
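
An average subthreshold swing over a current interval is simply the gate-voltage change per decade of drain current. The two transfer-curve points below are hypothetical, chosen only to reproduce a 55 mV/dec figure over two decades:

```python
import numpy as np

# Hypothetical transfer-curve points bracketing two decades of drain current.
vg = np.array([0.30, 0.41])        # gate voltages (V), assumed
i_d = np.array([1e-11, 1e-9])      # drain currents (A/um), two decades apart

# Average SS over the interval: gate-voltage change per decade of current,
# expressed in mV/dec.
ss_mv_per_dec = 1e3 * (vg[1] - vg[0]) / (np.log10(i_d[1]) - np.log10(i_d[0]))
```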

  9. A Novel Technique for Maximum Power Point Tracking of a Photovoltaic Based on Sensing of Array Current Using Adaptive Neuro-Fuzzy Inference System (ANFIS)

    Science.gov (United States)

    El-Zoghby, Helmy M.; Bendary, Ahmed F.

    2016-10-01

Maximum Power Point Tracking (MPPT) is now a widely used method for increasing photovoltaic (PV) efficiency. The conventional MPPT methods have many problems concerning accuracy, flexibility and efficiency. The MPP depends on the PV temperature and solar irradiation, which vary randomly. In this paper an artificial-intelligence-based controller is presented through the implementation of an Adaptive Neuro-Fuzzy Inference System (ANFIS) to obtain maximum power from the PV. The ANFIS inputs are the temperature and cell current, and the output is the optimal voltage at maximum power. During operation the trained ANFIS senses the PV current using a suitable sensor and also senses the temperature to determine the optimal operating voltage that corresponds to the current at the MPP. This voltage is used to control the boost converter duty cycle. The MATLAB simulation results show the effectiveness of the ANFIS with sensing of the PV current in obtaining the MPPT from the PV.

  10. Maximum vehicle cabin temperatures under different meteorological conditions

    Science.gov (United States)

    Grundstein, Andrew; Meentemeyer, Vernon; Dowd, John

    2009-05-01

    A variety of studies have documented the dangerously high temperatures that may occur within the passenger compartment (cabin) of cars under clear sky conditions, even at relatively low ambient air temperatures. Our study, however, is the first to examine cabin temperatures under variable weather conditions. It uses a unique maximum vehicle cabin temperature dataset in conjunction with directly comparable ambient air temperature, solar radiation, and cloud cover data collected from April through August 2007 in Athens, GA. Maximum cabin temperatures, ranging from 41-76°C, varied considerably depending on the weather conditions and the time of year. Clear days had the highest cabin temperatures, with average values of 68°C in the summer and 61°C in the spring. Cloudy days in both the spring and summer were on average approximately 10°C cooler. Our findings indicate that even on cloudy days with lower ambient air temperatures, vehicle cabin temperatures may reach deadly levels. Additionally, two predictive models of maximum daily vehicle cabin temperatures were developed using commonly available meteorological data. One model uses maximum ambient air temperature and average daily solar radiation while the other uses cloud cover percentage as a surrogate for solar radiation. From these models, two maximum vehicle cabin temperature indices were developed to assess the level of danger. The models and indices may be useful for forecasting hazardous conditions, promoting public awareness, and to estimate past cabin temperatures for use in forensic analyses.
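
The first model form described above, maximum cabin temperature as a linear function of maximum ambient air temperature and daily solar radiation, can be sketched as an ordinary least-squares fit. The data below are synthetic, generated from an assumed linear relation; neither the values nor the recovered coefficients are those of the study:

```python
import numpy as np

# Synthetic daily predictors: maximum ambient air temperature (deg C)
# and mean daily solar radiation (W/m^2).
t_air = np.array([24.0, 28.0, 31.0, 33.0, 26.0, 35.0])
solar = np.array([180.0, 260.0, 300.0, 320.0, 150.0, 340.0])

# Assumed linear relation used to generate the synthetic response.
t_cabin = 8.0 + 0.9 * t_air + 0.1 * solar

# Ordinary least squares: intercept, air-temperature and radiation terms.
X = np.column_stack([np.ones_like(t_air), t_air, solar])
coef, *_ = np.linalg.lstsq(X, t_cabin, rcond=None)

predicted = X @ coef
rmse = float(np.sqrt(np.mean((predicted - t_cabin) ** 2)))
```

The study's second model would replace `solar` with a cloud-cover percentage as a surrogate predictor; the fitting step is identical.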

  11. 40 CFR 1045.140 - What is my engine's maximum engine power?

    Science.gov (United States)

    2010-07-01

    ...) Maximum engine power for an engine family is generally the weighted average value of maximum engine power... engine family's maximum engine power apply in the following circumstances: (1) For outboard or personal... value for maximum engine power from all the different configurations within the engine family to...

  12. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • We observe continuous I_c degradation under repetitive quenching in which tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage, resistance, and temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short circuit currents in an electrical power system. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the total length of CC used in the design of an SFCL can be determined.

  13. 12 CFR 702.105 - Weighted-average life of investments.

    Science.gov (United States)

    2010-01-01

    ... investment funds. (1) For investments in registered investment companies (e.g., mutual funds) and collective investment funds, the weighted-average life is defined as the maximum weighted-average life disclosed, directly or indirectly, in the prospectus or trust instrument; (2) For investments in money market funds...

  14. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While the former raises no problems, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

  15. Vegetable and Fruit Intakes of On-Reserve First Nations Schoolchildren Compared to Canadian Averages and Current Recommendations

    Directory of Open Access Journals (Sweden)

    Ian D. Martin

    2012-04-01

This study investigated, in on-reserve First Nations (FN) youth in Ontario, Canada, the following: (a) the intakes of vegetable and fruit, “other” foods and relevant nutrients as compared to current recommendations and national averages, (b) current prevalence rates of overweight and obesity and (c) the relationship between latitude and dietary intakes. Twenty-four-hour diet recalls were collected via the Waterloo Web-Based Eating Behaviour Questionnaire (WEB-Q) (n = 443). Heights and weights of participants were self-reported using measured values and Body Mass Index was categorized using the International Obesity Task Force cutoffs. Food group and nutrient intakes were compared to current standards, Southern Ontario Food Behaviour data and the Canadian Community Health Survey, Cycle 2.2, using descriptive statistics. Mean vegetable and fruit, fibre and folate intakes were less than current recommendations. Girls aged 14–18 years had mean intakes of vitamin A below current recommendations for this sub-group; for all sub-groups, mean intakes of vegetables and fruit were below Canadian averages. All sub-groups also had intakes of all nutrients and food groups investigated that were less than those observed in non-FN youth from Southern Ontario, with the exception of “other” foods in boys 12–18 years. Prevalence rates of overweight and obesity were 31.8% and 19.6%, respectively, exceeding rates in the general population. Dietary intakes did not vary consistently by latitude (n = 248), as revealed by ANOVA. This study provided a unique investigation of the dietary intakes of on-reserve FN youth in Ontario and revealed poor intakes of vegetables and fruit and related nutrients and high intakes of “other” foods. Prevalence rates of overweight and obesity exceed those of the general population.

  16. Vegetable and Fruit Intakes of On-Reserve First Nations Schoolchildren Compared to Canadian Averages and Current Recommendations

    Science.gov (United States)

    Gates, Allison; Hanning, Rhona M.; Gates, Michelle; Skinner, Kelly; Martin, Ian D.; Tsuji, Leonard J. S.

    2012-01-01

    This study investigated, in on-reserve First Nations (FN) youth in Ontario, Canada, the following: (a) the intakes of vegetable and fruit, “other” foods and relevant nutrients as compared to current recommendations and national averages, (b) current prevalence rates of overweight and obesity and (c) the relationship between latitude and dietary intakes. Twenty-four-hour diet recalls were collected via the Waterloo Web-Based Eating Behaviour Questionnaire (WEB-Q) (n = 443). Heights and weights of participants were self reported using measured values and Body Mass Index was categorized using the International Obesity Task Force cutoffs. Food group and nutrient intakes were compared to current standards, Southern Ontario Food Behaviour data and the Canadian Community Health Survey, Cycle 2.2, using descriptive statistics. Mean vegetable and fruit, fibre and folate intakes were less than current recommendations. Girls aged 14–18 years had mean intakes of vitamin A below current recommendations for this sub-group; for all sub-groups, mean intakes of vegetables and fruit were below Canadian averages. All sub-groups also had intakes of all nutrients and food groups investigated that were less than those observed in non-FN youth from Southern Ontario, with the exception of “other” foods in boys 12–18 years. Prevalence rates of overweight and obesity were 31.8% and 19.6%, respectively, exceeding rates in the general population. Dietary intakes did not vary consistently by latitude (n = 248), as revealed by ANOVA. This study provided a unique investigation of the dietary intakes of on-reserve FN youth in Ontario and revealed poor intakes of vegetables and fruit and related nutrients and high intakes of “other” foods. Prevalence rates of overweight and obesity exceed those of the general population. PMID:22690200

  17. Latitudinal Change of Tropical Cyclone Maximum Intensity in the Western North Pacific

    OpenAIRE

    Choi, Jae-Won; Cha, Yumi; Kim, Hae-Dong; Kang, Sung-Dae

    2016-01-01

This study obtained the latitude at which tropical cyclones (TCs) show maximum intensity and applied statistical change-point analysis to the time series of its annual average values. The analysis found that the latitude of TC maximum intensity has increased since 1999. To investigate the reason behind this phenomenon, the difference between the average latitude over 1999–2013 and the average over 1977–1998 was analyzed. In a difference of 500 hPa streamline between the two ...
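"Statistical change-point analysis" comes in several variants; the least-squares version sketched here (with an invented latitude-like series) picks the split that minimizes within-segment scatter, which is one common way such a shift year is detected:

```python
def change_point(series):
    """Estimate a single change point: the split index that minimizes the
    total within-segment sum of squared deviations (one simple variant of
    statistical change-point analysis)."""
    best_k, best_cost = None, float("inf")
    for k in range(1, len(series)):
        left, right = series[:k], series[k:]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        cost = sum((x - lm) ** 2 for x in left) + sum((x - rm) ** 2 for x in right)
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Invented latitude-like series with an abrupt poleward shift
latitudes = [20.1, 20.3, 19.9, 20.0, 23.8, 24.1, 24.0, 23.9]
k = change_point(latitudes)  # -> 4: the shifted regime starts at index 4
```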

  18. NGA-West 2 GMPE average site coefficients for use in earthquake-resistant design

    Science.gov (United States)

    Borcherdt, Roger D.

    2015-01-01

Site coefficients corresponding to those in tables 11.4–1 and 11.4–2 of Minimum Design Loads for Buildings and Other Structures published by the American Society of Civil Engineers (Standard ASCE/SEI 7-10) are derived from four of the Next Generation Attenuation West2 (NGA-W2) Ground-Motion Prediction Equations (GMPEs). The resulting coefficients are compared with those derived by other researchers and those derived from the NGA-West1 database. The derivation of the NGA-W2 average site coefficients provides a simple procedure to update site coefficients with each update in the Maximum Considered Earthquake Response (MCER) maps. The simple procedure yields average site coefficients consistent with those derived for site-specific design purposes. The NGA-W2 GMPEs provide simple scale factors to reduce conservatism in current simplified design procedures.

  19. Criticality evaluation of BWR MOX fuel transport packages using average Pu content

    International Nuclear Information System (INIS)

    Mattera, C.; Martinotti, B.

    2004-01-01

Currently in France, criticality studies of transport configurations for Boiling Water Reactor Mixed Oxide fuel assemblies are based on the conservative hypothesis that all rods (Mixed Oxide (Uranium and Plutonium), Uranium Oxide, and Uranium-Gadolinium Oxide rods) are Mixed Oxide rods with the same Plutonium content, corresponding to the maximum value. In this way, the real heterogeneous map of the assembly is masked and covered by a homogeneous assembly enriched at the maximum Plutonium content. As this calculation hypothesis is extremely conservative, COGEMA LOGISTICS has studied a new calculation method based on the average Plutonium content. Using the average Plutonium content instead of the real Plutonium content profiles yields a higher reactivity value, which makes the method globally conservative. The method can be applied to all complete Boiling Water Reactor Mixed Oxide fuel assemblies of types 8 x 8, 9 x 9 and 10 x 10 whose Plutonium content by mass does not exceed 15%; the advantages it provides are discussed in our approach. With this new method, for the same package reactivity, the Pu content allowed in the package design approval can be higher. COGEMA LOGISTICS' new method allows the basket, materials or geometry to be optimized at the design stage for a higher payload while keeping the same reactivity.

  20. Submesoscale cyclones in the Agulhas Current

    CSIR Research Space (South Africa)

    Krug, Marjolaine

    2017-01-01

... of about 0.6 m s⁻¹. Surface currents were considerably stronger, averaging 0.4 m s⁻¹ and reaching maximum values of 1.8 m s⁻¹. The strongest surface currents were toward the southwest and observed in deep waters, where the gliders approached the AC (Figures 1c ...).

  1. The moving-window Bayesian maximum entropy framework: estimation of PM(2.5) yearly average concentration across the contiguous United States.

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L

    2012-09-01

    Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM(2.5)) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM(2.5) data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least 17.8% reduction in mean square error (MSE) in estimating the yearly PM(2.5). Moreover, the MWBME method further reduces the MSE by 8.4-43.7%, with the proportion of incomplete data increased from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM(2.5) across large geographical domains with expected spatial non-stationarity.
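As a rough illustration of the moving-window idea (not of the BME machinery, which additionally processes missing-data uncertainty in a rigorous Bayesian way), a local estimator restricted to nearby monitors adapts to spatial non-stationarity where a single global mean cannot; the 1-D station layout below is invented:

```python
import statistics

def moving_window_estimate(stations, x0, radius):
    """Estimate at location x0 from monitors within `radius` only
    (a toy 1-D stand-in for the MW step); falls back to the global
    mean if the window contains no monitors."""
    local = [v for x, v in stations if abs(x - x0) <= radius]
    if not local:
        return statistics.mean(v for _, v in stations)
    return statistics.mean(local)

# Two spatial regimes: the windowed estimate near x=0 ignores the far cluster
stations = [(0.0, 10.0), (1.0, 12.0), (10.0, 30.0), (11.0, 32.0)]
estimate = moving_window_estimate(stations, x0=0.5, radius=2.0)  # -> 11.0
```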

  2. The moving-window Bayesian Maximum Entropy framework: Estimation of PM2.5 yearly average concentration across the contiguous United States

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.

    2013-01-01

Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679

  3. Surface ionization ion source with high current

    International Nuclear Information System (INIS)

    Fang Jinqing; Lin Zhizhou; Yu Lihua; Zhan Rongan; Huang Guojun; Wu Jianhua

    1986-04-01

The working principle and structure of a high-current surface ionization ion source are described systematically. Some technological key points of the ion source are given in detail, including: the choice and shaping of the surface ionizer material, heating of the ionizer, distribution of the working vapour on the ionizer surface, flow control, cooling of the non-ionizing surface, and the ion optics. This ion source has been used since 1972 in a 180 deg electromagnetic isotope separator and is suitable for separating isotopes of alkali metals and rare earth metals. For instance, when separating rubidium, the maximum Rb+ ion current extracted from the source is about 120 mA, the maximum ion current accepted by the receiver is about 66 mA, and the average ion current is more than 25 mA. The results show that this ion source has the advantages of high ion current, good ion-beam focusing characteristics, working stability and structural reliability, and it may be extended to other fields. Finally, some interesting phenomena observed in the experiments are discussed briefly, and some problems requiring further investigation are pointed out.

  4. Current measurement studies around the Cesme Peninsula (Turkey)

    International Nuclear Information System (INIS)

    Taspinar, N.

    1989-04-01

In order to design coastal structures and marine vehicles safely, it is necessary to know the current climate, which describes the variation of current characteristics with time. A wide variety of current meters are available today to measure water flow, and each records the influence of its mooring arrangement. Here we describe sea water temperatures, salinities and current velocities measured offshore of Akburun, Tatlicak Burnu, Kalem Burnu and Kizil Burun on the Cesme Peninsula from 27 August 1986 to 19 November 1986. The measured significant maximum and average current velocities were routinely analysed with micro-computers, and the percentage distributions of current velocity were calculated. (author). 8 refs, 6 figs, 4 tabs
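The routine analysis described (maximum and average velocities plus exceedance percentages) reduces to a few lines; the velocity series and threshold below are invented:

```python
def current_stats(velocities_ms, threshold_ms):
    """Maximum, mean, and percentage of observations exceeding a given
    threshold, for a series of measured current velocities (m/s)."""
    maximum = max(velocities_ms)
    average = sum(velocities_ms) / len(velocities_ms)
    pct_above = 100.0 * sum(v > threshold_ms for v in velocities_ms) / len(velocities_ms)
    return maximum, average, pct_above

vmax, vavg, pct = current_stats([0.1, 0.2, 0.4, 0.3, 0.5], threshold_ms=0.25)
```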

  5. Motor current signature analysis for gearbox condition monitoring under transient speeds using wavelet analysis and dual-level time synchronous averaging

    Science.gov (United States)

    Bravo-Imaz, Inaki; Davari Ardakani, Hossein; Liu, Zongchang; García-Arribas, Alfredo; Arnaiz, Aitor; Lee, Jay

    2017-09-01

This paper focuses on analyzing the motor current signature for fault diagnosis of gearboxes operating under transient speed regimes. Two different strategies are evaluated, extensively tested and compared for analyzing the motor current signature in order to implement a condition monitoring system for gearboxes in industrial machinery. A specially designed test bench is used, thoroughly monitored to fully characterize the experiments, in which gears in different health states are tested. The measured signals are analyzed using discrete wavelet decomposition at different decomposition levels with a range of mother wavelets. In addition, a dual-level time synchronous averaging analysis is performed on the same signals to compare the performance of the two methods. From both analyses, the relevant features of the signals are extracted and cataloged using a self-organizing map, which allows for easy detection and classification of the diverse health states of the gears. The results demonstrate the effectiveness of both methods for diagnosing gearbox faults, with slightly better performance observed for the dual-level time synchronous averaging method. Based on the obtained results, the proposed methods can be used as effective and reliable procedures for gearbox condition monitoring using only the motor current signature.
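Of the two strategies, time synchronous averaging is easy to sketch: the signal is folded into segments of one rotation period and averaged, so components not synchronous with the shaft cancel out. This is a minimal single-level sketch on a synthetic signal (the paper's method is dual-level and applied to real motor current):

```python
import numpy as np

def time_synchronous_average(signal, period):
    """Fold the signal into segments one period long and average them;
    non-synchronous components average toward zero (single-level TSA)."""
    n = (len(signal) // period) * period
    return np.asarray(signal[:n]).reshape(-1, period).mean(axis=0)

# Synthetic period-4 gear-mesh pattern buried in noise
rng = np.random.default_rng(0)
pattern = np.array([1.0, -1.0, 0.5, -0.5])
noisy = np.tile(pattern, 100) + 0.3 * rng.standard_normal(400)
tsa = time_synchronous_average(noisy, period=4)
```

Averaging 100 revolutions reduces the noise standard deviation by a factor of 10, so `tsa` recovers the underlying pattern closely.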

  6. A mesic maximum in biological water use demarcates biome sensitivity to aridity shifts.

    Science.gov (United States)

    Good, Stephen P; Moore, Georgianne W; Miralles, Diego G

    2017-12-01

Biome function is largely governed by how efficiently available resources can be used and yet for water, the ratio of direct biological resource use (transpiration, E_T) to total supply (annual precipitation, P) at ecosystem scales remains poorly characterized. Here, we synthesize field, remote sensing and ecohydrological modelling estimates to show that the biological water use fraction (E_T/P) reaches a maximum under mesic conditions; that is, when evaporative demand (potential evapotranspiration, E_P) slightly exceeds supplied precipitation. We estimate that this mesic maximum in E_T/P occurs at an aridity index (defined as E_P/P) between 1.3 and 1.9. The observed global average aridity of 1.8 falls within this range, suggesting that the biosphere is, on average, configured to transpire the largest possible fraction of global precipitation for the current climate. A unimodal E_T/P distribution indicates that both dry regions subjected to increasing aridity and humid regions subjected to decreasing aridity will suffer declines in the fraction of precipitation that plants transpire for growth and metabolism. Given the uncertainties in the prediction of future biogeography, this framework provides a clear and concise determination of ecosystems' sensitivity to climatic shifts, as well as expected patterns in the amount of precipitation that ecosystems can effectively use.
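Classifying a site against the stated thresholds is a one-line computation; the precipitation and E_P figures below are invented, with only the 1.3-1.9 mesic band and the global-average aridity of 1.8 taken from the abstract:

```python
def aridity_classification(precip_mm, pet_mm, mesic_lo=1.3, mesic_hi=1.9):
    """Aridity index E_P/P and whether it falls inside the band where the
    biological water use fraction E_T/P is reported to peak (1.3-1.9)."""
    aridity = pet_mm / precip_mm
    return aridity, mesic_lo <= aridity <= mesic_hi

# A site at the observed global-average aridity of 1.8
aridity, is_mesic = aridity_classification(precip_mm=800.0, pet_mm=1440.0)
```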

  7. MASEX '83, a survey of the turbidity maximum in the Weser Estuary

    International Nuclear Information System (INIS)

    Fanger, H.U.; Neumann, L.; Ohm, K.; Riethmueller, R.

    1986-01-01

A one-week survey of the turbidity maximum in the Weser Estuary was conducted in the Fall of 1983 using the survey ship RV 'Victor Hensen'. Supplemental measurements were taken using in-situ current-conductivity-temperature-turbidity meters. The thickness of the bottom mud was determined using a gamma-ray transmission probe and compared with core sample analysis. The location of no net tidally averaged bottom flow was determined to be at km 57. The off-ship measurements were taken using a CTD probe combined with a light attenuation meter. A comparison between salinity and attenuation gives insight into the relative importance of erosion, sedimentation and advective transport. (orig.)

  8. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally, in Section 5, the beam through the matching section and injected into Linac-1 is discussed.
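Behind any credible-beam-power argument sits the same arithmetic: average power is charge per pulse times beam energy times repetition rate. The numbers below are illustrative only, not the official LCLS safety-envelope values:

```python
def average_beam_power_W(charge_per_pulse_nC, beam_energy_GeV, rep_rate_Hz):
    """Average beam power P = Q * E * f. With charge in nC and energy in
    GeV (numerically, GV per elementary charge), the 1e-9 and 1e9 factors
    cancel and the result is in watts."""
    return charge_per_pulse_nC * 1e-9 * beam_energy_GeV * 1e9 * rep_rate_Hz

# Illustrative numbers: 1 nC per pulse at 14 GeV and 120 Hz -> 1680 W
power = average_beam_power_W(1.0, 14.0, 120.0)
```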

  9. Development of quick-response area-averaged void fraction meter

    International Nuclear Information System (INIS)

    Watanabe, Hironori; Iguchi, Tadashi; Kimura, Mamoru; Anoda, Yoshinari

    2000-11-01

The authors are performing experiments to investigate BWR thermal-hydraulic instability under the coupling of neutronics and thermal-hydraulics. These experiments require instantaneous measurement of the area-averaged void fraction in a rod bundle under high-temperature/high-pressure gas-liquid two-phase flow conditions. Since no existing void fraction meters were suitable for these requirements, we newly developed a practical void fraction meter. The principle of the meter is based on the change of electrical conductance with void fraction in gas-liquid two-phase flow. In this meter, the metal flow channel wall is used as one electrode and an L-shaped line electrode installed at the center of the flow channel is used as the other. This electrode arrangement makes instantaneous measurement of the area-averaged void fraction possible even within the metal flow channel. We performed experiments with air/water two-phase flow to clarify the meter's performance. The results indicated that the void fraction is approximated by α = 1 - I/I_0, where α is the void fraction and I is the current (I_0 is the current at α = 0). This relation holds over the wide void fraction range of 0-70%. The difference between α and 1 - I/I_0 was approximately 10% at maximum; its major causes are the void distribution over the measurement area and electrical insulation of the center electrode by bubbles. The principle and structure of this void fraction meter are very basic and simple, so the meter can be applied to various fields of gas-liquid two-phase flow research. (author)
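The reported calibration inverts directly; a minimal sketch of the conductance relation from the abstract (the example currents are invented):

```python
def void_fraction(current_A, current_at_zero_void_A):
    """Area-averaged void fraction from the meter's conductance relation
    alpha = 1 - I/I_0, reported to hold up to roughly 70% void fraction
    (with errors of about 10% at most)."""
    return 1.0 - current_A / current_at_zero_void_A

alpha = void_fraction(current_A=6.0, current_at_zero_void_A=10.0)  # -> 0.4
```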

  10. The Impacts of Maximum Temperature and Climate Change to Current and Future Pollen Distribution in Skopje, Republic of Macedonia

    Directory of Open Access Journals (Sweden)

    Vladimir Kendrovski

    2012-02-01

BACKGROUND. The goal of the present paper was to assess the impact of the current and future burden of ambient temperature on pollen distributions in Skopje. METHODS. In the study we evaluated the correlation between the concentration of pollen grains in the atmosphere of Skopje and the maximum temperature during the vegetation periods of 1996, 2003, 2007 and 2009, as the current burden in the context of climate change. For the analysis we selected 9 representatives of each phytoallergen group (trees, grasses, weeds). The concentration of pollen grains was monitored by a Lanzoni volumetric pollen trap. The correlation between the concentration of pollen grains in the atmosphere and the selected meteorological variable from weekly monitoring was studied using linear regression and correlation coefficients. RESULTS. The prevalence of sensibilization to standard pollen allergens in Skopje over the same period increased from 16.9% in 1996 to 19.8% in 2009. We detected differences in the onset of flowering and in the maximum and end of the pollen seasons. Pollen distribution and risk increase in 3 main periods: early spring, spring and summer, which are the main cause of allergies during these seasons. The largest increase of air temperature due to climate change in Skopje is expected in the summer season. CONCLUSION. The impacts of climate change through increasing temperature in the next decades will very likely include impacts on pollen production and changes in the current pollen season. [TAF Prev Med Bull 2012; 11(1): 35-40]
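The weekly-regression step can be reproduced with a plain least-squares fit; the temperature/pollen pairs below are invented for illustration, not the study's data:

```python
def linreg(xs, ys):
    """Ordinary least-squares slope, intercept, and Pearson correlation
    coefficient r, as used to relate weekly pollen concentrations to
    maximum temperature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    return slope, my - slope * mx, sxy / (sxx * syy) ** 0.5

# Hypothetical weekly maximum temperature (deg C) vs pollen grains per m^3
slope, intercept, r = linreg([15.0, 18.0, 21.0, 24.0, 27.0],
                             [30.0, 55.0, 80.0, 105.0, 130.0])
```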

  11. Langmuir probe measurements in a time-fluctuating-highly ionized non-equilibrium cutting arc: Analysis of the electron retarding part of the time-averaged current-voltage characteristic of the probe

    Energy Technology Data Exchange (ETDEWEB)

    Prevosto, L.; Mancinelli, B. [Grupo de Descargas Eléctricas, Departamento Ing. Electromecánica, Facultad Regional Venado Tuerto (UTN), Laprida 651, Venado Tuerto (2600) Santa Fe (Argentina); Kelly, H. [Grupo de Descargas Eléctricas, Departamento Ing. Electromecánica, Facultad Regional Venado Tuerto (UTN), Laprida 651, Venado Tuerto (2600) Santa Fe (Argentina); Instituto de Física del Plasma (CONICET), Departamento de Física, Facultad de Ciencias Exactas y Naturales (UBA) Ciudad Universitaria Pab. I, 1428 Buenos Aires (Argentina)

    2013-12-15

This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating, highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data by small-amplitude plasma fluctuations. It was found that the experimental points fall into two well defined groups, allowing two quite different averaged electron temperature values to be defined. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average includes not only a time-average over the fluctuations but also a spatial average along the probe collecting length. Fitting the high-current region of the characteristic using this electron temperature value, together with the corrections given by the fluctuation analysis, showed a relevant departure from local thermal equilibrium in the arc core.
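The analysis of the electron-retarding region rests on the standard exponential relation I_e ∝ exp(eV/kT_e): on a semilog plot, the slope of ln(I) against the probe bias V equals 1/T_e (in eV). A sketch with synthetic data generated at the 0.98 eV value reported (the bias points and current scale are invented):

```python
import math

def electron_temperature_eV(voltages, currents):
    """T_e from the electron-retarding region: with I_e ~ exp(V / T_e[eV]),
    the least-squares slope of ln(I) versus V equals 1 / T_e."""
    logs = [math.log(i) for i in currents]
    n = len(voltages)
    mv, ml = sum(voltages) / n, sum(logs) / n
    slope = sum((v - mv) * (l - ml) for v, l in zip(voltages, logs)) / \
            sum((v - mv) ** 2 for v in voltages)
    return 1.0 / slope

# Synthetic retarding-region points generated at the reported 0.98 eV
Te_true = 0.98
V = [-3.0, -2.5, -2.0, -1.5, -1.0]  # probe bias relative to plasma potential
I = [1e-3 * math.exp(v / Te_true) for v in V]
Te = electron_temperature_eV(V, I)
```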

  12. Langmuir probe measurements in a time-fluctuating-highly ionized non-equilibrium cutting arc: Analysis of the electron retarding part of the time-averaged current-voltage characteristic of the probe

    International Nuclear Information System (INIS)

    Prevosto, L.; Mancinelli, B.; Kelly, H.

    2013-01-01

This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating, highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data by small-amplitude plasma fluctuations. It was found that the experimental points fall into two well defined groups, allowing two quite different averaged electron temperature values to be defined. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average includes not only a time-average over the fluctuations but also a spatial average along the probe collecting length. Fitting the high-current region of the characteristic using this electron temperature value, together with the corrections given by the fluctuation analysis, showed a relevant departure from local thermal equilibrium in the arc core.

  13. Langmuir probe measurements in a time-fluctuating-highly ionized non-equilibrium cutting arc: analysis of the electron retarding part of the time-averaged current-voltage characteristic of the probe.

    Science.gov (United States)

    Prevosto, L; Kelly, H; Mancinelli, B

    2013-12-01

This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating, highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data by small-amplitude plasma fluctuations. It was found that the experimental points fall into two well defined groups, allowing two quite different averaged electron temperature values to be defined. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average includes not only a time-average over the fluctuations but also a spatial average along the probe collecting length. Fitting the high-current region of the characteristic using this electron temperature value, together with the corrections given by the fluctuation analysis, showed a relevant departure from local thermal equilibrium in the arc core.

  14. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate of the discovery process. In this article, we provide an overview of current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes’ pioneering work in the 1950s it has been used not only as a physical law, but also as a reasoning tool that allows us to process the information at hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on ways of utilizing maximum entropy.
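A minimal numerical example of the principle: among all distributions over a discrete set that match a given mean, the least-biased (maximum-entropy) one has exponential form p_i ∝ exp(-λx_i), with the multiplier λ fixed by the constraint. The bisection solver below is a generic sketch, not any method from the article:

```python
import math

def maxent_distribution(values, target_mean, tol=1e-10):
    """Maximum-entropy distribution over discrete `values` subject to a
    mean constraint: p_i proportional to exp(-lam * x_i), with the
    multiplier lam found by bisection (the mean decreases in lam)."""
    def mean_for(lam):
        weights = [math.exp(-lam * x) for x in values]
        z = sum(weights)
        return sum(x * w for x, w in zip(values, weights)) / z
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid  # mean too high: need a larger multiplier
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    weights = [math.exp(-lam * x) for x in values]
    z = sum(weights)
    return [w / z for w in weights]

# Mean constraint equal to the unweighted mean -> the uniform distribution
p = maxent_distribution([1.0, 2.0, 3.0], target_mean=2.0)
```

Tightening the constraint below the unweighted mean (e.g. a target of 1.2) tilts the distribution toward the smaller values, as the exponential form dictates.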

  15. Characteristics of current quenches during disruptions in the J-TEXT tokamak

    International Nuclear Information System (INIS)

    Zhang, Y; Chen, Z Y; Fang, D; Jin, W; Huang, Y H; Wang, Z J; Yang, Z J; Chen, Z P; Ding, Y H; Zhang, M; Zhuang, G

    2012-01-01

    Characteristics of tokamak current quenches are an important issue for the determination of electro-magnetic forces that act on the in-vessel components and vacuum vessel during major disruptions. The characteristics of current quenches in spontaneous disruptions in the J-TEXT tokamak have been investigated. It is shown that the waveforms for the fastest current quenches are more accurately fitted by linear current decays than exponential, although neither is a good fit in many slower cases. The minimum current quench time is about 2.4 ms for the J-TEXT tokamak. The maximum instantaneous current quench rate is more than seven times the average current quench rate in J-TEXT. (paper)
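A linear fit of the decaying current is enough to extract the average quench rate discussed above; the sample points below are synthetic (a 180 kA current quenching linearly in 2.4 ms, the minimum quench time quoted), not J-TEXT data:

```python
def fit_linear_quench(times_ms, currents_kA):
    """Least-squares linear fit I(t) = I0 + rate * t of a plasma current
    quench; returns (I0 in kA, rate in kA/ms). The abstract finds fast
    quenches closer to linear than exponential."""
    n = len(times_ms)
    mt = sum(times_ms) / n
    mi = sum(currents_kA) / n
    rate = sum((t - mt) * (i - mi) for t, i in zip(times_ms, currents_kA)) / \
           sum((t - mt) ** 2 for t in times_ms)
    return mi - rate * mt, rate

# Synthetic linear quench: 180 kA to zero in 2.4 ms
I0, rate = fit_linear_quench([0.0, 0.8, 1.6, 2.4], [180.0, 120.0, 60.0, 0.0])
```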

  16. Uncovering a New Current: The Southwest MAdagascar Coastal Current

    Science.gov (United States)

    Ramanantsoa, Juliano D.; Penven, P.; Krug, M.; Gula, J.; Rouault, M.

    2018-02-01

Cruise data sets, satellite remote sensing observations, and model data analyses are combined to highlight the existence of a coastal surface poleward flow in the southwest of Madagascar: the Southwest MAdagascar Coastal Current (SMACC). The SMACC is a relatively shallow current whose surface signature extends from 22°S (upstream) to 26.4°S (downstream). The SMACC exhibits a seasonal variability: more intense in summer and reduced in winter. The average volume transport of its core is about 1.3 Sv, with a mean summer maximum of 2.1 Sv. It is forced by a strong cyclonic wind stress curl associated with the bending of the trade winds along the southern tip of Madagascar. The SMACC directly influences the coastal upwelling regions south of Madagascar. Its existence is likely to influence local fisheries and larval transport patterns, as well as the connectivity with the Agulhas Current, affecting the returning branch of the global overturning circulation.
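Volume transport in Sverdrups is simply mean velocity times cross-sectional area; the width and depth below are invented, chosen only so the result matches the 1.3 Sv core transport quoted:

```python
def volume_transport_Sv(mean_velocity_ms, width_m, depth_m):
    """Volume transport = mean velocity x cross-sectional area,
    converted to Sverdrups (1 Sv = 1e6 m^3/s)."""
    return mean_velocity_ms * width_m * depth_m / 1e6

# Invented section (50 km wide, 65 m deep) at 0.4 m/s -> 1.3 Sv
transport = volume_transport_Sv(0.4, 50_000.0, 65.0)
```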

  17. Modelling maximum canopy conductance and transpiration in ...

    African Journals Online (AJOL)

    There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...

  18. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
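
The Hargreaves step of the water-balance estimation described above can be sketched as follows; the equation is the standard Hargreaves form, but the sample temperatures, radiation, and precipitation are made-up cell values, not the Bolivian dataset:

```python
def hargreaves_et0(t_max, t_min, ra_mm_per_day):
    """Reference evapotranspiration [mm/day] via the Hargreaves model.

    t_max, t_min: monthly average maximum/minimum temperature [deg C]
    ra_mm_per_day: extraterrestrial (clear-sky exoatmospheric) solar
                   radiation expressed as its evaporation equivalent [mm/day]
    """
    t_mean = 0.5 * (t_max + t_min)
    return 0.0023 * (t_mean + 17.8) * (t_max - t_min) ** 0.5 * ra_mm_per_day

# Hypothetical 1 km cell: ET0 for the month, then the monthly water balance
# as precipitation minus atmospheric evaporative demand (30-day month).
et0 = hargreaves_et0(t_max=24.0, t_min=10.0, ra_mm_per_day=14.5)
water_balance = 90.0 - et0 * 30.0   # [mm/month], negative means a deficit
```

Repeating this per cell and per month reproduces the paper's workflow: temperature and radiation layers in, evaporative demand and water balance layers out.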

  19. Verification of average daily maximum permissible concentration of styrene in the atmospheric air of settlements under the results of epidemiological studies of the children’s population

    Directory of Open Access Journals (Sweden)

    М.А. Zemlyanova

    2015-03-01

    Full Text Available We presented the materials on the verification of the average daily maximum permissible concentration of styrene in the atmospheric air of settlements, performed on the basis of our own in-depth epidemiological studies of the children's population according to the principles of international risk assessment practice. It was established that children aged 4-7 years exposed to styrene at levels above 1.2 times the threshold level value for continuous exposure develop negative effects in the form of disorders of hormonal regulation, pigmentary exchange, antioxidative activity, cytolysis, immune reactivity and cytogenetic imbalance, which contribute to an increased incidence of diseases of the central nervous system, endocrine system, respiratory organs, digestion and skin. Based on the proved cause-and-effect relationships between the biomarkers of negative effects and the styrene concentration in blood, it was demonstrated that the benchmark styrene concentration in blood is 0.002 mg/dm3. The justified value complies with and confirms the average daily styrene concentration in the air of settlements at the level of 0.002 mg/m3 accepted in Russia, which provides safety for the health of the population (1 threshold level value for continuous exposure).

  20. A METHOD FOR DETERMINING THE RADIALLY-AVERAGED EFFECTIVE IMPACT AREA FOR AN AIRCRAFT CRASH INTO A STRUCTURE

    Energy Technology Data Exchange (ETDEWEB)

    Walker, William C. [ORNL

    2018-02-01

    This report presents a methodology for deriving the equations which can be used for calculating the radially-averaged effective impact area for a theoretical aircraft crash into a structure. Conventionally, a maximum effective impact area has been used in calculating the probability of an aircraft crash into a structure. Whereas the maximum effective impact area is specific to a single direction of flight, the radially-averaged effective impact area takes into consideration the real life random nature of the direction of flight with respect to a structure. Since the radially-averaged effective impact area is less than the maximum effective impact area, the resulting calculated probability of an aircraft crash into a structure is reduced.
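
The radial averaging idea above can be illustrated numerically. The footprint function below is a generic, hypothetical direction-dependent effective area (projected width of a rectangular structure plus an assumed widening term, times an assumed skid length); it is not the report's derivation, only a sketch of averaging over all flight headings:

```python
import math

# Hypothetical facility footprint and impact parameters (assumed values):
L, W = 60.0, 30.0   # structure plan dimensions [m]
S = 20.0            # assumed wingspan-related widening term [m]
SKID = 100.0        # assumed skid length along the flight path [m]

def effective_area(theta):
    """Illustrative effective impact area for flight heading theta [rad]:
    width of the structure projected normal to the flight path, widened
    by S, times the assumed skid length."""
    width = abs(L * math.sin(theta)) + abs(W * math.cos(theta)) + S
    return width * SKID

# Radially averaged effective area: the mean of A(theta) over all headings,
# versus the single worst-case (maximum) heading used conventionally.
n = 3600
headings = [2.0 * math.pi * k / n for k in range(n)]
a_avg = sum(effective_area(th) for th in headings) / n
a_max = max(effective_area(th) for th in headings)
# a_avg < a_max, so a crash probability scaled by a_avg is smaller.
```

This reproduces the report's qualitative conclusion: because the radial average is below the maximum over headings, the resulting crash probability estimate is reduced.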

  1. Results from transcranial Doppler examination on children and adolescents with sickle cell disease and correlation between the time-averaged maximum mean velocity and hematological characteristics: a cross-sectional analytical study

    Directory of Open Access Journals (Sweden)

    Mary Hokazono

    Full Text Available CONTEXT AND OBJECTIVE: Transcranial Doppler (TCD) detects stroke risk among children with sickle cell anemia (SCA). Our aim was to evaluate TCD findings in patients with different sickle cell disease (SCD) genotypes and correlate the time-averaged maximum mean (TAMM) velocity with hematological characteristics. DESIGN AND SETTING: Cross-sectional analytical study in the Pediatric Hematology sector, Universidade Federal de São Paulo. METHODS: 85 SCD patients of both sexes, aged 2-18 years, were evaluated, divided into: group I (62 patients with SCA/Sβ0 thalassemia) and group II (23 patients with SC hemoglobinopathy/Sβ+ thalassemia). TCD was performed and reviewed by a single investigator using Doppler ultrasonography with a 2 MHz transducer, in accordance with the Stroke Prevention Trial in Sickle Cell Anemia (STOP) protocol. The hematological parameters evaluated were: hematocrit, hemoglobin, reticulocytes, leukocytes, platelets and fetal hemoglobin. Univariate analysis was performed and Pearson's coefficient was calculated for hematological parameters and TAMM velocities (P < 0.05). RESULTS: TAMM velocities were 137 ± 28 and 103 ± 19 cm/s in groups I and II, respectively, and correlated negatively with hematocrit and hemoglobin in group I. There was one abnormal result (1.6%) and five conditional results (8.1%) in group I. All results were normal in group II. Middle cerebral arteries were the only vessels affected. CONCLUSION: There was a low prevalence of abnormal Doppler results in patients with sickle cell disease. Time-averaged maximum mean velocity differed significantly between the genotypes and correlated with hematological characteristics.

  2. LANSCE Beam Current Limiter (XL)

    International Nuclear Information System (INIS)

    Gallegos, F.R.; Hall, M.J.

    1997-01-01

    The Radiation Security System (RSS) at the Los Alamos Neutron Science Center (LANSCE) is an engineered safety system that provides personnel protection from prompt radiation due to accelerated proton beams. The Beam Current Limiter (XL), as an active component of the RSS, limits the maximum average current in a beamline, and thus the current available for a beam spill accident. Exceeding the pre-set limit initiates action by the RSS to mitigate the hazard (insertion of beam stoppers in the low energy beam transport). The beam limiter is an electrically isolated, toroidal transformer and associated electronics. The device was designed to continuously monitor beamline currents independent of any external timing. Fail-safe operation was a prime consideration in its development. Fail-safe operation is defined as functioning as intended (due to redundant circuitry), functioning with a more sensitive fault threshold, or generating a fault condition. This report describes the design philosophy, hardware, implementation, operation, and limitations of the device.

  3. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
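
The entropy lower bound stated above is easy to evaluate directly; a minimal sketch, where the probability distribution is a made-up example rather than one from the book:

```python
import math

def entropy_lower_bound(probs, k):
    """Lower bound on the minimum average depth of a decision tree for a
    diagnostic problem over a k-valued information system: H(p) / log2(k)."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return h / math.log2(k)

# Uniform distribution over 8 equally likely cases, binary (k = 2)
# attributes: the bound is exactly 3 queries on average.
bound = entropy_lower_bound([1.0 / 8.0] * 8, k=2)
```

For a problem with a complete set of attributes, the chapter's result says the true minimum average depth lies between this bound and the bound plus one.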

  4. Persistent current of relativistic electrons on a Dirac ring in presence of impurities

    KAUST Repository

    Ghosh, Sumit; Saha, Arijit

    2014-01-01

    We study the behaviour of persistent current of relativistic electrons on a one dimensional ring in presence of attractive/repulsive scattering potentials. In particular, we investigate the persistent current in accordance with the strength as well as the number of the scattering potential. We find that in presence of single scatterer the persistent current becomes smaller in magnitude than the scattering free scenario. This behaviour is similar to the non-relativistic case. Even for a very strong scattering potential, finite amount of persistent current remains for a relativistic ring. In presence of multiple scatterer we observe that the persistent current is maximum when the scatterers are placed uniformly compared to the current averaged over random configurations. However if we increase the number of scatterers, we find that the random averaged current increases with the number of scatterers. The latter behaviour is in contrast to the non-relativistic case. © 2014 EDP Sciences, SIF, Springer-Verlag Berlin Heidelberg.

  6. Initial Beam Dynamics Simulations of a High-Average-Current Field-Emission Electron Source in a Superconducting RadioFrequency Gun

    Energy Technology Data Exchange (ETDEWEB)

    Mohsen, O. [Northern Illinois U.; Gonin, I. [Fermilab; Kephart, R. [Fermilab; Khabiboulline, T. [Fermilab; Piot, P. [Northern Illinois U.; Solyak, N. [Fermilab; Thangaraj, J. C. [Fermilab; Yakovlev, V. [Fermilab

    2018-01-05

    High-power electron beams are sought-after tools in support of a wide array of societal applications. This paper investigates the production of high-power electron beams by combining a high-current field-emission electron source with a superconducting radio-frequency (SRF) cavity. We carry out beam-dynamics simulations that demonstrate the viability of the scheme to form a $\sim$300 kW average-power electron beam using a 1+1/2-cell SRF gun.

  7. User's guide for SLWDN9, a code for calculating flux-surfaced-averaging of alpha densities, currents, and heating in non-circular tokamaks

    International Nuclear Information System (INIS)

    Hively, L.M.; Miley, G.M.

    1980-03-01

    The code calculates flux-surface-averaged values of alpha density, current, and electron/ion heating profiles in realistic, non-circular tokamak plasmas. The code is written in FORTRAN and executes on the CRAY-1 machine at the Magnetic Fusion Energy Computer Center

  8. Latitudinal Change of Tropical Cyclone Maximum Intensity in the Western North Pacific

    Directory of Open Access Journals (Sweden)

    Jae-Won Choi

    2016-01-01

    Full Text Available This study obtained the latitude at which tropical cyclones (TCs) reach maximum intensity and applied statistical change-point analysis to the time series of its annual average values. The analysis found that the latitude of TC maximum intensity has increased since 1999. To investigate the reason behind this phenomenon, the difference between the average latitude during 1999–2013 and that during 1977–1998 was analyzed. In the difference of the 500 hPa streamlines between the two periods, anomalous anticyclonic circulations were strong at 30°–50°N, while an anomalous monsoon trough was located north of the South China Sea. This anomalous monsoon trough extended eastward to 145°E. The middle-latitude region of East Asia is affected by anomalous southeasterlies due to these anomalous anticyclonic circulations and the anomalous monsoon trough. These anomalous southeasterlies act as anomalous steering flows that direct TCs toward the middle-latitude region of East Asia. As a result, TCs during 1999–2013 reached maximum intensity at higher latitudes than TCs during 1977–1998.

  9. Optimisation of sea surface current retrieval using a maximum cross correlation technique on modelled sea surface temperature

    Science.gov (United States)

    Heuzé, Céline; Eriksson, Leif; Carvajal, Gisela

    2017-04-01

    Using sea surface temperature from satellite images to retrieve sea surface currents is not a new idea, but so far its operational near-real-time implementation has not been possible. Validation studies are too region-specific or uncertain, due to the errors induced by the images themselves. Moreover, the sensitivity of the most common retrieval method, the maximum cross correlation, to the three parameters that have to be set is unknown. Using model outputs instead of satellite images, biases induced by this method are assessed here for four different seas of Western Europe, and the best of nine settings and eight temporal resolutions are determined. For all regions, tracking a small 5 km pattern from the first image over a large 30 km region around its original location on a second image, separated from the first image by 6 to 9 hours, returned the most accurate results. Moreover, for all regions, the problem is not inaccurate results but missing results, where the velocity is too low to be picked up by the retrieval. The results are consistent both with limitations caused by ocean surface current dynamics and with the available satellite technology, indicating that automated sea surface current retrieval from sea surface temperature images is feasible now, for search and rescue operations, pollution confinement or even for more energy-efficient and comfortable ship navigation.
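
The maximum cross correlation method described above can be sketched in a few lines: track a small template from the first temperature field inside a larger search region of the second, and convert the best-matching shift into a velocity. The grid spacing, window sizes, and synthetic fields below are illustrative assumptions, not the study's settings:

```python
import numpy as np

def mcc_velocity(sst1, sst2, center, half_tpl, half_search, dx_m, dt_s):
    """Maximum cross correlation (MCC) sketch: find the shift of the SST
    pattern around `center` that maximizes the normalized cross correlation
    between the two images. Returns (u, v) in m/s for grid spacing dx_m [m]
    and image separation dt_s [s]."""
    ci, cj = center
    tpl = sst1[ci - half_tpl:ci + half_tpl + 1, cj - half_tpl:cj + half_tpl + 1]
    tpl = tpl - tpl.mean()
    best, best_shift = -np.inf, (0, 0)
    for di in range(-half_search, half_search + 1):
        for dj in range(-half_search, half_search + 1):
            win = sst2[ci + di - half_tpl:ci + di + half_tpl + 1,
                       cj + dj - half_tpl:cj + dj + half_tpl + 1]
            win = win - win.mean()
            denom = np.sqrt((tpl ** 2).sum() * (win ** 2).sum())
            if denom == 0:
                continue
            r = (tpl * win).sum() / denom
            if r > best:
                best, best_shift = r, (di, dj)
    di, dj = best_shift
    return dj * dx_m / dt_s, di * dx_m / dt_s  # (u eastward, v northward)

# Synthetic check: a random SST field advected 2 cells eastward between
# two images 6 hours apart on a 1 km grid.
rng = np.random.default_rng(1)
sst1 = rng.normal(15.0, 0.5, (64, 64))
sst2 = np.roll(sst1, 2, axis=1)
u, v = mcc_velocity(sst1, sst2, center=(32, 32), half_tpl=2,
                    half_search=6, dx_m=1000.0, dt_s=6 * 3600.0)
```

The "missing results" issue the abstract mentions shows up here naturally: a displacement smaller than one grid cell over dt_s maps to a zero shift, so slow currents fall below the method's resolution.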

  10. Microprocessor Controlled Maximum Power Point Tracker for Photovoltaic Application

    International Nuclear Information System (INIS)

    Jiya, J. D.; Tahirou, G.

    2002-01-01

    This paper presents a microprocessor-controlled maximum power point tracker for a photovoltaic module. Input current and voltage are measured and multiplied within the microprocessor, which contains an algorithm to seek the maximum power point. The duty cycle of the DC-DC converter at which the maximum power occurs is obtained, noted and adjusted. The microprocessor constantly seeks to improve the obtained power by varying the duty cycle.
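
A seek-by-varying-the-duty-cycle loop of the kind described above is commonly implemented as perturb-and-observe; the sketch below uses that standard scheme with a made-up panel model (the paper does not publish its exact algorithm, so names, step size, and the toy curve are assumptions):

```python
def perturb_and_observe(read_panel, duty, step=0.01, iters=100):
    """Perturb-and-observe MPPT sketch: nudge the converter duty cycle and
    keep the direction that increased the measured panel power.
    `read_panel(duty)` must return (voltage, current) at that duty cycle."""
    v, i = read_panel(duty)
    last_power = v * i
    direction = 1.0
    for _ in range(iters):
        duty = min(max(duty + direction * step, 0.0), 1.0)
        v, i = read_panel(duty)
        power = v * i
        if power < last_power:   # the last perturbation hurt: reverse it
            direction = -direction
        last_power = power
    return duty

# Toy panel model (hypothetical): power peaks at duty cycle 0.6.
def toy_panel(duty):
    power = 100.0 - 400.0 * (duty - 0.6) ** 2
    return power / 2.0, 2.0      # (voltage, current), fixed 2 A

d = perturb_and_observe(toy_panel, duty=0.3)
```

Once converged, the tracker oscillates within one `step` of the maximum power point, which is the expected steady-state behaviour of perturb-and-observe.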

  11. Three-level grid-connected photovoltaic inverter with maximum power point tracking

    International Nuclear Information System (INIS)

    Tsang, K.M.; Chan, W.L.

    2013-01-01

    Highlight: ► This paper reports a novel 3-level grid connected photovoltaic inverter. ► The inverter features maximum power point tracking and grid current shaping. ► The inverter can act as an active filter and a renewable power source. - Abstract: This paper presents a systematic way of designing a control scheme for a grid-connected photovoltaic (PV) inverter featuring maximum power point tracking (MPPT) and grid current shaping. Unlike conventional designs, only four power switches are required to achieve three output levels, and it is not necessary to use any phase-locked-loop circuitry. For the proposed scheme, a simple integral controller has been designed for tracking the maximum power point of a PV array based on an improved extremum seeking control method. For the grid-connected inverter, a current loop controller and a voltage loop controller have been designed. The current loop controller is designed to shape the inverter output current, while the voltage loop controller can maintain the capacitor voltage at a certain level and provide a reference inverter output current for the PV inverter without affecting the maximum power point of the PV array. Experimental results are included to demonstrate the effectiveness of the tracking and control scheme.

  12. The use of the average plutonium-content for criticality evaluation of boiling water reactor mixed oxide-fuel transport and storage packages

    International Nuclear Information System (INIS)

    Mattera, C.

    2003-01-01

    Currently in France, criticality studies in transport configurations for Boiling Water Reactor Mixed Oxide fuel assemblies are based on the conservative hypothesis that all rods (Mixed Oxide (Uranium and Plutonium) rods, Uranium Oxide rods, and Uranium-Gadolinium Oxide rods) are Mixed Oxide rods with the same Plutonium content, corresponding to the maximum value. In that way, the real heterogeneous mapping of the assembly is masked and covered by a homogeneous Plutonium-content assembly, enriched at the maximum value. As this calculation hypothesis is extremely conservative, Cogema Logistics (formerly Transnucleaire) has studied a new calculation method based on the use of the average Plutonium content in the criticality studies. The use of the average Plutonium content instead of the real Plutonium-content profiles yields a higher reactivity value, which makes it globally conservative. The method can be applied to all Boiling Water Reactor Mixed Oxide complete fuel assemblies of type 8 x 8, 9 x 9 and 10 x 10 whose Plutonium content in mass weight does not exceed 15%; it provides advantages which are discussed in the paper. (author)

  13. Electron density distribution in Si and Ge using multipole, maximum ...

    Indian Academy of Sciences (India)

    Si and Ge has been studied using multipole, maximum entropy method (MEM) and ... and electron density distribution using the currently available versatile ..... data should be subjected to maximum possible utility for the characterization of.

  14. Comparing modeled and observed changes in mineral dust transport and deposition to Antarctica between the Last Glacial Maximum and current climates

    Energy Technology Data Exchange (ETDEWEB)

    Albani, Samuel [University of Siena, Graduate School in Polar Sciences, Siena (Italy); University of Milano-Bicocca, Department of Environmental Sciences, Milano (Italy); Cornell University, Department of Earth and Atmospheric Sciences, Ithaca, NY (United States); Mahowald, Natalie M. [Cornell University, Department of Earth and Atmospheric Sciences, Ithaca, NY (United States); Delmonte, Barbara; Maggi, Valter [University of Milano-Bicocca, Department of Environmental Sciences, Milano (Italy); Winckler, Gisela [Columbia University, Lamont-Doherty Earth Observatory, Palisades, NY (United States); Columbia University, Department of Earth and Environmental Sciences, New York, NY (United States)

    2012-05-15

    Mineral dust aerosols represent an active component of the Earth's climate system, by interacting with radiation directly, and by modifying clouds and biogeochemistry. Mineral dust from polar ice cores over the last million years can be used as a paleoclimate proxy, and provides unique information about climate variability, as changes in dust deposition at the core sites can be due to changes in sources, transport and/or deposition locally. Here we present results from a study based on climate model simulations using the Community Climate System Model. The focus of this work is to analyze simulated differences in the dust concentration, size distribution and sources in current climate conditions and during the Last Glacial Maximum at specific ice core locations in Antarctica, and to compare with available paleodata. Model results suggest that South America is the most important source for dust deposited in Antarctica in the current climate, but Australia is also a major contributor and there is spatial variability in the relative importance of the major dust sources. During the Last Glacial Maximum the dominant source in the model was South America, because of the increased activity of glaciogenic dust sources in Southern Patagonia-Tierra del Fuego and the Southernmost Pampas regions, as well as an increase in transport efficiency southward. Dust emitted from the Southern Hemisphere dust source areas usually follows zonal patterns, but southward flow towards Antarctica is located in specific areas characterized by southward displacement of air masses. Observations and model results consistently suggest a spatially variable shift in dust particle sizes. This is due to a combination of relatively reduced en route wet removal, favouring a generalized shift towards smaller particles, and an enhanced relative contribution of dry coarse-particle deposition in the Last Glacial Maximum. (orig.)

  15. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well-engineered renewable remote energy system utilizing the principle of Maximum Power Point Tracking can be more cost effective, has higher reliability and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Practical field measurements show that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for small Remote Area Power Supply systems. The advantages at larger temperature variations and larger power ratings are much higher. Other advantages include optimal sizing and system monitoring and control

  16. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

    We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three, due to Heckman and Thomas [Discrete Math 233 (2001), 233–237], to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1) / 7.

  17. Zipf's law, power laws and maximum entropy

    International Nuclear Information System (INIS)

    Visser, Matt

    2013-01-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
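
The single-constraint construction the author describes can be sketched as a standard Lagrange-multiplier calculation (this is the textbook derivation, not a quotation from the paper): maximize the Shannon entropy subject to normalization and a fixed average of the logarithm,

```latex
\max_{p}\; S[p] = -\sum_{x} p(x)\ln p(x)
\quad\text{subject to}\quad
\sum_{x} p(x) = 1,\qquad
\sum_{x} p(x)\ln x = \langle \ln x \rangle .
```

Setting the derivative of the Lagrangian with respect to each p(x) to zero gives

```latex
-\ln p(x) - 1 - \lambda_{0} - \lambda \ln x = 0
\;\;\Longrightarrow\;\;
p(x) = e^{-1-\lambda_{0}}\, x^{-\lambda} \;\propto\; x^{-\lambda},
```

a power law whose exponent λ is fixed by the constraint value ⟨ln x⟩, with no cost function needed beyond the single constraint.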

  18. Effect of tank geometry on its average performance

    Science.gov (United States)

    Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.

    2018-03-01

    The mathematical model of non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride is presented in the paper. Calculations are given of the average productivity, heat exchange area, and filling time of tanks of various volumes with smooth inner walls, depending on their height-to-radius (H/R) ratio, as well as of the average productivity, filling degree, and filling time of a horizontally ribbed tank of volume 6×10⁻² m³ as the central hole diameter of the ribs is changed. It has been shown that growth of the H/R ratio in tanks with smooth inner walls up to the limiting values significantly increases the tank average productivity and reduces its filling time. Growth of the H/R ratio of a 1.0 m³ tank to the limiting values (in comparison with the standard tank having H/R equal to 3.49) augments tank productivity by 23.5% and the heat exchange area by 20%. Besides, we have demonstrated that the maximum average productivity and a minimum filling time are reached for the 6×10⁻² m³ tank having a central rib hole diameter of 6.4×10⁻² m.

  19. An Experimental Observation of Axial Variation of Average Size of Methane Clusters in a Gas Jet

    International Nuclear Information System (INIS)

    Ji-Feng, Han; Chao-Wen, Yang; Jing-Wei, Miao; Jian-Feng, Lu; Meng, Liu; Xiao-Bing, Luo; Mian-Gong, Shi

    2010-01-01

    Axial variation of the average size of methane clusters in a gas jet produced by supersonic expansion of methane through a cylindrical nozzle of 0.8 mm in diameter is observed using a Rayleigh scattering method. The scattered light intensity exhibits a power scaling with backing pressure ranging from 16 to 50 bar, and the power is strongly Z dependent, varying from 8.4 (Z = 3 mm) to 5.4 (Z = 11 mm), which is much larger than that of the argon cluster. The scattered light intensity versus axial position shows that the position Z = 5 mm has the maximum signal intensity. Estimation of the average cluster size as a function of axial position Z indicates that the cluster growth process continues until the maximum average cluster size is reached at Z = 9 mm, and the average cluster size decreases gradually for Z > 9 mm
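
Extracting a power-law exponent like the 8.4 quoted above is a log-log linear fit; the sketch below uses illustrative intensity values generated to follow that scaling, not the measured data:

```python
import numpy as np

# Hypothetical scattered-intensity readings versus backing pressure at one
# axial position, constructed to follow I ∝ P^8.4 (illustrative only):
pressure = np.array([16.0, 20.0, 25.0, 32.0, 40.0, 50.0])  # bar
intensity = 2.0e-6 * pressure ** 8.4                        # arb. units

# The scaling exponent is the slope of log(I) against log(P).
exponent, _ = np.polyfit(np.log(pressure), np.log(intensity), 1)
```

Repeating the fit at each axial position Z yields the Z-dependent exponent reported in the abstract.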

  20. Choice of initial operating parameters for high average current linear accelerators

    International Nuclear Information System (INIS)

    Batchelor, K.

    1976-01-01

    Recent emphasis on alternative energy sources, together with the need for intense neutron sources for testing of materials for CTR, has resulted in renewed interest in high current (approximately 100 mA) c.w. proton and deuteron linear accelerators. In designing an accelerator for such high currents, it is evident that beam losses in the machine must be minimized, which implies well-matched beams, and that adequate acceptance under severe space charge conditions must be provided. An investigation is presented of the input parameters to an Alvarez-type drift-tube accelerator resulting from such factors. The analysis indicates that an accelerator operating at a frequency of 50 MHz is capable of accepting deuteron currents of about 0.4 amperes and proton currents of about 1.2 amperes. These values depend critically on the assumed values of beam emittance and on the ability to properly "match" this to the linac acceptance

  1. Maximum power analysis of photovoltaic module in Ramadi city

    Energy Technology Data Exchange (ETDEWEB)

    Shahatha Salim, Majid; Mohammed Najim, Jassim [College of Science, University of Anbar (Iraq); Mohammed Salih, Salih [Renewable Energy Research Center, University of Anbar (Iraq)

    2013-07-01

    Performance of a photovoltaic (PV) module is greatly dependent on solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output and energy yield of a PV module. In this paper, the maximum PV power which can be obtained in Ramadi city (100 km west of Baghdad) is practically analyzed. The analysis is based on real irradiance values obtained for the first time by using a Soly2 sun tracker device. Proper and adequate information on solar radiation and its components at a given location is essential in the design of solar energy systems. The solar irradiance data in Ramadi city were analyzed for the first three months of 2013. The data were measured at the earth's surface in the campus area of Anbar University. Actual average readings were taken from the data logger of the sun tracker system, which was set to save the average of readings taken every second over each two-minute interval. The data are analyzed from January to the end of March 2013. Maximum daily readings and monthly average readings of solar irradiance have been analyzed to optimize the output of photovoltaic solar modules. The results show that PV system sizing can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of the PV modules.

  2. Preliminary analysis of the afforestation role in the maximum runoff in Valea Rece Catchment

    Directory of Open Access Journals (Sweden)

    Mihalcea Andreea

    2017-06-01

    Full Text Available The aim of this article is to demonstrate the role of afforestation in maximum surface runoff. To this end, a comparison was made between flows simulated under the current afforestation conditions and flows simulated under afforestation and deforestation scenarios in the Valea Rece catchment. Using the HEC-HMS 4.1 hydrologic modeling software and the SCS Curve Number unit hydrograph method, flows of the river Valea Rece were simulated at the closing section of the basin for precipitation amounts of 30, 50, 80 and 120 mm falling in intervals of 1.3 to 6 hours on soil with varying degrees of moisture: dry soil, average soil moisture and high humidity. This was done for the current degree of afforestation of the basin, for a possible afforestation that would increase the afforestation degree to 80%, and for a possible deforestation that would lower it to 15%.
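
The runoff-depth core of the SCS Curve Number method used above can be sketched directly; the formula is the standard one with the usual initial abstraction Ia = 0.2 S, but the curve numbers below are illustrative, not the study's calibrated values:

```python
def scs_runoff_mm(precip_mm, curve_number):
    """SCS Curve Number direct-runoff depth [mm] (standard form, Ia = 0.2 S).

    precip_mm: storm rainfall depth [mm]; curve_number: 0 < CN <= 100.
    """
    s = 25400.0 / curve_number - 254.0   # potential maximum retention [mm]
    ia = 0.2 * s                         # initial abstraction [mm]
    if precip_mm <= ia:
        return 0.0
    return (precip_mm - ia) ** 2 / (precip_mm - ia + s)

# Afforestation lowers CN, so the same 80 mm storm yields less direct runoff
# (CN = 60 vs CN = 85 are assumed example values for forested vs cleared land):
q_forested = scs_runoff_mm(80.0, curve_number=60)
q_cleared = scs_runoff_mm(80.0, curve_number=85)
```

This single relation is why the afforestation and deforestation scenarios diverge most strongly for the larger storms and the wetter antecedent soil conditions.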

  3. A Two-Stage Information-Theoretic Approach to Modeling Landscape-Level Attributes and Maximum Recruitment of Chinook Salmon in the Columbia River Basin.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, William L.; Lee, Danny C.

    2000-11-01

    Many anadromous salmonid stocks in the Pacific Northwest are at their lowest recorded levels, which has raised questions regarding their long-term persistence under current conditions. There are a number of factors, such as freshwater spawning and rearing habitat, that could potentially influence their numbers. Therefore, we used the latest advances in information-theoretic methods in a two-stage modeling process to investigate relationships between landscape-level habitat attributes and maximum recruitment of 25 index stocks of chinook salmon (Oncorhynchus tshawytscha) in the Columbia River basin. Our first-stage model selection results indicated that the Ricker-type stock-recruitment model with a constant Ricker a (i.e., recruits per spawner at low numbers of fish) across stocks was the only plausible one given these data, which contrasted with previous unpublished findings. Our second-stage results revealed that maximum recruitment of chinook salmon had a strongly negative relationship with the percentage of surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and private moderate-high impact managed forest. That is, our model predicted that average maximum recruitment of chinook salmon would decrease by at least 247 fish for every increase of 33% in surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and privately managed forest. Conversely, mean annual air temperature had a positive relationship with salmon maximum recruitment, with an average increase of at least 179 fish for every 2 °C increase in mean annual air temperature.
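
The Ricker-type stock-recruitment model named above has a closed form for its maximum, which is easy to verify; the parameter values below are illustrative, not the fitted ones:

```python
import math

def ricker_recruits(spawners, a, b):
    """Ricker stock-recruitment curve: R = a * S * exp(-b * S),
    where a is recruits per spawner at low stock size."""
    return a * spawners * math.exp(-b * spawners)

# The curve peaks at S = 1/b, giving maximum recruitment R_max = a / (b * e).
a, b = 4.0, 0.001      # assumed example parameters
s_peak = 1.0 / b
r_max = ricker_recruits(s_peak, a, b)
# r_max equals a / (b * e) by differentiating R(S) and setting R'(S) = 0.
```

It is this R_max quantity, per stock, that the second-stage model regresses against the landscape-level habitat attributes.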

  4. 20 CFR 10.806 - How are the maximum fees defined?

    Science.gov (United States)

    2010-04-01

    ... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees.../Current Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time...

  5. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)

    2011-04-07

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak, Ū_P, the average, Ū, the effective, U_eff, or the maximum peak, U_P, tube voltage. This work proposed a method for determination of the PPV from measurements with a kV-meter that measures the average, Ū, or the average peak, Ū_P, voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak, k_PPV,kVp, and the average, k_PPV,Uav, conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference and the calculated (according to the proposed method) PPV values were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous basis for determining the PPV with kV-meters from Ū_P and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.

  6. Flux surface shape and current profile optimization in tokamaks

    International Nuclear Information System (INIS)

    Dobrott, D.R.; Miller, R.L.

    1977-01-01

    Axisymmetric tokamak equilibria of noncircular cross section are analyzed numerically to study the effects of flux surface shape and current profile on ideal and resistive interchange stability. Various current profiles are examined for circles, ellipses, dees, and doublets. A numerical code separately analyzes stability in the neighborhood of the magnetic axis and in the remainder of the plasma using the criteria of Mercier and Glasser, Greene, and Johnson. Results are interpreted in terms of flux surface averaged quantities such as magnetic well, shear, and the spatial variation in the magnetic field energy density over the cross section. The maximum stable β is found to vary significantly with shape and current profile. For current profiles varying linearly with poloidal flux, the highest β's found were for doublets. Finally, an algorithm is presented which optimizes the current profile for circles and dees by making the plasma everywhere marginally stable

  7. Research of long pulse high current diode radial insulation

    International Nuclear Information System (INIS)

    Tan Jie; Chang Anbi; Hu Kesong; Liu Qingxiang; Ma Qiaosheng; Liu Zhong

    2002-01-01

    A radial insulation structure used in a long-pulse high-current diode is introduced. The theory of vacuum flashover and the design approach are briefly introduced. In this research, a cone-shaped insulator was used. The geometric structure parameters were optimized by simulating the static electric field distribution. The experiment was done on a pulsed power source with 200 ns pulse width. A maximum voltage of 750 kV was obtained, and the average stand-off electric field of the insulator is about 50 kV/cm.
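
The two quoted figures together imply an insulator surface length of roughly V/E ≈ 15 cm; a one-line arithmetic check:

```python
def flashover_length_cm(voltage_kv: float, avg_field_kv_per_cm: float) -> float:
    """Insulator length implied by the average stand-off electric field."""
    return voltage_kv / avg_field_kv_per_cm

length = flashover_length_cm(750.0, 50.0)   # abstract's numbers -> 15.0 cm
```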

  8. Entanglement in random pure states: spectral density and average von Neumann entropy

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Santosh; Pandey, Akhilesh, E-mail: skumar.physics@gmail.com, E-mail: ap0700@mail.jnu.ac.in [School of Physical Sciences, Jawaharlal Nehru University, New Delhi 110 067 (India)

    2011-11-04

    Quantum entanglement plays a crucial role in quantum information, quantum teleportation and quantum computation. The information about the entanglement content between subsystems of the composite system is encoded in the Schmidt eigenvalues. We derive here closed expressions for the spectral density of Schmidt eigenvalues for all three invariant classes of random matrix ensembles. We also obtain exact results for average von Neumann entropy. We find that maximum average entanglement is achieved if the system belongs to the symplectic invariant class. (paper)

  9. Maximum field capability of energy saver superconducting magnets

    International Nuclear Information System (INIS)

    Turkot, F.; Cooper, W.E.; Hanft, R.; McInturff, A.

    1983-01-01

    At an energy of 1 TeV the superconducting cable in the Energy Saver dipole magnets will be operating at ca. 96% of its nominal short sample limit; the corresponding number in the quadrupole magnets will be 81%. All magnets for the Saver are individually tested for maximum current capability under two modes of operation; some 900 dipoles and 275 quadrupoles have now been measured. The dipole winding is composed of four individually wound coils which in general come from four different reels of cable. As part of the magnet fabrication quality control a short piece of cable from both ends of each reel has its critical current measured at 5T and 4.3K. In this paper the authors describe and present the statistical results of the maximum field tests (including quench and cycle) on Saver dipole and quadrupole magnets and explore the correlation of these tests with cable critical current

  10. Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow

    Science.gov (United States)

    Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke

    2017-04-01

    Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-seconds (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10^0 to 10^6 km² in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.
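
A minimal sketch of the regression approach described above: power-law models fit by least squares in log space. The synthetic data stand in for the ~4,000 gauged catchments, and the exponents and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
area = 10 ** rng.uniform(0, 6, n)              # catchment area, km^2
precip = rng.uniform(300, 3000, n)             # mean annual precipitation, mm

# synthetic "observed" mean annual flow following an assumed power law
af = 1e-3 * area ** 0.9 * precip ** 1.1 * rng.lognormal(0.0, 0.2, n)

# ordinary least squares on the log-transformed variables
X = np.column_stack([np.ones(n), np.log(area), np.log(precip)])
coef, *_ = np.linalg.lstsq(X, np.log(af), rcond=None)
# coef[1] and coef[2] recover the area and precipitation exponents
```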

  11. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
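
The trajectory (Polyak-Ruppert) averaging idea can be sketched on a toy Robbins-Monro iteration; the target value, step-size schedule and burn-in below are illustrative choices, not the paper's setup:

```python
import random

# Robbins-Monro iteration estimating the mean of a noisy signal (true mean 3.0),
# followed by trajectory averaging of the later iterates.
random.seed(1)
true_mean = 3.0
theta, trajectory = 0.0, []
for t in range(1, 20001):
    x = random.gauss(true_mean, 1.0)          # noisy observation
    theta += t ** -0.7 * (x - theta)          # slowly decaying step size
    trajectory.append(theta)

# trajectory-averaged estimator over the second half of the run
theta_bar = sum(trajectory[10000:]) / len(trajectory[10000:])
```

Averaging the iterates, rather than taking the last one, is what makes the estimator asymptotically efficient under the usual step-size conditions.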

  12. Classic maximum entropy recovery of the average joint distribution of apparent FRET efficiency and fluorescence photons for single-molecule burst measurements.

    Science.gov (United States)

    DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K

    2012-04-05

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.
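
The marginalization step described above is straightforward once the joint distribution is in hand; a sketch with an invented discretized joint over photon counts and apparent FRET efficiency:

```python
import numpy as np

rng = np.random.default_rng(42)
# hypothetical joint probability over (photon-count bin, efficiency bin)
joint = rng.random((50, 40))
joint /= joint.sum()                 # normalize to a probability distribution

p_photons = joint.sum(axis=1)        # distribution of total fluorescence photons
p_efficiency = joint.sum(axis=0)     # apparent FRET efficiency distribution
```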

  13. Current-mode minimax circuit

    NARCIS (Netherlands)

    Wassenaar, R.F.

    1992-01-01

    The minimum-maximum (minimax) circuit selects the minimum and maximum of two input currents. Four transistors in matched pairs are operated in the saturation region. Because the behavior of the circuit is based on matched devices and is independent of the relationship between the drain current and

  14. Selection of the Maximum Spatial Cluster Size of the Spatial Scan Statistic by Using the Maximum Clustering Set-Proportion Statistic.

    Science.gov (United States)

    Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong

    2016-01-01

    Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set-proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters.

  15. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the familywise error rate.

  16. The Hengill geothermal area, Iceland: Variation of temperature gradients deduced from the maximum depth of seismogenesis

    Science.gov (United States)

    Foulger, G. R.

    1995-04-01

    Given a uniform lithology and strain rate and a full seismic data set, the maximum depth of earthquakes may be viewed to a first order as an isotherm. These conditions are approached at the Hengill geothermal area, S. Iceland, a dominantly basaltic area. The likely strain rate calculated from thermal and tectonic considerations is 10^-15 s^-1, and temperature measurements from four drill sites within the area indicate average, near-surface geothermal gradients of up to 150 °C km^-1 throughout the upper 2 km. The temperature at which seismic failure ceases for the strain rates likely at the Hengill geothermal area is determined by analogy with oceanic crust, and is about 650 ± 50 °C. The topographies of the top and bottom of the seismogenic layer were mapped using 617 earthquakes located highly accurately by performing a simultaneous inversion for three-dimensional structure and hypocentral parameters. The thickness of the seismogenic layer is roughly constant and about 3 km. A shallow, aseismic, low-velocity volume within the spreading plate boundary that crosses the area occurs above the top of the seismogenic layer and is interpreted as an isolated body of partial melt. The base of the seismogenic layer has a maximum depth of about 6.5 km beneath the spreading axis and deepens to about 7 km beneath a transform zone in the south of the area. Beneath the high-temperature part of the geothermal area, the maximum depth of earthquakes may be as shallow as 4 km. The geothermal gradient below drilling depths in various parts of the area ranges from 84 ± 9 °C km^-1 within the low-temperature geothermal area of the transform zone to 138 ± 15 °C km^-1 below the centre of the high-temperature geothermal area. Shallow maximum depths of earthquakes and therefore high average geothermal gradients tend to correlate with the intensity of the geothermal area and not with the location of the currently active spreading axis.
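
To first order, the gradients quoted above follow from dividing the brittle-ductile transition temperature by the maximum earthquake depth; a quick check of the abstract's arithmetic (taking a 0 °C surface for simplicity):

```python
def geothermal_gradient(t_base_c: float, depth_km: float, t_surface_c: float = 0.0) -> float:
    """Average gradient (°C/km) if the base of seismicity marks the t_base_c isotherm."""
    return (t_base_c - t_surface_c) / depth_km

axis = geothermal_gradient(650.0, 6.5)    # beneath the spreading axis, ~100 °C/km
hot = geothermal_gradient(650.0, 4.0)     # high-temperature area, ~163 °C/km
```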

  17. Modelling information flow along the human connectome using maximum flow.

    Science.gov (United States)

    Lyoo, Youngwook; Kim, Jieun E; Yoon, Sujung

    2018-01-01

    The human connectome is a complex network that transmits information between interlinked brain regions. Using graph theory, previously well-known network measures of integration between brain regions have been constructed under the key assumption that information flows strictly along the shortest paths possible between two nodes. However, it is now apparent that information does flow through non-shortest paths in many real-world networks such as cellular networks, social networks, and the internet. In the current hypothesis, we present a novel framework using the maximum flow to quantify information flow along all possible paths within the brain, so as to implement an analogy to network traffic. We hypothesize that the connection strengths of brain networks represent a limit on the amount of information that can flow through the connections per unit of time. This allows us to compute the maximum amount of information flow between two brain regions along all possible paths. Using this novel framework of maximum flow, previous network topological measures are expanded to account for information flow through non-shortest paths. The most important advantage of the current approach using maximum flow is that it can integrate the weighted connectivity data in a way that better reflects the real information flow of the brain network. The current framework and its concept regarding maximum flow provides insight on how network structure shapes information flow in contrast to graph theory, and suggests future applications such as investigating structural and functional connectomes at a neuronal level. Copyright © 2017 Elsevier Ltd. All rights reserved.
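
A minimal Edmonds-Karp sketch of the maximum-flow computation the authors propose, run on a toy weighted graph whose node names and capacities are invented; note that the resulting flow uses both of the available paths, not just the shortest one:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow; cap is a dict-of-dicts of edge capacities."""
    res = {u: dict(vs) for u, vs in cap.items()}       # residual capacities
    for u in list(cap):
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)     # add reverse edges
    flow = 0
    while True:
        parent = {s: None}                             # BFS for an augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                                # recover path, find bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:                              # push flow, update residuals
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

# toy "connectome": edge weights act as information capacities
g = {"A": {"B": 3, "C": 2}, "B": {"D": 2}, "C": {"D": 3}, "D": {}}
print(max_flow(g, "A", "D"))   # -> 4, more than any single path could carry
```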

  18. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent’s behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
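
The core computation of Bayesian model averaging is small enough to sketch: weight each model's prediction by its posterior probability, which is proportional to the model evidence. The evidences and predictions below are invented for illustration:

```python
import math

log_evidence = {"simple": -10.2, "complex": -11.5}   # assumed log model evidences
preds = {"simple": 0.8, "complex": 0.3}              # each model's prediction

# posterior model probabilities (log-sum-exp for numerical stability)
m = max(log_evidence.values())
unnorm = {k: math.exp(v - m) for k, v in log_evidence.items()}
z = sum(unnorm.values())
weights = {k: v / z for k, v in unnorm.items()}

# evidence-weighted (model-averaged) prediction
bma_pred = sum(weights[k] * preds[k] for k in preds)
```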

  19. The concept of the average stress in the fracture process zone for the search of the crack path

    Directory of Open Access Journals (Sweden)

    Yu.G. Matvienko

    2015-10-01

    Full Text Available The concept of the average stress has been employed to propose the maximum average tangential stress (MATS criterion for predicting the direction of fracture angle. This criterion states that a crack grows when the maximum average tangential stress in the fracture process zone ahead of the crack tip reaches its critical value and the crack growth direction coincides with the direction of the maximum average tangential stress along a constant radius around the crack tip. The tangential stress is described by the singular and nonsingular (T-stress terms in the Williams series solution. To demonstrate the validity of the proposed MATS criterion, this criterion is directly applied to experiments reported in the literature for the mixed mode I/II crack growth behavior of Guiting limestone. The predicted directions of fracture angle are consistent with the experimental data. The concept of the average stress has been also employed to predict the surface crack path under rolling-sliding contact loading. The proposed model considers the size and orientation of the initial crack, normal and tangential loading due to rolling–sliding contact as well as the influence of fluid trapped inside the crack by a hydraulic pressure mechanism. The MATS criterion is directly applied to equivalent contact model for surface crack growth on a gear tooth flank.
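
A numerical sketch of the MATS idea, using the usual Williams-series expression for the tangential stress (singular K-terms plus the T-stress), averaged over an assumed process-zone length d and maximized over the candidate growth angle. The parameter values are illustrative, not the paper's:

```python
import math

def avg_tangential_stress(theta, k1, k2, t_stress, d):
    """Tangential stress averaged over the process zone [0, d] ahead of the tip."""
    c = math.cos(theta / 2.0)
    singular = c * (k1 * c * c - 1.5 * k2 * math.sin(theta))
    # (1/d) * integral_0^d (2*pi*r)^(-1/2) dr = 2 / sqrt(2*pi*d)
    return 2.0 / math.sqrt(2.0 * math.pi * d) * singular + t_stress * math.sin(theta) ** 2

def fracture_angle(k1, k2, t_stress, d, n=2001):
    """Angle (radians) maximizing the average tangential stress, by grid search."""
    thetas = [math.radians(-90.0 + 180.0 * i / (n - 1)) for i in range(n)]
    return max(thetas, key=lambda th: avg_tangential_stress(th, k1, k2, t_stress, d))

# Pure mode I with T = 0: the crack should grow straight ahead (theta = 0);
# adding a mode II component deflects the predicted angle.
print(math.degrees(fracture_angle(1.0, 0.0, 0.0, 1e-3)))
```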

  20. Maximum Safety Regenerative Power Tracking for DC Traction Power Systems

    Directory of Open Access Journals (Sweden)

    Guifu Du

    2017-02-01

    Full Text Available Direct current (DC traction power systems are widely used in metro transport systems, with running rails usually being used as return conductors. When traction current flows through the running rails, a potential voltage known as “rail potential” is generated between the rails and ground. Currently, abnormal rises of rail potential exist in many railway lines during the operation of railway systems. Excessively high rail potentials pose a threat to human life and to devices connected to the rails. In this paper, the effect of regenerative power distribution on rail potential is analyzed. Maximum safety regenerative power tracking is proposed for the control of maximum absolute rail potential and energy consumption during the operation of DC traction power systems. The dwell time of multiple trains at each station and the trigger voltage of the regenerative energy absorbing device (READ are optimized based on an improved particle swarm optimization (PSO algorithm to manage the distribution of regenerative power. In this way, the maximum absolute rail potential and energy consumption of DC traction power systems can be reduced. The operation data of Guangzhou Metro Line 2 are used in the simulations, and the results show that the scheme can reduce the maximum absolute rail potential and energy consumption effectively and guarantee the safety in energy saving of DC traction power systems.
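
A minimal particle swarm optimization sketch of the kind the paper uses to tune dwell times and the READ trigger voltage; here it only minimizes a toy two-variable objective, and the hyperparameters (inertia w, coefficients c1, c2) are generic defaults rather than the paper's settings:

```python
import random

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over box bounds (minimization)."""
    random.seed(0)                                    # fixed seed for reproducibility
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # per-particle best positions
    pbest_val = [objective(p) for p in pos]
    gbest_val = min(pbest_val)                        # global best so far
    gbest = pbest[pbest_val.index(gbest_val)][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest

# toy stand-in for the rail-potential/energy objective; minimum at (1, -2)
best = pso(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2, [(-5.0, 5.0), (-5.0, 5.0)])
```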

  1. Phantom and Clinical Study of Differences in Cone Beam Computed Tomographic Registration When Aligned to Maximum and Average Intensity Projection

    Energy Technology Data Exchange (ETDEWEB)

    Shirai, Kiyonori [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan); Nishiyama, Kinji, E-mail: sirai-ki@mc.pref.osaka.jp [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan); Katsuda, Toshizo [Department of Radiology, National Cerebral and Cardiovascular Center, Osaka (Japan); Teshima, Teruki; Ueda, Yoshihiro; Miyazaki, Masayoshi; Tsujii, Katsutomo [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan)

    2014-01-01

    Purpose: To determine whether maximum or average intensity projection (MIP or AIP, respectively) reconstructed from 4-dimensional computed tomography (4DCT) is preferred for alignment to cone beam CT (CBCT) images in lung stereotactic body radiation therapy. Methods and Materials: Stationary CT and 4DCT images were acquired with a target phantom at the center of motion and moving along the superior–inferior (SI) direction, respectively. Motion profiles were asymmetrical waveforms with amplitudes of 10, 15, and 20 mm and a 4-second cycle. Stationary CBCT and dynamic CBCT images were acquired in the same manner as stationary CT and 4DCT images. Stationary CBCT was aligned to stationary CT, and the couch position was used as the baseline. Dynamic CBCT was aligned to the MIP and AIP of corresponding amplitudes. Registration error was defined as the SI deviation of the couch position from the baseline. In 16 patients with isolated lung lesions, free-breathing CBCT (FBCBCT) was registered to AIP and MIP (64 sessions in total), and the difference in couch shifts was calculated. Results: In the phantom study, registration errors were within 0.1 mm for AIP and 1.5 to 1.8 mm toward the inferior direction for MIP. In the patient study, the difference in the couch shifts (mean, range) was insignificant in the right-left (0.0 mm, ≤1.0 mm) and anterior–posterior (0.0 mm, ≤2.1 mm) directions. In the SI direction, however, the couch position significantly shifted in the inferior direction after MIP registration compared with after AIP registration (mean, −0.6 mm; ranging 1.7 mm to the superior side and 3.5 mm to the inferior side, P=.02). Conclusions: AIP is recommended as the reference image for registration to FBCBCT when target alignment is performed in the presence of asymmetrical respiratory motion, whereas MIP causes systematic target positioning error.

  3. Radio Frequency Transistors Using Aligned Semiconducting Carbon Nanotubes with Current-Gain Cutoff Frequency and Maximum Oscillation Frequency Simultaneously Greater than 70 GHz.

    Science.gov (United States)

    Cao, Yu; Brady, Gerald J; Gui, Hui; Rutherglen, Chris; Arnold, Michael S; Zhou, Chongwu

    2016-07-26

    In this paper, we report record radio frequency (RF) performance of carbon nanotube transistors based on the combined use of a self-aligned T-shape gate structure and well-aligned, high-semiconducting-purity, high-density polyfluorene-sorted semiconducting carbon nanotubes, which were deposited using a dose-controlled floating evaporative self-assembly method. These transistors show outstanding direct current (DC) performance with an on-current density of 350 μA/μm, transconductance as high as 310 μS/μm, and superior current saturation with normalized output resistance greater than 100 kΩ·μm. These transistors set a record for carbon nanotube RF transistors by demonstrating both a current-gain cutoff frequency (ft) and a maximum oscillation frequency (fmax) greater than 70 GHz. Furthermore, these transistors exhibit good linearity performance with a 1 dB gain compression point (P1dB) of 14 dBm and an input third-order intercept point (IIP3) of 22 dBm. Our study advances the state-of-the-art of carbon nanotube RF electronics, which have the potential to be made flexible and may find broad applications in signal amplification, wireless communication, and wearable/flexible electronics.

  4. Generation and Applications of High Average Power Mid-IR Supercontinuum in Chalcogenide Fibers

    OpenAIRE

    Petersen, Christian Rosenberg

    2016-01-01

    Mid-infrared supercontinuum with up to 54.8 mW average power and a maximum bandwidth of 1.77-8.66 μm is demonstrated by pumping tapered chalcogenide photonic crystal fibers with a MHz parametric source at 4 μm.

  5. Effects of bruxism on the maximum bite force

    Directory of Open Access Journals (Sweden)

    Todić Jelena T.

    2017-01-01

    Full Text Available Background/Aim. Bruxism is a parafunctional activity of the masticatory system, which is characterized by clenching or grinding of teeth. The purpose of this study was to determine whether the presence of bruxism has an impact on maximum bite force, with particular reference to the potential impact of gender on bite force values. Methods. This study included two groups of subjects: without and with bruxism. The presence of bruxism in the subjects was registered using a specific clinical questionnaire on bruxism and physical examination. The subjects from both groups were submitted to the procedure of measuring the maximum bite pressure and occlusal contact area using single-sheet pressure-sensitive films (Fuji Prescale MS and HS Film). Maximal bite force was obtained by multiplying maximal bite pressure and occlusal contact area values. Results. The average values of maximal bite force were significantly higher in the subjects with bruxism compared to those without bruxism (p < 0.01). Maximal bite force was significantly higher in the males compared to the females in all segments of the research. Conclusion. The presence of bruxism increases the maximum bite force, as shown in this study. Gender is a significant determinant of bite force. Registration of maximum bite force can be used in diagnosing and analysing pathophysiological events during bruxism.
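
The force computation in the Methods is a direct product of pressure and contact area; a trivial sketch with invented readings:

```python
def bite_force_n(pressure_mpa: float, contact_area_mm2: float) -> float:
    """Bite force (N) = pressure (MPa = N/mm^2) x occlusal contact area (mm^2)."""
    return pressure_mpa * contact_area_mm2

force = bite_force_n(40.0, 18.0)   # hypothetical: 40 MPa over 18 mm^2 -> 720 N
```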

  6. Beat the Deviations in Estimating Maximum Power of Thermoelectric Modules

    DEFF Research Database (Denmark)

    Gao, Junling; Chen, Min

    2013-01-01

    Under a certain temperature difference, the maximum power of a thermoelectric module can be estimated from the open-circuit voltage and the short-circuit current. In practical measurement, there exist two switch modes, either from open to short or from short to open, but the two modes can give different estimations of the maximum power. Using TEG-127-2.8-3.5-250 and TEG-127-1.4-1.6-250 as two examples, the difference is about 10%, leading to some deviations with the temperature change. This paper analyzes such differences by means of a nonlinear numerical model of thermoelectricity, and finds that the main cause is the influence of various currents on the produced electromotive potential. A simple and effective calibration method is proposed to minimize the deviations in specifying the maximum power. Experimental results validate the method with improved estimation accuracy.
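
For an ideal linear (constant internal resistance) module, the estimate discussed above reduces to the matched-load formula P_max = Voc·Isc/4; a sketch with invented readings illustrating why a ~10% spread in the measured pair matters:

```python
def max_power_estimate(voc: float, isc: float) -> float:
    """Matched-load maximum power from open-circuit voltage and short-circuit current."""
    r_internal = voc / isc                 # internal resistance of the module
    return voc ** 2 / (4.0 * r_internal)   # equals voc * isc / 4

# hypothetical readings from the two switch modes; the paper's point is that
# the modes can yield differing Voc/Isc pairs, hence the need for calibration
p_open_to_short = max_power_estimate(4.0, 2.0)
p_short_to_open = max_power_estimate(4.2, 2.1)
```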

  7. Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models.

    Science.gov (United States)

    Rostami, Vahid; Porta Mana, PierGianLuca; Grün, Sonja; Helias, Moritz

    2017-10-01

    Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations, given only the time-averaged correlations of the neuron activities. This paper provides evidence that the pairwise model, applied to experimental recordings, would produce a bimodal distribution for the population-averaged activity, and for some population sizes the second mode would peak at high activities, that experimentally would be equivalent to 90% of the neuron population active within time-windows of few milliseconds. Several problems are connected with this bimodality: 1. The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds. 2. Boltzmann learning becomes non-ergodic, hence the pairwise maximum-entropy distribution cannot be found: in fact, Boltzmann learning would produce an incorrect distribution; similarly, common variants of mean-field approximations also produce an incorrect distribution. 3. The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data. This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of macaque monkey. Evidence is then provided that this problem affects typical neural recordings of population sizes of a couple of hundreds or more neurons. The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition. To eliminate this problem a modified maximum-entropy model is presented, which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure. This model does not lead to unrealistic bimodalities, can be found with Boltzmann learning, and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition.
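
The Glauber dynamics mentioned above can be sketched for a small pairwise model. With the couplings set to zero and a uniform negative bias, it reduces to independent low-rate units, i.e. a regime without the pathological high-activity mode; all parameters are illustrative:

```python
import math
import random

def glauber_step(s, h, J, beta=1.0):
    """One Glauber update of a randomly chosen +/-1 unit in a pairwise model."""
    i = random.randrange(len(s))
    field = h[i] + sum(J[i][j] * s[j] for j in range(len(s)) if j != i)
    p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
    s[i] = 1 if random.random() < p_up else -1
    return s

random.seed(0)
n = 50
h = [-1.0] * n                        # uniform inhibitory bias
J = [[0.0] * n for _ in range(n)]     # no pairwise couplings in this toy run
s = [1] * n
for _ in range(20000):                # let the chain equilibrate
    glauber_step(s, h, J)
mean_activity = sum(s) / n            # population-averaged activity
```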

  8. Topological quantization of ensemble averages

    International Nuclear Information System (INIS)

    Prodan, Emil

    2009-01-01

    We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states.

  9. A Comparative Frequency Analysis of Maximum Daily Rainfall for a SE Asian Region under Current and Future Climate Conditions

    Directory of Open Access Journals (Sweden)

    Velautham Daksiya

    2017-01-01

    The impact of changing climate on the frequency of daily rainfall extremes in Jakarta, Indonesia, is analysed and quantified. The study used three different methods to assess the changes in rainfall characteristics. The first method involves the use of the weather generator LARS-WG to quantify changes between historical and future daily rainfall maxima. The second approach consists of statistically downscaling general circulation model (GCM) output based on historical empirical relationships between GCM output and station rainfall. Lastly, the study employed recent statistically downscaled global gridded rainfall projections to characterize the impact of climate change on rainfall structure. Both annual and seasonal rainfall extremes are studied. The results show significant changes in annual maximum daily rainfall, with an average increase as high as 20% in the 100-year return period daily rainfall. The uncertainty arising from the use of different GCMs was found to be much larger than the uncertainty from the emission scenarios. Furthermore, the annual and wet-season analyses exhibit similar behaviours, with increased future rainfall, but the dry-season results are not consistent across the models. The GCM uncertainty is larger in the dry season compared to the annual and wet-season analyses.
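
    As an illustration of the frequency-analysis step, the sketch below fits a Gumbel (EV1) distribution to synthetic annual maxima by the method of moments and evaluates the 100-year return level; the study itself uses station records and downscaled GCM output, and all numbers here are invented:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic stand-in for 30 years of annual maximum daily rainfall (mm).
annual_max = rng.gumbel(loc=100.0, scale=25.0, size=30)

# Method-of-moments fit of a Gumbel (EV1) distribution.
scale_hat = annual_max.std(ddof=1) * np.sqrt(6) / np.pi
loc_hat = annual_max.mean() - np.euler_gamma * scale_hat

def return_level(T):
    """Rainfall depth exceeded on average once every T years."""
    return loc_hat - scale_hat * np.log(-np.log(1.0 - 1.0 / T))

r100 = return_level(100)  # 100-year return period daily rainfall
```

    Comparing r100 fitted to historical versus projected maxima gives the percentage change in the 100-year rainfall that the abstract reports.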

  10. Correlation between maximum isometric strength variables and specific performance of Brazilian military judokas

    Directory of Open Access Journals (Sweden)

    Michel Moraes Gonçalves

    2017-06-01

    Our objective was to correlate specific performance in the Special Judo Fitness Test (SJFT) with the maximum isometric handgrip (HGSMax), scapular traction (STSMax) and lumbar traction (LTSMax) strength tests in military judo athletes. Twenty-two military athletes from the judo team of the Brazilian Navy Almirante Alexandrino Instruction Centre, with an average age of 26.14 ± 3.31 years and an average body mass of 83.23 ± 14.14 kg, participated in the study. Electronic dynamometry tests for HGSMax, STSMax and LTSMax were conducted. Then, after an interval of approximately 1 hour, the SJFT protocol was applied. All variables were adjusted to the body mass of the athletes. The Pearson correlation coefficient was used for the statistical analysis. The results showed a moderate negative correlation between the SJFT index and STSMax (r = -0.550, p = 0.008), strong negative correlations between the SJFT index and HGSMax (r = -0.706, p < 0.001) and between the SJFT index and LTSMax (r = -0.721, p = 0.001), as well as a correlation between the sum of the three maximum isometric strength tests and the SJFT index (r = -0.786, p < 0.001). This study concludes that negative correlations occur between the SJFT index and maximum isometric handgrip, scapular and lumbar traction strength, and the sum of the three maximum isometric strength tests in military judokas.
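
    The study's statistic can be reproduced in a few lines. The values below are hypothetical, chosen only to show how a negative Pearson correlation arises when a lower SJFT index (better performance) accompanies higher body-mass-adjusted strength:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

# Hypothetical body-mass-adjusted strength scores and SJFT indices
# (illustrative numbers only, not the study's data).
strength = np.array([0.45, 0.52, 0.48, 0.60, 0.55, 0.41, 0.58])
sjft     = np.array([14.2, 13.1, 13.8, 12.0, 12.6, 14.8, 12.3])

r = pearson_r(strength, sjft)  # strongly negative for these data
```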

  11. Dependence of US hurricane economic loss on maximum wind speed and storm size

    International Nuclear Information System (INIS)

    Zhai, Alice R; Jiang, Jonathan H

    2014-01-01

    Many empirical hurricane economic loss models consider only wind speed and neglect storm size. These models may be inadequate in accurately predicting the losses of super-sized storms, such as Hurricane Sandy in 2012. In this study, we examined the dependences of normalized US hurricane loss on both wind speed and storm size for 73 tropical cyclones that made landfall in the US from 1988 through 2012. A multi-variate least squares regression is used to construct a hurricane loss model using both wind speed and size as predictors. Using maximum wind speed and size together captures more variance of losses than using wind speed or size alone. It is found that normalized hurricane loss (L) approximately follows a power-law relation with maximum wind speed (V_max) and size (R), L = 10^c · V_max^a · R^b, with c determining an overall scaling factor and the exponents a and b generally ranging between 4–12 and 2–4, respectively. Both a and b tend to increase with stronger wind speed. Hurricane Sandy’s size was about three times the average size of all hurricanes analyzed. Based on the bi-variate regression model that explains the most variance for hurricanes, Hurricane Sandy’s loss would be approximately 20 times smaller if its size had been average, with maximum wind speed unchanged. It is important to revise conventional empirical hurricane loss models that depend only on maximum wind speed to include both maximum wind speed and size as predictors.
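
    The regression behind the power-law fit is linear in log space. A sketch with synthetic storms and assumed exponents (a = 6, b = 3, c = −8, for illustration only) shows how a, b and c are recovered by multi-variate least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 73  # number of landfalling storms in the study period

# Synthetic storms generated from known exponents (illustrative values).
vmax = rng.uniform(30, 80, n)    # maximum wind speed
size = rng.uniform(50, 300, n)   # storm size
a_true, b_true, c_true = 6.0, 3.0, -8.0
log_loss = (c_true + a_true * np.log10(vmax) + b_true * np.log10(size)
            + rng.normal(0, 0.1, n))  # scatter around the power law

# Multi-variate least squares: log10 L = c + a·log10 V_max + b·log10 R.
X = np.column_stack([np.ones(n), np.log10(vmax), np.log10(size)])
coef, *_ = np.linalg.lstsq(X, log_loss, rcond=None)
c_hat, a_hat, b_hat = coef
```

    The fitted exponents land close to the generating values, which is the sense in which the bi-variate model "captures more variance" than a wind-speed-only fit.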

  12. High-precision measurement of tidal current structures using coastal acoustic tomography

    Science.gov (United States)

    Zhang, Chuanzheng; Zhu, Xiao-Hua; Zhu, Ze-Nan; Liu, Wenhu; Zhang, Zhongzhe; Fan, Xiaopeng; Zhao, Ruixiang; Dong, Menghong; Wang, Min

    2017-07-01

    A high-precision coastal acoustic tomography (CAT) experiment for reconstructing the current variation in Dalian Bay (DLB) was successfully conducted with 11 coastal acoustic tomography systems during March 7-8, 2015. The horizontal distributions of tidal currents and residual currents were mapped well by the inverse method, which used reciprocal travel time data along 51 successful sound transmission rays. The semi-diurnal tide is dominant in DLB, with a maximum speed of 0.69 m s⁻¹ at the eastern and southwestern parts near the bay mouth that gradually decreases toward the inner bay, with an average velocity of 0.31 m s⁻¹. The residual current enters the observational domain from the two flanks of the bay mouth and flows out in the inner bay. One anticyclone and one cyclone were noted inside DLB, as was one cyclone at the bay mouth. The maximum residual current in the observational domain reached 0.11 m s⁻¹, with a mean residual current of 0.03 m s⁻¹. The upper 15-m depth-averaged inverse velocities were in excellent agreement with the moored Acoustic Doppler Current Profiler (ADCP) at the center of the bay, with a root-mean-square difference (RMSD) of 0.04 m s⁻¹ for the eastward and northward components. The precision of the present tomography measurements was the highest thus far owing to the largest number of transmission rays ever recorded. Sensitivity experiments showed that the RMSD between CAT and moored-ADCP velocities increased from 0.04 m s⁻¹ to 0.08 m s⁻¹ for both the eastward and northward components when the number of transmission rays was reduced from 51 to 11. The observational accuracy was determined by the spatial resolution of the acoustic rays in the CAT measurements. The cost-optimal scheme consisted of 29 transmission rays, with a spatial resolution of 2.03 √(km²/number of rays). Moreover, a dynamic analysis of the residual currents showed that the horizontal pressure gradient of residual sea level and Coriolis force contribute 38.3% and 36

  13. Mass mortality of the vermetid gastropod Ceraesignum maximum

    Science.gov (United States)

    Brown, A. L.; Frazer, T. K.; Shima, J. S.; Osenberg, C. W.

    2016-09-01

    Ceraesignum maximum (G.B. Sowerby I, 1825), formerly Dendropoma maximum, was subject to a sudden, massive die-off in the Society Islands, French Polynesia, in 2015. On Mo'orea, where we have detailed documentation of the die-off, these gastropods were previously found in densities up to 165 m⁻². In July 2015, we surveyed shallow back reefs of Mo'orea before, during and after the die-off, documenting their swift decline. All censused populations incurred 100% mortality. Additional surveys and observations from Mo'orea, Tahiti, Bora Bora, and Huahine (but not Taha'a) suggested a similar, and approximately simultaneous, die-off. The cause(s) of this cataclysmic mass mortality are currently unknown. Given the previously documented negative effects of C. maximum on corals, we expect the die-off will have cascading effects on the reef community.

  14. A New MPPT Control for Photovoltaic Panels by Instantaneous Maximum Power Point Tracking

    Science.gov (United States)

    Tokushima, Daiki; Uchida, Masato; Kanbei, Satoshi; Ishikawa, Hiroki; Naitoh, Haruo

    This paper presents a new maximum power point tracking control for photovoltaic (PV) panels. The control can be categorized into the Perturb and Observe (P & O) method. It utilizes instantaneous voltage ripples at PV panel output terminals caused by the switching of a chopper connected to the panel in order to identify the direction for the maximum power point (MPP). The tracking for the MPP is achieved by a feedback control of the average terminal voltage of the panel. Appropriate use of the instantaneous and the average values of the PV voltage for the separate purposes enables both the quick transient response and the good convergence with almost no ripples simultaneously. The tracking capability is verified experimentally with a 2.8 W PV panel under a controlled experimental setup. A numerical comparison with a conventional P & O confirms that the proposed control extracts much more power from the PV panel.
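
    The control belongs to the P&O family; the baseline P&O loop that it refines can be sketched on a toy single-peak P-V curve (not the authors' panel model, whose MPP and ripple behaviour differ):

```python
def pv_power(v):
    """Toy P-V curve with a single maximum power point near v = 17 V."""
    return max(0.0, 2.8 - 0.02 * (v - 17.0) ** 2)

def perturb_and_observe(v0=12.0, step=0.2, iters=100):
    """Classic P&O: perturb the operating voltage, keep the direction
    while power rises, reverse it when power falls."""
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:            # power fell: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe()
```

    The steady-state oscillation of ±one step around the MPP visible here is exactly what the paper's instantaneous-ripple scheme is designed to suppress.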

  15. MAXIMUM CORONAL MASS EJECTION SPEED AS AN INDICATOR OF SOLAR AND GEOMAGNETIC ACTIVITIES

    International Nuclear Information System (INIS)

    Kilcik, A.; Yurchyshyn, V. B.; Abramenko, V.; Goode, P. R.; Gopalswamy, N.; Ozguc, A.; Rozelot, J. P.

    2011-01-01

    We investigate the relationship between the monthly averaged maximal speeds of coronal mass ejections (CMEs), the international sunspot number (ISSN), and the geomagnetic Dst and Ap indices covering the 1996-2008 time interval (solar cycle 23). Our new findings are as follows. (1) There is a noteworthy relationship between monthly averaged maximum CME speeds and the sunspot numbers and the Ap and Dst indices. Various peculiarities in the monthly Dst index are correlated better with the fine structures in the CME speed profile than with those in the ISSN data. (2) Unlike the sunspot numbers, the CME speed index does not exhibit a double-peak maximum. Instead, the CME speed profile peaks during the declining phase of solar cycle 23. Similar to the Ap index, both the CME speed and the Dst indices lag behind the sunspot numbers by several months. (3) The CME number shows a double peak similar to that seen in the sunspot numbers. The CME occurrence rate remained very high even near the minimum of solar cycle 23, when both the sunspot number and the CME average maximum speed were reaching their minimum values. (4) A well-defined peak of the Ap index between 2002 May and 2004 August was co-temporal with the excess of mid-latitude coronal holes during solar cycle 23. The above findings suggest that the CME speed index may be a useful indicator of both solar and geomagnetic activities. It may have advantages over the sunspot numbers, because it better reflects the intensity of Earth-directed solar eruptions.

  16. Plasma bullet current measurements in a free-stream helium capillary jet

    Science.gov (United States)

    Oh, Jun-Seok; Walsh, James L.; Bradley, James W.

    2012-06-01

    A commercial current monitor has been used to measure the current associated with plasma bullets created in both the positive and negative half cycles of the sinusoidal driving voltage sustaining a plasma jet. The maximum values of the positive bullet current are typically ~750 µA and persist for 10 µs, while the peaks in the negative current of several hundred µA are broad, persisting for about 40 µs. From the time delay of the current peaks with increasing distance from the jet nozzle, an average bullet propagation speed has been measured; the positive and negative bullets travel at 17.5 km s⁻¹ and 3.9 km s⁻¹, respectively. The net space charge associated with the bullet(s) has also been calculated; the positive and negative bullets contain a similar net charge of the order of 10⁻⁹ C measured at all monitor positions, with estimated charged particle densities n_b of ~10¹⁰-10¹¹ cm⁻³ in the bullet.

  17. Jarzynski equality in the context of maximum path entropy

    Science.gov (United States)

    González, Diego; Davis, Sergio

    2017-06-01

    In the global framework of finding an axiomatic derivation of nonequilibrium statistical mechanics from fundamental principles, such as maximum path entropy (also known as the Maximum Caliber principle), this work proposes an alternative derivation of the well-known Jarzynski equality, a nonequilibrium identity of great importance today due to its applications to irreversible processes: biological systems (protein folding), mechanical systems, among others. This equality relates the free energy difference between two equilibrium thermodynamic states to the work performed when going between those states, through an average over a path ensemble. In this work the analysis of Jarzynski's equality is performed using the formalism of inference over path space. This derivation highlights the wide generality of Jarzynski's original result, which could even be used in non-thermodynamical settings such as social, financial and ecological systems.
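
    The equality ⟨e^(−βW)⟩ = e^(−βΔF) can be checked numerically in a case where it holds exactly: Gaussian work fluctuations, for which ΔF = μ − βσ²/2. This is a generic sanity check of the identity, not the paper's path-space derivation:

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 1.0

# Gaussian work distribution W ~ N(mu, sigma^2): the Jarzynski equality
# <exp(-beta*W)> = exp(-beta*dF) holds exactly with dF = mu - beta*sigma^2/2,
# and the dissipated work mu - dF = beta*sigma^2/2 is non-negative.
mu, sigma = 2.0, 1.0
W = rng.normal(mu, sigma, 200_000)

dF_exact = mu - beta * sigma**2 / 2
dF_estimate = -np.log(np.mean(np.exp(-beta * W))) / beta
```

    The sample estimate of ΔF from the exponential average converges to the exact value, while the plain average of W (the irreversible work) overshoots it, in accordance with the second law.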

  18. 30 CFR 77.902-1 - Fail safe ground check circuits; maximum voltage.

    Science.gov (United States)

    2010-07-01

    ... voltage. 77.902-1 Section 77.902-1 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF... OF UNDERGROUND COAL MINES Low- and Medium-Voltage Alternating Current Circuits § 77.902-1 Fail safe ground check circuits; maximum voltage. The maximum voltage used for ground check circuits under § 77.902...

  19. Modification of the deep salinity-maximum in the Southern Ocean by circulation in the Antarctic Circumpolar Current and the Weddell Gyre

    Science.gov (United States)

    Donnelly, Matthew; Leach, Harry; Strass, Volker

    2017-07-01

    The evolution of the deep salinity-maximum associated with the Lower Circumpolar Deep Water (LCDW) is assessed using a set of 37 hydrographic sections collected over a 20-year period in the Southern Ocean as part of the WOCE/CLIVAR programme. A circumpolar decrease in the value of the salinity-maximum is observed eastwards from the North Atlantic Deep Water (NADW) in the Atlantic sector of the Southern Ocean through the Indian and Pacific sectors to Drake Passage. Isopycnal mixing processes are limited by circumpolar fronts, and in the Atlantic sector this acts to limit the direct poleward propagation of the salinity signal. Limited entrainment occurs into the Weddell Gyre, with LCDW entering primarily through the eddy-dominated eastern limb. A vertical mixing coefficient κ_V of (2.86 ± 1.06) × 10⁻⁴ m² s⁻¹ and an isopycnal mixing coefficient κ_I of (8.97 ± 1.67) × 10² m² s⁻¹ are calculated for the eastern Indian and Pacific sectors of the Antarctic Circumpolar Current (ACC). A κ_V of (2.39 ± 2.83) × 10⁻⁵ m² s⁻¹, an order of magnitude smaller, and a κ_I of (2.47 ± 0.63) × 10² m² s⁻¹, three times smaller, are calculated for the southern and eastern Weddell Gyre, reflecting a more turbulent regime in the ACC and a less turbulent regime in the Weddell Gyre. In agreement with other studies, we conclude that the ACC acts as a barrier to direct meridional transport and mixing in the Atlantic sector, evidenced by the eastward propagation of the deep salinity-maximum signal, insulating the Weddell Gyre from short-term changes in NADW characteristics.

  20. Free Energy Self-Averaging in Protein-Sized Random Heteropolymers

    International Nuclear Information System (INIS)

    Chuang, Jeffrey; Grosberg, Alexander Yu.; Kardar, Mehran

    2001-01-01

    Current theories of heteropolymers are inherently macroscopic, but are applied to mesoscopic proteins. To compute the free energy over sequences, one assumes self-averaging, a property established only in the macroscopic limit. By enumerating the states and energies of compact 18-, 27-, and 36-mers on a lattice with an ensemble of random sequences, we test the self-averaging approximation. We find that fluctuations in the free energy between sequences are weak, and that self-averaging is valid at the scale of real proteins. The results validate sequence design methods which exponentially speed up computational design and simplify experimental realizations.

  1. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  2. 5 CFR 531.221 - Maximum payable rate rule.

    Science.gov (United States)

    2010-01-01

    ... before the reassignment. (ii) If the rate resulting from the geographic conversion under paragraph (c)(2... previous rate (i.e., the former special rate after the geographic conversion) with the rates on the current... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Maximum payable rate rule. 531.221...

  3. Efficacy of spatial averaging of infrasonic pressure in varying wind speeds

    International Nuclear Information System (INIS)

    DeWolf, Scott; Walker, Kristoffer T.; Zumberge, Mark A.; Denis, Stephane

    2013-01-01

    Wind noise reduction (WNR) is important in the measurement of infrasound. Spatial averaging theory led to the development of rosette pipe arrays. The efficacy of rosettes decreases with increasing wind speed and only provides a maximum of 20 dB WNR due to a maximum size limitation. An Optical Fiber Infrasound Sensor (OFIS) reduces wind noise by instantaneously averaging infrasound along the sensor's length. In this study two experiments quantify the WNR achieved by rosettes and OFISs of various sizes and configurations. Specifically, it is shown that the WNR for a circular OFIS 18 m in diameter is the same as that of a collocated 32-inlet pipe array of the same diameter. However, linear OFISs ranging in length from 30 to 270 m provide a WNR of up to 30 dB in winds up to 5 m/s. The measured WNR is a logarithmic function of the OFIS length and depends on the orientation of the OFIS with respect to wind direction. OFISs oriented parallel to the wind direction achieve 4 dB greater WNR than those oriented perpendicular to the wind. Analytical models for the rosette and OFIS are developed that predict the general observed relationships between wind noise reduction, frequency, and wind speed. (authors)

  4. Lower hybrid current drive in Tore Supra and jet

    International Nuclear Information System (INIS)

    Moreau, D.; Gormezano, C.

    1991-07-01

    Recent Lower Hybrid Current Drive (LHCD) experiments in TORE SUPRA and JET are reported. Large multijunction launchers have allowed the coupling of 5 MW to the plasma for several seconds with a maximum of 3.8 kW/cm². Measurements of the scattering matrices of the antennae show good agreement with theory. The current drive efficiency in TORE SUPRA is about 0.2 × 10²⁰ A·m⁻²/W with LH power alone and reaches 0.4 × 10²⁰ A·m⁻²/W in JET thanks to a high volume-averaged electron temperature (1.9 keV) and also to a synergy between Lower Hybrid and Fast Magnetosonic Waves. At an average n_e = 1.5 × 10¹⁹ m⁻³ in TORE SUPRA, sawteeth are suppressed, and m = 1 MHD oscillations, whose frequency clearly depends on the amount of LH power, are observed on soft X-rays and also on non-thermal ECE. In JET, ICRH-produced sawtooth-free periods are extended by the application of LHCD (2.9 s with 4 MW ICRH), and current profile broadening has been clearly observed, consistent with off-axis fast electron populations. LH power modulation experiments performed in TORE SUPRA at an average n_e = 4 × 10¹⁹ m⁻³ show a delayed central electron heating despite the off-axis creation of suprathermal electrons, thus ruling out the possibility of direct heating through central wave absorption. A possible explanation in terms of anomalous fast electron transport and classical slowing down would yield a diffusion coefficient of the order of 10 m²/s for the fast electrons. Other interpretations such as an anomalous heat pinch or a central confinement enhancement cannot be excluded. Finally, successful pellet fuelling of a partially LH-driven plasma was obtained in TORE SUPRA, 28 successive pellets allowing the density to rise to an average n_e = 4 × 10¹⁹ m⁻³. This could be achieved by switching the LH power off for 90 ms before each pellet injection, i.e. without significantly modifying the current density profile.

  5. Efficiency of Photovoltaic Maximum Power Point Tracking Controller Based on a Fuzzy Logic

    Directory of Open Access Journals (Sweden)

    Ammar Al-Gizi

    2017-07-01

    This paper examines the efficiency of a fuzzy logic control (FLC) based maximum power point tracking (MPPT) of a photovoltaic (PV) system under variable climate conditions and connected load requirements. The PV system, including a PV module BP SX150S, a buck-boost DC-DC converter, MPPT, and a resistive load, is modeled and simulated using the Matlab/Simulink package. In order to compare the performance of the FLC-based MPPT controller with the conventional perturb and observe (P&O) method at different irradiation (G), temperature (T), and connected load (R_L) variations, the rise time (t_r), recovery time, total average power, and MPPT efficiency are calculated. The simulation results show that the FLC-based MPPT method can quickly track the maximum power point (MPP) of the PV module in the transient state and effectively eliminates the power oscillation around the MPP at steady state; hence more average power can be extracted, in comparison with the conventional P&O method.

  6. A superconducting direct-current limiter with a power of up to 8 MVA

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, L. M.; Alferov, D. F., E-mail: DFAlferov@niitfa.ru; Akhmetgareev, M. R.; Budovskii, A. I.; Evsin, D. V.; Voloshin, I. F.; Kalinov, A. V. [National Technical Physics and Automation Research Institute (Russian Federation)

    2016-12-15

    A resistive switching superconducting fault current limiter (SFCL) for DC networks with a nominal voltage of 3.5 kV and a nominal current of 2 kA was developed, produced, and tested. The SFCL has two main units—an assembly of superconducting modules and a high-speed vacuum circuit breaker. The assembly of superconducting modules consists of nine (3 × 3) parallel–series connected modules. Each module contains four parallel-connected 2G high-temperature superconducting (HTS) tapes. The results of SFCL tests in the short-circuit emulation mode with a maximum current rise rate of 1300 A/ms are presented. The SFCL is capable of limiting the current at a level of 7 kA and break it 8 ms after the current-limiting mode begins. The average temperature of HTS tapes during the current-limiting mode increases to 210 K. After the current is interrupted, the superconductivity recovery time does not exceed 1 s.

  8. Probable maximum flood control

    International Nuclear Information System (INIS)

    DeGabriele, C.E.; Wu, C.L.

    1991-11-01

    This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility.

  9. Maximum-power-point tracking control of solar heating system

    KAUST Repository

    Huang, Bin-Juine

    2012-11-01

    The present study developed a maximum-power-point tracking (MPPT) control technology for a solar heating system to minimize the pumping power consumption at an optimal heat collection. The net solar energy gain Q_net (= Q_s − W_p/η_e) was experimentally found to be the cost function for MPPT, with a maximum point. A feedback tracking control system was developed to track the optimal Q_net (denoted Q_max). A tracking filter, derived from the thermal analytical model of the solar heating system, was used to determine the instantaneous tracking target Q_max(t). The system transfer-function model of the solar heating system was also derived experimentally using a step response test and used in the design of the tracking feedback control system. The PI controller was designed for a tracking target Q_max(t) with a quadratic time function. The MPPT control system was implemented using a microprocessor-based controller, and the test results show good tracking performance with small tracking errors. The average mass flow rate for the specific test periods in five different days is between 18.1 and 22.9 kg/min, with an average pumping power between 77 and 140 W, which is greatly reduced compared to the standard flow rate of 31 kg/min and pumping power of 450 W based on the flow rate of 0.02 kg/(s·m²) defined in the ANSI/ASHRAE 93-1986 Standard and the total collector area of 25.9 m². The average net solar heat collected Q_net is between 8.62 and 14.1 kW depending on weather conditions. The MPPT control of the solar heating system has been verified to be able to minimize the pumping energy consumption with optimal solar heat collection. © 2012 Elsevier Ltd.
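
    The cost-function idea can be illustrated with a toy model in which collector heat gain saturates with flow rate while pumping power grows with roughly the cube of the flow; the functional forms and constants below are invented for illustration and are not those of the tested system:

```python
import numpy as np

def q_net(flow, q_max=15e3, k=10.0, c_pump=0.097, eta_e=0.7):
    """Toy net solar gain Q_net = Q_s - W_p/eta_e as a function of flow.

    q_s: saturating heat collection [W]; w_p: cubic pumping power [W].
    All constants are illustrative assumptions, not measured values.
    """
    q_s = q_max * flow / (flow + k)
    w_p = c_pump * flow**3
    return q_s - w_p / eta_e

# Grid search for the flow rate that maximizes the net gain: the MPP
# that the paper's feedback controller tracks in real time.
flows = np.linspace(1.0, 40.0, 400)
gains = q_net(flows)
best = flows[np.argmax(gains)]
```

    The interior maximum of Q_net over flow rate is what makes a tracking controller worthwhile: running the pump harder eventually costs more electrical power than the extra heat it collects.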

  10. SIMULATION OF NEW SIMPLE FUZZY LOGIC MAXIMUM POWER ...

    African Journals Online (AJOL)

    2010-06-30

    Basic structure of the photovoltaic system and solar array mathematical model ... The equivalent circuit model of a solar cell consists of a current generator and a diode ... control of the boost converter (tracker) such that maximum power is achieved at the output of the solar panel.

  11. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    Science.gov (United States)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  12. Maximum Power Point Tracking Based on Sliding Mode Control

    Directory of Open Access Journals (Sweden)

    Nimrod Vázquez

    2015-01-01

    Solar panels have become a good choice for generating and supplying electricity in commercial and residential applications. The generated power starts with the solar cells, which have a complex relationship between solar irradiation, temperature, and output power. For this reason a tracking of the maximum power point is required. Traditionally, this has been done by considering just the current and voltage conditions at the photovoltaic panel; however, temperature also influences the process. In this paper the voltage, current, and temperature in the PV system are considered to be part of a sliding surface for the proposed maximum power point tracking; this means a sliding mode controller is applied. The obtained results gave a good dynamic response, unlike traditional schemes, which are based only on computational algorithms. A traditional MPPT algorithm was added in order to assure a low steady-state error.

  13. Kumaraswamy autoregressive moving average models for double bounded environmental data

    Science.gov (United States)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distribution is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to environmental real data is presented and discussed.
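
    The Kumaraswamy distribution underlying KARMA has a closed-form quantile function, so it can be sampled by inversion, and the median that KARMA models dynamically is also available in closed form (shown here for the standard (0, 1) support; data on a general (a, b) interval are rescaled first):

```python
import numpy as np

def kuma_quantile(u, a, b):
    """Inverse CDF of the Kumaraswamy(a, b) distribution on (0, 1),
    from F(x) = 1 - (1 - x**a)**b."""
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def kuma_median(a, b):
    """Closed-form median, the quantity KARMA links to its dynamic
    autoregressive moving average structure."""
    return (1.0 - 2.0 ** (-1.0 / b)) ** (1.0 / a)

# Inversion sampling: push uniform draws through the quantile function.
rng = np.random.default_rng(7)
a, b = 2.0, 3.0
sample = kuma_quantile(rng.uniform(size=100_000), a, b)
```

    The empirical median of the inversion sample matches the closed-form expression, which is the quantity the KARMA link function models over time.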

  14. Calculation of induced current densities for humans by magnetic fields from electronic article surveillance devices

    Science.gov (United States)

    Gandhi, Om P.; Kang, Gang

    2001-11-01

    This paper illustrates the use of the impedance method to calculate the electric fields and current densities induced in millimetre resolution anatomic models of the human body, namely an adult and 10- and 5-year-old children, for exposure to nonuniform magnetic fields typical of two assumed but representative electronic article surveillance (EAS) devices at 1 and 30 kHz, respectively. The devices assumed for the calculations are a solenoid type magnetic deactivator used at store checkouts and a pass-by panel-type EAS system consisting of two overlapping rectangular current-carrying coils used at entry and exit from a store. The impedance method code is modified to obtain induced current densities averaged over a cross section of 1 cm2 perpendicular to the direction of induced currents. This is done to compare the peak current densities with the limits or the basic restrictions given in the ICNIRP safety guidelines. Because of the stronger magnetic fields at lower heights for both the assumed devices, the peak 1 cm2 area-averaged current densities for the CNS tissues such as the brain and the spinal cord are increasingly larger for smaller models and are the highest for the model of the 5-year-old child. For both the EAS devices, the maximum 1 cm2 area-averaged current densities for the brain of the model of the adult are lower than the ICNIRP safety guideline, but may approach or exceed the ICNIRP basic restrictions for models of 10- and 5-year-old children if sufficiently strong magnetic fields are used.

  16. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    Directory of Open Access Journals (Sweden)

    Ivan Gregor

    2013-06-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000–8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.

  18. Choice of initial operating parameters for high average current linear accelerators

    International Nuclear Information System (INIS)

    Batchelor, K.

    1976-01-01

    In designing an accelerator for high currents, it is evident that beam losses in the machine must be minimized, which implies well-matched beams, and that adequate acceptance must be provided under severe space-charge conditions. This paper investigates the input parameters of an Alvarez-type drift-tube accelerator that follow from these requirements.

  19. Computer simulation of induced electric currents and fields in biological bodies by 60 Hz magnetic fields

    International Nuclear Information System (INIS)

    Xi Weiguo; Stuchly, M.A.; Gandhi, O.P.

    1993-01-01

    Possible health effects of human exposure to 60 Hz magnetic fields are a subject of increasing concern. An understanding of the coupling of electromagnetic fields to human body tissues is essential for the assessment of their biological effects. A method is presented for the computerized simulation of induced electric currents and fields in the bodies of humans and rodents exposed to power-line-frequency magnetic fields. In the impedance method, the body is represented by a three-dimensional impedance network. The computational model consists of several tens of thousands of cubic numerical cells and thus represents a realistic body shape. The modelling for humans is performed with two models: a heterogeneous model based on cross-sectional anatomy and a homogeneous one using an average tissue conductivity. A summary of computed results of induced electric currents and fields is presented. It is confirmed that induced currents are lower than endogenous current levels for most environmental exposures. However, the induced current density varies greatly, with the maximum being at least 10 times larger than the average. This difference is likely to be greater when more detailed anatomy and morphology are considered. 15 refs., 2 figs., 1 tab

  20. Implementation of bayesian model averaging on the weather data forecasting applications utilizing open weather map

    Science.gov (United States)

    Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.

    2018-02-01

    Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity and other phenomena in the atmosphere. Extreme weather due to global warming can lead to drought, flood, hurricanes and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict the weather, in particular a GIS-based mapping process that reports the current weather status at the coordinates of each region and can forecast seven days ahead. Data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. The forecasting error is measured by the mean square error (MSE): 0.28 for minimum temperature, 0.15 for maximum temperature, 0.38 for minimum humidity, 0.04 for maximum humidity and 0.076 for wind speed. The lower the forecasting error rate, the better the accuracy.
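
The combination step of BMA and the MSE metric reported above can be sketched as follows (in practice the weights are fitted, e.g. by EM over a training window; here they are given, and all names are illustrative):

```python
def bma_point_forecast(member_forecasts, weights):
    """Bayesian Model Averaging point forecast: weighted mean of ensemble
    members, with weights summing to one (normally fitted by EM)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * f for w, f in zip(weights, member_forecasts))

def mse(predictions, observations):
    """Mean square error, the accuracy metric quoted in the abstract."""
    return sum((p - o) ** 2 for p, o in zip(predictions, observations)) / len(predictions)
```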

  1. Development of a methodology for probable maximum precipitation estimation over the American River watershed using the WRF model

    Science.gov (United States)

    Tan, Elcin

    physically possible upper limits of precipitation due to climate change. The simulation results indicate that the meridional shift in atmospheric conditions is the optimum method to determine maximum precipitation in consideration of cost and efficiency. Finally, exceedance probability analyses of the model results of 42 historical extreme precipitation events demonstrate that the 72-hr basin averaged probable maximum precipitation is 21.72 inches for the exceedance probability of 0.5 percent. On the other hand, the current operational PMP estimation for the American River Watershed is 28.57 inches as published in the hydrometeorological report no. 59 and a previous PMP value was 31.48 inches as published in the hydrometeorological report no. 36. According to the exceedance probability analyses of this proposed method, the exceedance probabilities of these two estimations correspond to 0.036 percent and 0.011 percent, respectively.

  2. Novel methods for estimating lithium-ion battery state of energy and maximum available energy

    International Nuclear Information System (INIS)

    Zheng, Linfeng; Zhu, Jianguo; Wang, Guoxiu; He, Tingting; Wei, Yiying

    2016-01-01

    Highlights: • Study on the temperature, current and aging dependencies of the maximum available energy. • Study on the dependencies of the relationship between SOE and SOC on various factors. • A quantitative relationship between SOE and SOC is proposed for SOE estimation. • Maximum available energy is estimated by means of a moving-window energy integral. • The robustness and feasibility of the proposed approaches are systematically evaluated. - Abstract: The battery state of energy (SOE) allows a direct determination of the ratio between the remaining and maximum available energy of a battery, which is critical for energy optimization and management in energy storage systems. In this paper, the ambient temperature, battery discharge/charge current rate and cell aging level dependencies of battery maximum available energy and SOE are comprehensively analyzed. An explicit quantitative relationship between SOE and state of charge (SOC) for LiMn_2O_4 battery cells is proposed for SOE estimation, and a moving-window energy-integral technique is incorporated to estimate the battery maximum available energy. Experimental results show that the proposed approaches can estimate battery maximum available energy and SOE with high precision. The robustness of the proposed approaches against various operating conditions and cell aging levels is systematically evaluated.
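
The moving-window energy integral can be sketched as a trapezoidal integration of instantaneous power over sampled voltage and current, with SOE as the ratio of remaining to maximum available energy (a minimal illustration, not the authors' implementation):

```python
def window_energy_wh(voltage_v, current_a, dt_s):
    """Trapezoidal integral of p = v*i over a sampling window, returned in Wh."""
    e_j = 0.0
    for k in range(1, len(voltage_v)):
        p_prev = voltage_v[k - 1] * current_a[k - 1]
        p_curr = voltage_v[k] * current_a[k]
        e_j += 0.5 * (p_prev + p_curr) * dt_s
    return e_j / 3600.0

def state_of_energy(remaining_wh, max_available_wh):
    """SOE: ratio of remaining energy to the maximum available energy."""
    return remaining_wh / max_available_wh
```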

  3. Characteristics of halo current in JT-60U

    International Nuclear Information System (INIS)

    Neyatani, Y.; Nakamura, Y.; Yoshino, R.; Hatae, T.

    1999-01-01

    Halo currents and their toroidal peaking factor (TPF) have been measured in JT-60U by Rogowski-coil-type halo current sensors. The electron temperature in the halo region was around 10 eV at 1 ms before the timing of the maximum halo current. The maximum TPF × I_h/I_p0 was 0.52 in the operational range of I_p = 0.7-1.8 MA, B_T = 2.2-3.5 T, including the ITER design parameters of κ > 1.6 and q_95 = 3, which was lower than the maximum value of the ITER database (0.75). The magnitude of halo currents tended to decrease with increasing stored energy just before the energy quench and with the line-integrated electron density at the time of the maximum halo current. A termination technique in which the current channel remains stationary was useful to avoid halo current generation. Intense neon gas puffing during the VDE was effective for reducing the halo currents. (author)
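
The TPF and the figure of merit TPF × I_h/I_p0 quoted above can be computed from a set of toroidally distributed sensor readings (a hedged sketch; the sensor layout and names are assumptions for illustration):

```python
def toroidal_peaking_factor(sector_halo_currents):
    """Ratio of the maximum sector halo current to the toroidal average."""
    average = sum(sector_halo_currents) / len(sector_halo_currents)
    return max(sector_halo_currents) / average

def halo_figure_of_merit(sector_halo_currents, i_h, i_p0):
    """TPF * I_h / I_p0, the quantity compared against the ITER database."""
    return toroidal_peaking_factor(sector_halo_currents) * i_h / i_p0
```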

  5. Maximum power point tracker for photovoltaic power plants

    Science.gov (United States)

    Arcidiacono, V.; Corsi, S.; Lambri, L.

    The paper describes two different closed-loop control criteria for the maximum power point tracking of the voltage-current characteristic of a photovoltaic generator. The two criteria are discussed and compared, inter alia, with regard to the setting-up problems that they pose. Although a detailed analysis is not embarked upon, the paper also provides some quantitative information on the energy advantages obtained by using electronic maximum power point tracking systems, as compared with the situation in which the point of operation of the photovoltaic generator is not controlled at all. Lastly, the paper presents two high-efficiency MPPT converters for experimental photovoltaic plants of the stand-alone and the grid-interconnected type.

  6. Step Test: a method for evaluating maximum oxygen consumption to determine the ability kind of work among students of medical emergencies.

    Science.gov (United States)

    Heydari, Payam; Varmazyar, Sakineh; Nikpey, Ahmad; Variani, Ali Safari; Jafarvand, Mojtaba

    2017-03-01

    Maximum oxygen consumption indicates the maximum rate of muscle oxygenation and is accepted in many cases as a measure of the fitness between a person and a desired job. Given that medical emergencies work is important and difficult, and emergency situations require people with high physical ability and readiness, the aim of this study was to evaluate maximum oxygen consumption in order to determine the type of work ability among students of medical emergencies in Qazvin in 2016. This was a descriptive-analytical, cross-sectional study conducted among 36 volunteer students of medical emergencies in Qazvin in 2016. After the necessary coordination for the implementation of the study, participants completed health and demographic questionnaires and were then evaluated with the step test of the American College of Sports Medicine (ACSM). Data analysis was done with SPSS version 18 using the Mann-Whitney U test, the Kruskal-Wallis test and the Pearson correlation coefficient. The average maximum oxygen consumption of the participants was estimated at 3.15±0.50 liters per minute. 91.7% of the medical emergencies students were rated as appropriate in terms of maximum oxygen consumption and thus had the ability to do heavy and very heavy work. Average maximum oxygen consumption, evaluated by the Mann-Whitney U and Kruskal-Wallis tests, had a significant relationship with age (p<0.05) and weight group (p<0.001). There was a significant positive correlation between maximum oxygen consumption and weight and body mass index (p<0.001). The results of this study showed that the demographic variables of weight and body mass index are factors influencing maximum oxygen consumption; most of the students had the ability to do heavy and very heavy work. Therefore, people with the ability to do only average work are not suitable for medical emergency tasks.
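
As a sketch of how a step test converts cadence and step height into maximum oxygen consumption, the commonly cited ACSM stepping equation can be coded as below (an assumption for illustration; the abstract does not state which protocol variant the study used):

```python
def acsm_step_vo2(steps_per_min, step_height_m):
    """ACSM stepping equation, ml O2 per kg per minute:
    horizontal component 0.2*f, vertical component 1.33*1.8*h*f,
    plus 3.5 ml/kg/min resting metabolism."""
    return 0.2 * steps_per_min + 1.33 * 1.8 * step_height_m * steps_per_min + 3.5

def vo2_liters_per_min(vo2_ml_kg_min, body_mass_kg):
    """Convert relative VO2 to absolute liters of O2 per minute,
    the unit in which the 3.15 L/min average above is reported."""
    return vo2_ml_kg_min * body_mass_kg / 1000.0
```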

  7. A simple maximum power point tracker for thermoelectric generators

    International Nuclear Information System (INIS)

    Paraskevas, Alexandros; Koutroulis, Eftichios

    2016-01-01

    Highlights: • A Maximum Power Point Tracking (MPPT) method for thermoelectric generators is proposed. • A power converter is controlled to operate on a pre-programmed locus. • The proposed MPPT technique has the advantage of operational and design simplicity. • The experimental average deviation from the MPP power of the TEG source is 1.87%. - Abstract: ThermoElectric Generators (TEGs) are capable of harvesting ambient thermal energy for powering sensors, actuators, biomedical devices etc. in the μW to several-hundred-watt range. In this paper, a Maximum Power Point Tracking (MPPT) method for TEG elements is proposed, which is based on controlling a power converter such that it operates on a pre-programmed locus of operating points close to the MPPs of the power–voltage curves of the TEG power source. Compared to previously proposed MPPT methods for TEGs, the technique presented in this paper has the advantage of operational and design simplicity. Thus, it can be implemented using off-the-shelf microelectronic components with low power consumption, without requiring specialized integrated circuits or signal-processing units of high development cost. Experimental results are presented which demonstrate that, for MPP power levels of the TEG source in the range of 1–17 mW, the average deviation of the power produced by the proposed system from the MPP power of the TEG source is 1.87%.
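
For a TEG modeled as an EMF behind an internal resistance, the MPP sits at half the open-circuit voltage, which is the kind of pre-programmed operating locus the paper exploits; a toy model (parameter names are illustrative, not the paper's converter control law):

```python
def teg_output_power(v_load, v_oc, r_int):
    """Power delivered by a TEG modeled as EMF v_oc behind internal resistance r_int."""
    current = (v_oc - v_load) / r_int
    return v_load * current

def locus_voltage(v_oc):
    """Pre-programmed operating point: half the open-circuit voltage,
    where the simple source model delivers maximum power."""
    return v_oc / 2.0
```

Sweeping v_load confirms the power maximum at locus_voltage(v_oc), so the converter only needs to regulate its input voltage to that locus.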

  8. Computing Maximum Cardinality Matchings in Parallel on Bipartite Graphs via Tree-Grafting

    International Nuclear Information System (INIS)

    Azad, Ariful; Buluç, Aydın; Pothen, Alex

    2016-01-01

    It is difficult to obtain high performance when computing matchings on parallel processors because matching algorithms explicitly or implicitly search for paths in the graph, and when these paths become long, there is little concurrency. In spite of this limitation, we present a new algorithm and its shared-memory parallelization that achieves good performance and scalability in computing maximum cardinality matchings in bipartite graphs. This algorithm searches for augmenting paths via specialized breadth-first searches (BFS) from multiple source vertices, hence creating more parallelism than single source algorithms. Algorithms that employ multiple-source searches cannot discard a search tree once no augmenting path is discovered from the tree, unlike algorithms that rely on single-source searches. We describe a novel tree-grafting method that eliminates most of the redundant edge traversals resulting from this property of multiple-source searches. We also employ the recent direction-optimizing BFS algorithm as a subroutine to discover augmenting paths faster. Our algorithm compares favorably with the current best algorithms in terms of the number of edges traversed, the average augmenting path length, and the number of iterations. Here, we provide a proof of correctness for our algorithm. Our NUMA-aware implementation is scalable to 80 threads of an Intel multiprocessor and to 240 threads on an Intel Knights Corner coprocessor. On average, our parallel algorithm runs an order of magnitude faster than the fastest algorithms available. The performance improvement is more significant on graphs with small matching number.
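
For contrast with the parallel multi-source search described above, the classic serial baseline — single-source augmenting-path matching (Kuhn's algorithm) — fits in a few lines; this is a textbook sketch, not the paper's tree-grafting algorithm:

```python
def max_bipartite_matching(adj, n_right):
    """Kuhn's augmenting-path algorithm for maximum cardinality matching.
    adj[u] lists the right-side neighbors of left vertex u."""
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v

    def try_augment(u, seen):
        # DFS for an augmenting path starting at left vertex u.
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    size = 0
    for u in range(len(adj)):
        if try_augment(u, [False] * n_right):
            size += 1
    return size
```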

  9. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  10. Chaotic Universe, Friedmannian on the average 2

    Energy Technology Data Exchange (ETDEWEB)

    Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij

    1980-11-01

    The cosmological solutions are found for the equations for correlators describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solution depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26 the solution for the scale factor lies higher than the Friedmannian one, and lower at n > 0.26. The influence of long-wave fluctuation modes finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. Restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.

  11. Transports and tidal current estimates in the Taiwan Strait from shipboard ADCP observations (1999-2001)

    Science.gov (United States)

    Wang, Y. H.; Jan, S.; Wang, D. P.

    2003-05-01

    Tidal and mean flows in the Taiwan Strait are obtained from analysis of 2.5 years (1999-2001) of shipboard ADCP data using a spatial least-squares technique. The average tidal current amplitude is 0.46 m s^-1; the maximum amplitude is 0.80 m s^-1 at the northeast and southeast entrances and the minimum amplitude is 0.20 m s^-1 in the middle of the Strait. The tidal current ellipses derived from the shipboard ADCP data compare well with the predictions of a high-resolution regional tidal model. For the mean currents, the average velocity is about 0.40 m s^-1. The mean transport through the Strait is northward (into the East China Sea) at 1.8 Sv. The transport is related to the along-strait wind by a simple regression: transport (Sv) = 2.42 + 0.12 × wind (m s^-1). Using this empirical formula, the maximum seasonal transport is in summer, about 2.7 Sv; the minimum transport is in winter, at 0.9 Sv; and the mean transport is 1.8 Sv. For comparison, this result indicates that the seasonal amplitude is almost identical to the classical estimate by Wyrtki (Physical oceanography of the southeast Asian waters, scientific results of marine investigations of the South China Sea and Gulf of Thailand, 1959-1961. Naga Report 2, Scripps Institute of Oceanography, 195 pp.) based on the mass balance in the South China Sea, while the mean is close to the recent estimate by Isobe [Continental Shelf Research 19 (1999) 195] based on the mass balance in the East China Sea.
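
The empirical transport regression quoted above is directly usable as a one-line function; the seasonal extremes follow from the corresponding along-strait wind values:

```python
def taiwan_strait_transport_sv(along_strait_wind_m_s):
    """Empirical fit from the 1999-2001 shipboard ADCP analysis:
    transport (Sv) = 2.42 + 0.12 * wind (m/s), positive northward."""
    return 2.42 + 0.12 * along_strait_wind_m_s
```

A modest northward along-strait wind pushes the transport toward the ~2.7 Sv summer maximum, while strong winter northeasterlies (negative along-strait wind) pull it down toward 0.9 Sv.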

  12. Maximum power point tracker based on fuzzy logic

    International Nuclear Information System (INIS)

    Daoud, A.; Midoun, A.

    2006-01-01

    Solar energy is used as the power source in photovoltaic power systems, and an intelligent power management system is important to obtain the maximum power from the limited solar panels. With the changing sun illumination due to variation of the angle of incidence of solar radiation and of the temperature of the panels, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source. In the case of a solar-panel power source, the maximum power point varies as a result of changes in its electrical characteristics, which in turn are functions of radiation dose, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic and then controlling the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These techniques can be classified into three main categories: lookup table methods, hill climbing methods and computational methods. The techniques vary according to their degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (a hill climbing technique) is commonly used due to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method only requires linguistic control rules for the maximum power point; no mathematical model is required, and therefore this control method is easy to implement in a real control system. In this paper, we present a simple robust MPPT using fuzzy set theory where the hardware consists of the microchip's microcontroller unit control card and
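
The perturb-and-observe baseline that the fuzzy controller is compared against can be sketched in a few lines (a generic hill-climbing sketch, not the paper's fuzzy rule base):

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.05):
    """One P&O update: keep perturbing in the same direction if power rose,
    otherwise reverse the perturbation direction."""
    moved_up = v >= v_prev
    if p >= p_prev:
        direction = 1.0 if moved_up else -1.0
    else:
        direction = -1.0 if moved_up else 1.0
    return v + direction * step

def track(pcurve, v0=1.0, step=0.05, iters=100):
    """Drive P&O against a power-voltage curve; oscillates around the MPP."""
    v_prev, p_prev = v0, pcurve(v0)
    v = v0 + step
    for _ in range(iters):
        p = pcurve(v)
        v_next = perturb_and_observe(v, p, v_prev, p_prev, step)
        v_prev, p_prev, v = v, p, v_next
    return v
```

On a concave power curve the operating point climbs to the maximum and then oscillates around it within one perturbation step, which is why P&O lags when insolation changes quickly.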

  13. Applications of the maximum entropy principle in nuclear physics

    International Nuclear Information System (INIS)

    Froehner, F.H.

    1990-01-01

    Soon after the advent of information theory the principle of maximum entropy was recognized as furnishing the missing rationale for the familiar rules of classical thermodynamics. More recently it has also been applied successfully in nuclear physics. As an elementary example we derive a physically meaningful macroscopic description of the spectrum of neutrons emitted in nuclear fission, and compare the well-known result with accurate data on 252Cf. A second example, derivation of an expression for resonance-averaged cross sections for nuclear reactions like scattering or fission, is less trivial. Entropy maximization, constrained by given transmission coefficients, yields probability distributions for the R- and S-matrix elements, from which average cross sections can be calculated. If constrained only by the range of the spectrum of compound-nuclear levels, it produces the Gaussian Orthogonal Ensemble (GOE) of Hamiltonian matrices that again yields expressions for average cross sections. Both avenues give practically the same numbers in spite of the quite different cross section formulae. These results were employed in a new model-aided evaluation of the 238U neutron cross sections in the unresolved resonance region. (orig.)
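
The core maximization step — entropy subject to a constraint — can be illustrated with a discrete toy: fixing only the mean yields an exponential-family distribution p_i ∝ exp(-λx_i), with the Lagrange multiplier λ found numerically (a generic sketch, not the paper's R- and S-matrix machinery):

```python
import math

def maxent_with_mean(values, target_mean):
    """Maximum-entropy pmf over `values` subject to a fixed mean.
    The solution is p_i ∝ exp(-lam * x_i); lam is found by bisection,
    using the fact that the constrained mean decreases monotonically in lam."""
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z

    lo, hi = -50.0, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]
```

With no effective constraint (target mean equal to the unweighted mean) the result is the uniform distribution, the unconstrained entropy maximum.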

  14. Average subentropy, coherence and entanglement of random mixed quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states by invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states in higher dimensions when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is bounded uniformly, whereas the average coherence of random pure states increases with the dimension. As an important application, we establish the typicality of the relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computing these entanglement measures for this specific class of mixed states.

  15. Maximum Power Point Tracking (MPPT) in a Wind Power Generation System Using a Buck-Boost Converter

    Directory of Open Access Journals (Sweden)

    Muhamad Otong

    2017-05-01

    In this paper, an implementation of the Maximum Power Point Tracking (MPPT) technique is developed using a buck-boost converter. The perturb and observe (P&O) MPPT algorithm is used to search for the maximum power from the wind power plant for charging the battery. The model used in this study is a Variable Speed Wind Turbine (VSWT) with a Permanent Magnet Synchronous Generator (PMSG). Analysis, design and modeling of the wind energy conversion system were done using MATLAB/Simulink. The simulation results show that the proposed MPPT produces a higher output power than the system without MPPT. The average efficiency that can be achieved by the proposed system in transferring the maximum power into the battery is 90.56%.

  16. Plasma bullet current measurements in a free-stream helium capillary jet

    International Nuclear Information System (INIS)

    Oh, Jun-Seok; Walsh, James L; Bradley, James W

    2012-01-01

    A commercial current monitor has been used to measure the current associated with plasma bullets created in both the positive and negative half-cycles of the sinusoidal driving voltage sustaining a plasma jet. The maximum values of the positive bullet current are typically ∼750 µA and persist for 10 µs, while the peaks in the negative current of several hundred µA are broad, persisting for about 40 µs. From the time delay of the current peaks with increasing distance from the jet nozzle, an average bullet propagation speed has been measured; the positive and negative bullets travel at 17.5 km s^-1 and 3.9 km s^-1 respectively. The net space charge associated with the bullet(s) has also been calculated; the positive and negative bullets contain a similar net charge of the order of 10^-9 C measured at all monitor positions, with estimated charged particle densities n_b of ∼10^10-10^11 cm^-3 in the bullet. (special)
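
The two derived quantities above — propagation speed from the delay of the current peak versus nozzle distance, and net charge from the time integral of the current pulse — can be sketched as follows (illustrative data, not the measured waveforms):

```python
def bullet_speed_m_s(positions_m, delays_s):
    """Least-squares slope of monitor position versus peak arrival delay."""
    n = len(positions_m)
    mt = sum(delays_s) / n
    mz = sum(positions_m) / n
    num = sum((t - mt) * (z - mz) for t, z in zip(delays_s, positions_m))
    den = sum((t - mt) ** 2 for t in delays_s)
    return num / den

def pulse_charge_c(current_a, dt_s):
    """Net charge: trapezoidal time integral of the bullet current pulse."""
    return sum(0.5 * (current_a[k - 1] + current_a[k]) * dt_s
               for k in range(1, len(current_a)))
```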

  17. Allometries of Maximum Growth Rate versus Body Mass at Maximum Growth Indicate That Non-Avian Dinosaurs Had Growth Rates Typical of Fast Growing Ectothermic Sauropsids

    Science.gov (United States)

    Werner, Jan; Griebeler, Eva Maria

    2014-01-01

    We tested if growth rates of recent taxa are unequivocally separated between endotherms and ectotherms, and compared these to dinosaurian growth rates. We therefore performed linear regression analyses on the log-transformed maximum growth rate against log-transformed body mass at maximum growth for extant altricial birds, precocial birds, eutherians, marsupials, reptiles, fishes and dinosaurs. Regression models of precocial birds (and fishes) strongly differed from Case’s study (1978), which is often used to compare dinosaurian growth rates to those of extant vertebrates. For all taxonomic groups, the slope of 0.75 expected from the Metabolic Theory of Ecology was statistically supported. To compare growth rates between taxonomic groups we therefore used regressions with this fixed slope and group-specific intercepts. On average, maximum growth rates of ectotherms were about 10 (reptiles) to 20 (fishes) times (in comparison to mammals) or even 45 (reptiles) to 100 (fishes) times (in comparison to birds) lower than in endotherms. While on average all taxa were clearly separated from each other, individual growth rates overlapped between several taxa and even between endotherms and ectotherms. Dinosaurs had growth rates intermediate between similar sized/scaled-up reptiles and mammals, but a much lower rate than scaled-up birds. All dinosaurian growth rates were within the range of extant reptiles and mammals, and were lower than those of birds. Under the assumption that growth rate and metabolic rate are indeed linked, our results suggest two alternative interpretations. Compared to other sauropsids, the growth rates of studied dinosaurs clearly indicate that they had an ectothermic rather than an endothermic metabolic rate. Compared to other vertebrate growth rates, the overall high variability in growth rates of extant groups and the high overlap between individual growth rates of endothermic and ectothermic extant species make it impossible to rule out either
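
The fixed-slope regression used for the group comparisons (slope pinned at the metabolic-theory value 0.75, intercept fitted per group in log-log space) is simple to reproduce (a sketch with illustrative data, not the authors' dataset):

```python
import math

def fixed_slope_intercept(masses, growth_rates, slope=0.75):
    """Least-squares intercept of log10(rate) on log10(mass) with the slope
    fixed; this is the group-specific intercept compared across taxa."""
    lx = [math.log10(m) for m in masses]
    ly = [math.log10(g) for g in growth_rates]
    return sum(y - slope * x for x, y in zip(lx, ly)) / len(lx)

def predicted_rate(mass, intercept, slope=0.75):
    """Back-transform: maximum growth rate predicted for a given body mass."""
    return 10.0 ** (intercept + slope * math.log10(mass))
```

Differences between group intercepts translate directly into the multiplicative gaps reported above (e.g. a factor of ~10 between reptiles and mammals is an intercept offset of 1 in log10 units).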

  18. Allometries of maximum growth rate versus body mass at maximum growth indicate that non-avian dinosaurs had growth rates typical of fast growing ectothermic sauropsids.

    Science.gov (United States)

    Werner, Jan; Griebeler, Eva Maria

    2014-01-01

    We tested if growth rates of recent taxa are unequivocally separated between endotherms and ectotherms, and compared these to dinosaurian growth rates. We therefore performed linear regression analyses on the log-transformed maximum growth rate against log-transformed body mass at maximum growth for extant altricial birds, precocial birds, eutherians, marsupials, reptiles, fishes and dinosaurs. Regression models of precocial birds (and fishes) strongly differed from Case's study (1978), which is often used to compare dinosaurian growth rates to those of extant vertebrates. For all taxonomic groups, the slope of 0.75 expected from the Metabolic Theory of Ecology was statistically supported. To compare growth rates between taxonomic groups we therefore used regressions with this fixed slope and group-specific intercepts. On average, maximum growth rates of ectotherms were about 10 (reptiles) to 20 (fishes) times (in comparison to mammals) or even 45 (reptiles) to 100 (fishes) times (in comparison to birds) lower than in endotherms. While on average all taxa were clearly separated from each other, individual growth rates overlapped between several taxa and even between endotherms and ectotherms. Dinosaurs had growth rates intermediate between similar sized/scaled-up reptiles and mammals, but a much lower rate than scaled-up birds. All dinosaurian growth rates were within the range of extant reptiles and mammals, and were lower than those of birds. Under the assumption that growth rate and metabolic rate are indeed linked, our results suggest two alternative interpretations. Compared to other sauropsids, the growth rates of studied dinosaurs clearly indicate that they had an ectothermic rather than an endothermic metabolic rate. 
Compared to other vertebrate growth rates, the overall high variability in growth rates of extant groups and the high overlap between individual growth rates of endothermic and ectothermic extant species make it impossible to rule out either of
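
    The fixed-slope regression described above (log growth rate against log body mass, with a common slope of 0.75 and group-specific intercepts) can be sketched as follows. This is a minimal illustration on synthetic data; the function name and the values are assumptions, not from the study:

```python
import numpy as np

def fit_group_intercepts(log_mass, log_rate, groups, slope=0.75):
    """Fit log(rate) = slope*log(mass) + b_g with a fixed slope and a
    separate intercept b_g per taxonomic group (least squares)."""
    intercepts = {}
    for g in set(groups):
        mask = np.array([gr == g for gr in groups])
        # With the slope fixed, the least-squares intercept is the mean residual.
        intercepts[g] = float(np.mean(log_rate[mask] - slope * log_mass[mask]))
    return intercepts

# Synthetic example: two "groups" offset by a factor of 10 in growth rate
# (all values in log10 units, already log-transformed).
mass = np.array([1.0, 2.0, 3.0, 1.0, 2.0, 3.0])
rate = 0.75 * mass + np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
groups = ["reptile"] * 3 + ["mammal"] * 3
b = fit_group_intercepts(mass, rate, groups)
ratio = 10 ** (b["mammal"] - b["reptile"])  # rate ratio at equal body mass
```

    Group-to-group growth-rate ratios at equal body mass, like the 10x to 100x figures quoted in the abstract, follow directly from the intercept differences.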

  19. Allometries of maximum growth rate versus body mass at maximum growth indicate that non-avian dinosaurs had growth rates typical of fast growing ectothermic sauropsids.

    Directory of Open Access Journals (Sweden)

    Jan Werner

    Full Text Available We tested if growth rates of recent taxa are unequivocally separated between endotherms and ectotherms, and compared these to dinosaurian growth rates. We therefore performed linear regression analyses on the log-transformed maximum growth rate against log-transformed body mass at maximum growth for extant altricial birds, precocial birds, eutherians, marsupials, reptiles, fishes and dinosaurs. Regression models of precocial birds (and fishes) strongly differed from Case's study (1978), which is often used to compare dinosaurian growth rates to those of extant vertebrates. For all taxonomic groups, the slope of 0.75 expected from the Metabolic Theory of Ecology was statistically supported. To compare growth rates between taxonomic groups we therefore used regressions with this fixed slope and group-specific intercepts. On average, maximum growth rates of ectotherms were about 10 (reptiles) to 20 (fishes) times (in comparison to mammals) or even 45 (reptiles) to 100 (fishes) times (in comparison to birds) lower than in endotherms. While on average all taxa were clearly separated from each other, individual growth rates overlapped between several taxa and even between endotherms and ectotherms. Dinosaurs had growth rates intermediate between similar sized/scaled-up reptiles and mammals, but a much lower rate than scaled-up birds. All dinosaurian growth rates were within the range of extant reptiles and mammals, and were lower than those of birds. Under the assumption that growth rate and metabolic rate are indeed linked, our results suggest two alternative interpretations. Compared to other sauropsids, the growth rates of studied dinosaurs clearly indicate that they had an ectothermic rather than an endothermic metabolic rate. Compared to other vertebrate growth rates, the overall high variability in growth rates of extant groups and the high overlap between individual growth rates of endothermic and ectothermic extant species make it impossible to rule

  20. Proton transport properties of poly(aspartic acid) with different average molecular weights

    Energy Technology Data Exchange (ETDEWEB)

    Nagao, Yuki, E-mail: ynagao@kuchem.kyoto-u.ac.j [Department of Mechanical Systems and Design, Graduate School of Engineering, Tohoku University, 6-6-01 Aoba Aramaki, Aoba-ku, Sendai 980-8579 (Japan); Imai, Yuzuru [Institute of Development, Aging and Cancer (IDAC), Tohoku University, 4-1 Seiryo-cho, Aoba-ku, Sendai 980-8575 (Japan); Matsui, Jun [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Sendai 980-8577 (Japan); Ogawa, Tomoyuki [Department of Electronic Engineering, Graduate School of Engineering, Tohoku University, 6-6-05 Aoba Aramaki, Aoba-ku, Sendai 980-8579 (Japan); Miyashita, Tokuji [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Sendai 980-8577 (Japan)

    2011-04-15

    Research highlights: Seven polymers with different average molecular weights were synthesized. The proton conductivity depended on the number-average degree of polymerization. The difference between the proton conductivities was more than one order of magnitude. The number-average molecular weight contributed to the stability of the polymer. - Abstract: We synthesized seven partially protonated poly(aspartic acids)/sodium polyaspartates (P-Asp) with different average molecular weights to study their proton transport properties. The number-average degree of polymerization (DP) for each P-Asp was 30 (P-Asp30), 115 (P-Asp115), 140 (P-Asp140), 160 (P-Asp160), 185 (P-Asp185), 205 (P-Asp205), and 250 (P-Asp250). The proton conductivity depended on the number-average DP. The maximum and minimum proton conductivities at a relative humidity of 70% and 298 K were 1.7 × 10⁻³ S cm⁻¹ (P-Asp140) and 4.6 × 10⁻⁴ S cm⁻¹ (P-Asp250), respectively. Differential thermogravimetric analysis (TG-DTA) was carried out for each P-Asp. The results fell into two categories: one group exhibited two endothermic peaks between 270 °C and 300 °C, the other exhibited only one peak. The P-Asp group with two endothermic peaks exhibited high proton conductivity. The high proton conductivity is related to the stability of the polymer. The number-average molecular weight also contributed to the stability of the polymer.

  1. The maximum sizes of large scale structures in alternative theories of gravity

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharya, Sourav [IUCAA, Pune University Campus, Post Bag 4, Ganeshkhind, Pune, 411 007 India (India); Dialektopoulos, Konstantinos F. [Dipartimento di Fisica, Università di Napoli ' Federico II' , Complesso Universitario di Monte S. Angelo, Edificio G, Via Cinthia, Napoli, I-80126 Italy (Italy); Romano, Antonio Enea [Instituto de Física, Universidad de Antioquia, Calle 70 No. 52–21, Medellín (Colombia); Skordis, Constantinos [Department of Physics, University of Cyprus, 1 Panepistimiou Street, Nicosia, 2109 Cyprus (Cyprus); Tomaras, Theodore N., E-mail: sbhatta@iitrpr.ac.in, E-mail: kdialekt@gmail.com, E-mail: aer@phys.ntu.edu.tw, E-mail: skordis@ucy.ac.cy, E-mail: tomaras@physics.uoc.gr [Institute of Theoretical and Computational Physics and Department of Physics, University of Crete, 70013 Heraklion (Greece)

    2017-07-01

    The maximum size of a cosmic structure is given by the maximum turnaround radius, the scale at which the attraction due to its mass is balanced by the repulsion due to dark energy. We derive generic formulae for the estimation of the maximum turnaround radius in any theory of gravity obeying the Einstein equivalence principle, in two situations: on a spherically symmetric spacetime and on a perturbed Friedmann-Robertson-Walker spacetime. We show that the two formulae agree. As an application of our formula, we calculate the maximum turnaround radius in the case of the Brans-Dicke theory of gravity. We find that for this theory, such maximum sizes always lie above the ΛCDM value, by a factor of 1 + 1/(3ω), where ω ≫ 1 is the Brans-Dicke parameter, implying consistency of the theory with current data.
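
    The quoted correction factor can be checked numerically. This is an illustrative sketch only; the function name and the sample value of ω are assumptions, not taken from the paper:

```python
def turnaround_ratio(omega):
    """Ratio of the Brans-Dicke maximum turnaround radius to the LCDM value,
    using the factor 1 + 1/(3*omega) quoted in the abstract (valid for omega >> 1)."""
    return 1.0 + 1.0 / (3.0 * omega)

# For an illustrative large Brans-Dicke parameter (omega ~ 4e4, roughly the
# order of Solar-System bounds), the deviation from LCDM is tiny.
r = turnaround_ratio(4.0e4)
```

    Since observational bounds force ω to be large, the predicted maximum sizes differ from ΛCDM by far less than current measurement uncertainties, consistent with the abstract's conclusion.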

  2. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  3. An event- and network-level analysis of college students' maximum drinking day.

    Science.gov (United States)

    Meisel, Matthew K; DiBello, Angelo M; Balestrieri, Sara G; Ott, Miles Q; DiGuiseppi, Graham T; Clark, Melissa A; Barnett, Nancy P

    2018-04-01

    Heavy episodic drinking is common among college students and remains a serious public health issue. Previous event-level research among college students has examined behaviors and individual-level characteristics that drive consumption and related consequences but often ignores the social network of people with whom these heavy drinking episodes occur. The main aim of the current study was to investigate the network of social connections between drinkers on their heaviest drinking occasions. Sociocentric network methods were used to collect information from individuals in the first-year class (N=1342) at one university. Past-month drinkers (N=972) reported on the characteristics of their heaviest drinking occasion in the past month and indicated who else among their network connections was present during this occasion. Average max drinking day indegree, or the total number of times a participant was nominated as being present on another student's heaviest drinking occasion, was 2.50 (SD=2.05). Network autocorrelation models indicated that max drinking day indegree (i.e., popularity on heaviest drinking occasions) and peers' number of drinks on their own maximum drinking occasions were significantly associated with participants' maximum number of drinks, after controlling for demographic variables, pregaming, and global network indegree (i.e., popularity in the entire first-year class). Being present at other peers' heaviest drinking occasions is associated with greater drinking quantities on one's own heaviest drinking occasion. These findings suggest the potential for interventions that target peer influences within close social networks of drinkers. Copyright © 2017 Elsevier Ltd. All rights reserved.
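
    The "max drinking day indegree" used above is simply a count of incoming nominations. A minimal sketch with hypothetical data (the nomination lists and names are invented for illustration):

```python
from collections import Counter

def max_day_indegree(nominations):
    """Count how many times each student was named as present on another
    student's heaviest drinking occasion (max-drinking-day indegree)."""
    counts = Counter()
    for ego, present in nominations.items():
        for alter in present:
            if alter != ego:          # self-nominations do not count
                counts[alter] += 1
    return counts

# Hypothetical lists: ego -> peers present on ego's heaviest occasion.
noms = {"a": ["b", "c"], "b": ["a", "c"], "c": ["b"]}
deg = max_day_indegree(noms)   # b and c were each named twice, a once
```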

  4. Thermal instability and current-voltage scaling in superconducting fault current limiters

    Energy Technology Data Exchange (ETDEWEB)

    Zeimetz, B [Department of Materials Science and Metallurgy, Cambridge University, Pembroke Street, Cambridge CB1 3QZ (United Kingdom); Tadinada, K [Department of Engineering, Cambridge University, Trumpington Road, Cambridge CB2 1PZ (United Kingdom); Eves, D E [Department of Engineering, Cambridge University, Trumpington Road, Cambridge CB2 1PZ (United Kingdom); Coombs, T A [Department of Engineering, Cambridge University, Trumpington Road, Cambridge CB2 1PZ (United Kingdom); Evetts, J E [Department of Materials Science and Metallurgy, Cambridge University, Pembroke Street, Cambridge CB1 3QZ (United Kingdom); Campbell, A M [Department of Engineering, Cambridge University, Trumpington Road, Cambridge CB2 1PZ (United Kingdom)

    2004-04-01

    We have developed a computer model for the simulation of resistive superconducting fault current limiters in three dimensions. The program calculates the electromagnetic and thermal response of a superconductor to a time-dependent overload voltage, with different possible cooling conditions for the surfaces, and locally variable superconducting and thermal properties. We find that the cryogen boil-off parameters critically influence the stability of a limiter. The recovery time after a fault increases strongly with thickness. Above a critical thickness, the temperature is unstable even for a small applied AC voltage. The maximum voltage and maximum current during a short fault are correlated by a simple exponential law.

  5. Last Glacial Maximum to Holocene climate evolution controlled by sea-level change, Leeuwin Current, and Australian Monsoon in the Northwestern Australia

    Science.gov (United States)

    Ishiwa, T.; Yokoyama, Y.; McHugh, C.; Reuning, L.; Gallagher, S. J.

    2017-12-01

    The transition from cold to warm conditions during the last deglaciation influenced climate variability in the Indian Ocean and the Pacific as a result of the submergence of the continental shelf and of variations in the Indonesian Throughflow and the Australian Monsoon. International Ocean Discovery Program Expedition 356 (Indonesian Throughflow) drilled the shallow continental shelf of northwestern Australia and recovered an interval spanning the Last Glacial Maximum to the Holocene at Site U1461. Radiocarbon dating of macrofossils, foraminifera, and bulk organic matter provided a precise age-depth model, leading to a highly resolved paleoclimate reconstruction. X-ray elemental analysis results are interpreted as an indicator of changes in the sedimentary environment. The upper 20 m of Site U1461 apparently record the climate transition from the LGM to the Holocene in northwestern Australia, which could be associated with sea-level change, Leeuwin Current activity, and the Australian Monsoon.

  6. Development of an Intelligent Maximum Power Point Tracker Using an Advanced PV System Test Platform

    DEFF Research Database (Denmark)

    Spataru, Sergiu; Amoiridis, Anastasios; Beres, Remus Narcis

    2013-01-01

    The performance of photovoltaic systems is often reduced by the presence of partial shadows. The system efficiency and availability can be improved by a maximum power point tracking algorithm that is able to detect partial shadow conditions and to optimize the power output. This work proposes an intelligent maximum power point tracking method that monitors the maximum power point voltage and triggers a current-voltage sweep only when a partial shadow is detected, therefore minimizing power loss due to repeated current-voltage sweeps. The proposed system is validated on an advanced, flexible photovoltaic inverter system test platform that is able to reproduce realistic partial shadow conditions, both in simulation and on a hardware test system.
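
    The shadow-detection idea can be illustrated with a toy decision rule: track the maximum power point voltage, and trigger a full sweep only when it drifts far from the expected value. This is a simplification of the paper's method; the function name and threshold are assumptions:

```python
def needs_iv_sweep(v_mpp_expected, v_mpp_measured, tolerance=0.1):
    """Trigger a full current-voltage sweep only when the operating MPP
    voltage drifts far from the expected value, a simple proxy for the
    appearance of a partial shadow."""
    return abs(v_mpp_measured - v_mpp_expected) / v_mpp_expected > tolerance

# Small drift: keep tracking locally, avoiding the power loss of a sweep.
assert not needs_iv_sweep(30.0, 29.0)
# Large drift: suspect a partial shadow and sweep for the global maximum.
assert needs_iv_sweep(30.0, 24.0)
```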

  7. System for evaluation of the true average input-pulse rate

    International Nuclear Information System (INIS)

    Eichenlaub, D.P.; Garrett, P.

    1977-01-01

    A digital radiation monitoring system is described that makes use of current digital circuitry and a microprocessor to rapidly process the pulse data coming from remote radiation controllers. The system analyses the pulse rates to determine whether a new datum is statistically the same as those previously received, and hence determines the best possible averaging time for itself. As long as the true average pulse rate stays constant, the time over which the average is taken can increase until the statistical error falls below the desired level, i.e. 1%. When the digital processing of the pulse data indicates a change in the true average pulse rate, the averaging time can be reduced so as to improve the response time of the system at the desired statistical error. The concept embodies a fixed compromise between statistical error and response time [fr]
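
    The counting statistics underlying this trade-off can be sketched as follows, assuming Poisson statistics for the pulses (the relative standard error of N counts is 1/sqrt(N)); the function names are illustrative:

```python
def counts_needed(rel_error):
    """Poisson counting statistics: the relative standard error of N counts
    is 1/sqrt(N), so a target relative error needs about 1/rel_error**2 counts."""
    return 1.0 / rel_error ** 2

def averaging_time(rate_cps, rel_error=0.01):
    """Time (s) needed at a constant true pulse rate to reach the error target."""
    return counts_needed(rel_error) / rate_cps

# At 100 counts/s, a 1% statistical error needs about 10,000 counts (~100 s);
# a detected change in the true rate justifies restarting with a shorter average.
t = averaging_time(100.0, 0.01)
```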

  8. Maximum one-shot dissipated work from Rényi divergences

    Science.gov (United States)

    Yunger Halpern, Nicole; Garner, Andrew J. P.; Dahlsten, Oscar C. O.; Vedral, Vlatko

    2018-05-01

    Thermodynamics describes large-scale, slowly evolving systems. Two modern approaches generalize thermodynamics: fluctuation theorems, which concern finite-time nonequilibrium processes, and one-shot statistical mechanics, which concerns small scales and finite numbers of trials. Combining these approaches, we calculate a one-shot analog of the average dissipated work defined in fluctuation contexts: the cost of performing a protocol in finite time instead of quasistatically. The average dissipated work has been shown to be proportional to a relative entropy between phase-space densities, to a relative entropy between quantum states, and to a relative entropy between probability distributions over possible values of work. We derive one-shot analogs of all three equations, demonstrating that the order-infinity Rényi divergence is proportional to the maximum possible dissipated work in each case. These one-shot analogs of fluctuation-theorem results contribute to the unification of these two toolkits for small-scale, nonequilibrium statistical physics.
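
    For discrete probability distributions, the order-infinity Rényi divergence that appears in these results reduces to the max-divergence, the logarithm of the largest likelihood ratio. A minimal sketch (the distributions are invented for illustration):

```python
import math

def renyi_inf_divergence(p, q):
    """Order-infinity Renyi (max-)divergence between discrete distributions:
    D_inf(p||q) = log max_i p_i/q_i (natural log; terms with p_i = 0 ignored)."""
    return math.log(max(pi / qi for pi, qi in zip(p, q) if pi > 0))

p = [0.5, 0.5]
q = [0.25, 0.75]
d = renyi_inf_divergence(p, q)   # log(0.5/0.25) = log 2
```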

  9. Predicting the current and future potential distributions of lymphatic filariasis in Africa using maximum entropy ecological niche modelling.

    Directory of Open Access Journals (Sweden)

    Hannah Slater

    Full Text Available Modelling the spatial distributions of human parasite species is crucial to understanding the environmental determinants of infection as well as for guiding the planning of control programmes. Here, we use ecological niche modelling to map the current potential distribution of the macroparasitic disease lymphatic filariasis (LF) in Africa, and to estimate how future changes in climate and population could affect its spread and burden across the continent. We used 508 community-specific infection presence data collated from the published literature in conjunction with five predictive environmental/climatic and demographic variables, and a maximum entropy niche modelling method to construct the first ecological niche maps describing the potential distribution and burden of LF in Africa. We also ran the best-fit model against climate projections made by the HADCM3 and CCCMA models for 2050 under the A2a and B2a scenarios to simulate the likely distribution of LF under future climate and population changes. We predict a broad geographic distribution of LF in Africa extending from the west to the east across the middle region of the continent, with high probabilities of occurrence in Western Africa compared to large areas of medium probability interspersed with smaller areas of high probability in Central and Eastern Africa and in Madagascar. We uncovered complex relationships between predictor ecological niche variables and the probability of LF occurrence. We show for the first time that predicted climate change and population growth will expand both the range and risk of LF infection (and ultimately disease) in an endemic region. We estimate that the population at risk of LF may range from 543 to 804 million currently, and that this could rise to between 1.65 and 1.86 billion in the future, depending on the climate scenario used and the thresholds applied to signify infection presence.

  10. A Hybrid Maximum Power Point Search Method Using Temperature Measurements in Partial Shading Conditions

    Directory of Open Access Journals (Sweden)

    Mroczka Janusz

    2014-12-01

    Full Text Available Photovoltaic panels have non-linear current-voltage characteristics and produce maximum power at only one point, called the maximum power point. In the case of uniform illumination a single solar panel shows only one power maximum, which is also the global maximum power point. In the case of an irregularly illuminated photovoltaic panel many local maxima can be observed on the power-voltage curve, and only one of them is the global maximum. The proposed algorithm detects whether a solar panel is under uniform insolation conditions; an appropriate strategy for tracking the maximum power point is then selected by a decision algorithm. The proposed method is simulated in an environment created by the authors, which allows photovoltaic panels to be simulated under real conditions of lighting, temperature and shading.
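
    Locating the global maximum among several local maxima on a sampled power-voltage curve can be sketched as follows. The two-peak curve below is a toy stand-in for a partially shaded string, not the authors' simulation environment:

```python
import numpy as np

def global_mpp(voltage, power):
    """Scan a sampled power-voltage curve and return the global maximum power
    point, which under partial shading need not be the first local maximum."""
    i = int(np.argmax(power))
    return voltage[i], power[i]

# Toy curve with two peaks (near 12 V and 30 V) mimicking partial shading.
v = np.linspace(0.0, 40.0, 401)
p = 60 * np.exp(-((v - 12) / 4) ** 2) + 100 * np.exp(-((v - 30) / 4) ** 2)
v_mpp, p_mpp = global_mpp(v, p)   # the global peak sits near 30 V
```

    A tracker that hill-climbs from low voltage would stall on the 12 V local peak; scanning the whole sampled curve finds the true global maximum.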

  11. Tracking the global maximum power point of PV arrays under partial shading conditions

    Science.gov (United States)

    Fennich, Meryem

    This thesis presents the theoretical and simulation studies of the global maximum power point tracking (MPPT) for photovoltaic systems under partial shading. The main goal is to track the maximum power point of the photovoltaic module so that the maximum possible power can be extracted from the photovoltaic panels. When several panels are connected in series with some of them shaded partially either due to clouds or shadows from neighboring buildings, several local maxima appear in the power vs. voltage curve. A power increment based MPPT algorithm is effective in identifying the global maximum from the several local maxima. Several existing MPPT algorithms are explored and the state-of-the-art power increment method is simulated and tested for various partial shading conditions. The current-voltage and power-voltage characteristics of the PV model are studied under different partial shading conditions, along with five different cases demonstrating how the MPPT algorithm performs when shading switches from one state to another. Each case is supplemented with simulation results. The method of tracking the Global MPP is based on controlling the DC-DC converter connected to the output of the PV array. A complete system simulation including the PV array, the direct current to direct current (DC-DC) converter and the MPPT is presented and tested using MATLAB software. The simulation results show that the MPPT algorithm works very well with the buck converter, while the boost converter needs further changes and implementation.

  12. Heat Convection at the Density Maximum Point of Water

    Science.gov (United States)

    Balta, Nuri; Korganci, Nuri

    2018-01-01

    Water exhibits a density maximum at normal pressure at around 4 °C. This paper demonstrates that during cooling, at around 4 °C, the temperature remains constant for a while because of the heat exchange associated with convective currents inside the water. A superficial approach suggests this is a new anomaly of water, but actually it…

  13. Current glaciation of the Chikhachev ridge (South-Eastern Altai) and its dynamics after the maximum of the Little Ice Age

    Directory of Open Access Journals (Sweden)

    D. A. Ganyushkin

    2016-01-01

    Full Text Available Glaciation of the Chikhachev ridge (South-Eastern Altai) remains poorly known: field observations have not been performed since the mid-twentieth century, the available schemes and estimates of the glaciation and its scale made on the basis of remote sensing cover only a part of the glaciers, and reconstructions of the Little Ice Age (LIA) glaciation are absent. This research was based on interpretation of satellite images: Landsat-4 (1989), Landsat-7 (2001), and Spot-5 (2011), together with data from the 2015 field season. Characteristics of the glaciation of the Chikhachev ridge as a whole and of its individual centers (the Talduair massif, the Mongun-Taiga-Minor massif, and the southern part of the Chikhachev ridge) were determined for the first time. Recent glaciation is represented by 7 glaciers with a total area of 1.12 km2 in the Talduair massif, by 5 glaciers with a total area of 0.75 km2 in the Mongun-Taiga-Minor massif, and by 85 glaciers with a total area of 29 km2 in the southern part of the Chikhachev ridge. Since the LIA maximum, glacier areas have decreased by 61% in the Talduair massif, by 74% in the Mongun-Taiga-Minor massif, and by 56% in the southern part of the Chikhachev ridge, with simultaneous lifting of the firn line by 50 m, 65 m, and 70 m, respectively. The largest rates of glacier contraction were determined for the period 1989–2011. Different mechanisms of glacier retreat were shown by the example of the Burgastyn-Gol glacier complexes (one-sided retreat and disintegration) and the Grigorjev glacier (gradual retreat of the tongue). The retreat of the Grigorjev glacier has been reconstructed for the period from the LIA maximum until 2015. The average rate of retreat increased from 1.6 m/year in 1957–1989 to 11.3 m/year in 2011–2015. The present-day scales of the glaciers and rates of their retreat do not significantly differ from estimations made by other researchers for the nearest centers of glaciation of the

  14. Potential for efficient frequency conversion at high average power using solid state nonlinear optical materials

    International Nuclear Information System (INIS)

    Eimerl, D.

    1985-01-01

    High-average-power frequency conversion using solid state nonlinear materials is discussed. Recent laboratory experience and new developments in design concepts show that current technology, a few tens of watts, may be extended by several orders of magnitude. For example, using KD*P, efficient doubling (>70%) of Nd:YAG at average powers approaching 100 kW is possible; and for doubling to the blue or ultraviolet regions, the average power may approach 1 MW. Configurations using segmented apertures permit essentially unlimited scaling of average power. High average power is achieved by configuring the nonlinear material as a set of thin plates with a large ratio of surface area to volume and by cooling the exposed surfaces with a flowing gas. The design and material fabrication of such a harmonic generator are well within current technology

  15. Extracting Credible Dependencies for Averaged One-Dependence Estimator Analysis

    Directory of Open Access Journals (Sweden)

    LiMin Wang

    2014-01-01

    Full Text Available Of the numerous proposals to improve the accuracy of naive Bayes (NB) by weakening the conditional independence assumption, the averaged one-dependence estimator (AODE) demonstrates remarkable zero-one loss performance. However, indiscriminate superparent attributes will bring both considerable computational cost and a negative effect on classification accuracy. In this paper, to extract the most credible dependencies we present a new type of semi-naive Bayesian operation, which selects superparent attributes by building a maximum weighted spanning tree and removes highly correlated children attributes by functional dependency and canonical cover analysis. Our extensive experimental comparison on UCI data sets shows that this operation efficiently identifies possible superparent attributes at training time and eliminates redundant children attributes at classification time.
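
    The maximum weighted spanning tree step can be sketched with Kruskal's algorithm run on edges sorted heaviest-first. This is a generic sketch, not the paper's implementation; the edge weights stand in for hypothetical attribute-dependency scores:

```python
def maximum_spanning_tree(n, edges):
    """Kruskal's algorithm with edges taken heaviest-first: keep each edge
    that does not close a cycle (union-find with path compression)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges, reverse=True):   # heaviest edges first
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

# Hypothetical dependency weights between 4 attributes: (weight, i, j).
edges = [(0.9, 0, 1), (0.2, 0, 2), (0.8, 1, 2), (0.5, 2, 3), (0.1, 1, 3)]
mst = maximum_spanning_tree(4, edges)   # keeps the 0.9, 0.8 and 0.5 edges
```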

  16. Numerical and experimental research on pentagonal cross-section of the averaging Pitot tube

    Science.gov (United States)

    Zhang, Jili; Li, Wei; Liang, Ruobing; Zhao, Tianyi; Liu, Yacheng; Liu, Mingsheng

    2017-07-01

    Averaging Pitot tubes have been widely used in many fields because of their simple structure and stable performance. This paper introduces a new shape for the cross-section of an averaging Pitot tube. First, the structure of the averaging Pitot tube and the distribution of its pressure taps are given. Then, a mathematical model of the airflow around it is formulated. After that, a series of numerical simulations is carried out to optimize the geometry of the tube. The distributions of the streamlines and pressures around the tube are given. To test its performance, a test platform was constructed in accordance with the relevant national standards and is described in this paper. Curves are provided linking the values of the flow coefficient to the Reynolds number. With a maximum deviation of only ±3%, the flow coefficients obtained from the numerical simulations were in agreement with those obtained experimentally. The proposed tube has a stable flow coefficient and favorable metrological characteristics.
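
    The role of the flow coefficient can be illustrated with the standard differential-pressure velocity relation for a Pitot-type probe. The coefficient value 0.62 below is purely illustrative (the abstract only states that the coefficient is stable to within ±3%), and the function name is an assumption:

```python
import math

def duct_velocity(delta_p, rho=1.2, k=0.62):
    """Mean duct velocity from an averaging Pitot tube's differential
    pressure: v = k * sqrt(2 * delta_p / rho), where k is the calibrated
    flow coefficient, delta_p is in Pa and rho is the gas density in kg/m3."""
    return k * math.sqrt(2.0 * delta_p / rho)

# 60 Pa differential pressure in air: sqrt(2*60/1.2) = 10 m/s, scaled by k.
v = duct_velocity(60.0)
```

    A flow coefficient that is stable across Reynolds number, as reported for the proposed cross-section, is what makes a single calibration constant usable over the working flow range.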

  17. New algorithm using only one variable measurement applied to a maximum power point tracker

    Energy Technology Data Exchange (ETDEWEB)

    Salas, V.; Olias, E.; Lazaro, A.; Barrado, A. [University Carlos III de Madrid (Spain). Dept. of Electronic Technology

    2005-05-01

    A novel algorithm for seeking the maximum power point of a photovoltaic (PV) array at any temperature and solar irradiation level, needing only the PV current value, is proposed. Satisfactory theoretical and experimental results are presented, obtained when the algorithm was included in a 100 W 24 V PV buck converter prototype using an inexpensive microcontroller. The load of the system was a battery and a resistance. The main advantage of this new maximum power point tracker (MPPT), when compared with others, is that it uses only the measurement of the photovoltaic current, I_PV. (author)

  18. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
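
    The averaging model itself is a weighted mean of attribute scale values, optionally including an initial-state term. A minimal sketch (function and parameter names are illustrative, not R-Average's API):

```python
def averaging_response(weights, scale_values, w0=0.0, s0=0.0):
    """Anderson's averaging model: the response is the weighted mean of the
    attribute scale values s_i with weights w_i, optionally including an
    initial-state term (w0, s0):
        R = (w0*s0 + sum_i w_i*s_i) / (w0 + sum_i w_i)."""
    num = w0 * s0 + sum(w * s for w, s in zip(weights, scale_values))
    den = w0 + sum(weights)
    return num / den

# Two attributes with equal weight average to the midpoint of their values.
r = averaging_response([1.0, 1.0], [4.0, 8.0])
```

    Because adding an attribute changes the denominator as well as the numerator, the model can produce the interaction-like patterns mentioned in the abstract without extra parameters.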

  19. Testing averaged cosmology with type Ia supernovae and BAO data

    Energy Technology Data Exchange (ETDEWEB)

    Santos, B.; Alcaniz, J.S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro – RJ (Brazil); Coley, A.A. [Department of Mathematics and Statistics, Dalhousie University, Halifax, B3H 3J5 Canada (Canada); Devi, N. Chandrachani, E-mail: thoven@on.br, E-mail: aac@mathstat.dal.ca, E-mail: chandrachaniningombam@astro.unam.mx, E-mail: alcaniz@on.br [Instituto de Astronomía, Universidad Nacional Autónoma de México, Box 70-264, México City, México (Mexico)

    2017-02-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  20. Testing averaged cosmology with type Ia supernovae and BAO data

    International Nuclear Information System (INIS)

    Santos, B.; Alcaniz, J.S.; Coley, A.A.; Devi, N. Chandrachani

    2017-01-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  1. An adaptive mesh refinement approach for average current nodal expansion method in 2-D rectangular geometry

    International Nuclear Information System (INIS)

    Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.

    2013-01-01

Highlights: ► A new adaptive h-refinement approach has been developed for a class of nodal methods. ► The resulting system of nodal equations is more amenable to efficient numerical solution. ► The benefit of the approach is reduced computational effort relative to uniform fine-mesh modeling. ► The spatially adaptive approach greatly enhances the accuracy of the solution. - Abstract: The aim of this work is to develop a spatially adaptive coarse-mesh strategy that progressively refines the nodes in appropriate regions of the domain to solve the neutron balance equation by the zeroth-order nodal expansion method. A flux-gradient-based a posteriori estimation scheme has been utilized for checking the approximate solutions for various nodes. The relative surface net leakage of nodes has been considered as an assessment criterion. In this approach, the core module is called by the adaptive mesh generator to determine gradients of node surface fluxes and explore the possibility of node refinement in appropriate regions and directions of the problem. The benefit of the approach is reduced computational effort relative to uniform fine-mesh modeling. For this purpose, a computer program, ANRNE-2D (Adaptive Node Refinement Nodal Expansion), has been developed to solve the neutron diffusion equation using the average current nodal expansion method for 2D rectangular geometries. Implementing the adaptive algorithm confirms its superiority in enhancing the accuracy of the solution without using fine nodes throughout the domain or increasing the number of unknowns. Some well-known benchmarks have been investigated and improvements are reported.
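
The flux-gradient-driven refinement loop described in this abstract can be illustrated with a toy one-dimensional analogue: cells whose gradient indicator exceeds a threshold are split, leaving smooth regions coarse. This is a minimal sketch under assumed values (the `tanh` flux shape, threshold, and mesh are invented for illustration), not the ANRNE-2D algorithm itself:

```python
import numpy as np

def refine(nodes, u, tol):
    """One pass of gradient-based h-refinement on a 1-D mesh.

    nodes : sorted array of node coordinates
    u     : callable giving the (here: known) flux shape
    tol   : refinement threshold on the inter-node gradient indicator
    """
    new_nodes = [nodes[0]]
    for a, b in zip(nodes[:-1], nodes[1:]):
        grad = abs(u(b) - u(a)) / (b - a)
        if grad > tol:                      # steep flux gradient -> split the cell
            new_nodes.append(0.5 * (a + b))
        new_nodes.append(b)
    return np.array(new_nodes)

# Toy "flux" with a steep region near x = 0.5
flux = lambda x: np.tanh(20.0 * (x - 0.5))
mesh = np.linspace(0.0, 1.0, 11)
for _ in range(3):                          # a few adaptive passes
    mesh = refine(mesh, flux, tol=2.0)
# Nodes cluster around the steep-gradient region; smooth regions stay coarse.
```

The same idea generalizes to 2-D nodes by testing each surface of each node, which is what the surface-net-leakage criterion above does.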

  2. The B-dot Earth Average Magnetic Field

    Science.gov (United States)

    Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon

    2013-01-01

The average Earth magnetic field is usually obtained from complex mathematical models based on a mean-square integral. Depending on the selection of the Earth magnetic model, the average field can have different solutions. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and does not depend on the Earth magnetic model; it does, however, depend on the satellite's magnetic torquers, which the known mathematical models do not take into consideration. The solution of this new technique can be implemented so easily that the flight software can be updated during flight, giving the control system current gains for the magnetic torquers. Finally, the technique is verified and validated using flight data from a satellite that has been in orbit for three years.
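
For context, the b-dot controller that the technique builds on is the standard detumbling law m = -k dB/dt: command a magnetic dipole opposing the measured rate of change of the body-frame field. The sketch below uses an assumed gain and invented magnetometer samples; it illustrates only the controller, not the paper's field-averaging procedure:

```python
import numpy as np

def bdot_dipole(b_now, b_prev, dt, k=1e4):
    """Standard b-dot detumbling law: m = -k * dB/dt, with dB/dt
    approximated by a finite difference of body-frame field samples."""
    b_dot = (b_now - b_prev) / dt
    return -k * b_dot

# Body-frame magnetometer samples in tesla (illustrative values)
b_prev = np.array([2.0e-5, -1.0e-5, 4.0e-5])
b_now  = np.array([2.1e-5, -1.2e-5, 4.0e-5])

m = bdot_dipole(b_now, b_prev, dt=0.1)
torque = np.cross(m, b_now)      # control torque tau = m x B
# On average tau opposes the body rates, removing rotational kinetic energy,
# which is the damping effect the paper's estimation technique exploits.
```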

  3. Determining Maximum Photovoltaic Penetration in a Distribution Grid considering Grid Operation Limits

    DEFF Research Database (Denmark)

    Kordheili, Reza Ahmadi; Bak-Jensen, Birgitte; Pillai, Jayakrishnan Radhakrishna

    2014-01-01

High penetration of photovoltaic panels in a distribution grid can bring the grid to its operation limits. The main focus of the paper is to determine the maximum photovoltaic penetration level in the grid. Three main criteria were investigated for determining maximum penetration level of PV panels...... for this grid: even distribution of PV panels, aggregation of panels at the beginning of each feeder, and aggregation of panels at the end of each feeder. Load modeling is done using the Velander formula. Since PV generation is highest in the summer due to irradiation, a summer day was chosen to determine maximum......; maximum voltage deviation of customers, cable current limits, and transformer nominal value. Voltage deviation of different buses was investigated for different penetration levels. The proposed model was simulated on a Danish distribution grid. Three different PV location scenarios were investigated...
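
The penetration-scan idea (increase PV size until a grid operation limit is violated) can be sketched with a toy lumped feeder and the common linearised voltage-rise estimate ΔV ≈ PR/V. All numbers below are invented for illustration, not the paper's Danish grid data, and only the voltage criterion is checked:

```python
def max_pv_penetration(n_homes, p_load_kw, r_ohm,
                       v_nom=400.0, v_limit=0.1, step_kw=0.5):
    """Scan PV size per household on a lumped radial feeder until the
    steady-state voltage rise exceeds the allowed deviation.  Uses the
    common linearised estimate dV ~ P*R / V for a resistive LV feeder."""
    pv_kw = 0.0
    while True:
        p_net_w = n_homes * (pv_kw - p_load_kw) * 1e3  # reverse flow at high PV
        dv = p_net_w * r_ohm / v_nom
        if dv / v_nom > v_limit:
            return pv_kw - step_kw       # last size that kept voltage in band
        pv_kw += step_kw

# Example: 20 households, 0.5 kW minimum load, 0.5 ohm feeder resistance
limit_kw = max_pv_penetration(20, 0.5, 0.5)
```

A real study would repeat this scan per scenario (even distribution, feeder start, feeder end) and also check cable and transformer loading, as the abstract describes.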

  4. Experimental Results of a DC Bus Voltage Level Control for a Load-Controlled Marine Current Energy Converter

    Directory of Open Access Journals (Sweden)

    Johan Forslund

    2015-05-01

This paper investigates three load control methods for a marine current energy converter using a vertical axis current turbine (VACT) mounted on a permanent magnet synchronous generator (PMSG). The three cases are: a fixed AC load, a fixed pulse-width modulated (PWM) DC load, and DC bus voltage control of a DC load. Experimental results show that the DC bus voltage control reduces the variations of rotational speed by a factor of 3.5 at the cost of slightly increased losses in the generator and transmission lines. For all three cases, the tip speed ratio λ can be kept close to the expected λ_opt. The power coefficient is estimated to be 0.36 at λ_opt; however, for all three cases, the average extracted power was only about 19%. A maximum power point tracking (MPPT) system, with or without water velocity measurement, could increase the average extracted power.
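
The quantities discussed above follow from standard turbine relations: tip speed ratio λ = ωR/v and extracted power P = ½ρC_pAv³. A small sketch with assumed rig numbers (rotor radius, speed, and water velocity are invented, not the paper's experimental setup):

```python
import math

RHO_WATER = 1000.0          # kg/m^3, fresh water

def tip_speed_ratio(omega_rpm, radius_m, v_water):
    """lambda = omega * R / v for a current turbine."""
    omega = omega_rpm * 2.0 * math.pi / 60.0
    return omega * radius_m / v_water

def extracted_power(cp, area_m2, v_water):
    """P = 0.5 * rho * Cp * A * v^3 (standard turbine power relation)."""
    return 0.5 * RHO_WATER * cp * area_m2 * v_water ** 3

# Illustrative numbers only: 15 rpm, 3 m radius, 1.4 m/s current,
# Cp = 0.36 as estimated in the abstract, 12 m^2 swept area.
lam = tip_speed_ratio(omega_rpm=15.0, radius_m=3.0, v_water=1.4)
p_out = extracted_power(cp=0.36, area_m2=12.0, v_water=1.4)
```

An MPPT scheme would adjust the load so that λ tracks λ_opt as the water velocity changes, keeping C_p near its peak.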

  5. Analysis and Minimization of Output Current Ripple for Discontinuous Pulse-Width Modulation Techniques in Three-Phase Inverters

    Directory of Open Access Journals (Sweden)

    Gabriele Grandi

    2016-05-01

This paper gives a complete analysis of the output current ripple in three-phase voltage source inverters considering the different discontinuous pulse-width modulation (DPWM) strategies. In particular, peak-to-peak current ripple amplitude is analytically evaluated over the fundamental period and compared among the most used DPWMs, including positive and negative clamped (DPWM+ and DPWM−), and the four possible combinations between them, usually named DPWM0, DPWM1, DPWM2, and DPWM3. The maximum and average values of the peak-to-peak current ripple are estimated, and a simple method to correlate the ripple envelope with the ripple rms is proposed and verified. Furthermore, all the results obtained by DPWMs are compared to centered pulse-width modulation (CPWM, equivalent to space vector modulation) to identify the optimal pulse-width modulation (PWM) strategy as a function of the modulation index, taking into account the different average switching frequencies. In this way, the PWM technique providing the minimum output current ripple is identified over the whole modulation range. The analytical developments and the main results are experimentally verified by current ripple measurements with a three-phase PWM inverter prototype supplying an induction motor load.

  6. Growth and maximum size of tiger sharks (Galeocerdo cuvier) in Hawaii.

    Science.gov (United States)

    Meyer, Carl G; O'Malley, Joseph M; Papastamatiou, Yannis P; Dale, Jonathan J; Hutchinson, Melanie R; Anderson, James M; Royer, Mark A; Holland, Kim N

    2014-01-01

Tiger sharks (Galeocerdo cuvier) are apex predators characterized by their broad diet, large size and rapid growth. Tiger shark maximum size is typically between 380 and 450 cm Total Length (TL), with a few individuals reaching 550 cm TL, but the maximum size of tiger sharks in Hawaii waters remains uncertain. A previous study suggested tiger sharks grow rather slowly in Hawaii compared to other regions, but this may have been an artifact of the method used to estimate growth (unvalidated vertebral ring counts) compounded by small sample size and narrow size range. Since 1993, the University of Hawaii has conducted a research program aimed at elucidating tiger shark biology, and to date 420 tiger sharks have been tagged and 50 recaptured. All recaptures were from Hawaii except a single shark recaptured off Isla Jacques Cousteau (24°13'17″N 109°52'14″W), in the southern Gulf of California (minimum distance between tag and recapture sites = approximately 5,000 km), after 366 days at liberty (DAL). We used these empirical mark-recapture data to estimate growth rates and maximum size for tiger sharks in Hawaii. We found that tiger sharks in Hawaii grow twice as fast as previously thought, on average reaching 340 cm TL by age 5, and attaining a maximum size of 403 cm TL. Our model indicates the fastest growing individuals attain 400 cm TL by age 5, and the largest reach a maximum size of 444 cm TL. The largest shark captured during our study was 464 cm TL, but individuals >450 cm TL were extremely rare (0.005% of sharks captured). We conclude that tiger shark growth rates and maximum sizes in Hawaii are generally consistent with those in other regions, and hypothesize that a broad diet may help them achieve this rapid growth by maximizing prey consumption rates.
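
Growth curves of the kind fitted here are commonly expressed with the von Bertalanffy growth function. The sketch below chooses parameters only to roughly reproduce the averages quoted in the abstract (asymptotic size near 403 cm TL, about 340 cm TL at age 5); they are not the paper's fitted values:

```python
import math

def vbgf(t, l_inf, k, t0):
    """von Bertalanffy growth function: L(t) = L_inf * (1 - exp(-k * (t - t0)))."""
    return l_inf * (1.0 - math.exp(-k * (t - t0)))

# Illustrative parameters chosen to roughly match the quoted averages;
# not the paper's mark-recapture fit.
L_INF, K, T0 = 403.0, 0.33, -0.6

length_age5 = vbgf(5.0, L_INF, K, T0)    # close to the reported ~340 cm TL
length_birth = vbgf(0.0, L_INF, K, T0)   # ~70 cm, a plausible size at birth
```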

  7. MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.

    Science.gov (United States)

    Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang

    2018-02-02

The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach is still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein data showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 times (protein data) (range: 1.2-20.7) faster than the standard parsimony bootstrap implemented in PAUP*, but 1.6 (DNA) to 4.1 times (protein data) slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 times (protein data) (range: 0.3-63.9) faster than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT. However, this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy-to-use, open-source and available at http://www.cibiv.at/software/mpboot .
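
The nonparametric bootstrap underlying both UFBoot and MPBoot resamples alignment columns (sites) with replacement and re-scores each replicate. A minimal sketch, with a toy per-site score standing in for the real parsimony tree search that MPBoot would run per replicate (alignment and seed are invented for illustration):

```python
import random

def bootstrap_alignment(alignment, rng):
    """Resample alignment columns with replacement: one standard
    nonparametric phylogenetic bootstrap replicate."""
    n_sites = len(alignment[0])
    cols = [rng.randrange(n_sites) for _ in range(n_sites)]
    return ["".join(seq[c] for c in cols) for seq in alignment]

def parsimony_length(alignment):
    """Toy score: per site, (# distinct states - 1) is a lower bound on the
    number of changes on any tree.  Summing it stands in for an actual
    per-replicate parsimony search."""
    return sum(len(set(col)) - 1 for col in zip(*alignment))

rng = random.Random(42)
aln = ["ACGTACGT", "ACGAACGA", "TCGAACGT", "TCGTACGA"]
scores = [parsimony_length(bootstrap_alignment(aln, rng)) for _ in range(100)]
# The spread of replicate scores reflects sampling variability over sites;
# branch support is the fraction of replicate trees containing a given split.
```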

  8. [Polish guidelines of 2001 for maximum admissible intensities in high frequency EMF versus European Union recommendations].

    Science.gov (United States)

    Aniołczyk, Halina

    2003-01-01

In 1999, a draft of amendments to maximum admissible intensities (MAI) of electromagnetic fields (0 Hz-300 GHz) was prepared by Professor H. Korniewicz of the Central Institute for Labour Protection, Warsaw, in cooperation with the Nofer Institute of Occupational Medicine, Łódź (radio- and microwaves) and the Military Institute of Hygiene and Epidemiology, Warsaw (pulse radiation). Before 2000, the development of the national MAI guidelines for the frequency range of 0.1 MHz-300 GHz was based on the knowledge of biological and health effects of EMF exposure available at the turn of the 1960s. The current basis for establishing international MAI standards is the well-documented thermal effect, measured by the value of the specific absorption rate (SAR), whereas the effects of resonant absorption impose the nature of the functional dependency on EMF frequency. The Russian standards, already thoroughly analyzed, still take so-called non-thermal effects and the concept of energetic load for a work shift with its progressive averaging (see the hazardous zone in the Polish guidelines) as a basis for setting maximum admissible intensities. The World Health Organization recommends harmonization of the EMF protection guidelines existing in different countries with the guidelines of the International Commission on Non-Ionizing Radiation Protection (ICNIRP), and its position is supported by the European Union.

  9. The influence of convective current generator on the global current

    Directory of Open Access Journals (Sweden)

    V. N. Morozov

    2006-01-01

The mathematical generalization of the classical model of the global circuit, taking into account the convective current generator working in the planetary boundary layer, was considered. The convective current generator may be interpreted as a generator in which the electromotive force is produced by processes of turbulent transport of electrical charge. It is shown that the average potential of the ionosphere is defined not only by the thunderstorm current generators working at the present moment, but also by the convective current generator. The influence of the convective processes in the boundary layer on the electrical parameters of the atmosphere is not only local but has a global character as well. Numerical estimations made for the case of the convectively unstable boundary layer demonstrate that the increase of the average potential of the ionosphere may be of the order of 10% to 40%.

  10. Evaluation of regulatory variation and theoretical health risk for pesticide maximum residue limits in food.

    Science.gov (United States)

    Li, Zijian

    2018-08-01

To evaluate whether pesticide maximum residue limits (MRLs) can protect public health, a deterministic dietary risk assessment of maximum pesticide legal exposure was conducted to convert global MRLs to theoretical maximum dose intake (TMDI) values by estimating the average food intake rate and human body weight for each country. A total of 114 nations (58% of the total nations in the world) and two international organizations, including the European Union (EU) and Codex (WHO), have regulated at least one of the most currently used pesticides in at least one of the most consumed agricultural commodities. In this study, 14 of the most commonly used pesticides and 12 of the most commonly consumed agricultural commodities were identified and selected for analysis. A health risk analysis indicated that nearly 30% of the computed pesticide TMDI values were greater than the acceptable daily intake (ADI) values; however, many nations lack common pesticide MRLs in many commonly consumed foods, and other human exposure pathways, such as soil, water, and air, were not considered. Normality tests of the TMDI values set indicated that all distributions had a right skewness due to large TMDI clusters at the low end of the distribution, which were caused by some strict pesticide MRLs regulated by the EU (normally a default MRL of 0.01 mg/kg when essential data are missing). The Box-Cox transformation and optimal lambda (λ) were applied to these TMDI distributions, and normality tests of the transformed data set indicated that the power transformed TMDI values of at least eight pesticides presented a normal distribution. It was concluded that unifying strict pesticide MRLs by nations worldwide could significantly skew the distribution of TMDI values to the right, lower the legal exposure to pesticides, and effectively control human health risks. Copyright © 2018 Elsevier Ltd. All rights reserved.
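
The TMDI screening step described above reduces to summing MRL × intake over the diet, normalising by body weight, and comparing against the ADI. A sketch with invented illustrative values (not the study's survey data):

```python
def tmdi_mg_per_kg_bw(mrls_mg_per_kg, intakes_kg_per_day, body_weight_kg):
    """Theoretical maximum daily intake: assume every commodity carries
    residue at its legal maximum (MRL), sum over the diet, and normalise
    by body weight: TMDI = sum_i(MRL_i * intake_i) / bw."""
    total = sum(mrl * intake
                for mrl, intake in zip(mrls_mg_per_kg, intakes_kg_per_day))
    return total / body_weight_kg

# Illustrative values only (e.g. apples, potatoes, wheat), not survey data
mrls    = [0.5, 0.01, 2.0]     # mg/kg commodity
intakes = [0.10, 0.25, 0.30]   # kg/day of each commodity

tmdi = tmdi_mg_per_kg_bw(mrls, intakes, body_weight_kg=60.0)
adi = 0.01                      # mg/kg bw/day, assumed ADI for the pesticide
exceeds = tmdi > adi            # the paper's health-risk screening criterion
```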

  11. Improved Maximum Parsimony Models for Phylogenetic Networks.

    Science.gov (United States)

    Van Iersel, Leo; Jones, Mark; Scornavacca, Celine

    2018-05-01

Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that permits modeling of biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.

  12. How to resolve microsecond current fluctuations in single ion channels: The power of beta distributions

    Science.gov (United States)

    Schroeder, Indra

    2015-01-01

A main ingredient for the understanding of structure/function correlates of ion channels is the quantitative description of single-channel gating and conductance. However, a wealth of information provided by fast current fluctuations beyond the temporal resolution of the recording system is often ignored, even though it is close to the time window accessible to molecular dynamics simulations. This kind of current fluctuation poses a special technical challenge, because individual opening/closing or blocking/unblocking events cannot be resolved, and the resulting averaging over undetected events decreases the apparent single-channel current. Here, I briefly summarize the history of fast-current fluctuation analysis and focus on the so-called "beta distributions." This tool exploits characteristics of fluctuation-induced excess noise on the current amplitude histograms to reconstruct the true single-channel current and kinetic parameters. A guideline for the analysis and recent applications demonstrate that a construction of theoretical beta distributions by Markov-model simulations offers maximum flexibility compared to analytical solutions. PMID:26368656
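
The averaging effect described here (undetected fast events pulling the apparent single-channel current below its true level) is easy to reproduce numerically. A minimal sketch with an assumed two-state channel at open probability 0.5 and a crude moving average standing in for the recording filter; this illustrates the problem beta distributions solve, not the beta-distribution analysis itself:

```python
import random

def simulate_fast_channel(n, p_open, rng):
    """Telegraph-like current: 1 pA when open, 0 when closed, with gating
    much faster than the recording bandwidth (samples drawn independently)."""
    return [1.0 if rng.random() < p_open else 0.0 for _ in range(n)]

def moving_average(x, w):
    """Crude stand-in for the recording filter that hides fast gating."""
    return [sum(x[i:i + w]) / w for i in range(len(x) - w + 1)]

rng = random.Random(7)
raw = simulate_fast_channel(20000, p_open=0.5, rng=rng)
filtered = moving_average(raw, w=50)

apparent = max(filtered)                      # never reaches the true 1 pA level
mean_current = sum(filtered) / len(filtered)  # ~ p_open * true level
# The amplitude histogram of `filtered` piles up between the closed and open
# levels; its shape is the excess noise that beta distributions exploit to
# recover the true single-channel current and the rate constants.
```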

  13. MCBS Highlights: Ownership and Average Premiums for Medicare Supplementary Insurance Policies

    Science.gov (United States)

    Chulis, George S.; Eppig, Franklin J.; Poisal, John A.

    1995-01-01

    This article describes private supplementary health insurance holdings and average premiums paid by Medicare enrollees. Data were collected as part of the 1992 Medicare Current Beneficiary Survey (MCBS). Data show the number of persons with insurance and average premiums paid by type of insurance held—individually purchased policies, employer-sponsored policies, or both. Distributions are shown for a variety of demographic, socioeconomic, and health status variables. Primary findings include: Seventy-eight percent of Medicare beneficiaries have private supplementary insurance; 25 percent of those with private insurance hold more than one policy. The average premium paid for private insurance in 1992 was $914. PMID:10153473

  14. Process stabilization by peak current regulation in reactive high-power impulse magnetron sputtering of hafnium nitride

    International Nuclear Information System (INIS)

    Shimizu, T; Villamayor, M; Helmersson, U; Lundin, D

    2016-01-01

A simple and cost-effective approach to stabilize the sputtering process in the transition zone during reactive high-power impulse magnetron sputtering (HiPIMS) is proposed. The method is based on real-time monitoring and control of the discharge current waveforms. To stabilize the process conditions at a given set point, a feedback control system was implemented that automatically regulates the pulse frequency, and thereby the average sputtering power, to maintain a constant maximum discharge current. In the present study, the variation of the pulse current waveforms over a wide range of reactive gas flows and pulse frequencies during a reactive HiPIMS process of Hf-N in an Ar–N2 atmosphere illustrates that the discharge current waveform is an excellent indicator of the process conditions. With the reactive HiPIMS peak current regulation activated, stable process conditions were maintained while varying the N2 flow from 2.1 to 3.5 sccm, through an automatic adjustment of the pulse frequency from 600 Hz to 1150 Hz and a consequent increase of the average power from 110 to 270 W. Hf-N films deposited using peak current regulation exhibited a stable stoichiometry, a nearly constant power-normalized deposition rate, and a polycrystalline cubic-phase Hf-N with (1 1 1)-preferred orientation over the entire reactive gas flow range investigated. The physical reasons for the change in the current pulse waveform for different process conditions are discussed in some detail. (paper)
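
The peak-current feedback scheme can be sketched as a simple integral-style loop acting on the pulse frequency. The plant model below is an invented linear stand-in for the discharge (more reactive gas lowers the peak current, higher frequency restores it); it is not HiPIMS physics, and all gains and set points are assumptions:

```python
def regulate_peak_current(measure_peak, f_hz, setpoint_a, gain=2.0,
                          f_min=300.0, f_max=1500.0, steps=200):
    """Integral-style feedback: raise the pulse frequency (and hence the
    average power) when the measured peak current sags below the set point,
    lower it when the peak overshoots.  `measure_peak` models the discharge;
    the real controller reads the current waveform instead."""
    for _ in range(steps):
        err = setpoint_a - measure_peak(f_hz)
        f_hz = min(f_max, max(f_min, f_hz + gain * err))
    return f_hz

# Toy plant: peak current rises with frequency, drops with N2 flow
# (illustrative linear model only).
def plant(n2_flow_sccm):
    return lambda f_hz: 20.0 + 0.1 * f_hz - 5.0 * n2_flow_sccm

f_low = regulate_peak_current(plant(2.1), f_hz=600.0, setpoint_a=80.0)
f_high = regulate_peak_current(plant(3.5), f_hz=600.0, setpoint_a=80.0)
# The controller settles at a higher pulse frequency for the higher N2 flow,
# mirroring the frequency increase reported in the abstract.
```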

  15. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  16. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  17. Chromospheric oscillations observed with OSO 8. III. Average phase spectra for Si II

    International Nuclear Information System (INIS)

    White, O.R.; Athay, R.G.

    1979-01-01

Time series of intensity and Doppler-shift fluctuations in the Si II emission lines λ816.93 and λ817.45 are Fourier analyzed to determine the frequency variation of phase differences between intensity and velocity and between these two lines, formed 300 km apart in the middle chromosphere. Average phase spectra show that oscillations between 2 and 9 mHz in the two lines have time delays from 35 to 40 s, which is consistent with the upward propagation of sound waves at 8.6-7.5 km s⁻¹. In this same frequency band, near 3 mHz, maximum brightness leads maximum blueshift by 60°. At frequencies above 11 mHz, where the power spectrum is flat, the phase differences are uncertain, but approximately 65% of the cases indicate upward propagation. At these higher frequencies, the phase lead between intensity and blue Doppler shift ranges from 0° to 180° with an average value of 90°. However, the phase estimates in this upper band are corrupted by both aliasing and randomness inherent to the measured signals. Phase differences in the two narrow spectral features seen at 10.5 and 27 mHz in the power spectra are shown to be consistent with properties expected for aliases of the wheel rotation rate of the spacecraft wheel section.
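
The delay inference used here (phase difference of the cross-spectrum at a given frequency, divided by 2πf) can be reproduced on synthetic data. The cadence, series length, and delay below are assumptions chosen so the test frequency of roughly 3 mHz falls on an exact FFT bin:

```python
import numpy as np

def phase_delay(sig_low, sig_high, dt, f_target):
    """Estimate the time delay between two series at one frequency from the
    phase of their cross-spectrum: tau = delta_phi / (2 * pi * f)."""
    n = len(sig_low)
    freqs = np.fft.rfftfreq(n, dt)
    cross = np.fft.rfft(sig_low) * np.conj(np.fft.rfft(sig_high))
    k = np.argmin(np.abs(freqs - f_target))
    return np.angle(cross[k]) / (2.0 * np.pi * freqs[k])

# Synthetic ~3 mHz oscillation seen 300 km apart with a 37 s delay
dt, n = 10.0, 1024                  # 10 s cadence, ~2.8 h series
t = np.arange(n) * dt
f0, delay = 1.0 / 320.0, 37.0       # f0 = 3.125 mHz lands on an exact bin
lower = np.sin(2.0 * np.pi * f0 * t)
upper = np.sin(2.0 * np.pi * f0 * (t - delay))  # upper line lags: upward wave

tau = phase_delay(lower, upper, dt, f0)
# tau recovers ~37 s, consistent with ~8 km/s sound speed over 300 km
```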

  18. High-throughput machining using high average power ultrashort pulse lasers and ultrafast polygon scanner

    Science.gov (United States)

    Schille, Joerg; Schneider, Lutz; Streek, André; Kloetzer, Sascha; Loeschner, Udo

    2016-03-01

In this paper, high-throughput ultrashort pulse laser machining is investigated on various industrial-grade metals (aluminium, copper, stainless steel) and Al2O3 ceramic at unprecedented processing speeds. This is achieved by using a high pulse repetition frequency picosecond laser with a maximum average output power of 270 W in conjunction with a unique, in-house developed two-axis polygon scanner. Initially, different concepts of polygon scanners are engineered and tested to find the optimal architecture for ultrafast and precise laser beam scanning. A remarkable scan speed of 1,000 m/s is achieved on the substrate, and thanks to the resulting low pulse overlap, thermal accumulation and plasma absorption effects are avoided at pulse repetition frequencies of up to 20 MHz. In order to identify optimum processing conditions for efficient high-average-power laser machining, the depths of cavities produced under varied parameter settings are analyzed and, from the results obtained, the characteristic removal values are specified. The maximum removal rate achieved is 27.8 mm³/min for aluminium, 21.4 mm³/min for copper, 15.3 mm³/min for stainless steel and 129.1 mm³/min for Al2O3 when the full available laser power is irradiated at the optimum pulse repetition frequency.

  19. Maximum repulsed magnetization of a bulk superconductor with low pulsed field

    International Nuclear Information System (INIS)

    Tsuchimoto, M.; Kamijo, H.; Fujimoto, H.

    2005-01-01

Pulsed field magnetization of a bulk high-Tc superconductor (HTS) is an important technique, especially for practical applications of bulk superconducting magnets. Full magnetization is not obtained at low pulsed fields, and the trapped field is decreased by reversed current in the HTS. The trapped field distribution produced by repulsed magnetization was previously reported in experiments with temperature control. In this study, the repulsed magnetization technique with a low pulsed field is numerically analyzed under the assumption of a shielding current varied by temperature control. The shielding current densities are discussed with the aim of obtaining the maximum trapped field from two successive low-pulsed-field magnetizations.

  20. Comparison of mass transport using average and transient rainfall boundary conditions

    International Nuclear Information System (INIS)

    Duguid, J.O.; Reeves, M.

    1976-01-01

A general two-dimensional model for simulation of saturated-unsaturated transport of radionuclides in ground water has been developed and is currently being tested. The model is being applied to study the transport of radionuclides from a waste-disposal site where field investigations are currently under way to obtain the necessary model parameters. A comparison of the amount of tritium transported is made using both average and transient rainfall boundary conditions. The simulations indicate that there is no substantial difference in the transport for the two conditions tested. However, the values of dispersivity used in the unsaturated zone caused more transport above the water table than has been observed under actual conditions. This deficiency should be corrected and further comparisons should be made before average rainfall boundary conditions are used for long-term transport simulations.

  1. Fission neutron spectrum averaged cross sections for threshold reactions on arsenic

    International Nuclear Information System (INIS)

    Dorval, E.L.; Arribere, M.A.; Kestelman, A.J.; Comision Nacional de Energia Atomica, Cuyo Nacional Univ., Bariloche; Ribeiro Guevara, S.; Cohen, I.M.; Ohaco, R.A.; Segovia, M.S.; Yunes, A.N.; Arrondo, M.; Comision Nacional de Energia Atomica, Buenos Aires

    2006-01-01

We have measured the cross sections, averaged over a 235U fission neutron spectrum, for the two high-threshold reactions 75As(n,p)75mGe and 75As(n,2n)74As. The measured averaged cross sections are 0.292±0.022 mb, referred to the 3.95±0.20 mb standard for the 27Al(n,p)27Mg averaged cross section, and 0.371±0.032 mb, referred to the 111±3 mb standard for the 58Ni(n,p)58m+gCo averaged cross section, respectively. The measured averaged cross sections were also evaluated semi-empirically by numerically integrating experimental differential cross section data extracted for both reactions from the current literature. The calculations were performed for four different representations of the thermal-neutron-induced 235U fission neutron spectrum. The calculated cross sections, though depending on the analytical representation of the flux, agree with the measured values within the estimated uncertainties. (author)
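
A spectrum-averaged cross section is the flux-weighted integral σ̄ = ∫χ(E)σ(E)dE / ∫χ(E)dE, which is what the semi-empirical evaluation above computes numerically. The sketch below uses a commonly quoted Watt representation of the 235U thermal-fission spectrum and an invented threshold excitation function; it reproduces the method, not the paper's evaluations:

```python
import math

def watt(e_mev, a=0.988, b=2.249):
    """Unnormalised Watt form of the 235U thermal-fission neutron spectrum,
    chi(E) ~ exp(-E/a) * sinh(sqrt(b*E)); a, b are commonly quoted values."""
    return math.exp(-e_mev / a) * math.sinh(math.sqrt(b * e_mev))

def spectrum_averaged(sigma, e_lo=1e-3, e_hi=20.0, n=20000):
    """sigma_bar = int(chi * sigma) / int(chi) by the trapezoidal rule."""
    de = (e_hi - e_lo) / n
    num = den = 0.0
    for i in range(n + 1):
        e = e_lo + i * de
        w = 0.5 if i in (0, n) else 1.0   # trapezoid end-point weights
        chi = watt(e)
        num += w * chi * sigma(e) * de
        den += w * chi * de
    return num / den

# Toy threshold excitation function (mb): zero below 2 MeV, saturating above.
sigma_np = lambda e: 0.0 if e < 2.0 else 30.0 * (1.0 - math.exp(-(e - 2.0)))
avg = spectrum_averaged(sigma_np)
# Only the high-energy tail of the spectrum contributes, so the averaged
# value is far below the plateau cross section, as for the reactions above.
```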

  2. Direct maximum parsimony phylogeny reconstruction from genotype data.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-12-05

Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data more commonly is available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower-bound on the number of mutations that the genetic region has undergone.
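
Once a topology is fixed, the minimum number of mutations per site can be counted with Fitch's small-parsimony algorithm, the kind of primitive that maximum parsimony methods build on. A minimal sketch on an assumed four-haplotype topology (the tree and SNP states are invented for illustration):

```python
def fitch_count(tree, states):
    """Fitch small-parsimony count for one site on a rooted binary tree.
    `tree` is a nested tuple of leaf names; `states` maps leaf -> state.
    Returns (candidate state set at this node, mutations below it)."""
    if isinstance(tree, str):                     # leaf node
        return {states[tree]}, 0
    left, right = tree
    ls, lc = fitch_count(left, states)
    rs, rc = fitch_count(right, states)
    if ls & rs:
        return ls & rs, lc + rc                   # intersection: no change needed
    return ls | rs, lc + rc + 1                   # union: count one mutation

# Four haplotypes at a single SNP site on an assumed topology
topology = (("h1", "h2"), ("h3", "h4"))
site = {"h1": "A", "h2": "A", "h3": "G", "h4": "A"}
_, n_mutations = fitch_count(topology, site)      # one mutation suffices
```

Summing this count over all sites gives the parsimony length of the tree; the search over topologies (or, in this paper, directly over genotype phasings) is the hard part.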

  3. Direct maximum parsimony phylogeny reconstruction from genotype data

    Directory of Open Access Journals (Sweden)

    Ravi R

    2007-12-01

Background: Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data more commonly is available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results: In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion: Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower-bound on the number of mutations that the genetic region has undergone.

  4. Stimulus-dependent maximum entropy models of neural population codes.

    Directory of Open Access Journals (Sweden)

    Einat Granot-Atedgi

    Full Text Available Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single-cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.

  5. Numerical and experimental research on pentagonal cross-section of the averaging Pitot tube

    International Nuclear Information System (INIS)

    Zhang, Jili; Li, Wei; Liang, Ruobing; Zhao, Tianyi; Liu, Yacheng; Liu, Mingsheng

    2017-01-01

    Averaging Pitot tubes have been widely used in many fields because of their simple structure and stable performance. This paper introduces a new shape of the cross-section of an averaging Pitot tube. Firstly, the structure of the averaging Pitot tube and the distribution of pressure taps are given. Then, a mathematical model of the airflow around it is formulated. After that, a series of numerical simulations are carried out to optimize the geometry of the tube. The distribution of the streamline and pressures around the tube are given. To test its performance, a test platform was constructed in accordance with the relevant national standards and is described in this paper. Curves are provided, linking the values of flow coefficient with the values of Reynolds number. With a maximum deviation of only  ±3%, the results of the flow coefficient obtained from the numerical simulations were in agreement with those obtained from experimental methods. The proposed tube has a stable flow coefficient and favorable metrological characteristics. (paper)

  6. Predicting scour beneath subsea pipelines from existing small free span depths under steady currents

    Directory of Open Access Journals (Sweden)

    Jun Y. Lee

    2017-06-01

    Full Text Available An equation was developed to predict current-induced scour beneath subsea pipelines in areas with small span depths, S. Current equations for scour prediction are only applicable to partially buried pipelines. The existence of small span depths (i.e. S/D < 0.3) is of concern because the capacity for scour is higher at smaller span depths. Furthermore, it is impractical to perform rectification works, such as installing grout bags, under a pipeline with a small S/D. Full-scale two-dimensional computational fluid dynamics (CFD) simulations were performed using the Reynolds-averaged Navier–Stokes approach and the shear stress transport k–ω turbulence model. To predict the occurrence of scour, the computed maximum bed shear stress beneath the pipe was converted to the dimensionless Shields parameter and compared with the critical Shields parameter based on the mean sediment grain size. The numerical setup was verified, and a good agreement was found between model-scale CFD data and experimental data. Field data were obtained to determine the mean grain size and far-field current velocity and to measure the span depths along the surveyed pipe length. A trend line equation was fitted to the full-scale CFD data, whereby the maximum Shields parameter beneath the pipe can be calculated from the undisturbed Shields parameter and S/D.
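The scour-onset criterion described above (computed bed shear stress converted to a Shields parameter and compared with a critical value) reduces to a one-line comparison. The sketch below is a minimal illustration, not the authors' code; the grain size, shear stress, densities and critical Shields number are assumed values, not field data.

```python
def shields_parameter(tau_max, d50, rho_s=2650.0, rho_w=1025.0, g=9.81):
    """Dimensionless Shields parameter from the maximum bed shear stress
    tau_max (Pa) and the median grain size d50 (m)."""
    return tau_max / ((rho_s - rho_w) * g * d50)

def scour_predicted(tau_max, d50, theta_cr=0.05):
    """Scour onset is predicted when the Shields parameter exceeds an
    assumed critical value theta_cr."""
    return shields_parameter(tau_max, d50) > theta_cr

# Illustrative numbers: 1.2 Pa peak shear stress over 0.5 mm sand
print(shields_parameter(1.2, 0.5e-3), scour_predicted(1.2, 0.5e-3))
```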

  7. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit simpler, less bulky, consumes less power, and costs less, eliminating need for analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.

  8. Analysis of photosynthate translocation velocity and measurement of weighted average velocity in transporting pathway of crops

    International Nuclear Information System (INIS)

    Ge Cailin; Luo Shishi; Gong Jian; Zhang Hao; Ma Fei

    1996-08-01

    The translocation profile pattern of 14C-photosynthate along the transporting pathway in crops was monitored by pulse-labelling a mature leaf with 14CO2. The progressive spreading of the translocation profile along the sheath or stem indicates that the translocation of photosynthate proceeds with a range of velocities rather than with just a single velocity. A method for measuring the weighted average velocity of photosynthate translocation along the sheath or stem was established in living crops, and the weighted average velocity and the maximum velocity of photosynthate translocation along the sheath were actually measured in rice and maize. (4 figs., 3 tabs.)
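The weighted-average idea can be shown numerically: each transport distance contributes a velocity weighted by the fraction of labelled photosynthate found there. The function and the activity profile below are hypothetical, purely to illustrate the computation, not the authors' measurement protocol.

```python
def weighted_average_velocity(distances_cm, activities, elapsed_h):
    """Average of the segment velocities (distance / elapsed time),
    weighted by the 14C activity measured in each segment."""
    total = sum(activities)
    return sum(a * (d / elapsed_h) for d, a in zip(distances_cm, activities)) / total

# Hypothetical activity profile along a sheath, 2 h after pulse labelling
distances = [5.0, 10.0, 15.0, 20.0]   # cm from the fed leaf
activity = [10.0, 40.0, 30.0, 20.0]   # relative 14C counts per segment
print(weighted_average_velocity(distances, activity, 2.0))  # cm/h
```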

  9. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and the underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is removed by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system that appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)

  10. Implementation of Maximum Power Point Tracking (MPPT) Solar Charge Controller using Arduino

    Science.gov (United States)

    Abdelilah, B.; Mouna, A.; Kouider M’Sirdi, N.; El Hossain, A.

    2018-05-01

    The Arduino platform, together with a number of standard sensors, can be used as the core of an electronic system for acquiring measurements and implementing controls. This paper presents the design of a low-cost and effective solar charge controller. The system includes several elements: the solar panel, a DC/DC converter, a battery, an MPPT circuit built around a microcontroller, sensors, and the MPPT algorithm. The MPPT (Maximum Power Point Tracker) algorithm has been implemented on an Arduino Nano. The panel voltage and current are sampled, and the algorithm adjusts the operating point until the maximum power point (MPP) is reached. This paper provides details on the solar charge control device at the maximum power point. The results include the change of the duty cycle with the change in load, and hence the variation of the buck converter output voltage and current controlled by the MPPT algorithm.
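The abstract does not say which MPPT variant was programmed, so the sketch below uses perturb-and-observe, the most common choice: perturb the duty cycle, keep the direction if power rose, reverse it otherwise. The power curve is synthetic, with its maximum placed arbitrarily at a duty cycle of 0.6; in a real controller the duty cycle would drive the converter's PWM output and power would come from the voltage and current sensors.

```python
def perturb_and_observe(power, prev_power, duty, direction, step=0.01):
    """One MPPT iteration: keep perturbing the duty cycle in the same
    direction while power increases, reverse when it drops."""
    if power < prev_power:
        direction = -direction
    duty = min(max(duty + direction * step, 0.0), 1.0)
    return duty, direction

def panel_power(duty):                 # toy power curve, peak at duty = 0.6
    return 100.0 * (1.0 - (duty - 0.6) ** 2)

duty, direction, prev_power = 0.3, +1, 0.0
for _ in range(200):
    power = panel_power(duty)
    duty, direction = perturb_and_observe(power, prev_power, duty, direction)
    prev_power = power
print(duty)  # settles into a small oscillation around 0.6
```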

  11. Maximum And Minimum Temperature Trends In Mexico For The Last 31 Years

    Science.gov (United States)

    Romero-Centeno, R.; Zavala-Hidalgo, J.; Allende Arandia, M. E.; Carrasco-Mijarez, N.; Calderon-Bustamante, O.

    2013-05-01

    Based on high-resolution (1') daily maps of the maximum and minimum temperatures in Mexico, an analysis of the trends over the last 31 years is performed. The maps were generated using all the available information from more than 5,000 stations of the Mexican Weather Service (Servicio Meteorológico Nacional, SMN) for the period 1979-2009, along with data from the North American Regional Reanalysis (NARR). The data processing procedure includes a quality control step to eliminate erroneous daily data, and makes use of a high-resolution digital elevation model (from GEBCO), the relationship between air temperature and elevation by means of the average environmental lapse rate, and interpolation algorithms (linear and inverse-distance weighting). Based on the monthly gridded maps for the mentioned period, the maximum and minimum temperature trends, calculated by least-squares linear regression, and their statistical significance are obtained and discussed.
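The trend-estimation step, a least-squares linear fit to a monthly series, can be sketched in a few lines. The series below is synthetic (a known slope plus noise), not SMN data.

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(31 * 12)          # 31 years of monthly mean temperatures
temps = 25.0 + 0.002 * months + rng.normal(0.0, 0.5, months.size)

# Least-squares linear fit; the slope is in degrees per month
slope_per_month, intercept = np.polyfit(months, temps, 1)
trend_per_decade = slope_per_month * 12 * 10
print(trend_per_decade)  # should recover roughly the true 0.24 deg/decade
```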

  12. Current neutralization of nanosecond risetime, high-current electron beam

    International Nuclear Information System (INIS)

    Lidestri, J.P.; Spence, P.W.; Bailey, V.L.; Putnam, S.D.; Fockler, J.; Eichenberger, C.; Champney, P.D.

    1991-01-01

    This paper reports that the authors have recently investigated methods to achieve current neutralization in fast risetime (<3 ns) electron beams propagating in low-pressure gas. For this investigation, they injected a 3-MV, 30-kA intense beam into a drift cell containing gas pressures from 0.10 to 20 torr. By using a fast net current monitor (100-ps risetime), it was possible to observe beam front gas breakdown phenomena and to optimize the drift cell gas pressure to achieve maximum current neutralization. Experimental observations have shown that by increasing the drift gas pressure (P ∼ 12.5 torr) to decrease the mean time between secondary electron/gas collisions, the beam can propagate with 90% current neutralization for the full beam pulsewidth (16 ns)

  13. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang

    2013-08-13

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics.

  14. Improving sensitivity in micro-free flow electrophoresis using signal averaging

    Science.gov (United States)

    Turgeon, Ryan T.; Bowser, Michael T.

    2009-01-01

    Microfluidic free-flow electrophoresis (μFFE) is a separation technique that separates continuous streams of analytes as they travel through an electric field in a planar flow channel. The continuous nature of the μFFE separation suggests that approaches more commonly applied in spectroscopy and imaging may be effective in improving sensitivity. The current paper describes the S/N improvements that can be achieved by simply averaging multiple images of a μFFE separation; 20–24-fold improvements in S/N were observed by averaging the signal from 500 images recorded for over 2 min. Up to an 80-fold improvement in S/N was observed by averaging 6500 images. Detection limits as low as 14 pM were achieved for fluorescein, which is impressive considering the non-ideal optical set-up used in these experiments. The limitation to this signal averaging approach was the stability of the μFFE separation. At separation times longer than 20 min bubbles began to form at the electrodes, which disrupted the flow profile through the device, giving rise to erratic peak positions. PMID:19319908
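The reported gains match the usual statistics of frame averaging: for uncorrelated noise, averaging N images improves S/N by roughly sqrt(N), so 500 images give about a 22-fold gain, consistent with the observed 20–24-fold improvement. A synthetic check with made-up frames:

```python
import numpy as np

rng = np.random.default_rng(42)
signal = 1.0
frames = signal + rng.normal(0.0, 1.0, size=(500, 1000))  # 500 noisy frames

single_snr = signal / frames[0].std()          # S/N of one frame
avg_snr = signal / frames.mean(axis=0).std()   # S/N after averaging 500
print(avg_snr / single_snr)  # roughly sqrt(500), about 22
```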

  15. Maximum Entropy Closure of Balance Equations for Miniband Semiconductor Superlattices

    Directory of Open Access Journals (Sweden)

    Luis L. Bonilla

    2016-07-01

    Full Text Available Charge transport in nanosized electronic systems is described by semiclassical or quantum kinetic equations that are often costly to solve numerically and difficult to reduce systematically to macroscopic balance equations for densities, currents, temperatures and other moments of macroscopic variables. The maximum entropy principle can be used to close the system of equations for the moments but its accuracy or range of validity are not always clear. In this paper, we compare numerical solutions of balance equations for nonlinear electron transport in semiconductor superlattices. The equations have been obtained from Boltzmann–Poisson kinetic equations very far from equilibrium for strong fields, either by the maximum entropy principle or by a systematic Chapman–Enskog perturbation procedure. Both approaches produce the same current-voltage characteristic curve for uniform fields. When the superlattices are DC voltage biased in a region where there are stable time periodic solutions corresponding to recycling and motion of electric field pulses, the differences between the numerical solutions produced by numerically solving both types of balance equations are smaller than the expansion parameter used in the perturbation procedure. These results and possible new research venues are discussed.

  16. The properties of chromium electrodeposited with programmed currents. Part II. Reversing current

    Directory of Open Access Journals (Sweden)

    TANJA M. KOSTIC

    2000-01-01

    Full Text Available The electrodeposition of chromium with a programmed reversing current (RC) was investigated in a regime of high cathodic current density (77 A dm-2) and anodic current density (55 A dm-2), with a cathodic-to-anodic time ratio of 60 : 1. Chromium was deposited on a steel substrate from a chromic-sulphuric acid solution for one hour, with the anode and cathode arranged as a system of parallel plates. Basic properties of the deposits, such as thickness, morphology, microhardness and brightness, were examined. The surface distribution of the deposits was obtained from measurements of the deposit thickness (between 32 and 67 µm), using a ferromagnetic non-destructive method. Based on the results, graphic models of the deposit surface distribution were made. Two thickness ranges could be seen on the model (range 1, average thickness 35.1 µm, and range 2, average thickness 57.81 µm). These results were statistically analysed by columns, by rows and over the whole surface. For the whole specimen, the average thickness was 45.39 µm with a coefficient of variation of 0.2582. The basic properties of the deposits did not change with the variation in thickness. Because of this, coatings deposited with reversing current can be considered much more reliable in wear and corrosion protection systems than those deposited with direct current.

  17. ARMA-Based SEM When the Number of Time Points T Exceeds the Number of Cases N: Raw Data Maximum Likelihood.

    Science.gov (United States)

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2003-01-01

    Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)
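For the simplest member of the ARMA family, an AR(1) process, "raw data maximum likelihood" can be sketched directly: evaluate the exact Gaussian log-likelihood of the whole length-T series from its stationary covariance matrix, which is well defined even for a single case (N = 1). This is an illustrative sketch, not the authors' SEM implementation; the grid search stands in for a proper optimizer.

```python
import numpy as np

def ar1_loglik(x, phi, sigma2=1.0):
    """Exact Gaussian log-likelihood of a stationary AR(1) series; the
    T x T covariance has entries sigma2 * phi**|i-j| / (1 - phi**2)."""
    T = len(x)
    lags = np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
    cov = sigma2 / (1.0 - phi**2) * phi ** lags
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (T * np.log(2 * np.pi) + logdet + x @ np.linalg.solve(cov, x))

rng = np.random.default_rng(1)
x = np.zeros(200)
for t in range(1, 200):              # one simulated series (N = 1), phi = 0.7
    x[t] = 0.7 * x[t - 1] + rng.normal()

phis = np.linspace(-0.9, 0.9, 181)   # crude grid search for the MLE
phi_hat = phis[np.argmax([ar1_loglik(x, p) for p in phis])]
print(phi_hat)  # near the true value 0.7
```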

  18. Cryogenic Current Comparator for Absolute Measurement of the Dark Current of the Superconducting Cavities for Tesla

    CERN Document Server

    Knaack, K; Wittenburg, K

    2003-01-01

    A newly high performance SQUID based measurement system for detecting dark currents, generated by superconducting cavities for TESLA is proposed. It makes use of the Cryogenic Current Comparator principle and senses dark currents in the nA range with a small signal bandwidth of 70 kHz. To reach the maximum possible energy in the TESLA project is a strong motivation to push the gradients of the superconducting cavities closer to the physical limit of 50 MV/m. The field emission of electrons (the so called dark current) of the superconducting cavities at strong fields may limit the maximum gradient. The absolute measurement of the dark current in correlation with the gradient will give a proper value to compare and classify the cavities. This contribution describes a Cryogenic Current Comparator (CCC) as an excellent and useful tool for this purpose. The most important component of the CCC is a high performance DC SQUID system which is able to measure extremely low magnetic fields, e.g. caused by the extracted ...

  19. Studies in High Current Density Ion Sources for Heavy Ion Fusion Applications

    International Nuclear Information System (INIS)

    Chacon-Golcher, E.

    2002-01-01

    This dissertation develops diverse research on small (diameter ∼ few mm), high current density (J ∼ several tens of mA/cm 2 ) heavy ion sources. The research has been developed in the context of a programmatic interest within the Heavy Ion Fusion (HIF) Program to explore alternative architectures in the beam injection systems that use the merging of small, bright beams. An ion gun was designed and built for these experiments. Results of average current density yield ( ) at different operating conditions are presented for K + and Cs + contact ionization sources and potassium aluminum silicate sources. Maximum values for a K + beam of ∼90 mA/cm 2 were observed in 2.3 (micro)s pulses. Measurements of beam intensity profiles and emittances are included. Measurements of neutral particle desorption are presented at different operating conditions which lead to a better understanding of the underlying atomic diffusion processes that determine the lifetime of the emitter. Estimates of diffusion times consistent with measurements are presented, as well as estimates of maximum repetition rates achievable. Diverse studies performed on the composition and preparation of alkali aluminosilicate ion sources are also presented. In addition, this work includes preliminary work carried out exploring the viability of an argon plasma ion source and a bismuth metal vapor vacuum arc (MEVVA) ion source. For the former ion source, fast rise-times (∼ 1 (micro)s), high current densities (∼ 100 mA/cm 2 ) and low operating pressures ( e psilon) n (le) 0.006 π mm · mrad) although measured currents differed from the desired ones (I ∼ 5mA) by about a factor of 10

  20. Validation of a 4D-PET Maximum Intensity Projection for Delineation of an Internal Target Volume

    International Nuclear Information System (INIS)

    Callahan, Jason; Kron, Tomas; Schneider-Kolsky, Michal; Dunn, Leon; Thompson, Mick; Siva, Shankar; Aarons, Yolanda; Binns, David; Hicks, Rodney J.

    2013-01-01

    Purpose: The delineation of internal target volumes (ITVs) in radiation therapy of lung tumors is currently performed by use of either free-breathing (FB) 18 F-fluorodeoxyglucose-positron emission tomography-computed tomography (FDG-PET/CT) or 4-dimensional (4D)-CT maximum intensity projection (MIP). In this report we validate the use of 4D-PET-MIP for the delineation of target volumes in both a phantom and in patients. Methods and Materials: A phantom with 3 hollow spheres was prepared surrounded by air then water. The spheres and water background were filled with a mixture of 18 F and radiographic contrast medium. A 4D-PET/CT scan was performed of the phantom while moving in 4 different breathing patterns using a programmable motion device. Nine patients with an FDG-avid lung tumor who underwent FB and 4D-PET/CT and >5 mm of tumor motion were included for analysis. The 3 spheres and patient lesions were contoured by 2 contouring methods (40% of maximum and PET edge) on the FB-PET, FB-CT, 4D-PET, 4D-PET-MIP, and 4D-CT-MIP. The concordance between the different contoured volumes was calculated using a Dice coefficient (DC). The difference in lung tumor volumes between FB-PET and 4D-PET volumes was also measured. Results: The average DC in the phantom using 40% and PET edge, respectively, was lowest for FB-PET/CT (DCAir = 0.72/0.67, DCBackground = 0.63/0.62) and highest for 4D-PET/CT-MIP (DCAir = 0.84/0.83, DCBackground = 0.78/0.73). The average DC in the 9 patients using 40% and PET edge, respectively, was also lowest for FB-PET/CT (DC = 0.45/0.44) and highest for 4D-PET/CT-MIP (DC = 0.72/0.73). In the 9 lesions, the target volumes of the FB-PET using 40% and PET edge, respectively, were on average 40% and 45% smaller than the 4D-PET-MIP. Conclusion: A 4D-PET-MIP produces volumes with the highest concordance with 4D-CT-MIP across multiple breathing patterns and lesion sizes in both a phantom and among patients. Free-breathing PET/CT consistently underestimates the ITV.
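The concordance measure used above is the Dice coefficient, DC = 2|A ∩ B| / (|A| + |B|). A minimal sketch on made-up voxel sets, not the study's contours:

```python
def dice(a, b):
    """Dice coefficient between two volumes given as sets of voxel indices;
    1.0 means identical volumes, 0.0 means no overlap."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical contours: a 10x10 patch fully inside a 12x12 patch
fb_pet = {(x, y, 0) for x in range(10) for y in range(10)}   # 100 voxels
mip_4d = {(x, y, 0) for x in range(12) for y in range(12)}   # 144 voxels
print(round(dice(fb_pet, mip_4d), 2))  # 2*100 / (100 + 144) = 0.82
```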

  1. Validation of a 4D-PET Maximum Intensity Projection for Delineation of an Internal Target Volume

    Energy Technology Data Exchange (ETDEWEB)

    Callahan, Jason, E-mail: jason.callahan@petermac.org [Centre for Molecular Imaging, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Kron, Tomas [Department of Physical Sciences, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Peter MacCallum Department of Oncology, The University of Melbourne, Melbourne (Australia); Schneider-Kolsky, Michal [Department of Medical Imaging and Radiation Science, Monash University, Clayton, Victoria (Australia); Dunn, Leon [Department of Applied Physics, RMIT University, Melbourne (Australia); Thompson, Mick [Centre for Molecular Imaging, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Siva, Shankar [Department of Radiation Oncology, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Aarons, Yolanda [Department of Radiation Oncology, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Peter MacCallum Department of Oncology, The University of Melbourne, Melbourne (Australia); Binns, David [Centre for Molecular Imaging, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Hicks, Rodney J. [Centre for Molecular Imaging, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Peter MacCallum Department of Oncology, The University of Melbourne, Melbourne (Australia)

    2013-07-15

    Purpose: The delineation of internal target volumes (ITVs) in radiation therapy of lung tumors is currently performed by use of either free-breathing (FB) {sup 18}F-fluorodeoxyglucose-positron emission tomography-computed tomography (FDG-PET/CT) or 4-dimensional (4D)-CT maximum intensity projection (MIP). In this report we validate the use of 4D-PET-MIP for the delineation of target volumes in both a phantom and in patients. Methods and Materials: A phantom with 3 hollow spheres was prepared surrounded by air then water. The spheres and water background were filled with a mixture of {sup 18}F and radiographic contrast medium. A 4D-PET/CT scan was performed of the phantom while moving in 4 different breathing patterns using a programmable motion device. Nine patients with an FDG-avid lung tumor who underwent FB and 4D-PET/CT and >5 mm of tumor motion were included for analysis. The 3 spheres and patient lesions were contoured by 2 contouring methods (40% of maximum and PET edge) on the FB-PET, FB-CT, 4D-PET, 4D-PET-MIP, and 4D-CT-MIP. The concordance between the different contoured volumes was calculated using a Dice coefficient (DC). The difference in lung tumor volumes between FB-PET and 4D-PET volumes was also measured. Results: The average DC in the phantom using 40% and PET edge, respectively, was lowest for FB-PET/CT (DCAir = 0.72/0.67, DCBackground = 0.63/0.62) and highest for 4D-PET/CT-MIP (DCAir = 0.84/0.83, DCBackground = 0.78/0.73). The average DC in the 9 patients using 40% and PET edge, respectively, was also lowest for FB-PET/CT (DC = 0.45/0.44) and highest for 4D-PET/CT-MIP (DC = 0.72/0.73). In the 9 lesions, the target volumes of the FB-PET using 40% and PET edge, respectively, were on average 40% and 45% smaller than the 4D-PET-MIP. Conclusion: A 4D-PET-MIP produces volumes with the highest concordance with 4D-CT-MIP across multiple breathing patterns and lesion sizes in both a phantom and among patients. Free-breathing PET/CT consistently underestimates the ITV.

  2. Large-signal analysis of DC motor drive system using state-space averaging technique

    International Nuclear Information System (INIS)

    Bekir Yildiz, Ali

    2008-01-01

    The analysis of a separately excited DC motor driven by a DC-DC converter is realized by using the state-space averaging technique. Firstly, a general and unified large-signal averaged circuit model for DC-DC converters is given. The method converts power electronic systems, which are periodically time-variant because of their switching operation, into unified, time-independent systems. Using the averaged circuit model enables us to combine the different topologies of converters. Thus, all analysis and design processes concerning the DC motor can be easily realized by using the unified averaged model, which is valid over the whole period. Some large-signal variations such as the speed and current of the DC motor, the steady-state analysis, and the large-signal and small-signal transfer functions are easily obtained by using the averaged circuit model.
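The idea can be illustrated for a buck converter with a resistive load: the two switched topologies are blended by the duty cycle d into a single time-independent averaged model, which is then integrated like any ordinary state-space system. This is a generic textbook sketch, not the paper's model; the component values are illustrative and the resistor stands in for the motor load.

```python
def simulate_averaged_buck(d, Vin=24.0, L=1e-3, C=100e-6, R=10.0,
                           dt=1e-6, steps=200000):
    """Forward-Euler integration of the state-space averaged buck model:
    L di/dt = d*Vin - v,  C dv/dt = i - v/R."""
    iL, vC = 0.0, 0.0
    for _ in range(steps):
        diL = (d * Vin - vC) / L    # averaged inductor equation
        dvC = (iL - vC / R) / C     # averaged capacitor equation
        iL += diL * dt
        vC += dvC * dt
    return vC

print(simulate_averaged_buck(0.5))  # steady-state output near d*Vin = 12 V
```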

  3. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean, neglecting that rotations belong to a non-linear manifold. It is shown that these two common methods can be derived as approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
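The barycenter approach discussed above can be made concrete for quaternions: average the components and re-normalize, which is exactly the correction the article analyses. A minimal sketch; it assumes all quaternions lie in the same hemisphere, since q and -q encode the same rotation.

```python
import numpy as np

def average_quaternions(quats):
    """Component-wise barycenter of unit quaternions (w, x, y, z),
    projected back onto the unit sphere by re-normalization."""
    mean = np.asarray(quats, dtype=float).mean(axis=0)
    return mean / np.linalg.norm(mean)

# Rotations of +10 and -10 degrees about z; their average is the identity.
half = np.deg2rad(10.0) / 2.0
q_plus = [np.cos(half), 0.0, 0.0, np.sin(half)]
q_minus = [np.cos(half), 0.0, 0.0, -np.sin(half)]
print(average_quaternions([q_plus, q_minus]))  # [1. 0. 0. 0.]
```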

  4. Study on characteristics of the aperture-averaging factor of atmospheric scintillation in terrestrial optical wireless communication

    Science.gov (United States)

    Shen, Hong; Liu, Wen-xing; Zhou, Xue-yun; Zhou, Li-ling; Yu, Long-Kun

    2018-02-01

    In order to thoroughly understand the characteristics of the aperture-averaging effect of atmospheric scintillation in terrestrial optical wireless communication and to provide references for the engineering design and performance evaluation of optical systems employed in the atmosphere, we have theoretically derived a general analytic expression for the aperture-averaging factor of atmospheric scintillation and numerically investigated its characteristics under different propagation conditions. The limitations of the commonly used approximate formula for the aperture-averaging factor are discussed, and the results show that this formula is not applicable for small receiving apertures under a non-uniform turbulence link. Numerical calculation shows that the aperture-averaging factor of atmospheric scintillation follows an exponential decline model for small receiving apertures under a non-uniform turbulent link, and the general expression of the model is given. This model has certain guiding significance for evaluating the aperture-averaging effect in terrestrial optical wireless communication.

  5. How long do centenarians survive? Life expectancy and maximum lifespan.

    Science.gov (United States)

    Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A

    2017-08-01

    The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual-level data on all Swedish and Danish centenarians born from 1870 to 1901; in total, 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether the maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age of 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.
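The plateau has a simple quantitative consequence: if the annual death risk stays constant at q, the remaining lifetime is geometric and the expected number of further whole years survived is (1 - q)/q. A small check of that arithmetic (a simplifying constant-hazard model, not the study's estimator):

```python
def expected_remaining_years(q):
    """Mean of a geometric remaining-lifetime distribution with constant
    annual death probability q: sum over k >= 1 of (1-q)**k = (1-q)/q."""
    return (1.0 - q) / q

# At the reported ~50% plateau, about one further year is expected on average.
print(expected_remaining_years(0.5))  # 1.0
```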

  6. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
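The average-energy objective can be illustrated on a single looping play: when a memoryless strategy ends up repeating a cycle of energy weights summing to zero, the long-run average energy is just the mean of the accumulated-energy levels along one period. A toy sketch of that quantity, not of the decision procedure:

```python
def average_energy(cycle, start=0):
    """Long-run average of the accumulated energy when the play repeats
    `cycle` forever; well defined here because the cycle sums to zero."""
    assert sum(cycle) == 0, "a bounded energy level needs a zero-sum cycle"
    levels, energy = [], start
    for weight in cycle:
        energy += weight
        levels.append(energy)
    return sum(levels) / len(levels)

print(average_energy([2, -1, -1]))  # levels 2, 1, 0 -> average 1.0
```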

  7. Measurement of the average B hadron lifetime in Z0 decays using reconstructed vertices

    International Nuclear Information System (INIS)

    Abe, K.; Abt, I.; Ahn, C.J.; Akagi, T.; Allen, N.J.; Ash, W.W.; Aston, D.; Baird, K.G.; Baltay, C.; Band, H.R.; Barakat, M.B.; Baranko, G.; Bardon, O.; Barklow, T.; Bazarko, A.O.; Ben-David, R.; Benvenuti, A.C.; Bilei, G.M.; Bisello, D.; Blaylock, G.; Bogart, J.R.; Bolton, T.; Bower, G.R.; Brau, J.E.; Breidenbach, M.; Bugg, W.M.; Burke, D.; Burnett, T.H.; Burrows, P.N.; Busza, W.; Calcaterra, A.; Caldwell, D.O.; Calloway, D.; Camanzi, B.; Carpinelli, M.; Cassell, R.; Castaldi, R.; Castro, A.; Cavalli-Sforza, M.; Church, E.; Cohn, H.O.; Coller, J.A.; Cook, V.; Cotton, R.; Cowan, R.F.; Coyne, D.G.; D'Oliveira, A.; Damerell, C.J.S.; Daoudi, M.; De Sangro, R.; De Simone, P.; Dell'Orso, R.; Dima, M.; Du, P.Y.C.; Dubois, R.; Eisenstein, B.I.; Elia, R.; Falciai, D.; Fan, C.; Fero, M.J.; Frey, R.; Furuno, K.; Gillman, T.; Gladding, G.; Gonzalez, S.; Hallewell, G.D.; Hart, E.L.; Hasegawa, Y.; Hedges, S.; Hertzbach, S.S.; Hildreth, M.D.; Huber, J.; Huffer, M.E.; Hughes, E.W.; Hwang, H.; Iwasaki, Y.; Jackson, D.J.; Jacques, P.; Jaros, J.; Johnson, A.S.; Johnson, J.R.; Johnson, R.A.; Junk, T.; Kajikawa, R.; Kalelkar, M.; Kang, H.J.; Karliner, I.; Kawahara, H.; Kendall, H.W.; Kim, Y.; King, M.E.; King, R.; Kofler, R.R.; Krishna, N.M.; Kroeger, R.S.; Labs, J.F.; Langston, M.; Lath, A.; Lauber, J.A.; Leith, D.W.G.S.; Liu, M.X.; Liu, X.; Loreti, M.; Lu, A.; Lynch, H.L.; Ma, J.; Mancinelli, G.; Manly, S.; Mantovani, G.; Markiewicz, T.W.; Maruyama, T.; Massetti, R.; Masuda, H.; Mazzucato, E.; McKemey, A.K.; Meadows, B.T.; Messner, R.; Mockett, P.M.; Moffeit, K.C.; Mours, B.; Mueller, G.; Muller, D.; Nagamine, T.; Nauenberg, U.; Neal, H.; Nussbaum, M.; Ohnishi, Y.; Osborne, L.S.; Panvini, R.S.; Park, H.; Pavel, T.J.; Peruzzi, I.; Piccolo, M.; Piemontese, L.; Pieroni, E.; Pitts, K.T.; Plano, R.J.; Prepost, R.; Prescott, C.Y.; Punkar, G.D.; Quigley, J.; Ratcliff, B.N.; Reeves, T.W.; Reidy, J.; Rensing, P.E.; Rochester, L.S.; Rothberg, J.E.; Rowson, P.C.; Russell, J.J.

    1995-01-01

    We report a measurement of the average B hadron lifetime using data collected with the SLD detector at the SLAC Linear Collider in 1993. An inclusive analysis selected three-dimensional vertices with B hadron lifetime information in a sample of 50×10³ Z⁰ decays. A lifetime of 1.564±0.030(stat)±0.036(syst) ps was extracted from the decay length distribution of these vertices using a binned maximum likelihood method. © 1995 The American Physical Society
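    A rough sketch of the binned maximum likelihood technique mentioned above (synthetic exponential data, not the SLD decay-length distribution, and a crude grid scan in place of a real minimiser):

```python
# Illustrative sketch: extracting a lifetime-like parameter from a binned
# exponential distribution with a binned maximum likelihood fit.
import math, random

random.seed(1)
TRUE_TAU = 1.5
data = [random.expovariate(1 / TRUE_TAU) for _ in range(20000)]

# Bin the data: 100 bins of width 0.1 on [0, 10].
edges = [0.1 * i for i in range(101)]
counts = [0] * 100
for x in data:
    i = int(x / 0.1)
    if i < 100:
        counts[i] += 1

def neg_log_likelihood(tau):
    """Binned likelihood for an exponential with mean tau."""
    nll = 0.0
    for i, n in enumerate(counts):
        if n == 0:
            continue
        # Probability content of bin i under the exponential model.
        p = math.exp(-edges[i] / tau) - math.exp(-edges[i + 1] / tau)
        nll -= n * math.log(p)
    return nll

# Crude one-dimensional scan over candidate lifetimes.
taus = [1.0 + 0.001 * k for k in range(1000)]
best = min(taus, key=neg_log_likelihood)
print(f"fitted tau = {best:.3f}")   # statistically consistent with 1.5
```

    A real analysis would also model resolution and background, but the core idea is the same: maximise the product of per-bin probabilities.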

  8. Maximum Mass of Hybrid Stars in the Quark Bag Model

    Science.gov (United States)

    Alaverdyan, G. B.; Vartanyan, Yu. L.

    2017-12-01

    The effect of model parameters in the equation of state for quark matter on the magnitude of the maximum mass of hybrid stars is examined. Quark matter is described in terms of the extended MIT bag model including corrections for one-gluon exchange. For nucleon matter in the range of densities corresponding to the phase transition, a relativistic equation of state is used that is calculated with two-particle correlations taken into account based on using the Bonn meson-exchange potential. The Maxwell construction is used to calculate the characteristics of the first order phase transition and it is shown that for a fixed value of the strong interaction constant αs, the baryon concentrations of the coexisting phases grow monotonically as the bag constant B increases. It is shown that for a fixed value of the strong interaction constant αs, the maximum mass of a hybrid star increases as the bag constant B decreases. For a given value of the bag parameter B, the maximum mass rises as the strong interaction constant αs increases. It is shown that the configurations of hybrid stars with maximum masses equal to or exceeding the mass of the currently known most massive pulsar are possible for values of the strong interaction constant αs > 0.6 and sufficiently low values of the bag constant.

  9. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  10. Maximum concentrations at work and maximum biologically tolerable concentration for working materials 1991

    International Nuclear Information System (INIS)

    1991-01-01

    The meaning of the term 'maximum concentration at work' with regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The valuation criteria for maximum biologically tolerable concentrations for working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or genome mutations. (VT) [de

  11. Operator product expansion and its thermal average

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)

    1998-05-01

    QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators, which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case at finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules, when the temperature is not too low. (orig.) 7 refs.

  12. Increasing the maximum daily operation time of MNSR reactor by modifying its cooling system

    International Nuclear Information System (INIS)

    Khamis, I.; Hainoun, A.; Al Halbi, W.; Al Isa, S.

    2006-08-01

    Thermal-hydraulic natural convection correlations have been formulated based on a thorough analysis and modeling of the MNSR reactor. The model considers a detailed description of the thermal and hydraulic aspects of cooling in the core and vessel. In addition, determination of the pressure drop was made through an elaborate balancing of the overall pressure drop in the core against the sum of all individual channel pressure drops, employing an iterative scheme. Using this model, an accurate estimation of various time-dependent core-averaged hydraulic parameters, such as generated power, hydraulic diameters, flow cross area, ... etc., can be made for each one of the ten fuel circles in the core. Furthermore, the distribution of coolant and fuel temperatures, including the maximum fuel temperature and its location in the core, can now be determined. Correlations among core-coolant average temperature, reactor power, and core-coolant inlet temperature, during both steady and transient cases, have been established and verified against experimental data. Simulating various operating conditions of MNSR, good agreement is obtained at different power levels. Various schemes of cooling have been investigated for the purpose of assessing potential benefits for the operational characteristics of the Syrian MNSR reactor. A detailed thermal-hydraulic model for the analysis of MNSR has been developed. The analysis shows that an auxiliary cooling system, for the reactor vessel or installed in the pool which surrounds the lower section of the reactor vessel, will significantly offset the consumption of excess reactivity due to the negative reactivity temperature coefficient. Hence, the maximum operating time of the reactor is extended. The model considers a detailed description of the thermal and hydraulic aspects of cooling the core and its surrounding vessel. Natural convection correlations have been formulated based on a thorough analysis and modeling of the MNSR reactor. The suggested 'micro model

  13. 40 CFR 1042.140 - Maximum engine power, displacement, power density, and maximum in-use engine speed.

    Science.gov (United States)

    2010-07-01

    ... cylinders having an internal diameter of 13.0 cm and a 15.5 cm stroke length, the rounded displacement would... 40 Protection of Environment 32 2010-07-01 false Maximum engine power, displacement... Maximum engine power, displacement, power density, and maximum in-use engine speed. This section describes...

  14. NOTES AND CORRESPONDENCE Evaluation of Tidal Removal Method Using Phase Average Technique from ADCP Surveys along the Peng-Hu Channel in the Taiwan Strait

    Directory of Open Access Journals (Sweden)

    Yu-Chia Chang

    2008-01-01

    Full Text Available Three cruises with shipboard Acoustic Doppler Current Profiler (ADCP) were performed along a transect across the Peng-hu Channel (PHC) in the Taiwan Strait during 2003 - 2004 in order to investigate the feasibility and accuracy of the phase-averaging method to eliminate tidal components from shipboard ADCP measurements of currents. In each cruise, measurement was repeated a number of times along the transect with a specified time lag of either 5, 6.21, or 8 hr, and the repeated data at the same location were averaged to eliminate the tidal currents; this is the so-called "phase-averaging method". We employed 5-phase-averaging, 4-phase-averaging, 3-phase-averaging, and 2-phase-averaging methods in this study. The residual currents and volume transport of the PHC derived from various phase-averaging methods were intercompared and were also compared with results of the least-square harmonic reduction method proposed by Simpson et al. (1990) and the least-square interpolation method using a Gaussian function (Wang et al. 2004). The estimated uncertainty of the residual flow through the PHC derived from the 5-phase-averaging, 4-phase-averaging, 3-phase-averaging, and 2-phase-averaging methods is 0.3, 0.3, 1.3, and 4.6 cm s⁻¹, respectively. Procedures for choosing the best phase-averaging method to remove tidal currents in any particular region are also suggested.
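    The phase-averaging idea can be sketched with synthetic data (not the PHC surveys): repeating a transect with a lag of half the M2 period and averaging the two passes cancels the semidiurnal tidal current exactly.

```python
# Minimal sketch of 2-phase averaging on a synthetic along-channel current.
import math

M2_PERIOD = 12.42            # hours
RESIDUAL = 5.0               # cm/s, the "true" mean flow we want to recover
TIDE_AMP = 30.0              # cm/s, tidal amplitude

def measured(t):
    """Observed along-channel current: residual + M2 tide."""
    return RESIDUAL + TIDE_AMP * math.sin(2 * math.pi * t / M2_PERIOD)

# Two passes separated by 6.21 h (half the M2 period) are in opposite tidal
# phase, so their average removes the M2 signal.
t0 = 2.7                              # arbitrary start time, hours
two_phase_avg = (measured(t0) + measured(t0 + 6.21)) / 2
print(round(two_phase_avg, 6))        # → 5.0
```

    With real data the tide is a sum of constituents and the lag cancels only the targeted one, which is why the paper compares several phase-averaging schemes against harmonic reduction.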

  15. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  16. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  17. Evaluation of performance and maximum length of continuous decks in bridges : part 1.

    Science.gov (United States)

    2011-06-01

    The purpose of this research was to evaluate the performance history of continuous bridge decks in the State of Georgia, to determine why the current design detail works, to recommend a new design detail, and to recommend the maximum and/or optimum l...

  18. Coastal tomographic mapping of nonlinear tidal currents and residual currents

    Science.gov (United States)

    Zhu, Ze-Nan; Zhu, Xiao-Hua; Guo, Xinyu

    2017-07-01

    Depth-averaged current data, which were obtained by coastal acoustic tomography (CAT) July 12-13, 2009 in Zhitouyang Bay on the western side of the East China Sea, are used to estimate the semidiurnal tidal current (M2) as well as its first two overtide currents (M4 and M6). Spatial mean amplitude ratios M2:M4:M6 in the bay are 1.00:0.15:0.11. The shallow-water equations are used to analyze the generation mechanisms of M4 and M6. In the deep area, where water depths are larger than 60 m, M4 velocity amplitudes measured by CAT agree well with those predicted by the advection terms in the shallow water equations, indicating that M4 in the deep area is predominantly generated by the advection terms. M6 measured by CAT and M6 predicted by the nonlinear quadratic bottom friction terms agree well in the area where water depths are less than 20 m, indicating that friction mechanisms are predominant for generating M6 in the shallow area. In addition, dynamic analysis of the residual currents using the tidally averaged momentum equation shows that spatial mean values of the horizontal pressure gradient due to residual sea level and of the advection of residual currents together contribute about 75% of the spatial mean values of the advection by the tidal currents, indicating that residual currents in this bay are induced mainly by the nonlinear effects of tidal currents. This is the first ever nonlinear tidal current study by CAT.
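    The decomposition behind the M2:M4:M6 amplitude ratios can be sketched with a quadrature (least-squares) harmonic fit on synthetic data; the record length below is chosen as a whole number of M2 periods so the constituents are orthogonal (illustrative values, not the CAT measurements):

```python
# Hedged sketch: harmonic analysis of a current record into M2 and its
# overtides M4, M6. Frequencies are in cycles per hour.
import math

FREQS = [1 / 12.42, 2 / 12.42, 3 / 12.42]   # M2, M4, M6
true_amps = [1.00, 0.15, 0.11]              # the paper's spatial-mean ratios

# 10 M2 periods, 50 samples per period -> exact discrete orthogonality.
dt = 12.42 / 50
times = [dt * k for k in range(500)]
series = [sum(a * math.cos(2 * math.pi * f * t) for a, f in zip(true_amps, FREQS))
          for t in times]

def harmonic_amplitude(ts, ys, f):
    """Amplitude of frequency f from the cos/sin quadrature sums."""
    n = len(ts)
    c = sum(y * math.cos(2 * math.pi * f * t) for t, y in zip(ts, ys)) * 2 / n
    s = sum(y * math.sin(2 * math.pi * f * t) for t, y in zip(ts, ys)) * 2 / n
    return math.hypot(c, s)

amps = [harmonic_amplitude(times, series, f) for f in FREQS]
print([round(a, 3) for a in amps])   # → [1.0, 0.15, 0.11]
```

    Real records are noisy and of awkward length, so in practice the amplitudes are obtained by least squares rather than exact quadrature, but the recovered ratios play the same role.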

  19. The concept of average LET values determination

    International Nuclear Information System (INIS)

    Makarewicz, M.

    1981-01-01

    The concept of average LET (linear energy transfer) values determination, i.e. ordinary moments of LET in absorbed dose distribution vs. LET of ionizing radiation of any kind and any spectrum (even the unknown ones) has been presented. The method is based on measurement of ionization current with several values of voltage supplying an ionization chamber operating in conditions of columnar recombination of ions or ion recombination in clusters while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values one can obtain coefficients of the expression which can be interpreted as values of LET moments. One of the advantages of the method is its experimental and computational simplicity. It has been shown that for numerical estimation of certain effects dependent on LET of radiation it is not necessary to know the dose distribution but only a number of parameters of the distribution, i.e. the LET moments. (author)

  20. Weighted Maximum-Clique Transversal Sets of Graphs

    OpenAIRE

    Chuan-Min Lee

    2011-01-01

    A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...

  1. The heterogeneous response method applied to couple the average pin cell and bulk moderator in cluster geometry

    International Nuclear Information System (INIS)

    Lerner, A.M.

    1986-01-01

    The first step towards evaluation of the neutron flux throughout a fuel cluster usually consists of obtaining the multigroup flux distribution in the average pin cell and in the circular outside system of shroud and bulk moderator. Here, an application of the so-called heterogeneous response method (HRM) is described to find this multigroup flux. The rather complex geometry is reduced to a microsystem, the average pin cell, and the outside or macrosystem of shroud and bulk moderator. In each of these systems, collision probabilities are used to obtain their response fluxes caused by sources and in-currents. The two systems are then coupled by cosine currents across that fraction of the average pin-cell boundary, called 'window', that represents the average common boundary between pin cells and the outside system. (author)

  2. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
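    The identity stated above is easy to verify numerically; a minimal sketch with made-up weights (not demographic data):

```python
# The difference between two weighted averages of v equals the covariance of
# v with the ratio of the weighting functions, divided by the average of the
# ratio -- with covariance and mean taken under the second weighting.

def wmean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

v  = [1.0, 2.0, 5.0, 7.0]      # variable (e.g., a rate by age)
w1 = [0.4, 0.3, 0.2, 0.1]      # one weighting (e.g., one age structure)
w2 = [0.1, 0.2, 0.3, 0.4]      # an alternative weighting

r = [a / b for a, b in zip(w1, w2)]
cov = wmean([x * y for x, y in zip(v, r)], w2) - wmean(v, w2) * wmean(r, w2)

lhs = wmean(v, w1) - wmean(v, w2)
rhs = cov / wmean(r, w2)
print(abs(lhs - rhs) < 1e-12)   # → True
```

    The identity holds for any positive weightings, which is what makes it useful for decomposing compositional change.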

  3. Distância interincisiva máxima em crianças respiradoras bucais Maximum interincisal distance in mouth breathing children

    Directory of Open Access Journals (Sweden)

    Débora Martins Cattoni

    2009-12-01

    Full Text Available INTRODUCTION: The maximum interincisal distance is an important aspect of the orofacial myofunctional evaluation, because orofacial myofunctional disorders can limit mouth opening. AIM: To measure the maximum interincisal distance of mouth breathing children, relating it to age, and to compare the averages of these measurements with those of children with no history of speech-language pathology disorders. METHODS: The participants were 99 mouth breathing children of both genders, aged between 7 years and 11 years 11 months, Caucasian, in mixed dentition. The control group comprised 253 children, aged between 7 years and 11 years 11 months, Caucasian, in mixed dentition, with no speech-language complaints. RESULTS: The findings show that the mean maximum interincisal distance of the mouth breathing children was 43.55 mm for the whole sample, with no statistically significant difference among the means according to age. There was also no statistically significant difference between the mean maximum interincisal distance of the mouth breathers and that of the control group. CONCLUSIONS: The maximum interincisal distance did not vary with age in mouth breathers during mixed dentition, and does not appear to be altered in children with this type of dysfunction. The findings also point to the importance of using calipers in the objective assessment of the maximum interincisal distance.

  4. ANALYSIS OF THE STATISTICAL BEHAVIOUR OF DAILY MAXIMUM AND MONTHLY AVERAGE RAINFALL ALONG WITH RAINY DAYS VARIATION IN SYLHET, BANGLADESH

    Directory of Open Access Journals (Sweden)

    G. M. J. HASAN

    2014-10-01

    Full Text Available Climate, one of the major controlling factors for the well-being of the inhabitants of the world, has been changing in accordance with natural forcing and man-made activities. Bangladesh, one of the most densely populated countries in the world, is under threat due to climate change caused by excessive use or abuse of ecology and natural resources. This study examines the rainfall patterns and their associated changes in the north-eastern part of Bangladesh, mainly Sylhet city, through statistical analysis of daily rainfall data during the period 1957 - 2006. It has been observed that a good correlation exists between the monthly mean and daily maximum rainfall. A linear regression analysis of the data is found to be significant for all the months. Some key statistical parameters like the mean values of the Coefficient of Variability (CV), Relative Variability (RV) and Percentage Inter-annual Variability (PIV) have been studied and found to be at variance. Monthly, yearly and seasonal variations of rainy days were also analysed to check for any significant changes.
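    The variability statistics named above can be sketched as follows (synthetic rainfall totals, not the Sylhet record; the RV definition as mean absolute deviation over the mean is a common convention, assumed here rather than taken from the paper):

```python
# Coefficient of Variability (CV) and Relative Variability (RV) for a set of
# monthly rainfall totals across years, both expressed in percent.
import statistics

rain = [110.0, 95.0, 130.0, 80.0, 145.0, 120.0]   # one month across 6 years, mm

mean = statistics.mean(rain)
cv = statistics.pstdev(rain) / mean * 100                  # std dev / mean
rv = (sum(abs(x - mean) for x in rain) / len(rain)) / mean * 100   # MAD / mean
print(f"CV = {cv:.1f}%  RV = {rv:.1f}%")   # → CV = 19.0%  RV = 16.2%
```

    CV weights large deviations more heavily than RV, so the two diverge for months with occasional extreme rainfall.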

  5. An average salary: approaches to the index determination

    Directory of Open Access Journals (Sweden)

    T. M. Pozdnyakova

    2017-01-01

    Full Text Available The article “An average salary: approaches to the index determination” is devoted to studying various methods of calculating this index, both those used by official state statistics of the Russian Federation and those offered by modern researchers. The purpose of this research is to analyze the existing approaches to calculating the average salary of employees of enterprises and organizations, as well as to make certain additions that would help to clarify this index. The information base of the research comprises laws and regulations of the Russian Federation Government, statistical and analytical materials of the Federal State Statistics Service of Russia for the section «Socio-economic indexes: living standards of the population», as well as materials of scientific papers describing different approaches to the average salary calculation. The data on the average salary of employees of educational institutions of the Khabarovsk region served as the experimental base of the research. In the process of conducting the research, the following methods were used: analytical, statistical, mathematical and graphical. The main result of the research is an option for supplementing the method of calculating the average salary index within enterprises or organizations, used by Goskomstat of Russia, by means of introducing a correction factor. Its essence consists in the specific formation of material indexes for different categories of employees in enterprises or organizations, mainly engaged in internal secondary jobs. The need for introducing this correction factor comes from the current reality of working conditions in a wide range of organizations, where an employee is forced, in addition to the main position, to fulfill additional job duties. As a result, it is often difficult to assess the average salary at an enterprise objectively, because the calculation involves multiple rates per staff member. In other words, the average salary of

  6. Square pulse current wave’s effect on electroplated nickel hardness

    Directory of Open Access Journals (Sweden)

    Bibian Alonso Hoyos

    2006-09-01

    Full Text Available The effects of frequency, average current density and duty cycle on the hardness of electroplated nickel were studied in Watts and sulphamate solutions by means of direct and square pulse current. The results in Watts’ solutions revealed greater hardness at low duty cycle, high average current density and high square pulse current frequency. There was little variation in hardness in nickel sulphamate solutions to changes in duty cycle and wave frequency. Hardness values obtained in the Watts’ bath with square pulse current were higher than those achieved with direct current at the same average current density; such difference was not significant in sulphamate bath treatment.
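    The pulse-plating quantities varied in the study are related by simple arithmetic: for a rectangular pulse train, the average current density is the peak density times the duty cycle. A minimal sketch:

```python
# Average current density for square-pulse electroplating.

def average_current_density(j_peak, t_on, t_off):
    """j_avg for a rectangular pulse train (same units as j_peak)."""
    duty_cycle = t_on / (t_on + t_off)
    return j_peak * duty_cycle

# e.g. a 10 A/dm2 peak at 20% duty cycle delivers the same average density
# as 2 A/dm2 direct current:
print(average_current_density(10.0, 2.0, 8.0))   # → 2.0
```

    This is why comparisons such as the one above are made "at the same average current density": the pulsed and direct-current deposits then transfer the same total charge per unit time.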

  7. Average age at death in infancy and infant mortality level: Reconsidering the Coale-Demeny formulas at current levels of low mortality

    Directory of Open Access Journals (Sweden)

    Evgeny M. Andreev

    2015-08-01

    Full Text Available Background: The long-term historical decline in infant mortality has been accompanied by increasing concentration of infant deaths at the earliest stages of infancy. In the mid-1960s Coale and Demeny developed formulas describing the dependency of the average age of death in infancy on the level of infant mortality, based on data obtained up to that time. Objective: In the more developed countries a steady rise in average age of infant death began in the mid-1960s. This paper documents this phenomenon and offers alternative formulas for calculation of the average age of death, taking into account the new mortality trends. Methods: Standard statistical methodologies and a specially developed method are applied to the linked individual birth and infant death datasets available from the US National Center for Health Statistics and the initial (raw numbers of deaths from the Human Mortality Database. Results: It is demonstrated that the trend of decline in the average age of infant death becomes interrupted when the infant mortality rate attains a level around 10 per 1000, and modifications of the Coale-Demeny formulas for practical application to contemporary low levels of mortality are offered. Conclusions: The average age of death in infancy is an important characteristic of infant mortality, although it does not influence the magnitude of life expectancy. That the increase in average age of death in infancy is connected with medical advances is proposed as a possible explanation.

  8. Global Harmonization of Maximum Residue Limits for Pesticides.

    Science.gov (United States)

    Ambrus, Árpád; Yang, Yong Zhen

    2016-01-13

    International trade plays an important role in national economics. The Codex Alimentarius Commission develops harmonized international food standards, guidelines, and codes of practice to protect the health of consumers and to ensure fair practices in the food trade. The Codex maximum residue limits (MRLs) elaborated by the Codex Committee on Pesticide Residues are based on the recommendations of the Joint FAO/WHO Meeting on Pesticide Residues (JMPR). The basic principles applied currently by the JMPR for the evaluation of experimental data and related information are described together with some of the areas in which further developments are needed.

  9. Dopant density from maximum-minimum capacitance ratio of implanted MOS structures

    International Nuclear Information System (INIS)

    Brews, J.R.

    1982-01-01

    For uniformly doped structures, the ratio of the maximum to the minimum high-frequency capacitance determines the dopant ion density per unit volume. Here it is shown that for implanted structures this 'max-min' dopant density estimate depends upon the dose and depth of the implant through the first moment of the depleted portion of the implant. As a result, the 'max-min' estimate of dopant ion density reflects neither the surface dopant density nor the average of the dopant density over the depletion layer. In particular, it is not clear how this dopant ion density estimate is related to the flatband capacitance. (author)
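    For the uniformly doped case the abstract refers to, the classic 'max-min' estimate can be sketched as follows (textbook formulas for silicon at room temperature; the oxide thickness and doping below are toy values, and this is exactly the recipe the paper shows breaks down for implants):

```python
# Hedged sketch of the "max-min" doping estimate for a uniform MOS capacitor.
# Uses Cmax = Cox and 1/Cmin = 1/Cox + w_max/eps_s, with w_max the maximum
# depletion width. All quantities in SI units.
import math

Q = 1.602e-19              # elementary charge, C
KT = 0.0259 * Q            # thermal energy at ~300 K, J
EPS_S = 11.7 * 8.854e-12   # silicon permittivity, F/m
NI = 1.0e16                # intrinsic carrier density, 1/m^3

def w_max(n):
    """Maximum depletion width for uniform doping n (1/m^3)."""
    return math.sqrt(4 * EPS_S * KT * math.log(n / NI) / (Q * Q * n))

def dopant_from_ratio(ratio, c_ox):
    """Invert Cmax/Cmin -> doping by fixed-point iteration on w_max(n)."""
    w = (ratio - 1) * EPS_S / c_ox     # depletion width implied by the ratio
    n = 1e21                           # starting guess, 1/m^3
    for _ in range(50):
        n = 4 * EPS_S * KT * math.log(n / NI) / (Q * Q * w * w)
    return n

# Round-trip check: build the ratio for a known doping, then recover it.
n_true = 1e22                                  # 1/m^3 (= 1e16 cm^-3)
c_ox = 3.9 * 8.854e-12 / 10e-9                 # 10 nm oxide, F/m^2 (toy value)
ratio = 1 + c_ox * w_max(n_true) / EPS_S
print(f"{dopant_from_ratio(ratio, c_ox):.3e}")  # ≈ 1e22
```

    The paper's point is that for an implanted (non-uniform) profile, the number this procedure returns is controlled by the first moment of the depleted part of the implant, not by any single physical density.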

  10. Studies in High Current Density Ion Sources for Heavy Ion Fusion Applications

    Energy Technology Data Exchange (ETDEWEB)

    Chacon-Golcher, Edwin [Univ. of California, Berkeley, CA (United States)

    2002-06-01

    This dissertation develops diverse research on small (diameter ~ few mm), high current density (J ~ several tens of mA/cm²) heavy ion sources. The research has been developed in the context of a programmatic interest within the Heavy Ion Fusion (HIF) Program to explore alternative architectures in the beam injection systems that use the merging of small, bright beams. An ion gun was designed and built for these experiments. Results of average current density yield at different operating conditions are presented for K+ and Cs+ contact ionization sources and potassium aluminum silicate sources. Maximum values for a K+ beam of ~90 mA/cm² were observed in 2.3 μs pulses. Measurements of beam intensity profiles and emittances are included. Measurements of neutral particle desorption are presented at different operating conditions, which lead to a better understanding of the underlying atomic diffusion processes that determine the lifetime of the emitter. Estimates of diffusion times consistent with measurements are presented, as well as estimates of maximum repetition rates achievable. Diverse studies performed on the composition and preparation of alkali aluminosilicate ion sources are also presented. In addition, this work includes preliminary work carried out exploring the viability of an argon plasma ion source and a bismuth metal vapor vacuum arc (MEVVA) ion source. For the former ion source, fast rise-times (~ 1 μs), high current densities (~ 100 mA/cm²) and low operating pressures (< 2 mtorr) were verified. For the latter, high but acceptable levels of beam emittance were measured (εn ≤ 0.006 π·mm·mrad), although measured currents differed from the desired ones (I ~ 5 mA) by about a factor of 10.

  11. The study of Zn–Co alloy coatings electrochemically deposited by pulse current

    Directory of Open Access Journals (Sweden)

    Tomić Milorad V.

    2012-01-01

    Full Text Available The electrochemical deposition by pulse current of Zn-Co alloy coatings on steel was examined, with the aim to find out whether pulse plating could produce alloys that could offer better corrosion protection. The influence of the on-time and the average current density on the cathodic current efficiency, coating morphology, surface roughness and corrosion stability in 3% NaCl was examined. At the same Ton/Toff ratio the current efficiency was insignificantly smaller for deposition at the higher average current density. It was shown that, depending on the on-time, pulse plating could produce more homogenous alloy coatings with finer morphology, as compared to deposits obtained by direct current. The surface roughness was the greatest for Zn-Co alloy coatings deposited with direct current, as compared with alloy coatings deposited with pulse current, for both examined average current densities. It was also shown that pulse deposition could increase the corrosion stability of Zn-Co alloy coatings on steel. Namely, alloy coatings deposited with pulse current showed higher corrosion stability, as compared with alloy coatings deposited with direct current, for almost all examined cathodic times, Ton. Alloy coatings deposited at the higher average current density showed greater corrosion stability as compared with coatings deposited by pulse current at the smaller average current density. It was shown that deposits obtained with pulse current and a cathodic time of 10 ms had the poorest corrosion stability, for both investigated average deposition current densities. Among all investigated alloy coatings the highest corrosion stability was obtained for Zn-Co alloy coatings deposited with pulsed current at the higher average current density (jav = 4 A dm⁻²).

  12. Maximum Recommended Dosage of Lithium for Pregnant Women Based on a PBPK Model for Lithium Absorption

    Directory of Open Access Journals (Sweden)

    Scott Horton

    2012-01-01

    Full Text Available Treatment of bipolar disorder with lithium therapy during pregnancy is a medical challenge. Bipolar disorder is more prevalent in women and its onset is often concurrent with peak reproductive age. Treatment typically involves administration of the element lithium, which has been classified as a class D drug (legal to use during pregnancy, but may cause birth defects) and is one of only thirty known teratogenic drugs. There is no clear recommendation in the literature on the maximum acceptable dosage regimen for pregnant, bipolar women. We recommend a maximum dosage regimen based on a physiologically based pharmacokinetic (PBPK) model. The model simulates the concentration of lithium in the organs and tissues of a pregnant woman and her fetus. First, we modeled time-dependent lithium concentration profiles resulting from lithium therapy known to have caused birth defects. Next, we identified maximum and average fetal lithium concentrations during treatment. Then, we developed a lithium therapy regimen to maximize the concentration of lithium in the mother’s brain, while maintaining the fetal concentration low enough to reduce the risk of birth defects. This maximum dosage regimen suggested by the model was 400 mg lithium three times per day.
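    The accumulation behaviour that any repeated-dose model must capture can be sketched with a one-compartment toy (NOT the paper's physiologically based model; every parameter below is hypothetical and chosen only to illustrate the approach to steady state):

```python
# Very simplified repeated-bolus kinetics: superposition of exponentially
# decaying doses given every TAU hours.
import math

DOSE = 400.0     # mg per dose, three times daily as in the regimen above
TAU = 8.0        # h between doses
V_D = 45.0       # L, hypothetical volume of distribution
T_HALF = 20.0    # h, hypothetical elimination half-life
k = math.log(2) / T_HALF

def concentration(t_hours):
    """Concentration (mg/L) at time t from all doses given at 0, TAU, 2*TAU, ..."""
    c = 0.0
    n_doses = int(t_hours // TAU) + 1
    for i in range(n_doses):
        dt = t_hours - i * TAU
        c += (DOSE / V_D) * math.exp(-k * dt)
    return c

# Accumulation toward steady state over a week of dosing:
for day in (1, 3, 7):
    print(day, round(concentration(day * 24.0), 2))
```

    A PBPK model replaces the single compartment with physiological organ compartments (including the fetus), but the same superposition logic governs how a dosage regimen maps to maximum and average concentrations.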

  13. Evaluating the maximum patient radiation dose in cardiac interventional procedures

    International Nuclear Information System (INIS)

    Kato, M.; Chida, K.; Sato, T.; Oosaka, H.; Tosa, T.; Kadowaki, K.

    2011-01-01

    Many of the X-ray systems that are used for cardiac interventional radiology provide no way to evaluate the patient maximum skin dose (MSD). The authors report a new method for evaluating the MSD by using the cumulative patient entrance skin dose (ESD), which includes a back-scatter factor and the number of cine-angiography frames during percutaneous coronary intervention (PCI). Four hundred consecutive PCI patients (315 men and 85 women) were studied. The correlation between the cumulative ESD and number of cine-angiography frames was investigated. The irradiation and overlapping fields were verified using dose-mapping software. A good correlation was found between the cumulative ESD and the number of cine-angiography frames. The MSD could be estimated using the proportion of cine-angiography frames used for the main angle of view relative to the total number of cine-angiography frames and multiplying this by the cumulative ESD. The average MSD (3.0±1.9 Gy) was lower than the average cumulative ESD (4.6±2.6 Gy). This method is an easy way to estimate the MSD during PCI. (authors)
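The estimation rule described above — the MSD is the cumulative ESD scaled by the fraction of cine-angiography frames acquired at the main angle of view — can be sketched as follows. The numbers are illustrative, not patient data from the paper:

```python
def estimate_msd(cumulative_esd_gy, frames_main_view, frames_total):
    """Estimate the maximum skin dose (Gy) as the cumulative entrance skin
    dose multiplied by the fraction of cine frames at the main view."""
    return cumulative_esd_gy * frames_main_view / frames_total

# Hypothetical case: 4.6 Gy cumulative ESD, 650 of 1000 cine frames at the
# main angle of view -> roughly 3 Gy maximum skin dose.
print(round(estimate_msd(4.6, 650, 1000), 2))
```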

  14. Electrochemical Properties of Current Collector in the All-vanadium Redox Flow Battery

    International Nuclear Information System (INIS)

    Hwang, Gan-Jin; Oh, Yong-Hwan; Ryu, Cheol-Hwi; Choi, Ho-Sang

    2014-01-01

    Two commercial carbon plates were evaluated as current collectors (bipolar plates) in the all-vanadium redox flow battery (V-RFB). The performance of the V-RFB was tested at a current density of 60 mA/cm². The electromotive forces (OCV at SOC 100%) of the V-RFB using current collectors A and B were 1.47 V and 1.54 V, respectively. The cell resistance of the V-RFB using current collector A was 4.44-5.00 Ω·cm² and 3.28-3.75 Ω·cm² for charge and discharge, respectively; using current collector B it was 4.19-4.42 Ω·cm² and 4.71-5.49 Ω·cm². The performance of the V-RFB using current collector A was 93.1%, 76.8% and 71.4% for average current efficiency, average voltage efficiency and average energy efficiency, respectively; using current collector B it was 96.4%, 73.6% and 71.0%.
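The three efficiencies reported above are related: energy efficiency is the product of current (coulombic) and voltage efficiency. A minimal sketch with hypothetical charge/voltage inputs, checking consistency with collector A's figures:

```python
def rfb_efficiencies(q_charge, q_discharge, v_charge_avg, v_discharge_avg):
    """Current (coulombic), voltage, and energy efficiency of one
    charge/discharge cycle of a redox flow battery."""
    ce = q_discharge / q_charge          # current efficiency
    ve = v_discharge_avg / v_charge_avg  # voltage efficiency
    return ce, ve, ce * ve               # energy efficiency = CE * VE

# Hypothetical cycle chosen to reproduce collector A's CE and VE:
ce, ve, ee = rfb_efficiencies(1.0, 0.931, 1.500, 1.152)
print(ce, ve, ee)  # CE = 0.931, VE = 0.768, EE ~ 0.715 (record reports 71.4%)
```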

  15. A Low Input Current and Wide Conversion Ratio Buck Regulator with 75% Efficiency for High-Voltage Triboelectric Nanogenerators

    Science.gov (United States)

    Luo, Li-Chuan; Bao, De-Chun; Yu, Wu-Qi; Zhang, Zhao-Hua; Ren, Tian-Ling

    2016-01-01

    It is meaningful to research Triboelectric Nanogenerators (TENGs), which can create electricity anywhere and anytime. Many studies of TENG structures and materials explain the phenomenon that the maximum voltage is stable while the current increases. The output voltage of the TENG is high, about 180-400 V, and the output current is small, about 39 μA, so direct integration of the TENG with Li-ion batteries would result in huge energy loss due to the ultrahigh TENG impedance. A novel interface circuit with a high-voltage buck regulator for the TENG is introduced in this paper. The interface circuit transfers the output signal of the TENG into a signal suited to a lithium-ion battery. Through the buck regulator circuit, the average output voltage is about 4.0 V and the average output current is about 1.12 mA. Further, the reliability and availability of the circuit for the lithium-ion battery are discussed. The interface circuit is simulated using Cadence software and verified through a PCB experiment. The buck regulator achieves 75% efficiency for the high-voltage TENG. This may lead to further research interest and industrial applications.

  16. Measurement of the Depth of Maximum of Extensive Air Showers above 10^18 eV

    Energy Technology Data Exchange (ETDEWEB)

    Abraham, J.; /Buenos Aires, CONICET; Abreu, P.; /Lisbon, IST; Aglietta, M.; /Turin U. /INFN, Turin; Ahn, E.J.; /Fermilab; Allard, D.; /APC, Paris; Allekotte, I.; /Centro Atomico Bariloche /Buenos Aires, CONICET; Allen, J.; /New York U.; Alvarez-Muniz, J.; /Santiago de Compostela U.; Ambrosio, M.; /Naples U.; Anchordoqui, L.; /Wisconsin U., Milwaukee; Andringa, S.; /Lisbon, IST /Boskovic Inst., Zagreb

    2010-02-01

    We describe the measurement of the depth of maximum, X_max, of the longitudinal development of air showers induced by cosmic rays. Almost 4000 events above 10^18 eV observed by the fluorescence detector of the Pierre Auger Observatory in coincidence with at least one surface detector station are selected for the analysis. The average shower maximum was found to evolve with energy at a rate of (106 +35/-21) g/cm^2/decade below 10^(18.24 ± 0.05) eV, and (24 ± 3) g/cm^2/decade above this energy. The measured shower-to-shower fluctuations decrease from about 55 to 26 g/cm^2. The interpretation of these results in terms of the cosmic ray mass composition is briefly discussed.

  17. Persistent current through a semiconductor quantum dot with Gaussian confinement

    International Nuclear Information System (INIS)

    Boyacioglu, Bahadir; Chatterjee, Ashok

    2012-01-01

    The persistent diamagnetic current in a GaAs quantum dot with Gaussian confinement is calculated. It is shown that, except at very low or very high temperature, the persistent current increases with decreasing temperature. As a function of the dot size, the diamagnetic current exhibits a maximum at a certain confinement length. Furthermore, for a shallow potential, the persistent current shows an interesting maximum structure as a function of the depth of the potential. At low temperature the peak is quite sharp, but it broadens with increasing temperature.

  18. Photoinduced currents in pristine and ion irradiated kapton-H polyimide

    Science.gov (United States)

    Sharma, Anu; Sridharbabu, Y.; Quamara, J. K.

    2014-10-01

    The photoinduced currents in pristine and ion irradiated kapton-H polyimide have been investigated for different applied electric fields at 200°C. Particularly the effect of illumination intensity on the maximum current obtained as a result of photoinduced polarization has been studied. Samples were irradiated by using PELLETRON facility, IUAC, New Delhi. The photo-carrier charge generation depends directly on intensity of illumination. The samples irradiated at higher fluence show a decrease in the peak current with intensity of illumination. The secondary radiation induced crystallinity (SRIC) is responsible for the increase in maximum photoinduced currents generated with intensity of illumination.

  19. Photoinduced currents in pristine and ion irradiated kapton-H polyimide

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Anu, E-mail: sharmaanu81@gmail.com; Sridharbabu, Y., E-mail: sharmaanu81@gmail.com; Quamara, J. K., E-mail: sharmaanu81@gmail.com [Physics Department, SGTB Khalsa college, Delhi University, Delhi (India); Department of Physics, National Institute of Technology, Kurukshetra-136119 (India); Echelon Group of Institutions, Faridabad (India)

    2014-10-15

    The photoinduced currents in pristine and ion irradiated kapton-H polyimide have been investigated for different applied electric fields at 200°C. Particularly the effect of illumination intensity on the maximum current obtained as a result of photoinduced polarization has been studied. Samples were irradiated by using PELLETRON facility, IUAC, New Delhi. The photo-carrier charge generation depends directly on intensity of illumination. The samples irradiated at higher fluence show a decrease in the peak current with intensity of illumination. The secondary radiation induced crystallinity (SRIC) is responsible for the increase in maximum photoinduced currents generated with intensity of illumination.

  20. Improvement of maximum power point tracking perturb and observe algorithm for a standalone solar photovoltaic system

    International Nuclear Information System (INIS)

    Awan, M.M.A.; Awan, F.G.

    2017-01-01

    Extraction of maximum power from a PV (photovoltaic) cell is necessary to make the PV system efficient. Maximum power can be achieved by operating the system at the MPP (maximum power point), i.e., taking the operating point of the PV panel to the MPP, and for this purpose MPPTs (maximum power point trackers) are used. Tracking algorithms used by these trackers include incremental conductance, the constant voltage method, the constant current method, the short-circuit current method, the open-circuit voltage method and PAO (perturb and observe); PAO is the most widely used because it is simple and easy to implement. The PAO algorithm has two drawbacks: low tracking speed under rapidly changing weather conditions, and oscillation of the PV system's operating point around the MPP. Past work has achieved little improvement on these issues. In this paper, a new method named the 'decrease and fix' method is introduced as an improvement to the PAO algorithm to overcome these issues of tracking speed and oscillation. The decrease and fix method is the first successful attempt with the PAO algorithm to achieve stability and speed up the tracking process in a photovoltaic system. A complete standalone photovoltaic system model with the improved perturb and observe algorithm is simulated in MATLAB Simulink. (author)
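For reference, the basic PAO (perturb and observe) rule that this record improves upon can be sketched as a hill-climbing update. This is the generic textbook algorithm, not the paper's 'decrease and fix' variant:

```python
def perturb_and_observe(v, p, v_prev, p_prev, step):
    """One iteration of basic P&O: keep perturbing the operating voltage in
    the same direction while power rises; reverse direction when it falls.
    Returns the next reference voltage."""
    if p == p_prev:
        return v  # no observable change; hold the operating point
    if (p - p_prev) * (v - v_prev) > 0:
        return v + step  # power rose in this direction: keep going
    return v - step      # power fell: we passed the MPP, reverse

# Illustrative run on a toy power-voltage curve with its MPP at 17 V:
power = lambda v: -(v - 17.0) ** 2 + 100.0
v_prev, v = 15.0, 15.5
p_prev = power(v_prev)
for _ in range(50):
    p = power(v)
    v_prev, p_prev, v = v, p, perturb_and_observe(v, p, v_prev, p_prev, 0.5)
# v now oscillates around the MPP (17 V) within one step -- the oscillation
# that the 'decrease and fix' method is designed to suppress.
```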

  1. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.

  2. 49 CFR 230.24 - Maximum allowable stress.

    Science.gov (United States)

    2010-10-01

    § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...

  3. Preliminary determination of the energy potential of ocean currents along the southern coast of Brazil

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, Andrea; Beluco, Alexandre; de Almeida, Luiz Emilio B. [Inst. Pesquisas Hidraulicas, Univ. Fed Rio Grande do Sul, Porto Alegre (Brazil)

    2013-07-01

    The ocean can be a strategic alternative for obtaining energy, both from ocean waves and from sea currents and tides. Among these resources, power generation projects based on ocean currents are still under development. Generating energy from the ocean could have a great impact on the Brazilian energy grid, since Brazil has a vast coastline, more than 9,000 km long, with a potential for generating energy from ocean currents that has not been fully estimated. This article presents a preliminary determination of the energy potential for power generation from ocean currents along the coast of Rio Grande do Sul, the southernmost state of Brazil, and also presents notes that contribute to the characterization of the system of ocean currents in the region. The data used were obtained in two areas near Tramandai, allowing the determination of the velocities and directions of the currents on a seasonal basis. The maximum speeds obtained rarely exceed 0.750 m/s, while the average speeds do not exceed 0.200 m/s. A relationship with the prevailing winds in the region was identified. Unfortunately, the results do not allow optimism about power generation from ocean currents on the southern coast of Brazil, at least over the continental shelf.

  4. Maximum entropy networks are more controllable than preferential attachment networks

    International Nuclear Information System (INIS)

    Hou, Lvlin; Small, Michael; Lao, Songyang

    2014-01-01

    A maximum entropy (ME) method to generate typical scale-free networks has recently been introduced. We investigate the controllability of ME networks and Barabási–Albert (BA) preferential attachment networks. Our experimental results show that ME networks are significantly more easily controlled than BA networks of the same size and the same degree distribution. Moreover, control profiles are used to provide insight into the control properties of both classes of network. We identify and classify the driver nodes and analyze the connectivity of their neighbors. We find that driver nodes in ME networks have fewer mutual neighbors and that their neighbors have lower average degree. We conclude that the properties of the neighbors of driver nodes sensitively affect the network controllability. Hence, subtle and important structural differences exist between BA networks and typical scale-free networks of the same degree distribution. - Highlights: • The controllability of maximum entropy (ME) and Barabási–Albert (BA) networks is investigated. • ME networks are significantly more easily controlled than BA networks of the same degree distribution. • The properties of the neighbors of driver nodes sensitively affect the network controllability. • Subtle and important structural differences exist between BA networks and typical scale-free networks

  5. Nonuniform Illumination Correction Algorithm for Underwater Images Using Maximum Likelihood Estimation Method

    Directory of Open Access Journals (Sweden)

    Sonali Sachin Sankpal

    2016-01-01

    Full Text Available Scattering and absorption of light are the main reasons for limited visibility in water; suspended particles and dissolved chemical compounds are responsible for both. The limited visibility results in degradation of underwater images. Visibility can be increased by using an artificial light source in the underwater imaging system, but artificial light illuminates the scene nonuniformly, producing a bright spot at the center with dark regions at the surroundings. In some cases the imaging system itself creates dark regions in the image by casting shadows on the objects. The problem of nonuniform illumination is neglected in most underwater image enhancement techniques, and very few methods show results on color images. This paper suggests a method for nonuniform illumination correction of underwater images. The method assumes that natural underwater images are Rayleigh distributed and uses the maximum likelihood estimate of the scale parameter to map the image distribution to a Rayleigh distribution. The method is compared with traditional nonuniform illumination correction methods using no-reference image quality metrics such as average luminance, average information entropy, normalized neighborhood function, average contrast, and a comprehensive assessment function.
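The maximum likelihood estimate of the Rayleigh scale parameter that the method relies on has a simple closed form, σ̂² = Σx²/(2N). A minimal sketch of that estimator (the mapping step of the paper's pipeline is not reproduced here):

```python
import math

def rayleigh_mle_scale(pixels):
    """Closed-form maximum likelihood estimate of the Rayleigh scale
    parameter from a sample: sigma_hat = sqrt(sum(x^2) / (2N))."""
    n = len(pixels)
    if n == 0:
        raise ValueError("empty sample")
    return math.sqrt(sum(x * x for x in pixels) / (2 * n))

# Toy sample of pixel intensities (illustrative values):
print(rayleigh_mle_scale([1.0, 2.0, 3.0]))  # sqrt(14/6)
```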

  6. Variation in the annual average radon concentration measured in homes in Mesa County, Colorado

    International Nuclear Information System (INIS)

    Rood, A.S.; George, J.L.; Langner, G.H. Jr.

    1990-04-01

    The purpose of this study is to examine the variability in the annual average indoor radon concentration. The TMC has been collecting annual average radon data for the past 5 years in 33 residential structures in Mesa County, Colorado. This interim report presents the data collected to date; the study is planned to continue. 62 refs., 3 figs., 12 tabs

  7. Highly Sensitive Measurements of the Dark Current of Superconducting Cavities for TESLA Using a SQUID Based Cryogenic Current Comparator

    CERN Document Server

    Vodel, W; Nietzsche, S

    2004-01-01

    This contribution presents a Cryogenic Current Comparator (CCC) as an excellent tool for detecting dark currents generated, e.g. by superconducting cavities for the upcoming TESLA project (X-FEL) at DESY. To achieve the maximum possible energy the gradient of the superconducting RF cavities should be pushed close to the physical limit of 50 MV/m. The undesired field emission of electrons (so-called dark current) of the superconducting RF cavities at strong fields may limit the maximum gradient. The absolute measurement of the dark current in correlation with the gradient will give a proper value to compare and classify the cavities. The main component of the CCC is a highly sensitive LTS-DC SQUID system which is able to measure extremely low magnetic fields, e.g. caused by the dark current. For this reason the input coil of the SQUID is connected across a special designed toroidal niobium pick-up coil for the passing electron beam. A noise limited current resolution of nearly 2 pA/√(Hz) with a measu...

  8. LARF: Instrumental Variable Estimation of Causal Effects through Local Average Response Functions

    Directory of Open Access Journals (Sweden)

    Weihua An

    2016-07-01

    Full Text Available LARF is an R package that provides instrumental variable estimation of treatment effects when both the endogenous treatment and its instrument (i.e., the treatment inducement) are binary. The method (Abadie 2003) involves two steps. First, pseudo-weights are constructed from the probability of receiving the treatment inducement. By default LARF estimates the probability by a probit regression. It also provides semiparametric power series estimation of the probability and allows users to employ other external methods to estimate the probability. Second, the pseudo-weights are used to estimate the local average response function conditional on treatment and covariates. LARF provides both least squares and maximum likelihood estimates of the conditional treatment effects.

  9. Site Specific Probable Maximum Precipitation Estimates and Professional Judgement

    Science.gov (United States)

    Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.

    2015-12-01

    State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions in the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized for their limitations on basin size, questionable applicability in regions affected by orographic effects, lack of consistent methods, and their age in general. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on commercially developed site-specific PMP estimates. As such, NRC has recently investigated key areas of expert judgement, via a generic audit and one in-depth site-specific review, as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm-representative dew point adjustment, developed for the Electric Power Research Institute by North American Weather Consultants in 1993, in order to harmonize historic storms for which only 12-hour dew point data were available with more recent storms in a single database. The second issue relates to the use of climatological averages for spatially

  10. Wave-Current Interactions in the Vicinity of the Sea Bed

    Energy Technology Data Exchange (ETDEWEB)

    Holmedal, Lars Erik

    2002-01-01

    The work in the present thesis spans part of the range of sea bed boundary layer research in three separate parts. The first two parts deal with the sea bed boundary layer beneath random waves and current, while the third part represents a more fundamental approach to the smooth turbulent boundary layer under a horizontally uniform sinusoidal plus steady forcing. The first part focuses on the bottom shear stress amplitudes under random waves plus current. Shear stresses on a rough seabed under irregular waves plus current are calculated: parameterized models valid for regular waves plus current are used in Monte Carlo simulations, assuming the wave amplitudes to be Rayleigh distributed. Numerical estimates of the probability distribution functions are presented. For waves only, the shear stress maxima follow a two-parameter Weibull distribution, while for waves plus current both the maximum and time-averaged shear stresses are well represented by a three-parameter Weibull distribution. The behaviour of the maximum shear stresses under a wide range of wave-current conditions has been investigated, and it appears that under certain conditions the current has a significant influence on the maximum shear stresses. Comparisons between predicted and measured maximum bottom shear stresses from laboratory and field experiments are presented. The second part extends the first approach by applying a dynamic eddy viscosity model; the boundary layer under random waves alone, as well as under random waves plus current, is examined by a dynamic turbulent boundary layer model based on the linearized boundary layer equations with horizontally uniform forcing. The turbulence closure is provided by a high Reynolds number k-ε model. The model appears to be verified as far as data exist, i.e., for sinusoidal waves alone as well as for sinusoidal waves plus a mean current. The time and space
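The Monte Carlo procedure of the first part can be sketched generically: draw Rayleigh-distributed wave amplitudes and map each through a parameterized friction model to a maximum bottom shear stress. The quadratic friction law and the coefficient values below are simplified placeholders, not the parameterization used in the thesis:

```python
import math
import random

def rayleigh_sample(sigma, rng):
    """Draw one Rayleigh-distributed amplitude by inverse-CDF sampling:
    X = sigma * sqrt(-2 ln U)."""
    return sigma * math.sqrt(-2.0 * math.log(rng.random()))

def max_shear_stress(u_amp, rho=1025.0, f_w=0.02):
    """Placeholder quadratic friction law: tau_max = 0.5 * rho * f_w * U^2,
    with an assumed constant wave friction factor f_w."""
    return 0.5 * rho * f_w * u_amp ** 2

rng = random.Random(0)
taus = [max_shear_stress(rayleigh_sample(0.5, rng)) for _ in range(10000)]
# The empirical distribution of these maxima is what one would then fit
# with a Weibull distribution, as the thesis does.
print(sum(taus) / len(taus))  # empirical mean of the simulated maxima
```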

  11. Predicting the start and maximum amplitude of solar cycle 24 using similar phases and a cycle grouping

    International Nuclear Information System (INIS)

    Wang Jialong; Zong Weiguo; Le Guiming; Zhao Haijuan; Tang Yunqiu; Zhang Yang

    2009-01-01

    We find that the solar cycles 9, 11, and 20 are similar to cycle 23 in their respective descending phases. Using this similarity and the observed data of smoothed monthly mean sunspot numbers (SMSNs) available for the descending phase of cycle 23, we make a date calibration for the average time sequence made of the three descending phases of the three cycles, and predict the start of March or April 2008 for cycle 24. For the three cycles, we also find a linear correlation of the length of the descending phase of a cycle with the difference between the maximum epoch of this cycle and that of its next cycle. Using this relationship along with the known relationship between the rise-time and the maximum amplitude of a slowly rising solar cycle, we predict the maximum SMSN of cycle 24 of 100.2 ± 7.5 to appear during the period from May to October 2012. (letters)

  12. Validity of one-repetition maximum predictive equations in men with spinal cord injury.

    Science.gov (United States)

    Ribeiro Neto, F; Guanais, P; Dornelas, E; Coutinho, A C B; Costa, R R G

    2017-10-01

    Cross-sectional study. The study aimed (a) to test the cross-validation of current one-repetition maximum (1RM) predictive equations in men with spinal cord injury (SCI); (b) to compare the current 1RM predictive equations to a newly developed equation based on the 4- to 12-repetition maximum test (4-12RM). SARAH Rehabilitation Hospital Network, Brasilia, Brazil. Forty-five men aged 28.0 years with SCI between C6 and L2 causing complete motor impairment were enrolled in the study. Volunteers were tested, in random order, with the 1RM test or 4-12RM, with 2-3 days between tests. Multiple regression analysis was used to generate an equation for predicting 1RM. There were no significant differences between the 1RM test and the current predictive equations. ICC values were significant and classified as excellent for all current predictive equations. The predictive equation of Lombardi presented the best Bland-Altman results (0.5 kg and 12.8 kg for mean difference and interval range around the differences, respectively). The two created equation models for 1RM demonstrated the same, high adjusted R² (0.971). The current predictive equations are accurate for assessing individuals with SCI in the bench press exercise; however, the predictive equation of Lombardi presented the best associated cross-validity results. A specific 1RM prediction equation was also elaborated for individuals with SCI. The created equation should be tested to verify whether it presents better accuracy than the current ones.
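The Lombardi equation singled out above is, in its widely used form, 1RM = w · r^0.10, where w is the load lifted and r the number of repetitions. A minimal sketch (the study's newly developed SCI-specific equation is not reproduced here):

```python
def one_rm_lombardi(weight, reps):
    """Lombardi one-repetition-maximum estimate: 1RM = w * reps^0.10."""
    return weight * reps ** 0.10

# e.g. 60 kg lifted for 8 repetitions (illustrative values):
print(round(one_rm_lombardi(60.0, 8), 1))  # -> 73.9
```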

  13. Highly efficient maximum power point tracking using DC-DC coupled inductor single-ended primary inductance converter for photovoltaic power systems

    Science.gov (United States)

    Quamruzzaman, M.; Mohammad, Nur; Matin, M. A.; Alam, M. R.

    2016-10-01

    Solar photovoltaics (PVs) have nonlinear voltage-current characteristics, with a distinct maximum power point (MPP) depending on factors such as solar irradiance and operating temperature. To extract maximum power from the PV array at any environmental condition, DC-DC converters are usually used as MPP trackers. This paper presents the performance analysis of a coupled inductor single-ended primary inductance converter for maximum power point tracking (MPPT) in a PV system. A detailed model of the system has been designed and developed in MATLAB/Simulink. The performance evaluation has been conducted on the basis of stability, current ripple reduction and efficiency at different operating conditions. Simulation results show considerable ripple reduction in the input and output currents of the converter. Both the MPPT and converter efficiencies are significantly improved. The obtained simulation results validate the effectiveness and suitability of the converter model in MPPT and show reasonable agreement with the theoretical analysis.

  14. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to derive the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
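The Toeplitz-plus-Levinson step referred to above is the standard Levinson-Durbin recursion: it solves the Toeplitz normal equations for the prediction-error filter from autocorrelation lags, with the reflection coefficients (|k| < 1) guaranteeing stability. A generic illustrative sketch, not the authors' code:

```python
def levinson_durbin(r, order):
    """Levinson-Durbin recursion. Given autocorrelation lags r[0..order],
    return (prediction-error filter a, reflection coefficients k,
    final prediction-error power e)."""
    a = [1.0] + [0.0] * order
    e = r[0]
    ks = []
    for m in range(1, order + 1):
        acc = sum(a[j] * r[m - j] for j in range(m))
        k = -acc / e                      # reflection coefficient, |k| < 1
        ks.append(k)
        a_new = a[:]
        for j in range(1, m):             # symmetric order update
            a_new[j] = a[j] + k * a[m - j]
        a_new[m] = k
        a = a_new
        e *= (1.0 - k * k)                # error power shrinks each order
    return a, ks, e

# AR(1)-like lags r[l] = 0.5^l recover the filter [1, -0.5, 0]:
print(levinson_durbin([1.0, 0.5, 0.25], 2))
```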

  15. Experimental study on the effects of surface gravity waves of different wavelengths on the phase averaged performance characteristics of marine current turbine

    Science.gov (United States)

    Luznik, L.; Lust, E.; Flack, K. A.

    2014-12-01

    There are few studies describing the interaction between marine current turbines and an overlying surface gravity wave field. In this work we present an experimental study of the effects of surface gravity waves of different wavelengths on the wave-phase-averaged performance characteristics of a marine current turbine model. Measurements are performed with a 1/25 scale (diameter D = 0.8 m) two-bladed horizontal axis turbine towed in the large (116 m long) towing tank at the U.S. Naval Academy, equipped with a dual-flap, servo-controlled wave maker. Three regular waves with wavelengths of 15.8, 8.8 and 3.9 m, with wave heights adjusted such that all waveforms have the same energy input per unit width, are produced by the wave maker, and the model turbine is towed into the waves at a constant carriage speed of 1.68 m/s, representing the case of waves travelling in the same direction as the mean current. Thrust and torque developed by the model turbine are measured using a dynamometer mounted in line with the turbine shaft. Shaft rotation speed and blade position are measured using an in-house designed shaft position indexing system. The tip speed ratio (TSR) is adjusted using a hysteresis brake attached to the output shaft. Free surface elevation and wave parameters are measured with two optical wave height sensors, one located in the turbine rotor plane and the other one diameter upstream of the rotor. All instruments are synchronized in time and data are sampled at a rate of 700 Hz. All measured quantities are conditionally sampled as a function of the measured surface elevation and transformed to wave phase space using the Hilbert transform. Phenomena observed in earlier experiments with the same turbine, such as a phase lag in the torque signal and an increase in thrust due to Stokes drift, are examined with the present data, as is spectral analysis of the torque and thrust data.

  16. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant dynamical effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis

  17. Average current mode (ACM) control for switching power converters

    OpenAIRE

    2014-01-01

    Providing a fast current sensor direct feedback path to a modulator for controlling switching of a switched power converter in addition to an integrating feedback path which monitors average current for control of a modulator provides fast dynamic response consistent with system stability and average current mode control. Feedback of output voltage for voltage regulation can be combined with current information in the integrating feedback path to limit bandwidth of the voltage feedback signal.
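
The control structure in this record (a fast direct current-feedback path plus a slower integrating path setting the modulator input) can be illustrated with a cycle-averaged simulation. This is only a sketch: the buck-converter values, gains, and time step below are assumed for illustration and are not taken from the patent record.

```python
# Minimal averaged-model sketch of average current mode control for a buck
# converter. A fast proportional path on the sensed inductor current plus a
# slow integrating path together set the duty cycle, as the abstract describes.
# All component values and gains are assumed, not from the record.

def simulate_acm(i_ref=2.0, v_in=12.0, v_out=5.0, L=100e-6,
                 kp=0.05, ki=200.0, dt=10e-6, steps=2000):
    i = 0.0                      # inductor current (A)
    integ = v_out / v_in         # integrator pre-loaded with steady-state duty
    for _ in range(steps):
        err = i_ref - i
        integ += ki * err * dt   # slow integrating path (average current)
        duty = kp * err + integ  # fast direct current feedback + integral term
        duty = min(max(duty, 0.0), 1.0)       # duty-cycle saturation
        i += (duty * v_in - v_out) / L * dt   # cycle-averaged inductor dynamics
    return i

i_final = simulate_acm()         # settles at the 2.0 A current reference
```

The integral term carries the steady-state duty while the proportional path supplies the fast dynamic response the abstract attributes to the direct sensor feedback.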

  18. Maximum power point tracking of partially shaded solar photovoltaic arrays

    Energy Technology Data Exchange (ETDEWEB)

    Roy Chowdhury, Shubhajit; Saha, Hiranmay [IC Design and Fabrication Centre, Department of Electronics and Telecommunication Engineering, Jadavpur University (India)

    2010-09-15

    The paper presents the simulation and hardware implementation of maximum power point (MPP) tracking of a partially shaded solar photovoltaic (PV) array using a variant of Particle Swarm Optimization known as Adaptive Perceptive Particle Swarm Optimization (APPSO). Under partially shaded conditions, the photovoltaic (PV) array characteristics become more complex, with multiple maxima in the power-voltage characteristic. The paper presents an algorithmic technique to accurately track the maximum power point (MPP) of a PV array using APPSO. The APPSO algorithm has also been validated in the current work. The proposed technique uses only one pair of sensors to control multiple PV arrays. This results in lower cost and a higher accuracy of 97.7%, compared to the accuracy of 96.41% obtained earlier using Particle Swarm Optimization. The proposed tracking technique has been mapped onto an MSP430FG4618 microcontroller for tracking and control purposes. The whole system based on the proposed technique has been realized on a standard two-stage power electronic system configuration. (author)
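
The multi-peak tracking problem the record describes can be sketched with plain particle-swarm optimization on a two-peak power-voltage curve. This is a hedged illustration: the curve is a toy stand-in (not panel data), the particles are seeded across the voltage range as is common in PSO MPPT, and the paper's adaptive-perceptive refinements are not reproduced.

```python
import math
import random

# Plain PSO global MPPT on a toy two-peak P-V curve such as arises under
# partial shading: a local peak near 10 V (~50 W), the global peak near 24 V
# (~80 W). The APPSO variant in the record adds adaptive steps omitted here.

def pv_power(v):
    return (50.0 * math.exp(-((v - 10.0) / 4.0) ** 2)
            + 80.0 * math.exp(-((v - 24.0) / 5.0) ** 2))

def pso_mppt(f, vmin=0.0, vmax=30.0, n=10, iters=60,
             w=0.6, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    # deterministic seeding across the operating range covers every peak basin
    pos = [vmin + (i + 0.5) * (vmax - vmin) / n for i in range(n)]
    vel = [0.0] * n
    pbest, pbest_val = pos[:], [f(v) for v in pos]
    g = max(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    for _ in range(iters):
        for i in range(n):
            vel[i] = (w * vel[i]
                      + c1 * rng.random() * (pbest[i] - pos[i])
                      + c2 * rng.random() * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], vmin), vmax)
            val = f(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i], val
    return gbest, gbest_val

v_star, p_star = pso_mppt(pv_power)   # converges to the global peak near 24 V
```

A hill-climbing tracker seeded near 10 V would stall on the local peak; the swarm's spread is what recovers the global one.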

  19. Experimental assessment of blade tip immersion depth from free surface on average power and thrust coefficients of marine current turbine

    Science.gov (United States)

    Lust, Ethan; Flack, Karen; Luznik, Luksa

    2014-11-01

    Results from an experimental study on the effects of marine current turbine immersion depth from the free surface are presented. Measurements are performed with a 1/25 scale (diameter D = 0.8 m) two-bladed horizontal axis turbine towed in the large towing tank at the U.S. Naval Academy. Thrust and torque are measured using a dynamometer mounted in line with the turbine shaft. Shaft rotation speed and blade position are measured using a shaft position indexing system. The tip speed ratio (TSR) is adjusted using a hysteresis brake attached to the output shaft. Two optical wave height sensors are used to measure the free surface elevation. The turbine is towed at 1.68 m/s, resulting in a 70% chord-based Re_c = 4 × 10^5. An Acoustic Doppler Velocimeter (ADV) is installed one turbine diameter upstream of the turbine rotation plane to characterize the inflow turbulence. Measurements are obtained at four relative blade tip immersion depths of z/D = 0.5, 0.4, 0.3, and 0.2 at a TSR value of 7 to identify the depth where free surface effects impact overall turbine performance. The overall average power and thrust coefficients are presented and compared to previously conducted baseline tests. The influence of wake expansion blockage on turbine performance due to the presence of the free surface at these immersion depths is also discussed.
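
The coefficients reported in tests like this are formed from the raw torque, thrust, and rotation-rate measurements in a standard way. The sample torque and thrust values below are hypothetical, not the paper's data; the diameter (0.8 m), tow speed (1.68 m/s), and TSR of 7 match the record.

```python
import math

# Tip speed ratio and power/thrust coefficients from raw turbine measurements:
# TSR = omega*R/U, Cp = Q*omega / (0.5*rho*A*U^3), Ct = T / (0.5*rho*A*U^2).
# torque (N*m) and thrust (N) here are illustrative numbers only.

def turbine_coefficients(torque, thrust, omega, U, D=0.8, rho=1000.0):
    R = D / 2.0
    A = math.pi * R ** 2            # swept area
    tsr = omega * R / U
    cp = torque * omega / (0.5 * rho * A * U ** 3)
    ct = thrust / (0.5 * rho * A * U ** 2)
    return tsr, cp, ct

# omega = 29.4 rad/s gives TSR = 29.4 * 0.4 / 1.68 = 7, as in the experiment
tsr, cp, ct = turbine_coefficients(torque=18.0, thrust=500.0, omega=29.4, U=1.68)
```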

  20. Two-Stage Chaos Optimization Search Application in Maximum Power Point Tracking of PV Array

    Directory of Open Access Journals (Sweden)

    Lihua Wang

    2014-01-01

    Full Text Available In order to deliver the maximum available power to the load under conditions of varying solar irradiation and environment temperature, maximum power point tracking (MPPT) technologies have been used widely in PV systems. Among all the MPPT schemes, the chaos method is one of the hot topics of recent years. In this paper, a novel two-stage chaos optimization method is presented which makes the search faster and more effective. In the proposed chaos search, the improved logistic mapping, which has better ergodicity, is used as the first carrier process. After the current optimal solution has been located to a certain guaranteed accuracy, a power-function carrier is used as the secondary carrier process to reduce the search space of the optimized variables and eventually find the maximum power point. Compared with the traditional chaos search method, the proposed method can track changes quickly and accurately and also gives better optimization results. The proposed method provides a new efficient way to track the maximum power point of a PV array.
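
The two-stage idea (a chaotic carrier sweep of the whole range, then a second carrier confined near the best point found) can be sketched as below. This is a hedged toy: the classical logistic map and a simple shrinking interval stand in for the paper's improved logistic map and power-function carrier, and the objective is a made-up single-peak curve rather than a measured P-V characteristic.

```python
# Two-stage chaos search sketch. Stage 1 sweeps [vmin, vmax] with a chaotic
# logistic-map carrier; stage 2 re-carries the search inside a reduced
# interval around the stage-1 best to refine the maximum power point.

def logistic(x):
    return 4.0 * x * (1.0 - x)        # classical logistic map, chaotic at r = 4

def chaos_mppt(f, vmin, vmax, n1=60, n2=60, shrink=0.1, x0=0.3):
    x, best_v, best_p = x0, vmin, f(vmin)
    for _ in range(n1):               # stage 1: chaotic sweep of the full range
        x = logistic(x)
        v = vmin + x * (vmax - vmin)
        if f(v) > best_p:
            best_v, best_p = v, f(v)
    lo = max(vmin, best_v - shrink * (vmax - vmin))
    hi = min(vmax, best_v + shrink * (vmax - vmin))
    for _ in range(n2):               # stage 2: refine inside the reduced interval
        x = logistic(x)
        v = lo + x * (hi - lo)
        if f(v) > best_p:
            best_v, best_p = v, f(v)
    return best_v, best_p

# toy power curve with its maximum (100 W) at 24 V
v_best, p_best = chaos_mppt(lambda v: 100.0 - (v - 24.0) ** 2, 0.0, 30.0)
```

Shrinking the carrier interval is what gives the second stage its speed: the same number of chaotic samples is spent on a range ten times smaller.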

  1. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    International Nuclear Information System (INIS)

    Beer, M.

    1980-01-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates
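
For a single quantity estimated several times with correlated errors, the maximum-likelihood estimate under the multivariate normal model is the generalized-least-squares weighted mean, k = (1ᵀΣ⁻¹x)/(1ᵀΣ⁻¹1), with variance (1ᵀΣ⁻¹1)⁻¹. The numbers below are synthetic, not the paper's reactor data.

```python
# GLS/maximum-likelihood combination of correlated estimates of one eigenvalue.
# The covariance matrix here is assumed for illustration.

def solve(a, b):
    """Solve A w = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def combine_correlated(x, cov):
    w = solve(cov, [1.0] * len(x))                    # Sigma^{-1} 1
    var = 1.0 / sum(w)                                # (1' Sigma^{-1} 1)^{-1}
    est = var * sum(wi * xi for wi, xi in zip(w, x))  # weighted mean
    return est, var

# three correlated eigenvalue estimates with an assumed covariance matrix
x = [1.010, 1.015, 1.005]
cov = [[4e-6, 1e-6, 1e-6],
       [1e-6, 4e-6, 1e-6],
       [1e-6, 1e-6, 4e-6]]
k_ml, var_ml = combine_correlated(x, cov)   # equal weights here -> plain average
```

With this symmetric covariance the weights are equal, but the combined variance (2e-6) is already half the individual variance (4e-6), the kind of variance reduction the abstract reports.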

  2. A maximum entropy reconstruction technique for tomographic particle image velocimetry

    International Nuclear Information System (INIS)

    Bilsky, A V; Lozhkin, V A; Markovich, D M; Tokarev, M P

    2013-01-01

    This paper studies a novel approach for reducing tomographic PIV computational complexity. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents the theoretical computation performance comparison for MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and validation of synthetic images demonstrate significant computational time reduction. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART. (paper)
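
The family of row-action updates that MENT is compared against can be shown on a deliberately tiny problem. This is a hedged toy, not the paper's algorithm: a 2x2 "image" is reconstructed from its row and column sums with multiplicative ART (MART)-style corrections, whose positive multiplicative updates drive the flat starting guess toward the maximum-entropy-consistent solution for consistent data.

```python
# MART-style reconstruction of a 2x2 image (pixels 0..3) from row and column
# sums. Each ray update rescales the pixels on that ray by (measured/estimated).

def mart(projections, rays, n_pixels, sweeps=200, lam=1.0):
    x = [1.0] * n_pixels                    # flat, strictly positive start
    for _ in range(sweeps):
        for b, ray in zip(projections, rays):
            s = sum(x[j] for j in ray)      # current estimate of this ray sum
            for j in ray:
                x[j] *= (b / s) ** lam      # multiplicative correction
    return x

# rays: two rows then two columns of the 2x2 image
rays = [(0, 1), (2, 3), (0, 2), (1, 3)]
projections = [3.0, 7.0, 4.0, 6.0]          # row sums 3, 7; column sums 4, 6
img = mart(projections, rays, 4)            # converges to [1.2, 1.8, 2.8, 4.2]
```

The limit here is the independence table consistent with both margins, i.e. the flattest image matching the data, which is the intuition behind entropy-based reconstruction.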

  3. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes have been sequenced, as well as plants that have fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms were used: maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ). Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a single gene generate the “true tree” by all four algorithms. However, the most frequent gene tree, termed the “maximum gene-support tree” (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the “true tree” among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.
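
The core of the MGS-tree idea is a vote: each gene contributes one tree topology, and the most frequent topology across genes is taken as the species tree. The sketch below uses toy canonical topology strings for hypothetical genes, not the paper's yeast or baculovirus data.

```python
from collections import Counter

# Most-frequent-topology vote behind the "maximum gene-support tree". Each
# entry is the (canonicalized) topology inferred from one gene.

gene_trees = [
    "((A,B),(C,D))", "((A,B),(C,D))", "((A,C),(B,D))",
    "((A,B),(C,D))", "((A,D),(B,C))", "((A,B),(C,D))",
]

def mgs_tree(trees):
    counts = Counter(trees)
    topology, support = counts.most_common(1)[0]
    return topology, support / len(trees)

top, frac = mgs_tree(gene_trees)   # "((A,B),(C,D))" supported by 4 of 6 genes
```

In practice the topologies would need a canonical form (e.g. sorted clades) so that relabelings of the same tree count as one topology.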

  4. Impact of maximum TF magnetic field on performance and cost of an advanced physics tokamak

    International Nuclear Information System (INIS)

    Reid, R.L.

    1983-01-01

    Parametric studies were conducted using the Fusion Engineering Design Center (FEDC) Tokamak Systems Code to investigate the impact of variation in the maximum value of the field at the toroidal field (TF) coils on the performance and cost of a low-q_psi, quasi-steady-state tokamak. Marginal ignition, inductive current startup plus 100 s of inductive burn, and a constant value of epsilon (inverse aspect ratio) times beta poloidal were the global conditions imposed on this study. A maximum TF field of approximately 10 T was found to be appropriate for this device

  5. Distributed maximum power point tracking in wind micro-grids

    Directory of Open Access Journals (Sweden)

    Carlos Andrés Ramos-Paja

    2012-06-01

    Full Text Available With the aim of reducing the hardware requirements in micro-grids based on wind generators, a distributed maximum power point tracking algorithm is proposed. Such a solution reduces the number of current sensors and processing devices needed to maximize the power extracted from the micro-grid, reducing the application cost. The analysis of the optimal operating points of the wind generator was performed experimentally, which in addition provides realistic model parameters. Finally, the proposed solution was validated by means of detailed simulations performed in the power electronics software PSIM, contrasting the achieved performance with traditional solutions.

  6. Correlation between maximum phonetically balanced word recognition score and pure-tone auditory threshold in elder presbycusis patients over 80 years old.

    Science.gov (United States)

    Deng, Xin-Sheng; Ji, Fei; Yang, Shi-Ming

    2014-02-01

    The maximum phonetically balanced word recognition score (PBmax) showed poor correlation with pure-tone thresholds in presbycusis patients older than 80 years. To study the characteristics of monosyllable recognition in presbycusis patients older than 80 years of age, thirty presbycusis patients older than 80 years were included as the test group (group 80+). Another 30 patients aged 60-80 years were selected as the control group (group 80-). PBmax was tested using Mandarin monosyllable recognition test materials with the signal level at 30 dB above the averaged thresholds at 0.5, 1, 2, and 4 kHz (4FA) or at the maximum comfortable level. The PBmax values of the test group and control group were compared with each other, and the correlation between PBmax and the predicted maximum speech recognition score based on 4FA (PBmax-predict) was statistically analyzed. Under the optimal test conditions, the averaged PBmax was (77.3 ± 16.7)% for group 80- and (52.0 ± 25.4)% for group 80+ (p < 0.001). The PBmax of group 80- was significantly correlated with PBmax-predict (Spearman correlation = 0.715, p < 0.001). The score for group 80+ was less strongly correlated with PBmax-predict (Spearman correlation = 0.572, p = 0.001).
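
The statistic quoted in this record, Spearman rank correlation between measured PBmax and the 4FA-predicted score, is simple to compute from ranks. The scores below are synthetic illustration values, not the clinical data.

```python
# Spearman rank correlation: Pearson correlation of the rank-transformed data,
# with ties assigned their average rank.

def rankdata(a):
    order = sorted(range(len(a)), key=lambda i: a[i])
    ranks = [0.0] * len(a)
    i = 0
    while i < len(a):
        j = i
        while j + 1 < len(a) and a[order[j + 1]] == a[order[i]]:
            j += 1                      # extend the tie group
        avg = (i + j) / 2 + 1           # average 1-based rank for the group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    rx, ry = rankdata(x), rankdata(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

pbmax     = [80, 70, 60, 90, 50]        # hypothetical measured scores (%)
predicted = [78, 72, 65, 88, 55]        # hypothetical 4FA-predicted scores (%)
rho = spearman(pbmax, predicted)        # perfectly monotone toy data
```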

  7. Waif goodbye! Average-size female models promote positive body image and appeal to consumers.

    Science.gov (United States)

    Diedrichs, Phillippa C; Lee, Christina

    2011-10-01

    Despite consensus that exposure to media images of thin fashion models is associated with poor body image and disordered eating behaviours, few attempts have been made to enact change in the media. This study sought to investigate an effective alternative to current media imagery, by exploring the advertising effectiveness of average-size female fashion models, and their impact on the body image of both women and men. A sample of 171 women and 120 men were assigned to one of three advertisement conditions: no models, thin models and average-size models. Women and men rated average-size models as equally effective in advertisements as thin and no models. For women with average and high levels of internalisation of cultural beauty ideals, exposure to average-size female models was associated with a significantly more positive body image state in comparison to exposure to thin models and no models. For men reporting high levels of internalisation, exposure to average-size models was also associated with a more positive body image state in comparison to viewing thin models. These findings suggest that average-size female models can promote positive body image and appeal to consumers.

  8. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong … approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.

  9. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    A direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it suitable for application of the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples

  10. 75 FR 43840 - Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for...

    Science.gov (United States)

    2010-07-27

    ...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...

  11. Half-width at half-maximum, full-width at half-maximum analysis

    Indian Academy of Sciences (India)

    addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of.

  12. Is the poleward migration of tropical cyclone maximum intensity associated with a poleward migration of tropical cyclone genesis?

    Science.gov (United States)

    Daloz, Anne Sophie; Camargo, Suzana J.

    2018-01-01

    A recent study showed that the global average latitude where tropical cyclones achieve their lifetime-maximum intensity has been migrating poleward at a rate of about one-half degree of latitude per decade over the last 30 years in each hemisphere. However, it does not answer a critical question: is the poleward migration of tropical cyclone lifetime-maximum intensity associated with a poleward migration of tropical cyclone genesis? In this study we will examine this question. First we analyze changes in the environmental variables associated with tropical cyclone genesis, namely entropy deficit, potential intensity, vertical wind shear, vorticity, skin temperature and specific humidity at 500 hPa in reanalysis datasets between 1980 and 2013. Then, a selection of these variables is combined into two tropical cyclone genesis indices that empirically relate tropical cyclone genesis to large-scale variables. We find a shift toward greater (smaller) average potential number of genesis at higher (lower) latitudes over most regions of the Pacific Ocean, which is consistent with a migration of tropical cyclone genesis towards higher latitudes. We then examine the global best track archive and find coherent and significant poleward shifts in mean genesis position over the Pacific Ocean basins.

  13. Effect of Electrical Current Stimulation on Pseudomonas Aeruginosa Growth

    Science.gov (United States)

    Alneami, Auns Q.; Khalil, Eman G.; Mohsien, Rana A.; Albeldawi, Ali F.

    2018-05-01

    The present study evaluates the effect of electrical current stimulation at different frequencies in killing pathogenic Pseudomonas aeruginosa (PA) bacteria in vitro, using a human-safe level of electricity controlled by a function generator. A wide range of frequencies, from 0.5 Hz to 1.2 MHz, was used to stimulate the bacteria at 20 V peak-to-peak for different periods of time (5 to 30 minutes). The bacterial cultures used nickel, nichrome, or titanium electrodes in agarose in phosphate-buffered saline (PBS), mixed with bacterial stock activated by trypticase soy broth (TSB). At frequencies between 0.5-1 kHz, the results show an average inhibition zone diameter of 20 mm after 30 minutes of stimulation. At frequencies between 3-60 kHz the inhibition zone diameter was only 10 mm after 30 minutes of stimulation, while the average inhibition zone diameter increased to more than 30 mm after 30 minutes of stimulation at frequencies between 80-120 kHz. From this study we conclude that at a specific (resonance) frequency, between 0.5-1 kHz, there was a relatively large inhibition zone because the inductive reactance equals the capacitive reactance (XC = XL). At frequencies above 60 kHz, a maximum inhibition zone was noticed because the capacitive impedance becomes negligible (leaving only the small resistivity of the bacterial internal organs).
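
The resonance condition invoked in the conclusion, X_C = X_L, i.e. 1/(2πfC) = 2πfL, fixes the frequency at f = 1/(2π·sqrt(LC)). The component values below are purely illustrative, chosen so the result lands inside the 0.5-1 kHz band the record highlights.

```python
import math

# Frequency at which capacitive and inductive reactance cancel (X_C = X_L).
# L and C are illustrative values, not measured properties of the cultures.

def resonance_frequency(L, C):
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

f0 = resonance_frequency(L=1e-3, C=50e-6)   # about 712 Hz, inside 0.5-1 kHz
```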

  14. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test P LOG P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  15. Influence of Maximum Inbreeding Avoidance under BLUP EBV Selection on Pinzgau Population Diversity

    Directory of Open Access Journals (Sweden)

    Radovan Kasarda

    2011-05-01

    Full Text Available We evaluated the effect of mating strategy (random mating vs. maximum avoidance of inbreeding) under a BLUP EBV selection strategy. The existing population structure was analyzed by Monte Carlo stochastic simulation with the aim of minimizing the increase of inbreeding. Maximum avoidance of inbreeding under BLUP selection resulted in an increase of inbreeding comparable to random mating over an average 10-generation development. After 10 generations of simulated mating, ΔF = 6.51% (2 sires), 5.20% (3 sires), 3.22% (4 sires) and 2.94% (5 sires) was observed. With an increased number of selected sires, a decrease of inbreeding was observed. With the use of 4 or 5 sires, the increase of inbreeding was comparable to random mating with phenotypic selection. To preserve genetic diversity and prevent population loss it is important to minimize the increase of inbreeding in small populations. The classical approach was based on balancing the ratio of sires and dams in the mating program. In contrast, in most commercial populations a small number of sires is used with a high mating ratio.
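
The dependence on sire number can be cross-checked with the standard idealized-population formula (not the paper's gene-drop simulation): with Nm sires and Nf dams, the per-generation rate of inbreeding is approximately ΔF ≈ 1/(8·Nm) + 1/(8·Nf). The dam count below is an assumption for illustration.

```python
# Idealized per-generation inbreeding rate and its accumulation over
# generations: F_{t+1} = F_t + (1 - F_t) * dF. Nf = 100 dams is assumed.

def delta_f_per_generation(n_sires, n_dams):
    return 1.0 / (8 * n_sires) + 1.0 / (8 * n_dams)

def inbreeding_after(generations, n_sires, n_dams):
    f = 0.0
    for _ in range(generations):
        f = f + (1.0 - f) * delta_f_per_generation(n_sires, n_dams)
    return f

# the few sires dominate the rate: 2 -> 5 sires cuts dF by more than half
dF2 = delta_f_per_generation(2, 100)    # 0.06375 per generation
dF5 = delta_f_per_generation(5, 100)    # 0.02625 per generation
F10 = inbreeding_after(10, 5, 100)      # cumulative F after 10 generations
```

The formula reproduces the qualitative pattern in the abstract: the increase of inbreeding falls steeply as sires are added, while the (much larger) dam count barely matters.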

  16. A maximum power point tracking algorithm for photovoltaic applications

    Science.gov (United States)

    Nelatury, Sudarshan R.; Gray, Robert

    2013-05-01

    The voltage-current characteristic of a photovoltaic (PV) cell is highly nonlinear, and operating a PV cell for maximum power transfer has been a challenge for a long time. Several techniques have been proposed to estimate and track the maximum power point (MPP) in order to improve the overall efficiency of a PV panel. A strategic use of the mean value theorem permits obtaining an analytical expression for a point that lies in a close neighborhood of the true MPP. But hitherto, an exact solution in closed form for the MPP has not been published. The problem can be formulated analytically as a constrained optimization, which can be solved using the Lagrange method. This method results in a system of simultaneous nonlinear equations, which are quite difficult to solve directly. However, we can employ a recursive algorithm to yield a reasonably good solution. In graphical terms, if the voltage-current characteristic and the constant-power contours are plotted on the same voltage-current plane, the point of tangency between the device characteristic and the constant-power contours is the sought-for MPP. It is subject to change with the incident irradiation and temperature, and hence the algorithm that attempts to maintain the MPP should be adaptive in nature, with fast convergence and the least misadjustment. There are two parts to its implementation. First, one needs to estimate the MPP. The second task is to have a DC-DC converter to match the given load to the MPP thus obtained. The availability of power electronics circuits has made it possible to design efficient converters. In this paper, although we do not show results from a real circuit, we use MATLAB to obtain the MPP and a buck-boost converter to match the load. Under varying conditions of load resistance and irradiance we demonstrate MPP tracking in the case of a commercially available solar panel, the MSX-60. The power electronics circuit is simulated using PSIM software.
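
The tangency condition described above is dP/dV = I(V) + V·dI/dV = 0. For an idealized single-diode cell it can be solved numerically; the sketch below uses bisection on dP/dV with assumed cell parameters, standing in for the paper's recursive Lagrange-based algorithm.

```python
import math

# Idealized single-diode PV model I(V) = Isc - I0*(exp(V/Vt) - 1); the MPP is
# the root of g(V) = I(V) + V*dI/dV. All three parameters are assumed values.

ISC, I0, VT = 3.8, 1e-9, 0.6        # short-circuit current, saturation current,
                                    # and thermal-voltage scale (module-level)

def current(v):
    return ISC - I0 * (math.exp(v / VT) - 1.0)

def dpdv(v):
    di = -(I0 / VT) * math.exp(v / VT)   # dI/dV
    return current(v) + v * di           # d(VI)/dV

def find_mpp(lo=0.0, hi=None, iters=80):
    if hi is None:                       # open-circuit voltage bounds the root
        hi = VT * math.log(ISC / I0 + 1.0)
    for _ in range(iters):               # g is decreasing: plain bisection works
        mid = 0.5 * (lo + hi)
        if dpdv(mid) > 0.0:              # power still rising with voltage
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

v_mpp = find_mpp()
p_mpp = v_mpp * current(v_mpp)
```

In a tracker the root-finder would be re-run (or warm-started) as irradiance and temperature shift the curve, which is the adaptivity requirement stated in the abstract.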

  17. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler- α equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  18. 7 CFR 3565.210 - Maximum interest rate.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Maximum interest rate. 3565.210 Section 3565.210... AGRICULTURE GUARANTEED RURAL RENTAL HOUSING PROGRAM Loan Requirements § 3565.210 Maximum interest rate. The interest rate for a guaranteed loan must not exceed the maximum allowable rate specified by the Agency in...

  19. TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R; Amy DuPont, A; Robert Kurzeja, R; Matt Parker, M

    2007-11-12

    Mixing depth is an important quantity in the determination of air pollution concentrations. Fire-weather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth based on potential temperature and turbulence change with height at a given location. This paper examines trends in the average daily maximum of the estimated mixing depth at the SRS over an extended period of time (4.75 years), derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows differences between the model versions to be seen, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days on which special balloon soundings were released are also discussed.
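
One common potential-temperature-based mixing-depth estimate of the kind mentioned above is the lowest height at which potential temperature first exceeds its surface value by a small threshold, marking the base of the capping stable layer. The sounding below is synthetic, not SRNL's RAMS output, and the 0.5 K threshold is an assumed choice.

```python
# Simple potential-temperature mixing-depth estimate: scan upward for the
# first level where theta exceeds the surface value by the threshold.

def mixing_depth(heights_m, theta_k, threshold_k=0.5):
    theta_sfc = theta_k[0]
    for z, th in zip(heights_m, theta_k):
        if th > theta_sfc + threshold_k:
            return z
    return heights_m[-1]            # well-mixed through the whole profile

# synthetic afternoon sounding: nearly constant theta below the capping layer
heights = [10, 200, 500, 900, 1300, 1700, 2100]      # m AGL
theta   = [300.0, 300.1, 300.2, 300.3, 300.9, 302.5, 304.0]  # K
zi = mixing_depth(heights, theta)   # first level above 300.5 K
```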

  20. Localized persistent spin currents in defect-free quasiperiodic rings with Aharonov–Casher effect

    International Nuclear Information System (INIS)

    Qiu, R.Z.; Chen, C.H.; Cheng, Y.H.; Hsueh, W.J.

    2015-01-01

    We propose strongly localized persistent spin currents in one-dimensional defect-free quasiperiodic Thue–Morse rings with the Aharonov–Casher effect. The results show that the characteristics of these localized persistent currents depend not only on the radius filling factor, but also on the strength of the spin–orbit interaction. The maximum persistent spin currents always appear in the ring near the middle position of the system array, whether or not the Thue–Morse ring array is symmetrical. The magnitude of the persistent currents is proportional to the sharpness of the resonance peak, which depends on the bandwidth of the allowed band in the band structure. The maximum persistent spin currents also increase exponentially as the generation order of the system increases. - Highlights: • Strongly localized persistent spin currents in quasiperiodic AC rings are proposed. • The localized persistent spin currents are much larger than those produced by traditional mesoscopic rings. • The characteristics of the localized persistent currents depend on the radius filling factor and SOI strength. • The maximum persistent current increases exponentially with the system order. • The magnitude of the persistent currents is related to the sharpness of the resonance

  1. Grid-tied photovoltaic and battery storage systems with Malaysian electricity tariff: A review on maximum demand shaving

    OpenAIRE

    Subramani, Gopinath; Ramachandaramurthy, Vigna K.; Padmanaban, Sanjeevikumar; Mihet-Popa, Lucian; Blaabjerg, Frede; Guerrero, Josep M.

    2017-01-01

    Under the current energy sector framework of electricity tariff in Malaysia, commercial and industrial customers are required to pay the maximum demand (MD) charge apart from the net consumption charges every month. The maximum demand charge will contribute up to 20% of the electricity bill, and will hence result in commercial and industrial customers focussing on alternative energy supply to minimize the billing cost. This paper aims to review the technical assessment methods of a grid-conne...
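
The billing mechanics behind maximum-demand shaving reduce to simple arithmetic: the MD charge is the month's peak demand times an RM/kW rate, so a battery that shaves the peak cuts the bill linearly in the shaved kilowatts. The tariff figures below are assumed for illustration and are not actual TNB rates.

```python
# Illustrative maximum-demand billing: peak kW times an MD rate (RM/kW/month)
# plus energy consumption times an energy rate. All rates here are assumptions.

def monthly_bill(peak_kw, energy_kwh, md_rate_rm=30.0, energy_rate_rm=0.35):
    return peak_kw * md_rate_rm + energy_kwh * energy_rate_rm

base   = monthly_bill(peak_kw=500, energy_kwh=120_000)
shaved = monthly_bill(peak_kw=440, energy_kwh=120_000)  # battery shaves 60 kW
saving = base - shaved          # 60 kW shaved x RM30/kW = RM1800 per month
```

Note the energy term cancels in the saving: MD shaving pays through the peak alone, which is why the battery only needs to discharge during the short demand spikes.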

  2. Theoretical Evaluation of the Maximum Work of Free-Piston Engine Generators

    Science.gov (United States)

    Kojima, Shinji

    2017-01-01

    Utilizing the adjoint equations that originate from the calculus of variations, we have calculated the maximum thermal efficiency that is theoretically attainable by free-piston engine generators considering the work loss due to friction and Joule heat. Based on the adjoint equations with seven dimensionless parameters, the trajectory of the piston, the histories of the electric current, the work done, and the two kinds of losses have been derived in analytic forms. Using these we have conducted parametric studies for the optimized Otto and Brayton cycles. The smallness of the pressure ratio of the Brayton cycle makes the net work done negative even when the duration of heat addition is optimized to give the maximum amount of heat addition. For the Otto cycle, the net work done is positive, and both types of losses relative to the gross work done become smaller with the larger compression ratio. Another remarkable feature of the optimized Brayton cycle is that the piston trajectory of the heat addition/disposal process is expressed by the same equation as that of an adiabatic process. The maximum thermal efficiency of any combination of isochoric and isobaric heat addition/disposal processes, such as the Sabathe cycle, may be deduced by applying the methods described here.
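
A quick companion calculation to the compression-ratio observation above: the textbook loss-free Otto efficiency (not the paper's adjoint-equation optimum, which includes friction and Joule losses) is eta = 1 - r^(1-gamma), which grows with compression ratio r.

```python
# Ideal (loss-free) Otto cycle thermal efficiency versus compression ratio,
# with gamma = 1.4 for air.

def otto_efficiency(r, gamma=1.4):
    return 1.0 - r ** (1.0 - gamma)

effs = {r: round(otto_efficiency(r), 3) for r in (8, 12, 16)}
# roughly 0.565, 0.630, 0.670: diminishing returns at high compression
```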

  3. Observation and analysis of halo current in EAST

    Science.gov (United States)

    Chen, Da-Long; Shen, Biao; Qian, Jin-Ping; Sun, You-Wen; Liu, Guang-Jun; Shi, Tong-Hui; Zhuang, Hui-Dong; Xiao, Bing-Jia

    2014-06-01

    Plasma in a tokamak with a typically elongated cross-section (for example, EAST) is inherently unstable against vertical displacement. When the plasma loses vertical position control, it moves downward or upward, leading to disruption, and a large halo current is generated helically, in EAST typically in the scrape-off layer. When flowing into the vacuum vessel through in-vessel components, the halo current gives rise to a large J × B force acting on the vessel and the in-vessel components. In EAST VDE experiments, part of the eddy current is measured by the halo sensors, due to the large loop voltage. Preliminary experimental data demonstrate that the halo current first lands on the outer plate and then flows clockwise, and analysis indicates that the maximum halo current estimated in EAST is about 0.4 times the plasma current and the maximum value of TPF × Ih/Ip0 is 0.65; furthermore, Ih/Ip0 and TPF × Ih/Ip0 tend to increase with increasing Ip0. Tests of the strong gas injection system show good success in increasing the radiated power, which may be effective in reducing the halo current.
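
The figure of merit quoted in records like this is the toroidal peaking factor times the halo fraction, TPF × Ih/Ip0, which bounds the asymmetric vertical load on the vessel. The numbers below are example values chosen to land near the quoted 0.4 and 0.65, not measured EAST data.

```python
# Halo-current severity: halo fraction Ih/Ip0 and its product with the
# toroidal peaking factor (TPF). Input values are illustrative only.

def halo_severity(i_halo_ka, i_plasma_ka, tpf):
    frac = i_halo_ka / i_plasma_ka
    return frac, tpf * frac

frac, severity = halo_severity(i_halo_ka=160.0, i_plasma_ka=400.0, tpf=1.6)
# frac = 0.40 and severity = 0.64, close to the limits quoted in the abstract
```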

  4. A study of a two stage maximum power point tracking control of a photovoltaic system under partially shaded insolation conditions

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, Kenji; Takano, Ichiro; Sawada, Yoshio [Kogakuin University, Tokyo 163-8677 (Japan)

    2006-11-23

    A photovoltaic (PV) array shows relatively low output power density and has a greatly drooping current-voltage (I-V) characteristic. Therefore, maximum power point tracking (MPPT) control is used to maximize the output power of the PV array. Many papers have been published on MPPT. However, the current-power (I-P) curve sometimes shows a multi-local-maximum mode under non-uniform insolation conditions, and the operating point of the PV system tends to converge to a local maximum output point which is not the real maximum output point on the I-P curve. Some papers have tried to avoid this difficulty; however, most of those control systems become rather complicated. A two-stage MPPT control method is therefore proposed in this paper to realize a relatively simple control system which can track the real maximum power point even under non-uniform insolation conditions. The feasibility of this control concept is confirmed for steady insolation as well as for rapidly changing insolation by simulation studies using the software PSIM and LabVIEW. (author)

  5. High current induction linacs

    International Nuclear Information System (INIS)

    Barletta, W.; Faltens, A.; Henestroza, E.; Lee, E.

    1994-07-01

    Induction linacs are among the most powerful accelerators in existence. They have accelerated electron bunches of several kiloamperes, and are being investigated as drivers for heavy ion driven inertial confinement fusion (HIF), which requires peak beam currents of kiloamperes and average beam powers of some tens of megawatts. The requirement for waste transmutation with an 800 MeV proton or deuteron beam with an average current of 50 mA and an average power of 40 MW lies midway between the electron machines and the heavy ion machines in overall difficulty. Much of the technology and understanding of beam physics carries over from the previous machines to the new requirements. The induction linac allows use of a very large beam aperture, which may turn out to be crucial to reducing beam loss and machine activation from the beam halo. The major issues addressed here are transport of high intensity beams, availability of sources, efficiency of acceleration, and the state of the needed technology for the waste treatment application. Because of the transformer-like action of an induction core and the accompanying magnetizing current, induction linacs make the most economic sense and have the highest efficiencies with large beam currents. Based on present understanding of beam transport limits, induction core magnetizing current requirements, and pulse modulators, the efficiencies could be very high. The study of beam transport at high intensities has been the major activity of the HIF community. Beam transport and sources are limiting at low energies but are not significant constraints at the higher energies. As will be shown, the proton beams will be space-charge-dominated, for which the emittance has only a minor effect on the overall beam diameter but does determine the density falloff at the beam edge.

  6. Real time estimation of photovoltaic modules characteristics and its application to maximum power point operation

    Energy Technology Data Exchange (ETDEWEB)

    Garrigos, Ausias; Blanes, Jose M.; Carrasco, Jose A. [Area de Tecnologia Electronica, Universidad Miguel Hernandez de Elche, Avda. de la Universidad s/n, 03202 Elche, Alicante (Spain); Ejea, Juan B. [Departamento de Ingenieria Electronica, Universidad de Valencia, Avda. Dr Moliner 50, 46100 Valencia, Valencia (Spain)

    2007-05-15

    In this paper, an approximate curve fitting method for photovoltaic modules is presented. The operation is based on solving a simple solar cell electrical model with a microcontroller in real time. Only four voltage and current coordinates are needed to obtain the solar module parameters and set its operation at maximum power under any illumination and temperature conditions. Despite its simplicity, this method is suitable for low-cost real-time applications, such as the control-loop reference generator in photovoltaic maximum power point circuits. The theory that supports the estimator, together with simulations and experimental results, is presented. (author)
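
    As an illustration of the underlying idea (not the authors' four-coordinate procedure), the sketch below fixes the dark saturation current of an ideal single-diode model from the short-circuit current and open-circuit voltage, then scans the curve for the maximum power point. The modified ideality factor `a` (n·Ns·Vt, in volts) is an assumed value, and series/shunt resistance are neglected:

```python
import math

def mpp_from_isc_voc(i_sc, v_oc, a=1.2):
    """Locate the maximum power point of the ideal single-diode model
    I(V) = Isc - I0*(exp(V/a) - 1), with I0 fixed by the open-circuit
    condition I(Voc) = 0."""
    i0 = i_sc / (math.exp(v_oc / a) - 1.0)
    best_v, best_p = 0.0, 0.0
    v = 0.0
    while v < v_oc:  # coarse scan of P(V) = V * I(V)
        p = v * (i_sc - i0 * (math.exp(v / a) - 1.0))
        if p > best_p:
            best_v, best_p = v, p
        v += 0.01
    return best_v, best_p
```

    With values typical of a 60 W, 36-cell module (Isc = 3.8 A, Voc = 21.1 V), the scan lands near 17.8 V and 63 W, in the usual range for such a module.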

  7. Baseline restoration using current conveyors

    International Nuclear Information System (INIS)

    Morgado, A.M.L.S.; Simoes, J.B.; Correia, C.M.

    1996-01-01

    Good performance of high-resolution nuclear spectrometry systems at high pulse rates demands restoration of the baseline between pulses, in order to remove rate-dependent baseline shifts. This restoration is performed by circuits named baseline restorers (BLRs), which also remove low-frequency noise such as power supply hum and detector microphonics. This paper presents simple circuits for baseline restoration based on a commercial current conveyor (CCII01). Tests were performed on two circuits, with periodic trapezoidal-shaped pulses, in order to measure the baseline restoration for several pulse rates and restorer duty cycles. For the current conveyor based Robinson restorer, the peak shift was less than 10 mV for duty cycles up to 60% at high pulse rates. Duty cycles up to 80% were also tested, with a maximum peak shift of 21 mV. The peak shift for the current conveyor based Grubic restorer was also measured; the maximum value found was 30 mV at 82% duty cycle. Keeping the duty cycle below 60% greatly improves the restorer performance. The ability of both baseline restorer architectures to reject low-frequency modulation was also measured, with good results for both circuits.

  8. A Novel Busbar Protection Based on the Average Product of Fault Components

    Directory of Open Access Journals (Sweden)

    Guibin Zou

    2018-05-01

    This paper proposes an original busbar protection method based on the characteristics of the fault components. The method first extracts the fault components of the current and voltage after the occurrence of a fault, then uses a novel phase-mode transformation array to obtain the aerial-mode components, and finally obtains the sign of the average product of the aerial-mode voltage and current. For a fault on the busbar, the average products detected on all of the lines linked to the faulted busbar are positive for a specific duration after the fault. For a fault on any one of these lines, however, the average product detected on the faulted line is negative, while those on the non-faulted lines are positive. On the basis of this characteristic difference, an identification criterion for the fault direction is established. By comparing the fault directions on all of the lines, the busbar protection can quickly discriminate between an internal fault and an external fault. Using the PSCAD/EMTDC software (4.6.0.0, Manitoba HVDC Research Centre, Winnipeg, MB, Canada), a typical 500 kV busbar model with a one-and-a-half circuit breaker configuration was constructed. The simulation results show that the proposed busbar protection has good adaptability, high reliability, and rapid operation speed.
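
    A minimal sketch of the directional criterion, with short synthetic sample arrays standing in for real fault-component records (the phase-mode transformation and timing logic of the paper are omitted):

```python
def fault_direction_forward(du, di):
    """Sign test on one line: the average product of fault-component
    voltage and current samples is positive when the fault lies behind
    the relay (busbar side), negative when it lies ahead on the line."""
    avg = sum(u * i for u, i in zip(du, di)) / len(du)
    return avg > 0

def busbar_internal_fault(lines):
    """Internal (busbar) fault iff the average product is positive on
    every connected line; a negative line points at an external fault."""
    return all(fault_direction_forward(du, di) for du, di in lines)
```

    Comparing the directions seen by all lines, rather than any single magnitude, is what makes the discrimination fast and threshold-free.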

  9. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC 7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
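
    For Gaussian measurement noise, maximizing the likelihood reduces to minimizing a sum of squared residuals. The toy sketch below identifies a single decay-rate parameter by grid search over the negative log-likelihood; it is a stand-in for MXLKID's nonlinear-system machinery, with the model x(t) = exp(-a·t) and noise level assumed:

```python
import math

def neg_log_likelihood(a, times, data, sigma=0.1):
    """Negative log-likelihood of data under x(t) = exp(-a t) plus
    i.i.d. Gaussian noise of standard deviation sigma."""
    sse = sum((x - math.exp(-a * t)) ** 2 for t, x in zip(times, data))
    n = len(data)
    return 0.5 * sse / sigma ** 2 + n * math.log(sigma * math.sqrt(2 * math.pi))

def ml_identify(times, data, grid):
    """Pick the grid point minimizing the negative log-likelihood
    (i.e., maximizing the likelihood function)."""
    return min(grid, key=lambda a: neg_log_likelihood(a, times, data))
```

    Real identifiers use gradient-based maximization rather than a grid, but the objective is the same.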

  10. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  11. Maximum power point tracking: a cost saving necessity in solar energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Enslin, J H.R. [Stellenbosch Univ. (South Africa). Dept. of Electrical and Electronic Engineering

    1992-12-01

    A well-engineered remote renewable energy system utilizing the principle of Maximum Power Point Tracking (MPPT) can improve cost effectiveness, has higher reliability, and can improve the quality of life in remote areas. A high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. For relatively small systems, maximum power point tracking is achieved by maximizing the output current of a battery charging regulator, using an optimized, inexpensive microprocessor-based hill-climbing algorithm. Practical field measurements show that a minimum input source saving of 15-25% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply (RAPS) systems. The advantages are much greater for large temperature variations and high power ratings. Other advantages include optimal sizing and system monitoring and control. (author).
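
    The hill-climbing loop (often called perturb-and-observe) can be sketched as below, maximizing a measured charging current as a function of converter duty cycle. The quadratic `i_out` used in the test is a synthetic stand-in for a real panel and regulator, and the fixed step size is an assumption:

```python
def perturb_and_observe(i_out, d=0.5, step=0.02, iters=200):
    """Hill climb on converter duty cycle d, maximizing the measured
    battery charging current i_out(d): keep perturbing in the same
    direction while the current improves, reverse when it worsens."""
    last = i_out(d)
    direction = 1
    for _ in range(iters):
        d = min(1.0, max(0.0, d + direction * step))
        now = i_out(d)
        if now < last:          # got worse: reverse the perturbation
            direction = -direction
        last = now
    return d
```

    A fixed-step climber like this settles into a small oscillation of about one step around the optimum, which is the usual trade-off against tracking speed.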

  12. Long-Term Prediction of Emergency Department Revenue and Visitor Volume Using Autoregressive Integrated Moving Average Model

    Directory of Open Access Journals (Sweden)

    Chieh-Fan Chen

    2011-01-01

    This study analyzed meteorological, clinical, and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. An autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values using the mean absolute percentage error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, and non-trauma and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity, and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The models also performed well in forecasting revenue and visitor volume.
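
    The accuracy metric named above, the mean absolute percentage error (MAPE), is simple to compute; a minimal sketch:

```python
def mape(actual, forecast):
    """Mean absolute percentage error between actual values and model
    forecasts, in percent; actual values must be nonzero."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)
```

    MAPE is scale-free, which is why it is a common choice for comparing forecast accuracy across series as different as revenue and visit counts.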

  13. Review of probable maximum flood definition at B.C. Hydro

    International Nuclear Information System (INIS)

    Keenhan, P.T.; Kroeker, M.G.; Neudorf, P.A.

    1991-01-01

    Probable maximum floods (PMF) have been derived for British Columbia Hydro structures since the design of the W.A.C. Bennett Dam in 1965. A dam safety program for estimating PMF for structures designed before that time has been ongoing since 1979. The program, which has resulted in rehabilitative measures at dams not meeting current established standards, is now being directed at the more recently constructed larger structures on the Peace and Columbia rivers. Since 1965, detailed studies have produced 23 probable maximum precipitation (PMP) and 24 PMF estimates. What defines a PMF in British Columbia, in terms of an appropriate combination of meteorological conditions, varies with basin size and the climatic effect of mountain barriers. PMP is estimated using three methods: storm maximization and transposition, the orographic separation method, and modification of non-orographic PMP for orography. Details of, and problems encountered with, these methods are discussed. Tools and methods to assess meteorological limits for antecedent conditions, and limits to runoff during extreme events, have not been developed and require research effort. 11 refs., 2 figs., 3 tabs

  14. High-current railgap studies

    Science.gov (United States)

    Druce, R.; Gordon, L.; Hofer, W.; Wilson, M.

    1983-06-01

    Characteristics of a 40-kV, 750-kA, multichannel rail gap are presented. The gap is a three-electrode, field-distortion-triggered design with a total switch inductance of less than 10 nH. At maximum ratings, the gap typically switches 10 C per shot at 700 kA, with a jitter of less than 2 ns. Channel evolution and current division were studied using image converter streak photographs. Transient gas pressure measurements were made to investigate the arc-generated shocks and to detect single-channel failure. Channel current sharing and simultaneity are described, and their effects on the switch inductance are discussed, along with erosion measurements.

  15. Three dimensional winds: A maximum cross-correlation application to elastic lidar data

    Energy Technology Data Exchange (ETDEWEB)

    Buttler, William Tillman [Univ. of Texas, Austin, TX (United States)

    1996-05-01

    Maximum cross-correlation techniques have been used with satellite data to estimate winds and sea surface velocities for several years. Los Alamos National Laboratory (LANL) is currently using a variation of the basic maximum cross-correlation technique, coupled with a deterministic application of a vector median filter, to measure transverse winds as a function of range and altitude from incoherent elastic backscatter lidar (light detection and ranging) data taken throughout large volumes within the atmospheric boundary layer. Hourly representations of three-dimensional wind fields, derived from elastic lidar data taken during an air-quality study performed in a region of complex terrain near Sunland Park, New Mexico, are presented and compared with results from an Environmental Protection Agency (EPA) approved laser doppler velocimeter. The wind fields showed persistent large scale eddies as well as general terrain-following winds in the Rio Grande valley.
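
    The core of the maximum cross-correlation technique, shown here in one dimension for brevity: find the lag that maximizes the correlation between two successive backscatter scans, then convert that lag to a velocity using the known pixel size and scan interval. A hedged sketch with synthetic scans (the real method works on 2D image chips and applies a vector median filter afterwards):

```python
def best_shift(ref, img, max_shift):
    """Lag (in samples) maximizing the raw cross-correlation between a
    reference scan and a later scan; the transverse wind estimate is
    then best_shift * pixel_size / time_between_scans."""
    def corr(shift):
        pairs = [(ref[i], img[i + shift]) for i in range(len(ref))
                 if 0 <= i + shift < len(img)]
        return sum(a * b for a, b in pairs)
    return max(range(-max_shift, max_shift + 1), key=corr)
```

    In practice the correlation is normalized and the search is bounded by the maximum plausible wind speed, exactly as the `max_shift` argument suggests.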

  16. Neural Modeling of Fuzzy Controllers for Maximum Power Point Tracking in Photovoltaic Energy Systems

    Science.gov (United States)

    Lopez-Guede, Jose Manuel; Ramos-Hernanz, Josean; Altın, Necmi; Ozdemir, Saban; Kurt, Erol; Azkune, Gorka

    2018-06-01

    One field in which electronic materials play an important role is energy generation, especially photovoltaic energy. This paper deals with one of the most relevant enabling technologies within that scope, i.e., the maximum power point tracking algorithms implemented in DC-DC converters, and their modeling through artificial neural networks (ANNs). More specifically, as a proof of concept, we have addressed the problem of modeling a fuzzy logic controller that has demonstrated its performance in previous works, specifically the dimensionless duty cycle signal that controls a quadratic boost converter. We achieved a very accurate model: the mean squared error is 3.47 × 10^-6, the maximum error is 16.32 × 10^-3, and the regression coefficient R is 0.99992, all on the test dataset. This neural implementation has obvious advantages, such as higher fault tolerance and a simpler implementation, dispensing with the complex elements needed to run a fuzzy controller (fuzzifier, defuzzifier, inference engine, and knowledge base) because, ultimately, ANNs are sums and products.

  17. An averaging battery model for a lead-acid battery operating in an electric car

    Science.gov (United States)

    Bozek, J. M.

    1979-01-01

    A battery model is developed based on time averaging the current or power, and is shown to be an effective means of predicting the performance of a lead acid battery. The effectiveness of this battery model was tested on battery discharge profiles expected during the operation of an electric vehicle following the various SAE J227a driving schedules. The averaging model predicts the performance of a battery that is periodically charged (regenerated) if the regeneration energy is assumed to be converted to retrievable electrochemical energy on a one-to-one basis.
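
    A minimal sketch of the averaging idea: time-average the current over a driving cycle, crediting regenerated (negative) current one-to-one as the abstract assumes, to obtain the net ampere-hours removed from the battery. The cycle data in the test are illustrative, not actual SAE J227a values:

```python
def cycle_average(currents_a, dt_h):
    """Time-averaged current (A) and net charge removed (Ah) over a
    driving cycle sampled at a fixed interval dt_h (hours); negative
    samples are regeneration, credited one-to-one per the model."""
    n = len(currents_a)
    avg = sum(currents_a) / n
    net_ah = avg * n * dt_h
    return avg, net_ah
```

    Feeding the averaged current into a discharge-capacity curve, instead of tracking every transient, is what makes this kind of model cheap enough for vehicle-level studies.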

  18. Maximum parsimony, substitution model, and probability phylogenetic trees.

    Science.gov (United States)

    Weng, J F; Thomas, D A; Mareels, I

    2011-01-01

    The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM), and Maximum Likelihood (ML), of which the MP method is the most well-studied and popular. In the MP method the optimization criterion is the number of substitutions of nucleotides, computed from the differences between the investigated nucleotide sequences. However, the MP method is often criticized because it counts only the substitutions observable at the current time, omitting all the unobservable substitutions that actually occurred in the evolutionary history. To take the unobservable substitutions into account, substitution models have been established and are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees; the trees reconstructed in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.
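
    The substitution count at the heart of the classical MP method is computed per site on a candidate tree, classically with the Fitch algorithm; a minimal sketch for binary trees given as nested tuples (this illustrates the standard algorithm, not the authors' probability model):

```python
def fitch_score(tree):
    """Minimum substitution count for one site by the Fitch algorithm:
    at each internal node take the intersection of the children's state
    sets, or their union at the cost of one substitution if disjoint."""
    def walk(node):
        if isinstance(node, str):            # leaf: observed nucleotide
            return {node}, 0
        (ls, lc), (rs, rc) = walk(node[0]), walk(node[1])
        inter = ls & rs
        if inter:
            return inter, lc + rc
        return ls | rs, lc + rc + 1          # disjoint: union, +1 change
    return walk(tree)[1]
```

    Summing this score over all sites gives the parsimony length of the tree, the quantity the MP method minimizes over tree topologies.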

  19. Time structure measurement of the ATLAS RPC gap current

    CERN Document Server

    Aielli, G; The ATLAS collaboration

    2010-01-01

    The current absorbed by an RPC represents the sum of the charge delivered in the gas by the ionizing events crossing the gap, integrated by the electrodes' time constant. This is typically of the order of tens of ms, thus dominating the gas discharge time scale and characterizing the granular structure observed in the current signal. In most cases this structure is considered noise, to be further integrated to observe the average gap current, which is often used as a detector monitoring parameter or to precisely measure uncorrelated background rate effects. A remarkable case occurs when a large number of particles passes through the detector within one integration time constant, producing a current peak clearly detectable above the average noise. The ATLAS RPC system is equipped with dedicated current monitoring based on an ADC capable of reading out the average value as well as transient peaks of the currents above a given threshold. A study of such data was used to spot the gap HV noise, to monitor the...

  20. Current conserving theory at the operator level

    Science.gov (United States)

    Yuan, Jiangtao; Wang, Yin; Wang, Jian

    The basic assumption of quantum transport in mesoscopic systems is that the total charge inside the scattering region is zero. This means that the potential deep inside the reservoirs is effectively screened, and therefore the electric field at the interface of the scattering region is zero. The current conservation condition, an important condition in mesoscopic transport, is then satisfied automatically. So far, a current-conserving ac theory has been well developed by considering the displacement current due to the Coulomb interaction, as long as one focuses only on the average current. However, the frequency-dependent shot noise does not satisfy the conservation condition, because current conservation has not been imposed at the operator level. In this work, we formulate a generalized current-conserving theory at the operator level using non-equilibrium Green's function theory, which can be applied to both the average current and the frequency-dependent shot noise. A displacement-current operator is derived for the first time, so that the frequency-dependent correlation of displacement currents can be investigated. Moreover, the equilibrium shot noise is investigated and a generalized fluctuation-dissipation relationship is presented.

  1. Prediction of maximum earthquake intensities for the San Francisco Bay region

    Science.gov (United States)

    Borcherdt, Roger D.; Gibbs, James F.

    1975-01-01

    The intensity data for the California earthquake of April 18, 1906, are strongly dependent on distance from the zone of surface faulting and the geological character of the ground. Considering only those sites (approximately one square city block in size) for which there is good evidence for the degree of ascribed intensity, the empirical relation derived between 1906 intensities and distance perpendicular to the fault for 917 sites underlain by rocks of the Franciscan Formation is: Intensity = 2.69 - 1.90 log (Distance) (km). For sites on other geologic units, intensity increments, derived with respect to this empirical relation, correlate strongly with the Average Horizontal Spectral Amplifications (AHSA) determined from 99 three-component recordings of ground motion generated by nuclear explosions in Nevada. The resulting empirical relation is: Intensity Increment = 0.27 + 2.70 log (AHSA), and average intensity increments for the various geologic units are -0.29 for granite, 0.19 for Franciscan Formation, 0.64 for the Great Valley Sequence, 0.82 for Santa Clara Formation, 1.34 for alluvium, 2.43 for bay mud. The maximum intensity map predicted from these empirical relations delineates areas in the San Francisco Bay region of potentially high intensity from future earthquakes on either the San Andreas fault or the Hayward fault.

  2. Prediction of maximum earthquake intensities for the San Francisco Bay region

    Energy Technology Data Exchange (ETDEWEB)

    Borcherdt, R.D.; Gibbs, J.F.

    1975-01-01

    The intensity data for the California earthquake of Apr 18, 1906, are strongly dependent on distance from the zone of surface faulting and the geological character of the ground. Considering only those sites (approximately one square city block in size) for which there is good evidence for the degree of ascribed intensity, the empirical relation derived between 1906 intensities and distance perpendicular to the fault for 917 sites underlain by rocks of the Franciscan formation is intensity = 2.69 - 1.90 log (distance) (km). For sites on other geologic units, intensity increments, derived with respect to this empirical relation, correlate strongly with the average horizontal spectral amplifications (AHSA) determined from 99 three-component recordings of ground motion generated by nuclear explosions in Nevada. The resulting empirical relation is intensity increment = 0.27 + 2.70 log (AHSA), and average intensity increments for the various geologic units are -0.29 for granite, 0.19 for Franciscan formation, 0.64 for the Great Valley sequence, 0.82 for Santa Clara formation, 1.34 for alluvium, and 2.43 for bay mud. The maximum intensity map predicted from these empirical relations delineates areas in the San Francisco Bay region of potentially high intensity from future earthquakes on either the San Andreas fault or the Hayward fault.
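
    The two empirical relations above can be evaluated directly (assuming, as is conventional in such regressions, base-10 logarithms):

```python
import math

def predicted_intensity(distance_km, ahsa=None):
    """1906-style intensity at a Franciscan-formation site a given
    distance from the fault; for other geologic units, add the
    increment implied by the average horizontal spectral amplification."""
    intensity = 2.69 - 1.90 * math.log10(distance_km)
    if ahsa is not None:
        intensity += 0.27 + 2.70 * math.log10(ahsa)
    return intensity
```

    For example, a site with AHSA = 1 picks up only the constant 0.27 increment, while intensity falls off with the logarithm of fault distance, matching the tabulated unit increments in sign and scale.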

  3. Universal bounds on current fluctuations.

    Science.gov (United States)

    Pietzonka, Patrick; Barato, Andre C; Seifert, Udo

    2016-05-01

    For current fluctuations in nonequilibrium steady states of Markovian processes, we derive four different universal bounds valid beyond the Gaussian regime. Different variants of these bounds apply to either the entropy change or any individual current, e.g., the rate of substrate consumption in a chemical reaction or the electron current in an electronic device. The bounds vary with respect to their degree of universality and tightness. A universal parabolic bound on the generating function of an arbitrary current depends solely on the average entropy production. A second, stronger bound requires knowledge both of the thermodynamic forces that drive the system and of the topology of the network of states. These two bounds are conjectures based on extensive numerics. An exponential bound that depends only on the average entropy production and the average number of transitions per time is rigorously proved. This bound has no obvious relation to the parabolic bound but it is typically tighter further away from equilibrium. An asymptotic bound that depends on the specific transition rates and becomes tight for large fluctuations is also derived. This bound allows for the prediction of the asymptotic growth of the generating function. Even though our results are restricted to networks with a finite number of states, we show that the parabolic bound is also valid for three paradigmatic examples of driven diffusive systems for which the generating function can be calculated using the additivity principle. Our bounds provide a general class of constraints for nonequilibrium systems.

  4. 78 FR 9845 - Minimum and Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for a Violation of...

    Science.gov (United States)

    2013-02-12

    ... maximum penalty amount of $75,000 for each violation, except that if the violation results in death... the maximum civil penalty for a violation is $175,000 if the violation results in death, serious... Penalties for a Violation of the Hazardous Materials Transportation Laws or Regulations, Orders, Special...

  5. Current density tensors

    Science.gov (United States)

    Lazzeretti, Paolo

    2018-04-01

    It is shown that nonsymmetric second-rank current density tensors, related to the current densities induced by magnetic fields and nuclear magnetic dipole moments, are fundamental properties of a molecule. Together with magnetizability, nuclear magnetic shielding, and nuclear spin-spin coupling, they completely characterize its response to magnetic perturbations. Gauge invariance, resolution into isotropic, deviatoric, and antisymmetric parts, and contributions of current density tensors to magnetic properties are discussed. The components of the second-rank tensor properties are rationalized via relationships explicitly connecting them to the direction of the induced current density vectors and to the components of the current density tensors. The contribution of the deviatoric part to the average value of magnetizability, nuclear shielding, and nuclear spin-spin coupling, uniquely determined by the antisymmetric part of current density tensors, vanishes identically. The physical meaning of isotropic and anisotropic invariants of current density tensors has been investigated, and the connection between anisotropy magnitude and electron delocalization has been discussed.

  6. Coarse-mesh discretized low-order quasi-diffusion equations for subregion averaged scalar fluxes

    International Nuclear Information System (INIS)

    Anistratov, D. Y.

    2004-01-01

    In this paper we develop a homogenization procedure and discretization for the low-order quasi-diffusion equations on coarse grids for core-level reactor calculations. The system of discretized equations of the proposed method is formulated in terms of the subregion-averaged group scalar fluxes. The coarse-mesh solution is consistent with a given fine-mesh discretization of the transport equation in the sense that it preserves a set of average values of the fine-mesh transport scalar flux over subregions of coarse-mesh cells, as well as the surface currents and the eigenvalue. The developed method generates a numerical solution that mimics the large-scale behavior of the transport solution within assemblies. (authors)

  7. Climate-simulated raceway pond culturing: quantifying the maximum achievable annual biomass productivity of Chlorella sorokiniana in the contiguous USA

    Energy Technology Data Exchange (ETDEWEB)

    Huesemann, M.; Chavis, A.; Edmundson, S.; Rye, D.; Hobbs, S.; Sun, N.; Wigmosta, M.

    2017-09-13

    Chlorella sorokiniana (DOE 1412) emerged as one of the most promising microalgae strains from the NAABB consortium project, with a remarkable doubling time of 2.57 h under optimal conditions. However, its maximum achievable annual biomass productivity in outdoor ponds in the contiguous United States remained unknown. To address this knowledge gap, this alga was cultured in indoor LED-lighted and temperature-controlled raceways in nutrient-replete freshwater (BG-11) medium at pH 7, under conditions simulating the daily sunlight intensity and water temperature fluctuations during three seasons in Southern Florida, an optimal outdoor pond culture location for this organism identified by biomass growth modeling. Prior strain characterization indicated that the average maximum specific growth rate (µmax) at 36 °C declined continuously with pH, with µmax corresponding to 5.92, 5.83, 4.89, and 4.21 day-1 at pH 6, 7, 8, and 9, respectively. In addition, the maximum specific growth rate declined nearly linearly with increasing salinity until no growth was observed above 35 g/L NaCl. In the climate-simulated culturing studies, the volumetric ash-free dry weight-based biomass productivities during the linear growth phase were 57, 69, and 97 mg/L-day for 30-year average light and temperature simulations for January (winter), March (spring), and July (summer), respectively, which corresponds to average areal productivities of 11.6, 14.1, and 19.9 g/m2-day at a constant pond depth of 20.5 cm. The photosynthetic efficiencies (PAR) in the three climate-simulated pond culturing experiments ranged from 4.1 to 5.1%. The annual biomass productivity was estimated as ca. 15 g/m2-day, nearly double the U.S. Department of Energy (DOE) 2015 State of Technology annual cultivation productivity of 8.5 g/m2-day, but still significantly below the projected 2022 target of ca. 25 g/m2-day (U.S. DOE, 2016) for economic microalgal biofuel production, indicating the need for
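
    The volumetric-to-areal conversion used above is plain unit arithmetic: 1 mg/L equals 1 g/m3, so multiplying by the pond depth in metres gives g/m2-day. The computed values match the reported 11.6, 14.1, and 19.9 g/m2-day to within about 0.1 (rounding in the source):

```python
def areal_productivity(volumetric_mg_per_l_day, depth_m):
    """Convert volumetric productivity (mg/L-day) to areal productivity
    (g/m2-day): 1 mg/L = 1 g/m3, times pond depth in metres."""
    return volumetric_mg_per_l_day * depth_m

# reported linear-phase pond results at the 20.5 cm operating depth
winter, spring, summer = (areal_productivity(v, 0.205) for v in (57, 69, 97))
```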

  8. Performance analysis and comparison of an Atkinson cycle coupled to variable temperature heat reservoirs under maximum power and maximum power density conditions

    International Nuclear Information System (INIS)

    Wang, P.-Y.; Hou, S.-S.

    2005-01-01

    In this paper, performance analysis and comparison based on maximum power and maximum power density conditions have been conducted for an Atkinson cycle coupled to variable-temperature heat reservoirs. The Atkinson cycle is internally reversible but externally irreversible, since there is external irreversibility of heat transfer during the processes of constant-volume heat addition and constant-pressure heat rejection. This study is based purely on classical thermodynamic analysis methodology, and all results and conclusions follow from it. The power density, defined as the ratio of power output to maximum specific volume in the cycle, is taken as the optimization objective because it accounts for engine size as related to investment cost. The results show that an engine design based on maximum power density, with constant effectiveness of the hot- and cold-side heat exchangers or a constant inlet temperature ratio of the heat reservoirs, will have a smaller size but higher efficiency, compression ratio, expansion ratio, and maximum temperature than one based on maximum power. From the viewpoints of engine size and thermal efficiency, an engine design based on maximum power density is better than one based on maximum power. However, due to the higher compression ratio and maximum temperature in the cycle, an engine design based on maximum power density requires tougher materials for engine construction than one based on maximum power.

  9. Performance of penalized maximum likelihood in estimation of genetic covariance matrices

    Directory of Open Access Journals (Sweden)

    Meyer Karin

    2011-11-01

    Background: Estimation of genetic covariance matrices for multivariate problems comprising more than a few traits is inherently problematic, since sampling variation increases dramatically with the number of traits. This paper investigates the efficacy of regularized estimation of covariance components in a maximum likelihood framework, imposing a penalty on the likelihood designed to reduce sampling variation. In particular, penalties that "borrow strength" from the phenotypic covariance matrix are considered. Methods: An extensive simulation study was carried out to investigate the reduction in average 'loss', i.e. the deviation of estimated matrices from the population values, and the accompanying bias for a range of parameter values and sample sizes. A number of penalties are examined, penalizing either the canonical eigenvalues or the genetic covariance or correlation matrices. In addition, several strategies to determine the amount of penalization to be applied, i.e. to estimate the appropriate tuning factor, are explored. Results: It is shown that substantial reductions in loss for estimates of genetic covariance can be achieved for small to moderate sample sizes. While no penalty performed best overall, penalizing the variance among the estimated canonical eigenvalues on the logarithmic scale or shrinking the genetic towards the phenotypic correlation matrix appeared most advantageous. Estimating the tuning factor by cross-validation resulted in a loss reduction 10 to 15% less than that obtained if population values were known. Applying a mild penalty, chosen so that the deviation in likelihood from the maximum was non-significant, performed as well as, if not better than, cross-validation and can be recommended as a pragmatic strategy. Conclusions: Penalized maximum likelihood estimation provides the means to 'make the most' of limited and precious data and facilitates more stable estimation for multi-dimensional analyses.
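    The matrix-shrinkage penalty described in this record can be illustrated with a minimal sketch (not the paper's REML implementation; the matrices and tuning factor below are hypothetical): shrinking a noisy genetic correlation matrix toward the well-estimated phenotypic one reduces the deviation from the population values.

```python
import numpy as np

def shrink_correlation(R_g, R_p, psi):
    """Shrink a noisy genetic correlation matrix R_g toward the phenotypic
    correlation matrix R_p; psi in [0, 1] is the tuning factor."""
    R = (1.0 - psi) * R_g + psi * R_p
    np.fill_diagonal(R, 1.0)  # correlations keep a unit diagonal
    return R

# Hypothetical values: genetic correlation over-estimated from few records
R_pop = np.array([[1.0, 0.5], [0.5, 1.0]])  # "population" genetic correlations
R_g   = np.array([[1.0, 0.9], [0.9, 1.0]])  # noisy sample estimate
R_p   = np.array([[1.0, 0.4], [0.4, 1.0]])  # well-estimated phenotypic matrix

R_hat = shrink_correlation(R_g, R_p, psi=0.5)
loss_raw = np.linalg.norm(R_g - R_pop)    # loss without penalty
loss_pen = np.linalg.norm(R_hat - R_pop)  # loss after shrinkage
```

    In the paper the tuning factor is chosen by cross-validation or by bounding the likelihood deviation; here psi is simply fixed for illustration.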

  10. Dose calculation with respiration-averaged CT processed from cine CT without a respiratory surrogate

    International Nuclear Information System (INIS)

    Riegel, Adam C.; Ahmad, Moiz; Sun Xiaojun; Pan Tinsu

    2008-01-01

    The average maximum and mean γ indices were very low (well below 1), indicating good agreement between dose distributions. Increasing the cine duration generally increased the dose agreement. In the follow-up study, 49 of 50 patients had 100% of points within the PTV pass the γ criteria. The average maximum and mean γ indices were again well below 1, indicating good agreement. Dose calculation on RACT from cine CT is negligibly different from dose calculation on RACT from 4D-CT. Differences can be decreased further by increasing the cine duration of the cine CT scan.
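    The γ-index comparison underlying this record can be sketched as follows: a simplified 1-D global gamma calculation with an assumed 3%/3 mm criterion and synthetic dose profiles, not the authors' clinical pipeline.

```python
import numpy as np

def gamma_index(ref_dose, eval_dose, x_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """Simplified 1-D global gamma: for each reference point, the minimum over
    evaluated points of sqrt((dx/dist_tol)^2 + (dD/dose_tol)^2), with the dose
    difference normalised to the reference maximum. gamma <= 1 means 'pass'."""
    ref = np.asarray(ref_dose, dtype=float)
    ev = np.asarray(eval_dose, dtype=float)
    x = np.asarray(x_mm, dtype=float)
    d_norm = dose_tol * ref.max()
    gam = np.empty_like(ref)
    for i in range(ref.size):
        dd = (ev - ref[i]) / d_norm
        dx = (x - x[i]) / dist_tol_mm
        gam[i] = np.sqrt(dx ** 2 + dd ** 2).min()
    return gam

x = np.linspace(0.0, 50.0, 51)            # positions (mm)
ref = np.exp(-((x - 25.0) / 10.0) ** 2)   # synthetic reference dose profile
ev = 1.01 * ref                           # evaluated profile, 1% high everywhere
g = gamma_index(ref, ev, x)
```

    With a 1% uniform dose offset, every point passes comfortably (maximum γ well under 1), mirroring the "well below 1" indices reported above.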

  11. Maximum entropy estimation of a benzene-contaminated plume using ecotoxicological assays

    International Nuclear Information System (INIS)

    Wahyudi, Agung; Bartzke, Mariana; Küster, Eberhard; Bogaert, Patrick

    2013-01-01

    Ecotoxicological bioassays, e.g. those based on Danio rerio teratogenicity (DarT) or on acute luminescence inhibition with Vibrio fischeri, could potentially bring significant benefits for detecting on-site contaminations on a qualitative or semi-quantitative basis. The aim was to use the observed effects of two ecotoxicological assays to estimate the extent of a benzene groundwater contamination plume. We used a Maximum Entropy (MaxEnt) method to rebuild a bivariate probability table that links the observed toxicity from the bioassays with benzene concentrations. Compared with direct mapping of the contamination plume as obtained from groundwater samples, the MaxEnt concentration map exhibits on average slightly higher concentrations, though the global pattern is close to it. This suggests that MaxEnt is a valuable method for building a relationship between quantitative data, e.g. contaminant concentrations, and more qualitative or indirect measurements in a spatial mapping framework, which is especially useful when a clear quantitative relation is not at hand. - Highlights: ► Ecotoxicological assays show significant benefits for detecting on-site contaminations. ► MaxEnt rebuilds the qualitative link between concentrations and ecotoxicological assays. ► The MaxEnt map shows a pattern similar to the concentration map from groundwater samples. ► MaxEnt is a valuable method, especially when a quantitative relation is not at hand. - A Maximum Entropy method to rebuild qualitative relationships between benzene groundwater concentrations and their ecotoxicological effects.
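    A minimal illustration of the MaxEnt idea, using the standard exponential-tilting solution for a single moment constraint; the concentration bins and target mean below are hypothetical, not the paper's data.

```python
import numpy as np

def maxent_mean(values, target_mean, lo=-5.0, hi=5.0):
    """Maximum-entropy pmf over `values` subject to E[X] = target_mean.
    The solution is the exponential tilt p_i ~ exp(lam * x_i); the
    multiplier lam is found by bisection (the mean is monotone in lam)."""
    x = np.asarray(values, dtype=float)

    def tilt(lam):
        w = np.exp(lam * (x - x.mean()))  # shifted for numerical stability
        p = w / w.sum()
        return float(p @ x), p

    p = None
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        m, p = tilt(mid)
        if abs(m - target_mean) < 1e-12:
            break
        if m < target_mean:
            lo = mid
        else:
            hi = mid
    return p

bins = [0.0, 1.0, 2.0, 4.0, 8.0]  # hypothetical benzene concentration bins (mg/L)
p = maxent_mean(bins, target_mean=2.0)
```

    The paper constrains a full bivariate table linking toxicity classes to concentration bins; the same tilting construction applies per constraint, with one Lagrange multiplier each.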

  12. A Study of a Two Stage Maximum Power Point Tracking Control of a Photovoltaic System under Partially Shaded Insolation Conditions

    Science.gov (United States)

    Kobayashi, Kenji; Takano, Ichiro; Sawada, Yoshio

    A photovoltaic array shows relatively low output power density and has a greatly drooping Current-Voltage (I-V) characteristic. Therefore, Maximum Power Point Tracking (MPPT) control is used to maximize the output power of the array, and many papers on MPPT have been published. However, the Current-Power (I-P) curve sometimes shows multiple local maximum points under non-uniform insolation conditions, and the operating point of the PV system tends to converge to a local maximum output point that is not the real maximum output point on the I-P curve. Some papers have attempted to avoid this difficulty, but most of those control systems become rather complicated. A two-stage MPPT control method is therefore proposed in this paper to realize a relatively simple control system that can track the real maximum power point even under non-uniform insolation conditions. The feasibility of this control concept is confirmed for steady insolation as well as for rapidly changing insolation by simulation study using the software PSIM and LabVIEW. In addition, a simulated experiment confirms fundamental operation of the two-stage MPPT control.
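    The two-stage idea, a coarse global sweep to escape local maxima followed by conventional perturb-and-observe refinement, can be sketched as follows; the two-peak P-V curve and step sizes are hypothetical, not the authors' PSIM/LabVIEW models.

```python
import numpy as np

def pv_power(v):
    """Hypothetical two-peak P-V curve of a partially shaded array (W vs V)."""
    return 60.0 * np.exp(-((v - 12.0) / 4.0) ** 2) \
         + 100.0 * np.exp(-((v - 28.0) / 3.0) ** 2)

def two_stage_mppt(v_min=0.0, v_max=40.0, coarse=2.0, fine=0.05, iters=200):
    # Stage 1: coarse sweep of the whole operating range, so the tracker
    # cannot lock onto the local (shaded) peak near 12 V.
    grid = np.arange(v_min, v_max + coarse, coarse)
    v = float(grid[np.argmax(pv_power(grid))])
    # Stage 2: conventional perturb-and-observe refinement around it.
    step, p_prev = fine, float(pv_power(v))
    for _ in range(iters):
        v_new = v + step
        p_new = float(pv_power(v_new))
        if p_new < p_prev:
            step = -step  # power dropped: reverse the perturbation
        v, p_prev = v_new, p_new
    return v, p_prev

v_mpp, p_mpp = two_stage_mppt()
```

    A plain P&O tracker started near 12 V would converge to the 60 W local peak; the coarse sweep steers stage 2 to the 100 W global peak near 28 V.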

  13. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  14. High-current railgap studies

    Energy Technology Data Exchange (ETDEWEB)

    Druce, R.; Gordon, L.; Hofer, W.; Wilson, M.

    1983-06-03

    Characteristics of a 40-kV, 750-kA, multichannel rail gap are presented. The gap is a three-electrode, field-distortion-triggered design, with a total switch inductance of less than 10 nH. At maximum ratings, the gap typically switches 10 C per shot, at 700 kA, with a jitter of less than 2 ns. Image-converter streak photographs were used to study channel evolution and current division. Transient gas-pressure measurements were made to investigate the arc-generated shocks and to detect single-channel failure. Channel current sharing and simultaneity are described, and their effects on the switch inductance and lifetime are discussed. Lifetime tests of the rail gap were performed; degradation in channel current sharing and erosion measurements are discussed.

  15. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing in other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacing by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables

  16. Monodimensional estimation of maximum Reynolds shear stress in the downstream flow field of bileaflet valves.

    Science.gov (United States)

    Grigioni, Mauro; Daniele, Carla; D'Avenio, Giuseppe; Barbaro, Vincenzo

    2002-05-01

    Turbulent flow generated by prosthetic devices at the bloodstream level may cause mechanical stress on blood particles. Measurement of the Reynolds stress tensor and/or some of its components is a mandatory step in evaluating the mechanical load on blood components exerted by fluid stresses, as well as possible consequent blood damage (hemolysis or platelet activation). Because of the three-dimensional nature of turbulence, in general a three-component anemometer should be used to measure all components of the Reynolds stress tensor, but this is difficult, especially in vivo. The present study aimed to derive the maximum Reynolds shear stress (RSS) in three widely used, commercially available prosthetic heart valves (PHVs), starting with monodimensional data provided in vivo by echo Doppler. Accurate measurement of the PHV flow field was made using laser Doppler anemometry; this provided the principal turbulence quantities (mean velocity, root-mean-square value of velocity fluctuations, average value of the cross-product of velocity fluctuations in orthogonal directions) needed to quantify the maximum turbulence-related shear stress. The recorded data enabled determination of the Reynolds stress ratio (RSR), the ratio between the maximum RSS and the Reynolds normal stress in the main flow direction. The RSR was found to depend upon the local structure of the flow field. The reported RSR profiles, which permit a simple calculation of the maximum RSS, may prove valuable during the post-implantation phase, when an assessment of valve function is made echocardiographically. Hence, the risk of damage to blood constituents associated with bileaflet valve implantation may be accurately quantified in vivo.
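    The maximum RSS calculation rests on rotating the measured 2-D Reynolds stress tensor to its principal axes (the radius of Mohr's circle). A sketch with synthetic velocity samples standing in for LDA data:

```python
import numpy as np

RHO_BLOOD = 1060.0  # blood density (kg/m^3)

def max_reynolds_shear(u, v):
    """Maximum Reynolds shear stress (Pa) from two orthogonal velocity records:
    the radius of Mohr's circle for the 2-D Reynolds stress tensor."""
    up, vp = u - u.mean(), v - v.mean()   # velocity fluctuations
    nuu = (up * up).mean()                # <u'u'>  (normal stress / rho)
    nvv = (vp * vp).mean()                # <v'v'>
    suv = (up * vp).mean()                # <u'v'>  (shear stress / rho)
    return RHO_BLOOD * np.sqrt(((nuu - nvv) / 2.0) ** 2 + suv ** 2)

# Synthetic stand-in for LDA samples downstream of a valve (values illustrative)
rng = np.random.default_rng(1)
u = 1.0 + 0.20 * rng.standard_normal(100_000)  # axial component (m/s)
v = 0.1 + 0.10 * rng.standard_normal(100_000)  # transverse component (m/s)
tau_max = max_reynolds_shear(u, v)
```

    The paper's RSR then relates this maximum to the single normal stress (rho times the axial fluctuation variance) that echo Doppler can estimate in vivo.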

  17. Effect of temporal averaging of meteorological data on predictions of groundwater recharge

    Directory of Open Access Journals (Sweden)

    Batalha Marcia S.

    2018-06-01

    Accurate estimates of infiltration and groundwater recharge are critical for many hydrologic, agricultural and environmental applications. Anticipated climate change in many regions of the world, especially in tropical areas, is expected to increase the frequency of high-intensity, short-duration precipitation events, which in turn will affect the groundwater recharge rate. Estimates of recharge are often obtained using monthly or even annually averaged meteorological time series data. In this study we employed the HYDRUS-1D software package to assess the sensitivity of groundwater recharge calculations to using meteorological time series of different temporal resolutions (i.e., hourly, daily, weekly, monthly and yearly averaged precipitation and potential evaporation rates). Calculations were applied to three sites in Brazil having different climatological conditions: a tropical savanna (the Cerrado), a humid subtropical area (the temperate southern part of Brazil), and a very wet tropical area (Amazonia). To simplify the current analysis, we did not consider any land use effects, ignoring root water uptake. Temporal averaging of meteorological data was found to lead to significant bias in predictions of groundwater recharge, with much greater estimated recharge rates in the case of very uneven temporal rainfall distributions during the year involving distinct wet and dry seasons. For example, at the Cerrado site, using daily averaged data produced recharge rates up to 9 times greater than using yearly averaged data. In all cases, an increase in the averaging period of the meteorological data led to lower estimates of groundwater recharge, especially at sites having coarse-textured soils. Our results show that temporal averaging limits the ability of simulations to predict deep penetration of moisture in response to precipitation, so that water remains in the upper part of the vadose zone, subject to upward flow and evaporation.
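    The direction of the averaging bias can be reproduced with a toy single-bucket soil model (a deliberate simplification of HYDRUS-1D; the rainfall regime, storage capacity and ET rate below are hypothetical): concentrating the same annual rainfall into a few intense days pushes more water past the root-zone store than the temporally averaged series does.

```python
import numpy as np

def recharge(rain, capacity=20.0, et=3.0):
    """Single-bucket soil model (all depths in mm): rain fills the store,
    ET depletes it, and any excess over `capacity` drains as recharge."""
    store, total = 0.0, 0.0
    for p in rain:
        store = max(store + p - et, 0.0)
        if store > capacity:
            total += store - capacity
            store = capacity
    return total

rng = np.random.default_rng(0)
days = 365
daily = np.zeros(days)
wet_days = rng.choice(days, size=40, replace=False)
daily[wet_days] = 30.0                 # hypothetical regime: 40 intense events
uniform = np.full(days, daily.mean())  # same annual total, temporally averaged

r_daily = recharge(daily)      # forcing resolves the intense events
r_uniform = recharge(uniform)  # averaged forcing
```

    Each intense event overtops the store and drains, whereas the smoothed series mostly feeds storage and evaporative loss, so the averaged forcing yields far less recharge, matching the sign of the bias reported above.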

  18. Testing Solutions of the Protection Systems Provided with Delay Maximum Current Relays

    Directory of Open Access Journals (Sweden)

    Horia BALAN

    2017-12-01

    Relay protection is one of the main forms of automatic control of electric power systems, with the primary aims of detecting faults and disconnecting the faulty element, in order to limit the extent of the damage and to restore the normal operation regime for the rest of the system as quickly as possible. Faults that occur in the power system can be classified by their causes on the one hand and by their types on the other, although in the vast majority of cases the causes of faults are combined. By nature, faults divide into those due to insulation damage, those due to the loss of circuit integrity, and those caused by interruptions; in terms of type, faults are short circuits, earthing faults and phase interruptions, and they can further be divided into transversal and longitudinal ones. The paper presents a testing solution for delayed maximum-current relays using T3000 ISA Test measuring equipment.

  19. Neutron Flux and Activation Calculations for a High Current Deuteron Accelerator

    CERN Document Server

    Coniglio, Angela; Sandri, Sandro

    2005-01-01

    Neutron analysis of the first Neutral Beam (NB) for the International Thermonuclear Experimental Reactor (ITER) was performed to provide the basis for the study of the following main aspects: personnel safety during normal operation and maintenance, radiation shielding design, transportability of the NB components in the European countries. The first ITER NB is a medium energy light particle accelerator. In the scenario considered for the calculation the accelerated particles are negative deuterium ions with maximum energy of 1 MeV. The average beam current is 13.3 A. To assess neutron transport in the ITER NB structure a mathematical model of the components geometry was implemented into MCNP computer code (MCNP version 4c2. "Monte Carlo N-Particle Transport Code System." RSICC Computer Code Collection. June 2001). The neutron source definition was outlined considering both D-D and D-T neutron production. FISPACT code (R.A. Forrest, FISPACT-2003. EURATOM/UKAEA Fusion, December 2002) was used to assess neutron...

  20. Physical method to assess a probable maximum precipitation, using CRCM data

    International Nuclear Information System (INIS)

    Beauchamp, J.

    2009-01-01

    'Full text:' For Nordic hydropower facilities, spillways are designed with a peak discharge based on extreme conditions. This peak discharge is generally derived using the concept of a probable maximum flood (PMF), which results from the combined effect of abundant downpours (probable maximum precipitation - PMP) and rapid snowmelt. On a gauged basin, the weather data record allows for the computation of the PMF. However, uncertainty in the future climate raises questions as to the accuracy of current PMP estimates for existing and future hydropower facilities. This project looks at the potential use of the Canadian Regional Climate Model (CRCM) data to compute the PMF in ungauged basins and to assess potential changes to the PMF in a changing climate. Several steps will be needed to accomplish this task. This paper presents the first step that aims at applying/adapting to CRCM data the in situ moisture maximization technique developed by the World Meteorological Organization, in order to compute the PMP at the watershed scale. The CRCM provides output data on a 45km grid at a six hour time step. All of the needed atmospheric data is available at sixteen different pressure levels. The methodology consists in first identifying extreme precipitation events under current climate conditions. Then, a maximum persisting twelve hours dew point is determined at each grid point and pressure level for the storm duration. Afterwards, the maximization ratio is approximated by merging the effective temperature with dew point and relative humidity values. The variables and maximization ratio are four-dimensional (x, y, z, t) values. Consequently, two different approaches are explored: a partial ratio at each step and a global ratio for the storm duration. For every identified extreme precipitation event, a maximized hyetograph is computed from the application of this ratio, either partial or global, on CRCM precipitation rates. Ultimately, the PMP is the depth of the

  1. Physical method to assess a probable maximum precipitation, using CRCM data

    Energy Technology Data Exchange (ETDEWEB)

    Beauchamp, J. [Univ. de Quebec, Ecole de technologie superior, Quebec (Canada)

    2009-07-01

    'Full text:' For Nordic hydropower facilities, spillways are designed with a peak discharge based on extreme conditions. This peak discharge is generally derived using the concept of a probable maximum flood (PMF), which results from the combined effect of abundant downpours (probable maximum precipitation - PMP) and rapid snowmelt. On a gauged basin, the weather data record allows for the computation of the PMF. However, uncertainty in the future climate raises questions as to the accuracy of current PMP estimates for existing and future hydropower facilities. This project looks at the potential use of the Canadian Regional Climate Model (CRCM) data to compute the PMF in ungauged basins and to assess potential changes to the PMF in a changing climate. Several steps will be needed to accomplish this task. This paper presents the first step that aims at applying/adapting to CRCM data the in situ moisture maximization technique developed by the World Meteorological Organization, in order to compute the PMP at the watershed scale. The CRCM provides output data on a 45km grid at a six hour time step. All of the needed atmospheric data is available at sixteen different pressure levels. The methodology consists in first identifying extreme precipitation events under current climate conditions. Then, a maximum persisting twelve hours dew point is determined at each grid point and pressure level for the storm duration. Afterwards, the maximization ratio is approximated by merging the effective temperature with dew point and relative humidity values. The variables and maximization ratio are four-dimensional (x, y, z, t) values. Consequently, two different approaches are explored: a partial ratio at each step and a global ratio for the storm duration. For every identified extreme precipitation event, a maximized hyetograph is computed from the application of this ratio, either partial or global, on CRCM precipitation rates. Ultimately, the PMP is the depth of the
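    The moisture maximization step described in these two records can be sketched as follows; note that the precipitable-water curve below is a hypothetical stand-in for the WMO tables, and the dew points and rates are illustrative.

```python
def precipitable_water(dew_point_c):
    """Hypothetical precipitable-water curve (mm) vs the 12-h persisting
    dew point (degC) -- a stand-in for the WMO tables, not their values."""
    return 25.0 * 1.05 ** dew_point_c

def maximization_ratio(dew_max_c, dew_storm_c):
    """In-situ moisture maximization: precipitable water at the climatological
    maximum dew point over that observed during the storm."""
    return precipitable_water(dew_max_c) / precipitable_water(dew_storm_c)

def maximized_hyetograph(rates_mm_h, dew_max_c, dew_storm_c):
    """Apply a single (global) ratio to the storm's precipitation rates;
    the paper also explores a partial ratio recomputed at each time step."""
    r = maximization_ratio(dew_max_c, dew_storm_c)
    return [r * p for p in rates_mm_h]

storm = [2.0, 8.0, 5.0, 1.0]  # illustrative CRCM precipitation rates (mm/h)
pmp_hyeto = maximized_hyetograph(storm, dew_max_c=24.0, dew_storm_c=20.0)
```

    The PMP estimate is then the depth of the maximized hyetograph; applying the ratio per time step (the "partial ratio" variant) only changes which dew points enter each factor.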

  2. Constraints on pulsar masses from the maximum observed glitch

    Science.gov (United States)

    Pizzochero, P. M.; Antonelli, M.; Haskell, B.; Seveso, S.

    2017-07-01

    Neutron stars are unique cosmic laboratories in which fundamental physics can be probed in extreme conditions not accessible to terrestrial experiments. In particular, the precise timing of rotating magnetized neutron stars (pulsars) reveals sudden jumps in rotational frequency in these otherwise steadily spinning-down objects. These 'glitches' are thought to be due to the presence of a superfluid component in the star, and offer a unique glimpse into the interior physics of neutron stars. In this paper we propose an innovative method to constrain the mass of glitching pulsars, using observations of the maximum glitch observed in a star, together with state-of-the-art microphysical models of the pinning interaction between superfluid vortices and ions in the crust. We study the properties of a physically consistent angular momentum reservoir of pinned vorticity, and we find a general inverse relation between the size of the maximum glitch and the pulsar mass. We are then able to estimate the mass of all the observed glitchers that have displayed at least two large events. Our procedure will allow current and future observations of glitching pulsars to constrain not only the physics of glitch models but also the superfluid properties of dense hadronic matter in neutron star interiors.

  3. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  4. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    Multi-model ensemble (MME) averaging is considered the most reliable approach for simulating both present-day and future climates, and it has been a primary reference for conclusions in major coordinated studies, e.g. the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes at tremendous computational cost, which is especially prohibitive for regional climate modeling, where model uncertainties can originate from both the RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost: the ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions under the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling

  5. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  6. The Type-2 Fuzzy Logic Controller-Based Maximum Power Point Tracking Algorithm and the Quadratic Boost Converter for PV System

    Science.gov (United States)

    Altin, Necmi

    2018-05-01

    An interval type-2 fuzzy logic controller-based maximum power point tracking algorithm and direct current-direct current (DC-DC) converter topology are proposed for photovoltaic (PV) systems. The proposed maximum power point tracking algorithm is designed based on an interval type-2 fuzzy logic controller that has an ability to handle uncertainties. The change in PV power and the change in PV voltage are determined as inputs of the proposed controller, while the change in duty cycle is determined as the output of the controller. Seven interval type-2 fuzzy sets are determined and used as membership functions for input and output variables. The quadratic boost converter provides high voltage step-up ability without any reduction in performance and stability of the system. The performance of the proposed system is validated through MATLAB/Simulink simulations. It is seen that the proposed system provides high maximum power point tracking speed and accuracy even for fast changing atmospheric conditions and high voltage step-up requirements.
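    The high step-up ability mentioned here follows from the ideal continuous-conduction-mode gain of the quadratic boost converter, M = 1/(1-D)^2, versus 1/(1-D) for a plain boost. A small sketch under the lossless-converter assumption (the example voltages are illustrative):

```python
import math

def quadratic_boost_gain(d):
    """Ideal CCM voltage gain of a quadratic boost converter: M = 1/(1-D)^2."""
    return 1.0 / (1.0 - d) ** 2

def duty_for_gain(m):
    """Duty cycle needed for a target step-up ratio m (ideal, lossless)."""
    return 1.0 - 1.0 / math.sqrt(m)

# Example: stepping a 20 V PV string up to a 320 V bus (gain of 16)
d = duty_for_gain(16.0)
m_plain_boost = 1.0 / (1.0 - d)  # a plain boost at the same duty gives only 4x
```

    The fuzzy MPPT loop in the paper outputs the change in this duty cycle; the quadratic topology keeps the required duty moderate even for large step-up ratios.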

  7. Critical current in nonhomogeneous YBCO coated conductors

    International Nuclear Information System (INIS)

    Rostila, L; Mikkonen, R; Lehtonen, J

    2008-01-01

    The critical current of an YBCO tape is determined by the magnetic field inside the YBCO layer and by the quality of the YBCO material. In thick YBCO layers the average critical current density is reduced by the self-field and by decreased material quality. In this paper the combined influence of material nonhomogeneities and the self-field on the critical current of YBCO tapes is scrutinised. First, the zero-field critical current density was assumed to decrease along the YBCO thickness. Secondly, the possible defects created in the cutting of YBCO tapes were modelled as a lowered critical current density near the tape edges. In both cases the critical current was computed numerically with an integral element method. The results suggest that variation of the zero-field critical current density, Jc0, along the tape thickness does not affect the critical current if the mean value of Jc0 is kept constant. However, if Jc0 varies along the tape width, the critical current can change due to the altered self-field. The computations can be used to determine when it is possible to evaluate the average zero-field critical current density from a voltage-current measurement with appropriate accuracy.

  8. Western Gulf of Mexico, June 1993 to June 1994 Average Ocean Currents, Geographic NAD83, MMS (1999) [ocean_currents_wgom_AVG_MMS_1994

    Data.gov (United States)

    Louisiana Geographic Information Center — This is one data set of a data package consisting of thirteen point data sets that have as attributes the direction and velocity of ocean currents in the western...

  9. Eastern Gulf of Mexico, February 1996 to June 1997 Average Ocean Currents, Geographic NAD83, MMS (1999) [ocean_currents_egom_AVG_MMS_1997

    Data.gov (United States)

    Louisiana Geographic Information Center — This is one data set of a data package consisting of thirteen point data sets that have as attributes the direction and velocity of ocean currents in the 'eastern'...

  10. Simulated wind-generated inertial oscillations compared to current measurements in the northern North Sea

    Science.gov (United States)

    Bruserud, Kjersti; Haver, Sverre; Myrhaug, Dag

    2018-04-01

    Measured current speed data show that episodes of wind-generated inertial oscillations dominate the current conditions in parts of the northern North Sea. In order to acquire current data of sufficient duration for robust estimation of joint metocean design conditions, such as wind, waves, and currents, a simple model for episodes of wind-generated inertial oscillations is adapted for the northern North Sea. The model is validated with and compared against measured current data at one location in the northern North Sea and found to reproduce the measured maximum current speed in each episode with considerable accuracy. The comparison is further improved when a small general background current is added to the simulated maximum current speeds. Extreme values of measured and simulated current speed are estimated and found to compare well. To assess the robustness of the model and the sensitivity of current conditions from location to location, the validated model is applied at three other locations in the northern North Sea. In general, the simulated maximum current speeds are smaller than the measured, suggesting that wind-generated inertial oscillations are not as prominent at these locations and that other current conditions may be governing. Further analysis of the simulated current speed and joint distribution of wind, waves, and currents for design of offshore structures will be presented in a separate paper.
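    The essential physics, near-circular currents rotating at the local inertial frequency f = 2 Omega sin(phi), can be sketched with an unforced, undamped slab mixed-layer model; the latitude, initial current and time step below are illustrative, not the paper's calibrated model.

```python
import math

OMEGA = 7.2921e-5  # Earth's rotation rate (rad/s)

def coriolis(lat_deg):
    """Coriolis parameter f = 2 * Omega * sin(latitude)."""
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

def inertial_period_hours(lat_deg):
    return 2.0 * math.pi / coriolis(lat_deg) / 3600.0

# Response after an impulsive wind event at ~61 deg N (northern North Sea)
f = coriolis(61.0)
u, v = 0.30, 0.0          # initial inertial current (m/s) left by the wind impulse
dt = 60.0                 # time step (s)
speeds = []
for _ in range(24 * 60):  # one day; semi-implicit Euler keeps the speed bounded
    u = u + f * v * dt    # du/dt =  f v  (unforced, undamped slab equations)
    v = v - f * u * dt    # dv/dt = -f u
    speeds.append(math.hypot(u, v))
```

    The current vector simply rotates (clockwise in the northern hemisphere) with period 2*pi/f, about 13.7 h at this latitude; the measured episodes decay through damping and ride on a background current, which the paper adds as a small constant offset.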

  11. Development of linear proton accelerators with the high average beam power

    CERN Document Server

    Bomko, V A; Egorov, A M

    2001-01-01

    A review is given of the current status of the development of high-power linear proton accelerators in many countries. The purpose of their creation is to solve the problems of safe and efficient nuclear energetics on the basis of an accelerator-reactor complex. In this case a proton beam with an energy of up to 1 GeV and an average current of 30 mA is required. At the same time there is a need for even more powerful beams, for example for tritium production and the transmutation of nuclear waste. The creation of accelerators of such power will be accompanied by the construction of 1 GeV linear accelerators with a more moderate beam current, intended for the investigation of many aspects of neutron physics and neutron engineering. Problems in the creation of efficient designs for the basic and auxiliary equipment, in the reliability of the systems, and in the minimization of beam losses during acceleration will be solved.

  12. Electromigration failures under bidirectional current stress

    Science.gov (United States)

    Tao, Jiang; Cheung, Nathan W.; Hu, Chenming

    1998-01-01

    Electromigration failure under DC stress has been studied for more than 30 years, and methodologies for accelerated DC testing and design rules are well established in the IC industry. However, electromigration behavior and design rules under time-varying current stress are still unclear. In CMOS circuits, where many interconnects carry pulsed-DC (local VCC and VSS lines) and bidirectional AC currents (clock and signal lines), it is essential to assess the reliability of metallization systems under these conditions. Failure mechanisms of different metallization systems (Al-Si, Al-Cu, Cu, TiN/Al-alloy/TiN, etc.) and different metallization structures (via, plug and interconnect) under AC current stress over a wide frequency range (from mHz to 500 MHz) are studied in this paper. Based on these experimental results, a damage-healing model is developed and electromigration design rules are proposed. It is shown that, in the circuit operating frequency range, the "design-rule current" is the time-average current: the pure AC component of the current contributes only to self-heating, while the average (DC) component drives electromigration. To ensure a longer thermal-migration lifetime under high-frequency AC stress, an additional design rule is proposed to limit the temperature rise due to self-Joule heating.
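    The proposed design rules separate the two roles of a bidirectional waveform: the DC (time-average) component drives electromigration, while the RMS value sets self-heating. A minimal sketch with a hypothetical square wave:

```python
import numpy as np

def em_design_currents(i):
    """Split a current waveform into the DC component relevant to
    electromigration and the RMS value that governs Joule self-heating."""
    i = np.asarray(i, dtype=float)
    i_avg = i.mean()                   # time-average (DC) -> electromigration
    i_rms = np.sqrt((i ** 2).mean())   # RMS -> self-heating
    return i_avg, i_rms

# Hypothetical bidirectional square wave: +10 mA for 60% of the period,
# -10 mA for the remaining 40% (equal time samples over one period)
wave = np.array([10.0] * 6 + [-10.0] * 4)
i_avg, i_rms = em_design_currents(wave)
```

    A nearly symmetric clock line thus has a small electromigration-relevant current even though its RMS value, and hence its temperature rise, stays large, which is exactly why the paper adds a separate self-heating limit.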

  13. The Relationship between Interparental Conflict and Self-Reported Grade Point Average among College Students

    Science.gov (United States)

    Hunt, S. Jane; Krueger, Lacy E.; Limberg, Dodie

    2017-01-01

    Interparental conflict has been shown to have a negative effect on the academic success of children and adolescents. This study examined the relationship between college students' (N = 143) perceived levels of interparental conflict, their living arrangement, and their current self-reported grade point average. Participants who experienced more…

  14. Circuit Simulation for Solar Power Maximum Power Point Tracking with Different Buck-Boost Converter Topologies

    Directory of Open Access Journals (Sweden)

    Jaw-Kuen Shiau

    2014-08-01

    Full Text Available The power converter is one of the essential elements for the effective use of renewable power sources. This paper focuses on the development of a circuit simulation model for maximum power point tracking (MPPT) evaluation of solar power that involves using different buck-boost power converter topologies, including SEPIC, Zeta, and four-switch type buck-boost DC/DC converters. The circuit simulation model mainly includes three subsystems: a PV model; a buck-boost converter-based MPPT system; and a fuzzy logic MPPT controller. Dynamic analyses of the current-fed buck-boost converter systems are conducted and the results are presented in the paper. The maximum power point tracking function is achieved through appropriate control of the power switches of the power converter. A fuzzy logic controller is developed to perform the MPPT function for obtaining maximum power from the PV panel. The MATLAB-based Simulink piecewise linear electric circuit simulation tool is used to verify the complete circuit simulation model.

  15. 13 CFR 107.840 - Maximum term of Financing.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum term of Financing. 107.840... COMPANIES Financing of Small Businesses by Licensees Structuring Licensee's Financing of An Eligible Small Business: Terms and Conditions of Financing § 107.840 Maximum term of Financing. The maximum term of any...

  16. Efficient processing of CFRP with a picosecond laser with up to 1.4 kW average power

    Science.gov (United States)

    Onuseit, V.; Freitag, C.; Wiedenmann, M.; Weber, R.; Negel, J.-P.; Löscher, A.; Abdou Ahmed, M.; Graf, T.

    2015-03-01

    Laser processing of carbon fiber reinforced plastic (CFRP) is a very promising method for solving many of the challenges of large-volume production of lightweight constructions in the automotive and airplane industries. However, the laser process is currently limited by two main issues. First, the quality might be reduced due to thermal damage, and second, the high process energy needed for sublimation of the carbon fibers requires laser sources with high average power for productive processing. To keep the thermal damage of the CFRP below 10 μm, intensities above 10⁸ W/cm² are needed. To reach these high intensities in the processing area, ultra-short pulse laser systems are favored. Unfortunately, the average power of commercially available laser systems is up to now in the range of several tens to a few hundred watts. To sublimate the carbon fibers, a large volume-specific enthalpy of 85 J/mm³ is necessary. This means, for example, that cutting 2 mm thick material with a kerf width of 0.2 mm at an industry-typical 100 mm/sec requires several kilowatts of average power. At the IFSW, a thin-disk multipass amplifier yielding a maximum average output power of 1100 W (300 kHz, 8 ps, 3.7 mJ) allowed for the first time to process CFRP at this average power and pulse energy level with picosecond pulse duration. With this unique laser system, cutting of CFRP with a thickness of 2 mm at an effective average cutting speed of 150 mm/sec with thermal damage below 10 μm was demonstrated.
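    The several-kilowatt figure follows directly from the numbers quoted in the abstract: the required power is the volume-specific sublimation enthalpy multiplied by the volume removal rate.

```python
# Minimum average power to sublimate the removed kerf volume (losses ignored),
# using the figures quoted above.
enthalpy = 85.0      # J/mm^3, volume-specific enthalpy to sublimate the fibers
thickness = 2.0      # mm, material thickness
kerf_width = 0.2     # mm
speed = 100.0        # mm/s, industry-typical cutting speed

removal_rate_mm3_s = thickness * kerf_width * speed
power_W = enthalpy * removal_rate_mm3_s
print(power_W)  # 3400.0 -> several kilowatts, as stated
```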

  17. Two-Stage Maximum Likelihood Estimation (TSMLE) for MT-CDMA Signals in the Indoor Environment

    Directory of Open Access Journals (Sweden)

    Sesay Abu B

    2004-01-01

    Full Text Available This paper proposes a two-stage maximum likelihood estimation (TSMLE) technique suited to multitone code division multiple access (MT-CDMA) systems. Here, an analytical framework is presented for the indoor environment for determining the average bit error rate (BER) of the system over Rayleigh and Ricean fading channels. The analytical model is derived for the quadrature phase shift keying (QPSK) modulation technique by taking into account the number of tones, signal bandwidth (BW), bit rate, and transmission power. Numerical results are presented to validate the analysis and to justify the approximations made therein. Moreover, these results are shown to agree completely with those obtained by simulation.

  18. Simulation model of ANN based maximum power point tracking controller for solar PV system

    Energy Technology Data Exchange (ETDEWEB)

    Rai, Anil K.; Singh, Bhupal [Department of Electrical and Electronics Engineering, Ajay Kumar Garg Engineering College, Ghaziabad 201009 (India); Kaushika, N.D.; Agarwal, Niti [School of Research and Development, Bharati Vidyapeeth College of Engineering, A-4 Paschim Vihar, New Delhi 110063 (India)

    2011-02-15

    In this paper the simulation model of an artificial neural network (ANN) based maximum power point tracking controller has been developed. The controller consists of an ANN tracker and an optimal control unit. The ANN tracker estimates the voltages and currents corresponding to the maximum power delivered by the solar PV (photovoltaic) array for variable cell temperature and solar radiation. The cell temperature is considered as a function of ambient air temperature, wind speed and solar radiation. The tracker is trained on a set of 124 patterns using the back-propagation algorithm. The mean square error between the tracker output and the target values is set to be of the order of 10⁻⁵, and successful convergence of the learning process takes 1281 epochs. The accuracy of the ANN tracker has been validated by employing different test data sets. The control unit uses the estimates of the ANN tracker to adjust the duty cycle of the chopper to the optimum value needed for maximum power transfer to the specified load. (author)

  19. Streaming current magnetic fields in a charged nanopore

    Science.gov (United States)

    Mansouri, Abraham; Taheri, Peyman; Kostiuk, Larry W.

    2016-01-01

    Magnetic fields induced by currents created in pressure driven flows inside a solid-state charged nanopore were modeled by numerically solving a system of steady state continuum partial differential equations, i.e., the Poisson, Nernst-Planck, Ampere and Navier-Stokes equations (PNPANS). This analysis was based on non-dimensional transport governing equations that were scaled using the Debye length as the characteristic length scale, and applied to a finite length cylindrical nano-channel. The comparison of the numerical and analytical studies shows excellent agreement and verifies the magnetic field density both inside and outside the nanopore. The radially non-uniform currents result in highly non-uniform magnetic fields within the nanopore that decay as 1/r outside the nanopore. It is worth noting that for either the streaming current or the streaming potential case, the maximum magnetic field occurs inside the pore in the vicinity of the nanopore wall, as opposed to a cylindrical conductor carrying a steady electric current, where the maximum magnetic field occurs at the perimeter of the conductor. Based on these results, it is suggested that non-invasive external magnetic field readouts generated by streaming/ionic currents may be viewed as secondary electronic signatures of biomolecules to complement and enhance current DNA nanopore sequencing techniques. PMID:27833119

  20. Characterization of plasma current quench at JET

    International Nuclear Information System (INIS)

    Riccardo, V; Barabaschi, P; Sugihara, M

    2005-01-01

    Eddy currents generated during the fastest disruption current decays represent the most severe design condition for medium and small size in-vessel components of most tokamaks. Best-fit linear and instantaneous plasma current quench rates have been extracted for a set of recent JET disruptions. Contrary to expectations, the current quench rate spectra of high and low thermal energy disruptions are not substantially different. For most of the disruptions with the highest instantaneous current quench rate, an exponential fit of the early phase of the current decay provides a more accurate estimate of the maximum current decay velocity. However, this fit is only suitable for modelling the fastest events, for which the current quench is dominated by radiation losses rather than plasma motion.

  1. A Single Phase Doubly Grounded Semi-Z-Source Inverter for Photovoltaic (PV) Systems with Maximum Power Point Tracking (MPPT)

    Directory of Open Access Journals (Sweden)

    Tofael Ahmed

    2014-06-01

    Full Text Available In this paper, a single phase doubly grounded semi-Z-source inverter with maximum power point tracking (MPPT) is proposed for photovoltaic (PV) systems. The proposed system utilizes a single-ended primary inductor (SEPIC) converter as the DC-DC converter to implement the MPPT algorithm for tracking the maximum power from a PV array, and a single phase semi-Z-source inverter for integrating the PV with AC power utilities. The MPPT controller utilizes a fast-converging algorithm to track the maximum power point (MPP), and the semi-Z-source inverter utilizes a nonlinear SPWM to produce a sinusoidal voltage at the output. The proposed system is able to track the MPP of PV arrays and produce an AC voltage at its output by utilizing only three switches. Experimental results show that the fast-converging MPPT algorithm has a fast tracking response with appreciable MPP efficiency. In addition, the inverter shows minimization of the common mode leakage current through its ground sharing feature, and reduction of the THD as well as of the DC current components at the output during DC-AC conversion.

  2. Exponential growth and Gaussian-like fluctuations of solutions of stochastic differential equations with maximum functionals

    International Nuclear Information System (INIS)

    Appleby, J A D; Wu, H

    2008-01-01

    In this paper we consider functional differential equations subjected either to instantaneous state-dependent noise or to a white noise perturbation. The drift of the equations depends linearly on the current value and on the maximum of the solution. The functional term always provides positive feedback, while the instantaneous term can be mean-reverting or can exhibit positive feedback. We show in the white noise case that if the instantaneous term is mean-reverting and dominates the history term, then solutions are recurrent, and upper bounds on the a.s. growth rate of the partial maxima of the solution can be found. When the instantaneous term is weaker, or is of positive feedback type, we determine necessary and sufficient conditions on the diffusion coefficient which ensure the exact exponential growth of solutions. An application of these results to an inefficient financial market populated by reference traders and speculators is given, in which the difference between the current instantaneous returns and the maximum of the returns over the last few time units is used to determine trading strategies.

  3. 20 CFR 226.52 - Total annuity subject to maximum.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52 Total annuity subject to maximum. The total annuity amount which is compared to the maximum monthly amount to...

  4. Identification of "ever-cropped" land (1984-2010) using Landsat annual maximum NDVI image composites: Southwestern Kansas case study.

    Science.gov (United States)

    Maxwell, Susan K; Sylvester, Kenneth M

    2012-06-01

    A time series of 230 intra- and inter-annual Landsat Thematic Mapper images was used to identify land that was ever cropped during the years 1984 through 2010 for a five-county region in southwestern Kansas. Annual maximum Normalized Difference Vegetation Index (NDVI) image composites (NDVI(ann-max)) were used to evaluate the inter-annual dynamics of cropped and non-cropped land. Three feature images were derived from the 27-year NDVI(ann-max) image time series and used in the classification: 1) the maximum NDVI value that occurred over the entire 27-year time span (NDVI(max)), 2) the standard deviation of the annual maximum NDVI values for all years (NDVI(sd)), and 3) the standard deviation of the annual maximum NDVI values for the years 1984-1986 (NDVI(sd84-86)) to improve Conservation Reserve Program land discrimination. Results of the classification were compared to three reference data sets: county-level USDA Census records (1982-2007) and two digital land cover maps (the Kansas 2005 map and USGS Trends Program maps (1986-2000)). The area of ever-cropped land for the five counties was on average 11.8% higher than the area estimated from Census records. Overall agreement between the ever-cropped land map and the 2005 Kansas map was 91.9%, and 97.2% for the Trends maps. Converting the intra-annual Landsat data set to a single annual maximum NDVI image composite considerably reduced the data set size and eliminated cloud and cloud-shadow effects, yet maintained the information important for discriminating cropped land. Our results suggest that Landsat annual maximum NDVI image composites will be useful for characterizing land use and land cover change for many applications.
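    The compositing step described above reduces to a per-pixel maximum over each year's scenes, after which the three classification features are simple pixelwise statistics. A sketch with synthetic arrays (the shapes and data are hypothetical stand-ins for the Landsat stacks):

```python
import numpy as np

rng = np.random.default_rng(0)

# One year of co-registered NDVI scenes: (scenes, rows, cols).
scenes = rng.uniform(-0.1, 0.9, size=(12, 4, 4))
ndvi_ann_max = scenes.max(axis=0)       # annual maximum-value composite

# Stack of 27 such annual composites (1984-2010), here random stand-ins.
annual = rng.uniform(-0.1, 0.9, size=(27, 4, 4))
ndvi_max = annual.max(axis=0)           # 1) max NDVI over the 27-year span
ndvi_sd = annual.std(axis=0)            # 2) std of annual maxima, all years
ndvi_sd_84_86 = annual[:3].std(axis=0)  # 3) std for 1984-1986 (CRP lands)
print(ndvi_max.shape, ndvi_sd.shape, ndvi_sd_84_86.shape)
```

    Taking the per-pixel maximum suppresses clouds and shadows because both depress NDVI, so clear-sky peak-greenness observations dominate the composite.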

  5. The opportunity offered by the ESSnuSB project to exploit the larger leptonic CP violation signal at the second oscillation maximum and the requirements of this project on the ESS accelerator complex

    CERN Document Server

    Wildner, Elena; Blennow, M.; Bogomilov, M.; Burgman, A.; Bouquerel, E.; Carlile, C.; Cederkäll, J.; Christiansen, P.; Cupial, P.; Danared, H.; Dracos, M.; Ekelöf, T.; Eshraqi, M.; Hall-Wilton, R.; Koutchouk, J.P.; Lindroos, M.; Martini, M.; Matev, R.; McGinnis, D.; Miyamoto, R.; Ohlsson, T.; Öhman, H.; Olvegård, M.; Ruber, R.; Schönauer, H.; Tang, J.Y.; Tsenov, R.; Vankova-Kirilova, G.; Vassilopoulos, N.

    2016-01-01

    Very intense neutrino beams and large neutrino detectors will be needed to enable the discovery of CP violation in the leptonic sector. The European Spallation Source (ESS), currently under construction in Lund, Sweden, is a research center that will provide, by 2023, the world's most powerful neutron source, with an average power of 5 MW. Pulsing this linac at higher frequency, at the same instantaneous power, will make it possible to raise the average beam power to 10 MW and to produce, in parallel with the spallation neutron production, a high-performance neutrino Super Beam of about 0.4 GeV mean neutrino energy. The ESS neutrino Super Beam, ESSnuSB, operated with a 2.0 GeV linac proton beam, together with a large underground water Cherenkov detector located 540 km from Lund, close to the second oscillation maximum, will make it possible to discover leptonic CP violation at the 5 sigma significance level in 56 percent (65 percent for an upgrade to 2.5 GeV beam energy) of the leptonic Dirac CP-violating phase r...

  6. Robust maximum power point tracker using sliding mode controller for the three-phase grid-connected photovoltaic system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Il-Song [LG Chem. Ltd./Research park, Mobile Energy R and D, 104-1 Moonji-Dong, Yuseong-Gu, Daejeon 305-380 (Korea)

    2007-03-15

    A robust maximum power point tracker (MPPT) using a sliding mode controller for a three-phase grid-connected photovoltaic system is proposed in this paper. In contrast to previous controllers, the proposed system consists of an MPPT controller and a current controller for tight regulation of the current. The proposed MPPT controller generates the current reference directly from the solar array power information, and the current controller uses integral sliding mode for tight control of the current. The proposed system can prevent current overshoot and provides an optimal design for the system components. The structure of the proposed system is simple, and it shows robust tracking properties against modeling uncertainties and parameter variations. Mathematical modeling is developed and the experimental results verify the validity of the proposed controller. (author)

  7. Performance study of highly efficient 520 W average power long pulse ceramic Nd:YAG rod laser

    Science.gov (United States)

    Choubey, Ambar; Vishwakarma, S. C.; Ali, Sabir; Jain, R. K.; Upadhyaya, B. N.; Oak, S. M.

    2013-10-01

    We report the performance study of a 2 at.% doped ceramic Nd:YAG rod for long pulse laser operation in the millisecond regime with pulse durations in the range of 0.5-20 ms. A maximum average output power of 520 W with 180 J maximum pulse energy has been achieved with a slope efficiency of 5.4% using a dual rod configuration, which is the highest for typical lamp-pumped ceramic Nd:YAG lasers. The laser output characteristics of the ceramic Nd:YAG rod were revealed to be nearly equivalent or superior to those of a high-quality single crystal Nd:YAG rod. The laser pump chamber and resonator were designed and optimized to achieve high efficiency and good beam quality, with a beam parameter product of 16 mm mrad (M² ≈ 47). The laser output beam was efficiently coupled through a 400 μm core diameter optical fiber with 90% overall transmission efficiency. This ceramic Nd:YAG laser will be useful for various material processing applications in industry.

  8. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...

  9. Theoretical evaluation of maximum electric field approximation of direct band-to-band tunneling Kane model for low bandgap semiconductors

    Science.gov (United States)

    Dang Chien, Nguyen; Shih, Chun-Hsing; Hoa, Phu Chi; Minh, Nguyen Hong; Thi Thanh Hien, Duong; Nhung, Le Hong

    2016-06-01

    The two-band Kane model has been popularly used to calculate the band-to-band tunneling (BTBT) current in the tunnel field-effect transistor (TFET), which is currently considered a promising candidate for low power applications. This study theoretically clarifies the maximum electric field approximation (MEFA) of the direct BTBT Kane model and evaluates its appropriateness for low bandgap semiconductors. By analysing the physical origin of each electric field term in the Kane model, it is elucidated that in the MEFA the local electric field term must be retained, while the nonlocal electric field terms are assigned the maximum value of the electric field at the tunnel junction. Mathematical investigations have shown that the MEFA is more appropriate for low bandgap semiconductors than for high bandgap materials because of the enhanced tunneling probability in low field regions. The appropriateness of the MEFA is very useful in practice for quickly estimating the direct BTBT current in low bandgap TFET devices.
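    For intuition, the generic two-band Kane expression for the direct BTBT generation rate has the form G ∝ (E²/√Eg)·exp(−B·Eg^(3/2)/E). The sketch below (the prefactors A and B are arbitrary placeholder values, not material constants) illustrates why assigning the maximum field to the nonlocal terms errs less for a low bandgap: the relative contribution of low-field regions is far larger there.

```python
import math

def kane_btbt_rate(E, Eg, A=1.0, B=20.0):
    """Generic two-band Kane direct BTBT generation rate, arbitrary units.
    A and B are placeholder prefactors; E is the local electric field,
    Eg the bandgap (consistent, dimensionless units)."""
    return A * E**2 / math.sqrt(Eg) * math.exp(-B * Eg**1.5 / E)

E_max = 10.0
for Eg in (0.36, 1.12):  # a low-gap material vs. a silicon-like gap
    # rate in a low-field region (0.3 * E_max) relative to the junction peak
    rel = kane_btbt_rate(0.3 * E_max, Eg) / kane_btbt_rate(E_max, Eg)
    print(f"Eg = {Eg}: low-field relative rate = {rel:.2e}")
```

    With these placeholder numbers, the low-field region contributes roughly two orders of magnitude more, relative to the peak, for the low-gap case.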

  10. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
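    The reject-then-average idea can be sketched as follows. This is a toy version that marks voids and unreliable pixels with NaN; the rejection threshold is a hypothetical parameter, and the actual algorithm additionally handles unwrapping artifacts and alignment drift:

```python
import numpy as np

def robust_phase_average(maps, max_bad_frac=0.05):
    """Average a stack of phase maps (NaN = void/unreliable pixel).
    Maps whose invalid-pixel fraction exceeds max_bad_frac are rejected
    outright; surviving maps are averaged per pixel, ignoring NaNs."""
    maps = np.asarray(maps, dtype=float)
    bad_frac = np.isnan(maps).mean(axis=(1, 2))   # invalid fraction per map
    keep = maps[bad_frac <= max_bad_frac]
    avg = np.nanmean(keep, axis=0)                # per-pixel mean, NaNs ignored
    std = np.nanstd(keep, axis=0)                 # per-pixel variability estimate
    return avg, std, len(keep)

good = np.zeros((4, 8, 8))
defective = np.full((8, 8), np.nan)   # large-area void would spoil a plain mean
avg, std, n_used = robust_phase_average(np.concatenate([good, defective[None]]))
print(n_used)  # 4
```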

  11. Power converter with maximum power point tracking MPPT for small wind-electric pumping systems

    International Nuclear Information System (INIS)

    Lara, David; Merino, Gabriel; Salazar, Lautaro

    2015-01-01

    Highlights: • We implement a wind-electric pumping system of small power. • The power converter allowed the operating point of the electro pump to be changed. • Two control techniques were implemented in the power converter. • Variable V/f control increased the power generated by the permanent magnet generator. - Abstract: In this work, an AC–DC–AC direct-drive power converter was implemented for a wind-electric pumping system consisting of a permanent magnet generator (PMG) of 1.3 kW and a peripheral single-phase pump of 0.74 kW. In addition, the inverter linear V/f control scheme and the maximum power point tracking (MPPT) algorithm with variable V/f were developed. The MPPT algorithm seeks to extract water over a wide range of input power using the maximum amount of wind power available. Experimental trials at different pump pressures were conducted. With MPPT tracking with variable V/f, a power value of 1.3 kW was obtained at a speed of 350 rpm and a maximum operating hydraulic head of 50 m. At lower operating heads (between 10 and 40 m), variable V/f control increases the power generated by the PMG compared to linear V/f control. This increase ranged between 4% and 23% depending on the operating pressure, with an average of 13%, coming close to the maximum electrical power curve of the PMG. The pump was driven at variable frequency down to a minimum speed of 0.5 times the rated speed. The efficiency of the power converter ranges between 70% and 95%, with a power factor between 0.4 and 0.85, depending on the operating pressure.

  12. Targeted maximum likelihood estimation for a binary treatment: A tutorial.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Schomaker, Michael; Rachet, Bernard; Schnitzer, Mireille E

    2018-04-23

    When estimating the average effect of a binary treatment (or exposure) on an outcome, methods that incorporate propensity scores, the G-formula, or targeted maximum likelihood estimation (TMLE) are preferred over naïve regression approaches, which are biased under misspecification of a parametric outcome model. In contrast, propensity score methods require the correct specification of an exposure model. Double-robust methods require correct specification of only one of the outcome or exposure models. Targeted maximum likelihood estimation is a semiparametric double-robust method that improves the chances of correct model specification by allowing for flexible estimation using (nonparametric) machine-learning methods. It therefore requires weaker assumptions than its competitors. We provide a step-by-step guided implementation of TMLE and illustrate it in a realistic scenario based on cancer epidemiology where assumptions about correct model specification and positivity (i.e., when a study participant had zero probability of receiving the treatment) are nearly violated. This article provides a concise and reproducible educational introduction to TMLE for a binary outcome and exposure. The reader should gain sufficient understanding of TMLE from this introductory tutorial to be able to apply the method in practice. Extensive R code is provided in easy-to-read boxes throughout the article for replicability. Stata users will find a testing implementation of TMLE and additional material in the Appendix S1 and at the following GitHub repository: https://github.com/migariane/SIM-TMLE-tutorial. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  13. Maximum Efficiency per Torque Control of Permanent-Magnet Synchronous Machines

    Directory of Open Access Journals (Sweden)

    Qingbo Guo

    2016-12-01

    Full Text Available High-efficiency permanent-magnet synchronous machine (PMSM) drive systems need not only optimally designed motors but also efficiency-oriented control strategies. However, existing control strategies focus only on partial loss optimization. This paper proposes a novel analytic loss model of the PMSM under either sine-wave pulse-width modulation (SPWM) or space vector pulse-width modulation (SVPWM) which can take into account both the fundamental loss and the harmonic loss. The fundamental loss is divided into fundamental copper loss and fundamental iron loss, which is estimated from the average flux density in the stator tooth and yoke. In addition, the harmonic loss is obtained from the Bertotti iron loss formula using the harmonic voltages of the three-phase inverter in either SPWM or SVPWM, which are calculated by double Fourier integral analysis. Based on the analytic loss model, this paper proposes a maximum efficiency per torque (MEPT) control strategy which can minimize the electromagnetic loss of the PMSM over the whole operation range. As the loss model of the PMSM is too complicated to yield an analytical solution for the optimal loss, a golden section method is applied to locate the optimal operation point accurately, which allows the PMSM to work at maximum efficiency. The optimized results for SPWM and SVPWM show that MEPT under SVPWM has a better optimization performance. Both the theoretical analysis and the experimental results show that MEPT control can significantly improve the efficiency of the PMSM in each operating condition with satisfactory dynamic performance.
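    The golden section method mentioned above is a derivative-free search for the minimum of a unimodal function on an interval. A minimal sketch, where the quadratic "loss curve" is a toy stand-in for the PMSM loss model:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Golden-section search for the minimizer of a unimodal f on [a, b]."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                      # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# Toy stand-in for a loss-vs-current curve with its minimum at i = 2.0.
loss = lambda i: (i - 2.0) ** 2 + 1.0
print(round(golden_section_min(loss, 0.0, 5.0), 6))  # 2.0
```

    Each iteration shrinks the bracket by the constant factor 0.618 while reusing one interior evaluation, which is why the method suits loss models that are expensive to evaluate.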

  14. National Fire News- Current Wildfires

    Science.gov (United States)

    Current hours for the National Fire Information Center are (MST) 8:00 am - 4:... for more information. June 15, 2018: Nationally, wildland fire activity remains about average for this time of ...

  15. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  16. Characteristics of the cold-water belt formed off Soya Warm Current

    Science.gov (United States)

    Ishizu, Miho; Kitade, Yujiro; Matsuyama, Masaji

    2008-12-01

    We examined data obtained from acoustic Doppler current profiler, conductivity-temperature-depth profiler, and expendable bathythermograph observations, collected in the summers of 2000, 2001, and 2002, to clarify the characteristics of the cold-water belt (CWB), i.e., water of lower temperature than its surroundings extending from the southwest coast of Sakhalin along the offshore side of the Soya Warm Current (SWC), and to confirm one of the formation mechanisms of the CWB suggested by our previous study, i.e., upwelling due to the convergence of bottom Ekman transport off the SWC region. The CWB was observed at about 30 km off the coast, having a thickness of 14 m and a minimum temperature of 12°C at the sea surface. The CWB does not have a specific water mass, but is constituted of the three representative water types off the northeast coast of Hokkaido in summer, i.e., SWC water, Fresh Surface Okhotsk Sea Water, and Okhotsk Sea Intermediate Water. In a comparison of the horizontal distributions of current and temperature, the CWB region is found to be advected to the southeast at an average of 40 ± 29% of the maximum current velocity of the SWC. The pumping speed due to the convergence of the bottom Ekman transport is estimated as (1.5-3.0) × 10⁻⁴ m s⁻¹. We examined the mixing ratio of the CWB, and the results imply that the water mass of the CWB is advected southeastward and mixes with a water mass upwelling in a different region off the SWC.

  17. A time-averaged cosmic ray propagation theory

    International Nuclear Information System (INIS)

    Klimas, A.J.

    1975-01-01

    An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.) [de

  18. Current-Mode CMOS A/D Converter for pA to nA Input Currents

    DEFF Research Database (Denmark)

    Breten, Madalina; Lehmann, Torsten; Bruun, Erik

    1999-01-01

    This paper describes a current-mode A/D converter designed for a maximum input current range of 5 nA and a resolution of the order of 1 pA. The converter is designed for a potentiostat for amperometric chemical sensors and provides a constant polarization voltage for the measuring electrode. A prototype chip using the dual-slope conversion method has been fabricated in a 0.7 μm CMOS process. Experimental results from this converter are reported. Design problems and limitations of the converter are discussed, along with a new conversion technique providing a larger dynamic range and easy calibration...
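    In dual-slope conversion, the unknown input is integrated for a fixed time t1 and the integrator is then discharged by a known reference; charge balance gives Iin = Iref·t2/t1, so the clock count during the second slope is the digital output. A sketch of the principle, with illustrative numbers not taken from the paper:

```python
def dual_slope_counts(i_in, i_ref=5e-9, n_counts=2**20):
    """Idealized dual-slope A/D conversion: clock counts accumulated during
    the de-integration phase. Charge balance: i_in * t1 = i_ref * t2,
    with t1 fixed at n_counts clock periods."""
    return round(n_counts * i_in / i_ref)

# Half of the assumed 5 nA full scale lands at half of the count range.
print(dual_slope_counts(2.5e-9))  # 524288
print(5e-9 / 2**20)               # ideal LSB in amperes for these parameters
```

    A virtue of the scheme, relevant to pA-level inputs, is that the result depends on the ratio of the two currents and the count, not on the absolute values of the integrating capacitor or clock frequency.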

  19. Maximum gravitational redshift of white dwarfs

    International Nuclear Information System (INIS)

    Shapiro, S.L.; Teukolsky, S.A.

    1976-01-01

    The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ₂, ζ₃, and ζ₄), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores.

  20. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined models from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in a lower RMSD to the native protein than the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which
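    The core of the described refinement, stripped of all stereochemical terms, is a Metropolis walk under a harmonic pseudo-energy pulling the coordinates toward the averaged structure. A toy sketch (the parameters and the 10-residue "structure" are invented for illustration; a real implementation would add terms penalizing clashes and bad local geometry):

```python
import numpy as np

def refine_toward_average(start, target, k=10.0, steps=50000, step=0.1, seed=1):
    """Metropolis simulation with pseudo-energy E = k * sum|x - target|^2
    (kT = 1), driving a starting structure toward the averaged coordinates."""
    rng = np.random.default_rng(seed)
    x = start.copy()
    e = k * np.sum((x - target) ** 2)
    for _ in range(steps):
        i = rng.integers(len(x))             # perturb one residue at a time
        trial = x.copy()
        trial[i] += rng.normal(0.0, step, size=3)
        e_trial = k * np.sum((trial - target) ** 2)
        if e_trial < e or rng.random() < np.exp(e - e_trial):  # Metropolis rule
            x, e = trial, e_trial
    return x

target = np.zeros((10, 3))                 # the 'averaged structure'
extended = np.arange(30.0).reshape(10, 3)  # a far-away starting structure
refined = refine_toward_average(extended, target)
print(np.linalg.norm(refined - target) < np.linalg.norm(extended - target))  # True
```

    Starting from a physically sensible structure rather than the averaged coordinates themselves is what lets additional energy terms keep bond lengths and angles realistic while the harmonic term supplies the pull toward the consensus.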

  1. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu

    2009-01-01

    boundary-related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

  2. 49 CFR 195.406 - Maximum operating pressure.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195.406 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for...

  3. Maximum power point tracking algorithm based on sliding mode and fuzzy logic for photovoltaic sources under variable environmental conditions

    Science.gov (United States)

    Atik, L.; Petit, P.; Sawicki, J. P.; Ternifi, Z. T.; Bachir, G.; Della, M.; Aillerie, M.

    2017-02-01

    Solar panels have a nonlinear voltage-current characteristic, with a distinct maximum power point (MPP), which depends on environmental factors such as temperature and irradiation. In order to continuously harvest maximum power from the solar panels, they have to operate at their MPP despite the inevitable changes in the environment. Various methods for maximum power point tracking (MPPT) were developed and finally implemented in solar power electronic controllers to increase the efficiency of electricity production originating from renewables. In this paper we compare, using the Matlab Simulink tools, two different MPP tracking methods, fuzzy logic control (FL) and sliding mode control (SMC), considering their efficiency in solar energy production.
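As a simpler point of reference for the two controllers compared here, the classic perturb-and-observe MPPT loop can be sketched in a few lines; the PV power curve below is a hypothetical illustration, not one of the paper's Simulink models.

```python
# Minimal perturb-and-observe (P&O) MPPT sketch -- a simpler baseline,
# not the fuzzy-logic or sliding-mode controllers compared in the paper.
# The PV curve below is a hypothetical illustration, not measured data.

def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Toy PV power curve: current falls off sharply near open-circuit voltage."""
    if v <= 0 or v >= v_oc:
        return 0.0
    i = i_sc * (1.0 - (v / v_oc) ** 8)  # crude I-V shape
    return v * i

def perturb_and_observe(v0=20.0, step=0.2, iters=200):
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:          # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe()
```

The loop climbs the power curve and then oscillates around the MPP with an amplitude set by the perturbation step, which is the usual trade-off of P&O.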

  4. High neutronic efficiency, low current targets for accelerator-based BNCT applications

    International Nuclear Information System (INIS)

    Powell, J.R.; Ludewig, H.; Todosow, M.

    1998-01-01

    The neutronic efficiency of target/filters for accelerator-based BNCT applications is measured by the proton current required to achieve a desirable neutron current at the treatment port (10⁹ n/cm²/s). In this paper the authors describe two possible target/filter concepts which minimize the required current. Both concepts are based on the Li-7(p,n)Be-7 reaction. Targets that operate near the threshold energy generate neutrons that are close to the desired energy for BNCT treatment. Thus, the filter can be extremely thin (∼ 5 cm iron). However, this approach has an extremely low neutron yield (n/p ∼ 1.0×10⁻⁶), thus requiring a high proton current. The proposed solution is to design a target consisting of multiple extremely thin targets (proton energy loss per target ∼ 10 keV), and re-accelerate the protons between each target. Targets operating at higher proton energies (∼ 2.5 MeV) have a much higher yield (n/p ∼ 1.0×10⁻⁴). However, at these energies the maximum neutron energy is approximately 800 keV, and thus a neutron filter is required to degrade the average neutron energy to the range of interest for BNCT (10--20 keV). A neutron filter consisting of fluorine compounds and iron has been investigated for this case. Typically a proton current of approximately 5 mA is required to generate the desired neutron current at the treatment port. The efficiency of these filter designs can be further increased by incorporating neutron reflectors that are co-axial with the neutron source. These reflectors are made of materials which have high scattering cross sections in the range 0.1--1.0 MeV

  5. Maximum spectral demands in the near-fault region

    Science.gov (United States)

    Huang, Yin-Nan; Whittaker, Andrew S.; Luco, Nicolas

    2008-01-01

    The Next Generation Attenuation (NGA) relationships for shallow crustal earthquakes in the western United States predict a rotated geometric mean of horizontal spectral demand, termed GMRotI50, and not maximum spectral demand. Differences between strike-normal, strike-parallel, geometric-mean, and maximum spectral demands in the near-fault region are investigated using 147 pairs of records selected from the NGA strong motion database. The selected records are for earthquakes with moment magnitude greater than 6.5 and for closest site-to-fault distance less than 15 km. Ratios of maximum spectral demand to NGA-predicted GMRotI50 for each pair of ground motions are presented. The ratio shows a clear dependence on period and the Somerville directivity parameters. Maximum demands can substantially exceed NGA-predicted GMRotI50 demands in the near-fault region, which has significant implications for seismic design, seismic performance assessment, and the next-generation seismic design maps. Strike-normal spectral demands are a significantly unconservative surrogate for maximum spectral demands for closest distance greater than 3 to 5 km. Scale factors that transform NGA-predicted GMRotI50 to a maximum spectral demand in the near-fault region are proposed.
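The max-versus-median-over-rotations construction behind these ratios can be illustrated on synthetic traces. Note that real GMRotI50 is defined on oscillator response spectra, so the sketch below, which uses raw peak amplitudes of two synthetic horizontal components, is only a schematic analogue of the published quantities.

```python
# Simplified illustration of rotation-dependent demand for a pair of
# horizontal components. Real GMRotI50 is defined on oscillator response
# spectra; here we use raw peak amplitudes of synthetic traces instead,
# purely to show the max-vs-median-over-rotations construction.
import math

def rotated_peaks(x, y, n_angles=90):
    """Peak |x*cos(t) + y*sin(t)| for rotation angles t in [0, 180) degrees."""
    peaks = []
    for k in range(n_angles):
        t = math.pi * k / n_angles
        c, s = math.cos(t), math.sin(t)
        peaks.append(max(abs(c * xi + s * yi) for xi, yi in zip(x, y)))
    return peaks

# Two synthetic, partially correlated horizontal traces (hypothetical data).
n = 400
x = [math.sin(0.05 * i) for i in range(n)]
y = [0.6 * math.sin(0.05 * i + 1.2) for i in range(n)]

peaks = sorted(rotated_peaks(x, y))
sa_max = peaks[-1]                      # maximum over all rotations
gmrot_like = peaks[len(peaks) // 2]     # median over rotations (GMRot-style)
ratio = sa_max / gmrot_like             # >= 1 by construction
```

By construction the maximum over rotations can only exceed the rotation median, which is the qualitative reason the paper's ratios are greater than one.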

  6. Cloud-based design of high average power traveling wave linacs

    Science.gov (United States)

    Kutsaev, S. V.; Eidelman, Y.; Bruhwiler, D. L.; Moeller, P.; Nagler, R.; Barbe Welzel, J.

    2017-12-01

    The design of industrial high average power traveling wave linacs must accurately consider some specific effects. For example, acceleration of high current beam reduces power flow in the accelerating waveguide. Space charge may influence the stability of longitudinal or transverse beam dynamics. Accurate treatment of beam loading is central to the design of high-power TW accelerators, and it is especially difficult to model in the meter-scale region where the electrons are nonrelativistic. Currently, there are two types of available codes: tracking codes (e.g. PARMELA or ASTRA) that cannot solve self-consistent problems, and particle-in-cell codes (e.g. Magic 3D or CST Particle Studio) that can model the physics correctly but are very time-consuming and resource-demanding. Hellweg is a special tool for quick and accurate electron dynamics simulation in traveling wave accelerating structures. The underlying theory of this software is based on the differential equations of motion. The effects considered in this code include beam loading, space charge forces, and external magnetic fields. We present the current capabilities of the code, provide benchmarking results, and discuss future plans. We also describe the browser-based GUI for executing Hellweg in the cloud.

  7. Valley current characterization of high current density resonant tunnelling diodes for terahertz-wave applications

    Science.gov (United States)

    Jacobs, K. J. P.; Stevens, B. J.; Baba, R.; Wada, O.; Mukai, T.; Hogg, R. A.

    2017-10-01

    We report valley current characterisation of high current density InGaAs/AlAs/InP resonant tunnelling diodes (RTDs) grown by metal-organic vapour phase epitaxy (MOVPE) for THz emission, with a view to investigate the origin of the valley current and optimize device performance. By applying a dual-pass fabrication technique, we are able to measure the RTD I-V characteristic for different perimeter/area ratios, which uniquely allows us to investigate the contribution of leakage current to the valley current and its effect on the PVCR from a single device. Temperature dependent (20 - 300 K) characteristics for a device are critically analysed and the effect of temperature on the maximum extractable power (PMAX) and the negative differential conductance (NDC) of the device is investigated. By performing theoretical modelling, we are able to explore the effect of typical variations in structural composition during the growth process on the tunnelling properties of the device, and hence the device performance.

  8. Maximum allowable load on wheeled mobile manipulators

    International Nuclear Information System (INIS)

    Habibnejad Korayem, M.; Ghariblu, H.

    2003-01-01

    This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and mounted manipulator, their actuator limitations, and additional constraints applied to resolving the redundancy are probably the most important factors. To resolve the extra D.O.F. introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions applied to resolve the motion redundancy

  9. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

    The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation

  10. High current polarized electron source

    Science.gov (United States)

    Suleiman, R.; Adderley, P.; Grames, J.; Hansknecht, J.; Poelker, M.; Stutzman, M.

    2018-05-01

    Jefferson Lab operates two DC high voltage GaAs photoguns with compact inverted insulators. One photogun provides the polarized electron beam at the Continuous Electron Beam Accelerator Facility (CEBAF) up to 200 µA. The other gun is used for high average current photocathode lifetime studies at a dedicated test facility up to 4 mA of polarized beam and 10 mA of un-polarized beam. GaAs-based photoguns used at accelerators with extensive user programs must exhibit long photocathode operating lifetime. Achieving this goal represents a significant challenge for proposed facilities that must operate in excess of tens of mA of polarized average current. This contribution describes techniques to maintain good vacuum while delivering high beam currents, and techniques that minimize damage due to ion bombardment, the dominant mechanism that reduces photocathode yield. Advantages of higher DC voltage include reduced space-charge emittance growth and the potential for better photocathode lifetime. Highlights of R&D to improve the performance of polarized electron sources and prolong the lifetime of strained-superlattice GaAs are presented.

  11. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  12. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...

  13. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of two adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems under different turbulence regimes. Further, to enhance the ASE we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that the ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA outperforms the other modulation schemes and could achieve an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture averaging effect we could achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with Coherent OWC modulation a favorable candidate for improving the ASE of FSO communication systems.

  14. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...

  15. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  16. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. 
These results have tempted us to speculate over
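For the Mean Energy Model mentioned above, the entropy-maximizing distribution under a mean-energy constraint is the Gibbs form. A minimal numerical sketch (with illustrative energy levels, not taken from the paper) solves for the Lagrange multiplier by bisection:

```python
# Sketch of the Mean Energy Model: the maximum-entropy distribution with a
# prescribed mean energy is the Gibbs form p_i ~ exp(-beta * E_i).  We solve
# for beta by bisection so that the constraint <E> = target is met.
# Energies and target below are illustrative, not from the text.
import math

def gibbs(energies, beta):
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [wi / z for wi in w]

def mean_energy(energies, beta):
    p = gibbs(energies, beta)
    return sum(pi * ei for pi, ei in zip(p, energies))

def solve_beta(energies, target, lo=-50.0, hi=50.0, iters=100):
    # mean_energy is monotonically decreasing in beta, so bisection works
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_energy(energies, mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

E = [0.0, 1.0, 2.0, 3.0]
beta = solve_beta(E, target=1.0)
p = gibbs(E, beta)
entropy = -sum(pi * math.log(pi) for pi in p)
```

Among all distributions on these four levels with mean energy 1.0, this Gibbs solution attains the largest entropy, which is the content of the principle in its simplest moment-constrained form.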

  17. Determining average path length and average trapping time on generalized dual dendrimer

    Science.gov (United States)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide number of important applications in various fields. In some cases during a transport or diffusion process, a dendrimer transforms into its dual structure, named Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., the trap placed on a central node and the trap uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on the trapping efficiency.
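The average path length studied here can be computed by brute force on any small unweighted graph via breadth-first search; the sketch below uses a hand-made cactus-like toy graph, not the dual-dendrimer construction itself.

```python
# Generic average-path-length computation by BFS on an unweighted graph --
# a sketch of the quantity the paper derives analytically, applied here to a
# small hand-made graph rather than the dual-dendrimer construction itself.
from collections import deque

def bfs_distances(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def average_path_length(adj):
    nodes = list(adj)
    total, pairs = 0, 0
    for s in nodes:
        d = bfs_distances(adj, s)
        for t in nodes:
            if t != s:
                total += d[t]
                pairs += 1
    return total / pairs  # averages over ordered pairs; symmetric anyway

# A tiny cactus-like test graph (hypothetical, for illustration):
# two triangles joined by the edge 2-3.
adj = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}
apl = average_path_length(adj)
```

For analytic families like the dendrimers of the paper this brute-force value is what the closed-form APL expressions must reproduce on small instances.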

  18. 22 CFR 201.67 - Maximum freight charges.

    Science.gov (United States)

    2010-04-01

    ..., commodity rate classification, quantity, vessel flag category (U.S.-or foreign-flag), choice of ports, and... the United States. (2) Maximum charter rates. (i) USAID will not finance ocean freight under any... owner(s). (4) Maximum liner rates. USAID will not finance ocean freight for a cargo liner shipment at a...

  19. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  20. Total average diastolic longitudinal displacement by colour tissue doppler imaging as an assessment of diastolic function

    DEFF Research Database (Denmark)

    de Knegt, Martina Chantal; Biering-Sørensen, Tor; Søgaard, Peter

    2016-01-01

    BACKGROUND: The current method for a non-invasive assessment of diastolic dysfunction is complex with the use of algorithms of many different echocardiographic parameters. Total average diastolic longitudinal displacement (LD), determined by colour tissue Doppler imaging (TDI) via the measurement...

  1. Dosimetric consequences of planning lung treatments on 4DCT average reconstruction to represent a moving tumour

    International Nuclear Information System (INIS)

    Dunn, L.F.; Taylor, M.L.; Kron, T.; Franich, R.

    2010-01-01

    Full text: Anatomic motion during a radiotherapy treatment is one of the more significant challenges in contemporary radiation therapy. For tumours of the lung, motion due to patient respiration makes both accurate planning and dose delivery difficult. One approach is to use the maximum intensity projection (MIP) obtained from a 4D computed tomography (CT) scan and then use this to determine the treatment volume. The treatment is then planned on a 4DCT average reconstruction, rather than assuming the entire ITV has a uniform tumour density. This raises the question: how well does planning on a 'blurred' distribution of density with CT values greater than lung density but less than tumour density match the true case of a tumour moving within lung tissue? The aim of this study was to answer this question, determining the dosimetric impact of using a 4D-CT average reconstruction as the basis for a radiotherapy treatment plan. To achieve this, Monte-Carlo simulations were undertaken using GEANT4. The geometry consisted of a tumour (diameter 30 mm) moving with a sinusoidal pattern of amplitude = 20 mm. The tumour's excursion occurs within a lung-equivalent volume beyond a chest wall interface. Motion was defined parallel to a 6 MV beam. This was then compared to a single oblate tumour of a magnitude determined by the extremes of the tumour motion. The variable density of the 4DCT average tumour is simulated by a time-weighted average, to achieve the observed density gradient. The generic moving tumour geometry is illustrated in the Figure.

  2. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve which comprises a massive bulk of 'middle-class' values, and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.

  3. Simultaneous maximum a posteriori longitudinal PET image reconstruction

    Science.gov (United States)

    Ellis, Sam; Reader, Andrew J.

    2017-09-01

    Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.

  4. Determination of the maximum-depth to potential field sources by a maximum structural index method

    Science.gov (United States)

    Fedi, M.; Florio, G.

    2013-01-01

    A simple and fast determination of the limiting depth to the sources may represent a significant help to the data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent from the density contrast. Thanks to the direct relationship between structural index and depth to sources we work out a simple and fast strategy to obtain the maximum depth by using the semi-automated methods, such as Euler deconvolution or depth-from-extreme-points method (DEXP). The proposed method consists in estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas and the results are in fact very similar, confirming the validity of this method. However, while Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic field. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.

  5. Maximum Likelihood Method for Predicting Environmental Conditions from Assemblage Composition: The R Package bio.infer

    Directory of Open Access Journals (Sweden)

    Lester L. Yuan

    2007-06-01

    Full Text Available This paper provides a brief introduction to the R package bio.infer, a set of scripts that facilitates the use of maximum likelihood (ML methods for predicting environmental conditions from assemblage composition. Environmental conditions can often be inferred from only biological data, and these inferences are useful when other sources of data are unavailable. ML prediction methods are statistically rigorous and applicable to a broader set of problems than more commonly used weighted averaging techniques. However, ML methods require a substantially greater investment of time to program algorithms and to perform computations. This package is designed to reduce the effort required to apply ML prediction methods.
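The ML inference idea can be sketched with made-up Gaussian taxon response curves. This is not the bio.infer API, just an illustration of maximizing a presence/absence likelihood over a gridded environmental variable.

```python
# Toy maximum-likelihood inference of an environmental value from taxon
# presence/absence data, in the spirit of bio.infer but with made-up
# Gaussian response curves (optimum, tolerance, peak) -- not the package's
# real API or calibration data.
import math

# (optimum, tolerance, peak presence probability) per taxon -- hypothetical
taxa = [(4.0, 1.0, 0.8), (6.0, 1.5, 0.6), (8.0, 1.0, 0.7)]
observed = [1, 1, 0]     # presence/absence of each taxon at the site

def prob_present(env, opt, tol, peak):
    return peak * math.exp(-0.5 * ((env - opt) / tol) ** 2)

def log_likelihood(env):
    ll = 0.0
    for (opt, tol, peak), y in zip(taxa, observed):
        p = prob_present(env, opt, tol, peak)
        ll += math.log(p) if y else math.log(1.0 - p)
    return ll

grid = [i * 0.01 for i in range(100, 1001)]   # env values from 1.0 to 10.0
env_ml = max(grid, key=log_likelihood)        # ML estimate by grid search
```

The estimate lands between the optima of the two observed taxa, pulled slightly away from the absent one, which is the qualitative behavior that distinguishes ML inference from simple weighted averaging.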

  6. Maximum power output and load matching of a phosphoric acid fuel cell-thermoelectric generator hybrid system

    Science.gov (United States)

    Chen, Xiaohang; Wang, Yuan; Cai, Ling; Zhou, Yinghui

    2015-10-01

    Based on current models of phosphoric acid fuel cells (PAFCs) and thermoelectric generators (TGs), a new hybrid system is proposed, in which the effects of multiple irreversibilities are taken into account: the activation, concentration, and ohmic overpotentials in the PAFC; Joule heat and heat leak in the TG; finite-rate heat transfer between the TG and the heat reservoirs; and heat leak from the PAFC to the environment. Expressions for the power output and efficiency of the PAFC, TG, and hybrid system are analytically derived and directly used to discuss the performance characteristics of the hybrid system. The optimal relationship between the electric currents in the PAFC and TG is obtained. The maximum power output is numerically calculated. It is found that the maximum power output density of the hybrid system will increase by about 150 W/m², compared with that of a single PAFC. The problem of how to optimally match the load resistances of the two subsystems is discussed. Some significant results for practical hybrid systems are obtained.
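The load-matching question has a textbook analogue for a linear (Thevenin-like) source, where delivered power peaks when the load resistance equals the internal resistance. The sketch below verifies this numerically with arbitrary illustrative values, not the full PAFC/TG model of the paper.

```python
# Load-matching sketch for a linear (Thevenin-like) source: the delivered
# power V^2 * R / (R + r)^2 peaks at R = r.  This is a textbook stand-in
# for the subsystem load-matching question the paper treats with full
# PAFC/TG models; V and r below are arbitrary illustrative values.

def load_power(v_src, r_int, r_load):
    i = v_src / (r_int + r_load)
    return i * i * r_load

def best_load(v_src, r_int, r_min=0.01, r_max=10.0, steps=10000):
    best_r, best_p = r_min, 0.0
    for k in range(steps + 1):
        r = r_min + (r_max - r_min) * k / steps
        p = load_power(v_src, r_int, r)
        if p > best_p:
            best_r, best_p = r, p
    return best_r, best_p

r_opt, p_opt = best_load(v_src=12.0, r_int=1.5)
```

For a 12 V source with 1.5 Ω internal resistance the grid search recovers the matched load R ≈ r, where the delivered power equals V²/(4r).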

  7. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
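The difference between spot sampling and boxcar averaging can be reproduced on synthetic 1-min data: a fast oscillation survives spot sampling at full amplitude (which is how aliasing enters) but is attenuated by the 1-h boxcar. The signal below is synthetic, not observatory data.

```python
# Spot sampling vs 1-h boxcar averaging of synthetic 1-min "data":
# a slow daily-type variation plus a fast 80-min oscillation.  The boxcar
# average attenuates the fast component; the spot sample keeps its full
# amplitude, which is how aliasing enters hourly values.
import math

minutes = 24 * 60
slow = [10.0 * math.sin(2 * math.pi * t / minutes) for t in range(minutes)]
fast = [4.0 * math.sin(2 * math.pi * t / 80.0) for t in range(minutes)]
signal = [a + b for a, b in zip(slow, fast)]

spot = [signal[h * 60] for h in range(24)]                       # instantaneous
boxcar = [sum(signal[h * 60:(h + 1) * 60]) / 60.0 for h in range(24)]

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# Residual relative to the slow component alone: how much of the fast
# oscillation (plus averaging distortion) is left in each hourly series.
slow_hourly = [slow[h * 60] for h in range(24)]
spot_resid = rms([a - b for a, b in zip(spot, slow_hourly)])
box_resid = rms([a - b for a, b in zip(boxcar, slow_hourly)])
```

The boxcar residual is markedly smaller than the spot residual, mirroring the paper's finding that spot values are unbiased in amplitude but more strongly aliased than averaged samples.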

  8. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  9. Development of high current electron beam generator

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Byeong Cheol; Lee, Jong Min; Kim, Sun Kook [and others]

    1997-05-01

    A high-current electron beam generator has been developed. The energy and the average current of the electron beam are 2 MeV and 50 mA, respectively. The electron beam generator is composed of an electron gun, RF acceleration cavities, a 260-kW RF generator, electron beam optics components, and control system, etc. The electron beam generator will be used for the development of a millimeter-wave free-electron laser and a high average power infrared free-electron laser. The machine will also be used as a user facility in nuclear industry, environment industry, semiconductor industry, chemical industry, etc. (author). 15 tabs., 85 figs.

  10. Development of high current electron beam generator

    International Nuclear Information System (INIS)

    Lee, Byeong Cheol; Lee, Jong Min; Kim, Sun Kook

    1997-05-01

    A high-current electron beam generator has been developed. The energy and the average current of the electron beam are 2 MeV and 50 mA, respectively. The electron beam generator is composed of an electron gun, RF acceleration cavities, a 260-kW RF generator, electron beam optics components, and control system, etc. The electron beam generator will be used for the development of a millimeter-wave free-electron laser and a high average power infrared free-electron laser. The machine will also be used as a user facility in nuclear industry, environment industry, semiconductor industry, chemical industry, etc. (author). 15 tabs., 85 figs

  11. Photovoltaic High-Frequency Pulse Charger for Lead-Acid Battery under Maximum Power Point Tracking

    Directory of Open Access Journals (Sweden)

    Hung-I. Hsieh

    2013-01-01

    Full Text Available A photovoltaic pulse charger (PV-PC) using a high-frequency pulse train for charging a lead-acid battery (LAB) is proposed, not only to explore the charging behavior with maximum power point tracking (MPPT) but also to delay sulfating crystallization in the electrode pores of the LAB and so prolong battery life, which is achieved by a brief pulse break between adjacent pulses that refreshes the discharging of the LAB. Maximum energy transfer between the PV module and a boost current converter (BCC) is modeled to maximize the charging energy for the LAB under different solar insolation. A duty control, guided by a power-increment-aided incremental-conductance MPPT (PI-INC MPPT), is applied to the BCC so that it operates at the maximum power point (MPP) against random insolation. A 250 W PV-PC system for charging a four-in-series LAB (48 Vdc) is examined. The charging behavior of the PV-PC system is compared with that of a CC-CV charger. Four scenarios of charging status of the PV-PC system under different solar insolation changes are investigated and compared with those using INC MPPT.
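
The duty-control loop described above can be sketched with a plain incremental-conductance (INC) update; the power-increment aid of the paper's PI-INC variant is not reproduced here, and the function name, step size, and duty-to-voltage sign convention are illustrative assumptions, not taken from the paper.

```python
# Minimal incremental-conductance (INC) MPPT duty-control sketch.
# At the maximum power point dP/dV = 0, i.e. dI/dV = -I/V; the sign of
# (dI/dV + I/V) tells the controller which way to move the converter duty
# cycle. Whether a larger duty raises or lowers the PV operating voltage
# depends on the converter topology; here a larger duty is assumed to
# raise it.

def inc_mppt_step(v, i, v_prev, i_prev, duty, step=0.005):
    """Return an updated converter duty cycle from two PV samples."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:                       # voltage unchanged: check current alone
        if di == 0:
            return duty               # at MPP, hold duty
        duty += step if di > 0 else -step
    else:
        incremental = di / dv         # dI/dV
        instantaneous = -i / v        # -I/V
        if incremental == instantaneous:
            return duty               # dP/dV = 0: at MPP
        duty += step if incremental > instantaneous else -step
    return min(max(duty, 0.0), 0.95)  # clamp to a safe duty range
```

The update is called once per switching-control period with the latest PV voltage and current samples.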

  12. Uninterrupted thermoelectric energy harvesting using temperature-sensor-based maximum power point tracking system

    International Nuclear Information System (INIS)

    Park, Jae-Do; Lee, Hohyun; Bond, Matthew

    2014-01-01

    Highlights: • Feedforward MPPT scheme for uninterrupted TEG energy harvesting is suggested. • Temperature sensors are used to avoid current measurement or source disconnection. • MPP voltage reference is generated based on OCV vs. temperature differential model. • Optimal operating condition is maintained using hysteresis controller. • Any type of power converter can be used in the proposed scheme. - Abstract: In this paper, a thermoelectric generator (TEG) energy harvesting system with a temperature-sensor-based maximum power point tracking (MPPT) method is presented. Conventional MPPT algorithms for photovoltaic cells may not be suitable for thermoelectric power generation because a significant amount of time is required for TEG systems to reach a steady state. Moreover, complexity and additional power consumption in conventional circuits and periodic disconnection of power source are not desirable for low-power energy harvesting applications. The proposed system can track the varying maximum power point (MPP) with a simple and inexpensive temperature-sensor-based circuit without instantaneous power measurement or TEG disconnection. This system uses TEG’s open circuit voltage (OCV) characteristic with respect to temperature gradient to generate a proper reference voltage signal, i.e., half of the TEG’s OCV. The power converter controller maintains the TEG output voltage at the reference level so that the maximum power can be extracted for the given temperature condition. This feedforward MPPT scheme is inherently stable and can be implemented without any complex microcontroller circuit. The proposed system has been validated analytically and experimentally, and shows a maximum power tracking error of 1.15%
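
The feedforward scheme above reduces to a short rule: estimate the TEG open-circuit voltage from the measured temperature gradient and regulate the terminal voltage to half of it. A minimal sketch follows, assuming a linear OCV model (V_oc ≈ S·ΔT); the Seebeck coefficient and hysteresis band values are illustrative, not from the paper.

```python
# Feedforward MPPT reference generation for a TEG: a source with series
# internal resistance delivers maximum power when loaded to half its
# open-circuit voltage, so the converter regulates V_teg to V_oc / 2.

SEEBECK = 0.05      # effective module Seebeck coefficient, V/K (assumed)
HYSTERESIS = 0.05   # controller dead-band, V (assumed)

def mpp_reference(t_hot, t_cold, seebeck=SEEBECK):
    """Half the estimated open-circuit voltage for the given gradient."""
    v_oc = seebeck * (t_hot - t_cold)
    return v_oc / 2.0

def hysteresis_command(v_teg, v_ref, band=HYSTERESIS):
    """Bang-bang converter command: True = draw more current."""
    if v_teg > v_ref + band:
        return True    # terminal voltage above band: increase load current
    if v_teg < v_ref - band:
        return False   # below band: decrease load current
    return None        # inside the band: keep the current switch state
```

Because the reference comes from temperature sensors alone, no current measurement or periodic source disconnection is needed, matching the abstract's motivation.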

  13. CMOS switched current phase-locked loop

    NARCIS (Netherlands)

    Leenaerts, D.M.W.; Persoon, G.G.; Putter, B.M.

    1997-01-01

    The authors present an integrated circuit realisation of a switched current phase-locked loop (PLL) in standard 2.4 µm CMOS technology. The centre frequency is tunable to 1 MHz at a clock frequency of 5.46 MHz. The PLL has a measured maximum phase error of 21 degrees. The chip consumes

  14. 40 CFR 141.13 - Maximum contaminant levels for turbidity.

    Science.gov (United States)

    2010-07-01

    ... turbidity. 141.13 Section 141.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... Maximum contaminant levels for turbidity. The maximum contaminant levels for turbidity are applicable to... part. The maximum contaminant levels for turbidity in drinking water, measured at a representative...

  15. Upwelling systems in eastern boundary currents have been ...

    African Journals Online (AJOL)

    spamer

    Differences are found in the location of return, onshore flow. .... eastern boundary currents, downstream of the west wind drift ... show maximum upwelling conditions (equatorward winds) in ..... The work of PTS and CJ was supported by Grant.

  16. Generalized lower-hybrid drift instabilities in current-sheet equilibrium

    International Nuclear Information System (INIS)

    Yoon, Peter H.; Lui, Anthony T. Y.; Sitnov, Mikhail I.

    2002-01-01

    A class of drift instabilities in one-dimensional current-sheet configuration, i.e., classical Harris equilibrium, with frequency ranging from low ion-cyclotron to intermediate lower-hybrid frequencies, are investigated with an emphasis placed on perturbations propagating along the direction of cross-field current flow. Nonlocal two-fluid stability analysis is carried out, and a class of unstable modes with multiple eigenstates, similar to that of the familiar quantum mechanical potential-well problem, are found by numerical means. It is found that the most unstable modes correspond to quasi-electrostatic, short-wavelength perturbations in the lower-hybrid frequency range, with wave functions localized at the edge of the current sheet where the density gradient is maximum. It is also found that there exist quasi-electromagnetic modes located near the center of the current sheet where the current density is maximum, with both kink- and sausage-type polarizations. These modes are low-frequency, long-wavelength perturbations. It turns out that the current-driven modes are low-order eigensolutions while the lower-hybrid-type modes are higher-order states, and there are intermediate solutions between the two extreme cases. Attempts are made to interpret the available simulation results in light of the present eigenmode analysis

  17. 40 CFR 141.62 - Maximum contaminant levels for inorganic contaminants.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Maximum contaminant levels for inorganic contaminants. 141.62 Section 141.62 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Water Regulations: Maximum Contaminant Levels and Maximum Residual Disinfectant Levels § 141.62 Maximum...

  18. Radiation induced leakage current and stress induced leakage current in ultra-thin gate oxides

    International Nuclear Information System (INIS)

    Ceschia, M.; Paccagnella, A.; Cester, A.; Scarpa, A.

    1998-01-01

    Low-field leakage current has been measured in thin oxides after exposure to ionizing radiation. This Radiation Induced Leakage Current (RILC) can be described as an inelastic tunneling process mediated by neutral traps in the oxide, with an energy loss of about 1 eV. The neutral trap distribution is influenced by the oxide field applied during irradiation, thus indicating that the precursors of the neutral defects are charged, likely being defects associated to trapped holes. The maximum leakage current is found under zero-field condition during irradiation, and it rapidly decreases as the field is enhanced, due to a displacement of the defect distribution across the oxide towards the cathodic interface. The RILC kinetics are linear with the cumulative dose, in contrast with the power law found on electrically stressed devices

  19. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.

  20. Two-Dimensional Depth-Averaged Beach Evolution Modeling: Case Study of the Kizilirmak River Mouth, Turkey

    DEFF Research Database (Denmark)

    Baykal, Cüneyt; Ergin, Ayşen; Güler, Işikhan

    2014-01-01

    This study presents an application of a two-dimensional beach evolution model to a shoreline change problem at the Kizilirmak River mouth, which has been facing severe coastal erosion problems for more than 20 years. The shoreline changes at the Kizilirmak River mouth have been thus far investigated by satellite images, physical model tests, and one-dimensional numerical models. The current study uses a two-dimensional depth-averaged numerical beach evolution model, developed based on existing methodologies. This model is mainly composed of four main submodels: a phase-averaged spectral wave transformation model, a two-dimensional depth-averaged numerical wave-induced circulation model, a sediment transport model, and a bottom evolution model. To validate and verify the numerical model, it is applied to several cases of laboratory experiments. Later, the model is applied to a shoreline change problem...

  1. Filament heater current modulation for increased filament lifetime

    International Nuclear Information System (INIS)

    Paul, J.D.; Williams, H.E. III.

    1996-01-01

    The surface conversion H-minus ion source employs two 60 mil tungsten filaments which are approximately 17 centimeters in length. These filaments are heated to approximately 2,800 degrees centigrade by 95--100 amperes of DC heater current. The arc is struck at a 120 hertz rate, for 800 microseconds and is generally run at 30 amperes peak current. Although sputtering is considered a contributing factor in the demise of the filament, evaporation is of greater concern. If the peak arc current can be maintained with less average heater current, the filament evaporation rate for this arc current will diminish. In the vacuum of an ion source, the authors expect the filaments to retain much of their heat throughout a 1 millisecond (12% duty) loss of heater current. A circuit to eliminate 100 ampere heater currents from filaments during the arc pulse was developed. The magnetic field due to the 100 ampere current tends to hold electrons to the filament, decreasing the arc current. By eliminating this magnetic field, the arc should be more efficient, allowing the filaments to run at a lower average heater current. This should extend the filament lifetime. The circuit development and preliminary filament results are discussed

  2. Accurate modeling and maximum power point detection of ...

    African Journals Online (AJOL)

    Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ..... adaptive artificial neural network: Proposition for a new sizing procedure.

  3. 40 CFR 141.61 - Maximum contaminant levels for organic contaminants.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Maximum contaminant levels for organic contaminants. 141.61 Section 141.61 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... Regulations: Maximum Contaminant Levels and Maximum Residual Disinfectant Levels § 141.61 Maximum contaminant...

  4. The power and robustness of maximum LOD score statistics.

    Science.gov (United States)

    Yoo, Y J; Mendell, N R

    2008-07-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
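
As a toy illustration of maximizing a LOD score over a genetic parameter, the sketch below varies only the recombination fraction for phase-known meioses, whereas the paper maximizes over several transmission parameters; the grid resolution is an arbitrary choice.

```python
# For n phase-known meioses with r observed recombinants, the likelihood is
# L(theta) = theta**r * (1 - theta)**(n - r), and the LOD score is
# LOD(theta) = log10(L(theta) / L(0.5)). The maximum LOD score scans theta
# over its admissible range (0, 0.5).
import math

def lod(theta, n, r):
    return (r * math.log10(theta) + (n - r) * math.log10(1 - theta)
            - n * math.log10(0.5))

def max_lod(n, r, grid=None):
    grid = grid or [i / 1000 for i in range(1, 500)]  # theta in (0, 0.5)
    return max(lod(t, n, r) for t in grid)
```

For example, 1 recombinant in 10 meioses peaks at theta = 0.1, giving a maximum LOD of about 1.6.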

  5. Savannah River Site radioiodine atmospheric releases and offsite maximum doses

    International Nuclear Information System (INIS)

    Marter, W.L.

    1990-01-01

    Radioisotopes of iodine have been released to the atmosphere from the Savannah River Site since 1955. The releases, mostly from the 200-F and 200-H Chemical Separations areas, consist of the isotopes I-129 and I-131. Small amounts of I-131 and I-133 have also been released from reactor facilities and the Savannah River Laboratory. This reference memorandum was issued to summarize our current knowledge of releases of radioiodines and resultant maximum offsite doses. This memorandum supplements the reference memorandum by providing more detailed supporting technical information. Doses reported in this memorandum from consumption of the milk containing the highest I-131 concentration following the 1961 I-131 release incident are about 1% higher than reported in the reference memorandum. This is the result of using unrounded concentrations of I-131 in milk in this memo. It is emphasized here that this technical report does not constitute a dose reconstruction in the same sense as the dose reconstruction effort currently underway at Hanford. This report uses existing published data for radioiodine releases and existing transport and dosimetry models.

  6. Control for the Three-Phase Four-Wire Four-Leg APF Based on SVPWM and Average Current Method

    Directory of Open Access Journals (Sweden)

    Xiangshun Li

    2015-01-01

    Full Text Available A novel control method is proposed for the three-phase four-wire four-leg active power filter (APF) to realize accurate, real-time compensation of harmonics in the power system, combining space vector pulse width modulation (SVPWM) with a triangle modulation strategy. Firstly, the basic principle of the APF is briefly described. Then the harmonic and reactive currents are derived using the instantaneous reactive power theory. Finally, simulations and experiments are carried out to verify the validity and effectiveness of the proposed method. The simulation results show that the response time for compensation is about 0.025 s and that the total harmonic distortion (THD) of the source current of phase A is reduced from 33.38% before compensation to 3.05% with the APF.
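
The harmonic- and reactive-current derivation rests on the instantaneous reactive power (p-q) theory; a minimal sketch of its first step is given below. A full APF controller would additionally low-pass-filter p to isolate its oscillating (harmonic) part and inverse-transform the result into compensation current references; those stages are omitted, and sign conventions for q vary between texts.

```python
# Sketch of the p-q theory step: three-phase samples are mapped to the
# alpha-beta frame with the power-invariant Clarke transform, then the
# instantaneous real power p and imaginary power q are formed.
import math

def clarke(a, b, c):
    """Power-invariant Clarke transform of one three-phase sample."""
    alpha = math.sqrt(2 / 3) * (a - 0.5 * b - 0.5 * c)
    beta = math.sqrt(2 / 3) * (math.sqrt(3) / 2) * (b - c)
    return alpha, beta

def pq(v_abc, i_abc):
    """Instantaneous real power p and imaginary power q."""
    va, vb = clarke(*v_abc)
    ia, ib = clarke(*i_abc)
    p = va * ia + vb * ib
    q = vb * ia - va * ib
    return p, q
```

For a balanced sinusoidal load in phase with the voltage, p is constant and q is zero, so no compensation current is demanded.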

  7. Maximum Power Training and Plyometrics for Cross-Country Running.

    Science.gov (United States)

    Ebben, William P.

    2001-01-01

    Provides a rationale for maximum power training and plyometrics as conditioning strategies for cross-country runners, examining: an evaluation of training methods (strength training and maximum power training and plyometrics); biomechanic and velocity specificity (role in preventing injury); and practical application of maximum power training and…

  8. Estimating total maximum daily loads with the Stochastic Empirical Loading and Dilution Model

    Science.gov (United States)

    Granato, Gregory; Jones, Susan Cheung

    2017-01-01

    The Massachusetts Department of Transportation (DOT) and the Rhode Island DOT are assessing and addressing roadway contributions to total maximum daily loads (TMDLs). Example analyses for total nitrogen, total phosphorus, suspended sediment, and total zinc in highway runoff were done by the U.S. Geological Survey in cooperation with FHWA to simulate long-term annual loads for TMDL analyses with the stochastic empirical loading and dilution model known as SELDM. Concentration statistics from 19 highway runoff monitoring sites in Massachusetts were used with precipitation statistics from 11 long-term monitoring sites to simulate long-term pavement yields (loads per unit area). Highway sites were stratified by traffic volume or surrounding land use to calculate concentration statistics for rural roads, low-volume highways, high-volume highways, and ultraurban highways. The median of the event mean concentration statistics in each traffic volume category was used to simulate annual yields from pavement for a 29- or 30-year period. Long-term average yields for total nitrogen, phosphorus, and zinc from rural roads are lower than yields from the other categories, but yields of sediment are higher than for the low-volume highways. The average yields of the selected water quality constituents from high-volume highways are 1.35 to 2.52 times the associated yields from low-volume highways. The average yields of the selected constituents from ultraurban highways are 1.52 to 3.46 times the associated yields from high-volume highways. Example simulations indicate that both concentration reduction and flow reduction by structural best management practices are crucial for reducing runoff yields.

  9. Analysis on Θ pumping for tokamak current drive

    International Nuclear Information System (INIS)

    Miyamoto, Kenro; Naito, Osamu

    1986-01-01

    Analytical results of Θ pumping for tokamak current drive are presented. Diffusion of an externally applied oscillating electric field into the tokamak plasma is examined when the plasma is normal. When the oscillating electric field is parallel to the stationary toroidal plasma current and the induced current density becomes larger than the average density of the toroidal plasma current over the plasma cross section, the radial profile of the safety factor has an extremum near the plasma boundary region and MHD instabilities are excited. It is assumed that anomalous diffusion of the induced current localized in the plasma boundary region takes place, so that the extreme value in the radial profile of the safety factor disappears. The anomalously diffused electric field due to this relaxation process has a net d.c. component, and its nonzero time-averaged value is estimated. Then the condition for tokamak current drive by Θ pumping is derived. Some numerical results are presented for an example of a fusion-grade plasma. (author)

  10. Maximum warming occurs about one decade after a carbon dioxide emission

    International Nuclear Information System (INIS)

    Ricke, Katharine L; Caldeira, Ken

    2014-01-01

    It is known that carbon dioxide emissions cause the Earth to warm, but no previous study has focused on examining how long it takes to reach maximum warming following a particular CO2 emission. Using conjoined results of carbon-cycle and physical-climate model intercomparison projects (Taylor et al 2012, Joos et al 2013), we find the median time between an emission and maximum warming is 10.1 years, with a 90% probability range of 6.6–30.7 years. We evaluate uncertainties in timing and amount of warming, partitioning them into three contributing factors: carbon cycle, climate sensitivity and ocean thermal inertia. If uncertainty in any one factor is reduced to zero without reducing uncertainty in the other factors, the majority of overall uncertainty remains. Thus, narrowing uncertainty in century-scale warming depends on narrowing uncertainty in all contributing factors. Our results indicate that benefit from avoided climate damage from avoided CO2 emissions will be manifested within the lifetimes of people who acted to avoid that emission. While such avoidance could be expected to benefit future generations, there is potential for emissions avoidance to provide substantial benefit to current generations. (letter)

  11. Maximum warming occurs about one decade after a carbon dioxide emission

    Science.gov (United States)

    Ricke, Katharine L.; Caldeira, Ken

    2014-12-01

    It is known that carbon dioxide emissions cause the Earth to warm, but no previous study has focused on examining how long it takes to reach maximum warming following a particular CO2 emission. Using conjoined results of carbon-cycle and physical-climate model intercomparison projects (Taylor et al 2012, Joos et al 2013), we find the median time between an emission and maximum warming is 10.1 years, with a 90% probability range of 6.6-30.7 years. We evaluate uncertainties in timing and amount of warming, partitioning them into three contributing factors: carbon cycle, climate sensitivity and ocean thermal inertia. If uncertainty in any one factor is reduced to zero without reducing uncertainty in the other factors, the majority of overall uncertainty remains. Thus, narrowing uncertainty in century-scale warming depends on narrowing uncertainty in all contributing factors. Our results indicate that benefit from avoided climate damage from avoided CO2 emissions will be manifested within the lifetimes of people who acted to avoid that emission. While such avoidance could be expected to benefit future generations, there is potential for emissions avoidance to provide substantial benefit to current generations.

  12. Delineation of facial archetypes by 3d averaging.

    Science.gov (United States)

    Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G

    2004-10-01

    The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D scans of the faces, while 3D Rugle3 and locally developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups, European and Japanese, and from children with three genetic disorders (Williams syndrome, achondroplasia and Sotos syndrome) as well as from a normal control group. The method involved averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face-averaging techniques there was no warping or filling-in of spaces by interpolation; however, the facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
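
The averaging step described above (pointwise means of corresponding depth coordinates, with no warping or interpolation) can be sketched as follows; registration of the scans to a common grid is assumed, and the function name is illustrative.

```python
# Minimal sketch of depth-map averaging: each registered face scan is a
# 2-D grid of z (depth) values, and the archetype is the pointwise mean
# of the corresponding z coordinates across all scans.

def average_faces(depth_maps):
    """Pointwise mean of equally sized 2-D depth grids (lists of rows)."""
    n = len(depth_maps)
    rows, cols = len(depth_maps[0]), len(depth_maps[0][0])
    return [[sum(m[r][c] for m in depth_maps) / n for c in range(cols)]
            for r in range(rows)]
```

Because no interpolation is performed, the output grid has exactly the dimensions of the input scans, matching the method in the abstract.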

  13. To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2007-01-01

    We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'

  14. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^−6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^−8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models

  15. Probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty from maximum temperature metric selection

    Science.gov (United States)

    DeWeber, Jefferson T.; Wagner, Tyler

    2018-01-01

    Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30‐day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species’ distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold‐water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid‐century climate projections, which was similar to variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as .2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation

  16. Probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty from maximum temperature metric selection.

    Science.gov (United States)

    DeWeber, Jefferson T; Wagner, Tyler

    2018-06-01

    Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30-day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species' distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold-water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid-century climate projections, which was similar to variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as .2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation actions. 

  17. PEAK-TO-AVERAGE POWER RATIO REDUCTION USING CODING AND HYBRID TECHNIQUES FOR OFDM SYSTEM

    Directory of Open Access Journals (Sweden)

    Bahubali K. Shiragapur

    2016-03-01

    Full Text Available In this article, error-correction coding techniques are investigated as a means of reducing the undesirable Peak-to-Average Power Ratio (PAPR). The Golay code (24, 12), the Reed-Muller code (16, 11), the Hamming code (7, 4) and a hybrid technique (a combination of signal scrambling and signal distortion) proposed by us are used as the coding techniques. The simulation results show that the hybrid technique reduces PAPR significantly compared with the conventional and modified selective mapping techniques. The simulation results are validated through statistical properties: the proposed technique's autocorrelation value is maximal, indicating a reduction in PAPR. Symbol preference based on Hamming distance is the key idea for reducing PAPR. The simulation results are discussed in detail in this article.
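As a rough illustration of the quantity being minimized, the PAPR of one OFDM symbol can be computed directly from its time-domain (IFFT) signal. This is a generic sketch; the subcarrier count and QPSK mapping below are assumptions for illustration, not taken from the article:

```python
import numpy as np

def papr_db(freq_symbols, n_fft=64):
    """PAPR (dB) of one OFDM symbol: peak instantaneous power of the
    time-domain (IFFT) signal divided by its average power."""
    x = np.fft.ifft(freq_symbols, n=n_fft)
    power = np.abs(x) ** 2
    return 10.0 * np.log10(power.max() / power.mean())

# Hypothetical example: random QPSK symbols on 64 subcarriers.
rng = np.random.default_rng(1)
qpsk = (rng.choice([-1.0, 1.0], 64) + 1j * rng.choice([-1.0, 1.0], 64)) / np.sqrt(2)
print(f"PAPR = {papr_db(qpsk):.2f} dB")
```

Coding-based PAPR reduction restricts transmission to codewords whose IFFT keeps this ratio low.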

  18. Theoretical and experimental investigations of the limits to the maximum output power of laser diodes

    International Nuclear Information System (INIS)

    Wenzel, H; Crump, P; Pietrzak, A; Wang, X; Erbert, G; Traenkle, G

    2010-01-01

    The factors that limit both the continuous wave (CW) and the pulsed output power of broad-area laser diodes driven at very high currents are investigated theoretically and experimentally. The decrease in the gain due to self-heating under CW operation and spectral holeburning under pulsed operation, as well as heterobarrier carrier leakage and longitudinal spatial holeburning, are the dominant mechanisms limiting the maximum achievable output power.

  19. Observation of ocean current response to 1998 Hurricane Georges in the Gulf of Mexico

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The ocean current response to a hurricane on the shelf-break is examined. The study area is the DeSoto Canyon in the northeast Gulf of Mexico, and the event is the passage of 1998 Hurricane Georges with a maximum wind speed of 49 m/s. The data sets used for analysis consist of the mooring data taken by the Field Program of the DeSoto Canyon Eddy Intrusion Study, and simultaneous winds observed by NOAA (National Oceanic and Atmospheric Administration) Moored Buoy 42040. Time-depth ocean current energy density images derived from the observed data show that the ocean currents respond almost immediately to the hurricane, with important differences on and off the shelf. On the shelf, in the shallow water of 100 m, the disturbance penetrates rapidly downward to the bottom and forms two energy peaks: the major peak is located in the mixed layer and the secondary one in the lower layer. The response dissipates quickly after the external forcing disappears. Off the shelf, in the deep water, the major disturbance energy seems to be trapped in the mixed layer with a trailing oscillation, although the disturbance signals may still be observed at the depths of 500 and 1 290 m. Vertical dispersion analysis reveals that the near-inertial wave packet generated off the shelf consists of two modes. One is a barotropic wave mode characterized by a fast decay rate of velocity amplitude of 0.020 s⁻¹, and the other is a baroclinic wave mode characterized by a slow decay rate of 0.0069 s⁻¹. Band-pass filtering and empirical orthogonal function techniques are employed for the frequency analysis. The results indicate that all frequencies shift above the local inertial frequency. On the shelf, the average frequency is 1.04f in the mixed layer, close to the diagnosed frequency of the first baroclinic mode, and the average frequency increases to 1.07f in the thermocline. Off the shelf, all frequencies are a little smaller than the diagnosed frequency of the first mode. The average frequency decreases from 1…

  20. Adaptive double-integral-sliding-mode-maximum-power-point tracker for a photovoltaic system

    Directory of Open Access Journals (Sweden)

    Bidyadhar Subudhi

    2015-10-01

    Full Text Available This study proposes an adaptive double-integral-sliding-mode-controller maximum-power-point tracker (DISMC-MPPT) for maximum-power-point (MPP) tracking of a photovoltaic (PV) system. The objective of this study is to design a DISMC-MPPT with a new adaptive double-integral sliding surface so that MPP tracking is achieved with reduced chattering and steady-state error in the output voltage or current. The proposed adaptive DISMC-MPPT possesses a very simple and efficient PWM-based control structure that keeps the switching frequency constant. The controller is designed considering the reaching and stability conditions to provide robustness and stability. The performance of the proposed adaptive DISMC-MPPT is verified through both MATLAB/Simulink simulation and experiment using a 0.2 kW prototype PV system. From the obtained results, this DISMC-MPPT is found to be more efficient than Tan's and Jiao's DISMC-MPPTs.
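The maximum power point that such a tracker seeks can be illustrated numerically with a generic single-diode PV model; all parameter values below are hypothetical, not taken from this study:

```python
import numpy as np

def pv_current(v, i_ph=5.0, i_0=1e-9, n_vt=0.7):
    """Illustrative single-diode PV model: photocurrent minus diode current."""
    return i_ph - i_0 * (np.exp(v / n_vt) - 1.0)

def maximum_power_point(v):
    """Locate the MPP on a voltage sweep by maximizing P = V * I."""
    p = v * pv_current(v)
    k = int(np.argmax(p))
    return v[k], p[k]

v = np.linspace(0.0, 16.0, 20001)   # sweep up to roughly the open-circuit voltage
v_mpp, p_mpp = maximum_power_point(v)
```

A sliding-mode MPPT drives the operating voltage toward this point under changing irradiance rather than sweeping the whole curve.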

  1. The last glacial maximum

    Science.gov (United States)

    Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.

    2009-01-01

    We used 5704 ¹⁴C, ¹⁰Be, and ³He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.

  2. Microwatt power consumption maximum power point tracking circuit using an analogue differentiator for piezoelectric energy harvesting

    Science.gov (United States)

    Chew, Z. J.; Zhu, M.

    2015-12-01

    A maximum power point tracking (MPPT) scheme by tracking the open-circuit voltage from a piezoelectric energy harvester using a differentiator is presented in this paper. The MPPT controller is implemented by using a low-power analogue differentiator and comparators without the need of a sensing circuitry and a power hungry controller. This proposed MPPT circuit is used to control a buck converter which serves as a power management module in conjunction with a full-wave bridge diode rectifier. Performance of this MPPT control scheme is verified by using the prototyped circuit to track the maximum power point of a macro-fiber composite (MFC) as the piezoelectric energy harvester. The MFC was bonded on a composite material and the whole specimen was subjected to various strain levels at frequency from 10 to 100 Hz. Experimental results showed that the implemented full analogue MPPT controller has a tracking efficiency between 81% and 98.66% independent of the load, and consumes an average power of 3.187 μW at 3 V during operation.

  3. Maximum physical capacity testing in cancer patients undergoing chemotherapy

    DEFF Research Database (Denmark)

    Knutsen, L.; Quist, M; Midtgaard, J

    2006-01-01

    BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determine... early in the treatment process. However, the patients were self-referred and thus highly motivated and as such are not necessarily representative of the whole population of cancer patients treated with chemotherapy... in performing maximum physical capacity tests as these motivated them through self-perceived competitiveness and set a standard that served to encourage peak performance. CONCLUSION: The positive attitudes in this sample towards maximum physical capacity open the possibility of introducing physical testing...

  4. MAXIMUM PRINCIPLE FOR SUBSONIC FLOW WITH VARIABLE ENTROPY

    Directory of Open Access Journals (Sweden)

    B. Sizykh Grigory

    2017-01-01

    Full Text Available The maximum principle for subsonic flow holds for stationary irrotational subsonic gas flows. According to this principle, if the value of the velocity is not constant everywhere, then its maximum is achieved on the boundary and only on the boundary of the considered domain. This property is used when designing the form of an aircraft with a maximum critical value of the Mach number: it is believed that if the local Mach number is less than unity in the incoming flow and on the body surface, then the Mach number is less than unity at all points of the flow. The known proof of the maximum principle for subsonic flow is based on the assumption that in the whole considered area of the flow the pressure is a function of density. For an ideal and perfect gas (the role of diffusion is negligible, and the Mendeleev-Clapeyron law is fulfilled), the pressure is a function of density if the entropy is constant in the entire considered area of the flow. An example is shown of a stationary subsonic irrotational flow in which the entropy has different values on different streamlines, and the pressure is not a function of density. Applying the maximum principle for subsonic flow to such a flow would be unreasonable. This example shows the relevance of the question of where the points of maximum velocity are located when the entropy is not constant. To clarify the regularities of the location of these points, an analysis of the complete Euler equations (without any simplifying assumptions) was performed in the 3-D case. A new proof of the maximum principle for subsonic flow is proposed. This proof does not rely on the assumption that the pressure is a function of density. Thus, it is shown that the maximum principle for subsonic flow is true for stationary subsonic irrotational flows of an ideal perfect gas with variable entropy.

  5. Solar maximum mission: Ground support programs at the Harvard Radio Astronomy Station

    Science.gov (United States)

    Maxwell, A.

    1983-01-01

    Observations of the spectral characteristics of solar radio bursts were made with new dynamic spectrum analyzers of high sensitivity and high reliability, over the frequency range 25-580 MHz. The observations also covered the maximum period of the current solar cycle and the period of international cooperative programs designated as the Solar Maximum Year. Radio data on shock waves generated by solar flares were combined with optical data on coronal transients, taken with equipment on the SMM and other satellites, and then incorporated into computer models for the outward passage of fast-mode MHD shocks through the solar corona. The MHD models are non-linear, time-dependent and for the most recent models, quasi-three-dimensional. They examine the global response of the corona for different types of input pulses (thermal, magnetic, etc.) and for different magnetic topologies (for example, open and closed fields). Data on coronal shocks and high-velocity material ejected from solar flares have been interpreted in terms of a model consisting of three main velocity regimes.

  6. Diameter dependent failure current density of gold nanowires

    International Nuclear Information System (INIS)

    Karim, S; Maaz, K; Ali, G; Ensinger, W

    2009-01-01

    Failure current density of single gold nanowires is investigated in this paper. Single wires with diameters ranging from 80 to 720 nm and a length of 30 μm were electrochemically deposited in ion track-etched single-pore polycarbonate membranes. The maximum current density was investigated while keeping the wires embedded in the polymer matrix and ramping up the current until failure occurred. The current density is found to increase with diminishing diameter, and the wires with a diameter of 80 nm withstand 1.2 × 10¹² A m⁻² before undergoing failure. Possible reasons for these results are discussed in this paper.

  7. Beamforming using subspace estimation from a diagonally averaged sample covariance.

    Science.gov (United States)

    Quijano, Jorge E; Zurk, Lisa M

    2017-08-01

    The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
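The diagonal-averaging step at the core of this estimator can be sketched as follows. This is a minimal version; the paper additionally extrapolates the covariance by the method of maximum entropy, which is omitted here:

```python
import numpy as np

def toeplitz_constrain(R):
    """Replace each subdiagonal of the (Hermitian) sample covariance R
    with its average, yielding a Toeplitz-constrained estimate."""
    n = R.shape[0]
    r = np.array([np.diagonal(R, k).mean() for k in range(n)])
    T = np.empty_like(R)
    for i in range(n):
        for j in range(n):
            T[i, j] = r[j - i] if j >= i else np.conj(r[i - j])
    return T
```

For a uniform line array, far-field plane waves produce exactly Toeplitz covariances, so the constraint suppresses the estimation noise that limited-snapshot sample covariances carry.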

  8. Assessment of Energy Production Potential from Ocean Currents along the United States Coastline

    Energy Technology Data Exchange (ETDEWEB)

    Haas, Kevin

    2013-09-15

    estimates from the Stommel model and to help determine the size and capacity of arrays necessary to extract the maximum theoretical power, further estimates of the available power based on the distribution of the kinetic power density in the undisturbed flow was completed. This used estimates of the device spacing and scaling to sum up the total power that the devices would produce. The analysis has shown that considering extraction over a region comprised of the Florida Current portion of the Gulf Stream system, the average power dissipated ranges between 4-6 GW with a mean around 5.1 GW. This corresponds to an average of approximately 45 TWh/yr. However, if the extraction area comprises the entire portion of the Gulf Stream within 200 miles of the US coastline from Florida to North Carolina, the average power dissipated becomes 18.6 GW or 163 TWh/yr. A web based GIS interface, http://www.oceancurrentpower.gatech.edu/, was developed for dissemination of the data. The website includes GIS layers of monthly and yearly mean ocean current velocity and power density for ocean currents along the entire coastline of the United States, as well as joint and marginal probability histograms for current velocities at a horizontal resolution of 4-7 km with 10-25 bins over depth. Various tools are provided for viewing, identifying, filtering and downloading the data.
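The abstract's conversion between average dissipated power and annual energy can be checked with one line, assuming continuous operation over 8760 hours per year:

```python
def gw_to_twh_per_year(p_gw, hours_per_year=8760.0):
    """Average power in GW -> annual energy in TWh: 1 GW * 8760 h = 8.76 TWh."""
    return p_gw * hours_per_year / 1000.0

print(gw_to_twh_per_year(5.1))   # Florida Current estimate, ~45 TWh/yr
print(gw_to_twh_per_year(18.6))  # full Gulf Stream extraction area, ~163 TWh/yr
```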

  9. An ensemble-based dynamic Bayesian averaging approach for discharge simulations using multiple global precipitation products and hydrological models

    Science.gov (United States)

    Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris

    2018-03-01

    Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, posterior probability is quantified as functions of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generate better deterministic discharges than traditional approaches (weighted average methods with equal and varying weights and maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in training and validation periods respectively, which are at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, assessment criteria values of e-Bay show smaller fluctuations than traditional approaches and its performance becomes outstanding. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic applications to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.
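The law-of-total-probability averaging at the heart of such schemes can be sketched in a static form. The actual e-Bay approach makes the weights functions of discharge magnitude and timing; that dynamic part is omitted in this simplification, and the Gaussian likelihood is an assumption for illustration:

```python
import numpy as np

def bma_expected_discharge(preds, obs, sigma=1.0):
    """Weight each precipitation-product/model combination by its Gaussian
    likelihood on a training record, then return the weights and the
    posterior-averaged (expected) discharge series."""
    preds = np.asarray(preds, dtype=float)          # shape (n_members, n_times)
    log_lik = -0.5 * np.sum((preds - obs) ** 2, axis=1) / sigma**2
    w = np.exp(log_lik - log_lik.max())             # stabilized softmax weights
    w /= w.sum()
    return w, w @ preds
```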

  10. Maximum entropy methods

    International Nuclear Information System (INIS)

    Ponman, T.J.

    1984-01-01

    For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)
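All of these algorithms share one core idea: among every distribution consistent with the constraints, choose the one of maximum entropy, which takes a Gibbs/exponential form in the Lagrange multipliers. A minimal sketch for a single fixed-mean constraint, solved by bisection on the multiplier (an illustration of the principle, not any of the four image-restoration entropies compared here):

```python
import numpy as np

def maxent_fixed_mean(values, target_mean, iters=100):
    """Maximum-entropy distribution p_i ∝ exp(-λ v_i) on `values`
    whose mean equals `target_mean`; λ is found by bisection."""
    values = np.asarray(values, dtype=float)
    lo, hi = -50.0, 50.0                             # bracket for the multiplier
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        p = np.exp(-lam * (values - values.mean()))  # shift for numerical stability
        p /= p.sum()
        if p @ values > target_mean:                 # mean too high -> increase λ
            lo = lam
        else:
            hi = lam
    return p
```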

  11. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were …

  12. Do Self-Regulated Processes such as Study Strategies and Satisfaction Predict Grade Point Averages for First and Second Generation College Students?

    Science.gov (United States)

    DiBenedetto, Maria K.

    2010-01-01

    The current investigation sought to determine whether self-regulatory variables: "study strategies" and "self-satisfaction" correlate with first and second generation college students' grade point averages, and to determine if these two variables would improve the prediction of their averages if used along with high school grades and SAT scores.…

  13. Self-similarity of higher-order moving averages

    Science.gov (United States)

    Arianos, Sergio; Carbone, Anna; Türk, Christian

    2011-10-01

    In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving-average window. Thus trends at different time scales can be obtained on data sets of the same size. These polynomials could be interesting for those applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
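The standard (first-order) detrending moving average procedure that these polynomials generalize can be sketched as follows; the window list and the Brownian test series are illustrative choices, not the paper's setup:

```python
import numpy as np

def dma_hurst(y, windows):
    """Estimate the Hurst exponent H from the scaling sigma_DMA(n) ~ n^H,
    where sigma_DMA is the rms deviation of the series from its
    n-point (trailing) moving average."""
    sigmas = []
    for n in windows:
        ma = np.convolve(y, np.ones(n) / n, mode="valid")
        resid = y[n - 1:] - ma                 # align series with trailing MA
        sigmas.append(np.sqrt(np.mean(resid ** 2)))
    slope, _ = np.polyfit(np.log(windows), np.log(sigmas), 1)
    return slope
```

A higher-order variant would replace the moving average with a local polynomial fit of the chosen degree over the same window.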

  14. Emission mechanism in high current hollow cathode arcs

    International Nuclear Information System (INIS)

    Krishnan, M.

    1976-01-01

    Large (2 cm diameter) hollow cathodes have been operated in a magnetoplasmadynamic (MPD) arc over wide ranges of current (0.25 to 17 kA) and mass flow (10⁻³ to 8 g/s), with orifice current densities and mass fluxes encompassing those encountered in low-current steady-state hollow cathode arcs. Detailed cathode interior measurements of current and potential distributions show that maximum current penetration into the cathode is about one diameter axially upstream from the tip, with peak inner-surface current attachment up to one cathode diameter upstream of the tip. The spontaneous attachment of peak current upstream of the cathode tip is suggested as a criterion for characteristic hollow cathode operation. This empirical criterion is verified by experiment.

  15. Winter monsoon circulation of the northern Arabian Sea and Somali Current

    Science.gov (United States)

    Schott, Friedrich A.; Fischer, Jürgen

    2000-03-01

    The winter monsoon circulation in the northern inflow region of the Somali Current is discussed on the basis of an array of moored acoustic Doppler current profiler and current meter stations deployed during 1995-1996 and a ship survey carried out in January 1998. It is found that the westward inflow into the Somali Current regime occurs essentially south of 11°N and that this inflow bifurcates at the Somali coast, with the southward branch supplying the equatorward Somali Current and the northward one returning into the northwestern Arabian Sea. This northward branch partially supplies a shallow outflow through the Socotra Passage between the African continent and the banks of Socotra and partially feeds into eastward recirculation directly along the southern slopes of Socotra. Underneath this shallow surface flow, southwestward undercurrent flows are observed. Undercurrent inflow from the Gulf of Aden through the Socotra Passage occurs between 100 and 1000 m, with its current core at 700-800 m, and is clearly marked by the Red Sea Water (RSW) salinity maximum. The observations suggest that the maximum RSW inflow out of the Gulf of Aden occurs during the winter monsoon season and uses the Socotra Passage as its main route into the Indian Ocean. Westward undercurrent inflow into the Somali Current regime is also observed south of Socotra, but this flow lacks the RSW salinity maximum. Off the Arabian peninsula, eastward boundary flow is observed in the upper 800 m with a compensating westward flow to the south. The observed circulation pattern is qualitatively compared with recent high-resolution numerical model studies and is found to be in basic agreement.

  16. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  17. Effectiveness of the current method of calculating member states' contributions

    CERN Document Server

    2002-01-01

    At its Two-hundred and eighty-sixth Meeting of 19 September 2001, the Finance Committee requested the Management to re-assess the effectiveness of the current method of forecasting Net National Income (NNI) for the purposes of calculating the Member States' contributions by comparing the results of the current weighted average method with a method based on a simple arithmetic average. The Finance Committee is invited to take note of this information.
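The two candidate methods differ only in the weights applied to past NNI observations; a schematic comparison (the series and weights below are hypothetical, not CERN figures):

```python
def forecast_nni(history, weights=None):
    """Forecast NNI as a weighted average of past observations;
    with no weights this reduces to the simple arithmetic mean."""
    if weights is None:
        weights = [1.0] * len(history)
    return sum(w * x for w, x in zip(weights, history)) / sum(weights)

history = [100.0, 104.0, 110.0]            # hypothetical NNI series
flat = forecast_nni(history)               # simple arithmetic average
recent = forecast_nni(history, [1, 2, 3])  # recent years weighted more
```

On a rising series, weighting recent years more heavily yields a higher forecast than the arithmetic mean, which is the substance of the comparison the Committee requested.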

  18. Accurate estimation of the RMS emittance from single current amplifier data

    International Nuclear Information System (INIS)

    Stockli, Martin P.; Welton, R.F.; Keller, R.; Letchford, A.P.; Thomae, R.W.; Thomason, J.W.G.

    2002-01-01

    This paper presents the SCUBEEx rms emittance analysis, a self-consistent, unbiased elliptical exclusion method, which combines traditional data-reduction methods with statistical methods to obtain accurate estimates for the rms emittance. Rather than considering individual data, the method tracks the average current density outside a well-selected, variable boundary to separate the measured beam halo from the background. The average outside current density is assumed to be part of a uniform background and not part of the particle beam. Therefore the average outside current is subtracted from the data before evaluating the rms emittance within the boundary. As the boundary area is increased, the average outside current and the inside rms emittance form plateaus when all data containing part of the particle beam are inside the boundary. These plateaus mark the smallest acceptable exclusion boundary and provide unbiased estimates for the average background and the rms emittance. Small, trendless variations within the plateaus allow for determining the uncertainties of the estimates caused by variations of the measured background outside the smallest acceptable exclusion boundary. The robustness of the method is established with complementary variations of the exclusion boundary. This paper presents a detailed comparison between traditional data reduction methods and SCUBEEx by analyzing two complementary sets of emittance data obtained with a Lawrence Berkeley National Laboratory and an ISIS H⁻ ion source.
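The background-subtraction idea can be sketched numerically. The boundary below is a simplified box rather than the paper's variable elliptical boundary, and all beam parameters are invented for illustration:

```python
import numpy as np

def rms_emittance(x, xp, w):
    """Statistical rms emittance sqrt(<x^2><x'^2> - <x x'>^2) of a
    weighted (x, x') distribution."""
    W = w.sum()
    mx, mxp = (w * x).sum() / W, (w * xp).sum() / W
    dx, dxp = x - mx, xp - mxp
    sxx = (w * dx * dx).sum() / W
    spp = (w * dxp * dxp).sum() / W
    sxp = (w * dx * dxp).sum() / W
    return np.sqrt(max(sxx * spp - sxp * sxp, 0.0))

def scubeex_estimate(x, xp, j, inside):
    """Treat the mean current density outside the exclusion boundary as a
    uniform background, subtract it, and evaluate emittance inside."""
    background = j[~inside].mean()
    return rms_emittance(x, xp, np.where(inside, j - background, 0.0))
```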

  19. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  20. Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation

    Directory of Open Access Journals (Sweden)

    Petr Stehlík

    2015-01-01

    Full Text Available We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time: u_x' (or Δ_t u_x) = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x), x ∈ ℤ. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit features similar to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of the maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
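A minimal explicit scheme for the discrete Nagumo equation illustrates the weak maximum principle numerically: for a sufficiently small time step, an initial state in [0, 1] stays in [0, 1]. The parameters and zero-flux boundaries below are illustrative choices, not the paper's setting:

```python
import numpy as np

def nagumo_step(u, k=1.0, dt=0.1, a=0.3):
    """One explicit Euler step of u' = k*(u_{x-1} - 2u_x + u_{x+1}) + u(1-u)(u-a)
    on a finite 1-D lattice with zero-flux (Neumann) boundaries."""
    lap = np.empty_like(u)
    lap[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:]
    lap[0] = u[1] - u[0]
    lap[-1] = u[-2] - u[-1]
    return u + dt * (k * lap + u * (1.0 - u) * (u - a))
```

With k*dt larger (too coarse a time step), iterates can leave [0, 1], which mirrors the paper's point that the discrete maximum principle depends on the nonlinearity and the time step.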

  1. Development of laser diode-pumped high average power solid-state laser for the pumping of Ti:sapphire CPA system

    Energy Technology Data Exchange (ETDEWEB)

    Maruyama, Yoichiro; Tei, Kazuyoku; Kato, Masaaki; Niwa, Yoshito; Harayama, Sayaka; Oba, Masaki; Matoba, Tohru; Arisawa, Takashi; Takuma, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    A laser-diode-pumped, all-solid-state, high-pulse-repetition-frequency (PRF), high-energy Nd:YAG laser using zigzag slab crystals has been developed as the pumping source of a Ti:sapphire CPA system. The pumping laser contains two main amplifiers arranged in a ring-type amplifier configuration. The maximum amplification gain of the amplifier system is 140, and saturated amplification is achieved at this high gain. The average power of the fundamental laser radiation is 250 W at a PRF of 200 Hz, with a pulse duration of around 20 ns. The average power of the second harmonic is 105 W at a PRF of 170 Hz, with a pulse duration of about 16 ns. The beam profile of the second harmonic is near top-hat and should be suitable for pumping a Ti:sapphire laser crystal. The wall-plug efficiency of the laser is 2.0%. (author)

  2. Minimal length, Friedmann equations and maximum density

    Energy Technology Data Exchange (ETDEWEB)

    Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)

    2014-06-16

    Inspired by Jacobson’s thermodynamic approach, Cai et al. have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy density which is reachable in a finite time.
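
    The "maximum density reached in finite time" behavior can be illustrated numerically with a toy model. The sketch below is NOT the paper's GUP-derived equations: it uses a loop-quantum-cosmology-style bounded Hubble law H² = ρ(1 − ρ/ρ_max) (units 8πG/3 = 1, ρ_max = 1) purely as a stand-in, with equation of state p = ωρ, and integrates the continuity equation backwards in time, dρ/dt = +3H(1+ω)ρ. Because H → 0 as ρ → ρ_max, the density saturates at the maximum in finite time.

```python
# Toy illustration of a bounded-density cosmology (assumed model, not the
# paper's): H^2 = rho*(1 - rho/rho_max) with rho_max = 1, p = w*rho.
# Integrating the continuity equation backwards in time shows the density
# reaching its maximum in finite time.
import math

def integrate_to_max(rho0=0.1, w=0.0, dt=1e-4, steps=100_000):
    rho, t = rho0, 0.0
    for _ in range(steps):
        H = math.sqrt(max(0.0, rho * (1.0 - rho)))   # bounded Hubble law
        if H == 0.0 and rho > rho0:                  # maximum density reached
            break
        rho = min(rho + dt * 3.0 * H * (1.0 + w) * rho, 1.0)
        t += dt
    return rho, t

rho_end, t_end = integrate_to_max()
assert rho_end > 0.999    # density saturates at the maximum...
assert t_end < 10.0       # ...after a finite backward time
```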

  3. Maximum-Entropy Inference with a Programmable Annealer

    Science.gov (United States)

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-03-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
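
    The decoding comparison above can be reproduced in miniature by exact enumeration. In the sketch below, couplings J encode prior bit correlations and local fields h_i carry the (hypothetical) noisy channel evidence; maximum-likelihood decoding returns the ground state of the cost function, while the finite-temperature (maximum-entropy style) decoder takes the sign of the thermal marginal ⟨s_i⟩, so excited states also contribute. All numbers are illustrative, not taken from the paper or the annealer experiments.

```python
# ML vs finite-temperature bit-by-bit decoding on a tiny Ising chain,
# by exact enumeration over all 2^N states (feasible for N = 8).
import itertools, math

J = 1.0
h = [0.8, -0.3, 0.5, -1.2, 0.2, 0.9, -0.4, 0.1]   # hypothetical channel fields
N = len(h)

def energy(s):
    bonds = sum(s[i] * s[i + 1] for i in range(N - 1))
    fields = sum(hi * si for hi, si in zip(h, s))
    return -J * bonds - fields

states = list(itertools.product([-1, 1], repeat=N))
E0 = min(energy(s) for s in states)          # shift to avoid exp overflow

# Maximum-likelihood decoding: the ground state of the cost function.
ml = min(states, key=energy)

def thermal_bits(beta):
    """Bit-by-bit decoding from the sign of the thermal marginals <s_i>."""
    weights = [math.exp(-beta * (energy(s) - E0)) for s in states]
    Z = sum(weights)
    mags = [sum(w * s[i] for w, s in zip(weights, states)) / Z for i in range(N)]
    return tuple(1 if m >= 0 else -1 for m in mags)

# At low temperature the marginal decoder reproduces the ground state; at
# moderate temperature excited states also contribute, which is what lets
# finite-temperature decoding outperform ML on noisy channels.
assert thermal_bits(beta=20.0) == ml
print("ML:", ml, "  T>0:", thermal_bits(beta=0.5))
```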

  4. Understanding the Benefits and Limitations of Increasing Maximum Rotor Tip Speed for Utility-Scale Wind Turbines

    International Nuclear Information System (INIS)

    Ning, A; Dykes, K

    2014-01-01

    For utility-scale wind turbines, the maximum rotor rotation speed is generally constrained by noise considerations. Innovations in acoustics and/or siting in remote locations may enable future wind turbine designs to operate with higher tip speeds. Wind turbines designed to take advantage of higher tip speeds are expected to be able to capture more energy and utilize lighter drivetrains because of their decreased maximum torque loads. However, the magnitude of the potential cost savings is unclear, and the potential trade-offs with rotor and tower sizing are not well understood. A multidisciplinary, system-level framework was developed to facilitate wind turbine and wind plant analysis and optimization. The rotors, nacelles, and towers of wind turbines are optimized for minimum cost of energy subject to a large number of structural, manufacturing, and transportation constraints. These optimization studies suggest that allowing for higher maximum tip speeds could result in a decrease in the cost of energy of up to 5% for land-based sites and 2% for offshore sites when using current technology. Almost all of the cost savings are attributed to the decrease in gearbox mass as a consequence of the reduced maximum rotor torque. Although there is some increased energy capture, it is very minimal (less than 0.5%). Extreme increases in tip speed are unnecessary; benefits for maximum tip speeds greater than 100-110 m/s are small to nonexistent.

  5. Understanding the Benefits and Limitations of Increasing Maximum Rotor Tip Speed for Utility-Scale Wind Turbines

    Science.gov (United States)

    Ning, A.; Dykes, K.

    2014-06-01

    For utility-scale wind turbines, the maximum rotor rotation speed is generally constrained by noise considerations. Innovations in acoustics and/or siting in remote locations may enable future wind turbine designs to operate with higher tip speeds. Wind turbines designed to take advantage of higher tip speeds are expected to be able to capture more energy and utilize lighter drivetrains because of their decreased maximum torque loads. However, the magnitude of the potential cost savings is unclear, and the potential trade-offs with rotor and tower sizing are not well understood. A multidisciplinary, system-level framework was developed to facilitate wind turbine and wind plant analysis and optimization. The rotors, nacelles, and towers of wind turbines are optimized for minimum cost of energy subject to a large number of structural, manufacturing, and transportation constraints. These optimization studies suggest that allowing for higher maximum tip speeds could result in a decrease in the cost of energy of up to 5% for land-based sites and 2% for offshore sites when using current technology. Almost all of the cost savings are attributed to the decrease in gearbox mass as a consequence of the reduced maximum rotor torque. Although there is some increased energy capture, it is very minimal (less than 0.5%). Extreme increases in tip speed are unnecessary; benefits for maximum tip speeds greater than 100-110 m/s are small to nonexistent.
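
    The torque argument in both records reduces to Q = P/Ω with rotor speed Ω = v_tip/R: at fixed rated power and rotor radius, the maximum torque (and hence the gearbox sizing load) falls in direct proportion to the allowed tip speed. The machine parameters below are illustrative round numbers for a 5 MW class turbine, not values from the study.

```python
# Back-of-the-envelope torque vs tip speed: Q = P / omega, omega = v_tip / R.
P = 5.0e6          # rated power, W (assumed)
R = 63.0           # rotor radius, m (assumed)
for v_tip in (80.0, 90.0, 100.0):        # allowed tip speeds, m/s
    omega = v_tip / R                    # rotor speed, rad/s
    Q = P / omega                        # maximum rotor torque, N*m
    print(f"v_tip = {v_tip:5.1f} m/s  ->  Q = {Q/1e6:.2f} MN*m")

# Raising the tip speed from 80 to 100 m/s cuts the maximum torque by 20 %.
assert abs((P / (80.0 / R) - P / (100.0 / R)) / (P / (80.0 / R)) - 0.20) < 1e-9
```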

  6. The effect of Coriolis-Stokes forcing on upper ocean circulation in a two-way coupled wave-current model

    Institute of Scientific and Technical Information of China (English)

    DENG Zeng'an; XIE Li'an; HAN Guijun; ZHANG Xuefeng; WU Kejian

    2012-01-01

    We investigated the Stokes drift-driven ocean currents and Stokes drift-induced wind energy input into the upper ocean using a two-way coupled wave-current modeling system that consists of the Princeton Ocean Model generalized coordinate system (POMgcs), the Simulating WAves Nearshore (SWAN) wave model, and the Model Coupling Toolkit (MCT). The Coriolis-Stokes forcing (CSF) computed using the wave parameters from SWAN was incorporated into the momentum equation of POMgcs as the core coupling process. Experimental results in an idealized setting show that, under the steady state, the scale of the speed of the CSF-driven current was 0.001 m/s and the maximum reached 0.02 m/s. The Stokes drift-induced energy rate input into the model ocean was estimated to be 28.5 GW, 14% of the direct wind energy rate input. Considering the Stokes drift effects, the total mechanical energy rate input was increased by approximately 14%, which highlights the importance of CSF in modulating the upper ocean circulation. The actual run conducted in the Taiwan Adjacent Sea (TAS) shows that: 1) CSF-based wave-current coupling has an impact on ocean surface currents, which is related to the activities of monsoon winds; 2) wave-current coupling plays a significant role where strong eddies are present and tends to intensify the eddies' vorticity; 3) wave-current coupling affects the volume transport of the Taiwan Strait (TS) throughflow to a nontrivial degree, 3.75% on average.
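
    A rough magnitude check of the forcing term the coupled model adds: per unit mass, CSF = −f ẑ × u_s, and for a deep-water monochromatic wave the surface Stokes drift is u_s = a²ωk with the dispersion relation ω² = gk. The wave amplitude, period and latitude below are assumed illustrative values, not SWAN output from the paper.

```python
# Order-of-magnitude estimate of the Coriolis-Stokes forcing (CSF) for a
# single deep-water wave; all wave parameters are illustrative assumptions.
import math

g = 9.81
a = 1.0                      # wave amplitude, m (assumed)
T = 8.0                      # wave period, s (assumed)
lat = 25.0                   # latitude, deg (Taiwan Adjacent Sea band)

omega = 2.0 * math.pi / T
k = omega**2 / g             # deep-water dispersion relation
u_s = a**2 * omega * k       # surface Stokes drift, m/s
f = 2.0 * 7.292e-5 * math.sin(math.radians(lat))   # Coriolis parameter, 1/s
csf = f * u_s                # CSF magnitude per unit mass, m/s^2

print(f"u_s = {u_s:.3f} m/s, CSF = {csf:.2e} m/s^2")
```

    The resulting Stokes drift of a few cm/s is consistent with the 0.001-0.02 m/s CSF-driven current speeds quoted in the abstract.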

  7. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.

  8. Energy losses in the D0 β solenoid cryostat caused by current changes

    International Nuclear Information System (INIS)

    Visser, A.T.

    1993-11-01

    The proposed D0 β solenoid is a superconducting solenoid mounted inside an aluminum tube which supports the solenoid winding over its full length. This aluminum support tube, also called the bobbin, is therefore very tightly coupled to magnetic flux changes caused by solenoid current variations. Current changes in the solenoid will cause induced currents to flow in the resistive bobbin wall and therefore cause heat losses. The insertion of an external dump resistor in the solenoid current loop reduces energy dissipation inside the cryostat during a quench and will shorten the discharge time constant. This note presents a simple electrical model for the coupled bobbin and solenoid which makes it easier to understand the circuit behavior and losses. Estimates for the maximum allowable rate of solenoid current changes, based on the maximum permissible rate of losses, can be made using this model.
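
    The dump-resistor trade-off described above can be sketched with a lumped RL model: the stored magnetic energy E = LI²/2 decays with time constant τ = L/(R_dump + R_coil), and the fraction of that energy dissipated inside the cryostat scales as R_coil/(R_dump + R_coil). The inductance, current and resistances below are hypothetical round numbers, not D0 design values.

```python
# Lumped-parameter sketch of a quench discharge with an external dump
# resistor; all component values are hypothetical, not D0 design numbers.
L = 0.5          # solenoid inductance, H (assumed)
I = 5000.0       # operating current, A (assumed)
R_coil = 0.001   # effective internal (bobbin + joints) resistance, ohm (assumed)
E = 0.5 * L * I**2                                 # stored energy, J

for R_dump in (0.0, 0.05, 0.2):                    # external dump resistor, ohm
    tau = L / (R_dump + R_coil)                    # discharge time constant, s
    f_inside = R_coil / (R_dump + R_coil)          # fraction dissipated in cryostat
    print(f"R_dump={R_dump:5.2f} ohm  tau={tau:7.1f} s  "
          f"E_inside={f_inside*E/1e3:8.1f} kJ")
```

    A larger dump resistor both shortens the time constant and moves almost all of the stored energy outside the cryostat, which is the behavior the note's model is meant to quantify.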

  9. SQUID Based Cryogenic Current Comparator for Measurements of the Dark Current of Superconducting Cavities

    CERN Document Server

    Vodel, W; Neubert, R; Nietzsche, S

    2005-01-01

    This contribution presents a LTS-SQUID based Cryogenic Current Comparator (CCC) for detecting dark currents, generated e.g. by superconducting cavities for the upcoming X-FEL project at DESY. To achieve the maximum possible energy the gradients of the superconducting RF cavities should be pushed close to the physical limit of 50 MV/m. The measurement of the undesired field emission of electrons (the so-called dark current) in correlation with the gradient will give a proper value to compare and classify the cavities. The main component of the CCC is a high performance LTS-DC SQUID system which is able to measure extremely low magnetic fields, e.g. caused by the extracted dark current. For this reason the input coil of the SQUID is connected across a specially designed toroidal niobium pick-up coil (inner diameter: about 100 mm) for the passing electron beam. A noise limited current resolution of nearly 2 pA/√(Hz) with a measurement bandwidth of up to 70 kHz was achieved without the pick-up coil. Now, ...

  10. Maximum likelihood approach for several stochastic volatility models

    International Nuclear Information System (INIS)

    Camprodon, Jordi; Perelló, Josep

    2012-01-01

    Volatility measures the amplitude of price fluctuations. Despite it being one of the most important quantities in finance, volatility is not directly observable. Here we apply a maximum likelihood method which assumes that price and volatility follow a two-dimensional diffusion process where volatility is the stochastic diffusion coefficient of the log-price dynamics. We apply this method to the simplest versions of the expOU, the OU and the Heston stochastic volatility models and we study their performance in terms of the log-price probability, the volatility probability, and its Mean First-Passage Time. The approach has some predictive power on the future returns amplitude by only knowing the current volatility. The assumed models do not consider long-range volatility autocorrelation and the asymmetric return-volatility cross-correlation but the method still yields very naturally these two important stylized facts. We apply the method to different market indices and with a good performance in all cases. (paper)
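
    The OU member of the model family above has an exactly Gaussian transition density, so its (conditional) maximum-likelihood estimate reduces to an AR(1) regression on the sampled path. The sketch below is a generic textbook illustration of that reduction on synthetic data, not the authors' full two-dimensional price/volatility estimator; the parameter values are assumed.

```python
# Simulate an OU process with its exact discretization, then recover the
# relaxation rate theta by AR(1) / conditional maximum likelihood.
import math, random

random.seed(2)
theta, mu, sigma, dt = 2.0, 0.0, 0.5, 0.01      # true parameters (assumed)
phi = math.exp(-theta * dt)                      # exact AR(1) coefficient
sd = sigma * math.sqrt((1.0 - phi**2) / (2.0 * theta))

x = [0.0]
for _ in range(20_000):
    x.append(mu + phi * (x[-1] - mu) + sd * random.gauss(0.0, 1.0))

# Conditional-ML estimate of phi (ordinary AR(1) regression), hence of theta.
x0, x1 = x[:-1], x[1:]
m0 = sum(x0) / len(x0)
m1 = sum(x1) / len(x1)
num = sum((a - m0) * (b - m1) for a, b in zip(x0, x1))
den = sum((a - m0) ** 2 for a in x0)
phi_hat = num / den
theta_hat = -math.log(phi_hat) / dt

print(f"true theta = {theta}, estimated theta = {theta_hat:.2f}")
```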

  11. Average annual doses, lifetime doses and associated risk of cancer death for radiation workers in various fuel fabrication facilities in India

    International Nuclear Information System (INIS)

    Iyer, P.S.; Dhond, R.V.

    1980-01-01

    Lifetime doses based on average annual doses are estimated for radiation workers in various fuel fabrication facilities in India. For such cumulative doses, the risk of radiation-induced cancer death is computed. The methodology for arriving at these estimates and the assumptions made are discussed. Based on personnel monitoring records from 1966 to 1978, the average annual dose equivalent for radiation workers is estimated as 0.9 mSv (90 mrem), and the maximum risk of cancer death associated with this occupational dose as 1.35×10⁻⁵ a⁻¹, as compared with the risk of death due to natural causes of 7×10⁻⁴ a⁻¹ and the risk of death due to background radiation alone of 1.5×10⁻⁵ a⁻¹. (author)
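
    The arithmetic behind the quoted figures: the annual risk is the annual dose times a nominal fatal-cancer risk coefficient. The coefficient used below (1.5×10⁻² per Sv) is simply the value implied by the record's own numbers, not a figure we add independently.

```python
# Dose-to-risk arithmetic implied by the record above.
annual_dose_sv = 0.9e-3            # 0.9 mSv per year
risk_coeff_per_sv = 1.5e-2         # implied fatal-cancer risk coefficient, 1/Sv
annual_risk = annual_dose_sv * risk_coeff_per_sv

assert abs(annual_risk - 1.35e-5) < 1e-12     # matches the quoted 1.35e-5 per year
# For scale, against the record's background-radiation risk of 1.5e-5 per year:
assert abs(annual_risk / 1.5e-5 - 0.9) < 1e-9
```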

  12. Current Density and Plasma Displacement Near Perturbed Rational Surface

    International Nuclear Information System (INIS)

    Boozer, A.H.; Pomphrey, N.

    2010-01-01

    The current density in the vicinity of a rational surface of a force-free magnetic field subjected to an ideal perturbation is shown to be the sum of both a smooth and a delta-function distribution, which give comparable currents. The maximum perturbation to the smooth current density is comparable to a typical equilibrium current density and the width of the layer in which the current flows is shown to be proportional to the perturbation amplitude. In the standard linearized theory, the plasma displacement has an unphysical jump across the rational surface, but the full theory gives a continuous displacement.

  13. Probabilistic maximum-value wind prediction for offshore environments

    DEFF Research Database (Denmark)

    Staid, Andrea; Pinson, Pierre; Guikema, Seth D.

    2015-01-01

    statistical models to predict the full distribution of the maximum-value wind speeds in a 3 h interval. We take a detailed look at the performance of linear models, generalized additive models and multivariate adaptive regression splines models using meteorological covariates such as gust speed, wind speed, convective available potential energy, Charnock, mean sea-level pressure and temperature, as given by the European Center for Medium-Range Weather Forecasts forecasts. The models are trained to predict the mean value of maximum wind speed, and the residuals from training the models are used to develop the full probabilistic distribution of maximum wind speed. Knowledge of the maximum wind speed for an offshore location within a given period can inform decision-making regarding turbine operations, planned maintenance operations and power grid scheduling in order to improve safety and reliability.
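
    The residual-based approach described above can be sketched in a few lines: fit a simple model for the mean of the 3 h maximum wind speed, then build the predictive distribution from the empirical quantiles of the training residuals. The data below are synthetic, with gust speed as the single covariate and an assumed linear relation; the paper itself uses richer ECMWF covariates and GAM/MARS models.

```python
# Mean model + empirical residual quantiles -> full predictive distribution.
import random

random.seed(3)
# Synthetic training set: max wind ~ 0.9 * gust + noise (hypothetical relation).
gust = [random.uniform(5.0, 25.0) for _ in range(500)]
vmax = [0.9 * g + random.gauss(0.0, 1.5) for g in gust]

# Ordinary least squares for the conditional mean (one covariate).
n = len(gust)
mg, mv = sum(gust) / n, sum(vmax) / n
beta = (sum((g - mg) * (v - mv) for g, v in zip(gust, vmax))
        / sum((g - mg) ** 2 for g in gust))
alpha = mv - beta * mg

# Sorted training residuals give the empirical predictive quantiles.
res = sorted(v - (alpha + beta * g) for g, v in zip(gust, vmax))

def predict_quantile(g_new, q):
    """Predictive q-quantile of the 3 h maximum wind at gust speed g_new."""
    return alpha + beta * g_new + res[int(q * (n - 1))]

print("mean prediction @ gust 20:", round(alpha + beta * 20.0, 2))
print("90% quantile   @ gust 20:", round(predict_quantile(20.0, 0.9), 2))
```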

  14. Characterizing graphs of maximum matching width at most 2

    DEFF Research Database (Denmark)

    Jeong, Jisu; Ok, Seongmin; Suh, Geewon

    2017-01-01

    The maximum matching width is a width-parameter that is defined on a branch-decomposition over the vertex set of a graph. The size of a maximum matching in the bipartite graph is used as a cut-function. In this paper, we characterize the graphs of maximum matching width at most 2 using the minor o...

  15. Phase-rectified signal averaging method to predict perinatal outcome in infants with very preterm fetal growth restriction- a secondary analysis of TRUFFLE-trial

    NARCIS (Netherlands)

    Lobmaier, Silvia M.; Mensing van Charante, Nico; Ferrazzi, Enrico; Giussani, Dino A.; Shaw, Caroline J.; Müller, Alexander; Ortiz, Javier U.; Ostermayer, Eva; Haller, Bernhard; Prefumo, Federico; Frusca, Tiziana; Hecher, Kurt; Arabin, Birgit; Thilaganathan, Baskaran; Papageorghiou, Aris T.; Bhide, Amarnath; Martinelli, Pasquale; Duvekot, Johannes J.; van Eyck, Jim; Visser, Gerard H A; Schmidt, Georg; Ganzevoort, Wessel; Lees, Christoph C.; Schneider, Karl T M; Bilardo, Caterina M.; Brezinka, Christoph; Diemert, Anke; Derks, Jan B.; Schlembach, Dietmar; Todros, Tullia; Valcamonico, Adriana; Marlow, Neil; van Wassenaer-Leemhuis, Aleid

    2016-01-01

    Background: Phase-rectified signal averaging, an innovative signal processing technique, can be used to investigate quasi-periodic oscillations in noisy, nonstationary signals that are obtained from fetal heart rate. Phase-rectified signal averaging is currently the best method to predict survival

  16. Phase-rectified signal averaging method to predict perinatal outcome in infants with very preterm fetal growth restriction- a secondary analysis of TRUFFLE-trial

    NARCIS (Netherlands)

    Lobmaier, Silvia M.; Mensing van Charante, Nico; Ferrazzi, Enrico; Giussani, Dino A.; Shaw, Caroline J.; Müller, Alexander; Ortiz, Javier U.; Ostermayer, Eva; Haller, Bernhard; Prefumo, Federico; Frusca, Tiziana; Hecher, Kurt; Arabin, Birgit; Thilaganathan, Baskaran; Papageorghiou, Aris T.; Bhide, Amarnath; Martinelli, Pasquale; Duvekot, Johannes J.; van Eyck, Jim; Visser, Gerard H. A.; Schmidt, Georg; Ganzevoort, Wessel; Lees, Christoph C.; Schneider, Karl T. M.; Bilardo, Caterina M.; Brezinka, Christoph; Diemert, Anke; Derks, Jan B.; Schlembach, Dietmar; Todros, Tullia; Valcamonico, Adriana; Marlow, Neil; van Wassenaer-Leemhuis, Aleid

    2016-01-01

    Phase-rectified signal averaging, an innovative signal processing technique, can be used to investigate quasi-periodic oscillations in noisy, nonstationary signals that are obtained from fetal heart rate. Phase-rectified signal averaging is currently the best method to predict survival after
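
    The technique named in both records has a compact generic form: pick anchor points where the signal decreases (decelerations), align a window around every anchor, and average the aligned windows; the central slope of the averaged curve gives the deceleration capacity. The sketch below runs on a synthetic series, with the window length and the deceleration-capacity formula following the common PRSA convention (Bauer et al.), which the TRUFFLE analysis builds on; it is not the trial's actual pipeline.

```python
# Generic phase-rectified signal averaging (PRSA) on a synthetic series.
import math, random

def prsa(x, L=5):
    """Average of 2L-point windows centred on anchors (indices with x[i] < x[i-1])."""
    anchors = [i for i in range(L, len(x) - L) if x[i] < x[i - 1]]
    return [sum(x[i + k] for i in anchors) / len(anchors) for k in range(-L, L)]

def deceleration_capacity(avg, L=5):
    # DC = (X(0) + X(1) - X(-1) - X(-2)) / 4, with X(0) the anchor sample.
    return (avg[L] + avg[L + 1] - avg[L - 1] - avg[L - 2]) / 4.0

random.seed(0)
# Synthetic "heart-rate" series: slow oscillation plus noise (illustrative).
x = [140 + 5 * math.sin(0.3 * n) + random.gauss(0, 2) for n in range(2000)]
avg = prsa(x)
print("PRSA curve:", [round(v, 2) for v in avg])
print("DC:", round(deceleration_capacity(avg), 3))
```

    By construction the averaged curve decreases across the anchor (each window has x[i] < x[i-1]), and the quasi-periodic component survives the averaging while uncorrelated noise cancels.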

  17. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

    Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
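
    Marginal maximum likelihood means integrating the person abilities out of the likelihood (here by simple quadrature against the N(0, 1) ability prior) and maximizing what remains over the item parameters. The sketch below does this for the simplest IRT model, the Rasch model, with a crude coordinate-wise grid search; it is a generic illustration in Python, not the paper's R code for the generalized partial credit model, and all numbers are assumed.

```python
# Marginal maximum likelihood (MML) for a Rasch model, in miniature:
# abilities theta ~ N(0,1) are integrated out by quadrature and the marginal
# log-likelihood is maximised over item difficulties by grid search.
import math, random

random.seed(1)
true_b = [-1.0, 0.0, 1.0]          # true item difficulties (illustrative)
n_persons = 200

def p_correct(theta, b):
    """Rasch model: probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Simulate response patterns from persons with theta ~ N(0, 1).
data = []
for _ in range(n_persons):
    th = random.gauss(0.0, 1.0)
    data.append([1 if random.random() < p_correct(th, b) else 0 for b in true_b])

# Quadrature grid for integrating the N(0, 1) ability out of the likelihood.
nodes = [-4.0 + 0.5 * i for i in range(17)]
weights = [0.5 * math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi) for t in nodes]

def marginal_loglik(b):
    ll = 0.0
    for y in data:
        like = sum(w * math.prod(p_correct(t, bj) if yj else 1.0 - p_correct(t, bj)
                                 for yj, bj in zip(y, b))
                   for t, w in zip(nodes, weights))
        ll += math.log(like)
    return ll

# Crude MML: coordinate-wise grid search over each item difficulty.
grid = [0.2 * i - 2.0 for i in range(21)]
b_hat = [0.0, 0.0, 0.0]
for _ in range(2):
    for j in range(3):
        b_hat[j] = max(grid, key=lambda v: marginal_loglik(b_hat[:j] + [v] + b_hat[j + 1:]))

print("estimated difficulties:", b_hat)
```

    Production implementations (e.g. the EM algorithm behind ltm) replace the grid search with proper numerical optimization, but the marginal likelihood being maximized is the same object.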

  18. INFLUENCE OF VACUUM ARC PLASMA EVAPORATOR CATHODE GEOMETRY ON VALUE OF ADMISSIBLE ARC DISCHARGE CURRENT

    Directory of Open Access Journals (Sweden)

    I. A. Ivanou

    2015-01-01

    Full Text Available An analysis of the main design parameters that determine the intensity of droplet formation at the plasma-flow generation stage has been given in the paper. The paper considers the most widely used designs of water-cooled consumable cathodes. Ti or Ti–Si and Fe–Cr alloys have been taken as materials for the cathodes. The following calculated data have been accepted in order to determine the dependence of the maximum admissible arc discharge current on cathode height: an average ionic charge Zi of +1.6 for titanium plasma and +1.2 for «titanium–silicon» plasma, an electron charge of 1.6022·10⁻¹⁹ C, an ion velocity vi = 2·10⁴ m/s, an effective volt energy equivalent of the heat flow diverted into the cathode Uк = 12 V, a temperature of the erosion cathode surface Тп = 550 К, and a temperature of the cooled cathode surface То = 350 К. The calculations have been carried out for various values of the cathode height hк (from 0.02 to 0.05 m). The diameter of a target cathode is equal to 0.08 m for a majority of technological plasma devices; therefore, the area of the erosion surface is S = 0.005 m². A selection of the thickness of the consumable target-cathode part in the vacuum arc plasma source has been justified in the paper. This thickness ensures formation of a minimum drop phase in the plasma flow during arc cathode material evaporation. It has been shown that the maximum admissible current of an arc discharge is practically equal to the minimum current of stable arcing when the thickness of the consumable cathode part is equal to 0.05 m. The admissible discharge current can be rather significant and ensure high productivity during the coating process with formation of a relatively low amount of droplet phase in the coating at small values of hк.

  19. Maximum permissible continuous release rates of phosphorus-32 and sulphur-35 to atmosphere in a milk producing area

    Energy Technology Data Exchange (ETDEWEB)

    Bryant, P M

    1963-01-01

    A method is given for calculating, for design purposes, the maximum permissible continuous release rates of phosphorus-32 and sulphur-35 to atmosphere with respect to milk contamination. In the absence of authoritative advice from the Medical Research Council, provisional working levels for the concentration of phosphorus-32 and sulphur-35 in milk are derived, and details are given of the agricultural assumptions involved in the calculation of the relationship between the amount of the nuclide deposited on grassland and that to be found in milk. The agricultural and meteorological conditions assumed are applicable as an annual average to England and Wales. The results (in mc/day) for phosphorus-32 and sulphur-35 for a number of stack heights and distances are shown graphically; typical values, quoted in a table, include 20 mc/day of phosphorus-32 and 30 mc/day of sulphur-35 as the maximum permissible continuous release rates with respect to ground level releases at a distance of 200 metres from pastureland.

  20. Microstructures and critical currents in high-Tc superconductors

    International Nuclear Information System (INIS)

    Suenaga, Masaki

    1998-01-01

    Microstructural defects are, after the electronic anisotropy between the a-b plane and the c-direction, the primary factors determining the values of critical-current densities in a high-Tc superconductor. A review is made to assess, firstly, what the maximum achievable critical-current density in YBa₂Cu₃O₇ would be if nearly ideal pinning sites were introduced and, secondly, what types of pinning defects are currently introduced or exist in YBa₂Cu₃O₇ and how effective these are in pinning vortices.