WorldWideScience

Sample records for maximum average output

  1. FEL system with homogeneous average output

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph

    2018-01-16

    A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest and accelerating the particles to full energy, resulting in distinct phase-energy correlations, or chirps, on each bunch train that are independently controlled by the choice of phase offset. The earlier trains are more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system whose linear and nonlinear momentum compactions (M56 and higher-order terms) are selected to compress all three bunch trains at the FEL.

  2. Maximum Power Output of Quantum Heat Engine with Energy Bath

    Directory of Open Access Journals (Sweden)

    Shengnan Liu

    2016-05-01

    The difference between the quantum isoenergetic process and the quantum isothermal process comes from the violation of the law of equipartition of energy in the quantum regime. To reveal an important physical meaning of this fact, here we study a special type of quantum heat engine consisting of three processes: isoenergetic, isothermal and adiabatic. This engine therefore works between an energy bath and a heat bath. Combining two engines of this kind, it is possible to realize the quantum Carnot engine. Furthermore, considering a finite rate of change of the potential shape, here an infinite square well with moving walls, the power output of the engine is discussed. It is found that the efficiency and power output are both closely dependent on the initial and final states of the quantum isothermal process. The performance of the engine cycle is shown to be optimized by control of the occupation probability of the ground state, which is determined by the temperature and the potential width. The relation between the efficiency and power output is also discussed.

  3. Estimation of the Maximum Output Power of Double-Clad Photonic Crystal Fiber Laser

    International Nuclear Information System (INIS)

    Chen Yue-E; Wang Yong; Qu Xi-Long

    2012-01-01

    Compared with traditional optical fiber lasers, double-clad photonic crystal fiber (PCF) lasers have larger surface-area-to-volume ratios. With an increase of output power, thermal effects may severely restrict output power and deteriorate beam quality of fiber lasers. We utilize the heat-conduction equations to estimate the maximum output power of a double-clad PCF laser under natural-convection, air-cooling, and water-cooling conditions in terms of a certain surface-volume heat ratio of the PCF. The thermal effects hence define an upper power limit of double-clad PCF lasers when scaling output power.
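
    As a rough companion to the estimate described above (a sketch only, not the authors' heat-conduction model), an upper bound on output power can be obtained from a simple surface heat-transfer balance; the heat-transfer coefficients, allowed temperature rise, and heat fraction below are illustrative assumptions.

```python
# Rough upper-bound estimate of fiber laser output power set by surface cooling.
# NOT the paper's heat-conduction model; h-values and the heat fraction are
# illustrative textbook-style assumptions.
import math

def max_output_power(length_m, diameter_m, h_W_m2K, dT_max_K, heat_fraction=0.1):
    """Upper bound on output power when a fraction `heat_fraction` of the extracted
    power becomes heat and must leave through the outer surface with heat-transfer
    coefficient h at an allowed surface temperature rise dT."""
    area = math.pi * diameter_m * length_m          # outer surface area of the fiber
    p_heat_max = h_W_m2K * area * dT_max_K          # removable heat, W
    return p_heat_max * (1.0 - heat_fraction) / heat_fraction

fiber = dict(length_m=2.0, diameter_m=400e-6, dT_max_K=100.0)
for label, h in [("natural convection", 10.0), ("forced air", 100.0), ("water cooling", 1000.0)]:
    print(f"{label:18s}: ~{max_output_power(h_W_m2K=h, **fiber) / 1e3:.2f} kW")
```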

  4. Theoretical and experimental investigations of the limits to the maximum output power of laser diodes

    International Nuclear Information System (INIS)

    Wenzel, H; Crump, P; Pietrzak, A; Wang, X; Erbert, G; Traenkle, G

    2010-01-01

    The factors that limit both the continuous wave (CW) and the pulsed output power of broad-area laser diodes driven at very high currents are investigated theoretically and experimentally. The decrease in the gain due to self-heating under CW operation and spectral holeburning under pulsed operation, as well as heterobarrier carrier leakage and longitudinal spatial holeburning, are the dominant mechanisms limiting the maximum achievable output power.

  5. Optimal Velocity to Achieve Maximum Power Output – Bench Press for Trained Footballers

    OpenAIRE

    Richard Billich; Jakub Štvrtňa; Karel Jelen

    2015-01-01

    Optimal Velocity to Achieve Maximum Power Output – Bench Press for Trained Footballers. In today’s world of strength training there are many myths surrounding effective exercising with the least possible negative effect on one’s health. In this experiment we focus on finding a relationship between maximum power output, the load used and the velocity with which the exercise is performed. The main objective is to find the optimal speed of the exercise motion which would allow us to reach the maximum mechanical muscle output during a bench press exercise.

  6. Dataset demonstrating the temperature effect on average output polarization for QCA based reversible logic gates

    Directory of Open Access Journals (Sweden)

    Md. Kamrul Hassan

    2017-08-01

    Quantum-dot cellular automata (QCA) is a developing nanotechnology that seems to be a good candidate to replace the conventional complementary metal-oxide-semiconductor (CMOS) technology. In this article, we present the dataset of average output polarization (AOP) for the basic reversible logic gates presented in Ali Newaz et al. (2016) [1]. QCADesigner 2.0.3 has been employed to analyse the AOP of the reversible gates at different temperature levels, in Kelvin (K).

  7. Stochastic modelling of the monthly average maximum and minimum temperature patterns in India 1981-2015

    Science.gov (United States)

    Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.

    2018-04-01

    The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal auto-regressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all the years, while this is not true for the minimum temperature series, so the two series are modelled separately. The possible SARIMA model has been chosen by observing the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)₁₂ model is selected for the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are obtained using the maximum-likelihood method with the help of the standard error of residuals. The adequacy of the selected model is determined using correlation diagnostic checking through the ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals, and using normality diagnostic checking through the kernel and normal density curves of the histogram and the Q-Q plot. Finally, the monthly maximum and minimum temperature patterns of India for the next 3 years are forecast with the help of the selected model.
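
    For readers who want to reproduce this style of analysis, the sketch below fits a SARIMA(1,0,0)×(0,1,1)₁₂ model with statsmodels and forecasts 36 months ahead; the synthetic seasonal series is a stand-in for the Indian temperature data, which are not reproduced here.

```python
# Minimal SARIMA(1,0,0)x(0,1,1)12 fitting/forecasting sketch (synthetic data as a
# stand-in for the monthly average maximum/minimum temperature series).
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
months = pd.date_range("1981-01", periods=35 * 12, freq="MS")
seasonal = 10 * np.sin(2 * np.pi * months.month / 12)          # annual cycle
series = pd.Series(30 + seasonal + rng.normal(0, 1, len(months)), index=months)

model = SARIMAX(series, order=(1, 0, 0), seasonal_order=(0, 1, 1, 12))
result = model.fit(disp=False)
print("BIC:", result.bic)                                       # model selection criterion
print(result.forecast(steps=36).head())                         # next 3 years, first months
```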

  8. Average output polarization dataset for signifying the temperature influence for QCA designed reversible logic circuits.

    Science.gov (United States)

    Abdullah-Al-Shafi, Md; Bahar, Ali Newaz; Bhuiyan, Mohammad Maksudur Rahman; Shamim, S M; Ahmed, Kawser

    2018-08-01

    Quantum-dot cellular automata (QCA) is a promising nanotechnology contender with strong prospects of substituting complementary metal-oxide-semiconductor (CMOS) technology because of superior features such as extremely high device density and minimal power dissipation together with rapid operation speed. In this study, the dataset of average output polarization (AOP) for fundamental reversible logic circuits is organized as presented in (Abdullah-Al-Shafi and Bahar, 2017; Bahar et al., 2016; Abdullah-Al-Shafi et al., 2015; Abdullah-Al-Shafi, 2016) [1-4]. QCADesigner version 2.0.3 has been utilized to survey the AOP of the reversible circuits at separate temperature points, in Kelvin (K).

  9. Potential role of motion for enhancing maximum output energy of triboelectric nanogenerator

    Science.gov (United States)

    Byun, Kyung-Eun; Lee, Min-Hyun; Cho, Yeonchoo; Nam, Seung-Geol; Shin, Hyeon-Jin; Park, Seongjun

    2017-07-01

    Although the triboelectric nanogenerator (TENG) has been explored as one of the possible candidates for the auxiliary power source of portable and wearable devices, the output energy of a TENG is still insufficient to charge the devices with daily motion. Moreover, the fundamental aspects of the maximum possible energy of a TENG related to human motion are not understood systematically. Here, we confirmed the possibility of charging commercial portable and wearable devices such as smart phones and smart watches by utilizing the mechanical energy generated by human motion. We confirmed by theoretical extraction that the maximum possible energy is related to specific form factors of a TENG. Furthermore, we experimentally demonstrated the effect of human motion in terms of kinetic energy and impulse by varying velocity and elasticity, and clarified how to improve the maximum possible energy of a TENG. This study gives insight into the design of a TENG to obtain a large amount of energy in a limited space.

  10. Accurate computations of monthly average daily extraterrestrial irradiation and the maximum possible sunshine duration

    International Nuclear Information System (INIS)

    Jain, P.C.

    1985-12-01

    The monthly average daily values of the extraterrestrial irradiation on a horizontal plane and the maximum possible sunshine duration are two important parameters that are frequently needed in various solar energy applications. These are generally calculated by solar scientists and engineers each time they are needed, often by using approximate short-cut methods. Using the accurate analytical expressions developed by Spencer for the declination and the eccentricity correction factor, computations for these parameters have been made for all latitude values from 90 deg. N to 90 deg. S at intervals of 1 deg. and are presented in a convenient tabular form. Monthly average daily values of the maximum possible sunshine duration as recorded on a Campbell-Stokes sunshine recorder are also computed and presented. These tables avoid the need for repetitive and approximate calculations and serve as a useful ready reference providing accurate values to solar energy scientists and engineers.
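
    The tabulated quantities can be computed directly from Spencer's series for the declination and the eccentricity correction factor together with the standard sunset-hour-angle relations; the minimal sketch below assumes a solar constant of 1367 W/m² and is not the author's tabulation code.

```python
# Daily extraterrestrial irradiation H0 and maximum possible sunshine duration
# from Spencer's series for declination and eccentricity correction factor.
import numpy as np

I_SC = 1367.0  # assumed solar constant, W/m^2

def spencer_declination_and_e0(day_of_year):
    g = 2 * np.pi * (day_of_year - 1) / 365.0
    decl = (0.006918 - 0.399912 * np.cos(g) + 0.070257 * np.sin(g)
            - 0.006758 * np.cos(2 * g) + 0.000907 * np.sin(2 * g)
            - 0.002697 * np.cos(3 * g) + 0.00148 * np.sin(3 * g))   # radians
    e0 = (1.000110 + 0.034221 * np.cos(g) + 0.001280 * np.sin(g)
          + 0.000719 * np.cos(2 * g) + 0.000077 * np.sin(2 * g))
    return decl, e0

def daily_h0_and_daylength(latitude_deg, day_of_year):
    phi = np.radians(latitude_deg)
    decl, e0 = spencer_declination_and_e0(day_of_year)
    ws = np.arccos(np.clip(-np.tan(phi) * np.tan(decl), -1.0, 1.0))  # sunset hour angle
    h0 = (24 * 3600 / np.pi) * I_SC * e0 * (
        np.cos(phi) * np.cos(decl) * np.sin(ws) + ws * np.sin(phi) * np.sin(decl))
    return h0 / 1e6, 24 * ws / np.pi      # MJ/m^2/day, hours of possible sunshine

print(daily_h0_and_daylength(45.0, 172))  # ~June 21 at 45 deg N
```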

  11. Table for monthly average daily extraterrestrial irradiation on horizontal surface and the maximum possible sunshine duration

    International Nuclear Information System (INIS)

    Jain, P.C.

    1984-01-01

    The monthly average daily values of the extraterrestrial irradiation on a horizontal surface (H₀) and the maximum possible sunshine duration are two important parameters that are frequently needed in various solar energy applications. These are generally calculated by scientists each time they are needed, often by using approximate short-cut methods. Computations for these values have been made once and for all for latitude values from 60 deg. N to 60 deg. S at intervals of 1 deg. and are presented in a convenient tabular form. Values of the maximum possible sunshine duration as recorded on a Campbell-Stokes sunshine recorder are also computed and presented. These tables should avoid the need for repetitive and approximate calculations and serve as a useful ready reference for solar energy scientists and engineers. (author)

  12. Potential role of motion for enhancing maximum output energy of triboelectric nanogenerator

    Directory of Open Access Journals (Sweden)

    Kyung-Eun Byun

    2017-07-01

    Although the triboelectric nanogenerator (TENG) has been explored as one of the possible candidates for the auxiliary power source of portable and wearable devices, the output energy of a TENG is still insufficient to charge the devices with daily motion. Moreover, the fundamental aspects of the maximum possible energy of a TENG related to human motion are not understood systematically. Here, we confirmed the possibility of charging commercial portable and wearable devices such as smart phones and smart watches by utilizing the mechanical energy generated by human motion. We confirmed by theoretical extraction that the maximum possible energy is related to specific form factors of a TENG. Furthermore, we experimentally demonstrated the effect of human motion in terms of kinetic energy and impulse by varying velocity and elasticity, and clarified how to improve the maximum possible energy of a TENG. This study gives insight into the design of a TENG to obtain a large amount of energy in a limited space.

  13. Optimal Velocity to Achieve Maximum Power Output – Bench Press for Trained Footballers

    Directory of Open Access Journals (Sweden)

    Richard Billich

    2015-03-01

    Optimal Velocity to Achieve Maximum Power Output – Bench Press for Trained Footballers. In today’s world of strength training there are many myths surrounding effective exercising with the least possible negative effect on one’s health. In this experiment we focus on finding a relationship between maximum power output, the load used and the velocity with which the exercise is performed. The main objective is to find the optimal speed of the exercise motion which would allow us to reach the maximum mechanical muscle output during a bench press exercise. This information could be beneficial to sporting coaches and recreational sportsmen alike in helping them improve the effectiveness of fast strength training. Fifteen football players of the FK Třinec football club participated in the experiment. The measurements were made with the use of 3D kinematic and dynamic analysis, both experimental methods. The research subjects participated in a strength test, in which the mechanical muscle output was measured at loads of 0, 10, 30, 50, 70 and 90% of one-repetition maximum (1RM) and at 1RM itself. The acquired result values and other required data were processed using Qualisys Track Manager and Visual 3D software (C-motion, Rockville, MD, USA). During the bench press exercise the maximum mechanical muscle output of the set of research subjects was reached at 75% of maximum exercise motion velocity.

  14. Maximum-likelihood model averaging to profile clustering of site types across discrete linear sequences.

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2009-06-01

    A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
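
    The model-averaging step rests on standard information-criterion weights; the generic snippet below computes Akaike weights from a set of candidate-model AIC values (an illustration of the weighting idea, not the authors' clustering implementation).

```python
# Akaike weights: relative likelihoods of candidate models given their AIC values,
# usable for weighted model averaging of per-site clustering profiles.
import numpy as np

def akaike_weights(aic_values):
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()                 # AIC differences from the best model
    rel_likelihood = np.exp(-0.5 * delta)   # relative likelihood of each model
    return rel_likelihood / rel_likelihood.sum()

# Hypothetical AIC values for three competing clustering models.
print(akaike_weights([1012.3, 1010.1, 1019.8]))  # weights sum to 1
```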

  15. Maximizing Output Power of a Solar Panel via Combination of Sun Tracking and Maximum Power Point Tracking by Fuzzy Controllers

    Directory of Open Access Journals (Sweden)

    Mohsen Taherbaneh

    2010-01-01

    In applications with low energy-conversion efficiency, maximizing the output power improves the efficiency. The maximum output power of a solar panel depends on the environmental conditions and the load profile. In this paper, a method based on the simultaneous use of two fuzzy controllers is developed in order to maximize the generated output power of a solar panel in a photovoltaic system: fuzzy-based sun tracking and maximum power point tracking. The sun tracking is performed by changing the solar panel orientation in the horizontal and vertical directions by two properly designed DC motors. A DC-DC converter is employed to track the solar panel maximum power point. In addition, the proposed system has the capability of extracting solar panel I-V curves. Experimental results show that the proposed fuzzy techniques increase the power delivered by the solar panel, allowing a reduction in the size, weight, and cost of solar panels in photovoltaic systems.
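
    For context on the maximum power point tracking half of the system, the sketch below runs a plain perturb-and-observe hill climb on a simplified single-diode-style photovoltaic model; it is a generic MPPT illustration under assumed panel parameters, not the fuzzy controller developed in the paper.

```python
# Generic perturb-and-observe MPPT on a simplified PV model
# (illustrative stand-in; the paper uses fuzzy controllers instead).
import numpy as np

def pv_current(v, i_ph=5.0, i_0=1e-9, n_vt=1.6):
    """Panel current for terminal voltage v (simplified single-diode model)."""
    return i_ph - i_0 * (np.exp(v / n_vt) - 1.0)

def perturb_and_observe(v0=10.0, dv=0.1, steps=400):
    v, p_prev, direction = v0, 0.0, +1
    for _ in range(steps):
        p = v * pv_current(v)
        if p < p_prev:            # power dropped: reverse the perturbation direction
            direction = -direction
        p_prev = p
        v += direction * dv       # perturb the operating voltage
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"estimated MPP: {v_mpp:.2f} V, {p_mpp:.1f} W")
```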

  16. An Invariance Property for the Maximum Likelihood Estimator of the Parameters of a Gaussian Moving Average Process

    OpenAIRE

    Godolphin, E. J.

    1980-01-01

    It is shown that the estimation procedure of Walker leads to estimates of the parameters of a Gaussian moving average process which are asymptotically equivalent to the maximum likelihood estimates proposed by Whittle and represented by Godolphin.

  17. The effects of disjunct sampling and averaging time on maximum mean wind speeds

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Mann, J.

    2006-01-01

    Conventionally, the 50-year wind is calculated on the basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours; we call this disjunct sampling. It may also happen that the wind speeds are averaged over a longer time...
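
    To make the conventional procedure concrete, the sketch below fits a Gumbel distribution to synthetic annual maxima of 10-min mean wind speeds and reads off the 50-year value with scipy; it is a simplified stand-in and does not reproduce the paper's treatment of disjunct sampling or averaging time.

```python
# Conventional 50-year wind estimate: fit a Gumbel distribution to annual maxima
# of 10-min mean wind speeds (synthetic data; simplified illustration only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
annual_maxima = rng.gumbel(loc=22.0, scale=2.5, size=30)   # 30 years of annual maxima, m/s

loc, scale = stats.gumbel_r.fit(annual_maxima)             # location and scale parameters
u50 = stats.gumbel_r.ppf(1.0 - 1.0 / 50.0, loc=loc, scale=scale)
print(f"estimated 50-year wind: {u50:.1f} m/s")
```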

  18. Scale dependence of the average potential around the maximum in Φ⁴ theories

    International Nuclear Information System (INIS)

    Tetradis, N.; Wetterich, C.

    1992-04-01

    The average potential describes the physics at a length scale k⁻¹ by averaging out the degrees of freedom with characteristic momenta larger than k. The dependence on k can be described by differential evolution equations. We solve these equations for the nonconvex part of the potential around the origin in φ⁴ theories, in the phase with spontaneous symmetry breaking. The average potential is real and approaches the convex effective potential in the limit k → 0. Our calculation is relevant for processes for which the shape of the potential at a given scale is important, such as tunneling phenomena or inflation. (orig.)

  19. On the average complexity of sphere decoding in lattice space-time coded multiple-input multiple-output channel

    KAUST Repository

    Abediseid, Walid

    2012-12-21

    The exact average complexity analysis of the basic sphere decoder for general space-time codes applied to the multiple-input multiple-output (MIMO) wireless channel is known to be difficult. In this work, we shed light on the computational complexity of sphere decoding for the quasi-static, lattice space-time (LAST) coded MIMO channel. Specifically, we derive an upper bound on the tail distribution of the decoder's computational complexity. We show that when the computational complexity exceeds a certain limit, this upper bound becomes dominated by the outage probability achieved by LAST coding and sphere decoding schemes. We then calculate the minimum average computational complexity that is required by the decoder to achieve near-optimal performance in terms of the system parameters. Our results indicate that there exists a cut-off rate (multiplexing gain) for which the average complexity remains bounded.

  20. Maximum stress estimation model for multi-span waler beams with deflections at the supports using average strains.

    Science.gov (United States)

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-03-30

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  1. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    Directory of Open Access Journals (Sweden)

    Sung Woo Park

    2015-03-01

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  2. An implementation of the maximum-caliber principle by replica-averaged time-resolved restrained simulations.

    Science.gov (United States)

    Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo

    2018-05-14

    Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data in molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.

  3. Optimal piston motion for maximum net output work of Daniel cam engines with low heat rejection

    International Nuclear Information System (INIS)

    Badescu, Viorel

    2015-01-01

    Highlights: • The piston motion of low heat rejection compression ignition engines is optimized. • A realistic model taking into account the cooling system is developed. • The optimized cam is smaller for cylinders without thermal insulation. • The optimized cam size depends on ignition moment and cooling process intensity.

    Abstract: Compression ignition engines based on classical tapper-crank systems cannot provide optimal piston motion. Cam engines are more appropriate for this purpose. In this paper the piston motion of a Daniel cam engine is optimized. Piston acceleration is taken as the control. The objective is to maximize the net output work during the compression and power strokes. A major research effort has been allocated in the last two decades to the development of low heat rejection engines. A thermally insulated cylinder is considered and a realistic model taking into account the cooling system is developed. The sinusoidal approximation of piston motion in the classical tapper-crank system overestimates the engine efficiency. The exact description of the piston motion in the tapper-crank system is used here as a reference. The radiation process has negligible effects during the optimization. The approach with no constraint on piston acceleration is a reasonable approximation. The net output work is much larger (by 12–13%) for the optimized system than for the classical tapper-crank system, for similar thicknesses of cylinder walls and thermal insulation. Low heat rejection measures are not of significant importance for optimized cam engines. The optimized cam is smaller for a cylinder without thermal insulation than for an insulated cylinder (by up to 8%, depending on the local polar radius). The auto-ignition moment is not a parameter of significant importance for optimized cam engines. However, for given cylinder wall and insulation materials there is an optimum auto-ignition moment which maximizes the net output work.

  4. Quantum Coherent Three-Terminal Thermoelectrics: Maximum Efficiency at Given Power Output

    Directory of Open Access Journals (Sweden)

    Robert S. Whitney

    2016-05-01

    This work considers the nonlinear scattering theory for three-terminal thermoelectric devices used for power generation or refrigeration. Such systems are quantum phase-coherent versions of a thermocouple, and the theory applies to systems in which interactions can be treated at a mean-field level. It considers an arbitrary three-terminal system in any external magnetic field, including systems with broken time-reversal symmetry, such as chiral thermoelectrics, as well as systems in which the magnetic field plays no role. It is shown that the upper bound on efficiency at given power output is of quantum origin and is stricter than Carnot’s bound. The bound is exactly the same as previously found for two-terminal devices and can be achieved by three-terminal systems with or without broken time-reversal symmetry, i.e., chiral and non-chiral thermoelectrics.

  5. Quantifying walking and standing behaviour of dairy cows using a moving average based on output from an accelerometer

    DEFF Research Database (Denmark)

    Nielsen, Lars Relund; Pedersen, Asger Roer; Herskin, Mette S

    2010-01-01

    … in sequences of approximately 20 s for the period of 10 min. Afterwards the cows were stimulated to move/lift the legs while standing in a cubicle. The behaviour was video recorded, and the recordings were analysed second by second for walking and standing behaviour as well as the number of steps taken. Various algorithms for predicting walking/standing status were compared. The algorithms were all based on a limit of a moving average calculated by using one of two outputs of the accelerometer, either a motion index or a step count, and applied over periods of 3 or 5 s. Furthermore, we investigated the effect of additionally applying the rule: a walking period must last at least 5 s. The results indicate that the lowest misclassification rate (10%) of walking and standing was obtained based on the step count with a moving average of 3 s and with the rule applied. However, the rate of misclassification...
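
    The classification rule described above can be sketched in a few lines: a per-second step count is smoothed with a short moving average, thresholded into walking or standing, and walking bouts shorter than 5 s are relabelled; the threshold used below is an illustrative value, not the one calibrated in the study.

```python
# Walking/standing classification from a per-second accelerometer step count:
# 3-s moving average, threshold, and a "walking bouts must last >= 5 s" rule.
# The threshold is illustrative, not the value calibrated in the study.
import numpy as np

def classify_walking(step_count_per_s, window_s=3, threshold=0.5, min_bout_s=5):
    kernel = np.ones(window_s) / window_s
    smoothed = np.convolve(step_count_per_s, kernel, mode="same")   # moving average
    walking = smoothed > threshold
    # Relabel walking bouts shorter than min_bout_s as standing.
    i = 0
    while i < len(walking):
        if walking[i]:
            j = i
            while j < len(walking) and walking[j]:
                j += 1
            if j - i < min_bout_s:
                walking[i:j] = False
            i = j
        else:
            i += 1
    return walking

steps = np.array([0, 0, 1, 2, 2, 1, 0, 0, 0, 1, 0, 0, 2, 2, 2, 1, 1, 0, 0, 0])
print(classify_walking(steps).astype(int))
```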

  6. Maximum power output and load matching of a phosphoric acid fuel cell-thermoelectric generator hybrid system

    Science.gov (United States)

    Chen, Xiaohang; Wang, Yuan; Cai, Ling; Zhou, Yinghui

    2015-10-01

    Based on the current models of phosphoric acid fuel cells (PAFCs) and thermoelectric generators (TGs), a new hybrid system is proposed, in which the effects of multiple irreversibilities, resulting from the activation, concentration, and ohmic overpotentials in the PAFC, Joule heat and heat leak in the TG, finite-rate heat transfer between the TG and the heat reservoirs, and heat leak from the PAFC to the environment, are taken into account. Expressions for the power output and efficiency of the PAFC, TG, and hybrid system are analytically derived and directly used to discuss the performance characteristics of the hybrid system. The optimal relationship between the electric currents in the PAFC and TG is obtained. The maximum power output is numerically calculated. It is found that the maximum power output density of the hybrid system will increase by about 150 W m⁻² compared with that of a single PAFC. The problem of how to optimally match the load resistances of the two subsystems is discussed. Some significant results for practical hybrid systems are obtained.

  7. Relation between Peak Power Output in Sprint Cycling and Maximum Voluntary Isometric Torque Production.

    Science.gov (United States)

    Kordi, Mehdi; Goodall, Stuart; Barratt, Paul; Rowley, Nicola; Leeder, Jonathan; Howatson, Glyn

    2017-08-01

    From a cycling paradigm, little has been done to understand the relationships between the maximal isometric strength of different single-joint lower body muscle groups and their relation with, and ability to predict, peak power output (PPO), and how they compare to an isometric cycling-specific task. The aim of this study was to establish relationships between maximal voluntary torque production from isometric single-joint and cycling-specific tasks and to assess their ability to predict PPO. Twenty trained male cyclists participated in this study. Peak torque was measured by performing maximum voluntary contractions (MVC) of the knee extensors, knee flexors, dorsiflexors and hip extensors, whilst instrumented cranks measured isometric peak torque from MVC when participants were in their cycling-specific position (ISOCYC). A stepwise regression showed that peak torque of the knee extensors was the only significant predictor of PPO when using SJD and accounted for 47% of the variance. However, when compared to ISOCYC, the only significant predictor of PPO was ISOCYC, which accounted for 77% of the variance. This suggests that peak torque of the knee extensors was the best single-joint predictor of PPO in sprint cycling. Furthermore, a stronger prediction can be made from a task-specific isometric task.

  8. Countermovement depth - a variable which clarifies the relationship between the maximum power output and height of a vertical jump.

    Science.gov (United States)

    Gajewski, Jan; Michalski, Radosław; Buśko, Krzysztof; Mazur-Różycka, Joanna; Staniak, Zbigniew

    2018-01-01

    The aim of this study was to identify the determinants of peak power achieved during vertical jumps in order to clarify the relationship between the height of a jump and the ability to exert maximum power. One hundred young (16.8 ± 1.8 years) sportsmen participated in the study (body height 1.861 ± 0.109 m, body weight 80.3 ± 9.2 kg). Each participant performed three jump tests: countermovement jump (CMJ), akimbo countermovement jump (ACMJ), and spike jump (SPJ). A force plate was used to measure ground reaction force and to determine peak power output. The following explanatory variables were included in the model: jump height, body mass, and the lowering of the centre of mass before take-off (countermovement depth). A model was created using multiple regression analysis and allometric scaling. The model was used to calculate the expected power value for each participant, which correlated strongly with the real values. The coefficient of determination R² equalled 0.89, 0.90 and 0.98, respectively, for the CMJ, ACMJ, and SPJ jumps. The countermovement depth proved to be a variable strongly affecting the maximum power of a jump. If the countermovement depth remains constant, the relative peak power is a simple function of jump height. The results suggest that the jump height of an individual is an exact indicator of their ability to produce maximum power. The presented model has the potential to be utilized under field conditions for estimating the maximum power output of vertical jumps.
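
    For illustration only, the sketch below fits a regression of the same general shape (allometric in body mass, linear in jump height and countermovement depth) to synthetic data; the functional form and coefficients are assumptions, not the authors' published model.

```python
# Hypothetical allometric-style regression: log(peak power) ~ log(body mass),
# jump height, countermovement depth. Synthetic data for illustration only.
import numpy as np

rng = np.random.default_rng(2)
n = 100
mass = rng.normal(80, 9, n)                    # kg
height = rng.normal(0.45, 0.06, n)             # jump height, m
depth = rng.normal(0.35, 0.05, n)              # countermovement depth, m
log_power = 1.0 + 0.9 * np.log(mass) + 2.0 * height - 1.0 * depth + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), np.log(mass), height, depth])
beta, *_ = np.linalg.lstsq(X, log_power, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((log_power - pred) ** 2) / np.sum((log_power - log_power.mean()) ** 2)
print("coefficients:", np.round(beta, 3), " R^2:", round(r2, 3))
```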

  9. Phantom and Clinical Study of Differences in Cone Beam Computed Tomographic Registration When Aligned to Maximum and Average Intensity Projection

    Energy Technology Data Exchange (ETDEWEB)

    Shirai, Kiyonori [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan); Nishiyama, Kinji, E-mail: sirai-ki@mc.pref.osaka.jp [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan); Katsuda, Toshizo [Department of Radiology, National Cerebral and Cardiovascular Center, Osaka (Japan); Teshima, Teruki; Ueda, Yoshihiro; Miyazaki, Masayoshi; Tsujii, Katsutomo [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan)

    2014-01-01

    Purpose: To determine whether maximum or average intensity projection (MIP or AIP, respectively) reconstructed from 4-dimensional computed tomography (4DCT) is preferred for alignment to cone beam CT (CBCT) images in lung stereotactic body radiation therapy. Methods and Materials: Stationary CT and 4DCT images were acquired with a target phantom at the center of motion and moving along the superior–inferior (SI) direction, respectively. Motion profiles were asymmetrical waveforms with amplitudes of 10, 15, and 20 mm and a 4-second cycle. Stationary CBCT and dynamic CBCT images were acquired in the same manner as stationary CT and 4DCT images. Stationary CBCT was aligned to stationary CT, and the couch position was used as the baseline. Dynamic CBCT was aligned to the MIP and AIP of corresponding amplitudes. Registration error was defined as the SI deviation of the couch position from the baseline. In 16 patients with isolated lung lesions, free-breathing CBCT (FBCBCT) was registered to AIP and MIP (64 sessions in total), and the difference in couch shifts was calculated. Results: In the phantom study, registration errors were within 0.1 mm for AIP and 1.5 to 1.8 mm toward the inferior direction for MIP. In the patient study, the difference in the couch shifts (mean, range) was insignificant in the right-left (0.0 mm, ≤1.0 mm) and anterior–posterior (0.0 mm, ≤2.1 mm) directions. In the SI direction, however, the couch position significantly shifted in the inferior direction after MIP registration compared with after AIP registration (mean, −0.6 mm; ranging from 1.7 mm to the superior side to 3.5 mm to the inferior side, P=.02). Conclusions: AIP is recommended as the reference image for registration to FBCBCT when target alignment is performed in the presence of asymmetrical respiratory motion, whereas MIP causes systematic target positioning error.

  10. Phantom and Clinical Study of Differences in Cone Beam Computed Tomographic Registration When Aligned to Maximum and Average Intensity Projection

    International Nuclear Information System (INIS)

    Shirai, Kiyonori; Nishiyama, Kinji; Katsuda, Toshizo; Teshima, Teruki; Ueda, Yoshihiro; Miyazaki, Masayoshi; Tsujii, Katsutomo

    2014-01-01

    Purpose: To determine whether maximum or average intensity projection (MIP or AIP, respectively) reconstructed from 4-dimensional computed tomography (4DCT) is preferred for alignment to cone beam CT (CBCT) images in lung stereotactic body radiation therapy. Methods and Materials: Stationary CT and 4DCT images were acquired with a target phantom at the center of motion and moving along the superior–inferior (SI) direction, respectively. Motion profiles were asymmetrical waveforms with amplitudes of 10, 15, and 20 mm and a 4-second cycle. Stationary CBCT and dynamic CBCT images were acquired in the same manner as stationary CT and 4DCT images. Stationary CBCT was aligned to stationary CT, and the couch position was used as the baseline. Dynamic CBCT was aligned to the MIP and AIP of corresponding amplitudes. Registration error was defined as the SI deviation of the couch position from the baseline. In 16 patients with isolated lung lesions, free-breathing CBCT (FBCBCT) was registered to AIP and MIP (64 sessions in total), and the difference in couch shifts was calculated. Results: In the phantom study, registration errors were within 0.1 mm for AIP and 1.5 to 1.8 mm toward the inferior direction for MIP. In the patient study, the difference in the couch shifts (mean, range) was insignificant in the right-left (0.0 mm, ≤1.0 mm) and anterior–posterior (0.0 mm, ≤2.1 mm) directions. In the SI direction, however, the couch position significantly shifted in the inferior direction after MIP registration compared with after AIP registration (mean, −0.6 mm; ranging from 1.7 mm to the superior side to 3.5 mm to the inferior side, P=.02). Conclusions: AIP is recommended as the reference image for registration to FBCBCT when target alignment is performed in the presence of asymmetrical respiratory motion, whereas MIP causes systematic target positioning error

  11. Muscle power output properties using the stretch-shortening cycle of the upper limb and their relationships with a one-repetition maximum bench press.

    Science.gov (United States)

    Miyaguchi, Kazuyoshi; Demura, Shinichi

    2006-05-01

    The purpose of this study was to examine the output properties of muscle power by the dominant upper limb using the stretch-shortening cycle (SSC), and the relationships between the power output by SSC and a one-repetition maximum bench press (1 RM BP) used as a strength indicator of the upper body. Sixteen male athletes (21.4 ± 0.9 yr) participated in this study. They pulled a load of 40% of maximum voluntary contraction (MVC) at a stretch by elbow flexion of the dominant upper limb under the following three preliminary conditions: static relaxed muscle state (SR condition), isometric muscle contraction state (ISO condition), and using SSC (SSC condition). The velocity of a wire load via a pulley during elbow flexion was measured accurately using a power instrument with a rotary encoder, and the muscle power curve was drawn from the product of the velocity and load. Significant differences were found among all evaluation parameters of muscle power exerted under the above three conditions, and the parameters regarding early power output during concentric contraction were larger in the SSC condition than in the SR and ISO conditions. The parameters of initial muscle contraction velocity correlated significantly with 1 RM BP only when using SSC (r=0.60-0.62). The use of SSC before powerful elbow flexion may contribute largely to early explosive power output during concentric contraction. Bench press capacity is related to the development of the above early power output when using SSC.

  12. Utilizing Maximum Power Point Trackers in Parallel to Maximize the Power Output of a Solar (Photovoltaic) Array

    Science.gov (United States)

    2012-12-01

  13. Combining site occupancy, breeding population sizes and reproductive success to calculate time-averaged reproductive output of different habitat types: an application to Tricolored Blackbirds.

    Directory of Open Access Journals (Sweden)

    Marcel Holyoak

    In metapopulations in which habitat patches vary in quality and occupancy it can be complicated to calculate the net time-averaged contribution to reproduction of particular populations. Surprisingly, few indices have been proposed for this purpose. We combined occupancy, abundance, frequency of occurrence, and reproductive success to determine the net value of different sites through time and applied this method to a bird of conservation concern. The Tricolored Blackbird (Agelaius tricolor) has experienced large population declines, is the most colonial songbird in North America, is largely confined to California, and breeds itinerantly in multiple habitat types. It has had chronically low reproductive success in recent years. Although young produced per nest have previously been compared across habitats, no study has simultaneously considered site occupancy and reproductive success. Combining occupancy, abundance, frequency of occurrence, reproductive success and nest failure rate, we found that large colonies in grain fields fail frequently because of nest destruction due to harvest prior to fledging. Consequently, net time-averaged reproductive output is low compared to colonies in non-native Himalayan blackberry or thistles, and native stinging nettles. Cattail marshes have intermediate reproductive output, but their reproductive output might be improved by active management. Harvest of grain-field colonies necessitates either promoting delay of harvest or creating alternative, more secure nesting habitats. Stinging nettle and marsh colonies offer the main potential sources for restoration or native habitat creation. From 2005 to 2011 breeding site occupancy declined 3x faster than new breeding colonies were formed, indicating a rapid decline in occupancy. Total abundance showed a similar decline. Causes of variation in the value for reproduction of nesting substrates and factors behind continuing population declines merit urgent...

  14. Hybrid Solid Oxide Fuel Cell and Thermoelectric Generator for Maximum Power Output in Micro-CHP Systems

    DEFF Research Database (Denmark)

    Rosendahl, Lasse; Mortensen, Paw Vestergård; Enkeshafi, Ali A.

    2011-01-01

    This paper quantifies a micro-CHP system based on a solid oxide fuel cell (SOFC) and a high-performance TE generator. Based on a 3 kW fuel input, the hybrid SOFC implementation boosts electrical output from 945 W to 1085 W, with 1794 W available for heating. … the electricity production in micro-CHP systems by more than 15%, corresponding to system electrical efficiency increases of some 4 to 5 percentage points. This will make fuel cell-based micro-CHP systems very competitive and profitable and will also open opportunities in a number of other potential business and market segments which are not yet quantified.
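
    The quoted efficiency gain follows directly from the stated powers; the short calculation below reproduces the roughly 4 to 5 percentage point increase in electrical efficiency.

```python
# Electrical efficiency implied by the quoted figures (3 kW fuel input).
fuel_in = 3000.0      # W
p_sofc = 945.0        # W, SOFC alone
p_hybrid = 1085.0     # W, SOFC + thermoelectric generator
eta_sofc = p_sofc / fuel_in
eta_hybrid = p_hybrid / fuel_in
print(f"{eta_sofc:.1%} -> {eta_hybrid:.1%} "
      f"(+{100 * (eta_hybrid - eta_sofc):.1f} percentage points)")
# 31.5% -> 36.2% (+4.7 percentage points)
```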

  15. A diode-pumped continuous-wave Nd:YAG laser with an average output power of 1 kW

    International Nuclear Information System (INIS)

    Lee, Sung Man; Cha, Byung Heon; Kim, Cheol Jung

    2004-01-01

    A diode-pumped Nd:YAG laser with an average output power of 1 kW is developed for industrial applications, such as metal cutting, precision welding, etc. To develop such a diode-pumped high-power solid-state laser, a series of laser modules has generally been used, with and without thermal birefringence compensation. For example, Akiyama et al. used three laser modules to obtain an output power of 5.4 kW CW [1]. In the side-pumped Nd:YAG laser, which is a commonly used pump scheme to obtain high output power, the crystal rod has a short thermal focal length at a high input pump power, and the short thermal focal length in turn leads to beam distortion within the laser resonator. Therefore, to achieve a high output power with good stability, an isotropic beam profile, and high optical efficiency, a detailed analysis of the resonator stability condition, depending on both the mirror distances and the crystal separation, is essential.

  16. ANALYSIS OF THE STATISTICAL BEHAVIOUR OF DAILY MAXIMUM AND MONTHLY AVERAGE RAINFALL ALONG WITH RAINY DAYS VARIATION IN SYLHET, BANGLADESH

    Directory of Open Access Journals (Sweden)

    G. M. J. HASAN

    2014-10-01

    Climate, one of the major controlling factors for the well-being of the inhabitants of the world, has been changing in accordance with natural forcing and man-made activities. Bangladesh, one of the most densely populated countries in the world, is under threat due to climate change caused by excessive use or abuse of ecology and natural resources. This study examines the rainfall patterns and their associated changes in the north-eastern part of Bangladesh, mainly Sylhet city, through statistical analysis of daily rainfall data during the period 1957-2006. It has been observed that a good correlation exists between the monthly mean and daily maximum rainfall. A linear regression analysis of the data is found to be significant for all the months. Some key statistical parameters, such as the mean values of the Coefficient of Variability (CV), Relative Variability (RV) and Percentage Inter-annual Variability (PIV), have been studied and found to be at variance. Monthly, yearly and seasonal variations of rainy days were also analysed to check for any significant changes.

  17. Diode-side-pumped intracavity frequency-doubled Nd:YAG/BaWO4 Raman laser generating average output power of 3.14 W at 590 nm.

    Science.gov (United States)

    Li, Shutao; Zhang, Xingyu; Wang, Qingpu; Zhang, Xiaolei; Cong, Zhenhua; Zhang, Huaijin; Wang, Jiyang

    2007-10-15

    We report a linear-cavity high-power all-solid-state Q-switched yellow laser. The laser source comprises a diode-side-pumped Nd:YAG module that produces 1064 nm fundamental radiation, an intracavity BaWO4 Raman crystal that generates a first-Stokes laser at 1180 nm, and a KTP crystal that frequency doubles the first-Stokes laser to 590 nm. A convex-plane cavity is employed in this configuration to counteract some of the thermal effects caused by the high pump power. An average output power of 3.14 W at 590 nm is obtained at a pulse repetition frequency of 10 kHz.

  18. SU-E-T-174: Evaluation of the Optimal Intensity Modulated Radiation Therapy Plans Done On the Maximum and Average Intensity Projection CTs

    Energy Technology Data Exchange (ETDEWEB)

    Jurkovic, I [University of Texas Health Science Center at San Antonio, San Antonio, TX (United States); Stathakis, S; Li, Y; Patel, A; Vincent, J; Papanikolaou, N; Mavroidis, P [Cancer Therapy and Research Center University of Texas Health Sciences Center at San Antonio, San Antonio, TX (United States)

    2014-06-01

    Purpose: To determine the difference in coverage between plans done on average intensity projection and maximum intensity projection CT data sets for lung patients and to establish correlations between different factors influencing the coverage. Methods: For six lung cancer patients, 10 phases of equal duration through the respiratory cycle and the maximum and average intensity projections (MIP and AIP) from their 4DCT datasets were obtained. The MIP and AIP datasets had three GTVs delineated (GTVaip — delineated on AIP, GTVmip — delineated on MIP and GTVfus — delineated on each of the 10 phases and summed up). From each GTV, planning target volumes (PTV) were then created by adding additional margins. For each of the PTVs an IMRT plan was developed on the AIP dataset. The plans were then copied to the MIP data set and recalculated. Results: The effective depths in the AIP cases were significantly smaller than in the MIP cases (p < 0.001). The Pearson correlation coefficient of r = 0.839 indicates a strong positive linear relationship between the average percentage difference in effective depths and the average PTV coverage on the MIP data set. The V20Gy of the involved lung depends on the PTV coverage. The relationship between the PTVaip mean CT number difference and the PTVaip coverage on the MIP data set gives r = 0.830. When the plans are produced on MIP and copied to AIP, r equals −0.756. Conclusion: The correlation between the AIP and MIP data sets indicates that the selection of the data set for developing the treatment plan affects the final outcome (cases with a high average percentage difference in effective depths between AIP and MIP should be calculated on AIP). The percentage of the lung volume receiving a higher dose depends on how well the PTV is covered, regardless of on which data set the plan is done.

  19. Verification of average daily maximum permissible concentration of styrene in the atmospheric air of settlements under the results of epidemiological studies of the children’s population

    Directory of Open Access Journals (Sweden)

    M.A. Zemlyanova

    2015-03-01

    We present materials on the verification of the average daily maximum permissible concentration of styrene in the atmospheric air of settlements, performed using the results of our own in-depth epidemiological studies of the children’s population according to the principles of international risk assessment practice. It was established that children aged 4–7 years exposed to styrene at levels above 1.2 times the threshold level value for continuous exposure develop negative effects in the form of disorders of hormonal regulation, pigment exchange, antioxidative activity, cytolysis, immune reactivity and cytogenetic imbalance, which contribute to the increased morbidity of diseases of the central nervous system, endocrine system, respiratory organs, digestion and skin. Based on the proven cause-and-effect relationships between the biomarkers of negative effects and the styrene concentration in blood, it was demonstrated that the benchmark styrene concentration in blood is 0.002 mg/dm3. The justified value complies with and confirms the average daily styrene concentration in the air of settlements at the level of 0.002 mg/m3 accepted in Russia, which ensures the safety of the health of the population (1 threshold level value for continuous exposure).

  20. Measurement of canine pancreatic perfusion using dynamic computed tomography: Influence of input-output vessels on deconvolution and maximum slope methods

    Energy Technology Data Exchange (ETDEWEB)

    Kishimoto, Miori, E-mail: miori@mx6.et.tiki.ne.jp [Department of Clinical Veterinary Science, Obihiro University of Agriculture and Veterinary Medicine, Nishi 2-11 Inada-cho, Obihiro 080-8555 (Japan); Tsuji, Yoshihisa, E-mail: y.tsuji@extra.ocn.ne.jp [Department of Gastroenterology and Hepatology, Kyoto University Graduate School of Medicine, Shogoinkawara-cho 54, Sakyo-ku 606-8507 (Japan); Katabami, Nana; Shimizu, Junichiro; Lee, Ki-Ja [Department of Clinical Veterinary Science, Obihiro University of Agriculture and Veterinary Medicine, Nishi 2-11 Inada-cho, Obihiro 080-8555 (Japan); Iwasaki, Toshiroh [Department of Veterinary Internal Medicine, Tokyo University of Agriculture and Technology, Saiwai-cho, 3-5-8, Fuchu 183-8509 (Japan); Miyake, Yoh-Ichi [Department of Clinical Veterinary Science, Obihiro University of Agriculture and Veterinary Medicine, Nishi 2-11 Inada-cho, Obihiro 080-8555 (Japan); Yazumi, Shujiro [Digestive Disease Center, Kitano Hospital, 2-4-20 Ougi-machi, Kita-ku, Osaka 530-8480 (Japan); Chiba, Tsutomu [Department of Gastroenterology and Hepatology, Kyoto University Graduate School of Medicine, Shogoinkawara-cho 54, Sakyo-ku 606-8507 (Japan); Yamada, Kazutaka, E-mail: kyamada@obihiro.ac.jp [Department of Clinical Veterinary Science, Obihiro University of Agriculture and Veterinary Medicine, Nishi 2-11 Inada-cho, Obihiro 080-8555 (Japan)

    2011-01-15

    Objective: We investigated whether the prerequisites of the maximum slope and deconvolution methods are satisfied in pancreatic perfusion CT and whether the measured parameters of these algorithms are correlated. Methods: We examined nine beagles injected with iohexol (200 mgI kg⁻¹) at 5.0 ml s⁻¹. The abdominal aorta and splenic and celiac arteries were selected as the input arteries, and the splenic vein as the output vein. For the maximum slope method, we determined the arterial contrast volume of each artery by measuring the area under the curve (AUC) and compared the peak enhancement time in the pancreas with the contrast appearance time in the splenic vein. For the deconvolution method, the artery-to-vein collection rate of contrast medium was calculated. We calculated the pancreatic tissue blood flow (TBF), tissue blood volume (TBV), and mean transit time (MTT) using both algorithms and investigated their correlation based on vessel selection. Results: The arterial AUC significantly decreased as the artery neared the pancreas (P < 0.01). In all cases, the peak time of the pancreas (11.5 ± 1.6) was shorter than the appearance time (14.1 ± 1.6) in the splenic vein. The splenic artery-vein combination exhibited the highest collection rate (91.1%) and was the only combination for which TBF, TBV, and MTT were significantly correlated between the two algorithms. Conclusion: Selection of a vessel nearest to the pancreas is considered a more appropriate prerequisite. Therefore, vessel selection is important when comparing the semi-quantitative parameters obtained by different algorithms.
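
    For orientation, the maximum slope estimate of tissue blood flow is the peak rate of rise of the tissue time-density curve divided by the peak arterial enhancement; the sketch below applies that definition to synthetic curves and is illustrative only, not the study's perfusion software.

```python
# Maximum slope perfusion estimate: TBF ~ max d(tissue TDC)/dt divided by the
# peak arterial enhancement. Synthetic curves; illustrative only.
import numpy as np

t = np.arange(0, 40.0, 0.5)                                   # s
arterial = 300.0 * (t / 8.0) * np.exp(1 - t / 8.0)            # HU, gamma-variate-like AIF
tissue = 40.0 * (t / 14.0) ** 2 * np.exp(2 * (1 - t / 14.0))  # HU, slower tissue curve

max_slope = np.max(np.gradient(tissue, t))                    # HU/s
peak_arterial = np.max(arterial)                              # HU
tbf = max_slope / peak_arterial                               # fraction of blood per second
print(f"TBF ~ {tbf * 100 * 60:.1f} ml/min per 100 ml of tissue")
```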

  1. Measurement of canine pancreatic perfusion using dynamic computed tomography: Influence of input-output vessels on deconvolution and maximum slope methods

    International Nuclear Information System (INIS)

    Kishimoto, Miori; Tsuji, Yoshihisa; Katabami, Nana; Shimizu, Junichiro; Lee, Ki-Ja; Iwasaki, Toshiroh; Miyake, Yoh-Ichi; Yazumi, Shujiro; Chiba, Tsutomu; Yamada, Kazutaka

    2011-01-01

    Objective: We investigated whether the prerequisites of the maximum slope and deconvolution methods are satisfied in pancreatic perfusion CT and whether the measured parameters of these algorithms are correlated. Methods: We examined nine beagles injected with iohexol (200 mgI kg⁻¹) at 5.0 ml s⁻¹. The abdominal aorta and splenic and celiac arteries were selected as the input arteries, and the splenic vein as the output vein. For the maximum slope method, we determined the arterial contrast volume of each artery by measuring the area under the curve (AUC) and compared the peak enhancement time in the pancreas with the contrast appearance time in the splenic vein. For the deconvolution method, the artery-to-vein collection rate of contrast medium was calculated. We calculated the pancreatic tissue blood flow (TBF), tissue blood volume (TBV), and mean transit time (MTT) using both algorithms and investigated their correlation based on vessel selection. Results: The arterial AUC significantly decreased as the artery neared the pancreas (P < 0.01). In all cases, the peak time of the pancreas (11.5 ± 1.6) was shorter than the appearance time (14.1 ± 1.6) in the splenic vein. The splenic artery-vein combination exhibited the highest collection rate (91.1%) and was the only combination for which TBF, TBV, and MTT were significantly correlated between the two algorithms. Conclusion: Selection of a vessel nearest to the pancreas is considered a more appropriate prerequisite. Therefore, vessel selection is important when comparing the semi-quantitative parameters obtained by different algorithms.

  2. Prescriptive amplification recommendations for hearing losses with a conductive component and their impact on the required maximum power output: an update with accompanying clinical explanation.

    Science.gov (United States)

    Johnson, Earl E

    2013-06-01

    Hearing aid prescriptive recommendations for hearing losses having a conductive component have received less clinical and research interest than those for losses of a sensorineural nature; as a result, much variation remains among current prescriptive methods in their recommendations for conductive and mixed hearing losses (Johnson and Dillon, 2011). The primary intent of this brief clinical note is to demonstrate differences between two algebraically equivalent expressions of hearing loss, which have been approaches used historically to generate a prescription for hearing losses with a conductive component. When air and bone conduction thresholds are entered into hearing aid prescriptions designed for nonlinear hearing aids, it was hypothesized that the two expressions would not yield equivalent amounts of prescribed insertion gain and output. These differences are examined for their impact on the maximum power output (MPO) requirements of the hearing aid. Subsequently, the MPO capabilities of two common behind-the-ear (BTE) receiver placement alternatives, receiver-in-aid (RIA) and receiver-in-canal (RIC), are examined. The two expressions of hearing losses examined were the 25% ABG + AC approach and the 75% ABG + BC approach, where ABG refers to air-bone gap, AC refers to air-conduction threshold, and BC refers to bone-conduction threshold. Example hearing loss cases with a conductive component are sampled for calculations. The MPO capabilities of the BTE receiver placements in commercially available products were obtained from hearing aids on the U.S. federal purchasing contract. Prescribed gain and the required MPO differ markedly between the two approaches. The 75% ABG + BC approach prescribes a compression ratio that is reflective of the amount of sensorineural hearing loss. Not all hearing aids will have the MPO capabilities to support the output requirements for fitting hearing losses with a large conductive component particularly when combined with

  3. The moving-window Bayesian maximum entropy framework: estimation of PM(2.5) yearly average concentration across the contiguous United States.

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L

    2012-09-01

    Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM(2.5)) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM(2.5) data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least 17.8% reduction in mean square error (MSE) in estimating the yearly PM(2.5). Moreover, the MWBME method further reduces the MSE by 8.4-43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM(2.5) across large geographical domains with expected spatial non-stationarity.

  4. The moving-window Bayesian Maximum Entropy framework: Estimation of PM2.5 yearly average concentration across the contiguous United States

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.

    2013-01-01

    Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679

  5. Classic maximum entropy recovery of the average joint distribution of apparent FRET efficiency and fluorescence photons for single-molecule burst measurements.

    Science.gov (United States)

    DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K

    2012-04-05

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.
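
    Recovering the two one-dimensional distributions from the joint distribution described above is a plain marginalization, i.e. a sum over the other axis. The short sketch below uses a synthetic joint array rather than a classic-maximum-entropy reconstruction, purely to show the bookkeeping.

      import numpy as np

      # Synthetic joint probability P(N, E): rows = total photon counts, columns = apparent FRET efficiency bins.
      rng = np.random.default_rng(0)
      joint = rng.random((50, 40))
      joint /= joint.sum()                    # normalize so the array is a probability distribution

      p_photons = joint.sum(axis=1)           # marginal over efficiency: distribution of total fluorescence photons
      p_efficiency = joint.sum(axis=0)        # marginal over photons: apparent FRET efficiency distribution

      assert np.isclose(p_photons.sum(), 1.0) and np.isclose(p_efficiency.sum(), 1.0)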

  6. "Minimum input, maximum output, indeed!" Teaching Collocations ...

    African Journals Online (AJOL)

    Fifty-nine EFL college students participated in the study, and they received two 75-minute instructions between pre- and post-tests: one on the definition of collocation and its importance, and the other on the skill of looking up collocational information in the Naver Dictionary — an English–Korean online dictionary. During ...

  7. Results from transcranial Doppler examination on children and adolescents with sickle cell disease and correlation between the time-averaged maximum mean velocity and hematological characteristics: a cross-sectional analytical study

    Directory of Open Access Journals (Sweden)

    Mary Hokazono

    Full Text Available CONTEXT AND OBJECTIVE: Transcranial Doppler (TCD) detects stroke risk among children with sickle cell anemia (SCA). Our aim was to evaluate TCD findings in patients with different sickle cell disease (SCD) genotypes and correlate the time-averaged maximum mean (TAMM) velocity with hematological characteristics. DESIGN AND SETTING: Cross-sectional analytical study in the Pediatric Hematology sector, Universidade Federal de São Paulo. METHODS: 85 SCD patients of both sexes, aged 2-18 years, were evaluated, divided into: group I (62 patients with SCA/Sβ0 thalassemia) and group II (23 patients with SC hemoglobinopathy/Sβ+ thalassemia). TCD was performed and reviewed by a single investigator using Doppler ultrasonography with a 2 MHz transducer, in accordance with the Stroke Prevention Trial in Sickle Cell Anemia (STOP) protocol. The hematological parameters evaluated were: hematocrit, hemoglobin, reticulocytes, leukocytes, platelets and fetal hemoglobin. Univariate analysis was performed and Pearson's coefficient was calculated for hematological parameters and TAMM velocities (P < 0.05). RESULTS: TAMM velocities were 137 ± 28 and 103 ± 19 cm/s in groups I and II, respectively, and correlated negatively with hematocrit and hemoglobin in group I. There was one abnormal result (1.6%) and five conditional results (8.1%) in group I. All results were normal in group II. Middle cerebral arteries were the only vessels affected. CONCLUSION: There was a low prevalence of abnormal Doppler results in patients with sickle cell disease. Time-averaged maximum mean velocity was significantly different between the genotypes and correlated with hematological characteristics.

  8. A step by step selection method for the location and the size of a waste-to-energy facility targeting the maximum output energy and minimization of gate fee.

    Science.gov (United States)

    Kyriakis, Efstathios; Psomopoulos, Constantinos; Kokkotis, Panagiotis; Bourtsalas, Athanasios; Themelis, Nikolaos

    2017-06-23

    This study develops an algorithm that presents a step-by-step selection method for the location and the size of a waste-to-energy facility targeting the maximum output energy, while also considering the basic obstacle, which is in many cases the gate fee. Various parameters were identified and evaluated in order to formulate the proposed decision-making method in the form of an algorithm. The principal simulation input is the amount of municipal solid waste (MSW) available for incineration, which, along with its net calorific value, is the most important factor for the feasibility of the plant. Moreover, the research focuses both on the parameters that could increase the energy production and on those that affect the R1 energy efficiency factor. Estimation of the final gate fee is achieved through an economic analysis of the entire project, investigating both the expenses and the revenues expected according to the selected site and the outputs of the facility. At this point, a number of common revenue methods were included in the algorithm. The developed algorithm has been validated using three case studies in Greece: Athens, Thessaloniki, and Central Greece, where the cities of Larisa and Volos were selected for the application of the proposed decision-making tool. These case studies were selected based on a previous publication by two of the authors, in which these areas were examined. Results reveal that the development of a «solid» methodological approach to selecting the site and the size of a waste-to-energy (WtE) facility is feasible. However, maximization of the energy efficiency factor R1 requires high utilization factors, while minimization of the final gate fee requires a high R1 and high metals recovery from the bottom ash, as well as economic exploitation of any recovered raw materials.
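
    The R1 energy efficiency factor mentioned above is commonly stated, in the EU Waste Framework Directive, as R1 = (Ep - (Ef + Ei)) / (0.97 x (Ew + Ef)). The sketch below treats that form, the 0.65 qualification threshold, and the plant figures as assumptions for illustration; they are not values from the study.

      def r1_factor(ep_gj, ef_gj, ew_gj, ei_gj):
          """R1 factor as commonly quoted from the EU Waste Framework Directive (assumed form):
          Ep = energy produced (electricity weighted x2.6, heat x1.1), Ef = fuel energy contributing
          to steam, Ew = energy in the treated waste, Ei = imported energy excluding Ew and Ef."""
          return (ep_gj - (ef_gj + ei_gj)) / (0.97 * (ew_gj + ef_gj))

      # Illustrative annual energy balance for a WtE plant, in GJ/yr (invented numbers).
      ep = 2.6 * 180_000 + 1.1 * 350_000   # weighted electricity + heat produced
      ef = 30_000                          # fuel energy contributing to steam production
      ew = 1_100_000                       # energy content of the incinerated waste
      ei = 20_000                          # imported energy excluding Ew and Ef

      print(f"R1 = {r1_factor(ep, ef, ew, ei):.2f}")   # ~0.73; 0.65 is the commonly cited threshold for newer plants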

  9. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  10. Book Trade Research and Statistics. Prices of U.S. and Foreign Published Materials; Book Title Output and Average Prices: 2000 Final and 2001 Preliminary Figures; Book Sales Statistics, 2001: AAP Preliminary Estimates; U.S. Book Exports and Imports: 2001; Number of Book Outlets in the United States and Canada; Review Media Statistics.

    Science.gov (United States)

    Sullivan, Sharon G.; Barr, Catherine; Grabois, Andrew

    2002-01-01

    Includes six articles that report on prices of U.S. and foreign published materials; book title output and average prices; book sales statistics; book exports and imports; book outlets in the U.S. and Canada; and review media statistics. (LRW)

  11. Book Trade Research and Statistics. Prices of U.S. and Foreign Published Materials; Book Title Output and Average Prices: 2001 Final and 2002 Preliminary Figures; Book Sales Statistics, 2002: AAP Preliminary Estimates; U.S. Book Exports and Imports:2002; Number of Book Outlets in the United States and Canada; Review Media Statistics.

    Science.gov (United States)

    Sullivan, Sharon G.; Grabois, Andrew; Greco, Albert N.

    2003-01-01

    Includes six reports related to book trade statistics, including prices of U.S. and foreign materials; book title output and average prices; book sales statistics; book exports and imports; book outlets in the U.S. and Canada; and numbers of books and other media reviewed by major reviewing publications. (LRW)

  12. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit simpler, less bulky, consumes less power, costs less, and avoids storage and analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.

  13. Determinants of mobile phone output power in a multinational study: implications for exposure assessment

    DEFF Research Database (Denmark)

    Vrijheid, M; Madsen, Stine Mann; di Vecchia, Paolo

    2009-01-01

    OBJECTIVES: The output power of a mobile phone is directly related to its radiofrequency (RF) electromagnetic field strength, and may theoretically vary substantially in different networks and phone use circumstances due to power control technologies. To improve indices of RF exposure for epidemiological studies, ... on the average output power and the percentage call time at maximum power for each call. RESULTS: Measurements of over 60,000 phone calls showed that the average output power was approximately 50% of the maximum, and that output power varied by a factor of up to 2 to 3 between study centres and network operators...

  14. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  15. Notifiable events in systems for fission of nuclear fuels - nuclear power plants and research reactors with maximum output exceeding 50 kW of thermal normal rating - in the Federal Republic of Germany. Quarterly report, 2nd quarter of 1996

    International Nuclear Information System (INIS)

    1996-01-01

    There were 32 notifiable events in nuclear power plants in Germany in the second quarter of 1996. The report lists and characterises all the 32 events notified in the reporting period. The events did not involve any radioactivity release exceeding the maximum permissible limits during this period, so that there were no radiation hazards to the population or the environment. One event was classified at level 1 of the INES event scale (Anomaly). Research reactor operators in Germany reported 5 notifiable events in the reporting period. The report lists and characterises these events. These events did not involve any radioactivity release exceeding the maximum permissible limits during this period, so that there were no radiation hazards to the population or the environment. All events notified were classified into the lowest categories of safety significance of the official event scales (N, or below scale). (orig./DG) [de]

  16. System for memorizing maximum values

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1992-08-01

    The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn triggers an output signal on one of a plurality of driver output lines n. The particular output line selected is dependent on the converted digital value. A microfuse memory device connects across the driver output lines, with n segments. Each segment is associated with one driver output line, and includes a microfuse that is blown when a signal appears on the associated driver output line.

  17. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost effective, have higher reliability, and improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small rating Remote Area Power Supply systems. The advantages are even greater for larger temperature variations and higher power rated systems. Other advantages include optimal sizing and system monitoring and control
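
    The optimized hill-climbing scheme described above can be sketched as a perturb-and-observe loop: nudge the converter duty cycle, keep the direction while the measured battery charging current improves, and reverse it when the current drops. The sketch below is hardware-agnostic; the measurement and control callbacks, the duty limits, and the toy panel curve are assumptions, not the controller from the paper.

      import math

      def perturb_and_observe(read_charge_current, set_duty, duty=0.5, step=0.01, iterations=1000):
          """Hill-climbing MPPT: perturb the duty cycle and keep moving in whichever direction
          increases the charging current (a proxy for output power at a near-constant battery voltage)."""
          direction = +1
          last_current = read_charge_current()
          for _ in range(iterations):
              duty = min(max(duty + direction * step, 0.05), 0.95)   # perturb, staying within safe limits
              set_duty(duty)
              current = read_charge_current()
              if current < last_current:       # the move made things worse, so reverse direction
                  direction = -direction
              last_current = current
          return duty

      # Toy stand-in for a real converter: a current-vs-duty curve that peaks at duty = 0.62.
      state = {"d": 0.5}
      read = lambda: math.exp(-((state["d"] - 0.62) ** 2) / 0.02)
      write = lambda d: state.update(d=d)
      print(round(perturb_and_observe(read, write), 2))   # settles near 0.62, oscillating by one step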

  18. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Full Text Available Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal to noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal to noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of the one or the other method of averaging depends on the particular case because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because of the fact that in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found on which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than to create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
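
    The linear-versus-logarithmic contrast above is Jensen's inequality at work: exponentiating the mean of the logarithms gives the geometric mean, which always sits at or below the arithmetic mean, and the gap grows with the relative variability. The toy below uses synthetic lognormal "retrievals" rather than real trace gas profiles.

      import numpy as np

      rng = np.random.default_rng(1)
      sigma = 0.8                                   # large relative (local natural) variability
      true_mean = 10.0
      # Synthetic abundances whose arithmetic expectation equals true_mean.
      x = true_mean * rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma, size=100_000)

      linear_avg = x.mean()                         # averaging the abundances themselves
      log_avg = np.exp(np.log(x).mean())            # exponentiated mean of the log-retrievals (geometric mean)

      print(f"linear average: {linear_avg:.2f}")    # close to 10
      print(f"log average:    {log_avg:.2f}")       # low by roughly a factor exp(-sigma^2/2), here ~27%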

  19. Output power distributions of terminals in a 3G mobile communication network.

    Science.gov (United States)

    Persson, Tomas; Törnevik, Christer; Larsson, Lars-Eric; Lovén, Jan

    2012-05-01

    The objective of this study was to examine the distribution of the output power of mobile phones and other terminals connected to a 3G network in Sweden. It is well known that 3G terminals can operate with very low output power, particularly for voice calls. Measurements of terminal output power were conducted in the Swedish TeliaSonera 3G network in November 2008 by recording network statistics. In the analysis, discrimination was made between rural, suburban, urban, and dedicated indoor networks. In addition, information about terminal output power was possible to collect separately for voice and data traffic. Information from six different Radio Network Controllers (RNCs) was collected during at least 1 week. In total, more than 800000 h of voice calls were collected and in addition to that a substantial amount of data traffic. The average terminal output power for 3G voice calls was below 1 mW for any environment including rural, urban, and dedicated indoor networks. This is <1% of the maximum available output power. For data applications the average output power was about 6-8 dB higher than for voice calls. For rural areas the output power was about 2 dB higher, on average, than in urban areas. Copyright © 2011 Wiley Periodicals, Inc.
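
    The decibel differences quoted above translate into power ratios via 10^(dB/10), so the 6-8 dB gap between data and voice corresponds to roughly a 4- to 6-fold higher output power, and the 2 dB rural-urban gap to about a factor of 1.6. A one-line check of that arithmetic:

      for db in (2, 6, 8):
          print(f"{db} dB -> x{10 ** (db / 10):.1f} in power")   # 2 dB ~ x1.6, 6 dB ~ x4.0, 8 dB ~ x6.3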

  20. Output factors and scatter ratios

    Energy Technology Data Exchange (ETDEWEB)

    Shrivastava, P N; Summers, R E; Samulski, T V; Baird, L C [Allegheny General Hospital, Pittsburgh, PA (USA); Ahuja, A S; Dubuque, G L; Hendee, W R; Chhabra, A S

    1979-07-01

    Reference is made to a previous publication on output factors and scatter ratios for radiotherapy units in which it was suggested that the output factor should be included in the definitions of scatter-air ratio and tissue-maximum ratio. In the present correspondence from other authors and from the authors of the previous publication, the original definitions and the proposed changes are discussed. Radiation scatter from the source and collimator, degradation of beam energy, and calculation of dose in tissue are considered in relation to the objective of accurate dosimetry.

  1. Unit 16 - Output

    OpenAIRE

    Unit 16, CC in GIS; Star, Jeffrey L.

    1990-01-01

    This unit discusses issues related to GIS output, including the different types of output possible and the hardware for producing each. It describes text, graphic and digital data that can be generated by a GIS as well as line printers, dot matrix printers/plotters, pen plotters, optical scanners and cathode ray tubes (CRTs) as technologies for generating the output.

  2. Maximum power analysis of photovoltaic module in Ramadi city

    Energy Technology Data Exchange (ETDEWEB)

    Shahatha Salim, Majid; Mohammed Najim, Jassim [College of Science, University of Anbar (Iraq); Mohammed Salih, Salih [Renewable Energy Research Center, University of Anbar (Iraq)

    2013-07-01

    Performance of a photovoltaic (PV) module is greatly dependent on the solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output of a PV module and its energy yield. In this paper, the maximum PV power which can be obtained in Ramadi city (100 km west of Baghdad) is practically analyzed. The analysis is based on real irradiance values obtained for the first time by using a Soly2 sun tracker device. Proper and adequate information on solar radiation and its components at a given location is very essential in the design of solar energy systems. The solar irradiance data in Ramadi city were analyzed based on the first three months of 2013. The solar irradiance data were measured on the earth's surface in the campus area of Anbar University. Actual average data readings were taken from the data logger of the sun tracker system, which was set to save an average reading every two minutes based on readings taken each second. The data are analyzed from January to the end of March 2013. Maximum daily readings and monthly average readings of solar irradiance have been analyzed to optimize the output of photovoltaic solar modules. The results show that the system sizing of PV can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of the PV modules.

  3. LOAD THAT MAXIMIZES POWER OUTPUT IN COUNTERMOVEMENT JUMP

    Directory of Open Access Journals (Sweden)

    Pedro Jimenez-Reyes

    2016-02-01

    Full Text Available ABSTRACT Introduction: One of the main problems faced by strength and conditioning coaches is the issue of how to objectively quantify and monitor the actual training load undertaken by athletes in order to maximize performance. It is well known that performance of explosive sports activities is largely determined by mechanical power. Objective: This study analysed the height at which maximal power output is generated and the corresponding load with which it is achieved in a group of trained male track and field athletes in the countermovement jump (CMJ) test with extra loads (CMJEL). Methods: Fifty national level male athletes in sprinting and jumping performed a CMJ test with increasing loads up to a height of 16 cm. The relative load that maximized the mechanical power output (Pmax) was determined using a force platform synchronized with a linear encoder, estimating the power by peak power, average power and flight time in the CMJ. Results: The load that maximized power output corresponded to a jump height of 19.9 ± 2.35 cm, representing 99.1 ± 1% of the maximum power output. The load that maximizes power output was in all cases the load with which an athlete jumps a height of approximately 20 cm. Conclusion: These results highlight the importance of considering the height achieved in the CMJ with extra load instead of power, because maximum power is always attained at the same height. We advise the preferential use of the height achieved in the CMJEL test, since it seems to be a valid indicator of an individual's actual neuromuscular potential, providing valid information for coaches and trainers when assessing the performance status of athletes and when quantifying and monitoring training loads, measuring only the height of the jump in the CMJEL exercise.

  4. Cut-off Grade Optimization for Maximizing the Output Rate

    Directory of Open Access Journals (Sweden)

    A. Khodayari

    2012-12-01

    Full Text Available In open-pit mining, one of the first decisions that must be made in the production planning stage, after completing the design of the final pit limits, is determining the processing plant cut-off grade. Since this grade has an essential effect on operations, choosing the optimum cut-off grade is of considerable importance. Different goals may be used for determining the optimum cut-off grade. One of these goals may be maximizing the output rate (amount of product per year), which is very important, especially from marketing and market share points of view. The objective of this research is to determine the optimum cut-off grade of the processing plant in order to maximize the output rate. For performing this optimization, an Operations Research (OR) model has been developed. The objective function of this model is the output rate, which must be maximized. The model has two operational constraints, namely mining and processing restrictions. For solving the model a heuristic method has been developed. Results of the research show that the optimum cut-off grade for satisfying the pre-stated goal is the balancing grade of the mining and processing operations, and that the maximum production rate is a function of the maximum capacity of the processing plant and the average grade of ore that, according to the above optimum cut-off grade, must be sent to the plant.
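
    The balancing-grade result above can be made concrete with a toy grade-tonnage model: ore sent to the plant is capped either by the plant capacity or by the ore the mine exposes while moving material at its own capacity, and the output rate peaks where the two limits meet. Everything in the sketch (the exponential grade-tonnage curve, the capacities) is invented for illustration.

      import numpy as np

      # Invented grade-tonnage model: tonnes above a cut-off fall off exponentially with the cut-off grade (%).
      cutoffs = np.linspace(0.2, 1.2, 101)
      total_material = 50e6
      tonnes_above = total_material * np.exp(-2.2 * cutoffs)
      avg_grade_above = cutoffs + 1 / 2.2          # mean grade above the cut-off for this exponential model

      mining_capacity = 12e6      # t/yr of total material the mine can move
      processing_capacity = 4e6   # t/yr of ore the plant can treat

      # Ore reaching the plant: limited by the plant itself, or by the ore fraction exposed at full mining rate.
      ore_per_year = np.minimum(processing_capacity, mining_capacity * tonnes_above / total_material)
      metal_per_year = ore_per_year * avg_grade_above / 100.0     # output rate (t of metal/yr, recovery ignored)

      best = np.argmax(metal_per_year)
      print(f"optimum (balancing) cut-off ~ {cutoffs[best]:.2f}%, output ~ {metal_per_year[best]:,.0f} t/yr")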

  5. Input-output supervisor

    International Nuclear Information System (INIS)

    Dupuy, R.

    1970-01-01

    The input-output supervisor is the program which monitors the flow of information between core storage and the peripheral equipment of a computer. This work is composed of three parts: 1 - Study of a generalized input-output supervisor. With simple modifications it resembles most of the input-output supervisors currently running on computers. 2 - Application of this theory to a magnetic drum. 3 - Hardware requirements for time-sharing. (author) [fr]

  6. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  7. Output hardcopy devices

    CERN Document Server

    Durbeck, Robert

    1988-01-01

    Output Hardcopy Devices provides a technical summary of computer output hardcopy devices such as plotters, computer output printers, and CRT generated hardcopy. Important related technical areas such as papers, ribbons and inks, color techniques, controllers, and character fonts are also covered. Emphasis is on techniques primarily associated with printing, as well as the plotting capabilities of printing devices that can be effectively used for computer graphics in addition to their various printing functions. Comprised of 19 chapters, this volume begins with an introduction to vector and ras

  8. WRF Model Output

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset contains WRF model output. There are three months of data: July 2012, July 2013, and January 2013. For each month, several simulations were made: A...

  9. VMS forms Output Tables

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These output tables contain parsed and format validated data from the various VMS forms that are sent from any given vessel, while at sea, from the VMS devices on...

  10. Governmentally amplified output volatility

    Science.gov (United States)

    Funashima, Yoshito

    2016-11-01

    Predominant government behavior is decomposed by frequency into several periodic components: updating cycles of infrastructure, Kuznets cycles, fiscal policy over business cycles, and election cycles. Little is known, however, about the theoretical impact of such cyclical behavior in public finance on output fluctuations. Based on a standard neoclassical growth model, this study intends to examine the frequency at which public investment cycles are relevant to output fluctuations. We find an inverted U-shaped relationship between output volatility and length of cycle in public investment. This implies that periodic behavior in public investment at a certain frequency range can cause aggravated output resonance. Moreover, we present an empirical analysis to test the theoretical implication, using the U.S. data in the period from 1968 to 2015. The empirical results suggest that such resonance phenomena change from low to high frequency.

  11. CMAQ Model Output

    Data.gov (United States)

    U.S. Environmental Protection Agency — CMAQ and CMAQ-VBS model output. This dataset is not publicly accessible because: Files too large. It can be accessed through the following means: via EPA's NCC tape...

  12. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity (the voltage of maximum power, the current of maximum power, and the maximum power) is plotted as a function of the time of day.
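
    The differentiation step described above, solving dP/dV = 0 on the panel's I-V curve, can be reproduced numerically with a single-diode model; the diode parameters below are generic textbook values, not measurements from the project.

      import numpy as np

      # Single-diode PV cell model (illustrative parameters): I = Iph - I0*(exp(V/(n*Vt)) - 1)
      i_ph, i_0, n, v_t = 5.0, 1e-9, 1.3, 0.02585   # photocurrent (A), saturation current (A), ideality factor, thermal voltage (V)

      v = np.linspace(0.0, 0.75, 2000)
      i = i_ph - i_0 * (np.exp(v / (n * v_t)) - 1.0)
      p = v * i

      # dP/dV = 0 at the maximum power point; here the maximum is simply located on the sampled curve.
      k = np.argmax(p)
      print(f"V_mp ~ {v[k]:.2f} V, I_mp ~ {i[k]:.2f} A, P_max ~ {p[k]:.2f} W")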

  13. Real-time prediction models for output power and efficiency of grid-connected solar photovoltaic systems

    International Nuclear Information System (INIS)

    Su, Yan; Chan, Lai-Cheong; Shu, Lianjie; Tsui, Kwok-Leung

    2012-01-01

    Highlights: ► We develop online prediction models for solar photovoltaic system performance. ► The proposed prediction models are simple but with reasonable accuracy. ► The maximum monthly average minutely efficiency varies from 10.81% to 12.63%. ► The average efficiency tends to be slightly higher in winter months. - Abstract: This paper develops new real time prediction models for the output power and energy efficiency of solar photovoltaic (PV) systems. These models were validated using measured data of a grid-connected solar PV system in Macau. Both time frames based on yearly average and monthly average are considered. It is shown that the prediction model for the yearly/monthly average of the minutely output power fits the measured data very well with a high value of R². The online prediction model for system efficiency is based on the ratio of the predicted output power to the predicted solar irradiance. This ratio model is shown to fit the intermediate phase (9 am to 4 pm) very well but is not accurate for the growth and decay phases, where the system efficiency is near zero. However, it can still serve a useful purpose for practitioners, as most PV systems work in the most efficient manner over this period. It is shown that the maximum monthly average minutely efficiency varies over a small range of 10.81% to 12.63% in different months, with slightly higher efficiency in winter months.

  14. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calcuated from the lake polygon...

  15. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  16. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  17. Oil output's changing fortunes

    International Nuclear Information System (INIS)

    Eldridge, D.

    1994-01-01

    The Petroleum Economist, previously the Petroleum Press Service, has been making annual surveys of output levels of petroleum in all the oil-producing countries since its founding in 1934. This article documents trends and changes in the major oil-producing countries' output from 1934 until the present. This analysis is linked with the political and historical events accompanying these changes, notably the growth of Middle Eastern oil production, the North Sea finds and most recently, Iraq's invasion of Kuwait in 1990. (UK)

  18. Cardiac output measurement

    Directory of Open Access Journals (Sweden)

    Andreja Möller Petrun

    2014-02-01

    Full Text Available In recent years, developments in the measurement of cardiac output and other haemodynamic variables have focused on the so-called minimally invasive methods. The aim of these methods is to simplify the management of high-risk and haemodynamically unstable patients. Due to the need for an invasive approach and the possibility of serious complications, the use of the pulmonary artery catheter has decreased. This article describes the methods for measuring cardiac output, which are based on volume measurement (the Fick method, the indicator dilution method, pulse wave analysis, the Doppler effect, and electrical bioimpedance).
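
    The Fick method listed above rests on a simple mass balance: cardiac output equals oxygen uptake divided by the arteriovenous oxygen content difference. A worked sketch with typical resting values (illustrative numbers, not patient data):

      def cardiac_output_fick(vo2_ml_min, cao2_ml_dl, cvo2_ml_dl):
          """Fick principle: Q (L/min) = VO2 / (CaO2 - CvO2), with oxygen contents given per dL of blood."""
          avdo2_ml_per_l = (cao2_ml_dl - cvo2_ml_dl) * 10.0    # convert mL O2 per dL to mL O2 per L
          return vo2_ml_min / avdo2_ml_per_l

      # Typical resting values: VO2 ~ 250 mL/min, arterial content ~ 20 mL/dL, mixed venous ~ 15 mL/dL.
      print(f"Q = {cardiac_output_fick(250, 20, 15):.1f} L/min")   # about 5 L/min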

  19. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a nonlinear manifold. It is shown that the common approaches can be viewed as approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
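
    The barycenter estimate mentioned above amounts to fixing the sign ambiguity (q and -q encode the same rotation), averaging the quaternion components, and renormalizing back onto the unit sphere, which is exactly the step the paper relates to approximations of the Riemannian mean. A minimal sketch, assuming unit quaternions in (w, x, y, z) order:

      import numpy as np

      def average_quaternions(quats):
          """Naive barycenter average of unit quaternions: align signs against the first quaternion,
          take the componentwise mean, and renormalize to obtain a unit quaternion again."""
          quats = np.asarray(quats, dtype=float)
          signs = np.where(quats @ quats[0] < 0.0, -1.0, 1.0)   # q and -q represent the same rotation
          mean = (quats * signs[:, None]).mean(axis=0)
          return mean / np.linalg.norm(mean)

      # Three small rotations about the z-axis, written as unit quaternions (w, x, y, z).
      angles = [0.1, 0.2, 0.3]
      quats = [[np.cos(a / 2), 0.0, 0.0, np.sin(a / 2)] for a in angles]
      print(average_quaternions(quats))   # close to the quaternion of a 0.2 rad rotation about z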

  20. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  1. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  2. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Jul 3, 2014 ... Role of Positive Definite Matrices. • Diffusion Tensor Imaging: 3 × 3 pd matrices model water flow at each voxel of brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine Learning: n × n pd matrices occur as kernel matrices. Tanvi Jain. Averaging operations on matrices ...

  3. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.

  4. High Output Piezo/Triboelectric Hybrid Generator

    Science.gov (United States)

    Jung, Woo-Suk; Kang, Min-Gyu; Moon, Hi Gyu; Baek, Seung-Hyub; Yoon, Seok-Jin; Wang, Zhong-Lin; Kim, Sang-Woo; Kang, Chong-Yun

    2015-03-01

    Recently, piezoelectric and triboelectric energy harvesting devices have been developed to convert mechanical energy into electrical energy. In particular, it is well known that triboelectric nanogenerators have a simple structure and a high output voltage. However, whereas nanostructures improve the output of triboelectric generators, their fabrication process is still complicated and unfavorable in terms of large-scale production and the long-term durability of the device. Here, we demonstrate a hybrid generator which does not use nanostructures but generates much higher output power from a small mechanical force by integrating a piezoelectric generator into a triboelectric generator, deriving from the simultaneous use of piezoelectric and triboelectric mechanisms in one press-and-release cycle. This hybrid generator combines a high piezoelectric output current and a high triboelectric output voltage, producing a peak output voltage of ~370 V, a current density of ~12 μA·cm⁻², and an average power density of ~4.44 mW·cm⁻². The output power successfully lit up 600 LED bulbs under the application of a 0.2 N mechanical force, and it charged a 10 μF capacitor to 10 V in 25 s. Beyond energy harvesting, this work will provide new opportunities for developing a small, built-in power source in self-powered electronics such as mobile electronics.

  5. High Output Piezo/Triboelectric Hybrid Generator

    Science.gov (United States)

    Jung, Woo-Suk; Kang, Min-Gyu; Moon, Hi Gyu; Baek, Seung-Hyub; Yoon, Seok-Jin; Wang, Zhong-Lin; Kim, Sang-Woo; Kang, Chong-Yun

    2015-01-01

    Recently, piezoelectric and triboelectric energy harvesting devices have been developed to convert mechanical energy into electrical energy. In particular, it is well known that triboelectric nanogenerators have a simple structure and a high output voltage. However, whereas nanostructures improve the output of triboelectric generators, their fabrication process is still complicated and unfavorable in terms of large-scale production and the long-term durability of the device. Here, we demonstrate a hybrid generator which does not use nanostructures but generates much higher output power from a small mechanical force by integrating a piezoelectric generator into a triboelectric generator, deriving from the simultaneous use of piezoelectric and triboelectric mechanisms in one press-and-release cycle. This hybrid generator combines a high piezoelectric output current and a high triboelectric output voltage, producing a peak output voltage of ~370 V, a current density of ~12 μA·cm⁻², and an average power density of ~4.44 mW·cm⁻². The output power successfully lit up 600 LED bulbs under the application of a 0.2 N mechanical force, and it charged a 10 μF capacitor to 10 V in 25 s. Beyond energy harvesting, this work will provide new opportunities for developing a small, built-in power source in self-powered electronics such as mobile electronics. PMID:25791299
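
    The capacitor-charging figure quoted above implies a small but concrete average harvested power: the energy stored in a capacitor is ½CV², and dividing by the charging time gives the mean power delivered to it. A quick check of that arithmetic:

      c = 10e-6      # capacitance (F)
      v = 10.0       # final voltage (V)
      t = 25.0       # charging time (s)

      energy_j = 0.5 * c * v**2          # 0.5 mJ stored in the capacitor
      avg_power_w = energy_j / t         # about 20 microwatts of average delivered power
      print(f"stored energy = {energy_j * 1e3:.2f} mJ, average power = {avg_power_w * 1e6:.0f} uW")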

  6. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a nonlinear manifold. It is shown that the common approaches can be viewed as approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.

  7. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve which comprises a massive bulk of 'middle-class' values, and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super large values; and (iv) "Average is Over" indeed.

  8. Average nuclear surface properties

    International Nuclear Information System (INIS)

    Groote, H. von.

    1979-01-01

    The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended also to the case that there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction of this model were determined by a least squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry, beyond the neutron-drip line, to the point where the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which then was fitted to experimental masses. (orig.)

  9. Americans' Average Radiation Exposure

    International Nuclear Information System (INIS)

    2000-01-01

    We live with radiation every day. We receive radiation exposures from cosmic rays, from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  10. Comparison of x-ray output of inverter-type x-ray equipment

    International Nuclear Information System (INIS)

    Asano, Hiroshi; Miyake, Hiroyuki; Yamamoto, Keiichi

    2000-01-01

    The x-ray output of 54 inverter-type x-ray apparatuses used at 18 institutions was investigated. The reproducibility and linearity of x-ray output and variations among the x-ray equipment were evaluated using the same fluorescence meter. In addition, the x-ray apparatuses were re-measured using the same non-invasive instrument to check for variations in tube voltage, tube current, and irradiation time. The non-invasive instrument was calibrated by simultaneously obtaining measurements with an invasive instrument, employing the tube voltage and current used for the invasive instrument, and the difference was calculated. Reproducibility of x-ray output was satisfactory for all x-ray apparatuses. The coefficient of variation was 0.04 or less for irradiation times of 5 ms or longer. In 84.3% of all x-ray equipment, variation in the linearity of x-ray output was 15% or less for an irradiation time of 5 ms. However, for all the apparatuses, the figure was 50% when irradiation time was the shortest (1 to 3 ms). Variation in x-ray output increased as irradiation time decreased. The ratio of the maximum to the minimum x-ray output ranged between 1.8 and 2.5, excluding the values obtained at the shortest irradiation time. The relative standard deviation ranged from ±15.5% to ±21.0%. The largest variation in x-ray output was confirmed in regions irradiated for the shortest time, with smaller variations observed for longer irradiation times. The major factor responsible for variation in x-ray output in regions irradiated for 10 ms or longer, which is a relatively long irradiation time, was variation in tube current. Variation in tube current was slightly greater than 30% at maximum, with an average value of 7% compared with the preset tube current. Variations in x-ray output in regions irradiated for the shortest time were due to photographic effects related to the rise and fall times of the tube voltage waveform. Accordingly, in order to obtain constant x
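
    The reproducibility figure cited above is a coefficient of variation: the standard deviation of repeated output readings divided by their mean. A short illustration with invented repeated exposures:

      import numpy as np

      # Hypothetical repeated output readings (arbitrary dose units) at one fixed exposure setting.
      readings = np.array([102.0, 99.5, 101.2, 100.4, 98.9, 100.7])

      cv = readings.std(ddof=1) / readings.mean()
      print(f"coefficient of variation = {cv:.3f}")   # 0.04 or less meets the criterion reported above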

  11. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  12. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some. [it]

  13. A New MPPT Control for Photovoltaic Panels by Instantaneous Maximum Power Point Tracking

    Science.gov (United States)

    Tokushima, Daiki; Uchida, Masato; Kanbei, Satoshi; Ishikawa, Hiroki; Naitoh, Haruo

    This paper presents a new maximum power point tracking control for photovoltaic (PV) panels. The control can be categorized into the Perturb and Observe (P & O) method. It utilizes instantaneous voltage ripples at PV panel output terminals caused by the switching of a chopper connected to the panel in order to identify the direction for the maximum power point (MPP). The tracking for the MPP is achieved by a feedback control of the average terminal voltage of the panel. Appropriate use of the instantaneous and the average values of the PV voltage for the separate purposes enables both the quick transient response and the good convergence with almost no ripples simultaneously. The tracking capability is verified experimentally with a 2.8 W PV panel under a controlled experimental setup. A numerical comparison with a conventional P & O confirms that the proposed control extracts much more power from the PV panel.

  14. Maximum Power Point Tracking (MPPT) in a Wind Power Generation System Using a Buck-Boost Converter

    Directory of Open Access Journals (Sweden)

    Muhamad Otong

    2017-05-01

    Full Text Available In this paper, the implementation of the Maximum Power Point Tracking (MPPT) technique is developed using a buck-boost converter. The perturb and observe (P&O) MPPT algorithm is used to search for the maximum power from the wind power plant for charging the battery. The model used in this study is a Variable Speed Wind Turbine (VSWT) with a Permanent Magnet Synchronous Generator (PMSG). Analysis, design, and modeling of the wind energy conversion system have been done using MATLAB/Simulink. The simulation results show that the proposed MPPT produces a higher output power than the system without MPPT. The average efficiency that can be achieved by the proposed system in transferring the maximum power into the battery is 90.56%.

  15. Output characteristics of Stirling thermoacoustic engine

    International Nuclear Information System (INIS)

    Sun Daming; Qiu Limin; Wang Bo; Xiao Yong; Zhao Liang

    2008-01-01

    A thermoacoustic engine (TE), which converts thermal energy into acoustic power by the thermoacoustic effect, shows several advantages due to the absence of moving parts, such as high reliability and long lifetime associated with reduced manufacturing costs. Power output and efficiency are important criteria of the performance of a TE. In order to increase the acoustic power output and thermal efficiency of a Stirling TE, the acoustic power distribution in the engine is studied with the variable load method. It is found that the thermal efficiency is independent of the output locations along the engine under the same acoustic power output. Furthermore, when the pressure ratio is kept constant at one location along the TE, it is beneficial to increasing the thermal efficiency by exporting more acoustic power. With nitrogen of 2.5 MPa as working gas and the pressure ratio at the compliance of 1.20 in the experiments, the acoustic power is measured at the compliance and the resonator simultaneously. The maximum power output, thermal efficiency and exergy efficiency reach 390.0 W, 11.2% and 16.0%, which are increased by 51.4%, 24.4% and 19.4%, respectively, compared to those with a single R-C load with 750 ml reservoir at the compliance. This research will be instructive for increasing the efficiency and making full use of the acoustic energy of a TE

  16. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

    The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally, in Section 5, the beam transported through the matching section and injected into Linac-1 is discussed.

  17. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  18. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

    The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation

  19. Cardiac output during exercise

    DEFF Research Database (Denmark)

    Siebenmann, C; Rasmussen, P.; Sørensen, H.

    2015-01-01

    Several techniques assessing cardiac output (Q) during exercise are available. The extent to which the measurements obtained from each respective technique compare to one another, however, is unclear. We quantified Q simultaneously using four methods: the Fick method with blood obtained from...... the right atrium (Q(Fick-M)), Innocor (inert gas rebreathing; Q(Inn)), Physioflow (impedance cardiography; Q(Phys)), and Nexfin (pulse contour analysis; Q(Pulse)) in 12 male subjects during incremental cycling exercise to exhaustion in normoxia and hypoxia (FiO2 = 12%). While all four methods reported...... a progressive increase in Q with exercise intensity, the slopes of the Q/oxygen uptake (VO2) relationship differed by up to 50% between methods in both normoxia [4.9 ± 0.3, 3.9 ± 0.2, 6.0 ± 0.4, 4.8 ± 0.2 L/min per L/min (mean ± SE) for Q(Fick-M), Q(Inn), Q(Phys) and Q(Pulse), respectively; P = 0...
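    For orientation, the direct Fick method referenced above rests on a simple oxygen mass balance; the equation and the illustrative resting-value numbers below are textbook material, not data from this study.

```latex
% Fick principle: cardiac output from oxygen uptake and the arteriovenous
% oxygen content difference (illustrative resting values only).
\[
  Q_{\mathrm{Fick}} \;=\; \frac{\dot{V}\mathrm{O}_2}{C_a\mathrm{O}_2 - C_v\mathrm{O}_2},
  \qquad\text{e.g.}\qquad
  Q \;=\; \frac{250\ \mathrm{mL\,O_2/min}}{(200-150)\ \mathrm{mL\,O_2/L}} \;=\; 5\ \mathrm{L/min}.
\]
```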

  20. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1 B (US). Large-scale (many m2) processing of materials requires the economical production of laser powers in the tens of kilowatts; such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  1. Inverter communications using output signal

    Science.gov (United States)

    Chapman, Patrick L.

    2017-02-07

    Technologies for communicating information from an inverter configured for the conversion of direct current (DC) power generated from an alternative source to alternating current (AC) power are disclosed. The technologies include determining information to be transmitted from the inverter over a power line cable connected to the inverter and controlling the operation of an output converter of the inverter as a function of the information to be transmitted to cause the output converter to generate an output waveform having the information modulated thereon.
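    The patent abstract above does not specify the modulation scheme; the following sketch is a generic illustration (not the patented method) of how digital information can be superimposed on an AC output waveform, here by keying a small high-frequency ripple between two hypothetical tone frequencies.

```python
import numpy as np

# Illustrative only: encode bits by keying a small ripple tone on top of a
# 60 Hz AC output waveform (simple frequency-shift keying, hypothetical values).
fs = 20_000                      # sample rate, Hz
bit_rate = 100                   # bits per second
bits = [1, 0, 1, 1, 0]
f_grid, f0, f1 = 60.0, 1000.0, 1200.0   # AC frequency and tone frequencies for 0/1
ripple = 0.02                    # modulation depth relative to the AC amplitude

samples_per_bit = fs // bit_rate
t = np.arange(len(bits) * samples_per_bit) / fs
ac = np.sin(2 * np.pi * f_grid * t)

tone = np.concatenate([
    np.sin(2 * np.pi * (f1 if b else f0) * t[i * samples_per_bit:(i + 1) * samples_per_bit])
    for i, b in enumerate(bits)
])
output = ac + ripple * tone      # output waveform with the information modulated on it
print(output.shape)
```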

  2. Maximum entropy methods

    International Nuclear Information System (INIS)

    Ponman, T.J.

    1984-01-01

    For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)

  3. Characterization of the electrical output of flat-plate photovoltaic arrays

    Science.gov (United States)

    Gonzalez, C. C.; Hill, G. M.; Ross, R. G., Jr.

    1982-01-01

    The electric output of flat-plate photovoltaic arrays changes constantly, due primarily to changes in cell temperature and irradiance level. As a result, array loads such as direct-current to alternating-current power conditioners must be able to accommodate widely varying input levels, while maintaining operation at or near the array maximum power point. The results of an extensive computer simulation study that was used to define the parameters necessary for the systematic design of array/power-conditioner interfaces are presented as normalized ratios of power-conditioner parameters to array parameters, to make the results universally applicable to a wide variety of system sizes, sites, and operating modes. The advantages of maximum power tracking and a technique for computing average annual power-conditioner efficiency are discussed.
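    The abstract refers to a technique for computing average annual power-conditioner efficiency without spelling it out; one common approach, sketched below with purely hypothetical efficiency-curve and operating-hour data, is to weight the efficiency at each power level by the energy processed at that level.

```python
import numpy as np

# Hypothetical efficiency curve of a power conditioner vs. fraction of rated power,
# and a hypothetical annual distribution of array output over the same bins.
power_fraction = np.array([0.1, 0.3, 0.5, 0.75, 1.0])
efficiency     = np.array([0.86, 0.92, 0.94, 0.95, 0.94])
hours_per_year = np.array([900, 1100, 800, 500, 200])    # time spent in each bin

energy_in = power_fraction * hours_per_year               # relative energy input per bin
avg_eff = np.sum(efficiency * energy_in) / np.sum(energy_in)
print(f"energy-weighted average annual efficiency: {avg_eff:.3f}")
```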

  4. The last glacial maximum

    Science.gov (United States)

    Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.

    2009-01-01

    We used 5704 ¹⁴C, ¹⁰Be, and ³He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.

  5. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow-up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
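    For readers who want the Mean Energy Model made explicit, the following compact derivation (standard material, not specific to this paper) shows how maximizing the Shannon entropy under a mean-energy constraint yields the Gibbs form.

```latex
% Maximize H(p) = -\sum_i p_i \ln p_i subject to \sum_i p_i = 1 and
% \sum_i p_i E_i = \bar{E}.  Introduce Lagrange multipliers \lambda and \beta:
\[
  \mathcal{L} = -\sum_i p_i \ln p_i - \lambda\Big(\sum_i p_i - 1\Big)
                - \beta\Big(\sum_i p_i E_i - \bar{E}\Big),
  \qquad
  \frac{\partial \mathcal{L}}{\partial p_i} = -\ln p_i - 1 - \lambda - \beta E_i = 0,
\]
\[
  \Longrightarrow\quad
  p_i = \frac{e^{-\beta E_i}}{Z(\beta)},
  \qquad
  Z(\beta) = \sum_i e^{-\beta E_i},
  \qquad
  \bar{E} = -\,\frac{\partial \ln Z}{\partial \beta}.
\]
```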

  6. Probable maximum flood control

    International Nuclear Information System (INIS)

    DeGabriele, C.E.; Wu, C.L.

    1991-11-01

    This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility

  7. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1988-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  8. Solar maximum observatory

    International Nuclear Information System (INIS)

    Rust, D.M.

    1984-01-01

    The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references

  9. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1989-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  10. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA (ramsay97) to functional maximum autocorrelation factors (MAF) (switzer85; larsen2001d). We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between ... MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Conclusions. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially ...
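    As a concrete, non-functional illustration of the MAF idea described above, the sketch below computes classical maximum autocorrelation factors by solving the generalized eigenproblem of the lag-one difference covariance against the data covariance; the toy data and the eigh-based solution are assumptions of this sketch, not the smoothing-spline functional method of the paper.

```python
import numpy as np
from scipy.linalg import eigh

def maf(X):
    """Classical maximum autocorrelation factors for a (time x variables) matrix X.

    Solve the generalized eigenproblem S_d w = lam S w, where S is the covariance
    of X and S_d the covariance of lag-one differences; the lag-one autocorrelation
    of each factor is 1 - lam/2, so small eigenvalues give the smoothest factors.
    """
    Xc = X - X.mean(axis=0)
    D = Xc[1:] - Xc[:-1]                  # lag-one differences
    S = np.cov(Xc, rowvar=False)
    S_d = np.cov(D, rowvar=False)
    lam, W = eigh(S_d, S)                 # eigenvalues in ascending order
    autocorr = 1.0 - lam / 2.0
    return Xc @ W, autocorr               # factors and their autocorrelations

# toy example: a smooth signal plus white noise in three channels
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 400)
X = np.column_stack([np.sin(2 * np.pi * 3 * t)] * 3) + 0.3 * rng.normal(size=(400, 3))
factors, rho = maf(X)
print(rho)   # the first factor carries the highest autocorrelation
```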

  11. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
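    To make the objective concrete, the sketch below trains a linear predictor under a regularized correntropy criterion with plain gradient ascent; this is an illustrative stand-in, not the alternating optimization algorithm of the paper, and all hyperparameters and toy data are assumptions.

```python
import numpy as np

def train_mcc_linear(X, y, sigma=1.0, lam=0.1, lr=0.05, epochs=500):
    """Maximize sum_i exp(-(y_i - w.x_i)^2 / (2 sigma^2)) - lam * ||w||^2
    for a linear predictor w, by plain gradient ascent (illustrative only)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        r = y - X @ w                                     # residuals
        k = np.exp(-r**2 / (2 * sigma**2))                # Gaussian kernel weights
        grad = (X.T @ (k * r)) / sigma**2 - 2 * lam * w   # gradient of the objective
        w += lr * grad / n
    return w

# toy data with a few flipped (noisy) labels
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = np.sign(X @ w_true)
y[:10] *= -1                                              # label noise
print(np.round(train_mcc_linear(X, y), 2))
```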

  12. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.

  13. Enhanced performance CCD output amplifier

    Science.gov (United States)

    Dunham, Mark E.; Morley, David W.

    1996-01-01

    A low-noise FET amplifier is connected to amplify the output charge from a charge-coupled device (CCD). The FET has its gate connected to the CCD in common source configuration for receiving the output charge signal from the CCD and outputting an intermediate signal at the drain of the FET. An intermediate amplifier is connected to the drain of the FET for receiving the intermediate signal and outputting a low-noise signal functionally related to the output charge signal from the CCD. The amplifier is preferably connected as a virtual ground to the FET drain. The inherent shunt capacitance of the FET is selected to be at least equal to the sum of the remaining capacitances.

  14. High Average Power Raman Conversion in Diamond: ’Eyesafe’ Output and Fiber Laser Conversion

    Science.gov (United States)

    2015-06-19

    O. Kitzler and R.P. Mildren, Laser & Photonics Reviews, vol. 8, L37–L41 (2014); O. Kitzler, A. McKay, D.J. Spence and R.P. Mildren, "Modelling and Optimization of Continuous-Wave External Cavity Raman Lasers"

  15. Synchronously pumped optical parametric oscillation in periodically poled lithium niobate with 1-W average output power

    NARCIS (Netherlands)

    Graf, T.; McConnell, G.; Ferguson, A.I.; Bente, E.A.J.M.; Burns, D.; Dawson, M.D.

    1999-01-01

    We report on a rugged all-solid-state laser source of near-IR radiation in the range of 1461–1601 nm based on a high-power Nd:YVO4 laser that is mode locked by a semiconductor saturable Bragg reflector as the pump source of a synchronously pumped optical parametric oscillator with a periodically poled lithium niobate crystal.

  16. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
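    Written out, the stated result reads as follows (notation chosen here for illustration: weights w and v, ratio r = w/v, with moments taken under the v-weighting).

```latex
% Difference between two weighted averages of x, using weights w and v.
\[
  \bar{x}_w - \bar{x}_v
  = \frac{\sum_i w_i x_i}{\sum_i w_i} - \frac{\sum_i v_i x_i}{\sum_i v_i}
  = \frac{\operatorname{Cov}_v(x, r)}{\operatorname{E}_v[r]},
  \qquad r_i = \frac{w_i}{v_i},
\]
% since E_v[x r] = \sum_i w_i x_i / \sum_i v_i and E_v[r] = \sum_i w_i / \sum_i v_i.
```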

  17. GDP Growth, Potential Output, and Output Gaps in Mexico

    OpenAIRE

    Ebrima A Faal

    2005-01-01

    This paper analyzes the sources of Mexico's economic growth since the 1960s and compares various decompositions of historical growth into its trend and cyclical components. The role of the implied output gaps in the inflationary process is then assessed. Looking ahead, the paper presents medium-term paths for GDP based on alternative assumptions for productivity growth rates. The results indicate that the most important factor underlying the slowdown in output growth was a decline in trend to...

  18. Solar maximum mission

    International Nuclear Information System (INIS)

    Ryan, J.

    1981-01-01

    By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments

  19. Alberta’s Changing Industrial Structure: Implications for Output and Income Volatility

    Directory of Open Access Journals (Sweden)

    Bev Dahlby

    2018-02-01

    ... 16 other sectors in Alberta are linked to the same boom-bust cycle as the oil and gas sector. The more important diversification issue in the province is not output volatility, but the volatility of labour income. In the last 20 years, labour income has become increasingly concentrated in Alberta’s two most volatile sectors, oil and gas extraction and construction. As a result, volatility of aggregate labour income in Alberta increased by 40 per cent during that period. Rather than trying to change Alberta’s industrial mix by subsidizing industries that may only contribute to more volatility of economic output, a more sensible government approach would be to adopt policies that address the problem of labour income volatility. That would include finding ways to expand unemployment insurance for Alberta workers, as the current federal government policy actually provides fewer supports to unemployed Albertans than it does to residents of other regions. Average weekly earnings of Albertans were 20 per cent higher than national average weekly earnings over the 2012 to 2016 period. However, maximum annual insurable earnings under EI are determined based on national average weekly earnings. Higher-wage earners should have the opportunity to enrol in a voluntary supplemental EI program, and if the federal government does not want to provide it, the provincial government could. Additionally, the government can promote self-insurance among workers by expanding tax-sheltered savings products, like tax-free savings accounts, so workers can accumulate back-up funds when labour incomes are high, to help sustain them during downturns. Finally, the provincial government needs to abandon its procyclical spending patterns. That means spending less money when oil revenues are high, to avoid exacerbating labour and material shortages, and maintaining spending, rather than forced cutbacks, during downturns in the economy. That, of course, would require a great deal more political

  20. Maximum Likelihood, Consistency and Data Envelopment Analysis: A Statistical Foundation

    OpenAIRE

    Rajiv D. Banker

    1993-01-01

    This paper provides a formal statistical basis for the efficiency evaluation techniques of data envelopment analysis (DEA). DEA estimators of the best practice monotone increasing and concave production function are shown to be also maximum likelihood estimators if the deviation of actual output from the efficient output is regarded as a stochastic variable with a monotone decreasing probability density function. While the best practice frontier estimator is biased below the theoretical front...

  1. Maximum likelihood Bayesian averaging of airflow models in unsaturated fractured tuff using Occam and variance windows

    NARCIS (Netherlands)

    Morales-Casique, E.; Neuman, S.P.; Vesselinov, V.V.

    2010-01-01

    We use log permeability and porosity data obtained from single-hole pneumatic packer tests in six boreholes drilled into unsaturated fractured tuff near Superior, Arizona, to postulate, calibrate and compare five alternative variogram models (exponential, exponential with linear drift, power,

  2. Characterizing the effects of cell settling on bioprinter output

    International Nuclear Information System (INIS)

    Pepper, Matthew E; Burg, Timothy C; Burg, Karen J L; Groff, Richard E; Seshadri, Vidya

    2012-01-01

    The time variation in bioprinter output, i.e. the number of cells per printed drop, was studied over the length of a typical printing experiment. This variation impacts the cell population size of bioprinted samples, which should ideally be consistent. The variation in output was specifically studied in the context of cell settling. The bioprinter studied is based on the thermal inkjet HP26A cartridge; however, the results are relevant to other cell delivery systems that draw fluid from a reservoir. A simple mathematical model suggests that the cell concentration in the bottom of the reservoir should increase linearly over time, up to some maximum, and that the cell output should be proportional to this concentration. Two studies were performed in which D1 murine stem cells and similarly sized polystyrene latex beads were printed. The bead output profiles were consistent with the model. The cell output profiles initially followed the increasing trend predicted by the settling model, but after several minutes the cell output peaked and then decreased. The decrease in cell output was found to be associated with the number of use cycles the cartridge had experienced. The differing results for beads and cells suggest that a biological process, such as adhesion, causes the decrease in cell output. Further work will be required to identify the exact process. (communication)
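    The "simple mathematical model" is not reproduced in the abstract; the sketch below merely encodes its stated qualitative behaviour (concentration rising linearly with time up to a maximum, drop output proportional to concentration) with purely hypothetical constants.

```python
import numpy as np

# Illustrative settling model: cell concentration at the reservoir bottom rises
# linearly with time up to a ceiling, and the number of cells per printed drop
# is proportional to that concentration.  All constants are hypothetical.
c0, rate, c_max = 1.0, 0.2, 3.0        # initial conc., rise per minute, ceiling
k_drop = 50.0                          # cells per drop per unit concentration

t = np.arange(0, 30, 5)                # minutes of printing
concentration = np.minimum(c0 + rate * t, c_max)
cells_per_drop = k_drop * concentration
print(dict(zip(t.tolist(), cells_per_drop.tolist())))
```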

  3. Output

    DEFF Research Database (Denmark)

    Mehlsen, Camilla

    2010-01-01

    What do we actually get out of international comparative surveys such as PISA, PIRLS and TIMSS? How do they influence Danish education policy? Asterisk has talked to three researchers with expertise in this area.

  4. A Monte Carlo study on multiple output stochastic frontiers

    DEFF Research Database (Denmark)

    Henningsen, Geraldine; Henningsen, Arne; Jensen, Uwe

    2015-01-01

    , dividing all other output quantities by the selected output quantity, and using these ratios as regressors (OD). Another approach is the stochastic ray production frontier (SR), which transforms the output quantities into their Euclidean distance as the dependent variable and their polar coordinates...... of the approaches is clearly superior. However, considerable differences are found between the estimates at single replications. Taking average efficiencies from both approaches gives clearly better efficiency estimates than taking just the OD or the SR. In the case of zero values in the output quantities, the SR...

  5. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

    We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7. ...

  6. Appropriate spatial scales to achieve model output uncertainty goals

    NARCIS (Netherlands)

    Booij, Martijn J.; Melching, Charles S.; Chen, Xiaohong; Chen, Yongqin; Xia, Jun; Zhang, Hailun

    2008-01-01

    Appropriate spatial scales of hydrological variables were determined using an existing methodology based on a balance in uncertainties from model inputs and parameters extended with a criterion based on a maximum model output uncertainty. The original methodology uses different relationships between

  7. Performance study of highly efficient 520 W average power long pulse ceramic Nd:YAG rod laser

    Science.gov (United States)

    Choubey, Ambar; Vishwakarma, S. C.; Ali, Sabir; Jain, R. K.; Upadhyaya, B. N.; Oak, S. M.

    2013-10-01

    We report the performance study of a 2% atomic doped ceramic Nd:YAG rod for long pulse laser operation in the millisecond regime with pulse duration in the range of 0.5-20 ms. A maximum average output power of 520 W with 180 J maximum pulse energy has been achieved with a slope efficiency of 5.4% using a dual rod configuration, which is the highest for typical lamp pumped ceramic Nd:YAG lasers. The laser output characteristics of the ceramic Nd:YAG rod were found to be nearly equivalent or superior to those of a high-quality single-crystal Nd:YAG rod. The laser pump chamber and resonator were designed and optimized to achieve a high efficiency and good beam quality with a beam parameter product of 16 mm mrad (M² ≈ 47). The laser output beam was efficiently coupled through a 400 μm core diameter optical fiber with 90% overall transmission efficiency. This ceramic Nd:YAG laser will be useful for various material processing applications in industry.
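    As a quick consistency check on the quoted beam quality, the M² factor follows from the beam parameter product via M² = π·BPP/λ at the standard 1064 nm Nd:YAG wavelength.

```python
import math

# M^2 from the beam parameter product BPP = M^2 * lambda / pi.
bpp = 16e-3 * 1e-3        # 16 mm*mrad expressed in m*rad
wavelength = 1064e-9      # Nd:YAG emission wavelength in m
m_squared = math.pi * bpp / wavelength
print(f"M^2 ~ {m_squared:.0f}")   # ~47, matching the value quoted above
```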

  8. A maximum power point tracking for photovoltaic-SPE system using a maximum current controller

    Energy Technology Data Exchange (ETDEWEB)

    Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)

    2003-02-01

    Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative control of maximum power point tracking (MPPT) in the PV-SPE system based on the maximum current searching method has been designed and implemented. Based on the voltage-current characteristics and a theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side will simultaneously track the maximum power point of the photovoltaic panel. This method uses a proportional-integral (PI) controller to control the duty factor of the DC-DC converter with a pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)
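    The abstract describes a PI controller that adjusts the converter duty factor so that the output current, and hence the PV power, is maximized. The sketch below captures the maximum-current-searching idea with a simple perturb-and-observe hill climb on a hypothetical current-versus-duty curve; it is a stand-in for, not a reproduction of, the authors' PI implementation.

```python
def output_current(duty):
    """Hypothetical DC-DC converter output current vs. duty cycle (peaks at 0.62)."""
    return 8.0 - 30.0 * (duty - 0.62) ** 2

duty, step = 0.40, 0.02
i_prev = output_current(duty)
for _ in range(100):
    duty = min(max(duty + step, 0.05), 0.95)   # perturb the duty cycle
    i_now = output_current(duty)
    if i_now < i_prev:                         # output current fell:
        step = -0.9 * step                     # reverse and shrink the perturbation
    i_prev = i_now
print(f"duty cycle ~ {duty:.2f}, current ~ {output_current(duty):.2f} A")
```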

  9. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler- α equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  10. Pareto Principle in Datamining: an Above-Average Fencing Algorithm

    Directory of Open Access Journals (Sweden)

    K. Macek

    2008-01-01

    Full Text Available This paper formulates a new datamining problem: which subset of input space has the relatively highest output where the minimal size of this subset is given. This can be useful where usual datamining methods fail because of error distribution asymmetry. The paper provides a novel algorithm for this datamining problem, and compares it with clustering of above-average individuals.

  11. Redesign lifts prep output 288%

    Energy Technology Data Exchange (ETDEWEB)

    Hamric, J

    1987-02-01

    This paper outlines the application of engineering creativity and how it brought output at an Ohio coal preparation plant up from 12,500 tpd to nearly four times that figure, 48,610 tpd. By streamlining the conveyor systems, removing surplus belt length and repositioning subplants the whole operation was able to run far more efficiently with a greater output. Various other alterations including the raw material supply and management and operating practices were also undertaken to provide a test for the achievements possible with such reorganization. The new developments have been in the following fields: fine coal cleaning, heavy media cyclones, feeders, bins, filter presses, dewatering equipment and settling tanks. Output is now limited only by the reduced demand by the Gavin power station nearby.

  12. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant dynamical effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis

  13. Performance and stress analysis of oxide thermoelectric module architecture designed for maximum power output

    DEFF Research Database (Denmark)

    Wijesekara, Waruna; Rosendahl, Lasse; Wu, NingYu

    Oxide thermoelectric materials are promising candidates for energy harvesting from mid to high temperature heat sources. In this work, the oxide thermoelectric materials and the final design of the high temperature thermoelectric module were developed. Also, prototypes of an oxide thermoelectric generator were built for high temperature applications. This paper specifically discusses the thermoelectric module design and the prototype validations of the design. Here p-type calcium cobalt oxide and n-type aluminum doped ZnO were developed as the oxide thermoelectric materials. For real thermoelectric uni-couples, the three-dimensional governing equations for the coupled heat transfer and thermoelectric effects were developed. Finite element simulations of this system were done using the COMSOL Multiphysics solver. Prototypes of the models were developed and the analytical ... Hot side and cold side ...

  14. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  15. High-throughput machining using high average power ultrashort pulse lasers and ultrafast polygon scanner

    Science.gov (United States)

    Schille, Joerg; Schneider, Lutz; Streek, André; Kloetzer, Sascha; Loeschner, Udo

    2016-03-01

    In this paper, high-throughput ultrashort pulse laser machining is investigated on various industrial grade metals (Aluminium, Copper, Stainless steel) and Al2O3 ceramic at unprecedented processing speeds. This is achieved by using a high pulse repetition frequency picosecond laser with a maximum average output power of 270 W in conjunction with a unique, in-house developed two-axis polygon scanner. Initially, different concepts of polygon scanners are engineered and tested to find out the optimal architecture for ultrafast and precision laser beam scanning. A remarkable scan speed of 1,000 m/s is achieved on the substrate, and thanks to the resulting low pulse overlap, thermal accumulation and plasma absorption effects are avoided at pulse repetition frequencies of up to 20 MHz. In order to identify optimum processing conditions for efficient high-average-power laser machining, the depths of cavities produced under varied parameter settings are analyzed and, from the results obtained, the characteristic removal values are specified. The maximum removal rate is as high as 27.8 mm3/min for Aluminium, 21.4 mm3/min for Copper, 15.3 mm3/min for Stainless steel and 129.1 mm3/min for Al2O3 when the full available laser power is applied at the optimum pulse repetition frequency.

  16. Thin disk laser with unstable resonator and reduced output coupler

    Science.gov (United States)

    Gavili, Anwar; Shayganmanesh, Mahdi

    2018-05-01

    In this paper, the feasibility of using an unstable resonator with reduced output coupling in a thin disk laser is studied theoretically. The unstable resonator is modeled by wave optics using the Collins integral and an iterative method. An Yb:YAG crystal with 250 micron thickness is considered as a quasi-three-level active medium and modeled by solving the rate equations of the energy level populations. The amplification of the laser beam in the active medium is calculated based on the Beer-Lambert law and the Rigrod method. Using the generalized beam parameters method, laser beam parameters such as width, divergence, M2 factor and output power, as well as near- and far-field beam profiles, are calculated for the unstable resonator. It is demonstrated that for a thin disk laser (with a single disk), in spite of the low thickness of the disk, which leads to a low gain factor, it is possible to use an unstable resonator (with reduced output coupling) and achieve good output power with appropriate beam quality. Also, the behavior of output power and beam quality versus equivalent Fresnel number is investigated, and the optimized value of output coupling for maximum output power is obtained.
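    Independently of the wave-optics model used in the paper, the existence of an optimum output coupling can be illustrated with the textbook space-independent Rigrod expression; the gain and loss numbers below are assumptions, not the thin-disk parameters of the study.

```python
import numpy as np

# Illustrative only: space-independent Rigrod expression for the output power of
# a laser with single-pass small-signal gain g0*l, round-trip internal loss L_i
# and output-coupler transmission T.  All numbers are assumptions.
g0l = 0.12        # single-pass small-signal gain (dimensionless)
L_i = 0.02        # round-trip internal loss
P_s = 1.0         # saturation power scale (A * I_sat), arbitrary units

T = np.linspace(1e-4, 0.4, 2000)
P_out = P_s * (T / 2.0) * (2.0 * g0l / (L_i + T) - 1.0)

T_opt = T[np.argmax(P_out)]
print(f"optimum output coupling ~ {T_opt:.3f}")
print(f"analytic check sqrt(2*g0l*L_i) - L_i = {np.sqrt(2 * g0l * L_i) - L_i:.3f}")
```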

  17. World Input-Output Network.

    Directory of Open Access Journals (Sweden)

    Federica Cerina

    Full Text Available Production systems, traditionally analyzed as almost independent national systems, are increasingly connected on a global scale. Only recently becoming available, the World Input-Output Database (WIOD) is one of the first efforts to construct the global multi-regional input-output (GMRIO) tables. By viewing the world input-output system as an interdependent network where the nodes are the individual industries in different economies and the edges are the monetary goods flows between industries, we analyze respectively the global, regional, and local network properties of the so-called world input-output network (WION) and document its evolution over time. At the global level, we find that the industries are highly but asymmetrically connected, which implies that micro shocks can lead to macro fluctuations. At the regional level, we find that world production is still operated nationally or at most regionally, as the communities detected are either individual economies or geographically well-defined regions. Finally, at the local level, for each industry we compare the network-based measures with the traditional methods of backward linkages. We find that network-based measures such as PageRank centrality and the community coreness measure can give valuable insights into identifying the key industries.
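    As an illustration of the network-based measures mentioned above, the sketch below computes weighted PageRank centrality on a toy directed industry network; the node names and monetary flows are made up, not WIOD data.

```python
import networkx as nx

# Toy input-output network: nodes are industry-economy pairs, edge weights are
# hypothetical monetary flows between them.
flows = [
    ("DE.machinery", "CN.electronics", 30.0),
    ("CN.electronics", "US.retail", 55.0),
    ("US.agriculture", "US.retail", 10.0),
    ("CN.electronics", "DE.machinery", 12.0),
    ("US.retail", "US.agriculture", 4.0),
]
G = nx.DiGraph()
G.add_weighted_edges_from(flows)

# Weighted PageRank as one network-based measure of industry importance.
pr = nx.pagerank(G, alpha=0.85, weight="weight")
for node, score in sorted(pr.items(), key=lambda kv: -kv[1]):
    print(f"{node:16s} {score:.3f}")
```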

  18. Remote input/output station

    CERN Multimedia

    1972-01-01

    A general view of the remote input/output station installed in building 112 (ISR) and used for submitting jobs to the CDC 6500 and 6600. The card reader on the left and the line printer on the right are operated by programmers on a self-service basis.

  19. Compact Circuit Preprocesses Accelerometer Output

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1993-01-01

    Compact electronic circuit transfers dc power to, and preprocesses ac output of, accelerometer and associated preamplifier. Incorporated into accelerometer case during initial fabrication or retrofit onto commercial accelerometer. Made of commercial integrated circuits and other conventional components; made smaller by use of micrologic and surface-mount technology.

  20. SIMULATION OF NEW SIMPLE FUZZY LOGIC MAXIMUM POWER ...

    African Journals Online (AJOL)

    2010-06-30

    Jun 30, 2010 ... Basic structure of the photovoltaic system; solar array mathematical model. The equivalent circuit model of a solar cell consists of a current generator and a diode. Fuzzy logic control of the boost converter (tracker) is applied such that maximum power is achieved at the output of the solar panel. (Figure captions only: membership function of the input.)

  1. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables

  2. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary...... Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain....

  3. Statistical downscaling of CMIP5 outputs for projecting future changes in rainfall in the Onkaparinga catchment

    Energy Technology Data Exchange (ETDEWEB)

    Rashid, Md. Mamunur, E-mail: mdmamunur.rashid@mymail.unisa.edu.au [Centre for Water Management and Reuse, School of Natural and Built Environments, University of South Australia, Mawson Lakes, SA 5095 (Australia); Beecham, Simon, E-mail: simon.beecham@unisa.edu.au [Centre for Water Management and Reuse, School of Natural and Built Environments, University of South Australia, Mawson Lakes, SA 5095 (Australia); Chowdhury, Rezaul K., E-mail: rezaulkabir@uaeu.ac.ae [Centre for Water Management and Reuse, School of Natural and Built Environments, University of South Australia, Mawson Lakes, SA 5095 (Australia); Department of Civil and Environmental Engineering, United Arab Emirates University, Al Ain, PO Box 15551 (United Arab Emirates)

    2015-10-15

    A generalized linear model was fitted to stochastically downscaled multi-site daily rainfall projections from CMIP5 General Circulation Models (GCMs) for the Onkaparinga catchment in South Australia to assess future changes to hydrologically relevant metrics. For this purpose three GCMs, two multi-model ensembles (one by averaging the predictors of GCMs and the other by regressing the predictors of GCMs against reanalysis datasets) and two scenarios (RCP4.5 and RCP8.5) were considered. The downscaling model was able to reasonably reproduce the observed historical rainfall statistics when the model was driven by NCEP reanalysis datasets. Significant bias was observed in the rainfall when downscaled from historical outputs of GCMs. Bias was corrected using the Frequency Adapted Quantile Mapping technique. Future changes in rainfall were computed from the bias corrected downscaled rainfall forced by GCM outputs for the period 2041–2060 and these were then compared to the base period 1961–2000. The results show that annual and seasonal rainfalls are likely to significantly decrease for all models and scenarios in the future. The number of dry days and maximum consecutive dry days will increase whereas the number of wet days and maximum consecutive wet days will decrease. Future changes of daily rainfall occurrence sequences combined with a reduction in rainfall amounts will lead to a drier catchment, thereby reducing the runoff potential. Because this is a catchment that is a significant source of Adelaide's water supply, irrigation water and water for maintaining environmental flows, an effective climate change adaptation strategy is needed in order to face future potential water shortages. - Highlights: • A generalized linear model was used for multi-site daily rainfall downscaling. • Rainfall was downscaled from CMIP5 GCM outputs. • Two multi-model ensemble approaches were used. • Bias was corrected using the Frequency Adapted Quantile Mapping
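    The study used the Frequency Adapted Quantile Mapping technique; as a simpler illustration of the underlying idea, the sketch below applies plain empirical quantile mapping to synthetic daily rainfall amounts (all distributions here are hypothetical).

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future, n_q=100):
    """Plain empirical quantile mapping: each future model value is mapped to the
    observed value at the same quantile of the historical distributions.  This is
    an illustration of the general idea, not the Frequency Adapted variant."""
    q = np.linspace(0.0, 1.0, n_q)
    model_q = np.quantile(model_hist, q)
    obs_q = np.quantile(obs_hist, q)
    # locate each future value in the model's historical distribution,
    # then read off the observed value at that quantile
    future_q = np.interp(model_future, model_q, q)
    return np.interp(future_q, q, obs_q)

# toy daily rainfall amounts (mm): a model that is biased dry gets corrected
rng = np.random.default_rng(3)
obs = rng.gamma(shape=0.8, scale=6.0, size=5000)
gcm_hist = rng.gamma(shape=0.8, scale=4.0, size=5000)
gcm_future = rng.gamma(shape=0.8, scale=3.5, size=2000)
corrected = quantile_map(gcm_hist, obs, gcm_future)
print(gcm_future.mean(), corrected.mean())
```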

  4. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  5. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources is briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  6. High Average Power, High Energy Short Pulse Fiber Laser System

    Energy Technology Data Exchange (ETDEWEB)

    Messerly, M J

    2007-11-13

    Recently, continuous wave fiber laser systems with output powers in excess of 500 W with good beam quality have been demonstrated [1]. High energy, ultrafast, chirped pulse fiber laser systems have achieved record output energies of 1 mJ [2]. However, these high-energy systems have not been scaled beyond a few watts of average output power. Fiber laser systems are attractive for many applications because they offer the promise of high efficiency, compact, robust systems that are turnkey. Applications such as cutting, drilling and materials processing, front-end systems for high energy pulsed lasers (such as petawatts), and laser-based sources of high spatial coherence, high flux x-rays all require high energy short pulses, and two of the three of these applications also require high average power. The challenge in creating a high energy chirped pulse fiber laser system is to find a way to scale the output energy while avoiding nonlinear effects and maintaining good beam quality in the amplifier fiber. To this end, our 3-year LDRD program sought to demonstrate a high energy, high average power fiber laser system. This work included exploring designs of large mode area optical fiber amplifiers for high energy systems as well as understanding the issues associated with chirped pulse amplification in optical fiber amplifier systems.

  7. Science policy through stimulating scholarly output. Reanalyzing the Australian case

    Energy Technology Data Exchange (ETDEWEB)

    Van den Besselaar, P.; Heyman, U.; Sandström, U.

    2016-07-01

    There is a long-standing debate about perverse effects of performance indicators. A main target is science policy that uses the stimulation of output as an instrument. The criticism is to a large extent based on a study of the Australian science policy in the early 1990s. Linda Butler studied the effects and argued that the result was a growth of output, but also a decrease in the average quality of the output. These results have been cited many times. In this paper we reanalyze this case and show that Butler's analysis was wrong: the new Australian science policy not only increased the output of the system, but also raised its quality. We discuss the implications. (Author)

  8. UFO - The Universal FEYNRULES Output

    Science.gov (United States)

    Degrande, Céline; Duhr, Claude; Fuks, Benjamin; Grellscheid, David; Mattelaer, Olivier; Reiter, Thomas

    2012-06-01

    We present a new model format for automatized matrix-element generators, the so-called Universal FEYNRULES Output (UFO). The format is universal in the sense that it features compatibility with more than one single generator and is designed to be flexible, modular and agnostic of any assumption such as the number of particles or the color and Lorentz structures appearing in the interaction vertices. Unlike other model formats where text files need to be parsed, the information on the model is encoded into a PYTHON module that can easily be linked to other computer codes. We then describe an interface for the MATHEMATICA package FEYNRULES that allows for an automatic output of models in the UFO format.

  9. Quality, precision and accuracy of the maximum No. 40 anemometer

    Energy Technology Data Exchange (ETDEWEB)

    Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.

  10. Bibliometrics analysis of publication output in library and information ...

    African Journals Online (AJOL)

    The Web of Science was used as the indexing/citation database in the study. The findings of the study revealed an increasing trend in annual publication output in LIS research in Nigerian universities, which indicates that there is progress in the development of LIS research in Nigeria. It was found that, typically and on the average, ...

  11. Aggregate Supply and Potential Output

    OpenAIRE

    Razin, Assaf

    2004-01-01

    The New-Keynesian aggregate supply derives from micro-foundations an inflation-dynamics model very much like the tradition in the monetary literature. Inflation is primarily affected by: (i) economic slack; (ii) expectations; (iii) supply shocks; and (iv) inflation persistence. This paper extends the New Keynesian aggregate supply relationship to include also fluctuations in potential output, as an additional determinant of the relationship. Implications for monetary rules and to the estimati...

  12. Technique for enhancing the power output of an electrostatic generator employing parametric resonance

    Science.gov (United States)

    Post, Richard F.

    2016-02-23

    A circuit-based technique enhances the power output of electrostatic generators employing an array of axially oriented rods or tubes or azimuthal corrugated metal surfaces for their electrodes. During generator operation, the peak voltage across the electrodes occurs at an azimuthal position that is intermediate between the position of minimum gap and maximum gap. If this position is also close to the azimuthal angle where the rate of change of capacity is a maximum, then the highest rf power output possible for a given maximum allowable voltage at the minimum gap can be attained. This rf power output is then coupled to the generator load through a coupling condenser that prevents suppression of the dc charging potential by conduction through the load. Optimized circuit values produce phase shifts in the rf output voltage that allow higher power output to occur at the same voltage limit at the minimum gap position.

  13. World crude output overcomes Persian Gulf disruption

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Several OPEC producers made good on their promises to replace 2.7 MMbpd of oil exports that vanished from the world market after Iraq took over Kuwait. Even more incredibly, they accomplished this while a breathtaking 1.2- MMbopd reduction in Soviet output took place during the course of 1991. After Abu Dhabi, Indonesia, Iran, Libya, Nigeria, Saudi Arabia and Venezuela turned the taps wide open, their combined output rose 2.95 MMbopd. Put together with a 282,000-bopd increase by Norway and contributions from smaller producers, this enabled world oil production to remain within 400,000 bopd of its 1990 level. The 60.5-MMbopd average was off by just 0.7%. This paper reports that improvement took place in five of eight regions. Largest increases were in Western Europe and Africa. Greatest reductions occurred in Eastern Europe and the Middle East. Fifteen nations produced 1 MMbopd or more last year, compared with 17 during 1990

  14. Guaranteeing high output of a mine

    Energy Technology Data Exchange (ETDEWEB)

    Shetser, M G

    1983-05-01

    Operation of the Im. Kalinina coal mine in the Central Donbass is evaluated. Seventeen coal seams, on average 0.87 m thick, are prone to methane and coal dust explosions and to rock bursts. Some of the seams are also prone to spontaneous combustion. Rock layers in the roofs are prone to rock falls. Mining depth ranges from 740 to 850 m. Another working level is being constructed at a depth of 960 m. The steep coal seams are mined by means of the ANShch shield systems and the KGU system (with the 'Poisk' cutter loader). Strata control methods used in the mine are evaluated. The design of timber cribbings used for strata control in inclined workings is shown in a scheme. Construction of coal chutes and strata control in coal chutes are also described. Operation of the KGU-1 powered supports, which have been used in the mine for 10 years, is evaluated. Improved strata control permitted daily coal output from a working face to be increased from 135 t in 1979 to 169 t in 1982. Yearly coal output increased from 605,000 t to 760,000 t. Labor productivity increased from 21.1 t/month to 25.9 t/month per miner. (In Russian)

  15. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  16. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs are graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
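
    As a hedged illustration of the kind of recursion the abstract describes, the sketch below implements a first-order ARMA graph filter whose steady state realizes a rational graph-frequency response; the coefficients, the normalized-Laplacian-based shift operator and the random test graph are assumptions for the example, not the authors' exact design.

      import numpy as np

      # Illustrative first-order ARMA graph filter: y[t+1] = psi * S @ y[t] + phi * x.
      # Its steady state implements the graph-frequency response phi / (1 - psi * lambda).
      def arma1_graph_filter(S, x, psi, phi, iters=200):
          y = np.zeros_like(x)
          for _ in range(iters):        # distributed iteration: each step only exchanges S @ y locally
              y = psi * (S @ y) + phi * x
          return y

      # Small random undirected graph (assumed purely for the demo).
      rng = np.random.default_rng(0)
      A = np.triu(rng.integers(0, 2, size=(20, 20)), 1)
      A = A + A.T                                        # symmetric adjacency, no self-loops
      d = A.sum(axis=1); d[d == 0] = 1
      L = np.eye(20) - A / np.sqrt(np.outer(d, d))       # normalized Laplacian, spectrum in [0, 2]
      S = L - np.eye(20)                                 # shifted operator with spectrum in [-1, 1]

      x = rng.normal(size=20)                            # graph signal to be filtered
      y = arma1_graph_filter(S, x, psi=0.5, phi=0.5)     # converges since |psi| * ||S|| < 1
      print(y[:5])

    The recursion converges because the shifted operator has spectral radius at most one and |psi| < 1; different (psi, phi) pairs trace out different rational frequency responses, which is the approximation property the abstract refers to.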

  17. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^−6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^−8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models.

  18. A weak current amplifier and output circuit used in nuclear weighing scales

    International Nuclear Information System (INIS)

    Sun Jinhua; Zheng Mingquan; Wang Mingqian; Jia Changchun; Jin Hanjuan; Shi Qicun; Tang Ke

    1998-01-01

    A weak current amplifier and output circuit with a maximum nonlinear error of ±0.06% has been developed. Experiments show that it can work stably and therefore be used in nuclear industrial instruments.

  19. Zipf's law, power laws and maximum entropy

    International Nuclear Information System (INIS)

    Visser, Matt

    2013-01-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
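
    As a worked illustration of the argument summarized above (the notation below is mine, not necessarily Visser's), maximizing the Shannon entropy subject only to normalization and a fixed average of the logarithm of the observable yields a pure power law:

      \max_{p}\ -\sum_x p(x)\ln p(x)
      \quad \text{subject to} \quad
      \sum_x p(x) = 1, \qquad \sum_x p(x)\ln x = \chi .

      \mathcal{L} = -\sum_x p(x)\ln p(x)
                    + \alpha\Big(1-\sum_x p(x)\Big)
                    + \beta\Big(\chi-\sum_x p(x)\ln x\Big),
      \qquad
      \frac{\partial \mathcal{L}}{\partial p(x)} = -\ln p(x) - 1 - \alpha - \beta \ln x = 0
      \;\Longrightarrow\;
      p(x) = \frac{x^{-\beta}}{Z(\beta)}, \qquad Z(\beta) = \sum_x x^{-\beta},

    i.e. a power law, with the exponent beta (Zipf's law corresponding to beta near 1) fixed implicitly by the constraint value chi.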

  20. Judicial Influence on Policy Outputs?

    DEFF Research Database (Denmark)

    Martinsen, Dorte Sindbjerg

    2015-01-01

    to override unwanted jurisprudence. In this debate, the Court of Justice of the European Union (CJEU) has become famous for its central and occasionally controversial role in European integration. This article examines to what extent and under which conditions judicial decisions influence European Union (EU) social policy outputs. A taxonomy of judicial influence is constructed, and expectations of institutional and political conditions on judicial influence are presented. The analysis draws on an extensive novel data set and examines judicial influence on EU social policies over time, that is, between 1958...

  1. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

    Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

    A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal on a CRT at all times during the averaging process. It has a maximum sampling rate of 2.5 μs per point and a memory capacity of 256 × 12-bit words. The number of sweeps is selectable through a front-panel control in binary steps from 2^3 to 2^12. The enhanced signal can be displayed either on a CRT or on a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
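
    The record does not spell out the 'stable averaging' algorithm; the sketch below shows the standard running-average update that keeps a correctly normalized average in memory after every sweep, which is the property a live calibrated display relies on (the channel count is taken from the record, everything else is an assumption).

      import numpy as np

      N_CHANNELS = 256

      def stable_average(sweeps):
          """Running (normalized) average: after sweep n the buffer already holds the
          calibrated mean of the first n sweeps, so it can be displayed at any time."""
          avg = np.zeros(N_CHANNELS)
          for n, sweep in enumerate(sweeps, start=1):
              avg += (np.asarray(sweep, dtype=float) - avg) / n
          return avg

      # Example: averaging 2**12 noisy repetitions of a decaying FID-like signal.
      rng = np.random.default_rng(1)
      t = np.arange(N_CHANNELS)
      signal = np.exp(-t / 60.0)
      sweeps = (signal + rng.normal(0.0, 1.0, N_CHANNELS) for _ in range(2**12))
      print(stable_average(sweeps)[:5])

    Averaging 2^12 sweeps improves the voltage signal-to-noise ratio by a factor of sqrt(4096) = 64, i.e. about 36 dB, consistent with the maximum S/N improvement quoted above.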

  2. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
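
    The following is a generic, minimal sketch of trajectory averaging for a stochastic approximation recursion (a Robbins–Monro iteration with an averaged output); it only illustrates the estimator being analyzed and is not the SAMCMC algorithm itself, and the toy root-finding problem, gain sequence and burn-in length are assumptions.

      import numpy as np

      def sa_with_trajectory_averaging(noisy_field, theta0, n_iter=5000, burn_in=1000):
          """Robbins-Monro recursion theta_{k+1} = theta_k - a_k * H(theta_k, noise),
          returning both the last iterate and the average of the post-burn-in trajectory."""
          theta = float(theta0)
          trajectory = []
          for k in range(1, n_iter + 1):
              a_k = 1.0 / k**0.7                 # slowly decaying gain, as the theory typically requires
              theta -= a_k * noisy_field(theta)
              if k > burn_in:
                  trajectory.append(theta)
          return theta, float(np.mean(trajectory))

      # Toy example: find the root of E[H(theta, xi)] = theta - 2 from noisy evaluations.
      rng = np.random.default_rng(0)
      noisy_field = lambda theta: (theta - 2.0) + rng.normal(0.0, 1.0)
      last, averaged = sa_with_trajectory_averaging(noisy_field, theta0=0.0)
      print(last, averaged)   # the averaged estimate is typically much closer to 2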

  3. Chaotic Universe, Friedmannian on the average 2

    Energy Technology Data Exchange (ETDEWEB)

    Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij

    1980-11-01

    The cosmological solutions are found for the equations for correlators describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solution depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26 the solution for the scale factor lies higher than the Friedmannian one, and lower at n > 0.26. The influence of long-wave fluctuation modes that are finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of the short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. Restrictions are obtained on the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.

  4. A MAXIMUM POWER POINT TRACKING SCHEME FOR A 1kW ...

    African Journals Online (AJOL)

    user

    knee point of the PV system under variable atmospheric conditions have been ..... of the PV generator module increases, and the maximum power output increases as well. ..... Water Pumping System”, A Thesis presented to the Faculty of California ...
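
    The fragment above does not state which tracking algorithm is used; purely as an illustrative assumption, the sketch below shows a generic perturb-and-observe loop, one common way of locating the knee (maximum power) point of a PV module under changing conditions. The simulated panel model, duty-cycle step size and hardware callbacks are all hypothetical.

      def perturb_and_observe(measure_v_i, set_duty, duty=0.5, step=0.01, iterations=100):
          """Generic P&O tracker: perturb the converter duty cycle and keep moving
          in whichever direction increases the measured PV power."""
          v, i = measure_v_i()
          p_prev, direction = v * i, +1
          for _ in range(iterations):
              duty = min(max(duty + direction * step, 0.0), 1.0)
              set_duty(duty)
              v, i = measure_v_i()
              p = v * i
              if p < p_prev:            # power dropped: reverse the perturbation direction
                  direction = -direction
              p_prev = p
          return duty

      # Toy simulated panel whose power peaks somewhere inside the duty-cycle range.
      _state = {"duty": 0.5}
      def set_duty(d):
          _state["duty"] = d
      def measure_v_i():
          u = 1.0 - _state["duty"]
          return 40.0 * u, 8.0 * (1.0 - u**6)   # crude voltage and current vs. duty relations

      print("duty near the maximum power point:", round(perturb_and_observe(measure_v_i, set_duty), 2))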

  5. Maximization of energy in the output of a linear system

    International Nuclear Information System (INIS)

    Dudley, D.G.

    1976-01-01

    A time-limited signal which, when passed through a linear system, maximizes the total output energy is considered. Previous work has shown that the solution is given by the eigenfunction associated with the maximum eigenvalue in a Hilbert-Schmidt integral equation. Analytical results are available for the case where the transfer function is a low-pass filter. This work is extended by obtaining a numerical solution to the integral equation which allows results for reasonably general transfer functions
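
    A minimal numerical sketch of the approach described above: discretize the output-energy kernel on the input time window and take the eigenvector belonging to the largest eigenvalue. The first-order low-pass impulse response, grid spacing and observation horizon are assumptions used only for illustration.

      import numpy as np

      dt, T, horizon = 0.01, 1.0, 5.0             # input limited to [0, T]; output observed on [0, horizon]
      t_in = np.arange(0.0, T, dt)
      t_out = np.arange(0.0, horizon, dt)

      tau = 0.2                                   # assumed first-order low-pass impulse response h(t)
      h = lambda t: np.where(t >= 0.0, np.exp(-t / tau) / tau, 0.0)

      # Matrix form of the convolution y = h * x:  y = H @ x, with H[m, n] = h(t_out[m] - t_in[n]) * dt
      H = h(t_out[:, None] - t_in[None, :]) * dt

      # The output energy x^T (H^T H dt) x is maximized by the top eigenvector of the kernel matrix.
      K = H.T @ H * dt
      eigvals, eigvecs = np.linalg.eigh(K)        # eigenvalues returned in ascending order
      x_opt = eigvecs[:, -1] / np.sqrt(dt)        # scale so the input has unit energy
      print("maximum output energy for unit input energy:", eigvals[-1] / dt)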

  6. Topological quantization of ensemble averages

    International Nuclear Information System (INIS)

    Prodan, Emil

    2009-01-01

    We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states

  7. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract the specific harmonics that may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to a different extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency-domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculation efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of the impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
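
    For reference, the conventional time-domain (synchronous) averaging that FTDA generalizes can be sketched as follows; it assumes an exact integer number of samples per period, which is precisely the condition whose violation produces the period cutting error discussed above. The sampling rate, mesh frequency and noise level are assumptions for the demo.

      import numpy as np

      def time_domain_average(signal, samples_per_period):
          """Conventional TDA: slice the record into whole periods and average them,
          attenuating everything that is not synchronous with the chosen period."""
          n_periods = len(signal) // samples_per_period
          segments = np.reshape(signal[:n_periods * samples_per_period],
                                (n_periods, samples_per_period))
          return segments.mean(axis=0)

      # Example: a periodic gear-mesh-like component buried in broadband noise.
      rng = np.random.default_rng(0)
      fs, f_mesh, n_rev = 10000, 50, 400          # sampling rate [Hz], mesh frequency [Hz], revolutions
      spp = fs // f_mesh                          # 200 samples per period (exactly integer here)
      t = np.arange(n_rev * spp) / fs
      x = np.sin(2 * np.pi * f_mesh * t) + rng.normal(0.0, 2.0, t.size)
      print(time_domain_average(x, spp)[:5])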

  8. Evaluation of Navigation System Accuracy Indexes for Deviation Reading from Average Range

    Directory of Open Access Journals (Sweden)

    Alexey Boykov

    2017-12-01

    Full Text Available The method for estimating the mean square error, kurtosis and error correlation coefficient of the deviations from the average range of three navigation parameter indications, taken from the outputs of three information sensors, is substantiated and developed.

  9. Implementation of bayesian model averaging on the weather data forecasting applications utilizing open weather map

    Science.gov (United States)

    Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.

    2018-02-01

    Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity and other phenomena in the atmosphere. Extreme weather due to global warming can lead to drought, floods, hurricanes and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict the weather with a distinctive output, particularly a GIS-based mapping process with information about the current weather status at given coordinates of each region and the capability to forecast seven days ahead. Data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. The forecasting error is calculated as the mean square error (MSE). The error for minimum temperature is 0.28 and for maximum temperature 0.15. Meanwhile, the error for minimum humidity is 0.38 and for maximum humidity 0.04. Finally, the forecasting error for wind speed is 0.076. The lower the forecasting error, the better the accuracy.
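
    The record does not give the exact BMA formulation used; the sketch below shows only the core idea, a weighted combination of member forecasts with weights derived from each member's fit to a training window. The Gaussian error model, the absence of an EM step and the toy numbers are assumptions.

      import numpy as np

      def bma_weights(member_forecasts, observations, sigma=1.0):
          """Weight each member model by its Gaussian likelihood on past data,
          normalized so that the weights sum to one (a simplified, EM-free BMA)."""
          errors = member_forecasts - observations             # shape: (n_models, n_times)
          log_lik = -0.5 * np.sum((errors / sigma) ** 2, axis=1)
          w = np.exp(log_lik - log_lik.max())                  # subtract the max for numerical stability
          return w / w.sum()

      # Toy example: three temperature models evaluated on five past observations.
      forecasts = np.array([[30.1, 31.0, 29.8, 30.5, 31.2],
                            [29.0, 30.2, 29.1, 29.8, 30.4],
                            [31.5, 32.4, 31.0, 31.9, 32.6]])
      obs = np.array([30.0, 31.1, 29.9, 30.4, 31.0])
      w = bma_weights(forecasts, obs)
      next_member_forecasts = np.array([30.8, 30.0, 32.1])
      print("BMA combined forecast:", float(w @ next_member_forecasts))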

  10. Integration of TMVA Output into Jupyter notebooks

    CERN Document Server

    Saliji, Albulena

    2016-01-01

    The purpose of this report is to describe the work I did during the past eight weeks as a Summer Student at CERN. The task assigned to me was the integration of TMVA output into Jupyter notebooks. This first required improving the TMVA output in the terminal; the improved output then had to be transformed into HTML so that it could finally be integrated into the Jupyter notebook.

  11. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  12. 40 CFR 1045.140 - What is my engine's maximum engine power?

    Science.gov (United States)

    2010-07-01

    ...) Maximum engine power for an engine family is generally the weighted average value of maximum engine power... engine family's maximum engine power apply in the following circumstances: (1) For outboard or personal... value for maximum engine power from all the different configurations within the engine family to...

  13. The average Indian female nose.

    Science.gov (United States)

    Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh

    2011-12-01

    This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.

  14. The effect of relative humidity on output performance of inclined and ...

    African Journals Online (AJOL)

    The set-up with the 70 W solar panel was inclined at a fixed 15° for maximum solar reception, while the set-up with the 80 W solar panel had an automatic solar tracker for effective capture of solar radiation. For the 70 W solar panel, the maximum power output of 59.99 W was obtained when the relative humidity was 30%.

  15. Improved Maximum Strength, Vertical Jump and Sprint Performance after 8 Weeks of Jump Squat Training with Individualized Loads

    Science.gov (United States)

    Marián, Vanderka; Katarína, Longová; Dávid, Olasz; Matúš, Krčmár; Simon, Walker

    2016-01-01

    The purpose of the study was to determine the effects of 8 weeks of jump squat training on isometric half squat maximal force production (Fmax) and rate of force development over 100 ms (RFD100), countermovement jump (CMJ) and squat jump (SJ) height, and 50 m sprint time in moderately trained men. Sixty-eight subjects (~21 years, ~180 cm, ~75 kg) were divided into experimental (EXP; n = 36) and control (CON; n = 32) groups. Tests were completed pre-, mid- and post-training. EXP performed jump squat training 3 times per week using loads that allowed all repetitions to be performed with ≥90% of maximum average power output (13 sessions with 4 sets of 8 repetitions and 13 sessions with 8 sets of 4 repetitions). Subjects were given real-time feedback for every repetition during the training sessions. Significant improvements in Fmax were observed from pre- to mid-training (Δ ~14%, p ...); performing jump squats with loads that allow repetitions to be performed at ≥90% of maximum average power output can simultaneously improve several different athletic performance tasks in the short term. Key points: The jump squat is one of many exercises used to develop explosive strength and has been the focus of much research, while the load used during training seems to be an important factor that affects training outcomes. The experimental group improved performance in all assessed parameters (Fmax, RFD100, CMJ, SJ and 50 m sprint time). However, improvements in CMJ and SJ were recorded after the entire power training period, and thereafter a plateau occurred. The portable FitroDyne could serve as a valuable device for individualizing the load that maximizes mean power output, and visual feedback can be provided to athletes during training. PMID:27803628

  16. Observability of linear systems with saturated outputs

    NARCIS (Netherlands)

    Koplon, R.; Sontag, E.D.; Hautus, M.L.J.

    1994-01-01

    We present necessary and sufficient conditions for observability of the class of output-saturated systems. These are linear systems whose output passes through a saturation function before it can be measured.

  17. Hybrid optoelectronic device with multiple bistable outputs

    Energy Technology Data Exchange (ETDEWEB)

    Costazo-Caso, Pablo A; Jin Yiye; Gelh, Michael; Granieri, Sergio; Siahmakoun, Azad, E-mail: pcostanzo@ing.unlp.edu.are, E-mail: granieri@rose-hulma.edu, E-mail: siahmako@rose-hulma.edu [Department of Physics and Optical Engineering, Rose-Hulman Institute of Technology, 5500 Wabash Avenue, Terre Haute, IN 47803 (United States)

    2011-01-01

    Optoelectronic circuits which exhibit optical and electrical bistability with hysteresis behavior are proposed and experimentally demonstrated. The systems are based on semiconductor optical amplifiers (SOA), bipolar junction transistors (BJT), PIN photodiodes (PD) and laser diodes externally modulated with integrated electro-absorption modulators (LD-EAM). The device operates based on two independent phenomena leading to both electrical bistability and optical bistability. The electrical bistability is due to the series connection of two p-i-n structures (SOA, BJT, PD or LD) in reverse bias. The optical bistability is a consequence of the quantum confined Stark effect (QCSE) in the multi-quantum well (MQW) structure in the intrinsic region of the device. This effect produces the optical modulation of the light transmitted through the SOA (or reflected from the PD). Finally, because the optical transmission of the SOA (in reverse bias) and the light reflected from the PD are so small, an LD-EAM modulated by the voltage across these devices is employed to obtain a higher output optical power. Experiments show that the maximum switching frequency is in the MHz range and the rise/fall times are lower than 1 μs. The temporal response is mainly limited by the electrical capacitance of the devices and the parasitic inductances of the connecting wires. The effects of these components can be reduced in current integration technologies.

  18. High-average-power diode-pumped Yb: YAG lasers

    International Nuclear Information System (INIS)

    Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B

    1999-01-01

    A scaleable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high average power Yb:YAG lasers that utilize a rod configured gain element. Previously, this rod configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High beam quality (M² = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual rod configuration consisting of two, 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M² value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with an M² < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 nsec pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) Hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes. (2) Compound laser rods with flanged-nonabsorbing-endcaps fabricated by diffusion bonding. (3) Techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished barrel rods

  19. Auxetic piezoelectric energy harvesters for increased electric power output

    Directory of Open Access Journals (Sweden)

    Qiang Li

    2017-01-01

    Full Text Available This letter presents a piezoelectric bimorph with auxetic (negative Poisson’s ratio) behavior for increased power output in vibration energy harvesting. The piezoelectric bimorph comprises a 2D auxetic substrate sandwiched between two piezoelectric layers. The auxetic substrate is capable of introducing auxetic behavior and thus increasing the transverse stress in the piezoelectric layers when the bimorph is subjected to a longitudinal stretching load. As a result, both 31- and 32-modes are simultaneously exploited to generate electric power, leading to an increased power output. The principle of the increased power output was theoretically analyzed and verified by finite element (FE) modelling. The FE modelling results showed that the auxetic substrate can increase the transverse stress of a bimorph by 16.7 times. The average power generated by the auxetic bimorph is 2.76 times that generated by a conventional bimorph.

  20. GaN Nanowire Arrays for High-Output Nanogenerators

    KAUST Repository

    Huang, Chi-Te

    2010-04-07

    Three-fold symmetrically distributed GaN nanowire (NW) arrays have been epitaxially grown on GaN/sapphire substrates. The GaN NW possesses a triangular cross section enclosed by (0001), (2112), and (2112) planes, and the angle between the GaN NW and the substrate surface is ∼62°. The GaN NW arrays produce negative output voltage pulses when scanned by a conductive atomic force microscope in contact mode. The average piezoelectric output voltage was about -20 mV, while 5-10% of the NWs had piezoelectric output voltages exceeding -(0.15-0.35) V. The GaN NW arrays are highly stable and highly tolerant of moisture in the atmosphere. The GaN NW arrays demonstrate an outstanding potential to be utilized for piezoelectric energy generation with a performance probably better than that of ZnO NWs. © 2010 American Chemical Society.

  1. The value of risk: measuring the service output of U.S. commercial banks

    OpenAIRE

    Basu, Susanto; Inklaar, Robert; Wang, J. Christina

    2011-01-01

    Rather than charging direct fees, banks often charge implicitly for their services via interest spreads. As a result, much of bank output has to be estimated indirectly. In contrast to current statistical practice, dynamic optimizing models of banks argue that compensation for bearing systematic risk is not part of bank output. We apply these models and find that between 1997 and 2007, in the U.S. National Accounts, on average, bank output is overestimated by 21 percent and GDP is overestimat...

  2. Maximum stellar iron core mass

    Indian Academy of Sciences (India)

    — journal of physics, Vol. 60, No. 3, March 2003, pp. 415–422. Maximum stellar iron core mass. F W GIACOBBE, Chicago Research Center/American Air Liquide ... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large ... thermal equilibrium velocities will tend to be non-relativistic.

  3. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs

  5. A portable storage maximum thermometer

    International Nuclear Information System (INIS)

    Fayart, Gerard.

    1976-01-01

    A clinical thermometer storing the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell-test and calibration system [fr

  6. Probabilistic Output Analysis by Program Manipulation

    DEFF Research Database (Denmark)

    Rosendahl, Mads; Kirkeby, Maja Hanne

    2015-01-01

    The aim of a probabilistic output analysis is to derive a probability distribution of possible output values for a program from a probability distribution of its input. We present a method for performing static output analysis, based on program transformation techniques. It generates a probability...

  7. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies both the overdetermined and the underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is removed by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system, which appears in the present study, has been established. Results of computer simulation showed the effectiveness of the present theory. (author)

  8. Active structural acoustic control of helicopter interior multifrequency noise using input-output-based hybrid control

    Science.gov (United States)

    Ma, Xunjun; Lu, Yang; Wang, Fengjiao

    2017-09-01

    This paper presents recent advances in the reduction of multifrequency noise inside a helicopter cabin using an active structural acoustic control system based on an active-gearbox-strut approach. To attenuate the multifrequency gearbox vibrations and the resulting noise, a new scheme of discrete model predictive sliding mode control has been proposed based on a controlled auto-regressive moving average model. Its implementation needs only input/output data; hence a broader frequency range of the controlled system is modelled and the burden of state observer design is relieved. Furthermore, a new iteration form of the algorithm is designed, improving development efficiency and running speed. To verify the algorithm's effectiveness and self-adaptability, real-time active control experiments are performed on a newly developed helicopter model system. The helicopter model can generate gear-meshing vibration/noise similar to a real helicopter, with a specially designed gearbox and active struts. The algorithm's control abilities are checked progressively by single-input single-output and multiple-input multiple-output experiments via different feedback strategies: (1) controlling gear-meshing noise by attenuating vibrations at key points on the transmission path, and (2) directly controlling the gear-meshing noise in the cabin using the actuators. Results confirm that the active control system is practical for cancelling multifrequency helicopter interior noise, and that it also weakens the frequency modulation of the tones. In many cases the attenuation of the measured noise exceeds 15 dB, with the maximum reduction reaching 31 dB. The control process is also demonstrated to be smoother and faster.

  9. Forecasted Changes in West Africa Photovoltaic Energy Output by 2045

    Directory of Open Access Journals (Sweden)

    Serge Dimitri Yikwe Buri Bazyomo

    2016-10-01

    Full Text Available The impacts of climate change on photovoltaic (PV) output in the fifteen countries of the Economic Community of West African States (ECOWAS) were analyzed in this paper. Using a set of eight climate models, the trends of solar radiation and temperature between 2006–2100 were examined. Assuming a lifetime of 40 years, the future changes of photovoltaic energy output for the tilted plane receptor compared to 2006–2015 were computed for the whole region. The results show that the trends of solar irradiation are negative, except for the Irish Centre for High-End Computing model, which predicts a positive trend with a maximum value of 0.17 W/m2/year for Cape Verde and a minimum of −0.06 W/m2/year for Liberia. The minimum of the negative trend is −0.18 W/m2/year, predicted by the Model for Interdisciplinary Research on Climate (MIROC, developed at the University of Tokyo Center for Climate System Research) for Cape Verde. Furthermore, temperature trends are positive, with a maximum of 0.08 K/year predicted by MIROC for Niger and a minimum of 0.03 K/year predicted by the Nature Conservancy of Canada (NCC), Max Planck Institute (MPI) for Climate Meteorology at Hamburg, French National Meteorological Research Center (CNRM) and Canadian Centre for Climate Modelling and Analysis (CCCMA) for Cape Verde. Photovoltaic energy output changes show increasing trends in Sierra Leone, with 0.013%/year as the maximum. Climate change will lead to a decreasing trend of PV output in the rest of the countries, with a minimum of 0.032%/year in Niger.

  10. Model output: fact or artefact?

    Science.gov (United States)

    Melsen, Lieke

    2015-04-01

    As a third-year PhD student, I relatively recently entered the wonderful world of scientific hydrology, a science with many pillars that directly impact society, for example the prediction of hydrological extremes (both floods and droughts), climate change, and applications in agriculture, nature conservation and drinking water supply. Despite its demonstrable societal relevance, hydrology is often seen as a science between two stools. As Klemeš (1986) stated: "By their academic background, hydrologists are foresters, geographers, electrical engineers, geologists, system analysts, physicists, mathematicians, botanists, and most often civil engineers." Sometimes it seems that the engineering genes are still present in current hydrological sciences, and this results in pragmatic rather than scientific approaches to some of the current problems and challenges we have in hydrology. Here, I refer to the uncertainty in hydrological modelling that is often neglected. For over thirty years, uncertainty in hydrological models has been extensively discussed and studied. But it is not difficult to find peer-reviewed articles in which it is implicitly assumed that model simulations represent the truth rather than a conceptualization of reality, for instance in trend studies where data are extrapolated 100 years ahead. Of course one can use different forcing datasets to estimate the uncertainty of the input data, but how do we ensure that the output is not a model artefact caused by the model structure? Or consider impact studies, e.g. of a dam affecting river flow: measurements are often available for the period after dam construction, so models are used to simulate river flow before dam construction, and the two are compared in order to quantify the effect of the dam. But on what basis can we tell that the model tells us the truth? Model validation is common nowadays, but validation alone (comparing observations with model output) is not sufficient to assume that a

  11. Canada's helium output rising fast

    Energy Technology Data Exchange (ETDEWEB)

    1966-12-01

    About 12 months from now, International Helium Limited will be almost ready to start up Canada's second helium extraction plant at Mankota, in Saskatchewan's Wood Mountain area about 100 miles southwest of Moose Jaw. Another 80 miles north is Saskatchewan's (and Canada's) first helium plant, operated by Canadian Helium and sitting on a gas deposit at Wilhelm, 9 miles north of Swift Current. It contains almost 2% helium, some CO2, and the rest nitrogen. One year in production was apparently enough to convince Canadian Helium that the export market (it sells most of its helium in W. Europe) can take a lot more than it's getting. Construction began this summer on an addition to the Swift Current plant that will raise its capacity from 12 to 36 MMcf per yr when it goes on stream next spring. Six months later, International Helium's 40 MMcf per yr plant to be located about 4 miles from its 2 Wood Mountain wells will double Canada's helium output again.

  12. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.

  13. Maximum Water Hammer Sensitivity Analysis

    OpenAIRE

    Jalil Emadi; Abbas Solemani

    2011-01-01

    Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or in the case of sudden pump failure. Determination of the maximum water hammer is considered one of the most important technical and economic items that engineers and designers of pumping stations and conveyance pipelines should take care of. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining the significance of ...

  14. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced as well as plants that have the fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a gene always generate the “true tree” by all four algorithms. However, the most frequent gene tree, termed the “maximum gene-support tree” (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the “true tree” among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.

  15. Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow

    Science.gov (United States)

    Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke

    2017-04-01

    Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-seconds (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10^0 to 10^6 km² in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.
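
    A hedged sketch of the kind of regression model described: an ordinary least-squares fit of (log-transformed) mean annual flow against catchment descriptors. The log-log functional form, the choice of predictors and the synthetic calibration data are assumptions; the study's actual predictors, transformations and coefficients are not reproduced here.

      import numpy as np

      # Synthetic "observed" catchments: area [km^2], mean annual precipitation [mm/yr], temperature [deg C].
      rng = np.random.default_rng(0)
      n = 500
      area = 10 ** rng.uniform(0, 6, n)
      precip = rng.uniform(200, 3000, n)
      temp = rng.uniform(-5, 30, n)
      log_af = (0.9 * np.log10(area) + 1.2 * np.log10(precip) - 0.01 * temp - 3.0
                + rng.normal(0.0, 0.2, n))            # synthetic log10 of mean annual flow [m^3/s]

      # Ordinary least-squares regression of log AF on the catchment descriptors.
      X = np.column_stack([np.ones(n), np.log10(area), np.log10(precip), temp])
      coef, *_ = np.linalg.lstsq(X, log_af, rcond=None)

      pred = X @ coef
      r2 = 1.0 - np.sum((log_af - pred) ** 2) / np.sum((log_af - log_af.mean()) ** 2)
      print("fitted coefficients:", np.round(coef, 3), " R^2:", round(r2, 3))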

  16. PREVIMER : Meteorological inputs and outputs

    Science.gov (United States)

    Ravenel, H.; Lecornu, F.; Kerléguer, L.

    2009-09-01

    PREVIMER is a pre-operational system aiming to provide a wide range of users, from private individuals to professionals, with short-term forecasts about the coastal environment along the French coastlines bordering the English Channel, the Atlantic Ocean, and the Mediterranean Sea. Observation data and digital modelling tools first provide 48-hour (probably 96-hour by summer 2009) forecasts of sea states, currents, sea water levels and temperatures. The follow-up of an increasing number of biological parameters will, in time, complete this overview of the coastal environment. Working in partnership with the French Naval Hydrographic and Oceanographic Service (Service Hydrographique et Océanographique de la Marine, SHOM), the French National Weather Service (Météo-France), the French public science and technology research institute (Institut de Recherche pour le Développement, IRD), the European Institute of Marine Studies (Institut Universitaire Européen de la Mer, IUEM) and many others, IFREMER (the French public institute for marine research) is supplying the technologies needed to ensure this pertinent information, available daily on the Internet at http://www.previmer.org, and stored at the Operational Coastal Oceanographic Data Centre. Since 2006, PREVIMER publishes the results of demonstrators assigned to limited geographic areas and to specific applications. This system remains experimental. The following topics are covered: hydrodynamic circulation, sea states, follow-up of passive tracers, conservative or non-conservative (specifically of microbiological origin), biogeochemical state, primary production. Lastly, PREVIMER provides researchers and R&D departments with modelling tools and access to the database, in which the observation data and the modelling results are stored, to undertake environmental studies on new sites. The communication will focus on meteorological inputs to and outputs from PREVIMER. It will draw the lessons from almost 3 years during

  17. The 'icon' of output efficiency

    International Nuclear Information System (INIS)

    Bligh, L.N.; Evans, S.G.; Larcos, G.; Gruenewald, S.M.

    1999-01-01

    Full text: Output efficiency (OE) is a well-validated parameter used in the assessment of hydronephrosis. Current analysis on Microdelta appears to produce few low OE values and occasional inability to produce a result. We sought an OE program which gave a reliable response over the full range of values. The aims of this study were to determine: (1) whether OE results are comparable between two computer systems; (2) a normal range for OE on an ICON; (3) inter-observer reproducibility; and (4) the correlation between the two programs and the residual cortical activity ratio (RCA), an index which assesses tracer washout from the 20 min cortical activity/peak cortical activity. Accordingly, two blinded medical radiation scientists reviewed 41 kidneys (26 native, 15 transplant) and calculated OE for each kidney on the ICON and Microdelta computers. The OE on the Microdelta and the ICON had good correspondence (r = 0.6%, SEE = 6.2). The extrapolated normal range for ICON OE was 69-92% (mean 80.9%). The inter-observer reproducibility on the ICON was excellent, with a CV of 8.7%. ICON OE and RCA had a strong correlation (r = -0.77, SEE = 0.09), compared with a weaker correlation for the Microdelta (r = 0.47, SEE = 0.13). Processing time on the ICON was almost half that of the Microdelta (4 min compared with 7 min). We conclude that OE generated by these computer programs has good correlation, an established normal range, excellent inter-observer reproducibility, but differing correlation with RCA. The response of the ICON program to low ranges of OE is being investigated further.

  18. Analysis and Minimization of Output Current Ripple for Discontinuous Pulse-Width Modulation Techniques in Three-Phase Inverters

    Directory of Open Access Journals (Sweden)

    Gabriele Grandi

    2016-05-01

    Full Text Available This paper gives a complete analysis of the output current ripple in three-phase voltage source inverters considering the different discontinuous pulse-width modulation (DPWM) strategies. In particular, the peak-to-peak current ripple amplitude is analytically evaluated over the fundamental period and compared among the most used DPWMs, including positive and negative clamped (DPWM+ and DPWM−), and the four possible combinations between them, usually named DPWM0, DPWM1, DPWM2, and DPWM3. The maximum and the average values of the peak-to-peak current ripple are estimated, and a simple method to correlate the ripple envelope with the ripple rms is proposed and verified. Furthermore, all the results obtained with the DPWMs are compared to centered pulse-width modulation (CPWM, equivalent to space vector modulation) to identify the optimal pulse-width modulation (PWM) strategy as a function of the modulation index, taking into account the different average switching frequency. In this way, the PWM technique providing the minimum output current ripple is identified over the whole modulation range. The analytical developments and the main results are experimentally verified by current ripple measurements with a three-phase PWM inverter prototype supplying an induction motor load.

  19. Output control of da Vinci surgical system's surgical graspers.

    Science.gov (United States)

    Johnson, Paul J; Schmidt, David E; Duvvuri, Umamaheswar

    2014-01-01

    The number of robot-assisted surgeries performed with the da Vinci surgical system has increased significantly over the past decade. The articulating movements of the robotic surgical grasper are controlled by grip controls at the master console. The user interface has been implicated as one contributing factor in surgical grasping errors. The goal of our study was to characterize and evaluate the user interface of the da Vinci surgical system in controlling surgical graspers. An angular manipulator with force sensors was used to increment the grip control angle as grasper output angles were measured. Input force at the grip control was simultaneously measured throughout the range of motion. Pressure film was used to assess the maximum grasping force achievable with the endoscopic grasping tool. The da Vinci robot's grip control angular input has a nonproportional relationship with the grasper instrument output. The grip control mechanism presents an intrinsic resistant force to the surgeon's fingertips and provides no haptic feedback. The da Vinci Maryland graspers are capable of applying up to 5.1 MPa of local pressure. The angular and force input at the grip control of the da Vinci robot's surgical graspers is nonproportional to the grasper instrument's output. Understanding the true relationship of the grip control input to grasper instrument output may help surgeons understand how to better control the surgical graspers and promote fewer grasping errors. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Output power analyses for the thermodynamic cycles of thermal power plants

    International Nuclear Information System (INIS)

    Sun Chen; Cheng Xue-Tao; Liang Xin-Gang

    2014-01-01

    Thermal power plants are among the most important thermodynamic devices and are very common in all kinds of power generation systems. In this paper, we use a new concept, entransy loss, as well as exergy destruction, to analyze the single reheating Rankine cycle unit and the single-stage steam extraction regenerative Rankine cycle unit in power plants. This is the first time that the concept of entransy loss has been applied to the analysis of power plant Rankine cycles with reheating and steam extraction regeneration. In order to obtain the maximum output power, the operating conditions under varying vapor mass flow rates are optimized numerically, as well as the combustion temperatures and the off-design flow rates of the flue gas. The relationship between the output power and the exergy destruction rate and that between the output power and the entransy loss rate are discussed. It is found that both the minimum exergy destruction rate and the maximum entransy loss rate lead to the maximum output power when the combustion temperature and heat capacity flow rate of the flue gas are prescribed. Unlike the minimum exergy destruction rate, the maximum entransy loss rate is related to the maximum output power when the highest temperature and heat capacity flow rate of the flue gas are not prescribed. (general)

  1. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
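
    The Hargreaves model mentioned above estimates the atmospheric evaporative demand from temperature and exoatmospheric radiation only; a commonly used formulation is sketched below (treat the coefficients as the textbook Hargreaves–Samani values, not as the exact ones calibrated in this study).

      def hargreaves_et0(t_max, t_min, ra_mj_m2_day):
          """Reference evapotranspiration (mm/day) from the Hargreaves-Samani equation.
          ra_mj_m2_day: exoatmospheric (clear-sky) solar radiation in MJ m-2 day-1."""
          t_mean = 0.5 * (t_max + t_min)
          ra_mm = 0.408 * ra_mj_m2_day          # convert radiation to equivalent evaporation (mm/day)
          return 0.0023 * (t_mean + 17.8) * (t_max - t_min) ** 0.5 * ra_mm

      # Example 1 km cell for one month: Tmax 24 C, Tmin 8 C, Ra about 35 MJ m-2 day-1.
      print(round(hargreaves_et0(24.0, 8.0, 35.0), 2), "mm/day")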

  2. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based...

  3. Energy and output dynamics in Bangladesh

    International Nuclear Information System (INIS)

    Paul, Biru Paksha; Uddin, Gazi Salah

    2011-01-01

    The relationship between energy consumption and output is still ambiguous in the existing literature. The economy of Bangladesh, having spectacular output growth and rising energy demand as well as energy efficiency in recent decades, can be an ideal case for examining energy-output dynamics. We find that while fluctuations in energy consumption do not affect output fluctuations, movements in output inversely affect movements in energy use. The results of Granger causality tests in this respect are consistent with those of innovative accounting that includes variance decompositions and impulse responses. Autoregressive distributed lag models also suggest a role of output in Bangladesh's energy use. Hence, the findings of this study have policy implications for other developing nations where measures for energy conservation and efficiency can be relevant in policymaking.

  4. Theoretical analysis of magnetic sensor output voltage

    International Nuclear Information System (INIS)

    Liu Haishun; Dun Chaochao; Dou Linming; Yang Weiming

    2011-01-01

    The output voltage is an important parameter for determining the stress state in magnetic stress measurement. The relationship between the output voltage and the difference in the principal stresses was investigated through a comprehensive application of magnetic circuit theory, magnetization theory, stress analysis and the law of electromagnetic induction, and a corresponding quantitative equation was derived. It is shown that the output voltage is proportional to the difference in the principal stresses and related to the angle between the principal stress and the direction of the sensor. This investigation provides a theoretical basis for measuring the principal stresses from the output voltage. - Research highlights: → A comprehensive investigation of the magnetic stress signal. → Derived a quantitative equation relating the output voltage and the principal stresses. → The output voltage is proportional to the difference of the principal stresses. → Provides a theoretical basis for principal stress measurement.

  5. Regulation of the output power at the resonant converter

    Energy Technology Data Exchange (ETDEWEB)

    Stefanov, Goce G.; Sarac, Vasilija J. [University Goce Delecev-Stip, Faculty of Electrical Engineering, Radovis (Macedonia, The Former Yugoslav Republic of); Karadzinov, Ljupco V., E-mail: goce.stefanov@ugd.edu.mk [University Kiril and Methodyus-Skopje, FEIT Skopje(Macedonia, The Former Yugoslav Republic of)

    2011-07-01

    In this paper, a method for regulating an alternating-current voltage source built from a pair of IGBT transistor modules in a full-bridge configuration with a series resonant converter is given. With the developed method, a solution is obtained that regulates the phase difference between the output voltage and the current through the inductor in order to maintain maximum output power. The control electronics regulate the energy transfer to the tank via feedback signals by changing the pulse width of the signals applied to the gates of the IGBTs. By increasing or decreasing the pulse width transmitted to the various gates of the IGBTs, the energy transfer to the tank is increased or decreased. The PowerSim simulation program is used to develop the control methodology. The developed method is practically implemented in a prototype device for phase control of a resonant converter with a variable resonant load. Key words: pulse width method, phase regulation, power converter.

  6. Characteristic analysis of a polarization output coupling Porro prism resonator

    Science.gov (United States)

    Yang, Hailong; Meng, Junqing; Chen, Weibiao

    2015-02-01

    An electro-optical Q-switched Nd:YAG slab laser with a crossed misalignment Porro prism resonator for space applications has been theoretically and experimentally investigated. The phase shift induced by the combination of different wave plates and Porro prism azimuth angles has been studied for creating a high-loss condition prior to Q-switching. The relationship between the effective output coupling reflectivity and the applied Q-switch driving voltage is explored using Jones matrix optics. In the experiment, a maximum output pulse energy of 93 mJ with 14-ns pulse duration is obtained at a repetition rate of 20 Hz, and the optical-to-optical conversion efficiency is 16.8%. The beam quality factors are M²x = 2.5 and M²y = 2.2, respectively.
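    The Jones-matrix bookkeeping can be illustrated with the toy model below, which treats the wave-plate/Q-switch combination as a single variable retarder at 45° traversed twice per round trip; this configuration is a simplification for illustration, not the authors' exact cavity.

    ```python
    import numpy as np

    def rot(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])

    def retarder(delta, theta):
        """Jones matrix of a linear retarder with retardance delta, fast axis at angle theta."""
        J0 = np.array([[1, 0], [0, np.exp(1j * delta)]])
        return rot(theta) @ J0 @ rot(-theta)

    def effective_reflectivity(delta):
        """Fraction of power returned in the original (horizontal) polarization after a
        double pass through a variable retarder at 45 deg (toy stand-in for the
        Q-switch / wave-plate / Porro-prism combination)."""
        e_in = np.array([1.0, 0.0])              # horizontally polarized field
        mirror = np.eye(2)                       # idealized retro-reflection
        J_round = retarder(delta, np.pi / 4) @ mirror @ retarder(delta, np.pi / 4)
        e_out = J_round @ e_in
        return abs(e_out[0]) ** 2                # power coupled back into the H port

    for frac in (0.0, 0.25, 0.5):                # retardance as a fraction of pi
        print(f"delta = {frac:.2f} pi -> R_eff = {effective_reflectivity(frac * np.pi):.2f}")
    ```

    Sweeping the retardance (i.e. the Q-switch driving voltage) from 0 to π/2 takes the effective reflectivity from 1 to 0 in this toy model, which is the qualitative behaviour the abstract describes.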

  7. Output Control Using Feedforward And Cascade Controllers

    Science.gov (United States)

    Seraji, Homayoun

    1990-01-01

    Report presents theoretical study of open-loop control elements in single-input, single-output linear system. Focus on output-control (servomechanism) problem, in which objective is to find control scheme that causes output to track certain command inputs and to reject certain disturbance inputs in steady state. Report closes with brief discussion of characteristics and relative merits of feedforward, cascade, and feedback controllers and combinations thereof.

  8. Extreme Maximum Land Surface Temperatures.

    Science.gov (United States)

    Garratt, J. R.

    1992-09-01

    There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
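    A crude version of such a balance can be sketched as follows; the emissivities and sensible-heat transfer coefficient are placeholder values chosen only to illustrate that the quoted 90°-100°C range is plausible, not the coefficients used in the paper.

    ```python
    import numpy as np

    SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

    def surface_temperature(S_abs=1000.0, T_air=328.0,
                            eps_surf=0.95, eps_atm=0.75, h=15.0):
        """Crude daytime surface energy balance for a dry soil (no latent or ground heat
        flux): absorbed shortwave + downwelling longwave = emitted longwave + sensible heat.
        All coefficients are assumed placeholder values."""
        lw_down = eps_atm * SIGMA * T_air**4

        def residual(T):
            return eps_surf * SIGMA * T**4 + h * (T - T_air) - (S_abs + lw_down)

        lo, hi = T_air, 500.0                    # bracket the root and bisect
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if residual(mid) > 0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    print(f"{surface_temperature() - 273.15:.0f} C")   # lands near 90 C with these coefficients
    ```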

  9. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  10. Efficient processing of CFRP with a picosecond laser with up to 1.4 kW average power

    Science.gov (United States)

    Onuseit, V.; Freitag, C.; Wiedenmann, M.; Weber, R.; Negel, J.-P.; Löscher, A.; Abdou Ahmed, M.; Graf, T.

    2015-03-01

    Laser processing of carbon fiber reinforced plastic (CFRP) is a very promising method for solving many of the challenges of large-volume production of lightweight constructions in the automotive and aircraft industries. However, the laser process is currently limited by two main issues. First, the quality might be reduced due to thermal damage, and second, the high process energy needed for sublimation of the carbon fibers requires laser sources with high average power for productive processing. To keep the thermal damage of the CFRP below 10 μm, intensities above 10⁸ W/cm² are needed. To reach these high intensities in the processing area, ultra-short pulse laser systems are favored. Unfortunately, the average power of commercially available laser systems is up to now in the range of several tens to a few hundred watts. To sublimate the carbon fibers, a large volume-specific enthalpy of 85 J/mm³ is necessary. This means, for example, that cutting 2 mm thick material with a kerf width of 0.2 mm at an industry-typical 100 mm/s requires several kilowatts of average power. At the IFSW, a thin-disk multipass amplifier yielding a maximum average output power of 1100 W (300 kHz, 8 ps, 3.7 mJ) allowed for the first time processing of CFRP at this average power and pulse energy level with picosecond pulse duration. With this unique laser system, cutting of 2 mm thick CFRP at an effective average cutting speed of 150 mm/s with thermal damage below 10 μm was demonstrated.
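    The quoted power requirement follows directly from the volume-specific enthalpy and the material removal rate; a quick check for the cutting example given above:

    ```latex
    P = H_V\, w\, t\, v = 85~\mathrm{J/mm^3} \times 0.2~\mathrm{mm} \times 2~\mathrm{mm} \times 100~\mathrm{mm/s} \approx 3.4~\mathrm{kW}
    ```

    i.e. several kilowatts of average power before any process losses are taken into account.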

  11. Mathematical model of accelerator output characteristics and their calculation on a computer

    International Nuclear Information System (INIS)

    Mishulina, O.A.; Ul'yanina, M.N.; Kornilova, T.V.

    1975-01-01

    A mathematical model is described of the output characteristics of a linear accelerator. The model is a system of differential equations. The presence of phase limitations is a specific feature of the problem formulation which makes it possible to ensure higher simulation accuracy and to determine a capture coefficient. An algorithm is elaborated for computing the output characteristics based upon the suggested mathematical model. The capture coefficient, the coordinate expectation characterizing the average phase value of the beam particles, the coordinate expectation characterizing the average value of the reverse relative velocity of the beam particles, as well as the dispersions of these coordinates, are the output characteristics of the accelerator. Calculation methods for the accelerator output characteristics are described in detail. The computations have been performed on the BESM-6 computer, the characteristics computing time being 2 min 20 sec. The relative error of parameter computation averages 10⁻²

  12. Sub-100 fs high average power directly blue-diode-laser-pumped Ti:sapphire oscillator

    Science.gov (United States)

    Rohrbacher, Andreas; Markovic, Vesna; Pallmann, Wolfgang; Resan, Bojan

    2016-03-01

    Ti:sapphire oscillators are a proven technology for generating sub-100 fs (even sub-10 fs) pulses in the near infrared and are widely used in many high-impact scientific fields. However, the need for a bulky, expensive and complex pump source, typically a frequency-doubled multi-watt neodymium or optically pumped semiconductor laser, represents the main obstacle to more widespread use. The recent development of blue diodes emitting over 1 W has opened up the possibility of directly diode-laser-pumped Ti:sapphire oscillators. Besides the lower cost and smaller footprint, direct diode pumping provides better reliability, higher efficiency and better pointing stability, to name a few. The challenges it poses are the lower absorption of Ti:sapphire at available diode wavelengths and the lower brightness compared to typical green pump lasers. For practical applications such as bio-medicine and nano-structuring, output powers in excess of 100 mW and sub-100 fs pulses are required. In this paper, we demonstrate a high average power directly blue-diode-laser-pumped Ti:sapphire oscillator without active cooling. SESAM modelocking ensures reliable self-starting and robust operation. We present two configurations emitting 460 mW in 82 fs pulses and 350 mW in 65 fs pulses, both operating at 92 MHz. The maximum obtained pulse energy reaches 5 nJ. A double-sided pumping scheme with two high-power blue diode lasers was used for output power scaling. The cavity design and the experimental results will be discussed in more detail.
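    The quoted pulse energy is consistent with the stated average power and repetition rate:

    ```latex
    E_p = \frac{P_{\mathrm{avg}}}{f_{\mathrm{rep}}} = \frac{0.46~\mathrm{W}}{92~\mathrm{MHz}} = 5~\mathrm{nJ}
    ```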

  13. Selective effects of weight and inertia on maximum lifting.

    Science.gov (United States)

    Leontijevic, B; Pazin, N; Kukolj, M; Ugarkovic, D; Jaric, S

    2013-03-01

    A novel loading method (loading ranged from 20% to 80% of 1RM) was applied to explore the selective effects of externally added simulated weight (exerted by stretched rubber bands pulling downward), weight+inertia (external weights added), and inertia (covariation of the weights and the rubber bands pulling upward) on maximum bench press throws. 14 skilled participants revealed a load associated decrease in peak velocity that was the least associated with an increase in weight (42%) and the most associated with weight+inertia (66%). However, the peak lifting force increased markedly with an increase in both weight (151%) and weight+inertia (160%), but not with inertia (13%). As a consequence, the peak power output increased most with weight (59%), weight+inertia revealed a maximum at intermediate loads (23%), while inertia was associated with a gradual decrease in the peak power output (42%). The obtained findings could be of importance for our understanding of mechanical properties of human muscular system when acting against different types of external resistance. Regarding the possible application in standard athletic training and rehabilitation procedures, the results speak in favor of applying extended elastic bands which provide higher movement velocity and muscle power output than the usually applied weights. © Georg Thieme Verlag KG Stuttgart · New York.

  14. Multichannel display system with automatic sequential output of analog data

    International Nuclear Information System (INIS)

    Bykovskii, Yu.A.; Gruzinov, A.E.; Lagoda, V.B.

    1989-01-01

    The authors describe a device that, with maximum simplicity and autonomy, permits parallel data display from 16 measuring channels with automatic output to the screen of a storage oscilloscope in ∼ 50 μsec. The described device can be used to study the divergence characteristics of the ion component of plasma sources and in optical and x-ray spectroscopy of pulsed processes. Owing to its compactness and autonomy, the device can be located in the immediate vicinity of the detectors (for example, inside a vacuum chamber), which allows the number of vacuum electrical lead-ins and the induction level to be reduced

  15. Maximum vehicle cabin temperatures under different meteorological conditions

    Science.gov (United States)

    Grundstein, Andrew; Meentemeyer, Vernon; Dowd, John

    2009-05-01

    A variety of studies have documented the dangerously high temperatures that may occur within the passenger compartment (cabin) of cars under clear sky conditions, even at relatively low ambient air temperatures. Our study, however, is the first to examine cabin temperatures under variable weather conditions. It uses a unique maximum vehicle cabin temperature dataset in conjunction with directly comparable ambient air temperature, solar radiation, and cloud cover data collected from April through August 2007 in Athens, GA. Maximum cabin temperatures, ranging from 41-76°C, varied considerably depending on the weather conditions and the time of year. Clear days had the highest cabin temperatures, with average values of 68°C in the summer and 61°C in the spring. Cloudy days in both the spring and summer were on average approximately 10°C cooler. Our findings indicate that even on cloudy days with lower ambient air temperatures, vehicle cabin temperatures may reach deadly levels. Additionally, two predictive models of maximum daily vehicle cabin temperatures were developed using commonly available meteorological data. One model uses maximum ambient air temperature and average daily solar radiation while the other uses cloud cover percentage as a surrogate for solar radiation. From these models, two maximum vehicle cabin temperature indices were developed to assess the level of danger. The models and indices may be useful for forecasting hazardous conditions, promoting public awareness, and to estimate past cabin temperatures for use in forensic analyses.

  16. Remarks on the maximum luminosity

    Science.gov (United States)

    Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon

    2018-04-01

    The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, L_P = c⁵/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 L_P. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.

  17. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
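    A schematic form of such an objective for a linear classifier f(x) = wᵀx is given below; the exact loss, weighting and entropy estimator used by the authors may differ.

    ```latex
    \min_{w}\ \sum_{i=1}^{n}\ell\!\left(y_i,\, w^{\top}x_i\right) + \lambda\,\lVert w\rVert_2^2 - \gamma\, I\!\left(w^{\top}x;\ y\right),
    \qquad
    I(f; y) = H(y) - H\!\left(y \mid f(x)\right)
    ```

    Here ℓ is a classification loss, λ penalizes classifier complexity, and γ weights the mutual-information term, which is modeled with entropy estimation and maximized by gradient descent in an iterative algorithm.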

  18. Scintillation counter, maximum gamma aspect

    International Nuclear Information System (INIS)

    Thumim, A.D.

    1975-01-01

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassemblable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  19. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  20. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log₂ k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and kⁿ pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.

  1. Inverted relativistic magnetron with a single axial output

    International Nuclear Information System (INIS)

    Ballard, W.P.; Earley, L.M.; Wharton, C.B.

    1986-01-01

    A twelve vane, 1 MV, S-band magnetron has been designed and tested. An inverted design was selected to minimize the parasitic axial electron losses. The stainless steel anode is approximately one wavelength long. One end is partially short-circuited to rf, while the other end has a mode transformer to couple the 3.16 GHz π-mode out into a TM01 circular waveguide. The magnetron has a loaded output Q of about 100. Operation at 1 MV, 0.31 T, 5 kA routinely produces approx. 150 MW peak rms and 100 MW average rms with pulse lengths adjustable from 5 to 70 ns. The microwave power pulse has a rise time of approx. 2 ns. The output power is diagnosed using four methods: calorimetry, two circular-waveguide directional couplers installed on the magnetron, two transmitting-receiving systems, and gaseous breakdown. Operation at other voltages and magnetic fields shows that the oscillation frequency is somewhat dependent on the magnetron current. Frequency changes of approx. 20 MHz/kA occur as the operating conditions are varied. A series of experiments varying the anode conductivity, the electron emission profile, and the output coupling transformer design showed that none of these significantly increased the output power. Therefore, we have concluded that this magnetron operates in saturation. Because of the anode lifetime and repeatability, this magnetron has the potential to be repetitively pulsed. 36 refs., 16 figs

  2. Fitting and benchmarking of Monte Carlo output parameters for iridium-192 high dose rate brachytherapy source

    International Nuclear Information System (INIS)

    Acquah, F.G.

    2011-01-01

    Brachytherapy, the use of radioactive sources for the treatment of tumours, is an important tool in radiation oncology. Accurate calculation of the dose delivered to malignant and normal tissues is a main responsibility of the medical physics staff. With the use of Treatment Planning System (TPS) computers now becoming standard practice in radiation oncology departments, independent calculations to verify the results of these commercial TPSs are an important part of a good quality management system for brachytherapy implants. There are inherent errors in the dose distributions produced by these TPSs due to their failure to account for heterogeneity in the calculation algorithms, and the Monte Carlo (MC) method seems to be the remedy for these corrections. In this study, a functional fit to MC output parameters was performed to reduce dose calculation uncertainty using the Matlab curve-fitting applications. This includes the modification of the AAPM TG-43 parameters to accommodate new developments for rapid brachytherapy dose rate calculation. Analytical computations were performed to hybridize the anisotropy function F(r,θ) and the radial dose function g(r) into a single new function f(r,θ) for the Nucletron microSelectron High Dose Rate 'new or v2' (mHDRv2) 192 Ir brachytherapy source. In order to minimize computation time and to improve the accuracy of manual calculations, the dosimetry function f(r,θ) used fewer parameters and formulas for the fit. Using the MC outputs as the standard, the percentage errors of the fits were calculated and used to evaluate the average and maximum uncertainties. Dose rate deviations between the MC data and the fit were also quantified as errors (E), which were minimal. These results show that the dosimetry parameters from this study are in good agreement with the MC output parameters and better than results reported in the literature. The work confirms a lot of promise in building robust
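    For orientation, the standard TG-43 dose rate formalism that the fit modifies can be written as below; the hybridization in the second line is a plausible reading of "a single new function f(r,θ)", not necessarily the exact fitted form used by the author.

    ```latex
    \dot{D}(r,\theta) = S_K\,\Lambda\,\frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\,g_L(r)\,F(r,\theta),
    \qquad
    f(r,\theta) \approx g_L(r)\,F(r,\theta)
    \;\Rightarrow\;
    \dot{D}(r,\theta) = S_K\,\Lambda\,\frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\,f(r,\theta)
    ```

    Here S_K is the air-kerma strength, Λ the dose rate constant, and G_L the line-source geometry function evaluated at the reference point (r₀ = 1 cm, θ₀ = 90°).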

  3. Operation of a quasi-optical gyrotron with a gaussian output coupler

    Energy Technology Data Exchange (ETDEWEB)

    Hogge, J.P.; Tran, T.M.; Paris, P.J.; Tran, M.Q. [Ecole Polytechnique Federale, Lausanne (Switzerland). Centre de Recherche en Physique des Plasma (CRPP)

    1996-03-01

    The operation of a 92 GHz quasi-optical gyrotron (QOG) having a resonator formed by a spherical mirror and a diffraction grating placed in -1 order Littrow mount is presented. A power of 150 kW with a gaussian output pattern was measured. The gaussian content in the output was 98% with less than 1% of depolarization. By optimizing the magnetic field at fixed frequency, a maximum efficiency of 15% was reached. (author) 12 figs., 2 tabs., 22 refs.

  4. Maximum power point tracker based on fuzzy logic

    International Nuclear Information System (INIS)

    Daoud, A.; Midoun, A.

    2006-01-01

    Solar energy is used as the power source in photovoltaic power systems, and an intelligent power management system is important to obtain the maximum power from the limited solar panels. With the changing of the sun's illumination, due to variation of the angle of incidence of solar radiation and of the temperature of the panels, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source. In the case of a solar panel power source, the maximum power point varies as a result of changes in its electrical characteristics, which in turn are functions of radiation dose, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic and then controlling the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These MPPT techniques can be classified into three main categories: lookup table methods, hill climbing methods and computational methods. The techniques vary according to their degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (a hill climbing technique) is commonly used due to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method only requires the linguistic control rules for the maximum power point; a mathematical model is not required, and therefore the implementation of this control method in a real control system is easy. In this paper, we present a simple robust MPPT using fuzzy set theory where the hardware consists of the microchip's microcontroller unit control card and
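    For comparison with the fuzzy approach, the hill-climbing baseline mentioned above can be sketched in a few lines; read_panel() and set_voltage() are hypothetical interfaces to the array and converter, and the step size and demo panel model are purely illustrative.

    ```python
    def perturb_and_observe(read_panel, set_voltage, v_start=10.0, dv=0.2, steps=100):
        """Minimal perturb-and-observe (hill-climbing) MPPT loop: perturb the voltage
        reference, observe the power, and reverse direction whenever the power drops."""
        v_ref = v_start
        set_voltage(v_ref)
        v, i = read_panel()
        p_prev = v * i
        for _ in range(steps):
            v_ref += dv
            set_voltage(v_ref)
            v, i = read_panel()
            p = v * i
            if p < p_prev:          # power dropped: reverse the perturbation direction
                dv = -dv
            p_prev = p
        return v_ref

    # Demo with a simulated panel having a linear I-V curve: P(v) = 5v - 0.2v^2, peak near 12.5 V.
    state = {"v": 0.0}
    panel = lambda: (state["v"], 5.0 - 0.2 * state["v"])
    setter = lambda v: state.update(v=v)
    print(f"tracked voltage: {perturb_and_observe(panel, setter):.2f} V")
    ```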

  5. DIST/AVC Out-Put Definition.

    Science.gov (United States)

    Wilkinson, Gene L.

    The first stage of development of a management information system for DIST/AVC (Division of Instructional Technology/Audio-Visual Center) is the definition of out-put units. Some constraints on the definition of output units are: 1) they should reflect goals of the organization, 2) they should reflect organizational structure and procedures, and…

  6. Fast multi-output relevance vector regression

    OpenAIRE

    Ha, Youngmin

    2017-01-01

    This paper aims to decrease the time complexity of multi-output relevance vector regression from O(VM^3) to O(V^3+M^3), where V is the number of output dimensions, M is the number of basis functions, and V

  7. Early-Transition Output Decline Revisited

    Directory of Open Access Journals (Sweden)

    Crt Kostevc

    2016-05-01

    Full Text Available In this paper we revisit the issue of the aggregate output decline that took place in the early transition period. We propose an alternative explanation of the output decline that is applicable to Central and Eastern European countries. In the first part of the paper we develop a simple dynamic general equilibrium model that builds on work by Gomulka and Lane (2001). In particular, we consider price liberalization, interpreted as the elimination of distortionary taxation, as a trigger of the output decline. We show that price liberalization, in interaction with heterogeneous adjustment costs and non-employment benefits, leads to an aggregate output decline and a surge in wage inequality. While these patterns are consistent with the actual dynamics in CEE countries, this model cannot generate output decline in all sectors; instead, sectors that were initially taxed even exhibit output growth. Thus, in the second part we consider an alternative general equilibrium model with only one production sector, two types of labor, and a distortion in the form of wage compression during the socialist era. The trigger for labor mobility and consequently output decline is wage liberalization. Assuming heterogeneity of workers in terms of adjustment costs and non-employment benefits can explain output decline in all industries.

  8. Assessing the psychological factors predicting workers' output ...

    African Journals Online (AJOL)

    The study investigated job security, communication skills, interpersonal relationship and emotional intelligence as correlates of workers' output among local government employees in Oyo State. The research adopted descriptive design of an expose facto type. The research instruments used includes Workers' output scale, ...

  9. Survey of the variation in ultraviolet outputs from ultraviolet A sunbeds in Bradford.

    Science.gov (United States)

    Wright, A L; Hart, G C; Kernohan, E; Twentyman, G

    1996-02-01

    Concerns have been expressed for some time regarding the growth of the cosmetic suntanning industry and the potential harmful effects resulting from these exposures. Recently published work has appeared to confirm a link between sunbed use and skin cancer. A previous survey in Oxford some years ago demonstrated significant output variations, and we have attempted to extend and update that work. Ultraviolet A, UVB and blue-light output measurements were made on 50 sunbeds using a radiometer fitted with broad-band filters and detectors. A number of irradiance measurements were made on each sunbed within each waveband so that the uniformity of the output could also be assessed. UVA outputs varied by a factor of 3, with a mean of 13.5 mW/cm2; UVB outputs varied by a factor of 60, with a mean of 19.2 microW/cm2; and blue-light outputs varied by a factor of 2.5, with a mean of 2.5 mW/cm2. Outputs fall on average to 80% of the central value at either end of the sunbed. Facial units in sunbeds ranged in output between 18 and 45 mW/cm2. Output uniformity shows wide variation, with 16% of the sunbeds having an axial coefficient of variation > 10%. UVB output is highly tube-specific. Eyewear used in sunbeds should also protect against blue light.

  10. S-Band AlGaN/GaN Power Amplifier MMIC with over 20 Watt Output Power

    NARCIS (Netherlands)

    Heijningen, M. van; Visser, G.C.; Wuerfl, J.; Vliet, F.E. van

    2008-01-01

    This paper presents the design of an S-band HPA MMIC in AlGaN/GaN CPW technology for radar TR-module application. The trade-offs of using an MMIC solution versus discrete power devices are discussed. The MMIC shows a maximum output power of 38 Watt at 37% Power Added Efficiency at 3.1 GHz. An output

  11. Configuration of LWR fuel enrichment or burnup yielding maximum power

    International Nuclear Information System (INIS)

    Bartosek, V.; Zalesky, K.

    1976-01-01

    An analysis is given of the spatial distribution of fuel burnup and enrichment in a light-water lattice of given dimensions with slightly enriched uranium, at which the maximum output is achieved. It is based on the spatial solution of neutron flux using a one-group diffusion model in which linear dependence may be expected of the fission cross section and the material buckling parameter on the fuel burnup and enrichment. Two problem constraints are considered, i.e., the neutron flux value and the specific output value. For the former the optimum core configuration remains qualitatively unchanged for any reflector thickness, for the latter the cases of a reactor with and without reflector must be distinguished. (Z.M.)

  12. Maximum time-dependent space-charge limited diode currents

    Energy Technology Data Exchange (ETDEWEB)

    Griswold, M. E. [Tri Alpha Energy, Inc., Rancho Santa Margarita, California 92688 (United States); Fisch, N. J. [Princeton Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 (United States)

    2016-01-15

    Recent papers claim that a one dimensional (1D) diode with a time-varying voltage drop can transmit current densities that exceed the Child-Langmuir (CL) limit on average, apparently contradicting a previous conjecture that there is a hard limit on the average current density across any 1D diode, as t → ∞, that is equal to the CL limit. However, these claims rest on a different definition of the CL limit, namely, a comparison between the time-averaged diode current and the adiabatic average of the expression for the stationary CL limit. If the current were considered as a function of the maximum applied voltage, rather than the average applied voltage, then the original conjecture would not have been refuted.
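    For reference, the stationary Child-Langmuir limit against which the time-averaged current density is compared is, for a planar gap of spacing d and applied voltage V,

    ```latex
    J_{\mathrm{CL}} = \frac{4\,\varepsilon_0}{9}\sqrt{\frac{2e}{m_e}}\;\frac{V^{3/2}}{d^{2}}
    ```

    The distinction drawn in the abstract is whether V in this expression is taken as the maximum or the time-averaged applied voltage.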

  13. High Power Tm3+-Doped Fiber Lasers Tuned by a Variable Reflective Output Coupler

    Directory of Open Access Journals (Sweden)

    Yulong Tang

    2008-01-01

    Full Text Available Wide wavelength tuning by a variable reflective output coupler is demonstrated in high-power double-clad Tm3+-doped silica fiber lasers diode-pumped at ∼790 nm. Varying the output coupling from 96% to 5%, the laser wavelength is tuned over a range of 106 nm from 1949 to 2055 nm. The output power exceeds 20 W over a 90-nm range and the maximum output power is 32 W at 1949 nm for 51 W launched pump power, corresponding to a slope efficiency of ∼70%. Assisted by different fiber lengths, the tuning range is expanded to 240 nm, from 1866 to 2107 nm, with the output power larger than 10 W.

  14. Influence of deleting some of the inputs and outputs on efficiency status of units in DEA

    Directory of Open Access Journals (Sweden)

    Abbas ali Noora

    2013-06-01

    Full Text Available One of the important issues in data envelopment analysis (DEA) is sensitivity analysis. This study discusses deleting some of the inputs and outputs and investigates its influence on the efficiency status of Decision Making Units (DMUs). To this end, some models are presented for recognizing this influence on efficient DMUs. Model 2 (Model 3) in section 3 investigates the influence of deleting the i-th input (r-th output) on an efficient DMU. Thereafter these models are extended to deleting multiple inputs and outputs. Furthermore, a model is presented for recognizing the maximum number of inputs and (or) outputs, from among specified inputs and outputs, which can be deleted while an efficient DMU preserves its efficiency. Finally, the presented models are applied to a set of DMUs and the results are reported.

  15. System convergence in transport models: algorithms efficiency and output uncertainty

    DEFF Research Database (Denmark)

    Rich, Jeppe; Nielsen, Otto Anker

    2015-01-01

    of this paper is to analyse convergence performance for the external loop and to illustrate how an improper linkage between the converging parts can lead to substantial uncertainty in the final output. Although this loop is crucial for the performance of large-scale transport models it has not been analysed...... much in the literature. The paper first investigates several variants of the Method of Successive Averages (MSA) by simulation experiments on a toy-network. It is found that the simulation experiments produce support for a weighted MSA approach. The weighted MSA approach is then analysed on large......-scale in the Danish National Transport Model (DNTM). It is revealed that system convergence requires that either demand or supply is without random noise but not both. In that case, if MSA is applied to the model output with random noise, it will converge effectively as the random effects are gradually dampened...
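    A minimal sketch of the averaging scheme discussed above is given below; the mapping F and starting point are placeholders, and the weighting rule (weights proportional to k^p) is a generic choice, not necessarily the exact weighted MSA variant analysed in the paper.

    ```python
    import numpy as np

    def msa_fixed_point(F, x0, iters=200, weight_power=0.0):
        """Method of Successive Averages for a fixed point x = F(x), e.g. the demand/supply
        loop of a transport model. weight_power = 0 gives plain MSA (step 1/k); a positive
        weight_power gives a weighted MSA that puts more weight on recent iterates."""
        x = np.asarray(x0, dtype=float)
        weight_sum = 0.0
        for k in range(1, iters + 1):
            w = k ** weight_power
            weight_sum += w
            step = w / weight_sum              # reduces to 1/k when weight_power = 0
            x = (1.0 - step) * x + step * F(x)
        return x

    # Example: a contraction toward [1, 2] converges under both plain and weighted MSA.
    print(msa_fixed_point(lambda x: 0.5 * x + np.array([0.5, 1.0]), [0.0, 0.0]))
    ```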

  16. Maximum entropy and Bayesian methods

    International Nuclear Information System (INIS)

    Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.

    1992-01-01

    Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come

  17. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  18. Quantitative Analysis Method of Output Loss due to Restriction for Grid-connected PV Systems

    Science.gov (United States)

    Ueda, Yuzuru; Oozeki, Takashi; Kurokawa, Kosuke; Itou, Takamitsu; Kitamura, Kiyoyuki; Miyamoto, Yusuke; Yokota, Masaharu; Sugihara, Hiroyuki

    The voltage of a power distribution line will increase due to reverse power flow from grid-connected PV systems. In the case of high-density grid connection, the voltage rise will be larger than for a stand-alone grid-connected system. To prevent overvoltage of the power distribution line, a PV system's output will be restricted if the voltage of the power distribution line is close to the upper limit of the control range. Because of this interaction, the output loss will be larger in the high-density case. This research developed a quantitative analysis method for PV system output and losses to clarify the behavior of grid-connected PV systems. All the measured data are classified into loss factors using 1-minute averages of 1-second data instead of the typical 1-hour average. The operating point on the I-V curve is estimated to quantify the loss due to the output restriction, using module temperature, array output voltage, array output current and solar irradiance. As a result, the loss due to output restriction is successfully quantified and the behavior of output restriction is clarified.

  19. On the average complexity of sphere decoding in lattice space-time coded multiple-input multiple-output channel

    KAUST Repository

    Abediseid, Walid

    2012-01-01

    complexity of sphere decoding for the quasi-static, lattice space-time (LAST) coded MIMO channel. Specifically, we derive an upper bound on the tail distribution of the decoder's computational complexity. We show that when the computational complexity exceeds

  20. Performance analysis and comparison of an Atkinson cycle coupled to variable temperature heat reservoirs under maximum power and maximum power density conditions

    International Nuclear Information System (INIS)

    Wang, P.-Y.; Hou, S.-S.

    2005-01-01

    In this paper, performance analysis and comparison based on the maximum power and maximum power density conditions have been conducted for an Atkinson cycle coupled to variable temperature heat reservoirs. The Atkinson cycle is internally reversible but externally irreversible, since there is external irreversibility of heat transfer during the processes of constant volume heat addition and constant pressure heat rejection. This study is based purely on classical thermodynamic analysis methodology. It should be especially emphasized that all the results and conclusions are based on classical thermodynamics. The power density, defined as the ratio of power output to maximum specific volume in the cycle, is taken as the optimization objective because it considers the effects of engine size as related to investment cost. The results show that an engine design based on maximum power density with constant effectiveness of the hot and cold side heat exchangers or constant inlet temperature ratio of the heat reservoirs will have smaller size but higher efficiency, compression ratio, expansion ratio and maximum temperature than one based on maximum power. From the view points of engine size and thermal efficiency, an engine design based on maximum power density is better than one based on maximum power conditions. However, due to the higher compression ratio and maximum temperature in the cycle, an engine design based on maximum power density conditions requires tougher materials for engine construction than one based on maximum power conditions

  1. Maximum entropy principal for transportation

    International Nuclear Information System (INIS)

    Bilich, F.; Da Silva, R.

    2008-01-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
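    For reference, in the standard constraint-based formulation to which the dependence formulation is shown to be equivalent, maximizing trip-pattern entropy subject to origin, destination and cost constraints yields the familiar doubly constrained gravity form (written here generically; the dependence-based model replaces these explicit constraints with dependence coefficients):

    ```latex
    T_{ij} = A_i\,O_i\,B_j\,D_j\,e^{-\beta c_{ij}},
    \qquad
    A_i = \Big[\sum_j B_j D_j e^{-\beta c_{ij}}\Big]^{-1},
    \quad
    B_j = \Big[\sum_i A_i O_i e^{-\beta c_{ij}}\Big]^{-1}
    ```

    Here O_i and D_j are the trip ends, c_ij the travel cost between origin i and destination j, and β the cost-sensitivity parameter.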

  2. Predicting Output Power for Nearshore Wave Energy Harvesting

    Directory of Open Access Journals (Sweden)

    Henock Mamo Deberneh

    2018-04-01

    Full Text Available Energy harvested from a Wave Energy Converter (WEC varies greatly with the location of its installation. Determining an optimal location that can result in maximum output power is therefore critical. In this paper, we present a novel approach to predicting the output power of a nearshore WEC by characterizing ocean waves using floating buoys. We monitored the movement of the buoys using an Arduino-based data collection module, including a gyro-accelerometer sensor and a wireless transceiver. The collected data were utilized to train and test prediction models. The models were developed using machine learning algorithms: SVM, RF and ANN. The results of the experiments showed that measurements from the data collection module can yield a reliable predictor of output power. Furthermore, we found that the predictors work better when the regressors are combined with a classifier. The accuracy of the proposed prediction model suggests that it could be extremely useful in both locating optimal placement for wave energy harvesting plants and designing the shape of the buoys used by them.
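    A minimal sketch of the classifier-plus-regressor pipeline described above is given below, using random-forest models from scikit-learn; the features and targets are synthetic placeholders standing in for the buoy gyro-accelerometer statistics and the measured WEC output power.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Placeholder feature matrix (e.g. statistics of heave acceleration and pitch/roll rates)
    # and a synthetic output-power target; real data would come from the buoy module.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))
    power = np.maximum(0.0, 2.0 * X[:, 0] + X[:, 1] + rng.normal(scale=0.3, size=500))

    X_tr, X_te, y_tr, y_te = train_test_split(X, power, random_state=0)

    # Classifier first (producing power or not), then a regressor on the productive cases,
    # mirroring the finding that the regressors work better when combined with a classifier.
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr > 0)
    reg = RandomForestRegressor(random_state=0).fit(X_tr[y_tr > 0], y_tr[y_tr > 0])

    pred = np.where(clf.predict(X_te), reg.predict(X_te), 0.0)
    print("RMSE:", np.sqrt(np.mean((pred - y_te) ** 2)))
    ```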

  3. Latitudinal Change of Tropical Cyclone Maximum Intensity in the Western North Pacific

    OpenAIRE

    Choi, Jae-Won; Cha, Yumi; Kim, Hae-Dong; Kang, Sung-Dae

    2016-01-01

    This study obtained the latitude at which tropical cyclones (TCs) reach their maximum intensity and applied statistical change-point analysis to the time series of the annual average values. The analysis found that the latitude of TC maximum intensity has increased since 1999. To investigate the reason behind this phenomenon, the difference between the average latitude over 1999-2013 and the average over 1977-1998 was analyzed. In a difference of 500 hPa streamline between the two ...

  4. Last Glacial Maximum Salinity Reconstruction

    Science.gov (United States)

    Homola, K.; Spivack, A. J.

    2016-12-01

    It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of its apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10⁻⁶ g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3×10⁻⁶ g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl⁻ and SO₄²⁻) and cations (Na⁺, Mg²⁺, Ca²⁺, and K⁺) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO₄²⁻/Cl⁻ and Mg²⁺/Na⁺, and 0.4% for Ca²⁺/Na⁺ and K⁺/Na⁺. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl⁻, and used to calculate HCO₃⁻ and CO₃²⁻. Apparent partial molar densities in seawater were

  5. Maximum Parsimony on Phylogenetic networks

    Science.gov (United States)

    2012-01-01

    Background Phylogenetic networks are generalizations of phylogenetic trees, that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network; and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as Sankoff and Fitch algorithms extend naturally for networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched with the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
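    The tree version of the algorithm that the paper's network heuristics generalize (Fitch's small-parsimony count for a single character under unit substitution costs) can be sketched as follows; the node names and input format are illustrative.

    ```python
    def fitch_parsimony(tree, leaf_states, root="root"):
        """Fitch's small-parsimony count for one character on a rooted binary tree.
        `tree` maps each internal node to its pair of children; `leaf_states` maps each
        leaf to its observed state. Returns the minimum number of state changes."""
        changes = 0

        def state_set(node):
            nonlocal changes
            if node in leaf_states:                  # leaf: singleton state set
                return {leaf_states[node]}
            left, right = tree[node]
            a, b = state_set(left), state_set(right)
            if a & b:                                # children agree on some state
                return a & b
            changes += 1                             # otherwise charge one substitution
            return a | b

        state_set(root)
        return changes

    # Four leaves with states A, A, C, G on the tree ((l1,l2),(l3,l4)): minimum of 2 changes.
    tree = {"root": ("n1", "n2"), "n1": ("l1", "l2"), "n2": ("l3", "l4")}
    print(fitch_parsimony(tree, {"l1": "A", "l2": "A", "l3": "C", "l4": "G"}))  # -> 2
    ```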

  6. Study of the Output Characteristics of a 90 kJ Filippov-Type Plasma Focus

    Science.gov (United States)

    Sadat Kiai, S. M.; Talaei, A.; Adlparvar, S.; Zirak, A.; Elahi, M.; Safarian, A.; Farhangi, S.; Alhooie, S.; Dabirzadeh, A. A.; Khalaj, M. M.; Mahlooji, M. S.; Talaei, M.; KaKaei, S.; Sheibani, S.; Kashani, A.; Zahedi, F.

    2010-08-01

    The output characteristics of the Filippov-type plasma focus "Dena" (288 μF, 25 kV, 90 kJ) are numerically investigated by considering the voltage, current, current derivative, and maximum current as functions of the capacitor bank energy at constant argon gas pressure, and compared with experiment. It is shown that an increase in the bank energy leads to an increase in the maximum current and a decrease in the pinch time.

  7. Optimum detection for extracting maximum information from symmetric qubit sets

    International Nuclear Information System (INIS)

    Mizuno, Jun; Fujiwara, Mikio; Sasaki, Masahide; Akiba, Makoto; Kawanishi, Tetsuya; Barnett, Stephen M.

    2002-01-01

    We demonstrate a class of optimum detection strategies for extracting the maximum information from sets of equiprobable real symmetric qubit states of a single photon. These optimum strategies have been predicted by Sasaki et al. [Phys. Rev. A 59, 3325 (1999)]. The peculiar aspect is that the detections with at least three outputs suffice for optimum extraction of information regardless of the number of signal elements. The cases of ternary (or trine), quinary, and septenary polarization signals are studied where a standard von Neumann detection (a projection onto a binary orthogonal basis) fails to access the maximum information. Our experiments demonstrate that it is possible with present technologies to attain about 96% of the theoretical limit

  8. Maximum heat flux in boiling in a large volume

    International Nuclear Information System (INIS)

    Bergmans, Dzh.

    1976-01-01

    Relationships are derived for the maximum heat flux q_max without relying on the assumptions of a critical vapor velocity corresponding to zero growth rate and of a planar interface. A Helmholtz instability analysis of the vapor column has been carried out to this end. The results of this examination have been used to find the maximum heat flux for spherical, cylindrical and flat-plate heaters. The conventional hydrodynamic theory was found to be incapable of producing a satisfactory explanation of q_max for small heaters. The occurrence of q_max in the present case can be explained by inadequate removal of the vapor output from the heater (by the force of gravity for cylindrical heaters and by surface tension for the spherical ones). In the case of the flat-plate heater, the q_max value can be explained with the help of the hydrodynamic theory

  9. Maximum Power Point Tracking Based on Sliding Mode Control

    Directory of Open Access Journals (Sweden)

    Nimrod Vázquez

    2015-01-01

    Full Text Available Solar panels, which have become a good choice, are used to generate and supply electricity in commercial and residential applications. The generated power starts with the solar cells, which have a complex relationship between solar irradiation, temperature, and output power. For this reason, tracking of the maximum power point is required. Traditionally, this has been done by considering only the current and voltage conditions at the photovoltaic panel; however, temperature also influences the process. In this paper the voltage, current, and temperature in the PV system are considered to be part of a sliding surface for the proposed maximum power point tracking; this means a sliding mode controller is applied. The obtained results show a good dynamic response, in contrast to traditional schemes, which are based only on computational algorithms. A traditional MPPT algorithm was added in order to ensure a low steady-state error.

  10. Farm-Level Determinants of output Commercialization:

    African Journals Online (AJOL)

    MARC-AB

    Ethiopian Institute of Agricultural Research. Abstract: … haricot bean output commercialization among smallholder farmers in moisture-stress areas of … the American Agricultural Economics Association Annual Meeting, Orlando, Florida, July.

  11. Endogenous Money, Output and Prices in India

    OpenAIRE

    Das, Rituparna

    2009-01-01

    This paper proposes to quantify the macroeconometric relationships among the variables broad money, lending by banks, price, and output in India using simultaneous equations system keeping in view the issue of endogeneity.

  12. Scintillation camera with improved output means

    International Nuclear Information System (INIS)

    Lange, K.; Wiesen, E.J.; Woronowicz, E.M.

    1978-01-01

    In a scintillation camera system, the output pulse signals from an array of photomultiplier tubes are coupled to the inputs of individual preamplifiers. The preamplifier output signals are coupled to circuitry for computing the x and y coordinates of the scintillations. A cathode ray oscilloscope is used to form an image corresponding with the pattern in which radiation is emitted by a body. Means for improving the uniformity and resolution of the scintillations are provided. The means comprise biasing means coupled to the outputs of selected preamplifiers so that output signals below a predetermined amplitude are not suppressed and signals falling within increasing ranges of amplitudes are increasingly suppressed. In effect, the biasing means make the preamplifiers non-linear for selected signal levels

  13. Input-output rearrangement of isolated converters

    DEFF Research Database (Denmark)

    Madsen, Mickey Pierre; Kovacevic, Milovan; Mønster, Jakob Døllner

    2015-01-01

    This paper presents a new way of rearranging the input and output of isolated converters. The new arrangement possesses several advantages, such as increased voltage range, higher power handling capability, reduced voltage stress and improved efficiency, for applications where galvanic isolation

  14. Multiple Input - Multiple Output (MIMO) SAR

    Data.gov (United States)

    National Aeronautics and Space Administration — This effort will research and implement advanced Multiple-Input Multiple-Output (MIMO) Synthetic Aperture Radar (SAR) techniques which have the potential to improve...

  15. Evaluating the output stability of LINAC with a reference detector using 3D water phantom

    International Nuclear Information System (INIS)

    Shimozato, Tomohiro; Kojima, Tomo; Sakamoto, Masataka; Hata, Yuji; Sasaki, Koji; Araki, Noriyuki

    2013-01-01

    We report the discovery of abnormal fluctuations in the output obtained when measuring a water phantom and adjustments that reduce these outliers. Using a newly developed three-dimensional scanning water phantom system, we obtained the depth dose and off-axis dose ratio required for the beam data of a medical linear accelerator (LINAC). The field and reference detectors were set such that the measured values could be viewed in real time. We confirmed the scanning data using the field detector and the change in the output using the reference detector while measuring by using the water phantom. Prior to output adjustment of the LINAC, we observed output abnormalities as high as 18.4%. With optimization of accelerator conditions, the average of the output fluctuation width was reduced to less than ±0.5%. Through real-time graphing of reference detector measurements during measurement of field detector, we were able to rapidly identify abnormal fluctuations. Although beam data collected during radiation treatment planning are corrected for output fluctuations, it is possible that sudden abnormal fluctuations actually occur in the output. Therefore, the equipment should be tested for output fluctuations at least once a year. Even after minimization of fluctuations, we recommend determining the potential dose administered to the human body taking into account the width of the output fluctuation. (author)

  16. TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R; Amy DuPont, A; Robert Kurzeja, R; Matt Parker, M

    2007-11-12

    Mixing depth is an important quantity in the determination of air pollution concentrations. Fireweather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth using potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated mixing depth daily maximum at the SRS over an extended period of time (4.75 years) derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows for differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days in which special balloon soundings were released are also discussed.

  17. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations
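    The abstract does not reproduce the governing model; a standard dimensionless form of the nonlinearity-managed nonlinear Schroedinger equation (our notation, with g(t) the periodically modulated nonlinearity coefficient) is:

```latex
% Standard dimensionless form assumed here; g(t) models Feshbach-resonance management.
i\,\partial_t u + \tfrac{1}{2}\,\partial_x^2 u + g(t)\,|u|^{2} u = 0,
\qquad g(t+T) = g(t).
```

    Averaging over the fast period T then yields the local averaged equation referred to in the abstract.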

  18. Output power distributions of mobile radio base stations based on network measurements

    International Nuclear Information System (INIS)

    Colombi, D; Thors, B; Persson, T; Törnevik, C; Wirén, N; Larsson, L-E

    2013-01-01

    In this work output power distributions of mobile radio base stations have been analyzed for 2G and 3G telecommunication systems. The approach is based on measurements in selected networks using performance surveillance tools part of the network Operational Support System (OSS). For the 3G network considered, direct measurements of output power levels were possible, while for the 2G networks, output power levels were estimated from measurements of traffic volumes. Both voice and data services were included in the investigation. Measurements were conducted for large geographical areas, to ensure good overall statistics, as well as for smaller areas to investigate the impact of different environments. For high traffic hours, the 90th percentile of the averaged output power was found to be below 65% and 45% of the available output power for the 2G and 3G systems, respectively.
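    The headline statistic in this record is a percentile of time-averaged output power expressed as a fraction of the available power. A small numpy sketch of that calculation (the sample distribution below is purely illustrative, not network data):

```python
import numpy as np

# Hypothetical time-averaged output power samples for one base station,
# expressed as a fraction of the maximum available output power.
avg_power_fraction = np.random.default_rng(1).beta(2.0, 5.0, size=10_000)

p90 = np.percentile(avg_power_fraction, 90)
print(f"90th percentile of averaged output power: {p90:.0%} of available power")
```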

  19. Output power distributions of mobile radio base stations based on network measurements

    Science.gov (United States)

    Colombi, D.; Thors, B.; Persson, T.; Wirén, N.; Larsson, L.-E.; Törnevik, C.

    2013-04-01

    In this work output power distributions of mobile radio base stations have been analyzed for 2G and 3G telecommunication systems. The approach is based on measurements in selected networks using performance surveillance tools part of the network Operational Support System (OSS). For the 3G network considered, direct measurements of output power levels were possible, while for the 2G networks, output power levels were estimated from measurements of traffic volumes. Both voice and data services were included in the investigation. Measurements were conducted for large geographical areas, to ensure good overall statistics, as well as for smaller areas to investigate the impact of different environments. For high traffic hours, the 90th percentile of the averaged output power was found to be below 65% and 45% of the available output power for the 2G and 3G systems, respectively.

  20. Measuring power output intermittency and unsteady loading in a micro wind farm model

    OpenAIRE

    Bossuyt, Juliaan; Howland, Michael; Meneveau, Charles; Meyers, Johan

    2016-01-01

    In this study porous disc models are used as turbine models for a wind-tunnel wind farm experiment, allowing the measurement of the power output, thrust force and spatially averaged incoming velocity for every turbine. The model's capabilities for studying unsteady turbine loading, wind farm power output intermittency and spatio-temporal correlations between wind turbines are demonstrated on an aligned wind farm consisting of 100 wind turbine models.

  1. DSCOVR Magnetometer Level 2 One Minute Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data

  2. DSCOVR Magnetometer Level 2 One Second Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data

  3. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  4. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  5. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    Multi-model ensemble (MME) averaging is considered the most reliable approach for simulating both present-day and future climates. It has been a primary reference for drawing conclusions in major coordinated studies, e.g. the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes at tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost. The ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions in the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling
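    The core ERF step is an element-wise average of the initial and boundary conditions from several GCMs. A schematic numpy version, assuming the fields have already been regridded to a shared grid and calendar (the arrays below are hypothetical), might be:

```python
import numpy as np

def ensemble_reconstructed_forcing(gcm_fields):
    """Average the same boundary-condition field from several GCMs.

    gcm_fields: list of arrays with shape (time, lat, lon), assumed to be
    regridded to a common grid and time axis beforehand.
    """
    stacked = np.stack(gcm_fields, axis=0)
    return stacked.mean(axis=0)  # single ensemble-averaged forcing field

# Example with three hypothetical GCM temperature fields (K).
fields = [np.full((4, 10, 10), 290.0 + k) for k in range(3)]
ibc = ensemble_reconstructed_forcing(fields)
print(ibc.shape, ibc.mean())
```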

  6. Effect of argon plasma treatment on the output performance of triboelectric nanogenerator

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Guang-Gui, E-mail: ggcheng@ujs.edu.cn [Research Center of Micro/Nano Science and Technology, Jiangsu University, Zhenjiang (China); Jiangsu Collaborative Innovation Center of Photovoltaic Science and Engineering, Changzhou University, Changzhou (China); Jiang, Shi-Yu; Li, Kai [Research Center of Micro/Nano Science and Technology, Jiangsu University, Zhenjiang (China); Zhang, Zhong-Qiang [Research Center of Micro/Nano Science and Technology, Jiangsu University, Zhenjiang (China); Jiangsu Collaborative Innovation Center of Photovoltaic Science and Engineering, Changzhou University, Changzhou (China); Wang, Ying; Yuan, Ning-Yi [Jiangsu Collaborative Innovation Center of Photovoltaic Science and Engineering, Changzhou University, Changzhou (China); Ding, Jian-Ning, E-mail: dingjn@ujs.edu.cn [Research Center of Micro/Nano Science and Technology, Jiangsu University, Zhenjiang (China); Jiangsu Collaborative Innovation Center of Photovoltaic Science and Engineering, Changzhou University, Changzhou (China); Zhang, Wei [Research Center of Micro/Nano Science and Technology, Jiangsu University, Zhenjiang (China)

    2017-08-01

    Highlights: • Two kinds of PDMS films were prepared by spin coating. • The PDMS surface was plasma treated at different powers and durations. • The output performance of the TENG was significantly enhanced by plasma treatment. • The plasma treatment effect is transient; the output declines with storage time. - Abstract: The physical and chemical properties of the polymer surface play a great role in the output performance of a triboelectric nanogenerator (TENG). A specific texture on the polymer surface can enlarge the contact area and enhance the power output performance of the TENG. In this paper, polydimethylsiloxane (PDMS) films with smooth surfaces and with micro pillar arrays on the surface were prepared. The surfaces were treated by argon plasma before testing their output performance. By changing treatment parameters such as treatment time and plasma power, surfaces with different roughness were obtained and their relationship to output performance was examined. The electrical output of the assembled TENG for each specimen showed that argon plasma treatment has a significant etching effect on the PDMS surface and greatly strengthens its output performance. The average surface roughness of the PDMS film increases with etching time from 5 min to 15 min when the argon plasma power is 60 W. Nevertheless, the average surface roughness is inversely proportional to the treatment time at a power of 90 W. When treated at 90 W for 5 min, many uniform micro pillars appeared on both PDMS surfaces, and the output performance of the TENG with the plasma-treated smooth surface was 2.6 times larger than before treatment. The output voltage increases from 42 V to 72 V, and the short-circuit current increases from 4.2 μA to 8.3 μA after plasma treatment of the micro pillar array surface. However, the effect of plasma treatment is transient due to the hydrophobic recovery of the Ar plasma treated PDMS surface; both output voltage and short-circuit current decrease significantly after 3

  7. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined models from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA 1, our approach produces representative structures of similar RMSD quality, but with much fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction 2, which
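    As a rough sketch of the idea (not the authors' implementation, which also enforces stereochemistry to remove clashes), a Metropolis Monte Carlo loop driven by a harmonic pseudo-energy toward the averaged coordinates could look like this:

```python
import numpy as np

def refine_to_average(start, target, k=1.0, step=0.05, n_iter=20000, temp=1.0, seed=0):
    """Toy Metropolis Monte Carlo that drives coordinates toward an averaged
    structure using a harmonic pseudo-energy  E = k * sum(|x - target|^2).
    A real refinement would add stereochemical terms to avoid clashes."""
    rng = np.random.default_rng(seed)
    x = start.copy()
    energy = k * np.sum((x - target) ** 2)
    for _ in range(n_iter):
        i = rng.integers(len(x))
        trial = x.copy()
        trial[i] += rng.normal(scale=step, size=3)      # perturb one residue
        e_trial = k * np.sum((trial - target) ** 2)
        if e_trial < energy or rng.random() < np.exp((energy - e_trial) / temp):
            x, energy = trial, e_trial                  # Metropolis acceptance
    return x

target = np.random.default_rng(1).normal(size=(50, 3))  # averaged C-alpha coordinates
start = target + np.random.default_rng(2).normal(scale=2.0, size=(50, 3))
refined = refine_to_average(start, target)
print(np.sqrt(((refined - target) ** 2).sum(axis=1).mean()))  # RMSD to the average
```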

  8. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test P LOG P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  9. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...

  10. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  11. A novel maximum power point tracking method for PV systems using fuzzy cognitive networks (FCN)

    Energy Technology Data Exchange (ETDEWEB)

    Karlis, A.D. [Electrical Machines Laboratory, Department of Electrical & Computer Engineering, Democritus University of Thrace, V. Sofias 12, 67100 Xanthi (Greece); Kottas, T.L.; Boutalis, Y.S. [Automatic Control Systems Laboratory, Department of Electrical & Computer Engineering, Democritus University of Thrace, V. Sofias 12, 67100 Xanthi (Greece)

    2007-03-15

    Maximum power point trackers (MPPTs) play an important role in photovoltaic (PV) power systems because they maximize the power output from a PV system for a given set of conditions, and therefore maximize the array efficiency. This paper presents a novel MPPT method based on fuzzy cognitive networks (FCN). The new method gives a good maximum power operation of any PV array under different conditions such as changing insolation and temperature. The numerical results show the effectiveness of the proposed algorithm. (author)

  12. A Comparative Frequency Analysis of Maximum Daily Rainfall for a SE Asian Region under Current and Future Climate Conditions

    Directory of Open Access Journals (Sweden)

    Velautham Daksiya

    2017-01-01

    The impact of changing climate on the frequency of daily rainfall extremes in Jakarta, Indonesia, is analysed and quantified. The study used three different models to assess the changes in rainfall characteristics. The first method involves the use of the weather generator LARS-WG to quantify changes between historical and future daily rainfall maxima. The second approach consists of statistically downscaling general circulation model (GCM) output based on historical empirical relationships between GCM output and station rainfall. Lastly, the study employed recent statistically downscaled global gridded rainfall projections to characterize the impact of climate change on rainfall structure. Both annual and seasonal rainfall extremes are studied. The results show significant changes in annual maximum daily rainfall, with an average increase as high as 20% in the 100-year return period daily rainfall. The uncertainty arising from the use of different GCMs was found to be much larger than the uncertainty from the emission scenarios. Furthermore, the annual and wet-season analyses exhibit similar behaviour, with increased future rainfall, but the dry season is not consistent across the models. The GCM uncertainty is larger in the dry season compared to the annual and wet-season analyses.
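    The 100-year return level quoted in this record comes from fitting an extreme-value distribution to annual maximum daily rainfall. A compact scipy sketch using a Gumbel fit (the series below is made up, and the paper's own choice of distribution may differ):

```python
import numpy as np
from scipy import stats

# Hypothetical annual maximum daily rainfall series (mm) for one station.
annual_max = np.array([98, 120, 87, 143, 110, 131, 102, 156, 94, 125,
                       118, 139, 107, 149, 91, 133, 115, 128, 104, 146])

loc, scale = stats.gumbel_r.fit(annual_max)
# Daily rainfall with a 100-year return period: exceedance probability 1/100.
r100 = stats.gumbel_r.isf(1.0 / 100.0, loc=loc, scale=scale)
print(f"100-year return level: {r100:.1f} mm")
# Refitting on downscaled future maxima and comparing r100 values gives the
# percentage change reported in studies like this one.
```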

  13. Residual gravimetric method to measure nebulizer output.

    Science.gov (United States)

    Vecellio, Laurent; Grimbert, Daniel; Bordenave, Joelle; Benoit, Guy; Furet, Yves; Fauroux, Brigitte; Boissinot, Eric; De Monte, Michele; Lemarié, Etienne; Diot, Patrice

    2004-01-01

    The aim of this study was to assess a residual gravimetric method, based on weighing dry filters, for measuring the aerosol output of nebulizers. This residual gravimetric method was compared to assay methods based on spectrophotometric measurement of terbutaline (Bricanyl, Astra Zeneca, France), high-performance liquid chromatography (HPLC) measurement of tobramycin (Tobi, Chiron, U.S.A.), and electrochemical measurement of NaF (as defined by the European standard). Two breath-enhanced jet nebulizers, one standard jet nebulizer, and one ultrasonic nebulizer were tested. Output by the residual gravimetric method was calculated by weighing the filters before and after aerosol collection and filter drying, corrected by the proportion of drug contained in the total solute mass. Output by the electrochemical, spectrophotometric, and HPLC methods was determined by assaying the drug extracted from the filter. The results demonstrated a strong correlation between the residual gravimetric method (x axis) and the assay methods (y axis) in terms of drug mass output (y = 1.00x - 0.02, r² = 0.99, n = 27). We conclude that a residual gravimetric method based on dry filters, when validated for a particular agent, is an accurate way of measuring aerosol output.
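    The residual gravimetric calculation itself is a one-liner: the dry-filter mass gain times the drug fraction of the solute. A minimal sketch with made-up numbers:

```python
def aerosol_drug_output(filter_mass_before_g, filter_mass_after_g, drug_fraction):
    """Residual gravimetric estimate of drug mass output (grams).

    drug_fraction is the proportion of drug in the total solute mass of the
    nebulized solution; filters are weighed dry before and after collection.
    """
    solute_mass = filter_mass_after_g - filter_mass_before_g
    return solute_mass * drug_fraction

# Example: 45 mg of dry deposit collected, drug is 60% of the solute mass.
print(aerosol_drug_output(0.2150, 0.2600, 0.60))  # ~0.027 g of drug
```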

  14. Estimation of international output-energy relation. Effects of alternative output measures

    International Nuclear Information System (INIS)

    Shrestha, R.M.

    2000-01-01

    This paper analyzes the output-energy relationship with alternative measures of output and energy. Our analysis rejects the hypothesis of non-diminishing returns to energy consumption when GDP at purchasing power parities is used as the output measure, unlike the case with GNP at market exchange rates. This finding also holds when energy input includes the usage of both commercial and traditional fuels. 13 refs

  15. From Static Output Feedback to Structured Robust Static Output Feedback: A Survey

    OpenAIRE

    Sadabadi, Mahdieh; Peaucelle, Dimitri

    2016-01-01

    This paper reviews the vast literature on static output feedback design for linear time-invariant systems including classical results and recent developments. In particular, we focus on static output feedback synthesis with performance specifications, structured static output feedback, and robustness. The paper provides a comprehensive review on existing design approaches including iterative linear matrix inequalities heuristics, linear matrix inequalities with rank constraints, methods with ...

  16. General and Local: Averaged k-Dependence Bayesian Classifiers

    Directory of Open Access Journals (Sweden)

    Limin Wang

    2015-06-01

    The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although the k-dependence Bayesian (KDB) classifier can be constructed at arbitrary points (values of k) along the attribute dependence spectrum, it cannot identify the changes of interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, the averaged k-dependence Bayesian (AKDB) classifier, averages the output of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California Irvine (UCI) showed that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree-augmented naive Bayes (TAN), averaged one-dependence estimators (AODE), and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.
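    The final AKDB step described above is simply an average of the class posteriors produced by the global and local KDB models. A toy sketch of that combination step (the KDB models themselves are assumed trained elsewhere; the posteriors below are invented):

```python
import numpy as np

def average_kdb_predict(proba_global, proba_local):
    """AKDB-style combination (a sketch): average the class-probability outputs
    of a global KDB model and a local KDB model, then take the arg-max class."""
    avg = (np.asarray(proba_global) + np.asarray(proba_local)) / 2.0
    return avg.argmax(axis=1), avg

# Hypothetical posteriors for three test instances and two classes.
p_global = [[0.7, 0.3], [0.4, 0.6], [0.55, 0.45]]
p_local  = [[0.6, 0.4], [0.2, 0.8], [0.35, 0.65]]
labels, avg = average_kdb_predict(p_global, p_local)
print(labels)  # [0 1 1]
```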

  17. 12 CFR 702.105 - Weighted-average life of investments.

    Science.gov (United States)

    2010-01-01

    ... investment funds. (1) For investments in registered investment companies (e.g., mutual funds) and collective investment funds, the weighted-average life is defined as the maximum weighted-average life disclosed, directly or indirectly, in the prospectus or trust instrument; (2) For investments in money market funds...

  18. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure the receiver function in the time domain.
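    The Toeplitz/Levinson step in this record amounts to solving the normal equations of a prediction-error filter from an autocorrelation sequence. A brief sketch using scipy's Toeplitz solver in place of an explicit Levinson recursion (the autocorrelation values are illustrative):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def prediction_error_filter(autocorr, order):
    """Solve the symmetric Toeplitz normal equations R a = -r for an order-p
    prediction-error filter; scipy's solver stands in for the Levinson
    recursion described in the abstract."""
    r = np.asarray(autocorr, dtype=float)
    col = r[:order]                 # first column of the Toeplitz matrix
    rhs = -r[1:order + 1]
    a = solve_toeplitz((col, col), rhs)
    return np.concatenate(([1.0], a))   # [1, a1, ..., ap]

# Hypothetical autocorrelation sequence of a radial seismogram component.
acf = np.array([1.0, 0.6, 0.3, 0.1, 0.02, -0.01])
print(prediction_error_filter(acf, order=3))
```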

  19. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    Science.gov (United States)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  20. Kumaraswamy autoregressive moving average models for double bounded environmental data

    Science.gov (United States)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to environmental real data is presented and discussed.

  1. Problems in Modelling Charge Output Accelerometers

    Directory of Open Access Journals (Sweden)

    Tomczyk Krzysztof

    2016-12-01

    The paper presents major issues associated with the problem of modelling charge output accelerometers. The presented solutions are based on the weighted least squares (WLS) method using a transformation of the complex frequency response of the sensors. The main assumptions of the WLS method and a mathematical model of charge output accelerometers are presented in the first two sections of this paper. In the next sections, applying the WLS method to estimation of the accelerometer model parameters is discussed and the associated uncertainties are determined. Finally, the results of modelling a PCB357B73 charge output accelerometer are analysed in the last section of this paper. All calculations were executed using the MathCad software program. The main stages of these calculations are presented in Appendices A−E.

  2. Integrated solar thermal Brayton cycles with either one or two regenerative heat exchangers for maximum power output

    International Nuclear Information System (INIS)

    Jansen, E.; Bello-Ochende, T.; Meyer, J.P.

    2015-01-01

    The main objective of this paper is to optimise the open-air solar-thermal Brayton cycle by considering the implementation of the second law of thermodynamics and how it relates to the design of the heat exchanging components within it. These components included one or more regenerators (in the form of cross-flow heat exchangers) and the receiver of a parabolic dish concentrator where the system heat was absorbed. The generation of entropy was considered as it was associated with the destruction of exergy or available work. The dimensions of some components were used to optimise the cycles under investigation. EGM (Entropy Generation Minimisation) was employed to optimise the system parameters by considering their influence on the total generation of entropy (destruction of exergy). Various assumptions and constraints were considered and discussed. The total entropy generation rate and irreversibilities were determined by considering the individual components and ducts of the system, as well as their respective inlet and outlet conditions. The major system parameters were evaluated as functions of the mass flow rate to allow for a proper discussion of the system performance. The performances of both systems were investigated, and characteristics were listed for both. Finally, a comparison is made to shed light on the differences in performance. - Highlights: • Implementation of the second law of thermodynamics. • Design of heat exchanging and collecting equipment. • Utilisation of Entropy Generation Minimization. • Presentation of a multi-objective optimization. • Raise efficiency with more regeneration

  3. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  4. Reliability and Energy Output of Bifacial Modules

    Energy Technology Data Exchange (ETDEWEB)

    Van Aken, B.B.; Jansen, M.J.; Dekker, N.J.J. [ECN Solar Energy, Petten (Netherlands)

    2013-06-15

    Although flash tests under standard test conditions yield lower power due to the transmittance of the back sheet, bifacial modules are expected to outperform their monofacial equivalents in terms of yearly energy output in the field. We compare flash tests for bifacial modules with and without a light scattering panel directly behind the modules: 3% more power output is obtained. We also report on the damp-heat reliability of modules with a transparent back sheet. Finally we present the results of an outdoor study comparing modules with a transparent back sheet and modules with state-of-the-art AR coating on the front glass.

  5. Explaining output volatility: The case of taxation

    DEFF Research Database (Denmark)

    Posch, Olaf

    This paper studies the effects of taxation on output volatility in OECD countries to shed light on the sources of observed heterogeneity over time and across countries. To this end, we derive tax effects on macro aggregates in a stochastic neoclassical model. As a result, taxes are shown to affect the second moment of output growth rates without (long-run) effects on the first moment. Taking the model to the data, we exploit observed heterogeneity patterns to estimate effects of tax rates on macro volatility using panel estimation, explicitly modeling the unobserved variance process. We find strong positive effects.

  6. The light output of BGO crystals

    International Nuclear Information System (INIS)

    Gong Zhufang; Ma Wengan; Lin Zhirong; Wang Zhaomin; Xu Zhizong; Fan Yangmei

    1987-01-01

    The dependence of light output on the surface treatment of BGO crystals has been tested. The results of experiments and Monte Carlo calculation indicate that for a tapered BGO crystal the best way to improve the uniformity and the energy resolution and to obtain higher light output is to roughen the surface coupled to the photomultiplier tube. The authors also observed that different wrapping methods can affect its uniformity and resolution. Monte Carlo calculation indicates that the higher of the 'double peaks' is the photoelectron peak of γ rays.

  7. Weakest solar wind of the space age and the current 'MINI' solar maximum

    International Nuclear Information System (INIS)

    McComas, D. J.; Angold, N.; Elliott, H. A.; Livadiotis, G.; Schwadron, N. A.; Smith, C. W.; Skoug, R. M.

    2013-01-01

    The last solar minimum, which extended into 2009, was especially deep and prolonged. Since then, sunspot activity has gone through a very small peak while the heliospheric current sheet achieved large tilt angles similar to prior solar maxima. The solar wind fluid properties and interplanetary magnetic field (IMF) have declined through the prolonged solar minimum and continued to be low through the current mini solar maximum. Compared to values typically observed from the mid-1970s through the mid-1990s, the following proton parameters are lower on average from 2009 through day 79 of 2013: solar wind speed and beta (∼11%), temperature (∼40%), thermal pressure (∼55%), mass flux (∼34%), momentum flux or dynamic pressure (∼41%), energy flux (∼48%), IMF magnitude (∼31%), and radial component of the IMF (∼38%). These results have important implications for the solar wind's interaction with planetary magnetospheres and the heliosphere's interaction with the local interstellar medium, with the proton dynamic pressure remaining near the lowest values observed in the space age: ∼1.4 nPa, compared to ∼2.4 nPa typically observed from the mid-1970s through the mid-1990s. The combination of lower magnetic flux emergence from the Sun (carried out in the solar wind as the IMF) and associated low power in the solar wind points to the causal relationship between them. Our results indicate that the low solar wind output is driven by an internal trend in the Sun that is longer than the ∼11 yr solar cycle, and they suggest that this current weak solar maximum is driven by the same trend.

  8. Analysis of output trends from Varian 2100C/D and 600C/D accelerators

    International Nuclear Information System (INIS)

    Grattan, M W D; Hounsell, A R

    2011-01-01

    Analysis of Varian linear accelerator output trends is reported. Two groups, consisting of four matched Varian 2100C/D and four matched Varian 600C/D accelerators, with different designs of monitor chamber, have been investigated and the data acquired from regular calibrated ion chamber/electrometer measurements of the output performance of the eight accelerators analysed. The trend of machine output with time, having removed the effect of adjusting the monitor chamber response, was compared on a monthly and annual basis for monitor chambers with ages ranging between 1 year and 7 years. The results indicate that the response is generally consistent within each set of accelerators with different monitor chamber designs. Those used in a Varian 600C/D machine result in a reduction in measured output over time, with an average monthly reduction of 0.35 ± 0.09% over the course of the first 4 years of use. The chambers used in a 2100C/D accelerator result in an increase in measured output over time, with an average monthly increase of 0.26 ± 0.09% over the course of the first 4 years of use. The output increase then reduces towards the end of this period of time, with the average monthly change falling to -0.03 ± 0.02% for the following 3 years. The output response trend was similar for all clinical energies used on the 2100C/D accelerators--6, 15 MV x-ray beams, and 4, 6, 9, 12, 16 and 20 MeV electron beams. By tracking these changes it has been possible to predict the response over time to allow appropriate adjustments in monitor chamber response to maintain a measured accelerator output within tolerance and give confidence in performance. It has also provided data to indicate the need for planned preventative intervention and indicate if the monitor chamber response is behaving as expected. (note)

  9. Average subentropy, coherence and entanglement of random mixed quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy which is attained for the maximally mixed state as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because of the fact that average coherence of random mixed states is bounded uniformly, however, the average coherence of random pure states increases with the increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrary small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.

  10. Effect of tank geometry on its average performance

    Science.gov (United States)

    Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.

    2018-03-01

    The mathematical model of non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride is presented in the paper. The average productivity, heat exchange area, and filling time of tanks of various volumes with smooth inner walls are calculated as a function of their height-to-radius (H/R) ratio, as are the average productivity, filling degree, and filling time of a horizontally ribbed tank with volume 6×10^-2 m3 as the central hole diameter of the ribs is changed. It has been shown that increasing the H/R ratio of tanks with smooth inner walls up to the limiting values significantly increases the average tank productivity and reduces the filling time. Increasing the H/R ratio of a tank with volume 1.0 m3 to the limiting values (in comparison with the standard tank having H/R equal to 3.49) augments tank productivity by 23.5% and the heat exchange area by 20%. Besides, we have demonstrated that the maximum average productivity and minimum filling time are reached for the tank with volume 6×10^-2 m3 having a central rib hole diameter of 6.4×10^-2 m.

  11. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  12. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  13. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic

  14. Should the average tax rate be marginalized?

    Czech Academy of Sciences Publication Activity Database

    Feldman, N. E.; Katuščák, Peter

    -, no. 304 (2006), pp. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords: tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  15. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...

  16. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  17. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  18. Analysis of the Environmental Efficiency of the Chinese Transportation Sector Using an Undesirable Output Slacks-Based Measure Data Envelopment Analysis Model

    Directory of Open Access Journals (Sweden)

    Xiaowei Song

    2015-07-01

    Many countries are attempting to reduce energy consumption and CO2 emissions while increasing the productivity and efficiency of their industries. An undesirable-output-oriented data envelopment analysis (DEA) model with slacks-based measure (SBM) was used to evaluate the changes in the environmental efficiency of the transportation sector in 30 Chinese provinces (municipalities and autonomous regions) between 2003 and 2012. The potential for decreasing CO2 emissions and energy saving was also assessed. Transportation was found to be inefficient in most of the provinces and the average environmental efficiency was low (0.45). The overall average efficiency reached a maximum in 2005 and continually decreased until a minimum was reached in 2009; since then, it has increased. In general, transportation is more efficient in eastern than in central or western China. A sensitivity analysis was also carried out on the input and output indicators. Based on these findings, some policies are proposed to improve the environmental efficiency of the transportation sector in China.

  19. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    We present a new iterative method for the calculation of the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We validate the model with examples and simulation results obtained using the NS2 simulator.
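    As a rough illustration of the kind of iterative calculation described (not the paper's exact model), the sketch below distributes link capacity among flows by weight, caps each flow at its offered input rate, and redistributes the leftover capacity:

```python
def wfq_average_bandwidth(link_capacity_bps, weights, input_rates_bps):
    """Sketch of an iterative WFQ bandwidth-share calculation: flows that need
    less than their weighted fair share keep only their input rate, and the
    leftover capacity is redistributed among the remaining flows by weight."""
    share = dict.fromkeys(weights, 0.0)
    active = set(weights)
    capacity = float(link_capacity_bps)
    while active:
        total_w = sum(weights[f] for f in active)
        fair = {f: capacity * weights[f] / total_w for f in active}
        bounded = {f for f in active if input_rates_bps[f] <= fair[f]}
        if not bounded:
            share.update(fair)          # remaining flows split by weight
            break
        for f in bounded:
            share[f] = input_rates_bps[f]
            capacity -= input_rates_bps[f]
        active -= bounded
    return share

weights = {"voice": 4, "video": 3, "data": 1}
rates = {"voice": 2e6, "video": 8e6, "data": 9e6}   # offered load (bit/s)
print(wfq_average_bandwidth(10e6, weights, rates))
```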

  20. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    Marinaro, A.; Scarpetta, Q.

    1984-01-01

    An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion it is pointed out that the proposed procedure for computing time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles

  1. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  2. Muscular outputs during dynamic bench press under stable versus unstable conditions.

    Science.gov (United States)

    Koshida, Sentaro; Urabe, Yukio; Miyashita, Koji; Iwai, Kanzunori; Kagimori, Aya

    2008-09-01

    Previous studies have suggested that resistance training exercise under unstable conditions decreases isometric force output, yet little is known about its influence on muscular outputs during dynamic movement. The objective of this study was to investigate the effect of an unstable condition on power, force, and velocity outputs during the bench press. Twenty male collegiate athletes (mean age, 21.3 +/- 1.5 years; mean height, 167.7 +/- 7.7 cm; mean weight, 75.9 +/- 17.5 kg) participated in this study. Each subject attempted 3 sets of single bench presses with 50% of 1 repetition maximum (1RM) under a stable condition with a flat bench and an unstable condition with a Swiss ball. Acceleration data were obtained with an accelerometer attached to the center of the barbell shaft, and peak outputs of power, force, and velocity were computed. Although a significant loss of the peak outputs was found under the unstable condition, the reduction rates of the power, force, and velocity outputs were small compared with previous findings. Such small reduction rates of muscular outputs may not compromise the training effect. Prospective studies are necessary to confirm whether resistance training under an unstable condition permits the improvement of dynamic performance and trunk stability.

  3. A maximum power point tracking scheme for a 1kw stand-alone ...

    African Journals Online (AJOL)

    A maximum power point tracking scheme for a 1kw stand-alone solar energy based power supply. ... Nigerian Journal of Technology ... A method for efficiently maximizing the output power of a solar panel supplying a load or battery bus under ...

  4. Output formatting in Apple-Soft Basic

    International Nuclear Information System (INIS)

    Navale, A.S.

    1987-01-01

    Personal computers are being used extensively in various fields. BASIC is a very popular and widely used language on personal computers. The Apple computer is one of the popular machines used for scientific and engineering applications. Presenting output from computers in a neat and easy-to-read form is very important. Languages like FORTRAN have the utility statement 'FORMAT', which takes care of formatting the output in a user-defined form. In some versions of BASIC a PRINT USING facility is available, but it is not as powerful as the FORTRAN 'FORMAT' statement. Applesoft BASIC does not have even this PRINT USING command. Programmers have to write their own program segments to handle output formatting in Applesoft BASIC. Generally, such user-written programs are of limited use as they cannot be used easily with other programs. A general-purpose and easily transportable subroutine in Applesoft BASIC is presented here for handling output formatting in a user-defined structure. The subroutine is nearly as powerful as the FORMAT statement in FORTRAN. It can also be used in other versions of BASIC with very few modifications. 3 tables, 4 refs. (author)

  5. On output regulation for linear systems

    NARCIS (Netherlands)

    Saberi, Ali; Stoorvogel, Antonie Arij; Sannuti, Peddapullaiah

    For both continuous- and discrete-time systems, we revisit the output regulation problem for linear systems. We generalize the problem formulation in order • to expand the class of reference or disturbance signals, • to utilize the derivative or feedforward information of reference signals whenever

  6. Fast Output-sensitive Matrix Multiplication

    DEFF Research Database (Denmark)

    Jacob, Riko; Stöckel, Morten

    2015-01-01

    We consider the problem of multiplying two $U \times U$ matrices $A$ and $C$ of elements from a field $\mathbb{F}$. We present a new randomized algorithm that can use the known fast square matrix multiplication algorithms to perform fewer arithmetic operations than the current state of the art for output...

  7. Predicting Color Output of Additive Manufactured Parts

    DEFF Research Database (Denmark)

    Eiríksson, Eyþór Rúnar; Pedersen, David Bue; Aanæs, Henrik

    2015-01-01

    In this paper we address the colorimetric performance of a multicolor additive manufacturing process. A method on how to measure and characterize color performance of said process is presented. Furthermore, a method on predicting the color output is demonstrated, allowing for previsualization...

  8. Multiple output timing and trigger generator

    Energy Technology Data Exchange (ETDEWEB)

    Wheat, Robert M. [Los Alamos National Laboratory; Dale, Gregory E [Los Alamos National Laboratory

    2009-01-01

    In support of the development of a multiple stage pulse modulator at the Los Alamos National Laboratory, we have developed a first-generation, multiple output timing and trigger generator. Exploiting Commercial Off The Shelf (COTS) Micro Controller Units (MCUs), the timing and trigger generator provides 32 independent outputs with a timing resolution of about 500 ns. The timing and trigger generator system comprises two MCU boards and a single PC. One of the MCU boards performs the functions of timing and signal generation (the timing controller), while the second MCU board accepts commands from the PC and provides the timing instructions to the timing controller. The PC provides the user interface for adjusting the on and off timing for each of the output signals. This system provides 32 output or timing signals which can be pre-programmed to be in an on or off state for each of 64 time steps. The width or duration of each of the 64 time steps is programmable from 2 µs to 2.5 ms with a minimum time resolution of 500 ns. The repetition rate of the programmed pulse train is limited only by the time duration of the programmed event. This paper describes the design and function of the timing and trigger generator system and software, including test results and measurements.

  9. What shapes output of policy reform?

    DEFF Research Database (Denmark)

    Carlsen, Kirsten

    This thesis deals with the factors shaping forest policy output during the implementation stage and bases its main message on empirical findings from the forestry sector in Ghana. Policy and institutional factors are important underlying causes of deforestation, especially in the tropics. Fores...

  10. Monetary policy and regional output in Brazil

    Directory of Open Access Journals (Sweden)

    Rafael Rockenbach da Silva Guimarães

    2014-03-01

    This work presents an analysis of whether the effects of Brazilian monetary policy on regional outputs are symmetric. The strategy developed combines the techniques of principal component analysis (PCA), to decompose the variables that measure regional economic activity into common and region-specific components, and vector autoregressions (VAR), to observe the behavior of these variables in response to monetary policy shocks. The common component responds to monetary policy as expected. Additionally, the idiosyncratic components of the regions showed no impact of monetary policy. The main finding of this paper is that monetary policy responses on regional output are symmetrical when the regional output decomposition is performed, and asymmetrical when this decomposition is not performed. Therefore, performing the regional output decomposition corroborates the economic intuition that monetary policy has no impact on region-specific issues. Since monetary policy affects the common component of regional economic activity and does not impact its idiosyncratic components, it can be considered symmetrical.

  11. Comparison of cardiac output measurement techniques

    DEFF Research Database (Denmark)

    Espersen, K; Jensen, E W; Rosenborg, D

    1995-01-01

    Cardiac output measured simultaneously by thermodilution (TD), transcutaneous suprasternal ultrasonic Doppler (DOP), CO2 rebreathing (CR) and the direct Fick method (FI) was compared in eleven healthy subjects in a supine position (SU), a sitting position (SI), and during sitting exercise...

  12. Methodological concerns for determining power output in the jump squat.

    Science.gov (United States)

    Cormie, Prue; Deane, Russell; McBride, Jeffrey M

    2007-05-01

    The purpose of this study was to investigate the validity of power measurement techniques during the jump squat (JS) utilizing various combinations of force plate and linear position transducer (LPT) devices. Nine men with at least 6 months of prior resistance training experience participated in this acute investigation. One repetition maximums (1RM) in the squat were determined, followed by JS testing under 2 loading conditions (30% of 1RM [JS30] and 90% of 1RM [JS90]). Three different techniques were used simultaneously in data collection: (a) 1 linear position transducer (1-LPT); (b) 1 linear position transducer and a force plate (1-LPT + FP); and (c) 2 linear position transducers and a force plate (2-LPT + FP). Vertical velocity-, force-, and power-time curves were calculated for each lift using these methodologies and were compared. Peak force and peak power were overestimated by 1-LPT in both JS30 and JS90 compared with 2-LPT + FP and 1-LPT + FP. Power output in the jump squat therefore varies according to the measurement technique utilized. The 1-LPT methodology is not a valid means of determining power output in the jump squat. Furthermore, the 1-LPT + FP method may not accurately represent power output in free-weight movements that involve a significant amount of horizontal motion.

  13. Power output of field-based downhill mountain biking.

    Science.gov (United States)

    Hurst, Howard Thomas; Atkins, Stephen

    2006-10-01

    The purpose of this study was to assess the power output of field-based downhill mountain biking. Seventeen trained male downhill cyclists (age 27.1 +/- 5.1 years) competing nationally performed two timed runs of a measured downhill course. An SRM powermeter was used to simultaneously record power, cadence, and speed. Values were sampled at 1-s intervals. Heart rates were recorded at 5-s intervals using a Polar S710 heart rate monitor. Peak and mean power output were 834 +/- 129 W and 75 +/- 26 W respectively. Mean power accounted for only 9% of peak values. Paradoxically, mean heart rate was 168 +/- 9 beats x min(-1) (89% of age-predicted maximum heart rate). Mean cadence (27 +/- 5 rev x min(-1)) was significantly related to speed (r = 0.51). The poor relationships between power and run time and between cadence and run time suggest they are not essential pre-requisites to downhill mountain biking performance and indicate the importance of riding dynamics to overall performance.

  14. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
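    A stripped-down version of such robust averaging, assuming the repeated phase maps are stacked in a 3-D array with NaN marking voids (thresholds and data below are arbitrary), might be:

```python
import numpy as np

def robust_phase_average(phase_maps, max_invalid_fraction=0.05, sigma_clip=3.0):
    """Sketch of robust per-pixel averaging of repeated phase maps (NaN = void).
    Maps with too many invalid pixels are rejected outright; remaining outlier
    pixels are sigma-clipped before the final per-pixel mean/std are computed."""
    maps = np.asarray(phase_maps, dtype=float)
    invalid_fraction = np.isnan(maps).mean(axis=(1, 2))
    maps = maps[invalid_fraction <= max_invalid_fraction]   # drop defective maps

    mean = np.nanmean(maps, axis=0)
    std = np.nanstd(maps, axis=0)
    outliers = np.abs(maps - mean) > sigma_clip * std        # prune unreliable pixels
    maps = np.where(outliers, np.nan, maps)
    return np.nanmean(maps, axis=0), np.nanstd(maps, axis=0)

stack = np.random.default_rng(0).normal(size=(16, 64, 64))
stack[3, :32, :] = np.nan            # one map with a large void / unwrapping defect
avg_map, std_map = robust_phase_average(stack)
print(avg_map.shape, float(np.nanmean(std_map)))
```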

  15. Modeling of Maximum Power Point Tracking Controller for Solar Power System

    Directory of Open Access Journals (Sweden)

    Aryuanto Soetedjo

    2012-09-01

    Full Text Available In this paper, a Maximum Power Point Tracking (MPPT) controller for a solar power system is modeled using MATLAB Simulink. The model consists of a PV module, a buck converter, and an MPPT controller. The contribution of the work lies in the modeling of the buck converter, which allows the input voltage of the converter, i.e. the output voltage of the PV module, to be changed by varying the duty cycle, so that the maximum power point can be tracked when the environment changes. The simulation results show that the developed model performs well in tracking the maximum power point (MPP) of the PV module using the Perturb and Observe (P&O) algorithm.
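
    The Perturb and Observe rule named in the record can be sketched as a simple discrete update acting on a PV voltage reference that the buck converter's duty cycle then realizes. This is a generic, hypothetical sketch; the step size and variable names are assumptions and are not taken from the paper's Simulink model.

```python
def perturb_and_observe(v_meas, i_meas, v_ref, v_prev, p_prev, step=0.1):
    """One iteration of the Perturb and Observe (P&O) MPPT rule.
    The PV voltage reference (realized by adjusting the buck converter
    duty cycle) is nudged toward the maximum power point."""
    p_meas = v_meas * i_meas
    dP, dV = p_meas - p_prev, v_meas - v_prev
    if dP != 0 and dV != 0:
        # Climb the P-V curve: if the last perturbation raised the power,
        # keep moving in the same direction, otherwise reverse it.
        v_ref += step if (dP > 0) == (dV > 0) else -step
    return v_ref, v_meas, p_meas
```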

  16. MAXIMUM POWER POINT TRACKING SYSTEM FOR PHOTOVOLTAIC STATION: A REVIEW

    Directory of Open Access Journals (Sweden)

    I. Elzein

    2015-01-01

    Full Text Available In recent years there has been growing attention towards the use of renewable energy sources. Among them, solar energy is one of the most promising green energy resources due to its environmental sustainability and inexhaustibility. However, photovoltaic systems (PhV) suffer from high equipment cost and low efficiency. Moreover, the solar cell V-I characteristic is nonlinear and varies with irradiation and temperature. In general, there is a unique point of PhV operation, called the Maximum Power Point (MPP), at which the PV system operates with maximum efficiency and produces its maximum output power. The location of the MPP is not known in advance, but can be located either through calculation models or by search algorithms. Therefore MPPT techniques are important to maintain the PV array's high efficiency. Many different techniques for MPPT are discussed. This review paper will hopefully serve as a convenient tool for future work in PhV power conversion.

  17. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)

    2011-04-07

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak, U-bar{sub P}, the average, U-bar, the effective, U{sub eff} or the maximum peak, U{sub P} tube voltage. This work proposes a method for determination of the PPV from measurements with a kV-meter that measures the average U-bar or the average peak, U-bar{sub p} voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak k{sub PPV,kVp} and the average k{sub PPV,Uav} conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference PPV values and those calculated according to the proposed method were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous base to determine the PPV with kV-meters from U-bar{sub p} and U-bar measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
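
    The record states that a kV-meter reading is converted to the PPV by applying a calibration coefficient and a ripple-dependent conversion factor. Assuming the simple multiplicative model this implies (the actual regression coefficients are given in the paper and are not reproduced here), the conversion is a one-liner; the numbers in the example are hypothetical.

```python
def ppv_from_reading(reading_kv, calib_coeff, conversion_factor):
    """Convert an average or average-peak kV-meter reading to the practical
    peak voltage (PPV), assuming the multiplicative model implied above:
    PPV = calibration coefficient x conversion factor x reading."""
    return calib_coeff * conversion_factor * reading_kv

# Hypothetical example: a 78.4 kV average-mode reading, a unity calibration
# coefficient and a conversion factor of 1.03 give a PPV of about 80.8 kV.
print(round(ppv_from_reading(78.4, 1.00, 1.03), 1))
```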

  18. Novel maximum-margin training algorithms for supervised neural networks.

    Science.gov (United States)

    Ludwig, Oswaldo; Nunes, Urbano

    2010-06-01

    This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in the case of support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors while overcoming the complexity involved in solving a constrained optimization problem, as is usual in SVM training. In fact, all the training methods proposed in this paper have time and space complexities O(N), while usual SVM training methods have time complexity O(N^3) and space complexity O(N^2), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. This algorithm aims to create an MLP hidden-layer output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stopping criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by

  19. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

    Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: Superconducting fault current limiters (SFCL) can reduce short-circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is changed and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.

  20. Asynchronous Gossip for Averaging and Spectral Ranking

    Science.gov (United States)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
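
    As a point of reference for the discussion above, the classical randomized gossip iteration (whose naive asynchronous version is the one the authors show may fail to reach the exact average) can be sketched in a few lines; the reinforcement-learning correction itself is not reproduced here.

```python
import random

def pairwise_gossip(values, n_rounds=20000, seed=0):
    """Classical randomized gossip: repeatedly pick two nodes at random and
    replace both of their values by the pairwise mean.  On a connected graph
    (here, effectively the complete graph) this converges to the global average."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(n_rounds):
        i, j = rng.sample(range(len(x)), 2)
        x[i] = x[j] = 0.5 * (x[i] + x[j])
    return x

print(pairwise_gossip([1.0, 2.0, 3.0, 10.0]))  # every entry approaches 4.0
```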

  1. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of the radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses statistically averaged descriptions of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  2. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).
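
    The ray-averaging step can be sketched as follows. This is a simplified, hypothetical version of the procedure (a single ray centre and nearest-angle sampling instead of the paper's two ray centres and chord intersections), intended only to show how equiangular radial profiles are averaged within a foot-length group.

```python
import numpy as np

def average_radial_profile(outlines, centre, n_rays=72):
    """Average digitized outline curves by sampling the radial distance along
    equiangular rays from a common ray centre and averaging ray by ray.
    `outlines` is a list of (N, 2) arrays of digitized points from one
    foot-length group; returns the ray angles and the mean radial profile."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
    profiles = []
    for pts in outlines:
        rel = np.asarray(pts, dtype=float) - np.asarray(centre, dtype=float)
        r = np.hypot(rel[:, 0], rel[:, 1])
        theta = np.arctan2(rel[:, 1], rel[:, 0]) % (2.0 * np.pi)
        # For each ray, use the digitized point closest in angle to that ray.
        diff = np.abs((theta[None, :] - angles[:, None] + np.pi)
                      % (2.0 * np.pi) - np.pi)
        profiles.append(r[diff.argmin(axis=1)])
    return angles, np.mean(profiles, axis=0)
```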

  3. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  4. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
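
    The decade-long moving average underlying the two records above is straightforward to compute; the sketch below uses synthetic data purely to illustrate the 11-year trailing window and the correlation step, and has no connection to the actual book or misery-index data.

```python
import numpy as np

def trailing_moving_average(series, window=11):
    """Trailing moving average over the previous `window` years (11 above)."""
    return np.convolve(series, np.ones(window) / window, mode="valid")

# Synthetic illustration only.
rng = np.random.default_rng(0)
econ_misery = rng.normal(size=100).cumsum()              # stand-in annual index
decade_avg = trailing_moving_average(econ_misery, 11)    # one value per year end
literary = decade_avg + rng.normal(scale=0.1, size=decade_avg.size)
print(round(np.corrcoef(literary, decade_avg)[0, 1], 3))
```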

  5. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion

  6. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and that they preclude analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  7. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    The receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression that gives the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter in strong oceanic turbulence. Also, the effect of the receiver aperture diameter on the aperture averaging factor is presented in strong oceanic turbulence.

  8. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that the cosmological parameters evaluated on the smoothed spatial domain B-bar obey Ω-bar{sub m}{sup B-bar} + Ω-bar{sub R}{sup B-bar} + Ω-bar{sub Λ}{sup B-bar} + Ω-bar{sub Q}{sup B-bar} = 1, where Ω-bar{sub m}{sup B-bar}, Ω-bar{sub R}{sup B-bar} and Ω-bar{sub Λ}{sup B-bar} correspond to the standard Friedmannian parameters, while Ω-bar{sub Q}{sup B-bar} is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias

  9. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  10. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  11. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...

  12. Self-mode-locking operation of a diode-end-pumped Tm:YAP laser with watt-level output power

    Science.gov (United States)

    Zhang, Su; Zhang, Xinlu; Huang, Jinjer; Wang, Tianhan; Dai, Junfeng; Dong, Guangzong

    2018-03-01

    We report on a high power continuous wave (CW) self-mode-locked Tm:YAP laser pumped by a 792 nm laser diode. Without any additional mode-locking elements in the cavity, stable and self-starting mode-locking operation has been realized. The threshold pump power of the CW self-mode-locked Tm:YAP laser is only 5.4 W. The maximum average output power is as high as 1.65 W at the pump power of 12 W, with the repetition frequency of 468 MHz and the center wavelength of 1943 nm. To the best of our knowledge, this is the first CW self-mode-locked Tm:YAP laser. The experiment results show that the Tm:YAP crystal is a promising gain medium for realizing the high power self-mode-locking operation at 2 µm.

  13. Dynamic performance of maximum power point tracking circuits using sinusoidal extremum seeking control for photovoltaic generation

    Science.gov (United States)

    Leyva, R.; Artillan, P.; Cabal, C.; Estibals, B.; Alonso, C.

    2011-04-01

    The article studies the dynamic performance of a family of maximum power point tracking circuits used for photovoltaic generation. It revisits the sinusoidal extremum seeking control (ESC) technique which can be considered as a particular subgroup of the Perturb and Observe algorithms. The sinusoidal ESC technique consists of adding a small sinusoidal disturbance to the input and processing the perturbed output to drive the operating point at its maximum. The output processing involves a synchronous multiplication and a filtering stage. The filter instance determines the dynamic performance of the MPPT based on sinusoidal ESC principle. The approach uses the well-known root-locus method to give insight about damping degree and settlement time of maximum-seeking waveforms. This article shows the transient waveforms in three different filter instances to illustrate the approach. Finally, an experimental prototype corroborates the dynamic analysis.
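
    A discrete-time sketch of the sinusoidal ESC loop described above (dither injection, synchronous multiplication, filtering, integration) is given below for a toy concave power curve. All parameter values, the washout filter and the first-order low-pass are illustrative assumptions; the article's analysis concerns how the choice of this filter stage shapes the transient.

```python
import math

def pv_power(v):
    """Toy concave P-V curve with its maximum power point at v = 17 (illustrative)."""
    return 100.0 - (v - 17.0) ** 2

def sinusoidal_esc(v0=10.0, a=0.5, f_dither=5.0, k=1.0, dt=1e-3, steps=30000):
    """Sinusoidal extremum seeking: add a small dither a*sin(wt), remove the
    slow mean of the measured power (washout), multiply by the same sinusoid
    (synchronous demodulation), low-pass filter the product and integrate it
    to push the operating point toward the maximum."""
    omega = 2.0 * math.pi * f_dither
    v_hat = v0
    p_avg = pv_power(v0)      # slow average acting as a washout filter
    grad_est = 0.0
    for n in range(steps):
        dither = math.sin(omega * n * dt)
        p = pv_power(v_hat + a * dither)
        p_avg += 0.01 * (p - p_avg)            # washout: keep only fluctuations
        demod = (p - p_avg) * dither           # synchronous multiplication
        grad_est += 0.05 * (demod - grad_est)  # low-pass filtering stage
        v_hat += k * grad_est * dt             # integrator drives v to the MPP
    return v_hat

print(round(sinusoidal_esc(), 1))  # settles near the maximum at v = 17
```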

  14. Comprehensive performance analyses and optimization of the irreversible thermodynamic cycle engines (TCE) under maximum power (MP) and maximum power density (MPD) conditions

    International Nuclear Information System (INIS)

    Gonca, Guven; Sahin, Bahri; Ust, Yasin; Parlak, Adnan

    2015-01-01

    This paper presents comprehensive performance analyses and comparisons for air-standard irreversible thermodynamic cycle engines (TCE) based on the power output, power density, thermal efficiency, maximum dimensionless power output (MP), maximum dimensionless power density (MPD) and maximum thermal efficiency (MEF) criteria. Internal irreversibility of the cycles occurring during the irreversible-adiabatic processes is considered by using isentropic efficiencies of the compression and expansion processes. The performances of the cycles are obtained by using engine design parameters such as the isentropic temperature ratio of the compression process, pressure ratio, stroke ratio, cut-off ratio, Miller cycle ratio, exhaust temperature ratio, cycle temperature ratio and cycle pressure ratio. The effects of engine design parameters on the maximum and optimal performances are investigated. - Highlights: • Performance analyses are conducted for irreversible thermodynamic cycle engines. • Comprehensive computations are performed. • Maximum and optimum performances of the engines are shown. • The effects of design parameters on performance and power density are examined. • The results obtained may serve as guidelines for engine designers

  15. The impact of monetary policy on output and inflation in India: A frequency domain analysis

    Directory of Open Access Journals (Sweden)

    Salunkhe Bhavesh

    2017-01-01

    Full Text Available In the recent past, several attempts by the RBI to control inflation through tight monetary policy have ended up slowing the growth process, thereby provoking prolonged discussion among academics and policymakers about the efficacy of monetary policy in India. Against this backdrop, the present study attempts to estimate the causal relationship between monetary policy and its final objectives, i.e., growth and controlling inflation, in India. The methodological tool used is testing for Granger causality in the frequency domain as developed by Lemmens et al. (2008), and monetary policy has been proxied by the weighted average call money rate. In view of the fact that the output gap is one of the determinants of future inflation, an attempt has also been made to study the causal relationship between the output gap and inflation. The results of empirical estimation show a bi-directional causality between the policy rate and inflation and between the policy rate and output, which implies that the monetary authorities in India were equally concerned about inflation and output growth when determining policy. Furthermore, any attempt to control inflation affects output with the same or even greater magnitude than inflation, thereby damaging the growth process. The relationship between the output gap and inflation was found to be positive, as reported in earlier studies for India. Furthermore, the output gap causes inflation only in the short-to-medium run.

  16. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu

    2009-01-01

    boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

  17. Modelling maximum canopy conductance and transpiration in ...

    African Journals Online (AJOL)

    There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...

  18. Elastic Cube Actuator with Six Degrees of Freedom Output

    Directory of Open Access Journals (Sweden)

    Pengchuan Wang

    2015-09-01

    Full Text Available Unlike conventional rigid actuators, soft robotic technologies possess inherent compliance, so they can stretch and twist along every axis without the need for articulated joints. This compliance is exploited here using dielectric elastomer membranes to develop a novel six degrees of freedom (6-DOF) polymer actuator that unifies ordinarily separate components into a simple cubic structure. This cube actuator design incorporates elastic dielectric elastomer membranes on four faces which are coupled by a cross-shaped end effector. The inherent elasticity of each membrane greatly reduces kinematic constraint and enables a 6-DOF actuation output to be produced via the end effector. An electro-mechanical model of the cube actuator is presented that captures the non-linear hyperelastic behaviour of the active membranes. It is demonstrated that the model accurately predicts actuator displacement and blocking moment for a range of input voltages. Experimental testing of a prototype 60 mm device demonstrates 6-DOF operation. The prototype produces maximum linear and rotational displacements of ±2.6 mm (±4.3%) and ±4.8°, respectively, and a maximum blocking moment of ±76 mNm. The capacity for full 6-DOF actuation from a compact, readily scalable and easily fabricated polymeric package enables implementation in a range of mechatronics and robotics applications.

  19. Cardiac output measurement instruments controlled by microprocessors

    International Nuclear Information System (INIS)

    Spector, M.; Barritault, L.; Boeri, C.; Fauchet, M.; Gambini, D.; Vernejoul, P. de

    The nuclear medicine and biophysics laboratory of the Necker-Enfants malades University Hospital Centre has built a microprocessor-controlled cardiac flowmeter. The principle of the cardiac output measurement from a radiocardiogram is well established. After injection of a radioactive indicator upstream from the heart cavities, the dilution curve is obtained by the use of a gamma-ray precordial detector. This curve normally displays two peaks due to passage of the indicator into the right and left sides of the heart respectively. The output is then obtained from the Stewart-Hamilton principle once recirculation is eliminated. The graphic method used for the calculation, however, is long and tedious. The decreasing fraction of the dilution curve is projected in logarithmic space in order to eliminate recirculation by determining the mean straight line from which the decreasing exponential is obtained. The principle of the use of microprocessors is explained (electronics, logic) [fr]
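
    The Stewart-Hamilton computation outlined above (area under the first-pass dilution curve, with the recirculation-contaminated tail replaced by a log-linear extrapolation of the decreasing limb) can be sketched as follows. This is a generic illustration, not the laboratory's microprocessor implementation; the fit window and units are left to the caller.

```python
import numpy as np

def cardiac_output(t, c, injected_amount, fit_window):
    """Stewart-Hamilton principle: output = injected indicator amount divided
    by the area under the first-pass dilution curve.  Recirculation is removed
    by fitting the decreasing limb (index range `fit_window`) with a straight
    line in log space, i.e. a decaying exponential, and extrapolating it."""
    t, c = np.asarray(t, dtype=float), np.asarray(c, dtype=float)
    i0, i1 = fit_window
    slope, intercept = np.polyfit(t[i0:i1], np.log(c[i0:i1]), 1)

    # Replace the tail (where recirculation appears) by the extrapolation.
    c_first_pass = c.copy()
    c_first_pass[i1:] = np.exp(intercept + slope * t[i1:])

    return injected_amount / np.trapz(c_first_pass, t)
```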

  20. Output levels of commercially available portable compact disc players and the potential risk to hearing.

    Science.gov (United States)

    Fligor, Brian J; Cox, L Clarke

    2004-12-01

    To measure the sound levels generated by the headphones of commercially available portable compact disc players and provide hearing healthcare providers with safety guidelines based on a theoretical noise dose model. Using a Knowles Electronics Manikin for Acoustical Research and a personal computer, output levels across volume control settings were recorded from headphones driven by a standard signal (white noise) and compared with output levels from music samples of eight different genres. Many commercially available models from different manufacturers were investigated. Several different styles of headphones (insert, supra-aural, vertical, and circumaural) were used to determine if style of headphone influenced output level. Free-field equivalent sound pressure levels measured at maximum volume control setting ranged from 91 dBA to 121 dBA. Output levels varied across manufacturers and style of headphone, although generally the smaller the headphone, the higher the sound level for a given volume control setting. Specifically, in one manufacturer, insert earphones increased output level 7-9 dB, relative to the output from stock headphones included in the purchase of the CD player. In a few headphone-CD player combinations, peak sound pressure levels exceeded 130 dB SPL. Based on measured sound pressure levels across systems and the noise dose model recommended by National Institute for Occupational Safety and Health for protecting the occupational worker, a maximum permissible noise dose would typically be reached within 1 hr of listening with the volume control set to 70% of maximum gain using supra-aural headphones. Using headphones that resulted in boosting the output level (e.g., insert earphones used in this study) would significantly decrease the maximum safe volume control setting; this effect was unpredictable from one manufacturer to another. In the interest of providing a straightforward recommendation that should protect the hearing of the majority of
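
    The NIOSH criterion referenced in the record (85 dBA for 8 h with a 3-dB exchange rate) implies that the permissible listening time halves for every 3 dB increase in level, which is why a level of roughly 94 dBA corresponds to about one hour. A small sketch of that arithmetic:

```python
def niosh_permissible_duration_hours(level_dba, criterion_db=85.0,
                                     criterion_hours=8.0, exchange_db=3.0):
    """Permissible daily exposure under the NIOSH criterion: 8 hours at
    85 dBA, halved for every 3 dB increase (3-dB exchange rate)."""
    return criterion_hours / 2.0 ** ((level_dba - criterion_db) / exchange_db)

# About 94 dBA is allowed for roughly 1 hour, consistent with the record's
# finding that a full noise dose is reached within 1 hr at 70% volume.
print(niosh_permissible_duration_hours(94.0))   # 1.0
```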

  1. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent’s behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
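
    The weighting rule at the heart of Bayesian model averaging (each model weighted by its evidence, optionally times a prior) is compact enough to sketch; the numbers below are arbitrary log evidences used only for illustration.

```python
import numpy as np

def bayesian_model_average(log_evidences, predictions, prior=None):
    """Combine per-model predictions by Bayesian model averaging: weights are
    proportional to prior * evidence, normalized over models.  `predictions`
    holds one predictive mean (or vector) per model."""
    log_ev = np.asarray(log_evidences, dtype=float)
    prior = np.ones_like(log_ev) if prior is None else np.asarray(prior, float)
    logw = log_ev + np.log(prior)
    w = np.exp(logw - logw.max())
    w /= w.sum()                               # posterior model probabilities
    return np.tensordot(w, np.asarray(predictions, float), axes=1), w

mean, weights = bayesian_model_average([-10.2, -11.0, -15.3],
                                       [0.8, 0.5, 0.1])
print(weights.round(3), round(float(mean), 3))
```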

  2. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  3. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Hedin, E.R.

    1988-12-01

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β{sub Θ}, is derived. A method for unobtrusively measuring the quantities used to evaluate β{sub Θ} in Extrap T1 is described. The results of a series of measurements yielding β{sub Θ} as a function of externally applied toroidal field are presented. (author)

  4. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier that is being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department

  5. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation is l...

  6. Gibbs equilibrium averages and Bogolyubov measure

    International Nuclear Information System (INIS)

    Sankovich, D.P.

    2011-01-01

    Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure

  7. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  8. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L^2-norm are derived. A number of numerical examples are provided to show the computational performance of the method, with the regularization parameters selected by different strategies
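
    As a generic illustration of reconstructing a function from noisy local averages (not the specific regularization scheme of the paper), one can set up the local-averaging operator explicitly and solve a Tikhonov-regularized least-squares problem with a smoothness penalty; the names and the penalty choice below are assumptions.

```python
import numpy as np

def reconstruct_from_local_averages(averages, window, n, lam=1e-2):
    """Recover samples f[0..n-1] from noisy averages over sliding windows of
    length `window` by minimizing ||A f - b||^2 + lam * ||D f||^2, where A is
    the local-averaging operator and D a first-difference (smoothness) penalty."""
    m = n - window + 1                      # number of local averages
    A = np.zeros((m, n))
    for i in range(m):
        A[i, i:i + window] = 1.0 / window
    D = np.diff(np.eye(n), axis=0)          # first-difference matrix
    lhs = A.T @ A + lam * (D.T @ D)
    rhs = A.T @ np.asarray(averages, dtype=float)
    return np.linalg.solve(lhs, rhs)
```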

  9. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    Journal of physics, July 2007, pp. 31–47. The paper presents a result which confirms – at least partially – a singularity theorem based on spatial averages. Financial support under grant FIS2004-01626 is acknowledged.

  10. Multiphase averaging of periodic soliton equations

    International Nuclear Information System (INIS)

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  11. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type

  12. Essays on model averaging and political economics

    NARCIS (Netherlands)

    Wang, W.

    2013-01-01

    This thesis first investigates various issues related with model averaging, and then evaluates two policies, i.e. West Development Drive in China and fiscal decentralization in U.S, using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple

  13. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    7 CFR 1209.12 - On average. Title 7 (Agriculture), 2010-01-01. Section 1209.12, Agriculture Regulations of the Department of Agriculture (Continued), AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS..., CONSUMER INFORMATION ORDER, Mushroom Promotion, Research, and Consumer Information Order, Definitions, § 1209...

  14. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  15. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    textabstractWhile the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives

  16. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  17. Reliability Estimates for Undergraduate Grade Point Average

    Science.gov (United States)

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  18. Speaking Math--A Voice Input, Speech Output Calculator for Students with Visual Impairments

    Science.gov (United States)

    Bouck, Emily C.; Flanagan, Sara; Joshi, Gauri S.; Sheikh, Waseem; Schleppenbach, Dave

    2011-01-01

    This project explored a newly developed computer-based voice input, speech output (VISO) calculator. Three high school students with visual impairments educated at a state school for the blind and visually impaired participated in the study. The time they took to complete assessments and the average number of attempts per problem were recorded…

  19. Unregulated heat output of a storage heater

    OpenAIRE

    Lysak, Oleg Віталійович

    2017-01-01

    The article considers the factors determining the heat transfer between the outer surfaces of a storage heater and the ambient air. This heat exchange is unregulated, and its determination is a precondition for assessing the heat output range of this type of unit. An analysis of the literature on choosing insulating materials for each of the external surfaces of storage heaters was carried out: the foreign literature gives recommendations on the use of various types of insulation depending on the type of...

  20. Computing multiple-output regression quantile regions

    Czech Academy of Sciences Publication Activity Database

    Paindaveine, D.; Šiman, Miroslav

    2012-01-01

    Vol. 56, No. 4 (2012), pp. 840-853. ISSN 0167-9473. R&D Projects: GA MŠk(CZ) 1M06047. Institutional research plan: CEZ:AV0Z10750506. Keywords: halfspace depth * multiple-output regression * parametric linear programming * quantile regression. Subject RIV: BA - General Mathematics. Impact factor: 1.304, year: 2012. http://library.utia.cas.cz/separaty/2012/SI/siman-0376413.pdf

  1. Galois connection for multiple-output operations

    Czech Academy of Sciences Publication Activity Database

    Jeřábek, Emil

    2018-01-01

    Vol. 79 (2018), Article No. 17. ISSN 0002-5240. EU Projects: European Commission(XE) 339691 - FEALORA. Institutional support: RVO:67985840. Keywords: clones and coclones * Galois connection * multiple-output operations. Subject RIV: BA - General Mathematics. OECD field: Pure mathematics. Impact factor: 0.625, year: 2016. https://link.springer.com/article/10.1007%2Fs00012-018-0499-7

  2. Carnot efficiency at divergent power output

    Science.gov (United States)

    Polettini, Matteo; Esposito, Massimiliano

    2017-05-01

    The widely debated feasibility of thermodynamic machines achieving Carnot efficiency at finite power has been convincingly dismissed. Yet, the common wisdom that efficiency can only be optimal in the limit of infinitely slow processes overlooks the dual scenario of infinitely fast processes. We corroborate that efficient engines at divergent power output are not theoretically impossible, framing our claims within the theory of Stochastic Thermodynamics. We inspect the case of an electronic quantum dot coupled to three particle reservoirs to illustrate the physical rationale.

  3. Galois connection for multiple-output operations

    Czech Academy of Sciences Publication Activity Database

    Jeřábek, Emil

    2018-01-01

    Vol. 79, No. 2 (2018), Article No. 17. ISSN 0002-5240. EU Projects: European Commission(XE) 339691 - FEALORA. Institutional support: RVO:67985840. Keywords: clones and coclones * Galois connection * multiple-output operations. Subject RIV: BA - General Mathematics. OECD field: Pure mathematics. Impact factor: 0.625, year: 2016. https://link.springer.com/article/10.1007%2Fs00012-018-0499-7

  4. Burst firing enhances neural output correlation

    Directory of Open Access Journals (Sweden)

    Ho Ka Chan

    2016-05-01

    Full Text Available Neurons communicate and transmit information predominantly through spikes. Given that experimentally observed neural spike trains in a variety of brain areas can be highly correlated, it is important to investigate how neurons process correlated inputs. Most previous work in this area studied the problem of correlation transfer analytically by making significant simplifications of the neural dynamics. Temporal correlation between inputs that arises from synaptic filtering, for instance, is often ignored when assuming that an input spike can at most generate one output spike. Through numerical simulations of a pair of leaky integrate-and-fire (LIF) neurons receiving correlated inputs, we demonstrate that neurons in the presence of synaptic filtering by slow synapses exhibit strong output correlations. We then show that burst firing plays a central role in enhancing output correlations, which can explain the above-mentioned observation because synaptic filtering induces bursting. The observed changes of correlations are mostly on a long time scale. Our results suggest that other features affecting the prevalence of neural burst firing in biological neurons, e.g., adaptive spiking mechanisms, may play an important role in modulating the overall level of correlations in neural networks.
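
    A stripped-down version of the simulation described above: two leaky integrate-and-fire neurons driven by partially shared noise that is first passed through a slow synaptic filter, with the output correlation measured on spike counts in fixed windows. All parameter values are illustrative assumptions, not those used in the study.

```python
import numpy as np

def lif_pair_output_correlation(c_in=0.5, tau_syn=10.0, t_sim=20000.0,
                                dt=0.1, seed=1):
    """Correlation of output spike counts (50 ms windows) for two LIF neurons
    whose inputs share a fraction c_in of their noise and are filtered by a
    slow synapse with time constant tau_syn (all times in ms)."""
    rng = np.random.default_rng(seed)
    n = int(t_sim / dt)
    tau_m, v_th, v_reset, mu, sigma = 20.0, 1.0, 0.0, 0.04, 0.25
    v = np.zeros(2)
    s = np.zeros(2)                              # synaptically filtered input
    spikes = np.zeros((2, n), dtype=bool)
    for k in range(n):
        shared = rng.normal()
        private = rng.normal(size=2)
        noise = np.sqrt(c_in) * shared + np.sqrt(1.0 - c_in) * private
        s += -s * dt / tau_syn + sigma * np.sqrt(dt) / tau_syn * noise
        v += dt / tau_m * (-v) + dt * (mu + s)   # leaky integration of input
        fired = v >= v_th
        spikes[:, k] = fired
        v[fired] = v_reset
    win = int(50.0 / dt)
    counts = spikes[:, : n - n % win].reshape(2, -1, win).sum(axis=2)
    return float(np.corrcoef(counts[0], counts[1])[0, 1])

print(round(lif_pair_output_correlation(), 2))  # positive output correlation
```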

  5. Multi-model MPC with output feedback

    Directory of Open Access Journals (Sweden)

    J. M. Perez

    2014-03-01

    Full Text Available In this work, a new formulation is presented for the model predictive control (MPC) of a process system that is represented by a finite set of models, each one corresponding to a different operating point. The general case of systems with stable and integrating outputs in closed loop with output feedback is considered. For this purpose, the controller is based on a non-minimal order model in which the state is built from the measured outputs and the manipulated inputs of the control system. Therefore, the state can be considered as perfectly known and, consequently, there is no need to include a state observer in the control loop. This property of the proposed modeling approach makes it convenient to extend previous stability results for the closed-loop system with robust MPC controllers based on state feedback. The controller proposed here is based on the solution of two optimization problems that are solved sequentially at the same time step. The method is illustrated with a simulated example from the process industry. The rigorous simulation of the control of an adiabatic flash of a multi-component hydrocarbon mixture illustrates the application of the robust controller. The dynamic simulation of this process is performed using EMSO - Environment Model Simulation and Optimization. Finally, a comparison with a linear MPC using a single model is presented.

  6. Solar Power Station Output Inverter Control Design

    Directory of Open Access Journals (Sweden)

    J. Bauer

    2011-04-01

    Full Text Available Photovoltaic applications are spreading fast these days and are therefore undergoing great development. Because the amount of energy obtained from the panel depends on the surrounding conditions, such as the intensity of sun exposure or the temperature of the solar array, a converter must be connected to the panel output. A solar system equipped with an inverter can supply small loads, such as notebooks or mobile chargers, in places where the supply network is not present. Alternatively, the system can be used as a generator and deliver energy to the supply network. Each type of application places different requirements on the converter and its control algorithm, but one thing is common to all of them – maximal efficiency. The paper focuses on the design and simulation of a low power inverter that acts as the output part of the whole converter. In the paper the design of the control algorithm of the inverter for both types of inverter application – islanding mode and operation on the supply grid – is discussed. Attention is also paid to the design of the output filter that should reduce the negative side effects of the converter on the supply network.

  7. Tendon surveillance requirements - average tendon force

    International Nuclear Information System (INIS)

    Fulton, J.F.

    1982-01-01

    Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)

  8. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.

    Science.gov (United States)

    Theobald, Douglas L; Wuttke, Deborah S

    2006-09-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.

  9. Evaluating lexical characteristics of verbal fluency output in schizophrenia.

    Science.gov (United States)

    Juhasz, Barbara J; Chambers, Destinee; Shesler, Leah W; Haber, Alix; Kurtz, Matthew M

    2012-12-30

    Standardized lexical analysis of verbal output has not been applied to verbal fluency tasks in schizophrenia. Performance of individuals with schizophrenia on both a letter (n=139) and semantic (n=137) fluency task was investigated. The lexical characteristics (word frequency, age-of-acquisition, word length, and semantic typicality) of words produced were evaluated and compared to those produced by a healthy control group matched on age, gender, and Wechsler Adult Intelligence Scale-Third Edition (WAIS-III) vocabulary scores (n=20). Overall, individuals with schizophrenia produced fewer words than healthy controls, replicating past research (see Bokat and Goldberg, 2003). Words produced in the semantic fluency task by individuals with schizophrenia were, on average, earlier acquired and more typical of the category. In contrast, no differences in lexical characteristics emerged in the letter fluency task. The results are informative regarding how individuals with schizophrenia access their mental lexicons during the verbal fluency task. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  10. New topology of multiple-input single-output PV system for DC load applications

    Directory of Open Access Journals (Sweden)

    Mohsen M. ELhagry

    2016-12-01

    Full Text Available Improving the PV system structure and maximizing the output power of a PV system have drawn many researchers' attention nowadays. A multi-input single-output PV system is proposed in this paper. The system consists of multiple PV modules; each module feeds a DC–DC converter. The outputs of the converters are tied together to form a DC voltage source. In order to minimize the output ripples of the converters, the control signal of each converter is time-shifted from the others by a certain time interval depending on the number of converters used in the topology. In this study a battery is used as the main load, with the load current used as the control variable. A fuzzy logic controller is designed to modulate the operating point of the system to obtain the maximum power. The results show that the proposed system has a very good response for various operating conditions of the PV system. In addition, the output filter is minimized with excellent quality of the DC output voltage.

  11. Flow Control in Wells Turbines for Harnessing Maximum Wave Power

    Science.gov (United States)

    Garrido, Aitor J.; Garrido, Izaskun; Otaola, Erlantz; Maseda, Javier

    2018-01-01

    Oceans, and particularly waves, offer a huge potential for energy harnessing all over the world. Nevertheless, the performance of current energy converters does not yet allow us to use the wave energy efficiently. However, new control techniques can improve the efficiency of energy converters. In this sense, the plant sensors play a key role within the control scheme, as necessary tools for parameter measuring and monitoring that are then used as control input variables to the feedback loop. Therefore, the aim of this work is to manage the rotational speed control loop in order to optimize the output power. With the help of outward looking sensors, a Maximum Power Point Tracking (MPPT) technique is employed to maximize the system efficiency. Then, the control decisions are based on the pressure drop measured by pressure sensors located along the turbine. A complete wave-to-wire model is developed so as to validate the performance of the proposed control method. For this purpose, a novel sensor-based flow controller is implemented based on the different measured signals. Thus, the performance of the proposed controller has been analyzed and compared with a case of uncontrolled plant. The simulations demonstrate that the flow control-based MPPT strategy is able to increase the output power, and they confirm both the viability and goodness. PMID:29439408

  12. Emf, maximum power and efficiency of fuel cells

    International Nuclear Information System (INIS)

    Gaggioli, R.A.; Dunbar, W.R.

    1990-01-01

    This paper discusses the ideal voltage of steady-flow fuel cells, usually expressed as Emf = -ΔG/nF, where ΔG is the Gibbs free energy of reaction for the oxidation of the fuel at the supposed temperature of operation of the cell. Furthermore, the ideal power of the cell is expressed as the product of the fuel flow rate with this emf, and the efficiency of a real fuel cell, sometimes called the Gibbs efficiency, is defined as the ratio of the actual power output to this ideal power. Such viewpoints are flawed in several respects. While it is true that, if a cell operates isothermally, the maximum conceivable work output is equal to the difference between the Gibbs free energy of the incoming reactants and that of the leaving products, the use of the conventional ΔG of reaction nevertheless assumes that the products of reaction leave separately from one another (and from any unused fuel). Moreover, when ΔS of reaction is positive it assumes that a free heat source exists at the operating temperature, whereas if ΔS is negative it neglects the potential power which theoretically could be obtained from the heat released during oxidation. Finally, the usual cell does not operate isothermally but (virtually) adiabatically.
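
    As a concrete instance of the conventional formula under discussion (the numbers are standard handbook values for hydrogen oxidation at 25 °C, not taken from the paper):

        \[
          E = -\frac{\Delta G}{nF}
            = \frac{237.1 \times 10^{3}\,\mathrm{J\,mol^{-1}}}{2 \times 96485\,\mathrm{C\,mol^{-1}}}
            \approx 1.23\ \mathrm{V}
        \]

    for the reaction H2 + 1/2 O2 → H2O(l), with ΔG° ≈ -237.1 kJ/mol and n = 2.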

  13. Flow Control in Wells Turbines for Harnessing Maximum Wave Power.

    Science.gov (United States)

    Lekube, Jon; Garrido, Aitor J; Garrido, Izaskun; Otaola, Erlantz; Maseda, Javier

    2018-02-10

    Oceans, and particularly waves, offer a huge potential for energy harnessing all over the world. Nevertheless, the performance of current energy converters does not yet allow us to use the wave energy efficiently. However, new control techniques can improve the efficiency of energy converters. In this sense, the plant sensors play a key role within the control scheme, as necessary tools for parameter measuring and monitoring that are then used as control input variables to the feedback loop. Therefore, the aim of this work is to manage the rotational speed control loop in order to optimize the output power. With the help of outward looking sensors, a Maximum Power Point Tracking (MPPT) technique is employed to maximize the system efficiency. Then, the control decisions are based on the pressure drop measured by pressure sensors located along the turbine. A complete wave-to-wire model is developed so as to validate the performance of the proposed control method. For this purpose, a novel sensor-based flow controller is implemented based on the different measured signals. Thus, the performance of the proposed controller has been analyzed and compared with a case of uncontrolled plant. The simulations demonstrate that the flow control-based MPPT strategy is able to increase the output power, and they confirm both the viability and goodness.

  14. A Markovian model of evolving world input-output network.

    Directory of Open Access Journals (Sweden)

    Vahid Moosavi

    Full Text Available The initial theoretical connections between Leontief input-output models and Markov chains were established back in the 1950s. However, considering the wide variety of mathematical properties of Markov chains, so far there has not been a full investigation of evolving world economic networks with Markov chain formalism. In this work, using the recently available world input-output database, we investigated the evolution of the world economic network from 1995 to 2011 through analysis of a time series of finite Markov chains. We assessed different aspects of this evolving system via different known properties of the Markov chains such as mixing time, Kemeny constant, steady state probabilities and perturbation analysis of the transition matrices. First, we showed how the time series of mixing times and Kemeny constants could be used as an aggregate index of globalization. Next, we focused on the steady state probabilities as a measure of structural power of the economies that are comparable to GDP shares of economies as the traditional index of economies' welfare. Further, we introduced two measures of systemic risk, called systemic influence and systemic fragility, where the former is the ratio of the number of influenced nodes to the total number of nodes, caused by a shock in the activity of a node, and the latter is based on the number of times a specific economic node is affected by a shock in the activity of any of the other nodes. Finally, focusing on the Kemeny constant as a global indicator of monetary flow across the network, we showed that there is a paradoxical effect of a change in activity levels of economic nodes on the overall flow of the world economic network. While the economic slowdown of the majority of nodes with high structural power results in a slower average monetary flow over the network, there are some nodes whose slowdowns improve the overall quality of the network in terms of connectivity and the average flow of money.

  15. A Markovian model of evolving world input-output network.

    Science.gov (United States)

    Moosavi, Vahid; Isacchini, Giulio

    2017-01-01

    The initial theoretical connections between Leontief input-output models and Markov chains were established back in the 1950s. However, considering the wide variety of mathematical properties of Markov chains, so far there has not been a full investigation of evolving world economic networks with Markov chain formalism. In this work, using the recently available world input-output database, we investigated the evolution of the world economic network from 1995 to 2011 through analysis of a time series of finite Markov chains. We assessed different aspects of this evolving system via different known properties of the Markov chains such as mixing time, Kemeny constant, steady state probabilities and perturbation analysis of the transition matrices. First, we showed how the time series of mixing times and Kemeny constants could be used as an aggregate index of globalization. Next, we focused on the steady state probabilities as a measure of structural power of the economies that are comparable to GDP shares of economies as the traditional index of economies' welfare. Further, we introduced two measures of systemic risk, called systemic influence and systemic fragility, where the former is the ratio of the number of influenced nodes to the total number of nodes, caused by a shock in the activity of a node, and the latter is based on the number of times a specific economic node is affected by a shock in the activity of any of the other nodes. Finally, focusing on the Kemeny constant as a global indicator of monetary flow across the network, we showed that there is a paradoxical effect of a change in activity levels of economic nodes on the overall flow of the world economic network. While the economic slowdown of the majority of nodes with high structural power results in a slower average monetary flow over the network, there are some nodes whose slowdowns improve the overall quality of the network in terms of connectivity and the average flow of money.
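
    The Markov-chain quantities used above are simple to compute once a row-stochastic transition matrix is in hand. The sketch below uses an invented 3-node matrix rather than the world input-output data, together with the eigenvalue form of the Kemeny constant, K = sum over the non-unit eigenvalues of 1/(1-λ):

        import numpy as np

        # Invented row-stochastic transition matrix for a 3-node "economic network".
        P = np.array([[0.6, 0.3, 0.1],
                      [0.2, 0.5, 0.3],
                      [0.1, 0.4, 0.5]])

        # Steady-state probabilities: left eigenvector of P for eigenvalue 1.
        vals, vecs = np.linalg.eig(P.T)
        pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
        pi = pi / pi.sum()

        # Kemeny constant via the eigenvalues of P (one common convention).
        eigvals = np.linalg.eigvals(P)
        non_unit = eigvals[np.abs(eigvals - 1.0) > 1e-9]
        kemeny = np.sum(1.0 / (1.0 - non_unit)).real

        print("steady state:", np.round(pi, 4))
        print("Kemeny constant:", round(float(kemeny), 4))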

  16. Study of nominal daily output of urine from workers

    International Nuclear Information System (INIS)

    Lima, Marina F.; Carneiro, Janete C.G. Gaburo; Todo, Alberto S.

    2007-01-01

    A retrospective study of the 24-hour urine volumes from workers selected for internal individual monitoring compares the average volume collected per sample and the average volume per individual with the nominal daily output of urine of 'Reference Man'. This work considers 134 records of urine samples from 18 male workers, with routine sampling every semester, between the years 2000 and 2005. For this group, the average volume was (971±371) mL per collection and (962±376) mL per individual. In a cohort of 9 male workers who supplied at least 10 samples in this period, the average volume per collection decreased to (955±308) mL and the average volume per individual increased to (1027±400) mL. For the female group, composed of 11 individuals, the 29 urine samples supplied between 1999 and 2005 were considered. The average volume per sample and per worker was, respectively, (1122±337) mL and (1105±337) mL. For another cohort of only 4 female workers with at least one annual collection during five of the seven years considered, the value decreased to (1112±336) mL per collection and the average volume per individual was maintained. The largest variability of the volume among all the individuals was 927%, and for the same individual it was 562%. This difference can be indicative of individual differences in retention and excretion, of dietary interferences, and of a lack of care by the individual in collecting urine over a full 24-hour period. Radionuclide clearance does not occur at constant rates, and for the purpose of assessing intakes in our routine analysis the total volume of urine from a worker is corrected to 1.4 L. Based on the results obtained over the years, and to minimize the errors in the nominal daily urinary excretion rate, raising the workers' awareness of the need for accurate sampling and/or implementing measurements of creatinine levels in urine are suggested.

  17. Effects of bruxism on the maximum bite force

    Directory of Open Access Journals (Sweden)

    Todić Jelena T.

    2017-01-01

    Full Text Available Background/Aim. Bruxism is a parafunctional activity of the masticatory system, which is characterized by clenching or grinding of teeth. The purpose of this study was to determine whether the presence of bruxism has an impact on maximum bite force, with particular reference to the potential impact of gender on bite force values. Methods. This study included two groups of subjects: without and with bruxism. The presence of bruxism in the subjects was registered using a specific clinical questionnaire on bruxism and physical examination. The subjects from both groups were submitted to the procedure of measuring the maximum bite pressure and occlusal contact area using single-sheet pressure-sensitive films (Fuji Prescale MS and HS Film). Maximal bite force was obtained by multiplying maximal bite pressure and occlusal contact area values. Results. The average values of maximal bite force were significantly higher in the subjects with bruxism compared to those without bruxism (p < 0.01). Maximal bite force was significantly higher in the males compared to the females in all segments of the research. Conclusion. The presence of bruxism influences the increase in the maximum bite force as shown in this study. Gender is a significant determinant of bite force. Registration of maximum bite force can be used in diagnosing and analysing pathophysiological events during bruxism.

  18. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).

  19. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)
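
    The recursive estimator analysed here is easy to state in code. The sketch below (segment length, time constant and the white-noise test signal are all assumed for illustration) blends each new periodogram into the running PSD estimate with weight 1/τ:

        import numpy as np

        rng = np.random.default_rng(1)
        n_seg, seg_len, tau = 200, 256, 20.0   # segments, segment length, time constant
        alpha = 1.0 / tau                      # exponential averaging weight

        psd = None
        for _ in range(n_seg):
            x = rng.normal(size=seg_len)                   # white-noise test signal
            pxx = np.abs(np.fft.rfft(x))**2 / seg_len      # raw periodogram
            psd = pxx if psd is None else (1 - alpha) * psd + alpha * pxx

        # For unit-variance white noise this normalization gives a flat PSD near 1,
        # with the exponential averaging strongly reducing the scatter of the estimate.
        print("mean PSD estimate:", round(float(psd.mean()), 3))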

  20. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors, which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of the average work productivity per factors affecting it is conducted by means of the u-substitution method.

  1. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
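
    MXLKID itself is an LRLTRAN code for the CDC7600; purely as an illustration of the idea (not of the program), the sketch below fits the parameters of a small, invented nonlinear dynamic model to noisy data by numerically maximizing a Gaussian likelihood:

        import numpy as np
        from scipy.optimize import minimize

        # Simulate noisy measurements of a simple nonlinear dynamic system
        #   x[k+1] = a * x[k] / (1 + x[k]**2) + w[k],  w ~ N(0, sigma^2)
        rng = np.random.default_rng(0)
        a_true, sigma_true, n = 1.8, 0.2, 400
        x = np.zeros(n)
        for k in range(n - 1):
            x[k + 1] = a_true * x[k] / (1.0 + x[k]**2) + rng.normal(0.0, sigma_true)

        def neg_log_likelihood(theta):
            a, log_sigma = theta
            sigma = np.exp(log_sigma)                  # keep sigma positive
            pred = a * x[:-1] / (1.0 + x[:-1]**2)      # one-step-ahead predictions
            resid = x[1:] - pred
            return 0.5 * np.sum(resid**2 / sigma**2) + (n - 1) * np.log(sigma)

        res = minimize(neg_log_likelihood, x0=[1.0, 0.0], method="Nelder-Mead")
        a_hat, sigma_hat = res.x[0], np.exp(res.x[1])
        print(f"a = {a_hat:.3f} (true {a_true}), sigma = {sigma_hat:.3f} (true {sigma_true})")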

  2. Weighted estimates for the averaging integral operator

    Czech Academy of Sciences Publication Activity Database

    Opic, Bohumír; Rákosník, Jiří

    2010-01-01

    Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231

  3. Average Transverse Momentum Quantities Approaching the Lightfront

    OpenAIRE

    Boer, Daniel

    2015-01-01

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...

  4. Time-averaged MSD of Brownian motion

    OpenAIRE

    Andreanov, Alexei; Grebenkov, Denis

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...

  5. Average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1979-01-01

    Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (mean B_z = 3.γ) than near midnight (mean B_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed.

  6. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  7. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

    Full Text Available Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.

  8. Jarzynski equality in the context of maximum path entropy

    Science.gov (United States)

    González, Diego; Davis, Sergio

    2017-06-01

    In the global framework of finding an axiomatic derivation of nonequilibrium Statistical Mechanics from fundamental principles, such as the maximum path entropy - also known as the Maximum Caliber principle -, this work proposes an alternative derivation of the well-known Jarzynski equality, a nonequilibrium identity of great importance today due to its applications to irreversible processes: biological systems (protein folding), mechanical systems, among others. This equality relates the free energy difference between two equilibrium thermodynamic states to the work performed when going between those states, through an average over a path ensemble. In this work the analysis of Jarzynski's equality will be performed using the formalism of inference over path space. This derivation highlights the wide generality of Jarzynski's original result, which could even be used in non-thermodynamical settings such as social systems, financial and ecological systems.
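
    For reference, the identity in question, in its standard form (with β = 1/k_B T, W the work performed along an individual realization, and ΔF the equilibrium free-energy difference):

        \[
          \left\langle e^{-\beta W} \right\rangle = e^{-\beta \,\Delta F},
          \qquad \beta = \frac{1}{k_B T},
        \]

    where the angular brackets denote an average over the ensemble of nonequilibrium realizations (paths) of the process.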

  9. Evaluation of statistically downscaled GCM output as input for hydrological and stream temperature simulation in the Apalachicola–Chattahoochee–Flint River Basin (1961–99)

    Science.gov (United States)

    Hay, Lauren E.; LaFontaine, Jacob H.; Markstrom, Steven

    2014-01-01

    The accuracy of statistically downscaled general circulation model (GCM) simulations of daily surface climate for historical conditions (1961–99) and the implications when they are used to drive hydrologic and stream temperature models were assessed for the Apalachicola–Chattahoochee–Flint River basin (ACFB). The ACFB is a 50 000 km2 basin located in the southeastern United States. Three GCMs were statistically downscaled, using an asynchronous regional regression model (ARRM), to ⅛° grids of daily precipitation and minimum and maximum air temperature. These ARRM-based climate datasets were used as input to the Precipitation-Runoff Modeling System (PRMS), a deterministic, distributed-parameter, physical-process watershed model used to simulate and evaluate the effects of various combinations of climate and land use on watershed response. The ACFB was divided into 258 hydrologic response units (HRUs) in which the components of flow (groundwater, subsurface, and surface) are computed in response to climate, land surface, and subsurface characteristics of the basin. Daily simulations of flow components from PRMS were used with the climate to simulate in-stream water temperatures using the Stream Network Temperature (SNTemp) model, a mechanistic, one-dimensional heat transport model for branched stream networks.The climate, hydrology, and stream temperature for historical conditions were evaluated by comparing model outputs produced from historical climate forcings developed from gridded station data (GSD) versus those produced from the three statistically downscaled GCMs using the ARRM methodology. The PRMS and SNTemp models were forced with the GSD and the outputs produced were treated as “truth.” This allowed for a spatial comparison by HRU of the GSD-based output with ARRM-based output. Distributional similarities between GSD- and ARRM-based model outputs were compared using the two-sample Kolmogorov–Smirnov (KS) test in combination with descriptive

  10. High performance monolithic power management system with dynamic maximum power point tracking for microbial fuel cells.

    Science.gov (United States)

    Erbay, Celal; Carreon-Bautista, Salvador; Sanchez-Sinencio, Edgar; Han, Arum

    2014-12-02

    The microbial fuel cell (MFC), which can directly generate electricity from organic waste or biomass, is a promising renewable and clean technology. However, the low power and low voltage output of MFCs typically do not allow most electrical applications to be operated directly, whether supplementing electricity to wastewater treatment plants or powering autonomous wireless sensor networks. Power management systems (PMSs) can overcome this limitation by boosting the MFC output voltage and managing the power for maximum efficiency. We present a monolithic low-power-consuming PMS integrated circuit (IC) chip capable of dynamic maximum power point tracking (MPPT) to maximize the extracted power from MFCs, regardless of the power and voltage fluctuations from MFCs over time. The proposed PMS continuously detects the maximum power point (MPP) of the MFC and matches the load impedance of the PMS for maximum efficiency. The system also operates autonomously by directly drawing power from the MFC itself without any external power. The overall system efficiency, defined as the ratio between input energy from the MFC and output energy stored into the supercapacitor of the PMS, was 30%. As a demonstration, the PMS connected to a 240 mL two-chamber MFC (generating 0.4 V and 512 μW at MPP) successfully powered a wireless temperature sensor that requires a voltage of 2.5 V and consumes power of 85 mW each time it transmits the sensor data, and successfully transmitted a sensor reading every 7.5 min. The PMS also efficiently managed the power output of a lower-power producing MFC, demonstrating that the PMS works efficiently at various MFC power output levels.

  11. Multi-decadal Variability of the Wind Power Output

    Science.gov (United States)

    Kirchner Bossi, Nicolas; García-Herrera, Ricardo; Prieto, Luis; Trigo, Ricardo M.

    2014-05-01

    The knowledge of the long-term wind power variability is essential to provide a realistic outlook on the power output during the lifetime of a planned wind power project. In this work, the Power Output (Po) of a market wind turbine is simulated with a daily resolution for the period 1871-2009 at two different locations in Spain, one at the Central Iberian Plateau and another at the Gibraltar Strait Area. This is attained through a statistical downscaling of the daily wind conditions. It implements a Greedy Algorithm as a classifier of a geostrophic-based wind predictor, which is derived by considering the SLP daily field from the 56 ensemble members of the longest homogeneous reanalysis available (20CR, 1871-2009). For calibration and validation purposes we use 10 years of wind observations (the predictand) at both sites. As a result, a series of 139 annual wind speed Probability Density Functions (PDF) are obtained, with a good performance in terms of wind speed uncertainty reduction (average daily wind speed MAE=1.48 m/s). The obtained centennial series make it possible to investigate the multi-decadal variability of wind power from different points of view. Significant periodicities around the 25-yr frequency band, as well as long-term linear trends are detected at both locations. In addition, a negative correlation is found between annual Po at both locations, evidencing the differences in the dynamical mechanisms ruling them (and possible complementary behavior). Furthermore, the impact that the three leading large-scale circulation patterns over Iberia (NAO, EA and SCAND) exert on wind power output is evaluated. Results show distinct (and non-stationary) couplings to these forcings depending on the geographical position and season or month. Moreover, significant non-stationary correlations are observed with the slowly varying Atlantic Multidecadal Oscillation (AMO) index for both case studies. Finally, an empirical relationship is explored between the annual Po and the

  12. Using Bayes Model Averaging for Wind Power Forecasts

    Science.gov (United States)

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is however well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might however be the case that some sites tend to more accurately reflect the total output of the region, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2] the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single sites' wind speeds. However, when the procedure was applied to wind power it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single site wind speeds and single site wind power production as input. This solves the problem with longer consecutive periods where the input data

  13. A Hybrid Maximum Power Point Tracking Method for Automobile Exhaust Thermoelectric Generator

    Science.gov (United States)

    Quan, Rui; Zhou, Wei; Yang, Guangyou; Quan, Shuhai

    2017-05-01

    To make full use of the maximum output power of an automobile exhaust thermoelectric generator (AETEG) based on Bi2Te3 thermoelectric modules (TEMs), and taking into account the advantages and disadvantages of existing maximum power point tracking methods as well as the output characteristics of TEMs, a hybrid maximum power point tracking method combining the perturb and observe (P&O) algorithm, quadratic interpolation and constant voltage tracking is put forward in this paper. First, it searches for the maximum power point with the P&O algorithm and a quadratic interpolation method; then it forces the AETEG to work at its maximum power point with constant voltage tracking. A synchronous buck converter and controller were implemented on the electrical bus of the AETEG applied in a military sports utility vehicle, and the whole system was modeled and simulated in a MATLAB/Simulink environment. Simulation results demonstrate that the maximum output power of the AETEG based on the proposed hybrid method is increased by about 3.0% and 3.7% compared with that using only the P&O algorithm and only the quadratic interpolation method, respectively. The tracking time is only 1.4 s, roughly half that of the P&O algorithm and of the quadratic interpolation method. The experimental results demonstrate that the maximum power tracked with the proposed hybrid method is approximately equal to the real value, that it handles the voltage fluctuation of the AETEG better than the P&O algorithm alone, and that it resolves the issue that the working point can barely be adjusted with constant voltage tracking alone when the operating conditions change.
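
    A stripped-down sketch of the two search stages described above (coarse perturb-and-observe followed by a quadratic-interpolation refinement), applied to an assumed parabolic power-voltage curve rather than a real AETEG model:

        def power(v):
            """Assumed power-voltage curve of the generator (maximum at v = 12.0 V)."""
            return -0.5 * (v - 12.0)**2 + 40.0

        def perturb_and_observe(v0, step=0.5, iters=30):
            """Coarse hill climbing: reverse the perturbation whenever power drops."""
            v, p = v0, power(v0)
            direction = 1.0
            for _ in range(iters):
                v_new = v + direction * step
                p_new = power(v_new)
                if p_new < p:
                    direction = -direction
                v, p = v_new, p_new
            return v

        def quadratic_refine(v, step=0.5):
            """Fit a parabola through (v-step, v, v+step) and return its vertex."""
            p1, p2, p3 = power(v - step), power(v), power(v + step)
            denom = p1 - 2.0 * p2 + p3
            if abs(denom) < 1e-12:
                return v
            return v - 0.5 * step * (p3 - p1) / denom

        v_coarse = perturb_and_observe(v0=8.0)
        v_mpp = quadratic_refine(v_coarse)
        print(f"P&O estimate: {v_coarse:.2f} V, refined MPP: {v_mpp:.2f} V")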

  14. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with the thermal limitations. This paper shows that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory, with some simplifications that make it suitable for treatment by the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples.

  15. Maximum allowable load on wheeled mobile manipulators

    International Nuclear Information System (INIS)

    Habibnejad Korayem, M.; Ghariblu, H.

    2003-01-01

    This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations and the additional constraints applied to resolve the redundancy are probably the most important factors. To resolve the extra D.O.F. introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions which are applied to resolve the motion redundancy.

  16. Maximum phytoplankton concentrations in the sea

    DEFF Research Database (Denmark)

    Jackson, G.A.; Kiørboe, Thomas

    2008-01-01

    A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...

  17. Influence of Special Weather on Output of PV System

    Science.gov (United States)

    Zhang, Zele

    2018-01-01

    The output of a PV system is affected by different environmental factors; therefore, it is important to study the output of PV systems under different environmental conditions. By collecting the output of photovoltaic panels on site under special weather conditions and comparing the collected data, the output characteristics of the photovoltaic panels under different weather conditions are obtained. The influence of weather factors such as temperature, humidity and irradiance on the output of photovoltaic panels was investigated.

  18. Input and output constraints affecting irrigation development

    Science.gov (United States)

    Schramm, G.

    1981-05-01

    In many of the developing countries the expansion of irrigated agriculture is used as a major development tool for bringing about increases in agricultural output, rural economic growth and income distribution. Apart from constraints imposed by water availability, the major limitations to any acceleration of such programs are usually thought to be those of costs and financial resources. However, as is shown on the basis of empirical data drawn from Mexico, in reality the feasibility and effectiveness of such development programs is even more constrained by the lack of specialized physical and human factors on the input side and by market limitations on the output side. On the input side, the limited availability of complementary factors such as, for example, truly functioning credit systems for small-scale farmers or effective agricultural extension services imposes long-term constraints on development. On the output side the limited availability, high risk, and relatively slow growth of markets for high-value crops sharply reduce the usually hoped-for and projected profitable crop mix that would warrant the frequently high costs of irrigation investments. Three conclusions are drawn: (1) Factors in limited supply have to be shadow-priced to reflect their high opportunity costs in alternative uses. (2) Re-allocation of financial resources from immediate construction of projects to a longer-term increase in the supply of scarce, highly-trained manpower resources is necessary in order to optimize development over time. (3) Inclusion of high-value, high-income producing crops in the benefit-cost analysis of new projects is inappropriate if these crops could potentially be grown in already existing projects.

  19. Extracting Credible Dependencies for Averaged One-Dependence Estimator Analysis

    Directory of Open Access Journals (Sweden)

    LiMin Wang

    2014-01-01

    Full Text Available Of the numerous proposals to improve the accuracy of naive Bayes (NB) by weakening the conditional independence assumption, the averaged one-dependence estimator (AODE) demonstrates remarkable zero-one loss performance. However, indiscriminate superparent attributes will bring both considerable computational cost and a negative effect on classification accuracy. In this paper, to extract the most credible dependencies we present a new type of seminaive Bayesian operation, which selects superparent attributes by building a maximum weighted spanning tree and removes highly correlated children attributes by functional dependency and canonical cover analysis. Our extensive experimental comparison on UCI data sets shows that this operation efficiently identifies possible superparent attributes at training time and eliminates redundant children attributes at classification time.
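
    The spanning-tree step can be sketched with networkx; the pairwise scores below stand in for dependency measures between attributes and are invented for the example (this is not the paper's full AODE procedure):

        import networkx as nx

        # Invented pairwise dependency scores between four attributes.
        scores = {("A", "B"): 0.9, ("A", "C"): 0.2, ("A", "D"): 0.4,
                  ("B", "C"): 0.7, ("B", "D"): 0.1, ("C", "D"): 0.6}

        G = nx.Graph()
        for (u, v), w in scores.items():
            G.add_edge(u, v, weight=w)

        # Keep only the strongest dependencies that still connect every attribute.
        mst = nx.maximum_spanning_tree(G, weight="weight")
        print(sorted(mst.edges(data="weight")))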

  20. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  1. On output measurements via radiation pressure

    DEFF Research Database (Denmark)

    Leeman, S.; Healey, A.J.; Forsberg, F.

    1990-01-01

    It is shown, by simple physical argument, that measurements of intensity with a radiation pressure balance should not agree with those based on calorimetric techniques. The conclusion is ultimately a consequence of the circumstance that radiation pressure measurements relate to wave momentum, while calorimetric methods relate to wave energy. Measurements with some typical ultrasound fields are performed with a novel type of hydrophone, and these allow an estimate to be made of the magnitude of the discrepancy to be expected between the two types of output measurement in a typical case.

  2. On directional multiple-output quantile regression

    Czech Academy of Sciences Publication Activity Database

    Paindaveine, D.; Šiman, Miroslav

    2011-01-01

    Roč. 102, č. 2 (2011), s. 193-212 ISSN 0047-259X R&D Projects: GA MŠk(CZ) 1M06047 Grant - others:Commision EC(BE) Fonds National de la Recherche Scientifique Institutional research plan: CEZ:AV0Z10750506 Keywords : multivariate quantile * quantile regression * multiple-output regression * halfspace depth * portfolio optimization * value-at risk Subject RIV: BA - General Mathematics Impact factor: 0.879, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/siman-0364128.pdf

  3. Uncertainties in predicting solar panel power output

    Science.gov (United States)

    Anspaugh, B.

    1974-01-01

    The problem of calculating solar panel power output at launch and during a space mission is considered. The major sources of uncertainty and error in predicting the post launch electrical performance of the panel are considered. A general discussion of error analysis is given. Examples of uncertainty calculations are included. A general method of calculating the effect on the panel of various degrading environments is presented, with references supplied for specific methods. A technique for sizing a solar panel for a required mission power profile is developed.

  4. Output-Sensitive Pattern Extraction in Sequences

    DEFF Research Database (Denmark)

    Grossi, Roberto; Menconi, Giulia; Pisanti, Nadia

    2014-01-01

    Genomic Analysis, Plagiarism Detection, Data Mining, Intrusion Detection, Spam Fighting and Time Series Analysis are just some examples of applications where extraction of recurring patterns in sequences of objects is one of the main computational challenges. Several notions of patterns exist … or extend them causes a loss of significant information (where the number of occurrences changes). Output-sensitive algorithms have been proposed to enumerate and list these patterns, taking polynomial time O(n^c) per pattern for constant c > 1, which is impractical for massive sequences of very large length...

  5. Utilization of INIS output in Czechoslovakia

    International Nuclear Information System (INIS)

    Stanik, Z.; Blazek, J.

    1978-01-01

    Information is given on INIS output materials - the INIS magnetic tape, INIS Atomindex, and full texts of non-conventional literature on microfiches. A comprehensive INIS-SDI service is provided for the CSSR by the Nuclear Information Centre. The Unified Software System (USS) of the UVTEI-UTZ (the Central Technical Base of the Central Office for Scientific, Technical and Economic Information) is used for the automated processing of INIS magnetic tapes. A survey of INIS-SDI services in the years 1974 to 1978 is given. The further development of the system consists in the use of a terminal network with direct access to the IAEA computer in Vienna. (author)

  6. Three-level grid-connected photovoltaic inverter with maximum power point tracking

    International Nuclear Information System (INIS)

    Tsang, K.M.; Chan, W.L.

    2013-01-01

    Highlights: This paper reports a novel 3-level grid-connected photovoltaic inverter; the inverter features maximum power point tracking and grid current shaping; and the inverter can act as both an active filter and a renewable power source. - Abstract: This paper presents a systematic way of designing a control scheme for a grid-connected photovoltaic (PV) inverter featuring maximum power point tracking (MPPT) and grid current shaping. Unlike conventional designs, only four power switches are required to achieve three output levels, and it is not necessary to use any phase-locked-loop circuitry. For the proposed scheme, a simple integral controller has been designed for the tracking of the maximum power point of a PV array based on an improved extremum seeking control method. For the grid-connected inverter, a current loop controller and a voltage loop controller have been designed. The current loop controller is designed to shape the inverter output current, while the voltage loop controller maintains the capacitor voltage at a certain level and provides a reference inverter output current for the PV inverter without affecting the maximum power point of the PV array. Experimental results are included to demonstrate the effectiveness of the tracking and control scheme.

  7. Determination of the 4 mm Gamma Knife helmet relative output factor using a variety of detectors

    International Nuclear Information System (INIS)

    Tsai, J.-S.; Rivard, Mark J.; Engler, Mark J.; Mignano, John E.; Wazer, David E.; Shucart, William A.

    2003-01-01

    Though the 4 mm Gamma Knife helmet is used routinely, there is disagreement in the Gamma Knife users community on the value of the 4 mm helmet relative output factor. A range of relative output factors is used, and this variation may impair observations of dose response and optimization of prescribed dose. To study this variation, measurements were performed using the following radiation detectors: silicon diode, diamond detector, radiographic film, radiochromic film, and TLD cubes. To facilitate positioning of the silicon diode and diamond detector, a three-dimensional translation micrometer was used to iteratively determine the position of maximum detector response. Positioning of the films and TLDs was accomplished by manufacturing custom holders for each technique. Results from all five measurement techniques indicate that the 4 mm helmet relative output factor is 0.868±0.014. Within the experimental uncertainties, this value is in good agreement with results obtained by other investigators using diverse techniques

  8. Modelling and Prediction of Photovoltaic Power Output Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Aminmohammad Saberian

    2014-01-01

    Full Text Available This paper presents a solar power modelling method using artificial neural networks (ANNs). Two neural network structures, namely, general regression neural network (GRNN) and feedforward back propagation (FFBP), have been used to model the output power of a photovoltaic panel and approximate the generated power. Both neural networks have four inputs and one output. The inputs are maximum temperature, minimum temperature, mean temperature, and irradiance; the output is the power. The data used in this paper run from January 1, 2006, until December 31, 2010. The five years of data were split into two parts: 2006-2008 and 2009-2010; the first part was used for training and the second part was used for testing the neural networks. A mathematical equation is used to estimate the generated power. In the end, both networks have shown good modelling performance; however, FFBP has shown better performance compared with GRNN.
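
    A compact sketch of the feedforward variant (four meteorological inputs, one power output) on synthetic data; the study itself used five years of measured data and its own GRNN and FFBP configurations:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(42)
        n = 1500
        t_max = rng.uniform(25, 38, n)                 # daily maximum temperature (deg C)
        t_min = t_max - rng.uniform(5, 12, n)          # daily minimum temperature (deg C)
        t_mean = (t_max + t_min) / 2.0                 # daily mean temperature (deg C)
        irr = rng.uniform(2, 8, n)                     # daily irradiance (assumed kWh/m^2)

        # Synthetic "measured" power: irradiance-driven with a mild temperature penalty.
        power = 150.0 * irr * (1.0 - 0.004 * (t_mean - 25.0)) + rng.normal(0.0, 20.0, n)

        X = np.column_stack([t_max, t_min, t_mean, irr])
        X_tr, X_te, y_tr, y_te = train_test_split(X, power, random_state=0)

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(16, 8),
                                           max_iter=2000, random_state=0))
        model.fit(X_tr, y_tr)
        print("R^2 on held-out days:", round(model.score(X_te, y_te), 3))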

  9. Operator product expansion and its thermal average

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)

    1998-05-01

    QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators, which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case at finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules, when the temperature is not too low. (orig.) 7 refs.

  10. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  11. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  12. Baseline-dependent averaging in radio interferometry

    Science.gov (United States)

    Wijnholds, S. J.; Willis, A. G.; Salvini, S.

    2018-05-01

    This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.
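
    A generic way to see the decorrelation loss mentioned above (this is the textbook smearing argument, not the paper's specific closed-form expressions): averaging a unit-amplitude visibility whose phase drifts linearly by Δφ over the averaging interval attenuates its amplitude by |sin(Δφ/2)/(Δφ/2)|, which is why longer baselines, with their faster fringe rates, must be averaged over shorter intervals:

        import numpy as np

        def decorrelation(delta_phi):
            """Amplitude loss from averaging a unit phasor whose phase drifts
            linearly by delta_phi (radians) over the averaging interval."""
            if delta_phi == 0.0:
                return 1.0
            return abs(np.sin(delta_phi / 2.0) / (delta_phi / 2.0))

        # Larger phase drift (longer baseline or longer averaging time) -> more loss;
        # shrinking the interval on long baselines keeps the loss bounded, which is
        # the essence of baseline-dependent averaging.
        for dphi_deg in (5, 20, 60, 120):
            dphi = np.radians(dphi_deg)
            print(f"phase drift {dphi_deg:3d} deg -> amplitude factor {decorrelation(dphi):.4f}")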

  13. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time averaging circuit design, the 'parallel filter' is presented, which can reduce the time jitter, introduced in time measurements using counters of large dimensions. This parallel filter could be considered as a single stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  14. Time-averaged MSD of Brownian motion

    International Nuclear Information System (INIS)

    Andreanov, Alexei; Grebenkov, Denis S

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution
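
    The quantity under study is straightforward to evaluate numerically. The sketch below simulates a single two-dimensional Brownian trajectory (the diffusion coefficient and time step are assumed) and compares its time-averaged MSD at a few lags with the expected value 4DΔ:

        import numpy as np

        rng = np.random.default_rng(3)
        D, dt, n_steps = 1.0, 0.01, 10_000
        # 2-D Brownian trajectory: increments ~ N(0, 2*D*dt) per coordinate.
        steps = rng.normal(scale=np.sqrt(2 * D * dt), size=(n_steps, 2))
        traj = np.cumsum(steps, axis=0)

        def tamsd(traj, lag):
            """Time-averaged MSD at a given lag (in steps): mean of |r(t+lag)-r(t)|^2."""
            disp = traj[lag:] - traj[:-lag]
            return np.mean(np.sum(disp**2, axis=1))

        for lag in (1, 10, 100):
            # For 2-D Brownian motion the expected TAMSD is 4*D*(lag*dt).
            print(f"lag {lag*dt:6.2f}: TAMSD = {tamsd(traj, lag):.4f}, "
                  f"theory = {4*D*lag*dt:.4f}")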

  15. Time-dependent angularly averaged inverse transport

    International Nuclear Information System (INIS)

    Bal, Guillaume; Jollivet, Alexandre

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain

  16. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust

  17. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The rough properties of nuclei were investigated with a statistical model, separately for systems with equal and with different numbers of protons and neutrons, considering the Coulomb energy in the latter case. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula generalized for compressible nuclei. In the study of the a_s surface energy coefficient, the great influence exerted by the Coulomb energy and the nuclear compressibility was verified. For a good fit of the beta stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.) [pt

  18. Time-averaged MSD of Brownian motion

    Science.gov (United States)

    Andreanov, Alexei; Grebenkov, Denis S.

    2012-07-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.

  19. Improved Maximum Strength, Vertical Jump and Sprint Performance after 8 Weeks of Jump Squat Training with Individualized Loads

    Directory of Open Access Journals (Sweden)

    Vanderka Marián, Longová Katarína, Olasz Dávid, Krčmár Matúš, Walker Simon

    2016-09-01

    Full Text Available The purpose of the study was to determine the effects of 8 weeks of jump squat training on isometric half squat maximal force production (Fmax) and rate of force development over 100 ms (RFD100), countermovement jump (CMJ) and squat jump (SJ) height, and 50 m sprint time in moderately trained men. Sixty-eight subjects (~21 years, ~180 cm, ~75 kg) were divided into experimental (EXP; n = 36) and control (CON; n = 32) groups. Tests were completed pre-, mid- and post-training. EXP performed jump squat training 3 times per week using loads that allowed all repetitions to be performed with ≥90% of maximum average power output (13 sessions with 4 sets of 8 repetitions and 13 sessions with 8 sets of 4 repetitions). Subjects were given real-time feedback for every repetition during the training sessions. Significant improvements in Fmax from pre- to mid-training (Δ ~14%, p < 0.001) and from mid- to post-training (Δ ~4%, p < 0.001) in EXP were observed. In CON a significantly enhanced Fmax from pre- to mid-training (Δ ~3.5%, p < 0.05) was recorded, but no other significant changes were observed in any other test. In RFD100, significant improvements from pre- to mid-training (Δ ~27%, p < 0.001) as well as from mid- to post-training (Δ ~17%, p < 0.01) were observed. CMJ and SJ height were significantly enhanced from pre- to mid-training (Δ ~10% and ~15%, respectively, p < 0.001) but no further changes occurred from mid- to post-training. Significant improvements in 50 m sprint time from pre- to mid-training (Δ -1%, p < 0.05) and from mid- to post-training (Δ -1.9%, p < 0.001) in EXP were observed. Furthermore, percent changes in EXP were greater than changes in CON during training. It appears that using jump squats with loads that allow repetitions to be performed at ≥90% of maximum average power output can simultaneously improve several different athletic performance tasks in the short term.

  20. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.

  1. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  2. The output, incomes and assets-capital relations in individual farms

    Directory of Open Access Journals (Sweden)

    Roma Ryś-Jurek

    2009-01-01

    Full Text Available In this article an attempt was made to analyse the output, incomes and other components of assets, as well as the sources that finance them, in Polish individual farms, in comparison with farms from other EU countries. Special emphasis was put on examining the interrelations between income, output and stocks observed within individual farms. The research was based on the FADN database, which includes basic information about average individual farms in the years 2004-2006. The research showed, among other things, that the average output and family farm income were three times lower in Poland than the EU average, and that an increase in income was possible only thanks to EU subsidies. According to the regression models, in Poland stocks, crop output and livestock output had a positive influence on the growth of family farm income, whereas in the EU crop and livestock production had a positive influence and stocks a negative influence on income growth.

  3. An Experimental Observation of Axial Variation of Average Size of Methane Clusters in a Gas Jet

    International Nuclear Information System (INIS)

    Ji-Feng, Han; Chao-Wen, Yang; Jing-Wei, Miao; Jian-Feng, Lu; Meng, Liu; Xiao-Bing, Luo; Mian-Gong, Shi

    2010-01-01

    Axial variation of the average size of methane clusters in a gas jet produced by supersonic expansion of methane through a cylindrical nozzle 0.8 mm in diameter is observed using a Rayleigh scattering method. The scattered light intensity exhibits a power-law scaling with the backing pressure ranging from 16 to 50 bar, and the power is strongly Z-dependent, varying from 8.4 (Z = 3 mm) to 5.4 (Z = 11 mm), which is much larger than that of the argon cluster. The scattered light intensity versus axial position shows that the position of 5 mm has the maximum signal intensity. The estimate of the average cluster size as a function of axial position Z indicates that the cluster growth process goes forward until the maximum average cluster size is reached at Z = 9 mm, and the average cluster size decreases gradually for Z > 9 mm.

  4. Saturated output tabletop x-ray lasers

    International Nuclear Information System (INIS)

    Dunn, J.; Osterheld, A.L.; Nilsen, J.; Hunter, J.R.; Li, Y.; Faenov, A.Ya.; Pikuz, T.A.; Shlyaptsev, N.

    2000-01-01

    The high efficiency method of transient collisional excitation has been successfully demonstrated for Ne-like and Ni-like ion x-ray laser schemes with small 5-10 J laser facilities. Our recent studies using the tabletop COMET (Compact Multipulse Terawatt) laser system at the Lawrence Livermore National Laboratory (LLNL) have produced several x-ray lasers operating in the saturation regime. Output energy of 10-15 μJ corresponding to a gL product of 18 has been achieved on the Ni-like Pd 4d → 4p transition at 147 Å with a total energy of 5-7 J in a 600 ps pulse followed by a 1.2 ps pulse. Analysis of the laser beam angular profile indicates that refraction plays an important role in the amplification and propagation process in the plasma column. We report further improvement in the extraction efficiency by varying a number of laser driver parameters. In particular, the duration of the second short pulse producing the inversion has an observed effect on the x-ray laser output.

  5. Saturated output tabletop X-ray lasers

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, J.; Osterheld, A.L.; Nilsen, J.; Hunter, J.R. [Lawrence Livermore National Lab., CA (United States); Yuelin Li [Lawrence Livermore National Lab., CA (United States); ILSA, Lawrence Livermore National Lab., Livermore, CA (United States); Faenov, A.Ya.; Pikuz, T.A. [Lawrence Livermore National Lab., CA (United States); MISDC of VNIIFTRI, Mendeleevo (Russian Federation); Shlyaptsev, V.N. [Lawrence Livermore National Lab., CA (United States); DAS, Univ. of California Davis-Livermore, Livermore, CA (United States)

    2001-07-01

    The high efficiency method of transient collisional excitation has been successfully demonstrated for Ne-like and Ni-like ion X-ray laser schemes with small 5-10 J laser facilities. Our recent studies using the tabletop COMET (Compact Multipulse Terawatt) laser system at the Lawrence Livermore National Laboratory (LLNL) have produced several X-ray lasers operating in the saturation regime. Output energy of 10-15 μJ corresponding to a gL product of 18 has been achieved on the Ni-like Pd 4d → 4p transition at 147 Å with a total energy of 5-7 J in a 600 ps pulse followed by a 1.2 ps pulse. Analysis of the laser beam angular profile indicates that refraction plays an important role in the amplification and propagation process in the plasma column. We report further improvement in the extraction efficiency by varying a number of laser driver parameters. In particular, the duration of the second short pulse producing the inversion has an observed effect on the X-ray laser output. (orig.)

  6. Analysis of Output Levels of an MP3 Player: Effects of Earphone Type, Music Genre, and Listening Duration.

    Science.gov (United States)

    Shim, Hyunyong; Lee, Seungwan; Koo, Miseung; Kim, Jinsook

    2018-02-26

    To help prevent noise-induced hearing loss in young adults caused by listening to music with personal listening devices, this study aimed to measure the output levels of an MP3 player and to identify preferred listening levels (PLLs) depending on earphone type, music genre, and listening duration. Twenty-two normal-hearing young adults (mean = 18.82 years, standard deviation = 0.57) participated. Each participant was asked to select his or her preferred listening level when listening to Korean ballad or dance music with an earbud or an over-the-ear earphone for 30 or 60 minutes. One side of the earphone was connected to the participant's better ear and the other side was connected to a sound level meter via a 2 cc or 6 cc coupler. For each combination of earphone type, music genre, and listening duration, A-weighted equivalent sound levels (LAeq) and maximum time-weighted A-weighted sound levels, in dBA, were measured. Neither the main nor the interaction effects of the PLLs among the three factors were significant. Overall output levels of earbuds were about 10-12 dBA greater than those of over-the-ear earphones. The PLLs were 1.73 dBA greater for earbuds than for over-the-ear earphones. The average PLL for ballad was higher than for dance music. The PLLs at LAeq for both music genres were greatest at 0.5 kHz, followed by 1, 0.25, 2, 4, 0.125, and 8 kHz, in that order. The PLLs did not differ significantly when listening to Korean ballad or dance music as functions of earphone type, music genre, or listening duration. However, over-the-ear earphones seemed to be more suitable for preventing noise-induced hearing loss when listening to music, showing lower PLLs, possibly owing to isolation from background noise by covering the ears.

  7. MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.

    Science.gov (United States)

    Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang

    2018-02-02

    The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach is still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein data showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 times (protein data) (range: 1.2-20.7) faster than the standard parsimony bootstrap implemented in PAUP*; but 1.6 (DNA) to 4.1 times (protein data) slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 times (protein data) (range: 0.3-63.9) faster than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT. However, this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy-to-use, open-source and available at http://www.cibiv.at/software/mpboot .

  8. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_{uu}, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_{uuu···u} ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  9. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by "exact" methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas with energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality.
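
    To make the quantity concrete, the sketch below computes a spectrum-averaged beta energy by direct numerical integration of a simple allowed spectrum shape without the Fermi (Coulomb) correction; this is only an illustration of the definition, not the paper's approximation method, and the endpoint energy is an assumed value.

```python
import numpy as np

M_E = 0.511  # electron rest energy in MeV

def trapezoid(y, x):
    """Simple trapezoidal rule, kept local to avoid depending on a specific NumPy version."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def allowed_shape(E, E0):
    """Allowed beta spectrum shape N(E) ~ p * W * (E0 - E)^2, Fermi function omitted.
    E is the electron kinetic energy and E0 the endpoint energy, both in MeV."""
    W = E + M_E                                     # total electron energy
    p = np.sqrt(np.maximum(W ** 2 - M_E ** 2, 0.0)) # electron momentum (MeV/c)
    return p * W * (E0 - E) ** 2

E0 = 2.0                                            # assumed endpoint energy, MeV
E = np.linspace(0.0, E0, 5000)
N = allowed_shape(E, E0)

E_avg = trapezoid(E * N, E) / trapezoid(N, E)
print(f"spectrum-averaged beta energy ~ {E_avg:.3f} MeV (roughly one third of E0 for this shape)")
```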

  10. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Full Text Available Consider an arbitrary nonnegative deterministic process {X(t), t ≥ 0} (in a stochastic setting, a fixed realization, i.e., sample path, of the underlying stochastic process) with state space S = (-∞, ∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results give them the choice to work with the time average of a process or its frequency distribution function and to go back and forth between the two under a mild condition.
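
    In symbols, the equality that the abstract characterizes can be written schematically as follows (a compact restatement, not the authors' precise hypotheses):

```latex
\[
\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T} f\bigl(X(t)\bigr)\,dt
\;=\;\int_{S} f(x)\,dF(x),
\]
```

    where F denotes the long-run frequency distribution of {X(t), t ≥ 0} and f is a measurable function for which both sides are defined.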

  11. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1991-08-01

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms
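
    A minimal numerical sketch of the idea, under the simplifying assumption that each experiment's uncertainty is a fixed fraction of the (unknown) true value: the weights are recomputed from the common average rather than from each experiment's own measurement, which is how the bias from value-dependent errors is avoided. The measurements and relative errors below are invented for illustration.

```python
import numpy as np

# Invented measurements and reported *relative* errors (sigma_i = r_i * true value).
x = np.array([9.2, 10.5, 9.8, 11.0])
r = np.array([0.05, 0.10, 0.04, 0.08])

# Naive combination: each experiment's error is evaluated at its own measured value.
w_naive = 1.0 / (r * x) ** 2
mean = np.sum(w_naive * x) / np.sum(w_naive)

# Improved combination: re-derive every error from the current average and iterate.
for _ in range(20):
    sigma = r * mean                      # error model evaluated at the common estimate
    w = 1.0 / sigma ** 2
    new_mean = np.sum(w * x) / np.sum(w)
    if abs(new_mean - mean) < 1e-12:
        break
    mean = new_mean

# For a purely multiplicative error model the re-derived weights no longer depend on
# each experiment's own value, which removes the downward pull of low measurements.
print(mean)
```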

  12. High average power 1314 nm Nd:YLF laser, passively Q-switched with V:YAG

    CSIR Research Space (South Africa)

    Botha, RC

    2013-03-01

    Full Text Available A 1314 nm Nd:YLF laser was designed and operated both CW and passively Q-switched. Maximum CW output of 10.4 W resulted from 45.2 W of incident pump power. Passive Q-switching was obtained by inserting a V:YAG saturable absorber in the cavity...

  13. A Monte Carlo Study on Multiple Output Stochastic Frontiers

    DEFF Research Database (Denmark)

    Henningsen, Géraldine; Henningsen, Arne; Jensen, Uwe

    In the estimation of multiple output technologies in a primal approach, the main question is how to handle the multiple outputs. Often an output distance function is used, where the classical approach is to exploit its homogeneity property by selecting one output quantity as the dependent variable, dividing all other output quantities by the selected output quantity, and using these ratios as regressors (OD). Another approach is the stochastic ray production frontier (SR), which transforms the output quantities into their Euclidean distance as the dependent variable and their polar coordinates as additional regressors. This Monte Carlo study compares both specifications for the case of a Translog output distance function with respect to different common statistical problems as well as problems arising as a consequence of zero values in the output quantities. Although our results partly show clear reactions to statistical misspecifications...

  14. Entanglement in random pure states: spectral density and average von Neumann entropy

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Santosh; Pandey, Akhilesh, E-mail: skumar.physics@gmail.com, E-mail: ap0700@mail.jnu.ac.in [School of Physical Sciences, Jawaharlal Nehru University, New Delhi 110 067 (India)

    2011-11-04

    Quantum entanglement plays a crucial role in quantum information, quantum teleportation and quantum computation. The information about the entanglement content between subsystems of the composite system is encoded in the Schmidt eigenvalues. We derive here closed expressions for the spectral density of Schmidt eigenvalues for all three invariant classes of random matrix ensembles. We also obtain exact results for average von Neumann entropy. We find that maximum average entanglement is achieved if the system belongs to the symplectic invariant class. (paper)

  15. Monte Carlo simulation of the effect of miniphantom on in-air output ratio

    International Nuclear Information System (INIS)

    Li Jun; Zhu, Timothy C.

    2010-01-01

    Purpose: The aim of the study was to quantify the effect of miniphantoms on in-air output ratio measurements, i.e., to determine correction factors for in-air output ratio. Methods: Monte Carlo (MC) simulations were performed to simulate in-air output ratio measurements by using miniphantoms made of various materials (PMMA, graphite, copper, brass, and lead) and with different longitudinal thicknesses or depths (2-30 g/cm²) in photon beams of 6 and 15 MV, respectively, and with collimator settings ranging from 3×3 to 40×40 cm². EGSnrc and BEAMnrc (2007) software packages were used. Photon energy spectra corresponding to the collimator settings were obtained from BEAMnrc code simulations on a linear accelerator and were used to quantify the components of in-air output ratio correction factors, i.e., attenuation, mass energy absorption, and phantom scatter correction factors. In-air output ratio correction factors as functions of miniphantom material, miniphantom longitudinal thickness, and collimator setting were calculated and compared to a previous experimental study. Results: The in-air output ratio correction factors increase with collimator opening and miniphantom longitudinal thickness for all the materials and for both energies. At small longitudinal thicknesses, the in-air output ratio correction factors for PMMA and graphite are close to 1. The maximum magnitudes of the in-air output ratio correction factors occur at the largest collimator setting (40×40 cm²) and the largest miniphantom longitudinal thickness (30 g/cm²): 1.008±0.001 for 6 MV and 1.012±0.001 for 15 MV, respectively. The MC simulations of the in-air output ratio correction factor confirm the previous experimental study. Conclusions: The study has verified that a correction factor for in-air output ratio can be obtained as a product of attenuation correction factor, mass energy absorption correction factor, and phantom scatter correction factor. The correction factors obtained in the

  16. Comparison of fuzzy logic and neural network in maximum power point tracker for PV systems

    Energy Technology Data Exchange (ETDEWEB)

    Ben Salah, Chokri; Ouali, Mohamed [Research Unit on Intelligent Control, Optimization, Design and Optimization of Complex Systems (ICOS), Department of Electrical Engineering, National School of Engineers of Sfax, BP. W, 3038, Sfax (Tunisia)

    2011-01-15

    This paper proposes two methods of maximum power point tracking, using fuzzy logic and neural network controllers, for photovoltaic systems. The two maximum power point tracking controllers receive solar radiation and photovoltaic cell temperature as inputs, and estimate the optimum duty cycle corresponding to maximum power as output. The approach is validated on a 100 Wp PVP (two parallel SM50-H panels) connected to a 24 V dc load. The new method gives good maximum power operation of any photovoltaic array under different conditions such as changing solar radiation and PV cell temperature. From the simulation and experimental results, the fuzzy logic controller can deliver more power than the neural network controller and can give more power than other methods in the literature. (author)

  17. Dense Output for Strong Stability Preserving Runge–Kutta Methods

    KAUST Repository

    Ketcheson, David I.

    2016-12-10

    We investigate dense output formulae (also known as continuous extensions) for strong stability preserving (SSP) Runge–Kutta methods. We require that the dense output formula also possess the SSP property, ideally under the same step-size restriction as the method itself. A general recipe for first-order SSP dense output formulae for SSP methods is given, and second-order dense output formulae for several optimal SSP methods are developed. It is shown that SSP dense output formulae of order three and higher do not exist, and that in any method possessing a second-order SSP dense output, the coefficient matrix A has a zero row.
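
    As a concrete special case consistent with (but much narrower than) the general recipe referred to above, a first-order dense output can be taken as the convex combination of the step endpoints,

```latex
\[
u(t_n + \theta h) \;\approx\; (1-\theta)\,u_n \;+\; \theta\,u_{n+1},
\qquad \theta \in [0,1],
\]
```

    which, being a convex combination of states that individually satisfy the strong stability bound, inherits the SSP property under the same step-size restriction as the underlying method.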

  18. Maximum Power Point Tracking in Variable Speed Wind Turbine Based on Permanent Magnet Synchronous Generator Using Maximum Torque Sliding Mode Control Strategy

    Institute of Scientific and Technical Information of China (English)

    Esmaeil Ghaderi; Hossein Tohidi; Behnam Khosrozadeh

    2017-01-01

    The present study was carried out in order to track the maximum power point in a variable speed turbine by minimizing electromechanical torque changes using a sliding mode control strategy. In this strategy, first, the rotor speed is set at an optimal point for different wind speeds. As a result, the tip speed ratio reaches an optimal point, the mechanical power coefficient is maximized, and the wind turbine produces its maximum power and mechanical torque. Then, the maximum mechanical torque is tracked using electromechanical torque. In this technique, the tracking error integral of maximum mechanical torque, the error, and the derivative of the error are used as state variables. During changes in wind speed, the sliding mode control is designed to absorb the maximum energy from the wind and minimize the response time of maximum power point tracking (MPPT). In this method, the actual control input signal is formed from a second order integral operation of the original sliding mode control input signal. The result of the second order integral in this model includes control signal integrity, full chattering attenuation, and prevention of large fluctuations in the power generator output. The simulation results, obtained using MATLAB/m-file software, have shown the effectiveness of the proposed control strategy for wind energy systems based on the permanent magnet synchronous generator (PMSG).

  19. SU-E-T-136: Assessment of Seasonal Linear Accelerator Output Variations and Associated Impacts

    International Nuclear Information System (INIS)

    Bartolac, S; Letourneau, D

    2015-01-01

    Purpose: Application of process control theory in quality assurance programs promises to allow earlier identification of problems and potentially better quality in delivery than traditional paradigms based primarily on tolerances and action levels. The purpose of this project was to characterize underlying seasonal variations in linear accelerator output that can be used to improve performance or trigger preemptive maintenance. Methods: Review of runtime plots of daily (6 MV) output data acquired using in house ion chamber based devices over three years and for fifteen linear accelerators of varying make and model were evaluated. Shifts in output due to known interventions with the machines were subtracted from the data to model an uncorrected scenario for each linear accelerator. Observable linear trends were also removed from the data prior to evaluation of periodic variations. Results: Runtime plots of output revealed sinusoidal, seasonal variations that were consistent across all units, irrespective of manufacturer, model or age of machine. The average amplitude of the variation was on the order of 1%. Peak and minimum variations were found to correspond to early April and September, respectively. Approximately 48% of output adjustments made over the period examined were potentially avoidable if baseline levels had corresponded to the mean output, rather than to points near a peak or valley. Linear trends were observed for three of the fifteen units, with annual increases in output ranging from 2–3%. Conclusion: Characterization of cyclical seasonal trends allows for better separation of potentially innate accelerator behaviour from other behaviours (e.g. linear trends) that may be better described as true out of control states (i.e. non-stochastic deviations from otherwise expected behavior) and could indicate service requirements. Results also pointed to an optimal setpoint for accelerators such that output of machines is maintained within set tolerances
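
    The kind of seasonal characterization described above can be sketched with an ordinary least-squares fit of an offset, a linear drift and an annual sinusoid to daily output data; the synthetic data below (three years, ~1% annual amplitude) are assumptions for illustration, not the study's clinical data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily 6 MV output (% of baseline) over three years with a ~1% annual cycle.
t = np.arange(3 * 365, dtype=float)                      # day index
output = 100.0 + 1.0 * np.sin(2.0 * np.pi * t / 365.25) + 0.3 * rng.standard_normal(t.size)

# Design matrix: offset, linear drift, annual sine and cosine.
w = 2.0 * np.pi / 365.25
A = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
coef, *_ = np.linalg.lstsq(A, output, rcond=None)

amplitude = np.hypot(coef[2], coef[3])                   # amplitude of the seasonal component
drift_per_year = coef[1] * 365.25                        # linear trend in % per year
print(f"seasonal amplitude ~ {amplitude:.2f}%, drift ~ {drift_per_year:.2f}%/yr")
```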

  20. SU-E-T-136: Assessment of Seasonal Linear Accelerator Output Variations and Associated Impacts

    Energy Technology Data Exchange (ETDEWEB)

    Bartolac, S; Letourneau, D [Princess Margaret Cancer Centre, Toronto, Ontario (Canada); University of Toronto, Toronto, Ontario (Canada)

    2015-06-15

    Purpose: Application of process control theory in quality assurance programs promises to allow earlier identification of problems and potentially better quality in delivery than traditional paradigms based primarily on tolerances and action levels. The purpose of this project was to characterize underlying seasonal variations in linear accelerator output that can be used to improve performance or trigger preemptive maintenance. Methods: Review of runtime plots of daily (6 MV) output data acquired using in house ion chamber based devices over three years and for fifteen linear accelerators of varying make and model were evaluated. Shifts in output due to known interventions with the machines were subtracted from the data to model an uncorrected scenario for each linear accelerator. Observable linear trends were also removed from the data prior to evaluation of periodic variations. Results: Runtime plots of output revealed sinusoidal, seasonal variations that were consistent across all units, irrespective of manufacturer, model or age of machine. The average amplitude of the variation was on the order of 1%. Peak and minimum variations were found to correspond to early April and September, respectively. Approximately 48% of output adjustments made over the period examined were potentially avoidable if baseline levels had corresponded to the mean output, rather than to points near a peak or valley. Linear trends were observed for three of the fifteen units, with annual increases in output ranging from 2–3%. Conclusion: Characterization of cyclical seasonal trends allows for better separation of potentially innate accelerator behaviour from other behaviours (e.g. linear trends) that may be better described as true out of control states (i.e. non-stochastic deviations from otherwise expected behavior) and could indicate service requirements. Results also pointed to an optimal setpoint for accelerators such that output of machines is maintained within set tolerances

  1. High average power linear induction accelerator development

    International Nuclear Information System (INIS)

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs

  2. Quetelet, the average man and medical knowledge.

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  3. [Quetelet, the average man and medical knowledge].

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  4. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.

  5. Angle-averaged Compton cross sections

    International Nuclear Information System (INIS)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; phi = angle between photon direction and electron direction in the laboratory frame (LF); theta = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and tau = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over phi, theta, and tau. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV
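
    For orientation, a much simpler special case than the one treated above (electron at rest, β = 0, so only the polar angle matters) is the angle average of the Klein–Nishina differential cross section, which can be done numerically in a few lines; in the Thomson limit α → 0 the result should approach ≈ 6.65×10⁻²⁵ cm².

```python
import numpy as np

R_E = 2.8179403262e-13   # classical electron radius in cm

def klein_nishina(alpha, theta):
    """Klein-Nishina differential cross section dsigma/dOmega (cm^2/sr) for a photon of
    energy alpha = E / (m0 c^2) scattering through polar angle theta off an electron at rest."""
    ratio = 1.0 / (1.0 + alpha * (1.0 - np.cos(theta)))          # scattered / initial photon energy
    return 0.5 * R_E ** 2 * ratio ** 2 * (ratio + 1.0 / ratio - np.sin(theta) ** 2)

def total_cross_section(alpha, n=20_000):
    """sigma = 2*pi * integral over theta in [0, pi] of dsigma/dOmega * sin(theta)."""
    theta = np.linspace(0.0, np.pi, n)
    integrand = klein_nishina(alpha, theta) * np.sin(theta)
    # trapezoidal rule, written out to stay independent of the NumPy version
    return 2.0 * np.pi * float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(theta)))

print(total_cross_section(1e-4))   # close to the Thomson value ~6.65e-25 cm^2
print(total_cross_section(1.0))    # noticeably smaller at alpha = 1 (511 keV photons)
```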

  6. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    Full Text Available The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and the static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
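
    A minimal sketch of how such a feature image could be formed, under the assumption that the AGDI is the per-pixel average of absolute differences between adjacent binary silhouettes (the frames below are random placeholders rather than real gait data):

```python
import numpy as np

def average_gait_differential_image(silhouettes):
    """silhouettes: array of shape (n_frames, H, W) with binary (0/1) values.
    Returns the per-pixel average of absolute differences between adjacent frames."""
    diffs = np.abs(np.diff(silhouettes.astype(np.float64), axis=0))
    return diffs.mean(axis=0)

# Placeholder gait sequence: 30 random binary frames of size 64 x 44.
rng = np.random.default_rng(2)
frames = (rng.random((30, 64, 44)) > 0.5).astype(np.uint8)

agdi = average_gait_differential_image(frames)
print(agdi.shape, float(agdi.min()), float(agdi.max()))
# In a real pipeline this image would then be fed to 2DPCA for feature extraction.
```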

  7. Reynolds averaged simulation of unsteady separated flow

    International Nuclear Information System (INIS)

    Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.

    2003-01-01

    The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flow around square cylinder and over a wall-mounted cube are simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better concurrence with available experimental data than has been achieved with steady computation

  8. Angle-averaged Compton cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; phi = angle between photon direction and electron direction in the laboratory frame (LF); theta = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and tau = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over phi, theta, and tau. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.

  9. Maximum gravitational redshift of white dwarfs

    International Nuclear Information System (INIS)

    Shapiro, S.L.; Teukolsky, S.A.

    1976-01-01

    The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ₂, ζ₃, and ζ₄), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores.

  10. The balanced survivor average causal effect.

    Science.gov (United States)

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-05-07

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
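
    The comparison described above can be sketched numerically as follows; this heavily simplified version ignores censoring, the bias expressions and the sensitivity analyses discussed in the paper, and all data are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)

def balanced_sace_estimate(outcome, survival, arm, frac=0.5):
    """Compare mean outcomes between the top `frac` longest-surviving patients of each arm."""
    means = {}
    for a in (0, 1):
        surv_a = survival[arm == a]
        out_a = outcome[arm == a]
        cutoff = np.quantile(surv_a, 1.0 - frac)     # survival-time threshold for this arm
        means[a] = out_a[surv_a >= cutoff].mean()
    return means[1] - means[0]

# Simulated placeholder trial: treatment (arm = 1) lengthens survival and raises the outcome.
n = 400
arm = rng.integers(0, 2, n)
survival = rng.exponential(5.0 + 2.0 * arm)
outcome = 50.0 + 3.0 * arm + 0.5 * survival + rng.standard_normal(n)

print(balanced_sace_estimate(outcome, survival, arm, frac=0.5))
```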

  11. Comparison of bile acid synthesis determined by isotope dilution versus fecal acidic sterol output in human subjects

    International Nuclear Information System (INIS)

    Duane, W.C.; Holloway, D.E.; Hutton, S.W.; Corcoran, P.J.; Haas, N.A.

    1982-01-01

    Fecal acidic sterol output has been found to be much lower than bile acid synthesis determined by isotope dilution. Because of this confusing discrepancy, we compared these 2 measurements done simultaneously on 13 occasions in 5 normal volunteers. In contrast to previous findings, bile acid synthesis by the Lindstedt isotope dilution method averaged 16.3% lower than synthesis simultaneously determined by fecal acidic sterol output (95% confidence limit for the difference - 22.2 to -10.4%). When one-sample determinations of bile acid pools were substituted for Lindstedt pools, bile acid synthesis by isotope dilution averaged 5.6% higher than synthesis by fecal acidic sterol output (95% confidence limits -4.9 to 16.1%). These data indicate that the 2 methods yield values in reasonably close agreement with one another. If anything, fecal acidic sterol outputs are slightly higher than synthesis by isotope dilution

  12. Skill and reliability of climate model ensembles at the Last Glacial Maximum and mid-Holocene

    Directory of Open Access Journals (Sweden)

    J. C. Hargreaves

    2013-03-01

    Full Text Available Paleoclimate simulations provide us with an opportunity to critically confront and evaluate the performance of climate models in simulating the response of the climate system to changes in radiative forcing and other boundary conditions. Hargreaves et al. (2011) analysed the reliability of the Paleoclimate Modelling Intercomparison Project, PMIP2, model ensemble with respect to the MARGO sea surface temperature data synthesis (MARGO Project Members, 2009) for the Last Glacial Maximum (LGM, 21 ka BP). Here we extend that work to include a new comprehensive collection of land surface data (Bartlein et al., 2011), and introduce a novel analysis of the predictive skill of the models. We include output from the PMIP3 experiments, from the two models for which suitable data are currently available. We also perform the same analyses for the PMIP2 mid-Holocene (6 ka BP) ensembles and available proxy data sets. Our results are predominantly positive for the LGM, suggesting that as well as the global mean change, the models can reproduce the observed pattern of change on the broadest scales, such as the overall land–sea contrast and polar amplification, although the more detailed sub-continental scale patterns of change remain elusive. In contrast, our results for the mid-Holocene are substantially negative, with the models failing to reproduce the observed changes with any degree of skill. One cause of this problem could be that the globally- and annually-averaged forcing anomaly is very weak at the mid-Holocene, and so the results are dominated by the more localised regional patterns in the parts of the globe for which data are available. The root cause of the model-data mismatch at these scales is unclear. If the proxy calibration is itself reliable, then representativity error in the data-model comparison, and missing climate feedbacks in the models are other possible sources of error.

  13. Turbulent Output-Based Anisotropic Adaptation

    Science.gov (United States)

    Park, Michael A.; Carlson, Jan-Renee

    2010-01-01

    Controlling discretization error is a remaining challenge for computational fluid dynamics simulation. Grid adaptation is applied to reduce estimated discretization error in drag or pressure integral output functions. To enable application to high O(10⁷) Reynolds number turbulent flows, a hybrid approach is utilized that freezes the near-wall boundary layer grids and adapts the grid away from the no-slip boundaries. The hybrid approach is not applicable to problems with under resolved initial boundary layer grids, but is a powerful technique for problems with important off-body anisotropic features. Supersonic nozzle plume, turbulent flat plate, and shock-boundary layer interaction examples are presented with comparisons to experimental measurements of pressure and velocity. Adapted grids are produced that resolve off-body features in locations that are not known a priori.

  14. Proportional chamber with data analog output

    International Nuclear Information System (INIS)

    Popov, V.E.; Prokof'ev, A.N.

    1977-01-01

    A proportional multiwire chamber is described. The chamber makes it possible to determine the angles at which a pion strikes a polarized target. A delay line made of 60-core flat cable is used for reading out signals from the chamber. From the delay line, signals are amplified and successively fed into shapers and a time-to-amplitude converter. The amplitude of the time-to-amplitude converter output signal unambiguously determines the coordinate of the point at which a particle strikes the chamber plane. Circuits of the amplifiers, which consist of a preamplifier with a gain of 30 and a main amplifier with adjustable gain, are also given. Data from testing the chamber with a 450 MeV pion beam are presented. The chamber features an efficiency of about 98 per cent under a load of 2×10⁵ s⁻¹.

  15. Optimizing microwave photodetection: input-output theory

    Science.gov (United States)

    Schöndorf, M.; Govia, L. C. G.; Vavilov, M. G.; McDermott, R.; Wilhelm, F. K.

    2018-04-01

    High fidelity microwave photon counting is an important tool for various areas from background radiation analysis in astronomy to the implementation of circuit quantum electrodynamic architectures for the realization of a scalable quantum information processor. In this work we describe a microwave photon counter coupled to a semi-infinite transmission line. We employ input-output theory to examine a continuously driven transmission line as well as traveling photon wave packets. Using analytic and numerical methods, we calculate the conditions on the system parameters necessary to optimize measurement and achieve high detection efficiency. With this we can derive a general matching condition depending on the different system rates, under which the measurement process is optimal.

  16. Improvement of Output Power of ECF Micromotor

    Science.gov (United States)

    Yokota, Shinichi; Kawamura, Kiyomi; Takemura, Kenjiro; Edamura, Kazuya

    Electro-conjugate fluid (ECF) is a kind of dielectric fluid that produces a jet flow (ECF jet) when subjected to a high DC voltage. By using the ECF jet, a new type of micromotor with a simple structure and light weight can be realized. Up to now, we have developed a disk-plate-type ECF micromotor with an inner diameter of 9 mm. In this study, we develop novel ECF micromotors with an inner diameter of 5 mm in order to improve the output power density. First, we designed and produced ECF micromotors with 4-layered and 8-layered disk-plate rotors. Then, the performance of the motors was measured. The experimental results confirm that the motors developed here have higher performance than the previous one.

  17. Floating Gate CMOS Dosimeter With Frequency Output

    Science.gov (United States)

    Garcia-Moreno, E.; Isern, E.; Roca, M.; Picos, R.; Font, J.; Cesari, J.; Pineda, A.

    2012-04-01

    This paper presents a gamma radiation dosimeter based on a floating gate sensor. The sensor is coupled with signal processing circuitry, which furnishes a square wave output signal, the frequency of which depends on the total dose. Like any other floating gate dosimeter, it exhibits zero bias operation and reprogramming capabilities. The dosimeter has been designed in a standard 0.6 μm CMOS technology. The whole dosimeter occupies a silicon area of 450 μm × 250 μm. The initial sensitivity to a radiation dose is Hz/rad, and to temperature and supply voltage is kHz/°C and 0.067 kHz/mV, respectively. The lowest detectable dose is less than 1 rad.

  18. Application of computer voice input/output

    International Nuclear Information System (INIS)

    Ford, W.; Shirk, D.G.

    1981-01-01

    The advent of microprocessors and other large-scale integration (LSI) circuits is making voice input and output for computers and instruments practical; specialized LSI chips for speech processing are appearing on the market. Voice can be used to input data or to issue instrument commands; this allows the operator to engage in other tasks, move about, and use standard data entry systems. Voice synthesizers can generate audible, easily understood instructions. Using voice characteristics, a control system can verify speaker identity for security purposes. Two simple voice-controlled systems have been designed at Los Alamos for nuclear safeguards applications. Each can easily be expanded as time allows. The first system is for instrument control; it accepts voice commands and issues audible operator prompts. The second system is for access control. The speaker's voice is used to verify his identity and to actuate external devices.

  19. Advanced Output Coupling for High Power Gyrotrons

    Energy Technology Data Exchange (ETDEWEB)

    Read, Michael [Calabazas Creek Research, Inc., San Mateo, CA (United States); Ives, Robert Lawrence [Calabazas Creek Research, Inc., San Mateo, CA (United States); Marsden, David [Calabazas Creek Research, Inc., San Mateo, CA (United States); Collins, George [Calabazas Creek Research, Inc., San Mateo, CA (United States); Temkin, Richard [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Guss, William [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Lohr, John [General Atomics, La Jolla, CA (United States); Neilson, Jeffrey [Lexam Research, Redwood City, CA (United States); Bui, Thuc [Calabazas Creek Research, Inc., San Mateo, CA (United States)

    2016-11-28

    The Phase II program developed an internal RF coupler that transforms the whispering gallery RF mode produced in gyrotron cavities to an HE11 waveguide mode propagating in corrugated waveguide. This power is extracted from the vacuum using a broadband, chemical vapor deposited (CVD) diamond, Brewster angle window capable of transmitting more than 1.5 MW CW of RF power over a broad range of frequencies. This coupling system eliminates the Mirror Optical Units now required to externally couple Gaussian output power into corrugated waveguide, significantly reducing system cost and increasing efficiency. The program simulated the performance using a broad range of advanced computer codes to optimize the design. Both a direct coupler and Brewster angle window were built and tested at low and high power. Test results confirmed the performance of both devices and demonstrated they are capable of achieving the required performance for scientific, defense, industrial, and medical applications.

  20. A simple maximum power point tracker for thermoelectric generators

    International Nuclear Information System (INIS)

    Paraskevas, Alexandros; Koutroulis, Eftichios

    2016-01-01

    Highlights: • A Maximum Power Point Tracking (MPPT) method for thermoelectric generators is proposed. • A power converter is controlled to operate on a pre-programmed locus. • The proposed MPPT technique has the advantage of operational and design simplicity. • The experimental average deviation from the MPP power of the TEG source is 1.87%. - Abstract: ThermoElectric Generators (TEGs) are capable of harvesting ambient thermal energy for power-supplying sensors, actuators, biomedical devices, etc., in the μW to several-hundred-watt range. In this paper, a Maximum Power Point Tracking (MPPT) method for TEG elements is proposed, which is based on controlling a power converter such that it operates on a pre-programmed locus of operating points close to the MPPs of the power–voltage curves of the TEG power source. Compared to the past-proposed MPPT methods for TEGs, the technique presented in this paper has the advantage of operational and design simplicity. Thus, its implementation using off-the-shelf microelectronic components with low-power consumption characteristics is enabled, without requiring specialized integrated circuits or signal processing units of high development cost. Experimental results are presented, which demonstrate that for MPP power levels of the TEG source in the range of 1–17 mW, the average deviation of the power produced by the proposed system from the MPP power of the TEG source is 1.87%.
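
    The paper's pre-programmed locus is not reproduced here; instead, the sketch below illustrates a closely related simple rule that exploits the near-linear internal resistance of a TEG, which places the maximum power point at roughly half the open-circuit voltage. The module parameters are assumed values.

```python
# Idealized TEG model: open-circuit voltage V_OC and internal resistance R_INT (assumed values;
# in practice V_OC depends on the temperature difference across the module).
V_OC = 4.0     # volts
R_INT = 2.0    # ohms

def teg_power(v_load):
    """Power delivered to the load when the converter regulates the TEG terminal voltage."""
    current = (V_OC - v_load) / R_INT
    return v_load * current

# Fractional open-circuit-voltage rule: regulate the terminal voltage to ~0.5 * V_OC.
v_ref = 0.5 * V_OC
print("power at V_OC/2   :", teg_power(v_ref))        # analytic maximum V_OC^2 / (4*R_INT) = 2 W
print("power at 0.3*V_OC :", teg_power(0.3 * V_OC))   # noticeably lower away from the MPP
```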

  1. Maximum entropy analysis of EGRET data

    DEFF Research Database (Denmark)

    Pohl, M.; Strong, A.W.

    1997-01-01

    EGRET data are usually analysed on the basis of the Maximum-Likelihood method [ma96] in a search for point sources in excess of a model for the background radiation (e.g. [hu97]). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.

  2. The Maximum Resource Bin Packing Problem

    DEFF Research Database (Denmark)

    Boyar, J.; Epstein, L.; Favrholdt, L.M.

    2006-01-01

    Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. For the off-line variant, we analyze two natural approximation algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
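
    For reference, a minimal implementation of First-Fit-Decreasing for ordinary bin packing is given below; the maximum resource variant analyzed in the paper imposes additional requirements that are not modeled here, and the item sizes and bin capacity are illustrative.

```python
def first_fit_decreasing(sizes, capacity=1.0):
    """Place items (largest first) into the first open bin with enough remaining room."""
    remaining = []                           # remaining capacity of each open bin
    assignment = []                          # bin index for each item, in sorted order
    for size in sorted(sizes, reverse=True):
        for j, room in enumerate(remaining):
            if size <= room + 1e-12:         # fits in an existing bin
                remaining[j] -= size
                assignment.append(j)
                break
        else:                                # no open bin fits: open a new one
            remaining.append(capacity - size)
            assignment.append(len(remaining) - 1)
    return len(remaining), assignment

items = [0.55, 0.3, 0.6, 0.25, 0.4, 0.2]
print(first_fit_decreasing(items))           # number of bins used and the item-to-bin assignment
```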

  3. Shower maximum detector for SDC calorimetry

    International Nuclear Information System (INIS)

    Ernwein, J.

    1994-01-01

    A prototype for the SDC end-cap (EM) calorimeter complete with a pre-shower and a shower maximum detector was tested in beams of electrons and π's at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wavelength-shifting fibers. The design and construction of the shower maximum detector is described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower max detector in the test beam are shown. (authors). 4 refs., 5 figs

  4. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  5. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

  6. Compact and accurate linear and nonlinear autoregressive moving average model parameter estimation using laguerre functions

    DEFF Research Database (Denmark)

    Chon, K H; Cohen, R J; Holstein-Rathlou, N H

    1997-01-01

    A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre...
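
    The full Laguerre-kernel estimator is beyond a short sketch; as a much-simplified linear stand-in, the snippet below fits an ARX model (past outputs plus past inputs, no nonlinear kernels) by ordinary least squares, which conveys the idea of including past output values among the regressors. The model orders and the simulated system are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a simple linear system: y[k] = 0.6*y[k-1] - 0.2*y[k-2] + 0.5*u[k-1] + noise.
n = 2000
u = rng.standard_normal(n)
y = np.zeros(n)
for k in range(2, n):
    y[k] = 0.6 * y[k - 1] - 0.2 * y[k - 2] + 0.5 * u[k - 1] + 0.05 * rng.standard_normal()

# Regressor matrix built from past outputs and past inputs (ARX(2,1) structure).
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print("estimated coefficients:", theta.round(3))   # should be close to [0.6, -0.2, 0.5]
```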

  7. Area/latency optimized early output asynchronous full adders and relative-timed ripple carry adders.

    Science.gov (United States)

    Balasubramanian, P; Yamashita, S

    2016-01-01

    This article presents two area/latency optimized gate level asynchronous full adder designs which correspond to early output logic. The proposed full adders are constructed using the delay-insensitive dual-rail code and adhere to the four-phase return-to-zero handshaking. For an asynchronous ripple carry adder (RCA) constructed using the proposed early output full adders, the relative-timing assumption becomes necessary and the inherent advantages of the relative-timed RCA are: (1) computation with valid inputs, i.e., forward latency is data-dependent, and (2) computation with spacer inputs involves a bare minimum constant reverse latency of just one full adder delay, thus resulting in the optimal cycle time. With respect to different 32-bit RCA implementations, and in comparison with the optimized strong-indication, weak-indication, and early output full adder designs, one of the proposed early output full adders achieves respective reductions in latency by 67.8, 12.3 and 6.1 %, while the other proposed early output full adder achieves corresponding reductions in area by 32.6, 24.6 and 6.9 %, with practically no power penalty. Further, the proposed early output full adders based asynchronous RCAs enable minimum reductions in cycle time by 83.4, 15, and 8.8 % when considering carry-propagation over the entire RCA width of 32-bits, and maximum reductions in cycle time by 97.5, 27.4, and 22.4 % for the consideration of a typical carry chain length of 4 full adder stages, when compared to the least of the cycle time estimates of various strong-indication, weak-indication, and early output asynchronous RCAs of similar size. All the asynchronous full adders and RCAs were realized using standard cells in a semi-custom design fashion based on a 32/28 nm CMOS process technology.

  8. Industrial output restriction and the Kyoto protocol. An input-output approach with application to Canada

    International Nuclear Information System (INIS)

    Lixon, Benoit; Thomassin, Paul J.; Hamaide, Bertrand

    2008-01-01

    The objective of this paper is to assess the economic impacts of reducing greenhouse gas emissions by decreasing industrial output in Canada to a level that will meet the target set out in the Kyoto Protocol. The study uses an ecological-economic Input-Output model combining economic components valued in monetary terms with ecologic components - GHG emissions - expressed in physical terms. Economic and greenhouse gas emissions data for Canada are computed in the same sectoral disaggregation. Three policy scenarios are considered: the first one uses the direct emission coefficients to allocate the reduction in industrial output, while the other two use the direct plus indirect emission coefficients. In the first two scenarios, the reduction in industrial sector output is allocated uniformly across sectors while it is allocated to the 12 largest emitting industries in the last one. The estimated impacts indicate that the results vary with the different allocation methods. The third policy scenario, allocation to the 12 largest emitting sectors, is the most cost effective of the three, as the impact of the Kyoto Protocol reduces Gross Domestic Product by 3.1% compared to 24% and 8.1% in the first two scenarios. Computed economic costs should be considered as upper bounds because the model assumes immediate adjustment to the Kyoto Protocol and because flexibility mechanisms are not incorporated. The resulting upper-bound impact of the third scenario may seem to contradict those who claim that the Kyoto Protocol would place an unbearable burden on the Canadian economy. (author)
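    For readers unfamiliar with the mechanics, the sketch below shows, with hypothetical three-sector numbers (not the Canadian tables used in the study), how direct emission coefficients and direct-plus-indirect emission intensities are obtained from a Leontief input-output model.

      import numpy as np

      # Hypothetical 3-sector example: A is the technical coefficient matrix,
      # f is final demand, e is direct GHG emissions per unit of gross output.
      A = np.array([[0.10, 0.20, 0.05],
                    [0.15, 0.05, 0.10],
                    [0.05, 0.10, 0.15]])
      f = np.array([100.0, 150.0, 80.0])
      e = np.array([0.8, 0.3, 0.1])          # direct emission coefficients

      L = np.linalg.inv(np.eye(3) - A)       # Leontief inverse
      x = L @ f                              # gross output by sector
      direct = e * x                         # direct emissions by sector
      total_intensity = e @ L                # direct + indirect emissions per unit of final demand
      attributed = total_intensity * f       # emissions attributed to each sector's final demand

      print("gross output:", x)
      print("direct emissions by sector:", direct, "total:", direct.sum())
      print("direct+indirect intensities:", total_intensity)
      print("emissions by final demand:", attributed, "total:", attributed.sum())  # equals direct.sum()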

  9. Maximum attainable power density and wall load in tokamaks underlying reactor relevant constraints

    International Nuclear Information System (INIS)

    Borrass, K.; Buende, R.

    1979-09-01

    The characteristic data of tokamaks optimized with respect to their power density or wall load are determined. Reactor relevant constraints are imposed, such as a fixed plant net power output, a fixed blanket thickness and the dependence of the maximum toroidal field on the geometry and conductor material. The impact of finite burn times is considered. Various scaling laws of the toroidal beta with the aspect ratio are discussed. (orig.)

  10. Dense Output for Strong Stability Preserving Runge–Kutta Methods

    KAUST Repository

    Ketcheson, David I.; Loczi, Lajos; Jangabylova, Aliya; Kusmanov, Adil

    2016-01-01

    We investigate dense output formulae (also known as continuous extensions) for strong stability preserving (SSP) Runge–Kutta methods. We require that the dense output formula also possess the SSP property, ideally under the same step
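    As a generic illustration of what a continuous extension provides (not the SSP-preserving formulae investigated in the record above), the sketch below attaches a cubic Hermite dense-output interpolant to a classical RK4 step, so the solution can be evaluated anywhere inside the step; the test problem is hypothetical.

      import numpy as np

      def rk4_step(f, t, y, h):
          # Classical RK4 step; also return the slopes at both endpoints for dense output.
          k1 = f(t, y)
          k2 = f(t + h/2, y + h/2 * k1)
          k3 = f(t + h/2, y + h/2 * k2)
          k4 = f(t + h, y + h * k3)
          y_new = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
          return y_new, k1, f(t + h, y_new)

      def hermite_dense_output(y0, f0, y1, f1, h, theta):
          # Cubic Hermite interpolant on [t, t + h]; theta in [0, 1].
          return ((1 - theta)*y0 + theta*y1
                  + theta*(theta - 1)*((1 - 2*theta)*(y1 - y0) + (theta - 1)*h*f0 + theta*h*f1))

      f = lambda t, y: -y                      # y' = -y, exact solution exp(-t)
      t, y, h = 0.0, 1.0, 0.2
      y1, f0, f1 = rk4_step(f, t, y, h)
      for theta in (0.25, 0.5, 0.75):
          approx = hermite_dense_output(y, f0, y1, f1, h, theta)
          print(theta, approx, np.exp(-(t + theta*h)))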

  11. Nonsymmetric entropy and maximum nonsymmetric entropy principle

    International Nuclear Information System (INIS)

    Liu Chengshi

    2009-01-01

    Under the frame of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, is defined. The maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived from this principle naturally. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis's entropy, in deriving power laws.
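    For orientation, the standard Shannon-entropy route to a power law (not the nonsymmetric-entropy derivation of the record above) can be sketched as follows, where the constraint on the mean logarithm is the assumed ingredient:

      \begin{aligned}
      &\text{Maximize } S[p] = -\int p(x)\,\ln p(x)\,dx
        \quad\text{subject to}\quad \int p(x)\,dx = 1,\qquad \int p(x)\,\ln x\,dx = \mu. \\
      &\text{Stationarity of } S - \lambda_0\!\int p\,dx - \lambda_1\!\int p\,\ln x\,dx
        \ \text{gives}\ -\ln p(x) - 1 - \lambda_0 - \lambda_1 \ln x = 0, \\
      &\text{hence } p(x) = C\,x^{-\lambda_1},
        \ \text{a power law whose exponent is fixed by the constraint } \langle \ln x \rangle = \mu.
      \end{aligned}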

  12. Maximum speed of dewetting on a fiber

    NARCIS (Netherlands)

    Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus

    2011-01-01

    A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed

  13. Maximum potential preventive effect of hip protectors

    NARCIS (Netherlands)

    van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.

    2007-01-01

    OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who

  14. Maximum gain of Yagi-Uda arrays

    DEFF Research Database (Denmark)

    Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.

    1971-01-01

    Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification.

  15. correlation between maximum dry density and cohesion

    African Journals Online (AJOL)

    HOD

    The respective symbols denote maximum dry density, plastic limit, and liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities, there are some other correlations that have been investigated by other researchers. The well-known...

  16. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.

  17. The maximum-entropy method in superspace

    Czech Academy of Sciences Publication Activity Database

    van Smaalen, S.; Palatinus, Lukáš; Schneider, M.

    2003-01-01

    Roč. 59, - (2003), s. 459-469 ISSN 0108-7673 Grant - others: DFG(DE) XX Institutional research plan: CEZ:AV0Z1010914 Keywords: maximum-entropy method * aperiodic crystals * electron density Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 1.558, year: 2003

  18. Achieving maximum sustainable yield in mixed fisheries

    NARCIS (Netherlands)

    Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna

    2017-01-01

    Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example

  19. 5 CFR 534.203 - Maximum stipends.

    Science.gov (United States)

    2010-01-01

    ... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...

  20. Minimal length, Friedmann equations and maximum density

    Energy Technology Data Exchange (ETDEWEB)

    Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)

    2014-06-16

    Inspired by Jacobson's thermodynamic approach, Cai et al. have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy which is reachable in a finite time.

  1. Generation and Applications of High Average Power Mid-IR Supercontinuum in Chalcogenide Fibers

    OpenAIRE

    Petersen, Christian Rosenberg

    2016-01-01

    Mid-infrared supercontinuum with up to 54.8 mW average power and a maximum bandwidth of 1.77-8.66 μm is demonstrated by pumping tapered chalcogenide photonic crystal fibers with an MHz-repetition-rate parametric source at 4 μm.

  2. Development of Compact Ozonizer with High Ozone Output by Pulsed Power

    Science.gov (United States)

    Tanaka, Fumiaki; Ueda, Satoru; Kouno, Kanako; Sakugawa, Takashi; Akiyama, Hidenori; Kinoshita, Youhei

    A conventional ozonizer with a high ozone output using silent or surface discharges needs a cooling system and a dielectric barrier, and therefore becomes a large machine. A compact ozonizer without the cooling system and the dielectric barrier has been developed by using a pulsed-power-generated discharge. Wire-to-plane electrodes made of metal have been used; however, the ozone output was low. Here, a compact and high repetition rate pulsed power generator is used as the electric source of a compact ozonizer. An ozone output of 6.1 g/h and an ozone yield of 86 g/kWh are achieved at 500 pulses per second, an input average power of 280 W and an air flow rate of 20 L/min.

  3. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
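    As a rough sketch of the final integration step only (the synthetic mean-force data below stand in for forces averaged from simulation, and this is not the authors' instantaneous-force formula), the free energy profile follows from integrating the negative mean force along the selected coordinate:

      import numpy as np
      from scipy.integrate import cumulative_trapezoid

      # Hypothetical window centers and mean forces along a coordinate xi; in practice the
      # mean force at each xi would come from averaging the instantaneous force in simulation.
      xi = np.linspace(-np.pi, np.pi, 73)
      model_A = 2.0 * (1.0 - np.cos(xi))                       # model free-energy profile
      mean_force = -np.gradient(model_A, xi)                   # mean force = -dA/dxi
      mean_force += np.random.default_rng(2).normal(0.0, 0.05, xi.size)  # mimic sampling noise

      # Free energy by integrating the negative mean force along the coordinate.
      A_est = -cumulative_trapezoid(mean_force, xi, initial=0.0)
      A_est -= A_est.min()                                     # set the minimum as the reference

      print("max deviation from the model profile:",
            float(np.max(np.abs(A_est - (model_A - model_A.min())))))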

  4. Geographic Gossip: Efficient Averaging for Sensor Networks

    Science.gov (United States)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\\epsilon$ using $O(\\frac{n^{1.5}}{\\sqrt{\\log n}} \\log \\epsilon^{-1})$ radio transmissions, which yields a $\\sqrt{\\frac{n}{\\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
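    The baseline the authors improve upon can be sketched in a few lines: standard pairwise randomized gossip on a ring, which approaches the average only slowly because of the poor mixing of random walks on such graphs. Node count, topology and round counts below are hypothetical.

      import numpy as np

      def pairwise_gossip(values, edges, n_rounds, rng):
          # Standard randomized gossip: pick a random edge (i, j) and replace both
          # node values with their pairwise average; the global mean is preserved.
          x = values.astype(float).copy()
          for _ in range(n_rounds):
              i, j = edges[rng.integers(len(edges))]
              x[i] = x[j] = 0.5 * (x[i] + x[j])
          return x

      n = 50
      values = np.random.default_rng(3).uniform(0.0, 10.0, n)
      edges = [(i, (i + 1) % n) for i in range(n)]             # ring topology

      for rounds in (1_000, 10_000, 100_000):
          x = pairwise_gossip(values, edges, rounds, np.random.default_rng(4))
          print(rounds, "rounds -> max deviation from true average:",
                float(np.abs(x - values.mean()).max()))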

  5. High-average-power solid state lasers

    International Nuclear Information System (INIS)

    Summers, M.A.

    1989-01-01

    In 1987, a broad-based, aggressive R&D program was initiated, aimed at developing the technologies necessary to make possible the use of solid state lasers capable of delivering medium to high average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and applications of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wave front aberrations in zig-zag slabs, understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs, and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs

  6. The concept of average LET values determination

    International Nuclear Information System (INIS)

    Makarewicz, M.

    1981-01-01

    The concept of determining average LET (linear energy transfer) values, i.e. ordinary moments of LET in the absorbed dose distribution versus LET, for ionizing radiation of any kind and any spectrum (even unknown ones), is presented. The method is based on measurement of the ionization current at several values of the voltage supplying an ionization chamber operating under conditions of columnar recombination of ions, or ion recombination in clusters, while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values one can obtain coefficients of the expression which can be interpreted as values of the LET moments. One of the advantages of the method is its experimental and computational simplicity. It has been shown that for numerical estimation of certain effects dependent on the LET of radiation it is not necessary to know the dose distribution but only a number of parameters of the distribution, i.e. the LET moments. (author)

  7. On spectral averages in nuclear spectroscopy

    International Nuclear Information System (INIS)

    Verbaarschot, J.J.M.

    1982-01-01

    In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, which is defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method for transforming fixed angular momentum projection traces into fixed angular momentum for the configuration space traces is developed. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)

  8. MCNP output data analysis with ROOT (MODAR)

    Science.gov (United States)

    Carasco, C.

    2010-12-01

    MCNP Output Data Analysis with ROOT (MODAR) is a tool based on CERN's ROOT software. MODAR has been designed to handle time-energy data issued by MCNP simulations of neutron inspection devices using the associated particle technique. MODAR exploits ROOT's Graphical User Interface and functionalities to visualize and process MCNP simulation results in a fast and user-friendly way. MODAR allows one to take into account the detection system time resolution (which is not possible with MCNP) as well as detector energy response functions and counting statistics in a straightforward way. New version program summary. Program title: MODAR. Catalogue identifier: AEGA_v1_1. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGA_v1_1.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 150 927. No. of bytes in distributed program, including test data, etc.: 4 981 633. Distribution format: tar.gz. Programming language: C++. Computer: most Unix workstations and PCs. Operating system: most Unix systems, Linux and Windows, provided the ROOT package has been installed. Examples were tested under SUSE Linux and Windows XP. RAM: depends on the size of the MCNP output file; the example presented in the article, which involves three two-dimensional 139×740-bin histograms, allocates about 60 MB (running under ROOT and including consumption by ROOT itself). Classification: 17.6. Catalogue identifier of previous version: AEGA_v1_0. Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 1161. External routines: ROOT version 5.24.00 (http://root.cern.ch/drupal/). Does the new version supersede the previous version?: Yes. Nature of problem: the output of a MCNP simulation is an ascii file. The data processing is usually performed by copying and pasting the relevant parts of the ascii

  9. Effects of Measurement Error on the Output Gap in Japan

    OpenAIRE

    Koichiro Kamada; Kazuto Masuda

    2000-01-01

    Potential output is the largest amount of products that can be produced by fully utilizing available labor and capital stock; the output gap is defined as the discrepancy between actual and potential output. If data on production factors contain measurement errors, total factor productivity (TFP) cannot be estimated accurately from the Solow residual(i.e., the portion of output that is not attributable to labor and capital inputs). This may give rise to distortions in the estimation of potent...

  10. Increasing Efficiency by Maximizing Electrical Output

    Science.gov (United States)

    2016-07-27

    included all components. An equally important figure is the forecast for the sales price of the ORCA when production is on a higher volume commercial...

  11. Maximum-power-point tracking control of solar heating system

    KAUST Repository

    Huang, Bin-Juine

    2012-11-01

    The present study developed a maximum-power-point tracking (MPPT) control technology for a solar heating system to minimize the pumping power consumption at an optimal heat collection. The net solar energy gain Q_net (= Q_s - W_p/η_e) was experimentally found to be the cost function for MPPT, with a maximum point. The feedback tracking control system was developed to track the optimal Q_net (denoted Q_max). A tracking filter derived from the thermal analytical model of the solar heating system was used to determine the instantaneous tracking target Q_max(t). The system transfer-function model of the solar heating system was also derived experimentally using a step response test and used in the design of the tracking feedback control system. The PI controller was designed for a tracking target Q_max(t) with a quadratic time function. The MPPT control system was implemented using a microprocessor-based controller and the test results show good tracking performance with small tracking errors. It is seen that the average mass flow rate for the specific test periods in five different days is between 18.1 and 22.9 kg/min with average pumping power between 77 and 140 W, which is greatly reduced as compared to the standard flow rate of 31 kg/min and pumping power of 450 W, based on the flow rate of 0.02 kg/s·m² defined in the ANSI/ASHRAE 93-1986 Standard and the total collector area of 25.9 m². The average net solar heat collected Q_net is between 8.62 and 14.1 kW depending on weather conditions. The MPPT control of the solar heating system has been verified to be able to minimize the pumping energy consumption with optimal solar heat collection. © 2012 Elsevier Ltd.

  12. S-Band AlGaN/GaN power amplifier MMIC with over 20 Watt output power

    NARCIS (Netherlands)

    van Heijningen, M; Visser, G.C.; Wurfl, J.; van Vliet, Frank Edward

    2008-01-01

    This paper presents the design of an S-band HPA MMIC in AlGaN/GaN CPW technology for radar TR-module application. The trade-offs of using an MMIC solution versus discrete power devices are discussed. The MMIC shows a maximum output power of 38 Watt at 37% Power Added Efficiency at 3.1 GHz.

  13. Maximum wind energy extraction strategies using power electronic converters

    Science.gov (United States)

    Wang, Quincy Qing

    2003-10-01

    This thesis focuses on maximum wind energy extraction strategies for achieving the highest energy output of variable speed wind turbine power generation systems. Power electronic converters and controls provide the basic platform to accomplish the research of this thesis in both hardware and software aspects. In order to send wind energy to a utility grid, a variable speed wind turbine requires a power electronic converter to convert a variable voltage variable frequency source into a fixed voltage fixed frequency supply. Generic single-phase and three-phase converter topologies, converter control methods for wind power generation, as well as the developed direct drive generator, are introduced in the thesis for establishing variable-speed wind energy conversion systems. Variable speed wind power generation system modeling and simulation are essential methods both for understanding the system behavior and for developing advanced system control strategies. Wind generation system components, including wind turbine, 1-phase IGBT inverter, 3-phase IGBT inverter, synchronous generator, and rectifier, are modeled in this thesis using MATLAB/SIMULINK. The simulation results have been verified by a commercial simulation software package, PSIM, and confirmed by field test results. Since the dynamic time constants for these individual models are much different, a creative approach has also been developed in this thesis to combine these models for entire wind power generation system simulation. An advanced maximum wind energy extraction strategy relies not only on proper system hardware design, but also on sophisticated software control algorithms. Based on literature review and computer simulation on wind turbine control algorithms, an intelligent maximum wind energy extraction control algorithm is proposed in this thesis. This algorithm has a unique on-line adaptation and optimization capability, which is able to achieve maximum wind energy conversion efficiency through

  14. Maximum power point tracking: a cost saving necessity in solar energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Enslin, J H.R. [Stellenbosch Univ. (South Africa). Dept. of Electrical and Electronic Engineering

    1992-12-01

    A well engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking (MPPT), can improve cost effectiveness, has a higher reliability and can improve the quality of life in remote areas. A high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Through practical field measurements it is shown that a minimum input source saving of between 15 and 25% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small rating Remote Area Power Supply (RAPS) systems. The advantages are much greater for systems with large temperature variations and high power ratings. Other advantages include optimal sizing and system monitoring and control. (author).
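    A minimal sketch of the hill-climbing idea mentioned above, using a simplified (hypothetical) photovoltaic current-voltage curve rather than the battery-charging current actually maximized in the paper: the controller perturbs the operating point and keeps the direction that increased the measured power.

      import numpy as np

      def panel_current(v, irradiance=1.0):
          # Very simplified PV I-V curve (hypothetical parameters, for illustration only).
          i_sc, v_oc = 5.0 * irradiance, 21.0
          return np.clip(i_sc * (1.0 - np.exp((v - v_oc) / 2.0)), 0.0, None)

      def perturb_and_observe(v0=12.0, step=0.2, n_iter=200):
          # Hill-climbing MPPT: keep moving the operating voltage in the direction
          # that increased the measured power, and reverse direction otherwise.
          v, direction = v0, +1.0
          p_prev = v * panel_current(v)
          for _ in range(n_iter):
              v += direction * step
              p = v * panel_current(v)
              if p < p_prev:
                  direction = -direction      # power dropped: reverse the perturbation
              p_prev = p
          return v, p_prev

      v_mpp, p_mpp = perturb_and_observe()
      print(f"operating point settles near v = {v_mpp:.2f} V, p = {p_mpp:.1f} W")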

  15. Defining the Benefits, Outputs, and Knowledge Elements of Program Evaluation.

    Science.gov (United States)

    Zorzi, Rochelle; Perrin, Burt; McGuire, Martha; Long, Bud; Lee, Linda

    2002-01-01

    The Canadian Evaluation Society explored the benefits that can be attributed to program evaluation, the outputs necessary to achieve those benefits, and the knowledge and skills needed to produce outputs. Findings, which articulate benefits, outputs, and skills, can be used by evaluation organizations to support advocacy and professional…

  16. A multi-centre analysis of radiotherapy beam output measurement

    Directory of Open Access Journals (Sweden)

    Matthew A. Bolt

    2017-10-01

    Conclusions: Machine beam output measurements were largely within ±2% of 1.00 cGy/MU. Clear trends in measured output over time were seen, with some machines having large drifts which would result in additional burden to maintain within acceptable tolerances. This work may act as a baseline for future comparison of beam output measurements.

  17. Geographic trends of scientific output and citation practices in psychiatry.

    Science.gov (United States)

    Igoumenou, Artemis; Ebmeier, Klaus; Roberts, Nia; Fazel, Seena

    2014-12-06

    Measures of research productivity are increasingly used to determine how research should be evaluated and funding decisions made. In psychiatry, citation patterns within and between countries are not known, nor is it known whether these differ by choice of citation metric. In this study, we examined publication characteristics and citation practices in articles published in 50 Web of Science indexed psychiatric and relevant clinical neuroscience journals between January 2004 and December 2009, comprising 51,072 records that produced 375,962 citations. We compared citation patterns, including self-citations, between countries using standard χ² tests. We found that most publications came from the USA, with Germany second and the UK third in productivity. USA articles received the most citations and the highest citation rate, with an average of 11.5 citations per article. The UK received the second highest absolute number of citations, but came fourth by citation rate (9.7 citations/article), after the Netherlands (11.4 citations/article) and Canada (9.8 citations/article). Within the USA, Harvard University published the most articles and these articles were the most cited, with an average of 20.0 citations per paper. In Europe, UK institutions published and were cited most often. The Institute of Psychiatry/Kings College London was the leading institution in terms of number of published records and overall citations, while Oxford University had the highest citation rate (18.5 citations/record). There were no differences between the self-citation practices of American and European researchers. Articles that examined some aspect of treatment in psychiatry were the most published. In terms of diagnosis, papers about schizophrenia-spectrum disorders were the most published and the most cited. We found large differences between and within countries in terms of their research productivity in psychiatry and clinical neuroscience. In addition, the ranking of countries and institutions differed widely

  18. A novel 3D detector configuration enabling high quantum efficiency, low crosstalk, and low output capacitance

    International Nuclear Information System (INIS)

    Aurola, A.; Marochkin, V.; Tuuva, T.

    2016-01-01

    The benefits of pixelated planar direct conversion semiconductor radiation detectors comprising a thick fully depleted substrate are that they offer low crosstalk and small output capacitance, and that the planar configuration simplifies manufacturing. In order to provide high quantum efficiency for high energy X-rays and Gamma-rays, such a radiation detector should be as thick as possible. The maximum thickness, and thus the maximum quantum efficiency, has been limited by the substrate doping concentration: the lower the substrate doping, the thicker the detector can be before reaching the semiconductor material's electric breakdown field. Thick direct conversion semiconductor detectors comprising vertical three-dimensional electrodes protruding through the substrate have been previously proposed by Sherwood Parker in order to promote rapid detection of radiation. An additional advantage of these detectors is that their thickness is not limited by the substrate doping, i.e., the size of the maximum electric field in the detector does not depend on detector thickness. However, the thicker the substrate of such three-dimensional detectors is, the larger the output capacitance and thus the larger the output noise. In the novel direct conversion pixelated radiation detector utilizing a novel three-dimensional semiconductor architecture, which is proposed in this work, the detector thickness is not limited by the substrate doping, and the output capacitance is small and does not depend on the detector thickness. In addition, by incorporating an additional node into the novel three-dimensional semiconductor architecture, it can be utilized as a high voltage transistor that can deliver current across high voltages. Furthermore, it is possible to connect a voltage difference of any size to the proposed novel three-dimensional semiconductor architecture provided that it is thick enough; this is a novel feature that has not been previously possible for semiconductor

  19. Power output and efficiency of a thermoelectric generator under temperature control

    International Nuclear Information System (INIS)

    Chen, Wei-Hsin; Wu, Po-Hua; Wang, Xiao-Dong; Lin, Yu-Li

    2016-01-01

    Highlights:
    • Power output and efficiency of a thermoelectric generator (TEG) are studied.
    • Temperatures at the module's surfaces are approximated by sinusoidal functions.
    • Mean output power and efficiency are enhanced by the temperature oscillation.
    • The maximum mean efficiency of the TEG in this study is 8.45%.
    • The phase angle of 180° is a feasible operation for maximizing the performance.
    Abstract: Operation control is an effective way to improve the output power of thermoelectric generators (TEGs). The present study is intended to numerically investigate the power output and efficiency of a TEG and find the operating conditions for maximizing its performance. The temperature distributions at the hot side and cold side surfaces of the TEG are approximated by sinusoidal functions. The influences of the temperature amplitudes at the hot side and cold side surfaces, the phase angle, and the figure-of-merit (ZT) on the performance of the TEG are analyzed. The predictions indicate that the mean output power and efficiency of the TEG are significantly enhanced by the temperature oscillation, whereas the mean absorbed heat by the TEG is only slightly influenced. An increase in the temperature amplitude of the hot side surface and in the phase angle can effectively improve the performance. For a phase angle of 0°, a smaller temperature amplitude at the cold side surface renders better performance compared to that with a larger amplitude. When the ZT value increases from 0.736 to 1.8, the mean efficiency at a phase angle of 180° is amplified by a factor of 1.72, and the maximum mean efficiency is 8.45%. In summary, a larger temperature amplitude at the hot side surface with a phase angle of 180° is a feasible operation for maximizing the performance.
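    To see why the oscillation helps, a crude lumped model is enough: with output power proportional to the square of the temperature difference, a time-varying ΔT yields a higher mean power than the steady value, and a 180° phase shift maximizes the ΔT amplitude. The sketch below uses hypothetical Seebeck coefficient, resistances and temperatures, and ignores the Thomson and heat-conduction details of the paper's numerical model.

      import numpy as np

      # Simplified lumped TEG model: P = (alpha * dT)^2 * RL / (R + RL)^2 (hypothetical parameters).
      alpha, R, RL = 0.05, 2.0, 2.0          # Seebeck voltage per K, internal and load resistance
      Th0, Tc0 = 450.0, 300.0                # mean hot/cold side temperatures, K
      Ah, Ac = 50.0, 20.0                    # temperature oscillation amplitudes, K

      t = np.linspace(0.0, 2*np.pi, 2001)    # one oscillation period (dimensionless time)

      def mean_power(phase_deg):
          dT = (Th0 + Ah*np.sin(t)) - (Tc0 + Ac*np.sin(t + np.radians(phase_deg)))
          p = (alpha * dT)**2 * RL / (R + RL)**2
          return p.mean()

      steady = (alpha * (Th0 - Tc0))**2 * RL / (R + RL)**2
      for phase in (0.0, 90.0, 180.0):
          print(f"phase {phase:5.1f} deg: mean power {mean_power(phase):.3f} W "
                f"(steady-state {steady:.3f} W)")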

  20. Epilepsy Research in Iran: a Scientometric Analysis of Publications Output During 2000-2014.

    Science.gov (United States)

    Rasolabadi, Masoud; Rasouli-Ghahfarkhi, Seyedeh Moloud; Ardalan, Marlin; Kalhor, Marya Maryam; Seidi, Jamal; Gharib, Alireza

    2015-12-01

    The aim of this study is to analyze the epilepsy research output of Iran in national and global contexts, as reflected in its publication output indexed in the Scopus citation database during 2000-2014. This study was based on publications on epilepsy research by Iranian authors, retrieved in February 2015 from the Scopus citation database [www.scopus.com]. The search string used the keywords "epilepsy OR epilepsies" in the title, abstract and keywords fields, combined with Iran in the affiliation field. The cumulative publication output of Iran in epilepsy research consisted of 702 papers from 2000 to 2014, with an average of 46.53 papers per year. The total publication output of Iran in epilepsy research increased from 2 papers in 2000 to 88 papers in 2014. Hence, with 702 papers, Iran ranked 25th among the top 25 countries, with a global share of 0.82%. The average citation rate of Iranian papers increased from 0 in 2000 to 7.88 in 2014; overall, 3184 citations were received during those years. Iran collaborates with 36 countries, on no more than 244 of its papers (35% of its total papers). It is necessary to prepare conditions for epilepsy researchers to collaborate more with international scientific societies in order to produce more and higher-quality papers.