WorldWideScience

Sample records for average color-magnitude relation

  1. Color-magnitude relations of late-type galaxies

    OpenAIRE

    Chang, Ruixiang; Shen, Shiyin; Hou, Jinliang; Shu, Chenggang; Shao, Zhengyi

    2006-01-01

We use a large sample of galaxies drawn from the Sloan Digital Sky Survey (SDSS) and the Two Micron All Sky Survey (2MASS) to present Color-Magnitude Relations (CMRs) for late-type galaxies in both optical and optical-infrared bands. A sample from SDSS Data Release 4 (DR4) is selected to investigate the optical properties. Optical-infrared colors are estimated from a position-matched sample of DR4 and 2MASS, in which the photometric aperture mismatch between these two surveys is carefully correcte...

  2. THE AGE OF ELLIPTICALS AND THE COLOR-MAGNITUDE RELATION

    International Nuclear Information System (INIS)

    Schombert, James; Rakos, Karl

    2009-01-01

Using new narrowband color observations of early-type galaxies in clusters, we reconstruct the color-magnitude relation (CMR) with a higher degree of accuracy than previous work. We then use the spectroscopically determined ages and metallicities from three samples, combined with multimetallicity spectral energy distribution models, to compare predicted colors for galaxies with young ages (less than 8 Gyr) with the known CMR. We find that the CMR cannot be reproduced by the spectroscopically determined ages and metallicities in any of the samples despite the high internal accuracies of the spectroscopic indices. In contrast, using only the (Fe) index to determine [Fe/H], and assuming a mean age of 12 Gyr for a galaxy's stellar population, we derive colors that exactly match not only the color zero point of the CMR but also its slope. We consider the source of young age estimates, the Hβ index, and examine the conflict between red continuum colors and large Hβ values in galaxy spectra. We conclude that our current understanding of stellar populations is insufficient to correctly interpret Hβ values.

  3. THE EFFECT OF DRY MERGERS ON THE COLOR-MAGNITUDE RELATION OF EARLY-TYPE GALAXIES

    International Nuclear Information System (INIS)

    Skelton, Rosalind E.; Bell, Eric F.; Somerville, Rachel S.

    2009-01-01

    We investigate the effect of dry merging on the color-magnitude relation (CMR) of galaxies and find that the amount of merging predicted by a hierarchical model results in a red sequence that compares well with the observed low-redshift relation. A sample of ∼ 29,000 early-type galaxies selected from the Sloan Digital Sky Survey Data Release 6 shows that the bright end of the CMR has a shallower slope and smaller scatter than the faint end. This magnitude dependence is predicted by a simple toy model in which gas-rich mergers move galaxies onto a 'creation red sequence' (CRS) by quenching their star formation, and subsequent mergers between red, gas-poor galaxies (so-called 'dry' mergers) move galaxies along the relation. We use galaxy merger trees from a semianalytic model of galaxy formation to test the amplitude of this effect and find a change in slope at the bright end that brackets the observations, using gas fraction thresholds of 10%-30% to separate wet and dry mergers. A more realistic model that includes scatter in the CRS shows that dry merging decreases the scatter at the bright end. Contrary to previous claims, the small scatter in the observed CMR thus cannot be used to constrain the amount of dry merging.

  4. ABOUT THE LINEARITY OF THE COLOR-MAGNITUDE RELATION OF EARLY-TYPE GALAXIES IN THE VIRGO CLUSTER

    International Nuclear Information System (INIS)

    Smith Castelli, Analía V.; Faifer, Favio R.; González, Nélida M.; Forte, Juan Carlos

    2013-01-01

We revisit the color-magnitude relation of Virgo Cluster early-type galaxies in order to explore its alleged nonlinearity. To this aim, we reanalyze the relation already published from data obtained within the ACS Virgo Cluster Survey of the Hubble Space Telescope and perform our own photometry and analysis of the images of 100 early-type galaxies observed as part of this survey. In addition, we compare our results with those reported in the literature from data of the Sloan Digital Sky Survey. We have found that when the brightest galaxies and atypical systems are excluded from the sample, a linear relation arises in agreement with what is observed in other groups and clusters. The central regions of the brightest galaxies also follow this relation. In addition, we notice that Virgo contains at least four compact elliptical galaxies besides the well-known object VCC 1297 (NGC 4486B). Their locations in the μ_eff-luminosity diagram define a trend different from that followed by normal early-type dwarf galaxies, setting an upper limit in effective surface brightness and a lower limit in effective radius for their luminosities. Based on the distribution of different galaxy sub-samples in the color-magnitude and μ_eff-luminosity diagrams, we draw some conclusions about their formation and evolutionary histories.

  5. THE COLOR-MAGNITUDE RELATION FOR METAL-POOR GLOBULAR CLUSTERS IN M87: CONFIRMATION FROM DEEP HST/ACS IMAGING

    International Nuclear Information System (INIS)

    Peng, Eric W.; Jordan, Andres; Blakeslee, John P.; Cote, Patrick; Ferrarese, Laura; Mieske, Steffen; Harris, William E.; Madrid, Juan P.; Meurer, Gerhardt R.

    2009-01-01

Metal-poor globular clusters (GCs) are our local link to the earliest epochs of star formation and galaxy building. Studies of extragalactic GC systems using deep, high-quality imaging have revealed a small but significant slope to the color-magnitude relation for metal-poor GCs in a number of galaxies. We present a study of the M87 GC system using deep, archival HST/ACS imaging with the F606W and F814W filters, in which we find a significant color-magnitude relation for the metal-poor GCs. The slope of this relation in the I versus V−I color-magnitude diagram (γ_I = −0.024 ± 0.006) is perfectly consistent with expectations based on previously published results using data from the ACS Virgo Cluster Survey. The relation is driven by the most luminous GCs, those with M_I ≲ −7.8, a luminosity which is ∼1 mag fainter than our fitted Gaussian mean for the luminosity function (LF) of blue, metal-poor GCs (∼0.8 mag fainter than the mean for all GCs). These results indicate that there is a mass scale at which the correlation begins, and are consistent with a scenario in which self-enrichment drives a mass-metallicity relation. We show that previously measured half-light radii of M87 GCs from best-fit PSF-convolved King models are consistent with the more accurate measurements in this study, and we also explain why the color-magnitude relation for metal-poor GCs is real and cannot be an artifact of the photometry. We fit Gaussian and evolved Schechter functions to the luminosity distribution of GCs across all colors, as well as divided into blue and red subpopulations, finding that the blue GCs have a brighter mean luminosity and a narrower distribution than the red GCs. Finally, we present a catalog of astrometry and photometry for 2250 M87 GCs.
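
A slope like γ_I above is simply a linear fit of color against absolute magnitude. As a minimal sketch on synthetic globular-cluster photometry (the slope, zero point, and scatter are assumed demo values, not the paper's data or pipeline):

```python
import numpy as np

# Sketch: recover the slope gamma_I of a linear color-magnitude relation
# V-I = a + gamma_I * M_I from synthetic globular-cluster photometry.
# true_gamma and true_a are assumed demo values, not fitted data.
rng = np.random.default_rng(0)
true_gamma, true_a = -0.024, 0.76
M_I = rng.uniform(-10.5, -7.0, 500)                  # absolute I magnitudes
VI = true_a + true_gamma * M_I + rng.normal(0.0, 0.03, M_I.size)

# Ordinary least-squares slope of color versus magnitude
gamma_fit, a_fit = np.polyfit(M_I, VI, 1)
print(f"gamma_I = {gamma_fit:+.4f}")
```

With 500 stars and 0.03 mag color scatter, the slope is recovered to a few parts in a thousand; real studies must additionally propagate photometric errors and contamination.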

  6. Hierarchical Probabilistic Inference of the Color-Magnitude Diagram and Shrinkage of Stellar Distance Uncertainties

    Science.gov (United States)

    Leistedt, Boris; Hogg, David W.

    2017-12-01

    We present a hierarchical probabilistic model for improving geometric stellar distance estimates using color-magnitude information. This is achieved with a data-driven model of the color-magnitude diagram, not relying on stellar models but instead on the relative abundances of stars in color-magnitude cells, which are inferred from very noisy magnitudes and parallaxes. While the resulting noise-deconvolved color-magnitude diagram can be useful for a range of applications, we focus on deriving improved stellar distance estimates relying on both parallax and photometric information. We demonstrate the efficiency of this approach on the 1.4 million stars of the Gaia TGAS sample that also have AAVSO Photometric All Sky Survey magnitudes. Our hierarchical model has 4 million parameters in total, most of which are marginalized out numerically or analytically. We find that distance estimates are significantly improved for the noisiest parallaxes and densest regions of the color-magnitude diagram. In particular, the average distance signal-to-noise ratio (S/N) and uncertainty improve by 19% and 36%, respectively, with 8% of the objects improving in S/N by a factor greater than 2. This computationally efficient approach fully accounts for both parallax and photometric noise and is a first step toward a full hierarchical probabilistic model of the Gaia data.
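
The shrinkage idea can be sketched with a toy grid posterior. Every number below is an assumption for illustration (one hypothetical color-magnitude-cell prior, not the paper's 4-million-parameter hierarchical model): multiplying the noisy parallax likelihood by a photometric distance prior tightens the distance posterior.

```python
import numpy as np

d = np.linspace(10.0, 2000.0, 20000)    # distance grid [pc]
dx = d[1] - d[0]

def stats(p):
    """Normalize an unnormalized posterior on the grid; return mean, std."""
    p = p / (p.sum() * dx)
    mean = (d * p).sum() * dx
    sd = np.sqrt((((d - mean) ** 2) * p).sum() * dx)
    return mean, sd

# Noisy parallax: 1.0 +/- 0.5 mas (S/N = 2), Gaussian likelihood in parallax
plx_obs, plx_err = 1.0e-3, 0.5e-3       # arcsec
like_plx = np.exp(-0.5 * ((1.0 / d - plx_obs) / plx_err) ** 2)

# Hypothetical CMD-cell prior: apparent mag 12, absolute mag 2.0 +/- 0.3
m_app, M_mean, M_sig = 12.0, 2.0, 0.3
mu = 5.0 * np.log10(d / 10.0)           # distance modulus
prior_phot = np.exp(-0.5 * ((m_app - mu - M_mean) / M_sig) ** 2)

mean_p, sd_p = stats(like_plx)                # parallax only
mean_c, sd_c = stats(like_plx * prior_phot)   # parallax + CMD prior
print(f"parallax only : {mean_p:.0f} +/- {sd_p:.0f} pc")
print(f"with CMD prior: {mean_c:.0f} +/- {sd_c:.0f} pc")
```

The combined posterior is centered near the photometric distance (∼1 kpc here) with a much smaller spread than the parallax-only posterior, which is the shrinkage effect the abstract quantifies.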

  7. Powerful CMD: a tool for color-magnitude diagram studies

    Science.gov (United States)

    Li, Zhong-Mu; Mao, Cai-Yan; Luo, Qi-Ping; Fan, Zhou; Zhao, Wen-Chang; Chen, Li; Li, Ru-Xi; Guo, Jian-Po

    2017-07-01

    We present a new tool for color-magnitude diagram (CMD) studies, Powerful CMD. This tool is built based on the advanced stellar population synthesis (ASPS) model, in which single stars, binary stars, rotating stars and star formation history have been taken into account. Via Powerful CMD, the distance modulus, color excess, metallicity, age, binary fraction, rotating star fraction and star formation history of star clusters can be determined simultaneously from observed CMDs. The new tool is tested via both simulated and real star clusters. Five parameters of clusters NGC 6362, NGC 6652, NGC 6838 and M67 are determined and compared to other works. It is shown that this tool is useful for CMD studies, in particular for those utilizing data from the Hubble Space Telescope (HST). Moreover, we find that inclusion of binaries in theoretical stellar population models may lead to smaller color excess compared to the case of single-star population models.

  8. The gap in the color-magnitude diagram of NGC 2420: A test of convective overshoot and cluster age

    Science.gov (United States)

    Demarque, Pierre; Sarajedini, Ata; Guo, X.-J.

    1994-05-01

Theoretical isochrones have been constructed using the OPAL opacities specifically to study the color-magnitude diagram of the open star cluster NGC 2420. This cluster provides a rare test of core convection in intermediate-mass stars. At the same time, its age is of interest because of its low metallicity and relatively high Galactic latitude for an open cluster. The excellent color-magnitude diagram constructed by Anthony-Twarog et al. (1990) allows a detailed fit of the isochrones to the photometric data. We discuss the importance of convective overshoot at the convective core edge in determining the morphology of the gap located near the main-sequence turnoff. We find that given the assumptions made in the models, a modest amount of overshoot (0.23 H_p) is required for the best fit. Good agreement is achieved with all features of the turnoff gap for a cluster age of 2.4 +/- 0.2 Gyr. We note that a photometrically complete luminosity function near the main-sequence turnoff and subgiant branch would also provide an important test of the overshoot models.

  9. Multi-color light curves of type Ia supernovae on the color-magnitude diagram: A novel step toward more precise distance and extinction estimates

    International Nuclear Information System (INIS)

    Wang, Lifan; Goldhaber, Gerson; Aldering, Greg; Perlmutter, Saul

    2003-01-01

We show empirically that fits to the color-magnitude relation of Type Ia supernovae after optical maximum can provide accurate relative extragalactic distances. We report the discovery of an empirical color relation for Type Ia light curves: During much of the first month past maximum, the magnitudes of Type Ia supernovae defined at a given value of color index have a very small magnitude dispersion; moreover, during this period the relation between B magnitude and B−V color (or B−R or B−I color) is strikingly linear, to the accuracy of existing well-measured data. These linear relations can provide robust distance estimates, in particular, by using the magnitudes when the supernova reaches a given color. After correction for light curve stretch factor or decline rate, the dispersion of the magnitudes taken at the intercept of the linear color-magnitude relation is found to be around 0.08 mag for the sub-sample of supernovae with (B_max − V_max) ≤ 0.05 mag, and around 0.11 mag for the sub-sample with (B_max − V_max) ≤ 0.2 mag. This small dispersion is consistent with being mostly due to observational errors. The method presented here and the conventional light curve fitting methods can be combined to further improve statistical dispersions of distance estimates. It can be combined with the magnitude at maximum to deduce dust extinction. The slopes of the color-magnitude relation may also be used to identify intrinsically different SN Ia systems. The method provides a tool that is fundamental to using SN Ia to estimate cosmological parameters such as the Hubble constant and the mass and dark energy content of the universe.
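
The intercept method described above amounts to fitting B against B−V during the linear post-maximum phase and reading off the magnitude at a fixed fiducial color. A sketch on synthetic numbers (the slope, zero point, noise, and fiducial color are assumptions, not values from the paper):

```python
import numpy as np

# Sketch: during the post-maximum linear phase, fit B against B-V and
# read off the B magnitude at a fixed fiducial color as a distance proxy.
# slope, B_at_0 and the B-V range are invented demo values.
rng = np.random.default_rng(1)
bv = np.linspace(0.1, 0.9, 25)              # B-V color over ~1 month
slope, B_at_0 = 2.0, 15.0
B = B_at_0 + slope * bv + rng.normal(0.0, 0.02, bv.size)

coef = np.polyfit(bv, B, 1)                 # linear color-magnitude fit
B_fiducial = np.polyval(coef, 0.6)          # B magnitude at B-V = 0.6
print(f"B(B-V = 0.6) = {B_fiducial:.3f}")
```

Because the fit averages over many epochs, the intercept magnitude is far less noisy than any single photometric point, which is why the dispersion quoted in the abstract can approach the observational error floor.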

  10. THE GLOBULAR CLUSTER NGC 6402 (M14). I. A NEW BV COLOR-MAGNITUDE DIAGRAM

    Energy Technology Data Exchange (ETDEWEB)

Contreras Pena, C.; Catelan, M. [Pontificia Universidad Católica de Chile, Departamento de Astronomía y Astrofísica, Av. Vicuña Mackenna 4860, 782-0436 Macul, Santiago (Chile); Grundahl, F. [Danish AsteroSeismology Centre (DASC), Department of Physics and Astronomy, Aarhus University, DK-8000 Aarhus C (Denmark); Stephens, A. W. [Gemini Observatory, 670 North A'ohoku Place, Hilo, HI 96720 (United States); Smith, H. A., E-mail: mcatelan@astro.puc.cl, E-mail: c.contreras@herts.ac.uk [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States)

    2013-09-15

We present BV photometry of the Galactic globular cluster NGC 6402 (M14), based on 65 V frames and 67 B frames, reaching two magnitudes below the turnoff level. This represents, to the best of our knowledge, the deepest color-magnitude diagram (CMD) of NGC 6402 available in the literature. Statistical decontamination of field stars as well as differential reddening corrections are performed in order to derive a precise ridgeline and hence physical parameters of the cluster. We discuss previous attempts at deriving a reddening value for the cluster, and argue in favor of a value E(B − V) = 0.57 ± 0.02, which is significantly higher than indicated by either the Burstein and Heiles or Schlegel et al. (corrected according to Bonifacio et al.) interstellar dust maps. Differential reddening across the face of the cluster, which we find to be present at the level of ΔE(B − V) ≈ 0.17 mag, is taken into account in our analysis. We measure several metallicity indicators based on the position of the red giant branch (RGB) in the cluster CMD. These give a metallicity of [Fe/H] = −1.38 ± 0.07 on the Zinn and West scale and [Fe/H] = −1.28 ± 0.08 on the new Carretta et al. (UVES) scale. We also provide measurements of other important photometric parameters for this cluster, including the position of the RGB luminosity function "bump" and the horizontal branch morphology. We compare the NGC 6402 ridgeline with that of NGC 5904 (M5) derived by Sandquist et al., and find evidence that NGC 6402 and M5 have approximately the same age to within the uncertainties, although the possibility that M14 may be slightly older cannot be ruled out.

  11. Analysis of color-magnitude diagrams from three large Magellanic Cloud clusters

    International Nuclear Information System (INIS)

    Jones, J.H.

    1985-01-01

The color-magnitude diagrams of three LMC clusters and a field were derived from photographic and CCD data provided by Dr. P.J. Flower of Clemson University and Dr. R. Schommer of Rutgers University. The photographic data were scanned and converted to intensity images at KPNO. The stellar photometry program RICHFLD was used to measure the raw magnitudes from these images. Problems with the standard sequence on the plate kept the color terms for the photographic data from being well determined. A version of DAOPHOT was installed on the VAX 11/280s at Clemson and was used to measure the magnitudes from the CCD images of NGC 2249. These magnitudes were used to define another photoelectric sequence for the photographic data, which was used to determine a well-defined transformation into the standard BV system. The CMDs derived from both the photographic and CCD images of NGC 2249 showed a gap near the tip of the main sequence. This gap was taken to mark the period of rapid evolution just after core hydrogen exhaustion. Using a true distance modulus of 18.3 for the LMC and a reddening taken from the literature, an age of 600 +/- 75 million years was found for NGC 2249. Comparing the CMD of SL 889 to that of NGC 2249 gives a similar age for this small LMC cluster. A subgiant branch was identified in the CMD of NGC 2241. Comparison to old metal-poor galactic clusters gave an age near 4 billion years, favoring the short distance scale to the LMC.

  12. MEASURING GALAXY STAR FORMATION RATES FROM INTEGRATED PHOTOMETRY: INSIGHTS FROM COLOR-MAGNITUDE DIAGRAMS OF RESOLVED STARS

    International Nuclear Information System (INIS)

Johnson, Benjamin D.; Weisz, Daniel R.; Dalcanton, Julianne J.; Johnson, L. C.; Williams, Benjamin F.; Dale, Daniel A.; Dolphin, Andrew E.; Gil de Paz, Armando; Kennicutt, Robert C. Jr.; Lee, Janice C.; Skillman, Evan D.; Boquien, Médéric

    2013-01-01

We use empirical star formation histories (SFHs), measured from Hubble-Space-Telescope-based resolved star color-magnitude diagrams, as input into population synthesis codes to model the broadband spectral energy distributions (SEDs) of 50 nearby dwarf galaxies (6.5 < log M_*/M_☉ < 8.5, with metallicities ∼10% solar). In the presence of realistic SFHs, we compare the modeled and observed SEDs from the ultraviolet (UV) through near-infrared and assess the reliability of widely used UV-based star formation rate (SFR) indicators. In the FUV through i bands, we find that the observed and modeled SEDs are in excellent agreement. In the Spitzer 3.6 μm and 4.5 μm bands, we find that modeled SEDs systematically overpredict observed luminosities by up to ∼0.2 dex, depending on treatment of the TP-AGB stars in the synthesis models. We assess the reliability of UV luminosity as a SFR indicator, in light of independently constrained SFHs. We find that fluctuations in the SFHs alone can cause factor of ∼2 variations in the UV luminosities relative to the assumption of a constant SFH over the past 100 Myr. These variations are not strongly correlated with UV-optical colors, implying that correcting UV-based SFRs for the effects of realistic SFHs is difficult using only the broadband SED. Additionally, for this diverse sample of galaxies, we find that stars older than 100 Myr can contribute from <5%-100% of the present day UV luminosity, highlighting the challenges in defining a characteristic star formation timescale associated with UV emission. We do find a relationship between UV emission timescale and broadband UV-optical color, though it is different than predictions based on exponentially declining SFH models. Our findings have significant implications for the comparison of UV-based SFRs across low-metallicity populations with diverse SFHs.
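
For context, UV-based SFR indicators of the kind being tested are linear conversions of FUV luminosity that assume a constant SFH over ∼100 Myr; the widely used Kennicutt (1998) zero point is sketched below. The luminosity value is an invented dwarf-galaxy-scale number, and the abstract's point is precisely that bursty SFHs break this assumption at the factor-of-∼2 level.

```python
# Standard UV SFR calibration (Kennicutt 1998, Salpeter IMF):
# SFR [Msun/yr] = 1.4e-28 * L_nu(FUV) [erg s^-1 Hz^-1],
# valid only under an assumed constant SFH over ~100 Myr.
def sfr_from_uv(L_nu_fuv):
    """Convert an FUV spectral luminosity to a star formation rate."""
    return 1.4e-28 * L_nu_fuv

sfr = sfr_from_uv(1.0e27)   # invented dwarf-galaxy-scale FUV luminosity
print(f"SFR = {sfr:.3f} Msun/yr")
```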

  13. MEASURING GALAXY STAR FORMATION RATES FROM INTEGRATED PHOTOMETRY: INSIGHTS FROM COLOR-MAGNITUDE DIAGRAMS OF RESOLVED STARS

    Energy Technology Data Exchange (ETDEWEB)

Johnson, Benjamin D. [Institut d'Astrophysique de Paris, CNRS, UPMC, 98bis Bd Arago, F-75014 Paris (France); Weisz, Daniel R.; Dalcanton, Julianne J.; Johnson, L. C.; Williams, Benjamin F. [Department of Astronomy, Box 351580, University of Washington, Seattle, WA 98195 (United States); Dale, Daniel A. [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Dolphin, Andrew E. [Raytheon, 1151 E. Hermans Road, Tucson, AZ 85756 (United States); Gil de Paz, Armando [CEI Campus Moncloa, UCM-UPM, Departamento de Astrofísica y CC. de la Atmósfera, Facultad de CC. Físicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain); Kennicutt, Robert C. Jr. [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Lee, Janice C. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Skillman, Evan D. [Department of Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States); Boquien, Médéric [Marseille Université, CNRS, LAM (Laboratoire d'Astrophysique de Marseille) UMR 7326, F-13388 Marseille (France)

    2013-07-20

We use empirical star formation histories (SFHs), measured from Hubble-Space-Telescope-based resolved star color-magnitude diagrams, as input into population synthesis codes to model the broadband spectral energy distributions (SEDs) of 50 nearby dwarf galaxies (6.5 < log M_*/M_☉ < 8.5, with metallicities ∼10% solar). In the presence of realistic SFHs, we compare the modeled and observed SEDs from the ultraviolet (UV) through near-infrared and assess the reliability of widely used UV-based star formation rate (SFR) indicators. In the FUV through i bands, we find that the observed and modeled SEDs are in excellent agreement. In the Spitzer 3.6 μm and 4.5 μm bands, we find that modeled SEDs systematically overpredict observed luminosities by up to ∼0.2 dex, depending on treatment of the TP-AGB stars in the synthesis models. We assess the reliability of UV luminosity as a SFR indicator, in light of independently constrained SFHs. We find that fluctuations in the SFHs alone can cause factor of ∼2 variations in the UV luminosities relative to the assumption of a constant SFH over the past 100 Myr. These variations are not strongly correlated with UV-optical colors, implying that correcting UV-based SFRs for the effects of realistic SFHs is difficult using only the broadband SED. Additionally, for this diverse sample of galaxies, we find that stars older than 100 Myr can contribute from <5%-100% of the present day UV luminosity, highlighting the challenges in defining a characteristic star formation timescale associated with UV emission. We do find a relationship between UV emission timescale and broadband UV-optical color, though it is different than predictions based on exponentially declining SFH models. Our findings have significant implications for the comparison of UV-based SFRs across low-metallicity populations with diverse SFHs.

  14. The reliability of age measurements for Young Stellar Objects from Hertzsprung-Russell or color-magnitude diagrams

    International Nuclear Information System (INIS)

    Preibisch, Thomas

    2012-01-01

    The possibility to estimate ages and masses of Young Stellar Objects (YSOs) from their location in the Hertzsprung-Russell diagram (HRD) or a color-magnitude diagram provides a very important tool for the investigation of fundamental questions related to the processes of star formation and early stellar evolution. Age estimates are essential for studies of the temporal evolution of circumstellar material around YSOs and the conditions for planet formation. The characterization of the age distribution of the YSOs in a star forming region allows researchers to reconstruct the star formation history and provides important information on the fundamental question of whether star formation is a slow or a fast process. However, the reliability of these age measurements and the ability to detect possible age spreads in the stellar population of star forming regions are fundamentally limited by several factors. The variability of YSOs, unresolved binary components, and uncertainties in the calibrations of the stellar parameters cause uncertainties in the derived luminosities that are usually much larger than the typical photometry errors. Furthermore, the pre-main sequence evolution track of a YSO depends to some degree on the initial conditions and the details of its individual accretion history. I discuss how these observational and model uncertainties affect the derived isochronal ages, and demonstrate how neglecting or underestimating these uncertainties can easily lead to severe misinterpretations, gross overestimates of the age spread, and ill-based conclusions about the star formation history. These effects are illustrated by means of Monte-Carlo simulations of observed star clusters with realistic observational uncertainties. The most important points are as follows. First, the observed scatter in the HRD must not be confused with a genuine age spread, but is always just an upper limit to the true age spread. Second, histograms of isochronal ages naturally show a

  15. First results on the cluster galaxy population from the Subaru Hyper Suprime-Cam survey. II. Faint end color-magnitude diagrams and radial profiles of red and blue galaxies at 0.1 < z < 1.1

    Science.gov (United States)

    Nishizawa, Atsushi J.; Oguri, Masamune; Oogi, Taira; More, Surhud; Nishimichi, Takahiro; Nagashima, Masahiro; Lin, Yen-Ting; Mandelbaum, Rachel; Takada, Masahiro; Bahcall, Neta; Coupon, Jean; Huang, Song; Jian, Hung-Yu; Komiyama, Yutaka; Leauthaud, Alexie; Lin, Lihwai; Miyatake, Hironao; Miyazaki, Satoshi; Tanaka, Masayuki

    2018-01-01

We present a statistical study of the redshift evolution of the cluster galaxy population over a wide redshift range from 0.1 to 1.1, using ∼1900 optically selected CAMIRA clusters from ∼232 deg² of the Hyper Suprime-Cam (HSC) Wide S16A data. Our stacking technique with a statistical background subtraction reveals color-magnitude diagrams of red-sequence and blue cluster galaxies down to faint magnitudes of m_z ∼ 24. We find that the linear relation of red-sequence galaxies in the color-magnitude diagram extends down to the faintest magnitudes we explore with a small intrinsic scatter σ_int(g − r) < 0.1. The scatter does not evolve significantly with redshift. The stacked color-magnitude diagrams are used to define red and blue galaxies in clusters in order to study their radial number density profiles without resorting to photometric redshifts of individual galaxies. We find that red galaxies are significantly more concentrated toward cluster centers and blue galaxies dominate the outskirts of clusters. We explore the fraction of red galaxies in clusters as a function of redshift, and find that the red fraction decreases with increasing distance from cluster centers. The red fraction exhibits a moderate decrease with increasing redshift. The radial number density profiles of cluster member galaxies are also used to infer the location of the steepest slope in the three-dimensional galaxy density profiles. For a fixed threshold in richness, we find little redshift evolution in this location.
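
The statistical background subtraction behind the stacked diagrams reduces to one operation: counts in color-magnitude cells toward clusters, minus field counts scaled by the ratio of solid angles. All numbers below are invented for illustration:

```python
import numpy as np

# Sketch of statistical background subtraction in color-magnitude cells.
# cluster_counts: stacked counts toward cluster lines of sight;
# field_counts: counts in the same cells from a blank field;
# area_ratio: (cluster solid angle) / (field solid angle). Invented values.
cluster_counts = np.array([[120, 40], [60, 200]], dtype=float)
field_counts   = np.array([[300, 90], [150, 210]], dtype=float)
area_ratio = 0.25

# Net counts attributable to cluster members in each cell
net = cluster_counts - area_ratio * field_counts
print(net)
```

In practice the Poisson uncertainty of both terms must be propagated per cell, but the estimator itself is this single scaled difference, which is why no per-galaxy photometric redshifts are needed.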

  16. THE DEEPEST HUBBLE SPACE TELESCOPE COLOR-MAGNITUDE DIAGRAM OF M32. EVIDENCE FOR INTERMEDIATE-AGE POPULATIONS

    International Nuclear Information System (INIS)

    Monachesi, Antonela; Trager, Scott C.; Lauer, Tod R.; Mighell, Kenneth J.; Freedman, Wendy; Dressler, Alan; Grillmair, Carl

    2011-01-01

We present the deepest optical color-magnitude diagram (CMD) to date of the local elliptical galaxy M32. We have obtained F435W and F555W photometry based on Hubble Space Telescope (HST) Advanced Camera for Surveys/High-Resolution Channel images for a region 110'' from the center of M32 (F1) and a background field (F2) about 320'' away from M32 center. Due to the high resolution of our Nyquist-sampled images, the small photometric errors, and the depth of our data (the CMD of M32 goes as deep as F435W ∼ 28.5 at the 50% completeness level), we obtain the most detailed resolved photometric study of M32 yet. Deconvolution of HST images proves to be superior to other standard methods for deriving stellar photometry on extremely crowded HST images, as its photometric errors are ∼2× smaller than those of the other methods tried. The location of the strong red clump in the CMD suggests a mean age between 8 and 10 Gyr for [Fe/H] = -0.2 dex in M32. We detect for the first time a red giant branch bump and an asymptotic giant branch (AGB) bump in M32 which, together with the red clump, allow us to constrain the age and metallicity of the dominant population in this region of M32. These features indicate that the mean age of M32's population at ∼2' from its center is between 5 and 10 Gyr. We see evidence of an intermediate-age population in M32 mainly due to the presence of AGB stars rising to M_F555W ∼ -2.0. Our detection of a blue component of stars (blue plume) may indicate for the first time the presence of a young stellar population, with ages of the order of 0.5 Gyr, in our M32 field. However, it is likely that the brighter stars of this blue plume belong to the disk of M31 rather than to M32. The fainter stars populating the blue plume indicate the presence of stars not younger than 1 Gyr and/or BSSs in M32. The CMD of M32 displays a wide color distribution of red giant branch stars indicating an intrinsic spread in metallicity with a peak at [Fe/H] ∼ -0.2. There is not a

  17. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... section; or (ii) For alcohol-fueled model types, the fuel economy value calculated for that model type in...) For alcohol dual fuel model types, for model years 1993 through 2019, the harmonic average of the... combined model type fuel economy value for operation on alcohol fuel as determined in § 600.208-12(b)(5)(ii...
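
The "harmonic average" the regulation refers to is total production divided by the sum of per-model-type fuel consumptions, i.e. Σnᵢ / Σ(nᵢ/mpgᵢ), which correctly averages gallons per mile rather than miles per gallon. A sketch with invented production and fuel-economy figures (not values from the CFR):

```python
# Harmonic, production-weighted average fuel economy, as used for fleet
# averaging: total units divided by the sum of units/mpg per model type.
# The production and mpg numbers below are invented for illustration.
def harmonic_average_mpg(production, mpg):
    return sum(production) / sum(n / f for n, f in zip(production, mpg))

fleet_mpg = harmonic_average_mpg([1000, 3000], [20.0, 30.0])
print(f"fleet average = {fleet_mpg:.2f} mpg")
```

Note that the arithmetic mean of the same figures would be 27.5 mpg; the harmonic mean is lower because the 20 mpg vehicles consume disproportionately more fuel per mile.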

  18. THE PAndAS VIEW OF THE ANDROMEDA SATELLITE SYSTEM. I. A BAYESIAN SEARCH FOR DWARF GALAXIES USING SPATIAL AND COLOR-MAGNITUDE INFORMATION

    Energy Technology Data Exchange (ETDEWEB)

Martin, Nicolas F.; Ibata, Rodrigo A. [Observatoire Astronomique de Strasbourg, Université de Strasbourg, CNRS, UMR 7550, 11 rue de l'Université, F-67000 Strasbourg (France); McConnachie, Alan W. [NRC Herzberg Institute of Astrophysics, 5071 West Saanich Road, Victoria, BC V9E 2E7 (Canada); Mackey, A. Dougal [Research School of Astronomy and Astrophysics, The Australian National University, Mount Stromlo Observatory, via Cotter Road, Weston, ACT 2611 (Australia); Ferguson, Annette M. N. [Institute for Astronomy, University of Edinburgh, Blackford Hill, Edinburgh EH9 3HJ (United Kingdom); Irwin, Michael J. [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Lewis, Geraint F. [Institute of Astronomy, School of Physics A28, University of Sydney, NSW 2006 (Australia); Fardal, Mark A., E-mail: nicolas.martin@astro.unistra.fr [Department of Astronomy, University of Massachusetts, Amherst, MA 01003 (United States)

    2013-10-20

    We present a generic algorithm to search for dwarf galaxies in photometric catalogs and apply it to the Pan-Andromeda Archaeological Survey (PAndAS). The algorithm is developed in a Bayesian framework and, contrary to most dwarf galaxy search codes, makes use of both the spatial and color-magnitude information of sources in a probabilistic approach. Accounting for the significant contamination from the Milky Way foreground and from the structured stellar halo of the Andromeda galaxy, we recover all known dwarf galaxies in the PAndAS footprint with high significance, even for the least luminous ones. Some Andromeda globular clusters are also recovered and, in one case, discovered. We publish a list of the 143 most significant detections yielded by the algorithm. The combined properties of the 39 most significant isolated detections show hints that at least some of these trace genuine dwarf galaxies, too faint to be individually detected. Follow-up observations by the community are mandatory to establish which are real members of the Andromeda satellite system. The search technique presented here will be used in an upcoming contribution to determine the PAndAS completeness limits for dwarf galaxies. Although here tuned to the search of dwarf galaxies in the PAndAS data, the algorithm can easily be adapted to the search for any localized overdensity whose properties can be modeled reliably in the parameter space of any catalog.
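
The probabilistic use of spatial and color-magnitude information can be sketched with Bayes' rule for a single source: combine a spatial likelihood and a CMD likelihood under the dwarf-galaxy model against the corresponding background densities. All densities and the prior below are hypothetical numbers, not values from the survey:

```python
# Sketch: posterior probability that one source belongs to a candidate
# dwarf-galaxy overdensity, combining spatial and color-magnitude
# likelihood densities against the background. All inputs are invented.
def member_prob(p_spatial, p_cmd, p_bg_spatial, p_bg_cmd, prior=0.01):
    num = prior * p_spatial * p_cmd
    den = num + (1.0 - prior) * p_bg_spatial * p_bg_cmd
    return num / den

# A star near the candidate center that also sits on the expected
# red-giant-branch locus is strongly upweighted despite the small prior:
p = member_prob(p_spatial=5.0, p_cmd=4.0, p_bg_spatial=0.2, p_bg_cmd=0.5)
print(f"membership probability = {p:.3f}")
```

Multiplying the two likelihoods is what lets faint overdensities emerge: neither the spatial clustering nor the CMD coherence alone would overcome the foreground, but their product does.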

  20. Leading multiple teams: average and relative external leadership influences on team empowerment and effectiveness.

    Science.gov (United States)

    Luciano, Margaret M; Mathieu, John E; Ruddy, Thomas M

    2014-03-01

    External leaders continue to be an important source of influence even when teams are empowered, but it is not always clear how they do so. Extending research on structurally empowered teams, we recognize that teams' external leaders are often responsible for multiple teams. We adopt a multilevel approach to model external leader influences at both the team level and the external leader level of analysis. In doing so, we distinguish the influence of general external leader behaviors (i.e., average external leadership) from those that are directed differently toward the teams that they lead (i.e., relative external leadership). Analysis of data collected from 451 individuals, in 101 teams, reporting to 25 external leaders, revealed that both relative and average external leadership related positively to team empowerment. In turn, team empowerment related positively to team performance and member job satisfaction. However, while the indirect effects were all positive, we found that relative external leadership was not directly related to team performance, and average external leadership evidenced a significant negative direct influence. Additionally, relative external leadership exhibited a significant direct positive influence on member job satisfaction as anticipated, whereas average external leadership did not. These findings attest to the value in distinguishing external leaders' behaviors that are exhibited consistently versus differentially across empowered teams. Implications and future directions for the study and management of external leaders overseeing multiple teams are discussed.

  1. An Experimental Study Related to Planning Abilities of Gifted and Average Students

    Directory of Open Access Journals (Sweden)

    Marilena Z. Leana-Taşcılar

    2016-02-01

Gifted students differ from their average peers in psychological, social, emotional and cognitive development. One of these differences in the cognitive domain is related to executive functions, among the most important of which is planning and organization ability. The aim of this study was to compare the planning abilities of gifted students with those of their average peers and to test the effectiveness of a training program on the planning abilities of both groups. First, students’ intelligence and planning abilities were measured, and students were then assigned to either the experimental or the control group. The groups were matched by intelligence and planning ability (experimental: 13 gifted and 8 average; control: 14 gifted and 8 average). In total, 182 students (79 gifted and 103 average) participated in the study. A training program was then implemented in the experimental group to find out whether it improved students’ planning ability. Results showed that boys had better planning abilities than girls, and gifted students had better planning abilities than their average peers. Significant results were obtained in favor of the experimental group in the posttest scores.

  2. Compression of head-related transfer function using autoregressive-moving-average models and Legendre polynomials

    DEFF Research Database (Denmark)

    Shekarchi, Sayedali; Hallam, John; Christensen-Dalsgaard, Jakob

    2013-01-01

Head-related transfer functions (HRTFs) are encoded as autoregressive-moving-average (ARMA) filters whose coefficients are calculated using Prony's method. Such filters are specified by a few coefficients which can generate the full head-related impulse responses (HRIRs). Next, Legendre polynomials (LPs) are used to compress the ARMA filter coefficients. LPs are derived on the sphere...
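Prony's method, as named in the abstract, fits an ARMA(p, q) filter to a measured impulse response: the denominator coefficients follow from the linear recurrence the response obeys beyond lag q, and the numerator from convolving the response with the denominator. A minimal stdlib sketch (the Legendre compression stage and real HRIR data are omitted; the test filter is invented):

```python
def solve_linear(M, rhs):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(M)
    A = [row[:] + [r] for row, r in zip(M, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def prony(h, p, q):
    """Fit ARMA(p, q) coefficients to impulse response h by Prony's method."""
    # Solve sum_k a_k * h[n-k] = -h[n] for n = q+1 .. q+p   (a_0 = 1).
    M = [[h[q + 1 + i - k] for k in range(1, p + 1)] for i in range(p)]
    rhs = [-h[q + 1 + i] for i in range(p)]
    a = [1.0] + solve_linear(M, rhs)
    # Numerator: b_n = sum_{k<=min(n,p)} a_k * h[n-k] for n = 0 .. q.
    b = [sum(a[k] * h[n - k] for k in range(min(n, p) + 1)) for n in range(q + 1)]
    return a, b

# Verify on a known ARMA(2, 1) filter (coefficients chosen arbitrarily).
a_true, b_true = [1.0, -1.2, 0.5], [1.0, 0.3]
h = []
for n in range(20):
    v = b_true[n] if n < len(b_true) else 0.0
    v -= sum(a_true[k] * h[n - k] for k in range(1, min(n, 2) + 1))
    h.append(v)

a_est, b_est = prony(h, 2, 1)
```

With noise-free data the recurrence equations are exactly consistent, so the true coefficients are recovered to machine precision; with measured HRIRs one would solve the overdetermined version in a least-squares sense.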

  3. Novel relations between the ergodic capacity and the average bit error rate

    KAUST Repository

    Yilmaz, Ferkan

    2011-11-01

Ergodic capacity and average bit error rate have been widely used to compare the performance of different wireless communication systems. Recent research has accordingly placed strong emphasis on designing and implementing wireless technologies based on these two performance indicators. However, to the best of our knowledge, direct links between these two performance indicators have not been explicitly proposed in the literature so far. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system using binary modulation schemes for signaling with a limited bandwidth and operating over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of our newly proposed relations and illustrate their usefulness by considering some classical examples. © 2011 IEEE.
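The paper's analytic relations are not reproduced here, but the two indicators it connects are easy to state concretely. A Monte Carlo sketch, assuming Rayleigh fading (exponentially distributed instantaneous SNR) and coherent BPSK signaling:

```python
import math
import random

random.seed(1)

def capacity_and_ber(mean_snr, n=200_000):
    """Monte Carlo estimates of the ergodic capacity E[log2(1 + g)] and the
    average BPSK bit error rate E[Q(sqrt(2 g))] over Rayleigh fading, where
    the instantaneous SNR g is exponential with the given mean."""
    cap = ber = 0.0
    for _ in range(n):
        g = random.expovariate(1.0 / mean_snr)   # instantaneous SNR draw
        cap += math.log2(1.0 + g)
        ber += 0.5 * math.erfc(math.sqrt(g))     # Q(sqrt(2g)) = erfc(sqrt(g))/2
    return cap / n, ber / n

cap_low, ber_low = capacity_and_ber(1.0)     # 0 dB average SNR
cap_high, ber_high = capacity_and_ber(10.0)  # 10 dB average SNR
```

Both indicators are averages over the same fading distribution, which is the structural fact the paper exploits to express one in terms of the other without the full channel statistics.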

  4. Optimal weighted averaging of event related activity from acquisitions with artifacts.

    Science.gov (United States)

    Vollero, Luca; Petrichella, Sara; Innello, Giulio

    2016-08-01

    In several biomedical applications that require the signal processing of biological data, the starting procedure for noise reduction is the ensemble averaging of multiple repeated acquisitions (trials). This method is based on the assumption that each trial is composed of two additive components: (i) a time-locked activity related to some sensitive/stimulation phenomenon (ERA, Event Related Activity in the following) and (ii) a sum of several other non time-locked background activities. The averaging aims at estimating the ERA activity under very low Signal to Noise and Interference Ratio (SNIR). Although averaging is a well established tool, its performance can be improved in the presence of high-power disturbances (artifacts) by a trials classification and removal stage. In this paper we propose, model and evaluate a new approach that avoids trials removal, managing trials classified as artifact-free and artifact-prone with two different weights. Based on the model, a weights tuning is possible and through modeling and simulations we show that, when optimally configured, the proposed solution outperforms classical approaches.
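The core idea of the abstract, keeping artifact-prone trials with a reduced weight instead of discarding them, can be sketched in a few lines. The ERA shape, noise levels, and weight values below are all invented for illustration:

```python
import math
import random

random.seed(0)

N_SAMPLES, N_TRIALS = 64, 40
era = [math.sin(2 * math.pi * t / N_SAMPLES) for t in range(N_SAMPLES)]  # true ERA

# Trials = ERA + background noise; the last 8 carry a large artifact.
trials, flags = [], []
for k in range(N_TRIALS):
    artifact = k >= 32
    offset = 20.0 if artifact else 0.0
    trials.append([v + random.gauss(0, 1) + offset for v in era])
    flags.append(artifact)

def weighted_average(trials, flags, w_clean=1.0, w_artifact=0.05):
    """Ensemble average where artifact-prone trials are down-weighted
    rather than removed (the weight values are arbitrary)."""
    weights = [w_artifact if a else w_clean for a in flags]
    total = sum(weights)
    return [sum(w * tr[i] for w, tr in zip(weights, trials)) / total
            for i in range(len(trials[0]))]

def rmse(est):
    return math.sqrt(sum((e - s) ** 2 for e, s in zip(est, era)) / len(era))

plain = [sum(tr[i] for tr in trials) / N_TRIALS for i in range(N_SAMPLES)]
weighted = weighted_average(trials, flags)
```

Down-weighting suppresses the artifact bias while still letting the flagged trials contribute a little signal; the paper's contribution is tuning such weights optimally from a model rather than fixing them by hand.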

  5. Relations between a typical scale and averages in the breaking of fractal distribution

    Science.gov (United States)

    Ishikawa, Atushi; Suzuki, Tadao

    2004-11-01

We study distributions which have both fractal and non-fractal scale regions by introducing a typical scale into a scale-invariant system. As one of the models in which distributions follow a power law in the large-scale region and deviate from the power law in the smaller-scale region, we employ 2-dim quantum gravity modified by the R² term. As examples of distributions in the real world which have similar properties to this model, we consider those of personal income in Japan over the latest twenty fiscal years. We find relations between the typical scale and several kinds of averages in this model, and observe that these relations are also valid in recent personal income distributions in Japan with sufficient accuracy. We show the existence of so-called bubble-period fiscal years, in which a gap has arisen in the power law, by observing that the data depart from one of these relations. We confirm, therefore, that the distribution of this model has close similarity to those of personal income. In addition, we can estimate the value of the Pareto index and whether a big gap exists in the power law by using only these relations. As a result, we point out that the typical scale is a useful concept distinct from the average value, and that the distribution function derived in this model is an effective tool to investigate these kinds of distributions.

  6. Average and dispersion of the luminosity-redshift relation in the concordance model

    CERN Document Server

    Ben-Dayan, I.; Marozzi, G.; Nugier, F.; Veneziano, G.

    2013-01-01

    Starting from the luminosity-redshift relation recently given up to second order in the Poisson gauge, we calculate the effects of the realistic stochastic background of perturbations of the so-called concordance model on the combined light-cone and ensemble average of various functions of the luminosity distance, and on their variance, as functions of redshift. We apply a gauge-invariant light-cone averaging prescription which is free from infrared and ultraviolet divergences, making our results robust with respect to changes of the corresponding cutoffs. Our main conclusions, in part already anticipated in a recent letter for the case of a perturbation spectrum computed in the linear regime, are that such inhomogeneities not only cannot avoid the need for dark energy, but also cannot prevent, in principle, the determination of its parameters down to an accuracy of order $10^{-3}-10^{-5}$, depending on the averaged observable and on the regime considered for the power spectrum. However, taking into account t...

  7. Average Tropical Relative Humidity from AIRS, Dec-Feb 2002-2005

    Science.gov (United States)

    2007-01-01

    The average tropospheric relative humidity from AIRS for the four December-February periods during 2002 through 2005. The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The AIRS Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.

  8. Category structure determines the relative attractiveness of global versus local averages.

    Science.gov (United States)

    Vogel, Tobias; Carr, Evan W; Davis, Tyler; Winkielman, Piotr

    2018-02-01

Stimuli that capture the central tendency of presented exemplars are often preferred, a phenomenon also known as the classic beauty-in-averageness effect. However, recent studies have shown that this effect can reverse under certain conditions. We propose that a key variable for such ugliness-in-averageness effects is the category structure of the presented exemplars. When exemplars cluster into multiple subcategories, the global average should no longer reflect the underlying stimulus distributions, and will thereby become unattractive. In contrast, the subcategory averages (i.e., local averages) should better reflect the stimulus distributions, and become more attractive. In 3 studies, we presented participants with dot patterns belonging to 2 different subcategories. Importantly, across studies, we also manipulated the distinctiveness of the subcategories. We found that participants preferred the local averages over the global average when they first learned to classify the patterns into 2 different subcategories in a contrastive categorization paradigm (Experiment 1). Moreover, participants still preferred local averages when first classifying patterns into a single category (Experiment 2) or when not classifying patterns at all during incidental learning (Experiment 3), as long as the subcategories were sufficiently distinct. Finally, as a proof-of-concept, we mapped our empirical results onto predictions generated by a well-known computational model of category learning (the Generalized Context Model [GCM]). Overall, our findings emphasize the key role of categorization for understanding the nature of preferences, including any effects that emerge from stimulus averaging. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
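The GCM intuition invoked in the abstract can be illustrated numerically: a probe's summed exponential similarity to stored exemplars is high for a local (subcategory) average and low for the global average when the subcategories are far apart. The exemplar coordinates and sensitivity parameter below are toy values, not the paper's fitted model:

```python
import math

def gcm_similarity(probe, exemplars, c=2.0):
    """Summed similarity of a probe to stored exemplars, GCM-style:
    s = sum_i exp(-c * d(probe, x_i))."""
    return sum(math.exp(-c * math.dist(probe, x)) for x in exemplars)

# Two well-separated subcategories of 2-D "dot pattern" exemplars.
cluster_a = [(0, 0), (1, 0), (0, 1), (1, 1)]
cluster_b = [(10, 0), (11, 0), (10, 1), (11, 1)]
exemplars = cluster_a + cluster_b

def mean_point(pts):
    return tuple(sum(coord) / len(pts) for coord in zip(*pts))

local_a = mean_point(cluster_a)      # local average of subcategory A
global_avg = mean_point(exemplars)   # global average, between the clusters

sim_local = gcm_similarity(local_a, exemplars)
sim_global = gcm_similarity(global_avg, exemplars)
```

Because the global average sits in the empty region between clusters, its summed similarity (and hence its predicted attractiveness, on the fluency account) collapses relative to the local average.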

  9. Using NDVI to assess departure from average greenness and its relation to fire business

    Science.gov (United States)

    Robert E. Burgan; Roberta A. Hartford; Jeffery C. Eidenshink

    1996-01-01

    A new satellite-derived vegetation greenness map, departure from average, is designed to compare current-year vegetation greenness with average greenness for the same time of year. Live-fuel condition as portrayed on this map, and the calculated 1,000-hour fuel moistures, are compared to fire occurrence and area burned in Montana and Idaho during the 1993 and 1994 fire...
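A common way to express such a departure-from-average product (assumed here; the map's exact formula is not given in the abstract) is the percent difference of current NDVI from the multi-year average for the same time of year:

```python
def greenness_departure(current_ndvi, average_ndvi):
    """Percent departure of current greenness from the multi-year average
    for the same time of year (positive = greener than average)."""
    return 100.0 * (current_ndvi - average_ndvi) / average_ndvi

# A pixel with current NDVI 0.30 against a historical average of 0.50 is
# 40% below average greenness, suggesting cured fuels and higher fire potential.
departure = greenness_departure(0.30, 0.50)  # -40.0
```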

  10. Auto-adaptive averaging: Detecting artifacts in event-related potential data using a fully automated procedure

    NARCIS (Netherlands)

    Talsma, D.

    2008-01-01

    The auto-adaptive averaging procedure proposed here classifies artifacts in event-related potential data by optimizing the signal-to-noise ratio. This method rank orders single trials according to the impact of each trial on the ERP average. Then, the minimum residual background noise level in the
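Talsma's full procedure optimizes the signal-to-noise ratio directly; the heavily simplified stdlib sketch below only illustrates the rank-and-select idea, ordering trials by a deviation-based impact proxy and keeping the prefix that minimizes an estimated residual-noise level (all data and the noise proxy are invented):

```python
import math
import random
import statistics

random.seed(3)

N, T = 20, 50
signal = [math.sin(2 * math.pi * t / T) for t in range(T)]
trials = [[s + random.gauss(0, 1) for s in signal] for _ in range(N)]
for k in (4, 9, 15):                       # plant three artifact-laden trials
    trials[k] = [v + 50.0 for v in trials[k]]

# Robust reference waveform; the median is barely affected by the artifacts.
ref = [statistics.median(tr[i] for tr in trials) for i in range(T)]

def rms_dev(tr):
    """RMS deviation of one trial from the reference (impact proxy)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(tr, ref)) / T)

# Rank-order trials by deviation, then choose the subset size k minimizing
# a residual-noise proxy: mean deviation of included trials / sqrt(k).
order = sorted(range(N), key=lambda k: rms_dev(trials[k]))
devs = [rms_dev(trials[k]) for k in order]
proxies = [(sum(devs[:k]) / k) / math.sqrt(k) for k in range(1, N + 1)]
best_k = proxies.index(min(proxies)) + 1
included = set(order[:best_k])
erp = [sum(trials[k][i] for k in included) / best_k for i in range(T)]
```

Adding a clean trial lowers the proxy (noise averages down as 1/sqrt(k)); adding an artifact trial raises it sharply, so the selection stops before the artifacts, with no hand-set rejection threshold.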

  12. Optical band gap in relation to the average coordination number in Ge - S - Bi thin films

    Science.gov (United States)

    Saffarini, G.; Schmitt, H.; Shanak, H.; Nowoczin, J.; Müller, J.

    2003-09-01

Chalcogenide glasses belonging to the GexS94 - xBi6 system (14 ≤ x ≤ 28.33 at%) have been prepared from high-purity constituent elements. Thin films of the same materials have been deposited by vacuum thermal evaporation. Optical absorbance measurements have been performed on the as-deposited films. The allowed optical transition is found to be indirect and the corresponding optical gaps, Eg, are determined. The variation of Eg with the average coordination number, r, is also investigated. The observed Eg - r dependence is discussed on the basis of the chemical bonding between the constituents and the rigidity percolation threshold behavior of the network.

  13. Toddlers' bias to look at average versus obese figures relates to maternal anti-fat prejudice.

    Science.gov (United States)

    Ruffman, Ted; O'Brien, Kerry S; Taumoepeau, Mele; Latner, Janet D; Hunter, John A

    2016-02-01

Anti-fat prejudice (weight bias, obesity stigma) is strong, prevalent, and increasing in adults and is associated with negative outcomes for those with obesity. However, it is unknown how early in life this prejudice forms and the reasons for its development. We examined whether infants and toddlers might display an anti-fat bias and, if so, whether it was influenced by maternal anti-fat attitudes through a process of social learning. Mother-child dyads (N = 70) split into four age groups participated in a preferential looking paradigm whereby children were presented with 10 pairs of average and obese human figures in random order, and their viewing times (preferential looking) for the figures were measured. Mothers' anti-fat prejudice and education were measured along with mothers' and fathers' body mass index (BMI) and children's television viewing time. We found that older infants (M = 11 months) had a bias for looking at the obese figures, whereas older toddlers (M = 32 months) instead preferred looking at the average-sized figures. Furthermore, older toddlers' preferential looking was correlated significantly with maternal anti-fat attitudes. Parental BMI, education, and children's television viewing time were unrelated to preferential looking. Looking times might signal a precursor to explicit fat prejudice socialized via maternal anti-fat attitudes. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Scaling Relations and Self-Similarity of 3-Dimensional Reynolds-Averaged Navier-Stokes Equations.

    Science.gov (United States)

    Ercan, Ali; Kavvas, M Levent

    2017-07-25

Scaling conditions to achieve self-similar solutions of the 3-Dimensional (3D) Reynolds-Averaged Navier-Stokes Equations, as an initial and boundary value problem, are obtained by utilizing the Lie Group of Point Scaling Transformations. By means of an open-source Navier-Stokes solver and the derived self-similarity conditions, we demonstrate self-similarity within the time variation of flow dynamics for a rigid-lid cavity problem under both up-scaled and down-scaled domains. The strength of the proposed approach lies in its ability to consider the underlying flow dynamics through not only the governing equations under consideration but also the initial and boundary conditions, hence allowing one to obtain perfect self-similarity in different time and space scales. The proposed methodology can be a valuable tool in obtaining self-similar flow dynamics at a preferred level of detail, which can be represented by initial and boundary value problems under specific assumptions.

  15. RELATIONS BETWEEN ANTHROPOMETRIC CHARACTERISTICS AND COORDINATION IN PEOPLE WITH ABOVE-AVERAGE MOTOR ABILITIES

    Directory of Open Access Journals (Sweden)

    Milan Cvetković

    2011-09-01

The sample of 149 male persons, with an average age of 20.15 decimal years (±0.83), all of whom are students at the Faculty of Sport and Physical Education, underwent a battery of tests consisting of 17 anthropometric measures taken from the measures index of the International Biological Program and 4 tests designed to assess coordination: Coordination with stick, Hand and leg drumming, Nonrhythmic drumming and Slalom with three balls. One statistically significant canonical correlation was determined by means of canonical correlation analysis. The variable isolated from the space of coordination variables was the one used for assessment of whole-body coordination – Coordination with stick. On the other hand, out of the variables from the right array, those covering longitudinal dimensionality were singled out – Body height and Arm length; circular dimensionality – Circumference of stretched upper arm, Circumference of bent upper arm and Circumference of upper leg; as well as subcutaneous fat tissue – Skin fold of the back.

  16. Empirical average-case relation between undersampling and sparsity in X-ray CT

    DEFF Research Database (Denmark)

    Jørgensen, Jakob Sauer; Sidky, Emil Y.; Hansen, Per Christian

    2015-01-01

In X-ray computed tomography (CT) it is generally acknowledged that reconstruction methods exploiting image sparsity allow reconstruction from a significantly reduced number of projections. The use of such reconstruction methods is inspired by recent progress in compressed sensing (CS). However, the CS framework provides neither guarantees of accurate CT reconstruction, nor any relation between sparsity and a sufficient number of measurements for recovery, i.e., perfect reconstruction from noise-free data. We consider reconstruction through 1-norm minimization, as proposed in CS, from data obtained using a standard CT fan-beam sampling pattern. In empirical simulation studies we establish quantitatively a relation between the image sparsity and the sufficient number of measurements for recovery within image classes motivated by tomographic applications. We show empirically that the specific...

  17. Inferring regional vertical crustal velocities from averaged relative sea level trends: A proof of concept

    Science.gov (United States)

    Bâki Iz, H.; Shum, C. K.; Zhang, C.; Kuo, C. Y.

    2017-11-01


  18. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  19. Cycle-averaged phase-space states for the harmonic and the Morse oscillators, and the corresponding uncertainty relations

    Energy Technology Data Exchange (ETDEWEB)

    Nicolaides, Cleanthes A; Constantoudis, Vasilios [Physics Department, National Technical University, Athens (Greece)], E-mail: caan@eie.gr, E-mail: vconst@imel.demokritos.gr

    2009-11-15

In Planck's model of the harmonic oscillator (HO) a century ago, both the energy and the phase space were quantized according to εₙ = nhν, n = 0, 1, 2, ..., and ∬ dpₓ dx = h. By referring to just these two relations, we show how the adoption of cycle-averaged phase-space states (CAPSSs) leads to the quantum mechanical energy spectrum of the HO, ⟨Eₙ⟩ = (n + 1/2)hν, n = 0, 1, 2, ..., where ⟨Eₙ⟩ are the average energies, and to ⟨Jₙ⟩ = (n + 1/2)h/2π, where ⟨Jₙ⟩ are the average actions. When anharmonicity to all orders is added in the form of the Morse oscillator (MO), the concept of CAPSS is implemented in terms of action-angle variables and it is shown that the use of ⟨Jₙ⟩ of each MO CAPSS also produces the correct discrete spectrum of the MO, again without applying quantum mechanics (QM). In addition, the concept of CAPSS leads to two well-known post-QM relations which are obtained in terms of time averages of the classical trajectories and of ⟨Jₙ⟩: (1) ∮ p dx = 2π⟨Jₙ⟩ = (n + 1/2)h, which is the quantum condition of the old quantum theory, albeit with half-integers (i.e. the result of the WKB approximation), and (2) (ΔpΔx)ₙ ≥ h/4π, which is the Heisenberg uncertainty principle. It is shown via numerical computations that, for two MOs, one with intermediate anharmonicity, supporting 22 levels, and another with strong anharmonicity, with 5 levels, the quantities (ΔpΔx)ₙ, n = 0, 1, ..., 4, computed classically for the appropriately chosen trajectories, agree very well with the results of computations that apply QM. The introduction of the CAPSS and the concomitant results underline the significance of the concept of the state of the system in physics, both classical and quantum.
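The HO part of the abstract is easy to check numerically: time-averaging x and p over one classical cycle at the CAPSS energy E = (n + 1/2)hν gives ΔpΔx = (n + 1/2)h/2π, saturating the Heisenberg bound h/4π for n = 0. The sketch below uses arbitrary units (m = ω = ħ = 1):

```python
import math

m, omega, hbar = 1.0, 1.0, 1.0   # arbitrary units; omega = 2*pi*nu, hbar = h/(2*pi)
n = 0
E = (n + 0.5) * hbar * omega     # cycle-averaged energy of the CAPSS
A = math.sqrt(2 * E / (m * omega ** 2))  # classical oscillation amplitude

# Sample x(t) = A sin(wt), p(t) = m A w cos(wt) uniformly over one full period.
STEPS = 100_000
xs = [A * math.sin(2 * math.pi * k / STEPS) for k in range(STEPS)]
ps = [m * A * omega * math.cos(2 * math.pi * k / STEPS) for k in range(STEPS)]

def mean(v):
    return sum(v) / len(v)

dx = math.sqrt(mean([x * x for x in xs]) - mean(xs) ** 2)  # cycle std of x
dp = math.sqrt(mean([p * p for p in ps]) - mean(ps) ** 2)  # cycle std of p
product = dp * dx   # should equal (n + 1/2) * hbar = E / omega
```

The cycle averages give Δx = A/√2 and Δp = mAω/√2, so ΔpΔx = mωA²/2 = E/ω, exactly the quantized values claimed in the abstract once E is fixed at (n + 1/2)hν.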

  20. Effect of confounding variables on hemodynamic response function estimation using averaging and deconvolution analysis: An event-related NIRS study.

    Science.gov (United States)

    Aarabi, Ardalan; Osharina, Victoria; Wallois, Fabrice

    2017-07-15

    Slow and rapid event-related designs are used in fMRI and functional near-infrared spectroscopy (fNIRS) experiments to temporally characterize the brain hemodynamic response to discrete events. Conventional averaging (CA) and the deconvolution method (DM) are the two techniques commonly used to estimate the Hemodynamic Response Function (HRF) profile in event-related designs. In this study, we conducted a series of simulations using synthetic and real NIRS data to examine the effect of the main confounding factors, including event sequence timing parameters, different types of noise, signal-to-noise ratio (SNR), temporal autocorrelation and temporal filtering on the performance of these techniques in slow and rapid event-related designs. We also compared systematic errors in the estimates of the fitted HRF amplitude, latency and duration for both techniques. We further compared the performance of deconvolution methods based on Finite Impulse Response (FIR) basis functions and gamma basis sets. Our results demonstrate that DM was much less sensitive to confounding factors than CA. Event timing was the main parameter largely affecting the accuracy of CA. In slow event-related designs, deconvolution methods provided similar results to those obtained by CA. In rapid event-related designs, our results showed that DM outperformed CA for all SNR, especially above -5 dB regardless of the event sequence timing and the dynamics of background NIRS activity. Our results also show that periodic low-frequency systemic hemodynamic fluctuations as well as phase-locked noise can markedly obscure hemodynamic evoked responses. Temporal autocorrelation also affected the performance of both techniques by inducing distortions in the time profile of the estimated hemodynamic response with inflated t-statistics, especially at low SNRs. 
We also found that high-pass temporal filtering could substantially affect the performance of both techniques by removing the low-frequency components of the hemodynamic response.

  1. Predicting long-term average concentrations of traffic-related air pollutants using GIS-based information

    Science.gov (United States)

    Hochadel, Matthias; Heinrich, Joachim; Gehring, Ulrike; Morgenstern, Verena; Kuhlbusch, Thomas; Link, Elke; Wichmann, H.-Erich; Krämer, Ursula

    Global regression models were developed to estimate individual levels of long-term exposure to traffic-related air pollutants. The models are based on data of a one-year measurement programme including geographic data on traffic and population densities. This investigation is part of a cohort study on the impact of traffic-related air pollution on respiratory health, conducted at the westerly end of the Ruhr-area in North-Rhine Westphalia, Germany. Concentrations of NO 2, fine particle mass (PM 2.5) and filter absorbance of PM 2.5 as a marker for soot were measured at 40 sites spread throughout the study region. Fourteen-day samples were taken between March 2002 and March 2003 for each season and site. Annual average concentrations for the sites were determined after adjustment for temporal variation. Information on traffic counts in major roads, building densities and community population figures were collected in a geographical information system (GIS). This information was used to calculate different potential traffic-based predictors: (a) daily traffic flow and maximum traffic intensity of buffers with radii from 50 to 10 000 m and (b) distances to main roads and highways. NO 2 concentration and PM 2.5 absorbance were strongly correlated with the traffic-based variables. Linear regression prediction models, which involved predictors with radii of 50 to 1000 m, were developed for the Wesel region where most of the cohort members lived. They reached a model fit ( R2) of 0.81 and 0.65 for NO 2 and PM 2.5 absorbance, respectively. Regression models for the whole area required larger spatial scales and reached R2=0.90 and 0.82. Comparison of predicted values with NO 2 measurements at independent public monitoring stations showed a satisfactory association ( r=0.66). 
PM 2.5 concentration, however, was only slightly correlated with the traffic-based variables and thus poorly predictable by them. GIS-based regression models offer a promising approach to assess individual levels of long-term exposure to traffic-related air pollutants.
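The regression step behind such land-use models is ordinary least squares of measured annual-average concentrations on GIS predictors. A minimal one-predictor stdlib sketch; the study used many predictors and buffer radii, and the site values below are invented:

```python
def ols(xs, ys):
    """Ordinary least squares fit y = slope * x + intercept, plus R^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical data: annual-average NO2 (ug/m3) at six monitoring sites vs.
# a single GIS predictor (log daily traffic flow in a buffer; invented values).
traffic = [8.2, 9.1, 9.8, 10.4, 11.0, 11.7]
no2 = [21.0, 24.5, 27.1, 29.8, 31.9, 35.0]
slope, intercept, r2 = ols(traffic, no2)
```

The fitted model is then applied to the GIS predictors at each cohort member's address to estimate individual long-term exposure, which is what the reported R² values of 0.81-0.90 refer to.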

2. Average extinction curves and relative abundances for quasi-stellar object absorption-line systems at 1 ≤ zabs < 2

    Science.gov (United States)

    York, Donald G.; Khare, Pushpa; Vanden Berk, Daniel; Kulkarni, Varsha P.; Crotts, Arlin P. S.; Lauroesch, James T.; Richards, Gordon T.; Schneider, Donald P.; Welty, Daniel E.; Alsayyad, Yusra; Kumar, Abhishek; Lundgren, Britt; Shanidze, Natela; Smith, Tristan; Vanlandingham, Johnny; Baugher, Britt; Hall, Patrick B.; Jenkins, Edward B.; Menard, Brice; Rao, Sandhya; Tumlinson, Jason; Turnshek, David; Yip, Ching-Wa; Brinkmann, Jon

    2006-04-01

We have studied a sample of 809 MgII absorption systems with 1.0 ≤ zabs ≤ 1.86 in the spectra of Sloan Digital Sky Survey quasi-stellar objects (QSOs), with the aim of understanding the nature and abundance of the dust and the chemical abundances in the intervening absorbers. Normalized, composite spectra were derived, for abundance measurements, for the full sample and several subsamples, chosen on the basis of the line strengths and other absorber and QSO properties. Average extinction curves were obtained for the subsamples by comparing their geometric mean spectra with those of matching samples of QSOs without absorbers in their spectra. There is clear evidence for the presence of dust in the intervening absorbers. The 2175-Å feature is not present in the extinction curves, for any of the subsamples. The extinction curves are similar to the Small Magellanic Cloud (SMC) extinction curve with a rising ultraviolet (UV) extinction below 2200 Å. The absorber rest-frame colour excess, E(B-V), derived from the extinction curves, depends on the absorber properties and ranges from <0.001 to 0.085 for various subsamples. The column densities of MgII, AlII, SiII, CaII, TiII, CrII, MnII, FeII, CoII, NiII and ZnII do not show such a correspondingly large variation. The overall depletions in the high E(B-V) samples are consistent with those found for individual damped Lyman α systems, the depletion pattern being similar to halo clouds in the Galaxy. Assuming an SMC gas-to-dust ratio, we find a trend of increasing abundance with decreasing extinction; systems with NHI ~ 10²⁰ cm⁻² show solar abundance of Zn. The large velocity spread of strong MgII systems seems to be mimicked by weak lines of other elements. The ionization of the absorbers, in general, appears to be low: the ratio of the column densities of AlIII to AlII is always less than 1/2. QSOs with absorbers are, in general, at least three times as likely to have highly reddened spectra as compared to QSOs without any

  3. Association of average daily alcohol consumption, binge drinking and alcohol-related social problems: results from the German Epidemiological Surveys of Substance Abuse.

    Science.gov (United States)

    Kraus, Ludwig; Baumeister, Sebastian E; Pabst, Alexander; Orth, Boris

    2009-01-01

    The present study investigates the combined effect of average volume and binge drinking in predicting alcohol-related social problems and estimates the proportion of alcohol-related harms related to specific drinking patterns that could be prevented if drinkers were transferred to a low-risk drinking group. Data came from the 1997 and 2000 German Epidemiological Survey of Substance Abuse (ESA) (age: 18-59 years; response rates: 65% and 51%, respectively). The pooled sample consisted of 12,668 current drinkers. Using nine categories of average daily intake and three groups of binge drinking, individuals were grouped into 22 mutually exclusive groups. Social problems were defined as the occurrence of 'repeated family quarrels', 'concern of family members or friends', 'loss of partner or friend' or 'physical fight or injury' in relation to alcohol. The effect of average daily intake is modified by binge drinking frequency, such that the association was strongest in those with four or more binge drinking occasions during the last 30 days. Within each binge drinking group, adjusted relative risks (aRR) increased with alcohol intake up to a certain threshold and decreased thereafter. Overall, compared to the reference group, drinking pattern was more strongly related to alcohol-related social problems than volume. Alcohol-related social harms, especially among drinkers with moderate volume per day, may be reduced by targeting prevention strategies towards episodic heavy drinkers.

  4. Demographic and Psychological Predictors of Grade Point Average (GPA) in North-Norway: A Particular Analysis of Cognitive/School-Related and Literacy Problems

    Science.gov (United States)

    Saele, Rannveig Grøm; Sørlie, Tore; Nergård-Nilssen, Trude; Ottosen, Karl-Ottar; Goll, Charlotte Bjørnskov; Friborg, Oddgeir

    2016-01-01

    Approximately 30% of students drop out from Norwegian upper secondary schools. Academic achievement, as indexed by grade point average (GPA), is one of the strongest predictors of dropout. The present study aimed to examine the role of cognitive, school-related and affective/psychological predictors of GPA. In addition, we examined the…

  5. A Bayesian model averaging approach for estimating the relative risk of mortality associated with heat waves in 105 U.S. cities.

    Science.gov (United States)

    Bobb, Jennifer F; Dominici, Francesca; Peng, Roger D

    2011-12-01

    Estimating the risks heat waves pose to human health is a critical part of assessing the future impact of climate change. In this article, we propose a flexible class of time series models to estimate the relative risk of mortality associated with heat waves and conduct Bayesian model averaging (BMA) to account for the multiplicity of potential models. Applying these methods to data from 105 U.S. cities for the period 1987-2005, we identify those cities having a high posterior probability of increased mortality risk during heat waves, examine the heterogeneity of the posterior distributions of mortality risk across cities, assess sensitivity of the results to the selection of prior distributions, and compare our BMA results to a model selection approach. Our results show that no single model best predicts risk across the majority of cities, and that for some cities heat-wave risk estimation is sensitive to model choice. Although model averaging leads to posterior distributions with increased variance as compared to statistical inference conditional on a model obtained through model selection, we find that the posterior mean of heat wave mortality risk is robust to accounting for model uncertainty over a broad class of models. © 2011, The International Biometric Society.
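    The model-averaging step described above can be sketched with a hypothetical toy calculation (not the authors' actual time-series models): the BMA posterior mean of a relative risk weights each model's estimate by its posterior probability, and the BMA variance adds a between-model term via the law of total variance.

```python
# Illustrative sketch of Bayesian model averaging (BMA). All numbers below
# are invented for demonstration; they are not results from the paper.

def bma_combine(post_probs, means, variances):
    """Combine per-model posterior summaries into a BMA mean and variance."""
    assert abs(sum(post_probs) - 1.0) < 1e-9
    mean = sum(p * m for p, m in zip(post_probs, means))
    # Law of total variance: within-model part + between-model part.
    var = sum(p * (v + (m - mean) ** 2)
              for p, m, v in zip(post_probs, means, variances))
    return mean, var

# Three hypothetical candidate models for heat-wave mortality risk:
probs = [0.5, 0.3, 0.2]
risk_means = [1.04, 1.07, 1.01]        # relative risk under each model
risk_vars = [0.0004, 0.0009, 0.0001]   # conditional posterior variances

mean, var = bma_combine(probs, risk_means, risk_vars)
print(round(mean, 4), round(var, 6))   # prints: 1.043 0.000931
```

    The between-model term is what the abstract alludes to when it notes that model averaging widens the posterior relative to inference conditional on a single selected model.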

  6. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    Science.gov (United States)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.

  7. Averaging in cosmological models

    OpenAIRE

    Coley, Alan

    2010-01-01

    The averaging problem in cosmology is of considerable importance for the correct interpretation of cosmological data. We review cosmological observations and discuss some of the issues regarding averaging. We present a precise definition of a cosmological model and a rigorous mathematical definition of averaging, based entirely in terms of scalar invariants.

  8. A comparative study between a simplified Kalman filter and Sliding Window Averaging for single trial dynamical estimation of event-related potentials

    DEFF Research Database (Denmark)

    Vedel-Larsen, Esben; Fuglø, Jacob; Channir, Fouad

    2010-01-01

    , are variable and depend on cognitive function. This study compares the performance of a simplified Kalman filter with Sliding Window Averaging in tracking dynamical changes in single trial P300. The comparison is performed on simulated P300 data with added background noise consisting of both simulated and real...... background EEG at various input signal-to-noise ratios. While both methods can be applied to track dynamical changes, the simplified Kalman filter has an advantage over the Sliding Window Averaging, most notably in better noise suppression when both are optimized for faster changing latency and amplitude...
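    The two trackers being compared can be sketched minimally as follows (hypothetical parameters and data, not those used in the study): a sliding-window average, and a simplified scalar Kalman filter with a random-walk state model tracking a slowly drifting single-trial amplitude.

```python
# Minimal sketches of the two trackers; q, r and the window width are
# illustrative choices, not the paper's settings.
import math, random

def sliding_window(xs, width):
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - width + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

def scalar_kalman(xs, q, r, x0=0.0, p0=1.0):
    """q: process-noise variance (random walk), r: measurement-noise variance."""
    x, p, out = x0, p0, []
    for z in xs:
        p += q                      # predict step (random-walk state)
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with the innovation
        p *= (1 - k)
        out.append(x)
    return out

# A drifting "amplitude" observed in noise:
random.seed(0)
truth = [10 + 0.05 * t for t in range(200)]
obs = [v + random.gauss(0, 1.0) for v in truth]

sw = sliding_window(obs, 20)
kf = scalar_kalman(obs, q=0.01, r=1.0)
```

    The window width (or q/r ratio) sets the usual trade-off between noise suppression and responsiveness to latency/amplitude changes that the abstract describes.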

  10. Convergence of multiple ergodic averages

    OpenAIRE

    Host, Bernard

    2006-01-01

    These notes are based on a course for a general audience given at the Centro de Modelamiento Matemático of the University of Chile in December 2004. We study the mean convergence of multiple ergodic averages, that is, averages of a product of functions taken at different times. We also describe the relations between this area of ergodic theory and some classical and some recent results in additive number theory.

  11. Relative importance of first and second derivatives of nuclear magnetic resonance chemical shifts and spin-spin coupling constants for vibrational averaging

    Czech Academy of Sciences Publication Activity Database

    Dračínský, Martin; Kaminský, Jakub; Bouř, Petr

    2009-01-01

    Roč. 130, č. 9 (2009), 094106/1-094106/13 ISSN 0021-9606 R&D Projects: GA ČR GA203/06/0420; GA ČR GA202/07/0732; GA AV ČR IAA400550702 Institutional research plan: CEZ:AV0Z40550506 Keywords : NMR * anharmonic forces * vibrational averaging Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.093, year: 2009

  12. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong...... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion...
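    The quaternion-barycenter estimate the abstract refers to can be sketched as follows (an illustrative implementation, not the authors' code): component-wise averaging of unit quaternions leaves the unit 4-sphere, so a re-normalization step is applied afterwards to recover a proper rotation.

```python
# Barycenter-plus-renormalization averaging of unit quaternions.
# Sign alignment is needed because q and -q encode the same rotation.
import math

def quat_mean(quats):
    ref = quats[0]
    acc = [0.0, 0.0, 0.0, 0.0]
    for q in quats:
        # Flip sign so every quaternion lies in the same hemisphere as ref.
        if sum(a * b for a, b in zip(q, ref)) < 0:
            q = [-c for c in q]
        acc = [a + c for a, c in zip(acc, q)]
    n = math.sqrt(sum(c * c for c in acc))
    return [c / n for c in acc]          # re-normalized: back on S^3

def rotz(angle):
    """Unit quaternion (w, x, y, z) for a rotation about the z-axis."""
    return [math.cos(angle / 2), 0.0, 0.0, math.sin(angle / 2)]

mean = quat_mean([rotz(0.1), rotz(0.3)])
print([round(c, 4) for c in mean])  # close to rotz(0.2) for nearby rotations
```

    For widely separated rotations the barycenter deviates from the Riemannian (geodesic) mean, which is the gap the article's more advanced approach addresses.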

  13. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.

  14. Moving Average Convergence Divergence filter preprocessing for real-time event-related peak activity onset detection : application to fNIRS signals.

    Science.gov (United States)

    Durantin, Gautier; Scannella, Sebastien; Gateau, Thibault; Delorme, Arnaud; Dehais, Frederic

    2014-01-01

    Real-time solutions for noise reduction and signal processing represent a central challenge for the development of Brain Computer Interfaces (BCI). In this paper, we introduce the Moving Average Convergence Divergence (MACD) filter, a tunable digital passband filter used in economic market analysis, for online noise reduction and onset detection without a preliminary learning phase. MACD performance was tested and benchmarked against other filters using data collected with functional Near Infrared Spectroscopy (fNIRS) during a digit sequence memorization task. The filter shows good performance in filtering and real-time peak activity onset detection compared to other techniques. Therefore, MACD could be implemented for efficient BCI design using fNIRS.
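    The MACD idea itself is simple to sketch (standard technical-analysis form with the conventional 12/26 spans; the fNIRS-specific tuning from the paper is not reproduced here): the difference of a fast and a slow exponential moving average acts as a passband filter, and its swing away from zero can flag an activity onset.

```python
# MACD = fast EMA minus slow EMA; a causal filter usable online.

def ema(xs, span):
    alpha = 2.0 / (span + 1.0)
    out, s = [], xs[0]
    for x in xs:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out

def macd(xs, fast=12, slow=26):
    return [f - s for f, s in zip(ema(xs, fast), ema(xs, slow))]

# A step change makes the MACD output swing positive at the onset:
signal = [0.0] * 50 + [1.0] * 50
m = macd(signal)
onset = next(i for i, v in enumerate(m) if v > 0.05)
print(onset)  # prints 50: the first post-step sample crosses the threshold
```

    Because both EMAs are causal, no preliminary learning phase is needed, which is the property the abstract highlights for real-time BCI use.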

  15. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Jul 3, 2014 ... The arithmetic mean of objects in a space need not lie in the space [Fréchet, 1948]. Example: finding the mean of right-angled triangles. S = {(x, y, z) ∈ R+^3 : x^2 + y^2 = z^2} = { [[z, x − iy], [x + iy, z]] : x, y, z > 0, z^2 = x^2 + y^2 }. Surface of right triangles: the arithmetic mean does not lie on S. Tanvi Jain. Averaging operations on matrices ...

  16. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Jul 3, 2014 ... flow at each voxel of a brain scan. • Elasticity: 6 × 6 positive-definite matrices model stress tensors. • Machine learning: n × n positive-definite matrices occur as kernel matrices. ... then the expected extension of the geometric mean, A^(1/2)B^(1/2), is not even self-adjoint, let alone positive definite. Tanvi Jain. Averaging operations on matrices ...

  17. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.

  18. Americans' Average Radiation Exposure

    International Nuclear Information System (INIS)

    2000-01-01

    We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  19. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations are used as an estimate of the mean. These methods neglect that rotations belong...... to a non-linear manifold and re-normalization or orthogonalization must be applied to obtain proper rotations. These latter steps have been viewed as ad hoc corrections for the errors introduced by assuming a vector space. The article shows that the two approximative methods can be derived from natural...... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation....

  20. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  1. Cosmological ensemble and directional averages of observables

    CERN Document Server

    Bonvin, Camille; Durrer, Ruth; Maartens, Roy; Umeh, Obinna

    2015-01-01

    We show that at second order ensemble averages of observables and directional averages do not commute due to gravitational lensing. In principle this non-commutativity is significant for a variety of quantities we often use as observables. We derive the relation between the ensemble average and the directional average of an observable, at second-order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focussing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance is increased by gravitational lensing, whereas the directional average of the distance is decreased. We show that for a generic observable, there exists a particular function of the observable that is invariant under second-order lensing perturbations.

  2. Correctional Facility Average Daily Population

    Data.gov (United States)

    Montgomery County of Maryland — This dataset contains monthly accumulated details from Pre-Trial Services (average daily caseload), Detention Services, and the average daily population for MCCF, MCDC, PRRS and...

  3. Averaging Einstein's equations : The linearized case

    NARCIS (Netherlands)

    Stoeger, William R.; Helmi, Amina; Torres, Diego F.

    We introduce a simple and straightforward averaging procedure, which is a generalization of one that is commonly used in electrodynamics, and show that it possesses all the characteristics we require for linearized averaging in general relativity and cosmology for weak-field and perturbed FLRW cosmologies.

  4. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
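    The stated result is easy to check numerically (toy data below; any positive weights work): with weighting functions w1 and w2 and ratio r = w2/w1, the difference of the two weighted averages of x equals the w1-weighted covariance of x and r divided by the w1-weighted average of r.

```python
# Numerical check of the identity from the abstract:
#   mean_w2(x) - mean_w1(x) = Cov_w1(x, r) / E_w1[r],  r = w2/w1.

def wmean(x, w):
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

x  = [1.0, 2.0, 4.0, 8.0]          # toy variable
w1 = [1.0, 1.0, 1.0, 1.0]          # e.g. the unweighted case
w2 = [0.5, 1.0, 2.0, 4.0]          # an alternative weighting
r  = [b / a for a, b in zip(w1, w2)]

lhs = wmean(x, w2) - wmean(x, w1)
cov = wmean([xi * ri for xi, ri in zip(x, r)], w1) - wmean(x, w1) * wmean(r, w1)
rhs = cov / wmean(r, w1)
print(abs(lhs - rhs) < 1e-12)  # True: the identity holds
```

    The derivation is one line: mean_w2(x) = E_w1[r·x]/E_w1[r], so subtracting mean_w1(x) = E_w1[x] leaves exactly the covariance formula.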

  5. Monthly snow/ice averages (ISCCP)

    Data.gov (United States)

    National Aeronautics and Space Administration — September Arctic sea ice is now declining at a rate of 11.5 percent per decade, relative to the 1979 to 2000 average. Data from NASA show that the land ice sheets in...

  6. Gene-by-environment interactions of the CLOCK, PEMT, and GHRELIN loci with average sleep duration in relation to obesity traits using a cohort of 643 New Zealand European children.

    Science.gov (United States)

    Krishnan, Mohanraj; Shelling, Andrew N; Wall, Clare R; Mitchell, Edwin A; Murphy, Rinki; McCowan, Lesley M E; Thompson, John M D

    2017-09-01

    Modern technology may have desensitised the 'biological clock' to environmental cues, disrupting the appropriate co-ordination of metabolic processes. Susceptibility to misalignment of circadian rhythms may be partly genetically influenced and effects on sleep quality and duration could predispose to poorer health outcomes. Shorter sleep duration is associated with obesity traits, which are brought on by an increased opportunity to eat and/or a shift of hormonal profile promoting hunger. We hypothesised that increased sleep duration will offset susceptible genetic effects, resulting in reduced obesity risk. We recruited 643 (male: 338; female: 305) European children born to participants in the New Zealand centre of the International Screening for Pregnancy Endpoints sleep study. Ten genes directly involved in the circadian rhythm machinery and a further 20 genes hypothesised to be driven by cyclic oscillations were evaluated by Sequenom assay. Multivariable regression was performed to test the interaction between gene variants and average sleep length (derived from actigraphy), in relation to obesity traits (body mass index (BMI) z-scores and percentage body fat (PBF)). No association was found between average sleep length and BMI z-scores (p = 0.056) or PBF (p = 0.609). Uncorrected genotype associations were detected between STAT-rs8069645 (p = 0.0052) and ADIPOQ-rs266729 (p = 0.019) with differences in average sleep duration. Evidence for uncorrected gene-by-sleep interactions of the CLOCK-rs4864548 (p = 0.0039), PEMT-936108 (p = 0.016) and GHRELIN-rs696217 (p = 0.046) were found in relation to BMI z-scores but not for PBF. Our results indicate that children may have different genetic susceptibility to the effects of sleep duration on obesity. Further confirmatory studies are required in other population cohorts of different age groups. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Lagrangian averaging with geodesic mean

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  8. Cosmic inhomogeneities and averaged cosmological dynamics.

    Science.gov (United States)

    Paranjape, Aseem; Singh, T P

    2008-10-31

    If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics.

  9. Measurement of the average number of prompt neutrons emitted per fission of 235U relative to 252Cf for the energy region 500 eV to 10 MeV

    International Nuclear Information System (INIS)

    Gwin, R.; Spencer, R.R.; Ingle, R.W.; Todd, J.H.; Weaver, H.

    1980-01-01

    The average number of prompt neutrons emitted per fission, ν̄_p(E), was measured for 235U relative to ν̄_p for the spontaneous fission of 252Cf over the neutron energy range from 500 eV to 10 MeV. The samples of 235U and 252Cf were contained in fission chambers located in the center of a large liquid scintillator. Fission neutrons were detected by the large liquid scintillator. The present values of ν̄_p(E) for 235U are about 0.8% larger than those measured by Boldeman. In earlier work with the present system, it was noted that Boldeman's value of ν̄_p(E) for thermal-energy neutrons was about 0.8% lower than that obtained at ORELA. It is suggested that the thickness of the fission foil used in Boldeman's experiment may cause some of the discrepancy between his values and the present values of ν̄_p(E). For the energy region up to 700 keV, the present values of ν̄_p(E) for 235U agree, within the uncertainty, with those given in ENDF/B-V. Above 1 MeV the present results for ν̄_p(E) range about the ENDF/B-V values with differences up to 1.3%. 6 figures, 1 table

  10. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary...... Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain....

  11. Averaging of multivalued differential equations

    Directory of Open Access Journals (Sweden)

    G. Grammel

    2003-04-01

    Full Text Available Nonlinear multivalued differential equations with slow and fast subsystems are considered. Under transitivity conditions on the fast subsystem, the slow subsystem can be approximated by an averaged multivalued differential equation. The approximation in the Hausdorff sense is of order O(ϵ^(1/3)) as ϵ → 0.

  12. Fuzzy Weighted Average: Analytical Solution

    NARCIS (Netherlands)

    van den Broek, P.M.; Noppen, J.A.R.

    2009-01-01

    An algorithm is presented for the computation of analytical expressions for the extremal values of the α-cuts of the fuzzy weighted average, for triangular or trapezoidal weights and attributes. Also, an algorithm for the computation of the inverses of these expressions is given, providing exact

  13. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources is briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  14. Polyhedral Painting with Group Averaging

    Science.gov (United States)

    Farris, Frank A.; Tsao, Ryan

    2016-01-01

    The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…

  15. Backus and Wyllie Averages for Seismic Attenuation

    Science.gov (United States)

    Qadrouh, Ayman N.; Carcione, José M.; Ba, Jing; Gei, Davide; Salim, Ahmed M.

    2018-01-01

    Backus and Wyllie equations are used to obtain average seismic velocities at zero and infinite frequencies, respectively. Here, these equations are generalized to obtain averages of the seismic quality factor (inversely proportional to attenuation). The results indicate that the Wyllie velocity is higher than the corresponding Backus quantity, as expected, since the ray velocity is a high-frequency limit. On the other hand, the Wyllie quality factor is higher than the Backus one, following the velocity trend, i.e., the higher the velocity (the stiffer the medium), the higher the attenuation. Since the quality factor can be related to properties such as porosity, permeability, and fluid viscosity, these averages can be useful for evaluating reservoir properties.
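    The two classical velocity averages that the abstract generalizes can be sketched for a simple two-layer medium (illustrative numbers; the quality-factor extension itself is the paper's contribution and is not reproduced here). The Wyllie (ray, high-frequency) average is the time-average of slownesses, while the Backus (low-frequency) average harmonically averages the moduli M_i = rho_i · v_i².

```python
# Normal-incidence averages for a layered medium with thickness fractions
# f_i, velocities v_i (m/s) and densities rho_i (kg/m^3). Numbers invented.
import math

def wyllie_velocity(f, v):
    # Time-average equation: total traveltime = sum of layer slownesses.
    return 1.0 / sum(fi / vi for fi, vi in zip(f, v))

def backus_velocity(f, v, rho):
    # Harmonic average of moduli, arithmetic average of densities.
    m_eff = 1.0 / sum(fi / (ri * vi ** 2) for fi, vi, ri in zip(f, v, rho))
    rho_eff = sum(fi * ri for fi, ri in zip(f, rho))
    return math.sqrt(m_eff / rho_eff)

f = [0.5, 0.5]
v = [2000.0, 4000.0]
rho = [2000.0, 2500.0]

vw = wyllie_velocity(f, v)
vb = backus_velocity(f, v, rho)
print(round(vw, 1), round(vb, 1))  # Wyllie exceeds Backus, as the abstract notes
```

    This reproduces the ordering the abstract states: the high-frequency (Wyllie) velocity is always at least the low-frequency (Backus) one for a heterogeneous stack.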

  16. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed, but they cannot completely eliminate the waveform reconstruction error caused by PCE. To overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
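    Classical TDA, the baseline that FTDA improves on, can be sketched as synchronous averaging over known-period segments (this sketch assumes the period is an integer number of samples, so no period cutting error arises; FTDA's harmonic adjustment is not reproduced here).

```python
# Synchronous (time domain) averaging: slice the signal into period-length
# segments and average them, suppressing non-synchronous components.
import math, random

def time_domain_average(xs, period):
    n = len(xs) // period
    return [sum(xs[k * period + i] for k in range(n)) / n
            for i in range(period)]

random.seed(1)
period, cycles = 40, 200
clean = [math.sin(2 * math.pi * i / period) for i in range(period)]
noisy = [clean[i % period] + random.gauss(0, 1.0)
         for i in range(period * cycles)]

avg = time_domain_average(noisy, period)
rms_err = math.sqrt(sum((a - c) ** 2 for a, c in zip(avg, clean)) / period)
print(rms_err < 0.3)  # True: noise std drops roughly by sqrt(cycles)
```

    The comb-filter behaviour the abstract mentions follows directly: only components at exact multiples of the period frequency survive the averaging, which is why TDA alone cannot isolate fault-specific harmonics between the comb teeth.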

  17. THE MASS-METALLICITY RELATION OF GLOBULAR CLUSTERS IN THE CONTEXT OF NONLINEAR COLOR-METALLICITY RELATIONS

    International Nuclear Information System (INIS)

    Blakeslee, John P.; Cantiello, Michele; Peng, Eric W.

    2010-01-01

    Two recent empirical developments in the study of extragalactic globular cluster (GC) populations are the color-magnitude relation of the blue GCs (the 'blue tilt') and the nonlinearity of the dependence of optical GC colors on metallicity. The color-magnitude relation, interpreted as a mass-metallicity relation, is thought to be a consequence of self-enrichment. Nonlinear color-metallicity relations have been shown to produce bimodal color distributions from unimodal metallicity distributions. We simulate GC populations including both a mass-metallicity scaling relation and nonlinear color-metallicity relations motivated by theory and observations. Depending on the assumed range of metallicities and the width of the GC luminosity function (GCLF), we find that the simulated populations can have bimodal color distributions with a 'blue tilt' similar to observations, even though the metallicity distribution appears unimodal. The models that produce these features have the relatively high mean GC metallicities and nearly equal blue and red peaks characteristic of giant elliptical galaxies. The blue tilt is less apparent in the models with metallicities typical of dwarf ellipticals; the narrower GCLF in these galaxies has an even bigger effect in reducing the significance of their color-magnitude slopes. We critically examine the evidence for nonlinearity versus bimodal metallicities as explanations for the characteristic double-peaked color histograms of giant ellipticals and conclude that the question remains open. We discuss the prospects for further theoretical and observational progress in constraining the models presented here and for uncovering the true metallicity distributions of extragalactic GC systems.

  18. Averaging theorems in finite deformation plasticity

    CERN Document Server

    Nemat-Nasser, S C

    1999-01-01

    The transition from micro- to macro-variables of a representative volume element (RVE) of a finitely deformed aggregate (e.g., a composite or a polycrystal) is explored. A number of exact fundamental results on averaging techniques, valid at finite deformations and rotations of any arbitrary heterogeneous continuum, are obtained. These results depend on the choice of suitable kinematic and dynamic variables. For finite deformations, the deformation gradient and its rate, and the nominal stress and its rate, are optimally suited for the averaging purposes. A set of exact identities is presented in terms of these variables. An exact method for homogenization of an ellipsoidal inclusion in an unbounded finitely deformed homogeneous solid is presented, generalizing Eshelby's method for application to finite deformation problems. In terms of the nominal stress rate and the rate of change of the deformation gradient, measured relative to any arbitrary state, a general phase-transformation problem is considered...

  19. Site Averaged Neutron Soil Moisture: 1988 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: Site averaged product of the neutron probe soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged...

  20. Site Averaged Gravimetric Soil Moisture: 1989 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — Site averaged product of the gravimetric soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged for each...

  1. Site Averaged Gravimetric Soil Moisture: 1988 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — Site averaged product of the gravimetric soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged for each...

  2. Site Averaged Gravimetric Soil Moisture: 1987 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: Site averaged product of the gravimetric soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged...

  3. Site Averaged Gravimetric Soil Moisture: 1987 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — Site averaged product of the gravimetric soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged for each...

  4. Weighted south-wide average pulpwood prices

    Science.gov (United States)

    James E. Granskog; Kevin D. Growther

    1991-01-01

    Weighted average prices provide a more accurate representation of regional pulpwood price trends when production volumes vary widely by state. Unweighted South-wide average delivered prices for pulpwood, as reported by Timber Mart-South, were compared to average annual prices weighted by each state's pulpwood production from 1977 to 1986. Weighted average prices...
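
The weighting described above is a production-weighted mean: each state's price counts in proportion to its pulpwood output. A minimal sketch with made-up prices and volumes (all numbers hypothetical, not Timber Mart-South data):

```python
def weighted_average(prices, volumes):
    """Production-weighted average price: sum(p_i * v_i) / sum(v_i)."""
    return sum(p * v for p, v in zip(prices, volumes)) / sum(volumes)

# Hypothetical state prices ($/cord) and pulpwood production (thousand cords).
prices = [20.0, 25.0, 30.0]
volumes = [100, 300, 600]

print(weighted_average(prices, volumes))  # 27.5
print(sum(prices) / len(prices))          # 25.0 (unweighted, for contrast)
```

When one state dominates production, the weighted mean tracks that state's price far more closely than the unweighted mean does.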

  5. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  6. Averaging in cosmological models using scalars

    International Nuclear Information System (INIS)

    Coley, A A

    2010-01-01

    The averaging problem in cosmology is of considerable importance for the correct interpretation of cosmological data. A rigorous mathematical definition of averaging in a cosmological model is necessary. In general, a spacetime is completely characterized by its scalar curvature invariants, and this suggests a particular spacetime averaging scheme based entirely on scalars. We clearly identify the problems of averaging in a cosmological model. We then present a precise definition of a cosmological model, and based upon this definition, we propose an averaging scheme in terms of scalar curvature invariants. This scheme is illustrated in a simple static spherically symmetric perfect fluid cosmological spacetime, where the averaging scales are clearly identified.

  7. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  8. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
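
The quoted value can be checked arithmetically: 620160 is the sum of path lengths over all 8! = 40320 input orderings, so the minimum average depth is 620160/8!:

```python
import math

orderings = math.factorial(8)   # 40320 possible orderings of 8 distinct elements
total_depth = 620160            # sum of depths over all orderings (from the abstract)
avg_depth = total_depth / orderings
print(avg_depth)                # ≈ 15.381
```

The result sits just above the information-theoretic lower bound log2(8!) ≈ 15.30 comparisons.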

  9. Bayesian Model Averaging for Propensity Score Analysis.

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2014-01-01

    This article considers Bayesian model averaging as a means of addressing uncertainty in the selection of variables in the propensity score equation. We investigate an approximate Bayesian model averaging approach based on the model-averaged propensity score estimates produced by the R package BMA but that ignores uncertainty in the propensity score. We also provide a fully Bayesian model averaging approach via Markov chain Monte Carlo sampling (MCMC) to account for uncertainty in both parameters and models. A detailed study of our approach examines the differences in the causal estimate when incorporating noninformative versus informative priors in the model averaging stage. We examine these approaches under common methods of propensity score implementation. In addition, we evaluate the impact of changing the size of Occam's window used to narrow down the range of possible models. We also assess the predictive performance of both Bayesian model averaging propensity score approaches and compare it with the case without Bayesian model averaging. Overall, results show that both Bayesian model averaging propensity score approaches recover the treatment effect estimates well and generally provide larger uncertainty estimates, as expected. Both Bayesian model averaging approaches offer slightly better prediction of the propensity score compared with the Bayesian approach with a single propensity score equation. Covariate balance checks for the case study show that both Bayesian model averaging approaches offer good balance. The fully Bayesian model averaging approach also provides posterior probability intervals of the balance indices.

  10. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
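
The two standard hourly-value types compared above can be sketched from 1-min data (synthetic values here; the array shapes and noise model are assumptions, not the observatory pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
minute_values = rng.normal(0.0, 5.0, size=24 * 60)  # one synthetic day of 1-min values (nT)

by_hour = minute_values.reshape(24, 60)
spot = by_hour[:, 0]           # instantaneous "spot" value at the top of each hour
boxcar = by_hour.mean(axis=1)  # simple 1-h "boxcar" average

print(spot.shape, boxcar.shape)  # (24,) (24,)
```

The spot series preserves the amplitude range of the 1-min data but aliases sub-hourly variation; the boxcar series suppresses it at the cost of amplitude distortion, matching the trade-off described in the abstract.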

  11. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  12. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Department of Resources — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  13. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  14. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
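
The constrained-simulation variant described above (average force at discrete locations along the coordinate) amounts to integrating the negative mean force to recover the free energy profile. A minimal numerical sketch with made-up mean-force samples (not data from the paper):

```python
import numpy as np

# Hypothetical mean-force samples <f>(xi) at discrete points along the
# selected coordinate (made-up data for illustration).
xi = np.linspace(0.0, 1.0, 11)
mean_force = -np.sin(np.pi * xi)  # stand-in for the averaged instantaneous force

# dA/dxi = -<f>, so integrate -<f> cumulatively (trapezoid rule).
segments = -0.5 * (mean_force[1:] + mean_force[:-1]) * np.diff(xi)
free_energy = np.concatenate(([0.0], np.cumsum(segments)))
print(round(float(free_energy[-1]), 3))  # 0.631 (exact integral: 2/pi ≈ 0.637)
```

Finer spacing of the constrained windows reduces the trapezoid-rule error in the recovered profile.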

  15. Fixed Average Spectra of Orchestral Instrument Tones

    Directory of Open Access Journals (Sweden)

    Joseph Plazak

    2010-04-01

    The fixed spectrum for an average orchestral instrument tone is presented based on spectral data from the Sandell Harmonic Archive (SHARC). This database contains non-time-variant spectral analyses for 1,338 recorded instrument tones from 23 Western instruments ranging from contrabassoon to piccolo. From these spectral analyses, a grand average was calculated, providing what might be considered an average non-time-variant harmonic spectrum. Average tones were also calculated for each pitch; each of these represents the average of all instruments in the SHARC database capable of producing that pitch. These pitch-specific tones better represent common spectral changes with respect to pitch register, and might be regarded as an "average instrument." Although several caveats apply, an average harmonic tone or instrument may prove useful in analytic and modeling studies. In addition, for perceptual experiments in which non-time-variant stimuli are needed, an average harmonic spectrum may prove to be more ecologically appropriate than common technical waveforms, such as sine tones or pulse trains. Synthesized average tones are available via the web.

  16. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
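
The trimmed average at the heart of TGA drops the extreme values of each element before averaging, which is what makes it robust to pixel outliers. A minimal element-wise sketch (a plain trimmed mean, not the full TGA algorithm; the trim fraction is an assumption):

```python
import numpy as np

def trimmed_mean(x, trim_frac=0.1, axis=0):
    """Element-wise trimmed mean: drop trim_frac of the lowest and highest
    values along `axis`, then average the remainder."""
    x = np.sort(np.asarray(x), axis=axis)
    n = x.shape[axis]
    k = int(n * trim_frac)
    sl = [slice(None)] * x.ndim
    sl[axis] = slice(k, n - k)
    return x[tuple(sl)].mean(axis=axis)

# Ten "images" of 4 pixels each, one image corrupted by outliers.
images = np.ones((10, 4))
images[0] = 1000.0  # pixel outliers in a single observation
print(trimmed_mean(images, trim_frac=0.1, axis=0))  # [1. 1. 1. 1.]
```

The ordinary mean of the same data is 100.9 per pixel; trimming a single extreme value per pixel recovers the uncorrupted answer.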

  17. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    is less clear if the teacher distribution is unknown. I define a class of averaging procedures, the temperated likelihoods, including both Bayes averaging with a uniform prior and maximum likelihood estimation as special cases. I show that Bayes is generalization optimal in this family for any teacher...

  18. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  19. Average action for models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.; Wetterich, C.

    1993-01-01

    The average action is a new tool for investigating spontaneous symmetry breaking in elementary particle theory and statistical mechanics beyond the validity of standard perturbation theory. The aim of this work is to provide techniques for an investigation of models with fermions and scalars by means of the average potential. In the phase with spontaneous symmetry breaking, the inner region of the average potential becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations in this region necessitate a calculation of the fermion determinant in a spin wave background. We also compute the fermionic contribution to the wave function renormalization in the scalar kinetic term. (orig.)

  20. Small scale magnetic flux-averaged magnetohydrodynamics

    International Nuclear Information System (INIS)

    Pfirsch, D.; Sudan, R.N.

    1994-01-01

    By relaxing exact magnetic flux conservation below a scale λ, a system of flux-averaged magnetohydrodynamic equations is derived from Hamilton's principle with modified constraints. An energy principle can be derived from the linearized averaged system because the total system energy is conserved. This energy principle is employed to treat the resistive tearing instability, and the exact growth rate is recovered when λ is identified with the resistive skin depth. A necessary and sufficient stability criterion for the tearing instability with line tying at the ends, relevant to solar coronal loops, is also obtained. The method is extended to both spatial and temporal averaging in Hamilton's principle. The resulting system of equations not only allows flux reconnection but introduces irreversibility for an appropriate choice of the averaging function. Except for boundary contributions, which are modified by the time-averaging process, total energy and momentum are conserved over times much longer than the averaging time τ, but not over intervals shorter than τ. These modified boundary contributions correspond to the existence of damped waves and shock waves in this theory. Time and space averaging is applied to electron magnetohydrodynamics and, in one-dimensional geometry, predicts solitons and shocks in different limits.

  1. Average-passage flow model development

    Science.gov (United States)

    Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark

    1989-01-01

    A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average-passage flow model describes the time-averaged flow field within a typical passage of a bladed wheel in a multistage configuration. To date, a number of inviscid simulations have been executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average-passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low-speed turbine was executed. The results of this simulation, including a comparison with experimental data, are discussed.

  2. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  3. Average geodesic distance of skeleton networks of Sierpinski tetrahedron

    Science.gov (United States)

    Yang, Jinjin; Wang, Songjing; Xi, Lifeng; Ye, Yongchao

    2018-04-01

    The average distance is a key quantity in the study of complex networks and is related to the Wiener sum, a topological invariant in chemical graph theory. In this paper, we study the skeleton networks of the Sierpinski tetrahedron, an important self-similar fractal, and obtain an asymptotic formula for their average distances. To derive the formula, we develop a technique of finite patterns for integrals of the geodesic distance with respect to the self-similar measure on the Sierpinski tetrahedron.

  4. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  5. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  6. Schedule of average annual equipment ownership expense

    Science.gov (United States)

    2003-03-06

    The "Schedule of Average Annual Equipment Ownership Expense" is designed for use on Force Account bills of Contractors performing work for the Illinois Department of Transportation and local government agencies who choose to adopt these rates. This s...

  7. Should the average tax rate be marginalized?

    Czech Academy of Sciences Publication Activity Database

    Feldman, N. E.; Katuščák, Peter

    -, č. 304 (2006), s. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords : tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  8. Symmetric Euler orientation representations for orientational averaging.

    Science.gov (United States)

    Mayerhöfer, Thomas G

    2005-09-01

    A new kind of orientation representation called symmetric Euler orientation representation (SEOR) is presented. It is based on a combination of the conventional Euler orientation representations (Euler angles) and Hamilton's quaternions. The properties of the SEORs concerning orientational averaging are explored and compared to those of averaging schemes based on conventional Euler orientation representations. To that aim, the reflectance of a hypothetical polycrystalline material with orthorhombic crystal symmetry was calculated. The calculation was carried out according to the average refractive index theory (ARIT [T.G. Mayerhöfer, Appl. Spectrosc. 56 (2002) 1194]). It is shown that the use of averaging schemes based on conventional Euler orientation representations leads to a dependence of the result on the specific Euler orientation representation utilized and on the initial position of the crystal. The latter problem can be partly overcome by introducing a weighting factor, but only for two-axes-type Euler orientation representations. Even then, a residual difference remains in a numerical evaluation of the average. In contrast, this problem does not arise in principle if a symmetric Euler orientation representation is used, and the results of the averaging for both types of orientation representations converge with an increasing number of orientations considered in the numerical evaluation. Additionally, a weighting factor and/or non-equally spaced steps in the numerical evaluation of the average are not necessary. The symmetric Euler orientation representations are therefore ideally suited for use in orientational averaging procedures.

  9. Aplikasi Moving Average Filter Pada Teknologi Enkripsi

    OpenAIRE

    Hermawi, Adrianto

    2007-01-01

    A method of encrypting and decrypting is introduced. The type of information experimented on is a mono wave sound file with frequency 44 kHz. The encryption technology uses a regular noise wave sound file (with equal frequency) and a moving average filter to decrypt and obtain the original signal. All experiments are programmed using MATLAB. By the end of the experiment the author concludes that the Moving Average Filter can indeed be used as an alternative to encryption technology.
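
A moving average filter of the kind described can be sketched in a few lines (Python rather than the authors' MATLAB; the window length and test signal are assumptions):

```python
import numpy as np

def moving_average(signal, window=5):
    """Simple FIR moving-average filter: each output sample is the mean
    of `window` consecutive input samples."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="valid")

# A short signal with one noisy spike, smoothed by a 3-sample window.
noisy = np.array([1.0, 1.0, 1.0, 9.0, 1.0, 1.0, 1.0])
print(moving_average(noisy, window=3))
```

The spike is spread across three output samples and attenuated, which is the low-pass behavior the decryption step relies on.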

  10. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We prove the model outcome with examples and simulation results using NS2 simulator.
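
In a WFQ scheduler, each backlogged flow receives link capacity in proportion to its weight, and capacity unused by lightly loaded flows is redistributed. A simplified sketch of that weighted max-min allocation (illustrative only; the function name, redistribution loop, and numbers are assumptions, not the paper's exact model):

```python
def wfq_allocation(link_rate, weights, demands):
    """Weighted max-min allocation: each flow gets at most its demand;
    leftover capacity is redistributed among still-unsatisfied flows
    in proportion to their weights, iterating until stable."""
    n = len(weights)
    alloc = [0.0] * n
    active = set(range(n))
    capacity = link_rate
    while active:
        total_w = sum(weights[i] for i in active)
        satisfied = {i for i in active
                     if demands[i] <= capacity * weights[i] / total_w}
        if not satisfied:
            # No flow can be fully served: split remaining capacity by weight.
            for i in active:
                alloc[i] = capacity * weights[i] / total_w
            break
        for i in satisfied:
            alloc[i] = demands[i]
            capacity -= demands[i]
        active -= satisfied
    return alloc

# Link of 10 Mb/s, weights 1:1:2, one lightly loaded flow.
print(wfq_allocation(10.0, [1, 1, 2], [1.0, 8.0, 8.0]))  # [1.0, 3.0, 6.0]
```

The lightly loaded flow keeps only what it needs (1.0), and the remaining 9.0 is split 1:2 between the two backlogged flows.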

  11. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.

  12. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
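
The "previous decade" smoothing described above is a trailing moving average; a minimal sketch using the paper's best-fit 11-year window (the data here are made up):

```python
def trailing_average(series, window=11):
    """Trailing moving average: the value at year t is the mean of the
    preceding `window` annual values (years t-window .. t-1)."""
    return [sum(series[t - window:t]) / window
            for t in range(window, len(series) + 1)]

misery = list(range(20))  # made-up annual misery-index values
print(trailing_average(misery, window=11)[:2])  # [5.0, 6.0]
```

Each smoothed value depends only on past years, which is what allows the literary index of a given year to be compared against economic conditions the authors already lived through.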

  13. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion

  14. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    Receiver aperture averaging is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations over a lens of the receiver aperture. Using the modified Rytov theory, which employs small-scale and large-scale spatial filters, and our previously presented expression for the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. The effect of the receiver aperture diameter on the aperture averaging factor in strong oceanic turbulence is also presented.

  15. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering  and analysis of bacterial  convergence by chemotaxis and to apply similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, vanishing stochastic perturbations, and prevent analysis over infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  16. Interpreting Sky-Averaged 21-cm Measurements

    Science.gov (United States)

    Mirocha, Jordan

    2015-01-01

    Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions. I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. Finally, (3) the independent constraints most likely to aid in the interpretation

  17. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for appr...

  18. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well-known EOQ model it can be verified that (under certain conditions) the AC approach gives
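
    As a concrete illustration of the AC approach for the EOQ model, the sketch below computes the classic average-cost objective and its closed-form minimizer; the function names and parameter values are illustrative, not taken from the paper.

```python
import math

def eoq_average_cost(Q, demand, order_cost, holding_cost):
    """Classic average-cost EOQ objective: ordering cost + holding cost per unit time."""
    return order_cost * demand / Q + holding_cost * Q / 2

def eoq_optimal(demand, order_cost, holding_cost):
    """Closed-form minimizer of the average-cost objective: Q* = sqrt(2KD/h)."""
    return math.sqrt(2 * order_cost * demand / holding_cost)

# Illustrative parameters: demand 1000 units/yr, $50 per order, $2/unit/yr holding.
Q_star = eoq_optimal(1000, 50, 2)
print(Q_star)                                  # ~223.6 units per order
print(eoq_average_cost(Q_star, 1000, 50, 2))   # ~447.2 $/yr at the optimum
```

    At the optimum the two cost components are equal, which is one reason the AC solution is so easy to work with; the NPV treatment perturbs this balance through discounting.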

  19. Quantum Averaging of Squeezed States of Light

    DEFF Research Database (Denmark)

    Squeezing has been recognized as the main resource for quantum information processing and an important resource for beating classical detection strategies. It is therefore of high importance to reliably generate stable squeezing over longer periods of time. The averaging procedure for a single qu...

  20. Bayesian Model Averaging for Propensity Score Analysis

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  1. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    Inspired by Raychaudhuri's work, and using the equation named after him as a basic ingredient, a new singularity theorem is proved. Open non-rotating Universes, expanding everywhere with a non-vanishing spatial average of the matter variables, show severe geodesic incompleteness in the past. Another way of stating ...

  2. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for appr...

  3. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Hedin, E.R.

    1988-12-01

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_θ in Extrap T1 is described. The results of a series of measurements yielding β_θ as a function of externally applied toroidal field are presented. (author)

  4. Average Transverse Momentum Quantities Approaching the Lightfront

    NARCIS (Netherlands)

    Boer, Daniel

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the p (T) broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large

  5. Averages of operators in finite Fermion systems

    International Nuclear Information System (INIS)

    Ginocchio, J.N.

    1980-01-01

    The important ingredients in the spectral analysis of Fermion systems are averages of operators. In this paper we shall derive expressions for averages of operators in truncated Fermion spaces in terms of the minimal information needed about the operator. If we take the operator to be powers of the Hamiltonian, we can then study the conditions on a Hamiltonian for the eigenvalues of the Hamiltonian in the truncated space to be Gaussian distributed. The theory of scalar traces is reviewed, and the dependence on nucleon number and single-particle states is examined. These results are used to show that a dilute non-interacting system will have Gaussian distributed eigenvalues, i.e., its cumulants will tend to zero, for a large number of Fermions. The dominant terms in the cumulants of a dilute interacting Fermion system are derived. In this case the cumulants depend crucially on the interaction even for a large number of Fermions. Configuration averaging is briefly discussed. Finally, comments are made on averaging for a fixed number of Fermions and angular momentum

  6. Full averaging of fuzzy impulsive differential inclusions

    Directory of Open Access Journals (Sweden)

    Natalia V. Skripnik

    2010-09-01

    Full Text Available In this paper the substantiation of the method of full averaging for fuzzy impulsive differential inclusions is studied. We extend similar results for impulsive differential inclusions with Hukuhara derivative (Skripnik, 2007), for fuzzy impulsive differential equations (Plotnikov and Skripnik, 2009), and for fuzzy differential inclusions (Skripnik, 2009).

  7. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type
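
    A minimal sketch of one such technical trading rule, the moving-average crossover; the window lengths and price series below are illustrative assumptions, not parameters from the paper.

```python
def moving_average(prices, window):
    """Simple trailing moving average; None until the window has filled."""
    out = []
    for i in range(len(prices)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - window:i + 1]) / window)
    return out

def ma_crossover_signal(prices, short=3, long=5):
    """+1 (hold the asset) while the short MA is above the long MA, else 0."""
    s, l = moving_average(prices, short), moving_average(prices, long)
    return [1 if (a is not None and b is not None and a > b) else 0
            for a, b in zip(s, l)]

prices = [10, 11, 12, 11, 12, 13, 14, 13, 12, 11]
print(ma_crossover_signal(prices))  # [0, 0, 0, 0, 1, 1, 1, 1, 1, 0]
```

    In the models the abstract refers to, agents following rules like this interact with fundamentalist traders, and the aggregate price dynamics depend on the mix of the two.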

  8. Cryo-Electron Tomography and Subtomogram Averaging.

    Science.gov (United States)

    Wan, W; Briggs, J A G

    2016-01-01

    Cryo-electron tomography (cryo-ET) allows 3D volumes to be reconstructed from a set of 2D projection images of a tilted biological sample. It allows densities to be resolved in 3D that would otherwise overlap in 2D projection images. Cryo-ET can be applied to resolve structural features in complex native environments, such as within the cell. Analogous to single-particle reconstruction in cryo-electron microscopy, structures present in multiple copies within tomograms can be extracted, aligned, and averaged, thus increasing the signal-to-noise ratio and resolution. This reconstruction approach, termed subtomogram averaging, can be used to determine protein structures in situ. It can also be applied to facilitate more conventional 2D image analysis approaches. In this chapter, we provide an introduction to cryo-ET and subtomogram averaging. We describe the overall workflow, including tomographic data collection, preprocessing, tomogram reconstruction, subtomogram alignment and averaging, classification, and postprocessing. We consider theoretical issues and practical considerations for each step in the workflow, along with descriptions of recent methodological advances and remaining limitations. © 2016 Elsevier Inc. All rights reserved.
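
    The signal-to-noise benefit of averaging aligned copies can be illustrated with a toy model. Perfect alignment and independent Gaussian noise are simplifying assumptions here; real subtomogram averaging must also solve the alignment problem and handle the missing wedge.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for subtomogram averaging: a known 3D "structure" observed
# N times with independent Gaussian noise (alignment assumed already done).
structure = rng.normal(size=(8, 8, 8))
sigma = 2.0
n_copies = 100
subtomograms = structure + rng.normal(scale=sigma, size=(n_copies, 8, 8, 8))

average = subtomograms.mean(axis=0)

# Noise in each copy has std sigma; in the average it drops to ~sigma/sqrt(N).
err_single = np.std(subtomograms[0] - structure)
err_avg = np.std(average - structure)
print(err_single, err_avg)  # err_avg is roughly 10x smaller for N=100
```

    The sqrt(N) noise reduction is the same statistics that underlies single-particle averaging; the practical difficulty in cryo-ET lies in finding, extracting, and aligning the copies.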

  9. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  10. High Average Power Optical FEL Amplifiers

    CERN Document Server

    Ben-Zvi, I; Litvinenko, V

    2005-01-01

    Historically, the first demonstration of the FEL was in an amplifier configuration at Stanford University. There were other notable instances of amplifying a seed laser, such as the LLNL amplifier and the BNL ATF High-Gain Harmonic Generation FEL. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance a 100 kW average power FEL. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting energy recovery linacs combine well with the high-gain FEL amplifier to produce unprecedented average power FELs with some advantages. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier that is being designed to operate on the 0.5 ampere Energy Recovery Li...

  11. Reliability Estimates for Undergraduate Grade Point Average

    Science.gov (United States)

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  12. Pareto Principle in Datamining: an Above-Average Fencing Algorithm

    Directory of Open Access Journals (Sweden)

    K. Macek

    2008-01-01

    Full Text Available This paper formulates a new datamining problem: which subset of input space has the relatively highest output where the minimal size of this subset is given. This can be useful where usual datamining methods fail because of error distribution asymmetry. The paper provides a novel algorithm for this datamining problem, and compares it with clustering of above-average individuals.

  13. Calculation of average landslide frequency using climatic records

    Science.gov (United States)

    L. M. Reid

    1998-01-01

    Abstract - Aerial photographs are used to develop a relationship between the number of debris slides generated during a hydrologic event and the size of the event, and the long-term average debris-slide frequency is calculated from climate records using the relation.

  14. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors, which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of the average work productivity per factors affecting it is conducted by means of the u-substitution method.

  15. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)
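
    The exponential-averaging recursion itself is short; the sketch below applies it to periodograms of successive data chunks. The chunking scheme, smoothing constant, and white-noise test signal are illustrative choices, not taken from the paper.

```python
import numpy as np

def exp_avg_psd(signal_chunks, alpha):
    """Exponentially average periodograms of successive chunks:
    S_k = (1 - alpha) * S_{k-1} + alpha * P_k, with P_k the k-th raw periodogram."""
    S = None
    for chunk in signal_chunks:
        P = np.abs(np.fft.rfft(chunk)) ** 2 / len(chunk)  # raw periodogram
        S = P if S is None else (1 - alpha) * S + alpha * P
    return S

rng = np.random.default_rng(1)
n_chunks, chunk_len = 200, 256
chunks = [rng.normal(size=chunk_len) for _ in range(n_chunks)]  # unit-variance white noise
psd = exp_avg_psd(chunks, alpha=0.05)

# For unit-variance white noise the averaged PSD should be roughly flat around 1.
print(psd.mean())
```

    The time constant of the recursion is ~1/alpha periodograms; the paper's point is that the PDF of such an estimate is χ²-like for short time constants and approaches a Gaussian as the time constant grows.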

  16. Average Annual Rainfall over the Globe

    Science.gov (United States)

    Agrawal, D. C.

    2013-01-01

    The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…

  17. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.

    Energy Technology Data Exchange (ETDEWEB)

    BEN-ZVI, ILAN, DAYRAN, D.; LITVINENKO, V.

    2005-08-21

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier that is being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  18. Technological progress and average job matching quality

    OpenAIRE

    Centeno, Mário; Corrêa, Márcio V.

    2009-01-01

    Our objective is to study, in a labor market characterized by search frictions, the effect of technological progress on the average quality of job matches. For that, we use an extension of Mortensen and Pissarides (1998) and find that the effects of technological progress on the labor market depend upon the initial conditions of the economy. If the economy is totally characterized by the presence of low-quality job matches, an increase in technological progress is accompanied by ...

  19. Time-averaged MSD of Brownian motion

    OpenAIRE

    Andreanov, Alexei; Grebenkov, Denis

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...

  20. Weighted estimates for the averaging integral operator

    Czech Academy of Sciences Publication Activity Database

    Opic, Bohumír; Rákosník, Jiří

    2010-01-01

    Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231

  1. Cumulative and Averaging Fission of Beliefs

    OpenAIRE

    Josang, Audun

    2007-01-01

    Belief fusion is the principle of combining separate beliefs or bodies of evidence originating from different sources. Depending on the situation to be modelled, different belief fusion methods can be applied. Cumulative and averaging belief fusion is defined for fusing opinions in subjective logic, and for fusing belief functions in general. The principle of fission is the opposite of fusion, namely to eliminate the contribution of a specific belief from an already fused belief, with the pur...

  2. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  3. Average subentropy, coherence and entanglement of random mixed quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy which is attained for the maximally mixed state as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because of the fact that average coherence of random mixed states is bounded uniformly, however, the average coherence of random pure states increases with the increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrary small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.

  4. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter could be considered as a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  5. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...... (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...

  6. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  7. Average local ionization energy: A review.

    Science.gov (United States)

    Politzer, Peter; Murray, Jane S; Bulat, Felipe A

    2010-11-01

    The average local ionization energy I(r) is the energy necessary to remove an electron from the point r in the space of a system. Its lowest values reveal the locations of the least tightly-held electrons, and thus the favored sites for reaction with electrophiles or radicals. In this paper, we review the definition of I(r) and some of its key properties. Apart from its relevance to reactive behavior, I(r) has an important role in several fundamental areas, including atomic shell structure, electronegativity and local polarizability and hardness. All of these aspects of I(r) are discussed.

  8. Time-averaged MSD of Brownian motion

    Science.gov (United States)

    Andreanov, Alexei; Grebenkov, Denis S.

    2012-07-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.
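
    For Brownian motion the expected TAMSD grows linearly with the lag time. A short sketch of the estimator on a simulated trajectory (the unit-variance increments, i.e. 2D = 1 per step, are an illustrative choice):

```python
import random

def tamsd(traj, lag):
    """Time-averaged mean-square displacement of a 1D trajectory at a given lag."""
    n = len(traj)
    return sum((traj[i + lag] - traj[i]) ** 2 for i in range(n - lag)) / (n - lag)

# Discrete Brownian motion with unit-variance Gaussian increments, so
# E[TAMSD(lag)] = 2*D*lag with 2*D = 1 here: linear growth in the lag.
random.seed(42)
traj, x = [0.0], 0.0
for _ in range(100_000):
    x += random.gauss(0.0, 1.0)
    traj.append(x)

print(tamsd(traj, 1), tamsd(traj, 10))  # ~1 and ~10
```

    The paper's subject is precisely the scatter of this single-trajectory quantity around its mean, which is what limits how well D can be inferred from one tracked particle.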

  9. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The rough properties of nuclei were investigated by the statistical model, in systems with the same and different numbers of protons and neutrons, separately, considering the Coulomb energy in the latter system. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. For a good fit of the beta stability lines and mass excess, the surface symmetry energy was established. (M.C.K.) [pt

  10. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

    We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.
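
    The final bound can be checked by brute force on a small example; the Petersen graph used below is our own test case, not one from the paper.

```python
from itertools import combinations

def independence_number(n, edges):
    """Brute-force maximum independent set size (fine for small graphs)."""
    edge_set = set(map(frozenset, edges))
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            if all(frozenset(p) not in edge_set for p in combinations(subset, 2)):
                return size
    return 0

# Petersen graph: connected, triangle-free, n = 10, m = 15.
petersen = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),   # outer 5-cycle
            (5, 7), (7, 9), (9, 6), (6, 8), (8, 5),   # inner pentagram
            (0, 5), (1, 6), (2, 7), (3, 8), (4, 9)]   # spokes
n, m = 10, len(petersen)
alpha = independence_number(n, petersen)
print(alpha, (4 * n - m - 1) / 7)  # 4 >= 24/7, as the bound requires
```

    The independence number of the Petersen graph is 4, comfortably above the guaranteed (4·10−15−1)/7 ≈ 3.43.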

  11. The Effects of Cooperative Learning and Learner Control on High- and Average-Ability Students.

    Science.gov (United States)

    Hooper, Simon; And Others

    1993-01-01

    Describes a study that examined the effects of cooperative versus individual computer-based instruction on the performance of high- and average-ability fourth-grade students. Effects of learner and program control are investigated; student attitudes toward instructional content, learning in groups, and partners are discussed; and further research…

  12. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Full Text Available Consider an arbitrary nonnegative deterministic process (in a stochastic setting, {X(t), t≥0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S=(-∞,∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, will also be discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results will give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.
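
    On a finite sample path the equivalence between the time average and the expectation under the empirical frequency distribution is an identity; the path and test function below are illustrative.

```python
from collections import Counter

# A sample path on a finite state space: the time average of f(X_t) equals
# the expectation of f under the empirical frequency distribution.
path = [0, 1, 1, 2, 0, 1, 2, 2, 1, 0, 1, 2, 1, 1, 0, 2]
f = lambda s: s * s

time_average = sum(f(x) for x in path) / len(path)

freq = Counter(path)
expectation = sum(f(s) * count / len(path) for s, count in freq.items())

print(time_average, expectation)  # identical by construction for a finite path
```

    The content of the paper is the limiting version of this identity: conditions under which the equality survives as the time horizon goes to infinity.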

  13. Group averaging for de Sitter free fields

    Energy Technology Data Exchange (ETDEWEB)

    Marolf, Donald; Morrison, Ian A, E-mail: marolf@physics.ucsb.ed, E-mail: ian_morrison@physics.ucsb.ed [Department of Physics, University of California, Santa Barbara, CA 93106 (United States)

    2009-12-07

    Perturbative gravity about global de Sitter space is subject to linearization-stability constraints. Such constraints imply that quantum states of matter fields couple consistently to gravity only if the matter state has vanishing de Sitter charges, i.e. only if the state is invariant under the symmetries of de Sitter space. As noted by Higuchi, the usual Fock spaces for matter fields contain no de Sitter-invariant states except the vacuum, though a new Hilbert space of de Sitter-invariant states can be constructed via so-called group-averaging techniques. We study this construction for free scalar fields of arbitrary positive mass in any dimension, and for linear vector and tensor gauge fields in any dimension. Our main result is to show in each case that group averaging converges for states containing a sufficient number of particles. We consider general N-particle states with smooth wavefunctions, though we obtain somewhat stronger results when the wavefunctions are finite linear combinations of de Sitter harmonics. Along the way we obtain explicit expressions for general boost matrix elements in a familiar basis.

  14. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll, J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
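
    Trajectory averaging can be sketched on the simplest stochastic approximation recursion, a Robbins-Monro iteration for a noisily observed mean. The step-size schedule and target value are illustrative, and this toy omits the MCMC machinery of the SAMC setting.

```python
import random

# Robbins-Monro for the root of h(theta) = theta - mu, observed with noise;
# averaging the trajectory (Polyak-Ruppert) smooths out the iterate noise.
random.seed(0)
mu = 3.0
theta, running_sum = 0.0, 0.0
n = 50_000
for k in range(1, n + 1):
    noisy_grad = (theta - mu) + random.gauss(0.0, 1.0)  # noisy H(theta, xi)
    theta -= 0.5 / k**0.7 * noisy_grad                  # slowly decaying step
    running_sum += theta

theta_bar = running_sum / n  # trajectory averaging estimator
print(theta, theta_bar)      # both near 3.0; theta_bar is the smoother one
```

    The asymptotic-efficiency result in the paper says that, under conditions, the averaged iterate attains the best achievable variance even though the raw iterate uses a slowly decaying step size.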

  15. Global atmospheric circulation statistics: Four year averages

    Science.gov (United States)

    Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.

    1987-01-01

    Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.

  16. Atomic configuration average simulations for plasma spectroscopy

    International Nuclear Information System (INIS)

    Kilcrease, D.P.; Abdallah, J. Jr.; Keady, J.J.; Clark, R.E.H.

    1993-01-01

    Configuration average atomic physics based on Hartree-Fock methods and an unresolved transition array (UTA) simulation theory are combined to provide a computationally efficient approach for calculating the spectral properties of plasmas involving complex ions. The UTA theory gives an overall representation for the many lines associated with a radiative transition from one configuration to another without calculating the fine structure in full detail. All of the atomic quantities required for synthesis of the spectrum are calculated in the same approximation and used to generate the parameters required for representation of each UTA, the populations of the various atomic states, and the oscillator strengths. We use this method to simulate the transmission of x-rays through an aluminium plasma. (author)

  17. Angle-averaged Compton cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; αₛ = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.

  18. Reynolds averaged simulation of unsteady separated flow

    International Nuclear Information System (INIS)

    Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.

    2003-01-01

    The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flows around a square cylinder and over a wall-mounted cube are simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better agreement with available experimental data than has been achieved with steady computation.

  19. FEL system with homogeneous average output

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph

    2018-01-16

    A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method injects into an accelerator a sequence of bunch trains at phase offsets from crest, then accelerates the particles to full energy, resulting in distinct phase-energy correlations (chirps) on each bunch train, independently controlled by the choice of phase offset. The earlier trains are more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M56, which are selected to compress all three bunch trains at the FEL with higher order terms managed.

  20. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    Full Text Available The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
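
The core AGDI construction described above (accumulate silhouette differences between adjacent frames, then average) can be sketched in a few lines. This is our own minimal illustration, not the authors' code; the function name and the random toy data are ours.

```python
import numpy as np

def average_gait_differential_image(silhouettes):
    """Accumulate absolute differences between adjacent binary silhouette
    frames, then average them into a single feature image (AGDI sketch)."""
    frames = np.asarray(silhouettes, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))  # |frame[t+1] - frame[t]|
    return diffs.mean(axis=0)                # one feature image per sequence

# toy sequence: five random binary 8x8 "silhouettes"
rng = np.random.default_rng(0)
seq = rng.integers(0, 2, size=(5, 8, 8))
agdi = average_gait_differential_image(seq)
print(agdi.shape)  # (8, 8)
```

A static sequence (identical frames) yields an all-zero AGDI, reflecting that the feature encodes motion between frames rather than the silhouette itself.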

  1. Angle-averaged Compton cross sections

    International Nuclear Information System (INIS)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m 0 c 2 ; α/sub s/ = scattered photon energy in units of m 0 c 2 ; β = initial electron velocity in units of c; phi = angle between photon direction and electron direction in the laboratory frame (LF); theta = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and tau = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over phi, theta, and tau. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV

  2. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph, using a simple weighted harmonic average of connectivity: a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and we use our measure to study some simple, analytically tractable networks. We show how this might be used to examine asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorship. We also show the utility of our approach by devising a ratings scheme that we apply to the data from the NetFlix prize, finding a significant improvement using our method over a baseline.
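
The key ingredient, a weighted harmonic average, behaves very differently from an arithmetic mean: it is dominated by the smallest values, so one strong (short) connection keeps two nodes "close" regardless of many weak ties. A minimal sketch (our own, not the paper's GEN construction, whose exact recursion is not reproduced in the abstract):

```python
def weighted_harmonic_mean(values, weights):
    """Weighted harmonic mean: sum(w) / sum(w / v).
    Small values (strong connections) dominate the result."""
    assert len(values) == len(weights) and all(v > 0 for v in values)
    return sum(weights) / sum(w / v for v, w in zip(values, weights))

# one strong tie (distance 1) and one weak tie (distance 100)
print(weighted_harmonic_mean([1.0, 100.0], [1.0, 1.0]))  # ~1.98
print(sum([1.0, 100.0]) / 2)                             # 50.5 (arithmetic)
```

Asymmetry between nodes can then arise because the weights each node applies to its own connections differ, even though the underlying graph is undirected.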

  3. Comparison of averaging techniques for the calculation of the 'European average exposure indicator' for particulate matter.

    Science.gov (United States)

    Brown, Richard J C; Woods, Peter T

    2012-01-01

    A comparison of various averaging techniques to calculate the Average Exposure Indicator (AEI) specified in European Directive 2008/50/EC for particulate matter in ambient air has been performed. This was done for data from seventeen sites around the UK for which PM(10) mass concentration data is available for the years 1998-2000 and 2008-2010 inclusive. The results have shown that use of the geometric mean produces significantly lower AEI values within the required three year averaging periods and slightly lower changes in the AEI value between the three year averaging periods than the use of the arithmetic mean. The use of weighted means in the calculation, using the data capture at each site as the weighting parameter, has also been tested and this is proposed as a useful way of taking account of the confidence of each data set.
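
The comparison described above rests on elementary properties of the three estimators: by the AM-GM inequality the geometric mean never exceeds the arithmetic mean, and a capture-weighted mean discounts years with poor data capture. A small illustration with hypothetical PM10 values (the numbers are invented, not from the study):

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # exp of the mean log; requires strictly positive concentrations
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def capture_weighted_mean(xs, capture):
    """Mean weighted by data capture (fraction of valid measurements)."""
    return sum(w * x for x, w in zip(xs, capture)) / sum(capture)

# hypothetical annual PM10 means (ug/m3) over a three-year averaging period
pm10 = [22.0, 31.0, 25.0]
capture = [0.98, 0.85, 0.95]
print(arithmetic_mean(pm10))            # 26.0
print(geometric_mean(pm10))             # lower, by the AM-GM inequality
print(capture_weighted_mean(pm10, capture))
```

This matches the abstract's finding that geometric-mean AEI values come out systematically lower than arithmetic-mean ones.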

  4. New Nordic diet versus average Danish diet

    DEFF Research Database (Denmark)

    Khakimov, Bekzod; Poulsen, Sanne Kellebjerg; Savorani, Francesco

    2016-01-01

    and 3-hydroxybutanoic acid were related to a higher weight loss, while higher concentrations of salicylic, lactic and N-aspartic acids, and 1,5-anhydro-D-sorbitol were related to a lower weight loss. Specific gender- and seasonal differences were also observed. The study strongly indicates that healthy...... metabolites reflecting specific differences in the diets, especially intake of plant foods and seafood, and in energy metabolism related to ketone bodies and gluconeogenesis, formed the predominant metabolite pattern discriminating the intervention groups. Among NND subjects higher levels of vaccenic acid...

  5. Forecasting stock market averages to enhance profitable trading strategies

    OpenAIRE

    Haefke, Christian; Helmenstein, Christian

    1995-01-01

    In this paper we design a simple trading strategy to exploit the hypothesized distinct informational content of the arithmetic and geometric mean. The rejection of cointegration between the two stock market indicators supports this conjecture. The profits generated by this cheaply replicable trading scheme cannot be expected to persist. Therefore we forecast the averages using autoregressive linear and neural network models to gain a competitive advantage relative to other investors. Refining...

  6. Average Likelihood Methods for Code Division Multiple Access (CDMA)

    Science.gov (United States)

    2014-05-01

    the number of unknown variables grows, the averaging process becomes an extremely complex task. In the multiuser detection , a closely related problem...Theoretical Background The classification of DS/CDMA signals should not be confused with the problem of multiuser detection . The multiuser detection deals...beginning of the sequence. For simplicity, our approach will use similar assumptions to those used in multiuser detection , i.e., chip

  7. Technological progress and average job matching quality

    Directory of Open Access Journals (Sweden)

    Mário Centeno

    2009-12-01

    Full Text Available Our objective is to study, in a labor market characterized by search frictions, the effect of technological progress on the average quality of job matches. For that, we use an extension of Mortensen and Pissarides (1998) and find that the effects of technological progress on the labor market depend upon the initial conditions of the economy. If the economy is characterized entirely by low-quality job matches, an increase in technological progress is accompanied by an increase in the quality of jobs. In turn, if the economy is characterized entirely by high-quality job matches, an increase in the technological progress rate implies the reverse effect. Finally, if the economy is characterized entirely by very high-quality jobs, an increase in the technological progress rate implies an increase in the average quality of the job matches.

  8. On spectral averages in nuclear spectroscopy

    International Nuclear Information System (INIS)

    Verbaarschot, J.J.M.

    1982-01-01

    In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attemps to reproduce the eigenenergies and the corresponding wave functions which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, which is defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method for transforming fixed angular momentum projection traces into fixed angular momentum for the configuration space traces is developed. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)

  9. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1 B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts; such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~ 1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulsewidth ~ 1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  10. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.

  11. 49 CFR 525.11 - Termination of exemption; amendment of alternative average fuel economy standard.

    Science.gov (United States)

    2010-10-01

    ... average fuel economy standard. 525.11 Section 525.11 Transportation Other Regulations Relating to... EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.11 Termination of exemption; amendment of alternative average fuel economy standard. (a) Any exemption granted under this part for an affected model year does...

  12. SEASONAL AVERAGE FLOW IN RÂUL NEGRU HYDROGRAPHIC BASIN

    Directory of Open Access Journals (Sweden)

    VIGH MELINDA

    2015-03-01

    Full Text Available The Râul Negru hydrographic basin is a well-individualised physical-geographical unit inside the Braşov Depression. The flow is controlled by six hydrometric stations placed on the main river and on two important tributaries. The database for seasonal flow analysis contains the discharges from 1950-2012. The results of the data analysis show that there are significant space-time differences between multiannual seasonal averages. Some interesting conclusions can be obtained by comparing abundant and scarce periods. Flow analysis was made using seasonal charts Q = f(T). The similarities come from the basin's relative homogeneity, and the differences from the flow's evolution and trend. Flow variation is analysed using the variation coefficient. In some cases significant Cv value differences appear. Also, Cv value trends are analysed according to the basins' average altitude.

  13. Kumaraswamy autoregressive moving average models for double bounded environmental data

    Science.gov (United States)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to environmental real data is presented and discussed.
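
The KARMA recursion itself is not given in the abstract, but the distribution it builds on is standard: the Kumaraswamy(a, b) law on (0, 1) has CDF F(x) = 1 - (1 - x^a)^b, which gives both a closed-form median (the quantity the KARMA model links to covariates) and trivial sampling by inversion. A hedged sketch (function names ours):

```python
import math
import random

def kuma_median(a, b):
    """Closed-form median of Kumaraswamy(a, b) on (0, 1)."""
    return (1 - 2 ** (-1 / b)) ** (1 / a)

def kuma_sample(a, b, rng=random):
    """Draw by inverting the CDF F(x) = 1 - (1 - x**a)**b."""
    u = rng.random()
    return (1 - (1 - u) ** (1 / b)) ** (1 / a)

random.seed(1)
a, b = 2.0, 3.0
draws = [kuma_sample(a, b) for _ in range(20000)]
print(kuma_median(a, b))               # theoretical median
print(sorted(draws)[len(draws) // 2])  # empirical median, close to it
```

Modeling the median (rather than the mean) is convenient precisely because of this closed form, mirroring how beta regression models a location parameter through a link function.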

  14. Multiple-level defect species evaluation from average carrier decay

    Science.gov (United States)

    Debuf, Didier

    2003-10-01

    An expression for the average decay is determined by solving the carrier continuity equations, which include terms for multiple-defect recombination. This expression is the decay measured by techniques such as the contactless photoconductance decay method, which determines the average or volume integrated decay. Implicit in the above is the requirement for good surface passivation such that only bulk properties are observed. A proposed experimental configuration is given to achieve the intended goal of an assessment of the type of defect in an n-type Czochralski-grown silicon semiconductor with an unusually high relative lifetime. The high lifetime is explained in terms of a ground excited state multiple-level defect system. Also, minority carrier trapping is investigated.

  15. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of adaptive transmission schemes: (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR) on the average spectral efficiency (ASE) are explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems under different turbulence regimes. Further to enhance the ASE we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regime. The coherent OWC system with ORA excels the other modulation schemes and could achieve ASE performance of 49.8 bits/s/Hz at the average transmitted optical power of 6 dBm under strong turbulence. By adding aperture averaging effect we could achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with Coherent OWC modulation as a favorable candidate for improving the ASE of the FSO communication system.

  16. A Predictive Likelihood Approach to Bayesian Averaging

    Directory of Open Access Journals (Sweden)

    Tomáš Jeřábek

    2015-01-01

    Full Text Available Multivariate time series forecasting is applied in a wide range of economic activities related to regional competitiveness and is the basis of almost all macroeconomic analysis. In this paper we combine multivariate density forecasts of GDP growth, inflation and real interest rates from four models: two types of Bayesian vector autoregression (BVAR) models, a New Keynesian dynamic stochastic general equilibrium (DSGE) model of a small open economy, and a DSGE-VAR model. The performance of the models is assessed using historical data covering the domestic economy and the foreign economy, which is represented by countries of the Eurozone. Because the forecast accuracies of the models differ, weighting schemes based on the predictive likelihood, the trace of the past MSE matrix, and model ranks are used to combine the models. The equal-weight scheme is used as a simple combination scheme. The results show that optimally combined densities are comparable to the best individual models.

  17. ANTINOMY OF THE MODERN AVERAGE PROFESSIONAL EDUCATION

    Directory of Open Access Journals (Sweden)

    A. A. Listvin

    2017-01-01

    of ways of their decision and options of the valid upgrade of the SPE system answering to the requirements of economy. The inefficiency of the concept of one-leveled SPE and its non-competitiveness against the background of development of an applied bachelor degree at the higher school is shown. It is offered to differentiate programs of basic level for training of skilled workers and the program of the increased level for training of specialists of an average link (technicians, technologists on the basis of basic level for forming of a single system of continuous professional training and effective functioning of regional systems of professional education. Such system will help to eliminate disproportions in a triad «a worker – a technician – an engineer», and will increase the quality of professional education. Furthermore, it is indicated the need of polyprofessional education wherein the integrated educational structures differing in degree of formation of split-level educational institutions on the basis of network interaction, convergence and integration are required. According to the author, in the regions it is necessary to develop two types of organizations and SPE organizations: territorial multi-profile colleges with flexible variable programs and the organizations realizing educational programs of applied qualifications in specific industries (metallurgical, chemical, construction, etc. according to the specifics of economy of territorial subjects.Practical significance. The results of the research can be useful to specialists of management of education, heads and pedagogical staff of SPE institutions, and also representatives of regional administrations and employers while organizing the multilevel network system of training of skilled workers and experts of middle ranking.

  18. Unpredictable visual changes cause temporal memory averaging.

    Science.gov (United States)

    Ohyama, Junji; Watanabe, Katsumi

    2007-09-01

    Various factors influence the perceived timing of visual events. Yet, little is known about the ways in which transient visual stimuli affect the estimation of the timing of other visual events. In the present study, we examined how a sudden color change of an object would influence the remembered timing of another transient event. In each trial, subjects saw a green or red disk travel in circular motion. A visual flash (white frame) occurred at random times during the motion sequence. The color of the disk changed either at random times (unpredictable condition), at a fixed time relative to the motion sequence (predictable condition), or it did not change (no-change condition). The subjects' temporal memory of the visual flash in the predictable condition was as veridical as that in the no-change condition. In the unpredictable condition, however, the flash was reported to occur closer to the timing of the color change than it actually did. Thus, an unpredictable visual change distorts the temporal memory of another visual event such that the remembered moment of the event is shifted toward the timing of the unpredictable visual change.

  19. Instantaneous, phase-averaged, and time-averaged pressure from particle image velocimetry

    Science.gov (United States)

    de Kat, Roeland

    2015-11-01

    Recent work on pressure determination using velocity data from particle image velocimetry (PIV) resulted in approaches that allow for instantaneous and volumetric pressure determination. However, applying these approaches is not always feasible (e.g. due to resolution, access, or other constraints) or desired. In those cases pressure determination approaches using phase-averaged or time-averaged velocity provide an alternative. To assess the performance of these different pressure determination approaches against one another, they are applied to a single data set and their results are compared with each other and with surface pressure measurements. For this assessment, the data set of a flow around a square cylinder (de Kat & van Oudheusden, 2012, Exp. Fluids 52:1089-1106) is used. RdK is supported by a Leverhulme Trust Early Career Fellowship.

  20. Site Averaged Neutron Soil Moisture: 1987-1989 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: Site averaged product of the neutron probe soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged...

  1. Site Averaged Gravimetric Soil Moisture: 1987-1989 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — Site averaged product of the gravimetric soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged for each...

  2. Site Averaged Gravimetric Soil Moisture: 1987-1989 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: Site averaged product of the gravimetric soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged...

  3. Human perceptions of colour rendition vary with average fidelity, average gamut, and gamut shape

    Energy Technology Data Exchange (ETDEWEB)

    Royer, MP [Pacific Northwest National Laboratory, Portland, OR, USA; Wilkerson, A. [Pacific Northwest National Laboratory, Portland, OR, USA; Wei, M. [The Hong Kong Polytechnic University, Hong Kong, China; Houser, K. [The Pennsylvania State University, University Park, PA, USA; Davis, R. [Pacific Northwest National Laboratory, Portland, OR, USA

    2016-08-10

    An experiment was conducted to evaluate how subjective impressions of color quality vary with changes in average fidelity, average gamut, and gamut shape (which considers the specific hues that are saturated or desaturated). Twenty-eight participants each evaluated 26 lighting conditions—created using four, seven-channel, tunable LED luminaires—in a 3.1 m by 3.7 m room filled with objects selected to cover a range of hue, saturation, and lightness. IES TM-30 fidelity index (Rf) values ranged from 64 to 93, IES TM-30 gamut index (Rg) values from 79 to 117, and IES TM-30 Rcs,h1 values (a proxy for gamut shape) from -19% to 26%. All lighting conditions delivered the same nominal illuminance and chromaticity. Participants were asked to rate each condition on eight-point semantic differential scales for saturated-dull, normal-shifted, and like-dislike. They were also asked one multiple choice question, classifying the condition as saturated, dull, normal, or shifted. The findings suggest that gamut shape is more important than average gamut for human preference, where reds play a more important role than other hues. Additionally, average fidelity alone is a poor predictor of human perceptions, although Rf was somewhat better than CIE Ra. The most preferred source had a CIE Ra value of 68, and 9 of the top 12 rated products had a CIE Ra value of 73 or less, which indicates that the commonly used criteria of CIE Ra ≥ 80 may be excluding a majority of preferred light sources.

  4. General and Local: Averaged k-Dependence Bayesian Classifiers

    Directory of Open Access Journals (Sweden)

    Limin Wang

    2015-06-01

    Full Text Available The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although the k-dependence Bayesian (KDB) classifier can construct at arbitrary points (values of k) along the attribute dependence spectrum, it cannot identify the changes of interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, the averaged k-dependence Bayesian (AKDB) classifier, averages the output of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California Irvine (UCI) showed that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree augmented naive Bayes (TAN), averaged one-dependence estimators (AODE), and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.

  5. Relativity

    CERN Document Server

    Einstein, Albert

    2013-01-01

    Time magazine's "Man of the Century", Albert Einstein is the founder of modern physics and his theory of relativity is the most important scientific idea of the modern era. In this short book, Einstein explains, using the minimum of mathematical terms, the basic ideas and principles of the theory that has shaped the world we live in today. Unsurpassed by any subsequent books on relativity, this remains the most popular and useful exposition of Einstein's immense contribution to human knowledge. With a new foreword by Derek Raine.

  6. Suicide attempts, platelet monoamine oxidase and the average evoked response

    International Nuclear Information System (INIS)

    Buchsbaum, M.S.; Haier, R.J.; Murphy, D.L.

    1977-01-01

    The relationship between suicides and suicide attempts and two biological measures, platelet monoamine oxidase levels (MAO) and average evoked response (AER) augmenting was examined in 79 off-medication psychiatric patients and in 68 college student volunteers chosen from the upper and lower deciles of MAO activity levels. In the patient sample, male individuals with low MAO and AER augmenting, a pattern previously associated with bipolar affective disorders, showed a significantly increased incidence of suicide attempts in comparison with either non-augmenting low MAO or high MAO patients. Within the normal volunteer group, all male low MAO probands with a family history of suicide or suicide attempts were AER augmenters themselves. Four completed suicides were found among relatives of low MAO probands whereas no high MAO proband had a relative who committed suicide. These findings suggest that the combination of low platelet MAO activity and AER augmenting may be associated with a possible genetic vulnerability to psychiatric disorders. (author)

  7. 20 CFR 226.62 - Computing average monthly compensation.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Computing average monthly compensation. 226... RETIREMENT ACT COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation is...

  8. Dynamic time warping-based averaging framework for functional near-infrared spectroscopy brain imaging studies

    Science.gov (United States)

    Zhu, Li; Najafizadeh, Laleh

    2017-06-01

    We investigate the problem related to the averaging procedure in functional near-infrared spectroscopy (fNIRS) brain imaging studies. Typically, to reduce noise and to empower the signal strength associated with task-induced activities, recorded signals (e.g., in response to repeated stimuli or from a group of individuals) are averaged through a point-by-point conventional averaging technique. However, due to the existence of variable latencies in recorded activities, the use of the conventional averaging technique can lead to inaccuracies and loss of information in the averaged signal, which may result in inaccurate conclusions about the functionality of the brain. To improve the averaging accuracy in the presence of variable latencies, we present an averaging framework that employs dynamic time warping (DTW) to account for the temporal variation in the alignment of fNIRS signals to be averaged. As a proof of concept, we focus on the problem of localizing task-induced active brain regions. The framework is extensively tested on experimental data (obtained from both block design and event-related design experiments) as well as on simulated data. In all cases, it is shown that the DTW-based averaging technique outperforms the conventional-based averaging technique in estimating the location of task-induced active regions in the brain, suggesting that such advanced averaging methods should be employed in fNIRS brain imaging studies.
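The core idea (align first, then average) can be sketched independently of the fNIRS specifics. Below is a minimal illustration, not the authors' implementation: a textbook O(nm) dynamic time warping alignment of two 1-D signals, followed by averaging along the warping path instead of point-by-point.

```python
def dtw_path(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping; returns the optimal
    alignment as a list of (i, j) index pairs."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i][j] = d + min(cost[i - 1][j - 1], cost[i - 1][j], cost[i][j - 1])
    # Backtrack from the end, preferring diagonal moves on ties.
    path, i, j = [], n, m
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j - 1), (i - 1, j), (i, j - 1)],
                   key=lambda s: cost[s[0]][s[1]])
    path.append((0, 0))
    return path[::-1]


def dtw_average(a, b):
    """Average two 1-D signals along their DTW alignment instead of
    point-by-point; the result lives on the warped time axis."""
    return [(a[i] + b[j]) / 2.0 for i, j in dtw_path(a, b)]


# Two copies of the same pulse, with the peak shifted by one sample
# (a stand-in for the variable latencies discussed above).
peak_a = [0.0, 0.0, 1.0, 0.0, 0.0]
peak_b = [0.0, 1.0, 0.0, 0.0, 0.0]
aligned = dtw_average(peak_a, peak_b)                     # peak height kept at 1.0
naive = [(x + y) / 2.0 for x, y in zip(peak_a, peak_b)]   # peak smeared to 0.5
```

For the shifted pulses, the path-averaged signal retains the full peak amplitude, whereas conventional point-by-point averaging halves it, which is exactly the loss of information the abstract describes.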

  9. Global robust image rotation from combined weighted averaging

    Science.gov (United States)

    Reich, Martin; Yang, Michael Ying; Heipke, Christian

    2017-05-01

In this paper we present a novel rotation averaging scheme as part of our global image orientation model. This model is based on homologous points in overlapping images and is robust against outliers. It is applicable to various kinds of image data and provides accurate initializations for a subsequent bundle adjustment. The computation of global rotations is a combined optimization scheme: First, rotations are estimated in a convex relaxed semidefinite program. Rotations are required to be in the convex hull of the rotation group SO(3), which in most cases leads to correct rotations. Second, the estimation is improved in an iterative least squares optimization in the Lie algebra of SO(3). In order to deal with outliers in the relative rotations, we developed a sequential graph optimization algorithm that is able to detect and eliminate incorrect rotations. From the beginning, we propagate covariance information which allows for a weighting in the least squares estimation. We evaluate our approach using both synthetic and real image datasets. Compared to recent state-of-the-art rotation averaging and global image orientation algorithms, our proposed scheme reaches a high degree of robustness and accuracy. Moreover, it is also applicable to large Internet datasets, which shows its efficiency.
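The SO(3) machinery above needs semidefinite programming and Lie-algebra steps, but the underlying "embed, average, project back to the group" idea can be shown in a toy 2-D setting, where the rotation group is just the unit circle. This is a generic stand-in, not the paper's algorithm, and all numbers are invented:

```python
import cmath

def average_rotation_2d(angles, weights=None):
    """Chordal mean of planar rotations: embed each rotation angle as a unit
    complex number, take the (weighted) Euclidean mean, then project back onto
    the rotation group by taking the phase."""
    if weights is None:
        weights = [1.0] * len(angles)
    z = sum(w * cmath.exp(1j * t) for t, w in zip(angles, weights))
    return cmath.phase(z)

# Noisy measurements of roughly the same rotation plus one outlier; a
# covariance-style weighting down-weights the outlier.
measurements = [1.50, 1.60, 1.55, -3.0]   # radians; the last one is an outlier
weights = [1.0, 1.0, 1.0, 0.05]
estimate = average_rotation_2d(measurements, weights)
```

With the small weight on the outlier the estimate stays close to 1.55 rad, while the unweighted mean is pulled noticeably away, which mirrors why the paper propagates covariance information into its least squares step.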

  10. Direct determination approach for the multifractal detrending moving average analysis

    Science.gov (United States)

    Xu, Hai-Chuan; Gu, Gao-Feng; Zhou, Wei-Xing

    2017-11-01

In the canonical framework, we propose an alternative approach for the multifractal analysis based on the detrending moving average method (MF-DMA). We define a canonical measure such that the multifractal mass exponent τ(q) is related to the partition function and the multifractal spectrum f(α) can be directly determined. The performances of the direct determination approach and the traditional approach of the MF-DMA are compared based on three synthetic multifractal and monofractal measures generated from the one-dimensional p-model, the two-dimensional p-model, and the fractional Brownian motions. We find that both approaches have comparable performances to unveil the fractal and multifractal nature. In other words, without loss of accuracy, the multifractal spectrum f(α) can be directly determined using the new approach with less computation cost. We also apply the new MF-DMA approach to the volatility time series of stock prices and confirm the presence of multifractality.

  11. Voter dynamics on an adaptive network with finite average connectivity

    Science.gov (United States)

    Mukhopadhyay, Abhishek; Schmittmann, Beate

    2009-03-01

We study a simple model for voter dynamics in a two-party system. The opinion formation process is implemented in a random network of agents in which interactions are not restricted by geographical distance. In addition, we incorporate the rapidly changing nature of the interpersonal relations in the model. At each time step, agents can update their relationships, so that there is no history dependence in the model. This update is determined by their own opinion, and by their preference to make connections with individuals sharing the same opinion and with opponents. Using simulations and analytic arguments, we determine the final steady states and the relaxation into these states for different system sizes. In contrast to earlier studies, the average connectivity ("degree") of each agent is constant here, independent of the system size. This has significant consequences for the long-time behavior of the model.

  12. The flattening of the average potential in models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.

    1993-01-01

    The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)

  13. Averaging in SU(2) open quantum random walk

    Science.gov (United States)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  14. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  15. Stochastic Simulation of Hourly Average Wind Speed in Umudike ...

    African Journals Online (AJOL)

    Ten years of hourly average wind speed data were used to build a seasonal autoregressive integrated moving average (SARIMA) model. The model was used to simulate hourly average wind speed and recommend possible uses at Umudike, South eastern Nigeria. Results showed that the simulated wind behaviour was ...

  16. Average Weekly Alcohol Consumption: Drinking Percentiles for American College Students.

    Science.gov (United States)

    Meilman, Philip W.; And Others

    1997-01-01

    Reports a study that examined the average number of alcoholic drinks that college students (N=44,433) consumed per week. Surveys indicated that most students drank little or no alcohol on an average weekly basis. Only about 10% of the students reported consuming an average of 15 drinks or more per week. (SM)

  17. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    Ampadu Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT

  18. 40 CFR 600.510-86 - Calculation of average fuel economy.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of average fuel economy...) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy... Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy § 600...

  19. 40 CFR 600.510-93 - Calculation of average fuel economy.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of average fuel economy...) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy... Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy § 600...

  20. 49 CFR 537.9 - Determination of fuel economy values and average fuel economy.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 6 2010-10-01 2010-10-01 false Determination of fuel economy values and average fuel economy. 537.9 Section 537.9 Transportation Other Regulations Relating to Transportation... ECONOMY REPORTS § 537.9 Determination of fuel economy values and average fuel economy. (a) Vehicle...

  1. 40 CFR 600.510-08 - Calculation of average fuel economy.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of average fuel economy...) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy... Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy § 600...

  2. Metamemory, Memory Performance, and Causal Attributions in Gifted and Average Children.

    Science.gov (United States)

    Kurtz, Beth E.; Weinert, Franz E.

    1989-01-01

    Tested high- and average-achieving German fifth- and seventh-grade students' metacognitive knowledge, attributional beliefs, and performance on a sort recall test. Found ability-related differences in all three areas. Gifted children tended to attribute academic success to high ability while average children attributed success to effort. (SAK)

  3. 47 CFR 64.1801 - Geographic rate averaging and rate integration.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Geographic rate averaging and rate integration. 64.1801 Section 64.1801 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) MISCELLANEOUS RULES RELATING TO COMMON CARRIERS Geographic Rate Averaging and...

  4. “Simpson’s paradox” as a manifestation of the properties of weighted average (part 2)

    OpenAIRE

    Zhekov, Encho

    2012-01-01

The article proves that the so-called “Simpson's paradox” is a special case of the manifestation of the properties of the weighted average. It always comes down to comparing two weighted averages, where the average of the larger variables is less than that of the smaller ones. The article demonstrates one method for analyzing the relative change of magnitudes of the type S = Σ_{i=1}^{k} x_i y_i, which answers the question: what is the reason for the weighted average of a few variables with higher values to ...

  5. “Simpson’s paradox” as a manifestation of the properties of weighted average (part 1)

    OpenAIRE

    Zhekov, Encho

    2012-01-01

The article proves that the so-called “Simpson's paradox” is a special case of the manifestation of the properties of the weighted average. It always comes down to comparing two weighted averages, where the average of the larger variables is less than that of the smaller ones. The article demonstrates one method for analyzing the relative change of magnitudes of the type S = Σ_{i=1}^{k} x_i y_i, which answers the question: what is the reason for the weighted average of a few variables with higher values to be...
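The effect both parts describe is easy to reproduce numerically. The sketch below uses the classic kidney-stone treatment figures often quoted for Simpson's paradox (not the article's own example): treatment A has the higher success rate in every subgroup, yet the lower weighted average overall, because its weight is concentrated on the harder group.

```python
def weighted_average(values, weights):
    """Sum of w_i * x_i divided by the sum of w_i."""
    return sum(w * x for x, w in zip(values, weights)) / sum(weights)

# Per-group success rates (x) and group sizes (w) for two treatments;
# groups are small-stone and large-stone cases.
rates_a, sizes_a = [0.93, 0.73], [87, 263]
rates_b, sizes_b = [0.87, 0.69], [270, 80]

avg_a = weighted_average(rates_a, sizes_a)   # about 0.78
avg_b = weighted_average(rates_b, sizes_b)   # about 0.83
# A wins in each group (0.93 > 0.87 and 0.73 > 0.69), yet avg_a < avg_b:
# the "paradox" is purely a property of the weights.
```

This is exactly the comparison of two weighted averages that the articles analyze: the ranking flips because the two averages put very different weights on the same subgroups.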

  6. Averages of B-Hadron Properties at the End of 2005

    Energy Technology Data Exchange (ETDEWEB)

    Barberio, E.; /Melbourne U.; Bizjak, I.; /Novosibirsk, IYF; Blyth, S.; /CERN; Cavoto, G.; /Rome U.; Chang, P.; /Taiwan, Natl. Taiwan U.; Dingfelder, J.; /SLAC; Eidelman, S.; /Novosibirsk, IYF; Gershon, T.; /WARWICK U.; Godang, R.; /Mississippi U.; Harr, R.; /Wayne State U.; Hocker, A; /CERN; Iijima, T.; /Nagoya U.; Kowalewski, R.; /Victoria U.; Lehner, F.; /Fermilab; Limosani, A.; /Novosibirsk, IYF; Lin, C.-J.; /Fermilab; Long, O.; /UC, Riverside; Luth, V.; /SLAC; Morii, M.; /Harvard U.; Prell, S.; /Iowa State U.; Schneider, O.; /LPHE,

    2006-09-27

This article reports world averages for measurements on b-hadron properties obtained by the Heavy Flavor Averaging Group (HFAG) using the available results as of the end of 2005. In the averaging, the input parameters used in the various analyses are adjusted (rescaled) to common values, and all known correlations are taken into account. The averages include lifetimes, neutral meson mixing parameters, parameters of semileptonic decays, branching fractions of B decays to final states with open charm, charmonium and no charm, and measurements related to CP asymmetries.

  7. Potential of high-average-power solid state lasers

    International Nuclear Information System (INIS)

    Emmett, J.L.; Krupke, W.F.; Sooy, W.R.

    1984-01-01

    We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels

  8. Potential of high-average-power solid state lasers

    Energy Technology Data Exchange (ETDEWEB)

    Emmett, J.L.; Krupke, W.F.; Sooy, W.R.

    1984-09-25

    We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels.

  9. Consistency of the structure of Legendre transform in thermodynamics with the Kolmogorov–Nagumo average

    Energy Technology Data Exchange (ETDEWEB)

    Scarfone, A.M., E-mail: antoniomaria.scarfone@cnr.it [Istituto dei Sistemi Complessi (ISC-CNR) c/o Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino (Italy); Matsuzoe, H. [Department of Computer Science and Engineering, Nagoya Institute of Technology, Nagoya 466-8555 (Japan); Wada, T. [Department of Electrical and Electronic Engineering, Ibaraki University, Nakanarusawacho, Hitachi 316-8511 (Japan)

    2016-09-07

    We show the robustness of the structure of Legendre transform in thermodynamics against the replacement of the standard linear average with the Kolmogorov–Nagumo nonlinear average to evaluate the expectation values of the macroscopic physical observables. The consequence of this statement is twofold: 1) the relationships between the expectation values and the corresponding Lagrange multipliers still hold in the present formalism; 2) the universality of the Gibbs equation as well as other thermodynamic relations are unaffected by the structure of the average used in the theory. - Highlights: • The robustness of the Legendre structure has been shown within the KN average. • The relationships between the expectation values and the Lagrange multipliers still hold in the present formalism. • The universality of the Gibbs equation and other thermodynamic relations are unaffected by the structure of the average used.

  10. Trends in the association between average income, poverty and income inequality and life expectancy in Spain.

    Science.gov (United States)

    Regidor, Enrique; Calle, M Elisa; Navarro, Pedro; Domínguez, Vicente

    2003-03-01

In this paper, we study the relation between life expectancy and both average income and measures of income inequality in 1980 and 1990, using the 17 Spanish regions as units of analysis. Average income was measured as average total income per household. The indicators of income inequality used were three measures of relative poverty (the percentage of households with total income less than 25%, 40% and 50% of the average total household income), the Gini index, and the Atkinson indices with parameters α = 1, 1.5 and 2. Pearson and partial correlation coefficients were used to evaluate the association between average income and measures of income inequality and life expectancy. None of the correlation coefficients for the association between life expectancy and average household income was significant for men. The association between life expectancy and average household income in women, adjusted for any of the measures of income inequality, was significant in 1980, although this association decreased or disappeared in 1990 after adjusting for measures of poverty. In both men and women, the partial correlation coefficients between life expectancy and the measures of relative income adjusted for average income were positive in 1980 and negative in 1990, although none of them was significant. The results with regard to women confirm the hypothesis that life expectancy in the developed countries has become more dissociated from average income level and more associated with income inequality. The absence of a relation in men in 1990 may be due to the large impact of premature mortality from AIDS in regions with the highest average total income per household and/or smallest income inequality.

  11. Subdiffusion in time-averaged, confined random walks.

    Science.gov (United States)

    Neusius, Thomas; Sokolov, Igor M; Smith, Jeremy C

    2009-07-01

Certain techniques characterizing diffusive processes, such as single-particle tracking or molecular dynamics simulation, provide time averages rather than ensemble averages. Whereas the ensemble-averaged mean-squared displacement (MSD) of an unbounded continuous time random walk (CTRW) with a broad distribution of waiting times exhibits subdiffusion, the time-averaged MSD, δ², does not. We demonstrate that, in contrast to the unbounded CTRW, in which δ² is linear in the lag time Δ, the time-averaged MSD of the CTRW of a walker confined to a finite volume is sublinear in Δ, i.e., for long lag times δ² ~ Δ^(1-α). The present results permit the application of CTRW to interpret time-averaged experimental quantities.
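The time-averaged MSD referred to above is straightforward to compute from a single trajectory. A minimal sketch follows; for simplicity it uses an ordinary unconfined random walk rather than a heavy-tailed CTRW, so here the time average simply grows linearly with the lag:

```python
import random

def time_averaged_msd(x, lag):
    """Time-averaged mean-squared displacement of one trajectory:
    delta^2(lag) = <(x(t + lag) - x(t))^2>, averaged over start times t."""
    if not 0 < lag < len(x):
        raise ValueError("lag must satisfy 0 < lag < len(x)")
    n = len(x) - lag
    return sum((x[t + lag] - x[t]) ** 2 for t in range(n)) / n

# Ordinary +/-1 random walk: no broad waiting-time distribution,
# so delta^2(lag) is approximately equal to the lag itself.
random.seed(0)
walk = [0]
for _ in range(20000):
    walk.append(walk[-1] + random.choice((-1, 1)))

msd10 = time_averaged_msd(walk, 10)     # close to 10
msd100 = time_averaged_msd(walk, 100)   # close to 100
```

For the confined CTRW discussed in the abstract, the same estimator applied to a single long trajectory would instead bend over to the sublinear δ² ~ Δ^(1-α) regime at long lags.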

  12. Relationships between feeding behavior and average daily gain in cattle

    Directory of Open Access Journals (Sweden)

    Bruno Fagundes Cunha Lage

    2013-12-01

Several studies have reported relationships between eating behavior and performance in feedlot cattle. The evaluation of behavior traits demands a high degree of work and trained manpower; therefore, in recent years an automated feed intake measurement system (GrowSafe System®), which identifies and records individual feeding patterns, has been used. The aim of this study was to evaluate the relationship between feeding behavior traits and average daily gain (ADG) in Nellore calves undergoing a feed efficiency test. Data from 85 Nellore males were recorded during the feed efficiency test performed in 2012 at Centro APTA Bovinos de Corte, Instituto de Zootecnia, São Paulo State. The behavioral traits analyzed were: time at feeder (TF), head down duration (HD), representing the time when the animal is actually eating, frequency of visits (FV), and feed rate (FR), calculated as the amount of dry matter (DM) consumed per unit of time at the feeder (g.min-1). The ADG was calculated by linear regression of individual weights on days in test. ADG classes were obtained considering the average ADG and standard deviation (SD): high ADG (> mean + 1.0 SD), medium ADG (± 1.0 SD from the mean) and low ADG (< mean - 1.0 SD). No difference (P > 0.05) was found among ADG classes for FV, indicating that these traits are not related to each other. These results show that ADG is related to agility in eating and not to the time spent at the bunk or to the number of visits in a range of 24 hours.

  13. Measurement of the average lifetime of hadrons containing bottom quarks

    International Nuclear Information System (INIS)

    Klem, D.E.

    1986-06-01

This thesis reports a measurement of the average lifetime of hadrons containing bottom quarks. It is based on data taken with the DELCO detector at the PEP e+e- storage ring at a center of mass energy of 29 GeV. The decays of hadrons containing bottom quarks are tagged in hadronic events by the presence of electrons with a large component of momentum transverse to the event axis. Such electrons are identified in the DELCO detector by an atmospheric pressure Cherenkov counter assisted by a lead/scintillator electromagnetic shower counter. The lifetime measured is 1.17 psec, consistent with previous measurements. This measurement, in conjunction with a limit on the non-charm branching ratio in b-decay obtained by other experiments, can be used to constrain the magnitude of the V_cb element of the Kobayashi-Maskawa matrix to the range 0.042 (+0.005/-0.004 (stat.), +0.004/-0.002 (sys.)), where the errors reflect the uncertainty on τ_b only and not the uncertainties in the calculations which relate the b lifetime and the element of the Kobayashi-Maskawa matrix.

  14. Enhancing Flood Prediction Reliability Using Bayesian Model Averaging

    Science.gov (United States)

    Liu, Z.; Merwade, V.

    2017-12-01

Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to reliance on prediction from one model simulation, using an ensemble of predictions that considers uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri by combining multi-model simulations to get reliable deterministic water stage and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated for another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction has better performance than global-based BMA (BMA_G) prediction, which is superior to the ensemble mean prediction. Additionally, high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.
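Stripped of the hydraulic modeling, the BMA combination step itself is a posterior-weighted mean of the ensemble members. The sketch below assumes the per-model weights are already given (real BMA estimates them from training observations, typically via EM); all model names and numbers are hypothetical:

```python
def bma_weights(scores):
    """Normalize per-model skill/likelihood scores into BMA weights.
    (In real BMA these are estimated against observations; here the
    scores are simply assumed.)"""
    total = sum(scores)
    return [s / total for s in scores]

def bma_mean(predictions, weights):
    """Deterministic BMA prediction: the weighted mean of the ensemble
    members at each time step or location."""
    return [sum(w * member[i] for w, member in zip(weights, predictions))
            for i in range(len(predictions[0]))]

# Three hypothetical water-stage simulations at four time steps.
ensemble = [
    [2.0, 2.5, 3.0, 2.8],
    [2.2, 2.7, 3.4, 3.0],
    [1.8, 2.3, 2.9, 2.6],
]
weights = bma_weights([0.5, 0.3, 0.2])
stage = bma_mean(ensemble, weights)
```

The BMA mean always lies inside the envelope of the ensemble members, which is why, as the abstract notes, it is robust but need not beat the single best member.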

  15. The dynamics of multimodal integration: The averaging diffusion model.

    Science.gov (United States)

Turner, Brandon M; Gao, Juan; Koenig, Scott; Palfy, Dylan; McClelland, James L

    2017-12-01

    We combine extant theories of evidence accumulation and multi-modal integration to develop an integrated framework for modeling multimodal integration as a process that unfolds in real time. Many studies have formulated sensory processing as a dynamic process where noisy samples of evidence are accumulated until a decision is made. However, these studies are often limited to a single sensory modality. Studies of multimodal stimulus integration have focused on how best to combine different sources of information to elicit a judgment. These studies are often limited to a single time point, typically after the integration process has occurred. We address these limitations by combining the two approaches. Experimentally, we present data that allow us to study the time course of evidence accumulation within each of the visual and auditory domains as well as in a bimodal condition. Theoretically, we develop a new Averaging Diffusion Model in which the decision variable is the mean rather than the sum of evidence samples and use it as a base for comparing three alternative models of multimodal integration, allowing us to assess the optimality of this integration. The outcome reveals rich individual differences in multimodal integration: while some subjects' data are consistent with adaptive optimal integration, reweighting sources of evidence as their relative reliability changes during evidence integration, others exhibit patterns inconsistent with optimality.

  16. A simple consensus algorithm for distributed averaging in random ...

    Indian Academy of Sciences (India)

    http://www.ias.ac.in/article/fulltext/pram/079/03/0493-0499. Keywords. Sensor networks; random geographical networks; distributed averaging; consensus algorithms. Abstract. Random geographical networks are realistic models for wireless sensor networks which are used in many applications. Achieving average ...

  17. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    ... is rounded down to $502. (e) “Deemed” average monthly wage for certain deceased veterans of World War II. Certain deceased veterans of World War II are “deemed” to have an average monthly wage of $160... years; or (iii) 1974, we count the years beginning with 1951 and ending with the year before you reached...

  18. Averaged EMG profiles in jogging and running at different speeds

    NARCIS (Netherlands)

    Gazendam, Marnix G. J.; Hof, At L.

    EMGs were collected from 14 muscles with surface electrodes in 10 subjects walking 1.25-2.25 m s(-1) and running 1.25-4.5 m s(-1). The EMGs were rectified, interpolated in 100% of the stride, and averaged over all subjects to give an average profile. In running, these profiles could be decomposed

  19. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  20. Scalable Robust Principal Component Analysis Using Grassmann Averages

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi

    2016-01-01

    Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video...

  1. Exact Membership Functions for the Fuzzy Weighted Average

    NARCIS (Netherlands)

    van den Broek, P.M.; Noppen, J.A.R.

    2011-01-01

    The problem of computing the fuzzy weighted average, where both attributes and weights are fuzzy numbers, is well studied in the literature. Generally, the approach is to apply Zadeh’s extension principle to compute α-cuts of the fuzzy weighted average from the α-cuts of the attributes and weights

  2. The Average Covering Tree Value for Directed Graph Games

    NARCIS (Netherlands)

    Khmelnitskaya, A.; Selcuk, O.; Talman, A.J.J.

    2012-01-01

    Abstract: We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all

  3. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
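The note's observation can be sketched as follows: an intercept-only regression recovers the arithmetic mean, and running the same regression on log- or reciprocal-transformed data recovers the geometric and harmonic means. This is a generic illustration of the idea, not the authors' worked examples:

```python
import math

def intercept_only_ols(y):
    """OLS of y on a constant regressor only; the fitted intercept is
    the arithmetic mean of y."""
    return sum(y) / len(y)

def regression_means(y):
    """Recover three classic averages from the same intercept-only
    regression applied to transformed data."""
    arithmetic = intercept_only_ols(y)
    geometric = math.exp(intercept_only_ols([math.log(v) for v in y]))
    harmonic = 1.0 / intercept_only_ols([1.0 / v for v in y])
    return arithmetic, geometric, harmonic

am, gm, hm = regression_means([1.0, 2.0, 4.0])
# For positive data the familiar ordering harmonic <= geometric <= arithmetic holds.
```

Weighted averages fit the same mold by using weighted least squares with the weights as observation weights.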

  4. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Science.gov (United States)

    2010-04-01

    ... SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER ACT § 301.7 Average System Cost methodology functionalization. (a) Functionalization of each Account... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost...

  5. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are k_n-dependent with k_n growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure…

  6. Super convergence of ergodic averages for quasiperiodic orbits

    Science.gov (United States)

    Das, Suddhasattwa; Yorke, James A.

    2018-02-01

The Birkhoff ergodic theorem asserts that, for an ergodic dynamical system, time averages of a function f evaluated along a trajectory of length N converge to the space average (the integral of f) as N → ∞. But that convergence can be slow. Instead of uniform averages that assign equal weights to points along the trajectory, we use an average with a non-uniform distribution of weights, weighting the early and late points of the trajectory much less than those near the midpoint N/2. We show that in quasiperiodic dynamical systems, our weighted averages converge far faster provided f is sufficiently differentiable. This result can be applied to obtain efficient numerical computation of rotation numbers, invariant densities and conjugacies of quasiperiodic systems.
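A minimal sketch of the weighted average for a rigid rotation, using the exponential bump weights w(t) = exp(−1/(t(1−t))) that Das and Yorke employ (the orbit, test function and N are illustrative choices, not the paper's examples):

```python
import math

def weighted_birkhoff(f, x0, omega, N):
    """Weighted Birkhoff average of f along the orbit x_{n+1} = x_n + omega
    (mod 1), with bump weights w(n/N) = exp(-1/(t*(1-t))) that vanish to
    all orders at the ends of the trajectory."""
    num = den = 0.0
    for n in range(1, N):          # skip t = 0 and t = 1 where w = 0
        t = n / N
        w = math.exp(-1.0 / (t * (1.0 - t)))
        num += w * f((x0 + n * omega) % 1.0)
        den += w
    return num / den

f = lambda x: math.cos(2 * math.pi * x)   # space average over the circle is 0
omega = (math.sqrt(5) - 1) / 2            # golden-mean rotation number
uniform = sum(f((0.1 + n * omega) % 1.0) for n in range(1000)) / 1000
weighted = weighted_birkhoff(f, 0.1, omega, 1000)
print(abs(uniform), abs(weighted))        # weighted error is far smaller
```

For smooth f and a Diophantine rotation number, the weighted error at N = 1000 is already near machine precision, while the uniform average converges only polynomially.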

  7. Some series of intuitionistic fuzzy interactive averaging aggregation operators.

    Science.gov (United States)

    Garg, Harish

    2016-01-01

In this paper, some series of new intuitionistic fuzzy averaging aggregation operators are presented under the intuitionistic fuzzy set environment. For this, some shortcomings of the existing operators are first highlighted, and then new operational laws, which take into account the hesitation degree between the membership functions, are proposed to overcome them. Based on these new operational laws, new averaging aggregation operators, namely the intuitionistic fuzzy Hamacher interactive weighted averaging, ordered weighted averaging and hybrid weighted averaging operators, labeled IFHIWA, IFHIOWA and IFHIHWA respectively, are proposed. Furthermore, some desirable properties such as idempotency, boundedness and homogeneity are studied. Finally, a multi-criteria decision making method based on the proposed operators is presented for selecting the best alternative. A comparative analysis between the proposed operators and the existing operators is carried out in detail.
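For context, the classical (non-interactive) IFWA operator of Xu, whose shortcomings motivate the interactive operators proposed here, aggregates intuitionistic fuzzy numbers (μᵢ, νᵢ) as shown below. This is a sketch of the baseline operator only, not of the paper's IFHIWA family.

```python
def ifwa(pairs, weights):
    """Classical intuitionistic fuzzy weighted averaging (IFWA) operator:
      mu = 1 - prod((1 - mu_i)**w_i),  nu = prod(nu_i**w_i).
    Note the known defect: if any nu_i = 0, the aggregated nu is 0
    regardless of the other arguments, which interactive operators
    are designed to avoid."""
    mu_prod = 1.0
    nu_prod = 1.0
    for (m, v), w in zip(pairs, weights):
        mu_prod *= (1.0 - m) ** w
        nu_prod *= v ** w
    return 1.0 - mu_prod, nu_prod

# Aggregate two intuitionistic fuzzy numbers with equal weights:
agg = ifwa([(0.6, 0.3), (0.8, 0.1)], [0.5, 0.5])
print(agg)
```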

  8. Large-signal analysis of DC motor drive system using state-space averaging technique

    International Nuclear Information System (INIS)

    Bekir Yildiz, Ali

    2008-01-01

The analysis of a separately excited DC motor driven by a DC-DC converter is realized using the state-space averaging technique. Firstly, a general and unified large-signal averaged circuit model for DC-DC converters is given. The method converts power electronic systems, which are periodically time-variant because of their switching operation, into unified, time-independent systems. Using the averaged circuit model makes it possible to combine the different converter topologies. Thus, all analysis and design processes for the DC motor can be easily carried out using the unified averaged model, which is valid over the whole switching period. Some large-signal variations such as the speed and current of the DC motor, the steady-state analysis, and the large-signal and small-signal transfer functions are easily obtained from the averaged circuit model.
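The core of state-space averaging is to replace the switched system x′ = A₁x + B₁u (switch on) and x′ = A₂x + B₂u (switch off) by the duty-cycle-weighted model x′ = (dA₁ + (1−d)A₂)x + (dB₁ + (1−d)B₂)u. The sketch below applies this to a generic buck converter with an R-C output stage; the parameter values are illustrative and this is not the motor-drive model of the paper.

```python
import numpy as np

# Buck converter states x = [inductor current, capacitor voltage].
L_ind, C, R = 1e-3, 100e-6, 5.0           # illustrative component values
A1 = np.array([[0.0, -1.0 / L_ind],
               [1.0 / C, -1.0 / (R * C)]])
A2 = A1                                    # for a buck, A is the same in both states
B1 = np.array([1.0 / L_ind, 0.0])          # input voltage applied when switch is on
B2 = np.array([0.0, 0.0])

def averaged_step(x, d, vin, dt):
    """One Euler step of the duty-cycle-averaged model."""
    A = d * A1 + (1.0 - d) * A2
    B = d * B1 + (1.0 - d) * B2
    return x + dt * (A @ x + B * vin)

# The averaged model's steady-state output voltage is d * Vin:
x = np.zeros(2)
d, vin, dt = 0.5, 12.0, 1e-6
for _ in range(20000):
    x = averaged_step(x, d, vin, dt)
print(x[1])   # ~6.0 V
```

Because the averaged model is time-independent, steady-state and small-signal analyses can be done on it directly instead of on the switched circuit.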

  9. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log₂ k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and kⁿ pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
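The entropy lower bound and its tightness for prefix codes can be checked in a few lines: the expected codeword length of a Huffman code (the optimal prefix code, hence the minimum average depth of the corresponding binary decision tree) equals the entropy exactly for dyadic distributions and exceeds it by less than one in general. A sketch:

```python
import heapq
import math

def huffman_avg_depth(probs):
    """Expected codeword length of a Huffman code for the given
    probabilities; this is the minimum average depth of a binary
    decision tree distinguishing the outcomes. It equals the sum of
    the merged probabilities over all merges."""
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        total += a + b
        heapq.heappush(heap, a + b)
    return total

probs = [0.5, 0.25, 0.125, 0.125]
H = -sum(p * math.log2(p) for p in probs)   # entropy lower bound
depth = huffman_avg_depth(probs)
print(H, depth)   # 1.75 1.75 -- dyadic distribution, so the bound is tight
```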

  10. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Science.gov (United States)

    2010-07-01

    ... volume of gasoline produced or imported in batch i. Si=The sulfur content of batch i determined under § 80.330. n=The number of batches of gasoline produced or imported during the averaging period. i=Individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...

  11. Average action for the N-component φ4 theory

    International Nuclear Information System (INIS)

    Ringwald, A.; Wetterich, C.

    1990-01-01

The average action is a continuum version of the block spin action in lattice field theories. We compute the one-loop approximation to the average potential for the N-component φ⁴ theory in the spontaneously broken phase. For a finite (linear) block size ∝ k̄⁻¹ this potential is real and nonconvex. For small φ the average potential is quadratic, U_k = −½ k̄² φ², and independent of the original mass parameter and quartic coupling constant. It approaches the convex effective potential as k̄ vanishes. (orig.)

  12. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family…

  13. Time average vibration fringe analysis using Hilbert transformation

    International Nuclear Information System (INIS)

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-01-01

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.

  14. Salecker-Wigner-Peres clock and average tunneling times

    Energy Technology Data Exchange (ETDEWEB)

    Lunardi, Jose T., E-mail: jttlunardi@uepg.b [Departamento de Matematica e Estatistica, Universidade Estadual de Ponta Grossa, Av. General Carlos Cavalcanti, 4748. Cep 84030-000, Ponta Grossa, PR (Brazil); Manzoni, Luiz A., E-mail: manzoni@cord.ed [Department of Physics, Concordia College, 901 8th St. S., Moorhead, MN 56562 (United States); Nystrom, Andrew T., E-mail: atnystro@cord.ed [Department of Physics, Concordia College, 901 8th St. S., Moorhead, MN 56562 (United States)

    2011-01-17

    The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated to the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).

  15. Averaging underwater noise levels for environmental assessment of shipping.

    Science.gov (United States)

    Merchant, Nathan D; Blondel, Philippe; Dakin, D Tom; Dorocicz, John

    2012-10-01

Rising underwater noise levels from shipping have raised concerns regarding chronic impacts to marine fauna. However, there is a lack of consensus over how to average local shipping noise levels for environmental impact assessment. This paper addresses this issue using 110 days of continuous data recorded in the Strait of Georgia, Canada. Probability densities of ~10⁷ 1-s samples in selected 1/3-octave bands were approximately stationary across one-month subsamples. Median and mode levels varied with averaging time. Mean sound pressure levels averaged in linear space, though susceptible to strong bias from outliers, are most relevant to cumulative impact assessment metrics.
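"Averaging in linear space" means converting the decibel samples back to squared-pressure (linear) units, taking the mean, and converting the result back to decibels; averaging the dB values directly underestimates the mean level. A minimal sketch with made-up sample values:

```python
import math

def mean_spl_db(levels_db):
    """Mean sound pressure level: average the squared pressures in
    linear space, then convert back to decibels."""
    linear = [10.0 ** (l / 10.0) for l in levels_db]
    return 10.0 * math.log10(sum(linear) / len(linear))

samples = [90.0, 100.0, 110.0]            # illustrative 1-s SPL samples, dB
lin_mean = mean_spl_db(samples)
print(lin_mean)                           # ~105.7 dB
print(sum(samples) / len(samples))        # 100.0 dB: naive dB mean underestimates
```

The gap between the two numbers is exactly the "strong bias from outliers" issue the abstract mentions: loud samples dominate the linear mean.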

  16. The asymptotic average-shadowing property and transitivity for flows

    International Nuclear Information System (INIS)

    Gu Rongbao

    2009-01-01

    The asymptotic average-shadowing property is introduced for flows and the relationships between this property and transitivity for flows are investigated. It is shown that a flow on a compact metric space is chain transitive if it has positively (or negatively) asymptotic average-shadowing property and a positively (resp. negatively) Lyapunov stable flow is positively (resp. negatively) topologically transitive provided it has positively (resp. negatively) asymptotic average-shadowing property. Furthermore, two conditions for which a flow is a minimal flow are obtained.

  17. Preference for Averageness in Faces Does Not Generalize to Non-Human Primates

    Directory of Open Access Journals (Sweden)

    Olivia B. Tomeo

    2017-07-01

Facial attractiveness is a long-standing topic of active study in both neuroscience and social science, motivated by its positive social consequences. Over the past few decades, it has been established that averageness is a major factor influencing judgments of facial attractiveness in humans. Non-human primates share similar social behaviors as well as neural mechanisms related to face processing with humans. However, it is unknown whether monkeys, like humans, also find particular faces attractive and, if so, which kind of facial traits they prefer. To address these questions, we investigated the effect of averageness on preferences for faces in monkeys. We tested three adult male rhesus macaques using a visual paired comparison (VPC) task, in which they viewed pairs of faces (both individual faces, or one individual face and one average face); viewing time was used as a measure of preference. We did find that monkeys looked longer at certain individual faces than others. However, unlike humans, monkeys did not prefer the average face over individual faces. In fact, the more the individual face differed from the average face, the longer the monkeys looked at it, indicating that the average face likely plays a role in face recognition rather than in judgments of facial attractiveness: in models of face recognition, the average face operates as the norm against which individual faces are compared and recognized. Taken together, our study suggests that the preference for averageness in faces does not generalize to non-human primates.

  18. Analysis of the average daily radon variations in the soil air

    International Nuclear Information System (INIS)

    Holy, K.; Matos, M.; Boehm, R.; Stanys, T.; Polaskova, A.; Hola, O.

    1998-01-01

In this contribution, the search for a relation between the daily variations of the radon concentration and the regular daily oscillations of atmospheric pressure is presented. The deviation of the radon activity concentration in soil air from its average daily value reaches only a few percent. For the dry summer months, the average daily course of the radon activity concentration can be described by the obtained equation. The analysis of the average daily courses could give information concerning the depth of the gas-permeable soil layer. This soil parameter is determined only with difficulty by other methods.

  19. Passive quantum error correction of linear optics networks through error averaging

    Science.gov (United States)

    Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.

    2018-02-01

    We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof of principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally we discuss some of the potential uses of this scheme.

  20. Classical properties and semiclassical calculations in a spherical nuclear average potential

    International Nuclear Information System (INIS)

    Carbonell, J.; Brut, F.; Arvieu, R.; Touchard, J.

    1984-03-01

We study the relation between the classical properties of an average nuclear potential and its spectral properties. We have drawn the energy-action surface of this potential and related its properties to the spectral ones in the framework of the EBK semiclassical method. We also describe a method allowing us to obtain the evolution of the spectrum with the mass number.

  1. on the performance of Autoregressive Moving Average Polynomial

    African Journals Online (AJOL)

    Timothy Ademakinwa

Using a numerical example, DL, PDL, ARPDL and ARMAPDL models were fitted. The Autoregressive Moving Average Polynomial Distributed Lag (ARMAPDL) model performed better than the other models. Keywords: Distributed Lag Model, Selection Criterion, Parameter Estimation, Residual Variance.

  2. Safety Impact of Average Speed Control in the UK

    DEFF Research Database (Denmark)

    Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert

    2016-01-01

There is considerable safety potential in ensuring that motorists respect the speed limits. High speeds increase the number and severity of accidents. Technological development over the last 20 years has enabled the development of systems that allow automatic speed control. The first generation … or section control. This article discusses the different methods for automatic speed control and presents an evaluation of the safety effects of average speed control, documented through changes in speed levels and accidents before and after the implementation of average speed control at selected sites in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control.

  3. High Average Power Fiber Laser for Satellite Communications, Phase I

    Data.gov (United States)

National Aeronautics and Space Administration — Very high average power lasers with high electrical-to-optical (E-O) efficiency, which also support pulse position modulation (PPM) formats in the MHz-data rate...

  4. Medicare Part B Drug Average Sales Pricing Files

    Data.gov (United States)

U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturer's ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...

  5. Averaging of diffusing contaminant concentrations in atmosphere surface layer

    International Nuclear Information System (INIS)

    Ivanov, E.A.; Ramzina, T.V.

    1985-01-01

Calculations permitting the averaging of concentration fields of diffusing radioactive contaminant coming from the NPP exhaust stack in the atmospheric surface layer are given. Formulae for calculating the contaminant concentration field are presented; the field depends on the average wind direction (Θ) over time T and on the stability of this direction (σ_tgΘ or σ_Θ). The probability of wind direction deviation from the average value over time T is satisfactorily described by the Gauss law. As instability in the atmosphere increases, σ increases; as wind velocity increases, σ decreases for all types of temperature gradients. The nonuniform dependence of σ on the averaging time T is emphasized, which requires a careful choice of the σ_tgΘ and σ_Θ parameters in calculations.

  6. Time averaging, ageing and delay analysis of financial time series

    Science.gov (United States)

    Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf

    2017-06-01

    We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
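The time averaged MSD underlying these strategies is straightforward to compute from a single trajectory. The sketch below applies it to plain Brownian motion rather than to the geometric Brownian motion of the Black-Scholes-Merton model, purely as an illustration:

```python
import random

def tamsd(x, lag):
    """Time averaged mean squared displacement of one trajectory x at a
    given lag: (1/(T-lag)) * sum over t of (x[t+lag] - x[t])**2, using
    all overlapping windows."""
    T = len(x)
    return sum((x[t + lag] - x[t]) ** 2 for t in range(T - lag)) / (T - lag)

# For ordinary Brownian motion the TAMSD grows linearly with the lag:
random.seed(1)
x = [0.0]
for _ in range(10000):
    x.append(x[-1] + random.gauss(0.0, 1.0))
ratio = tamsd(x, 10) / tamsd(x, 1)
print(ratio)   # close to 10
```

The ageing and delay variants described in the abstract restrict or shift the time window over which the same quantity is averaged.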

  7. GIS Tools to Estimate Average Annual Daily Traffic

    Science.gov (United States)

    2012-06-01

    This project presents five tools that were created for a geographical information system to estimate Annual Average Daily : Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...

  8. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
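The Hargreaves model mentioned above estimates atmospheric evaporative demand from temperature extremes and extraterrestrial radiation. A minimal sketch of the standard Hargreaves-Samani formula (the input values are illustrative, not Bolivian station data):

```python
import math

def hargreaves_et0(t_max, t_min, ra):
    """Hargreaves-Samani reference evapotranspiration estimate (mm/day).
    t_max, t_min: average maximum/minimum air temperature (deg C);
    ra: extraterrestrial solar radiation expressed as its evaporation
    equivalent (mm/day)."""
    t_mean = (t_max + t_min) / 2.0
    return 0.0023 * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)

# Example month: Tmax 25 C, Tmin 10 C, Ra 15 mm/day:
et0 = hargreaves_et0(25.0, 10.0, 15.0)
print(round(et0, 2))   # ~4.72 mm/day
```

The monthly water balance in the study is then simply precipitation minus this evaporative demand for each 1 km cell.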

  9. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of 2.5 μs and a memory capacity of 256 × 12-bit words. The number of sweeps is selectable through a front panel control in binary steps from 2³ to 2¹². The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
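The quoted 36 dB maximum follows directly from the √N law of signal averaging at the instrument's maximum of 2¹² = 4096 sweeps: averaging N sweeps of a repetitive signal in uncorrelated noise improves the voltage signal-to-noise ratio by √N, i.e. 20·log₁₀(√4096) ≈ 36 dB. A quick check (illustrative, not the instrument's firmware):

```python
import math
import random

# Theoretical improvement at 2**12 sweeps:
snr_gain_db = 20.0 * math.log10(math.sqrt(2 ** 12))
print(snr_gain_db)   # ~36.1 dB

# Numerical check: averaging 4096 sweeps of a constant signal (amplitude 1)
# in unit-variance Gaussian noise leaves residual noise of ~1/sqrt(4096):
random.seed(0)
N, trials = 4096, 64
avg = [sum(1.0 + random.gauss(0.0, 1.0) for _ in range(N)) / N
       for _ in range(trials)]
resid = math.sqrt(sum((a - 1.0) ** 2 for a in avg) / trials)
print(resid)   # ~0.016, i.e. ~1/64 of the single-sweep noise
```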

  10. Historical Data for Average Processing Time Until Hearing Held

    Data.gov (United States)

    Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...

  11. Annual average equivalent dose of workers form health area

    International Nuclear Information System (INIS)

    Daltro, T.F.L.; Campos, L.L.

    1992-01-01

Personnel monitoring data collected between 1985 and 1991 for workers in the health area were studied, giving a general overview of how the annual average equivalent dose changed. Two different aspects are presented: the analysis of the annual average equivalent dose in the different sectors of a hospital, and the comparison of these doses for the same sectors in different hospitals. (C.G.C.)

  12. The average action for scalar fields near phase transitions

    International Nuclear Information System (INIS)

    Wetterich, C.

    1991-08-01

We compute the average action for fields in two, three and four dimensions, including the effects of wave function renormalization. A study of the one-loop evolution equations for the scale dependence of the average action gives a unified picture of the qualitatively different behaviour in various dimensions for discrete as well as abelian and nonabelian continuous symmetry. The different phases and the phase transitions can be inferred from the evolution equation. (orig.)

  13. Bounds on the Average Sensitivity of Nested Canalizing Functions

    OpenAIRE

    Klotz, Johannes Georg; Heckel, Reinhard; Schober, Steffen

    2012-01-01

Nested canalizing Boolean functions (NCFs) play an important role in biologically motivated regulatory networks and in signal processing, in particular in describing stack filters. It has been conjectured that NCFs have a stabilizing effect on the network dynamics. It is well known that the average sensitivity plays a central role for the stability of (random) Boolean networks. Here we provide a tight upper bound on the average sensitivity of NCFs as a function of the number of relevant input variables…

  14. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions for specifying the dependence between random variables are used, with the dependence measured by Kendall's tau. The results show that the Normal copula can be used for almost all shifts.
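The ARL criterion itself is easy to estimate by Monte Carlo: run the EWMA recursion zₜ = λxₜ + (1−λ)zₜ₋₁ on simulated data until the statistic leaves the control limits, and average the run lengths. The sketch below uses independent exponential observations and illustrative two-sided limits; it does not reproduce the paper's copula-based dependence structure or its chart design.

```python
import random

def ewma_arl(lam, lcl, ucl, mean=1.0, n_runs=500, seed=7):
    """Monte Carlo average run length of an EWMA chart
    z_t = lam * x_t + (1 - lam) * z_{t-1} for exponential(mean) data,
    starting from z_0 = the in-control mean."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_runs):
        z, t = 1.0, 0          # chart started at the in-control mean
        while lcl < z < ucl:
            t += 1
            z = lam * rng.expovariate(1.0 / mean) + (1.0 - lam) * z
        total += t
    return total / n_runs

# In-control data gives a long ARL; an upward mean shift shortens it:
arl_in = ewma_arl(0.1, 0.4, 1.6, mean=1.0)
arl_out = ewma_arl(0.1, 0.4, 1.6, mean=2.0)
print(arl_in, arl_out)
```

Replacing the independent draws with copula-generated dependent observations is what the paper studies.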

  15. A precise measurement of the average b hadron lifetime

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; 
Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; 
Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1996-01-01

An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.

  16. Demonstration of a Model Averaging Capability in FRAMES

    Science.gov (United States)

    Meyer, P. D.; Castleton, K. J.

    2009-12-01

    Uncertainty in model structure can be incorporated in risk assessment using multiple alternative models and model averaging. To facilitate application of this approach to regulatory applications based on risk or dose assessment, a model averaging capability was integrated with the Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) version 2 software. FRAMES is a software platform that allows the non-parochial communication between disparate models, databases, and other frameworks. Users have the ability to implement and select environmental models for specific risk assessment and management problems. Standards are implemented so that models produce information that is readable by other downstream models and accept information from upstream models. Models can be linked across multiple media and from source terms to quantitative risk/dose estimates. Parameter sensitivity and uncertainty analysis tools are integrated. A model averaging module was implemented to accept output from multiple models and produce average results. These results can be deterministic quantities or probability distributions obtained from an analysis of parameter uncertainty. Output from alternative models is averaged using weights determined from user input and/or model calibration results. A model calibration module based on the PEST code was implemented to provide FRAMES with a general calibration capability. An application illustrates the implementation, user interfaces, execution, and results of the FRAMES model averaging capabilities.
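The averaging step described above reduces to a weighted combination of the alternative models' outputs, with weights supplied by the user or derived from calibration. A minimal sketch (the prediction values and AIC scores are hypothetical, and Akaike weights are just one common way to turn calibration results into weights; the abstract does not specify the weighting scheme):

```python
import math

def aic_weights(aics):
    """Akaike weights: exp(-dAIC/2), normalized to sum to one."""
    d = [a - min(aics) for a in aics]
    e = [math.exp(-x / 2.0) for x in d]
    s = sum(e)
    return [x / s for x in e]

def model_average(predictions, weights):
    """Weighted average of the outputs of alternative models."""
    s = sum(weights)
    return sum(p * w / s for p, w in zip(predictions, weights))

# Three alternative models predict a dose; user-supplied weights:
avg_user = model_average([1.2, 0.8, 1.0], [0.5, 0.3, 0.2])
print(avg_user)   # 1.04
# Or weights derived from (hypothetical) calibration AIC values:
avg_aic = model_average([1.2, 0.8, 1.0], aic_weights([10.0, 12.0, 11.0]))
print(avg_aic)
```

For probabilistic outputs, the same weights are applied to the models' output distributions rather than to point estimates.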

  17. Spatial Averaging Combined with a Perturbation/Iteration Procedure

    Directory of Open Access Journals (Sweden)

    F. E. C. Culick

    2012-09-01

    have caused some confusion. The paper ends with a brief discussion answering a serious criticism of the method, nearly fifteen years old. The basis for the criticism, arising from the solution to a relatively simple problem, is shown to be the result of omitting a term that arises when the average density in a flow changes abruptly. Presently, there is no known problem of combustion instability for which the kind of analysis discussed here is not applicable. The formalism is general; much effort is generally required to apply the analysis to a particular problem. A particularly significant point, not elaborated here, is the inextricable dependence on expansion of the equations and their boundary conditions in two small parameters, measures of the steady and unsteady flows. Whether or not those Mach numbers are actually ‘small’ in fact is really beside the point. Work out applications of the method as if they were! Then, perhaps to get more accurate results, resort to some form of CFD. It is a huge practical point that the approach taken and advocated here cannot be expected to give precise results, but however accurate they may be, they will be obtained with relative ease and will always be instructive. In any case, the expansions must be carried out carefully, with faithful attention to the rules of systematic procedures; otherwise, inadvertent errors may arise from the inclusion or exclusion of contributions. I state without proof or further examples that the general method discussed here has been quite well and widely tested for practical systems much more complex than those normally studied in the laboratory. Every case has shown encouraging results. Thus the lifetimes of approximate analyses developed before computing resources became commonplace seem to be very long indeed.

  18. Estimating average glandular dose by measuring glandular rate in mammograms

    International Nuclear Information System (INIS)

    Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru

    2003-01-01

    The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)
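The pipeline described above (pixel value → glandular rate via a fitted conversion curve, then averaging over the breast region) can be sketched as follows. The linear interpolation below stands in for the paper's neural-network fit, and the calibration points and pixel values are purely illustrative assumptions.

```python
import numpy as np

def pixel_to_glandular_rate(pixels, curve):
    """Map pixel values to glandular rate (%) via a conversion curve.

    curve : (pixel_values, glandular_rates) calibration points, here
            obtained from breast-equivalent phantoms (assumed values).
    """
    xp, fp = curve
    return np.interp(pixels, xp, fp)

# Hypothetical calibration: pixel 0 -> 0% glandular, pixel 255 -> 100%
curve = ([0.0, 255.0], [0.0, 100.0])

breast_pixels = np.array([51.0, 102.0, 153.0])   # pixels inside the breast
rates = pixel_to_glandular_rate(breast_pixels, curve)
avg_rate = rates.mean()                          # average glandular rate (%)
```

The per-pixel rates (or their average) would then feed the standard quality-control dosimetry formula to obtain the individual average glandular dose.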

  19. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have a substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
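The core of Bayesian model averaging, combining the two clusterings, can be sketched with the usual BIC approximation to the posterior model probabilities. The BIC values and cluster-membership probabilities below are invented for illustration; the paper's actual weighting scheme may differ.

```python
import math

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values.

    exp(-BIC/2) is proportional to the marginal likelihood; shifting
    by min(bics) avoids underflow without changing the ratios.
    """
    rel = [math.exp(-(b - min(bics)) / 2.0) for b in bics]
    s = sum(rel)
    return [r / s for r in rel]

def averaged_membership(memberships, weights):
    """Model-averaged probability that an individual is in a cluster."""
    return sum(w * m for w, m in zip(weights, memberships))

# Hypothetical fits: latent class analysis vs. grade of membership
w = bma_weights([1000.0, 1002.0])        # model 1 fits slightly better
p = averaged_membership([0.9, 0.7], w)   # one individual's probabilities
```

The averaged membership `p` is what would then enter the downstream linkage analysis in place of either single model's assignment.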

  20. State support of small and medium-sized entrepreneurship in Ukraine

    Directory of Open Access Journals (Sweden)

    Т.О. Melnyk

    2015-03-01

    Full Text Available The purposes, principles, and main directions of state policy on the development of small and medium-sized business in Ukraine are defined. Conditions and restrictions on granting state support to small and medium-sized businesses are outlined. The modern infrastructure of business support across the regions is considered. Different kinds of state support for small and medium-sized business are characterized: financial, informational, and consulting support; support in the sphere of innovations, science, and industrial production; support for exporters; and support in the training, retraining, and upgrading of managerial and entrepreneurial staff. Approaches to reforming state control of small and medium-sized business are generalized, especially regarding risk-based assessment of economic activities, the number and frequency of inspections, registration of certificates issued on the results of planned state control actions, and the creation of an effective mechanism for coordinating state control bodies. The most promising directions of state support for small and medium-sized business in Ukraine under current economic conditions are identified.

  1. Object detection by correlation coefficients using azimuthally averaged reference projections.

    Science.gov (United States)

    Nicholson, William V

    2004-11-01

    A method of computing correlation coefficients for object detection that takes advantage of azimuthally averaged reference projections is described and compared with two alternative methods: computing a cross-correlation function, or a local correlation coefficient, versus the azimuthally averaged reference projections. Two examples of an application from structural biology involving the detection of projection views of biological macromolecules in electron micrographs are discussed. It is found that a novel approach to computing a local correlation coefficient versus azimuthally averaged reference projections, using a rotational correlation coefficient, outperforms using a cross-correlation function and a local correlation coefficient in object detection from simulated images with a range of levels of simulated additive noise. The three approaches perform similarly in detecting macromolecular views in electron microscope images of a globular macromolecular complex (the ribosome). The rotational correlation coefficient outperforms the other methods in the detection of keyhole limpet hemocyanin macromolecular views in electron micrographs.
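The preparation of an azimuthally averaged reference can be sketched as binning pixels by integer radius from the image centre and averaging within each annulus. This is a simplified stand-in for the paper's reference construction, not the author's actual code; the Gaussian test image is an assumption.

```python
import numpy as np

def azimuthal_average(img):
    """Radial profile of a 2-D image: mean pixel value per integer-radius annulus."""
    img = np.asarray(img, dtype=float)
    cy, cx = (np.asarray(img.shape) - 1) / 2.0
    y, x = np.indices(img.shape)
    r = np.hypot(y - cy, x - cx).astype(int)      # integer radial bin per pixel
    sums = np.bincount(r.ravel(), weights=img.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts                           # mean value in each annulus

# A radially symmetric image: its profile should track the radial law
yy, xx = np.indices((33, 33))
rr = np.hypot(yy - 16, xx - 16)
img = np.exp(-rr ** 2 / 50.0)
profile = azimuthal_average(img)                   # profile[0] is the centre pixel
```

Matching a candidate particle against `profile` (rather than against every in-plane rotation of a 2-D reference) is what makes the rotational correlation approach cheap.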

  2. The average orbit system upgrade for the Brookhaven AGS

    International Nuclear Information System (INIS)

    Ciardullo, D.J.; Brennan, J.M.

    1995-01-01

    The flexibility of the AGS to accelerate protons, polarized protons and heavy ions requires average orbit instrumentation capable of performing over a wide range of beam intensity (10⁹ to 6×10¹³ charges) and accelerating frequency (1.7 MHz to 4.5 MHz). In addition, the system must be tolerant of dramatic changes in bunch shape, such as those occurring near transition. Reliability and maintenance issues preclude the use of active electronics within the high-radiation environment of the AGS tunnel, prompting the use of remote bunch signal processing. The upgrade for the AGS Average Orbit system is divided into three areas: (1) a new Pick Up Electrode (PUE) signal delivery system; (2) new average orbit processing electronics; and (3) centralized peripheral and data acquisition hardware. A distributed processing architecture was chosen to minimize the PUE signal cable lengths, the group of four from each detector location being phase matched to within ±5°.

  3. The Health Effects of Income Inequality: Averages and Disparities.

    Science.gov (United States)

    Truesdale, Beth C; Jencks, Christopher

    2016-01-01

    Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health.

  4. Bounds on the average sensitivity of nested canalizing functions.

    Directory of Open Access Journals (Sweden)

    Johannes Georg Klotz

    Full Text Available Nested canalizing Boolean functions (NCF) play an important role in biologically motivated regulatory networks and in signal processing, in particular describing stack filters. It has been conjectured that NCFs have a stabilizing effect on the network dynamics. It is well known that the average sensitivity plays a central role for the stability of (random) Boolean networks. Here we provide a tight upper bound on the average sensitivity of NCFs as a function of the number of relevant input variables. As conjectured in literature this bound is smaller than 4/3. This shows that a large number of functions appearing in biological networks belong to a class that has low average sensitivity, which is even close to a tight lower bound.

  5. Bounds on the average sensitivity of nested canalizing functions.

    Science.gov (United States)

    Klotz, Johannes Georg; Heckel, Reinhard; Schober, Steffen

    2013-01-01

    Nested canalizing Boolean functions (NCF) play an important role in biologically motivated regulatory networks and in signal processing, in particular describing stack filters. It has been conjectured that NCFs have a stabilizing effect on the network dynamics. It is well known that the average sensitivity plays a central role for the stability of (random) Boolean networks. Here we provide a tight upper bound on the average sensitivity of NCFs as a function of the number of relevant input variables. As conjectured in literature this bound is smaller than 4/3. This shows that a large number of functions appearing in biological networks belong to a class that has low average sensitivity, which is even close to a tight lower bound.
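The quantity bounded here, the average sensitivity, is easy to compute exhaustively for small functions: it is the expected number of inputs whose flip changes the output, under a uniform input distribution. The example NCF below is chosen for illustration and sits under the 4/3 bound.

```python
from itertools import product

def average_sensitivity(f, n):
    """Average sensitivity of an n-input Boolean function, by enumeration."""
    total = 0
    for x in product((0, 1), repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1                    # flip the i-th input
            if f(*x) != f(*y):
                total += 1
    return total / 2 ** n                # expectation over uniform inputs

# A nested canalizing function: x1 = 0 forces output 0, then x2 = 1
# forces output 1 (given x1 = 1), and finally x3 decides.
ncf = lambda x1, x2, x3: x1 & (x2 | x3)
s = average_sensitivity(ncf, 3)          # 1.25, below the 4/3 bound
```

By contrast, 3-input parity (`x1 ^ x2 ^ x3`) has average sensitivity 3, so the bound genuinely separates NCFs from general Boolean functions.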

  6. The Role of the Harmonic Vector Average in Motion Integration

    Directory of Open Access Journals (Sweden)

    Alan eJohnston

    2013-10-01

    Full Text Available The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging, in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy as it increases with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The harmonic vector average, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the intersection of constraints direction for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the harmonic vector average.
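One way to compute the HVA is to invert each local velocity vector (v → v/|v|²), take the arithmetic mean of the inverted vectors, and invert back. The sketch below, with assumed sample directions, shows the HVA recovering the true global velocity from component (normal) velocities while the plain vector average underestimates the speed.

```python
import math

def invert(v):
    """Vector inversion v -> v / |v|^2."""
    s2 = v[0] ** 2 + v[1] ** 2
    return (v[0] / s2, v[1] / s2)

def harmonic_vector_average(vs):
    ws = [invert(v) for v in vs]
    n = len(ws)
    mean = (sum(w[0] for w in ws) / n, sum(w[1] for w in ws) / n)
    return invert(mean)

# Global motion: speed 2 along x. Each local measurement is the normal
# component V*cos(theta) along direction theta (the aperture problem).
V = 2.0
local_vels = []
for deg in (-60.0, 0.0, 60.0):           # an unbiased sample of directions
    th = math.radians(deg)
    sp = V * math.cos(th)                # normal speed
    local_vels.append((sp * math.cos(th), sp * math.sin(th)))

hva = harmonic_vector_average(local_vels)                     # ~ (2.0, 0.0)
va = tuple(sum(v[i] for v in local_vels) / 3 for i in (0, 1))  # ~ (1.0, 0.0)
```

Note the vector average's speed of 1.0 against the true global speed of 2.0, exactly the underestimate the abstract describes.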

  7. On the linear dynamic response of average atom in plasma

    Science.gov (United States)

    Blenski, Thomas

    2006-05-01

    The theory of linear response of an average atom (AA) in plasma is considered. It is assumed that the model of the AA is of the density functional theory (DFT) type, i.e., can be characterized by an equilibrium electron density in terms of a complete set of bound and free one-electron wave functions of a screened self-consistent-field (SCF) potential. The starting point in the linear response theory is the cluster expansion of the energy extinction coefficient. This coefficient expresses the rate of extinction of the energy due to an external potential irradiating a plasma composed of ions and electrons. As shown in previous publications, the first-order term in the cluster expansion correctly gives the linear response of the AA. The first-order formula for the extinction coefficient is directly related to the AA photo-absorption cross-section. The cluster expansion technique shows how the homogeneous plasma contribution should be correctly subtracted from the response of the AA and its surrounding plasma. The linear response is considered in the dipole approximation and in the framework of the random phase approximation, which can be viewed as a version of a time-dependent DFT with local density approximation to the exchange correlation potential. In the paper, we discuss a possible practical scheme for calculation of the dipole linear response and obtain some theoretical results that allow one to reduce the theoretical and numerical difficulties of the approach. We show formally how in the dipole approximation the homogeneous plasma contribution to the induced potential results in the appearance of the cold-plasma dielectric function. We also derive a new sum rule which allows one to calculate the induced dipole using the localized density and potential gradients of the AA in equilibrium. We further propose a change of variables that allows us to eliminate the leading dipole divergence in the first-order Schrödinger equations. We next discuss some aspects of a

  8. High-Average, High-Peak Current Injector Design

    CERN Document Server

    Biedron, S G; Virgo, M

    2005-01-01

    There is increasing interest in high-average-power (>100 kW), μm-range FELs. These machines require high peak current (~1 kA), modest transverse emittance, and beam energies of ~100 MeV. High average currents (~1 A) place additional constraints on the design of the injector. We present a design for an injector intended to produce the required peak currents at the injector, eliminating the need for magnetic compression within the linac. This reduces the potential for beam quality degradation due to CSR and space charge effects within magnetic chicanes.

  9. Vehicle target detection method based on the average optical flow

    Science.gov (United States)

    Ma, J. Y.; Jie, F. R.; Hu, Y. J.

    2017-07-01

    Moving-target detection in image sequences of dynamic scenes is an important research topic in computer vision. Block projection and matching are used for global motion estimation; the background image is then compensated with the estimated motion parameters to stabilize the image sequence. Background subtraction is applied to the stabilized sequence to extract moving targets. Finally, the difference image is divided into uniform grids and the average optical flow is used for motion analysis. Experiments show that the proposed average optical flow method can efficiently extract vehicle targets from dynamic scenes while decreasing the false-alarm rate.
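The final grid-averaging step can be sketched as follows: a dense flow field is divided into uniform cells, the flow is averaged per cell, and cells whose average flow magnitude exceeds a threshold become target candidates. The field shape, the 4×4 grid, and the threshold are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def grid_average_flow(flow, grid=(4, 4)):
    """Average a dense optical-flow field of shape (H, W, 2) over a uniform grid."""
    h, w, _ = flow.shape
    gh, gw = grid
    out = np.zeros((gh, gw, 2))
    for i in range(gh):
        for j in range(gw):
            cell = flow[i * h // gh:(i + 1) * h // gh,
                        j * w // gw:(j + 1) * w // gw]
            out[i, j] = cell.mean(axis=(0, 1))    # average flow in this cell
    return out

# Synthetic field: one moving 8x8 patch inside a 32x32 frame
flow = np.zeros((32, 32, 2))
flow[8:16, 8:16] = (3.0, 0.0)
cells = grid_average_flow(flow)
moving = np.hypot(cells[..., 0], cells[..., 1]) > 1.0   # candidate cells
```

Averaging per cell suppresses isolated noisy flow vectors, which is the source of the reduced false-alarm rate claimed above.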

  10. Bounce-averaged Fokker-Planck code for stellarator transport

    International Nuclear Information System (INIS)

    Mynick, H.E.; Hitchon, W.N.G.

    1985-07-01

    A computer code for solving the bounce-averaged Fokker-Planck equation appropriate to stellarator transport has been developed, and its first applications made. The code is much faster than the bounce-averaged Monte-Carlo codes, which up to now have provided the most efficient numerical means for studying stellarator transport. Moreover, because the connection to analytic kinetic theory of the Fokker-Planck approach is more direct than for the Monte-Carlo approach, a comparison of theory and numerical experiment is now possible at a considerably more detailed level than previously

  11. Research & development and growth: A Bayesian model averaging analysis

    Czech Academy of Sciences Publication Activity Database

    Horváth, Roman

    2011-01-01

    Vol. 28, No. 6 (2011), pp. 2669-2673 ISSN 0264-9993. [Society for Non-linear Dynamics and Econometrics Annual Conference. Washington DC, 16.03.2011-18.03.2011] R&D Projects: GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Keywords: Research and development * Growth * Bayesian model averaging Subject RIV: AH - Economics Impact factor: 0.701, year: 2011 http://library.utia.cas.cz/separaty/2011/E/horvath-research & development and growth a bayesian model averaging analysis.pdf

  12. Non-self-averaging nucleation rate due to quenched disorder

    International Nuclear Information System (INIS)

    Sear, Richard P

    2012-01-01

    We study the nucleation of a new thermodynamic phase in the presence of quenched disorder. The quenched disorder is a generic model of both impurities and disordered porous media; both are known to have large effects on nucleation. We find that the nucleation rate is non-self-averaging. This is in a simple Ising model with clusters of quenched spins. We also show that non-self-averaging behaviour is straightforward to detect in experiments, and may be rather common. (fast track communication)

  13. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frames for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: collisions among grains at the macroscopic scale are compared to collisions among molecules [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that atoms are so small that the number of molecules in a control volume is infinite. Under this assumption the concentration (number of particles n) does not change during the averaging process and the two definitions of average coincide. This hypothesis no longer holds in granular flows: contrary to gases, the size of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, over more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (usually the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate corresponds to local averaging (needed to describe some instability phenomena or secondary circulations), and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
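The distinction between the two averages is easy to see numerically: when the concentration varies across realizations, the phasic (plain ensemble) average of a quantity differs from its mass-weighted average ⟨c·u⟩/⟨c⟩. The concentrations and velocities below are toy numbers for illustration only.

```python
def phasic_average(values):
    """Plain ensemble average of a quantity over realizations."""
    return sum(values) / len(values)

def mass_weighted_average(concentrations, values):
    """Mass-weighted (concentration-weighted) average <c*u> / <c>."""
    num = sum(c * u for c, u in zip(concentrations, values))
    return num / sum(concentrations)

c = [0.2, 0.4, 0.6]     # particle concentration in each realization (varies)
u = [1.0, 2.0, 3.0]     # grain velocity in each realization

u_phasic = phasic_average(u)             # 2.0
u_favre = mass_weighted_average(c, u)    # 2.333..., differs from 2.0
```

With constant concentration (a single realization) the two functions return the same value, which is exactly the coincidence the abstract describes.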

  14. 26 CFR 1.1301-1 - Averaging of farm income.

    Science.gov (United States)

    2010-04-01

    ... January 1, 2003, rental income based on a share of a tenant's production determined under an unwritten... the Collection of Income Tax at Source on Wages (Federal income tax withholding), or the amount of net... 26 Internal Revenue 11 2010-04-01 2010-04-01 true Averaging of farm income. 1.1301-1 Section 1...

  15. Significance of power average of sinusoidal and non-sinusoidal ...

    Indian Academy of Sciences (India)

    Home; Journals; Pramana – Journal of Physics; Volume 87; Issue 1. Significance of power average of ... Additional sinusoidal and different non-sinusoidal periodic perturbations applied to the periodically forced nonlinear oscillators determine the maintenance or inhibition of chaos. It is observed that the weak amplitude of ...

  16. Error estimates in horocycle averages asymptotics: challenges from string theory

    NARCIS (Netherlands)

    Cardella, M.A.

    2010-01-01

    For modular functions of rapid decay, a classical result connects the error estimate in their long horocycle average asymptotic to the Riemann hypothesis. We study similar asymptotics, for modular functions with not that mild growing conditions, such as of polynomial growth and of exponential growth

  17. Average weighted receiving time in recursive weighted Koch networks

    Indian Academy of Sciences (India)

    https://www.ias.ac.in/article/fulltext/pram/086/06/1173-1182. Keywords. Weighted Koch network; recursive division method; average weighted receiving time. Abstract. Motivated by the empirical observation in airport networks and metabolic networks, we introduce the model of the recursive weighted Koch networks created ...

  18. Average Distance Travelled To School by Primary and Secondary ...

    African Journals Online (AJOL)

    This study investigated average distance travelled to school by students in primary and secondary schools in Anambra, Enugu, and Ebonyi States and effect on attendance. These are among the top ten densely populated and educationally advantaged States in Nigeria. Research evidences report high dropout rates in ...

  19. Determination of the average lifetime of bottom hadrons

    International Nuclear Information System (INIS)

    Althoff, M.; Braunschweig, W.; Kirschfink, F.J.; Martyn, H.U.; Rosskamp, P.; Schmitz, D.; Siebke, H.; Wallraff, W.; Hilger, E.; Kracht, T.; Krasemann, H.L.; Leu, P.; Lohrmann, E.; Pandoulas, D.; Poelz, G.; Poesnecker, K.U.; Duchovni, E.; Eisenberg, Y.; Karshon, U.; Mikenberg, G.; Mir, R.; Revel, D.; Shapira, A.; Baranko, G.; Caldwell, A.; Cherney, M.; Izen, J.M.; Mermikides, M.; Ritz, S.; Rudolph, G.; Strom, D.; Takashima, M.; Venkataramania, H.; Wicklund, E.; Wu, S.L.; Zobernig, G.

    1984-01-01

    We have determined the average lifetime of hadrons containing b quarks produced in e⁺e⁻ annihilation to be τ_B = 1.83×10⁻¹² s. Our method uses charged decay products from both non-leptonic and semileptonic decay modes. (orig./HSI)

  20. Bounding quantum gate error rate based on reported average fidelity

    International Nuclear Information System (INIS)

    Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)
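As background to the gap between average fidelity and error rate, the standard conversion from average gate fidelity to process (entanglement) infidelity uses Nielsen's relation F_avg = (d·F_pro + 1)/(d + 1) for a d-dimensional system. The sketch below applies it to the fidelities quoted above; it is this elementary arithmetic, not the paper's tighter Pauli-distance bound.

```python
def process_infidelity(f_avg, d):
    """Process infidelity 1 - F_pro from average gate fidelity F_avg.

    Uses Nielsen's relation F_avg = (d * F_pro + 1) / (d + 1),
    so 1 - F_pro = (d + 1) * (1 - F_avg) / d.
    """
    r_avg = 1.0 - f_avg              # average gate infidelity
    return (d + 1) * r_avg / d

single_qubit = process_infidelity(0.999, d=2)   # 99.9% single-qubit gate
two_qubit = process_infidelity(0.99, d=4)       # 99% two-qubit gate
```

Even this conversion shows the reported fidelity understating the infidelity by a dimension-dependent factor; the worst-case error rate bounded in the paper can be larger still.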

  1. A depth semi-averaged model for coastal dynamics

    Science.gov (United States)

    Antuono, M.; Colicchio, G.; Lugni, C.; Greco, M.; Brocchini, M.

    2017-05-01

    The present work extends the semi-integrated method proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)], which comprises a subset of depth-averaged equations (similar to Boussinesq-like models) and a Poisson equation that accounts for vertical dynamics. Here, the subset of depth-averaged equations has been reshaped in a conservative-like form and both the Poisson equation formulations proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)] are investigated: the former uses the vertical velocity component (formulation A) and the latter a specific depth semi-averaged variable, ϒ (formulation B). Our analyses reveal that formulation A is prone to instabilities as wave nonlinearity increases. On the contrary, formulation B allows an accurate, robust numerical implementation. Test cases derived from the scientific literature on Boussinesq-type models—i.e., solitary and Stokes wave analytical solutions for linear dispersion and nonlinear evolution and experimental data for shoaling properties—are used to assess the proposed solution strategy. It is found that the present method gives reliable predictions of wave propagation in shallow to intermediate waters, in terms of both semi-averaged variables and conservation properties.

  2. Trend of Average Wages as Indicator of Hypothetical Money Illusion

    Directory of Open Access Journals (Sweden)

    Julian Daszkowski

    2010-06-01

    Full Text Available Until 1998, the definition of wages in Poland did not include the value of social security contributions. The changed definition produced a higher level of reported wages but was expected not to affect take-home pay. Nevertheless, after a short period, the trend of average wages returned to its previous line. This effect is explained in terms of money illusion.

  3. 40 CFR 63.1332 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... other controls for a Group 1 storage vessel, batch process vent, aggregate batch vent stream, continuous... calculated using the procedures in § 63.1323(b). (B) If the batch process vent is controlled using a control... pollution prevention in generating emissions averaging credits. (1) Storage vessels, batch process vents...

  4. Implications of Methodist clergies' average lifespan and missional ...

    African Journals Online (AJOL)

    We are born, we touch the lives of others, we die – and then we are remembered. For the purpose of this article, I have assessed, from obituaries, the average lifespan of clergy (ministers) of the Methodist Church of South Africa (MCSA) who died between 2003 and 2014. These obituaries were published in the ...

  5. High Average Power UV Free Electron Laser Experiments At JLAB

    International Nuclear Information System (INIS)

    Douglas, David; Benson, Stephen; Evtushenko, Pavel; Gubeli, Joseph; Hernandez-Garcia, Carlos; Legg, Robert; Neil, George; Powers, Thomas; Shinn, Michelle; Tennant, Christopher; Williams, Gwyn

    2012-01-01

    Having produced 14 kW of average power at ∼2 microns, JLAB has shifted its focus to the ultraviolet portion of the spectrum. This presentation will describe the JLab UV Demo FEL, present specifics of its driver ERL, and discuss the latest experimental results from FEL experiments and machine operations.

  6. Investigation of average daily water consumption and its impact on ...

    African Journals Online (AJOL)

    Investigation of average daily water consumption and its impact on weight gain in captive common buzzards ( Buteo buteo ) in Greece. ... At the end of 24 hours, the left over water was carefully brought out and re-measured to determine the quantity the birds have consumed. A control was set with a ceramic bowl with same ...

  7. proposed average values of some engineering properties of palm ...

    African Journals Online (AJOL)

    2012-07-02

    Coefficient of sliding friction of palm kernels: Gbadamosi [2] determined the coefficient of sliding friction of palm kernels using a bottomless four-sided container on an adjustable tilting surface of plywood, galvanized steel, and glass. The average values were 0.38, 0.45, and 0.44 for dura, tenera, and pisifera ...

  8. 40 CFR 80.67 - Compliance on average.

    Science.gov (United States)

    2010-07-01

    ... of this section apply to all reformulated gasoline and RBOB produced or imported for which compliance... use to ensure the gasoline is produced by the refiner or is imported by the importer and is used only... on average. (1) The VOC-controlled reformulated gasoline and RBOB produced at any refinery or...

  9. Speckle averaging system for laser raster-scan image projection

    Science.gov (United States)

    Tiszauer, Detlev H.; Hackel, Lloyd A.

    1998-03-17

    The viewers' perception of laser speckle in a laser-scanned image projection system is modified or eliminated by the addition of an optical deflection system that effectively presents a new speckle realization at each point on the viewing screen to each viewer for every scan across the field. The speckle averaging is accomplished without introduction of spurious imaging artifacts.

  10. Moving average rules as a source of market instability

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    Despite the pervasiveness of the efficient markets paradigm in the academic finance literature, the use of various moving average (MA) trading rules remains popular with financial market practitioners. This paper proposes a stochastic dynamic financial market model in which demand for traded assets
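The moving-average rule this abstract refers to can be illustrated with a minimal sketch. This is illustrative only, not the authors' model; the window length, threshold convention, and prices are made up:

```python
def moving_average(prices, window):
    """Arithmetic mean of the last `window` prices."""
    return sum(prices[-window:]) / window

def ma_rule_signal(prices, window):
    """Chartist demand signal: buy (+1) when the latest price is above
    its moving average, sell (-1) when it is below."""
    return 1 if prices[-1] > moving_average(prices, window) else -1

prices = [100, 101, 103, 102, 105]
print(ma_rule_signal(prices, window=3))  # 105 vs (103 + 102 + 105) / 3 -> buy
```

In heterogeneous-agent models of this kind, the aggregate demand of such chartists is then combined with fundamentalist demand to drive the price dynamics.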

  11. The background effective average action approach to quantum gravity

    DEFF Research Database (Denmark)

    D’Odorico, G.; Codello, A.; Pagani, C.

    2016-01-01

    of an UV attractive non-Gaussian fixed-point, which we find characterized by real critical exponents. Our closure method is general and can be applied systematically to more general truncations of the gravitational effective average action. © Springer International Publishing Switzerland 2016....

  12. On the average-case complexity of Shellsort

    NARCIS (Netherlands)

    Vitányi, P.

    We prove a lower bound expressed in the increment sequence on the average-case complexity of the number of inversions of Shellsort. This lower bound is sharp in every case where it could be checked. A special case of this lower bound yields the general Jiang-Li-Vitányi lower bound. We obtain new
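The dependence on the increment sequence that the bound above is expressed in can be seen in a short Shellsort sketch; the increments chosen here are illustrative, and the move counter is a simple proxy for the inversions the complexity analysis concerns:

```python
def shellsort(a, increments=(7, 3, 1)):
    """Shellsort: h-sort the list for each increment h (ending with h = 1).
    Also counts element moves, a proxy for inversions resolved."""
    a = list(a)
    moves = 0
    for h in increments:
        for i in range(h, len(a)):
            key = a[i]
            j = i
            while j >= h and a[j - h] > key:
                a[j] = a[j - h]  # shift the h-predecessor upward
                j -= h
                moves += 1
            a[j] = key
    return a, moves

print(shellsort([5, 2, 9, 1, 7, 3]))
```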

  13. Environmental stresses can alleviate the average deleterious effect of mutations

    Directory of Open Access Journals (Sweden)

    Leibler Stanislas

    2003-05-01

    Full Text Available Abstract Background Fundamental questions in evolutionary genetics, including the possible advantage of sexual reproduction, depend critically on the effects of deleterious mutations on fitness. Limited existing experimental evidence suggests that, on average, such effects tend to be aggravated under environmental stresses, consistent with the perception that stress diminishes the organism's ability to tolerate deleterious mutations. Here, we ask whether there are also stresses with the opposite influence, under which the organism becomes more tolerant to mutations. Results We developed a technique, based on bioluminescence, which allows accurate automated measurements of bacterial growth rates at very low cell densities. Using this system, we measured growth rates of Escherichia coli mutants under a diverse set of environmental stresses. In contrast to the perception that stress always reduces the organism's ability to tolerate mutations, our measurements identified stresses that do the opposite – that is, despite decreasing wild-type growth, they alleviate, on average, the effect of deleterious mutations. Conclusions Our results show a qualitative difference between various environmental stresses ranging from alleviation to aggravation of the average effect of mutations. We further show how the existence of stresses that are biased towards alleviation of the effects of mutations may imply the existence of average epistatic interactions between mutations. The results thus offer a connection between the two main factors controlling the effects of deleterious mutations: environmental conditions and epistatic interactions.

  14. Computation of the average energy for LXY electrons

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau, A.

    1996-01-01

    The application of an atomic rearrangement model in which we only consider the three shells K, L and M, to compute the counting efficiency for electron capture nuclides, requires a fine averaged energy value for LMN electrons. In this report, we illustrate the procedure with two example, ''125 I and ''109 Cd. (Author) 4 refs

  15. Crystallographic extraction and averaging of data from small image areas

    NARCIS (Netherlands)

    Perkins, GA; Downing, KH; Glaeser, RM

    The accuracy of structure factor phases determined from electron microscope images is determined mainly by the level of statistical significance, which is limited by the low level of allowed electron exposure and by the number of identical unit cells that can be averaged. It is shown here that

  16. Establishment of Average Body Measurement and the Development ...

    African Journals Online (AJOL)

there is a change in their shape as well as in their size. This growth according to Aldrich ... Poverty according to Igbo (2002) is one of the reasons for food insecurity. Inaccessibility and ...

  17. Maximum and average field strength in enclosed environments

    NARCIS (Netherlands)

    Leferink, Frank Bernardus Johannes

    2010-01-01

    Electromagnetic fields in large enclosed environments are reflected many times and cannot be predicted anymore using conventional models. The common approach is to compare such environments with highly reflecting reverberation chambers. The average field strength can easily be predicted using the

  18. arXiv Averaged Energy Conditions and Bouncing Universes

    CERN Document Server

    Giovannini, Massimo

    2017-11-16

    The dynamics of bouncing universes is characterized by violating certain coordinate-invariant restrictions on the total energy-momentum tensor, customarily referred to as energy conditions. Although there could be epochs in which the null energy condition is locally violated, it may perhaps be enforced in an averaged sense. Explicit examples of this possibility are investigated in different frameworks.

  19. Comparing averaging limits for social cues over space and time.

    Science.gov (United States)

    Florey, Joseph; Dakin, Steven C; Mareschal, Isabelle

    2017-08-01

Observers are able to extract summary statistics from groups of faces, such as their mean emotion or identity. This can be done for faces presented simultaneously and also from sequences of faces presented at a fixed location. Equivalent noise analysis, which estimates an observer's internal noise (the uncertainty in judging a single element) and effective sample size (ESS; the effective number of elements being used to judge the average), reveals what limits an observer's averaging performance. It has recently been shown that observers have lower ESSs and higher internal noise for judging the mean gaze direction of a group of spatially distributed faces compared to the mean head direction of the same faces. In this study, we use the equivalent noise technique to compare limits on these two cues to social attention under two presentation conditions: spatially distributed and sequentially presented. We find that the differences in ESS are replicated in spatial arrays but disappear when both cue types are averaged over time, suggesting that limited peripheral gaze perception prevents accurate averaging performance. Correlation analysis across participants revealed generic limits for internal noise that may act across stimulus and presentation types, but no clear shared limits for ESS. This result supports the idea of some shared neural mechanisms in early stages of visual processing.
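The equivalent-noise decomposition used above can be sketched with the standard two-parameter model, in which the observed variance of the averaged judgment is (internal variance + external variance) / ESS. The numbers below are made up for illustration and are not data from the study:

```python
import numpy as np

def observed_variance(sigma_ext, sigma_int, ess):
    """Standard equivalent-noise model for the variance of an observer's
    estimate of the mean of a noisy ensemble."""
    return (sigma_int ** 2 + sigma_ext ** 2) / ess

# Hypothetical noise-free "measurements" at four external-noise levels:
sigma_ext = np.array([0.0, 2.0, 4.0, 8.0])
var_obs = observed_variance(sigma_ext, sigma_int=1.5, ess=4.0)

# The model is linear in sigma_ext**2, so both parameters fall out of a
# straight-line fit: slope = 1/ESS, intercept = sigma_int**2 / ESS.
slope, intercept = np.polyfit(sigma_ext ** 2, var_obs, 1)
ess_fit = 1.0 / slope
sigma_int_fit = (intercept * ess_fit) ** 0.5
print(ess_fit, sigma_int_fit)  # recovers 4.0 and 1.5
```

In practice the fit is done to psychophysical thresholds measured at several external-noise levels, with measurement noise, rather than to exact model values as here.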

  20. Modeling of Sokoto Daily Average Temperature: A Fractional ...

    African Journals Online (AJOL)

    Modeling of Sokoto Daily Average Temperature: A Fractional Integration Approach. 22 extension of the class of ARIMA processes stemming from Box and Jenkins methodology. One of their originalities is the explicit modeling of the long term correlation structure (Diebolt and. Guiraud, 2000). Autoregressive fractionally.

  1. Accuracy of averaged auditory brainstem response amplitude and latency estimates

    DEFF Research Database (Denmark)

    Madsen, Sara Miay Kim; M. Harte, James; Elberling, Claus

    2017-01-01

    Objective: The aims were to 1) establish which of the four algorithms for estimating residual noise level and signal-to-noise ratio (SNR) in auditory brainstem responses (ABRs) perform better in terms of post-average wave-V peak latency and amplitude errors and 2) determine whether SNR or noise...

  2. Domain-averaged Fermi-hole Analysis for Solids

    Czech Academy of Sciences Publication Activity Database

    Baranov, A.; Ponec, Robert; Kohout, M.

    2012-01-01

    Roč. 137, č. 21 (2012), s. 214109 ISSN 0021-9606 R&D Projects: GA ČR GA203/09/0118 Institutional support: RVO:67985858 Keywords : bonding in solids * domain averaged fermi hole * natural orbitals Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.164, year: 2012

  3. Significance of power average of sinusoidal and non-sinusoidal ...

    Indian Academy of Sciences (India)

    2016-06-08

Jun 8, 2016 ... Corresponding author. E-mail: venkatesh.sprv@gmail.com ... of the total power average technique, one can say whether the chaos in that nonlinear system is to be suppressed or not. Keywords. Chaos; controlling .... the instantaneous values of power taken during one complete cycle T and is given as.

  4. 94 GHz High-Average-Power Broadband Amplifier

    National Research Council Canada - National Science Library

    Luhmann, Neville

    2003-01-01

    A state-of-the-art gyro-TWT amplifier operating in the low loss TE01 mode has been developed with the objective of producing an average power of 140 kW in the W-Band with a predicted efficiency of 28%, 50dB gain, and 5% bandwidth...

  5. Climate Prediction Center (CPC) Zonally Average 500 MB Temperature Anomalies

    Data.gov (United States)

National Oceanic and Atmospheric Administration, Department of Commerce — This is one of the CPC's Monthly Atmospheric and SST Indices. It is the 500-hPa temperature anomalies averaged over the latitude band 20°N to 20°S. The anomalies are...

  6. A simple consensus algorithm for distributed averaging in random ...

    Indian Academy of Sciences (India)

    guaranteed convergence with this simple algorithm. Keywords. Sensor networks; random geographical networks; distributed averaging; consensus algorithms. PACS Nos 89.75.Hc; 89.75.Fb; 89.20.Ff. 1. Introduction. Wireless sensor networks are increasingly used in many applications ranging from envi- ronmental to ...
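The distributed-averaging update that such consensus algorithms iterate can be sketched on a toy network; the 4-node ring below is illustrative, not the random geographical networks studied in the paper:

```python
def consensus_step(values, neighbors):
    """One synchronous round: every node replaces its value with the mean
    of its own value and its neighbors' values."""
    return {
        node: (x + sum(values[n] for n in neighbors[node])) / (1 + len(neighbors[node]))
        for node, x in values.items()
    }

# Toy 4-node ring; each node has the same degree, so the update matrix is
# doubly stochastic: the global average (here 4.0) is preserved and every
# node converges to it.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
values = {0: 1.0, 1: 3.0, 2: 5.0, 3: 7.0}

for _ in range(100):
    values = consensus_step(values, neighbors)
print(values)  # every node is now ~4.0
```

On irregular or randomly deployed networks the naive update is no longer average-preserving, which is why specialized consensus algorithms of the kind described above are needed.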

  7. 40 CFR 63.150 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES National Emission Standards for Organic Hazardous Air Pollutants From the Synthetic Organic Chemical Manufacturing Industry for Process Vents, Storage Vessels, Transfer Operations, and Wastewater § 63.150 Emissions averaging...

  8. Grade Point Average: What's Wrong and What's the Alternative?

    Science.gov (United States)

    Soh, Kay Cheng

    2011-01-01

    Grade point average (GPA) has been around for more than two centuries. However, it has created a lot of confusion, frustration, and anxiety to GPA-producers and users alike, especially when used across-nation for different purposes. This paper looks into the reasons for such a state of affairs from the perspective of educational measurement. It…

  9. The Effect of Honors Courses on Grade Point Averages

    Science.gov (United States)

    Spisak, Art L.; Squires, Suzanne Carter

    2016-01-01

    High-ability entering college students give three main reasons for not choosing to become part of honors programs and colleges; they and/or their parents believe that honors classes at the university level require more work than non-honors courses, are more stressful, and will adversely affect their self-image and grade point average (GPA) (Hill;…

  10. An average salary: approaches to the index determination

    Directory of Open Access Journals (Sweden)

    T. M. Pozdnyakova

    2017-01-01

Full Text Available The article “An average salary: approaches to the index determination” is devoted to studying various methods of calculating this index, both used by official state statistics of the Russian Federation and offered by modern researchers. The purpose of this research is to analyze the existing approaches to calculating the average salary of employees of enterprises and organizations, as well as to make certain additions that would help to clarify this index. The information base of the research is laws and regulations of the Russian Federation Government, statistical and analytical materials of the Federal State Statistics Service of Russia for the section «Socio-economic indexes: living standards of the population», as well as materials of scientific papers describing different approaches to the average salary calculation. The data on the average salary of employees of educational institutions of the Khabarovsk region served as the experimental base of research. In the process of conducting the research, the following methods were used: analytical, statistical, calculated-mathematical and graphical. The main result of the research is an option of supplementing the method of calculating the average salary index within enterprises or organizations, used by Goskomstat of Russia, by means of introducing a correction factor. Its essence consists in the specific formation of material indexes for different categories of employees in enterprises or organizations, mainly engaged in internal secondary jobs. The need for introducing this correction factor comes from the current reality of working conditions of a wide range of organizations, when an employee is forced, in addition to the main position, to fulfill additional job duties. As a result, the situation is frequent when the average salary at the enterprise is difficult to assess objectively because it consists of calculating multiple rates per staff member. In other words, the average salary of

  11. Condition monitoring of gearboxes using synchronously averaged electric motor signals

    Science.gov (United States)

    Ottewill, J. R.; Orkisz, M.

    2013-07-01

    Due to their prevalence in rotating machinery, the condition monitoring of gearboxes is extremely important in the minimization of potentially dangerous and expensive failures. Traditionally, gearbox condition monitoring has been conducted using measurements obtained from casing-mounted vibration transducers such as accelerometers. A well-established technique for analyzing such signals is the synchronous signal average, where vibration signals are synchronized to a measured angular position and then averaged from rotation to rotation. Driven, in part, by improvements in control methodologies based upon methods of estimating rotor speed and torque, induction machines are used increasingly in industry to drive rotating machinery. As a result, attempts have been made to diagnose defects using measured terminal currents and voltages. In this paper, the application of the synchronous signal averaging methodology to electric drive signals, by synchronizing stator current signals with a shaft position estimated from current and voltage measurements is proposed. Initially, a test-rig is introduced based on an induction motor driving a two-stage reduction gearbox which is loaded by a DC motor. It is shown that a defect seeded into the gearbox may be located using signals acquired from casing-mounted accelerometers and shaft mounted encoders. Using simple models of an induction motor and a gearbox, it is shown that it should be possible to observe gearbox defects in the measured stator current signal. A robust method of extracting the average speed of a machine from the current frequency spectrum, based on the location of sidebands of the power supply frequency due to rotor eccentricity, is presented. The synchronous signal averaging method is applied to the resulting estimations of rotor position and torsional vibration. Experimental results show that the method is extremely adept at locating gear tooth defects. Further results, considering different loads and different
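The synchronous signal average itself is straightforward once the signal has been resampled to a fixed number of samples per shaft revolution: cut the record into one-revolution blocks and average them. A sketch with synthetic data follows; all parameters (samples per revolution, tone order, noise level) are illustrative:

```python
import numpy as np

def synchronous_average(signal, samples_per_rev):
    """Average the signal revolution by revolution: components synchronous
    with the shaft survive, asynchronous noise is attenuated by ~sqrt(revs)."""
    n_revs = len(signal) // samples_per_rev
    blocks = signal[:n_revs * samples_per_rev].reshape(n_revs, samples_per_rev)
    return blocks.mean(axis=0)

rng = np.random.default_rng(0)
spr, revs = 64, 200
angle = 2 * np.pi * np.arange(spr) / spr
clean = np.sin(8 * angle)                       # shaft-synchronous gear tone
noisy = np.tile(clean, revs) + rng.normal(0.0, 1.0, spr * revs)

avg = synchronous_average(noisy, spr)
print(np.max(np.abs(avg - clean)))  # residual noise ~1/sqrt(200) of original
```

The contribution of the paper is in obtaining the angular reference for this averaging from the motor's electrical signals rather than from a shaft encoder.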

  12. Rescuing Collective Wisdom when the Average Group Opinion Is Wrong

    Directory of Open Access Journals (Sweden)

    Andres Laan

    2017-11-01

    Full Text Available The total knowledge contained within a collective supersedes the knowledge of even its most intelligent member. Yet the collective knowledge will remain inaccessible to us unless we are able to find efficient knowledge aggregation methods that produce reliable decisions based on the behavior or opinions of the collective’s members. It is often stated that simple averaging of a pool of opinions is a good and in many cases the optimal way to extract knowledge from a crowd. The method of averaging has been applied to analysis of decision-making in very different fields, such as forecasting, collective animal behavior, individual psychology, and machine learning. Two mathematical theorems, Condorcet’s theorem and Jensen’s inequality, provide a general theoretical justification for the averaging procedure. Yet the necessary conditions which guarantee the applicability of these theorems are often not met in practice. Under such circumstances, averaging can lead to suboptimal and sometimes very poor performance. Practitioners in many different fields have independently developed procedures to counteract the failures of averaging. We review such knowledge aggregation procedures and interpret the methods in the light of a statistical decision theory framework to explain when their application is justified. Our analysis indicates that in the ideal case, there should be a matching between the aggregation procedure and the nature of the knowledge distribution, correlations, and associated error costs. This leads us to explore how machine learning techniques can be used to extract near-optimal decision rules in a data-driven manner. We end with a discussion of open frontiers in the domain of knowledge aggregation and collective intelligence in general.

  13. High average power diode pumped solid state lasers for CALIOPE

    International Nuclear Information System (INIS)

    Comaskey, B.; Halpin, J.; Moran, B.

    1994-07-01

    Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers

  14. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements.

    Science.gov (United States)

    Hourdakis, C J

    2011-04-07

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak, Ū(P), the average, Ū, the effective, U(eff) or the maximum peak, U(P) tube voltage. This work proposed a method for determination of the PPV from measurements with a kV-meter that measures the average Ū or the average peak, Ū(p) voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak k(PPV,kVp) and the average k(PPV,Uav) conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equation and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference and the calculated-according to the proposed method-PPV values were less than 2%. Practical aspects on the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous base to determine the PPV with kV-meters from Ū(p) and Ū measurement. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
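The conversion described above reduces to multiplying the meter reading by its calibration coefficient and by the tabulated conversion factor. The sketch below uses hypothetical numbers throughout; neither the coefficients nor the reading are values from the paper:

```python
def ppv_from_average(u_avg_reading, calib_coeff, k_ppv_uav):
    """Practical peak voltage from a kV-meter average-voltage reading:
    apply the meter's calibration coefficient, then the PPV conversion
    factor appropriate to the tube voltage and ripple (from the paper's
    regression equations or tables)."""
    return u_avg_reading * calib_coeff * k_ppv_uav

reading = 78.5   # kV, hypothetical average-voltage reading
n_cal = 1.01     # hypothetical meter calibration coefficient
k = 1.03         # hypothetical conversion factor for this voltage/ripple
print(round(ppv_from_average(reading, n_cal, k), 1))
```

The appropriate value of the conversion factor depends on both tube voltage and ripple, which is why the paper provides regression equations rather than a single constant.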

  15. Derivation of a volume-averaged neutron diffusion equation; Atomos para el desarrollo de Mexico

    Energy Technology Data Exchange (ETDEWEB)

    Vazquez R, R.; Espinosa P, G. [UAM-Iztapalapa, Av. San Rafael Atlixco 186, Col. Vicentina, Mexico D.F. 09340 (Mexico); Morales S, Jaime B. [UNAM, Laboratorio de Analisis en Ingenieria de Reactores Nucleares, Paseo Cuauhnahuac 8532, Jiutepec, Morelos 62550 (Mexico)]. e-mail: rvr@xanum.uam.mx

    2008-07-01

This paper presents a general theoretical analysis of the problem of neutron motion in a nuclear reactor, where large variations in neutron cross sections normally preclude the use of the classical neutron diffusion equation. A volume-averaged neutron diffusion equation is derived which includes correction terms for diffusion and nuclear reaction effects. A method is presented to determine closure relationships for the volume-averaged neutron diffusion equation (e.g., effective neutron diffusivity). In order to describe the distribution of neutrons in a highly heterogeneous configuration, it was necessary to extend the classical neutron diffusion equation. Thus, the volume-averaged diffusion equation includes two correction factors: the first is related to neutron absorption and the second is a contribution to neutron diffusion; both parameters are related to neutron effects at the interfaces of a heterogeneous configuration. (Author)

  16. The classical correlation limits the ability of the measurement-induced average coherence

    Science.gov (United States)

    Zhang, Jun; Yang, Si-Ren; Zhang, Yang; Yu, Chang-Shui

    2017-04-01

    Coherence is the most fundamental quantum feature in quantum mechanics. For a bipartite quantum state, if a measurement is performed on one party, the other party, based on the measurement outcomes, will collapse to a corresponding state with some probability and hence gain the average coherence. It is shown that the average coherence is not less than the coherence of its reduced density matrix. In particular, it is very surprising that the extra average coherence (and the maximal extra average coherence with all the possible measurements taken into account) is upper bounded by the classical correlation of the bipartite state instead of the quantum correlation. We also find the sufficient and necessary condition for the null maximal extra average coherence. Some examples demonstrate the relation and, moreover, show that quantum correlation is neither sufficient nor necessary for the nonzero extra average coherence within a given measurement. In addition, the similar conclusions are drawn for both the basis-dependent and the basis-free coherence measure.

  17. Is average daily travel time expenditure constant? In search of explanations for an increase in average travel time.

    NARCIS (Netherlands)

    van Wee, B.; Rietveld, P.; Meurs, H.

    2006-01-01

    Recent research suggests that the average time spent travelling by the Dutch population has increased over the past decades. However, different data sources show different levels of increase. This paper explores possible causes for this increase. They include a rise in incomes, which has probably

  18. Diversity of growth hormone gene and its relation with average daily ...

    African Journals Online (AJOL)

The research was conducted in the Padang Mangatas Breeding Centre, Limapuluh Kota district, West Sumatera Province and the Biotechnology Laboratory of the Faculty of Animal Husbandry, Andalas University. The research used 100 Simmental calves. DNA was isolated from blood samples using a DNA purification kit from ...

  19. Deciphering DNA replication dynamics in eukaryotic cell populations in relation with their averaged chromatin conformations

    DEFF Research Database (Denmark)

    Goldar, A.; Arneodo, A.; Audit, B.

    2016-01-01

    , and by taking into account the chromatin's fractal dimension, we derive an analytical expression for the rate of replication initiation. This model predicts with no free parameter the temporal profiles of initiation rate, replication fork density and fraction of replicated DNA, in quantitative agreement...

  20. A Dependence between Average Call Duration and Voice Transmission Quality: Measurement and applications

    NARCIS (Netherlands)

    Holub, J.; Beerends, J.G.; Smid, R.

    2004-01-01

    This contribution deals with the estimation of the relation between speech transmission quality and average call duration for a given network and portfolio of customers. It uses non-intrusive speech quality measurements on live speech calls. The basic idea behind this analysis is an expectation that

  1. Factors That Predict Marijuana Use and Grade Point Average among Undergraduate College Students

    Science.gov (United States)

    Coco, Marlena B.

    2017-01-01

    The purpose of this study was to analyze factors that predict marijuana use and grade point average among undergraduate college students using the Core Institute national database. The Core Alcohol and Drug Survey was used to collect data on students' attitudes, beliefs, and experiences related to substance use in college. The sample used in this…

  2. The predictive validity of grade point average scores in a partial lottery medical school admission system

    NARCIS (Netherlands)

    Cohen-Schotanus, Janke; Muijtjens, Arno M. M.; Reinders, Jan J.; Agsteribbe, Jessica; van Rossum, Herman J. M.; van der Vleuten, Cees P. M.

    2006-01-01

    PURPOSE To ascertain whether the grade point average (GPA) of school-leaving examinations is related to study success, career development and scientific performance. The problem of restriction of range was expected to be partially reduced due to the use of a national lottery system weighted in

  3. Transferability of hydrological models and ensemble averaging methods between contrasting climatic periods

    Science.gov (United States)

    Broderick, Ciaran; Matthews, Tom; Wilby, Robert L.; Bastola, Satish; Murphy, Conor

    2016-10-01

    Understanding hydrological model predictive capabilities under contrasting climate conditions enables more robust decision making. Using Differential Split Sample Testing (DSST), we analyze the performance of six hydrological models for 37 Irish catchments under climate conditions unlike those used for model training. Additionally, we consider four ensemble averaging techniques when examining interperiod transferability. DSST is conducted using 2/3 year noncontinuous blocks of (i) the wettest/driest years on record based on precipitation totals and (ii) years with a more/less pronounced seasonal precipitation regime. Model transferability between contrasting regimes was found to vary depending on the testing scenario, catchment, and evaluation criteria considered. As expected, the ensemble average outperformed most individual ensemble members. However, averaging techniques differed considerably in the number of times they surpassed the best individual model member. Bayesian Model Averaging (BMA) and the Granger-Ramanathan Averaging (GRA) method were found to outperform the simple arithmetic mean (SAM) and Akaike Information Criteria Averaging (AICA). Here GRA performed better than the best individual model in 51%-86% of cases (according to the Nash-Sutcliffe criterion). When assessing model predictive skill under climate change conditions we recommend (i) setting up DSST to select the best available analogues of expected annual mean and seasonal climate conditions; (ii) applying multiple performance criteria; (iii) testing transferability using a diverse set of catchments; and (iv) using a multimodel ensemble in conjunction with an appropriate averaging technique. Given the computational efficiency and performance of GRA relative to BMA, the former is recommended as the preferred ensemble averaging technique for climate assessment.
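The gap between the simple arithmetic mean (SAM) and Granger-Ramanathan averaging (GRA) reported above can be sketched on synthetic data. GRA is shown here in its common unconstrained least-squares form (some variants add an intercept or constrain the weights), and all the numbers are made up:

```python
import numpy as np

def gra_weights(member_sims, obs):
    """Granger-Ramanathan averaging: least-squares weights obtained by
    regressing the observations on the ensemble members' simulations."""
    w, *_ = np.linalg.lstsq(member_sims, obs, rcond=None)
    return w

rng = np.random.default_rng(1)
m1 = rng.normal(10.0, 2.0, 500)             # "model 1" simulated flows
m2 = rng.normal(10.0, 2.0, 500)             # "model 2" simulated flows
obs = 0.7 * m1 + 0.3 * m2                   # observations favor model 1
sims = np.column_stack([m1, m2])

w = gra_weights(sims, obs)
sam = sims.mean(axis=1)                     # simple arithmetic mean (SAM)
gra = sims @ w                              # GRA combination
rmse = lambda x: float(np.sqrt(np.mean((x - obs) ** 2)))
print(w, rmse(sam), rmse(gra))              # weights ~[0.7, 0.3]; GRA RMSE ~0
```

Because the weights are fitted to the calibration record, GRA can only outperform SAM out of sample when the relative skill of the members is stable between periods, which is exactly what the differential split sample test probes.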

  4. Thermal effects in high average power optical parametric amplifiers.

    Science.gov (United States)

    Rothhardt, Jan; Demmler, Stefan; Hädrich, Steffen; Peschel, Thomas; Limpert, Jens; Tünnermann, Andreas

    2013-03-01

    Optical parametric amplifiers (OPAs) have the reputation of being average power scalable due to the instantaneous nature of the parametric process (zero quantum defect). This Letter reveals serious challenges originating from thermal load in the nonlinear crystal caused by absorption. We investigate these thermal effects in high average power OPAs based on beta barium borate. Absorption of both pump and idler waves is identified to contribute significantly to heating of the nonlinear crystal. A temperature increase of up to 148 K with respect to the environment is observed and mechanical tensile stress up to 40 MPa is found, indicating a high risk of crystal fracture under such conditions. By restricting the idler to a wavelength range far from absorption bands and removing the crystal coating we reduce the peak temperature and the resulting temperature gradient significantly. Guidelines for further power scaling of OPAs and other nonlinear devices are given.

  5. A collisional-radiative average atom model for hot plasmas

    International Nuclear Information System (INIS)

    Rozsnyai, B.F.

    1996-01-01

A collisional-radiative 'average atom' (AA) model is presented for the calculation of opacities of hot plasmas not in the condition of local thermodynamic equilibrium (LTE). The electron impact and radiative rate constants are calculated using the dipole oscillator strengths of the average atom. A key element of the model is the photon escape probability, which at present is calculated for a semi-infinite slab. The Fermi statistics renders the rate equation for the AA level occupancies nonlinear, which requires iterations until the steady-state AA level occupancies are found. Detailed electronic configurations are built into the model after the self-consistent non-LTE AA state is found. The model shows a continuous transition from the non-LTE to the LTE state depending on the optical thickness of the plasma. 22 refs., 13 figs., 1 tab

  6. Partial Averaged Navier-Stokes approach for cavitating flow

    International Nuclear Information System (INIS)

    Zhang, L; Zhang, Y N

    2015-01-01

Partial Averaged Navier-Stokes (PANS) is a numerical approach developed for studying practical engineering problems (e.g., cavitating flow inside hydroturbines) with reasonable cost and accuracy. One advantage of PANS is that it is suitable for any filter width, providing a bridging method from traditional Reynolds Averaged Navier-Stokes (RANS) to direct numerical simulation through the choice of appropriate parameters. Compared with RANS, the PANS model inherits much of the physics of its parent RANS model but resolves more scales of motion in greater detail, making PANS superior to RANS. An important step in the PANS approach is to identify appropriate physical filter-width control parameters, e.g., the ratios of unresolved-to-total kinetic energy and dissipation. In the present paper, recent studies of cavitating flow based on the PANS approach are introduced, with a focus on the influence of the filter-width control parameters on the simulation results

  7. Results from the average power laser experiment photocathode injector test

    International Nuclear Information System (INIS)

    Dowell, D.H.; Bethel, S.Z.; Friddell, K.D.

    1995-01-01

Tests of the electron beam injector for the Boeing/Los Alamos Average Power Laser Experiment (APLE) have demonstrated first-time operation of a photocathode RF gun accelerator at 25% duty factor. This exceeds previous photocathode operation by three orders of magnitude. The success of these tests depended upon the development of reliable and efficient photocathode preparation and processing. This paper describes the fabrication details for photocathodes with quantum efficiencies up to 12% which were used during electron beam operation. Measurements of photocathode lifetime as it depends upon the presence of water vapor are also presented. Observations of photocathode quantum efficiency rejuvenation and extended lifetime in the RF cavities are described. The importance of these effects for photocathode lifetime during high average power operation is discussed. ((orig.))

  8. Sample size for estimating average productive traits of pigeon pea

    Directory of Open Access Journals (Sweden)

    Giovani Facco

    2016-04-01

    Full Text Available ABSTRACT: The objectives of this study were to determine the sample size, in terms of number of plants, needed to estimate the average values of productive traits of the pigeon pea and to determine whether the sample size needed varies between traits and between crop years. Separate uniformity trials were conducted in 2011/2012 and 2012/2013. In each trial, 360 plants were demarcated, and the fresh and dry masses of roots, stems, and leaves and of shoots and the total plant were evaluated during blossoming for 10 productive traits. Descriptive statistics were calculated, normality and randomness were checked, and the sample size was calculated. There was variability in the sample size between the productive traits and crop years of the pigeon pea culture. To estimate the averages of the productive traits with a 20% maximum estimation error and 95% confidence level, 70 plants are sufficient.
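The underlying calculation, plants needed so the sample mean falls within a maximum relative error at a given confidence level, can be sketched as follows. This is a simplified large-sample (z-based) version; the study itself would use Student's t, and the mean and standard deviation here are hypothetical values chosen so a trait with a coefficient of variation near 85% reproduces the reported figure of 70 plants:

```python
import math

def sample_size(mean, sd, rel_error=0.20, z=1.96):
    """Plants needed so the sample mean is within rel_error of the true
    mean at the confidence level implied by z (1.96 ~ 95%)."""
    cv = sd / mean                         # coefficient of variation
    return math.ceil((z * cv / rel_error) ** 2)

# Hypothetical trait with CV ~ 85%:
print(sample_size(mean=100.0, sd=85.0))  # -> 70
```

The quadratic dependence on CV is why the required sample size varies so strongly between traits and crop years.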

  9. Average diurnal variation of summer lightning over the Florida peninsula

    Science.gov (United States)

    Maier, L. M.; Krider, E. P.; Maier, M. W.

    1984-01-01

    Data derived from a large network of electric field mills are used to determine the average diurnal variation of lightning in a Florida seacoast environment. The variation at the NASA Kennedy Space Center and the Cape Canaveral Air Force Station area is compared with standard weather observations of thunder, and the variation of all discharges in this area is compared with the statistics of cloud-to-ground flashes over most of the South Florida peninsula and offshore waters. The results show average diurnal variations that are consistent with statistics of thunder start times and the times of maximum thunder frequency, but that the actual lightning tends to stop one to two hours before the recorded thunder. The variation is also consistent with previous determinations of the times of maximum rainfall and maximum rainfall rate.

  10. Microchannel heatsinks for high-average-power laser diode arrays

    Science.gov (United States)

    Benett, William J.; Freitas, Barry L.; Beach, Raymond J.; Ciarlo, Dino R.; Sperry, Verry; Comaskey, Brian J.; Emanuel, Mark A.; Solarz, Richard W.; Mundinger, David C.

    1992-06-01

Detailed performance results and fabrication techniques for an efficient and low thermal impedance laser diode array heatsink are presented. High duty factor or even CW operation of fully filled laser diode arrays is enabled at high average power. Low thermal impedance is achieved using a liquid coolant and laminar flow through microchannels. The microchannels are fabricated in silicon using a photolithographic pattern definition procedure followed by anisotropic chemical etching. A modular rack-and-stack architecture is adopted for the heatsink design, allowing arbitrarily large two-dimensional arrays to be fabricated and easily maintained. The excellent thermal control of the microchannel-cooled heatsinks is ideally suited to the pump array requirements of high average power crystalline lasers because of the stringent temperature demands that result from coupling the diode light to the several-nanometer-wide absorption features characteristic of lasing ions in crystals.

  11. The B-dot Earth Average Magnetic Field

    Science.gov (United States)

    Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon

    2013-01-01

The average Earth's magnetic field is usually computed with complex mathematical models based on a mean square integral, and the result depends on which Earth magnetic field model is selected. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and does not depend on the Earth magnetic field model; it does, however, depend on the satellite's magnetic torquers, which the known mathematical models do not take into consideration. The solution of this new technique can be implemented so easily that the flight software can be updated during flight, giving the control system current gains for the magnetic torquers. Finally, the technique is verified and validated using flight data from a satellite that has been in orbit for three years.
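For context, the b-dot controller whose damping the paper exploits commands a magnetic dipole opposing the measured rate of change of the body-frame magnetic field. A minimal sketch of the standard control law (the gain and field samples below are hypothetical, and this is the textbook law, not the paper's estimation technique):

```python
import numpy as np

def bdot_dipole(b_now, b_prev, dt, k=1.0e4):
    """Classic b-dot detumbling law: command a magnetic dipole m = -k * dB/dt,
    opposing the measured rate of change of the body-frame field, so the
    resulting torque m x B damps the satellite's angular rate."""
    b_dot = (np.asarray(b_now) - np.asarray(b_prev)) / dt
    return -k * b_dot

# Hypothetical consecutive magnetometer samples (T) one second apart:
print(bdot_dipole([2.1e-5, 0.0, 1.0e-5], [2.0e-5, 0.0, 1.2e-5], dt=1.0))
```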

  12. Pulsar average waveforms and hollow cone beam models

    Science.gov (United States)

    Backer, D. C.

    1975-01-01

    An analysis of pulsar average waveforms at radio frequencies from 40 MHz to 15 GHz is presented. The analysis is based on the hypothesis that the observer sees one cut of a hollow-cone beam pattern and that stationary properties of the emission vary over the cone. The distributions of apparent cone widths for different observed forms of the average pulse profiles (single, double/unresolved, double/resolved, triple and multiple) are in modest agreement with a model of a circular hollow-cone beam with random observer-spin axis orientation, a random cone axis-spin axis alignment, and a small range of physical hollow-cone parameters for all objects.

  13. The definition and computation of average neutron lifetimes

    International Nuclear Information System (INIS)

    Henry, A.F.

    1983-01-01

    A precise physical definition is offered for a class of average lifetimes for neutrons in an assembly of materials, either multiplying or not, or if the former, critical or not. A compact theoretical expression for the general member of this class is derived in terms of solutions to the transport equation. Three specific definitions are considered. Particular exact expressions for these are derived and reduced to simple algebraic formulas for one-group and two-group homogeneous bare-core models

  14. A simple consensus algorithm for distributed averaging in random ...

    Indian Academy of Sciences (India)

Distributed averaging in random geographical networks. It can be simply proved that for values of the uniform step size σ in the range (0, 1/k_max], with k_max being the maximum degree of the graph, the above system is asymptotically globally convergent to the initial average [17]:

∀i: lim_{k→∞} x_i(k) = α = (1/N) ∑_{i=1}^{N} x_i(0),    (3)

which is ...
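A minimal sketch of the iteration behind Eq. (3), assuming the standard linear consensus update x_i(k+1) = x_i(k) + σ ∑_j (x_j(k) − x_i(k)) over an undirected graph (the example graph and initial values are hypothetical):

```python
import numpy as np

def consensus_average(x0, neighbors, sigma, steps=500):
    """Each node repeatedly moves toward its neighbors' values.
    For 0 < sigma <= 1/k_max (k_max = maximum degree) all states
    converge to the average of the initial values x0."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + sigma * np.array(
            [sum(x[j] - x[i] for j in neighbors[i]) for i in range(len(x))]
        )
    return x

# 4-node ring (k_max = 2), so any sigma in (0, 1/2] is admissible.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(consensus_average([4.0, 0.0, 2.0, 2.0], ring, sigma=0.4))  # -> all ~2.0
```

Here the initial average is 2.0, and every node's state converges to it regardless of the starting spread.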

  15. Characterizations of Sobolev spaces via averages on balls

    Czech Academy of Sciences Publication Activity Database

    Dai, F.; Gogatishvili, Amiran; Yang, D.; Yuan, W.

    2015-01-01

Roč. 128, November (2015), s. 86-99 ISSN 0362-546X R&D Projects: GA ČR GA13-14743S Institutional support: RVO:67985840 Keywords: Sobolev space * average on ball * difference * Euclidean space * space of homogeneous type Subject RIV: BA - General Mathematics Impact factor: 1.125, year: 2015 http://www.sciencedirect.com/science/article/pii/S0362546X15002618

  16. Spinal cord imaging using averaged magnetization inversion recovery acquisitions.

    Science.gov (United States)

    Weigel, Matthias; Bieri, Oliver

    2018-04-01

To establish a novel approach for fast high-resolution spinal cord (SC) imaging using averaged magnetization inversion recovery acquisitions (AMIRA). The AMIRA concept is based on an inversion recovery (IR) prepared, segmented, and time-limited cine balanced steady-state free precession sequence. Typically, for the fastest SC imaging without any signal averaging, eight consecutive images in time with an in-plane resolution of 0.67 × 0.67 mm² and 6 mm to 8 mm slice thickness are acquired in 51 s. AMIRA does not require parallel acquisition techniques. AMIRA measures eight images of remarkable tissue contrast variation between spinal cord gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). Following the AMIRA concept, averaging the first IR contrast images not only improves the signal-to-noise ratio but also offers a surprising enhancement of the contrast-to-noise ratio between GM and WM, whereas averaging the last images considerably improves the contrast-to-noise ratio between WM and CSF. These observations are supported by quantitative data. The AMIRA concept provides 2D spinal cord imaging with multiple tissue contrasts and enhanced contrast-to-noise ratios with a typical 0.67 × 0.67 mm² in-plane resolution and a slice thickness between 4 mm and 8 mm, acquired in only 1 to 2 min per slice. Magn Reson Med 79:1870-1881, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
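The signal-to-noise benefit of averaging consecutive frames follows the usual √N law for independent noise. A toy illustration with synthetic uniform "images" (not real AMIRA data, which additionally exploits the changing IR contrast across the eight frames):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 10.0
# Eight noisy frames of a uniform region, echoing the eight AMIRA contrasts.
images = signal + rng.normal(0.0, 2.0, size=(8, 64, 64))

def snr(img):
    """Signal-to-noise ratio of a nominally uniform image patch."""
    return img.mean() / img.std()

# Averaging the eight frames improves SNR by roughly sqrt(8) ~ 2.8x.
print(snr(images[0]), snr(images.mean(axis=0)))
```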

  17. Average Case Analysis of Java 7's Dual Pivot Quicksort

    OpenAIRE

    Wild, Sebastian; Nebel, Markus E.

    2013-01-01

    Recently, a new Quicksort variant due to Yaroslavskiy was chosen as standard sorting method for Oracle's Java 7 runtime library. The decision for the change was based on empirical studies showing that on average, the new algorithm is faster than the formerly used classic Quicksort. Surprisingly, the improvement was achieved by using a dual pivot approach, an idea that was considered not promising by several theoretical studies in the past. In this paper, we identify the reason for this unexpe...
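The dual-pivot partition at issue can be sketched as follows. This is a Python transcription of Yaroslavskiy's scheme for illustration, not Oracle's Java 7 implementation: two pivots p ≤ q split the array into three parts (< p, between p and q, > q), and each part is sorted recursively.

```python
def dual_pivot_quicksort(a, lo=0, hi=None):
    """In-place dual-pivot quicksort (Yaroslavskiy-style partition)."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    if a[lo] > a[hi]:
        a[lo], a[hi] = a[hi], a[lo]
    p, q = a[lo], a[hi]          # the two pivots, p <= q
    lt, gt, i = lo + 1, hi - 1, lo + 1
    while i <= gt:
        if a[i] < p:             # element belongs in the left part
            a[i], a[lt] = a[lt], a[i]
            lt += 1
        elif a[i] > q:           # element belongs in the right part
            while a[gt] > q and i < gt:
                gt -= 1
            a[i], a[gt] = a[gt], a[i]
            gt -= 1
            if a[i] < p:
                a[i], a[lt] = a[lt], a[i]
                lt += 1
        i += 1
    lt -= 1
    gt += 1
    a[lo], a[lt] = a[lt], a[lo]  # move pivots into final positions
    a[hi], a[gt] = a[gt], a[hi]
    dual_pivot_quicksort(a, lo, lt - 1)
    dual_pivot_quicksort(a, lt + 1, gt - 1)
    dual_pivot_quicksort(a, gt + 1, hi)
    return a

print(dual_pivot_quicksort([5, 3, 8, 1, 9, 2, 7]))  # -> [1, 2, 3, 5, 7, 8, 9]
```

The three-way split is what the average-case analysis credits for the speedup: fewer comparisons and better cache behavior than classic single-pivot partitioning on average.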

  18. Effect of land area on average annual suburban water demand ...

    African Journals Online (AJOL)

    ... values in the range between 4.4 kℓ∙d−1·ha−1 and 8.7 kℓ∙d−1·ha−1. The average demand was 10.4 kℓ∙d−1·ha−1 for calculation based on the residential area. The results are useful when crude estimates of AADD are required for planning new land developments. Keywords: urban water demand, suburb area, residential ...

  19. Modeling methane emission via the infinite moving average process

    Czech Academy of Sciences Publication Activity Database

    Jordanova, D.; Dušek, Jiří; Stehlík, M.

    2013-01-01

    Roč. 122, - (2013), s. 40-49 ISSN 0169-7439 R&D Projects: GA MŠk(CZ) ED1.1.00/02.0073; GA ČR(CZ) GAP504/11/1151 Institutional support: RVO:67179843 Keywords : Environmental chemistry * Pareto tails * t-Hill estimator * Weak consistency * Moving average process * Methane emission model Subject RIV: EH - Ecology, Behaviour Impact factor: 2.381, year: 2013

  20. Averaging underwater noise levels for environmental assessment of shipping

    OpenAIRE

    Merchant, Nathan D.; Blondel, Philippe; Dakin, D. Tom; Dorocicz, John

    2012-01-01

Rising underwater noise levels from shipping have raised concerns regarding chronic impacts to marine fauna. However, there is a lack of consensus over how to average local shipping noise levels for environmental impact assessment. This paper addresses this issue using 110 days of continuous data recorded in the Strait of Georgia, Canada. Probability densities of ∼10⁷ 1-s samples in selected 1/3-octave bands were approximately stationary across one-month subsamples. Median and mode levels v...
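The choice of average matters because sound pressure levels are logarithmic: the energy mean (average in linear power, then convert back to dB) exceeds the arithmetic mean of the dB values whenever levels fluctuate, which is one reason median and mode statistics are also reported. A minimal sketch (the level values are hypothetical):

```python
import numpy as np

def mean_spl_db(levels_db):
    """Energy-average sound pressure levels: convert dB to linear power,
    take the arithmetic mean, and convert back to dB. By Jensen's
    inequality this is >= the arithmetic mean of the dB values."""
    return 10.0 * np.log10(np.mean(10.0 ** (np.asarray(levels_db) / 10.0)))

levels = [90.0, 100.0, 110.0]
print(np.mean(levels), mean_spl_db(levels))  # arithmetic 100.0 vs energy ~105.7
```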

  1. Light-cone averages in a Swiss-cheese universe

    International Nuclear Information System (INIS)

    Marra, Valerio; Kolb, Edward W.; Matarrese, Sabino

    2008-01-01

We analyze a toy Swiss-cheese cosmological model to study the averaging problem. In our Swiss-cheese model, the cheese is a spatially flat, matter-only, Friedmann-Robertson-Walker solution (i.e., the Einstein-de Sitter model), and the holes are constructed from a Lemaitre-Tolman-Bondi solution of Einstein's equations. We study the propagation of photons in the Swiss-cheese model, and find a phenomenological homogeneous model to describe observables. Following a fitting procedure based on light-cone averages, we find that the expansion scalar is unaffected by the inhomogeneities (i.e., the phenomenological homogeneous model is the cheese model). This is because of the spherical symmetry of the model; it is unclear whether the expansion scalar will be affected by nonspherical voids. However, the light-cone average of the density as a function of redshift is affected by inhomogeneities. The effect arises because, as the universe evolves, a photon spends more and more time in the (large) voids than in the (thin) high-density structures. The phenomenological homogeneous model describing the light-cone average of the density is similar to the ΛCDM concordance model. It is interesting that, although the sole source in the Swiss-cheese model is matter, the phenomenological homogeneous model behaves as if it has a dark-energy component. Finally, we study how the equation of state of the phenomenological homogeneous model depends on the size of the inhomogeneities, and find that the equation-of-state parameters w_0 and w_a follow a power-law dependence with a scaling exponent equal to unity. That is, the equation of state depends linearly on the distance the photon travels through voids. We conclude that, within our toy model, the holes must have a present size of about 250 Mpc to be able to mimic the concordance model

  2. Averaging approximation to singularly perturbed nonlinear stochastic wave equations

    Science.gov (United States)

    Lv, Yan; Roberts, A. J.

    2012-06-01

An averaging method is applied to derive an effective approximation to a singularly perturbed nonlinear stochastic damped wave equation. A small parameter ν > 0 characterizes the singular perturbation, and ν^α, 0 ⩽ α ⩽ 1/2, parametrizes the strength of the noise. Some scaling transformations and the martingale representation theorem yield the effective approximation, a stochastic nonlinear heat equation, for small ν in the sense of distribution.

  3. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    . Alternatively, for time stationary and homogeneous turbulence, analytical expressions, involving higher order correlation functions R(n)(r, t) = , can be derived for the conditional averages. These expressions have the form of series expansions, which have...... to be truncated for practical applications. The convergence properties of these series are not known, except in the limit of Gaussian statistics. By applying the analysis to numerically simulated ion acoustic turbulence, we demonstrate that by keeping two or three terms in these series an acceptable approximation...

  4. On the average uncertainty for systems with nonlinear coupling

    Science.gov (United States)

    Nelson, Kenric P.; Umarov, Sabir R.; Kon, Mark A.

    2017-02-01

    The increased uncertainty and complexity of nonlinear systems have motivated investigators to consider generalized approaches to defining an entropy function. New insights are achieved by defining the average uncertainty in the probability domain as a transformation of entropy functions. The Shannon entropy when transformed to the probability domain is the weighted geometric mean of the probabilities. For the exponential and Gaussian distributions, we show that the weighted geometric mean of the distribution is equal to the density of the distribution at the location plus the scale (i.e. at the width of the distribution). The average uncertainty is generalized via the weighted generalized mean, in which the moment is a function of the nonlinear source. Both the Rényi and Tsallis entropies transform to this definition of the generalized average uncertainty in the probability domain. For the generalized Pareto and Student's t-distributions, which are the maximum entropy distributions for these generalized entropies, the appropriate weighted generalized mean also equals the density of the distribution at the location plus scale. A coupled entropy function is proposed, which is equal to the normalized Tsallis entropy divided by one plus the coupling.
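The identity at the heart of the abstract, that the probability-weighted geometric mean of a distribution is the probability-domain form of the Shannon entropy, can be checked directly (the example distribution is hypothetical):

```python
import math

def shannon_entropy(p):
    """Shannon entropy H = -sum p_i ln p_i (natural log)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def weighted_geometric_mean(p):
    """prod_i p_i^{p_i}: the probabilities weighted by themselves."""
    return math.prod(pi ** pi for pi in p if pi > 0)

p = [0.5, 0.25, 0.25]
# The weighted geometric mean equals exp(-H): the "average uncertainty"
# expressed in the probability domain.
print(weighted_geometric_mean(p), math.exp(-shannon_entropy(p)))
```

The generalized (Rényi/Tsallis) cases replace this geometric mean with a weighted generalized mean whose moment reflects the nonlinear coupling.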

  5. High Average Power, High Energy Short Pulse Fiber Laser System

    Energy Technology Data Exchange (ETDEWEB)

    Messerly, M J

    2007-11-13

Recently, continuous-wave fiber laser systems with output powers in excess of 500 W with good beam quality have been demonstrated [1]. High-energy, ultrafast, chirped-pulse fiber laser systems have achieved record output energies of 1 mJ [2]. However, these high-energy systems have not been scaled beyond a few watts of average output power. Fiber laser systems are attractive for many applications because they offer the promise of efficient, compact, robust systems that are turnkey. Applications such as cutting, drilling, and materials processing, front-end systems for high-energy pulsed lasers (such as petawatts), and laser-based sources of high-spatial-coherence, high-flux x-rays all require high-energy short pulses, and two of these three applications also require high average power. The challenge in creating a high-energy chirped-pulse fiber laser system is to find a way to scale the output energy while avoiding nonlinear effects and maintaining good beam quality in the amplifier fiber. To this end, our 3-year LDRD program sought to demonstrate a high-energy, high-average-power fiber laser system. This work included exploring designs of large-mode-area optical fiber amplifiers for high-energy systems as well as understanding the issues associated with chirped-pulse amplification in optical fiber amplifier systems.

  6. Averaging kernels for DOAS total-column satellite retrievals

    Directory of Open Access Journals (Sweden)

    H. J. Eskes

    2003-01-01

Full Text Available The Differential Optical Absorption Spectroscopy (DOAS) method is used extensively to retrieve total column amounts of trace gases based on UV-visible measurements of satellite spectrometers, such as ERS-2 GOME. In practice the sensitivity of the instrument to the tracer density is strongly height dependent, especially in the troposphere. The resulting tracer profile dependence may introduce large systematic errors in the retrieved columns that are difficult to quantify without proper additional information, as provided by the averaging kernel (AK). In this paper we discuss the DOAS retrieval method in the context of the general retrieval theory developed by Rodgers. An expression is derived for the DOAS AK for optically thin absorbers. It is shown that comparisons with 3D chemistry-transport models and independent profile measurements, when based on averaging kernels, are no longer influenced by errors resulting from a priori profile assumptions. The availability of averaging kernel information as part of the total column retrieval product is important for the interpretation of the observations, and for applications like chemical data assimilation and detailed satellite validation studies.
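In practice the kernel removes the a priori dependence by being applied to the model profile before comparison: the smoothed model column is what the retrieval would have reported for that model atmosphere. A minimal sketch (the layering and kernel values below are hypothetical):

```python
import numpy as np

def smoothed_column(model_profile, averaging_kernel):
    """Apply a total-column averaging kernel A to a model partial-column
    profile x: sum_i A_i * x_i. Comparing this, rather than the raw model
    column, to the retrieved column removes a priori profile errors."""
    return float(np.dot(averaging_kernel, model_profile))

# Hypothetical 4-layer partial columns (arbitrary units) and a kernel with
# reduced sensitivity in the lowest (tropospheric) layers.
x_model = np.array([4.0, 3.0, 2.0, 1.0])
ak = np.array([0.5, 0.8, 1.0, 1.1])
print(smoothed_column(x_model, ak), x_model.sum())  # smoothed vs true column
```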

  7. Relative Age Dating of Young Star Clusters from YSOVAR

    Science.gov (United States)

    Johnson, Chelen H.; Gibbs, John C.; Linahan, Marcella; Rebull, Luisa; Bernstein, Alexandra E.; Child, Sierra; Eakins, Emma; Elert, Julia T.; Frey, Grace; Gong, Nathaniel; Hedlund, Audrey R.; Karos, Alexandra D.; Medeiros, Emma M.; Moradi, Madeline; Myers, Keenan; Packer, Benjamin M.; Reader, Livia K.; Sorenson, Benjamin; Stefo, James S.; Strid, Grace; Sumner, Joy; Sundeen, Kiera A.; Taylor, Meghan; Ujjainwala, Zakir L.

    2018-01-01

    The YSOVAR (Young Stellar Object VARiability; Rebull et al. 2014) Spitzer Space Telescope observing program monitored a dozen star forming cores in the mid-infrared (3.6 and 4.5 microns). Rebull et al. (2014) placed these cores in relative age order based on numbers of YSO candidates in SED class bins (I, flat, II, III), which is based on the slope of the SED between 2 and 25 microns. PanSTARRS data have recently been released (Chambers et al. 2016); deep optical data are now available over all the YSOVAR clusters. We worked with eight of the YSOVAR targets (IC1396-N, AFGL 490, NGC 1333, Mon R2, GGD 12-15, L 1688, IRAS 20050+2720, and Ceph C) and the YSO candidates identified therein as part of YSOVAR (through their infrared colors or X-ray detections plus a star-like SED; see Rebull et al. 2014). We created and examined optical and NIR color-magnitude diagrams and color-color diagrams of these YSO candidates to determine if the addition of optical data contradicted or reinforced the relative age dating of the clusters obtained with SED class ratios.This project is a collaborative effort of high school students and teachers from three states. We analyzed data individually and later collaborated online to compare results. This project is the result of many years of work with the NASA/IPAC Teacher Archive Research Program (NITARP).

  8. Evolution of statistical averages: An interdisciplinary proposal using the Chapman-Enskog method

    Science.gov (United States)

    Mariscal-Sanchez, A.; Sandoval-Villalbazo, A.

    2017-08-01

    This work examines the idea of applying the Chapman-Enskog (CE) method for approximating the solution of the Boltzmann equation beyond the realm of physics, using an information theory approach. Equations describing the evolution of averages and their fluctuations in a generalized phase space are established up to first-order in the Knudsen parameter which is defined as the ratio of the time between interactions (mean free time) and a characteristic macroscopic time. Although the general equations here obtained may be applied in a wide range of disciplines, in this paper, only a particular case related to the evolution of averages in speculative markets is examined.

  9. High average power parametric frequency conversion-new concepts and new pump sources

    Energy Technology Data Exchange (ETDEWEB)

    Velsko, S.P.; Webb, M.S.

    1994-03-01

    A number of applications, including long range remote sensing and antisensor technology, require high average power tunable radiation in several distinct spectral regions. Of the many issues which determine the deployability of optical parametric oscillators (OPOS) and related systems, efficiency and simplicity are among the most important. It is only recently that the advent of compact diode laser pumped solid state lasers has produced pump sources for parametric oscillators which can make compact, efficient, high average power tunable sources possible. In this paper we outline several different issues in parametric oscillator and pump laser development which are currently under study at Lawrence Livermore National Laboratory.

  10. Estimation of Daily Average Downward Shortwave Radiation over Antarctica

    Directory of Open Access Journals (Sweden)

    Yingji Zhou

    2018-03-01

Full Text Available Surface shortwave (SW) irradiation is the primary driving force of energy exchange at the atmosphere-land interface. The global climate is profoundly influenced by irradiation changes due to the special climatic conditions in Antarctica. Remote-sensing retrieval can offer only instantaneous values over an area, whilst daily cycles and average values are necessary for further studies and applications, including climate change, ecology, and land surface processes. Considering the large values of, and small diurnal changes in, solar zenith angle and cloud coverage, we develop two methods for the temporal extension of remotely sensed downward SW irradiance over Antarctica. The first is an improved sinusoidal method, and the second is an interpolation method based on cloud fraction change. The instantaneous irradiance data and cloud products are used in both methods to extend the diurnal cycle and obtain the daily average value. Data from the South Pole and Georg von Neumayer stations are used to validate the estimates. The coefficient of determination (R²) between the estimated daily averages and the measured values for the first method is 0.93, and the root mean square error (RMSE) is 32.21 W/m² (8.52%). For the traditional sinusoidal method, the R² and RMSE are 0.68 and 70.32 W/m² (18.59%), respectively. The R² and RMSE of the second method are 0.96 and 25.27 W/m² (6.98%), respectively. These values are better than those of the traditional linear interpolation (0.79 and 57.40 W/m² (15.87%)).
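The traditional sinusoidal method that the first approach improves on can be sketched as follows: fit a half-sine diurnal cycle through one instantaneous observation and integrate it over the day. The times and irradiance below are hypothetical, and over Antarctica the half-sine assumption breaks down (daylight can last 24 h), which is what motivates the paper's modifications:

```python
import math

def daily_mean_sw(sw_inst, t_obs, t_rise, t_set):
    """Sinusoidal extension: assume SW(t) = SWmax * sin(pi*(t - t_rise)/D)
    between sunrise and sunset (D = daylight hours), zero at night. One
    instantaneous observation fixes SWmax and hence the 24-h mean."""
    day = t_set - t_rise
    sw_max = sw_inst / math.sin(math.pi * (t_obs - t_rise) / day)
    # Integral of the half-sine over the day is SWmax * 2D/pi; divide by 24 h.
    return sw_max * 2.0 * day / (math.pi * 24.0)

# Hypothetical overpass at solar noon: 600 W/m^2 with 12 h of daylight.
print(daily_mean_sw(600.0, t_obs=12.0, t_rise=6.0, t_set=18.0))  # -> ~191 W/m^2
```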

  11. On Averaging Timescales for the Surface Energy Budget Closure Problem

    Science.gov (United States)

    Grachev, A. A.; Fairall, C. W.; Persson, O. P. G.; Uttal, T.; Blomquist, B.; McCaffrey, K.

    2017-12-01

An accurate determination of the surface energy budget (SEB) and all SEB components at the air-surface interface is of obvious relevance for the numerical modelling of the coupled atmosphere-land/ocean/snow system over different spatial and temporal scales, including climate modelling, weather forecasting, environmental impact studies, and many other applications. This study analyzes and discusses comprehensive measurements of the SEB and the surface energy fluxes (turbulent, radiative, and ground heat) made over different underlying surfaces, based on data collected during several field campaigns: hourly-averaged, multiyear data sets collected at two terrestrial long-term research observatories located near the coast of the Arctic Ocean at Eureka (Canadian Archipelago) and Tiksi (East Siberia), and half-hourly averaged fluxes collected during a year-long field campaign (Wind Forecast Improvement Project 2, WFIP 2) at the Columbia River Gorge (Oregon) in areas of complex terrain. Our direct measurements of energy balance show that the sum of the turbulent sensible and latent heat fluxes systematically underestimates the available energy at half-hourly and hourly time scales by around 20-30% at these sites. This imbalance of the surface energy budget is comparable to other terrestrial sites. Surface energy balance closure is a formulation of the conservation of energy principle (the first law of thermodynamics). The lack of energy balance closure at hourly time scales is a fundamental and pervasive problem in micrometeorology and may be caused by inaccurate estimates of the energy storage terms in soils, air, and biomass in the layer below the measurement height and above the heat flux plates. However, the residual energy imbalance is significantly reduced at daily and monthly timescales. Increasing the averaging time to daily scales substantially reduces the storage terms because energy locally entering the soil, air column, and vegetation in the morning is

  12. Exploring JLA supernova data with improved flux-averaging technique

    International Nuclear Information System (INIS)

    Wang, Shuang; Wen, Sixiang; Li, Miao

    2017-01-01

In this work, we explore the cosmological consequences of the ''Joint Light-curve Analysis'' (JLA) supernova (SN) data by using an improved flux-averaging (FA) technique, in which only the type Ia supernovae (SNe Ia) at high redshift are flux-averaged. Adopting the criterion of figure of merit (FoM) and considering six dark energy (DE) parameterizations, we search for the best FA recipe that gives the tightest DE constraints in the (z_cut, Δz) plane, where z_cut and Δz are the redshift cut-off and redshift interval of FA, respectively. Then, based on the best FA recipe obtained, we discuss the impacts of varying z_cut and varying Δz, revisit the evolution of the SN color luminosity parameter β, and study the effects of adopting different FA recipes on parameter estimation. We find that: (1) The best FA recipe is (z_cut = 0.6, Δz = 0.06), which is insensitive to a specific DE parameterization. (2) Flux-averaging JLA samples at z_cut ≥ 0.4 will yield tighter DE constraints than the case without using FA. (3) Using FA can significantly reduce the redshift-evolution of β. (4) The best FA recipe favors a larger fractional matter density Ω_m. In summary, we present an alternative method of dealing with JLA data, which can reduce the systematic uncertainties of SNe Ia and give tighter DE constraints at the same time. Our method will be useful in the use of SNe Ia data for precision cosmology.
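The binning idea, flux-averaging only the SNe above z_cut in bins of width Δz, can be sketched as follows. This is a simplified version working directly on distance moduli (the full recipe also rescales fluxes to a common redshift within each bin before averaging), and the sample values are hypothetical:

```python
import numpy as np

def flux_average(z, mu, z_cut=0.6, dz=0.06):
    """Flux-average only the high-redshift SNe: convert distance moduli to
    relative fluxes, average within each redshift bin above z_cut, and
    convert the bin means back to moduli. Low-z SNe are kept as-is."""
    keep = z < z_cut
    z_out, mu_out = list(z[keep]), list(mu[keep])
    for lo in np.arange(z_cut, z.max() + dz, dz):
        sel = (z >= lo) & (z < lo + dz)
        if sel.any():
            flux = 10.0 ** (-0.4 * mu[sel])          # relative flux
            z_out.append(float(z[sel].mean()))
            mu_out.append(float(-2.5 * np.log10(flux.mean())))
    return np.array(z_out), np.array(mu_out)

# Hypothetical sample: one low-z SN kept, two high-z SNe merged into one bin.
zo, mo = flux_average(np.array([0.10, 0.62, 0.63]), np.array([38.0, 42.0, 42.0]))
print(zo, mo)
```

Averaging in flux rather than magnitude is what suppresses weak-lensing and similar scatter, at the cost of fewer effective data points, hence the trade-off explored in the (z_cut, Δz) plane.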

  13. Average waiting time profiles of uniform DQDB model

    Energy Technology Data Exchange (ETDEWEB)

    Rao, N.S.V. [Oak Ridge National Lab., TN (United States); Maly, K.; Olariu, S.; Dharanikota, S.; Zhang, L.; Game, D. [Old Dominion Univ., Norfolk, VA (United States). Dept. of Computer Science

    1993-09-07

The Distributed Queue Dual Bus (DQDB) system consists of a linear arrangement of N nodes that communicate with each other using two contra-flowing buses; the nodes use an extremely simple protocol to send messages on these buses. This simple, but elegant, system has been found to be very challenging to analyze. We consider a simple and uniform abstraction of this model to highlight the fairness issues in terms of average waiting time. We introduce a new approximation method to analyze the performance of the DQDB system in terms of the average waiting time of a node expressed as a function of its position. Our approach abstracts the intimate relationship between the load of the system and its fairness characteristics, and explains all basic behavior profiles of DQDB observed in previous simulations. For the uniform DQDB with equal distance between adjacent nodes, we show that the system operates under three basic behavior profiles and a finite number of their combinations that depend on the load of the network. Consequently, the system is not fair at any load in terms of the average waiting times. In the vicinity of a critical load of 1 − 4/N, the uniform network runs into a state akin to chaos, where its behavior fluctuates from one extreme to the other with a load variation of 2/N. Our analysis is supported by simulation results. We also show that the main theme of the analysis carries over to the general (non-uniform) DQDB; by suitably choosing the inter-node distances, the DQDB can be made fair around some loads, but such a system will become unfair as the load changes.

  14. Analysis of nonlinear systems using ARMA [autoregressive moving average] models

    International Nuclear Information System (INIS)

    Hunter, N.F. Jr.

    1990-01-01

    While many vibration systems exhibit primarily linear behavior, a significant percentage of the systems encountered in vibration and model testing are mildly to severely nonlinear. Analysis methods for such nonlinear systems are not yet well developed and the response of such systems is not accurately predicted by linear models. Nonlinear ARMA (autoregressive moving average) models are one method for the analysis and response prediction of nonlinear vibratory systems. In this paper we review the background of linear and nonlinear ARMA models, and illustrate the application of these models to nonlinear vibration systems. We conclude by summarizing the advantages and disadvantages of ARMA models and emphasizing prospects for future development. 14 refs., 11 figs
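As a minimal, hedged illustration of the autoregressive half of an ARMA model, the sketch below fits an AR(2) model to response data by least squares; the function name and test signal are invented for this example, and real vibration data would also include noise and possibly a moving-average part:

```python
def fit_ar2(x):
    """Least-squares fit of x[t] = a1*x[t-1] + a2*x[t-2] via the 2x2
    normal equations (a minimal AR building block of an ARMA model)."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for t in range(2, len(x)):
        y, x1, x2 = x[t], x[t - 1], x[t - 2]
        s11 += x1 * x1; s12 += x1 * x2; s22 += x2 * x2
        b1 += x1 * y;  b2 += x2 * y
    det = s11 * s22 - s12 * s12
    a1 = (b1 * s22 - b2 * s12) / det
    a2 = (s11 * b2 - s12 * b1) / det
    return a1, a2

# Noise-free data from a known AR(2) system is recovered exactly
x = [1.0, 0.5]
for _ in range(30):
    x.append(0.6 * x[-1] - 0.3 * x[-2])
print(fit_ar2(x))  # ~ (0.6, -0.3)
```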

  15. Application of autoregressive moving average model in reactor noise analysis

    International Nuclear Information System (INIS)

    Tran Dinh Tri

    1993-01-01

    The application of an autoregressive (AR) model to estimating noise measurements has achieved many successes in reactor noise analysis in the last ten years. The physical processes that take place in the nuclear reactor, however, are described by an autoregressive moving average (ARMA) model rather than by an AR model. Consequently more correct results could be obtained by applying the ARMA model instead of the AR model to reactor noise analysis. In this paper the system of the generalised Yule-Walker equations is derived from the equation of an ARMA model, then a method for its solution is given. Numerical results show the applications of the method proposed. (author)
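The idea behind the generalized Yule-Walker equations can be seen in the simplest case: for an ARMA(1,1) process the autocovariances satisfy γ(k) = φ·γ(k−1) for all lags k > q = 1, so the AR coefficient follows from a ratio of autocovariances at lags beyond the MA order. A sketch using the standard theoretical autocovariances (not the author's numerical method):

```python
def arma11_acvf(phi, theta, sigma2=1.0, max_lag=3):
    """Theoretical autocovariances of X_t = phi*X_{t-1} + e_t + theta*e_{t-1}."""
    g0 = sigma2 * (1 + theta**2 + 2 * phi * theta) / (1 - phi**2)
    g1 = sigma2 * (1 + phi * theta) * (phi + theta) / (1 - phi**2)
    gam = [g0, g1]
    for _ in range(2, max_lag + 1):
        gam.append(phi * gam[-1])  # Yule-Walker recursion holds for lags > q
    return gam

gam = arma11_acvf(phi=0.7, theta=0.4)
phi_hat = gam[2] / gam[1]  # generalized Yule-Walker estimate at lags beyond q
print(phi_hat)  # recovers phi = 0.7 (up to rounding)
```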

  16. Relaxing monotonicity in the identification of local average treatment effects

    DEFF Research Database (Denmark)

    Huber, Martin; Mellace, Giovanni

In heterogeneous treatment effect models with endogeneity, the identification of the local average treatment effect (LATE) typically relies on an instrument that satisfies two conditions: (i) joint independence of the potential post-instrument variables and the instrument and (ii) monotonicity of the treatment in the instrument, see Imbens and Angrist (1994). We show that identification is still feasible when replacing monotonicity by a strictly weaker local monotonicity condition. We demonstrate that the latter allows identifying the LATEs on the (i) compliers (whose treatment reacts to the instrument...

  17. Effect of random edge failure on the average path length

    Energy Technology Data Exchange (ETDEWEB)

    Guo Dongchao; Liang Mangui; Li Dandan; Jiang Zhongyuan, E-mail: mgliang58@gmail.com, E-mail: 08112070@bjtu.edu.cn [Institute of Information Science, Beijing Jiaotong University, 100044, Beijing (China)

    2011-10-14

We study the effect of random removal of edges on the average path length (APL) in a large class of uncorrelated random networks in which vertices are characterized by hidden variables controlling the attachment of edges between pairs of vertices. A formula for approximating the APL of networks suffering random edge removal is derived first. Then, the formula is confirmed by simulations for classical ER (Erdős and Rényi) random graphs, BA (Barabási and Albert) networks, networks with exponential degree distributions, as well as random networks with asymptotic power-law degree distributions with exponent α > 2. (paper)
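A toy illustration of the quantity being approximated: the APL of a small graph can be computed exactly by breadth-first search, and removing even a single edge changes it noticeably. The graph and helper below are our own minimal example, not the paper's hidden-variable ensemble:

```python
from collections import deque

def apl(adj):
    """Average shortest-path length over all ordered pairs, via BFS."""
    n = len(adj)
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

# 6-node ring; removing one edge stretches the average path length
ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(apl(ring))                      # 1.8
ring[0].discard(5); ring[5].discard(0)
print(apl(ring))                      # ~2.33
```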

  18. Generalized Heteroskedasticity ACF for Moving Average Models in Explicit Forms

    OpenAIRE

    Samir Khaled Safi

    2014-01-01

The autocorrelation function (ACF) measures the correlation between observations at different distances apart. We derive explicit equations for the generalized heteroskedasticity ACF for a moving average of order q, MA(q). We consider two cases: firstly, when the disturbance terms follow the general covariance matrix structure Cov(wi, wj) = Σ with σij ≠ 0 ∀ i ≠ j; secondly, when the diagonal elements of Σ are not all identical but σij = 0 ∀ i ≠ j, i.e. Σ = diag(σ11, σ22, …
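For concreteness, the second case (uncorrelated but heteroskedastic disturbances) admits a short closed form at lag 1 for MA(1): the covariance between X_t and X_{t+1} comes only from the shared disturbance w_t. A sketch with invented variances and parameter values, not the paper's general MA(q) result:

```python
import math

def ma1_hetero_acf(theta, var_w, t):
    """Lag-1 autocorrelation of X_t = w_t + theta*w_{t-1} when the
    disturbances are uncorrelated but heteroskedastic (Var(w_t) = var_w[t])."""
    v_t  = var_w[t]     + theta**2 * var_w[t - 1]   # Var(X_t)
    v_t1 = var_w[t + 1] + theta**2 * var_w[t]       # Var(X_{t+1})
    cov  = theta * var_w[t]                         # from the shared w_t term
    return cov / math.sqrt(v_t * v_t1)

var_w = [1.0, 2.0, 1.5, 3.0]
print(ma1_hetero_acf(0.5, var_w, t=2))  # ≈ 0.2887
```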

  19. Increasing PS-SDOCT SNR using correlated coherent averaging

    Science.gov (United States)

    Petrie, Tracy C.; Ramamoorthy, Sripriya; Jacques, Steven L.; Nuttall, Alfred L.

    2013-03-01

Using data from our previously described otoscope1 that uses 1310 nm phase-sensitive spectral domain optical coherence tomography (PS-SDOCT), we demonstrate a software technique for improving the signal-to-noise ratio (SNR). This method is a software post-processing algorithm applicable to generic PS-SDOCT data describing phase versus time at a specific depth position. By sub-sampling the time trace and shifting the phase of the subsamples to maximize their correlation, the subsamples can be coherently averaged, which increases the SNR.
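The alignment step can be sketched with complex-valued subsamples: rotating each subsample so its phase matches a reference before averaging preserves the amplitude that a naive (incoherent) mean would wash out. This is a schematic of the idea, not the authors' algorithm:

```python
import cmath

def coherent_average(subsamples):
    """Rotate each complex subsample to align its phase with the first,
    then average; the alignment step is what preserves amplitude."""
    ref_phase = cmath.phase(subsamples[0])
    aligned = [z * cmath.exp(-1j * (cmath.phase(z) - ref_phase))
               for z in subsamples]
    return sum(aligned) / len(aligned)

# Subsamples with identical amplitude but drifting phase
subs = [cmath.exp(1j * p) for p in (0.0, 0.4, 0.9, 1.5)]
naive = abs(sum(subs) / len(subs))      # incoherent mean loses amplitude
coherent = abs(coherent_average(subs))  # the coherent mean keeps |.| ~ 1.0
print(naive, coherent)
```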

  20. Stochastic Optimal Prediction with Application to Averaged Euler Equations

    Energy Technology Data Exchange (ETDEWEB)

    Bell, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chorin, Alexandre J. [Univ. of California, Berkeley, CA (United States); Crutchfield, William [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-04-24

    Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the solution of Averaged Euler equations in two space dimensions.

  1. Image Denoising Using Interquartile Range Filter with Local Averaging

    OpenAIRE

    Jassim, Firas Ajil

    2013-01-01

Image denoising is one of the fundamental problems in image processing. In this paper, a novel approach to suppress noise in an image is conducted by applying the interquartile range (IQR), which is one of the statistical methods used to detect outliers in a dataset. A window of size k×k was implemented to support the IQR filter. Each pixel outside the IQR range of the k×k window is treated as a noisy pixel. The estimation of the noisy pixels was obtained by local averaging. The essential...
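A minimal sketch of the described scheme, with our own function name, quartile convention, and a tiny test image: pixels outside the 1.5×IQR fences of their k×k window are treated as noisy and replaced by the local average of the in-range neighbours:

```python
def iqr_denoise(img, k=3):
    """Replace pixels lying outside the IQR fences of their k-by-k window
    with the local average of the in-range neighbours (illustrative sketch)."""
    h, w, r = len(img), len(img[0]), k // 2
    out = [row[:] for row in img]
    for i in range(h):
        for j in range(w):
            win = [img[a][b]
                   for a in range(max(0, i - r), min(h, i + r + 1))
                   for b in range(max(0, j - r), min(w, j + r + 1))]
            s = sorted(win)
            q1, q3 = s[len(s) // 4], s[(3 * len(s)) // 4]
            lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
            if not lo <= img[i][j] <= hi:
                ok = [v for v in win if lo <= v <= hi]
                out[i][j] = sum(ok) / len(ok)
    return out

img = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]  # one impulsive outlier
print(iqr_denoise(img)[1][1])  # 10.0
```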

  2. Updated precision measurement of the average lifetime of B hadrons

    CERN Document Server

    Abreu, P; Adye, T; Agasi, E; Ajinenko, I; Aleksan, Roy; Alekseev, G D; Alemany, R; Allport, P P; Almehed, S; Amaldi, Ugo; Amato, S; Andreazza, A; Andrieux, M L; Antilogus, P; Apel, W D; Arnoud, Y; Åsman, B; Augustin, J E; Augustinus, A; Baillon, Paul; Bambade, P; Barate, R; Barbi, M S; Barbiellini, Guido; Bardin, Dimitri Yuri; Baroncelli, A; Bärring, O; Barrio, J A; Bartl, Walter; Bates, M J; Battaglia, Marco; Baubillier, M; Baudot, J; Becks, K H; Begalli, M; Beillière, P; Belokopytov, Yu A; Benvenuti, Alberto C; Berggren, M; Bertrand, D; Bianchi, F; Bigi, M; Bilenky, S M; Billoir, P; Bloch, D; Blume, M; Blyth, S; Bolognese, T; Bonesini, M; Bonivento, W; Booth, P S L; Borisov, G; Bosio, C; Bosworth, S; Botner, O; Boudinov, E; Bouquet, B; Bourdarios, C; Bowcock, T J V; Bozzo, M; Branchini, P; Brand, K D; Brenke, T; Brenner, R A; Bricman, C; Brillault, L; Brown, R C A; Brückman, P; Brunet, J M; Bugge, L; Buran, T; Burgsmüller, T; Buschmann, P; Buys, A; Cabrera, S; Caccia, M; Calvi, M; Camacho-Rozas, A J; Camporesi, T; Canale, V; Canepa, M; Cankocak, K; Cao, F; Carena, F; Carroll, L; Caso, Carlo; Castillo-Gimenez, M V; Cattai, A; Cavallo, F R; Cerrito, L; Chabaud, V; Charpentier, P; Chaussard, L; Chauveau, J; Checchia, P; Chelkov, G A; Chen, M; Chierici, R; Chliapnikov, P V; Chochula, P; Chorowicz, V; Chudoba, J; Cindro, V; Collins, P; Contreras, J L; Contri, R; Cortina, E; Cosme, G; Cossutti, F; Crawley, H B; Crennell, D J; Crosetti, G; Cuevas-Maestro, J; Czellar, S; Dahl-Jensen, Erik; Dahm, J; D'Almagne, B; Dam, M; Damgaard, G; Dauncey, P D; Davenport, Martyn; Da Silva, W; Defoix, C; Deghorain, A; Della Ricca, G; Delpierre, P A; Demaria, N; De Angelis, A; de Boer, Wim; De Brabandere, S; De Clercq, C; La Vaissière, C de; De Lotto, B; De Min, A; De Paula, L S; De Saint-Jean, C; Dijkstra, H; Di Ciaccio, Lucia; Djama, F; Dolbeau, J; Dönszelmann, M; Doroba, K; Dracos, M; Drees, J; Drees, K A; Dris, M; Dufour, Y; Edsall, D M; Ehret, R; Eigen, G; Ekelöf, T J C; 
Ekspong, Gösta; Elsing, M; Engel, J P; Ershaidat, N; Erzen, B; Espirito-Santo, M C; Falk, E; Fassouliotis, D; Feindt, Michael; Fenyuk, A; Ferrer, A; Filippas-Tassos, A; Firestone, A; Fischer, P A; Föth, H; Fokitis, E; Fontanelli, F; Formenti, F; Franek, B J; Frenkiel, P; Fries, D E C; Frodesen, A G; Frühwirth, R; Fulda-Quenzer, F; Fuster, J A; Galloni, A; Gamba, D; Gandelman, M; García, C; García, J; Gaspar, C; Gasparini, U; Gavillet, P; Gazis, E N; Gelé, D; Gerber, J P; Gibbs, M; Gokieli, R; Golob, B; Gopal, Gian P; Gorn, L; Górski, M; Guz, Yu; Gracco, Valerio; Graziani, E; Grosdidier, G; Grzelak, K; Gumenyuk, S A; Gunnarsson, P; Günther, M; Guy, J; Hahn, F; Hahn, S; Hajduk, Z; Hallgren, A; Hamacher, K; Hao, W; Harris, F J; Hedberg, V; Henriques, R P; Hernández, J J; Herquet, P; Herr, H; Hessing, T L; Higón, E; Hilke, Hans Jürgen; Hill, T S; Holmgren, S O; Holt, P J; Holthuizen, D J; Hoorelbeke, S; Houlden, M A; Hrubec, Josef; Huet, K; Hultqvist, K; Jackson, J N; Jacobsson, R; Jalocha, P; Janik, R; Jarlskog, C; Jarlskog, G; Jarry, P; Jean-Marie, B; Johansson, E K; Jönsson, L B; Jönsson, P E; Joram, Christian; Juillot, P; Kaiser, M; Kapusta, F; Karafasoulis, K; Karlsson, M; Karvelas, E; Katsanevas, S; Katsoufis, E C; Keränen, R; Khokhlov, Yu A; Khomenko, B A; Khovanskii, N N; King, B J; Kjaer, N J; Klein, H; Klovning, A; Kluit, P M; Köne, B; Kokkinias, P; Koratzinos, M; Korcyl, K; Kourkoumelis, C; Kuznetsov, O; Kramer, P H; Krammer, Manfred; Kreuter, C; Kronkvist, I J; Krumshtein, Z; Krupinski, W; Kubinec, P; Kucewicz, W; Kurvinen, K L; Lacasta, C; Laktineh, I; Lamblot, S; Lamsa, J; Lanceri, L; Lane, D W; Langefeld, P; Last, I; Laugier, J P; Lauhakangas, R; Leder, Gerhard; Ledroit, F; Lefébure, V; Legan, C K; Leitner, R; Lemoigne, Y; Lemonne, J; Lenzen, Georg; Lepeltier, V; Lesiak, T; Liko, D; Lindner, R; Lipniacka, A; Lippi, I; Lörstad, B; Loken, J G; López, J M; Loukas, D; Lutz, P; Lyons, L; MacNaughton, J N; Maehlum, G; Maio, A; Malychev, V; Mandl, F; Marco, J; 
Marco, R P; Maréchal, B; Margoni, M; Marin, J C; Mariotti, C; Markou, A; Maron, T; Martínez-Rivero, C; Martínez-Vidal, F; Martí i García, S; Masik, J; Matorras, F; Matteuzzi, C; Matthiae, Giorgio; Mazzucato, M; McCubbin, M L; McKay, R; McNulty, R; Medbo, J; Merk, M; Meroni, C; Meyer, S; Meyer, W T; Michelotto, M; Migliore, E; Mirabito, L; Mitaroff, Winfried A; Mjörnmark, U; Moa, T; Møller, R; Mönig, K; Monge, M R; Morettini, P; Müller, H; Mundim, L M; Murray, W J; Muryn, B; Myatt, Gerald; Naraghi, F; Navarria, Francesco Luigi; Navas, S; Nawrocki, K; Negri, P; Neumann, W; Nicolaidou, R; Nielsen, B S; Nieuwenhuizen, M; Nikolaenko, V; Niss, P; Nomerotski, A; Normand, Ainsley; Novák, M; Oberschulte-Beckmann, W; Obraztsov, V F; Olshevskii, A G; Onofre, A; Orava, Risto; Österberg, K; Ouraou, A; Paganini, P; Paganoni, M; Pagès, P; Palka, H; Papadopoulou, T D; Papageorgiou, K; Pape, L; Parkes, C; Parodi, F; Passeri, A; Pegoraro, M; Peralta, L; Pernegger, H; Pernicka, Manfred; Perrotta, A; Petridou, C; Petrolini, A; Petrovykh, M; Phillips, H T; Piana, G; Pierre, F; Pimenta, M; Pindo, M; Plaszczynski, S; Podobrin, O; Pol, M E; Polok, G; Poropat, P; Pozdnyakov, V; Prest, M; Privitera, P; Pukhaeva, N; Pullia, Antonio; Radojicic, D; Ragazzi, S; Rahmani, H; Ratoff, P N; Read, A L; Reale, M; Rebecchi, P; Redaelli, N G; Regler, Meinhard; Reid, D; Renton, P B; Resvanis, L K; Richard, F; Richardson, J; Rídky, J; Rinaudo, G; Ripp, I; Romero, A; Roncagliolo, I; Ronchese, P; Ronjin, V M; Roos, L; Rosenberg, E I; Rosso, E; Roudeau, Patrick; Rovelli, T; Rückstuhl, W; Ruhlmann-Kleider, V; Ruiz, A; Rybicki, K; Saarikko, H; Sacquin, Yu; Sadovskii, A; Sajot, G; Salt, J; Sánchez, J; Sannino, M; Schimmelpfennig, M; Schneider, H; Schwickerath, U; Schyns, M A E; Sciolla, G; Scuri, F; Seager, P; Sedykh, Yu; Segar, A M; Seitz, A; Sekulin, R L; Shellard, R C; Siccama, I; Siegrist, P; Simonetti, S; Simonetto, F; Sissakian, A N; Sitár, B; Skaali, T B; Smadja, G; Smirnov, N; Smirnova, O G; Smith, G R; 
Solovyanov, O; Sosnowski, R; Souza-Santos, D; Spassoff, Tz; Spiriti, E; Sponholz, P; Squarcia, S; Stanescu, C; Stapnes, Steinar; Stavitski, I; Stichelbaut, F; Stocchi, A; Strauss, J; Strub, R; Stugu, B; Szczekowski, M; Szeptycka, M; Tabarelli de Fatis, T; Tavernet, J P; Chikilev, O G; Tilquin, A; Timmermans, J; Tkatchev, L G; Todorov, T; Toet, D Z; Tomaradze, A G; Tomé, B; Tonazzo, A; Tortora, L; Tranströmer, G; Treille, D; Trischuk, W; Tristram, G; Trombini, A; Troncon, C; Tsirou, A L; Turluer, M L; Tyapkin, I A; Tyndel, M; Tzamarias, S; Überschär, B; Ullaland, O; Uvarov, V; Valenti, G; Vallazza, E; Van der Velde, C; van Apeldoorn, G W; van Dam, P; Van Doninck, W K; Van Eldik, J; Vassilopoulos, N; Vegni, G; Ventura, L; Venus, W A; Verbeure, F; Verlato, M; Vertogradov, L S; Vilanova, D; Vincent, P; Vitale, L; Vlasov, E; Vodopyanov, A S; Vrba, V; Wahlen, H; Walck, C; Weierstall, M; Weilhammer, Peter; Weiser, C; Wetherell, Alan M; Wicke, D; Wickens, J H; Wielers, M; Wilkinson, G R; Williams, W S C; Winter, M; Witek, M; Woschnagg, K; Yip, K; Yushchenko, O P; Zach, F; Zaitsev, A; Zalewska-Bak, A; Zalewski, Piotr; Zavrtanik, D; Zevgolatakos, E; Zimin, N I; Zito, M; Zontar, D; Zuberi, R; Zucchelli, G C; Zumerle, G; Belokopytov, Yu; Charpentier, Ph; Gavillet, Ph; Gouz, Yu; Jarlskog, Ch; Khokhlov, Yu; Papadopoulou, Th D

    1996-01-01

The measurement of the average lifetime of B hadrons using inclusively reconstructed secondary vertices has been updated using both an improved processing of previous data and additional statistics from new data. This has reduced the statistical and systematic uncertainties and gives τ_B = 1.582 ± 0.011 (stat.) ± 0.027 (syst.) ps. Combining this result with the previous result based on charged-particle impact parameter distributions yields τ_B = 1.575 ± 0.010 (stat.) ± 0.026 (syst.) ps.
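As a hedged aside, the arithmetic of such a combination (ignoring the correlations the paper carefully treats) is a standard inverse-variance weighted average with statistical and systematic uncertainties added in quadrature; the second measurement below is hypothetical:

```python
import math

def combine(measurements):
    """Inverse-variance weighted average, adding stat. and syst.
    uncertainties in quadrature (ignores correlations, unlike the paper)."""
    num = den = 0.0
    for value, stat, syst in measurements:
        w = 1.0 / (stat**2 + syst**2)
        num += w * value
        den += w
    return num / den, math.sqrt(1.0 / den)

# First entry is the paper's vertex result; the second is hypothetical
mean, err = combine([(1.582, 0.011, 0.027), (1.564, 0.014, 0.025)])
print(round(mean, 3), round(err, 3))  # ≈ 1.573 0.02
```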

  3. Domain averaged Fermi hole analysis for open-shell systems.

    Science.gov (United States)

    Ponec, Robert; Feixas, Ferran

    2009-05-14

This article reports the extension of a new methodology for the analysis and visualization of bonding interactions, known as the analysis of domain averaged Fermi holes (DAFH), to open-shell systems. The proposed generalization is based on a straightforward reformulation of the original approach within the framework of unrestricted Hartree-Fock (UHF) and/or Kohn-Sham (UKS) levels of theory. The application of the new methodology is demonstrated on a detailed analysis of the picture of the bonding in several simple systems involving the doublet state of the radical cation NH3(+) and the triplet ground state of the O2 molecule.

  4. Control of average spacing of OMCVD grown gold nanoparticles

    Science.gov (United States)

    Rezaee, Asad

Metallic nanostructures and their applications are a rapidly expanding field. Noble metals such as silver and gold have historically been used to demonstrate plasmon effects due to their strong resonances, which occur in the visible part of the electromagnetic spectrum. Localized surface plasmon resonance (LSPR) produces an enhanced electromagnetic field at the interface between a gold nanoparticle (Au NP) and the surrounding dielectric. This enhanced field can be used for metal-dielectric interface-sensitive optical interactions that form a powerful basis for optical sensing. In addition to the surrounding material, the LSPR spectral position and width depend on the size, shape, and average spacing between these particles. Au NP LSPR-based sensors reach their highest sensitivity with optimized parameters and usually operate by tracking absorption peak shifts. The absorption peak of randomly deposited Au NPs on surfaces is mostly broad. As a result, absorption peak shifts upon binding of a material onto Au NPs might not be very clear for further analysis. Therefore, novel methods based on three well-known techniques, self-assembly, ion irradiation, and organometallic chemical vapour deposition (OMCVD), are introduced to control the average spacing between Au NPs. In addition to covalent binding and other advantages of OMCVD-grown Au NPs, interesting optical features due to their non-spherical shapes are presented. The first step towards average-spacing control is to uniformly form self-assembled monolayers (SAMs) of octadecyltrichlorosilane (OTS) as resists for OMCVD Au NPs. The formation and optimization of the OTS SAMs are extensively studied. The optimized resist SAMs are ion-irradiated by a focused ion beam (FIB) and by ions generated by a Tandem accelerator. The irradiated areas are refilled with 3-mercaptopropyl-trimethoxysilane (MPTS) to provide nucleation sites for the OMCVD Au NP growth.
Each step during sample preparation is monitored by

  5. Concentration fluctuations and averaging time in vapor clouds

    CERN Document Server

    Wilson, David J

    2010-01-01

    This book contributes to more reliable and realistic predictions by focusing on sampling times from a few seconds to a few hours. Its objectives include developing clear definitions of statistical terms, such as plume sampling time, concentration averaging time, receptor exposure time, and other terms often confused with each other or incorrectly specified in hazard assessments; identifying and quantifying situations for which there is no adequate knowledge to predict concentration fluctuations in the near-field, close to sources, and far downwind where dispersion is dominated by atmospheric t

  6. Low Average Sidelobe Slot Array Antennas for Radiometer Applications

    Science.gov (United States)

    Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.

    2012-01-01

In radiometer applications, it is required to design antennas that meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to their inherent narrow-band performance, the problem is exacerbated by modeling errors and manufacturing tolerances. There was a need to develop a design methodology to solve the problem. An iterative design procedure was developed by starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide, and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by the full-wave method of moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced in the aperture distribution by modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, the array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. A separable Taylor distribution with nbar=4 and a 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by the genetic algorithm was found to have similar characteristics. The element spacing was obtained to provide the required beamwidth and close to a null in the E

  7. Edgeworth expansion for the pre-averaging estimator

    DEFF Research Database (Denmark)

    Podolskij, Mark; Veliyev, Bezirgen; Yoshida, Nakahiro

In this paper, we study the Edgeworth expansion for a pre-averaging estimator of quadratic variation in the framework of continuous diffusion models observed with noise. More specifically, we obtain a second order expansion for the joint density of the estimators of quadratic variation and its asymptotic variance. Our approach is based on martingale embedding, Malliavin calculus and stable central limit theorems for continuous diffusions. Moreover, we derive the density expansion for the studentized statistic, which might be applied to construct asymptotic confidence regions.

  8. Theory of oscillations in average crisis-induced transient lifetimes.

    Science.gov (United States)

    Kacperski, K; Hołyst, J A

    1999-07-01

    Analytical and numerical study of the roughly periodic oscillations emerging on the background of the well-known power law governing the scaling of the average lifetimes of crisis induced chaotic transients is presented. The explicit formula giving the amplitude of "normal" oscillations in terms of the eigenvalues of unstable orbits involved in the crisis is obtained using a simple geometrical model. We also discuss the commonly encountered situation when normal oscillations appear together with "anomalous" ones caused by the fractal structure of basins of attraction.

  9. Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder

    Science.gov (United States)

    Baurle, R. A.

    2016-01-01

Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations, which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken not only to assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but also to begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.

  10. Vibrationally averaged dipole moments of methane and benzene isotopologues

    Energy Technology Data Exchange (ETDEWEB)

    Arapiraca, A. F. C. [Laboratório de Átomos e Moléculas Especiais, Departamento de Física, ICEx, Universidade Federal de Minas Gerais, P. O. Box 702, 30123-970 Belo Horizonte, MG (Brazil); Centro Federal de Educação Tecnológica de Minas Gerais, Coordenação de Ciências, CEFET-MG, Campus I, 30.421-169 Belo Horizonte, MG (Brazil); Mohallem, J. R., E-mail: rachid@fisica.ufmg.br [Laboratório de Átomos e Moléculas Especiais, Departamento de Física, ICEx, Universidade Federal de Minas Gerais, P. O. Box 702, 30123-970 Belo Horizonte, MG (Brazil)

    2016-04-14

DFT-B3LYP post-Born-Oppenheimer (finite-nuclear-mass-correction (FNMC)) calculations of vibrationally averaged isotopic dipole moments of methane and benzene, which compare well with experimental values, are reported. For methane, in addition to the principal vibrational contribution to the molecular asymmetry, FNMC accounts for the surprisingly large Born-Oppenheimer error of about 34% in the dipole moments. This unexpected result is explained in terms of concurrent electronic and vibrational contributions. The calculated dipole moment of C6H3D3 is about twice as large as the measured dipole moment of C6H5D. Computational progress is advanced concerning applications to larger systems and the choice of appropriate basis sets. The simpler procedure of performing vibrational averaging at the Born-Oppenheimer level and then adding the FNMC contribution evaluated at the equilibrium distance is shown to be appropriate. Also, the basis set choice is made by heuristic analysis of the physical behavior of the systems, instead of by comparison with experiments.

  11. The visual system discounts emotional deviants when extracting average expression.

    Science.gov (United States)

    Haberman, Jason; Whitney, David

    2010-10-01

    There has been a recent surge in the study of ensemble coding, the idea that the visual system represents a set of similar items using summary statistics (Alvarez & Oliva, 2008; Ariely, 2001; Chong & Treisman, 2003; Parkes, Lund, Angelucci, Solomon, & Morgan, 2001). We previously demonstrated that this ability extends to faces and thus requires a high level of object processing (Haberman & Whitney, 2007, 2009). Recent debate has centered on the nature of the summary representation of size (e.g., Myczek & Simons, 2008) and whether the perceived average simply reflects the sampling of a very small subset of the items in a set. In the present study, we explored this further in the context of faces, asking observers to judge the average expressions of sets of faces containing emotional outliers. Our results suggest that the visual system implicitly and unintentionally discounts the emotional outliers, thereby computing a summary representation that encompasses the vast majority of the information present. Additional computational modeling and behavioral results reveal that an intentional cognitive sampling strategy does not accurately capture observer performance. Observers derive precise ensemble information given a 250-msec exposure, suggesting a rapid and flexible system not bound by the limits of serial attention.
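The discounting of outliers can be mimicked by a robust summary statistic. The sketch below is only a crude computational analogue of the behavioral finding, not the authors' model; the intensity values and threshold are invented:

```python
def robust_mean(values, z=2.0):
    """Mean after discounting values more than z standard deviations
    from the median -- a crude analogue of how observers appear to
    discount emotional outliers when averaging expressions."""
    s = sorted(values)
    med = s[len(s) // 2]
    sd = (sum((v - med) ** 2 for v in values) / len(values)) ** 0.5
    kept = [v for v in values if abs(v - med) <= z * sd]
    return sum(kept) / len(kept)

# Expression intensities with one emotional outlier
faces = [0.30, 0.35, 0.32, 0.31, 0.95]
print(sum(faces) / len(faces))  # ~0.446, plain mean pulled by the outlier
print(robust_mean(faces))       # ~0.32, outlier discounted
```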

  12. Face averages enhance user recognition for smartphone security.

    Directory of Open Access Journals (Sweden)

    David J Robertson

Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average', a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when it stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as the development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.

  13. Generalized Heteroskedasticity ACF for Moving Average Models in Explicit Forms

    Directory of Open Access Journals (Sweden)

    Samir Khaled Safi

    2014-02-01

The autocorrelation function (ACF) measures the correlation between observations at different distances apart. We derive explicit equations for the generalized heteroskedasticity ACF for a moving average of order q, MA(q). We consider two cases: firstly, when the disturbance terms follow the general covariance matrix structure Cov(wi, wj) = Σ with σij ≠ 0 ∀ i ≠ j; secondly, when the diagonal elements of Σ are not all identical but σij = 0 ∀ i ≠ j, i.e. Σ = diag(σ11, σ22, …, σtt). The forms of the explicit equations depend essentially on the moving average coefficients and the covariance structure of the disturbance terms.

  14. Large interface simulation in an averaged two-fluid code

    International Nuclear Information System (INIS)

    Henriques, A.

    2006-01-01

Different ranges of sizes of interfaces and eddies are involved in multiphase flow phenomena. Classical formalisms focus on a specific range of size. This study presents a Large Interface Simulation (LIS) two-fluid compressible formalism taking into account different sizes of interfaces. As in single-phase Large Eddy Simulation, a filtering process is used to separate Large Interface (LI) simulation from Small Interface (SI) modelling. The LI surface tension force is modelled by adapting the well-known CSF method. The modelling of SI transfer terms is done using classical closure laws of the averaged approach. To simulate LI transfer terms accurately, we develop a LI recognition algorithm based on a dimensionless criterion. The LIS model is applied in a classical averaged two-fluid code. The LI transfer terms modelling and the LI recognition are validated on analytical and experimental tests. A square-base basin excited by a horizontal periodic movement is studied with the LIS model. The capability of the model is also shown on the case of the break-up of a bubble in a turbulent liquid flow. The break-up of a large bubble at a grid impact exhibits regime transitions between different scales of interface, from LI to SI and from PI to LI. (author)

  15. Image Compression Using Moving Average Histogram and RBF Network

    Directory of Open Access Journals (Sweden)

    Sandar khowaja

    2016-04-01

Modernization and globalization have made multimedia technology one of the fastest growing fields in recent times, but optimal use of bandwidth and storage has remained a topic that attracts the research community. Considering that images have the lion's share in multimedia communication, an efficient image compression technique has become a basic need for optimal use of bandwidth and space. This paper proposes a novel method for image compression based on the fusion of a moving average histogram and an RBF (Radial Basis Function) network. The proposed technique employs the concept of reducing colour intensity levels using the moving average histogram technique, followed by the correction of colour intensity levels using RBF networks at the reconstruction phase. Existing methods have used low-resolution images for testing purposes, but the proposed method has been tested on various image resolutions to give a clear assessment of the technique. The proposed method has been tested on 35 images with varying resolution and has been compared with existing algorithms in terms of CR (Compression Ratio), MSE (Mean Square Error), PSNR (Peak Signal-to-Noise Ratio), and computational complexity. The outcome shows that the proposed methodology is a better trade-off technique in terms of compression ratio, PSNR, which determines the quality of the image, and computational complexity.
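The first stage the abstract describes, smoothing the intensity histogram with a moving average, can be sketched as follows (the RBF correction stage is omitted, and the window size is an assumption):

```python
def moving_average_histogram(hist, window=3):
    """Smooth an intensity histogram with a centred moving average --
    the level-reduction stage the abstract describes (RBF correction
    stage not sketched here)."""
    r = window // 2
    out = []
    for i in range(len(hist)):
        seg = hist[max(0, i - r): i + r + 1]  # truncated at the edges
        out.append(sum(seg) / len(seg))
    return out

hist = [0, 8, 0, 6, 0, 4, 0]  # toy 7-bin intensity histogram
print(moving_average_histogram(hist))
```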

  16. Eighth CW and High Average Power RF Workshop

    CERN Document Server

    2014-01-01

    We are pleased to announce the next Continuous Wave and High Average RF Power Workshop, CWRF2014, to take place at Hotel NH Trieste, Trieste, Italy from 13 to 16 May, 2014. This is the eighth in the CWRF workshop series and will be hosted by Elettra - Sincrotrone Trieste S.C.p.A. (www.elettra.eu). CWRF2014 will provide an opportunity for designers and users of CW and high average power RF systems to meet and interact in a convivial environment to share experiences and ideas on applications which utilize high-power klystrons, gridded tubes, combined solid-state architectures, high-voltage power supplies, high-voltage modulators, high-power combiners, circulators, cavities, power couplers and tuners. New ideas for high-power RF system upgrades and novel ways of RF power generation and distribution will also be discussed. CWRF2014 sessions will start on Tuesday morning and will conclude on Friday lunchtime. A visit to Elettra and FERMI will be organized during the workshop. ORGANIZING COMMITTEE (OC): Al...

  17. Orbit-averaged Darwin quasi-neutral hybrid code

    International Nuclear Information System (INIS)

    Zachary, A.L.; Cohen, B.I.

    1986-01-01

    We have developed an orbit-averaged Darwin quasi-neutral hybrid code to study the in situ acceleration of cosmic rays by supernova-remnant shock waves. The orbit-averaged algorithm is well suited to following the slow growth of Alfven waves driven by resonances with rapidly gyrating cosmic rays. We present a complete description of our algorithm, along with stability and noise analyses. The code is numerically unstable, but a single e-folding may require as many as 10^5 time-steps! It can therefore be used to study instabilities for which Γ_physical > Γ_numerical, provided that Γ_numerical τ_final < O(1). We also analyze a physical instability which provides a successful test of our algorithm.

  18. Average gluon and quark jet multiplicities at higher orders

    Energy Technology Data Exchange (ETDEWEB)

    Bolzoni, Paolo; Kniehl, Bernd A. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Kotikov, Anatoly V. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Joint Institute of Nuclear Research, Moscow (Russian Federation). Bogoliubov Lab. of Theoretical Physics

    2013-05-15

    We develop a new formalism for computing and including both the perturbative and nonperturbative QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new method is motivated by recent progress in timelike small-x resummation obtained in the MS-bar factorization scheme. We obtain next-to-next-to-leading-logarithmic (NNLL) resummed expressions, which represent generalizations of previous analytic results. Our expressions depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets that are compatible with regard to the jet algorithms demonstrates, by its goodness of fit, how our results solve a longstanding problem of QCD. We show that the statistical and theoretical uncertainties both do not exceed 5% for scales above 10 GeV. We finally propose to use the jet multiplicity data as a new way to extract the strong-coupling constant. Including all the available theoretical input within our approach, we obtain α_s^(5)(M_Z) = 0.1199 ± 0.0026 in the MS-bar scheme in an approximation equivalent to next-to-next-to-leading order enhanced by the resummations of ln(x) terms through the NNLL level and of ln Q^2 terms by the renormalization group, in excellent agreement with the present world average.

  19. Monthly streamflow forecasting with auto-regressive integrated moving average

    Science.gov (United States)

    Nasir, Najah; Samsudin, Ruhaidah; Shabri, Ani

    2017-09-01

    Forecasting of streamflow is one of the many ways that can contribute to better decision making for water resource management. The auto-regressive integrated moving average (ARIMA) model was selected in this research for monthly streamflow forecasting, with enhancement made by pre-processing the data using singular spectrum analysis (SSA). This study also proposed an extension of the SSA technique to include a step where clustering was performed on the eigenvector pairs before reconstruction of the time series. The monthly streamflow data of Sungai Muda at Jeniang, Sungai Muda at Jambatan Syed Omar and Sungai Ketil at Kuala Pegang were gathered from the Department of Irrigation and Drainage Malaysia. A ratio of 9:1 was used to divide the data into training and testing sets. The ARIMA, SSA-ARIMA and Clustered SSA-ARIMA models were all developed in R software. Results from the proposed model are then compared to a conventional auto-regressive integrated moving average model using the root-mean-square error and mean absolute error values. It was found that the proposed model can outperform the conventional model.
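As a rough illustration of this forecasting workflow (the study itself fit ARIMA and SSA-ARIMA models in R), the sketch below fits a minimal AR(1) model by ordinary least squares, iterates it forward, and scores forecasts with the RMSE, one of the two error measures used above. Data and names are synthetic, not the paper's stations.

```python
# Minimal AR(1) stand-in for an ARIMA workflow: fit y_t = c + phi*y_{t-1},
# forecast ahead, and score with root-mean-square error.
import math

def fit_ar1(series):
    """Least-squares estimates of (c, phi) for y_t = c + phi * y_{t-1}."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    phi = cov / var
    c = my - phi * mx
    return c, phi

def forecast(series, c, phi, steps):
    """Iterate the fitted recursion forward from the last observation."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

def rmse(actual, predicted):
    """Root-mean-square error between two equal-length sequences."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))
```

On a noise-free AR(1) sequence the fit recovers the generating parameters exactly, which makes the sketch easy to verify.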

  20. Dynamic logistic regression and dynamic model averaging for binary classification.

    Science.gov (United States)

    McCormick, Tyler H; Raftery, Adrian E; Madigan, David; Burd, Randall S

    2012-03-01

    We propose an online binary classification procedure for cases when there is uncertainty about the model to use and parameters within a model change over time. We account for model uncertainty through dynamic model averaging, a dynamic extension of Bayesian model averaging in which posterior model probabilities may also change with time. We apply a state-space model to the parameters of each model and we allow the data-generating model to change over time according to a Markov chain. Calibrating a "forgetting" factor accommodates different levels of change in the data-generating mechanism. We propose an algorithm that adjusts the level of forgetting in an online fashion using the posterior predictive distribution, and so accommodates various levels of change at different times. We apply our method to data from children with appendicitis who receive either a traditional (open) appendectomy or a laparoscopic procedure. Factors associated with which children receive a particular type of procedure changed substantially over the 7 years of data collection, a feature that is not captured using standard regression modeling. Because our procedure can be implemented completely online, future data collection for similar studies would require storing sensitive patient information only temporarily, reducing the risk of a breach of confidentiality. © 2011, The International Biometric Society.
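The forgetting-factor update at the heart of dynamic model averaging can be sketched as below. This is a simplified reading of the procedure, not the authors' implementation: each candidate model's posterior probability is flattened by a forgetting step (inflating uncertainty about which model currently holds) and then reweighted by its predictive likelihood for the newest observation.

```python
# One step of a simplified dynamic-model-averaging update (illustrative).

def dma_update(probs, likelihoods, forgetting=0.99):
    """Forget, then condition the model probabilities on the new data point."""
    # Forgetting: raise each probability to a power < 1 and renormalize.
    forgotten = [p ** forgetting for p in probs]
    z = sum(forgotten)
    forgotten = [p / z for p in forgotten]
    # Bayes update with each model's predictive likelihood.
    posterior = [p * l for p, l in zip(forgotten, likelihoods)]
    z = sum(posterior)
    return [p / z for p in posterior]

def dma_predict(probs, model_preds):
    """Model-averaged prediction, e.g. a probability of the binary outcome."""
    return sum(p * yhat for p, yhat in zip(probs, model_preds))
```

A forgetting factor near 1 tracks slow change in the data-generating mechanism; smaller values let the model probabilities shift quickly, matching the paper's idea of tuning forgetting online.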

  1. Face averages enhance user recognition for smartphone security.

    Science.gov (United States)

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
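At its core, a face-average is a pixel-wise mean over images of the same person. The sketch below assumes pre-aligned, equally sized greyscale images (real pipelines warp faces to a common shape before averaging), so it is only a schematic of the representation, not the system tested above.

```python
# Pixel-wise mean of aligned greyscale images, each a list of rows.

def face_average(images):
    """Average equally sized images; idiosyncratic variation cancels out."""
    n = len(images)
    rows, cols = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) / n for c in range(cols)]
            for r in range(rows)]
```

The intuition from the paper is that averaging many snapshots of one face suppresses lighting and pose idiosyncrasies while preserving identity, which is why the stored average generalizes better than any single enrollment image.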

  2. SPATIAL DISTRIBUTION OF THE AVERAGE RUNOFF IN THE IZA AND VIȘEU WATERSHEDS

    Directory of Open Access Journals (Sweden)

    HORVÁTH CS.

    2015-03-01

    Full Text Available The average runoff is the main parameter with which one can best evaluate an area's water resources, and it is also an important characteristic in all river-runoff research. In this paper we chose a GIS methodology for assessing the spatial evolution of the average runoff; using validity curves we identified three validity areas in which the runoff changes differently with altitude. The three curves were charted using the average runoff values of 16 hydrometric stations from the area, eight in the Vișeu and eight in the Iza river catchment. Identifying the appropriate areas of the obtained correlation curves (between specific average runoff and catchment mean altitude) allowed the assessment of potential runoff at catchment level and over altitudinal intervals. By integrating the curve functions into GIS we created an average runoff map for the area, from which one can easily extract runoff data using GIS spatial-analysis functions. The study shows that of the three areas the highest runoff corresponds to the third zone, but because of its small area the associated water volume is also minor. It is also shown that with the created runoff map we can compute relatively quickly correct runoff values for areas without hydrologic control.

  3. The metric geometric mean transference and the problem of the average eye

    Directory of Open Access Journals (Sweden)

    W. F. Harris

    2008-12-01

    Full Text Available An average refractive error is readily obtained as an arithmetic average of refractive errors. But how does one characterize the first-order optical character of an average eye? Solutions have been offered, including via the exponential-mean-log transference. The exponential-mean-log transference appears to work well in practice, but there is the niggling problem that the method does not work with all optical systems. Ideally one would like to be able to calculate an average for eyes in exactly the same way for all optical systems. This paper examines the potential of a relatively newly described mean, the metric geometric mean of positive definite (and, therefore, symmetric) matrices. We extend the definition of the metric geometric mean to matrices that are not symmetric and then apply it to ray transferences of optical systems. The metric geometric mean of two transferences is shown to satisfy the requirement that symplecticity be preserved. Numerical examples show that the mean seems to give a reasonable average for two eyes. Unfortunately, however, what seem reasonable generalizations to the mean of more than two eyes turn out not to be satisfactory in general. These generalizations do work well for thin systems. One concludes that, unless other generalizations can be found, the metric geometric mean suffers from more disadvantages than the exponential-mean-logarithm and has no advantages over it.
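For reference, the metric geometric mean discussed above has, for positive definite matrices A and B, the standard closed form (the paper's extension to non-symmetric transferences is not reproduced here):

```latex
% Metric geometric mean of positive definite matrices A and B:
\[
  A \,\#\, B \;=\; A^{1/2}\left(A^{-1/2}\, B\, A^{-1/2}\right)^{1/2} A^{1/2} .
\]
% It is symmetric in its arguments and reduces to \sqrt{ab}
% for positive scalars a and b.
```

Geometrically, A # B is the midpoint of the geodesic joining A and B in the Riemannian manifold of positive definite matrices, which is why it is a natural candidate for averaging two transferences.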

  4. Preference for facial averageness: Evidence for a common mechanism in human and macaque infants.

    Science.gov (United States)

    Damon, Fabrice; Méary, David; Quinn, Paul C; Lee, Kang; Simpson, Elizabeth A; Paukner, Annika; Suomi, Stephen J; Pascalis, Olivier

    2017-04-13

    Human adults and infants show a preference for average faces, which could stem from a general processing mechanism and may be shared among primates. However, little is known about preference for facial averageness in monkeys. We used a comparative developmental approach and eye-tracking methodology to assess visual attention in human and macaque infants to faces naturally varying in their distance from a prototypical face. In Experiment 1, we examined the preference for faces relatively close to or far from the prototype in 12-month-old human infants with human adult female faces. Infants preferred faces closer to the average over faces farther from it. In Experiment 2, we measured the looking time of 3-month-old rhesus macaques (Macaca mulatta) viewing macaque faces varying in their distance from the prototype. Like human infants, macaque infants looked longer at faces closer to the average. In Experiments 3 and 4, both species were presented with unfamiliar categories of faces (i.e., macaque infants tested with adult macaque faces; human infants and adults tested with infant macaque faces) and showed no prototype preferences, suggesting that the prototypicality effect is experience-dependent. Overall, the findings suggest a common processing mechanism across species, leading to averageness preferences in primates.

  5. A Derivation of the Nonlocal Volume-Averaged Equations for Two-Phase Flow Transport

    Directory of Open Access Journals (Sweden)

    Gilberto Espinosa-Paredes

    2012-01-01

    Full Text Available In this paper a detailed derivation of the general transport equations for two-phase systems, using a method based on nonlocal volume averaging, is presented. The local volume-averaged equations are commonly applied in nuclear reactor systems for optimal design and safe operation. Unfortunately, these equations are subject to length-scale restrictions and, according to the theory of the volume averaging method, they fail at transitions of the flow patterns and at boundaries between two-phase flow and solid, which produce rapid changes in the physical properties and void fraction. The nonlocal volume-averaged equations derived in this work contain new terms related to nonlocal transport effects due to accumulation, convection, diffusion and transport properties for two-phase flow; for instance, they can be applied at the boundary between a two-phase flow and a solid phase, or at the boundary of the transition region of two-phase flows where the local volume-averaged equations fail.

  6. The relationship between limit of Dysphagia and average volume per swallow in patients with Parkinson's disease.

    Science.gov (United States)

    Belo, Luciana Rodrigues; Gomes, Nathália Angelina Costa; Coriolano, Maria das Graças Wanderley de Sales; de Souza, Elizabete Santos; Moura, Danielle Albuquerque Alves; Asano, Amdore Guescel; Lins, Otávio Gomes

    2014-08-01

    The goal of this study was to obtain the limit of dysphagia and the average volume per swallow in patients with mild to moderate Parkinson's disease (PD) but without swallowing complaints and in normal subjects, and to investigate the relationship between them. We hypothesize there is a direct relationship between these two measurements. The study included 10 patients with idiopathic PD and 10 age-matched normal controls. Surface electromyography was recorded over the suprahyoid muscle group. The limit of dysphagia was obtained by offering increasing volumes of water until piecemeal deglutition occurred. The average volume per swallow was calculated by dividing the time taken by the number of swallows used to drink 100 ml of water. The PD group showed a significantly lower dysphagia limit and lower average volume per swallow. There was a significantly moderate direct correlation and association between the two measurements. About half of the PD patients had an abnormally low dysphagia limit and average volume per swallow, although none had spontaneously related swallowing problems. Both measurements may be used as a quick objective screening test for the early identification of swallowing alterations that may lead to dysphagia in PD patients, but the determination of the average volume per swallow is much quicker and simpler.

  7. Multifractal detrending moving-average cross-correlation analysis.

    Science.gov (United States)

    Jiang, Zhi-Qiang; Zhou, Wei-Xing

    2011-07-01

    There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross correlations. Multifractal detrended cross-correlation analysis (MFDCCA) approaches can be used to quantify such cross correlations; one example is the MFDCCA based on the detrended fluctuation analysis (MFXDFA) method. We develop in this work a class of MFDCCA algorithms based on the detrending moving-average analysis, called MFXDMA. The performances of the proposed MFXDMA algorithms are compared with the MFXDFA method by extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving-average processes, and binomial measures, which have theoretical expressions of the multifractal nature. In all cases, the scaling exponents h(xy) extracted from the MFXDMA and MFXDFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross correlation is independent of the cross-correlation coefficient between two time series, and the MFXDFA and centered MFXDMA algorithms have comparative performances, which outperform the forward and backward MFXDMA algorithms. For two-component autoregressive fractionally integrated moving-average processes, we also find that the MFXDFA and centered MFXDMA algorithms have comparative performances, while the forward and backward MFXDMA algorithms perform slightly worse. For binomial measures, the forward MFXDMA algorithm exhibits the best performance, the centered MFXDMA algorithm performs worst, and the backward MFXDMA algorithm outperforms the MFXDFA algorithm when the moment order q < 0. We apply these algorithms to the return time series of two stock market indexes and to their volatilities. For the returns, the centered MFXDMA algorithm gives the best estimates of h(xy)(q) since its h(xy)(2) is closest to 0.5, as expected, and
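The detrending-moving-average building block shared by the MFXDMA variants can be sketched as follows: subtract a moving average from each series and form a fluctuation function from the covariance of the residuals at a given window size. This is a centred-window, q = 2 sketch only (the forward and backward variants shift the window, and the multifractal version raises segment covariances to varying powers q); it is not the authors' code.

```python
# Centred detrending-moving-average residuals and a q=2 cross fluctuation.

def moving_avg(x, n):
    """Centred moving average with window size n (shrunk at the edges)."""
    half = n // 2
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def dma_fluctuation(x, y, n):
    """Cross-correlation fluctuation of the detrended residuals at scale n."""
    rx = [a - m for a, m in zip(x, moving_avg(x, n))]
    ry = [b - m for b, m in zip(y, moving_avg(y, n))]
    cov = sum(a * b for a, b in zip(rx, ry)) / len(rx)
    return abs(cov) ** 0.5
```

The scaling exponent h(xy) is then estimated from how this fluctuation grows with the window size n on a log-log plot.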

  8. Ultra-low noise miniaturized neural amplifier with hardware averaging.

    Science.gov (United States)

    Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M

    2015-08-01

    Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface but the recorded signal amplitude is small (noise of less than 1 μVrms for a useful signal-to-noise ratio (SNR). Flat interface nerve electrode (FINE) contacts alone generate thermal noise of at least 0.5 μVrms therefore the amplifier should add as little noise as possible. Since mainstream neural amplifiers have a baseline noise of 2 μVrms or higher, novel designs are required. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement on the recorded baseline noise with at least two parallel operation transconductance amplifiers leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of generating noise when connected to a FINE placed on the sciatic nerve of an awake animal. An algorithm was introduced to find the value of N that can minimize both the power consumption and the noise in order to design a miniaturized ultralow-noise neural amplifier. These results demonstrate the efficacy of hardware averaging on noise improvement for neural recording with cuff electrodes, and can accommodate the presence of high source impedances that are associated with the miniaturized contacts and the high channel count in electrode arrays. This
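The arithmetic behind hardware averaging is simple enough to sketch: averaging N parallel amplifier channels divides the uncorrelated amplifier noise power by N, while the correlated electrode/source noise is unchanged, hence the paper's "factor of 1/√N or less depending on the source resistance". The values below are illustrative, not measurements from the study.

```python
# Input-referred noise with N hardware-averaged amplifier channels.
import math

def total_noise(source_noise, amp_noise, n):
    """Total rms noise (uV rms): source noise is common to all channels,
    amplifier noise power is divided by n by the averaging."""
    return math.sqrt(source_noise ** 2 + (amp_noise ** 2) / n)

def snr_db(signal_rms, noise_rms):
    """Signal-to-noise ratio in decibels."""
    return 20 * math.log10(signal_rms / noise_rms)
```

For example, with 0.5 uVrms of electrode thermal noise and 2 uVrms per amplifier, eight channels bring the total from about 2.06 to about 0.87 uVrms, after which the electrode noise floor dominates and further channels buy little, which is why the paper optimizes N jointly with power consumption.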

  9. Ultra-low noise miniaturized neural amplifier with hardware averaging

    Science.gov (United States)

    Dweiri, Yazan M.; Eggers, Thomas; McCallum, Grant; Durand, Dominique M.

    2015-08-01

    Objective. Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface but the recorded signal amplitude is small (noise of less than 1 μVrms for a useful signal-to-noise ratio (SNR). Flat interface nerve electrode (FINE) contacts alone generate thermal noise of at least 0.5 μVrms therefore the amplifier should add as little noise as possible. Since mainstream neural amplifiers have a baseline noise of 2 μVrms or higher, novel designs are required. Approach. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. Main results. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement on the recorded baseline noise with at least two parallel operation transconductance amplifiers leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of generating noise when connected to a FINE placed on the sciatic nerve of an awake animal. An algorithm was introduced to find the value of N that can minimize both the power consumption and the noise in order to design a miniaturized ultralow-noise neural amplifier. Significance. These results demonstrate the efficacy of hardware averaging on noise improvement for neural recording with cuff electrodes, and can accommodate the presence of high source impedances that are associated with the miniaturized contacts and

  10. GOZCARDS Source Data for Temperature Monthly Zonal Averages on a Geodetic Latitude and Pressure Grid V1.00

    Data.gov (United States)

    National Aeronautics and Space Administration — The GOZCARDS Source Data for Temperature Monthly Zonal Averages on a Geodetic Latitude and Pressure Grid product (GozSmlpT) contains zonal means and related...

  11. Forecasting natural gas consumption in China by Bayesian Model Averaging

    Directory of Open Access Journals (Sweden)

    Wei Zhang

    2015-11-01

    Full Text Available With the rapid growth of natural gas consumption in China, more accurate and reliable forecasting models are urgently needed. Considering the limitations of single models and model uncertainty, this paper presents a combinative method to forecast natural gas consumption by Bayesian Model Averaging (BMA). It can effectively handle the uncertainty associated with model structure and parameters, and thus improves forecasting accuracy. This paper chooses six variables for forecasting natural gas consumption: GDP, urban population, energy consumption structure, industrial structure, energy efficiency, and exports of goods and services. The results show that, compared to the Gray prediction model, linear regression model and artificial neural networks, the BMA method provides a flexible tool to forecast natural gas consumption, which will grow rapidly in the future. This study can provide insightful information on future natural gas consumption.
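The combination step in Bayesian Model Averaging can be sketched in a few lines: weight each candidate model's forecast by its posterior model probability. The weights below are approximated from BIC scores, a standard approximation that may differ from the paper's exact weighting scheme; the numbers are illustrative.

```python
# BMA in miniature: posterior model weights from BIC, then a weighted forecast.
import math

def bma_weights(bics):
    """Approximate posterior model probabilities: w_k proportional to exp(-BIC_k/2)."""
    best = min(bics)  # subtract the best score for numerical stability
    raw = [math.exp(-(b - best) / 2) for b in bics]
    z = sum(raw)
    return [r / z for r in raw]

def bma_forecast(bics, forecasts):
    """Model-averaged forecast from per-model forecasts and their BICs."""
    return sum(w * f for w, f in zip(bma_weights(bics), forecasts))
```

Models that fit the data poorly receive exponentially small weight, so the averaged forecast leans on the better-supported models without discarding the rest, which is how BMA handles model uncertainty.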

  12. A Note on Functional Averages over Gaussian Ensembles

    Directory of Open Access Journals (Sweden)

    Gabriel H. Tucci

    2013-01-01

    Full Text Available We find a new formula for matrix averages over the Gaussian ensemble. Let H be an n×n Gaussian random matrix with complex, independent, and identically distributed entries of zero mean and unit variance. Given an n×n positive definite matrix A and a continuous function f: ℝ+ → ℝ such that ∫_0^∞ e^(-αt) |f(t)|^2 dt < ∞ for every α > 0, we find a new formula for the expectation E[Tr(f(HAH*))]. Taking f(x) = log(1+x) gives another formula for the capacity of the MIMO communication channel, and taking f(x) = (1+x)^(-1) gives the MMSE achieved by a linear receiver.

  13. Data Point Averaging for Computational Fluid Dynamics Data

    Science.gov (United States)

    Norman, Jr., David (Inventor)

    2016-01-01

    A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
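The averaging step described above reduces to grouping sample points by sub-area and taking the mean per group. The sketch below is an illustrative simplification (names are hypothetical, and the point-to-sub-area assignment, which the patent computes from surface geometry, is taken as given).

```python
# Average CFD sample values per sub-area, given (sub_area_id, value) pairs.
from collections import defaultdict

def sub_area_averages(samples):
    """Map each sub-area id to the mean of the sample values inside it."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for area, value in samples:
        sums[area] += value
        counts[area] += 1
    return {area: sums[area] / counts[area] for area in sums}
```

The resulting one-value-per-sub-area summary is what feeds the downstream aerodynamic heating analysis.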

  14. Averaged multivalued solutions and time discretization for conservation laws

    International Nuclear Information System (INIS)

    Brenier, Y.

    1985-01-01

    It is noted that the correct shock solutions can be approximated by averaging in some sense the multivalued solution given by the method of characteristics for the nonlinear scalar conservation law (NSCL). A time discretization for the NSCL equation based on this principle is considered. An equivalent analytical formulation is shown to lead quite easily to a convergence result, and a third formulation is introduced which can be generalized for the systems of conservation laws. Various numerical schemes are constructed from the proposed time discretization. The first family of schemes is obtained by using a spatial grid and projecting the results of the time discretization. Many known schemes are then recognized (mainly schemes by Osher, Roe, and LeVeque). A second way to discretize leads to a particle scheme without space grid, which is very efficient (at least in the scalar case). Finally, a close relationship between the proposed method and the Boltzmann type schemes is established. 14 references

  15. Quantum gravity unification via transfinite arithmetic and geometrical averaging

    International Nuclear Information System (INIS)

    El Naschie, M.S.

    2008-01-01

    In E-Infinity theory, we have not only infinitely many dimensions but also infinitely many fundamental forces. However, due to the hierarchical structure of ε(∞) spacetime we have a finite expectation number for its dimensionality and likewise a finite expectation number for the corresponding interactions. Starting from the preceding fundamental principles and using the experimental findings as well as the theoretical values of the coupling constants of the electroweak and the strong forces, we present an extremely simple averaging procedure for determining the quantum gravity unification coupling constant with and without supersymmetry. The work draws heavily on previous results, in particular a paper by the Slovenian Prof. Marek-Crnjac [Marek-Crnjac L. On the unification of all fundamental forces in a fundamentally fuzzy Cantorian ε(∞) manifold and high energy physics. Chaos, Solitons and Fractals 2004;4:657-68].

  16. Modeling and Forecasting Average Temperature for Weather Derivative Pricing

    Directory of Open Access Journals (Sweden)

    Zhiliang Wang

    2015-01-01

    Full Text Available The main purpose of this paper is to present a feasible model for the daily average temperature in the area of Zhengzhou and apply it to weather derivatives pricing. We start by exploring the background of the weather derivatives market and then use 62 years of daily historical data to fit the mean-reverting Ornstein-Uhlenbeck process describing the evolution of the temperature. Finally, Monte Carlo simulations are used to price a heating degree day (HDD) call option for this city, and the slow convergence of the HDD call price can be observed over 100,000 simulations. The methods of the research will provide a framework for modeling temperature and pricing weather derivatives in other similar places in China.
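A compact version of this pipeline can be sketched as below, under strong simplifying assumptions: a constant mean level instead of the paper's seasonal one, daily Euler steps, and illustrative parameters rather than values fitted to Zhengzhou data. The simulation accumulates heating degree days HDD = Σ max(0, 18 − T) along each path and Monte-Carlo prices a call paying tick · max(HDD − K, 0).

```python
# OU temperature simulation and Monte Carlo pricing of an HDD call (illustrative).
import math
import random

def simulate_hdd(t0, theta, kappa, sigma, days, rng):
    """One temperature path with daily Euler steps; returns accumulated HDD."""
    t, hdd = t0, 0.0
    for _ in range(days):
        # dT = kappa*(theta - T)*dt + sigma*dW, with dt = 1 day
        t += kappa * (theta - t) + sigma * rng.gauss(0.0, 1.0)
        hdd += max(0.0, 18.0 - t)
    return hdd

def hdd_call_price(strike, tick, r, days, n_paths, seed=1):
    """Discounted Monte Carlo average of the HDD call payoff."""
    rng = random.Random(seed)
    payoffs = [tick * max(simulate_hdd(5.0, 5.0, 0.3, 2.0, days, rng) - strike, 0.0)
               for _ in range(n_paths)]
    discount = math.exp(-r * days / 365.0)
    return discount * sum(payoffs) / n_paths
```

The slow Monte Carlo convergence noted in the abstract shows up here as the usual 1/√(n_paths) shrinkage of the estimator's standard error.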

  17. The modification of turbulent transport by orbit averaging

    Energy Technology Data Exchange (ETDEWEB)

    Mynick, H.E.; Zweben, S.J.

    1991-05-01

    The effect on plasma turbulence of orbit averaging by thermal ions is considered, and illustrated for two modes of potential importance for tokamaks. The effect can reduce the ion response below that in earlier treatments, modifying the predicted mode growth rate, which in turn modifies the turbulent transport. For both modes, the effect modifies earlier transport expressions with a "neoclassical factor," which makes the scalings of the resultant transport coefficients with plasma current and magnetic field closer to those found experimentally. Additionally, for the trapped electron mode, this mechanism provides a potential explanation of the observed more favorable scaling of χ_i with T_i in supershots than in L-modes. 21 refs., 2 figs.

  18. A note on computing average state occupation times

    Directory of Open Access Journals (Sweden)

    Jan Beyersmann

    2014-05-01

    Full Text Available Objective: This review discusses how biometricians would probably compute or estimate expected waiting times, if they had the data. Methods: Our framework is a time-inhomogeneous Markov multistate model, where all transition hazards are allowed to be time-varying. We assume that the cumulative transition hazards are given. That is, they are either known, as in a simulation, determined by expert guesses, or obtained via some method of statistical estimation. Our basic tool is product integration, which transforms the transition hazards into the matrix of transition probabilities. Product integration enjoys a rich mathematical theory, which has successfully been used to study probabilistic and statistical aspects of multistate models. Our emphasis will be on practical implementation of product integration, which allows us to numerically approximate the transition probabilities. Average state occupation times and other quantities of interest may then be derived from the transition probabilities.
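The product-integration recipe described above can be made concrete for the simplest possible multistate model: a two-state alive → dead model with a constant hazard, where the exact answers are known. This is an illustrative sketch under those assumptions, not the review's code; the transition "matrix" collapses to a scalar (I + dA) factor per interval, and the average occupation time is the integral of the state occupation probability.

```python
# Product integration P(0,t) = prod(I + dA) and an average occupation time,
# shown for a two-state survival model with constant hazard.

def product_integral(hazard, t_max, steps):
    """P(alive at t_max | alive at 0) via the product over small intervals."""
    dt = t_max / steps
    p_alive = 1.0
    for _ in range(steps):
        p_alive *= 1.0 - hazard * dt  # scalar (I + dA) for this model
    return p_alive

def average_time_alive(hazard, t_max, steps):
    """Restricted mean time alive: integral of P(alive) over [0, t_max]."""
    dt = t_max / steps
    p_alive, total = 1.0, 0.0
    for _ in range(steps):
        p_alive *= 1.0 - hazard * dt
        total += p_alive * dt
    return total
```

With constant hazard h the product integral converges to exp(-h·t) and the restricted mean to (1 − exp(-h·t))/h, so the numerical scheme can be checked against closed forms before being trusted on time-varying hazards, where no closed form exists.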

  19. Extracting Credible Dependencies for Averaged One-Dependence Estimator Analysis

    Directory of Open Access Journals (Sweden)

    LiMin Wang

    2014-01-01

    Full Text Available Of the numerous proposals to improve the accuracy of naive Bayes (NB by weakening the conditional independence assumption, averaged one-dependence estimator (AODE demonstrates remarkable zero-one loss performance. However, indiscriminate superparent attributes will bring both considerable computational cost and negative effect on classification accuracy. In this paper, to extract the most credible dependencies we present a new type of seminaive Bayesian operation, which selects superparent attributes by building maximum weighted spanning tree and removes highly correlated children attributes by functional dependency and canonical cover analysis. Our extensive experimental comparison on UCI data sets shows that this operation efficiently identifies possible superparent attributes at training time and eliminates redundant children attributes at classification time.

  20. Applications of ordered weighted averaging (OWA) operators in environmental problems

    Directory of Open Access Journals (Sweden)

    Carlos Llopis-Albert

    2017-04-01

    Full Text Available This paper presents an application of a prioritized weighted aggregation operator based on ordered weighted averaging (OWA) to deal with stakeholders' constructive participation in water resources projects. Stakeholders have different degrees of acceptance or preference regarding the measures and policies to be carried out, which lead to different environmental and socio-economic outcomes, and hence to different levels of stakeholders' satisfaction. The methodology establishes a prioritization relationship among the stakeholders, whose preferences are aggregated by means of weights depending on the satisfaction of the higher-priority policy maker. The methodology has been successfully applied to a Public Participation Project (PPP) in watershed management, thus obtaining efficient environmental measures in conflict resolution problems under actors' preference uncertainties.
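The core OWA aggregation is simple to state: the weights are applied to the satisfaction values sorted in descending order, not to particular stakeholders. A minimal sketch with invented satisfaction scores:

```python
def owa(values, weights):
    """Ordered weighted averaging: the k-th weight multiplies the k-th
    largest value, so the weight vector encodes the decision attitude
    (optimistic, pessimistic, or anything in between)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "OWA weights must sum to 1"
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

# Hypothetical satisfaction scores from three stakeholders.
scores = [0.4, 0.9, 0.6]
mean_like = owa(scores, [1 / 3, 1 / 3, 1 / 3])  # plain arithmetic mean
optimistic = owa(scores, [1.0, 0.0, 0.0])       # equals the maximum
pessimistic = owa(scores, [0.0, 0.0, 1.0])      # equals the minimum
```

A prioritized variant, as in the paper, would make the weights themselves depend on the satisfaction of higher-priority actors; the scheme above is only the unprioritized building block.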

  1. A time Fourier analysis of zonal averaged ozone heating rates

    Science.gov (United States)

    Wang, P.-H.; Wu, M.-F.; Deepak, A.; Hong, S.-S.

    1981-01-01

    A time-Fourier analysis is presented for the yearly variation of the zonally averaged ozone heating rates in the middle atmosphere, based on a model study. The ozone heating rates are determined from two-dimensional (altitude-latitude) ozone distributions, including the effect of the curvature of the earth's atmosphere. In addition, assumptions are introduced about the yearly variations of the ozone distributions owing to the lack of sufficient existing ozone data. Among other results, it is shown that the first harmonic component indicates that the heating rates are completely out of phase between the northern and southern hemispheres. The second Fourier component shows a symmetric pattern with respect to the equator, as well as five distinct local extreme values of the ozone heating rate. The third harmonic component shows a pattern close to that of the first component, except in the regions above 70 deg latitude between 45 and 95 km in both hemispheres.
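The time-Fourier (harmonic) decomposition applied here can be sketched on one synthetic yearly cycle. The sample values below are invented monthly heating rates; the routine recovers amplitude and phase for each harmonic of the annual cycle.

```python
import math

def fourier_harmonics(samples, n_harmonics):
    """Harmonic analysis of one full cycle of equally spaced data
    (e.g. twelve monthly zonal-mean values): returns the annual mean and
    (amplitude, phase) for each requested harmonic."""
    n = len(samples)
    mean = sum(samples) / n
    harmonics = []
    for k in range(1, n_harmonics + 1):
        a = 2.0 / n * sum(x * math.cos(2 * math.pi * k * i / n)
                          for i, x in enumerate(samples))
        b = 2.0 / n * sum(x * math.sin(2 * math.pi * k * i / n)
                          for i, x in enumerate(samples))
        harmonics.append((math.hypot(a, b), math.atan2(b, a)))
    return mean, harmonics

# Synthetic yearly cycle (hypothetical heating rates): an annual wave of
# amplitude 3 plus a semiannual wave of amplitude 1 around a mean of 5.
rates = [5 + 3 * math.cos(2 * math.pi * m / 12)
           + 1 * math.cos(2 * math.pi * 2 * m / 12) for m in range(12)]
mean, harms = fourier_harmonics(rates, 3)
```

An out-of-phase pattern between hemispheres, as the abstract reports for the first harmonic, would appear as phases differing by pi at mirror-image latitudes.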

  2. Quantitative metagenomic analyses based on average genome size normalization

    DEFF Research Database (Denmark)

    Frank, Jeremy Alexander; Sørensen, Søren Johannes

    2011-01-01

    Over the past quarter-century, microbiologists have used DNA sequence information to aid in the characterization of microbial communities. During the last decade, this has expanded from single genes to microbial community genomics, or metagenomics, in which the gene content of an environment can...... provide not just a census of the community members but direct information on metabolic capabilities and potential interactions among community members. Here we introduce a method for the quantitative characterization and comparison of microbial communities based on the normalization of metagenomic data...... by estimating average genome sizes. This normalization can relieve comparative biases introduced by differences in community structure, number of sequencing reads, and sequencing read lengths between different metagenomes. We demonstrate the utility of this approach by comparing metagenomes from two different...
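The normalization idea can be sketched as expressing gene-family counts per "genome equivalent" rather than per raw read. The function names and all numbers below are mine, chosen only to illustrate the bias the abstract describes: with the same organismal abundance, a community with a larger average genome size (AGS) yields more reads overall.

```python
def genome_equivalents(total_bp, avg_genome_size):
    """Number of genome equivalents sequenced: total base pairs divided
    by the estimated average genome size of the community."""
    return total_bp / avg_genome_size

def normalized_abundance(gene_hits, total_bp, avg_genome_size):
    """Gene-family abundance per genome equivalent. Dividing by genome
    equivalents (instead of raw read counts) relieves the comparative
    bias introduced by differing community genome sizes."""
    return gene_hits / genome_equivalents(total_bp, avg_genome_size)

# Hypothetical comparison: two metagenomes with identical hit counts and
# sequencing effort but different average genome sizes.
a = normalized_abundance(gene_hits=500, total_bp=2.0e9, avg_genome_size=4.0e6)
b = normalized_abundance(gene_hits=500, total_bp=2.0e9, avg_genome_size=2.0e6)
# The same raw count means twice the per-genome abundance in community a.
```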

  3. Ocean tides in GRACE monthly averaged gravity fields

    DEFF Research Database (Denmark)

    Knudsen, Per

    2003-01-01

    The GRACE mission will map the Earth's gravity field and its variations with unprecedented accuracy during its 5-year lifetime. Unless ocean tide signals and their load upon the solid earth are removed from the GRACE data, their long-period aliases obscure more subtle climate signals which GRACE...... aims at. In this analysis the results of Knudsen and Andersen (2002) have been verified using actual post-launch orbit parameters of the GRACE mission. The current ocean tide models are not accurate enough to correct GRACE data at harmonic degrees lower than 47. The accumulated tidal errors may affect...... the GRACE data up to harmonic degree 60. A study of the revised alias frequencies confirms that the ocean tide errors will not cancel in the GRACE monthly averaged temporal gravity fields. The S-2 and the K-2 terms have alias frequencies much longer than 30 days, so they remain almost unreduced...

  4. Volume Averaging Theory (VAT) based modeling and closure evaluation for fin-and-tube heat exchangers

    Science.gov (United States)

    Zhou, Feng; Catton, Ivan

    2012-10-01

    A fin-and-tube heat exchanger was modeled based on Volume Averaging Theory (VAT) in such a way that the details of the original structure were replaced by their averaged counterparts, so that the VAT-based governing equations can be efficiently solved for a wide range of parameters. To complete the VAT-based model, proper closure is needed, which is related to a local friction factor and a heat transfer coefficient of a Representative Elementary Volume (REV). The terms in the closure expressions are complex, and relating experimental data to the closure terms is sometimes difficult. In this work we use CFD to evaluate the rigorously derived closure terms over one of the selected REVs. The objective is to show how heat exchangers can be modeled as porous media and how CFD can be used in place of a detailed, often formidable, experimental effort to obtain closure for the model.

  5. Quantum oscillations in one-dimensional metal rings: Average over disorder

    International Nuclear Information System (INIS)

    Li, Q.; Soukoulis, C.M.

    1986-01-01

    We study the Aharonov-Bohm effect in single normal-metal rings and show that averaging the transmission coefficient T over disorder gives oscillations with a period of a half-flux quantum. As the elastic scattering gets stronger, the periodicity of oscillation of the conductance, which is related to T, gradually changes to a full-flux quantum, in agreement with recent experiments

  6. Average glandular dose in digital mammography and breast tomosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Olgar, T. [Ankara Univ. (Turkey). Dept. of Engineering Physics; Universitaetsklinikum Leipzig AoeR (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie; Kahn, T.; Gosch, D. [Universitaetsklinikum Leipzig AoeR (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie

    2012-10-15

    Purpose: To determine the average glandular dose (AGD) in digital full-field mammography (2 D imaging mode) and in breast tomosynthesis (3 D imaging mode). Materials and Methods: Using the method described by Boone, the AGD was calculated from the exposure parameters of 2247 conventional 2 D mammograms and 984 mammograms in 3 D imaging mode of 641 patients examined with the digital mammographic system Hologic Selenia Dimensions. The breast glandular tissue content was estimated by the Hologic R2 Quantra automated volumetric breast density measurement tool for each patient from right craniocaudal (RCC) and left craniocaudal (LCC) images in 2 D imaging mode. Results: The mean compressed breast thickness (CBT) was 52.7 mm for craniocaudal (CC) and 56.0 mm for mediolateral oblique (MLO) views. The mean percentage of breast glandular tissue content was 18.0 % and 17.4 % for RCC and LCC projections, respectively. The mean AGD values in 2 D imaging mode per exposure for the standard breast were 1.57 mGy and 1.66 mGy, while the mean AGD values after correction for real breast composition were 1.82 mGy and 1.94 mGy for CC and MLO views, respectively. The mean AGD values in 3 D imaging mode per exposure for the standard breast were 2.19 mGy and 2.29 mGy, while the mean AGD values after correction for the real breast composition were 2.53 mGy and 2.63 mGy for CC and MLO views, respectively. No significant relationship was found between the AGD and CBT in 2 D imaging mode, whereas a good correlation (coefficient 0.98) was found in 3 D imaging mode. Conclusion: In this study the mean calculated AGD per exposure in 3 D imaging mode was on average 34 % higher than in 2 D imaging mode for patients examined with the same CBT.

  7. Average glandular dose in digital mammography and breast tomosynthesis

    International Nuclear Information System (INIS)

    Olgar, T.; Universitaetsklinikum Leipzig AoeR; Kahn, T.; Gosch, D.

    2012-01-01

    Purpose: To determine the average glandular dose (AGD) in digital full-field mammography (2 D imaging mode) and in breast tomosynthesis (3 D imaging mode). Materials and Methods: Using the method described by Boone, the AGD was calculated from the exposure parameters of 2247 conventional 2 D mammograms and 984 mammograms in 3 D imaging mode of 641 patients examined with the digital mammographic system Hologic Selenia Dimensions. The breast glandular tissue content was estimated by the Hologic R2 Quantra automated volumetric breast density measurement tool for each patient from right craniocaudal (RCC) and left craniocaudal (LCC) images in 2 D imaging mode. Results: The mean compressed breast thickness (CBT) was 52.7 mm for craniocaudal (CC) and 56.0 mm for mediolateral oblique (MLO) views. The mean percentage of breast glandular tissue content was 18.0 % and 17.4 % for RCC and LCC projections, respectively. The mean AGD values in 2 D imaging mode per exposure for the standard breast were 1.57 mGy and 1.66 mGy, while the mean AGD values after correction for real breast composition were 1.82 mGy and 1.94 mGy for CC and MLO views, respectively. The mean AGD values in 3 D imaging mode per exposure for the standard breast were 2.19 mGy and 2.29 mGy, while the mean AGD values after correction for the real breast composition were 2.53 mGy and 2.63 mGy for CC and MLO views, respectively. No significant relationship was found between the AGD and CBT in 2 D imaging mode, whereas a good correlation (coefficient 0.98) was found in 3 D imaging mode. Conclusion: In this study the mean calculated AGD per exposure in 3 D imaging mode was on average 34 % higher than in 2 D imaging mode for patients examined with the same CBT.

  8. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
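The BMA predictive distribution being calibrated here is a weighted mixture of densities, one per ensemble member. The sketch below uses Gaussian components and invented weights, forecasts, and spreads; in practice the weights and variances are exactly what EM or DREAM-MCMC would estimate from training data.

```python
import math

def bma_pdf(y, forecasts, weights, sigmas):
    """BMA predictive density at y: a weighted mixture of Gaussians,
    each centred on one (bias-corrected) member forecast."""
    return sum(
        w * math.exp(-0.5 * ((y - f) / s) ** 2) / (s * math.sqrt(2 * math.pi))
        for f, w, s in zip(forecasts, weights, sigmas)
    )

# Three hypothetical ensemble members forecasting surface temperature (deg C).
forecasts = [21.0, 22.5, 20.0]
weights = [0.5, 0.3, 0.2]   # must sum to 1; would be estimated by EM/DREAM
sigmas = [1.0, 1.5, 1.2]    # member spreads, also estimated in training

# Sanity check: the mixture integrates to one (coarse Riemann sum).
grid = [10 + 0.01 * i for i in range(2401)]  # 10..34 deg C
area = sum(bma_pdf(y, forecasts, weights, sigmas) * 0.01 for y in grid)
```

The density is highest near the heavily weighted members, which is what makes the estimated weights directly interpretable as member skill.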

  9. Strengthened glass for high average power laser applications

    International Nuclear Information System (INIS)

    Cerqua, K.A.; Lindquist, A.; Jacobs, S.D.; Lambropoulos, J.

    1987-01-01

    Recent advancements in high repetition rate and high average power laser systems have put increasing demands on the development of improved solid-state laser materials with high thermal loading capabilities. The authors have developed a process for strengthening a commercially available Nd-doped phosphate glass utilizing an ion-exchange process. Results of thermal loading fracture tests on moderate-size (160 x 15 x 8 mm) glass slabs have shown a 6-fold improvement in power loading capabilities for strengthened samples over unstrengthened slabs. Fractographic analysis of post-fracture samples has given insight into the mechanism of fracture in both unstrengthened and strengthened samples. Additional stress analysis calculations have supported these findings. In addition to processing the glass surface during strengthening in a manner which preserves its post-treatment optical quality, the authors have developed an in-house optical fabrication technique utilizing acid polishing to minimize subsurface damage in samples prior to exchange treatment. Finally, extension of the strengthening process to alternate geometries of laser glass has produced encouraging results, which may expand the potential of strengthened glass in laser systems, making it an exciting prospect for many applications.

  10. An Exponentially Weighted Moving Average Control Chart for Bernoulli Data

    DEFF Research Database (Denmark)

    Spliid, Henrik

    2010-01-01

    We consider a production process in which units are produced in a sequential manner. The units can, for example, be manufactured items or services, provided to clients. Each unit produced can be a failure with probability p or a success (non-failure) with probability (1-p). A novel exponentially ...... weighted moving average (EWMA) control chart intended for surveillance of the probability of failure, p, is described. The chart is based on counting the number of non-failures produced between failures in combination with a variance-stabilizing transformation. The distribution function...... of the transformation is given and its limit for small values of p is derived. Control of high yield processes is discussed and the chart is shown to perform very well in comparison with both the most common alternative EWMA chart and the CUSUM chart. The construction and the use of the proposed EWMA chart...... are described and a practical example is given. It is demonstrated how the method communicates the current failure probability in a direct and interpretable way, which makes it well suited for surveillance of a great variety of activities in industry or in the service sector such as in hospitals, for example...
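The chart's two ingredients, counting non-failures between failures and smoothing a transformed count with an EWMA recursion, can be sketched directly. The paper uses its own variance-stabilizing transformation, which is not reproduced here; the log transform below is a generic stand-in, and the run data are invented.

```python
import math

def runs_between_failures(outcomes):
    """Counts of successes observed between consecutive failures
    (0 = success, 1 = failure)."""
    runs, count = [], 0
    for y in outcomes:
        if y == 1:
            runs.append(count)
            count = 0
        else:
            count += 1
    return runs

def ewma(series, lam, start):
    """EWMA recursion z_t = lam * x_t + (1 - lam) * z_{t-1}: long runs of
    non-failures push the statistic up, short runs pull it down."""
    z, out = start, []
    for x in series:
        z = lam * x + (1 - lam) * z
        out.append(z)
    return out

# Hypothetical high-yield process with p around 1/50.
outcomes = [0] * 49 + [1] + [0] * 30 + [1] + [0] * 70 + [1]
transformed = [math.log1p(r) for r in runs_between_failures(outcomes)]
chart = ewma(transformed, lam=0.2, start=math.log1p(49))
```

Control limits around the chart values (omitted here) would come from the variance of the transformed statistic, which is where the paper's variance-stabilizing transformation matters.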

  11. Average glandular dose in patients submitted to mammography exams

    Energy Technology Data Exchange (ETDEWEB)

    Gomes, Danielle S.; Barragan, Carolina V.M.; Costa, Katiane C.; Donato, Sabrina; Castro, William J.; Nogueira, Maria S., E-mail: dsg@cdtn.br, E-mail: kcc@cdtn.br, E-mail: sds@cdtn.br, E-mail: wjc@cdtn.br, E-mail: mnogue@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN), Belo Horizonte, MG (Brazil); Rezende, Adriana M.L. [Clinica Radiologica Davi Rezende, Belo Horizonte, MG (Brazil); Pinheiro, Luciana J.S. [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN), Belo Horizonte, MG (Brazil). Post-graduation in Sciences and Technology of Radiations, Minerals and Materials; Oliveira, Marcio A. de [Superintendencia Estadual de Vigilancia Sanitaria, Belo Horizonte, MG (Brazil)

    2011-07-01

    Doses in mammography should be kept as low as possible, however without reducing the standards of image quality necessary for an early detection of breast cancer. Because the breast is composed of soft tissues of similar composition and density, detection of the small changes in normal anatomical structures that may be associated with breast cancer is difficult. In order to achieve the standards of resolution and contrast for mammography, the quality and intensity of the X-ray beam, breast positioning and compression, the film-screen system, and the film processing must be in optimal operational condition. This study aims at evaluating the average glandular dose in patients undergoing routine tests in a mammography unit in the city of Belo Horizonte. Patient image analysis was done by a radiologist who took into account 10 evaluation criteria for each of the CC and MLO incidences. For each patient, the glandular dose was estimated, and the radiographic technique parameters (kV and mA.s) as well as the thickness of the compressed breast were recorded. European image quality criteria were adopted by the radiologist in order to make the image acceptable for diagnostic purposes. For breast compositions of 50%/50%, 70%/30% and 30%/70% adipose/glandular tissue, the incident air kerma was measured and the glandular dose calculated, taking into account the X-ray output during the test. In the study carried out with 63 patients, the mean glandular dose varied by 30% between the CC and MLO incidences. (author)

  12. Principles of resonance-averaged gamma-ray spectroscopy

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1981-01-01

    The unambiguous determination of excitation energies, spins, parities, and other properties of nuclear levels is the paramount goal of the nuclear spectroscopist. All developments of nuclear models depend upon the availability of a reliable data base on which to build. In this regard, slow neutron capture gamma-ray spectroscopy has proved to be a valuable tool. The observation of primary radiative transitions connecting initial and final states can provide definite level positions. In particular, the use of the resonance-averaged capture technique has received much recent attention because of the claims advanced for this technique (Chrien 1980a, Casten 1980): that it is able to identify all states in a given spin-parity range and to provide definite spin-parity information for these states. In view of the importance of this method, it is perhaps surprising that until now no firm analytical basis has been provided which delineates its capabilities and limitations. Such an analysis is necessary to establish the spin-parity assignments derived from this method on a quantitative basis; in other words, a quantitative statement of the limits of error must be provided. It is the principal aim of the present paper to present such an analysis. To do this, a historical description of the technique and its applications is presented and the principles of the method are stated. Finally, a method of statistical analysis is described, and the results are applied to recent measurements carried out at the filtered beam facilities at the Brookhaven National Laboratory.

  13. Accelerated Distributed Dual Averaging Over Evolving Networks of Growing Connectivity

    Science.gov (United States)

    Liu, Sijia; Chen, Pin-Yu; Hero, Alfred O.

    2018-04-01

    We consider the problem of accelerating distributed optimization in multi-agent networks by sequentially adding edges. Specifically, we extend the distributed dual averaging (DDA) subgradient algorithm to evolving networks of growing connectivity and analyze the corresponding improvement in convergence rate. It is known that the convergence rate of DDA is influenced by the algebraic connectivity of the underlying network, where better connectivity leads to faster convergence. However, the impact of network topology design on the convergence rate of DDA has not been fully understood. In this paper, we begin by designing network topologies via edge selection and scheduling. For edge selection, we determine the best set of candidate edges that achieves the optimal tradeoff between the growth of network connectivity and the usage of network resources. The dynamics of network evolution are then induced by edge scheduling. Further, we provide a tractable approach to analyze the improvement in the convergence rate of DDA induced by the growth of network connectivity. Our analysis reveals the connection between network topology design and the convergence rate of DDA, and provides a quantitative evaluation of DDA acceleration for distributed optimization that is absent in the existing analysis. Lastly, numerical experiments show that DDA can be significantly accelerated using a sequence of well-designed networks, and our theoretical predictions are well matched to its empirical convergence behavior.
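The DDA update the paper builds on mixes each agent's dual variable with its neighbours' (via doubly stochastic weights), adds the local subgradient, and maps back to the primal with a decaying step. The sketch below runs DDA on a toy problem of my own choosing: each of three agents holds a quadratic f_i(x) = (x - a_i)^2 / 2, so the global optimum is the mean of the a_i; the mixing matrix, targets, and step constant are all invented.

```python
def dual_averaging(mixing, targets, iters=4000, step=0.5):
    """Distributed dual averaging for min_x sum_i (x - a_i)^2 / 2:
    z_i(t+1) = sum_j W_ij z_j(t) + grad_i(t), x_i(t+1) = -alpha(t) z_i(t+1),
    with alpha(t) = step / sqrt(t). The optimum is mean(targets)."""
    n = len(targets)
    z = [0.0] * n
    x = [0.0] * n
    for t in range(1, iters + 1):
        grads = [x[i] - targets[i] for i in range(n)]  # local subgradients
        z = [sum(mixing[i][j] * z[j] for j in range(n)) + grads[i]
             for i in range(n)]
        alpha = step / t ** 0.5
        x = [-alpha * zi for zi in z]
    return x

# Hypothetical 3-agent network with doubly stochastic mixing weights.
W = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]
targets = [0.0, 3.0, 6.0]
x = dual_averaging(W, targets)  # all agents approach mean(targets) = 3
```

Better-connected mixing matrices (larger spectral gap) shrink the consensus error faster, which is exactly the lever the paper's edge selection and scheduling exploit.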

  14. Determination of the average lifetime of b-baryons

    CERN Document Server

    Abreu, P; Adye, T; Agasi, E; Ajinenko, I; Aleksan, Roy; Alekseev, G D; Alemany, R; Allport, P P; Almehed, S; Amaldi, Ugo; Amato, S; Andreazza, A; Andrieux, M L; Antilogus, P; Apel, W D; Arnoud, Y; Åsman, B; Augustin, J E; Augustinus, A; Baillon, Paul; Bambade, P; Barão, F; Barate, R; Barbi, M S; Bardin, Dimitri Yuri; Baroncelli, A; Bärring, O; Barrio, J A; Bartl, Walter; Bates, M J; Battaglia, Marco; Baubillier, M; Baudot, J; Becks, K H; Begalli, M; Beillière, P; Belokopytov, Yu A; Belous, K S; Benvenuti, Alberto C; Berggren, M; Bertrand, D; Bianchi, F; Bigi, M; Bilenky, S M; Billoir, P; Bloch, D; Blume, M; Blyth, S; Bolognese, T; Bonesini, M; Bonivento, W; Booth, P S L; Borisov, G; Bosio, C; Bosworth, S; Botner, O; Boudinov, E; Bouquet, B; Bourdarios, C; Bowcock, T J V; Bozzo, M; Branchini, P; Brand, K D; Brenke, T; Brenner, R A; Bricman, C; Brillault, L; Brown, R C A; Brückman, P; Brunet, J M; Bugge, L; Buran, T; Burgsmüller, T; Buschmann, P; Buys, A; Cabrera, S; Caccia, M; Calvi, M; Camacho-Rozas, A J; Camporesi, T; Canale, V; Canepa, M; Cankocak, K; Cao, F; Carena, F; Carroll, L; Caso, Carlo; Castillo-Gimenez, M V; Cattai, A; Cavallo, F R; Cerrito, L; Chabaud, V; Chapkin, M M; Charpentier, P; Chaussard, L; Chauveau, J; Checchia, P; Chelkov, G A; Chen, M; Chierici, R; Chliapnikov, P V; Chochula, P; Chorowicz, V; Chudoba, J; Cindro, V; Collins, P; Contreras, J L; Contri, R; Cortina, E; Cosme, G; Cossutti, F; Crawley, H B; Crennell, D J; Crosetti, G; Cuevas-Maestro, J; Czellar, S; Dahl-Jensen, Erik; Dahm, J; D'Almagne, B; Dam, M; Damgaard, G; Dauncey, P D; Davenport, Martyn; Da Silva, W; Defoix, C; Deghorain, A; Della Ricca, G; Delpierre, P A; Demaria, N; De Angelis, A; de Boer, Wim; De Brabandere, S; De Clercq, C; La Vaissière, C de; De Lotto, B; De Min, A; De Paula, L S; De Saint-Jean, C; Dijkstra, H; Di Ciaccio, Lucia; Djama, F; Dolbeau, J; Dönszelmann, M; Doroba, K; Dracos, M; Drees, J; Drees, K A; Dris, M; Dufour, Y; Edsall, D M; Ehret, R; Eigen, G; 
Ekelöf, T J C; Ekspong, Gösta; Elsing, M; Engel, J P; Ershaidat, N; Erzen, B; Falk, E; Fassouliotis, D; Feindt, Michael; Ferrer, A; Filippas-Tassos, A; Firestone, A; Fischer, P A; Föth, H; Fokitis, E; Fontanelli, F; Formenti, F; Franek, B J; Frenkiel, P; Fries, D E C; Frodesen, A G; Frühwirth, R; Fulda-Quenzer, F; Fuster, J A; Galloni, A; Gamba, D; Gandelman, M; García, C; García, J; Gaspar, C; Gasparini, U; Gavillet, P; Gazis, E N; Gelé, D; Gerber, J P; Gerdyukov, L N; Gibbs, M; Gokieli, R; Golob, B; Gopal, Gian P; Gorn, L; Górski, M; Guz, Yu; Gracco, Valerio; Graziani, E; Grosdidier, G; Grzelak, K; Gumenyuk, S A; Gunnarsson, P; Günther, M; Guy, J; Hahn, F; Hahn, S; Hajduk, Z; Hallgren, A; Hamacher, K; Hao, W; Harris, F J; Hedberg, V; Henriques, R P; Hernández, J J; Herquet, P; Herr, H; Hessing, T L; Higón, E; Hilke, Hans Jürgen; Hill, T S; Holmgren, S O; Holt, P J; Holthuizen, D J; Hoorelbeke, S; Houlden, M A; Hrubec, Josef; Huet, K; Hultqvist, K; Jackson, J N; Jacobsson, R; Jalocha, P; Janik, R; Jarlskog, C; Jarlskog, G; Jarry, P; Jean-Marie, B; Johansson, E K; Jönsson, L B; Jönsson, P E; Joram, Christian; Juillot, P; Kaiser, M; Kapusta, F; Karafasoulis, K; Karlsson, M; Karvelas, E; Katsanevas, S; Katsoufis, E C; Keränen, R; Khokhlov, Yu A; Khomenko, B A; Khovanskii, N N; King, B J; Kjaer, N J; Klein, H; Klovning, A; Kluit, P M; Köne, B; Kokkinias, P; Koratzinos, M; Korcyl, K; Kourkoumelis, C; Kuznetsov, O; Kramer, P H; Krammer, Manfred; Kreuter, C; Kronkvist, I J; Krumshtein, Z; Krupinski, W; Kubinec, P; Kucewicz, W; Kurvinen, K L; Lacasta, C; Laktineh, I; Lamblot, S; Lamsa, J; Lanceri, L; Lane, D W; Langefeld, P; Last, I; Laugier, J P; Lauhakangas, R; Leder, Gerhard; Ledroit, F; Lefébure, V; Legan, C K; Leitner, R; Lemoigne, Y; Lemonne, J; Lenzen, Georg; Lepeltier, V; Lesiak, T; Liko, D; Lindner, R; Lipniacka, A; Lippi, I; Lörstad, B; Loken, J G; López, J M; Loukas, D; Lutz, P; Lyons, L; MacNaughton, J N; Maehlum, G; Maio, A; Malychev, V; Mandl, F; Marco, J; 
Marco, R P; Maréchal, B; Margoni, M; Marin, J C; Mariotti, C; Markou, A; Maron, T; Martínez-Rivero, C; Martínez-Vidal, F; Martí i García, S; Masik, J; Matorras, F; Matteuzzi, C; Matthiae, Giorgio; Mazzucato, M; McCubbin, M L; McKay, R; McNulty, R; Medbo, J; Merk, M; Meroni, C; Meyer, S; Meyer, W T; Michelotto, M; Migliore, E; Mirabito, L; Mitaroff, Winfried A; Mjörnmark, U; Moa, T; Møller, R; Mönig, K; Monge, M R; Morettini, P; Müller, H; Mundim, L M; Murray, W J; Muryn, B; Myatt, Gerald; Naraghi, F; Navarria, Francesco Luigi; Navas, S; Nawrocki, K; Negri, P; Némécek, S; Neumann, W; Neumeister, N; Nicolaidou, R; Nielsen, B S; Nieuwenhuizen, M; Nikolaenko, V; Niss, P; Nomerotski, A; Normand, Ainsley; Oberschulte-Beckmann, W; Obraztsov, V F; Olshevskii, A G; Onofre, A; Orava, Risto; Österberg, K; Ouraou, A; Paganini, P; Paganoni, M; Pagès, P; Palka, H; Papadopoulou, T D; Papageorgiou, K; Pape, L; Parkes, C; Parodi, F; Passeri, A; Pegoraro, M; Peralta, L; Pernegger, H; Perrotta, A; Petridou, C; Petrolini, A; Petrovykh, M; Phillips, H T; Piana, G; Pierre, F; Pimenta, M; Pindo, M; Plaszczynski, S; Podobrin, O; Pol, M E; Polok, G; Poropat, P; Pozdnyakov, V; Prest, M; Privitera, P; Pukhaeva, N; Pullia, Antonio; Radojicic, D; Ragazzi, S; Rahmani, H; Ratoff, P N; Read, A L; Reale, M; Rebecchi, P; Redaelli, N G; Regler, Meinhard; Reid, D; Renton, P B; Resvanis, L K; Richard, F; Richardson, J; Rídky, J; Rinaudo, G; Ripp, I; Romero, A; Roncagliolo, I; Ronchese, P; Roos, L; Rosenberg, E I; Rosso, E; Roudeau, Patrick; Rovelli, T; Rückstuhl, W; Ruhlmann-Kleider, V; Ruiz, A; Rybicki, K; Saarikko, H; Sacquin, Yu; Sadovskii, A; Sajot, G; Salt, J; Sánchez, J; Sannino, M; Schimmelpfennig, M; Schneider, H; Schwickerath, U; Schyns, M A E; Sciolla, G; Scuri, F; Seager, P; Sedykh, Yu; Segar, A M; Seitz, A; Sekulin, R L; Shellard, R C; Siccama, I; Siegrist, P; Simonetti, S; Simonetto, F; Sissakian, A N; Sitár, B; Skaali, T B; Smadja, G; Smirnov, N; Smirnova, O G; Smith, G R; Solovyanov, O; 
Sosnowski, R; Souza-Santos, D; Spassoff, Tz; Spiriti, E; Sponholz, P; Squarcia, S; Stanescu, C; Stapnes, Steinar; Stavitski, I; Stichelbaut, F; Stocchi, A; Strauss, J; Strub, R; Stugu, B; Szczekowski, M; Szeptycka, M; Tabarelli de Fatis, T; Tavernet, J P; Chikilev, O G; Tilquin, A; Timmermans, J; Tkatchev, L G; Todorov, T; Toet, D Z; Tomaradze, A G; Tomé, B; Tonazzo, A; Tortora, L; Tranströmer, G; Treille, D; Trischuk, W; Tristram, G; Trombini, A; Troncon, C; Tsirou, A L; Turluer, M L; Tyapkin, I A; Tyndel, M; Tzamarias, S; Überschär, B; Ullaland, O; Uvarov, V; Valenti, G; Vallazza, E; Van der Velde, C; van Apeldoorn, G W; van Dam, P; Van Doninck, W K; Van Eldik, J; Vassilopoulos, N; Vegni, G; Ventura, L; Venus, W A; Verbeure, F; Verlato, M; Vertogradov, L S; Vilanova, D; Vincent, P; Vitale, L; Vlasov, E; Vodopyanov, A S; Vrba, V; Wahlen, H; Walck, C; Waldner, F; Weierstall, M; Weilhammer, Peter; Weiser, C; Wetherell, Alan M; Wicke, D; Wickens, J H; Wielers, M; Wilkinson, G R; Williams, W S C; Winter, M; Witek, M; Woschnagg, K; Yip, K; Yushchenko, O P; Zach, F; Zaitsev, A; Zalewska-Bak, A; Zalewski, Piotr; Zavrtanik, D; Zevgolatakos, E; Zimin, N I; Zito, M; Zontar, D; Zuberi, R; Zucchelli, G C; Zumerle, G; Belokopytov, Yu; Charpentier, Ph; Gavillet, Ph; Gouz, Yu; Jarlskog, Ch; Khokhlov, Yu; Papadopoulou, Th D

    1996-01-01

    The average lifetime of b-baryons has been studied using 3 \times 10^6 hadronic Z^0 decays collected by the DELPHI detector at LEP. Three methods have been used, based on the measurement of different observables: the proper decay time distribution of 206 vertices reconstructed with a \Lambda, a lepton and an oppositely charged pion; the impact parameter distribution of 441 muons with high transverse momentum accompanied by a \Lambda in the same jet; and the proper decay time distribution of 125 \Lambda_c-lepton decay vertices with the \Lambda_c exclusively reconstructed through its pK\pi, pK^0 and \Lambda 3\pi decay modes. The combined result is: \tau(b-baryon) = (1.25^{+0.13}_{-0.11} \pm 0.04(syst)^{+0.03}_{-0.05}(syst)) ps, where the first systematic error is due to experimental uncertainties and the second to the uncertainties in the modelling of the b-baryon production and semi-leptonic decay. Including the measurement recently published by DELPHI based on a sample of proton-m...

  15. Statistical properties of the gyro-averaged standard map

    Science.gov (United States)

    da Fonseca, Julio D.; Sokolov, Igor M.; Del-Castillo-Negrete, Diego; Caldas, Ibere L.

    2015-11-01

    A statistical study of the gyro-averaged standard map (GSM) is presented. The GSM is an area-preserving map model proposed as a simplified description of finite Larmor radius (FLR) effects on E x B chaotic transport in magnetized plasmas with zonal flows perturbed by drift waves. The GSM's effective perturbation parameter, gamma, is proportional to the zero-order Bessel function of the particle's Larmor radius. In the limit of zero Larmor radius, the GSM reduces to the standard (Chirikov-Taylor) map. We consider plasmas in thermal equilibrium and assume a Larmor radius probability density function (pdf) resulting from a Maxwell-Boltzmann distribution. Since the particles in general have different Larmor radii, each orbit is computed using a different perturbation parameter, gamma. We present analytical and numerical computations of the pdf of gamma for a Maxwellian distribution. We also compute the pdf of global chaos, which gives the probability that a particle with a given Larmor radius exhibits global chaos, i.e. the probability that Kolmogorov-Arnold-Moser (KAM) transport barriers do not exist.
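The distribution of the effective perturbation parameter can be sampled directly from its definition, gamma = K * J0(rho). The sketch below assumes Rayleigh-distributed Larmor radii (the speed-magnitude form of a 2D Maxwellian), an invented stochasticity parameter K, and computes J0 by midpoint quadrature of its integral representation; all three choices are illustrative assumptions, not the paper's setup.

```python
import math
import random

def bessel_j0(x, n=400):
    """J0 via its integral representation:
    J0(x) = (1/pi) * integral_0^pi cos(x sin t) dt (midpoint rule)."""
    h = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(n)) * h / math.pi

def sample_gamma(k0, n_particles, thermal_rho, rng):
    """Effective perturbation parameters gamma = K * J0(rho) for particles
    whose Larmor radii rho follow a Rayleigh distribution with scale
    `thermal_rho` (a thermal-equilibrium assumption for this sketch)."""
    gammas = []
    for _ in range(n_particles):
        # Inverse-CDF sampling of the Rayleigh distribution.
        rho = thermal_rho * math.sqrt(-2.0 * math.log(1.0 - rng.random()))
        gammas.append(k0 * bessel_j0(rho))
    return gammas

rng = random.Random(42)
gammas = sample_gamma(k0=3.0, n_particles=1000, thermal_rho=1.0, rng=rng)
# FLR averaging reduces (and can reverse the sign of) the effective kick:
# |gamma| <= K, with most particles seeing gamma strictly below K.
```

A histogram of `gammas` approximates the pdf of gamma discussed in the abstract; particles whose gamma falls below the global-chaos threshold of the standard map retain KAM barriers.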

  16. Perceptual learning in Williams syndrome: looking beyond averages.

    Directory of Open Access Journals (Sweden)

    Patricia Gervan

    Full Text Available Williams Syndrome is a genetically determined neurodevelopmental disorder characterized by an uneven cognitive profile and surprisingly large neurobehavioral differences among individuals. Previous studies have already shown different forms of memory deficiencies and learning difficulties in WS. Here we studied the capacity of WS subjects to improve their performance in a basic visual task. We employed a contour integration paradigm that addresses occipital visual function, and analyzed the initial (i.e. baseline and after-learning performance of WS individuals. Instead of pooling the very inhomogeneous results of WS subjects together, we evaluated individual performance by expressing it in terms of the deviation from the average performance of the group of typically developing subjects of similar age. This approach helped us to reveal information about the possible origins of poor performance of WS subjects in contour integration. Although the majority of WS individuals showed both reduced baseline and reduced learning performance, individual analysis also revealed a dissociation between baseline and learning capacity in several WS subjects. In spite of impaired initial contour integration performance, some WS individuals presented learning capacity comparable to learning in the typically developing population, and vice versa, poor learning was also observed in subjects with high initial performance levels. These data indicate a dissociation between factors determining initial performance and perceptual learning.
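Expressing each individual's performance as a deviation from the typically developing (TD) group average, rather than pooling a heterogeneous clinical group, amounts to computing a z-score against the TD distribution. The scores below are invented contour-integration values used only to illustrate the computation.

```python
def deviation_from_typical(score, typical_scores):
    """One participant's score as a z-score relative to a typically
    developing comparison group (sample standard deviation, n-1)."""
    n = len(typical_scores)
    mean = sum(typical_scores) / n
    var = sum((s - mean) ** 2 for s in typical_scores) / (n - 1)
    return (score - mean) / var ** 0.5

# Hypothetical thresholds: a TD comparison group and two WS participants.
td = [0.78, 0.82, 0.75, 0.80, 0.85, 0.79]
z_ws_low = deviation_from_typical(0.55, td)  # far below the TD average
z_ws_ok = deviation_from_typical(0.80, td)   # within the TD range
```

Computing the z-score separately for baseline and for after-learning performance is what makes the dissociation described in the abstract visible: a participant can be far below the TD mean on one measure and within range on the other.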

  17. Average accelerator simulation Truebeam using phase space in IAEA format

    International Nuclear Information System (INIS)

    Santana, Emico Ferreira; Milian, Felix Mas; Paixao, Paulo Oliveira; Costa, Raranna Alves da; Velasco, Fermin Garcia

    2015-01-01

    This paper uses a computational radiation-transport code based on the Monte Carlo technique to model a linear accelerator used for radiotherapy treatment. This work is the initial step of future proposals aiming to study several radiotherapy patient treatments, employing computational modeling in cooperation with the institutions UESC, IPEN, UFRJ and COI. The chosen simulation code is GATE/Geant4. The modeled accelerator is the Varian TrueBeam. The geometric modeling was based on technical manuals, and the radiation sources on the photon phase space provided by the manufacturer in the IAEA (International Atomic Energy Agency) format. The simulations were carried out under conditions equal to the experimental measurements. Photon beams of 6 MV with a 10 x 10 cm field, focused on a water phantom, were studied. For validation, depth-dose curves and lateral profiles at different depths were compared between simulated results and experimental data. The final model of this accelerator will be used in future works involving treatments and real patients. (author)

  18. Averaging interval selection for the calculation of Reynolds shear stress for studies of boundary layer turbulence.

    Science.gov (United States)

    Lee, Zoe; Baas, Andreas

    2013-04-01

    It is widely recognised that boundary layer turbulence plays an important role in sediment transport dynamics in aeolian environments. Improvements in the design and affordability of ultrasonic anemometers have provided significant contributions to studies of aeolian turbulence, by facilitating high-frequency monitoring of three-dimensional wind velocities. Consequently, research has moved beyond studies of mean airflow properties, to investigations into quasi-instantaneous turbulent fluctuations at high spatio-temporal scales. To fully understand how temporal fluctuations in shear stress drive wind erosivity and sediment transport, research into the best practice for calculating shear stress is necessary. This paper builds upon work published by Lee and Baas (2012) on the influence of streamline correction techniques on Reynolds shear stress, by investigating the time-averaging interval used in the calculation. Concerns relating to the selection of appropriate averaging intervals for turbulence research, where the data are typically non-stationary at all timescales, are well documented in the literature (e.g. Treviño and Andreas, 2000). For example, Finnigan et al. (2003) found that underestimating the required averaging interval can lead to a reduction in the calculated momentum flux, as contributions from turbulent eddies longer than the averaging interval are lost. To avoid the risk of underestimating fluxes, researchers have typically used the total measurement duration as a single averaging period. For non-stationary data, however, using the whole measurement run as a single block average is inadequate for defining turbulent fluctuations. The data presented in this paper were collected in a field study of boundary layer turbulence conducted at Tramore beach near Rosapenna, County Donegal, Ireland. High-frequency (50 Hz) 3D wind velocity measurements were collected using ultrasonic anemometry at thirteen different heights between 0.11 and 1.62 metres above
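To make the interval dependence concrete, here is a minimal synthetic illustration (not the authors' processing code; the eddy timescale and noise level are invented) of how block-averaged Reynolds shear stress loses the contribution of eddies longer than the averaging interval:

```python
import numpy as np

def reynolds_stress(u, w, fs, interval_s):
    # Kinematic Reynolds shear stress -<u'w'>: fluctuations are defined
    # against the mean of each block of `interval_s` seconds, so eddies
    # longer than the block are removed along with the block mean.
    n = int(interval_s * fs)
    stresses = []
    for start in range(0, len(u) - n + 1, n):
        ub = u[start:start + n]
        wb = w[start:start + n]
        stresses.append(-np.mean((ub - ub.mean()) * (wb - wb.mean())))
    return float(np.mean(stresses))

# Synthetic 50 Hz record: a 2-minute eddy plus white noise.  A 10 s
# averaging interval discards the slow eddy's momentum flux; a single
# 600 s block average retains it.
fs, duration = 50, 600
t = np.arange(duration * fs) / fs
rng = np.random.default_rng(1)
slow = np.sin(2.0 * np.pi * t / 120.0)
u = slow + 0.3 * rng.standard_normal(t.size)
w = -0.5 * slow + 0.3 * rng.standard_normal(t.size)
print(reynolds_stress(u, w, fs, 10), reynolds_stress(u, w, fs, 600))
```

The short-interval estimate collapses toward zero while the full-record estimate keeps the slow eddy's flux, which is exactly the underestimation effect Finnigan et al. (2003) describe.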

  19. The influence of different El Niño flavours on global average temperature

    Science.gov (United States)

    Donner, S. D.; Banholzer, S. P.

    2014-12-01

    The El Niño-Southern Oscillation is known to influence surface temperatures worldwide. El Niño conditions are thought to lead to anomalously warm global average surface temperature, absent other forcings. Recent research has identified distinct possible types or flavours of El Niño events, based on the location of peak sea surface temperature anomalies and other variables. Here we analyze the relationship between the type of El Niño event and the global average surface temperature anomaly, using three historical temperature data sets. Separating El Niño events into types or flavours reveals that global average surface temperatures are anomalously warm during and after canonical eastern Pacific El Niño events or "super" El Niños. However, the global average surface temperatures during and after central Pacific or "mixed" events, like the 2002-3 event, are not statistically distinct from those of neutral or other years. Historical analysis indicates that slowdowns in the rate of global surface warming since the late 1800s may be related to decadal variability in the frequency of different types of El Niño events.

  20. Consumer understanding of food labels: toward a generic tool for identifying the average consumer

    DEFF Research Database (Denmark)

    Sørensen, Henrik Selsøe; Holm, Lotte; Møgelvang-Hansen, Peter

    2013-01-01

    The ‘average consumer’ is referred to as a standard in regulatory contexts when attempts are made to benchmark how consumers are expected to reason while decoding food labels. An attempt is made to operationalize this hypothetical ‘average consumer’ by proposing a tool for measuring the level of informedness of an individual consumer against the national median at any time. Informedness, i.e. the individual consumer's ability to interpret correctly the meaning of the words and signs on a food label, is isolated as one essential dimension for dividing consumers into three groups: less-informed, informed... It is suggested that independent future studies of consumer behavior and decision making in relation to food products in different contexts could benefit from this type of benchmarking tool.

  1. A CORRELATION BETWEEN STAR FORMATION RATE AND AVERAGE BLACK HOLE ACCRETION IN STAR-FORMING GALAXIES

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Chien-Ting J.; Hickox, Ryan C. [Department of Physics and Astronomy, Dartmouth College, 6127 Wilder Laboratory, Hanover, NH 03755 (United States); Alberts, Stacey; Pope, Alexandra [Department of Astronomy, University of Massachusetts, Amherst, MA 01003 (United States); Brodwin, Mark [Department of Physics and Astronomy, University of Missouri, 5110 Rockhill Road, Kansas City, MO 64110 (United States); Jones, Christine; Forman, William R.; Goulding, Andrew D. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Murray, Stephen S. [Department of Physics and Astronomy, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218 (United States); Alexander, David M.; Mullaney, James R. [Department of Physics, Durham University, South Road, Durham DH1 3LE (United Kingdom); Assef, Roberto J.; Gorjian, Varoujan [Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109 (United States); Brown, Michael J. I. [School of Physics, Monash University, Clayton 3800, Victoria (Australia); Dey, Arjun; Jannuzi, Buell T. [National Optical Astronomy Observatory, Tucson, AZ 85726 (United States); Le Floc' h, Emeric, E-mail: ctchen@dartmouth.edu [Laboratoire AIM-Paris-Saclay, CEA/DSM/Irfu-CNRS-Universite Paris Diderot, CE-Saclay, pt courrier 131, F-91191 Gif-sur-Yvette (France)

    2013-08-10

    We present a measurement of the average supermassive black hole accretion rate (BHAR) as a function of the star formation rate (SFR) for galaxies in the redshift range 0.25 < z < 0.8. We study a sample of 1767 far-IR-selected star-forming galaxies in the 9 deg² Boötes multi-wavelength survey field. The SFR is estimated using 250 µm observations from the Herschel Space Observatory, for which the contribution from the active galactic nucleus (AGN) is minimal. In this sample, 121 AGNs are directly identified using X-ray or mid-IR selection criteria. We combined these detected AGNs with an X-ray stacking analysis for undetected sources to study the average BHAR for all of the star-forming galaxies in our sample. We find an almost linear relation between the average BHAR (in M⊙ yr⁻¹) and the SFR (in M⊙ yr⁻¹) for galaxies across a wide SFR range 0.85 < log SFR < 2.56: log BHAR = (-3.72 ± 0.52) + (1.05 ± 0.33) log SFR. This global correlation between SFR and average BHAR is consistent with a simple picture in which SFR and AGN activity are tightly linked over galaxy evolution timescales.
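The fitted relation can be applied directly. A small sketch using only the central values of the fit (the quoted uncertainties are ignored):

```python
import numpy as np

def average_bhar(sfr):
    # Central values of the fitted relation quoted above:
    # log BHAR = -3.72 + 1.05 * log SFR (both in M_sun per yr);
    # the quoted +/- uncertainties are ignored in this sketch.
    return 10.0 ** (-3.72 + 1.05 * np.log10(sfr))

# A galaxy forming stars at 100 M_sun/yr accretes on average about
# 0.024 M_sun/yr onto its black hole, i.e. the black hole grows a few
# thousand times more slowly than the stellar component.
print(average_bhar(100.0))
```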

  2. Time Average Holography Study of Human Tympanic Membrane with Altered Middle Ear Ossicular Chain

    Science.gov (United States)

    Cheng, Jeffrey T.; Ravicz, Michael E.; Rosowski, John J.; Hulli, Nesim; Hernandez-Montes, Maria S.; Furlong, Cosme

    2009-02-01

    Computer-assisted time average holographic interferometry was used to study the vibration of the human tympanic membrane (TM) in cadaveric temporal bones before and after alterations of the ossicular chain. Simultaneous laser Doppler vibrometer measurements of stapes velocity were performed to estimate the conductive hearing loss caused by ossicular alterations. The quantified TM motion described from holographic images was correlated with stapes velocity to define relations between TM motion and stapes velocity in various ossicular disorders. The results suggest that motions of the TM are relatively uncoupled from stapes motion at frequencies above 1000 Hz.

  3. Robust estimation of average twitch contraction forces of populations of motor units in humans.

    Science.gov (United States)

    Negro, Francesco; Orizio, Claudio

    2017-12-01

    The characteristics of motor unit force twitch profiles provide important information for understanding muscle force generation. The twitch force is commonly estimated with the spike-triggered averaging technique, which, despite its many limitations, has been important for clarifying central issues in force generation. In this study, we propose a new technique for estimating the average twitch profile of populations of motor units with uniform contractile properties. The method encompasses a model-based deconvolution of the force signal using the identified discharge times of a population of motor units. The proposed technique was validated using simulations and tested on signals recorded during voluntary activation. The results of the simulations showed that the proposed method provides accurate estimates (in terms of relative error) of the twitch force when the number of identified motor units is between 5% and 15% of the total number of active motor units. It is discussed that current detection and decomposition methods of multi-channel surface EMG signals allow decoding this relative sample of the active motor unit pool. However, even when this condition is not met, our results show that the estimates provided by the new method are always superior to those obtained by the spike-triggered averaging approach, especially for high motor unit synchronization levels and when a relatively small number of triggers is available. In conclusion, we present a new method that overcomes the main limitations of spike-triggered averaging for the study of contractile properties of individual motor units. The method provides a reliable new tool for investigating the determinants of muscle force. Copyright © 2017 Elsevier Ltd. All rights reserved.
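For contrast with the deconvolution approach, the baseline spike-triggered averaging technique can be sketched in a few lines (a generic illustration on synthetic data; window length, twitch shape and firing statistics are arbitrary choices, not values from the paper):

```python
import numpy as np

def spike_triggered_average(force, spike_idx, n_samples):
    # Average the force windows that follow each identified discharge
    # time; contributions from the other active units tend to average out.
    wins = [force[i:i + n_samples] for i in spike_idx
            if i + n_samples <= len(force)]
    return np.mean(wins, axis=0)

# Synthetic check: one unit with an alpha-function-like twitch, firing
# on a noisy force background, is recovered by the average.
rng = np.random.default_rng(2)
k = np.arange(200)
twitch = (k / 40.0) * np.exp(-k / 40.0)
spikes = np.sort(rng.choice(np.arange(0, 50000, 250), size=150, replace=False))
force = 0.5 * rng.standard_normal(50200)
for s in spikes:
    force[s:s + 200] += twitch
sta = spike_triggered_average(force, spikes, 200)
```

With correlated (synchronized) units or few triggers, the cancellation assumption in this baseline fails, which is the limitation the paper's deconvolution method addresses.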

  4. A spatially-averaged mathematical model of kidney branching morphogenesis

    KAUST Repository

    Zubkov, V.S.

    2015-08-01

    © 2015 Published by Elsevier Ltd. Kidney development is initiated by the outgrowth of an epithelial ureteric bud into a population of mesenchymal cells. Reciprocal morphogenetic responses between these two populations generate a highly branched epithelial ureteric tree with the mesenchyme differentiating into nephrons, the functional units of the kidney. While we understand some of the mechanisms involved, current knowledge fails to explain the variability of organ sizes and nephron endowment in mice and humans. Here we present a spatially-averaged mathematical model of kidney morphogenesis in which the growth of the two key populations is described by a system of time-dependent ordinary differential equations. We assume that branching is symmetric and is invoked when the number of epithelial cells per tip reaches a threshold value. This process continues until the number of mesenchymal cells falls below a critical value that triggers cessation of branching. The mathematical model and its predictions are validated against experimentally quantified C57Bl6 mouse embryonic kidneys. Numerical simulations are performed to determine how the final number of branches changes as key system parameters are varied (such as the growth rate of tip cells, mesenchyme cells, or component cell population exit rate). Our results predict that the developing kidney responds differently to loss of cap and tip cells. They also indicate that the final number of kidney branches is less sensitive to changes in the growth rate of the ureteric tip cells than to changes in the growth rate of the mesenchymal cells. By inference, increasing the growth rate of mesenchymal cells should maximise branch number. Our model also provides a framework for predicting the branching outcome when ureteric tip or mesenchyme cells change behaviour in response to different genetic or environmental developmental stresses.
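The structure of such a model can be caricatured in a few lines. The sketch below (all rates and thresholds are invented placeholders, not the paper's fitted C57Bl6 parameters) grows the two populations, doubles the tip number whenever cells-per-tip crosses a threshold, and stops branching once the mesenchyme is depleted:

```python
def simulate_branching(r_tip=0.08, r_mes=0.05, cells_per_tip_thresh=100.0,
                       mes_stop=50.0, epi0=20.0, mes0=2000.0,
                       dt=0.01, t_max=300.0):
    # Epithelial (tip) cells grow, mesenchymal cells are consumed as they
    # differentiate; the tree branches symmetrically whenever the number
    # of epithelial cells per tip reaches the threshold, and branching
    # ceases once the mesenchyme falls below the critical value.
    tips, epi, mes, branches = 1, epi0, mes0, 1
    for _ in range(int(t_max / dt)):
        epi += dt * r_tip * epi
        mes -= dt * r_mes * mes
        if mes < mes_stop:
            break
        if epi / tips >= cells_per_tip_thresh:
            tips *= 2
            branches += 1
    return branches, tips

print(simulate_branching())
```

Varying `r_mes` in this toy changes the branch count far more than varying `r_tip`, mirroring the sensitivity ordering reported in the abstract.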

  5. A Genome Scan for Quantitative Trait Loci Affecting Average Daily ...

    Indian Academy of Sciences (India)

    reviewer

    reproductive system, cell proliferation and differentiation, protein folding and levels of gene transcription, thereby affecting muscle growth and fat deposition in sheep. In different periods of the ADG and KR traits, some of the significant markers were the same and some were different. The records related to ADG and KR traits are ...

  6. Estimation of average bioburden values on flexible gastrointestinal ...

    African Journals Online (AJOL)

    Infections related to flexible endoscopic procedures are caused by either endogenous flora or exogenous microbes. The first major challenge of reprocessing is infection control, most episodes of infection can be traced to procedural errors in cleaning and disinfecting, the second major challenge is to protect personnel and ...

  7. Grain centre mapping - 3DXRD measurements of average grain characteristics

    DEFF Research Database (Denmark)

    Oddershede, Jette; Schmidt, Søren; Lyckegaard, Allan

    2014-01-01

    and the closely related boxscan method is given. Both validation experiments and applications for in situ studies of microstructural changes during plastic deformation and crack growth are presented. Finally, an outlook is given, with special emphasis on coupling the measured results with modelling.

  8. Estimation of average bioburden values on flexible gastrointestinal ...

    African Journals Online (AJOL)

    Medhat Mohammed Anwar Hamed

    2014-06-21

    Jun 21, 2014 ... between models from the same manufacturer. However, all flexible endoscopes have the same basic components. Infections related to flexible endoscopic procedures are caused by either endogenous flora or exogenous microbes. The first major challenge of reprocessing is infection control, most epi-.

  9. Averaging hydraulic head, pressure head, and gravitational head in subsurface hydrology, and implications for averaged fluxes, and hydraulic conductivity

    Directory of Open Access Journals (Sweden)

    G. H. de Rooij

    2009-07-01

    Full Text Available Current theories for water flow in porous media are valid for scales much smaller than those at which problems of public interest manifest themselves. This provides a drive for upscaled flow equations with their associated upscaled parameters. Upscaling is often achieved through volume averaging, but the solution to the resulting closure problem imposes severe restrictions on the flow conditions, which limits the practical applicability. Here, the derivation of a closed expression of the effective hydraulic conductivity is forfeited to circumvent the closure problem. Thus, more limited but practical results can be derived. At the Representative Elementary Volume scale and larger scales, the gravitational potential and fluid pressure are treated as additive potentials. The necessary requirement that the superposition be maintained across scales is combined with conservation of energy during volume integration to establish consistent upscaling equations for the various heads. The power of these upscaling equations is demonstrated by the derivation of upscaled water content-matric head relationships and the resolution of an apparent paradox reported in the literature that is shown to have arisen from a violation of the superposition principle. Applying the upscaling procedure to Darcy's Law leads to the general definition of an upscaled hydraulic conductivity. By examining this definition in detail for porous media with different degrees of heterogeneity, a series of criteria is derived that must be satisfied for Darcy's Law to remain valid at a larger scale.
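As a concrete anchor for the upscaled-conductivity discussion, the classical layered-medium results (a textbook special case, stated here for orientation rather than taken from this paper's procedure) show how the effective K depends on flow direction:

```python
import numpy as np

# Layered porous medium: for steady Darcy flow the effective hydraulic
# conductivity is the thickness-weighted harmonic mean across the layers
# and the thickness-weighted arithmetic mean along them (classical
# results; layer values below are invented for illustration).
K = np.array([1.0, 0.01, 1.0])   # layer conductivities, m/day
d = np.array([1.0, 0.5, 1.0])    # layer thicknesses, m

K_series = d.sum() / np.sum(d / K)        # flow perpendicular to layers
K_parallel = np.sum(d * K) / d.sum()      # flow parallel to layers
print(K_series, K_parallel)
```

The order-of-magnitude gap between the two means illustrates why a single upscaled K is direction- and structure-dependent, consistent with the validity criteria derived in the paper.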

  10. Potential breeding distributions of U.S. birds predicted with both short-term variability and long-term average climate data

    Science.gov (United States)

    Brooke L. Bateman; Anna M. Pidgeon; Volker C. Radeloff; Curtis H. Flather; Jeremy VanDerWal; H. Resit Akcakaya; Wayne E. Thogmartin; Thomas P. Albright; Stephen J. Vavrus; Patricia J. Heglund

    2016-01-01

    Climate conditions, such as temperature or precipitation, averaged over several decades strongly affect species distributions, as evidenced by experimental results and a plethora of models demonstrating statistical relations between species occurrences and long-term climate averages. However, long-term averages can conceal climate changes that have occurred in...

  11. MONTHLY AVERAGE FLOW IN RÂUL NEGRU HYDROGRAPHIC BASIN

    Directory of Open Access Journals (Sweden)

    VIGH MELINDA

    2014-03-01

    Full Text Available The Râul Negru hydrographic basin is a well-individualised and relatively homogeneous physical-geographical unit of the Braşov Depression. The flow is monitored by six hydrometric stations placed on the main collector and on two of the most powerful tributaries. Our analysis period covers the last 25 years (1988-2012), which is sufficient to draw pertinent conclusions. The maximum-discharge month is April, which falls within the high-flow period of March-June. Minimum discharges appear in November, because of the lack of pluvial precipitation, and in January, because of high solid precipitation and the retention of water volume in ice. Extreme discharge frequencies vary according to station position: in the mountain area, with small basin surface, and in the depression, with large basin surface. Variation coefficients point out very similar variation principles, showing a relative homogeneity of flow processes.

  12. Mental health care and average happiness: strong effect in developed nations.

    Science.gov (United States)

    Touburg, Giorgio; Veenhoven, Ruut

    2015-07-01

    Mental disorder is a main cause of unhappiness in modern society and investment in mental health care is therefore likely to add to average happiness. This prediction was checked in a comparison of 143 nations around 2005. Absolute investment in mental health care was measured using the per capita number of psychiatrists and psychologists working in mental health care. Relative investment was measured using the share of mental health care in the total health budget. Average happiness in nations was measured with responses to survey questions about life-satisfaction. Average happiness appeared to be higher in countries that invest more in mental health care, both absolutely and relative to investment in somatic medicine. A data split by level of development shows that this difference exists only among developed nations. Among these nations the link between mental health care and happiness is quite strong, both in an absolute sense and compared to other known societal determinants of happiness. The correlation between happiness and share of mental health care in the total health budget is twice as strong as the correlation between happiness and size of the health budget. A causal effect is likely, but cannot be proved in this cross-sectional analysis.

  13. Signal-averaged electrocardiogram in chronic Chagas' heart disease

    Directory of Open Access Journals (Sweden)

    Aguinaldo Pereira de Moraes

    Full Text Available The aim of the study was to register the prevalence of late potentials (LP) in patients with chronic Chagas' heart disease (CCD) and the relationship with sustained ventricular tachycardia (SVT). 192 patients (96 males, mean age 42.9 years) with CCD were studied through a signal-averaged ECG using time-domain analysis. According to the presence or absence of bundle branch block (BBB) and SVT, four groups of patients were created: Group I (n = 72): without SVT (VT-) and without BBB (BBB-); Group II (n = 27): with SVT (VT+) and BBB-; Group III (n = 63): VT- and with BBB (BBB+); and Group IV (n = 30): VT+ and BBB+. LP was identified, with a 40 Hz filter, in the groups without BBB using the standard criteria of the method. In the groups with BBB, the root-mean-square amplitude of the last 40 ms (RMS ≤ 14 µV) was considered an indicator of LP. RESULTS: In groups I and II, LP was present in 21 (78%) of the patients with SVT and in 22 (31%) of the patients without SVT (p < 0.001), with Sensitivity (S) 78%, Specificity (SP) 70% and Accuracy (Ac) 72%. In groups III and IV, LP was present in 30 (48%) of the patients without and 20 (67%) of the patients with SVT (p = 0.066), with S = 66%, SP = 52%, and Ac = 57%. In the follow-up, there were 4 deaths unrelated to arrhythmic events; none of these patients had LP. Eight (29.6%) of the patients from group II and 4 (13%) from group IV presented recurrence of SVT, and 91.6% of these patients had LP. CONCLUSIONS: LP occurred in 77.7% of the patients with SVT and without BBB. In the groups with BBB, there was an association of LP with SVT in 66.6% of the cases. Recurrence of SVT was present in 21% of the cases, of which 91.6% had LP.
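The diagnostic statistics for groups I and II can be reproduced from the counts in the abstract (specificity comes out at 69% versus the rounded 70% reported):

```python
def diagnostic_performance(tp, fn, fp, tn):
    # Sensitivity, specificity and accuracy from a 2x2 diagnostic table.
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fn + fp + tn)
    return sens, spec, acc

# Groups I and II: LP present in 21 of 27 patients with SVT (true
# positives) and in 22 of 72 patients without SVT (false positives).
sens, spec, acc = diagnostic_performance(tp=21, fn=6, fp=22, tn=50)
print(round(sens * 100), round(spec * 100), round(acc * 100))  # → 78 69 72
```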

  14. Average and local structure of selected metal deuterides

    Energy Technology Data Exchange (ETDEWEB)

    Soerby, Magnus H.

    2005-07-01

    deuterides at 1 bar D2 and elevated temperatures (373-573 K) is presented in Paper 1. Deuterium atoms chiefly occupy three types of tetrahedral interstitial sites: two coordinated by 4 Zr atoms and one coordinated by 3 Zr and 1 Ni atoms. The site preference is predominantly ruled by sample composition and less by temperature. On the other hand, the spatial deuterium distribution among the preferred sites is strongly temperature dependent, as the long-range correlations break down on heating. The sample is fully decomposed into tetragonal ZrD2 and Zr7Ni10 at 873 K. Th2AlD4 was the only metal deuteride with a reported D-D separation substantially below 2 Å (1.79 Å) prior to the discovery of RENiInD1.33. However, as the first ternary deuteride ever studied by PND, the original structure solution was based on very low-resolution data. The present reinvestigation (Paper 2) shows that the site preference was correctly determined, but the deuterium atoms are slightly shifted compared to the earlier report, now yielding acceptable interatomic separations. Solely Th4 tetrahedra are occupied in the various Th2Al deuterides. Th8Al4D11 (Th2AlD2.75) takes a superstructure with a tripled c-axis due to deuterium ordering. Th2AlD2.3 is disordered, and the average distance between partly occupied sites appears as just 1.55 Å in Rietveld refinements. However, short-range order is expected to prevent D-D distances under 2 Å. Paper 3 presents the first Reverse Monte Carlo (RMC) study of a metal deuteride. RMC is used in combination with total neutron scattering to model short-range deuterium correlations in disordered c-VD0.77. A practically complete blocking of interstitial sites closer than 2 Å to any occupied deuterium site is observed. The short-range correlations resemble those of the fully ordered low-temperature phase c-VD0.75 at length scales up to about 3 Å, i.e. for the first two coordination spheres.
Paper 4 concerns RMC modelling of short-range deuterium correlations in ZrCr2D4

  15. Average and local structure of selected metal deuterides

    International Nuclear Information System (INIS)

    Soerby, Magnus H.

    2004-01-01

    elevated temperatures (373-573 K) is presented in Paper 1. Deuterium atoms chiefly occupy three types of tetrahedral interstitial sites: two coordinated by 4 Zr atoms and one coordinated by 3 Zr and 1 Ni atoms. The site preference is predominantly ruled by sample composition and less by temperature. On the other hand, the spatial deuterium distribution among the preferred sites is strongly temperature dependent, as the long-range correlations break down on heating. The sample is fully decomposed into tetragonal ZrD2 and Zr7Ni10 at 873 K. Th2AlD4 was the only metal deuteride with a reported D-D separation substantially below 2 Å (1.79 Å) prior to the discovery of RENiInD1.33. However, as the first ternary deuteride ever studied by PND, the original structure solution was based on very low-resolution data. The present reinvestigation (Paper 2) shows that the site preference was correctly determined, but the deuterium atoms are slightly shifted compared to the earlier report, now yielding acceptable interatomic separations. Solely Th4 tetrahedra are occupied in the various Th2Al deuterides. Th8Al4D11 (Th2AlD2.75) takes a superstructure with a tripled c-axis due to deuterium ordering. Th2AlD2.3 is disordered, and the average distance between partly occupied sites appears as just 1.55 Å in Rietveld refinements. However, short-range order is expected to prevent D-D distances under 2 Å. Paper 3 presents the first Reverse Monte Carlo (RMC) study of a metal deuteride. RMC is used in combination with total neutron scattering to model short-range deuterium correlations in disordered c-VD0.77. A practically complete blocking of interstitial sites closer than 2 Å to any occupied deuterium site is observed. The short-range correlations resemble those of the fully ordered low-temperature phase c-VD0.75 at length scales up to about 3 Å, i.e. for the first two coordination spheres.
Paper 4 concerns RMC modelling of short-range deuterium correlations in ZrCr2D4 at ambient and low

  16. PRICE VS QUALITY COMPETITION AND THE SPATIAL PATTERN OF AVERAGE PRICES IN INTERNATIONAL TRADE

    Directory of Open Access Journals (Sweden)

    Mattoscio Nicola

    2012-07-01

    Full Text Available This work investigates the relationship between average export prices and the distance between the origin and the destination market in international trade. Distance between trading partners stands at the core of the international trade literature and is strictly related to the issue of how countries and firms compete on export markets when transport costs become increasingly steep. Heterogeneous-Firm Trade (HFT) models predict that only the most competitive firms are able to export to distant markets, where it is more difficult to recover freight costs. However, this simple concept does not lead to unambiguous predictions about the spatial pattern of average export f.o.b. prices.

  17. The Impact of Reviews and Average Rating on Hotel-Booking-Intention

    DEFF Research Database (Denmark)

    Buus, Line Thomassen; Jensen, Charlotte Thodberg; Jessen, Anne Mette Karnøe

    2016-01-01

    User-generated information types (ratings and reviews) are highly used when booking hotel rooms on Online Travel Agency (OTA) websites. The impact of user-generated information on decision-making is often investigated through quantitative research, thereby not examining in depth how and why travelers use this information. This paper therefore presents a qualitative study conducted to achieve a deeper understanding. We investigated the use of reviews and average rating in a hotel-booking context through a laboratory experiment, which involved a task of examining a hotel on a pre-designed OTA website followed by an interview. We processed the data from the interview, and the analysis resulted in a model generalizing the use of reviews and average rating in the deliberation phase of a hotel-booking. The findings are overall consistent with related research. Yet, beyond this, the qualitative...

  18. Renormalization, averaging, conservation laws and AdS (in)stability

    International Nuclear Information System (INIS)

    Craps, Ben; Evnin, Oleg; Vanhoof, Joris

    2015-01-01

    We continue our analytic investigations of non-linear spherically symmetric perturbations around the anti-de Sitter background in gravity-scalar field systems, and focus on conservation laws restricting the (perturbatively) slow drift of energy between the different normal modes due to non-linearities. We discover two conservation laws in addition to the energy conservation previously discussed in relation to AdS instability. A similar set of three conservation laws was previously noted for a self-interacting scalar field in a non-dynamical AdS background, and we highlight the similarities of this system to the fully dynamical case of gravitational instability. The nature of these conservation laws is best understood through an appeal to averaging methods which allow one to derive an effective Lagrangian or Hamiltonian description of the slow energy transfer between the normal modes. The conservation laws in question then follow from explicit symmetries of this averaged effective theory.

  19. Passive magnetic bearing systems stabilizer/bearing utilizing time-averaging of a periodic magnetic field

    Science.gov (United States)

    Post, Richard F.

    2017-10-03

    A high-stiffness stabilizer/bearing for passive magnetic bearing systems is provided. The key to its operation resides in the fact that when the frequency of variation of the repelling forces of the periodic magnet array is large compared to the reciprocal of the growth time of the unstable motion, the rotating system will feel only the time-averaged value of the force. When the time-averaged value of the force is radially repelling, by the choice of the geometry of the periodic magnet array, the Earnshaw-related unstable motion that would occur at zero rotational speed is suppressed when the system is rotating at operating speeds.

  20. Interpreting Dynamically-Averaged Scalar Couplings in Proteins

    DEFF Research Database (Denmark)

    Lindorff-Larsen, Kresten; Best, Robert B.; Vendruscolo, Michele

    2005-01-01

    The experimental determination of scalar three-bond coupling constants represents a powerful method to probe both the structure and dynamics of proteins. The detailed structural interpretation of such coupling constants is usually based on Karplus relationships, which allow the measured couplings to be related to the torsion angles of the molecules. As the measured couplings are sensitive to thermal fluctuations, the parameters in the Karplus relationships are better derived from ensembles representing the distributions of dihedral angles present in solution, rather than from single conformations. We...
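The Karplus relationship and the effect of ensemble averaging can be illustrated directly. In this sketch the functional form is the standard 3J(phi) = A cos^2(phi) + B cos(phi) + C; the coefficient values and the Gaussian torsion ensemble are assumptions for illustration (offset conventions of specific parameterisations are ignored):

```python
import numpy as np

def karplus_j(phi_deg, A=6.51, B=-1.76, C=1.60):
    # Karplus relationship 3J(phi) = A*cos^2(phi) + B*cos(phi) + C.
    # The coefficients here are example values, assumed for illustration.
    phi = np.radians(phi_deg)
    return A * np.cos(phi) ** 2 + B * np.cos(phi) + C

def ensemble_averaged_j(phi_samples_deg):
    # Dynamically-averaged coupling: average J over the dihedral-angle
    # ensemble instead of evaluating J at a single conformation.
    return float(np.mean(karplus_j(np.asarray(phi_samples_deg))))

# Because the Karplus curve is non-linear, <J(phi)> over a fluctuating
# angle differs from J evaluated at the mean angle, which is why
# ensemble-derived parameters are preferred over single-conformation fits.
rng = np.random.default_rng(3)
phis = rng.normal(loc=-120.0, scale=20.0, size=10000)
print(ensemble_averaged_j(phis), float(karplus_j(-120.0)))
```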

  1. Econometric modelling of Serbian current account determinants: Jackknife Model Averaging approach

    Directory of Open Access Journals (Sweden)

    Petrović Predrag

    2014-01-01

This research aims to model Serbian current account determinants for the period Q1 2002 - Q4 2012. Taking into account the majority of relevant determinants and using the Jackknife Model Averaging approach, 48 different models have been estimated, where 1254 equations needed to be estimated and averaged for each of the models. The results of selected representative models indicate moderate persistence of the CA and positive influence of: fiscal balance, oil trade balance, terms of trade, relative income and real effective exchange rates. We should emphasise: (i) a rather strong influence of relative income, (ii) the fact that the worsening of the oil trade balance results in worsening of other components (probably the non-oil trade balance) of the CA, and (iii) that the positive influence of terms of trade reveals the functionality of the Harberger-Laursen-Metzler effect in Serbia. On the other hand, negative influence is evident in the case of: relative economic growth, gross fixed capital formation, net foreign assets and trade openness. What particularly stands out is the strong effect of relative economic growth, which most likely reveals citizens' high expectations of future income growth, a factor with negative impact on the CA.

  2. Applied Hierarchical Cluster Analysis with Average Linkage Algorithm

    Directory of Open Access Journals (Sweden)

    Cindy Cahyaning Astuti

    2017-11-01

This research was conducted in Sidoarjo District, using secondary data contained in the book "Kabupaten Sidoarjo Dalam Angka 2016". The authors chose 12 variables that can represent sub-district characteristics in Sidoarjo, drawn from four sectors: geography, education, agriculture and industry. To assess how evenly geographical conditions, education, agriculture and industry are distributed across sub-districts, an analysis is required to classify the sub-districts based on these characteristics. Hierarchical cluster analysis is the analytical technique used to classify the objects (cases) into relatively homogeneous groups, each expressed as a cluster. The results are expected to provide information about dominant and non-dominant sub-district characteristics in the four sectors based on the clusters formed.
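The average-linkage algorithm named in the title can be sketched as follows. The district names and the 12 real indicator variables from the book are not reproduced here; the feature matrix, its dimensions, and the choice of three clusters are purely hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical feature matrix: 6 "sub-districts" x 3 standardized indicators
# (stand-ins for the study's 12 geography/education/agriculture/industry variables).
X = np.array([
    [0.1, 0.2, 0.1],
    [0.2, 0.1, 0.0],
    [2.1, 2.0, 1.9],
    [2.0, 2.2, 2.1],
    [5.0, 4.9, 5.1],
    [5.1, 5.0, 4.8],
])

# Average linkage (UPGMA): the distance between two clusters is the mean
# pairwise distance between their members.
Z = linkage(X, method="average", metric="euclidean")

# Cut the dendrogram into 3 flat clusters.
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)
```

Observations that are close in feature space end up in the same flat cluster, which is the grouping of "relatively homogeneous" sub-districts the abstract describes.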

  3. Grain centre mapping - 3DXRD measurements of average grain characteristics

    DEFF Research Database (Denmark)

    Oddershede, Jette; Schmidt, Søren; Lyckegaard, Allan

    2014-01-01

Three-Dimensional X-ray Diffraction (3DXRD) Microscopy is a generic term covering a variety of different techniques for characterising the microstructure within the bulk of polycrystalline materials. One strategy, namely grain centre mapping, enables fast measurements of the average characteristics … and the closely related boxscan method is given. Both validation experiments and applications for in situ studies of microstructural changes during plastic deformation and crack growth are given. Finally an outlook with special emphasis on coupling the measured results with modelling is given.

  4. Teachers’ socio-emotional perceptions on gifted and average adolescents

    Directory of Open Access Journals (Sweden)

    María Carmen Fernández

    2011-10-01

The aim of this paper is to study teachers' perception of the socioemotional competence of their adolescent students, according to exceptionality (high abilities versus non-high abilities) and gender. The sample was composed of 443 teachers from 55 Secondary Schools in Murcia (Spain). The instrument used was the EQ-i: YV-O for teachers (BAR-ON & PARKER, in press). According to the exceptionality of the students (high abilities versus non-high abilities), the results showed that teachers perceived students with high abilities as being more adaptable, having greater general mood and having greater interpersonal abilities. Moreover, in relation to gender, teachers scored boys as having better stress management. In addition, with regard to exceptionality (high abilities versus non-high abilities) and gender, the results showed statistically significant differences in the adaptability, general mood, and intrapersonal dimensions.

  5. Contextual convolutional neural networks for lung nodule classification using Gaussian-weighted average image patches

    Science.gov (United States)

    Lee, Haeil; Lee, Hansang; Park, Minseok; Kim, Junmo

    2017-03-01

Lung cancer is the most common cause of cancer-related death. To diagnose lung cancers in early stages, numerous studies and approaches have been developed for cancer screening with computed tomography (CT) imaging. In recent years, convolutional neural networks (CNNs) have become one of the most common and reliable techniques in computer aided detection (CADe) and diagnosis (CADx), achieving state-of-the-art performance for various tasks. In this study, we propose a CNN classification system for false positive reduction of initially detected lung nodule candidates. First, image patches of lung nodule candidates are extracted from CT scans to train a CNN classifier. To reflect the volumetric contextual information of lung nodules in a 2D image patch, we propose weighted average image patch (WAIP) generation, which averages multiple slice images of lung nodule candidates. Moreover, to emphasize the central slices of lung nodules, slice images are locally weighted according to a Gaussian distribution and averaged to generate the 2D WAIP. With these extracted patches, a 2D CNN is trained to classify WAIPs of lung nodule candidates into positive and negative labels. We used the LUNA 2016 public challenge database to validate the performance of our approach for false positive reduction in lung CT nodule classification. Experiments show our approach improves the classification accuracy of lung nodules compared to a baseline 2D CNN using patches from a single slice image.
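A minimal sketch of the Gaussian-weighted averaging step described above: slices are weighted by a Gaussian over slice index, so central slices dominate the resulting 2D patch. The kernel width, array shapes, and function name are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def gaussian_weighted_average_patch(volume, sigma=1.0):
    """Collapse a stack of 2D slices (axis 0) into one 2D patch.

    Slices near the central slice receive the largest weight, following
    a Gaussian over slice index (a sketch of the WAIP idea; the paper's
    exact kernel width is an assumption here).
    """
    n = volume.shape[0]
    center = (n - 1) / 2.0
    z = np.arange(n)
    w = np.exp(-0.5 * ((z - center) / sigma) ** 2)
    w /= w.sum()                       # normalize weights to sum to 1
    return np.tensordot(w, volume, axes=(0, 0))

# Toy "nodule candidate": 5 slices of 4x4 voxels
vol = np.random.default_rng(0).random((5, 4, 4))
patch = gaussian_weighted_average_patch(vol, sigma=1.0)
print(patch.shape)
```

Because the weights are normalized, the patch preserves the intensity scale of the input slices while emphasizing the central one.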

  6. The CAIRN method: automated, reproducible calculation of catchment-averaged denudation rates from cosmogenic nuclide concentrations

    Science.gov (United States)

    Marius Mudd, Simon; Harel, Marie-Alice; Hurst, Martin D.; Grieve, Stuart W. D.; Marrero, Shasta M.

    2016-08-01

    We report a new program for calculating catchment-averaged denudation rates from cosmogenic nuclide concentrations. The method (Catchment-Averaged denudatIon Rates from cosmogenic Nuclides: CAIRN) bundles previously reported production scaling and topographic shielding algorithms. In addition, it calculates production and shielding on a pixel-by-pixel basis. We explore the effect of sampling frequency across both azimuth (Δθ) and altitude (Δϕ) angles for topographic shielding and show that in high relief terrain a relatively high sampling frequency is required, with a good balance achieved between accuracy and computational expense at Δθ = 8° and Δϕ = 5°. CAIRN includes both internal and external uncertainty analysis, and is packaged in freely available software in order to facilitate easily reproducible denudation rate estimates. CAIRN calculates denudation rates but also automates catchment averaging of shielding and production, and thus can be used to provide reproducible input parameters for the CRONUS family of online calculators.
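The topographic shielding computation can be pictured as a numerical integral over the sky at the sampling frequencies discussed above (Δθ = 8° in azimuth, Δϕ = 5° in elevation). The sin^m(elevation) flux law with m ≈ 2.3 is a standard assumption in the cosmogenic-nuclide literature; this sketch is not CAIRN's actual implementation, and all names are illustrative.

```python
import numpy as np

def shielding_factor(horizon, m=2.3, d_az=8.0, d_elev=5.0):
    """Fraction of incoming cosmic-ray flux NOT blocked by topography.

    `horizon(az)` gives the horizon elevation angle (degrees) toward
    azimuth `az`. Flux per unit solid angle is assumed to vary as
    sin(elev)**m, and the solid-angle element contributes cos(elev).
    """
    blocked = total = 0.0
    for az in np.arange(0.0, 360.0, d_az):
        h = horizon(az)
        for elev in np.arange(d_elev / 2.0, 90.0, d_elev):
            e = np.radians(elev)
            w = np.sin(e) ** m * np.cos(e)   # flux law x solid angle
            total += w
            if elev < h:                      # ray hits topography
                blocked += w
    return 1.0 - blocked / total

flat = shielding_factor(lambda az: 0.0)                       # open horizon
ridge = shielding_factor(lambda az: 30.0 if az < 180.0 else 0.0)
print(flat, ridge)
```

Refining d_az and d_elev trades accuracy for cost, which is the balance the abstract quantifies for high-relief terrain.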

  7. Solvent effects and dynamic averaging of 195Pt NMR shielding in cisplatin derivatives.

    Science.gov (United States)

    Truflandier, Lionel A; Sutter, Kiplangat; Autschbach, Jochen

    2011-03-07

    The influences of solvent effects and dynamic averaging on the (195)Pt NMR shielding and chemical shifts of cisplatin and three cisplatin derivatives in aqueous solution were computed using explicit and implicit solvation models. Within the density functional theory framework, these simulations were carried out by combining ab initio molecular dynamics (aiMD) simulations for the phase space sampling with all-electron relativistic NMR shielding tensor calculations using the zeroth-order regular approximation. Structural analyses support the presence of a solvent-assisted "inverse" or "anionic" hydration previously observed in similar square-planar transition-metal complexes. Comparisons with computationally less demanding implicit solvent models show that error cancellation is ubiquitous when dealing with liquid-state NMR simulations. After aiMD averaging, the calculated chemical shifts for the four complexes are in good agreement with experiment, with relative deviations between theory and experiment of about 5% on average (1% of the Pt(II) chemical shift range). © 2011 American Chemical Society

  8. Integrating angle-frequency domain synchronous averaging technique with feature extraction for gear fault diagnosis

    Science.gov (United States)

    Zhang, Shengli; Tang, J.

    2018-01-01

    Gear fault diagnosis relies heavily on the scrutiny of vibration responses measured. In reality, gear vibration signals are noisy and dominated by meshing frequencies as well as their harmonics, which oftentimes overlay the fault related components. Moreover, many gear transmission systems, e.g., those in wind turbines, constantly operate under non-stationary conditions. To reduce the influences of non-synchronous components and noise, a fault signature enhancement method that is built upon angle-frequency domain synchronous averaging is developed in this paper. Instead of being averaged in the time domain, the signals are processed in the angle-frequency domain to solve the issue of phase shifts between signal segments due to uncertainties caused by clearances, input disturbances, and sampling errors, etc. The enhanced results are then analyzed through feature extraction algorithms to identify the most distinct features for fault classification and identification. Specifically, Kernel Principal Component Analysis (KPCA) targeting at nonlinearity, Multilinear Principal Component Analysis (MPCA) targeting at high dimensionality, and Locally Linear Embedding (LLE) targeting at local similarity among the enhanced data are employed and compared to yield insights. Numerical and experimental investigations are performed, and the results reveal the effectiveness of angle-frequency domain synchronous averaging in enabling feature extraction and classification.
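A toy sketch of the angle-domain part of the approach: the vibration signal is resampled onto a uniform shaft-angle grid (so speed fluctuations do not smear the meshing orders), split into one-revolution segments, and the segment spectra are averaged by magnitude so that phase shifts between segments cannot cancel the fault-related content. The magnitude averaging and all parameter names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def angle_frequency_average(signal, angle, revs, samples_per_rev=256):
    """Average one-revolution spectra of a signal resampled to the angle domain."""
    # Uniform shaft-angle grid covering `revs` full revolutions
    grid = np.linspace(0.0, 2 * np.pi * revs, samples_per_rev * revs, endpoint=False)
    resampled = np.interp(grid, angle, signal)          # time -> angle domain
    segments = resampled.reshape(revs, samples_per_rev)  # one row per revolution
    spectra = np.abs(np.fft.rfft(segments, axis=1))      # magnitude: phase-insensitive
    return spectra.mean(axis=0)                          # bin k = k cycles/rev

# Toy example: a 5th-order component (5 cycles/rev) under a ramping shaft speed
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 8000)
angle = 2 * np.pi * (10 * t + 2 * t ** 2)   # monotonically increasing shaft angle
sig = np.sin(5 * angle) + 0.3 * rng.standard_normal(t.size)
avg_spec = angle_frequency_average(sig, angle, revs=10)
print(avg_spec.argmax())   # dominant shaft order
```

In the angle domain the 5th-order component stays in one spectral bin despite the speed ramp, so averaging reinforces it while the noise averages down.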

  9. Efficient determination of average valence of manganese in manganese oxides by reaction headspace gas chromatography.

    Science.gov (United States)

    Xie, Wei-Qi; Gong, Yi-Xian; Yu, Kong-Xian

    2017-08-18

This work investigates a new reaction headspace gas chromatographic (HS-GC) technique for efficiently quantifying the average valence of manganese (Mn) in manganese oxides. The method is based on the oxidation reaction between manganese oxides and sodium oxalate under acidic conditions. The carbon dioxide (CO2) formed from the oxidation reaction can be quantitatively analyzed by headspace gas chromatography. The data showed that the reaction in the closed headspace vial can be completed in 20 min at 80 °C. The relative standard deviation of this reaction HS-GC method in the precision testing was within 1.08%, and the relative differences between the new method and the reference method (titration) were no more than 5.71%. The new HS-GC method is automated, efficient, and can be a reliable tool for the quantitative analysis of the average valence of manganese in manganese-oxide-related research and applications. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Averaging Tesseral Effects: Closed Form Relegation versus Expansions of Elliptic Motion

    Directory of Open Access Journals (Sweden)

    Martin Lara

    2013-01-01

Longitude-dependent terms of the geopotential cause nonnegligible short-period effects in orbit propagation of artificial satellites. Hence, accurate analytical and semianalytical theories must cope with tesseral harmonics. Modern algorithms for dealing analytically with them allow for closed form relegation. Nevertheless, current procedures for the relegation of tesseral effects from subsynchronous orbits are unavoidably related to orbit eccentricity, a key fact that is not enough emphasized and constrains application of this technique to small and moderate eccentricities. Comparisons with averaging procedures based on classical expansions of elliptic motion are carried out, and the pros and cons of each approach are discussed.

  11. Average Behavior of Battery - Electric Vehicles for Distributed Energy System Studies

    DEFF Research Database (Denmark)

    Marra, Francesco; Træholt, Chresten; Larsen, Esben

    2010-01-01

The increase of focus on electric vehicles (EVs) as distributed energy resources calls for new concepts of aggregated models of batteries. Despite the developed battery models for EV applications, when looking at energy storage scenarios using EVs, both geographical-temporal aspects and battery use conditions cannot be neglected for a proper estimation of available fleet energy. In this paper we describe an average behavior of battery-EVs. Main points of this concept include the definition of the energy window and lifetime of the batteries, in relation to existing models and battery use…

  12. Free-free opacity in dense plasmas with an average atom model

    Science.gov (United States)

    Shaffer, N. R.; Ferris, N. G.; Colgan, J.; Kilcrease, D. P.; Starrett, C. E.

    2017-06-01

A model for the free-free opacity of dense plasmas is presented. The model uses a previously developed average atom model together with the Kubo-Greenwood model for optical conductivity, which in turn is used to calculate the opacity with the Kramers-Kronig dispersion relations. Comparison with other methods for dense deuterium shows excellent agreement with DFT-MD simulations and reasonable agreement with a simple Yukawa screening model corrected to satisfy the conductivity sum rule. Comparisons against the very recent experiments of Kettle et al. on dense aluminum also reveal very good agreement, in contrast to existing models. Weaknesses of the model are also highlighted.

  13. On the laws of thermodynamics from the escort average and on the uniqueness of statistical factors

    International Nuclear Information System (INIS)

    Yamano, Takuya

    2003-01-01

We consider the relation between the statistical weight and the laws of thermodynamics. Our approach is based on an infinitesimal perturbation applied to the thermodynamical system from outside. The form of the first law of thermodynamics and Clausius' definition of thermodynamic entropy are commensurately altered once we employ the escort average of the statistical weight, but they recover their usual forms in the limit of the ordinary weight. We also present an example of the unique determination of the statistical factor (the so-called Gibbs theorem), in addition to the Boltzmann and Tsallis ones.

  14. Attenuation correction of emission PET images with average CT: Interpolation from breath-hold CT

    Science.gov (United States)

    Huang, Tzung-Chi; Zhang, Geoffrey; Chen, Chih-Hao; Yang, Bang-Hung; Wu, Nien-Yun; Wang, Shyh-Jen; Wu, Tung-Hsin

    2011-05-01

Misregistration resulting from the difference in temporal resolution between PET and CT scans occurs frequently in PET/CT imaging and causes distortion in tumor quantification in PET. Respiration cine average CT (CACT) for PET attenuation correction has been reported by several papers to improve the misalignment effectively. However, the radiation dose to the patient from a four-dimensional CT scan is relatively high. In this study, we propose a method to interpolate respiratory CT images over a respiratory cycle from inhalation and exhalation breath-hold CT images, and to use the average CT from the generated CT set for PET attenuation correction. The radiation dose to the patient is reduced using this method. Six cancer patients with various lesion sites underwent routine free-breath helical CT (HCT), respiration CACT, interpolated average CT (IACT), and 18F-FDG PET. Deformable image registration was used to interpolate the middle phases of a respiratory cycle based on the end-inspiration and end-expiration breath-hold CT scans. The average CT image was calculated from the eight interpolated CT image sets of middle respiratory phases and the two original inspiration and expiration CT images. The PET images were then reconstructed with attenuation correction by each of the three methods: HCT, CACT, and IACT. Misalignment in PET images using either CACT or IACT for attenuation correction in PET/CT was improved. The difference in standardized uptake value (SUV) of tumors in PET images was most significant between the use of HCT and CACT, and least significant between the use of CACT and IACT. Besides a similar improvement in tumor quantification compared to the use of CACT, using IACT for PET attenuation correction reduces the radiation dose to the patient.
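The averaging step of the IACT method can be sketched as follows. The paper generates the eight intermediate phases with deformable image registration; here a plain linear blend between the two breath-hold volumes stands in for that step, purely to show how the ten volumes (two originals plus eight interpolated phases) are averaged. Shapes and values are toy assumptions.

```python
import numpy as np

def interpolated_average_ct(inhale, exhale, n_phases=8):
    """Average the two breath-hold volumes plus n_phases interpolated
    intermediate phases (linear blending as a stand-in for deformable
    registration)."""
    phases = [inhale, exhale]
    for k in range(1, n_phases + 1):
        a = k / (n_phases + 1)                  # phase position in the cycle
        phases.append((1 - a) * inhale + a * exhale)
    return np.mean(phases, axis=0)

inhale = np.full((4, 4), 100.0)   # toy attenuation values
exhale = np.full((4, 4), 60.0)
iact = interpolated_average_ct(inhale, exhale)
print(iact[0, 0])
```

With a symmetric set of phases the IACT falls midway between the two breath-hold volumes, mimicking the time-averaged anatomy seen by the PET acquisition.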

  15. Microbial Carbon Substrate Utilization Differences among High- and Average-Yield Soybean Areas

    Directory of Open Access Journals (Sweden)

    Taylor C. Adams

    2017-05-01

Since soybean (Glycine max L. Merr.) yields greater than 6719 kg ha−1 have only recently and infrequently been achieved, little is known about the soil microbiological environment related to high-yield soybean production. Soil microbiological properties are often overlooked when assessing agronomic practices for optimal production. Therefore, a greater understanding is needed regarding how soil biological properties may differ between high- and average-yielding areas within fields. The objectives of this study were to (i) evaluate the effects of region on soil microbial carbon substrate utilization differences between high- (HY) and average-yield (AY) areas and (ii) assess the effect of yield area on selected microbiological property differences. Replicate soil samples were collected from the 0-10 cm depth from yield-contest-entered fields in close proximity that had both a HY and an AY area. Samples were collected immediately prior to or just after soybean harvest in 2014 and 2015 from each of seven geographic regions within Arkansas. Averaged across yield area, community-level carbon substrate utilization and Shannon's and Simpson's functional diversity and evenness were greater (p < 0.05) in Region 7 than in all other regions. Averaged across regions, Shannon's functional diversity and evenness were greater (p < 0.05) in HY than in AY areas. Principal component analysis demonstrated that a greater variety of carbon substrates were used in HY than in AY areas. These results may help producers understand the soil microbiological environment in their own fields that contributes to or hinders achieving high-yielding soybeans; however, additional parameters may need to be assessed for a more comprehensive understanding of the soil environment associated with high-yielding soybean.

  16. Bounding Averages Rigorously Using Semidefinite Programming: Mean Moments of the Lorenz System

    Science.gov (United States)

    Goluskin, David

    2018-04-01

We describe methods for proving bounds on infinite-time averages in differential dynamical systems. The methods rely on the construction of nonnegative polynomials with certain properties, similarly to the way nonlinear stability can be proved using Lyapunov functions. Nonnegativity is enforced by requiring the polynomials to be sums of squares, a condition which is then formulated as a semidefinite program (SDP) that can be solved computationally. Although such computations are subject to numerical error, we demonstrate two ways to obtain rigorous results: using interval arithmetic to control the error of an approximate SDP solution, and finding exact analytical solutions to relatively small SDPs. Previous formulations are extended to allow for bounds depending analytically on parametric variables. These methods are illustrated using the Lorenz equations, a system with three state variables (x, y, z) and three parameters (β, σ, r). Bounds are reported for infinite-time averages of all eighteen moments x^l y^m z^n up to quartic degree that are symmetric under (x, y) → (-x, -y). These bounds apply to all solutions regardless of stability, including chaotic trajectories, periodic orbits, and equilibrium points. The analytical approach yields two novel bounds that are sharp: the mean of z^3 can be no larger than its value of (r-1)^3 at the nonzero equilibria, and the mean of xy^3 must be nonnegative. The interval arithmetic approach is applied at the standard chaotic parameters to bound eleven average moments that all appear to be maximized on the shortest periodic orbit. Our best upper bound on each such average exceeds its value on the maximizing orbit by less than 1%. Many bounds reported here are much tighter than would be possible without computer assistance.
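The kind of infinite-time average being bounded can be illustrated numerically: integrate the Lorenz equations at the standard chaotic parameters and average a moment along the trajectory. A long finite-time average over one trajectory only approximates the infinite-time average; this is not the rigorous sum-of-squares SDP machinery, and the integration settings are ad hoc.

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, r=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (r - z) - y, x * y - beta * z])

def time_average(moment, T=60.0, dt=0.005, transient=5.0):
    """Finite-time average of moment(x, y, z) along one RK4 trajectory,
    as a numerical stand-in for an infinite-time average."""
    s = np.array([1.0, 1.0, 1.0])
    n_trans, n = int(transient / dt), int(T / dt)
    total = 0.0
    for i in range(n_trans + n):
        # Classical fourth-order Runge-Kutta step
        k1 = lorenz_rhs(s)
        k2 = lorenz_rhs(s + 0.5 * dt * k1)
        k3 = lorenz_rhs(s + 0.5 * dt * k2)
        k4 = lorenz_rhs(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        if i >= n_trans:          # discard the transient before averaging
            total += moment(*s)
    return total / n

# The sharp analytical bound quoted above: the mean of z**3 can be no
# larger than (r - 1)**3 = 27**3 at the standard parameters.
mean_z3 = time_average(lambda x, y, z: z ** 3)
print(mean_z3)
```

The numerical average along the chaotic trajectory should sit below the analytical bound, consistent with that bound being attained only at the nonzero equilibria.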

  17. PSA testing for men at average risk of prostate cancer

    Directory of Open Access Journals (Sweden)

    Bruce K Armstrong

    2017-07-01

Prostate-specific antigen (PSA) testing of men at normal risk of prostate cancer is one of the most contested issues in cancer screening. There is no formal screening program, but testing is common – arguably a practice that ran ahead of the evidence. Public and professional communication about PSA screening has been highly varied and potentially confusing for practitioners and patients alike. There has been much research and policy activity relating to PSA testing in recent years. Landmark randomised controlled trials have been reported; authorities – including the 2013 Prostate Cancer World Congress, the Prostate Cancer Foundation of Australia, Cancer Council Australia, and the National Health and Medical Research Council – have made or endorsed public statements and/or issued clinical practice guidelines; and the US Preventive Services Task Force is revising its recommendations. But disagreement continues. The contention is partly over what the new evidence means. It is also a result of different valuing and prioritisation of outcomes that are hard to compare: prostate cancer deaths prevented (a small and disputed number); prevention of metastatic disease (somewhat more common); and side-effects of treatment such as incontinence, impotence and bowel trouble (more common again). A sizeable proportion of men diagnosed through PSA testing (somewhere between 20% and 50%) would never have had prostate cancer symptoms sufficient to prompt investigation; many of these men are older, with competing comorbidities. It is a complex picture. Below are four viewpoints from expert participants in the evolving debate, commissioned for this cancer screening themed issue of Public Health Research & Practice. We asked the authors to respond to the challenge of PSA testing of asymptomatic, normal-risk men. They raise important considerations: uncertainty, harms, the trustworthiness and interpretation of the evidence, cost (e.g. of using multiparametric

  18. On monogamy of non-locality and macroscopic averages: examples and preliminary results

    Directory of Open Access Journals (Sweden)

    Rui Soares Barbosa

    2014-12-01

We explore a connection between monogamy of non-locality and a weak macroscopic locality condition: the locality of the average behaviour. These are revealed by our analysis as being two sides of the same coin. Moreover, we exhibit a structural reason for both in the case of Bell-type multipartite scenarios, shedding light on but also generalising the results in the literature [Ramanathan et al., Phys. Rev. Lett. 107, 060405 (2011); Pawlowski & Brukner, Phys. Rev. Lett. 102, 030403 (2009)]. More specifically, we show that, provided the number of particles in each site is large enough compared to the number of allowed measurement settings, and whatever the microscopic state of the system, the macroscopic average behaviour is local realistic, or equivalently, general multipartite monogamy relations hold. This result relies on a classical mathematical theorem by Vorob'ev [Theory Probab. Appl. 7(2), 147-163 (1962)] about extending compatible families of probability distributions defined on the faces of a simplicial complex – in the language of the sheaf-theoretic framework of Abramsky & Brandenburger [New J. Phys. 13, 113036 (2011)], such families correspond to no-signalling empirical models, and the existence of an extension corresponds to locality or non-contextuality. Since Vorob'ev's theorem depends solely on the structure of the simplicial complex, which encodes the compatibility of the measurements, and not on the specific probability distributions (i.e. the empirical models), our result about monogamy relations and locality of macroscopic averages holds not just for quantum theory, but for any empirical model satisfying the no-signalling condition. In this extended abstract, we illustrate our approach by working out a couple of examples, which convey the intuition behind our analysis while keeping the discussion at an elementary level.

  19. New Validated Signal-averaging-based Electrocardiography Method to Determine His-ventricle Interval.

    Science.gov (United States)

    Németh, Balázs; Kellényi, Lóránd; Péterfi, István; Simor, Tamás; Ruzsa, Diána; Lőrinc, Holczer; Kiss, István; Péter, Iván; Ajtay, Zénó

The signal-averaging (SA) technique is used to record high-resolution electrocardiograms (HRECGs) showing cardiac micropotentials. We aimed to develop a non-invasive, signal-averaging-based portable bedside device to determine the His-ventricle (HV) interval. After amplification of the HRECG recordings, signal duration and voltage can be measured to four-decimal precision. To validate our system, a comparison of the invasively and non-invasively determined HV intervals was performed in 20 patients. Our workgroup has developed a system capable of displaying and measuring cardiac micropotentials on storable ECGs. Neither a paired-sample t-test (p=0.263) nor Wilcoxon's non-parametric signed-rank test (p=0.245) showed significant deviations of the HV intervals. Furthermore, strong correlation (corr=0.910, p<0.001) was found between HV intervals determined by electrophysiology (EP) and by non-invasive measurement. Our research group managed to assemble and validate an easy-to-use device capable of determining HV intervals even under ambulatory conditions. Copyright © 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
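The core of the SA technique is coherent averaging: fixed windows around a repeating trigger (e.g. QRS onset) are averaged, so the repeatable waveform, including micropotentials such as His-bundle activity, survives while uncorrelated noise shrinks roughly as 1/sqrt(number of beats). The sketch below uses an entirely synthetic signal; amplitudes, window lengths, and names are illustrative assumptions.

```python
import numpy as np

def signal_average(ecg, triggers, half_window):
    """Average fixed-length windows of `ecg` centred on each trigger sample."""
    windows = [ecg[t - half_window:t + half_window]
               for t in triggers
               if t - half_window >= 0 and t + half_window <= ecg.size]
    return np.mean(windows, axis=0)

# Toy signal: a small repeating deflection buried in much larger noise
rng = np.random.default_rng(2)
beat = np.zeros(40)
beat[18:22] = 0.05                      # low-amplitude "micropotential"
ecg = 0.2 * rng.standard_normal(40 * 200)   # noise 4x the deflection
triggers = np.arange(20, 40 * 200, 40)      # 200 perfectly detected beats
for t in triggers:
    ecg[t - 20:t + 20] += beat
avg = signal_average(ecg, triggers, half_window=20)
```

After averaging 200 beats the deflection stands clear of the residual noise, which is what makes micropotential measurements like the HV interval feasible.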

  20. What Do s- and p-Wave Neutron Average Radiative Widths Reveal

    Energy Technology Data Exchange (ETDEWEB)

    Mughabghab, S.F.

    2010-04-30

A first observation of two resonance-like structures at mass numbers 92 and 112 in the average capture widths of the p-wave neutron resonances relative to the s-wave component is interpreted in terms of a spin-orbit splitting of the 3p single-particle state into p3/2 and p1/2 components at the neutron separation energy. A third structure at about A = 124, which is not correlated with the 3p-wave neutron strength function, is possibly due to the Pygmy Dipole Resonance. Five significant results emerge from this investigation: (i) the strength of the spin-orbit potential of the optical model is determined as 5.7 ± 0.5 MeV; (ii) non-statistical effects dominate p-wave neutron capture in the mass region A = 85-130; (iii) the background magnitude of the p-wave average capture width relative to that of the s-wave is determined as 0.50 ± 0.05, which is accounted for quantitatively in terms of the generalized Fermi liquid model of Mughabghab and Dunford; (iv) the p-wave resonances are partially decoupled from the giant dipole resonance (GDR); and (v) gamma-ray transitions, enhanced over the predictions of the GDR, are observed in the 90Zr-98Mo and Sn-Ba regions.

  1. Greenhouse Gas Emissions and the Australian Diet—Comparing Dietary Recommendations with Average Intakes

    Directory of Open Access Journals (Sweden)

    Gilly A. Hendrie

    2014-01-01

Nutrition guidelines now consider the environmental impact of food choices as well as maintaining health. In Australia there is insufficient data quantifying the environmental impact of diets, limiting our ability to make evidence-based recommendations. This paper used an environmentally extended input-output model of the economy to estimate greenhouse gas emissions (GHGe) for different food sectors. These data were augmented with food intake estimates from the 1995 Australian National Nutrition Survey. The GHGe of the average Australian diet was 14.5 kg carbon dioxide equivalents (CO2e) per person per day. The recommended dietary patterns in the Australian Dietary Guidelines are nutrient rich and have the lowest GHGe (~25% lower than the average diet). Food groups that made the greatest contribution to diet-related GHGe were red meat (8.0 kg CO2e per person per day) and energy-dense, nutrient-poor "non-core" foods (3.9 kg CO2e). Non-core foods accounted for 27% of the diet-related emissions. A reduction in non-core foods and consuming the recommended serves of core foods are strategies which may achieve benefits for population health and the environment. These data will enable comparisons between changes in dietary intake and GHGe over time, and provide a reference point for diets which meet population nutrient requirements and have the lowest GHGe.

  2. Bootstrap inference for pre-averaged realized volatility based on non-overlapping returns

    DEFF Research Database (Denmark)

    Gonçalves, Sílvia; Hounyo, Ulrich; Meddahi, Nour

… correction term) as the (scaled) sum of squared pre-averaged returns, where the pre-averaging is done over all possible non-overlapping blocks of consecutive observations. Pre-averaging reduces the influence of the noise and allows for realized volatility estimation on the pre-averaged returns. The non…

  3. Volume averaging: Local and nonlocal closures using a Green’s function approach

    Science.gov (United States)

    Wood, Brian D.; Valdés-Parada, Francisco J.

    2013-01-01

Modeling transport phenomena in discretely hierarchical systems can be carried out using any number of upscaling techniques. In this paper, we revisit the method of volume averaging as a technique to pass from a microscopic level of description to a macroscopic one. Our focus is primarily on developing a more consistent and rigorous foundation for the relation between the microscale and averaged levels of description. We have put a particular focus on (1) carefully establishing statistical representations of the length scales used in volume averaging, (2) developing a time-space nonlocal closure scheme with as few assumptions and constraints as are possible, and (3) carefully identifying a sequence of simplifications (in terms of scaling postulates) that explain the conditions for which various upscaled models are valid. Although the approach is general for linear differential equations, we upscale the problem of linear convective diffusion as an example to help keep the discussion from becoming overly abstract. In our efforts, we have also revisited the concept of a closure variable, and explain how closure variables can be based on an integral formulation in terms of Green’s functions. In such a framework, a closure variable then represents the integration (in time and space) of the associated Green’s functions that describe the influence of the average sources over the spatial deviations. The approach using Green’s functions has utility not only in formalizing the method of volume averaging, but by clearly identifying how the method can be extended to transient and time or space nonlocal formulations. In addition to formalizing the upscaling process using Green’s functions, we also discuss the upscaling process itself in some detail to help foster improved understanding of how the process works. Discussion about the role of scaling postulates in the upscaling process is provided, and posed, whenever possible, in terms of measurable properties of (1) the

  4. Discussion based on analysis of the suicide rate and the average disposable income per household in Japan.

    Science.gov (United States)

    Inoue, K; Nishimura, Y; Okazazi, Y; Fukunaga, T

    2014-08-01

    Suicide is one of the major social issues in Japan. According to a report of the National Police Agency, there were approximately 22 000 to 24 000 annual suicides between 1994 and 1997, and there have been over 30 000 annual suicides in Japan since 1998. For these reasons, we think it is important to discuss the economic factors related to suicide in recent years. In this study, we examined suicide rates and the average disposable income per household in Japan over the last 15 years (i.e. 1994-2008) and discuss the statistical analysis of the average disposable income per household and the associated suicide rates. During the research period, annual suicide rates per 100 000 population in Japan ranged from 16.9 to 25.5 among the total population, from 23.1 to 38.0 among men, and from 10.9 to 14.7 among women. The annual average disposable income per household (ten thousand yen) ranged from 424.0 to 549.9. The average disposable income per household was related to the suicide rate among the total population and among men. The average disposable income per household was not related to the suicide rate among women. We believe that this discussion will be useful in developing specific suicide preventive measures.

  5. Limitations of the spike-triggered averaging for estimating motor unit twitch force: a theoretical analysis.

    Directory of Open Access Journals (Sweden)

    Francesco Negro

    Full Text Available Contractile properties of human motor units provide information on the force capacity and fatigability of muscles. The spike-triggered averaging technique (STA) is a conventional method used to estimate the twitch waveform of single motor units in vivo by averaging the joint force signal. Several limitations of this technique have been previously discussed in an empirical way, using simulated and experimental data. In this study, we provide a theoretical analysis of this technique in the frequency domain and describe its intrinsic limitations. By analyzing the analytical expression of STA, first we show that a certain degree of correlation between the motor unit activities prevents an accurate estimation of the twitch force, even from relatively long recordings. Second, we show that the quality of the twitch estimates by STA is highly related to the relative variability of the inter-spike intervals of motor unit action potentials. Interestingly, if this variability is extremely high, correct estimates could be obtained even for high discharge rates. However, for physiological inter-spike interval variability and discharge rate, the technique performs with relatively low estimation accuracy and high estimation variance. Finally, we show that the selection of the triggers that are most distant from the previous and next, which is often suggested, is not an effective way for improving STA estimates and in some cases can even be detrimental. These results show the intrinsic limitations of the STA technique and provide a theoretical framework for the design of new methods for the measurement of motor unit force twitch.
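    As a minimal sketch of what STA computes (hypothetical implementation, not the authors' code): average fixed-length force segments triggered on each motor unit discharge. On synthetic data with long inter-spike intervals and uncorrelated background activity — the favourable regime identified by the analysis — the twitch is recovered well.

    ```python
    import numpy as np

    def spike_triggered_average(force, spike_samples, window_s, fs):
        """Average force segments of length window_s starting at each
        discharge (given as sample indices) -- a minimal STA sketch."""
        n = int(window_s * fs)
        segments = [force[t:t + n] for t in spike_samples if t + n <= len(force)]
        return np.mean(segments, axis=0)

    # Synthetic demo: a known toy twitch shape buried in background force noise.
    fs = 1000
    twitch = np.exp(-np.arange(200) / 40.0) * (np.arange(200) / 40.0)
    force = np.random.default_rng(0).normal(0.0, 0.3, 20000)
    spikes = np.arange(500, 19000, 700)   # long, regular inter-spike intervals
    for t in spikes:
        force[t:t + 200] += twitch        # each discharge adds one twitch
    est = spike_triggered_average(force, spikes, window_s=0.2, fs=fs)
    ```

    With correlated motor unit activity or short, regular intervals the segments overlap systematically, and this simple average degrades exactly as the paper's frequency-domain analysis predicts.
    
    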

  6. Limitations of the Spike-Triggered Averaging for Estimating Motor Unit Twitch Force: A Theoretical Analysis

    Science.gov (United States)

    Negro, Francesco; Yavuz, Utku Ş.; Farina, Dario

    2014-01-01

    Contractile properties of human motor units provide information on the force capacity and fatigability of muscles. The spike-triggered averaging technique (STA) is a conventional method used to estimate the twitch waveform of single motor units in vivo by averaging the joint force signal. Several limitations of this technique have been previously discussed in an empirical way, using simulated and experimental data. In this study, we provide a theoretical analysis of this technique in the frequency domain and describe its intrinsic limitations. By analyzing the analytical expression of STA, first we show that a certain degree of correlation between the motor unit activities prevents an accurate estimation of the twitch force, even from relatively long recordings. Second, we show that the quality of the twitch estimates by STA is highly related to the relative variability of the inter-spike intervals of motor unit action potentials. Interestingly, if this variability is extremely high, correct estimates could be obtained even for high discharge rates. However, for physiological inter-spike interval variability and discharge rate, the technique performs with relatively low estimation accuracy and high estimation variance. Finally, we show that the selection of the triggers that are most distant from the previous and next, which is often suggested, is not an effective way for improving STA estimates and in some cases can even be detrimental. These results show the intrinsic limitations of the STA technique and provide a theoretical framework for the design of new methods for the measurement of motor unit force twitch. PMID:24667744

  7. Recursive Averaging

    Science.gov (United States)

    Smith, Scott G.

    2015-01-01

    In this article, Scott Smith presents an innocent problem (Problem 12 of the May 2001 Calendar from "Mathematics Teacher" ("MT" May 2001, vol. 94, no. 5, p. 384)) that was transformed by several timely "what if?" questions into a rewarding investigation of some interesting mathematics. These investigations led to two…
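    The article itself is only summarized above; as a point of reference, the recursive form of the arithmetic mean that investigations of this kind typically start from can be sketched as:

    ```python
    def running_average(xs):
        """Recursive (incremental) mean: a_n = a_{n-1} + (x_n - a_{n-1}) / n,
        so each new value nudges the previous average toward itself."""
        avg = 0.0
        for n, x in enumerate(xs, start=1):
            avg += (x - avg) / n
        return avg
    ```

    The recurrence gives the same result as summing and dividing, but updates in constant memory as values arrive.
    
    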

  8. Notes on Well-Posed, Ensemble Averaged Conservation Equations for Multiphase, Multi-Component, and Multi-Material Flows

    International Nuclear Information System (INIS)

    Ray A. Berry

    2005-01-01

    At the INL researchers and engineers routinely encounter multiphase, multi-component, and/or multi-material flows. Some examples include: reactor coolant flows; molten corium flows; dynamic compaction of metal powders; spray forming and thermal plasma spraying; plasma quench reactors; subsurface flows, particularly in the vadose zone; internal flows within fuel cells; black liquor atomization and combustion; wheat-chaff classification in combine harvesters; and the Generation IV pebble bed, high-temperature gas reactor. The complexity of these flows dictates that they be examined in an averaged sense. Typically one would begin with known (or at least postulated) microscopic flow relations that hold on the "small" scale. These include continuum-level conservation of mass, balance of species mass and momentum, conservation of energy, and a statement of the second law of thermodynamics, often in the form of an entropy inequality (such as the Clausius-Duhem inequality). The averaged or macroscopic conservation equations and entropy inequalities are then obtained from the microscopic equations through suitable averaging procedures. At this stage a stronger form of the second law may also be postulated for the mixture of phases or materials. To render the evolutionary material flow balance system unique, constitutive equations and phase or material interaction relations are introduced from experimental observation, or by postulation, through strict enforcement of the constraints or restrictions resulting from the averaged entropy inequalities. These averaged equations form the governing equation system for the dynamic evolution of these mixture flows. Most commonly, the averaging technique utilized is either volume or time averaging or a combination of the two. The flow restrictions required for volume and time averaging to be valid can be severe, and violations of these restrictions are often found.
A more general, less restrictive (and far less commonly used) type of averaging known as

  9. Studies concerning average volume flow and waterpacking anomalies in thermal-hydraulics codes

    International Nuclear Information System (INIS)

    Lyczkowski, R.W.; Ching, J.T.; Mecham, D.C.

    1977-01-01

    One-dimensional hydrodynamic codes have been observed to exhibit anomalous behavior in the form of non-physical pressure oscillations and spikes. It is our experience that this anomalous behavior can sometimes result in mass depletion, steam table failure and, in severe cases, problem abortion. In addition, these non-physical pressure spikes can result in long running times when small time steps are needed in an attempt to cope with anomalous solution behavior. The source of these pressure spikes has been conjectured to be nonuniform enthalpy distribution, wave reflection off the closed end of a pipe, or abrupt changes in pressure history when the fluid changes from subcooled to two-phase conditions. It is demonstrated in this paper that many of the faults can be attributed to inadequate modeling of the average volume flow and of the sharp fluid density front crossing a junction. General corrective models are difficult to devise since the causes of the problems touch on the very theoretical bases of the differential field equations and the associated solution scheme. For example, the fluid homogeneity assumption and the numerical extrapolation scheme have placed severe restrictions on the capability of a code to adequately model certain physical phenomena involving fluid discontinuities. The need for accurate junction and local properties to describe phenomena internal to a control volume often points to additional lengthy computations that are difficult to justify in terms of computational efficiency. Corrective models that are economical to implement and use are developed. When incorporated into the one-dimensional, homogeneous transient thermal-hydraulic analysis computer code, RELAP4, they help mitigate many of the code's difficulties related to average volume flow and water-packing anomalies. An average volume flow model and a critical density model are presented. Computational improvements due to these models are also demonstrated.

  10. Green Suppliers Performance Evaluation in Belt and Road Using Fuzzy Weighted Average with Social Media Information

    Directory of Open Access Journals (Sweden)

    Kuo-Ping Lin

    2017-12-01

    Full Text Available A decision model for selecting a suitable supplier is key to reducing the environmental impact in green supply chain management for high-tech companies. Traditional fuzzy weighted average (FWA) adopts linguistic variables to determine weights given by experts. However, the weights in FWA have not considered the public voice, meaning the viewpoints of consumers, in green supply chain management. This paper focuses on developing a novel decision model for green supplier selection in the One Belt and One Road (OBOR) initiative through a fuzzy weighted average approach with social media. The proposed decision model uses the membership grades of the criteria and sub-criteria and their relative weights, which consider the volume of social media, to establish an analysis matrix for green supplier selection. Then, the proposed fuzzy weighted average approach is used as an aggregating tool to calculate a synthetic score for each green supplier in the Belt and Road initiative. The final score of each green supplier is ordered by a non-fuzzy performance value ranking method to help the consumer make a decision. A case of green supplier selection in the light-emitting diode (LED) industry is used to demonstrate the proposed decision model. The findings demonstrate that (1) the consumer's main concerns are "Quality" and "Green products" in the LED industry; hence, the supplier ranking from the FWA-with-social-media model differs from that of traditional FWA; (2) OBOR in the LED industry is not fervently discussed in searches of Google and Twitter; and (3) FWA with social media information can objectively analyze green supplier selection because the novel model considers the viewpoints of the consumer.
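    The aggregation step can be sketched as follows. This is a deliberate simplification, assuming triangular fuzzy scores and crisp weights (normalized, e.g., from social-media volume); the paper's full FWA also treats the weights themselves as fuzzy, which requires α-cut optimization.

    ```python
    def fuzzy_weighted_average(scores, weights):
        """Aggregate triangular fuzzy scores (l, m, u) with crisp weights.
        Weights are normalized to sum to 1; the average is taken
        component-wise, which is exact when the weights are crisp."""
        total = sum(weights)
        w = [wi / total for wi in weights]
        l = sum(wi * s[0] for wi, s in zip(w, scores))
        m = sum(wi * s[1] for wi, s in zip(w, scores))
        u = sum(wi * s[2] for wi, s in zip(w, scores))
        return (l, m, u)

    def defuzzify(tfn):
        """Centroid (non-fuzzy performance value) of a triangular fuzzy number."""
        return sum(tfn) / 3.0

    # Hypothetical supplier rated on two criteria; weights 60:40 stand in
    # for relative social-media volume.
    score = fuzzy_weighted_average([(0.5, 0.7, 0.9), (0.3, 0.5, 0.7)], [60, 40])
    ```

    Suppliers are then ranked by the defuzzified value of their aggregate score.
    
    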

  11. Variability of average SUV from several hottest voxels is lower than that of SUVmax and SUVpeak

    Energy Technology Data Exchange (ETDEWEB)

    Laffon, E. [CHU de Bordeaux, Service de Medecine Nucleaire, Hopital du Haut-Leveque, Pessac (France); Universite de Bordeaux 2, Centre de Recherche Cardio-Thoracique, Bordeaux (France); INSERM U 1045, Centre de Recherche Cardio-Thoracique, Bordeaux (France); Lamare, F.; Clermont, H. de [CHU de Bordeaux, Service de Medecine Nucleaire, Hopital du Haut-Leveque, Pessac (France); Burger, I.A. [University Hospital of Zurich, Division of Nuclear Medicine, Department Medical Radiology, Zurich (Switzerland); Marthan, R. [Universite de Bordeaux 2, Centre de Recherche Cardio-Thoracique, Bordeaux (France); INSERM U 1045, Centre de Recherche Cardio-Thoracique, Bordeaux (France)

    2014-08-15

    To assess variability of the average standard uptake value (SUV) computed by varying the number of hottest voxels within an ¹⁸F-fluorodeoxyglucose (¹⁸F-FDG)-positive lesion. This SUV metric was compared with the maximal SUV (SUVmax: the hottest voxel) and peak SUV (SUVpeak: SUVmax and its 26 neighbouring voxels). Twelve lung cancer patients (20 lesions) were analysed using PET dynamic acquisition involving ten successive 2.5-min frames. In each frame and lesion, the average SUV obtained from the N = 5, 10, 15, 20, 25 or 30 hottest voxels (SUVmax-N), SUVmax and SUVpeak were assessed. The relative standard deviations (SDrs) from ten frames were calculated for each SUV metric and lesion, yielding the mean relative SD from 20 lesions for each SUV metric (SDrN, SDrmax and SDrpeak), and hence relative measurement error and repeatability (MEr-R). For each N, SDrN was significantly lower than SDrmax and SDrpeak. SDrN correlated strongly with N: 6.471 × N^(-0.103) (r = 0.994; P < 0.01). MEr-R of SUVmax-30 was 8.94-12.63 % (95 % CL), versus 13.86-19.59 % and 13.41-18.95 % for SUVmax and SUVpeak respectively. Variability of SUVmax-N is significantly lower than for SUVmax and SUVpeak. Further prospective studies should be performed to determine the optimal total hottest volume, as voxel volume may depend on the PET system. (orig.)
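    The metrics compared here are straightforward to compute from a lesion's voxel values. A minimal illustrative sketch (synthetic voxel data, not the study's images; SUVpeak additionally requires the 26-neighbourhood in 3-D and is omitted):

    ```python
    import numpy as np

    def suv_metrics(suv_voxels, n=30):
        """Return SUVmax (hottest voxel) and SUVmax-N (mean of the N
        hottest voxels) from a lesion's voxel SUVs."""
        s = np.sort(np.ravel(suv_voxels))[::-1]   # descending
        return s[0], s[:n].mean()

    rng = np.random.default_rng(1)
    lesion = rng.gamma(4.0, 1.5, size=500)        # synthetic lesion voxel SUVs
    suv_max, suv_max_30 = suv_metrics(lesion, n=30)
    ```

    Averaging over the N hottest voxels damps the statistical fluctuation of the single hottest voxel, which is the intuition behind the lower variability reported for SUVmax-N.
    
    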

  12. Average Extinction Curves and Abundances at 1

    Science.gov (United States)

    Vanden Berk, D. E.; York, D. G.; Khare, P.; Kulkarni, V. P.; Crotts, A. P. S.; Lauroesch, J. T.; Richards, G. T.; Yip, C.-W.; Schneider, D. P.; Welty, D.; Alsayyad, Y.; Shanidze, N.; Vanlandingham, J.; Tumlinson, J.; Kumar, A.; Lundgren, B.; Baugher, B.; Hall, P. B.; Jenkins, E. B.; Menard, B.; Rao, S.; Turnshek, D.; Brinkman, J.; SDSS Collaboration

    2005-12-01

    We present average extinction curves and relative abundance measurements for a sample of 809 MgII absorption line systems, with 1.0 < zabs < 1.86, identified in the spectra of SDSS quasars. Extinction curves for numerous sub-samples were generated by comparing geometric mean absorber-frame spectra with those of matching quasar spectra without absorbers. There is clear evidence for the presence of dust in the intervening systems. All of the extinction curves are similar to the SMC extinction curve, and the 2175 Å absorption feature is not detectable in the curves of any of the sub-samples. Quasars with absorbers are at least three times as likely to have highly reddened spectra, compared to quasars without detected absorption systems. The average absorber-frame color excess, E(B-V), ranges from <0.001 to 0.085, and depends on the properties of the absorbers in the sub-samples. The column densities of numerous first ions do not show as correspondingly large a variation as the color excess. The depletion pattern in the high E(B-V) samples is similar to that of Galactic halo clouds, and is consistent with those found for individual damped Ly α systems. Funding for the Sloan Digital Sky Survey (SDSS) has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Aeronautics and Space Administration, the National Science Foundation, the U.S. Department of Energy, the Japanese Monbukagakusho, the Max Planck Society, and the HEFCE.

  13. Prediction of average annual surface temperature for both flexible and rigid pavements

    Directory of Open Access Journals (Sweden)

    Karthikeyan LOGANATHAN

    2017-12-01

    Full Text Available The surface temperature of pavements is a critical attribute during pavement design. Surface temperature must be measured at locations of interest based on time-consuming field tests. The key idea of this study is to develop a temperature profile model to predict the surface temperature of flexible and rigid pavements based on weather parameters. Determination of surface temperature with traditional techniques and sensors is replaced by a newly developed method. The method includes the development of a regression model to predict the average annual surface temperature from weather parameters such as ambient air temperature, relative humidity, wind speed, and precipitation. Detailed information about temperature and the other parameters was extracted from the Federal Highway Administration's (FHWA) Long Term Pavement Performance (LTPP) online database. The study was conducted on 61 pavement sections in the state of Alabama over a 10-year period. The developed model predicts the average annual surface temperature from the known weather parameters. The predicted surface temperature model for asphalt pavements was very reliable and can be utilized when designing a pavement. The study was also conducted on seven rigid pavement sections in Alabama to predict their surface temperature, for which a successful model was developed. The outcome of this study will help transportation agencies by saving the time and effort invested in expensive field tests to measure the surface temperature of pavements.
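    The model form described — a multiple linear regression of average annual surface temperature on the four weather covariates — can be sketched on synthetic data. All numbers below are invented for illustration; they are not the paper's LTPP coefficients.

    ```python
    import numpy as np

    # Synthetic stand-in for 61 pavement sections with four weather covariates.
    rng = np.random.default_rng(2)
    n = 61
    air_temp = rng.uniform(10, 25, n)      # mean air temperature, deg C
    humidity = rng.uniform(40, 90, n)      # relative humidity, percent
    wind = rng.uniform(0, 10, n)           # wind speed, m/s
    precip = rng.uniform(800, 1600, n)     # precipitation, mm/yr

    # Invented "true" relationship plus noise, to generate a response.
    surface = (1.2 * air_temp - 0.02 * humidity + 0.1 * wind
               + 0.001 * precip + rng.normal(0, 0.3, n))

    # Ordinary least squares with an intercept column.
    X = np.column_stack([np.ones(n), air_temp, humidity, wind, precip])
    beta, *_ = np.linalg.lstsq(X, surface, rcond=None)
    pred = X @ beta
    ```

    Given fitted coefficients, predicting surface temperature at a new site needs only the four weather inputs, which is the time saving the authors emphasize.
    
    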

  14. Averaging, not internal noise, limits the development of coherent motion processing

    Directory of Open Access Journals (Sweden)

    Catherine Manning

    2014-10-01

    Full Text Available The development of motion processing is a critical part of visual development, allowing children to interact with moving objects and navigate within a dynamic environment. However, global motion processing, which requires pooling motion information across space, develops late, reaching adult-like levels only by mid-to-late childhood. The reasons underlying this protracted development are not yet fully understood. In this study, we sought to determine whether the development of motion coherence sensitivity is limited by internal noise (i.e., imprecision in estimating the directions of individual elements) and/or global pooling across local estimates. To this end, we presented equivalent noise direction discrimination tasks and motion coherence tasks at both slow (1.5°/s) and fast (6°/s) speeds to children aged 5, 7, 9 and 11 years, and adults. We show that, as children get older, their levels of internal noise reduce, and they are able to average across more local motion estimates. Regression analyses indicated, however, that age-related improvements in coherent motion perception are driven solely by improvements in averaging and not by reductions in internal noise. Our results suggest that the development of coherent motion sensitivity is primarily limited by developmental changes within brain regions involved in integrating motion signals (e.g., MT/V5).
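    The equivalent noise paradigm rests on a simple relation: squared discrimination threshold equals internal plus external noise variance, divided by the number of samples averaged. A minimal sketch of that standard model (parameter names and values are ours, not the paper's):

    ```python
    import numpy as np

    def en_threshold(sigma_ext, sigma_int, n_samples):
        """Equivalent-noise model of direction-discrimination threshold:
        threshold^2 = (sigma_int^2 + sigma_ext^2) / n_samples."""
        return np.sqrt((sigma_int**2 + sigma_ext**2) / n_samples)

    # Under the paper's interpretation, development improves n_samples
    # (averaging) rather than lowering sigma_int (internal noise):
    child = en_threshold(sigma_ext=2.0, sigma_int=1.5, n_samples=4)
    adult = en_threshold(sigma_ext=2.0, sigma_int=1.5, n_samples=16)
    ```

    Fitting this two-parameter curve to thresholds measured at several external-noise levels is what lets the two limiting factors be separated.
    
    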

  15. Average Frequency – RA Value for Reinforced Concrete Beam Strengthened with Carbon Fibre Sheet

    Directory of Open Access Journals (Sweden)

    Mohamad M. Z.

    2016-01-01

    Full Text Available Acoustic emission (AE) is one of the tools that can be used to detect cracks and to classify crack types in reinforced concrete (RC) structures. Dislocation or movement of the material inside the RC may release a transient elastic wave. Here AE plays an important role: it captures the transient elastic wave and converts it into AE parameters such as amplitude, count, rise time and duration. Some parameters can be used directly to evaluate crack behaviour; in other cases, quantities must be derived from the AE parameters using related formulae in order to observe the behaviour of the crack. Using analysis of average frequency (AF) and RA value, cracks can be classified as tensile or shear. In this study, seven phases of increasing static load were used to observe crack behaviour. The beams were tested in two conditions. In the first condition, the beams were tested in their original state, without strengthening by carbon fibre sheet (CFS) at the bottom (tension side) of the beam. In the second condition, the beams were strengthened with CFS at the tension side. It was found that beams wrapped with CFS showed enhanced strength in terms of maximum ultimate load. Based on the relationship between average frequency (AF) and RA value, the cracks of the beams can be classified.
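    The two derived quantities have standard definitions in the AE literature: RA value is rise time divided by peak amplitude, and average frequency is ringdown counts divided by duration; high AF with low RA indicates tensile cracking, the reverse indicates shear. A sketch of that classification rule (the boundary slope k is case-dependent and the value here is a placeholder, not from the paper):

    ```python
    def classify_crack(rise_time_us, amplitude_v, counts, duration_ms, k=1.0):
        """Classify an AE hit as tensile or shear from the AF-RA plane.
        RA = rise time / amplitude; AF = counts / duration."""
        ra = rise_time_us / amplitude_v     # microseconds per volt
        af = counts / duration_ms           # counts per ms (~kHz)
        return "tensile" if af > k * ra else "shear"
    ```

    In practice hits from each load phase are plotted in the AF-RA plane and the boundary is calibrated for the specimen and sensor setup.
    
    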

  16. Average household income, crime, and smoking behaviour in a local area: the Finnish 10-Town study.

    Science.gov (United States)

    Virtanen, Marianna; Kivimäki, Mika; Kouvonen, Anne; Elovainio, Marko; Linna, Anne; Oksanen, Tuula; Vahtera, Jussi

    2007-05-01

    Social environments, like neighbourhoods, are increasingly recognised as determinants of health. While several studies have reported an association of low neighbourhood socio-economic status with morbidity, mortality and health risk behaviour, little is known of the health effects of neighbourhood crime rates. Using the ongoing 10-Town study in Finland, we examined the relations of average household income and crime rate measured at the local area level, with smoking status and intensity by linking census data of local area characteristics from 181 postal zip codes to survey responses to smoking behaviour in a cohort of 23,008 municipal employees. Gender-stratified multilevel analyses adjusted for age and individual occupational status revealed an association between low local area income rate and current smoking. High local area crime rate was also associated with current smoking. Both local area characteristics were strongly associated with smoking intensity. Among ever-smokers, being an ex-smoker was less likely among residents in areas with low average household income and a high crime rate. In the fully adjusted model, the association between local area income and smoking behaviour among women was substantially explained by the area-level crime rate. This study extends our knowledge of potential pathways through which social environmental factors may affect health.

  17. Gearbox fault diagnosis based on time-frequency domain synchronous averaging and feature extraction technique

    Science.gov (United States)

    Zhang, Shengli; Tang, Jiong

    2016-04-01

    The gearbox is one of the most vulnerable subsystems in wind turbines. Its health status significantly affects the efficiency and function of the entire system. Vibration-based fault diagnosis methods are prevalent nowadays. However, vibration signals are always contaminated by noise that comes from data acquisition errors, structure geometric errors, operation errors, etc. As a result, it is difficult to identify potential gear failures directly from vibration signals, especially for early-stage faults. This paper utilizes the synchronous averaging technique in the time-frequency domain to remove non-synchronous noise and enhance the fault-related time-frequency features. The enhanced time-frequency information is further employed in gear fault classification and identification through feature extraction algorithms including Kernel Principal Component Analysis (KPCA), Multilinear Principal Component Analysis (MPCA), and Locally Linear Embedding (LLE). Results show that the LLE approach is the most effective at classifying and identifying different gear faults.
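    The core idea of synchronous averaging is easy to show in the time domain: segments aligned to shaft revolutions add coherently for rotation-synchronous components while non-synchronous noise averages out. A minimal sketch (the paper applies the same idea in the time-frequency domain; this synthetic demo is ours):

    ```python
    import numpy as np

    def time_synchronous_average(signal, samples_per_rev):
        """Average the signal over whole shaft revolutions. Components
        synchronous with rotation survive; other content is attenuated
        by roughly 1/sqrt(number of revolutions)."""
        n_rev = len(signal) // samples_per_rev
        trimmed = signal[:n_rev * samples_per_rev]
        return trimmed.reshape(n_rev, samples_per_rev).mean(axis=0)

    # Synthetic gear-mesh tone (8 cycles per revolution) buried in noise.
    spr, revs = 128, 200
    t = np.arange(spr * revs)
    tone = np.sin(2 * np.pi * 8 * t / spr)
    noisy = tone + np.random.default_rng(3).normal(0, 2.0, t.size)
    tsa = time_synchronous_average(noisy, spr)
    ```

    With 200 revolutions the noise is suppressed by about a factor of 14 while the mesh tone is preserved, which is why TSA is a standard preprocessing step before feature extraction.
    
    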

  18. On the Coplanar Integrable Case of the Twice-Averaged Hill Problem with Central Body Oblateness

    Science.gov (United States)

    Vashkov'yak, M. A.

    2018-01-01

    The twice-averaged Hill problem with the oblateness of the central planet is considered in the case where its equatorial plane coincides with the plane of its orbital motion relative to the perturbing body. A qualitative study of this so-called coplanar integrable case was begun by Y. Kozai in 1963 and continued by M.L. Lidov and M.V. Yarskaya in 1974. However, no rigorous analytical solution of the problem can be obtained due to the complexity of the integrals. In this paper we obtain some quantitative evolution characteristics and propose an approximate constructive-analytical solution of the evolution system in the form of explicit time dependences of satellite orbit elements. The methodical accuracy has been estimated for several orbits of artificial lunar satellites by comparison with the numerical solution of the evolution system.

  19. Optimal Designing of Variables Chain Sampling Plan by Minimizing the Average Sample Number

    Directory of Open Access Journals (Sweden)

    S. Balamurali

    2013-01-01

    Full Text Available We investigate the optimal design of a chain sampling plan for application to normally distributed quality characteristics. The chain sampling plan is one of the conditional sampling procedures, and this plan under variables inspection is useful when testing is costly and destructive. The advantages of this proposed variables plan over the variables single sampling plan and the variables double sampling plan are discussed. Tables are also constructed for the selection of optimal parameters of known and unknown standard deviation variables chain sampling plans for two specified points on the operating characteristic curve, namely, the acceptable quality level and the limiting quality level, along with the producer's and consumer's risks. The optimization problem is formulated as a nonlinear program in which the objective function to be minimized is the average sample number and the constraints relate to the lot acceptance probabilities at the acceptable quality level and the limiting quality level under the operating characteristic curve.

  20. Medium term municipal solid waste generation prediction by autoregressive integrated moving average

    Science.gov (United States)

    Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan

    2014-09-01

    Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems, insufficient data, and a lack of strategic planning. It is therefore important to develop a robust solid waste generation forecasting model, which helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate is increasing rapidly due to population growth and the new consumption trends that characterize the modern life style. This paper aims to develop a monthly solid waste forecasting model using the Autoregressive Integrated Moving Average (ARIMA); such a model is applicable even when there is a lack of data and will help the municipality properly establish the annual service plan. The results show that an ARIMA(6,1,0) model predicts monthly municipal solid waste generation with a root mean square error equal to 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.
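    An ARIMA(p,1,0) model is an autoregression on the once-differenced series. The following sketch fits such a model by ordinary least squares on synthetic data; it stands in for the paper's ARIMA(6,1,0), which in practice would be estimated with a statistics package (e.g. statsmodels) rather than by hand.

    ```python
    import numpy as np

    def fit_ar_on_diff(y, p=6):
        """ARIMA(p,1,0) sketch: difference once, then fit AR(p) by least
        squares. Column i holds the (i+1)-step lag of the differences."""
        d = np.diff(y)
        X = np.column_stack([d[p - i - 1:len(d) - i - 1] for i in range(p)])
        target = d[p:]
        coef, *_ = np.linalg.lstsq(X, target, rcond=None)
        return coef

    def forecast_next(y, coef, p=6):
        """One-step forecast: last level plus predicted next difference."""
        d = np.diff(y)
        return y[-1] + d[-1:-p - 1:-1] @ coef

    # Synthetic monthly generation series: steady growth of ~5 units/month.
    rng = np.random.default_rng(4)
    y = np.cumsum(5 + rng.normal(0, 1, 120))
    coef = fit_ar_on_diff(y, p=6)
    y_next = forecast_next(y, coef, p=6)
    ```

    Differencing once removes the trend so that the AR part only has to model month-to-month fluctuations.
    
    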

  2. Optimizing the average longitudinal phase of the beam in the SLC linac

    International Nuclear Information System (INIS)

    Bane, K.L.F.

    1989-09-01

    The relation of the beam's average linac phase, φ0, to the final energy spectrum in the SLC linac has been studied by many people over the years, with much of the work left unpublished. In this note we perform a somewhat thorough investigation of the problem. First we describe the calculation method, and discuss some common features of the energy spectrum. Then we calculate the value of φ0 that minimizes δrms for the conceivable range of bunch populations and bunch lengths of the SLC linac. This is followed by luminosity calculations, including the sensitivity of luminosity to variations in φ0. Finally we suggest a practical method of implementing the proper phase setting on the real machine.
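    The optimization described can be illustrated with a toy scan: for a Gaussian bunch, the energy gain varies along the bunch as cos(φ0 + kz) minus an accumulated wake loss, and φ0 is chosen to minimize the rms spread. Everything below is invented for illustration (the wake model and all numbers are ours, not SLC values).

    ```python
    import numpy as np

    # Longitudinal positions in units of the bunch length sigma_z.
    z = np.linspace(-3, 3, 601)
    rho = np.exp(-z**2 / 2)
    rho /= rho.sum()                      # normalized charge distribution
    k = 0.05                              # toy rf wavenumber * sigma_z
    wake = 0.02 * np.cumsum(rho)          # toy accumulated wakefield loss

    def rms_spread(phi0_deg):
        """Relative rms energy spread for average phase phi0 (degrees)."""
        gain = np.cos(np.radians(phi0_deg) + k * z) - wake
        mean = np.sum(rho * gain)
        return np.sqrt(np.sum(rho * (gain - mean)**2))

    phis = np.linspace(-15, 15, 301)
    spreads = np.array([rms_spread(p) for p in phis])
    phi_opt = phis[np.argmin(spreads)]
    ```

    The optimum offsets the RF slope against the wake-induced head-tail energy difference, which is the qualitative mechanism behind the phase setting discussed in the note.
    
    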

  3. Microclim: Global estimates of hourly microclimate based on long-term monthly climate averages.

    Science.gov (United States)

    Kearney, Michael R; Isaac, Andrew P; Porter, Warren P

    2014-01-01

    The mechanistic links between climate and the environmental sensitivities of organisms occur through the microclimatic conditions that organisms experience. Here we present a dataset of gridded hourly estimates of typical microclimatic conditions (air temperature, wind speed, relative humidity, solar radiation, sky radiation and substrate temperatures from the surface to 1 m depth) at high resolution (~15 km) for the globe. The estimates are for the middle day of each month, based on long-term average macroclimates, and include six shade levels and three generic substrates (soil, rock and sand) per pixel. These data are suitable for deriving biophysical estimates of the heat, water and activity budgets of terrestrial organisms.

  4. The Value of Multivariate Model Sophistication: An Application to pricing Dow Jones Industrial Average options

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco

    We assess the predictive accuracy of a large number of multivariate volatility models in terms of pricing options on the Dow Jones Industrial Average. We measure the value of model sophistication in terms of dollar losses by considering a set of 248 multivariate models that differ in their specification of the conditional variance, conditional correlation, and innovation distribution. All models belong to the dynamic conditional correlation class, which is particularly suited because it allows consistent estimation of the risk-neutral dynamics with a manageable computational effort in relatively … Changing the innovation for a Laplace innovation assumption improves the pricing in a smaller way. Apart from investigating directly the value of model sophistication in terms of dollar losses, we also use the model confidence set approach to statistically infer the set of models that delivers the best pricing performance.

  5. The average impulse response of a rough surface and its applications. [in radar altimetry

    Science.gov (United States)

    Brown, G. S.

    1977-01-01

    This paper is concerned with the theoretical model for short pulse scattering from a statistically random planar surface with particular application to current state of the art radar altimetry. A short review of the assumptions inherent in the convolutional model is presented. Simplified expressions are obtained for both the impulse response and the average backscattered power for near normal incidence under the assumptions common to satellite radar altimetry systems. In particular, it is shown that the conventional two-dimensional surface integration can be reduced to a closed form solution. Two applications of these results are presented relative to radar altimetry, namely, radar antenna pointing angle determination and altitude bias correction for pointing angle and surface roughness effects. It is also shown that these results have direct application to the analysis of the two frequency system proposed by Weissman, and a possible combined long pulse altimeter and two frequency system is suggested.
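The convolutional model reviewed in the paper writes the mean return waveform as the flat-surface impulse response convolved with the surface-height density (mapped to delay) and the radar's point-target response. A minimal numerical sketch of that structure, with all waveform parameters hypothetical rather than taken from the paper:

```python
import numpy as np

# Brown's convolutional picture: mean return = flat-surface impulse response
# (*) surface-height PDF (*) point-target response. Parameters are illustrative.
dt = 1e-9                                    # 1 ns sample spacing
t = np.arange(0.0, 200e-9, dt)

# Step-like flat-surface response with antenna-pattern decay after the nadir return
flat = np.where(t >= 50e-9, np.exp(-(t - 50e-9) / 40e-9), 0.0)

def gaussian_kernel(sigma):
    """Unit-area Gaussian smoothing kernel on the same time grid."""
    k = np.exp(-0.5 * ((t - t.mean()) / sigma) ** 2)
    return k / k.sum()

height_pdf = gaussian_kernel(5e-9)           # rms surface roughness as delay spread
point_target = gaussian_kernel(3e-9)         # compressed-pulse shape

mean_return = np.convolve(np.convolve(flat, height_pdf, mode="same"),
                          point_target, mode="same")
# Roughness smears the leading edge: the mean waveform rises more slowly than
# the flat-surface step, which is what altitude-bias corrections exploit.
```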

  6. Medium modification of averaged jet charge in heavy-ion collisions

    Science.gov (United States)

    Chen, Shi-Yong; Zhang, Ben-Wei; Wang, Enke

    2017-08-01

    Jet charge characterizes the electric charge distribution inside a jet. In this talk we make the first theoretical study of jet charge in high-energy nuclear collisions and calculate numerically the medium modifications of jet charge due to parton energy loss in the quark-gluon plasma. The parton multiple scattering in the hot/dense QCD medium is simulated by a modified version of the PYQUEN Monte Carlo model with 3+1D ideal hydrodynamical evolution of the fireball. Our preliminary results show that the averaged jet charge is significantly modified in A+A collisions relative to that in p+p. We observe different features of the quark and gluon jet charges in heavy-ion collisions and a sensitivity of the jet charge modifications to the flavour dependence of energy loss, which could then be used to discriminate quark and gluon jets as well as their energy loss patterns in heavy-ion collisions.
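The observable itself is compact: the momentum-weighted jet charge Q_kappa = sum_i q_i (p_T,i)^kappa / (p_T,jet)^kappa. A sketch of this vacuum-level definition (the medium modification in the talk comes from evaluating it on PYQUEN-showered jets, which is not reproduced here; taking p_T,jet as the scalar sum of constituent p_T is one common convention):

```python
def jet_charge(constituents, kappa=0.5):
    """Momentum-weighted jet charge Q_kappa = sum_i q_i * pT_i**kappa / pT_jet**kappa.

    constituents: iterable of (pT, electric_charge) pairs. pT_jet is taken
    here as the scalar sum of constituent pT (a simplifying convention).
    """
    pt_jet = sum(pt for pt, _ in constituents)
    return sum(q * pt ** kappa for pt, q in constituents) / pt_jet ** kappa

# A jet fragmenting symmetrically into a pi+ / pi- pair carries zero charge:
print(jet_charge([(20.0, +1), (20.0, -1)]))  # 0.0
```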

  7. Rheological properties of poly (vinylpiyrrolidone) as a function of average molecular weight and its applications

    DEFF Research Database (Denmark)

    Marani, Debora; Sudireddy, Bhaskar Reddy; Kiebach, Ragnar

    of formulations for pharmaceutics, food, personal care, or coatings, paintings, and printing applications. It is also widely used as an organic additive, e.g. as a dispersant for ceramics. Regardless of the application, control over the polymer's behavior in solution is required for efficient optimization...... of the formulation. Specifically, understanding the rheological properties is of paramount interest. Among the different factors that influence the rheological behavior, the viscosity-average molecular weight of the polymer is the most relevant. In this work, PVP polymers with various molecular weights have been....... The MHS equation relates the intrinsic viscosity [η] of a polymer in a given solvent at fixed temperature to the molecular weight. The adopted method also enables the evaluation of the two MHS equation parameters (a and K) and of the polydispersity correction factor (qMHS). The intrinsic viscosity...
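The Mark-Houwink-Sakurada (MHS) relation mentioned above is [η] = K·M^a, which can be inverted for the viscosity-average molecular weight M_v. A sketch of both directions; the K and a values in the example are hypothetical, since they depend on the polymer-solvent-temperature system:

```python
def intrinsic_viscosity(molecular_weight, K, a):
    """Mark-Houwink-Sakurada relation: [eta] = K * M**a.

    K and a are empirical constants for a given polymer/solvent/temperature.
    """
    return K * molecular_weight ** a

def viscosity_average_mw(eta, K, a):
    """Invert MHS for the viscosity-average molecular weight M_v."""
    return (eta / K) ** (1.0 / a)

# Round trip with illustrative (not measured) constants K = 1e-4, a = 0.7:
eta = intrinsic_viscosity(1.0e5, 1e-4, 0.7)
print(viscosity_average_mw(eta, 1e-4, 0.7))  # ~1e5
```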

  8. Effect of alloy deformation on the average spacing parameters of non-deforming particles

    International Nuclear Information System (INIS)

    Fisher, J.; Gurland, J.

    1980-02-01

    It is shown on the basis of stereological definitions and a few simple experiments that the commonly used average dispersion parameters, area fraction (A/sub A/)/sub β/, areal particle density N/sub Aβ/ and mean free path lambda/sub α/, remain invariant during plastic deformation in the case of non-deforming equiaxed particles. Directional effects on the spacing parameters N/sub Aβ/ and lambda/sub α/ arise during uniaxial deformation by rotation and preferred orientation of nonequiaxed particles. Particle arrangement in stringered or layered structures and the effect of deformation on nearest neighbor distances of particles and voids are briefly discussed in relation to strength and fracture theories
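The invariance argument rests on classical stereological identities linking the area fraction to intercept counts on test lines. A sketch of two of these textbook relations (standard stereology, not the paper's own measurements):

```python
def mean_free_path(area_fraction, intercepts_per_length):
    """Mean free path in the matrix between particles.

    Standard stereological identity: lambda = (1 - A_A) / N_L, where A_A is
    the particle area fraction and N_L the particle interceptions per unit
    length of test line.
    """
    return (1.0 - area_fraction) / intercepts_per_length

def mean_intercept_length(area_fraction, intercepts_per_length):
    """Mean particle intercept length: l = A_A / N_L."""
    return area_fraction / intercepts_per_length

# 20% area fraction, 10 interceptions per mm of test line:
print(mean_free_path(0.2, 10.0), mean_intercept_length(0.2, 10.0))  # 0.08 mm, 0.02 mm
```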

  9. Attention problems and hyperactivity as predictors of college grade point average.

    Science.gov (United States)

    Schwanz, Kerry A; Palm, Linda J; Brallier, Sara A

    2007-11-01

    This study examined the relative contributions of measures of attention problems and hyperactivity to the prediction of college grade point average (GPA). A sample of 316 students enrolled in introductory psychology and sociology classes at a southeastern university completed the BASC-2 Self-Report of Personality College Form. Scores on the attention problems scale and the hyperactivity scale of the BASC-2 were entered into a regression equation as predictors of cumulative GPA. Each of the independent variables made a significant contribution to the prediction of GPA. Attention problem scores alone explained 7% of the variability in GPAs. The addition of hyperactivity scores to the equation produced a 2% increase in explanatory power. The implications of these results for assessing symptoms of inattention and hyperactivity in college students are discussed.
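The hierarchical-regression logic of the study (attention entered first, hyperactivity added second, reporting the increment in R²) can be sketched on synthetic data. All numbers below are made up for illustration; only the procedure mirrors the paper:

```python
import numpy as np

# Synthetic stand-ins for BASC-2 scale scores and GPA (hypothetical parameters).
rng = np.random.default_rng(0)
n = 316
attention = rng.normal(50.0, 10.0, n)
hyper = 0.4 * attention + rng.normal(30.0, 9.0, n)   # correlated predictors
gpa = 3.6 - 0.015 * attention - 0.006 * hyper + rng.normal(0.0, 0.4, n)

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

r2_step1 = r_squared(attention.reshape(-1, 1), gpa)                 # attention only
r2_step2 = r_squared(np.column_stack([attention, hyper]), gpa)      # add hyperactivity
print(round(r2_step1, 3), round(r2_step2 - r2_step1, 3))            # R^2, delta R^2
```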

  10. Non-Gaussian Closed Form Solutions for Geometric Average Asian Options in the Framework of Non-Extensive Statistical Mechanics

    Directory of Open Access Journals (Sweden)

    Pan Zhao

    2018-01-01

    In this paper, we consider the pricing of geometric-average Asian options under a non-Gaussian model in which the underlying stock price is driven by a process based on non-extensive statistical mechanics. The model can describe the peaked, fat-tailed character of returns, so the description of the underlying asset price, and hence the pricing of the options, is more accurate. Moreover, using the martingale method, we obtain closed-form solutions for geometric-average Asian options. Furthermore, numerical analysis shows that, relative to the Black-Scholes model, the proposed model avoids underestimating risk.
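For the Black-Scholes benchmark the paper compares against, the geometric-average Asian call already has a classical closed form (Kemna-Vorst, continuous monitoring, no dividends). A sketch of that baseline, with all market parameters in the example hypothetical:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def geometric_asian_call(S, K, T, r, sigma):
    """Kemna-Vorst closed form for a continuously monitored
    geometric-average Asian call under Black-Scholes (no dividends)."""
    sig_g = sigma / sqrt(3.0)            # volatility of the log geometric average
    b_g = 0.5 * (r - sigma ** 2 / 6.0)   # effective cost-of-carry of the average
    d1 = (log(S / K) + (b_g + 0.5 * sig_g ** 2) * T) / (sig_g * sqrt(T))
    d2 = d1 - sig_g * sqrt(T)
    return S * exp((b_g - r) * T) * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative parameters: at-the-money, one year, r = 5%, sigma = 20%.
print(geometric_asian_call(100.0, 100.0, 1.0, 0.05, 0.2))
```

Because the average is less volatile than the spot, this price sits below the corresponding vanilla Black-Scholes call.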

  11. Frequency averaging of fluctuations in the cross-correlation reception of noiselike signals reflected from a rough sea surface

    Science.gov (United States)

    Baranov, V. F.; Gerasimova, T. I.; Gulin, É. P.

    2007-04-01

    For noiselike signals reflected from a rough sea surface and received by a correlation receiver, the effect achieved at the receiver output as a result of frequency averaging of signal fluctuations is considered. Expressions characterizing the effect of frequency averaging are derived by using the generalized two-scale model describing the frequency correlation of strong fluctuations of the transfer function. Results of numerical calculations for the variance of fluctuations at the output of the correlation receiver are presented for different relative values of the frequency bandwidth of noiselike signals and the frequency correlation scales for the cases of both weak and strong fluctuations.
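The payoff of frequency averaging is the familiar 1/N variance reduction, with N set by how many frequency-correlation cells the signal bandwidth spans. A toy illustration assuming the cells are fully uncorrelated (the paper's two-scale model quantifies the partially correlated case, which this sketch does not attempt):

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials, n_cells = 5000, 16      # 16 independent frequency-correlation cells

# Unit-variance fluctuation of the correlator output within each cell.
cells = rng.normal(0.0, 1.0, (n_trials, n_cells))
averaged = cells.mean(axis=1)     # band-averaged receiver output

# Averaging over N uncorrelated cells shrinks the variance by ~1/N.
print(cells[:, 0].var(), averaged.var())   # ~1.0 vs ~1/16
```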

  12. Timescale Halo: Average-Speed Targets Elicit More Positive and Less Negative Attributions than Slow or Fast Targets

    Science.gov (United States)

    Hernandez, Ivan; Preston, Jesse Lee; Hepler, Justin

    2014-01-01

    Research on the timescale bias has found that observers perceive more capacity for mind in targets moving at an average speed, relative to slow- or fast-moving targets. The present research revisited the timescale bias as a type of halo effect, where normal-speed people elicit positive evaluations and abnormal-speed (slow and fast) people elicit negative evaluations. In two studies, participants viewed videos of people walking at a slow, average, or fast speed. We find evidence for a timescale halo effect: people walking at an average speed were attributed more positive mental traits, but fewer negative mental traits, relative to slow- or fast-moving people. These effects held across both cognitive and emotional dimensions of mind and were mediated by overall positive/negative ratings of the person. These results suggest that, rather than eliciting greater perceptions of general mind, the timescale bias may reflect a generalized positivity toward average-speed people relative to slow- or fast-moving people. PMID:24421882

  13. Achievement differences and self-concept differences: stronger associations for above or below average students?

    Science.gov (United States)

    Möller, Jens; Pohlmann, Britta

    2010-09-01

    On the one hand, achievement indicators like grades or standardized test results are strongly associated with students' domain-specific self-concepts. On the other hand, self-evaluation processes seem to be triggered by a self-enhancing means of information processing. As a consequence, above average students have more positive self-concepts than average students whereas below average students have lower self-concepts than average students. Imagine that two students, one above average, the other below average, have identical achievement differences to an average student. Will their self-concepts also share identical differences with the average students' self-concept? Our hypothesis is that students who achieve above average develop self-concepts that are more distinct from average achieving students' self-concepts than are below average achieving students' self-concepts. In Study 1, N=382 7th-10th graders (62.2% female) from several academic track (Gymnasium) schools in Germany served as participants. Students' ages ranged between 12 and 16 years (M=14.76, SD=6.24). In Study 2, the sample comprised N=1,349 students (49% girls) with a mean age of M=10.87 (SD=0.56) from 60 primary schools that were drawn representatively from a federal German state. In an experimental Study 3, N=81 German teacher education students (76.5% female) aged between 18 and 40 years (M=22.38, SD=3.80) served as participants. Two field studies and one experimental study were conducted. In all three studies, achievement differences between above average and average students were identical to those between average and below average students. However, self-concept differences between above average and average achieving students were greater than those identified between average and below average students. 
As our studies show, self-enhancement and self-protection processes lead above average students to develop self-concepts that are more distinct from average students' self-concepts than those

  14. Typology of end-of-life priorities in Saudi females: averaging analysis and Q-methodology.

    Science.gov (United States)

    Hammami, Muhammad M; Hammami, Safa; Amer, Hala A; Khodr, Nesrine A

    2016-01-01

    Understanding culture- and sex-related end-of-life preferences is essential to provide quality end-of-life care. We have previously explored end-of-life choices in Saudi males and found important culture-related differences and that Q-methodology is useful in identifying intra-culture, opinion-based groups. Here, we explore Saudi females' end-of-life choices. A volunteer sample of 68 females rank-ordered 47 opinion statements on end-of-life issues into a nine-category symmetrical distribution. The ranking scores of the statements were analyzed by averaging analysis and Q-methodology. The mean age of the females in the sample was 30.3 years (range, 19-55 years). Among them, 51% reported average religiosity, 78% reported very good health, 79% reported very good life quality, and 100% reported high-school education or more. The extreme five overall priorities were to be able to say the statement of faith, be at peace with God, die without having the body exposed, maintain dignity, and resolve all conflicts. The extreme five overall dis-priorities were to die in the hospital, die well dressed, be informed about impending death by family/friends rather than doctor, die at peak of life, and not know if one has a fatal illness. Q-methodology identified five opinion-based groups with qualitatively different characteristics: "physical and emotional privacy concerned, family caring" (younger, lower religiosity), "whole person" (higher religiosity), "pain and informational privacy concerned" (lower life quality), "decisional privacy concerned" (older, higher life quality), and "life quantity concerned, family dependent" (high life quality, low life satisfaction). Of the extreme 14 priorities/dis-priorities for each group, 21%-50% were not represented among the extreme 20 priorities/dis-priorities for the entire sample. Consistent with the previously reported findings in Saudi males, transcendence and dying in the hospital were the extreme end-of-life priority and dis

  15. [Groups of resource utilization in acute care units and average length of stay at geriatrics services].

    Science.gov (United States)

    Solano Jaurrieta, J J; Baztán Cortés, J J; Hornillos Calvo, M; Carbonell Collar, A; Tardón García, A

    2001-01-01

    In recent years, Patient Classification Systems (PCSs) have been implemented in Spain for the purpose of gauging the "hospital product". However, the most conventional systems are poorly suited to the elderly population, among whom illness-related disability is a determining factor in explaining resource use and the results of the health care provided. Therefore, a system that takes this parameter into account, the Resource Utilization Groups (RUGs), was implemented in units providing care for the elderly, and we analyzed the characteristics of, and differences in, the RUG-related distribution across four geriatrics units. A cross-sectional study, based on consecutive cut-off points in periods longer than the average stay in each unit, covered the patients admitted to the acute care and average-stay units of the geriatrics services of the Hospital Monte Narango (HMN) (n = 318), Hospital Central de la Cruz Roja (HCCR) (n = 384), Hospital General de Guadalajara (HG) (n = 272) and Hospital Virgen del Valle (HVV) (n = 390), with regard to their distribution according to the RUG-T18 classification. Possible differences among the hospitals were analyzed by means of the chi-square test (SPSS for Windows). For the overall sample, the patients fell into groups R, S and C of the classification, with groups P and B represented to a very small degree, and differences were found among the hospitals. The HCCR handled the largest percentage of patients in the R group (47.64% vs. 23.66% at HMN, 20.57% at HG and 20.53% at HVV) and smaller percentages in the S group (3.12% vs. 6.40% at HMN, 9.92% at HG and 9.76% at HVV) and the C group (48.94% vs. 76.29% at HMN, 66.89% at HG and 68.36% at HVV). Differences were likewise found in the individual analysis for the acute care units and average length of stay.
The resource usage groups can be useful with regard to

  16. Experimental study of relationship between average isotopic fractionation factor and evaporation rate

    Directory of Open Access Journals (Sweden)

    Tao Wang

    2010-12-01

    Isotopic fractionation is the basis of tracing the water cycle using hydrogen and oxygen isotopes. Isotopic fractionation factors for water evaporating from free water bodies are mainly affected by temperature and relative humidity, and vary significantly with these atmospheric factors over the course of a day. The evaporation rate (E) integrates the effects of these atmospheric factors, so some functional relationship between isotopic fractionation factors and E should exist. An average isotopic fractionation factor (α*) was defined to describe isotopic differences between the vapor and liquid phases during evaporation over time intervals of days. The relationship between α* and E, based on isotopic mass balance, was investigated through an evaporation-pan experiment with no inflow. The experimental results showed that the isotopic composition of the residual water became more enriched with time; α* was affected by air temperature, relative humidity, and other atmospheric factors, and had a strong functional relation with E. The value of α* can easily be calculated from the known values of E, the initial volume of water in the pan, and the isotopic composition of the residual water.
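The mass-balance backbone of such pan experiments is Rayleigh distillation: with no inflow, the isotope ratio of the residual water follows R = R0 · f^(α−1), where f is the remaining water fraction and α = R_vapor/R_liquid (< 1 for evaporation). A sketch in delta notation; the paper's α* is a time-averaged version of this factor, and the numbers in the example are illustrative:

```python
def residual_delta(delta0_permil, f, alpha):
    """Rayleigh distillation of an evaporating pan with no inflow.

    delta0_permil: initial isotopic composition of the water (per mil)
    f:             fraction of the initial water volume remaining (0 < f <= 1)
    alpha:         vapor/liquid fractionation factor (< 1 for evaporation)
    Returns the delta value (per mil) of the residual water.
    """
    r0 = 1.0 + delta0_permil / 1000.0
    r = r0 * f ** (alpha - 1.0)
    return (r - 1.0) * 1000.0

# Residual water grows isotopically heavier as the pan dries out:
print(residual_delta(-50.0, 0.5, 0.99))   # enriched relative to -50 per mil
```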

  17. 77 FR 34411 - Branch Technical Position on Concentration Averaging and Encapsulation

    Science.gov (United States)

    2012-06-11

    ... COMMISSION Branch Technical Position on Concentration Averaging and Encapsulation AGENCY: Nuclear Regulatory... its Branch Technical Position on Concentration Averaging and Encapsulation (CA BTP). An earlier draft... bases for its concentration averaging positions. It also needs to be revised to incorporate new...

  18. 40 CFR 1051.705 - How do I average emission levels?

    Science.gov (United States)

    2010-07-01

    ... described in § 1051.701(d)(4). (c) After the end of your model year, calculate a final average emission... verify them in reviewing the end-of-year report. (e) If your average emission level is above the... 40 Protection of Environment 32 2010-07-01 2010-07-01 false How do I average emission levels? 1051...
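At its core, the averaging this rule describes is a production-weighted mean of each engine family's emission level. A deliberately simplified sketch that ignores any useful-life, power, or deterioration weighting factors the actual 40 CFR 1051 calculation applies; the function name and units are illustrative only:

```python
def average_emission_level(families):
    """Production-weighted average emission level across engine families.

    families: list of (family_emission_limit, units_produced) pairs.
    Simplified illustration only; the regulatory formula applies further
    weighting factors that are not modeled here.
    """
    total_units = sum(units for _, units in families)
    return sum(fel * units for fel, units in families) / total_units

# Two families with equal production: the average sits midway between limits.
print(average_emission_level([(10.0, 500), (20.0, 500)]))  # 15.0
```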

  19. 76 FR 5518 - Antidumping Proceedings: Calculation of the Weighted Average Dumping Margin and Assessment Rate...

    Science.gov (United States)

    2011-02-01

    ... Weighted Average Dumping Margin and Assessment Rate in Certain Antidumping Duty Proceedings AGENCY: Import... regarding the calculation of the weighted average dumping margin and antidumping duty assessment rate in... regarding the calculation of the weighted average dumping margin and antidumping duty assessment rate in...

  20. 75 FR 81533 - Antidumping Proceedings: Calculation of the Weighted Average Dumping Margin and Assessment Rate...

    Science.gov (United States)

    2010-12-28

    ... Weighted Average Dumping Margin and Assessment Rate in Certain Antidumping Duty Proceedings AGENCY: Import... comments regarding the calculation of the weighted average dumping margin and antidumping duty assessment... calculated the weighted average margins of dumping using transaction-to-transaction comparisons, the...